Michel Biezunski, Infoloom, and Steven R. Newcomb, TechnoTeacher, Inc.
Topic Maps (ISO/IEC 13250) provides a standard syntax for interchanging
the information needed to support collaborative creation and maintenance
of finding aids such as indexes and glossaries. Topic maps permit such
index-modeling information to be maintained separately from the materials
being indexed. The indexed materials can be read-only, and they can
be in any data content notation. Topic maps can be merged, making it practical
to create master indexes for corpora consisting of resources that were
not originally authored or indexed in combination. This one-day tutorial
provides an overview of the Topic Maps architecture, covering concepts,
syntax, and an assortment of applications and business opportunities.
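By way of illustration only (this is not the ISO/IEC 13250 interchange syntax, and the dictionary representation is an assumption made for the sketch), the merging idea can be shown in a few lines of Python: topics from independently authored maps are combined when they share a subject identifier.

    # Hypothetical, simplified illustration of topic-map merging: topics that
    # share a subject identifier are combined, pooling their names and
    # occurrences. Not the ISO/IEC 13250 syntax, only the merge idea.
    def merge_topic_maps(*maps):
        merged = {}
        for topic_map in maps:
            for subject_id, topic in topic_map.items():
                entry = merged.setdefault(subject_id,
                                          {"names": set(), "occurrences": set()})
                entry["names"].update(topic["names"])
                entry["occurrences"].update(topic["occurrences"])
        return merged

    glossary = {"xml": {"names": {"XML"},
                        "occurrences": {"glossary.html#xml"}}}
    index    = {"xml": {"names": {"Extensible Markup Language"},
                        "occurrences": {"chap1.html#intro"}}}

    master = merge_topic_maps(glossary, index)
    # master["xml"] now carries both names and both occurrences, giving a
    # single master index over resources that were indexed separately.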
G. Ken Holman, Crane Softwrights Ltd.
XSLT is an emerging W3C-developed syntax for specifying transformations
between XML resources. Such transformations are often needed to complete
integrations in which the structure of the input resources does not fully
conform to the structure of the output resources (the normal case), where
processing includes selection, reorganization, recombination, and rendition.
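As a hedged illustration only (the lxml library and the sample vocabulary are assumptions, not part of the tutorial), a selection-and-rendition transformation of this kind might be driven from Python as follows:

    # Minimal sketch of applying an XSLT transformation from Python.
    # lxml is an assumption here; any conformant XSLT processor would do.
    from lxml import etree

    xslt = etree.XML(b"""
    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/catalog">
        <ul>
          <!-- selection and reorganization: only in-print titles, as a list -->
          <xsl:for-each select="book[@status='in-print']">
            <li><xsl:value-of select="title"/></li>
          </xsl:for-each>
        </ul>
      </xsl:template>
    </xsl:stylesheet>
    """)

    source = etree.XML(b"""
    <catalog>
      <book status="in-print"><title>Topic Maps</title></book>
      <book status="out-of-print"><title>Old Title</title></book>
    </catalog>
    """)

    print(etree.XSLT(xslt)(source))  # renders the selected titles as an HTML list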
John Aloysius Ogilvie, Killdara Corporation
"The health care industry is particularly ripe for reform through Internet technology because it involves a long paper trail among fragmented players, including manufacturing, pharmaceutical, health providers, and insurance companies." (Red Herring Magazine)
In this tutorial, John shares the results of Killdara's market and technical
research with attendees who want to understand the opportunities open to
organizations that can deploy XML-based information management and communication
solutions in the health care industry. The tutorial includes an overview
of the industry, the emerging technical standards, and the potential for
XML-powered document exchange between the providers and payers of healthcare.
Forest Automata Theory is a recently rediscovered branch of computer science that is directly applicable to processing XML and SGML data. Forest automata can be used as a formal basis for schema languages and validation processors. They can also be used to implement efficient algorithms for transformations. This technical and mathematical talk will be of interest to XML implementors with experience in regular expression and context-free grammar theory.
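To make the connection concrete, here is a minimal sketch, under assumed element names and content models, of the core idea: each element type's content model is a regular expression over child element names, so validating a tree reduces to regular-expression matching at each node.

    # Illustrative sketch of validating a tree bottom-up against content models
    # expressed as regular expressions over child element names -- the idea
    # behind using forest/hedge automata as a basis for schema languages.
    # The vocabulary ("article", "section", ...) is a made-up example.
    import re
    import xml.etree.ElementTree as ET

    content_models = {
        "article": re.compile(r"title (section )+$"),
        "section": re.compile(r"title (para )*$"),
        "title":   re.compile(r"$"),   # no element children allowed
        "para":    re.compile(r"$"),
    }

    def valid(element):
        model = content_models.get(element.tag)
        if model is None:
            return False
        children = "".join(child.tag + " " for child in element)
        return bool(model.match(children)) and all(valid(c) for c in element)

    doc = ET.fromstring(
        "<article><title/><section><title/><para/><para/></section></article>")
    print(valid(doc))   # True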
1/2 Day -- 2:00 pm - 5:00 pm
Python and XML
Paul Prescod, ISOGEN INTERNATIONAL
This tutorial will be an introduction to Python in general and its XML
processing features in particular. It will show how Python currently supports
both event-based APIs such as SAX and tree-based APIs such as the DOM. It
will also demonstrate how Python seamlessly integrates these features
with access to Java classes, COM and CORBA objects, relational databases,
and Internet protocols.
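For a flavor of the two API styles, the following minimal sketch uses Python's standard library; the sample document and handler are illustrative only.

    # Event-based (SAX) and tree-based (DOM) processing of the same document.
    import xml.sax
    from xml.dom.minidom import parseString

    DOCUMENT = ("<order><item qty='2'>widget</item>"
                "<item qty='5'>gadget</item></order>")

    # Event-based: the parser reports start tags as it encounters them.
    class ItemCounter(xml.sax.ContentHandler):
        def __init__(self):
            super().__init__()
            self.count = 0
        def startElement(self, name, attrs):
            if name == "item":
                self.count += 1

    handler = ItemCounter()
    xml.sax.parseString(DOCUMENT.encode("utf-8"), handler)
    print("items seen by SAX:", handler.count)        # 2

    # Tree-based: the whole document is built in memory and navigated as nodes.
    dom = parseString(DOCUMENT)
    for item in dom.getElementsByTagName("item"):
        print(item.getAttribute("qty"), item.firstChild.data)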
Your company's ability to compete and win online is defined by a whole
new set of market dynamics. These challenges demand a fresh appreciation
of customer relationships and organizational strategies. An emerging category
of products and services known as "Internet relationship management" assists
businesses in using the Web as a new channel for customer acquisition and
retention. The opportunity for businesses that take advantage of this new
channel is to build long-term, high-value relationships with customers
and then to use those relationships to create new market opportunities.
Dianne will give an update on the ICE initiative and describe how this
protocol is changing the way the world does E-business.
Data processing on the Web made easy with XSL transformations: generate
a data-maintenance Web site whose data structure is controlled by XML and
whose screen designs and database API are controlled by XSL.
This paper considers the existing use of groves and suggests that there
is a missing application of this technology. The missing class is concerned
with representing applications (programs with functional intent), and the
states within them, as grove models. The paper presents the problems
and requirements for representing applications as groves and what it means
to link to a node in a grove.
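As a rough, hypothetical sketch of the notions involved (nodes, named properties, and addressing a node), application state might be modeled along the following lines; the property names are illustrative inventions, not an actual property set.

    # Hypothetical sketch: nodes with named properties, plus addressing a
    # node by a simple property path -- the grove notions of "node",
    # "property", and "linking to a node".
    class Node:
        def __init__(self, **properties):
            self.properties = properties

        def resolve(self, path):
            """Follow a sequence of (property, index) steps to a sub-node."""
            node = self
            for prop, index in path:
                node = node.properties[prop][index]
            return node

    app_state = Node(
        windows=[Node(title="Editor",
                      buffers=[Node(name="draft.xml"), Node(name="notes.txt")])])

    # "Link" to the second buffer of the first window:
    target = app_state.resolve([("windows", 0), ("buffers", 1)])
    print(target.properties["name"])    # notes.txt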
The rise of distributed component models like CORBA CCM and Enterprise
JavaBeans conjures a vision of a world where complex systems can be built
merely by connecting and configuring ready-made software components. This
paper presents the experience gained by building a markup-aware pilot publishing
system framework based on distributed object components. Attempts were
made to create reusable components at both the system level and the processing
level, and options such as adding more advanced functionality, including linking
support and workflow integration, were also investigated. The pilot system was
realized in CORBA using Java and the not-yet-finalized CORBA Component Model, but
many of the experiences gained should be easily transferable to RMI/EJB
and DCOM/ActiveX.
The EXPRESS language of the STEP standard, ISO 10303, provides a powerful
and standardized language for describing complex data models. The SGML
family of standards defines the formal data model of SGML documents (and
other data types and data abstractions) using property sets. One goal of
the STEP and SGML harmonization is to apply the more complete modeling
power of the EXPRESS language to SGML and related standards by restating
their existing data models, in particular the SGML property set and the
HyTime property set, in EXPRESS. This paper presents the
results of the initial effort to define these data models, first describing
the EXPRESS formalism and then showing how it was used to express the structures
defined by the corresponding property sets.
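For orientation only, the following sketch (in Python rather than EXPRESS, with simplified, assumed property names rather than the actual property-set definitions) suggests the kind of typed entity-and-attribute structure that a property set defines and that an EXPRESS restatement would formalize.

    # Informal illustration only: a few properties an SGML element node might
    # carry, of the kind a property set defines and an EXPRESS restatement
    # would formalize as entities with typed attributes. Names are simplified
    # assumptions for this sketch.
    from dataclasses import dataclass, field
    from typing import Dict, List, Union

    @dataclass
    class DataChar:
        char: str

    @dataclass
    class Element:
        gi: str                                   # generic identifier
        attributes: Dict[str, str] = field(default_factory=dict)
        content: List[Union["Element", DataChar]] = field(default_factory=list)

    para = Element(gi="para", attributes={"id": "p1"},
                   content=[DataChar("H"), DataChar("i")])
    print(para.gi, para.attributes["id"], len(para.content))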
Within the OMG (Object Management Group) or the Microsoft environment,
meta-model technologies are becoming ready for prime time. One of
the main enabling factors is that they are now supported by XML-related
languages, and this presentation will study the synergy between these two
emerging fields (XML and meta-model technology) at the conceptual and
the practical level. The MOF (Meta-Object Facility) is an emerging
OMG standard that may have an important impact on many areas of object-oriented
software engineering. The MOF is an outcome of the OMG ADTF (Analysis
and Design Task Force) and, alongside UML and XML, is rapidly gaining practical
importance in the industrial strategy of several important companies.
Many product definitions, like the UML language itself, are already based
on the MOF, and many more are presently being built on a MOF-compliant
basis. This presentation will also compare
the architectures of CDIF, the MOF, and the Microsoft/MDC OIM meta-model.
An introduction to the XML-based XMI model interchange format, approved
by OMG in January 1999, will serve to illustrate the main ideas presented
in the talk.
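As a rough illustration of the XMI idea, with an assumed, simplified element vocabulary rather than the normative XMI DTD, a tiny model might be serialized for interchange along these lines:

    # Hedged illustration of the XMI idea: a model (here one UML-style class
    # with an attribute) serialized as XML so that modeling tools can
    # interchange it. Element names are a simplified assumption, not the
    # normative XMI DTD.
    import xml.etree.ElementTree as ET

    xmi = ET.Element("XMI", version="1.0")
    content = ET.SubElement(xmi, "XMI.content")
    cls = ET.SubElement(content, "Class", name="Invoice")
    ET.SubElement(cls, "Attribute", name="total", type="Decimal")

    print(ET.tostring(xmi, encoding="unicode"))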
10:30 - 11:00 Break
11:00 - 11:45 To Be Determined
We propose the use of user-defined DTDs to drive the extraction of relevant
information from the Web. In particular, our objective is the development
of Web systems able to extract relevant data from Web information sources
in the form of XML documents that are valid with respect to a user-defined
DTD expressing the semantics of these data. The presence of heterogeneous
information sources, whose models are not known a priori, makes scalability
one of the main features of such systems. Indeed, the system must be "easily"
programmable to deal with a large number of different information sources.
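A minimal sketch of the approach follows, with assumed details: the target DTD, the per-source extraction rule, and the sample page are all made up for illustration. A user-defined DTD describes the XML the user wants, and a small per-source rule maps raw page text into that vocabulary, so the rest of the pipeline is source-independent.

    # Sketch: extract (title, price) pairs from a fetched page and emit an
    # XML document valid against a user-defined target DTD.
    import re
    from xml.sax.saxutils import escape

    TARGET_DTD = """<!DOCTYPE books [
      <!ELEMENT books (book*)>
      <!ELEMENT book (title, price)>
      <!ELEMENT title (#PCDATA)>
      <!ELEMENT price (#PCDATA)>
    ]>"""

    # One extraction rule per heterogeneous source (here a regular expression).
    source_rules = {
        "example-shop": re.compile(
            r'<td class="t">(.+?)</td><td class="p">(.+?)</td>'),
    }

    def extract(source, page_text):
        rows = source_rules[source].findall(page_text)
        items = "".join(
            f"<book><title>{escape(t)}</title><price>{escape(p)}</price></book>"
            for t, p in rows)
        return f"{TARGET_DTD}\n<books>{items}</books>"

    page = '<td class="t">XML Handbook</td><td class="p">39.95</td>'
    print(extract("example-shop", page))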
This paper presents an alternative and more realistic XML-based approach
to add semantics to existing web pages without having to change their content.
This approach, developed in our laboratory, allows automatic data extraction
from the web. The approach proposes to describe the meaning of web pages
in separate documents to which computer programs can refer whenever they
need to manipulate or extract data from these web pages. These documents
are written in Web ONtology DEscription Language (WONDEL), an XML and XPointer-based
language, which we have defined to express basic knowledge about information
on the web. WONDEL has been successfully used to extract information
about several universities around the world. One of the most interesting
features of WONDEL is that it takes advantage of the existing regularity
in web pages, which reduces the amount of WONDEL code needed. Indeed, the
authors have been able to extract a large volume of data from several university
web sites with a limited number of small WONDEL documents.
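The WONDEL syntax itself is not reproduced here; the rough sketch below only illustrates the separation described above, with a toy page assumed for the example and simple element paths standing in for the XPointer-based expressions.

    # Rough sketch (not actual WONDEL syntax): the web page is left unchanged,
    # and a separate description document maps semantic labels to locations
    # in the page.
    import xml.etree.ElementTree as ET

    page = ET.fromstring("""
    <html><body>
      <h1>Example University</h1>
      <p class="dept">Department of Computer Science</p>
    </body></html>
    """)

    # Separate description: label -> location in the page (illustrative only).
    description = {
        "university": "./body/h1",
        "department": "./body/p[@class='dept']",
    }

    extracted = {label: page.find(path).text
                 for label, path in description.items()}
    print(extracted)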
The apparent similarities in the structures of these concepts lead
to some interesting questions. Is it possible to store semantic network
information in a topic map? Is it possible to build a semantic network
from a topic map? Would it be possible to design a computer program that
identifies the knowledge contained within chunks of text? If such a system
could be built, would a computer be able to identify and interpret the
knowledge found within a collection of documents? In such a system, a user
would be able to query the database for specific information. This system
could be used to interpret the knowledge contained within the nodes. The
user could begin a browsing session based on a desired piece of knowledge.
The user could also use the system to interpret the knowledge in the database
without browsing through the nodes. This paper will describe, and possibly
demonstrate, such a system and discuss possible real-world applications
of these concepts.
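As a small, hypothetical sketch of the first two questions, topic-map associations can be read as edges of a semantic network, which can then be queried without browsing node by node; the associations below are an illustrative toy map.

    # Hypothetical sketch: topic-map associations treated as edges of a
    # semantic network, then queried directly.
    from collections import defaultdict

    associations = [
        ("Mozart", "composed", "The Magic Flute"),
        ("The Magic Flute", "is-a", "opera"),
        ("Mozart", "born-in", "Salzburg"),
    ]

    network = defaultdict(list)
    for subject, relation, obj in associations:
        network[subject].append((relation, obj))

    def query(topic, relation):
        return [obj for rel, obj in network[topic] if rel == relation]

    print(query("Mozart", "composed"))     # ['The Magic Flute']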
3:30 - 4:00 Break
Enterprise resources could be managed with an "enterprise table of contents".
Some popular ERP applications, like PeopleSoft, are already taking a similar
approach for their application collections. An enterprise's internal and external
resources (information and applications) could be presented as a kind of
big book, with a table of contents used to browse the resources.
A topic map structures link networks in the same way that SGML/XML structures data. Applying SGML/XML markup to raw data creates information; applying a topic map to an information pool creates knowledge structures. The presentation will cover three key technical issues about topic maps: collecting the declarative part of a map in the "Topic Map Template", checking the consistency of a map using constraints, and automatically generating a topic map from a given set of structured information resources using generation rules.
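A small sketch of the third issue follows, with assumed document and rule shapes: a generation rule that turns every term found in a set of structured resources into a topic whose occurrences point back to the resources it came from.

    # Sketch with assumed shapes: derive a topic map from structured
    # resources -- every <term> element becomes a topic, and its occurrence
    # points back to the document containing it.
    import xml.etree.ElementTree as ET

    documents = {
        "guide.xml":    "<doc><term>grove</term><term>property set</term></doc>",
        "tutorial.xml": "<doc><term>grove</term><term>topic map</term></doc>",
    }

    topic_map = {}
    for uri, text in documents.items():
        for term in ET.fromstring(text).iter("term"):
            topic_map.setdefault(term.text, set()).add(uri)

    for name, occurrences in sorted(topic_map.items()):
        print(name, "->", sorted(occurrences))
    # grove -> ['guide.xml', 'tutorial.xml']   (topics merge across resources)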
For more information, contact:
Rose Marie Masal
Copyright © 1999, Graphic Communications
Association
Revision date: 20 July 1999