The OASIS Cover Pages: The Online Resource for Markup Language Technologies

Last modified: November 05, 2000
SGML and XML News. Q4. October - December 2000



  • [December 29, 2000]   Market Data Markup Language (MDML).    A recent announcement from the Financial Information Services Division (FISD) of the Software and Information Industry Association (SIIA) describes the formation of an XML for Market Data Working Group. The XML Working Group will attempt "to consolidate industry efforts to define the parameters of the XML for market data discussion (i.e., the fields needed to describe a security and its price). [Such 'fields' include, for example, Identifier, ISIN, CUSIP, Last, Open, Best Bid, Best Ask, Close, Next Bid, Size, Maturity Date, Coupon, Yield, Call/Put, Strike Price, etc.] If the industry is able to unify around a common size and scope definition, FISD is interested in (1) serving as the facilitator of the discussion to create a standard, (2) supporting and maintaining the standard as a permanent home, and (3) coordinating with the other major financial industry XML standards efforts such as NewsML (news), FpML (derivatives), IRML (investment research) and XBRL (business reporting) as appropriate." According to the announcement, "user firms are interested in extracting information from a vendor's data feed ('vendor' defined as any information provider including exchanges and contributors) into common desktop applications. The requirement is not for an individual trader who wants access to multiple sources of data, but rather for distributing information to disparate systems throughout the organization. For that, the users need a standard protocol and common data format. Jeremy Sanders [Merrill Lynch] envisions using the XML standard primarily for end-of-day and intra-day snapshot application feeds rather than for streaming data." An antecedent to the FISD working group effort is a draft Market Data Markup Language (MDML) Specification authored by Bridge, which "describes an implementation of XML to be used for distributing financial information. 
A primary goal of this implementation is to be flexible enough to carry data from any financial market data vendor, including, but not limited to, exchanges, brokerage houses, banks, and information vendors." Background to the MDML XML specification is described thus: "While all market data vendors carry largely overlapping sets of data, they have all developed different data models, protocols and symbologies. Each vendor supplies a highly specialized application set for making the best use of their data. Extracting information from a vendor's data feed into common desktop applications can only be done with custom software. A class of products called integration platforms has been developed to ease this problem. Integration platforms convert vendor data into their own proprietary models so their specialized application set can make use of data from many vendors. While this solves the immediate problem of a trader who wants access to multiple sources of data through a single desktop, it doesn't solve the larger problem of distributing information to disparate systems in an organization... Most if not all market data vendors have solved the problem of data modeling by developing simple models which allow them to carry an ever increasing variety of information without making changes to their infrastructure or simple display applications. There are two broad classes of representation, which here will be referred to as pages and records. Pages are unstructured collections of data formatted for display in a fixed sized window. A record is a set of properties each containing a piece of information related to that record. Records representing different types of securities (i.e., Japanese Government Bonds and Dollar Yen Swaps) will contain different sets of properties. Even when vendors' wire protocol represents information in fixed sized structures, it is still conceptually sets of properties. 
Pages themselves can be, and often are, distributed as sets of properties - containing the page's width, height, text, color attributes in separate properties. Other types of information can be represented as sets of properties, but tend to have more complex request parameters, and usually contain repeating rows of data. Examples are news headlines, historical trades, and options and futures chains. In order to represent this information in XML, MDML defines a property and its attributes, as well as conventions for assembling properties into the objects that are returned from queries on a vendor's data feed. MDML also defines a convention for representing repeating rows of data. In order to improve access to the information, MDML defines a default set of object types with default properties, as well as request structures for them. The set of object types, and their properties, is extensible so a market data vendor can include its own value-added information and differentiate its products. This specification includes examples of MDML, but not schemas or DTDs. Examples are much better at conveying the structure and use of an XML. Once MDML is finalized, schemas and/or DTDs will be published, separate from this document, to allow parsers to validate MDML... MDML tags [used in the draft specification] are defined to be in the MDML namespace; Appendix 2, 'Market Data Control Set', lists the elements and attributes that are used by MDML." See further references in "Market Data Markup Language (MDML)."
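
The property/record model described above can be sketched in a few lines. The element and attribute names below are invented for illustration (the draft defines its own control set in Appendix 2); the point is how a record object is assembled from a set of named properties:

```python
# Minimal sketch of a record as a set of named properties, per the MDML
# description above. Tag names ("record", "property") are illustrative
# placeholders, not the official MDML control set.
import xml.etree.ElementTree as ET

def make_record(record_type, properties):
    """Assemble a record element from a dict of named properties."""
    record = ET.Element("record", {"type": record_type})
    for name, value in properties.items():
        prop = ET.SubElement(record, "property", {"name": name})
        prop.text = str(value)
    return record

quote = make_record("equity-quote", {
    "Identifier": "IBM",
    "Last": "85.00",
    "Best Bid": "84.95",
    "Best Ask": "85.05",
})
xml_text = ET.tostring(quote, encoding="unicode")
print(xml_text)
```

Records for different security types would simply carry different property sets, which is what lets a vendor extend the model without changing its infrastructure.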

  • [December 29, 2000]   Askemos XML-Enabled Application Server.    Joerg F. Wittenberger posted an announcement for the early release of the Askemos project (version 0.6), with an invitation for public evaluation. Askemos is an "application server targeted towards document management and workflow tasks... designed to support many application languages." Description from the README document: The framework dream began with such notions as "a rootless object network model, persistent data, non data specific, XML optimized, flexible name space management, object autonomy, ACID transactions, simple messaging concept, any extension language feasible... The framework, when used to support its own development, will be the basis for the ideal tool for collaborative programming. This work reflects on the basic mechanisms of understanding, communication and trust -- and how to tell mechanism apart from policy. These mechanisms are the basic principles, or common code, of viable (sustaining) communities, societies etc. as expressed in their language, rules and laws. As such it's only loosely connected to software. To put it differently, no sane rule or law contradicts this text. If any did, that's a problem/bug of either the rule or this work. One part of the full text is a project to create an environment where these mechanisms can be used effectively. The project implements a framework to manipulate small amounts of information in the 'dimensions' structure, context, interaction and rights (more to be added if discovered) to build trustworthiness. It is already efficient enough to allow testing and demonstration of multiple policies in real-world environments and applications. The current extension languages include XSLT (incomplete) and a DSSSL-like scheme. It supports rendering to PDF, PostScript and plain text via Lout, has an HTTP server and client (with some TODOs), and an SMTP client (error handling needs work). 
There is a persistent store, which contains chunks of data (preferably XML documents). For each of them, some metadata is maintained. We call such an object a place. Autonomy of Places, Presentation and Manipulation: One of the metadata slots of a place is a so-called action, which is essentially the code of the object (from an OO point of view, chunk + chunk's metadata + action = object). This action is (essentially) the only function which can modify the slots of the place. The read operation (MVC terminology: View) delivers the data at the place, possibly transformed by a function (side-effect free!). A write operation (MVC terminology: Controller) changes all data slots at the place in one transaction using the result(s) of another function... One action defines XSLT documents (implemented as a server extension). Their program and data are just one style sheet (possibly distributed over multiple places). If an XSLT document wants to change state, it must recreate itself with state elements replaced. Different database adaptors have different strengths. Distributed object databases cannot beat the performance of specialized relational databases when searching large relational tables, while they are superior at less structured data. Relational databases are accessed via XSQL..." Related projects are said to include Oxygen [similar goals in the long run, much larger scope], FramerD, Casbah, Apache Cocoon, Zope, conge, xmlblaster, Charlie, and <bigwig>.
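
The place/action/read/write semantics described in the README can be pictured with a small sketch. The class and method names here are illustrative only, not the Askemos API; the sketch shows the key constraints: only the attached action may change a place's slots, reads are side-effect free, and a write commits the new data in one step:

```python
# Toy model of an Askemos-style "place": data + metadata, where the
# metadata's "action" is the only code allowed to change the data.
# Names (Place, read, write) are invented for illustration.
import copy

class Place:
    def __init__(self, data, action):
        self.data = data                      # the chunk of data at this place
        self.meta = {"action": action}        # metadata; the action is the object's code

    def read(self, view=lambda d: d):
        # View: deliver the data, possibly transformed; never mutates the place.
        return view(copy.deepcopy(self.data))

    def write(self, message):
        # Controller: the action computes the new slots from the old state
        # and the message; the result is committed as one atomic assignment.
        new_data = self.meta["action"](copy.deepcopy(self.data), message)
        self.data = new_data

counter = Place({"count": 0}, action=lambda d, msg: {"count": d["count"] + msg})
counter.write(5)
print(counter.read())   # {'count': 5}
```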

  • [December 29, 2000]   Release of DSML Tools Version 1.0.    Gervase Markham recently announced the availability of 'DSML Tools' Version 1.0. The DSML Tools suite is a set of Java utilities for handling Directory Services Markup Language (DSML) data; the toolset is under development as part of Markham's third-year project. The DSML Tools "provide the following capabilities: (1) Querying of any LDAP directory, with search results output as DSML; (2) Import of DSML data into any LDAP directory; (3) Directory-context validation of DSML (checking for illegal attributes in the entries, etc.); (4) Calculating the differences (for a directory) of two DSML documents - an XML Diff algorithm for DSML data. In other words, this software makes all LDAP-supporting directories DSML-enabled. In addition, it provides the useful function of checking the integrity of generated DSML data, and showing at a glance how two data sets represented as DSML differ. Within this tool suite, 'LDAP2DSML' performs an LDAP search and outputs the results as DSML; 'DSML2LDAP' takes a DSML file and modifies a directory based on its contents; 'DSMLDiff' shows the effective differences in the data between two DSML files; 'DSMLValidate' checks the integrity of DSML against a directory schema." The code is MPL/GPL dual-licensed. This is, as far as Markham is aware, the first open-source implementation of an LDAP-to-DSML gateway. Development background: "DSML (Directory Services Markup Language) is an XML dialect for directory information. A directory is a hierarchically-organised data store - in other words, a tree of data nodes. For example, a company may have organisational units, each unit will have employees, and each employee will have a name and an email address. Such hierarchically-organised data does not fit well in a database, but is much more suited to a directory. 
There is a common standard for directory access in LDAP (Lightweight Directory Access Protocol), version 3 of which is defined in RFC 2251. This allows clients to connect to any directory to read information. There is also a common interchange format, called LDIF (LDAP Data Interchange Format) defined in RFC 2849. However, with the new generation of web applications being XML-aware, an XML dialect for directory information was thought necessary. Hence DSML. DSML allows the new generation of XML-aware applications to use directory information." See further references in: (1) "Directory Services Markup Language (DSML)" and (2) the DSML Web site resources.
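
The LDAP2DSML direction can be pictured as follows: turn an LDAP entry (a DN plus attribute/value pairs) into DSML markup. The element names follow the DSMLv1 draft (dsml, directory-entries, entry, attr, value), but treat the sketch as an illustration of the mapping, not as the DSML Tools implementation; the example DN and values are invented:

```python
# Sketch of mapping an LDAP entry to DSMLv1-style XML.
import xml.etree.ElementTree as ET

DSML_NS = "http://www.dsml.org/DSML"

def entry_to_dsml(dn, attributes):
    root = ET.Element("{%s}dsml" % DSML_NS)
    entries = ET.SubElement(root, "{%s}directory-entries" % DSML_NS)
    entry = ET.SubElement(entries, "{%s}entry" % DSML_NS, {"dn": dn})
    for name, values in attributes.items():
        attr = ET.SubElement(entry, "{%s}attr" % DSML_NS, {"name": name})
        for v in values:
            ET.SubElement(attr, "{%s}value" % DSML_NS).text = v
    return root

doc = entry_to_dsml("uid=gerv,ou=People,o=example.com",
                    {"cn": ["Gervase Markham"], "mail": ["gerv@example.com"]})
print(ET.tostring(doc, encoding="unicode"))
```

DSML2LDAP is simply the inverse walk: parse the entry elements and issue the corresponding directory modifications.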

  • [December 28, 2000]   NKOS Working Group Develops XML-Based Vocabulary MarkUp Language.    The Networked Knowledge Organization Systems/Services (NKOS) Working Group is developing an XML DTD/Schema called 'Vocabulary [Products] MarkUp Language (VocML)' which will support the structured representation of a wide range of KOS resources, "including authority files, hierarchical thesauri (including those with polyhierarchies), classification schemes, digital gazetteers, and subject heading lists." References for the group's draft Taxonomy of KOSs and other work products are provided on the project web site; the site contains a "set of pages devoted to the discussion of the functional and data model for enabling knowledge organization systems (KOS), such as classification systems, thesauri, gazetteers, and ontologies, as networked interactive information services to support the description and retrieval of diverse information resources through the Internet." A summary of the working group's progress is provided by Gail Hodge (Consultant/National Biological Information Infrastructure, Information International Associates, Inc.) in a recent issue of D-Lib Magazine. "Interest in controlled vocabularies, categorization schemes, authority files and other knowledge organization systems (KOSs) for organizing and standardizing subject access has increased substantially with the introduction of the Web and knowledge management initiatives within organizations. As companies consider the development of KOSs, the extensive investment required to develop and maintain them becomes apparent. One way to reduce the investment is to use KOSs that already exist in a variety of subject areas from architecture to zoology. However, many of these KOSs are not available on the Internet, or they are not in an electronic format that allows for easy access to and retrieval of 'pieces' of the vocabulary with its structure intact. 
This problem is the focus of the Networked Knowledge Organization Systems/Services (NKOS) Working Group, an ad hoc group of more than 70 KOS developers and implementers from 10 countries. Beginning with an initial workshop at the ACM DL 97 Conference, the group has focused on the standards needed for interoperable, networked KOSs -- metadata for describing KOSs and a protocol for transferring information from the electronic KOS to the application that will use it. At a recent meeting held in conjunction with the American Society for Information Science and Technology Annual Meeting in Chicago on November 13, members of NKOS focused on a scheme for marking up a KOS. A draft XML DTD, developed by Joseph Busch and Ron Daniel of Metacode, Inc. (now part of Interwoven, Inc.), was presented and reviewed. The schema, called VocML (Vocabulary MarkUp Language), defines a structure for tagging KOS content to retain its structure. The DTD allows for Dublin Core metadata that describes the KOS itself. It also provides tags and syntax for uniquely identifying each term, its relationship to other terms (using the standard Z39.19 relationships as well as more detailed types of associative relationships), and information such as scope notes and definitions. The goal is to make the DTD as generalized as possible..." See other references and the draft version 1.0 XML DTD in "Vocabulary Markup Language (VocML)."
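
The kind of structure the draft DTD is described as capturing (a uniquely identified term, Z39.19-style relationships, and notes such as scope notes) can be sketched like this. The tag names below are invented for illustration; the actual element names are in the draft VocML 1.0 DTD:

```python
# Illustrative sketch of a vocabulary term with an ID, Z39.19-style
# relationships (broader/related), and a scope note. Tag names are guesses.
import xml.etree.ElementTree as ET

def make_term(term_id, name, broader=(), related=(), scope_note=None):
    term = ET.Element("term", {"id": term_id})
    ET.SubElement(term, "name").text = name
    for ref in broader:                      # Z39.19 BT (broader term)
        ET.SubElement(term, "broader", {"ref": ref})
    for ref in related:                      # Z39.19 RT (related term)
        ET.SubElement(term, "related", {"ref": ref})
    if scope_note:
        ET.SubElement(term, "scopeNote").text = scope_note
    return term

t = make_term("t042", "Thesauri", broader=["t001"], related=["t077"],
              scope_note="Controlled vocabularies with explicit term relationships.")
print(ET.tostring(t, encoding="unicode"))
```

Because every term has a stable ID, an application can retrieve a 'piece' of a vocabulary (one term and its relationship links) without losing the surrounding structure.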

  • [December 28, 2000]   WeatherML for the Weather Derivatives Trading Community.    A recent announcement from the Weather Risk Management Association (WRMA) outlines a proposed XML-based standard for weather derivatives transactions: "The Weather Risk Advisory, an independent software and consulting company specializing in weather derivatives, is leading an initiative to develop WeatherML, an XML-based data protocol for electronic processing of weather derivatives. WeatherML will be developed and promoted on a global basis by the WeatherML Steering Committee, a group led by Weather Risk Advisory and comprised of key weather derivatives market players. The committee will include representatives of each type of organization within the weather derivatives trading community -- trading organizations, banks, insurers, reinsurers, and brokers. WeatherML already has the backing of the majority of players in the weather risk market. WeatherML will enable organizations to reduce trading costs and operational risks associated with the use of weather derivative products. WeatherML will also offer increased flexibility in systems design and interfacing, and will facilitate enhanced scalability, particularly as it will not be tied to any operating system or programming language. Weather Risk Advisory has been working on the WeatherML concept for the last six months, and Version 1.0 will be completed in the second quarter of 2001, from which time new interim releases will be issued approximately quarterly. A proposal has been made to the industry's trade group, the Weather Risk Management Association (WRMA), for them to endorse WeatherML. WRMA is active in promoting the weather derivatives market and developing initiatives to support it, and it is hoped that the standard can be developed in partnership with them. Peter Brewer, WeatherML Steering Committee Chairman and CEO of Weather Risk Advisory, said, 'WeatherML will be adopted as the industry-wide standard. 
The wider its adoption, the greater its value to those involved.' The weather derivatives market is still in its early stages, allowing the industry to reduce the costs of the inevitable standardization by developing and adopting WeatherML while the market itself is developing, and there are still a limited number of players. Weather Risk Advisory will be working closely with the creators of other XML standards, such as FpML (Financial products Markup Language), to ensure compatibility. Jürgen Gaiser-Porter, WeatherML Standards Committee Chairman and Head of Research at Weather Risk Advisory said, 'The broad range of players and the international nature of trading within the weather derivatives market brings additional challenges in developing standardized contracts and confirmations. WeatherML will make this possible, and in doing so will galvanize the weather risk community.' WeatherML (Weather Markup Language) is a data standard for electronic processing of weather derivatives. It is XML-based and is designed to be broadly compatible with other XML data standards initiatives, such as FpML (Financial products Markup Language) and those covering reinsurance and energy trading. XML allows data to be presented in a format readable by both computers and people." For other description and references, see "Weather Markup Language (WeatherML)."

  • [December 23, 2000]   W3C Publishes XML Protocol Requirements Document.    The W3C XML Protocol Working Group has published a public working draft specification for XML Protocol Requirements. Reference: W3C Working Draft 19-December-2000. The document "describes the XML Protocol Working Group's requirements for the XML Protocol specification." Revisions from the previous draft of 7-December-2000 are presented in a color-coded diff document. As part of the W3C's Architecture Domain, the W3C XML Protocol Activity is designed to address the problem of "standardized application-to-application messaging." According to the activity statement the XML Protocol Working Group has thus been chartered "to design four things and to produce a Recommendation based on them: (1) An envelope to encapsulate XML data for transfer in an interoperable manner that allows for distributed extensibility, evolvability, as well as intermediaries like proxies, caches, and gateways (2) In cooperation with the IETF (Internet Engineering Task Force), an operating system-neutral convention for the content of the envelope when used for RPC (Remote Procedure Call) applications (3) A mechanism to serialize data based on XML Schema datatypes (4) In cooperation with the IETF, a non-exclusive mechanism layered on HTTP transport." Included among the general requirements delineated in the new working draft document: "(1) The specification will make reasonable efforts to support (but not define) a broad range of programming models suitable for the applications intended for XP. (2) The specification will make reasonable efforts to support (but not define) a broad range of protocol bindings between communicating peers. (3) The specification developed by the Working Group must support either directly or via well defined extension mechanisms different messaging patterns and scenarios. 
The specification will directly support One-way and Request-response patterns as part of permanently and intermittently connected scenarios. The specification will not preclude the development of other patterns at either the application or transport layers. Examples of such patterns may include publish-subscribe or multicast delivery. All patterns and scenarios will be described by relevant use cases. (4) The Working Group will coordinate with W3C XML Activities through the XML Coordination Group and shall use available XML technologies whenever possible. If there are cases where this is not possible, the reasons must be documented thoroughly. (5) The specification developed by the Working Group shall be as lightweight as possible keeping parts that are mandatory to the minimum. Optional parts of the specification should be orthogonal to each other allowing non-conflicting configurations to be implemented. (6) The specification must be suitable for use between communicating parties that do not have a priori knowledge of each other. (7) The specification must focus on the encapsulation and representation of data being transferred between parties capable of generating and/or accepting an XP protocol envelope." For other information, see the XML Protocol Home Page and "W3C XML Protocol."
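
Deliverable (1), the extensible envelope, is easiest to picture with a SOAP-style sketch: an outer envelope holding an optional header block (where intermediaries such as proxies, caches, and gateways can act) and a body carrying the application payload. XP's concrete element names were not fixed in this draft, so the names and namespace below are placeholders borrowed from the SOAP 1.1 pattern:

```python
# Sketch of an envelope/header/body message shape. The namespace URI and
# all element names are placeholders, not part of any XP specification.
import xml.etree.ElementTree as ET

ENV = "urn:example:xp-envelope"   # placeholder namespace, not a real XP URI

envelope = ET.Element("{%s}Envelope" % ENV)
header = ET.SubElement(envelope, "{%s}Header" % ENV)
ET.SubElement(header, "transactionID").text = "abc-123"   # extension entry for intermediaries
body = ET.SubElement(envelope, "{%s}Body" % ENV)
ET.SubElement(body, "getQuote").text = "IBM"              # application payload (e.g., an RPC call)

print(ET.tostring(envelope, encoding="unicode"))
```

The separation matters for requirement (2) above: a protocol binding only has to carry the envelope, while extensions live in header entries that intermediaries can process without understanding the body.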

  • [December 22, 2000]   Project "eL" - The XML Leningrad Codex Markup Project.    A recent workshop presentation delivered by Kirk Lowery and Patrick Durusau at the annual meeting of the Society of Biblical Literature (Nashville, Tennessee, November 17, 2000) outlines the goals of the XML Leningrad Codex Markup Project. The goal of the XML Leningrad Codex markup project "is to produce a fresh, from scratch 'mirror image' of the Leningrad Codex of the Hebrew Bible, encoded at the character/glyph level in UNICODE, which will be suitable for use in 'XML-aware' applications (word processors, database engines, web-applications). Such an encoded text can be used for an infinite variety of purposes and will allow for collaborative projects via the Internet to 'pyramid' knowledge, encourage the reuse of basic data and analysis, extend the value of limited human and financial resources, and reduce duplication of effort. Our intention is to make a complete representation of all material found in the manuscript, including the masora magna hyperlinked to the corresponding place in the text, the so-called 'carpet pages' of the Diqduqe ha-te'amim, and two poems, all the colophons, marginal notes and masora parva as they actually stand on each page. The complete text will be encoded with character level markup (each character will bear a unique ID). Unique IDs will make it possible to refer to specific characters for correction of mistakes or annotation of the text; character level markup will also facilitate the construction of variant texts." Textual analysis is to be implemented using XLink/XPath/XPointer and "standoff markup" mechanisms. To facilitate use of that text by scholars, the 'eL' project will maintain a web addressable version of the text and produce for automatic email delivery portions of the text with word level markup, verse level markup, including or excluding Masora magna or parva... 
Project 'eL' has several innovative and unique aspects: (1) it will be an Open Source project; (2) it will invite the participation of the general public; (3) it will endeavor to mark up the entire manuscript; (4) it will produce a freely available UNICODE Hebrew font. By 'open source' we mean that the resulting text, although copyrighted and with an institutional custodian, will be freely distributable for any purpose. Project 'eL' is also a sociological experiment, testing an adaptation of a model for human collaboration in the production of knowledge which has been successful in the software development community (e.g., GNU/Linux) and the natural sciences. We believe that 'eL' will encourage the use and study of the Hebrew Bible across the world via the Internet, and that other disciplines will be able to profit from our experience..."
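
Character-level markup with unique IDs can be illustrated with a toy example: wrap each character of a word in an element whose ID extends a book/chapter/verse/word reference, so that corrections, annotations, and standoff links can point at individual characters. The element names and ID scheme below are invented for illustration; the project's actual conventions may differ:

```python
# Toy character-level markup: one element per character, each with a unique ID.
import xml.etree.ElementTree as ET

def markup_chars(text, word_id):
    word = ET.Element("w", {"id": word_id})
    for i, ch in enumerate(text, start=1):
        c = ET.SubElement(word, "c", {"id": "%s.%d" % (word_id, i)})
        c.text = ch
    return word

w = markup_chars("בראשית", "Gen.1.1.w1")   # first word of Genesis 1:1
print(ET.tostring(w, encoding="unicode"))
```

A variant reading or a masoretic note then needs only an XPointer to an ID like Gen.1.1.w1.3 rather than a fragile character offset.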

  • [December 22, 2000]   RDFStore Version 0.31: Perl API for RDF Storage.    Alberto Reggiori recently announced the release of RDFStore version 0.31. RDFStore provides a 'Perl API for RDF Storage'. The updated version "introduces the new RDFStore::* Perl package namespace to avoid conflicts/pollution with other installations; a number of bug fixes have also been made. Version 0.4 will follow in January 2001, possibly with a full Perl TIE interface over the API, a much more efficient/thin indexing method, and a DBD driver possibly compliant with Squish RDF queries." Description: "RDFStore is a set of Perl modules to manage Resource Description Framework (RDF) model databases in an easy and straightforward way. It is a pure Perl implementation of the Draft Java API from the Stanford University Database Group by Sergey Melnik, with some additional cool modules to read/write RDF triples directly from the Perl language environment. By using the Perl TIE interface, a generic application script can access RDF triples using normal key/value hashes; the storage can happen either in in-memory data structures (not tied) or on the local filesystem by using the DB_File or BerkeleyDB modules. An experimental remote storage service is also provided using a custom module coupled with a fast TCP/IP daemon. The daemon has been written entirely in the C language and stores the data in Berkeley DB v1.x files; this software is similar to the rdfdb approach from Guha. The input RDF files are parsed and processed using a streaming SiRPAC-like parser completely written in Perl. This implementation includes most of the proposed bug fixes and updates as suggested on the W3C RDF Interest Group mailing list and on the SiRPAC Web site. A strawman parser for a simplified syntax proposed by Jonathan Borden, Jason Diamond, and Dan Connolly is also included. 
By using the Sablotron XSLT engine, it is then possible to easily transform XML documents to RDF and query them from the Perl language." Principal features include: (1) Modular interface using packages; (2) Perl-way API to fetch, parse, process, store and query RDF models; (3) W3C RDF and strawman syntax parsing; (4) Perl TIE seamless access to RDF triple databases; (5) Both DB_File and BerkeleyDB support; (6) Automatic vocabulary generation; (7) Basic RDF Schema support; (8) Initial TCP/IP remote storage service support. For other Perl XML tools, see the archives of the Perl-XML mailing list, which is "dedicated to the discussion of enhancing Perl's ability to work with XML and for using Perl with XML documents." For related RDF tools, see "Resource Description Framework (RDF)."
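
The TIE idea, accessing RDF triples through an ordinary key/value hash, translates naturally into other languages. The sketch below mirrors the concept in Python (keys are subject/predicate pairs, values are sets of objects); it illustrates the hash-style access only and is not the RDFStore API itself:

```python
# Concept sketch of hash-style access to RDF triples: the store behaves
# like a dict keyed on (subject, predicate), with sets of objects as values.
class TripleHash(dict):
    def add(self, subject, predicate, obj):
        self.setdefault((subject, predicate), set()).add(obj)

    def objects(self, subject, predicate):
        # Missing keys just yield the empty set, as with an untied hash slot.
        return self.get((subject, predicate), set())

store = TripleHash()
store.add("urn:doc1", "dc:creator", "Alberto Reggiori")
store.add("urn:doc1", "dc:title", "RDFStore announcement")

print(store.objects("urn:doc1", "dc:creator"))
```

A TIE-style implementation would back the same interface with DB_File or BerkeleyDB files instead of an in-memory dict, which is exactly the storage split the announcement describes.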

  • [December 22, 2000]   Prowler: Open Source XML-Based Content Management Framework.    Lars Martin (SMB) announced the release of Prowler version 0.4, now available for download. Prowler is a "content management framework based on XML. It provides a foundation for applications which have to deal with content management problems. It consists of classes which implement the infrastructure and basic functionality to provide the programmer a transactional XML facade for the underlying data sources, plus a set of APIs that allows new back-end systems to be plugged in. The APIs are completely XML-centric. That is, XML is not just used to transfer data to/from the system; all features that the API exposes reflect the special needs of XML and the currently available XML tools and technologies, such as DOM, SAX, XPath, parsers, etc. The 100% focus on XML may limit the possibilities of the system, but on the other hand it keeps the design clear and the implementation efficient." Key features: (1) Hierarchical view of the content: Prowler provides a directory structure, which is used to define several access paths to the data. This directory itself is XML, so in fact the programmer sees the entire content of one system as one big XML document. This, for example, allows the content of different data sources to be searched via one XPath query. (2) Versioning: Versioning of course only applies to data sources that are able to natively handle XML data (for example, XML databases) or back-end systems that support versioning themselves. (3) Querying: XML gives the data a hierarchical structure. Such structures are not suited for some kinds of querying, for example data mining. On the other hand, XML data is well suited to semantically rich queries. Of course, the use of XML alone does not enable semantic searching features per se. However, in the relatively closed world of one application it is possible to implement semantic querying features. 
(4) Metadata: Each application provides a different view of the content. For example, most Enterprise Information Systems need access control and workflow features. Instead of implementing such features in the Prowler kernel, it just provides a way to assign metadata to the actual content and a very flexible way to check these metadata." Prowler is the core component in the Infozone open source project suite. Other components include: "RelDom (an XML-SQL mapper that can be used as a Content Adapter for Prowler), ozone/XML (an object-oriented XML database), Lexus (a 100% pure Java-based implementation of XUpdate - the XML Update Language), Infozone Tools (a collection of useful tools), and Schemox (which allows one to generate input forms for XML data extracted from the underlying data structures)."
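
Feature (1), the "one big XML document" view, can be sketched by mounting each back-end's XML under a common root and running a single XPath query across the result. The element names and mounting scheme below are invented for illustration; Prowler's actual directory structure differs:

```python
# Sketch: mount two hypothetical back-end documents under one root and
# query across both with a single XPath expression.
import xml.etree.ElementTree as ET

root = ET.Element("prowler")                      # the unified directory tree
crm = ET.SubElement(root, "source", {"name": "crm"})
crm.append(ET.fromstring("<customer><name>ACME</name></customer>"))
dms = ET.SubElement(root, "source", {"name": "dms"})
dms.append(ET.fromstring("<document><name>Q4 report</name></document>"))

# One XPath query spanning both mounted data sources:
names = [e.text for e in root.findall(".//name")]
print(names)
```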

  • [December 22, 2000]   XML Linking Language (XLink) and XML Base Specifications Issued as W3C Proposed Recommendations.    On December 20, 2000, the W3C published Proposed Recommendation specifications for XLink and XML Base. XML Linking Language (XLink) Version 1.0 [W3C Proposed Recommendation 20-December-2000] has been edited by Steve DeRose (Brown University Scholarly Technology Group), Eve Maler (Sun Microsystems), and David Orchard. The XLink specification "defines the XML Linking Language (XLink), which allows elements to be inserted into XML documents in order to create and describe links between resources. It uses XML syntax to create structures that can describe links similar to the simple unidirectional hyperlinks of today's HTML, as well as more sophisticated links. XLink provides a framework for creating both basic unidirectional links and more complex linking structures. It allows XML documents to: (1) Assert linking relationships among more than two resources; (2) Associate metadata with a link; (3) Express links that reside in a location separate from the linked resources... Using XLink potentially involves using a large number of attributes for supplying important link information. In cases where the values of the desired XLink attributes are unchanging across individual instances in all the documents of a certain type, attribute value defaults (fixed or not) may be added to a DTD so that the attributes do not have to appear physically on element start-tags... This specification defines only attributes and attribute values in the XLink namespace. There is no restriction on using non-XLink attributes alongside XLink attributes. In addition, most XLink attributes are optional and the choice of simple or extended link is up to the markup designer or document creator, so a DTD that uses XLink features need not use or declare the entire set of XLink's attributes. 
Finally, while this specification identifies the minimum constraints on XLink markup, DTDs that use XLink are free to tighten these constraints. The use of XLink does not absolve a valid document from conforming to the constraints expressed in its governing DTD." The XML Base specification, edited by Jonathan Marsh (Microsoft), "proposes a facility similar to that of HTML BASE for defining base URIs for parts of XML documents." The review period for both PRs extends until 31-January-2001. For related references, see "XML Linking Language."
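The two mechanisms above can be sketched together: an XLink locator attribute names a target, and an `xml:base` attribute on an ancestor supplies the base URI against which a relative reference is resolved. The following sketch uses only the Python standard library; the `catalog`/`item` document is invented for illustration, and only the `xlink:*` and `xml:base` attributes come from the specifications.

```python
# Reading XLink attributes and resolving a link target against xml:base.
import xml.etree.ElementTree as ET
from urllib.parse import urljoin

XLINK = "http://www.w3.org/1999/xlink"
XMLNS = "http://www.w3.org/XML/1998/namespace"

doc = """<catalog xmlns:xlink="http://www.w3.org/1999/xlink"
                  xml:base="http://example.org/docs/">
  <item xlink:type="simple" xlink:href="paper.xml" xlink:title="A Paper"/>
</catalog>"""

root = ET.fromstring(doc)
base = root.get("{%s}base" % XMLNS)      # xml:base on the ancestor element
item = root.find("item")
href = item.get("{%s}href" % XLINK)      # XLink locator attribute
target = urljoin(base, href)             # RFC-style relative resolution
print(target)                            # http://example.org/docs/paper.xml
```

Note that `xlink:type="simple"` is one of the attributes a DTD could supply as a fixed default, so it need not appear physically on every start-tag.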

  • [December 22, 2000]   OASIS Registry and Repository Technical Committee Completes New Technical Specification.    A new version of the "OASIS Registry/Repository Technical Specification" has been released. Reference: Working Draft 1.1, December 20, 2000. 152 pages. This release follows the face-to-face meeting in Washington, DC on December 5, 2000, and the follow-on teleconference December 15, 2000. Abstract: "This specification represents the collective efforts of the Registry and Repository Technical Committee of OASIS, the Organization for the Advancement of Structured Information Standards. It specifies a registry/repository information model and a registry services interface to a collection of registered objects, including but not limited to XML documents and schemas. The information model uses UML diagrams and written semantic rules to specify logical structures that serve as the basis of definition for an XML-based registry services interface. The information model is used for definitional purposes only; conformance to this specification depends solely on correct implementation of some designated subset of the registry services interface. The registry services interface consists of request services to create new registry information or to modify or supplement existing registry entries. It also consists of query and retrieval services to search registry content and retrieve selected registry information, or to retrieve registered objects via object references or locators. The registry services interface supports browsing by arbitrary electronic agents as well as interoperation among conforming implementations. This document deals primarily with the registry, although some scenarios and requirements for the repository are included. This document is a draft proposal under development by the Oasis Registry/Repository Technical Committee. 
Its purpose is to solicit additional input and to convey the current state of the Oasis Registry/Repository Information Model and Technical Specification... This document represents a work in progress upon which no reliance should be made. Its temporary accessibility, until more permanent accessibility is established at the OASIS web site, is via a URL given in the announcement from Len Gallagher, "New Version 1.1 - OASIS Reg/Rep Technical Specification." The objective of the Registry and Repository Committee is to develop one or more specifications for interoperable registries and repositories for SGML- and XML-related entities, including but not limited to DTDs and schemas. An initiative of OASIS intends to construct and maintain a registry and repository in accordance with these specifications, including an interface that enables searching and browsing of the contents of a repository of those entities. The registry and repository are to be designed to interoperate and cooperate with other similar registries and repositories..." For related references, see "XML/SGML Name Registration."

  • [December 22, 2000]   W3C Releases XHTML Basic Specification as a W3C Recommendation.    The World Wide Web Consortium recently issued an announcement for the release of XHTML Basic as a W3C Recommendation: "Continuing its mission to create one Web for all users, the World Wide Web Consortium (W3C) today released XHTML Basic as a W3C Recommendation. The specification reflects cross-industry agreement on a set of markup language features that allows authors to create rich Web content deliverable to a wide range of devices, including mobile phones, personal digital assistants (PDAs), pagers, and television-based Web browsers. A W3C Recommendation indicates that a specification is stable, contributes to Web interoperability, and has been reviewed by the W3C Membership, who favor its adoption by the industry. In January 2000, W3C published the XHTML 1.0 Recommendation, which combined the well-known features of HTML with the power of XML. In another W3C specification entitled 'Modularization of XHTML', W3C's HTML Working Group describes a mechanism that allows authors to mix and match content from well-defined subsets of XHTML 1.0 elements and attributes. The XHTML Basic Recommendation combines some of these XHTML modules in a manner well-suited to mobile Web applications. 'Interoperability has always been essential to the Web,' said Tim Berners-Lee, W3C Director. 'The simplicity of early versions of HTML made interoperability easy. While XHTML 1.0 is a powerful language, support for the full XHTML 1.0 feature set may be too much to expect browsers on cell phones and other small devices to handle. XHTML Basic offers the simplicity and wide interoperability of early versions of HTML and reflects ten years of Web experience, including advances in XML and accessibility.' XHTML Basic is designed so that it may be implemented by all user agents, including mobile devices, television-based devices, and other small Web devices. 
'The minimalist nature of the XHTML Basic document type ensures that all Web clients, including mobile phones, PDAs, pagers, set-top boxes, and PCs, can support a common subset of XHTML,' said Dave Raggett, W3C Fellow and Senior Architect at Openwave Systems Inc. 'XHTML Basic provides a powerful building block for use across increasingly diverse platforms, and can be extended with various specialized markup such as for multimedia (SMIL), mathematics (MathML), vector graphics (SVG), and forms (XForms).' The XHTML Basic specification is the result of significant collaborative efforts of the W3C HTML Working Group, including participants from AOL/Netscape; CWI; Ericsson; IBM; Intel; Matsushita Electric Industrial Co., Ltd.; Microsoft; Mozquito Technologies; Openwave Systems Inc.; Philips Electronics; Quark Inc.; and Sun Microsystems. In addition, the Working Group integrated feedback from the W3C Mobile Access Interest Group and the WAP Forum in an effort to ensure demonstrable functionality in wireless devices. Many industry players support, or have plans to support, XHTML Basic, including the WAP Forum. Today, content developers interested in making XHTML Basic documents can create them with W3C's own browser/editor, Amaya..." For other details, see: (1) the testimonials from industry partners and (2) the full text of the announcement, "World Wide Web Consortium Issues XHTML Basic as a W3C Recommendation. XHTML Basic Provides the Key to Full Web Access to Mobile Devices."
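To make the "common subset" concrete, a minimal XHTML Basic page looks like any other XHTML document, just restricted to the basic modules; being XML, it can be checked for well-formedness with any XML parser. The page content below is invented for illustration; only the DOCTYPE identifiers and namespace come from the Recommendation.

```python
# Parsing a minimal XHTML Basic 1.0 document with the standard library.
import xml.etree.ElementTree as ET

xhtml_basic = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML Basic 1.0//EN"
    "http://www.w3.org/TR/xhtml-basic/xhtml-basic10.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Hello, small devices</title></head>
  <body><p>A minimal XHTML Basic page.</p></body>
</html>"""

root = ET.fromstring(xhtml_basic)   # well-formedness check via parse
print(root.tag)                     # {http://www.w3.org/1999/xhtml}html
```

Full validation against the XHTML Basic DTD would require a validating parser; the sketch above only demonstrates the XML well-formedness that every XHTML document must satisfy.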

  • [December 22, 2000]   Extreme 2001 Call for Participation.    Tommie Usdin (Mulberry Technologies, Inc.) has posted a Call for Participation in the Extreme 2001 Conference. The conference will be held August 5-10, 2001 at the Hotel Wyndham, Montréal, Canada. Extreme Markup Languages 2001 is a "highly technical peer-reviewed 3.7-day conference preceded by two days of tutorials. Subjects include SGML, XML, Topic Maps, query languages, linking, schemas, transformations, inference engines, formatting and behavior, and more." Submissions are due by March 31, 2001. Guidelines for submission and the DTDs are available on the conference web site. "There will be four types of presentations at Extreme 2001: peer reviewed technical papers, late breaking news, posters, and invited keynotes. All will be new material, address some aspect of information management from a theoretical or practical standpoint, and be detailed and rigorous. Come join us to discuss information alchemy: making documents into information and data into gold. Extreme Markup Languages brings together software developers, markup theorists, information visionaries, and other assorted geeks for formal presentations, poster sessions, question and answer sessions, hallway discussions, arguments and gesticulations in front of flip charts, table-top software demos, coffee, and the cuisine, ambience, and charm of Montréal in August. Extreme conference participants include thought leaders from corporate and academic information management, knowledge engineering, enterprise integration/corporate memory, science, and technical and cultural research." Contact the Graphic Communications Association (GCA) for additional conference information. For other conferences, see the events calendar.

  • [December 22, 2000]   Revised Working Draft for the W3C XML Information Set.    Paul Grosso (W3C XML Core Working Group Co-chair) announced the release of a new working draft specification for the XML Information Set. Reference: W3C Working Draft 20-December-2000, edited by John Cowan and Richard Tobin. The specification "provides a set of definitions for use in other specifications that need to refer to the information in an XML document." Description: "This technical report defines an abstract data set called the XML Information Set (Infoset). Its purpose is to provide a consistent set of definitions for use in other specifications that need to refer to the information in a well-formed XML document. It does not attempt to be exhaustive; the primary criterion for inclusion of an information item or property has been that of expected usefulness in future specifications. An XML document has an information set if it is well-formed and satisfies the namespace constraints described below. There is no requirement for an XML document to be valid in order to have an information set. An XML document's information set consists of a number of information items (the information set for any well-formed XML document will contain at least a document information item and several others). An information item is an abstract representation of some part of an XML document: each information item has a set of associated properties. The types of information item are listed in section 2. The XML Information Set does not require or favor a specific interface or class of interfaces. This specification presents the information set as a modified tree for the sake of clarity and simplicity, but there is no requirement that the XML Information Set be made available through a tree structure; other types of interfaces, including (but not limited to) event-based and query-based interfaces are also capable of providing information conforming to the XML Information Set. 
As long as the information in the information set is made available to XML applications in one way or another, the requirements of this document are satisfied. The terms 'information set' and 'information item' are similar in meaning to the generic terms 'tree' and 'node', as they are used in computing. However, the latter terms were avoided in this document to reduce possible confusion with other specific data models. Information items do not map one-to-one with the Nodes of the DOM or the 'tree' and 'nodes' of the XPath data model." Document status: "Though this specification has already had a Last Call review on an earlier version, in light of the review and much discussion, the XML Core Working Group has reworked the specification. The WG has decided to publish this working draft as representing its latest work and invites public comment on this specification." Review comments are publicly archived. For background, see: (1) the XML Information Set Requirements document, and (2) the W3C XML Activity.
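The point that a well-formed document needs no DTD to have an information set can be illustrated with any XML parser. The sketch below uses ElementTree, whose in-memory model is only a rough approximation of the Infoset (it is not a conforming implementation of the specification); the document itself is invented for the example.

```python
# A well-formed document with no DTD still yields "information items":
# a document item, element items, attribute items, and character items.
import xml.etree.ElementTree as ET

doc = '<memo lang="en">A <em>well-formed</em> document, no DTD.</memo>'
root = ET.fromstring(doc)        # parses: well-formed, though not valid

elements = [e.tag for e in root.iter()]   # element information items
print(elements)                  # ['memo', 'em']
print(root.attrib)               # {'lang': 'en'} -- an attribute item
print(root.text)                 # 'A ' -- character information items
```

The same information could equally be delivered through an event-based (SAX-style) or query-based interface; the Infoset deliberately mandates no particular one.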

  • [December 15, 2000]   Unicode in XML and other Markup Languages.    W3C and the Unicode Consortium have jointly published the document Unicode in XML and other Markup Languages, which "contains guidelines on the use of the Unicode Standard in conjunction with markup languages such as XML." The document is published as a W3C Note [W3C Note 15 December 2000] and as Unicode Technical Report #20. Principal authors include Martin Dürst and Asmus Freytag. The W3C Internationalization Working Group/Interest Group has contributed to this document in the context of the W3C Internationalization Activity. The base version of the Unicode Standard for the document is Version 3.0. Description: "There are several general points to consider when looking at the interaction between character encoding and markup. (1) Linearity of text vs. hierarchy of markup structure; (2) Overlap of control codes and markup semantics; (3) Coincidence of semantic markup and functions; (4) Extensibility of markup; (5) Markup vs. Styling... The Unicode Standard [Unicode] defines the universal character set. Its primary goal is to provide an unambiguous encoding of the content of plain text, ultimately covering all languages in the world. Currently in its third major version, Unicode contains a large number of characters covering most of the currently used scripts in the world. It also contains additional characters for interoperability with older character encodings, and characters with control-like functions included primarily for reasons of providing unambiguous interpretation of plain text. Unicode provides specifications for use of all of these characters. For document and data interchange, the Internet and the World Wide Web are more and more making use of marked-up text such as HTML and XML. In many instances, markup provides the same, or essentially similar features to those provided by format characters in the Unicode Standard for use in plain text. 
Another special character category provided by Unicode is compatibility characters. While there may be valid reasons to support these characters and their specifications in plain text, their use in marked-up text can conflict with the rules of the markup language. Formatting characters are discussed in chapters 2 and 3, compatibility characters in chapter 4. The issues of using Unicode characters with marked-up text depend to some degree on the rules of the markup language in question and the set of elements it contains. In a narrow sense, this document concerns itself only with XML, and to some extent HTML. However, much of the general information presented here should be useful in a broader context, including some page layout languages... Many of the recommendations of this report depend on the availability of particular markup. Where possible, appropriate DTDs or Schemas should be used or designed to make such markup available, or the DTDs or Schemas used should be appropriately extended. The current version of this document makes no specific recommendations for the design of DTDs or schemas, or for the use of particular DTDs or Schemas, but the information presented here may be useful to designers of DTDs and Schemas, and to people selecting DTDs or Schemas for their applications. The recommendations of this report do not apply in the case of XML used for blind data transport and similar cases." See related resources in "XML and Unicode."
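A small illustration of the compatibility-character issue: characters such as the 'fi' ligature (U+FB01) or superscript two (U+00B2) exist chiefly for round-tripping older encodings. Unicode's compatibility normalization form (NFKC) replaces them with ordinary characters, which is often the appropriate choice in a markup context, where the presentational distinction can instead be expressed with markup or styling.

```python
# Compatibility characters versus their normalized equivalents.
import unicodedata

print(unicodedata.normalize("NFKC", "\ufb01le"))   # 'file' -- ligature folded
print(unicodedata.normalize("NFKC", "m\u00b2"))    # 'm2' -- superscript lost
# NFC, by contrast, leaves compatibility characters alone:
print(unicodedata.normalize("NFC", "\ufb01le"))    # 'ﬁle'
```

Note that NFKC discards information (the superscript above becomes a plain digit), which is exactly why the report's guidance on when to use such characters in marked-up text matters.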

  • [December 15, 2000]   IdooXoap for Java version 1.0.    Jacek Kopecky has announced the release of IdooXoap for Java version 1.0. IdooXoap is an "implementation of the SOAP protocol. Using this package you can easily build clients that access services described by WSDL or SCL descriptions; you can also build your own services. IdooXoap provides a tool for generating WSDL descriptions from Java classes (the Java2WSDL compiler) as well as the ServiceCompiler tool for easy SOAP development, which can create service stubs and generate a skeleton implementation of a service. Major improvements since pre-betas include: (1) WSDL support; (2) Arrays support; (3) SOAP Headers support; (4) Selective Java to WSDL compilation; (5) EJB support; (6) Improved interoperability." See also "Simple Object Access Protocol (SOAP)."

  • [December 12, 2000]   Proof-of-Concept Demonstration for the ebXML Technical Infrastructure.    From a recent industry consortium announcement: "The United Nations CEFACT and OASIS today announced that the core technical infrastructure of ebXML, the Electronic Business XML Initiative, nears completion and will be delivered in March 2001, two months ahead of schedule. The technical specifications for the transport, routing and packaging (TRP), trading partner agreements (TPA), and registry/repository (REG/REP) components of ebXML provide the required pieces to ensure interoperability based on XML standards for global business on the Internet. Enterprises are demanding a standards-based framework for global trading, and developers are demanding the availability of an open, business-quality architecture that they can begin evaluating and implementing now. Progress on the 18-month ebXML initiative has been so substantial, organizers agreed to move the delivery date forward to meet this demand. At a recent ebXML meeting in Tokyo, hundreds of organizations from Asia, Australia, Europe and North America gathered to advance the development of ebXML. As a highlight of this meeting and a ratification of this decision, sixteen companies collaborated in an interactive proof-of-concept demonstration of the ebXML technical infrastructure. Cisco, Fujitsu, IBM, Interwoven, IPNet, Netfish Technologies, NTT Communications, Savvion, Sterling Commerce, Sun Microsystems, TIE, Viquity and XMLSolutions collaborated to build an interactive implementation of ebXML interoperability. In addition, Extol, webMethods and XML Global tracked the POC event closely and indicated that they would be interested in participating in future ebXML events. 
The demonstration, which was presented in North America for the first time today at a media event in San Francisco, showed how businesses can use ebXML to dynamically formulate trading partnerships through a registry service and exchange electronic business transactions using a consistent XML-based messaging infrastructure. The ebXML demonstration showed dynamic business transactions using payloads from the Automotive Industry Action Group. 'These vendors, many of whom are competitors, came together to prove that one of ebXML's core strengths is interoperability', said Robert S. Sutor, Ph.D. of IBM, vice chair of ebXML and member of the OASIS Board of Directors. 'Early completion of the ebXML technical infrastructure will pave the way for rapid availability of multiple commercial integrated ebXML-compliant solutions. These will reduce the costs of deployment and ensure the flexibility required for e-commerce success in the global market'." See (1) the full text of the announcement "United Nations CEFACT and OASIS to Deliver ebXML Technical Infrastructure Ahead of Schedule. Proof-of-Concept Demo with Thirteen Vendors Proves Readiness of Electronic Business Infrastructure.", and (2) "Electronic Business XML Initiative (ebXML)."

  • [December 12, 2000]   BizTalk Framework 2.0 Final Version Published.    Microsoft has announced the publication of the "final version of its BizTalk Framework 2.0 specification, which is now available for download. Based on industry standards for data exchange and security such as SOAP 1.1 (Simple Object Access Protocol), XML and S/MIME, the BizTalk Framework enables the secure and reliable exchange of business documents over the Internet. Development of the BizTalk Framework is overseen by the BizTalk Steering Committee, which comprises industry partners, consortiums and standards bodies." The published specification offers a general overview of the BizTalk Framework 2.0 conceptual architecture, including the BizTalk Document and BizTalk Message. It provides detailed specifications for the construction of BizTalk Documents and Messages, and their secure transport over a number of Internet-standard transport and transfer protocols. Background: "Extensible Markup Language (XML) and XML-based schema languages provide a strong set of technologies with a low barrier to entry. These languages enable one to describe and exchange structured information between collaborating applications or business partners in a platform- and middleware-neutral manner. As a result, domain-specific standards bodies and industry initiatives have started to adopt XML and XML-based schema languages to specify both their vocabularies and content models. These schemas are becoming widely published and implemented to facilitate communication between both applications and businesses. Wide support of XML has also resulted in independent solution providers developing solutions that enable the exchange of XML-based information with other third-party or custom-developed applications. Several solution- or middleware/platform-specific approaches have been taken to address the lack of middleware-neutral, application-level communication protocols. 
However, no single proprietary solution or middleware platform meets all the needs of a complex deployment environment. These proprietary initiatives have generally resulted in customers facing broad interoperability issues on their own. The BizTalk Framework addresses these interoperability challenges in a platform- and technology-neutral manner. It provides specifications for the design and development of XML-based messaging solutions for communication between applications and organizations. This specification builds upon standard and emerging Internet technologies such as Hypertext Transfer Protocol (HTTP), Multipurpose Internet Mail Extensions (MIME), Extensible Markup Language (XML), and Simple Object Access Protocol (SOAP). Subsequent versions of the BizTalk Framework will be enhanced to make use of additional XML and Internet-related, messaging-standards work as appropriate. It is important to note that the BizTalk Framework does not attempt to address all aspects of business-to-business electronic commerce. For instance, it does not deal directly with legal issues, agreements regarding arbitration, or recovery from catastrophic failures, nor does it define specific business processes such as those for purchasing or securities trading. The BizTalk Framework provides a set of basic mechanisms required for most business-to-business electronic exchanges. It is expected that other specifications and standards, consistent with the BizTalk Framework, will be developed for the application- and domain-specific aspects."

  • [December 12, 2000]   W3C Publishes XSL Transformations (XSLT) Version 1.1.    W3C has released an XSLT revision in a working draft document XSL Transformations (XSLT) Version 1.1. Reference: W3C Working Draft 12-December-2000, edited by James Clark. An HTML version with color-coded revision indicators has been prepared to reveal changes vis-à-vis the W3C Recommendation of 1999-11-16. The non-normative Appendix G supplies a listing of "Changes from XSLT 1.0." Appendix D provides a "DTD Fragment for XSLT Stylesheets." Document abstract: "This specification defines the syntax and semantics of XSLT, which is a language for transforming XML documents into other XML documents. XSLT is designed for use as part of XSL, which is a stylesheet language for XML. In addition to XSLT, XSL includes an XML vocabulary for specifying formatting. XSL specifies the styling of an XML document by using XSLT to describe how the document is transformed into another XML document that uses the formatting vocabulary. XSLT is also designed to be used independently of XSL. However, XSLT is not intended as a completely general-purpose XML transformation language. Rather it is designed primarily for the kinds of transformations that are needed when XSLT is used as part of XSL." Document status: "The working draft is based on the W3C XSLT 1.0 Recommendation. The changes made in this document are intended to meet the requirements for XSLT 1.1 and to incorporate fixes for errors that have been detected in XSLT 1.0." For related information, see (1) the W3C Style Activity and (2) "Extensible Stylesheet Language (XSL/XSLT)."

  • [December 12, 2000]   Updated W3C Candidate Recommendation for Canonical XML.    The W3C's Candidate Recommendation for Canonical XML Version 1.0 has been updated in light of reviewers' comments in the current implementation phase. Reference: W3C Candidate Recommendation 12-December-2000, edited by John Boyer (PureEdge Solutions Inc.). Document abstract: "Any XML document is part of a set of XML documents that are logically equivalent within an application context, but which vary in physical representation based on syntactic changes permitted by XML 1.0 and Namespaces in XML. This specification describes a method for generating a physical representation, the canonical form, of an XML document that accounts for the permissible changes. Except for limitations regarding a few unusual cases, if two documents have the same canonical form, then the two documents are logically equivalent within the given application context. Note that two documents may have differing canonical forms yet still be equivalent in a given context based on application-specific equivalence rules for which no generalized XML specification could account." Document status: "This revised Candidate Recommendation of the IETF/W3C XML Signature Working Group includes three clarifications resulting from comments made during the four week call for implementation, which formally ended November 24, 2000. The XML Signature Working Group believes this specification incorporates the resolution of all last call and call for implementation issues; furthermore it considers the specification to be very stable, as demonstrated by its interoperability report. We hope to refer this document to the W3C Director for consideration as Proposed Recommendation in early January, 2001."

  • [December 12, 2000]   SOAP Messages with Attachments.    The W3C has acknowledged receipt of a submission from Commerce One, Inc., Hewlett Packard Company, International Business Machines Corporation, IONA Technologies, Microsoft Corporation, Oracle Corporation and webMethods, Inc. on SOAP 1.1 message binding for transmission within a MIME multipart/related message: SOAP Messages with Attachments. Reference: W3C Note 11-December-2000, by John J. Barton (Hewlett Packard Labs), Satish Thatte (Microsoft), and Henrik Frystyk Nielsen (Microsoft). The document abstract: "This document defines a binding for a SOAP 1.1 message to be carried within a MIME multipart/related message in such a way that the processing rules for the SOAP 1.1 message are preserved. The MIME multipart mechanism for encapsulation of compound documents can be used to bundle entities related to the SOAP 1.1 message such as attachments. Rules for the usage of URI references to refer to entities bundled within the MIME package are specified." The NOTE submission constitutes a suggestion for message packaging for the W3C XML Activity on XML Protocols. Description: "A SOAP message may need to be transmitted together with attachments of various sorts, ranging from facsimile images of legal documents to engineering drawings. Such data are often in some binary format. For example, most images on the Internet are transmitted using either GIF or JPEG data formats. In this document we describe a standard way to associate a SOAP message with one or more attachments in their native format in a multipart MIME structure for transport. The specification combines specific usage of the Multipart/Related MIME media type (RFC 2387) and the URI schemes discussed in RFC 2111 and RFC 2557 for referencing MIME parts. The methods described here treat the multipart MIME structure as essentially a part of the transfer protocol binding, i.e., on par with the transfer protocol headers as far as the SOAP message is concerned. 
The multipart structure, though given a name (SOAP message package), is not an entity that can be unambiguously identified as such because there is no token explicitly expressing the intent to make it such an entity. A conscious choice in this document was to avoid adding a new entity type based on a recognizable token. The purpose of this document is to show how to use existing facilities in SOAP and standard MIME mechanisms to carry and reference attachments. In other words, we take a minimalist approach to show what is already possible with existing standards without inventing anything. More rigorous semantics for message packages requires a new entity type. Such a type can be built by extending the approach described here with a new SOAP header entry which, for instance, may be used to provide a manifest of the complete contents of the message package." Rationale: "The co-submitters of this specification believe strongly that this specification provides important functionality that allows a SOAP message to be transferred in a MIME multipart wrapper along with so-called attachments of any media type supported by MIME without changing any of the existing specifications referenced. Especially, it does not require any changes to the SOAP/1.1 W3C Note. Because of the earlier SOAP/1.1 submission, the W3C is well suited to co-ordinate work in this area. The W3C member companies submitting this document suggest that the Consortium include this submission as consideration in the XML Protocol Activity although not necessarily within the existing XML Protocol Working Group." The W3C staff comment says, in part: "Direct handling of binary data has been considered as a low priority for this Working Group. Reusing a similar, MIME-based, solution could be a low-cost option for the XML Protocol Working Group. The XML Protocol Working Group will determine whether, when, or how to incorporate this submission in their work." 
See related references in "Simple Object Access Protocol (SOAP)."
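A minimal sketch of the packaging the Note describes, built with the standard email package: a SOAP 1.1 envelope as the root part of a multipart/related message, with a binary attachment that the envelope references through a `cid:` URI and a matching Content-ID header. The envelope body, content IDs, and attachment bytes are invented for the example; only the multipart/related structure and `cid:` referencing come from the Note.

```python
# Packaging a SOAP envelope plus a binary attachment in multipart/related.
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

envelope = """<SOAP-ENV:Envelope
    xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Body>
    <claim:SubmitClaim xmlns:claim="http://example.org/claims">
      <signedForm href="cid:form1.tiff@example.org"/>
    </claim:SubmitClaim>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>"""

# The 'type' and 'start' parameters identify the root (SOAP) part.
pkg = MIMEMultipart("related", type="text/xml",
                    start="<envelope.xml@example.org>")

soap_part = MIMEText(envelope, "xml")
soap_part.add_header("Content-ID", "<envelope.xml@example.org>")
pkg.attach(soap_part)

att = MIMEApplication(b"II*\x00fake-tiff-data", "octet-stream")
att.add_header("Content-ID", "<form1.tiff@example.org>")
pkg.attach(att)

print(pkg.get_content_type())            # multipart/related
print(pkg.get_payload(0)["Content-ID"])  # <envelope.xml@example.org>
```

The receiver locates the root part via the `start` parameter, processes it under normal SOAP 1.1 rules, and resolves the `cid:` reference to the attachment's Content-ID, exactly the division of labor the Note intends.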

  • [December 11, 2000]   The Active Digital Profile Initiative.    Led by Business Layers, several companies have formed the Active Digital Profile Initiative, designed to "standardize interfaces and methodologies used to provision digital resources that span devices, applications and services within the enterprise and between enterprises." Background to the initiative is the "complex supply chain... companies must provision voice and data network resources, security systems, remote access systems, operating systems, applications, Web-based information services -- in addition to services that are outsourced to traditional outsourcers or ASPs." The initiative's response is the Active Digital Profile (ADPr) -- "a proposed open XML-based specification that will allow companies to share provisioning information across multi-vendor systems. When fully adopted and deployed, an enterprise will be able to hire a new employee or invite a new business partner to share their network resources knowing that everything that person needs to be productive will simply and automatically be delivered to the right person at the right time. The initiative invites anyone interested in expanding the scope of their provisioning solutions to join its effort to bring openness and interoperability to the eProvisioning process... The ADPr is an XML-based specification that supports any application, in any scenario. The ADPr is an eProvisioning specification, not a network management specification. It is designed to handle the adds, moves, changes, and deletions of users associated with a broad range of services or resources, across the extended enterprise. The specification defines a document that will include a header containing authentication and authorization information, a context used to identify the user and all bounding conditions such as contracts, SLAs, organizations, domains, etc., and one or more tasks and the associated data that is valid within the scope defined by the context... 
Based on Business Layers' advanced eProvisioning software, used by customers around the world, the ADPr specification has already undergone significant development. Business Layers will continue to work with various industry leaders to refine the new specification and submit it to OASIS, the Organization for the Advancement of Structured Information Standards." A draft specification containing the XML DTD is available on the Active Digital Profile Web site. For other description and references, see: (1) the announcement "Business Layers Leads Effort to Develop First XML-Based eProvisioning Specification. Check Point Software Technologies, ePresence, Netigy, Novell and Other Leading Companies Applaud Proposed Active Digital Profile (ADPr) Specification.", and (2) "Active Digital Profile."
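The three-part document structure described above (header, context, tasks) can be sketched programmatically. The actual DTD is on the Active Digital Profile Web site; every element and attribute name below is invented purely to mirror the prose description and is NOT taken from the draft specification.

```python
# Hypothetical ADPr-shaped document: header / context / tasks.
# All names here are illustrative inventions, not the draft DTD's.
import xml.etree.ElementTree as ET

profile = ET.Element("adpr-document")

header = ET.SubElement(profile, "header")          # authentication/authorization
ET.SubElement(header, "auth",
              {"principal": "hr-system", "role": "provisioner"})

ET.SubElement(profile, "context",                  # user + bounding conditions
              {"user": "jdoe", "organization": "example.com",
               "sla": "standard"})

tasks = ET.SubElement(profile, "tasks")            # one or more tasks
ET.SubElement(tasks, "task", {"action": "add", "resource": "email-account"})
ET.SubElement(tasks, "task", {"action": "add", "resource": "vpn-access"})

doc = ET.tostring(profile, encoding="unicode")
print(doc)
```

The "adds, moves, changes, and deletions" the prose mentions would map onto task actions in such a scheme, with the context scoping which contracts and domains a task may touch.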

  • [December 09, 2000]   Triple-s XML Survey Interchange Standard.    Triple-s is an XML-based "open survey interchange standard" for the encoding and interchange of survey data collected and analyzed by social science professionals. The standard "defines a means by which both survey data and meta-data (variables) may be transferred between different survey programs running on different software and hardware platforms." The domain problem is typical: "Increasingly, users of survey software are demanding that data be exchangeable between survey software systems from different vendors and possibly running on different hardware and/or different operating system platforms. The transfer may be required because a client wants to perform some more detailed analysis of aspects of a survey originally conducted by an agency and the two parties use different survey software. The initial version of the triple-s standard (version 1.0) was devised by Keith Hughes, Stephen Jenkins and Geoff Wright, and published in 1994. The impetus was a paper by Peter Wills. During 1996 the same group of people met to enhance and extend the standard, based on comments from implementers and users. An interim result of these meetings was presented as a paper to the ASC (Association for Survey Computing) International Conference in 1996. The preliminary specification for version 1.1 of the triple-s standard was agreed in December 1996 and published in March 1998." Thus, triple-s has been designed "as an interchange format; it was not conceived as a native survey definition format, nor is it a replacement for the many proprietary survey definition languages currently in use. The triple-s XML format provides for the cross-platform transfer of both survey data and survey variables using universal industry standard protocols. The syntax of a triple-s XML document is described by the freely available triple-s DTD."
triple-s XML provides for the description of the five most common types of variable: (1) SINGLE variables interpret categorical data with one response allowed; (2) MULTIPLE variables interpret categorical data with any number of responses allowed; (3) QUANTITY variables interpret open numeric value (integer or real); (4) CHARACTER variables interpret character data; (5) LOGICAL variables interpret individual Yes/No or True/False data values. triple-s XML allows for both integer and real coded values to be represented. Two formats for the representation of multi-response data are supported. Where standard coding has been used to represent special values -- for example, where '9' is used to represent 'Not Answered' -- that coding is maintained through the transfer operation rather than being closed down on a question by question basis. Furthermore, the fact that a particular code is 'special' in some way can be represented and thus indicated to a survey importer. A triple-s survey is described in two text files. One, the Definition File, contains version and general information about the survey together with definitions of the survey variables. This is used to interpret the contents of the Data file..." For description and references, see: (1) the Triple-s Home Page, and (2) "Triple-s XML Survey Interchange Standard."
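The five variable types above can be illustrated with a short sketch. The element and attribute names below are illustrative stand-ins chosen for readability, not drawn from the actual triple-s DTD:

```python
import xml.etree.ElementTree as ET

# Illustrative only: element and attribute names here are hypothetical
# stand-ins, not the published triple-s XML DTD.
survey = ET.Element("sss")
record = ET.SubElement(survey, "record", {"ident": "A"})
for name, vtype in [
    ("Q1", "single"),     # categorical data, one response allowed
    ("Q2", "multiple"),   # categorical data, any number of responses
    ("Q3", "quantity"),   # open numeric value (integer or real)
    ("Q4", "character"),  # free-text character data
    ("Q5", "logical"),    # individual Yes/No or True/False values
]:
    ET.SubElement(record, "variable", {"ident": name, "type": vtype})

xml_text = ET.tostring(survey, encoding="unicode")
```

A Definition File in this spirit would carry the variable declarations, while the separate Data file carries the responses they describe.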

  • [December 09, 2000]   CaveScript XML for Speleologists.    CaveScript XML, being developed by Michael Lake, is "the generic name of a cave survey and map data format that could store all the information about a cave survey or an entire cave map. It is designed to assist speleologists and cavers in cave surveying and drawing up cave maps. CaveScript XML consists of a suite of utility programs, a specification for a Cave Survey Markup Language and some Document Type Definitions for the language. The two principal DTDs are CaveSurvey.dtd and the CaveMap.dtd. These form the foundations for the CaveScript Mapping Program which can generate Postscript files showing survey legs and cave features such as the walls, avens etc. If the survey data later changes, because errors are fixed or loops are closed, the mapping program will automatically modify the wall detail to 'refit' the changed survey legs. CaveView is the CaveMap XML to Postscript Converter; it is a Perl script that reads a CaveMap XML file and creates a Postscript file for printing the cave map. The programs still need lots of code and a GUI frontend. The GUI part will probably be written using GTK. The CaveScript markup language developed for the project is based on XML. CaveScript is released under the GNU General Public License... CaveScript is still just a draft of a new language for cave survey data. Its goal is to provide a data format to store information about a cave and its map; CaveScript won't be a data reduction engine. Survex is excellent for that, hence the need for scripts to convert my XML data to Survex [a free open-source cave survey tool with a powerful hierarchical file system for station naming]." The Document Type Definitions for the CaveScript XML, documentation, and example XML files are available for download. 
Also available are programs, examples and documentation for the Perl scripts which convert Survex to XML (svx2xml) and XML to Survex (xml2svx), and sources for CaveView.

  • [December 07, 2000]   US Patent and Trademark Office Deploys XML Solutions for Electronic Filing.    The US Patent and Trademark Office "is one of the world's largest Intellectual Property Offices, now processing in excess of 400,000 patent and trademark applications and in excess of 1,600,000 transactions in connection with these applications in 1999." According to recent publications in government journals and a USPTO 'Request for Agreement' (RFA Solicitation No.: 60-PBPT-0-00001, 2000-10-03), efforts are now underway to develop and deploy XML-based solutions for USPTO electronic filings. An attachment to the recent RFA supplies a 'List of Trademark, Patent, and Assignment DTDs' which are in development or use in the PTO's electronic filing initiatives. The USPTO "has based its electronic filing and business communication initiatives on Extensible Markup Language (XML)-tagged documents and has developed standard formats for applications and most applicant/USPTO correspondence received and sent by the USPTO during the prosecution of a patent as well as post grant correspondence. Similarly, the USPTO has developed XML DTDs for trademark applications and for required post application and post registration filings. At this time some 23 patent related and 8 trademark XML documents have been defined, of which a smaller number have been validated through use. The focus of this USPTO program is to encourage COTS IP software management companies to include the ability to produce the XML encoded application documents compliant with the USPTO DTDs as part of, or as an extension to, existing software products." SGML DTDs defined in the Grant Red Book Specification for SGML Markup of United States Patent Grant Publications have been in use for some time; the reference document also contains links for (1) the "Application Red Book: Specification for SGML Markup of United States Patent Application Publications" and for (2) "Electronic Filing System DTDs." 
USPTO plans call for the use of SGML through 2001, followed by complete transition to XML DTDs in 2002. The Grant Red Book DTD V2.4 issued 10/17/2000 reflects changes made in the DTD ('st32-us-grant-024nf.dtd') for compatibility with XML. A USPTO Electronic Filing System (EFS) already supports secure electronic filing of Patent applications. "EFS provides Patent applicants and practitioners with software capabilities and technical guidance to electronically author Patent application information for submission to the USPTO via the Internet. EFS comprises two software components: (1) authoring software that complies with USPTO business rules and electronic data capture standards; and (2) submission software that validates, bundles, compresses, and securely submits the electronic application files and information. USPTO makes authoring and submission software available at no cost. To author the specification document, one may use the preferred authoring tool known as PASAT (Patent Application Specification Authoring Tool)... The submission software is called the electronic Packaging and Validation Engine, or ePAVE. After successful transmission, the submission software returns an acknowledgement receipt that includes the date of receipt at the USPTO and an assigned Patent application number. EFS implements Patent business rules and practices using Internet technologies. The Extensible Markup Language (XML) is one technical standard implemented. Applicants author their Patent application specifications off-line as intelligent, tagged, electronic documents using XML. Using ePAVE, applicants author other patent application information as XML 'forms'. The Extensible Markup Language is a non-proprietary standard approved by the World Wide Web Consortium (W3C). XML is a format used for exchange of information between different applications as well as for publishing information. 
USPTO EFS software automatically tags the patent application specification and other related application information." For description and references, see "US Patent and Trademark Office Electronic Filing System."

  • [December 07, 2000]   SyncML Initiative Publishes SyncML 1.0 Specification.    Founders of The SyncML Initiative, including Ericsson, IBM, Lotus, Matsushita, Motorola, Nokia, Palm, Inc., Psion, and Starfish hosted a briefing today in connection with the release of the SyncML 1.0 specification. Douglas Heintzman, Chairman of SyncML, hosted the teleconference call with other SyncML founders. The SyncML Initiative "develops and promotes an open industry specification for universal data synchronization of remote data and personal information across multiple networks, platforms and devices. SyncML is an XML-based data synchronization protocol designed to create the optimal mobile computing experience by supporting enhanced data synchronization, including e-mail, calendar, contact management information, enterprise data stored in databases, Web-based documents and new forms of content from systems available in the future." Several XML DTDs and related specification documents (e.g., SyncML Synchronisation Protocol Specification V1.0 and SyncML Representation Protocol Specification V1.0) are now available for download. From the announcement: "SyncML, the initiative sponsored by Ericsson, IBM, Lotus, Matsushita, Motorola, Nokia, Palm, Inc., Psion and Starfish Software, has today released the SyncML 1.0 specification providing tomorrow's synchronization technology for today's mobile solutions. In less than one year, SyncML has successfully developed and published a powerful protocol for universal data synchronization of both remote and local data. In addition to the specification, the SyncML initiative also released SyncML Reference Toolkit source code, enabling companies to rapidly bring SyncML-compliant products to the market. 'The SyncML initiative is proud to deliver this exciting technology to the market in record-breaking time. 
Full interoperability among mobile terminals and server infrastructures is a fundamental ingredient in the successful deployment of mobile Internet services. The entire industry will greatly benefit from the success of SyncML,' said Ilari Nurmi, vice chairman of the SyncML initiative. SyncML-enabled products and services will offer consumers mobile freedom by synchronizing personal data and providing interoperability among all SyncML-compliant products and services. Consumers and business professionals alike will be able to synchronize their personal data, such as contacts and calendars, in mobile terminals with various applications and services including corporate personal information managers, Internet calendars, Internet address books and more. Synchronization will be possible locally and remotely through various transports, such as infrared, Bluetooth, HTTP and WAP, regardless of platform or manufacturer. This open standard will enable device manufacturers, application developers, Internet companies, and wireless operators to have SyncML-compliant products and services commercially available as early as Q1 2001. Founded in February 2000, the SyncML initiative has recognized the growing need for a single data synchronization protocol. With the industry-wide proliferation of mobile devices and the evolution of these devices as the major means of information exchange, synchronization of data will be of integral importance. The SyncML initiative, officially supported by more than 500 device manufacturers, service providers and application developers, welcomes new supporters to join the initiative. New members have the opportunity to make contributions to the specification work and will receive advanced solution development tools provided by the SyncML initiative." In this connection, Nokia "showcased the world's first SyncML implementation with the Nokia 9210 Communicator. 
The demonstration also included a powerful SyncML enabled Internet calendar solution, the Nokia Mobile Calendar, which in addition to SyncML supports legacy mobile phones too. The Nokia Mobile Calendar is a product, which operators and Internet service providers can offer to their subscribers." See description and references in "The SyncML Initiative."
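The overall message shape SyncML defines can be sketched in a few lines. This hand-built example follows the SyncHdr/SyncBody layout of the 1.0 specification, but it omits many required elements (VerDTD, MsgID, Target, Source, etc.) and is not a valid SyncML message:

```python
import xml.etree.ElementTree as ET

# A minimal sketch of the SyncML 1.0 message shape: a header plus a body
# carrying synchronization commands. Real messages require many more
# elements than shown here.
msg = ET.Element("SyncML")
hdr = ET.SubElement(msg, "SyncHdr")
ET.SubElement(hdr, "VerProto").text = "SyncML/1.0"
ET.SubElement(hdr, "SessionID").text = "1"

body = ET.SubElement(msg, "SyncBody")
sync = ET.SubElement(body, "Sync")
add = ET.SubElement(sync, "Add")        # e.g., push a new contact entry
item = ET.SubElement(add, "Item")
ET.SubElement(item, "Data").text = "BEGIN:VCARD..."  # placeholder payload

doc = ET.tostring(msg, encoding="unicode")
```

The command elements (Add, Replace, Delete) inside the body are what let a server and a device reconcile their respective data stores.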

  • [December 07, 2000]   miniXML Parser with Source Code.    In the January 2001 issue of Dr. Dobb's Journal, Xerox researcher David Cox presents a tree-based "miniXML" parser for XML that is written in C++ using the Standard Template Library for strings and various containers. The parser works with canonical XML, and is very fast, though limited to smaller XML documents. The author concludes from his parser development experience, narrated in the article, that canonical XML is useful, and that small XML parsers embedded in applications can get a lot of work done. The web site contains sample code listings and the complete source code for the miniXML parser. For related tools, see "XML Parsers and Parsing Toolkits."
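In the same spirit, a toy tree-building parser for a tiny XML subset (elements and text only; no attributes, entities, or self-closing tags) shows how little code a minimal parser needs. This is an illustrative Python sketch, not David Cox's C++ implementation:

```python
import re

# Tokenize a tiny XML subset: open tags, close tags, and text runs.
TOKEN = re.compile(r"<(/?)([A-Za-z][\w.-]*)\s*>|([^<]+)")

def parse(xml):
    """Build a (tag, children) tree; children are sub-tuples or text."""
    root = ("#doc", [])
    stack = [root]
    for close, name, text in TOKEN.findall(xml):
        if text:
            if text.strip():
                stack[-1][1].append(text.strip())
        elif close:
            assert stack[-1][0] == name, "mismatched end tag"
            stack.pop()
        else:
            node = (name, [])
            stack[-1][1].append(node)
            stack.append(node)
    assert len(stack) == 1, "unclosed element"
    return root[1][0]

tree = parse("<doc><title>miniXML</title><body>hello</body></doc>")
```

Even this toy version supports the observation above: an embedded parser for a constrained (canonical-like) subset can be tiny and still do useful work.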

  • [December 07, 2000]   Squish RDF Query Tool Released.    Libby Miller has posted an announcement for an alpha release and demonstration of 'Squish', a Java tool for processing complex RDF queries. "Squish is demonstration software written in Java for making complex queries of RDF on top of Java RDF APIs such as Jena and the Stanford RDF API. The SQL-like query language it uses is similar in some aspects to that used by R.V. Guha's rdfDB, and allows you to make complex queries of RDF models instead of navigating incrementally around them. Squish also uses the JDBC API to make RDF querying easier over RDF stored in SQL databases (Postgres) or in in-memory models. The distribution includes the Java servlet runner Tomcat, and sample JSPs for querying RDF databases using JDBC, including a JSP which allows you to generate and display RSS 1.0 channels. This implementation is intended to demonstrate the possibilities of this approach, and is only appropriate for extremely small scale use. Comments are very welcome." For related tools, see "Resource Description Framework (RDF)." [12-Dec-2000 update: "a few minor bug fixes, improvements to the generation and display of RSS 1.0 files, and migration to SiRPAC 1.15"; see the eGroups mailing list for announcements, and the alternate URL.]
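Conceptually, an rdfDB-style query engine like Squish matches triple patterns containing variables against an RDF store, rather than navigating the graph one arc at a time. The Python sketch below (not Squish's actual Java API) illustrates the idea:

```python
# Illustrative triple-pattern matching, the core of an rdfDB/Squish-style
# query. Variables start with '?'; constants must match exactly.
triples = [
    ("doc1", "dc:creator", "Libby Miller"),
    ("doc1", "dc:title", "Squish"),
    ("doc2", "dc:creator", "R.V. Guha"),
]

def match(pattern, bindings):
    out = []
    for triple in triples:
        b = dict(bindings)
        for p, v in zip(pattern, triple):
            if p.startswith("?"):
                if b.setdefault(p, v) != v:
                    break          # variable already bound to a different value
            elif p != v:
                break              # constant mismatch
        else:
            out.append(b)
    return out

def query(patterns):
    """Join the patterns by threading bindings through each in turn."""
    solutions = [{}]
    for pat in patterns:
        solutions = [b2 for b in solutions for b2 in match(pat, b)]
    return solutions

# In spirit: SELECT ?doc WHERE (dc:creator ?doc "Libby Miller")
rows = query([("?doc", "dc:creator", "Libby Miller")])
```

Joining several patterns this way is what makes "complex queries" possible over a flat triple store, whether in memory or in an SQL database.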

  • [December 06, 2000]   Microsoft Releases XML for Analysis Specification.    Microsoft recently announced an 'XML for Analysis Specification' as a protocol for extending business intelligence to Web Services. This specification is now available for download from the Universal Data Access Web Site. It is open for public feedback from 10/30/00 to 1/15/01; an updated specification will be posted approximately 1/30/01. From the text of the press release: "Microsoft Corporation today announced the release of the beta specification for XML for Analysis -- a new protocol that extends the Microsoft business intelligence strategy to the Microsoft .NET vision of Web services, allowing application developers to provide analytic capabilities to any client on any device or platform using any programming language. Built on HTTP, XML and SOAP and with more than 50 industry players having been instrumental in its development, XML for Analysis is being hailed by developers of analytical tools as the first cross-platform solution designed to address the unique challenges of analytical data access and manipulation. As an extension to OLE DB for OLAP and OLE DB for Data Mining, XML for Analysis uses the Internet as the platform for building analytical applications from diverse sources of data, thus enabling developers to provide better Web services for analytic data. Corporations can now allow trading partners, customers and suppliers to access data easily and securely over the Web without worrying about client operating system, application language or middleware compatibility issues. XML for Analysis expands access to business intelligence by increasing the flexibility for developers to incorporate analytical data within applications that reside remotely on the Internet, or even those that are hosted by another company. 
Users can achieve a new level of pervasive data analysis because they have access to data from any client ranging from a PDA to an Internet-enabled phone, interactive TV device, laptop computer or PC. XML for Analysis is a fully compatible advancement to the OLE DB for OLAP and OLE DB for Data Mining protocols. Thousands of applications developers, representing hundreds of third-party products currently using the existing OLE DB for OLAP and OLE DB for Data Mining standards, can quickly and easily upgrade to XML for Analysis. Over 100 developers and architects from more than 50 companies were involved in the review process of the XML for Analysis specification. These include Adaytum Inc., AlphaBlox Corp., Andersen Consulting, ANGOSS, Brio Technology Inc., Broadbase Software, Business Objects, Cognos Corp., Knosys Inc., Maximal Software Inc., PricewaterhouseCoopers, SAP Americas Inc., SAS Institute Inc., Seagate Software, SPSS Inc., Symmetry and Walker Interactive Systems Inc. Developer feedback was captured during a preview event held at the Microsoft campus in late October and via a newsgroup facility... 'Web-based services for e-business are definitely on the rise, and -- in terms of business intelligence -- this means accessing analytic databases hosted over the Internet,' said Philip Russom, research director of business intelligence at Hurwitz Group. 'Microsoft's XML for Analysis addresses this need with a protocol that's based on Internet standards and optimized for interaction with Web services. Unlike newer attempts at a standard protocol, XML for Analysis is based on OLE DB for OLAP, which has seen almost three years of industry review, IT implementation and support by third-party analytic software. And it's not just for OLAP; XML for Analysis also supports Web-based data mining'." For other details, see the full text of the announcement: "Microsoft Offers XML-Based Protocol for Extending Business Intelligence to Web Services. 
Industry Rallies Around Platform-Independent XML for Analysis Specification."
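The request side of the protocol can be sketched as a SOAP envelope. XML for Analysis defines two methods, Discover and Execute; the namespace URI below follows the published beta specification, and the layout is a simplified sketch rather than a complete, valid request:

```python
import xml.etree.ElementTree as ET

# Sketch of an XML for Analysis Execute request carried over SOAP.
# Assumes the beta-specification namespace; a full request also needs
# Properties (data source, catalog, result format) not shown here.
SOAP = "http://schemas.xmlsoap.org/soap/envelope/"
XMLA = "urn:schemas-microsoft-com:xml-analysis"

env = ET.Element(f"{{{SOAP}}}Envelope")
body = ET.SubElement(env, f"{{{SOAP}}}Body")
execute = ET.SubElement(body, f"{{{XMLA}}}Execute")
command = ET.SubElement(execute, f"{{{XMLA}}}Command")
ET.SubElement(command, f"{{{XMLA}}}Statement").text = (
    "SELECT Measures.MEMBERS ON COLUMNS FROM Sales"  # an MDX statement
)
request = ET.tostring(env, encoding="unicode")
```

Because the whole exchange is plain XML over HTTP, any client platform that can issue an HTTP POST can reach an analytical data source, which is the portability argument made above.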

  • [December 06, 2000]   Program Announced for XML DevCon Europe Conference Spring 2001.    Ken North has posted an announcement with the program listing for the XML DevCon Europe Conference in London. "The XML DevCon Europe Spring 2001 Conference runs from February 21-23, 2001 at the Novotel London West Hotel and Convention Centre. The conference has an enterprise XML focus with four tracks of classes that cover developer techniques, applied XML, middleware, servers, and databases. There is also a Lagniappe track with sessions about Web Services, wireless technologies, business-to-business (B2B) integration, Java programming, and other topics. Program highlights include a keynote address by W3C Fellow Henry Thompson, a keynote panel discussion, and hands-on workshops. Participants in the keynote panel discussion include Paul Brown, Martin Bryan, Simon Nicholson, David Orchard, Sebastian Rahtz, and Henry Thompson. The workshops are hands-on reviews of schemas and stylesheets, including those submitted by the public prior to the conference. The Stylesheets and Transformations Workshop will be presented by Bob DuCharme, G. Ken Holman, and Sebastian Rahtz. The XML Schema Workshop will be presented by Henry Thompson, Michael Rys, and Priscilla Walmsley. The technical program offers dozens of other sessions of interest to serious XML developers." For other XML conferences, see the events calendar.

  • [December 05, 2000]   XML Topic Maps (XTM) Specification Featured in the GCA's Topic Map Special Interest Day.    The new XTM (XML Topic Maps) Specification was featured in the December 5th, 2000 "Topic Map Special Interest Day" at XML 2000. XTM Co-chairs Michel Biezunski and Steven R. Newcomb presented the new specification, and members of the XTM Working Group provided a walk-through. XTM represents an XML grammar for interchanging Web-based Topic Maps, currently under development by the Topicmaps.Org Authoring Group. The working group has announced the public release of three principal XML specifications documents, along with other supporting resources. (1) XML Topic Maps (XTM) 1.0 Core Deliverables [XTM-Core] represents "portions of the XTM 1.0 Specification that are not subject to any future change that would invalidate any XTM document or XTM application that conforms to the syntactic and other constraints [...] are intended to impose in order to guarantee reliable interchange of Web-based topic map information in XML." This includes the XTM 1.0 DTD, the XTM 1.0 Published Subject Indicators (an XTM topic map), and the XTM 1.0 Conformance clause. "This specification provides a grammar for representing the structure of information resources used to define topics, and the associations (relationships) between topics. Names, resources, and relationships are said to be characteristics of abstract subjects, which are called topics. Topics have their characteristics within scopes: i.e., the limited contexts within which the names and resources are regarded as their name, resource, and relationship characteristics. One or more interrelated documents employing this grammar is called a 'topic map'." (2) XML Topic Maps (XTM) 1.0. TopicMaps.Org AG Review Specification [XTM] describes version 1.0 of XML Topic Maps, an XML grammar for interchanging Web-based topic maps. "This document is in the Authoring Group Review phase of development. 
Except for specific parts that appear in Core Deliverables, the contents of this document represent portions of the XTM 1.0 Specification that are subject to changes made in the course of an Authoring Group (AG) Review process." Annex B provides the XTM Conceptual Model; the diagrams are 'class diagrams' and 'object diagrams' that use the conventions of the Unified Modeling Language (UML). Annex C provides the XTM 1.0 Document Type Declaration; Annex D presents XTM 1.0 Published Subject Indicators; in addition to the XTM Published Subject Indicators topic map, XTM topic maps for natural language and country (e.g., for use in topic map internationalization) are provided. Annex E provides a link to information describing the transformation of topic map documents conforming to ISO 13250 into XTM 1.0 syntax: ISO 13250 to XTM 1.0 Document Transformation 1.0. (3) XML Topic Maps (XTM) Processing Model 1.0 [XTMP] describes version 1.0 of the XTM processing model. The document provides a description of the processing model bridging the gap between the XTM abstract conceptual model and the XTM interchange syntax. This document is in the Authoring Group Review phase of development. Except for specific parts that appear in Core Deliverables, "the contents of this document represent portions of the XTM 1.0 Specification that are subject to changes made in the course of an Authoring Group (AG) Review process." TopicMaps.Org is an independent consortium of parties developing the applicability of the Topic Map paradigm [ISO13250] to the World Wide Web by leveraging the XML family of specifications. For other references, see: (1) the XTM web site; (2) the XTM resource listing in Murray Altheim's posting "Final Release of XTM 1.0 Specifications"; (3) the announcement from Michel Biezunski; (4) the XTM repository home page; (5) "(XML) Topic Maps."
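A minimal topic map in the XTM interchange syntax might be built as below: two topics, each carrying a base name, plus one association relating them. Element names follow the XTM 1.0 DTD, but the sketch is hand-rolled and has not been validated against the published DTD:

```python
import xml.etree.ElementTree as ET

# Hand-built sketch of XTM 1.0 interchange syntax: topics with base names
# and an association whose members point back at the topics.
tm = ET.Element("topicMap", {"xmlns:xlink": "http://www.w3.org/1999/xlink"})
for ident, name in [("puccini", "Giacomo Puccini"), ("tosca", "Tosca")]:
    topic = ET.SubElement(tm, "topic", {"id": ident})
    bn = ET.SubElement(topic, "baseName")
    ET.SubElement(bn, "baseNameString").text = name

assoc = ET.SubElement(tm, "association")   # e.g., a "composed-by" relation
for ref in ("#puccini", "#tosca"):
    member = ET.SubElement(assoc, "member")
    ET.SubElement(member, "topicRef", {"xlink:href": ref})

xtm = ET.tostring(tm, encoding="unicode")
```

Names, occurrences, and associations are the topic characteristics the specification describes; scoping elements (omitted here) would limit the contexts in which those characteristics apply.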

  • [December 05, 2000]   SemanText for Topic Maps and Semantic Networks.    Eric Freese (ISOGEN International/DataChannel) posted an announcement for the version 0.71 release of SemanText, an open source Topic Map application which can be downloaded from the SemanText web site. SemanText is "a prototype application developed to demonstrate how the topic map standard (ISO/IEC 13250:2000) can be used to represent semantic networks. Semantic networks are a building block for artificial intelligence applications such as inference engines and expert systems. SemanText builds a knowledge base, in the form of a semantic network, from the topic map. New information can be added to the knowledge base and topic map automatically when the user defines rules which are used to infer new knowledge. All of this is done using constructs defined in the topic map standard. The benefit of this is that the new knowledge is then interchangeable with any other topic map enabled system. As more and more topic map enabled applications are developed, the ability to share, interpret, and create new knowledge will be greatly increased. SemanText is written in Python which means that it is platform independent. It uses many existing tools such as the wxPython GUI library, the PyXML libraries, and the tmproc topic map processor. Its user interface provides a simple, intuitive mechanism for working with the topic map information. SemanText uses the constructs defined in the topic map standard to model the knowledge processed and managed by the system. Topics and topic types are used to represent the nodes within the semantic network. The topics and topic types also form a class-instance hierarchy which allows SemanText to infer knowledge about specific topics based on their types. Associations are used to represent the links between the topics. Semantics are attached to the associations which allow the inference engine to build upon the internal knowledge base. 
Facets are used to store metadata about the topics within the knowledge base. Occurrences, which are not yet implemented, will provide background or source information about the associations and topics within the knowledge base. Scopes and themes are also not implemented currently, but will be used to limit the applicability of certain pieces of knowledge. The power of scoping will allow the inference engine to make inferences based on knowledge which is relevant to a certain set of conditions. SemanText's inference engine allows the user to define and use rules which are then applied to the knowledge base to develop new knowledge based on the relationships between the topics. This 'learning' mode can be switched on and off, in order to minimize impact on the system when the rules are being processed. When learning is activated, any new additions to the knowledge base are immediately examined to determine if they can be used to provide new knowledge to the knowledge base. In the near future, the rules themselves will be stored and managed using topic map constructs. This provides a method for interchanging the inferencing rules in a standard way..." For related resources, see: (1) "(XML) Topic Maps" and (2) "XML and 'The Semantic Web'."
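The kind of inference described can be sketched as a walk up a class-instance chain, collecting facts attached to each type along the way. This is illustrative Python showing the idea, not SemanText's actual data model or rule syntax:

```python
# Sketch of class-instance inference over topic-map-like data: a topic
# inherits knowledge attached to its types, transitively.
instance_of = {"fido": "dog", "dog": "mammal", "mammal": "animal"}
facts = {("animal", "is-alive"), ("mammal", "has-fur")}

def infer(topic):
    """Collect properties reachable via the class-instance hierarchy."""
    inferred = set()
    t = topic
    while t in instance_of:
        t = instance_of[t]
        inferred |= {prop for subj, prop in facts if subj == t}
    return inferred

props = infer("fido")
```

Because both the hierarchy and the inferred facts can themselves be expressed as topic map constructs, the resulting knowledge stays interchangeable with other topic-map-enabled systems, which is the benefit claimed above.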

  • [December 05, 2000]   dbXML Core Edition Released With Enhanced XPath Query Support.    A posting from Kimbro Staken (Chief Technology Officer, dbXML Group L.L.C) announces the release of dbXML Core Edition version 0.4. "The dbXML Group is proud to announce the release of version 0.4 of the dbXML Core Edition. The dbXML Core Edition is the world's first Open Source native XML database application server. It is a data management system designed specifically for collections of XML documents and is easily embedded into existing applications, highly configurable, and openly extensible. The source code has been released under the GNU Lesser General Public License and is available at the dbXML Group's Core Edition web page. This release updates the dbXML distribution, adding new features, bug fixes, and better documentation. New features added in this release include: (1) Completed Compressed DOM implementation. (2) Indexing system enhancements to allow explicit index creation for XPath queries. (3) Integration of the Cocoon XSL-T engine into the core server to enable internal transformation of XML data. (4) Enhanced XMLObject Architecture to provide robust server side embedded logic. (5) Nested Collection support for improved storage layout efficiency. The dbXML Core Edition is available for download. The dbXML Group focuses on next-generation web application development tools and services specifically in the realm of XML-related technologies." See related resources in "XML and Databases."
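The native-XML-database idea, a collection of XML documents queried with XPath, can be modelled in a few lines. dbXML's real API is Java; this Python sketch stands in for the concept using ElementTree's limited XPath support:

```python
import xml.etree.ElementTree as ET

# Conceptual model of a native XML database: documents are stored whole
# in a named collection and queried with XPath across the collection.
collection = {
    "inv1.xml": "<invoice><total>120</total></invoice>",
    "inv2.xml": "<invoice><total>80</total></invoice>",
}

def xpath_query(coll, path):
    """Run an XPath expression against every document in the collection."""
    hits = []
    for key, doc in coll.items():
        for node in ET.fromstring(doc).findall(path):
            hits.append((key, node.text))
    return hits

totals = xpath_query(collection, "./total")
```

The index-creation feature listed above exists precisely because scanning every document per query, as this sketch does, does not scale; an index maps XPath values back to the documents that contain them.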

  • [December 05, 2000]   Open Source NewsML Toolkit for Processing NewsML Packages.    A communiqué from David Megginson reports on the release of a news toolkit, announced on 2000-12-05 by Reuters and Wavo at the XML 2000 Conference. "Reuters and Wavo will announce version 0.1alpha of the NewsML Toolkit, an Open Source (LGPL) Java2-based library for processing NewsML packages. The library is available at the Reuters web site. NewsML is the new XML-based packaging and metadata format for news distribution, approved this fall by the International Press Telecommunications Council (IPTC). The IPTC's membership includes many of the world's major news providers, such as Reuters, the Associated Press, and Agence France Presse, as well as many other companies working in the news industry. The NewsML Toolkit was written for Reuters and Wavo by David Megginson of Megginson Technologies. The library is a joint project of Reuters PLC, a leading international information services provider, and Wavo, a leading news amalgamator. The NewsML Toolkit works with the Document Object Model (DOM) interface developed by the World Wide Web Consortium (W3C)." The toolkit "provides a simple interface that lets you perform the most important NewsML processing tasks without any knowledge of XML or the intricacies of NewsML markup. Java developers with no prior XML knowledge can use the NewsML Toolkit to extract many kinds of information from a multimedia NewsML package, including news lines, permissions, dates, whether a story is embargoed, and where to find the individual news objects, all using regular Java object methods. While the initial NewsML Toolkit release concentrates on presenting the most important information as simply as possible, the full XML markup is always available through the DOM whenever needed. 
The initial release of the NewsML Toolkit comes bundled with a simple demonstration application, the NewsML Explorer for browsing NewsML packages. The NewsML Explorer requires the Apache Xerces-Java XML library together with a Java2-compliant virtual machine." See also: (1) "NewsML and IPTC2000" and (2) the announcement from Reuters, "Reuters and WAVO Team Up to Launch Industry Tool for NewsML."
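The convenience the toolkit provides can be illustrated by pulling a headline out of a NewsML-like package. Element names below follow NewsML's NewsLines/HeadLine structure, but the snippet is a Python sketch of the idea; the real toolkit is a Java library layered over the DOM:

```python
import xml.etree.ElementTree as ET

# Extracting a news line from a (simplified) NewsML package without the
# caller needing to know the markup structure.
package = """<NewsML><NewsItem><NewsComponent>
  <NewsLines><HeadLine>Toolkit released</HeadLine></NewsLines>
</NewsComponent></NewsItem></NewsML>"""

def headline(newsml_text):
    """Return the first headline in the package, or None if absent."""
    root = ET.fromstring(newsml_text)
    node = root.find(".//HeadLine")
    return node.text if node is not None else None

title = headline(package)
```

Wrapping lookups like this behind ordinary method calls is what lets developers with no XML background consume NewsML, while the full tree remains available underneath for anyone who needs it.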

  • [December 05, 2000]   W3C Workshop on Digital Rights Management.    W3C has issued a Call For Participation in connection with a Workshop on Digital Rights Management, to be held on January 22-23, 2001 at INRIA Sophia Antipolis, France. Position papers should be submitted to the Workshop Chairs by December 22, 2000 and workshop registration is open through January 12, 2001. "The goal of the workshop is to discuss and address DRM issues across multiple sectors and communities to enable the Web to deliver trusted rights management services. The intent is to find and highlight expressions, processes and methods for DRM applications that could be subject of a W3C Activity Proposal. The participants will discuss and debate the merits of a W3C Activity to investigate and propose DRM specifications to add value and services to the Web community in an open and extensible manner. Likely participants would be drawn from the following communities: publishers, creators/authors, content/data service providers, online trading services, trusted third parties, and stakeholders from the library and other user communities. DRM raises a mixture of technical, social, business and legal issues. The Workshop will concentrate on addressing the technical issues with DRM, though we will also seek background on legal and social considerations that affect the technical requirements of DRM for the Web. The technical issues include: (1) DRM architectures, (2) Trading protocols, (3) Protection and security mechanisms, (4) DRM languages [semantics and encoding], (5) Interoperability. The discussion of DRM languages will include consideration of candidates for standard vocabularies for the expression of terms and conditions over digital assets. In DRM languages, authors, distributors and other intermediaries may express permissible usages for various digital asset manifestations, payment terms, tracking and security. 
We expect to discuss how DRM languages might reuse existing W3C standards (RDF, XML Signature, Micropayments, P3P)." The organization of the workshop is being governed by the W3C process; there will be a limit of 100 participants, who may be employees of a W3C Member organization or invited experts. Details concerning the Workshop Notes and Presentations, Chair Report and Summary, Position Papers, Participants, Mailing List, and FAQ will be posted. For other XML-related digital rights management efforts, see (1) Extensible Rights Markup Language (XrML); (2) Digital Property Rights Language (DPRL); (3) "Electronic Book Exchange (EBX) Working Group."; (4) Open Digital Rights Language (ODRL); (5) Open eBook Initiative [Digital Rights Management Strategy Working Group].
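The shape of such a rights expression can be sketched in miniature. The fragment below is purely illustrative: the element and attribute names are invented for this example and are not taken from XrML, ODRL, DPRL, or any other language named above; each of those defines its own vocabulary for permissions, payment terms, and constraints.

```python
# Illustrative rights-expression fragment of the kind a DRM language
# lets an author attach to a digital asset. All names below are
# invented for illustration, not drawn from any real DRM vocabulary.
import xml.etree.ElementTree as ET

rights = ET.Element("rights")
asset = ET.SubElement(rights, "asset")
asset.set("id", "urn:example:ebook:42")        # hypothetical identifier
permission = ET.SubElement(rights, "permission")
permission.set("type", "display")              # e.g., view but not print
constraint = ET.SubElement(permission, "constraint")
constraint.set("count", "5")                   # at most five viewings
payment = ET.SubElement(permission, "payment")
payment.set("amount", "1.00")
payment.set("currency", "USD")

print(ET.tostring(rights, encoding="unicode"))
```

Even this toy fragment shows why the workshop pairs "DRM languages" with interoperability: two systems can only honor each other's terms if they agree on such a vocabulary.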

  • [December 04, 2000]   The Apache Batik SVG Toolkit.    Company announcements have been released by Sun Microsystems and ILOG in connection with the Apache Batik SVG Toolkit Project. Apache's Batik, now available for download, is a "Java based toolkit for applications that want to use images in the Scalable Vector Graphics (SVG) format for various purposes, such as viewing, generation or manipulation." Batik contributors and supporters include CSIRO, ILOG, The Koala Team, Eastman Kodak Company, Sun Microsystems, Inc., and IBM. The project's ambition is "to give developers a set of core modules which can be used together or individually to support specific SVG solutions. Example modules are SVG parsers, SVG generators, and SVG DOM implementations. Another ambition for the Batik project is to make it highly extensible; for example, Batik allows the developer to handle custom SVG tags. Even though the goal of the project is to provide a set of core modules, one of the deliverables is a full-fledged SVG Viewer implementation which validates the various modules and their inter-operability. With Batik, you can manipulate SVG documents anywhere Java is available. You can also use the various Batik modules to generate, manipulate, transcode and search SVG images in your applications. Batik makes it easy for Java based applications to deal with SVG contents. For example, using Batik's SVG generator, a Java application can very easily export its graphics in the SVG format. Using Batik's SVG processor and viewer, an application can very easily integrate SVG viewing capabilities. Another possibility is to use Batik's modules to convert SVG to various formats, such as raster images (JPEG or PNG)... Batik provides building blocks that developers can assemble in various ways in their Java technology applications to generate, parse, view or convert SVG contents. 
For example, Batik contains a Swing component that can add SVG viewing capability to all Java technology applications. Batik can also be used to generate SVG on a client or on a server, and Batik can convert SVG content into other formats such as JPEG or PNG. Batik's goal is to make it easy for application developers to handle SVG content for various purposes, client-side or server-side. Batik contains several modules that can be used independently, such as an SVG parser, an object-oriented vector toolkit (GVT), and a set of extensions to the Java 2D API (such as sophisticated fill types and filter effects). Batik will likely be used in Cocoon for server side rasterization of SVG images. In addition, the Batik and the FOP teams have started to work together to define how the projects can leverage each other's work for SVG to PDF conversion." The online FAQ document provides additional detail for developers. See the announcements from Sun and ILOG for implementation news: (1) "Sun Microsystems Continues Strong Relationship With the Apache Software Foundation on Technology Development and Distribution. Batik Project, a New XML-based Graphical Toolkit, Is Newest Addition to Joint Technology Initiatives", and (2) "New ILOG JViews One Of First Products To Support SVG, Emerging XML Graphics Standard. JViews Developers Playing Role in Creation of New Open-Source Batik 1.0 Toolkit." See also "W3C Scalable Vector Graphics (SVG)."

  • [December 04, 2000]   Character Mapping Markup Language Published as Unicode Technical Report.    Mark Davis posted an announcement for the publication of the Unicode Character Mapping Markup Language (CharMapML) as a full Technical Report. Reference: Unicode Technical Report #22, by Mark Davis (with contributions from Kent Karlsson, Ken Borgendale, Bertrand Damiba, Mark Leisher, Tony Graham, Markus Scherer, Peter Constable, Martin Duerst, Martin Hoskin, and Ken Whistler). This Unicode technical report "specifies an XML format for the interchange of mapping data for character encodings. It provides a complete description for such mappings in terms of a defined mapping to and from Unicode, and a description of alias tables for the interchange of mapping table names." The Unicode Technical Committee "intends to continue development of this TR to also encompass complex mappings such as 2022 and glyph-based mappings." Background: "The ability to seamlessly handle multiple character encodings is crucial in today's world, where a server may need to handle many different client character encodings covering many different markets. No matter how characters are represented, servers need to be able to process them appropriately. Unicode provides a common model and representation of characters for all the languages of the world. Because of this, Unicode is being adopted by more and more systems as the internal storage and processing code. Rather than trying to maintain data in literally hundreds of different encodings, a program can translate the source data into Unicode on entry, process it as required, and translate it into a target character set on request. Even where Unicode is not used as a process code, it is often used as a pivot encoding. Data can be converted first to Unicode and then into the eventual target encoding. This requires only a hundred tables, rather than ten thousand. 
Whether or not Unicode is used, it is ever more vital to maintain the consistency of data across conversions between different character encodings. Because of the fluidity of data in a networked world, it is easy for it to be converted from, say, CP930 on a Windows platform, sent to a UNIX server as UTF-8, processed, and converted back to CP930 for representation on another client machine. This requires implementations to have identical mappings for a character encoding, no matter what platform they are working on. It also requires them to use the same name for the same encoding, and different names for different encodings. This is difficult to do unless there is a standard specification for the mappings so that it can be precisely determined what the encoding actually maps to. This technical report provides such a standard specification for the interchange of mapping data for character encodings. By using this specification, implementations can be assured of providing precisely the same mappings as other implementations on different platforms." The report references several related data files, including (1) DTD file for the Character Mapping Data format [CharacterMapping.dtd]; (2) DTD file for the Character Mapping Alias format [CharacterMappingAliases.dtd]; (3) Sample mapping file [SampleMappings.xml]; (4) Sample alias file [SampleAliases.xml]; (5) Sample alias file #2 [SampleAliases2.xml]. See "XML and Unicode."
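The pivot-encoding arithmetic quoted above can be made concrete. The sketch below (Python is used purely for illustration; it relies on Python's built-in codec tables, not on CharMapML data files) converts text between two encodings by way of Unicode and works out the table-count comparison the report makes.

```python
# Pivot conversion: source encoding -> Unicode -> target encoding.
# Python's codecs already work this way internally, which makes the
# idea easy to demonstrate without any CharMapML data files.

def convert(data: bytes, source: str, target: str) -> bytes:
    """Convert bytes between two encodings using Unicode as the pivot."""
    return data.decode(source).encode(target)

# "résumé" encoded in Windows code page 1252 ...
cp1252_bytes = "résumé".encode("cp1252")
# ... converted to UTF-8 via the Unicode pivot.
utf8_bytes = convert(cp1252_bytes, "cp1252", "utf-8")
assert utf8_bytes.decode("utf-8") == "résumé"

# The table-count argument from the report: with N encodings, direct
# conversion needs a table per ordered pair of encodings, while
# pivoting through Unicode needs only one table per encoding.
n = 100
direct_tables = n * (n - 1)   # 9900 -- "ten thousand"
pivot_tables = n              # 100  -- "a hundred"
print(direct_tables, pivot_tables)
```

The same decode-then-encode round trip is exactly what the report means by using Unicode as a "pivot encoding": only the per-encoding mappings to and from Unicode need to be specified.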

  • [December 04, 2000]   Universal Learning Format (ULF) for eLearning Data Interchange.    A communiqué from Daniel Lipkin (Chief Architect, Saba Software) reports on the development of the Universal Learning Format (ULF) and its RDF mapping to the IEEE Learning Objects Metadata (LOM) format. "Universal Learning Format (ULF) is a complete suite of XML and RDF-based data formats for describing and exchanging eLearning data. The formats build on and are compatible with a wide variety of industry standards for exchanging learning data, including ADL, IEEE, IMS, Dublin Core, and vCard. ULF's compatibility with other standards ensures that data described in ULF is universally portable across all systems and taxonomies that are designed to support virtually any recognized industry standard. It also means that the ULF will shadow new developments in its constituent standards, thus providing a direct path for future extensibility. Universal Learning Format comprises Learning Catalogs and Metadata, Online Classes and Assessments, Learner Profiles, Competency Libraries and Certification Libraries. ULF includes a Catalog Format, which is an RDF mapping of IEEE LOM, augmented with additional catalog and eCommerce information. For more information, including examples and a tutorial, please refer to the ULF web site at" Details are provided in the principal specification document: "A Comprehensive Architecture for Learning. Universal Learning Format, Version 1.0." Note that the IEEE Learning Object Metadata Working Group [IEEE P1484.12], operating under the IEEE Learning Technology Standards Committee (LTSC), is developing a standard to "specify the syntax and semantics of Learning Object Metadata, defined as the attributes required to fully/adequately describe a Learning Object. Learning Objects are defined here as any entity, digital or non-digital, which can be used, re-used or referenced during technology supported learning. 
Examples of technology supported learning include computer-based training systems, interactive learning environments, intelligent computer-aided instruction systems, distance learning systems, and collaborative learning environments. Examples of Learning Objects include multimedia content, instructional content, learning objectives, instructional software and software tools, and persons, organizations, or events referenced during technology supported learning. The Learning Object Metadata standards will focus on the minimal set of attributes needed to allow these Learning Objects to be managed, located, and evaluated. The standards will accommodate the ability for locally extending the basic fields and entity types, and the fields can have a status of obligatory (must be present) or optional (may be absent). Relevant attributes of Learning Objects to be described include type of object, author, owner, terms of distribution, and format..." See related references in: (1) "Universal Learning Format Technical Specification"; (2) "Educom Instructional Management Systems Project (IMS) Metadata Specification" and (3) "IEEE LTSC XML Ad Hoc Group."

  • [December 04, 2000]   Overview of the W3C Speech Interface Framework and Voice Browser Activity.    The W3C Voice Browser Working Group was chartered by the World Wide Web Consortium (W3C) within the User Interface Activity in May 1999; the working group is now "designing markup languages for dialog, speech recognition grammar, speech synthesis, natural language semantics, and a collection of reusable dialog components." The new W3C working draft document Introduction and Overview of W3C Speech Interface Framework supplies an overview of this activity. Reference: W3C Working Draft 4-December-2000, edited by Jim A. Larson (Intel Architecture Labs). Comments on the WD should be sent to the publicly-archived mailing list. Document abstract: "The World Wide Web Consortium's Voice Browser Working Group is defining several markup languages for applications supporting speech input and output. These markup languages will enable speech applications across a range of hardware and software platforms. Specifically, the Working Group is designing markup languages for dialog, speech recognition grammar, speech synthesis, natural language semantics, and a collection of reusable dialog components. These markup languages make up the W3C Speech Interface Framework. In addition to voice browsers, these languages can also support a wide range of applications including information storage and retrieval, robot command and control, medical transcription, and newsreader applications. The speech community is invited to review and comment on the working draft requirement and specification documents." For other information, see (1) the W3C Voice Browser Activity statement and (2) "VoiceXML Forum."

  • [December 04, 2000]   W3C XML Query Working Group Publishes XML Query Algebra Working Draft.    The first W3C public working draft for The XML Query Algebra has been released for review. Reference: W3C Working Draft 04-December-2000, latest draft, edited by Peter Fankhauser (GMD-IPSI), Mary Fernández (AT&T Labs - Research), Ashok Malhotra (IBM), Michael Rys (Microsoft), Jérôme Siméon (Bell Labs, Lucent Technologies), and Philip Wadler (Avaya Communication). The document "introduces the XML Query Algebra as a formal basis for an XML query language." The development work "builds on long standing traditions in the database community. In particular, we have been inspired by systems such as SQL, OQL, and nested relational algebra (NRA). We have also been inspired by systems such as Quilt, UnQL, XDuce, XML-QL, XPath, XQL, and YaTL. We give citations for all these systems below. In the database world, it is common to translate a query language into an algebra; this happens in SQL, OQL, and NRA, among others. The purpose of the algebra is twofold. First, the algebra is used to give a semantics for the query language, so the operations of the algebra should be well-defined. Second, the algebra is used to support query optimization, so the algebra should possess a rich set of laws. Our algebra is powerful enough to capture the semantics of many XML query languages, and the laws we give include analogues of most of the laws of relational algebra. It is also common for a query language to exploit schemas or types; this happens in SQL, OQL, and NRA, among others. The purpose of types is twofold. Types can be used to detect certain kinds of errors at compile time and to support query optimization. DTDs and XML Schema can be thought of as providing something like types for XML. The XML Query algebra uses a simple type system that captures the essence of XML Schema Structures. The type system is close to that used in XDuce. 
On this basis, the XML Query algebra is statically typed. This makes it possible to determine and check the output type of a query on documents conforming to an input type at compile time rather than at runtime. Compare this to the situation with an untyped or dynamically typed query language, where each individual output has to be validated against a schema at runtime, and there is no guarantee that this check will always succeed..." A tutorial section in the WD, 'The Algebra by Example,' introduces the main features of the algebra, using familiar examples based on accessing a database of books. In Appendix A 'The XML Query Data Model', the authors present a formal mapping relating the algebra to the W3C XML Query Data Model. See related references in "XML and Query Languages."
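The WD's examples are built around iteration expressions of the form `for b in bib0/book do b/title`, projecting over a small bibliography. As a loose analogy only (Python comprehensions over a toy data set, not the algebra's actual syntax or its static type system), such queries look like:

```python
# A toy "database of books", echoing the bibliography examples in the
# working draft ("Data on the Web" appears there; the second entry is
# invented for this sketch). The algebra's for-expression
#     for b in bib0/book do b/title
# projects the title of every book; a Python comprehension is a rough
# dynamically-typed analogue. The real algebra infers the output type
# ("a sequence of titles") at compile time instead of checking results
# at runtime.

bib0 = {
    "book": [
        {"title": "Data on the Web", "year": 1999},
        {"title": "An Invented Example Book", "year": 2001},
    ]
}

titles = [b["title"] for b in bib0["book"]]
recent = [b["title"] for b in bib0["book"] if b["year"] >= 2000]

print(titles)   # every title
print(recent)   # titles of books from 2000 on
```

The second comprehension corresponds to adding a selection condition to the iteration, the kind of expression whose output type the algebra can derive statically from the input type.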

  • [December 04, 2000]   Freeware OilEd Ontology Editor.    Ian Horrocks (University of Manchester) posted an announcement for an 'OilEd Ontology Editor' supporting the Ontology Interchange Language (OIL). "OilEd is a simple ontology editor developed by Sean Bechhofer at the University of Manchester. OilEd allows the user to: (1) build ontologies; (2) use the FaCT reasoner to check the consistency of ontologies and add implicit subClassOf relations; (3) export ontologies in a number of formats including both OIL-RDF and DAML-RDF. For further details and information about OIL, consult the OIL Home Page. The intention behind OilEd is to provide a simple, freeware editor that demonstrates the use of, and stimulates interest in, OIL. OilEd is not intended as a full ontology development environment - it will not actively support the development of large-scale ontologies, the migration and integration of ontologies, versioning, argumentation and many other activities that are involved in ontology construction. Rather, it is the 'NotePad' of ontology editors, offering just enough functionality to allow users to build ontologies and to demonstrate how the FaCT reasoner can be used to check and enrich ontologies. OilEd is available as freeware, but we ask that you provide us with some details before downloading. This will allow us to keep track of who is using it and why. OilEd will not be fully supported or maintained although we will try and fix major problems or bugs. You can download the installer for OilEd. The development of OilEd was supported by the University of Manchester, the Free University of Amsterdam and Interprice GmbH. OilEd uses Robert Kosara's Bonfire parser generator and the JGL collection libraries from Objectspace." 
The Ontology Inference Layer (OIL) is "a proposal for a web-based representation and inference layer for ontologies, which combines the widely used modelling primitives from frame-based languages with the formal semantics and reasoning services provided by description logics. It is compatible with RDF Schema (RDFS), and includes a precise semantics for describing term meanings -- thus also for describing implied information. Preliminary OIL, also known as OIL 1.0, is described in the technical report, and the syntax is described in a DTD and an XML Schema." See related references in (1) "Ontology Interchange Language (OIL)", and (2) "DARPA Agent Mark Up Language (DAML)."

  • [December 01, 2000]   NIST and US Federal CIO Council XML Working Group Prepare '' Portal.    A recent article in the Federal Computer Week magazine announces the creation of the XML.ORG portal, being developed jointly by NIST and the US Federal CIO Council XML Working Group. The US Chief Information Officers (CIO) Council provides "recommendations for information technology management policies, procedures, and standards; identifying opportunities to share information resources; and assessing and addressing the needs of the Federal Government's IT workforce." In July 2000, the CIO Council's Enterprise Interoperability and Emerging Information Technology (EIEIT) Committee chartered an XML Working Group (XMLWG) to promote more efficient data reuse among government agencies. The CIO Council's Extensible Markup Language Working Group is now "putting the finishing touches on -- a portal that will serve as a resource and demonstration site for XML technology. A prototype of the portal exists now, and the group hopes to take the site live in January, 2001. 'We hope that will be the focal point for all government agencies to go to learn XML and to experience XML, and share XML experiences,' said Marion Royal, an agency expert at the General Services Administration and co-chairwoman of the XML Working Group. The XML specification makes it simpler for applications to exchange information by defining a common method of identifying and packaging data. However, interoperability is easier if agencies can agree on common definitions. The site eventually may host an online registry that contains XML definitions used by agencies." The new portal will help the XML Working Group achieve its goal of enabling workers "to capitalize on the potential of XML more efficiently and effectively on a Government-wide basis. 
The purpose of the XMLWG is to accelerate, facilitate and catalyze the effective and appropriate implementation of XML technology in the information systems and planning of the Federal Government. Wherever possible, the Working Group seeks to achieve the highest impact from resources by building on initiatives and projects that are underway in the Federal Government, or elsewhere in the public or private sectors. The XMLWG will not take on continuing operational or policy responsibilities. The Working Group focuses on the highest-payoff opportunities for application of XML technologies, which now appear to be the following: (1) XML offers a non-proprietary and inexpensive way to achieve a high degree of interoperability among heterogeneous systems; XML is especially well adapted to a networked environment where there is a requirement to work with a rapidly changing set of partner and customer systems with unknown and diverse architectures. (2) XML offers a non-proprietary and inexpensive way to promote reuse of data by providing a way to locate it (semantic search), and by providing a standard way to transform and move it between applications." Based upon XML's potential to "alleviate many of the interoperability problems associated with the sharing of data within and across organizations," the CIO Council XML Working Group has been tasked with four activities: (1) Identify pertinent standards and best practices; (2) Establish partnerships with industry and public interest groups; (3) Establish partnerships with governmental communities of interest; (4) Conduct education and outreach." Some of the projected activities of the XML Working Group are outlined in the "Recommendations of the ad-hoc XML Working Group to the CIO Council's EIEIT Committee" (May 2000) and in the Federal CIO Council XML Working Group Meeting Minutes from September/October 2000. For other description and references, see "US Federal CIO Council XML Working Group."

  • [December 01, 2000]   IBM 'XDRtoXSD' Tool Translates XDR Schemas to W3C XML Schemas.    The IBM alphaWorks XML Application Development team has released a new tool for converting XDR Schemas to W3C XML Schemas. XDRtoXSD is a "Java program which takes as input an XML Schema written in the XDR [XML-Data Reduced] schema language (used by Microsoft Internet Explorer and BizTalk) and translates it into an equivalent schema expressed in the W3C XML Schema language, the emerging standard for representing XML vocabularies, grammars and constraints. This tool provides some coverage even for XDR schemas which are incorrect, and detects a few cases in which the W3C XML Schema language cannot exactly express the semantics of the XDR schema, in which case the closest possible schema is produced along with an explanation. Included in the tool is an importable W3C XML Schema which defines simpleTypes, used to map the XDR datatypes to the corresponding W3C datatypes. The tool can also be run in batch mode to translate multiple XDR schemas in a single run. For example, (1) you can list more than one filename on the command line, (2) you can use a filespec, such as *.xdr or PO*.*; (3) you can tell the tool to look for names matching a certain filespec in a directory and all nested [and indirectly nested] directories, using the -r command line option." XDRtoXSD requires Java Runtime Environment version 1.2 or later. For related resources, see "XML Schemas."
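The flavor of the datatype side of such a translation can be shown with a small correspondence table. The sketch below is not IBM's tool; it is a hypothetical mapping of a few well-known XDR `dt:type` names to their closest W3C XML Schema built-in types, of the kind the importable simpleType schema mentioned above presumably encodes.

```python
# Hypothetical sketch of the datatype side of an XDR -> XSD translation.
# The dt:type names on the left come from Microsoft's XML-Data Reduced
# vocabulary; the right-hand side gives a close W3C XML Schema built-in
# type. This table is illustrative, not exhaustive, and not taken from
# the IBM tool.

XDR_TO_XSD = {
    "string":   "xsd:string",
    "boolean":  "xsd:boolean",
    "i2":       "xsd:short",
    "i4":       "xsd:int",
    "ui2":      "xsd:unsignedShort",
    "ui4":      "xsd:unsignedInt",
    "r8":       "xsd:double",
    "date":     "xsd:date",
    "dateTime": "xsd:dateTime",
    "uri":      "xsd:anyURI",
}

def translate_type(xdr_type: str) -> str:
    """Map an XDR dt:type to a W3C Schema type, falling back to string."""
    return XDR_TO_XSD.get(xdr_type, "xsd:string")

print(translate_type("i4"))       # xsd:int
print(translate_type("unknown"))  # fallback: xsd:string
```

A fallback of this kind mirrors the tool's stated behavior of producing "the closest possible schema" when an exact translation is not available.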

  • [December 01, 2000]   AND Global Address XML Definition.    According to a recent company announcement, AND Data Solutions "has made its global address XML definition available to the OASIS Customer Information Quality (CIQ) Technical Committee. With this global address definition, AND wishes to contribute to the development of open world standards for address management. AND Data Solutions has developed the XML data structure to standardize on the worldwide presentation of addresses. The global address structure is now available for 36 countries, which will grow to 85 individual countries on short notice. Customer data, with address information as a key component, forms the foundation to build effective customer relationships. OASIS has therefore recently set up the CIQ Technical Committee, dedicated to work on cross industry XML standards for customer profile management and exchange. The committee's work will help improve information system interoperability, and enable consistent communication and handling of customer information by trading partners... driven by the use of the Internet, the current enthusiasm for customer relationship management and the proliferation in the number of businesses using call centers, the amount of information generated shows no sign of slowing down. The research shows that three quarters of respondents expect their address databases to grow in terms of records in the next year, some expecting increases of up to 300 per cent. Yet this data -- supplied through multi-channel sources such as the Internet, telephone, WAP phones and even digital and interactive TV -- can be unreliable. Many companies are capturing this information from e-business activities where customers are responsible for inputting their own details. This research has shown an urgent need for good quality address products, for which AND can provide online address verification and completion services based on its worldwide address data. 
Address Data consists of datasets including postcodes, cities, streets and locations covering more than 80 countries across the world. The growing AND Global Address Data customer base includes Sony, Philips, Kodak, Compaq, Xerox, Gateway 2000 and Client Logic." Note: The objective of the OASIS Technical Committee (TC) on Customer Information Quality (CIQ) "is to deliver XML standards for customer profile/information management to the industry. Customer data forms the foundation to build effective customer relationships. To be effective, customer data must meet the highest possible standards of both quality and integrity. Therefore, customer information/profile management is critical..." For references, see (1) the full text of the announcement "AND Makes Global Address XML Definition Available to OASIS", and (2) "AND Global Address XML Definition."

  • [November 30, 2000]   XML Key Management Specification (XKMS).    VeriSign, Microsoft, and webMethods have "created the open XML Key Management Specification (XKMS) with the goal of efficient integration of digital signatures and encryption -- to simplify the integration of standard methods for securing Internet transactions (PKI and digital certificates) with XML applications." The version 1.0 document XML Key Management Specification (XKMS) "specifies protocols for distributing and registering public keys, suitable for use in conjunction with the proposed standard for XML Signature developed by the World Wide Web Consortium (W3C) and the Internet Engineering Task Force (IETF) and an anticipated companion standard for XML encryption. The XML Key Management Specification (XKMS) comprises two parts -- the XML Key Information Service Specification (X-KISS) and the XML Key Registration Service Specification (X-KRSS). The X-KISS specification defines a protocol for a Trust service that resolves public key information contained in XML-SIG elements. The X-KISS protocol allows a client of such a service to delegate part or all of the tasks required to process <ds:KeyInfo> elements. A key objective of the protocol design is to minimize the complexity of application implementations by allowing them to become clients and thereby be shielded from the complexity and syntax of the underlying PKI used to establish trust relationships. These may be based upon a different specification such as X.509/PKIX, SPKI or PGP. The X-KRSS specification defines a protocol for a web service that accepts registration of public key information. Once registered, the public key may be used in conjunction with other web services including X-KISS. Both protocols are defined in terms of structures expressed in the XML Schema Language, protocols employing the Simple Object Access Protocol (SOAP) v1.1 and relationships among messages defined by the Web Services Description Language v1.0 (WSDL). 
Other compatible expressions are possible." The public announcement for XKMS reads, in part: "VeriSign Inc., Microsoft Corp. and webMethods Inc. have introduced a breakthrough XML-based framework, the XML key management specification (XKMS), to enable a broad range of software developers to seamlessly integrate digital signatures and data encryption into e-commerce applications. To accelerate the development of applications incorporating these advanced technologies, the XKMS specification -- jointly designed and prototyped by VeriSign, Microsoft and webMethods with industry support from other technology leaders -- was made publicly available today and will be submitted to the appropriate Web standards bodies for consideration as an open Internet standard. In addition, XKMS will be built into the Microsoft.NET architecture to ensure broad and rapid adoption of this framework in both B2B and B2C environments. The new XKMS specification revolutionizes the development of trusted B2B and B2C applications by introducing an open framework that enables virtually any developer to easily access applications from any public key infrastructure products and services. With the XKMS specification, developers are able to integrate advanced technologies such as digital signature handling and encryption into their web-based applications. The XKMS specification promotes the interoperability of advanced technologies because it is based on XML, a rapidly growing standard for application development. Currently, developers choosing to enable applications to handle digital keys for authentication and digital signatures are often required to purchase and integrate specialized toolkits from a Public Key Infrastructure (PKI) software vendor which only interoperate with that vendor's PKI offerings. Functions such as digital certificate processing, revocation status checking, and certification path location and validation are all built into the application via the toolkit. 
With the new XKMS specification, those functions are no longer built into the application but instead reside in servers that can be accessed via easily programmed XML transactions. The XKMS architecture, along with the recently drafted XML digital signature standards and the emerging XML encryption standard, provides a complete framework for ensuring broad interoperability across applications developed by enterprises, B2B exchanges and other Internet communities of interest. XKMS is also compatible with the emerging standards for Web Services Description Language (WSDL) and Simple Object Access Protocol (SOAP)..." For other description and references, see "XML Key Management Specification (XKMS)."
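At the wire level, an X-KISS client's job reduces to building a small XML request around a <ds:KeyInfo> element and posting it to the trust service over SOAP. The fragment below is only a schematic illustration of that shape: the XML Signature namespace URI is real, but the request element names are placeholders, not the ones defined in the XKMS 1.0 document.

```python
# Schematic X-KISS-style locate request built with the standard library.
# NOTE: the request element names here ("Locate", "Query", "Respond")
# are placeholders chosen for illustration; consult the XKMS 1.0
# specification for the actual protocol elements.
import xml.etree.ElementTree as ET

DSIG = "http://www.w3.org/2000/09/xmldsig#"   # XML Signature namespace

locate = ET.Element("Locate")                  # placeholder request element
query = ET.SubElement(locate, "Query")
key_info = ET.SubElement(query, f"{{{DSIG}}}KeyInfo")
ET.SubElement(key_info, f"{{{DSIG}}}KeyName").text = "alice@example.com"
# Tell the trust service which key data to return (again, illustrative):
ET.SubElement(locate, "Respond").text = "KeyValue"

request = ET.tostring(locate, encoding="unicode")
print(request)
```

The point of the sketch is the division of labor: the client only assembles and sends this small document, while certificate processing, revocation checking, and path validation happen on the server side, as described above.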

  • [November 30, 2000]   VeriSign's Extensible Provisioning Protocol (EPP).    Extensible Provisioning Protocol (EPP) is one of four principal components in the VeriSign XML Trust Services suite recently announced in connection with the XML Key Management Specification (XKMS). Overview: "To enable Internet registrars that sell online identity services to access central domain name registry data more efficiently, VeriSign has developed the EPP (Extensible Provisioning Protocol) to support an XML-based domain name management utility. EPP enables VeriSign Global Registry Services' accredited registrar partners to sell domain names, telephone numbers, and other identity assets via EPP, which permits greater information sharing and flexibility as new identification technologies gain acceptance... The Extensible Provisioning Protocol (EPP) is a connection-oriented, application layer client-server protocol for the provisioning and management of objects stored in a shared central repository. Specified in the schema notation of the Extensible Markup Language (XML), the protocol defines generic object management operations and an extensible framework that maps protocol operations to objects. A complete set of protocol specifications was recently published with the Internet Engineering Task Force (IETF) as Internet-Draft documents. XML provides a rich set of features that allows communicating peers to create data tags that have semantic meaning in the operating environment shared by the peers. While in general this is a very desirable feature, it introduces an element of instability for protocol designers. Once a protocol has been formally specified, adding new tags to extend the protocol means changes to published specifications. Over time this can lead to a lack of interoperable implementations and specification confusion. EPP takes a different approach. 
The base protocol itself is very simple, defining a set of object management features that are not explicitly tied to specific objects. The base protocol is intended to be stable and unchanging to ease development of interoperable implementations. EPP operations are mapped to objects using XML namespaces that provide 'hooks' to loosely coupled object specifications so that definitions for management of new objects can be done outside the base protocol. For example, the protocol can be extended to support provisioning of purchase orders by defining a new specification that defines how purchase order objects are managed. EPP provides features for session management, object query, and object management. Sessions are established between a client and a server, and once a session is established the client and server exchange commands and responses. Security services are available at both the application and transport layers. The EPP protocol suite currently contains a base protocol specification and mappings for three different objects: Internet domain names, Internet host names, and 'contact' identifiers associated with humans and organizations. Specifications for other objects may be developed as needs are identified. EPP is connection oriented, but transport independent. A specification for transport using the Transmission Control Protocol (TCP) exists; specifications for transport using other protocols or applications frameworks may be produced in the future." There are five published components in the EPP Specification: (1) Base Specification, (2) Domain Name Mapping, (3) Host Mapping, (4) Contact Mapping, (5) Transport over TCP. For other references, see "Extensible Provisioning Protocol (EPP)."
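The flavor of the protocol can be seen in a small command sketch. This is an illustrative reconstruction, not an excerpt from the Internet-Drafts; the namespace URIs shown follow the later IETF registrations and may differ from the 2000 draft documents.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
  <command>
    <!-- object query: is this domain name available in the repository? -->
    <check>
      <!-- the domain mapping lives in its own namespace,
           outside the base protocol -->
      <domain:check xmlns:domain="urn:ietf:params:xml:ns:domain-1.0">
        <domain:name>example.com</domain:name>
      </domain:check>
    </check>
    <!-- client transaction identifier, echoed back in the server response -->
    <clTRID>ABC-12345</clTRID>
  </command>
</epp>
```

The namespace on the domain:check payload is the 'hook' described above: a server that understands a different object mapping (host, contact, or a future purchase-order mapping) processes the same command wrapper with a different payload.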

  • [November 30, 2000]   XMLPay Specification.    VeriSign, Ariba, and other vendors have created the XMLPay specification "for sending payment requests and responses through financial networks; [the specification is designed] to help Internet merchants process a broad range of Web-based payment types, including credit and debit card, purchase card, and Automated Clearinghouse (ACH) payments, for B2B and B2C e-commerce. The XMLPay Specification consists of three parts. (1) 'XMLPay: Core' is the heart of XMLPay. It defines the basic XML datatypes needed to unify B2C and B2B payment processing applications. (2) 'XMLPay: Registration' captures automation of payment-related enrollment functions, such as merchant registration and configuration. (3) 'XMLPay: Reports' specifies mechanisms for automating merchant transaction reporting functions in the payments back office. The first of these specifications, XMLPay Core, is available now [2000-11-30]. Teams working on XMLPay are planning to extend the functionality to registration and reporting. The driving goal is to provide a public specification for Web payment interoperability, from merchant service sign-up, to payment execution, to reporting functions after payments have taken place." From the text of the specification: "This document, the XMLPay 1.0 Core Specification, defines an XML syntax for payment transaction requests, responses, and receipts in a payment processing network. The typical user of XMLPay is an Internet merchant or merchant aggregator who wants to dispatch credit card, corporate purchase card, Automated Clearing House (ACH), or other payment requests to a financial processing network. Using the data type definitions specified by XMLPay, a user creates a client payment request and dispatches it -- using a mechanism left unspecified by XMLPay -- to an associated XMLPay-compliant server component. Responses are also formatted in XML and convey the results of the payment requests to the client. 
XMLPay includes support for digitally-signed XML objects. Digital signatures are used both for the purpose of authenticating requests and responses and as a foundation for a higher-level digital receipt architecture based on an X.509 Public Key Infrastructure. XMLPay uses the digital signature format being specified by the joint IETF/W3C XML Digital Signature Working Group." Appendix A, "XMLPay Schemas," provides standard W3C schemas for XMLPay and XMLPay Types; Appendix B, "XMLPay DTD," presents the Document Type Definition for XMLPay... XMLPay supports payment processing using the following payment instruments: (1) Retail credit and debit cards; (2) Corporate purchase cards: Levels 1, 2 and 3; (3) Internet checks; (4) ACH. Typical XMLPay operations include: (1) Funds authorization and capture; (2) Sales and repeat sales; (3) Voiding of transactions. XMLPay is intended for use in both Business-to-Consumer (B2C) and Business-to-Business (B2B) payment processing applications. In a B2C transaction, the Buyer presents a payment instrument (e.g., credit card number) to a Seller in order to move money from the Buyer to the Seller (or vice-versa in the case of a credit or refund). Use of XMLPay comes into play when the Seller needs to forward the Buyer's payment information on to a Payment Processor. The Seller formats an XMLPayRequest and submits it either directly to an XMLPay-compliant payment processor or indirectly via an XMLPay-compliant Payment Gateway. Responses have type XMLPayResponse. The Buyer-to-Seller and Payment Gateway-to-Payment Processor channels are typically left unaffected by use of XMLPay. For example, XMLPay is typically not used in direct communications between the buyer and the seller. Instead, conventional HTML form submission or other Internet communication methods are typically used. 
Similarly, because Payment Processors often differ considerably in the formats they specify for payment requests, it is often desired to localize XMLPay server logic at the Payment Gateway, leaving the legacy connections between gateways and processors unchanged. When used in support of B2B transactions, the Seller does not typically initiate XMLPay requests. Instead, an aggregator or trading exchange uses XMLPay to communicate business-focused purchasing information (such as level 3 corporate purchase card data) to a payment gateway. In this way, the trading exchange links payment execution to other XML-based communications between Buyers and Sellers such as Advance Shipping Notice delivery, Purchase Order communication, or other B2B communication functions..." For references, see "XMLPay Specification."
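A request/response round trip might look roughly like the following. Only the XMLPayRequest and XMLPayResponse type names appear in the summary above; every child element in this sketch is a hypothetical placeholder, not the actual XMLPay 1.0 vocabulary.

```xml
<!-- Hypothetical XMLPay-style sale request; all element names below
     XMLPayRequest are illustrative placeholders only. -->
<XMLPayRequest>
  <Sale>
    <Card>
      <!-- a commonly used test card number, not a real account -->
      <CardNumber>4111111111111111</CardNumber>
      <ExpirationDate>200112</ExpirationDate>
    </Card>
    <Amount Currency="USD">42.00</Amount>
  </Sale>
</XMLPayRequest>
```

The Seller would submit such a document to an XMLPay-compliant Payment Gateway and receive back a document of type XMLPayResponse carrying an authorization result or error status.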

  • [November 29, 2000]   Clinical Data Interchange Standards Consortium Publishes XML Specification for Drug Development and Regulatory Review Processes.    A recent announcement from the Clinical Data Interchange Standards Consortium (CDISC) describes the completed development of FDA safety domain metadata models and an XML DTD for clinical data interchange. The XML DTD and associated documentation from the CDISC Operational Data Modeling Group are available for download. The announcement says, in part: "The Clinical Data Interchange Standards Consortium has achieved two significant milestones towards its goal of standard data models to streamline drug development and regulatory review processes. CDISC participants have completed metadata models for the 12 safety domains listed in the FDA Guidance regarding Electronic Submissions and have produced a revised XML-based data model to support data acquisition and archive. The Submissions Data Standards team has been working since early 1999 to define a metadata model that is designed to: (1) provide regulatory submission reviewers with clear descriptions of the usage, structure, contents and attributes of all submitted datasets and variables; (2) allow reviewers to replicate analyses, tables, graphs and listings with minimal or no transformations; (3) enable reviewers to easily view and subset the data used to generate any analysis, table, graph or listing without complex programming. This team, under the leadership of Wayne Kubick of Lincoln Technologies, and Dave Christiansen of Genentech, presented their metadata models to a group of representatives at the FDA on October 10 and discussed future cooperative efforts with Agency reviewers. The CDISC Operational Data Modeling (ODM) Working Group released their Version 1.0 model for data acquisition, interchange and archive. 
A small, interdisciplinary team was formed in September of 1999 to examine two different XML-based data interchange models (which had been separately put forward by PHT/Lincoln Technologies and by Phase Forward) -- specifically to assess the feasibility of developing an integrated, CDISC standard data and metadata model to support data acquisition. The resulting CDISC model is based on the Extensible Markup Language (XML), which is gaining wide acceptance as a general data interchange framework, and has been determined to be an effective approach to clinical data interchange. The goal of the CDISC XML Document Type Definition (DTD) Version 1.0 is to make available a first release of the definition of this CDISC model, in order to support sponsors, vendors and CROs in the design of systems and processes around a standard interchange format. 'The release of the CDISC Version 1.0 DTD provides the industry with a foundation of standards that will support unprecedented improvements in the quality and efficiency of future data interchange,' said Ken Harter, senior systems analyst, Amgen Inc. Both CDISC models can be reviewed on the CDISC Web site. Comments are requested by January 31, 2001, and should be posted using the Web site's Discussions option. CDISC is a non-profit organization with a mission to lead the development of standard, vendor-neutral, platform-independent data models that improve process efficiency while supporting the scientific nature of clinical research in the biopharmaceutical and healthcare industries." For additional description and references, see "Clinical Data Interchange Standards Consortium."

  • [November 29, 2000]   Submissions to the OMG Gene Expression RFP.    Several submissions have now been published in response to the Object Management Group's Gene Expression RFP, originally issued in March 2000 (LSR RFP-7/lifesci/00-03-09). The RFP overview: "Life sciences research has experienced rapid growth in the number of gene expression analysis techniques and is faced with explosive growth in the amount of data produced by these experiments. The creation and adoption of standardized programmatic interfaces is a crucial step in support of automated data exchange and interoperability among different gene expression data systems. This RFP solicits proposals which define interfaces and services in support of array based gene expression data collection, management, retrieval, and analysis." The RFP also requests definition of one or more XMI compliant Document Type Definitions (DTDs) "intended for use as self-describing data structures for encapsulation of hybridization, expression, and cluster data." In response to this RFP, relevant documents have been submitted by the European Bioinformatics Institute, Rosetta Inpharmatics, and NetGenics. (1) The EBI Initial Submission regarding the Gene Expression RFP proposes "a framework for describing information about a DNA-array experiment and a data format -- Microarray Markup Language (MAML) -- for communicating this information... MAML is based on the Extensible Markup Language XML. MAML is independent of the particular experimental platform and provides a framework for describing experiments done on all types of DNA-arrays, including spotted and synthesized arrays, as well as oligo-nucleotide and cDNA arrays, and is independent of the particular image analysis and data normalization methods. 
MAML does not impose any particular image analysis or data normalization method, but instead provides a format to represent microarray data in a flexible way, which allows representation of data obtained not only from existing microarray platforms but also from many possible future variants, including protein arrays. The format allows representation of raw and processed microarray data. The format is compatible with the definition of the 'minimum information about a microarray experiment' (MIAME) proposed by the MGED group. (2) On behalf of the GEML Community, Rosetta Inpharmatics has submitted to the Object Management Group (OMG) a proposed DTD based on the new version of Gene Expression Markup Language - GEML 2.0. Rosetta Inpharmatics Initial Submission regarding the Gene Expression RFP describes work in connection with the GEML DTD: "Rosetta Inpharmatics and Agilent Technologies have been using the GEML 1.0 format as part of internal pipelines for the past year. Rosetta has been continuously loading XML files on the order of thirteen megabytes into the Rosetta Resolver system, an enterprise expression data analysis product. We recently used internal tools to export the more than one thousand profiles, assigned annotations, and supporting patterns that constituted the data for the article, Functional Discovery via a Compendium of Expression Profiles, that appeared in the July 7, 2000 issue of Cell. The total size of the export, when compressed, was a little over half a gigabyte of data. That data was then imported by Harvard into their Rosetta Resolver system. We have not, as of yet, implemented the interfaces contained in this proposal but given that the size of the compressed XML files has proven no technical obstacle, we see no technical problems in implementing the interfaces. Rosetta has developed the freeware GEML Conductor tools for visualization of GEML formatted data and for conversion of gene expression data in other formats into GEML." 
See the XML DTD and IDL file. (3) In the NetGenics Submission, the UML model is normative. "The UML, which follows the recently adopted UML Profile for CORBA, permits semantic specifications that go beyond what is expressible in IDL. Given the size of typical data sets, a stream-based externalization approach makes sense. The stream would likely contain XML (e.g., Rosetta Inpharmatics' GEML), a popular means of representing gene expression data..." See the associated XMI file for details. See also: (1) "Gene Expression Markup Language (GEML)"; (2) "OMG Gene Expression RFP"; and (3) "Microarray Markup Language (MAML)."

  • [November 29, 2000]   Conversion Tool for DAML-O, RDF Schema, and UML/XMI.    A posting from Sergey Melnik and Stefan Decker (Stanford University) announces the availability of an online tool for converting between different data representations. The conversion tool is documented on the Interdataworking web site. This work-in-progress tool features: "(1) support for conversion between DAML-O, RDF Schema, and UML/XMI; (2) translation between quad and built-in reification, representation of order using 'RDF Seq' and 'order-by-reification' mechanisms; (3) support for conversion from Protégé RDF files to DAML-O restrictions; (4) a new XML serializer for RDF (trivial RDF/XML syntax) with support for embedded models and statements." The web site "provides a testbed for the concept of gateways in the 'interdataworking' technology. Interdataworking is a novel software structuring technique that facilitates data exchange between heterogeneous applications. The testbed supports data conversion from one format into another; the source data can be specified using a URL or uploaded as a file from your file system. You can choose a parser for your data. The object model delivered by the parser is sent through a sequence of gateways. The list of gateways can be selected, and the order is important. The result is output using a specified serializer..." Theoretical background may be found in papers written by the developers: (1) "A Layered Approach to Information Modeling and Interoperability on the Web" (Melnik and Decker), and (2) "Representing UML in RDF" (Melnik). See related resources in "XML and 'The Semantic Web'."

  • [November 28, 2000]   W3C Releases Jigsaw WebDAV Package.    The W3C's Jigsaw development team recently released a downloadable version of the Jigsaw Web server platform with a WebDAV package. Jigsaw is a W3C Open Source Project which provides a sample HTTP 1.1 implementation and a variety of other features on top of an advanced architecture implemented in Java. The new WebDAV implementation is "based on Jigsaw 2.1.2, and has been tested with cadaver, DAVExplorer and WebFolders." WebDAV (Web-based Distributed Authoring and Versioning) is an XML-based protocol which "defines a set of new methods (PROPFIND, PROPPATCH, MKCOL, COPY, MOVE, LOCK, UNLOCK) and a set of new headers (DAV, Depth, If, Destination, ...); it supplies a set of extensions to the HTTP protocol which allows users to collaboratively edit and manage files on remote web servers." For additional references, see: (1) the WebDAV FAQ document; (2) the article by Tom Bednarz showing how to enable WebDAV (mod_dav) for the Apache server that ships with Mac OSX (beta); (3) "WEBDAV (Extensions for Distributed Authoring and Versioning on the World Wide Web)."
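As a concrete illustration of the new methods and headers, a PROPFIND request retrieves properties for a collection and its immediate members. This sketch follows the published WebDAV specification (RFC 2518); exact header details may vary by server.

```
PROPFIND /docs/ HTTP/1.1
Host: www.example.org
Depth: 1
Content-Type: text/xml; charset="utf-8"

<?xml version="1.0" encoding="utf-8"?>
<D:propfind xmlns:D="DAV:">
  <D:prop>
    <D:getlastmodified/>
    <D:resourcetype/>
  </D:prop>
</D:propfind>
```

The Depth: 1 header limits the query to the collection and its direct children; the server answers with a 207 Multi-Status response carrying one XML response element per resource.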

  • [November 27, 2000]   IEEE Workshop on XML-Enabled Wide Area Search in Bioinformatics (XEWA).    A two-day IEEE Workshop on XML-Enabled Wide Area Search in Bioinformatics (XEWA) will be held on December 13-14, 2000. The XEWA workshop is sponsored by the IEEE Computer Society, Mass Storage Systems and Technology Technical Committee. Workshop goals will be to: "(1) Enumerate relevant service types for bioinformatics; (2) Prioritize services according to those whose availability would provide the most bang for the buck; (3) Explore alternative representations for representing the schemata (e.g., RDF, XML, XOL), and converge on one or a few preferable options; (4) Produce several service-oriented schemata that provide the 'connective tissue' needed to access existing sites and services using a representation-neutral format (e.g., ER / OO / UML diagrams)... The goal of the XEWA workshop is to define a format capable of describing how to interact with a data source. This format should be simple enough to enter the description by hand, flexible enough to link in to existing ontologies, and descriptive enough to be useful to automated tools trying to access the source." Background and rationale: "There are well over 500 public domain data sources of interest to genomics/proteomics researchers. Many of these 'data sources' do more than just provide data; they also provide access to a wide range of services. A good example of this is sequence homology search engines. Given the differences in interfaces, syntax and semantics between sites, there is no practical path for a given researcher or research team to use more than a few. Data warehouses, federated systems, and the like help, but only a little. The number of new sources coming online every year, and the number of changes to existing sources, is simply overwhelming. This is one of the major problems driving bioinformatics today. 
We picture a genomics world in which scientists, search engines, and soft-bots can browse and execute (limited) queries against a wide range of sites, with no significant per-site overhead. Rather than attempting to integrate these sources (thus allowing complex queries against few sites), we advocate providing just enough connective tissue to allow semi-intelligent agents or search engines to execute simplified queries against hundreds of sites. The connective tissue can take the form of a collection of loose, service-oriented "schemata" that provide such systems with the information needed to work their way through the interface at each site, to get to the underlying services. A schema might include structured metadata with domain-specific information, a thesaurus, service descriptions, and typical web interfaces." For additional information, see the call for papers and the workshop program. In addition to the "500+ public domain data sources", there are well over a dozen XML DTDs and schemas for bioinformatic and genome mapping disciplines. See for example: (1) Gene Expression Markup Language (GEML); (2) CellML; (3) Genome Annotation Markup Elements (GAME); (4) XML for Multiple Sequence Alignments (MSAML); (5) Systems Biology Markup Language (SBML); (6) Bioinformatic Sequence Markup Language (BSML); (7) BIOpolymer Markup Language (BIOML); (8) "The Clone Annotation DTD"; (9) MAML DTD (microarray format markup language, being developed by a community of developers in the Array XML Working Group -- including Berkeley, NCBI, EBI, NCGR, Stanford -- as part of the MGED initiative).

  • [November 27, 2000]   OASIS XML-Based Security Services Technical Committee to Define Security Framework.    An OASIS Technical Committee for 'XML-Based Security Services' is being formed with the goal of defining a "framework for sharing security information and security services on the Internet through XML documents." The initial members are from Sun Microsystems, JamCracker, and Netegrity. Projected deliverables include "a set of XML Schemas and an XML-based request/response protocol for authentication and authorization services. A draft of the Committee Specification (Version 0.8) will be based on the Security Services Markup Language (S2ML) co-authored by Netegrity, Inc. and its partners. The Committee Specification Version 0.8 will be ready by December 15, 2000. The final Committee Specification (Version 1.0) is scheduled for the second quarter of 2001. The XML-Based Security Services TC intends to submit the Committee Specification as an OASIS standard after sufficient implementation experience has been gathered..." Subscription to the associated OASIS mailing list is open to OASIS affiliates: send 'subscribe' as the body of an email message to the list address. The discussion list is publicly archived. For additional description and references, see (1) "Security Services Markup Language (S2ML)" and (2) the text of the announcement.

  • [November 25, 2000]   DAML-ONT Specification and DAML-ONT Theoretic Semantics Model.    The "DARPA Agent Mark Up Language (DAML)" is part of a new effort to "help bring the 'semantic web' into being, focusing on the eventual creation of a web logic language. DAML is being designed as an XML-based semantic language that ties the information on a page to machine-readable semantics (ontology). DAML represents joint work between DoD, industry and academia in both the US and the European Community and we hope it will lead to the eventual web standard in this area." A W3C mailing list now hosts a very active discussion on the developing DAML Ontology Language Specification, released in October 2000. Several new resources are available from the project web sites. The DAML Ontology Library provides a summary of submitted ontologies, sortable by URI, Submission Date, Keyword, Open Directory Category, Class, Property, Funding Source, and Submitting Organization. A technical document by Richard Fikes and Deborah L. McGuinness, "A Model Theoretic Semantics for DAML-ONT," outlines a "model-theoretic semantics for the DAML-ONT language by providing a set of first-order logic axioms that can be assumed to hold in any logical theory that is considered to be a logically equivalent translation of a DAML-ONT ontology. The intent is to provide a precise, succinct, and formal description of the relations and constants in DAML-ONT (e.g., complementOf, intersectionOf, Nothing). The axioms provide that description by placing a set of restrictions on the possible interpretations of those relations and constants. The axioms are written in ANSI Knowledge Interchange Format (KIF), which is a proposed ANSI standard. The document is organized as an augmentation of the DAML-ONT specification. Each set of axioms and their associated comments have been added to the specification document immediately following the portion of the specification for which they provide semantics. 
For example, the axioms providing semantics for the property complementOf immediately follow the XML property element that defines complementOf. We have maintained the ordering of the definitions from the original DAML-ONT specification, although that ordering is not optimal for understanding the axioms. In particular, the following terms are used in axioms before they are defined in the document: Class, Property, domain, range, type, List." An "Annotated DAML Ontology Markup - Walkthrough" supplies an example DAML Ontology; the example ontology demonstrates each of the features in DAML-ONT, an initial specification for DAML Ontologies. Other recently published resources include documents comparing DAML (or DAML-ONT) to: (1) "Simple HTML Ontology Extensions (SHOE)" and (2) "Ontology Interchange Language (OIL)." For additional information, see (1) the archives of the W3C discussion list for DAML-ONT; (2) the DAML web site; (3) "DARPA Agent Mark Up Language (DAML)."
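Since DAML-ONT is layered on RDF, the constants named above (complementOf, intersectionOf, Nothing) appear as ordinary properties in RDF/XML instance syntax. A minimal sketch, assuming the October 2000 namespace URI (which should be verified against the specification itself):

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:daml="http://www.daml.org/2000/10/daml-ont#">
  <daml:Class rdf:ID="Plant"/>
  <!-- every instance is in exactly one of Plant / NonPlant -->
  <daml:Class rdf:ID="NonPlant">
    <daml:complementOf rdf:resource="#Plant"/>
  </daml:Class>
</rdf:RDF>
```

The Fikes/McGuinness KIF axioms then constrain how statements like this complementOf assertion may be interpreted in any logically equivalent theory.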

  • [November 25, 2000]   RELAX Core Published as ISO/IEC DIS 22250-1 with Technical Report in English.    Murata Makoto recently announced that RELAX Core has been released as an ISO document: ISO/IEC DIS 22250-1. Text and office systems -- Regular Language Description for XML (RELAX) -- Part 1: RELAX Core Document Type: DIS (Fast Track). Voting on the DIS will end on 2001-05-02. An English translation of the RELAX Core specification (JIS TR) is now available in PDF and .DOC formats. A copy of the DIS is available from ISO for standard ISO charges. The original TR specification (JIS TR X 0029:2000, Regular Language Description for XML (RELAX): RELAX Core) available in English is a 36-page technical report which "specifies mechanisms for formally specifying the syntax of XML-based languages. For example, the syntax of XHTML 1.0 can be specified in RELAX. Compared with DTDs, RELAX provides the following advantages: (1) Specification in RELAX uses XML instance (i.e., document) syntax, (2) RELAX provides rich datatypes, and (3) RELAX is namespace-aware. The RELAX specification consists of two parts, RELAX Core and RELAX Namespace. This Technical Report specifies RELAX Core, which may be used to describe markup languages containing a single XML namespace. Part 2 of this Technical Report specifies RELAX Namespace, which may be used to describe markup languages containing more than a single XML namespace, consisting of more than one RELAX Core document. Given a sequence of elements, a software module called the RELAX Core processor compares it against a specification in RELAX Core and reports the result. The RELAX Core processor can be directly invoked by the user, and can also be invoked by another software module called the RELAX Namespace processor. This Technical Report also specifies a subset of RELAX Core, which is restricted to DTD features plus datatypes. 
This subset is very easy to implement, and with the exception of datatype information, conversion between this subset and XML DTDs results in no information loss. RELAX Core uses the built-in datatypes of XML Schema Part 2. Datatypes can be used as conditions on attributes or used as hedge models. The TR also defines some datatypes specific to RELAX." Annex A supplies an XML DTD for RELAX Core. Annex B gives a RELAX Module for RELAX Core. For related XML schema research, see "XML Schemas."
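A small RELAX Core module illustrates the instance-syntax and datatype points. This sketch is written from the general shape of RELAX modules; the exact attribute values and namespace URI should be checked against the Technical Report.

```xml
<module moduleVersion="1.0" relaxCoreVersion="1.0"
        targetNamespace=""
        xmlns="http://www.xml.gr.jp/xmlns/relaxCore">
  <interface>
    <export label="report"/>   <!-- the document element -->
  </interface>
  <elementRule role="report">
    <tag/>
    <ref label="issued"/>      <!-- content model: one issued child -->
  </elementRule>
  <!-- built-in datatype from XML Schema Part 2 as element content -->
  <elementRule role="issued" type="date"/>
  <tag name="report"/>
  <tag name="issued"/>
</module>
```

Unlike a DTD, the grammar is itself an XML document, so it can be processed with ordinary XML tools, and element content can be constrained by datatypes rather than only by #PCDATA.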

  • [November 24, 2000]   W3C's Amaya 4.1 Browser/Editor Supports Advanced Features.    W3C has announced the release of Amaya version 4.1, supporting HTML 4.0, XHTML 1.0, HTTP 1.1, MathML 2.0, and many CSS 2 features; it also provides RDF and XPointer/XLink support in connection with its collaborative annotation system. Source code and binaries are available for download; see also the CVS database. Description: "Amaya is W3C's own versatile editor/browser. With the extremely fast moving nature of Web technology, Amaya plays a central role at the Consortium. Easily extended to integrate new ideas into its design, Amaya provides developers with many specialized features including multiple views, where the internal structural model of the document can be displayed alongside the browser's view of how it should be presented on the screen. Amaya has a counterpart called Jigsaw which plays a similar role on the server side. Amaya is a complete Web browsing and authoring environment and comes equipped with a WYSIWYG style of interface, similar to that of the more popular commercial browsers. Amaya maintains a consistent internal document model adhering to the Document Type Definition (DTD), meaning that it handles the relationships between various document components: paragraphs, headings, lists and so on, as laid down in the relevant W3C Recommendation." Amaya offers advanced transport protocols support (e.g., content negotiation and 'keep alive' connections per libwww and HTTP/1.1), CSS stylesheet editing/publishing, WYSIWYG interface editing and rendering of mathematical expressions (MathML), and advanced graphics support (e.g., PNG, Scalable Vector Graphics). Amaya 4.X also "includes a collaborative annotation application based on Resource Description Framework (RDF), XLink, and XPointer. From the technical point of view, annotations are usually seen as metadata, as they give additional information about an existing piece of data. 
In this project, we use a special RDF annotation schema for describing annotations. Annotations can be stored locally or in one or more annotation servers. When a document is browsed, Amaya queries each of these servers, requesting the annotations related to that document. Amaya uses XPointer to describe where an annotation should be attached to a document. With this technique, it is possible to annotate any Web document independently, without needing to edit that document. Finally Amaya presents annotations with pencil annotation icons and attaches XLink attributes to these icons. If the user single-clicks on an annotation icon, the text that was annotated is highlighted. If the user double-clicks on this icon, the annotation text and other metadata are presented in a separate window..." For documentation on the RDF/XPointer implementation, see: (1) "Annotations in Amaya"; (2) "Annotation Server HOWTO" ['how to set up and use an W3C-Perllib Annotations server']; (3) the special RDF annotation schema.

  • [November 22, 2000]   XEXPR - A Scripting Language for XML.    W3C has acknowledged a submission from eBusiness Technologies, Inc. for XEXPR - A Scripting Language for XML. Reference: W3C Note 21 November 2000, by Gavin Thomas Nicol (Chief Scientist, eBusiness Technologies, Inc.). Document abstract: "In many applications of XML, there is a requirement for using XML in conjunction with a scripting language. Many times, this results in a scripting language such as JavaScript being bound within the XML content (like the <script> tag). XEXPR is a scripting language that uses XML as its primary syntax, making it easily embeddable in an XML document. In addition, XEXPR takes a functional approach, and hence maps well onto the syntax of XML." An associated specification XTND - XML Transition Network Definition (published as a separate NOTE) provides a generic DTD which uses XEXPR. Description: "In XML-based standards there often arises the need for two components: (1) A component for describing, declaratively, a set of states, and transitions between them: for example, when describing business processes, protocols, or decision trees. (2) A component allowing logic to be embedded into the XML. This submission is made up of two parts: XTND and XEXPR. XTND is a generic DTD that can be used for describing transition networks, and their interaction with the outside world. XEXPR is a scripting language that uses XML syntax, and hence is designed to be embedded in XML. XTND uses XEXPR. eBT is submitting these two specifications to the W3C in the hope that they will be incorporated into future specifications that need such functionality." The XTND part of the specification (XML Transition Network Definition), published as W3C Note 21-November-2000, provides formal constructs for encoding states, transitions, and events in processes. Description: "In many systems, transition networks are used to describe a set of states and the transitions that are possible between them. 
Common examples are such things as ATM control flows, editorial review processes, and definitions of protocol states. Typically, each of these transition networks has its own specific data format, and its own specific editing tool. Given the rapid transition to pervasive networking, and to application integration and interchange, a standard format for transition networks is desirable. This document defines such an interchange format, defined in XML: the interchange language for the Internet... Loosely speaking, a transition network is a set of states and the transitions between them. They are good at capturing the notion of process. For example: (1) Control processes such as those in a digitally controlled heating system. (2) Processes controlling manufacture or design. (3) Workflow processes such as those found in product data management software. They are also useful in modeling the behavior of systems and can be used in object-oriented analysis to create formal models of object interaction and larger system behavior. Transition networks are closely related to finite state machines (FSMs), and to data flow diagrams (DFDs), but they are augmented with the following capabilities: (1) Transition networks are not limited to "accepting or rejecting their input". Transition networks may execute actions or fire off events during transitions. (2) Transition networks can interact with other objects, thereby effecting change in the transition network (or in other networks). (3) Transitions in transition networks can be controlled by guard conditions that prohibit or allow the transition to be followed. (4) These guard conditions can be dependent on any predicate involving objects from within the environment of the transition network. As such, transition networks can be used to describe far more complex interactions or processes than either FSMs or DFDs allow." 
The W3C staff comment says in part: "It is common to combine the declarative potential of XML with imperative scripting languages such as ECMAScript. The submission defines a new scripting language (XEXPR) which is itself expressed directly in XML. The language takes a functional approach and avoids the need for further parsing machinery as would be needed for a syntax featuring infix operators. The submission demonstrates the use of XML for defining a functional scripting language and for representing finite state transition networks. This may prove to be of interest to future W3C work on dialogs for human-computer interaction, and more generally as a component for a Web application framework. Current W3C work on voice browsers is taking a different approach, using a form filling metaphor for representing dialogs, with a focus on easy authoring for voice applications. This work is drawing upon rich experience with earlier markup languages for voice interaction, and it is unclear whether the more abstract approach presented in the submission is relevant. W3C's work on forms is using XML Schema as the basis for modelling data, with the addition of dynamic integrity constraints that act over multiple fields. For example, the total value of an order can be defined in terms of a computation over the values of other fields such as unit prices, quantities, discounts, and tax and shipping costs. Such computations can be conveniently represented as expressions that evaluate to typed values. The focus is on a simple side-effect free representation of constraints, based upon the type system defined by XML Schema and the use of XPath for addressing form data. The XML scripting language proposed in the submission could be of interest to the XForms working group, but may prove to be too complicated for the restricted requirements of forms. XForms is expected to have to interoperate with popular scripting languages such as ECMAScript. 
This avoids the need for the constraint language to evolve into a general purpose scripting language."
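The functional, XML-as-syntax approach is easy to illustrate: the element name names the function, and child elements supply its arguments, evaluated recursively. A minimal sketch in Python, using invented function and literal names rather than the vocabulary actually defined in the XEXPR Note:

```python
import xml.etree.ElementTree as ET

# Illustrative evaluator for an XEXPR-style expression: the element
# name is the function and its child elements are the arguments.
# The names "add", "mul", and "num" are assumptions for demonstration,
# not the builtins actually defined in the XEXPR Note.
FUNCTIONS = {
    "add": lambda args: sum(args),
    "mul": lambda args: args[0] * args[1],
}

def evaluate(node):
    if node.tag == "num":                       # literal leaf
        return int(node.text)
    args = [evaluate(child) for child in node]  # evaluate arguments first
    return FUNCTIONS[node.tag](args)            # then apply the function

expr = ET.fromstring(
    "<add><num>1</num><mul><num>2</num><num>3</num></mul></add>")
print(evaluate(expr))  # 1 + (2 * 3) = 7
```

Because the whole expression is ordinary XML, it can be embedded in a host document (such as an XTND transition network) without any separate parser for the scripting syntax.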

  • [November 22, 2000]   empolis K42 Knowledge Server.    Jasmin Franz (STEP Electronic Publishing Solutions GmbH) recently posted an announcement for the release of an evaluation version of its 'K42 knowledge server'. Excerpts: "empolis, a world class provider of knowledge management solutions, proudly announces the beta release of empolis K42, its cutting-edge knowledge server that is fully compliant with the ISO Topic Maps standard. A free evaluation version is available for download. Knowledge management is recognised as a crucial part of utilising information assets, whether it is for corporate or commercial publishers. empolis K42 Knowledge Server provides a real time, persistent and scalable solution to approaching knowledge management. Written in Java, in order to aid cross-platform support, it has an extensive API allowing it to be customised and extended to better meet customers' individual requirements. Utilising the latest standards including XML, XLink, Topic Maps, and XTM, empolis K42 provides access to knowledge through its Knowledge Author and Knowledge Navigator components - both of which run within a web browser. The Gartner Group said of Topic Maps: 'the paradigm is powerful, flexible and extensible; topic maps will become a mainstream technology by 2003.' empolis employees are actively involved in the Topic Maps and XTM standard developments. empolis K42 provides a new paradigm for organising, maintaining and navigating information. The information models it stores are independent of the physical domain in which that information resides. These models can provide the routes to information, such as a set of web resources on a server and do not have to be contained within that information. As a result they can be used to deploy information sets in different environments with different requirements, and can also be personalised by individual users and user communities... 
Some of the highlights of empolis K42 Knowledge Server: (1) empolis K42 provides a Knowledge Author component to enable the creation and maintenance of the knowledge data. It allows the knowledge server to be updated in real time. (2) Knowledge Navigator provides a delivery solution that can be rapidly implemented to enable companies to deliver the knowledge data in their own corporate style through the use of XML and XSL. (3) empolis K42 is written in Java in order to aid cross-platform support and has a comprehensive API to expose the functionality it provides and to enable customisation and integration of the software. (4) empolis K42 has already been tested to persist and provide access to over a million topics and is designed to scale to tens of millions. empolis K42, as a beta version, utilises and supports the Topic Map standard. But empolis K42 is a knowledge server that will enable portals, corporates and communities to capture, manage and deliver valuable knowledge assets. As such, empolis K42 will support not only Topic Maps but will include other such effective standards that help capture and express knowledge." For reference, see "(XML) Topic Maps."

  • [November 22, 2000]   XSLTDoc for Browsing XSLT Stylesheets.    Jeni Tennison recently announced the (alpha) availability of a tool designed to help people browse their stylesheets. The tool itself is an XSLT application. "For beginners, it gives a description of what each instruction is doing in theory (it doesn't trace the actual running of the stylesheet), including a summary of any XPaths. For people writing complex stylesheets, it provides summary views. The XSLTDoc application gives you: (1) links to the called template from any xsl:call-template instruction; (2) links to the definitions of the variables/parameters wherever they're used; (3) sortable summary tables giving template matches and modes. It's all import/include aware, and tells you when a particular named template, variable declaration and so on are overridden in importing stylesheets. Getting linking done with matching/moded templates is a goal, but it's pretty tricky, especially as there may be several templates that match in a particular case, and it's really impossible to know which will do so without having a specific source XML instance. The tool is available for download from the utilities page on Jeni Tennison's web site. Just "download the .ZIP archive, unzip it into a working directory, and load xslt-doc.xsl; you will be prompted for a stylesheet to load; enter its file name relative to the XSLTDoc directory." Note also Jeni's XSLT Pages with tutorials.

  • [November 22, 2000]   IBM alphaWorks Releases XSLbyDemo Tool for XSLT Rules Generation.    A new tool from IBM's alphaWorks XML Application Development team is XSLbyDemo. XSLbyDemo is a technology "for generating XSLT rules on the basis of editing operations conducted under the WYSIWYG mode of Page Designer, which is a full-fledged HTML authoring tool provided with IBM WebSphere Studio. The remarkable feature of XSLbyDemo is that users can create an XSLT stylesheet automatically solely on the basis of the knowledge of HTML editing. The users do not have to know anything about the syntax/programming of XSLT, and need not be aware of the rule generation process, which happens behind the HTML authoring in the WYSIWYG mode. The users are thus allowed to concentrate on the styling of the HTML document, relying on the Page Designer's full capabilities for HTML and CSS authoring. XSLbyDemo finally produces an XSLT stylesheet that transforms a given HTML document to a desired document obtained as the result of the WYSIWYG authoring. XSLbyDemo runs under Windows NT 4.0 with Service Pack 4, Windows 95, Windows 98, or Windows 2000." For related tools, see "XSL/XSLT Software Support."

  • [November 22, 2000]   IBM's XML and Web Services Development Environment.    New from IBM alphaWorks labs: XML and Web Services DE. "The IBM XML and Web Services Development Environment is the first development environment that creates open, platform-neutral Web services for deployment across heterogeneous systems. This tool allows HTML, Java, SQL and XML developers to quickly extend existing e-business applications so that they can deliver business informational Web services. Database developers can also use SQL as a programming language to quickly build data-aware Web services. Web developers can create Web services with minimal knowledge of Java, XML or SOAP. It turns the power of XML and Java technology into competitive e-business advantage. It provides all of the tooling needed to create Web services... (1) Discover - Browse the UDDI Business Registry to locate existing Web services for integration. The Web becomes an extension of the development environment. (2) Create/Transform - Use powerful XML editing functions to quickly develop new Web services. Complete transformation (edit and mapping) tools are also provided so that developers can create Web services from existing XML, Java, or SQL applications. (3) Build - Wrap existing bean components as SOAP-accessible services and describe them in the Web services description language (WSDL). Generate SOAP proxies to Web services described in WSDL. Generate bean skeletons from WSDL. Minimal knowledge of SOAP or WSDL is required. (4) Deploy - Deploy the Web service on the developer's machine or to a remote, production-level server for testing right away. After testing, publish the Web service immediately to the application server (WebSphere Application Server or Apache Tomcat). (5) Test - Test applications as they run locally or remotely, and get instant feedback. (6) Publish - In addition to creating and deploying Web services, the development environment can also publish them to the UDDI Business Registry. 
This advertises your Web services so that other businesses can access them." See (1) "Universal Description, Discovery, and Integration (UDDI)"; (2) "Simple Object Access Protocol (SOAP)"; (3) "Web Services Description Language (WSDL)."
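As a concrete illustration of the SOAP plumbing that such generated proxies rely on, here is a minimal SOAP 1.1 request envelope parsed with Python's standard library; the service namespace and operation name are invented for illustration:

```python
import xml.etree.ElementTree as ET

# A minimal SOAP 1.1 request envelope of the kind a generated proxy
# would send to a WSDL-described service.  The "urn:example:stocks"
# namespace and "getQuote" operation are made up for demonstration.
ENVELOPE = """\
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <getQuote xmlns="urn:example:stocks">
      <symbol>IBM</symbol>
    </getQuote>
  </soap:Body>
</soap:Envelope>
"""

env = ET.fromstring(ENVELOPE)
# Locate the Body using the namespace-qualified tag name.
body = env.find("{http://schemas.xmlsoap.org/soap/envelope/}Body")
print(body[0].tag)  # the namespace-qualified operation element
```

In the development environment described above, the WSDL file plays the role this hand-written envelope plays here: it tells the generated proxy which operation element and namespace to emit.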

  • [November 21, 2000]   Extensible Stylesheet Language (XSL) Specification Becomes W3C Candidate Recommendation.    W3C has announced the promotion of the XSL specification to the status of a W3C Candidate Recommendation: Extensible Stylesheet Language (XSL) Version 1.0. Reference: W3C Candidate Recommendation 21-November-2000, edited by Sharon Adler (IBM), Anders Berglund (IBM), Jeff Caruso (Pageflex), Stephen Deach (Adobe), Paul Grosso (ArborText), Eduardo Gutentag (Sun), Alex Milowski (Lexica), Scott Parnell (Xerox), Jeremy Richman (BroadVision), Steve Zilles (Adobe). Document abstract: "XSL is a language for expressing stylesheets. It consists of two parts: (1) a language for transforming XML documents, and (2) an XML vocabulary for specifying formatting semantics. An XSL stylesheet specifies the presentation of a class of XML documents by describing how an instance of the class is transformed into an XML document that uses the formatting vocabulary." Description: "XSL is a language for expressing stylesheets. Given a class of arbitrarily structured XML documents or data files, designers use an XSL stylesheet to express their intentions about how that structured content should be presented; that is, how the source content should be styled, laid out, and paginated onto some presentation medium, such as a window in a Web browser or a hand-held device, or a set of physical pages in a catalog, report, pamphlet, or book... An XSL stylesheet processor accepts a document or data in XML and an XSL stylesheet and produces the presentation of that XML source content that was intended by the designer of that stylesheet. There are two aspects of this presentation process: first, constructing a result tree from the XML source tree and second, interpreting the result tree to produce formatted results suitable for presentation on a display, on paper, in speech, or onto other media. The first aspect is called tree transformation and the second is called formatting. 
The process of formatting is performed by the formatter. This formatter may simply be a rendering engine inside a browser. Tree transformation allows the structure of the result tree to be significantly different from the structure of the source tree. For example, one could add a table-of-contents as a filtered selection of an original source document, or one could rearrange source data into a sorted tabular presentation. In constructing the result tree, the tree transformation process also adds the information necessary to format that result tree. Formatting is enabled by including formatting semantics in the result tree. Formatting semantics are expressed in terms of a catalog of classes of formatting objects. The nodes of the result tree are formatting objects. The classes of formatting objects denote typographic abstractions such as page, paragraph, table, and so forth. Finer control over the presentation of these abstractions is provided by a set of formatting properties, such as those controlling indents, word- and letter-spacing, and widow, orphan, and hyphenation control. In XSL, the classes of formatting objects and formatting properties provide the vocabulary for expressing presentation intent. The XSL processing model is intended to be conceptual only. An implementation is not mandated to provide these as separate processes. Furthermore, implementations are free to process the source document in any way that produces the same result as if it were processed using the conceptual XSL processing model." The new CR has been produced by the XSL Working Group as part of the W3C Style Activity. The Candidate Recommendation review period ends on February 28, 2001; meantime, comments may be sent to the publicly archived XSL mailing list. 
The following exit criteria for the CR (preceding advancement to PR) are proposed: "(1) Sufficient reports of implementation experience have been gathered to demonstrate that XSL processors based on the specification are implementable and have compatible behavior. (2) An implementation report shows that there is at least one implementation for each basic formatting object and property. (3) Providing formal responses to all comments received." The specification is available also in PDF, XML, HTML, and .ZIP archive formats. For related references, see (1) the W3C XSL specification work and (2) "Extensible Stylesheet Language (XSL/XSLT)."
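The two-stage model (tree transformation, then formatting) can be glimpsed in a minimal stylesheet that rewrites a hypothetical <para> source element into an fo:block formatting object. Python's standard library includes no XSLT engine, so this sketch only confirms that the stylesheet is well-formed XML:

```python
import xml.etree.ElementTree as ET

# A minimal XSLT stylesheet targeting the XSL formatting vocabulary:
# tree transformation turns a (hypothetical) <para> source element into
# an fo:block formatting object carrying a formatting property.  The
# source element name and property value are illustrative assumptions.
STYLESHEET = """\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:fo="http://www.w3.org/1999/XSL/Format">
  <xsl:template match="para">
    <fo:block space-after="12pt">
      <xsl:apply-templates/>
    </fo:block>
  </xsl:template>
</xsl:stylesheet>
"""

root = ET.fromstring(STYLESHEET)
print(root.tag)  # the namespace-qualified stylesheet element
```

A conforming XSL processor would take this stylesheet plus an XML source tree, build a result tree of formatting objects like fo:block, and hand it to a formatter for rendering.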

  • [November 20, 2000]   W3C's Natural Language Semantics Markup Language for the Speech Interface Framework.    The W3C has issued a new working draft specification which describes markup for representing natural language semantics: Natural Language Semantics Markup Language for the Speech Interface Framework. Reference: W3C Working Draft 20-November-2000, by Deborah A. Dahl (Unisys). Document abstract: "The W3C Voice Browser working group aims to develop specifications to enable access to the Web using spoken interaction. This document is part of a set of specifications for voice browsers, and provides details of an XML markup language for describing the meanings of individual natural language utterances. It is expected to be automatically generated by semantic interpreters for use by components that act on the user's utterances, such as dialog managers." In this proposal, the NL semantics representation "uses the data models of the W3C XForms draft specification to represent application-specific semantics. While XForms syntax may change in future revisions of the specification, it is not expected to change in ways that affect the NL Semantics Markup Language significantly." The authors of the WD are members of the W3C Voice Browser Working Group. The specification has been produced as part of the W3C Voice Browser Activity, and forms part of the proposals for the W3C Speech Interface Framework. The specification includes a set of draft elements and attributes and [later will include] a draft DTD. Markup uses a root element <result> (with attributes grammar, x-model, and xmlns) which includes one or more <interpretation> elements. Multiple interpretations result from ambiguities in the input or in the semantic interpretation. The <interpretation> element has attributes confidence, grammar, x-model, and xmlns. 
The <interpretation> element includes an <input> element which contains the input being analyzed, optionally a <model> element defining the XForms data model, and an <instance> element containing the instantiation of the data model for this utterance. Description: "The general purpose of the NL Semantics Markup is to represent information automatically extracted from a user's utterances by a semantic interpretation component, where utterance is to be taken in the general sense of a meaningful user input in any modality supported by the platform. Referring to the sample Voice Browser architecture in Introduction and Overview of the W3C Speech Interface Framework, a specific architecture can take advantage of this representation by using it to convey content among various system components that generate and make use of the markup. Components that generate NL Semantics Markup: (1) ASR, (2) Natural language understanding, (3) Other input media interpreters [e.g. DTMF, pointing, keyboard], (4) Reusable dialog component, (5) Multimedia integration component. Components that use NL Semantics Markup: (1) Dialog manager, and (2) Multimedia integration component. A platform may also choose to use this general format as the basis of a general semantic result that is carried along and filled out during each stage of processing. In addition, future systems may also potentially make use of this markup to convey abstract semantic content to be rendered into natural language by a natural language generation component..." Comments on the working draft may be sent to the publicly archived W3C mailing list. See also the related grammar specification: Speech Recognition Grammar Specification for the W3C Speech Interface Framework.
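A sketch of what such a result document might look like, using the <result>, <interpretation>, <input>, and <instance> elements named above (the attribute values and instance fields are invented for illustration), together with the obvious consumer-side step of picking the most confident interpretation:

```python
import xml.etree.ElementTree as ET

# The element names (<result>, <interpretation>, <input>, <instance>)
# follow the working draft as summarized above; the grammar URI,
# confidence value, and instance fields are made up for illustration.
doc = ET.fromstring("""
<result grammar="http://example.org/flights.grxml">
  <interpretation confidence="0.85">
    <input>a flight to boston tomorrow</input>
    <instance>
      <destination>Boston</destination>
      <date>tomorrow</date>
    </instance>
  </interpretation>
</result>
""")

# A dialog manager consuming this markup would typically act on the
# interpretation with the highest recognizer confidence.
best = max(doc.findall("interpretation"),
           key=lambda i: float(i.get("confidence")))
print(best.findtext("input"))
print(best.find("instance").findtext("destination"))
```

Multiple <interpretation> children would appear when the input or its semantic analysis is ambiguous, which is why the selection step above is phrased as a maximum over a list.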

  • [November 17, 2000]   Rule Markup Language (RuleML).    The RuleML Initiative represents a collaborative research effort by an international team of participants seeking to develop a shared Rule Markup Language (RuleML). The project is consciously related to other standards work, including Mathematical Markup Language (MathML), DARPA Agent Markup Language (DAML), Predictive Model Markup Language (PMML), Attribute Grammars in XML (AG-markup), and Extensible Stylesheet Language Transformations (XSLT). From the web site description: "The participants of the RuleML Initiative constitute an open network of individuals and groups from both industry and academia. We are not commencing from zero but have done some work related to rule markup or have actually proposed some specific tag set for rules. Our main goal is to provide a basis for an integrated rule-markup approach that will be beneficial to all involved and to the rule community at large. This shall be achieved by having all participants collaborate in establishing translations between existing tag sets and in converging on a shared rule-markup vocabulary. This RuleML kernel language can serve as a specification for immediate rule interchange and can be gradually extended - possibly together with related initiatives - towards a proposal that could be submitted to the W3C. Rules can be stated (1) in natural language, (2) in some formal notation, or (3) in a combination of both. Being in the third, 'semiformal' category, the RuleML Initiative is working towards an XML-based markup language that permits Web-based rule storage, interchange, retrieval, and firing/application. Rules in (and for) the Web have become a mainstream topic since inference rules were marked up for E-Commerce and were identified as a Design Issue of the Semantic Web, and since transformation rules were put to practice for document generation from a central XML repository (as used here). 
Rules have also continued to play an important role in Intelligent Agents and AI shells for knowledge-based systems, which need a Web interchange format, too. The Rule Markup Initiative has taken initial steps towards defining a shared Rule Markup Language (RuleML), permitting both forward (bottom-up) and backward (top-down) rules in XML for deduction, rewriting, and further inferential-transformational tasks. The initiative started during PRICAI 2000, as described in the Original RuleML Slide, and was launched on the Internet on 2000-11-10. A complementary effort coordinates the development of Java rule engines. A Rule Markup Workshop is planned in conjunction with the third International Conference on Electronic Commerce, ICEC2001, in Vienna, Austria, in October 2001." For background and references, see (1) the RuleML web site and (2) "Rule Markup Language (RuleML)." See similarly Relational-Functional Markup Language (RFML) and Business Rules Markup Language (BRML).
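The forward (bottom-up) style of rule application that RuleML aims to interchange can be sketched without any markup at all. This toy Python step derives grandparent facts from parent facts; the encoding is plain tuples, illustrating the inference semantics rather than RuleML syntax:

```python
# A toy forward-chaining (bottom-up) inference step of the kind RuleML
# is meant to interchange.  Facts are plain tuples here, an assumption
# for demonstration -- RuleML itself would encode the rule in XML.
facts = {("parent", "ann", "bob"), ("parent", "bob", "carl")}

def forward_step(facts):
    # Rule: if parent(X, Y) and parent(Y, Z) then grandparent(X, Z).
    derived = set(facts)
    parents = [f for f in facts if f[0] == "parent"]
    for (_, x, y1) in parents:
        for (_, y2, z) in parents:
            if y1 == y2:
                derived.add(("grandparent", x, z))
    return derived

closed = forward_step(facts)
print(("grandparent", "ann", "carl") in closed)  # True
```

A backward (top-down) engine would run the same rule in the other direction, starting from the grandparent goal and searching for parent facts that satisfy it.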

  • [November 17, 2000]   Ontopia Topic Map Navigator Publicly Available.    A communiqué from Sylvia Schwab announces the public availability of Ontopia's Topic Map Navigator (limited edition): "Ontopia is pleased to announce that you can now download a free version of the Topic Map Navigator directly from the Ontopia website. The Navigator allows you to browse your topic maps in a convenient web interface with no need for programming or configuration. Ontopia will be adding support for the XTM (XML Topic Map) DTD as soon as it has been finalized; in the meantime an XML DTD (Document Type Definition) defined by Ontopia is required. If your topic map is valid against this DTD, you can load it into the navigator and start browsing it right away. The free version of the Navigator is restricted to only accept topic maps smaller than 5 kilotao in size. This means that the topic map can have no more than 5000 topics, associations and occurrences. The Navigator will expire on 15 April 2001 and is intended for non-commercial use. Shortly before the expiry date, you will be able to upgrade to a trial of our 1.0 version of the software... The Ontopia Navigator is a navigational interface for topic maps built using the Ontopia Engine. It is written as a collection of Java Server Pages (JSPs) that use the Ontopia Engine to load a topic map and produce a navigational web interface to it. This means that the Navigator can be deployed on any web server that supports JSP. It includes a high-level API which enables any Java developer or web-developer with JSP skills to quickly create fully-functional, customised web applications. The resulting interface consists of simple HTML web pages using frames and some very simple JavaScript for the implementation of the default occurrence types extension. This means that the Navigator works with any web browser that supports frames. 
The Navigator package also includes a reference implementation to provide a starting point for developing new visualisations." [Note: 'kilotao' in the announcement is a suspected Ontopian neologism, derived from "kilo" (1024) + "TAO = 'topic, association, occurrence'," as in "The TAO of Topic Maps," by Steve Pepper.] For other TM information, see (1) "The Ontopia Topic Map Engine: A Technical Introduction" -- a brief introduction to the Ontopia Topic Map Engine and Navigator for technically oriented readers, by Lars Marius Garshol; (2) online demonstrations of the Navigator; (3) the XTM (XML Topic Maps) Document Web site; (4) "(XML) Topic Maps."
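The 'kilotao' limit counts topics, associations and occurrences together, which is easy to model; the class and method names below are illustrative assumptions, not the Ontopia API:

```python
from dataclasses import dataclass, field

# Sketch of the 'tao' tally behind the free-version limit described
# above: topics + associations + occurrences, capped at 5000.  The
# class and method names are illustrative, not Ontopia's Java API.
@dataclass
class TopicMap:
    topics: list = field(default_factory=list)
    associations: list = field(default_factory=list)
    occurrences: list = field(default_factory=list)

    def tao_size(self) -> int:
        # combined count of topics, associations and occurrences
        return (len(self.topics) + len(self.associations)
                + len(self.occurrences))

    def within_free_limit(self) -> bool:
        return self.tao_size() <= 5000  # the announced 5-kilotao cap

tm = TopicMap(topics=["opera"] * 3000,
              associations=["composed-by"] * 1500,
              occurrences=["homepage"] * 400)
print(tm.tao_size(), tm.within_free_limit())  # 4900 True
```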

  • [November 17, 2000]   IDOOX Releases XDB: XML Database.    Miloslav Nic announced the pre-release publication of XDB: XML Database. "XDB is an XML document repository providing structured storage of XML data, at present using an RDBMS (Relational Database Management System) mapping over PostgreSQL. As the first step, our plan is to develop a lightweight XML persistent storage engine on top of a relational database backend to come up with a UI and API in short time, and replace it by our native XML storage system in the second step to satisfy complex XML processing requirements. XDB's intention is to offer a fast, reliable and scalable XML database framework with powerful querying techniques according to W3C standards (XPath, XML Query) and standard XML processing APIs (SAX, DOM)... the main purpose of XDB is to provide native storage of XML data. RDBMS is not the target, but just a temporary method which will be replaced by dedicated storage within a couple of months. Principal features: (1) Ability to store and process large collections of XML documents; (2) Stores any well-formed document; (3) Provides SAX interface; (4) RDBMS mapping of XML documents; (5) Access via XPath based query language; (6) Independence of the underlying database system." See also the associated white paper. For related tools, see "XML and Databases."
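The first-step 'RDBMS mapping' idea can be sketched as follows: well-formed XML documents persisted in a relational store, then addressed on retrieval with a small XPath subset. The table schema here is an assumption for illustration, not XDB's actual PostgreSQL mapping:

```python
import sqlite3
import xml.etree.ElementTree as ET

# Sketch of storing well-formed XML in a relational backend and
# querying it back with a simple XPath subset.  The single-table
# schema is an illustrative assumption, not XDB's actual mapping.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (name TEXT PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO docs VALUES (?, ?)",
             ("catalog", "<catalog><item id='1'>widget</item></catalog>"))

# Retrieve the document, then address into it with ElementTree's
# limited XPath support.
(body,) = conn.execute(
    "SELECT body FROM docs WHERE name = ?", ("catalog",)).fetchone()
root = ET.fromstring(body)
print(root.findtext("item"))  # widget
```

A production mapping would shred documents into element/attribute tables rather than storing whole serialized bodies, but the round trip above captures the basic contract: any well-formed document in, XPath-addressable content out.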

  • [November 16, 2000]   MATE Project Uses XML Tools for Spoken Language Dialogue Corpora.    Numerous encoding initiatives now employ XML in the annotation of spoken language dialogue corpora. One such XML-based project is MATE (Telematics Project LE4-8370; Multilevel Annotation, Tools Engineering), which "aims to facilitate re-use of language resources by addressing the problems of creating, acquiring, and maintaining language corpora. The problems are addressed along two lines: (1) through the development of a standard for annotating resources; (2) through the provision of tools which will make the processes of knowledge acquisition and extraction more efficient. Specifically, MATE treats spoken dialogue corpora at multiple levels, focusing on prosody, (morpho-) syntax, co-reference, dialogue acts, and communicative difficulties, as well as inter-level interaction. The results of the project will be of particular benefit to developers of spoken language dialogue systems but will also be directly useful for other applications of language engineering." The 'MATE Dialogue Annotation Guidelines' provide "a comprehensive collection of recommendations or guidelines for representing descriptive annotation of spoken dialogue material. Descriptive annotation includes any information that encodes linguistic data with respect to their physical, perceptual, or functional dimensions. Spoken dialogue material refers to any collection of spoken dialogue data (human-human, human-system, or human-human-system), including not only speech files but also logfiles or scenarios which are related to the spoken dialogues. Spoken dialogue annotation is the only area considered in this report; however, this does not exclude the possibility that the recommendations may apply to other areas as well. It builds on a common standard framework in terms of a coding module at the conceptual level and an underlying representation in XML at the implementational level. 
For each level considered by MATE, recommendations are provided on how to encode relevant phenomena; one or more best practice coding modules are provided and several examples are given. The descriptions given in this document allow a complete separation from the underlying machine representation, for which MATE uses XML. The separation means that in principle one could decide to use formats other than XML at the implementational level without affecting the coding module in any way. In this document recommendations will be made that rely on a given markup language, XML, that has already found broad support. This is an important factor as the availability of parsers and other software enhances the integration of this proposal into existing environments." Annex C supplies the XML DTDs. The associated MATE workbench program "provides support for flexible display and editing of XML annotations, and complex querying of a set of linked files. The workbench was designed to support the annotation of XML coded linguistic corpora, but it could be used to annotate any kind of data, as it is not dependent on any particular annotation scheme. Rather than being a general purpose XML-aware editor it is a system for writing specialised editors tailored to a particular annotation task. A particular editor is defined using a transformation language, with suitable display formats and allowable editing operations. The workbench is written in Java, which means that it is platform-independent. This paper outlines the design of the workbench software and compares it with other annotation programs... The major features of the MATE workbench are: (1) An internal database - using arbitrary XML as an interchange format, extended to cover multiple hierarchies and arbitrary directed graphs using hyperlinks or ID/IDREF pointers between elements. This extension from trees to graphs is required to allow XML to represent more complex data. 
(2) A query language which is tailored to this internal representation. This language returns tuples instead of single elements (as in the XSLT query language). The architecture allows us to add new structure to the database by evaluating a query. (3) A transformation language and processor that goes beyond XSLT in some respects. (4) A display and editing engine for displaying to the user and enabling editing actions. The MATE workbench uses XML as its input/output format, and uses a similar internal data model. However, the strictly hierarchic nature of XML is at odds with certain aspects of linguistic (particularly speech) data. In multi-speaker dialogues, speech may overlap, and different annotation hierarchies coded on a corpus may overlap, for example prosody and syntax. One way to indicate this non-hierarchical structure in XML is by the use of standoff annotation. Linking between elements is done by means of a distinguished href attribute of elements, which uses a subset of the XPointer proposal to point to arbitrary elements in the same or different files. Such attributes are often called hyperlinks. This extended data model allows us to represent overlapping or crossing annotations... for example, a case where a contrastive marking is on the subject and verb and crosses a <vp> constituent..." For references, see "Multilevel Annotation, Tools Engineering (MATE)." Related speech data annotation projects include, for example: (1) DARPA Communicator Project and XML Log Standard; (2) Computing Environment for Linguistic, Literary, and Anthropological Research (CELLAR); (3) Architecture and Tools for Linguistic Analysis Systems (ATLAS); (4) TalkBank and the Codon XML-Based Annotation Framework; (5) ACE Pilot Format DTDs; (6) Transcriber - Speech Segmentation and Annotation DTD.
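Standoff annotation of the kind MATE describes keeps each annotation layer in its own tree and links to the base tokens through href attributes, so overlapping hierarchies (say, prosody versus syntax) never have to nest inside one tree. A minimal sketch, where the element names and the '..' range notation are illustrative assumptions rather than MATE's actual scheme:

```python
import xml.etree.ElementTree as ET

# Standoff annotation sketch: the base tokens live in one document and
# a syntax layer points at them through an href attribute instead of
# wrapping them.  Element names and the '#w1..#w2' range notation are
# illustrative assumptions, not MATE's actual encoding.
tokens = ET.fromstring(
    "<words><w id='w1'>the</w><w id='w2'>cat</w><w id='w3'>sat</w></words>")
syntax = ET.fromstring(
    "<np href='#w1..#w2'/>")  # a noun phrase spanning tokens w1 to w2

# Resolve the hyperlink back to the token layer.
by_id = {w.get("id"): w.text for w in tokens.iter("w")}
start, end = [ref.lstrip("#") for ref in syntax.get("href").split("..")]
print(by_id[start], by_id[end])  # the cat
```

Because the syntax layer never encloses the tokens, a second layer (a prosodic phrase spanning w2 to w3, say) can cross it freely, which a single nested XML tree could not express.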

  • [November 16, 2000]   AuthXML Standard for Web Security.    Securant Technologies recently "announced the formation of an open industry working group to facilitate the creation of the first XML-based standard for Web security, called AuthXML. This standard will leverage XML, which is platform and programming language independent, to enable authentication and authorization functions to be performed across, and interoperate with, multi-vendor Web security systems, packaged and custom Web applications, and network level security systems. AuthXML will allow integrated Web commerce and a transparent user experience by providing a standardized approach for presenting and keeping track of security details as a transaction or session traverses linked Web sites based on disparate technologies, applications and platforms. Securant has been working with its key customers and partners for several months to develop a framework for the AuthXML specification, and is now opening up its research and design efforts to help foster and accelerate the adoption of a universal standard. AuthXML is a vendor-neutral standard that enables integration of Web security, network security, B2B infrastructures and applications. AuthXML is named as such because it comprises two primary components, authentication and authorization, and is designed to ease integration of transactions between trading partner sites that may be using different security systems and within a given site that may be deploying multiple applications that need integrated security. AuthXML will enable: (1) Faster deployment for customers through standards-based integration, (2) Interoperability between Web security vendors allowing for secure and simplified integrated commerce, (3) Simplified user experience through reduced sign-ons across Web networks, (4) More tightly integrated Web sites and applications based on non-proprietary integration. 
AuthXML is intended to be a completely open standard for Web-based application security and inter-application integration. The standard defines a set of XML message formats, XML schemas and interaction models that web sites can use in order to provide a seamless user experience and business transactions that span multiple parties and security domains across the Internet. AuthXML is not owned by any one vendor. Instead, the standards proposal will be submitted to an appropriate open standards body to ensure that it remains an open industry standard in which any interested companies and organizations can participate. The AuthXML 1.0 Specification is currently [2000-11-16] under development by Securant Technologies and some of its key partners and customers." For references, see "AuthXML Standard for Web Security."

  • [November 16, 2000]   Security Services Markup Language (S2ML).    Netegrity, Inc. has "announced that it is working with a group of industry leading companies to define the first standard for enabling secure e-commerce transactions using XML. The industry's first major collaboration, called Security Services Markup Language (S2ML), will create a common language for sharing security information about transactions and end users between companies engaged in online B2B and B2B2C transactions. Authors of the S2ML specification are Bowstreet, Commerce One, Jamcracker, Netegrity, Sun Microsystems, VeriSign, and webMethods. Reviewers of the specification include Art Technology Group, Oracle, PricewaterhouseCoopers, and TIBCO. S2ML is intended to solve [security] problems by helping to unify access control methods through an open, standards-based framework for the next generation of secure e-commerce transactions. The S2ML specification addresses three main areas of security services: authentication, authorization, and entitlement/privilege. S2ML defines standard XML schemas, as well as an XML request/response protocol, for describing authentication and authorization services through XML documents. S2ML also will provide specific bindings for various protocols such as HTTP and SOAP and B2B messaging frameworks such as ebXML. S2ML will deliver the following benefits: (1) Interoperability: With S2ML, e-marketplaces, service providers, and end-user companies of all sizes will be able to securely exchange information about authenticated users, Web services, and authorization information without requiring partners to change their current security solutions. S2ML will become the common language for different infrastructures to communicate security data. (2) Open Solution: S2ML is designed to work with multiple XML document exchange protocols and frameworks such as SOAP, OAG, MIME, Biztalk, and ebXML. 
(3) Single Sign-On Across Partner Sites: S2ML will enable users to travel across sites with their credentials and entitlements so that companies and partners in a trusted relationship can deliver single sign-on across sites, regardless of the security infrastructures in place. The S2ML effort is an open industry initiative in which any organization can participate and implement the specifications. The vendors behind the S2ML initiative plan to submit the S2ML 0.8 specification to the World Wide Web Consortium (W3C) and OASIS for consideration within the next 30 days." For other details, see (1) the S2ML web site; (2) "Security Services Markup Language (S2ML)"; and (3) the full text of the announcement: "Netegrity And Industry Leaders To Define First XML Standard For Secure E-Commerce. Art Technology Group, Bowstreet, Commerce One, Jamcracker, Oracle, PricewaterhouseCoopers, Sun Microsystems, TIBCO Software Inc., VeriSign, and webMethods join Netegrity to Develop Security Services Markup Language (S2ML)."
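As a rough illustration of the XML request/response idea described above, the hypothetical exchange below invents its own element names (AuthRequest, Credential, Subject, Status); they should not be read as the actual S2ML 0.8 vocabulary:

```python
import xml.etree.ElementTree as ET

# Hypothetical sketch of an XML request/response exchange in the style S2ML
# describes; element names are invented for illustration, not taken from
# the S2ML 0.8 specification.
request = ET.Element("AuthRequest")
ET.SubElement(request, "Credential", type="password").text = "alice:secret"

def authenticate(req):
    """Toy service: issue an assertion a partner site could consume."""
    cred = req.find("Credential")
    user = cred.text.split(":", 1)[0]
    resp = ET.Element("AuthResponse")
    ET.SubElement(resp, "Subject").text = user
    ET.SubElement(resp, "Status").text = "Success"
    return resp

response = authenticate(request)
print(ET.tostring(response, encoding="unicode"))
```

The point is the shape of the exchange: a site forwards an XML request, and gets back an XML document asserting who the user is, which its own infrastructure can act on without sharing the partner's security product.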

  • [November 16, 2000]   Oracle Releases XML SQL Utility Beta Version 2.1.0.    Steve Muench recently announced that release 2.1.0 Beta of the Oracle XML SQL Utility is now available on the Oracle Technet web site. Oracle's XSU tool "generates an XML Document from SQL queries, outputs text or Document Object Model from a SQL query string or a JDBC ResultSet object, and writes data from an XML document into a database table or view." New features in XSU: "(1) SAX2 output from any SQL query for handling XML query output of arbitrary size in custom programs or SAX filters; (2) Full support for any JDBC driver, removing previous restrictions...; (3) Initial XML Schema support, allowing you to produce inline XML Schema for the XML result of any SQL query; (4) New support for retrieving data as XML attributes instead of elements by using standard SQL column aliasing."
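The ROWSET/ROW shape of XSU's canonical query output, and the alias-as-attribute idea in feature (4), can be sketched as follows. The query, column names, and data are invented; the '@'-prefixed alias convention is shown only as an illustration of attribute mapping, not as XSU's exact syntax:

```python
import xml.etree.ElementTree as ET

# Sketch of the ROWSET/ROW document shape XSU produces from a query result.
def rows_to_xml(rows):
    rowset = ET.Element("ROWSET")
    for num, row in enumerate(rows, start=1):
        row_el = ET.SubElement(rowset, "ROW", num=str(num))
        for col, val in row.items():
            if col.startswith("@"):          # alias-as-attribute illustration
                row_el.set(col[1:], str(val))
            else:                            # default: one element per column
                ET.SubElement(row_el, col).text = str(val)
    return rowset

# e.g., for a query like: SELECT empno AS "@EMPNO", ename AS ENAME FROM emp
rows = [{"@EMPNO": 7839, "ENAME": "KING"}, {"@EMPNO": 7844, "ENAME": "TURNER"}]
doc = rows_to_xml(rows)
print(ET.tostring(doc, encoding="unicode"))
```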

  • [November 14, 2000]   SpeechObjects Specification Published as a W3C NOTE.    The W3C has acknowledged a submission from Nuance Communications, Inc. for a SpeechObjects Specification Version 1.0. Reference: W3C Note 14-November-2000, edited by Daniel C. Burnett. Document abstract: "This document describes SpeechObjects, a core set of reusable dialog components that are callable through a dialog markup language such as VoiceXML, to perform specific dialog tasks, for example, get a date or a credit card number, etc. The major goal of SpeechObjects is to complement the capabilities of the dialog markup language and to leverage best practices and reusable component technology in the development of speech applications." Description: "SpeechObjects are reusable software components that encapsulate discrete pieces of conversational dialog. SpeechObjects are based on an open architecture that can be deployed on any of the major server and IVR (interactive voice response) platforms. This paper describes a specification based on Nuance's Java implementation of SpeechObjects. Simply stated, a SpeechObject is a reusable software component that implements a dialog flow and is packaged with the audio prompts and recognition grammars that support that dialog. An implementation of the foundation set of SpeechObjects, including source code, is freely available to the SpeechObjects developer community as part of Nuance's Open Voice Framework initiative." The specification from Nuance is set against the backdrop of work conducted in the W3C Voice Browser Working Group, which "has determined requirements for several specifications including one for a Reusable Dialog Component Requirements." According to the W3C staff comment: "W3C is working to expand access to the Web to allow people to interact with Web sites via spoken commands, and listening to prerecorded speech, music and synthetic speech. 
The W3C Voice Browser Activity has produced a set of requirements for interactive voice response applications and is now developing a set of specifications that meet these requirements... The W3C Voice Browser Working Group plans to develop specifications for its Speech Interface Framework using SpeechObjects as a model for work on reusable dialog components. This work is already underway, following the publication of a requirements draft for reusable dialog components. A specification meeting these requirements is under development, with the goal of being used together with W3C's dialog markup language. It is recommended that the Nuance Communications SpeechObjects submission be carefully examined in the context of this work." See further: (1) the W3C Voice Browser Activity and (2) "VoiceXML."

  • [November 14, 2000]   DOM Level 2 Published As a W3C Recommendation.    W3C has released the Document Object Model (DOM) Level 2 Core Specification Version 1.0 and its associated modules as a W3C Recommendation. Core Reference: W3C Recommendation 13-November-2000, edited by Arnaud Le Hors, Philippe Le Hégaret, Lauren Wood (WG Chair), Gavin Nicol, Jonathan Robie, Mike Champion, and Steve Byrne. Four other modules released with the Core include: (1) Document Object Model (DOM) Level 2 Views Specification; (2) Document Object Model (DOM) Level 2 Events Specification; (3) Document Object Model (DOM) Level 2 Style Specification; (4) Document Object Model (DOM) Level 2 Traversal and Range Specification. At the same time, a working draft has been issued for Document Object Model (DOM) Level 2 HTML Specification (to ensure backwards compatibility). Excerpts from the W3C press release: "Leading the Web to its full potential, the World Wide Web Consortium (W3C) today released the Document Object Model Level 2 specification as a W3C Recommendation. The specification reflects cross-industry agreement on a standard API (Applications Programming Interface) for manipulating documents and data through a programming language (such as Java or ECMAScript). A W3C Recommendation indicates that a specification is stable, contributes to Web interoperability, and has been reviewed by the W3C Membership, who favor its adoption by the industry. Created and developed by the W3C Document Object Model (DOM) Working Group, this specification extends the platform- and language-neutral interface to access and update dynamically a document's content, structure, and style first described by the DOM Level 1 Recommendation. 
The DOM Level 2 provides a standard set of objects for representing Extensible Markup Language (XML) documents and data, including namespace support, a style sheet platform which adds support for CSS 1 and 2, a standard model of how these objects may be combined, and a standard interface for accessing and manipulating them. DOM Level 1 was designed for HTML 4.0 and XML 1.0. With DOM Level 2, authors can take further advantage of the extensibility of XML. Simply put, anywhere you use XML, you can now use the DOM to manipulate it. The standard DOM interface makes it possible to write software (similar to plug-ins) for processing customized tag-sets in a language- and platform-independent way. A standard API makes it easier to develop modules that can be re-used in different applications. DOM Level 2 provides support for XML namespaces, extending and improving the XML platform. As more sites move to XML for content delivery, DOM Level 2 emerges as a critical tool for developing dynamic Web content. The DOM defines a standard API that allows authors to write programs that work without changes across tools and browsers from different vendors. But beyond this, it provides a uniform way to produce programs that work across a variety of different devices, so all may benefit from dynamically generated content... The DOM Level 2 Cascading Style Sheet (CSS) API makes it possible for a script author to access and manipulate style information associated with contents, while preserving accessibility. DOM Level 2 also includes an Events API to provide interactivity anywhere someone uses XML - in documents, in data, or in B2B applications..." For related references, see: (1) testimonials for the DOM Level 2 Recommendation, (2) the DOM Activity report, and (3) "W3C Document Object Model (DOM)."
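Python's built-in minidom implements the DOM Level 2 Core interfaces, so the namespace support described above can be demonstrated directly; the namespace URI and element names are made up for the example:

```python
from xml.dom.minidom import getDOMImplementation

# Build a namespaced document with the DOM Level 2 *NS methods.
impl = getDOMImplementation()
doc = impl.createDocument("http://example.org/inv", "inv:invoice", None)
root = doc.documentElement
item = doc.createElementNS("http://example.org/inv", "inv:item")
item.setAttribute("sku", "A-100")
root.appendChild(item)

# DOM Level 2 lookup is by (namespace URI, local name), not by prefix,
# so the code keeps working whatever prefix a document happens to use.
hits = doc.getElementsByTagNameNS("http://example.org/inv", "item")
print(len(hits), hits[0].namespaceURI)
```

The same calls (`createElementNS`, `getElementsByTagNameNS`) exist with identical semantics in ECMAScript and Java DOM implementations, which is exactly the cross-vendor portability the Recommendation aims at.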

  • [November 13, 2000]   Comprehensive Real Estate Transaction Markup Language (CRTML).    The Alliance for Advanced Real Estate Transaction Technology (AARTT) recently announced an initiative "to create open standards for data exchange within the real estate industry in order to streamline the online home-buying and selling process. The initiative is called CRTML (Comprehensive Real Estate Transaction Markup Language). Member companies include: 9keys, AppraisalHub, Bowstreet, Commission Advance, Deloitte & Touche, GHR Systems, Homeadvisor Technologies Inc., Homebid, iLumin, Inciscent, InfoStream, Instanet Forms, InteliTouch, Interealty, iProperty, MarketLinx, Property I.D., Supra Products, and VISTAinfo. The mission of AARTT is to promote and coordinate data interchange standards for the Real Estate industry, based on XML, that will significantly enhance and automate all key elements of Real Estate transactions, allowing strong alliances to be forged between Real Estate technology providers to foster end-to-end solutions for the industry and, by doing this, to accelerate the migration of existing industry participants' core business processes towards fully integrated and streamlined Real Estate transactions. The initial objectives of AARTT are: (1) to coordinate the development of standards between the various groups -- RETS, MISMO, LegalXML, etc., (2) to promote the development of standards in areas of the industry not covered by existing initiatives, and (3) to develop interoperability standards between segments of the industry. The results of this effort, in cooperation with the segment-specific standards bodies, will be what we call a Comprehensive Real Estate Transaction Markup Language (CRTML), which adds an interoperability standard so that each of the segment XML standards can talk with one another without friction. 
It is AARTT's goal to incorporate current schemas wherever practical and participate in an open dialog with all recognized XML workgroups currently active in the Real Estate sector. At the same time, AARTT will continue forging efficiently ahead to develop CRTML which will be designed to augment and fill the gaps in existing schemas while forming the agreed upon foundation for data-interchange between all parties in the Alliance. Each Alliance partner company will agree to incorporate the CRTML standard into its products as soon as possible after release of the specification. By analyzing the core data elements that are required by all participants in order to transact real estate, CRTML will be able to significantly speed up the process of delivering on the promise of seamless data interchange, and efficient, end-to-end, single-point of data entry systems." For references and related initiatives, see "Comprehensive Real Estate Transaction Markup Language (CRTML)."

  • [November 13, 2000]   MathML 2.0 Published as W3C Candidate Recommendation.    As part of the W3C User Interface Domain activity, the W3C Math Working Group has produced a Candidate Recommendation specification for Mathematical Markup Language (MathML) Version 2.0. Reference: W3C Candidate Recommendation 13-November-2000, edited by David Carlisle (NAG), Patrick Ion (Mathematical Reviews, American Mathematical Society), Robert Miner (Design Science, Inc.), and Nico Poppelier (Penta Scope). Document abstract: "This specification defines the Mathematical Markup Language, or MathML. MathML is an XML application for describing mathematical notation and capturing both its structure and content. The goal of MathML is to enable mathematics to be served, received, and processed on the World Wide Web, just as HTML has enabled this functionality for text. This specification of the markup language MathML is intended primarily for a readership consisting of those who will be developing or implementing renderers or editors using it, or software that will communicate using MathML as a protocol for input or output. It is not a User's Guide but rather a reference document. This document begins with background information on mathematical notation, the problems it poses, and the philosophy underlying the solutions MathML proposes. MathML can be used to encode both mathematical notation and mathematical content. About thirty of the MathML tags describe abstract notational structures, while about one hundred and fifty more provide a way of unambiguously specifying the intended meaning of an expression. Additional chapters discuss how the MathML content and presentation elements interact, and how MathML renderers might be implemented and should interact with browsers. Finally, this document addresses the issue of MathML characters and their relation to fonts. 
While MathML is human-readable, it is anticipated that, in all but the simplest cases, authors will use equation editors, conversion programs, and other specialized software tools to generate MathML. Several early versions of such MathML tools already exist, and a number of others, both freely available software and commercial products, are under development." Document revisions: "Chapters 1 and 2, which are introductory material, have been revised to reflect the changes elsewhere in the document, and in the rapidly evolving Web environment. Chapters 3 and 4 have been extended to describe new functionalities added as well as smaller improvements of material already proposed. Chapter 5 has been newly written to reflect changes in the technology available. The major tables in Chapter 6 have been regenerated and reorganized to reflect an improved list of characters useful for mathematics, and the text revised to reflect the new situation in regard to Unicode. Chapter 7 has been completely revised since Web technology has changed. A new Chapter 8 on the DOM for MathML has been added; the latter points to new appendices D and E for detailed listings." Available also as HTML zip archive, XHTML zip archive, XML zip archive, PDF (screen), PDF (paper). See additionally: (1) W3C Math activity; (2) Math Working Group mailing list archives; (3) Math Working Group Charter; (4) "Mathematical Markup Language (MathML)."
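The notation/content split can be seen in a tiny example: the expression x + 2 encoded both ways. The markup below uses genuine MathML element names; the Python code merely parses it for inspection:

```python
import xml.etree.ElementTree as ET

# Presentation markup describes the notational structure of x + 2 ...
presentation = """<math>
  <mrow><mi>x</mi><mo>+</mo><mn>2</mn></mrow>
</math>"""

# ... while content markup specifies its meaning unambiguously.
content = """<math>
  <apply><plus/><ci>x</ci><cn>2</cn></apply>
</math>"""

p = ET.fromstring(presentation)
c = ET.fromstring(content)
print([e.tag for e in p.find("mrow")])   # identifier, operator, number
print([e.tag for e in c.find("apply")])  # operator applied to two arguments
```

A renderer lays out the first form; a computer algebra system can evaluate the second; MathML 2.0 also lets the two be combined in one expression.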

  • [November 10, 2000]   XML Database Products List Updated.    Ronald Bourret announced a major update to his document/database 'XML Database Products': "I've added roughly 20 new products, added a new category for native XML databases, and substantially rewritten a number of product descriptions, especially in the areas of XML-Enabled Databases and XML Servers... In this Web page, I have tried to capture the current state of the market, gathered from Web sites, product reviews, XML webzines, XML resource guides, and email from product users and developers." The 'XML Database Products' listing has a companion resource, XML and Databases, which supplies a "description of how to use XML with databases." In the revised listing, Bourret introduces and describes products in seven categories: "(1) Middleware: Software you call from your application to transfer data between XML documents and databases; (2) XML-Enabled Databases: Databases with extensions for transferring data between XML documents and themselves; (3) Native XML Databases: Databases that store XML in 'native' form. The term is not well defined, but these are designed to maintain the structure of XML documents; (4) XML Servers: Platforms that serve data -- in the form of XML documents -- to and from distributed applications, such as e-commerce and business-to-business applications; (5) XML Application Servers: Web application servers that serve XML -- usually built from dynamic Web pages -- to browsers; (6) Content Management Systems: Systems for managing fragments of human-readable documents and include support for editing, version control, and building new documents from existing fragments; (7) Persistent DOM Implementations: DOM implementations that use a database for speed and to avoid memory limits." For related listings, see (1) the "Free XML Software" listing from Lars Marius Garshol, and (2) "XML and Databases."

  • [November 08, 2000]   UK GovTalk Web Site Opened.    The UK e-Government Interoperability Framework (e-GIF) is now supported in its schema development program by a reference portal, the UK GovTalk web site. The e-GIF initiative was announced earlier as a framework focused upon the adoption of Internet and World Wide Web standards for all UK government systems. It represents a "strategic decision to adopt XML and XSL as the core standards for data integration and presentation. This includes the definition and central provision of XML schemas for use throughout the public sector. The e-GIF also adopts standards that are well supported in the market place. It is a pragmatic strategy that aims to reduce cost and risk for government systems whilst aligning them to the global Internet revolution. Adherence to the e-GIF standards and policies is mandatory." The new UK GovTalk portal is part of this implementation strategy for the e-Government Interoperability Framework aimed at achieving seamless electronic government. "The purpose of the UK GovTalk web site is to enable the Public Sector, Industry and other interested parties to work together in developing and agreeing policies and standards for e-government. This is achieved through the UK GovTalk RFP and RFC processes. The site also provides repositories for draft and agreed schemas, toolkits, best practice and relevant information for the running of the e-GIF programme. XML schemas will be developed by specialist groups, established to support specific projects, or by open submission to the UK GovTalk web site either in response to a Request for Proposals or as an unsolicited proposal. In each case, the UK GovTalk Group will manage the acceptance, publication, and any subsequent change requests for the schema. XML schemas that have been accepted by the group will be published on UK GovTalk and will be open for public comment and requests for change." 
The UK GovTalk portal now provides access to a number of RFPs and other schema-related materials. For project description and references, see "UK e-Government Interoperability Framework (e-GIF)."

  • [November 08, 2000]   XML Schema for DocBook.    Norman Walsh (Chair, DocBook Technical Committee) announced a provisional draft DocBook XML Schema, available on the Sun Microsystems DeveloperConnection web site. Norm writes: "The DocBook XML Schema attempts to be an accurate translation of the DocBook XML V4.1.2 DTD. In this version, the parameterization of the schema is roughly identical to the parameterization of the DTD. This may change as I begin to experiment with the construction of derivative schemas. [This] DocBook XML Schema V4.1.2.1 'Alpha Release' is an experimental release. It validates with XSV version 1.166/1.77 of 2000/09/28 15:54:50 on my system. I welcome reports of success or failure with other XML Schema validation tools. The namespace names (URIs) used in this schema are purely imaginary. They have no official status, nor do they foreshadow the future existence of any similar official URIs. I had to use something. The DocBook XML Schema is known to differ from the DocBook DTD in the following ways: (1) There are no named character entities. You can't define those in XML Schema. (2) The table model is the OASIS Exchange Table Model, not the CALS Table Model. This table model is less rich than the CALS model, lacking spanspec, tfoot, and a few other things. (3) Inside the table model, the tgroup element and all of its descendants are in a different namespace. (4) There are bugs, perhaps dozens, possibly hundreds. With the exception of tables, which will definitely require some markup changes in the instances, documents that are valid against the DTD should be valid against this schema..." For related references, see "DocBook XML DTD."

  • [November 07, 2000]   XML Used in the MPEG-7 Description Definition Language.    The Moving Picture Coding Experts Group (MPEG) is "a working group of ISO/IEC in charge of the development of international standards for compression, decompression, processing, and coded representation of moving pictures, audio and their combination. MPEG has started work on a new standard known as MPEG-7: a content representation standard for information search, scheduled for completion in Fall 2001. The main tools used to implement MPEG-7 descriptions are the Description Definition Language (DDL), Description Schemes (DSs), and Descriptors (Ds). Descriptors bind a feature to a set of values. Description Schemes are models of the multimedia objects and of the universes that they represent, e.g., the data model of the description. They specify the types of the descriptors that can be used in a given description, and the relationships between these descriptors or between other Description Schemes. The DDL forms a core part of the MPEG-7 standard. It provides the solid descriptive foundation by which users can create their own Description Schemes and Descriptors. The DDL defines the syntactic rules to express and combine Description Schemes and Descriptors. The DDL must satisfy the MPEG-7 DDL requirements. It has to be able to express spatial, temporal, structural, and conceptual relationships between the elements of a DS, and between DSs. It must provide a rich model for links and references between one or more descriptions and the data that they describe. It also has to be capable of validating descriptor data types, both primitive (integer, text, date, time) and composite (histograms, enumerated types). In addition, it must be platform and application independent and human- and machine-readable. The general consensus within MPEG-7 is that it should be based on XML syntax... 
The DDL design has been informed by numerous proposals and input documents submitted to the MPEG-7 DDL AHG since the MPEG-7 Call for Proposals in October 1998. It has also been heavily influenced by W3C's XML Schema Language and the Resource Description Framework (RDF). At the 51st MPEG meeting in Noordwijkerhout in March 2000, it was decided to adopt XML Schema Language as the MPEG-7 DDL. However the DDL will require some specific extensions to XML Schema Language to satisfy all of the requirements of MPEG-7. Some of the required extensions are described here. However their precise implementation is still being investigated and further extensions may be required... The following features will need to be added to the XML Schema Language specification in order to satisfy specific MPEG-7 requirements: Parameterized array sizes; Typed references; Built-in array and matrix datatypes; Enumerated datatypes for MimeType, CountryCode, RegionCode, CurrencyCode and CharacterSetCode [Viz., (1) MimeType - IANA list of Mime Types (type= IANA-MimeType); (2) Country Code - ISO3166-1:1997 (type="ISO3166-1CountryCode"); (3) Region Code - ISO3166-2:1998 (type="ISO3166-2RegionCode"); (4) Currency Code - ISO4217:1995 (type="ISO4217CurrencyCode"); (5) Character Set Code - IANA List of Character Sets (type="IANA-CharacterSetCode")]. MPEG-7-specific parsers will be developed by adding validation of these additional constructs to standard XML Schema parsers..." See details in (1) MPEG-7 DDL Working Draft 4.0. ISO/IEC JTC1/SC29/WG11 N3575, MPEG 00/N3575. Edited by Jane Hunter (DSTC Pty Ltd). Beijing, July 2000; (2) "Moving Picture Experts Group: MPEG-7 Standard."
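A sketch of what one of the enumerated code datatypes might look like in XML Schema follows. The three values are illustrative (a real ISO3166-1CountryCode type would enumerate every ISO 3166-1:1997 code), and the schema text is a guess at the style, not an excerpt from the DDL working draft:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML Schema enumeration for a country-code datatype; only
# three example values are listed, for brevity.
schema = """<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xsd:simpleType name="ISO3166-1CountryCode">
    <xsd:restriction base="xsd:string">
      <xsd:enumeration value="AU"/>
      <xsd:enumeration value="DE"/>
      <xsd:enumeration value="US"/>
    </xsd:restriction>
  </xsd:simpleType>
</xsd:schema>"""

XSD = "{http://www.w3.org/2001/XMLSchema}"
root = ET.fromstring(schema)
values = [e.get("value") for e in root.iter(XSD + "enumeration")]
print(values)
```

A schema-validating parser would then reject any description whose country-code element carries a value outside the enumerated list, which is precisely the validation role the DDL assigns to such datatypes.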

  • [November 07, 2000]   MusicXML DTD Version 0.1.    A communiqué from Michael Good (President, Recordare) reports that version 0.1 of the MusicXML DTD is now available for download from Recordare's web site. MusicXML "is designed to represent musical scores, specifically common western musical notation from the 17th century onwards. It is designed as an interchange format for notation, analysis, retrieval, and performance applications. MusicXML's design is based on the MuseData and Humdrum formats, two of the most significant pre-XML representation languages for musical scores. Humdrum explicitly represents the two-dimensional nature of musical scores by a 2-D layout notation. Since XML is a hierarchical format, we cannot do this directly. Instead, there are two top-level formats: (1) partwise.dtd, containing measures within each part, and (2) timewise.dtd, containing parts within each measure. Two XSLT stylesheets are provided to convert between the two formats. The partwise and timewise score DTDs represent a single movement of music. Multiple movements or other musical collections are represented using opus.dtd. The opus document contains XLinks to individual scores, and will evolve to include more detailed reference and musicological information. This version of MusicXML has been tested with software that (1) Reads from MuseData, NIFF, and Finale Enigma Transportable Files; (2) Writes to Standard MIDI Files (Format 1), MuseData files, and the Sibelius and Finale applications. The MuseData coverage is 100% for both reading and writing. The other formats have more partial coverage." The project web site provides several other resources, including examples and detailed description of the MusicXML markup language. For example, see "Representing Music Using XML", Good's abstract for a poster session presented at the International Symposium on Music Information Retrieval October 23-25, 2000 in Plymouth, Massachusetts. 
For other SGML/XML DTDs used in musical notation, see "XML and Music."
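The partwise/timewise duality, and the pivoting role played by the two XSLT stylesheets, can be sketched as follows. The fragment is schematic: empty measures stand in for real MusicXML note content, though score-partwise, score-timewise, part, and measure are the genuine element names:

```python
import xml.etree.ElementTree as ET

# The same two-part, two-measure fragment in the partwise ordering:
# measures nested within each part.
partwise = ET.fromstring("""<score-partwise>
  <part id="P1"><measure number="1"/><measure number="2"/></part>
  <part id="P2"><measure number="1"/><measure number="2"/></part>
</score-partwise>""")

def to_timewise(score):
    """Pivot part/measure to measure/part, as the provided XSLT sheets do."""
    out = ET.Element("score-timewise")
    measures = {}
    for part in score.findall("part"):
        for m in part.findall("measure"):
            num = m.get("number")
            if num not in measures:
                measures[num] = ET.SubElement(out, "measure", number=num)
            ET.SubElement(measures[num], "part", id=part.get("id"))
    return out

timewise = to_timewise(partwise)
print(ET.tostring(timewise, encoding="unicode"))
```

Which ordering to use depends on the application: a notation editor usually wants partwise (one instrument's line at a time), while analysis tools often prefer timewise (everything sounding in a given measure together).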

  • [November 07, 2000]   Visualizing XSLT: Zvon XSLTracer.    Jiri Jirat posted an announcement for the 'Zvon XSLTracer' tool which enables one to visualize the processing of an XML file by an XSLT stylesheet and XSLT processor. XSLTracer "traces the evaluated XSLT instructions there and back" and "simultaneously shows the XML node being currently processed." During tracing it also "displays (1) the name of the currently processed XML element or attribute; (2) full XPath of the currently processed XML element or attribute; (3) values of parameters and variables; (4) all nodes of node-set which is matched by select expression in xsl:apply-templates or xsl:for-each; (5) values returned by xsl:value-of." XSLTracer is available for download. Related tools and tutorials are available on the Zvon web site. See other software tools in "XSL/XSLT Software Support."

  • [November 06, 2000]   NISO Standards Free on the Web.    Pat Harris (Executive Director, National Information Standards Organization - NISO) recently posted an announcement to the effect that "all NISO standards and technical reports are now available for free in downloadable PDF files from the NISO website. From the NISO homepage, click on the NISO Press icon, and then click on Standards, Books and Software. You can search for the title you want or review a list of all the approved and published NISO standards and technical reports. 'Very few standards developers have elected to make their standards available for free on the web,' noted NISO chair Don Muccino. 'NISO is clearly taking the lead in making its standards freely available to support the widest possible dissemination of our publications. The NISO Board believes that free distribution and easy access to our standards supports implementation.' This new NISO service is made possible by NISO's seventy-five Voting Members and the libraries supporting the NISO Information Standards Forum. NISO will continue to sell its standards in hardcopy both on the web and through NISO Press Fulfillment." A number of NISO's technical initiatives and standards relate to SGML and XML, including: (1) Draft Standard Z39.85-200X, The Dublin Core Metadata Element Set (see the Dublin Core Metadata Element Set TC, NISO Fast Track); (2) the Digital Talking Book; (3) Technical Metadata for Digital Still Images (Data Dictionary -- Technical Metadata for Digital Still Images, per DIG35: Metadata Standard for Digital Images recommendations); (4) ANSI/NISO/ISO 12083 Electronic Manuscript Preparation and Markup ("The standard specifies the SGML declaration defining the syntax used by the document type definitions [DTD] and document instances, and a definition for mathematics which may be embedded in other SGML applications").

  • [November 06, 2000]   SODA2: An XML Semistructured Database System.    A research team at the School of Computer Science and Engineering, University of New South Wales, Sydney, Australia is developing an XML (semistructured) database system called SODA2. Project principals include Raymond Wong, Franky Lam, Michael Barg, and Milivoj Savin. SODA2 ('Semistructured Object DAtabase, Version 2') is "a client-server, semistructured database system which is tailor-made for managing XML information. Query processing and optimization are implemented in and executed by clients while the server is responsible for storing and retrieving objects; handling transactions, object locks, garbage collection, database backups and recovery. Object access management policies and transaction models can be changed to fit the needs of a specific application without affecting the application code. Online database backup is supported without stopping the client working with the database. A lazy object conversion approach is used for versioning. Different clients can simultaneously work with different versions of DTD. The novel SODA2 architecture facilitates several crucial features which are seldom available in other database systems. The SODA2 query processor is mainly located at the client side. Each query processor contains an internal query translator that maps a query from one language into a SODA2 internal micro-query language. Therefore, SODA2 supports multiple query languages which include XPath expressions, XQL and XML-QL to date (SODA QL is supported for downward compatibility with SODA version 1). Web and e-commerce applications can be built by linking to the SODA2 client library. The library interface supports embedded query languages such as XPath, XQL, and XML-QL for rapid application development. 
The XML parser or loader is itself a database client, and multiple loaders can be run simultaneously to load multiple documents while the database is being updated concurrently by multiple users at the same time. This feature is a must for large-scale enterprise applications, as opposed to small corporate or personal applications. An advanced wrapper system plays an important role in SODA2, as it provides a bridge between the SODA2 database and different data types or data sources, for instance, emails, HTML, SGML, RTF, EDI, and so on. The SODA2 server itself consists of a number of components. Each of these components is responsible for its own task and interacts with other components by means of a strictly defined interface. These components include a storage memory manager, an access control manager, a transaction manager, a page pool manager, an object access manager and an index manager. The modular design makes it possible for SODA2 to choose different implementations for each component and also to fine-tune SODA2 according to the efficiency of various database management algorithms and strategies for specific application requirements. The SODA2 server can hook into other relational database systems such as Oracle or Sybase through an ODBC interface. The underlying physical repository supports the standard DOM, and SOM (SODA Object Model), which provides a system-level interface to the SODA2 physical storage. Compression and low-level optimization are supported with meta information such as DTD and XSchema defined by the users or automatically learnt from the XML documents. 
Advantages of SODA2 over a relational DBMS: (1) Data is stored in a single tree, whereas in a relational DBMS adding fields to just one record involves restructuring of entire tables; (2) Adding fields to a record is a trivial operation which does not affect other records; (3) Most changes to database will not break old clients, due to the flexible nature of the SODA2 XML query language; (4) XML data is stored in a tree structure which preserves all XML information and allows efficient query and update of this information at the server level; (5) Tree structure allows some query optimisation which is not possible with table structure; (6) XML structure implicitly holds a lot of information about relationships between data - query language is designed to take advantage of this; (7) It's designed and built specifically for storing and querying XML data." For related XML resources, see: (1) "SODA2 - An XML Semistructured Database System"; (2) Ronald Bourret's document "XML Database Products"; (3) "XML and Databases"; (4) "XML and Query Languages."
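The flexibility claimed in advantage (2) -- records in one tree need not share fields, and queries still work -- can be seen with plain XPath-style querying over an XML tree. This sketch uses Python's standard library rather than SODA2 itself; the document and field names are invented for illustration.

```python
import xml.etree.ElementTree as ET

# A small "semistructured" document: the two <person> records do not
# share the same fields, and no schema change was needed for that.
doc = ET.fromstring("""
<people>
  <person><name>Ada</name><email>ada@example.org</email></person>
  <person><name>Alan</name></person>
</people>
""")

# ElementTree's limited XPath subset is enough to query the tree directly.
names = [p.findtext("name") for p in doc.findall("./person")]
# The [email] predicate selects only records that carry the extra field.
with_email = [p.findtext("name") for p in doc.findall("./person[email]")]
print(names)       # ['Ada', 'Alan']
print(with_email)  # ['Ada']
```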

  • [November 06, 2000]   dbXML Core Edition Version 0.3.    Kimbro Staken recently announced the release of dbXML Core Edition Version 0.3. The dbXML Core Edition is a "data management system designed specifically for collections of XML documents and is easily embedded into existing applications, highly configurable, and openly extensible. The source code has been released under the GNU Lesser General Public License and is available online at the dbXML Group's Core Edition web site. This release updates the dbXML distribution adding many new features. (1) Initial Compressed DOM implementation. (2) Basic indexing system. (3) Server side auto-linking of database resources. (4) Experimental support for XPath querying. (5) XML Schema Compiler. (6) SOAP Support -- All dbXML XMLObjects, Procedures and stored documents are automatically exposed by the server as SOAP services. (7) Command line administration tools. (8) Better documentation and examples. The dbXML Core Edition is available for download from the website." For related resources, see "XML and Databases."

  • [November 06, 2000]   W3C XHTML Basic Advanced to Proposed Recommendation Status.    W3C has announced the release of XHTML Basic as a Proposed Recommendation. Reference: W3C Proposed Recommendation 3-November-2000, edited by Mark Baker (Sun Microsystems), Masayasu Ishikawa (W3C), Shinichi Matsui (Panasonic), Peter Stark (Ericsson), Ted Wugofski, and Toshihiko Yamakami (ACCESS Co., Ltd.). The PR document "has been produced as part of the W3C HTML Activity, and it has been prepared by the Mobile Subgroup of the W3C HTML Working Group based on input from the WAP Forum's Application group and members of the W3C Mobile Access Interest Group. This document will be used by the Mobile Subgroup of the W3C HTML Working Group and the W3C Mobile Access Interest Group to find a common ground for future markup languages aimed at content for small information appliances." Document abstract: "The XHTML Basic document type includes the minimal set of modules required to be an XHTML Host Language document type, and in addition it includes images, forms, basic tables, and object support. It is designed for Web clients that do not support the full set of XHTML features; for example, Web clients such as mobile phones, PDAs, pagers, and set-top boxes. The document type is rich enough for content authoring. XHTML Basic is designed as a common base that may be extended. For example, an event module that is more generic than the traditional HTML 4 event system could be added or it could be extended by additional modules from XHTML Modularization such as the Script Module. The point is that XHTML Basic always is the common language that user agents support. The document type definition is implemented using XHTML modules as defined in "Modularization of XHTML". Design rationale: "HTML 4 was designed for large devices: overlapping windows/frames, menus, a mouse pointing device, a high-powered CPU, and a large power supply. 
Requiring a full-fledged computer for access to the World Wide Web excludes a large portion of the population from consumer device access of online information and services. Because there are many ways to subset HTML, there are many almost identical subsets defined by organizations and companies. Without a common base set of features, developing applications for a wide range of Web clients is difficult. The motivation for XHTML Basic is to provide an XHTML document type that can be shared across communities (e.g., desktop, TV, and mobile phones), and that is rich enough to be used for simple content authoring. New community-wide document types can be defined by extending XHTML Basic in such a way that XHTML Basic documents are in the set of valid documents of the new document type. Thus an XHTML Basic document can be presented on the maximum number of Web clients." See related references in "XHTML and 'XML-Based' HTML Modules."
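To make the "minimal set of modules" idea concrete, here is a hypothetical document of the kind XHTML Basic targets: structure, text, and links only, with no frames or event attributes. Checking true conformance would mean validating against the XHTML Basic DTD; the Python snippet below only demonstrates well-formed, namespace-qualified markup with the standard library.

```python
import xml.etree.ElementTree as ET

# A made-up minimal page: only basic-module elements appear.
page = """\
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Timetable</title></head>
  <body>
    <h1>Departures</h1>
    <p><a href="next.xhtml">Next page</a></p>
  </body>
</html>"""

ns = "{http://www.w3.org/1999/xhtml}"
root = ET.fromstring(page)           # well-formedness check
assert root.tag == ns + "html"       # correct XHTML namespace
tags = {el.tag.removeprefix(ns) for el in root.iter()}
print(sorted(tags))  # ['a', 'body', 'h1', 'head', 'html', 'p', 'title']
```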

  • [November 06, 2000]   RQL: A Proposed RDF Query Language.    Greg Karvounarakis (ICS-FORTH Institute of Computer Science) posted an announcement for a proposed RDF query language. The document is available online. Abstract: "Information systems such as organizational memories, vertical aggregators, infomediaries, etc. are expected to play a central role in the 21st-century economy by enabling the development and maintenance of specific communities of interest (e.g., enterprise, professional, trading) on corporate intranets or the Web. Such Community Web Portals essentially provide the means to select, classify and access, in a semantically meaningful and ubiquitous way, various information resources (e.g., sites, documents, data) for diverse target audiences (corporate, inter-enterprise, e-marketplace, etc.). Yet, in commercial software for deploying Community Portals, querying is still limited to full-text (or attribute-value) retrieval and more advanced information-seeking needs require navigational access. Furthermore, recent Web standards for describing resources [W3C Metadata Activity: RDF/ RDF Schema] are completely ignored. Moreover, standard (relational or object) databases are too rigid for capturing the peculiarities of RDF descriptions and schemas. Motivated by the above issues, we propose a new data model and a query language for RDF descriptions and schemas. Our language, called RQL, relies on a formal graph model, that captures the RDF modeling primitives, also providing a richer type system, and permits the interpretation of RDF descriptions by means of one or more schemas. In this context, RQL adapts the functionality of semistructured query languages to the peculiarities of RDF but also extends this functionality in order to query RDF schemas. 
The novelty of RQL lies in its ability to smoothly switch between schema and data querying while exploiting - in a transparent way - the taxonomies of labels and multiple classification of resources." For related references, see "Resource Description Framework (RDF)."
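The "transparent exploitation of taxonomies" the abstract describes can be sketched with a toy triple store: a query for instances of a class should return instances of its subclasses as well, without the query saying so. This is not RQL syntax or the RQL data model, just an illustration with an invented vocabulary.

```python
# A toy RDF-style triple store. The names are invented for illustration.
triples = {
    ("Report", "subClassOf", "Document"),
    ("doc1", "type", "Document"),
    ("rep1", "type", "Report"),
}

def subclasses(cls):
    """cls plus everything declared (transitively) beneath it in the schema."""
    found = {cls}
    for s, p, o in triples:
        if p == "subClassOf" and o in found:
            found |= subclasses(s)
    return found

def instances_of(cls):
    # Data query that transparently consults the schema taxonomy.
    classes = subclasses(cls)
    return {s for s, p, o in triples if p == "type" and o in classes}

print(sorted(instances_of("Document")))  # ['doc1', 'rep1']
```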

  • [November 03, 2000]   Fujitsu XLink Processor.    The Fujitsu XLink Processor, developed by Fujitsu Laboratories Ltd., is an implementation of XLink and XPointer. "This processor supports XML Linking Language (XLink) Version 1.0 Candidate Recommendation. You may use this processor and other included programs without charge for 60 days after the installation. You must read the 'Fujitsu XLink Processor License' before you begin your installation..." Principal features: "Multi-Platform: Developed with Java, this processor can be used on many platforms which support Java Runtime Environment. Support for XLink Ver.1.0CR: This processor supports XLink Ver.1.0CR, which is now being discussed in W3C. XLink/XPointer processing with DOM: This processor works with DOM. This processor can work with any XML processor or parser which can create DOM, on condition that an appropriate interface between this processor and it is implemented. Supported Features: XLink features: simple-type, extended-type, locator-type, resource-type, and arc-type elements and their related attributes; title-type elements; linkbases. XPointer Features: Bare Names, Child Sequences. The 'XLink Tree Demo Application' is a tree browser to demonstrate several functions of XLink and XPointer. Since it is written in Java, it can be executed on many platforms. URLs: demo application; download; license. Contact: Masatomo Goto. For related software, see "XML Linking Language."
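The distinguishing trait of XLink is that links are recognized by attributes in the XLink namespace, not by element names, which is how a processor like Fujitsu's can discover links in arbitrary markup. The following standard-library sketch (the document and element names are invented) extracts simple-type links that way:

```python
import xml.etree.ElementTree as ET

XLINK = "{http://www.w3.org/1999/xlink}"

# A made-up document using XLink 1.0 simple-type links.
doc = ET.fromstring("""
<catalog xmlns:xlink="http://www.w3.org/1999/xlink">
  <item xlink:type="simple" xlink:href="spec.xml" xlink:title="The spec"/>
  <item xlink:type="simple" xlink:href="errata.xml"/>
  <note>no link here</note>
</catalog>
""")

# Any element with xlink:type="simple" is a link, regardless of its tag.
links = [el.get(XLINK + "href") for el in doc.iter()
         if el.get(XLINK + "type") == "simple"]
print(links)  # ['spec.xml', 'errata.xml']
```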

  • [November 03, 2000]   Revised Candidate Recommendation for Scalable Vector Graphics (SVG) 1.0 Specification.    As part of the W3C Graphics Activity, the W3C SVG Working Group has issued a revised Candidate Recommendation for the Scalable Vector Graphics (SVG) 1.0 Specification. Reference: W3C Candidate Recommendation 02-November-2000, edited by Jon Ferraiolo (Adobe). The CR specification defines the features and syntax for Scalable Vector Graphics (SVG), "a language for describing two-dimensional vector and mixed vector/raster graphics in XML. SVG allows for three types of graphic objects: vector graphic shapes (e.g., paths consisting of straight lines and curves), images and text. Graphical objects can be grouped, styled, transformed and composited into previously rendered objects. The feature set includes nested transformations, clipping paths, alpha masks, filter effects and template objects. SVG drawings can be interactive and dynamic. Animations can be defined and triggered either declaratively (i.e., by embedding SVG animation elements in SVG content) or via scripting. Sophisticated applications of SVG are possible by use of supplemental scripting language with access to SVG's Document Object Model (DOM), which provides complete access to all elements, attributes and properties. A rich set of event handlers such as onmouseover and onclick can be assigned to any SVG graphical object. Because of its compatibility and leveraging of other Web standards, features like scripting can be done on XHTML and SVG elements simultaneously within the same Web page." Status: This revised Candidate Recommendation specification "is being published to reflect minor changes to the specification and editorial updates resulting from implementation feedback. The Candidate Recommendation review period ends when there exists at least one SVG implementation which passes each of the Basic Effectivity (BE) tests in the SVG test suite. 
The implementation status of SVG is already very good, and at this point, most of the tests are passed by one or multiple implementations, but as yet the exit criteria have not been met. It is anticipated that implementation status will be such that the exit criteria will be met in approximately one month [viz., until about December 02, 2000]." Also available in PDF and as a zip archive of HTML. For other references, see "W3C Scalable Vector Graphics (SVG)."
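Since SVG is "graphics in XML," an SVG drawing can be assembled with any XML tooling. The sketch below builds a minimal, hypothetical fragment (one grouped shape plus text, echoing the CR's "vector graphic shapes, images and text") with Python's standard library; it is not drawn from the specification's own examples.

```python
import xml.etree.ElementTree as ET

SVGNS = "http://www.w3.org/2000/svg"
# Serialize SVG elements without a namespace prefix.
ET.register_namespace("", SVGNS)

svg = ET.Element(f"{{{SVGNS}}}svg", width="100", height="60")
# Grouping (<g>) lets the fill style apply to both children at once.
g = ET.SubElement(svg, f"{{{SVGNS}}}g", fill="navy")
ET.SubElement(g, f"{{{SVGNS}}}rect", x="10", y="10", width="40", height="40")
text = ET.SubElement(g, f"{{{SVGNS}}}text", x="55", y="35")
text.text = "SVG"

out = ET.tostring(svg, encoding="unicode")
print(out)
```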

  • [November 03, 2000]   XML Query Engine Version 0.89.    Howard Katz posted an announcement for the availability of XML Query Engine v0.89: "This is a major update that fixes a number of outstanding bugs and adds several new features and optimizations. This version is getting very close to beta territory, since all major features, with the exception of a persisted store, are now in place. Updates include: (1) logical subquery operators 'and' and 'or'; (2) set operators 'union' and 'intersect'; (3) namespace support; (4) setDoFullText(boolean) api for turning off element text indexing (speeds up indexing, query retrieval, and drastically reduces index size in appropriate cases); (5) showDocTree(docID) api for quick visualization of element hierarchy; (6) simple compound-word matching in element content; (7) attribute content moved into index for improved speed and precision; (8) a number of other optimizations, primarily to improve performance and reduce memory footprint during full-text queries." The XML Query Engine is "a JavaBean component that lets you search your XML documents for element, attribute, and full-text content. It can index multiple documents using a SAX parser of your choice. The index, once built, can be queried using XQL, a de facto standard for searching XML that is, very nearly, a proper subset of XPath. XML Query Engine extends XQL's syntax to provide a full-text capability, something lacking in standard XQL. This lets you say such things as Find me the first paragraph within either a division or a chapter that contains both the words 'xml' and 'xsl' or Give me a list of all elements containing an href attribute which points to a '.com' organization. XML Query Engine is an embeddable component that's callable from your application. It requires some straightforward Java programming to wire the query engine to your front-end code. 
The engine uses a result-listener architecture to deliver its results: You register an XQL result listener with the engine before calling your first query. Once your query's been resolved, the result-set document is delivered to your listener's results() method. Query results can be delivered in one of three formats. Two of these are XML, one of which is a standard result format, similar in structure to that returned by other XQL vendors, while the other is specialized to return 'navigational metadata' describing the nodes it contains in terms of their location within their originating documents. You can use this metadata to easily re-navigate, via either SAX or DOM, back into the original documents for further post-processing if desired. The third result-set format is CSV, Comma-Separated-Values, for particularly fast and compact result delivery of navigational metadata. XML Query Engine is a work in progress. The current version is fast approaching beta status. I've implemented all the core XQL features necessary to support full-text capability on top of the standard language. XML Query Engine uses a traditional inverted index scheme to internally track every element, attribute, and the words contained in each for every document you index. Any document to be queried needs to be indexed first. Indexing is the process of pre-building the internal data structures needed to enable subsequent fast retrieval from the indexed documents. Before you can index though, you have to tell the query engine what sorts of things to index or ignore..." See related information in "XML and Query Languages."

  • [November 03, 2000]   M Project: Java XML-Based Messaging System.    Rajiv Mordani (Sun Microsystems) announced the 'M Project' as an "early access, pre-alpha, use-at-your-own-risk-only prototype implementation of an XML-based messaging system. It is based on work currently in progress as part of the ebXML initiative and the Java Community Process JSR-000067. [JSR-000067 'Java APIs for XML Messaging 1.0' (JAXM) provides an API for packaging and transporting business transactions using on-the-wire protocols being defined by OASIS, W3C and IETF.] The overall goal is to provide a prototype for discussion of a messaging system for use in 'B2B' systems. These 'B2B' scenarios are generally conceived of as involving two or more business entities communicating via the Internet (TCP/IP). In particular, a Java application developer should be able to easily communicate with other business entities which have agreed to adhere to specifications which the ebXML initiative has put forth by working with a set of simple Java interfaces. Note that as of this announcement (October 18, 2000), the ebXML specifications upon which this work is based are not final, and therefore this release can not be considered to be an 'implementation' of the ebXML specification(s) upon which it is based." See: (1) the Sun Microsystems web site for other references to Java Technology and XML, and (2) "M Project: Java XML-Based Messaging System."

  • [November 02, 2000]   RDFStore for RDF Model Databases.    Alberto Reggiori recently announced the availability of RDFStore. RDFStore is a set of Perl modules to manage Resource Description Framework (RDF) model databases in an easy and straightforward way. It is a pure Perl implementation of the Draft Java API from the Stanford University DataBase Group by Sergey Melnik, with some additional cool modules to read/write RDF triples directly from the Perl language environment. By using the Perl TIE interface, a generic application script can access RDF triplets using normal key/value hashes; the storage can happen either in in-memory data structures or on the local filesystem by using the DB_File or BerkeleyDB modules. An experimental remote storage service is also provided using a custom module coupled with a fast and performant TCP/IP daemon. The daemon has been written entirely in the C language and actually stores the data in Berkeley DB v1.x files; this software is similar to the rdfdb approach from Guha. The input RDF files are being parsed and processed by using a streaming SiRPAC-like parser completely written in Perl. Such an implementation includes most of the proposed bug fixes and updates as suggested on the W3C RDF-interest-Group mailing list [and SiRPAC discussion]. A strawman parser for a simplified syntax proposed by Jonathan Borden, Jason Diamond and Dan Connolly is also included. By using the Sablotron XSLT engine it is then possible to easily transform XML documents to RDF and query them from the Perl language. Initial RDFStore features include: (1) Modular interface using packages; (2) Perl-way API to fetch, parse, process, store and query RDF models; (3) W3C RDF and strawman syntax parsing; (4) Perl TIE seamless access to RDF triplet databases; (5) Either DB_File and BerkeleyDB support; (6) Automatic Vocabulary generation; (7) Basic RDF Schema support; (8) Initial TCP/IP remote storage service support." For related tools, see "Resource Description Framework (RDF)."
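RDFStore's "tied hash" idea -- triples reached through ordinary key/value subscripting, with the storage backend swappable behind the same interface -- has a direct analogue in Python's mapping protocol. This illustrative class keeps triples in memory (where RDFStore would swap in DB_File or BerkeleyDB); the names and data are invented.

```python
class TripleStore:
    """Hash-style access to RDF-like triples, echoing Perl's TIE interface."""

    def __init__(self):
        self._triples = []  # in-memory; a tied hash could use disk instead

    def add(self, subject, predicate, obj):
        self._triples.append((subject, predicate, obj))

    def __getitem__(self, key):
        """store[subject, predicate] -> list of matching objects."""
        subject, predicate = key
        return [o for s, p, o in self._triples
                if s == subject and p == predicate]

store = TripleStore()
store.add("urn:doc1", "dc:creator", "Alberto")
store.add("urn:doc1", "dc:title", "RDFStore notes")
print(store["urn:doc1", "dc:creator"])  # ['Alberto']
```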

  • [November 02, 2000]   CPExchange Publishes XML DTDs for Customer and Privacy Information.    The Customer Profile Exchange Network (CPExchange) recently "launched the newly authored Customer Profile Exchange standard, which creates the first global standard for privacy-enabled customer data interchange." The specification is Customer Profile Exchange (CPExchange) Specification, edited by Kathy Bohrer and Bobby Holland. October 20, 2000. Version 1.0. 127 pages. "The CPExchange standard allows enhanced customer service in e-business relationships. The new standard will automate tasks and tie the digital economy more tightly together." The Standard "integrates online and offline customer data in an XML-based data model for use within various enterprise applications both on and off the Web. The definition of CPExchange 1.0 XML messages is provided by a set of DTD files. There are individual DTDs for various categories of information. One top level DTD includes all the other DTD files to define the full set of CPExchange information." The new release includes a .ZIP archive with CPExchange XML DTDs: Complete Customer Profile Exchange DTD, CPExchange P3P Privacy Subset DTD, CPExchange XML Schema Datatypes, CPExchange Privacy Category DTD, CPExchange Support DTD, CPExchange Name Information DTD, CPExchange Contact Information DTD, CPExchange Role Information DTD, CPExchange Preferences DTD, CPExchange Business Object DTD, CPExchange InteractionHistory DTD, and CPExchange Web DTD. CPExchange is "a volunteer organization dedicated to developing an open standard to facilitate the exchange of privacy-enabled customer information across enterprise applications. The CPExchange Network is hosted by the International Digital Enterprise Alliance (IDEAlliance), a non-profit, vendor-neutral organization dedicated to the development and implementation of open interoperability standards." See further references in "CPExchange Network."

  • [November 02, 2000]   Sabre Releases XML Toolkit for its Global Distribution System.    Sabre Holdings Corporation has "announced the availability of its new XML (Extensible Markup Language) tool kit, a product to enable flexible design and a scalable interface for travel agency Web sites and online travel sites to utilize the extensive content of the Sabre global distribution system (GDS). 'In order to further eCommerce in travel, Sabre has created a standard solution for Web sites and client-server applications to communicate with our system by using a common language and structured data,' said Thaddeus Arroyo, senior vice president of product marketing and development for Sabre. 'As a leader in technology and marketing services for the travel industry, this is just the first phase of Sabre's very aggressive roadmap to deliver the industry's most robust XML offering.' Sabre offers its XML tool kit as part of the Sabre Do-it-yourself tools portfolio, a set of components that enables developers to create customized applications according to their companies' unique business needs. XML furnishes Sabre customers with an environment to easily create high-volume Web booking engines and client-server applications. The Do-it-yourself tools suite is a component of Sabre eVoya eStorefront, which provides the technologies needed to leverage the power of the Sabre system with the opportunities of the Internet to create and grow a Web presence. Sabre eVoya provides agencies with value-added solutions to compete in online travel, better serve clients, more effectively manage an agency and create new distribution and revenue opportunities through the Internet. The XML tool kit also includes Sabre Data Source (SDS), a data protocol that provides Sabre responses in a structured message format, allowing for easy XML translation onto a Sabre Connected travel agency screen or Web site. 
Sabre Data Source (SDS) provides a method by which Sabre can send data in a structured format, as opposed to data formatted for viewing on a terminal screen. The advantage of SDS is threefold: (1) Data can be more easily parsed (read) by a computer, as the format is dictated by an 'MDR' -- Message Definition Record; (2) More data can be sent at one time from the Sabre host; (3) Changes in an MDR can be downloaded from the host and implemented automatically, so that reprogramming is not necessary should a host format change occur. The XML Agent API is an ANSI standard C++ Object that can be incorporated into an application. This object will provide conversion of SDS to XML message formats. SDS eliminates all screen formatting, which most Sabre users are accustomed to seeing. With this combination of SDS and XML, Web developers will be able to create their applications faster and with much more flexibility. In addition to the XML tool kit, Sabre is introducing two new development tools that utilize Application Programming Interface (API) technology. eStorefront Sabre API (eSAPI) and C Common Sabre API (CCSAPI) are also part of the Do-it-yourself suite of development tools -- all designed to satisfy various levels of technical expertise. eSAPI is a user-friendly product that provides a flexible design environment and connectivity to the Sabre system. Users of eSAPI do not need the technical knowledge that is generally required for standard APIs. CCSAPI is Sabre's new Web version of the basic CSAPI tool. This new low-cost tool enables easier Web development and communication with the Sabre GDS and is targeted to technically savvy Web site developers. Sabre is the leading provider of technology and marketing services for the travel industry. Headquartered in Dallas/Fort Worth, Texas, the company has nearly 10,000 employees worldwide who span 45 countries. 
Sabre reported 1999 revenues of $2.4 billion." For details, see the full text of the announcement: "Sabre Announces XML Tool Kit as Part of New Solutions for Web and Server-Based Applications. Next-Generation Development Tools Benefit Online Travel Sites and Agencies."

  • [November 02, 2000]   DOM Level 3 Content Models and Level 3 Load/Save API.    Philippe Le Hégaret (W3C, DOM Activity Lead) announced the release of a W3C working draft from the DOM Working Group which "includes an update of the Content Models and the first bits of the Load and Save module." This WD is part of the W3C Document Object Model (DOM) Activity, released as a preliminary version of the Level 3 API. Document Object Model (DOM) Level 3 Content Models and Load and Save Specification, Version 1.0. Reference: W3C Working Draft 01-November-2000, edited by Ben Chang (Oracle), Andy Heninger (IBM), and Joe Kesselman (IBM). Document abstract: "This specification defines the Document Object Model Content Models and Load and Save Level 3, a platform- and language-neutral interface that allows programs and scripts to dynamically access and update the content, structure and style of documents. The Document Object Model Content Models and Load and Save Level 3 builds on the Document Object Model Core Level 3... Section 1 provides a representation and operations for XML content models like DTDs and W3C XML schemas. Section 2 provides an API for loading XML source documents into a DOM representation and for saving a DOM representation as an XML document." Description: "This 'Content Models and Validation' module provides a representation for XML content models, e.g., DTDs and XML Schemas, together with operations on the content models, and how such information within the content models could be applied to XML documents used in both the document-editing and CM-editing worlds. It also provides additional tests for well-formedness of XML documents, including Namespace well-formedness. A DOM application can use the hasFeature method of the DOMImplementation interface to determine whether a given DOM supports these capabilities or not. The feature string for all the interfaces listed in this section is 'CM'. 
This chapter interacts strongly with the 'Load and Save' chapter, which is also under development in DOM Level 3. Not only will that code serialize/deserialize content models, but it may also wind up defining its well-formedness and validity checks in terms of what is defined in this chapter. In addition, the CM and Load/Save functional areas will share a common error-reporting mechanism allowing user-registered error callbacks. Note that this may not imply that the parser actually calls the DOM's validation code -- it may be able to achieve better performance via its own -- but the appearance to the user should probably be 'as if' the DOM has been asked to validate the document, and parsers should probably be able to validate newly loaded documents in terms of a previously loaded DOM CM." Comments on the new working document are invited and may be sent to the publicly archived mailing list. The document is also available in Postscript, PDF, and .ZIP formats. For related references, see "DOM Level 3 Specifications."
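The feature-probing pattern the draft describes (call hasFeature on the DOMImplementation with the feature string 'CM') works in any DOM binding. Python's minidom implements the same interface; since it predates DOM Level 3, the probe for 'CM' returns False, which is exactly how an application would detect that the module is unavailable.

```python
from xml.dom import minidom

impl = minidom.getDOMImplementation()

# A DOM Level 2 feature minidom does support:
print(impl.hasFeature("core", "2.0"))  # True

# The DOM Level 3 Content Models module from this working draft:
print(impl.hasFeature("CM", "3.0"))    # False -- not implemented here
```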

  • [November 02, 2000]   IETF/W3C XML-Signature Syntax and Processing Specification Published as W3C Candidate Recommendation.    As part of the XML Digital Signature Activity, the IETF/W3C XML Signature Working Group has published a 'W3C Candidate Recommendation' specification for XML-Signature Syntax and Processing. Reference: W3C Candidate Recommendation 31-October-2000, edited by Donald Eastlake, Joseph Reagle, and David Solo. The 'XML Signature' joint Working Group of the IETF and W3C has been chartered "to develop an XML compliant syntax used for representing the signature of Web resources and portions of protocol messages (anything referencable by a URI) and procedures for computing and verifying such signatures." The CR document "specifies XML digital signature processing rules and syntax. XML Signatures provide integrity, message authentication, and/or signer authentication services for data of any type, whether located within the XML that includes the signature or elsewhere. Digital signatures are created and verified using cryptography, the branch of applied mathematics concerned with transforming messages into seemingly unintelligible forms and then back again. Digital signatures are created by performing an operation on information such that others can confirm that a holder of a secret performed the operation and that the signed information has not subsequently changed. In a symmetric key system, both the sender and receiver need to be privy to the secret. In the public key cryptographic system, the holder of the private (secret) key signs information, but anyone with access to the public key can confirm that the signature is valid. The novel feature of public key cryptography is that knowledge of the public key used to confirm signatures does not reveal information about the private key itself." The W3C CR updates the previous last call working draft of 2000-10-12. 
The Candidate Recommendation period will last approximately three months (until January 31, 2001), after which the specification should proceed to Proposed Recommendation. Implementations: The specification already has significant implementation experience as demonstrated by its Interoperability Report. "We expect to meet all requirements of that report within the three month Candidate Recommendation period. Specific areas where we would appreciate further implementation experience are: (1) XPath is RECOMMENDED. Signature applications need not conform to the XPath specification in order to conform to this specification. However, the XPath data model, definitions (e.g., node-sets) and syntax is used within this document in order to describe functionality for those that want to process XML-as-XML (instead of octets) as part of signature generation. It appears all known implementations are satisfying the functional requirements by implementing XPath, consequently should we make it MANDATORY? (2) Minimal canonicalization (defined by this specification) is RECOMMENDED. There are no implementations of this algorithm: should we make it OPTIONAL or even remove it? [...]" Comments on the CR may be sent to the publicly archived mailing list. See other references in "XML Digital Signature (Signed XML - IETF/W3C)."
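The digest step at the heart of an XML Signature Reference can be sketched briefly. This is a simplified illustration only, not the full processing model: a conforming implementation canonicalizes the referenced data and cryptographically signs the SignedInfo element, and the invoice content below is invented for the example.

```python
import base64
import hashlib

def digest_value(octets: bytes) -> str:
    """Base64-encoded SHA-1 digest, as carried in a <DigestValue> element."""
    return base64.b64encode(hashlib.sha1(octets).digest()).decode("ascii")

# A Reference points at signed data (anything addressable by a URI);
# verification recomputes the digest and compares it to the stored value.
data = b"<invoice><total>42.00</total></invoice>"
stored = digest_value(data)

def reference_valid(octets: bytes, stored_digest: str) -> bool:
    return digest_value(octets) == stored_digest

print(reference_valid(data, stored))         # True: data unchanged
print(reference_valid(data + b" ", stored))  # False: even one byte breaks it
```

Because the digest is over the octets (or, for XML-as-XML processing, over a canonical form), any modification of the signed content after signing is detectable.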

  • [November 02, 2000]   Canonical XML Specification Released as a W3C Candidate Recommendation.    Canonical XML Version 1.0 has now been published as a W3C Candidate Recommendation. Reference: W3C Candidate Recommendation 26-October-2000, edited by John Boyer (PureEdge Solutions Inc.). It has been produced by the IETF/W3C XML Signature Working Group. The CR specification updates the previous working draft for Canonical XML Version 1.0, published 11-October-2000. Document abstract: "Any XML document is part of a set of XML documents that are logically equivalent within an application context, but which vary in physical representation based on syntactic changes permitted by XML 1.0 and Namespaces in XML. This specification describes a method for generating a physical representation, the canonical form, of an XML document that accounts for the permissible changes. Except for limitations regarding a few unusual cases, if two documents have the same canonical form, then the two documents are logically equivalent within the given application context. Note that two documents may have differing canonical forms yet still be equivalent in a given context based on application-specific equivalence rules for which no generalized XML specification could account." Status: "The XML Signature Working Group believes this specification incorporates the resolution of all last call issues; furthermore it considers the specification to be very stable (as demonstrated by its interoperability report) and invites further implementation feedback during this period. The duration of Candidate Recommendation will last approximately four weeks (viz., until November 24, 2000)." Review comments on the CR may be sent to the mailing list.
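The idea of canonicalization is easy to demonstrate with a modern standard library (Python 3.8+ ships `xml.etree.ElementTree.canonicalize`; note it implements a later C14N revision than the 1.0 CR described above, but the principle is the same): two serializations that differ only in syntactic choices permitted by XML 1.0 reduce to one canonical form.

```python
from xml.etree.ElementTree import canonicalize

# Two logically equivalent serializations: attribute order, quote style,
# and empty-element syntax differ, but no information differs.
doc_a = '<doc b="2" a="1"><e/></doc>'
doc_b = "<doc a='1' b='2'><e></e></doc>"

c14n_a = canonicalize(doc_a)
c14n_b = canonicalize(doc_b)

# Canonicalization sorts attributes, normalizes quoting, and expands
# empty elements, so both inputs yield byte-identical output.
print(c14n_a == c14n_b)   # True
print(c14n_a)
```

This byte-identical form is what makes canonicalization useful to XML Signature: a digest computed over the canonical form survives harmless re-serialization.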

  • [November 01, 2000]   IBM's XML Bridge for SAP.    New among XML applications from IBM alphaWorks Labs is a Java-based server application named 'XML Bridge for SAP.' "The XML Bridge for SAP is designed to provide XML integration between SAP R/3 systems on the one side and arbitrary SAP R/3 or non-SAP systems on the other side. To do this, two principal design decisions have been made while working out the design for XML Bridge for SAP: (1) XML should be used as the data format; (2) The design should be open for all different kinds of transmission infrastructures. To deliver on these principles we have defined a generic architecture to support plugging in different infrastructures for XML exchange. This makes it possible to extend the bridge by adding plugins for arbitrary transmission middlewares. The standard set of plugins covers HTTP and MQSeries. The bridge is designed to support synchronous, asynchronous and transactional RFCs, BAPIs and IDocs. Scenarios: The XML Bridge for SAP supports inbound and outbound scenarios. The terms inbound and outbound describe the view from the SAP R/3 system's perspective. Inbound calls are calls from the outside into the SAP R/3 system. Outbound calls are calls from an application running inside the SAP R/3 application server to the outside world. A. Inbound scenario: (1) The XML Bridge for SAP receives an XML document containing an RFC or BAPI call; (2) The call gets executed in the SAP R/3 target system specified by the XML document, and a result document is created and sent back to the originator of the call. B. Outbound scenario: (1) An SAP R/3 application sends an RFC or BAPI call to the XML Bridge for SAP; (2) An XML document is created and sent to the destination system of the call; (3) The XML result document from the destination system is received by the XML Bridge for SAP; (4) The XML Bridge for SAP then parses the XML document and transfers the result to the SAP R/3 application..."
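The inbound round trip can be sketched as below. The announcement does not publish the bridge's actual wire format, so the element names, attribute names, and the BAPI name here are purely illustrative assumptions.

```python
import xml.etree.ElementTree as ET

# Hypothetical inbound request: an XML document describing an RFC/BAPI
# call. None of these names come from the actual XML Bridge format.
request = ET.Element("RfcCall", name="BAPI_MATERIAL_GETLIST")
param = ET.SubElement(request, "Parameter", name="MAXROWS")
param.text = "10"
wire = ET.tostring(request, encoding="unicode")

# The bridge would execute the call in the target R/3 system and return
# a result document, which the originator parses the same way.
result = ET.fromstring('<RfcResult name="BAPI_MATERIAL_GETLIST">'
                       '<Row material="M-01"/></RfcResult>')
rows = [r.get("material") for r in result.findall("Row")]
print(rows)   # ['M-01']
```

The point of the design is visible even in this sketch: because request and result are plain XML documents, the same payload can travel over HTTP, MQSeries, or any other plugged-in transport unchanged.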

  • [October 30, 2000]   RDF Schema Explorer for Querying/Validating/Extending RDF Models.    Wolfram Conen recently announced RDF Schema Explorer -- a service based on Jan Wielemaker's SWI-Prolog (3.4.0), his SGML/RDF parser, and an adaptation of his CGIServ code. RDF Schema Explorer can be used as follows: "(1) You can feed some RDF into the Explorer, either by keying it directly into the text field below or by uploading a file. This will be parsed with Jan Wielemaker's SWI-Prolog parser and the resulting triples will be asserted to the fact base. (2) You can check/validate your model against the rule set provided in version 1.2 of the paper 'A logical interpretation of RDF.' (3) You can query the model repeatedly by using the provided rule/fact set and by providing your own additional rules/queries. (4) You can define/extend the semantics for your own predicates directly from within your RDF document, along the guidelines presented in RDF Semantic Extensions. Version 1.2 of the paper 'A logical interpretation of RDF' is currently under public review in the Semantic Web (SEWEB) area of the web site Electronic Transactions on Artificial Intelligence (ETAI), where it is possible to comment/discuss the paper." For related resources, see "Resource Description Framework (RDF)."
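The assert-then-query workflow the Explorer offers on top of its Prolog fact base can be imitated with a toy triple store. This is a sketch of the idea only, not the service's actual rule engine; the example triples are invented.

```python
# A toy RDF triple store: triples are asserted to a fact base, then
# queried with None playing the role of a Prolog variable.
triples = set()

def assert_triple(s, p, o):
    triples.add((s, p, o))

def query(s=None, p=None, o=None):
    """Return every asserted triple matching the bound positions."""
    return [t for t in triples
            if all(q is None or q == v for q, v in zip((s, p, o), t))]

assert_triple("ex:Dog", "rdfs:subClassOf", "ex:Animal")
assert_triple("ex:rex", "rdf:type", "ex:Dog")

print(query(p="rdf:type"))   # [('ex:rex', 'rdf:type', 'ex:Dog')]
```

A validator in the style of 'A logical interpretation of RDF' would then be a set of rules run over this fact base, e.g., deriving that ex:rex is also an ex:Animal via the subclass triple.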

  • [October 27, 2000]   Electronic Book Exchange (EBX) Working Group Publishes Digital Rights Specification.    The Technical Committee of the Electronic Book Exchange (EBX) Working Group has published an initial draft digital rights standard "for protecting copyright in electronic books and for distributing electronic books among publishers, distributors, retailers, libraries, and consumers." The Electronic Book Exchange (EBX) Working Group "is an organization of companies, organizations, and individuals developing a standard for protecting copyright in electronic books and for distributing electronic books among publishers, distributors, retailers, libraries, and consumers. The draft EBX specification accommodates a variety of content formats for electronic books, including Open eBook Publication Structure and Adobe Portable Document Format (PDF). The EBX Working Group operates under the auspices of the Book Industry Study Group." The draft specification is The Electronic Book Exchange System (EBX). Version 0.8. July 2000 Draft, 109 pages. [This is an interim, incomplete draft that has not been approved as a standard by the EBX Working Group.] "[This is] the complete technical specifications for the Electronic Book Exchange (EBX) system for interoperable applications and devices that use public-key cryptography for copyright protection and distribution of electronic books. The EBX system is being developed by the EBX Working Group, whose members are Adobe Systems Incorporated, the American Library Association, Audible, ContentGuard, Glassbook, GlobalMentor, Nokia, Thomson Consumer Electronics, Versaware, and Yankee Rights Management." This document describes the Electronic Book Exchange (EBX) system. The EBX system defines the way in which electronic books (e-books) are distributed from publishers to booksellers and distributors, from booksellers to consumers, between consumers and between consumers and libraries.
It describes the basic requirements of electronic book reading devices and the electronic books themselves. It also describes how these 'trusted' components interact to form a comprehensive copyright protection system that both protects the intellectual property of authors and publishers and describes the capabilities required by consumers. In addition, the model describes in general how products and revenue for those products are generated and managed." While the EBX system does not define a specific 'content' file format, it does define vouchers, which are encoded in XML. "EBX is primarily concerned with the creation and transfer of digital objects called vouchers. A voucher is an electronic description of e-book permissions transferred from one book owner in the network to another book owner. EBX vouchers are encoded in XML." A sample EBX voucher encoded in XML is available for inspection. See in this connection the feature article by Mark Walter and Mike Letts "Mad Scramble for Mindshare In Digital Rights Management. [Digital Rights Management: Peacekeepers Needed]," in The Seybold Report on Internet Publishing Volume 5, Number 2 (October 2000), pages 9-15. For DRM in relation to XML, see: (1) Extensible Rights Markup Language (XrML); (2) Digital Property Rights Language (DPRL); (3) Electronic Book Exchange (EBX) Working Group; (4) Open Digital Rights Language (ODRL); (5) Open eBook Initiative; (6) "IOTP Requirements for Digital-Right Trading."

  • [October 27, 2000]   Payment API for Internet Open Trading Protocol (IOTP) Version 1.    Members of the Internet Open Trading Protocol Working Group have published a specification for the "Payment API for v1.0 Internet Open Trading Protocol (IOTP)." IETF Internet Draft. TRADE Working Group. By Hans-Bernhard Beykirch, Werner Hans, Masaaki Hiroya, and Yoshiaki Kawatsura. Reference: 'draft-ietf-trade-iotp-v1.0-papi-02.txt'. September 2000. "The Internet Open Trading Protocol provides a data exchange format for trading purposes while integrating existing pure payment protocols seamlessly. This motivates the multiple layered system architecture which consists of at least some generic IOTP application core and multiple specific payment modules. This document addresses the common interface between the IOTP application core and the payment modules, enabling the interoperability between these kinds of modules. Furthermore, such an interface provides the foundations for a plug-in-mechanism in actual implementations of IOTP application cores. Such interfaces exist at the Consumers', the Merchants' and the Payment Handlers' installations connecting the IOTP application core and the payment software components/legacy systems. . .The Payment API is formalized using the Extensible Markup Language (XML). It defines wrapper elements for both the input parameters and the API function's response. In particular, the response wrapper provides common locations for Error Codes and Error Descriptions. It is anticipated that this description reflects the logical structure of the API parameter and might be used to derive implementation language specific API definitions..." Relevant XML DTDs are presented in the draft document. The Internet Open Trading Protocol "provides an interoperable framework for Internet commerce. It is optimized for the case where the buyer and the merchant do not have a prior acquaintance and is payment system independent. 
It will be able to encapsulate and support payment systems such as SET, Mondex, CyberCash's CyberCoin, DigiCash's e-cash, GeldKarte, etc. IOTP is able to handle cases where such merchant roles as the shopping site, the payment handler, the deliverer of goods or services, and the provider of customer support are performed by different Internet sites." See related specifications referenced in "Internet Open Trading Protocol (IOTP)."
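The draft's idea of XML wrapper elements with common locations for error information can be sketched as follows. The element and attribute names here are illustrative assumptions, not the names defined in the draft's actual DTDs.

```python
import xml.etree.ElementTree as ET

# Sketch of a Payment API response wrapper: every response, whatever
# the payment module, carries its error information in one common place.
# Element names are hypothetical, not from the IOTP draft.
def make_response(function: str, error_code=None, error_desc=None) -> str:
    resp = ET.Element("ApiResponse", function=function)
    if error_code is not None:
        err = ET.SubElement(resp, "Error", code=error_code)
        err.text = error_desc or ""
    return ET.tostring(resp, encoding="unicode")

ok = make_response("Pay")
failed = make_response("Pay", "2001", "payment handler unreachable")
print(failed)
```

The benefit of the wrapper is that the IOTP application core can check for failure uniformly, without understanding the payment-protocol-specific payload inside each response.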

  • [October 26, 2000]   XML Encoding for Sumerian Literary Texts.    A recent communiqué from the University of Oxford Electronic Text Corpus of Sumerian Literature reports on initial work toward the creation of XML DTDs for transliteration-level encoding and publishable translations for a large online corpus of Sumerian texts. This markup language design is part of a broader project endeavor to analyze the digital library corpus in order to document and describe aspects of its style, lexis, grammar and register. The goal of the Sumerian Literature project, now substantially completed, has been "to produce a 'collected works' of over 400 poetic compositions of the classical [Sumerian] literature, equipped with translations. This standardised, electronically searchable SGML corpus, which is based to a large degree on published materials, comprises some 400 literary compositions of the Isin/Larsa/Old Babylonian Period, amounting to approximately 40,000 lines of verse (excluding Emesal cult songs, literary letters, and magical incantations). The full catalogue can be found on the project web site. The online compositions are presented in single-line composite text format (in a standardised transliteration) with newly-prepared English prose translations, and a full bibliographical database, thereby making available for the first time a collected works of Sumerian literature. The corpus is freely available to anyone who wishes to use it via this World Wide Web site... The literature written in Sumerian is the oldest human poetry that can be read, dating from approximately 2100 to about 1650 BC. The main 'classical' corpus can be very roughly estimated at 50,000 lines of verse, including narrative poetry, praise poetry, hymns, laments, prayers, songs, fables, didactic poems, debate poems and proverbs. The majority of this has been reconstructed during the past fifty years from thousands of often fragmentary clay tablets inscribed in cuneiform writing. 
Relatively few compositions are yet published in satisfactory modern editions. Much is scattered throughout a large number of journals and other publications. Several important poems must still be consulted in twenty-year-old unpublished doctoral dissertations, some with translations which have now become unusable because of progress in our knowledge of the language. Major compositions have not yet been edited at all. The slow progress of research, with little organised collaboration until recently, means that Sumerian literature has [hitherto] remained inaccessible to the majority of those who might wish to read or study it, and virtually unknown to a wider public." This University of Oxford project is representative of a large number of academic digital library projects which are migrating from SGML to XML-based markup for document structuring and delivery. The extraordinary "Perseus Project" -- with its thousands of SGML-encoded Greek and Latin texts -- provides another example. For additional description and references on the Oxford project, see "Electronic Text Corpus of Sumerian Literature (ETCSL)." The development of online reference materials for ancient civilizations presents a significant challenge to the extent that the ancient writing systems (especially cuneiform and hieroglyphic) are very complex, and the literary traditions very rich recensionally. Standards efforts have been painfully slow, but collaborative work is now being done in several research initiatives. See a partial listing of projects in the document "Encoding and Markup for Texts of the Ancient Near East." Readers are encouraged to send notification concerning related ancient language projects.

  • [October 25, 2000]   Transaction Authority Markup Language (XAML).    From a recent announcement: "Leading proponents of e-business interoperability Bowstreet, Hewlett-Packard Company, IBM, Oracle Corporation, and Sun Microsystems, today announced they are leading an initiative to define a vendor-neutral industry standard that will enable the coordination and processing of on-line, multi-party transactions in the rapidly emerging world of XML-based web services. The initiative is called XAML (Transaction Authority Markup Language). [The XAML initiative addresses] coordinated processing of transaction-supporting web services between internal fulfillment services (the chemical provider's inventory system) and external services such as: (1) An insurance policy service to insure the product being shipped; (2) A financing service to ensure payment according to vendor terms; (3) A transportation service to guarantee timely shipment/delivery of product; (4) A regulatory service to ensure compliance with government safety requirements. The XAML standard will: (1) Provide a specification for the XML message interfaces and interaction models of web services to support the coordination and processing of multi-stage transactions on the Internet; (2) Specify interfaces and protocols that preserve investment and strengths in transaction monitors and resources; (3) Specify interfaces and protocols that can be 'added on' to existing and emerging web service interfaces and protocols; (4) Specify interaction models for software systems to provide business-level transactions that coordinate the processing of multiple distributed web services; (5) Build on existing and emerging industry standards. The XAML initiative is so-named because it is an extension of XML, the common language of e-commerce, which supports transactional semantics as defined by the widely adopted standard for two-phase commit, XA (Transaction Authority). 
XAML intends to provide a means for transaction supporting web services to participate in higher-level business transactions. The XAML proposal will be submitted to one or several standards bodies that may include the W3C, OASIS (Organization for the Advancement of Structured Information Standards) and/or the IETF (Internet Engineering Task Force)." For other details, see: (1) the "XAML Transaction Authority Markup Language White Paper" and (2) the full text of the announcement: "Bowstreet, HP, IBM, Oracle and Sun Microsystems Join Forces to Create Standard for e-Business Transactions Across the Internet. XAML proposal focuses on creating XML standard to guarantee multi-vendor transactional integrity across web services... There are emerging standards for web services (such as SOAP, ebXML, XP, UDDI, e-Speak and WSDL), and there are existing transaction management standards (such as XA and JTA); XAML ties the two groups of standards together..." For other references, see "Transaction Authority Markup Language (XAML)."
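The two-phase-commit (XA-style) coordination that XAML aims to extend to web services can be sketched in miniature: a coordinator asks every participating service to prepare, and commits only if all vote yes, otherwise rolling all of them back. The service names below are taken from the announcement's scenario; the protocol shown is generic 2PC, not XAML's (as yet unpublished) message interfaces.

```python
# Minimal two-phase commit: the coordination pattern behind XA that
# XAML proposes to carry over HTTP/XML between web services.
class Participant:
    def __init__(self, name: str, can_commit: bool = True):
        self.name, self.can_commit, self.state = name, can_commit, "active"
    def prepare(self) -> bool:       # phase 1: vote on the outcome
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit
    def commit(self):   self.state = "committed"     # phase 2a
    def rollback(self): self.state = "rolled-back"   # phase 2b

def two_phase_commit(participants) -> str:
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()
        return "committed"
    for p in participants:
        p.rollback()
    return "rolled-back"

print(two_phase_commit([Participant("insurance"), Participant("shipping")]))
print(two_phase_commit([Participant("insurance"),
                        Participant("financing", can_commit=False)]))
```

The multi-party scenarios in the announcement (insurance, financing, transportation, regulatory services) are exactly this pattern: either every service's leg of the business transaction takes effect, or none does.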

  • [October 25, 2000]   XLink Markup Name Control.    W3C has published a NOTE under the title XLink Markup Name Control. Reference: W3C Note 24-October-2000, edited by [W3C XML Linking Working Group co-chairs] Eve Maler (Sun Microsystems) and Daniel Veillard (W3C). Document abstract: "This document proposes a possible XML Schema-based solution to the need to use XLink in XML-based languages such as XHTML 1.0." The note addresses the particular problem of 'namespaces' when attempting to upgrade existing document markup to be interpreted as XLink syntax. "Currently, XLink requires applications to recognize a particular set of attribute names in the XLink namespace in order to do their work... [suppose] you already have some marked-up information that provides some of the same kinds of linking information that XLink is designed to provide: in order to incorporate XLink usage directly into the existing vocabulary as a first-class construct, you would have to force the vocabulary to undergo a backwards-incompatible change from href to xlink:href. XLink's attributes must have namespace prefixes on them because of the way XML namespaces work; 'global' attributes that can be attached to any element must be prefixed because they cannot identify themselves in any other way..." The NOTE's proposed solution builds upon a suggestion from Henry Thompson of the W3C XML Schema Working Group. A future version of W3C XLink might allow applications "to take advantage of XML Schema datatypes instead, or in addition, as a way to recognize Schema-XLink data. The idea is that any attribute name could be used, as long as the attribute were 'marked' with an appropriate datatype, made available through a post-schema-validation information set or by other means. . . If Schema-XLink were to define such datatypes, it could provide a normative XML Schema module that merely contains a series of type definitions. 
Note, however, that as of this writing, XML Schema does not have facilities to specify additional normative constraints of the style that XLink needs; prose would still be needed to specify the combinations of attribute types that are expected to appear on particular 'XLink element types'..." For related references, see "XML Linking Language."
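The attribute-name recognition problem the NOTE describes is easy to see in code: today an XLink processor looks for the globally prefixed attribute, so an existing unprefixed href is invisible to it. A minimal namespace-aware scan (using the real XLink namespace URI; the document content is invented):

```python
import xml.etree.ElementTree as ET

XLINK_HREF = "{http://www.w3.org/1999/xlink}href"

# One element uses the XLink global attribute; the other uses a legacy
# unprefixed href of the kind an existing vocabulary already has.
doc = ET.fromstring(
    '<doc xmlns:xlink="http://www.w3.org/1999/xlink">'
    '<a xlink:href="http://example.org/x"/>'
    '<a href="legacy.html"/>'
    '</doc>')

# Current XLink recognition: only the namespace-qualified attribute counts.
xlink_targets = [e.get(XLINK_HREF) for e in doc.iter()
                 if e.get(XLINK_HREF) is not None]
print(xlink_targets)   # ['http://example.org/x']
```

Under the NOTE's schema-datatype proposal, the legacy href could be 'marked' as link-typed in a schema and recognized too, without the backwards-incompatible rename to xlink:href.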

  • [October 25, 2000]   Using CSS for Everything.    As part of the W3C style activity, the W3C CSS Working Group has produced a "work in progress" Working Draft specification Syntax of CSS Rules in HTML's "style" Attribute. Reference: W3C Working Draft 25-October-2000, edited by Tantek Çelik (Microsoft) and Bert Bos (W3C). Document abstract: "HTML provides a 'style' attribute on most elements, to hold a fragment of a style sheet that applies to those elements. One of the possible style sheet languages is CSS. This draft describes the syntax of the CSS fragment that can be used in the 'style' attribute." The WD illustrates how one can directly express and control processing behaviors on an element-by-element basis throughout a document (whether visual, aural, tactile, or other behaviors) through the use of a globally-defined style attribute. Thus, from the examples offered, one can: (1) specify display behavior by "setting properties on the element itself, [using] no pseudo-elements or pseudo-classes"; (2) colorize a first letter, by "setting properties on the element, as well as on the first letter of the element, by means of the ':first-letter' pseudo-element"; (3) regulate other appearances and effects, by "setting properties on a source anchor for each of its dynamic states, using pseudo-classes." The Working Draft document "defines both the simple case (only properties on the element itself), as well as the more complex case (properties on the element's pseudo-elements and pseudo-classes), and generalizes the cascading order rule for "the case where the inline fragment contains inline rule-sets: the declarations are treated the same as if they occurred in the same order at the end of the author's style sheet with a specificity equal to that of a selector with one ID-selector and as many pseudo-elements and pseudo-classes as in the inline rule-set." 
The working draft would appear to extend the life of HTML 4.0's style attribute, along with the HTML META HTTP-EQUIV Content-Style-Type selector mechanism, since the processing specification principle is to be applied generally to any XML vocabularies (document types) in which designers want to be able to control processing directly from within the document instance. The WD states: "This document recommends that any future XML based languages which have presentational information (whether visual, aural, tactile, or other) also add a STYLE attribute which similarly permits the user to use CSS to style the document and elements in documents written in that language." For one person's doubts about this apparent notion of a 'global use' attribute without (apparent) namespace declaration mechanism, see the W3C comment list. For other references on CSS, see "W3C Cascading Style Sheets."
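In the simple case the style attribute holds just a declaration block: property/value pairs without braces or selectors. A minimal parser for that case (a sketch only; it ignores the draft's pseudo-element/pseudo-class extension and CSS escaping rules):

```python
# Parse the simple-case style attribute: semicolon-separated
# "property: value" declarations, as in style="color: red".
def parse_style_attr(value: str) -> dict:
    decls = {}
    for decl in value.split(";"):
        if ":" in decl:
            prop, _, val = decl.partition(":")
            decls[prop.strip().lower()] = val.strip()
    return decls

print(parse_style_attr("color: red; font-size: 12px;"))
# {'color': 'red', 'font-size': '12px'}
```

Per the draft's cascading rule, these declarations would then be applied as if appended to the author's style sheet with the specificity of one ID-selector.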

  • [October 24, 2000]   XML Schema Becomes a W3C Candidate Recommendation.    A W3C press release announces the publication of XML Schema as a W3C Candidate Recommendation. "The World Wide Web Consortium (W3C) has issued XML Schema as a W3C Candidate Recommendation. Advancement of the document to Candidate Recommendation is an invitation to the Web development community at large to make implementations of XML Schema and provide technical feedback. Simply defined, XML Schemas define shared markup vocabularies and allow machines to carry out rules made by people. They provide a means for defining the structure, content and semantics of XML documents. 'Databases, ERP and EDI systems all know the difference between a date and a string of text, but before today, there was no standard way to teach your XML systems the difference. Now there is,' declared Dave Hollander, co-chair of the W3C XML Schema Working Group and CTO of Contivo, Inc. 'W3C XML Schemas bring to XML the rich data descriptions that are common to other business systems but were missing from XML. Now, developers of XML ecommerce systems can test XML Schema's ability to define XML applications that are far more sophisticated in how they describe, create, manage and validate the information that fuels B2B ecommerce.' By bringing datatypes to XML, XML Schema increases XML's power and utility to the developers of electronic commerce systems, database authors and anyone interested in using and manipulating large volumes of data on the Web. By providing better integration with XML Namespaces, it makes it easier than it has ever been to define the elements and attributes in a namespace, and to validate documents which use multiple namespaces defined by different schemas. XML Schema introduces new levels of flexibility that may accelerate the adoption of XML for significant industrial use. 
For example, a schema author can build a schema that borrows from a previous schema, but overrides it where new unique features are needed. This principle, called inheritance, is similar to the behavior of Cascading Style Sheets, and allows the user to develop XML Schemas that best suit their needs, without building an entirely new vocabulary from scratch. XML Schema allows the author to determine which parts of a document may be validated, or identify parts of a document where a schema may apply. XML Schema also provides a way for users of ecommerce systems to choose which XML Schema they use to validate elements in a given namespace, thus providing better assurance in ecommerce transactions and greater security against unauthorized changes to validation rules. Further, as XML Schemas are XML documents themselves, they may be managed by XML authoring tools, or through XSLT. . . Candidate Recommendation is W3C's public call for implementation, an explicit invitation for W3C members and the developer community at large to review the XML Schema specification and build their own XML Schemas. This period of implementations and reporting allows the editors to learn how developers outside of the Working Group might use them, and where there may be ambiguities for implementors. Public testing and implementation contribute to a more robust XML Schema, and to more widespread use." The CR specification is published in three parts: (1) XML Schema Part 1: Structures. W3C Candidate Recommendation 24-October-2000, edited by Henry S. Thompson (University of Edinburgh), David Beech (Oracle Corp.), Murray Maloney (for Commerce One), and Noah Mendelsohn (Lotus Development Corporation). Part 1 defines the XML Schema definition language, "which offers facilities for describing the structure and constraining the contents of XML 1.0 documents, including those which exploit the XML Namespace facility. 
The schema language, which is itself represented in XML 1.0 and uses namespaces, substantially reconstructs and considerably extends the capabilities found in XML 1.0 document type definitions (DTDs). This specification depends on XML Schema Part 2: Datatypes. Appendix A supplies a normative "Schema for Schemas"; Appendix F contains a non-normative "DTD for Schemas"; Appendix J gives brief summaries of the substantive changes to this specification since the public working draft of 7 April 2000. (2) XML Schema Part 2: Datatypes. W3C Candidate Recommendation 24-October-2000, edited by Paul V. Biron (Kaiser Permanente, for Health Level Seven) and Ashok Malhotra (IBM). Part 2 of the specification for the XML Schema language "defines facilities for defining datatypes to be used in XML Schemas as well as other XML specifications. The datatype language, which is itself represented in XML 1.0, provides a superset of the capabilities found in XML 1.0 document type definitions (DTDs) for specifying datatypes on elements and attributes." Appendix A provides the normative "Schema for Datatype Definitions" and Appendix B gives the non-normative "DTD for Datatype Definitions." (3) XML Schema Part 0: Primer. W3C Candidate Recommendation 24-October-2000, edited by David C. Fallside (IBM). "XML Schema Part 0: Primer is a non-normative document intended to provide an easily readable description of the XML Schema facilities and is oriented towards quickly understanding how to create schemas using the XML Schema language. XML Schema Part 1: Structures and XML Schema Part 2: Datatypes provide the complete normative description of the XML Schema language -- this primer describes the language features through numerous examples which are complemented by extensive references to the normative texts." In connection with this CR publication, Henry Thompson announced the availability of a self-installing version of XSV, the W3C/University of Edinburgh XML Schema validator; 'WIN32 for now, UN*X coming soon'. 
See also: (1) Testimonials for XML Schema Candidate Recommendation; (2) the longer memo from Henry S. Thompson (Janet Daly) explaining why the I18N WG dissented from the specification's treatment of dates and times, and the CR exit criteria; (3) W3C XML Schema; and (4) full references in "XML Schemas."
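The datatype point in the press release ("the difference between a date and a string") can be made concrete with a small check. This is a deliberately simplified sketch: real xs:date lexical validation also handles timezones, negative years, and more.

```python
import re
from datetime import date

# Unlike a DTD, a datatype-aware validator can reject a value that
# looks like a date but isn't one. Simplified: no timezone handling.
def is_xsd_date(value: str) -> bool:
    if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", value):
        return False
    y, m, d = map(int, value.split("-"))
    try:
        date(y, m, d)      # rejects month 13, February 30, etc.
        return True
    except ValueError:
        return False

print(is_xsd_date("2000-10-24"))   # True
print(is_xsd_date("2000-13-01"))   # False: there is no month 13
```

A DTD could at best declare such an attribute as CDATA; the schema datatype constrains both the lexical form and the value space.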

  • [October 24, 2000]   Rapid Progress on XML Topic Maps.    Several subgroups working within the initiative are making noteworthy progress: these include the Interchange Syntax Subgroup (ISS), Conceptual Model Subgroup (CMS), Authoring Group (AG), XTM Syntax Subgroup, and XTM Modeling Subgroup. Murray Altheim (Sun Microsystems) recently announced the availability of minutes from the October 13-15 XTM meeting in Swindon. This document is available on the XTM Repository web site, which provides also the current 'discussion DTD', the Topicmaps.Org Charter and By-laws, and other relevant resources. The eGroups mailing list '' serves as host for XTM email. The XTM Conceptual Model Subgroup (CMS) is attempting to explicate the relationship between the Topic Maps and RDF models, and has participated in a number of discussions with W3C's RDF design teams; the XTM syntax may be recommended for the interchange ('serialization') of RDF statements. See further references in "(XML) Topic Maps."

  • [October 24, 2000]   XSLTMark: XSLT Benchmark and Compliance Testing Suite.    Eugene Kuznetsov has announced the availability of XSLTMark, an XSLT benchmark and a small compliance testing suite. "XSLTMark Version 1.1.0 is available now and is the first release to the general public. The XSLTMark test cases have been designed to challenge processors with a variety of tasks and input conditions in order to provide a well-rounded benchmark and to facilitate analysis of which processors perform best for various applications. XSLTMark measures performance in four major categories: (1) Pattern Matching - this category covers XSLT template pattern matching and template instantiation. This performance category is important to stylesheets with many template rules and with many expected apply-template invocations. (2) XPath Selection - this category covers nodeset selection through the evaluation of XPath path expressions. This performance category is crucial for stylesheets that contain lots of XPath expressions, particularly ones with predicates. (3) XPath Library Functions - this category covers the execution of XPath library functions, particularly the frequently used string functions. This category is most important to stylesheets that perform a lot of string processing. (4) XSLT Control - this category covers the control structures defined by XSLT elements, including variable and parameter handling. This category is most relevant for stylesheets that perform tricky calculations involving calling templates with parameters. . . There are about 40 different test cases in this release; see documentation for descriptions and several third-party credits. A variety of Java and C/C++ processors are supported, and drivers for other XSLT engines are easy to add. Source and makefiles are being released (with an emphasis on Linux X86, although Win32 X86 and Solaris SPARC are also supported, and other platforms should be fairly straightforward). 
We are also making available some initial benchmark results for several popular and well-regarded XSLT processors. We welcome comments, benchmark results submissions and new test drivers for other XSLT processors. DataPower's XSLTMark is the first comprehensive benchmark for measuring the performance of XSL processors. It can be used to test the XSLT performance of XSL processors for XML-to-XML and XML-to-HTML transformations. It also provides basic compliance testing to ensure that benchmark results are not distorted by incorrectly functioning processors. The benchmark is a Java application that uses a 'Driver' class to communicate with the XSL processor under test. Both Java and native (C/C++) processors are supported, with driver modules available for many popular XSLT engines on a variety of platforms. XSLTMark is currently being used for performance and compliance testing at DataPower, but also has a core suite of tests to yield benchmark figures for external comparison purposes. The tool features: (1) Processing throughput measurement; (2) Normalized score calculation; (3) Balanced test suite; (4) Optional standards compliance testing; (5) Support for processors written in both Java and C/C++; (6) Cross-platform operation; (7) Test drivers for most popular XSLT processors [XT (James Clark), Saxon (Michael Kay), Transformiix (Mozilla), Xalan-J (Apache), Xalan-C++ (Apache), MSXML (Microsoft)]. For related topics, see "XSLT/XPath Conformance."
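The measurement idea, throughput over repeated transformations of the same input, can be sketched in a few lines. This is a toy harness in the spirit of XSLTMark, not the tool itself: the Python standard library has no XSLT engine, so a parse-and-reserialize function stands in for the processor under test, and the function and variable names are invented for illustration.

```python
# Toy throughput harness in the spirit of XSLTMark (not the real tool).
# The "processor" here is a stand-in: it just parses and re-serializes
# the document, since the stdlib has no XSLT engine.
import time
import xml.etree.ElementTree as ET

def identity_transform(xml_text):
    """Stand-in for the XSLT processor under test."""
    root = ET.fromstring(xml_text)
    return ET.tostring(root, encoding="unicode")

def benchmark(transform, xml_text, iterations=200):
    """Return (KB of input processed per second, last output)."""
    start = time.perf_counter()
    out = ""
    for _ in range(iterations):
        out = transform(xml_text)
    elapsed = time.perf_counter() - start
    kb = len(xml_text.encode("utf-8")) / 1024.0
    return (kb * iterations) / elapsed, out

doc = "<catalog>" + "".join(
    "<item id='%d'>x</item>" % i for i in range(50)
) + "</catalog>"
throughput_kb_s, result = benchmark(identity_transform, doc)
```

A real harness like XSLTMark additionally normalizes scores across test cases and checks output correctness, so that a fast-but-wrong processor does not win.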

  • [October 24, 2000]   W3C Specification for Modularization of XHTML Advances to Candidate Recommendation.    The W3C specification for the Modularization of XHTML has been promoted to the status of a Candidate Recommendation. Reference: W3C Candidate Recommendation 20-October-2000, edited by Robert Adams (Intel Corporation), Murray Altheim (Sun Microsystems), Frank Boumphrey (HTML Writers Guild), Sam Dooley (IBM), Shane McCarron (Applied Testing and Technology), Sebastian Schnitzenbaumer (Mozquito Technologies), and Ted Wugofski. The new Candidate Recommendation "specifies an abstract modularization of XHTML and an implementation of the abstraction using XML Document Type Definitions (DTDs). This modularization provides a means for subsetting and extending XHTML, a feature needed for extending XHTML's reach onto emerging platforms." Status: This version of XHTML "incorporates some comments from the Last Call Working Draft review period. A diff-marked version from the Last Call draft is available for comparison purposes. Major changes in this version include: (1) Re-integration of the Building document into this document; (2) Incorporation of the Henry Thompson/Dan Connolly XML Namespace handling process with substantial additions by the Math and HTML working groups; (3) Complete worked examples including modules and miniature DTDs; (4) Minor restructuring of abstract module definitions, including the creation of a 'style attribute module', a 'name identification module' and a 'target' module; (5) Tweaking of some of the module contents based on review comments. On 20 October 2000, this document enters a Candidate Recommendation review period. From that date until 17-November-2000, W3C members are encouraged to review and implement this specification and return comments. W3C is looking for testimonials from users of this specification. Additionally, experience using all of the modules is being sought to create a coverage table of the use of each module. 
These two criteria are needed to advance this specification to Proposed Recommendation." Available from W3C as a single HTML file, Postscript version, PDF version, ZIP archive, or Gzip'd TAR archive. See additional references in "XHTML and 'XML-Based' HTML Modules."

  • [October 24, 2000]   Update of XGMML (Extensible Graph Markup and Modeling Language) Schema.    John Punin has announced an updated version of the XML Schema for XGMML (Extensible Graph Markup and Modeling Language). The revised XML Schema is based on the W3C XML Schema Working Draft 22-September-2000; it has been validated using the XSV Validator version 1.166/1.77. XGMML (Extensible Graph Markup and Modeling Language) "is an XML application based on GML which is used for graph description. XGMML uses tags to describe nodes and edges of a graph. The purpose of XGMML is to make possible the exchange of graphs between different authoring and browsing tools for graphs. The conversion of graphs written in GML to XGMML is trivial. Using XSL with XGMML allows the translation of graphs to different formats. XGMML was created to be used for the WWWPAL System that visualizes web sites as a graph. Web robots can navigate through a web site and save the graph information as an XGMML file. XGMML, like any other XML application, can be mixed with other markup languages to describe additional graph, node and/or edge information." The XGMML 1.0 Draft Specification is available online. See further references in "Extensible Graph Markup and Modeling Language (XGMML)."
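The node-and-edge tagging described above is easy to sketch. The element and attribute names below (graph/node/edge with id, label, source, target) follow the XGMML 1.0 draft, but the snippet is illustrative only and is not schema-validated:

```python
# Minimal sketch of emitting an XGMML-style graph with the stdlib.
# Names follow the XGMML 1.0 draft (graph/node/edge); treat this as
# illustrative rather than a validated XGMML instance.
import xml.etree.ElementTree as ET

def graph_to_xgmml(nodes, edges, label="site map"):
    """nodes: (id, label) pairs; edges: (source, target) pairs."""
    graph = ET.Element("graph", label=label, directed="1")
    for node_id, node_label in nodes:
        ET.SubElement(graph, "node", id=str(node_id), label=node_label)
    for src, dst in edges:
        ET.SubElement(graph, "edge", source=str(src), target=str(dst))
    return ET.tostring(graph, encoding="unicode")

xgmml = graph_to_xgmml([(1, "index.html"), (2, "about.html")], [(1, 2)])
```

A web robot like the one in WWWPAL would build the node and edge lists while crawling, then serialize them this way.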

  • [October 24, 2000]   Pixy System 2 Astronomical Image Software Uses RELAX/Relaxer.    Murata Makoto recently announced the availability of Pixy System 2 from the MISAO Project. The MISAO Project "aims to make much use of images taken all over the world for searching and tracking remarkable astronomical objects." Pixy (Practical Image eXamination and Inner-objects Identification system) "is an automated astronomical image examination system developed by Seiichi Yoshida, used in the course of the new object survey of the MISAO Project. It automatically detects all stars from an image, collates them with star data recorded in catalogs such as GSC, USNO-A2.0, etc., and finds out new objects or variable stars. It also prints out all astrometric and photometric data of all detected stars, so it is also useful for astrometry of minor planets or photometry of variable stars." Pixy System 2 is implemented in Java; it runs both on a Windows PC and a UNIX workstation. The software is available for download, and its API documentation may be read online. Pixy System 2 "heavily uses RELAX/Relaxer: the class files in the net.aerith.misao.xml.relaxer package are created by Relaxer from the RELAX files." Relaxer is a Java class generator that operates on an XML document defined by a RELAX grammar. In the new version of Pixy, one may 'save the examination result in XML file, and review of the desktop from the XML file'.

  • [October 19, 2000]   Research Information Exchange Markup Language (RIXML).    A recent announcement from a group of industry financial leaders describes the formation of an organization created to support the development of "an open protocol to improve the process of categorizing, aggregating, comparing, sorting, and distributing global financial research." The fifteen founding members include five asset managers and ten broker-dealers. Details: "A group of major financial firms announced the formation of a global industry association of buy-side and sell-side firms whose mission is to develop an open standard for the electronic exchange of investment research. The new specification, to be known as RIXML (Research Information eXchange Markup Language), will be based on XML, the emerging standard for data sharing between applications. RIXML will provide a structure for parsing and classifying investment research in a way that enables recipients to access information in a customizable format through standard sorting and filtering criteria. Once developed through the collaborative efforts of the association's members, the RIXML specification will be made available for use by firms within the financial services industry as well as other interested parties." See (1) the text of the announcement: "Financial Industry Leaders Join Forces to Develop a Global Standard for Investment Research", and (2) "Research Information Exchange Markup Language (RIXML)."

  • [October 18, 2000]   New Working Draft for W3C Extensible Stylesheet Language (XSL) Version 1.0.    Max Froumentin (W3C XSL Staff Contact) announced the release of a new working draft for Extensible Stylesheet Language (XSL) Version 1.0. Reference: W3C Working Draft 18-October-2000. By Sharon Adler (IBM), Anders Berglund (IBM), Jeff Caruso (Pageflex), Stephen Deach (Adobe), Paul Grosso (ArborText), Eduardo Gutentag (Sun), Alex Milowski (Lexica), Scott Parnell (Xerox), Jeremy Richman (BroadVision), and Steve Zilles (Adobe). Document overview: "This specification defines the Extensible Stylesheet Language (XSL). XSL is a language for expressing stylesheets. Given a class of arbitrarily structured XML documents or data files, designers use an XSL stylesheet to express their intentions about how that structured content should be presented; that is, how the source content should be styled, laid out, and paginated onto some presentation medium, such as a window in a Web browser or a hand-held device, or a set of physical pages in a catalog, report, pamphlet, or book. Formatting is enabled by including formatting semantics in the result tree. Formatting semantics are expressed in terms of a catalog of classes of formatting objects. The nodes of the result tree are formatting objects. The classes of formatting objects denote typographic abstractions such as page, paragraph, table, and so forth. Finer control over the presentation of these abstractions is provided by a set of formatting properties, such as those controlling indents, word- and letter-spacing, and widow, orphan, and hyphenation control. In XSL, the classes of formatting objects and formatting properties provide the vocabulary for expressing presentation intent. The XSL processing model is intended to be conceptual only. An implementation is not mandated to provide these as separate processes. 
Furthermore, implementations are free to process the source document in any way that produces the same result as if it were processed using the conceptual XSL processing model." WD status: "This version supersedes the previous draft released on March 27, 2000. The working group is issuing this interim public draft as it sets out a number of changes made in response to comments received on the Last Call draft. The Working Group intends to submit a revised version of this specification for publication as a Candidate Recommendation in the near future. Items under consideration for change for Candidate Recommendation include the name of the font-height-override-before and font-height-override-after properties. Discussion is invited and comments can be sent to the editors." The WD is available in several formats: PDF, XML file, HTML single file and .ZIP file. See related references in "Extensible Stylesheet Language (XSL)."
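The "result tree of formatting objects" idea above can be made concrete with a small sketch: the output of XSL processing is an XML tree in the fo: namespace whose elements (page sequences, blocks) and properties (indents, spacing) a formatter then paginates. This builds a deliberately simplified fragment with the standard library; it is not a complete, valid XSL-FO document (a real one needs a layout-master-set, among other things).

```python
# Sketch of an XSL formatting-object result tree: fo:* elements carry
# the formatting semantics, and properties appear as attributes.
# Simplified for illustration; not a complete valid FO document.
import xml.etree.ElementTree as ET

FO = "http://www.w3.org/1999/XSL/Format"
ET.register_namespace("fo", FO)

root = ET.Element(f"{{{FO}}}root")
page_seq = ET.SubElement(root, f"{{{FO}}}page-sequence")
block = ET.SubElement(page_seq, f"{{{FO}}}block")
block.set("text-indent", "1em")   # a formatting property
block.text = "Hello, XSL-FO"

serialized = ET.tostring(root, encoding="unicode")
```

An XSLT stylesheet would normally emit this tree from source XML; a formatter such as a print engine then renders the fo:block onto pages.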

  • [October 18, 2000]   jUDDI: Bowstreet Hosts Open Source Java-based UDDI Toolkit Development on SourceForge.    Bowstreet recently announced 'jUDDI' as "the industry's first implementation of a broad, industry-initiated standard to link e-businesses to the 'Yellow Pages' of B2B web services. The jUDDI implementation, available immediately, comes after Ariba, IBM and Microsoft unveiled a draft specification of the UDDI standard. Bowstreet has introduced jUDDI as free, open source software that is available for anyone to use. UDDI -- which stands for Universal Description, Discovery and Integration -- is designed to make it easy for businesses to create partnerships and new business models using platform-neutral application components called web services. The initiative will create a distributed registry, or Yellow Pages, for publishing, finding and using web services that companies wish to offer to the marketplace. Bowstreet's jUDDI (pronounced 'Judy') is an open source Java-based toolkit for developers to make their applications UDDI-ready. jUDDI-enabled applications will be able to look up a web service in a UDDI registry. A retail chain, for example, could use the toolkit to jUDDI-enable its online catalog. With jUDDI, the catalog could call another company's shopping cart and a third company's transaction web service, creating an instant web-based store. Companies will eventually create many connections like this, spawning "business webs," or dynamic collections of businesses, on a massive scale. jUDDI and UDDI will complement DSML (Directory Services Markup Language) -- the directory services standard launched last year by Bowstreet, IBM, Microsoft, Oracle and the Sun-Netscape Alliance. Directories provide users with a powerful way to manage web services, including web services published in UDDI registries. 
Bowstreet sees synergy between DSML and UDDI and will actively explore a relationship between the two specifications, according to Tauber, who is chairman of the DSML 2.0 working group. The jUDDI project is hosted at SourceForge and available as downloadable software. jUDDI is the latest in a long line of Bowstreet's industry firsts that advance intercompany interoperability on the Internet. 'UDDI, Microsoft's .NET, HP's e-Speak, ebXML, DSML and a host of other initiatives confirm what Bowstreet customers already know,' said Bob Crowley, Bowstreet's president and chief executive officer. 'They know that plug-and-play e-commerce is possible and inevitable for the 21st century, because they're doing it.' Bowstreet, a founding advisor to the UDDI initiative, was one of the first companies to recognize the importance of web services and act on it commercially. In 1998, the company announced a software architecture for deploying and managing web services across multiple vendor platforms." For references on UDDI, see "Universal Description, Discovery, and Integration (UDDI)."
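The publish-and-find pattern the shopping-cart example relies on can be illustrated with a toy, in-memory registry. This is emphatically not the UDDI API (which is SOAP-based) nor jUDDI's Java interface; every name below is invented to show the idea of looking up a service endpoint by name:

```python
# Toy, in-memory "yellow pages" registry illustrating the UDDI idea of
# publishing and finding web services. NOT the UDDI or jUDDI API; all
# names here are invented for illustration.
class ToyRegistry:
    def __init__(self):
        self._services = []   # (business, service_name, endpoint) tuples

    def publish(self, business, service_name, endpoint):
        self._services.append((business, service_name, endpoint))

    def find(self, service_name):
        """Return endpoints of every service matching the given name."""
        return [ep for biz, name, ep in self._services
                if name == service_name]

registry = ToyRegistry()
registry.publish("Acme Retail", "shopping-cart", "http://example.com/cart")
registry.publish("Pay Co", "transaction", "http://example.com/pay")
carts = registry.find("shopping-cart")
```

In the real initiative the registry is distributed and the lookup happens over the network, but the catalog-calls-cart-calls-payment composition works the same way: discover an endpoint, then invoke it.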

  • [October 18, 2000]   Revised Working Draft for W3C's Platform for Privacy Preferences 1.0 Specification.    As part of the W3C's P3P Activity, the P3P Specification Working Group has released a new 'last call' working draft for The Platform for Privacy Preferences 1.0 (P3P1.0) Specification. Reference: W3C Working Draft 18-October-2000, edited by Massimo Marchiori (W3C/MIT/UNIVE). Description: "The Platform for Privacy Preferences Project (P3P) enables Web sites to express their privacy practices in a standard format that can be retrieved automatically and interpreted easily by user agents. P3P user agents will allow users to be informed of site practices (in both machine- and human-readable formats) and to automate decision-making based on these practices when appropriate. Thus users need not read the privacy policies at every site they visit. The P3P1.0 specification defines the syntax and semantics of P3P privacy policies, and the mechanisms for associating policies with Web resources. P3P policies consist of statements made using the P3P vocabulary for expressing privacy practices. P3P policies also reference elements of the P3P base data schema -- a standard set of data elements that all P3P user agents should be aware of. The P3P specification includes a mechanism for defining new data elements and data sets, and a simple mechanism that allows for extensions to the P3P vocabulary. P3P policies use an XML encoding of the P3P vocabulary to identify the legal entity making the representation of privacy practices in a policy, enumerate the types of data or data elements collected, and explain how the data will be used. In addition, policies identify the data recipients, and make a variety of other disclosures including information about dispute resolution, and the address of a site's human-readable privacy policy." Appendices 4 and 5 of the Working Draft provide the 'XML Schema Definition' and the 'XML DTD Definition'. 
Status: This Last Call Working Draft is submitted for review by W3C members and other interested parties; the last call review period ends 31 October 2000. "Following this Last Call period, the Working Group intends to submit this specification for publication as a Candidate Recommendation." For related references, see "Platform for Privacy Preferences (P3P) Project."
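The machine-readable side of P3P described above amounts to parsing policy XML and extracting the declared practices. The sketch below does exactly that for a hand-written, radically simplified policy; the element names echo the P3P vocabulary (POLICY, STATEMENT, PURPOSE) but the fragment is not a complete, valid P3P1.0 policy:

```python
# Sketch of a user agent reading a (simplified) P3P policy: parse the
# XML and list the purposes declared in each statement. The tiny policy
# below is hand-written for illustration, not a valid P3P1.0 policy.
import xml.etree.ElementTree as ET

policy_xml = """
<POLICY discuri="http://example.com/privacy.html">
  <STATEMENT>
    <PURPOSE><admin/><develop/></PURPOSE>
    <DATA-GROUP><DATA ref="#dynamic.clickstream"/></DATA-GROUP>
  </STATEMENT>
</POLICY>
"""

def declared_purposes(xml_text):
    """Collect the purpose elements declared in every STATEMENT."""
    root = ET.fromstring(xml_text)
    purposes = []
    for stmt in root.findall("STATEMENT"):
        for purpose in stmt.findall("PURPOSE"):
            purposes.extend(child.tag for child in purpose)
    return purposes

purposes = declared_purposes(policy_xml)
```

A real user agent would compare the extracted purposes and data groups against the user's stored preferences before deciding whether to warn or proceed silently.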

  • [October 17, 2000]   Systems Biology Markup Language (SBML).    The Caltech ERATO Kitano Systems Biology Project is developing the Systems Biology Markup Language (SBML), using XML and UML for representation and modeling of the information components in the system. The research team is attempting to specify "a common, model-based description language for systems biology simulation software; we call this the Systems Biology Markup Language (SBML). The overall goal is to develop an open standard that will enable simulation software to communicate and exchange models, ultimately leading to the ability for researchers to run simulations and analyses across multiple software packages. SBML is the result of merging the most obvious modeling-language features of BioSpice, DBSolve, E-Cell, Gepasi, Jarnac, StochSim, and Virtual Cell. The description language is encoded in XML, the Extensible Markup Language. The XML encoding of the description language can define a file format; however, at this time, we are focusing on using the XML-based description language as an interchange format for use in communications between programs. Appendix B [in the principal specification] contains the current version of this XML schema. As XML Schemas are difficult for human readers to read and absorb, we define the proposed data structures using a succinct graphical notation based on a subset of UML, the Unified Modeling Language. . . The SBML representation language is organized around five categories of information: model, compartment, geometry, specie, and reaction. Not all of these will be needed by every simulation package; rather, the intent is to cover the range of data structures needed by the collection of all of the simulators examined so far..." For further description and references, see "Systems Biology Markup Language (SBML)."
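The five information categories named above map naturally onto nested XML. The element names in this sketch mirror the categories in the draft (model, compartment, specie, reaction), but the attributes and structure are illustrative only and do not constitute valid SBML:

```python
# Illustrative sketch of SBML's information categories as a tiny XML
# document. Element names mirror the draft's categories (note 'specie',
# the draft's spelling); attributes are invented, not valid SBML.
import xml.etree.ElementTree as ET

model = ET.Element("model", name="toy")
ET.SubElement(model, "compartment", name="cell", volume="1.0")
ET.SubElement(model, "specie", name="glucose", initialAmount="100")
reaction = ET.SubElement(model, "reaction", name="glycolysis")
ET.SubElement(reaction, "specieReference",
              specie="glucose", stoichiometry="1")

sbml_text = ET.tostring(model, encoding="unicode")
```

A simulator that only handles well-mixed models would simply ignore the geometry category, which is exactly the "not all of these will be needed by every simulation package" point above.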

  • [October 16, 2000]   Gnome XML Library's libxml-2.2.5 Supports XPointer and XPath.    Daniel Veillard (W3C) posted an announcement for the release of libxml-2.2.5 in the Gnome XML library with XPointer and XPath implementations, including an initial test suite. "I usually don't post announcements of new releases of libxml to xml-dev, but since this version has XPointer support, which was requested previously here, I think it makes sense: Libxml is the XML C library developed for the Gnome project; it allows one to parse, manipulate, and save XML and HTML documents, but it does not expose a GUI interface. Here are some key points about libxml (a.k.a. gnome-xml): (1) Libxml exports Push and Pull type parser interfaces for both XML and HTML. (2) Libxml can do DTD validation at parse time, using a parsed document instance, or with an arbitrary DTD. (3) Libxml now includes nearly complete XPath and XPointer implementations. (4) It is written in plain C, making as few assumptions as possible, and sticking closely to ANSI C/POSIX for easy embedding. It works on Linux/Unix/Windows. (5) Basic support for HTTP and FTP clients allows fetching remote resources. (6) The design is modular; most of the extensions can be compiled out. (7) The internal document representation is as close as possible to the DOM interfaces. (8) Libxml also has a SAX-like interface; the interface is designed to be compatible with Expat. (9) This library is released both under the W3C IPR and the GNU LGPL; use either at your convenience. URLs for libxml are given in the announcement." For related XLink/XPointer tools, see "XML Linking and Addressing Languages (XPath, XPointer, XLink)."
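Libxml's XPath support is a C API, but the kind of node selection XPath provides is easy to demonstrate with the limited XPath subset in Python's standard library (find/findall path expressions with attribute predicates). This is a stdlib illustration of XPath-style selection, not the libxml API:

```python
# Illustration of XPath-style node selection using the stdlib's limited
# XPath subset (not libxml's C API): select by path and by predicate.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<library>"
    "<book year='1999'><title>XML Basics</title></book>"
    "<book year='2000'><title>XPointer in Depth</title></book>"
    "</library>"
)

# Path expression: all book titles, in document order.
titles = [t.text for t in doc.findall("book/title")]
# Attribute predicate: books published in 2000.
recent = doc.findall("book[@year='2000']")
```

Full XPath (and XPointer, which builds on it for addressing into remote documents) adds axes, functions, and positional predicates beyond what this subset offers, which is what libxml's "nearly complete" implementations supply.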

  • [October 16, 2000]   XML Developers' Day Call for Presentations.    Marion Elledge (GCA) has posted a call for presentations in connection with XML Developers' Day at the XML 2000 Conference. "Since 1984, the fall GCA SGML/XML conference has been the one annual must-attend event for the structured markup community. XML was first introduced to the world at this conference in 1996, and the event continues to be a focal point for meetings of XML-related OASIS, W3C, IDEAlliance, and ISO working groups. For information on the conference, held this year in Washington, D.C., see the conference web site. XML Developers' Day on Monday 4 December is intended for conference attendees with a special interest in the latest XML tools and advanced techniques. If you have applications that feature innovative uses of XML, this is your chance to share your accomplishments with other advanced workers. Proposals of 1-3 paragraphs clearly describing the presentation should be sent in plain text directly to the chair of the XML Dev Day track, Jon Bosak. Submissions must be mailed no later than Monday, 23-October-2000." See the text of the announcement for details.

  • [October 16, 2000]   HL7's Clinical Document Architecture (CDA).    A recent announcement from Health Level Seven reports on the progress of the Clinical Document Architecture (CDA): "Health Level Seven, Inc. (HL7) successfully balloted what it believes to be the first XML-based standard for healthcare -- the Clinical Document Architecture (CDA). The CDA, which was until recently known as the Patient Record Architecture (PRA), provides an exchange model for clinical documents (such as discharge summaries and progress notes) -- and brings the healthcare industry closer to the realization of an electronic medical record. The CDA Standard is expected to be published as an ANSI-approved standard by the end of the year. By leveraging the use of XML, the HL7 Reference Information Model (RIM) and coded vocabularies, the CDA makes documents both machine-readable (so they are easily parsed and processed electronically) and human-readable (so they can be easily retrieved and used by the people that need them). CDA documents can be displayed using XML-aware Web browsers or wireless applications such as cell phones, as shown by Nokia at the HIMSS 2000 demonstration. The CDA is only the first example of HL7's commitment to the advancement of XML-based e-healthcare technologies within the clinical, patient care domain. Along with the CDA, HL7 is developing XML-based Version 3 messages. These Version 3 messages enhance the usability of HL7 by offering greater precision and less optionality, conformance profiles that will help guarantee compliance, coded attributes linked to standard vocabularies, and an explicit, comprehensive, and open information model, the HL7 RIM. All this, packaged in a standardized XML syntax for ease of interoperability. In 1999, HL7 also successfully balloted a recommendation for sending V2.3.1 messages using XML encoding. In 2001, HL7 will ballot, as a normative standard, a methodology for producing HL7 approved DTDs for Version 2.4 and previous versions. 
Said Stan Huff, chair of the HL7 board of directors: 'XML is an encoding that complements the semantic content provided by the HL7 RIM, allowing users to exploit all the possibilities of the Internet. The extensibility inherent in XML is resulting in an explosion of schemas and DTDs from diverse sources, which actually decreases the ability to provide plug and play applications. The development of a model-based, standardized and industry-accepted application of XML, as provided by HL7, will help decrease the cost of integration, and improve the reliability and consistency of communications between disparate systems and enterprises.' HL7's history with the Web and XML stretches back to the inception of the technologies. The organization is a long-standing and active member of the World Wide Web Consortium, the creators and keepers of XML. It has also exchanged sponsor memberships with OASIS, a non-profit, international consortium that operates a global XML industry portal used to collect and distribute XML schemas." For other information, see "Health Level Seven XML Patient Record Architecture."
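The "machine-readable and human-readable at once" claim above is the key design point: one document carries both coded data for software and narrative for clinicians. A toy sketch with invented element names (this is not the actual CDA markup, whose structure is defined by the HL7 specification):

```python
# Toy illustration of the CDA idea: the same XML document yields coded
# data for machines and narrative text for people. Element names and
# the code value are invented, not actual CDA markup.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<document>"
    "<code system='local' value='discharge-summary'/>"
    "<section><title>Discharge Summary</title>"
    "<text>Patient discharged in stable condition.</text></section>"
    "</document>"
)

# Machine-readable: pull the coded document type for routing/indexing.
doc_code = doc.find("code").get("value")
# Human-readable: extract the narrative for display in a browser.
narrative = doc.find("section/text").text
```

In the real architecture the codes come from standard vocabularies tied to the HL7 RIM, and the narrative is what an XML-aware browser or cell phone renders.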

  • [October 14, 2000]   StarOffice Software 'Open' Source Available.    A recent announcement from Sun describes the availability of StarOffice source code as 'open' source, and the decision to adopt XML to replace the old binary file format; the project is dedicated to establishing open productivity XML-based file formats and language-independent component APIs. "The source code for StarOffice software is now available under the GNU General Public License. Sun has also made the StarOffice APIs and XML file formats available as well, in an effort to drive standardization across office productivity suites. Developers around the world now have the freedom to use StarOffice technology to best suit their needs, whether to improve their own products, build new value-added products on top of the StarOffice suite, improve existing technology in StarOffice software, or contribute new StarOffice components to the open source community. This move opens up the office productivity market to unlimited possibilities for innovation. In one of the largest actions of its kind, Sun is working with the leaders of the free software and open source community to make the source code for its StarOffice software suite freely available under the GNU General Public License (GPL). In addition, Sun will commit the efforts of its development team, as well as the resources of a $14 billion global company, to work side by side with members of the community to continue to develop the code at a site hosted and managed by CollabNet. No longer will any one company determine what is best for the market or the user, but the market will decide and users will choose. No longer will files and documents wear the cement shoes of a single vendor or operating system, but standards will flourish and compatibility reign across platforms. For the first time, a commercial grade, full-featured office suite will be opened up to the innovative input of the global developer community." 
[On the XML File Format:] "We adopted XML to replace the old binary file format and become the suite's new native file format. Our goals were twofold: to have a complete specification encompassing all components, and to provide an open standard for office documents. One single XML format applies to different types of documents -- e.g., the same definition applies for tables in texts and in spreadsheets. XML is ideal as an open standard because of the free availability of XML specifications and DTDs, and XML's support for XSL, XSLT, XLink, SVG, MathML, and many other important and emerging standards. Besides replacing the binary file format with XML, the suite will use XML internally for exchanging any type of content between the different applications. The project today provides an infrastructure for using different XML components. The XML parser and the XML printer are implemented as components. Each of these components supports the Simple API for XML (SAX). This infrastructure will in the future allow dynamic configuration of a pipeline of different XML components, such as an XML parser, an XSLT processor, etc., to process XML input and output. This will allow transformation of XML data into different formats on the fly, without storing intermediate files and parsing them again for every transformation step. See the latest draft of the XML File Format Specification; the XML DTDs are available through CVS access. There are many benefits to making StarOffice software open source, including: (1) Higher quality product. Since there are more developers on the project fixing bugs, there will be fewer bugs. (2) Faster development time. Leveraging the efficiencies of the open source model, the community will get access to new features sooner. (3) Ports to any platform. Since the code is open, anyone can port the StarOffice code to any platform. (4) Many languages. It will be possible to localize StarOffice software to any language the community has knowledge of. (5) Standard APIs. 
A single API set for manipulating and extending StarOffice software. (6) Standard file formats. XML will allow any XML-capable program to read StarOffice files. (7) More templates and sample documents. By building a community, users will be able to share sample documents, document templates, and macros, making it easier to produce professional-quality content. . . With XML file formats and language-independent APIs, the project ushers in an era of compatibility, giving developers the power to innovate and build new applications that easily work together, regardless of platform. End users will be able to choose from an array of powerful, free software, assured that their work is transportable and can be shared with anyone. Sun will continue to drive the development of the source code and distribute its own certified, StarOffice branded version of the software for free. To ensure consumer confidence and promote uniformity, the project will also allow other companies the opportunity to license the source for commercial release under a royalty-free Sun Industry Standards Source License (SISSL) that requires only that they maintain compatibility with the GPL reference implementation. Companies that meet this requirement may also qualify for and license the StarOffice brand for use on their product. . . As promised, Sun Microsystems and CollabNet have worked together to build the infrastructure to put the StarOffice code into the open source arena on October 13, 2000. The CVS repository is up and running, and the code is now available for checkout and download. A complete set of technical documentation is available, including a guide to the projects, whitepapers, a 'build guide,' and a porting guide..." See (1) the Technical Overview, (2) the main development web site, and (3) "StarOffice and XML."
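The component pipeline described above (parser, transformer, printer, all speaking SAX, with no intermediate files) can be sketched with the Python standard library's SAX filters. This is a stdlib illustration of the pattern, not StarOffice's C++ component infrastructure; the filter and attribute names are invented:

```python
# Sketch of a SAX component pipeline: parser -> filter -> serializer,
# streaming events with no intermediate files. A stdlib illustration of
# the pattern, not StarOffice's component API.
import io
import xml.sax
from xml.sax.xmlreader import AttributesImpl
from xml.sax.saxutils import XMLFilterBase, XMLGenerator

class DropAttribute(XMLFilterBase):
    """Transformer stage: removes the (invented) 'internal' attribute
    from every element as events stream through."""
    def startElement(self, name, attrs):
        kept = {k: v for k, v in attrs.items() if k != "internal"}
        super().startElement(name, AttributesImpl(kept))

out = io.StringIO()
pipeline = DropAttribute(xml.sax.make_parser())   # parser stage
pipeline.setContentHandler(XMLGenerator(out))     # serializer stage
pipeline.parse(io.BytesIO(b"<doc internal='x'><p>hello</p></doc>"))
result = out.getvalue()
```

Because every stage consumes and produces SAX events, further stages (an XSLT step, say) can be spliced in without re-parsing, which is the on-the-fly transformation the announcement describes.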

  • [October 13, 2000]   W3C Acknowledges XML Messaging Specification (XMSG).    The W3C has acknowledged receipt of a submission for XMSG - XML Messaging Specification. Reference: W3C Note 13-October-2000, by R. Alexander Milowski (of Lexica, LLC). Document abstract: "XMSG is a specification for using XML to send messages that contain a set of XML documents, embedded non-XML data, and references to non-XML documents in a fashion that supports scalable transactions and operates on a participant model." The submission forms a multi-part document consisting of: (1) the XMSG (XML Messaging) Specification, (2) an XMSG DTD, (3) an XMSG Schema, and (4) the XMSG Schema Documentation. In the submission, Lexica, LLC requests that the W3C consider it in connection with the XML Protocol Activity. Description: The XML messaging specification "is based on the basic principle of providing a simple way to transport multiple XML documents within one logical XML construct without dictating any layered semantics of a messaging protocol that might be layered on top. The general philosophy is to provide the general structure upon which messaging protocols for specific business or technological purposes can be layered allowing the identification of that messaging intent but not dictating the exact syntax and semantics of the subject message. In this way, manifests, metadata, and other messaging specific constructs can be tailored to specific vertical markets or technology applications. In general, the idea of an XML message presented by this specification is three-fold: (1) A pair or triplet of participants involved in the message are identified by URI values. (2) Metadata may be associated with the message itself. (3) A set of documents is contained and identified by URI allowing for document specific metadata. 
The goals of this specification are (1) To provide the ability to transport multiple documents and references to associated data objects within a single document (a 'message') and preserve their identity. (2) To provide the ability to associate metadata with both the documents and the message without modifying the original document or schemas for those documents. (3) To provide the ability to transport non-XML data as a document within the message. (4) To provide a simple way to accomplish XML messaging." W3C Team Comment on the NOTE has been provided by Yves Lafon, W3C lead for XML Protocol Activity: "The submission provides a description of using XML to send MIME mail-like messages that contain XML documents, non-XML data and references to other documents. In XMSG, a message consists of information about the message itself, such as the origin, the destination, a unique ID used to identify and track it, as well as management-oriented information, such as its priority, expire time and receipt management. XML Documents are embedded using a special tag in the message format, with an ID to provide easy reference inside the message. Even if error codes remain application-specific, having classes of errors may be helpful for intermediaries..." [cache]
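The three-fold idea above (participants identified by URI, message-level metadata, contained documents each with their own URI) can be sketched as a small envelope. Every element name below is invented for illustration; the actual XMSG syntax is defined by the submitted DTD and schema:

```python
# Sketch of an XMSG-style envelope: participant URIs, message metadata,
# and embedded documents each identified by URI. Element names here are
# invented for illustration, not the actual XMSG syntax.
import xml.etree.ElementTree as ET

msg = ET.Element("message")
ET.SubElement(msg, "from", uri="urn:example:sender")
ET.SubElement(msg, "to", uri="urn:example:receiver")

meta = ET.SubElement(msg, "metadata")
ET.SubElement(meta, "priority").text = "high"

docs = ET.SubElement(msg, "documents")
doc = ET.SubElement(docs, "document", uri="urn:example:doc/1")
ET.SubElement(doc, "order", number="42")   # an embedded XML payload

envelope = ET.tostring(msg, encoding="unicode")
```

The key property the specification aims at is visible even in the sketch: the embedded document keeps its own identity (its URI) and the envelope's metadata never touches the payload's markup or schema.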

  • [October 13, 2000]   Redfoot RDF Store/Viewer/Editor Framework.    James Tauber (Director XML Technology, Bowstreet) has posted an announcement to a W3C mailing list for the release of Redfoot Version 0.9.0. "Redfoot is a store/viewer/editor framework for RDF that includes peer-to-peer communication between stores. It is written in Python by James Tauber and Daniel Krech, with open source development hosted on SourceForge. At present, Redfoot includes: (1) an RDF database; (2) a query API for RDF with numerous higher-level query functions; (3) an RDF parser and serializer; (4) a simple HTTP server providing a web interface for viewing and editing RDF; (5) the beginnings of a peer-to-peer architecture for communication between different RDF databases. Although the peer-to-peer functionality is embryonic, the RDF viewing/editing capabilities are of beta quality... In the future, Redfoot will hopefully include: (1) a full peer-to-peer architecture for discovery of RDF statements; (2) an inference engine; (3) a fully customizable UI; (4) connectors for mapping non-RDF data into RDF triples; (5) sample applications built on top of Redfoot. Redfoot is written in pure Python and is being tested on Python 1.6 and 2.0b1 (soon 2.0b2). Redfoot makes extensive use of callbacks as a means of processing RDF structures rather than building large temporary data structures in memory." For other details, see the development documentation. For related resources, see "Resource Description Framework (RDF)."
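Redfoot's callback-oriented style of RDF processing can be illustrated with a toy triple store in Python. The class and method names below are invented for illustration and are not Redfoot's actual API.

```python
# A toy RDF triple store with a callback-driven query, echoing the
# callback style described above (names are illustrative, not Redfoot's).
class TripleStore:
    def __init__(self):
        self.triples = []  # (subject, predicate, object)

    def add(self, s, p, o):
        self.triples.append((s, p, o))

    def visit(self, callback, s=None, p=None, o=None):
        """Invoke callback for each triple matching the non-None fields,
        instead of building a large result structure in memory."""
        for ts, tp, to in self.triples:
            if (s is None or ts == s) and \
               (p is None or tp == p) and \
               (o is None or to == o):
                callback(ts, tp, to)

store = TripleStore()
store.add("urn:redfoot", "rdf:type", "Software")
store.add("urn:redfoot", "dc:creator", "James Tauber")
store.add("urn:redfoot", "dc:creator", "Daniel Krech")

creators = []
store.visit(lambda s, p, o: creators.append(o), p="dc:creator")
print(creators)  # every object whose predicate is dc:creator
```

The callback approach lets a caller stream over matches one at a time, which matters when the store is large.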

  • [October 13, 2000]   JDF Specification Draft Spiral Version 4.0.    A level 4.0 draft specification has been published for the XML-based Job Definition Format (JDF) and its counterpart, the Job Messaging Format (JMF). JDF is an open, extensible, XML-based print workflow specification framework. "Four companies prominent in the graphic arts industry -- Adobe, Agfa, HEIDELBERG, and MAN Roland -- have united to create this extensible, XML-based format built upon the existing technologies of CIP3's Print Production Format (PPF) and Adobe's Portable Job Ticket Format (PJTF). JDF provides three primary benefits to the printing industry. Unlike any previous format, it has the ability to unify the pre-press, press, and post-press aspects of any printing job. It also provides the means to bridge the communication gap between production services and Management Information Systems (MIS). And finally, it is able to carry out both of these functions no matter what system architecture is already in place, and no matter what tools are being used to complete the job. In short, JDF is extremely versatile and comprehensive. JMF messages are most often encoded in pure XML, without an additional MIME/Multipart wrapper. Only controllers that support JDF job submission via the message channel must support MIME for messages. Appendix A of the 389-page specification lists a number of commonly used JDF data types and structures and their XML encoding, based upon the W3C XML Schema datatypes. Data types are simple data entities such as strings, numbers and dates. They have a very straightforward string representation and are used as XML attribute values. Data structures, on the other hand, describe more complex structures that are built from the defined data types, such as colors..." For references, see "Job Definition Format (JDF)." 
For related initiatives, see: (1) Printing Industry Markup Language (PrintML); (2) PML: Markup Language for Paper and Printing; (3) XML for Publishers and Printers (XPP). See also the PrintTalk Consortium and PrintCafe's eProduction eCommerce eXchange (PCX), now being described primarily as 'a framework for integrating industry standards' supporting XML-based specifications for the printing and publishing supply chain. The PrintTalk implementation supports use of the proposed Job Definition Format (JDF) standard for its job specification semantic and Commercial eXtensible Markup Language (cXML) to define the business objects; four of thirteen business objects have been defined so far.
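The attribute-value encoding of JDF data types described above might look like the following sketch, which parses typed attribute values with Python's standard library. The element and attribute names here are hypothetical, not drawn from the 389-page specification.

```python
import xml.etree.ElementTree as ET
from datetime import datetime

# Illustrative only: a JDF-style node carries simple typed values as XML
# attribute strings, per the W3C XML Schema datatypes. Names are invented.
jdf_node = ET.fromstring(
    '<ResourceInfo Amount="500" Start="2000-10-13T09:00:00" '
    'DescriptiveName="Cover run"/>'
)

amount = int(jdf_node.get("Amount"))                   # number datatype
start = datetime.fromisoformat(jdf_node.get("Start"))  # dateTime datatype
name = jdf_node.get("DescriptiveName")                 # string datatype
print(amount, start.year, name)
```

The point is that simple data entities have a straightforward string representation as attribute values, while complex structures (colors, and so on) are built up from these defined types.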

  • [October 13, 2000]   Tutorials and Reference for XPointer and Extended-XLink.    Jiri Jirat recently announced the availability of tutorial resources for the W3C XML Linking specifications. These materials are posted on the Zvon web site along with a collection of related tutorials covering XSLT, SOAP, XUL, CSS, Namespaces, etc. The online XPointer reference "allows easy access to definitions of locations, errors and functions, with links to relevant examples in the XPointer tutorial. The XPointer tutorial explains the concepts of XPointer using more than 30 examples. It is aimed at the 'ordinary' user, who will use XPointer mainly in the href attribute of XLink. A tutorial for extended-type XLink has also been added. The Zvon XLink reference has been updated with cross-references to examples for extended-type XLink." In this connection, note Daniel Veillard's reminder that the W3C specifications for XPointer and XLink are currently in Candidate Recommendation stage at W3C, and that the XML Linking Working Group is seeking implementation reports for XPointer and XLink. The CR stage "is dedicated to implementors, and the specifications are allowed to pursue their way toward the final Recommendation status only if the prerequisite of implementability has been verified." Implementation feedback for XPointer and XLink may be sent to the publicly archived mailing list. Review comments may also be sent to the XML Linking Working Group co-chairs, Eve Maler and Daniel Veillard. Some examples of XLink/XPointer implementations are provided in "XML Linking Language."
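For a flavor of what an extended-type XLink looks like, the following Python snippet parses a small example and extracts its arc. The document content is invented for illustration; the namespace URI is the W3C-defined XLink namespace.

```python
import xml.etree.ElementTree as ET

# A minimal extended-type XLink: two locators joined by one arc.
# The document content is invented; only the xlink attributes follow
# the W3C XLink Candidate Recommendation.
doc = """<links xmlns:xlink="http://www.w3.org/1999/xlink">
  <courses xlink:type="extended">
    <course xlink:type="locator"
            xlink:href="courses.xml#xpointer(//course[1])"
            xlink:label="src"/>
    <teacher xlink:type="locator" xlink:href="staff.xml#intro"
             xlink:label="dst"/>
    <go xlink:type="arc" xlink:from="src" xlink:to="dst"/>
  </courses>
</links>"""

XLINK = "{http://www.w3.org/1999/xlink}"
root = ET.fromstring(doc)
arcs = [(el.get(XLINK + "from"), el.get(XLINK + "to"))
        for el in root.iter()
        if el.get(XLINK + "type") == "arc"]
print(arcs)
```

Note the XPointer expression riding inside the first locator's href attribute, which is exactly the usage the tutorials above target.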

  • [October 13, 2000]   W3C Publishes CSS Mobile Profile 1.0 for Mobile Devices.    W3C has issued CSS Mobile Profile 1.0 as a working draft to define a subset of CSS2 features that provides a minimal guarantee of interoperability on mobile devices. Reference: Working Draft 13-October-2000, by Ted Wugofski, Doug Dominiak (Motorola), and Peter Stark (Ericsson). Document abstract: "This specification defines a subset of the Cascading Style Sheets Level 2 specification tailored to the needs and constraints of mobile devices." The Working Draft of the CSS Mobile Profile specification has been published by the W3C CSS Working Group as part of the Style activity. The document supplies a "profile of the Cascading Style Sheets, level 2 (CSS2) specification appropriate for mobile devices such as wireless phones. Conformance to this profile means that a user agent supports, at minimum, the features defined in this specification per the CSS2 conformance. CSS2 specifies how developers can author style sheets for presenting documents across multiple devices and media types. While this is very important, it is also important that authors have an understanding of what features are supported on these different devices. Likewise, it is important that similar devices operate in a similar manner. Otherwise, authors will need to develop style sheets for each version of each device -- raising the cost of content development and decreasing interoperability. The CSS Mobile Profile specifies a conformance profile for mobile devices, identifying a minimum set of properties, values, selectors, and cascading rules. The resulting CSS Mobile Profile is very similar to CSS1." Section 3 provides a tabular summary of CSS Mobile Profile selector syntax. The CSS Mobile Profile uses the same syntax as specified in CSS2, with a subset of values; in general, the CSS Mobile Profile uses the same cascading rules as in CSS2. 
A CSS Mobile Profile conforming user agent must also be able to process media-dependent stylesheets as specified in CSS2." For related specifications, see "W3C Cascading Style Sheets."

  • [October 12, 2000]   IBM Licenses New XML Technologies.    From an IBM announcement: "IBM today made seven new alpha technologies, including six based on the XML (eXtensible Markup Language) standard, available for licensing through alphaWorks, IBM's free, on-line resource for developers. Today's announcement brings the total number of alpha technologies available for licensing on the alphaWorks site to 13. The first six were launched together with the IBM licensing initiative in August. The move has been welcomed by developers who have requested that the free 90-day trial license model expand to commercial purchase rights. New technologies available for licensing include Xeena, a visual XML editor that can be used with XML document type definitions (DTDs). The popular Xeena editor was downloaded tens of thousands of times with numerous requests for licenses through the alphaWorks home page. Other XML technologies now available as a part of alphaWorks' new licensing initiative include: (1) XML EditorMaker, a tool that automatically creates visual, Java-based XML editors that developers can easily use to create and modify XML documents, increasing development and deployment speed of XML-based documents. (2) XML Productivity Kit, which allows for rapid integration of XML documents into a Java development environment. (3) XTransGen, which enables developers to easily define and store the mapping relationship between two XML document types (DTDs). Once this initial translation is completed, XML documents can be converted quickly. (4) XML Lightweight Extractor for defining sources of information for a particular XML document. This information can be stored and recalled dynamically to populate XML documents with the appropriate data. It works with any JDBC-compliant relational database. (5) XML Master, a tool for creating custom, Java-based logic for the manipulation of XML documents. 
Developers can build programming frameworks for a particular XML document type and then automatically generate Java code that can be imported into a Java development environment (e.g., VisualAge for Java). The seventh new technology is the Remote Method Invocation for Microsoft Internet Explorer 4.X (RMI for IE4), a package that provides support for the Microsoft JVM (Java Virtual Machine) not included in older versions of Explorer. . . developerWorks, IBM's free, on-line collection of content and resources, enables developers worldwide to build better software and to enhance their technical skills by offering a wealth of tools, tutorials, code, tips, news, white papers and how-to articles focused on open standards and cross platform development. Committed to providing the most informative, reliable and accurate technical information by tapping into IBM and industry leaders, developerWorks offers content valuable to developers regardless of their application development tool of choice. A major component of developerWorks is alphaWorks, IBM's emerging technology broker. alphaWorks provides early adopters and innovators direct access to IBM's 'alpha' technologies through free download and commercial licenses. Both IBM sites respond to the needs of developers by providing relevant technical content and cutting-edge emerging technology."

  • [October 12, 2000]   IBM's Agent Building and Learning Environment (ABLE).    IBM's Agent Building and Learning Environment (ABLE) "is a toolkit from the IBM T.J. Watson Research Center for developing hybrid intelligent software agents and agent applications in Java. The update provides new neural and Bayesian learning algorithms, GUI enhancements, XML rule parsing, bug fixes, and documentation on adding custom beans. . . ABLE provides a set of reusable JavaBean components, called AbleBeans, along with several flexible interconnection methods for combining those components to create software agents. AbleBeans implement data access, filtering and transformation, learning, and reasoning capabilities. Function-specific AbleAgents are provided for classification, clustering, prediction, and genetic search. Application-specific agents can be constructed using one or more of these AbleBeans. AbleAgents are situated in their environment through the use of sensors and effectors, which provide a generic mechanism for linking them to Java applications. A GUI-based interactive development environment, the Able Agent Editor, is provided to assist in the construction of AbleAgents using AbleBean components. In the ABLE framework, an agent is an autonomous software component. It could be running on its own thread of control or could be called synchronously by another agent or process either through a direct method call or by sending an event. By combining one or more AbleBeans, agents can be extremely lightweight (e.g., a few lines of Java code) or can be relatively heavyweight, using multiple forms of inferencing (e.g., fuzzy rule systems, forward and backward chaining) and learning (e.g., neural classification, prediction, and clustering). . . ABLE is meant to make your life easier if (1) you are an application developer, by providing a set of intelligent beans, and an editor for combining them into agents. 
(2) you are doing research on intelligent agents, by providing a flexible Java framework for combining the ABLE beans with your algorithms or ideas about how agents should be constructed."
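The sensor/effector wiring described above can be sketched conceptually. This is Python rather than ABLE's Java, and the class names and the trivial threshold rule are invented; it only illustrates how sensors and effectors situate an agent in its environment.

```python
# Concept sketch only: an agent wired to its environment through a sensor
# (input callback) and an effector (output callback). ABLE itself is a
# Java framework; names and the threshold rule here are invented.
class Agent:
    def __init__(self, sensor, effector):
        self.sensor = sensor      # pulls a reading from the environment
        self.effector = effector  # pushes an action back out

    def step(self):
        reading = self.sensor()
        # trivial "inferencing": a threshold rule standing in for a
        # fuzzy rule system or neural classifier
        action = "cool" if reading > 25 else "idle"
        self.effector(action)
        return action

readings = iter([20, 30])
actions = []
agent = Agent(sensor=lambda: next(readings), effector=actions.append)
agent.step()
agent.step()
print(actions)  # one action per sensed reading
```

Because the agent touches its environment only through these two callbacks, the same agent body could equally be driven synchronously or on its own thread, as the framework description notes.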

  • [October 12, 2000]   Proposed OASIS Technical Committee on Entity Resolution.    A recent announcement released by Karl Best (OASIS - Director, Technical Operations) describes a proposed 'Entity Resolution' technical committee, to be formed under the rules of the Technical Committee Process as announced in early October. The new committee would continue work begun under the SGML Open Technical Resolution on Entity Management (entity catalog formats, formal system identifiers, etc.), updating this work to cover XML. "A new OASIS technical committee is being formed. The Entity Resolution TC has been proposed by Lauren Wood, SoftQuad Software Inc.; Norman Walsh, Sun Microsystems; Paul Grosso, Arbortext, Inc.; and John Cowan, Reuters Health. The request for a new TC meets the requirements of the OASIS TC process. . . The objective of the Entity Resolution TC is to provide facilities to address issue A of the OASIS catalog specification (TR 9401). These facilities will take into account new XML features and delete those features of TR 9401 that are only applicable to SGML, as well as those features applicable only to issue B in TR 9401. Deliverables: The Entity Resolution TC will produce a Committee Specification that uses XML syntax and provides a DTD (potentially also an XML Schema) for that syntax. This specification will be ready by August 2001. The Entity Resolution TC intends to submit the Committee Specification as an OASIS Standard after sufficient implementation experience has been gathered." Note also that the formation of a technical committee for 'Customer Information Quality' was announced in February: "The objective of the Technical Committee (TC) on Customer Information Quality (CIQ) formed by OASIS is to deliver XML standards for customer profile/information management to the industry. The Customer Information Quality TC has been proposed by Ram Kumar, Cognito, Inc.; Vincent Buller, AND Data Solutions; John Bennett; and Graham Lobsey, Cognito, Inc." 
See also the list of active OASIS TCs. On entity resolution, see the topic "SGML/XML Entity Types, and Entity Management," and the following section "Catalogs, Formal Public Identifiers, Formal System Identifiers."
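For context, TR 9401-style catalogs map public identifiers to local system identifiers through PUBLIC entries. The sketch below resolves against that plain-text format in Python; the TC's deliverable will instead define an XML syntax for the same idea, and this resolver handles only the PUBLIC entry type.

```python
import shlex

# Two PUBLIC entries in the TR 9401 plain-text catalog style: each maps a
# formal public identifier to a local system identifier.
CATALOG = """
PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN" "dtd/docbookx.dtd"
PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "dtd/xhtml1-strict.dtd"
"""

def parse_catalog(text):
    mapping = {}
    for line in text.splitlines():
        parts = shlex.split(line)  # respects the quoted identifier strings
        if len(parts) == 3 and parts[0] == "PUBLIC":
            mapping[parts[1]] = parts[2]
    return mapping

def resolve(public_id, catalog):
    # Return the local system identifier, or None when no entry matches.
    return catalog.get(public_id)

catalog = parse_catalog(CATALOG)
print(resolve("-//OASIS//DTD DocBook XML V4.1.2//EN", catalog))
```

An XML parser with such a resolver plugged in never needs to fetch a remote DTD when a local copy is catalogued, which is the "issue A" concern the TC is chartered to carry into XML.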

  • [October 12, 2000]   Revised IETF/W3C XML-Signature Syntax and Processing Specification.    The IETF/W3C XML Signature Working Group has issued an updated Last Call Working Draft for the XML-Signature Syntax and Processing specification. Reference: W3C Working Draft 12-October-2000, edited by Donald Eastlake, Joseph Reagle, and David Solo. The document "specifies XML digital signature processing rules and syntax. XML Signatures provide integrity, message authentication, and/or signer authentication services for data of any type, whether located within the XML that includes the signature or elsewhere. Enveloped or enveloping signatures are over data within the same XML document as the signature; detached signatures are over data external to the signature element. More specifically, this specification defines an XML signature element type and an XML signature application; conformance requirements for each are specified by way of schema definitions and prose respectively. This specification also includes other useful types that identify methods for referencing collections of resources, algorithms, and keying and management information. The XML Signature is a method of associating a key with referenced data (octets); it does not normatively specify how keys are associated with persons or institutions, nor the meaning of the data being referenced and signed. Consequently, while this specification is an important component of secure XML applications, it itself is not sufficient to address all application security/trust concerns, particularly with respect to using signed XML (or other data formats) as a basis of human-to-human communication and agreement. Such an application must specify additional key, algorithm, processing and rendering requirements." Document status: This WD represents an "update to the second last call version, with an abbreviated last call termination date of October 20, 2000 (5 weeks in total). 
This update includes minor editorial changes, reference to the latest Canonical XML, as well as an adoption of the latest XML Schema specification. Barring substantive comment, we will request Candidate Recommendation status as soon as possible following the Canonical XML request. However, we do wish to ensure that readers are aware of following three substantive changes in the second last call: (1) We've changed the Reference Processing Model (section to permit the presentation and acceptance of XML node-sets between Transforms (and resulting from some URI References) when appropriate; we accomplish this by heavily relying upon the XPath specification but still do NOT require a conformant XPath implementation. (2) We've revised the treatment of pre-pended algorithm object identifier within the encoded RSA SignatureValue by the PKCS1 algorithm (section 6.4.2). (3) We've revised the X509Data element (section 4.4.4) to clarify the treatment of certificate 'bags' and CRLs within that structure." See references in "XML Digital Signature (Signed XML - IETF/W3C)."
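The DigestValue computation underlying a signature Reference can be sketched as follows. A conformant implementation also canonicalizes XML content, applies Transforms, and signs a SignedInfo element; this sketch shows only the hash-and-encode step, and the Reference URI is invented.

```python
import base64
import hashlib

# Only the DigestValue step of an XML Signature Reference: hash the
# referenced octets and base64-encode the result. Canonicalization,
# Transforms, and the SignatureValue over SignedInfo are omitted.
def digest_value(octets: bytes) -> str:
    return base64.b64encode(hashlib.sha1(octets).digest()).decode("ascii")

data = b"<doc>signed content</doc>"
reference = (
    '<Reference URI="http://example.org/doc.xml">'
    '<DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>'
    f"<DigestValue>{digest_value(data)}</DigestValue>"
    "</Reference>"
)
print(reference)
```

A detached signature points at data outside the signature element exactly this way: the Reference carries the URI and the digest, so a verifier can fetch the octets and recompute the hash.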

  • [October 11, 2000]   XML Adoption in the UK's e-Government Interoperability Framework (e-GIF).    One of the three key policy decisions in the UK 'e-GIF' program is identified as the "adoption of XML as the primary standard for data integration and presentation on all public sector systems...the adoption of XML (Extensible Mark-up Language) and XSL (Extensible Stylesheet Language) form the cornerstone of the government data interoperability and integration strategy." Some details of the "Data integration policies" are highlighted in the "Policies and technical standards" section of the e-GIF report: The "UK Government policy is to use: (1) XML and XML schemas for data integration; (2) UML, RDF and XML for data modelling and description language; (3) XSL, DOM and XML for data presentation." The model also identifies the use of GML (Geography Markup Language) as defined by the OpenGIS Consortium. "XML products will be written so as to comply with the recommendations of the World Wide Web Consortium (W3C). Where necessary the government will base the work on the draft W3C standards but will avoid the use of any product specific XML extensions that are not being considered for open standardisation within the W3C. Centrally agreed XML schemas are approved through the UK GovTalk processes..." According to an announcement of the plan by Cabinet Office Minister Ian McCartney, "e-GIF is a key plank in the Government's drive to get all its services online by 2005 and cut bureaucracy within the public sector. Speaking at London's QE2 Centre, Mr McCartney launched the e-Government Interoperability Framework (e-GIF) - a piece of policy which will help IT systems across the whole public sector to communicate smoothly with each other. There are two main benefits the policy will bring: (1) Creating 24-hour one-stop Government: e-GIF is key to creating one-stop Government where services are available 24-hours a day from a single electronic point of access. 
For example, the UK online portal - built around e-GIF standards - will offer services around life episodes, giving the user information they need about a particular experience such as having a baby or learning to drive. (2) Banishing bureaucracy in Government: Stepping up the red-tape revolution within Government, moving the public sector away from traditional paper-based ways of working by electronically joining up information across a range of Government departments and organisations. Again this is built around e-GIF standards... The main thrust of the framework is to adopt the Internet and World Wide Web standards for all government systems. There is a strategic decision to adopt XML and XSL as the core standard for data integration and presentation. This includes the definition and central provision of XML schemas for use throughout the public sector. The e-GIF also adopts standards that are well supported in the market place. It is a pragmatic strategy that aims to reduce cost and risk for government systems whilst aligning them to the global Internet revolution. Specifying policies and standards in themselves is not enough. Successful implementation will mean the provision of support, best practice guidance, toolkits and centrally agreed data schemas. To provide this, the government has launched the UK GovTalk initiative. This is a Cabinet Office led, joint government and industry forum for generating and agreeing XML data schemas for use throughout the public sector... The primary role of the UK GovTalk Group is to promote the production and management of the XML schemas necessary to support data interoperability requirements of the e-government strategy. XML schemas will be developed by specialist groups, established to support specific projects, or by open submission to the UK GovTalk web site either in response to a Request for Proposals or as an unsolicited proposal. 
In each case, the UK GovTalk Group will manage the acceptance, publication, and any subsequent change requests for the schema. XML schemas that have been accepted by the group will be published on and will be open for public comment and requests for change. The Portal Data Schemas Project has been established by the UK GovTalk Group to manage the generation and timely delivery of the agreed XML data schemas required for government services delivered through the Portal. The XML data schemas required for the portal services will be the first outputs of the Portal Data Schemas Project and will be agreed through the GovTalk processes as a prioritised delivery. The scope of the e-GIF includes intradepartmental systems and the interactions between: UK Government department and other UK Government departments, UK Government and wider public sector, UK Government and foreign governments (UK/EC, UK/US etc), UK Government and businesses world wide, and UK Government and citizens. UK Government includes central government departments and their agencies, local government and the devolved administrations. The wider public sector includes non departmental public bodies (NDPBs) and the National Health Service. The e-GIF standards are mandated on all new systems. Legacy systems which need to link to the Government Secure Intranet (GSI), Government Portal (Gateway and UK Online), the Knowledge Network or other systems, which are part of electronic service delivery, will need to comply with these standards." For other references, see "e-Government Interoperability Framework (e-GIF)."

  • [October 11, 2000]   Electronic Transactions on Artificial Intelligence (ETAI) Features "The Semantic Web" Department.    Guus Schreiber (Department of Social Science Informatics, University of Amsterdam) posted an announcement inviting submissions for a new area of the electronic journal Electronic Transactions on Artificial Intelligence (ETAI) entitled "The Semantic Web." The new semantic Web area "is concerned with modeling semantics of web information, and covers theory, methods, and applications. . . Tim Berners-Lee coined the vision of a 'semantic web' in which background knowledge is stored on the meaning or content of web resources through the use of machine-processable metadata. The semantic web should be able to support automated services based on these descriptions of semantics. The semantic or "knowledge" web is seen as a key factor in finding a way out of the growing problems of traversing the expanding web space, where currently most web resources can only be found through syntactic matches (e.g., keyword search). This ETAI area is targeted at all research efforts aimed at constructing, maintaining and using such a knowledge-intensive information and service web. Not surprisingly, our field is interdisciplinary in its very nature covering various aspects dealt with in various communities of Artificial Intelligence and Computer Science. It covers aspects from knowledge engineering, databases and information systems, knowledge representation, information retrieval, digital libraries, multi-agent systems, natural-language processing, and machine learning. 
We envisage paper submissions falling within at least one of the following categories: Metadata, knowledge markup, and formal annotations of web information; Information extraction, automatic and semi-automatic generation of meta data for web information; Knowledge representation for the web; Generic and heuristic reasoning methods for the web; Integration of databases in the knowledge web; Interoperability of web services at the semantic and pragmatic levels; Standard ontologies for content description of web information; Distributed ontologies, knowledge composition and transformation; Scalability of knowledge-intensive web services; Content-based information retrieval; Knowledge retrieval; Tool environments, development methodologies, case studies and applications for and of the knowledge web; Web-based knowledge management and electronic commerce. ETAI (Electronic Transactions on Artificial Intelligence) makes submitted articles directly available on-line and promotes public discussions on the submissions. Each year, accepted articles are also collected in a printed volume, mainly for library use. Area editors include Dan Brickley (University of Bristol), Dieter Fensel (Free University of Amsterdam), Yolanda Gil (ISI), Jim Hendler (University of Maryland / DARPA), Ora Lassila (Nokia), Deborah McGuinness (Stanford University), Robert Meersman (Free University of Brussels), and Guus Schreiber (University of Amsterdam)." See: "XML and 'The Semantic Web'."

  • [October 11, 2000]   Extensible Programming Language (XPL).    Michael Lauzon posted an announcement for the development of Extensible Programming Language (XPL). "XPL is an open source initiative and an application of XML: a new programming language built on XML itself. XPL is conceived as a framework or meta-language for defining XML document types which operate as programming languages. [Rationale:] The practice of programming stands to benefit by exploiting the evident virtues of XML: its cross-platform availability, its open textual format, its extension over a very large class of data structures, and the networking infrastructure available to it. The practice of XML document exchange stands to benefit by exploiting the public body of programming-language concepts and applications, by bringing programming architectures into XML itself; XML is a meta-data language: the goal of XPL is to be a meta-process language. . ." The programming language will be partly derived from Miva and Cold Fusion. Interested parties may consult: (1) the eGroups mailing list, (2) the XPL draft specification, (3) the XPL FAQ document, (4) Jonathan Burns' annotated version of Paul Prescod's document "Why the Web needs Groves," and (5) the list of script tags.
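To make "XML document types which operate as programming languages" concrete, here is a toy three-element vocabulary with a recursive evaluator. It is purely illustrative and not XPL's actual design; the element names and semantics are invented.

```python
import xml.etree.ElementTree as ET

# A toy "XML as program" vocabulary: num holds a literal, add and mul
# combine the values of their children. Invented for illustration only.
def evaluate(el):
    if el.tag == "num":
        return float(el.text)
    args = [evaluate(child) for child in el]
    if el.tag == "add":
        return sum(args)
    if el.tag == "mul":
        result = 1.0
        for a in args:
            result *= a
        return result
    raise ValueError(f"unknown operation: {el.tag}")

program = ET.fromstring(
    "<add><num>2</num><mul><num>3</num><num>4</num></mul></add>"
)
print(evaluate(program))  # 2 + 3*4
```

Because the program is an ordinary XML document, it inherits the cross-platform textual format and tooling the rationale above appeals to: it can be validated, transformed, and shipped over the network like any other XML.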

  • [October 11, 2000]   Reuters Presents NewsML Showcase.    Reuters, "the global information, news and technology group, is unveiling a showcase to demonstrate how its news delivery will be revolutionized by NewsML, the new industry standard for delivering news. NewsML, conceived by Reuters, was ratified by the IPTC on 6-October-2000. NewsML is a new Internet standard for the packaging of news. It provides the structure for the publication of multimedia content in XML. It is expected to become the lingua franca of news. Reuters have produced a showcase demonstrating the power of NewsML. NewsML is the structure used to publish news in any format. It can be used by news providers to combine their pictures, video, text, graphics and audio files in news output available on web sites, mobile phones, high end desktops, interactive television and any other device." The showcase describes the "values and benefits of this open standard news format. It provides a demonstration of how multimedia content can be pulled together. Advantages including multiple languages can be seamlessly provided for. The technical details and specifications are also available, along with the latest press details." According to a related Reuters announcement: "NewsML, based on the World Wide Web Consortium's (W3C) Extensible Markup Language (XML), provides a new standard framework to describe, package, store and deliver multimedia news. The technology is dedicated to the description of news in a standard structure, to facilitate the processing of news by computers. In April, Reuters launched a delivery mechanism for its media news products called Reuters Internet Delivery System (IDS). IDS enables Reuters to deliver its news content in XML. IDS utilizes a Reuters prototype NewsML DTD (Document Type Definition). Text news as well as photos and video files can be delivered either as independent media streams or as linked multimedia news packages. 
IDS is currently being used as the principal delivery mechanism for Reuters' growing range of Online Report services. . . NewsML enables news publishers in all market sectors to create a higher quality product by: (1) providing access to all the available media to tell a story; (2) clearly identifying the details of a story leading to quicker production and editorial decisions; (3) allowing stories to be delivered to a range of different devices (mobile, desktop, PDA, etc.); (4) enabling greater description of data making it easier for publishers to provide updates as stories develop. NewsML will enhance the financial professional's news experience by creating more compelling stories: (1) Accuracy - improved search and information management capabilities increasing the relevance of stories received; (2) Personalized News - users can select the stories of most interest to them and have these delivered to their most preferred device (i.e., mobile phone, desktop PC, Palm Pilot, etc.); (3) Access to the bigger picture - stories will contain links to relevant background and related news... Reuters is an active advocate of using XML for news delivery. The concept was brought to life in 1999 when Reuters presented its initial proposals for NewsML to the International Press Telecommunications Council (IPTC). This evolved into an IPTC generated Document Type Definition (DTD), which was released by the IPTC on October 11, 2000. The IPTC's approval of a NewsML version 1.0 specification reinforces Reuters' global vision for using NewsML as the industry standard for the delivery of news." For related NewsML description, see "NewsML and IPTC2000."
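A NewsML package groups the renditions of a story the way the announcement describes: text, photo, and video pulled together under one component. The sketch below uses NewsML 1.0 element names but simplifies the attributes and structure considerably for illustration.

```python
import xml.etree.ElementTree as ET

# Simplified sketch of a NewsML package: a NewsItem holds a NewsComponent
# whose ContentItems point at text, photo, and video renditions. Element
# names follow NewsML 1.0 usage, but attributes are invented shorthand.
package = ET.fromstring("""
<NewsML>
  <NewsItem>
    <NewsComponent>
      <ContentItem Href="story.xml" MediaType="text"/>
      <ContentItem Href="photo.jpg" MediaType="photo"/>
      <ContentItem Href="clip.mpg" MediaType="video"/>
    </NewsComponent>
  </NewsItem>
</NewsML>
""")

media = {ci.get("MediaType"): ci.get("Href")
         for ci in package.iter("ContentItem")}
print(media)
```

A consuming site can then pick the renditions appropriate to its device (text only for a phone, the full multimedia set for a desktop), which is the device-targeting benefit listed above.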

  • [October 11, 2000]   New W3C Working Draft for Canonical XML Version 1.0.    The IETF/W3C XML Signature Working Group has released a revised Working Draft for Canonical XML Version 1.0. Reference: W3C Working Draft 11-October-2000, edited by John Boyer (PureEdge Solutions Inc.). Document status: "This document is referred to the W3C Director for review and consideration as a Candidate Recommendation. It addresses all issues raised during the second Last Call. The list and disposition of last call issues is a living document maintained by the XML Signature Working Group. A draft interoperability matrix [Canonical XML Interoperability] is also provided. This specification includes editorial and technical clarifications and corrections suggested by last call reviewers. Additionally, this version also includes one substantive difference from the previous version: the recent XML plenary decision regarding deprecation of relative namespace URIs is represented in this specification." Document abstract: "Any XML document is part of a set of XML documents that are logically equivalent within an application context, but which vary in physical representation based on syntactic changes permitted by XML 1.0 and Namespaces in XML. This specification describes a method for generating a physical representation, the canonical form, of an XML document that accounts for the permissible changes. Except for limitations regarding a few unusual cases, if two documents have the same canonical form, then the two documents are logically equivalent within the given application context. Note that two documents may have differing canonical forms yet still be equivalent in a given context based on application-specific equivalence rules for which no generalized XML specification could account."
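The core idea of the specification above (one canonical form shared by all logically equivalent physical representations) can be illustrated with a toy canonicalizer that normalizes only attribute order and quoting. Canonical XML itself also handles namespaces, character escaping, comments, and the other permissible variations; this sketch is not conformant.

```python
import xml.etree.ElementTree as ET

# Toy canonicalizer: serialize with attributes in sorted order so the two
# syntactic variants below compare equal. Real Canonical XML covers far
# more (namespaces, escaping, etc.); this shows only the attribute-order idea.
def toy_c14n(element):
    attrs = "".join(f' {k}="{v}"' for k, v in sorted(element.attrib.items()))
    children = "".join(toy_c14n(child) for child in element)
    text = element.text or ""
    return f"<{element.tag}{attrs}>{text}{children}</{element.tag}>"

# Same logical document, different physical representations: attribute
# order, quote style, and empty-element syntax all differ.
a = ET.fromstring("<doc b='2' a='1'/>")
b = ET.fromstring('<doc a="1" b="2"></doc>')
print(toy_c14n(a) == toy_c14n(b))  # → True
```

Comparing canonical forms is exactly what a digital-signature verifier needs: the signature must survive syntactic changes that leave the document logically unchanged.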

  • [October 11, 2000]   Release of Xalan-C++ Version 1.0.    A posting from David Marston announces the release of Apache's Xalan-C++, Version 1.0. "Xalan-C++ version 1.0 is a robust implementation of the W3C Recommendations for XSL Transformations (XSLT) and the XML Path Language (XPath). It uses version 1.3.0 of Apache's Xerces-C++ XML parser. Xalan (named after a rare musical instrument) takes input in the form of a file or URL, a stream, or a DOM. Xalan-C++ performs the transformations specified in the XSL stylesheet and produces a file, a stream, or a DOM as you specify when you set up the transformation. Along with a complete API for performing transformations in your C++ applications, Xalan-C++ provides a command line utility for convenient file-to-file transformations. Xalan-C++ also supports C++ extension functions." Major updates since version 0.40.0 include: "(1) Full support for namespace handling; (2) Full implementation of the format-number() function and support for the decimal-format element; (3) Integration with the International Components for Unicode (ICU) for number formatting, sorting, and output encoding; (4) Support for the exclude-result-prefixes attribute; (5) Support for the output encoding attribute. Download links are provided for Win32, Linux, and AIX versions. To build applications with Xalan and Xerces, you also need the Xerces-C++ binary distribution for your platform, which you can download from the Xerces-C++ distribution directory. Some people have been looking at porting issues for Solaris and HP-UX. Volunteers are more than welcome to help develop builds for other platforms..." For related resources, see "XSL/XSLT Software Support."

  • [October 10, 2000]   LiveDTD Hypertext Tool for DTD Visualization.    Bob Stayton recently announced the availability of a 'LiveDTD' tool. "LiveDTD is a Perl program which converts an SGML/XML Document Type Definition (DTD) into a hypertext document. It parses the DTD files and generates a copy with HTML markup inserted. The result is the exact same text of the original DTD, but with live links that let you navigate through the DTD. Click on a name, and you are transported to where that name is declared in the DTD. Both elements and parameter entities are hot linked. For a simple DTD, this may not be very useful. But for complex DTDs like DocBook and TEI that use hundreds of elements and parameter entities, it's a great help. . . If you have ever worked with a highly parameterized DTD like DocBook or TEI, you know how much the indirection makes you jump around in the DTD to find where something is really defined. It gets worse if you add a customization layer, because then you have more than one declaration for the same name. You have to track down the 'live' one through the marked sections and customization modules. This program does that for you. In fact, I originally wrote it to keep from going crazy managing a customization layer for DocBook. Principal features: (1) Frames-based interface makes navigating easy since active names are listed in the left column. (2) Works with any XML or SGML DTD, in a single file or spread over multiple files. (3) Can use a catalog file to resolve PUBLIC or SYSTEM identifiers. (4) HTML version is an exact replica of the text of the DTD, preserving spacing, line breaks, and the multiple files (if any) of the original. (5) Respects marked sections, including those whose status keyword is a parameter entity. It only enlivens those marked sections whose status resolves to INCLUDE. (6) If a parameter entity name is declared more than once, only the first instance becomes live. (7) Marks the name in each live declaration in red. 
Marks all live references as hot links to the declared name. (8) Generates usage tables for element names and parameter entity names that show all the locations where each name appears in the DTD. (9) You can specify the output directory to write the HTML files to, and a prefix for all the filenames. That lets you put more than one version of a LiveDTD in one directory without filename conflicts." The latest DocBook DTDs (converted) are also available for download. See the documentation and download page.
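The indexing step at the heart of LiveDTD, finding every declaration site so that each name can be hyperlinked back to it, can be sketched in a few lines of Python. The DTD fragment and regular expressions below are illustrative only; the real program also handles marked sections, catalogs, and multi-file DTDs.

```python
import re

# A toy DTD fragment (invented for illustration).
dtd = """
<!ENTITY % inline "em | code">
<!ELEMENT para (#PCDATA | %inline;)*>
<!ELEMENT em (#PCDATA)>
<!ELEMENT code (#PCDATA)>
"""

# Index element and parameter-entity declarations, the two kinds of
# names LiveDTD turns into link targets.
elements = re.findall(r'<!ELEMENT\s+(\S+)', dtd)
pe_names = re.findall(r'<!ENTITY\s+%\s+(\S+)', dtd)

print(elements)  # ['para', 'em', 'code']
print(pe_names)  # ['inline']
```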

  • [October 10, 2000]   Xyvision Enterprise Solutions Offers WorX SE XML Editing Tool.    A recent announcement from Xyvision Enterprise Solutions describes the availability of a 30-day free trial of the XyEnterprise WorX SE XML editing software. "WorX SE is a plug-in for Microsoft Word that allows users to author and edit valid XML from within the Word environment. By making use of predefined document templates, WorX SE users can begin authoring valid XML documents while continuing to utilize the familiar features, functionality and interface of Microsoft Word. A WorX SE user can seamlessly switch between authoring valid XML output and simply authoring Word binary files -- a factor that greatly reduces the normal learning curve associated with XML authoring and editing software. WorX SE contains a 'Tagger' that is context sensitive and guides users through creation of structured documents, or the user can simply employ the Element Tab to mark up selected information. The Tagger recognizes special document elements such as tables, pictures, and objects and provides an easy method to create valid markup. In fact, lists and list items are automatically recognized as XML elements. WorX SE does not solely rely upon the use of styles, but a Word document that is created following structured style guidelines can easily be converted to XML using those styles as a basis. WorX SE is available for Word 2000 on Windows 98, 2000 and NT platforms. . ."

  • [October 10, 2000]   RELAX Core Specification Submitted to ISO.    MURATA Makoto reported on the '' mailing list: "The English version of the RELAX Core specification has been successfully submitted to the fast track procedure of ISO. It will automatically become a Draft Technical Report of ISO. I will speak with the chair of SC34 about the possibility of disclosing the submitted document to the public." RELAX (REgular LAnguage description for XML), according to the developers, "is a combination of (1) features of DTD, and (2) rich datatypes of XML Schema represented in the XML syntax. RELAX also has some other mechanisms, but they have been eliminated from the conformance level 'classic'. RELAX helps migration from DTD to XML Schema. You can assume that RELAX is DTD combined with datatype information in the XML instance syntax and start to use RELAX right now. When XML Schema is available, migration from RELAX to XML Schema will be possible without loss of datatype information. RELAX consists of RELAX Core and RELAX Namespace. RELAX Core handles elements in a single namespace and their attributes. RELAX Namespace is concerned with multiple namespaces. RELAX Core has two conformance levels. Conformance level 'classic' restricts structural features of RELAX by eliminating features more advanced than DTD. Conformance level 'fully relaxed' allows all features of RELAX Core. It is hoped that conformance level 'classic' will be widely implemented, since it is so simple." See further: (1) the main RELAX Web site and (2) "REgular LAnguage description for XML (RELAX)."
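The name "REgular LAnguage description" is literal: a RELAX content model describes the permitted sequence of child elements as a regular language. A hedged Python sketch of that core idea, using an invented content model (title, then one or more author, then an optional year) encoded as a regular expression over child-element names:

```python
import re
import xml.etree.ElementTree as ET

# Illustrative only: the content model "title, author+, year?" written
# as a regex over the space-joined sequence of child tags. This is the
# idea behind RELAX, not its actual schema syntax.
content_model = re.compile(r'^title( author)+( year)?$')

def children_match(xml_text):
    tags = ' '.join(child.tag for child in ET.fromstring(xml_text))
    return bool(content_model.match(tags))

print(children_match('<book><title/><author/><author/><year/></book>'))  # True
print(children_match('<book><author/><title/></book>'))                  # False
```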

  • [October 10, 2000]   Preview Release of 'repat' RDF Parser Toolkit.    Jason Diamond announced the preview release of an open source RDF Parser written in ANSI C. "The parser is dubbed repat since both its interface and implementation are a callback-based RDF parser based on James Clark's expat; it is available for download from While the parser is not quite ready for prime time, it does -- to the best of my knowledge -- correctly parse all of the examples in the W3C RDF Model & Syntax Specification. I'm looking for feedback on its usability and also on its stability. I'm hoping that it will compile on platforms other than my own (Windows) without any changes. In order to correctly handle all of the examples from the M&S, I took the liberty of 'enhancing' the syntax described therein but not strictly prescribed by its authors. The syntax is much more flexible and -- in my opinion -- more internally consistent. For example, I've removed the somewhat arbitrary restrictions placed on container descriptions and rdf:li elements which can now contain embedded resources as objects. The web site documentation for 'repat' provides a brief overview of how the parser works and also details my list of issues with the syntax and how I resolved them; these are mostly rehashes of several of my messages to the list. I'm looking forward to your comments, criticisms, and patches. repat will be released under an open source license... 'repat' was originally based on David Megginson's RDFFilter but has changed significantly since its inception; any bugs or deficiencies were undoubtedly introduced by myself." Note also the XSLT RDF Parser on the web site. On RDF, see (1) RDF Developer Tools listed on the W3C web site, and (2) "Software Tools for RDF."
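Python's `xml.parsers.expat` module is itself a binding to James Clark's expat, so it conveniently demonstrates the callback style that repat layers its RDF handling on top of: you register handler functions, then push data through the parser.

```python
import xml.parsers.expat

# Record start/end element events, the same event stream an
# expat-based tool like repat consumes.
seen = []

def start(name, attrs):
    seen.append(('start', name))

def end(name):
    seen.append(('end', name))

p = xml.parsers.expat.ParserCreate()
p.StartElementHandler = start
p.EndElementHandler = end
p.Parse('<rdf><li>x</li></rdf>', True)  # True = final chunk of input

print(seen)
# [('start', 'rdf'), ('start', 'li'), ('end', 'li'), ('end', 'rdf')]
```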

  • [October 09, 2000]   XML:DB Standards Initiative for XML Databases.    A standards initiative for XML databases was announced by Kimbro Staken (Chief Technology Officer, dbXML Group L.L.C) on the SourceForge DBXML list. From the announcement: "SMB GmbH, dbXML Group L.L.C, and The OpenHealth Care Group have joined together to create the XML:DB initiative. Our goal is to develop open standards for XML databases along with open source implementations of those standards. Our first project will be the development of an XML update language. It is our goal to fast track the development of this update language and a reference implementation leveraging the open source development model. All implementations will be licensed under the Apache open source license." The announcement was made on behalf of XML:DB, "an industry initiative chartered with the development of open specifications for the XML database industry. Currently all XML database vendors are forced to develop their own proprietary mechanism for managing the data stored by their product. We are concerned that, without some initiative to bring these efforts together, this will lead to considerable confusion and duplication among users and, that as a result, the opportunities that XML databases offer to the market will not be maximized. Standards will facilitate the growth of a knowledgeable work force comfortable with the use of XML database products and the tools associated with them. Current database workers assume the existence of standards for RDBMS products and will therefore expect the same to be available for XML databases. More information about XML:DB can be found on our Web site The W3C has been the primary force behind the development of XML standards and is currently in the process of specifying a standard for XML query. 
The XML:DB initiative is not a replacement for the efforts of the W3C; however, it is our feeling that the development of standards for XML databases falls outside the current charter for the W3C. In particular, the first task for XML:DB will be the development of a standard XML update language. The current specification for XML Query states that update languages will be considered in a future version of the XML Query standard. This presents a serious problem for XML database companies who require an update language that is available today." From the web site: "XML:DB is also supported by a growing list of organizations with interest in XML and XML databases. XML:DB provides a community for collaborative development of specifications for XML databases and data manipulation technologies. Along with each specification an open source reference implementation will be developed to validate the ideas put forth in the specification and to more rapidly drive acceptance of the specification in real products. XML:DB's long term goals are: (1) Development of standardized technologies for managing the data in XML Databases; (2) Contribution of reference implementations of those technologies under an Open Source License; (3) Evangelism of XML database products and technologies to raise the visibility of XML databases in the marketplace. Membership in XML:DB is free and all interested parties are invited and encouraged to participate." See further (1) the XML:DB FAQ document, (2) the list of projects currently under development by the XML:DB initiative, and (3) "XML and Databases."
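What an XML update language must be able to express can be sketched with Python's ElementTree: select nodes, then insert or remove them in place. The element names and the two operations shown are invented for illustration; this is not the syntax XML:DB went on to define.

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring('<people><person>Ann</person></people>')

# "insert" update: append a new element under a selected parent.
new = ET.SubElement(doc, 'person')
new.text = 'Bob'

# "remove" update: delete every node matching a predicate.
for victim in [p for p in doc.findall('person') if p.text == 'Ann']:
    doc.remove(victim)

print(ET.tostring(doc, encoding='unicode'))
# <people><person>Bob</person></people>
```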

  • [October 09, 2000]   W3C Publishes XML 1.0 Second Edition.    The W3C's XML Core Working Group has published a new W3C Recommendation for Extensible Markup Language (XML) 1.0 (Second Edition). Reference: W3C Recommendation 6-October-2000, edited by Tim Bray, Jean Paoli, C. M. Sperberg-McQueen, and Eve Maler. This REC specification follows the earlier publication of a public Review Version. "This second edition is not a new version of XML (first published 10-February-1998); it merely incorporates the changes dictated by the first-edition errata as a convenience to readers. The errata list for this second edition is available at The document abstract, unchanged from the 1998 first edition, appears to validate the hermeneutical theory that a text's intent escapes from the author and passes immediately into the control of the community upon utterance: "The Extensible Markup Language (XML) is a subset of SGML that is completely described in this document. Its goal is to enable generic SGML to be served, received, and processed on the Web in the way that is now possible with HTML. XML has been designed for ease of implementation and for interoperability with both SGML and HTML." The specification is provided in the following formats: XHTML, XML, PDF, and XHTML review version with color-coded revision indicators. See also the (non-normative) "Production Notes" in Annex I: "This Second Edition was encoded in the XMLspec DTD (which has documentation available). The HTML versions were produced with a combination of the xmlspec.xsl, diffspec.xsl, and REC-xml-2e.xsl XSLT stylesheets. The PDF version was produced with the html2ps facility and a distiller program." For other references, see (1) "XML Specification DTD" and (2) "XML/XLink/XSL Specifications: Reference Documents" (translations, versions).

  • [October 09, 2000]   XSLT Stylesheets for TEI -> HTML/FO Conversion.    Sebastian Rahtz (of Oxford University Computing Services) announced two new tools for use of TEI and HTML. The Text Encoding Initiative (TEI) is an international project to develop encoding guidelines for the preparation and interchange of electronic texts for scholarly research, and to satisfy a broad range of uses by the language industries; its SGML/XML DTDs are now used widely in digital library projects within academia and government. Sebastian writes of the new tools: "I have revised and expanded my XSLT stylesheets which transform TEI XML documents to HTML, and to XSL FO. They are documented at I have also written a new utility which TEI HTML users may find helpful, at This is a web form for XSL TEI HTML stylesheet parameterization which asks you lots of questions about how you want your HTML to look, and then generates an XSLT stylesheet for you. It does this by setting values for the 50 or so variables which are provided for customization of the main TEI HTML stylesheets. I'd be very happy to get feedback on the usefulness of this, and ideas on how to improve it..." These tools are part of the larger suite of "XSL stylesheets for TEI XML," described in the introduction thus: "I have prepared a set of XSLT specifications to transform TEI XML documents to HTML, and to XSL Formatting Objects. I have concentrated on TEI Lite, but adding support for other modules should be fairly easy. In the main, the setup has been used on 'new' documents, i.e., reports and web pages that I have authored from scratch, rather than traditional TEI-encoded existing material. The stylesheets have been tested with the XT, Saxon, Xalan and Oracle XSLT processors; the last of these does not support multiple file output, which means that you cannot use the 'split' feature of the stylesheets to make multiple HTML files from one XML file." 
Note that Sebastian Rahtz also maintains PassiveTeX, a system using XSL formatting objects to render XML to PDF via LaTeX. On TEI, see "Text Encoding Initiative (TEI) - XML for TEI Lite."
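The Python standard library has no XSLT engine, so the following sketch only shows the shape of what such stylesheets do: walk the TEI tree and rewrite element names into their HTML equivalents. The three-entry mapping table is invented for illustration and is not taken from Rahtz's stylesheets, which also handle attributes, mixed-content tails, and far more of the TEI vocabulary.

```python
import xml.etree.ElementTree as ET

# Hypothetical TEI-to-HTML element mapping (illustrative only).
TEI_TO_HTML = {'p': 'p', 'hi': 'em', 'head': 'h1'}

def tei_to_html(elem):
    out = ET.Element(TEI_TO_HTML.get(elem.tag, 'div'))
    out.text = elem.text
    for child in elem:
        out.append(tei_to_html(child))  # note: child tail text is dropped
    return out

tei = ET.fromstring('<div><head>Title</head><p>Body <hi>text</hi></p></div>')
html = tei_to_html(tei)
print(ET.tostring(html, encoding='unicode'))
# <div><h1>Title</h1><p>Body <em>text</em></p></div>
```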

  • [October 08, 2000]   Chess Markup Language (ChessML).    Chess Markup Language (ChessML) is an XML standard for chess currently under development by Oliver Sick of the Global Analysis Group in the Math Department, University of Bonn, Germany. Principal project goals are to: (1) Define a data storage format which preserves all abstract information; an important example is the PDB database developed by Gerd Wilts and others; (2) Build an interface to chess problem software such as Popeye, Alybadix, Natch and others; (3) Develop a flexible format providing simple export functions to LaTeX, HTML, PDF, RTF and others; (4) Provide simple interfaces for the data conversion between this hypothetical standard and other chess standards such as PGN and FEN. The ChessML web site provides a working draft specification for ChessML, with four XML DTDs, documentation, examples, and FAQ document. The ChessML sources, example and the documentation files are distributed under the GNU General Public License. ChessML design motivation: "...there is the well-documented, non-proprietary and very intuitive PGN format ['Portable Game Notation'] for chess which can be imported and exported by almost all chess databases and chess programs. PGN itself uses the ECO Codes as an internal encoding scheme for different chess openings. ECO Codes in PGN are an equivalent to ENTITIES in XML. Also XML documents usually are very easy to understand (if its DTD is 'good'). But PGN does not provide any of the features of a markup language like XML or SGML. So it is natural to look for an implementation of PGN in XML. Indeed ChessML is an extension of this idea. It uses the rich structure of XML and so it has many more capabilities than PGN itself." Using XML: "One of the most important differences of XML and ChessML as an XML representation compared to PGN is its linking capabilities (called XLink and XPointer). 
This means there are very efficient ways to point from one part of a ChessML document to another. And this is indeed a very important fact if one remembers the citation of openings during an analysis or of a particular position of another game. My XSL file is called chess.xsl and is indeed very rudimentary, but I'm on the way... Combining the DTD and the Stylesheet with a ChessML document you can for example view the document in the Microsoft Internet Explorer 5.x." Related efforts cited by Oliver Sick include (1) ChessGML - Chess Game Markup Language; (2) Caxton Chess XML (CaXML); (3) Board Game Markup Language (BGML); (4) SGFML, an XML DTD based on SGF [Smart Game Format]; and (5) Jago Client with XML format. Note also the Chess Viewer application from RenderX; it uses an XSL stylesheet to transform an XML source document into PDF. For other references, see "Chess Markup Language (ChessML)."
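The PGN-to-XML idea behind ChessML can be sketched in a few lines: take flat PGN move text and wrap it in structured markup. The element and attribute names below are invented for illustration and are not taken from the ChessML DTDs.

```python
import xml.etree.ElementTree as ET

# Flat PGN-style move text.
pgn_moves = '1. e4 e5 2. Nf3 Nc6'

# Hypothetical markup: one <move> per move pair, with white/black children.
game = ET.Element('game')
tokens = pgn_moves.split()  # ['1.', 'e4', 'e5', '2.', 'Nf3', 'Nc6']
for i in range(0, len(tokens), 3):
    move = ET.SubElement(game, 'move', number=tokens[i].rstrip('.'))
    ET.SubElement(move, 'white').text = tokens[i + 1]
    ET.SubElement(move, 'black').text = tokens[i + 2]

print(ET.tostring(game, encoding='unicode'))
```

Once the moves carry markup, features PGN lacks, such as XLink-style references to a cited opening or position, become attributes on these elements.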

  • [October 08, 2000]   IPTC Membership Approves the NewsML Version 1.0 DTD.    David Allen of the International Press Telecommunications Council (IPTC) announced the approval of the NewsML version 1.0 DTD for public release. Prose documentation is supplied in the DTD file within XML comments. Additional material will become available shortly and will be posted to the web site. NewsML is described as "an XML encoding for news which is intended to be used for the creation, transfer, delivery and archiving of news. NewsML is media independent, and allows equally for the representation of the evening TV news and a simple textual story. Specifically, NewsML provides the following features: (1) All formats and media types recognised equally; (2) Facilitates the development of NewsItems; (3) Collections of NewsItems; (4) Named relationships within and between NewsItems; (5) Structure consisting of ContentItems, NewsComponents and named relationships between NewsComponents; (6) Alternative representations within the same NewsComponent; (7) Explicit inclusion, inclusion by reference and exclusion of NewsComponents and alternatives; and (8) Attachment of metadata from standard and non-standard Controlled Vocabularies." See further references and related news metadata specifications in "NewsML and IPTC2000." Related endeavors include: (1) XMLNews: XMLNews-Story and XMLNews-Meta; (2) "News Markup Language (NML)"; (3) "News Industry Text Format (NITF)"; (4) "Publishing Requirements for Industry Standard Metadata (PRISM)." Note also the posting of Daniel Rivers-Moore, who comments on the specification's "Confidence and Importance ratings" and on NewsML's "syntactic constructs of TopicSet, Topic, TopicSetRef, FormalName, Scheme, Property, Value, ValueRef, TopicOccurrence, TopicUse (among others), which are intended to map readily to constructs with similar names in the Topic Maps specification."

  • [October 06, 2000]   University of Virginia Ships over 600,000 XML EBooks.    From a recent UVA announcement: "From the Bible and Shakespeare to Jane Austen and Jules Verne, the University of Virginia Library's Electronic Text Center (Etext Center) is making more than 1,200 of its 50,000 online texts available as free e-books that may be downloaded from the World Wide Web and read using free Microsoft Reader software. With over 600,000 downloads since the project was launched in August, the Etext Center is the largest and busiest public e-book library in the world, library officials said. The Microsoft Reader software may be installed on a desktop or laptop computer, or on a Pocket PC hand-held computer. The software displays the electronic text on a computer screen so that it resembles the pages of a traditional book. 'The goal is to read pages on the computer screen for extended periods of time, rather than to print them out,' said David Seaman, director of the Etext Center at the University of Virginia Library. The e-books are available free of charge at and titles are added regularly. E-books currently available include the Bible, all of Shakespeare, and classics from Dickens, Lewis Carroll, Robert Frost, Arthur Conan Doyle, Shelley, Darwin, and Jane Austen. The collection also includes American fiction and history from Franklin, Jefferson, Madison, Twain, Melville, Stowe, Hawthorne and Poe; early science fiction by Edgar Rice Burroughs, Jules Verne, and others; writings from Native American and African-American authors; and illustrated children's classics. Aesop's Fables alone has been downloaded more than 4,000 times, Seaman said. Readers from more than 100 countries have downloaded e-books from the Etext Center. 
'The use of our e-books is truly global, with users coming not only from North America, but also from Europe, New Zealand, Australia, and even a good many from Asia, Africa, and the Russian Federation. The enormous popularity of our e-book holdings does much to validate the concept of the e-book software as a reading environment,' said Seaman. The audience is broad, including high school and college students, teachers, parents, and the general reading public... E-books also retain some of the best features of paper books. Users can write notes on a page and even 'dog-ear' pages." The University of Virginia has used SGML/XML encoding in its humanities computing projects for many years; the Electronic Text Center 'combines an on-line archive of tens of thousands of SGML and XML-encoded electronic texts and images with a library service that offers hardware and software suitable for the creation and analysis of text'. See further (1) "Open Ebook Initiative"; (2) "University of Virginia Electronic Text Center"; and (3) "IATH - Institute for Advanced Technology in the Humanities, University of Virginia at Charlottesville."

  • [October 06, 2000]   James Clark Releases expat Version 1.2.    James Clark has announced the release of expat version 1.2, now available for download from the web site, together with the expat FAQ document. This version adds support for parsing external DTDs and parameter entities. Win32 executables and Win32 import libraries are included in the distribution. Clark's expat - XML Parser Toolkit is "an XML 1.0 parser written in C. It aims to be fully conforming. It is currently not a validating XML processor. Version 1.2 is a production version of expat. Compiling with -DXML_DTD enables this support. There's a new -p option for the xmlwf application which will cause it to process external DTDs and parameter entities; this implies the -x option. See the comment above XML_SetParamEntityParsing in xmlparse.h for the API addition that enables this. The directory xmlparse contains an XML parser library which is built on top of the xmltok library. The interface is documented in xmlparse/xmlparse.h. The directory sample contains a simple example program using this interface; sample/build.bat is a batch file to build the example using Visual C++. The directory xmlwf contains the xmlwf application, which uses the xmlparse library. The arguments to xmlwf are one or more files which are each to be checked for well-formedness. An option -d dir can be specified; for each well-formed input file the corresponding canonical XML will be written to dir/f, where 'f' is the filename (without any path) of the input file. An -x option will cause references to external general entities to be processed. An -s option will make documents that are not standalone cause an error (a document is considered standalone if either it is intrinsically standalone because it has no external subset and no references to parameter entities in the internal subset or it is declared as standalone in the XML declaration)." 
Version 1.2 of expat is now available under a more permissive license (the MIT license rather than the MPL/GPL). James Clark announces expat version 1.2 as his own "final major production release." He says: "I am happy to announce that I am handing over future development and maintenance of expat to a team led by Clark Cooper, hosted on Clark is working towards an expat 2.0 that adds several new features, including better support for use as a shared library under Linux and other Unix variants. A beta release is available already; see" For related tools, see "XML Parsers and Parsing Toolkits."
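The well-formedness check that xmlwf performs can be reproduced through Python's own binding to expat, `xml.parsers.expat`, which raises `ExpatError` at the first violation:

```python
import xml.parsers.expat

def well_formed(data):
    """Return True if `data` is a well-formed XML document."""
    try:
        xml.parsers.expat.ParserCreate().Parse(data, True)
        return True
    except xml.parsers.expat.ExpatError:
        return False

print(well_formed('<a><b/></a>'))  # True
print(well_formed('<a><b></a>'))   # False (mismatched tag)
```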

  • [October 05, 2000]   Java API for XML Processing Version 1.1 Available for Public Review.    Sun Microsystems has announced the availability of JSR-000063 Java API for XML Processing 1.1, accessible online and presented for 'Public Review' until November 6, 2000. The specification, Java API for XML Processing Version 1.1 (Public Review), was written by James Duncan Davidson and Rajiv Mordani (Sun Microsystems). Reference: JSR-000063, Java API for XML Processing (JAXP) Specification, October 2, 2000; 52 pages. The proposed JAXP specification, as presented in the project summary, "will define a set of implementation independent portable APIs supporting XML Processing. This specification will be a follow-on specification to the Java API for XML Parsing (JAXP) 1.0 which was produced under JSR-000005. This specification will update the JAXP 1.0 specification support for SAX and DOM by endorsing SAX2 and DOM Level 2 respectively. In addition, it will define a set of implementation independent APIs for supporting XML Stylesheet Language / Transformation (XSLT) processors as well as possibly utilizing the XML utility standards of XBase, XLink, XPath, and XPointer. This draft is available for Public Review as per Section 3.1 of the Java Community Process Program." Comments on the specification should be sent to Excerpts: "In many ways, XML and the Java Platform are a partnership made in heaven. XML defines a cross platform data format and Java provides a standard cross platform programming platform. Together, XML and Java technologies allow programmers to apply 'Write Once, Run Anywhere' fundamentals to the processing of data and documents generated by both Java based programs and non-Java based programs. . . This document describes the Java API for XML Processing, Version 1.1. This version of the specification introduces basic support for parsing and manipulating XML documents through a standardized set of Java Platform APIs. 
When this specification is final there will be a Reference Implementation which will demonstrate the capabilities of this API and will provide an operational definition of the specification. A Technology Compatibility Kit (TCK) will also be available that will verify whether an implementation of this specification is compliant. These are required as per the Java Community Process 2.0 (JCP 2.0). The specification is intended for use by: (1) Parser Developers wishing to implement this version of the specification in their parser, and (2) Application Developers who use the APIs described in this specification and wish to have a more complete understanding of the API." The JAXP specification builds upon several others, including the W3C XML 1.0 Recommendation, the W3C XML Namespaces 1.0 Recommendation, Simple API for XML Parsing (SAX) 2.0, W3C Document Object Model (DOM) Level 2, and XSLT 1.0. "This [JSR-000063] version of the Java API for XML Processing includes the basic facilities for working with XML documents using the SAX, DOM, and XSLT APIs; however, there is always more to be done. [Plans for future versions include:] (1) As future versions of SAX and DOM evolve, they will be incorporated into future versions of this API; (2) In a future version of the specification, we would like to provide a plugability API to allow an application programmer to provide an XML document and an XSLT document to a wrapped XSLT processor and obtain a transformed result." For related work, see (1) "Java API for XML Parsing (JAXP)" and (2) the XML section on [cache]
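The SAX2 model that JAXP 1.1 endorses is the same event-driven interface Python exposes in `xml.sax`, so a minimal handler can be sketched here in Python rather than Java for brevity: a `ContentHandler` receives a callback per parse event, such as each element start.

```python
import xml.sax

# Count element-start events -- the SAX2 style of processing that
# JAXP 1.1 standardizes access to on the Java platform.
class Counter(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.elements = 0

    def startElement(self, name, attrs):
        self.elements += 1

handler = Counter()
xml.sax.parseString(b'<doc><a/><b/></doc>', handler)
print(handler.elements)  # 3
```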

  • [October 05, 2000]   Revised CSS3 Module on W3C Selectors.    As part of the W3C style activity, the W3C CSS & FP working group has released a new working draft specification for the CSS3 Module on W3C Selectors. Reference: W3C Working Draft 5-October-2000, edited by Tantek Çelik (Microsoft Corporation), Daniel Glazman, Peter Linss (formerly of Netscape Communications), and John Williams (Quark, Inc.). This WD updates the previous draft of 2000-04-10; section 12 supplies a list of changes from previous versions. Document abstract: "CSS (Cascading Style Sheets) is a language for describing the rendering of HTML and XML documents on screen, on paper, in speech, etc. To bind style properties to elements in the document, it uses selectors, which are patterns that match to elements. This draft describes the selectors that are proposed for CSS level 3. It includes and extends the selectors of CSS level 2." Description: "This document is a draft of one of the 'modules' for the upcoming CSS3 specification. It not only describes the selectors that already exist in CSS1 and CSS2, but also proposes new selectors for CSS3 as well as for other languages that may need them. The Working Group doesn't expect that all implementations of CSS3 will have to implement all types of selectors. Instead, there will probably be a small number of variants of CSS3, so-called 'profiles'. For example, it may be that only the profile for non-interactive user agents will include all of the proposed selectors... The modularization of CSS and the externalization of the general syntax will reduce the size of the specification and allow new types of specifications to use selectors and/or CSS general syntax -- for instance behaviours or tree transformations. A W3C selector represents a structure. 
This structure can be understood for instance as a condition (e.g., in a CSS rule) that determines which elements in the document tree are matched by this selector, or as a flat description of the HTML or XML fragment corresponding to that structure." For description of other CSS3 modules, see (1) the CSS3 Roadmap and (2) "W3C Cascading Style Sheets."
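"Selectors are patterns that match to elements" can be made concrete with a toy matcher. The sketch below supports only type selectors and the descendant combinator (e.g., "section em"); real CSS3 selectors are far richer, and this is a from-scratch illustration, not any CSS engine's algorithm.

```python
import xml.etree.ElementTree as ET

def select(root, selector):
    """Match a selector of whitespace-separated type names (descendant
    combinator only) against an element tree; return matched elements."""
    parts = selector.split()
    matched = [root] if root.tag == parts[0] else list(root.iter(parts[0]))
    for part in parts[1:]:
        # descend: keep elements of this type found under a prior match
        matched = [d for e in matched for d in e.iter(part) if d is not e]
    return matched

doc = ET.fromstring('<doc><section><em>a</em></section><em>b</em></doc>')
print([e.text for e in select(doc, 'section em')])  # ['a']
print([e.text for e in select(doc, 'em')])          # ['a', 'b']
```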

  • [October 05, 2000]   Linux Documentation Project to Use DocBook XML for Authoring.    A thread on the LDP DocBook mailing list (noted by Michael Smith) indicates the decision of the Linux Documentation Project (LDP) to officially support authoring and publishing technical documentation using the XML version (4.1.2) of the DocBook DTD -- in addition to the SGML DTDs (Linuxdoc, DocBook v3.x, DocBook v4.x) they already support. The Linux Documentation Project "is working on developing free, high quality documentation for the GNU/Linux operating system. The overall goal of the LDP is to collaborate in all of the issues of Linux documentation. This includes the creation of 'HOWTOs' and 'Guides'. The LDP's Author Guide has specified (hitherto) that all HOWTO documents must be in one of the two SGML formats: LinuxDoc or DocBook; in the current mode of production, DSSSL stylesheets are used to create output from the DocBook SGML source. The DocBook DTD, maintained in SGML and XML versions by the DocBook Technical Committee of OASIS, "has been adopted by a large and growing community of authors writing books of all kinds. DocBook is supported 'out of the box' by a number of commercial tools, and there is rapidly expanding support for it in a number of free software environments. These features have combined to make DocBook a generally easy to understand, widely useful, and very popular DTD. Dozens of organizations are using DocBook for millions of pages of documentation, in various print and online formats, worldwide." In this connection, note also the recent announcement from Karl Best (OASIS - Director, Technical Operations) that the DocBook Technical Committee has submitted the latest version of the DocBook DTD to OASIS, requesting that the OASIS membership vote to approve the Committee Specification as an OASIS standard. "The submission meets the requirements of the OASIS technical committee process. 
The DocBook Technical Committee has submitted DocBook v4.1.2, an XML version of the DTD, and v4.1, an SGML version of the DTD, and certifies that these are valid DTDs of their type. The DTDs may be found at and The submission is documented. DocBook 3.1 was fully documented by DocBook: The Definitive Guide at The changes introduced in DocBook 4 have not yet been integrated into TDG, but they are available at As specified by the Technical Committee Process, balloting takes place as follows: The OASIS membership has 30 days to respond to this ballot; balloting will close on 31 October 2000. One vote from each OASIS member organization is allowed; an organization may change its vote by sending in another vote up until the close of balloting..." See further references in "DocBook XML DTD."
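For readers unfamiliar with DocBook, the sketch below shows a hypothetical fragment in the DocBook idiom and how its structure can be read with standard XML tooling (Python standard library here; a real HOWTO would carry a DOCTYPE naming the DocBook 4.1.2 DTD and be validated against it):

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment in DocBook style: an article with one section.
docbook = """
<article>
  <title>An Example HOWTO</title>
  <sect1>
    <title>Getting Started</title>
    <para>DocBook separates structure from presentation.</para>
  </sect1>
</article>
"""

tree = ET.fromstring(docbook)
print(tree.findtext("title"))                 # article title
print([t.text for t in tree.iter("title")])   # all titles, document order
```

Because the markup records structure rather than appearance, the same source can be rendered to print or online formats by stylesheets, which is what makes the DTD attractive to projects like the LDP.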

  • [October 04, 2000]   Megginson Releases SAX2-ext Package.    David Megginson has announced the final 1.0 release of the SAX2-ext Java package at The handler classes in the SAX2-ext 1.0 distribution "have been spun off from the main SAX2 distribution to allow for separate development and easier updating. The handlers provide optional reporting of comments, CDATA sections, entity references, the DOCTYPE declaration, and several types of DTD declarations. This package provides optional extension interfaces that XML parsers can use to report non-core information like comments, CDATA section boundaries, and attribute and element type declarations. In addition to the SAX2-ext package, you need a parser with a SAX2 driver that supports these extensions (Apache's Xerces 1.2 currently supports SAX2-ext, and JAXP will do so in its 1.1 release). Microsoft's MSXML parser supports a C++ or BASIC translation of these interfaces as well (I cannot tell which one -- it's not clear from their download page); [Note 2000-10-05 from Eldar Musayev: 'Microsoft's MSXML parser supports both a C++ and Visual Basic translation of these interfaces...'] There are no changes from the 1.0pre release except for version numbers in the documentation." Note in this connection that David Megginson has announced his intention to hand off SAX maintenance to another party: "I'm planning to put out a SAX2/Java bug-fix release this fall, and I may still try to help with a C++ version, but other than that, I think that I'm done with SAX. SAX needs a new maintainer. Since SAX is through its initial rapid-development stage, I'm inclined to hand it over to an institution rather than to an individual. I've considered the W3C and OASIS, but since SAX is really a developers' project rather than a standards-writers' project, I wondered if the Apache Project might not make the best home -- they're well set up to deal with this sort of thing, and have demonstrated a high degree of technical competence. 
The alternative is to find someone who can continue to maintain SAX through XML-Dev, and who can win the support of enough XML-Devvers to make real progress. I'll be happy to listen to nominations. SAX is in the Public Domain, but morally, at least, XML-Dev owns it, and the members should collectively decide what's going to happen to it." Post to XML-DEV 2000-09-29. [See the suggestion of Jon Bosak in light of a new OASIS Technical Committee Process.]
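Python's xml.sax package is modeled on SAX2, so the event-driven style that SAX2-ext extends can be sketched in a few lines. This example shows only the core ContentHandler events; the optional lexical and declaration events (comments, CDATA boundaries, DTD declarations) are precisely what the SAX2-ext handlers add on top:

```python
import xml.sax

class Outline(xml.sax.ContentHandler):
    """Record element events in the order the parser reports them."""
    def __init__(self):
        super().__init__()
        self.events = []

    def startElement(self, name, attrs):
        self.events.append(("start", name))

    def endElement(self, name):
        self.events.append(("end", name))

handler = Outline()
xml.sax.parseString(b"<doc><item/></doc>", handler)
print(handler.events)
# [('start', 'doc'), ('start', 'item'), ('end', 'item'), ('end', 'doc')]
```

Unlike the DOM, the document is never held in memory as a tree; the application sees a stream of callbacks, which is what makes SAX attractive for large documents.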

  • [October 04, 2000]   PassiveTeX XSL FO Implementation Version 1.1.    Sebastian Rahtz (Oxford University Computing Services) has announced a new release of his PassiveTeX XSL FO processor. PassiveTeX is "a library of TeX macros which can be used to process an XML document which results from an XSL transformation to formatting objects. It provides a rapid development environment for experimenting with XSL FO, using a reliable pre-existing formatter. Running PassiveTeX with the pdfTeX variant of TeX generates high-quality PDF files in a single operation. PassiveTeX shows how TeX can remain the formatter of choice for XML, while hiding the details of its operation from the user." Sebastian writes of version 1.1: "There are a variety of bug fixes (nothing too dramatic), and some new implementations of FO elements and characteristics. One important addition is that fo:marker and fo:retrieve-marker now work, more or less, allowing dynamic headers and footers. I have tested this by formatting the XSL FO spec itself, with satisfactory results. As ever, PassiveTeX is for you if: (1) you have an existing TeX system which you understand; (2) you need decent hyphenation, justification and page-breaking now; (3) you want MathML support; (4) you want high-quality PDF [compressed, bookmarks, links etc.]; and (5) you are into big files and long batch processing. PassiveTeX isn't for you if: (1) you want a Java solution which you can embed; (2) you have never seen TeX and don't want to; (3) you want SVG support; and (4) your life revolves around complex tables. See [which] is a typical example of what I use this system for. I expect, by the way, to share TeX details with the Unicorn XSL FO processor in the future; and possibly move to a comparable method myself." For related resources, see "XSL/XSLT Software Support."
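The input PassiveTeX formats is a tree of XSL formatting objects, normally produced by an XSLT transformation. As an illustration of what such a tree looks like, the sketch below builds a minimal one-page FO document with Python's standard library (the page dimensions and text are arbitrary examples, not taken from PassiveTeX):

```python
import xml.etree.ElementTree as ET

FO = "http://www.w3.org/1999/XSL/Format"
ET.register_namespace("fo", FO)

def q(name):
    """Qualify a local name in the XSL FO namespace."""
    return f"{{{FO}}}{name}"

# fo:root holds a layout-master-set (page geometry) and page-sequences.
root = ET.Element(q("root"))
lms = ET.SubElement(root, q("layout-master-set"))
spm = ET.SubElement(lms, q("simple-page-master"),
                    {"master-name": "page",
                     "page-height": "29.7cm", "page-width": "21cm"})
ET.SubElement(spm, q("region-body"))  # the main text region
seq = ET.SubElement(root, q("page-sequence"), {"master-reference": "page"})
flow = ET.SubElement(seq, q("flow"), {"flow-name": "xsl-region-body"})
ET.SubElement(flow, q("block")).text = "Hello, formatting objects."

print(ET.tostring(root, encoding="unicode"))
```

A formatter such as PassiveTeX reads a file like this serialized output and renders the fo:block content onto pages described by the layout masters.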

  • [October 03, 2000]   W3C <corr sic="Candidate Recommendation">Revised Working Draft</corr> for the Modularization of XHTML.    (sic!) As part of the W3C HTML Activity, the HTML Working Group has published a new Working Draft specification for the Modularization of XHTML. Reference: W3C Working Draft 4-October-2000, edited by Robert Adams (Intel Corporation), Murray Altheim (Sun Microsystems), Frank Boumphrey (HTML Writers Guild), Sam Dooley (IBM), Shane McCarron (Applied Testing and Technology), Sebastian Schnitzenbaumer (Mozquito Technologies), and Ted Wugofski (, formerly Gateway). Document abstract: "This Working Draft specifies an abstract modularization of XHTML and an implementation of the abstraction using XML Document Type Definitions (DTDs). This modularization provides a means for subsetting and extending XHTML, a feature needed for extending XHTML's reach onto emerging platforms." Document status: "This is the Working Draft of 'Modularization of XHTML'. It is a version that incorporates some comments from the Last Call Working Draft review period. The Working Group anticipates asking the W3C Director to advance this document to Candidate Recommendation after the Working Group processes Last Call review comments and incorporates resolutions into the Guidelines. A diff-marked version from the previous Last Call draft is available for comparison purposes. 
Major changes in this version include: (1) Re-integration of the Building document into this document; (2) Incorporation of the Henry Thompson/Dan Connolly XML Namespace handling process with substantial additions by the Math and HTML working groups; (3) Complete worked examples including modules and miniature DTDs; (4) Minor restructuring of abstract module definitions, including the creation of a 'style attribute module' and a 'name identification module'; (5) Tweaking of some of the module contents based on review comments, including the addition of a 'target' module to separate the 'target' attribute from the frame module." Changes from the previous 'Last Call' Working Draft version may be inspected in the diff-marked version; the new WD is also available as a single HTML file, a Postscript version, a PDF version, and a ZIP archive. For the development context, see "XHTML and 'XML-Based' HTML Modules." [Note: the CR was published, then apparently retracted and replaced by a "Working Draft" version. Description in this news item may now be out of sync, though I have attempted to correct the URLs to reflect the ersatz WD version. 2000-10-05.]

  • [October 03, 2000]   Knowledge Technologies 2001 Conference.    Marion L. Elledge (Senior VP/IT, Graphic Communications Association) recently posted an initial announcement for a GCA-sponsored 'Knowledge Technologies 2001 Conference.' "Knowledge Technologies 2001 will replace XTech in Austin, Texas, March 4-7, 2001. Throughout the GCA events in 2000, there has been growing interest in technologies that support knowledge management. In Europe there were 250-300 attendees in the sessions on topic maps. At Extreme there was tremendous interest in RDF and Topic Maps. Ontologies and semantics have also proven to be major topics thus far. Based on the papers received for XML 2000, one full track is devoted to Knowledge Technologies. Therefore, we at GCA agree that the next revolution for the Web will be the shift from an information base to a knowledge base. And the revolution will be grounded by emerging knowledge technologies to make the semantic web a reality. To achieve the goal of a quality conference to address these issues, the conference will be structured based on the input of the papers received and on the insight of the Conference Board of Advisors. If you have suggestions for the program, would like to submit a paper, or possibly serve on the Board of Advisors, please contact me." See the GCA Web site for related conferences, and "XML and 'The Semantic Web'."

  • [October 02, 2000]   Extensible Name Service (XNS).    XNS Public Trust Organization (XNSORG) recently announced the Extensible Name Service (XNS) as "a new open protocol and open-source platform for universal addressing, automated data exchange, and privacy control. XNS is based on two key technologies: XML, the new global standard for platform-independent information exchange, and web agents, a patented new technology that automates the exchange, linking, and synchronization of information between publishers and subscribers over digital networks. XNS combines XML and web agents to create a complete integrated infrastructure for automated information exchange between consumers and businesses anywhere on the wired or wireless Internet. The architects of XNS set out to solve three primary design objectives. (1) Universal Addresses: true 'universal address,' e.g., a single human-friendly name that can function as an address for all types of digital communications. Because this address can be resolved into an XML document containing any other communications network address (phone number, fax number, email address, URL, etc.), it is completely 'abstracted' from any particular communications network. This has three key advantages: The address never needs to change for the life of the resource it represents, or longer; Links to the address never have to break; the address doesn't need to follow any special formatting or syntactic restrictions -- it can be as simple as any name or phrase in XML (i.e., Unicode). (2) Automatic Linking and Synchronization: web agents [need] to create them automatically when information is exchanged and update them automatically when information changes. (3) Negotiated Control and Privacy Protection: provide negotiated control over any information exchange between two web agents using an extensible control vocabulary. . . 
Like DNS, XNS is a globally distributed network that can be implemented by any ISP, portal, corporation, university, or other network service provider. Unlike DNS, however, all XNS agencies and agents enter into registration agreements incorporating global terms specified by the XNS Public Trust Organization (XNSORG), an independent non-profit organization responsible for governance of the XNS global trust community. . . The next evolutionary step beyond a domain name, an XNS address is not just an email address, a phone number, a fax number, or a Web page, but a single 'superaddress' which consolidates all other addressing and profile data into a single XML digital container. This container is managed by an XNS agent following the owner's privacy and security rules. The beauty of XNS addresses is that they never have to change for the lifetime of a person, product, service, or company, no matter how often any other contact data changes. Furthermore, an XNS address can be as simple as your name -- up to 64 characters, in any Unicode language, with no awkward syntax or punctuation. . . XNS provides the first open-source, globally distributed solution to universal registration. One click on the XNS login button at any XNS-enabled web site and your personal web agent instantly negotiates a private login key, so all you ever need to remember is your own XNS name and password. Every XNS form negotiated between two XNS agents results in an XNS contract stored by each agent. Besides recording the applicable privacy and security policies (including support for new W3C P3P privacy policies), XNS contracts record each XNS privacy permission granted by the agent owner for the user of their data. XNS privacy contracts are the missing foundation in a global privacy framework, giving consumers easy, immediate access to their permission records and businesses a simple, global vocabulary for true permission marketing. . . 
"XNS data schemas are defined using XML itself, following the proposed W3C XML Schemas specification. XNS is designed to resolve a name into any type of attribute which can be defined in an XML schema and exchanged using XML. In addition, XNS schemas are themselves registered in XNS. This means schema definitions are easily named, addressed, and synchronized just like any other XNS data instance. As with XML documents, XNS objects are a nested tree of component objects which are all one of two types: schema objects, which represent registered XNS schema definitions, and instance objects containing the attribute values for the resource. Following the rules of XML Schemas, all instance objects must be valid instances of schema objects. Because all XNS schema objects are themselves registered in XNS, XNS acts as one completely self-referential logical XML document." Phase Two will also introduce user-defined schemas: "as it is with XML, distributed schema authoring is one of the key extensibility features of XNS. In Phase Two agencies, businesses, and individuals will be able to define, publish, subscribe, and update their own XNS schemas in addition to those defined by XNSORG." For other description, see: "Extensible Name Service (XNS)."

  • [October 02, 2000]   COSCA/NACM JTC XML Court Filing Project.    The Electronic Filing Standards sub-committee of the COSCA/NACM Joint Technology XML Standards Committee is working with Legal XML to develop an XML standard for electronic court filing via the Internet. Within the framework of Legal XML, "the Court Filing Workgroup focuses on document and information exchange formats for electronic court filing applications. The chair of the Court Filing Workgroup is John Greacen (email: In July 2000, a second revised version of a proposed XML Standards Development Project Electronic Court Filing Draft Specification was issued, together with the XML DTDs. Reference: PS_10001_2000_07_24 (July 24, 2000); by Marty Halvorson and Richard Himes, edited by Winchel 'Todd' Vincent III. Document abstract: "This Draft Specification provides the XML DTD required for Court Filing. The document is intended to describe the information required for electronic court filing, and the structure of that information. No information regarding the content of the pleading or other legal device (e.g., contracts, orders, judgments, etc.) is included, other than that required to accomplish the task. This document is a Proposed Standard collaboratively developed by the COSCA/NACM Joint Technology Committee and the Legal XML Court Filing Workgroup. Portions of this document were derived from the Court Filing Strawman collaboratively developed by the U.S. District Court for the District of New Mexico; New Mexico Administrative Office of the Courts; SCT Global Government Solutions, Inc.; and West Group." Background: Following a planning meeting in Santa Fe, NM on August 30-31, 1999, the Joint Technology Committee (JTC) of the Conference of State Court Administrators (COSCA) and the National Association of Court Managers (NACM) formed an Electronic Filing Standards sub-committee to define a court XML national Standard to allow electronic filing via the Internet. 
The Electronic Filing Standards sub-committee of the JTC has sponsored a series of meetings, beginning November 4, 1999. Further description and references are available in "COSCA/NACM JTC XML Court Filing Project." Related work on XML-based legal applications may be found in: (1) "Legal XML Working Group", (2) "Open XML Court Interface (OXCI)", (3) "New Mexico District Court XML Interface (XCI)", (4) "Georgia State University Electronic Court Filing Project", and (5) "University of Cincinnati College of Law, Center for Electronic Text in the Law."

  • [October 02, 2000]   AxKit Version 1.0 Released.    A communiqué from Matt Sergeant (Director and CTO, Ltd.) announces the release of AxKit Version 1.0. "AxKit, A mod_perl-based XML Application Server for Apache, has reached version 1.0 and has been officially released to the public via the project website at Employing a rich set of standards-compliant techniques as well as extensible scripting options, AxKit provides on-the-fly conversion from XML to a variety of other formats including HTML, WAP, and plain text. AxKit's notable technical features include the introduction of XPathScript, a powerful, Perl-based transformation language, built-in support for XSLT, 'smart' caching and the easy creation of dynamic XML documents. In addition to its broad set of XML transformation choices, AxKit also provides developers with an impressive configuration interface that allows fine-grained control over such advanced options as dynamic stylesheet chaining, selection of alternate stylesheets based on a variety of conditions, and control over the output character sets of transformed documents. AxKit is open source, free software, available under either the GNU GPL Version 2.0, or the Perl Artistic License." See also the "AxKit Quickstart Guide" and "An Introduction to AxKit."

  • [October 02, 2000]   Creating Barcodes Using XSL.    In the category of creative applications: the 'XSL Barcode Generator' from RenderX. Nikolai Grigoriev recently announced the XSL-based barcode generator on the Mulberry XSL-List: "As a byproduct of our main activity, we have developed an XSL stylesheet that draws barcodes from digit sequences. We hope a thing like this may be useful for people wishing to add barcode labelling to their XSL-based publishing solutions. Barcodes implemented are the most popular ones that you can see on items in stores and groceries - UPC/EAN, to be precise; other systems can be easily added. The output is an XSL FO table that can be rendered to PDF. As most of the barcode-drawing logic is independent of the output graphical format, modifying the stylesheet to produce SVG or something similar should be a relatively simple task. The stylesheet and examples are freely available from" The RenderX web site document "Creating Barcodes in XSL" supplies additional description: "Barcodes are very simple in physical structure - just rectangular black bars separated by white spaces. Representing such a structure in XSL Formatting Objects is easy, but a really challenging problem is to calculate bar widths starting from a string of symbols to encode. However, with XSLT, it becomes plausible. To confirm this claim, a stylesheet was designed to draw UPC/EAN barcodes. UPC/EAN (Universal Product Code / European Article Numbering) is probably the most widespread barcode system: virtually every item found in stores bears a characteristic bars-and-digits label stamped on its back. EAN codes (8 and 13 digit) are used in all European countries, while UPC ones (8 and 12 digit) are more frequent in North America. The encoding scheme for both barcode types is the same (12-digit UPC is a subset of 13-digit EAN). 
Representation of digits by bars in UPC/EAN has some peculiarities: (1) every digit can be represented by two alternative bar patterns; (2) some digits may be encoded by the alternation of bar patterns of other digits; (3) the last digit serves as check digit, calculated by a tricky algorithm... An exhaustive description of the encoding algorithm can be found on the web site. No wonder that it took several hundred lines of XSLT code to get from a sequence of digits to the corresponding bar pattern. The result is a fairly complete XSL implementation of a barcode drawing component. The stylesheet handles EAN codes (8-digit and 13-digit) and UPC codes (versions A and E). It provides for checksum calculation (including the trickiest UPC E case), and builds a complete pattern with numbers written at the bottom. The stylesheet does more than just create a sequence of bars: it also chooses the most traditional format for each type of code. There are subtle differences between EAN and UPC concerning digit grouping, bar lengths etc; all these are taken into account in the stylesheet."
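The check-digit step mentioned above is easy to state outside XSLT. A minimal sketch in Python for EAN-13 (UPC-A follows the same scheme once a leading zero is prepended to make twelve data digits):

```python
def ean13_check_digit(first12: str) -> int:
    """EAN-13 check digit: weight the first 12 digits 1,3,1,3,... from
    the left, then take whatever rounds the sum up to a multiple of 10."""
    assert len(first12) == 12 and first12.isdigit()
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(first12))
    return (10 - total % 10) % 10

print(ean13_check_digit("400638133393"))  # 1 -> full code 4006381333931
```

The XSLT stylesheet must express this same arithmetic with recursive templates, which is part of why the full implementation runs to several hundred lines.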

  • [October 02, 2000]   QARE: An Open Source XML/Java Portal.    Bill la Forge posted an announcement for the alpha release of QARE -- 'Quick Agent Runtime Environment'. "QARE (pronounced 'care') is an XML/Java Portal, providing an easy-to-use platform for processing XML over HTTP. QARE, which runs as a simple Java Servlet, is now complete, except for plugins (jar file extensions), though testing has been minimal. The documentation is currently limited to the API. QARE can handle any number of different markup languages. The type of markup language is specified in the file info portion of the URL used to invoke the servlet. QARE processes POSTed XML documents by converting them into agents. These agents then perform the request specified by the XML document and respond with an object which is converted back into XML and returned to the requestor. These agents are safe and virus-proof, as they are built using code resident on the server. Agents are composed using the incoming XML documents, which are validated prior to invoking the agent. With appropriate content validation, incoming documents are constrained to only those which create proper agents. While most agents are transitory, agents can also be persistent. A snapshot facility is provided so that persistent agents can be restarted after the web server has crashed. Local agent/agent communication is supported, so that transient agents can make use of the services provided by persistent agents. This makes QARE extensible through the use of appropriate persistent agents. And while plugins (jar files) are not yet supported, agent/agent communication uses the Quick Transcribe capability, allowing each agent to use its own classes for messages passed between agents. QARE builds on Quick. 
[Quick, as a tool for Java programmers who need to work with XML, is a collection of utilities and Java packages with a very small API; Quick does not use XML DTDs, but uses a schema language, QJML, to define markup languages and their relationship to Java classes.] QARE is Open Source (development is hosted on SourceForge) and is licensed under LGPL." For other information, see the developers' forum, the QARE API documentation, and the download.

  • [October 02, 2000]   DOM Level 1 Second Edition Working Draft Released.    A communiqué from Philippe Le Hégaret (World Wide Web Consortium, DOM Activity Lead) announces the release of DOM Level 1 Second Edition from the W3C DOM Activity: Document Object Model (DOM) Level 1 Specification (Second Edition) Version 1.0. Reference: W3C Working Draft 29-September-2000, edited by Lauren Wood, Arnaud Le Hors, Vidur Apparao, Steve Byrne, Mike Champion, Scott Isaacs, Ian Jacobs, Gavin Nicol, Jonathan Robie, Robert Sutor, and Chris Wilson. Note that DOM Level 2 is now a W3C Proposed Recommendation, and that DOM Level 3 Specifications are available. Document status: "This second edition is not a new version of the DOM Level 1; it merely incorporates the changes dictated by the first-edition errata list. [It] is a version of the DOM Level 1 Recommendation incorporating the errata changes as of September 29, 2000. It is released by the DOM Working Group as a W3C Working Draft to gather public feedback before its final release as the DOM Level 1 second edition W3C Recommendation (as these changes are editorial, there will be no Candidate Recommendation or Proposed Recommendation stages). The review period for this Working Draft is 4 weeks ending October 27 2000." Abstract: "This specification defines the Document Object Model Level 1, a platform- and language-neutral interface that allows programs and scripts to dynamically access and update the content, structure and style of documents. The Document Object Model provides a standard set of objects for representing HTML and XML documents, a standard model of how these objects can be combined, and a standard interface for accessing and manipulating them. Vendors can support the DOM as an interface to their proprietary data structures and APIs, and content authors can write to the standard DOM interfaces rather than product-specific APIs, thus increasing interoperability on the Web." 
For other references, see: "W3C Document Object Model (DOM)."
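The standard interfaces the abstract describes can be exercised from Python's xml.dom.minidom, a lightweight implementation of DOM Level 1. A minimal sketch of reading and updating a document through those interfaces:

```python
from xml.dom.minidom import parseString

# Parse, then use the standard DOM interfaces (Document, Element, Text)
# to read and update the tree in place.
doc = parseString("<doc><greeting>Hello</greeting></doc>")
greeting = doc.getElementsByTagName("greeting")[0]
print(greeting.firstChild.data)     # Hello

# Update the text node's data through the same interface.
greeting.firstChild.data = "Hello, DOM"
print(doc.documentElement.toxml())  # <doc><greeting>Hello, DOM</greeting></doc>
```

Because these interfaces are language-neutral, the same method names (getElementsByTagName, firstChild, and so on) appear in DOM bindings for Java, ECMAScript, and other languages, which is the interoperability point the abstract makes.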

Robin Cover, Editor: