The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: November 17, 2004
XML Articles and Papers November 2004

XML General Articles and Papers: Surveys, Overviews, Presentations, Introductions, Announcements


November 2004

[Under construction]

  • [November 17, 2004] "The Magic of RFID." By Roy Want (Intel Research). In ACM Queue Volume 2, Number 7 (October 2004), pages 40-48. Special Issue on RFID. "RFID is an electronic tagging technology (see figure 1) that allows an object, place, or person to be automatically identified at a distance without a direct line-of-sight, using an electromagnetic challenge/response exchange. Typical applications include labeling products for rapid checkout at a point-of-sale terminal, inventory tracking, animal tagging, timing marathon runners, secure automobile keys, and access control for secure facilities... An RFID system is composed of readers and tags. Readers generate signals that are dual purpose: they provide power for a tag, and they create an interrogation signal. A tag captures the energy it receives from a reader to supply its own power and then executes commands sent by the reader. The simplest command results in the tag sending back a signal containing a unique digital ID (e.g., the EPC-96 standard uses 96 bits) that can be looked up in a database available to the reader to determine its identity, perhaps expressed as a name, manufacturer, SKU (stock keeping unit) number, and cost. An RFID tag is built from three components: Antenna; Silicon chip; Substrate or encapsulation material. These tags are generally referred to as passive because they require no batteries or maintenance. Passive tags that operate at frequencies up to 100 MHz are usually powered by magnetic induction, the same principle that drives the operation of household transformers. An alternating current in the reader coil induces a current in the tag's antenna coil, allowing charge to be stored in a capacitor, which then can be used to power the tag electronics. Information in the tag is sent back to the reader by loading the tag's coil in a changing pattern over time, which affects the current being drawn by the reader coil — a process called load modulation. 
To recover the identity of the tag, the reader simply decodes the change in current as a varying potential developed across a series resistance... the memory available in current tags (typically 2 kilobits) will probably be too small for an efficient representation in XML; a more compact notation would need to be standardized. As Moore's law continues to increase the memory capacity of RFID tags attainable at reasonable cost, however, XML may well be used for this purpose in the future... In practice most of the lower-frequency RFID systems can read tags at a maximum distance of about a meter, and the UHF systems extend that to three to four meters..." See: (1) "Physical Markup Language (PML) for Radio Frequency Identification (RFID)"; (2) "Radio Frequency Identification (RFID) Resources and Readings."
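The lookup step described above — a tag returns only a unique digital ID, which the reader resolves against a database to obtain a name, manufacturer, SKU, and cost — can be sketched as follows. The EPC value and product record here are hypothetical placeholders, not real EPC-96 data.

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative sketch: resolving a 96-bit EPC (as a hex string) to a product record. */
public class EpcLookup {
    // In a real deployment this would be a networked database available to the reader.
    private static final Map<String, String> DB = new HashMap<>();
    static {
        DB.put("30700048440663802E185523", "Acme Widget, SKU 88412, $4.99"); // hypothetical
    }

    /** Returns the product record for an EPC, or null if the tag is unknown. */
    public static String lookup(String epcHex) {
        return DB.get(epcHex);
    }

    public static void main(String[] args) {
        System.out.println(lookup("30700048440663802E185523"));
    }
}
```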

  • [November 17, 2004] "Integrating RFID." By Sanjay Sarma (OATSystems and MIT). In ACM Queue Volume 2, Number 7 (October 2004), pages 50-57. Special Issue on RFID. "[Part of the RFID] strategy was to put much of the data and intelligence associated with tagged items, which had hitherto resided on the RFID tags themselves, on the network instead. We achieved this by proposing a new, unique numbering scheme called the EPC (Electronic Product Code). The EPC would act as a pointer to data on the network in much the same way as a license plate on a car can be used to refer to the traffic tickets associated with that car. We then developed an infrastructure for associating these EPC tags with databases across the network using a variant of the DNS (Domain Name System), which we called the ONS (Object Name System). The ONS can be used to find the authoritative owner of the original data associated with an EPC tag. Other infrastructure components include the EPCIS (EPC Information Service), which is being standardized using a Web Services architecture. It can be used to extract information about an EPC from either a trading partner or another EPC-related application or repository within the enterprise.... The ability to read without line-of-sight is a principal advantage of RFID systems over bar-code systems. The fact that every bar-coded item needs to be handled to enable a successful read makes bar codes fundamentally manual... packages in [some] industries tend to be of standard shapes and sizes, with the bar codes at predictable locations, so scanning can even be automated. The standard supply chain, however, offers neither the homogeneity to permit automation, nor the incidental opportunity to perform manual scanning of bar codes. RFID readers, on the other hand, can sense items even when their tags are hidden, or sometimes, within the bounds of physics, when the tagged item is hidden behind other tagged items. This enables automation. 
Unfortunately, the very 'locational tolerance' that makes RFID tags easier to read also makes it difficult to understand whether a tag is in fact in the reader's prescribed zone, or whether the read tag is simply passing by..." See: (1) "Physical Markup Language (PML) for Radio Frequency Identification (RFID)"; (2) "Radio Frequency Identification (RFID) Resources and Readings."
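The license-plate idea above — the EPC carries no item data itself, but is transformed into a DNS-style name that points to the authoritative data service — can be loosely sketched as follows. The naming convention and suffix shown are illustrative placeholders, not the actual ONS encoding.

```java
/** Illustrative sketch of ONS-style resolution: the EPC is a pointer, not a record. */
public class OnsSketch {
    /**
     * Turn EPC fields into a DNS-style lookup name. The field order and domain
     * suffix here are hypothetical, not the real ONS specification.
     */
    public static String toLookupName(String companyPrefix, String itemReference) {
        return itemReference + "." + companyPrefix + ".example-ons.net";
    }

    public static void main(String[] args) {
        // A reader-side application would resolve this name (as DNS resolves a
        // hostname) to find the EPCIS service holding the item's authoritative data.
        System.out.println(toLookupName("0614141", "112345"));
    }
}
```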

  • [November 16, 2004] "Auction of Internet Commerce Patents Draws Concern." By John Markoff. In The New York Times (November 16, 2004). "More than three dozen patents said to cover key facets of Internet transactions will soon be auctioned off by Commerce One, a bankrupt software company. But even before the sale, some technology executives and lawyers are worried that potential buyers might wield the patents in infringement lawsuits against companies that are engaged in online commerce, like I.B.M. and Microsoft. The 39 patents cover basic activities like using standardized electronic documents to automate the sale of goods and services over the Internet. Some intellectual property experts said that these patents, which have broad reach, could be used to challenge Web services like the .Net electronic business system from Microsoft or Websphere software from I.B.M... The Commerce One patents cover a technology known as 'Web services,' software at the heart of computerized systems designed to automate the buying and selling of goods and services online. If the patents are upheld by the courts and are used 'offensively' in an effort to obtain licenses from Internet commerce companies, they could limit innovation, said a representative of the World Wide Web Consortium, an Internet standards body. 'The consequences are potentially substantial,' said Daniel J. Weitzner, the head of the technology and society group at the Web consortium. 'We've had a number of situations where technology development has been blocked because there has been confusion about whether particular patents apply.' Mr. Glushko, for one, said that Commerce One had originally intended to create a public standard with its work, not constrain the use of online commerce processes. 'We filed these patents to describe a standard method for using documents to connect services into business networks,' said Mr. 
Glushko, who is now an adjunct professor at the University of California at Berkeley in the School of Information Management and Systems. 'At Commerce One, our business model depended on an open infrastructure for doing that. It is completely antithetical to our intent to use the patents to prevent it.' Commerce One, founded in 1994 and based in Santa Clara, Calif., developed software applications for electronic commerce. In 1999, it acquired a small start-up firm, Veo Systems, which had developed electronic commerce technology based on a set of protocols known as Extensible Markup Language, or XML. The idea was that a publicly available technology like XML would help electronic markets grow rapidly. Mr. Glushko said that as a co-founder of Veo he had contributed the ideas in several of the key patents to industry standards groups, a move that may have placed those ideas in the public domain. Mr. Glushko contends that those contributions make the patents harder to enforce. However, a representative of Commerce One in the bankruptcy proceeding said that his firm had explored that issue and that the patents were enforceable..." See also the Commerce One announcement.

  • [November 15, 2004] "Tags for Identifying Languages." By Addison P. Phillips (editor; Director, Globalization Architecture, webMethods) and Mark Davis (IBM). Also available in HTML format with hyperlinks. IETF Network Working Group. Internet Draft. Reference: 'draft-phillips-langtags-08'. November 9, 2004, expires May 10, 2005. 46 pages. "This document describes the structure, content, construction, and semantics of language tags for use in cases where it is desirable to indicate the language used in an information object. It also describes how to register values for use in language tags and a construct for matching such language tags, including user defined extensions for private interchange. This document replaces RFC 3066 (which replaced RFC 1766)." Editor's note: "You should note that we think that this will be very near to the final version of this document. As such we have created an external document describing in very broad terms the design and design decisions made in hopes of better documenting the whys-and-wherefores for potential implementers. This document is available for public comment..." See the announcement for Draft-08. General references in "Language Identifiers in the Markup Context."
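The structure the draft describes — ordered subtags separated by hyphens, such as language, script, and region in a tag like "zh-Hans-CN" — can be illustrated with a minimal parser. The classification below uses simple length-and-position heuristics, not the draft's full ABNF grammar.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Simplified illustration of language-tag structure (not the draft's full grammar). */
public class LangTagSketch {
    /** Classify subtags of a tag like "zh-Hans-CN": 4 letters = script, 2 = region. */
    public static Map<String, String> parse(String tag) {
        Map<String, String> parts = new LinkedHashMap<>();
        String[] subtags = tag.split("-");
        parts.put("language", subtags[0]); // primary language subtag comes first
        for (int i = 1; i < subtags.length; i++) {
            String s = subtags[i];
            if (s.length() == 4 && !parts.containsKey("script")) parts.put("script", s);
            else if (s.length() == 2 && !parts.containsKey("region")) parts.put("region", s);
            else parts.put("other-" + i, s); // variants, extensions, private use
        }
        return parts;
    }

    public static void main(String[] args) {
        System.out.println(parse("zh-Hans-CN")); // {language=zh, script=Hans, region=CN}
    }
}
```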

  • [November 15, 2004] "Reasons for Enhancing RFC 3066." Addison P. Phillips (ed). Inter-Locale. Document for Public Review. "RFC 3066 and its predecessor, RFC 1766, define language tags for use on the Internet. Language tags are necessary for many applications, ranging from cataloging content to computer processing of text. The RFC 3066 standard for language tags has been widely adopted in various protocols and text formats, including HTML, XML, and CLDR, as the best means of identifying languages and language preferences. This specification proposes enhancements to RFC 3066. Because revisions to RFC 3066 have such broad implications, it is important to understand the reasons for modifying the structure of language tags and the design implications of the proposed replacement. The proposed successor to RFC 3066 addresses a number of issues that implementers of language tags have faced in recent years: (1) Stability of the underlying ISO standards; (2) Accessibility of the underlying ISO standards for implementers; (3) Ambiguity of the tags defined by these ISO standards; (4) Difficulty with registrations and their acceptance; (5) Identification of script where necessary; (6) Extensibility. The stability, accessibility, and ambiguity issues are crucial. Currently, because of changes in underlying ISO standards, a valid RFC 3066 language tag may become invalid (or have its meaning change) at a later date. With much of the world's computing infrastructure dependent on language tags, this is simply unacceptable: it invalidates content that may have an extensive shelf-life. In this specification, once a language tag is valid, it remains valid forever... The authors of this specification have worked for the past year with a wide range of experts in the language tagging community to build consensus on a design for language tags that meets the needs and requirements of the user community. 
Language tags form a basic building block for natural language support in computer systems and content. The revision proposed in this specification addresses the needs of this community of users with a minimal impact on existing content and implementations, while providing a stable basis for future development, expansion, and improvement..." See "Language Identifiers in the Markup Context."

  • [November 15, 2004] "Web Services Reliability Becomes OASIS Standard." Systems/Enterprise. By Alan J. Weissberger (NEC Corp, and voting member of OASIS WSRM Technical Committee). In Grid Today: Daily News and Information for the Global Grid Community Volume 3, Number 46 (November 15, 2004). "At its November 10-11 [2004] meeting, the OASIS WSRM (Reliable Messaging) TC voted to send the WS-Reliability specification version 1.1 to OASIS for publication as a standard. An OASIS Standard signifies the highest level of ratification for a specification developed by an OASIS TC. Developed through an open process, WS-Reliability enables companies to conduct reliable business-to-business trading or collaboration using Web services. Three protocol capabilities are provided by this standard: guaranteed delivery, ordered delivery, and duplicate elimination... Additionally, four companies (NEC, Fujitsu, Hitachi and Oracle) participated in a successful interoperability demo of the WS-Reliability specification. This was the third such validation of multi-vendor interoperability by the WSRM TC. WS-Reliability will be used in the Japanese Business Grid project to ensure reliable delivery of SOAP formatted notification messages, which are sent based on some predefined condition, e.g., CPU/Server throughput exceeds a pre-set threshold level or drops below a 'low water mark.' These reliable notification messages may be sent between different companies at different geographical locations... WS-Reliability is an open specification for ensuring reliable message delivery for Web services. Reliability, in this context, is defined as the ability to guarantee message delivery to 'users' with a chosen level of protocol capability and Quality of Service (QOS). Again, the users are either other WS protocols (e.g., WS Security, WS Distributed Management, WS-Notifications, etc), or Application layer/user information messages which are exchanged between the end points of the connection. 
To facilitate WS-Reliability, there is a need for SOAP-based Reliable Messaging Processors (RMPs) — in the sender and in the receiver endpoints — that work together to ensure that messages are delivered in a reliable manner over a connection that may be inherently unreliable. The sender and receiver RMPs operate on newly defined SOAP headers that are transmitted either as self-contained messages or attached to other WS protocol messages or user data messages (all of which are SOAP/XML encoded). Fault messages may extend to the SOAP message body. The users determine the level of WS-Reliability. Reliability may include one or more reliable messaging protocol capabilities for the delivery of WS messages..." General references in "Reliable Messaging."
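The three protocol capabilities named above — guaranteed delivery, ordered delivery, and duplicate elimination — rest on the receiving RMP tracking message sequence numbers. The sketch below shows only the receiver-side accept/hold/drop decision, with hypothetical names; it is not the WS-Reliability wire protocol.

```java
import java.util.HashSet;
import java.util.Set;

/** Receiver-side sketch: duplicate elimination and ordered delivery (hypothetical API). */
public class RmpReceiverSketch {
    private final Set<Long> seen = new HashSet<>();
    private long nextExpected = 1;

    /**
     * Returns true if the message should be delivered to the user now.
     * Duplicates are dropped; out-of-order messages are not delivered yet.
     */
    public boolean accept(long sequenceNumber) {
        if (seen.contains(sequenceNumber)) return false;   // duplicate elimination
        seen.add(sequenceNumber);
        if (sequenceNumber != nextExpected) return false;  // hold until in order
        nextExpected++;
        return true;
    }

    public static void main(String[] args) {
        RmpReceiverSketch rmp = new RmpReceiverSketch();
        System.out.println(rmp.accept(1)); // true: in order
        System.out.println(rmp.accept(1)); // false: duplicate dropped
        System.out.println(rmp.accept(3)); // false: held, message 2 not yet seen
    }
}
```

A full implementation would also buffer message 3 and release it once message 2 arrives, and would acknowledge receipt back to the sending RMP; the sketch shows only the ordering decision.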

  • [November 15, 2004] "Web Services Context Specification (WS-Context)." Edited by Mark Little, Eric Newcomer, and Greg Pavlik. Approved OASIS Committee Draft. Produced by members of the OASIS Web Services Composite Application Framework (WS-CAF) TC. Announced as a Committee Draft for public review from 12-November-2004 through 12-December-2004. Version 0.8. 3-November-2004. 23 pages. PDF extracted from the ZIP package. See also the XML Schema and WSDL file. "Web services exchange XML documents with structured payloads. The processing semantics of an execution endpoint may be influenced by additional information that is defined at layers below the application protocol. When multiple Web services are used in combination, the ability to structure execution related data called context becomes important. This information is typically communicated via SOAP Headers. WS-Context provides a definition, a structuring mechanism, and a software service definition for organizing and sharing context across multiple execution endpoints. The ability to compose arbitrary units of work is a requirement in a variety of aspects of distributed applications such as workflow and business-to-business interactions. By composing work, we mean that it is possible for participants in an activity to be able to determine unambiguously whether or not they are participating in the same activity. An activity is the execution of multiple Web services composed using some mechanism external to this specification, such as an orchestration or choreography. A common mechanism is needed to capture and manage contextual execution environment data shared, typically persistently, across execution instances. In order to correlate the work of participants within the same activity, it is necessary to propagate context to each participant. The context contains information (such as a unique identifier) that allows a series of operations to share a common outcome..." 
See also OASIS Web Services Composite Application Framework (WS-CAF) TC in "Messaging and Transaction Coordination." [source .ZIP file]
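The core idea above — a context containing a unique identifier travels with each message, letting participants decide unambiguously whether they are in the same activity — can be sketched as follows. The key name and structure are illustrative placeholders; the real context schema is defined by the WS-Context specification and carried in SOAP headers.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

/** Illustrative sketch of context propagation across activity participants. */
public class ContextSketch {
    /** Create a new activity context; in WS-Context this travels in a SOAP header. */
    public static Map<String, String> beginActivity() {
        Map<String, String> ctx = new HashMap<>();
        ctx.put("context-identifier", UUID.randomUUID().toString()); // illustrative key
        return ctx;
    }

    /** Two participants are in the same activity iff their context identifiers match. */
    public static boolean sameActivity(Map<String, String> a, Map<String, String> b) {
        return a.get("context-identifier").equals(b.get("context-identifier"));
    }

    public static void main(String[] args) {
        Map<String, String> ctx = beginActivity();
        Map<String, String> propagated = new HashMap<>(ctx); // sent to another endpoint
        System.out.println(sameActivity(ctx, propagated)); // true: same activity
        System.out.println(sameActivity(ctx, beginActivity())); // false: new activity
    }
}
```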

  • [November 15, 2004] Web Services Coordination (WS-Coordination). November 2004. 20 pages. With XML Schema and WSDL. A "second joint publication of the specification." Copyright (c) 2002-2004 BEA Systems, International Business Machines Corporation, Microsoft Corporation. See the licensing terms and Transaction Specification Index Page (Microsoft). "This specification describes an extensible framework for providing protocols that coordinate the actions of distributed applications. The framework enables existing transaction processing, workflow, and other systems for coordination to hide their proprietary protocols and to operate in a heterogeneous environment. Additionally this specification describes a definition of the structure of context and the requirements for propagating context between cooperating services." See general references in Web Services Transaction [Framework] [source PDF]

  • [November 15, 2004] Web Services Atomic Transaction (WS-AtomicTransaction). November 2004. 17 pages. With XML Schema and WSDL. A "second joint publication of the specification." Copyright (c) 2002-2004 BEA Systems, International Business Machines Corporation, Microsoft Corporation. See the licensing terms and Transaction Specification Index Page (Microsoft). "This specification provides the definition of the atomic transaction coordination type that is to be used with the extensible coordination framework described in the WS-Coordination specification. The specification defines three specific agreement coordination protocols for the atomic transaction coordination type: completion, volatile two-phase commit, and durable two-phase commit. Developers can use any or all of these protocols when building applications that require consistent agreement on the outcome of short-lived distributed activities that have all-or-nothing semantics." See general references in Web Services Transaction [Framework] [PDF source]
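The two-phase commit protocols named above follow the classic pattern: the coordinator asks every participant to prepare, and commits only if all vote yes, giving the all-or-nothing outcome the specification requires. Below is a minimal sketch of that classic model, not the WS-AtomicTransaction message set or WSDL.

```java
import java.util.List;

/** Classic two-phase commit sketch; WS-AtomicTransaction applies this pattern to Web services. */
public class TwoPhaseCommitSketch {
    interface Participant {
        boolean prepare();  // phase 1 vote: can this participant commit?
        void commit();      // phase 2, all voted yes
        void rollback();    // phase 2, at least one voted no
    }

    /** All-or-nothing outcome: commit only if every participant votes yes. */
    public static boolean run(List<Participant> participants) {
        boolean allPrepared = participants.stream().allMatch(Participant::prepare);
        if (allPrepared) participants.forEach(Participant::commit);
        else participants.forEach(Participant::rollback);
        return allPrepared;
    }

    public static void main(String[] args) {
        Participant yes = new Participant() {
            public boolean prepare() { return true; }
            public void commit() {}
            public void rollback() {}
        };
        Participant no = new Participant() {
            public boolean prepare() { return false; }
            public void commit() {}
            public void rollback() {}
        };
        System.out.println(run(List.of(yes, yes))); // true: all committed
        System.out.println(run(List.of(yes, no)));  // false: rolled back
    }
}
```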

  • [November 15, 2004] Web Services Business Activity Framework (WS-BusinessActivity). November 2004. 22 pages. With XML Schema and WSDL. A "second joint publication of the specification." Copyright (c) 2002-2004 BEA Systems, International Business Machines Corporation, Microsoft Corporation. See the licensing terms and Transaction Specification Index Page (Microsoft). "This specification provides the definition of the business activity coordination type that is to be used with the extensible coordination framework described in the WS-Coordination specification. The specification defines two specific agreement coordination protocols for the business activity coordination type: BusinessAgreementWithParticipantCompletion, and BusinessAgreementWithCoordinatorCompletion. Developers can use any or all of these protocols when building applications that require consistent agreement on the outcome of long-running distributed activities." Earlier details in "WS-BusinessActivity Specification Completes the Web Services Transaction Framework." See general references in Web Services Transaction [Framework] [PDF source]

  • [November 12, 2004] "Exploiting ebXML Registry Semantic Constructs for Handling Archetype Metadata in Healthcare Informatics." By Asuman Dogac, Gokce B. Laleci, Yildiray Kabak, Seda Unal (Middle East Technical University, Turkey); Thomas Beale, Sam Heard (Ocean Informatics, Australia); Peter Elkin (Mayo Clinic, USA); Farrukh Najmi (Sun Microsystems, USA); Carl Mattocks (OASIS ebXML Registry SCM SC, USA); David Webber (OASIS CAM TC, USA). 13 pages (with 48 references). Posted to the OASIS International Health Continuum (IHC) TC list by DeLeys Brandman 12-November-2004. "Using archetypes is a promising approach in providing semantic interoperability among healthcare systems. To realize archetype based interoperability, the healthcare systems need to discover the existing archetypes based on their semantics; annotate their archetypes with ontologies; compose templates from archetypes and retrieve corresponding data from the underlying medical information systems. In this paper, we describe how ebXML Registry semantic constructs can be used for annotating, storing, discovering and retrieving archetypes. For semantic annotation of archetypes, we present an example archetype metadata ontology and describe the techniques to access archetype semantics through ebXML query facilities. We present a GUI query facility and describe how the stored procedures we introduce move the semantic support beyond what is currently available in ebXML registries. We also address how archetype data can be retrieved from clinical information systems by using ebXML Web services. A comparison of Web service technology with ebXML messaging system is provided to justify the reasons for using Web services... A number of standardization efforts are progressing to provide the interoperability of healthcare systems such as CEN TC 251 prEN13606, openEHR, and HL7 Version 3. However, exchanging machine-processable electronic healthcare records has not yet been achieved. 
For example, although HL7 Version 2 Messaging Standard is the most widely implemented standard for healthcare information in the world today, being HL7 Version 2 compliant does not imply direct interoperability between healthcare systems. Version 2 messages contain many optional data fields. This optionality provides great flexibility, but necessitates detailed bilateral agreements among the healthcare systems to achieve interoperability. To remedy this problem, HL7 has developed Version 3 which is based on an object-oriented data model, called Reference Information Model (RIM). Yet, given the large number of standards in the healthcare informatics domain, conforming to a single standard does not solve the interoperability problem..."

  • [November 12, 2004] "Artemis: Deploying Semantically Enriched Web Services in the Healthcare Domain." By A. Dogac, G. Laleci, S. Kirbas, Y. Kabak, S. Sinir, A. Yildiz, and Y. Gurcan (Software Research and Development Center, Middle East Technical University - METU, Ankara, Turkey). Posted to the OASIS International Health Continuum (IHC) TC list by DeLeys Brandman 12-November-2004. 41 pages (with 49 references). Preprint of paper submitted to Elsevier Science. 20-October-2004. "An essential element in defining the semantics of Web services is the domain knowledge. Medical informatics is one of the few domains to have considerable domain knowledge exposed through standards. These standards offer significant value in terms of expressing the semantics of Web services in the healthcare domain. In this paper, we describe the architecture of the Artemis project, which exploits ontologies based on the domain knowledge exposed by the healthcare information standards through standard bodies like HL7, CEN TC251, ISO TC215, and GEHR. We use these standards for two purposes: first to describe the Web service functionality semantics, that is, the meaning associated with what a Web service does, and secondly to describe the meaning associated with the messages or documents exchanged through Web services. Artemis Web service architecture uses ontologies to describe semantics but it does not propose globally agreed ontologies; rather healthcare institutes reconcile their semantic differences through a mediator component. The mediator component uses ontologies based on prominent healthcare standards as references to facilitate semantic mediation among involved institutes. Mediators have a P2P communication architecture to provide scalability and to facilitate the discovery of other mediators... We mainly focus on the clinical concept part of the message ontologies. 
Our main motivation for concentrating on clinical concept ontologies is that the electronic healthcare record based standards present detailed semantics in this regard. However healthcare is a many-to-many business. It is not only connecting a hospital to its branch clinics but to an array of internal and external agencies such as insurance entities, financial institutes and government agencies. Therefore there are other aspects of healthcare informatics such as billing and insurance that need to be covered. Our future work includes extending message ontologies with semantic concepts to handle these aspects including financial information..."

  • [November 09, 2004] "What's New in JAXP 1.3? Part 1: An overview of the technology, and a look at parsing API changes and a new validation API." By Neil Graham (Manager, XML Parser Development, IBM) and Elena Litani (Software Developer, IBM). From IBM developerWorks (November 09, 2004). ['JAXP 1.3, which will be part of J2SE 5 and J2EE 4, is the first major release of this API in over three years. In this pair of articles, the authors explore each of the areas of new functionality added to JAXP in this new version. They provide a brief overview of the JAXP specification, give details of the modifications to the javax.xml.parsers package, and describe a powerful schema caching and validation framework.'] "For a mature technology, the XML space is surprisingly active. Java API for XML Processing (JAXP) 1.3 was recently finalized, and is the conduit through which many of the newest open standards relating to XML will enter the J2SE platform... Originally christened the Java API for XML Parsing, JAXP 1.0 simply provided a vendor-neutral means by which an application could create a DOM Level 1 or a SAX 1.0 parser. With the advent of JAXP 1.1 in 2001, the 'P' came to signify Processing rather than Parsing, and the API's focus broadened to provide a standardized means for applications to interact with XSLT processors. JAXP 1.1 was made part of both the Java 2 Standard Edition (J2SE) 1.4 and the Java 2 Enterprise Edition (J2EE) 1.3. JAXP 1.2 emerged in 2002 as a minor revision of the specification, and added a standardized means of invoking W3C XML Schema validation in JAXP-compliant parsers... To ensure that applications depending on a specific version of JAXP have the maximum amount of portability, ever since its inception, JAXP specification versions have been tied to specific versions of DOM and SAX, as well as the underlying XML and XML Namespaces specifications. 
None of these specifications have been static in the three years since JAXP's last major revision (JAXP 1.1), so JAXP 1.3 steps up to the most recent versions of each of the specifications, allowing them to make their way into J2SE and J2EE... Many applications seek to validate XML documents against a schema, such as one defined according to the W3C XML Schema Recommendation. To validate a document, a validating processor needs to parse the schema document, build an internal in-memory representation of this schema, and then use this in-memory schema to validate an XML document. Hence, validation can entail a large performance cost if a validating processor needs to parse and build an in-memory representation of a schema before validating each XML document. Normally, an application has a limited set of schemas, and therefore wants the processor to build an in-memory representation of a given schema once and use it to validate documents. So far, implementations have had to provide their own mechanisms for caching schemas. For example, the Apache Xerces-J parser defines its own grammar caching API. Now JAXP 1.3 defines a standard API (the javax.xml.validation package) that lets an application re-use schemas and therefore improve overall performance..." See Java API for XML Processing (JAXP).
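The javax.xml.validation workflow described above — parse and compile the schema once into an in-memory Schema object, then reuse it to validate many documents — looks like this in practice. The tiny inline schema and element names are placeholders for illustration.

```java
import java.io.StringReader;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

/** JAXP 1.3 schema caching: compile the schema once, validate many documents. */
public class SchemaCacheDemo {
    // A minimal placeholder schema declaring a single <order> element.
    static final String XSD =
        "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>"
      + "<xs:element name='order' type='xs:string'/></xs:schema>";

    /** Validate one document against an already-compiled Schema. */
    public static boolean isValid(Schema schema, String xml) {
        try {
            Validator validator = schema.newValidator(); // cheap per-document object
            validator.validate(new StreamSource(new StringReader(xml)));
            return true;
        } catch (Exception e) {
            return false; // SAXException: document does not conform to the schema
        }
    }

    public static void main(String[] args) throws Exception {
        SchemaFactory factory =
            SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        // Parsed and compiled once; the Schema object is reused for every document.
        Schema schema = factory.newSchema(new StreamSource(new StringReader(XSD)));
        System.out.println(isValid(schema, "<order>widgets</order>"));     // true
        System.out.println(isValid(schema, "<invoice>widgets</invoice>")); // false
    }
}
```

The expensive step — compiling the schema into its in-memory representation — happens exactly once; only the lightweight Validator is created per document, which is the performance benefit the article describes.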

  • [November 05, 2004] Universal Business Language (UBL) Naming and Design Rules. Lead Editor: Mark Crawford (LMI). Publication Date: 5-November-2004. Document identifier: 'cd-UBL-NDR-1.0.1'. 112 pages. Produced by the UBL Naming and Design Rules Subcommittee under SC Co-chairs: Mavis Cournane (Cognitran Ltd), Mark Crawford (LMI), and Lisa Seaburg (Aeon LLC). Draft document [Second CD Candidate]. "This specification documents the naming and design rules and guidelines for the construction of XML components for the UBL vocabulary. It conveys a normative set of XML schema design rules and naming conventions for the creation of business based XML schema for business documents being exchanged between two parties using XML constructs defined in accordance with the ebXML Core Components Technical Specification... UBL employs the methodology and model described in Core Components Technical Specification, Part 8 of the ebXML Technical Framework, Version 2.01 of 15 November 2003 (CCTS) to build the UBL Component Library. The Core Components work is a continuation of work that originated in, and remains a part of, the ebXML initiative. The Core Components concept defines a new paradigm in the design and implementation of reusable syntactically neutral information building blocks. Syntax neutral Core Components are intended to form the basis of business information standardization efforts and to be realized in syntactically specific instantiations such as ANSI ASC X12, UN/EDIFACT, and XML representations such as UBL. The essence of the Core Components specification is captured in context neutral and context specific building blocks. The context neutral components are defined as Core Components (ccts:CoreComponents). Context neutral ccts:CoreComponents are defined in CCTS as 'A building block for the creation of a semantically correct and meaningful information exchange package. It contains only the information pieces necessary to describe a specific concept.'... 
The context specific components are defined as Business Information Entities (ccts:BusinessInformationEntities). Context specific ccts:BusinessInformationEntities are defined in CCTS as 'A piece of business data or a group of pieces of business data with a unique Business Semantic definition.'... [As shown in Figure 2-2, Business Information Entities Basic Definition Model], there are different types of ccts:CoreComponents and ccts:BusinessInformationEntities. Each type of ccts:CoreComponent and ccts:BusinessInformationEntity has specific relationships between and amongst the other components and entities. The context neutral ccts:CoreComponents are the linchpin that establishes the formal relationship between the various context-specific ccts:BusinessInformationEntities..." [Section 2, Universal Business Language (UBL) Naming and Design Rules] Note: Jon Bosak reported on November 14, 2004: "I am pleased to announce that the revised UBL Naming and Design Rules have been approved by the UBL TC and are now in the process of submission for OASIS Standardization. The revised CD can be found [online]. These are the naming and design rules that were used in generating the schemas in the UBL 1.0 Standard. The NDR CD may undergo minor editorial tweaks and a change of location when it moves into consideration by the OASIS member organizations, but with regard to content, it is now complete." See "Second UBL NDR 1.0 Committee Draft Approved." [source PDF]

Hosted by OASIS - Organization for the Advancement of Structured Information Standards.

Robin Cover, Editor.