The Cover Pages: The Online Resource for Markup Language Technologies
Last modified: February 09, 2009
XML Daily Newslink. Monday, 09 February 2009

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
Oracle Corporation

A Message Type Architecture for SOA
Jean-Jacques Dubray, InfoQueue

One of the main objectives of a SOA Governance organization is to define the processes and policies that foster the development of reusable services. As such, a SOA Governance organization will be involved across the service lifecycle, from identification, funding, and design through deployment, operation, versioning, and retirement. An Enterprise Data Model (EDM) is a Logical Data Model, an ontology if you will, of the overall Information System. Its structure is often abstract and loosely related to the physical structure of the systems of record; however, all data elements stored in any given system of record should be traceable to an element in the EDM. The EDM is often used to construct maps to transform data being synchronized or replicated from one system to another. In addition to the EDM, Data Governance owns processes that have an impact on service design, operation, versioning, and consumption: these include Data Quality, Metadata Management, Reference Data changes, Business Rules changes, External Data requirements, Data Model changes...

Ever since XML was invented in the mid-1990s, people have argued about the best way to describe the structure of XML documents, especially when it comes to creating reusable XML fragments. Three camps emerged with different, sometimes diverging, requirements: the Web camp, the Document camp, and the Data camp. When it became clear to everyone that DTDs were not going to suffice, the W3C quickly published an XML Schema specification, which is now nearly a decade old; only minor changes are expected in the coming minor version (XML Schema 1.1). Despite some criticisms (complexity, shortcomings) and the development of alternative technologies (RELAX NG) and complementary ones (Schematron), XML Schema has been and will remain the standard used to describe XML Message Types. Yet no one has really found an efficient way to model EDMs using XML Schema definitions alone, and this has contributed to keeping the two disciplines, Data Governance and SOA Governance, separate.
In this article we will argue that Message Types should be generated from EDM metadata. We will also argue that traditional models such as XML, ERD, or UML are not well suited to enabling consumption of the EDM for this purpose. We propose defining two complementary DSLs (Domain Specific Languages): one for the EDM and one for the Message Types referencing the elements of the EDM. These DSLs will be used to generate a textual notation in which EDM and Message Type definitions can be captured; they are also well suited to creating a graphical notation.
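The generation step the authors argue for can be sketched in a few lines. The following is an illustrative Python sketch, not the article's DSL: the EDM fragment, the element names, and the target namespace are all hypothetical, and a real generator would consume richer metadata.

```python
# Illustrative sketch: deriving an XML Schema "Message Type" from a toy
# EDM description. The EDM dict, entity/attribute names, and target
# namespace are hypothetical, not taken from the article.
import xml.etree.ElementTree as ET

XSD = "http://www.w3.org/2001/XMLSchema"
ET.register_namespace("xs", XSD)

# A minimal EDM fragment: entity name -> {attribute: XSD type}
edm = {
    "Customer": {"id": "xs:string", "name": "xs:string", "credit": "xs:decimal"},
}

def message_type_schema(entity, attrs):
    """Emit an xs:schema declaring one complex type for the entity."""
    schema = ET.Element(f"{{{XSD}}}schema",
                        targetNamespace="urn:example:messages")
    ctype = ET.SubElement(schema, f"{{{XSD}}}complexType", name=f"{entity}Type")
    seq = ET.SubElement(ctype, f"{{{XSD}}}sequence")
    for attr, xsd_type in attrs.items():
        ET.SubElement(seq, f"{{{XSD}}}element", name=attr, type=xsd_type)
    return ET.tostring(schema, encoding="unicode")

xsd_text = message_type_schema("Customer", edm["Customer"])
print(xsd_text)
```

Because the schema is generated rather than hand-written, every message element stays traceable to its EDM source, which is the traceability property the article asks of Data Governance.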

The Self-Describing Web
Noah Mendelsohn (ed), W3C TAG Finding

W3C announced the publication of a final W3C TAG Finding, authored by members of the W3C Technical Architecture Group. "The Self-Describing Web" describes how document formats, markup conventions, attribute values, and other data formats can be designed to facilitate the deployment of self-describing, Web-grounded content. The Web is designed to support flexible exploration of information by human users and by automated agents. For such exploration to be productive, information published by many different sources and for a variety of purposes must be comprehensible to a wide range of Web client software, and to users of that software. HTTP and other Web technologies can be used to deploy resource representations that are self-describing: information about the encodings used for each representation is provided explicitly within the representation. Starting with a URI, there is a standard algorithm that a user agent can apply to retrieve and interpret such representations. Furthermore, representations can be what we refer to as grounded in the Web, by ensuring that the specifications required to interpret them are determined unambiguously based on the URI, and that explicit references connect the pertinent specifications to each other. Web-grounding ensures that the specifications needed to interpret information on the Web can be identified unambiguously. When such self-describing, Web-grounded resources are linked together, the Web as a whole can support reliable, ad hoc discovery of information.
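The "standard algorithm" mentioned above can be sketched as a follow-your-nose dispatch: retrieve the representation, then let its self-describing metadata (the Content-Type header) select the specification that governs its interpretation. This is a minimal illustration, not the finding's formal algorithm, and the interpreter registry below is deliberately tiny.

```python
# A minimal "follow your nose" sketch: starting from a URI, the
# Content-Type header grounds the representation in the specification
# that defines its meaning. The registry here is illustrative.
from urllib.request import urlopen

INTERPRETERS = {
    "text/html": "interpret per the HTML specification",
    "application/xhtml+xml": "interpret per XHTML, then per XML",
    "application/rdf+xml": "interpret per RDF/XML, then per XML",
}

def interpretation_for(media_type):
    # Strip parameters such as charset; the base media type is what
    # the media-type registration binds to a specification.
    base = media_type.split(";")[0].strip().lower()
    return INTERPRETERS.get(base, "consult the IANA media type registry")

def fetch_and_describe(uri):
    # Network access; shown for completeness but not exercised here.
    with urlopen(uri) as resp:
        return interpretation_for(resp.headers.get("Content-Type", ""))

print(interpretation_for("text/html; charset=utf-8"))
```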

See also: Findings of the W3C Technical Architecture Group (TAG)

XBRL Becomes Mandatory: This Should Be Interesting
Kurt Cagle,

The announcement came quietly, a briefly worded memo from the SEC in December 2008: as of the third fiscal quarter of 2009 (starting in June), companies with over $5 billion in assets would be required to start reporting their earnings using the eXtensible Business Reporting Language, or XBRL. Other companies would be required to follow suit according to whether they use GAAP (with a one-year grace period) or IFRS (starting 2011). The XBRL so provided would be data-centric rather than document-centric, and would be provided in addition to formatted text submissions of filings rather than replacing them. Additionally, each company would be required to host its XBRL-enabled filings on its website for a period of one year. The XBRL so submitted is required but currently carries no liability save that associated with outright fraud, though this limited-liability protection will be phased out in stages by 2014. From the IT perspective, the formal adoption of XBRL as a mandatory requirement is likely to have a number of implications, not least of which is a suddenly high demand for XML-skilled people in general, and XBRL people in particular, as well as a boon for XBRL service providers and tools vendors. As with the OOXML/ODF controversy of 2007, it is very likely that 2009 will be a banner year for XML technologies in general, as two of the key issues that are highly visible this year (financial transparency within corporations and the streamlining of health care) both involve rich XML standards: XBRL for financial reporting and HL7 v3 for electronic health records. Additionally, it is very likely that as companies begin incorporating such standards into their financial and reporting systems, this will also open up the possibility of incorporating XML into other aspects of an organization's communication channels, such as the use of HR-XML standards for handling personnel management, pay, and performance tracking...
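To make the "data-centric rather than document-centric" point concrete, here is a skeletal XBRL-style instance built with the Python standard library. This is illustrative only: the taxonomy element (ex:Assets), the CIK, and the values are hypothetical, whereas real filings use SEC- and US-GAAP-defined taxonomies.

```python
# Illustrative only: a skeletal, data-centric XBRL instance. Each fact
# is a typed, machine-readable data point tied to a reporting context
# and unit, not a formatted block of text.
import xml.etree.ElementTree as ET

XBRLI = "http://www.xbrl.org/2003/instance"
EX = "http://example.com/taxonomy"  # hypothetical taxonomy namespace
ET.register_namespace("xbrli", XBRLI)
ET.register_namespace("ex", EX)

root = ET.Element(f"{{{XBRLI}}}xbrl")

# Context: who is reporting, and for what period.
ctx = ET.SubElement(root, f"{{{XBRLI}}}context", id="Q3FY2009")
entity = ET.SubElement(ctx, f"{{{XBRLI}}}entity")
ET.SubElement(entity, f"{{{XBRLI}}}identifier",
              scheme="http://www.sec.gov/CIK").text = "0000000000"
period = ET.SubElement(ctx, f"{{{XBRLI}}}period")
ET.SubElement(period, f"{{{XBRLI}}}instant").text = "2009-09-30"

# Unit: the measure the fact is denominated in.
unit = ET.SubElement(root, f"{{{XBRLI}}}unit", id="USD")
ET.SubElement(unit, f"{{{XBRLI}}}measure").text = "iso4217:USD"

# The fact itself, referencing context and unit by id.
fact = ET.SubElement(root, f"{{{EX}}}Assets",
                     contextRef="Q3FY2009", unitRef="USD", decimals="0")
fact.text = "5000000000"

instance = ET.tostring(root, encoding="unicode")
print(instance)
```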

vCard Extensions to WebDAV (CardDAV)
Cyrus Daboo (ed), IETF Internet Draft

Members of the IETF vCard and CardDAV (VCARDDAV) Working Group have published an updated Internet Draft for "vCard Extensions to WebDAV (CardDAV)." The specification defines extensions to the Web Distributed Authoring and Versioning (WebDAV) protocol to specify a standard way of accessing, managing, and sharing contact information based on the vCard format. In this version, the 'limit' element definition does not imply any formal ordering of results; the 'prop-filter' element was changed to allow zero or more text-match elements rather than zero or one (etc). Background: Address books containing contact information are a key component of personal information management tools, such as email, calendaring and scheduling, and instant messaging clients. To date, several protocols have been used for remote access to contact data, including the Lightweight Directory Access Protocol (LDAP, RFC 4511), the Internet Message Support Protocol (IMSP), and the Application Configuration Access Protocol (ACAP, RFC 2244), together with SyncML, used for synchronization of such data. WebDAV (RFC 4918) offers a number of advantages as a framework or basis for address book access and management. Most of these advantages boil down to a significant reduction in design costs, implementation costs, interoperability test costs, and deployment costs... A CardDAV address book is modeled as a WebDAV collection with a well-defined structure; each of these address book collections contains a number of resources representing address objects as its direct child resources. Each resource representing an address object is called an "address object resource". Each address object resource and each address book collection can be individually locked and have individual WebDAV properties. A CardDAV server is an address-aware engine combined with a WebDAV server. The server may include address data in some parts of its URL namespace, and non-address data in other parts.
A WebDAV server can advertise itself as a CardDAV server if it supports the functionality defined in this specification at any point within the root of its repository. That might mean that address data is spread throughout the repository and mixed with non-address data in nearby collections...
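The query model above can be illustrated with the body a CardDAV client might send in an addressbook-query REPORT against such a collection. The sketch below builds the XML with the Python standard library; the specific filter (matching the FN property against "Cyrus") is an illustrative choice, not from the draft.

```python
# Sketch of a CardDAV addressbook-query REPORT body. Element names use
# the draft's urn:ietf:params:xml:ns:carddav namespace; the FN filter
# and match text are illustrative.
import xml.etree.ElementTree as ET

DAV = "DAV:"
CARDDAV = "urn:ietf:params:xml:ns:carddav"
ET.register_namespace("D", DAV)
ET.register_namespace("C", CARDDAV)

query = ET.Element(f"{{{CARDDAV}}}addressbook-query")
prop = ET.SubElement(query, f"{{{DAV}}}prop")
ET.SubElement(prop, f"{{{DAV}}}getetag")
ET.SubElement(prop, f"{{{CARDDAV}}}address-data")

# Per this draft revision, 'prop-filter' may carry zero or more
# 'text-match' children.
flt = ET.SubElement(query, f"{{{CARDDAV}}}filter")
pf = ET.SubElement(flt, f"{{{CARDDAV}}}prop-filter", name="FN")
tm = ET.SubElement(pf, f"{{{CARDDAV}}}text-match",
                   attrib={"match-type": "contains"})
tm.text = "Cyrus"

body = ET.tostring(query, encoding="unicode")
print(body)
```

A client would send this body in a REPORT request (with Depth: 1) to the address book collection's URL and receive matching address object resources in a multistatus response.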

See also: the IETF vCard and CardDAV (VCARDDAV) Working Group

What's In A Conversation?
Raghavan Srinivas, Unanswered Questions Blog

As planning for the Intuit Partner Platform proceeds, the topic of long-lived transactions popped up. A long-lived transaction, or conversation, is asynchronous by its very nature. Another property of a conversation is that data flows across multiple organizational boundaries, in the same cloud or in different clouds. Do we absolutely need the ACID properties of a typical transaction associated with it? Gregor Hohpe of Google has written very succinctly about why Starbucks does not use two-phase commit: the asynchronous fashion of coffee ordering and delivery can be handled by a correlation identifier, which is roughly the name of the customer and the drink scribbled on the coffee cup... I'll take this metaphor to Panera Bread. When the customer places the order, (s)he is handed a token which vibrates and lights up when the order is ready. Out-of-order delivery is more the rule than the exception, but the 'take order and deliver' conversation with the customer is complete when (s)he picks up the order and places the lighted token in the bin. It is by no means a perfect protocol: depending on the speed and scheduling of the workers, an order placed ahead of another might well be delivered after it. But does it matter, as long as the customer is sitting for a few extra minutes next to a cozy fireplace and enjoying a drink? The article also talks about compensating transactions to deal with some of the issues. Still, the main goal, maximizing throughput without entirely compromising customer satisfaction, seems to be achieved. This form of conversation is probably acceptable for ordering coffee or small-ticket items. Can a similar conversation be applied when a customer is buying a car? There are various regulatory requirements that must be complied with during the 'take order and deliver' process.
For example, unless a valid driving license is produced, the customer may not be able to take the vehicle for a test drive, and obviously until there is valid proof of auto insurance, the vehicle cannot be driven off the dealer's lot. There are still some optimizations that could be made: the credit check, application, and approval for an auto loan could happen while the customer is doing the test drive or negotiating the price. How could you capture the true essence of a conversation in the cloud? The Web Services Choreography Description Language (WS-CDL), the Web Services Business Process Execution Language (WS-BPEL), and Web Services Transactions (WS-Transactions) can all be used to capture the context of a conversation; or, depending on the nature of the conversation, you could prefer to go the Starbucks or the Panera way.
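The correlation-identifier idea behind the Starbucks/Panera analogy can be sketched in a few lines. This is an illustrative Python sketch, not from the post: each request carries an ID (the name on the cup, or the buzzing token), so replies can arrive in any order and still be matched to the right customer, with no two-phase commit.

```python
# Correlation-identifier sketch: out-of-order delivery is harmless
# because the ID reunites each reply with its request.

pending = {}  # correlation id -> drink the customer is waiting for

def place_order(correlation_id, drink):
    pending[correlation_id] = drink

def deliver(correlation_id, drink):
    # Delivery order is independent of placement order; the correlation
    # id reunites the finished drink with the waiting customer.
    expected = pending.pop(correlation_id)
    assert expected == drink, "wrong drink for this customer"
    return f"{correlation_id}: {drink} ready"

place_order("alice", "latte")
place_order("bob", "espresso")
# Completed out of order -- the conversation still ends correctly.
print(deliver("bob", "espresso"))
print(deliver("alice", "latte"))
```

A compensating transaction in this model would simply be another correlated message (a refund, a remade drink) rather than a rollback of shared state.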

Open-Source Firm Reworks Enterprise SOA Framework
Ed Scannell, InformationWeek

WSO2, a specialist in open-source service-oriented architecture software, has released a completely reworked version of its enterprise service bus (ESB) that now conforms to WSO2 Carbon's componentized SOA framework. By fully complying with Carbon, version 2.0 of WSO2 Enterprise Service Bus affords IT shops significantly more flexibility in managing service-oriented connections throughout a large enterprise, company officials believe. The rejuvenated release also lets corporate and third-party developers hook in additional components to handle chores like service hosting, business process management, and governance. Such customization capabilities reportedly serve to reduce the complexity of integration services and so reduce costs... Another major technical advantage of version 2.0 is that developers can now separate the management console logic from the ESB routing and transformation engine. This makes it feasible to use just the single management console contained in WSO2 ESB 2.0, making it possible for administrators to juggle a handful of back-end ESB tasks at the same time. From the announcement: "The WSO2 Carbon SOA platform uses OSGi as its underlying core modularization technology, which supports the ability to plug in new components in a managed way via versioning and a clean separation of functions. WSO2 Carbon also goes beyond the basics of OSGi to define a richer model for SOA. For example, even when new service types are added to the Carbon platform, they automatically inherit tracing, security, and other capabilities from the platform. The Carbon platform defines how to build a consistent SOA platform and how the platform components share functionality. This approach allows developers to combine as many WSO2 middleware components as they need to assemble systems customized to their specific requirements. More components can be added to an existing installation over time, as those requirements change.
Developers also can deploy other OSGi bundles -- either existing open source projects or their own custom-coded OSGi components -- within the Carbon SOA platform. The components of the Carbon platform are based on Apache projects, including Apache ODE, Axis2, Synapse, Tomcat, and Axiom, among many core libraries. Other key features include: (1) Full registry/repository integration that allows a complete distributed Carbon fabric to be driven from a central WSO2 Registry instance. (2) Eventing support, including a WS-Eventing broker, to support event-driven architectures (EDA). (3) A WS-Policy editor for defining Web service dependencies and other attributes. (4) Transactional support for JMS and JDBC, facilitating robust error handling for services and ESB flows. (5) Transport management controls for all services, making it much simpler to support file-, mail-, and JMS-based integration. (6) Active Directory and LDAP support across all products, providing simple integration into existing user stores, including Microsoft environments...

See also: the announcement

W3C Publishes RDB2RDF Incubator Group Final Report
Ashok Malhotra, RDB2RDF XG Report

W3C announced the release of a Final Report from members of the RDB2RDF Incubator Group. The XG recommends that the W3C initiate a Working Group to standardize a language for mapping Relational Database schemas into RDF and OWL. This standard "will enable the vast amounts of data stored in Relational databases to be published easily and conveniently on the Web. It will also facilitate integrating data from separate Relational databases and adding semantics to Relational data." The recommendation is based on a survey of the state of the art conducted by the XG, as well as the use cases presented. The mapping language defined by the WG would facilitate the development of several types of products. It could be used to translate Relational data into RDF to be stored in a triple store; this is sometimes called Extract-Transform-Load (ETL). Or it could be used to generate a virtual mapping that could be queried using SPARQL, with the SPARQL translated to SQL queries on the underlying Relational data. Other products could be layered on top of these capabilities to query and deliver data in different ways, as well as to integrate the data with other kinds of information on the Semantic Web. The mapping language should be complete when compared to the relational algebra. It should have a human-readable syntax as well as XML and RDF representations of the syntax for purposes of discovery and machine generation. There is a strong suggestion that the mapping language be expressed in rules as defined by the W3C RIF WG. The syntax does not have to follow the RIF syntax, but there should be a round-trippable mapping between the mapping language and a RIF dialect. The output of the mapping should be defined in terms of an RDFS/OWL schema. It should be possible to subset the language for simple applications such as Web 2.0 apps; this feature of the language will be validated by creating a library of mappings for widely used applications such as Drupal, WordPress, and phpBB.
The mapping language will allow customization with regard to names and data transformations. In addition, the language must be able to expose vendor-specific SQL features such as full-text and spatial support and vendor-defined datatypes. The final language specification should include guidance on mapping Relational data to a subset of OWL such as OWL/QL or OWL/RL...
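The ETL direction the report describes can be sketched very simply: rows in a relational table become RDF triples, with the primary key minted into a subject URI and each column into a predicate. The table, column names, and URI patterns below are hypothetical, not from the report, and a standardized mapping language would of course express this declaratively rather than in code.

```python
# Illustrative Extract-Transform-Load pass: relational rows -> RDF
# triples in N-Triples style. Table and URI patterns are hypothetical.
import sqlite3

BASE = "http://example.org/"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO person VALUES (1, 'Ashok'), (2, 'Robin')")

def table_to_triples(conn, table, key, columns):
    """Map each row to a subject URI and each column to a predicate."""
    triples = []
    query = f"SELECT {key}, {', '.join(columns)} FROM {table}"
    for row in conn.execute(query):
        subject = f"<{BASE}{table}/{row[0]}>"
        for col, value in zip(columns, row[1:]):
            predicate = f"<{BASE}{table}#{col}>"
            triples.append(f'{subject} {predicate} "{value}" .')
    return triples

for t in table_to_triples(conn, "person", "id", ["name"]):
    print(t)
```

The virtual-mapping alternative mentioned in the report would skip the materialization step and instead rewrite incoming SPARQL queries into SQL over the same tables.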

See also: the W3C Incubator Activity


XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation
Microsoft Corporation
Oracle Corporation
Sun Microsystems, Inc.

XML Daily Newslink:
Newsletter Archive:
Newsletter subscribe:
Newsletter unsubscribe:
Newsletter help:
Cover Pages:
