The OASIS Cover Pages: The Online Resource for Markup Language Technologies
XML Daily Newslink. Monday, 24 November 2008

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation

W3C Last Call: SOAP over Java Message Service 1.0
Peter Easton, Bhakti Mehta, Roland Merrick (eds), W3C Technical Report

Members of the W3C SOAP-JMS Binding Working Group have published a Last Call Working Draft for "SOAP over Java Message Service 1.0." Public comment is welcome through January 13, 2009. The work described in this and related documents is aimed at a set of standards for the transport of SOAP messages over JMS (Java Message Service). The main purpose is to ensure interoperability between the implementations of different Web services vendors. It should also enable customers to implement their own Web services for part of their infrastructure, and to have these interoperate with vendor-provided Web services. The main audience is implementers of Web services stacks, in particular people who wish to extend a Web services stack with an implementation of SOAP/JMS. It should enable them to write a SOAP/JMS implementation that will interoperate with other SOAP/JMS implementations, and that will not be dependent on any specific JMS implementation. A motivating example is a customer whose departments use Web services infrastructure from two different vendors, VendorA and VendorB, and who needs reliable Web services interaction between the departments. If both vendors provide support for SOAP/JMS according to this standard, it should be possible for a client running on VendorA's stack to interoperate with a service using VendorB's. The standards will also be of interest to providers of Web services intermediary services, such as routing gateways or SOAP/HTTP-to-SOAP/JMS gateways. The documents do not discuss how such gateways should be designed and configured, but adherence to the standard will help a gateway ensure proper interoperation with SOAP/JMS clients and services.
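The binding's core idea can be sketched without a JMS provider: the SOAP envelope travels as the JMS message payload, and binding metadata rides in JMS message properties. The sketch below is illustrative Python, not the Java API the draft targets; the property names follow the draft's SOAPJMS_ prefix convention but the exact set should be checked against the specification, and the queue URI is a made-up example.

```python
# Illustrative model of a SOAP/JMS message: the SOAP envelope is the JMS
# message body, and binding metadata is carried in JMS message properties.
# Property names assume the draft's SOAPJMS_ prefix convention; verify
# against the specification before relying on them.

def make_soap_jms_message(envelope: str, request_uri: str) -> dict:
    """Model a JMS text message carrying a SOAP envelope (sketch only)."""
    return {
        "body": envelope,  # the SOAP envelope as the message payload
        "properties": {
            "SOAPJMS_bindingVersion": "1.0",
            "SOAPJMS_contentType": 'text/xml; charset="utf-8"',
            "SOAPJMS_requestURI": request_uri,
        },
    }

# Hypothetical destination URI for the example
msg = make_soap_jms_message(
    '<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"/>',
    "jms:jndi:myQueue?jndiConnectionFactoryName=myCF",
)
```

Because the metadata lives in message properties rather than in the payload, any JMS provider that preserves properties can carry the message, which is what makes vendor-neutral interoperation plausible.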

See also: the W3C SOAP-JMS Binding Working Group

OGF Public Review: Guidelines of Requirements for Grid Systems v1.0
Ravi Subramaniam, Toshiyuki Nakata (et al), Open Grid Forum Technical Report

Open Grid Forum Editor Greg Newby announced that the Enterprise Grids Requirements Research Group (EGR-RG) has released "Guidelines of Requirements for Grid Systems v1.0" for public comment through December 20, 2008. The memo provides information to the Grid community on guidelines of requirements for Grid systems, and contains recommendations on the design of grid systems. Excerpts: "This standard describes requirements to be considered in the integration and operation of grid systems that effectively provide services by virtualizing and flexibly assigning, collaborating and using various resources, including computers, storage and networks, in accordance with different purposes. In order for the systems to function effectively, clarification and operational management of many related activities are required. In grid systems, suppliers provide services to consumers, and in many cases consumers themselves may become suppliers and provide services to other consumers. Coordinated construction and operation of grid systems generate opportunities for ongoing management, greater efficiency and continual improvement. This standard is assumed to target people who use and operate grid systems. It may be used by the following business enterprises, organizations and applications: (1) Organizations that design, construct and operate grid systems. (2) Commercial data centers that provide hosting and housing services as their business. (3) Service providers who provide applications, IT resources and other offerings. (4) Organizations that mediate various information services. This standard [...] defines a grid system as a hierarchical structure that consists of four layers. The first layer is the physical environment layer, consisting of hardware components associated with servers, storage and networks. The second layer is the operating environment layer, consisting of software such as an operating system and a file system that makes the first layer operable. The third layer is the platform layer, consisting of software components that achieve operations over multiple components, such as databases and grid middleware. The fourth layer is the application service layer, consisting of applications and portals. Consumers who use the fourth layer are called end-users..."
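The four-layer model in the excerpt can be captured in a small illustrative structure; the layer names come from the draft, while the code itself is just a sketch.

```python
# The OGF draft's four-layer grid model, bottom to top. The layer names are
# from the excerpt above; the tuple structure is purely illustrative.
GRID_LAYERS = [
    ("physical environment", "hardware: servers, storage, networks"),
    ("operating environment", "OS and file systems that make layer 1 operable"),
    ("platform", "databases and grid middleware spanning components"),
    ("application service", "applications and portals used by end-users"),
]

def layer_name(number: int) -> str:
    """Return the layer name for a 1-based layer number, as in the draft."""
    return GRID_LAYERS[number - 1][0]
```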

See also: OGF documents for public comment

vCard and CardDAV Working Group: Update of Extended MKCOL for WebDAV
Cyrus Daboo (ed), IETF Internet Draft

Members of the IETF vCard and CardDAV Working Group have published an updated -01 version of the "Extended MKCOL for WebDAV" specification. WebDAV (RFC 4918) defines the HTTP method MKCOL, used to create WebDAV collections on the server. However, several WebDAV-based specifications, such as DeltaV (RFC 3253) and CalDAV (RFC 4791), define "special" collections or resources—ones identified by additional values in the 'DAV:resourcetype' property assigned to the collection resource, or through other means. These "special" collections are created by new methods (e.g., MKACTIVITY, MKWORKSPACE, MKCALENDAR). The addition of a new MKxxx method for each new "special" collection or resource adds to server complexity and is detrimental to overall reliability due to the need to make sure intermediaries are aware of these methods. "Extended MKCOL for WebDAV" proposes an extension to the WebDAV MKCOL method that adds a request body allowing a client to specify WebDAV properties to be set on the newly created collection or resource. Section 5 provides the XML element definitions. In particular, the 'DAV:resourcetype' property can be used to create a "special" collection, or other properties used to create a "special" resource. The WebDAV MKCOL request is extended to allow the inclusion of a request body: an XML document containing a single 'DAV:mkcol' XML element at the top level. One or more 'DAV:set' XML elements may be included in the 'DAV:mkcol' element to allow setting properties on the collection as it is created. In particular, to create a collection of a particular type, the 'DAV:resourcetype' XML element must be included in a 'DAV:set' XML element and must specify the correct resource type elements for the new resource...
A server supporting the features described in this document must include "extended-mkcol" as a field in the DAV response header from an OPTIONS request on any resource that supports use of the extended MKCOL method. [Note: the vCard and CardDAV Working Group has also updated "vCard Extensions to WebDAV (CardDAV)". That specification defines extensions to the Web Distributed Authoring and Versioning (WebDAV) protocol to specify a standard way of accessing, managing, and sharing contact information based on the vCard format.]
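A request body of the kind the draft describes, a 'DAV:mkcol' root containing a 'DAV:set' that assigns 'DAV:resourcetype', can be sketched as follows; the CalDAV calendar type serves as an example "special" collection, and the serialization details are illustrative rather than normative.

```python
# Build an extended-MKCOL request body: a DAV:mkcol root with a DAV:set
# assigning DAV:resourcetype on the to-be-created collection. The CalDAV
# calendar type is used as an example of a "special" collection.
import xml.etree.ElementTree as ET

DAV = "DAV:"
CALDAV = "urn:ietf:params:xml:ns:caldav"  # example namespace for the sketch

def mkcol_body() -> bytes:
    mkcol = ET.Element(f"{{{DAV}}}mkcol")
    set_el = ET.SubElement(mkcol, f"{{{DAV}}}set")
    prop = ET.SubElement(set_el, f"{{{DAV}}}prop")
    rtype = ET.SubElement(prop, f"{{{DAV}}}resourcetype")
    ET.SubElement(rtype, f"{{{DAV}}}collection")
    ET.SubElement(rtype, f"{{{CALDAV}}}calendar")  # the "special" type
    return ET.tostring(mkcol, encoding="utf-8")
```

The client would send this document as the body of a single MKCOL request, so the collection is created and typed in one round trip instead of requiring a dedicated MKCALENDAR-style method.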

See also: the WG's updated vCard Extensions I-D

Test Driving MarkLogic 4.0 XML Server
Kurt Cagle, O'Reilly Technical

XML databases have long been something of a niche category in the database world, trying with varying degrees of success to provide the level of ease and accessibility for semi-structured content that is a hallmark of SQL databases, while at the same time providing as much of the sophisticated processing that XPath enables for stand-alone documents. The need is certainly there—a significant amount of the total "data" in the world does not fall neatly into Ted Codd's relational table structures without significant shredding... It is not surprising, then, that as XQuery, the W3C XML query language standard finalized in January 2007, has gained acceptance, so too has interest in XML databases that support it. On the commercial side, one of the most well known (and solidly entrenched) is the MarkLogic XML Server... I had a chance recently to "get under the hood" and spend some time evaluating the MarkLogic Server 4.0 release. I was impressed by the product; it was both feature-rich and satisfyingly fast, though there were a few facets of the server that I felt could have used some improvement. Overall, however, it is easy to see why MarkLogic holds the place in the XML database space that it does... A year and a half after the XQuery specification was finalized, XQuery support is de rigueur in an XML database, and MarkLogic definitely exceeds expectations here. The core XQuery function set is fast and, in my initial tests anyway, seemed to work flawlessly. However, one of the real advantages of the XQuery specification is its extension mechanism—you can create additional libraries, either out of XQuery modules or via external libraries. Here MarkLogic 4.0 not only supports the full XQuery 1.0 specification but also provides a 1.0-ml extension set that is rather stunning in its breadth, along with a 0.9-ml set that provides backwards compatibility for existing MarkLogic applications on the older 3.0 series.
The first piece of this augmented XQuery set is the introduction of transactions (note that this capability is also being discussed as part of the XQuery 1.1 specification)... A second aspect of MarkLogic Server which is (fairly) easy to use is the combination of alerts and triggers. An alert is a notification that the system raises whenever the state of the database changes in accordance with a given XQuery filter. In essence, whenever an update is performed on a database with alerts present, a "reverse query" is performed on the dataset, and if that query returns true, then some predefined action is launched... MarkLogic Server supports the XPointer specification, including not only the simple id and element() methods, but also the xpath() method. This means that you can retrieve a collection of nodes or fragments from a document or a collection of documents through XPointer notation, rather than going through the construction of a formal XQuery script. All of this is accomplished by a pipeline in which XIncludes and XPointers within the source document are parsed, expanded and processed. This pipeline can handle a number of other things as well—it can be used to validate documents, to run XQuery "filters" on them and to expand embedded tags. Another such feature is Entity Enrichment. Enrichment can be thought of as a semantic process; it scans a document for lexically "interesting" terms, compares these words or terms with its own map of terms and then wraps semantic information around the term itself...
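The "reverse query" alerting model described above can be illustrated with a toy sketch, with plain Python predicates standing in for MarkLogic's XQuery filters; all names here are invented for illustration.

```python
# Toy model of database alerts: each alert pairs a filter predicate (standing
# in for an XQuery filter) with an action. Every update runs the filters as a
# "reverse query" against the updated document.
alerts = []

def register_alert(predicate, action):
    """Register a filter/action pair to be evaluated on every update."""
    alerts.append((predicate, action))

def on_update(document: dict, fired: list):
    """Simulate an update: run each alert's filter against the document."""
    for predicate, action in alerts:
        if predicate(document):  # the "reverse query" step
            fired.append(action(document))

fired = []
register_alert(lambda d: d.get("status") == "urgent",
               lambda d: f"notify: {d['id']}")
on_update({"id": "doc1", "status": "urgent"}, fired)
```

The inversion is the interesting part: instead of running a query over many documents, each update runs many stored queries over one document, which is what makes alerting scale with update volume rather than corpus size.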

See also: the Jason Hunter interview

W3C Markup Validation Service Now Supports HTML 5
Olivier Thereaux, W3C Announcement

W3C staff have announced the release of the W3C Markup Validation Service Version 0.8.4. W3C's validator checks the markup validity of Web documents in HTML, XHTML, SMIL, MathML, etc. While the new version identifier '0.8.4' may sound like a very minor step from version 0.8.3, released in August 2008, this release of the W3C Markup Validator brings a very important change: in addition to checking documents against established standards such as HTML 4.01 and XHTML 1.0, the validator can now check documents for conformance to HTML5, thanks to integration with the html5 engine. HTML5 is still work in progress, and support for this next generation of the publishing language of the World Wide Web will remain experimental. The integration of the html5 engine in the validator should provide experimentation grounds for those interested in trying out authoring in this new version of HTML, as well as a feedback channel for the group working on building a stable, open standard... The validator is free and open source, and anyone is welcome to download it and use it on a local server. Specific to this version 0.8.4, you may want to install a local instance of the HTML5 conformance checker. If you want to use the integration with the HTML5 engine, you will need a fairly recent version of the libxml2 library. Olivier Thereaux reminds users: "As an open-source software project, this Validator exists thanks to all your help, contributions and feedback. For this release, special thanks go, in no particular order, to Henri Sivonen and all the contributors to the engine, Ville Skytta, Frank Ellermann, Etienne Miret, and Moto Ishizawa for patches and help, as well as the community of users and contributors on the mailing-list, wiki, and the bugzilla. Now more than ever, the validators need *you*.
Millions use these tools daily to make the Web a better, more usable, more interoperable and accessible place, and these projects can use help from all, be it for bug hunting, documentation, user support, translations, etc."
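For those installing a local instance, a query against the validator's check endpoint might be constructed as below; the local path and the exact parameter names ('uri', 'doctype') are assumptions to verify against the validator's own documentation.

```python
# Sketch: build a query URL for a locally installed Markup Validator
# instance. The base path and the 'uri'/'doctype' parameter names are
# assumptions, not taken from the announcement; check the validator docs.
from urllib.parse import urlencode

def check_url(page: str,
              base: str = "http://localhost/w3c-validator/check") -> str:
    """Return a GET URL asking the validator to check 'page' as HTML5."""
    return base + "?" + urlencode({"uri": page, "doctype": "HTML5"})

url = check_url("http://example.org/")
```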

See also: News for the W3C Markup Validator

ISO Standard 'Office' Formats Overpromise Compatibility?
Rick Jelliffe, O'Reilly Technical

Apropos of a new Gartner report [Gartner RAS Core Research, ID Number G00161923], Rick Jelliffe comments: "Of the three pages, I pretty much agree with their first and third pages. Of course, the title is bogus -- standards don't promise anything, let alone overpromise, but that is small beans. The general thrust is: standards are good, ODF and OOXML differ in extent, people's requirements are different, product features are different. So choose products and standards to fit users for each situation; people may want a unified standard, but forcing them to use one or the other of the existing ones may flounder. In any case, choosing either ODF or OOXML may tend to promote the suites from which the standards originated. That is all fine and good. And the point that macros and scripting are a gaping gap in both standards is well made. Towards the middle it gets a little, err, nutty to me... the claim that 'at the end of the day, people use products not formats to get the job done' goes too far. In my business we spend all our day working on formats, and our customers use systems, not products, where the system is made to reflect the capabilities of the format and the format made to reflect the user's requirements. A lot of Web systems are built that way. So I think the report doesn't really mention smart documents or communications with back-end systems (though it does mention Google Docs at least!). I think the problem that a lot of us in the standards community have is that in our heads it is 2009 or even 2010 already. An ODF person will be judging ODF by ODF 1.2, naturally. So articles on standards in the sense of what you get when you install a product jar against our understanding of the potential of a standard, and what we have seen in our prototypes and custom integrations. But if our heads are in 2009, I think the article is perhaps a little in 2007 in parts.
While they won't be perfect for all sorts of reasons, the ODF and OOXML import/export is increasingly becoming acceptable over 2008/2009. In our office here we use ODF and OOXML all the time, and it is not causing any pain. It is a point I have made before: at the end of the day, no major vendor can afford to ignore any important format. There are so many voices of panic, as if moving to any XML-in-ZIP format would not be accompanied by transition pain..."

See also: the Cover Pages news story on ISO/IEC 29500

Google Adds OAuth to Widget Mashups
David Meyer, CNET

Google announced that it has adopted OAuth, an open Web authentication standard for controlling privacy, for its widget platform, Google Gadgets. If a user has personal information stored on one Web site, OAuth provides a mechanism for him or her to authorize that Web site to share the data with another Web site or widget. It also makes it possible to do this without the first site having to reveal the user's identity to the second site. Previously Google announced that it was to adopt OAuth for sharing data through its Google Data application programming interface. The company on Tuesday said it will now also use OAuth for Google Gadgets, which are interactive mini applications for the desktop that show, for example, personalized news feeds or localized weather reports. Eric Sachs, Google's senior product manager for security: "We also previously announced that third-party developers can build their own iGoogle gadgets that access the OAuth-enabled APIs for Google applications such as Calendar, Picasa, and Docs. In fact, since both the gadget platform and OAuth technology are open standards, we are working to help other companies who run services similar to iGoogle to enhance them with support for these standards. The new OAuth-enabled gadgets being created for iGoogle would also work on those other sites, including many of the gadgets that Google offers for its own applications. This provides a platform for some interesting mashups. It would allow a mutual fund, for example, to provide an iGoogle gadget to their customers that would run on iGoogle, and show the user the value of his or her mutual fund, but without giving Google any unique information about the user, such as a Social Security number or account number. 
In the future, maybe we will even see industries like banks use standards such as OAuth to allow their customers to authorize utility companies to perform direct debit from the user's bank account without that person having to actually share his or her bank account number with the utility vendor."
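The delegation OAuth enables rests on signed requests. As a concrete taste, the sketch below builds the signature base string defined by OAuth Core 1.0: the HTTP method, base URL, and normalized parameters, each percent-encoded and joined with '&'. The parameter values are placeholders, and a real request would carry additional oauth_* parameters plus an HMAC-SHA1 signing step over this base string.

```python
# Build the OAuth Core 1.0 "signature base string": uppercase HTTP method,
# percent-encoded base URL, and percent-encoded normalized parameter string,
# joined with '&'. Parameter values below are placeholders.
from urllib.parse import quote

def oauth_base_string(method: str, url: str, params: dict) -> str:
    enc = lambda s: quote(str(s), safe="~")  # RFC 3986 unreserved chars only
    # Normalize: encode, sort, then join as k=v pairs with '&'
    norm = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(params.items()))
    return "&".join([method.upper(), enc(url), enc(norm)])

base = oauth_base_string("GET", "http://example.com/gadget",
                         {"oauth_consumer_key": "key", "oauth_nonce": "abc"})
```

Because both sides derive the same base string from the request itself, a gadget can prove authorization without ever handling the user's credentials, which is the property the article highlights.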

See also: Eric Sachs' blog article

Claim Catalog Design: We MUST Get This Right
Pamela Dingle, Blog

During IIW, the ICF Schema Working Group proposed and approved its first standardized claim definition. [An implementation was illustrated in the Equifax online identity card, which enables people to verify their 'over-18' identity online. People who obtain the Equifax I-Card are offered Parity's Azigo I-card management software to enable one-click sign-in and identity verification. Consumers can get their Equifax I-Card free of charge for use exclusively at a proof-of-concept site.] I've been following the workings of the schema group but not closely, and I was taken by surprise at the values defined as part of this precedent-setting claim element: "Claim Name: age-18-or-over, with Proposed Values of 0, 1, and 2"... Want to know what the values MEAN? What [will] a Mother or Father see when they view values passed between the Identity Provider they are trusting to make claims about their children's age, and a website that may restrict content based on that value... I believe we need to set some very specific best practices around these schema elements, first and foremost being the primary design principle that these atomic elements should be designed for regular people, not for developers, and not for machines... [And the subsequent questions from a design meeting]... What are the expectations of the 'Display Claim' versus the actual claim in providing human-readable claim values? Is it reasonable (or even preferable) to define a claim value that is not human-readable and trust that the STS will be responsible for mapping that value to something useful? Is it expected that the selector will do a metadata discovery on each and every claim passed?...
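The mapping Dingle asks for, from opaque claim values to something a parent can read, might look like the sketch below; the meanings assigned to 0, 1, and 2 are hypothetical, precisely because the proposal itself does not define them.

```python
# HYPOTHETICAL display mapping for the proposed 'age-18-or-over' claim
# values 0, 1, 2. The real semantics are not stated in the proposal; the
# ICF schema definition is what would need to be consulted.
AGE_18_OR_OVER_DISPLAY = {
    0: "unknown / not asserted",
    1: "under 18",
    2: "18 or over",
}

def display_claim(value: int) -> str:
    """Map a raw claim value to a human-readable display string."""
    return AGE_18_OR_OVER_DISPLAY.get(value, "unrecognized value")
```

Whether such a table lives in the selector, the STS, or the claim definition itself is exactly the design question the post raises.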

See also: the Claim Catalog Design discussion thread

Polymorphic Web Services, Part 1: Polymorphic Data
Scott M. Glen, IBM developerWorks

The potential benefits of a Service-Oriented Architecture (SOA) in terms of loose coupling and reuse, leading to business agility, have been well publicised for some time. But for SOA to provide a truly flexible platform for business process management (BPM), you need to introduce an element of abstraction into your service invocations. Polymorphism is a well-understood object-oriented programming technique that allows programs to be written based only on the abstract interfaces of the objects and functions to be manipulated. This means that future extensions, in the form of new types of objects, are easy if they conform to the original interface. I assume you broadly understand the concept of polymorphism, which you can think of as providing capabilities in the following two areas: (1) Polymorphic data refers to the ability to supply an object of a derived type to a generic operation, which handles the object as though it were of the base type, while calls implicitly access operations and data of the derived type. (2) Polymorphic function provides the ability to invoke an operation such that, through inheritance or some other means, another more specific operation is implicitly invoked to handle the request. This article shows you how to use XML extensions and dynamic service invocation techniques to provide a double whammy of polymorphism, creating truly flexible service invocations while simplifying business processes. It demonstrates two approaches that go some way to providing the polymorphic capabilities described above. Part 1 of the article series uses IBM Rational Software Architect in conjunction with XML extensions to provide a polymorphic approach to handling hierarchical data objects through Web services. Part 2 will address the polymorphic function aspect, using mediations within an enterprise service bus (ESB) to dynamically and implicitly invoke an appropriate Web service. Clearly there are some restrictions [in the examples shown].
You must use the 'instanceof' operator or the 'type' attribute from the base class to distinguish between the various derived accounts, and the introduction of further account types would require using the Rational Software Architect wizards to generate appropriate data classes for both client and server sides. Furthermore, remember that whilst this provides some degree of polymorphic behaviour, you're not dealing with an object-oriented remote invocation mechanism. Web services generally adopt a stateless invocation pattern; they accept a stream of XML data typically transmitted over HTTP, reconstitute it into object form, and execute some business function, often returning a result to the caller. State is not maintained between invocations. However, you have seen how you can, with minimal coding, use the Rational Software Architect modeling and transformation capabilities, in conjunction with the Software Services profile, to adopt an MDD approach that can generate a Web service interface to act on a hierarchy of related data elements...
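The 'instanceof'-style dispatch described above translates naturally into a small sketch; this is illustrative Python rather than the article's Java/Rational Software Architect setup, and the account classes are invented for the example.

```python
# Polymorphic data, sketched: a generic service operation written against
# the base Account type accepts any derived account, and an isinstance
# check (the Python analogue of Java's 'instanceof') distinguishes the
# derived type after the XML payload has been reconstituted into objects.
class Account:
    def __init__(self, balance: float):
        self.balance = balance

class SavingsAccount(Account):
    def __init__(self, balance: float, rate: float):
        super().__init__(balance)
        self.rate = rate

def describe(account: Account) -> str:
    """Generic operation: written against the base type, handles any derived type."""
    if isinstance(account, SavingsAccount):  # the instanceof-style check
        return f"savings at {account.rate:.0%}"
    return "basic account"
```

The restriction the article notes shows up here too: adding a new account type means revisiting this dispatch point, since Web services remain a stateless, data-centric invocation style rather than true remote object orientation.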

Ad Hoc Discovery in Mesh Networks
Josh Patterson, Blog

This article lays out how decentralized discovery works in one ad-hoc routing algorithm, "Termite", which was related to Patterson's thesis. Patterson reports that he is building on some ideas to sketch out an outline of where decentralized discovery has been, and how it can be applied to today's web. "For my Master's thesis, I did work in Mobile Ad Hoc NETworks (MANETs) and self-organizing routing algorithms. At the present time, I can't post my thesis online as we are waiting to see about some journal publications, but I'd like to talk a little about decentralized discovery in MANETs (however, I have a large article ready to go about my Master's thesis, so when I get the clearance, it will be posted). MANETs are decentralized networks that have no leader node or central controller, where each node can only hear a subset of all nodes. Every node has to be a good citizen and agree to forward packets for other nodes in order for the network to function. Routing is a key issue in MANETs since a different set of obstacles is faced than in traditional wired networks. There are many different ways that a network can approach the issue of routing; however, there are two main classes in MANETs: proactive and reactive networks. (1) Proactive: A proactive ad-hoc network seeks to constantly have the best possible global routing information in its routing tables, updating the routing table as new information comes online. (2) Reactive: A reactive ad-hoc network only caches routing information in its tables relative to the routes it has seen recently or needs to forward the current packets in its queue... Something that has really interested me is the parallel between ad-hoc networks and the emerging ad-hoc nature of today's web of linked data. The more I dig, the more I see similarities in the development of a decentralized ecosystem that has value in how it is interconnected.
As we've seen with Termite, there is also tremendous value in a system that is interconnected, yet loosely coupled. Something I want to explore further is how we can add small properties to the web at the 'node' level that give it more decentralized robustness, more ways to auto discover itself..."
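The proactive/reactive split can be made concrete with a toy reactive routing table that caches only recently seen routes and evicts the oldest; the capacity and names are illustrative, not taken from Termite.

```python
# Toy reactive routing table: cache only recently observed routes,
# evicting the least recently seen when capacity is exceeded. A proactive
# node would instead maintain routes to all known destinations at all times.
from collections import OrderedDict

class ReactiveRoutingTable:
    def __init__(self, capacity: int = 3):
        self.capacity = capacity
        self.routes = OrderedDict()  # destination -> next hop, oldest first

    def observe(self, destination: str, next_hop: str):
        """Cache a recently seen route, refreshing its recency."""
        self.routes.pop(destination, None)   # re-observation moves it to the end
        self.routes[destination] = next_hop
        if len(self.routes) > self.capacity:
            self.routes.popitem(last=False)  # evict the least recent route
```

The trade-off mirrors the post: a reactive node spends less bandwidth on upkeep but may have to discover a route on demand, which is also why the analogy to ad-hoc discovery on the web of linked data is suggestive.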

See also: the Metadata Discovery Coordination Group

DMTF's Technical Standards for Managing a Virtual Environment
Linda Musthaler, Network World

The DMTF is the global industry organization that leads the development, adoption and promotion of management standards and interoperable systems management. Without the groundbreaking work of the DMTF, it would be much harder—and way more expensive—to manage a heterogeneous collection of PCs, servers, and storage devices. The DMTF is probably best known for its technology standards, which benefit both developers implementing management solutions and IT administrators who need to simplify the management of their environments. Important DMTF standards include Common Information Model (CIM); Common Diagnostics Model (CDM); Web-Based Enterprise Management (WBEM); Desktop and mobile Architecture for System Hardware (DASH); Alert Standard Format (ASF); Systems Management Architecture for Server Hardware (SMASH); and System Management BIOS (SMBIOS). Now the DMTF has taken on developing standards specifications for managing a virtualized environment. Called System Virtualization Management, or VMAN, this technology provides a standardized approach for IT managers to: deploy virtual computer systems, discover/inventory virtual computer systems, manage the lifecycle of virtual computer systems, create/modify/delete virtual resources, and monitor virtual systems for health and performance. As virtualization sweeps the data center, it is important that the IT industry adopt a vendor-neutral standard for the packaging of virtual machines (VMs) and the metadata required to automatically and securely install and deploy a virtual appliance on any virtualization platform. The DMTF has introduced a packaging standard, Open Virtualization Format (OVF), to address the portability and deployment of virtual appliances. OVF enables simplified and error-free deployment of virtual appliances. Virtual appliance hardware requirements can be automatically validated during installation using OVF metadata.
Virtual appliances can be quickly deployed with pre-built configurations using OVF metadata and can be easily customized during installation. Multiple virtual machines can be packaged as a virtual appliance and deployed easily in a single OVF package. This simplifies deployment of complex multi-tier enterprise applications (with one or more VMs per tier) as well as large-scale deployment of VM clusters.
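Reading metadata out of an OVF descriptor might look like the sketch below; the envelope fragment is heavily trimmed and illustrative, not a complete valid OVF document, and the namespace URI should be checked against the published OVF specification.

```python
# Sketch: pull virtual-system names out of a (heavily trimmed, illustrative)
# OVF descriptor fragment. Real OVF envelopes carry much more metadata:
# disks, networks, virtual hardware sections, and deployment options.
import xml.etree.ElementTree as ET

OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"  # verify against the spec

SAMPLE = f"""<Envelope xmlns="{OVF_NS}">
  <VirtualSystem id="vm1">
    <Name>web-tier</Name>
  </VirtualSystem>
  <VirtualSystem id="vm2">
    <Name>db-tier</Name>
  </VirtualSystem>
</Envelope>"""

def system_names(descriptor: str) -> list:
    """List the Name of each VirtualSystem in an OVF descriptor."""
    root = ET.fromstring(descriptor)
    return [el.text for el in root.iter(f"{{{OVF_NS}}}Name")]
```

Because the descriptor is declarative XML, a deployment tool can validate hardware requirements against the target platform before any VM is instantiated, which is the "automatically validated during installation" step described above.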

See also: DMTF Virtualization Management (VMAN) Initiative


XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation
Microsoft Corporation
Oracle Corporation
Sun Microsystems, Inc.

Hosted By
OASIS - Organization for the Advancement of Structured Information Standards

