Last modified: April 17, 2008
XML Daily Newslink. Thursday, 17 April 2008

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:

Is HTML in a Race to the Bottom? A Large-Scale Survey and Analysis of Conformance to W3C Standards
Patricia Beatty, Scott Dick, James Miller; IEEE Internet Computing

In this article, researchers from the University of Alberta present an analysis of how the various versions of HTML and XHTML are actually used, looking at exactly these problems. The World Wide Web Consortium (W3C) promulgates the HTML standards used on the Web, but it has no authority to enforce the adoption of one standard over another. In this environment, developers have some incentive to ignore up-to-date W3C standards, given that the transitional versions of HTML 4.01 and XHTML 1.0 offer most of the capabilities of the newer ones but are less stringent in their requirements. If most Web sites migrate to these "transitional" standards and remain there, future versions might be mere academic exercises for the W3C.

The "race to the bottom" is a familiar phenomenon that occurs when multiple standards compete for acceptance: the most lenient standard usually attracts the greatest support (acceptance, usage, and so on), leading to a competition among standards to be less stringent. This also tends to drive competing standards toward the minimum possible level of quality. One key prerequisite for a race to the bottom is an unregulated market, because regulators mandate a minimum acceptable quality for standards and sanction those who don't comply. In examining current HTML standards, we've come to suspect that a race to the bottom could, in fact, be occurring because so many competing versions of HTML exist...

David Hammond, an advocate for standards-based Web technologies, examined mainstream browsers' compliance with common Web technologies, recommendations, and standards. With respect to HTML and XHTML, he explored compliance in terms of functionality for every language element within these recommendations. [See the table for] browser support for HTML 4.01, XHTML 1.0, and XHTML 1.1... Our survey findings imply that the current effort to develop XHTML 2.0 might well be just an academic exercise.
The race to the bottom, however, isn't inevitable...
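The transitional-versus-strict distinction above can be made concrete. As a hypothetical sketch (the public identifiers are the real W3C DOCTYPE strings, but the "strictness" labels follow the article's framing, not any official W3C taxonomy), one might classify the variants the survey compares like this:

```python
# Sketch: tagging HTML/XHTML DOCTYPE public identifiers by the stringency
# of their conformance requirements, in the sense the article discusses.
# The labels are illustrative, not a formal W3C classification.

DOCTYPE_STRICTNESS = {
    "-//W3C//DTD HTML 4.01//EN": "strict",
    "-//W3C//DTD HTML 4.01 Transitional//EN": "transitional",
    "-//W3C//DTD XHTML 1.0 Strict//EN": "strict",
    "-//W3C//DTD XHTML 1.0 Transitional//EN": "transitional",
    "-//W3C//DTD XHTML 1.1//EN": "strict",
}

def classify_doctype(public_id: str) -> str:
    """Return 'strict', 'transitional', or 'unknown' for a DOCTYPE public ID."""
    return DOCTYPE_STRICTNESS.get(public_id, "unknown")
```

A large-scale survey like the one described could bucket crawled pages this way to measure how many sites settle on the transitional variants.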

Toward Integration: Demystifying RESTful Data Coupling
Steve Vinoski, IEEE Internet Computing

Developers who favor technologies that promote interface specialization typically raise two specific objections to the uniform-interface constraint designed into the Representational State Transfer (REST) architectural style. One is that different resources should each have specific interfaces and methods that more accurately reflect their precise functionality. The other objection to the concept of a uniform interface is that it merely shifts all coupling issues and other problems to the data exchanged between client and server. Yet, that's based on the invalid assumption that the data exchanged in a REST system is just like the data exchanged in systems such as Web services and Corba, which require interface specialization...

Compared to approaches such as Web services and the Web Services Description Language (WSDL), which promote specialization for each service interface, the uniform-interface constraint reduces client-server coupling and helps minimize gratuitous differences in interface and method semantics across disparate resources. REST isn't a silver bullet, but its flexibility and relative simplicity make it highly applicable not only to Web-scale systems but also to a wide variety of enterprise integration problems...

Is REST's approach to dealing with data coupling some sort of magic? Of course not. Although the REST constraints explored here can definitely help reduce data coupling when compared to interface specialization approaches, there are still issues to watch for. For instance, sending representations typically means sending more data with each call than in RPC-oriented systems. Even though RESTful systems are often simpler and more efficient than their non-REST counterparts, this extra data overhead can sometimes cause efficiency problems. Ultimately, reducing interface and data coupling for your distributed applications isn't easy.
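The uniform-interface idea can be sketched in a few lines. The following is a hypothetical, in-memory illustration (no real HTTP stack; all names are invented for this example): every resource, whatever its type, is manipulated through the same small method set, so a client needs no per-service interface description.

```python
# Minimal sketch of REST's uniform interface: the SAME three operations
# apply to every resource URI, instead of a specialized method per service
# (getOrderStatus, lookupUser, ...) as in RPC-style designs.

class ResourceStore:
    """In-memory stand-in for a RESTful server's resource space."""

    def __init__(self):
        self._resources = {}

    def get(self, uri):
        """Retrieve a representation of the resource, or None."""
        return self._resources.get(uri)

    def put(self, uri, representation):
        """Create or replace the resource's representation."""
        self._resources[uri] = representation

    def delete(self, uri):
        """Remove the resource if it exists."""
        self._resources.pop(uri, None)

store = ResourceStore()
store.put("/orders/1", {"status": "shipped"})
store.put("/users/42", {"name": "Ada"})
```

The coupling Vinoski describes moves into the representations themselves (here, plain dictionaries standing in for media types), which is exactly the trade-off the article examines.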

BPEL Should Not Be the Serialization Standard
Jesper Joergensen, Blog

Bruce Silver has written another chapter (and follow-up) in the developing story of BPMN and BPEL ("More on BPMN-to-BPEL"), taken up here in Joergensen's blog post "You can serialize BPMN into BPEL, but BPEL should not be the serialization standard." BPMN is not always easy to map to BPEL, because BPMN allows you to draw a full graph while BPEL is block oriented, and Bruce's examples illustrate this well. The pro-BPEL folks say that this is not a problem because you can always devise a strategy to "unfold" the graph into a block structure. That's true: you can always find a way to represent a BPMN diagram in BPEL. But the way you choose to do this is likely to be your way. It will not be standardized. Therefore, the same BPMN diagram will end up looking different in BPEL depending on your tool, engine, or custom method.

In theory, when the process executes, this doesn't matter. The "unfolding" should not change the behavior specified by BPMN, so the result of the execution should be the same even if two different strategies are chosen. The problem is that the strategy is proprietary. For example, a BPMN-compliant BAM tool doesn't know how to show the process trace for a process executed in BPEL. A BPMN-compliant simulation tool cannot take the BPEL file and turn it back into the original BPMN file, because it doesn't know the strategy used to go from BPMN to BPEL... It's pretty likely that no two applications will invent the same BPMN meta model, and thus the diagrams will look and/or behave differently even when that wasn't intended. This is the wrong track: it is too error-prone, requires unnecessary duplication of work (many meta models), and it forces too much focus on BPEL when BPEL is really mostly relevant for execution. BPEL is not suitable for business process modeling because it's too restrictive and machine oriented. This has been acknowledged pretty widely in the industry, so I hope it's not a controversial statement.
This is not a criticism of BPEL as an execution format. But it appears much more sensible to standardize a meta model for BPMN and then let vendors build implementation specific mappings to BPEL from there.
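The core of the argument — that two "unfolding" strategies can produce structurally different block trees with identical execution behavior — can be illustrated with a toy example. Everything below is hypothetical (a trivial three-activity process, not real BPMN or BPEL), but it shows why a tool that only sees the block form cannot recover the original diagram:

```python
# Two hypothetical strategies for "unfolding" the same process graph
# into block structure. The resulting trees differ in shape, yet
# executing either one visits the activities in the same order --
# which is exactly why the mapping is lossy and tool-specific.

graph = {"A": ["B"], "B": ["C"], "C": []}  # trivial A -> B -> C process

def unfold_flat(g, start):
    """Strategy 1: a single flat sequence block."""
    seq, node = [], start
    while node:
        seq.append(("activity", node))
        node = g[node][0] if g[node] else None
    return ("sequence", seq)

def unfold_nested(g, start):
    """Strategy 2: nested two-element sequence blocks."""
    nxt = g[start][0] if g[start] else None
    if nxt is None:
        return ("activity", start)
    return ("sequence", [("activity", start), unfold_nested(g, nxt)])

def execute(block, trace):
    """Walk a block tree, recording the order activities run in."""
    kind, body = block
    if kind == "activity":
        trace.append(body)
    else:  # sequence
        for child in body:
            execute(child, trace)
    return trace
```

Both unfoldings execute as A, B, C, but the trees are different objects — so a BAM or simulation tool handed only the block form cannot tell which strategy (or which original diagram) produced it.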

See also: Bruce Silver's 'BPMN-to-BPEL'

Uncertainty Reasoning for the World Wide Web
Kenneth J. Laskey (et al., eds), W3C Incubator Group Report

W3C announced that the Uncertainty Reasoning for the World Wide Web Incubator Group has published its final report. Uncertainty is an intrinsic feature of many of the required tasks, and a full realization of the World Wide Web as a source of processable data and services demands formalisms capable of representing and reasoning under uncertainty. Although it is possible to use semantic markup languages such as OWL to represent qualitative and quantitative information about uncertainty, there is no established foundation for doing so. Therefore, each developer must come up with his or her own set of constructs for representing uncertainty. This is a major liability in an environment so dependent on interoperability among systems and applications.

This report is the major deliverable of the URW3-XG. It describes the work done by the XG, identifies the elements of uncertainty that need to be represented to support reasoning under uncertainty for the World Wide Web, and includes a set of use cases illustrating conditions under which uncertainty reasoning is important. Along with the use cases (Section 5), the report also includes the Uncertainty Ontology (Section 3) developed during the group's discussions, an overview of the applicability to the World Wide Web of numerous uncertainty reasoning techniques and the information that needs to be represented for effective uncertainty reasoning to be possible (Section 4), and a discussion of the benefits of standardizing uncertainty representation for the WWW and the Semantic Web (Section 6). Finally, it includes a reference list of work relevant to the challenge of developing standardized representations for uncertainty and exploiting them in Web-based services and applications.
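To make the interoperability problem concrete: absent a shared foundation, each developer invents their own structure for attaching a degree of uncertainty to a statement. The sketch below is purely hypothetical (the field names are illustrative, not terms from the XG's Uncertainty Ontology), showing one such ad hoc construct:

```python
# One developer's ad hoc way of annotating a statement with uncertainty.
# Another system would likely invent different fields and semantics --
# which is the interoperability liability the report describes.

from dataclasses import dataclass

@dataclass
class UncertainStatement:
    subject: str
    predicate: str
    obj: str
    uncertainty_model: str  # e.g. "probability", "fuzzy", "belief"
    value: float            # degree of uncertainty under that model

stmt = UncertainStatement(
    subject="ex:Flight123",
    predicate="ex:arrivesOnTime",
    obj="true",
    uncertainty_model="probability",
    value=0.85,
)
```

A standardized representation would let a second system interpret `uncertainty_model` and `value` without out-of-band agreement.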

FAQs on ISO/IEC 29500
Staff, International Organization for Standardization (ISO)

Excerpts: "Because the information technology (IT) sector is fast-moving, the joint technical committee ISO/IEC JTC 1, Information technology, introduced the 'fast track' process for the adoption as ISO/IEC standards of documents originating from the IT sector on which substantial development has already taken place...

Why would ISO and IEC allow two standards for the same subject? The ICT industry has a long history of developing multiple standards providing similar functionalities. After a period of co-existence, it is basically the market that decides which survives. A past example within ISO concerned SGML (Standard Generalized Markup Language) and ODA (Office Document Architecture). In this particular case, some claim that the Open Document Format (ODF), which is also an ISO/IEC standard (ISO/IEC 26300), and ISO/IEC 29500 are competing solutions to the same problem, while others claim that ISO/IEC 29500 provides additional functionalities, particularly with regard to legacy documents. The ability to have both as International Standards was something that needed to be decided by the marketplace. ISO and IEC and their national members provided the JTC 1 infrastructure that facilitated such a decision by the market players...

While the voting on ISO/IEC 29500 has attracted exceptional publicity, it needs to be put in context. ISO and IEC have collections of more than 17,000 and 7,000 successful standards respectively, these being revised and added to every month. This suggests that the standards development process is credible, works well, and is delivering the standards needed, and widely implemented, by the market. Because continual improvement is an underlying aim of standardization, ISO and IEC will certainly continue to review and improve their standards development procedures..."

Make SOA Transactional
Rajiv Ramachandran, IBM developerWorks

In the world of enterprise application integration (EAI), it's essential that all participating systems operate under an overarching global transaction so that these systems all return to a consistent state in case of a failure. With the various systems supporting different protocols, the transaction semantics must be propagated across these protocols so they can seamlessly participate in the global transaction. This article walks you through the steps required to make a common integration scenario transactional. It provides a simple example involving an SCA component, an EJB component, a Web service, and a message queue. The proposed solution uses the approach of defining an SCA component (BPEL process) as the orchestrator for this integration. It listens for input on a WebSphere MQ queue and then invokes the EJB component and the Web service interfaces exposed by the two systems to which it needs to pass the received data. However, the key for this solution to implement the stated requirements is to use the distributed transactional capabilities in WebSphere. WebSphere acts as the transaction coordinator and manages the above scenario as a single unit of work distributed across various resource managers, including DB2, Oracle, and WebSphere MQ. This article shows you how SCA transaction qualifiers, Java 2 Platform, Enterprise Edition (J2EE) transaction specifications, the Web Service Atomic Transaction (WS-AT) specification definitions, and WebSphere MQ setup all work together to create a transactional enterprise integration solution...
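The coordination pattern at the heart of this scenario is two-phase commit across multiple resource managers. The following is a hedged sketch of that pattern (invented class and method names; an illustration of the idea, not WebSphere's actual implementation or API): a coordinator first asks every participant to prepare, and commits only if all of them vote yes, otherwise rolling everyone back.

```python
# Sketch of two-phase commit: the mechanism a transaction coordinator
# (WebSphere, in the article's scenario) uses so that DB2, Oracle, and
# WebSphere MQ either all commit or all roll back as one unit of work.

class Participant:
    """Stand-in for a resource manager enlisted in the global transaction."""

    def __init__(self, name, will_prepare=True):
        self.name = name
        self.will_prepare = will_prepare  # simulate a prepare-phase failure
        self.state = "active"

    def prepare(self):
        self.state = "prepared" if self.will_prepare else "failed"
        return self.will_prepare

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "rolled_back"

def run_global_transaction(participants):
    # Phase 1: every resource manager must vote yes.
    if all(p.prepare() for p in participants):
        for p in participants:  # Phase 2: unanimous yes -> commit all
            p.commit()
        return "committed"
    for p in participants:      # any no vote -> roll everyone back
        p.rollback()
    return "rolled_back"
```

In the article's setup, WS-AT plays the role of propagating this protocol across the Web-service boundary, and JTA plays it inside the J2EE container.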

Requirements of Japanese Text Layout
Toshi Kobayashi and Yasuhiro Anan (eds), W3C Technical Report

W3C reports that participants from four W3C Groups (the CSS, Internationalization Core, SVG, and XSL Working Groups), working as the Japanese Layout Task Force, have published "Requirements of Japanese Text Layout." This document describes requirements for general Japanese layout realized with technologies such as CSS, SVG, and XSL-FO. The document is mainly based on JIS X 4051, a standard for Japanese layout; however, it also addresses areas not covered by JIS X 4051. A Japanese version is also available. Writing systems are a central part of cultures, together with languages and scripts: each cultural community has its own language, script, and writing system. In that sense, transferring each writing system into cyberspace is a task of great responsibility for information and communication technology. As one of the basic work items of this task, the document describes issues of text layout in the Japanese writing system. The goal is not to propose solutions but to describe important issues as basic information for actual implementations. The task force is also notable as a bilingual effort: discussion is mainly conducted in Japanese, because of the subject matter, but the minutes and mailing list are in English. During development, the task force has already held one face-to-face meeting with the participating Working Groups. The document itself was developed bilingually and is published bilingually.

See also: Japanese Layout Task Force

Google Data APIs Patent License
Joe Gregorio, Google Blog

Joe Gregorio, Technical Program Manager of the Google Data APIs Team, wrote: "We've always encouraged other developers to adopt Atom, the Atom Publishing Protocol, and the extensions that Google has created on top of those standards, but we realized the issue of patents may have held back some adopters. Well, those concerns end today as we are giving a no-charge, royalty-free license to any patents we have that you would need to implement Atom, AtomPub, or any of those extensions. The exact license text: 'Subject to the terms and conditions of this License, Google hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this License) patent license for patents necessarily infringed by implementation (in whole or in part) of this specification. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the implementation of the specification constitutes direct or contributory patent infringement, then any patent licenses for the specification granted to You under this License shall terminate as of the date such litigation is filed.' Now the official way to announce such a license for specifications under the IETF is to register them in the IETF IPR Disclosure Database; you can read the disclosures yourself in the IETF Disclosure Database. Google's extensions are also covered by this patent license, and you can find a link to them on the bottom of the authentication schemes AuthSub and ClientLogin on our core Google Data extensions and Common Elements, and on [several] Data APIs... We hope this will encourage sites that want to expose APIs for things like photos, videos, calendar, or contacts to reuse our schemas where they can, rather than reinventing the wheel."
In this connection, note Mark Nottingham's musings on a possible defect in some of the blanket non-assert declarations, in his blog post "Moving the Goalposts: 'Use' Patents and Standards": "It's become quite fashionable for large IT shops to give blanket Royalty-Free licenses for implementation of core technologies, such as XML, Web Services and Atom... IT folk see these licenses, nod their heads in relief, and assume that all is well; they can use this technology in their projects without fear of at least a handful of big, bad companies coming to get them. That's not the case. You see, most of these licenses are restricted to the implementation of this technology, not its use. This clears the people who actually write the code that implements the [XML, Web Services, Atom] parsers, processors and tools, but it doesn't help the folks that use those things. In effect, the vendors are pooling together their IP and giving each other free cross-licenses on chosen technologies—calling a truce, if you like—but not including their users..." [Note: Non-assert agreements and covenants have been initiated by Sun Microsystems and others as an alternative (or supplement) to traditional binding IPR agreements crafted by standards organizations; many regard these instruments as superior because they (can) eliminate the need for signatures or other executable documents that require lawyers' involvement. Similar legal instruments have been used by Computer Associates, IBM, Novell, Open Source Development Labs (OSDL), Red Hat, and the OSGi Alliance (with Nokia, IBM, Samsung, Makewave, and ProSyst Software).]

See also: Mark Nottingham's blog

Security for Services and Mashups
Steven Robbins, InfoQ

Security has become a rising concern in most applications and systems today. Whether you are building small mashups, enterprise applications, or a platform for SOA, there are several issues and approaches being discussed. Erica Naone recently talked about dealing with security in the world of mashups, while Bob Rhubart and David Garrison from BEA discussed securing the services you deploy. Naone talked with industry experts from the OpenAjax Alliance, Microsoft Research, information security firm Mandiant, and mashup maker JackBe in assessing the current state and future concerns of mashup security. In general, the browser security model is not up to the task of providing a model for securing mashups... Most of the industry seemed to indicate that mashup security is still not a solved problem and will be a rising concern as Web 2.0 applications become more ubiquitous. Not all applications are mashups, though, and BEA has focused on developers who are creating services that will be deployed in their organizations... David Garrison uses BEA's Security Services framework as an example of how to get it done. Garrison points out that BEA uses a Security Services framework model within their products, and he discusses the fundamentals of designing such a framework using the AquaLogic Enterprise Security (ALES) products. Garrison identifies five major services/providers in a security services framework and describes the responsibilities of each. The five are: Authentication, Role Mapping, Authorization, Credential Mapping, and Audit. Unsurprisingly, the listed items are not really different from a traditional application-focused security framework.
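The five services Garrison lists compose naturally into a pipeline. The sketch below is hypothetical (invented function names, toy in-memory data; not ALES's actual API) and simply shows how a single service invocation would pass through authentication, role mapping, authorization, credential mapping, and audit in turn:

```python
# Toy pipeline chaining the five security services/providers named above.
# All data and names are illustrative.

ROLES = {"alice": ["manager"]}              # role-mapping data
PERMITTED = {("manager", "approve_order")}  # authorization policy
PASSWORDS = {"alice": "s3cret"}             # authentication data
AUDIT_LOG = []                              # audit trail

def authenticate(user, password):
    return PASSWORDS.get(user) == password

def map_roles(user):
    return ROLES.get(user, [])

def authorize(roles, action):
    return any((role, action) in PERMITTED for role in roles)

def map_credentials(user):
    # Swap the caller's identity for a downstream-system credential.
    return f"backend-token-for-{user}"

def audit(user, action, allowed):
    AUDIT_LOG.append((user, action, allowed))

def secure_invoke(user, password, action):
    """Run one request through all five services; return a credential or None."""
    if not authenticate(user, password):
        audit(user, action, False)
        return None
    allowed = authorize(map_roles(user), action)
    audit(user, action, allowed)
    return map_credentials(user) if allowed else None
```

As the summary notes, nothing here is mashup-specific — it is the same layering a traditional application-focused security framework would use.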

See also: Garrison's blog


XML Daily Newslink and Cover Pages are sponsored by:

BEA Systems, Inc.
IBM Corporation
Sun Microsystems, Inc.

Hosted By
OASIS - Organization for the Advancement of Structured Information Standards

Sponsored By

IBM Corporation
ISIS Papyrus
Microsoft Corporation
Oracle Corporation
