The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: July 06, 2009
XML Daily Newslink. Monday, 06 July 2009

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
Oracle Corporation

W3C Increases HTML Working Group Resources to Accelerate HTML 5 Progress
Staff, W3C Announcement

W3C recently announced a decision to increase resourcing for the HTML Working Group, in hopes of accelerating the progress of HTML 5 and clarifying W3C's position regarding the future of HTML. The W3C XHTML2 Working Group, part of the HTML Activity, was chartered through December 31, 2009 "to fulfill the promise of XML for applying XHTML to a wide variety of platforms with proper attention paid to internationalization, accessibility, device-independence, usability and document structuring." W3C announced that this XHTML2 Working Group will not be rechartered in 2010, while various efforts will be undertaken to support the XML serialization of HTML (so as to remain compatible with XML) and to provide for an XML serialization of HTML that supports XML namespaces.

According to the XHTML FAQ document: "After discussion with the XHTML 2 Working Group participants, W3C management has decided to allow the Working Group's charter to expire at the end of 2009 and not to renew it. The HTML Working Group will continue with its current charter. W3C management plans to increase staff resources dedicated to the Group, including dedicating Michael Smith's time entirely to it. We will continue to strive to make sure the group serves the interests of the W3C community... Regarding the XML serialization of HTML, the HTML 5 specification includes a section on XML serialization, as well as a section on text/html serialization. W3C plans to continue work on both serializations in the HTML Working Group. Thus, we expect the next generation XML serialization of HTML to be defined in the HTML 5 specification..."

The W3C HTML Working Group is responsible for advancing the HTML 5 specification along the W3C Recommendation track. "HTML 5: A Vocabulary and Associated APIs for HTML and XHTML" defines the fifth major revision of the core language of the World Wide Web: the Hypertext Markup Language (HTML). "In this version, new features are introduced to help Web application authors, new elements are introduced based on research into prevailing authoring practices, and special attention has been given to defining clear conformance criteria for user agents in an effort to improve interoperability."

See the discussion thread on the XML-DEV list for varying opinions about XHTML and HTML 5.

See also: the W3C HTML Working Group

The HTML 5 Layout Elements Rundown
Kurt Cagle

"HTML 5 is a broad specification with dozens of distinct changes from HTML 4. This article focuses on the HTML 5 layout elements. Subsequent articles in the series will examine forms-related changes (which are substantial), the new media elements, and DOM-related changes...

One of the most significant changes in HTML 5 is that both the HTML and XHTML formats are recognized as legitimate expressions of the specification. This is a major change, one implication being that browsers are required to fully recognize the XHTML version of the syntax—this is currently not the case with Internet Explorer, for instance. It also means that all browsers should recognize 'application/xhtml+xml' or 'application/xml' as legitimate MIME types for encoding HTML documents. HTML 5 also recognizes SVG and MathML as additional valid formats, even within HTML documents. Such instances do not necessarily need to incorporate namespaces in HTML, although they are of course required in XHTML...
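The difference between the two serializations can be seen with any standard XML parser: the XHTML form is well-formed XML in the XHTML namespace. A minimal sketch (the namespace URI is the real XHTML namespace; the document content is invented):

```python
# Parse the XML serialization of an HTML 5 document with a stock XML parser.
# A tag-soup text/html document would raise a ParseError here.
import xml.etree.ElementTree as ET

XHTML_NS = "http://www.w3.org/1999/xhtml"

doc = """<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Demo</title></head>
  <body><p>Served as application/xhtml+xml</p></body>
</html>"""

root = ET.fromstring(doc)
title = root.find(f"{{{XHTML_NS}}}head/{{{XHTML_NS}}}title")
print(root.tag)    # -> {http://www.w3.org/1999/xhtml}html
print(title.text)  # -> Demo
```

Because the namespace is part of every element name, an XHTML consumer must query with the qualified names shown above, which is exactly the discipline the text/html serialization does not impose.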

The new 'article' and 'section' grouping elements indicate that the emerging HTML document bears an increasingly close resemblance to DocBook... While HTML 5 includes significant concessions to the AJAX revolution, one of its more fundamental goals is to make the language an appropriate document language... time will tell whether the changes being introduced now succeed in achieving that goal."

See also: HTML 5 Editor's Draft

Bringing OAuth to the IETF
Trent Adams and Mirjam Kühne, IETF Journal

"At IETF 74 in San Francisco, IETF Journal editor Mirjam Kühne and Trent Adams (Outreach Specialist, Identity Community, at the Internet Society) sat down with OAuth BoF cochairs Hannes Tschofenig and Blaine Cook as well as Eran Hammer-Lahav, who authored the OAuth specification document, to find out more about the decision to bring OAuth into the Internet Engineering Task Force (IETF), about how the specification compares with similar resources, and about next steps in its development and application...

Hammer-Lahav: "SAML was designed mainly for use within business enterprises, and it is perhaps the most complicated of the three because of its use of XML structures. It is also very robust and very powerful. OpenID is fundamentally about single sign-on, and it depends on Web redirections. OpenID is meant exclusively for Web usage and is designed for interactions between human beings. Because of its architecture, OpenID can communicate only what will fit in a URI. It is not capable of handling anything more sophisticated. OAuth is a way to delegate both access and permission. It is very simple, and in many ways, it borrows from the culture of OpenID in terms of equal access, while at the same time learning from its mistakes. It is designed to be a generic access mechanism. So, if you have other authentication mechanisms on the Web, this is just one more option in that stack. However, it does provide more options: it can be used from server to server as well as from client to user. Right now, OAuth is not really a standard; it is primarily a guide to best current practices. The next phase of its development will need to focus on interoperability..."

See also: IETF Journal Issue 5/1 in PDF

A Conversion of Location Related Extensible Markup Language (XML) Elements to Type-Length-Value (TLV) Fields
James Polk, Allan Thomson, Marc Linsner (eds), IETF Internet Draft

An initial version -00 Internet Draft has been published for A Conversion of Location Related Extensible Markup Language (XML) Elements to Type-Length-Value (TLV) Fields. The document "defines a common TLV (Type/Length/Value) payload for communicating a location shape towards another entity. Specifically, the intent is to emulate, in an easy TLV format, XML elements that are used within the Geopriv architecture, which includes OpenGIS's Geography Markup Language (GML). OpenGIS GML is an extensive syntax for expressing all the individual shape elements that make up a point, a circle, a polygon, an arc-band, an ellipsoid, and other shapes. This document describes the payload of the communication; it does not specify any protocol transport."

"The driving reason for this capability is that not every transport protocol can incorporate XML into their payloads easily or at all. Certainly not as easy as text-based protocols such as SIP or HTTP. Though, this document does not prohibit these text-based protocols from carrying this TLV payload, the payload is generalized for binary protocols to easily transport this location information parts from one entity to another. No assumption is made about how the sending entity attained this location information. This document is describing the TLV payload, most of which are present in GML. Because of this, these payload types will map back to XML in a Presence Information Data Format - Location Object (PIDF-LO), defined in RFC 4119 for an entity to transport a Location Object when the identity of the location target is included—even if as an RFC 3693 defined unlinkable pseudonym..."

See also: the IETF Geographic Location/Privacy (GEOPRIV) Working Group Status Pages

CAP: Advanced EAS Gains Momentum
Randy J. Stine, Radio World Online

"The process of drawing a roadmap to develop an enhanced public warning infrastructure is gaining momentum thanks to a renewed collaboration among stakeholders in the new Emergency Alert System. For the first time, FEMA officials have acknowledged they will not implement any new system that will require broadcasters to purchase new equipment until enough updated equipment is available from suppliers to meet demand. The [U.S.] Federal Emergency Management Agency is implementing several projects simultaneously to modernize and integrate EAS and National Alert and Warning System (NAWAS) programs into a national-level all-hazards warning system. Collectively these systems will constitute the Integrated Public Alert and Warning System, or IPAWS. Meanwhile, FEMA's adoption of the Common Alerting Protocol standard—a text-based, detail-rich system that local and state emergency managers will use to generate public warning messages—appears to be getting closer. CAP permits a warning message to be disseminated simultaneously over many different warning systems and ultimately allows the president of the United States to address the nation during a national emergency. FEMA has yet to settle on what the final CAP architecture will look like, but it's nearing a process to migrate EAS to become CAP-capable..."

According to Wade Witmer, Acting Director of the IPAWS program management office at FEMA: "FEMA will not start the 180-day clock with formal adoption until it is convinced broadcasters will be able to comply and purchase the new equipment required... FEMA's roadmap to a new EAS will include multiple pathways to reach more people using the latest technology..."
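Because CAP is plain XML, generating a warning message takes little code. A sketch using only the standard library (the element names follow CAP 1.1; all values are invented):

```python
# Build a minimal Common Alerting Protocol <alert> with the required
# top-level elements: identifier, sender, sent, status, msgType, scope.
import xml.etree.ElementTree as ET

CAP_NS = "urn:oasis:names:tc:emergency:cap:1.1"
ET.register_namespace("", CAP_NS)  # serialize with a default namespace

alert = ET.Element(f"{{{CAP_NS}}}alert")
for name, text in [("identifier", "DEMO-001"),
                   ("sender", "alerts@example.gov"),
                   ("sent", "2009-07-06T12:00:00-05:00"),
                   ("status", "Exercise"),
                   ("msgType", "Alert"),
                   ("scope", "Public")]:
    ET.SubElement(alert, f"{{{CAP_NS}}}{name}").text = text

xml_bytes = ET.tostring(alert, encoding="utf-8")
print(xml_bytes.decode())
```

The same message body can then be handed to any number of dissemination systems, which is the "write once, broadcast everywhere" property the article describes.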

See also: CAP references

Putting Government Data Online
Tim Berners-Lee, Blog

This blog article declares that 2009 "is the year for putting government data online. Both US and UK governments made public commitments toward open data... Groups from the Guardian to the Sunlight Foundation have already been pushing for it for a long time. Individuals had been pushing by publishing government data themselves in various formats, including Linked Data... This article addresses what you should do if you want to put government data online..."

Abstract: "Government data is being put online to increase accountability, contribute valuable information about the world, and to enable government, the country, and the world to function more efficiently. All of these purposes are served by putting the information on the Web as Linked Data. Start with the 'low-hanging fruit'. Whatever else, the raw data should be made available as soon as possible. Preferably, it should be put up as Linked Data. As a third priority, it should be linked to other sources. As a lower priority, nice user interfaces should be made to it—if interested communities outside government have not already done it. The Linked Data technology, unlike any other technology, allows any data communication to be composed of many mixed vocabularies. Each vocabulary is from a community, be it international, national, state or local; or specific to an industry sector. This optimizes the usual trade-off between the expense and difficulty of getting wide agreement, and the practicality of working in a smaller community. Effort toward interoperability can be spent where most needed, making the evolution with time smoother and more productive."

See also: the TED talk on Linked Data

Replacing BNF with RELAX NG in Standards?
Rick Jelliffe, O'Reilly Technical

Extended BNF (ISO/IEC 14977) is the official ISO standard for describing grammars with BNF, and the standard is available free from the ISO public site... "The trouble with ABNF and EBNF is that there are not the kind of ubiquitous, free tools around to support them that XML has. When you cannot test a grammar, there is every chance that you will make a mistake... And even if the standards-makers get it right, the users have to check by eye, which is not the most reliable method. Bringing the notation into the XML eco-system has obvious advantages for low-hanging fruit... So this brings up another approach for standards-makers rather than using ABNF. This is to specify the grammar for your notation using a familiar XML schema language (such as RELAX NG Compact Syntax) and then give parsing rules for converting from the non-XML form to the XML form. Indeed, this is how RELAX NG Compact Syntax itself is specified...

It seems natural for humans to partition off common changes of domain into changes of notation: C-style syntax won over LISP-style syntax. HTML has CSS and JavaScript and dates and RGB etc. SGML gave support for this, but in a way that was not layered enough to be sustainable over time—non-layerable technologies rarely last, falling apart under their own farinaceous weight... Many languages are two-layer: there is a lexical analyser to produce tokens, then a grammar language that works on those tokens. Rather than using ABNF, I think there is a current sweet spot for using XML and XML schema languages (such as the ISO DSDL suite of RELAX NG, CRDL, and Schematron, or even XSD) for specifying the underlying grammar..."
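The approach Jelliffe describes (parsing rules that convert a non-XML notation into XML, then a schema check on the result) can be illustrated in miniature. The notation, the mapping, and the hand-coded stand-in for a RELAX NG check below are all invented for illustration:

```python
# Convert a tiny CSS-like notation into XML, then check the XML against a
# small hand-coded structural table standing in for a real schema language.
import re
import xml.etree.ElementTree as ET

def to_xml(text):
    # "color: #ff0000; width: 10px" -> <style><prop name="color">...</prop>...
    root = ET.Element("style")
    for m in re.finditer(r"(\w[\w-]*)\s*:\s*([^;]+)", text):
        prop = ET.SubElement(root, "prop", name=m.group(1))
        prop.text = m.group(2).strip()
    return root

# Allowed element names and their required attributes (schema stand-in)
ALLOWED = {"style": set(), "prop": {"name"}}

def valid(elem):
    if elem.tag not in ALLOWED or not ALLOWED[elem.tag] <= set(elem.attrib):
        return False
    return all(valid(child) for child in elem)

tree = to_xml("color: #ff0000; width: 10px")
print(ET.tostring(tree).decode())
print(valid(tree))  # -> True
```

Once the notation is in XML form, any off-the-shelf schema validator can test the grammar mechanically instead of readers checking BNF productions by eye.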

See also: Freely Available ISO/IEC Standards


XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation
Microsoft Corporation
Oracle Corporation
Sun Microsystems, Inc.
