The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: June 20, 2008
XML Daily Newslink. Friday, 20 June 2008

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
Sun Microsystems, Inc.

W3C Call for Implementations: RDFa in XHTML: Syntax and Processing
Ben Adida, Mark Birbeck, et al. (eds), W3C Technical Report

W3C has announced the advance of the following specification to Candidate Recommendation status: "RDFa in XHTML: Syntax and Processing. A Collection of Attributes and Processing Rules for Extending XHTML to Support RDF." This Candidate Recommendation and call for implementations was produced jointly by the W3C Semantic Web Deployment Working Group and the XHTML 2 Working Group. The working groups also released revised versions of "RDFa Primer: Bridging the Human and Data Webs" and the "RDFa Implementation Report." The specification is considered stable by the working groups, whose members intend to submit it for consideration as a W3C Proposed Recommendation after 19-July-2008, having met the following criteria: (1) at least two implementations have been demonstrated that pass all tests in the test suite; (2) all issues raised during the CR period against this document have received formal responses. Specification summary: Today's web is built predominantly for human consumption. Even as machine-readable data begins to appear on the web, it is typically distributed in a separate file, with a separate format, and no correspondence between the human and machine versions. As a result, web browsers can provide only minimal assistance to humans in parsing and processing web data: browsers only see presentation information. We introduce RDFa, which provides a set of HTML attributes to augment visual data with machine-readable hints. We show how to express simple and more complex datasets using RDFa, and in particular how to turn the existing human-visible text and links into machine-readable data without repeating content... RDFa is a specification for attributes to be used with languages such as HTML and XHTML to express structured data. The rendered, hypertext data of XHTML is reused by the RDFa markup, so that publishers don't need to repeat significant data in the document content. This document only specifies the use of the RDFa attributes with XHTML.
The underlying abstract representation is RDF, which lets publishers build their own vocabulary, extend others, and evolve their vocabulary with maximal interoperability over time. The expressed structure is closely tied to the data, so that rendered data can be copied and pasted along with its relevant structure. The rules for interpreting the data are generic, so that there is no need for different rules for different formats; this allows authors and publishers of data to define their own formats without having to update software, register formats via a central authority, or worry that two formats may interfere with each other. RDFa shares some use cases with microformats. Whereas microformats specify both a syntax for embedding structured data into HTML documents and a vocabulary of specific terms for each microformat, RDFa specifies only a syntax and relies on independent specification of terms (often called vocabularies or taxonomies) by others. RDFa allows terms from multiple independently-developed vocabularies to be freely intermixed and is designed such that the language can be parsed without knowledge of the specific term vocabulary being used.
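As a sketch of the syntax described above, the following XHTML fragment marks up rendered text so that it doubles as RDF data. It uses the Dublin Core vocabulary; the URI, title, and names are illustrative, not taken from the specification:

```xml
<div xmlns:dc="http://purl.org/dc/elements/1.1/"
     about="http://example.org/report">
  <!-- The visible heading is also the dc:title of the resource -->
  <h1 property="dc:title">Quarterly Report</h1>
  <p>By <span property="dc:creator">Jane Doe</span>,
     <span property="dc:date" content="2008-06-20">June 20, 2008</span>.</p>
</div>
```

An RDFa processor would extract triples such as "the resource http://example.org/report has dc:title 'Quarterly Report'" without the publisher having to repeat the data outside the rendered content.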

See also: the RDFa Implementation Report

Conference Information Data Model for Centralized Conferencing (XCON)
O. Novo, G. Camarillo, D. Morgan, R. Even (eds), IETF Internet Draft

Members of the IETF Centralized Conferencing (XCON) Working Group have published a revised Internet Draft for the "Conference Information Data Model for Centralized Conferencing (XCON)." This 75-page specification defines an Extensible Markup Language (XML)-based conference information data model for centralized conferencing (XCON). A conference information data model is designed to convey information about the conference and about participation in the conference. The conference information data model defined in this document constitutes an extension of the data format specified in the Session Initiation Protocol (SIP) Event Package for Conference State. Appendix A supplies the non-normative RELAX NG schema in XML syntax. Overview: There is a core data set of conference information that is utilized in any conference, independent of the specific conference media. This core data set, called the 'conference information data model', is defined in this document using XML. The conference information data model defined in this document is logically represented by the conference object. Conference objects are a fundamental concept in Centralized Conferencing, as described in the Centralized Conferencing Framework (RFC 5239). A conference object contains data that represents a conference during each of its various stages (e.g., created/creation, reserved/reservation, active/activation, completed/completion). A conference object can be manipulated using a conference control protocol at a conference server. The conference object represents a particular instantiation of a conference information data model. Consequently, conference objects follow the XML format defined in this document. A conference object contains the core information of a conference (i.e., capabilities, membership, call control signaling, media, etc.) and specifies who can manipulate that information, and in which way...
The data model specified in this document is the result of extending the data format defined in IETF RFC 4575 with new elements. Examples of such extensions include scheduling elements, media control elements, floor control elements, non-SIP URIs, and addition of localization extensions to text elements. This data model can be used by conference servers providing different types of basic conferences. It is expected that this data model can be further extended with new elements in the future in order to implement additional advanced features.
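For illustration, a minimal conference document in the RFC 4575 base format that this data model extends might look like the following. The element names come from RFC 4575; the conference URI and display text are hypothetical:

```xml
<conference-info xmlns="urn:ietf:params:xml:ns:conference-info"
                 entity="sip:conf42@example.com" state="full" version="1">
  <conference-description>
    <display-text>Weekly project sync</display-text>
  </conference-description>
  <users>
    <!-- One participant in the conference roster -->
    <user entity="sip:alice@example.com" state="full">
      <display-text>Alice</display-text>
    </user>
  </users>
</conference-info>
```

The XCON data model layers its extensions (scheduling, media control, floor control, and so on) on top of this base structure.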

See also: the IETF Centralized Conferencing (XCON) Working Group Status Pages

Emergency Data Exchange Language Resource Messaging (EDXL-RM) 1.0
Patti Aymond, Rex Brooks, Tim Grapes (et al., eds), OASIS PR Draft

OASIS announced that the Emergency Management Technical Committee has released an approved Committee Draft for public review: "Emergency Data Exchange Language Resource Messaging (EDXL-RM) 1.0," Public Review Draft 03. The public review period ends July 05, 2008. As detailed in the EDXL-DE Specification, the goal of the EDXL project is to facilitate emergency information sharing and data exchange across the local, state, tribal, national and non-governmental organizations of different professions that provide emergency response and management services. EDXL will accomplish this goal by focusing on the standardization of specific messages (messaging interfaces) to facilitate emergency communication and coordination, particularly when more than one profession or governmental jurisdiction is involved. The primary purpose of the Emergency Data Exchange Language Resource Messaging (EDXL-RM) Specification is to provide a set of standard formats for XML emergency response messages. These Resource Messages are specifically designed as payloads of Emergency Data Exchange Language Distribution Element (EDXL-DE) routed messages. Together, EDXL-DE and EDXL-RM are intended to expedite all activities associated with resources needed to respond and adapt to emergency incidents. The Distribution Element may be thought of as a "container". It provides the information to route "payload" message sets (such as Alerts or Resource Messages), by including key routing information such as distribution type, geography, incident, and sender/recipient IDs. The Resource Message is constrained to the set of Resource Message Types contained in this specification. The Resource Message is intended to be the payload or one of the payloads of the Distribution Element which contains it.
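As an illustrative sketch of the container/payload split, a Distribution Element wrapping a Resource Message might look like this. The element names follow the EDXL-DE 1.0 schema; the identifiers and addresses are hypothetical:

```xml
<EDXLDistribution xmlns="urn:oasis:names:tc:emergency:EDXL:DE:1.0">
  <!-- Routing information carried by the "container" -->
  <distributionID>DE-2008-0620-001</distributionID>
  <senderID>dispatch@county.example.gov</senderID>
  <dateTimeSent>2008-06-20T14:30:00-05:00</dateTimeSent>
  <distributionStatus>Actual</distributionStatus>
  <distributionType>Request</distributionType>
  <contentObject>
    <xmlContent>
      <embeddedXMLContent>
        <!-- The EDXL-RM payload, e.g. a RequestResource message, goes here -->
      </embeddedXMLContent>
    </xmlContent>
  </contentObject>
</EDXLDistribution>
```

Routers inspect only the outer Distribution Element; the Resource Message payload is carried through unchanged to the recipients.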

See also: OASIS Emergency Management TC specifications

Make Use of WS-I Resources to Test for Web Service Interoperability
Klaus Berg, Java World Blog

In theory, Web services are especially designed to offer "reusable" features that are discovered and bound at runtime using technical "loose coupling." The Open Solutions Alliance (OSA) has been formed expressly to help speed the creation and adoption of integrated, interoperable business applications based on open source. The OSA recommends that each vendor or project lead should carefully think about which functions in their applications need to be triggered by other applications, and ensure that they are exposed in a loosely coupled way so customers and integrators can take care of implementing the process. These functions should be exposed as a service and should be implementation-language neutral, so that, for example, a PHP application can invoke a feature in a Java application. Bearing in mind that Web services are used by consumers unknown at design time, and looking at the standards-based "publish-discover-invoke" paradigm, it becomes evident that Web services are fundamentally about "interoperability". In reality, however, the "standard" protocols are not standard enough to ensure automatic interoperability... WS-I is an open industry organization "chartered to establish Best Practices for Web services interoperability, for selected groups of Web services standards, across platforms, operating systems and programming languages". The organization is a consortium of Web services companies that provides guidance, recommended practices, and supporting resources for developing interoperable Web services in the SOA world... WS-I testing tools are used to determine whether the messages exchanged with a Web service conform to WS-I guidelines. These tools monitor the messages and analyze the resulting log to identify any known issues, thus improving interoperability between applications and across platforms. Along with the tools, WS-I offers implementation and testing guidance with respect to interoperability, sample programs, and, of course, Web service profiles.
WS-I Profiles address interoperability issues by prescribing a set of specifications or standards at specific version levels, and by adding guidelines and conventions for using these specifications together. The most fundamental profile is the Basic Profile (BP), which addresses the integration of the following specifications and standards: SOAP 1.1, WSDL 1.1, UDDI 2.0, XML Schema, XML 1.0 (Second Edition), HTTP 1.1, and TLS 1.0 or SSL 3.0 (HTTPS). More than 200 interoperability issues have been resolved by adoption of this Basic Profile. WS-I also offers other profiles; some are already finalized, while others are still in progress...

See also: the Web Services Interoperability Organization (WS-I)

WS-Transfer, WS-Enumeration, WS-MetadataExchange, WS-ResourceTransfer
W3C Members, Public Posting

A memo to W3C from W3C AC Representatives Steve Holbrook (IBM), Jeff Mischkinsky (Oracle), Kazunori Iwasa (Fujitsu), and Paul Lipton (CA) recommends the creation of a new Working Group (suggested name: "Web Services Resource Access Working Group") to standardize four Web services specifications: WS-Transfer, WS-Enumeration, WS-MetadataExchange, and WS-ResourceTransfer. The proposers express the belief that "it is time for the next step in the open standardization of [these] key specifications that address this issue. We believe that [the] four specifications, in particular, work together to provide mechanisms for accessing and manipulating the XML representation of a resource as well as any metadata associated with that resource." According to the text of the proposed Working Group Charter, the anticipated submission specifications "define mechanisms for accessing and updating the XML representation and metadata of Web Service resources." WS-Transfer defines base CRUD (Create, Read, Update, Delete) type of operations against Web Service resources. Specifically, it defines two types of entities: 'Resources', which are entities addressable by an endpoint reference that provide an XML representation, and 'Resource factories', which are Web services that can create a new resource from an XML representation. WS-ResourceTransfer enhances these operations, through the extensibility points of WS-Transfer, with the addition of fragment and batched access. WS-Enumeration provides a protocol that allows a resource to provide a session abstraction, called an enumeration context, to a consumer that represents a logical cursor through a sequence of data items. In its simplest form, WS-Enumeration defines a single operation, Pull, which allows a data source, in the context of a specific enumeration, to produce a sequence of XML elements in the body of a SOAP message. Each subsequent Pull operation returns the next N elements in the aggregate sequence.
WS-MetadataExchange defines a mechanism by which metadata about a Web Service resource can be retrieved. When used in conjunction with WS-Transfer, WS-ResourceTransfer and WS-Enumeration, this metadata can be managed just like any other Web Service resource.
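As a sketch of the Pull operation described above, a consumer's request for the next ten items in an open enumeration might look like the following SOAP message. The namespace is the one used in the WS-Enumeration member submission; the enumeration context value is hypothetical:

```xml
<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
            xmlns:wsen="http://schemas.xmlsoap.org/ws/2004/09/enumeration">
  <s:Body>
    <wsen:Pull>
      <!-- Opaque cursor handed back by the earlier Enumerate response -->
      <wsen:EnumerationContext>uuid:0af3-example-context</wsen:EnumerationContext>
      <wsen:MaxElements>10</wsen:MaxElements>
    </wsen:Pull>
  </s:Body>
</s:Envelope>
```

The data source replies with up to ten XML elements plus, if more remain, an updated enumeration context for the next Pull.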

See also: WS-MetadataExchange

Preservation DataStores: Storage Paradigm for Preservation Environments
S. Rabinovici-Cohen, M. Factor (et al., eds), IBM Systems Journal

Today we are facing a paradox. We can read and interpret the Dead Sea scrolls created two millennia ago, but most of us no longer have the means to read or interpret data we may have generated ourselves two decades ago on a 5.25-inch floppy disk. Nevertheless, long-term preservation of digital data is being required by new regulations, such as the Health Insurance Portability and Accountability Act (HIPAA), Sarbanes-Oxley, Occupational Safety and Health Administration (OSHA) regulations, and other federal securities laws and regulations. These rules can require keeping medical data for the life of a patient or financial data for the life of an account. In addition to regulatory requirements, there are business, cultural, and personal needs to preserve digital information for periods longer than the lifetime of the technology that was used to create the data. For example, earth observation data from the European Space Agency and cultural heritage data from UNESCO (United Nations Educational, Scientific and Cultural Organization) must be kept for decades and centuries. Finally, the amount of long-lived data is expected to grow even more with the vast amounts of digital data being generated by emerging digital devices... In this article, the authors describe Preservation DataStores, an innovative storage architecture that facilitates robust and optimized preservation environments. It is a layered architecture that builds upon open standards, including Open Archival Information System (OAIS), XAM (Extensible Access Method), and Object-based Storage Device. They also describe the integration of Preservation DataStores with existing file systems and archives and discuss some design and implementation issues.
They are developing Preservation DataStores as an infrastructure component of the European Union CASPAR (Cultural, Artistic and Scientific knowledge for Preservation, Access and Retrieval) project, where it will be used to preserve scientific, cultural, and artistic data. PDS is an innovative storage architecture for OAIS-based preservation-aware storage. The authors describe some of the design and implementation issues encountered while developing the PDS prototype. The preservation layer, the compound object layer, and the stored-object layer are based on the OAIS, XAM, and OSD open standards, respectively. Each layer provides object abstraction using AIP, XSet, and OSD objects, linked by generic mappings.

See also: the U.S. LOC PREMIS preservation project

Project Concordia Shows Important Step in Federation Interoperability
Felix Gaehtgens, Blog

At the recent RSA conference in San Francisco in the second week of April [2008], several vendors demonstrated new interoperability between previously incompatible federation protocols. Through Project Concordia, a new project co-sponsored by the Liberty Alliance and several other vendors, profiles were demonstrated that showed seamless integration of SAML, WS-Federation, and CardSpace. This demonstration is significant, because it shows that vendors, especially Microsoft, are bowing to increased pressure from customers to focus on interoperability. It also highlights the challenges that are still ahead and yet to be solved... At the interop, FuGen Solutions, Internet2, Microsoft, Oracle, Ping Identity, Sun Microsystems and Symlabs showed several use cases that combined these technologies. At the forefront of the demonstration was showing that integration of federation scenarios using a mixture of SAML2 and WS-Federation protocols is now possible. Those companies that managed to implement support for both of these protocols in their products showed how a server running the vendors' federation software could transparently (for the user) bridge between systems using the SAML2 protocol and the WS-Federation protocol. For example, a user that had previously federated successfully using SAML2 technology could now seamlessly access a Resource Partner (federation client) such as Microsoft SharePoint. The vendors' federation server acts simultaneously as a SAML2 Identity Provider (IdP) and a WS-Federation Account Partner (AP), and translates authentication tokens from one protocol to the other. Another interesting demonstration was the use of SAML2 tokens within the WS-Federation protocol.
Even though this feature has always been foreseen in the specification, Microsoft and IBM, the main drivers behind the WS-* specifications, including WS-Federation, had never implemented support for SAML2 tokens within their implementations, instead opting to support only SAML1 security tokens embedded within WS-Federation protocol messages. A month ago, Joe Long from Microsoft made a groundbreaking announcement at Netpro's Directory Experts Conference in Chicago. He mentioned that it was already possible to include SAML2 tokens with ADFS, Microsoft's Active Directory Federation Services, and that Microsoft was currently re-evaluating whether to support SAML2 as a native protocol. Previously, Microsoft had steadily refused to support SAML2, pointing out that WS-Federation was the intended standard for federating within the Microsoft ecosystem. [Other recent Concordia news: Concordia Project Sponsors Entitlements Management Workshop.]

See also: the Project Concordia web site

DITA Open Platform Version 1.0.0
Claude Vedovini, Blog

Members of the DITA Open Platform Project announced the first milestone of the DITA Open Platform version 1.0.0. This milestone is a test release intended to gauge interest in the DITA community for what the DITA Open Platform project plans to offer. It is also a means to collect suggestions and ideas from the community. The goal of this project is to provide the DITA community with a free and easy-to-deploy DITA-oriented production platform. It is targeted at small companies or teams that do not need a complete CMS solution. The key deliverable of this milestone is the DITA-OP Editor, an Eclipse-based set of plugins featuring: (1) The complete DITA architecture and language specification available through the Eclipse help system; (2) A DITA project nature which enables DITA file validation (pure XML validation and hyperlink reference validation) and problem markers; (3) Wizards and templates to create new DITA files—topics, concepts, references, tasks, maps, bookmaps and processing profiles; (4) A processing profile (ditaval) form editor; (5) A topic editor which leverages the power of the Eclipse XML editor (content assist, templates, as-you-type validation, formatting) with a dedicated preview page; (6) A launch configuration dedicated to the DITA Open Toolkit which enables setting up the toolkit scripts, saving your configuration for later reuse or sharing, and even adding automatic builds to your DITA project. The next step will be to provide a server packaging enabling, for example, configuration management of the DITA files, management of the authoring process, and management of the publication process.
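For readers new to DITA, a minimal concept topic of the kind the editor's wizards generate might look like the following. The id, title, and body text are illustrative:

```xml
<!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
<concept id="editor-overview">
  <title>About the DITA-OP Editor</title>
  <conbody>
    <!-- Concept topics explain background information -->
    <p>The editor is delivered as a set of Eclipse plugins.</p>
  </conbody>
</concept>
```

Topics like this are then assembled into deliverables via DITA maps, and filtered at build time by ditaval processing profiles.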

See also: DITA resources

NYS Open Records Discussion Must Recognize Technical Requirements
Jon Bosak, Public Policy Technical Contribution

"The normally somnolent world of computer data format standards has been roiled over the last year by a clash between the biggest names in the computer business, a struggle that has spread to include not just industry giants but also national governments and large sectors of the programming community as well. Everyone agrees that it's time to move beyond the dominant, proprietary Microsoft Office formats (the familiar .doc, .xls, and .ppt files) and into the new world of open, accessible XML-based formats. The question is, which one? On one side: ODF (Open Document Format), backed by a group of companies that includes IBM, Google, and Sun Microsystems (my employer), plus most of the 'open-source' software community. ODF was approved as an International Standard (ISO 26300) in 2006. On the other side: OOXML (Office Open XML), backed by Microsoft, its industry partners, and a vast army of Microsoft developers. OOXML was rushed into standardization in order to preserve Microsoft's historic domination of office productivity formats. It has been tentatively approved as ISO 29500 pending the resolution of appeals lodged by several national standards organizations. The struggle to establish one or the other of these competing standards as the single format for office productivity software (a generic term that means Microsoft Office and its competitors, most notably the free open-source OpenOffice suite) has gone beyond the technical questions to raise larger issues ranging from European antitrust policy to the validity of the standards process to the ability of governments to provide universal data access for their citizens. In the U.S., no fewer than seven states have introduced legislation seeking to define public policy in this area. The most notable activity has taken place in Massachusetts, which mandated 'open standards' and found itself in the end supporting both ODF and OOXML as overlapping formats... 
The state of New York has not been lagging in its own efforts to resolve the issue... Which editable format to adopt for document creation remains an open question. I believe that there are strong reasons for standardizing on ODF as the document creation format across all [New York] state agencies, but this is an issue separate from which format to use for the electronic publication of state documents that are not intended to be filled out and sent back. For the publication of the ordinary run of state documents there is only one sensible choice—PDF/A. As a New York State resident, I call upon the Legislature to recognize this technical reality before mandating a broken policy that we will have to live with long into the future."

Some AIR in Adobe's Web Services?
Erin Joyce

Adobe Systems has updated its software for Web services, LiveCycle Enterprise Suite (ES), with the integration of its Flex platform and AIR runtime environments. The additions are designed to juice up Web applications and improve end users' experience with Web applications, such as filling out accident forms online. Speed is also a factor in the upgrades. Brian Wick, director of Adobe's LiveCycle product marketing, said the LiveCycle ES Update 1 adds components designed to help developers build content-rich applications at a rapid clip... The upgrade to LiveCycle ES comes about a year after Adobe integrated its Flex development environment, PDF technologies, its Flash Player and Adobe Reader with the tools in LiveCycle Enterprise Suite. Now, the addition of AIR to the suite helps developers build more Web applications that function much like the more sophisticated applications that often reside only on desktops. AIR is shorthand for Adobe Integrated Runtime, which is the company's foundation for building rich Internet applications (RIA). Like the addition of Flex, the developer framework that exists in Adobe's Dreamweaver authoring software for Web application development, the AIR platform helps developers reuse code that was used for a Flash-based animation and deploy it in a Web application. The AIR runtime enables developers to use HTML, Ajax, Flash, and Flex to add more whiz-bang to rich Internet applications that work across operating systems. As for whether the upgrade is a competitive response to Microsoft's Silverlight platform, the well-received cross-browser technology that competes with Adobe's Flash platform... "we'll have to see how that plays out."

See also: Adobe AIR


XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation
Oracle Corporation
Sun Microsystems, Inc.
