The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: August 16, 2010
XML Daily Newslink. Monday, 16 August 2010

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation

New IETF Internet Draft: A SASL and GSS-API Mechanism for OpenID
Eliot Lear, Hannes Tschofenig, Henry Mauldin, Simon Josefsson; IETF I-D

Members of the recently rechartered IETF Common Authentication Technology Next Generation (KITTEN) Working Group have published an initial level -00 Internet Draft, "A SASL and GSS-API Mechanism for OpenID." Abstract: "OpenID has found its usage on the Internet for Web Single Sign-On. Simple Authentication and Security Layer (SASL) and the Generic Security Service Application Program Interface (GSS-API) are application frameworks to generalize authentication. This memo specifies a SASL and GSS-API mechanism for OpenID that allows the integration of existing OpenID Identity Providers with applications using SASL and GSS-API."

From the Introduction: "OpenID is a three-party protocol that provides a means for a user to offer identity assertions and other attributes to a web server (Relying Party) via the help of an identity provider. The purpose of this system is to provide a way to verify that an end user controls an identifier. Simple Authentication and Security Layer (SASL), as defined in RFC 4422, is used by application protocols such as IMAP, POP, and XMPP, with the goal of modularizing authentication and security layers so that newer mechanisms can be added as needed. This memo specifies just such a mechanism.

The Generic Security Service Application Program Interface (GSS-API) provides a framework for applications to support multiple authentication mechanisms through a unified interface. This document defines a pure SASL mechanism for OpenID, but it conforms to the new bridge between SASL and the GSS-API called GS2. This means that this document defines both a SASL mechanism and a GSS-API mechanism. We want to point out that the GSS-API interface is optional for SASL implementers, and the GSS-API considerations can be avoided in environments that use SASL directly without GSS-API. As currently envisioned, this mechanism allows interworking between SASL and OpenID in order to assert identity and other attributes to relying parties. As such, while servers (as relying parties) will advertise SASL mechanisms, clients will select the OpenID mechanism.

The OpenID mechanism described in this memo aims to re-use the available OpenID specification to the maximum extent and therefore does not establish a separate authentication, integrity, and confidentiality mechanism. It is anticipated that existing security layers, such as Transport Layer Security (TLS), will continue to be used... This document requires enhancements to the Relying Party and to the Client (as the two SASL communication end points), but no changes to the OpenID Provider (OP) are necessary. To accomplish this goal, the indirect messaging required by the OpenID specification is tunneled within SASL..."
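The negotiation step described above (servers advertise SASL mechanisms; clients select the OpenID one) can be sketched in a few lines. The mechanism name "OPENID", the mechanism lists, and the helper function here are illustrative assumptions, not taken from the draft:

```python
# Hypothetical sketch of SASL mechanism selection: the relying party
# advertises its supported mechanisms, and the client picks the first
# mechanism on its own preference list that the server offers.

def select_mechanism(advertised, preferred):
    """Return the first client-preferred mechanism the server supports."""
    for mech in preferred:
        if mech in advertised:
            return mech
    return None

# Server-side (relying party) advertisement -- names are illustrative.
server_mechs = ["PLAIN", "SCRAM-SHA-1", "OPENID"]

# A client that prefers the OpenID mechanism when available.
client_prefs = ["OPENID", "SCRAM-SHA-1", "PLAIN"]

chosen = select_mechanism(server_mechs, client_prefs)
print(chosen)  # OPENID

# Per the draft, the client's initial response then carries the user's
# OpenID identifier; the OpenID indirect messages are tunneled inside
# subsequent SASL challenges/responses (not modeled here).
```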

See also: the OpenID specifications

OASIS ebXML Messaging Services Version 3.0: Part 2, Advanced Features
Jacques Durand, Sander Fieten, Pim van der Eijk (eds), OASIS PRD

Members of the OASIS ebXML Messaging Services Technical Committee have approved a Committee Draft for public review through October 12, 2010: OASIS ebXML Messaging Services Version 3.0: Part 2, Advanced Features. This specification complements the ebMS 3.0 Core Specification by specifying advanced messaging functionality for message service configuration, message bundling, messaging across intermediaries (multi-hop) and transfer of (compressed) messages as series of smaller message fragments.

The OASIS ebXML Messaging Services (ebMS) Version 3.0 core specification defines an advanced Web Services-based message protocol that leverages standards for SOAP-based security and reliability. It supports and extends the core functionality of version 2.0 of ebMS. The core specification is focused on point-to-point exchange of messages between two ebMS message service handlers. It does not explicitly consider multi-hop messaging. Messaging across intermediaries is a common requirement in many e-business and e-government communities and is functionality provided by many messaging protocols, including version 2.0 of ebXML Messaging.

This chapter defines a multi-hop profile of ebMS 3.0 that extends the functionality of the version 3.0 ebMS core specification to multi-hop messaging across ebMS intermediaries. The main function of intermediaries as defined in this specification is to provide message routing and forwarding based on standardized SOAP message headers, allowing a sending MSH to ignore the ultimate message destination and abstract away from lower-level transport parameters such as the URL of the ultimate receiving MSH and message exchange pattern bindings.

The intermediary functionality defined here supports message relaying across segmented networks, synchronous and asynchronous bindings, and both active (push) and passive (pull) forwarding styles. Multi-hop paths may consist of any number of intermediaries... A key end-user requirement that this specification supports is end-to-end reliable messaging and end-to-end security, along with compliance with Web Services interoperability profiles..."

See also: the announcement

W3C Privacy Workshop Participants Share Implementation Experience
Daniel Appelquist and Thomas Roessler (eds), Workshop Report

A published report, "W3C Workshop on Privacy for Advanced Web APIs," summarizes the highlights and outcomes of the workshop of that name held July 12-13, 2010, in London. The workshop brought together about forty-five participants from industry (including browser vendors, mobile operators, device manufacturers, and service providers), academia, and standardization. Its main goal was to outline next steps for W3C concerning the privacy considerations of advanced APIs that make personal information and sensor data available to Web applications, following up on W3C's previous work on exposing a user's geographical location through the geolocation API.

More generally, workshop participants reviewed W3C's overall direction in the privacy space and discussed approaches toward better privacy on the Web, as well as standards bodies' roles and responsibilities in this space. A number of members of the W3C Device APIs and Policy Working Group (DAP) attended this workshop and discussed its privacy implications during the subsequent DAP face-to-face meeting.

The two practical proposals that drew the most interest and discussion were the Mozilla privacy icon approach and CDT's privacy rule-set idea. Both proposals received much positive feedback, as well as questions about their viability. Beyond the technical and user interface challenges, participants identified weak business incentives for browser vendors and large Web providers as one of the main obstacles to moving privacy work from research and standardization into deployment. Nevertheless, further investigation and experimentation with both approaches seems worthwhile and was encouraged.

There was agreement that it is useful to capture best current practices gained during early implementation efforts (such as those presented during the workshop regarding the geolocation API). Furthermore, investigating how to help specification writers and implementers to systematically analyze privacy characteristics in W3C specifications was seen as a worthwhile effort. To this end, the W3C staff plans to propose a charter for a Privacy Interest Group that can serve as a forum for this work. Such an Interest Group could also provide a focal point for privacy-related coordination with other interested standard development organizations..."

See also: the Privacy Workshop web site

Nimbula Rains On Narrow, Proprietary Cloud Formats
Charles Babcock, InformationWeek

"Chris Pinkham was the designer of Amazon's EC2 cloud, an outgrowth of its retailing Web services; he designed and managed the software systems behind the Amazon online store as VP of engineering for infrastructure. He is now CEO of a start-up called Nimbula, which is developing a cloud operating system. I sat down with Pinkham recently to learn where that effort is leading. He and Willem van Biljon, VP of products, teamed up to found Nimbula in early 2009; both are on the board of directors, along with Roelof Botha, venture capitalist at Sequoia Capital and another graduate of the University of Cape Town.

Pinkham is not trying to create a duplicate of the EC2 infrastructure, which is a distinct and proprietary cloud with its own virtual machine file format. Nor is he trying to create a Savvis or AT&T or Verizon Business compatible cloud, all of which are VMware-oriented with APIs architected to know what to do with VMware file formats. Pinkham is trying to move the internal enterprise cloud to a more fundamental layer where the software can provide essential cloud services to any standard virtual machine file format...

In its initial phase, Nimbula will work with the open source Xen and KVM hypervisors. Next up is VMware's ESX Server. Ultimately, it will treat the hypervisor as a common commodity, regardless of where it comes from; this suggests that Nimbula has drawn its lessons, not from the proprietary format of EC2 but from the neutral migration format, OVF, from the DMTF standards group... By building an infrastructure that starts with a neutral workload approach as opposed to building around a specific existing one, Nimbula has started down a path to produce a general purpose private cloud, usable in many existing settings and perhaps one day linked to a public cloud, such as EC2..."

According to the Nimbula Cloud Operating System technical white paper, this technology is "an automated cloud management system delivering Amazon EC2-like services behind the firewall. Nimbula's technology allows customers to easily repurpose their existing infrastructure and build a computing cloud in the trusted environment of their own data center. Using simple and rapid deployment technologies, the Nimbula Cloud OS transforms under-utilized private data centers into muscular, easily configurable compute capacity, quickly and cost-effectively. With access to both on- and off-premise cloud services available via a common API, the Nimbula Cloud OS combines the benefits of capitalizing on internal resource capacity and controlled access to additional external compute capacity... The Nimbula Cloud OS abstracts the underlying technology to provide a coherent view of a completely automated virtual data center. Nimbula's intelligent cloud control software isolates customers from the operational and hardware complexity associated with deploying compute in a static private data center. A RESTful HTTP API provides a simple and comprehensive interface to all aspects of cloud resource control. Cloud resources can also be managed via a command line interface (CLI) and web control panel, built on top of the API..."

See also: the Nimbula Cloud Operating System

Expressing SNMP SMI Datatypes in XML Schema Definition Language
Mark Ellison and Bob Natale (eds), IETF Internet Draft

The Internet Engineering Steering Group (IESG) announced the publication of a new Request for Comments specification in the online RFC libraries: Expressing SNMP SMI Datatypes in XML Schema Definition Language. This document, produced by members of the IETF Operations and Management Area Working Group (OPSAWG), is now an IETF Proposed Standard.

The specification "defines the IETF standard expression of Structure of Management Information (SMI) base datatypes in XML Schema Definition (XSD) language. The primary objective of this memo is to enable the production of XML documents that are as faithful to the SMI as possible, using XSD as the validation mechanism. This standard expression enables Internet operators, management application developers, and users to benefit from a wider range of management tools and to benefit from a greater degree of unified management. Thus, standard expression enables and facilitates improvements to the timeliness, accuracy, and utility of management information. Section 4 presents the XSD for SMI Base Datatypes.

From the Introduction: "Numerous use cases exist for expressing the management information described by SMI Management Information Base (MIB) modules in XML. Potential use cases reside both outside and within the traditional IETF network management community. For example, developers of some XML-based management applications may want to incorporate the rich set of data models provided by MIB modules. Developers of other XML-based management applications may want to access MIB module instrumentation via gateways to SNMP agents. Such applications benefit from the IETF standard mapping of SMI datatypes to XML datatypes via XSD (W3C XML Schema)...

MIB modules use SMIv2 to describe data models; for legacy MIB modules, SMIv1 was used. MIB data conveyed in variable bindings ('varbinds') within protocol data units (PDUs) of SNMP messages use the primitive, base datatypes defined by the SMI... Using the translation of textual conventions (TCs) into base SMI datatypes, any MIB module that uses TCs can be mapped into XSD using the mappings defined in this memo. For example, for IP addresses (both IPv4 and IPv6), MIB objects defined using the InetAddress TC (as per [RFC4001]) are encoded using the base SMI datatype underlying the InetAddress TC syntax rather than the IpAddress base datatype..."
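To make the idea of the mapping concrete, here is an illustrative (non-normative) sketch pairing a few SMI base datatypes with plausible XSD built-in types, plus a dotted-quad pattern of the kind an XSD facet for IpAddress might use. The actual, authoritative XSD is the one in Section 4 of the RFC; treat these pairs as assumptions for illustration:

```python
import re

# Illustrative mapping from SMI base datatypes to XSD built-in types.
# Non-normative sketch; see Section 4 of the RFC for the real XSD.
SMI_TO_XSD = {
    "Integer32":  "xs:int",
    "Unsigned32": "xs:unsignedInt",
    "Counter32":  "xs:unsignedInt",
    "Gauge32":    "xs:unsignedInt",
    "TimeTicks":  "xs:unsignedInt",
    "Counter64":  "xs:unsignedLong",
    "IpAddress":  "xs:string",  # constrained by a dotted-quad pattern facet
}

# The kind of dotted-quad pattern an XSD restriction for IpAddress might use.
IPV4_PATTERN = re.compile(
    r"^(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)"
    r"(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}$"
)

def xsd_type(smi_type):
    """Look up the XSD type for an SMI base datatype (default: xs:string)."""
    return SMI_TO_XSD.get(smi_type, "xs:string")

print(xsd_type("Counter64"))                  # xs:unsignedLong
print(bool(IPV4_PATTERN.match("192.0.2.1")))  # True
print(bool(IPV4_PATTERN.match("256.1.1.1")))  # False
```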

See also: the IETF Operations and Management Area Working Group (OPSAWG)

W3C Proposed Recommendations: MathML Version 3.0 and MathML CSS Profile
David Carlisle, Patrick Ion, Robert Miner (eds), W3C PRs

W3C has issued a Call for Review in connection with the publication of two Proposed Recommendation specifications for MathML. W3C MathML is "about encoding the structure of mathematical expressions so that they can be displayed, manipulated and shared over the World Wide Web. A carefully encoded MathML expression can be evaluated in a computer algebra system, rendered in a Web browser, edited in your word processor, and printed on your laser printer. Mathematical software vendors are adding MathML support at a rapid pace, and MathML is fast becoming the lingua franca of scientific publication on the Web."

MathML can be used to encode both mathematical notation and mathematical content. About thirty-eight of the MathML Version 3.0 markup elements describe abstract notational structures, while about one hundred and seventy others provide a way of unambiguously specifying the intended meaning of an expression. The specification discusses how the MathML content and presentation elements interact, and how MathML renderers might be implemented and should interact with browsers. It also addresses the issue of special characters used for mathematics, their handling in MathML, their presence in Unicode, and their relation to fonts.

Mathematical Markup Language (MathML) Version 3.0 PR is a specification under review by the W3C Advisory Committee for endorsement as a W3C Recommendation. It is a mature document that has been widely reviewed and has been shown to be implementable; W3C encourages everybody to implement the specification. MathML Version 3.0 is designed as an 'XML application'; that is, it uses XML markup for describing mathematics. The specification makes use of a format called Content Dictionaries, which is also an application of XML. This format was developed by the OpenMath Society; the dictionaries used by this specification were developed jointly by the OpenMath Society and the W3C Math Working Group.

The PR specification A MathML for CSS Profile is expected to facilitate adoption of MathML in web browsers and CSS formatters, allowing them to reuse the existing CSS visual formatting model, enhanced with a few mathematics-oriented extensions, for rendering the layout schemata of presentational MathML. Development of the CSS profile is assumed to be coordinated with ongoing work on CSS. As specified in this document, a restricted part of MathML3, properly used, should render well with currently implemented CSS, up to CSS 2.1.
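The presentation/content distinction can be seen in a minimal example: both fragments below encode "x squared," the first as notation (a base with a superscript, msup) and the second as meaning (apply the power operator). The element names are standard MathML; the small Python harness merely checks that both fragments are well-formed XML:

```python
import xml.etree.ElementTree as ET

MATHML = ""

# Presentation markup: encodes how the expression LOOKS (x with superscript 2).
presentation = """<math xmlns="">
  <msup><mi>x</mi><mn>2</mn></msup>
</math>"""

# Content markup: encodes what the expression MEANS (power applied to x and 2).
content = """<math xmlns="">
  <apply><power/><ci>x</ci><cn>2</cn></apply>
</math>"""

p = ET.fromstring(presentation)
c = ET.fromstring(content)

assert p.find(f"{{{MATHML}}}msup") is not None
assert c.find(f"{{{MATHML}}}apply/{{{MATHML}}}power") is not None
print("both fragments parse")
```

A computer algebra system would evaluate the content form, while a browser or formatter renders the presentation form; MathML 3.0 allows the two to be combined in one expression.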

See also: A MathML for CSS Profile

Credit Where it's Due
Jeni Tennison, Jeni's Musings Blog

"The web site is a government resource built on the principles of transparency and open [linked] data, including ideas laid out in the Power of Information Taskforce Report. It now features a lovely user interface which helps end-users find and understand legislation, but it's layered over the top of an API that anyone is free to use to construct their own websites based on the same data.

As far as content goes, legislation is about as tough as you can get. For a start, Acts and Statutory Instruments are semi-structured documents, not tabular data. It's not a simple matter of storing and extracting rows in a database: we need to be able to address portions of an item of legislation, such as 'Local Government Act 1988 (c. 9, SIF 81:2), Sch. 3 para. 13(1)(b)(2)'... The content itself is complex. For this site, the main challenge is not to do with faithfully reconstructing page and line breaks (fortunately!) but with how to represent complex, annotated changes to legislation over time, and then how to present them...

We also have a lot of documents, some of which are very large. There are nearly 60,000 items of legislation on the site. The largest and most complex of them has hundreds of sections and about a hundred distinct versions. When you consider all the versions of all the possible fragments of all the items of legislation, you're talking about 6.5 million distinct documents, each of which is available in HTML, XML, and PDF, and for which there is some RDF metadata... On top of this, the content is constantly changing. New legislation is published every working day, first as PDFs, then as HTML (and XML), and then as various associated documents, the most important of which are Explanatory Notes, again first in PDF and then in HTML/XML form. Old legislation changes too; the editorial team is constantly working through a backlog of changes to existing legislation brought about by new legislation...

Building the user interface (over the API) first helped in two ways: it helped the legislation experts who were looking at the documents to spot errors in a way that they unsurprisingly struggled to do when presented with raw XML. It also helped to identify things that the API needed to do to support a useful website, such as always providing links to the table of contents for an item of legislation, or providing a search based on modification date... All of this has only been possible by having an excellent team of experts and developers [credits follow]..."
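As a sketch of the fragment-level, multi-format addressing the article describes (every version of every fragment of every item, each in several formats), the snippet below composes URIs for such fragments. The base URL, path scheme, identifiers, and format suffixes are invented for illustration and are not the site's actual API:

```python
# Hypothetical addressing scheme for versioned, fragment-level legislation.
# BASE, the path layout, and the "data.<fmt>" suffix are all assumptions.

BASE = ""
FORMATS = ("html", "xml", "pdf")  # plus RDF metadata, per the article

def fragment_uri(doc_id, fragment, version=None, fmt=None):
    """Build a URI for one version of one fragment of one item of legislation."""
    parts = [BASE, doc_id, fragment]
    if version:
        parts.append(version)  # e.g. a point-in-time version label
    uri = "/".join(parts)
    # A format suffix selects the representation (HTML, XML, PDF, ...).
    return f"{uri}/data.{fmt}" if fmt else uri

# One illustrative fragment, addressed in each available format.
for fmt in FORMATS:
    print(fragment_uri("ukpga/1988/9", "schedule/3/paragraph/13", fmt=fmt))
```

The design choice this illustrates is the one the article credits for the site's flexibility: because every addressable fragment has a stable URI with negotiable formats, the public website is just one client of the same API anyone else can use.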

See also: the '' web site

Call for Participation: Role of Semantic Web in Provenance (SWPM 2010)
Amit Sheth and Juliana Freire (Chairs), Workshop CFP

Submissions are due August 27, 2010 for The Second International Workshop on the role of Semantic Web in Provenance Management (SWPM 2010). The Workshop will be held on November 7, 2010, at Shanghai International Convention Center, 2727 Riverside Avenue, Pudong Shanghai, China. It is co-located with the Ninth International Semantic Web Conference (ISWC-2010).

"The workshop anticipates the participation of researchers in academia, industry, and government involved in both provenance management and the Semantic Web. Given the focus of this workshop on provenance management in real-world Semantic Web scientific applications, we also expect active participation from domain scientists and Web technologists. Finally, the workshop aims to raise awareness among provenance researchers about the Semantic Web and, correspondingly, to highlight provenance management as a rich problem domain for Semantic Web researchers.

The scale at which data is created across different domains (e.g., biomedical informatics, astronomy, and oceanography), along with the rapidly growing LOD (Linked Open Data) cloud, mandates the processing and analysis of provenance metadata in a scalable way. The proof layer in the Semantic Web layer cake, corresponding to provenance information, has been identified as an important component for the implementation of 'trust mechanisms' and effective information extraction from the Web. The notion of semantic provenance brings together elements of the Semantic Web and provenance metadata that are useful in the context of real-world Semantic Web applications.

Several workshops have been held that each address different aspects of provenance, such as Provenance in Databases, Provenance in Scientific Workflows, and the biennial International Provenance and Annotation Workshop (IPAW, 2006 through 2010), but none of these workshops has specifically addressed the role of the Semantic Web in provenance management. Further, the large number of participants representing a variety of domains in the ongoing W3C Provenance Incubator Group makes this workshop timely and relevant. The special issues on provenance of the Journal of Web Semantics and IEEE Internet Computing strongly emphasize the importance of provenance management for computer science researchers..."

See also: ISWC-2010


XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation
ISIS Papyrus
Microsoft Corporation
Oracle Corporation


Hosted By
OASIS - Organization for the Advancement of Structured Information Standards
