The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: August 26, 2009
XML Daily Newslink. Wednesday, 26 August 2009

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
Sun Microsystems, Inc.

W3C Announces Two New Co-Chairs for the HTML Working Group
Staff, W3C Announcement

Tim Berners-Lee announced that two people will join Sam Ruby (IBM) in co-Chairing the W3C HTML Working Group: Paul Cotton (Microsoft) and Maciej Stachowiak (Apple). Chris Wilson has stepped down as co-Chair, indicating that he will be changing his focus to programmability in the web platform. As Berners-Lee wrote about this transition: "The work of this group is tremendously important to the Web; I am pleased that all three co-Chairs have taken on the responsibility for working closely with the editor and group to make HTML 5 a success."

The W3C HTML Working Group, chartered through December 2010, "will maintain and produce incremental revisions to the HTML specification, which includes the series of specifications previously published as XHTML version 1. Both XML and 'classic HTML' syntaxes will be produced. The Group will define conformance and parsing requirements for 'classic HTML', taking into account legacy implementations; the Group will not assume that an SGML parser is used for 'classic HTML'. The Group will monitor implementation of and conformance to the HTML specification, construct test suites, and from them produce interoperability reports."

The HTML 5 draft specification, "HTML 5: A vocabulary and associated APIs for HTML and XHTML" was published as a First Public Working Draft on 23-January-2008. The current editor's draft of the HTML 5 specification is available from the W3C site and in a multi-page annotated format at the editor's site.

See also: Stephen Shankland in CNET

OASIS Public Review for DSS Verifying Protocol Specification
Detlef Hühnlein (ed), OASIS DSS-X TC Committee Draft

An approved committee draft from the OASIS Digital Signature Services Extended (DSS-X) TC has been submitted for public review through October 23, 2009. The document "Profile for Comprehensive Multi-signature Verification Reports for OASIS Digital Signature Services Version 1.0" defines a protocol and processing profile of the DSS Verifying Protocol specified in Section 4 of "Digital Signature Service Core Protocols and Elements" (2007), which makes it possible to return an individual verification report for each signature in a verification request and to include detailed information about the different steps taken during verification.

While the DSS Verifying Protocol specified in DSS Core allows digital signatures and time stamps to be verified, it is fairly limited with respect to the verification of multiple signatures in a single request. It is likewise possible to request and provide processing details, but this simple mechanism does not scale to multiple signatures in a single request, and there are as yet no defined structures that reflect the necessary steps in the verification of a complex signature, such as an advanced electronic signature according to the European Directive 1999/93/EC. The profile therefore defines how (1) individual verification results may be returned if multiple signatures are part of a 'dss:VerifyRequest' element, and (2) detailed information gathered in the various steps taken during verification may be included in the response to form a comprehensive verification report...
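The request shape the profile addresses can be sketched roughly in Python. The element placement below is simplified (the real DSS schema nests each signature in further wrapper structures), and the report fields are illustrative names, not the profile's actual elements:

```python
import xml.etree.ElementTree as ET

DSS_NS = "urn:oasis:names:tc:dss:1.0:core:schema"

def build_verify_request(signature_values):
    """A dss:VerifyRequest carrying several signatures (simplified:
    the real schema wraps each one in additional elements)."""
    req = ET.Element(f"{{{DSS_NS}}}VerifyRequest")
    for value in signature_values:
        sig = ET.SubElement(req, f"{{{DSS_NS}}}SignatureObject")
        sig.text = value
    return req

def individual_reports(req):
    """One entry per signature, as the profile requires, instead of a
    single aggregate result (field names here are illustrative)."""
    return [{"signature": sig.text, "result": "valid", "steps": []}
            for sig in req.findall(f"{{{DSS_NS}}}SignatureObject")]

req = build_verify_request(["sig-one", "sig-two"])
reports = individual_reports(req)
```

The point of the profile is exactly this one-report-per-signature structure, plus room in each report for the detailed verification steps.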

See also: the announcement

DomainKeys Identified Mail (DKIM) Development, Deployment and Operations
Tony Hansen, Ellen Siegel, Dave Crocker (eds), IETF Internet Draft

Members of the IETF Domain Keys Identified Mail (DKIM) Working Group have released an updated version of the Informational Internet Draft "DomainKeys Identified Mail (DKIM) Development, Deployment and Operations."

The DKIM working group was chartered to produce standards-track specifications that allow an Internet Domain to take responsibility, using digital signatures, for having taken part in the transmission of an email message and to publish "policy" information about how it applies those signatures. Taken together, these will assist receiving domains in detecting (or ruling out) certain forms of spoofing as it pertains to the signing domain.

This document provides implementation, deployment, operational, and migration considerations for DKIM. The organization taking responsibility can be the author's organization, the originating sending site, an intermediary, or one of their agents. A message can contain multiple signatures, from the same or different organizations involved with the message. DKIM defines a domain-level digital signature authentication framework for email based on public key cryptography, with the Domain Name System as its key server technology (RFC 4871). This permits verification of a responsible organization, as well as the integrity of the message contents. DKIM will also provide a mechanism that permits potential email signers to publish information about their email signing practices; this will permit email receivers to make additional assessments about messages. DKIM's authentication of email identity can assist in the global control of "spam" and "phishing".
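The DNS-as-key-server design can be sketched briefly: a verifier reads the selector (s=) and signing domain (d=) tags out of the DKIM-Signature header, then fetches a TXT record at a well-known name. A minimal parsing sketch in Python (the header value is made up, and no actual DNS lookup is performed):

```python
def parse_dkim_tags(header_value):
    """Split a DKIM-Signature header into its tag=value pairs (RFC 4871).
    partition() keeps '=' characters inside base64 values intact."""
    tags = {}
    for part in header_value.split(";"):
        part = part.strip()
        if part:
            name, _, value = part.partition("=")
            tags[name.strip()] = value.strip()
    return tags

def key_record_name(tags):
    """DNS name of the TXT record holding the signer's public key."""
    return f"{tags['s']}._domainkey.{tags['d']}"

header = ("v=1; a=rsa-sha256; d=example.com; s=news2009; "
          "c=relaxed/simple; h=from:to:subject; bh=Zm9v=; b=YmFy==")
tags = parse_dkim_tags(header)
```

Here `key_record_name(tags)` yields `news2009._domainkey.example.com`, which is the query name RFC 4871 specifies for retrieving the public key.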

See also: the IETF Domain Keys Identified Mail WG Status Pages

Is MIME a Problem for REST?
Mark Little, InfoQueue

As REST (and REST-over-HTTP) adoption grows beyond just running your favourite Web server, we are seeing more and more people bring their real-world experiences to a wider community. In this case, Benjamin Carlyle's blog asks whether MIME types are holding back REST:

"A significant weakness of HTTP in my view is its dependence on the MIME standard for media type identification and on the related IANA registry. This registry is a limited bottleneck that does not have the capacity to deal with the media type definition requirements of individual enterprises or domains. Machine-centric environments rely on a higher level of semantics than the human-centric environment of the Web. In order for machines to effectively exploit information, every unique schema of information needs to be standardised in a media type and for those media types to be individually identified. The number of media types grows as machines become more dominant in a distributed computing environment and as the number of distinct environments increases..."

The problem, as Benjamin points out, is that getting universal adoption of various MIME types is extremely difficult and doesn't scale...
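One partial mitigation here is the structured syntax suffix convention ('+xml', and later '+json'): a generic processor can handle an unregistered, enterprise-specific type by its suffix even when it has never seen the subtype. A small illustrative parser in Python (the `vnd.example` type is made up):

```python
def parse_media_type(value):
    """Split a media type into type, subtype, structured-syntax suffix,
    and parameters, e.g. application/vnd.example.order+xml;charset=utf-8."""
    head, *params = [p.strip() for p in value.split(";")]
    mtype, _, subtype = head.partition("/")
    base, _, suffix = subtype.rpartition("+")
    if not base:                       # no '+suffix' present
        base, suffix = subtype, None
    parameters = dict(p.split("=", 1) for p in params if "=" in p)
    return {"type": mtype, "subtype": base, "suffix": suffix,
            "parameters": parameters}

mt = parse_media_type("application/vnd.example.order+xml; charset=utf-8")
```

A client that knows nothing about `vnd.example.order` can still route the representation to an XML toolchain based on `mt["suffix"]`, which blunts (though does not remove) the registry bottleneck Carlyle describes.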

See also: IANA MIME Media Types

Orchestrating RESTful Services With Mule ESB and Groovy
David Dossot, InfoQueue

Over the past couple of years, the REST style of software architecture has gained in popularity, mainly because it typically gets reified in systems that require fewer moving parts, exhibit loose coupling, and are more resilient. Having more REST resources available in the enterprise landscape increases the chance that orchestrating them in some way will be needed. For example, a business activity will typically consist of the creation of a resource followed by subsequent lookups and creations of other resources...

Orchestrating the interactions with several resources becomes slightly more involved. We need to define the orchestration, handle errors and retries properly, and ensure our system behaves gracefully under load. As an integration framework, Mule provides all of this... In this article, we detail the interactions for each of these steps and consider what particular Mule moving parts and Groovy features we have used to achieve such an interaction.

Mule's stock HTTP transport contains a component that facilitates interacting with REST resources from within a service. Thanks to this component, we do not need to chain the result of the GET operation to a subsequent service but can perform everything within a single service... Both Mule's expressions and tiny Groovy scripts are efficient ways to dynamize your Mule configuration. As an added bonus, the fact that we use asynchronous VM queues to chain the different steps of our orchestration allows our solution to degrade gracefully under peak loads, thanks to the intrinsic SEDA architecture of Mule...
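The SEDA idea Mule relies on — stages decoupled by queues, so load backs up inside a stage instead of stalling callers — is independent of Mule itself. A toy Python sketch (this is not Mule's API, just the staged-queue pattern):

```python
import queue
import threading

def run_pipeline(items, stages):
    """Chain each processing step through its own queue, SEDA-style:
    a slow stage fills its own inbound queue rather than blocking
    the stages upstream of it."""
    queues = [queue.Queue() for _ in range(len(stages) + 1)]

    def worker(fn, in_q, out_q):
        while True:
            item = in_q.get()
            if item is None:          # poison pill: shut this stage down
                out_q.put(None)
                break
            out_q.put(fn(item))

    threads = [threading.Thread(target=worker,
                                args=(fn, queues[i], queues[i + 1]))
               for i, fn in enumerate(stages)]
    for t in threads:
        t.start()
    for item in items:
        queues[0].put(item)
    queues[0].put(None)

    results = []
    while True:                       # drain the final stage's queue
        out = queues[-1].get()
        if out is None:
            break
        results.append(out)
    for t in threads:
        t.join()
    return results

orchestrated = run_pipeline([1, 2, 3],
                            [lambda x: x + 1, lambda x: x * 2])
```

With one worker per stage and FIFO queues, ordering is preserved while each stage runs concurrently with the others — the property that lets a Mule orchestration absorb bursts.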

Microsoft Bridges PHP to ADO.NET Data Services
Jeffrey Schwartz, Application Development Trends

In its latest bid to show its support for PHP, Microsoft late last week released a toolkit that will bridge the popular scripting language to .NET-based data-driven applications.

"The goal of the ADO.Net Data Services framework is to facilitate the creation of flexible data services that are naturally integrated with the web, using URIs to point to pieces of data and simple, well-known formats to represent that data, such as JSON and plain XML. This results in the data service being surfaced to the web as a REST-style resource collection that is addressable with URIs and that agents can interact with using the usual HTTP verbs such as GET, POST or DELETE..."

From the blog: "we are releasing today the PHP Toolkit for ADO.NET Data Services which makes it easier for PHP developers to take advantage of ADO.NET Data Services, a set of features recently added to the .NET Framework. ADO.NET Data Services offer a simple way to expose any sort of data in a RESTful way. The PHP Toolkit for ADO.NET Data Services is an open source project funded by Microsoft and developed by Persistent Systems Ltd. and is available today on Codeplex... Data sources can be relational databases, XML files, and so on. ADO.NET Data Services defines a flexible addressing and query interface using a URL convention, as well as the usual resource manipulation methods on data sources; it supports the full range of Create/Read/Update/Delete operations..."
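The addressing convention described above — addressable entity sets, keys in parentheses, '$'-prefixed query options — can be illustrated with a small helper. The service root and entity names below are hypothetical, and this is not Microsoft's or the toolkit's API, just the URL shape:

```python
from urllib.parse import quote

def entity_uri(service_root, entity_set, key=None, **options):
    """Compose a data-service style URI such as Customers('ALFKI'),
    with query options like $top or $expand appended."""
    uri = f"{service_root}/{entity_set}"
    if key is not None:
        uri += f"('{quote(str(key))}')"
    if options:
        uri += "?" + "&".join(f"${name}={quote(str(value))}"
                              for name, value in options.items())
    return uri

orders = entity_uri("http://example.com/northwind.svc",
                    "Customers", key="ALFKI", expand="Orders")
```

Because every entity gets a stable URI like this, a plain HTTP client in any language — including PHP — can read and manipulate the data with ordinary GET, POST, and DELETE requests, which is exactly the integration surface the toolkit targets.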

See also: blog on PHP Toolkit for RESTful ADO.NET Data Services

Technical Experts Sought for Input on Voting Equipment Standards
William Jackson, Government Computer News

"The Election Assistance Commission, which oversees guidelines for certifying voting equipment, is looking for technical experts to serve on the committee that is rewriting those guidelines. The Technical Guidelines Development Committee (TGDC) is an advisory panel that provides technical assistance and advice in revising the Voluntary Voting System Guidelines (VVSG) used by states to certify voting equipment. The committee was created by the Help America Vote Act of 2002; four of its 15 spots are reserved for technical experts...

EAC, together with the National Institute of Standards and Technology, is in the middle of a major revision of the current guidelines, first adopted in 2005. The update includes the development of uniform test suites by NIST. This revision will also clarify the standard, providing test labs and voting system manufacturers with a clearer sense of performance and test requirements for EAC certification... The guidelines and the testing are intended to address concerns that have arisen in the last decade about the reliability and security of voting systems, particularly electronic systems..."

Making Sense of Revision-control Systems
Bryan O'Sullivan, ACM Queue Developer Tools

"Modern software is tremendously complicated, and the methods that teams use to manage its development reflect this complexity. Though many organizations use revision-control software to track and manage the complexity of a project as it evolves, the topic of how to make an informed choice of revision-control tools has received scant attention... Both Subversion and CVS follow the client-server model: a single central server hosts a project's metadata, and developers "check out" a limited view of this data onto the machines where they work...

In the early 2000s, several projects began to move away from the centralized development model. Of the initial crop of a half-dozen or so, the most popular today are Git and Mercurial. The distinguishing feature of these distributed tools is that they operate in a peer-to-peer manner. Every copy of a project contains all of the project's history and metadata. Developers can share changes in whatever arrangement suits their needs, instead of through a central server...

Mercurial, Git, and Subversion all have the ability to cherry-pick a change from one branch and apply it to another branch. The trouble with cherry-picking is that it is very brittle. A change doesn't just float freely in space: it has a context—dependencies on the code that surrounds it. Some of these dependencies are semantic and will cause a change to be cherry-picked cleanly but to fail later. Many dependencies are simply textual. The usual approach when cherry-picking fails because of a textual problem (sadly, a common occurrence) is to inspect the change by eye and reenter it by hand in a text editor..."
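The "textual dependency" failure mode is easy to reproduce: a patch tool locates a hunk by its surrounding context lines, so an unrelated edit to that context makes the cherry-pick fail even though the change itself is still valid. A toy Python illustration (a deliberate simplification, not how Git's actual merge machinery works):

```python
def apply_insertion(lines, before, new_line, after):
    """Insert new_line where the context pair (before, after) matches,
    the way a patch tool locates a hunk by its surrounding text."""
    for i in range(len(lines) - 1):
        if lines[i] == before and lines[i + 1] == after:
            return lines[:i + 1] + [new_line] + lines[i + 1:]
    raise ValueError("hunk does not apply: context not found")

original = ["def f():", "    x = 1", "    return x"]
# Cherry-picking onto an identical file succeeds:
patched = apply_insertion(original, "    x = 1", "    x += 1",
                          "    return x")
```

On a branch where someone has meanwhile changed `return x` to `return x * 2`, the same hunk no longer finds its context and the tool gives up — which is why the fallback is the by-eye, by-hand reentry the article describes.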


XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation
Microsoft Corporation
Oracle Corporation
Sun Microsystems, Inc.


Hosted By
OASIS - Organization for the Advancement of Structured Information Standards


