Last modified: March 09, 2009
XML Daily Newslink. Monday, 09 March 2009

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation

Reborn: The OAuth Core Protocol
Eran Hammer-Lahav and Blaine Cook (eds), IETF Internet Draft

With the finalization of the IETF Open Authentication Protocol (OAuth) Working Group Charter, an updated IETF specification has been published for "The OAuth Core Protocol." Abstract: "This document specifies the OAuth core protocol. OAuth provides a method for clients to access server resources on behalf of another party (such as a different client or an end user). It also provides a redirection-based user agent process for end users to authorize access to clients by substituting their credentials (typically, a username and password pair) with a different set of delegation-specific credentials." This specification consists of two parts. The first part defines a method for making authenticated HTTP requests using two sets of credentials, one identifying the client making the request, and a second identifying the resource owner on whose behalf the request is being made. The second part defines a redirection-based user agent process for end users to authorize client access to their resources, by authenticating directly with the server and provisioning tokens to the client for use with the authentication method... In an associated blog article, "OAuth Core 1.0 Reborn," Eran Hammer-Lahav writes: "I wanted to have a better, more consistent baseline to work with, but even more, I wanted to offer developers a better document to read today, instead of having to wait for the IETF profile of the protocol, which is expected to take about a year. The result is a completely new approach to explaining OAuth. It includes a new, much simplified set of terms (gone are the confusing consumers, service providers, and multiple types of tokens). The document structure has also been completely revised, flipping the specification on its head. The new specification first explains how to make OAuth-authenticated requests, before explaining how to obtain tokens via redirection.
Another benefit of the new format is that it places less emphasis on the browser-based authorization workflow. Instead, it positions it as one of many possible methods for exchanging usernames and passwords for tokens..."
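The two-credential request method described above rests on OAuth 1.0's HMAC-SHA1 signature: the client builds a signature base string from the HTTP method, URL, and normalized parameters, then keys the HMAC with the client secret and token secret joined by "&". The following is a minimal sketch of that signing step, following the OAuth Core 1.0 rules; the URL, keys, and secrets are made-up placeholders:

```python
import base64
import hashlib
import hmac
import secrets
import time
from urllib.parse import quote


def pct(s):
    # RFC 3986 percent-encoding with only the unreserved set left bare,
    # as OAuth 1.0 requires ("~" safe, "/" and others encoded).
    return quote(s, safe="~")


def sign_request(method, url, params, client_secret, token_secret=""):
    """Compute an OAuth 1.0 HMAC-SHA1 signature for one HTTP request."""
    # Normalize parameters: percent-encode, then sort by name and value.
    norm = "&".join(f"{pct(k)}={pct(v)}" for k, v in sorted(params.items()))
    # Signature base string: METHOD & encoded-URL & encoded-parameter-string.
    base = "&".join([method.upper(), pct(url), pct(norm)])
    # Signing key: client (consumer) secret and token secret, joined by "&".
    key = f"{pct(client_secret)}&{pct(token_secret)}".encode()
    digest = hmac.new(key, base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()


# One credential set identifies the client, the other the resource owner.
oauth_params = {
    "oauth_consumer_key": "client-key",
    "oauth_token": "owner-token",
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_timestamp": str(int(time.time())),
    "oauth_nonce": secrets.token_hex(8),
    "oauth_version": "1.0",
}
signature = sign_request("GET", "https://server.example/resource",
                         oauth_params, "client-secret", "owner-secret")
```

In a real request the signature is appended as the `oauth_signature` parameter (itself excluded from the base string), typically in the Authorization header.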

See also: Eran Hammer-Lahav's blog

OAuth Access Tokens Using Credentials
Bill de hÓra and Stephen Farrell (eds), IETF Internet Draft

An updated version of "OAuth Access Tokens Using Credentials" has been published as an IETF Internet Draft. This version adds an applicability statement and an increased level of required security. "OAuth Access Tokens using credentials is a technique for allowing user agents to obtain an OAuth access token on behalf of a user without requiring user intervention or HTTP redirection to a browser. OAuth itself is documented in the OAuth Core 1.0 Specification." Details: The "OAuth Core 1.0" specification [2007-12-04] is a protocol that enables websites or applications to access protected web resources via an API, without requiring users to disclose their credentials. This draft of "OAuth Access Tokens Using Credentials" defines a technique for allowing a user to provide their credentials in cases where HTTP redirection to a browser is unavailable or unsuitable, such as intermediary aggregators and mobile or set-top devices. The scheme is intended for use where one or both of the following situations apply: (1) the User is using a device that cannot play the HTTP redirect game normally played in the "3-legged" OAuth model; (2) the Consumer is an aggregator that will, in any case, be presented with the credentials of the end user. If neither of the above apply, then this specification should not be used. In addition, the security considerations below must be followed, in particular the requirement that communications between the Consumer and Service Provider that contain the user's credentials must be sent via a confidential and mutually authenticated channel. That channel can be provided either via mutually authenticated transport layer security or a virtual private network providing equivalent security functionality. See the security considerations section below for details. Once the Access Token has been acquired by the Consumer, then the security requirements of standard OAuth apply.
Client request to obtain an Access Token: to request an Access Token in this model, the Consumer makes an HTTP request to the Service Provider's Access Token URL. Response: to grant an Access Token, the Service Provider must ensure that the request signature has been successfully verified as per OAuth 1.0, that a request with the supplied timestamp and nonce has never been received before, and that the supplied username and password match a User's credentials. If successful, the Service Provider generates an Access Token and Token Secret and returns them in the body of a 200 OK HTTP response. Accessing Protected Resources: after successfully receiving the Access Token and Token Secret, the Consumer is able to access the Protected Resources on behalf of the User as per section 7 of the OAuth specification. In other words, the Access Token obtained here is no different in capability from the Access Token specified by OAuth. Once authenticated using the above process, the Consumer will sign all subsequent requests for the User's Protected Resources using the returned Token Secret...
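The three server-side checks listed above (signature verified, timestamp/nonce never seen, credentials match) can be sketched as follows. This is an illustrative sketch only: the function and parameter names, the in-memory nonce store, and the plaintext credential lookup are assumptions, not taken from the draft:

```python
import secrets

# In practice this would be a persistent store pruned by timestamp window;
# a set suffices to illustrate replay protection.
seen_nonces = set()


def grant_access_token(username, password, timestamp, nonce,
                       signature_valid, user_db):
    """Return an (Access Token, Token Secret) pair, or None on failure,
    applying the draft's three checks in order."""
    if not signature_valid:                # OAuth signature must verify
        return None
    if (timestamp, nonce) in seen_nonces:  # timestamp/nonce must be fresh
        return None
    seen_nonces.add((timestamp, nonce))
    if user_db.get(username) != password:  # must match a User's credentials
        return None
    # Token and secret are returned in the body of a 200 OK response.
    return secrets.token_hex(16), secrets.token_hex(16)
```

The returned Token Secret is then used to sign all subsequent Protected Resource requests, exactly as with a token obtained via the redirection flow.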

See also: the OAuth Core 1.0 2007-12-04 specification

Content Management Interoperability Services (CMIS) - Domain Model V0.6
Ethan Gur-esh (ed), OASIS CMIS TC Working Draft

An updated 0.6 working draft of the CMIS Domain Model was uploaded to the OASIS TC's repository. Abstract: "The Content Management Interoperability Services (CMIS) standard defines a domain model (in this document) and a set of bindings, such as Web Services and REST/Atom, that can be used by applications to work with one or more Content Management repositories/systems. The CMIS interface is designed to be layered on top of existing Content Management systems and their existing programmatic interfaces. It is not intended to prescribe how specific features should be implemented within those CM systems, nor to exhaustively expose all of the CM system's capabilities through the CMIS interfaces. Rather, it is intended to define a generic/universal set of capabilities provided by a CM system and a set of services for working with those capabilities... Data Model: CMIS provides an interface for an application to access a Repository. To do so, CMIS specifies a core data model that defines the persistent information entities that are managed by the repository, and specifies a set of basic services that an application can use to access and manipulate these entities. In accordance with the CMIS objectives, this data model does not cover all the concepts that a full-function ECM repository typically supports. Specifically, transient entities (such as programming interface objects), administrative entities (such as user profiles), and extended concepts (such as compound or virtual documents, workflow and business process, event and subscription) are not included. However, when an application connects to a CMIS service endpoint, the same endpoint MAY provide access to more than one CMIS repository. How an application obtains a CMIS service endpoint is outside the scope of CMIS. How the application connects to the endpoint is a part of the protocol that the application uses.
An application SHALL use the CMIS 'Get Repositories' service (getRepositories) to obtain a list of repositories that are available at that endpoint. For each available repository, the Repository MUST return a Repository Name, a Repository Identity, and a URI. The Repository Identity MUST uniquely identify an available repository at this service endpoint. Both the repository name and the repository identity are opaque to CMIS. Aside from the 'Get Repositories' service, all other CMIS services are single-repository-scoped, and require a Repository Identity as an input parameter. In other words, except for the 'Get Repositories' service, multi-repository and inter-repository operations are not supported by CMIS..."
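In the REST/Atom binding, getRepositories amounts to fetching an AtomPub service document in which each workspace describes one repository. The sketch below parses such a document; the cmis namespace URI and element names are assumptions for illustration (draft versions of CMIS used several namespaces; see the draft CMIS Namespace Proposal):

```python
import xml.etree.ElementTree as ET

# Namespace URIs are illustrative placeholders, not fixed by the 0.6 draft.
NS = {
    "app": "http://www.w3.org/2007/app",
    "cmis": "http://docs.oasis-open.org/ns/cmis/core/200901",
}


def list_repositories(service_doc):
    """Return (repositoryId, repositoryName) pairs from a CMIS REST/Atom
    service document; each app:workspace describes one repository."""
    root = ET.fromstring(service_doc)
    return [
        (ws.findtext(".//cmis:repositoryId", namespaces=NS),
         ws.findtext(".//cmis:repositoryName", namespaces=NS))
        for ws in root.findall(".//app:workspace", NS)
    ]
```

Every other service call would then carry one of the returned repository identities as its scoping parameter.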

See also: the draft CMIS Namespace Proposal

Update on the AIIM CMIS Demo
Laurence Hart, Blog

"At the end of January [2009], I talked about the proposed effort being undertaken by the iECM committee to create a CMIS demonstration for the AIIM Expo ('AIIM's iECM Committee, Validating CMIS'). Things are going well and I am working with others to build the demonstration. I wanted to share a few details with you. (1) We are implementing the Web Services binding for CMIS. While REST would be better for what we are doing, it was felt that the Web Services binding would be easier for the development team to churn out. (2) As a result of that, the participating vendors are Alfresco, EMC, IBM, and Nuxeo. Microsoft wanted to participate but was not sure that their Web Services binding would be complete in time. (3) Each vendor will have two issues' worth of articles from AIIM's bi-monthly publication, Infonomics. In addition, each vendor is welcome to add their own white papers and collateral to the system. (4) Users will search on metadata and/or full text. All searches will be round-robin sorted so that each repository has multiple hits on the first page, assuming that they have any content that meets the criteria. (5) The system is being developed in .NET because we were able to identify a free hosting server that could support the effort. (6) We, including myself, are going to be at the Expo on April 2nd to talk about it. I'll share the exact time when I have it. That is about it. I'll be working and trying to get a basic search up this week. The second step will be performing this in a federated manner against multiple repositories. I'll share the journey as it unfolds. Until then, here is a modified version of the metadata model..." [Object: AIIMContent... details]

WS-Discovery and WS-DeviceProfile Public Review
Mark Little, InfoQ

"We've discussed a number of WS-* standards and specifications over the years, but ones that haven't come up before are WS-Discovery and the Devices Profile for Web Services (DPWS), which went for OASIS standardization in late 2008. As the FAQ for the technical committee states: 'This technical committee aims to standardize the WS-Discovery, SOAP-over-UDP and Devices Profile for Web Services (DPWS) specifications. [...] At a high level the purpose of the TC is to standardize an interoperable way to discover Web services, be they enterprise services or embedded in devices, in an ad-hoc network or a carefully managed and controlled network. The other major goal of the TC is to define a lightweight, interoperable profile of Web services standards for communicating with Web services embedded in devices such as printers, scanners, conference room projectors, and many others...' Despite its relatively low-key venture into the field of WS-* compared to some other standards/specifications, WS-Discovery has certainly been seeing a fair bit of interest over the years. As Jesus Rodriguez says when talking about its inclusion within WCF: 'Contrary to other WS-* protocols, WS-Discovery has found great adoption among network device builders, as it allows streamlining the interactions between these types of devices. For instance, a printer can use WS-Discovery to announce its presence on a network so that it can be discovered by the different applications that require printing documents. Windows Vista's contact location system is another example of a technology based on WS-Discovery...' [...] At the moment it does seem like Microsoft is doing the majority of the work around this standardization effort. So one way or another WS-Discovery is likely to make its way into a desktop near you at the very least. Whether it goes further will depend on other factors of course..."
[Note: WS-DD TC members include representatives from CA, Canon, CheckMi, Fuji Xerox, IBM, Konica Minolta, Lexmark International, Microsoft, Novell, Odonata, Progress Software, Red Hat, Ricoh, Schneider Electric, Software AG, TU Dortmund, University of Rostock, and WSO2. The public review period ends April 03, 2009.]
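The ad-hoc discovery described above starts with a client multicasting a Probe message over SOAP-over-UDP; matching services answer with a unicast ProbeMatch. The sketch below builds such a Probe envelope. Note the hedges: the namespace URIs are from the pre-OASIS 2005 submission (the OASIS TC drafts relocate them under docs.oasis-open.org), and the "ex:Printer" type is a made-up placeholder:

```python
import uuid


def build_probe(types="ex:Printer"):
    """Build a WS-Discovery Probe envelope. A client multicasts this over
    UDP to 239.255.255.250:3702; matching services reply with ProbeMatch."""
    return f"""<soap:Envelope
  xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
  xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing"
  xmlns:wsd="http://schemas.xmlsoap.org/ws/2005/04/discovery">
  <soap:Header>
    <wsa:To>urn:schemas-xmlsoap-org:ws:2005:04:discovery</wsa:To>
    <wsa:Action>http://schemas.xmlsoap.org/ws/2005/04/discovery/Probe</wsa:Action>
    <wsa:MessageID>urn:uuid:{uuid.uuid4()}</wsa:MessageID>
  </soap:Header>
  <soap:Body>
    <wsd:Probe>
      <wsd:Types>{types}</wsd:Types>
    </wsd:Probe>
  </soap:Body>
</soap:Envelope>"""


probe = build_probe()
```

An empty Types element matches any service, which is how general-purpose discovery tools enumerate everything on the local link.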

See also: the WS-DD TC specifications public review

Taking XML Validation to the Next Level: Introducing CAM
Michael Sorens

Validating an XML document entails confirming that the document is both well-formed and conforms to a specific set of rules specified with a Document Type Definition (DTD), an XML Schema, or (as introduced in this article) a CAM template. DTD was the earliest specification. DTDs provided useful but limited capabilities, letting you validate XML document structure but very little in the way of semantics. Next came XML Schema, which offered more flexibility and capability, improved support for structure, and good (but not great) support for semantics. Schematron [DSDL Part 3], RELAX NG, and others have attempted to improve the semantic support, but none has caught on in a big way. Now a technology called Content Assembly Mechanism (CAM) is being developed under the aegis of OASIS... CAM is more than just another schema language. It was designed to better address business information-exchange and interoperability requirements. CAM provides a powerful mechanism for validating XML both structurally and semantically, in a concise, easy-to-use, easy-to-maintain format. It provides a context mechanism: a way to dynamically adjust what should be considered a valid XML instance based upon other parts of the XML itself or external parameters. CAM is an exciting technology with much promise, but it is a nascent technology, which can be both good and bad. Things move fast with CAM development, thus you may notice frequent "at the time of writing" disclaimers in this article. However, the chances are good that the development team will act upon some of the problems discussed here and fix them before you ever have a chance to encounter them... In the next part of this article you'll see much more of CAM's expressive power.
Additionally, you'll see much more in-depth discussion of practical techniques for developing templates and rules including: leveraging common structure and common rules; conditionalizing validation based on either internal or external factors; detailed comparison to XSD regarding datatypes, compositors, and cardinality; and finally, some pitfalls to avoid.
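To make the "structure plus context rules" idea concrete, here is a rough sketch of what a CAM template looks like: an example instance shape, plus XPath-driven constraint rules, including one conditional rule that adjusts validity based on the content itself. This is illustrative only; the element names, attributes, and action syntax follow the OASIS CAM 1.x drafts from memory and may differ from the current specification:

```xml
<as:CAM xmlns:as="http://www.oasis-open.org/committees/cam"
        CAMlevel="1" version="1.0">
  <!-- The structure section holds an example of the target XML instance -->
  <as:AssemblyStructure>
    <as:Structure ID="order" taxonomy="XML">
      <order>
        <shipDate/>
        <item qty=""/>
        <approval/>
      </order>
    </as:Structure>
  </as:AssemblyStructure>
  <!-- The context section attaches validation rules via XPath -->
  <as:BusinessUseContext>
    <as:Rules>
      <as:default>
        <as:context>
          <as:constraint action="makeMandatory(//order/item)"/>
          <as:constraint action="setDateMask(//order/shipDate,YYYY-MM-DD)"/>
          <!-- conditional rule: context dynamically adjusts what is valid -->
          <as:constraint condition="//order/item/@qty > 100"
                         action="makeMandatory(//order/approval)"/>
        </as:context>
      </as:default>
    </as:Rules>
  </as:BusinessUseContext>
</as:CAM>
```

The conditional constraint is the part neither DTD nor XML Schema can express directly: validity of one element hinges on the value of another.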

See also: the OASIS CAM Wiki


XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation
Microsoft Corporation
Oracle Corporation
Sun Microsystems, Inc.
