XML Daily Newslink. Friday, 03 October 2008

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Primeton http://www.primeton.com



Beginner's Guide to OAuth, Part III: Security Architecture
Eran Hammer-Lahav, OAuth Tutorial

As an authorization delegation protocol, OAuth must be secure and allow the Service Provider to trust the Consumer and validate the credential provided to gain access. To accomplish that, OAuth defines a method for validating the authenticity of HTTP requests. This method is called Signing Requests, and to understand it we must first explore the security features and architecture of the protocol, which is the focus of this part of the Beginner's Guide. In the following part we will explore how all this comes together and translates into the OAuth signature workflow, using interactive examples.

OAuth's stated objective is to create an 'authorization delegation protocol'. Allowing one party to access someone else's resources on their behalf is the core of the OAuth protocol and the void it seeks to fill. In this delegated access scenario, also known as the 3-legged scenario, the three parties (legs) involved are the Service Provider, the Consumer, and the User. Since requests are made only by the Consumer, it needs a way to authenticate itself with the Service Provider, but also to prove its authorization to access User data. This requires OAuth to support an HTTP request carrying two sets of credentials...

HTTP defines an authorization method called 'Basic', which is commonly used by many sites and APIs. 'Basic' works by sending the username and password in plain text with each request. When not used over HTTPS, 'Basic' suffers from significant security flaws and limitations, but for this discussion we will focus on three...

The OAuth signature method was designed primarily for insecure communications, mainly non-HTTPS. HTTPS is the recommended solution for preventing man-in-the-middle (MITM) attacks, eavesdropping, and other security risks. However, HTTPS is often too expensive for many applications, both to set up and to maintain. When OAuth is used over HTTPS, it offers a simple method for a more efficient implementation, called PLAINTEXT, which offloads most of the security requirements to the HTTPS layer. It is important to understand that PLAINTEXT should not be used over an insecure channel. This tutorial will focus on the methods designed to work over an insecure channel: HMAC-SHA1 and RSA-SHA1...
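The signing mechanics described above are concrete enough to sketch in code. Below is a minimal Python sketch of the HMAC-SHA1 method as OAuth Core 1.0 defines it: build a signature base string from the HTTP method, the request URL, and the normalized parameters, then key HMAC-SHA1 with the concatenated consumer and token secrets. All keys, secrets, and parameter values here are hypothetical.

```python
import base64
import hashlib
import hmac
from urllib.parse import quote


def percent_encode(s: str) -> str:
    # OAuth requires strict RFC 3986 encoding: only unreserved
    # characters (A-Z a-z 0-9 - . _ ~) are left bare.
    return quote(s, safe="")


def hmac_sha1_signature(method, base_url, params, consumer_secret, token_secret=""):
    # 1. Normalize the request parameters: percent-encode each name and
    #    value, sort by name (then value), and join as name=value pairs.
    encoded = sorted((percent_encode(k), percent_encode(v)) for k, v in params.items())
    normalized = "&".join(f"{k}={v}" for k, v in encoded)
    # 2. The signature base string ties together the method, the URL
    #    (without the query string), and the normalized parameters.
    base_string = "&".join(
        [method.upper(), percent_encode(base_url), percent_encode(normalized)]
    )
    # 3. The key is the consumer secret and token secret joined by '&';
    #    the token secret is empty until a token has been issued.
    key = f"{percent_encode(consumer_secret)}&{percent_encode(token_secret)}"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()


signature = hmac_sha1_signature(
    "GET",
    "http://photos.example.net/photos",
    {
        "oauth_consumer_key": "consumer-key",
        "oauth_token": "access-token",
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": "1223049600",
        "oauth_nonce": "random-nonce",
        "oauth_version": "1.0",
        "file": "vacation.jpg",
    },
    consumer_secret="consumer-secret",
    token_secret="token-secret",
)
print(signature)  # sent as the oauth_signature request parameter
```

The Service Provider repeats the same computation with its own copy of the secrets; a matching signature proves both the Consumer's identity and its authorization without either secret crossing the wire.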

See also: the OAuth web site


How to GET a Cup of Coffee
Jim Webber, Savas Parastatidis, Ian Robinson; InfoQ

The impact of the Web is still widely misunderstood and underestimated in enterprise computing. Even those who are Web-savvy often struggle to understand that the Web isn't about middleware solutions supporting XML over HTTP, nor is it a crude RPC mechanism. This is a shame, because the Web has much more value than simple point-to-point connectivity; it is in fact a robust integration platform. In this article we'll showcase some interesting uses of the Web, treating it as a pliant and robust platform for doing very cool things with enterprise systems. And there is nothing that typifies enterprise software more than workflows... We'll show how Web techniques can be used with all the dependability associated with traditional EAI tools, and how the Web is much more than XML messaging over a request/response protocol! We'll apologise in advance for taking liberties with the way Starbucks works, because our goal here isn't to model Starbucks completely accurately, but to illustrate workflows with Web-based services.

Since we're talking about workflows, it makes sense to understand the states from which our workflows are composed, together with the events that transition the workflows from state to state. In our example there are two workflows, which we've modelled as state machines and which run concurrently. One models the interaction between the customer and the Starbucks service; the other captures the set of actions performed by a barista...

Handing over the coffee brings us to the end of the workflow. We've ordered, changed (or been unable to change) our order, paid, and finally received our coffee. On the other side of the counter, Starbucks has been equally busy taking payment and managing orders. We were able to model all the necessary interactions here using the Web. The Web allowed us to model some simple unhappy paths (e.g. not being able to change an in-process order or one that's already been made) without us having to invent new exceptions or faults: HTTP provided everything we needed right out of the box. And even on the unhappy paths, clients were able to progress towards their goal...

The Web even helped with non-functional aspects of the solution. Where we had transient failures, a shared understanding of the idempotent behaviour of verbs like GET, PUT and DELETE allowed safe retries; baked-in caching masked failures and aided crash recovery (through enhanced availability); and HTTPS and HTTP Authentication helped with our rudimentary security needs. Although our problem domain was somewhat artificial, the techniques we've highlighted are just as applicable in traditional distributed computing scenarios.
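To make the verb semantics concrete, here is a minimal Python sketch of the customer's half of such a workflow. The host, URIs, payloads, and status-code choices are invented for illustration (the article's Starbucks service is hypothetical to begin with), but the properties the sketch leans on are the ones described above: POST to create, GET to poll safely, PUT to retry safely, and stock HTTP status codes for the unhappy paths.

```python
import requests  # third-party HTTP client: pip install requests

BASE = "http://starbucks.example.org"  # hypothetical service

# POST creates a new order resource; the Location header tells us
# where the new resource lives.
created = requests.post(f"{BASE}/order", json={"drink": "latte", "size": "large"})
assert created.status_code == 201
order_uri = created.headers["Location"]

# GET is safe and idempotent, so polling the order state is harmless
# and can be served from caches.
state = requests.get(order_uri).json()

# PUT is idempotent: if this request times out, the client can simply
# retry it without risking a double update.
amended = requests.put(order_uri, json={"drink": "latte", "size": "small"})
if amended.status_code == 409:
    # Unhappy path: the barista already started the drink. A stock
    # HTTP status expresses the conflict; no custom fault is needed.
    print("Too late to change the order")

# Pay by PUTting a payment resource associated with the order.
requests.put(f"{order_uri}/payment", json={"amount": 4.50, "method": "card"})
```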


Build Configurable Workflows with WS-BPEL and IoC, Part 2
Bilal Siddiqui, IBM DeveloperWorks

Part 1 of this two-article series presented a two-layer model for analyzing dynamic business workflows and discussed how to implement the layers using Inversion of Control (IoC) and Web Services Business Process Execution Language (WS-BPEL). Here in Part 2, the author explains how to express a workflow's business logic using BPEL and shows how to deploy IoC and BPEL to control the behavior of your business workflows. He starts by explaining how to use BPEL to express a business workflow's dynamic behavior. Then he demonstrates the procedure for hosting your BPEL file on Apache ODE. Apache ODE (Orchestration Director Engine) executes business processes written to the WS-BPEL standard. It talks to web services, sending and receiving messages, and handles data manipulation and error recovery as described by your process definition. It supports both long- and short-lived process executions, orchestrating all the services that are part of your application. Next, he explains the steps needed to expose the services of the task layer's individual Java beans as Web services. Finally, he presents a working sample application based on BPEL and IoC.
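A process deployed to Apache ODE is exposed as an ordinary SOAP web service, so any HTTP client can drive it. The sketch below sends a hand-built SOAP envelope to a hypothetical process endpoint; the service path, namespace, and message shape are assumptions for illustration, not taken from the article, since they depend entirely on the WSDL of the deployed process.

```python
import urllib.request

# Hypothetical endpoint: ODE's Axis2-based distribution mounts
# deployed processes under /ode/processes/<service-name>.
ENDPOINT = "http://localhost:8080/ode/processes/OrderProcess"

SOAP_REQUEST = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:ord="http://example.com/order">
  <soapenv:Body>
    <ord:orderRequest>
      <ord:itemId>42</ord:itemId>
      <ord:quantity>3</ord:quantity>
    </ord:orderRequest>
  </soapenv:Body>
</soapenv:Envelope>"""

request = urllib.request.Request(
    ENDPOINT,
    data=SOAP_REQUEST.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": '""'},
)
with urllib.request.urlopen(request) as response:
    # The response body is whatever the process's <reply> activity returns.
    print(response.read().decode("utf-8"))
```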

See also: Part 1 of this article


Persistent Identifiers: Considering the Options
Emma Tonkin, Ariadne

Persistent identifiers (PIs) are simply maintainable identifiers that allow us to refer to a digital object: a file or set of files, such as an e-print (article, paper or report), an image, or an installation file for a piece of software. The only interesting persistent identifiers are also persistently actionable (that is, you can "click" them); however, unlike a simple hyperlink, persistent identifiers are supposed to continue to provide access to the resource even when it moves to other servers, or even to other organisations. A digital object may be moved, removed or renamed for many reasons. This article looks at the current landscape of persistent identifiers [in the digital library domain], describes several current services, and examines the theoretical background behind their structure and use. It raises issues of likely relevance to anybody who is considering deployment of a standard for their own purposes.

Technology cannot create a persistent identifier, in the digital library community's sense of the term. This is primarily because the longevity of each of these persistent identifier schemes (other than OpenURL) is closely linked to the information provider's long-term commitment to keeping records pertaining to its data available and up to date. Offering a persistent identifier for a document via a given resolver additionally implies entering into a long-term commitment with the organisation maintaining that resolver. However, picking the right standard is an important step in ensuring that the infrastructure remains available. Software ages with time, so a more complex infrastructure implies a greater commitment from the organisation that developed and made available the resolver package...

Several issues are particularly relevant to the design and adoption of persistent identifier systems:
(1) The actionability of the persistent identifier: can it be used directly? Does it do something when you copy it into a browser location bar and press 'enter', or must you copy it into a resolver service to retrieve a link to the digital object in question?
(2) The scope of the identifier standard: does it link only to digital objects, or can it be used more widely, for example as a semantically valid way of referring to physical objects within a given domain or description language?
(3) The architecture and infrastructure underlying the standard, which bear on reliability, maintenance cost, and risk.
(4) The status of the standard: is it a formal standard that has undergone the process of standardisation, a de facto standard, or an ad hoc approach?

There is likely to be space for several persistent identifier standards, both within the digital library world and elsewhere. Internet infrastructure in general will benefit from similar standards, and indeed many resolver services have sprung up that offer some of the functionality of the persistent identifier, such as TinyURL, SNURL, and elfurl.
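Point (1), actionability, is easy to demonstrate: an actionable identifier is one a plain HTTP client can dereference, with the resolver, not the identifier, tracking where the object currently lives. A minimal Python sketch, using the DOI Handbook's own DOI (10.1000/182) against the dx.doi.org resolver:

```python
import urllib.request


def resolve(actionable_pid: str) -> str:
    # The resolver answers with HTTP redirects; urlopen follows them,
    # and geturl() reports the final location of the object.
    with urllib.request.urlopen(actionable_pid) as response:
        return response.geturl()


# The DOI stays stable even if the handbook moves to another server;
# only the resolver's mapping has to be kept up to date.
print(resolve("http://dx.doi.org/10.1000/182"))
```

This is also where the article's caveat bites: the identifier is only as persistent as the organisation maintaining that resolver mapping.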

See also: Tim Berners-Lee on 'Cool URIs'


Binding Extensions to Web Distributed Authoring and Versioning (WebDAV)
Geoffrey Clemm, Jason Crawford (et al., eds), IETF Internet Draft

This specification extends the WebDAV Distributed Authoring Protocol (RFC 4918) to enable clients to create new access paths to existing resources. This capability is useful for several reasons. URIs of WebDAV-compliant resources are hierarchical and correspond to a hierarchy of collections in resource space. The WebDAV Distributed Authoring Protocol makes it possible to organize these resources into hierarchies, placing them into groupings, known as collections, which are more easily browsed and manipulated than a single flat collection. However, hierarchies require categorization decisions that locate resources at a single location in the hierarchy, a drawback when a resource has multiple valid categories. For example, in a hierarchy of vehicle descriptions containing collections for cars and boats, a description of a combination car/boat vehicle could belong in either collection. Ideally, the description should be accessible from both. Allowing clients to create new URIs that access the existing resource lets them put that resource into multiple collections. Hierarchies also make resource sharing more difficult, since resources that have utility across many collections are still forced into a single collection...

The BIND method defined here provides a mechanism that allows clients to create alternative access paths to existing WebDAV resources. HTTP and WebDAV methods work because there are mappings between URIs and resources: a method is addressed to a URI, and the server follows the mapping from that URI to a resource, applying the method to that resource. Multiple URIs may be mapped to the same resource, but until now there has been no way for clients to create additional URIs mapped to existing resources.
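Here is a minimal sketch of the BIND method applied to the car/boat example above: the request-URI names the collection that should gain the new binding, and the body says which path segment to create and which existing resource it should map to. The server and paths are invented for illustration; the DAV:bind, DAV:segment, and DAV:href elements follow the draft.

```python
import http.client

# Make /vehicles/boats/car-boat.html a second access path to the
# description that already lives under /vehicles/cars/.
BIND_BODY = """<?xml version="1.0" encoding="utf-8"?>
<D:bind xmlns:D="DAV:">
  <D:segment>car-boat.html</D:segment>
  <D:href>http://example.com/vehicles/cars/car-boat.html</D:href>
</D:bind>"""

conn = http.client.HTTPConnection("example.com")  # hypothetical WebDAV server
conn.request(
    "BIND",              # http.client passes extension methods through as-is
    "/vehicles/boats/",  # the collection receiving the new binding
    body=BIND_BODY,
    headers={"Content-Type": "application/xml; charset=utf-8"},
)
response = conn.getresponse()
print(response.status)  # 201 Created if the new binding was made
```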

See also: WebDAV news


Four Approaches to Implementing a Canonical Message Model in an ESB
Mei Y. Selvage, Greg Flurry (et al.), IBM DeveloperWorks

A canonical message model (CMM) is a crucial element of the enterprise service bus (ESB). This article explains how to achieve a CMM, highlights the characteristics of different approaches, and evaluates the pros and cons of each approach. An ESB must understand all the proprietary message models, and the ESB maps, or transforms, between those message models. In the example scenario, the ESB must provide transformations for each possible interaction. For each of the n requesters in this scenario, a separate transformation has to be specified to each of the m providers: m transformations for the first requester, m transformations for the second, and so on. Because there are n requesters, the total number of transformations is n * m. If one new provider is added and all requesters need to interact with it, n new transformations need to be added; if one new requester is added and needs to interact with all m providers, m transformations need to be added. As a consequence, extending the scope of such an architected integration solution is very costly. The recommended solution to this problem is to use a CMM...

A CMM standardizes the message models between service requesters and providers, mandating message consistency and improving message maintainability. From a technical perspective, a CMM comprises: (1) a defined set of semantics representing the business entities and their business attributes used in all messages; and (2) a defined set of messages with a specific syntax representing the business entities, each including a related set of the defined types, elements, and attributes, structured to provide a business document with a specific semantic meaning and context. It's important to understand that this doesn't mean that every requester or provider adopts the exact same message set, but that the messages are consistent and all based on the same types.
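The scaling argument is easy to check with a little arithmetic. A sketch follows: the n * m count for point-to-point integration is from the article; the n + m count for the canonical case (one map per party, to or from the CMM) is the usual counter-figure and an assumption here.

```python
def point_to_point(n_requesters: int, m_providers: int) -> int:
    # Every requester needs its own transformation to every provider.
    return n_requesters * m_providers


def canonical(n_requesters: int, m_providers: int) -> int:
    # Each party maps once, to or from the canonical message model.
    return n_requesters + m_providers


for n, m in [(3, 4), (10, 10), (50, 20)]:
    print(f"{n} requesters, {m} providers: "
          f"point-to-point={point_to_point(n, m)}, canonical={canonical(n, m)}")
# 3 requesters, 4 providers: point-to-point=12, canonical=7
# 10 requesters, 10 providers: point-to-point=100, canonical=20
# 50 requesters, 20 providers: point-to-point=1000, canonical=70
```

Adding a new provider then costs one new transformation instead of n, which is the maintainability claim the article makes.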


XMLHttpRequest Level 2 Draft Published
Anne van Kesteren (ed), W3C Technical Report

W3C has announced the publication of an updated Working Draft of "XMLHttpRequest Level 2." This specification enhances XMLHttpRequest with new features, such as cross-site requests, progress events, and the handling of byte streams for both sending and receiving. The document was produced by members of the Web Applications (WebApps) Working Group, part of the Rich Web Clients Activity in the W3C Interaction Domain. The Rich Web Clients Activity covers the work within W3C on Web Applications and Compound Document Formats. "Compound document" is the W3C term for a document that combines multiple formats, such as XHTML, SVG, SMIL and XForms. The W3C Compound Document Formats (CDF) Working Group is specifying the behavior of some format combinations, addressing the needs of an extensible and interoperable Web. "Web API" means the assorted scripting methods used to build rich Web applications, mashups, and Web 2.0 sites; standardizing them improves interoperability and reduces site development costs. "Web Application Formats" covers a variety of things, from XBL for skinning applications to Widgets for deploying small Web applications outside the browser...

With the ubiquity of Web browsers and Web document formats across a range of platforms and devices, many developers are using the Web as an application environment. Examples of applications built on rich Web clients include reservation systems, online shopping or auction sites, games, multimedia applications, calendars, maps, chat applications, weather displays, clocks, interactive design applications, stock tickers, currency converters, and data entry/display systems.

See also: W3C Rich Web Clients Activity


Jacobsen v. Katzer: A Big Change for Open Source
Bruce Perens, Datamation

An appeals court has erased most of the doubt around Open Source licensing, permanently, in a decision that was extremely favorable toward projects like GNU, Creative Commons, Wikipedia, and Linux. The man who prompted that decision could be described as the worst enemy a Free Software project could have. This is the story of how our community was able to benefit from that enemy.

For a decade there had been questions: Are Open Source licenses enforceable at all? Are their terms, calling for a patent detente or disclosure of source code, legal? Are they contracts, which require agreement by all parties to be valid, or licenses, which are binding even if you don't agree to them? What legal penalties can a Free Software developer employ: only token damages, or much more? [...] The one high-publicity case we've ever had, SCO's self-destructive pursuit of Linux users and IBM, established the originality of Linux, but didn't concern Free Software licensing. So we had waited 10 years for the magic lawsuit that would establish the legal solidity of Open Source licensing, and hadn't gotten it.

Enter the two opponents. On the left, Bob Jacobsen: by day on the staff of a government nuclear research lab, by night a model train hobbyist. Jacobsen built what might be the ultimate nerd product: "Java Model Railroad Interface" or "JMRI," computer software for controlling model trains. Jacobsen gave JMRI to the world as Free Software, never expecting to make a cent from the project, asking only to share the software he created with other train hobbyists. On the right, Matthew Katzer: owner of a company that sells model train software, who has filed patents that essentially cover all use of computers to control model trains. Katzer has brought, and later withdrawn, a few lawsuits against other model train hobbyists, who in turn allege that the technology Katzer claims to have invented recently is not his and has actually existed since the 1960s...

What the appeals court found was, essentially, that the Free Software license is a license rather than a contract; that it does not require both parties to agree before it can be binding; that its terms can be enforced; that if you violate the license you are a copyright infringer; and that violation of an Open Source license causes real economic damage to the copyright holder, even though the copyright holder doesn't charge money for his software... We need to restore justice to the patent system, and we also need to take a good look at the motivation for software patents, which many economists and others feel do more to hurt innovation than to promote it.

See also: the article by Lawrence Rosen


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com
Sun Microsystems, Inc. http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/


