A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover
This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation http://www.microsoft.com
Headlines
- Open Web Foundation (OWF) Publishes Contributor/Specification Agreements
- UOML (Unstructured Operation Markup Language) Part 1 V1.0 Draft Errata
- W3C Launches Object Memory Modeling Incubator Group
- Revised Internet Draft for OAuth Use Cases
- OAuth from XQuery
- IETF ECRIT Working Group Draft: Trustworthy Location Information
- Simulate XQuery and XInclude Functionality with PHP
Open Web Foundation (OWF) Publishes
Contributor/Specification Agreements
David Rudin, OWF Legal Drafting Committee Announcement
"The Open Web Foundation Agreement (OWF) Legal Drafting Committee is pleased to announce the publication of the set of OWF v1.0 agreements for public review. The public review period will close October 08, 2010. At that point, the Drafting Committee will consider and respond to all comments, and will refer a final version of the agreements to the OWF Board of Directors for final approval. OWF "is aimed at building a lightweight framework to help communities deal with the legal requirements necessary to create successful and widely adopted specifications."
This set of agreements includes the following documents: (1) Contributor License Agreement ('CLA') - Copyright - Draft 1.0. This is a copyright-only Contributor License Agreement ('CLA'), similar to the previously approved 0.9 CLA, updated to conform to the language in the proposed 1.0 agreements. (2) CLA - Copyright and Patent - Draft 1.0. This CLA covers both copyright and patents for contributions. (3) Open Web Foundation Agreement ('OWFa') - Draft 1.0. This is an updated version of the OWFa and is intended to cover the entire specification. (4) OWFa Patent only - Draft 1.0. This agreement is for use in contexts where copyright is already covered by a separate grant. For example, this agreement could be used for making a commitment to a standards body where copyright is already covered by the organization's rules...
Both the Open Web Foundation Agreement v 1.0 (OWFa) and the OWF Contributor License Agreement v 1.0 (CLA) apply to Specifications that are intended to be implemented and used in computer software. By signing the CLA, a Bound Entity grants its patent and copyright intellectual property rights so that others around the world can freely include the Bound Entity's Contributions — and elaborate on them — in the community effort to write and implement the Specification containing those Contributions. A single signed CLA covers all future Contributions of that Bound Entity to that Specification.
By signing the OWFa, a Bound Entity grants its patent and copyright intellectual property rights so that others can freely implement the entire final Specification as it is published, regardless of who contributed what portions. These agreements provide necessary copyright and patent licenses, in effect allowing contributors and supporters who helped to create and who formally support a Specification to protect the Specification against infringement lawsuits over the intellectual property in that Specification..."
See also: the FAQ document
UOML (Unstructured Operation Markup Language) Part 1 V1.0 Draft Errata
Staff, OASIS Announcement
Members of the OASIS Unstructured Operation Markup Language Extended (UOML-X) TC have announced approval of a set of errata (UOML (Unstructured Operation Markup Language) Part 1 Version 1.0 Draft Errata) for the specification UOML (Unstructured Operation Markup Language) Part 1 Version 1.0 which was approved as an OASIS Standard on October 10, 2008. The Committee Draft errata document and associated files are approved for public review through October 09, 2010.
Background: "The UOML Part 1 v1.0 OASIS Standard was submitted to ISO/IEC JTC1 for approval. The original ballot failed to receive sufficient yes votes to pass and, as a result, a Ballot Resolution Meeting was scheduled to deal with the comments that were submitted by the National Bodies. That meeting was held in Tokyo, Japan, on 10-11 September 2010. The agreed-upon resolution of those comments require many changes to the original specification; in order to ensure that there is no deviation between what may become an ISO Standard and the OASIS Standard, an errata incorporating each of those changes must be prepared, submitted for review, and further approved by the UOML-X Technical Committee. The review package includes (links to) the original OASIS Standard, the Committee Draft Errata, and the Disposition of Comments document. Each item in the Errata document is commented, noting the associated National Body (abbreviated NB) comment found in the Disposition of Comments."
From the Errata document Summary: "There is a proposal to edit the UOML specification substantially. The specification will be reformatted, restructured and thoroughly edited. New normative wording will be added for preciseness (however, breaking changes have been minimized). This errata document will be extensive in nature because the resulting update to the specification will be quite different from the 1.0 standard, especially from a clause arrangement and formatting perspective. However, there are no breaking changes to implementers amongst all of these changes. The flow of this errata document attempts to be in order of the original specification while showing the new structure of any updated specification..."
UOML is an "interface standard to process unstructured document; it plays the similar role as SQL (Structured Query Language) to structured data. UOML is expressed with standard XML, featuring compatibility and openness. UOML deals with layout-based document and its related information (such as metadata, rights, etc.) Layout-based document is two dimensional, static paging information, i.e. information can be recorded on traditional paper. The software which implements the UOML defined function, is called DCMS, applications can process the document by sending UOML instructions to DCMS. UOML first defines abstract document model, then operations to the model. Those operations include read/write, edit, display/print, query, security control; it covers the operations which required by all different kinds of application software to process documents. UOML is based on XML description, and is platform-independent, application-independent, programming language-independent, and vendor neutral. This standard will not restrict manufacturers to implement DCMS in their own specific way..."
See also: the OASIS Standard version of October 10, 2008
W3C Launches Object Memory Modeling Incubator Group
Staff, W3C Announcement
W3C has announced the creation of the Object Memory Modeling Incubator Group, initially sponsored by the German Research Center for Artificial Intelligence (DFKI GmbH), SAP AG, and Siemens AG. The Object Memory Modeling Incubator Group is chartered through September 2011 to "define an object memory format, which allows for modeling of events or other information about individual physical artifacts—ideally over their lifetime—and which is explicitly designed to support data storage of those logs on so-called "smart labels" attached to the physical artifact...
Candidate labels range from barcodes, to RFID, to sensor nodes -- miniaturized embedded systems capable of performing some processing, gathering sensory information and communicating with other nodes. The object memory format implemented on a "smart label" can provide an object memory, which may serve as a data collector for real world data concerning a physical artifact. Associating semantic definitions with the data stored using the object memory format can help tie together the Semantic Web with the Internet of Things.
Today, heterogeneous standards are already in use to describe a physical artifact's individual characteristics in different application domains. The envisioned object memory format has to complement and embrace such standards dedicated to the description of physical items. In order to facilitate interoperability in scenarios comprising several application domains (e.g., business processes covering production and logistics) and open-loop scenarios (e.g., production lines with highly varying process steps), the object memory format should provide a standardized way to organize and access the selected data independent from the application domain. Furthermore, it should function as a technology-neutral layer for delivering content from physical artifacts to applications in business processes ranging from product lifecycle management to consumer support...
An XML-based object memory format should provide a flexible and extensible approach, such that any potential owner of the object will be able to enrich the representation with additional information in arbitrary formats. This format should explicitly support the temporal and incremental aspects of information accumulation along the object's lifecycle and allow for describing the state of the object at different points in time. In addition, it should allow for describing active components, such as sensors, employed by the object to acquire information from its environment. Technically, the format defines the structural characteristics of an object memory. The format should be designed in a way which facilitates its adoption on technically limited information storage devices such as RFID chips. Further, it should enable an efficient information exchange between objects..."
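The incubator group has not yet defined the format, so the following Python sketch is purely illustrative of the charter's requirements (incremental, timestamped accumulation of entries from different contributors over an artifact's lifecycle). Every element name here ("memory", "entry", "payload") is a hypothetical placeholder, not the eventual W3C vocabulary.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def append_entry(memory, contributor, payload, mime="text/plain"):
    """Append a timestamped block to an object's memory, modeling the
    incremental, lifecycle-oriented accumulation the charter describes."""
    entry = ET.SubElement(memory, "entry", {
        "contributor": contributor,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    payload_el = ET.SubElement(entry, "payload", {"type": mime})
    payload_el.text = payload
    return entry

memory = ET.Element("memory")  # one memory per physical artifact
append_entry(memory, "factory", "assembled, lot 42")
append_entry(memory, "sensor-7", "23.5", mime="application/x-temperature")
print(ET.tostring(memory, encoding="unicode"))
```

Each entry carries its own timestamp and payload type, so a later reader can reconstruct the state of the object at different points in time, and an active component such as a sensor appears simply as one more contributor.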
See also: the W3C Incubator Activity
Revised Internet Draft for OAuth Use Cases
Torsten Lodderstedt and Zachary Zeltsan (eds), IETF Internet Draft
Members of the IETF Open Authentication Protocol (OAuth) Working Group have published a revised version of OAuth Use Cases, updating the previous draft of May 18, 2010. The major contributors of the use cases for this document include representatives from Deutsche Telekom AG, Sonoa Systems, Google, NewBay Software, Microsoft, Facebook, Yahoo!, Twitter, PayPal, and Alcatel-Lucent—along with independent/individual contributors. The objective in this document is to identify the use cases that will be a base for deriving the OAuth requirements. The provided list of use cases is based on the Internet-Drafts of the OAuth working group and discussions on the group's mailing list.
The need for documenting the OAuth use cases was discussed at the OAuth WG virtual meetings, on the group's mailing list, and at the IETF 77 and IETF 78. This Internet-Draft describes such use cases. The objective of the draft is to initiate discussion that will lead to defining a set of the use cases that the OAuth specifications should support.
Use case examples: Web server; User-agent; Mobile App; Device; Client password credentials; Assertion; Content manager; Access token exchange; Multiple access tokens; Gateway for browser-based VoIP applets; Signature with asymmetric secret...
For instance: (1) Web server: "Alice accesses an application running on a web server at www.printphotos.example.com and instructs it to print her photographs that are stored on a server www.storephotos.example.com. The application at www.printphotos.example.com receives Alice's authorization for accessing her photographs without learning her authentication credentials with www.storephotos.example.com..." (2) User-agent: "Alice has installed on her computer a gaming application. She keeps her scores in a database of a social site at www.fun.example.com. In order to upload Alice's scores, the application gets access to the database with her authorization..."
See also: the IETF Open Authentication Protocol (OAuth) Working Group
OAuth from XQuery
Norm Walsh, Blog
"A short while ago, Twitter disabled basic authentication. That means that all clients that want to talk to the API must use OAuth. Naturally, that killed my microblogging backup tool, as narrated in a series of essays I am painfully aware is as yet still unfinished... To work around this problem, I implemented XQuery-OAuth. Fair warning: there are a couple of MarkLogic-specific functions in there, but it shouldn't be hard to adapt to other systems if you need to.
The 'oauth.xqy' module provides a function for accessing web services authenticated with OAuth. I'm going to assume you know at least as much about OAuth as I do. That's a pretty safe bet since I don't know all that much. The heart of the API is the 'service-provider' document. I've attempted to document this format with a schema... This document provides the API with the information it needs to contact the service provider for a request token, user authorization, user authentication, and an access token. It describes the signature methods that the service provider accepts and the application-level authentication of the caller.
In theory, this information is sufficient to perform the entire hand-shaking process necessary to authenticate a user. In practice, I've only tested a small part of the API in anger. For me, the hardest part of using OAuth authenticated services for my own applications is getting past the initial handshake stage. In brief, if you start with a consumer key and a consumer secret (the things that authenticate the application), you can extract from the service an access token and access token secret that allow you to make authenticated calls to the service provider's API...
I've used a Perl script [here] to successfully extract the access information from Twitter. After you have all the tokens, you can issue a signed request by calling oa:signed-request. For example, suppose that I want to get the first 25 statuses on my user timeline from Twitter. I'd [say...] I already had a single entry-point for making API calls to twitter, so as soon as I fixed that function, my whole system started working again. Scant on detail, I know, but I hope this helps someone..."
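For readers outside the MarkLogic world, here is a sketch in Python of the signing step that a call like Walsh's oa:signed-request has to perform under the hood, following OAuth 1.0a (RFC 5849) HMAC-SHA1. The keys, token values, and the Twitter URL below are placeholders; a real call would use the consumer key/secret and the access token and token secret obtained during the handshake Walsh describes.

```python
import base64
import hashlib
import hmac
import secrets
import time
from urllib.parse import quote

def pct(s):
    # Percent-encode per RFC 5849: only unreserved characters stay bare.
    return quote(s, safe="~")

def sign_request(method, url, params, consumer_secret, token_secret=""):
    """Return the base64-encoded HMAC-SHA1 signature for an OAuth 1.0a request."""
    # 1. Normalize parameters: encode, sort, join as key=value pairs with &.
    norm = "&".join(f"{pct(k)}={pct(v)}" for k, v in sorted(params.items()))
    # 2. Signature base string: METHOD&url&params, each component encoded.
    base = "&".join([method.upper(), pct(url), pct(norm)])
    # 3. Signing key: consumer_secret&token_secret, both encoded.
    key = f"{pct(consumer_secret)}&{pct(token_secret)}"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Placeholder values -- the URL mirrors the 2010-era Twitter API shape:
params = {
    "oauth_consumer_key": "consumer-key",
    "oauth_token": "access-token",
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_timestamp": str(int(time.time())),
    "oauth_nonce": secrets.token_hex(8),
    "count": "25",  # first 25 statuses on the user timeline
}
sig = sign_request("GET",
                   "https://api.twitter.com/1/statuses/user_timeline.json",
                   params, "consumer-secret", "token-secret")
```

The resulting signature goes into the oauth_signature parameter (or the Authorization header) of the actual request; everything else in the flow is ordinary HTTP.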
IETF ECRIT Working Group Draft: Trustworthy Location Information
Hannes Tschofenig, Henning Schulzrinne, Bernard Aboba; IETF Internet Draft
Members of the IETF Emergency Context Resolution with Internet Technologies (ECRIT) Working Group have published an initial level -00 Internet Draft for Trustworthy Location Information. The document lists different threats, describes an adversary model, outlines three frequently discussed solutions, and discusses operational considerations.
From the document 'Introduction': "Much of the focus in trustable networks has been on ensuring the reliability of personal identity information or verifying privileges. However, in some cases, access to trustworthy location information is more important than identity since some services are meant to be widely available, regardless of the identity of the requestor. Emergency services, such as fire, ambulance, and police, as well as commercial services such as food delivery and roadside assistance, are among those. Customers, competitors or emergency callers may lie about their location to harm the service provider or to deny services to others, by tying up the service capacity. In addition, if third parties can modify the information, they can deny services to the requestor.
Physical security is often based on location. As a trivial example, light switches in buildings are not typically protected by keycards or passwords, but are only accessible to those within the perimeter of the building. Merchants processing credit card payments already use location information to estimate the risk that a transaction is fraudulent, based on the HTTP client's IP address (that is then translated to location). In all these cases, trustworthy location information can be used to augment identity information or, in some cases, avoid the need for role-based authorization.
A number of standardization organizations have developed mechanisms to make civic and geodetic location available to the end host. Examples of such protocols are LLDP-MED, DHCP extensions, HELD, or the protocols developed within the IEEE as part of their link-layer specifications. The server offering this information is usually called a Location Information Server (LIS). More common with high-quality cellular devices is the ability for the end host itself to determine its own location using GPS. The location information is then provided, by reference or value, to the service-providing entities, i.e. location recipients, via application protocols, such as HTTP, SIP or XMPP... We use emergency services as an example to illustrate the security problems, as the problems have been typically discussed in that context since the stakes are high, but the issues apply also to other examples as cited earlier...."
See also: the IETF Emergency Context Resolution with Internet Technologies (ECRIT) Working Group
Simulate XQuery and XInclude Functionality with PHP
Colin Beckingham, IBM developerWorks
"XInclude and XQuery are XML tools that help web programmers process data dynamically. XInclude lets you treat multiple XML files as if they were one file, and XQuery can process the combined data and prepare it for inclusion into output for web-page display. Together, they perform this service elegantly and efficiently with few lines of code.
Most browsers can display and process XML files either directly or in cooperation with XSL templates. In an ideal world, browsers would understand XQuery and XInclude directly too. But at this point they support these tools only by placing unreasonable demands on users—for example, by requiring them to load experimental add-ons. Fetching the data from widely different sources and combining them into one large data set for processing can be a painstaking task for the web programmer.
Through a hypothetical business example, this article first shows you the strength of the combination of XQuery and XInclude. You'll learn how to use PHP to simulate the functionality that XQuery and XInclude provide. Moving all the data processing to the server side gives you a workaround to limited browser support for XQuery and XInclude. Another benefit is that PHP gives you much finer control over the final output presentation.
The programmer's concern is to compare PHP coding using the readily available SimpleXML libraries (on the one hand) with PHP and intermediate tools provided by special XInclude and XQuery libraries (on the other). A library of special tools is only of value if it eases the programmer's burden, reducing and clarifying the code required to get a job done. In the case of includes, the insertion of data from other files is rather straightforward and does not require much coding with either method. In the case of queries, XQuery libraries can replace quite a lot of code otherwise required ('for' loops, and so on) by the PHP+SimpleXML method. However, the more condensed the data-retrieval code is, the less opportunity there is to use PHP for its other capabilities. For example, you could reduce a query to one line of XQuery, replacing 20 lines of PHP+SimpleXML. The opportunity cost is that you forgo the chance to easily and clearly insert other statements among those 20 separate statements. This is the trade-off..."
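The article's technique is in PHP; the same simulation can be sketched in Python so the idea is visible outside PHP. The sketch below replaces xi:include elements with the referenced documents (a simplified stand-in for real XInclude processing, with no fallback or xpointer support), then runs the "query" step as an ordinary comprehension, which is what an XQuery FLWOR expression would condense to one line. The file names and element names ("east.xml", "sale") are invented for the example.

```python
import xml.etree.ElementTree as ET

XI = "{http://www.w3.org/2001/XInclude}include"

def expand_includes(root, loader):
    """Replace each xi:include element with the root of the referenced
    document, recursing into non-include children."""
    for i, child in enumerate(list(root)):
        if child.tag == XI:
            included = loader(child.get("href"))
            root.remove(child)
            root.insert(i, included)
        else:
            expand_includes(child, loader)
    return root

# In place of files on disk, a loader over in-memory documents:
docs = {"east.xml": "<sales><sale amount='10'/></sales>",
        "west.xml": "<sales><sale amount='32'/></sales>"}
loader = lambda href: ET.fromstring(docs[href])

combined = expand_includes(ET.fromstring(
    "<report xmlns:xi='http://www.w3.org/2001/XInclude'>"
    "<xi:include href='east.xml'/><xi:include href='west.xml'/></report>"),
    loader)

# The query step: the one-liner an XQuery FLWOR expression would be.
total = sum(int(s.get("amount")) for s in combined.iter("sale"))
print(total)
```

Python's standard library in fact ships a helper for the include step, xml.etree.ElementInclude, which does the same substitution against files on disk; the hand-rolled version above just makes the mechanics explicit.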
See also: XML Inclusions (XInclude) Version 1.0 Second Edition
Sponsors
XML Daily Newslink and Cover Pages sponsored by:
IBM Corporation | http://www.ibm.com |
ISIS Papyrus | http://www.isis-papyrus.com |
Microsoft Corporation | http://www.microsoft.com |
Oracle Corporation | http://www.oracle.com |
Primeton | http://www.primeton.com |
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/