XML Daily Newslink. Wednesday, 11 August 2010

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
ISIS Papyrus http://www.isis-papyrus.com



Proposal for a Browser-Friendly, HTTP-Based Communication Protocol for Fine-Grained Information Exchange
Julian Reschke and David Nuescheler (eds), IETF/W3C Draft Proposal

A proposal from Julian Reschke and David Nuescheler has been published for new work on "an efficient, browser-friendly, HTTP-based communication protocol for fine-grained information exchange. HTTP/1.1 (RFC 2616) already contains a set of tools for modifying resources, namely the methods PUT, POST, and DELETE. Many systems have been built on top of this, most of them in an ad-hoc manner (which is ok when client and server are controlled by the same developers)...

We would like to cover some of the following use cases extending the resource-oriented model. (1) A simple JavaScript-based browser application should be able to read fine-grained information (comparable to WebDAV properties) using a defined JSON format that can be consumed in an intuitive fashion. (2) A simple HTML form should be able to write information in a patch-oriented manner, containing both binary (file) data and fine-grained, typed information, using a multipart POST. (3) A simple JavaScript application should be able to write information in a patch-oriented fashion, using a defined JSON-diff PATCH content type to update fine-grained information.

Several extensions/applications of HTTP are in this space, such as WebDAV (RFC 4918), the Atom feed format (RFC 4287), and AtomPub (RFC 5023). WebDAV and AtomPub have been very successful so far. WebDAV is used both as a plain remote filesystem protocol (clients ship with all major operating systems, and both Apache httpd and IIS support it) and for specific applications, such as versioning (Subversion) and calendaring (CalDAV). The same is true for AtomPub, which may not actually be used much for its original use case (feed authoring), but is used for many other things instead. Neither of these protocol specifications is easily consumed by websites and applications running in current browsers, and both require a lot of client-side scripting to cover even simple read and write use cases.

There is a proposal for a protocol called JSOP which addresses these use cases, and which we may want to consider as input for this work... It needs a data model (a collection model [hierarchy, naming] and a representation format), authoring through HTML forms and POST, either hardwired or discoverable URIs for inspecting collections, and improvements to WebDAV... Expected deliverables from this new activity would be: (1) Definition of a very simple data model and a representation format for it (JSON required, XML optional). (2) A format suitable for manipulating the data format above using PATCH (potentially tunneled through POST). (3) A binding from multipart/form-data POST to this model. (4) A separate (?) document explaining how these ingredients would be combined in practice..."
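
For concreteness, here is a hedged sketch of what deliverable (2), a PATCH carrying a JSON diff, might look like from a client. Everything concrete in it is an assumption: the resource URI, the media type, and the diff vocabulary are placeholders, since the proposal deliberately leaves all three to be defined.

    # Hypothetical JSON-diff PATCH in the spirit of the JSOP proposal.
    # The URI, media type, and diff vocabulary below are all made up.
    import json
    import urllib.error
    import urllib.request

    patch = {
        "set": {"title": "Revised title"},   # update one typed property
        "remove": ["obsoleteFlag"],          # delete another
    }

    req = urllib.request.Request(
        "http://example.org/repo/mynode",    # placeholder resource URI
        data=json.dumps(patch).encode("utf-8"),
        method="PATCH",
    )
    req.add_header("Content-Type", "application/vnd.example.json-diff")

    try:
        with urllib.request.urlopen(req) as resp:
            print(resp.status)               # a real server would apply the diff
    except urllib.error.HTTPError as err:
        print(err.code)                      # the placeholder host will refuse PATCH

This mirrors what the JavaScript application of use case (3) would issue from a browser via XMLHttpRequest; Python is used here only to keep the sketch self-contained.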

See also: the posting to the CMIS-Browser list


Carnegie Mellon: Turning the World Into a Sensor Network
John Cox, Network World

"Try to imagine a 'world littered with trillions' of wireless sensors. Now try to imagine the problems getting even a few thousand of them to work together in any kind of intelligible way so you can know if that interstate bridge is near collapse or the natural gas pipe behind a housing development has a crack in it or how dropping your AC temperature by 3 degrees during peak demand will clobber your electric bill...

Those are the problems that a new research project at Carnegie Mellon University (CMU) is going to explore. It has, as most such government-industry-academia joint efforts do, the cumbersome name of Pennsylvania Smart Infrastructure Incubator (PSII).

The basic idea: Bring together some smart people, give them state-of-the-art facilities and communications, and ask them to wrestle with how to build and run really big sensor networks that can deliver usable information... CMU already has a lot of practical experience in sensors. It has launched an internal project called Sensor Andrew, which is gradually burrowing a wireless sensor infrastructure into every campus building. So far, Sensor Andrew reaches five buildings on the Pittsburgh campus, each using the networks for different purposes, from tracking the locations of people to warning that a printer is still using maximum power, due to a low-toner alert, instead of shutting down...

The campus sensor network makes use of homegrown technology: a low-cost wireless mesh node called FireFly, and a real-time operating system specifically designed for such networks. Like other similar products, FireFly uses an IEEE 802.15.4 transceiver, good for 150 to 300 feet. It has a maximum raw data rate of 250 Kbps, an 8-bit microcontroller, and an SD flash card slot, and it processes data from optional on-board sensors: light, audio, temperature, humidity, and acceleration..."

See also: FireFly


Managing Semantics in XML Vocabularies: Legal and Legislative Domains
Gioele Barabucci, Luca Cervone, et al., Balisage 2010 Presentation

"Akoma Ntoso is an XML vocabulary for legal and legislative documents whose primary objective is to provide semantic information on top of a received legal text. There are three key aspects of legal documents on which Akoma Ntoso focuses: identification of structures, references to other legal documents and storage of non-authoritative annotations. Structures are identified and marked up according to an XML vocabulary based on common patterns found in legal documents. References to legal documents across countries are made using a common naming convention based on URIs. Third-party annotations and interpretations (broadly called metadata) are stored using and ontologically sound approach compatible with Topic Maps, OWL, and GRDDL.

The XML documents created according to the Akoma Ntoso specifications use a layered structure where each layer addresses a single problem: the text layer provides a faithful representation of the original content of the legal text, the structure layer provides a hierarchical organization of the parts present in the text layer, and the metadata layer associates information from the underlying layers with ontological information.
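
To make the layering concrete, the following sketch builds and queries such a document with Python's standard ElementTree. The element names are simplified stand-ins rather than the normative Akoma Ntoso schema; the point is only that the metadata layer references the endorsed text by ID instead of modifying it.

    # Simplified, non-normative illustration of the Akoma Ntoso layering.
    import xml.etree.ElementTree as ET

    doc = ET.fromstring("""
    <akomaNtoso>
      <act>
        <meta>
          <!-- metadata layer: a third-party annotation pointing at sec-1 -->
          <analysis source="#annotator"><ref href="#sec-1"/></analysis>
        </meta>
        <body>
          <!-- structure layer organizing the text layer -->
          <section id="sec-1">
            <num>1.</num>
            <content><p>The received, authoritative legal text.</p></content>
          </section>
        </body>
      </act>
    </akomaNtoso>
    """)

    # The annotation never touches the endorsed text; it only refers to it.
    print(doc.find(".//analysis/ref").get("href"))   # -> #sec-1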

This paper shows how the design choices fit the stated goals: using XML as the underlying mark-up format and having clearly separated layers allow documents to be preserved for long periods of time without modifications to the endorsed texts. Additionally, multiple agents can provide their own interpretations of certain legal aspects of the given legal text. Moreover, computer reasoners can extract semantic information from Akoma Ntoso documents and reason over them, with or without user-supplied ontologies.

The approach used by Akoma Ntoso allows the development of systems that use more sophisticated formal logic modelling frameworks, such as non-monotonic or non-deductive logics, in order to apply sophisticated legal reasoning theories better suited to the complex legal domain, filling the gap between the semantic web layers while preserving interdependency and expressiveness..."


Terminology for Talking About Privacy by Data Minimization
Andreas Pfitzmann and Marit Hansen (eds), TU Dresden Technical Report

Version 0.34 of a 98-page TU Dresden Technical Report has been published for: A Terminology for Talking About Privacy by Data Minimization: Anonymity, Unlinkability, Undetectability, Unobservability, Pseudonymity, and Identity Management.

"Based on the nomenclature of the early papers in the field privacy by data minimization, we develop a terminology which is both expressive and precise. More particularly, we define anonymity, unlinkability, linkability, undetectability, unobservability, pseudonymity (pseudonyms and digital pseudonyms, and their attributes), identifiability, identity, partial identity, digital identity and identity management. In addition, we describe the relationships between these terms, give a rationale why we define them as we do, and sketch the main mechanisms to provide for the properties defined...

Early papers from the 1980s about privacy by data minimization already deal with anonymity, unlinkability, unobservability, and pseudonymity, and introduce these terms within the respective context of proposed measures. We show relationships between these terms and thereby develop a consistent terminology. Then we contrast these definitions with newer approaches, e.g., from ISO IS 15408. Finally, we extend this terminology to identity (as the opposite of anonymity and unlinkability) and identity management. Identity management is a much younger and much less well-defined field, so a truly consolidated terminology for it does not yet exist. Nevertheless, after development and broad discussion since 2004, we believe this terminology to be the most consolidated one in this rapidly emerging field.

We develop this terminology in the usual setting of entities (subjects and objects) and actions, i.e., subjects execute actions on objects. In particular, subjects called senders send objects called messages to subjects called recipients using a communication network, i.e., stations send and receive messages using communication lines. For other settings, e.g., users querying a database or customers shopping in an e-commerce shop, the same terminology can be derived by abstracting away the special names 'sender', 'recipient', and 'message'. Irrespective of whether we speak of senders and recipients or generalize to actors and actees, we regard a subject as a human being (i.e., a natural person), a legal person, or a computer. We regard an organization not acting as a legal person neither as a single subject nor as a single entity, but as a (possibly structured) set of subjects or entities."

See also: the corresponding IETF Internet Draft


OASIS Members Submit Proposed Charter for WSS-M Technical Committee
Staff, OASIS Announcement

OASIS members have proposed the formation of a new Web Services Security Maintenance (WSS-M) Technical Committee. Supporting the TC Charter proposal are representatives from Fujitsu, IBM, Oracle, Microsoft, and NeuStar.

"The purpose of this TC is to perform ongoing maintenance on the OASIS Standards of Web Services Security 1.1 and token profiles produced by the Web Services Security (WSS) TC, which is now closed. The work is defined as: any drafting or development work to modify the indicated OASIS Standards that: (a) constitutes only error corrections, bug fixes or editorial formatting changes; (b) does not add any new features; (c) is within the scope of the Web Services Security TC that approved the OASIS Standard.

The Web Services Security 1.1 specifications and token profiles are: Web Services Security SOAP Message Security 1.1; Web Services Security Kerberos Token Profile 1.1; Web Services Security Rights Expression Language (REL) Token Profile 1.1; Web Services Security SAML Token Profile 1.1; Web Services Security SOAP Message with Attachments (SwA) Profile 1.1; Web Services Security Username Token Profile 1.1; Web Services Security X.509 Certificate Token Profile 1.1.

Operating under the 'Non-Assertion Covenant' IPR Mode, the group would create OASIS Standards incorporating Approved Errata or updated OASIS Standards to correct a number of currently known errors in the specifications to prepare for PAS Submission to ISO/IEC JTC 1. The goal is to produce Web Services Security 1.1 and token profiles as OASIS Standards incorporating Approved Errata. Projected completion is December 2010. Additionally: create other future Approved Errata, OASIS Standards incorporating Approved Errata, or updated OASIS Standards as required to correct other errors reported by ISO/IEC JTC 1 or any other sources..."

See also: the OASIS 'wss' specifications


Making HTTP Pipelining Usable on the Open Web
Mark Nottingham (ed), IETF Internet Draft

IETF has published an initial level -00 Informational Internet Draft Making HTTP Pipelining Usable on the Open Web. Abstract: "Pipelining was added to HTTP/1.1 as a means of improving the performance of persistent connections in common cases. While it is deployed in some limited circumstances, it is not widely used by clients on the open Internet. This memo suggests some measures designed to make it more possible for clients to reliably and safely use HTTP pipelining in these situations."

From the Introduction: "HTTP/1.1 added pipelining—that is, the ability to have more than one outstanding request on a connection at a particular time—to improve performance when many requests need to be made (e.g., when an HTML page references several images). Although not usable in all circumstances (e.g., POST, PUT and other non-idempotent requests cannot be pipelined), for the common case of Web browsing, pipelining seems at first like a broadly useful improvement—especially since the number of TCP connections browsers and servers can use for a given interaction is limited, and especially where there is noticeable latency present.
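
The mechanism itself is easy to demonstrate. The following minimal Python sketch pipelines two GET requests on one raw TCP connection, writing both before reading any response; the host is a placeholder, and real clients would need the kind of heuristics the draft discusses before doing this against arbitrary servers.

    # Two pipelined HTTP/1.1 requests on a single connection (illustrative only).
    import socket

    host = "example.com"                     # placeholder origin server
    pipelined = (
        f"GET / HTTP/1.1\r\nHost: {host}\r\n\r\n"
        f"GET /robots.txt HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    ).encode("ascii")

    with socket.create_connection((host, 80)) as s:
        s.sendall(pipelined)                 # both requests are on the wire at once
        chunks = []
        while True:
            data = s.recv(4096)              # responses must arrive in request order
            if not data:                     # 'Connection: close' ends the exchange
                break
            chunks.append(data)

    print(b"".join(chunks).decode("latin-1", "replace")[:120])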

Indeed, in constrained applications of HTTP such as Subversion, pipelining has been shown to improve end-user perceived latency considerably. However, pipelining is not broadly used on the Web today; while most (but not all) servers and intermediaries support pipelining (to varying degrees), only one major Web browser uses it in its default configuration, and that implementation is reported to use a number of proprietary heuristics to determine when it is safe to pipeline.

This memo characterises issues currently encountered in the use of HTTP pipelining, and suggests the use of mechanisms that, when used in concert, are designed to make its use more reliable and safe for browsers. It does not propose large protocol changes (e.g., out-of-order messages), but rather incremental improvements that can be deployed within the confines of existing infrastructure..."

See also: the HTML version


Web Security Context: User Interface Guidelines
Thomas Roessler and Anil Saldhana (eds), W3C Technical Report

Members of the W3C Web Security Context Working Group have released the specification Web Security Context: User Interface Guidelines as a final Recommendation. An accompanying Implementation Report for Web Security Context UI Guidelines presents the results of testing against the Proposed Recommendation, using basic and advanced conformance as defined in the document's conformance sections. In the report's 'Overview Table of Supported Features', results are tabulated for Opera, Google Chrome 5, and Firefox 3.6.

This specification deals with the trust decisions that users must make online, and with ways to support them in making safe and informed decisions where possible. In order to achieve that goal, the specification includes recommendations on the presentation of identity information by user agents. It also includes recommendations on conveying error situations in security protocols. The error-handling recommendations both minimize the trust decisions left to users and represent known best practice for inducing users toward safe behavior where they have to make these decisions.

This document specifies user interactions with a goal toward making security usable, based on known best practice in this area. The document is intended to provide user interface guidelines. Most sections assume the audience has a certain level of understanding of the core PKI (Public Key Infrastructure) technologies as used on the Web.

Since this document is part of the W3C specification process, it is written to clearly lay out the requirements and options for conforming to it as a standard. User interface guidelines that are not intended for use as standards do not have such a structure. Readers more familiar with that latter form of user interface guideline are encouraged to read this specification as a way to avoid known mistakes in usable security.

This specification comes with two companion documents: 'Web Security Experience, Indicators and Trust: Scope and Use Cases' documents the initial assumptions about the scope of the specification, along with an initial set of use cases the Working Group discussed. 'Web User Interaction: Threat Trees' documents the Working Group's initial threat analysis. The present specification is based on current best practices in deployed user agents and covers the use cases and threats in those documents to that extent..."

See also: the Implementation Report for Web Security Context UI Guidelines


Major Update to Access Control Service (ACS) Now Available
Justin Smith, Blog

Microsoft developers have announced a major update to ACS, available in the labs environment. The Access Control Service (ACS) "allows you to integrate Single Sign-On and centralized authorization into your web application. It works with most modern platforms and integrates with both web and enterprise identity providers...

A snapshot of what's in this release: (1) Integrates with Windows Identity Foundation (WIF) and tooling; (2) Out-of-the-box support for popular web identity providers including: Windows Live ID, Google, Yahoo, and Facebook; (3) Out-of-the-box support for Active Directory Federation Server v2.0; (4) Support for OAuth WRAP, WS-Trust, and WS-Federation protocols; (5) Support for the SAML 1.1, SAML 2.0, and Simple Web Token (SWT) token formats; (6) Integrated and customizable Home Realm Discovery that allows users to choose their identity provider; (7) An OData-based Management Service that provides programmatic access to ACS configuration; (8) A Web Portal that allows administrative access to ACS configuration...
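
As a hedged illustration of one item from this list, the Simple Web Token format in (5) lends itself to a compact sketch: an SWT is a form-encoded string whose trailing HMACSHA256 field signs everything before it. The signing key and claims below are invented for the example; real ACS-issued tokens use namespace-specific keys and carry additional claims.

    # Create and verify a Simple Web Token (SWT) HMAC; values are made up.
    import base64, hashlib, hmac, urllib.parse

    def swt_signature_valid(token: str, key: bytes) -> bool:
        body, sep, sig = token.rpartition("&HMACSHA256=")
        if not sep:                          # no signature field present
            return False
        expected = base64.b64encode(
            hmac.new(key, body.encode("utf-8"), hashlib.sha256).digest()
        ).decode("ascii")
        return hmac.compare_digest(urllib.parse.unquote(sig), expected)

    key = b"example-signing-key"             # hypothetical shared secret
    claims = "sub=alice&aud=http%3A%2F%2Fexample.org%2Frp&ExpiresOn=1300000000"
    sig = base64.b64encode(hmac.new(key, claims.encode(), hashlib.sha256).digest())
    token = claims + "&HMACSHA256=" + urllib.parse.quote(sig.decode("ascii"))

    print(swt_signature_valid(token, key))   # -> True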

ACS is compatible with virtually any modern web platform, including .NET, PHP, Python, Java, and Ruby. The CodePlex project contains the documentation and samples for the Labs release of ACS.

Most of the scenarios that involve ACS consist of four autonomous services: [a] Relying Party (RP): your web site or service; [b] Client: the browser or application that is attempting to gain access to the Relying Party; [c] Identity Provider (IdP): the site or service that can authenticate the Client; [d] ACS: the partition of ACS that is dedicated to the Relying Party. The core scenario is similar for web services and web sites, though the interaction with web sites utilizes the capabilities of the browser..."

See also: the Access Control Service Samples and Documentation Project


An Encoding Parameter for HTTP Basic Authentication
Julian F. Reschke (ed), IETF Internet Draft

An initial IETF public working draft has been published for the Standards Track Internet Draft An Encoding Parameter for HTTP Basic Authentication. Discussion of this specification takes place on the W3C mailing list 'ietf-http-wg@w3.org'. The specification offers 'a very modest proposal to fix the I18N issue in Basic Authentication'.

From the document Abstract: "The 'Basic' authentication scheme defined in RFC 2617 does not properly define how to treat non-ASCII characters. This has led to a situation where user agent implementations disagree, and servers make different assumptions based on the locales they are running in. There is little interoperability for characters in the ISO-8859-1 character set, and even less interoperability for any characters beyond that.

This document defines a backwards-compatible extension to 'Basic', specifying the server's character encoding expectation, using a new authentication scheme parameter...

The 'encoding' auth-param: servers MAY use the 'encoding' authentication parameter to express the character encoding they expect the user agent to use... The only allowed value is 'UTF-8', to be matched case-insensitively, indicating that the server expects the UTF-8 character encoding to be used. Other values are reserved for future use..."
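
A minimal client-side sketch of the proposal: once a challenge such as WWW-Authenticate: Basic realm="foo", encoding="UTF-8" has been received, the user agent encodes the credentials as UTF-8 before applying base64, rather than guessing a locale charset. The credentials below are illustrative.

    # Build a Basic Authorization header honoring encoding="UTF-8".
    import base64

    def basic_credentials(user: str, password: str) -> str:
        raw = f"{user}:{password}".encode("utf-8")   # UTF-8 per the parameter
        return "Basic " + base64.b64encode(raw).decode("ascii")

    print(basic_credentials("müller", "pässword"))
    # -> Basic bcO8bGxlcjpww6Rzc3dvcmQ=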

See also: the HTML version


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation: http://www.ibm.com
ISIS Papyrus: http://www.isis-papyrus.com
Microsoft Corporation: http://www.microsoft.com
Oracle Corporation: http://www.oracle.com
Primeton: http://www.primeton.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2010-08-11.html
Robin Cover, Editor: robin@oasis-open.org