Last modified: September 09, 2009
XML Daily Newslink. Wednesday, 09 September 2009

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
Oracle Corporation

New IETF RFC: Tags for Identifying Languages
Addison Phillips and Mark Davis (eds), IETF Approved RFC

The IETF announced the availability of a new Request for Comments in the RFC libraries: Tags for Identifying Languages (RFC 5646, BCP 47). Precursors of this document include RFC 4646, RFC 4647, RFC 3066, and RFC 1766. The document specifies a particular identifier mechanism (the language tag) and a registration function for values to be used to form tags. It also defines a mechanism for private use values and future extensions. Language tags are used to help identify languages, whether spoken, written, signed, or otherwise signaled, for the purpose of communication. This includes constructed and artificial languages but excludes languages not intended primarily for human communication, such as programming languages.

Also published, concurrently: "Update to the Language Subtag Registry" (IETF RFC 5645). RFC 5645 expands on RFC 4646 by adding support for approximately 7,500 new primary and extended language subtags based on ISO 639-3 and ISO 639-5 alpha-3 code elements, and seven new region subtags based on ISO 3166-1 exceptionally reserved code elements. This memo describes the process of updating the registry to include these additional subtags and to make secondary changes to the registry that result from adding the new subtags and from other decisions made by the Language Tag Registry Update (LTRU) Working Group. In writing this document, a complete replacement of the contents of the Language Subtag Registry was provided to the Internet Assigned Numbers Authority (IANA) to record the necessary updates.

Rationale for RFC 5646: "The language of an information item or a user's language preferences often need to be identified so that appropriate processing can be applied. For example, the user's language preferences in a Web browser can be used to select Web pages appropriately. Language information can also be used to select among tools (such as dictionaries) to assist in the processing or understanding of content in different languages. Knowledge about the particular language used by some piece of information content might be useful or even required by some types of processing, for example, spell-checking, computer-synthesized speech, Braille transcription, or high-quality print renderings. One means of indicating the language used is by labeling the information content with an identifier or 'tag'. These tags can also be used to specify the user's preferences when selecting information content or to label additional attributes of content and associated resources..."
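
As a rough illustration of the syntax RFC 5646 defines, the sketch below parses the common language[-script][-region] form of a tag. This is a deliberate simplification: the full ABNF also covers extended language subtags, variants, extensions, private-use sequences, and grandfathered tags, all omitted here.

```python
import re

# Simplified matcher for common RFC 5646 (BCP 47) tags of the form
# language[-script][-region], e.g. "en", "zh-Hant", "de-CH".
# The full grammar is richer; this sketch covers only the common case.
TAG_RE = re.compile(
    r"^(?P<language>[A-Za-z]{2,8})"
    r"(?:-(?P<script>[A-Za-z]{4}))?"
    r"(?:-(?P<region>[A-Za-z]{2}|[0-9]{3}))?$"
)

def parse_tag(tag: str) -> dict:
    """Split a simple language tag into its subtags, or raise ValueError."""
    m = TAG_RE.match(tag)
    if not m:
        raise ValueError(f"not a simple language tag: {tag!r}")
    return {k: v for k, v in m.groupdict().items() if v is not None}

print(parse_tag("zh-Hant-TW"))
# → {'language': 'zh', 'script': 'Hant', 'region': 'TW'}
```

A real application would validate each subtag against the IANA Language Subtag Registry rather than accepting anything that is merely well-formed.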

See also: Language Identifiers in the Markup Context

OpenID and Information Card Pilot Launched Supporting Open e-Government
OpenID Foundation and Information Card Foundation, Joint Announcement

"Ten industry leaders — Yahoo!, PayPal, Google, Equifax, AOL, VeriSign, Acxiom, Citi, Privo and Wave Systems — announced that they will support the first pilot programs designed for the American public to engage in open government, government that is transparent, participatory, and collaborative. This open identity initiative is a key step in President Obama's memorandum to make it easy for individuals to register and participate in government websites, without having to create new usernames and passwords. Additionally, members of the public will be able to fully control how much or how little personal information they share with the government at all times...

These companies will act as digital identity providers using OpenID and Information Card technologies. The pilot programs are being conducted by the Center for Information Technology (CIT), National Institutes of Health (NIH), U.S. Department of Health and Human Services (HHS), and related agencies. The participating companies are being certified under non-discriminatory open trust frameworks developed under collaboration between the OpenID Foundation (OIDF) and the Information Card Foundation (ICF) per the federal government Trust Framework Provider Adoption Process...

This initiative will help transform government websites from basic "brochureware" into interactive resources, saving individuals time and increasing their direct involvement in governmental decision making. OpenID and Information Card technologies make such interactive access simple and safe. For example, in the coming months the NIH intends to use OpenID and Information Cards to support a number of services including customized library searches, access to training resources, registration for conferences, and use of medical research wikis, all with strong privacy protections..."

See also: the Gov 2.0 Summit web site

OpenTransact: Proposed REST Based Standard for Financial Transactions
Pelle Braendgaard and Agile Banking List Participants, Software Announcement

"We on the Agile Banking list have been working on a very simple OAuth-based API for general purpose financial transactions called OpenTransact. The goal here is to develop an extremely simple low-level standard for transferring an amount of an asset from one account to another...

Most payment standards are overly complex because they have to link to many different legacy systems such as credit card clearing systems, trading systems, etc. We aim to create a new standard from scratch, ignoring all legacy systems, while leaving it flexible enough to allow applications built on top of it to deal with legacy systems...

OAuth should form the core authentication and signature layer of the API. The OAuth protocol would automatically give us a Transaction ID, Payer, Signature, and a security layer. The protocol should use simple form-based parameter encoding, as OAuth supports it well and it is supported by virtually every programming language...

An Asset Service is a service maintained by an organization to manage accounts of ONE particular Asset. For money and other financial assets, the Asset Service would normally be run by a Financial Service Provider of some type. However, there are many types of assets that could be offered by non-financial services... Each Asset Service has a specific transaction URL. This way we don't need to get into details in our standard about application-specific details like currency, card type, size, color, etc. The transaction URL would follow basic REST practices...
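
The shape of such a transfer can be sketched as an OAuth-authorized, form-encoded POST to an asset's transaction URL. The URL, parameter names, and token below are hypothetical illustrations of that shape, not part of any published specification.

```python
from urllib.parse import urlencode

# Hypothetical transaction URL for a USD Asset Service; OpenTransact
# leaves the URL to each service, so this name is purely illustrative.
ASSET_TRANSACTION_URL = "https://bank.example.com/transactions/usd"

def build_transfer(to_account: str, amount: str, note: str = "") -> tuple:
    """Return (url, body, headers) for a form-encoded transfer POST."""
    body = urlencode({"to": to_account, "amount": amount, "note": note})
    headers = {
        "Content-Type": "application/x-www-form-urlencoded",
        # Per the proposal, OAuth supplies authentication and a signature;
        # a real client would compute this header with an OAuth library.
        "Authorization": 'OAuth oauth_token="..."',
    }
    return ASSET_TRANSACTION_URL, body, headers

url, body, headers = build_transfer("alice@example.com", "10.00", "lunch")
print(body)  # → to=alice%40example.com&amount=10.00&note=lunch
```

Because the body is plain form encoding, any HTTP client library can send it, which is exactly the low-barrier property the proposal is after.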

See also: the agile-banking group pages

Microsoft Corporation Provides Funding for New CodePlex Foundation
Staff, CodePlex Foundation Announcement

An announcement has been issued for Microsoft Corporation's funding of the CodePlex Foundation, "a non-profit foundation formed with the mission of enabling the exchange of code and understanding among software companies and open source communities...

Incorporated as a 501(c)(6) non-profit, the CodePlex Foundation was created as a forum in which open source communities and the software development community can come together with the shared goal of increasing participation in open source community projects. The CodePlex Foundation will complement existing open source foundations and organizations, providing a forum in which best practices and shared understanding can be established by a broad group of participants, both software companies and open source communities..."

From the FAQ document: "Microsoft has an evolving engagement with open source, as demonstrated by its sponsorship of the Apache Software Foundation, contributions to the PHP Community, participation in Apache projects including the Hadoop project and the Qpid project, and participation in various community events such as OSCON, EclipseCon, PyCon, and the Moodle Conference... We see a convergence of maturing technology and evolving business models, an inflection point, where more commercial companies are willing to participate in open source projects..."

See also: on CodePlex Foundation licensing details

Cloud Storage for Cloud Computing
Staff, SNIA and OGF White Paper

The Storage Networking Industry Association (SNIA) and Open Grid Forum (OGF) have released a white paper, Cloud Storage for Cloud Computing, discussing "the business drivers in the Cloud delivery mechanism and business model, what the requirements are in this space, and how standard interfaces, coordinated between different organizations, can meet the emerging needs for interoperability and portability of data between clouds...

The Cloud has become a new vehicle for delivering resources such as computing and storage to customers on demand. Rather than being a new technology in itself, the cloud is a new business model wrapped around new technologies such as server virtualization that take advantage of economies of scale and multi-tenancy to reduce the cost of using information technology resources...

OGF's Open Cloud Computing Interface (OCCI) is a free, open, community consensus driven API, targeting cloud infrastructure services. The API shields IT data centers and cloud partners from the disparities existing between the lineup of proprietary and open cloud APIs... SNIA has created a technical work group to address the need for a cloud storage standard. The new Cloud Data Management Interface (CDMI) is meant to enable interoperable cloud storage and data management. In CDMI, the underlying storage space exposed by the above interfaces is abstracted using the notion of a container. A container is not only a useful abstraction for storage space, but also serves as a grouping of the data stored in it, and a point of control for applying data services in the aggregate..."
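
The container idea can be illustrated with a small model. The class and attribute names below are ours, not CDMI's; the point is only that a container groups stored data and serves as the single place where data services are applied in the aggregate.

```python
# Toy model of a CDMI-style container: it groups data objects and is
# the point of control for data services (retention, replication, ...)
# applied to everything stored in it. Names here are illustrative.
class Container:
    def __init__(self, name, metadata=None):
        self.name = name
        self.metadata = metadata or {}   # container-level data services
        self.objects = {}                # data objects grouped here

    def put(self, key, data):
        self.objects[key] = data

    def apply_service(self, setting, value):
        # Setting a data service on the container applies it to the
        # whole group, rather than to each object individually.
        self.metadata[setting] = value

backups = Container("backups")
backups.put("2009-09-09.tar", b"...")
backups.apply_service("replication_copies", 2)
print(backups.metadata["replication_copies"])  # → 2
```

In CDMI itself the container is exposed over an HTTP interface rather than as an in-process object, but the grouping-plus-control role is the same.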

See also: the document source from OGF

Document Object Model (DOM) Level 3 Events Specification
Doug Schepers and Björn Höhrmann (et al., eds), W3C Technical Report

W3C announced the publication of a new Working Draft for the Document Object Model (DOM) Level 3 Events Specification, revising a previous description from 2007-12-21. This document was produced by members of the W3C Web Applications Working Group, part of the Rich Web Clients Activity, in the W3C Interaction Domain. The document was previously published as a W3C Note, pending further feedback from implementers, and is now being revised to reflect the current state of implementation. It is expected that this document will progress along the W3C's Recommendation track.

"The specification defines the Document Object Model Events Level 3: a generic platform- and language-neutral event system which allows registration of event handlers, describes event flow through a tree structure, and provides basic contextual information for each event. The Document Object Model Events Level 3 builds on the Document Object Model Events Level 2.

DOM Events is designed with two main goals. The first goal is the design of an event system which allows registration of event listeners and describes event flow through a tree structure. Additionally, the specification will provide standard modules of events for user interface control and document mutation notifications, including defined contextual information for each of these event modules.

The second goal of DOM Events is to provide a common subset of the current event systems used in DOM Level 0 browsers. This is intended to foster interoperability of existing scripts and content. It is not expected that this goal will be met with full backwards compatibility. However, the specification attempts to achieve this when possible..."
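
The event flow described above (down from the root to the target in the capture phase, then back up in the bubble phase) can be sketched with a toy dispatcher. Class and method names here are illustrative, not the DOM API; in a browser this is `addEventListener(type, handler, useCapture)`.

```python
# Toy model of DOM event flow: capture (root -> target), target,
# then bubble (target -> root). For clarity, listeners are recorded
# as (event_type, use_capture) pairs rather than real handlers.
class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.listeners = set()

    def add_listener(self, event_type, capture=False):
        self.listeners.add((event_type, capture))

def dispatch(target, event_type):
    """Run the three phases and return the firing order."""
    path = []                                # target up to the root
    node = target
    while node:
        path.append(node)
        node = node.parent
    fired = []
    for node in reversed(path[1:]):          # capture: root -> parent
        if (event_type, True) in node.listeners:
            fired.append((node.name, "capture"))
    if (event_type, False) in target.listeners:
        fired.append((target.name, "target"))
    for node in path[1:]:                    # bubble: parent -> root
        if (event_type, False) in node.listeners:
            fired.append((node.name, "bubble"))
    return fired

root = Node("document")
div = Node("div", parent=root)
button = Node("button", parent=div)
for n in (root, div, button):
    n.add_listener("click")
root.add_listener("click", capture=True)
print(dispatch(button, "click"))
# → [('document', 'capture'), ('button', 'target'),
#    ('div', 'bubble'), ('document', 'bubble')]
```

The ordering is the useful property: a capture listener on an ancestor always sees the event before the target does, and bubble listeners see it afterwards.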

See also: the W3C Web Applications (WebApps) Working Group

Encrypted Key Package Content Type
Sean Turner and Russ Housley (eds), IETF Internet Draft

An initial -00 IETF Internet Draft has been published for the "Encrypted Key Package Content Type," along with "Algorithms for Encrypted Key Package Content Type." The former memo "defines the encrypted key package content type, which can be used to encrypt a content that includes a key package, such as a symmetric key package or an asymmetric key package. It is transport independent. The Cryptographic Message Syntax (CMS) can be used to digitally sign, digest, authenticate, or further encrypt this content type. It is designed to be used with the CMS Content Constraints extension, which does not constrain the EncryptedData, EnvelopedData, and AuthEnvelopedData.

The Cryptographic Message Syntax (CMS) specification of RFC 3852 defines means to digitally sign, digest, authenticate or encrypt arbitrary message content. Many specifications define content types intended for use with CMS... The CMS Content Constraints (CCC) certificate extension defines an authorization mechanism that allows recipients to determine whether the originator of an authenticated CMS content type is authorized to produce messages of that type. CCC is used to authorize the innermost content type within a CMS-protected message. CCC cannot be used to constrain the following structures that are used to provide layers of protection: SignedData, EnvelopedData, EncryptedData, DigestData, CompressedData, AuthenticatedData, ContentCollection, ContentWithAttributes or AuthEnvelopedData... This document therefore defines the encrypted key package content type, which can be used to encrypt a content that includes a key package, such as a symmetric key package or an asymmetric key package..."

See also: Algorithms for Encrypted Key Package Content Type

Geospatial Tools Offer Killer App for Gov 2.0
Wyatt Kash, Government Computer News

"If the idea of government 2.0 revolves around using government information as a platform for enabling public discourse, then geospatial technologies are one of the killer apps"—according to a presentation by Jack Dangermond, president of ESRI, delivered at the Gov 2.0 Summit in Washington DC. "Maps and geospatial information systems are becoming richer, smarter, and more pervasive, but government agencies still need to do more to convert data into services that can populate mapping applications."

Robert Greenberg, chief executive of G&H International Services, [similarly] described "the evolution of Virtual USA, a geospatial emergency management tool being developed by the Homeland Security Department's Science and Technology Directorate. The initiative builds on work pioneered by Virtual Alabama, which aggregates and integrates property and infrastructure data in a visualization tool by using Google Earth Enterprise. The initiative also relies on the work of the Virginia Interoperability Picture for Emergency Response (VIPER) system, which uses an ESRI geospatial platform..."

See also: the ESRI web site


XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation
Microsoft Corporation
Oracle Corporation
Sun Microsystems, Inc.
