XML Daily Newslink. Wednesday, 13 May 2009

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
IBM Corporation http://www.ibm.com



ASC X9 to Develop Encryption Standard for Cardholder Data
Staff, Heartland Payment Systems Announcement

The Accredited Standards Committee X9 (ASC X9), accredited by the American National Standards Institute (ANSI), is embarking on the development of a new standard to protect cardholder data in the electronic payments industry. ASC X9 develops, maintains and promotes standards for all financial services in the United States and pioneered standards for items including the credit card magnetic stripe and ATM systems. In a May 7, 2009 meeting in Plano, TX, data security experts and industry leaders brainstormed technical approaches to protecting this data. Ideas generated at this meeting will be presented at ASC X9's initial standards development meeting on June 1-5, 2009 in Foster City, CA. Bob Carr, chairman and chief executive officer of Heartland Payment Systems: "This preliminary meeting marks an important step in expediting the development of next-generation data security solutions. Exchanging ideas is critical to the creation of a robust and public standard that protects the security of cardholder data and safeguards consumers and businesses nationwide." Heartland, one of the nation's largest payments processors, is a member of the ASC X9 working group. Carr is a strong proponent of information sharing and end-to-end encryption as a means to enhance consumer data security at all points of a payments transaction...

[Note: related news coverage reports that "Heartland plans to start rolling out a new security approach to its retailer customers. It's based on attaching a Tamper-Resistant Security Module (TRSM), which is a physical piece of hardware, within centimeters or less of the magnetic stripe itself; the connection is shielded with the TRSM... Heartland's approach is based on licensed technology from several vendors, including Voltage Security, along with a healthy dose of code written by salaried Heartland programmers..." Avivah Litan (Gartner Research Report): "End-to-end encryption would be most effective if data was encrypted from the time a card was swiped at a POS until it reached the card issuer, similar to the way personal identification numbers (PINs) currently are encrypted according to card brand standards. However, Heartland is limited by the scope of systems it manages and from which it accepts data; it can only seek to influence the card industry to carry end-to-end encryption beyond the processor stage, through the card networks and onto the card issuers. The proposal's success also depends on merchants' willingness to invest in terminal upgrades that support card data encryption. If Heartland implements its proposed project more securely than it has managed in the past with its network, it will make payment card processing more secure for merchants, especially if they don't manage the encryption keys and leave key management to their processor. Nevertheless, the process will always include vulnerabilities at the point where data is encrypted and decrypted. These vulnerabilities can be limited by using sound key management practices and enforcing extra security measures..."]

See also: ANSI X9 and key management standards


W3C Member Submission: Website Parse Template
Armen Manukyan, Avet Manukyan (et al., eds) W3C Submission

W3C has acknowledged receipt of a Member Submission from OMFICA (Open Market For Internet Content Accessibility) for the Website Parse Template specification. The specification "defines an XML-based format that describes the semantic structure followed by a set of HTML Web pages and rules to extract semantically rich data from portions of these pages. The format is intended to minimize the potential discrepancies between the actual content of an HTML page and an RDF graph representation of the same content, so that Web crawlers typically used by search engines may use the RDF graph to index the page appropriately. The submission uses the format to describe HTML pages, but it may be directly extended for use in general-purpose XML documents. The submission also defines the WPT Ontology language, a very minimalist vocabulary definition language. Consistency between structured content intended for humans and the authoritative meaning (or semantics) of that content intended for machines is ensured by the fact that the semantics are directly extracted from the content. The extraction rules bind sections of the page identified by XPath expressions to machine-readable content descriptions based on ontologies. Since the extraction rules become the potential source of errors in this paradigm, there would be little value if they had to be defined for each and every single page. The submission thus encourages the re-use of the extraction rules on a set of URIs identified by some regular expression matching rule."

From the specification abstract: "Existing web crawling technologies extract content directly from web pages rather than relying on keywords declared in the HTML code. The key reason is that web publishers usually define keywords that differ from the actual content. The same problem arises with the deployment of RDF, because web crawlers have no guarantee that the information included in an RDF file fully corresponds to the actual content. Moreover, an RDF description prepared for a specific web page describes the content but gives no information about where that content is located on the page. To escape the mismatch problems described above, web crawlers are forced to check RDF compliance with the structured content of each targeted web page, a task hampered by the lack of information about the page's structured content. Website Parse Template facilitates a solution to this problem by providing an HTML structure description for a single page or a group of similarly structured pages. Website Parse Template (WPT) allows web publishers to define references to specific HTML elements together with a description of the page content represented in any supported format, including RDF." The W3C Team Comment on the Website Parse Template Submission, edited by Francois Daoust, encourages the community interested in this area to investigate augmenting WPT with a combination of RDFa, GRDDL, and POWDER.
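
As a rough illustration of the underlying technique (not the WPT XML vocabulary itself), an extraction rule can be thought of as a pairing of an XPath expression with an RDF property. The Python sketch below, which assumes the lxml and rdflib libraries plus an invented example vocabulary and sample page, applies such rules to a page and produces an RDF graph whose statements come directly from the page content:

    # Illustrative sketch only: not the WPT format, just the idea of binding
    # XPath expressions to RDF properties and extracting data from HTML pages.
    from lxml import html
    from rdflib import Graph, Literal, Namespace, URIRef

    EX = Namespace("http://example.org/vocab#")      # hypothetical vocabulary
    RULES = {                                         # XPath expression -> RDF property
        "//h1[@class='title']/text()": EX.title,
        "//span[@class='author']/text()": EX.author,
    }

    def extract(page_url, page_source):
        """Apply the extraction rules to one HTML page and return an RDF graph."""
        tree = html.fromstring(page_source)
        graph = Graph()
        subject = URIRef(page_url)
        for xpath_expr, rdf_property in RULES.items():
            for value in tree.xpath(xpath_expr):
                graph.add((subject, rdf_property, Literal(str(value).strip())))
        return graph

    # The same rules can be reused for every page whose URI matches a pattern,
    # as the submission suggests for similarly structured pages.
    doc = ("<html><body><h1 class='title'>Fractals</h1>"
           "<span class='author'>A. Author</span></body></html>")
    print(extract("http://example.org/articles/42", doc).serialize(format="turtle"))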

See also: the submission request


OAuth Core 1.0 Rev A, Draft 3
Eran Hammer-Lahav (ed), Community Draft

Authors of the "OAuth Core 1.0" specification have issued Rev A, Draft 3 and solicit comments through May 25, 2009. This is expected to be the last draft. The OAuth protocol enables websites or applications (Consumers) to access Protected Resources from a web service (Service Provider) via an API, without requiring Users to disclose their Service Provider credentials to the Consumers. More generally, OAuth creates a freely-implementable and generic methodology for API authentication. An example use case is allowing printing service printer.example.com (the Consumer) to access private photos stored on photos.example.net (the Service Provider) without requiring Users to provide their photos.example.net credentials to printer.example.com. OAuth does not require a specific user interface or interaction pattern, nor does it specify how Service Providers authenticate Users, making the protocol ideally suited for cases where authentication credentials are unavailable to the Consumer, such as with OpenID. OAuth aims to unify the experience and implementation of delegated web service authentication into a single, community-driven protocol. OAuth builds on existing protocols and best practices that have been independently implemented by various websites. An open standard, supported by large and small providers alike, promotes a consistent and trusted experience for both application developers and the users of those applications...

OAuth includes a Consumer Key and matching Consumer Secret that together authenticate the Consumer (as opposed to the User) to the Service Provider. Consumer-specific identification allows the Service Provider to vary access levels to Consumers (such as un-throttled access to resources). Service Providers should not rely on the Consumer Secret as a method to verify the Consumer identity, unless the Consumer Secret is known to be inaccessible to anyone other than the Consumer and the Service Provider. The Consumer Secret may be an empty string (for example when no Consumer verification is needed, or when verification is achieved through other means such as RSA)...
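
As a rough sketch of what Consumer authentication looks like in practice, the following Python fragment signs a request with HMAC-SHA1 in the style of OAuth Core 1.0: the signature base string is built from the HTTP method, URL, and sorted parameters, and the signing key joins the Consumer Secret with the (possibly empty) Token Secret. The endpoint path, consumer key, and secret values are illustrative placeholders, not values from the specification:

    # Sketch of OAuth 1.0 HMAC-SHA1 request signing by a Consumer. Endpoint,
    # consumer key, and secret values below are illustrative placeholders.
    import base64, hashlib, hmac, time, uuid
    from urllib.parse import quote, urlencode

    def sign(method, url, params, consumer_secret, token_secret=""):
        # Signature base string: METHOD & encoded-URL & encoded-sorted-parameters.
        pairs = sorted((quote(k, safe=""), quote(str(v), safe="")) for k, v in params.items())
        normalized = "&".join("%s=%s" % (k, v) for k, v in pairs)
        base_string = "&".join([method.upper(), quote(url, safe=""), quote(normalized, safe="")])
        # The signing key is the Consumer Secret and Token Secret joined by '&'.
        key = "%s&%s" % (quote(consumer_secret, safe=""), quote(token_secret, safe=""))
        digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
        return base64.b64encode(digest).decode()

    params = {
        "oauth_consumer_key": "printer-example-com-key",   # identifies the Consumer
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_version": "1.0",
    }
    params["oauth_signature"] = sign(
        "GET", "http://photos.example.net/photos", params, "consumer-secret-placeholder")
    print(urlencode(params))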


Open Grid Protocol: Introduction and Requirements
Meadhbh S. Hamrick (ed), IETF Internet Draft

The Open Grid Protocol (OGP) defines interactions between hosts which collaborate to create a shared, internet-scale virtual world experience. This document introduces the protocol, the objectives it attempts to achieve, and the requirements it imposes on systems and users utilizing the protocol. This document also describes the model assumed by the protocol, to the extent it affects protocol interactions. From the 'Introduction': "Virtual Worlds are of increasing interest to the internet community. Innumerable examples of virtual world implementations exist; most using proprietary protocols. With roots in games and social interaction, Virtual Worlds are now finding use in business, education and information exchange. This document introduces the Open Grid Protocol (OGP) suite. This protocol is intended to carry information about the virtual world: its shape, its residents and manipulatable objects existing inside the world. The objective of the protocol is to define an extensible set of messages for carrying state and state change information between hosts participating in the simulation of the virtual world. OGP assumes hosts operated by multiple organizations will collaborate to simulate the virtual world. It also assumes that services originally defined for other environments (like the world wide web) will enhance the experience of the virtual world. The virtual world is expected to be simulated using software from multiple sources. The definition of how these systems will interoperate is essential for delivering a robust collection of co-operating hosts and a compelling user experience. OGP describes interoperability expectations and mechanisms between systems simulating the virtual world and for service providers exposing their content to virtual world participants...

To protect against "brittleness" from version skew, the Open Grid Protocol uses a flexible object representation system known as LLSD. Used correctly, the semantics of remote resource access may be maintained even when the participants in the protocol do not adhere to exactly the same revision of the protocol... XML serialization of LLSD data is in common use in protocols implementing virtual worlds. When used to communicate protocol data over a transport that requires a content type to be declared, the type 'application/llsd+xml' is used... OGP uses Representational State Transfer (REST) style interaction over HTTP. Much of the protocol interaction between systems participating in the virtual world simulation uses a request/response interaction style. Rather than creating a new messaging framework, OGP layers much of its protocol atop HTTP. Further, OGP uses REST-like semantics when exposing a protocol interface. A persistent, ubiquitous identity accompanies requests between hosts involved in the virtual world simulation... OGP protocol exchanges are described in terms of an abstract type system using an interface description language. Implementations may choose to instantiate actual protocol data units using the most appropriate presentation format. Web-based applications may choose to use JSON or XML. Server-to-server interactions may use the OGP-specific binary serialization scheme if implementers and deployers view binary encoding to be advantageous. The decision of which serialization scheme to use is ultimately that of the system implementer. OGP has been designed to provide this flexibility to system implementers and those tasked with deploying OGP compatible systems...
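
For a sense of what the XML serialization of LLSD looks like, the following Python sketch serializes a small structure using the 'llsd', 'map', 'key', 'string', 'integer', and 'boolean' elements commonly used for LLSD in virtual world protocols; the helper function and the sample payload are assumptions made for illustration, not taken from the OGP drafts:

    # Sketch of serializing a small structure as LLSD-style XML (content type
    # 'application/llsd+xml'). Element names follow the LLSD XML serialization
    # used by virtual world protocols; the payload and helper are illustrative.
    import xml.etree.ElementTree as ET

    def llsd_value(parent, value):
        """Append an LLSD XML element for a Python value (subset of types only)."""
        if isinstance(value, dict):
            node = ET.SubElement(parent, "map")
            for k, v in value.items():
                ET.SubElement(node, "key").text = k
                llsd_value(node, v)
        elif isinstance(value, list):
            node = ET.SubElement(parent, "array")
            for item in value:
                llsd_value(node, item)
        elif isinstance(value, bool):      # check bool before int: bool is an int subtype
            ET.SubElement(parent, "boolean").text = "true" if value else "false"
        elif isinstance(value, int):
            ET.SubElement(parent, "integer").text = str(value)
        elif isinstance(value, float):
            ET.SubElement(parent, "real").text = repr(value)
        else:
            ET.SubElement(parent, "string").text = str(value)

    root = ET.Element("llsd")
    llsd_value(root, {"agent_name": "Example Resident", "region_x": 256, "flying": True})
    print(ET.tostring(root, encoding="unicode"))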

See also: Open Grid Protocol Foundation


Joseph Yoder on Adaptive Object Model Architecture
Srini Penchikala, InfoQueue

In this interview Joseph Yoder talks about the Adaptive Object Model (AOM) architecture, a software architecture for easily adapting to changing business requirements. Yoder: "For a traditional object model, when you think of modeling your objects you might use a UML diagram, you draw your different main domain constructs within the business. For example with the insurance domain you might have things like customer, insurance policies and you have different behaviors in your interactions between those types. A customer may have things like an address and things that they own that they might want to insure, like a house. So, you have this classic object diagram that generally when you write in code, you would write classes for it, like Java classes (or whatever language you are using) to represent that model... The thing with that is, if you are modeling things that way, whenever that object model changes, say in the insurance domain you want to change the insurance policies and types of products and services that you are offering, you have to write new classes, recompile, and release with that. With the adaptive object model, the difference between it and the normal one is you still want to model those types of things, those types of business entities, such as in the insurance domain you want to get your insurance policies, but, rather than representing those as classes, you will still be able to model them, but we are going to represent them with descriptive information about your business domain, so that we can change that, even at run time...

You are taking the descriptive information and putting it into XML or a database, or some kind of metamodel that you can interpret at run time. The difference between the adaptive object model and the normal way of doing object-oriented development is that, rather than representing your business domains as classes with attributes and behaviors, you represent the types of models that you need as data: your classes, the methods on those classes, and all the attributes and relationships between classes are represented as data, so we can change and add those without changing code. An adaptive object model is very useful if you know you are going to have new types of products and services and your business is going to change in a domain-specific way; you can represent them that way and it will allow users to adapt more quickly to those changing requirements. It usually evolves from frameworks and domain-specific languages, which have a lot of related patterns that you see arising from them...
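
A minimal sketch of this idea, assuming invented class names (EntityType, Entity) and a hand-written metamodel dictionary standing in for XML or a database, shows how new business entity types can be introduced as data at run time rather than as compiled classes:

    # Minimal sketch of an Adaptive Object Model: the domain "classes" (entity
    # types and their properties) are data, interpreted at run time. Names here
    # are illustrative, not taken from a particular AOM framework.
    METAMODEL = {  # could equally be loaded from XML or a database
        "InsurancePolicy": {"holder": str, "premium": float, "coverage": str},
        "Customer": {"name": str, "address": str},
    }

    class EntityType:
        def __init__(self, name, properties):
            self.name = name
            self.properties = properties      # property name -> expected Python type

    class Entity:
        """A domain object whose structure comes from its EntityType, not a class."""
        def __init__(self, entity_type, **values):
            self.entity_type = entity_type
            for prop, expected in entity_type.properties.items():
                value = values.get(prop)
                if value is not None and not isinstance(value, expected):
                    raise TypeError("%s must be %s" % (prop, expected.__name__))
                setattr(self, prop, value)

    types = {name: EntityType(name, props) for name, props in METAMODEL.items()}

    # Introducing a new product later means adding data, not writing a new class:
    METAMODEL["BoatPolicy"] = {"holder": str, "hull_id": str, "premium": float}
    types["BoatPolicy"] = EntityType("BoatPolicy", METAMODEL["BoatPolicy"])

    policy = Entity(types["BoatPolicy"], holder="A. Customer", hull_id="XYZ-123", premium=250.0)
    print(policy.entity_type.name, policy.premium)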


Binding Extensions to Web Distributed Authoring and Versioning (WebDAV)
Julian F. Reschke, Geoffrey Clemm, Jason Crawford, Jim Whitehead; IETF Internet Draft

The Internet Engineering Steering Group (IESG) has issued a Last Call review for the draft specification Binding Extensions to Web Distributed Authoring and Versioning (WebDAV). IESG has received a request from the WWW Distributed Authoring and Versioning (WEBDAV) Working Group to consider this document as an IETF Experimental RFC. The IESG plans to make a decision in the next few weeks, and solicits final comments on this action. This IETF specification defines bindings, and the BIND method for creating multiple bindings to the same resource. Creating a new binding to a resource causes at least one new URI to be mapped to that resource. Servers are required to ensure the integrity of any bindings that they allow to be created.

From the 'Introduction': "URIs of WebDAV-compliant resources are hierarchical and correspond to a hierarchy of collections in resource space. The WebDAV Distributed Authoring Protocol makes it possible to organize these resources into hierarchies, placing them into groupings, known as collections, which are more easily browsed and manipulated than a single flat collection. However, hierarchies require categorization decisions that locate resources at a single location in the hierarchy, a drawback when a resource has multiple valid categories. For example, in a hierarchy of vehicle descriptions containing collections for cars and boats, a description of a combination car/boat vehicle could belong in either collection. Ideally, the description should be accessible from both. Allowing clients to create new URIs that access the existing resource lets them put that resource into multiple collections. Hierarchies also make resource sharing more difficult, since resources that have utility across many collections are still forced into a single collection. For example, the mathematics department at one university might create a collection of information on fractals that contains bindings to some local resources, but also provides access to some resources at other universities. For many reasons, it may be undesirable to make physical copies of the shared resources on the local server: to conserve disk space, to respect copyright constraints, or to make any changes in the shared resources visible automatically. Being able to create new access paths to existing resources in other collections or even on other servers is useful for this sort of case...

The BIND method defined here provides a mechanism for allowing clients to create alternative access paths to existing WebDAV resources. HTTP and WebDAV (RFC 4918) methods are able to work because there are mappings between URIs and resources. A method is addressed to a URI, and the server follows the mapping from that URI to a resource, applying the method to that resource. Multiple URIs may be mapped to the same resource, but until now there has been no way for clients to create additional URIs mapped to existing resources. BIND lets clients associate a new URI with an existing WebDAV resource, and this URI can then be used to submit requests to the resource. Since URIs of WebDAV resources are hierarchical, and correspond to a hierarchy of collections in resource space, the BIND method also has the effect of adding the resource to a collection. As new URIs are associated with the resource, it appears in additional collections. A BIND request does not create a new resource, but simply makes available a new URI for submitting requests to an existing resource. The new URI is indistinguishable from any other URI when submitting a request to a resource. Only one round trip is needed to submit a request to the intended target. Servers are required to enforce the integrity of the relationships between the new URIs and the resources associated with them. Consequently, it may be very costly for servers to support BIND requests that cross server boundaries.
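
A rough sketch of a client issuing such a request, assuming the marshalling described in the draft (a DAV: 'bind' element carrying 'segment' and 'href' children) together with an illustrative host and collection layout, might look like this in Python:

    # Sketch of a client issuing a WebDAV BIND request; the host, paths, and
    # the absence of authentication are illustrative simplifications.
    import http.client

    # Create a new binding /CollY/boat-car.html for the existing resource
    # /CollX/boat-car.html, so the description appears in both collections.
    body = """<?xml version="1.0" encoding="utf-8"?>
    <D:bind xmlns:D="DAV:">
      <D:segment>boat-car.html</D:segment>
      <D:href>http://www.example.com/CollX/boat-car.html</D:href>
    </D:bind>"""

    conn = http.client.HTTPConnection("www.example.com")
    conn.request(
        "BIND", "/CollY/",
        body=body.encode("utf-8"),
        headers={"Content-Type": 'application/xml; charset="utf-8"'},
    )
    response = conn.getresponse()
    print(response.status, response.reason)   # e.g. 201 Created for a new binding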

See also: the WWW Distributed Authoring and Versioning Working Group Status Pages


Fedora Commons and DSpace Foundation Create DuraSpace Organization
Staff, DuraSpace Announcement

Fedora Commons and the DSpace Foundation, two of the largest providers of open source software for managing and providing access to digital content, have announced today that they will join their organizations to pursue a common mission. Jointly, they will provide leadership and innovation in open source technologies for global communities who manage, preserve, and provide access to digital content. The combined organization, named DuraSpace, will sustain and grow its flagship repository platforms: Fedora and DSpace. DuraSpace will also expand its portfolio by offering new technologies and services that respond to the dynamic environment of the Web and to new requirements from existing and future users. DuraSpace will focus on supporting existing communities and will also engage a larger and more diverse group of stakeholders in support of its not-for-profit mission. The organization will be led by an executive team consisting of Sandy Payette (Chief Executive Officer), Michele Kimpton (Chief Business Officer), and Brad McLean (Chief Technology Officer) and will operate out of offices in Ithaca, NY and Cambridge, MA. Together Fedora and DSpace hold the largest market share among open repositories worldwide, serving over 700 institutions. These include organizations committed to the use of open source software solutions for the dissemination and preservation of academic, scientific, and cultural digital content...

According to the announcement, DuraSpace will continue to support its existing software platforms, DSpace and Fedora, as well as expand its offerings to support the needs of global information communities. The first new technology to emerge will be a Web-based service named DuraCloud. DuraCloud is a hosted service that takes advantage of the cost efficiencies of cloud storage and cloud computing, while adding value to help ensure longevity and re-use of digital content. The DuraSpace organization is developing partnerships with commercial cloud providers who offer both storage and computing capabilities. The DuraCloud service will be run by the DuraSpace organization. Its target audiences are organizations responsible for digital preservation and groups creating shared spaces for access and re-use of digital content. DuraCloud will be accessible directly as a Web service and also via plug-ins to digital repositories including Fedora and DSpace. The software developed to support the DuraCloud service will be made available as open source. An early release of DuraCloud will be available for selected pilot partners in Fall 2009... Clifford Lynch, Executive Director of the Coalition for Networked Information (CNI): "This is a great development. It will focus resources and talent in a way that should really accelerate progress in areas critical to the research, education, and cultural memory communities. The new emphasis on distributed reliable storage infrastructure services and their integration with repositories is particularly timely."

See also: the Fedora Commons


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
Microsoft Corporation http://www.microsoft.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com
Sun Microsystems, Inc. http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/


