The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: April 09, 2009
XML Daily Newslink. Thursday, 09 April 2009

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
IBM Corporation

Protocol for Web Description Resources (POWDER) Working Drafts
Phil Archer, Andrea Perego, Kevin Smith (et al., eds), W3C Technical Reports

W3C announced that the Protocol for Web Description Resources (POWDER) Working Group has published five Working Drafts. The purpose of POWDER is to provide a means for individuals or organizations to describe a group of resources through the publication of machine-readable metadata. The primary change in these publications relates to the IRI canonicalization sections of the "POWDER: Grouping of Resources" document. The five Working Drafts are: (1) "Description Resources" (Last Call), which details the creation and lifecycle of Description Resources (DRs), which encapsulate such metadata. These are typically represented in a highly constrained XML dialect that is relatively human-readable; the meaning of such DRs is underpinned by formal semantics, accessible by performing a GRDDL transform. (2) "Grouping of Resources" (Last Call), which describes how sets of IRIs can be defined such that descriptions or other data can be applied to the resources obtained by dereferencing IRIs that are elements of the set. IRI sets are defined as XML elements with relatively loose operational semantics, underpinned by the formal semantics of POWDER, which include a semantic extension defined separately. A GRDDL transform associated with the POWDER namespace maps the operational semantics to the formal semantics. (3) "Formal Semantics" (Last Call), which describes how the relatively simple operational format of a POWDER document can be transformed for processing by Semantic Web tools. (4) "Primer", which gives an overview and covers a variety of use cases, from providing a better means of describing Web resources and creating trustmarks to aiding content discovery, child protection, and Semantic Web searches. (5) "Test Suite", which aims to indicate the correct formats of POWDER documents and to illustrate crucial aspects of their usage, such as locating a document and inferring information from it.
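The "Grouping of Resources" idea — a set of constraints that an IRI either satisfies or not — can be pictured with a small sketch. The element names below are illustrative assumptions in the spirit of POWDER's IRI sets, not the exact Working Draft vocabulary:

```python
# Sketch of POWDER-style IRI grouping: an IRI belongs to the group only
# if it satisfies every constraint element in the set. Element names are
# illustrative, not the normative POWDER schema.
import xml.etree.ElementTree as ET
from urllib.parse import urlparse

IRISET = """
<iriset>
  <includehosts>example.org</includehosts>
  <includepathstartswith>/docs/</includepathstartswith>
</iriset>
"""

def matches(iriset_xml, iri):
    """Return True if the IRI satisfies every constraint in the set."""
    root = ET.fromstring(iriset_xml)
    parsed = urlparse(iri)
    for constraint in root:
        if constraint.tag == "includehosts" and parsed.hostname != constraint.text:
            return False
        if constraint.tag == "includepathstartswith" and not parsed.path.startswith(constraint.text):
            return False
    return True

print(matches(IRISET, "http://example.org/docs/intro"))   # True
print(matches(IRISET, "http://example.org/blog/post1"))   # False
```

A description attached to this group would then apply to every resource whose IRI matches.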

See also: the W3C news item

Workshop on Open Standards for Government Transformation
Staff, OASIS Announcement

OASIS announced participation with the World Bank Group's Global Information and Communication Technologies Department in organizing an eGov Workshop, to be held April 17, 2009, in Washington, D.C. Representatives from the public and private sectors will explore the significance of open standards for efficient, citizen-centric and transformational government. Issues surrounding public financial management, e-procurement, security, and interoperability frameworks will be discussed. Participants in Washington will be joined via interactive video links with officials from developing countries where the World Bank supports e-Government projects, including Russia, Moldova, Ghana, Kenya, Tanzania and Rwanda. The workshop is being held in response to a successful event that was co-hosted by the OASIS eGov Member Section and the Belgian Federal Ministry of Finance (SPFF) in Brussels in February 2009. Representatives from more than 20 countries were in attendance. Sponsors of the workshop include IBM, Microsoft, and Adobe. There is no charge to attend; however, space is limited and advance registration is required. This Workshop is an ideal opportunity to: (1) Better understand the role open standards play in public administration and doing business with government; (2) Exchange views and compare use cases with your peers; (3) Learn more about the latest technology trends in eGovernment systems; (4) Assess strategies to avoid vendor lock-in and promote choice.

See also: the workshop web site

W3C Personalization Roadmap: Ubiquitous Web Integration of AccessForAll
Andy Heath and Rich Schwerdtfeger (eds), W3C Working Group Note

"This document describes an activity of integrating personalization with device context for the delivery of content materials and interface components that are customized to meet both individual personal needs and preferences and delivery context. It brings together the work of separate standards and specifications organizations and working groups, notably W3C Ubiquitous Web Applications Working Group, IMS Global Learning Consortium Accessibility Special Interest Group, ISO/IEC JTC1 SC36 Information Technology for Learning, Education and Training: Human Diversity and Access For All Working Group, and associated working groups in SC36. The document should be viewed as a roadmap for the work to be undertaken and includes description of the basis for the work, the organizational context, the likely technologies and a partially complete description of how the technologies fit together... Delivery of content that is useful to and accessible to all in whatever delivery context or environment they are in at the time is a complex business potentially involving many technologies conforming to a variety of standards. This picture is big enough that to date different specifications and standards bodies have worked on the separate parts of the problem separately and not always in a way that interoperates. Completely different and separate business and technology scenarios have evolved completely separately and non-interoperably - for example mobile device technologies and desktop software technologies or learning technologies and banking. To date integration across these different technology and business worlds has been only on a small scale with isolated use cases, sometimes confined to a single vendor's products. 
At the same time, accessibility of content and interfaces for all users has become a more visible and necessary requirement, as governments around the world mandate that all content and systems be accessible to all, and as the business cases for that, including for example delivery to consumers with an ever-increasing age profile, come into focus. Furthermore, with the advent of more visually complex, browser-delivered content, it is becoming increasingly important to deliver content alternatives that meet users' needs. This mandates a more flexible, personalized Web infrastructure that will respond to the needs of each user. This document defines a plan and strategy, based on standards collaboration, for a personalized accessible infrastructure for the Web.
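The core matching step — pairing a user's declared access needs with the adaptations a resource offers — can be sketched minimally. This is an illustration of the AccessForAll idea, not the IMS metadata schema:

```python
# Illustrative sketch (not the IMS AccessForAll schema): pick the first
# resource whose declared adaptations satisfy all of the user's needs.

def select_resource(user_needs, resources):
    """resources: list of (name, set-of-adaptations) pairs."""
    for name, adaptations in resources:
        if user_needs <= adaptations:   # every declared need is met
            return name
    return None

resources = [
    ("video-original", {"audio", "visual"}),
    ("video-captioned", {"audio", "visual", "captions"}),
]

print(select_resource({"captions"}, resources))       # video-captioned
print(select_resource({"sign-language"}, resources))  # None (no match)
```

A real delivery system would also weigh the device context (screen size, input modality) described in the roadmap, not just user preferences.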

See also: the IMS AccessForAll Meta-data Specification

A Federated Service Bus Infrastructure
Jack van Hoof, Blog Presentation

How do you obtain high autonomy and low mutual dependencies among the functional entities in an organization with regard to message interaction and service exposure in an SOA? In this pattern I'll describe a model for a federated multi-bus SOA platform that satisfies the desired autonomy and low mutual dependencies in complex organizations... First of all: what is an ESB? Despite what the market tries to make us believe, an ESB is not for sale. An ESB is an enterprise-wide role of a service bus. It is the enterprise itself that decides the role of the service bus products offered by vendors. In this pattern, I will not use the name Enterprise Service Bus, as it does not clearly qualify its position in a federated service bus infrastructure: is it the corporate-level service bus, or is it the entire service bus infrastructure of the whole enterprise? I take the short way and avoid the acronym, using it only when referring to marketed service bus products. This pattern suggests four levels of interest in a federated service bus infrastructure consisting of multiple logical buses: Application level—multiple application buses per domain, one for each application; Domain level—multiple domain buses, one for each domain; Corporate (enterprise) level—one corporate bus for the enterprise; External level—one external gateway for the enterprise... The application-level service buses support fine-grained application-level processes and activity monitoring. Each application is bound to its own logical bus. In practice this boundary will typically be implemented by an application namespace on an application server using JMS (Java) or WCF (.NET). Complex distributed multi-technology applications may take advantage of a dedicated service bus implementation like SonicESB or JBoss... The interoperability available between multiple ESB products also supports choosing ESB products based on the characteristics of the specific application or domain environment.
Think of products that are strong or weak in aspects like centralization (hub), decentralization (distributed), multi-tenancy (logical separation), multi-instance operation (physical separation, clustering), device footprint, high-volume message routing, and back-office processing. For example, a service bus in an environment of moving trains and of gates and vending machines in stations has quite different characteristics from a service bus in a centralized data center. By choosing a standards-based ESB solution that conforms to a federated and distributed implementation model, the IT infrastructure will mature into an enormous, agile — yet relatively cheap — business enabler for most (if not all) enterprises...
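The four-level idea can be sketched as a chain of logical buses, where a message is delivered locally when the destination lives on the sender's bus and escalated to the next level otherwise. All names and the escalation rule here are illustrative assumptions, not part of the pattern's prescribed implementation:

```python
# Sketch of the federated multi-bus idea: application bus -> domain bus
# -> corporate bus. A send is handled locally if possible, otherwise
# escalated one level up the federation.

class Bus:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent        # the next-higher bus, if any
        self.endpoints = {}

    def register(self, address, handler):
        self.endpoints[address] = handler

    def send(self, address, message):
        if address in self.endpoints:
            return self.endpoints[address](message)
        if self.parent is not None:
            return self.parent.send(address, message)   # escalate one level
        raise LookupError(f"no route to {address}")

corporate = Bus("corporate")
domain = Bus("finance-domain", parent=corporate)
app = Bus("billing-app", parent=domain)

corporate.register("hr.payroll", lambda m: f"payroll got {m}")
app.register("billing.invoice", lambda m: f"invoice got {m}")

print(app.send("billing.invoice", "x"))  # handled on the application bus
print(app.send("hr.payroll", "y"))       # escalated up to the corporate bus
```

The external gateway level would sit above the corporate bus in the same chain, mediating traffic that leaves the enterprise.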

See also: Dilip Krishnan's summary

Thales Sees Converging Encryption Standards
Beth Pariseau, Storage Networking World (SNW) Report

From the article "Sights and Bites From SNW" — "In February 2009, Thales Group was part of a coalition of vendors that submitted a standard for interoperability between key management systems and encryption devices to the Organization for the Advancement of Structured Information Standards (OASIS) called the Key Management Interoperability Protocol (KMIP). If adopted, KMIP would mean users could attach almost any encrypting device to one preferred key management system, regardless of the vendors involved. Meanwhile, the Institute of Electrical and Electronics Engineers (IEEE) approved a standard in January 2008 for managing encryption on storage devices. Now the vendors are working on bridging between the two standards, according to Kevin Bocek, director of product marketing for Thales, so that if product developers want to roll the more-detailed IEEE spec into the more general OASIS spec, the two will be compatible. This interoperability will probably be more valuable to developers than end users, he said, as the IEEE specification contains very granular details for developing products, down to specifying protocols. If engineers don't have to re-invent the encryption wheel or ensure interoperability for each of their products, it could get products to market faster or free them to focus on other innovations, he said..."

See also: the IEEE P1619.3 Security in Storage Working Group (SISWG), Key Management

ProtectServe: To Protect and Serve
Eve Maler, Blog

In the last year, I've done a lot of thinking about the permissioned data sharing theme that runs through everything online, and have developed requirements around making the 'everyday identity' experience more responsive to what people want: rebalancing the power relationships in online interactions, making those interactions more convenient, and giving people more reason to trust those with whom they decide to share information. In the meantime, I've been fortunate to learn the perspectives of lots of folks like Bob Blakley, Project VRM and VPI participants, e-government experts, various people doing OAuth, and more... I'm asking you to assess a proposal of ours, which tries to meet these goals in a way that is: simple, secure, efficient, RESTful, powerful, OAuth-based, and identity system agnostic. We call the web protocol portion ProtectServe (yep, you got it). ProtectServe dictates interactions among four parties: a User/User Agent, an Authorization Manager (AM), a Service Provider (SP), and a Consumer. The protocol assumes there's a Relationship Manager (RM) application sitting above, acting on behalf of the User—sometimes silently. At a minimum, it performs the job of authorization management. We're looking for your input in order to figure out if there are good ideas here and what should be done with them. The proposal is entirely exploratory; my employer has no plans around it at the moment, though our work has been informed by OpenSSO—particularly its ongoing entitlement management enhancements... here's a buffet of analogies to choose from in relating ProtectServe and the Relationship Manager notion to concepts you might already know: (1) If you're an OAuth aficionado, ProtectServe is something like four-legged OAuth or higher-order OAuth, with the effect of separating out an authorization job for the Relationship Manager that today's OAuth SPs do all by themselves. 
(2) If you're an enterprise IT type, ProtectServe is a bit like RESTful XACML, with the Relationship Manager serving as a policy decision and administration point (PDP and PAP) and SPs serving as policy enforcement points (PEPs). (3) If you work on VRM solutions, you might think of a Relationship Manager as a kind of virtual personal datastore, and possibly a literal one as well—not shown in the mockups yet, stay tuned. (4) If you are familiar with the Liberty Web Services, particularly the RESTful ID-WSF work, ProtectServe could be seen as a Discovery Service complement that helps a user manage access to her various identity-data-providing services. (5) If you've been following along with OpenID extension work, the offering and acceptance of contract terms is sort of a user-driven analogue of OpenID Contract Exchange.
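The four-party shape of the proposal — the Service Provider deferring authorization decisions to the User's Authorization Manager — can be caricatured in a few lines. This is a toy sketch of the idea only, not the ProtectServe wire protocol, and all names are invented:

```python
# Toy sketch of the four-party ProtectServe idea: a Consumer's request to
# a Service Provider is honored only if the User's Authorization Manager
# grants a token for that consumer and scope. Not the real protocol.

class AuthorizationManager:
    def __init__(self):
        self.policies = {}                     # (consumer, scope) -> allowed?

    def set_policy(self, consumer, scope, allowed):
        self.policies[(consumer, scope)] = allowed

    def grant(self, consumer, scope):
        if self.policies.get((consumer, scope)):
            return f"token:{consumer}:{scope}"
        return None

class ServiceProvider:
    def __init__(self, am):
        self.am = am

    def fetch(self, consumer, scope):
        token = self.am.grant(consumer, scope)  # SP defers the decision to the AM
        if token is None:
            return "403 Forbidden"
        return f"200 OK ({scope} for {consumer})"

am = AuthorizationManager()
am.set_policy("calendar-app", "read:calendar", True)   # set by the User via the RM
sp = ServiceProvider(am)

print(sp.fetch("calendar-app", "read:calendar"))  # 200 OK
print(sp.fetch("calendar-app", "read:photos"))    # 403 Forbidden
```

In the XACML analogy from the list above, the `AuthorizationManager` plays the PDP/PAP role and the `ServiceProvider` the PEP role.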

See also: on RM/AM

Using Active Directory Federation Services for Single Sign-On
J. Peter Bruzzese, InfoWorld

Web-based log-in and SharePoint-based sites create a need for a new system of trust, and ADFS could be the solution... Here's a common scenario: You have developed a Web site that requires a log-in. However, rather than promoting your site to individuals who sign up and create their personal log-in, you promote the site to entire companies. A company with thousands of employees signs up to use your site and you are suddenly faced with the need to re-create thousands of accounts on your server. Users, when accessing the site, must perform the most frustrating task: logging in with a different account. You sit back and think, "There must be a better way." Well, that is only one of many scenarios where people in one company need to access items in another company or have a single sign-on for another site, be it a SharePoint site or some other type that requires a log-in. Here is where it may be worth looking into ADFS (Active Directory Federation Services)... Essentially, with ADFS, each company manages its own identities. But within a federated environment, each company can accept and provide permissions and/or access to identities from within another company. It all comes down to trust: the ability to trust accounts from one company without requiring a local account on your servers. This trust is called federated identity management and is the core behind ADFS. The biggest concern, logically, is security. All communication from one company's Active Directory to the other's ADFS is encrypted, and client access through the browser is also encrypted using SSL... There are a few scenarios where you might use ADFS. One is single sign-on, where a person logs in once through forms-based authentication, which provides users with a cookie they can then use to access the rest of the site without having to provide repeat credentials.
Another is identity federation, where users present their token from Active Directory to access the Web application without having to reauthenticate at all... Explaining ADFS is easy, but the design and configuration of ADFS is a tad more complicated than I've made it sound so far. The design reading alone can take forever because you need to determine what you are truly looking to accomplish, and there are several methods to reach those goals. For example, do you want a Web single sign-on implementation, a federated Web single sign-on implementation, or a federated Web single sign-on implementation with Forest Trust? Knowing your goal is the key to getting started.
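The federation-trust idea — accept a signed claim from a configured partner instead of requiring a local account — can be sketched generically. This is an illustration of the concept, not the ADFS API; the partner names, secret, and token format are invented:

```python
# Illustrative sketch (not ADFS itself): the resource side accepts a
# signed claim from any partner it has a configured federation trust
# with, so no local account is needed for the partner's users.
import hashlib
import hmac

TRUSTED_PARTNERS = {"contoso.com": b"shared-federation-secret"}  # hypothetical

def issue_token(partner, user, key):
    sig = hmac.new(key, f"{partner}|{user}".encode(), hashlib.sha256).hexdigest()
    return f"{partner}|{user}|{sig}"

def accept_token(token):
    partner, user, sig = token.split("|")
    key = TRUSTED_PARTNERS.get(partner)
    if key is None:
        return None                              # no federation trust configured
    expected = hmac.new(key, f"{partner}|{user}".encode(), hashlib.sha256).hexdigest()
    return user if hmac.compare_digest(sig, expected) else None

tok = issue_token("contoso.com", "alice", TRUSTED_PARTNERS["contoso.com"])
print(accept_token(tok))                          # alice
print(accept_token("evil.example|bob|deadbeef"))  # None (untrusted issuer)
```

Real ADFS tokens are SAML assertions signed with the partner's certificate rather than an HMAC shared secret, but the trust decision has the same shape.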

See also: the ADFS White Paper

NISO DSFTU: Cost of Resource Exchange (CORE) Protocol
National Information Standards Organization, Draft Standard for Trial Use

NISO announced that the CORE Working Group has approved publication of Z39.95-200x, "Cost of Resource Exchange (CORE) Protocol" as a Draft Standard for Trial Use (DSFTU). The one-year trial use period will run April 1, 2009 through March 31, 2010. The CORE draft standard defines an XML schema to facilitate the exchange of financial information related to the acquisition of library resources between systems, such as an ILS and an ERMS. The NISO "Draft Standard for Trial Use (DSFTU)" phase [see Section 4.8 in the NISO Operating Procedures] allows a draft standard to be tested and validated by implementers and the marketplace prior to final publication. The trial work will also serve as an opportunity for the information community to provide the CORE Working Group and NISO with feedback on the draft, including the identification of any errors or omissions that may arise during the trial. The intent of this period is to discover and subsequently address such issues, with the goal of creating a more perfect CORE standard. Trial participants will be asked to implement the CORE protocol in their own organization (or with another trial implementer), participate in the CORE Interest Group list during the trial to share experiences, and provide feedback on any needed changes to the protocol prior to final issuance of the standard. The CORE Working Group will be available to provide guidance and answer questions and will continue to develop support documents during the trial. All comments will be reviewed regularly by the Working Group and will be responded to and made public at the end of the trial use period. Summary: "The purpose of the Cost of Resource Exchange (CORE) specification is to facilitate the transfer of cost and related library acquisitions information from one automated system to another. 
This transfer may be from: (1) an Integrated Library System (ILS) acquisitions module (the data source and CORE responder) to an Electronic Resource Management System (ERMS) (the data recipient and CORE requester), both belonging to the same library; (2) a book or serials vendor to a library's ERMS; (3) a transfer of cost and transaction data among members of a consortium; or (4) any transaction partner to another that can benefit from the sharing of cost and library acquisitions-related data. Using the defined CORE XML data schema, this standard provides a common method of requesting cost-related information by a client application (an ERMS, for example) for a specific order transaction, a specific resource, or all resources that the library owns, within the boundaries of a payment period or access period. The client requester must supply sufficient request information (e.g., a unique order identifier, a date range) in its request, so that the responding system (an ILS, for example) can interpret the request, identify the appropriate financial record(s), and respond with the appropriate financial and/or resource data elements.
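The requester/responder exchange described above can be sketched as a round trip. The element names below are illustrative assumptions, not the actual Z39.95 CORE schema:

```python
# Sketch of a CORE-style cost exchange: the requester (e.g. an ERMS)
# sends an order identifier; the responder (e.g. an ILS) looks up its
# financial record and answers with cost data. Element names invented.
import xml.etree.ElementTree as ET

LEDGER = {
    "PO-2009-0042": {"title": "Journal of Examples",
                     "amount": "1250.00", "currency": "USD"},
}

def handle_request(request_xml):
    """The responder side: interpret the request and return cost data."""
    req = ET.fromstring(request_xml)
    order_id = req.findtext("orderIdentifier")
    record = LEDGER.get(order_id)
    if record is None:
        return "<coreResponse><status>not-found</status></coreResponse>"
    return ("<coreResponse><status>ok</status>"
            f"<cost currency='{record['currency']}'>{record['amount']}</cost>"
            "</coreResponse>")

request = "<coreRequest><orderIdentifier>PO-2009-0042</orderIdentifier></coreRequest>"
response = ET.fromstring(handle_request(request))
print(response.findtext("status"))  # ok
print(response.find("cost").text)   # 1250.00
```

As the summary notes, a real request must carry enough information (order identifier, date range) for the responder to identify the right financial record.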

See also: the NISO announcement


XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation
Microsoft Corporation
Oracle Corporation
Sun Microsystems, Inc.


Hosted By
OASIS - Organization for the Advancement of Structured Information Standards
