XML Daily Newslink. Friday, 19 February 2010

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation http://www.microsoft.com



Mapping OASIS KMIP to an XML WSDL for IEEE P1619.3
Matt Ball (ed), IEEE Project 1619.3 Task Group Draft

Members of IEEE Project 1619.3: Key Management (IEEE Security in Storage Working Group—SISWG) are developing a 'Standard for Key Management Infrastructure for Cryptographic Protection of Stored Data', which specifies an architecture for the key management infrastructure for cryptographic protection of stored data, describing interfaces, methods, and algorithms. The WG is seeking to align technical features of this IEEE standard with the emerging OASIS "Key Management Interoperability Protocol Specification."

The 1619.3 members have produced a draft, 'Mapping OASIS KMIP to an XML WSDL for IEEE P1619.3', as one element in the IEEE P1619.3 Plan for 2010. A companion zip file contains a proposed XML WSDL (Web Services Description Language) grammar, an XSD file, and example XML files for an XML encoding of the OASIS KMIP specification.

Details: "This document proposes a method to map OASIS KMIP (Key Management Interoperability Protocol) onto an XML WSDL (Web Services Description Language) schema. This document assumes that the reader is familiar with the OASIS KMIP specification, XML WSDL, XML XSD, and XML SOAP. The general strategy is to use XML SOAP to map KMIP onto a WSDL with Document/Literal encoding so that it is possible to validate the WSDL against the WS-I Basic Profile 1.0. The proposed WSDL for KMIP-P1619.3 mapping is [available online]...

When using XML/SOAP, the functional model is different from that of KMIP. In particular, SOAP has a basic method for encoding procedures and the corresponding responses that differs from the KMIP method of wrapping each command and response in a message envelope. While it is possible to emulate the KMIP envelope approach in SOAP, doing so forces everything to be interpreted as a single command and removes the benefits that XML parser generators provide in performing parameter type-checking; mapping KMIP operations onto SOAP's native procedure model instead will serve to simplify things on both the server and client side.

Data type mapping: [1] 'Primitive data types': Table 1 shows a proposed mapping of primitive OASIS KMIP data types to their corresponding XSD types and recommended C++ programming language types. Note that the C++ data types are not normative, but are rather informational, and may help programmers to better understand the implications of these mappings. [2] 'Complex data types': In addition to primitive data types, KMIP also supports complex data types, both explicit and implicit. Table 2 shows the mapping of KMIP complex types (both explicit and implicit) onto XML and C++...."
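As a rough illustration of the kind of mapping Table 1 describes (the element names and the exact type choices below are illustrative assumptions, not text from the draft), a document/literal XSD fragment expressing a few KMIP primitive types might look like this:

  <!-- Illustrative sketch only: a hypothetical XSD fragment showing KMIP
       primitive types expressed as XSD types, e.g. Integer as xsd:int,
       Text String as xsd:string, Byte String as xsd:hexBinary,
       Date-Time as xsd:dateTime. -->
  <xsd:complexType name="ExampleKeyAttributes">
    <xsd:sequence>
      <xsd:element name="CryptographicLength" type="xsd:int"/>
      <xsd:element name="Name"                type="xsd:string"/>
      <xsd:element name="KeyMaterial"         type="xsd:hexBinary"/>
      <xsd:element name="ActivationDate"      type="xsd:dateTime"/>
    </xsd:sequence>
  </xsd:complexType>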

See also: KMIP and the IEEE P1619.3 Plan for 2010


NIST Laying the Groundwork for More Advanced Cryptography
William Jackson, Government Computer News

The U.S. National Institute of Standards and Technology (NIST) has released two documents as part of its Cryptographic Key Management Project—a summary of a key management workshop held in June 2009 that explored the risks and challenges of handling cryptographic keys in new technological environments, and a draft of recommendations for agencies on transitioning to new algorithms and keys.

Key management is one of the most difficult tasks in cryptography, because a cryptographic algorithm or scheme is only as secure as the keys used to encrypt and decrypt data. The scalability and usability of the methods used to distribute keys are of particular concern. NIST's key management project is an effort to improve the overall key management strategies to enhance the usability of cryptographic technology, provide scalability and support a global cryptographic key management infrastructure...

The results of the workshop are summarized in NIST Interagency Report 7609. Presentations covered a variety of security issues, including key management systems that are available but are under-used because they lack user-friendly automated key management services; systems that are under development but not reaching the marketplace because of financial, logistical and support service problems; and new security mechanisms needed to support future computing environments such as cloud computing, integrated international applications, and the secure management of dynamic and global relationships among people, organizations and applications.

NIST Draft SP 800-131 (Recommendation for the Transitioning of Cryptographic Algorithms and Key Sizes) provides guidance for transitions to stronger cryptographic keys and more robust algorithms, based on years of experience in dealing with key management. It is part of an effort to define and implement appropriate key management procedures, to establish adequate strengths for algorithms protecting sensitive information, and to plan ahead for changes in the use of cryptography as algorithms become compromised and the computing technology used to break them advances. Special Publication 800-57, Part 1, included a general approach for transitioning from one algorithm or key length to another. The new draft of SP 800-131 gives more specific guidance..."

See also: NIST and key management


Internet X.509 Public Key Infrastructure: Certificate Image
Stefan Santesson, Russell Housley (et al, eds), IETF Internet Draft

Members of the IETF Public-Key Infrastructure (X.509) (PKIX) Working Group have published a revised version -06 for the specification Internet X.509 Public Key Infrastructure: Certificate Image. It specifies a method to bind a visual representation of a certificate in the form of a certificate image to an RFC 5280 public key certificate by defining a new otherLogos image type according to RFC 3709 ('Internet X.509 Public Key Infrastructure: Logotypes in X.509 Certificates').

The purpose of the Certificate image is to aid human interpretation of a certificate by providing meaningful visual information to the user interface. Typical situations in which a human needs to examine the visual representation of a certificate are: (1) A person establishes a secured channel with an authenticated service. The person needs to determine the identity of the service based on the authenticated credentials. (2) A person validates the signature on critical information, such as signed executable code, and needs to determine the identity of the signer based on the signer's certificate. (3) A person is required to select an appropriate certificate to be used when authenticating to a service or Identity Management infrastructure. The person needs to see the available certificates in order to distinguish between them in the selection process...

Display of certificate information to humans is challenging due to the lack of well-defined semantics for critical identity attributes. Unless the application has out-of-band knowledge about a particular certificate, it will not know the exact nature of the data stored in common identification attributes such as serialNumber, organizationName, country, etc. Consequently, the application can display the actual data, but faces the problem of labelling that data in the UI, i.e., informing the human about the exact nature (semantics) of that data. It is also challenging for the application to determine which identification attributes are important to display and how to organize them in a logical order..."

See also: the IETF Public-Key Infrastructure (X.509) WG Status Pages


Guidelines for Web Content Transformation Proxies 1.0
Jo Rabin (ed), W3C Technical Report

W3C has published an updated version of the Guidelines for Web Content Transformation Proxies 1.0. This is the third Last Call Working Draft, expected to become a W3C Recommendation. The W3C Membership and other interested parties are invited to review the document and send comments through March 11, 2010.

The document provides guidance to Content Transformation proxies as to whether and how to transform Web content. Within this document, Content Transformation refers to the manipulation of requests to, and responses from, an origin server. This manipulation is carried out by proxies in order to provide a better user experience of content that would otherwise result in an unsatisfactory experience on the device making the request. Content Transformation proxies are mostly used to convert Web sites designed for desktop computers to a form suitable for mobile devices.

Based on current practice and standards, this document specifies mechanisms with which Content Transformation proxies should make their presence known to other parties, present the outcome of alterations performed on HTTP traffic, and react to indications set by clients or servers to constrain these alterations. The objective is to reduce undesirable effects on Web applications, especially mobile-ready ones, and to limit the diversity in the modes of operation of Content Transformation proxies, while at the same time allowing proxies to alter content that would otherwise not display successfully on mobile devices.
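By way of illustration (this exchange is a hypothetical sketch, not an excerpt from the guidelines), the standard HTTP 1.1 'no-transform' directive is the main signal an origin server or client can use to constrain such alterations, while a transforming proxy makes its presence known through a 'Via' header. A response reaching a mobile client through a transforming proxy might therefore carry headers along these lines:

  HTTP/1.1 200 OK
  Content-Type: application/xhtml+xml
  Cache-Control: no-transform
  Via: 1.1 example-ct-proxy

Here the 'no-transform' directive tells intermediaries not to modify the response body, and the 'Via' value (a made-up proxy identifier) lets the client see that a proxy handled the response.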

The document is an attempt to improve a situation at a point in time where there appears to be disregard of the provisions of HTTP - and is primarily a reminder and an encouragement to follow those provisions more closely... Important considerations regarding the impact on security are highlighted..."

See also: the W3C Mobile Web Initiative Activity


Review Documents from OASIS Service Component Architecture C and C++ TC
Staff, OASIS Announcement

Members of the OASIS Service Component Architecture / C and C++ (SCA-C-C++) TC have approved two Committee Draft specifications for public review: (1) Service Component Architecture Client and Implementation Model for C++ Specification Version 1.1, and (2) Service Component Architecture Client and Implementation Model for C Specification Version 1.1. The 15-day review ends March 02, 2010.

The OASIS Service Component Architecture / C and C++ TC was chartered to develop the C and C++ programming model for clients and component implementations using the Service Component Architecture (SCA). SCA defines a model for the creation of business solutions using a Service-Oriented Architecture, based on the concept of Service Components which offer services and which make references to other services. SCA models business solutions as compositions of groups of service components, wired together in a configuration that satisfies the business goals. SCA also addresses aspects such as communication methods and policies for infrastructure capabilities such as security...
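As a loose sketch of that assembly model (the composite below uses the general SCA 1.1 vocabulary, but the component, library, and class names are invented for illustration and are not taken from the TC's drafts), a C++ component wired to a service offered by another component might be described roughly like this:

  <!-- Illustrative sketch only: an SCA composite wiring one C++ component's
       reference to a service offered by another C++ component. -->
  <composite xmlns="http://docs.oasis-open.org/ns/opencsa/sca/200912"
             name="ExampleComposite">
    <component name="OrderProcessing">
      <implementation.cpp library="orderproc" class="OrderProcessingImpl"/>
      <reference name="payments" target="PaymentService"/>
    </component>
    <component name="PaymentService">
      <implementation.cpp library="payments" class="PaymentServiceImpl"/>
      <service name="PaymentService"/>
    </component>
  </composite>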

"The SCA C++ implementation model describes how to implement SCA components in C++. A component implementation itself can also be a client to other services provided by other components or external services. The document describes how a C++ implemented component gets access to services and calls their operations. The document also explains how non-SCA C++ components can be clients to services provided by other components or external services. The document shows how those non-SCA C++ component implementations access services and call their operations...

"Service Component Architecture Client and Implementation Model for C Specification Version 1.1" describes the SCA Client and Implementation Model for the C programming language. The SCA C implementation model describes how to implement SCA components in C. A component implementation itself can also be a client to other services provided by other components or external services. The document describes how a component implemented in C gets access to services and calls their operations. The document also explains how non-SCA C components can be clients to services provided by other components or external services. The document shows how those non-SCA C component implementations access services and call their operations..."

See also: the Service Component Architecture Client and Implementation Model for C Specification


OpenLaszlo: Rapidly Build and Deploy Rich Internet Applications
Kumarsun Nadar, IBM developerWorks

"OpenLaszlo is an open source platform, released under the Common Public License (CPL), for the development and delivery of rich Internet applications (RIAs). OpenLaszlo is based on LZX, which is an object-oriented language utilizing XML and JavaScript. Rich-client applications written with OpenLaszlo run across browsers and across platforms.

The OpenLaszlo Server is a Java servlet/JSP application. The OpenLaszlo Server comprises five main subsystems: (1) The Interface Compiler: The Interface Compiler consists of an LZX Tag Compiler and a Script Compiler, which convert the source files into executable (SWF) files and serve them either as bytecode to a plug-in that runs in the client's browser (such as Flash or J2ME), or as JavaScript (DHTML) executed by the browser itself. (2) The Media Transcoder: The Media Transcoder converts a full range of media assets into a single format for rendering by OpenLaszlo's target client-rendering engine. This enables an OpenLaszlo application to present supported media types in a unified manner on a single canvas, without multiple helper applications or supplemental playback software. The Media Transcoder automatically renders the following media types: JPEG, GIF, PNG, MP3, TrueType, and SWF (art/animation only). (3) The Data Manager: The Data Manager acts as an interface between OpenLaszlo applications and other applications across the network, such as databases and XML Web services. It consists of a data compiler that converts data into a compressed binary form and a series of data connectors that enable OpenLaszlo applications to retrieve data via XML/HTTP. (4) The Persistent Connection Manager: The Persistent Connection Manager handles connection management and authentication for applications that require real-time communication between the server and running clients. (5) The Cache: The Cache contains the most recently compiled version of any application. The first time an OpenLaszlo application is requested, it is compiled, and the resultant SWF file is sent to the client. A copy is also cached on the server, so subsequent requests do not have to wait for compilation.

OpenLaszlo's client-side architecture mainly consists of the Laszlo Foundation Classes, which provide the runtime environment for running OpenLaszlo applications. Whenever a client invokes an OpenLaszlo application by its URL, the required runtime libraries are downloaded along with the source. The client always maintains a connection with the server...

Unlike many Ajax solutions, however, OpenLaszlo applications are portable across browsers. This is possible due to the OpenLaszlo compiler technology, which takes care of runtime details, allowing the developer to concentrate more on the application's behavior/logic and appearance, truly making it a 'write once, run everywhere' platform. OpenLaszlo supports a rich graphics model with many built-in and reusable components, as well as advanced WYSIWYG text and graphical editing tools."

See also: the OpenLaszlo web site


Towards a Toolkit for Implementing Dublin Core Application Profiles
Talat Chaudhri, Julian Cheal, Richard Jones, Mahendra Mahey, Emma Tonkin; Ariadne Journal

The development of the Dublin Core Application Profiles (DCAPs) has been closely focussed on the construction of metadata standards targeted at specific resource types, on the implicit assumption that such a metadata solution would be immediately and usefully implementable in software environments that deal with such resources. The success of an application profile would thus be an inevitable consequence of correctly describing the generalised characteristics of those resources.
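To make the idea concrete (the record below is hypothetical and is not drawn from the article), an application profile does not invent new metadata elements so much as constrain how existing vocabularies such as Dublin Core are used for a particular resource type; a simple XML serialization of the kind of description such a profile might prescribe could look like this:

  <!-- Hypothetical record: Dublin Core properties as an application profile
       for scholarly works might constrain them. -->
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/"
            xmlns:dcterms="http://purl.org/dc/terms/">
    <dc:title>An Example Journal Article</dc:title>
    <dc:creator>Example, Author A.</dc:creator>
    <dc:type>Journal Article</dc:type>
    <dcterms:issued>2010-01</dcterms:issued>
    <dcterms:bibliographicCitation>Example Journal, Vol. 1</dcterms:bibliographicCitation>
  </metadata>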

Dublin Core Application Profiles are intended to be based upon an application model, which can be extremely simple. This article concentrates on the recent set of JISC-funded application profiles, which make use of application models based on variants of FRBR (Functional Requirements for Bibliographic Records), and which follow the Singapore Framework for Dublin Core Application Profiles. While application profiles are by no means limited to repositories and can for instance be implemented in such wide-ranging software environments as Virtual Learning Environments (VLEs), Virtual Research Environments (VREs) and eAdmin, this paper focusses in the first instance on digital repositories.

The thesis of this article is that application profiles, in order to be workable, living standards, need to be re-examined in their constituent parts in far greater detail than before, and that a range of implementation methods need to be practically tested against the functional and user requirements of different software systems. This can only be achieved through a process of engagement with users, service providers (such as repository managers), technical support staff and developers. While the ideal target audience is the end-users of the repository service, it is in practice difficult to engage them in the abstract with unfamiliar, possibly complex, metadata schemas. So much of the process must inevitably be mediated through the repository managers' invaluable everyday experience in dealing directly with users -- at least until the stage in the process when test interfaces, test repositories or live services can be demonstrated. In order to engage developers in the process of building and testing possible implementation methods, it is absolutely crucial to collect and present tangible evidence of user requirements...

The aim of the iterative testing, development and user engagement effort that has been outlined here is to complement the plan for the development of DCAPs that was advanced in the Singapore Framework. The functional requirements, domain model and DSP were advanced as mandatory elements of a DCAP. It is proposed here that functional requirements are a fundamental pre-condition for the other two, and consequently they require considerable, ongoing analysis and usability testing..."

See also: 'Assessing FRBR in Dublin Core Application Profiles'


Abstract Modelling of Digital Identifiers
Nick Nicholas, Nigel Ward, Kerry Blinco; Ariadne Journal

"Discussion of digital identifiers, and persistent identifiers in particular, has often been confused by differences in underlying assumptions and approaches. To bring more clarity to such discussions, the PILIN Project (Persistent Identifier and Linking INfrastructure) has devised an abstract model of identifiers and identifier services, which is presented here in summary. Given such an abstract model, it is possible to compare different identifier schemes, despite variations in terminology; and policies and strategies can be formulated for persistence without committing to particular systems. The abstract model is formal and layered; in this article, we give an overview of the distinctions made in the model. This presentation is not exhaustive, but it presents some of the key concepts represented, and some of the insights that result.

The main goal of the PILIN project has been to scope the infrastructure necessary for a national persistent identifier service. There are a variety of approaches and technologies already on offer for persistent digital identification of objects. But true identity persistence cannot be bound to particular technologies, domain policies, or information models: any formulation of a persistent identifier strategy needs to outlast current technologies, if the identifiers are to remain persistent in the long term.

For that reason, PILIN has modelled the digital identifier space in the abstract. It has arrived at an ontology and a service model for digital identifiers, and for how they are used and managed, building on previous work in the identifier field (including the thinking behind URI, DOI, XRI, and ARK), as well as semiotic theory. The ontology, as an abstract model, addresses the questions 'what is (and isn't) an identifier?' and 'what does an identifier management system do?' This more abstract view also brings clarity to the ongoing conversation about whether URIs can be (and should be) universal persistent identifiers...

It is important for the Web that all digital identifiers behave as HTTP URIs for dereferencing—resolution and/or retrieval. This has made the modern Web architecture possible. But this does not mean all digital identifiers have to be HTTP URIs, and in particular managed as HTTP URIs, in order to achieve interoperability with other identifiers. HTTP as a service protocol for identifiers does not address all purposes equally well, and there is a place in the Web for other identifier schemes to continue in use, so long as they are exposed through HTTP..."
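A familiar worked example (offered here as illustration rather than as part of the article) is the DOI scheme: the same identifier can be written in its native form and also exposed through an HTTP resolver, so HTTP dereferencing works without the identifier itself being managed as an HTTP URI:

  doi:10.1000/182                  (an identifier in its native DOI form)
  http://dx.doi.org/10.1000/182    (the same identifier exposed through an HTTP resolver)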

See also: the PILIN Project


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation          http://www.ibm.com
Microsoft Corporation    http://www.microsoft.com
Oracle Corporation       http://www.oracle.com
Primeton                 http://www.primeton.com
Sun Microsystems, Inc.   http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/


