The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: September 08, 2010
XML Daily Newslink. Wednesday, 08 September 2010

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
ISIS Papyrus http://www.isis-papyrus.com



OASIS KMIP Specifications Submitted for Ballot as OASIS Standards
Staff, OASIS Announcement

On September 01, 2010, OASIS announced that members of the Key Management Interoperability Protocol (KMIP) Technical Committee had submitted two approved Committee Specification documents for consideration at OASIS Standard maturity level. Balloting for the two specifications was scheduled for September 16-30, 2010. Statements of use were provided by Cryptsoft, HP, IBM, RSA, and Safenet. Companion documents approved at CS level include the Key Management Interoperability Protocol Usage Guide Version 1.0 (provides illustrative information on using the protocol) and Key Management Interoperability Protocol Use Cases Version 1.0 (provides samples of protocol messages corresponding to a set of defined test cases).

Key Management Interoperability Protocol Specification Version 1.0 in CS-01 "establishes a single, comprehensive protocol for communication between enterprise key management servers and cryptographic clients. By defining a protocol that can be used by any cryptographic client, from the smallest automated electric meters to the most complex disk-arrays, KMIP enables enterprise key management servers to speak a single protocol to all cryptographic clients supporting the protocol. Through vendor support of KMIP, an enterprise will be able to consolidate key management in a single enterprise key management system, reducing operational and infrastructure costs while strengthening operational controls and governance of security policy.

KMIP includes three primary elements: (1) Objects. These are the symmetric keys, asymmetric keys, digital certificates and so on upon which operations are performed. (2) Operations. These are the actions taken with respect to the objects, such as getting an object from a key management system, modifying attributes of an object and so on. (3) Attributes. These are the properties of the object, such as the kind of object it is, the unique identifier for the object, and so on. The protocol supports other elements, such as the use of templates that can simplify the specification of attributes in a request or response. But at its most basic level, KMIP consists of placing objects, operations and/or attributes either into a request from a cryptographic client to a key management server or into a response from a key management server to a cryptographic client.
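The request/response pattern described above can be sketched in miniature. This is an illustrative model only: real KMIP messages use a binary TTLV (tag/type/length/value) encoding, and the function, field, and key names below (`buildGetRequest`, `key-0001`, and so on) are hypothetical simplifications, not KMIP identifiers.

```javascript
// Sketch of the three KMIP elements -- objects, operations, attributes --
// placed into a request from a client and a response from a server.
// Structure only; actual KMIP wire messages are TTLV-encoded binary.
function buildGetRequest(uniqueIdentifier) {
  return {
    operation: 'Get',                 // the operation: action to perform
    payload: {
      uniqueIdentifier: uniqueIdentifier, // attribute identifying the object
    },
  };
}

function buildGetResponse(request, objectStore) {
  const obj = objectStore[request.payload.uniqueIdentifier];
  return {
    operation: request.operation,
    payload: {
      objectType: obj.objectType,     // attribute: what kind of object it is
      object: obj.material,           // the managed object itself
    },
  };
}

// A server-side store holding one symmetric key (hypothetical values).
const store = {
  'key-0001': { objectType: 'SymmetricKey', material: 'BASE64KEYDATA' },
};

const req = buildGetRequest('key-0001');
const res = buildGetResponse(req, store);
console.log(res.payload.objectType); // 'SymmetricKey'
```

The same object/operation/attribute triple recurs across all KMIP operations; only the payload contents change.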

Key Management Interoperability Protocol Profiles Version 1.0 intends to meet the OASIS requirement on conformance clauses for a KMIP Server through profiles that define the use of KMIP objects, attributes, operations, message elements and authentication methods within specific contexts of KMIP server and client interaction. These profiles define a set of normative constraints for employing KMIP within a particular environment or context of use. They may, optionally, require the use of specific KMIP functionality or in other respects define the processing rules to be followed by profile actors... As a transport-level protocol, KMIP is complementary to other key management efforts, including OASIS EKMI and IEEE P1619.3, which are expressed in XML. KMIP leverages other standards whenever possible. For example, it uses the key life-cycle specified in NIST Special Publication 800-57 to define attributes related to key states. It uses network security mechanisms such as TLS to establish authenticated communication between the key management system and the cryptographic client. It relies on existing standards for encryption algorithms, key derivation and many other aspects of a cryptographic solution, focusing on the unique and critical problem of interoperable messages between key management systems and cryptographic clients..."

See also: Cryptographic Key Management


New Standard: W3C Speech Synthesis Markup Language (SSML) Version 1.1
Daniel C. Burnett, Voxeo and Zhi Wei Shuang (eds), W3C Recommendation

"The World Wide Web Consortium (W3C) today extended speech on the Web to an enormous new market by improving support for Asian languages and multi-lingual voice applications. The Speech Synthesis Markup Language (SSML 1.1) Recommendation provides control over voice selection as well as speech characteristics such as pronunciation, volume, and pitch.

SSML is part of W3C's Speech Interface Framework for building voice applications, which also includes the widely deployed VoiceXML and the Pronunciation Lexicon, which provides speech engines guidance on proper pronunciation... The intended use of SSML is to improve the quality of synthesized content. Different markup elements impact different stages of the synthesis process. The markup may be produced either automatically, for instance via XSLT or CSS3 from an XHTML document, or by human authoring. Markup may be present within a complete SSML document or as part of a fragment embedded in another language, although no interactions with other languages are specified as part of SSML itself. Most of the markup included in SSML is suitable for use by the majority of content developers...
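As a sketch of the kind of control the Recommendation describes, the fragment below combines voice selection with prosody (volume and pitch) markup. The specific voice characteristics requested are illustrative choices for this example, not requirements of the specification.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<speak version="1.1" xmlns="http://www.w3.org/2001/10/synthesis"
       xml:lang="en-US">
  Welcome.
  <!-- voice selection: request a female voice for the next phrase -->
  <voice gender="female">
    <!-- prosody control: raise volume and pitch for emphasis -->
    <prosody volume="loud" pitch="high">Thank you for calling.</prosody>
  </voice>
</speak>
```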

The multilingual enhancements in this version of SSML result from discussions at W3C Workshops held in China, Greece, and India. SSML 1.1 also provides application designers greater control over voice selection and handling of content in unexpected languages. Estimates suggest that around 85% of interactive voice response (IVR) systems deployed in North America and Western Europe use VoiceXML and SSML. The new version of SSML will open significant new markets, thanks to the improved support for non-Western European languages. A number of North American and European vendors of text-to-speech (TTS) products have indicated they expect to support SSML 1.1 within the coming year.

Dan Burnett, Co-Chair of the Voice Browser Working Group and Director of Speech Technologies and Standards at Voxeo: 'With SSML 1.1 there is an intentional focus on Asian language support, including Chinese languages, Japanese, Thai, Urdu, and others, to provide a wide deployment potential. With SSML 1.0 we already had strong traction in North America and western Europe, so this focus makes SSML 1.1 incredibly strong globally. We are really pleased to have many collaborators in China, in particular, focusing on SSML improvements and iterations'..."

See also: the approved W3C SSML Version 1.1 specification


Cloud SDO Activities Survey and Analysis
Bhumip Khasnabish and Chu JunSheng (eds), IETF Internet Draft

An initial level -00 Standards Track document was published by IETF in connection with ongoing Cloud Computing discussions: Cloud SDO Activities Survey and Analysis. This document is one of several drafted (and being revised) for possible future work in IETF (e.g., Cloud P2P Video Streaming, Cloud Log, Cloud Resource Mobility, HTTP Enhancements for Cloud, Cloud VPN Extensions, Cloud Address Resolution, Cloud Reference Framework, etc.). Some thirty-eight (38) SDOs are surveyed for cloud computing technical activities.

From the document 'Introduction': "In conducting a survey and gap analysis we will be able to determine interoperability gaps and overlaps in Cloud standards: of those features and functions that are required, which gaps and overlaps hinder interoperability? Finally, we will be able to determine what IETF work would fill in those gaps without overlapping with other standards organizations.

This survey of Cloud services and networking related standards development organizations (SDOs) and working groups (WGs) shows that there are a variety of ways to support both client-side and server-side application layer programming interfaces (APIs) for cloud services. Many of these services tend to utilize resources across multiple administrative, technology, and geographical domains. Since there is no unified and universally accepted protocol or mechanism to define the mobility of resources across domains, early implementers tend to utilize the features and functions of existing IETF protocols along with their proprietary modifications or extensions in order to achieve their goals.

In addition to using a virtualization layer (VM layer), a thin Cloud operating system (OS) layer may be useful to hide the complexity, specificity, and regional character (locality) of the resources. We also observe that different SDOs/WGs are trying to develop many different methods for logging and reporting of resource usage for Cloud services. This will create auditing transparency problems which may negatively impact the development of security and service level agreement features. In the end, these may in fact increase the effective cost of services that utilize cloud-based systems and networks, violating the very foundation on which the concept of using the cloud is based...."

See also: documents from the IETF Cloud Computing bar BOFs at IETF-77/IETF-78


jQuery, ASP.NET, and Interoperability
Dino Esposito, DDJ

"Microsoft's announcement that it will contribute to the development of the jQuery JavaScript library and enhance interoperability between jQuery and ASP.NET caught a lot of developers and project managers off-guard, causing them to play catch-up in regards to what jQuery is and how it can play an important role in enterprise projects." From the web site: "jQuery is a fast and concise JavaScript Library that simplifies HTML document traversing, event handling, animating, and Ajax interactions for rapid web development..."

"The idea behind jQuery is to simplify the task of getting a selected subset of DOM elements to work with. In other words, the jQuery library is mostly intended to run queries over the page DOM and execute operations over returned items. But the query engine behind the library goes far beyond the simple search capabilities of, say, document.getElementById (and related functions) that you find natively in the DOM.

The query capabilities of jQuery use the powerful CSS syntax which gives you a surprising level of expressivity. For example, you can select all elements that share a given CSS class, have a given combination of attribute values, appear in a fixed relative position in the DOM tree, and are in particular relationship with other elements. More importantly, you can add filter conditions and chain all queries together to be applied sequentially...

Nearly any jQuery script is characterized by one or more calls to the '$' function—an alias for the root jQuery function. Any line of jQuery code is essentially a query with some optional action applied to the results. When you specify a query, you call the root function and pass it a selector plus an optional context. The selector indicates the query expression; the context indicates the portion of the DOM where to run the query. If no context is specified, the jQuery function looks for DOM elements within the entire page DOM. The jQuery root object performs some work on the provided arguments, runs the query, and then returns a new jQuery object that contains the results..."
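A minimal sketch of this query-plus-chained-action pattern, assuming a browser page that has loaded jQuery and contains a `#content` element (both assumptions of this example, not details from the article):

```javascript
// Illustrative jQuery usage; runs in a browser page that loads jQuery.
function highlightExternalLinks() {
  // Selector: all <a> elements whose href starts with "http".
  // Context: limit the query to the subtree under #content.
  $('a[href^="http"]', '#content')   // root function: selector + context
    .not('.internal')                // filter condition chained onto the result
    .addClass('external')            // action applied to every matched element
    .attr('target', '_blank');       // further chained action on the same set
}
```

Because each call returns a new jQuery object wrapping the current result set, the filter and the two actions chain into a single statement, exactly the sequential application the article describes.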

See also: the jQuery web site


OGC Seeks Comments on NetCDF Climate and Forecast Data Encoding Standards
Staff, Open Geospatial Consortium Announcement

"The Open Geospatial Consortium (OGC) members are seeking comments on three candidate standards: the OGC CF-netCDF Primer, OGC Network Common Data Form (NetCDF) Core Encoding Standard, and the OGC NetCDF Binary Encoding Extension Standard — NetCDF Classic and 64-bit Offset Format. The public comment period closes on October 07, 2010.

NetCDF (network Common Data Form) comprises a data model for array-oriented scientific data, related access libraries, and a machine-independent data format. Together, the interfaces, libraries, and format support the creation, access, and sharing of georeferenced scientific data.
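As a sketch of that array-oriented data model, here is a small header in CDL, the textual notation used by the netCDF tools (e.g., ncdump), describing a CF-style temperature grid; the variable names and dimension sizes are illustrative:

```
netcdf example {
dimensions:
    time = UNLIMITED ;
    lat = 180 ;
    lon = 360 ;
variables:
    double time(time) ;
        time:units = "hours since 2010-01-01 00:00:00" ;
    float lat(lat) ;
        lat:units = "degrees_north" ;
    float lon(lon) ;
        lon:units = "degrees_east" ;
    float tas(time, lat, lon) ;
        tas:standard_name = "air_temperature" ;
        tas:units = "K" ;
}
```

The CF attributes (standard_name, units) are what make such a file self-describing and georeferenced.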

NetCDF and CF-NetCDF were developed by the weather and climate communities and have been maintained by the University Corporation for Atmospheric Research (UCAR). These standards have been formally recognized by US Government standards bodies. UCAR introduced NetCDF into the OGC as a candidate OGC standard to encourage broader international use and greater interoperability among clients and servers interchanging data in binary form. Establishing CF-netCDF as an OGC standard for binary encoding will enable standard delivery of data in binary form via several OGC service interface standards, including the OGC Web Coverage Service (WCS), Web Feature Service (WFS), and Sensor Observation Service (SOS) Interface Standards.

The following organizations submitted these candidate standards to the Open Geospatial Consortium: (1) IMAA-CNR Italy; (2) METEO-FRANCE; (3) Natural Environment Research Council (NERC); (4) Northrop Grumman Corporation; (5) University Corporation for Atmospheric Research (UCAR); (6) US National Oceanic and Atmospheric Administration (NOAA)..."

See also: the University Corporation for Atmospheric Research (UCAR)


OASIS Service Component Architecture / Assembly TC Public Review Drafts
Bryan Aupperle, Dave Booz, Mike Edwards, Jeff Estefan (eds), OASIS PRDs

Members of the OASIS Service Component Architecture / Assembly (SCA-Assembly) Technical Committee have published two Committee Draft specifications as OASIS Public Review Draft documents. The public comment period ends October 30, 2010.

"Test Suite Adaptation for SCA Assembly Model Version 1.1 Specification" (Public Review Draft 01) "defines the requirements for adaptation of the SCA Assembly Test Suite to use a new SCA implementation type that is provided by a conforming SCA Runtime. The SCA Runtime needs to pass the SCA Assembly Test Suite without failures. Where the SCA Runtime supports an implementation type that is not currently supported by the SCA Assembly Test Suite, it is necessary for the test suite to be adapted to use that implementation type, before the test suite can be run successfully against that SCA Runtime. The SCA Assembly Test Suite is designed for adaptation to new implementation types. The test suite is divided into two groups of artifacts, handled by means of separate SCA contributions: (A) Implementation type-independent artifacts, largely consisting of SCA composites, with associated supporting artifacts such as interfaces provided as WSDL declarations. (B) Implementation type-dependent artifacts, which consist of implementations in the relevant implementation type, plus SCA composites which directly wrap those implementations, and other associated artifacts which can include interfaces provided in a language which is natural for the implementation type. e.g., Java implementation classes have their interfaces declared as Java interfaces...

"Implementation Type Documentation Requirements for SCA Assembly Model Version 1.1 Specification" (Public Review Draft 01) "defines the requirements for the documentation of an SCA implementation type that is used by a conforming SCA Runtime. The documentation describes how implementation artifacts of that implementation type relate to SCA components declared within SCA composites, as described by the SCA Assembly specification. The SCA Assembly specification defines an application in terms of service components that use and configure a particular implementation artifact.

In order to fully define how a particular service component operates, it is necessary to describe the relationship between the configuration of the SCA component and the implementation technology used by the service component. It is the role of the Implementation Type Documentation to describe this relationship. Some implementation types are described by formal specifications that have been created by OASIS SCA technical committees. Examples include: (1) SCA WS-BPEL Client and Implementation V1.1, and (2) SCA POJO Component Implementation V1.1..."

See also: the Requirements document


Network Portability Requirements and Models for Cloud Environment
Keiichi Shima and Yuji Sekiya (eds), IETF Internet Draft

IETF has published an initial level -00 Internet Draft for the Informational specification Network Portability Requirements and Models for Cloud Environment.

From the document Abstract: "Recent progress in virtual machine technology has made it possible to host various Internet service nodes in a so-called cloud environment. Virtual machine hosting technology provides a method to migrate a virtual machine from one hypervisor to another. However, such technology mainly focuses on migration between hypervisors attached to the same link, and tends not to consider migration over the Internet. This document explains the purpose of that type of operation and describes several possible operation methods to provide network portability in cloud systems."

"The progress of virtualization technology makes changes for services on the Internet to work on the infrastructure of virtual nodes, called PaaS—Platform as a Services. A PaaS is built and provided in single datacenter by single organization nowadays, however, the needs of building distributed, inter-datacenters PaaS and inter-connecting existing PaaS(es) are growing. Because inter-datacenters PaaS is required for administrative points. If a datacenter is encountered network circuit or power troubles, administrators or users of PaaS want to move the virtual nodes in a datacenter to other datacenters or other PaaS(es) without service interruption.

In order to migrate virtual nodes between datacenters and between clouds without service interruptions or changes, network portability technologies are required between the datacenters and clouds... We consider two kinds of network portability models in this draft. One is the host-oriented portability model, and the other is the network-oriented portability model. In the former model, each host (virtual machine) manages the network portability itself; to adopt this model, each host must be equipped with some kind of host mobility protocol. In the network-oriented portability model, the network resource to which the host is attached does not change between before and after migration. Since the migrated host does not notice any change in its network environment, it can continue to work after the migration. To provide the same network environment, the cloud system must support network resource migration between the previous and current locations of the host..."
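The contrast between the two models can be sketched as a toy simulation. Everything below (field names, functions, addresses) is illustrative invention for this sketch, not taken from the draft or any real protocol:

```javascript
// Host-oriented model: the VM runs a mobility protocol itself, so after
// migration it takes a locator from the new network (keeping its identity).
function migrateHostOriented(vm, newNetwork) {
  return { ...vm, locator: newNetwork.prefix + vm.id, network: newNetwork.name };
}

// Network-oriented model: the cloud system moves the network resource along
// with the VM, so the VM's addressing is unchanged and it never notices.
function migrateNetworkOriented(vm, newDatacenter) {
  return { ...vm, datacenter: newDatacenter }; // locator and network untouched
}

const vm = {
  id: 'vm-1',
  locator: '2001:db8:a::vm-1',
  network: 'net-a',
  datacenter: 'dc-1',
};

const afterHost = migrateHostOriented(vm, { name: 'net-b', prefix: '2001:db8:b::' });
const afterNet = migrateNetworkOriented(vm, 'dc-2');

console.log(afterHost.locator !== vm.locator); // true: host updated its address
console.log(afterNet.locator === vm.locator);  // true: address preserved
```

The trade-off the draft implies falls out of the sketch: the host-oriented model demands mobility support in every VM, while the network-oriented model pushes the burden onto the cloud system, which must migrate the network resource itself.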

See also: the IETF Clouds Discussion Archive


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
ISIS Papyrus http://www.isis-papyrus.com
Microsoft Corporation http://www.microsoft.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2010-09-08.html  —  Legal stuff
Robin Cover, Editor: robin@oasis-open.org