The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: April 08, 2008
XML Daily Newslink. Tuesday, 08 April 2008

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
IBM Corporation

DMTF SM CLP Specification Adopted as an ANSI INCITS Standard
Staff, DMTF Announcement

The Distributed Management Task Force (DMTF) announced a major technology milestone in achieving "National Recognition with a Newly Approved ANSI Standard." Its Server Management Command Line Protocol (SM CLP) specification, a key component of DMTF's Systems Management Architecture for Server Hardware (SMASH) initiative, has been approved as an American National Standards Institute (ANSI) InterNational Committee for Information Technology Standards (INCITS) standard. DMTF will continue to work with INCITS to submit the new ANSI standard to the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) Joint Technical Committee 1 (JTC 1) for approval as an international standard. The INCITS Executive Board recently approved the SM CLP standard, which has been designated ANSI INCITS 438-2008. INCITS is accredited by ANSI, the organization that oversees the development of American National Standards by accrediting the procedures of standards-developing organizations such as INCITS. SM CLP (DSP0214) is part of DMTF's SMASH initiative, a suite of specifications that delivers architectural semantics, industry-standard protocols, and profiles to unify management of the data center. The SM CLP standard was driven by a market requirement for a common command language to manage a heterogeneous server environment: platform vendors each provide their own tools and commands for performing systems management on their servers, and SM CLP unifies management of multi-vendor servers by providing a common command language for key server management tasks. The specification also enables common scripting and automation using a variety of tools, allowing management solution vendors to deliver many benefits to IT customers: data center administrators can securely manage their heterogeneous server environments using a command line protocol and a common set of commands.
SM CLP also enables the development of common scripts to increase data center automation, which can help significantly reduce management costs... The CLP is defined as a character-based message protocol rather than an interface, in a fashion similar to the Simple Mail Transfer Protocol (RFC 2821). The CLP is a command/response protocol: a text command message is transmitted from the Client over the transport protocol to the Manageability Access Point (MAP); the MAP receives and processes the command, and a text response message is then transmitted from the MAP back to the Client... The CLP supports generating XML output data (Extensible Markup Language, Third Edition), as well as a keyword mode and plain-text output modes. XML was chosen as a supported output format because of its acceptance in the industry, its establishment as a standard, and the need for Clients to import data obtained through the CLP into other applications.
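The command/response exchange described above can be sketched in a few lines of Python. The verb/target shape below follows the general "verb [options] target" style of CLP commands, but the option syntax and the key=value response format here are illustrative placeholders, not the normative SM CLP wire format.

```python
# Sketch of a CLP-style exchange: the Client sends one text command
# line to the MAP and parses the text response it gets back.

def build_command(verb, target, output_format=None):
    """Assemble a single CLP-style command line (illustrative syntax)."""
    parts = [verb]
    if output_format:
        parts.append("-output format=%s" % output_format)
    parts.append(target)
    return " ".join(parts)

def parse_response(text):
    """Parse a hypothetical 'key = value' text response into a dict."""
    result = {}
    for line in text.strip().splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            result[key.strip()] = value.strip()
    return result

cmd = build_command("show", "/system1", output_format="text")
# cmd == "show -output format=text /system1"
```

In the real protocol the command travels over a secure transport (e.g. SSH) to the MAP, which executes it against the managed server and returns text, keyword, or XML output.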

First Public Draft: Health Care and Life Sciences (HCLS) Knowledgebase
M. Scott Marshall and Eric Prud'hommeaux, W3C Technical Report

Members of the W3C Semantic Web in Health Care and Life Sciences Interest Group (HCLS) have released a First Public Working Draft of an "HCLS Knowledgebase" specification, one of two initial Working Drafts. The HCLS Knowledgebase (HCLS-KB) is a biomedical knowledge base that integrates 15 distinct data sources using currently available Semantic Web technologies such as the W3C standard Web Ontology Language (OWL) and Resource Description Framework (RDF). The report outlines which resources were integrated, how the KB was constructed using freely available triple store technology, how it can be queried using the W3C-Recommended RDF query language SPARQL, and what resources and inferences are involved in answering complex queries. While the utility of the KB is illustrated by identifying a set of genes involved in Alzheimer's Disease, the approach described here can be applied to any use case that integrates data from multiple domains. A second document, "Experiences with the Conversion of SenseLab databases to RDF/OWL," shares implementation experience from the Yale Center for Medical Informatics: "One of the challenges facing the Semantic Web for Health Care and Life Sciences is that of converting relational databases into Semantic Web format. The issues and the steps involved in such a conversion have not been well documented. To this end, we have created this document to describe the process of converting SenseLab databases into OWL. SenseLab is a collection of relational (Oracle) databases for neuroscientific research. The conversion of these databases into RDF/OWL format is an important step towards realizing the benefits of the Semantic Web in integrative neuroscience research. This document describes how we represented some of the SenseLab databases in Resource Description Framework (RDF) and Web Ontology Language (OWL), and discusses the advantages and disadvantages of these representations.
Our OWL representation is based on the reuse of existing standard OWL ontologies developed in the biomedical ontology communities." The mission of the W3C Health Care and Life Sciences (HCLS) Interest Group is to show how to use Semantic Web technology to answer cross-disciplinary questions in life science that have, until now, been prohibitively difficult to research. The success of the group continues to draw industry interest. W3C Members are currently reviewing a draft charter that would enable the renewed HCLS Interest Group to develop and support use cases that have clear scientific, business and/or technical value, using Semantic Web technologies in three areas: life science, translational medicine, and health care. W3C invites Members to review the draft charter (which is public during the review), and encourages those who are interested in using the Semantic Web to solve knowledge representation and integration on a large scale to join the Interest Group.
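As a concrete illustration of querying a knowledge base like the HCLS-KB, the sketch below builds a SPARQL Protocol request URL in Python. The endpoint URL and the class IRI are hypothetical placeholders, not the actual HCLS-KB endpoint or vocabulary.

```python
# Minimal sketch: a SPARQL query sent to an HTTP endpoint per the
# SPARQL Protocol, where the query travels as a 'query' form parameter.
import urllib.parse

ENDPOINT = "http://example.org/sparql"  # placeholder endpoint

query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?gene ?label
WHERE {
  ?gene a <http://example.org/vocab#Gene> ;  # hypothetical class IRI
        rdfs:label ?label .
}
LIMIT 10
"""

request_url = ENDPOINT + "?" + urllib.parse.urlencode({"query": query})
# Fetching request_url would return SPARQL results (XML or JSON).
```

A real query against the KB would use the published HCLS-KB vocabulary and could join gene, pathway, and disease data drawn from several of the 15 integrated sources in a single SELECT.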

See also: the W3C news item

New WSO2 Identity Solution Feature-Rich with OpenID
Staff, WSO2 Announcement

WSO2 today announced a new release of the "WSO2 Identity Solution", which enables LAMP and Java websites to provide strong authentication based on the new interoperable Microsoft CardSpace technology. New features in version 1.5 include: (1) OpenID Provider and relying-party component support; (2) OpenID Information Cards based on username-token and self-issued credentials; (3) SAML 2.0 support. "This new release includes OpenID and OpenID Information Cards, further enhancing the WSO2 Identity Solution to cater to a wider audience for web-based authentication. OpenID is a key feature in decentralizing single sign-on, much favored by many users. The WSO2 Identity Solution is built on the open standards Security Assertion Markup Language (SAML) and WS-Trust. This version supports SAML 2.0 in addition to the 1.1 support available in the previous version. WSO2's open source security offering features an easy-to-use Identity Provider that is controlled by a simple Web-based management console and supports interoperability with multiple vendors' CardSpace components, including those provided by Microsoft .NET. The WSO2 Identity Solution also works with current enterprise identity directories, such as those based on the Lightweight Directory Access Protocol (LDAP) and Microsoft Active Directory, allowing enterprises to leverage their existing infrastructure. In addition to the Identity Provider, the WSO2 Identity Solution provides a Relying Party Component Set which plugs into the most common Web servers to add support for CardSpace authentication and now OpenID." The software is available for download under the open source Apache License, Version 2.0.
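To make the SAML 2.0 side of this concrete, the sketch below reads the subject out of a SAML 2.0 assertion, the kind of token an identity provider such as this one issues. The namespace and element names are from the SAML 2.0 assertion schema; the assertion content itself is a made-up example, and a real relying party must also verify the assertion's XML signature, which is omitted here.

```python
# Minimal sketch: extract the Subject NameID from a SAML 2.0 assertion.
import xml.etree.ElementTree as ET

SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion"

assertion_xml = """
<Assertion xmlns="urn:oasis:names:tc:SAML:2.0:assertion" ID="_a1" Version="2.0">
  <Subject>
    <NameID>alice@example.org</NameID>
  </Subject>
</Assertion>
"""

def subject_name_id(xml_text):
    """Return the NameID text, or None if the assertion has no Subject."""
    root = ET.fromstring(xml_text)
    node = root.find("{%s}Subject/{%s}NameID" % (SAML_NS, SAML_NS))
    return node.text if node is not None else None
```

In a federation, the relying party trusts this NameID only after checking the signature against the identity provider's certificate, which is the trust relationship SAML and WS-Trust formalize.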

See also: the WSO2 Identity Solution project web site

Google App Engine Supports Scalable Application Development
Staff, Google Announcement

Google has announced the availability of its free Google App Engine, which provides a fully integrated application environment, making it "easy to build scalable applications that grow from one user to millions of users without infrastructure headaches." According to the Google announcement, "Google App Engine gives you access to the same building blocks that Google uses for its own applications, making it easier to build an application that runs reliably, even under heavy load and with large amounts of data. The development environment includes the following features: (1) Dynamic web serving, with full support for common web technologies; (2) Persistent storage powered by Bigtable and GFS [Google File System, a scalable distributed file system for large, distributed, data-intensive applications], with queries, sorting, and transactions; (3) Automatic scaling and load balancing; (4) Google APIs for authenticating users and sending email; (5) A fully featured local development environment. App Engine applications are implemented using the Python programming language. The App Engine Python runtime environment includes a specialized version of the Python interpreter, the standard Python library, libraries and APIs for App Engine, and a standard interface to the web server layer. Google App Engine and Django both have the ability to use the WSGI standard to run applications. As a result, it is possible to use nearly the entire Django stack on Google App Engine, including middleware. As a developer, the only necessary adjustment is modifying your Django data models to use the Google App Engine Datastore API, which interfaces with the fast, scalable Google App Engine datastore. Since both Django and Google App Engine have a similar concept of models, as a Django developer you can quickly adjust your application to use our datastore.
Google App Engine packages these building blocks and takes care of the infrastructure stack, leaving you more time to focus on writing code and improving your application... This preview of Google App Engine is available for the first 10,000 developers who sign up, and we plan to increase that number in the near future. During this preview period, applications are limited to 500MB of storage, 200 million megacycles of CPU per day, and 10GB of bandwidth per day. We expect most applications will be able to serve around 5 million pageviews per month. In the future, these limited quotas will remain free, and developers will be able to purchase additional resources as needed..."
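The WSGI interface the announcement refers to is simply a callable that takes the request environment and a start_response function. A minimal application in that shape, which would run on any WSGI-compliant server:

```python
# A minimal WSGI application: the contract shared by App Engine's
# Python runtime, Django, and any other WSGI server or framework.
def application(environ, start_response):
    body = b"Hello from a WSGI app"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]  # iterable of byte strings
```

Because both sides speak this interface, a Django project can be mounted on App Engine largely unchanged, with the data-model layer being the main part that needs porting to the Datastore API.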

See also: the TechCrunch review

Cool URIs for the Semantic Web
Leo Sauermann and Richard Cyganiak (eds), W3C Interest Group Note

Members of the W3C Semantic Web Education and Outreach (SWEO) Interest Group have published an Interest Group Note, "Cool URIs for the Semantic Web." It is a tutorial explaining decisions of the W3C Technical Architecture Group (TAG) for newcomers to Semantic Web technologies. The document was initially based on the DFKI Technical Memo TM-07-01, 'Cool URIs for the Semantic Web', and was subsequently published as a W3C Working Draft in December 2007, and again in March 2008, by the Semantic Web Education and Outreach (SWEO) Interest Group, part of the W3C Semantic Web Activity. The drafts were publicly reviewed, especially by the TAG and the Semantic Web Deployment Group (SWD). Summary: The Resource Description Framework (RDF) allows users to describe both Web documents and concepts from the real world—people, organisations, topics, things—in a computer-processable way. Publishing such descriptions on the Web creates the Semantic Web. URIs (Uniform Resource Identifiers) are very important, providing both the core of the framework itself and the link between RDF and the Web. This document presents guidelines for their effective use. It discusses two strategies, called 303 URIs and hash URIs, gives pointers to several Web sites that use these solutions, and briefly discusses why several other proposals have problems. Given only a URI, machines and people should be able to retrieve a description of the resource identified by that URI from the Web. Such a look-up mechanism is important to establish shared understanding of what a URI identifies. Machines should get RDF data and humans should get a readable representation, such as HTML. The standard Web transfer protocol, HTTP, should be used. There should be no confusion between identifiers for Web documents and identifiers for other resources: a URI is meant to identify only one of them, so one URI cannot stand for both a Web document and a real-world object.
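The 303-URI strategy the Note describes can be sketched as a small content-negotiation function: a request for a real-world-object URI is answered with "303 See Other", redirecting to either an RDF or an HTML document depending on the Accept header. The URIs and the ".rdf"/".html" naming convention below are illustrative, not mandated by the Note.

```python
# Sketch of 303-URI content negotiation for a non-document resource.
def describe(resource_uri, accept_header):
    """Return (status, redirect_location) for a real-world-object URI."""
    if "application/rdf+xml" in accept_header:
        return (303, resource_uri + ".rdf")   # machine-readable data
    return (303, resource_uri + ".html")      # human-readable page

status, loc = describe("http://example.org/id/alice", "text/html")
# -> redirect to the HTML description of the person, not the person
```

A hash URI such as http://example.org/about#alice needs no redirect at all: the client strips the fragment before making the HTTP request, so the server simply serves the document http://example.org/about, and the identifier for the real-world object never collides with a retrievable document.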

See also: the announcement

Intel Releases SOA Security Toolkit
Staff, DDJ

Intel has introduced its SOA Security Toolkit as a release candidate. Part of Intel's family of XML tools, the toolkit is a high-performance software module that addresses the confidentiality needs of service-oriented architectures (SOA) by providing XML digital signature, encryption, and decryption capabilities for SOAP messages. Enterprises adopting and deploying SOA solutions rely on message formats defined in XML (Extensible Markup Language). The extensibility, verbosity, and structured nature of XML create performance challenges for software developers seeking to provide content security in this dynamic, heterogeneous environment. The Intel SOA Security Toolkit is standards-compliant, for easy integration into existing XML processing environments, and is optimized to support the authentication, confidentiality, and integrity of complex and large XML documents. The Intel SOA Security Toolkit 1.0 for Java environments is a high-performance, policy-driven API available for Linux and Windows. Compliant with the WS-Security 1.0/1.1 and SOAP 1.1/1.2 standards, the toolkit focuses on confidentiality, integrity, and non-repudiation for SOA environments. It enables encryption and decryption of SOAP message data, and digital signature and verification via a wide range of security algorithms, using industry standards, for both server and application environments. The toolkit lets users provide their own XML policy file as input. Through this policy file, users can specify which key provider and trust manager the API security policy engine should instantiate, using either a custom or the default class loader implementation. The security policy engine then applies the specified policy, obtaining the keys and certificates through the specified key provider and performing the trust check using the specified trust manager. The toolkit supports all types of X.509 certificates, private keys, and shared keys.
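For readers unfamiliar with how WS-Security protects a SOAP message, the sketch below shows only the digest step of XML Signature, computed in Python with hashlib. It is deliberately simplified: a compliant implementation (like the toolkit's) must first canonicalize the XML and then sign the digest with a key, neither of which is shown here, and the SOAP body is a made-up example.

```python
# Simplified digest step of XML Signature over a SOAP body.
import base64
import hashlib

soap_body = (
    b'<soap:Body xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
    b'<getQuote symbol="IBM"/></soap:Body>'
)

# SHA-1 digest of the (here: pre-canonicalized) body, base64-encoded,
# as it would appear in a <ds:DigestValue> element of the signature
# carried in the SOAP header's wsse:Security block.
digest = base64.b64encode(hashlib.sha1(soap_body).digest()).decode("ascii")
```

Encryption works analogously via XML Encryption: selected elements of the body are replaced by EncryptedData elements, so intermediaries can still route the message while only the intended recipient can read the protected content.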

See also: the product description

SCA Java EE Integration Specification Version 0.9
OSOA, Specification Draft

On March 28, 2008, Version 0.9 of the SCA "Java EE Integration Specification" was published by OSOA authors as part of the SCA Service Component Architecture; contributors include BEA, Cape Clear, IBM, Interface21, IONA, Oracle, Primeton, Progress Software, Red Hat, Rogue Wave, SAP, Siemens, Software AG, Sun, Sybase, and TIBCO. The specification defines a model for using SCA assembly in the context of a Java EE runtime that enables integration with Java EE technologies at a fine-grained component level, as well as use of Java EE applications and modules in a coarse-grained, large-system approach. The Java EE specifications define various programming models that result in application components, such as Enterprise JavaBeans (EJB) and Web applications, that are packaged in modules and assembled into enterprise applications using a Java Naming and Directory Interface (JNDI) based system of component-level references and component naming. Names of Java EE components are scoped to the application package (including single-module application packages), while references, such as EJB references and resource references, are scoped to the component and bound in the Environment Naming Context (ENC). To reflect and extend this model with SCA assembly, the specification introduces the concept of the Application Composite and a number of implementation types, such as the EJB implementation type and the Web implementation type, that represent the most common Java EE component types. Implementation types for Java EE components associate those component implementations with SCA service components and their configuration, consisting of SCA wiring and component properties as well as an assembly scope (i.e., a composite). Note that the use of these implementation types does not create new component instances as far as Java EE is concerned; Section 3.1 explains this in more detail.
In terms of packaging and deployment this specification supports the use of a Java EE application package as an SCA contribution, adding SCA's domain metaphor to regular Java EE packaging and deployment. In addition, the JEE implementation type provides a means for larger scale assembly of contributions in which a Java EE application forms an integrated part of a larger assembly context and where it is viewed as an implementation artifact that may be deployed several times with different component configurations.

See also: the OSOA canonical source

RSA 2008: BT Trials Federated Identity Management
Ian Grant, ComputerWeekly

BT is experimenting with a federated identity management system that could be rolled out to its eight million internet users and corporate customers. A commercial version would allow users to identify themselves to websites and applications, and allow other users to access data, do work, and transact business, said Robert Temple, BT's chief security architect. Using CA's SiteMinder software, BT is giving internal staff web access to applications such as PeopleSoft, Siebel, Oracle Financials, Citrix, an XML gateway, and a voice-verification system from Persay. Temple said the company's intention is to provide managed user identity as a "common capability" of the kind relatively common in IT but rare in telecommunications. Temple said BT runs 32 discrete networks; as a result, it has too many RADIUS identity authentication servers. Learning how to consolidate management of user identities on all these networks is the only way it would be possible to extend similar safeguards to BT customers, he said. BT has opted to use the Liberty Alliance's Security Assertion Markup Language (SAML) 2.0 standard for federated identity management. However, it has proved hard to find external contractors willing and able to help BT, as most were familiar only with earlier versions of SAML. Temple noted that relationships between BT and organisations sharing its federated IDs were plagued by lawyers and contracts. "In the end, we asked the lawyers politely to get out of the way as we knew what we were doing," he said. Temple said this was not to minimise the legal issues, which required partners to spend a lot of time building trust in each other. These lessons should help reduce the learning curve for user organisations when the time comes for them to make more use of the web for business applications...


XML Daily Newslink and Cover Pages are sponsored by:

BEA Systems, Inc.
IBM Corporation
Sun Microsystems, Inc.


Hosted By
OASIS - Organization for the Advancement of Structured Information Standards

Sponsored By

IBM Corporation
ISIS Papyrus
Microsoft Corporation
Oracle Corporation
