This issue of XML Daily Newslink is sponsored by:
Sun Microsystems, Inc. http://sun.com
- W3C XML Security Working Group Publishes Version 2.0 Working Drafts
- OASIS XACML Technical Committee Approves Profiles for Public Review
- OASIS Key Management Interoperability Protocol (KMIP) TC Advances Specifications
- Emulex and IBM Collaborate on Security for Cloud Storage
- Stoneware, Inc. Supports SAML Provisioning as Core webNetwork Service
- RDF/RDFa Now Supported as Part of Drupal Core
- Web 2.0 Summit: Tech Advice from Tim Berners-Lee
- Mozilla Unveils Raindrop Messaging Dashboard
W3C XML Security Working Group Publishes Version 2.0 Working Drafts
Mark Bartel, John Boyer, Barb Fox (et al, eds), W3C Technical Reports
Members of the W3C XML Security Working Group, chaired by Frederick Hirsch, have published two First Public Working Drafts as part of the Version 2.0 chartered activity. The XML Security Working Group started in summer 2008, and "has decided to publish an interim set of 1.1 specifications as it works towards producing a more radical change to XML Signature. The XML Signature 1.1 and XML Encryption 1.1 specifications clarify and enhance the previous specifications without introducing breaking changes, although they do introduce new algorithms. The recently published set of documents also includes Algorithm Cross-Reference, Properties and Best Practices documents to enable adoption, Derived Keys to recognize a needed use case, and Use Cases and Transform Simplification documents to obtain early feedback to guide the development of 2.0 specifications..."
XML Signature Syntax and Processing Version 2.0 specifies XML digital signature processing rules and syntax. XML Signatures provide integrity, message authentication, and/or signer authentication services for data of any type, whether located within the XML that includes the signature or elsewhere. This version of the XML Signature specification introduces a new, simpler transform model. While this model is less generic than the one in the 1.x versions of this specification, we anticipate gains in terms of simplicity, lower attack surface, and streamability. This model is significantly different from the one in XML Signature 1.x; see '10 Differences from 1.x version'. XML Signature 2.0 is designed to be backward compatible, however, enabling the XML Signature 1.x model to be used where necessary...
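For orientation, the 1.x-style structure that 2.0 remains backward compatible with can be sketched as a skeletal enveloped signature. This fragment is illustrative, not taken from the draft; digest and signature values are elided, and the Algorithm URIs shown are the standard identifiers from the XML Signature and XML Encryption namespaces:

```xml
<Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
  <SignedInfo>
    <CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
    <SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"/>
    <Reference URI="">
      <Transforms>
        <Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
      </Transforms>
      <DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/>
      <DigestValue>...</DigestValue>
    </Reference>
  </SignedInfo>
  <SignatureValue>...</SignatureValue>
</Signature>
```

The 2.0 transform simplification targets exactly the open-ended Transforms chain above, which is where most of the 1.x attack surface and streaming difficulty lives.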
Canonical XML Version 2.0 represents "a major rewrite of Canonical XML Version 1.1 to address issues around performance, streaming, hardware implementation, robustness, minimizing attack surface, determining what is signed and more. It also incorporates an update to Exclusive Canonicalization, effectively a 2.0 version, as well... Any XML document is part of a set of XML documents that are logically equivalent within an application context, but which vary in physical representation based on syntactic changes permitted by 'XML 1.0' and 'Namespaces in XML 1.0'. This specification describes a method for generating a physical representation, the canonical form, of an XML document that accounts for the permissible changes. Except for limitations regarding a few unusual cases, if two documents have the same canonical form, then the two documents are logically equivalent within the given application context. Note that two documents may have differing canonical forms yet still be equivalent in a given context based on application-specific equivalence rules for which no generalized XML specification could account..."
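As a small, hedged illustration of what canonicalization buys: Python 3.8+ ships a C14N 2.0 implementation in its standard library, and the sketch below (with invented sample documents) shows two syntactically different but logically equivalent documents reducing to the same canonical form:

```python
# Canonicalization demo using the C14N 2.0 support in Python 3.8+'s stdlib.
import xml.etree.ElementTree as ET

# Two physical representations of one logical document: attribute order,
# quote style, whitespace inside tags, and empty-element syntax all differ.
doc_a = '<root   a="1" b="2"><child/></root>'
doc_b = "<root b='2' a='1' ><child></child></root>"

canon_a = ET.canonicalize(doc_a)
canon_b = ET.canonicalize(doc_b)

assert canon_a == canon_b  # identical canonical form
print(canon_a)
```

A signature computed over the canonical form therefore survives any of the syntactic variations 'XML 1.0' and 'Namespaces in XML 1.0' permit.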
See also: Canonical XML Version 2.0
OASIS XACML Technical Committee Approves Profiles for Public Review
Staff, OASIS Announcement
Members of the OASIS Extensible Access Control Markup Language (XACML) Technical Committee have approved two Committee Draft specifications for public review. Both are XACML profiles. Comments from the public are invited through December 22, 2009.
The Committee Draft for XACML 3.0 Export Compliance-US (EC-US) Profile Version 1.0 defines a profile for the use of XACML in expressing policies for complying with USA government regulations for export compliance (EC). It "defines standard attribute identifiers useful in such policies, and recommends attribute value ranges for certain attributes... Any U.S. organization that ships goods, materials, software, and/or technical information may be subject to U.S. export control laws. Non-military products may be classified according to the U.S. Department of Commerce 'Commerce Control List'. Military products are controlled according to the United States Munitions List. Destination countries are also classified by a variety of criteria. Even specific entities and individuals may have restrictions. The recipient's U.S. person status, location, and organization must also be taken into account in these export control authorization decisions. This EC-US profile provides a standard framework for the subject and resource attributes that must be considered for U.S. export control decisions..."
The Committee Draft for XACML 3.0 Intellectual Property Control (IPC) Profile Version 1.0 defines a profile for the use of XACML in expressing policies for intellectual property control (IPC). "It defines standard attribute identifiers useful in such policies, and recommends attribute value ranges for certain attributes...
Many intellectual property access control decisions can be made on the basis of the resource's copyright, trademark, patent, trade secret, or other custom classification. This profile defines standard XACML attributes for these properties, and recommends the use of standardized attribute values. In practice, an organization's intellectual property protection policies will be a mixture of rules derived from laws and regulations, along with enterprise-specific rules derived from government-approved bilateral or multilateral agreements with other organizations.
The attributes and glossary terms defined in this XACML profile are not an exclusive or comprehensive list of all the attributes that may be required for rendering authorization decisions concerning IP. For example, PDPs would have to evaluate other entitlements, such as group membership, from PIPs. This profile is meant as a point of reference for implementing IP controls, and may be extended as needed for organizational purposes. Software vendors who choose to implement this profile should take the attributes herein as a framework for IP controls, but allow individual implementers some flexibility in constructing their own XACML-based authorization policies and PDPs. The goal of the profile is to create a framework of common IP-related attributes upon which authorization decisions can be rendered. This profile will also provide XACML software developers and authorization policy writers guidance on supporting IP control use cases..."
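To give a feel for how such profile attributes are consumed by a PDP, here is a sketch of an XACML 3.0 rule keyed on a subject attribute. The attribute identifier urn:example:ec-us:us-person is invented for illustration; the profiles under review define the real identifiers and recommended value ranges:

```xml
<Rule RuleId="permit-us-person" Effect="Permit">
  <Target>
    <AnyOf>
      <AllOf>
        <Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
          <AttributeValue
              DataType="http://www.w3.org/2001/XMLSchema#string">true</AttributeValue>
          <AttributeDesignator
              Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject"
              AttributeId="urn:example:ec-us:us-person"
              DataType="http://www.w3.org/2001/XMLSchema#string"
              MustBePresent="true"/>
        </Match>
      </AllOf>
    </AnyOf>
  </Target>
</Rule>
```

Standardizing the AttributeId and its value range is precisely what lets two organizations' PDPs evaluate the same export-control or IP policy consistently.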
OASIS Key Management Interoperability Protocol (KMIP) TC Advances Specifications
Robert Haas, Indra Fitzgerald (et al, eds), Committee Drafts
At the October 22, 2009 meeting of the OASIS Key Management Interoperability Protocol (KMIP) TC, members voted to approve the technical content of three of the four KMIP documents at Committee Draft level: the KMIP Specification 1.0, the KMIP Usage Guide, and KMIP Use Cases. The fourth KMIP document (Key Management Interoperability Protocol Profiles Version 1.0) was identified for further work.
The Key Management Interoperability Protocol Specification 1.0 is intended as a "specification of the protocol used for the communication between clients and servers to perform certain management operations on objects stored and maintained by a key management system. These objects are referred to as Managed Objects in this specification. They include symmetric and asymmetric cryptographic keys, digital certificates, and templates used to simplify the creation of objects and control their use. Managed Objects are managed with operations that include the ability to generate cryptographic keys, register objects with the key management system, obtain objects from the system, destroy objects from the system, and search for objects maintained by the system. Managed Objects also have associated attributes, which are named values stored by the key management system and are obtained from the system via operations..."
The Key Management Interoperability Protocol Usage Guide 1.0 complements the KMIP Specification by providing guidance on how to implement KMIP most effectively to ensure interoperability. In particular, the document includes the following guidance: (1) Clarification of assumptions and requirements that drive or influence the design of KMIP and the implementation of KMIP-compliant key management; (2) Specific recommendations for implementation of particular KMIP functionality; (3) Clarification of mandatory and optional capabilities for conformant implementations; (4) Functionality considered for inclusion in KMIP V1.0, but deferred to subsequent versions of the standard. A selected set of conformance profiles and authentication suites are defined in the KMIP Profiles specification. Further assistance for implementing KMIP is provided by the KMIP Use Cases for Proof of Concept Testing document that describes a set of recommended test cases and provides the TTLV (Tag/Type/Length/Value) format for the message exchanges defined by those use cases..."
Key Management Interoperability Protocol Use Cases 1.0 discusses: Message exchange, Centralized Management, Key life cycle support, Auditing and reporting, Key Interchange, Vendor Extensions, Asymmetric keys, Key Roll-over... The use-cases indicate whether all concepts within the protocol are sound and whether the protocol is usable when implementing typical real-life scenarios. These use-cases are not intended to fully test an implementation of KMIP; thus, they do not contain typical QA scenarios which would stress an implementation. The use-cases are based on v1.0 of the protocol. They define a number of client-to-server request-response pairs for a number of operations. For each request-response message pair the operation is stated, along with the relevant parameters needed for the request or response message. This is followed by two different illustrations of the messages: first, a human-readable construction which shows the field tags, types, and values, followed by the TTLV encoding of the message. These are included to facilitate the implementation of the message creation and parsing functionality. The use-cases show one possible way to construct the messages, and the messages shown are not necessarily the only correct constructions (e.g., it is possible to omit the attribute index if it is zero)..."
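The TTLV wire format referenced above is simple enough to sketch: each item is a 3-byte tag, a 1-byte type, a 4-byte big-endian length, and a value padded out to an 8-byte boundary. The Python sketch below encodes a 32-bit integer item; the tag value 0x420000 is a placeholder, not a tag number assigned by the KMIP specification:

```python
# Minimal sketch of KMIP's TTLV (Tag/Type/Length/Value) item encoding.
# The tag used below is a placeholder; KMIP 1.0 assigns the real tag numbers.
import struct

TYPE_INTEGER = 0x02  # TTLV type code for a 32-bit integer

def ttlv_encode_integer(tag: int, value: int) -> bytes:
    """Encode one integer item: 3-byte tag, 1-byte type, 4-byte big-endian
    length (counting only the 4 value bytes), value padded to 8 bytes."""
    header = tag.to_bytes(3, "big") + bytes([TYPE_INTEGER])
    body = struct.pack(">i", value)          # the length field counts only this
    return header + struct.pack(">I", len(body)) + body + b"\x00" * 4

item = ttlv_encode_integer(0x420000, 256)
print(item.hex())  # 16 bytes: 4-byte header, 4-byte length, 8-byte padded value
```

The fixed 8-byte alignment and explicit lengths are what make TTLV easy to parse in hardware and in streaming implementations, which is a recurring design concern across these drafts.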
Emulex and IBM Collaborate on Security for Cloud Storage
Warwick Ashford, ComputerWeekly.com
"Emulex is collaborating with IBM to deliver a host-based encryption system to secure data in cloud-based storage, virtualised environments and converged networks. The system is based on the Emulex Secure Host Bus Adapter (HBA) that sits in every physical server in a datacentre and IBM's Tivoli Key Lifecycle Manager. Although due for commercial release in mid 2010, Emulex is to demonstrate the system at the RSA Conference Europe 2009 this week in London at the Hilton London Metropole. Emulex claims this hardware-based approach to encryption provides a cost-effective, easy to use way for enterprises to protect data outside the server. By encrypting every piece of data before it leaves the server, this system avoids the need to classify data and keep track of it, which simplifies data management, said Brandon Hoff, director of security product management at Emulex... The data is encrypted and therefore protected no matter where it goes, for data in-flight on the network and for data at-rest on disc arrays... The Emulex system uses the Key Management Interoperability Protocol (KMIP), which was developed as an industry standard by companies including IBM, RSA, and Emulex.. "
From the announcement: "Enterprise key management is an essential feature that security conscious companies demand. Emulex and IBM are taking this to the next step by collaborating on the newly developed Key Management Interoperability Protocol (KMIP), according to Steve Daheb: 'With this combined solution, IT managers will have a seamless standards based encryption solution that will enable them to achieve maximum enterprise-wide data center protection, without impacting server performance, delivering better security at a cost point well below other data encryption approaches'... Implementing a host-based encryption security solution improves the data center's security stance and minimizes the window of data vulnerability, because data is protected where it's created, in the host..."
See also: the announcement
Stoneware, Inc. Supports SAML Provisioning as Core webNetwork Service
Ken Quinton, Stoneware Announcement
"Stoneware, Inc. announced the upcoming release of SAML Provisioning for webNetwork. Integrated SAML (Security Assertion Markup Language) provisioning will enable Stoneware's private cloud solution to push a user's identity to public cloud application providers. SAML Provisioning is just another step in strengthening the union between the private cloud and public cloud providers. Provisioning identity information from the private cloud to the public provider simplifies management and strengthens integration resulting in cost savings for both the customer and public cloud provider..."
According to the announcement: "Adding SAML (Security Assertion Markup Language) Provisioning as a core webNetwork service simplifies the integration of public cloud applications and increases an organization's ability to respond to changing market and economic conditions... SAML Provisioning works by defining a public cloud web application within the webNetwork system. The web application will hold the necessary information required for the webNetwork private cloud to connect to the public cloud application or service and exchange certificate information. Once the connection is trusted, the identity provider (i.e., private cloud) is ready to send requests to the service provider (i.e., public cloud)..."
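For readers unfamiliar with what the identity provider would actually "push" once the trust relationship is in place, a minimal SAML 2.0 assertion fragment carrying a subject identity and one attribute might look like the sketch below. The issuer URL, subject, and attribute are invented for illustration; only the namespace and format URIs are the standard SAML identifiers:

```xml
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                ID="_example-assertion-id" Version="2.0"
                IssueInstant="2009-10-29T12:00:00Z">
  <saml:Issuer>https://private-cloud.example.com/webnetwork</saml:Issuer>
  <saml:Subject>
    <saml:NameID
        Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">
      jane.doe@example.com
    </saml:NameID>
  </saml:Subject>
  <saml:AttributeStatement>
    <saml:Attribute Name="department">
      <saml:AttributeValue>engineering</saml:AttributeValue>
    </saml:Attribute>
  </saml:AttributeStatement>
</saml:Assertion>
```

In the webNetwork arrangement described above, the private cloud signs assertions of this shape as the identity provider, and the public cloud application consumes them as the service provider.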
See also: SAML references
RDF/RDFa Now Supported as Part of Drupal Core
Stéphane Corlosquet, Posting to W3C 'semantic-web' List
"After several months of research, coding, sprints and patch reviews, the main RDF patch has been committed to Drupal core. This introduces a basic RDF API which maps the Drupal data structure to RDF. These mappings are then automatically exported as RDFa throughout the site... Drupal 7 itself is still in development phase; the first beta releases should come out in the coming months, and the final release in 2010...
Please try [the sample code] with your favorite RDFa parser and report any bug you encounter. You will find some dummy RDF properties/classes sometimes but we'll fix them soon. Note that the RDF mappings are fairly independent from the actual API since the mapping definitions are centralized and are not hardcoded in the HTML, thus allowing exports in other RDF serialization formats...
Also, for those attending the Eighth International Semantic Web Conference (ISWC 2009) next week: I'll present the research work which was used to build RDF in Drupal core," via the session 'Social Web and Networks' and paper "Produce and Consume Linked Data with Drupal" (by Stephane Corlosquet, Renaud Delbru, Tim Clark, Axel Polleres, and Stefan Decker). Abstract: "Currently a large number of Web sites are driven by Content Management Systems (CMS) which manage textual and multimedia content but also, inherently, carry valuable information about a site's structure and content model. Exposing this structured information to the Web of Data has so far required considerable expertise in RDF and OWL modelling and additional programming effort. In this paper we tackle one of the most popular CMS: Drupal. We enable site administrators to export their site content model and data to the Web of Data without requiring extensive knowledge on Semantic Web technologies. Our modules create RDFa annotations and (optionally) a SPARQL endpoint for any Drupal site out of the box. Likewise, we add the means to map the site data to existing ontologies on the Web with a search interface to find commonly used ontology terms. We also allow a Drupal site administrator to include existing RDF data from remote SPARQL endpoints on the Web in the site. When brought together, these features allow networked RDF Drupal sites that reuse and enrich Linked Data. We finally discuss the adoption of our modules and report on a use case in the biomedical field and the current status of its deployment."
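As a rough illustration of the kind of RDFa output involved, a node's title and body might be annotated along the lines of the fragment below. The exact markup is determined by Drupal's centralized mapping definitions; the vocabularies and property names here are illustrative only:

```xml
<div xmlns:dc="http://purl.org/dc/terms/"
     xmlns:sioc="http://rdfs.org/sioc/ns#"
     xmlns:foaf="http://xmlns.com/foaf/0.1/"
     about="/node/1" typeof="sioc:Item foaf:Document">
  <h2 property="dc:title">My first post</h2>
  <div property="sioc:content">Hello, Web of Data.</div>
</div>
```

An RDFa parser pointed at such a page would extract triples stating that /node/1 is a sioc:Item with the given title and content, which is exactly what makes the site's content model machine-readable without any separate export step.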
See also: the ISWC 2009 paper
Web 2.0 Summit: Tech Advice from Tim Berners-Lee
Rafe Needleman, CNET News.com
When Tim Berners-Lee, inventor of the World Wide Web, entered the room for the final interview at the Web 2.0 Summit, the audience stood up for him. Appropriately so, since most of those present owe their livelihoods to his invention. In an on-stage interview conducted by Tim O'Reilly, the audience listened to Berners-Lee not just for his perspective but for his guidance... Here's what I heard:
TBL: (1) Don't build your laws into the Web. "Technology shouldn't tell you what's right and what's wrong...The rule of law applies on the Web. It's a platform for humanity." (2) Fault-tolerance is vital. "Building a tight system where everything is guaranteed to work is possible in smaller configurations but not on a global scale." (3) If you want it everywhere, give it away. (4) Large companies/govt are the enemy: "I'm worried about anything large coming in to take control, whether it's large companies or government..." (5) Small open companies can topple big closed ones. (6) Separate design from device. "The growth of mobile devices is one example of how thinking about Web design for one size screen (a PC or laptop) can cut a product off from growth." (7) Consider content as app. (8) Forge trust. (9) Make the Web work for more people. "Only 20 percent to 25 percent of humans use the Web even though 80 percent 'have signal'..."
See also: Web 2.0 Summit 2009
Mozilla Unveils Raindrop Messaging Dashboard
Thomas Claburn, InformationWeek
"E-mail used to be the Internet's killer app. Mozilla's Raindrop software anticipates a world where email has been reduced to one channel among many... Just as Google Wave represents an attempt to imagine what email would look like if it were invented today, Mozilla's Raindrop represents an attempt to imagine a more modern communication client. Developed by the team that created Mozilla's Thunderbird e-mail client, Raindrop recognizes that the diverse range of communication channels—Twitter, IM, Skype, Facebook, Google Docs, E-mail—would be more useful if presented in a unified interface... As with other Mozilla open-source projects, the Raindrop team is encouraging interested developers to participate and contribute code to improve the project..."
From the Mozilla Labs web site: "Raindrop is a new exploration by the team responsible for Thunderbird to explore new ways to use open Web technologies to create useful, compelling messaging experiences. Raindrop's mission: make it enjoyable to participate in conversations from people you care about, whether the conversations are in email, on twitter, a friend's blog or as part of a social networking site..."
See also: the Mozilla Labs Raindrop web site
XML Daily Newslink and Cover Pages sponsored by:
Sun Microsystems, Inc. http://sun.com
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: firstname.lastname@example.org
Newsletter unsubscribe: email@example.com
Newsletter help: firstname.lastname@example.org
Cover Pages: http://xml.coverpages.org/