XML Daily Newslink. Wednesday, 26 May 2010

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation http://www.microsoft.com



Dynamic Symmetric Key Provisioning Protocol (DSKPP)
Andrea Doherty, Mingliang Pei, Salah Machani, Magnus Nystrom (eds), IETF Internet Draft

On May 26, 2010 the Internet Engineering Steering Group (IESG) announced a Last Call public review for the IETF specification Dynamic Symmetric Key Provisioning Protocol (DSKPP). This specification has been produced by members of the IETF Provisioning of Symmetric Keys (KEYPROV) Working Group.

"DSKPP is a client-server protocol for initialization (and configuration) of symmetric keys to locally and remotely accessible cryptographic modules. The protocol can be run with or without private-key capabilities in the cryptographic modules, and with or without an established public-key infrastructure. Two variations of the protocol support multiple usage scenarios. With the four-pass variant, keys are mutually generated by the provisioning server and cryptographic module; provisioned keys are not transferred over-the-wire or over-the-air. The two-pass variant enables secure and efficient download and installation of pre-generated symmetric keys to a cryptographic module."

Background: "Symmetric key based cryptographic systems (e.g., those providing authentication mechanisms such as one-time passwords and challenge-response) offer performance and operational advantages over public key schemes. Such use requires a mechanism for provisioning of symmetric keys providing equivalent functionality to mechanisms such as CMP (RFC 4210) and CMC (RFC 5272) in a Public Key Infrastructure. Traditionally, cryptographic modules have been provisioned with keys during device manufacturing, and the keys have been imported to the cryptographic server using, e.g., a CD-ROM disc shipped with the devices. Some vendors also have proprietary provisioning protocols, which often have not been publicly documented; CT-KIP is one exception, per RFC 4758..."

The IETF Provisioning of Symmetric Keys (KEYPROV) Working Group was chartered to "develop the necessary protocols and data formats required to support provisioning and management of symmetric key authentication tokens, both proprietary and standards based... The need for provisioning protocols in PKI architectures has been recognized for some time. Although the existence and architecture of these protocols provides a feasibility proof for the KEYPROV work, assumptions built into these protocols mean that it is not possible to apply them to symmetric key architectures without substantial modification. Current developments in deployment of Shared Symmetric Key (SSK) tokens have highlighted the need for a standard protocol for provisioning symmetric keys..."

See also: the IETF Provisioning of Symmetric Keys (KEYPROV) Working Group


xCal: The XML format for iCalendar
Cyrus Daboo, Mike Douglass, Steven Lees (eds), IETF Internet Draft

IETF has published a revised level -04 version of the specification xCal: The XML format for iCalendar. This version changes the proposed MIME type from 'xml+calendar' to 'calendar+xml', fixes several references to sections of RFC 5545, updates examples in Appendix C (example iCalendar data and its representation in XML as defined by the xCal specification), and corrects the definition and grammar for the TIME and UTC-OFFSET properties.

iCalendar (Internet Calendaring and Scheduling Core Object Specification), defined in RFC 5545, defines a data format for representing and exchanging calendaring and scheduling information such as events, to-dos, journal entries, and free/busy information, independent of any particular calendar service or protocol. The iCalendar data format is a widely deployed interchange format for calendaring and scheduling data. While many applications and services consume and generate calendar data, iCalendar is a specialized format that requires its own parser/generator. In contrast, XML-based formats are widely used for interoperability between applications, and the many tools that generate, parse, and manipulate XML make it easier to work with than iCalendar.

The purpose of this IETF specification is to define 'xCal' as an XML format for iCalendar data. xCal is defined so that iCalendar data can be converted to XML, and then back to iCalendar, without losing any semantic meaning in the data. Anyone creating XML calendar data according to this specification will know that their data can be converted to a valid iCalendar representation as well. Two key design considerations in xCal are: [1] Round-tripping, so that converting an iCalendar instance to XML and back will give the same result as the starting point. [2] Preserving the semantics of the iCalendar data. While a simple consumer can easily browse the calendar data in XML, a full understanding of iCalendar is still required in order to modify and/or fully comprehend the calendar data...

At the top level of the iCalendar object model is an 'iCalendar stream'. This object encompasses multiple 'iCalendar objects'. In XML, the entire stream is contained in the root 'ICAL:icalendar' XML element. An iCalendar stream can contain one or more iCalendar objects. Each iCalendar object, delimited by 'BEGIN:VCALENDAR' and 'END:VCALENDAR', is enclosed by the 'ICAL:vcalendar' XML element... iCalendar properties, whether they apply to the VCALENDAR object or to a component, are handled in a consistent way in the xCal format. iCalendar properties are enclosed in the XML element 'ICAL:properties'. Each individual iCalendar property is represented in XML by an element of the same name as the iCalendar property, but in lowercase. For example, the CALSCALE property is represented in XML by the 'ICAL:calscale' element..."
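
As a rough illustration of the mapping described above, the following Java fragment builds the XML skeleton for a single iCalendar object whose CALSCALE property becomes a lowercased 'calscale' element inside a 'properties' wrapper. The namespace URI is a placeholder, and the representation of property values is simplified; consult the draft for the authoritative element structure.

    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;

    /**
     * Sketch of the structural mapping described above: the iCalendar stream maps to a
     * root icalendar element, each VCALENDAR object to a vcalendar element, and each
     * property to a lowercased child of a properties wrapper.
     */
    public class XCalSketch {
        static final String NS = "urn:example:xcal";   // placeholder, not the namespace defined by the draft

        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();

            Element icalendar = doc.createElementNS(NS, "icalendar");   // the whole iCalendar stream
            Element vcalendar = doc.createElementNS(NS, "vcalendar");   // one BEGIN/END:VCALENDAR object
            Element properties = doc.createElementNS(NS, "properties"); // wrapper for the object's properties

            // The CALSCALE:GREGORIAN property becomes a lowercased <calscale> element.
            Element calscale = doc.createElementNS(NS, "calscale");
            calscale.setTextContent("GREGORIAN");       // simplified; see the draft for how values are represented

            properties.appendChild(calscale);
            vcalendar.appendChild(properties);
            icalendar.appendChild(vcalendar);
            doc.appendChild(icalendar);

            // Serialize the resulting XML to standard output.
            TransformerFactory.newInstance().newTransformer()
                    .transform(new DOMSource(doc), new StreamResult(System.out));
        }
    }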

See also: the OASIS WS-Calendar TC


SA-REST: Semantic Annotation of Web Resources
Karthik Gomadam, Ajith Ranabahu, Amit Sheth (eds), W3C Member Submission

W3C has acknowledged receipt of a specification SA-REST: Semantic Annotation of Web Resources in the form of a W3C Member Submission. "By publishing this document, W3C acknowledges that the Submitting Members [Wright State University] have made a formal Submission request to W3C for discussion. Publication of this document by W3C indicates no endorsement of its content by W3C, nor that W3C has, is, or will be allocating any resources to the issues addressed by it. This document is not the product of a chartered W3C group, but is published as potential input to the W3C Process. A W3C Team Comment has been published in conjunction with this Member Submission."

SA-REST is a poshformat designed "to add additional meta-data to (but not limited to) REST API descriptions in HTML or XHTML." According to the referenced definition, poshformats are data formats "constructed from the use of semantic class names. They are one-off, ad-hoc, or more informal class-name-based format efforts, distinguished from the more formally researched and documented microformats..." In the SA-REST Submission, meta-data from various models, such as an ontology, a taxonomy, or a tag cloud, can be embedded into documents. This embedded meta-data permits various enhancements, such as improved search, data mediation, and easier integration of services...

According to the W3C Team Comment on the SA-REST Submission: "the Submission proposes a poshformat for identifying the meaning and purpose of regions of HTML documents. Specifically, it proposes markup for three concepts: (1) 'domain-rel' identifies the "domain information" for a document or region (e.g. 'div') in a document; (2) 'sem-class' identifies the meaning or purpose of a non-block element (e.g. the 'img' element), and (3) 'sem-rel' describes a link in the marked-up document that captures the semantics of a link. The REST in SA-REST stems from the initial use case, which is the markup of RESTful Web Services. In this regard, SA-REST provides an alternative to WADL and SAWSDL. SAWSDL annotations can be added to WSDL bindings to label the semantics of RESTful Web Services. SA-REST is motivated by other use cases, such as improving search accuracy.

Unlike other microformats, SA-REST suggests labeling semantics with URLs in title attributes; these URLs would be visible in browsers which display element titles. The intention of the title attribute is to contain human understandable information (although, unfortunately, the relevant part of the HTML specification does fail to make this absolutely clear). Using this attribute to help developers instead of catering for the end-user of a page raises serious accessibility issues that are not addressed by the Submission. Note that the microformat community has introduced the "value-class" pattern to avoid such accessibility problems. Section 4 of the Submission proposes using GRDDL to extract RDF assertions from documents marked up with SA-REST attributes. A normative XSLT would specify the representation of SA-REST in RDF..."
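
For illustration only, the following Java fragment scans an HTML snippet for the three SA-REST class names and prints the semantic URL carried in each element's title attribute. The snippet, the ontology URLs, and the attribute layout are assumptions based on the summaries above, not markup taken from the Submission; a real consumer would extract RDF via the GRDDL/XSLT route the Submission proposes.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    /**
     * Rough sketch of reading SA-REST-style annotations out of HTML. The fragment and
     * the exact attribute layout are assumptions (class names domain-rel / sem-class /
     * sem-rel, with the semantic URL carried in the title attribute).
     */
    public class SaRestSketch {
        public static void main(String[] args) {
            String html =
                "<div class=\"domain-rel\" title=\"http://example.org/ontology/Calendar\">" +
                "  <span class=\"sem-class\" title=\"http://example.org/ontology/Event\">Team meeting</span>" +
                "  <a class=\"sem-rel\" title=\"http://example.org/ontology/attendedBy\" href=\"/people/alice\">Alice</a>" +
                "</div>";

            // Match elements whose class is one of the three SA-REST markers and capture the title URL.
            Pattern p = Pattern.compile("class=\"(domain-rel|sem-class|sem-rel)\"\\s+title=\"([^\"]+)\"");
            Matcher m = p.matcher(html);
            while (m.find()) {
                System.out.println(m.group(1) + " -> " + m.group(2));
            }
        }
    }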

See also: the W3C Team Comment on this Submission


Information Card Issuance Community Technical Preview (CTP)
Forefront Blogger, TechNet

Microsoft has released a Community Technical Preview (CTP) for an 'Information Card Issuance CTP Add-On for Active Directory Federation Services 2'. The goal of the CTP is to enable the community to continue to exercise the capabilities of the identity metasystem, as it relates specifically to information card issuance, in testing, pilots, and other non-production environments.

The Information Card Issuance CTP "will enable IT administrators to easily issue information cards via Active Directory Federation Services 2.0, giving end users a more flexible and secure means of authentication to applications within the enterprise, across company boundaries and into the cloud. Through this CTP, Microsoft hopes to gain valuable feedback on our Information Card technologies...

The Information Card Issuance CTP will support the following scenarios: (1) Administrators can install an Information Card Issuance component on AD FS 2.0 RTM servers and configure Information Card Issuance policy and parameters. (2) End users with IMI 1.0- or IMI 1.1 (DRAFT)-compliant identity selectors can obtain Information Cards backed by username/password, X.509 digital certificate, or Kerberos. (3) Continued support for Windows CardSpace 1.0 in Windows 7, Windows Vista, and Windows XP SP 3 running .NET 3.5 SP1..."

Active Directory Federation Services 2.0 "is a server role in Windows Server that provides simplified access and single sign-on for on-premises and cloud-based applications in the enterprise, across organizations, and on the Web. AD FS 2.0 helps IT streamline user access with native single sign-on across organizational boundaries and in the cloud, easily connect applications by utilizing industry standard protocols and provide consistent security to users with a single user access model externalized from applications..."

See also: Microsoft Connect


Explore the CDI Programming Model in ZK: Implement a Simple Application
Sachin K Mahajan and Ashish Dasnurkar, IBM developerWorks

The Java Specification Request (JSR) 299: Contexts and Dependency Injection (CDI) for the Java EE Platform defines a powerful set of services. These services include type-safe dependency injection of Java EE components and an event notification model that allows interaction between components, which simplifies access to Java EE services from the Java EE Web tier. Essentially, any third-party framework used in the Java EE Web tier can leverage CDI services using the CDI portable extensions mechanism.
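
For readers unfamiliar with JSR 299, the following minimal sketch shows the two services named above, type-safe injection and the event notification model, using the standard CDI annotations (@Inject, @Observes). The OrderService and AuditLogger classes are hypothetical, and the code is meant to run inside a CDI container such as a Java EE 6 application server.

    import javax.enterprise.event.Event;
    import javax.enterprise.event.Observes;
    import javax.inject.Inject;

    // Hypothetical payload carried by the CDI event.
    class OrderPlaced {
        final String orderId;
        OrderPlaced(String orderId) { this.orderId = orderId; }
    }

    // A bean that fires the event; the Event<T> instance is injected type-safely by the container.
    class OrderService {
        @Inject Event<OrderPlaced> orderPlaced;

        void placeOrder(String orderId) {
            // ... persist the order, then notify any interested components
            orderPlaced.fire(new OrderPlaced(orderId));
        }
    }

    // A loosely coupled observer; the container calls this method for every OrderPlaced event fired.
    class AuditLogger {
        void onOrderPlaced(@Observes OrderPlaced event) {
            System.out.println("Order placed: " + event.orderId);
        }
    }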

This article explains how to modify a real-world example using the ZK framework and its integration with powerful CDI services. ZK is an open source, event-driven Ajax framework, often summarized as 'Ajax without JavaScript'. CDI, as defined by JSR 299 for the Java EE 6 platform, provides a set of powerful features, such as type-safe dependency injection, an event notification model, and an SPI for developing portable extensions. The ZK CDI extension integrates the ZK programming model with CDI, allowing seamless development of Java EE 6 enterprise applications.

ZK CDI, which is provided by the ZK framework, gives seamless integration with CDI, exposing CDI services within the ZK framework. It lets enterprise developers combine CDI-driven applications with a comprehensive and powerful Ajax front end supplied by ZK. Using CDI and ZK together lets you effortlessly bridge the gap between the Java EE Web tier and Java EE...

Because CDI emphasizes loose coupling and strong typing, the bean doesn't need to be aware of certain aspects, such as implementation, threading model, or lifecycle. These aspects can vary based on the deployment, thus not affecting the client at all. Loose coupling makes the code easy to maintain and extensible..."

See also: JSR 299 (Contexts and Dependency Injection for the Java EE Platform)


IETF Internet Draft: A RADIUS Attribute for SAML Constructs
Josh Howlett (ed), IETF Internet Draft

An initial level -00 IETF Internet Draft has been published through the Network Working Group. This Informational specification defines the SAML-Construct attribute for the 'Remote Authentication Dial In User Service' (RADIUS); the attribute is used for encapsulating Security Assertion Markup Language (SAML) constructs.

The 'SAML-Construct' attribute contains a SAML construct, as defined in the OASIS specification Assertions and Protocol for the OASIS Security Assertion Markup Language (SAML) V2.0. This attribute MAY be used with any AAA protocol that makes use of RADIUS attributes, such as RADIUS (RFC 2865) or Diameter (RFC 3588). Where multiple SAML-Construct attributes are included in an AAA protocol message (for example, a RADIUS packet), the Construct fields of the attributes are concatenated to form a single SAML construct.

In the SAML-Construct format the fields (Type, Length, CT, Construct) are transmitted from left to right... The Construct Type field (CT) is a one-octet enumerated field. It takes an integer value denoting the type of SAML construct in the Construct field...

Construct: The Construct field is one or more octets. It contains a SAML construct (for example, as defined in [SAMLCore]). If larger than a single attribute, the SAML construct data MUST be split on 253-octet boundaries over as many attributes as necessary. On reception, the SAML construct is reconstructed by concatenating the contents of all SAML-Construct attributes..."
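
The fragmentation rule can be illustrated with a short Java sketch that splits a construct on 253-octet boundaries and reassembles it by concatenation. Only the Construct payload is modeled here; the Type, Length, and CT header octets of the actual attribute encoding are deliberately omitted.

    import java.io.ByteArrayOutputStream;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    /**
     * Sketch of the fragmentation rule described above: a SAML construct larger than one
     * attribute is split on 253-octet boundaries and reassembled by concatenation on
     * receipt. Only the Construct payload is modeled.
     */
    public class SamlConstructChunking {
        static final int MAX_CHUNK = 253;

        static List<byte[]> split(byte[] construct) {
            List<byte[]> chunks = new ArrayList<>();
            for (int off = 0; off < construct.length; off += MAX_CHUNK) {
                int end = Math.min(off + MAX_CHUNK, construct.length);
                chunks.add(Arrays.copyOfRange(construct, off, end));
            }
            return chunks;
        }

        static byte[] reassemble(List<byte[]> chunks) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            for (byte[] chunk : chunks) {
                out.write(chunk, 0, chunk.length);
            }
            return out.toByteArray();
        }

        public static void main(String[] args) {
            byte[] construct = new byte[1000];             // stand-in for a serialized SAML construct
            Arrays.fill(construct, (byte) 'x');

            List<byte[]> attributes = split(construct);    // 4 attributes: 253 + 253 + 253 + 241 octets
            byte[] roundTripped = reassemble(attributes);

            System.out.println("Attributes: " + attributes.size()
                    + ", intact: " + Arrays.equals(construct, roundTripped));
        }
    }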

See also: SAML resources


Comparing IETF OAuth and Kantara User-Managed Access (UMA)
Eve Maler, Pushing String Blog

The last few weeks have been fertile for the User-Managed Access work. The Kantara User-Managed Access (UMA) Work Group was chartered to "develop a set of draft specifications that enable an individual to control the authorization of data sharing and service access made between online services on the individual's behalf, and to facilitate the development of interoperable implementations of these specifications by others..."

Because UMA layers on OAuth 2.0 and the latter is still under development, IIW and the follow-on OAuth interim F2F presented opportunities for taking stock of and contributing to the OAuth work as well... UMA settled on its terms before WRAP was made public; any overlap in terms was accidental. As we have done the work to model UMA on OAuth 2.0, it has become natural to state the equivalences more boldly and clearly, while retaining our unique terms to distinguish the UMA-enhanced versions...

Conceptually, UMA is a sort of unhooking of OAuth's authorization server concept from its resource-server moorings, making it user-centric. In OAuth, there is one resource owner in the picture, on both sides. In UMA, the authorizing user may be granting access to a truly autonomous party, which is why we need to think harder about authorization agreements. In OAuth, the resource server respects access tokens from its authorization server. In UMA, the host outsources authorization jobs to an authorization manager chosen by the user. In OAuth, the authorization server issues tokens based on the client's ability to authenticate. In UMA, the authorization manager issues tokens based on user policy and claims conveyed by the requester.

UMA has a need to support lots of dynamic matchups (dynamic trust) between entities. In OAuth, the client and server sides must meet outside the resource-owner context ahead of time (not mandated, just not dealt with in the spec). But in UMA, a requester can walk up to a protected resource and attempt to get access without having registered first. In OAuth, the resource server meets its authorization server ahead of time and is tightly coupled with it (not mandated, just not dealt with in the spec). In UMA, the authorizing user can mediate the introduction of each of his hosts to the authorization manager he wants it to use. In OAuth, the resource server validates tokens in an unspecified manner, assumed locally. In UMA, the host has the option of asking the authorization manager to validate tokens in real time. As to protocol: UMA started out life as a fairly large application of OAuth 1.0. Over time, it has become a cleaner and smaller set of profiles, extensions, and enhanced flows for OAuth 2.0. If any find wider interest, we could break them out into separate specs..."

See also: the Kantara User-Managed Access (UMA) Work Group home


Encryption Key Management: Are Your Keys Under the Mat?
Patrick Townsend, Blog

"Customers are struggling to understand compliance regulations that seem to be vague in many places. One of these places has to do with encryption key management. And one question that I encounter on a regular basis has to do with the separation of encryption keys from the data they protect. Is it necessary to physically separate the encryption keys from the protected data? Can I store the data encryption keys on the same server as the protected data if I protect the key-encryption keys that protect those data encryption keys? [...]

I don't think you are going to find clear guidance on this issue from any of the current PCI, HIPAA, HITECH Act, and state privacy regulations. The best you get is 'Use good key management practices' or 'Use key management practices consistent with international standards'. Thin gruel if you are looking for clear guidance. As the compliance regulations change over time I suspect we'll get better guidance, but right now there are not many strong statements about this issue...

There are really good security reasons to separate encryption keys from the data they protect: (1) Dual control is an important part of any good key management practice, and is almost impossible to do well without a separation of the keys from the protected data. (2) When doing routine backups you must separate encryption keys from protected data; again, this is almost impossible to do on a full server backup. (3) NIST key management best practices require secure key transport at the time of use, which is very hard to do unless you have a separation of keys and a secure SSL/TLS channel. (4) Key management best practices also call for documented key management procedures and controls. This is really hard to get right without a professional key management system.
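
The key-encryption-key arrangement questioned at the top of the post is essentially envelope encryption. The Java sketch below shows the shape of that pattern: data is encrypted with a data-encryption key (DEK), and only a wrapped copy of the DEK, encrypted under a key-encryption key (KEK) that should live on a separate key manager, is stored alongside the data. Algorithm and parameter choices here are illustrative, not drawn from the post or from any compliance regulation.

    import java.security.SecureRandom;
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;
    import javax.crypto.spec.SecretKeySpec;

    /**
     * Minimal envelope-encryption sketch: data is encrypted with a data-encryption key
     * (DEK), and the DEK is itself encrypted with a key-encryption key (KEK) that should
     * be held by a separate key-management system.
     */
    public class EnvelopeEncryptionSketch {
        public static void main(String[] args) throws Exception {
            SecureRandom rng = new SecureRandom();

            KeyGenerator gen = KeyGenerator.getInstance("AES");
            gen.init(256);
            SecretKey kek = gen.generateKey();   // held by the (separate) key manager
            SecretKey dek = gen.generateKey();   // used locally to encrypt the data

            // Encrypt the data with the DEK.
            byte[] iv = new byte[12];
            rng.nextBytes(iv);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, dek, new GCMParameterSpec(128, iv));
            byte[] ciphertext = c.doFinal("cardholder data".getBytes("UTF-8"));

            // Wrap the DEK under the KEK; only the wrapped DEK is stored next to the data.
            byte[] wrapIv = new byte[12];
            rng.nextBytes(wrapIv);
            Cipher wrap = Cipher.getInstance("AES/GCM/NoPadding");
            wrap.init(Cipher.ENCRYPT_MODE, kek, new GCMParameterSpec(128, wrapIv));
            byte[] wrappedDek = wrap.doFinal(dek.getEncoded());

            // To decrypt, first unwrap the DEK with the KEK fetched from the key manager.
            Cipher unwrap = Cipher.getInstance("AES/GCM/NoPadding");
            unwrap.init(Cipher.DECRYPT_MODE, kek, new GCMParameterSpec(128, wrapIv));
            SecretKey recoveredDek = new SecretKeySpec(unwrap.doFinal(wrappedDek), "AES");

            Cipher d = Cipher.getInstance("AES/GCM/NoPadding");
            d.init(Cipher.DECRYPT_MODE, recoveredDek, new GCMParameterSpec(128, iv));
            System.out.println(new String(d.doFinal(ciphertext), "UTF-8"));
        }
    }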

Lastly, I've really benefited from Bruce Schneier's practical view of encryption and related technologies. I'm reading his latest book, Cryptography Engineering; he frequently points out that a data protection strategy is only as secure as its weakest link. To me, encryption key management looks like the weakest link in most data protection schemes. Moving encryption keys away from the data they protect is a minimal requirement to begin to get better security..."

See also: the OASIS Key Management Interoperability Protocol KMIP TC


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
ISIS Papyrus http://www.isis-papyrus.com
Microsoft Corporation http://www.microsoft.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/


