Last modified: July 15, 2009
XML Daily Newslink. Wednesday, 15 July 2009

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
Sun Microsystems, Inc.

The Elusive Keys to DNS Security
William Jackson, Government Computer News

With six months remaining until the deadline for government agencies to digitally sign their Domain Name System address records, deployment of DNS Security Extensions remains a work in progress. After agencies sign address records, key management and securing the Internet's root zone will remain challenges... According to a survey of network operators conducted earlier this year by the European Network and Information Security Agency, 78 percent of operators either have deployed DNSSEC services or plan to deploy them within the next three years. DNSSEC is still at the beginning of deployment, and there is a lack of tools and policies. The difficulty lies not in signing the address data within the domains but in managing keys, according to NIST's Doug Montgomery: 'The basic act of signing the authoritative zone is easy; most people could do it in an afternoon if they wanted to'... Branko Miskov, director of product management at BlueCat Networks: 'That's the easiest part to tackle... Tools such as the latest version of BlueCat's Proteus IP address management product automate the process. All I have to do is generate a couple of keys, mark that zone for publishing, and push'...

But that is only half of the task. Once a zone is signed, servers requesting addresses have to be DNSSEC aware and must have access to a key to verify the digital signatures for the process to work. Implementing and managing key policies (the strength of the keys to be used, the length of time they remain valid, and the production of new keys on schedule) can be a complex job. Obtaining keys from a trusted source also can be complex... Plans for deploying DNSSEC at the authoritative root zone will help to simplify this challenge by reducing the number of trusted keys needed to verify requests and answers.
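
The "production of new keys on schedule" part of key-policy management can be sketched as a simple validity-window check. The policy values below (a one-year validity period, a 30-day rollover warning) are hypothetical illustrations, not figures from the article; real deployments would apply their own policy using their key-management tooling.

```python
from datetime import date, timedelta

# Hypothetical policy value for illustration: warn this far ahead of
# expiry so a replacement key can be produced and published in time.
ROLLOVER_WARNING = timedelta(days=30)

def key_status(activated: date, validity: timedelta, today: date) -> str:
    """Classify a signing key against its validity window."""
    expires = activated + validity
    if today >= expires:
        return "expired"
    if today >= expires - ROLLOVER_WARNING:
        return "rollover-due"
    return "valid"

# A key activated on 2009-01-01 with a one-year validity period is
# already due for rollover by mid-December.
print(key_status(date(2009, 1, 1), timedelta(days=365), date(2009, 12, 15)))
```

The point of the sketch is that signing is a one-time act, while this kind of check has to run continuously for every key in every signed zone.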

See also: OASIS Key Management Interoperability Protocol (KMIP) references

Internationalized Resource Identifiers (IRIs)
Martin Duerst, Michel Suignard, Larry Masinter (eds), IETF Internet Draft

Larry Masinter has published an updated version of the IETF specification for "Internationalized Resource Identifiers (IRIs)," being produced jointly with the W3C I18N Core Working Group. This version attempts to integrate the "web address" concept (called Hypertext Reference, or HREF) into the main IRI specification. The text has gone through enough transformations that its accuracy needs to be checked, but it at least indicates to the editors that the many specs being reworked are mergeable. If approved, this document is intended to obsolete IETF RFC 3987.

This document "defines a new protocol element, the Internationalized Resource Identifier (IRI), as an extension of the Uniform Resource Identifier (URI). An IRI is a sequence of characters from the Universal Character Set (Unicode/ISO 10646). A mapping from IRIs to URIs is defined, which provides a means for IRIs to be used instead of URIs, where appropriate, to identify resources. To accommodate widespread current practice, additional derivative protocol elements are defined, and current practice for resolving IRI-based hypertext references in HTML is outlined.

The approach of defining new protocol elements, rather than updating or extending the definition of URI, was chosen to allow independent orderly transitions as appropriate: other protocols and languages that use URIs and their processing may explicitly choose to allow IRIs or derivative forms. Guidelines are provided for the use and deployment of IRIs and related protocol elements when revising protocols, formats, and software components that currently deal only with URIs."
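
The core of the IRI-to-URI mapping is percent-encoding each non-ASCII character as its UTF-8 octets while leaving ASCII (including reserved URI characters) untouched. The following is a minimal Python sketch of that one step only; the full mapping in the specification also covers details such as converting an internationalized host name via IDNA, which this sketch ignores.

```python
from urllib.parse import quote

def iri_to_uri(iri: str) -> str:
    """Percent-encode non-ASCII characters of an IRI as UTF-8 octets.

    Simplified sketch: ASCII characters, including reserved URI
    delimiters, pass through unchanged; everything else is encoded.
    """
    return "".join(
        ch if ord(ch) < 128 else quote(ch, safe="")
        for ch in iri
    )

print(iri_to_uri("http://example.org/ros\u00e9"))
# http://example.org/ros%C3%A9
```

Because the mapping is defined at the character level, a conforming URI passes through unchanged, which is what lets protocols "explicitly choose to allow IRIs" without breaking existing URI handling.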

See also: the W3C Internationalization (I18n) Activity

Ontopia 5.0.0 for Topic Maps Released as Open Source
Lars Marius Garshol, Software Announcement

Ontopia 5.0.0 provides a complete set of tools for building, maintaining, and deploying Topic Maps-based applications. The developers announced that the Ontopia open source project has produced its first release with Ontopia 5.0.0, which can be downloaded from Google Code. "This release is the same as the previous Ontopia Knowledge Suite 4.1.0, except that it is now under an open source license, and the developers have removed the need for a license key. The release also includes some new features: support for TMAPI 2.0, the new tolog optimizer, the new TologSpy tolog query profiler, and new classes providing convenient lookup of topics using qnames. Ontopia now uses the Simple Logging Facade for Java (SLF4J), which makes it easy to switch logging engines, if desired...

The Ontopia developers intend to host Code Camps where people who want to work with the code can get introduced to it. "The details are not yet clear, but most likely there will be one in Oslo, and one in Leipzig in November 2009 in conjunction with the TMRA 2009 conference... Topic Maps is a semantic technology designed for the integration of information, and is as such closely connected with other information-centric technologies..."

See also: XML Topic Maps references

Last Call for IETF 'Web Linking' Specification
Mark Nottingham (ed), IETF Internet Draft

The Internet Engineering Steering Group (IESG) announced the publication of a revised Internet Draft for a "Web Linking" specification, where version -06 updates the previous -05 document of April 17, 2009. The IESG plans to make a decision about the request for promotion to IETF Proposed Standard in the next few weeks, and solicits final comments on this action; substantive comments should be sent to the IETF lists by 2009-08-11.

The "Web Linking" specification defines relation types for Web links, and further defines a registry for them. It also defines how to send such links in HTTP headers with the Link header-field. Version -06 changes include: (1) addition of "up" and "service" relation types; (2) fixing the "type" attribute syntax, with added prose; (3) addition of a note about RDFa and XHTML to the HTML4 notes; (4) removal of the specific location for the registry, deferring to IANA.

Problem space: "A means of indicating the relationships between resources on the Web, as well as indicating the type of those relationships, has been available for some time in HTML, and more recently in Atom (RFC 4287). These mechanisms, although conceptually similar, are separately specified. However, links between resources need not be format-specific; it can be useful to have typed links that are independent of their serialisation, especially when a resource has representations in multiple formats. To this end, this document defines a framework for typed links that isn't specific to a particular serialisation. It does so by re-defining the link relation registry established by Atom to have a broader scope, and adding to it the relations that are defined by HTML. Furthermore, an HTTP header-field for conveying typed links was defined in RFC 2068, but removed from RFC 2616, due to a lack of implementation experience. Since then, it has been implemented in some User-Agents (e.g., for stylesheets), and several additional use cases have surfaced. Because it was removed, the status of the Link header is unclear, leading some to consider minting new application-specific HTTP headers instead of reusing it. This document addresses this by re-specifying the Link header as one such serialisation, with updated but backwards-compatible syntax..."
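
To make the Link header-field serialization concrete, here is a deliberately naive Python sketch of parsing a header value such as `</style.css>; rel="stylesheet"` into (target, parameters) pairs. It splits on commas and semicolons, so it would mishandle URIs or quoted values containing those characters; a production parser must follow the draft's grammar.

```python
def parse_link_header(value: str) -> list:
    """Parse a simple Link header value into (target, params) pairs.

    Naive sketch: assumes no commas or semicolons appear inside
    URI-references or quoted parameter values.
    """
    links = []
    for part in value.split(","):
        segments = part.split(";")
        # The link target is a URI-reference wrapped in angle brackets.
        target = segments[0].strip().lstrip("<").rstrip(">")
        params = {}
        for seg in segments[1:]:
            key, _, val = seg.strip().partition("=")
            params[key.strip()] = val.strip().strip('"')
        links.append((target, params))
    return links

header = '</style.css>; rel="stylesheet", </page/2>; rel="next"'
print(parse_link_header(header))
```

The `rel` parameter carries the typed relation ("stylesheet", "next", or the newly added "up" and "service"), which is exactly the format-independent link typing the draft argues for.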

Handling Asynchronous REST Operations
Boris Lublinsky and Tim Bray, InfoQ

"In a new post on 'Slow REST', Tim Bray tries to answer this question about REST: 'In a RESTful context, how do you handle state-changing operations (POST, PUT, DELETE) which have substantial and unpredictable latency?' Tim describes three different approaches for this situation, developed as part of Project Kenai in the form of a proposal, 'Handling Asynchronous Operation Requests.' These approaches include a resource-based approach, a Comet-style implementation (keeping the HTTP channel open for the duration of a long-running request), and 'Web hooks' (using two independent one-way invocations, one to start a long-running operation and the other to notify the requester that it is completed).
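
The resource-based approach can be sketched with a minimal in-memory model (all function and resource names below are hypothetical, not from the proposal): the initial request creates a job resource, the client polls the job's status, and the worker updates the resource when the long-running operation finishes. In a real HTTP service, `submit` would correspond to a 202 Accepted response carrying the polling URL.

```python
import uuid

# In-memory stand-in for job resources that a real service would
# expose over HTTP.
_jobs = {}

def submit(operation: str) -> str:
    """Start a long-running operation; return the job resource's URL."""
    job_id = uuid.uuid4().hex
    _jobs[job_id] = {"operation": operation, "status": "pending", "result": None}
    return f"/jobs/{job_id}"

def poll(job_url: str) -> str:
    """Client-side check of the job's current status."""
    return _jobs[job_url.rsplit("/", 1)[-1]]["status"]

def finish(job_url: str, result) -> None:
    """Worker marks the operation complete and records its result."""
    _jobs[job_url.rsplit("/", 1)[-1]].update(status="done", result=result)

url = submit("transcode-video")
assert poll(url) == "pending"   # client checks back later
finish(url, "ok")
assert poll(url) == "done"
```

The trade-off against the other two approaches is visible even in this sketch: polling wastes round trips, which is what the Comet-style open channel and Web-hook callbacks each avoid in different ways.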

Bray asks whether "Slow REST" is a pattern that's going to pop up often enough in the future that we should be thinking of a standardized recipe for approaching it... In spite of the many differences (some real, some religious) between REST and WS-*, both camps aim to solve real-life problems and consequently face the same challenges. Learning from each other's experiences and implementations will definitely enhance both..."

