XML Daily Newslink. Wednesday, 10 June 2009

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Sun Microsystems, Inc. http://sun.com



KMIP and Banking-Oriented Key Management
Todd Arnold, Posting to OASIS KMIP Discussion List

Todd Arnold of IBM was recently approved, at the request of the ANSI X9 Group, to act as liaison from ANSI X9F to the OASIS KMIP Technical Committee. The following is excerpted from his memo to the KMIP TC, from which several issues are being written up and discussed in the Technical Committee:

(1) I have concerns about KMIP and its ability to provide the features needed in banking-oriented key management... Key management for banking applications has some special issues that are somewhat different from other, more generic key management. For example, requirements are governed by standards such as those from ANSI X9. These standards have very stringent and precise requirements, and they are very difficult to change because of the standards themselves and because of the large set of equipment and software that is in use and must operate together. In many cases, key management techniques and hardware-related security requirements that are acceptable elsewhere are prohibited in the banking applications world. The key types required in banking applications are much more specific than in generic applications, and in many cases they have no analog in non-banking applications. Each HSM vendor has its own proprietary API and cryptographic architecture for meeting the security requirements in this area. In particular, the key typing and key usage approaches are quite varied, and some are much more complex and granular than others.

(2) Symmetric keys often come in matched pairs, where the two keys have the same value but have different attributes. An example would be MAC keys, where one key can be used for generation or verification but the other can be used only for verification. Another would be key-encrypting keys, where one copy can only be used to export (wrap) keys and the other copy can only be used to import (unwrap) them. Symmetric key pairs like this are critical to security in systems like banking...

(3) The banking standards have strong requirements that keys be protected at all times by an SCD (Secure Cryptographic Device, often called a TRSM). This means something like an HSM or a secure POS terminal. The keys cannot ever be in plaintext or in any form where plaintext could be recovered, unless they are protected within secure hardware. I think it would be very useful in KMIP to have an attribute that says 'this key must be protected by hardware'..."
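
Points (2) and (3) lend themselves to a small illustration. The sketch below is not KMIP wire protocol; it merely models, in Python, the memo's two ideas: matched symmetric key pairs that share key material but carry different usage attributes (KMIP's Cryptographic Usage Mask attribute is the rough analogue), and a 'must be protected by hardware' attribute of the kind Arnold proposes. All names here are illustrative.

    from dataclasses import dataclass
    from enum import Flag, auto

    class Usage(Flag):
        """Illustrative stand-in for a KMIP-style cryptographic usage mask."""
        MAC_GENERATE = auto()
        MAC_VERIFY = auto()
        WRAP_KEY = auto()    # export under a key-encrypting key
        UNWRAP_KEY = auto()  # import under a key-encrypting key

    @dataclass(frozen=True)
    class ManagedKey:
        material: bytes             # the same bytes may appear in two objects
        usage: Usage                # what this particular copy may do
        hardware_only: bool = True  # the proposed 'protect in an SCD/HSM' attribute

        def check(self, op: Usage) -> None:
            if not op & self.usage:
                raise PermissionError(f"{op.name} not permitted for this key copy")

    # A matched MAC key pair: identical value, asymmetric permissions.
    secret = bytes(16)  # placeholder key material
    generator = ManagedKey(secret, Usage.MAC_GENERATE | Usage.MAC_VERIFY)
    verifier = ManagedKey(secret, Usage.MAC_VERIFY)

    verifier.check(Usage.MAC_VERIFY)      # allowed
    # verifier.check(Usage.MAC_GENERATE)  # would raise PermissionError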

See also: ANSI X9F


KMIP and EKMI Credential Bootstrapping
Anders Rundgren, Posting to the OASIS KMIP TC Comment List

"When you are about to perform trustworthy operations between different entities, authentication of the end-points is typically necessary. It seems that KMIP (as well as EKMI) leaves the bootstrapping of end-point authentication credentials to somebody else to cater for. Since this process is both highly device-dependent as well as generally difficult, KMIP interoperability may in practice prove to be quite limited. As a comparison, my own brain-child, KeyGen2, builds on the fact that devices are shipped with a device certificate. One may claim that KeyGen2 requires enhanced devices, and yes this is true! The problem with not requiring enhanced devices is that "the tyranny of the least common denominator" will rule which is a stopgap to progress. That is, the missing bootstrap may severely impede market acceptance.

Note: KeyGen2 does not compete with KMIP, because KeyGen2 (deliberately) supports a very limited range of devices that are used by everybody (phones) but would be totally useless for storage. I would, if I were you, consider "borrowing" the device certificate concept. Properly implemented, device certificates eliminate all kinds of shared secrets and enrollment passwords. If you are curious about how such a scheme could work, you may take a peek at the section "Dual-use Device IDs" in [the document] "SKS: Secure Key Store." [It reads, in part:] "The Device Certificate is used as a trusted device ID in a provisioning process. That is, an issuer may reject requests attested by an SKS from a vendor that is unknown to the issuer. A side-effect of using certificate-based device IDs is that you can create efficient and still quite secure on-line enrollment processes, where a non-authenticated user signs up for credentials by sending an SKS-attested request to an issuer. The issuer can then verify the user's identity with an OOB (Out Of Band) method meeting the issuer's policy, which can be anything from the classic 'e-mail roundtrip' to requiring the user to show up in an office with an identity card..."
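
As a rough sketch of the bootstrap Rundgren describes: if devices ship with a vendor-issued device certificate, an issuer can accept an enrollment request from an otherwise unauthenticated user by checking the certificate's issuer and the request signature. Everything below (the function name, the PEM inputs, the RSA assumption) is hypothetical; it shows only the shape of the check, not KeyGen2 or SKS itself.

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding

    def accept_enrollment(device_cert_pem: bytes, request: bytes,
                          signature: bytes, vendor_ca_pem: bytes) -> bool:
        """Accept a request only if a trusted vendor's device attests to it."""
        ca = x509.load_pem_x509_certificate(vendor_ca_pem)
        cert = x509.load_pem_x509_certificate(device_cert_pem)

        # 1. Was the device certificate issued by a vendor we trust?
        #    (Assumes RSA keys for brevity.)
        try:
            ca.public_key().verify(cert.signature, cert.tbs_certificate_bytes,
                                   padding.PKCS1v15(),
                                   cert.signature_hash_algorithm)
        except Exception:
            return False  # unknown vendor: reject, as the SKS text suggests

        # 2. Was the enrollment request really signed by that device?
        try:
            cert.public_key().verify(signature, request,
                                     padding.PKCS1v15(), hashes.SHA256())
        except Exception:
            return False
        return True  # identity proofing then proceeds out of band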

See also: KeyGen2


In-lining Extensions for Atom
Nikunj R. Mehta (ed), IETF Internet Draft

An initial version (-00) of the IETF Internet Draft "In-lining Extensions for Atom" has been published, along with an updated release of "Hierarchy Relations for Atom". The new specification defines mechanisms for in-lining representations of linked resources in Atom documents. Such in-line representations can be text, binary, or well-formed XML. Some applications require the ability to pre-fetch resources linked from an Atom document using the atom:link element, per IETF RFC 4287; the in-line representations are similar to Atom's in-line content model. An Atom document may include the in-line representation of a linked resource by using the ae:inline element inside the corresponding atom:link element. The in-lined representation is only a hint and may differ from the representation obtained from the URI referenced in the link. Atom Processors should use the link URI to obtain the complete representation of the linked resource..."
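
A small sketch of consuming such an in-lined representation, using Python's standard ElementTree. The extension namespace URI below is a placeholder (the real one is defined in the Internet Draft); only the atom:link/ae:inline nesting described above is assumed.

    import xml.etree.ElementTree as ET

    ATOM = "http://www.w3.org/2005/Atom"
    AE = "http://example.org/ae"  # placeholder for the draft's extension namespace

    doc = f"""
    <entry xmlns="{ATOM}" xmlns:ae="{AE}">
      <title>Example</title>
      <link rel="related" href="http://example.org/r/1">
        <ae:inline type="application/xml">
          <data xmlns="">cached copy</data>
        </ae:inline>
      </link>
    </entry>
    """

    entry = ET.fromstring(doc)
    for link in entry.findall(f"{{{ATOM}}}link"):
        inline = link.find(f"{{{AE}}}inline")
        if inline is not None:
            # The in-lined copy is only a hint; the href stays authoritative.
            print("hint for", link.get("href"), "->", inline[0].text)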

The Hierarchy Relations for Atom specification defines link relations for hierarchical navigation among Atom feeds and entries. Many applications besides blogs provide their data in the form of syndicated Web feeds using formats such as Atom. Some such applications organize Atom Entries in a hierarchical fashion similar to a file system. The specification describes a means of communicating about Atom Entries that are hierarchically related to each other, since resource identifiers are opaque to clients and cannot be directly manipulated for the purposes of representation exchange, i.e., navigation.

See also: Hierarchy Relations for Atom


Vinton Cerf: The Internet is Incomplete, Needs Security and Mobile
Patrick Thibodeau, ComputerWorld

The co-designer of the Internet's basic architecture, Vinton Cerf, said in a blunt talk to a tech industry crowd that the Internet "still lacks many of the features that it needs," particularly in security. Cerf, a vice president and chief Internet evangelist at Google Inc., co-designed with Robert Kahn in 1973 the TCP/IP protocols that underpin the Internet; the network became operational in 1983 and commercially available in 1989... Cerf is influential because of his accomplishments, but he may be even more so today because of his affiliation with Google. President Obama's administration has appointed a number of Google employees, including CEO Eric Schmidt, to important positions. One of the most critical needs is authentication... The lack of authentication is pervasive and is a problem even in simple cases, such as authenticating entries in the domain name system... Authentication isn't available on an end-to-end basis at all layers of the architecture... While users are good at building concrete tunnels using simple SSL (Secure Sockets Layer) techniques, they don't identify the end points and just secure the channel... You can have an email with an attached virus, thoroughly encrypted, and send it through an encrypted tunnel, and once it gets to the other end it gets decrypted and then, of course, does its damage..."
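
Cerf's point about tunnels that secure the channel without identifying the endpoints can be made concrete with Python's standard ssl module: the first context below encrypts but verifies nobody, while the second (the library default) also authenticates the server. The host name is illustrative.

    import socket
    import ssl

    HOST = "example.org"  # illustrative host

    # Encrypted tunnel, anonymous endpoint: traffic is private, but nothing
    # tells you who is on the other end.
    tunnel_only = ssl.create_default_context()
    tunnel_only.check_hostname = False
    tunnel_only.verify_mode = ssl.CERT_NONE

    # Encrypted tunnel *and* authenticated endpoint (the library default).
    authenticated = ssl.create_default_context()

    for label, ctx in [("channel only", tunnel_only),
                       ("channel + endpoint", authenticated)]:
        with socket.create_connection((HOST, 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                # getpeercert() is empty when verification was skipped.
                print(label, "->", tls.version(),
                      tls.getpeercert() or "peer unverified")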


Program Online: International Symposium on Processing XML Efficiently
B. Tommie Usdin, Markup Conference Announcement

Organizers of "Balisage: The Markup Conference" have announced the publication of a program for the co-located "International Symposium on Processing XML Efficiently: Overcoming Limits on Space, Time, or Bandwidth." This Symposium, chaired by Michael Kay of Saxonica, will be held on Monday August 10, 2009 in Montreal, Canada, just preceding the Balisage 2009 Conference.

Background: Developers have said: "XML is too slow!", where "slow" can mean many things including elapsed time, throughput, latency, memory use, and bandwidth consumption. The aim of this one-day symposium is to understand these problems better and to explore and share approaches to solving them. We'll hear about attempts to tackle the problem at many levels of the processing stack. Some developers are addressing the XML parsing bottleneck at the hardware level with custom chips or with hardware-assisted techniques. Some researchers are looking for ways to compress XML efficiently without sacrificing the ability to perform queries, while others are focusing on the ability to perform queries and transformations in streaming mode. We'll hear from a group who believe the problem (and its solution) lies not with the individual component technologies that make up an application, but with the integration technology that binds the components together.
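
As a taste of the streaming approaches on the program, here is a minimal sketch using the standard library's xml.etree.ElementTree.iterparse, which processes elements as they complete and then discards them, keeping memory use roughly flat instead of building the whole tree. The file name and element tag are illustrative.

    import xml.etree.ElementTree as ET

    count = 0
    # events=("end",) yields each element once its end tag has been parsed.
    for event, elem in ET.iterparse("big-feed.xml", events=("end",)):
        if elem.tag == "record":      # illustrative element name
            count += 1                # process the record here
            elem.clear()              # drop its subtree to keep memory flat
    print(count, "records processed without building the full tree")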

Sample presentations: (1) "The XML Chip at Six Years", by Michael Leventhal and Eric Lemoine; (2) "Hardware and software trade-offs in the IBM DataPower XML XG4 Processor Card", by Richard Salz, Heather Achilles, and David Maze; (3) "Parallel bit stream technology as a foundation for XML parsing performance", by Rob Cameron, Ken Herdy, and Ehsan Amiri; (4) "Memory management in streaming: buffering, lookahead, or none. Which to choose?", by Mohamed Zergaoui; (5) "Efficient scripting", by David Lee and Norman Walsh; (6) "Performance of XML-based applications", by James Robinson.

See also: the Symposium web site


Unicode Standard Version 5.2.0: Review Announced
Staff, Unicode Consortium

Members of the Unicode Consortium have announced the beginning of a review period for Unicode Standard Version 5.2.0 (beta): "This version is planned for release in October 2009. A beta version of the 5.2.0 Unicode Character Database files is also available for public comment. We strongly encourage implementers to download these files and test them with their programs, well before the end of the beta period. Any comments on the beta Unicode 5.2.0, the UCD 5.2.0, or the 5.2.0 UAXes should be reported using the Unicode reporting form. The comment period ends August 3, 2009. All substantive comments must be received by that date for consideration at the next UTC meeting. Editorial comments (typos, etc.) may be submitted after that date for consideration in the final editorial work...

The Unicode Consortium provides early access to updated versions of the data files and text to give reviewers and developers as much time as possible to ensure a problem-free adoption of Version 5.2.0. The assignment of characters for Unicode 5.2.0 is now stable. There will be no further additions or modifications of code points. One of the main purposes of the beta review period, however, is to verify and correct the preliminary character property assignments in the Unicode Character Database. Reviewers should check for property changes to existing Unicode 5.1.0 characters, as well as the property values for the new Unicode 5.2.0 character additions. To facilitate verification of the property changes and additions, diffable XML versions of the Unicode Character Database are available. These XML files are dated, so that people can check the details of changes that occurred during the beta review period..."
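
A hedged sketch of the kind of verification the beta review invites: diff the General_Category field (the third semicolon-separated field of UnicodeData.txt) between the released 5.1.0 UCD and the 5.2.0 beta files. The file paths are illustrative; the actual files come from the beta review page.

    def load_categories(path):
        cats = {}
        with open(path, encoding="utf-8") as f:
            for line in f:
                fields = line.split(";")
                cats[fields[0]] = fields[2]  # code point -> General_Category
        return cats

    old = load_categories("ucd-5.1.0/UnicodeData.txt")
    new = load_categories("ucd-5.2.0-beta/UnicodeData.txt")

    changed = {cp: (old[cp], new[cp])
               for cp in old if cp in new and old[cp] != new[cp]}
    added = sorted(set(new) - set(old))
    print(len(changed), "property changes;", len(added), "new code points")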

See also: the Unicode 5.2.0 files for review


EEMA, European Association for e-Identity and Security Conference
EEMA Staff, Conference Announcement

The European Electronic Messaging Association (EEMA) has announced the program for the European e-Identity Conference 2009, described as "Europe's leading forum for this critical security application, tackling the key issues surrounding e-identity as a core enabler of today's personal, business and government processes. For 2009, EEMA will host a uniquely interactive two-day event in London. The event will take place on 25-26 June 2009 and will comprise thought-provoking keynotes, panel discussions, roundtable sessions and focused workshops on the key challenges and strategies for effectively managing government, employee, citizen and private identities. EEMA will also be demonstrating the latest work in e-ID card interoperability, both within and between EU countries. Many card infrastructures aren't compatible across borders, so come and find out if yours is one of them, and how this is being resolved..."

"This high-value conference is an essential date for those in business, public sector and government who are involved in the policy, security, systems and processes surrounding identity management. Integrated with the main conference will be a showcase of Europe's leading identity management and security vendors, on hand to demonstrate the industry's latest technological advances. The European e-Identity Conference regularly attracts over 150 senior delegates from the commercial and public sectors throughout Europe and beyond. Key subject areas for 2009 include: Managing Identity, Securing Identity, Federated Identity, e-ID Cards, Service Oriented Architecture SOA, and Mobile e-ID." eema 2009 is provided in collaboration with OASIS and the British Computer Society (BCS).


VMware Marketplace Is Important Piece of Virtualization Puzzle
Cameron Sturdevant, eWEEK

VMware has significantly improved its VMware Marketplace, but there are many improvements that could still be made to ease and drive virtual appliance acquisition, implementation and support. In the first of a series of reviews I'm writing about VMware vSphere 4, I focused on important new features such as the vNetwork Distributed Switch and improved management tools. As a product reviewer, that's my job: to focus on the product. But as an industry analyst, one of the big changes at VMware that caught my eye was the drastic improvements made to the VMware Marketplace for virtual appliances... Installing your new virtual appliance will, in most cases, be much easier because of the DMTF (Distributed Management Task Force). All of the virtual appliances I looked at in the VMware Marketplace are provided in an OVF (Open Virtualization Format) package. In this format, all aspects of the virtual machine, or of multiple virtual machines running together, are described. This means that the CPU, memory and disk requirements, along with all other virtual hardware requirements, are provided with the virtual appliance. With a little practice, most IT managers will be able to deploy virtual appliances in OVF packages with little or no manual intervention. To keep the marketplace interesting for IT managers, VMware needs to make sure that product offerings are kept up-to-date. Further, it would be nice to see support, maintenance, advice and user group links added to each of the products. Even if these links lead off to vendor or community-supported sites, it would be convenient for potential customers to see these service links right next to the offered product..."
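
To illustrate why the OVF descriptor enables hands-off deployment, here is a short sketch that lists the virtual hardware items an appliance declares. The namespace URIs follow DMTF OVF 1.0; the file name is illustrative, and a real deployment tool would of course do much more.

    import xml.etree.ElementTree as ET

    # OVF 1.0 namespaces as published by the DMTF.
    OVF = "{http://schemas.dmtf.org/ovf/envelope/1}"
    RASD = ("{http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/"
            "CIM_ResourceAllocationSettingData}")

    envelope = ET.parse("appliance.ovf").getroot()  # illustrative file name
    # Each Item in a VirtualHardwareSection declares one hardware requirement
    # (CPU count, memory size, disk, NIC, ...).
    for item in envelope.iter(OVF + "Item"):
        name = item.findtext(RASD + "ElementName", default="?")
        quantity = item.findtext(RASD + "VirtualQuantity", default="")
        print("requires:", name, quantity)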

See also: DMTF Open Virtualization Format (OVF)


Spinning a Semantic Web for Metadata: Developments in the IEMSR
Emma Tonkin and Alexey Strelnikov, Ariadne

The authors reflect on their experience of developing components for the Information Environment Metadata Schema Registry... Metadata may be described as 'structured data which describes the characteristics of a resource', although it is often more concisely offered as 'data about data'. There is a great deal of specialist vocabulary used in discussing it: standards, schemas, application profiles and vocabularies. Going through each of them in turn: a schema is a description of a metadata record or class of records, which will provide information as to the number of elements that can be included in the record, the name of each element and the meaning of each element. In other words, just as with a database schema, it describes the structure of the record. The Dublin Core (DC) standard defines the schema, but does not define the encoding method that must be used (for example, HTML, XML, DC-Text, DC-XML and so forth). A metadata vocabulary is simply a list of metadata elements (names and semantics).

The IEMSR began as an early adopter of various Semantic Web technologies, and was placed firmly on the bleeding edge. Its development, however, was motivated by some very pragmatic concerns: the need to provide an easy place to find information about metadata schemas, vocabularies and application profiles; to promote use of existing metadata schema solutions; and to promote interoperability between schemas as a result of reuse of component terms across applications... We have reviewed the stability of the backend service itself and decided that the data collection itself is a valid process, that the data is portable and can potentially be expressed and searched using any of a number of mechanisms, and that at present the SPARQL interface fulfils the immediate need identified. Given developers' limited familiarity with SPARQL, we have produced a series of lightweight demonstration scripts for them to adapt to their purposes.
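
In the spirit of those demonstration scripts, a minimal example of querying a SPARQL endpoint with the SPARQLWrapper package is shown below. The endpoint URL and the query are hypothetical, standing in for whatever the IEMSR actually exposes.

    from SPARQLWrapper import SPARQLWrapper, JSON

    endpoint = SPARQLWrapper("http://iemsr.example.org/sparql")  # hypothetical
    endpoint.setQuery("""
        SELECT ?scheme ?label WHERE {
            ?scheme a <http://purl.org/dc/dcam/VocabularyEncodingScheme> ;
                    <http://www.w3.org/2000/01/rdf-schema#label> ?label .
        } LIMIT 10
    """)
    endpoint.setReturnFormat(JSON)
    for row in endpoint.query().convert()["results"]["bindings"]:
        print(row["scheme"]["value"], "-", row["label"]["value"])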

We intend to continue the development effort, moving from the relatively stable, well-understood and well-documented setup that we now have towards a lighter, simpler service based around REST and other Web standards currently being reviewed by the JISC Information Environment. We expect this to improve, simplify and encourage practical interoperability with other JISC services and developments. As a result of the experience gained during this phase, the IEMSR can operate as a centrepoint for collecting information, evaluating, annotating and so on, both in terms of software and development needs and in terms of user engagement, for the purposes of supporting actual engineering methods for developing schemas, elements, and Application Profiles (APs).


Publish and Cherish with Non-proprietary Peer Review Systems
Leo Waaijers, Ariadne

Although there is a steadily growing number of peer-reviewed Open Access journals and an active Open Access Scholarly Publishers Association, the supply fails to keep pace with the demand. More and more research funders require open access to the publications that result from research they have financed. Recently the European Commission conducted a pilot initiative on open access to peer-reviewed articles in FP7, its Seventh Research Framework Programme, that may result in 100,000 reviewed articles. In so far as authors cannot all publish in Open Access journals, the EC and, for that matter, other Open Access-mandating funders impose unfair conditions on authors. With a shift from proprietary to non-proprietary systems of peer review, initial experience has now been garnered from SCOAP and the Springer experiments at UKB, MPG, Goettingen University and, lately, California University. This conversion can be speeded up if disciplinary communities, universities, and research funders actively enter the market of the peer review organisers by calling for tenders and inviting publishers to submit proposals for a non-proprietary design of the peer review process. Given the current situation, with the American legislature and the European Commission having clearly taken a stand in favour of Open Access, one can expect that such tenders will certainly produce interesting proposals... This article examines the idea of the European Commission putting out such a tender.

See also: Open Archives Initiative Protocol (OAI-PMH)


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation - http://www.ibm.com
Microsoft Corporation - http://www.microsoft.com
Oracle Corporation - http://www.oracle.com
Primeton - http://www.primeton.com
Sun Microsystems, Inc. - http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/




Document URI: http://xml.coverpages.org/newsletter/news2009-06-10.html
Robin Cover, Editor: robin@oasis-open.org