This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation http://www.microsoft.com
NIST Releases 800-57/3: Application-Specific Key Management Guidance
Staff, U.S. National Institute of Standards and Technology (NIST) Announcement
NIST has announced the publication of Special Publication 800-57, Recommendation for Key Management Part 3: Application-Specific Key Management Guidance. The 107-page document was prepared by Elaine Barker, William Burr, Alicia Jones, Timothy Polk, Scott Rose, Quynh Dang, and Miles Smid.
Part 3 of the Recommendation for Key Management, 'Application-Specific Key Management Guidance', is intended to address the key management issues associated with currently available cryptographic mechanisms. Part 1 of the Recommendation, 'General Guidance', contains basic key management guidance for users, developers, and system managers regarding the "best practices" associated with the generation and use of the various classes of cryptographic keying material. Part 2, 'General Organization and Management Requirements', provides a framework and general guidance to support establishing cryptographic key management within an organization, and a basis for satisfying the key management aspects of statutory and policy-based security planning requirements for Federal government organizations.
Application-Specific Key Management Guidance Part 3 "is intended primarily to help system administrators and system installers adequately secure applications based on product availability and organizational needs and to support organizational decisions about future procurements. The guide also provides information for end users regarding application options left under their control in normal use of the application. Recommendations are given for a select set of applications, namely: Section 2 - Public Key Infrastructures (PKI); Section 3 - Internet Protocol Security (IPsec); Section 4 - Transport Layer Security (TLS); Section 5 - Secure/Multipurpose Internet Mail Extensions (S/MIME); Section 6 - Kerberos; Section 7 - Over-the-Air Rekeying of Digital Radios (OTAR); Section 8 - Domain Name System Security Extensions (DNSSEC); Section 9 - Encrypted File Systems (EFS)...
Conformance testing for implementations of key management as specified in this Recommendation will be conducted within the framework of the Cryptographic Module Validation Program (CMVP), a joint effort of NIST and the Communications Security Establishment of the Government of Canada..."
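As a concrete illustration of the "generation and use of cryptographic keying material" that the Recommendation's Part 1 covers, the sketch below generates a symmetric key from a cryptographically secure source and uses it for message authentication. This is an illustrative example using Python's standard library, not an excerpt from or implementation of SP 800-57.

```python
# Illustrative sketch (not from SP 800-57): generating symmetric keying
# material and using it for message authentication with Python's stdlib.
import secrets
import hmac
import hashlib

# Generate a 256-bit symmetric key from a cryptographically secure source.
key = secrets.token_bytes(32)

# Use the key to compute an HMAC-SHA-256 tag over a message.
message = b"example payload"
tag = hmac.new(key, message, hashlib.sha256).digest()

# Verification must use a constant-time comparison to avoid timing leaks.
valid = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())
```

Key-management guidance of the kind SP 800-57 gives then concerns the surrounding lifecycle: how such a key is generated, distributed, stored, rotated, and destroyed.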
See also: Cryptographic Key Management
Common Interface to Cryptographic Modules
Daniel Lanz and Lev Novikov (eds), IETF Internet Draft
IETF has published an initial -00 level Internet Draft for "Common Interface to Cryptographic Modules." The memo "presents a programming interface to standardize the way software programs manage cryptographic modules and utilize cryptographic services offered by modules. Although a number of interfaces for commercial environments have been standardized and are in use, this is the first generic cryptographic interface to be developed that supports cryptographic modules separating two security domains and is thus ideal for the high assurance marketplace. The interface has been designed to also allow less demanding environments to take advantage of its features."
Background: "Systems that require cryptographic protection may utilize various cryptographic services including data encryption, signature generation, hashing, and keystream generation. Cryptographic modules providing these services and the key material they hold must be managed. All of these services have proprietary interfaces that differ significantly among module types, leading to the following problems: (1) Replacement of one module type for another and reuse of module-dependent software are inhibited as applications require extensive modifications to adapt to new module types and their proprietary interfaces. (2) Developers of systems that host cryptographic modules must accommodate different cryptographic module interfaces for different types of cryptographic modules. (3) Test tools and procedures developed for one module usually will not work with other modules. (4) Security evaluators must learn multiple module developers' interfaces, increasing evaluation time and expense. To address these problems, the Common Interface to Cryptographic Modules (CICM) specification offers module developers a set of standard interfaces for the set of operations supported by high assurance cryptographic modules...
CICM interfaces provide a common way to access the following services offered by cryptographic modules: cryptographic module management, key management (includes the generation, storage, protection, and removal of key material, and support for message exchanges used in key agreement and key transfer protocols), and channel management... Cryptographic modules utilize key material under their protection as one input to perform a cryptographic transformation. Keys: can originate at a Key Infrastructure Component that has a trust relationship with the module; may be agreed upon between the module and another entity; may be generated on the module itself; may be derived from information presented to the module by a client program. Once established on a module, they may be subject to client-initiated management operations or may be used as part of a cryptographic channel to effect cryptographic transformations...
CICM supports separate IDL interfaces for symmetric keys and asymmetric keysets. An asymmetric keyset may comprise an asymmetric key pair, the public and private key components of a keypair, the digital certificate corresponding to the keyset public key, one or more verification certificates in the certificate chain of trust, and related public domain parameters. The asymmetric and symmetric key manager attributes allow for access to asymmetric keysets and symmetric keys, respectively..."
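The core idea, client code programmed against one abstract interface so that module types can be swapped without application changes, can be sketched as follows. The names here (CryptoModule, load_key, encrypt) are invented for illustration and are not the CICM IDL, which defines its own interfaces and operations.

```python
# Hypothetical sketch of the *idea* behind a common cryptographic-module
# interface; the names below are invented, not taken from the CICM spec.
from abc import ABC, abstractmethod

class CryptoModule(ABC):
    """Uniform interface a client uses regardless of module type."""

    @abstractmethod
    def load_key(self, key_id: str, key_material: bytes) -> None: ...

    @abstractmethod
    def encrypt(self, key_id: str, plaintext: bytes) -> bytes: ...

class XorDemoModule(CryptoModule):
    """Toy stand-in for a real module (XOR keystream, NOT secure)."""

    def __init__(self):
        self._keys = {}

    def load_key(self, key_id, key_material):
        self._keys[key_id] = key_material

    def encrypt(self, key_id, plaintext):
        k = self._keys[key_id]
        return bytes(b ^ k[i % len(k)] for i, b in enumerate(plaintext))

# Client code depends only on CryptoModule, so replacing one module type
# with another requires no application changes -- the reuse problem (1)
# that the CICM background text describes.
def protect(module: CryptoModule, data: bytes) -> bytes:
    module.load_key("k1", b"\x5a" * 16)
    return module.encrypt("k1", data)
```

A different module type (say, one backed by hardware) would subclass the same interface, and test tools and evaluations written against the interface would apply to it unchanged.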
James Clark, Random Thoughts Blog
"I have been continuing to have a dialog with some folks at Microsoft about M. This has led me to do a lot of thinking about what is good and bad about the XML family of standards. The standard I found it most hard to reach a conclusion about was XML Namespaces. On the one hand, the pain that is caused by XML Namespaces seems massively out of proportion to the benefits that they provide. Yet, every step in the process that led to the current situation with XML Namespaces seems reasonable...
[For example:] (1) We need a way to do distributed extensibility (somebody should be able to choose a name for an element or attribute that won't conflict with anybody else's name without having to check with some central naming). (2) The one true way of naming things on the Web is with a URI. (3) XML is supposed to be human readable/writable so we can't expect people to put URIs in every element/attribute name, so we need a shorter human-friendly name and a way to bind that to a URI. (4) Bindings need to nest so that XML Namespace-generating processes can stream, and so that one document can easily be embedded in another. (5) XML Namespace processing should be layered on top of XML 1.0 processing. (6) Content and attribute values can contain strings that represent element and attribute names; these strings should be handled uniformly with names that the XML parser recognizes as element and attribute names.
I would claim that the aspect of XML Namespaces that causes pain is the URI/prefix duality: the thing that occurs in the document (the prefix + local name) is not the same as the thing that is semantically significant (the namespace URI + local name). As soon as you accept this duality, I believe you are doomed to a significant extra layer of complexity... The need for this duality stemmed from the use of URIs for names. As far as I remember, there was actually no discussion in the XML WG on this point when we were doing XML Namespaces: it was treated as axiomatic that URIs were the right thing to use here. But this is where I believe XML Namespaces went wrong...
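The duality Clark describes is easy to observe in practice: what appears in the document is prefix + local name, while what an XML API reports is namespace URI + local name. A short demonstration using Python's standard-library ElementTree (which renders qualified names in "{uri}local" Clark notation):

```python
# The document carries "p:note" (prefix + local name), but the parser
# exposes "{http://example.org/ns}note" (namespace URI + local name) --
# the two halves of the duality the post identifies as the source of pain.
import xml.etree.ElementTree as ET

doc = '<p:note xmlns:p="http://example.org/ns"><p:body/></p:note>'
root = ET.fromstring(doc)

print(root.tag)     # {http://example.org/ns}note
print(root[0].tag)  # {http://example.org/ns}body
```

Any tool that round-trips, queries, or compares such documents has to carry the prefix-to-URI bindings alongside the names themselves, which is exactly the extra layer of complexity being criticized.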
I think the insistence on URIs for namespaces is paying insufficient attention to the distinction between instances of things and types of things. The Web works as well as it does because there is an extraordinarily large number of instances of things (ie Web pages) and a relatively very small number of types of things (ie MIME types). Completely different considerations apply to naming instances and naming types: both the scale and the goals are completely different. URIs are the right way to name instances of things on the Web; it doesn't follow that they are the right way to name types of things..."
See also: Namespaces in XML
Simplify Your Apps with the XML Binding Language 2.0
Kurt Cagle, DevX.com
"[One approach to adding new functionality to web sites] may have the potential to both simplify your applications and contribute significantly to reuse. The idea behind it is deceptively simple: in a web page's CSS page, you define what's called a behavior, a script that binds to a given behavior language document written in mixed XML and JavaScript called the XML Binding Language (XBL). Once the page loads, any element that's associated with that particular rule will gain the behavior, essentially acting as a new 'element' with its own presentation, its own responses to user input, and its own underlying data. XBL has been floating around in various incarnations since the early 2000s. Microsoft created a type of binding called behaviors in the late 1990s, but the technology never really caught on with other browsers...
Mozilla announced in 2008 that they were looking to have an XBL 2 version, likely with their 4.0 release ... this effort is underway now, and it's very likely that they will achieve this goal, especially with a formal candidate recommendation status that's unlikely to change its underlying functionality. Google, for its part, took an alternative route that's similar to the approach they took recently with the SVG Web project. Rather than waiting for other browser vendors to adopt XBL 2, they've recently created a JavaScript-based XBL2 code project [...] with the code designed in such a way that it will work across any browser. Currently it supports all major browser versions produced within the last four years...
It's hard to say whether Google's XBL2 implementation will really catch on, although it has a number of factors going for it. The code is remarkably cross-platform: it works on all contemporary browsers with the possible exception of Konqueror. That doesn't necessarily mean that JavaScript code written within the bindings will satisfy that same restriction, of course, but having the framework in place can go a long way toward making such code browser independent.
Bindings make for cleaner layout and code and encourage componentization at the browser level, which in turn promotes code reuse and the development of core libraries... Overall, it may very well be that XBL's time is just now arriving. With the stabilizing of the AJAX space, and the resources of a company like Google behind it, XML as a binding language has a great deal to offer and very little downside..."
Wireless Structural Monitoring
Jennifer La Montagne and Celeste Bragorgos, DDJ
"Illinois researchers have developed an inexpensive, wireless means for continuous and reliable structural health monitoring and successfully deployed their system this summer at full scale on the new Jindo Bridge in South Korea. A joint project between the University of Illinois at Urbana-Champaign, KAIST in Korea, and the University of Tokyo, it is the first dense deployment of a wireless sensor network on a cable-stayed bridge and the largest of its kind for civil infrastructure to date.
The researchers, as part of the Illinois Structural Health Monitoring Project (ISHMP) led by Bill Spencer and Gul Agha, designed, developed, and tested sensors that can be manufactured very cheaply and still produce the high-fidelity data required for structural health monitoring. Their research has also produced a customizable software framework that simplifies the development of structural health monitoring applications for smart sensor platforms. In combination, their sensors and software create an integrated framework that can be utilized by most civil engineers without the need for extensive background in electrical engineering or computer science. According to Spencer, more than 40 institutions throughout the world are now using the ISHMP framework..."
See also: OGC's Sensor Web Enablement
U.S. Pushes for EMR Standards
W. David Gardner, InformationWeek
"In a move to streamline medical records, Medicare officials have detailed plans to standardize medical files so they can be stored and delivered in comprehensive electronic files. Announced by the Centers for Medicare & Medicaid Services (CMS), the proposed standards are aimed at helping release $19 billion in federal stimulus funds. The standards are expected to be developed over a period of several months. The program is designed to coax the medical establishment to move away from paper files and to pave the way for currently incompatible files to be accessible in standard formats...
The existing patchwork of paper files and incompatible electronic files has caused many segments of the medical delivery system to hesitate to move to new electronic systems, because no meaningful standard exists. The CMS hopes to pave the way to a workable standardized system through stimulus funds. There is widespread belief that standardized electronic medical files will improve medical delivery to patients and cut costs as well. A recent study of healthcare executives carried out by PricewaterhouseCoopers found they believe the information contained in electronic medical records will become the health care industry's most valuable asset, once the data becomes accessible..."
See also: XML in Clinical Research and Healthcare
Stack Overflow Has Open Sourced Markdown/C#
Abel Avram, InfoQ
"MarkdownSharp, initially called Markdown.NET, a C# implementation of the Markdown text processor, has been open sourced by Stack Overflow. Markdown is a text-to-HTML conversion tool initially written in Perl by John Gruber, who released it back in 2004 under a BSD license. Markdown is one of several lightweight markup languages (AsciiDoc, BBCode, Textile, etc.) and has gained some traction over the years, being employed by websites like Stack Overflow.
Markdown is useful to writers who want to use a simpler-than-HTML markup language that later can be converted to HTML. Also, websites can use it to let users enter comments in plain text which then are converted to HTML for publishing..."
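The flavor of that text-to-HTML conversion can be sketched with a toy converter handling just two Markdown constructs, ATX headers and **bold** spans. This is an illustrative sketch only, not MarkdownSharp or Gruber's Markdown.pl, which handle far more syntax and many edge cases.

```python
# Toy sketch of Markdown-style text-to-HTML conversion: ATX headers
# ("# Title") and inline **bold** only. Not a real Markdown implementation.
import re

def mini_markdown(text: str) -> str:
    html_lines = []
    for line in text.splitlines():
        # **bold** -> <strong>bold</strong>
        line = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", line)
        # "# Title" ... "###### Title" -> <h1>..</h1> ... <h6>..</h6>
        m = re.match(r"(#{1,6})\s+(.*)", line)
        if m:
            level = len(m.group(1))
            html_lines.append(f"<h{level}>{m.group(2)}</h{level}>")
        elif line:
            # Any other non-empty line becomes a paragraph.
            html_lines.append(f"<p>{line}</p>")
    return "\n".join(html_lines)

print(mini_markdown("# Hello\nThis is **bold** text."))
# <h1>Hello</h1>
# <p>This is <strong>bold</strong> text.</p>
```

The comment-entry use case the article mentions works the same way: users type plain text with lightweight markers, and the site converts it to HTML at publish time.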
See also: the Markdown list archives
XML Daily Newslink and Cover Pages sponsored by:
Sun Microsystems, Inc. http://sun.com
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: email@example.com
Newsletter unsubscribe: firstname.lastname@example.org
Newsletter help: email@example.com
Cover Pages: http://xml.coverpages.org/