The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: June 08, 2009
XML Daily Newslink. Monday, 08 June 2009

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Sun Microsystems, Inc. http://sun.com



Running Code as Part of An Open Standards Policy
Rajiv C. Shah and Jay P. Kesan, First Monday

Governments around the world are considering implementing or even mandating open standards policies. They believe these policies will provide economic, socio-political, and technical benefits. In this article, we analyze the failure of Massachusetts's open standards policy as applied to document formats. We argue it failed due to the lack of running code. Running code refers to multiple independent, interoperable implementations of an open standard. With running code, users have a choice among software products and consequently gain economic and technological benefits. We urge governments to incorporate a 'running code' requirement when adopting an open standards policy...

The advantages of open standards make it reasonable that governments will seek to adopt open standards. However, the Massachusetts experience suggests that governments must not judge open standards by how they are written, but by how widely they are implemented. After all, without multiple interoperable independent implementations, i.e., 'running code', governments may find themselves suffering from lock-in to an open standard solution. The running code requirement is not new, but it has been forgotten by governments in their rush to adopt open standards. Adding a running code requirement to an open standards policy puts an emphasis on how the standard is actually being used. We believe if adopters of open standards insist on running code, software developers and vendors will further support open standards and their interoperability. The result will be an array of economic and technological benefits...

See also: ETRM news stories for 2005 and 2007


IEEE Key Management Summit 2010
KMS 2010 Program Committee, Workshop Announcement

As a follow-on to the very successful IEEE Key Management Summit 2008, the IEEE Key Management Summit 2010 will be held May 4-5, 2010, in Lake Tahoe, Nevada, USA. KMS 2010 is co-located with the IEEE Mass Storage and Systems Technologies conference (MSST). The summit aims to provide clarity to key management by showing how existing products and standards organizations address the problems of interoperability and security.

"The IEEE Key Management Summit brings together the top companies that develop cryptographic key management for storage devices with the standards organizations that make interoperability possible and the customers that rely on key management to secure their encrypted data. With recent legislation, such as California's SB 1386 or Sarbanes-Oxley, companies now have to publicly disclose when they lose unencrypted personal data. To meet this new need for encryption, many companies have developed solutions that encrypt data on hard disks and tape cartridges. The problem is that these data storage vendors need a solution for managing the cryptographic keys that protect the encrypted data."

Members of the KMS 2010 Program Committee include: Matt Ball (Sun Microsystems, Program Chair), Scott Kipp (Brocade), Robert Lockhart (Thales E-Security), Fabio Maino (Cisco Systems), Luther Martin (Voltage Security), Landon Noll (Cisco Systems), Subhash Sankuratripati (NetApp), and Hannes Tschofenig (Nokia Siemens Networks).

See also: IEEE Key Management Summit 2008


NIST Focus Paper: Cryptographic Key Management (CKM) Workshop
NIST Computer Security Division, Workshop Briefing

"What is a key management framework? A key management framework is a basic conceptual structure that is used to specify the high-level issues and requirements for secure key management and will be the initial product of the CKM [Cryptographic Key Management] workshop. The framework will provide a structure for defining key management architectures from which key management systems can be built. The CKM framework is intended to define the components of a seamless set of technologies that will automatically create, establish, supply, store, protect, manage, update, replace, verify, lock, unlock, authenticate, audit, backup, destroy, and oversee all cryptographic keys needed for applications in the computing and communicating environments of the future. The framework will define the requirements for secure key management; the topics to be addressed include security policies, trust issues, cryptographic algorithms and key sizes for generating, distributing, storing, and protecting keys, key distribution, interoperable protocols, archiving, key recovery, key lifecycles, transparent user interfaces, etc...

Cryptographic Key Management (CKM) includes policies for selecting appropriate key generation/establishment algorithms and key sizes, protocols to utilize and support the distribution of keys, protection and maintenance of keys and related data, and integration of key management with cryptographic technology to provide the required type and level of protection specified by the overall security policy and specifications... Some large-scale applications to be addressed [in the workshop] include the protection of critical infrastructure information, uniform (if not universal) health care, international finance, real-time national voting systems, integrated electronic commerce, international multimedia communications, long term information archives, Federal and State social services, and automatic data conversion conforming to technology changes, etc..."

See also: the NIST workshop


How to Protect Sensitive Data Using Database Encryption
Christian Kirsch, eWEEK

"Database encryption has gradually worked its way up the priority list for today's IT director. Firewalls and application security are no longer enough to protect businesses and data in the modern-day, open and complex IT environment. Mitigating this risk and complying with numerous emerging regulations are two principal drivers that are forcing database encryption onto the IT director's agenda. In this article, eWEEK Knowledge Center contributor Christian Kirsch explains how these challenges can be overcome and advises on best practices for database encryption... Advanced security through database encryption is required across many different sectors and is increasingly needed to comply with regulatory mandates. The public sector, for example, uses database encryption to protect citizen privacy and national security. Initiated originally in the United States, many governments now have to meet policies requiring Federal Information Processing Standard (FIPS) validated key storage. For the financial services industry, it is not just a matter of protecting privacy but also complying with regulations such as the Payment Card Industry Data Security Standard (PCI DSS). This creates policies that not only define what data needs to be encrypted and how, but also places some strong requirements on keys and key management... It is important that database encryption is accompanied by key management; however, this is also the main barrier to database encryption. It is well-recognized that key use should be restricted and that key backup is extremely important. However, with many silos of encryption and clusters of database application servers, security officers and administrators require a centralized method to define key policy and enforce key management..."
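The centralized key management the article calls for can be sketched in miniature. Everything below (the class name, key-ID format, audit-log layout) is hypothetical; a real deployment would back this with an HSM or a key management protocol such as OASIS KMIP rather than an in-memory dict, but the sketch shows the core policy points: versioned keys, a single active key for new encryptions, old versions retained for decrypting existing rows, and every key use audited.

```python
import secrets
from datetime import datetime, timezone

class KeyManager:
    """Toy centralized key store: versioned keys, rotation, an audit trail.
    Illustrative only; key material would normally never live in plain memory."""

    def __init__(self):
        self._keys = {}       # key_id -> 256-bit key material
        self._active = None   # key_id used for new encryptions
        self._version = 0
        self.audit_log = []

    def _audit(self, action, key_id):
        self.audit_log.append(
            (datetime.now(timezone.utc).isoformat(), action, key_id))

    def rotate(self):
        """Generate a new key version and make it the active one."""
        self._version += 1
        key_id = f"db-key-v{self._version}"
        self._keys[key_id] = secrets.token_bytes(32)
        self._active = key_id
        self._audit("rotate", key_id)
        return key_id

    def active_key(self):
        """Key to use when encrypting new data."""
        return self._active, self._keys[self._active]

    def lookup(self, key_id):
        """Old versions stay available so previously encrypted rows can
        still be decrypted after a rotation."""
        self._audit("use", key_id)
        return self._keys[key_id]

km = KeyManager()
first = km.rotate()
second = km.rotate()
kid, key = km.active_key()
```

Centralizing this logic is exactly what lets a security officer define key policy once and enforce it across many silos of encryption and database application servers.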

See also: the OASIS Key Management Interoperability Protocol TC


Mathematical Markup Language (MathML) Version 3.0
David Carlisle, Patrick Ion, Robert Miner (eds), W3C Technical Report

Members of the W3C Math Working Group have released a revised Working Draft for the "Mathematical Markup Language (MathML) Version 3.0," updating the WD of 2008-11-17. The specification defines the Mathematical Markup Language (MathML). MathML is an XML application for describing mathematical notation and capturing both its structure and content. The goal of MathML is to enable mathematics to be served, received, and processed on the World Wide Web, just as HTML has enabled this functionality for text.

MathML can be used to encode both mathematical notation and mathematical content. About thirty-eight of the MathML tags describe abstract notational structures, while roughly one hundred and seventy more provide a way of unambiguously specifying the intended meaning of an expression. Additional chapters discuss how the MathML content and presentation elements interact, and how MathML renderers might be implemented and should interact with browsers. Finally, this document addresses the issue of special characters used for mathematics, their handling in MathML, their presence in Unicode, and their relation to fonts. While MathML is human-readable, in all but the simplest cases authors use equation editors, conversion programs, and other specialized software tools to generate it. Several such MathML tools already exist, and more, both freely available and commercial, are under development.
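The notation/content split can be illustrated with "x squared"; this is a minimal fragment using standard MathML elements, not an excerpt from the draft:

```xml
<!-- Presentation MathML: how the expression looks (superscript layout) -->
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <msup><mi>x</mi><mn>2</mn></msup>
</math>

<!-- Content MathML: what the expression means (power applied to x and 2) -->
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <apply><power/><ci>x</ci><cn>2</cn></apply>
</math>
```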

The MathML 2.0 (Second Edition) specification has been a W3C Recommendation since 2001. After its recommendation, a W3C Math Interest Group collected reports of experience with the deployment of MathML and identified issues with MathML that might be ameliorated. The rechartering of a Math Working Group allows the revision to MathML 3.0 in the light of that experience, of other comments on the markup language, and of recent changes in specifications of the W3C and in the technological context. MathML 3.0 does not signal any change in the overall design of MathML. The major additions in MathML 3 are support for bidirectional layout, better linebreaking and explicit positioning, elementary math notations, and a new strict content MathML vocabulary with well-defined semantics generated from formal content dictionaries. The MathML 3 Specification has also been restructured.

See also: the W3C Math Working Group Charter


REST is a Style, WOA is the Architecture
Dave West, InfoQueue

Dion Hinchcliffe has recently offered two related articles that explore the relationships between Web Oriented Architecture (WOA) and other technologies. The first article deals with WOA and REST; the second looks at WOA and SOA. The main point of the first article: REST is a style and WOA is the architecture. The second article argues that WOA is really a highly complementary sub-style of SOA and explores the implications of this simple observation... Hinchcliffe defines WOA in two parts: a core that contains REST, URLs, SSL, and XML; and a "WOA Full" that includes protocols and interfaces (e.g., BitTorrent), identity and security (e.g., OpenID), distribution and components (e.g., Open APIs), and data formats and descriptions (e.g., ATOM). These are organized in a WOA Stack of six levels, with example technologies: [1] Distribution - HTTP, feeds; [2] Composition - Hypermedia, Mashups; [3] Security - OpenID, SSL; [4] Data Portability - XML, RDF; [5] Data Representation - ATOM, JSON; [6] Transfer Methods - REST, HTTP. This stack reinforces the relationship between WOA and REST, with the latter being fundamental to and supportive of the larger architectural idea.

See also: SOA Considers Web-Oriented Architecture (WOA) in Earnest


LDAP Schema for vCard Version 4.0
Stephen Gryphon (ed), IETF Internet Draft

IETF announced an initial version -00 Internet Draft for "LDAP Schema for vCard Version 4.0." vCard is intended to be a format for representing directory information about people, including information from LDAP directories, in a MIME message. Lightweight Directory Access Protocol (LDAP) is a common standard used to store and access directory information, including information about people (per RFC 2798, RFC 4512, RFC 4517, and RFC 4519). Although both are intended to represent contact information about people, the two standards differ significantly in the attributes they support, and there is no clear method for mapping between the two.

This 'LDAP Schema' document works to harmonize the vCard directory information card and Lightweight Directory Access Protocol (LDAP) standards by extending both to support a common directory card entity. Additional LDAP attributes and object classes, and additional properties for vCard, are defined. A standard mapping process between the two is defined as well, designed to support vCard's goal of serving as a transport format between directories (not just LDAP).
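For illustration only, here is the kind of property-to-attribute mapping such a document has to pin down. The table uses a few well-known inetOrgPerson attribute correspondences, but it is not the draft's normative mapping, and the helper function is hypothetical:

```python
# Illustrative vCard-property -> LDAP-attribute correspondences
# (common inetOrgPerson attributes; NOT the draft's normative mapping).
VCARD_TO_LDAP = {
    "FN":    "cn",               # formatted name -> common name
    "EMAIL": "mail",
    "TEL":   "telephoneNumber",
    "ORG":   "o",                # organization
    "TITLE": "title",
}

def vcard_to_ldap_entry(props):
    """props: dict of vCard property name -> list of values.
    Returns an LDAP-style attribute dict; properties with no defined
    mapping (e.g. vendor X- extensions) are dropped."""
    entry = {}
    for name, values in props.items():
        attr = VCARD_TO_LDAP.get(name.upper())
        if attr is not None:
            entry.setdefault(attr, []).extend(values)
    return entry

entry = vcard_to_ldap_entry({
    "FN": ["Ada Lovelace"],
    "EMAIL": ["ada@example.org"],
    "X-SKYPE": ["ada"],          # unmapped, silently dropped
})
```

The hard part the draft addresses is precisely the residue this toy ignores: properties on each side with no counterpart on the other, multi-valued semantics, and round-tripping without loss.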


Gilbane SF on Content Integration Standards: CMIS, JSR-170, JSR-283
Irina Guseva, CMS Wire

One of the final sessions at Gilbane San Francisco [recently] was around content standards: CMIS, JSR-170, and JSR-283. Many realize there are several challenges with CMIS in particular, and with efficiently working with content from disparate content repositories in general. The session aimed at shedding light on some of these challenges and possible solutions in the standards space. The content standards that exist right now are conceptually simple but challenging to implement. At the same time, many realize that content integration standards hold some promise for developing an enterprise-wide content infrastructure... Moderated by Larry Hawes, the session featured two implementers who tried to address content integration problems... Dick Weisinger, vice president and Chief Technologist at Formtek, kicked off with a presentation full of interesting statistics. Weisinger pointed out a fact we are all well aware of: content in the digital universe is exploding... Naresh Devnani, managing director at Lean Management Group, gave us a peek into real-life scenarios and impressions of implementing a standards wrapper, from the time when he was working for Vignette PS... The reality is that most customers have more than one repository. The focus of CMIS should really be on helping customers, not vendors. JCR is independent of the repository logic, while CMIS targets one or more content repositories in order to allow for communication between them...

See also: CMIS references


Atom Bidirectional Attribute
James Snell (ed), IETF Internet Draft

A revised version -07 IETF Internet Draft has been released for the "Atom Bidirectional Attribute." This document adds a new attribute to the Atom Syndication Format used to indicate the base directionality of directionally-neutral characters. Portions of this specification are illustrated with fragments of a non-normative RELAX NG Compact schema... The "dir" attribute is an extension to the Atom vocabulary that will be treated as unknown foreign markup by existing Atom processors that have not been explicitly implemented to support it. As per the rules specified in RFC 4287 ('The Atom Syndication Format'), such processors are required to ignore unknown foreign markup and continue processing as if the markup did not exist. The direction specified by "dir" applies to elements and attributes whose values are specified as being "Language-Sensitive" as defined by Section 2 of RFC 4287. The direction specified by the attribute is inherited by descendant elements and attributes and may be overridden. Values other than "ltr", "rtl", and the empty string MUST be ignored and processed as if the dir attribute were not present; Atom processors MUST NOT stop processing or signal an error. The value of the attribute is not case-sensitive.
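The value-handling rule is deliberately forgiving: matching is case-insensitive, and an empty or unrecognized value is ignored rather than rejected, so the inherited direction applies. A small helper sketches that rule (the function name and default are hypothetical, not from the draft):

```python
def effective_dir(value, inherited="ltr"):
    """Resolve an Atom 'dir' attribute value per the draft's rules:
    "ltr" and "rtl" (any case) set the base direction; an absent,
    empty, or unrecognized value is treated as if the attribute were
    not present, so the inherited direction applies. Never raises."""
    if value is None:               # attribute absent
        return inherited
    v = value.lower()               # value is not case-sensitive
    if v in ("ltr", "rtl"):
        return v
    return inherited                # "" or unknown: ignore, don't error

effective_dir("RTL")                     # case-insensitive match
effective_dir("bogus", inherited="rtl")  # unknown value falls back
```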

Editor's note: "I just posted an updated version of the Atom Bidi Draft, mainly to get it out of the Expired state. I've changed it to "Informational" from "Experimental" and updated the IPR per the current boilerplate guidelines. I also added an explicit statement indicating that the dir attribute will be treated as unknown foreign markup by existing Atom processors that do not implement support for the 'dir' attribute. I would like to go ahead and get this wrapped up and published as an Informational RFC. Support for the attribute has been implemented in Apache Abdera and the attribute is being used in at least one commercial product (IBM's Lotus Connections Blogs component)..."

See also: Atom references


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
Microsoft Corporation http://www.microsoft.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com
Sun Microsystems, Inc. http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2009-06-08.html  —  Legal stuff
Robin Cover, Editor: robin@oasis-open.org