The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: May 19, 2009
XML Daily Newslink. Tuesday, 19 May 2009

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation

Interoperable Key Management Systems: KMIP is the Key
Blair Semple, NetApp Blog

After attending the Storage Networking World show in Orlando earlier this month, I was struck by the fact that the community of storage security vendors has done a lousy job of establishing viable interoperability standards to help make our customers' operations function more efficiently. I've been involved in the storage security industry since 2002... Looking back, I think one of the most significant factors that has limited the ubiquitous deployment of solutions that encrypt data-at-rest within the enterprise is key management, or perhaps more specifically, the lack of key management standards. In many ways it is shameful that we as a vendor community haven't been able to come up with a standard interoperability protocol that would allow an end-user to purchase different solutions for different applications or datasets but deploy only one key management solution, making the best use of their investment. Until very recently, each solution had its own proprietary key management system, and end-user organizations are understandably reluctant to deploy multiple key management systems for different encryption solutions. In early 2007 work began on a key management protocol within the IEEE Security in Storage Working Group, a project known as "P1619.3 Standard for Key Management Infrastructure for Cryptographic Protection of Stored Data." Member companies of this project include Brocade, Cisco, HDS, HP, IBM, NetApp, RSA and many others...

Two years later we still find ourselves without a standard. Now a new standardization project, the Key Management Interoperability Protocol (KMIP), is being started under the OASIS group. Just announced in February [2009], a similar cast of characters is involved in this new effort to establish a standard. The estimate is to have something finalized in the next 12 months or so, and hopefully products will appear without too much extra delay. The end-user organizations I work with are clamoring for interoperable key management systems that allow them to deploy what makes the most sense for their applications without having to deploy and manage multiple systems. Please, please, please tell your vendors that this is an important part of your security strategy, and you cannot wait another two years for compatible technologies to appear...

See also: KMIP Technical Activities Overview

What's There To Like About Microsoft Office 2007 SP 2?
Patrick Durusau, OpenDocument Blog

From an OpenDocument Format perspective, [there's] quite a bit to like about Microsoft Office 2007 SP2. First and foremost, having an office format implemented by the largest vendor in that market area isn't a bad thing. Office formats, to be meaningful at all, must serve the needs of the users of those formats. The larger the group of users that a format serves, the more useful it is overall. The support of ODF by Microsoft will only help reach more users. Second, and this is mostly an editor's concern, implementation of ODF by a group new to the standard will help isolate unspecified and underspecified parts of the standard. I have yet to encounter any standard that did not have such blemishes, and one of the best ways to find them is to ask a new implementer to implement what the standard says, not what may have been meant. Third, another implementation increases the number of readings of ODF that result in actual choices being made about what we said, or didn't say. Comparing the results across implementations is also a good way to find places that may need work in a standard. I would prefer that such differences be illustrated and the implementers then asked (as opposed to being accused of incompetence or malice) why their implementation reached a different result. Fourth, and purely from a standards perspective, the more implementations exist, the less cause there is to see a standard as belonging to one group or another. Certainly no one now claims that XML, for example, belongs to one particular interest or community, well, other than the XML community. If we are ultimately successful in the development of ODF 1.2 and beyond, it will not be seen as 'my standard' or 'their standard' but 'our standard'...

If OpenDocument Format is to be a standard, then it must be a standard equally for the largest vendor as well as the smallest open source project that wants to use it and everyone in between. In order to reach that status, the OpenDocument Format standard must change and evolve to meet the needs of those users. It must belong to the ODF community writ large, ranging from users of MS Office, to OpenOffice and Symphony users, to users of custom software based upon ODF, and everyone in between...

See also: Working with ODF in Word 2007 SP2

Nevada Tags Financial Data
Joab Jackson, Government Computer News

The XBRL standard will now be used to streamline reporting of grants, debt collection. Nevada has begun applying the Extensible Business Reporting Language to a number of its business processes and has plans to use it as part of its business Web portal, according to State Controller Kim Wallin. She has high hopes that XBRL can streamline reporting and citizen services for the state. Her office has been working with Deloitte Consulting to apply XBRL to the grants reporting process since January. Officials hope to have a fully working system by fall. The office is also in the early stages of using XBRL for documents related to debt collection. Created by the American Institute of Certified Public Accountants, XBRL is a vendor-neutral extension of the Extensible Markup Language (XML) and offers a large set of markup tags for reporting on business activities. A growing number of enterprise applications support XBRL, including development tools from Altova and General Ledger software from Oracle... To automate the [expenditures] process, the controller's office is developing an XBRL GL Adapter that applies tags to each general ledger entry. After the CSV file is downloaded, the XBRL data fields can be populated directly into a report employees build using Web-based XForms that process XML. The state carved a subset from XBRL to build a taxonomy that describes its grants reporting process and is used to tag all the fields in the report...
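The adapter workflow described above, downloading general-ledger data as CSV and then tagging each entry with XBRL fields, can be sketched as follows. This is a hypothetical illustration, not Nevada's actual adapter: the element names and namespace are invented placeholders, not the official XBRL GL taxonomy.

```python
# Hypothetical sketch of a GL-adapter step: wrap CSV ledger rows in
# XBRL-GL-style tagged elements. Names below are illustrative only.
import csv
import io
import xml.etree.ElementTree as ET

NS = "http://example.org/gl-cor"  # placeholder, not the real XBRL GL namespace
ET.register_namespace("gl-cor", NS)

def tag_ledger(csv_text: str) -> ET.Element:
    """Turn each CSV general-ledger row into a tagged entry element."""
    root = ET.Element(f"{{{NS}}}accountingEntries")
    for row in csv.DictReader(io.StringIO(csv_text)):
        entry = ET.SubElement(root, f"{{{NS}}}entryDetail")
        ET.SubElement(entry, f"{{{NS}}}account").text = row["account"]
        ET.SubElement(entry, f"{{{NS}}}amount").text = row["amount"]
        ET.SubElement(entry, f"{{{NS}}}postingDate").text = row["date"]
    return root

sample = "account,amount,date\n4100,2500.00,2009-05-01\n4200,130.75,2009-05-02\n"
doc = tag_ledger(sample)
print(len(doc))  # 2
```

Once the entries are tagged, a Web-based XForms report can populate its fields directly from the XML rather than from hand-keyed data, which is the efficiency the controller's office is after.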

By working with XBRL, Nevada is following the lead of a number of federal agencies. For example, the Securities and Exchange Commission mandates that companies file their public reports to its Electronic Data Gathering, Analysis and Retrieval database using XBRL, and the Federal Deposit Insurance Corp. requires U.S. banks to file quarterly reports in XBRL. In March 2009, members of the XBRL Consortium told Congress that the language could help with oversight of the Emergency Economic Stabilization Act of 2008. Requiring financial firms to file reports in XBRL would make it easier for the government and investors to evaluate the activities of those firms.

See also: XBRL specifications

The Assertions in HTML 5: Constraints and Content Models
Rick Jelliffe,

"During last week a colleague asked me for a collection of typical constraints that Schematron is used for, to test an implementation. So let's look at the assertions in the draft of 'HTML 5: The Markup Language', which collects constraints about the markup: the kinds of things that are amenable to schema testing. Most of the draft is taken up by section 6, which is a listing of all the elements with a standard form for the constraints of various kinds: Content Model, Attribute Model, Permitted Contexts, and so on. Let's have a look at the assertions in particular and see how they fit in with Schematron. Since, on my understanding, the assertions were actually largely created during an exercise that created RELAX NG schemas and Schematron, it should be no surprise that RELAX NG can handle all the content models and Schematron can handle all the assertions. But it is interesting to classify the kinds of assertions, to get an idea of the kinds of constraints that are, in practice, important. (I don't know whether the designers of HTML limited themselves to Schematron assertions or to the subset in drafts of XSD 1.1.) I've omitted repeating assertions, where exactly the same constraints or the same kind of constraint has been specified.

Here they are categorized in various ways... (1) Downward axis exclusion constraints: These constraints can be expressed using conventional schema languages such as RELAX NG and XSD. In SGML DTDs, some of them could be expressed using the inclusion exception or exclusion exception mechanisms. (2) Downward axis requirement constraints: A variant on the downward axis constraints is one that makes a requirement. (3) Complex value constraints: These constraints are not possible with conventional grammars. However, RELAX NG does allow content models where the presence of a constant text value in an attribute or element forces a particular path. XSD 1.1 has an assertion mechanism that should be able to handle some of these too... (4) Reference constraints: The particular reference constraints here are fairly straightforward. I expect that XSD KEYREF checking could cope with these. (5) Reverse axis constraints: Sometimes the constraint works backwards, as in this: the 'img' element with the 'ismap' attribute set must have an 'a' ancestor with the 'href' attribute... (6) Permitted contexts: 'HTML 5: The Markup Language' also lists the permitted contexts for each element. These are usually the grouped common content models... (7) Content model sequence: One very interesting part of the draft HTML 5 content models is how rarely sequence is actually used. Sequence is one thing that is usually easier to express with content models than with XPaths in Schematron. In my opinion, this ease leads to the situation where sequence is used even where it cannot be related to any business requirement..."
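The reverse-axis case (5) is the kind of check a grammar cannot express but an assertion can. A Schematron rule would state it with an XPath such as `ancestor::a[@href]`; as a hedged illustration, the same check can be hand-rolled over a parsed tree with nothing but the Python standard library:

```python
# Checking the reverse-axis assertion from the article: an <img> carrying an
# ismap attribute must sit inside an <a> ancestor that has an href attribute.
# (Schematron expresses this declaratively; here we walk the tree manually.)
import xml.etree.ElementTree as ET

def check_ismap(root):
    """Return a list of violation messages for img[@ismap] elements."""
    errors = []
    def walk(elem, ancestors):
        if elem.tag == "img" and "ismap" in elem.attrib:
            if not any(a.tag == "a" and "href" in a.attrib for a in ancestors):
                errors.append("img[@ismap] has no a[@href] ancestor")
        for child in elem:
            walk(child, ancestors + [elem])
    walk(root, [])
    return errors

good = ET.fromstring('<p><a href="/map"><img ismap="ismap"/></a></p>')
bad = ET.fromstring('<p><img ismap="ismap"/></p>')
print(check_ismap(good))  # []
print(len(check_ismap(bad)))  # 1
```

The point of the comparison is that the downward-axis constraints (1) and (2) fit ordinary grammars, while this one needs either an assertion language or ad hoc code like the above.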

See also: the Draft "HTML 5 - The Markup Language"

Location-to-URL Mapping Architecture and Framework
Henning Schulzrinne (ed), IETF Internet Draft

Members of the IETF Emergency Context Resolution with Internet Technologies (ECRIT) Working Group have released an updated Internet Draft version of the "Location-to-URL Mapping Architecture and Framework" specification. This document describes an architecture for a global, scalable, resilient and administratively distributed system for mapping geographic location information to URLs, using the Location-to-Service Translation (LoST) protocol. The architecture generalizes well-known approaches found in hierarchical lookup systems such as DNS. LoST is an XML-based protocol for mapping service identifiers and geodetic or civic location information to service URIs and service boundaries. In particular, it can be used to determine the location-appropriate Public Safety Answering Point (PSAP) for emergency services. Such location information includes revised civic location information and a subset of the PIDF-LO profile, which consequently includes the Geo-Shapes defined for GML; example service URI schemes include sip, xmpp, and tel.

From the 'Introduction': It is often desirable to allow users to access a service that provides a common function, but is actually offered by a variety of local service providers. In many of these cases, the service provider chosen depends on the location of the person wishing to access that service. Among the best-known public services of this kind is emergency calling, where emergency calls are routed to the most appropriate public safety answering point (PSAP), based on the caller's physical location. Other services, from food delivery to directory services and roadside assistance, also follow this general pattern. This is a mapping problem, where a geographic location and a service identifier (URN, RFC 5031) is translated into a set of URIs, the service URIs, that allow the Internet system to contact an appropriate network entity that provides the service. The caller does not need to know where the service is being provided from, and the location of the service provider may change over time, e.g., to deal with temporary overloads, failures in the primary service provider location or long-term changes in system architecture. For emergency services, this problem is described in more detail in the I-D "Framework for Emergency Calling using Internet Multimedia." The overall emergency calling architecture separates mapping from placing calls or otherwise invoking the service, so the same mechanism can be used to verify that a mapping exists ("address validation") or to obtain test service URIs. Mapping locations to URIs describing services requires a distributed, scalable and highly resilient infrastructure. Authoritative knowledge about such mappings is distributed among a large number of autonomous entities that may have no direct knowledge of each other. In this document, we describe an architecture for such a global service. It allows significant freedom to combine and split functionality among actual servers and imposes few requirements as to who should operate particular services.
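To make the mapping concrete, here is a sketch of what a LoST findService request looks like when built programmatically: a civic location plus a service URN, which the LoST server resolves to a service URI. The namespaces and element names follow the LoST specification (RFC 5222) and the PIDF-LO civic address schema as published, but verify against the specs before relying on them.

```python
# Sketch of a LoST <findService> request: civic location + service URN in,
# service URI out. Namespace URNs per RFC 5222 / PIDF-LO; illustrative only.
import xml.etree.ElementTree as ET

LOST = "urn:ietf:params:xml:ns:lost1"
CIVIC = "urn:ietf:params:xml:ns:pidf:geopriv10:civicAddr"

def find_service_request(country, a1, service_urn):
    """Build the XML body a LoST client would POST to a resolver."""
    req = ET.Element(f"{{{LOST}}}findService")
    loc = ET.SubElement(req, f"{{{LOST}}}location", profile="civic")
    addr = ET.SubElement(loc, f"{{{CIVIC}}}civicAddress")
    ET.SubElement(addr, f"{{{CIVIC}}}country").text = country
    ET.SubElement(addr, f"{{{CIVIC}}}A1").text = a1  # state/province
    ET.SubElement(req, f"{{{LOST}}}service").text = service_urn
    return req

# Ask which PSAP serves an emergency ("sos") call placed from Nevada, US:
req = find_service_request("US", "NV", "urn:service:sos")
xml_body = ET.tostring(req, encoding="unicode")
```

The response would carry the service URI (e.g., a sip: URI for the appropriate PSAP) and the service boundary within which that mapping is valid, which is what lets clients cache answers in the DNS-like hierarchy the draft describes.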

See also: the IETF Emergency Context Resolution with Internet Technologies (ECRIT) WG

JBoss Enterprise Business Rules Management System (BRMS)
Staff, Red Hat Announcement

Red Hat, Inc., a leading provider of open source solutions, has announced the availability of JBoss Enterprise Business Rules Management System (BRMS), an open source business rules solution that enables easy development, access, and management of business policies and rules. JBoss Enterprise BRMS allows customers to reduce the development time needed to update applications and processes with the latest business rules and policies. Automating business decisions with JBoss Enterprise BRMS can help a business run faster and give business process stakeholders the ability to implement change rapidly. Now, organizations can automate the delivery of customer-facing updates, promotions, loyalty programs, discounts, payment terms, and more, cutting time to deployment while improving accuracy and lowering cost.

Business rules are parameters that describe how an organization performs work. Best practices require business rules to be maintained separately from the software applications and services they govern in order to maximize agility. If business rules are duplicated or scattered across many applications, updates are costly, error prone, and take weeks or months to implement. With JBoss Enterprise BRMS, enterprises can update their business rules to reflect the day-to-day business and regulatory environment as quickly as in a few hours or days. Additionally, JBoss Enterprise BRMS enables non-technical staff to manage the business process in an organization without programming. JBoss Enterprise BRMS is an enterprise-ready open source business rules management system. When deployed with other JBoss Enterprise Middleware such as JBoss Enterprise SOA and Portal Platforms, Red Hat expects that organizations will be able to create better customer experiences by delivering up-to-date service, support, and products faster and with higher quality.
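The core idea, rules maintained as data separate from application code so a policy change needs no redeployment, can be illustrated with a toy sketch. This is not the JBoss/Drools rule language; the rule format and the discount figures below are invented for illustration:

```python
# Conceptual sketch of rules-as-data (not the Drools DRL language):
# each rule is a named condition/action pair kept outside application code.
def make_rule(name, condition, action):
    return {"name": name, "if": condition, "then": action}

# Rules a process owner could maintain in a repository, not in the app:
rules = [
    make_rule("loyalty-discount",
              lambda order: order["customer_years"] >= 5,
              lambda order: {**order, "discount": 0.10}),
    make_rule("bulk-discount",
              lambda order: order["quantity"] >= 100,
              lambda order: {**order, "discount": 0.15}),
]

def apply_rules(order, rules):
    """Fire every rule whose condition matches, in listed order."""
    for rule in rules:
        if rule["if"](order):
            order = rule["then"](order)
    return order

order = apply_rules({"customer_years": 6, "quantity": 20}, rules)
print(order["discount"])  # 0.1
```

Updating the loyalty threshold here means editing one rule entry, not touching every application that grants discounts, which is the agility argument the announcement makes.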

Key JBoss Enterprise BRMS features include: (1) Business Rules Engine: The rules engine is highly flexible and can be embedded within a wide variety of enterprise applications and business processes. Rules can be updated through a range of tools, from editors and spreadsheets to Eclipse and Web 2.0 business analyst friendly tools, enabling greater business agility. (2) Web 2.0 Rules Authoring and Management Tools: The Rich Internet Application (RIA) user interface provides a user-friendly environment for business rule authoring and editing, versioning and deployment management. Designed for business analysts and process owners to craft and update rule sets supporting their business processes, the tools extend the value delivered to a wider audience of enterprise stakeholders beyond the Java developers. (3) BRMS Repository: The repository enables version control of business rules artifacts, including fact models, enumerations, functions, domain specific languages (DSL) definitions, rules and tests. The BRMS repository provides the foundation for the Web 2.0 and other management tools that deliver business agility not possible with traditional enterprise and web application deployments.

See also: XML and Business Rules

AccesStream Releases Open Source Identity Access Management Solution
Staff, AccesStream Announcement

"AccesStream, a provider of open source security solutions, is pleased to announce the Version 1.0 release of its identity access management (IAM) solution to the open source community. Accesstream 1.0 is a next-generation IAM solution, developed with adherence to open standards in order to ensure interoperability and reduce implementation complexity. Its open architecture is designed to be modular, flexible, and scalable to accommodate environments of all sizes and configurations, allowing it to function as the main IAM solution in an enterprise environment or to provide very limited and specific functionality alongside an existing IAM solution. The goal of the AccesStream project is to provide: (1) Centralized profile management and storage for multiple software applications; (2) Single Sign-On with SAML; (3) Extensible access policies; (4) Storage for entitlements and other supplemental data that can be delivered to and used by secured applications after a user signs on; (5) Authentication gateway for single point of secure credential exchange; (6) ETL with scheduler for synchronization with external data sources and profile stores; (7) Alerts for critical policy violations; (8) Robust reporting, auditing and forensics; (9) Modular architecture encouraging extension through plugins...

Accesstream 1.0 GA delivers authentication, authorization, profile and policy administration and full Single Sign-On capabilities. The source code is available for immediate download from AccesStream's website. Lance Edelman, Co-Founder and CEO, said 'Accesstream 1.0 not only provides small and medium sized businesses who cannot afford existing proprietary solutions with an affordable solution, but it allows larger organizations who already have a solution to augment their IAM projects and avoid additional licensing costs. We are very pleased with our progress thus far and the reception our solution has received. We look forward to enhancing our solution and growing our customer base and our open source community.'... AccesStream is an Atlanta-based open source software company providing enterprise security and identity access management solutions. The objective of its project is to provide developers and enterprises with flexible, no-cost, open source solutions to address their identity access management needs.

See also: the AccesStream Wiki

Draft Charter for the W3C Device APIs and Policy Working Group
Dominique Hazael-Massieux, Thomas Roessler (et al), W3C Postings

Contributors to the W3C public list 'public-device-apis' have prepared a draft charter for the Device APIs and Policy Working Group. As proposed, the mission of this WG is to create client-side APIs that enable the development of Web Applications and Web Widgets that interact with device services such as Calendar, Contacts, Camera, etc. Additionally, the group will produce a framework for the expression of security policies that govern access to security-critical APIs.

From the initial WG Scope Statement: "The scope of this Working Group is the creation of API specifications for a device's services that can be exposed to Widgets and Web Applications. Devices in this context include desktop computers, laptop computers, mobile internet devices (MIDs), cellular phones, etc. The scope also includes defining a framework for the expression of security policies that govern access of Web Applications and Widgets to security-critical APIs. To achieve this goal, the group will need to deal with the following items: policy expression proper, identification of APIs and identification of Web Applications and Widgets. Among the principles that guide the policy framework are: (1) Before developing a new policy expression language, existing languages (such as XACML) should be reviewed for suitability; (2) The resulting policy model must be consistent with the existing same origin policies (as documented in the HTML5 specification), in the sense that a deployment of the policy model in Web browsers must be possible; (3) The work should not be specific to either mobile or desktop environments, but may take differences between the environments into account. Where practical, the API specifications should use the Web IDL formalism. Priority will be given to developing simple and consensual APIs, leaving more complex features to future versions. This Working Group's deliverables must address issues of accessibility, internationalization, mobility, and security. Additionally, comprehensive test suites will be developed for each specification to ensure interoperability, and the group will create interoperability reports. The group will also maintain errata as required for the continued relevance and usefulness of the specifications it produces..."
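The same-origin policy the charter must stay consistent with is simply equality of the (scheme, host, port) tuple. As a minimal sketch, here is that check plus a tiny allow-list layered on top; the allow-list format is invented for illustration and is not the WG's (as yet undefined) policy language:

```python
# Same-origin check per the HTML5 model, plus a hypothetical per-origin
# allow-list of device APIs (the POLICY format below is invented).
from urllib.parse import urlsplit

def origin(url):
    """An origin is the (scheme, host, port) tuple, with default ports filled in."""
    parts = urlsplit(url)
    port = parts.port or {"http": 80, "https": 443}.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

def same_origin(a, b):
    return origin(a) == origin(b)

# Hypothetical policy: which origin may call which device API.
POLICY = {("https", "widgets.example.org", 443): {"contacts", "calendar"}}

def allowed(url, api):
    return api in POLICY.get(origin(url), set())

print(same_origin("https://a.example/x", "https://a.example/y"))  # True
print(allowed("https://widgets.example.org/app", "camera"))       # False
```

Note that the scheme participates in the tuple, so an http page and an https page on the same host are different origins, which is why principle (2) constrains any device-API policy model deployed in browsers.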

See also: the W3C group public mailing list

SGIX: Smart Grid Information Exchange
Toby Considine,

The smart grid is one of the more exciting efforts of today, one that reaches beyond the utilities and into the buildings. The smart grid moves beyond traditional demand response to using direct system integration between the systems of the power grid and those in buildings. But what does this interface look like? Who will control the interaction, the building owner, or the utility? One challenge for the smart grid is that alternative energy will make the energy supply much less reliable than today. The sun and the wind are replacing reliable, predictable power sources. The building may get energy from the poorly maintained building systems next door rather than from the reliable generator in the next county. Those who operate the grid are rightfully concerned about quality of power and speed of performance when buildings respond to their requests to shed load, or when buildings place new stress on the grid by selling back power stored earlier or produced by uncertain means...

The smart grid roadmap is defining the realm of standards that will operate the smart grid and interact with the smart energy nodes on the edge of the grid. Smart energy nodes (meaning buildings) may generate and store energy as well as respond to the demand response signals from the grid. The smart grid will be based upon standards for improved interoperability for digital telemetry and real time operations. To meet the goals of the smart grid, we need to look ahead to the higher level business interoperability that will enable innovation and new markets. The smart grid is more than improved top down control; it is a grid ready for unreliable energy sources (such as wind, waves, and sun), distributed generation, and Net Zero Energy (NZE) buildings. NZE buildings sometimes buy energy, sometimes sell energy, and energy use balances out over the day, season, or year. The NZE building presents particular problems as it may switch from buying energy one minute to selling energy the next. Plug-in electric vehicles, whether hybrid or not, present challenges similar to those of NZE buildings, with the added complexity of mobility. The smart grid requires distributed decision making, distributed responsibility for reliability, and easy interoperability to integrate an ever-changing mix of technologies...
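The NZE idea of "balancing out over the day, season, or year" is easy to make concrete. The following toy sketch (with invented interval-meter readings and prices) nets a building's buy/sell intervals and settles them; it also shows why net-zero energy is not the same as a net-zero bill once prices differ by direction, which is exactly what dynamic-pricing signals are about:

```python
# Toy NZE settlement: positive kWh = bought from the grid, negative = sold
# back. Interval readings and prices below are invented for illustration.
def settle(intervals, buy_price, sell_price):
    """Return (net kWh, net bill) under simple per-kWh prices."""
    net_kwh = sum(intervals)
    cost = sum(kwh * buy_price for kwh in intervals if kwh > 0)
    credit = sum(-kwh * sell_price for kwh in intervals if kwh < 0)
    return net_kwh, cost - credit

day = [5.0, 3.5, -2.0, -4.5, -3.0, 1.0]   # hypothetical hourly readings
net, bill = settle(day, buy_price=0.12, sell_price=0.10)
print(net)            # 0.0 -> net zero energy over the period
print(round(bill, 2)) # 0.19 -> yet not a zero bill when prices differ
```

A building that is net zero over the day still transacts in both directions minute by minute, which is the operational problem the grid operators quoted above are worried about.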

Alex Levinson of Lockheed Martin has named the suite of standards we will need for the smart grid Smart Grid Information Exchange (SGIX). What follows is a straw man view of SGIX, including information about some new standards that are just underway. These interfaces will use the semantics of the CIM while applying the e-commerce disciplines of symmetry, transparency, and composition. (1) SG-Energy Interoperation: OpenADR is a tested specification for achieving automated demand response to meet the needs of the regulated utilities in California. The work done on OpenADR at the University of California's Lawrence Berkeley National Laboratory is being developed for possible use as a national standard in OASIS. (2) SG-Energy Market Information Exchange: The charter for Energy Market Information Exchange (EMIX) is in pre-public circulation and is drawing interest from the market makers of the grid, the ISOs (independent system operators) and RTOs (regional transmission organizations), as well as building system integrators. Energy Market Information Exchange will be chartered to produce an XML vocabulary for exchanging price and energy characteristics (hydro, hard coal, nuclear, wind, etc., with a place for carbon information) to facilitate energy markets and device understanding/communication of so-called Real Time Pricing or Dynamic Pricing...


XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation
Microsoft Corporation
Oracle Corporation
Sun Microsystems, Inc.
