Last modified: November 18, 2010
XML Daily Newslink. Thursday, 18 November 2010

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
ISIS Papyrus

Will Security Issues Stifle Smart Grid Investment?
Larry Karisny, MuniWireless

"With billions of dollars of public and private smart grid investment in place and billions more in forecasted network hardware and software shipments, will enthusiasm for the smart grid be dampened by security concerns? Current smart meter deployment trends and reported security breaches point towards that possibility. A recent Pike Research report entitled 'Smart Grid: 10 Trends to Watch 2011 and Beyond' maintains that security will become the top smart grid concern."

The Pike Research report 'Smart Meter Security' "assesses in considerable detail the security risks to Smart Metering, using ISO27002:2005 as a baseline to identify topics for consideration. The study reviews Smart Metering against all 11 security clauses of ISO 27002:2005 to identify six key security opportunities including event correlation improvements, security software on meters, identity management and authorization, network resiliency, meter worm prevention, and end-to-end data encryption. It includes an in-depth examination of the market issues and technology issues related to smart meter security, along with market forecasts for key world regions through 2015."

"Grid Net has just released a white paper entitled 'Assuring a Secure Smart Grid'... Applying over 40 standards, Grid Net takes a 'multi-layer' approach to smart grid security. The core architecture delivers an end-to-end secure solution, which begins with PolicyNet SmartNOS and Smart Grid devices (smart meters, routers, inverters, and customer devices), proceeds to data encryption for both data storage and data transport on the network, and concludes with PolicyNet SmartGrid NMS at the Utility NOC.

Security solutions may differ, but the clear message in the smart grid is to get effective security deployed and get it deployed now. With billions of dollars in deployments on hold, there must be a concerted effort to fund immediate, short term and long term security solutions for the smart grid or the smart grid is not going to get smart anytime soon..."

See also: Pike Research's Smart Meter Security      [TOC]

Deprecating XML: Not the Darling of Web API Designers
Norm Walsh, Blog

"Someone asked me recently what I thought about XML being removed from the Twitter streaming API. Around the same time, I heard that Foursquare are also moving to a JSON-only API. As an unrepentant XML fan, here's the full extent of my reaction: 'Meh'. If all you want to pass around are atomic values or lists or hashes of atomic values, JSON has many of the advantages of XML: it's straightforwardly usable over the Internet, supports a wide variety of applications, it's easy to write programs to process JSON, it has few optional features, it's human-legible and reasonably clear, its design is formal and concise, JSON documents are easy to create, and it uses Unicode.

If you're writing JavaScript in a web browser, JSON is a natural fit. The XML APIs in the browser are comparatively clumsy and the natural mapping from JavaScript objects to JSON eliminates the serialization issues that arise if you're careless with XML. One line of argument for JSON over XML is simplicity. If you mean it's simpler to have a single data interchange format instead of two, that's incontrovertibly the case. If you mean JSON is intrinsically simpler than XML, well, I'm not sure that's so obvious. For bundles of atomic values, it's a little simpler. And the JavaScript APIs are definitely simpler. But I've seen attempts to represent mixed content in JSON and simple they aren't. In short, if all you need are bundles of atomic values and especially if you expect to exchange data with JavaScript, JSON is the obvious choice. I don't lose any sleep over that...

XML wasn't designed to solve the problem of transmitting structured bundles of atomic values. XML was designed to solve the problem of unstructured data. In a word or two: mixed content. XML deals remarkably well with the full richness of unstructured data. I'm not worried about the future of XML at all even if its death is gleefully celebrated by a cadre of web API designers... I look forward to seeing what the JSON folks do when they are asked to develop richer APIs. When they want to exchange less well structured data, will they shoehorn it into JSON? I see occasional mentions of a schema language for JSON; will other languages follow? I predict there will come a day when someone wants to federate JSON data across several application domains. I wonder, when they discover that the key 'width' means different things to different constituencies, will they invent namespaces too?
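Walsh's mixed-content point is easy to demonstrate. The Python sketch below (not from the article; the JSON encoding of the paragraph is an invented convention) contrasts a bundle of atomic values, where JSON is a natural fit, with one plausible JSON rendering of a mixed-content paragraph:

```python
import json
import xml.etree.ElementTree as ET

# A bundle of atomic values: JSON is a natural fit.
profile = {"name": "Alice", "followers": 1024, "tags": ["xml", "json"]}
print(json.dumps(profile))

# Mixed content: character data interleaved with markup at arbitrary
# points. XML represents this directly.
para = ET.fromstring('<p>JSON is <em>fine</em> for records, '
                     '<em>awkward</em> for prose.</p>')

# One possible JSON encoding of the same paragraph: an ordered list of
# alternating strings and typed objects. It works, but it is no longer
# "simple" -- every consumer must agree on this ad hoc convention.
para_as_json = ["JSON is ", {"em": "fine"}, " for records, ",
                {"em": "awkward"}, " for prose."]
print(json.dumps(para_as_json))
```

The ad hoc list-of-alternating-items scheme is exactly the kind of reinvention Walsh anticipates: each API that needs mixed content in JSON tends to invent its own.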

In the meantime, I'll continue to model the full and rich complexity of data that crosses my path with XML, and bring a broad arsenal of powerful tools to bear when I need to process it, easily and efficiently extracting value from all of its richness. I'll send JSON to the browser when it's convenient and I'll map the output of JSON web APIs into XML when it's convenient... JSON vs. XML? Meh..." [Note the response of James Clark on 2010-11-24.]

See also: James Clark on XML and JSON (2007)      [TOC]

First IETF Draft: Database of Long-Lived Symmetric Cryptographic Keys
Russell Housley and Tim Polk (eds), IETF Internet Draft

A first public IETF Internet Draft has been published for the Standards Track specification "Database of Long-Lived Symmetric Cryptographic Keys." The document "specifies the information contained in a database of long-lived cryptographic keys used by many different security protocols. The database design supports both manual and automated key management. In many instances, the security protocols do not directly use the long-lived key, but rather a key derivation function is used to derive a short-lived key from a long-lived key."

"The conceptual database proposed in this specification is designed to support both manual key management and automated key management. The intent is to allow many different implementation approaches to the specified cryptographic key database. Security protocols such as TCP-AO are expected to use an application program interface (API) to select a long-lived key from the database. In many instances, the long-lived keys are not used directly in security protocols, but rather a key derivation function is used to derive a short-lived key from the long-lived keys in the database. In other instances, security protocols will directly use the long-lived key from the database. The database design supports both use cases.

The database is characterized as a table, where each row represents a single long-lived symmetric cryptographic key. Each key should only have one row; however, in the (hopefully) very rare cases where the same key is used for more than one purpose, multiple rows will contain the same key value. The columns in the table represent the key value and attributes of the key. To accommodate manual key management, the formatting of the fields has been purposefully chosen to allow updates with a plain-text editor: LocalKeyID, PeerKeyID, Peers, Interfaces, Protocol, KDF, KDFInputs, AlgID, Key (a hexadecimal string representing a long-lived symmetric cryptographic key), Direction, NotBefore, NotAfter...
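The draft mandates no concrete file syntax, only that fields be editable as plain text. As a minimal illustration, the Python sketch below parses a hypothetical one-row CSV rendering of the table using the column names listed above (the CSV layout and every field value are fabricated examples, not taken from the draft):

```python
import csv
import io

# Column names as listed in the draft.
COLUMNS = ["LocalKeyID", "PeerKeyID", "Peers", "Interfaces", "Protocol",
           "KDF", "KDFInputs", "AlgID", "Key", "Direction",
           "NotBefore", "NotAfter"]

# A one-row table in a text-editor-friendly layout (illustrative only;
# KDFInputs is left empty here).
TABLE = ("0001,0001,198.51.100.7,eth0,TCP-AO,HKDF-SHA256,,"
         "HMAC-SHA-1-96,9f86d081884c7d65,both,"
         "2010-11-01T00:00:00,2011-02-01T00:00:00\n")

def load_keys(text):
    """Parse the plain-text table into one dict per long-lived key."""
    return [dict(zip(COLUMNS, row)) for row in csv.reader(io.StringIO(text))]

keys = load_keys(TABLE)
print(keys[0]["Protocol"], keys[0]["AlgID"])
```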

If usage periods for long-lived keys do not overlap and system clocks are inconsistent, it is possible to construct scenarios where systems cannot agree upon a long-lived key. When installing a series of keys to be used one after the other (sometimes called a key chain), operators should configure the NotAfter field of the preceding key to be several days after the NotBefore field of the subsequent key to ensure that clock skew is not a concern. For group keys, the most significant bit in LocalKeyID must be set to one. Collisions among group key identifiers can be avoided by subdividing the remaining 15 bits of the LocalKeyID field into an identifier of the group key generator and an identifier assigned by that generator..."
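The two rules above, overlapping NotBefore/NotAfter periods to absorb clock skew and the group-key bit in LocalKeyID, can be sketched in a few lines of Python (a hand-rolled illustration; the function names and the three-day margin are assumptions, not normative values from the draft):

```python
from datetime import datetime, timedelta

def chain_overlap_ok(preceding_not_after, subsequent_not_before,
                     margin=timedelta(days=3)):
    # The draft advises setting the preceding key's NotAfter several days
    # after the subsequent key's NotBefore so that clock skew between
    # systems cannot leave them without an agreed-upon key.
    return preceding_not_after >= subsequent_not_before + margin

def is_group_key(local_key_id):
    # Group keys set the most significant bit of the 16-bit LocalKeyID;
    # the remaining 15 bits identify the group key generator and the
    # identifier that generator assigned.
    return bool(local_key_id & 0x8000)

print(chain_overlap_ok(datetime(2011, 2, 4), datetime(2011, 2, 1)))
print(is_group_key(0x8001), is_group_key(0x0001))
```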

See also: the IETF Provisioning of Symmetric Keys (KEYPROV) Working Group      [TOC]

W3C First Public Draft: A Direct Mapping of Relational Data to RDF
Marcelo Arenas, Eric Prud'hommeaux, Juan Sequeda (eds), W3C Technical Report

Members of the W3C RDB2RDF Working Group have published a First Public Working Draft for "A Direct Mapping of Relational Data to RDF." The document "includes the RDF Schema that can be used to specify a mapping of relational data to RDF. The structure of this document will change based upon future decisions taken by the W3C RDB2RDF Working Group. The Working Group is also working on a document that will define a default mapping from relational databases to RDF. The Working Group hopes to publish the default mapping document shortly."

Overview: "The need to share data with collaborators motivates custodians and users of relational databases (RDB) to expose relational data on the Web of Data. This document defines a direct mapping from relational data to RDF. This definition provides extension points for refinements within and outside of this document.

Relational databases proliferate both because of their efficiency and their precise definitions, allowing for tools like SQL to manipulate and examine the contents predictably and efficiently. Resource Description Framework (RDF) is a data format based on a web-scalable architecture for identification and interpretation of terms. This document defines a mapping from relational representation to an RDF representation.

Strategies for mapping relational data to RDF abound. The direct mapping defines a simple transformation, providing a basis for defining and comparing more intricate transformations. This document includes an informal and a formal description of the transformation. The Direct Mapping is intended to provide a default behavior for R2RML: RDB to RDF Mapping Language. It can be also used to materialize RDF graphs or define virtual graphs, which can be queried by SPARQL or traversed by an RDF graph API..."
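To give a flavor of what a direct mapping produces, the Python sketch below turns table rows into triples using a Table/pkey=value subject pattern and Table#column predicates (the base IRI, the sample data, and the untyped string literals are simplifying assumptions; the Working Draft defines the actual rules):

```python
from urllib.parse import quote

BASE = "http://example.com/db/"  # base IRI chosen for illustration

def direct_map(table, pkey, rows):
    """Emit one subject per row and one triple per column, following a
    Table/pkey=value subject and Table#column predicate pattern."""
    triples = []
    for row in rows:
        subject = "<%s%s/%s=%s>" % (BASE, table, pkey, quote(str(row[pkey])))
        for col, val in row.items():
            triples.append((subject,
                            "<%s%s#%s>" % (BASE, table, col),
                            '"%s"' % val))
    return triples

people = [{"ID": 7, "fname": "Bob", "city": "Cambridge"}]
for s, p, o in direct_map("People", "ID", people):
    print(s, p, o, ".")
```

Each row becomes one RDF node and each cell one triple, which is exactly the "simple transformation" the draft describes as a baseline for more intricate mappings such as R2RML.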

See also: the W3C RDB2RDF Working Group      [TOC]

OASIS Review: SAML v2.0 Metadata Profile for Algorithm Support Version 1.0
Scott Cantor (ed), OASIS Public Review Draft

Members of the OASIS Security Services (SAML) Technical Committee have released an approved Committee Specification Draft of SAML v2.0 Metadata Profile for Algorithm Support Version 1.0 for public review through December 03, 2010. Changes in this revision include: (a) addition of processContents="lax" to the wildcards in the schema; (b) correction of the example in non-normative Section 2.7...

The SAML V2.0 Metadata specification includes an 'md:EncryptionMethod' markup element intended to communicate the XML Encryption algorithms supported for use with the key described by a containing 'md:KeyDescriptor' element. The use of this element is not completely defined by the original specification, and there is no comparable support for communicating the XML Signature algorithms supported by an entity. This profile addresses both considerations to improve algorithm agility and interoperability for deployments that make use of metadata.

There are more general standards for the description of security requirements of communicating endpoints, such as 'WS-SecurityPolicy'. This specification is not intended as a replacement for such mechanisms, but is directed at systems with fewer requirements that are already designed around SAML V2.0 Metadata.

One of the interoperability challenges in large-scale, and long-term, SAML deployments is the selection of XML Signature and XML Encryption algorithms at runtime when communicating with peer entities. In particular, accounting for software limitations that prevent support of newer algorithms, while supporting those algorithms where possible to gradually strengthen systems, is difficult to manage without knowledge of a peer's capabilities. This profile makes use of SAML metadata to enable deployments to document their algorithm capabilities and preferences. It also allows for future expansion to address the interoperability requirements of more complex algorithms..."
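In practice, a relying party could read a peer's md:EncryptionMethod elements from metadata and pick the first mutually supported algorithm. The Python sketch below illustrates that negotiation with stdlib XML parsing (the metadata fragment and the locally supported set are invented examples; the profile defines the normative processing rules):

```python
import xml.etree.ElementTree as ET

MD = "urn:oasis:names:tc:SAML:2.0:metadata"
XENC = "http://www.w3.org/2001/04/xmlenc#"

# Minimal metadata fragment; the algorithms listed are illustrative.
metadata = """
<md:KeyDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
                  use="encryption">
  <md:EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#aes256-cbc"/>
  <md:EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#aes128-cbc"/>
</md:KeyDescriptor>
"""

def peer_algorithms(xml_text):
    """List the peer's advertised algorithms in document order."""
    root = ET.fromstring(xml_text)
    return [e.get("Algorithm")
            for e in root.iter("{%s}EncryptionMethod" % MD)]

def choose(peer_prefs, locally_supported):
    """Honour the peer's order of preference, constrained to local support."""
    for alg in peer_prefs:
        if alg in locally_supported:
            return alg
    return None

supported = {XENC + "aes128-cbc"}  # e.g. older software without AES-256
print(choose(peer_algorithms(metadata), supported))
```

This is the scenario the profile targets: the peer prefers AES-256, but a deployment stuck on older software can still discover a workable common algorithm instead of failing at runtime.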

See also: the OASIS Security Services (SAML) TC      [TOC]

Last Call Public Review for IETF HTTP State Management Mechanism
Adam Barth (ed), IETF Internet Draft

The Internet Engineering Steering Group (IESG) has received a request from the IETF HTTP State Management Mechanism (httpstate) Working Group to consider the Standards Track specification HTTP State Management Mechanism as an IETF Proposed Standard. The IESG plans to make a decision in the next few weeks, and solicits final comments on this action; please send substantive comments to the IETF mailing lists by 2010-12-02.

From the Abstract: "This document defines the HTTP Cookie and Set-Cookie header fields. These header fields can be used by HTTP servers to store state (called cookies) at HTTP user agents, letting the servers maintain a stateful session over the mostly stateless HTTP protocol. Although cookies have many historical infelicities that degrade their security and privacy, the Cookie and Set-Cookie header fields are widely used on the Internet."

Overview: "This document defines the HTTP Cookie and Set-Cookie header fields. Using the Set-Cookie header field, an HTTP server can pass name/value pairs and associated metadata (called cookies) to a user agent. When the user agent makes subsequent requests to the server, the user agent uses the metadata and other information to determine whether to return the name/value pairs in the Cookie header. Although simple on their surface, cookies have a number of complexities. For example, the server indicates a scope for each cookie when sending it to the user agent. The scope indicates the maximum amount of time the user agent should return the cookie, the servers to which the user agent should return the cookie, and the URI schemes for which the cookie is applicable.
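The scope decision described above can be caricatured in a few lines. This Python sketch is a deliberate simplification (real domain matching and expiry handling in the draft are stricter); it checks the usage period, the Secure attribute, and domain/path scope before a cookie is returned:

```python
from datetime import datetime, timezone

def should_send(cookie, host, path, scheme, now):
    """Decide whether a user agent returns this cookie on a request.
    Simplified scope check, not the draft's full algorithm."""
    if now > cookie["expires"]:
        return False                        # past its usage period
    if cookie["secure"] and scheme != "https":
        return False                        # Secure: https-only
    if host != cookie["domain"] and not host.endswith("." + cookie["domain"]):
        return False                        # outside the domain scope
    return path.startswith(cookie["path"])  # path scope

cookie = {"name": "SID", "value": "31d4d96e", "domain": "example.com",
          "path": "/", "secure": True,
          "expires": datetime(2011, 1, 1, tzinfo=timezone.utc)}
now = datetime(2010, 11, 18, tzinfo=timezone.utc)
print(should_send(cookie, "www.example.com", "/account", "https", now))
```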

For historical reasons, cookies contain a number of security and privacy infelicities. For example, a server can indicate that a given cookie is intended for "secure" connections, but the Secure attribute does not provide integrity in the presence of an active network attacker. Similarly, cookies for a given host are shared across all the ports on that host, even though the usual "same-origin policy" used by web browsers isolates content retrieved via different ports. Prior to this document, there were at least three descriptions of cookies; however, none of these documents describe how the Cookie and Set-Cookie headers are actually used on the Internet. This document attempts to specify the syntax and semantics of these headers as they are actually used on the Internet..."

See also: the IETF HTTP State Management Mechanism (httpstate) WG      [TOC]

Riverbed Boosts Cloud Services: Devices for Public Cloud Networks
Tim Greene, Network World

"Riverbed is launching two new products that can make cloud services faster by accelerating traffic and also more economical by reducing corporate data center and storage costs. The first product, called Cloud Steelhead, is a version of Riverbed's Steelhead WAN acceleration appliance with add-ons that make it possible to quickly optimize traffic to and from cloud providers' data centers without requiring reconfiguration of corporate or provider networks.

The second product, called Whitewater, is designed for use with cloud backup and archiving providers and translates storage protocols between corporate data centers and cloud-storage providers' data centers... Whitewater is compatible with EMC Atmos, AT&T Synaptic Storage as a Service and Amazon S3..."

From the Riverbed announcement: "To enable organizations to meet data protection requirements, the Whitewater appliance will leverage an innovative key management system that allows enterprises to carefully manage data security and at the same time allows flexibility to restore into any location. By encrypting data on-site, in-flight, as well as in the cloud using 256-bit AES encryption and SSL v3, the Whitewater appliance provides a dual layer of encryption that ensures that any data moved into the cloud is not compromised, and it creates a complete end-to-end security solution for cloud storage...

The Whitewater appliance will offer enterprises secure, accelerated DR in the cloud without forcing them to change their current storage systems. To take advantage of cloud storage in its current state, many backup applications would have to be rewritten. The Whitewater appliance will simply act as a target for an organization's current backup software, requiring no expensive integration or complex configuration. Customers have been able to set up the Whitewater appliance and start moving data to the cloud in a few hours, compared to setting up tape infrastructure which can take days..."

See also: the Riverbed announcement      [TOC]

Provisioning: The Shifting Sands of a Hell-Raising Technology
John Fontana, Ping Talk Blog

"Provisioning, born of promise but raising hell ever since, is in a transition phase that hopefully accentuates the good, incorporates the new and leaves behind the bad... At the Gartner Identity and Access Management Summit in San Diego, Lori Rowland picked at nagging provisioning legacies, detailed changes brought by regulations such as Sarbanes Oxley, explained evolutions such as identity and access governance (IAG) and looked ahead to the cloud.

The cloud is where provisioning, federated to cloud-based apps, should be playing a significant role in adoption, but today provisioning is a work in transition. In fact, IT's sore chapters in provisioning's history (namely connectors) are being recreated in the cloud, a development Rowland calls 'frightening'. 'Stop the connector madness whenever possible, especially out to the cloud,' Rowland said. 'Right now cloud vendors have their own APIs and we are again building proprietary connectors.' She admitted connectors will not go away completely, then outlined how provisioning has changed and what the alternatives are now.

Rowland says the 'push' model, which provisions user accounts to an application, must be replaced by transaction-based authorizations that 'pull' data from systems like virtual repositories (such as those from UnboundID or Radiant Logic) and deliver it to applications... The pull model is a popular idea; however, it is not universally accepted. In the pull model, which is contextual and operates in real time, data delivery can be accomplished using established federation protocols such as SAML, along with authorization and policy tools based on XACML from vendors such as Axiomatics, which this week inked a major deal with PayPal.

Other standard pieces that might get a look include the Service Provisioning Markup Language (SPML). The standards group OASIS recently rescued the spec from death, but the group has yet to make any meaningful changes. Rowland heaped much of the blame for SPML's churn on vendors, but added that the technology is not well suited for federated provisioning..."

See also: a draft revised charter for the OASIS SPML TC      [TOC]

The Smart-Fat and Smart-Thin Edge of the Network
Jon Oltsik, Enterprise Strategy Group Blog 'Insecure About Security'

"Take a look at ESG Research and you'll see a number of simultaneous trends. Enterprises are consolidating data centers, packing them full of virtual servers, and hosting more and more web applications within them. This means massive traffic coming into and leaving data centers.

Yes, this traffic needs to be switched and routed, but this is actually the easiest task. What's much harder is processing this traffic at the network for security, acceleration, application networking, etc. This processing usually takes place at the network edge, but additional layers are also migrating into the data center network itself for network segmentation of specific application services. Think of it this way: There is a smart-fat network edge that feeds multiple smart-thin network segments...

The smart-fat network edge aggregates lots of network device functionality into a physical device, cluster of devices, or virtual control plane. This is the domain of vendors like Cisco, Crossbeam Systems, and Juniper Networks for security and companies like A10 Networks, Citrix (Netscaler), and F5 Networks for application delivery. These companies will continue to add functionality to their systems (for example, XML processing, application authentication/authorization, business logic, etc.) to do more packet and content processing over time. It wouldn't surprise me at all if security vendors added application delivery features and the app delivery crowd added more security...

The smart-fat, smart-thin architecture is already playing out in cloud computing and wireless carrier networks today and I expect it to become mainstream in the enterprise segment over the next 24 months..."

See also: the blog Insecure About Security      [TOC]


XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation
ISIS Papyrus
Microsoft Corporation
Oracle Corporation

Hosted By
OASIS - Organization for the Advancement of Structured Information Standards
