The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: February 17, 2009
XML Daily Newslink. Tuesday, 17 February 2009

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:

SmartGrid Community Collaborates on Draft Charter for OASIS Energy Interoperation TC
William Cox, Posting to OASIS 'smartgrid-discuss' List

A charter proposal for an OASIS Energy Interoperation Technical Committee was posted to 'smartgrid-discuss' for discussion and review. The intent of the drafting group is to revise this draft after a comment period, and then submit the revised charter to the OASIS Technical Committee process. As proposed, the TC will develop a data model and communication model to enable collaborative and transactive use of energy. Web services definitions, service definitions consistent with the OASIS SOA Reference Model, and XML vocabularies will be developed for interoperable and standard exchange of dynamic price signals, reliability signals, emergency signals, communication of market participation information such as bids, and load predictability and generation information. As energy use and peak demand increase, the supply side, namely delivery and generation infrastructure, has not kept pace. There have typically been limited high-demand periods (on the order of ten days per year, and for only a portion of each of those days). This presents opportunities to shift energy use to times of lower demand and also to reduce use during peak periods so that the existing infrastructure will suffice. This shifting and reduction can reduce the need for new power plants and transmission and distribution systems and, through greater economic efficiency, reduce costs to energy consumers. This is often called Demand Response (DR) or demand shaping... The core work of the TC is defining XML and Web services interactions for so-called Automated Demand Response, growing out of work at the Lawrence Berkeley National Laboratory Demand Response Research Center led by Mary Ann Piette, who is the convener of the proposed TC. This specific proposal comes from the context of many discussions in and related to the OpenADR Technical Advisory Group, GridWise Architecture Council, Grid-Interop, the NIST Smart Grid project, GridEcon, and many other places.
The LBNL OpenADR body of work is being extended through two entities now being created: this proposed OASIS Technical Committee and a proposed UCAIug OpenADR Task Force. In this innovative collaboration, requirements, goals, data models, and comments will be channeled through the UCAIug, whose members are largely utilities and their suppliers, also involving market makers, independent system operators, and policy and regulatory groups.

See also: the associated posting

Revised Internet Draft: Link-based Resource Descriptor Discovery
Eran Hammer-Lahav (ed), IETF Internet Draft

The editor of the "Link-based Resource Descriptor Discovery" Internet Draft has released an updated version, where "except for Appendix B, the rest of the specification was significantly changed and a fresh read is recommended." This specification describes a process for obtaining information about a resource identified by a URI. The 'information about a resource', a resource descriptor, provides machine-readable information that aims to increase interoperability and enhance interaction with the resource. This memo only defines the process for locating and obtaining the descriptor, but leaves the descriptor format and its interpretation out of scope. Resource descriptors are documents (usually based on well-known serialization languages such as XML, RDF, and JSON) which provide machine-readable information about resources (resource metadata) for the purpose of promoting interoperability and assisting in interacting with unknown resources that support known interfaces. While many methods provide the ability to link a resource to its metadata, none of these methods fully addresses the requirements of a uniform and easily implementable process. These requirements include the ability for resources to self-declare the location of their descriptors, the ability to access descriptors directly without interacting with the resource, and support for a wide range of platforms and scales of deployment. They must also be fully compliant with existing web protocols and support extensibility. These requirements, and the analysis used as the basis for this memo, are explained in detail in Appendix B. For example, a web page about an upcoming meeting can provide in its descriptor document the location of the meeting organizer's free/busy information to potentially negotiate a different time. A social network profile page descriptor can identify the location of the user's address book as well as accounts on other sites.
A web service implementing an API with optional components can advertise which of these are supported. This memo describes the first step in the discovery process, in which the resource descriptor document is located and retrieved... Discovery can be performed before, after, or without obtaining a representation of the resource. Performing discovery ahead of accessing a representation allows the client not to rely on assumptions about the properties of the resource. Performing discovery after a representation has been obtained enables further interaction with it. Given the wide range of 'information about a resource', no single descriptor format can adequately accommodate such scope. However, there is great value in making the process of locating the descriptor uniform across formats. While HTTP is the most common protocol used in association with discovery and is explicitly specified in this memo, other protocols may be used... Discussion of this document takes place on the public '' mailing list.
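As an illustration of the link-based mechanism, the sketch below locates a descriptor URI carried in an HTTP Link header. This is a minimal sketch under stated assumptions: the 'describedby' relation name and the header layout follow common link-based discovery practice, but the draft itself defines the normative discovery methods and relation types.

```python
# Minimal sketch: find a resource descriptor URI in an HTTP Link header.
# The "describedby" relation used below is illustrative; the draft defines
# the normative relation types and discovery order.
import re

def parse_link_header(value):
    """Parse a Link header into a list of (target-URI, params) pairs."""
    links = []
    for part in re.split(r',\s*(?=<)', value):
        m = re.match(r'<([^>]*)>\s*(.*)', part)
        if not m:
            continue  # skip malformed entries without a <target>
        target, rest = m.group(1), m.group(2)
        params = dict(re.findall(r';\s*(\w+)\s*=\s*"?([^";]+)"?', rest))
        links.append((target, params))
    return links

def descriptor_uri(link_header):
    """Return the first link target whose rel is 'describedby', if any."""
    for target, params in parse_link_header(link_header):
        if params.get('rel') == 'describedby':
            return target
    return None

header = '<http://example.com/meta.xrd>; rel="describedby"; type="application/xrd+xml"'
print(descriptor_uri(header))  # http://example.com/meta.xrd
```

In line with the memo's point that discovery can happen before retrieving a representation, a client could issue a HEAD request, inspect the Link header this way, and fetch the descriptor directly.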

See also: the associated posting

Proposed Recommendation and Call for Review: Service Modeling Language 1.1
Bhalchandra Pandit, Valentina Popescu, Virginia Smith (eds), W3C Technical Report

W3C announced the publication of "Service Modeling Language, Version 1.1" and "Service Modeling Language Interchange Format Version 1.1" as Proposed Recommendations, together with a call for implementations. Comments are welcome through 12-March-2009. W3C publishes a technical report as a Proposed Recommendation to indicate that the document is a mature technical report that has received wide review for technical soundness and implementability, and to request final endorsement from the W3C Advisory Committee. The Service Modeling Language (SML) provides a rich set of constructs for creating models of complex services and systems. Depending on the application domain, these models may include information such as configuration, deployment, monitoring, policy, health, capacity planning, target operating range, service level agreements, and so on. A model in SML is realized as a set of interrelated XML documents. The XML documents contain information about the parts of a service, as well as the constraints that each part must satisfy for the service to function properly. Constraints are captured in two ways: (1) Schemas: these are constraints on the structure and content of the documents in a model. SML uses XML Schema (Structures, Datatypes) as the schema language. In addition, SML defines a set of extensions to XML Schema to support references that may cross document boundaries. (2) Rules: Boolean expressions that constrain the structure and content of documents in a model. SML uses Schematron (ISO/IEC 19757-3) and XPath for rules. One of the important operations on a model is to establish its validity. This involves checking whether all data in the model satisfies the declared schemas and rules... The SML-IF Proposed Recommendation defines the interchange format for Service Modeling Language, Version 1.1 (SML) models.
This format identifies the model being interchanged, distinguishes between model definition documents and model instance documents, and defines the binding of rule documents with other documents in the interchange model. To ensure accurate and convenient interchange of the documents that make up an SML model, it is useful to define both an implementation-neutral interchange format that preserves the content and interrelationships among the documents and a constrained form of SML model validation.
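The schema-plus-rules split can be illustrated with a toy rule checker. This is a hedged sketch, not SML itself: real SML rules use Schematron (ISO/IEC 19757-3) with full XPath, whereas ElementTree's limited XPath subset is used here only to show the shape of rule validation (a context expression selects elements; an assertion must hold for each one).

```python
# Toy illustration of SML-style rule checking: each rule pairs a context
# (XPath selecting the elements the rule applies to) with an assertion
# (a path that must match within each selected element). Real SML uses
# Schematron and full XPath; this sketch only shows the overall shape.
import xml.etree.ElementTree as ET

def check_rules(doc, rules):
    """Return one failure message per element that violates an assertion."""
    failures = []
    for context, assertion, message in rules:
        for elem in doc.findall(context):
            if elem.find(assertion) is None:
                failures.append(message)
    return failures

model = ET.fromstring(
    "<service>"
    "<endpoint><uri>http://example.org/a</uri></endpoint>"
    "<endpoint/>"
    "</service>")

# Rule: every endpoint must declare a uri child element.
rules = [(".//endpoint", "uri", "endpoint is missing a uri")]
print(check_rules(model, rules))  # ['endpoint is missing a uri']
```

Establishing a model's validity, as the article describes, amounts to running every document through both its schemas and a rule pass like this one.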

See also: the W3C Service Modeling Language (SML) Working Group

Updated Working Draft for HTML 5 Differences from HTML 4
Anne van Kesteren (ed), W3C Technical Report

W3C announced the publication of updated Working Drafts for "HTML 5 Differences from HTML 4" and "HTML 5: A Vocabulary and Associated APIs for HTML and XHTML." In this version of HTML5, new features have been introduced to help Web application authors, new elements are introduced based on research into prevailing authoring practices, and special attention has been given to defining clear conformance criteria for user agents in an effort to improve interoperability. HTML has been in continuous evolution since it was introduced to the Internet in the early 1990s. Some features were introduced in specifications; others were introduced in software releases. In some respects, implementations and author practices have converged with each other and with specifications and standards, but in other ways, they continue to diverge. HTML 4 became a W3C Recommendation in 1997. While it continues to serve as a rough guide to many of the core features of HTML, it does not provide enough information to build implementations that interoperate with each other and, more importantly, with a critical mass of deployed content. The same goes for XHTML 1, which defines an XML serialization for HTML 4, and DOM Level 2 HTML, which defines JavaScript APIs for both HTML and XHTML. HTML 5 will replace these documents. The HTML 5 draft reflects an effort, started in 2004, to study contemporary HTML implementations and deployed content. The HTML 5 working draft: (1) Defines a single language called HTML 5 which can be written in a "custom" HTML syntax and in XML syntax; (2) Defines detailed processing models to foster interoperable implementations; (3) Improves markup for documents; (4) Introduces markup and APIs for emerging idioms, such as Web applications. HTML 5 is defined in a way that it is backwards compatible with the way user agents handle deployed content.
To keep the authoring language relatively simple for authors, several elements and attributes are not included, as outlined in the other sections of this document, such as presentational elements that are better dealt with using CSS. User agents, however, will always have to support these older elements, and this is why the specification clearly separates requirements for authors and user agents. This means that authors cannot use the isindex or plaintext elements, but user agents are required to support them in a way that is compatible with how these elements need to behave for compatibility with deployed content. Since HTML 5 has separate conformance requirements for authors and user agents, there is no longer a need for marking things "deprecated".
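As a small, non-normative illustration of the author/user-agent split described above, presentational markup that HTML 5 removes for authors is expressed with CSS instead:

```html
<!-- Authors: instead of presentational markup such as
     <font color="red">Overdue</font>, style with CSS. -->
<style>
  .overdue { color: red; }
</style>
<p class="overdue">Overdue</p>
<!-- User agents must still render legacy <font> markup found in
     deployed content, even though authors may no longer produce it. -->
```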

See also: the HTML 5 Working Draft

Attention Request (POKE) for Instant Messaging
Gustavo Garcia and Jose-Luis Martin (eds), IETF Internet Draft

Members of the IETF SIP for Instant Messaging and Presence Leveraging Extensions (SIMPLE) Working Group have published an updated Internet Draft for "Attention Request (POKE) for Instant Messaging." Section 3 ('XML Document Format') presents the XML schema. "Some existing messaging platforms include the capability to send a message to a user requesting his attention (e.g. XMPP). This feature is usually known as poke, nudge or buzz, and in desktop applications the notification is usually implemented using a combination of sound and the vibration of chat windows. This document describes the XML message format to encode this attention request. This message can be used inside an instant messaging session (for example a MSRP session) or as a standalone message (for example in a SIP MESSAGE). In session mode, the poke message is sent as part of the messaging stream and its usage is negotiated just like any other media type in that stream, with details depending on the session mode protocol. The receiver of this message can present it to the user in different ways depending on the device capabilities and the user preferences. The message format does not include support to specify sender preferences for the realization of the attention request... The only XML element of the message is the poke element. This is the root element of the message and doesn't define any additional attribute. The XML schema should be consulted for the normative message format. In order to include additional functionality, the XML schema can be extended in future documents. Additional elements must use their own namespaces and must be designed such that receivers can safely ignore such extensions. Adding elements to the namespace defined in this document is not permitted..."
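For illustration only, a standalone poke message body might look like the fragment below. The namespace URI shown is a placeholder, not the value defined by the draft; Section 3 of the draft gives the normative XML schema and namespace.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Placeholder namespace; the draft's XML schema defines the real one.
     Per the draft, poke is the root element and defines no attributes;
     extensions must live in their own namespaces. -->
<poke xmlns="urn:example:params:xml:ns:poke"/>
```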

See also: the IETF SIMPLE WG Status Pages

Namespaces and OOXML's Markup Compatibility and Extensibility (MCE)
Rick Jelliffe, O'Reilly Technical

Vigorous standards that need to support a dynamic market are a problem. We all like nice stable standards, and we certainly like the idea of nice stable standards, but building our standards processes around the idea that we get it right and complete the first time is folly: it may be a worthy goal, but in many cases even the most perfect initial standard will immediately suffer evolutionary pressure. Isn't this the problem that XML Namespaces is supposed to address? Yes, to an extent: XML Namespaces lets us have a clear separation into different vocabularies, each targeted at specific parts of a document: a namespace for paragraphs and document parts, a namespace for maths objects, a namespace for metadata, and so on. But XML Namespaces provide only a medium-sized grain; they don't help when we want to implement some of the namespace but not all of it, or when we want to supersede the namespace with a whole new production. Namespace URLs almost always span different generations of the schemas for that vocabulary: the XSLT 2.0 namespace is the same as the XSLT 1.0 namespace, for example, and it will not be surprising if XHTML keeps its namespace through multiple versions. During the OOXML standardization proceedings, the ISO participants felt that one particular sub-technology, Markup Compatibility and Extensibility (MCE), was potentially of such usefulness to other standards that it was brought out into its own part. It is now IS29500:2009 Part 3... The particular issue that MCE addresses is this: what is an application supposed to do when it finds some markup it wasn't programmed to accept? This could be extension elements in some foreign namespace, but it could also be some elements from a known namespace: the case when a document was made against a newer version of the standard than the application.
The approach taken is very practical and, I think, user-oriented: an application that doesn't understand some new kind of markup should fail if that new markup was essential to the document. Otherwise it can use various other strategies, the most straightforward of which is just to ignore the new markup. And, to complement this, the document is allowed to have alternative versions of the same content using different namespaces, where the application chooses the version it is happiest with... I think standards developers who are facing the cat-herding issue of multiple implementations and the need for all sorts of extensions should seriously consider the MCE approach.
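The two strategies described above, ignorable extensions and alternative renditions, can be sketched roughly as follows. The mc: constructs (Ignorable, AlternateContent, Choice, Fallback) are MCE's actual mechanisms; the v2 vocabulary and the element names inside it are hypothetical, invented for the example.

```xml
<doc xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
     xmlns:v2="http://example.org/vocab/v2"
     mc:Ignorable="v2">
  <!-- mc:Ignorable declares the v2 prefix non-essential: a consumer that
       does not understand it may simply skip v2 markup. -->
  <p v2:highlight="true">Some text</p>
  <!-- Alternative renditions: the application takes the first Choice
       whose Requires namespace it understands, else the Fallback. -->
  <mc:AlternateContent>
    <mc:Choice Requires="v2">
      <v2:fancyChart/>
    </mc:Choice>
    <mc:Fallback>
      <p>[chart not available]</p>
    </mc:Fallback>
  </mc:AlternateContent>
</doc>
```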

See also: the MCE specification

An Introduction to jQuery, Part 1
Rick Strahl,

jQuery is a small JavaScript library that makes development of HTML-based client JavaScript drastically easier. This article introduces the jQuery concepts of document manipulation purely from a client-side perspective. A follow-up article will discuss how to use jQuery in combination with ASP.NET on the server for AJAX callbacks, and how to integrate jQuery with server-side controls and components. Some key features: (1) DOM Element Selectors: jQuery selectors let you select DOM elements so that you can apply functionality to them with jQuery's operational methods. jQuery uses a CSS 3.0 syntax (plus some extensions) to select single or multiple elements in a document. (2) The jQuery Object, the Wrapped Set: selectors return a jQuery object known as the "wrapped set," which is an array-like structure that contains all the selected DOM elements. You can iterate over the wrapped set like an array or access individual elements via the indexer... (3) Wrapped Set Operations: the real power of the wrapped set comes from applying jQuery operations against all selected DOM elements simultaneously. The jQuery.fn object exposes about 100 functions that can operate on wrapped sets, and allows you to manipulate and retrieve information from the selected DOM objects in a batch... Most wrapped set operations are also chainable; they return the jQuery wrapped set object as a result. (4) Simplified Event Handling: much of what you do in JavaScript code, from DOM manipulation to AJAX calls, is asynchronous and requires using events. Unfortunately, DOM implementations for event handling vary considerably between browsers. jQuery provides an easy mechanism for binding and unbinding events and a normalized event model that makes it easy to handle events and hook up result handlers for all supported browsers... (5) Small Footprint: jQuery is a fairly compact base library, yet it's packed with features you'll actually use.
During my relatively short time using jQuery, I've gone through well over 85% of the jQuery functions with my code, which points at how useful the library is... (6) Easy Plug-in Extensibility: When jQuery's language and DOM extension library features aren't enough, jQuery provides a simple plug-in API that has spawned hundreds of plug-ins for almost every conceivable common operation you might think up to perform on a set of DOM elements. jQuery's API allows extending the core jQuery object's operations simply by creating a function and passing the jQuery wrapped set as a parameter, which lets plug-ins operate on it and participate in jQuery chaining... However, the jQuery library is not "the perfect tool," and it doesn't solve every possible JavaScript and DOM problem for you... You may still need a set of a few helper functions to help with non-DOM related functionality. For example, I still use my old JavaScript library to get functionality such as date and number formatting, windowing support, and a host of other features. That's unlikely to ever go away. But I can simply toss out large parts of my old library because jQuery replaces its functionality—in most cases much more elegantly than my own code did.
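The chainability mentioned in point (3) comes from each operation returning the wrapped set itself. The pattern can be sketched in a few lines; Python is used here purely to show the shape of the pattern (jQuery itself is JavaScript), and plain dicts stand in for DOM elements, so this is an illustration of the design, not jQuery's implementation.

```python
# Minimal sketch of the "wrapped set" chaining pattern: each operation
# acts on every selected element and returns the wrapper, so calls can
# be strung together, mirroring $("div").addClass("x").css("color","red").
class WrappedSet:
    def __init__(self, elements):
        self.elements = list(elements)

    def add_class(self, name):
        for el in self.elements:
            el.setdefault("class", set()).add(name)
        return self  # returning self is what makes chaining work

    def css(self, prop, value):
        for el in self.elements:
            el.setdefault("style", {})[prop] = value
        return self

nodes = [{"tag": "div"}, {"tag": "div"}]
WrappedSet(nodes).add_class("highlight").css("color", "red")
print(nodes[0]["class"], nodes[0]["style"])  # {'highlight'} {'color': 'red'}
```

Plug-ins extend the same idea: a function that receives the wrapped set, operates on its elements, and returns it so the chain continues.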

Los Alamos Stung by Loss of Scores of Laptops
Joab Jackson, Government Computer News

Summary: An emergency inventory reveals that 67 laptop PCs are missing. The U.S. Energy Department's Los Alamos National Laboratory has been stung by another loss of laptop computers. In a leaked memorandum, the contracting agency managing the lab admitted that 67 laptops are unaccounted for, including thirteen in the past year. The memo was written in response to the theft, in January 2009, of three computers from an employee's Santa Fe, N.M., residence. Only one of the three computers was authorized for home use, said Jeff Berger, a spokesperson for the lab. As a result of this loss, the lab, which is managed by Los Alamos National Security LLC, conducted an inventory of all its computers and found that 67 were unaccounted for, including the three recently stolen ones... The Project on Government Oversight (POGO), a nonprofit organization dedicated to uncovering government malfeasance, posted the memo, which was leaked anonymously. This latest leak comes less than a month after POGO posted another lab e-mail, sent in January, stating that a BlackBerry had been lost in "a sensitive foreign country." In 2003, the Energy Department's inspector general faulted the lab for not being able to locate 22 laptops during an audit. The lab is currently reviewing all employee home computer usage to ensure all remote computers are being used within policy guidelines for home use, Berger said. Overall, the lab has more than 40,000 servers, printers, personal digital assistants, desktop computers, laptops, and other computational devices, all of which are bar-coded. Under its contract with the National Nuclear Security Administration, Los Alamos National Security LLC must have full accountability of at least 98.7 percent of these bar-coded items at any given time. Each year, the organization submits an independently validated inventory report to NNSA...
Absolute Software, a provider of laptop theft-management software, has estimated that the average percentage of laptops organizations lose due to theft is between 3.5 and 5 percent.
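A rough back-of-the-envelope check relates the figures quoted above, assuming the roughly 40,000 bar-coded items form the denominator for the 98.7 percent requirement:

```python
# Rough check of the accountability figures quoted above, assuming the
# ~40,000 bar-coded items is the denominator for the 98.7% requirement.
total_items = 40_000
missing_laptops = 67
required_accountability = 0.987  # contract requirement (98.7 percent)

# Items that may be unaccounted for while still meeting the contract:
allowed_unaccounted = round(total_items * (1 - required_accountability))
print(allowed_unaccounted)  # 520

# Fraction of the inventory represented by the missing laptops:
print(round(missing_laptops / total_items, 6))  # 0.001675
```

On these assumptions, the 67 missing laptops are about 0.17 percent of the inventory, well inside the 1.3 percent (520-item) contractual allowance.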


XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation
Microsoft Corporation
Oracle Corporation
Sun Microsystems, Inc.
