This issue of XML Daily Newslink is sponsored by:
- SmartGrid Community Collaborates on Draft Charter for OASIS Energy Interoperation TC
- Revised Internet Draft: Link-based Resource Descriptor Discovery
- Proposed Recommendation and Call for Review: Service Modeling Language 1.1
- Updated Working Draft for HTML 5 Differences from HTML 4
- Attention Request (POKE) for Instant Messaging
- Namespaces and OOXML's Markup Compatibility and Extensibility (MCE)
- An Introduction to jQuery, Part 1
- Los Alamos Stung by Loss of Scores of Laptops
SmartGrid Community Collaborates on Draft Charter for OASIS Energy Interoperation TC
William Cox, Posting to OASIS 'smartgrid-discuss' List
A charter proposal for an OASIS Energy Interoperation Technical Committee was posted to 'smartgrid-discuss' for discussion and review. The intent of the drafting group is to revise this draft after a comment period, and then submit the revised charter to the OASIS Technical Committee process. As proposed, the TC will develop a data model and communication model to enable collaborative and transactive use of energy. Web services definitions, service definitions consistent with the OASIS SOA Reference Model, and XML vocabularies will be developed for interoperable and standard exchange of dynamic price signals, reliability signals, emergency signals, communication of market participation information such as bids, and load predictability and generation information. As energy use and peak demand increase, the supply side, namely delivery and generation infrastructure, has not kept pace. There have typically been limited high-demand periods (on the order of ten days per year, and for only a portion of each of those days). This presents opportunities to shift energy use to times of lower demand and also to reduce use during peak periods so that the existing infrastructure will suffice. This shifting and reduction can reduce the need for new power plants, transmission and distribution systems, and, through greater economic efficiency, reduce costs to energy consumers. This is often called Demand Response (DR) or demand shaping... The core work of the TC is defining XML and Web services interactions for so-called Automated Demand Response, growing out of work at the Lawrence Berkeley National Laboratory Demand Response Research Center led by Mary Ann Piette, who is the convener of the proposed TC. This specific proposal comes from the context of many discussions in and related to the OpenADR Technical Advisory Group, GridWise Architecture Council, Grid-Interop, the NIST Smart Grid project, GridEcon, and many other places.
The LBNL OpenADR body of work is being extended through two entities now being created: this proposed OASIS Technical Committee and a proposed UCAIug OpenADR Task Force. In this innovative collaboration, the UCAIug, whose members are largely utilities and their suppliers, will focus requirements, goals, data models, and comments through UCAIug, involving market makers, independent system operators, and policy and regulatory groups.
See also: the associated posting
Revised Internet Draft: Link-based Resource Descriptor Discovery
Eran Hammer-Lahav (ed), IETF Internet Draft
The editor of the "Link-based Resource Descriptor Discovery" Internet Draft has released an updated version, where "except for Appendix B, the rest of the specification was significantly changed and a fresh read is recommended." This specification describes a process for obtaining information about a resource identified by a URI. The 'information about a resource', a resource descriptor, provides machine-readable information that aims to increase interoperability and enhance the interaction with the resource. This memo only defines the process for locating and obtaining the descriptor, but leaves the descriptor format and its interpretation out of scope. Resource descriptors are documents (usually based on well-known serialization languages such as XML, RDF, and JSON) which provide machine-readable information about resources (resource metadata) for the purpose of promoting interoperability and assisting in interacting with unknown resources that support known interfaces. While many methods provide the ability to link a resource to its metadata, none of these methods fully address the requirements of a uniform and easily implementable process. These requirements include the ability for resources to self-declare the location of their descriptors, the ability to access descriptors directly without interacting with the resource, and support for a wide range of platforms and scales of deployment. They must also be fully compliant with existing web protocols, and support extensibility. These requirements, and the analysis used as the basis for this memo, are explained in detail in Appendix B. For example, a web page about an upcoming meeting can provide in its descriptor document the location of the meeting organizer's free/busy information to potentially negotiate a different time. A social network profile page descriptor can identify the location of the user's address book as well as accounts on other sites.
A web service implementing an API with optional components can advertise which of these are supported. This memo describes the first step in the discovery process in which the resource descriptor document is located and retrieved... Discovery can be performed before, after, or without obtaining a representation of the resource. Performing discovery ahead of accessing a representation allows the client not to rely on assumptions about the properties of the resource. Performing discovery after a representation has been obtained enables further interaction with it. Given the wide range of 'information about a resource', no single descriptor format can adequately accommodate such scope. However, there is great value in making the process of locating the descriptor uniform across formats. While HTTP is the most common protocol used in association with discovery and is explicitly specified in this memo, other protocols may be used..." Discussion of this document takes place on the public 'firstname.lastname@example.org' mailing list.
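As a sketch only, the meeting-page example above might yield a descriptor document along the following lines. The element names follow the OASIS XRD style; the 'rel' values, URIs, and namespace here are illustrative assumptions, not content quoted from the draft:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative only: an XRD-style descriptor for a meeting page.
     The rel values and URIs are assumptions for this sketch. -->
<XRD xmlns="http://docs.oasis-open.org/ns/xri/xrd-1.0">
  <Subject>http://example.com/meetings/2009-review</Subject>
  <!-- Points a client at the organizer's free/busy information,
       allowing it to propose a different meeting time -->
  <Link rel="http://example.com/rel/free-busy"
        href="http://example.com/users/organizer/free-busy"
        type="text/calendar"/>
</XRD>
```

A client could obtain the location of such a descriptor ahead of time, for instance from an HTTP Link response header, without ever fetching the page itself; the exact linking mechanisms are what the draft specifies.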
See also: the associated posting
Proposed Recommendation and Call for Review: Service Modeling Language 1.1
Bhalchandra Pandit, Valentina Popescu, Virginia Smith (eds), W3C Technical Report
W3C announced the publication of "Service Modeling Language, Version 1.1" and "Service Modeling Language Interchange Format Version 1.1" as Proposed Recommendations, together with a call for implementations. Comments are welcome through 12-March-2009. W3C publishes a technical report as a Proposed Recommendation to indicate that the document is a mature technical report that has received wide review for technical soundness and implementability and to request final endorsement from the W3C Advisory Committee. The Service Modeling Language (SML) provides a rich set of constructs for creating models of complex services and systems. Depending on the application domain, these models may include information such as configuration, deployment, monitoring, policy, health, capacity planning, target operating range, service level agreements, and so on. A model in SML is realized as a set of interrelated XML documents. The XML documents contain information about the parts of a service, as well as the constraints that each part must satisfy for the service to function properly. Constraints are captured in two ways: (1) Schemas - these are constraints on the structure and content of the documents in a model. SML uses XML Schema (Structures, Datatypes) as the schema language. In addition, SML defines a set of extensions to XML Schema to support references that may cross document boundaries. (2) Rules - these are Boolean expressions that constrain the structure and content of documents in a model. SML uses Schematron (ISO/IEC 19757-3) and XPath for rules. One of the important operations on the model is to establish its validity. This involves checking whether all data in a model satisfies the schemas and rules declared... The SMIF PR specification defines the interchange format for Service Modeling Language, Version 1.1 (SML) models.
This format identifies the model being interchanged, distinguishes between model definition documents and model instance documents, and defines the binding of rule documents with other documents in the interchange model. To ensure accurate and convenient interchange of the documents that make up an SML model, it is useful to define both an implementation-neutral interchange format that preserves the content and interrelationships among the documents and a constrained form of SML model validation.
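As a sketch of the second kind of constraint, a Schematron rule over a hypothetical SML model document might look like the following. The 'server', 'capacity', and 'currentLoad' names are invented for illustration; only the sch:* constructs come from ISO Schematron itself:

```xml
<!-- Illustrative rule: a Boolean XPath expression constraining
     document content, as SML's rule mechanism does. -->
<sch:schema xmlns:sch="http://purl.oclc.org/dsdl/schematron">
  <sch:pattern>
    <sch:rule context="server">
      <!-- Model validation fails if this assertion is false -->
      <sch:assert test="number(capacity) &gt;= number(currentLoad)">
        A server's current load must not exceed its declared capacity.
      </sch:assert>
    </sch:rule>
  </sch:pattern>
</sch:schema>
```

Validating an SML model then means checking every document against both its XML Schema constraints and all such rules bound to it.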
Updated Working Draft for HTML 5 Differences from HTML 4
Anne van Kesteren (ed), W3C Technical Report
See also: the HTML 5 Working Draft
Attention Request (POKE) for Instant Messaging
Gustavo Garcia and Jose-Luis Martin (eds), IETF Internet Draft
Members of the IETF SIP for Instant Messaging and Presence Leveraging Extensions (SIMPLE) Working Group have published an updated Internet Draft for "Attention Request (POKE) for Instant Messaging." Section 3 ('XML Document Format') presents the XML schema. "Some existing messaging platforms include the capability to send a message to a user requesting his attention (e.g. XMPP). This feature is usually known as poke, nudge or buzz, and in desktop applications the notification is usually implemented using a combination of sound and the vibration of chat windows. This document describes the XML message format to encode this attention request. This message can be used inside an instant messaging session (for example an MSRP session) or as a standalone message (for example in a SIP MESSAGE). In session mode, the poke message is sent as part of the messaging stream and its usage is negotiated just like any other media type in that stream, with details depending on the session mode protocol. The receiver of this message can present it to the user in different ways depending on the device capabilities and the user preferences. The message format does not include support for specifying sender preferences for the realization of the attention request... The only XML element of the message is the poke element. This is the root element of the message and doesn't define any additional attributes. The XML schema should be consulted for the normative message format. In order to include additional functionality, the XML schema can be extended in future documents. Additional elements must use their own namespaces and must be designed such that receivers can safely ignore such extensions. Adding elements to the namespace defined in this document is not permitted..."
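Given the description above (a single empty root element with no additional attributes), a standalone poke message would reduce to something like the following. The namespace URI here is a placeholder, since the draft's actual namespace is not quoted in this summary; the draft's XML schema is normative:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Placeholder namespace: consult the draft's schema for the
     normative namespace and message format. -->
<poke xmlns="urn:example:poke-placeholder"/>
```

An extension would add elements in its own namespace alongside the poke element, designed so that receivers unaware of the extension can safely ignore them.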
See also: the IETF SIMPLE WG Status Pages
Namespaces and OOXML's Markup Compatibility and Extensibility (MCE)
Rick Jelliffe, O'Reilly Technical
Vigorous standards that need to support a dynamic market are a problem. We all like nice stable standards, and we certainly like the idea of nice stable standards, but building our standards processes around some idea that we get it right and complete the first time is folly: it may be a worthy goal, but in many cases even the most perfect initial standard will immediately suffer evolutionary pressure. Isn't this the problem that XML Namespaces is supposed to address? Yes, to an extent: XML Namespaces lets us have a clear separation into different vocabularies each targeted at specific parts of a document: a namespace for paragraphs and document parts, a namespace for maths objects, a namespace for metadata and so on. But XML Namespaces provide only a medium-sized grain; they don't help either when we want to implement some of the namespace but not all of it, or when we want to supersede the namespace with a whole new production. Namespace URLs almost always span different generations of the schemas for that vocabulary: the XSLT 2.0 namespace is the same as the XSLT 1.0 namespace, for example, and it will not be surprising if XHTML keeps its namespace through multiple versions. During the OOXML standardization proceedings, the ISO participants felt that there was one particular sub-technology, Markup Compatibility and Extensibility (MCE), that was potentially so useful to other standards that it was brought out into its own part. It is now IS 29500:2009 Part 3... The particular issue that MCE addresses is this: what is an application supposed to do when it finds some markup it wasn't programmed to accept? This could be extension elements in some foreign namespace, but it could also be some elements from a known namespace: the case when a document was made against a newer version of the standard than the application.
The approach taken is very practical and, I think, user-oriented: an application that doesn't understand some new kind of markup should fail only if that new markup was essential to the document. Otherwise it can use various other strategies, the most straightforward one of which is just to ignore the new markup. And, to complement this, the document is allowed to have alternative versions of the same content using different namespaces, where the application chooses the version it is happiest with... I think standards developers who are facing the cat-herding issue of multiple implementations and the need for all sorts of extensions should seriously consider the MCE approach.
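Both strategies can be sketched in OOXML-style markup. In this sketch the mc namespace is the one IS 29500 Part 3 defines, while the v2 namespace and all the element names are invented for illustration: mc:Ignorable tells a down-level application it may skip the extension namespace, and mc:AlternateContent offers versions of the same content for the application to choose among:

```xml
<doc xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
     xmlns:v2="http://example.com/2009/new-features"
     mc:Ignorable="v2">
  <!-- An older application may simply skip this foreign-namespace element -->
  <v2:sparkline data="1 4 2 8"/>
  <mc:AlternateContent>
    <!-- Chosen only by applications that understand the v2 namespace -->
    <mc:Choice Requires="v2">
      <v2:richChart ref="chart1"/>
    </mc:Choice>
    <!-- Fallback rendering for everyone else -->
    <mc:Fallback>
      <image ref="chart1.png"/>
    </mc:Fallback>
  </mc:AlternateContent>
</doc>
```

Markup that really is essential can instead be listed as required, so that an application which cannot process it fails rather than silently dropping content.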
See also: the MCE specification
An Introduction to jQuery, Part 1
Rick Strahl, DevX.com
Los Alamos Stung by Loss of Scores of Laptops
Joab Jackson, Government Computer News
Summary: Emergency inventory reveals that 67 laptop PCs are missing. The U.S. Energy Department's Los Alamos National Laboratory has been stung by the leaked disclosure of another loss of laptop computers. In a leaked memorandum, the contracting agency managing the lab admitted that 67 laptops are unaccounted for, including thirteen in the past year. The memo was written in response to the fact that, in January 2009, three computers were stolen from an employee's Santa Fe, N.M., residence. Only one of the three computers was authorized for home usage, said Jeff Berger, a spokesperson for the lab. As a result of this loss the lab, which is managed by Los Alamos National Security LLC, conducted an inventory of all its computers and found that 67 were unaccounted for, including the three recently stolen ones... The Project on Government Oversight, a nonprofit organization dedicated to uncovering government malfeasance, posted the memo, which was leaked anonymously. This latest leak comes less than a month after POGO posted another lab e-mail sent in January, stating that a Blackberry had been lost in "a sensitive foreign country." In 2003, the Energy Department's inspector general faulted the lab for not being able to locate 22 laptops during an audit. The lab is currently reviewing all employee home computer usage to ensure all remote computers are being used within policy guidelines for home use, Berger said. Overall, the lab has more than 40,000 servers, printers, personal digital assistants, desktop computers, laptops and other computational devices, all of which are bar-coded. In its contract with the National Nuclear Security Administration, Los Alamos National Security LLC must have full accountability of at least 98.7 percent of these bar-coded items at any given time. Each year, the organization submits an independently validated inventory report to NNSA...
Absolute Software, a provider of laptop theft-management software, has estimated that the average percentage of laptops organizations lose due to theft is between 3.5 and 5 percent.
XML Daily Newslink and Cover Pages sponsored by:
Sun Microsystems, Inc. (http://sun.com)
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: email@example.com
Newsletter unsubscribe: firstname.lastname@example.org
Newsletter help: email@example.com
Cover Pages: http://xml.coverpages.org/