XML Daily Newslink. Wednesday, 02 September 2009

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation http://www.microsoft.com



Protocol for Web Description Resources (POWDER): Approved Recommendations
Staff, W3C Announcement

A W3C announcement, "A Sprinkle of POWDER Fosters Trust on the Web," reports on the approval of three W3C Recommendations from the POWDER Working Group, which "takes steps toward building a Web of trust, and making it possible to discover relevant, quality content more efficiently." POWDER is a new W3C Standard intended to raise confidence in site quality, relevance, and authenticity. The W3C POWDER (Protocol for Web Description Resources) Working Group was chartered to specify a protocol for publishing descriptions of (i.e., metadata about) Web resources using RDF, OWL, and HTTP.

"When content providers use POWDER, the Protocol for Web Description Resources, they help people with tasks such as seeking sound medical advice, looking for trustworthy retailers, or searching for content available under a particular license—for instance, a Creative Commons license...

A site wishing to promote the mobile-friendliness of its content or applications can tell the world using POWDER. Content providers start by creating content that conforms to W3C's mobileOK scheme and validating it with the mobileOK Checker. The checker generates POWDER statements that apply to individual pages. But a key feature of POWDER is that it lets content providers make statements about groups of resources—typically all the pages, images, and videos on a Web site. Other tools, such as the i-sieve POWDER generator (not from W3C), generate POWDER statements about the mobile-friendliness of entire sites. Once these POWDER statements are in place, they can be used by search engines or other tools to help people find mobile-friendly content..."
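As a rough sketch of the format (element names follow the POWDER Recommendation's XML syntax, but the host, date, and mobileOK class URI below are illustrative assumptions, not values quoted from the spec), a site-wide description resource might look like:

    <?xml version="1.0"?>
    <!-- Illustrative sketch only: all values are invented examples. -->
    <powder xmlns="http://www.w3.org/2007/05/powder#">
      <attribution>
        <issuedby src="http://example.com/company.rdf#me"/>
        <issued>2009-09-02</issued>
      </attribution>
      <dr>
        <iriset>
          <includehosts>example.com</includehosts>
        </iriset>
        <descriptorset>
          <typeof src="http://www.w3.org/2008/06/mobileOK#Conformant"/>
          <displaytext>All pages on example.com are mobileOK.</displaytext>
        </descriptorset>
      </dr>
    </powder>

A processor that trusts the issuer can then apply the descriptorset to every resource on the named host, which is what lets a single statement cover an entire site.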

See also: the POWDER Working Group home page


VMware Submits VMware vCloud API Specification to DMTF
Staff, VMware Announcement

At VMworld 2009, VMware announced the submission of its vCloud API to the Distributed Management Task Force (DMTF) "to enable consistent mobility, provisioning, management, and service assurance of applications running in internal and external clouds. The submission is part of the VMware vCloud Initiative — created to enable customers to work closely with VMware cloud partners who provide reliable, enterprise-ready cloud services without the lock-in associated with some cloud solutions available today..."

According to the announcement: "The VMware vCloud API provides for rich capabilities such as upload, download, instantiation, deployment and operation of vApps, networks and virtual datacenters, and we [VMware] feel it is a great basis upon which to start the standardization process... Earlier this year, VMware released its vCloud API to partners to work with them to develop and deploy interoperable cloud services. With the availability of this API, customers can choose cloud services that will enable the on-demand flexibility that they require, and the ability to move their applications in and out of internal or external clouds with the high availability, manageability and security that customers have grown to rely on from VMware..."
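As a purely hypothetical illustration of what a RESTful call against such an API might look like (the path, media type, and element names below are invented for this sketch, not quoted from the submitted specification), instantiating a vApp from a catalog template could resemble:

    POST /api/vdc/5/action/instantiateVAppTemplate HTTP/1.1
    Host: cloud.example.com
    Content-Type: application/vnd.vmware.vcloud.instantiateVAppTemplateParams+xml

    <!-- Hypothetical request body: names the new vApp and points at the
         template to clone; all names and URLs here are invented. -->
    <InstantiateVAppTemplateParams name="web-tier"
        xmlns="http://www.vmware.com/vcloud/v1">
      <VAppTemplate href="https://cloud.example.com/api/vAppTemplate/9"/>
    </InstantiateVAppTemplateParams>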

See also: the VMware vCloud API


VMware Cloud Initiative Raises Vendor Lock-In Concerns
Jon Brodkin, Network World

Brodkin argues that [the VMware] "cloud initiative threatens to introduce a type of vendor lock-in that rival virtualization vendors claim they would not impose. While competitors Citrix and Microsoft have embraced the notion of supporting multiple virtualization platforms with their software, VMware has long maintained that its management tools will support only its own hypervisor. Now its growing cloud initiative, highlighted at this week's VMworld conference, depends upon customers and vendors using its vSphere virtualization platform, which could prevent true cloud interoperability...

VMware officials contend they are trying to promote open standards by submitting their own vCloud API to the Distributed Management Task Force (DMTF), in an effort to promote interoperability among public cloud platforms. VMware CTO Stephen Herrod said VMware submitted the API to DMTF because the company wants a 'broad ecosystem of compatible clouds,' even including those not running VMware software. But vCloud is still a VMware-centric API, at least until other hypervisor vendors start using it, says Burton Group analyst Chris Wolf..."

See also: the Distributed Management Task Force


OASIS Webinar: Defining DITA for Pharmaceutical Documentation
Staff, OASIS Announcement

OASIS is hosting a 30-minute webinar outlining the upcoming technical activities of the new DITA Pharmaceutical Content Subcommittee. Webinar Date/Time: Tuesday, September 22, 2009 11:00 AM - 11:30 AM EDT.

This group is bringing together pharmaceutical documentation experts from all regions to define DITA topics and maps, as well as associated metadata and terminology, to support re-usable content. The output of this Committee will address good practices as well as proposed DITA specialization; all stakeholders are strongly encouraged to be represented. The webinar will provide examples of how the group's recommendations may be used to streamline the creation of documentation supporting a product for scientific and regulatory purposes throughout its lifecycle.

While the actual scope and schedule will be determined by the DITA Committee, some of the early topics may include: (a) ICH Common Technical Document; (b) FDA Structured Product Labeling; (c) EU Product Information Management; (d) Clinical Trial Protocol and Study Reports. Anyone with an interest in participating in or monitoring the work of the DITA Pharmaceutical Content Subcommittee may consider attending the webinar, including: [i] medical and technical writers for pharmaceutical companies, [ii] information technology architects with specialization in content and document management, and [iii] researchers who have expertise in the design of drug development programs.
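For readers unfamiliar with DITA, a topic is a small, self-contained unit of content that maps then assemble into larger deliverables; that granularity is what makes pharmaceutical content reusable across labeling, submissions, and study documents. A minimal sketch of a topic (the id, title, and text are invented for illustration):

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
    <!-- Illustrative topic: the id and content are invented examples. -->
    <concept id="dosage-overview">
      <title>Dosage Overview</title>
      <conbody>
        <p>Reusable dosage text that labeling, regulatory submissions,
           and clinical study reports can all reference.</p>
      </conbody>
    </concept>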

See also: the OASIS DITA Pharmaceutical Content Subcommittee Draft Charter


Requirements for a Location-by-Reference Mechanism
Roger Marshall (ed), IETF Internet Draft

Members of the IETF Geographic Location/Privacy (GEOPRIV) Working Group have published an updated Internet Draft, "Requirements for a Location-by-Reference Mechanism," relative to PIDF-LO. The document defines terminology and provides requirements relating to a Location-by-Reference approach that uses a location URI to handle location information within signaling and other Internet messaging.

From the 'Introduction': "All location-based services rely on ready access to location information. Location information can be used in one of two ways: either in a direct, Location-by-Value (LbyV) approach, or through an indirect, Location-by-Reference (LbyR) model. For LbyV, location information is conveyed directly in the form specified by 'A Presence-based GEOPRIV Location Object Format' (IETF RFC 4119, updated by RFCs 5139 and 5491). Using LbyV might be either infeasible or undesirable in some circumstances. There are cases where LbyR is better able to address location requirements for a specific architecture or application. This document provides a list of requirements for use with the LbyR approach, and leaves the LbyV model explicitly out of scope.

Consider the circumstance that in some mobile networks it is not efficient for the end host to periodically query the Location Information Server (LIS) for up-to-date location information. This is especially the case when power availability is a constraint or when a location update is not immediately needed. Furthermore, the end host might want to delegate the task of retrieving and publishing location information to a third party, such as a presence server.

The concept of an LbyR mechanism is simple: a reference identifier indirectly references the actual location information using some combination of a key value and a fully qualified domain name. This combination of data elements, in the form of a URI, is referred to specifically as a 'location URI'. A location URI can be thought of as a dynamic reference to the current location of the Target, yet the location value might remain unchanged over specific intervals of time for several reasons..."
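For illustration only (these URIs are invented and the draft does not mandate any particular URI scheme), location URIs in the HTTP and SIP schemes might look like:

    https://lis.example.com/357yc6s64ceyoiuy5ax3o
    sip:9769+357yc6s64ceyoiuy5ax3o@ls.example.com

Either form lets a recipient dereference the URI later to obtain the Target's then-current location object, rather than carrying the location value in every message.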

See also: the IETF Geographic Location/Privacy Working Group Specification Status Pages


W3C Publishes Revised HTML 5 Working Drafts
Ian Hickson, David Hyatt (et al., eds), W3C Technical Report

Members of the W3C HTML Working Group have published updated Working Draft specifications for "HTML 5: A Vocabulary and Associated APIs for HTML and XHTML" and "HTML 5 Differences from HTML 4". In HTML 5, new features are introduced to help Web application authors, new elements are introduced based on research into prevailing authoring practices, and special attention has been given to defining clear conformance criteria for user agents in an effort to improve interoperability. HTML 5 is defined in a way that is backwards compatible with the way user agents handle deployed content.

"HTML has been in continuous evolution since it was introduced to the Internet in the early 1990's. Some features were introduced in specifications; others were introduced in software releases. In some respects, implementations and author practices have converged with each other and with specifications and standards, but in other ways, they continue to diverge.

HTML 4 became a W3C Recommendation in 1997. While it continues to serve as a rough guide to many of the core features of HTML, it does not provide enough information to build implementations that interoperate with each other and, more importantly, with a critical mass of deployed content. The same goes for XHTML 1, which defines an XML serialization for HTML 4, and DOM Level 2 HTML, which defines JavaScript APIs for both HTML and XHTML; HTML 5 will replace these documents.

The HTML 5 draft reflects an effort, started in 2004, to study contemporary HTML implementations and deployed content. The HTML 5 draft: (1) Defines a single language called HTML 5 which can be written in HTML syntax and in XML syntax. (2) Defines detailed processing models to foster interoperable implementations. (3) Improves markup for documents. (4) Introduces markup and APIs for emerging idioms, such as Web applications..."
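A minimal conforming document in the HTML syntax, using one of the new structural elements (the content is illustrative only):

    <!DOCTYPE html>
    <html>
      <head>
        <meta charset="utf-8">
        <title>HTML 5 sample</title>
      </head>
      <body>
        <!-- article is one of the new structural elements; serving the
             same vocabulary as application/xhtml+xml with the XHTML
             namespace yields the XML serialization instead. -->
        <article>
          <h1>One vocabulary, two serializations</h1>
          <p>Illustrative content only.</p>
        </article>
      </body>
    </html>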

See also: HTML 5 Differences from HTML 4


Cryptographic Agility
Bryan Sullivan, MSDN Magazine

"For as long as cryptographers have been making secret codes, cryptanalysts have been trying to break them and steal information, and sometimes the code breakers succeed. Cryptographic algorithms once considered secure are broken and rendered useless. Sometimes subtle flaws are found in the algorithms, and sometimes it is simply a matter of attackers having access to more computing power to perform brute-force attacks. [For example], security researchers have demonstrated weaknesses in the MD5 hash algorithm as the result of collisions; that is, they have shown that two messages can have the same computed MD5 hash value. They have created a proof-of-concept attack against this weakness targeted at the public key infrastructures that protect e-commerce transactions on the Web.

A complete list of the cryptographic algorithms banned or approved by Microsoft's Security Development Lifecycle (SDL) is reviewed and updated annually as part of the SDL update process... Even if you follow these standards in your own code, using only the most secure algorithms and the longest key lengths, there's no guarantee that the code you write today will remain secure. In fact, it will probably not remain secure if history is any guide. [You can go] through your old applications' code bases, picking out instantiations of vulnerable algorithms and replacing them with new algorithms, [but] a better alternative is to plan for this scenario from the beginning. Rather than hard-coding specific cryptographic algorithms into your code, use one of the crypto-agility features built into the Microsoft .NET Framework...

Given the time and expense of recoding your application in response to a broken cryptographic algorithm, not to mention the danger to your users until you can get a new version deployed, it is wise to plan for this occurrence and write your application in a cryptographically agile fashion. The fact that you can also obtain a performance benefit from coding this way is icing on the cake. Never hardcode specific algorithms or implementations of those algorithms into your application. Always declare cryptographic algorithms as one of the following abstract algorithm type classes: HashAlgorithm, SymmetricAlgorithm, AsymmetricAlgorithm, KeyedHashAlgorithm, or HMAC..."
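A minimal C# sketch of the idea: resolve the algorithm through the abstract base class and a run-time name rather than a hard-coded concrete type. HashAlgorithm.Create is the .NET Framework factory the article refers to, but the algorithm name and the scaffolding below are illustrative choices, not code from the article:

    using System;
    using System.Security.Cryptography;
    using System.Text;

    class CryptoAgilityDemo
    {
        static void Main()
        {
            // Agile: the name can come from configuration, so a broken
            // algorithm is replaceable without recompiling. Hard-coding
            // "new SHA1Managed()" would freeze the choice into the binary.
            string algorithmName = "SHA256";  // illustrative; e.g. read from config
            using (HashAlgorithm hash = HashAlgorithm.Create(algorithmName))
            {
                byte[] digest = hash.ComputeHash(Encoding.UTF8.GetBytes("hello"));
                Console.WriteLine(BitConverter.ToString(digest));
            }
        }
    }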

See also: Cryptographic Key Management


Jotting on Parsers for SGML-family Document Languages: SGML, HTML, XML
Rick Jelliffe, O'Reilly Technical

"Years ago, when I first started looking at SGML and how parsers for it might be written, I became confused: SGML (1986) didn't fit into the kind of grammars or automata I had been taught at university. In a way, this was not surprising: the kinds of automata that are probably appropriate were not invented/formalized by theoretical computer scientists until for example 1988 (EPDA), 1990s (adaptive grammars), 1994 (2-SA), and as late as 2007 (unambiguous boolean grammars), though other aspects were floating around prior, but at a fairly rarefied altitude—indeed, 2-SA still has no Wikipedia entry. [RCC notes: 'GML'-style descriptive markup design began in the mid 60s, with various streams, and evolved painfully to IS finally in 1986.]

When people talk about HTML or XML 'abandoning' their SGML roots, I think they mean a few different things... I think there is a group of users, or developers, who really want HTML and XML to change to a different type of grammar, preferably one neatly in the Chomsky hierarchy: some kind of pushdown automaton. One of the problems with the whole SGML family has been that developers may come to these languages expecting nice pushdown automata and then, finding that they don't shoehorn in well as simple PDAs and FSMs, get disaffected and confused about how to apply their favourite tricks.

My [stacks] model is that there is a parser with access to various maps: of entities (whether these are local or external, streams or files, it doesn't matter), of grammars (the element and attribute declarations in the DTD), of delimiters (the delimiter strings and their functions), and of defaults—for filling in the values of attributes, though this could be generalized for various aspects of repair and type annotation as well... The parser also has access to three stacks of stacks. The first is the input: this holds characters. The second is the context stack: this holds references to items in the maps. The third is the output: this holds information set items—the parsed document...
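Rendered as data structures, that model might be sketched like this (a loose C# rendering for concreteness; all type and member names are invented placeholders, not Jelliffe's code):

    using System.Collections.Generic;
    using System.IO;

    class ElementDecl { }  // placeholder: an element/attribute declaration from the DTD
    class InfoItem { }     // placeholder: one information-set item in the output

    class MarkupParserState
    {
        // The four maps the parser consults while recognizing input.
        public Dictionary<string, TextReader> Entities;   // local or external entity streams
        public Dictionary<string, ElementDecl> Grammars;  // DTD declarations
        public Dictionary<string, string> Delimiters;     // delimiter strings and their functions
        public Dictionary<string, string> Defaults;       // attribute defaults (repair, annotation)

        // The three stacks of stacks.
        public Stack<Stack<char>> Input;      // characters awaiting recognition
        public Stack<Stack<object>> Context;  // references to items in the maps
        public Stack<Stack<InfoItem>> Output; // information-set items: the parsed document
    }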

Surely SGML is dead? Well, XML is SGML and XML isn't dead. And HTML has been notionally SGML and it isn't dead. And there still are large SGML documents and systems, so SGML is not dead: though certainly Grandpa has hung up his saddle. But full SGML, especially as extended in 1998 with WebSGML to cope with HTML and XML better, plus XML and HTML (not to mention Wikis) together cover a very broad range of markup language possibilities. It is entirely possible that future changes in HTML or XML could already be covered by SGML: people pre-judge that change necessarily implies divergence but it is case-by-case. So I hope having an introductory model for a machine that could cope with all of them might be useful or interesting to readers..."


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
Microsoft Corporation http://www.microsoft.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com
Sun Microsystems, Inc. http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2009-09-02.html
Robin Cover, Editor: robin@oasis-open.org