XML Daily Newslink. Tuesday, 10 March 2009

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation http://www.microsoft.com



For Review: Improving Access to Government through Better Use of the Web
José M. Alonso and Kevin Novak (eds), W3C First Public Working Draft

W3C announced the publication of a First Public Working Draft for the document "Improving Access to Government through Better Use of the Web," produced by members of the W3C eGovernment Interest Group. This document is a working document to which all can contribute; comments about the draft are invited through April 26, 2009. In this draft, the term "eGovernment" refers to the use of the Web or other information technologies by governing bodies (local, state, federal, multi-national) to interact with their citizenry, between departments and divisions, and between governments themselves. Recognizing that governments throughout the world need assistance and guidance in achieving the promises of electronic government through technology and the Web, this document seeks to define and call forth, but not yet solve, the variety of issues and challenges faced by governments. The use cases, documentation, and explanation focus on the available or needed technical standards, but additionally provide context to note and describe the additional challenges and issues that must be addressed before success can be realized. This document has been published in time for W3C's eGovernment stakeholder meeting in Washington, D.C...

Standards work across many groups, governments, and organizations continues to aid governments. Many have committed time and resources to develop XML, authentication, and other data standards that help information flow freely and remain available. Others have sought to understand how to aid in developing standards for interoperability and interchange of data, while still others have sought to create or identify Web presentation-layer, application, and browser-based standards to aid governments in their efforts... The W3C eGov IG is in its first year of existence and, through this Note, an issues paper, and future work, is attempting to fulfill its charter and mission for the W3C, and specifically to assist governments throughout the world in realizing the promise of electronic government. The eGovernment Interest Group (eGov IG) is designed as a forum to support researchers, developers, solution providers, and users of government services that use the Web as the delivery channel. The Interest Group uses email discussions, scheduled IRC topic chats, and other collaborative tools as a forum to enable broader collaboration across eGov practitioners.

See also: the W3C eGovernment Interest Group


Project Zeppelin Looks to Manage Clouds
Ed Scannell, InformationWeek

Cittio has rolled out Project Zeppelin, which it believes is the first open source cloud management and monitoring agent. The company hopes the new technology will accelerate the adoption of cloud computing among larger IT shops. Company officials see Zeppelin as just the first step toward its goal of offering a set of tools that let IT shops better identify which applications are best suited for cloud computing. While the changes in cloud-based applications and infrastructure architectures promise to be dramatic, cloud computing itself brings fresh risks to the command and control structures of today's IT operations, company officials said. This means there will have to be an equally dramatic evolution in the capabilities of existing network and systems management solutions. There are three major problems facing the industry in the area of cloud management. First, the primitive nature of instrumentation, management, and metering at the cloud operator and application user ends. Second, the lack of new metrics that can more accurately monitor cloud elasticity and resource availability. Third, most system management solutions rely on proprietary agent technology or SNMP for their performance metrics, and so lack the ability to transfer data securely, Cittio contends. Project Zeppelin is designed to counteract these deficiencies by offering detailed asset, performance, and auditing views of cloud and data center infrastructure and applications. It can be deployed remotely and reportedly can also secure data accessed through the Internet based on standard WBEM/CIM-XML and WS-Management interfaces. Zeppelin also supplies instrumentation for a range of open source software including Linux, Citrix XenServer (through Project Kensho), and VMware.

See also: DMTF Common Information Model (CIM)


Unified Cloud Interface Project (UCI) and Requirements
Reuven Cohen, Cloud Computing Interoperability Forum (CCIF) Memo

The Cloud Computing Interoperability Forum is a group of industry stakeholders that are active in cloud computing. The group's goal is to define an organization that would enable interoperable enterprise-class cloud computing platforms through application integration and stakeholder cooperation. CCIF is a thought forum and advocacy group for cloud interoperability and related standards. The unified cloud interface (UCI), or cloud broker, will be composed of a semantic specification and an ontology, also referred to as the "Semantic Cloud Abstraction". The ontology provides the actual model descriptions, while the specification defines the details for integration with other management models... A UCI Requirements document "specifies the implementation of a semantic process that can broker access to, and represent, multiple cloud providers, whether cloud-platform or cloud-infrastructure designs. The concept is to provide a single interface that can be used to retrieve a unified representation of all multi-cloud resources and to control these resources as needed. The key driver of a unified cloud interface is to create an API for and about other APIs...

A singular abstraction that can encompass the entire infrastructure stack as well as emerging cloud-centric technologies, all through a unified interface. What a semantic model enables for the UCI is the capability to bridge cloud-based APIs such as Amazon Web Services with existing protocols and standards, regardless of the level of adoption of the underlying APIs. The other benefit of a semantic model is future proofing: rather than implementing a static specification based on current technology limitations, the model assumes the industry will keep moving forward and is designed to adapt as the technology advances. In this vision for a unified cloud interface, the use of the Resource Description Framework (RDF) or something similar would be an ideal method to describe the semantic cloud data model (taxonomy and ontology). The benefit of RDF-based ontology languages is that they act as a general method for the conceptual description or modeling of information implemented by web resources. These web resources could just as easily be "cloud resources" or APIs. This approach may also allow an RDF-based cloud data model to be used within other ontology languages or web services, making it both platform and vendor agnostic...
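To make the idea concrete, a cloud compute resource might be described in RDF/XML roughly as follows. This is a minimal sketch only: the "ucio" namespace, class, and property names are invented for illustration and are not defined by the UCI project.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Hypothetical sketch: the "ucio" vocabulary is invented for
         illustration and is not part of the UCI specification. -->
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:ucio="http://example.org/uci/ontology#">
      <ucio:ComputeInstance rdf:about="http://example.org/clouds/aws/instances/i-1234">
        <!-- Which provider owns the instance, plus a few descriptive properties. -->
        <ucio:provider rdf:resource="http://example.org/clouds/aws"/>
        <ucio:machineImage>ami-0abc123</ucio:machineImage>
        <ucio:instanceType>m1.small</ucio:instanceType>
        <ucio:state>running</ucio:state>
      </ucio:ComputeInstance>
    </rdf:RDF>

Because the description is plain RDF, the same instance data could be queried with SPARQL or mapped into another ontology language without changing the underlying model.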

See also: the CCIF web site


W3C First Public Working Draft: Pointer Methods in RDF
Carlos Iglesias and Mike Squillace (eds), W3C Technical Report

W3C's Evaluation and Repair Tools Working Group has published a First Public Working Draft for "Pointer Methods in RDF." Abstract: "Access to the World Wide Web has meant access to many types of content, including text, documents in a variety of formats such as the Open Document Format (ODF) or Portable Document Format (PDF), audio or video clips, and, of course, Web content. Web content (e.g., XML, XHTML, and HTML documents), like many types of content, usually has a structure that allows portions of the document to be identified in many ways. The specification "Pointer Methods in RDF" defines a framework for representing pointers -- entities that permit identifying a portion or segment of a piece of content -- making use of the Resource Description Framework (RDF). It will also describe a number of specific types of pointers that permit portions of a document to be referred to in different ways. When referring to a specific part of, say, a piece of Web content, it is useful to have a consistent manner by which to refer to a particular segment of a Web document, to have a variety of ways by which to refer to that same segment, and to make the reference robust in the face of changes to that document... The intent of this First Public Working Draft of this specification is to introduce the Pointer Methods in RDF vocabulary, a collection of classes and properties that can be used to identify portions or segments of content, especially Web content. Keep in mind that this specification is part of the larger Evaluation and Report Language (EARL) suite produced and maintained by the Evaluation and Repair Tools Working Group (ERT WG), but that it is meant to be consumable as an independent vocabulary. Public comment is invited through April 7, 2009.
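As a rough sketch of the idea, an XPath-based pointer into a Web page might be expressed in RDF/XML along the following lines. The namespace URI, class, and property names here only approximate the draft vocabulary and should be checked against the Working Draft itself.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Hypothetical sketch: class and property names approximate the
         draft vocabulary and may not match the published terms. -->
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:ptr="http://www.w3.org/2009/pointers#">
      <ptr:XPathPointer>
        <!-- The document being pointed into. -->
        <ptr:reference rdf:resource="http://example.org/page.html"/>
        <!-- The XPath expression selecting the segment of interest. -->
        <ptr:expression>/html/body/p[2]</ptr:expression>
      </ptr:XPathPointer>
    </rdf:RDF>

A tool such as an accessibility checker could attach a pointer like this to an EARL test result to say precisely which part of the page a finding refers to.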

See also: the Evaluation and Report Language (EARL) Overview


ALTO Discovery Protocols
Gustavo Garcia, Marco Tomsu, Yu-Shun Wang (eds), IETF Internet Draft

An initial level -00 I-D for "ALTO Discovery Protocols" has been published by members of the IETF Application-Layer Traffic Optimization (ALTO) Working Group. The Application-Layer Traffic Optimization (ALTO) service aims to provide distributed network applications with information to perform better-than-random initial peer selection when multiple peers in the network are available to provide a resource or service. A discovery mechanism is needed for the applications to find a suitable entity that provides the ALTO service. This document discusses various scenarios of ALTO discovery, provides a survey of available options, and addresses potential issues and considerations for each. The ALTO service architecture and protocol are currently under discussion and development within the IETF ALTO working group... Although it is identified in the charter that a discovery mechanism is needed, the preference is to adopt one or more existing mechanisms for ALTO discovery rather than designing a new one... This document makes minimal assumptions about the ALTO framework, and presents different scenarios and available options based on prior and related discovery mechanisms. This document will be updated to track the progress of the ALTO requirements and solution... XRDS (Extensible Resource Descriptor Sequence), and its simplified profile XRDS-Simple, specify an XML format to describe resources associated with a URI, and the protocol to retrieve that XML document. One of the purposes of the XRDS document is to enumerate and describe the service endpoints associated with the resource, including the URI to access the service and a type of service and/or media-type identifying the service being discovered. The use of XRDS for ALTO Service Discovery requires using a URI to retrieve the XRDS document and the specification of a type of service and/or media-type identifying the ALTO Service. The necessity of an initial URI to retrieve the XRDS document requires an additional pre-discovery mechanism similar to the discovery of the ALTO service itself. This extra complexity and round trip seem to make XRDS not especially appropriate for the ALTO discovery use case... The ALTO WG was chartered to design and specify an Application-Layer Traffic Optimization (ALTO) service that will provide applications with information to perform better-than-random initial peer selection. ALTO services may take different approaches at balancing factors such as maximum bandwidth, minimum cross-domain traffic, lowest cost to the user, etc. The WG is considering the needs of BitTorrent, tracker-less P2P, and other applications, such as content delivery networks (CDN) and mirror selection. In order to query the ALTO server, clients must first know one or more ALTO servers that might provide useful information. The WG is looking at service discovery mechanisms that are in use or defined elsewhere (e.g., based on DNS SRV records or DHCP options). If such discovery mechanisms can be reused, the WG will produce a document to specify how they may be adopted for locating such servers.
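For reference, an XRDS document advertising an ALTO endpoint would look something like the sketch below. The service Type URI and endpoint URI are invented for illustration; no such type had been registered for ALTO at the time of the draft.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Hypothetical sketch: the ALTO service Type URI is invented here
         for illustration only. -->
    <xrds:XRDS xmlns:xrds="xri://$xrds" xmlns="xri://$xrd*($v*2.0)">
      <XRD>
        <Service priority="10">
          <!-- Identifies the kind of service being advertised. -->
          <Type>http://example.org/ns/alto/1.0</Type>
          <!-- Endpoint a client would contact to query the ALTO service. -->
          <URI>https://alto.example.net/service</URI>
        </Service>
      </XRD>
    </xrds:XRDS>

As the draft notes, a client still needs some prior mechanism to learn the URI from which to fetch this document, which is the extra round trip that makes XRDS a poor fit for ALTO discovery.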

See also: the IETF ALTO Working Group Status Pages


XProc: Drupal, XML Pipelines and RESTful Services
Kurt Cagle, O'Reilly Technical

Why would you want to use XML as a programming language? Anyone who has used languages such as XSLT should have a pretty fair idea about the complexities involved in treating XML as a programming language itself -- it's verbose, forces thinking into a declarative model that can be at odds with the C-based languages currently used by most programmers, can be difficult to read, and as a syntax it doesn't always fit well with the requirements of establishing parameter signatures and related structures. For this reason, languages such as XProc, the XML Pipeline Language, must have many people scratching their heads. At first blush, it is in fact a programming language -- it has many of the same lexical structures (declarations, parameters, encapsulation, control structures, exception handling, and so forth) that other programming languages have, and overall, the amount of work necessary to put together an XProc "program" would seem to outweigh the benefits when processing single documents. However, working on some revisions to a RESTful Services prototype (part of a larger series of articles I'm working on about RESTful Services, to be published soon), I began to see a place where XProc is not just a viable alternative, but may in fact be the best solution. Oddly enough, it has to do with Drupal... Despite its dependency upon PHP5 (a useful language but one that tends to encourage some truly dreadful programming habits), the Drupal architecture itself is perhaps one of the best I've ever seen, in great part because it has implicitly taken many of the tenets of RESTful programming to heart. Specifically, you can think of Drupal as a database of document resources, each of which can be accessed via a RESTful URL. At its core this mapping assumes that each internal document (which Drupal refers to as a node) can be accessed directly via its ID... For Drupal, the process of creating themes can be fairly complex, typically requiring hard-coding specific configurations in PHP and employing mixed PHP and HTML code. It is remarkably easy, from personal experience, to bollix such a theme. On the other hand, if the source file were an XML configuration template, not only could you validate the document prior to running it, but it would become much easier to build tools to visualize what the template would look like (and to build the templates in the first place) with an XML foundation. Similarly, the process of rendering widgets becomes a set of XSLT transformations acting on specific data that may be generated via an XQuery command, one that can be parameterized and defined as a distinct step in an XProc pipeline. Indeed, the whole view mechanism that gives Drupal so much of its power has direct analogs in XML technologies -- filters become XQuery WHERE phrases, paging is a simple XPath function (subsequence()), sorting is handled by the ORDER BY expression, arguments are parsed from the URL and passed in via XProc options, and so forth, while the final rendering can be handled via either simple XSLT transformations that can be autogenerated or via more complex XSLTs that can be loaded in and again wrapped as named pipes... Because XProc is itself an XML abstraction, this raises the possibility that the same XProc pipeline can be used with multiple XML database environments, as the abstraction of the signature could also provide for selection of the right XQuery extensions or similar code for a given database.
Whether this would lead to the same kind of community support that Drupal now has of course remains to be seen, but it is not hard to envision both a strong community following, especially in the realm of publishing, blogging, and general content management, and commercial opportunities (XBRL, HL7, S1000D, HR/XML, DITA integration -- really anywhere you're dealing with enterprise-grade XML)...
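To give a feel for what such a pipeline looks like, the sketch below runs an XQuery selection over a source document and then renders the result with XSLT, the two steps the article maps onto Drupal's view mechanism. The query, the wrapper element name, and the stylesheet URI are invented for illustration, and p:xquery is an optional step that not every XProc processor provides.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Hypothetical sketch: select "published" nodes with XQuery, wrap the
         result sequence, then render it with XSLT. Names and URIs are
         illustrative only. -->
    <p:pipeline xmlns:p="http://www.w3.org/ns/xproc" version="1.0">
      <!-- Step 1: run an XQuery against the pipeline's source document. -->
      <p:xquery>
        <p:input port="query">
          <p:inline>
            <c:query xmlns:c="http://www.w3.org/ns/xproc-step">//node[@status = 'published']</c:query>
          </p:inline>
        </p:input>
      </p:xquery>
      <!-- Step 2: wrap the resulting sequence into a single document. -->
      <p:wrap-sequence wrapper="nodes"/>
      <!-- Step 3: transform the wrapped selection into HTML. -->
      <p:xslt>
        <p:input port="stylesheet">
          <p:document href="render-nodes.xsl"/>
        </p:input>
      </p:xslt>
    </p:pipeline>

Filtering, sorting, and paging would be handled inside the query or stylesheet, with URL arguments surfaced as p:option values on the pipeline.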

See also: the XProc.org web site


Converting XML Schemas to Schematron: Beta Available
Rick Jelliffe, O'Reilly Technical

In part 14 of the Schematron series, the author announces: "The beta release of my open source XML Schema validator is available now, from Schematron.com." "Schematron (ISO DSDL Part 3) is a language for making assertions about patterns found in XML documents, and serves as a schema language for XML... the Schematron differs in basic concept from other schema languages in that it is not based on grammars but on finding tree patterns in the parsed document. This approach allows many kinds of structures to be represented which are inconvenient and difficult in grammar-based schema languages. If you know XPath or the XSLT expression language, you can start to use The Schematron immediately. It allows you to develop and mix two kinds of schemas: (1) Report elements allow you to diagnose which variant of a language you are dealing with; (2) Assert elements allow you to confirm that the document conforms to a particular schema." Why would you want to convert XSD to Schematron? The prime reason is to get better diagnostics: grammar-based diagnostics basically don't work, as the last two decades of SGML/XML DTD/XSD experience make plain. People find them difficult to interpret, and they give the response in terms of the grammar, not the information domain... A secondary reason is that Schematron only needs an XSLT implementation... This beta validator implementation takes the approach of converting the XML Schema to Schematron code. The methods used have been explored and documented in a blog... The coverage is approximately: (1) simple datatypes: believed to be 100%; (2) list and union datatypes: not supported; (3) structural constraints on elements and attributes: supported [Content model validation is implemented by a series of finer sieves, which combine to provide most of the capabilities of a full grammar checker. If a grammar has repeated particles or complex nested occurrence constraints, there may be some false positives where our sieves are not fine enough; however, there are never false negatives.]; (4) multiple namespaces, import and include: supported; (5) identity constraints: not supported; (6) dynamic constraints ('xsi:type', 'xsi:nil'): not supported; (7) tricky prefixes ('elementFormDefault'): not supported... The converter is a pipeline... [W3C XML Schema - XSLT stylesheet (extract) - Schematron schema, etc.] ... The ZIP archive has a sample ANT file. Please note that this requires XSLT 2 and an ability to create the necessary scripts, make files, or batch files to run. The ANT file uses a version of the Schematron task for ANT. Substitute your own Schematron implementation if necessary... [Note: this blog article includes URI references to earlier installments in the series.]
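For readers new to Schematron, the assert/report distinction mentioned above looks like this in practice. This is a minimal sketch; the document vocabulary (purchaseOrder, item, quantity) is invented for illustration.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Hypothetical sketch: the element names are illustrative only. -->
    <schema xmlns="http://purl.oclc.org/dsdl/schematron">
      <pattern>
        <rule context="purchaseOrder/item">
          <!-- assert: flag a failure when the test is false. -->
          <assert test="number(quantity) &gt; 0">
            Each item in a purchase order must have a positive quantity.
          </assert>
          <!-- report: emit a diagnostic when the test is true. -->
          <report test="@discontinued = 'true'">
            This order references a discontinued item.
          </report>
        </rule>
      </pattern>
    </schema>

Because the rules are XPath tests with human-readable messages, the diagnostics speak in terms of the information domain rather than the grammar, which is exactly the advantage the author cites for converting XSD to Schematron.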

See also: the ISO Schematron web site


Healthcare Standards: The HIT Standards Committee
John D. Halamka, Blog

"Today, HHS posted a call for nominations to the HIT Standards Committee. Although the ARRA's HIT Policy Committee and HIT Standards Committee are still being formed, I do have a few thoughts about how all our organizations will evolve. These Federal Advisory Committees (FACAs) will advise the government. They will not advise industry, payers, providers, or patients. I believe the FACAs will need multi-stakeholder groups to do the work they prioritize and to coordinate with all the stakeholders in the healthcare IT ecosystem. I believe there will be an ongoing need to harmonize standards, especially around quality measurement mentioned in ARRA several times. The HIT Policy Committee will be a new committee. NeHC, CCHIT and HITSP are not specifically submitting slates of candidates, but we will happily support any of our members who self-nominate. The HIT Standards Committee is a new committee, but it is my hope that NeHC will evolve to become the HIT Standards Committee. As the new Secretary of HHS is confirmed, hopefully we will get clarity in this area. It is my hope that HITSP will continue its work and will report to the HIT Standards Committee. I have the same hope for CCHIT and its certification mission...

National eHealth Collaborative posted its hopes and expectations: "The HIT Standards Committee established by the ARRA will bring central focus and urgency to the interoperability efforts needed for such a nationwide network through the development of national standards, and the Secretary of Health and Human Services was given the option by Congress to recognize the National eHealth Collaborative (NeHC) as this Committee. However, the ARRA sets an aggressive timetable, which understandably requires that HHS move forward with a nominations process even as we await confirmation of a new Secretary. The goals of NeHC and those of the HIT Standards Committee are highly complementary. NeHC's unique membership constitution represents the full spectrum of public and private sector e-health stakeholders, including consumers and patients, healthcare providers, employers and payers, government officials, information technology experts, quality improvement experts, public health researchers, and privacy advocates, and in this time of urgent need for economic and healthcare reform, an already established and cross- functioning group of experts would be a strong asset to the work envisioned by the legislation. We remain committed to this possibility, and look forward to quickly engaging in a discussion about this and other possible NeHC contributions with Secretary-Designate Kathleen Sebelius upon her confirmation..."

See also: the Healthcare Information Technology Standards Panel (HITSP)


Eclipse Pulsar Seeks Mobile App Dev Unity
Paul Krill, InfoWorld

Tackling the complicated issue of developing mobile applications for different platforms, the Eclipse Foundation is set to unveil a project to build a multivendor, unified platform for mobile development. But the effort thus far lacks the support of some major mobile players, including Microsoft and Apple. Called Pulsar, the initiative is intended to build a standard mobile application development tools platform. It is being led by vendors like Motorola and Nokia and seeks to make it easier to develop applications for different mobile systems. Although the individual platform technologies would not go away, Pulsar provides a unified platform for working with the individual vendor-specific technologies... A RIM executive stressed that Pulsar is intended to make life easier for developers who have to build applications for many devices and work with many development environments. "I think what Eclipse is driving and many of us have been focused on for a while is how do we make the developer's life in mobile a lot simpler," said Alan Brenner, senior vice president at RIM. Java Micro Edition will be a platform supported by Pulsar. But missing at this juncture is support for major mobile platforms such as Microsoft's Windows Mobile, Apple's iPhone, and Google's Android... Specific deliverables of the Pulsar effort include: (1) Development of a packaged distribution called Eclipse Pulsar Platform; (2) A technical road map to advance Pulsar's capabilities; (3) A set of best practices, including documentation and test suites; (4) Education and outreach to drive adoption of Pulsar. In conjunction with its participation in Pulsar, RIM has delivered version 1.0 of its BlackBerry JDE (Java Development Environment) Plug-in for Eclipse, enabling developers to build applications for the RIM BlackBerry device from within the Eclipse IDE...

See also: the Eclipse announcement


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation: http://www.ibm.com
Microsoft Corporation: http://www.microsoft.com
Oracle Corporation: http://www.oracle.com
Primeton: http://www.primeton.com
Sun Microsystems, Inc.: http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2009-03-10.html
Robin Cover, Editor: robin@oasis-open.org