The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: January 29, 2009
XML Daily Newslink. Thursday, 29 January 2009

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
IBM Corporation

Using Web 2.0 to Reinvent Your Business for the Economic Downturn
Dion Hinchcliffe, ZDNet Blog

At this point it's more than clear that 2009 will be a challenging year for a great many businesses. Most organizations are now actively looking at what they can do to make the best of the current economic situation... The good news is that most enterprises actually have a fair number of compelling options right now if they are willing to think outside the box. While some might look at the social aspects of things like Web 2.0 as marginal subjects when things get tough, nothing could be further from the truth when it comes to the deeper implications of Web 2.0 in the enterprise. Many of the more transformational aspects of the 2.0 era now have extensive groundwork laid for them, are available in genuinely enterprise-ready solutions/pilots, and many have just been waiting for the right situation: the driving need for businesses to change and transform in the face of radically different business conditions... Why is Web 2.0 particularly interesting right now for the enterprise? Web 2.0 has always been about making the most of the intrinsic power of the network and whatever is attached to it. This can be people (social computing and Enterprise 2.0), low-cost dynamic Web partners (open APIs and cloud computing), the world's largest database of information, lightweight integration (mashups and Web-style SOA), or maximizing the value of the network itself (the network effects that everyone talks about), and much more. These collectively represent better, more efficient, and less expensive ways to accomplish things that we previously did without the network's help, or with methods that didn't take advantage of how the network works... Note that the struggle with many of these, as with so much of Web 2.0, is that there is a major shift in control, a much higher level of transparency, and an openness that many businesses can be uncomfortable with.
However, for organizations that are willing to overcome these largely political, cultural, and mindset challenges, significant opportunities are available for the taking, often for relatively modest investment... (1) Move to lower-cost online/SaaS versions of enterprise applications; (2) Use Enterprise 2.0 to capture the knowledge and know-how of employees; (3) Strategically move IT infrastructure to the cloud; (4) Embrace new low-cost models for production such as crowdsourcing; (5) Lower customer service costs by pro-active use of online customer communities; (6) Reduce application development and integration time/expenditures with new platforms and techniques; (7) Open your supply chain to partners on the Web: 'By building on top of existing investment in SOA and enterprise architecture, your organization could open up its SOA to trading partners, something that CIOs have reported wanting to do en masse for several years now'; (8) Overhaul and reinvent paper and digital workflow...

Toward 2W, Beyond Web 2.0
T.V. Raman, [Author's Authorized Version from] CACM

"From its inception as a global hypertext system, the Web has evolved into a universal platform for deploying loosely coupled distributed applications. As we move toward the next-generation Web platform, the bulk of user data and applications will reside in the network cloud. Ubiquitous access results from interaction delivered as Web pages augmented by JavaScript to create highly reactive user interfaces. This point in the evolution of the Web is often called Web 2.0. In predicting what comes after Web 2.0 (what I call 2^W, a Web that encompasses all Web-addressable information), I go back to the architectural foundations of the Web, analyze the move to Web 2.0, and look forward to what might follow. For most users of the Internet, the Web is epitomized by the browser, the program they use to log on to the Web... [But] The notion of the Web as a new platform emerged in the late 1990s with the advent of sites providing a range of end-user services exclusively on the Web. Note that none of them had a parallel in the world of shrink-wrap software that had preceded the Web: Portal: Yahoo! Web directory; Shopping: Amazon online store; Auction: eBay auction site; Search: Google search engine. In addition to lacking a pre-Web equivalent, each of these services lived on the Web and, more important, exposed the services as simple URLs, an idea later known as REpresentational State Transfer, or RESTful, Web APIs. All such services not only built themselves on the Web, they became an integral part of the Web in the sense that every Google search, auction item on eBay, and item for sale on Amazon was URL addressable. URL addressability is an essential feature of being on the Web. The URL addressability of the new services laid the foundation for Web 2.0, that is, the ability to build the next generation of solutions entirely from Web components. The mechanism of passing-in parameters via the URL defined lightweight Web APIs.
Note that in contrast to all earlier software APIs, Web APIs defined in this manner led to loosely coupled systems... Beyond Web 2.0, here is where we stand: (1) The Web, which began as a global hypertext system, has evolved into a distributed application platform delivering final-form visual presentation and user interaction; (2) The separation between application logic and user interface enables late binding of the user interface, promising the ability to avoid a one-size-fits-all user interface; (3) More than URL-addressable content, the Web is a distributed collection of URL-addressable content and applications; (4) It is now possible to create Web artifacts built entirely from Web components; and (5) The underlying Web architecture ensures that when created to be URL-addressable, Web artifacts in turn become the building blocks for the next set of end-user Web solutions... The Web has evolved from global hypertext system to distributed platform for end-user interaction. Users access it from a variety of devices and rely on late binding of the user interface to produce a user experience that is best suited to a given usage context. With data moving from individual devices to the Web cloud, users today have ubiquitous access to their data. The separation of the user interface from the data being presented enables them to determine how they interact with the data. With data and interaction both becoming URL-addressable, the Web is now evolving toward enabling users to come together to collaborate in ad-hoc groups that can be created and dismantled with minimal overhead. Thus, a movement that started with the creation of three simple building blocks (URL, HTTP, HTML) has evolved into the one platform that binds them all.
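[Editor's sketch: the "parameters via the URL" pattern Raman describes can be illustrated in a few lines. The host name and parameter names below are invented for illustration; the point is that the entire request is a plain, shareable URL that any client can construct or decompose without vendor stubs.]

```python
from urllib.parse import urlencode, urlsplit, parse_qs

def build_search_url(base, **params):
    """Encode request parameters into a URL-addressable query string."""
    return base + "?" + urlencode(sorted(params.items()))

# The request IS the URL -- this is the lightweight Web API idea.
url = build_search_url("https://search.example.com/find",
                       q="lost protocol", page=2)
print(url)  # https://search.example.com/find?page=2&q=lost+protocol

# Any party holding the URL can recover the request parameters,
# which is what makes such services loosely coupled.
params = {k: v[0] for k, v in parse_qs(urlsplit(url).query).items()}
print(params)
```

Because the query string alone carries the request, the result of such a call is itself URL-addressable and can become a building block for the next service, as the article argues.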

Synchronizing Location-to-Service Translation (LoST) Protocol Based Service Boundaries and Mapping Elements
Henning Schulzrinne and Hannes Tschofenig (eds), IETF Internet Draft

Members of the IETF Emergency Context Resolution with Internet Technologies (ECRIT) Working Group have released an updated Internet Draft for "Synchronizing Location-to-Service Translation (LoST) Protocol Based Service Boundaries and Mapping Elements." Section 9 presents the RELAX NG XML grammar. The LoST (Location-to-Service Translation) protocol (RFC 5222) maps service identifiers and geodetic or civic location information to service URIs. As specified in the LoST architecture description, there are a variety of LoST servers that cooperate to provide a ubiquitous, globally scalable and resilient mapping service. The LoST protocol specification only describes the protocol used for individual seeker-originated queries. This document allows forest guides, resolver clusters and authoritative servers to synchronize their databases of mappings. It is often desirable to allow users to access a service that provides a common function, but is actually offered by a variety of local service providers. In many of these cases, the service provider chosen depends on the location of the person wishing to access that service. Among the best-known public services of this kind is emergency calling, where emergency calls are routed to the most appropriate public safety answering point (PSAP), based on the caller's physical location. Other services, from food delivery to directory services and roadside assistance, also follow this general pattern. This is a mapping problem, where a geographic location and a service identifier (URN) are translated into a set of URIs, the service URIs, that allow the Internet system to contact an appropriate network entity that provides the service... The overall emergency calling architecture separates mapping from placing calls or otherwise invoking the service, so the same mechanism can be used to verify that a mapping exists ("address validation") or to obtain test service URIs...
The Location-to-Service Translation (LoST) protocol is an XML-based protocol for mapping service identifiers and geodetic or civic location information to service URIs and service boundaries. In particular, it can be used to determine the location-appropriate Public Safety Answering Point (PSAP) for emergency services. The main data structure, the XML 'mapping' element, used for encapsulating information about service boundaries is defined in the LoST protocol specification and circumscribes the region within which all locations map to the same service URI or set of URIs for a given service. This document defines an XML protocol to exchange these mappings between two nodes. As motivated in the Location-to-URL Mapping Architecture document, this mechanism is useful for the synchronization of top-level LoST Forest Guides. This document is, however, also useful in a deployment that does not make use of the LoST protocol but simply wants to distribute service boundaries.
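[Editor's sketch: a minimal LoST 'findService' query, to make the URN-plus-location-to-URI mapping concrete. The element names and namespaces follow RFC 5222 and the geopriv civic-address schema to the best of the editor's recollection; the address values are invented, so treat this as illustrative rather than normative.]

```python
import xml.etree.ElementTree as ET

LOST_NS = "urn:ietf:params:xml:ns:lost1"  # LoST namespace per RFC 5222

# A minimal <findService> request: a civic location plus a service URN,
# which a LoST server would resolve to the service URI of the
# appropriate PSAP. All address values here are illustrative.
request = f"""
<findService xmlns="{LOST_NS}" serviceBoundary="reference">
  <location id="loc1" profile="civic">
    <civicAddress xmlns="urn:ietf:params:xml:ns:pidf:geopriv10:civicAddr">
      <country>US</country>
      <A1>NY</A1>
      <A3>New York</A3>
    </civicAddress>
  </location>
  <service>urn:service:sos.police</service>
</findService>
"""

root = ET.fromstring(request)
service = root.find(f"{{{LOST_NS}}}service")
print(service.text)  # urn:service:sos.police
```

The synchronization draft summarized above exchanges the resulting 'mapping' elements (service boundary plus service URI) between servers in bulk, rather than answering one seeker query at a time.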

See also: the IETF ECRIT Working Group

U.S. HHS Dept Adopts New Rules To Coordinate Health Care Technology
Gautham Nagesh, NextGov

The U.S. Health and Human Services Department (HHS) has announced several new interoperability standards for health care information technology, paving the way for nationwide adoption of electronic health records. Three new sets of IT standards aimed at enabling diverse systems to talk to one another took effect on January 16, 2009, according to a notice in the January 21, 2009 Federal Register. The standards are mandatory for all federal agencies implementing any type of health care information technology system. Dr. John Halamka, chief information officer for Harvard University's Medical School and chairman of the Healthcare Information Technology Standards Panel, which established the standards, said the announcement means that a lack of uniformity among IT standards is no longer a barrier to the creation of national e-health care records. President Obama has said establishing e-health records is a priority for his administration. The three groups of standards use XML tags to define data elements common to all medical records. Those elements are searchable by any network user seeking a particular piece of information. A spokesperson for HITSP said the new standards use existing technology available on the market. The notice in the Federal Register outlines interoperability standards for electronic health records used by emergency first responders, consumers seeking to download their own medical records, and organizations examining the quality of medical care provided by hospitals, providers and other groups. Halamka said there are about 60,000 data elements in the average medical record, but not all of them are necessary for every search. The panel examined the possible uses for electronic health care data and determined which elements would be most helpful to providers, researchers and public health organizations.
The first set of new standards addresses how emergency responders access the electronic health records of patients involved in a mass incident such as a terrorist attack or natural disaster. It describes how to obtain a person's lifetime medical records, including a list of medical problems and history, and transmit that information to emergency responders without compromising security or privacy. The second set of standards relates to how individuals access their own electronic health records and transmit them using a storage device such as a thumb drive or DVD... It establishes a standard level of encryption and requires password protection to call up the records. In addition, it develops an audit trail so users can track how many times a particular record has been downloaded. The third group of standards is aimed at researchers and public health organizations seeking to use medical records to track patient care and make sure health care providers are using best practices... [Note from David Staggs: "The Notice specifies several OASIS standards but does not include profiles that provide the specificity required of a standards-based interoperable cross-enterprise authorization exchange. Our work at HIMSS will concentrate the expertise required to formalize these necessary profiles. HITSP continues to follow our progress closely and is expected to update TP20 with the XSPA profiles once they become OASIS standards. A future Federal Register notice is expected to trigger legal obligations to use the XSPA profiles under the conditions cited in the Notice..."]
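[Editor's sketch: the "XML tags define common data elements, which makes them searchable" point can be shown in miniature. The tag names below are invented for this sketch and are not drawn from the HITSP specifications; the codes are standard ICD-10 problem codes used purely as sample data.]

```python
import xml.etree.ElementTree as ET

# A toy record using uniform, agreed-upon element names.
record = ET.fromstring("""
<patientRecord>
  <problem code="I10">Essential hypertension</problem>
  <medication code="C09AA02">Enalapril</medication>
  <problem code="E11">Type 2 diabetes</problem>
</patientRecord>
""")

# Because every system tags problems the same way, a consumer can pull
# just the elements relevant to its search out of a much larger record.
problems = [p.get("code") for p in record.iter("problem")]
print(problems)  # ['I10', 'E11']
```

This is the sense in which shared tag vocabularies let a searcher retrieve a handful of relevant elements from the roughly 60,000 in an average record, without understanding the rest.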

See also: XML and Healthcare

AIIM's iECM Committee: Validating CMIS
Laurence Hart, Blog

"AIIM's iECM committee is taking on the creation of a prototype, CMIS-based, system to store the presentations from the 2009 AIIM Expo (March 30 - April 2, 2009, Pennsylvania Convention Center, Philadelphia, PA). The basic premise is to have one or more CMIS back-end systems storing content with a central interface that would provide content, seamlessly, to users. Rather than explain the details, I'm publishing the official write-up. Before you dive in, if you are a vendor with a CMIS implementation, we want to speak to you... The iECM committee is in the process of evaluating the new Content Management Interoperability Services (CMIS) protocol, which allows CMS integrators and other users to create federated and distributed ECM systems by combining the power of multiple CMSs, even those of multiple vendors, into a system of systems that makes all the content of those systems available to end users as though they were one system. The members of the iECM committee and the corporations, government agencies, and other organizations that employ and sponsor those members are asking the vendors that are supporting the CMIS effort, and the organizers of the AIIM Expo, to cooperate with us, combine our resources to demonstrate the power of ECM in general, and evaluate the usefulness of the proposed CMIS standard. The CMIS demonstration system would be made available prior to the AIIM Expo, and updated with additional related information during and after the event. Access to the content would use the same security mechanisms currently used for allowing AIIM Expo attendees access to the content, the difference being that the content would be more easily found and navigated by using best-of-breed CMS technologies...
The iECM committee is asking: (1) That the AIIM Expo 2009 organizers make available copies of the content discussed above, as it becomes available, so that the iECM committee can make it available to AIIM Expo attendees using the same user name/password protections that are currently used for access to this information. (2) That the CMIS vendors supply instances of their ECM/CMS products and their CMIS implementations (even those that are not yet available to the general CMS community), preferably instances hosted on Internet-accessible servers under the control of the individual vendors, but which the iECM committee can remotely administer and populate. The content would be divided into logical subsets, each to be hosted by one of the distributed CMS systems. Access to the entire collection will be via a CMIS-based integration federator, with a Web-based thin-client interface..."
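[Editor's sketch: the federator design described above, reduced to its essentials. A real deployment would speak the CMIS bindings (AtomPub or Web Services) to each back end; the 'Repository' class here is a stand-in for a vendor system, and all names and content are invented.]

```python
class Repository:
    """Stand-in for one vendor's CMS back end holding a content subset."""
    def __init__(self, name, docs):
        self.name = name
        self._docs = docs  # title -> content

    def query(self, term):
        return [t for t in self._docs if term.lower() in t.lower()]

class Federator:
    """Central interface presenting many repositories as one system."""
    def __init__(self, repos):
        self.repos = repos

    def query(self, term):
        # Fan the query out to every back end and merge the hits, so the
        # user never needs to know which vendor system holds an item.
        hits = []
        for repo in self.repos:
            hits += [(repo.name, title) for title in repo.query(term)]
        return hits

fed = Federator([
    Repository("vendorA", {"ECM Basics": "..."}),
    Repository("vendorB", {"Advanced ECM": "...", "Keynote": "..."}),
])
print(fed.query("ecm"))
```

The value of a standard like CMIS is precisely that the fan-out loop above can address every vendor back end through one protocol instead of one adapter per product.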

Optaros and MuleSource Help Nespresso With Next-Generation SOA Solution
Dilip Krishnan, InfoQ

Nestlé's Nespresso SA division, which is headquartered in Paudex, Switzerland, recently announced the successful completion of the first phase of their SOA initiative 'NesOA' in just six months. Optaros and MuleSource helped define and implement a new middleware architecture called 'Nespresso Open Architecture', or NesOA. Nespresso Enterprise Architect Joel Schmitt: "We are committed to an open source approach, including MuleSource's Mule ESB, because complying with open standards is the key for future extensibility and growth..." [But, Krishnan observes: Based on that statement it seems there might be some ambiguity about the benefits offered by open source vs. open standards; both have their own merits but are not necessarily complementary. Given the integration with a multitude of channels, it is very important that the ESB supports the latest WS-* standards, or Web standards in the case of RESTful endpoints, for interoperability.] Schmitt: "Though Open Source is usually leading innovation when speaking about Open Standards, it is true there is no complete overlap. For example, Mule ESB does not rely on the JBI standard, and we are still using it. Open Source and Open Standards are part of the strategy as they both guarantee vendor independence and ease the integration with a variety of systems and different integration patterns. In terms of endpoints, we are looking to support both WS-* and RESTful endpoints, and Mule/JBoss offers that flexibility today. Some integration requirements are about Services, some more about Resources, some more about Messages—we intend to address all of them... A standard integration platform does not imply a central instance, and we are open to different deployment models (Mule ESB enables us to implement a fairly distributed model); also, having an ESB allows us to create both corporate standards-based services as well as customized facades to them (within limits...), making the integration effort minimal for all parties..."
