XML Daily Newslink. Thursday, 13 December 2007

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Sun Microsystems, Inc. http://sun.com



Why Revise HTTP?
Mark Nottingham, Blog

I haven't talked about it here much, but I've spent a fair amount of time over the last year and a half working with people in the IETF to get RFC 2616 (the HTTP specification) revised. HTTP started as a protocol just for browsers, and its task was fairly simple. Yes, persistent connections and ranged requests make things a bit more complex, but the use cases were relatively homogeneous almost a decade ago, and the people doing the implementations were able to assure interop for those common cases. Now, a new generation of developers is using HTTP for things that weren't even thought of then; AJAX, Atom, CalDAV, 'RESTful Web Services' and the like push the limits of what HTTP is and can do. The dark corners that weren't looked at very closely in the rush to get RFC 2616 out are now coming to light, and cleaning them up now will help these new uses, rather than encourage them to diverge in how they use HTTP. So, while the focus of the WG is on implementors, to me that doesn't just mean Apache, IIS, Mozilla, Squid and the like; it also means people using HTTP to build new protocols, like OAuth and the Atom Publishing Protocol. It means people running large Web sites that use HTTP in not-so-typical ways. Another reason to revise HTTP is that there are a lot of things the spec doesn't say. [citation via James Clark's blog]

See also: the earlier news item


Standards for Personal Health Records
John Halamka, Blog

I have identified the four major types of Personal Health Records: provider-hosted, payer-based, employer-sponsored and commercial. As more products are offered, it's key that all the stakeholders involved embrace national healthcare data standards to ensure interoperability of the data placed in personal health records. To illustrate the point, I am posting my entire lifelong medical record on my blog (this is with my consent, so there are no HIPAA issues) in two ways. The first is a PDF which was exported from a leading electronic health record system. It's 77 pages long and contains a mixture of clinical data, administrative data, normal and abnormal results, numeric observations, and notes. It's a great deal of data, but it is very challenging to understand, since it does not provide an organized view of the key elements a clinician needs to provide me ongoing care. It is not semantically interoperable, which means that it cannot be read by computers to offer me or my doctors the decision support that will improve my care. The second is a Continuity of Care Document, using the national Health Information Technology Standards Panel (HITSP) interoperability specifications. It uses "Web 2.0" approaches, is XML-based, machine- and human-readable, and uses controlled vocabularies enabling computer-based decision support. Today (December 13), HITSP will deliver the harmonized standards for Personal Health Records, Labs, Emergency Records, and Quality measurement to HHS Secretary Leavitt. These "interoperability specifications" will become part of Federal contracting language and be incorporated into vendor system certification criteria (CCHIT) over the next two years.
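To make the contrast concrete, here is a minimal sketch in Python with lxml, assuming a heavily simplified (and deliberately non-conformant) CCD-style fragment: because sections and codes are drawn from controlled vocabularies, software can query the record directly in a way it never could with the 77-page PDF.

```python
from lxml import etree

# Heavily simplified CCD-style fragment, for illustration only.
ccd = etree.fromstring(b"""
<ClinicalDocument xmlns="urn:hl7-org:v3">
  <component><structuredBody><component><section>
    <code code="48765-2" codeSystem="2.16.840.1.113883.6.1"
          displayName="Allergies, adverse reactions, alerts"/>
    <title>Allergies</title>
  </section></component></structuredBody></component>
</ClinicalDocument>""")

NS = {"hl7": "urn:hl7-org:v3"}
for code in ccd.xpath("//hl7:section/hl7:code", namespaces=NS):
    # codeSystem names the controlled vocabulary; decision-support software
    # matches on these codes rather than on prose.
    print(code.get("code"), code.get("displayName"))
```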

See also: CCR references


Validation by Projection
David Orchard, Blog

Many of the architectures and strategies for validation apply validity checking to a particular document with a pass or fail result on the document. This assumes that the schemas used in validation are expressive enough for all the potential versions of documents, including any extensions. We've regularly seen that the Schema 1.0 wildcard limits the ability to fully describe documents. For example, it is impossible to have a content model that has optional elements in multiple namespaces with a wildcard at the end; the choice is to have either the wildcard or the elements. There is another approach to validation, called validation by projection, which effectively removes any unknown content prior to validation. It is validation of a projection of the XML document, where the projection is a subset of the XML document with no other modifications to the contents, including order. Part of validation by projection is determining what to project. The simplest rule for determining what to project is: starting at the root element, project any attributes and any elements that match elements in the content model of the current complexType, and recurse into each element. [Author's note to W3C TAG: I wrote up a couple of personal blog entries on validation by projection. This seems to be a useful way of achieving forwards and backwards compatibility without relying upon schemas that have wildcards or open content models. From the TAG's definitional perspective, I'd characterize validation by projection as an architecture where the schema(s) define a Defined Text Set and an Accept Text Set equal to the Defined Text Set; the process of projection is then the creation and validation of the text against a generated Accept Text Set consisting of the original Accept Text Set plus all possible extra undefined elements and attributes.]
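Here is a rough sketch of that rule in Python with lxml; the element names, the hand-written KNOWN table, and the schema file are hypothetical, and this is not Orchard's implementation. Unknown children are projected away, and the projection, not the original, is handed to an ordinary schema validator.

```python
from copy import deepcopy
from lxml import etree

# Hypothetical content-model table: parent QName -> declared child QNames.
# In practice this would be derived from the schema rather than hand-written.
KNOWN = {
    "{urn:example:v1}order": {"{urn:example:v1}id", "{urn:example:v1}item"},
    "{urn:example:v1}item":  {"{urn:example:v1}name", "{urn:example:v1}price"},
}

def project(elem):
    """Drop children the content model does not declare; recurse into the rest."""
    allowed = KNOWN.get(elem.tag, set())
    for child in list(elem):
        if child.tag in allowed:
            project(child)
        else:
            elem.remove(child)   # unknown extension content is ignored, not fatal
    return elem

def validate_by_projection(doc_bytes, schema):
    original = etree.fromstring(doc_bytes)
    projected = project(deepcopy(original))   # validate a projection, never the original
    return schema.validate(etree.ElementTree(projected))

# schema = etree.XMLSchema(etree.parse("order-v1.xsd"))            # hypothetical v1 schema
# ok = validate_by_projection(open("order-v2.xml", "rb").read(), schema)
```

Attributes could be projected the same way; the sketch implements only the element half of the rule quoted above.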

See also: Validation by Projection implementations


Spring Integration Boosts Service-Based Apps
Paul Krill, InfoWorld

SpringSource, the keepers of the popular Spring Framework for Java application development, this week introduced Spring Integration, a framework to ease enterprise integration. Spring Integration is intended to assist Spring users in building service-oriented, message-driven applications. It offers a model for building message-driven systems by encapsulating internal complexities of these systems so business components can be declaratively configured without knowledge of the integration infrastructure, according to SpringSource, which was formerly known as Interface21. The open-source framework handles message-listening and service-invoking aspects and applies Inversion of Control principles to the runtime arena, SpringSource said. Inversion of Control principles in Spring Integration pertain to the framework taking over the linking of activities. Spring Integration, for example, could be used in a stock ordering system that handles data arriving via both HTTP and messaging. Also part of Spring Integration are adapters to integrate with common input and output sources. Spring Integration builds on functionality in the core Spring Framework, including support for Java Message Service, aspect-oriented programming, event publication and subscription, and transaction management. Written in Java, Spring Integration features extension points, including input and output adapters; content-based routers; and message filters and translators. Version 1.0 of the framework is due in the second quarter of 2008.
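As a rough illustration of the messaging pattern only (plain Python rather than Spring Integration's own, still pre-1.0 API; all names below are invented), a content-based router keeps the business handler unaware of whether an order arrived over HTTP or over messaging:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Message:
    headers: Dict[str, str]
    payload: dict

class Router:
    """Content-based router: picks a handler from message content."""
    def __init__(self) -> None:
        self._routes: Dict[str, Callable[[Message], None]] = {}

    def register(self, channel: str, handler: Callable[[Message], None]) -> None:
        self._routes[channel] = handler

    def route(self, message: Message) -> None:
        # The router, not the business code, decides where the message goes;
        # that hand-off is the inversion-of-control point the article mentions.
        self._routes[message.headers.get("source", "default")](message)

router = Router()
router.register("http", lambda m: print("web order:", m.payload))
router.register("jms", lambda m: print("queued order:", m.payload))
router.route(Message(headers={"source": "jms"}, payload={"symbol": "IBM", "qty": 100}))
```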


Building a Grid System Using WS-Resource Transfer, Part 5: Using WS-RT for Work Distribution
Martin Brown, IBM developerWorks

The WS-RT standard provides a new method for accessing and exchanging information about resources between components. It is designed to enhance the WS-Resource Framework (WSRF) and build on the WS-Transfer standards. The WS-RT system extends previous resource solutions for Web services and makes it easy not only to access resource information by name but also to access individual elements of a larger data set through the same mechanisms, by exposing elements of an XML data set through the Web services interfaces. In any grid, there is a huge amount of metadata about the grid that needs to be stored and distributed. Using WS-RT makes sharing the information, especially the precise information required by different systems in the grid, significantly easier. This article concludes the five-part "Building a grid system using WS-Resource Transfer" series. Let's revisit some key elements of the WS-RT system and how we've used it throughout the series as a flexible solution for different grid scenarios. The key to the WS-RT system is the flexible method with which we can create and recover information within its repository. Technically, WS-RT is not seen as a general-purpose solution for the storage and recovery of information, but in fact the XML structure, and the ease with which we can process information by using the QName and XPath dialects to extract and update it, make it a flexible and easy-to-manipulate system for information storage and distribution. It can be used on a number of levels, as we've seen throughout the series, from the fundamentals of information storage to the organization and definition of security information, and for the distribution of work throughout the grid system. Using the flexible nature of WS-RT makes the distribution of work easy and allows us to bypass some of the problems and limitations that exist in other grid systems.
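To illustrate the access pattern only (this is not the WS-RT wire format, and the grid document below is invented), the fragment-level access the article describes amounts to addressing one element of a larger resource by expression, as in this small Python/lxml sketch:

```python
from lxml import etree

# Hypothetical grid-metadata resource; in WS-RT this document would sit behind
# a Web services interface rather than in local memory.
resource = etree.fromstring(b"""
<grid xmlns="urn:example:grid">
  <node id="n1"><load>0.42</load></node>
  <node id="n2"><load>0.07</load></node>
</grid>""")

NS = {"g": "urn:example:grid"}

# Select one fragment of the larger data set by expression, analogous to a
# WS-RT Get that uses an XPath dialect to address part of the resource.
idle_nodes = resource.xpath("//g:node[g:load < 0.1]/@id", namespaces=NS)
print(idle_nodes)   # ['n2']
```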

See also: Web Services Resource Transfer (WS-ResourceTransfer)


XBRL Reaches Marquee Companies
Sharon Linsenbach, eWEEK

Ford, General Electric, Infosys and Microsoft are already using Extensible Business Reporting Language (XBRL) tags to file financial reports. Can a full SEC mandate be far off? With the release of new XBRL taxonomies and Microsoft's announcement Dec. 6 that it used the technology to file its quarterly earnings report to the Securities and Exchange Commission, XBRL is proving it is mature enough to warrant widespread attention and adoption. Microsoft is currently one of only 61 companies to voluntarily use XBRL to make SEC filings, said Rob Blake, senior director of Interactive Services at Bowne and Co. and a founding member of the XBRL consortium, which was founded in 1999 to develop and maintain the language. Those companies include Bowne itself, as well as Ford, General Electric and Infosys, Blake said. The Federal Deposit Insurance Corporation has also been using XBRL for two years, Blake said. "Every financial institution in the U.S. that's regulated by the FDIC has been using this language" to submit financial information to the FDIC, he said. But some in the financial services industry already speculate that the SEC is leaning towards mandating the use of the language in reporting and filings. XBRL is likened to an XML schema, or digital "bar code," which lets companies represent their data in a format easily and quickly understood and processed by computers. The language ensures that companies can accurately transmit financial data internally and to investors, analysts and the SEC. The new taxonomies, based on GAAP (Generally Accepted Accounting Principles) and released December 5, 2007, broaden and deepen the types of data to which XBRL can be applied, making the language more accessible for companies across a broader industry spectrum.
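A toy example of that "bar code" idea (the namespace, context, and figure below are invented and do not form a conformant XBRL instance): each reported value travels inside a taxonomy-defined tag with a context and unit, so software reads the number and its meaning without re-keying data from a PDF.

```python
from lxml import etree

# Invented, non-conformant fragment in the spirit of an XBRL fact.
fact = etree.fromstring(
    b'<Revenues xmlns="urn:example:us-gaap" '
    b'contextRef="FY2007Q4" unitRef="USD" decimals="-6">12345000000</Revenues>')

# The tag name, context, and unit accompany the value itself.
print(etree.QName(fact).localname, fact.get("contextRef"),
      fact.get("unitRef"), fact.text)
```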


That's ISO not I-S-O
Joab Jackson, Government Computer News Tech Blog

The next time you're talking about standardization and the International Organization for Standardization comes up, be sure to pronounce it as [English /eye-so/] "Iso" and not "I-S-O." We say this because ISO does not, in fact, stand for the International Organization for Standardization (or the International Standardization Organization, which doesn't even exist). We heard this neat tidbit at the XML 2007 conference, held in Boston last week. Ken Holman, who this week steps down from his role as international secretary of the ISO subcommittee responsible for the Standard Generalized Markup Language (SGML), gave a briefing on ISO and related matters during the conference's lightning round sessions Tuesday night. He noted that the ISO name actually comes from "iso," the Greek prefix for equal. For instance, isometric refers to equality of measurement... Holman dropped another tidbit during his talk as well. We may see a new ISO/IEC working group devoted to office document formats, such as the OpenDocument Format and the Microsoft Office Open XML standard. First some hierarchy needs to be explained. ISO works on a wide variety of standards, on everything from medical equipment to film (ISO 400, ISO 200, etc.). In many information technology standards designations, we'll see ISO in conjunction with IEC. For instance, ISO/IEC 13818 is the internationally approved designation for MPEG-2. The two bodies often work together on IT standards. The International Electrotechnical Commission (IEC) was founded a little over 100 years ago (by none other than Lord Kelvin, among others!) to standardize the then-emerging field of electrical componentry. Both IEC and ISO were doing work in IT, so in order to eliminate duplication they founded a joint body, called the Joint Technical Committee (JTC 1), the only joint technical committee between the two organizations. JTC 1 has a number of subcommittees, handling standards for everything from biometrics to user interface conventions. SC34 is the committee that begat SGML, which in turn begat XML... SC34 itself has a number of different working groups. WG 1 handles the data types and character types for XML documents. WG 2 handles the presentation of documents, including font management and the like. WG 3 took the World Wide Web Consortium's Hypertext Markup Language specification and made it an international standard.

See also: the ISO/IEC JTC1/SC34 Web Server


Adobe To Open Source Data-Connection Technology for Rich Internet Apps
Antone Gonsalves, InformationWeek

Adobe Systems announced that it plans to contribute to the open source community its technology for connecting rich Internet applications (RIAs) to backend data sources. Under the plan, the source code for Adobe's messaging, data, and remote procedure call services would be packaged under a new open-source product called BlazeDS. The technology, along with Adobe's Action Message Format (AMF) protocol specification, would be available under the Lesser General Public License, or LGPL v3. The technologies, which are available in public beta on Adobe's labs site, are part of Adobe's LiveCycle Data Services Enterprise Suite, which provides some backend connectivity for applications built in Adobe's Flex platform for developing and deploying RIAs. Flex software runs on Adobe's runtime environment called AIR. In creating BlazeDS, Adobe apparently is looking to open source developers to help build better backend connectivity for RIAs. Data access technologies within LiveCycle Data Services today are particularly aimed at tying RIAs with the document and forms management of other products in the suite. Going forward, BlazeDS developers can use the technology to add data access to RIAs for "real-time collaboration and data-push capabilities" found in guided self-service, live help, and other applications.


Flickr Upload Tool Turns 3.0, Goes Open-Source
Stephen Shankland, CNet News.com

Flickr has released a new version of its tool for uploading photos to the Yahoo photo-sharing site, and made it an open-source program in the process. Flickr Uploadr 3.0, available for Mac OS X 10.4 and 10.5 and for Windows XP and Vista, is now available in source code form, too, governed by version 2 of the General Public License (GPL). Open-source software may be freely modified, copied, and shared; opening the source code could let programmers modify the Uploadr tool so it works on Linux or uploads to other photo-sharing sites, for example. Uploadr lets photographers select photos for upload, add tags, organize them into sets, and change privacy settings. Among the changes in Version 3 is the ability to set the photo order in sets and to add new photos to the upload queue while others are in the process of being transferred. Flickr Stats shows whence visitors came to look at your photos, either from within Flickr or from elsewhere on the Web. Stats also shows totals for recent viewings of photos and compiles data such as how many photos have tags, geotags, and comments. Views of your photos can be sorted by viewing totals, comments, favorite status, and the ever-elusive "interestingness" ranking.

See also: Flickr and Creative Commons licenses


Selected from the Cover Pages, by Robin Cover

W3C Forms Emergency Information Interoperability Framework Incubator Group

W3C has announced the formation of a new Emergency Information Interoperability Framework Incubator Group as part of the W3C Incubator Activity. The Group has been chartered through 01-December-2008 to "review and analyse the current state-of-the-art in vocabularies used in emergency management functions and to investigate the path forward via an emergency management systems information interoperability framework. These activities will lay the groundwork for a more comprehensive approach to ontology management and semantic information interoperability leading to a proposal for future longer-term W3C Working Group activity." The EIIF Incubator Group will primarily conduct its work on the public mailing list 'public-xg-eiif'. The XG's initial chairs are Renato Iannella (NICTA) and Chamindra de Silva (Lanka Software Foundation/Virtusa). Initiating Members of the EIIF Incubator Group include National ICT Australia (NICTA) Ltd, Google, Swedish Institute of Computer Science (SICS), and IBM Corporation. The Emergency Information Interoperability Framework Incubator Group "will form a strong liaison to the OASIS Emergency Management Technical Committee, via the Chair, who is a member of that group. The XG intends to collect and categorize numerous emergency management related vocabularies and in the process will gain a comprehensive picture of the key stakeholders. This will include other standards groups, national and international emergency management groups, and international resilience and relief organisations."


Sponsors

XML Daily Newslink and Cover Pages are sponsored by:

BEA Systems, Inc.  http://www.bea.com
EDS  http://www.eds.com
IBM Corporation  http://www.ibm.com
Primeton  http://www.primeton.com
SAP AG  http://www.sap.com
Sun Microsystems, Inc.  http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2007-12-13.html
Robin Cover, Editor: robin@oasis-open.org