Last modified: December 27, 2007
XML Daily Newslink. Thursday, 27 December 2007

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
IBM Corporation

XML Moves to mySQL
Kurt Cagle, O'Reilly XML Blog

The unification of XML and SQL relational data has taken another significant step forward recently with the introduction of significant new XML functionality in mySQL, the world's most popular open source database. In versions 5.1 and 6.0, mySQL adds the ability to retrieve tables (and JOINs) as XML results, to retrieve SQL schemas as XML files, to select content via a subset of XPath and to update content using similar functions, and the like. I think the ramifications of this are actually quite huge. I've known for some time that much of the driving technology behind Web 2.0 is the power of SQL databases, with the bulk of those to date being mySQL databases. While enterprise-level databases such as Oracle 10i+, IBM DB2, and Microsoft SQL Server have long had XML capabilities, they also account collectively for a surprisingly small share of the outward-facing databases on the web, especially compared to mySQL. However, this has also had the unfortunate effect of promoting the relational database model as the prime one for the web, diminishing the utility of XML there and increasing the fragility of Web 2.0 applications. With native XML support moving into mySQL, it opens up a chance for XML developers to start working within that community, and also raises some significant issues with regard to how unstructured and semi-structured data is stored, retrieved, and manipulated... The XML support in mySQL is not yet at the level where it can support XQuery, but I think that this will come in time given the degree of support they have for the XPath specification. Keep an eye on this development. Related reference: Jon Stephens, "Using XML in MySQL 5.1 and 6.0."
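One of the new capabilities, returning result sets as XML, is exposed through the command-line client's --xml option. Below is a small Python sketch of consuming such output, assuming the resultset/row/field layout the client produces; the sample document is illustrative, not captured from a live server:

```python
import xml.etree.ElementTree as ET

# Sample output in the style produced by `mysql --xml -e "SELECT id, name FROM users"`.
# The resultset/row/field layout is illustrative of the client's XML format.
SAMPLE = """\
<resultset statement="SELECT id, name FROM users">
  <row><field name="id">1</field><field name="name">alice</field></row>
  <row><field name="id">2</field><field name="name">bob</field></row>
</resultset>"""

def rows_as_dicts(xml_text):
    """Turn a mysql --xml result set into a list of column->value dicts."""
    root = ET.fromstring(xml_text)
    return [{f.get("name"): f.text for f in row.findall("field")}
            for row in root.findall("row")]

print(rows_as_dicts(SAMPLE))
# → [{'id': '1', 'name': 'alice'}, {'id': '2', 'name': 'bob'}]
```

Consuming the XML form of a result set this way decouples downstream processing from the database client libraries, which is part of the appeal the article describes.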

See also: Jon Stephens' article

W3C Drafts for XML Interchange (EXI): Format, Best Practices, Primer
John Schneider and Takuki Kamiya (eds), W3C Technical Reports

W3C's Efficient XML Interchange (EXI) Working Group recently published three documents. First Public Working Drafts have been issued for "Efficient XML Interchange (EXI) Best Practices" and "Efficient XML Interchange (EXI) Primer." An updated WD is available for the "Efficient XML Interchange (EXI) Format 1.0" specification. Efficient XML Interchange (EXI) is a very compact representation for the Extensible Markup Language (XML) Information Set that is intended to simultaneously optimize performance and the utilization of computational resources. The EXI format uses a hybrid approach drawn from information theory and formal language theory, plus practical techniques verified by measurements, for entropy encoding XML information. Using a relatively simple algorithm, which is amenable to fast and compact implementation, and a small set of data types, it reliably produces efficient encodings of XML event streams. The event production system and format definition of EXI are presented. The "Best Practices" document provides explanations of format features and techniques to support interoperable information exchanges using EXI. While intended primarily as a practical guide for systems architects and programmers, it also presents information suitable for the general reader interested in EXI's intended role in the expanding Web. The "EXI Primer" is a non-normative document intended to provide an easily readable technical background on the Efficient XML Interchange (EXI) format. It is oriented towards quickly understanding how the EXI format can be used in practice and how options can be set to meet specific needs. Section 2 "Concepts" describes the structure of an EXI document and introduces the notions of EXI header, EXI body and EXI grammar which are fundamental to the understanding of the EXI format. Additional details about data type representation, compression, and their interaction with other format features are presented.
Section 3 "Efficient XML Interchange by Example" provides a detailed, bit-level description of a schema-less example.
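EXI's event abstraction can be illustrated with a toy sketch: the format views an XML document as a stream of events (SE for start element, AT for attribute, CH for characters, EE for end element) to which its grammars assign compact codes. The Python below only produces such an event stream from a parsed document; the actual EXI encoding of that stream is far more involved:

```python
import xml.etree.ElementTree as ET

# Toy illustration of EXI's event abstraction. Event names (SE, AT, CH, EE)
# follow the spec's terminology; everything else here is a simplification.
def event_stream(elem):
    yield ("SE", elem.tag)
    for name, value in elem.attrib.items():
        yield ("AT", name, value)
    if elem.text and elem.text.strip():
        yield ("CH", elem.text.strip())
    for child in elem:
        for ev in event_stream(child):
            yield ev
    yield ("EE",)

doc = ET.fromstring('<note id="1"><to>Alice</to></note>')
print(list(event_stream(doc)))
# → [('SE', 'note'), ('AT', 'id', '1'), ('SE', 'to'), ('CH', 'Alice'), ('EE',), ('EE',)]
```

An EXI grammar then assigns short codes to the events it expects at each point, which is where the compactness comes from.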

See also: the W3C Efficient XML Interchange Working Group

A Document Format for Expressing Authorization Policies to Tackle Spam and Unwanted Communication for Internet Telephony
Hannes Tschofenig (et al., eds); IETF Internet Draft

Members of the IETF SIPPING Working Group have published an updated draft defining SPIT authorization documents that use SAML. The problem of SPAM for Internet Telephony (SPIT) is an imminent challenge and only the combination of several techniques can provide a framework for dealing with unwanted communication. The responsibility for filtering or blocking calls can belong to different elements in the call flow and may depend on various factors. This document defines an authorization-based policy language that allows end users to upload anti-SPIT policies to intermediaries, such as SIP proxies. These policies mitigate unwanted SIP communications. It extends the Common Policy authorization framework with additional conditions and actions. The new conditions match a particular Session Initiation Protocol (SIP) communication pattern based on a number of attributes. The range of attributes includes information provided, for example, by SIP itself, by the SIP identity mechanism, or by information carried within SAML assertions... A SPIT authorization document is an XML document, formatted according to the schema defined in RFC 4745. SPIT authorization documents inherit the MIME type of common policy documents, application/auth-policy+xml. As described in RFC 4745, this document is composed of rules which contain three parts—conditions, actions, and transformations. Each action or transformation, which is also called a permission, has the property of being a positive grant to the authorization server to perform the resulting action, be it allow, block, etc. As a result, there is a well-defined mechanism for combining actions and transformations obtained from several sources. This mechanism can therefore be used to filter connection attempts, leading to effective SPIT prevention... Policies are XML documents that are stored at a Proxy Server or a dedicated device.
The Rule Maker therefore needs to use a protocol to create, modify and delete the authorization policies defined in this document. Such a protocol is available with the Extensible Markup Language (XML) Configuration Access Protocol (XCAP), per RFC 4825...
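The rule structure inherited from RFC 4745 can be sketched in a few lines of Python. This is a minimal sketch assuming the RFC 4745 common-policy namespace; the <block/> action is a hypothetical placeholder for the anti-SPIT actions the draft actually defines:

```python
import xml.etree.ElementTree as ET

# Minimal common-policy ruleset in the RFC 4745 shape: each rule carries
# conditions, actions, and transformations. The <block/> action is an
# illustrative stand-in, not an element taken from the draft.
CP = "urn:ietf:params:xml:ns:common-policy"
POLICY = f"""\
<ruleset xmlns="{CP}">
  <rule id="r1">
    <conditions/>
    <actions><block/></actions>
    <transformations/>
  </rule>
</ruleset>"""

root = ET.fromstring(POLICY)
# Summarize each rule as (id, [action names]) by stripping the namespace prefix.
summary = [(r.get("id"),
            [a.tag.split("}")[-1] for a in r.find(f"{{{CP}}}actions")])
           for r in root.findall(f"{{{CP}}}rule")]
print(summary)
# → [('r1', ['block'])]
```

A SIP proxy evaluating such a document would match the conditions against an incoming call attempt and apply the granted actions, which is the combination mechanism the summary describes.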

See also: SAML references

ACL Data Model for NETCONF
Iijima Tomoyuki (et al., eds); IETF Internet Draft

Members of the IETF Network Configuration (NETCONF) Working Group have published a draft "ACL Data Model for NETCONF." The Working Group was chartered to produce a protocol for network configuration that uses XML for data encoding purposes: "Configuration of networks of devices has become a critical requirement for operators in today's highly interoperable networks. Operators from large to small have developed their own mechanisms or used vendor-specific mechanisms to transfer configuration data to and from a device, and for examining device state information which may impact the configuration..." The "ACL Data Model" document introduces a data model developed by the authors to facilitate discussion of the data models that the NETCONF protocol carries. Data modeling of the configuration data for each network function is necessary in order to achieve interoperability among NETCONF entities. For that purpose, the authors devised an ACL data model and developed a network configuration application using that data model... The data model was originally designed as a UML (Unified Modeling Language) class diagram. From the class diagram, the ACL's XML schema can be generated; the configuration data are sent in a form conforming to this XML schema. The configuration application developed using the ACL data model can open and read the file. It then reads the ACL lists line by line and transforms them into a NETCONF request message conforming to the XML schema described above. Finally, the configuration application sends the NETCONF request message and configures the network device accordingly... When NETCONF messages are exchanged based on the proposed data model, security must be taken care of. WS-Security can achieve secure data transportation by utilizing the XML Signature and XML Encryption mechanisms...
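The request message described above can be sketched as follows. This is a minimal illustration assuming the NETCONF base namespace urn:ietf:params:xml:ns:netconf:base:1.0; the <acl>/<deny> payload is a hypothetical stand-in for data conforming to the authors' ACL schema, which the draft defines separately:

```python
import xml.etree.ElementTree as ET

# Build a minimal NETCONF <edit-config> RPC carrying a hypothetical ACL entry.
NC = "urn:ietf:params:xml:ns:netconf:base:1.0"
rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": "101"})
edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
target = ET.SubElement(edit, f"{{{NC}}}target")
ET.SubElement(target, f"{{{NC}}}running")      # edit the running configuration
config = ET.SubElement(edit, f"{{{NC}}}config")
acl = ET.SubElement(config, "acl")             # placeholder ACL payload
ET.SubElement(acl, "deny").text = "10.0.0.0/8"

ET.register_namespace("nc", NC)
print(ET.tostring(rpc, encoding="unicode"))
```

The XML serialized here is what the configuration application would send over a NETCONF transport after transforming each ACL line into this shape.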

See also: Network Configuration (NETCONF) Working Group

Electricity Costs Attacked Through XML
Charles Babcock, InformationWeek

A power consortium that distributes a mix of "green" and conventional electricity is implementing an XML-based settlements system that drives costs out of power distribution. The Northern California Power Agency is one of several state-chartered coordinators in California that schedules the delivery of power to the California power grid, then settles the payment due the supplier. NCPA sells the power generated by the cities of Palo Alto and Santa Clara, as well as hydro and geothermal sources farther north. Power settlements are a highly regulated and complicated process. Each settlement statement, which can be 100 Mbytes of data, records how much power a particular supplier delivered and how much was used by commercial vs. residential customers. The two have different rates of payment, set by the Public Utilities Commission. The settlements are complicated by the fact that electricity meters are read only once every 90 days; many settlements must be based on an estimate of consumption that gets revised as meter readings come in. On top of that, there are fees for transmission across the grid, sometimes set by the PUC to apply retroactively. On behalf of a supplier, NCPA can protest that fees for transmission usage weren't calculated correctly, and the dispute requires a review of all relevant data. NCPA sought vendor bids three years ago and received quotes that were "several hundred thousand dollars a year in licensing fees and ongoing maintenance," said Caracristi. The need for services from these customized systems adds to the cost of power consumption for every California consumer. Faced with such a large annual expense, NCPA sought instead to develop the in-house expertise to deal with the statements. Senior programmer analyst Carlo Tiu and his team at NCPA used Oracle's XML handling capabilities gained in the second release of 10g, a feature known as Oracle XML DB.
They developed an XML schema that allowed Oracle to handle the data and an XML configuration file that contained the rules for determining supplier payment from the data. That file can be regularly updated without modifying the XML data itself.
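The separation described above, settlement data validated by an XML schema and pricing rules kept in a separately updatable XML configuration, can be sketched as follows; the element names and rates here are invented for illustration and are not NCPA's actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical rules configuration: per-class rates that can be edited
# without touching the settlement data itself.
RULES = """\
<rates>
  <rate class="residential">0.10</rate>
  <rate class="commercial">0.14</rate>
</rates>"""

def settle(usage_kwh, rules_xml):
    """Price each customer class's usage with the configured rate."""
    rates = {r.get("class"): float(r.text)
             for r in ET.fromstring(rules_xml).findall("rate")}
    return {cls: round(kwh * rates[cls], 2) for cls, kwh in usage_kwh.items()}

print(settle({"residential": 1200.0, "commercial": 800.0}, RULES))
# → {'residential': 120.0, 'commercial': 112.0}
```

When the PUC changes a rate, or applies one retroactively, only the rules file needs to be reissued and the settlement recomputed, which is the cost saving the article attributes to the design.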

CCTS 2.01 Data Type Catalogue
Mark Crawford, Blog

When the Core Components Technical Specification (CCTS) Version 2.01 was published by UN/CEFACT in 2003, it contained a list of 10 Core Component Types, 20 primary and secondary representation terms, and supporting Content and Supplementary Components. The Core Component Types were simple data types that were intended to be used as the basis for the development of data types to express the value domain for CCTS leaf elements (Basic Core Components and Basic Business Information Entities). It was envisioned that the 10 CCTs and the 20 Representation Terms would be used to create a set of 20 unqualified data types and an unlimited number of qualified (more restricted) data types. It was also envisioned that future updates to the data types would be published independently of the CCTS specification. The recently published CCTS 2.01 Data Type Catalogue delivers on those expectations. It republishes the CCTs, Representation Terms, Content and Supplementary Components, and allowed restrictions by primitive data type that were contained in CCTS 2.01. It also, for the first time, publishes the full set of 20 unqualified data types that were implicitly expressed in CCTS 2.01. These data types have also been expressed as XML schema in support of the UN/CEFACT XML NDR standard. The UN/CEFACT Applied Technologies Group is responsible for maintaining the data type catalogue and has provided a Data Maintenance Request form for interested parties to submit their requested changes. ATG is also working on the CCTS 3.0 data type catalogue, which expands the number of data types and also looks at closer alignment with the data types of the W3C XSD specification. SAP actively participates in the development and maintenance of these data types, and has contributed a number of additional unqualified data types that are under consideration within UN/CEFACT.
Additionally, these unqualified (or Core) data types form the lowest level of data interoperability across a wide variety of individual business standards development organizations such as ACORD, CIDX, OAGi, RosettaNet, UBL and others who have adopted, or are in the process of adopting, CCTS and its supporting data types.

Text: UN/CEFACT Core Components Data Type Catalogue (Version 2.01, 7-December-2007). Abstract: "This Data Type Catalogue contains the Allowed Restriction, Core Component Type, Content and Supplementary Component, and Representation Term Core Component Tables published in the Core Components Technical Specification (CCTS) Version 2.01. It also contains the physical instantiation of the implied data types from CCTS. Additionally, the XML Schema Definition (XSD) and UN/EDIFACT manifestations of the implied data types are provided as appendices. This catalogue will be maintained by the UN/CEFACT Applied Technologies Group (ATG) using published data maintenance request (DMR) procedures for data types." Cached PDF and .doc versions are available from the reference document.

See also: the UNTMG Core Components Working Group

Using CCTS Modeler Warp 10 to Customize Business Information Interfaces
Gunther Stuhec, Blog

SAP's Warp 10 uses the context driver principle to produce customized business information interfaces that most closely fit your unique requirements. This article explains how you can use this feature of Warp 10 to shape a business data interface, such as a purchase order or invoice, to your business needs in a matter of seconds rather than weeks. Manually finding the correct use from the myriad possibilities, combinations and contextualizations inherent in today's message structures is extremely difficult. Warp 10 can dramatically reduce the effort required by essentially performing this function for you automatically, using its context methodology to assemble message interfaces on demand for your specific business purpose. The context driver principle is a combination of set logic, predicate logic and graph theory. This unique combination enables you to discover the correct structure through the re-use of existing elements available in the common repository. The context driver principle enables the clear categorization of how these entities are really used within given contexts. In fact, it is not just reuse; it is classified reuse that precisely defines what is reused and how it is reused across the myriad context possibilities. For example, it monitors, tracks, and categorizes through its wiki concept which of the elements of an address are really necessary in the context 'United States', 'Germany' or even 'China'. Warp 10 makes this information visible to the data modeler and integrator with minimal effort on their part. This increased visibility of the reuse of common building blocks in specific contexts offers significant improvement over current approaches and will contribute to achieving true semantic interoperability within and across organizational boundaries.

Additional details are presented in the blog: "CCTS Modeler Warp 10: The Speed of Data Integration and B2B."


Technical Comparison: OpenID and SAML
Jeff Hodges, NeuStar Whitepaper

This document presents a technical comparison of the OpenID Authentication protocol, the Security Assertion Markup Language (SAML) Web Browser SSO Profile, and the SAML framework itself. Topics addressed include design centers, terminology, specification set contents and scope, user identifier treatment, web single sign-on profiles, trust, security, identity provider discovery mechanisms, key agreement approaches, as well as message formats and protocol bindings. An executive summary targeting various audiences, and presented from the perspectives of end-users, implementors, and deployers, is provided. The document does not attempt to assign relative value between OpenID and SAML, e.g., which is 'better'; rather, it attempts to present an objective technical comparison... OpenID 1.X and 2.0, and SAML 2.0's Web Browser SSO Profile (and earlier versions thereof), offer functionality quite similar to each other. Obvious differentiators to a protocol designer are the message encodings, security mechanisms, and overall profile flows. Other differentiators include the layout and scope of the specification, trust and security aspects, OP/IDP discovery mechanisms, user-visible features such as identifier treatment, key agreement provisions, and security assertion schema and features..."

See also: the author's blog

Five Things You'll Love About Firefox Version 3
Barbara Krasnoff, Computerworld

Although the basic look of the Firefox 3 Beta 2 browser hasn't changed, there are actually quite a few new features coming. For a complete list, you can check out Mozilla's release notes. Some of the new features in Firefox 3 are not immediately obvious—at least, not to the casual user. Among other things, Mozilla is incorporating new graphics- and text-rendering architectures in its browser layout engine (Gecko 1.9) to offer rendering improvements in CSS and SVG; adding a number of security features, including malware protection and version checks of its add-ons; and offering offline support for suitably coded Web applications. (1) Easier downloads: While the older Download Manager was quite serviceable, Mozilla has made some nice tweaks in the new version. It now lists not only the file name, but the URL it was downloaded from, and includes an icon that leads to information about when and where you downloaded it. The new feature I really approve of is the ability to resume a download that may have been abruptly stopped because Firefox, or your system, crashed. (2) Enhanced address bar: In Firefox 3 Beta 2, the autocomplete doesn't just offer a list of URLs that you've been to, but includes sites that are in your bookmark list; it then gives you a nice, clear listing of the URLs and site names in large, easy-to-read text, with the typed-in phrase underlined. (3) A workable bookmark organizer: The new Places Organizer vastly improves Firefox's management of bookmark lists. (4) Easier bookmarking: You can now quickly create a bookmark by double-clicking on a star that appears at the right side of the address bar; you can also add tags to your bookmarks, which could work nicely as an organizational tool. (5) Better memory management: The new version of Firefox appears to have a smaller memory footprint than its predecessor. ["Beta 2 includes over 30 more memory leak fixes, and 11 improvements to the memory footprint."]

See also: Firefox 3 Beta 2 Release Notes


XML Daily Newslink and Cover Pages are sponsored by:

BEA Systems, Inc.
IBM Corporation
Sun Microsystems, Inc.
