XML Daily Newslink. Tuesday, 06 April 2010

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Oracle Corporation http://www.oracle.com



W3C Approves XML Entity Definitions for Characters as Final Recommendation
David Carlisle and Patrick Ion (eds), W3C Technical Report

W3C announced that the Math Working Group has published the specification XML Entity Definitions for Characters as a final W3C Recommendation. "On its way to approval as a standard, this document has been reviewed by W3C Members, by software developers, and by other W3C groups and interested parties, and is endorsed by the Director as a W3C Recommendation. It is a stable document and may be used as reference material or cited from another document. W3C's role in making the Recommendation is to draw attention to the specification and to promote its widespread deployment."

Overview: "Notation and symbols have proved very important for human communication, especially in scientific documents. Mathematics has grown in part because its notation continually changes toward being succinct and suggestive. There have been many new signs developed for use in mathematical notation, and mathematicians have not held back from making use of many symbols originally introduced elsewhere. The result is that science in general, and particularly mathematics, makes use of a very large collection of symbols. It is difficult to write science fluently if these characters are not available for use. It is difficult to read science if corresponding glyphs are not available for presentation on specific display devices. In the majority of cases it is preferable to store characters directly as Unicode character data or as XML numeric character references.

However, in some environments it is more convenient to use the ASCII input mechanism provided by XML entity references. Many entity names are in common use, and this specification aims to provide standard mappings to Unicode for each of these names. It introduces no names that have not already been used in earlier specifications. Note that these names are short mnemonic names designed for input methods such as XML entity references, not the longer formal names that form part of the Unicode standard. Specifically, the entity names in the sets starting with the letters 'iso' were first standardized in SGML and updated in ISO/IEC TR 9573-13:1991. The original standards committee (ISO/IEC JTC1 SC34) has invited the W3C Math Working Group to take over the maintenance and development of these sets. The sets with names starting 'mml' were first standardized in MathML and those starting with 'xhtml' were first standardized in HTML.
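
As a small illustration (not taken from the specification text above), an XML document can declare such mnemonic entity names in its internal DTD subset and then reference them by name; each declaration simply maps a name to its Unicode code point:

    <?xml version="1.0"?>
    <!DOCTYPE para [
      <!-- mnemonic names from the ISO/MathML sets, mapped to Unicode -->
      <!ENTITY alpha "&#x3B1;"> <!-- U+03B1 GREEK SMALL LETTER ALPHA -->
      <!ENTITY rarr  "&#x2192;"> <!-- U+2192 RIGHTWARDS ARROW -->
    ]>
    <para>f: x &rarr; &alpha;x</para>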

This document is the result of years of employing entity names on the Web. There were always a few named entities used for special characters in HTML, but a flood of new names came with the symbols of mathematics. This means that this document can be viewed as an extension and final revision of Chapter 6 of the MathML 2.0 recommendation. Now it presents a completed listing harmonizing the known uses of character entity names throughout the XML world and Unicode... Since there are so many character entity names, and the files specifying them are resources that may be subject to frequent lookup, a template catalog file has also been provided. Users are strongly encouraged to design their implementations so that relevant entity name tables are cached locally, since it is not expected that the listings provided with this specification will need changing for a long time..."
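
To sketch how such local caching is typically wired up, a minimal OASIS XML Catalog entry can resolve an entity-set file to a cached copy instead of fetching it over the network (the public identifier and file path here are hypothetical, invented for illustration):

    <catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
      <!-- resolve a (hypothetical) identifier for a combined entity set
           to a locally cached copy, as the specification recommends -->
      <public publicId="-//W3C//ENTITIES Combined Set//EN"
              uri="file:///usr/local/share/xml/entities/combined.ent"/>
    </catalog>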

See also: the W3C MathML Activity


Information Card Token Profile Specifications for SAML V1.1 and V2.0
Michael B. Jones and Scott Cantor (eds), OASIS Committee Drafts for Public Review

Members of the OASIS Identity Metasystem Interoperability (IMI) Technical Committee have released two Committee Draft documents for public review through June 04, 2010. Both documents relate to Identity Metasystem Interoperability Version 1.0, published by the IMI TC as an OASIS Standard in July 2009. According to this specification: "An Identity Selector and the associated identity system components allow users to manage their Digital Identities from different Identity Providers, and employ them in various contexts to access online services. In this specification, identities are represented to users as 'Information Cards'. Information Cards can be used both with applications hosted on Web sites and accessed through Web browsers, and with rich client applications directly employing Web services. The IMI specification also provides a related mechanism to describe security-verifiable identity for endpoints by leveraging the extensibility of the WS-Addressing specification. This is achieved via XML elements for identity provided as part of WS-Addressing Endpoint References. This mechanism enables messaging systems to support multiple trust models across networks that include processing nodes such as endpoint managers, firewalls, and gateways in a transport-neutral manner."

SAML V1.1 Information Card Token Profile Version 1.0 describes a set of rules for Identity Providers and Relying Parties to follow when using SAML V1.1 assertions as managed Information Card security tokens, so that interoperability and security are achieved commensurate with other SAML authentication profiles.

OASIS has standardized a set of profiles for acquiring and delivering security tokens, collectively referred to as "Information Card" technology. Identity Providers and Relying Parties employing the Identity Metasystem Interoperability (IMI) profile to request and exchange security tokens are able to use arbitrary token formats, provided there is agreement on the token's syntax and semantics, and a way to connect the token's content to the supported protocol features. These profiles are agnostic with respect to the format and semantics of a security token, but interoperability between Issuing and Relying Parties cannot be achieved without additional rules governing the creation and use of the tokens exchanged... "SAML V1.1 Information Card Token Profile Version 1.0" does not seek to alter the required behavior of existing Identity Selector software, or conflict with the profile defined by IMI.

The SAML V2.0 Information Card Token Profile Version 1.0 public review draft similarly "describes a set of rules for Identity Providers and Relying Parties to follow when using SAML V2.0 assertions as managed Information Card security tokens, so that interoperability and security is achieved commensurate with other SAML authentication profiles. It provides a set of requirements and guidelines for the use of SAML V2.0 assertions as security tokens that, where possible, emulates existing SAML V2.0 authentication profiles so as to limit the amount of new work that must be done by existing software to support the use of Information Cards. It also provides for the use of SAML assertions in this new context in a way that is safe and consistent with best practices in similar contexts."
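
For orientation, the kind of token being profiled is an ordinary SAML V2.0 assertion; the skeleton below is a generic illustration, not text from the profile, and all issuer, subject, and audience values are hypothetical:

    <saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                    ID="_a1b2c3" Version="2.0"
                    IssueInstant="2010-04-06T12:00:00Z">
      <saml:Issuer>https://idp.example.org</saml:Issuer>
      <!-- an XML Signature over the assertion would normally appear here -->
      <saml:Subject>
        <saml:NameID>user@example.org</saml:NameID>
      </saml:Subject>
      <saml:Conditions NotBefore="2010-04-06T12:00:00Z"
                       NotOnOrAfter="2010-04-06T12:05:00Z">
        <saml:AudienceRestriction>
          <saml:Audience>https://rp.example.com</saml:Audience>
        </saml:AudienceRestriction>
      </saml:Conditions>
      <saml:AttributeStatement>
        <saml:Attribute Name="givenname">
          <saml:AttributeValue>Alice</saml:AttributeValue>
        </saml:Attribute>
      </saml:AttributeStatement>
    </saml:Assertion>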

See also: the profile for SAML v2.0


Gmail and Yahoo! Mail Support OAuth for Access to IMAP/SMTP
Eric Sachs, Blog

Blog postings from Eric Sachs (Google) and Michael Curtis (Yahoo!) have announced support for the IETF OAuth open-authorization protocol in their email APIs. OAuth is "an open protocol that allows users to share their private resources (e.g. photos, videos, contact lists) stored on one site with another site without having to hand out their username and password." In the blog "OAuth Access to IMAP/SMTP in Gmail," Sachs writes: "Google has long believed that users should be able to export their data and use it with whichever service they choose. For years, the Gmail service has supported standard API protocols like POP and IMAP at no extra cost to our users. These efforts are consistent with our broader data liberation efforts... In addition to making it easier for users to export their data, we also enable them to authorize third party (non-Google developed) applications and websites to access their data at Google. One of the more common examples is allowing a social network to access your address book in order to send invitations to your friends...

"While it is possible for a user to authorize this access by disclosing their Google Account password to the third party app, it is more secure for the app developer to use the industry standard protocol called OAuth, which enables the user to give their consent for specific access without sharing their password. Most Google APIs support this OAuth standard, and starting today it is also available for the IMAP/SMTP feature of Gmail... The feature is available in Google Code Labs and we have provided a site with documentation and sample code. In addition, Google has begun working with other companies like Yahoo and Mozilla on a formal Internet standard for using OAuth with IMAP/SMTP..."
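
At the wire level, the mechanism Google documented for this (named XOAUTH) replaces the password login with a SASL AUTHENTICATE command carrying a base64-encoded, OAuth-signed request string. The session below is an abridged, illustrative sketch; the exact command and token format should be checked against Google's documentation:

    C: a1 AUTHENTICATE XOAUTH R0VUIGh0dHBzOi8vbWFpbC5nb29nbGUuY29t...
    S: a1 OK user@example.com authenticated (Success)
    C: a2 SELECT INBOX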

In the blog article "Hey Developers: You've Got Mail (And Full API Access!)," Michael Curtis (Yahoo! Mail Product and Engineering) writes: "Here at Yahoo! Mail, we're cranking so hard on new features and capabilities that sometimes we forget to take a moment to write about them. This was the case a few weeks ago, when we greatly expanded the Yahoo! Mail API -- adding new access scopes, new auth methods, and cool helper libraries -- but didn't get around to the blog post.

We're super excited to announce that the Yahoo! Mail API now allows Read and Read+Write access to full message contents for any type of user. We've added these expanded scopes — we've been allowing message-header access for years — so that the hundreds of millions of users who entrust their data to us can have the freedom to use it in whichever context is most useful to them... The API has also been ported to OAuth, providing a much cleaner token model, better authentication UI, and more fine-grained management for developers. We'll continue to support Browser-Based Auth for legacy developers, but its use is deprecated. We recommend that anyone starting a new project choose OAuth. The API Documentation has been updated to reflect the new authentication and scopes. It has everything you need to get up and running, including sample code and Python and PHP helper libraries..."

See also: the Yahoo! blog by Michael Curtis


W3C Call for Implementations: Voice Browser Call Control (CCXML v1.0)
RJ Auburn, Paolo Baggia, Mark Scott (eds), W3C Candidate Recommendation

As part of the W3C Voice Browser Activity, members of the Voice Browser Working Group have issued a call for implementations of the CR-level specification Voice Browser Call Control: CCXML Version 1.0. Accompanying the call for implementations and the prose specification is the CCXML 1.0 Implementation Report Plan, which provides a key criterion for moving CCXML beyond the Candidate Recommendation phase. This document describes the requirements for the Implementation Report and the process that the Voice Browser Working Group will follow in preparing the report. The Voice Browser Working Group expects to meet all requirements of the report within the Candidate Recommendation period closing 28 May 2010. The group will advance CCXML 1.0 to Proposed Recommendation no sooner than 28 May 2010...

The entrance criteria to the Proposed Recommendation phase require at least two independently developed interoperable implementations of each required feature, and at least one or two implementations of each optional feature, depending on whether the feature's conformance requirements have an impact on interoperability. Detailed implementation requirements and the invitation for participation in the Implementation Report are provided in the Implementation Report Plan.

CCXML (Call Control eXtensible Markup Language) provides declarative markup to describe telephony call control. CCXML is a language that can be used with a dialog system such as VoiceXML. CCXML can provide a complete telephony service application, comprising Web-server (CGI-compliant) application logic, one or more CCXML documents that declare and perform call control actions, and one or more dialog applications that perform user media interactions. Since platforms implementing CCXML may choose to use one of many telephony call control definitions (JAIN Call Control, ECMA CSTA, S.100, etc.), the call control model in CCXML has been designed to be sufficiently abstract to accommodate all major definitions. For relatively simple types of call control, this abstraction is straightforward. The philosophy in this regard has been to "make simple things simple to do." Outdial, transfer (redirect), two-party bridging, and many forms of multi-party conferences fall within this classification.
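
A minimal sketch of what CCXML markup looks like (illustrative only; the dialog URI is hypothetical), answering an incoming call and then handing the caller to a VoiceXML dialog:

    <?xml version="1.0" encoding="UTF-8"?>
    <ccxml version="1.0" xmlns="http://www.w3.org/2002/09/ccxml">
      <eventprocessor>
        <!-- an incoming call is ringing: accept it -->
        <transition event="connection.alerting">
          <accept/>
        </transition>
        <!-- once connected, start a VoiceXML dialog on the connection -->
        <transition event="connection.connected">
          <dialogstart src="'greeting.vxml'"/>
        </transition>
        <!-- when the dialog finishes, hang up -->
        <transition event="dialog.exit">
          <disconnect/>
        </transition>
      </eventprocessor>
    </ccxml>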

The architecture of a typical telephony implementation consists of four primary components: (1) a caller (along with the telephone network); (2) a dialog system (e.g. a VoiceXML implementation); (3) a conference server used to mix media streams; and (4) the CCXML implementation, which manages the Connections between the first two components. The Telephony Web Application may or may not be integrated with the Voice Web Application. The Telephony Control and Dialog Control Interfaces may be implemented as an API or protocol.

See also: the W3C Voice Browser Activity


Revised IETF Internet Draft on Web Linking
Mark Nottingham (ed), IETF Internet Draft

An updated version of the standards track specification Web Linking has been published through the IETF Network Working Group. The document specifies relation types for Web links and defines a registry for them. It also defines the use of such links in HTTP headers via the Link header field. This specification, if approved, updates The Atom Syndication Format, a proposed standard published by the IETF in December 2005; Atom is an XML-based Web content and metadata syndication format.

From the document Introduction: "A means of indicating the relationships between resources on the Web, as well as indicating the type of those relationships, has been available for some time in HTML, and more recently in the Atom specification (RFC 4287). These mechanisms, although conceptually similar, are separately specified. However, links between resources need not be format-specific; it can be useful to have typed links that are independent of their serialisation, especially when a resource has representations in multiple formats. To this end, this document defines a framework for typed links that isn't specific to a particular serialisation or application. It does so by re-defining the link relation registry established by Atom to have a broader domain, and adding to it the relations that are defined by HTML.

Furthermore, an HTTP header-field for conveying typed links was defined in Section 19.6.2.4 of RFC 2068, but removed from RFC 2616, due to a lack of implementation experience. Since then, it has been implemented in some User-Agents (e.g., for stylesheets), and several additional use cases have surfaced. Because it was removed, the status of the Link header is unclear, leading some to consider minting new application-specific HTTP headers instead of reusing it. This document addresses this by re-specifying the Link header as one such serialisation, with updated but backwards-compatible syntax...
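
For example, a Link header conveying a typed link looks like this (the URI is illustrative; the syntax follows the draft's re-specified Link header):

    Link: <http://example.com/TheBook/chapter2>; rel="previous";
          title="previous chapter"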

Changes in I-D versions -09 and -08, as presented in the listing of Appendix E (Document History): Corrected ptoken / ptokenchar BNF; Disallow multiple title* parameters; Prefer title* over title when available; Remove "\" from ptokenchar; Explain why mailing list isn't archived; Define default language for title and title*, based on Content-Language, when present; Licensed machine-readable data under MIT; Clarified URI comparison for extension relation types; Various editorial tweaks; Changed "fields" to "appdata" to avoid confusion, and add example to clarify; Defined REV according to HTML2, deprecated; Clarified allowable characters in link-extensions; Changed RFC 2231 reference to draft-reschke-rfc2231-in-http; Added hub, latest-version, predecessor-version, successor-version, version-history, working-copy and working-copy-of relation types to initial registry; Adjusted text regarding when anchor parameter is appropriate..."

See also: the W3C comment archive for 'Web Linking'


RESTful Web Services: Apache Wink and REST
Vishnu Vettrivel, IBM developerWorks

"This article, the third in a three-part series, compares Apache Wink with various other free and open source JAX-RS implementations like Project Jersey, JBoss RESTEasy, and the Restlet Framework. It provides a high-level overview of each implementation framework while highlighting differences based on a common set of attributes. Finally, this article helps you select the right framework for your needs by analyzing and reviewing the different JAX-RS implementations.

Project Jersey is Sun's open source, dual-licensed, production-quality JAX-RS reference implementation for building RESTful Web services. It's meant to be more than just a reference implementation and provides APIs that allow easy customization and extension by developers. Jersey ships as part of Sun's GlassFish application server download. Jersey is usually deployed within a servlet container but also supports an embedded mode of operation within Java programs. Running a JAX-RS service in embedded mode is simple and requires only a few lines of code. You can also use this embeddable container easily with unit tests.

The Jersey client API is a sophisticated, high-level, Java-based API for invoking any RESTful Web service, not just JAX-RS-compliant services. JAX-RS developers, however, should find the Jersey client API easy and familiar to use. The Jersey client API claims to have three important goals: (1) Encapsulate the REST Uniform Interface constraint on the client side; (2) Make it easy to interoperate with server-side RESTful Web services; (3) Leverage JAX-RS API concepts and artifacts for the client side.

The Jersey client API also allows a pluggable HTTP implementation (like HttpURLConnection and the Apache HTTP client). Overall, the Jersey client API lets you efficiently implement a REST-based client-side solution... This article uses five main functional attributes to describe each of these frameworks: embedded container support, client API framework, interceptor framework, data format support, and component integration support. Though all of the above-discussed frameworks implement the same JAX-RS specification, the design and architecture of these various frameworks are very different... Depending on your specific needs, you might find one or more of the above frameworks a better match. For example, if component integration is of primary importance, the Restlet Framework or RESTEasy might be a good choice. However, if extensive data format support and a sophisticated interception framework with high-throughput characteristics are important, then Apache Wink might be a good fit for your needs."


How Schematron Could Open Up Management of ODF and OOXML Flavours
Rick Jelliffe, O'Reilly Technical

"The old SGML idea of DTDs was primarily a gatekeeper function: it was (incoming) validation rather than (outgoing) verification. The idea was that by requiring validation, invalid documents (bad data) would not propagate unchecked through a system. More than that, the location in the production process where the invalidity occurred would be clear: the recipient of the invalid document can send it back to the person or process that caused the problem.

This is a nice model, and gave a tacit software engineering discipline that made SGML successful for many large projects. Even now, gateway functions are useful in Web Services systems. However, the idea that on the WWW the recipient can send back documents for re-work is obviously bogus... The trouble is that the organizing principle of most schema languages (XSD, RELAX NG) is the namespace. But we have no schema languages that treat namespaces as first-class objects, or allow parameterization of them. Both RELAX NG and XSD 1.1 do allow the use of attributes on top-level elements to select document variants, I should point out, but while this is a great feature, I don't think it goes far enough...

Both OOXML and ODF have had substantial discussions relating to versioning and extensions (the two are intertwined): ODF has gone with a head-in-the-sand hack-something-later approach; OOXML has a very good mechanism (MCE) which addresses the ability to add shiny new extensions in new namespaces well, but does not address changes within a namespace...

I think Schematron can play a part, because a Schematron schema is not hooked to a single namespace the way XSD and RELAX NG are... The point I would like to make is that everyone will never catch up, and models of interoperability based entirely on the promise that sooner or later everyone will catch up will just lead to disappointment. Now, of course, to an extent the ODF approach has been to try to lower the bar at feature level compared to Office (though ODF does have some features OOXML has not) but even there we will have a moving target: ODF NG for example. So I wonder if it would be useful to have some kind of Open Source Schematron schema where we could collect tests and diagnostics for the various flavours..."
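
A minimal sketch of what one such collected test might look like in ISO Schematron (the constraint and values are invented for illustration, not drawn from any actual ODF or OOXML flavour):

    <schema xmlns="http://purl.oclc.org/dsdl/schematron">
      <ns prefix="office"
          uri="urn:oasis:names:tc:opendocument:xmlns:office:1.0"/>
      <pattern id="odf-flavour-check">
        <rule context="office:document-content">
          <!-- a flavour/version test with a human-readable diagnostic -->
          <assert test="@office:version = '1.2'">
            Document declares office:version
            "<value-of select="@office:version"/>" rather than the 1.2
            flavour this pipeline expects.
          </assert>
        </rule>
      </pattern>
    </schema>

Because a Schematron pattern is just a bundle of XPath assertions, tests like this can target elements from several namespaces, and several flavours, in one schema.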


A Seismometer in Every Laptop
Salvatore Salamone, Smarter Technology

"The recent earthquakes in Haiti and Chile are driving interest in a distributed seismometer idea to better understand the intricate details of earthquake mechanisms. The approach will provide details that have not been economically possible to collect before.

University researchers are behind an effort called the Quake-Catcher Network (QCN), a collaborative initiative for developing the world's largest, low-cost seismic network. The group's plan is to use sensors in and attached to Internet-connected computers to study earthquakes and their aftershocks.

Specifically, the group hopes volunteers will help them build a large distributed network of seismometers. The network would use the accelerometers that are now routinely available in laptops or USB-based accelerometers connected to desktop computers.

Similar to SETI@Home and other distributed scientific research projects, users who want to help can download a small application to their computer. The software then collects information about the device's motions obtained from the unit's accelerometer..."


Cloud Computing: Evolution or Just Intelligent Design?
Paul Stanfield, Austin Technology Law Blog

This blog article is Part 2 in a series; the first part was published on March 11, 2010. Excerpts: "In the 1980s, in the arena of big-data-processing users, the cord ran from a workstation to a large mainframe or AS/400 computer, which was often in the same room or in close proximity. The cord was whole, undivided, and dedicated...

At some point in this timeline, the internet intruded. This made it possible to use the internet as the connecting cord. Data was broken down and transmitted in little packets through the internet from the terminal to the processor (which was now probably a server rather than a hulking mainframe) and back. The cord was no longer dedicated and unbroken (or even necessarily a physical entity) but the user still generally knew the location and identity of the processing assets...

Cue the cloud: Workstations or laptops with a web browser and an internet connection could be used to process and transmit (or only transmit) data into the internet, which would then be directed to the appropriate server or servers, which would store and/or process the data and return it to the user via the internet...

The user still wants to have an acceptable degree of comfort with response time, uptime, throughput, security, business continuity, and disaster recovery, as well as help desk services and software and hardware updates and refreshes. While the basic issues have not materially changed, the complexity of addressing them has..."


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation: http://www.ibm.com
ISIS Papyrus: http://www.isis-papyrus.com
Microsoft Corporation: http://www.microsoft.com
Oracle Corporation: http://www.oracle.com
Primeton: http://www.primeton.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2010-04-06.html
Robin Cover, Editor: robin@oasis-open.org