The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: December 02, 2009
XML Daily Newslink. Wednesday, 02 December 2009

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
Sun Microsystems, Inc.

IETF Last Call for Specification of The 'mailto' URI Scheme
Martin Dürst, Larry Masinter, Jamie Zawinski (eds); IETF Internet Draft

The Internet Engineering Steering Group (IESG) has received a request to consider The 'mailto' URI Scheme specification (level -07) as an IETF Proposed Standard. The IESG plans to make a decision in the next few weeks, and solicits final comments on this action; please send substantive comments by 2010-01-08. The Change Log in document Section 10.1 (Changes between draft 06 and draft 07) summarizes ten significant revisions in this draft.

The 'mailto' URI scheme is used to identify resources that are reached using Internet mail. In its simplest form, a 'mailto' URI contains an Internet mail address. For interactions that require message headers or message bodies to be specified, the 'mailto' URI scheme also allows setting mail header fields and the message body. This document defines the format of Uniform Resource Identifiers (URIs) to identify resources that are reached using Internet mail. It adds better internationalization and compatibility with IRIs (RFC 3987) to the previous syntax of 'mailto' URIs. If approved, this Standards Track Internet Draft will obsolete IETF RFC 2368.

A 'mailto' URI designates an "internet resource", which is the mailbox specified in the address. When additional header fields are supplied, the resource designated is the same address, but with an additional profile for accessing the resource. While there are Internet resources that can only be accessed via electronic mail, the 'mailto' URI is not intended as a way of retrieving such objects automatically. The operation of how any URI scheme is resolved is not mandated by the URI specifications. In current practice, resolving URIs such as those in the 'http' URI scheme causes an immediate interaction between client software and a host running an interactive server...

When creating 'mailto' URIs, any reserved characters used in the URIs MUST be encoded so that properly written URI interpreters can read them. Also, client software that reads URIs MUST decode strings before creating the mail message so that the mail messages appear in a form that the recipient software will understand... Document Section 6 (Examples) supplies a range of literal example URIs, including: 'Examples Conforming to RFC2368', 'Examples of Complicated Email Addresses' (i.e., how to treat email addresses that contain complicated escaping syntax), and 'Examples Using UTF-8-Based Percent-Encoding'...
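The encoding and decoding rules above can be sketched with Python's standard urllib.parse module. Note that build_mailto is a hypothetical helper written for illustration, not a function from the specification:

```python
# Sketch of 'mailto' URI construction: reserved characters in header
# values are percent-encoded on creation, and consuming clients decode
# them before composing the message.
from urllib.parse import quote, unquote

def build_mailto(address, subject=None, body=None):
    """Build a 'mailto' URI, percent-encoding reserved characters."""
    uri = "mailto:" + quote(address, safe="@.")
    params = []
    if subject is not None:
        params.append("subject=" + quote(subject, safe=""))
    if body is not None:
        params.append("body=" + quote(body, safe=""))
    if params:
        uri += "?" + "&".join(params)
    return uri

uri = build_mailto("user@example.org", subject="Hello, World & more")
# A mail client reading the URI must decode before building the message:
decoded_subject = unquote(uri.split("subject=")[1])
```

Here '&' and ',' in the subject are encoded as %26 and %2C so they cannot be confused with the URI's own delimiters, which is the interoperability concern the draft's MUST requirements address.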

See also: the current IETF Request for Comments #2368

Next-Generation Banking with Web 2.0
Xu Ming Chen, Shan Jian Hong, Shao Yu; IBM developerWorks

"Web 2.0 brings innovative design ideas and methodologies to the financial industry and improves considerably the development of business applications in this competitive market environment. This article explains how Web 2.0 influences the design of financial applications. It examines trends in Internet banking and how Web 2.0 practices influence those trends...

Features of next-generation Internet banking: (1) Personalization and customization. The next generation of Internet banking based on Web 2.0 fully exhibits the idea of 'people-orientation.' For different customers, different personalized Internet banking transaction and marketing platforms are displayed. (2) Rich third-party services. Based on the Widget standard, Web 2.0-based next-generation Internet banking can conveniently integrate many third-party services, such as Google Maps, Yahoo Stocks, weather forecasts, financial news, and so on. (3) Multi-service window. The next-generation Internet banking, based on Web 2.0, supports multi-service windows. Users can open several service windows at the same time, and each window supports asynchronous concurrent operation. (4) A new development model. Web 2.0 promotes a user-centered design. Products and services adjust and improve according to users' demands and feedback. Web 2.0 advocates the 'Never Release' concept, which means that there is no 'official version,' but every version is an official version, providing E-business On Demand...

For a next-generation Internet banking architecture, several new components may be added into the server side and the browser side: on the server side, the Channel Handler should support communication with the browser through the XML or JSON data formats. With the two kinds of structured data, the request and response between the server and client will have more content and meaning. Besides the XML/JSON communication support, the server application needs to manage the Web 2.0 theme and style as well as the Web 2.0 layout, which are related to the GUI presented on the browser. The next-generation Internet bank is the next evolution of the traditional Internet bank, where construction is based on the original application, utilizing the banking transaction pages and processes, while providing new features in using Web 2.0 business models and technology... New key components in the architecture include the Web 2.0 Browser Side Runtime, the Web 2.0 Theme and Style management, the Web 2.0 Layout Management, the Web 2.0 Service Management, and the Channel Intelligence..."
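The Channel Handler's dual XML/JSON support described above can be sketched as a small dispatcher; the function name, media types handled, and payloads here are illustrative assumptions, not taken from the article:

```python
# Hypothetical channel handler: parse a browser request body into a
# dict according to its declared content type (JSON or XML).
import json
import xml.etree.ElementTree as ET

def handle_request(content_type, body):
    """Parse a request body into a dict according to its content type."""
    if content_type == "application/json":
        return json.loads(body)
    if content_type == "application/xml":
        root = ET.fromstring(body)
        # Flatten one level of child elements into key/value pairs.
        return {child.tag: child.text for child in root}
    raise ValueError(f"unsupported content type: {content_type}")

req = handle_request("application/json", '{"account": "12345", "op": "balance"}')
```

Accepting both formats lets the same server-side component serve classic page-oriented clients and Ajax-style Web 2.0 front ends.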

Updated Working Draft: Multimodal Architecture and Interfaces
Michael Bodell, Deborah Dahl, Ingmar Kliche (et al., eds); W3C Technical Report

Members of the W3C Multimodal Interaction Working Group have published an updated Working Draft of the Multimodal Architecture and Interfaces (MMI Architecture) specification. The document defines a general and flexible framework providing interoperability among modality-specific components from different vendors—for example, speech recognition from one vendor and handwriting recognition from another.

The document as a whole has changed significantly, and the group welcomes public review. The main changes from the previous draft are: (1) clarifying the relationship to EMMA, (2) simplifying the architecture constituents, (3) adding a description of HTTP transport of lifecycle events, and (4) adding an example of a handwriting recognition modality component. A colored diff-marked version of this document is also available.

The aim of the MMI Architecture design is to provide a general and flexible framework providing interoperability among modality-specific components. This framework places very few restrictions on the individual components or on their interactions with each other, but instead focuses on providing a general means for allowing them to communicate with each other, plus basic infrastructure for application control and platform services... Even though multimodal interfaces are not yet common, the software industry as a whole has considerable experience with architectures that can accomplish these goals...

A recent architecture that is relevant to MMI Architecture concerns is the model-view-controller (MVC) paradigm. This is a well-known design pattern for user interfaces in object-oriented programming languages, and has been widely used with languages such as Java, Smalltalk, C, and C++. The design pattern proposes three main parts: a Data Model that represents the underlying logical structure of the data and associated integrity constraints, one or more Views which correspond to the objects that the user directly interacts with, and a Controller which sits between the data model and the views. The separation between data and user interface provides considerable flexibility in how the data is presented and how the user interacts with that data. While the MVC paradigm has been traditionally applied to graphical user interfaces, it lends itself to the broader context of multimodal interaction where the user is able to use a combination of visual, aural and tactile modalities...
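The three MVC parts described above can be sketched minimally in Python; the class and method names are illustrative, not taken from the MMI Architecture specification:

```python
# Minimal model-view-controller sketch: the model holds the data and
# notifies attached views of changes; the controller mediates user input.
class Model:
    def __init__(self):
        self._value = 0
        self._observers = []

    def attach(self, view):
        self._observers.append(view)

    def set_value(self, value):
        self._value = value
        for view in self._observers:
            view.render(self._value)

class TextView:
    def __init__(self):
        self.last_rendered = None

    def render(self, value):
        # A graphical or aural view would present the same data differently.
        self.last_rendered = f"value = {value}"

class Controller:
    def __init__(self, model):
        self._model = model

    def handle_input(self, raw):
        # Validate and translate raw user input before mutating the model.
        self._model.set_value(int(raw))

model = Model()
view = TextView()
model.attach(view)
Controller(model).handle_input("42")
```

Because views only observe the model, a speech view and a visual view could be attached to the same model without changing it, which is exactly the flexibility that makes MVC attractive for multimodal interaction.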

See also: the W3C Multimodal Interaction Activity

Google to Phase Out Gears Plug-In in Favor of HTML5
Stephen Shankland, CNET

"Google plans to phase out its Gears plug-in in favor of HTML5 when it comes to augmenting browser abilities... Along with Mozilla, Opera, Apple, and some other allies, Google has been agitating for features that can make browsers and the Web into a more powerful foundation for Web sites and Web applications. Gears was an early Google effort in this area... But Gears emerged in 2007—back before Google released a browser of its own, before the World Wide Web Consortium had put its full weight behind HTML5, before HTML5 had gotten the traction it now enjoys as an official standard in the making, and before Microsoft took interest in contributing to that standard. It's clear things are different now, and HTML5 is solving the same problems Gears set out to fix, and a healthy cooperation is under way for future Web standards work.

Linus Upson, Google's engineering director for the Chrome browser and Chrome OS, confirmed that Gears will be supported but isn't an active area of development. Perhaps the most notable Gears feature is the ability to store data on a PC so a Web application could work even when disconnected from the network, with Gmail and Google Docs being the biggest examples. But that's solved by the local database work in HTML5 that's now arriving in browsers. HTML5 also provides interfaces with files for better uploading, and geolocation to let a Web site make use of a person's location...

From LA Times article (Mark Milian): "We are excited that much of the technology in Gears, including offline support and geolocation APIs, is being incorporated into the HTML5 spec as an open standard supported across browsers, and see that as the logical next step for developers looking to include these features in their websites... We're continuing to support Gears so that nothing breaks for sites that use it. But we expect developers to use HTML5 for these features moving forward as it's a standards-based approach that will be available across all browsers..."

See also: the LA Times article

Semantic MPEG Query Format Validation and Processing
Mario Doeller, Armelle Natacha (et al.); IEEE MultiMedia

The retrieval of multimedia content has experienced a tremendous boost in the research and industry sectors during the last couple of years. Due to the intensive work in this area, an unmanageable diversity of approaches and retrieval systems in the image, video, and audio domains has emerged. In addition, computer scientists and industry professionals have fostered major developments in the area of multimedia databases. While this diversity serves to stimulate the development of new technologies, it also prevents clients from relying on a universal, interoperable search-and-retrieval system. In this context, the MPEG standardization committee (ISO/IEC JTC1/SC29/WG11) has developed a new standard, the MPEG Query Format (ISO/IEC 15938-12, MPQF), which provides a standardized interface to multimedia document repositories, including multimedia databases, documental databases, digital libraries, and geographical information systems...

Currently, the working groups are implementing the MPQF reference software, which consists of three software modules: the MPQF validator, parser, and basic interpreter. MPQF is an XML-based multimedia query language that defines the format of queries and replies that can be exchanged between clients and servers in a multimedia search-and-retrieval environment. The MPQF validator first checks the XML form and validity of an MPQF input/output query according to the rules of XML 1.1 and the MPQF XML schema. Second, the validator checks whether the input or output query is semantically compliant with the rules, described in the MPQF standard, that cannot be enforced by the XML schema. After successful validation, the MPQF parser translates the XML-based query instance into an internal representation as a Java object, complete with methods for accessing and modifying the different parts of the query. Finally, the MPQF basic interpreter serves to help understand the semantics of certain parts of the language, focusing on basic conditions and query types.
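The first, syntactic stage of such a validator can be sketched with Python's standard xml.etree module; the element names are illustrative, and full validation against the MPQF XML schema and the standard's semantic rules would require a schema processor, which is not shown here:

```python
# Sketch of the well-formedness check that precedes schema and semantic
# validation in an MPQF-style validator pipeline.
import xml.etree.ElementTree as ET

def check_well_formed(xml_text):
    """Return (True, root_tag) if the document parses, else (False, error)."""
    try:
        root = ET.fromstring(xml_text)
        return True, root.tag
    except ET.ParseError as err:
        return False, str(err)

ok, info = check_well_formed("<MpegQuery><Input/></MpegQuery>")
bad, reason = check_well_formed("<MpegQuery><Input></MpegQuery>")
```

Splitting validation into well-formedness, schema, and semantic stages mirrors the article's design: each stage catches errors the next stage could not express.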

In this context, this article presents the definition and implementation of a semantic MPQF validator and a processing engine for an MPEG-7 multimedia database. This article also presents an implementation of selected query types of MPQF within Oracle's object-relational database management system... The validator framework was approved at the 86th MPEG meeting in October 2008. In our future work in this area, we plan to work on semantic rules for other standard committees. JPEG, for example, uses a subset of the MPQF in its JPSearch project. We also plan to continue to focus on the query-processing engine for MPQF and its integration into the MPEG-7 MMDB. Currently, the processing engine supports the evaluation of QueryByFree-Text, QueryByXQuery, QueryByDescription, and QueryByMedia..."

See also: the IEEE ComputingNow December 2009 Theme 'Multimedia Metadata and Semantic Management'

Managing and Querying Distributed Multimedia Metadata
Sébastien Laborie, Ana-Maria Manzat, Florence Sèdes; IEEE MultiMedia

In this article the authors "propose an automatically constructed metadata resume to facilitate the retrieval of desired media information about content distributed across several servers. The architecture consists of several parts: multimedia content (a single media item, such as a piece of text, an image, a video, or an audio snippet); a multimedia collection; a set of extractors that, when applied to a given piece of multimedia content, return a set of content metadata; a metadata collection (all the content metadata describing objects from the multimedia collection, encoded according to standards such as Exif, Dublin Core, or MXF); and the metadata resume (a concise version of the metadata collections).

In our proposed framework, multimedia content URIs are linked to remote server URIs. To preserve this information for each randomly generated RDF description, we randomly selected a specific RDF node and typed it as a server. Thereafter, we evaluated the efficiency of the metadata resume by measuring the query response time...

We implemented our framework in Java using Jena for managing and storing RDF metadata, and ARQ for querying this metadata with SPARQL. To be as general as possible, we generated random RDF graphs by specifying particular graph sizes and densities using RDFizer... Our results from testing this proposed framework show that distributing metadata is more efficient than centralizing it, and that querying a metadata resume to locate servers that might contain relevant information can improve search performance. This RDF-based framework is going to be integrated in the context of the Lindo project, which is focused on managing distributed multimedia indexing. However, our proposed framework is not limited to one particular representation language. Indeed, it can handle other existing languages, such as XTM or Topic Maps models...
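The metadata-resume lookup idea can be sketched in pure Python; the server URIs, keyword sets, and function name below are hypothetical illustrations, not data from the article (the authors' implementation uses Java, Jena, and SPARQL):

```python
# Sketch of querying a "metadata resume": a concise index of each
# server's metadata is consulted first, so the full query is sent only
# to servers likely to hold relevant content.
metadata_resume = {
    "http://server-a.example.org": {"video", "concert", "2009"},
    "http://server-b.example.org": {"image", "landscape"},
}

def candidate_servers(query_terms):
    """Return servers whose resume mentions every query term."""
    terms = set(query_terms)
    return sorted(
        server for server, keywords in metadata_resume.items()
        if terms <= keywords
    )

servers = candidate_servers(["video", "concert"])
```

Filtering on the small resume before contacting remote servers is what lets the distributed scheme beat a centralized store in the authors' response-time measurements.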

As for future work, we plan to continue to evaluate this proposed framework with some metadata based on RDFS and OWL, and more expressive query languages such as CPSPARQL. We also plan to experiment with the influence of the summary degree on the system's performance, for example, finding the best summary degree. Finally, we intend to test our proposal by mixing several languages in metadata collections, for example, some servers will contain RDF descriptions while others will contain topic map models..."

The Ariadne Infrastructure for Managing and Storing Metadata
Stefaan Ternier, Katrien Verbert, Gonzalo Parra (et al.), IEEE Internet Computing

"In the e-learning community, interest is growing in reusing learning objects (LOs), defined as any entity, digital or non-digital, that may be used for learning, education, or training. LOs are often described with standardized metadata using the IEEE Learning Technology Standards Committee (LTSC) Learning Object Metadata (LOM) standard. Users and systems can use metadata to retrieve LOs in various innovative ways (faceted search, social recommendation, and so on) and for purposes such as attribution and capturing life-cycle or license information. This article presents and analyzes the standards-based Ariadne infrastructure for managing learning objects in an open and scalable architecture. The architecture supports the integration of learning objects in multiple, distributed repository networks...

The core Ariadne infrastructure has several components: the repository offers persistent management of LOs and metadata; the federated search engine supports transparent search within a network of heterogeneous repositories; the finder is a Web client for searching and publishing; the harvester collects metadata from external repositories; and the metadata validation service validates metadata against metadata application profiles...

The Ariadne repository features both metadata and object stores for persistently managing LOs and LOM instances. To enable stable search, publishing, and harvesting, the repository provides a search interface based on the Simple Query Interface (SQI) specification, a publishing interface based on the Simple Publishing Interface specification, and a harvesting interface based on the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). SQI lets the repository interoperate with different query languages (e.g., the ProLearn Query Language, the Contextual Query Language (CQL), or the Query Exchange Language (QEL)) and metadata standards (such as LOM, Dublin Core, or MPEG). SPI also allows interoperability for ingesting LOs and metadata instances, and OAI-PMH enables metadata collection from various repositories...
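OAI-PMH, which the harvester interface is based on, is a simple HTTP protocol: each request is a URL carrying a verb and its arguments. The sketch below builds such a request URL; the base URL is a hypothetical example, while verb and metadataPrefix are parameters defined by the OAI-PMH protocol itself:

```python
# Sketch of composing an OAI-PMH harvesting request URL, such as a
# ListRecords request for Dublin Core ("oai_dc") metadata records.
from urllib.parse import urlencode

def oai_pmh_request(base_url, verb, **params):
    """Compose an OAI-PMH request URL for the given verb and arguments."""
    query = urlencode({"verb": verb, **params})
    return f"{base_url}?{query}"

url = oai_pmh_request(
    "http://repository.example.org/oai",
    "ListRecords",
    metadataPrefix="oai_dc",
)
```

A harvester like Ariadne's issues such requests periodically, then feeds the returned metadata records to the validation service described below.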

An important feature of the harvester is its integration with a metadata validation service. The harvester uses this service to validate each individual target's metadata against a specific validation scheme and automatically creates a validation report. Ariadne currently harvests from more than 20 LO repositories. Typically, all repositories in a network export metadata instances that conform to an application profile. The metadata validation service integrates various state-of-the-art metadata validation components, such as XML Schema validation, Schematron, and third-party vcard and vocabulary validators..."

DARPA Readies Social Networking Experiment
J. Nicholas Hoover, InformationWeek

In a bid to study behavioral economics, social networking, and game theory, the U.S. Defense Advanced Research Projects Agency (DARPA) is offering prize money to anyone who can find ten weather balloons located around the country. The research is designed to show how social networking and the Internet can help solve broad, time-critical challenges. DARPA is offering a $40,000 prize to the first person who can find 10 large, red weather balloons that have been moored at locations across the United States. There's a catch: The balloons, to be put up in "readily accessible locations" visible from nearby roads, will only be up from 10 a.m. until 4 p.m. Eastern on Saturday, December 5, 2009. People will have until December 14 to submit their findings...

A Facebook group established for the challenge had 555 members at last count. One man set up a Google map marking locations of individuals involved in the challenge. Messages on Facebook and a wiki set up for the network challenge are filled with claims that people will share prize money or send it to a charity as an enticement to get others to help. One group plans to synthesize publicly available information and interpret it, hopefully improving its chances. Others suggest setting up red herrings to confuse participants..."

See also: the DARPA Network Challenge photos

Connect09: How IBM WebSphere Got REST Religion But Forgot to Tell Anyone
James Governor, Monkchips Blog

From IBM's Connect09 analyst conference: "The Connect09 session that most surprised me was 'Federated Connectivity — Smarter Integration Across and Beyond The Enterprise', hosted by AIM General Manager Craig Hayman... Suddenly I realised Craig was saying something pretty revolutionary. REST-style development and integration is part of the SOA world, and AIM is increasingly supporting REST in its products. The new Service Federation Management product is not based on Big SOA WS-* style integration. On the contrary, it's designed to be easy to use, to make point-to-point integration more programmatic. This is SOA as documentation, rather than SOA as specification. You see, IBM has this thing called WebSphere Service Registry and Repository (WSRR), a tool for managing SOA services. While that may have initially meant implementing UDDI, today we have a nice ATOM-based store, with a more metadata-oriented and less WS-*-specific approach. IBM took a flexible, modern approach to architecting WSRR, and it shows...

But just because IBM is now taking advantage of REST and more lightweight integration methods doesn't mean its customers are. IBM's main education efforts in SOA were about the style that is now being superseded. IBM customers are usually two to three, if not four to five, years behind the current state of the art. It's time for IBM to start beating the drum for the new development and integration style..."


XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation
Microsoft Corporation
Oracle Corporation
Sun Microsystems, Inc.


Hosted By
OASIS - Organization for the Advancement of Structured Information Standards
