The OASIS Cover Pages: The Online Resource for Markup Language Technologies
XML Daily Newslink. Tuesday, 24 June 2008

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
IBM Corporation http://www.ibm.com



W3C Common Web Language Evaluation and Installation Incubator Group
Staff, W3C Announcement

W3C has announced the creation of a new Common Web Language (CWL) Evaluation and Installation Incubator Group, sponsored by W3C Members Institute of Semantic Computing (ISeC), Japan's National Institute of Advanced Industrial Science and Technology (AIST), Keio University, and JustSystems Corporation. The mission of the Group, chartered through May 31, 2009, is to put the CWL into practice in an actual web environment using a pilot model of the CWL platform. The CWL is a graphic language for semantic networks with hypernodes; it is used to describe the contents and metadata of web pages in three different forms: UNL, CDL, and RDF. The CWL platform allows people to enter CWL using natural languages and to display information written in CWL in natural languages. Using this platform, the CWL will be evaluated from the perspectives of multilingualism, semantic computing, and the Semantic Web. The CWL is designed to describe the metadata and contents of web pages in order to break down language barriers and to enable computers to process web information semantically. However, deciding a language specification is not by itself enough for practical use: evaluation from various aspects, based on actual use, is essential, and adjustment and improvement based on that evaluation and feedback will make the CWL and its platform fit for practical use on the web. According to the W3C "Common Web Language" Incubator Group Report, the CWL must solve two big problems in the present web: the language barrier, and the lack of machine understandability of web contents. (1) Language barrier: Currently almost all web pages are written in English. This is convenient for English-speaking people, but not for non-English speakers, who are the majority in the world and cannot easily get information that is not written in their mother tongue. Machine translation facilities have recently been added to the web, but they are not the solution: machine translation has problems of quality and of language coverage. (2) Machine understandability: HTML tags give information about the structure of web documents, but they do not give semantic information about the individual words or sentences in those documents, so HTML tag information is insufficient for making intelligent use of web page contents. RDF and OWL provide a framework for attaching semantic information, but they lack a standard vocabulary for describing web contents... W3C Incubator Activities are intended to foster development of emerging Web-related technologies.
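
As a rough illustration of the RDF form mentioned above, here is a minimal, hypothetical sketch of web page metadata in RDF/XML. It is not taken from the CWL specification: the Dublin Core properties and the example URI are placeholders chosen for illustration only.

  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
           xmlns:dc="http://purl.org/dc/elements/1.1/">
    <!-- Hypothetical metadata for one web page; the dc:* properties are
         Dublin Core placeholders, not CWL vocabulary -->
    <rdf:Description rdf:about="http://example.org/page.html">
      <dc:title xml:lang="en">An Introduction to Semantic Computing</dc:title>
      <dc:language>en</dc:language>
      <dc:subject>semantic computing</dc:subject>
    </rdf:Description>
  </rdf:RDF>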

See also: W3C Incubator Activities


Definition Languages for RESTful Web Services: WADL vs. WSDL 2.0
T. Takase, S. Makino, S. Kawanaka, K. Ueno, C. Ferris; IBM Report

There are two specifications for describing interfaces of HTTP-based web applications: WADL and the WSDL 2.0 HTTP binding extension. These two languages are very similar, but there are some differences. This paper attempts to provide an unbiased, objective comparison of the two technologies, highlighting both the differences and similarities between WADL and the WSDL 2.0 HTTP binding... The World Wide Web has, until recently, not had a formal means by which a web application can be described in a machine-processable manner. In a purely browser-based context, it has not been necessary to provide such a description, because the web application provided its interface to the user in the form of (X)HTML and web forms (such as HTML forms and XForms) that were rendered in the browser, to be interacted with directly by an end user. However, the lack of a formal description of a web application's interface has made it difficult (though certainly not impossible) to develop non-browser-based interactions with web applications. Authors of web applications that wanted to encourage non-browser-based access typically provided natural-language descriptions that were subject to both misinterpretation and version skew issues (e.g., where the application's interface changed but the natural-language description did not change accordingly). While natural-language descriptions of web applications are of value, such descriptions do not aid in automating and/or simplifying the development of the software intended to interact with the described web applications. Additionally, it is a fairly common occurrence that the description becomes inconsistent with the actual service interface, because there is no formal link between the two... WADL and the WSDL 2.0 HTTP binding are similar but do have some differences. Each specification has its pros and cons. In short, WADL is simple and has limited scope. By design, WADL is limited to describing HTTP applications and does not address features such as security. On the other hand, the WSDL 2.0 HTTP binding is more feature-rich, at the cost of increased complexity, yet still lacks a true resource-centric model. It will be interesting to watch as each of these technologies matures and gains broader adoption.
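
For orientation, here is a minimal sketch of a WADL description for a hypothetical search resource, assuming the 2006 draft WADL namespace; the base URI and parameter name are invented placeholders.

  <application xmlns="http://research.sun.com/wadl/2006/10">
    <resources base="http://example.org/api/">
      <resource path="search">
        <!-- One HTTP method exposed by this resource -->
        <method name="GET">
          <request>
            <param name="q" style="query" required="true"/>
          </request>
          <response>
            <representation mediaType="application/xml"/>
          </response>
        </method>
      </resource>
    </resources>
  </application>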

See also: the Web Application Description Language (WADL) Project


Last Call Review for Speech Synthesis Markup Language (SSML) Version 1.1
Daniel C. Burnett, Zhi Wei Shuang (eds), W3C Technical Report

W3C announced the release of a Last Call Working Draft for the "Speech Synthesis Markup Language (SSML) Version 1.1" specification. The document has been produced by members of the Voice Browser Working Group as part of the W3C Voice Browser Activity—"applying Web technology to enable users to access services from their telephone via a combination of speech and DTMF." Public comment on the Draft is invited; the Last Call period ends on 20-July-2008. Appendix G summarizes the changes since SSML 1.0. The Speech Synthesis Markup Language specification (SSML) is part of a larger set of markup specifications for voice browsers developed through the open processes of the W3C. It is designed to provide a rich, XML-based markup language for assisting the generation of synthetic speech in Web and other applications. The essential role of the markup language is to give authors of synthesizable content a standard way to control aspects of speech output such as pronunciation, volume, pitch, rate, etc. across different synthesis-capable platforms. A related initiative to establish a standard system for marking up text input is SABLE, which tried to integrate many different XML-based markups for speech synthesis into a new one. The activity carried out in SABLE was also used as the main starting point for defining the Speech Synthesis Markup Requirements for Voice Markup Languages. Since then, SABLE itself has not undergone any further development. The intended use of SSML is to improve the quality of synthesized content. Different markup elements impact different stages of the synthesis process. The markup may be produced either automatically, for instance via XSLT or CSS3 from an XHTML document, or by human authoring. Markup may be present within a complete SSML document or as part of a fragment embedded in another language, although no interactions with other languages are specified as part of SSML itself. Most of the markup included in SSML is suitable for use by the majority of content developers; however, some advanced features like phoneme and prosody (e.g. for speech contour design) may require specialized knowledge... SSML Version 1.1 enhances SSML 1.0 to provide better support for a broader set of natural (human) languages. To determine in what ways, if any, SSML is limited by its design with respect to supporting languages that are in large commercial or emerging markets for speech synthesis technologies but for which there was limited or no participation by either native speakers or experts during the development of SSML 1.0, the W3C held three workshops on the Internationalization of SSML. The first workshop, in Beijing, PRC, in October 2005, focused primarily on Chinese, Korean, and Japanese languages, and the second, in Crete, Greece, in May 2006, focused primarily on Arabic, Indian, and Eastern European languages. The third workshop, in Hyderabad, India, in January 2007, focused heavily on Indian and Middle Eastern languages. Information collected during these workshops was used to develop an updated requirements document.
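
As a brief illustration of the kind of control SSML gives authors, here is a minimal sketch of an SSML 1.1 document adjusting rate, pitch, and pronunciation; the sentence content is invented, and the attribute values shown are only a small sample of what the specification allows.

  <speak version="1.1" xmlns="http://www.w3.org/2001/10/synthesis"
         xml:lang="en-US">
    <!-- Slow the speaking rate and raise the pitch for one phrase -->
    <prosody rate="slow" pitch="high">Welcome to the demonstration.</prosody>
    <!-- Supply an explicit IPA pronunciation for an ambiguous word -->
    The word <phoneme alphabet="ipa" ph="təˈmɑːtoʊ">tomato</phoneme>
    can be pronounced in more than one way.
  </speak>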

See also: the "Voice Browser" Activity


OASIS Cross-Enterprise Security and Privacy Authorization (XSPA): WS-Trust Healthcare Profile
B. Burley, D. DeCouteau, M. Davis, D. Staggs (eds), OASIS Working Draft

This working draft anticipates the creation of a new OASIS Cross-Enterprise Security and Privacy Authorization (XSPA) Technical Committee. The document describes how WS-Trust is leveraged by cross-enterprise security and privacy authorization (XSPA) to satisfy requirements pertaining to information-centric security within the healthcare community. XSPA encompasses the mechanisms to authenticate, administer, and enforce authorization policies controlling access to protected information residing within or across enterprise boundaries. The policies being administered and enforced relate to security, privacy, and consent directives. In general, and with respect to this profile, WS-Trust works in concert with additional, supporting, lower-layer standards including WS-Security, WS-Policy, and SAML to provide the overarching XSPA specification. XACML is well suited for, and may be used to provide, policy administration and enforcement within XSPA, leveraging a WS-based infrastructure where appropriate. However, this profile does not include the use of XACML within XSPA, and XSPA does not mandate the use of XACML. This working draft document provides an overview of the major WS components of the XSPA profile. The profile then establishes how these components may be used to implement cross-enterprise access control requirements relevant to the healthcare community. The profile does not address security required to protect message transactions, such as digital signatures and encryption, but instead discusses how shared messages can be used to negotiate the claims necessary to access a protected resource... This profile specifies the use of WS-Trust, an extension of WS-Security, as a token-type-agnostic means for requesting, issuing, renewing, and validating security assertions. While the WS-Trust specification completely describes these activities, a brief overview is provided here describing the interactions between a web service requestor, a security token service (STS), and a web service provider. The core component of WS-Trust is the STS. The authentication and authorization-related services provided by the STS are conducted on the front line of this profile's multi-layered strategy for securing web services... The XSPA WS-Trust model facilitates coarse- and fine-grained access control, relieving the service provider from making access control decisions: the service provider need only enforce the decision determined by an access control service (ACS)... Those familiar with the government healthcare sector know that HITSP has been tasked to identify standards supporting the AHIC use cases. HITSP has identified a need for a WS-Trust profile that supports cross-enterprise security and privacy authorizations. We have started a draft profile and would appreciate the comments of the TC. HITSP would like to see the profile balloted in OASIS so it can be cited as a standard profile. The government healthcare sector will be required to adhere to the standards selected by HITSP, so this is an important effort.
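
For orientation, here is a minimal sketch of the kind of WS-Trust RequestSecurityToken message a web service requestor might send to an STS, assuming the WS-Trust 1.3 namespaces; the token type, endpoint address, and scenario are placeholders and are not taken from the XSPA profile itself.

  <wst:RequestSecurityToken
      xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512"
      xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
      xmlns:wsa="http://www.w3.org/2005/08/addressing">
    <wst:RequestType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</wst:RequestType>
    <!-- Ask for a SAML 2.0 assertion (placeholder token type) -->
    <wst:TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</wst:TokenType>
    <!-- The protected service the requestor wants to access (placeholder) -->
    <wsp:AppliesTo>
      <wsa:EndpointReference>
        <wsa:Address>http://example.org/healthcare/records</wsa:Address>
      </wsa:EndpointReference>
    </wsp:AppliesTo>
  </wst:RequestSecurityToken>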

See also: the associated posting


New Release: Force.com Toolkit for Google Data APIs
Staff, Force.com Project Team

At the Santa Clara, California "Tour de Force" event, developers announced a new Force.com Toolkit for Google Data APIs... GData, Atom, and RSS 2.0 all share the same basic data model: a container that holds both some global data and any number of entries. For each protocol, the format is defined by a base schema, but it can be extended using foreign namespaces. GData can use either the Atom syndication format (for both reads and writes) or the RSS format (for reads only). Atom is GData's default format... The Force.com platform is a scalable, secure, and popular on-demand platform which supports a Web Services API from a wide variety of client-side languages. The Force.com Toolkit for Google Data APIs provides a free and open-source set of tools and services that developers can use to take advantage of Google Data APIs from within Force.com. The goal of the toolkit is to make Google Data APIs—starting with Spreadsheets, Documents, and Calendar—first-class citizens of the Force.com environment. The project will support: Google Documents API; Google Calendar API; Google Spreadsheet API; Blogger API; Contacts API; Google Data Authentication. Specifically, the toolkit exposes these APIs directly within Apex, making it easier to access them natively from Force.com apps and providing tighter integration between the platforms with less developer effort... Perhaps the most significant (and technically interesting) aspect of the toolkit is that, unlike the early Web 2.0 mashups that worked primarily by combining services within the browser via JavaScript, this toolkit works by literally connecting the clouds: all of the integration and interaction happens 'in the cloud' rather than on the client. This is possible because of the increasingly rich capabilities of the Apex runtime within Force.com—and significant because these server-side interactions can be much richer and more robust than anything possible on the client. Best of all, it's remarkably easy to use—just a few lines of code will have your Force.com apps exchanging data with Google Apps in real time. This is a new model for mashups, and one we think will become increasingly common.
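
To make the shared data model concrete (a container holding some global data plus any number of entries), here is a minimal, invented Atom feed of the kind GData builds on; the IDs, titles, and dates are placeholders.

  <feed xmlns="http://www.w3.org/2005/Atom">
    <!-- Global, feed-level data -->
    <id>http://example.org/feeds/rows</id>
    <title>Project Spreadsheet Rows</title>
    <updated>2008-06-24T00:00:00Z</updated>
    <!-- Any number of entries -->
    <entry>
      <id>http://example.org/feeds/rows/1</id>
      <title>Row 1</title>
      <updated>2008-06-24T00:00:00Z</updated>
      <content type="text">First row of sample data</content>
    </entry>
  </feed>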

See also: Google Data APIs


Discovering XProc: Enable the XML Ecosystem with Pipelines
James R. Fuller, IBM developerWorks

Since October 2005, the W3C XML Processing Model Working Group (WG) has collaborated on a Working Draft (WD) specification titled "XProc: An XML Pipeline Language." With early implementations starting to appear on the horizon and a second Last Call by the W3C WG anticipated (paving the way to a W3C draft recommendation), it has become clear that the XProc specification effort has picked up pace over the past twelve months. XProc's goal is to promote an interoperable and standard approach to the processing of XML documents. These requirements were formally set out in a group of use cases, including: (1) Apply a sequence of operations to XML documents; (2) Parse XML, validate it against a schema, and then apply an XSLT transformation; (3) Combine multiple XML documents (document aggregation); (4) Interact with Web services; (5) Use metadata retrieval. XProc is a markup language that describes processing pipelines composed of discrete steps that apply operations to XML documents. If a specification's importance is related to the quality of the individuals working on it, then XProc is significant indeed. The W3C XML Processing Model WG is packed with pragmatic XML practitioners and superstars as well as grizzled veterans of past XML-related efforts: Erik Bruchez, Andrew Fang, Paul Grosso, Rui Lopes, Murray Maloney, Alex Milowski, Michael Sperberg-McQueen, Jeni Tennison, Henry Thompson, Richard Tobin, Alessandro Vernet, Norman Walsh (Chair), and Mohamed Zergaoui, to name a few. XProc's declarative format, combined with the simplicity of thinking in terms of pipelines, will mean that non-technical people can be involved in writing and maintaining processing workflows. In many configurations, XProc is amenable to streaming, whereas other approaches to controlling XML processes (for example, XSLT) are not. XProc steps focus on performing specific operations, which over time should see greater optimization (in an XProc processor used by many) than one-off code that you or I write. XProc's standard step library and extensibility mechanisms position XProc to be an all-encompassing solution. Structured data (such as XProc markup) is typically easier to reuse than structured code... Not surprisingly, XProc will probably gain considerable favor amongst groups who work with and generate XML documents. You can also imagine that people with business workflows that have XML documents flowing through them might be excited by the possibility of modeling those workflows with XProc pipelines and then running them on their XML documents... It's important for XML technologists to remind themselves that some families and phyla of developers do not work with XML. When someone from these groups asks, "Why do I need XProc?," my first response is usually that XProc is designed to be platform neutral, meaning that XProc can run everywhere a compliant XProc processor can run. However, if you already work with XML documents and technologies, XProc is probably something you have emulated with other approaches (XSLT, Apache Ant, Apache Cocoon sitemaps, Jelly, and so on), and you will be happy to see the arrival of XProc processors.
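
As a sketch of use case (2) above (parse, validate against a schema, then transform with XSLT), here is what a minimal pipeline might look like under the Working Draft's namespace and standard step names, which could still change before the specification is finalized; the file names are placeholders.

  <p:pipeline xmlns:p="http://www.w3.org/ns/xproc" version="1.0">
    <!-- Validate the source document against a W3C XML Schema -->
    <p:validate-with-xml-schema>
      <p:input port="schema">
        <p:document href="schema.xsd"/>
      </p:input>
    </p:validate-with-xml-schema>
    <!-- Then apply an XSLT transformation to the validated document -->
    <p:xslt>
      <p:input port="stylesheet">
        <p:document href="style.xsl"/>
      </p:input>
    </p:xslt>
  </p:pipeline>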

See also: XML Pipeline languages


SourceForge DITA Exchange Package (DXP) Project
Eliot Kimber, DITA Users Group Announcement

The recently announced SourceForge DITA Exchange Package (DXP) project is an effort to define a ZIP-based packaging mechanism for DITA content. The general requirement is to be able to package one or more DITA maps and all of their local (and, optionally, peer) resources into a single storage object that can be easily interchanged with other users, systems, and processors, or used directly for editing. The intent of the project is to define the simplest possible mechanism that satisfies the documented requirements. If the DXP mechanism gains acceptance and proves useful, the intent of this project is to submit the design for standardization by the appropriate standards body... Because DXP packages are ZIP files, they can be packed and unpacked using any ZIP-aware processor. As of July 2008, version 9.3 of the OxygenXML editor product provides the ability to edit files directly from ZIP files, meaning you can edit directly from a DXP package. We are hopeful that other DITA-aware editor products will add similar features. The intended deliverables from this project include: (1) A specialization of DITA map for defining package manifests; (2) A packager utility for creating DXP packages from DITA maps; (3) An unpacker utility for extracting resources from DXP packages (for example, extracting a single map and its dependencies from a package containing several maps). All project materials are licensed under the same Apache open source license used by the DITA Open Toolkit. DITA DXP packages are intended to support the following use cases: [A] Convenient interchange of one or more maps and all local dependencies, such as between the authoring enterprise and a localization supplier; [B] Storage-conserving local storage of DITA resources for local editing and processing—e.g., treating a DITA map and its dependent topics as a single "document," similar to the Microsoft Office 2007 .docx format; [C] Archiving of DITA resources; [D] Export of maps and dependencies from content management systems in a way that enables re-import with CMS-specific metadata maintained—for example, to support off-line editing of DITA resources without loss of context and with minimal local storage costs, as opposed to using something like CVS/Subversion-style local working copies; [E] Creation of a package that reflects the application of a particular filter specification to the source content (for example, a package that omits all internal-use-only content or only includes content for a specific product version or operating system).
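
Since the manifest is described as a specialization of DITA map, the following hypothetical sketch suggests what a package manifest might look like; the element usage, file layout, and attribute values are invented for illustration and are not taken from the DXP project's actual design.

  <map>
    <title>Example DXP package manifest (hypothetical)</title>
    <!-- The root map carried in the package -->
    <topicref href="maps/user-guide.ditamap" format="ditamap"/>
    <!-- Local dependencies referenced from that map -->
    <topicref href="topics/installing.dita" format="dita"/>
    <topicref href="images/overview.png" format="png" scope="local"/>
  </map>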

See also: DITA references


Selected from the Cover Pages, by Robin Cover

Information Card Foundation Formed to Support User-Centric Digital Identity

Equifax, Google, Microsoft, Novell, Oracle, and PayPal have announced the formation of the Information Card Foundation (ICF) as an independent, not-for-profit organization designed to advance the adoption and use of Information Cards across the Internet. ICF's mission is to advance the use of the Information Card metaphor as a key component of an open, interoperable, royalty-free, user-centric identity layer spanning both the enterprise and the Internet. Information about ICF is provided through a 2008-06-24 press release and web site documents, as well as through blogs and mailing lists. ICF is working with, or is planning to work with, other supporting organizations, including: Concordia, the Fraunhofer Institute FOKUS, Identity Commons, Liberty Alliance, the OpenID Foundation, and Open Source Identity Systems (OSIS). In principle, ICF working groups will collaborate with other identity-related organizations: "(1) Protocol, specifications and standards groups; (2) Organizations that promote user-centric identity principles; (3) Other groups to perform interoperability certification tests in a pragmatic, inclusive process wherever possible to minimize cost and time-to-market, while meeting a quality metric." ICF is also affiliated with Identity Commons as a working group; this means ICF agrees to operate under the shared principles of all Identity Commons working groups. The founding members of the Information Card Foundation "represent a wide range of technology, data, and consumer companies. Equifax, Google, Microsoft, Novell, Oracle, and PayPal are founding members of the Information Card Foundation Board of Directors. Individuals also serving on the board include ICF Chairman Paul Trevithick of Parity, Patrick Harding of Ping Identity, Mary Ruddy of Meristic, Ben Laurie, Andrew Hodgkinson of Novell, Drummond Reed, Pamela Dingle of the Pamela Project, Axel Nennker, and Kim Cameron of Microsoft. Additional founding members are Arcot Systems, Aristotle, A.T.E. Software, BackgroundChecks.com, CORISECIO, FuGen Solutions, the Fraunhofer Institute, Fun Communications, the Liberty Alliance, Gemalto, IDology, IPcommerce, ooTao, Parity, Ping Identity, Privo, Wave Systems, and WSO2." Published ICF Bylaws, IPR Policy, and Contribution Agreements govern the activities of ICF Working Groups. Working Groups may be proposed by Steering- and Sponsor-level members, and all members may participate in a Working Group. Working Groups are designed to be temporary, remaining in place until the deliverables specified in their application are completed. Information Cards, according to the ICF FAQ document, are the "digital, online equivalents of your physical identification credentials such as a driver's license, passport, credit card, club card, business card or a social greeting card. Users control the distribution of their personal information through each Information Card. Information Cards are stored in a user's own online wallet (called a 'selector') and 'handed out' with a mouse click just like a physical ID card."

See also: the ICF announcement


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com
Sun Microsystems, Inc. http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2008-06-24.html  —  Legal stuff
Robin Cover, Editor: robin@oasis-open.org