XML Daily Newslink. Tuesday, 09 March 2010

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
ISIS Papyrus http://www.isis-papyrus.com/



W3C Releases Working Draft of Ontology for Media Resource 1.0
Wonsuk Lee, Tobias Bürger, Felix Sasaki, Véronique Malaisé, Florian Stegmaier, Joakim Söderberg (eds), W3C Technical Report

W3C announced that the Media Annotations Working Group has published a Working Draft of Ontology for Media Resource 1.0. The specification defines a core vocabulary for describing media resources on the Web, built around a core set of properties that covers basic metadata. Further, it defines syntactic- and semantic-level mappings between elements from existing formats. The ontology is intended to foster interoperability among the various kinds of metadata formats currently used to describe media resources on the Web, viz., to counter the current proliferation of video metadata formats by providing full or partial mappings towards existing formats. This WG is part of the W3C Video on the Web Activity. WG members request feedback on the general direction, and in particular encourage public comment on Section 4, Property definition.

For each property, the vocabulary in Ontology for Media Resource 1.0 describes a mapping between the different schemas that contain it. Ideally, the mappings should be semantics-preserving, but this is not achieved in the first version of the specification because of the differing nature of the properties in the mapped vocabularies: their extensions do not overlap exactly, and their values may differ in syntax as well. For example, the property dc:creator from Dublin Core and the property exif:Artist defined in EXIF are both mapped to the Creator property of our Ontology, but the extension of the property in the EXIF vocabulary (the set of values that the property can refer to) is more specific than that of Dublin Core. Mapping back and forth with our ontology as reference will hence induce a certain loss of semantics. This is inevitable if we want to achieve a certain amount of interoperability.
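As a toy illustration of the mapping idea (not taken from the Working Draft: the resource, the creator value, and the exif: and ma: namespace URIs are assumptions to be checked against the specification), the same creator information expressed in two source vocabularies would surface through a single ontology property:

    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:dc="http://purl.org/dc/elements/1.1/"
             xmlns:exif="http://www.w3.org/2003/12/exif/ns#"
             xmlns:ma="http://www.w3.org/ns/ma-ont#">

      <!-- The same resource described in two source vocabularies -->
      <rdf:Description rdf:about="http://example.org/media/sunset.jpg">
        <dc:creator>Jane Example</dc:creator>     <!-- Dublin Core: creator in a broad sense -->
        <exif:Artist>Jane Example</exif:Artist>   <!-- EXIF: specifically the photographer -->
      </rdf:Description>

      <!-- After mapping, both values are reachable through one Creator property -->
      <rdf:Description rdf:about="http://example.org/media/sunset.jpg">
        <ma:creator>Jane Example</ma:creator>
      </rdf:Description>
    </rdf:RDF>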

The Ontology defines mappings between a set of vocabularies and a set of core properties in our own namespace. Although some of these properties may seem redundant with the Dublin Core set, we defined our own namespace for several reasons: (1) Dublin Core is one of the vocabularies that we take into account in the mappings; (2) the Dublin Core set does not cover all of our needs, so we would still have to create properties in our own namespace; (3) more importantly, the Dublin Core properties were created with a set of restrictions, albeit loose ones, and we may want to apply other restrictions to our properties. We have to keep "our hands on" the set of properties in order to control or constrain their behavior, and cannot be dependent on an external source of authority for the definition of our core mapping. For practical use of the Media Ontology in an API, we define type restrictions for our properties that go beyond the generic Dublin Core specification.

The ontology as a core set of properties and mappings provides the basic information needed by targeted applications for supporting the interoperability among the various kinds of metadata formats related to media resources, and particularly media resources on the Web. In addition, the ontology will be accompanied by an API that provides uniform access to all elements defined by the ontology. Although the set of properties is now limited, it already constitutes a proof of concept. It reflects design goals presented in Use Cases and Requirements for Ontology and API for Media Object 1.0.

See also: the Ontology for Media Resource 1.0 specification text


Service Component Architecture EJB Session Bean Binding Specification
David Booz and Anish Karmarkar (eds), OASIS Public Review Draft

Members of the OASIS Service Component Architecture / J (SCA-J) Technical Committee have released an approved Committee Draft of Service Component Architecture EJB Session Bean Binding Specification Version 1.1 for public review through May 08, 2010. This TC was chartered in July 2007 to develop specifications that standardize the use of Java technologies within an SCA domain. The TC is part of the OASIS Open Composite Services Architecture (CSA) Member Section, and its work is coordinated with the work of the other TCs in the Member Section.

This specification explains the SCA EJB session bean binding. It describes how to integrate a previously deployed session bean into an SCA assembly, and how to expose SCA services to clients which use the EJB programming model.

Details: "EJB session beans are a common technology used to implement business services. The ability to integrate SCA with session bean based services is useful because it preserves the investment incurred during the creation of those business services, while enabling the enterprise to embrace the newer SCA technology in incremental steps. The simplest form of integration is to simply enable SCA components to invoke session beans as SCA services. There is also a need to expose SCA services such that they are consumable by programmers skilled in the EJB programming model. This enables existing session bean assets to be enhanced to exploit newly deployed SCA services without the EJB programmers having to learn a new programming model.

The EJB Session Bean binding enables: (1) SCA developers to treat previously deployed stateless session beans as SCA services, by wiring them into an SCA assembly (SCA reference); (2) SCA service deployers to expose an SCA service as a stateless session bean for consumption by Java EE applications. Stateful session beans are out of scope for this specification. The terms 'session bean' and 'stateless session bean' are interchangeable for the purposes of this specification. The use of EJBs and EJB modules as SCA component implementations is beyond the scope of this specification and is described in the Java EE integration specification..."
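To make the two directions concrete, here is a minimal, hypothetical SCA composite sketch: the component, class, and JNDI/CORBA names are invented, and the exact attributes of binding.ejb should be checked against the specification text.

    <composite xmlns="http://docs.oasis-open.org/ns/opencsa/sca/200912"
               targetNamespace="http://example.com/accounts"
               name="AccountComposite">

      <component name="AccountComponent">
        <implementation.java class="com.example.AccountServiceImpl"/>
        <!-- Direction 1: wire an existing stateless session bean in as an SCA reference -->
        <reference name="creditCheck">
          <binding.ejb uri="corbaname:iiop:host:2809/NameService#ejb/CreditCheckHome"/>
        </reference>
      </component>

      <!-- Direction 2: expose the SCA service as a stateless session bean for EJB clients -->
      <service name="AccountService" promote="AccountComponent">
        <binding.ejb/>
      </service>
    </composite>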

See also: the OASIS SCA-J TC


WebCGM Version 2.1 Approved as a W3C Recommendation and OASIS Standard
Benoit Bezaire and Lofton Henderson (eds), Joint OASIS/W3C Standard

OASIS and W3C have announced the approval of WebCGM [Version] 2.1 as a final standard (both W3C Recommendation and OASIS Standard). The W3C and OASIS documents have identical normative technical content, with only cover page, editorial and formatting differences as appropriate to the two organizations.

"Computer Graphics Metafile (CGM) is an ISO standard, defined by ISO/IEC 8632:1999, for the interchange of 2D vector and mixed vector/ raster graphics. WebCGM is a profile of CGM, which adds Web linking and is optimized for Web applications in technical illustration, electronic documentation, geophysical data visualization, and similar fields. First published (1.0) in 1999, WebCGM unifies potentially diverse approaches to CGM utilization in Web document applications. It therefore represents a significant interoperability agreement amongst major users and implementers of the ISO CGM standard.

The design criteria for WebCGM aim to balance graphical expressive power on the one hand against simplicity and implementability on the other. A small but powerful set of standardized metadata elements supports hyperlinking and document navigation, picture structuring and layering, and search and query of WebCGM picture content.

The present version, WebCGM 2.1, refines and completes the features of the major WebCGM 2.0 release. WebCGM 2.0 added a DOM (API) specification for programmatic access to WebCGM objects and a specification of an XML Companion File (XCF) architecture, and extended the graphical and intelligent content of WebCGM 1.0..."

See also: the approved specification from OASIS


NIST Releases Update for XML Schema and Instance Validation Tools
Katherine C. Morris, NIST Announcement

The U.S. National Institute of Standards and Technology (NIST) has announced the release of updated online software validation services in the XML Schema and Instance Validation Services offering. These facilities can be used to remotely validate XML schema files against the W3C XML Schema standard (Schema Validation Service), and XML data files against their corresponding schemas (Instance Validation Service).

The website allows users to upload XML and XML Schema files to be processed with a number of third-party, publicly available XML tools. Often XML and XML schema files are developed using a single set of tools. By using the NIST site, users can test their files with a number of popular tools before sharing them with others, thereby improving the quality of what they release and saving everyone's time in the process. The publicly available XML parsers included on the site are Xerces (Xerces v2.9.1), Jing (JING v20081028), Multi-Schema Validator (MSV v20081113, Sun Multi-Schema XML Validator), JAXP (JAXP v1.4.2), and LibXml2 (LibXml2 v2.7.6).

You may upload an XML Schema or a ZIP file containing XML schemas to test against the W3C standard specification for XML Schemas. For instance validation, a user may upload their own XML Schema files or choose from the following sets of publicly available schemas provided on the site: (1) OAGIS, from the Open Applications Group, Inc.; (2) OASIS UBL (Universal Business Language) schemas, version 1; (3) OASIS UBL (Universal Business Language) schemas, version 2; (4) schemas from AIAG, the Automotive Industry Action Group; (5) StratML schemas for strategic plans from AIIM, the Association for Information and Image Management. Additional parsers and schemas may be added on request.
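As a hedged illustration of the two services, the pair of files below (names and content invented) shows the kind of schema one might submit to the Schema Validation Service and a matching instance for the Instance Validation Service:

    <!-- note.xsd: a minimal schema to check against the W3C XML Schema standard -->
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
               targetNamespace="http://example.org/note"
               xmlns="http://example.org/note"
               elementFormDefault="qualified">
      <xs:element name="note">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="to"   type="xs:string"/>
            <xs:element name="body" type="xs:string"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>

    <!-- note.xml: a matching instance to validate against note.xsd -->
    <note xmlns="http://example.org/note"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://example.org/note note.xsd">
      <to>NIST</to>
      <body>A well-formed instance that should also be schema-valid.</body>
    </note>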

NIST's efforts to define methods and tools for developing XML Schemas to support systems integration will help you effectively build and deploy XML Schemas amongst partners in integration projects. Through the Manufacturing Interoperability Program (MIP) XML Testbed, NIST provides guidance on how to build XML Schemas, as well as a collection of tools that help with the process, allowing projects to meet their goals more quickly and efficiently. The NIST XML Schema development and testing process is documented as the Model Development Life Cycle, which is an activity model for the creation, use, and maintenance of shared semantic models, and has been used to frame our research and development tools..."

See also: the NIST Validation Services web site


Best Current Practice for IP-based In-Vehicle Emergency Calls
Brian Rosen, Hannes Tschofenig, Ulrich Dietz (eds), IETF Internet Draft

Members of the IETF Emergency Context Resolution with Internet Technologies (ECRIT) Working Group have published an updated Internet Draft for the Informational IETF Best Current Practice for IP-based In-Vehicle Emergency Calls. The document "describes how to use a subset of the IETF-based emergency call framework for accomplishing emergency calling support in vehicles. Simplifications are possible due to the nature of the functionality that is going to be provided in vehicles with the usage of GPS. Additionally, further profiling needs to be done regarding the encoding of location information." XML examples (encoded in the PIDF-LO format) illustrate how this location information is conveyed in such an emergency call.

Details: "Emergency calls made from vehicles can assist with the objective of significantly reducing road deaths and injuries. Unfortunately, drivers often have a poor location-awareness, especially on urban roads (also during night) and abroad. In the most crucial cases, the victim(s) may not be able to call because they have been injured or trapped.

In Europe, the European Commission has launched the eCall initiative, which may best be described as a user-initiated or automatically triggered system that notifies Public Safety Answering Points (PSAPs), by means of cellular communications, that a vehicle has crashed, and provides geodetic location information and, where possible, a voice channel to the PSAP. The current specifications being developed to offer the eCall solution are defined to work with circuit-switched telephony. This document details how similar or more extended functionality can be accomplished using IP-based mechanisms...

In-vehicle emergency calls do not require a Location Configuration Protocol, since GPS is used. Furthermore, since the GPS receiver is permanently turned on, it can provide useful information even in cases where the car has entered a tunnel. Consequently, there is no need to discover a LIS. Since the emergency call within the car is either triggered by a button or, in most cases, automatically by sensors mounted in the car, there is no need to learn a dial string. This document registers a separate Service URN, namely 'urn:service:ecall', used specifically for emergency calls that are triggered by vehicles..."
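For readers unfamiliar with PIDF-LO, the fragment below is a minimal sketch of how geodetic location can be carried in a presence document; the entity, coordinates, and tuple id are invented, and the draft's own examples should be consulted for the exact eCall profile.

    <presence xmlns="urn:ietf:params:xml:ns:pidf"
              xmlns:gp="urn:ietf:params:xml:ns:pidf:geopriv10"
              xmlns:gml="http://www.opengis.net/gml"
              entity="pres:vehicle@example.com">
      <tuple id="crash-sensor-1">
        <status>
          <gp:geopriv>
            <gp:location-info>
              <!-- WGS 84 latitude/longitude reported by the in-vehicle GPS receiver -->
              <gml:Point srsName="urn:ogc:def:crs:EPSG::4326">
                <gml:pos>48.1367 11.5750</gml:pos>
              </gml:Point>
            </gp:location-info>
            <gp:usage-rules/>
          </gp:geopriv>
        </status>
      </tuple>
    </presence>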

See also: XML and Emergency Management


Paul Cotton on Microsoft Participation in the W3C HTML Working Group
Philippe Le Hégaret, Interview and Blog

Philippe is W3C Interaction Domain Leader, with oversight of the following W3C Activities: Graphics, HTML, Internationalization, Math, Rich Web Client, Style, Synchronized Multimedia, Video in the Web, and XForms. He is also the Video in the Web Activity Lead and W3C Team Contact for the IETF (with Dan Connolly) and the OMG. In this blog post, Philippe interviews Paul Cotton as part of a series of interviews with W3C Members to learn more about their support for standards and participation in W3C. Paul Cotton of Microsoft recently became co-chair of the W3C HTML Working Group (with Sam Ruby). Microsoft is collaborating very actively and helping drive consensus around many HTML 5 proposals related to Canvas, accessibility, and extensibility.

Excerpts from Paul's responses: "W3C and Microsoft understand that the Web is no longer the domain of just academics, governments, and computer scientists, but that it is today a vital service relied upon by regular people around the world as well as enterprises. It is a vital part of everyday life, and must be treated with the utmost care. Because of this, Microsoft has allocated software engineers, test developers and program managers to assist the W3C with the work ahead. Microsoft has a vast breadth and depth of experience in the challenges of supporting such a vast, dynamic ecosystem...

We've learned a lot of lessons (sometimes the hard way) about how to build resiliency and interoperability into our operating systems. We want to bring this expertise to the W3C to help with the challenge of revising the underpinnings of the Web. Since August 2009, when I became a co-chair of the W3C HTML Working Group, I have been trying to use my more than 10 years of W3C experience to help progress long-standing issues on the specification, to define a testing infrastructure, and to push for more work in important areas that had not yet received enough attention—like accessibility...

Last fall the HTML WG agreed to create two separate task forces: one on Testing and a second on Accessibility. The Testing Task Force's mandate is to set up the infrastructure and a test suite for the HTML WG's specifications... When the WG has processed all Last Call stage comments, the HTML WG specifications will move on to the W3C Candidate Recommendation stage, when the W3C issues a 'Call for Implementations' for the specifications. The idea behind starting the Testing Task Force so far in advance of the CR stage is to build as much of the required test suite as possible BEFORE the WG's specifications get to CR. By doing this, the time spent in the CR stage should be minimized. In addition, by creating tests for the specifications as early as possible, these tests can be used to help improve the quality of the HTML 5 specifications even before the Last Call or Candidate Recommendation stages. Having a comprehensive test suite for all the HTML 5 specifications is something that Microsoft thinks is very important. Microsoft is committed to submitting test cases for HTML 5 features and to reviewing test cases submitted by other task force members..."

See also: the W3C HTML Working Group


What to Expect from HTML5
Neil McAllister, InfoWorld

"Support for the next generation of HTML is already appearing in today's browsers and Web pages... Anticipation is mounting for HTML5, the overhaul of the Web markup language currently under way at the Worldwide Web Consortium (W3C). For many, the revamping is long overdue. HTML hasn't had a proper upgrade in more than a decade... Many claim the HTML and XHTML standards have become outdated, and that their document-centric focus does not adequately address the needs of modern Web applications.

HTML5 aims to change all that. When it is finalized, the new standard will include tags and APIs for improved interactivity, multimedia, and localization. As experimental support for HTML5 features has crept into the current crop of Web browsers, some developers have even begun voicing hope that this new, modernized HTML will free them from reliance on proprietary plug-ins such as Flash, QuickTime, and Silverlight.
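As a sketch of the kind of markup involved (file names are placeholders, and 2010-era browser support for these features varied), a page using the new video and canvas elements might look like this:

    <!DOCTYPE html>
    <html lang="en">
      <head>
        <meta charset="utf-8"/>
        <title>HTML5 media without plug-ins</title>
      </head>
      <body>
        <!-- Native video playback, with fallback text for older browsers -->
        <video src="talk.webm" controls="controls" width="640" height="360">
          Your browser does not support the video element.
        </video>

        <!-- A scriptable 2D drawing surface, addressable from JavaScript -->
        <canvas id="chart" width="300" height="150"></canvas>
      </body>
    </html>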

But although some prominent Web publishers—including Apple, Google, the Mozilla Foundation, Vimeo, and YouTube—have already begun tinkering with the new standard, W3C insiders say the road ahead for HTML5 remains a rocky one. Some parts of the specification are controversial, while others have yet to be finalized. It may be years before a completed standard emerges and even longer before the bulk of the Web-surfing public moves to HTML5-compatible browsers. In the meantime, developers face a difficult challenge: how to build rich Web applications with today's technologies while paving the way for a smooth transition to HTML5 tomorrow.

Standards bodies by their very nature move slowly, but work on HTML5 is being driven by large, motivated vendors, including Adobe, Apple, Google, Microsoft, the Mozilla Foundation, Opera Software, and others. These companies recognize the need for an upgrade to the HTML standard, and their work is helping to realize its potential. The resulting opportunities for Web developers are too compelling to ignore..." [Note: the Editor's draft version of HTML5 supports three different views (Normal view, Hide UA text, Highlight UA text) via radio-button selection.]

See also: the HTML5 Editor's draft specification


Versioning (UK Government) Linked Data
Jeni Tennison, Blog

"I've been working quite a lot recently on the UK government's use of linked data, and in particular on providing guidance for people who want to publish their data as linked data. One of the things that we need to provide guidance about is how to publish linked data that changes over time... I have split the discussion into two parts: versioned information resources (which are pretty easy) and versioned non-information resources (which are pretty hard). For both, we need to provide some guidance about what the RDF should look like, and mint or adopt properties to support that model...

Some of the things that we talk about, such as legislation, are information resources (web documents), and these have different versions. The relevant level of precision for legislation is a day, but this will be different for different kinds of documents—some might change every second; for others an incrementally increasing version number might be more appropriate than a date. A generic pattern for the URIs, based on the 'Designing URI Sets for the UK Public Sector' report, [is simple enough...] There might be sub-versions too, if the inspection report itself goes through a revision process. The RDF for this document should include links to the previous reports that it replaces, and dates that indicate when it was created and so on... It's also useful to have a URI for the unversioned document; this is the same as for the versioned document, but without the version...
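A hedged sketch of what such RDF might look like, using Dublin Core Terms properties and invented URIs for a school inspection report (the actual guidance may mint its own versioning properties):

    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:dcterms="http://purl.org/dc/terms/">

      <!-- A dated version of an inspection report -->
      <rdf:Description rdf:about="http://education.example.gov.uk/doc/school/42/inspection/2010-03-01">
        <dcterms:created rdf:datatype="http://www.w3.org/2001/XMLSchema#date">2010-03-01</dcterms:created>
        <dcterms:replaces rdf:resource="http://education.example.gov.uk/doc/school/42/inspection/2008-06-15"/>
        <dcterms:isVersionOf rdf:resource="http://education.example.gov.uk/doc/school/42/inspection"/>
      </rdf:Description>

      <!-- The unversioned document URI points at its versions -->
      <rdf:Description rdf:about="http://education.example.gov.uk/doc/school/42/inspection">
        <dcterms:hasVersion rdf:resource="http://education.example.gov.uk/doc/school/42/inspection/2010-03-01"/>
      </rdf:Description>
    </rdf:RDF>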

Versioned Non-Information Resources: The harder problem is how we handle changes to non-information resources over time. For example, how do we handle the fact that a school often changes head, sometimes changes name, regularly changes class sizes, rarely changes address and so on? How do we handle the fact that we have legacy statistics about local authorities as they were in 2008, prior to the 2009 reorganisation, and that it's very likely that these kinds of changes will continue to take place regularly in the future?

Our requirements are: (1) that the data is easily usable by people who only care about the current state of a resource; (2) that the (current) data remains easily queryable at a SPARQL endpoint; (3) that it's possible (not necessarily easy) to query historic data; (4) that historic data can be moderately easily retrieved and navigated; (5) that it can represent historical states even when the precise time period is not known; (6) that it can distinguish between a change in the concept and a change in our record of it—e.g. changing the name of a school, versus correcting a typo in the database entry for the school; (7) that it can trace what the nature or cause of the change was—e.g. redrawing of local authority boundaries..." [Discussion follows]

See also: the UK Government web site for RDF linked data


Encoded Archival Context (EAC) for Corporate Bodies, Persons, and Families
Daniel Pitti, Posting to Encoded Archival Description Mailing List

"The EAC-CPF Working Group (EACWG) is pleased to announce the public release of the EAC-CPF 2010 version of the schema and tag library for Encoded Archival Context - Corporate Bodies, Persons, and Families. The new standard provides an XML vocabulary to enable encoding and communicating archival authority records created according to the rules promulgated in ISAAR-CPF ('International Standard Archival Authority Record For Corporate Bodies, Persons and Families'), and it is intended to facilitate content-rich authority records that can interoperate in a global environment.

EAC-CPF 2010 results from work over the past 30 months by a 15-member working group representing nine countries. Their work has been supported by the Society of American Archivists, Staatsbibliothek zu Berlin, Archivio di Stato di Bologna, the Istituto per i Beni Artistici, Culturali e Naturali della Regione Emilia-Romagna, and by generous funding from the Delmas Foundation. The Working Group benefitted from extensive input from the international archival community throughout the review process of the draft schema in late 2009.

The EAC-CPF stable schema is available for immediate download in three versions: W3C XML Schema (XSD), RELAX NG, and RELAX NG compact syntax. It is accompanied by an extensive Tag Library complete with encoding examples, which is also available for immediate download. It is expected that the online tag library will continue to evolve over time to meet the needs of the encoding community... The international standard is jointly supported by the Society of American Archivists and the State Library of Berlin.

The context in which materials are created and used, as documented by EAC-CPF, is complex and multi-layered and may involve individuals, families, organizations, societies, functions, activities, business processes, geographic places, events, and other entities. Primary among these entities are the agents responsible for the creation or use of material, usually organizations or persons. With information about these agents, users can understand and interpret records more fully, since they will know the context within which the agents operated and created and/or used the material..."
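As a rough, non-normative sketch of the general record shape (element names and the namespace URN are recalled from the EAC-CPF documentation and should be verified against the 2010 tag library; required control metadata is omitted):

    <eac-cpf xmlns="urn:isbn:1-931666-33-4">
      <control>
        <recordId>example-0001</recordId>
        <maintenanceStatus>new</maintenanceStatus>
        <maintenanceAgency>
          <agencyName>Example Archives</agencyName>
        </maintenanceAgency>
      </control>
      <cpfDescription>
        <identity>
          <!-- entityType is one of person, family, or corporateBody -->
          <entityType>corporateBody</entityType>
          <nameEntry>
            <part>Example Manufacturing Company</part>
          </nameEntry>
        </identity>
      </cpfDescription>
    </eac-cpf>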

See also: Markup Languages for Names and Addresses


Avaya and Polycom Partner on Unified Communications (UC) Solutions
Jeffrey Burt, eWEEK

"Avaya and Polycom are expanding their relationship to include the development and marketing of new voice and video unified communications solutions. The integrated offerings will be based on Avaya's Aura UC platform and Polycom's Open Collaboration Network strategy. The two companies announced plans to jointly develop and market a host of new, tightly integrated UC solutions based on Avaya's SIP (Session Initiation Protocol)-based Aura UC platform and Polycom's Open Collaboration Network initiative, which is a partner strategy focused on offering UC solutions.

The partnership between Avaya and Polycom will touch on voice and video systems that are integrated with Avaya's Aura platform for greater real-time collaboration. Aura will be used as a central management and delivery point for the joint solutions..."

According to the announcement, the agreement reflects the companies' shared commitment to open, standards-based unified communications: "The Avaya Aura platform simplifies existing voice and video communications architectures, including the ability to integrate communications across multi-vendor networks, to deliver business communications with more capabilities, lower costs, and less complexity. The combined Avaya and Polycom offerings will support customers who want to maintain their existing communications investments as they adopt new capabilities and solutions.

Built on open standards, the Avaya/Polycom solution provides a full range of video telephony capabilities: (1) Personal video: place a voice call with Avaya IP Softphone or Avaya one-X Communicator and add video with one click; (2) Conference room video: quickly launch a group voice and video call using a Polycom HDX or legacy series system and easily add other participants on the fly; (3) Call control: extend telephony features to Polycom video calls, including hold, transfer, and forward; (4) Integrated voice and video: use a single network and common global dial plan..."

See also: the announcement


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation          http://www.ibm.com
ISIS Papyrus             http://www.isis-papyrus.com
Microsoft Corporation    http://www.microsoft.com
Oracle Corporation       http://www.oracle.com
Primeton                 http://www.primeton.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/


