Last modified: June 26, 2008
XML Daily Newslink. Thursday, 26 June 2008

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
IBM Corporation

W3C Last Call Working Draft: Widgets 1.0: Requirements
Marcos Caceres (ed), W3C Technical Report

Members of the W3C Web Applications Working Group have published a Last Call Working Draft of the "Widgets 1.0: Requirements" specification. "Widgets" in this context are small client-side Web applications for displaying and updating remote data that are packaged in a way that allows download and installation on a client machine, mobile phone, or mobile Internet device. Typical examples of widgets include clocks, CPU gauges, sticky notes, battery-life indicators, games, and those that make use of Web services, like weather forecasters, news readers, email checkers, photo albums, and currency converters. The document lists the design goals and requirements that a specification would need to address in order to standardize various aspects of widgets. The Last Call period ends on 1 August 2008. This version reflects nearly two years of gathering and refining requirements for the Widgets 1.0 family of specifications. The requirements were gathered through extensive consultation with W3C members and the public via the Working Group's mailing lists (WAF archive, WebApps archive). The purpose of this Last Call is to give external interested parties a final opportunity to comment publicly on the list of requirements. The Working Group's goal is to make sure that vendors' requirements for widgets are complete and have been effectively captured. The Widgets 1.0 family of specifications will set out to address as many requirements as possible, particularly the ones marked with the keywords must and should.
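The "packaged in a way to allow download and installation" requirement can be pictured as an archive bundling a manifest with the widget's start file. The sketch below assumes a zip container and the file names config.xml and index.html for illustration; the Last Call draft itself only states requirements, not a concrete format.

```python
# Illustrative sketch of a downloadable widget package: a zip archive
# holding a small XML manifest plus a start file. File names and the
# manifest vocabulary are assumptions, not taken from the W3C draft.
import io
import zipfile

def build_widget_package(name, author):
    """Bundle a minimal widget (manifest + start file) into a zip archive."""
    config = (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<widget>\n'
        '  <name>%s</name>\n'
        '  <author>%s</author>\n'
        '</widget>\n' % (name, author)
    )
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("config.xml", config)
        zf.writestr("index.html", "<html><body>Hello widget</body></html>")
    return buf.getvalue()

package = build_widget_package("Clock", "Example Author")
with zipfile.ZipFile(io.BytesIO(package)) as zf:
    print(zf.namelist())  # ['config.xml', 'index.html']
```

A client would download one such archive, unpack it, and launch the start file locally, which is exactly the install-and-run model the requirements describe.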

See also: the W3C Rich Web Clients Activity Statement

Bringing Semantic Technology to the Enterprise
Jonathan Mack,

As seen at the recent 2008 Semantic Technology Conference in San Jose, serious interest in corporate use of semantic technology continues to grow rapidly. Semantically enabled applications are increasingly seen as fertile ground for Web 2.0 applications such as mashups, as well as the basis for innovative business intelligence strategies, internal collaboration wikis, and rich canonical models for service-oriented architectures (SOA). What's lacking, however, is a clear understanding of where semantic technology fits in enterprise architectures. Should it be thought of primarily as a web technology for integrating information at the presentation layer? Should it be seen as closer to the data layer of the application because of its potential to bring disparate sets of data together? Alternatively, should semantic technology be focused on the increasingly important middle layer of enterprise architectures where messaging and service implementations live? [...] Semantic technology is a collection of technologies, rather than a single model, so it can fit into more than one place. For simple data aggregation 'on the glass', or in a thin application close to the UI in the mode of Web 2.0 applications, the presentation layer is an appropriate target. Using XSLT or other presentation tools, information can be mashed up directly on this layer. A corporate collaboration wiki could use this layer to interconnect data from a wide range of sources. On the other end of the spectrum, semantic technology can be appropriately used at the data layer of the enterprise architecture. 'Tall skinny' tables containing RDF triples or quads have the potential to provide a more dynamic means of storing data than relational tables. This is particularly crucial when the relations between data elements are themselves subject to frequent change. An example of this use is found in storing policies, such as web service policies, that drive service contracts.
The exact interrelationship of these policies can be difficult to predict in advance, so changes can require modifying columns and foreign key relationships in relational tables. In a table whose columns are RDF subject, predicate, and object (and, potentially, provenance), changes in structure only require CRUD (create, read, update, delete) operations on rows, and so are far more flexible...
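The 'tall skinny' idea above can be sketched concretely: one row per RDF statement, so evolving the data model means inserting or deleting rows rather than altering the schema. The table layout and the policy vocabulary (svc:, policy:) below are illustrative assumptions, using SQLite for brevity.

```python
# Minimal sketch of a 'tall skinny' triple table: one row per
# (subject, predicate, object) statement. Names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE triples (subject TEXT, predicate TEXT, object TEXT)")

# A web-service policy whose relationships may change over time.
statements = [
    ("svc:OrderService", "policy:requires", "policy:TLS"),
    ("svc:OrderService", "policy:requires", "policy:AuthToken"),
    ("policy:TLS", "policy:minVersion", "1.2"),
]
conn.executemany("INSERT INTO triples VALUES (?, ?, ?)", statements)

# Adding a brand-new kind of relationship is a plain row insert,
# with no ALTER TABLE or new foreign key needed:
conn.execute(
    "INSERT INTO triples VALUES ('svc:OrderService', 'policy:owner', 'team:payments')"
)

rows = conn.execute(
    "SELECT object FROM triples WHERE subject = 'svc:OrderService' "
    "AND predicate = 'policy:requires' ORDER BY object"
).fetchall()
print(rows)  # [('policy:AuthToken',), ('policy:TLS',)]
```

In a relational design the same change (a new 'owner' relationship) would typically mean a new column or join table, which is the rigidity the article is contrasting against.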

See also: the Semantic Technology Conference

Permanent URLs for Things in the Real World
Taylor Cowan, O'Reilly Technical

At the Semantic Technologies conference in San Jose I attended an interesting presentation entitled 'Persistent Identifiers for the "Real Web".' XML often uses URLs for identifying schema namespaces, and I suppose could be credited with influencing RDF's practice of using URLs for identifying resources. In using RDF to describe and annotate things, a problem arises: are you describing the web page, or the thing the web page is talking about? For example, what if I assert that '' LIKES ''? Does that mean I like the web page or the band the page is about? As you're traversing the semantic web, it's going to be advantageous to distinguish between content assets and the real-world entities they may represent. Their proposed solution involves PURLs. Normally a permanent URL redirects you to the best representation of the resource via a 302 response. They propose that when the PURL represents a real-world entity, the response be given as a 303 ('See Other'). The computer agent can then understand that the 'thing' is a real-world entity, and that the redirect is not to the real thing, but to another web resource about the thing. I'm very much in favor of permanent URLs. Otherwise all our assertions will become disjointed as links break, or we'll have to keep our own archives of dead links and sites. I also appreciate the simplicity of Dave and Eric's proposal; however, I'm not so sure this is really the best way to solve identifiers for real-world things. Consider books, for example: what would be the best way to represent a book, its URL on Amazon or its ISBN as a URN? If we use the Amazon URL we can't be sure it's a book; it might be binoculars or a coffee table. The URN, however, makes it clear: 'URN:ISBN:0-395-36341-1'. The URN namespace indicates that it's a book, without a doubt. If PURL were to host a 'see also' permanent URL scheme for each declared URN namespace, we'd be able to visit that URL to find out more...
But on the practical web, we don't use PURLs or URNs for books, we use the URL. I think in practical terms things are going to be represented on the web by the domain that has the best collection with the best open content...
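The two mechanisms discussed above, status-code semantics for PURLs and type-carrying URNs, can be sketched in a few lines. This is only an illustration of the logic a client agent might apply under the proposal as described, not an implementation of the PURL service itself.

```python
# Sketch of the client-side distinction the proposal draws: a 302
# redirect points at a representation of the resource itself, while a
# 303 ('See Other') signals that the target merely *describes* a
# real-world thing. Also shown: reading the entity type straight off
# a URN, as in the ISBN example.

def describes_real_world_thing(status_code):
    """Under the proposal, 303 marks the identified resource as a non-web entity."""
    return status_code == 303

def urn_namespace(urn):
    """Return the namespace id of a URN, e.g. 'isbn' for 'URN:ISBN:0-395-36341-1'."""
    scheme, nid, _ = urn.split(":", 2)
    if scheme.lower() != "urn":
        raise ValueError("not a URN: %r" % urn)
    return nid.lower()

print(describes_real_world_thing(303))          # True
print(describes_real_world_thing(302))          # False
print(urn_namespace("URN:ISBN:0-395-36341-1"))  # isbn
```

The URN check is what makes the author's point: the 'isbn' namespace id tells an agent it is dealing with a book before any page is fetched, which a bare retail URL cannot do.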

Network Designed to Help Health Care Professionals
Staff, eHealthNews.EU Portal

European researchers have developed a computer system designed to give health care professionals access to a broader range of medical information. However, the system, which was meant to allow them to share medical information across a network, highlighted the limits of computer 'understanding'. The EU-funded Doc@Hand project set out to improve coordination among health professionals by improving access to information. The researchers aimed to 'push' information to health professionals making decisions about patients' healthcare, rather than expecting those professionals to 'pull' out all the relevant data. The data could be delivered to the health professional's computer or to a mobile device. Easy access to information should lead to speedier and better decision-making. For example, colon cancer patients suffering from anaemia can experience shortness of breath. The medical term for this shortness of breath is 'dyspnoea'. If you asked the patients to discuss dyspnoea they wouldn't know what you were talking about. If you asked them about their 'shortness of breath', they might have plenty to say on the subject. Professionals in areas such as healthcare for the chronically ill need to ensure people at all levels are talking the same language if they are to use IT tools to improve their coordination and decision-making, according to recent research results... Through Doc@Hand, health professionals could access the web, communications tools, clients' medical histories, and databases of medical research. By drawing on the user's profile and his or her previous search history, the system is designed to improve the quality of the information it returns. It also uses a powerful XML-based search engine and a subsystem that includes a linguistic parser and a system of ontologies. An ontology defines the concepts and relationships used to describe and represent a domain of knowledge.
It specifies standard conceptual vocabularies with which to exchange data among networked systems, provide services for answering queries, publish reusable knowledge bases, and offer services to allow interoperability across multiple systems and databases.
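The dyspnoea example above is, at its core, a synonym-to-concept mapping problem. The toy lookup below illustrates the idea of an ontology-backed search layer that lets a lay phrase and a clinical term resolve to one shared concept; the concept table is invented for illustration and is not Doc@Hand's actual ontology.

```python
# Tiny sketch of ontology-style term normalization: lay phrases and
# clinical terms map to one canonical concept, so a search for
# 'shortness of breath' also matches material tagged 'dyspnoea'.
# The synonym table is an illustrative assumption.
SYNONYMS = {
    "dyspnoea": "dyspnoea",
    "dyspnea": "dyspnoea",             # US spelling
    "shortness of breath": "dyspnoea",
    "anaemia": "anaemia",
    "low red blood cell count": "anaemia",
}

def normalize(term):
    """Map a patient- or clinician-supplied term to its canonical concept, if known."""
    return SYNONYMS.get(term.strip().lower())

print(normalize("Shortness of breath"))  # dyspnoea
print(normalize("dyspnea"))              # dyspnoea
```

A real ontology would add relationships between concepts (dyspnoea as a symptom of anaemia, for instance), which is what lets a system 'push' related information rather than wait for the right keyword.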

See also: XML in Clinical Research and Healthcare Industries

NETCONF Monitoring Schema
Mark Scott, Sharon Chisholm (et al., eds), IETF Internet Draft

Members of the IETF Network Configuration (NETCONF) Working Group have issued an updated Internet Draft of the "NETCONF Monitoring Schema." The document defines NETCONF content, via XML Schema, to be used to monitor the NETCONF protocol. It includes information about NETCONF sessions, locks, and subscriptions, and is intended to facilitate management of a NETCONF server. In addition, this memo defines a mechanism to discover all possible data models (schema list retrieval) and a mechanism to retrieve schema via NETCONF (get schema). Both can be performed dynamically throughout a session, unlike capabilities exchange, which is performed during session setup only. Both support multiple schema versions, formats, and locations. The NETCONF Working Group, part of the IETF Operations and Management Area, was chartered to produce a protocol suitable for network configuration, including session establishment, user authentication, configuration data exchange, and error responses. The NETCONF protocol uses XML for data encoding because XML is a widely deployed standard supported by a large number of applications.
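Since NETCONF operations are XML-encoded RPCs, the get-schema retrieval described above amounts to sending a small XML request over an established session. The sketch below builds such a request with ElementTree; the base namespace is NETCONF's standard one, but the monitoring namespace URI, the element names, and the message-id are assumptions for illustration, so the draft itself should be consulted for the exact vocabulary.

```python
# Illustrative construction of a NETCONF <get-schema> RPC. The
# monitoring namespace URI and parameter names here are assumed for
# the sketch; only the base NETCONF namespace is the standard one.
import xml.etree.ElementTree as ET

BASE_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"
MONITORING_NS = "urn:ietf:params:xml:ns:netconf:monitoring"  # assumed URI

def build_get_schema(identifier, version=None, fmt=None, message_id="101"):
    """Serialize a get-schema request naming the schema to retrieve."""
    rpc = ET.Element("{%s}rpc" % BASE_NS, {"message-id": message_id})
    op = ET.SubElement(rpc, "{%s}get-schema" % MONITORING_NS)
    ET.SubElement(op, "{%s}identifier" % MONITORING_NS).text = identifier
    if version:  # drafts allow multiple versions of one schema
        ET.SubElement(op, "{%s}version" % MONITORING_NS).text = version
    if fmt:      # ... and multiple formats, e.g. an XSD rendition
        ET.SubElement(op, "{%s}format" % MONITORING_NS).text = fmt
    return ET.tostring(rpc, encoding="unicode")

request = build_get_schema("example-schema", version="1.0", fmt="xsd")
print(request)
```

Because the request is ordinary session traffic, a manager can issue it at any point after session setup, which is the contrast the draft draws with the one-time capabilities exchange.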

See also: the IETF Netconf Status Pages

ICANN Votes to Expand Top-Level Domain Names
Linda Rosencrance, ComputerWorld

ICANN, the nonprofit group that manages the Internet Domain Name System, unanimously voted today to begin the process of relaxing the rules for generic top-level domain names (gTLDs). Details are provided in the announcement "Biggest Expansion to Internet in Forty Years Approved for Implementation." The action means that companies and other organizations eventually could run their own domains. For example, eBay Inc. could run the domain .ebay, and Microsoft Corp. could run the domain .microsoft. Currently, top-level domain names are limited to a few endings, including .com, .net, and .org, as well as individual country codes such as .ca for Canada or .uk for the United Kingdom. Prices to register the new domain names, expected to be anywhere from $150,000 to $500,000, would most likely prohibit individuals from applying for new domain names. ICANN said the high fees would allow it to recoup the approximately $20 million it expects to spend on implementation of the new policy. Groups applying for new top-level domain names must also either prove they are technically able to operate Web sites or contract with a company that can. New gTLDs will probably start appearing by the end of 2009... The ICANN board also approved actions to stop the practice of domain name tasting, which allows a registrar to register a domain name and place pay-per-click ads on it for up to five days to determine whether it will make money from those ads. If so, the registrar can then register the domain name for $6 per year. If not, the registrar must return the domain to ICANN.

See also: CNET

Selected from the Cover Pages, by Robin Cover

W3C Publishes Approved TAG Finding on Associating Resources with Namespaces

W3C has published "Associating Resources with Namespaces" as an Approved TAG Finding from the W3C Technical Architecture Group (TAG). The document addresses the question of how ancillary information (schemas, stylesheets, documentation) can be associated with an XML namespace. It offers guidance on how a namespace document can be optimally designed for humans and machines, such that information at the namespace URI conforms to web architecture good practice. This TAG finding addresses TAG issue 'namespaceDocument-8': "What should a namespace document look like?" The issue was raised on January 14, 2002 by Tim Bray in reference to a 1998 Web architecture document that said: "The namespace document (with the namespace URI) is a place for the language publisher to keep definitive material about a namespace. Schema languages are ideal for this." Bray: "I disagree quite strongly. Schema languages as they exist today represent bundles of declarative syntactic constraints. This is a small subset of 'definitive material'. RDDL represents my current thinking as to what a 'namespace document' ought to be like..." The new TAG finding on "Associating Resources with Namespaces" defines a conceptual model for identifying related resources that is simple enough to garner community consensus as a reasonable abstraction. It demonstrates how RDDL 1.0 is one possible concrete syntax for this model, and shows how other concrete syntaxes could be defined and identified in a way that would preserve the model. The specification also provides guidance on the use of identifiers for individual terms within an XML namespace. Finally, the TAG finding discusses the use of a namespace URI as a suitable "key" for the nature of a resource encoded in an XML vocabulary, or for the purpose of a resource.
The chartered mission of the W3C TAG is to "document and build consensus around principles of Web architecture and to interpret and clarify these principles when necessary; to resolve issues involving general Web architecture brought to the TAG; and to help coordinate cross-technology architecture developments inside and outside W3C." The TAG consists of eight persons, elected (by the W3C Advisory Committee) or appointed, and a Chair. The W3C Team appoints the Chair of the TAG, and three TAG participants are appointed by the Director. The TAG has three public mailing lists, including an 'Announce' list for publication of URIs for TAG minutes, IRC logs, meeting summaries, findings, new issues, resolved issues, and drafts of architecture documents.
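The RDDL syntax mentioned above embeds machine-readable resource links in a human-readable XHTML namespace document: each resource carries a link target plus a 'nature' and 'purpose'. The sketch below extracts those links with ElementTree; the sample document and the use of xlink:role/xlink:arcrole for nature and purpose reflect RDDL 1.0 as I understand it, but the sample content itself is invented.

```python
# Sketch of reading related resources out of a RDDL-style namespace
# document: each rddl:resource carries an xlink:href plus a 'nature'
# (xlink:role) and 'purpose' (xlink:arcrole). SAMPLE is illustrative.
import xml.etree.ElementTree as ET

RDDL_NS = "http://www.rddl.org/"
XLINK_NS = "http://www.w3.org/1999/xlink"

SAMPLE = """
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:rddl="http://www.rddl.org/"
      xmlns:xlink="http://www.w3.org/1999/xlink">
  <body>
    <rddl:resource xlink:href="example.xsd"
                   xlink:role="http://www.w3.org/2001/XMLSchema"
                   xlink:arcrole="http://www.rddl.org/purposes#schema-validation"/>
  </body>
</html>
"""

def related_resources(doc):
    """Yield (href, nature, purpose) for each rddl:resource in the document."""
    root = ET.fromstring(doc)
    for res in root.iter("{%s}resource" % RDDL_NS):
        yield (
            res.get("{%s}href" % XLINK_NS),
            res.get("{%s}role" % XLINK_NS),
            res.get("{%s}arcrole" % XLINK_NS),
        )

print(list(related_resources(SAMPLE)))
```

This is the split the finding values: a browser renders the XHTML for humans, while an agent walks the rddl:resource elements to find, say, the schema suited to validation.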

See also: Resource Directory Description Language (RDDL)


XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation
Oracle Corporation
Sun Microsystems, Inc.
