The OASIS Cover Pages: The Online Resource for Markup Language Technologies
XML Daily Newslink. Tuesday, 13 January 2009

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Primeton http://www.primeton.com



XML Schema 1.1, Part 2: Co-Occurrence Constraints Using XPath 2.0
Neil Delima, Sandy Gao (et al), IBM developerWorks

This second article in a six-part series takes an in-depth look at the co-constraint mechanisms introduced by XML Schema 1.1, specifically the new assertions and type alternatives features. Complex and simple type definitions in XML Schema 1.0 allow schema authors to specify and restrict the content of elements and the values of attributes. According to the XML Schema 1.0 specification, complex type definitions constrain elements by providing attribute declarations that govern the appearance and contents of attributes, and by restricting elements to be empty or to conform to a specific content model: element-only, mixed, or simple content determined by a simple type definition. But XML Schema 1.0 has certain limitations. Beyond the constraints mentioned above, schema authors often need to enforce more complex rules that determine and restrict the content of elements and attributes, such as restricting the appearance of certain child elements based on the value of an attribute, requiring that the values of child elements not sum to more than a certain total, or allowing the value of a child element to be valid only within a particular scope. Unfortunately, XML Schema 1.0 did not provide a way to enforce these rules. To implement such constraints, you had to (1) write code at the application level, after XML schema validation; (2) use stylesheet checking, also a post-validation process; or (3) use a different XML schema language such as RELAX NG or Schematron. In response to constant requests from the XML Schema 1.0 user community for co-occurrence constraint checking, the XML Schema working group introduced assertions and type alternatives in XML Schema 1.1 to allow schema authors to express such constraints... Part 3 of the series will explore wildcard support and how it allows you to evolve your XML schema.
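
As a rough illustration (not taken from the article; the element, attribute, and type names are invented for this sketch), an XML Schema 1.1 author might express such co-occurrence rules with an assertion and with type alternatives along these lines:

  <!-- Assertion: the sum of the item prices must not exceed the maxTotal attribute -->
  <xs:element name="order">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="item" maxOccurs="unbounded">
          <xs:complexType>
            <xs:attribute name="price" type="xs:decimal"/>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
      <xs:attribute name="maxTotal" type="xs:decimal"/>
      <xs:assert test="sum(item/@price) le @maxTotal"/>
    </xs:complexType>
  </xs:element>

  <!-- Type alternatives: the governing type of 'message' is chosen by its 'kind' attribute -->
  <xs:element name="message" type="messageType">
    <xs:alternative test="@kind eq 'text'" type="textMessageType"/>
    <xs:alternative test="@kind eq 'binary'" type="binaryMessageType"/>
    <xs:alternative type="messageType"/>
  </xs:element>

Neither rule can be expressed in XML Schema 1.0 without resorting to application code, stylesheet checks, or another schema language.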

See also: W3C XML Schema references


Feature Article: NIEM and Logical Entity Exchange Specifications (LEXS)
Sudhi Umarji, NIEM Newsletter

The U.S. National Information Exchange Model (NIEM) is the result of a collaborative effort by the justice and homeland security communities to produce a set of common, well-defined data elements to be used as the basis for data exchange development and harmonization. NIEM defines a set of building blocks that are used as a consistent baseline for creating exchange documents and transactions within the federal government and between the federal government and state, local, and tribal organizations... An Information Exchange Package (IEP) is a NIEM-based XML representation of the information shared to support a specific mission. An Information Exchange Package Documentation (IEPD) is the set of specifications that describe the function and structure of a NIEM information exchange. A NIEM-conformant information exchange is one that is based on an IEPD that follows the rules for NIEM conformance and is registered in one of the established NIEM IEPD repositories. The goal of NIEM conformance is for the sender and receiver of information to share a common, unambiguous understanding of the meaning of the information being exchanged... LEXS (Logical Entity Exchange Specifications) is a family of reusable NIEM IEPDs for many common types of public safety information exchanges, particularly for the publication, update, and federated searching of law enforcement and intelligence data. LEXS is also used within the law enforcement community of DHS. Currently, the Enterprise Architectures of both DOJ and DHS include the use of NIEM and LEXS in the implementation of information exchanges. In particular, the IEPDs for the National Data Exchange (N-DEx) and Suspicious Activity Reporting (SAR) are based on LEXS... The involvement of the NIEM program in the requirements, design, and implementation of UCore 2.0 ensured its compatibility with NIEM and LEXS. UCore 2.0 shares the same underlying message structure as LEXS, which creates a substantial functional alignment between the two and allows for greatly simplified translation of messages from one to the other. In addition, UCore 2.0 is largely agnostic with respect to the information exchange vocabularies of various communities. This means that UCore 2.0 messages can supplement the basic UCore 'digest' with richer, more detailed information content in the form of NIEM 'payloads,' governed by NIEM IEPDs. Although UCore is not mandated by DoD or IC, its use is nonetheless expected to grow among programs in those communities...

See also: UCore 2.0


Portable Symmetric Key Container (PSKC)
Philip Hoyer, Mingliang Pei (et al., eds), IETF Internet Draft

Members of the IETF Provisioning of Symmetric Keys (KEYPROV) Working Group have issued a new version of the 'Portable Symmetric Key Container' specification with a new filename and many changes. Working Group Co-Chair Hannes Tschofenig notes: "Based on the number of changes we have made to the draft to improve readability we strongly suggest you to review this version. We are also positive that this document version gets us closer to a Working Group Last Call." This IETF WG was chartered to define the protocols and data formats necessary for provisioning of symmetric cryptographic keys and associated attributes, considering use cases related to the use of Shared Symmetric Key Tokens. The "Portable Symmetric Key Container (PSKC)" specification defines a symmetric key format for transport and provisioning of symmetric keys (for example, One Time Password (OTP) shared secrets or symmetric cryptographic keys) to different types of crypto modules, such as strong authentication devices. The standard key transport format enables enterprises to deploy best-of-breed solutions combining components from different vendors into the same infrastructure. The portable key container is based on an XML schema definition and contains the following main conceptual entities: (1) KeyContainer entity: representing the container that carries the keys; (2) Device entity: representing a physical or virtual device where the keys reside, optionally bound to a specific user; (3) DeviceInfo entity: representing the information about the device and the criteria to uniquely identify the device; (4) Key entity: representing the key transmitted; (5) KeyData entity: representing data related to the key, including its value in either plain or encrypted form. Section 11 ('XML Schema') of the specification defines the XML schema for PSKC.
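
For orientation only, a plaintext key container following the conceptual entities listed above might look roughly like the sketch below; the nesting, version attribute, and algorithm identifier are approximations and may differ from the schema actually defined in Section 11 of the draft:

  <KeyContainer Version="1.0">
    <Device>
      <DeviceInfo>
        <Manufacturer>TokenVendor</Manufacturer>
        <SerialNo>987654321</SerialNo>
      </DeviceInfo>
      <Key Id="12345678" Algorithm="urn:example:hotp">  <!-- placeholder algorithm URI -->
        <KeyData>
          <Secret>
            <PlainValue>MTIzNDU2Nzg5MDEyMzQ1Njc4OTA=</PlainValue>  <!-- base64-encoded shared secret -->
          </Secret>
        </KeyData>
      </Key>
    </Device>
  </KeyContainer>

An encrypted variant would carry the key value in encrypted form, together with the necessary encryption metadata, instead of a plain value.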

See also: OASIS Symmetric Key Services Markup Language (SKSML) V1.0


Web Technologies: Web 3.0 Emerging
Jim Hendler, IEEE Computer

The IEEE Computer Society releases some of its publications with free Internet access. In this article, Jim Hendler (Rensselaer Polytechnic Institute) reports on the progress of Web 3.0: "While Web 3.0 technologies are difficult to define precisely, the outline of emerging applications has become clear over the past year. We can thus essentially view Web 3.0 as Semantic Web technologies integrated into, or powering, large-scale Web applications. Last year was a rewarding one for those of us involved in the Web 3.0 world. 2008 got off to a good start with the news in January that Metaweb Technologies had received more than $42 million in second-round funding for continued development of its Freebase 'social database'... Key enablers of Web 3.0 are a maturing infrastructure for integrating Web data resources and the increased use of and support for the languages developed in the World Wide Web Consortium. The application of these technologies, integrated with the Web frameworks that power the better-known Web 2.0 applications, is generally becoming the accepted definition of the Web 3.0 generation. The base of Web 3.0 applications resides in the Resource Description Framework (RDF) for providing a means to link data from multiple websites or databases. With the SPARQL query language, a SQL-like standard for querying RDF data, applications can use native graph-based RDF stores and extract RDF data from traditional databases. Once the data is in RDF form, the use of uniform resource identifiers (URIs) for merging and mapping data from different resources facilitates development of multisite mashups. RDF Schema (RDFS) and the Web Ontology Language (OWL) provide the ability to infer relationships between data in different applications or in different parts of the same application. These Semantic Web languages allow for the assertion of relationships between data elements, which developers can use, via custom code or an emerging toolset, to enhance the URI-based direct merging of data into a single RDF store. In RDF, if we can recognize two data elements with the same URI, then we can join them in a merged graph... While many Web 3.0 technologies might seem to be familiar to those in the AI knowledge representation field, the key difference is the Web naming scheme provided by URIs coupled with the simple and scalable inferencing in Web 3.0 applications (which typically only use a small subset of the OWL language). This combination makes it possible to create large graphs that can underlie large-scale Web applications. Further, more companies are providing tools for manipulating RDF data, which is helping to accelerate the development of this emerging market. The term 'linked data' is often used to describe the evolving RDF development space, and 'Semantic Web' is increasingly being used to describe coupling linked data with RDFS and OWL. These capabilities can be used in numerous different environments, and many current Semantic Web applications are being deployed within industries to do enterprise data integration and related functions. The term 'Web 3.0,' in turn, now commonly describes the use of one or both of these capabilities underlying a large-scale Web application, typically including Web 2.0 technologies or approaches. It is worth noting that several early Web 3.0 applications do not use RDF and OWL directly. 
However, these applications are increasingly creating SPARQL APIs or RDF exports of their data, as the ability to integrate data using these standards is seen as an opportunity for cross-marketing and more open applications... With Web 3.0, the explosion of data on the Web has emerged as a new problem space, and the game-changing applications of this next generation of technology have yet to be developed..."
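
To make the URI-based merging concrete, here is a small invented example (the book URI and the 'rev' vocabulary are placeholders): two sites publish RDF about the same resource, and because the subject URI is identical the statements combine into one merged graph.

  <!-- Site A: catalog data -->
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
           xmlns:dc="http://purl.org/dc/elements/1.1/">
    <rdf:Description rdf:about="http://example.org/book/1234">
      <dc:title>Web 3.0 Emerging</dc:title>
    </rdf:Description>
  </rdf:RDF>

  <!-- Site B: review data about the same URI -->
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
           xmlns:rev="http://example.org/review/">
    <rdf:Description rdf:about="http://example.org/book/1234">
      <rev:rating>5</rev:rating>
    </rdf:Description>
  </rdf:RDF>

Loaded into a single RDF store, one SPARQL query can then return both the title and the rating for http://example.org/book/1234, which is the basic mechanism behind the multisite mashups described above.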

See also: the IEEE flagship journal, 'IEEE Computer'


FDIS for DSDL Part 7: Character Repertoire Description Language (CREPDL)
MURATA Makoto (ed), ISO/IEC JTC 1 SC 34 Announcement

On behalf of the ISO/IEC JTC 1 Subcommittee SC 34 Working Group for DSDL, MURATA Makoto announced the release of a candidate FDIS (Final Draft International Standard) for ISO/IEC FDIS 19757-7:2009(E), Document Schema Definition Languages (DSDL)—Part 7: Character Repertoire Description Language (CREPDL). ISO/IEC 19757 defines a set of Document Schema Definition Languages (DSDL) that can be used to specify one or more validation processes performed against Extensible Markup Language (XML) documents. A number of validation technologies are standardized in DSDL to complement those already available as standards or from industry. The main objective of ISO/IEC 19757 is to bring together different validation-related technologies to form a single extensible framework that allows technologies to work in series or in parallel to produce a single validation result or a set of validation results. The extensibility of DSDL accommodates validation technologies not yet designed or specified. This part of ISO/IEC 19757 provides a language for describing character repertoires. Descriptions in this language may be referenced from schemas. Furthermore, they may also be referenced from forms and stylesheets. Informative Annex A describes 'Differences of Conformant Processors'; Informative Annex B supplies 'Example CREPDL schemas' (B.1 ISO/IEC 8859-6; B.2 ISO/IEC; B.3 Armenian script; B.4 Malayalam script; B.5 The Japanese list of kanji characters for the first grade; B.6 The Japanese list of kanji characters for the second grade). Descriptions of repertoires need not be exact. Non-exact descriptions are made possible by kernels and hulls, which provide the lower and upper limits, respectively. The structure of this part of ISO/IEC 19757 is as follows. Clause 5 introduces kernels and hulls of repertoires. Clause 6 describes the syntax of CREPDL schemas. Clause 7 describes the semantics of a correct CREPDL schema; the semantics specify when a character is in a repertoire described by a CREPDL schema. Clause 8 defines CREPDL processors and their behaviour... 6.1 General: "A CREPDL schema shall be an XML document (W3C XML) valid against the NVDL (ISO/IEC 19757-4) script in 6.3, which in turn relies on the RELAX NG (ISO/IEC 19757-2) schema in 6.2." The 'Related Resources for CREPDL' referenced from the XML namespace document, as of 7-January-2009, include: (1) NVDL: a (normative) NVDL script, crepdl.nvdl, for CREPDL, which references crepdl.rnc; (2) RELAX NG: a (normative) RELAX NG schema in the compact syntax, crepdl.rnc, for CREPDL.

See also: DSDL references


IMAP METADATA Extension Advanced to IETF Proposed Standard
Cyrus Daboo (ed), IETF Internet Draft

The Internet Engineering Steering Group (IESG) has announced the approval of the "IMAP METADATA Extension" Standards Track specification as an IETF Proposed Standard. The METADATA extension to the Internet Message Access Protocol permits clients and servers to maintain "annotations" or "meta data" on IMAP servers. It is possible to have annotations on a per-mailbox basis or on the server as a whole. For example, this would allow comments about the purpose of a particular mailbox to be "attached" to that mailbox, or a "message of the day" containing server status information to be made available to anyone logging in to the server. Mailboxes or the server as a whole may have zero or more annotations associated with them. Each annotation consists of a uniquely named entry that has a value. Annotations can be added to mailboxes when a mailbox name is provided as the first argument to the SETMETADATA command, or to the server as a whole when the empty string is provided as the first argument to the command... The goal of the METADATA extension is to provide a means for clients to set and retrieve "annotations" or "meta data" on an IMAP server. The annotations can be associated with specific mailboxes or the server as a whole. The server can choose to support only server annotations, or both server and mailbox annotations... The I-D was informally promoted to Last Call in the Internet Message Access Protocol Extension (IMAPEXT) Working Group, and was discussed in several WG meetings. It was also reviewed by lemonade WG participants for interaction with other IMAP extensions. This proposal has been discussed extensively and is leveraged by upcoming extensions from the lemonade WG. There was a proposal to add multi-valued attributes, but it was not accepted due to the existence of a work-around using subordinate entries. This area might need further feedback from client implementors. There have been ongoing requests for this functionality for many years, but also ongoing concerns about the level of complexity in this proposal. There are already two server implementations of earlier versions of this document. At least one client and one server vendor are interested in implementing the specification. Francis Dupont performed the GEN-ART review. After the first last call for this document, an effort was made to greatly simplify this proposal, resulting in a second last call.
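
As a rough sketch of the exchange described above (mailbox name, entry names, and values are illustrative; the exact syntax is defined in the specification), a client might attach a comment to a mailbox, read it back, and then set a server-wide annotation by passing the empty string as the mailbox name:

  C: a1 SETMETADATA INBOX (/private/comment "Mail for the Q1 project")
  S: a1 OK SETMETADATA complete
  C: a2 GETMETADATA "INBOX" /private/comment
  S: * METADATA "INBOX" (/private/comment "Mail for the Q1 project")
  S: a2 OK GETMETADATA complete
  C: a3 SETMETADATA "" (/shared/comment "Maintenance window tonight at 22:00")
  S: a3 OK SETMETADATA complete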


Social Annotations in Digital Library Collections
Rich Gazan, D-Lib Magazine

In order to incorporate Web 2.0 functionality effectively, digital libraries must fundamentally recast users not just as content consumers, but as content creators. This article analyzes the integration of social annotations—uncontrolled user-generated content—into digital collection items. The literature review briefly summarizes the value of annotations and finds that there is conceptual room to include user-generated content in digital libraries, that digital libraries have been imagined as forums for social interaction since their inception, and that encouraging a collaborative approach to knowledge discovery and creation might make digital libraries serve as boundary objects that increase participation and engagement. The results of an ongoing case study of a Web 2.0 question and answer site that has made a similar transition from factual to social content are analyzed, and eight decision points for digital libraries to consider when integrating social annotations with digital collection items are proposed... Whether via links, tags, social bookmarks, comments, ratings, or other means, providing users the means to create, share, and interact around content typifies the Web 2.0 approach. Most instances of Web 2.0 operate from a model of aggregate peer authority. For example, no single expert tags (essentially categorizes) photographs on a site like flickr.com, but tags from an aggregation of non-experts can make a photograph 'findable enough.' Similarly, hotel ratings or movie reviews from a large-enough number of non-experts can provide a general sense of quality or trustworthiness. Most critically, knowledge discovery and transfer are no longer restricted to a model of one expert creator to many consumers. In Web 2.0, consumers are creators, who can add their voices to both expert and non-expert claims. Users get the benefit of multiple perspectives and can evaluate claims in the best tradition of participative, critical inquiry... Two concept maps in the DELOS 2007 reference model suggest that social annotations fit into current digital library architecture... Two of the primary engines of Web 2.0 are the ability to create update notifications and to share content across different sites. Users can set up RSS feeds and receive alerts when certain conditions are met, such as when content they have created draws a response. Providing tangible evidence that the effort they took to post was not in vain encourages people to return and continue the conversation.


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
Microsoft Corporation http://www.microsoft.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com
Sun Microsystems, Inc. http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2009-01-13.html  —  Legal stuff
Robin Cover, Editor: robin@oasis-open.org