XML Daily Newslink. Wednesday, 03 November 2010

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
ISIS Papyrus http://www.isis-papyrus.com



DITA Version 1.2 Submitted for OASIS Standard Approval Ballot
Kristen Eberlein, Robert Anderson, Gershon Joseph (eds), Candidate OASIS Standard

The OASIS Darwin Information Typing Architecture (DITA) Technical Committee has submitted an approved Committee Specification of DITA Version 1.2 for consideration as an OASIS Standard. Statements of use have been provided by Cisco, Comtech Services, IBM, JustSystems Canada, PTC, Really Strategies, and SDL. Balloting is scheduled to run from November 15 through November 30, 2010. The Darwin Information Typing Architecture (DITA) v1.2 specification "defines both a set of document types for authoring and organizing topic-oriented information and a set of mechanisms for combining, extending, and constraining document types."

"Prior to the release of DITA 1.2, the document types and specializations for technical content were included as an integral part of the base DITA specification. With the release of DITA 1.2, the document types and specializations for technical content have a dedicated section in the DITA specification. This change reflects the addition of an increasing number of specializations that are part of the DITA standard.

The document types and specializations included in the technical content package were designed to meet the requirements of those authoring content for technically oriented products in which the concept, task, and reference information types provide the basis for the majority of the content. These information types are used by technical-communication and information-development organizations that provide procedures-oriented content to support the implementation and use of computer hardware, computer software, and machine-industry content. However, many other organizations producing policies and procedures, multi-component reports, and other business content also use the concept, task, and reference information types as essential to their information models.

The DITA 1.2 technical content package includes the following document types and supporting structural specializations and constraints: (1) Concept document type and structural specialization; (2) Reference document type and structural specialization; (3) Task document type -- general task with the Strict Taskbody Constraint; (4) General task document type and specialization; (5) Machinery task document type -- general task with the Machinery Taskbody Constraint; (6) Glossary entry (glossentry) document type and structural specialization; (7) Glossary group (glossgroup) document type and structural specialization; (8) Bookmap document type and structural specialization..."
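For readers unfamiliar with DITA markup, here is a minimal sketch (not taken from the specification) of what a concept topic instance looks like, inspected with Python's standard library. The element names (concept, title, conbody) follow the DITA 1.2 language reference; the topic content itself is invented for illustration, and a real authoring setup would also declare a DOCTYPE and validate against the OASIS-supplied DTDs or schemas.

    import xml.etree.ElementTree as ET

    # A minimal DITA concept topic: a <concept> root carrying a required id,
    # a <title>, and a <conbody> holding the conceptual content.
    CONCEPT_TOPIC = """<?xml version="1.0" encoding="UTF-8"?>
    <concept id="about-widgets">
      <title>About widgets</title>
      <conbody>
        <p>A widget is the smallest configurable unit of the product.</p>
      </conbody>
    </concept>"""

    root = ET.fromstring(CONCEPT_TOPIC)
    print(root.tag, root.get("id"))   # -> concept about-widgets
    print(root.findtext("title"))     # -> About widgets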

See also: the OASIS announcement


Create Walk-Through and Acceptance Scripts With Single-Sourced DITA
Piers Michael Hollott, IBM developerWorks

"As your software development project and development team grow, you might need to create reference documentation such as a user guide for internal or external use. Creating this sort of documentation becomes more cumbersome the longer you defer it. Developing a framework that supports creating multiple types of documentation from a single source can be advantageous for small and large projects alike; with forethought, you can also leverage this documentation to support your project's quality assurance (QA) and testing needs...

Paired with a validating XML editor, Darwin Information Typing Architecture (DITA) provides a useful tool for developing topic-based user documentation that describes how to use your application. With a bit of forethought and planning, you can repurpose these same topics into documents that provide value much earlier in the development process, such as walk-through scripts for use in client demos or acceptance scripts for a manual quality assurance effort.

This article demonstrates how to develop a strategy of document reuse along with some benefits of initiating documentation projects early so you can repurpose them both immediately and throughout the life cycle of your software project. Using a single source to generate multiple document projects that crosscut the various stages of the development cycle opens further opportunities to leverage this shared knowledge. This approach is important because if documentation is used by multiple stakeholders in a software project, it is much more likely to be maintained and to gain value over time.

Applying a more flexible approach to documentation also has the advantage of crosscutting methodologies. If you are involved with a client, for example, who demands a high level of formality and ceremony but your development team practices an agile methodology, repurposing documentation might be exactly the approach you need to address the demands of both your client and your chosen methodology. If this is the case, DITA and XSLT 2.0 might be just the tools you need..."
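As a sketch of the single-sourcing idea (simplified: a real DITA toolchain drives this kind of filtering from ditaval files, typically via the DITA Open Toolkit), the same task topic below yields either a client walk-through or a QA acceptance script depending on which audience flags are excluded:

    import xml.etree.ElementTree as ET

    # One source topic; the extra verification step is flagged with DITA's
    # audience attribute so it appears only in the QA acceptance script.
    TASK = """<task id="create-report">
      <title>Create a report</title>
      <taskbody>
        <steps>
          <step><cmd>Open the reporting console.</cmd></step>
          <step audience="qa"><cmd>Verify the audit log records the action.</cmd></step>
          <step><cmd>Choose New Report and save.</cmd></step>
        </steps>
      </taskbody>
    </task>"""

    def filter_topic(xml_text, exclude_audience):
        """Drop elements flagged for an excluded audience (a toy stand-in
        for ditaval-driven conditional processing)."""
        root = ET.fromstring(xml_text)
        for parent in root.iter():
            for child in list(parent):
                if child.get("audience") == exclude_audience:
                    parent.remove(child)
        return root

    walkthrough = filter_topic(TASK, exclude_audience="qa")
    print(ET.tostring(walkthrough, encoding="unicode"))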

See also: DITA references


W3C First Public Working Draft of R2RML: RDB to RDF Mapping Language
Souripriya Das, Seema Sundara, Richard Cyganiak (eds), W3C Technical Report

Members of the W3C RDB2RDF Working Group have released a First Public Working Draft of the specification R2RML: RDB to RDF Mapping Language. This WG, part of the W3C Semantic Web Activity, was chartered to standardize a language for mapping relational data and relational database schemas into RDF and OWL, tentatively called the RDB2RDF Mapping Language, R2RML.

The Working Draft describes R2RML, "a language for expressing customized mappings from relational databases to RDF datasets. Such mappings provide the ability to view existing relational data in the RDF data model, expressed in a structure and target vocabulary of the mapping author's choice. R2RML mappings are themselves RDF graphs and written down in Turtle syntax. R2RML enables different types of mapping implementations: processors could, for example, offer a virtual SPARQL endpoint over the mapped relational data, or generate RDF dumps, or offer a Linked Data interface. The intended audience of this specification is implementors of software that generates or processes R2RML mapping documents, as well as mapping authors looking for a reference to the R2RML language constructs.

Besides the R2RML language, this working group will also define a fixed 'default mapping' from relational databases to RDF. In the default mapping of a database, the structure of the resulting RDF graph directly reflects the structure of the database, the target RDF vocabulary directly reflects the names of database schema elements, and neither structure nor target vocabulary can be changed. With R2RML on the other hand, a mapping author can define highly customized views over the relational data.

Every R2RML mapping is tailored to a specific database schema and target vocabulary. The input to an R2RML mapping is a relational database that conforms to that schema. The output is an RDF dataset, as defined in SPARQL, that uses predicates and types from the target vocabulary. The mapping is conceptual; R2RML processors are free to materialize the output data, or to offer virtual access through an interface that queries the underlying database, or to offer any other means of providing access to the output RDF dataset..."
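To make the flavor of this concrete, here is a hedged sketch: an R2RML-style mapping (written in Turtle, as the draft describes) parsed with the third-party rdflib package in Python. The property names shown (rr:logicalTable, rr:subjectMap, rr:template, and so on) follow the R2RML drafts; the table, column, and vocabulary names are invented for illustration, and details may differ in the First Public Working Draft.

    from rdflib import Graph  # third-party: pip install rdflib

    # An R2RML-style mapping: each row of an EMP table becomes an
    # ex:Employee resource whose IRI is minted from the EMPNO column.
    MAPPING = """
    @prefix rr: <http://www.w3.org/ns/r2rml#> .
    @prefix ex: <http://example.com/ns#> .

    <#EmpMap>
        rr:logicalTable [ rr:tableName "EMP" ] ;
        rr:subjectMap [
            rr:template "http://example.com/employee/{EMPNO}" ;
            rr:class ex:Employee
        ] ;
        rr:predicateObjectMap [
            rr:predicate ex:name ;
            rr:objectMap [ rr:column "ENAME" ]
        ] .
    """

    g = Graph()
    g.parse(data=MAPPING, format="turtle")
    print(f"mapping graph holds {len(g)} triples")  # the mapping is itself RDF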

See also: the W3C RDB2RDF Working Group


IETF Proposed Working Group for Uniform Resource Names, Revised (urnbis)
IESG Secretary, Proposed WG Announcement

From the Internet Engineering Steering Group (IESG): "A new IETF working group 'Uniform Resource Names, Revised (urnbis)' has been proposed in the Applications Area. The IESG has not made any determination as yet; the draft charter has been submitted and is provided for informational purposes; please send your comments to the IESG mailing list by Tuesday, November 9, 2010."

From the Problem Statement: "Uniform Resource Names (URNs) are location-independent, persistent identifiers for information resources. The RFCs defining URNs were published in 1997-2001. They rely on old (or even provisional) basic documents on the concepts of URI and URL. At that time there was almost no URN implementation experience. Since then, the URN system has gained significant popularity, and roughly 40 formal URN Namespaces have been defined and registered with IANA. Hundreds of millions of resources have been assigned URNs; this enables searching of and persistent linking to these documents, artifacts, and other objects. However, the URN system lacks a foundation that is consistent in terminology and formal description with present (Full) Internet Standards...

The lack of a standard definition of the 'urn' URI scheme fosters recurring discussions on what URNs are and IETF commitment to them. There is a need to clarify that URNs are specific URIs (namely those using the 'urn' URI scheme) and hence all general URI rules apply to URNs... There also is a need to update some namespace registrations for at least two reasons: the standards specifying the relevant underlying namespaces (such as International Standard Book Number (ISBN)) have been amended/expanded since the original specification of the related URN namespace and the WG's update of the basic URN-related RFCs might introduce or identify inconsistencies.

[As proposed] this working group is chartered to update the key RFCs describing the URN system, including RFC 2141 (URN Syntax) and RFC 3406 (Namespace Definition Mechanisms), and to review and update selected URN namespace specifications, including those for ISBN, National Bibliography Numbers (NBN), and International Standard Serial Number (ISSN). For all document revisions, backward compatibility with previous URN-related RFCs will be retained. The WG will produce an updated set of URN-related RFCs. All documents will be on the Standards Track or BCP. These updates will provide a normative foundation for URNs and assure uniformity of the URN assignment and resolution concepts and procedures at the abstract level..."
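As a quick illustration of the syntax at issue, RFC 2141 defines a URN as "urn:" followed by a namespace identifier (NID) and a namespace-specific string (NSS). The sketch below checks that shape with a deliberately simplified regular expression; see RFC 2141 for the exact grammar, and note that the sample values are illustrative only.

    import re

    # Simplified RFC 2141 shape: "urn:" NID ":" NSS, where the NID is up to
    # 32 characters drawn from letters, digits, and hyphen (hyphen not first)
    # and the NSS character class here is a rough approximation.
    URN_RE = re.compile(
        r"^urn:[a-z0-9][a-z0-9-]{0,31}:[a-z0-9()+,\-.:=@;$_!*'%/?#]+$",
        re.IGNORECASE,
    )

    for candidate in ("urn:isbn:0451450523", "URN:ISBN:0451450523", "not-a-urn"):
        print(candidate, "->", bool(URN_RE.match(candidate)))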

See also: the IETF URN discussion list


JSON Web Token (JWT) Defines Token Format for Encoding Claims
Michael Jones, Dirk Balfanz, John Bradley (et al), Community Draft Specification

A draft version of the "JSON Web Token (JWT)" specification has been made available for public review. JSON Web Token (JWT) "defines a token format that can encode claims transferred between two parties. The claims in a JWT are encoded as a JSON object that is digitally signed."

The principal editor (Michael B. Jones) presented this version as the result of merging earlier draft proposals: "I've produced a new JSON token draft based on a convergence proposal discussed with the authors of the other JSON signing proposals. I borrowed portions of this draft with permission from Dirk Balfanz, John Bradley, John Panzer, and Nat Sakimura, and so listed them as co-authors..." A detailed comparison of the precursor documents is also available.

Summary: "JSON Web Token (JWT) [suggested pronunciation 'jot'] is a simple token format intended for space-constrained environments such as HTTP Authorization headers and URI query parameters. JWTs encode the claims to be transmitted as a JSON object that is base64url encoded and digitally signed. As per RFC 4627 Section 2.2, the JSON object consists of zero or more name/value pairs (or members), where the names are strings and the values are arbitrary JSON values. These members are the claims represented by the JWT. The JSON object is base64url encoded to produce the JWT Claim Segment. An accompanying base64url encoded JSON envelope object describes the signature method used.

The names within the object MUST be unique. The names within the JSON object are referred to as Claim Names. The corresponding values are referred to as Claim Values. JWTs contain a signature that ensures the integrity of the content of the JSON Claim Segment. This signature value is carried in the JWT Crypto Segment. The JSON Envelope object must contain an "alg" parameter, the value of which is a string that unambiguously identifies the algorithm used to sign the JWT Claim Segment to produce the JWT Crypto Segment... The members of the JSON object represented by the Decoded JWT Claim Segment contain the claims. Note, however, that the set of claims a JWT must contain to be considered valid is context-dependent and is outside the scope of this specification. There are three classes of JWT Claim Names: Reserved Claim Names, Public Claim Names, and Private Claim Names..." [Subsequent to IIW 2010 in Mountain View, further improvements have been proposed.]
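As a hedged sketch of the mechanics described above, the following Python uses only the standard library to build a signed token. The envelope-first segment order and the choice of signing input are assumptions based on the summary here, and the key and claim values are invented; consult the draft for normative details.

    import base64, hashlib, hmac, json

    def b64url(data: bytes) -> str:
        # base64url encoding with padding stripped
        return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

    key = b"shared-secret"  # illustrative HMAC key

    # The envelope names the signature method; the claims are arbitrary JSON.
    envelope = {"alg": "HS256"}  # HMAC SHA-256
    claims = {"iss": "joe", "exp": 1300819380}

    envelope_seg = b64url(json.dumps(envelope, separators=(",", ":")).encode())
    claim_seg = b64url(json.dumps(claims, separators=(",", ":")).encode())

    # The crypto segment carries the signature over the first two segments.
    signing_input = f"{envelope_seg}.{claim_seg}".encode()
    crypto_seg = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())

    print(f"{envelope_seg}.{claim_seg}.{crypto_seg}")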

See also: details on the convergence proposal


OGC Seeks Comments on Sensor Observation Service Candidate Standard
Staff, Open Geospatial Consortium Announcement

"The Open Geospatial Consortium (OGC) invites public comment on the candidate OGC Sensor Observation Service (SOS) Standard Version 2.0. The SOS candidate interface standard is designed to provide access to sensor observations, sensor descriptions, and digital representations of observed features in an interoperable and standardized way. Further, the SOS 2.0 candidate standard provides means to insert new sensor descriptions or observations.

Sensor systems contribute the largest part of geospatial data used in geospatial systems today. Sensor systems include, for example, in-situ sensors (e.g., river gauges), moving sensor platforms (e.g., satellites or autonomous unmanned vehicles), and networks of static sensors (e.g., seismic arrays). Used in conjunction with other OGC specifications, the SOS provides a broad range of interoperable capability for discovering, binding to, and interrogating individual sensors, sensor platforms, or networked constellations of sensors in real-time, archived, or simulated environments.

The SOS is part of the OGC Sensor Web Enablement (SWE) framework of standards. The SWE activity aims at providing interfaces and protocols for enabling Sensor Webs through which applications and services are able to access sensors of all types. Sensor Webs can be accessed over networks such as the Internet with the same standard technologies and protocols that enable the Web... The OGC Sensor Observation Service revision incorporates several enhancements. These include a modular restructuring of the document, new KVP and SOAP bindings, redesign of the observation offering concept, and reliance on the OGC Sensor Web Enablement Service Model. SOS 2.0 is highly modular and follows the OGC core/extension design pattern. The main SOS 2.0 document incorporates the core as well as the transactional extension, result handling extension, enhanced operations extension, binding extension, and a profile for spatial filtering of observations..."

The draft 'OGC SOS 2.0 Interface Standard Version 2.0.0' (edited by Arne Broering, Christoph Stasch, and Johannes Echterhoff) defines a SOAP binding for all specified operations as well as a KVP binding for the core operations and the 'GetFeatureOfInterest' operation. Future versions or extensions of this standard may add a RESTful binding similar to what has been defined by Janowicz et al.
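For a sense of what a KVP binding means in practice, the sketch below assembles a GetCapabilities request URL in Python. The endpoint is hypothetical, and while the service/version/request parameters follow the usual OGC KVP conventions, the authoritative parameter set is defined in the candidate standard itself.

    from urllib.parse import urlencode

    ENDPOINT = "http://example.org/sos"  # hypothetical SOS endpoint

    # Key-value-pair (KVP) encoding: operation parameters ride in the
    # query string rather than in a SOAP or POST body.
    params = {
        "service": "SOS",
        "version": "2.0.0",
        "request": "GetCapabilities",
    }
    print(f"{ENDPOINT}?{urlencode(params)}")
    # http://example.org/sos?service=SOS&version=2.0.0&request=GetCapabilities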

See also: the OpenGIS Sensor Model Language (SensorML)


Global Adoption of W3C Standards Boosted by ISO/IEC Official Recognition
Staff, W3C Announcement

"W3C, the International Standards Organization (ISO), and the International Electrotechnical Commission (IEC) have taken steps "that will encourage greater international adoption of W3C standards. W3C is now an 'ISO/IEC JTC 1 PAS Submitter', bringing 'de jure' standards communities closer to the Internet ecosystem.

As national bodies refer increasingly to W3C's widely deployed standards, users will benefit from an improved Web experience based on W3C's standards [known as W3C 'Recommendations'] for an Open Web Platform. W3C expects to use this process: (1) to help avoid global market fragmentation; (2) to improve deployment within government use of the specification; and (3) when there is evidence of stability/market acceptance of the specification. Web Services specifications will likely constitute the first package W3C will submit, by the end of 2010."

From the FAQ document: "There are contexts where having the de-jure standard imprimatur is likely to increase adoption of W3C specifications. For instance, beyond the W3C brand, a larger audience may be familiar with the ISO and IEC brands. Furthermore, there are also contexts where it is mandatory to use ISO/IEC standards or their national transposition by legislation, for instance in some government procurement. W3C also has experience where lack of coordination among standards bodies results in fragmentation. The PAS process can be seen as a mechanism for better coordination between different standardization cultures, all of which seek global interoperability for ICT technologies, but through different means. This coordination can result in lower entry costs on the Web platform for the community at large... Together with the W3C ARO status (allowing ISO specs to reference W3C Recommendations), W3C believes that de jure recognition of W3C specifications will add trust to the entire international standard system, so that end-users benefit from better interoperability..."

W3C's decision aligns with practice in other SSOs: according to the ISO roster of Approved PAS Submitters, other PAS submitters include OASIS, OMG, WS-I, IFPUG, UPnP Implementers Corporation, the Trusted Computing Group (TCG), and The Open Group. Former submitters include NESMA, Sun Microsystems, IrDA, DAVIC, X/Open, VESA, ATM Forum, EUROPAY International, UKSMA, DMTF, ISSEA, The J Consortium, and the Linux Foundation.

See also: the W3C PAS FAQ document


First Official HTML5 Tests Topped by Microsoft
Cade Metz, The Register

"The Worldwide Web Consortium has released the results of its first HTML5 conformance tests, and according to this initial rundown, the browser that most closely adheres to the latest set of web standards is...Microsoft Internet Explorer 9.

Yes, the HTML5 spec has yet to be finalised. And yes, these tests cover only a portion of the spec. But we can still marvel at just how much Microsoft's browser philosophy has changed in recent months.

The W3C tests, available [online], put IE9 beta release 6 at the top of the HTML5 conformance table, followed by the Firefox 4 beta 6, Google Chrome 7, Opera 10.6, and Safari 5.0. The tests cover seven aspects of the spec: 'attributes', 'audio', 'video', 'canvas', 'getElementsByClassName', 'foreigncontent', and 'xhtml5'... The tests do not yet cover web workers, the file API, local storage, or other aspects of the spec. Nor do they cover CSS or other standards that have nothing to do with HTML5 but are somehow lumped under HTML5 by the likes of Apple, Google, and Microsoft.

From the HTML5 Test Suite Conformance Results document: "Interoperability is important to web designers. Good test suites drive interoperability. They're a key part of making sure web standards are implemented correctly and consistently. More tests encourage more interoperability. The HTML5 Test Suite Results aims to help implementers write applications that support HTML5. In no way are these conformance tests to be construed as providing certification or branding of HTML5 implementations. The only claim that could be made is that a particular implementation is conformant to a particular version of the HTML5 Test Suite... [Here] are the results obtained when using the approved HTML5 tests that have been agreed upon by the HTML WG as valid per the HTML5 specification. The tests can be run and inspected individually using our test runner; see the test harness..." [Note: W3C has renewed a call for participation in testing to improve interoperability.]

See also: the HTML5 Test Suite Conformance Results


CollabNet Further Embraces Cloud with Codesion Acquisition
David Ramel, Application Development Trends

"CollabNet, known for its Agile application lifecycle management (ALM) platform, made a self-described 'aggressive move' into cloud-based developer services recently by announcing it is acquiring Codesion, which hosts software version control services such as Subversion. Guy Marion, CEO of CollabNet, said in a statement that the acquisition was a 'natural fit' for the company because 'our users were seeking version control training and best practices around Agile development.'

CollabNet itself created the open-source Subversion version control system in 2000 and is still the primary sponsor of the project, reportedly used by millions of users, as well as many high-profile projects such as SourceForge, Ruby, Python and PHP. It's now under an open-source Apache license and is officially called Apache Subversion.

Codesion, formerly called CVSDude, also hosts version control services Git and CVS, along with other applications, in the software-as-a-service model..."

According to the Codesion web site: "Codesion's exclusive new FrogSAFE (Secure Application Fusion Engine) technology is secure, fast, and highly available, enabled by redistributing server load across multiple service-optimized clusters. Customers enjoy our clean, AJAX-based web interface, which allows managers to configure open source tools like Git, Subversion, Trac, and Bugzilla in a few mouse clicks. Assign fine-grained permissions with our built-in SVN browser, or use our API to integrate with your systems..."

See also: Codesion cloud services


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
ISIS Papyrus http://www.isis-papyrus.com
Microsoft Corporation http://www.microsoft.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Hosted by OASIS - Organization for the Advancement of Structured Information Standards

Document URI: http://xml.coverpages.org/newsletter/news2010-11-03.html
Robin Cover, Editor: robin@oasis-open.org