XML Daily Newslink. Monday, 17 November 2008

Please complete a short survey about this newsletter.

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
IBM Corporation http://www.ibm.com



Equifax and Parity Offer Online I-Card Beta Test
Staff, Equifax Announcement

Equifax Inc. has unveiled the Equifax online identity card, or I-Card, with a beta test of a first-of-its-kind digital identity management solution that is designed to make online transactions easier and more secure for both consumers and businesses. Information cards (I-cards) are the online equivalent of a driver's license, passport, or similar ID; they allow consumers to 'click-in' to web and e-commerce sites that accept the I-card and conduct online transactions with greater security and control, without having to fill in forms or remember multiple passwords. It is anticipated that this ease of use and security will, over time, facilitate relationships between consumers and businesses by reducing the need for companies to retain customers' personal identification information, which could also reduce the risks posed by data breaches. Equifax is partnering with Parity, a leader in user-centric identity management, to offer the Equifax I-Card, which enables people to verify their identity online... The Equifax I-Card is part of the growing trend to provide increased anonymity and security for a consumer's financial and credit information online. Equifax is working with Parity to help deliver this solution. It also used its premier authentication solution, eIDverifier, as well as multiple data sources for identity verification, along with open source technology that is endorsed by The Information Card Foundation (ICF), an industry consortium of consumer, data and technology companies. The Equifax I-Card is among the first commercial I-card-based products intended to help secure digital identity on the Internet. Led by member companies including Deutsche Telekom, the ICF promotes the rapid build-out and adoption of Internet-enabled digital identities using information cards...

See also: Information Card Foundation references


Combining the Power of Taverna and caGrid: Scientific Workflows that Enable Web-Scale Collaboration
Wei Tan, Ian Foster, Ravi Madduri; IEEE Internet Computing

Service-oriented architecture represents a promising approach to integrating data and software across different institutional and disciplinary sources, thus facilitating Web-scale collaboration while avoiding the need to convert different data and software to common formats. The US National Cancer Institute's Biomedical Informatics Grid program seeks to create both a service-oriented infrastructure (caGrid) and a suite of data and analytic services. Workflow tools in caGrid facilitate both the use and creation of services by accelerating service discovery, composition, and orchestration tasks. The authors present caGrid's workflow requirements and explain how they met these requirements by adopting and extending the Taverna system. Taverna is an open source workflow workbench developed in the myGrid project; its goal is to facilitate the use of workflows and distributed resources within the e-science community. Taverna provides both a workflow-authoring tool that uses a proprietary definition language called Scufl, and an execution engine compliant with this language. To validate our decision to choose Taverna, we look at some research challenges that occur in the life cycle of scientific workflows and the Taverna features that align with them. This life cycle has four stages: service discovery, workflow composition, workflow execution, and result analysis. We noticed that the service interaction process (discovery, engagement, and enactment) proposed in the Semantic Web Service Architecture (SWSA) is a well-accepted one for Semantic Web services... Globus-based caGrid services are Web services invoked via SOAP, with WSDL-defined interfaces. Globus implements two sets of Web services features that are particularly important for Web-scale computing: access to stateful resources and secure access. In scientific computing, users want to access and manipulate state in service interactions. For example, scientists might submit a job to a scheduler and want to query the state of this specific job instance or get a state notification when the job completes. The Web Service Resource Framework (WSRF) specification lets service clients access stateful resources. A resource-generation operation creates a new resource instance and returns an element called ReferenceProperties that identifies this instance... We use Taverna and the caGrid plug-in to identify relevant services, compose those services with additional building blocks (for data transformation), and orchestrate their execution. Our workflow involves three major steps: (1) Identify and retrieve the microarray data of interest. We used CQL, the query language that caGrid Data Services use, to specify this data and retrieve it from a caArray data service hosted at Columbia University; (2) Preprocess, or normalize, the microarray data before clustering them. We used a GenePattern analytical service; (3) Run hierarchical clustering on the preprocessed data. We invoked the geWorkbench analytical service hosted at Columbia University...
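
The three-step workflow just described can be pictured as a simple orchestration of service calls. The sketch below is illustrative only: the helper functions are hypothetical Python stand-ins for the real caGrid (caArray), GenePattern, and geWorkbench service invocations, and the actual workflow is authored in Taverna's Scufl language and run by its execution engine.

```python
# Illustrative sketch only: these helper functions are local stand-ins for the
# caGrid (caArray), GenePattern, and geWorkbench service calls that Taverna
# would orchestrate from a Scufl workflow definition; no real services are used.

def query_caarray(cql_query):
    """Step 1: retrieve the microarray data of interest via a CQL query."""
    return [[1.0, 2.0], [3.0, 4.0]]            # placeholder expression matrix

def normalize_with_genepattern(data):
    """Step 2: preprocess (normalize) the data before clustering."""
    return data                                 # placeholder: no-op normalization

def cluster_with_geworkbench(data, method="hierarchical"):
    """Step 3: run hierarchical clustering on the preprocessed data."""
    return {"method": method, "clusters": [data]}   # placeholder result

# Wiring the three steps together mirrors the workflow outlined in the article.
result = cluster_with_geworkbench(
    normalize_with_genepattern(query_caarray("<CQLQuery>...</CQLQuery>"))
)
print(result["method"])
```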


W3C Publishes XML Signature Best Practices First Public Draft
Frederick Hirsch and Pratik Datta (eds), W3C Technical Report

Members of W3C's XML Security Working Group have published the First Public Working Draft of "XML Signature Best Practices." The XML Signature specification offers powerful and flexible mechanisms to support a variety of use cases. This flexibility has the downside of increasing the number of possible attacks. One countermeasure to the increased number of threats is to follow best practices, including simplifying the use of XML Signature where possible. This "XML Signature Best Practices" document outlines best practices noted by the XML Security Specifications Maintenance Working Group, the XML Security Working Group, and other ideas cited at the Workshop on Next Steps for XML Security. While most of these best practices are related to improving security and mitigating attacks, others address the practical use of XML Signature, such as signing XML that doesn't use namespaces.

See also: the W3C Workshop Report


RESTful Web Services Development Checklist
Steve Vinoski, IEEE Internet Computing

Proponents of the Representational State Transfer (REST) architectural style sometimes describe it as being easy, but this in no way implies that REST is trivial or simplistic, nor does it mean that RESTful systems lack sophistication. REST's relative simplicity comes from the fact that it not only clearly defines its trade-offs and constraints but also distinctly separates concerns, such as resource identification, resource interfaces, and definitions for interchanged data. This delineation makes it relatively easy for developers designing and building RESTful services to consider and track important issues that can profoundly impact system flexibility, scalability, and performance. REST isn't the answer to all distributed computing and integration problems by any stretch of the imagination, but it can yield highly practical solutions to a variety of such problems, not only on the Web but also within the enterprise... HTTP supports content negotiation (conneg) between clients and services. A client can set the Accept header in a request to a list of acceptable MIME types to tell the server what formats it's willing to receive. It can also augment the list with quality (q) parameters to indicate preferences. For example, a browser might send an Accept header declaring its preference for XHTML, HTML, and image types, in that order, followed by a wildcard indicator with a low q parameter to indicate that it will accept anything else as well. Noninteractive programmatic clients, however, tend to prefer a much more limited set of media types, often just one. When a server returns a response, it sets the Content-Type header to indicate the type of representation it's returning. To determine the client's preferred content type for a given request, servers must be capable of parsing Accept headers, using techniques such as those embodied in the open source 'mimeparse' module... Status codes are quite important as well. For each method on each resource, developers must choose which HTTP status codes to return, and under what circumstances... HTTP can be reasonably efficient on a global networking scale because it provides significant support for intermediation and caching. Servers control whether their responses can be cached and, if so, for how long. But even for small-scale systems without any caching intermediaries, servers and clients can still include certain data in the headers of their responses and requests that can significantly reduce the amount of data they exchange and, in some cases, even eliminate it. Because conditional GET is relatively straightforward, service developers should always strive to support it. One way to do so is to return the date and time of the most recent change to the resource in the Last-Modified header when a client requests a GET of that resource... This helps overall efficiency for both the server and client by avoiding sending and receiving the same message bodies repeatedly...
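
As a concrete illustration of the content-negotiation and conditional-GET practices described above, the sketch below (Python standard library only; the URL is a placeholder) sends an Accept header with q-values and replays the server's Last-Modified value as If-Modified-Since on the next request, so an unchanged resource comes back as a 304 with no body.

```python
# A minimal sketch of content negotiation plus conditional GET using only the
# standard library; the URL is a placeholder. Real clients would typically also
# handle ETag/If-None-Match.
import urllib.error
import urllib.request

URL = "http://example.org/reports/42"   # placeholder resource

def fetch(last_modified=None):
    headers = {
        # Prefer XML, fall back to JSON, and accept anything else at low quality.
        "Accept": "application/xml, application/json;q=0.8, */*;q=0.1",
    }
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    req = urllib.request.Request(URL, headers=headers)
    try:
        with urllib.request.urlopen(req) as resp:
            # Keep the representation's Last-Modified stamp for the next request.
            return resp.read(), resp.headers.get("Last-Modified")
    except urllib.error.HTTPError as err:
        if err.code == 304:
            # Not Modified: the locally cached representation is still fresh.
            return None, last_modified
        raise

body, stamp = fetch()          # first request returns the full representation
body, stamp = fetch(stamp)     # replay: expect 304 if the resource is unchanged
```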


Identity-based Encryption Architecture and Supporting Data Structures
Guido Appenzeller, Luther Martin (et al., eds), IETF Internet Draft

The Internet Engineering Steering Group (IESG) recently announced approval of "Identity-based Encryption Architecture and Supporting Data Structures" as an Informational RFC. It was produced by members of the IETF S/MIME Mail Security (SMIME) Working Group. The specification describes the security architecture required to implement identity-based encryption, a public-key encryption technology that uses a user's identity to generate their public key. At least one implementation exists; no additional vendors have announced implementation plans. In the Request Structure, the POST method contains in its body a prescribed XML structure that must be encoded as an 'application/ibe-key-request+xml' MIME type. For the Server Response Format: if the PKG replies with an HTTP response that has a status code indicating success, the body of the reply must contain a prescribed XML structure that must be encoded as an 'application/ibe-pkg-reply+xml' MIME type... Identity-based encryption (IBE) is a public-key encryption technology that allows a public key to be calculated from an identity and a set of public mathematical parameters, and the corresponding private key to be calculated from an identity, a set of public mathematical parameters, and a domain-wide secret value. An IBE public key can be calculated by anyone who has the necessary public parameters; a cryptographic secret is needed to calculate an IBE private key, and the calculation can only be performed by a trusted server that holds this secret. Calculation of both the public and private keys in an IBE system can occur as needed, resulting in just-in-time creation of both public and private keys. This contrasts with other public-key systems, in which keys are generated randomly and distributed prior to secure communication commencing, and in which private encryption keys need to be securely archived to allow for their recovery if they are lost or destroyed. The ability to calculate a recipient's public key, in particular, eliminates the need for the sender and receiver to interact with each other, either directly or through a proxy such as a directory server, before sending secure messages. A characteristic of IBE systems that differentiates them from other server-based cryptographic systems is that once a set of public parameters is fetched, encryption is possible with no further communication with a server during the validity period of the public parameters. Other server-based systems may require a connection to a server for each encryption operation.
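
The HTTP/MIME framing of the key-request exchange can be sketched roughly as follows. The PKG URL is hypothetical and the request body is only a placeholder, since the actual XML structures are prescribed by the specification and not reproduced here.

```python
# Sketch of the client side of an IBE key request; the PKG endpoint is
# hypothetical and the XML body is a placeholder for the structure the
# specification prescribes.
import urllib.request

PKG_URL = "https://pkg.example.com/ibe/request"   # hypothetical key server

key_request_xml = b"<!-- prescribed IBE key-request XML goes here -->"

req = urllib.request.Request(
    PKG_URL,
    data=key_request_xml,
    method="POST",
    headers={"Content-Type": "application/ibe-key-request+xml"},
)

with urllib.request.urlopen(req) as resp:
    # A successful PKG reply carries the prescribed XML reply structure.
    assert resp.headers.get_content_type() == "application/ibe-pkg-reply+xml"
    reply_xml = resp.read()
```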

See also: the IETF S/MIME Mail Security (SMIME) Working Group


OASIS Forms Technical Committee to Advance CMIS as an Open Standard
Staff, OASIS Announcement

OASIS has formed a new group to standardize a Web services interface specification that will enable greater interoperability of Enterprise Content Management (ECM) systems. The new OASIS Content Management Interoperability Services (CMIS) Technical Committee will advance an open standard that uses Web services and Web 2.0 interfaces to enable information to be shared across Internet protocols in vendor-neutral formats, among document systems, publishers and repositories, within and between companies. Mary Laplante, senior analyst at The Gilbane Group: "CMIS offers new potential for write-once, run-anywhere content. Companies want the best solutions for their business applications. In reality, this means multiple CM systems and the resulting need for integration. Companies still spend significant time and money connecting heterogeneous repositories. CMIS offers the promise of dramatically reduced IT burden associated with maintaining custom integration code and one-off integrations."

With CMIS, users do not need unique applications to access each ECM repository. Application development and deployment are much faster and more efficient. The specification provides easy mapping to existing ECM systems. Web technologies, including Web 2.0, Internet scale, service-orientation and resource-orientation, are all exploited in CMIS. According to Al Brown of IBM, convenor of the OASIS CMIS Technical Committee: "CMIS will help rapidly grow the industry for both vendors and independent software vendors (ISVs), while protecting customers' investments in applications and repositories. The specification makes it possible to build applications capable of running over a variety of content management systems. This will foster the growth of a true ECM ecosystem and the overall ECM market..."

See also: complete CMIS references


Collaboration Is At the Heart of Open Source Content Management
Andrew Conry-Murray, InformationWeek

"... Microsoft SharePoint has blitzkrieged the ECM market thanks to the same powerful weapon Drupal is using: collaboration. SharePoint makes it relatively easy for sales, marketing, and product development teams to access and share content using common Office tools. SharePoint thrust itself into the ECM space by wrapping essential document management capabilities, including access control, authorization, and workflows, around this collaboration environment... In October, Alfresco launched Share, a collaboration system built into Enterprise 3.0, its ECM platform. Share lets employees and people outside a company set up workspaces to collaborate on documents and files. Like most collaboration apps, Share also offers blogs, wikis, and calendars, and it includes a fairly easy way for people to upload and manage documents in a library that includes a Flash-based viewer, so people can preview documents before downloading them... Companies can run Share on the operating system and database software they want... But this might be only the start of more wide-open competition. A proposed standard promises to crack open ECM silos and let developers create a new generation of apps that can pull content from heterogeneous repositories. The Content Management Interoperability Services (CMIS) specification is backed by most of the industry and is expected by late next year.

Today, companies must buy or build integrations to link applications to competing ECM products. Alfresco CTO John Newton says CMIS will have the same impact on the ECM market that the SQL standard had on databases. "Until there was a standard, you were beholden to vendors to create applications on top of a repository," he says. "Now you'll get a variety of wider applications, search tools, publishing tools, and integration with the Web." Newton says Alfresco's open source foundation will let the company move faster than others, a claim it backed up by being the first ECM vendor to release a draft implementation of the proposed specification. "It's hard to see how a legacy player will counter a platform that's high-quality, free of charge, and based on a standard everyone is going to," says John Howard, president and CEO of Alfresco. Those are bold words from a company that's not yet profitable, particularly when it's up against EMC, IBM, and Microsoft, which are growing fast and are backed by some of the industry's richest R&D budgets. From a product standpoint, Alfresco still has work to do to become a well-rounded ECM product. It doesn't yet have certification for DoD 5015.2, a Defense Department records management standard used by all federal agencies. It also lacks the European records management certification. Without these stamps of approval, government agencies may look elsewhere to manage official records. Still, big companies' willingness to use platforms like Drupal and Alfresco shows how they're leveraging collaboration to drive open source into new territory..."

See also: CMIS references


Working on Jing and Trang
James Clark, Random Thoughts Blog

I've been back to working on Jing (A RELAX NG Validator in Java) and Trang (Multi-format Schema Converter Based on RELAX NG) for about a month now. It would be something of an understatement to say that they were badly in need of some maintenance love: it's been five years since the last release... I started a jing-trang project on Google Code to host future development. There are new releases of both Jing and Trang in the downloads section of the project site. The code base for Jing and Trang had evolved over a number of years, incorporating various bits of functionality that were independent of each other to various degrees; its structure only made any sense from a historical perspective. The current structure is now nicely modular. I converted my CVS repository to Subversion before I started moving things around, so the complete history is available in the project repository. For people who want to stay on the bleeding edge, it's now really easy to check out and build from Subversion. My natural tendencies are much more to the cathedral than to the bazaar, but I'm trying to be more open. I'm pleased to say that there are already two committers in addition to myself. There's a commercial XML editor called <oXygen/>, which uses Jing and Trang to support RELAX NG. The main guy behind that, George Bina, has made a number of useful improvements. In particular, he upgraded Jing's support for the Namespace Routing Language to its ISO-standardized version, which is called NVDL (you might want to start with this NVDL tutorial rather than the spec). This is now on the trunk. The other committer is Henri Sivonen, who has been using Jing in his Validator.nu service. My goals for the next release are: (1) complete support for NVDL (I think the only missing feature is inline schemas); (2) support for the ISO-standardized version of Schematron; (3) customizable resource resolution support (so that, for example, you can use XML catalogs); (4) support for the standard JAXP XML validation API (javax.xml.validation); (5) more code cleanup. Please use the issue tracker to let me know what you would like. Google Code has a system that allows you to vote for issues: if you are logged in, which you can do with a regular Google account, each issue will be displayed with a check box next to a star; checking this box "stars" the issue for you, which both adds a vote for the issue and gets you email notifications about changes to it.
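
For readers who just want to run the validator today, Jing is easily driven from the command line (the JAXP validation-API support is still a goal, per item 4). The wrapper below is a minimal sketch assuming a local jing.jar; the schema and document file names are purely illustrative.

```python
# Small wrapper around Jing's command-line interface. Assumes jing.jar is
# available locally; the schema and document names are illustrative. Jing
# exits non-zero and prints messages when the document is invalid.
import subprocess

def validate_relaxng(schema, document, jing_jar="jing.jar"):
    cmd = ["java", "-jar", jing_jar]
    if schema.endswith(".rnc"):
        cmd.append("-c")                      # compact-syntax schema
    cmd += [schema, document]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print(result.stdout or result.stderr)
    return result.returncode == 0

if __name__ == "__main__":
    ok = validate_relaxng("docbook.rng", "article.xml")
    print("valid" if ok else "invalid")
```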

See also: the jing-trang project on Google Code


OASIS Forms SOA for Telecom (SOA-Tel) Technical Committee
Staff, OASIS Announcement

OASIS announced that members have created a new "OASIS SOA for Telecom (SOA-TEL) Technical Committee." Abbie Barbir (Nortel) is Convenor of the TC, which holds its first face-to-face meeting January 13-15, 2009 in Ottawa, Ontario, Canada. The TC operates under the OASIS RAND IPR Mode. This TC plans to identify gaps in standards coverage for using Service Oriented Architecture (SOA) techniques in a telecom environment, particularly for Telecom operators/providers. The combined term "provider/operator" means a company that utilizes a telecoms network to provide service to the subscriber community; it may or may not own the network assets or services it provides. The proposers assert that "Applicability of IT-based SOA techniques is much more complex in the Telecom world, where services and network features are often tightly coupled and vertically integrated... This complexity hinders the ability of Telecom operators/providers to offer their clients converged, identity-based services that are available at any time and secure across any access network and that are operating system, device and location independent... It is important for the Telecom industry to identify where and why SOA can be applied in telecommunication, and the potential gaps and limitations of using Web 2.0, SOA, Web Services and/or REST in supporting the unique requirements of integrating telecommunication services within business applications... This work focuses on identifying gaps and generates requirements to identify how existing standards can help Telecom providers/operators better compete in this new environment."

See also: the TC home page


Validating Code Lists with Schematron
Rick Jelliffe, O'Reilly Technical

How happy the man whose documents are clearly divided into variant and invariant: data versus schemas. But in the real world there are often data values or structures whose choices are fixed, but not completely fixed: a twilight zone. For example, the values of a field with codes for different nations may vary independently of the schema which requires such codes be used: think of the political roil of Eastern Europe at the end of the Cold War. If the schema enumerates the allowed codes, then it will need to be updated to track the actual values, which requires an ongoing effort and creates a deployment/update burden; but if the schema just states some lesser requirement, such as requiring a token, developers need to devise some alternative mechanism for validating and documenting the constraints. But there are more subtle and potentially catastrophic issues at play. If it is decided to update the schema by merely adding the new codes without removing old ones, that removes a check against incorrect data values. If, however, it is decided to put out a new version of the schema, then documents clearly need to signal which version of the schema they are supposed to accord to. And XML Schema's type derivation mechanism may get in the way if it was not set up correctly: the correct method being a base type using tokens, with derived types carrying the actual enumerated values as restrictions, and type binding being done against the base general schema rather than any particular restricted one. Furthermore, XSD is very frequently compiled rather than used for dynamic validation, so the option of merely keeping the code list in a separate namespace and schema module (administered separately and imported as needed) is not available. There are two basic ways of handling code lists with Schematron...
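
The article's two approaches are not reproduced here, but the general flavor of validating a code list with Schematron can be sketched as follows; the element names, code values, and the choice of testing against a whitespace-delimited list are illustrative assumptions rather than the article's own code. The example uses lxml's ISO Schematron support.

```python
# Illustrative only: element names (shipment, country) and the code list are
# invented for this sketch; the whitespace-delimited list could instead be
# pulled from a separately administered document.
from lxml import etree, isoschematron

schematron_src = etree.XML(b"""
<schema xmlns="http://purl.oclc.org/dsdl/schematron">
  <pattern>
    <rule context="country">
      <assert test="contains(' AU DE NZ US ', concat(' ', normalize-space(.), ' '))">
        Unknown country code: <value-of select="."/>
      </assert>
    </rule>
  </pattern>
</schema>
""")

validator = isoschematron.Schematron(schematron_src)

good = etree.XML(b"<shipment><country>NZ</country></shipment>")
bad = etree.XML(b"<shipment><country>XX</country></shipment>")

print(validator.validate(good))   # True: 'NZ' is on the code list
print(validator.validate(bad))    # False: 'XX' is not on the code list
```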

See also: Schematron references


JTC1 SC 34 Presentation to the JTC 1 Plenary in Nara, Japan
Dr. Sam Oh, SC 34 Informational FYI Document

Excerpt from Document N1115, ISO/IEC JTC 1/SC 34 Document Description and Processing Languages, apropos of ODF/IS 26300 spec maintenance: "JTC1 recognizes the timely response (N9398) from OASIS to the SC34 liaison statement (SC34 N1095), and thanks OASIS for the new draft errata to ODF 1.0. JTC1 particularly welcomes OASIS's proposal to confer with JTC1 and SC34 to forge a genuine partnership for collaboratively handling the maintenance of ODF/IS 26300. JTC1 requests SC34 and OASIS to develop a document specifying the detailed operation of joint maintenance procedures, with a common goal of preparation of technically-equivalent documents, and taking into account the requirements and constraints of both standards bodies. SC34 is requested to consider this document at its March 2009 plenary and report the results to JTC1 following this meeting." Commentary may be found in: (1) "ODF: OASIS and JTC 1 Get It Together", by Alex Brown; (2) "The Maintenance of ODF — An Aide-mémoire"; (3) "My Tinfoil Hat", by Tim Bray.

See also: the reference document


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
Microsoft Corporation http://www.microsoft.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com
Sun Microsystems, Inc. http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2008-11-17.html
Robin Cover, Editor: robin@oasis-open.org