XML Daily Newslink. Tuesday, 10 June 2008

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Primeton http://www.primeton.com



W3C Recommendation: XML Signature Syntax and Processing (Second Edition)
Donald Eastlake, Joseph Reagle (et al., eds), W3C Technical Report

W3C has announced the publication of "XML Signature Syntax and Processing (Second Edition)" as a final Recommendation. The document has been reviewed by W3C Members, by software developers, and by other W3C groups and interested parties; it is a stable document and may be used as reference material or cited from another document. This document specifies XML syntax and processing rules for creating and representing digital signatures. XML Signatures can be applied to any digital content (data object), including XML. An XML Signature may be applied to the content of one or more resources. Enveloped or enveloping signatures are over data within the same XML document as the signature; detached signatures are over data external to the signature element. More specifically, this specification defines an XML signature element type and an XML signature application; conformance requirements for each are specified by way of schema definitions and prose, respectively. This specification also includes other useful types that identify methods for referencing collections of resources, algorithms, and keying and management information. The original version of this specification was produced by the IETF/W3C XML Signature Working Group, which believes the specification is sufficient for the creation of independent interoperable implementations; the Interoperability Report shows at least ten (10) implementations with at least two interoperable implementations over every feature. This Second Edition was produced by the W3C XML Security Specifications Maintenance Working Group, part of the W3C Security Activity. This Second Edition of XML Signature Syntax and Processing adds Canonical XML 1.1 as a required canonicalization algorithm and recommends its use for inclusive canonicalization. This version of Canonical XML enables use of the 'xml:id' and 'xml:base' Recommendations with XML Signature and also enables other possible future attributes in the XML namespace. Additional minor changes, including the incorporation of known errata, are documented in "Changes in XML Signature Syntax and Processing (Second Edition)." The Working Group has conducted an interoperability test as part of its activity. The "Test Cases for C14N 1.1 and XMLDSig Interoperability" are available as a companion Working Group Note. The "Implementation Report for XML Signature, Second Edition" is also publicly available.
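
As an informal illustration only (not an excerpt from the Recommendation), a detached signature over an external resource, using the Canonical XML 1.1 algorithm that the Second Edition now requires, might be skeletonized as follows; the reference URI is invented and the digest and signature values are placeholders.

  <Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
    <SignedInfo>
      <!-- Canonical XML 1.1, added as a required algorithm by the Second Edition -->
      <CanonicalizationMethod Algorithm="http://www.w3.org/2006/12/xml-c14n11"/>
      <SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
      <!-- Detached: the Reference points to data outside the signature element -->
      <Reference URI="http://example.org/report.xml">
        <DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
        <DigestValue>...</DigestValue>
      </Reference>
    </SignedInfo>
    <SignatureValue>...</SignatureValue>
    <KeyInfo>
      <KeyName>Example signing key</KeyName>
    </KeyInfo>
  </Signature>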

See also: the W3C XML Security Working Group


Extensible Markup Language Evidence Record Syntax (ERS-XML)
A. Blazic, J. Blazic, and T. Gondrom; IETF Internet Draft

Members of the IETF Long-Term Archive and Notary Services (LTANS) Working Group have published an Internet Draft for "Extensible Markup Language Evidence Record Syntax." In many scenarios, users must be able to demonstrate the existence of data at a given point in time, as well as its integrity and validity (including for signed data), over a long or undetermined period of time. This document specifies XML syntax and processing rules for creating evidence for long-term non-repudiation of the existence of data. ERS-XML provides alternative syntax and processing rules to the ASN.1 ERS syntax, using XML. Evidence Record Syntax in XML format is based on the long-term archive service requirements defined in RFC 4810. The XMLERS syntax delivers the same level of non-repudiable proof of data existence as ASN.1 ERS. The XML syntax supports archive data grouping (and de-grouping) together with simple or complex time-stamp renewal processes. Evidence records can be embedded in the data itself or stored separately as a standalone XML file. The LTANS Working Group, part of the IETF Security Area, was chartered "to define requirements, data structures and protocols for the secure usage of the necessary archive and notary services. In many scenarios, users need to be able to ensure and prove the existence and validity of data, especially digitally signed data, in a common and reproducible way over a long and possibly undetermined period of time. Cryptographic means are useful, but they do not provide the whole solution. For example, digital signatures (generated with a particular key size) might become weak over time due to improved computational capabilities, new cryptanalytic attacks might "break" a digital signature algorithm, public key certificates might be revoked or expire, and so on. Complementary methods covering potential weaknesses are necessary. Long-term non-repudiation of digitally signed data is an important aspect of PKI-related standards. Standard mechanisms are needed to handle routine events, such as expiry of signer's public key certificate and expiry of trusted time stamp authority certificate."
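
Purely as a sketch of the general shape described above, and not a quotation from the draft (the element names here are an assumption modeled on the ASN.1 ERS structure of RFC 4998), an XML evidence record built from a chain of renewed archive time stamps might look roughly like this:

  <!-- Hypothetical sketch; the XMLERS draft defines the normative element names. -->
  <EvidenceRecord Version="1.0">
    <ArchiveTimeStampSequence>
      <ArchiveTimeStampChain>
        <!-- Initial archive time stamp over the (grouped) archived data objects -->
        <ArchiveTimeStamp>
          <TimeStamp>...</TimeStamp>
        </ArchiveTimeStamp>
        <!-- Renewal time stamp, added before the earlier one becomes cryptographically weak -->
        <ArchiveTimeStamp>
          <TimeStamp>...</TimeStamp>
        </ArchiveTimeStamp>
      </ArchiveTimeStampChain>
    </ArchiveTimeStampSequence>
  </EvidenceRecord>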

See also: the IETF Long-Term Archive and Notary Services (ltans) WG


Call for Participation: 2008 ACM Workshop on Secure Web Services (SWS)
Staff, ACM Announcement

Organizers of the 2008 ACM Workshop on Secure Web Services (SWS) have issued a call for participation in the workshop, to be held October 31, 2008 in Fairfax, VA, USA, in conjunction with the Fifteenth ACM Conference on Computer and Communications Security (CCS-15). The SWS workshop explores security challenges ranging from the advancement and best practices of building-block technologies such as XML and Web services security protocols to higher-level issues such as advanced metadata, general security policies, trust establishment, risk management, and service assurance. The workshop provides a forum for presenting research results, practical experiences, and innovative ideas in web services security. Topics of interest include, but are not limited to, the following: (1) Web services and GRID computing security; (2) Authentication and authorization; (3) Frameworks for managing, establishing and assessing inter-organizational trust relationships; (4) Web services exploitation of Trusted Computing; (5) Semantics-aware Web service security, Semantic Web security, and secure orchestration of Web services; (6) Privacy and digital identities support... Basic security protocols for Web Services, such as XML Security, the WS-* series of proposals, SAML, and XACML, are the basic set of building blocks enabling Web Services and the nodes of GRID architectures to interoperate securely. While these building blocks are now firmly in place, a number of challenges are still to be met for Web services and GRID nodes to be fully secured and trusted, providing for secure communications between cross-platform and cross-language Web services. Also, the current trend toward representing Web services orchestration and choreography via advanced business process metadata is fostering a further evolution of current security models and languages, whose key issues include setting and managing security policies, inter-organizational (trusted partner) security issues, and the implementation of high-level business policies in a Web services environment.


Proposed Cross-Enterprise Security and Privacy Authorization (XSPA) TC
Staff, OASIS Announcement

OASIS announced that certain of its members have published a draft Charter for a proposed "Cross-Enterprise Security and Privacy Authorization (XSPA) Technical Committee." Proposers include representatives from Cisco, Red Hat, Symlabs, and the U.S. Veterans Health Administration. Enterprises, including the healthcare enterprise, need a mechanism to exchange privacy policies, consent directives and authorizations in an interoperable manner. At this time, there is no standard that provides a cross-enterprise security and privacy profile. The proposed OASIS XSPA TC will address this gap. The need for an XSPA profile has been identified by the security and privacy working group of the Healthcare Information Technology Standards Panel (HITSP). HITSP is an ANSI-sponsored body charged with identifying standard building blocks that can be leveraged to implement common healthcare use cases. The XSPA profile will require the participation of subject matter experts in several areas, including WS-Federation, SAML, WS-Trust, and possibly others. OASIS has the unique combination of member expertise necessary to complete this work. The purpose of the TC is to specify sets of stable open standards and profiles, and to create other standards or profiles as needed, to fulfill the security and privacy functions associated with the functions and data practices identified by HITSP or specified in its use cases, as these are mandated or specified from time to time. These functions will at a minimum support the HITSP Access Control Transaction Package specification TP20, including those access control capabilities required to support the HITSP Manage Consent Directive Package specification TP30. This includes support for reliable and auditable methods to identify, select and confirm the personal identity, official authorization status, and role data for the subjects, senders, receivers and intermediaries of electronic data; data needed to convey and/or enforce permitted operations on resources and associated conditions and obligations; and reasonable measures to secure and maintain the privacy and integrity of that data from end to end... The profile specified by this TC will have broad applicability to health communities beyond the regulated portion of U.S. healthcare data transactions that the HITSP panel is directed to address. Use cases from other instances of cognate data exchanges, particularly in healthcare privacy contexts, may be solicited and used to improve the TC's work. However, the first priority of this committee will be to deliver and demonstrate sets of standards-based methods that fulfill the identified security and privacy functions needed by HITSP's specifications of functions and mandates.
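
To make the kind of exchange concrete: nothing below appears in the charter, but role and purpose-of-use data of the sort TP20 calls for is commonly conveyed as attributes in a SAML 2.0 assertion, along the lines of this hypothetical fragment (the issuer, subject, and attribute names are invented for illustration; an XSPA profile would be expected to standardize such vocabularies).

  <saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                  Version="2.0" ID="_a75adf55" IssueInstant="2008-06-10T12:00:00Z">
    <saml:Issuer>https://idp.hospital.example.org</saml:Issuer>
    <saml:Subject>
      <saml:NameID>dr.jane.smith@hospital.example.org</saml:NameID>
    </saml:Subject>
    <saml:AttributeStatement>
      <!-- Hypothetical attribute names, shown only to illustrate the exchange -->
      <saml:Attribute Name="urn:example:xspa:structural-role">
        <saml:AttributeValue>physician</saml:AttributeValue>
      </saml:Attribute>
      <saml:Attribute Name="urn:example:xspa:purpose-of-use">
        <saml:AttributeValue>treatment</saml:AttributeValue>
      </saml:Attribute>
    </saml:AttributeStatement>
  </saml:Assertion>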


SKOS Simple Knowledge Organization System Reference
Alistair Miles and Sean Bechhofer (eds), W3C Technical Report

Members of the W3C Semantic Web Deployment Working Group have issued an updated Working Draft for the "SKOS Simple Knowledge Organization System Reference," as part of the W3C Semantic Web Activity. The document defines the Simple Knowledge Organization System (SKOS), a common data model for sharing and linking knowledge organization systems via the Semantic Web. Many knowledge organization systems, such as thesauri, taxonomies, classification schemes and subject heading systems, share a similar structure, and are used in similar applications. SKOS captures much of this similarity and makes it explicit, to enable data and technology sharing across diverse applications. The SKOS data model provides a standard, low-cost migration path for porting existing knowledge organization systems to the Semantic Web. SKOS also provides a lightweight, intuitive language for developing and sharing new knowledge organization systems. It may be used on its own, or in combination with formal knowledge representation languages such as the Web Ontology Language (OWL). This document is the normative specification of the Simple Knowledge Organization System. It is intended for readers who are involved in the design and implementation of information systems, and who already have a good understanding of Semantic Web technology, especially RDF and OWL. Using SKOS, concepts can be identified using URIs, labeled with lexical strings in one or more natural languages, assigned notations (lexical codes), documented with various types of note, linked to other concepts and organized into informal hierarchies and association networks, aggregated into concept schemes, grouped into labeled and/or ordered collections, and mapped to concepts in other schemes.
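
For illustration (the vocabulary terms are SKOS, but the concept URIs, labels, and notation below are invented), a small fragment of a subject scheme expressed in RDF/XML might read:

  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
           xmlns:skos="http://www.w3.org/2004/02/skos/core#">
    <skos:ConceptScheme rdf:about="http://example.org/scheme/animals"/>
    <skos:Concept rdf:about="http://example.org/concept/birds">
      <!-- Lexical labels in one or more natural languages -->
      <skos:prefLabel xml:lang="en">birds</skos:prefLabel>
      <skos:altLabel xml:lang="en">avians</skos:altLabel>
      <!-- A notation (lexical code), an informal hierarchy link, and scheme membership -->
      <skos:notation>590.4</skos:notation>
      <skos:broader rdf:resource="http://example.org/concept/animals"/>
      <skos:inScheme rdf:resource="http://example.org/scheme/animals"/>
    </skos:Concept>
  </rdf:RDF>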

See also: W3C Semantic Web Activity


Apache Synapse Version 1.2: An Open Source Enterprise Service Bus (ESB)
Paul Fremantle and Apache Synapse Team, Blog

The Apache Synapse team is pleased to announce the release of version 1.2 of the Open Source Enterprise Service Bus (ESB). Apache Synapse is a lightweight and easy-to-use Open Source Enterprise Service Bus (ESB) available under the Apache Software License v2.0. Apache Synapse allows administrators to simply and easily configure message routing, intermediation, transformation, logging, task scheduling, and more. The runtime has been designed to be completely asynchronous, non-blocking and streaming. Apache Synapse offers connectivity and integration with a range of legacy systems, XML-based services and SOAP Web Services. It supports non-blocking HTTP and HTTPS using the Apache HTTPCore components, as well as supporting JMS (v1.0 and higher) and a range of file systems and FTP sources including SFTP, FTP, File, and ZIP/JAR/TAR/GZ via the Apache VFS project. The Synapse 1.2 release adds support for the Financial Information eXchange (FIX) protocol, an industry-driven messaging standard, through QuickFixJ, and for the Hessian binary web service protocol, as well as other functional, stability and performance improvements. Synapse supports transformation and routing between protocols without any coding via configurable virtual services. Synapse provides first-class support for standards such as WS-Addressing, Web Services Security (WSS), and Web Services Reliable Messaging (WSRM), plus throttling and caching, configurable via WS-Policy up to the message level, as well as efficient binary attachments (MTOM/XOP). The 1.2 release contains a set of enhancements based on feedback from the user community, including: (1) Support for the Hessian binary web service protocol; (2) FIX (Financial Information eXchange) protocol for messaging; (3) WS-Reliable Messaging support with WSO2 Mercury; (4) Support for re-usable database connection pools for DB report/lookup mediators; (5) Support for GZip encoding and HTTP 100 continue; (6) Natural support for dual-channel messaging with WS-Addressing; (7) Cluster-aware sticky load balancing support; (8) Non-blocking streaming of large messages at high concurrency with constant memory usage; (9) Support for an ELSE clause for the Filter mediator; (10) Ability to specify XPath expressions relative to the envelope or body; (11) Support for separate policies for incoming/outgoing messages; (12) Support for a mandatory sequence before mediation. The combination of XML streaming and asynchronous support for HTTP and HTTPS using Java NIO ensures that Synapse has very high scalability under load. Performance tests show that Synapse can scale to support thousands of concurrent connections with constant memory on standard server hardware. Apache Synapse ships with over 50 samples designed to demonstrate common integration patterns "out of the box", along with supporting sample services and service clients that demonstrate these scenarios. Apache Synapse is configured using a straightforward XML configuration syntax, as sketched below.
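
As a rough sketch of that configuration style (written from memory of the Synapse samples rather than copied from the distribution, so details should be checked against the 1.2 release and its bundled samples), a simple proxy service that logs an incoming message and forwards it to a back-end endpoint could look something like this:

  <definitions xmlns="http://ws.apache.org/ns/synapse">
    <proxy name="StockQuoteProxy">
      <target>
        <inSequence>
          <!-- Log the full incoming message, then hand it to the back-end endpoint -->
          <log level="full"/>
          <send>
            <endpoint>
              <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
            </endpoint>
          </send>
        </inSequence>
        <outSequence>
          <!-- Return the back-end response to the original caller -->
          <send/>
        </outSequence>
      </target>
    </proxy>
  </definitions>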

See also: the Apache Synapse Project web site


ODF Conformance Testing
Rick Jelliffe, O'Reilly Articles

There is a new avenue for participation in the ODF effort at OASIS: ODF Implementation, Interoperability, and Conformance, which I commend. Conventionally, people speak of syntactical conformance and semantic conformance, where the first is easy and the second is hard. In fact, because computers can only deal in symbols, the second is impossible. So the issue for automated conformance testing becomes "how can we reflect the semantic operations into syntactical artifacts: into symbols we can investigate?" So the semantic conformance problem then resolves into just another validation issue. And we have lots of nice schema languages, notably Schematron, which can help out there. To put it another way, it is an issue of data capture. For ODF, I would recommend they adopt a strategy of progressive but complete verification. For ODF import and export, this is easy: have a good RELAX NG schema (make it quite forgiving), use NVDL and DSRL if needed, then use Schematron phases to allow various levels of validity to be detected. The trouble with the monolithic valid/invalid distinction is that there may easily be invalidities in things you don't care about. An implementation of a word processor may have problems in its support for spreadsheets, but that should be a minor issue, not flagged as a showstopper. Schematron's phase mechanism groups patterns of assertions so that you can have a much more useful chunked view of the strengths and weaknesses of a system. But this leaves the issue of screen display. How can that be tested? Given my characterization of the issue as being one of data capture, the answer is that ODF needs to specify a page dump format, which can then be tested with automated tests. What would this format look like? Think PDF in XML: tiny-SVG may be good enough, anything where you can get the page position of each character (or string) and graphic on a page...
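
A minimal sketch of the phase idea follows; the patterns and assertions are invented for illustration and are not proposed ODF conformance rules. Schematron phases let a test harness activate only the patterns relevant to the level of support being checked, so a word-processor implementation can be validated against the 'text' phase without its spreadsheet weaknesses being reported as showstoppers.

  <schema xmlns="http://purl.oclc.org/dsdl/schematron">
    <ns prefix="office" uri="urn:oasis:names:tc:opendocument:xmlns:office:1.0"/>

    <!-- Each phase activates only the patterns relevant to one class of application -->
    <phase id="text">
      <active pattern="text-support"/>
    </phase>
    <phase id="spreadsheet">
      <active pattern="spreadsheet-support"/>
    </phase>

    <pattern id="text-support">
      <rule context="office:body">
        <assert test="office:text">The content body should contain an office:text element.</assert>
      </rule>
    </pattern>
    <pattern id="spreadsheet-support">
      <rule context="office:body">
        <assert test="office:spreadsheet">The content body should contain an office:spreadsheet element.</assert>
      </rule>
    </pattern>
  </schema>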


Software Architecture Patterns for a Context-Processing Middleware Framework
R. Rouvoy, D. Conan, L. Seinturier; IEEE Distributed Systems Online

COSMOS (Context Entities Composition and Sharing) is a component-based framework for managing context information in ubiquitous context-aware applications. It supports the design and development of such applications, which react to changes in their execution environment. Examples of such context changes are the appearance or disappearance of hardware or software resources, and modifications in user preferences. Because the context information that such applications require is very diverse, COSMOS relies on component-based software engineering principles to ensure the integration of context information. COSMOS describes context management policies as hierarchies of context nodes using a dedicated composition language. In this article the authors present the mapping of the composition language constructions to architectural design patterns used in COSMOS. They illustrate how to reuse well-known design patterns and apply them at the architectural level to offer better control over the COSMOS architecture. In particular, they use these design patterns to separate the various extra-functional concerns (memory footprint, resource consumption, instance management, and so on) involved in a context management policy from the business concerns of context management. A mobile computing scenario, in which a family visits a shopping mall, illustrates the benefits of COSMOS. The sample application lets the family share information, consult product prices, download discount tickets, receive advertisements, access information and comments about products, and find a product or a shop's location in the mall. The parents want their children to remain in the mall, with their devices on as much as possible, so that each family member knows the others' locations; but they can turn off their devices once in a while to save their batteries. While walking in the mall, the eldest girl sees an advertisement indicating that a dress store offers an RFID-tag-based service to help customers choose clothes. All these features are based on different network technologies, such as Bluetooth or Wi-Fi, and require the application to adapt itself depending on network connectivity and context information availability... Several existing projects use COSMOS. The French Cappucino project uses COSMOS and its reference implementation (based on the Fractal component model) for the design and development of context policies for ubiquitous computing scenarios. The European IST (Information Society Technologies) MUSIC project uses COSMOS to develop context operators for synthesizing social relationships among collocated mobile users. The Norwegian SWISNET project is investigating the COSMOS abstract model in combination with wireless sensor networks for reifying context information in next-generation health-care applications.
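
The COSMOS composition language itself is not reproduced in the summary above; purely as a generic picture of a hierarchy of context nodes (and not actual COSMOS syntax), a connectivity-related policy for the mall scenario could be sketched as:

  <!-- Generic illustration only; this is not the COSMOS composition language. -->
  <contextNode name="wifi-connectivity-decision">
    <contextNode name="wifi-link-quality">
      <contextNode name="wifi-bit-rate"/>
      <contextNode name="wifi-signal-strength"/>
    </contextNode>
    <contextNode name="user-preferences"/>
  </contextNode>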


Sponsors

XML Daily Newslink and Cover Pages are sponsored by:

BEA Systems, Inc.       http://www.bea.com
IBM Corporation         http://www.ibm.com
Primeton                http://www.primeton.com
Sun Microsystems, Inc.  http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2008-06-10.html
Robin Cover, Editor: robin@oasis-open.org