XML Daily Newslink. Friday, 01 May 2009

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Primeton http://www.primeton.com



Use Cases and Requirements for Media Fragments
Raphaël Troncy, Jack Jansen, Yves Lafon, Erik Mannens, Silvia Pfeiffer, Davy Van Deursen (eds), W3C Technical Report

W3C announced the publication of the First Public Working Draft of the Use Cases and Requirements for Media Fragments specification. The document was produced by the Media Fragments Working Group, which is part of the W3C Video on the Web Activity. The mission of the Media Fragments Working Group is to address temporal and spatial media fragments on the Web using Uniform Resource Identifiers (URIs). This specification describes use cases and requirements for the development of Media Fragments 1.0. It also specifies the syntax for constructing media fragment URIs and explains how to handle them when used over the HTTP protocol. Finally, it includes a technology survey for addressing fragments of multimedia documents (video, audio, images).

From the 'Introduction': Audio and video resources on the World Wide Web are currently treated as "foreign" objects, which can only be embedded using a plugin that is capable of decoding and interacting with the media resource. Specific media servers are generally required to provide for server-side features such as direct access to time offsets into a video without the need to retrieve the entire resource. Support for such media fragment access varies between different media formats and inhibits standard means of dealing with such content on the Web.

This specification provides a media-format independent, standard means of addressing media fragments on the Web using Uniform Resource Identifiers (URIs). In the context of this document, media fragments are regarded along three different dimensions: temporal, spatial, and tracks. Further, a fragment can be marked with a name and then addressed through a URI using that name. The specified addressing schemes apply mainly to audio and video resources, although the spatial fragment addressing may also be used on images. The aim of the specification is to enhance the Web infrastructure to support the addressing and retrieval of subparts of time-based Web resources, as well as the automated processing of such subparts for reuse. Example uses are the sharing of such fragment URIs with friends via email, the automated creation of such fragment URIs in a search engine interface, or the annotation of media fragments with RDF. This specification will help make video a first-class citizen of the World Wide Web. The media fragment URIs specified in this document have been implemented and demonstrated to work with media resources over the HTTP and RTP/RTSP protocols. Existing media formats, in their current representations and implementations, provide varying degrees of support for this specification. It is expected that over time, media formats, media players, Web browsers, media and Web servers, and Web proxies will be extended to adhere to the full requirements given in this specification.
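To make these dimensions concrete, the following URIs illustrate the kind of fragment addressing under discussion. The axis names (t, xywh, track, id) follow the Working Group's drafts, but the exact Media Fragments 1.0 syntax was still being defined when this Working Draft was published, so treat these as assumed examples rather than normative forms:

   http://www.example.org/video.ogv#t=60,100                (temporal: seconds 60 through 100)
   http://www.example.org/video.ogv#xywh=160,120,320,240    (spatial: a 320x240 pixel region at offset 160,120)
   http://www.example.org/video.ogv#track=audio             (track: select only the audio track)
   http://www.example.org/video.ogv#id=chapter-1            (named: a fragment previously marked with that name)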

See also: the W3C Media Fragments Working Group


Public Review of Service Component Architecture (SCA) Specifications
Staff, OASIS Announcement

Members of four OASIS Service Component Architecture (SCA) Technical Committees have released five specifications for review and feedback. The public review period ends June 23, 2009. (1) Service Component Architecture Assembly Model Specification Version 1.1, produced by the OASIS Service Component Architecture / Assembly (SCA-Assembly) TC, describes the SCA Assembly Model, which covers: a model for the assembly of services, both tightly coupled and loosely coupled, and a model for applying infrastructure capabilities to services and to service interactions, including Security and Transactions. Service Component Architecture (SCA) provides a programming model for building applications and solutions based on a Service Oriented Architecture. It is based on the idea that business function is provided as a series of services, which are assembled together to create solutions that serve a particular business need. These composite applications can contain both new services created specifically for the application and also business function from existing systems and applications, reused as part of the composition. SCA provides a model both for the composition of services and for the creation of service components, including the reuse of existing application function within SCA composites. SCA is a model that aims to encompass a wide range of technologies for service components and for the access methods which are used to connect them. For components, this includes not only different programming languages, but also frameworks and environments commonly used with those languages. For access methods, SCA compositions allow for the use of various communication and service access technologies that are in common use, including, for example, Web services, Messaging systems and Remote Procedure Call (RPC). The SCA Assembly Model consists of a series of artifacts which define the configuration of an SCA Domain in terms of composites which contain assemblies of service components and the connections and related artifacts which describe how they are linked together.

(2) Service Component Architecture WS-BPEL Client and Implementation Specification Version 1.1, produced by the OASIS SCA-BPEL TC, specifies how WS-BPEL 2.0 can be used with SCA. (3) Service Component Architecture Client and Implementation Model for C Specification Version 1.1, produced by the OASIS Service Component Architecture / C and C++ (SCA-C-C++) TC, describes the SCA Client and Implementation Model for the C programming language. A component implementation itself can also be a client to other services provided by other components or external services. The document describes how a component implemented in C gets access to services and calls their operations. (4) Service Component Architecture Client and Implementation Model for C++ Specification Version 1.1, produced by the OASIS Service Component Architecture / C and C++ (SCA-C-C++) TC, describes the SCA Client and Implementation Model for the C++ programming language. (5) SCA Policy Framework Version 1.1, produced by the OASIS SCA Policy TC, presents the SCA framework and its usage from a policy point of view. The capture and expression of non-functional requirements is an important aspect of service definition and has an impact on SCA throughout the lifecycle of components and compositions. SCA provides a framework to support specification of constraints, capabilities, and QoS expectations from component design through to concrete deployment. This document describes the SCA policy association framework that allows policies and policy subjects specified using the W3C "Web Services Policy (WS-Policy)" and "Web Services Policy Attachment (WS-PolicyAttachment)" documents, as well as with other policy languages, to be associated with SCA components.
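To make the assembly and policy concepts above concrete, the sketch below shows a minimal composite document. It is illustrative only: the namespace URI, class names, and the 'confidentiality' intent are assumptions based on the SCA 1.1 drafts, not text taken from the announcement. It assembles two components, exposes one as a Web service, wires a reference between them, and attaches an abstract policy intent through the requires attribute.

   <composite xmlns="http://docs.oasis-open.org/ns/opencsa/sca/200912"
              name="OrderProcessing">
     <!-- a service component implemented in Java -->
     <component name="OrderService">
       <implementation.java class="example.OrderServiceImpl"/>
       <!-- exposed over a Web service binding; the intent asks for message confidentiality -->
       <service name="OrderService" requires="confidentiality">
         <binding.ws/>
       </service>
       <!-- wired to another component in the same composite -->
       <reference name="payment" target="PaymentService"/>
     </component>
     <component name="PaymentService">
       <implementation.java class="example.PaymentServiceImpl"/>
     </component>
   </composite>

At deployment time, the Policy Framework maps abstract intents such as 'confidentiality' onto concrete policySets, which may carry WS-Policy assertions or assertions written in other policy languages.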

See also: the OASIS Open Composite Services Architecture (CSA) Member Section


SOA Simplified: Service Virtualization With The Managed Services Engine
Aaron Skonnard, MSDN Magazine

While SOA can definitely provide benefits to the business, there's a common misperception that SOA makes things easier. This is simply not true. SOA embraces the fact that building distributed systems is inherently complex, and it outlines architectural principles that can help you manage some of that complexity. Large SOA initiatives are still inherently complex, but with a new set of challenges and issues. One of the greatest challenges in succeeding with SOA is establishing and maintaining a service governance solution that addresses the complexities inherent in a growing service ecosystem, thereby allowing the business to realize more benefits with less pain... Today, most SOA initiatives cannot answer fundamental questions because they lack a sound strategy for service governance within their SOA. Lacking a focus on service governance can prevent companies from realizing a net gain through their SOA initiatives. Without a governance solution, most SOA initiatives devolve over time into an unwieldy spaghetti mess of services without any sense of management, visibility, or versioning control, and the problem will only get worse as companies move further toward cloud computing...

One way to deal with all this complexity is through a governance solution that provides service virtualization. Service virtualization is an emerging trend in the SOA landscape that focuses on providing a common infrastructure for building and managing a complex service ecosystem while addressing the challenges of SOA initiatives. Microsoft Services has been providing leadership in this area through its Microsoft Services SOA Infrastructure offerings and a technical solution referred to as the Managed Services Engine (MSE). First I'll describe what they provide at a very high level before diving into the technical details of how it works...

[CodePlex] Overview: The Managed Services Engine (MSE) is one approach to facilitating Enterprise SOA through service virtualization. Built upon the Windows Communication Foundation (WCF) and the Microsoft Server Platform, the MSE was developed by Microsoft Services as we helped customers address the challenges of SOA in the enterprise. The MSE fully enables service virtualization through a Service Repository, which helps organizations deploy services faster, coordinate change management, and maximize the reuse of various service elements. In doing so, the MSE provides the ability to support versioning, abstraction, management, routing, and runtime policy enforcement for Services. The February 2009 CTP Release is the third release of the MSE in what will continue to be an evolving solution on CodePlex. The intent of this version is to solicit feedback on the architecture, the components, their application, and the documentation.

The MSE comes with a service runtime engine, a service catalog (repository), and a management tool for implementing a real-world service management solution based on service virtualization. It's built on WCF from the ground up, using common techniques and taking advantage of its various extensibility points when necessary. The MSE runtime engine is implemented as a Windows service that manages a set of WCF service host instances that are automatically configured from the information found in the service catalog at run time. The MSE runtime consists of three logical components internally: the messenger, the broker, and the dispatcher. The messenger is primarily responsible for message normalization. The broker receives the normalized message and is primarily responsible for operation rationalization (choosing a specific version of a specific operation). The dispatcher is primarily responsible for invoking the target implementation... The MSE also makes it possible to virtualize RESTful services. However, since RESTful services don't typically come with a WSDL definition, you'll have to use a different wizard for defining and importing metadata that describes the RESTful service in terms of operations. The RESTful Service Virtualization Wizard starts by asking you to specify the name of the resource exposed by the RESTful service along with the base URI for the resource. Then it asks you to specify the HTTP verbs (GET, POST, PUT, or DELETE) and content type (for example, 'text/xml', 'application/json', or 'application/atom+xml') supported by the resource...
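As a purely conceptual sketch of that three-stage pipeline (this is not MSE code; every class and method name below is hypothetical and only mirrors the responsibilities described above):

   # Conceptual sketch only -- hypothetical names, not the MSE API.
   class Messenger:
       """Normalizes an incoming message from any endpoint into a common shape."""
       def normalize(self, raw):
           return {"operation": raw["action"], "body": raw["payload"]}

   class Broker:
       """Rationalizes the operation: picks a specific version of a specific operation."""
       def __init__(self, catalog):
           self.catalog = catalog  # operation name -> target endpoint, as a service catalog would hold
       def resolve(self, message):
           return self.catalog[message["operation"]]

   class Dispatcher:
       """Invokes the target implementation with the normalized message."""
       def invoke(self, endpoint, message):
           print(f"dispatching {message['operation']} to {endpoint}")

   # Wiring the stages together for a single request:
   catalog = {"GetOrder": "http://internal.example/orders/v2"}
   msg = Messenger().normalize({"action": "GetOrder", "payload": "<orderId>42</orderId>"})
   Dispatcher().invoke(Broker(catalog).resolve(msg), msg)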

See also: the Managed Services Engine (MSE) open source at CodePlex


An Internet Attribute Certificate Profile for Authorization
Sean Turner, Russ Housley, Stephen Farrell (eds), IETF Internet Draft

Members of the IETF Public-Key Infrastructure (X.509) (PKIX) Working Group have released an updated Internet Draft for An Internet Attribute Certificate Profile for Authorization, intended to obsolete RFC 3281, published in April 2002. Appendix D of the I-D presents "Changes Since RFC 3281." The specification defines a profile for the use of X.509 Attribute Certificates in Internet Protocols. Attribute certificates may be used in a wide range of applications and environments covering a broad spectrum of interoperability goals and a broader spectrum of operational and assurance requirements. The goal of this document is to establish a common baseline for generic applications requiring broad interoperability as well as limited special purpose requirements. The profile places emphasis on attribute certificate support for Internet electronic mail, IPsec, and WWW security applications. X.509 public key certificates (PKCs) bind an identity and a public key. An attribute certificate (AC) is a structure similar to a PKC; the main difference being that the AC contains no public key. An AC may contain attributes that specify group membership, role, security clearance, or other authorization information associated with the AC holder. The syntax for the AC is defined in Recommendation X.509, making the term "X.509 certificate" ambiguous...

Some people constantly confuse PKCs and ACs. An analogy may make the distinction clear. A PKC can be considered to be like a passport: it identifies the holder, tends to last for a long time, and should not be trivial to obtain. An AC is more like an entry visa: it is typically issued by a different authority and does not last for as long a time. As acquiring an entry visa typically requires presenting a passport, getting a visa can be a simpler process. Authorization information may be placed in a PKC extension or placed in a separate attribute certificate (AC). The placement of authorization information in PKCs is usually undesirable for two reasons. First, authorization information often does not have the same lifetime as the binding of the identity and the public key. When authorization information is placed in a PKC extension, the general result is the shortening of the PKC useful lifetime. Second, the PKC issuer is not usually authoritative for the authorization information. This results in additional steps for the PKC issuer to obtain authorization information from the authoritative source. For these reasons, it is often better to separate authorization information from the PKC. Yet, authorization information also needs to be bound to an identity. An AC provides this binding; it is simply a digitally signed (or certified) identity and set of attributes. An AC may be used with various security services, including access control, data origin authentication, and non-repudiation...
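The overall shape of an AC makes that "digitally signed identity and set of attributes" description concrete. The ASN.1 outline below is paraphrased and abridged from the RFC 3281 lineage for illustration; the Internet Draft itself is authoritative:

   AttributeCertificate ::= SEQUENCE {
     acinfo               AttributeCertificateInfo,
     signatureAlgorithm   AlgorithmIdentifier,
     signatureValue       BIT STRING }

   AttributeCertificateInfo ::= SEQUENCE {
     version                 AttCertVersion,         -- v2
     holder                  Holder,                 -- the identity the attributes are bound to
     issuer                  AttCertIssuer,          -- the Attribute Authority that signs the AC
     signature               AlgorithmIdentifier,
     serialNumber            CertificateSerialNumber,
     attrCertValidityPeriod  AttCertValidityPeriod,  -- typically much shorter than a PKC's lifetime
     attributes              SEQUENCE OF Attribute,  -- group membership, role, clearance, ...
     issuerUniqueID          UniqueIdentifier OPTIONAL,
     extensions              Extensions OPTIONAL }

Note the absence of any public key field: unlike a PKC, the AC binds attributes, not a key, to the holder.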

See also: the IETF Public-Key Infrastructure (X.509) (PKIX) Working Group


Live from the CMIS Plugfest in Basel: Day 2
Michael Marth, Blog

"Day 2 of the CMIS Plugfest [in Basel, Switzerland] just ended. We tried to connect as many client implementations with as many server implementations as we can. The results can be seen in matrix [Testing Results and Servers published online]: "C" means being able to connect, "R" able to read, "W" able to write, and "W+S" write and search... All in all we have tested thirty-one (31) client/server combinations, most ATOM-based and four (4) with SOAP. All tests were based on the OASIS CMIS specification version 0.6.1. I am quite happy with these results, especially because many servers and clients were updated to the latest spec version (or even implemented from scratch!) during the plugfest. Cédric Huesler has compiled a collection of screenshots of CMIS clients in action: (showing) the SAP "ECM Explorer", SourceSense "Portlet", Jahia "Raw CMIS Client", OpenText "Explorer Extension", OpenText "Office Extension", Alfresco "JUnit testsuite", Shane's "Flex CMIS Explorer." Also, today the ApacheAlso, today the Apache Chemistry project (Apache's CMIS implementation) has been accepted in the ASF's Incubator. Congratulations!

Note: on April 5, 2009 David Nuescheler of Day Software issued an invitation to all CMIS TC members and other interested parties to participate in a CMIS Open PlugFest at the Day offices in Basel, Switzerland. The two-day event was held on April 29-30, 2009. Confirmations for attendance were received from representatives of Alfresco, Day, Hippo, IBM, Jahia, Magnolia, Nuxeo, OpenText, SAP, Saperion, and SourceSense. In a posting of 2009-05-03, David Nuescheler extends thanks to the Plugfest participants and provides a brief report on the Basel PlugFest results: "Dear [CMIS] TC members and Jackrabbit-devs, I would like to thank everybody who attended the CMIS PlugFest in Basel. I think it was very successful and we uncovered a lot of issues while having a lot of fun achieving 31 (!) client/server connections. See the matrix. I think we should be able to use the above matrix to track ongoing CMIS interoperability testing. I am sure this can be an evolving base for everybody to contribute their test results to. Also, find write-ups about the PlugFest here... I also reported the issues that were logged throughout the PlugFest as issues 161 through 170 in the CMIS Jira issue tracking system..."

See also: Michael Marth's blog for CMIS PlugFest Day 1


Sun Updates Solaris 10 Performance, Security
Sean Michael Kerner, InternetNews.com

Every six months, Sun updates its Solaris 10 operating system to include bug fixes as well as feature updates. The company has officially released the Solaris 10 5/09 release, providing new Intel hardware support, IPsec security features, and network performance improvements. The update also comes as Sun is working on the next version of Solaris in the OpenSolaris community... The updates also enable Sun to add new features that enhance Solaris 10. With its last Solaris update, version 10/08 in November, Sun introduced some support for Intel's Nehalem chip architecture. With the new release, Sun has expanded that support to include more power management capabilities... Intel is taking a greater role in Solaris' future: Larry Wake, group manager for Solaris software, noted that the chipmaking giant is now the number 2 contributor to Sun's OpenSolaris effort, the open source community project through which the next generation of Solaris is developed. In particular, Sun and Intel have worked together on what's known as the Power Aware Dispatcher, which provides datacenter server power management capabilities for CPU cores. For example, Wake noted that if a datacenter has a big machine with 24 cores in the box, but its current workload is only using a third of them, the administrator can power down 16 of those cores by consolidating the workloads onto the remaining 8 cores. Additionally, work has been done to support new Intel 10GbE network interface cards (NICs) and a technology called large-segment offload, which enables the NIC to process network traffic directly, without having to route it into the computer itself. The new Solaris update also includes IPsec enhancements that enable secure clustering of traffic over the public Internet. IPsec is often thought of as a VPN technology for remote users, but it is also used for securing site-to-site tunnels. Wake noted that the goal is to make IPsec, however it's used, easier to implement and more integrated into Solaris... Sun is also improving its ZFS file system capabilities with a new cloning feature, designed to make the 128-bit filesystem, credited with providing advanced data scalability and recovery options, even faster at data cloning than before.
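As a brief sketch of what ZFS cloning looks like from the command line (the pool and dataset names are hypothetical, and these are the standard snapshot and clone commands rather than anything specific to the 5/09 release), a clone is a writable filesystem created instantly from a read-only snapshot and shares its blocks until they diverge:

   # take a read-only, point-in-time snapshot of an existing dataset
   zfs snapshot tank/projects@baseline
   # create a writable clone from that snapshot; no data is copied up front
   zfs clone tank/projects@baseline tank/projects-dev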

See also: the ZFS file system


Selected from the Cover Pages, by Robin Cover

Apache Software Foundation Launches Chemistry Incubation Effort for CMIS

On April 30, 2009, the Apache Software Foundation (ASF) announced the creation of a new Incubator project to support the OASIS Content Management Interoperability Services (CMIS) specification. As proposed, the Apache Chemistry Incubation development effort will implement the latest draft of the CMIS specification and provide input to the TC on the implementation details of the specification. It is also anticipated that the group will produce a CMIS Reference Implementation (RI) and a CMIS Technology Compatibility Kit (TCK). Three Apache Chemistry mailing lists have been set up, along with a Jira issues tracking system and SVN source code repository. According to the Apache Chemistry Proposal summary, Chemistry "is an effort to provide a Java (and possibly others, like JavaScript) implementation of an upcoming CMIS specification, consisting of a high-level API for developers wanting to manipulate documents, a low-level SPI close to the CMIS protocol for developers wanting to implement a client or a server, and default implementations for all of the above. Chemistry aims to cover both the AtomPub and SOAP bindings defined by the CMIS specifications... The background to Apache Chemistry can be found (in source code) within the Chemistry codebase developed at Nuxeo and the JCR-CMIS sandbox components developed at Apache Jackrabbit. All contributions to the JCR-CMIS components in Jackrabbit have been made by people with Contributor License Agreements (CLAs) on file; this agreement is required before an individual is given commit rights to an ASF project. Rationale for the Apache Chemistry Incubation is provided in the proposal as follows (excerpt): "For the [CMIS] standard to succeed, ensuring interoperability is paramount: in order to manage an ever-growing context and leverage the enormous portability and interoperability issues that a globally adopted Standard brings, it is necessary to think about how to make the related ecosystem healthy and sustainable... Successful modern standards are driven by clear documentation, a clearly defined compatibility process, accurate compliance criteria, and a reference implementation to clear up potential doubts and ensure that the standard can actually be implemented in real life scenarios... Having a healthy ecosystem will ensure a smoother implementation process, more compliant products and, ultimately, a wider adoption of the standard."

See also: CMIS specification references


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation          http://www.ibm.com
Microsoft Corporation    http://www.microsoft.com
Oracle Corporation       http://www.oracle.com
Primeton                 http://www.primeton.com
Sun Microsystems, Inc.   http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/




Document URI: http://xml.coverpages.org/newsletter/news2009-05-01.html  —  Legal stuff
Robin Cover, Editor: robin@oasis-open.org