XML Daily Newslink. Monday, 12 January 2009

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Primeton http://www.primeton.com



Packaging Formats of Famous application/*+zip
Rick Jelliffe, XML.com

This article presents a table showing some of the characteristics of the various packaging formats used by modern XML-in-ZIP applications. Packaging relates to the particular details of how files or resources are arranged in the ZIP archive. The granddaddy of these packaging systems is JAR, which came from the Java world: it takes a ZIP file and adds a META-INF directory. In JAR, this META-INF directory contains various files, notably the manifest MANIFEST.MF, which records various kinds of useful data about the files in the archive. There are at least four main streams of packaging systems: (1) IMS, a long-standing education and training stream, used by SCORM (learning objects) and perhaps S1000D (aerospace technical documents). (2) OPC, used by Microsoft for OOXML and XPS. (3) ODF, which was missing key pieces in ODF 1.0 and 1.1, but which looks likely to gain them in ODF 1.2. (4) A stream I think of as "fake ODF", which includes the EBooks OCF and, more recently, Adobe's UCF, and which seems to have adopted a container mechanism from a draft of ODF that was never finally adopted. This is an odd situation: these specifications claim to use a namespace URI 'urn:oasis:names:tc:opendocument:xmlns:container' that OASIS does not seem to actually specify. It needs to be cleared up: you don't adopt draft namespaces into something and then claim they are somehow standard. There seems to be growing convergence. Adobe is pushing for an OASIS group for packaging, which would presumably merge UCF and ODF 1.2 packaging. In ODF 1.2 the packaging forms a separate part (i.e., document), so things look well set up for this; it would be a good idea. Other areas of increasing agreement are on supporting only deflate compression, on using Dublin Core metadata, on using W3C Digital Signatures, on using W3C Encryption, and on using RDF for other metadata (or, at least, on providing a clear transformation from the specific markup to RDF). Some of the areas of disagreement relate to the different use cases for the packaging: in particular, whether the package is supposed to hold a single document or publication, or whether it may hold multiple documents (or multiple publications, information packages, or websites). The major differentiator between the different packaging mechanisms is whether they provide a system of indirection for identifying parts by short names. The most immediate aspect of this is whether the root file (i.e., the file where the main data of the document is kept) is hardcoded. OOXML has an advanced system, the relationships files, which provide a mapping from an identifier to a filename or external URI, rather like an SGML/XML entity set... My expectation would be a base level of convergence where everyone agrees on ZIP (deflate), self-identification of document type, multiple-document support, /mimetype, W3C DSIG, Dublin Core metadata, and IS 29500 OPC's URL scheme for identifying parts, but then an advanced layer with more platform-dependent features for things like references, relationships, RDF, and rights, where one vendor's meat may be another's poison: encryption and DRM may certainly be contentious...
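
To make the /mimetype convention above concrete, here is a minimal Python sketch of how a consumer might sniff an XML-in-ZIP package's declared type. It assumes the ODF/OCF convention that the archive's first entry is an uncompressed file named 'mimetype' whose body is the package's media type; the file name 'document.odt' in the usage comment is only a placeholder.

import zipfile

def sniff_package_type(path):
    """Return the media type declared by an ODF/OCF-style package.

    Assumes the convention that the first archive entry is a STORED
    (uncompressed) file named 'mimetype' containing the media type,
    e.g. 'application/vnd.oasis.opendocument.text' for an ODF text
    document.
    """
    with zipfile.ZipFile(path) as zf:
        infos = zf.infolist()
        if not infos or infos[0].filename != "mimetype":
            raise ValueError("no leading 'mimetype' entry")
        if infos[0].compress_type != zipfile.ZIP_STORED:
            raise ValueError("'mimetype' entry must be uncompressed")
        return zf.read("mimetype").decode("ascii").strip()

# Hypothetical usage; 'document.odt' is a placeholder path.
# print(sniff_package_type("document.odt"))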


Novell Broadens Access Management Capabilities
Sean Michael Kerner, InternetNews.com

Novell is expanding its access management solution today with the addition of new federation options, new client support, and new functionality that monitors clients to ensure compliance with security policy, similar to Network Access Control (NAC). Novell's new Access Manager 3.1 release comes as the market for access control solutions continues to heat up, with IBM, CA, and Oracle ramping up their own offerings. The new release also borrows from Novell's partnership with Microsoft, which plays a key role in the interoperability of the two companies' wares. A chief addition to the product is improved support for federation, a mechanism by which users can be authenticated across different security domains, through its support of WS-Federation, a specification developed by many of the major players in enterprise identity federation, including Novell. Getting WS-Federation into Access Manager adds compatibility with key business applications, in particular Microsoft's SharePoint collaboration suite. It is an enhancement to the frameworks Access Manager already supports: the Novell Access Manager 3 solution, which debuted in October 2006, included support for SAML 2.0 (Security Assertion Markup Language) as well as the Liberty Alliance Web Service Framework... As part of the technical collaboration, Novell Access Manager integrates with Windows CardSpace, a technology included in Microsoft's Windows Vista operating system that securely stores and transmits personal identities. The pair's joint work also comes into play with the open source Bandit identity management framework, which aims to create an identity fabric for the Web, unifying disparate silos of identity management. From the announcement: "Novell Access Manager provides a secure and simple way to federate identities from any LDAP directory into a Microsoft infrastructure, eliminating the need to purchase a separate identity federation product. This cost- and time-saving feature is important for organizations that need an access management solution that supports a broad range of platforms and directory services in complex multi-vendor computing environments. Novell Access Manager 3.1 simplifies and safeguards online asset-sharing, helping customers control access to Web-based and traditional business applications. Trusted users gain secure authentication and access to portals, Web-based content and enterprise applications, while IT administrators gain centralized policy-based management of authentication and access privileges..."

See also: the Novell announcement


Building Trust into Demanding Data Center Environments
Thomas Coughlin, Computer Technology Review

Earlier articles have exposed the vulnerability of software-based encryption to recovery of the encryption keys from host DRAM. These vulnerabilities point out the value of hardware-based encryption to computer users and managers of data centers. The Trusted Computing Group (TCG) has created standards for hardware-based encryption on the individual storage devices used in data centers, such as hard disk drives, tape, and even optical disks. The group has also created key management technologies that can be used to manage this protection in the data center and throughout an enterprise. Disk drives with built-in encryption provide data security with no use of host system resources and independently of current or future applications. Since the encryption key never leaves the disk drive, the drive provides a Trusted Platform (TP). With TCG-based drives, encryption security is based upon authentication of the user: after each power-up, during pre-boot, the user must enter a password in order to gain access to the disk drive... The Trusted Computing Group has a committee working on key management standards called the Key Management Services Subgroup (KMSS). This group is defining best practices for key and/or access control management, providing a uniform way to manage keys for a variety of storage devices and easing the development of products by producing Key Management Application Notes. For some storage architectures there may be multiple disk drive authentication credentials that can be managed by a centralized application. All of these credentials can be controlled and coordinated by a key server. The key server manages the credentials needed to decrypt encrypted data in a networked storage infrastructure; it creates credentials, backs them up, and archives them... Besides the TCG standards, there are other standards that deal with encryption on computers and in enterprise environments. The IEEE P1619 specifications deal with encryption modes for storage devices as well as key management. There is commonality between the engineering experts leading the TCG key management standards and the IEEE P1619.3 key management standards, which will do much to make sure that these standards are consistent and support each other. Note that NIST and OASIS have also created recommendations for key management, which have been studied by both the IEEE and TCG groups. Coordination between the various standards efforts is provided by people interfacing with, and holding membership in, multiple standards groups... Channel encryption is handled by the various interface standards groups such as ANSI T10 (SCSI), T11 (Fibre Channel), T13 (ATA), and the IETF (Internet). SNIA and DMTF provide management services that are used by key management standards. Besides TCG KMSS and IEEE P1619.3 key management, there are also key management elements in the OASIS specifications that flow into applications...
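
To make the key-server idea above concrete, here is a minimal, purely illustrative Python sketch of centralized key wrapping: a key-encryption key held by the key server wraps the per-drive data-encryption keys it escrows. It uses the third-party 'cryptography' package's Fernet recipe; the class and device names are ours, and this is not based on any TCG KMSS or IEEE P1619.3 interface.

from cryptography.fernet import Fernet  # third-party 'cryptography' package

class ToyKeyServer:
    """Illustrative only: wraps per-device data-encryption keys (DEKs)
    under a single key-encryption key (KEK), as a centralized key
    server might. Not a TCG KMSS or IEEE P1619.3 implementation."""

    def __init__(self):
        self._kek = Fernet(Fernet.generate_key())  # key-encryption key
        self._escrow = {}                          # device_id -> wrapped DEK

    def enroll(self, device_id):
        dek = Fernet.generate_key()                # per-device DEK
        self._escrow[device_id] = self._kek.encrypt(dek)  # wrap and archive
        return dek

    def recover(self, device_id):
        return self._kek.decrypt(self._escrow[device_id])  # unwrap the DEK

# Hypothetical usage: enroll a drive, then recover its key later.
server = ToyKeyServer()
dek = server.enroll("drive-0042")
assert server.recover("drive-0042") == dek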


SOA Still Alive and Well: Sell it to the Business
David A. Chappell, O'Reilly Technical

"In case you need to catch up, Anne Thomas Manes of Burton Group declared that 'SOA met its demise on January 1, 2009, when it was wiped out by the catastrophic impact of the economic recession'.' I am happy to see that there's renewed energy to try and find something new among some of my industry colleagues whom I respect. I'm not against finding a new name for this thing that we have been until-recently-referring-to-as-SOA but I still am looking for a reason why. Dave Linthicum claimed he predicted it would become less important, asked 'Could the death of SOA bring it back to life?' , and purported that Anne Manes had simply signed and dated the certificate of death for it. Miko Matsumura and I had some fun with the whole thing, and the Yahoo!Groups service-orientated-architecture forum had a field day with it... Joe McKendrick brought it up again in his 09 predictions saying that SOA will be de-emphasized by cloud computing. Nick Gall of Gartner Group used this as a way of promoting his pet peeve WOA with Long Live the Web, and as a bit of tongue-in-cheek retort, Steve Jones of CapGemini quickly proclaimed REST is dead long live the Web I have been thinking hard over the past year or more to come up with new models for how service-orientation, grid computing, cloud computing, and SAAS all come together in a coherent architecture, but I have always thought we would just call it SOA. So here's my problem with all this noise. [...] I'm still looking for where's the death!?? I'm not against joining the new bandwagon, but I'm still looking for a good reason to declare an acronym dead while still declaring everything that it stands for to be critical for future success. Not a one has bothered to substantiate any of the claims that are made about SOA being unsuccessful. The consensus is that all the elements of SOA such as service-orientation, governance, alignment with the business, etc are still critical to live on, and be joined with things like cloud computing and SAAS yet the term itself needs to die because IT has to stop selling that term to the business... I gathered up some success stories that show tangible ROI from recent SOA projects across the industry, which include some Oracle customers. If IT needs some ammunition to help sell SOA to the business, there's some that I documented..."

See also: eWEEK


AbiWord Gets Funding for ODF Development
DJWM, Heise Online

"AbiSource Corporation is to receive funding to improve the ODF compatibility of AbiWord, the free software word processor. AbiSource Corporation, a company created by some of the AbiWord developers a few months ago, was approached by the Dutch non-profit organisation NLnet which was interested in seeing AbiWord gain better compatibility with the OpenDocument format. This would in turn would boost AbiWord's compatibility with OpenOffice.org... AbiWord has carved a niche on Linux systems, as a lighter option to using OpenOffice's more heavyweight Write, and is available on multiple platforms. ODF and Microsoft's Office Open XML (OOXML) are already supported by AbiWord, but in the realm of document interoperability there is always room for improvement. NLNet Foundation, was created in its current form in 1997 from the proceeds of selling the NLNet internet service provider business. It currently sponsors a number of open source and free software projects and is looking for new projects to sponsor..."

See also: the AbiWord web site


HTTP-based Resource Descriptor Discovery
Eran Hammer-Lahav (ed), IETF Internet Draft

An initial -00 IETF Internet Draft has been published for an "HTTP-based Resource Descriptor Discovery" specification documenting a new discovery workflow that is proposed to replace Yadis. The memo describes an HTTP-based process for obtaining information about a resource identified by a URI. The 'information about a resource' (a resource descriptor) typically provides machine-readable information that aims to assist and enhance the interaction with the resource. The memo only defines the process for locating and obtaining the descriptor, but leaves the descriptor format and its interpretation out of scope. Background: "With the development of interoperability specifications comes the need to enable compliant services and resources to declare their conformance to these specifications. There is a growing need to describe resources in a way that does not depend on their internal structure, or even on the availability of an HTTP-accessible representation of these resources. For example, while an end-user is reading a web page such as a blog article, the user-agent can discover whether the content of this page was generated from an Atom feed or Atom entry and whether that feed supports Atom authoring. It can discover whether there is an iCalendar-formatted or CalDAV calendar associated with the page, or where other content by the same page author might be found. In an example related to the identity space, an end-user can use a URI as an identifier for signing into web services, and in turn the web service can discover more information about the user's resources and preferences: to whom the user delegated their identity management, where they keep their address book or list of social network friends, where their profile information is stored (to reduce signup registration requirements), and what other services they use which may enhance their interaction with the web service." Resource Discovery and Service Discovery: "Resource discovery provides a process for obtaining information about a resource identified with a URI. It allows resource providers to describe their resources in a machine-readable format, enabling automatic interoperability by user-agents and resource-consuming applications. Discovery enables applications to utilize a wide range of web services and resources across multiple providers without the need to know about their capabilities in advance, reducing the need for manual configuration and resource-specific software. When discussing discovery, it is important to differentiate between resource discovery and service discovery. Both types attempt to associate capabilities with resources, but they approach it from opposite ends. Service discovery centers around identifying the location of qualified resources, typically finding an endpoint capable of certain protocols and capabilities. In contrast, resource discovery begins with a resource, trying to find which capabilities it supports. A simple way to distinguish between the two types of discovery is to define the questions they are each trying to answer. Resource discovery: given a resource, what are its attributes (capabilities, characteristics, and relationships to other resources)? Service discovery: given a set of attributes, which available resources match the desired set, and what is their location?" [Update: Version -01.]
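
As a rough illustration of the resource-discovery step the memo defines, here is a minimal Python sketch that fetches a URI's headers and looks for a descriptor link. It assumes the draft's use of the HTTP 'Link' header with a 'describedby' relation to point at the resource descriptor; the example URI is a placeholder, and a real implementation would also handle link elements in the body and other location mechanisms.

import re
import urllib.request

def find_descriptor(uri):
    """Return the descriptor URI advertised for 'uri', if any.

    Sketch of the discovery workflow: issue a HEAD request and scan
    the HTTP Link header for a rel="describedby" target (assumed here
    to be the draft's header-based mechanism). Returns None if the
    resource advertises no descriptor this way.
    """
    req = urllib.request.Request(uri, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        link = resp.headers.get("Link", "")
    # Match e.g.: Link: <http://example.com/desc>; rel="describedby"
    m = re.search(r'<([^>]+)>\s*;[^,]*rel="?describedby"?', link)
    return m.group(1) if m else None

# Hypothetical usage; the URI is a placeholder.
# print(find_descriptor("http://example.com/blog/article"))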

See also: the W3C TAG comment


An Architectural Framework for Media Server Control
Tim Melanchuk (ed), IETF Internet Draft

The Internet Engineering Steering Group (IESG) announced receipt of a request from the IETF Media Server Control WG (MEDIACTRL) to consider "An Architectural Framework for Media Server Control" for approval as an Informational RFC. The IESG plans to make a decision in the next few weeks, and solicits final comments on this action by email through 2009-01-26. This IETF Working Group was chartered to examine protocol extensions between media servers and their clients. Media services offered by the media server are addressed using SIP mechanisms, such as those described in RFC 4240. Media servers commonly have a built-in VoiceXML interpreter. VoiceXML describes the elements of the user interaction, and is a proven model for separating application logic (which runs on the clients of the media server) from the user interface (which the media server renders). Many media servers support IVR dialog services using VoiceXML; in this case the MS interacts with other servers using HTTP during standard VoiceXML processing. VoiceXML media servers may also interact with speech engines, for example using MRCPv2, for speech recognition and generation purposes... The Session Initiation Protocol (SIP) (RFC 3261) was developed by the IETF for the purposes of initiating, managing, and terminating multimedia sessions. The popularity of SIP has grown dramatically since its inception, and it is now the primary Voice over IP (VoIP) protocol. This includes being selected as the basis for architectures such as the IP Multimedia Subsystem (IMS) in 3GPP and inclusion in many of the early live deployments of VoIP-related systems. The SIP Control Framework includes basic control message semantics corresponding to the types of interactions identified in Section 3. It uses the concept of "packages" to allow domain-specific protocols to be defined using the Extensible Markup Language (XML) format. The MS Control Protocol is made up of one or more packages for the SIP Control Framework... RFC 3261 contains rules for using an unreliable transport such as UDP: when a packet reaches a size close to the Maximum Transmission Unit (MTU), the protocol should be changed to TCP. This type of operation is not ideal when constantly dealing with large payloads such as XML-formatted MS control messages... Advanced IVR Services: Although IVR Services with Mid-call Control provides a comprehensive set of media functions expected from a Media Server, the Advanced IVR Services model allows a higher level of abstraction describing application logic, as provided by VoiceXML, to be executed on the Media Server. Invocation of VoiceXML IVR dialogs may be via the 'Prompt and Collect' mechanism of RFC 4240. Additionally, the IVR control protocol can be extended to allow VoiceXML requests to an HTTP interface between the Media Server and one or more back-end servers that host or generate VoiceXML documents. These server(s) may or may not be physically separate from the Application Server...
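
As a small illustration of the RFC 3261 sizing rule mentioned above, here is a hedged Python sketch of the transport choice. The 200-byte margin and the 1300-byte fallback for an unknown path MTU come from RFC 3261 (Section 18.1.1); the function name and the sample payload are ours, not from the framework draft.

from typing import Optional

def choose_transport(message: bytes, path_mtu: Optional[int] = None) -> str:
    """Pick a transport for a SIP request per the RFC 3261 sizing rule:
    within 200 bytes of the path MTU, or above 1300 bytes when the path
    MTU is unknown, fall back to a congestion-controlled transport such
    as TCP. Large XML-formatted MS control payloads thus tend to force TCP.
    """
    size = len(message)
    if path_mtu is not None:
        return "TCP" if size > path_mtu - 200 else "UDP"
    return "TCP" if size > 1300 else "UDP"

# A large (hypothetical) XML control message will not fit over UDP:
payload = b"<mscontrol>" + b"x" * 2000 + b"</mscontrol>"
assert choose_transport(payload) == "TCP"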

See also: SIP Interface to VoiceXML Media Services


Workshop on Trust and Privacy on the Social and Semantic Web (SPOT 2009)
SPOT 2009 Workshop Organizers, Announcement

A call for papers has been issued in connection with the First International Workshop on Trust and Privacy on the Social and Semantic Web (SPOT 2009), to be held in conjunction with the Sixth European Semantic Web Conference (ESWC 2009). The workshop will take place on May 31, 2009 in Heraklion, Greece. "Applications using Semantic Web technologies are starting to arise and to be used by large numbers of users. However, although trust and privacy play a crucial role in the technology's final development and adoption, most running systems and research prototypes offer no sufficient solutions to these topics. The Semantic Web as well as the Social Web have reached a state where those issues have to be addressed seriously in order to become reality. As the Semantic Web goes mainstream, especially through its social aspect, it is time for the community to gather around this topic... SPOT 2009 will bring together researchers and developers from the fields of the Semantic Web, the Social Web, and trust and privacy enforcement. It provides the opportunity to discuss and analyze important requirements and open research issues for a trustful Semantic Web. We welcome both theoretical and application-oriented results concerning how trust can be ensured in an open system like the Social Semantic Web, as well as how Semantic Web technologies can be used, or have to be extended, in order to address privacy issues. We also plan to include a specific time slot for case studies and system demonstrations. Workshop topics include, but are not limited to: (1) Trust and Privacy on the Semantic Web (including: ontologies for trust and privacy; data provenance and trustworthiness of knowledge sources; Semantic Web policies; privacy by generalization of answers; usage control and accountability; trust-enabled linked data; policy representation and reasoning). (2) Trust and Privacy for Social Semantic Web Applications (including: trust and privacy in social online communities (e.g., SIOC); privacy in Semantic Web sharing applications (e.g., the semantic desktop); user profiling and modeling vs. privacy; privacy and community mining; trust and reputation metrics; usage mining and policy extraction; privacy awareness in social communities; the Semantic Web as a trust enabler). (3) Applications and Case Studies (including: Social Semantic Web case studies, prototypes, and experiences; trust and privacy on social semantic platforms; social network annoyance, social software fatigue, and social spam; managing information overload in the Social Web with privacy metrics; trust and privacy for social software on mobile devices; scalability of trust and privacy on the Semantic Web)..."

See also: the ESWC 2009 web site


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
Microsoft Corporation http://www.microsoft.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com
Sun Microsystems, Inc. http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/




Document URI: http://xml.coverpages.org/newsletter/news2009-01-12.html
Robin Cover, Editor: robin@oasis-open.org