XML Daily Newslink. Tuesday, 08 June 2010

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Oracle Corporation http://www.oracle.com



Balisage Markup Conference 2010: Complete Program and Late-breaking News
Staff, Balisage Conference Announcement

Organizers of the Balisage Markup Conference 2010 have published a complete program listing, together with a reminder that proposals for Balisage Late-breaking News Presentations are due June 11, 2010. Balisage is an annual conference devoted to the theory and practice of descriptive markup and related technologies for structuring and managing information. Balisage 2010 will be held August 3-6, 2010 in Montréal, Canada, preceded by an August 2 International Pre-conference Symposium 'XML for the Long Haul: Issues in the Long-term Preservation of XML.'

While the peer-reviewed part of the Balisage 2010 program has been scheduled, a few slots on the Balisage program have been reserved for presentation of 'late-breaking' material. To qualify, a proposal should reflect recent developments (something that has happened within roughly the last month) and should include a paper, an extended paper proposal, or a very long abstract with references.

Balisage Program for August 3-4: "The high cost of risk aversion"; "Multi-channel eBook production as a function of diverse target device capabilities"; "gXML, a new approach to cultivating XML trees in Java"; "Grammar-driven markup generation"; "Java integration of XQuery: An information unit oriented approach"; "Reverse modeling for domain-driven engineering of publishing technology"; "Managing semantics in XML vocabularies: an experience in the legal and legislative domain"; "XML pipeline processing in the browser"; "Extension of the type/token distinction to document structure"; "Discourse situations and markup interoperability"; "Where XForms meets the glass: Bridging between data and interaction design"; "Refining the taxonomy of XML schema languages: Categorizing XML schema languages"; "Schema component paths for schema analysis"; "A packaging system for EXPath"; "A streaming XSLT processor."

Balisage Program for August 5-6: "Why TEI stand-off annotation doesn't quite work and why you might want to use it nevertheless"; "Freestyle Markup Language"; "Multi-structured documents and the emergence of annotations vocabularies"; "Processing arbitrarily large XML using a persistent DOM"; "A virtualization-based retrieval and update API for XML-encoded corpora"; "Panel Discussion. Greasing the Wheels: Overcoming User Resistance to XML"; "XML essence testing"; "Automatic upconversion using XSLT 2.0 and XProc"; "Stand-alone encoding of document history"; "Scripting documents with XQuery: virtual documents in TNTBase"; "XQuery design patterns"; "Parallel processing and your XML data"; "Closing Keynote: Stone Soup".

See also: Balisage 2010 Late-breaking News Presentations


W3C First Public Working Draft for an RDFa API
Benjamin Adrian, Manu Sporny, Mark Birbeck (eds), W3C Technical Report

W3C has announced publication of the First Public Working Draft of 'RDFa API: An API for Extracting Structured Data from Web Documents.' The document was produced by the RDFa Working Group, which was chartered to support the developing use of RDFa for embedding structured data in Web documents in general.

"This document details such a mechanism; an RDFa Document Object Model Application Programming Interface (RDFa DOM API) that allows simple extraction and usage of structured information from a Web document. RDFa API provides a mechanism that allows Web-based applications using documents containing RDFa markup to extract and utilize structured data in a way that is useful to developers. The specification details how a developer may extract, store and query structured data contained within one or more RDFa-enabled documents. The design of the system is modular and allows multiple pluggable extraction and storage mechanisms supporting not only RDFa, but also Microformats, Microdata, and other structured data formats. For more information about the Semantic Web, please see the Semantic Web Activity.

RDFa provides a means to attach properties to elements in XML and HTML documents. Since the purpose of these additional properties is to provide information about real-world items, such as people, films, companies, events, and so on, properties are grouped into objects called Property Groups. The RDFa DOM API provides a set of interfaces that make it easy to manipulate DOM objects that contain information that is also part of a Property Group. This specification defines these interfaces. A document that contains RDFa effectively provides two data layers. The first layer is the information about the document itself, such as the relationships between elements, the values of their attributes, the origin of the document, and so on; this information is usually provided by the Document Object Model, or DOM.

The second data layer comprises information provided by embedded metadata, such as company names, film titles, ratings, and so on, and this is usually provided by RDFa, Microformats, DC-HTML, GRDDL, or Microdata. Whilst this embedded information could be accessed via the usual DOM interfaces—for example, by iterating through child elements and checking attribute values—the potentially complex interrelationships between the data mean that it is more efficient for developers if they have access to the data after it has been interpreted. For example, a document may contain the name of a person in one section and the phone number of the same person in another; whilst the basic DOM interfaces provide access to these two pieces of information through normal navigation, it is more convenient for authors to have these two pieces of information available in one property collection, reflecting the final Property Group..."
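
To make that last example concrete, here is a minimal, hypothetical XHTML+RDFa fragment (the subject identifier, vocabulary choice, and values are invented for illustration): the name and the phone number live in different branches of the DOM, but both statements share the subject '#alice', so an RDFa processor can merge them into a single Property Group.

    <div xmlns:foaf="http://xmlns.com/foaf/0.1/">
      <!-- First data layer: ordinary DOM structure.
           Second data layer: a foaf:name statement about #alice. -->
      <p about="#alice">
        Name: <span property="foaf:name">Alice Example</span>
      </p>
      <!-- Elsewhere in the document, a second statement about the
           same subject; a processor groups both statements into one
           Property Group for #alice. -->
      <p about="#alice">
        Phone: <span property="foaf:phone">+1-555-0100</span>
      </p>
    </div>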

See also: the W3C RDFa Primer


OASIS Public Review: SCA Policy Framework Version 1.1
David Booz, Michael J. Edwards, Ashok Malhotra (eds), OASIS PRD

Members of the OASIS Service Component Architecture / Policy (SCA-Policy) Technical Committee have released SCA Policy Framework Version 1.1 (Committee Draft 03/Public Review 02) for review through June 23, 2010.

"The capture and expression of non-functional requirements is an important aspect of service definition and has an impact on SCA throughout the lifecycle of components and compositions. SCA provides a framework to support specification of constraints, capabilities and QoS expectations from component design through to concrete deployment. This specification describes the framework and its usage...

The term Policy is used to describe some capability or constraint that can be applied to service components or to the interactions between service components represented by services and references. An example of a policy is that messages exchanged between a service client and a service provider have to be encrypted, so that the exchange is confidential and cannot be read by someone who intercepts the messages. In SCA, services and references can have policies applied to them that affect the form of the interaction that takes place at runtime. These are called interaction policies. Service components can also have other policies applied to them, which affect how the components themselves behave within their runtime container. These are called implementation policies.

In SCA, policies are held in policySets, which can contain one or many policies, expressed in some concrete form, such as WS-Policy assertions. Each policySet targets a specific binding type or a specific implementation type. PolicySets are used to apply particular policies to a component or to the binding of a service or reference, through configuration information attached to a component or attached to a composite. For example, a service can have a policy applied that requires all interactions (messages) with the service to be encrypted. A reference which is wired to that service needs to support sending and receiving messages using the specified encryption technology if it is going to use the service successfully. In summary, a service presents a set of interaction policies, which it requires the references to use. In turn, each reference has a set of policies, which define how it is capable of interacting with any service to which it is wired. An implementation or component can describe its requirements through a set of attached implementation policies...
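
As a rough sketch of that idea (the policySet name and the WS-Policy body are hypothetical; the element structure follows the framework described in the specification), a policySet bundling an encryption requirement for Web service bindings might look like this:

    <policySet name="ex:EncryptionPolicySet"
               provides="sca:confidentiality"
               appliesTo="//sca:binding.ws"
               xmlns="http://docs.oasis-open.org/ns/opencsa/sca/200912"
               xmlns:sca="http://docs.oasis-open.org/ns/opencsa/sca/200912"
               xmlns:wsp="http://www.w3.org/ns/ws-policy">
      <wsp:Policy>
        <!-- Concrete WS-Policy assertions requiring message
             encryption would appear here. -->
      </wsp:Policy>
    </policySet>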

SCA intents are used to describe the abstract policy requirements of a component or the requirements of interactions between components represented by services and references. Intents provide a means for the developer and the assembler to state these requirements in a high-level, abstract form, independent of the detailed configuration of the runtime and bindings, which is the role of the application deployer. Intents support late binding of services and references to particular SCA bindings, since they assist the deployer in choosing appropriate bindings and concrete policies which satisfy the abstract requirements expressed by the intents. It is possible in SCA to attach policies to a service, to a reference or to a component at any time during the creation of an assembly, through the configuration of bindings and the attachment of policy sets..."
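
Putting the two mechanisms together, a hypothetical composite might declare the abstract 'confidentiality' intent on a service and let the deployer satisfy it with a concrete policySet (the component, class, and policySet names below are invented for illustration):

    <composite name="AccountComposite"
               targetNamespace="http://example.com/accounts"
               xmlns="http://docs.oasis-open.org/ns/opencsa/sca/200912">
      <component name="AccountService">
        <implementation.java class="com.example.AccountServiceImpl"/>
        <!-- 'requires' states the abstract intent; 'policySets'
             attaches the concrete policies that satisfy it. -->
        <service name="AccountService" requires="confidentiality">
          <binding.ws policySets="ex:EncryptionPolicySet"/>
        </service>
      </component>
    </composite>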

See also: OASIS SCA-Policy TC documents in the OASIS Library


Use Cases and Requirements for Mapping Relational Databases to RDF
Eric Prud'hommeaux and Michael Hausenblas (eds), W3C Technical Report

Members of the W3C RDB2RDF Working Group invite technical comment on the First Public Working Draft of Use Cases and Requirements for Mapping Relational Databases to RDF. The need to share data with collaborators motivates custodians and users of relational databases (RDB) to expose relational data on the Web of Data. This document examines a set of use cases from science and industry in which relational data is exposed in patterns conforming to shared RDF schemata. These use cases yield a set of functional requirements for the RDB2RDF Mapping Language (R2RML)...

The majority of dynamic Web content is backed by relational databases (RDB), as are many enterprise systems. On the other hand, the Resource Description Framework (RDF) is used to expose structured data on the Web. This document reviews use cases and requirements for a relational-database-to-RDF (RDB2RDF) mapping, with the following structure: (1) The remainder of this section motivates why mapping RDBs to RDF is needed and highlights the importance of a standard. (2) The next section reviews RDB2RDF use cases. (3) The last section discusses requirements for an RDB2RDF mapping language, driven by an analysis of the aforementioned use cases...

Use of a standard mapping language for RDB to RDF may allow a single mapping specification to be used when mirroring a schema and (possibly some or all of the) data across various databases, possibly from different vendors (e.g., Oracle database, MySQL, etc.) and located at various sites. Similarly structured data (that is, data stored using the same schema) is useful in many different organizations, often located in different parts of the world. These organizations may employ databases from different vendors due to one or more of many possible factors (such as licensing cost, resource constraints, and the availability of useful tools, applications, and appropriate database administrators). The presence of a standard RDB2RDF mapping language allows creation and use of a single mapping specification against each of the hosting databases to present a single (virtual or materialized) RDF view of the relational data hosted in those databases; this RDF view can then be queried by applications using the SPARQL query language or protocol... Another reason for a standard is to allow easy migration between different systems. Just as a single Web page in HTML can be viewed by two different Web browsers from different vendors, a single RDB2RDF mapping standard should allow a user of one database to expose their data as RDF and then, when they export their data to another database, allow the newly imported data to be queried as RDF without changing the mapping file.
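
For instance, once the relational data is exposed as an RDF view, an application might query it with SPARQL along these lines (the vocabulary and property names are hypothetical):

    PREFIX ex: <http://example.com/ns#>

    # Find the name and department of every employee in the RDF view,
    # regardless of which vendor's database actually hosts the rows.
    SELECT ?name ?dept
    WHERE {
      ?emp a ex:Employee ;
           ex:name ?name ;
           ex:department ?dept .
    }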

The mission of the RDB2RDF Working Group, part of the Semantic Web Activity, is "to standardize a language for mapping relational data and relational database schemas into RDF and OWL, tentatively called the RDB2RDF Mapping Language, R2RML. The mapping language defined by the WG will facilitate the development of several types of products. It could be used to translate relational data into RDF which could be stored in a triple store. This is sometimes called Extract-Transform-Load (ETL). Or it could be used to generate a virtual mapping that could be queried using SPARQL and the SPARQL translated to SQL queries on the underlying relational data. Other products could be layered on top of these capabilities to query and deliver data in different ways as well as to integrate the data with other kinds of information on the Semantic Web..."
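
The R2RML syntax itself had not been defined when this draft was published; as a hedged sketch of the general shape such a mapping could take (the Turtle below follows the vocabulary the Working Group later standardized, and the EMP table and ex: properties are invented), a row-to-resource mapping might read:

    @prefix rr: <http://www.w3.org/ns/r2rml#> .
    @prefix ex: <http://example.com/ns#> .

    # Map each row of the relational table EMP to one ex:Employee.
    <#EmployeeMap>
        a rr:TriplesMap ;
        rr:logicalTable [ rr:tableName "EMP" ] ;
        rr:subjectMap [
            rr:template "http://example.com/employee/{EMPNO}" ;
            rr:class ex:Employee
        ] ;
        rr:predicateObjectMap [
            rr:predicate ex:name ;
            rr:objectMap [ rr:column "ENAME" ]
        ] .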

See also: the W3C RDB2RDF Working Group


XHTML Modularization: A Markup Language Designer's Toolkit
Steven Pemberton, Blog

"The current maintenance update to XHTML Modularization is in response to the inevitable bug reports and clarifications that come from actual use. Since there have recently been some misconceptions expressed about the purpose of the spec, I'd thought I'd take the opportunity to try and clear them up.

XHTML Modularization is a tool for people who design markup languages. It has been used by the people designing the format for Jabber (xmpp), for the open eBook standard (epub), for the microformats specification for outlines (xoxo), and the Resource Directory Description Language (RDDL), among many others, as well as those at W3C such as XHTML 1.1, and RDFa.

Although Rick Jelliffe asserted that XHTML Modularization "...may be one of the most important new technologies of 2001," most people will not be familiar with it. That is because XHTML Modularization is not for designing Web pages, nor is it implemented in browsers: a lot of people create Web pages; not many create new markup languages. XHTML Modularization helps people design and manage markup language schemas and DTDs; it tells you how to write schemas that will plug together. Modules can be reused and recombined across different languages, which helps keep related languages in sync.

The modularization approach in the spec applies to XML as well. We could have called it "XML Modularization", but the main reason that XHTML appears in the title is that the spec also contains modules for XHTML built using the methodology. It is with these modules that XHTML 1.1, XHTML Print, and XHTML Basic (and the others mentioned above) are defined. Modularization is in some ways an unusual specification for W3C, because you don't have to write any software for it. In a sense, the 'processor' for Modularization is a human who is writing a schema..."
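
For a flavor of what plugging modules together looks like, here is a hypothetical, simplified DTD driver for a tiny language that borrows an XHTML module (the public identifier follows the pattern of the XHTML module implementations, and the 'recipe' element is invented; a real driver would first load the modular framework and a content-model module that defines the Flow.mix parameter entity):

    <!-- Pull in the XHTML Text module (headings, paragraphs, etc.). -->
    <!ENTITY % xhtml-text.mod
        PUBLIC "-//W3C//ELEMENTS XHTML Text 1.0//EN"
               "xhtml-text-1.mod" >
    %xhtml-text.mod;

    <!-- A language-specific element whose content model reuses the
         content-model classes defined by the XHTML modules. -->
    <!ELEMENT recipe ( %Flow.mix; )* >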

See also: the W3C Proposed Edited Recommendation


Asigra Launches Cloud Backup Platform
Nathan Eddy, eWEEK

"Asigra, a cloud backup and recovery software provider, has announced the launch of Cloud Backup v10. The latest edition extends the reach and performance of the Asigra platform, including protection for laptops, desktops, servers, data centers and cloud computing environments with tiered recovery options to meet Recovery Time Objectives (RTOs). The solution is available through the Asigra partner network. Organizations can opt to deploy the software directly onsite or select an Asigra service provider for offsite backup or both...

According to Forrester Research, at least two-thirds of companies in North America and Europe have already implemented server virtualization. With the major server virtualization vendors embracing the cloud as the strategic deliverable of a virtualized infrastructure, Asigra has also enhanced the virtualization support in v10. Cloud Backup v10 can be deployed as a virtual appliance within virtual infrastructures, and the company now offers support for virtual machine backups at the host level..."

From the text of the announcement: "New features in Cloud Backup Version 10 include: (1) DS-Mobile support to back up laptops in the field; (2) Tiered recovery, spanning local-only backup to machines on the end customer's premises and offsite backup to a service provider or private cloud, allowing users to better align the value of data to the cost of protecting it while meeting Recovery Time Objectives (RTOs); (3) Deployment as a virtual appliance within virtual infrastructures for maximum application mobility, scalability and uptime; (4) Advanced FIPS 140-2 NIST-certified security and encryption of data in-flight and at-rest; (5) New backup sets for comprehensive protection of enterprise applications, including MS Exchange, MS SharePoint, MS SQL, Windows Server Hyper-V, Oracle SBT, Sybase and Local-Only backup...

Asigra Cloud Backup is next-generation backup and recovery software optimized for cloud computing and designed to offer backup and storage efficiencies unavailable with traditional backup architectures: by allowing users to capture less, ingest less, and store less data, it reduces the backup software cycles and storage hardware required to deploy and maintain high levels of data protection... Asigra Cloud Backup is built on the company's hybrid cloud deployment model, giving users the freedom to back up to a public cloud for offsite recovery, build a private cloud for onsite recovery, or select a hybrid of the two models. With v10, users can create tiered backup policies that align the value of the data with the cost of protecting it. Establishing policies for local backup sets allows IT to segment the data and choose what gets backed up locally on their private cloud. The local backup sets option enables companies to replace their typical tape-based local backup solution..."

See also: the announcement


Users Can Encrypt Amazon CloudFront Content Delivery System
Mikael Ricknäs, ComputerWorld

"Amazon Web Services (AWS) content delivery network service CloudFront can now transfer data over an encrypted HTTPS connection, but users will pay more than if they transfer it via HTTP... CloudFront can be used to distribute all files that can be sent over HTTP, including images, audio, video, media files or software downloads. The service, which is still in beta test, can stream audio and video, as well... CloudFront will use encryption when retrieving data from its storage service S3 (Simple Storage Service), so the content is protected all the way from where it is stored to the user's computer...

Amazon has also opened a new edge location in New York City, which brings the total number of U.S. locations to nine. There are also four locations in Europe and three in Asia. Proximity to an edge location helps improve performance, and depending on where users are located, CloudFront will automatically send them to the most appropriate location..."

From the AWS announcement: "[...] three separate changes to Amazon CloudFront, the easy-to-use AWS content delivery network. First, we've added the ability to deliver content over an HTTPS connection. Since we launched Amazon CloudFront, HTTPS support has been one of the most requested features by our customers. HTTPS lets you transfer content over an encrypted connection, helping ensure the authenticity of the content delivered to your users. Amazon CloudFront HTTPS delivery can be used to transfer inherently sensitive objects to your users, to avoid security warnings that some browsers present when viewing a mix of HTTP and HTTPS content, or for anything else that needs to be encrypted when transferred.

Using HTTPS is easy—just change your cloudfront.net links to use 'https://' instead of 'http://', and Amazon CloudFront will serve your content using HTTPS. By default, Amazon CloudFront will accept requests over both the HTTP and the HTTPS protocols. However, if you always want your content encrypted when transmitted, you can configure Amazon CloudFront to require HTTPS for all requests for your content and not allow requests made over the regular HTTP protocol. For HTTPS requests, Amazon CloudFront will also use HTTPS to retrieve your object from Amazon S3, so your object is encrypted whenever it is transmitted..."
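
In practice the change amounts to rewriting the URL scheme in existing links; with a hypothetical distribution domain (the 'd1234example' subdomain below is invented), it would look like this:

    <!-- Before: object delivered over plain HTTP -->
    <img src="http://d1234example.cloudfront.net/images/logo.png" alt="logo" />

    <!-- After: the same object over an encrypted connection -->
    <img src="https://d1234example.cloudfront.net/images/logo.png" alt="logo" />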

See also: the AWS announcement


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
ISIS Papyrus http://www.isis-papyrus.com
Microsoft Corporation http://www.microsoft.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2010-06-08.html
Robin Cover, Editor: robin@oasis-open.org