This issue of XML Daily Newslink is sponsored by:
Sun Microsystems, Inc. http://sun.com
- Public Review: SCA Assembly Model Specification Version 1.1
- W3C Candidate Recommendation: Timed Text Markup Language (TTML) 1.0
- XML Security RELAX NG Schemas from W3C
- Toward Energy-Efficient Computing
- Manage XML Schemas in DB2: XML Schema Evolution and XML Data Management
- Updated IETF Internet Draft for Common YANG Data Types
- Archivists' Toolkit and Archon: Next-Generation Archival Management Tool
- Data Integration Issues Challenge Cal Power Operation's Move to SOA
Public Review: SCA Assembly Model Specification Version 1.1
Michael Beisiegel, Khanderao Khand (et al, eds), OASIS Public Review Draft
Members of the OASIS Service Component Architecture / Assembly (SCA-Assembly) Technical Committee have submitted an approved 'Committee Draft 05' of the Service Component Architecture Assembly Model Specification Version 1.1 for public review through March 04, 2010.
This OASIS TC was chartered in July 2007 to "define the core composition model of Service Component Architecture. Service Component Architecture (SCA) defines a model for the creation of business solutions using a Service-Oriented Architecture, based on the concept of Service Components which offer services and which make references to other services. SCA models business solutions as compositions of groups of service components, wired together in a configuration that satisfies the business goals. SCA applies aspects such as communication methods and policies for infrastructure capabilities such as security and transactions through metadata attached to the compositions."
Document abstract: "Service Component Architecture (SCA) provides a programming model for building applications and solutions based on a Service Oriented Architecture. It is based on the idea that business function is provided as a series of services, which are assembled together to create solutions that serve a particular business need. These composite applications can contain both new services created specifically for the application and also business function from existing systems and applications, reused as part of the composition. SCA provides a model both for the composition of services and for the creation of service components, including the reuse of existing application function within SCA composites.
SCA is a model that aims to encompass a wide range of technologies for service components and for the access methods which are used to connect them. For components, this includes not only different programming languages, but also frameworks and environments commonly used with those languages. For access methods, SCA compositions allow for the use of various communication and service access technologies that are in common use, including, for example, Web services, Messaging systems and Remote Procedure Call (RPC). The SCA Assembly Model consists of a series of artifacts which define the configuration of an SCA Domain in terms of composites which contain assemblies of service components and the connections and related artifacts which describe how they are linked together. This document describes the SCA Assembly Model, which covers: (1) A model for the assembly of services, both tightly coupled and loosely coupled; (2) A model for applying infrastructure capabilities to services and to service interactions, including Security and Transactions..."
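The assembly artifacts described above can be illustrated with a hedged sketch of a composite descriptor. The component names, service names, and implementation classes below are invented; the namespace shown is the one used by the OASIS SCA 1.1 drafts.

```xml
<!-- Hypothetical sketch of an SCA composite; all names are invented. -->
<composite xmlns="http://docs.oasis-open.org/ns/opencsa/sca/200912"
           name="OrderProcessing">
  <!-- Promote a component's service to be a service of the composite. -->
  <service name="OrderService" promote="OrderComponent/OrderService"/>
  <component name="OrderComponent">
    <implementation.java class="example.OrderServiceImpl"/>
    <!-- Wire this component's reference to another component's service. -->
    <reference name="billing" target="BillingComponent/BillingService"/>
  </component>
  <component name="BillingComponent">
    <implementation.java class="example.BillingServiceImpl"/>
  </component>
</composite>
```

The wiring between `OrderComponent` and `BillingComponent` is the kind of "assembly of service components" the specification's domain model describes.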
See also: the OASIS announcement
W3C Candidate Recommendation: Timed Text Markup Language (TTML) 1.0
Glenn Adams (ed), W3C Technical Report
Members of the W3C Timed Text Working Group have published an updated Candidate Recommendation specification for Timed Text Markup Language (TTML) 1.0. "Timed text markup language is a content type that represents timed text media for the purpose of interchange among authoring systems. Timed text is textual information that is intrinsically or extrinsically associated with timing information." This Candidate Recommendation is an updated document based on implementation experience, which includes a list of changes. A test suite for TTML is also available, along with its coverage report and a preliminary implementation report. The test suite and implementations are work in progress and may not reflect all of the changes of the main specification document. The W3C membership and other interested parties are invited to review the document and send comments through 23-March-2010.
From the document introduction: "TTML 1.0 provides a standardized representation of a particular subset of textual information with which stylistic, layout, and timing semantics are associated by an author or an authoring system for the purpose of interchange and potential presentation. TTML is expressly designed to meet only a limited set of requirements established by the 'Timed Text (TT) Authoring Format 1.0 Use Cases and Requirements' specification as summarized in Section J Requirements. In particular, only those requirements which service the need of performing interchange with existing, legacy distribution systems are satisfied. In addition to being used for interchange among legacy distribution content formats, TTML content may be used directly as a distribution format, providing, for example, a standard content format to reference from a 'text' or 'textstream' media object element in a SMIL 2.1 document.
Certain properties of TTML support streamability of content, as described in Section M (Streaming DFXP Content). While TTML was not expressly designed for direct (embedded) integration into a SMIL document instance, such integration is not precluded. In some contexts of use, it may be appropriate to employ animated content to depict sign language representations of the same content as expressed by a Timed Text document instance. This use case is not explicitly addressed by TTML mechanisms, but may be addressed by some external multimedia integration technology, such as SMIL. Use of TTML is intended to function in a wider context of Timed Text Authoring and Distribution mechanisms that are based upon a system model wherein the timed text markup language serves as a bidirectional interchange format among a heterogeneous collection of authoring systems, and as a unidirectional interchange format to a heterogeneous collection of distribution formats after undergoing transcoding or compilation to the target distribution formats as required, and where one particular distribution format is TTML..."
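A minimal timed text document of the kind TTML describes might look like the following sketch; the caption text and timings are invented, and only the TTML namespace is taken from the specification.

```xml
<!-- Hypothetical minimal TTML document; content invented for illustration. -->
<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
  <body>
    <div>
      <!-- Each paragraph carries its own begin/end timing. -->
      <p begin="0s" end="4s">First caption line.</p>
      <p begin="4s" end="8s">Second caption line.</p>
    </div>
  </body>
</tt>
```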
The W3C Timed Text Working Group is part of the W3C Video in the Web Activity. "The goal of this activity is to make video a "first class citizen" of the Web. Video on the Web (and this includes audio, as the two are typically used together) has seen explosive growth, improving the richness of the user experience but leading to challenges in content discovery, searching, indexing and accessibility. Enabling users (from individuals to large organizations) to put video in the Web requires that we build a solid architectural foundation that enables people to create, navigate, search, link and distribute video, effectively making video part of the Web instead of an extension that doesn't take full advantage of the Web architecture..."
XML Security RELAX NG Schemas from W3C
MURATA Makoto, Posting to 'Office-Comment'
Makoto writes: "I am working with W3C to create RELAX NG schemas for XML digital signature and other things from the XML Security Working Group. The current draft is available [from the W3C Web site]. I hope that both ODF and OOXML use them..." According to a note from the document editor (Frederick Hirsch), the XML Security RNG Schemas draft has been updated to reflect the latest set of changes from Makoto, including the new files with descriptions.
The draft specification XML Security RELAX NG Schemas serves to publish RELAX NG schemas for XML Security specifications, including XML Signature 1.1 and XML Signature Properties. These schemas are drafts and subject to further revision. The normative descriptions of the respective data formats are included in the Recommendation-track Working Drafts for XML Signature and XML Signature Properties...
The document presents: (1) the XML Signature 1.0 RNG Schema, (2) the XML Signature 1.1 RNG Schema, and (3) the XML Signature Properties RNG Schema.
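As a rough illustration of the RELAX NG style used by such schemas, a much-simplified sketch of an XML Signature pattern might look like the fragment below. The real W3C schemas are substantially more complete (optional attributes, Transforms, KeyInfo, and so on); this fragment is illustrative only.

```xml
<!-- Simplified, hypothetical RELAX NG sketch of a Signature element. -->
<grammar xmlns="http://relaxng.org/ns/structure/1.0"
         ns="http://www.w3.org/2000/09/xmldsig#">
  <start>
    <element name="Signature">
      <ref name="SignedInfo"/>
      <element name="SignatureValue"><text/></element>
    </element>
  </start>
  <define name="SignedInfo">
    <element name="SignedInfo">
      <element name="CanonicalizationMethod"><attribute name="Algorithm"/></element>
      <element name="SignatureMethod"><attribute name="Algorithm"/></element>
      <oneOrMore>
        <element name="Reference">
          <element name="DigestMethod"><attribute name="Algorithm"/></element>
          <element name="DigestValue"><text/></element>
        </element>
      </oneOrMore>
    </element>
  </define>
</grammar>
```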
See also: the draft specification and schema files
Toward Energy-Efficient Computing
David J. Brown, ACM Queue
"By now, most everyone is aware of the energy problem at its highest level: our primary sources of energy are running out, while the demand for energy in both commercial and domestic environments is increasing, and the side effects of energy use have important global environmental considerations...
If the exponential growth of data-center computing equipment revealed by [the August 2007 EPA] study continues, roughly double the demand for electricity seen in 2006 is expected in data centers by 2011. This poses challenges beyond the obvious economic ones. For example, peak instantaneous demand is expected to rise from 7 gigawatts in 2006 to 12 gigawatts in 2011, and 10 new base-level power plants would be needed to meet such a demand... There is some evidence that the amount of energy consumed by mobile and desktop computing equipment is of roughly the same magnitude as that used by servers in data centers, although we do not have a correspondingly comprehensive and authoritative current study to refer to...
We are still at the debut of energy-conscious computing, with a great deal of the industry's attention being given to the introduction and use of power-management mechanisms and controls in individual hardware components rather than to the broader problem of energy efficiency: the minimization of total energy required to run computational workloads on a system. This article suggests an overall approach to energy efficiency in computing systems. It proposes the implementation of energy-optimization mechanisms within systems software, equipped with a power model for the system's hardware and informed by applications that suggest resource-provisioning adjustments so that they can achieve their required throughput levels and/or completion deadlines.
In the near term, a number of heuristic techniques designed to reduce the most obvious energy waste associated with the highest-power components, such as CPUs, are likely to remain practical. In the longer term, and for more effective total energy optimization, we believe that techniques able to model performance relative to the system's hardware configuration (and hence its energy consumption), along with an improved understanding and some predictive knowledge of workloads, will become increasingly important.
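The kind of energy-optimization decision the article proposes can be sketched as a toy frequency-selection heuristic: given a power model for the hardware and a workload deadline, choose the frequency that finishes the work within the deadline at the least total energy. The power model, constants, and workload numbers below are all invented for illustration.

```python
# Toy sketch of deadline-aware frequency selection; all numbers invented.

def power_watts(freq_ghz, static_w=5.0, k=3.0):
    """Toy power model: static power plus dynamic power scaling as f^3."""
    return static_w + k * freq_ghz ** 3

def best_frequency(cycles_g, deadline_s, freqs_ghz):
    """Return the (frequency, energy) pair that meets the deadline with
    the least energy, or None if no available frequency is fast enough."""
    best = None
    for f in freqs_ghz:
        runtime = cycles_g / f               # seconds to finish the work
        if runtime > deadline_s:
            continue                         # misses the deadline
        energy = power_watts(f) * runtime    # joules spent while running
        if best is None or energy < best[1]:
            best = (f, energy)
    return best

# 10 gigacycles of work, an 8-second deadline, three frequency steps:
choice = best_frequency(cycles_g=10.0, deadline_s=8.0, freqs_ghz=[1.0, 2.0, 3.0])
# -> (2.0, 145.0): 1 GHz misses the deadline; 3 GHz finishes sooner
#    but burns roughly twice the energy because power grows as f^3.
```

The point of the sketch is the trade-off the article describes: racing at the highest frequency is not automatically the cheapest option once the power model is taken into account.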
Manage XML Schemas in DB2: XML Schema Evolution and XML Data Management
Masahiro Ohkawa, IBM developerWorks
The first article in this series showed how to register several types of XML schemas, how to validate XML data with them, and ways to get the validated information. This article explores several scenarios about evolving XML schemas and ways to manage the XML data.
An XML schema is updated (evolved) as a result of business analysis. Typical scenarios for evolving an XML schema are: (1) Evolve the XML schema (upward compatible). The XML schema is evolved in a way that is upward compatible with the existing XML schema, so the existing XML data conforms to the new XML schema without modification. (2) Evolve the XML schema (not compatible) and transform the XML data. The XML schema is evolved but is not compatible with the existing XML schema, so the existing XML data is transformed to fit the new XML schema. (3) Evolve the XML schema (not compatible) and manage the XML data without transformation. The XML schema is evolved but is not compatible with the existing XML schema; the existing XML data is not transformed, and you manage it with the existing XML schema..."
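Scenario (2) — transforming existing data so it conforms to an incompatible new schema — can be sketched with a small standard-library transform. The element names and the particular schema change (splitting `<name>` into `<first>`/`<last>`) are invented for illustration and are not taken from the article.

```python
# Hedged sketch of scenario (2): migrate old XML data to an evolved,
# incompatible schema shape. All element names are invented.
import xml.etree.ElementTree as ET

OLD_DATA = "<customer><name>Ada Lovelace</name></customer>"

def migrate(xml_text):
    """Rewrite a document valid under the old schema into the new shape."""
    root = ET.fromstring(xml_text)
    name = root.find("name")
    if name is not None:
        first, _, last = (name.text or "").partition(" ")
        root.remove(name)
        ET.SubElement(root, "first").text = first  # new schema: two elements
        ET.SubElement(root, "last").text = last
    return ET.tostring(root, encoding="unicode")

migrated = migrate(OLD_DATA)
# -> <customer><first>Ada</first><last>Lovelace</last></customer>
```

After such a transformation the migrated documents can be validated against the new registered schema, while untransformed data (scenario 3) keeps validating against the old one.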
See also: article Part 1
Updated IETF Internet Draft for Common YANG Data Types
Juergen Schoenwaelder (ed), IETF Internet Draft
Members of the IETF Network Configuration (NETCONF) Working Group have published an update for Common YANG Data Types, which introduces a collection of common data types to be used with the YANG data modeling language.
The YANG language supports a small set of built-in data types and provides mechanisms to derive other types from the built-in types. This document introduces a collection of common data types derived from the built-in YANG data types. The definitions are organized in several YANG modules. The 'ietf-yang-types' module contains generally useful data types. The 'ietf-inet-types' module contains definitions that are relevant for the Internet protocol suite. The derived types are generally designed to be applicable for modeling all areas of management information... A YANG data type is equivalent to an SMIv2 data type if the data types have the same set of values and the semantics of the values are equivalent...
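A hedged sketch of how a data model might import these common modules follows; the module, namespace, and leaf names are invented, while `ietf-inet-types`, `ietf-yang-types`, `inet:ip-address`, and `yang:date-and-time` are definitions from the draft's modules.

```yang
// Hypothetical module using the common derived types; names invented.
module example-device {
  namespace "urn:example:device";
  prefix exd;

  import ietf-inet-types { prefix inet; }
  import ietf-yang-types { prefix yang; }

  container device {
    leaf mgmt-address { type inet:ip-address; }    // from ietf-inet-types
    leaf last-boot    { type yang:date-and-time; } // from ietf-yang-types
  }
}
```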
"YANG: A Data Modeling Language for NETCONF" describes the syntax and semantics of the YANG language, how the data model defined in a YANG module is represented in XML, and how NETCONF operations are used to manipulate the data. A YANG module defines a hierarchy of data which can be used for NETCONF-based operations, including configuration, state data, remote procedure calls (RPCs), and notifications. This allows a complete description of all data sent between a NETCONF client and server. YANG models the hierarchical organization of data as a tree in which each node has a name, and either a value or a set of child nodes. YANG provides clear and concise descriptions of the nodes, as well as the interaction between those nodes. YANG structures data models into modules and submodules. A module can import data from other external modules, and include data from submodules. The hierarchy can be augmented, allowing one module to add data nodes to the hierarchy defined in another module. This augmentation can be conditional, with new nodes appearing only if certain conditions are met...
YANG models can describe constraints to be enforced on the data, restricting the appearance or value of nodes based on the presence or value of other nodes in the hierarchy. These constraints are enforceable by either the client or the server, and valid content must abide by them...YANG permits the definition of reusable grouping of nodes. The instantiation of these groupings can refine or augment the nodes, allowing it to tailor the nodes to its particular needs. Derived types and groupings can be defined in one module or submodule and used in either that location or in another module or submodule that imports or includes it... YANG modules can be translated into an equivalent XML syntax called YANG Independent Notation (YIN), allowing applications using XML parsers and XSLT scripts to operate on the models. The conversion from YANG to YIN is loss-less, so content in YIN can be round-tripped back into YANG..."
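The YANG-to-YIN mapping can be illustrated with a hypothetical YIN rendering of a tiny module; the module and leaf names are invented, and the element/attribute shapes follow the general YIN convention of turning each YANG statement into an element with its argument as an attribute.

```xml
<!-- Hypothetical YIN rendering of a small YANG module; names invented. -->
<module name="example-device"
        xmlns="urn:ietf:params:xml:ns:yang:yin:1">
  <namespace uri="urn:example:device"/>
  <prefix value="exd"/>
  <import module="ietf-inet-types">
    <prefix value="inet"/>
  </import>
  <container name="device">
    <leaf name="mgmt-address">
      <type name="inet:ip-address"/>
    </leaf>
  </container>
</module>
```

Because each YANG statement maps one-to-one onto a YIN element, the conversion can be reversed without loss, which is what makes the round-trip described above possible.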
"The University of California at San Diego, New York University, and the University of Illinois at Urbana-Champaign Libraries are teaming up to develop a next-generation archival management tool, thanks to a grant in the amount of $539,000 from The Andrew W. Mellon Foundation. The grant will support the planning and design of a new software tool for the description and management of archives, based on the combined capabilities of Archivists' Toolkit (AT) and Archon.
The two predominant open-source archival tools are currently utilized by numerous academic libraries, special collections, archives, and museums worldwide, including universities like UCLA and Harvard, the Metropolitan Museum of Art, the San Diego Zoo, and smaller archival repositories like the Niels Bohr Archives in Denmark and the Biblioteca Ateneu Barcelones in Spain. Planning activities will include the development of a next-generation architectural framework as well as a complete review of the new archival tool's required and desirable functional specifications. Members of the archival community will be consulted during the planning and product development stages..."
According to the AT web site: "The Archivists' Toolkit (AT) is the first open source archival data management system to provide broad, integrated support for the management of archives. It is intended for a wide range of archival repositories. The main goals of the AT are to support archival processing and production of access instruments, promote data standardization, promote efficiency, and lower training costs. Currently, the application supports accessioning and describing archival materials; establishing names and subjects associated with archival materials, including the names of donors; managing locations for the materials; and exporting EAD finding aids, MARCXML records, and METS, MODS, and Dublin Core records. Future functionality will be built to support repository user/resource use information, appraisal for archival materials, expressing and managing rights information, and interoperability with user authentication systems. The AT project is a collaboration of the University of California San Diego Libraries, the New York University Libraries and the Five Colleges, Inc. Libraries, and is generously funded by The Andrew W. Mellon Foundation."
In the Archivists' Toolkit 1.0 Functional Specification, Import Maps are supported for EAD to AT and MARC-XML to AT. Export maps are provided for (1) AT to Dublin Core map object; (2) AT to EAD map; (3) AT resource record to MARC-XML map object; (4) AT digital object record to MARCXML map object; (5) AT to METS map object; (6) AT to MODS map object..."
See also: the Archivists Toolkit
Data Integration Issues Challenge Cal Power Operation's Move to SOA
Rob Barry, SearchSOA.com
Many new services-based applications transcend the bounds of a single organization, and data definitions often loom as the most pressing challenge when integrating these extended systems. This was the case for the California Independent System Operator (ISO), a not-for-profit corporation that manages the state's market-based wholesale power grid. The independent California grid power broker embarked on the road to SOA around 2004. The immediate job was to better connect the varied information systems of power market participants...
Typically, a common information model contains a specification and a schema. The schema contains model descriptions while the specification contains integration details. These objects and relationships provide common definitions of management information for an architecture that can then be extended for use by third parties. Groups like the Distributed Management Task Force (DMTF) and the Object Management Group (OMG) have worked long and hard to help businesses standardize such models. Industry-specific CIMs include an International Electrotechnical Commission (IEC) CIM fashioned to the needs of the electrical industries. Close to 100 power companies participate on the California ISO's grid, each of which needs to retrieve information from the central system. Not only did these vendors have their own infrastructures, they did not share a standard information model...
A Market Redesign and Technology Upgrade project took about five years and involved a great deal of custom internal development, although a standards-based commercial ESB was employed. The organization runs on Java and PL/SQL and, through the redesign, adopted W3C standards and now uses SOAP with its Web services. The SOA implementation remained technology-agnostic... The new infrastructure allowed CAISO to split its three primary power grid zones into more than 3,000 nodes, each reflecting local power generation and delivery costs. The system sends status reports on all of these nodes every five minutes. Web services of vendors that provide required data need only conform to the CIM... Under the new information model, the CAISO has created the business vocabulary for the rest of its market to conform to..."
XML Daily Newslink and Cover Pages sponsored by:
Sun Microsystems, Inc.
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: email@example.com
Newsletter unsubscribe: firstname.lastname@example.org
Newsletter help: email@example.com
Cover Pages: http://xml.coverpages.org/