This issue of XML Daily Newslink is sponsored by:
Oracle Corporation http://www.oracle.com
- W3C Invites Public Comment on First Draft of XML Processor Profiles
- Building the New-generation China Academic Digital Library Information System (CADLIS): A Review and Prospectus
- New IETF Draft: Benchmarking Power Usage of Networking Devices
- The Future of Embedded Software: Adapting to Drastic Change
- NIEM: Design an XML Information Exchange Between U.S. Government Entities
- Working Draft of the OpenID v.Next Attributes Working Group Charter
- Data.gov Web Site to Relaunch With Microsoft's Bing Search Feature
W3C Invites Public Comment on First Draft of XML Processor Profiles
Henry S. Thompson and Norman Walsh (eds), W3C Technical Report
Members of the W3C XML Processing Model Working Group, part of the W3C XML Activity, have announced publication of the First Public Working Draft for the specification XML Processor Profiles. Since this specification is intended for use by other specifications which themselves define one or more XML languages, the Working Group particularly welcomes input from other Working Groups that are responsible for such specifications.
XML Processor Profiles defines "several XML processor profiles, each of which fully determines a data model for any given XML document. It is intended as a resource for other specifications, which can by a single normative reference establish precisely what input processing they require...
The minimum approach to the construction of a data model from a well-formed and namespace well-formed XML document requires the following: (1) Processing of the document as required of conformant non-validating XML processors while reading all external markup declarations; (2) Maintenance of the base URI property of each element in conformance with the XML Base specification. For the basic XML processor profile, the draft proposes (3) Identification of all 'xml:id' attributes as [XML attribute type] IDs, as required by the xml:id Version 1.0 specification (W3C Recommendation 9-September-2005), and (4) Replacement of all include elements in the XInclude namespace, and namespace, 'xml:base' and 'xml:lang' fixup of the result, as required for conformance to the specification XML Inclusions (XInclude) Version 1.0 (Second Edition)..."
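The basic-profile steps above can be sketched in a few lines of Python. This is an illustration only, not a conformant processor: it uses the standard library's `xml.etree.ElementInclude` for step (4) and a plain dictionary for step (3); the document content, the `chapter1.xml` href, and the stand-in loader are all hypothetical.

```python
import xml.etree.ElementTree as ET
from xml.etree import ElementInclude

XI = "http://www.w3.org/2001/XInclude"
XML_ID = "{http://www.w3.org/XML/1998/namespace}id"

doc = ET.fromstring(f"""\
<root xmlns:xi="{XI}">
  <section xml:id="intro">Intro</section>
  <xi:include href="chapter1.xml"/>
</root>""")

# A stand-in loader so the sketch is self-contained; a real processor
# would dereference the href (chapter1.xml is a hypothetical name).
def loader(href, parse, encoding=None):
    chapter = ET.Element("section", {XML_ID: "ch1"})
    chapter.text = "Chapter one"
    return chapter

# Step 4: replace all xi:include elements with the included content.
ElementInclude.include(doc, loader=loader)

# Step 3: treat xml:id attributes as IDs and index elements by them.
ids = {el.get(XML_ID): el for el in doc.iter() if el.get(XML_ID)}
print(sorted(ids))  # ['ch1', 'intro']
```

Note that `ElementInclude` covers only a subset of XInclude (no `xml:base`/`xml:lang` fixup), which is exactly the kind of variability the profiles draft is meant to pin down.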
Background: "The Extensible Markup Language (XML) 1.0 (Fifth Edition) specification defines an XML processor as 'a software module ...used to read XML documents and provide access to their content and structure ... on behalf of another module, called the application.' XML applications are often defined by building on top of the XML Information Set (Second Edition) or other similar XML data models such as XML Path Language (XPath) Version 1.0 or XQuery 1.0 and XPath 2.0 Data Model (XDM), understood as the output of an XML processor. Such definitions have suffered to some extent from an uncertainty inherent in using that kind of foundation, in that the mapping XML processors perform from XML documents to data models is not rigid. Some of this stems from the XML specification itself, which leaves open the possibility of reading and interpreting external entities, or not. Some stems from the growth of the XML family of specifications: if the input document includes uses of XInclude, for instance..."
Building the New-generation China Academic Digital Library Information System (CADLIS): A Review and Prospectus
Wang Wenqing and Chen Ling, D-Lib Magazine
"China Academic Digital Library Information System (CADLIS) is a national project funded by the Chinese government and steered by the National Administrative Center for China Academic Library Information System (CALIS) which is a nation-wide academic library cooperative with over one thousand member libraries in China. By leveraging cloud computing technology, the new-generation CADLIS is an open framework and infrastructure, designed to help academic libraries to build and support large-scale federated academic digital libraries of high-quality scholarly information resources which are constructed and shared by CALIS members.
This paper gives an overview of CALIS and CADLIS, and then describes two kinds of services, the overall architecture and interoperability of new-gen CADLIS, the related standards and specifications, newly-built and imported digital resources, etc. It concludes by discussing the current status and future development of CADLIS...
The China Academic Digital Library Information System (CADLIS) federates digital libraries for higher education and academic libraries. It was first built in the CALIS Phase-II Project beginning in 2002. The new-gen CADLIS was started in the CALIS Phase-III Project beginning in 2009. CADLIS's services include union catalogue, inter-library loan and document delivery, collaborative virtual reference, networked e-reserve, networked electronic theses and dissertations (ETD), networked digital collections for disciplines, e-journals, e-books, and so on. Many new-gen services are provided in the Software as a Service (SaaS) model...
The first group of SaaS applications (including the OpenSocial-compliant portal, federated identity management, inter-library loan and document delivery, virtual reference, e-reserve, digital collections management, etc.) has been under development since the end of 2008 and will be released in the middle of this year. Training for members on how to lease and use these applications will be provided jointly by CALIS Technology Centers and Provincial Centers in the latter half of this year. Other applications of CADLIS will be completed and brought into operation over time. Special collections and other resources will be constructed cooperatively by members, and many important resources and tools will be imported jointly, step by step, until the end of next year..."
New IETF Draft: Benchmarking Power Usage of Networking Devices
Vishwas Manral (ed), IETF Internet Draft
Through the IETF Network Working Group, an initial level -00 Informational Internet Draft has been published for Benchmarking Power Usage of Networking Devices. Abstract: "With the rapid growth of networks around the globe there is an ever increasing need to improve the energy efficiency of devices. Operators beginning to seek more information about power consumption in the network have no standard mechanism to measure, report, and compare the power usage of different networking equipment under different network configurations and conditions. This document provides suggestions for measuring power usage of live networks under different traffic loads and various switch and router configuration settings. It provides a benchmarking suite which can be deployed on any networking device."
Background: "Energy efficiency is becoming increasingly important in the operation of network infrastructure. Data traffic is exploding at an accelerated rate. Networks provide communication channels that facilitate the exchange of critical information among components of the infrastructure, and are always on. On the other hand, many devices run at very low average utilization rates. Various strategies are being defined to improve the utilization of these devices and thus reduce power consumption.
The first step to get a network wide view is to start with an individual device view of the system and address different devices in the network on a per device basis. The easiest way to measure the power consumption of a device is to use a power meter. This can be used to measure power under a variety of conditions affecting power usage on a networking device.
Various techniques have been defined for energy management of networking devices. However, there is no common strategy for actually benchmarking the power utilization of networking devices such as routers and switches. This document defines mechanisms to correctly characterize and benchmark the power consumption of various networking devices, so that the power usage of different devices can be correctly measured and compared. This will enable intelligent decisions to optimize the power consumption of individual devices and of the network as a whole. Benchmarks are also required to compare the effectiveness of various energy optimization techniques. A Network Energy Consumption Rate (NECR) as well as a Network Energy Proportionality Index (NEPI) is also defined here...."
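The kind of per-device measurement the draft describes can be sketched as follows. Note the formulas are plausible stand-ins for illustration, not the draft's actual definitions of NECR and NEPI, and the meter readings are invented for a hypothetical switch.

```python
# Illustrative only: the draft defines NECR and NEPI formally; these
# formulas are assumed stand-ins, not the draft's definitions.
def necr(power_watts: float, throughput_bps: float) -> float:
    """Energy consumed per bit forwarded (joules/bit) at one load point."""
    return power_watts / throughput_bps

def nepi(idle_watts: float, full_load_watts: float) -> float:
    """0.0 = power draw is flat regardless of load; 1.0 = perfectly proportional."""
    return (full_load_watts - idle_watts) / full_load_watts

# Hypothetical power-meter readings at three traffic loads: (watts, bits/sec)
readings = [(150.0, 1e9), (165.0, 5e9), (180.0, 10e9)]
for watts, bps in readings:
    print(f"{bps / 1e9:.0f} Gb/s: NECR = {necr(watts, bps):.2e} J/bit")
print(f"NEPI = {nepi(150.0, 180.0):.2f}")
```

The point the draft makes shows up even in this toy data: per-bit energy falls sharply with load, so comparing devices at a single load point is misleading without a benchmark suite covering a range of conditions.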
The Future of Embedded Software: Adapting to Drastic Change
Yoshiaki Kushiki, IEEE Computer
"Over the past decade, the pervasive and critical nature of embedded software has grown rapidly, leading to serious problems for software development management. Process innovation has thus far overcome many of these problems, but new economic and environmental challenges require major changes in the embedded software field... Governments across the world are aggressively researching and developing smart grids that integrate electricity and IT infrastructure, utilizing renewable energies and liberalizing electric power itself.
The smart grid concept can be applied not only to countries or regions but to factories and houses. For example, smart houses can incorporate hybrid power supplies such as photovoltaic and DC power, fuel cells and storage batteries, and heat pumps as well as traditional electric energy. Panasonic is well on its way to creating such a zero-CO2-emission house...
Embedded system designers now must also factor in glocal requirements, which go beyond energy savings. Consumers want products that match their specific needs as well as natural conditions such as climate and geography. In addition, manufacturers must adapt technology to each country's regulations, which encompass everything from safety and security restrictions, to end-of-product-life guidelines, to hazardous substance prohibitions, to 3R (recycle, reuse, reduce) policies...
A new value solution links embedded software with cutting-edge eco-technologies and environmental awareness. Komatsu, the world's second-largest construction equipment manufacturer, offers a prime example of the benefits of this approach. The company has implemented an inventory-control system that uses embedded GPS and wireless networking technology to efficiently move its equipment around the world, thereby saving energy and reducing carbon emissions. Innovative environmental software gives us spiritual richness and is the most important way to the future..."
See also: the IEEE Computer Society home page
NIEM: Design an XML Information Exchange Between U.S. Government Entities
Priscilla Walmsley, IBM developerWorks
"Information Exchange Package Documentation (IEPD) is a collection of documents describing a National Information Exchange Model (NIEM) exchange. It typically includes schemas, samples, documentation of various kinds, and rendering instructions. It also includes several NIEM-specific artifacts that are required in a NIEM-conformant exchange, such as a metadata document and a catalog file.
In the first three articles of this series, you learned to model a NIEM exchange and define subset and extension schemas that implement that model. Now you take the final step and assemble the schemas, documentation, and all the other artifacts of an exchange into a complete NIEM-conformant IEPD.
This final article in the series describes the process of creating a NIEM IEPD... To be NIEM conformant, an IEPD must contain (at a minimum) some form of master documentation and a change log describing the changes since the last version. You can include any documentation that might typically accompany a software application in the IEPD (either in the master documentation or as separate files), such as: (1) UML models -- sequence diagrams, use cases, class diagrams; (2) A CMT; (3) Business rules -- constraints on the data that are not expressed in the schemas or in the model; (4) Requirements definitions; (5) Testing and/or conformance statements; (6) Memoranda of understanding and letters of endorsement..."
About NIEM: "NIEM (National Information Exchange Model), a partnership of the U.S. Department of Justice and the Department of Homeland Security, is designed to develop, disseminate and support enterprise-wide information exchange standards and processes that can enable jurisdictions to effectively share critical information in emergency situations, as well as support the day-to-day operations of agencies throughout the nation. NIEM enables information sharing, focusing on information exchanged among organizations as part of their current or intended business practices. The NIEM exchange development methodology results in a common semantic understanding among participating organizations and data formatted in a semantically consistent manner. NIEM will standardize content (actual data exchange standards), provide tools, and manage processes. NIEM builds on the demonstrated success of the Global Justice XML Data Model. Stakeholders from relevant communities work together to define critical exchanges, leveraging the successful work of the GJXDM... [web site description]"
Working Draft of the OpenID v.Next Attributes Working Group Charter
Joseph Smarr, Posting to OpenID Specifications Discussions
Members of the OpenID Specifications Discussion list have published a draft charter for discussion on a proposed "OpenID v.Next Attributes" Working Group, building upon an early draft from Mike Jones. The goal of the 'OpenID v.Next Attributes Working Group' would be to "produce attribute transmission and schema specifications for OpenID v.Next that address the limitations and drawbacks present in the OpenID 2.0 attribute facilities that limit OpenID's applicability, adoption, usability, and interoperability. Sharing basic data about the user has become a common enough requirement that OpenID needs to take a more hands-on role in specifying common fields and also more tightly/actively working on how to propose and accept new standard fields going forward."
Specific goals would be to: "(1) define how to ask for and get rich, consistent, common and extensible data attributes, (2) define schemas for common attributes, (3) define a mechanism and process for using attributes not in this common set, (4) enable user control over what attributes are released, (5) enable aggregation of attributes from multiple verifiable attribute sources, (6) enable the use of attributes by non-browser applications, (7) enable the use of attributes both with and without employing an active client, (8) seamlessly integrate with and complement the other OpenID v.Next specifications..."
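For context, the OpenID 2.0 facility whose limitations the charter targets is Attribute Exchange (AX) 1.0, which requests attributes through flat key-value parameters. A minimal sketch of building an AX fetch request follows; the two axschema.org attribute URIs are the standard ones, while the helper function and attribute choices are illustrative, not part of any OpenID specification.

```python
# Sketch of an OpenID 2.0 Attribute Exchange (AX) 1.0 fetch_request.
# The helper and its alias table are illustrative, not from the AX spec.
AXSCHEMA = {
    "email": "http://axschema.org/contact/email",
    "fullname": "http://axschema.org/namePerson",
}

def ax_fetch_request(required, if_available=()):
    """Build the flat openid.ax.* parameter dictionary for a fetch request."""
    params = {
        "openid.ns.ax": "http://openid.net/srv/ax/1.0",
        "openid.ax.mode": "fetch_request",
    }
    for alias in list(required) + list(if_available):
        params[f"openid.ax.type.{alias}"] = AXSCHEMA[alias]
    if required:
        params["openid.ax.required"] = ",".join(required)
    if if_available:
        params["openid.ax.if_available"] = ",".join(if_available)
    return params

req = ax_fetch_request(["email"], ["fullname"])
```

Every attribute must be negotiated by URI like this, with no guaranteed common schema across providers, which is the interoperability gap the v.Next charter's goals (1)-(3) aim to close.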
In addition, a proposal for an OpenID Certification Working Group has been published. Its goal would be "to produce certification checklists for the use of OpenID in different use-cases so that neutral certification bodies such as Open Identity Exchange (OIX) can validate IDPs against them, as opposed to requiring each RP to individually perform such an analysis of each potential IDP...
This WG would (1) Define the checklist for at least one use-case; (2) Have at least one IDP certified against that checklist by a certification body; (3) Have at least one RP who will dynamically support the published list of IDP(s) that have been certified... WG members would produce a list of certification use-cases, and checklists for them. We expect this work will identify the need for additional enhancements to the technical standards, but in general this WG will not directly develop those standards, but will coordinate with other OpenID WGs to define the necessary standards..."
Data.gov Web Site to Relaunch With Microsoft's Bing Search Feature
J. Nicholas Hoover, InformationWeek
"The U.S. federal government's online data repository is getting a redesign focused on usability: the refreshed Data.gov Web site will include integrated Microsoft Bing search, featured data sets, and links to Web applications that take advantage of government data.
In its first year since launching last May, Data.gov has received 97.6 million hits, amassed 250,028 government data sets, and spurred the development of 237 applications and mashups that use data from the site, according to the Office of Management and Budget. Following the Obama administration's lead, open data Web sites have popped up around the country and around the world.
The site is evolving through agile development sessions followed by focus-group work to ensure it meets users' needs and demands. Data.gov's developers studied usage patterns and looked to optimize the site for speed... The new site will also highlight third-party efforts to build Data.gov-based applications. For example, it will link to work done by a small team led by Professor James Hendler at Rensselaer Polytechnic Institute to quickly build data visualization apps and mashups from government data sets, including visualizations of the White House visitor list, a map of ozone levels nationwide, and maps and visualizations of international aid levels..."
According to the web site: "A primary goal of Data.gov is to improve access to Federal data and expand creative use of those data beyond the walls of government by encouraging innovative ideas (e.g., web applications). Data.gov strives to make government more transparent and is committed to creating an unprecedented level of openness in Government. Public participation and collaboration will be key to the success of Data.gov. Data.gov enables the public to participate in government by providing downloadable Federal datasets to build applications, conduct analyses, and perform research..."
See also: the Data.gov web site description
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: email@example.com
Newsletter unsubscribe: firstname.lastname@example.org
Newsletter help: email@example.com
Cover Pages: http://xml.coverpages.org/