The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: March 31, 2009
XML Daily Newslink. Tuesday, 31 March 2009

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
Sun Microsystems, Inc.

Trusted Computing Shapes Self-Encrypting Drives
James Figueroa, IEEE Computing Now

"Earlier this year, the Trusted Computing Group (TCG) released new standards for self-encrypting storage (SES), specifications that many large drive manufacturers anticipated as an improved model of hardware security. Drives featuring the new full-disk encryption began appearing in March, eliciting acclaim from many observers but prompting a slew of questions ranging from user accessibility to key management. According to TCG's official blog, the most practical use for SES is to protect data when a laptop is stolen or drives are recycled. The hardware encryption is specified within the drive and not in any other part of the PC, including RAM, making the technology invulnerable to tactics such as cold boot attacks, which have been proven effective against other forms of full-disk encryption. 'For this use case, the trusted storage specifications define the concept of self-encrypting storage (SES), in which the hardware circuitry for encryption and decryption is integrated directly into the onboard storage electronics,' said Seagate Technologies senior director of research Michael Willett. 'Everything written to storage is encrypted and everything read from storage is decrypted at full channel speeds.' TCG claims that self-encrypting drives don't interfere with performance, a key issue compared to other forms of hardware security. The drives also include a lockdown feature that administrators can use to immediately wipe any data, and with the user-level authentication password the drives are considered impossible to crack. Several manufacturers have introduced new drives with SES technology or are planning launches, including Fujitsu, Hitachi, and Seagate.
TCG released its three new specifications in January—a data center storage specification called Enterprise SSC, the Storage Interface Interactions Specification (SIIF) for common storage interfaces such as SCSI and ATA, and Opal, the specification for individual laptops that's getting the most attention from the industry... TCG is still regarded as a controversial organization in some circles... InfoWorld's Grimes, a TCG supporter, acknowledged the standard's limits: 'The encryption storage device spec may not be broad enough or perfect, but it's a step in the right direction,' he said..."
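The mechanism Willett describes, where encryption and decryption happen entirely inside the drive and a wipe simply discards the key, can be illustrated with a toy sketch. This is not the drives' actual design: the class name is invented, and the XOR keystream stands in for the real AES hardware.

```python
# Toy illustration of self-encrypting storage: all writes are encrypted and
# all reads decrypted inside the "drive", so the host never handles the key.
# The hash-based XOR keystream is a stand-in for real AES circuitry.
import hashlib

class SelfEncryptingDrive:
    def __init__(self, password: str):
        # Key lives inside the drive electronics, derived from the
        # user-level authentication password.
        self._key = hashlib.sha256(password.encode()).digest()
        self._blocks: dict[int, bytes] = {}

    def _keystream(self, lba: int, length: int) -> bytes:
        stream = b""
        counter = 0
        while len(stream) < length:
            stream += hashlib.sha256(self._key + lba.to_bytes(8, "big")
                                     + counter.to_bytes(8, "big")).digest()
            counter += 1
        return stream[:length]

    def write(self, lba: int, data: bytes) -> None:
        ks = self._keystream(lba, len(data))
        self._blocks[lba] = bytes(a ^ b for a, b in zip(data, ks))

    def read(self, lba: int) -> bytes:
        stored = self._blocks[lba]
        ks = self._keystream(lba, len(stored))
        return bytes(a ^ b for a, b in zip(stored, ks))

    def secure_wipe(self) -> None:
        # The "lockdown" wipe: discarding the key renders all data unreadable,
        # with no need to overwrite the platters.
        self._key = b"\x00" * 32

drive = SelfEncryptingDrive("hunter2")
drive.write(0, b"payroll.xml")
print(drive.read(0))  # b'payroll.xml'
drive.secure_wipe()
print(drive.read(0) == b"payroll.xml")  # False
```

The point of the sketch is the wipe: because decryption is keyed inside the drive, destroying 32 bytes of key is equivalent to destroying the whole disk's contents.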

See also: the TCG's Key Management Services Subgroup (KMSS)

Trust Management: Editorial Overview
Sotirios Terzis, Guest Editor's Introduction to IEEE Computing Now

With the emphasis on loosely coupled and decentralized systems and the advent of service orientation, trust management has moved beyond the domains of security, multiagent systems, and e-commerce to become a key concern across all aspects of computing. However, there's currently little agreement on what trust really means and what the best way of managing it is... In credential-based trust, principals' trustworthiness is determined on the basis of the credentials they possess, and trust management is about specifying and interpreting security policies, credentials, and relationships. In the same area is trust negotiation where, motivated by privacy concerns, principals iteratively disclose certified digital credentials that verify their properties to establish mutual trust. Beyond credential-based trust, security-oriented trust management also includes distributed trust, where replication and threshold cryptography are used to reduce the vulnerability of an ensemble of a service's replicas, making it more trustworthy. This view of trust has also been the basis of trusted computing, a collection of technologies that, when combined, help establish a more secure operating environment on various hardware platforms. In the context of software engineering, this view of trust has been extended beyond security to include other software qualities, and has been the basis of the work on trusted components and services. In this context, component and service trustworthiness is determined on the basis of provided qualities guaranteed through formal verification (Bertrand Meyer's "high road" towards trusted components). In contrast, the modern view of trust is that trustworthiness is a measurable property that different entities have in various degrees. Trust management is about managing the risks of interactions between entities.
Trust is determined on the basis of evidence (personal experiences, observations, recommendations, and overall reputation) and is situational—that is, an entity's trustworthiness differs depending on the context of the interaction. This view of trust has been the basis of most work in trust management in multiagent systems. In these systems, trust is used as a measure of agents' competence and benevolence, often abstracting away from the complex factors that can drive agent behavior. The notion of agent benevolence includes both concerns about malicious behavior, typical in security-oriented work, and about selfish behavior that can be counterproductive for the system.
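The evidence-based, situational view of trust described above can be sketched as a small scoring model. Everything here is illustrative: the class, the weights, and the neutral prior are assumptions, not anything from the editorial.

```python
# Hypothetical sketch of evidence-based, situational trust scoring: trust
# combines direct experience (scoped to one interaction context),
# recommendations, and overall reputation. Weights are assumptions.
from dataclasses import dataclass, field

@dataclass
class TrustModel:
    w_experience: float = 0.5
    w_recommendation: float = 0.3
    w_reputation: float = 0.2
    # Per-context evidence: context -> list of outcome scores in [0, 1].
    experiences: dict = field(default_factory=dict)

    def record(self, context: str, outcome: float) -> None:
        self.experiences.setdefault(context, []).append(outcome)

    def trust(self, context: str, recommendation: float, reputation: float) -> float:
        """Situational trust: direct experience only counts within its context."""
        history = self.experiences.get(context, [])
        direct = sum(history) / len(history) if history else 0.5  # neutral prior
        return (self.w_experience * direct
                + self.w_recommendation * recommendation
                + self.w_reputation * reputation)

m = TrustModel()
m.record("payment", 1.0)
m.record("payment", 0.8)
print(round(m.trust("payment", recommendation=0.6, reputation=0.7), 3))  # 0.77
```

Note how the same entity scores lower in a context with no history ("an entity's trustworthiness differs depending on the context of the interaction"): `m.trust("shipping", 0.6, 0.7)` falls back to the neutral prior for the experience term.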

Reputation Bootstrapping for Trust Establishment among Web Services
Zaki Malik and Athman Bouguettaya, IEEE Internet Computing

With the growing trend in Web services, the World Wide Web is shifting from being merely a data repository to being an environment (dubbed the service Web) in which Web users or other applications can automatically invoke other Web services. We define a Web service as a self-describing software application that can be advertised, located, and used across the Web using a set of standards... In service-oriented environments in which honest and malicious service providers coexist, finding the exact balance between fairness and accuracy for reputation bootstrapping is nontrivial. For instance, a malicious service provider might attempt to clear its (negative) reputation history by discarding its original identity and entering the system with a new one (known as whitewashing). In contrast, a different service provider might enter the system for the first time without any malicious motives. Here, we propose a reputation-bootstrapping model that's accurate (that is, the newcomer is assigned an initial reputation that it actually deserves) and fair to both existing services and newcomers (no participant is wrongfully disadvantaged)... A Web service exposes an interface describing a collection of operations that are network accessible through standardized XML messaging. We propose extending the traditional (publish-discover-access) Web service model and introduce the concept of community to aid in bootstrapping. A community is a 'container' that groups Web services related to a specific topic area (for example, auto makers or car dealers). Communities describe desired services by providing interfaces for them without referring to any particular one. We can use ontologies as templates for describing communities and Web services. An ontology typically comprises a hierarchical description of important concepts in a domain and describes those concepts' properties.
A concept in an ontology is similar to the notion of class in object-oriented programming. An ontology relates concepts to each other through ontological relationships, such as subClassOf or superClassOf. Community providers define communities as instances of the community ontology (that is, they assign values to the ontology's concepts). Community providers generally comprise government agencies, nonprofit organizations, and businesses that share a common domain. In our model, a community is itself a service that's created, advertised, discovered, and invoked as a regular Web service, so that service providers can discover it. Such providers identify the community of interest and register their services with it. We use the Web Ontology Language (OWL) to describe the proposed ontology, but you could also use other Web ontology standards... In our model, Web services in a particular domain (that is, registered with the same community) can aid each other in assessing a newcomer's initial reputation. We propose two reputation-bootstrapping approaches. The first relies on cooperation among services to compute a newcomer's reputation in a peer-to-peer (P2P) manner. The second functions under a 'super peer' topology in which the community provider is responsible for assigning the newcomer's reputation.
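The two bootstrapping topologies at the end of the summary can be sketched in a few lines. The aggregation rule (a plain mean) and the neutral default are assumptions for illustration, not the paper's actual model.

```python
# Hypothetical sketch of the two community-based reputation-bootstrapping
# topologies: P2P (peers in the newcomer's community propose values) versus
# "super peer" (the community provider assigns the value).
from statistics import mean

def bootstrap_p2p(peer_reports: dict[str, float]) -> float:
    """P2P: each registered community member proposes an initial reputation
    for the newcomer; the bootstrap value aggregates their proposals."""
    if not peer_reports:
        return 0.5  # neutral default when the community has no opinion
    return mean(peer_reports.values())

def bootstrap_super_peer(community_history: list[float]) -> float:
    """Super peer: the community provider assigns the newcomer the average
    reputation observed for past members of the same community."""
    return mean(community_history) if community_history else 0.5

print(bootstrap_p2p({"dealerA": 0.8, "dealerB": 0.6}))  # ≈ 0.7
```

Either rule illustrates the fairness/accuracy tension the authors raise: a whitewasher re-entering a well-behaved community would inherit its good average, which is why a real model needs more than a plain mean.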

See also: the W3C Web Ontology Language (OWL)

Dependable Service-Oriented Computing
Asit Dan and Priya Narasimhan, IEEE Internet Computing

In the past few years, interest has been increasing in dependable service-oriented computing (SOC), both in industry and academia. SOC can extend the scope of developing dependable solutions beyond runtime protocols by focusing on independently developed services' entire life cycles—design, development, deployment, and runtime management. Distributed computing, in which an application runs over multiple independent computing nodes, has a higher risk of one or more nodes failing than a centralized, single-node environment. On the other hand, distributed computing can also make an overall system more dependable by detecting those faulty nodes—whether they're due to an underlying hardware or software failure or to compromised security through malicious attacks—and then redistributing application components or coordinating them via predefined protocols to avoid such problems. The emerging SOC paradigm is changing how enterprises architect, develop, deliver, and use distributed software systems. As SOC gains momentum, dependability is likely to become an important driving factor and also a key competitive differentiator for the effective, 24/7, highly available deployment of real-world services that meet business requirements. SOC's new set of challenges requires us to revisit dependable distributed computing principles and understand how to apply, transform, or revolutionize dependability practices for use in the emerging SOC world. SOC research must address various knowledge barriers. Much of a deployed SOA's administrative cost is likely to arise from manually finding and fixing various problems over a system's lifetime. To provide high-confidence SOC-based platforms, and to mitigate administrative burdens for supported applications, systems require automated self-management and troubleshooting to determine a problem's root cause.
Such systems must then perform recovery that targets the root cause appropriately, rather than distracting administrators' attention with potential red herrings. In SOC, the loose coupling of services in composing end-to-end business processes and the underlying runtime middleware for orchestrating, routing, mediating, and transforming service requests provide ample opportunities for monitoring execution and collecting detailed information to pinpoint problems as they arise, and subsequently to avoid faulty services by invoking alternate services...
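The closing idea, monitoring invocations so faulty services can be pinpointed and alternates invoked, can be sketched as a simple fail-over wrapper. The service names and exception type are illustrative assumptions, not anything from the article.

```python
# Minimal sketch of fail-over with failure logging: try each service replica
# in order, record failures so a monitor can later do root-cause analysis,
# and fall back to the next alternate on error.
def invoke_with_failover(request, services, log):
    """Invoke services in order until one succeeds; append (name, error)
    pairs to `log` for every failed attempt."""
    for service in services:
        try:
            return service(request)
        except RuntimeError as err:
            log.append((getattr(service, "__name__", "service"), str(err)))
    raise RuntimeError("all replicas failed")

# Two hypothetical replicas of the same service.
def flaky(request):
    raise RuntimeError("node down")

def healthy(request):
    return f"processed {request}"

log = []
print(invoke_with_failover("order-42", [flaky, healthy], log))  # processed order-42
print(log)  # [('flaky', 'node down')]
```

The log is the point: recovery succeeded, but the recorded failure lets an administrator (or an automated self-management layer) target the faulty node rather than chase red herrings.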

W3C eGovernment Interest Group: Open Meeting Summary Report
Kevin Novak and John Sheridan, Interest Group Report

Members of the W3C eGovernment Interest Group have published a Meeting Summary from the March 12-13, 2009 eGovernment stakeholder meeting in Washington, D.C. The purpose of the meeting was to obtain feedback on the First Public Working Draft of the group's Improving Access to Government through Better Use of the Web, published on 1 March 2009. Featured speakers at the meeting included Beth Noveck, US Office of Science and Technology Policy, Ellen Miller, Sunlight Foundation, and Steve Riesner, GovLoop, as well as meeting co-chairs Kevin Novak, American Institute of Architects, John Sheridan, UK National Archives, and W3C Team contact Jose Alonso. Key subject areas addressed by participants were: Openness and Transparency in Government; Social Networking; Data Interoperability and Semantic Web in Government; and Multi-Channel Delivery and Information Access via Mobile Platforms. The term "eGovernment" refers to the use of the Web or other information technologies by governing bodies (local, state, federal, multi-national) to interact with their citizenry, between departments and divisions, and between governments themselves... The group discussed the value of RDFa, XBRL, persistent URIs, and other Web standards that allow governments to expose data easily and ensure it is discoverable, but kept ROI and business case requirements in focus, noting the challenges that government employees face with their managers and superiors when attempting to make government information available. The group identified that small gains are important and that more examples need to be identified and documented. The participants concluded that more needs to be investigated and reviewed prior to finalizing the Open Government Data section in the draft issues paper... The final conversation focused on formulating a plan for the second year of work for the electronic government interest group.
Proposed efforts are to: continue to provide tools and direction for governments to enable social networking and media applications and services, with added focus on policies and cultural aspects; continue to explore and match standards and policies related to multichannel delivery on mobile devices, with the goal of giving governments tools and techniques that allow data to be served via multiple and diverse access points; meet objectives set by the US Federal Government and work to identify standards and practices that would enable e-rulemaking; and continue to grow and mature standards and practices, including additional use cases and scenarios on the topics of open government data, interoperability, authentication and identification, and long-term data management.

See also: the W3C eGovernment Interest Group (eGov IG) Wiki

U.S. House Hearing on the PCI Security Standard (PCI DSS)
Jaikumar Vijayan, ComputerWorld

At a U.S. House of Representatives hearing yesterday, federal lawmakers and representatives of the retail industry challenged the effectiveness of the PCI rules, which are formally known as the Payment Card Industry Data Security Standard (PCI DSS). They claimed that the standard, which was created by the major credit card companies for use by all organizations that accept credit and debit card transactions, is overly complex and has done little to stop payment card data thefts and fraud. Many of PCI's limitations have to do with the static nature of the standard's requirements, according to [Yvette] Clarke, who said the rules are ineffective at dealing with the highly dynamic security threats that retailers and other merchants now face. For instance, she pointed to the data breach disclosed early last year by Hannaford Bros. Co., which said that attackers had stolen card numbers and expiration dates by installing malware on servers at each of the Scarborough, Maine-based grocery chain's stores and capturing the data as cards were swiped at cash registers. Hannaford was certified as PCI-compliant by a third-party assessor in February 2008, just one day after the company was informed of the system intrusions, which had begun two months earlier. That means the grocer received its PCI certification "while an illegal intrusion into its network was in progress," Clarke said. Similarly, RBS WorldPay Inc. and Heartland Payment Systems Inc. were both certified as PCI-compliant prior to breaches that the two payment processors disclosed in December 2008 and January 2009, respectively. Visa Inc. dropped Heartland and RBS WorldPay from its list of PCI-compliant service providers last month and is requiring them to be recertified, although it has said that merchants can continue to do business with the two companies in the meantime.
The key takeaway from the hearing is that the time may have come "for some real oversight in the credit card industry" on how card data is secured, said Tom Kellerman, vice president of security awareness at Core Security Technologies, a security software vendor in Boston. "We saw PCI being challenged in a way it never has been," he said. Kellerman, who was a member of a think-tank commission that issued a set of cybersecurity recommendations for the federal government in December, added that security standards should be based on actual threats, not on a consensus approach aimed at appeasing all stakeholders. And, he said, the credit card companies need to realize that merely transferring to merchants the risks and responsibilities associated with securing data won't cut it any longer.

Building a Bigger Pipeline: Building the XProc Specification with XProc
Norm Walsh, Blog

This essay explores XProc in more detail by constructing a fairly complex 'real world' example of an XProc pipeline. This task formed the heart of my presentation at XML Prague, a few days ago. This essay was composed from the notes I made before that presentation and from my recollection of what I said. As a result, it's a bit rambling. To be clear from the start, this is an essay about what you can do with XProc and not an essay that attempts to motivate why you would or should want to do it. I've already written about why pipelines are a good thing. Vojtech Toman also spoke about XProc at XML Prague (and is also on the working group); his presentation was more motivational than mine, if that's what you're after. My goal here is to build something that does real work and to see how XProc can be used to address a number of common document-processing tasks. Along the way, I'll examine both the strengths and weaknesses of XProc and address at least one weakness with an extension. XML's near ubiquity means that the range of applications that perform 'XML processing' is exceptionally broad. These applications range from what many of us would recognize as traditional document processing on one end to almost binary, interprocess communication on the other. XProc's design aims to be applicable wherever the tasks at hand can be described as the application of a series of transformations of XML documents. That's neither a particularly 'document centric' view nor a particularly 'data centric' view. However, for many of us, the canonical examples of XProc involve traditional document processing steps: validation, transformation, and XInclude, for example. It follows that XProc had better be applicable to real world document transformation tasks. To test XProc's capabilities in this area, we'll examine what is a very real-world example to me: construction of 'XProc: An XML Pipeline Language', the XProc specification itself, from its constituent parts...
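The canonical steps named above (XInclude, validation, transformation) chain together in XProc roughly as follows. This is a minimal illustrative sketch, not Walsh's actual pipeline; the schema and stylesheet hrefs are hypothetical placeholders.

```xml
<!-- Minimal XProc 1.0 pipeline sketch: expand XIncludes, validate the
     assembled document, then transform it. Hrefs are placeholders. -->
<p:declare-step xmlns:p="http://www.w3.org/ns/xproc" version="1.0">
  <p:input port="source"/>
  <p:output port="result"/>

  <p:xinclude/>

  <p:validate-with-relax-ng>
    <p:input port="schema">
      <p:document href="docbook.rng"/>
    </p:input>
  </p:validate-with-relax-ng>

  <p:xslt>
    <p:input port="stylesheet">
      <p:document href="docbook.xsl"/>
    </p:input>
  </p:xslt>
</p:declare-step>
```

Each step's output implicitly feeds the next step's source port, which is what makes the "series of transformations of XML documents" framing so natural in XProc.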

See also: the new version of the XML Calabash XProc processor

Are We Losing the Declarative Web?
Philip Fennell, O'Reilly Technical

"[A new] working group will look at exposing OpenGL capabilities within ECMAScript. The intriguing part is that, as a fan of 3D computer graphics and animation, this has got to be a good sign, especially if it is exposed in this way; but the bothersome bit is how people will end up using it as a result of it being exposed in this way. The crux of the problem for me is the question, JavaScript—what's it good for?... Absolutely, that is the question. What is JavaScript's purpose on the web, what is it good for, and what is it not? [...] I suggested that to drive adoption of declarative languages like XForms, JavaScript implementations are the way forward as they are able to side-step the whole issue of plug-ins which has, so far, dogged XForms, SVG, X3D and the like. With a lot of interest growing in the new high performance JavaScript engines there may come a time (soon I hope) when an alliance of JavaScript and OpenGL could deliver SVG and X3D rendering within the browser. I don't know what the realities of doing such a thing would be using these technologies, but writing the libraries to do so is a worthier pursuit than stopping short at the purely procedural level... Many people have put a considerable amount of effort into making content mark-up both rich in semantics (just look at DocBook and XHTML 2) and extensible through the adoption of XML Namespaces and open Schemas. User input and client-side logic are well served by XForms, and presentation extends these formats on many levels via CSS. Other modes of delivery, like print, are catered for by XSL-FO. Richer and more interactive experiences can be delivered either in-line or out-of-line using SVG and SMIL. Where fully supported, many of these formats can be freely inter-woven because of, rather than in spite of, XML Namespaces. Content can be aggregated with XInclude, stored and syndicated with Atom, and validated either by grammar using XML Schema or via rules with Schematron...
JavaScript should be used to help implement the declarative languages that a web browser is designed to handle if those languages are not natively supported by the browser. Any other use is by-and-large a distraction from moving the Web forward..."


XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation
Microsoft Corporation
Oracle Corporation
Sun Microsystems, Inc.


Hosted By
OASIS - Organization for the Advancement of Structured Information Standards
