The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: April 04, 2007
XML Daily Newslink. Wednesday, 04 April 2007

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
IBM Corporation

Introducing RDFa, Part Two
Bob DuCharme

In part 1 of this article, we saw that RDFa, a new syntax for representing RDF triples, can be embedded into arbitrary XML documents more easily than RDF/XML. RDFa is particularly good for embedding these triples into XHTML 2, which has a few new attributes that make RDFa easier to use. Part 1 showed several roles that RDFa metadata can play, describing metadata about the containing document and about individual elements within it. We also saw how RDFa can represent triples that use existing web page content as their subject, and triples that specify new objects, which are useful for adding workflow metadata about a document or for specifying normalized values such as "2007-04-23" as metadata associated with a date displayed on a web page as "April 23, 2007". This article shows how to use RDFa to express additional, richer metadata, and explores some ideas for automating the generation of RDFa markup. Whenever you see HTML being generated automatically, you have an opportunity to create RDFa. Movie timetables, price lists, and many other web pages where we look up information are generated from backend databases. This is fertile ground for easy RDFa generation, which could make RDFa's ease of incorporating proper RDF triples into straightforward HTML one of the great milestones in the building of the semantic web.
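The normalized-value pattern described above takes only one element in XHTML+RDFa. A minimal, hypothetical fragment (assuming the Dublin Core namespace is declared on the document's root element as xmlns:dc="http://purl.org/dc/elements/1.1/"):

```xml
<p>
  Review due:
  <!-- Readers see "April 23, 2007"; an RDFa processor extracts the
       normalized triple object "2007-04-23" from the content attribute. -->
  <span property="dc:date" content="2007-04-23">April 23, 2007</span>
</p>
```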

See also: RDFa Use Cases

A Data-Centric Approach to Distributed Application Architecture
Gerardo Pardo-Castellote and Supreet Oberoi

The object-oriented development approach is useful for developing applications in general, but a data-centric approach is better for designing and developing distributed applications. This article introduces the data-centric approach, explaining how to design with data-centric principles and implement data-centric applications. In general, the tenets of data-oriented programming include the following principles: (1) Expose the data. Ensure that the data is visible throughout the entire system. Hiding the data makes it difficult for new processing endpoints to identify data needs and gain access to that data. (2) Hide the code. Conversely, none of the computational endpoints has any reason to be cognizant of another's code. By abstracting away from the code, data is free to be used by any process, no matter where it was generated. (3) Separate data and code into data-handling and data-processing components. Data handling is required because of differing data formats, persistence, and timeliness, and is likely to change during the application lifecycle. Conversely, data processing requirements are likely to remain much more stable. (4) Generate code from process interfaces. Interfaces define the data inputs and outputs of a given process. Having well-defined inputs and outputs makes it possible to understand and automate the implementation of the data-processing code. (5) Loosely couple all code. With well-defined interfaces and computational processes abstracted away from one another, endpoints and their computations can be interchanged with little or no impact on the distributed application as a whole. The data-oriented approach to application design is effective in systems where multiple data sources are required for successful completion of the computing activity, but those data sources reside in separate nodes on a network in a net-centric application infrastructure. 
For network-centric distributed applications, applying a data-oriented programming model lets you focus on the movement of data through the network, an easier and more natural way of abstracting and implementing the solution.
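As a rough illustration of these tenets (a hypothetical sketch, not code from the article; all names are invented), the fragment below exposes the data type, hides endpoint code behind a tiny publish/subscribe data-handling component, and keeps the data-processing endpoints loosely coupled:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Reading:
    """The exposed data model: visible to every endpoint (tenet 1)."""
    sensor_id: int
    value: float

class Bus:
    """Data-handling component: moves data between endpoints without
    knowing anything about their code (tenets 2 and 3)."""
    def __init__(self) -> None:
        self._subs: Dict[str, List[Callable[[Reading], None]]] = {}

    def subscribe(self, topic: str, fn: Callable[[Reading], None]) -> None:
        self._subs.setdefault(topic, []).append(fn)

    def publish(self, topic: str, data: Reading) -> None:
        for fn in self._subs.get(topic, []):
            fn(data)

# Data-processing endpoints: interchangeable, coupled only to the data
# type and the topic name, never to each other (tenet 5).
results: List[float] = []
bus = Bus()
bus.subscribe("readings", lambda r: results.append(r.value * 2))
bus.publish("readings", Reading(sensor_id=1, value=21.0))
# results is now [42.0]
```

Because endpoints share only the `Reading` type and a topic name, either side can be replaced without touching the other, which is the interchangeability the article describes.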

Enterprise SOA Adoption to Double in Next Two Years
Darryl K. Taft, eWEEK

According to a new Evans Data Corp. study, enterprise adoption of service-oriented architecture is expected to double over the next two years. Evans Data's recently released Corporate Development Issues Survey showed that nearly one-fourth of the enterprise-level developers surveyed said they already have SOA environments in place, and another 28 percent plan to do so within the next 24 months. Meanwhile, adoption of ESBs (enterprise service buses), which is currently at 15 percent, will more than double during this same time, according to the survey results. The Evans study of corporate-developer issues also looked at budgets, outsourcing and technologies such as grid computing. For instance, other findings from the survey of more than 300 in-house corporate developers showed that 60 percent of them said they will likely increase budget spending on Web security over the next year. Web services came in second on the list of budgeting priorities, followed by integration projects.

Middleware Makes Data Sharing Easier for N.J. Police
Trudy Walsh, Government Computer News

The New Jersey State Police, like just about every other law enforcement organization over the past five years or so, has been working on improving its data sharing capabilities. Wanting to take a standards-based approach, NJSP decided to use Crossflo Data Exchange (CDX), a browser-based middleware solution from Crossflo Systems Inc. The software provides secure cross-domain data sharing across disparate platforms and different data structures, said Joe Ramirez, Crossflo's director of technical integration services. Written in Java 2 Enterprise Edition, CDX supports the Global Justice XML Data Model (GJXDM), the law enforcement data standard developed by the Justice Department. Specifically, NJSP is using CDX to help extract incident and arrest data from its Records Management System (RMS) and integrate it with the state police's Statewide Intelligence Management System (SIMS). The state police recently upgraded to the browser-based CDX Version 3.2, which supports more data sources, including Web services and message queuing systems.

DITA Specialization Tutorial: Beta 1
W. Eliot Kimber, Resource Announcement

"I've gotten my DITA specialization tutorial far enough along that I thought I could publish it safely. The tutorial is now available in HTML and as a downloadable package, including all the source content and whatnot..." [Excerpt]: DITA specialization allows you to define new element types that are specific to your information and business processes while still being recognizable as base DITA-defined element types. These "specialized elements" let you tailor your markup to your local needs while still taking advantage of any processing associated with the base type. In addition, if you need to, you can extend your processing tools to add new functionality that is specific to your specialized types. When you "create a specialization" you are defining new document type modules that can then be combined with the base DITA modules to create new DITA-based document types. Note that the process of combining document type modules to create a shell document type is called "configuration" and is distinct from the process of specialization. In short, specialization is the process of declaring new element types and attributes based on existing DITA element types and attributes, while configuration (also called integration) is the process of combining declaration modules to define a complete shell document type for creating document instances. A shell document type that simply includes or doesn't include a set of pre-existing modules is a configuration, as compared to a module that defines entirely new element types, which is a specialization. You can do configuration without doing specialization, but you can't do specialization without also doing configuration. The DITA architecture defines specific structural, naming, and coding patterns for specialization modules and shell DTDs that help ensure consistency of design and implementation and make it easy to combine modules into new document types.
While these patterns are not strictly needed technically (they have no bearing on the validity or processibility of DITA documents), they make it easier to use and re-use modules and generally keep things consistent. Once you understand the patterns and how the pieces fit together, you will see that creating new specializations and configurations is remarkably easy.
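The pattern at the heart of specialization is the DITA class attribute, which records a specialized element's ancestry so that processors which only know the base types still work. A hypothetical sketch (the <faq> element names are invented for illustration; note the leading "- " and trailing space the DITA architecture requires):

```xml
<faq id="sample-faq" class="- topic/topic faq/faq ">
  <faqtitle class="- topic/title faq/faqtitle ">A specialized topic</faqtitle>
  <faqbody class="- topic/body faq/faqbody ">
    <p class="- topic/p faq/p ">A processor that knows only base DITA can
    still treat this as an ordinary topic/body/p structure.</p>
  </faqbody>
</faq>
```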

See also: DITA references

IBM, Oracle, Others Create Services Consortium
China Martens, InfoWorld

IBM Corp. and Oracle Corp., more often rivals than partners, have joined in creating an industry consortium focused on establishing what it calls "service science" as both a key area for investment by companies and governments and as a full-blown academic discipline. The vendors along with two services organizations—the Technology Professional Services Association (TPSA) and the Service and Support Professionals Association (SSPA)—and other IT companies and universities launched the Service Research and Innovation (SRI) Initiative Wednesday. The group's main goal is to increase the amount of money spent on service research and development in the IT industry. Its members also hope that evangelizing service science to the corporate world, government and academia will eventually result in the area achieving the same status as computer science. The founding members of the SRI Initiative have formed an advisory board with members including services providers Accenture Ltd. and Computer Sciences Corp. as well as Cisco Systems Inc., EMC Corp., Hewlett-Packard Co., Microsoft Corp. and Xerox Corp. Missing from the list so far are the two other leading services providers Cap Gemini SA and Electronic Data Systems Corp. The board also includes researchers from a variety of academic institutions including Arizona State University, Cranfield School of Management, the University of California, Los Angeles and the Wharton School of Business.

See also: the web site

Don't Use WEP, Say German Security Researchers
Peter Sayer, InfoWorld

The Wi-Fi security protocol WEP should not be relied on to protect sensitive material, according to three German security researchers who have discovered a faster way to crack it. They plan to demonstrate their findings at a security conference in Hamburg this weekend. Now it takes just 3 seconds to extract a 104-bit WEP key from intercepted data using a 1.7GHz Pentium M processor. The necessary data can be captured in less than a minute, and the attack requires so much less computing power than previous attacks that it could even be performed in real time by someone walking through an office. Anyone using Wi-Fi to transmit data they want to keep private, whether it's banking details or just e-mail, should consider switching from WEP to a more robust encryption protocol, the researchers said. Erik Tews (Darmstadt University of Technology), along with colleagues Ralf-Philipp Weinmann and Andrei Pyshkin, published a paper about the attack showing that their method needs far less data to find a key than previous attacks: just 40,000 packets are needed for a 50 percent chance of success, while 85,000 packets give a 95 percent chance of success, they said. Although stronger encryption methods have come along since the first flaws in WEP were discovered over six years ago, the new attack is still relevant, the researchers said. Many networks still rely on WEP for security: 59 percent of the 15,000 Wi-Fi networks surveyed in a large German city in September 2006 used it, with only 18 percent using the newer WPA (Wi-Fi Protected Access) protocol to encrypt traffic.
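A back-of-the-envelope check on the capture-time claim. The capture rate below is an assumed figure for active packet reinjection on an 802.11g network, not a number taken from the paper:

```python
PACKETS_50 = 40_000   # packets for a ~50% chance of key recovery
PACKETS_95 = 85_000   # packets for a ~95% chance
RATE = 1_500          # assumed packets captured per second (hypothetical)

secs_50 = PACKETS_50 / RATE
secs_95 = PACKETS_95 / RATE
print(f"~50% success after ~{secs_50:.0f} s; ~95% after ~{secs_95:.0f} s")
```

Under that assumed rate, both thresholds are reached in well under a minute, consistent with the researchers' statement that the necessary data can be captured in less than a minute.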

Selected from the Cover Pages, by Robin Cover

UN/CEFACT Releases XML Schema for Cross Industry Electronic Invoice (CII)

An announcement from the United Nations Economic Commission for Europe (UNECE) describes the release of an International e-Invoice designed for use by the Steel, Automotive, or Electronic industries, as well as in the retail sector or Customs and other Government Authorities. Mike Doran, Chair of the UN/CEFACT Forum Management Group, noted that the Cross Industry Invoice (CII) has the potential to create the necessary critical mass of national and international business and government partners required in order to reap the benefits of the huge savings offered by e-invoicing. According to text in the CII schema's annotation/documentation element, "The cross industry invoice is an electronic document exchanged between trading partners with a prime function as a request for payment. It is an important accounting document and has potential legal implications for sender and receiver. It is also used in the European Union as the key document for VAT declaration and reclamation, for statistics declaration in respect of intra community trade, and to support export and import declaration in respect of trade with countries outside the European community." Tim McGrath, Co-Chair of the OASIS Universal Business Language (UBL) Technical Committee, noted that this first candidate release of UN/CEFACT's Cross Industry Invoice schema provides an opportunity for OASIS to further collaboration with UN/CEFACT.


XML Daily Newslink and Cover Pages are sponsored by:

BEA Systems, Inc.
IBM Corporation
Sun Microsystems, Inc.


Hosted By
OASIS - Organization for the Advancement of Structured Information Standards

Sponsored By

IBM Corporation
ISIS Papyrus
Microsoft Corporation
Oracle Corporation


