The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: December 31, 2004
XML Articles and Papers December 2004

XML General Articles and Papers: Surveys, Overviews, Presentations, Introductions, Announcements

December 2004

  • [December 31, 2004] "OpenOffice 2.0 Nears Beta Testing." By Steven J. Vaughan-Nichols. In eWEEK (December 20, 2004). "OpenOffice.org 2.0, the open-source office suite sponsored by Sun Microsystems Inc., is getting closer to reality. While StarOffice (OpenOffice's commercial, closed-source brother) has long had a database, Software AG's ADABAS D relational DBMS, OpenOffice has not had one. Microsoft Office, with which OpenOffice is often compared, comes with the Access database. Starting with 2.0, however, OpenOffice will include the open-source HSQL database engine. The new OpenOffice will use the OASIS Open Office XML Format as its default file format. Despite its name, this is not OpenOffice-specific. The format is a new standard, based on OpenOffice formats and supported by OASIS (Organization for the Advancement of Structured Information Standards). This is meant to be an open standard for office documents. If all goes well with the beta testing, Sun will roll out the OpenOffice 2.0 release candidate in February [2005] and the final version in March [2005]..." Note: The OASIS Open Office XML Format Technical Committee announced that a second revision of the Open Document Format for Office Applications (OpenDocument) 1.0 specification was approved as an OASIS committee draft. This "revised specification, formerly called Open Office Specification, contains new definitions in response to new developments in the office application space, but also error corrections and clarifications." The Committee Draft contains three embedded Relax-NG schemas: (1) the RNG schema for office documents, specified in chapters 1 to 16; (2) the normative schema for the manifest file used by the OpenDocument package format, specified in chapter 17; (3) the strict schema for office documents that permits only meta information and formatting properties contained in this specification itself, specified in appendix A. The approved Committee Draft is available in PDF format and XML format.
A proposal has been made to rename the OASIS TC to "OASIS Open Document Format for Office Applications (OpenDocument) TC." General references in "OpenOffice.org XML File Format."

  • [December 21, 2004] "EPCglobal Ratifies Gen 2 Standard." By Mark Roberti. From RFID Journal (December 16, 2004). "EPCglobal announced late today that its board of governors has ratified the second-generation Electronic Product Code specification as an EPC standard and that it will be royalty-free. The move paves the way for vendors to begin making products based on the specification, which was designed to work globally and be approved as an international standard by the International Organization for Standardization (ISO). "This is the most significant event in our history," Mike Meranda, president of EPCglobal US, told RFID Journal... EPCglobal says the standard is royalty-free. Meranda says that as part of the ratification process, EPCglobal engaged legal counsel to examine claims made by Intermec Technologies, an Everett, Wash.-based RFID systems provider, that the Gen 2 spec contains intellectual property that it has patented. After exhaustive examination, the lawyers concluded that Intermec's patents are not essential to implementing the standard and therefore the standard is royalty-free... Intermec indicated it believed that its patents would be infringed by any products built to the new standard. Intermec president Tom Miller said in a statement released tonight that ratification of the Gen 2 standard 'is an important step towards bringing the powerful benefits of RFID to market... It is important to remember the claim of a royalty-free protocol does not mean UHF RFID products will be royalty-free... We believe companies who offer UHF RFID products will still require a license to use Intermec intellectual property.' In addition to the IP claims included in the Generation 2 standard, Intermec holds more than 125 additional UHF RFID patents..." See also: (1) the announcement; (2) "Radio Frequency Identification (RFID) Resources and Readings"; (3) "Physical Markup Language (PML) for Radio Frequency Identification (RFID)"; (4) "Patents and Open Standards."

  • [December 20, 2004] "The Benefits of ebXML for e-Business." By David Webber, with Mark Yader, John Hardin, and Patrick Hogan. Presented at the IDEAlliance XML 2004 Conference and Exposition (December 15-19, 2004, Washington, DC, USA). ['The ebXML specifications have matured rapidly over the past year. New components and capabilities have extended the architecture for service oriented architectures (SOA). This paper discussed a new comprehensive release of ebXML that is available from OASIS.'] "With thousands of users globally, the ebXML infrastructure is beginning to enter the mainstream of business consciousness today. Born from a process begun by two organizations (UN/CEFACT and OASIS) that each brought unique backgrounds and solution envisioning together, ebXML has created a new and compelling metaphor for conducting e-Business via the Internet. The vision and model for better e-Business using open standards was created by combining the business knowledge gained from twenty years of EDI-based interactions from CEFACT with the OASIS web commerce and marketplace expertise of internet-based companies using XML. That model seeks to move from processes that are highly labour-intensive to configure and deploy manually in a paper-based culture to a world where trading partners can discover each other and then begin to do business electronically by linking their systems together using ebXML and the Internet. Each step of this process is supported and enabled by ebXML through the use of discrete components that are engineered to deliver specific functionality. Each component can be used individually or combined as needed. Just as LINUX is widely used by businesses today to run their web sites and services, the ebXML infrastructure provides the means for open and low-cost global commerce...
The rapid acceptance of LINUX worldwide and especially in high growth countries such as China, India and Japan, should fuel dramatic growth in the ebXML infrastructure as these enhanced LINUX versions become available. LINUX is in many ways the perfect vehicle for ebXML and the availability of ebXML enhanced LINUX versions will ensure that ebXML becomes a critical component in global electronic commerce. [The ebXML] components available today are being used to deploy a variety of business solutions. Examples include supply of spare parts and maintenance support for the Metro Rail in Hong Kong; Banking and Insurance services in Korea; in Australia the Electricity and Gas supply in Sydney and small farmers selling wheat to cooperatives; raw steel distribution in Europe's 24x7 steel marketplace; the US DOD EMALL for logistics parts purchase; State of Texas electricity distribution marketplace; and Volkswagen is working on using ebXML to cut costs to its dealerships and suppliers worldwide. These examples illustrate the range from small to large configurations..." General references in "Electronic Business XML Initiative (ebXML)." [alt URL for HTML, PDF, cache]

  • [December 17, 2004] "Registration of Fine-Grained XML Artifacts in ebXML Registry." By Joseph M. Chiusano (Booz Allen Hamilton). Presentation on the OASIS/ebXML Registry Fine-Grained Artifacts Technical Note. Given at the December 17, 2004 joint meeting of the XML Community of Practice (xmlCoP) and the DHS Core Data Types Focus Group (CDT-FG), Washington, DC, USA. "A Technical Note is underway within the OASIS/ebXML Registry Technical Committee for the registration of 'fine-grained' XML artifacts. Fine-grained XML artifacts are XML constructs that are 'building blocks' for XML-based transaction definitions in electronic information exchanges. These XML-based transaction definitions are often represented as XML schemas based on controlled vocabularies. Such controlled vocabularies often represent the 'information domain' for Communities of Interest (CoIs). We will refer to these XML-based transaction definitions as XML-based vocabularies. Fine-grained XML artifacts include: Elements, Attributes, Data Types, Namespaces. Metadata registries provide a means by which metadata can be registered, discovered, maintained, shared, and evolved. Such registries can support Communities of Interest (CoIs) in governing and evolving controlled vocabularies. Examples of metadata registry standards are ebXML Registry and ISO/IEC 11179... There has been a growing need for information exchange partners and Communities of Interest to maintain XML artifacts at a fine-grained level within metadata registries, using an open standard. This enables XML-based vocabularies to be managed and evolved as building blocks rather than as monolithic, high-level artifacts. Such support will allow discovery and reuse of these artifacts in multiple higher-level artifacts; [we] can gain efficiencies of reuse, as well as consistency in representation and usage...
Although the generic capabilities of ebXML Registry have always supported fine-grained XML artifact registration, at present no document describes how this should be done as a best practice. Such artifacts can be registered in an ebXML Registry, but their metadata content is determined by each individual submitter; this means that such artifacts can be represented as different 'object types' by different submitters, which impedes interoperability... The 'Fine-Grained XML Artifacts Technical Note' will describe how such artifacts can be managed as RegistryObjects within ebRIM in a standard manner, using the existing registry capabilities..." [also in .PPT format, cache]
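The interoperability problem described above is easiest to see in miniature. The sketch below is purely illustrative (the class names, the `urn:reg:` identifier form, and the four-value type vocabulary are this note's own invention, not ebRIM API definitions): when every submitter classifies artifacts against the same fixed set of object types, a registry key of (type, namespace, name) makes fine-grained artifacts discoverable and reusable across submissions.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: the type vocabulary and URN form are invented
# for illustration, not taken from the ebRIM specification.
FINE_GRAINED_TYPES = {"Element", "Attribute", "DataType", "Namespace"}

@dataclass
class RegistryObject:
    object_type: str   # standardized type, so all submitters classify alike
    name: str          # e.g., the local name of an element
    namespace: str     # XML namespace the artifact belongs to

    def __post_init__(self):
        if self.object_type not in FINE_GRAINED_TYPES:
            raise ValueError("unknown objectType: %s" % self.object_type)

@dataclass
class Registry:
    objects: dict = field(default_factory=dict)

    def register(self, obj: RegistryObject) -> str:
        # Keying on (type, namespace, name) makes registration idempotent:
        # two submitters registering the same element converge on one entry.
        key = (obj.object_type, obj.namespace, obj.name)
        self.objects[key] = obj
        return "urn:reg:%s:%s" % (obj.object_type.lower(), obj.name)

    def discover(self, object_type: str, name: str):
        return [o for k, o in self.objects.items()
                if k[0] == object_type and k[2] == name]

reg = Registry()
reg.register(RegistryObject("Element", "PersonName", "urn:example:hr"))
matches = reg.discover("Element", "PersonName")
```

Without the shared type vocabulary, one submitter's "Element" and another's "XMLElement" would be distinct, unfindable object types, which is exactly the situation the Technical Note aims to standardize away.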

  • [December 17, 2004] "U.S. Army Aims to Halt Paperwork with IBM System." By Eric Auchard. From Reuters Top Technology News (December 17, 2004). "No more passing the buck. The U.S. Army has enlisted IBM and a handful of other companies to create an automated record-keeping system that ends the need for electronic forms to be printed out, signed and delivered up the military service's chain of command. IBM, the world's largest computer company, together with PureEdge, an electronic forms supplier, and Silanis, a digital signature technology maker, said on Thursday it has created a complete system to take the paperwork out of Army bureaucracy. Terms were not disclosed. The project is being managed by contractor Enterprise Information Management Inc. (EIM). When fully implemented over the next decade, the forms management system could save well over a billion dollars a year in unnecessary paperwork and administrative procedures, according to an Army Audit Agency report... The Army now relies on up to an estimated 100,000 different forms for everything from supply-ordering and pay-disbursement to medical record keeping and the awarding of citations. Currently, the Army has the ability to convert paper-based forms into digital files that can be located on an official Army Web site. But while it is possible to fill in the form and store the data electronically, users have been forced to print a paper copy, manually sign and then hand-carry or mail the form to complete many authorization processes... The new single, centralized document warehouse comprises XML-based forms, digital signature approval technology, and content management software from IBM that will help the Army to automate the entire form-completion process. It will be used by the 1.4 million direct and indirect employees of the Army, which include uniformed staff, reservists, and civilian contractors, Acklin said. In an average year, they fill out some 15 million forms, according to IBM..."

  • [December 17, 2004] "The Atom Notification Protocol." By James M. Snell. IETF Network Working Group. [Individual] Internet Draft. Reference: 'draft-snell-atompub-notification-00'. December 14, 2004, expires June 14, 2005. "This memo presents a protocol for posting notifications of new or updated content using a combination of the Atom Syndication Format and HTTP POSTs. The Atom Notification Protocol has been designed to complement the Atom Publishing Protocol by providing the means of sending notifications when Atom-based content is created or updated. The Atom Notification Protocol works by POSTing Atom Entries or Atom Feeds to a NotificationURI using HTTP POST. As is the case with the Atom Publishing Protocol, this document does not seek to specify the form of the URIs that are used. This document does, however, specify the formats of the entities posted to those URIs. The NotificationURI is used to POST notifications. A notification consists of a single Atom Entry or Atom Feed. The notification is essentially a one-way operation that implies no semantics or action on the part of the receiver. The request contains a filled-in Atom Entry or Atom Feed. A notification request containing an Atom Entry is intended to notify the receiving endpoint that a specific entry has been created or updated. A notification request containing an Atom Feed is intended to notify the receiving endpoint that a specific feed has been created or updated..." [Author's note, posted to the Atom-related mailing lists: I have published the first draft of the Atom Notification Protocol... I would like to discuss the possibility of this work being picked up by the working group. It may even be feasible to incorporate this into the main body of the Atom Protocol spec. In the meantime, however, please take a look and review the draft and please comment on the protocol mailing list...] General references: "Atom Publishing Format and Protocol."
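The mechanism is deliberately simple: build a single Atom Entry (or Feed) and POST it. A minimal sketch, with the caveats that the namespace below is the final Atom 1.0 one (the 2004 draft format used a draft namespace) and that the NotificationURI path and media type shown in the comment are deployment-specific assumptions:

```python
import xml.etree.ElementTree as ET

# Final Atom 1.0 namespace; the 2004-era draft used a different draft
# namespace, so treat this as illustrative rather than draft-exact.
ATOM_NS = "http://www.w3.org/2005/Atom"

def make_notification(entry_id, title, updated):
    """Build the body of a one-way notification: a single Atom Entry."""
    ET.register_namespace("", ATOM_NS)
    entry = ET.Element("{%s}entry" % ATOM_NS)
    ET.SubElement(entry, "{%s}id" % ATOM_NS).text = entry_id
    ET.SubElement(entry, "{%s}title" % ATOM_NS).text = title
    ET.SubElement(entry, "{%s}updated" % ATOM_NS).text = updated
    return ET.tostring(entry, encoding="unicode")

body = make_notification("urn:uuid:1234", "New article posted",
                         "2004-12-14T00:00:00Z")
# Delivery is a plain HTTP POST to the receiver's NotificationURI,
# e.g. (hypothetical path and media type) with http.client:
#   conn.request("POST", "/notify", body,
#                {"Content-Type": "application/atom+xml"})
```

Because the receiver attaches no semantics to the request, the sender needs nothing beyond the URI; that one-way property is what distinguishes a notification from an Atom Publishing Protocol edit.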

  • [December 16, 2004] "Buildings Become Information Systems." By Daniel J. Weitzner (Technology and Society Domain Lead, World Wide Web Consortium; Principal Research Scientist, MIT Computer Science and Artificial Intelligence Laboratory). From ComputerWorld (November 29, 2004). "The transparent enterprise is characterized by increased data integration possibilities across formerly stovepiped databases. Now, even the buildings that house our transparent enterprises are becoming transparent themselves. In response to the demands of energy efficiency, security, lower operating costs and the need to increase space-planning flexibility, the physical structures in which we work are on their way to becoming more closely integrated with our information infrastructure... In support of these goals, building systems, once the domain of HVAC engineers and security services, are becoming just one more information system. As with our other information systems, the first design requirement is that it be built on open standards for interoperability. The International Standards Organization has even released a standard (ISO 16484-5:2003 [BACnet, ISO/ANSI/ASHRAE standard data]) that 'defines data communication services and protocols for computer equipment used for monitoring and control of heating, ventilation, air conditioning and refrigeration, and other building systems.' The aim of the standard is to facilitate 'the application and use of digital control technology in buildings.' As buildings become more automated, formerly disparate components (HVAC, LANs, security systems and even signage) will become interoperable with one another and with other information systems traditionally considered beyond the boundaries of the building systems themselves...
New 'interoperable' building systems represent a dramatic change in design and function from even the most complex systems of the past. The critical change is that today's 'smart' buildings have APIs that allow the buildings' physical systems to be linked, as any other piece of software, to other parts of an enterprise information system. The interface between building systems and the rest of the enterprise information infrastructure will now be defined by a series of SOAP message formats and the exchange of XML-formatted data... The transparent building raises the design stakes for efforts to ensure the integrity, reliability and accuracy of enterprise information systems. Today, system faults may result in a sales order being lost or an employee's paycheck being delayed. Tomorrow, with more transparent and dynamic links between building systems and current information systems, the results could be an employee locked out of the office, power shutting down in a building at the wrong time or embarrassing information being flashed across the building's public information displays..." See: (1) "ASHRAE Releases BACnet Web Services Interface Specification for Public Review"; (2) OASIS Open Building Information Exchange (oBIX) TC; (3) general references in "XML and Web Services for Facilities Automation Systems."
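A concrete sense of what "a series of SOAP message formats" means in this setting: a building-system API call would travel as an XML document in a SOAP envelope, like the sketch below. Every element name and the namespace are invented for illustration; they are not taken from BACnet/WS, oBIX, or any vendor interface.

```xml
<!-- Hypothetical sketch only: operation, element names, and namespace are
     invented to illustrate the pattern, not drawn from BACnet/WS or oBIX. -->
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <SetSetpoint xmlns="urn:example:building">
      <Zone>floor-3/east</Zone>
      <TemperatureCelsius>21.5</TemperatureCelsius>
    </SetSetpoint>
  </soap:Body>
</soap:Envelope>
```

The point of the article follows directly from this shape: once a thermostat setpoint is just another XML message, it can be routed, logged, and misrouted by the same enterprise middleware as a purchase order.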

  • [December 16, 2004] "W3C Publishes Web Architecture: Organization Looks to Codify Principles." By Paul Krill. From InfoWorld (November 06, 2004). "The World Wide Web Consortium (W3C) this week published a final version of its Architecture of the World Wide Web, Volume 1 document, looking to set forth codified principles for the Web itself. Published as a formal W3C recommendation, the architecture features components for URIs (Uniform Resource Identifiers), data formats, and protocols such as HTTP. 'The purpose of this document is [to serve as a guide] if you need to know everything about the Web in 50 pages, in a sense,' said Dan Connolly, member of the W3C Technical Architecture Group. W3C with the document is eyeing those doing software development and design work, he said. 'We'd like them to know the principles of the Web that allow it to scale and work well,' Connolly said... Earlier drafts of the document have been used in software engineering classes. The planned next volume of the document will focus on areas of Web applications such as Web services, the semantic Web, mobile Web applications, and additional principles. Work on this volume will start in 2005, with no date set yet for completion, according to W3C..." See details in the news story "'Architecture of the World Wide Web, Volume One' Released as a W3C Recommendation."

  • [December 16, 2004] "Priscilla Walmsley on XQuery and XML Schema Technologies." By Ivan Pedruzzi and Priscilla Walmsley. In The Stylus Scoop Newsletter (December 16, 2004), Stylus Studio Developer Network. ['Priscilla Walmsley has been working closely with XML Schema and XQuery for years. She was a member of the W3C XML Schema Working Group from 1999 to 2004, where she served as editor of the second edition of XML Schema Part 0 (Primer). As a result of her work with XML Schema, Ms. Walmsley wrote the respected book Definitive XML Schema for Prentice Hall. She has also been an Observer of the XML Query Working Group for two years. During that time she has written another book, Definitive XQuery, which will be published in 2005. Currently, Ms. Walmsley serves as Managing Director of Datypic, where she specializes in XML- and SOA-related consulting and training. Ivan Pedruzzi, Stylus Studio's Senior Product Architect and editor of The Stylus Scoop newsletter, caught up with Ms. Walmsley at the XML Conference & Exhibition 2004 (XML 2004) last month, where Ms. Walmsley gave a presentation entitled 'Introduction to XQuery'. The two chatted about the XQuery buzz, XML Schema, XQJ technologies, and other hot topics in the XQuery development arena.'] Walmsley: "I was immediately attracted to XQuery because it has an intuitive syntax that I enjoy using and stretching to its limits. Having spent many years using SQL, XQuery feels familiar, yet much more powerful. I've enjoyed working with XSLT and XPath 1.0 over the years, but for some of the work I've done they felt like an awkward fit. For a transformation scenario where I'm saying 'every time you get an x element, do this' it works great. But for applications that involve selecting a subset of an XML document, joining it with other data, and performing calculations or manipulating it in some way, I've sometimes felt like XSLT was making me force a square peg into a round hole.
XQuery embedded in program code is a great way to reduce (and transform) the set of data you're working with rather than tediously traversing the DOM model of an entire document. In the past I've done this with XPath, but XQuery lets me join multiple data sources easily and sort my results, actions that are not part of XPath. Being a true data-head, I also really like the typing capabilities of XQuery. Some of the advanced functionality of XQuery is more data-oriented, and there are some compelling benefits for using XQuery with XML Schemas..." General references in "XML and Query Languages."

  • [December 16, 2004] "The QName URN Namespace." By David Orchard (BEA Systems, Inc) and Rich Salz (DataPower Technology, Inc). IETF Network Working Group. Internet Draft. Reference: 'draft-rsalz-qname-urn-00.txt'. December 9, 2004, expires June 9, 2005. "This specification defines a Uniform Resource Name namespace for XML namespace-qualified names, QNames. As long as the URN is encoded in the same character set as the document containing the original QName, the Qname URN provides enough information to maintain the semantics, and optionally the exact syntax, of the original name. There are a variety of situations when a QName may need to be mapped to a URI. For example, when exchanging (or referencing) an identifier for an XML element contained within a document, and the medium of exchange prefers URIs to QNames, such as an XML Schema anyURI data type. Another scenario is for comparing the identifiers, which can be simpler by comparing just a string without having to also compare the context setting XML namespace attribute that may be declared arbitrarily earlier in the document. The [W3C] XML Namespaces specification does not provide a canonical mapping between QNames and URIs. Any XML specification that wants to enable identifier exchanges must define a language specific QName to URI mapping. There have emerged a variety of different algorithms and solutions for the mapping. To date, there have been no standardized algorithms available that they can re-use, which has increased their efforts. A standardized mapping, such as this, should provide increased productivity. Almost all of the algorithms for Qname to URI mappings are based upon concatenation of the URI and the name with variations based upon prefix inclusion, namespace name and name separator, etc. These are typically problematic because it is difficult to recover the QName from the URI as the namespace name and name separator may have already been used in the namespace name. 
Having the namespace name at the end of the identifier string avoids these and other problems..." See also "Namespaces in XML."
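The design rule the draft states, name first and namespace name last, is what makes the mapping reversible: an NCName cannot contain a colon, so the first colon-delimited field is unambiguous, while the namespace URI (which is full of colons and slashes) is pushed to the end and escaped. A sketch of such a mapping, with the caveat that the `urn:example-qname:` form is invented here to demonstrate the rule, not the exact syntax defined in 'draft-rsalz-qname-urn-00':

```python
from urllib.parse import quote, unquote

# Illustrative only: the URN prefix and field layout are invented to show
# the draft's design rule (local name first, namespace name last), not the
# exact form defined in draft-rsalz-qname-urn-00.
def qname_to_urn(local_name, namespace):
    # Percent-encode the namespace so its own colons and slashes cannot
    # be confused with the field separator.
    return "urn:example-qname:%s:%s" % (local_name, quote(namespace, safe=""))

def urn_to_qname(urn):
    prefix = "urn:example-qname:"
    local, _, ns = urn[len(prefix):].partition(":")
    return local, unquote(ns)

urn = qname_to_urn("price", "http://example.org/ns/catalog")
assert urn_to_qname(urn) == ("price", "http://example.org/ns/catalog")
```

Compare this with the concatenation schemes the draft criticizes (namespace + separator + name): there, the separator character may already occur inside the namespace URI, so the split point, and hence the original QName, cannot be recovered reliably.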

  • [December 10, 2004] [OASIS Framework for Web Services Implementation (FWSI) TC] Functional Elements Specification. Edited by Tan Puay Siew (Singapore Institute of Manufacturing Technology, SIMTech), with contributions by Cheng Huang Kheng (SIMTech). Produced by members of the OASIS FWSI TC, Functional Elements Subcommittee. November 25, 2004. 151 pages. Working Draft "revision 3.0" (document identifier 'FWSI-FESC-specifications-02.doc'), posted 2004-11-25 and voted for advancement to the status of OASIS Committee Draft. Ballot results announced 2004-12-10 by Andy Tan (FWSI TC Secretary): "Functional Elements Specification Committee Draft Approved: 'The FWSI Functional Elements (FE) Specification Working Draft revision 3.0 was unanimously accepted by the eligible voting members of the FWSI TC when balloting closed today...'" "The ability to provide robust implementations is a very important aspect of creating high-quality Web Service-enabled applications and of accelerating the adoption of Web Services. The Framework for Web Services Implementation (FWSI) TC aims to enable robust implementations by defining a practical and extensible methodology consisting of implementation processes and common functional elements that practitioners can adopt to create high-quality Web Services systems without reinventing them for each implementation. This document specifies a set of Functional Elements for practitioners to instantiate into a technical architecture, and should be read in conjunction with the Functional Elements Requirements document. It is the purpose of this specification to define the right level of abstraction for these Functional Elements and to specify the purpose and scope of each Functional Element so as to facilitate efficient and effective implementation of Web Services... In a Service-Oriented Architecture (SOA) environment, new applications/services are created through the assembly of existing services.
One of the key advantages of this loosely coupled model is that it allows the new application/service to leverage 3rd-party services. As a typical 3rd party's implementation of the services is done via the software component approach, this specification further proliferates new applications/services by defining a framework for Web Services implementation consisting of Functional Elements. Through these Functional Elements, which are implementation neutral, this Specification hopes to influence future software development towards assembly of services rather than pure build-only development..."

  • [December 09, 2004] "IESG Announcement: Last Call for 'Tags for Identifying Languages' to BCP." - "The IESG has been considering 'Tags for Identifying Languages' [draft-phillips-langtags-08.txt] as a BCP. There have been considerable changes to the document since the initial last call, and the IESG would like the community to consider the changes. In addition, the authors have prepared text describing why this mechanism is needed as a replacement for the existing procedure... The IESG plans to make a decision in the next few weeks, and solicits final comments on this action." Reasons for Enhancing RFC 3066: "RFC 3066 and its predecessor, RFC 1766, define language tags for use on the Internet. Language tags are necessary for many applications, ranging from cataloging content to computer processing of text. The RFC 3066 standard for language tags has been widely adopted in various protocols and text formats, including HTML, XML, and CLDR, as the best means of identifying languages and language preferences. This specification proposes enhancements to RFC 3066. Because revisions to RFC 3066 therefore have such broad implications, it is important to understand the reasons for modifying the structure of language tags and the design implications of the proposed replacement. This specification, the proposed successor to RFC 3066, addresses a number of issues that implementers of language tags have faced in recent years: (1) Stability of the underlying ISO standards; (2) Accessibility of the underlying ISO standards for implementers; (3) Ambiguity of the tags defined by these ISO standards; (4) Difficulty with registrations and their acceptance; (5) Identification of script where necessary; (6) Extensibility. The stability, accessibility, and ambiguity issues are crucial. Currently, because of changes in underlying ISO standards, a valid RFC 3066 language tag may become invalid (or have its meaning change) at a later date. 
With much of the world's computing infrastructure dependent on language tags, this is simply unacceptable: it invalidates content that may have an extensive shelf-life. In this specification, once a language tag is valid, it remains valid forever..." See general references in: "Language Identifiers in the Markup Context."
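Point (5), identification of script, is the most visible structural change the draft introduces: a tag like 'zh-Hant-TW' carries language, script, and region as distinct subtags distinguishable by their length and character class. A minimal sketch of that structure (which the draft's successor later standardized as RFC 4646/BCP 47); real tags also allow variant, extension, and private-use subtags, which this deliberately ignores:

```python
# Minimal sketch of the language-script-region subtag structure; variants,
# extensions, and private-use subtags are omitted for brevity.
def parse_language_tag(tag):
    parts = tag.split("-")
    result = {"language": parts[0].lower()}
    for sub in parts[1:]:
        if len(sub) == 4 and sub.isalpha():
            # Four alphabetic characters: a script subtag, e.g. 'Hant'
            result["script"] = sub.title()
        elif (len(sub) == 2 and sub.isalpha()) or (len(sub) == 3 and sub.isdigit()):
            # Two letters (ISO 3166) or three digits (UN M.49): a region
            result["region"] = sub.upper()
    return result

assert parse_language_tag("zh-Hant-TW") == {
    "language": "zh", "script": "Hant", "region": "TW"}
```

Note that the parse depends only on each subtag's shape, not on a registry lookup; that position-and-length discipline is what lets the proposed tags stay parseable even as the underlying code lists evolve, addressing the stability concern above.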

  • [December 09, 2004] "Topic-Oriented Information Development and Its Role in Globalization. The Case for the Darwin Information Typing Architecture (DITA)." By Bill Trippe. From the Gilbane White Papers, sponsored by Idiom, Inc. December 2004. 14 pages. "Globalization is a critical issue for any company interested in expanding its markets. For the company that markets sophisticated products, globalization is both more difficult and more critical because of the rich content that is needed to support these products. Product document localization may well be the most difficult aspect of globalization. Documents often are long, with a mixture of text, tables, charts, and graphics. Moreover, the documentation must be produced in different forms — print, online Help sets, HTML. Translating such documents into multiple languages can be a challenge. Single-source publishing has matured as a method for producing complex documents in many formats. XML in particular has become the preferred format for single-sourcing, enabling companies to both repurpose their content into different formats and reuse content modules in different content types. Thus, a procedure that appears in one document can be stored once, edited once, reused in many different documents and repurposed into many different formats. For all of its upside, XML-based single-source publishing has proven to be expensive and complicated to implement. XML-based single sourcing requires significant tool development, data conversion, and system integration prior to realizing the benefits of repurposing and reuse. To mitigate this, some vertical industries have developed their own XML tag sets. While successful on their own, these vertical industry efforts have not been extensible to other industries. A new XML-based approach to information development is the Darwin Information Typing Architecture (DITA).
DITA is a topic-centric architecture that provides a core Document Type Definition (DTD) and schema for developing documentation typical of many kinds of products. Conceived over several years at IBM, the extensible DITA architecture is now being managed by a technical committee at OASIS. We looked at one organization, software developer Information Builders, Inc. (IBI), and their implementation of DITA for managing a large set of documentation that is translated into many languages. IBI made a strategic decision to adopt DITA, has implemented it, and is already realizing benefits from the decision..." See also the Blog entry. General references in "Darwin Information Typing Architecture (DITA XML)." [PDF format, cache]
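To make the topic-centric idea concrete: in DITA the unit of authoring, reuse, and translation is a small typed topic rather than a whole book. A minimal concept topic looks like the fragment below (the `id` value and the prose are invented for illustration; the `concept`/`title`/`conbody` structure is the standard DITA concept shape):

```xml
<!-- Minimal DITA concept topic; the id and text are invented examples.
     Topics like this are assembled into deliverables via DITA maps. -->
<concept id="install-overview" xml:lang="en-us">
  <title>Installation overview</title>
  <conbody>
    <p>This topic is written once and reused in the print manual,
       the online Help set, and the HTML documentation.</p>
  </conbody>
</concept>
```

Because translation vendors receive these small, self-contained topics instead of monolithic books, only topics that actually changed need retranslation, which is the localization benefit the white paper's IBI case study reports.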

  • [December 08, 2004] "The Problem of Software Patents in Standards." By Bruce Perens (Senior Research Scientist, Open Source Cyber Security Policy Research Institute, George Washington University). Paper delivered at the FFII Conference "Regulating Knowledge: Costs, Risks, and Models of Innovation," Brussels, November 9-10, 2004. [A two-day conference, sponsored by MERIT, CEA-PME, the Open Society Institute, the Greens/EFA in the EP, and FFII; to survey the state of the policy debate over software patents and its relation to broader issues of access, innovation, and control of knowledge in the knowledge-based economy.] "Patents, originally created to stimulate innovation, may now be having the opposite effect, at least in the software industry. Plagued by an exponential growth in software patents, many of which are not valid, software vendors and developers must navigate a potential minefield to avoid patent infringement and future lawsuits. Coupled with strategies to exploit this confusion over patents, especially in standards setting organizations, it appears that software advancement will become stifled unless legal action is taken to resolve the situation. This article examines the current situation facing software developers and users, the methods employed by standards setting organizations to address these problems, and recommends strategies for resolving the problem caused by software patents... The problems presented by software patents are numerous and must be addressed on many levels. Standards setting organizations can partially resolve this problem by following the W3C's model in which the intellectual property policy is clearly stated and members are required to adhere to that policy. New legislation providing protection from patent-farming and submarine patents is necessary. 
In addition, governments should recognize the importance of interoperability to any free market for computer software, and should legislate to allow the royalty-free use of patented principles for interoperability purposes... Worldwide government organizations can also impact this problem by scrutinizing the purpose and process of software patenting... The Open Source community faces a particular challenge in the area of software patents and standards. Since many Open Source developers are non-profit, they don't have the ability to pass on royalty payments to consumers. Yet, the Open Source community is now the predominant provider of software for many applications. Standards setting organizations should help the Open Source community to at least partially avoid the patent minefield. To accomplish this, the two communities must work together to develop policies that meet both of their needs in a way that continues to fulfill the ultimate goal of standardization: interoperability."

  • [December 07, 2004] "Introducing XML Canonical Form. Making XML Suitable for Regression Testing, Digital Signatures, and More." By Uche Ogbuji (Principal Consultant, Fourthought, Inc). From IBM developerWorks (November 07, 2004). "XML's heritage lies in the document world, and this is reflected in its syntax rules. Its syntax is looser than that of data formats concerned with database records. An XML parser converts an encoded form of an XML document (the encoding being specified in the XML declaration) to an abstract model representing the information in the XML document. The W3C formalized this abstract model as the XML Infoset, but a lot of XML processing has to focus on the encoded source form, which allows a lot of lexical variance: Attributes can come in any order; whitespace rules are flexible in places such as between an element name and its attributes; several means can be used for representing characters, and for escaping special characters, and so on. Namespaces introduce even more lexical flexibility (such as a choice of prefixes). The result is that you can have numerous documents that are exactly equivalent in XML 1.0 rules, while being very different under byte-by-byte comparison of the encoded source. The W3C addresses this problem with the XML Canonicalization spec (c14n), which defines a standard form for an XML document that is guaranteed to provide proper bit-wise comparisons and thus consistent digital signatures... Canonical XML is an important tool to keep at hand. You may not be immediately involved in XML-related security or software testing, but you'll be surprised at how often the need for c14n pops up once you are familiar with it. It's one of those things that helps cut a lot of corners that you may have never thought of avoiding in the first place..." See "XML Digital Signature (Signed XML - IETF/W3C)."
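The effect is easy to demonstrate with Python's standard library: `xml.etree.ElementTree.canonicalize()` (Python 3.8+) implements the related Canonical XML 2.0 transformation rather than the original c14n 1.0 spec, but it normalizes the same kinds of lexical variance. The sample documents below are invented:

```python
# Two documents that differ lexically but are equivalent under XML 1.0 rules:
# attribute order, quote style, whitespace, and empty-element syntax all vary.
from xml.etree.ElementTree import canonicalize

doc_a = '<doc b="2" a="1"><child/></doc>'
doc_b = "<doc  a='1' b='2' ><child></child></doc>"

# Canonicalization sorts attributes, uses double quotes, and expands empty
# elements, so byte-wise comparison (and hence signing) becomes reliable.
canon_a = canonicalize(doc_a)
canon_b = canonicalize(doc_b)
assert canon_a == canon_b
print(canon_a)
```

Attribute order, quote style, insignificant whitespace, and empty-element syntax all normalize away, so the two canonical forms compare equal byte for byte.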

  • [December 07, 2004] "What's Wrong With RSS is Also What's Right With It." By David Berlind. From ZDNet Tech Update (November 07, 2004). "The variety of Web syndication techniques is one of those proverbial situations where the greatest thing about standards is that there are so many of them. It would require thesis-level research to make real sense of RSS 1.0, RSS 2.0 (which is not the successor to RSS 1.0, but rather version 0.94), Atom, and the gaggle of tangentially connected Internet syndication technologies. If there are problems at the specification level and they can't get worked out (my sense is that some conflicts have been overblown by the press), can we expect it to get any easier in the trenches? This story is about users just trying to get something to work... One problem that I've run into with RSS 2.0 is the way publishers often publish their RSS-based feeds using different conventions. Although this flexibility is one of RSS 2.0's greatest benefits, the burden of normalizing multiple RSS feeds for aggregation and presentation shifts to the consumption side... I wonder whether RSS might also prove just how difficult it will be for vendors to deliver on the development-for-mortals promise -- the one where technical neophytes will be able to build complex, transactional, server-side applications with a point, click and drag. RSS is, after all, today's poster child of what XML can do for the masses. It's also the closest that most people have come to working with XML. Given RSS' momentum, it could very well turn into the primary method by which all data (structured or unstructured) gets pumped — regardless of whether the application is just to stay abreast of Weblogs, to retrieve e-mail (boy, wouldn't that put an end to spam?), or to pass transactional data through a complicated workflow. As such, RSS is also the prime candidate to be a proof-point for point-and-click programming..." 
See: (1) "RDF Site Summary" | "Really Simple Syndication" (RSS); (2) "Atom Publishing Format and Protocol."
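As a sketch of the consumption-side normalization Berlind describes (the fallback logic and sample feed are my own illustration, not a complete aggregator), one common divergence is that some RSS 2.0 feeds carry the item body in <description> while others use the content:encoded extension:

```python
# Illustrative normalization of RSS 2.0 items whose publishers follow
# different conventions: check content:encoded first, then description.
import xml.etree.ElementTree as ET

CONTENT_NS = "{http://purl.org/rss/1.0/modules/content/}encoded"

def normalize_items(rss_text: str):
    root = ET.fromstring(rss_text)
    for item in root.iter("item"):
        body = item.findtext(CONTENT_NS) or item.findtext("description") or ""
        yield {"title": item.findtext("title", default="(untitled)"),
               "link": item.findtext("link", default=""),
               "body": body.strip()}

feed = """<rss version="2.0"
    xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <item>
      <title>Rich</title>
      <link>http://example.org/rich</link>
      <content:encoded>Full text here</content:encoded>
    </item>
    <item>
      <title>Plain</title>
      <description>Summary only</description>
    </item>
  </channel>
</rss>"""

for entry in normalize_items(feed):
    print(entry["title"], "->", entry["body"])
```

Every such convention a publisher adopts becomes another branch the consuming aggregator must carry, which is exactly the burden-shifting the article points out.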

  • [December 07, 2004] "Internationalized Resource Identifiers (IRIs)." By Martin Dürst (World Wide Web Consortium) and Michel Suignard (Microsoft Corporation). Announced as an IETF Proposed Standard. IETF Network Working Group. Internet Draft. Reference: 'draft-duerst-iri-11'. November 30, 2004, expires May 31, 2005. 45 pages. From the announcement: "This document describes Internationalized Resource Identifiers and their relationship to URIs. While the character limitations of URIs are not usually an issue for protocol processing, they may restrict the usefulness of the identifiers presented to end users or systems expecting a different range of characters. Rather than extend URIs, this document introduces a new identifier type and describes its relationship to URIs. Within an IETF context, IRIs will likely be used as presentation elements. There are cases, such as XML namespaces, in which an IRI may be used as a token, because character-by-character equivalence is the only property used for protocol processing. In no case should an implementor assume that an IRI may be substituted for a URI in an existing protocol grammar; either the generative grammar associated with the protocol must be updated to specify IRIs or the implementation must transform an IRI into a URI before use... This work was initiated in the W3C, and it has been broadly accepted in that context. It has also been discussed on the URI mailing list and on a public, open list (public-iri) dedicated to the topic. Considerable care has been taken to keep this specification well-synchronized with the URI specification. There were issues raised during IETF Last Call, and a new document version resolving those issues was submitted..." General references in "Markup and Multilingualism."
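The IRI-to-URI transformation the draft requires before use in URI-only protocol slots can be sketched as follows. This is a simplification (a real implementation must treat the host component separately, e.g., with IDNA), and the helper name is my own:

```python
# Minimal sketch (my own illustration, not code from the draft) of mapping an
# IRI to a URI: percent-encode the UTF-8 bytes of non-ASCII characters while
# leaving URI reserved and unreserved characters intact.
from urllib.parse import quote

def iri_to_uri(iri: str) -> str:
    # quote() UTF-8-encodes non-ASCII characters by default; 'safe' keeps
    # the URI reserved delimiters and '%' untouched.
    return quote(iri, safe=":/?#[]@!$&'()*+,;=%~-._")

print(iri_to_uri("http://example.org/ros\u00e9"))
# http://example.org/ros%C3%A9
```

An already-ASCII URI passes through unchanged, which is consistent with the draft's point that every URI is also a valid IRI.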

  • [December 6, 2004] "Web Services Patents Fetch $15.5 Million." By Alorie Gilbert. In CNET (December 6, 2004). "A mysterious bidder paid $15.5 million Monday in a bankruptcy court auction of dozens of Internet-related patents — and then rushed out of the courtroom. On the United States Bankruptcy Court auction block were 39 patents owned by Commerce One, a bankrupt software company in Santa Clara, Calif., that's in the process of shutting down and liquidating its assets... The winning bidder was a company called JGR Acquisitions. An attorney representing JGR was mum about his client, dodging reporters' questions as he rushed out of the court room at the close of the auction. Attorneys for Commerce One and the bankers who solicited bids for the auction also declined to discuss JGR. A document the company filed with the court was scarce on information as well, so JGR's business, its owners, location and its plans for the newly acquired patents all remain mysteries. JGR beat out seven other bidders, including two companies connected to Nathan Myhrvold, a former Microsoft executive who now runs Intellectual Ventures, a company that collects patents. One of those companies was ThinkFire Services USA, an intellectual property consulting firm in Clinton, N.J. The firm, where Myhrvold serves as chairman and co-founder, bid as high as $14.3 million. Brissac, which also employs Myhrvold, bid as high as $14.9 million. In another unusual twist, the identities of two bidders who placed bids through their attorneys remained undisclosed. Judge Dennis Montali, who is hearing the bankruptcy case, said he'd never encountered that condition before and allowed it despite an objection from one of the other bidders..." See other references in the news story "CommerceNet Proposes Collecting Contributions to Purchase Key Web Services Patents."

  • [December 06, 2004] "Bankrupt Commerce One Patents Fetch $15.5M." From Forbes [Associated Press]. December 06, 2004. "Bankrupt Internet software maker Commerce One Inc. auctioned off dozens of prized online patents for $15.5 million in a sale that could provoke a legal scuffle over whether the new owner is entitled to collect royalties from a long list of technology heavyweights. A secretive company called JGR Acquisition Inc. wrested the patents from two other bidders with ties to former Microsoft Corp. chief technology officer Nathan Myhrvold, who is now running a startup that hopes to accumulate a treasure chest of valuable patents. JGR attorney Mark Mullion of the Dallas law firm Haynes and Boone declined to discuss the company or its plans for the patents sold in a liquidation of Commerce One, a former dot-com darling that collapsed into bankruptcy two months ago. The bidding war highlights the rising value of intellectual property rights as the world becomes more dependent on computers. The wrangling involved obscure patents covering a wide range of administrative tasks conducted online throughout corporate America. Commerce One, now based in Santa Clara, patented a series of techniques that are widely used by big and small companies to pay bills and buy supplies online. Hoping to build a bustling marketplace for its own software products, Commerce One allowed companies to use the patents without paying royalties. Although the strategy didn't pay off, many companies took advantage of the royalty waiver to shift more of their operations online. Those who might be targeted for royalty claims under the patents include Microsoft, IBM Corp. and other tech icons, as well as smaller companies, according to one intellectual property group.
The group, the Electronic Frontier Foundation, intends to contest any attempt to collect royalties from the Commerce One patents, arguing that the company previously promised not to seek payments for using the technology in question..." See: (1) the news story "CommerceNet Proposes Collecting Contributions to Purchase Key Web Services Patents"; (2) "Patents and Open Standards."

  • [December 06, 2004] "Case Study: UK National Health Service NPfIT Uses ebXML Messaging." Authored by OASIS and BT, approved for publication by UK NHS NPfIT. December 06, 2004. 9 pages. Abstract from Pim van der Eijk: "The UK's National Programme for Information Technology (NPfIT) is the world's largest civil IT project. A central component of the NHS Care Records Service is the Transactional Messaging Service (TMS) Spine using the ebXML Messaging Service OASIS Standard. The Transaction and Messaging Service provides the communications infrastructure for the National Programme. It serves to interconnect regional network clusters managed by Local Service Providers (LSPs) and national services such as systems for electronic booking and transmission of prescriptions. The technology framework used for TMS is based on a large number of advanced technical specifications and standards. This includes the ebXML Messaging Service OASIS Standard. Within the TMS Spine, ebXML is used to provide reliable messaging functionality. National services such as the Electronic Booking Service (Choose and Book) and Electronic Transmission of Prescriptions are accessed using pairs of XML request and response documents. These documents are transported within the NHS network as ebXML messages. With the message volumes anticipated by 2010, TMS is likely to be among the largest messaging systems in production in the world. For this very reason, TMS is also likely to be among the larger systems worldwide that will use the ebXML Messaging OASIS Standard..." See: (1) "XML in Clinical Research and Healthcare Industries"; (2) "Electronic Business XML Initiative (ebXML)." [source]
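For flavor, a skeletal ebXML Message Service 2.0 header of the kind such request/response exchanges carry inside the SOAP Header might look like the following. The party IDs, service, and action names are invented for illustration; this is not an actual NPfIT message:

```xml
<soap:Header xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:eb="http://www.oasis-open.org/committees/ebxml-msg/schema/msg-header-2_0.xsd">
  <eb:MessageHeader soap:mustUnderstand="1" eb:version="2.0">
    <eb:From><eb:PartyId>urn:example:sending-system</eb:PartyId></eb:From>
    <eb:To><eb:PartyId>urn:example:receiving-system</eb:PartyId></eb:To>
    <eb:CPAId>urn:example:cpa</eb:CPAId>
    <eb:ConversationId>conv-001</eb:ConversationId>
    <eb:Service>urn:example:booking</eb:Service>
    <eb:Action>RequestAppointment</eb:Action>
    <eb:MessageData>
      <eb:MessageId>msg-001@example.org</eb:MessageId>
      <eb:Timestamp>2004-12-06T12:00:00Z</eb:Timestamp>
    </eb:MessageData>
  </eb:MessageHeader>
</soap:Header>
```

The reliable-messaging behavior the case study mentions is layered on top of headers like this, with the receiving message handler acknowledging each MessageId and eliminating duplicates.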

  • [December 04, 2004] "SAML: The Secret to Centralized Identity Management." By Hank Simon. From Intelligent Enterprise (December 04, 2004). ['Complicated by too many systems, too many applications, and too many passwords, identity management is a major headache for most organizations. Can an intelligent, Web-services approach employing new standards ride to the rescue?'] "Identity management refers to provisioning, password management, and access control. Typically, access rights are stored in different locations, with separate access-control lists for individual applications and resources. Identity management must control data, people, and resources that are distributed across different locations. SAML enables Web-based security interoperability functions, such as single sign-on, across sites that are hosted by multiple companies. SAML supports secure interchange of authentication and authorization information by leveraging the core Web services standards of XML, Simple Object Access Protocol (SOAP), and Transport Layer Security (TLS). Many vendors, such as RSA, Netegrity, IBM, Oracle, BEA, Oblix, and Jericho have committed to SAML and are implementing the specification in their products. A SAML assertion uses the header in a SOAP message to pass through HTTP, transferring security information between an assertion authority and a relying party. For example, a user can log in at one site; a SAML assertion transfers the user authentication token; and the transferred token provides authentication to a remote site. A SAML package can include the authentication token as well as user attributes that can be tested against the rules engine for authorization and access control. It's important to note that SAML doesn't perform the authentication; rather, it transports the authentication information.
In addition, SAML can use different authentication authorities, such as LDAP, Active Directory, and RADIUS, allowing for different identification methods such as password, biometric, Public Key Infrastructure (PKI), Secure Sockets Layer (SSL), Kerberos, and so on. Then, as the transport, SAML passes the assertion information that the user is authenticated. Likewise, SAML doesn't perform authorization itself; it only transports the information on which authorization and access-control decisions are based..." General references in: "Security Assertion Markup Language (SAML)."
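For concreteness, a skeletal SAML 1.1 authentication assertion of the kind transported in a SOAP header might look like this; all identifier values, timestamps, and the issuer are invented:

```xml
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:1.0:assertion"
    MajorVersion="1" MinorVersion="1"
    AssertionID="_example-assertion-id"
    Issuer="https://idp.example.org"
    IssueInstant="2004-12-04T09:22:05Z">
  <saml:Conditions NotBefore="2004-12-04T09:22:05Z"
      NotOnOrAfter="2004-12-04T09:27:05Z"/>
  <saml:AuthenticationStatement
      AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password"
      AuthenticationInstant="2004-12-04T09:22:00Z">
    <saml:Subject>
      <saml:NameIdentifier
          Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress"
          >alice@example.org</saml:NameIdentifier>
    </saml:Subject>
  </saml:AuthenticationStatement>
</saml:Assertion>
```

Note that the assertion only states that an authentication event occurred at the issuer; the relying party decides what access to grant based on it.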

  • [December 03, 2004] "Adding Reliability to an Egg-and-Spoon Race." By Tony Graham (Staff Engineer, Sun Microsystems, Dublin, Ireland). Presented at the IDEAlliance XML 2004 Conference and Exposition (November 15-19, 2004, Washington, DC, USA). "How is Web Services over an unreliable network like an egg-and-spoon relay race? How does adding reliability alter the dynamics of the race? These and other important questions (details of the reliability standardization underway) are answered... Conclusion: This paper began by noting the essential similarities between an egg-and-spoon race, a computer network, and a Web Service. It then discussed ways of adding reliability to each of those without fundamentally changing their dynamics. The method, for Web Services, is to layer reliability on top of existing Web Services infrastructure, and the remainder of the paper discussed the features of the WS-Reliability standard being developed at OASIS and the WS-ReliableMessaging specification that is owned by BEA, IBM, Microsoft, and Tibco. Based on the essential similarity of an egg-and-spoon race to a Web Service, it should be possible to create a reliability protocol for egg-and-spoon relay races that will deliver more eggs to the finish line — especially where egg order is significant — than an unreliable protocol under the same conditions..." General references in "Reliable Messaging."
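The three guarantees such specifications layer over an unreliable transport (at-least-once delivery via retry-until-acknowledged, at-most-once via duplicate elimination, and in-order delivery via sequence numbering) can be sketched on the receiving side as follows. This is my own illustration of the general technique, not the WS-Reliability or WS-ReliableMessaging wire format:

```python
# Receiver-side sketch of reliable, ordered delivery: acknowledge every
# message, drop duplicates, and buffer out-of-order arrivals until the
# missing sequence numbers show up.
class ReliableReceiver:
    def __init__(self):
        self.next_seq = 1      # next sequence number eligible for delivery
        self.buffer = {}       # out-of-order messages awaiting delivery
        self.delivered = []    # messages handed to the application, in order

    def receive(self, seq: int, payload: str) -> str:
        if seq < self.next_seq or seq in self.buffer:
            return f"ack {seq}"            # duplicate: re-ack, don't redeliver
        self.buffer[seq] = payload
        while self.next_seq in self.buffer:  # deliver any contiguous run
            self.delivered.append(self.buffer.pop(self.next_seq))
            self.next_seq += 1
        return f"ack {seq}"

# Eggs arrive out of order and one is retransmitted; delivery stays in order.
rx = ReliableReceiver()
for seq, msg in [(1, "egg-1"), (3, "egg-3"), (2, "egg-2"), (2, "egg-2")]:
    rx.receive(seq, msg)
print(rx.delivered)   # ['egg-1', 'egg-2', 'egg-3']
```

The sender side (not shown) retransmits any message whose acknowledgment has not arrived within a timeout, which is what makes delivery at-least-once despite losses.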

  • [December 02, 2004] Web Services Security: SAML Token Profile. OASIS Committee Draft of October 21, 2004, approved as an OASIS Standard. Edited by Phillip Hallam-Baker (VeriSign), Chris Kaler (Microsoft), Ronald Monzillo (Sun), and Anthony Nadalin (IBM). 31 pages. The WSS SAML Token Profile approved as an OASIS Standard describes how to use Security Assertion Markup Language (SAML) Version 1.1 assertions with the Web Services Security (WSS): SOAP Message Security specification. It defines how SAML assertions are carried in and referenced from <wsse:security> headers and describes how SAML assertions are used with XML Signature to bind the statements of the assertions (i.e., the claims) to a SOAP message. General references in "Security Assertion Markup Language (SAML)." [source PDF]
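The overall shape of the carriage the profile defines, a SAML assertion inside the <wsse:Security> SOAP header, can be sketched as follows; the attribute values are elided or invented, and the actual profile adds signature and reference mechanisms not shown here:

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <wsse:Security
        xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
      <saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:1.0:assertion"
          MajorVersion="1" MinorVersion="1"
          AssertionID="_example-assertion-id"
          Issuer="https://idp.example.org"
          IssueInstant="2004-12-02T10:00:00Z">
        <!-- authentication/attribute statements; an XML Signature binds
             these claims to the message body -->
      </saml:Assertion>
    </wsse:Security>
  </soap:Header>
  <soap:Body>
    <!-- application payload -->
  </soap:Body>
</soap:Envelope>
```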

Earlier XML Articles

Hosted By
OASIS - Organization for the Advancement of Structured Information Standards

Sponsored By

IBM Corporation
ISIS Papyrus
Microsoft Corporation
Oracle Corporation


XML Daily Newslink
Receive daily news updates from Managing Editor, Robin Cover.


Robin Cover, Editor