The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: June 03, 2002
XML Articles and Papers. January - March 2002.

XML General Articles and Papers: Surveys, Overviews, Presentations, Introductions, Announcements

References to general and technical publications on XML/XSL/XLink are also available in several other collections.

The following list of articles and papers on XML represents a mixed collection of references: articles in professional journals, slide sets from presentations, press releases, articles in trade magazines, Usenet News postings, etc. Some are from experts and some are not; some are refereed and others are not; some are semi-technical and others are popular; some contain errors and others don't. Discretion is strongly advised. The articles are listed approximately in the reverse chronological order of their appearance. Publications covering specific XML applications may be referenced in the dedicated sections rather than in the following listing.

March 2002

  • [March 29, 2002] "Versioning Extensions to WebDAV. (Web Distributed Authoring and Versioning)." By Geoffrey Clemm (Rational Software), Jim Amsden (IBM), Tim Ellison (IBM), Christopher Kaler (Microsoft), and One Microsoft Way, Redmond, WA 90852 Jim Whitehead (UC Santa Cruz, Department of Computer Science). IETF Network Working Group. Request for Comments: 3253. March 2002. "This document specifies a set of methods, headers, and resource types that define the WebDAV (Web Distributed Authoring and Versioning) versioning extensions to the HTTP/1.1 protocol. WebDAV versioning will minimize the complexity of clients that are capable of interoperating with a variety of versioning repository managers, to facilitate widespread deployment of applications capable of utilizing the WebDAV Versioning services. WebDAV versioning includes automatic versioning for versioning-unaware clients, version history management, workspace management, baseline management, activity management, and URL namespace versioning... The benefits of versioning in the context of the worldwide web include: (1) A resource has an explicit history and a persistent identity across the various states it has had during the course of that history. It allows browsing through past and alternative versions of a resource. Frequently the modification and authorship history of a resource is critical information in itself. (2) Resource states (versions) are given stable names that can support externally stored links for annotation and link server support. Both annotation and link servers frequently need to store stable references to portions of resources that are not under their direct control. By providing stable states of resources, version control systems allow not only stable pointers into those resources, but also well defined methods to determine the relationships of those states of a resource... WebDAV Versioning defines both basic and advanced versioning functionality. Basic versioning allows users to: (1) Put a resource under version control (2) Determine whether a resource is under version control (3) Determine whether a resource update will automatically be captured (4) Create and access distinct versions of a resource. Advanced versioning provides additional functionality for parallel development and configuration management of sets of web resources... To maximize interoperability and the use of existing protocol functionality, versioning support is designed as extensions to the WebDAV protocol (RFC2518), which itself is an extension to the HTTP protocol (RFC2616). All method marshalling and postconditions defined by RFC 2518 and RFC 2616 continue to hold, to ensure that versioning unaware clients can interoperate successfully with versioning servers. Although the versioning extensions are designed to be orthogonal to most aspects of the WebDAV and HTTP protocols, a clarification to RFC 2518 is required for effective interoperable versioning... When an XML element type in the DAV: namespace is referenced in this document outside of the context of an XML fragment, the string DAV: will be prefixed to the element type. When a method is defined in this document, a list of preconditions and postconditions will be defined for that method. If the semantics of an existing method is being extended, a list of additional preconditions and postconditions will be defined. A precondition or postcondition is prefixed by a parenthesized XML element type that identifies that precondition or postcondition..." 
Other documents on versioning are referenced with deliverables from the IETF Delta-V Working Group. See "WEBDAV (Extensions for Distributed Authoring and Versioning on the World Wide Web)."
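
RFC 3253's new methods ride on ordinary HTTP marshalling, so a client can exercise them with any HTTP library that permits extension methods. The following minimal Java sketch uses the java.net.http client to issue the RFC's VERSION-CONTROL method, which places a resource under version control; the server URL is hypothetical, and this is only an illustration, not a complete DAV client.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class VersionControlExample {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            // VERSION-CONTROL is the RFC 3253 method that puts a resource
            // under version control; the URL below is a placeholder.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://dav.example.com/docs/spec.xml"))
                    .method("VERSION-CONTROL",
                            HttpRequest.BodyPublishers.noBody())
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            // RFC 3253 defines 200 (OK) for a successful VERSION-CONTROL.
            System.out.println("Status: " + response.statusCode());
        }
    }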

  • [March 29, 2002] "WebDAV." By Rael Dornfest. Emerging Technology Brief from O'Reilly Research. March 26, 2002. "WebDAV (Web-based Distributed Authoring and Versioning, also called DAV) is a set of extensions to HTTP/1.1 (HyperText Transfer Protocol, the protocol spoken by Web browsers and servers) allowing you to edit documents on a remote Web server. DAV provides support for: (1) Editing: creating, updating, deleting (2) Properties: title, author, publication date, etc. (3) Collections: analogous to a file system's directory or desktop folder (4) Locking: prevents the confusion and data corruption caused by two or more people editing the same content at the same time WebDAV is platform independent, both in terms of client and server. This means that Macintosh, *nix, and Windows users can collaborate on Web content without all the usual conversion problems. Furthermore, it doesn't matter whether your documents are hosted on an Apache or Microsoft IIS server... WebDAV is an open standard, published by the IETF (the Internet Engineering Task Force) (RFC 2518). A completely open process, all it takes to join the WebDAV working group is subscription to and participation on a mailing list. Involved in the original development of WebDAV were representatives of companies the likes of Microsoft, Netscape, Novell, and Xerox. WebDAV support appears in a veritable cornucopia of Open Source projects, programming languages, commercial products, and services. WebDAV is baked right into the Windows (Web Folders) and Mac OS X operating systems as folders, that for all intents and purposes appear to be on your local machine, but are actually network connections to a remote server. The Zope Open Source content management system affords editing of content from well-known authoring tools like Adobe GoLive 5. DAV modules exist for most programming languages; they are either native or there are plug-ins for about every Web server in existence..." See "WEBDAV (Extensions for Distributed Authoring and Versioning on the World Wide Web."

  • [March 29, 2002] "Information Modelling for System Specification Representation and Data Exchange." By Erik Herzog and Anders Törne (RTSLAB, Department of Computer and Information Science, Linköpings Universitet, Sweden). Pages 136-143 (with 20 references) in Proceedings of Eighth Annual IEEE International Conference and Workshop on the Engineering of Computer Based Systems (ECBS '01, April 17-20, 2001, Washington DC, USA). Abstract. "This paper presents the emerging STEP standard AP-233 with focus on the non-functional requirements that have guided the development process. The purpose of the paper is to present and motivate the modelling assumptions and approach selected for the AP-233 information model, and to present how the EXPRESS information modelling language have been used. Although the paper is focused on AP-233 and the constraints imposed by the STEP framework it is believed the structures and requirements presented are general and applicable to other systems engineering information-modelling projects. (Standard for the Exchange of Product model data -- ISO 10303) is an attempt to reduce the cost for implementing tool data exchange capabilities through the definition of a standardised information model for the systems engineering domain. By combining the information model with other STEP elements it is possible to automate significant parts of the interface development process, thus reducing effort and cost for enabling tool data exchange capabilities... The selection of STEP as framework can be questioned in the light of the tremendous interest in the XML standard. However, the choice of framework shall not be over dramatised. An information model developed in express can easily be translated to XML. A standard mapping from EXPRESS to XML is defined in ISO/PDTS 10303-28:2000 Product data representation and exchange: Implementation methods: XML representation of EXPRESS schemas and data.. The end result, a standardised information model, regardless of language used, is the lasting value of the activity... EXPRESS supports the definition of (1) Entities: the basic object of the information model, i.e., a representation of an element within the scope of the information model. (2) Inheritance relationships: The specialisation/generalisation relationship between entities. Inheritance in EXPRESS can take one of the following forms: 'One of', an instantiation of a supertype is exactly one of the subtypes, 'And', an instantiation of a supertype is the union of all subtypes, or 'AndOr', an instantiation of a supertype is a variable subset of the union of all subtypes. (3) Basic types: an elementary type that can not be further subdivided, e.g., an integer or a string. (4) Properties: entities have properties. A property is a special aspect of an entity. From an information modelling view point a property may be expressed using combinations of the constructs below. (5) Attributes: representing an aspect of an entity. Within AP-233 the term attribute is used to refer to aspects represented using basic types. (6) Relationships: Defining associations between two constructs in an information model. Within AP-233 a relationship is always defined between entities. (7) Cardinality constraints: a constraint on relationships and attributes defining the number of instances of one construct that can be associated by another construct. Constraints may be closed or open-ended. 
(8) Textual constraints: EXPRESS supports the definition of formal constraints on entities, relationships, attributes and other modelling constructs. The expressive power is comparable to the combination of UML and OCL... EXPRESS offers two mechanisms for defining the semantics and ensuring the integrity of an information model. As in any other information modelling language, semantics may be defined explicitly by using specific entities for each concept supported. The second approach is to define fewer entities and use formal rules to define valid attribute and relationship value combinations. In AP-233 the preferred modelling approach is to explicitly define entities for each concept supported and to minimise the use of rules for defining the semantics of the model. This approach results in more entities being defined, but improves model transparency and readability. [...] This paper has presented the information modelling approach selected for the development of AP-233. The main contribution is an outline of the basic modelling assumptions and of how the information modelling language, EXPRESS, has been used in AP-233. Maintaining specification semantics has been the highest priority in information model construction. The current revision of the information model is extensive as it captures the semantics as well as the syntax of specifications. We believe this to be a crucial prerequisite for the successful standardisation and industrial acceptance of data exchange information models in general and of AP-233 in particular. At the time of writing the latest draft of AP-233 is being validated through the implementation of tool data exchange interfaces. The lessons learned from this exercise will be incorporated and the model will be harmonised with the new modular structure in STEP..." See: (1) other STEP/EXPRESS documentation from the SEDRES [System Engineering Data Representation and Exchange Standardisation] Project; (2) STEP/EXPRESS and XML; (3) STEPml XML Specifications. [cache]

  • [March 29, 2002] "DISARM: Document Information Set Articulated Reference Model." By Rick Jelliffe. Discussion Draft. February 24, 2002. ISO/IEC JTC1/SC34 Document #292. "This note proposes an ISO standard 'Document Information Set Articulated Reference Model' be developed, to provide the basis for ISO DSDL and for renewing ISO 8879 SGML... The utility of DISARM might include that it can provide an attractive way to allow a top-down re-specification of SGML in a future ISO 8879. It would might also provide some help for DSDL." Motivation: "Since 1986, there have been four notable streams in markup languages: (1) ISO 8879 SGML, extended by the General Facilities, Architectural Forms Definitions Requirements (ADFR), Lexical Types Definition Requirements (LTDR), Formal System Identifiers (FSI), Annexes J to L, augmented with OASIS Catalogs. A parser implementation of mature SGML in Open Source is James Clark's SP. (2) W3C HTML, in various versions, with dialects including ASP, JSP, PHP, and Blogger. A parser implementation for mature HTML in Open Source is Dave Ragget's Tidy. (3) W3C XML, extended by Namespaces, XBase, XInclude. Widespread implementations of parsers use the mature SAX API. (4) The current ISO DSDL project, informed by RELAX Namespaces, RELAX NG, W3C XML Schemas, Schematron. The Xerces XNI API is a recent attempt to cope with post-processing XML, for uses such as validation and creating typed information sets. In all these cases, the natural increase in complexity of evolving standards has made it difficult to understand the processing order and operation. ISO 8879 has been widely criticized for not being amenable to simple grammatical analysis ('not using "computer science concepts"'), yet the same problems are experienced even with overtly layered specifications such as the XML family, due to this entropy. These problems would be reduced by introducing a reference model which was neutral with regard to each of the four main streams, but allowed clear and diagrammatic exposition of the stages of parsing and processing a marked-up document incrementally from bits to a terminal information set... The reference model uses UML terminology and diagrams at the top-level only. If desired, specific graphical stereotypes could be created, as allowed by UML. It models the kinds of markup processing of interest as a chain of components, one connected to the next, each of which implements a common event-passing interface. Different markup languages and SGML features can be modeled using particular chains of components..." Cf. also the DSDL list. References: see "Document Schema Definition Language (DSDL)." [cache]

  • [March 29, 2002] "IBM Xperanto Demo." March 2002. ['Get a sneak preview of IBM's exciting new standards-based information integration technologies! Xperanto represents IBM's work combining emerging XML and XQuery standards with the power of data integration. This interactive demo shows how a newly-merged bank and financial services company uses XQuery as a single interface to deliver a single view of data to a customer and to a sales representative.] "The IBM Xperanto demo is a technology preview that illustrates how IBM is advancing the state of integration technology with Xperanto, combining XML and the emerging standard, XQuery, with the power of data integration across relational databases, XML documents, flat files, spreadsheets, Web services, and more. The demo financial scenario page of the demo describes common situations for which this technology is a solution. The technology details pages display the queries and demonstrate how IBM integrates query, federation, Web services, and text search technologies using XQuery, the common query language for accessing XML. Using IBM's Xperanto, you can simplify data integration tasks for the new breed of Web and XML applications that require delivering a complete enterprise view of customers, partners, and services to improve customer service, supply chain management, and enterprise decision-making..." See also (1) "Meet the Experts: Jim Kleewein talks about the Xperanto Technology Demo"; (2) "Xperanto, Bridging Relational Technology and XML"; this second article describes the design and implementation of an XML middleware system to create XML views of relational data, query XML views and store/query XML documents using a relational database system. See "XML and Query Languages."

  • [March 29, 2002] "XUL Tutorial." By Neil Deakin. March 18, 2002 or later. "This tutorial describes XUL, the XML-based User-interface Language. This language was created for the Mozilla application and is used to define its user interface. The XUL implementation and the Mozilla browser are ever-changing. Some of the information contained within this tutorial may be outdated. The default skin has changed since some of the screen shots were taken, so the images may not match up to recent builds... XUL (pronounced zool and it rhymes with cool) was created to make development of the Mozilla browser easier and faster. It is an XML language so all features available to XML are also available to XUL. Most applications need to be developed using features of a specific platform making building cross-platform software time-consuming and costly. This may not be important for some, but when you consider that users may want to use an application on other devices such as handheld devices or set-top boxes, it is quite useful to allow users to. A number of cross-platform solutions have been developed in the past. Java, for example, has portability as a main selling point. XUL is one such language designed specifically for building portable user interfaces. It takes a long time to build an application even for only one platform. The time required to compile and debug can be lengthy. With XUL, an interface can be implemented and modified quicky and easily. XUL has all the advantages of other XML languages. For example XHTML or other XML languages such as MathML or SVG can be inserted within it. Also, XUL is easily localizable, which means that it can be translated into other languages easily. Style sheets can be applied to modify the appearance of the user interface (much like the skins or themes feature in WinAmp or some window managers)..." See also the "Mozilla XUL and Script Reference." Local references: "Extensible User Interface Language (XUL)."

  • [March 29, 2002] "Template Languages in XSLT." By Jason Diamond. From XML.com. March 27, 2002. ['Our main feature this week on XML.com takes up where Eric van der Vlist left off in his July 2000 article on "Style Free Stylesheets." Jason Diamond follows up on Eric's observations that XSLT doesn't encourage a good separation between content and presentation, and pursues the development of a higher-level templating language aimed at creating a cleaner XSLT template infrastructure. Jason shows how his example-based template language, implemented in XSLT itself, is easier for everyday use, especially where non-technical colleagues are involved.'] "Despite its simplicity and its original purpose, XSLT is an extremely rich and powerful programming language. Just about anything that can be done with XML can be implemented in XSLT -- all it really takes is a little bit of creativity and a whole lot of pointy brackets. One of the most common uses of XSLT is to transform XML content into something more suitable for viewing. This separation between content and presentation seems to be the most often cited advantage for many XML advocates. XSLT was designed specifically for this task It could be argued, however, that, XSLT fails miserably at separating these two layers. Traversing source documents with any sort of XPath or XSLT instructions like xsl:for-each and xsl:apply-templates in your style sheets is like opening a connection to a database and performing a query in the middle of an ASP or JSP page. Good programmers don't do this because it breaks the separation between the presentation and data tiers in their applications. Thinking about it from an altogether different perspective, having literal result elements interspersed with XSLT instructions in your transforms is like generating HTML by concatenating strings and then printing them to your output (as is often done when implementing servlets). Most designers can't work in an environment like that. Even if they can, they shouldn't have to concern themselves with all the logic of extracting and manipulating the data they're trying to present... Getting XSLT to process your custom templates isn't as easy as I would like it to be, but once the initial framework is created, adding new instructions and variables is relatively painless. Creating a prototype with XSLT is certainly the quickest way to go as you can easily add new instructions when your template designer needs them. I've personally used the techniques described in this article to prototype a template language with close to 200 instructions. The templates that utilized those instructions were still preferable to hardcoded XPath/XSLT, and it was possible to re-implement the template language processor in a more efficient language (a subject for another article) once the design was finalized without requiring any changes to the templates themselves..." For related resources, see "Extensible Stylesheet Language (XSL/XSLT)."

  • [March 29, 2002] "SVG Tips and Tricks, Part One." By Antoine Quint. From XML.com. March 27, 2002. ['Antoine Quint has been busy in his labs, brewing up content for his SVG column. This month, Antoine packs a whole bunch of cool tricks into creating an animated data-exploration widget.'] "The previous installments of this column discussed the two major techniques at the core of interactive SVG: SVG animation and DOM scripting. We saw how the powerful declarative syntax adapted from SMIL brought life to SVG documents, and how SVG's DOM goes further than XML's core DOM. This month's column, rather than a focused exploration of a particular SVG topic, will examine a few helpful tips and tricks. First, I'll introduce the viewBox attribute for zooming purposes, then explain how to use the SMIL DOM interfaces for remote animation startups, and, last, I'll conclude with a closer look at DOM Events. At the end, you will understand the tricks used and abused in the interactive tree map companion demo. Before we turn to specifics, I should tell you a bit about the demo. The SVG code of the tree map was actually generated by XSLT from an XML file. The aim was to construct a mechanism to represent an XML file's hierarchy in SVG and be able to browse it in an interactive way. So we are dealing with a nice little tree app here. If you look at the SVG code you'll see that there is still a strong depth structure here too. The idea is that each time we met a node in the XML file, we would create a rectangle and a text label. If the node had a child, then we would process the subtree according to the same rule..." Note also in this connection the title SVG Essentials: Producing Scalable Vector Graphics with XML, by J. David Eisenberg (O'Reilly, First Edition: February 2002; ISBN: 0-596-00223-8): "Eisenberg begins with basics needed to create simple line drawings and then moves through more complicated features like filters, transformations, and integration with Java, Perl, and XSLT. Unlike GIFs, JPEGs or PNGs (which are bitmapped), SVG images are both resolution- and device-independent, so that they can scale up or down to fit proportionally into any size display or any Internet device -- from PDAs to large office monitors and high-resolution printers. Smaller than bitmapped files and faster to download, SVG images can be rendered with different CSS styles for each environment. They work well across a range of available bandwidths." See: "W3C Scalable Vector Graphics (SVG)."

  • [March 29, 2002] "Basic Training. [XML Q&A.]" By John E. Simpson. From XML.com. March 27, 2002. ['Could you describe XML in simple, concise language? That's the challenge John Simpson has taken up this week in his XML Q&A column. The result is a gentle introduction to XML that will prove useful for beginners.'] "In this month's column, we celebrate XML's fourth year (belatedly) by way of a deceptively simple question... Is it even possible to explain XML in simple English... XML (an acronym for Extensible Markup Language) is a set of rules, published by the W3C (World Wide Web Consortium), for building new languages..."

  • [March 29, 2002] "W3C XML Schema Needs You." By Leigh Dodds. From XML.com. March 27, 2002. ['One of the consequences of complexity in an open specification is a decreased likelihood of interoperability in implementation. XML developers have been bumping into this problem with the W3C XML Schema language recently. Leigh Dodds covers these problems, and a call for developers to aid the progress of greater interoperability.] "The W3C XML Schema (XSD) specifications have drawn fire again recently, with a number of concerns being aired about an apparent lack of interoperability between implementations. Jonathan Robie, a member of the Schema Working Group, has issued a rallying cry for developers to unite and help push for interoperability... There was a resurgence of the 'XML Schema is too complex' debate on XML-DEV last week. While this is an oft debated topic, the issues have had a slightly different slant this time around with claims that XSD is so complex that it's proving extremely difficult to implement... A few constructive suggestions were circulated during the discussion, some more radical than others. Rob Griffin suggested producing a list of standard error messages for validators, which ought to help achieve some level of consistency across implementations, as well as clarifying the circumstances in which each error should arise. Andrew Watt recommended the addition of a use case document that would provide an additional means of tackling the specifications. Watt pointed to the XML Query documents as a good exemplar. Rick Jelliffe's suggestion to modularize XML Schema was the most radical. Jelliffe suggested that instead of a rewrite the schema specifications should be split into eight small sections which '...would allow greater modularity, let readers and implementers concentrate and advertise conformance on different parts, and fit in with ISO DSDL, for users who, say, want to use RELAX NG with XML Schemas primitive datatypes'. Jelliffe also commented that rather than criticizing XML Schema, the important first question should be to consider which schema language or combination of languages is most suited to a particular application domain. Jelliffe offered a prediction that document oriented systems will likely settle on DSDL, while database oriented applications will find XML Schemas most suitable..." See: (1) "Document Schema Definition Language (DSDL)"; and (2) "XML Schemas."

  • [March 29, 2002] "Thinking XML: Basic XML and RDF Techniques for Knowledge Management. Part 5: Defining RDF and DAML+OIL schemata." By Uche Ogbuji (Principal consultant, Fourthought, Inc.). From IBM developerWorks, XML Zone. March 2002. ['Uche Ogbuji moves on to define RDF and DAML+OIL schemata for the issue tracker application, continuing the discussion of modeling as he goes along.'] "In my last installment of this column, I discussed how XML knowledge management systems such as RDF shed a different light on age-old problems of data design and modeling. This was done toward the goal of nailing down a schema for the issue tracker package that I have been using to illustrate the use of RDF in association with XML applications. Now I'll complete the definition of the issue tracker schema, in RDFS and DAML+OIL form. Again, familiarity with RDF, RDFS, and DAML+OIL are required. Since the last installment, I have published an introduction to DAML+OIL with my colleague Roxane Ouellet, so you no longer have to slog through the dense specifications to get a handle on it. [...] Generally, even if you wish to apply constraints in the loose way discussed in the last installment of this column, you should have a schema of some sort, for documentation if nothing else. RDFS is still the simplest and most pervasive choice, but DAML+OIL has many things to recommend it: not just the additional features, but the cleaner core semantics as well. Now that we have a schema for the issue tracker, we'll move on to improving the way we construct our queries: We'll look at Versa, an open query language for RDF that will make all the query code we've presented simpler and faster..." Also available in PDF format.

  • [March 28, 2002] "Encoded Archival Context (EAC) - Recent Developments." By Per-Gunnar Ottosson (Riksarkivet, Stockholm). In LEAF Newsletter Issue 1 (March 2002). The EAD (Encoded Archival Description) SGML/XML DTD "has elements for names of corporate bodies and persons with attributes allowing for links to authority files. There are also elements for the narrative administrative histories and biographies, as well as elements for controlled access in terms of functions and geographic names. However, EAD does not provide support for separate files of authority and context information. In response to this need, an international group of archivists and information scientists met in Toronto in March 2001 to lay down the principles for governing such an encoding standard. The group prepared for the meeting by drafting and reviewing a set of principles and criteria to direct its work, and agreed that the standard needs to address more than traditional authority control of headings and that accompanying documentation is needed for contextual information. The name of the format became the 'Encoded Archival Context', thereby stressing its wider scope: Archival context information consists of information describing the circumstances under which records (defined broadly here to include personal papers and records of organisations) have been created and used. This context includes the identification and characteristics of the persons, organisations, and families who have been the creators, users, or subjects of records, as well as the relationships amongst them. For the development of the DTD, a special working group was assigned consisting of Daniel Pitti (University of Virginia), Joanne Evens (University of Melbourne), Stephan Yearl (Yale University), and, from LEAF, Gunnar Karlsen (University of Bergen) and P-G Ottosson (National Archives of Sweden). During a meeting in Charlottesville in June, the group came up with a draft DTD, which was ready for circulation to the full group in the middle of July. The DTD has been successfully tested on LEAF data by Gunnar Karlsen. The EAC DTD is adopted to librarian standards for authority records, such as UNIMARC/Authorities. Especially when it came to the elements of the header and the entry elements it was regarded as crucial to keep a compatibility with MARC records. A special attribute (ea= encoding analog) documents the relation between an EAC element and the MARC field of the source. The Committee for Description Standards of the International Council of Archives is now reviewing the ISAAR(CPF): International Standard Archival Authority Record for Corporate Bodies, Persons and Families . Some of the members of the committee took part in the development of EAC, and it is proposed that the new version of ISAAR(CPF) shall accommodate the structure of EAC..." See: (1) "Encoded Archival Context Initiative (EAC)"; (2) "Linking and Exploring Authority Files (LEAF)"; (3) "Encoded Archival Description (EAD)"; general references in (4) "Markup Languages for Names and Addresses."

  • [March 27, 2002] "XN3 - XML for N3." By Graham Moore (Vice President Research and Development, empolis GmbH). "This is a short paper that describes how the N3 notation can be represented using XML. We acknowledge that RDF exists and that the result of processing a N3 document can be equivalent to processing an XML RDF document. However there has been much comment in the RDF community about the verbosity and lack of clarity that exists in the XML serialisation of RDF. The community seems agreed on what the XML is expressing and this has been captured succinctly in N3. This paper shows how the ideas in N3 can be captured as an XML language. We have developed such a language and called it XN3 (XML N3). The aim of this language is to gain the useful property of parsability inherent in XML while being as simple, powerful and elegant as N3. We conclude this paper with some areas for improvement and make a general statement about XML development activities... The N3 notation and the associated primer are responsible for clearly communicating the elegance and simplicity that lies behind the RDF model. It achieves this through prose but also through a syntax, concise and precise syntax. The RDF XML serialisation does have these properties and is further confused by a 'concise' format. The RDF XML serialisation does have the benefit that it is XML. However the RDF model is not the XML model and any serialisation should enable easy, understandable interchange between the model and the syntax. What we attempt to do with XN3 (XML N3) is to maintain the power of XML as an easy to process markup language and keep the simplicity and elegance of N3... The following section describes how we have taken N3 and turned it into an XML language. It describes the different constructs from N3 and how they are represented in the XN3..."

  • [March 27, 2002] "Specifying OLAP Cubes on XML Data." By Mikael Rune Jensen, Thomas H. Møller, and Torben Bach Pedersen (Database Systems Group, Institute for Electronic Systems, Department of Computer Science, Aalborg University, Denmark). In Journal of Intelligent Information Systems: Integrating Artificial Intelligence and Database Technologies Volume 17, Numbers 2/3 (December 2001), pages 255-280 (with 35 references). "On-Line Analytical Processing (OLAP) enables analysts to gain insight about data through fast and interactive access to a variety of possible views on information, organized in a dimensional model. The demand for data integration is rapidly becoming larger as more and more information sources appear in modern enterprises. In the data warehousing approach, selected information is extracted in advance and stored in a repository, yielding good query performance. However, in many situations a logical (rather than physical) integration of data is preferable. Previous web-based data integration efforts have focused almost exclusively on the logical level of data models, creating a need for techniques focused on the conceptual level. Also, previous integration techniques for web-based data have not addressed the special needs of OLAP tools such as handling dimensions with hierarchies. Extensible Markup Language (XML) is fast becoming the new standard for data representation and exchange on the World Wide Web. The rapid emergence of XML data on the web, e.g., business-to-business (B2B) e-commerce, is making it necessary for OLAP and other data analysis tools to handle XML data as well as traditional data formats. Based on a real-world case study, this paper presents an approach to specification of OLAP DBs based on web data. Unlike previous work, this approach takes special OLAP issues such as dimension hierarchies and correct aggregation of data into account. Also, the approach works on the conceptual level, using Unified Modeling Language (UML) as a basis for so-called UML snowflake diagrams that precisely capture the multidimensional structure of the data. An integration architecture that allows the logical integration of XML and relational data sources for use by OLAP tools is also presented... Motivated by the increasing use of OLAP tools for analyzing business data, and XML documents for exchanging information on the Web, this paper provides techniques that enable existing OLAP tools to exploit XML and relational data, without requiring physical integration of data. This paper proposed a multidimensional model, the UML snowflake diagram, enabling a precise specification of an OLAP DB based on multiple XML and/or relational data sources. The UML diagramming method was used for describing and visualizing the logical structure of XML documents, easing the design of the OLAP DB. The paper described how to handle the special considerations that need to be taken when designing an OLAP DB on top of XML data. Also, an architecture for integrating XML data at the conceptual level was presented. The architecture also supported relational data sources, making it well suited for building OLAP DBs which are based partly on in-house relational data and partly on XML data available on the web. We improve on previous work on integration of web-based data by focusing on data integration at the conceptual rather than the logical level. Also, the data integration approach takes special OLAP issues, such as handling dimensions with hierarchies and ensuring summarizability, into account. 
The implementation of a prototype using the approach described in this paper is currently in progress. A very important aspect of the implementation is to investigate efficient query processing techniques such as query translations and data caching. Storing higher-level summaries of the data can also speed up query processing considerably. Furthermore, if XML Schema advances to a W3C Recommendation it would be interesting to consider using this richer formalism for describing XML data sources instead of using DTDs. Other aspects of XML, such as whether preservation of document order is of relevance to OLAP analysis, should also be investigated..." [abstract in part from the related Technical Report R-01-5003, Department of Computer Science, Aalborg University, June 13, 2001. For similar publications, see the bibliography page of Mikael R. Jensen.]

  • [March 27, 2002] "JavaOne: Sun Wades into Open-Source Waters with Java." By Ashlee Vance. In InfoWorld (March 27, 2002). "Sun Microsystems answered a long-standing call from open-source software developers Tuesday, saying Java fans will be able to submit some changes for the platform under open-source licenses and receive financial support from Sun for their projects. Sun's move toward a more open Java was announced by company Chairman and CEO Scott McNealy during a keynote address here at the JavaOne conference. Sun teamed with The Apache Software Foundation (ASF), maker of the popular Apache Server, to refine the procedures for open-source modifications of Java. The changes are designed to address issues that have dogged open-source companies looking to certify their products as Java compatible through the JCP (Java Community Process) that governs Java's maturation. Companies have been wary of submitting open changes for Java because of licensing issues, confidentiality concerns and the costs associated with running compatibility tests, said Jason Hunter, vice president of the ASF, joining McNealy on stage. As a response to some of these concerns, all Sun-led JSRs (Java Specification Requests) for standardizing a feature through the JCP can be submitted under an open-source license. In addition, test kits may also be submitted under the open licenses, Hunter said. Some existing JSRs will also be available for open-source implementations, he said. Sun has submitted more current JSRs than any other vendor... Sun did not say give the specifics of the open source license it will use for Java. Officials however indicated it would not use a license as broad as the GPL (General Public License) used in some open-source projects, which allows developers to freely modify and distribute code as long the changes are made public. Sun has long been under the watch of developers who were concerned about how much control the company exerts over a technology used by myriad companies. Sun, however, had voiced worries about the fragmentation of Java due to incompatible implementations of the technology from outside parties... With the move Tuesday, Sun may have assuaged some of the developers' fears and found a way to tap the talents of the Java community and open-source programmers as a whole. One company, however, remains unimpressed with Sun's new stance after fighting with the company in the past over open-source Java projects... Sun's close ties to the Apache Software Foundation on this project lend some credence to the company's intentions, as the ASF manages many of the open-source world's most successful projects. In a press conference after his speech, McNealy highlighted the importance of maintaining XML as a standard technology and of not allowing vendors to implement their own versions..." See "Java Community Processs Embraces Open Source."

  • [March 27, 2002] "Microsoft Opens .Net Code to Academics." By George A. Chidi Jr. and Matt Berger. In InfoWorld (March 27, 2002). "Microsoft will allow academic researchers to view the nuts and bolts of some of the .Net source code the company will use in its wide-ranging initiative to supply applications and services over the Internet, Microsoft announced Wednesday. More than one million lines of source code for .Net will be made available under Microsoft's previously announced 'Shared Source' licensing program to academic researchers in university computer-science departments. Shared source is Microsoft's response to the open-source software movement and the growing popularity of the Linux operating system. Open-source software such as Linux typically is developed by programmers collaborating and freely sharing code updates. Under Microsoft's shared source license, developers have been able to view source code, but not modify it as they can with Linux. The shared-source implementation for .Net and Microsoft's Common Language Infrastructure for academics will run on the Windows XP operating system and the open-source FreeBSD derivative of the Unix operating system. Windows source code is also available to academics under shared source licensing, allowing noncommercial modification for academic and research purposes. Microsoft's source-code announcement Wednesday came as Sun Microsystems Inc. handed developers more pieces of its Java programming technology designed for building and deploying Web services, at the JavaOne Developer Conference in San Francisco. Sun Tuesday said developers will be able to submit some changes for Java under open-source licenses and receive financial support from the company for their projects. Microsoft, based in Redmond, Wash., has made a number of moves recently that have been seen as a reaction to both Sun's Java efforts and growing momentum for open-source projects. For example, Microsoft has submitted some of the underpinnings of its .Net initiative to a European standards body. Those technologies, which include the C# programming language and a component of its .Net Framework called CLI (Common Language Infrastructure), were approved as standards by the European Computer Manufacturers Association (ECMA) in December. Microsoft also funded an effort by software maker Corel to implement the ECMA standards and create the version of the .Net Framework for FreeBSD and testing. That implementation is what Microsoft will hand out under its latest academic deal... C#, a component-oriented programming language Microsoft developed, has been compared to Java in that, among other things, it is intended to allow developers to write code and reuse pieces of it when building various applications. CLI is the underlying technology for enabling developers to write .Net applications in more than 20 programming languages. Microsoft's implementation of those technologies is called the .Net Framework. The company intends to use .Net Framework as the common platform for Web services and software that link business processes together over the Internet with XML..." See "Microsoft Releases Shared Source CLI and C# Implementation Availability of Over 1 Million Lines of Source Code for FreeBSD and Windows Underscores Microsoft's Commitment to Open Standards, Academia and Developers."

  • [March 26, 2002] "Exploring XML Encryption, Part 1. Demonstrating the Secure Exchange of Structured Data." By Bilal Siddiqui (CEO, WAP Monster). From IBM developerWorks, XML Zone. March 2002. ['XML Encryption provides end-to-end security for applications that require secure exchange of structured data. XML itself is the most popular technology for structuring data, and therefore XML-based encryption is the natural way to handle complex requirements for security in data interchange applications. Here in part 1 of this two-part series, Bilal explains how XML and security are proposed to be integrated into the W3C's Working Draft for XML Encryption.'] "Currently, Transport Layer Security (TLS) is the de facto standard for secure communication over the Internet. TLS is an end-to-end security protocol that follows the famous Secure Socket Layer (SSL). SSL was originally designed by Netscape, and its version 3.0 was later adapted by the Internet Engineering Task Force (IETF) while they were designing TLS. This is a very secure and reliable protocol that provides end-to-end security sessions between two parties. XML Encryption is not intended to replace or supersede SSL/TLS. Rather, it provides a mechanism for security requirements that are not covered by SSL. The following are a two important areas not addressed by SSL: (1) Encrypting part of the data being exchanged; (2) Secure sessions between more than two parties. With XML Encryption, each party can maintain secure or insecure states with any of the communicating parties. Both secure and non-secure data can be exchanged in the same document. For example, think of a secure chat application containing a number of chat rooms with several people in each room. XML-encrypted files can be exchanged between chatting partners so that data intended for one room will not be visible to other rooms. XML Encryption can handle both XML and non-XML (e.g., binary) data. We'll now demonstrate a simple exchange of data, making it secure through XML Encryption. We'll then slowly increase the complexity of the security requirements and explain the XML Encryption schema and the use of its different elements... In our next installment of this series of articles, we will discuss and implement the details of cryptography. We'll demonstrate the working of encryption and decryption classes and their interaction with parsing logic, and present applications of XML Encryption in Web services." See: "XML and Encryption."

  • [March 27, 2002] "Donald Eastlake on XML Digital Signatures. An Interview With One of the Specification's Pioneers." By Larry Loeb [and Donald Eastlake]. From IBM developerWorks, XML Zone. March 2002. ['In this exclusive developerWorks interview, XML Digital Signatures pioneer Donald Eastlake responds to Larry Loeb's recent article on the topic by clarifying a number of issues about how this technology is used. Note from Larry Loeb: In a recent article on XML Digital Signatures, I questioned their utility and usefulness. Since the proposal has just been recommended for passage into general usage, I decided it was time to check back on the topic again. This time, I talked with Donald Eastlake, the editor of the XML Digital Signature (XMLDSIG) RFC, and someone who should know something about the subject, since he has been intimately involved with XML specifications since the effort began in the 1990s. He has also served on IETF efforts too numerous to list. His responses appear unedited, except for minor grammatical changes.'] Eastlake on real world use of XML digital signatures: "I believe there will be a full spectrum of usage but two areas of particular prominence seem likely: documents and messages. While these sometimes blend into each other, documents tend to be longer lived and the signatures on them tend to indicate human approval of all or part of the document, although they may also have time stamp signatures affixed automatically. Messages tend to be transient and would more commonly have authentication automatically affixed and removed. Of course, a message could include one or more documents, in the sense I'm using the word here, in its content... Documents are most likely to use public key techniques while messages, depending on the application, could use public key or symmetric secret key techniques. Documents are more likely to be something "important" like a mortgage or court filing. But if XML digital signatures are widely used for messages, messages could be several orders of magnitude more numerous." Article also in PDF format. See "XML Digital Signature (Signed XML - IETF/W3C)."

  • [March 26, 2002] "Streaming API for XML (StAX)." Java Specification Request #173. Specification Lead: Christopher Fry (BEA Systems). ['The Streaming API for XML (StAX) is a Java based API for pull-parsing XML.'] Initial Expert Group Membership: BEA Systems; James Clark, Thai Open Source Software Center; K Karun, Oracle Corporation; Gregory Messner, The Breeze Factor; Aleksander Slominski, Indiana University; James Strachan, dom4j; Anil Vijendran, Sun Microsystems. "The Streaming API for XML (StAX) parsing will specify a Java-based, pull-parsing API for XML. The streaming API gives parsing control to the programmer by exposing a simple iterator based API. This allows the programmer to ask for the next event (pull the event) and allows state to be stored in a procedural fashion. Two recently proposed JSRs, JAXB and JAX-RPC, highlight the need for an XML Streaming API. Both data binding and remote procedure calling (RPC) require processing of XML as a stream of events, where the current context of the XML defines subsequent processing of the XML. A streaming API makes this type of code much more natural to write than SAX, and much more efficient than DOM. The goal of this API is to develop APIs and conventions that support processing XML as a stream. The specification will address three main areas: (1) Develop APIs and conventions that allow a user to programmatically pull parse events from an XML input stream. (2) Develop APIs that allow a user to write events to an XML output stream. (3) Develop a set of objects and interfaces that encapsulate the information contained in an XML stream. The specification should be easy to use, efficient, and not require a grammar. It should include support for namespaces, and associated XML constructs. The specification will make reasonable efforts to define APIs that are 'pluggable'... Two standard main approaches for processing XML exist: (1) the Simple API for XML processing [SAX] and (2) DOM [Document Object Model]... To use SAX one writes handlers (objects that implement the various SAX handler APIs) that receive callbacks during the processing of an XML document. The main benefits of this style of XML document processing are that it is efficient, flexible, and relatively low level. It is also possible to change handlers during the processing of an XML document allowing one to use different handlers for different sections of the document. One drawback to the SAX API is that the programmer must keep track of the current state of the document in the code each time one processes an XML document. This creates overhead for XML processing and may lead to convoluted document processing code... DOM provides APIs to the programmer to manipulate the XML as a tree. At first glance this seems like a win for the application developer because it does not require writing specific parsing code. However this perceived simplicity comes at a very high cost: performance. Some implementations require the entire document to be read into memory, so for very large documents one must read the entire document into memory before taking appropriate actions based on the data. Another drawback is the programmer must use the DOM tree as the base for handling XML in the document. For many applications the tree model may not be the most natural representation for the data..." See the full list of JSRs.

  • [March 26, 2002] "The Java XML Pack, Spring 02 Release." "The Spring 02 Release includes the following: (1) Java API for XML Messaging (JAXM) v1.0.1 EA2; (2) Java API for XML Processing (JAXP) v1.2 EA2; (3) Java API for XML Registries (JAXR) v1.0 EA2; (4) Java API for XML-based RPC (JAX-RPC) v1.0 EA2. This release of the Java XML Pack has been tested with various configurations, using Tomcat 4.0.1 and J2EETM 1.3_01 and 1.3.1 with Java 2 SDK, Standard Edition (J2SETM) versions 1.3.1_02 and 1.4 on the following platforms: Solaris 2.8, Windows 2000, Professional Edition, Windows XP, Professional Edition, RedHat Linux 7.2... The Java XML Pack is an all-in-one download of Java technologies for XML. Java XML Pack brings together several of the key industry standards for XML -- such as SAX, DOM, XSLT, SOAP, UDDI, ebXML, and WSDL -- into one convenient download, thereby giving developers the technologies needed to get started with web applications and services. Bundling the Java XML technologies together into a Java XML Pack ensures Java developers of a quick and easy development cycle for integration of XML functionality and standards support into their applications. Through support of these technologies in conjunction with the Java Platform, Java XML Pack technology enables interoperability between applications, services, and trading partners through a vendor-neutral platform that allows for sharing of custom industry standard data formats. The Java XML Pack includes current publicly-available releases of Java APIs and Architectures for XML, both production and early access (EA) versions. The Java XML Pack will have frequent quarterly refreshes to ensure the underlying Java XML technology is the latest available..." See also: "Java Technology and XML: Frequently Asked Questions."

  • [March 26, 2002] "System Integrated Automaton Parser for XML (SIA Parser)." Communiqué 2002-03-26 from Robert Berlinski. Version: March, 2002. "Available for free... "...a new implementation of the parser. The most important is a new automaton engine that improves efficiency by 14% to 22%. Besides that classes have new interface and are not compatible with the previous versions. Please expect a few minor improvements in the near future... The SIA Parser improves SAX by integrating an automaton within it. Generally speaking the SIA Parser retains all SAX's functionality and additionally makes very simple to: (1) Uniquely identify XML nodes by state numbers. In other words a reference to a particular node can be made by a unique state ID. And the process of matching nodes with state numbers is very simple. (2) Automatically generate a source code to parse a particular XML. Just run the generator against your XML data file, save the source code, fill in business logic, generate code and your application is done. (3) Parse altering XML that might include nested XML and process the nested XML. (4) Work in self-learning mode to accept any XML and perform general tasks. (5) Provide statistical and structural information about an XML... The SAX parser offers a great efficiency and is faster than DOM parser. On the other hand the DOM parser provides structure information regarding parsed XML. It is very important since, an XML might have might have different nodes in different positions but with the same names. The SAX parser provides only local information about the structure (events: startDocument, endDocument, startElement, endElement). Then it is up to the application to keep track of the information to build the global state. The SIA Parser was introduces to fill up the gap. It is possible by integrating an automaton with the SAX parser. An universal automatic mechanism builds a new interface for application. The interface provides information about global state plus all the information available from SAX parser. Additionally it is possible to build universal tools based on the SIA parser to solve common issues like: Formatting an XML to a readable form Providing statistics about an XML Automatically generating dedicated parsers for an XML The SAX roots guarantees high performance while the integrated SIA automaton helps with Rapid Application Development and reduce code maintains costs..."

  • [March 26, 2002] "Using Object-Oriented Attribute Grammars as ODB System Generator." By Takashi Imaizumi, Takeshi Hagiwara, Katsuhiko Gondow, and Takuya Katayama. Department of Information Engineering, Faculty of Engineering, Niigata University, Niigata 950-2181, Japan. Presented at the Third Workshop on Attribute Grammars and their Applications (WAGA'00), Ponte de Lima, Portugal. July 7, 2000. 20 pages. "This paper presents MAGE2 system. It implements a computational model OOAG (Object-Oriented Attribute Grammars) and creates its attributed object trees in object-oriented database (ODB) using persistent object allocation mechanism of object-oriented database management systems (ODBMS). The MAGE2 is a programming support and execution environment for OOAG. The focus of this paper is on an execution system. We indicate core techniques to implement MAGE2, that is, how to execute specifications of OOAG and how to generate an ODB system. We are planning to use MAGE2 to design databases for storing data that have logical structures such as program source files, XML documents and so on... OOAG has been derived from attribute grammars. Declarative structures, separation of semantics and syntax definition, and local description resulting in high readability and high maintainability, and clear description due to functional computation of attributes are all desirable characteristics of AGs. We summarize the OOAG features as a generator of database systems as follows: (1) OOAG has been derived from attribute grammars; (2) We can program how to manipulate software objects by message passing; (3) We can describe data structure and manipulation method of software objects at the same place; (4) We can generate software repository system automatically from formal repository specification written in OSL language. An OOAG description is separated into two parts: one is a static specification and the other is a dynamic specification. They are described in a specification language OSL 'Object Specification Language'. We describe briefly each part and then give the correspondence of OSL language constructs to conventional attribute grammars constructs... A tool constructed by the MAGE system creates attributed object trees in an ODB persistently. Operations to persistent object trees can be described in OSL specifications by the message passing mechanism of OOAG model. Programmers who use MAGE will only write interface codes between generated tool. Created ODBs will be maintained by the OOAG evaluator. This includes updating, adding, or deleting objects. They will be described in dynamic subtrees operation by message passing. If the state of object trees in the database will be inconsistent, OOAG evaluation loop will keep them consistent. From above features of the MAGE system, we can develop complicated object-oriented database systems which manage structured data effciently..." See also "XML and Attribute Grammars." [source, Postscript]

  • [March 26, 2002] "SmartTools: A Development Environment Generator Based on XML Technologies." By Isabelle Attali, Carine Courbis, Pascal Degenne, Alexandre Fau, Joël Fillon, Didier Parigot, Claude Pasquier, and Claudio Sacerdoti Coen. In XML Technologies and Software Engineering, Toronto, Canada. ICSE'2001, ICSE Workshop Proceedings. "SmartTools is a development environment generator that provides a structure editor and semantic tools as main features. SmartTools is easy to use, thanks to its graphical user interface. Being based on Java and XML technologies offers all the features of SmartTools to any defined language. The main goal of this tool is to provide help and support for designing software development environments for programming languages as well as domain-specific languages defined with XML technologies... From the abstract syntax definition of programming (e.g., Java) or domain-specific languages, it is possible to easily generate an interactive environment with SmartTools. This latter automatically offers a well-known visitor pattern technique to specify semantic analysis on DOM tree structures. Its graphical part is mainly based on free existing implementations of standards (XSLT, BML). We have chosen to use non-proprietary APIs in the concern to be open and take advantage of future or external developments. Thus, we can focus on semantics tools (visitor technics, aspect-oriented programming). There are already some examples of easy and successful integration of research tools, and technology transfer in industrial environment. Additionally, we hope to benefit from the large fields of applications that appear through XML technologies. Also in Postscript format. See similarly "SmartTools: A Generator of Interactive Environments Tools," in ETAPS'2001: Electronic Notes in Theoretical Computer Science (ENTCS). Tools Demonstrations at CC'01, edited by Reinhard Wilhelm. Project URL: see "SmartTools System: Aspect and XML-oriented Semantic Framework Generator." From the section 'Using XML technologies': "As XML will be more and more used as a communication protocol between applications, we wanted to be able to handle any XML document in SmartTools. Any XML document importing a DTD (Document Type Definition) has a typed structure. That DTD describes the nodes and their types, that is very similar to our AST formalism. In order to obtain this result, we have specified and implemented a tool which converts a DTD formalism into an AST equivalent formalism. With this conversion, we automatically offer a structure editing environment for all languages defined with XML in the SmartTools framework. It is important to note that XML documents produced by SmartTools are well-formed... We are also studying XML schemas and RDF (Resource Description Framework) schemas, the successors of DTD. Thus any application that respects the implementation of the APIs, can be XML-compliant. All the manipulated trees in SmartTools are Java DOM Trees to ease the integration with other tools and to have a very open data structure. We offer a tool to automatically generate parsers. This tool can be useful for a designer to define a user-friendly concrete syntax for his language. But, extra data are required in the definition of the language. We have also integrated the XSL (XML Style-sheet Language) specifications that describe the layout of a document as well as the XSLT (XSL Transformation).

  • [March 26, 2002] "Standardizing XML Rules: Rules for E-Business on the Semantic Web." Invited Presentation (45-minutes, presentation, with slides in PDF format). By Benjamin N. Grosof (MIT Sloan Professor in E-Commerce Information Technology). August 5, 2001. Presented at the Workshop on E-business and the Intelligent Web at the International Joint Conference on Artificial Intelligence (IJCAI-01). See also the short paper; preliminary prose outline of the talk, and appears in the Workshop Proceedings. The principal topic of discussion is the Rule Markup Language (RuleML). See: "Rule Markup Language (RuleML)." [alt URL for paper; cache]

  • [March 26, 2002] "Facilitating Semantic Web Search with Embedded Grammar Tags." By Gautham K. Dorai and Yaser Yacoob (Department of Computer Science, University of Maryland, College Park, MD, USA 20742). August 5, 2001. Presented at the Workshop on E-business and the Intelligent Web at the International Joint Conference on Artificial Intelligence (IJCAI-01). 6 pages. "We propose a new framework for intelligent information access. The backbone of this framework consists of embedded grammar tags (EGT's) that capture natural language queries. These embedded grammar tags reflect information content in web pages by anticipating the queries that may be launched by users to retrieve a particular content. These grammars provide a unifying component for speech recognition engines, semantic web page representation and speech output generation. We demonstrate the new EGT representation to enable a software agent to respond to natural speech input from users in narrow domains such as weather, stock market and news queries. [...] In this paper a new semantic tagging representation (i.e., EGT) was proposed and developed. The tagging approach is a departure from existing definition and use of tags in XML, RDF and DAML. Employing BNF grammar to represent the queries which users may employ to recover information changes the current view of semantic content of web pages since we reach beyond meaning into anticipation of query syntax and semantics. There are far reaching impacts to this proposal. First, the designer of the web is given the role of anticipating the queries that are matched to particular content items. Second, the web-search engine is relieved from the load of performing NLP since the mapping between queries and content has been already programmed into the page. Third, users can creatively expand the semantic reach of the content of web-pages by simply creating new EGTs that reflect potential queries." [cache]

  • [March 26, 2002] "An Expressive Constraint Language for Semantic Web Applications." By Peter Gray, Kit Hui, and Alun Preece (University of Aberdeen, Computing Science Department, Aberdeen AB24 3UE, Scotland). Presented at the Workshop on E-business and the Intelligent Web at the International Joint Conference on Artificial Intelligence (IJCAI-01). 8 pages. "We present a framework for semantic web applications based on constraint interchange and processing. At the core of the framework is a well-established semantic data model (P/FDM) with an associated expressive constraint language (Colan). To allow data instances to be transported across a network, we map our data model to the RDF Schema speci- fication. To allow constraints to be transported, we define a Constraint Interchange Format (CIF) in the form of an RDF Schema for Colan, allowing each constraint to be defined as a resource in its own right. We show that, because Colan is essentially a syntactically-sugared form of first-order logic, and P/FDM is based on the widely-used extended ER model, our CIF is actually very widely applicable and reusable. Finally, we outline a set of services for constraint fusion and solving, which are particularly applicable to business-tobusiness e-commerce applications. All of these services can be accessed using the CIF... XML Constraint Interchange Format: "In defining our Constraint Interchange Format, we were guided by the following design principles: (1) the CIF would need to be serialisable into XML, to make it maximally portable and open; (2) constraints should be represented as resources in RDF, so that RDF statements can be made about the constraints themselves; (3) there must be no modification to the existing RDF and RDF Schema specifications, so that the CIF would be layered cleanly on top of RDF; (4) it must be possible for constraints to refer to terms de- fined in any RDF Schema, with such references made explicit. As we showed in the previous section, the entity-relational basis of both our P/FDM data model and RDF made it relatively straightforward to map from the former to the latter. In building the RDF Schema for our CIF we were guided by the existing grammar for Colan which relates constraints to entities, attributes and relationships present in the ER model. This grammar serves as a metaschema for the Colan constraints (such metaschemas are very common in relational and object database systems). A number of issues arose in developing the RDF Schema for CIF, discussed in the following subsections... At the core of the framework is a well-established semantic data model (P/FDM) with an associated expressive constraint language (Colan). To allow data instances to be transported across a network, we have mapped our data model to the less expressive (but adequate) RDF Schema. To allow constraints to be transported, we have provided a Constraint Interchange Format (CIF) in the form of an RDF Schema for Colan, allowing each constraint to be defined as a resource in its own right. Because Colan is essentially a syntactically-sugared form of first-order logic, and P/FDM is based on the widely-used extended ER model, our CIF is actually very widely applicable and reusable... In linking Colan to RDF Schema, we also allow its usage with more expressive data modelling languages built on top of RDF Schema, including DAML-ONT and OIL. 
However, a basic requirement of our approach in defining the RDF Schema for Colan expressionswas that it should in no way require modification to the underlying RDF definitions (this is in contrast to the OIL approach, which requires modification at the RDF layer in order to capture certain kinds of expression. Our constraint interchange and solving services are being incorporated into the AKT infrastructure, as one of the basic knowledge reuse mechanisms in the AKT service layer. Further information on this work can be found at www.aktors.org..." [cache]

  • [March 26, 2002] "Generating Web Content with Cocoon. [Exploring XML.]" By Michael Classen. From WebReference.com. March 18, 2002. ['Cocoon provides for developers a way to generate content dynamically using XML data. XML expert Michael Classen takes a look at the version 2 release, which, among other things, improves scalability by using SAX instead of the DOM.'] "The Apache project is well-known for the Web server software it produces that is carrying its name. In the past, many other interesting software projects were also started there, mainly in the Java and XML space. Cocoon is one of them. Cocoon is a Java Web-application for generating dynamic content using XML. It can be installed on any Java Servlet Engine and comes with a wide variety of components for generating, transforming and outputting data with XML. Cocoon 2 was recently released as a complete rewrite of its predecessor, with improved flexibility and scalability. The central concept in Cocoon is the pipeline, a number of components plugged together in a serial configuration to process incoming data that will be passed along... The Cocoon developers set out to create a similar system for generating content on the Web by piping XML through a configurable set of tools. The first version of the software was passing around full DOM documents, limiting scalability with regard to the size of documents that could be processed, and the amount of parallelism in the pipeline. Furthermore, the pipeline was defined through processing instructions within the documents, making reuse in different contexts difficult. Version 2 eliminates these problems by using SAX instead of DOM, and connecting the processing components through SAX events. This way XML documents of arbitrary size can be processed, and the components can work in parallel on the same document. The configuration of the pipeline is now moved out of the data documents and into a separate sitemap file... out of which components can a pipeline be built? Cocoon comes with many configurable components for generating, transforming and serializing data with XML... The Cocoon framework is a powerful software application for dynamically generating Web content without needing to know a programming language. Although it is written in Java, by no means do you have to use or know Java, short of configuring a Web application for a servlet engine such as Tomcat. Cocoon 2 eliminates the shortcomings of Version 1 and provides an interesting alternative to your favorite scripting language..."

  • [March 26, 2002] "HP Adds Transaction Support To Web Services: Java One." By Richard Karpinski. In InternetWeek (March 25, 2002). "Hewlett-Packard this week will release what it claims is the first implementation of an emerging XML protocol that will let emerging Web services infrastructures better handle business transactions. HP's new Web Services Transaction Server 1.0 is based on the OASIS Group's Business Transaction Protocol (BTP) specification. While Web services offer great flexibility with their loosely coupled architecture, that doesn't translate well into a more transaction-oriented environment where messages must be passed in a timely and reliable manner. Traditionally, systems such as transaction-processing monitors have emerged to enable such highly reliable software environments. The BTP specification aims to bring the same reliability to the world of XML and Web services, said Joe McGonnell, HP's product manager for Web services. The HP transaction server features an implementation of JTS, or Java Transaction Service, underneath the covers, working at an API level, said McGonnell. Meanwhile the new BTP implementation works at a higher level, coordinating how SOAP messages are sent back and forth by a Web service... In other news from Java One, HP is releasing its Web Services Platform version 2.0, a developer environment for creating, deploying, and registering Web services. The tool integrates with the HP Application Server, as well as app servers from other vendors. The HP platform also places a major emphasis on bridging the gap between Java- and .Net-based Web services. HP -- which runs a public UDDI node -- is also releasing a new version of its Web Services Registry 2.0, to help users build private UDDI registries..."

  • [March 25, 2002] "A URN Namespace for the Web3D Consortium (Web3D)." By Aaron E. Walsh (Mantis Development Corp.; WWW). IETF Network Working Group, Internet-Draft. Reference: 'draft-walsh-urn-web3d-00.txt'. March 25, 2002, expires: September, 25 2002. "This document describes a Uniform Resource Name (URN) namespace for the Web3D Consortium (Web3D) for naming persistent resources such as technical documents and specifications, Virtual Reality Modeling Language (VRML) and Extensible 3D (X3D) files and resources, Extensible Markup Language (XML) Document Type Definitions (DTDs), XML Schemas, namespaces, style sheets, media assets, and other resources produced or managed by Web3D. Web3D is the only non-profit organization with a mandate to develop and promote open standards to enable 3D for the Internet, Web and broadcast applications. Web3D is responsible for developing, advancing, and maintaining the VRML97 ISO/IEC International Standard (ISO/IEC 14772-1:1997), X3D (the forthcoming official successor to VRML) and related technologies..." See: "VRML (Virtual Reality Modeling Language) and X3D." [cache]

  • [March 25, 2002] "Java, XML, and Web Services." By Jon Udell. In InfoWorld (March 22, 2002). "Simple text messages, readable and writable by people and computers, live at the core of every successful Internet application. XML seeks to grow the expressive power of these texts while preserving their accessibility. Java, although born to the Internet, has been oddly slow to embrace these paradigms. For example, regular expressions are the most basic tool for working with patterned text. Yet only now, in the JDK (Java Development Kit) 1.4 release, do regular expressions become a standard feature of the Java platform. Likewise, basic XML facilities such as parsing with SAX (Simple API for XML) and DOM (Document Object Model) interfaces, and transformation with XSLT (Extensible Stylesheet Transformation) -- although long available from other sources -- make their first official debut in J2SE (Java 2 Standard Edition) 1.4. Although Sun's Java/XML engine may have started slowly, it's really cranking now. A set of unbundled XML and Web-services APIs, in various stages of development, seeks to complement the XML core that's built into the platform. These "JAX Pack" APIs define a Java framework within which developers can perform several tasks... So what exactly are the features of Sun's emerging Web services model? For Hal Stern, CTO of iPlanet's software division, there are three core activities: creating service endpoints, assembling services into business processes, and deploying these in ways that administrators can control and users can comprehend... Creating context for Web services is the charter of ebXML, one of the messaging profiles supported by JAXM (Java API for XML Messaging); another JAXM profile is WS-Routing, formerly SOAP-RP. But business-collaboration protocols such as ebXML will not soon, and may never, solve the kinds of semantic problems that plague systems integrators who regularly struggle with the need to match one company's definition of supplier, customer, or purchase order to another's... Clearly there will be lots of ways to produce and consume SOAP services in Java. Too many, perhaps, but when you're bootstrapping it's wise to accommodate a broad range of legacy systems and attitudes. The real question for Java developers, and indeed for all developers, is how to contextualize the use of those services. To that end, iPlanet's Stern argues for "a thin waistline of core standards" -- just SOAP and WSDL and UDDI. "Then let's innovate on top of these to solve the assembly problem," he says. At this level, conventional programming languages fade into the woodwork. The focus shifts to languages that are today spoken, and protocols that are today enacted, by people. When a company signs a new supplier, Stern points out, lawyers gather in a room to negotiate terms and conditions, and all documents subsequently exchanged fall within the scope of that agreement. ebXML addresses this realm. So does IBM's WSFL (Web Services Flow Language), and Microsoft's XLANG, the orchestration dialect of BizTalk Server. Until basic Web services find their way into routine use, we won't be able to fully evaluate these approaches to composing systems based on them..."

  • [March 25, 2002] "Iona Advances Web Services Platform, Adds Security: JavaOne." By Richard Karpinski. In InternetWeek (March 25, 2002). "Vendor Iona Inc. is adding new capabilities to its Web services tools, including support for the latest Java standards and a new security framework that addresses a key missing piece of the Web services puzzle. Iona this week is rolling out a new version of its core developer platform. Iona XMLBus Edition 5.1 includes new security tools, improved developer features, and the promise of a new UDDI server to be added shortly. The big addition is the security framework, the Iona Security Service, which lets developers use existing security databases -- initially LDAP but later Active Directory and other platforms -- to implement a password-protection scheme for their Web services. The service is based on SAML, or Security Assurance Markup Language, which provides standards for distributing authentication, such as in a Web services architecture. In later releases, IONA will add single sign-on and other security features to the product, including integration with PKI products from VeriSign and others... In addition to the new security framework, Iona is also adding new intelligent interface mapping tools to XMLBus that will let developers build WSDL -- or Web Services Description Language -- descriptions of Web services without a lot of hand-coding, Rymer said. In addition, the vendor plans to add a private UDDI repository to the tool within the next 30 days..." See the announcement: Iona Announces Web Services Security Framework. IONA Security Services Deliver Open and Comprehensive Solution for End 2 Anywhere Integration of Enterprise Applications Across the Internet."

  • [March 25, 2002] "OASIS Hones Web Services Standards." By Tom Sullivan and Ed Scannell. In InfoWorld (March 22, 2002). "Looking to take Web services protocols higher up the interoperability stack, two groups within the Organization for the Advancement of Structured Information Standards (OASIS) are developing specifications for content delivery and end-user interfaces. Known as Web Services for Interactive Applications (WSIA) and Web Services for Remote Portals (WSRP), which met this week, the groups were created to advance user-facing Web services and enable Web services and other applications to plug and play with portals and portlets. Building on this momentum, Sun Microsystems and IBM plan to announce on Monday at JavaOne a portlet specification submitted to the Java Community Process (JCP) that complements Billerica, Mass.-based OASIS' WSRP group. Thus far, the core Web services standards -- SOAP (Simple Object Access Protocol), XML, UDDI (Universal Description, Discovery, and Integration), and WSDL (Web Services Description Language) -- have focused on system-to-system interoperability. The standard expected to emerge from the WSIA, however, would improve any service that requires users to fill out online forms, for example, said Dwight Davis, an analyst at Summit Strategies in Kirkland, Wash... recognizing this need, a host of companies have backed the WSIA initiative, including IBM, BEA Systems, Bowstreet, divine, Documentum, Epicentric, and Plumtree. And the lineup of WSRP supporters looks similar, with Documentum, Epicentric, divine, IBM, Sun, Hewlett-Packard, Iona, and Oracle all on board. 'There is a certain set of base functions that we are trying to do jointly between the two committees, and then WSIA will try to go beyond that and do some things that are not required for portals,' explained Charles Wiecha, manager of the next-generation user experience frameworks department at Yorktown Heights, N.Y.-based IBM Research and chair of the WSIA committee. The technologies expected to drive the WSIA standard include IBM Research's WSXL (Web Services Experience Language) and the combined work on Web services graphical interfaces done by Epicentric and divine. IBM has also included in its WSXL proposal plans for XLink (XML Linking Language) to be used to hook together a patchwork of Web services to make them appear as a single application..." See (1) "Web Services for Remote Portals (WSRP)"; (2) "Web Services for Interactive Applications (WSIA)."

  • [March 25, 2002] "Extensible Rights Markup Language (XrML) Interoperability with Digital Transmission Content Protection (DTCP)." By ContentGuard, Inc., with contributions from Intel Corporation and Microsoft Corporation. 2002. ['This paper addresses interoperability across the digital content industry with its multiple devices and a myriad of different business models. This interoperability is based on open widely accepted standards within the industry and will allow for maximum flexibility for content owners, device manufacturers and consumers.'] "As a proof of concept, the following technical discussion will address the manner in which leading industry standards, XrML and the rights expression function within DTCP, are capable of interoperability. XrML is a semantically precise language for expressing rights and business rules related to the use, duplication and distribution of content. DTCP, also sometimes referred to as '5C, is a specification that enables secure distribution of digital content between devices across IEEE 1394 and other home network interconnects. We will discuss the benefits of using XrML in combination with DTCP to expand the range of application of both technologies, and to explore ways for rights management systems using XrML to interoperate with devices that support DTCP. A guiding principle in the technical approach is to ensure systems using either or both technologies can relate to each other in a way that content can efficiently flow between them while retaining the content owner's expressed rights, rules and restrictions... Devices that are built today based on the DTCP specification have been granted certain usage rights for the DTCP protected content that they receive. These rights are bound to the content itself via the CCI bits. XrML has the ability to specify the use of DTCP when permitting the export of content to devices. By enabling interoperability between these technologies, industry and consumers can take part in a broader set of business models and can maximize opportunities throughout the content value chain. As a business case example, Microsoft's DRM technology, incorporating XrML, will be capable of enabling interoperability with DTCP. This approach will bring together the benefits of a robust digital rights language and a high degree of protection for entertainment content distribution to the consumer. This in turn will help to demonstrate the foundation of technology and standards needed to enable a more rapid transition to the digital economy..." See "Extensible Rights Markup Language (XrML)" and the recent TC proposal, "OASIS Members Propose a Rights Language Technical Committee."

  • [March 22, 2002] "RELAX NG Schemas for TEI P4." Prepared by Sebastian Rahtz (OUCS Information Manager). See also the ZIP package. Sebastian has prepared RELAX NG schemas for the TEI "which are up to date with the latest version of P4 (now effectively frozen), and are derived automatically from the [ODD] source of TEI P4. I have been working on this for some time, but please regard it as a personal project for now, and not a product of the TEI. It is not intended to be used in production... There is no documentation yet, and only one example. You can validate test0.xml against test0.rng. test0.rng is the base example showing how to construct an instance schema. I will be added more complex test cases in due course... I have tested this with James Clark's "jing", and Sun's "MSV" tool. Sun's "relmes" tool, which allows Schematron assertions to be added, also works, but not the public release. The author is hoping to get a new release out soon. Anyone attending XML Europe 2002 in May may like to come and hear me give a talk about this work. Or of course I am happy to discuss it in public or private..." [XML Europe 2002, Wednesday, 22-May-2002: "TEI and RELAX NG." Presented by: Sebastian Rahtz, Information Manager, Oxford University Computing Services, United Kingdom. This presentation describes work undertaken to show how the Guidelines can use fragments of the Relax NG schema language internally, and generate either full schemata or DTDs on demand. It will also show how it can evolve to keep up with modern standards."] References: "Text Encoding Initiative (TEI) - XML for TEI Lite."

  • [March 22, 2002] "What is XSL-FO?" By G. Ken Holman. From XML.com. March 20, 2002. ['In an extended excerpt from his renowned training materials, Ken Holman explains the W3C's XSL Formatting Objects technology, XSL-FO, intended to facilitate page-based formatting of XML documents. Ken introduces XSL-FO's basic concepts and processing model, and places it in the context of XML and XSLT. Including plenty of examples and diagrams, "What is XSL-FO?" should give you a good grounding and leave you ready to start experimenting with this exciting technology.'] "Crane Softwrights Ltd. has published Practical Formatting Using XSLFO covering every formatting object of XSLFO and their properties, according to the final XSL 1.0 Recommendation of October 15, 2001. The first two chapters of this book have been rewritten in prose and are made available here as an introduction to the technology and its use. This material assumes no prior knowledge of XSLFO and guides the reader through background, context, structure, concepts, introductory terminology, and a short introduction of each of the formatting objects. Note that neither the Recommendation itself, nor Crane's training material, attempt to teach facets of typography and attractive or appropriate layout style, only the semantics of formatting, the implementation of those semantics, and the nuances of control available to the stylesheet writer and implemented by the stylesheet formatting tool. XSLFO is a very powerful language with which we can possibly create very ugly or very beautiful pages from our XML-based information... Two vocabularies specified in separate W3C Recommendations provide for the two distinct styling processes of transforming and rendering XML instances. The Extensible Stylesheet Language Transformations (XSLT) is a templating markup language used to express how a processor creates a transformed result from an instance of XML information. The Extensible Stylesheet Language Formatting Objects (XSLFO) is a pagination markup language describing a rendering vocabulary capturing the semantics of formatting information for paginated presentation. Formally named Extensible Stylesheet Language (XSL), this Recommendation normatively incorporates the entire XSLT Recommendation by reference and, historically, used to be defined together in a single W3C draft Recommendation..." See also the two "how-to" articles of J. David Eisenberg on XSL-FO, published last year: [1], [2]. For related resources, see "Extensible Stylesheet Language (XSL/XSLT)."

  • [March 22, 2002] "What's New in XPath 2.0." By Evan Lenz. From XML.com. March 20, 2002. ['Evan Lenz presents the first part of a two-part series on the next generation of XSLT and XPath. In "What's New in XPath 2.0" Evan explains the new features available in XPath, and its relationship to the W3C XML Query language.'] "This article provides a brief tour through some of the new features in XPath 2.0. It assumes that you already have a basic understanding of XPath 1.0, and that you've most likely used it in the context of XSLT. It is by no means an exhaustive overview but merely points out some of the most noteworthy features. Relationship between XPath 1.0 and XPath 2.0: Both the XPath 1.0 recommendation and the latest XPath 2.0 working draft say that "XPath is a language for addressing parts of an XML document". This was a fairly appropriate characterization of XPath 1.0. (Of course, it doesn't mention that you can have arithmetic expressions and string, number, and boolean expressions, but those features were kept to a minimum.) On the other hand, as a characterization of XPath 2.0, it leaves a lot to be desired. XPath 2.0 is a much more powerful language that operates on a much larger domain of data types. A better way of describing XPath 2.0 is as an expression language for processing sequences, with built-in support for querying XML documents. Querying? Isn't that XQuery's job? Relationship between XPath 2.0 and XQuery 1.0: For over a year now, the W3C XSL and XML Query Working Groups have been working closely together. The goal has been to share as much between XSLT 2.0 and XQuery 1.0 as is technically and politically feasible and to give that common subset the name "XPath 2.0". This effectively means that the driving forces behind XPath 2.0 include not only the XPath 2.0 Requirements document but also many of the XML Query language requirements... XPath 2.0 is a strict syntactic subset of XQuery 1.0. In fact, both working drafts and language grammars were automatically generated from a common source..."

  • [March 22, 2002] "Introducing XML::SAX::Machines, Part Two." By Kip Hampton. From XML.com. March 20, 2002. ['Kip Hampton returns with the second installment of his introduction to the XML::SAX::Machines Perl module.'] "In last month's column we began our introduction to XML::SAX::Machines, a group of modules which greatly simplifies the creation of complex SAX application with multiple filters. This month we pick up where we left off by further illustrating how XML::SAX::Machines can be used to remove most of the drudgery of building SAX-based XML processing applications..."

  • [March 22, 2002] "Web Service Sublimation." By Martin Gudgin and Timothy Ewald. From XML.com. March 20, 2002. [Tim Ewald and Martin Gudgin plunge naked into the slimy pool of debate and attempt to catch the slippery fish that is the real definition of a "web service."] "In the broadest possible sense, Web Services are an attempt to use XML to build distributed information processing systems that work across the Internet without necessarily requiring a browser as the client. Many present Web Services as a silver bullet that makes building this sort of system easy, but this view is naive. Serialized XML messages are easy to parse because the syntactical rules of XML 1.0 + Namespaces are well understood. Once parsed, XML messages are easy to manipulate using a range of technologies. However, while the ability to parse and interpret messages is necessary, it is not sufficient to build a distributed system. There are a lot of other issues that must be resolved. How should messages flow between different parts of the system? Should the messages by typed or untyped? If they are typed, what type system should be used? And should parts of the system be strongly or loosely coupled? Answering these questions is key to deciding what Web Services really are. But deciding on answers is extremely difficult, as recent debates in the XML world have shown ... Collectively, while people envision Internet-based distributed information processing applications sending messages based on a wide array of patterns, including but not limited to request-response, it isn't clear how that will be done relative to HTTP as it is defined today. Independent of message flows, people have not yet agreed on whether messages should be typed, which type system to use, and what degree of coupling is acceptable. (For the record, messages should be typed, described in XSD, and as loosely coupled as possible -- as specified in their XSD definitions.) So what is a Web Service? It is an Internet-friendly distributed application that uses XML. That's about all anyone can say for now."

  • [March 22, 2002] "SAML Advances Single Sign-On Prospects." By Andy Patrizio. In XML Magzine Volume 3, Number 2 (March 2002), pages 10-11. ['Promising a standard means of authentication and authorization, SAML passes an important OASIS milestone.'] "The Organization for the Advancement of Structured Information Standards (OASIS) has completed the heavy lifting on its latest XML standard, the Security Assertion Markup Language (SAML), a standard for exchanging authentication and authorization information between domains. SAML (pronounced 'sam-el') is designed to offer single sign-on for both automatic and manual interactions between systems. It will let users log into another domain and define all of their permissions, or it will manage automated B2B exchanges between two parties. SAML addressed the need to have an industry standard way of representing assertions of authentication and authorization for users and interactions, according to Jeff Hodges, co-chair of the Security Services Technical Committee (SSTC) at OASIS that developed the spec and principal engineer at Oblix... SAML replaces two previous efforts by OASIS to create an authorization and authentication protocol, called S2ML and AuthXML. These efforts were being carried out by separate camps, but the SSTC decided it was in everyone's best interests to get all of the camps under one spec and combined the two efforts, because they handled two separate functions. What does pass for Web-based single sign-on is proprietary, the most well known being Microsoft's Passport. SAML is meant to be vendor neutral and is based on XML encoding rather than ASN.1 protocol, which is used in other areas of network sign-on and permissions, such as LDAP. For various reasons, people are gravitating toward using XML rather than ASN.1, according to Hodges. One reason is that XML is textual while ASN.1 is compiled into a binary language. The Web world to a fair degree expects things to be textually based, he said. Another reason is the knowledge level out there. There's a lot more available in terms of learning for XML over ASN.1. SAML is designed not only for user logon to a system, but also for automated B2B transactions that require a secure transaction between the two parties. Again, the automated services run the same as the manual, human-driven functions. The connecting party gives the authorization to access the system and specifies the tasks it can perform -- in this case, a data exchange..." See: (1) the Security Assertion Markup Language website; (2) references in "Security Assertion Markup Language (SAML)."

  • [March 22, 2002] "State of the Union." By Daniel F. Savarese. In XML Magzine Volume 3, Number 2 (March 2002), pages 18-24. Cover story. ['XML data and Java logic form a partnership that is growing ever deeper -- take a tour of XML-related specs for the Java platform.'] "In the past year or two, the Java platform has been criticized for lagging behind competing platforms in integrating with XML, and Sun Microsystems has been accused of deliberately dragging its feet to adopt XML as part of its distributed computing strategy. Whatever truth there may have been to those appraisals, it is evident that XML is of strategic importance to the continued evolution of the Java platform and that in a short period of time Java has become an ideal environment for developing XML-based applications. In fact, the inroads XML has made into the Java platform are too numerous to account for completely. Here, we'll take a tour of the places in the Java platform that XML is or will be used and that merit developer attention... Without parsers, schema validators, and high-level APIs for applying XML to specific uses such as messaging and object serialization, developing XML-based applications becomes rather burdensome. XML did not catapult into the limelight of application development because many of these elements did not exist initially and took time to develop. Even though the potential synergy between XML and Java was touted from early on, it was on the backs of other programming languages that XML began its steady climb to the summit of cross-platform computing. Although many independent efforts produced Java-based XML processing APIs, it was not until March 2000 that a standard API, Java API for XML Processing (JAXP), was released. With that foundation, it was possible for the nearly score of additional XML-related Java APIs to be developed and implemented. [Figure 1] shows an example Web service scenario for a fictional coffee retail chain that makes use of Java APIs for XML to solicit price bids from distributors and provide online ordering for customers. The uses of XML in the Java platform can be divided into two categories: APIs that directly manipulate XML documents or specific schemas and APIs that make incidental use of XML. Table 1 lists most of the APIs in the first category that the Java Community Process (see Resources) is working on. Those in the second category are listed in Table 2. Categorizing some of the APIs is rather subjective, but in general, the APIs in Table 1 provide primitive functionality for processing generic XML documents or documents belonging to a specific XML schema. Those in Table 2 provide a specific functionality other than XML processing through a Java API that happens to produce or use some data in XML. For example, logging has little to do with XML processing, but the Logging API allows logs to be formatted as XML records... [Schema Manipulation. These are Java APIs that manipulate XML or specific schemas: SR-5/JSR-63 Java APIs for XML Processing (JAXP); JSR-31 XML Data Binding Specification (JAXB); JSR-67 Java APIs for XML Messaging (JAXM); JSR-93 Java APIs for XML Registries (JAXR); JSR-101 Java APIs for XML-based RPC (JAX-RPC); JSR-102 JDOM 1.0; JSR-106 XML Digital Encryption APIs; JSR-110 Java APIs for WSDL; JSR-157 ebXML CPP/A (Collaboration Protocol Profile/Agreement) APIs for Java.]"

  • [March 22, 2002] "Stay in Sync While on the Go." By Jeff Jurvis. In XML Magzine Volume 3, Number 2 (March 2002), pages 52-53. ['Use the common protocol SyncML to pass text-based updates from one source to the other.'] "... Sponsored by Ericsson, IBM, Lotus, Motorola, Nokia, Matsushita, Openwave, Psion, and Starfish Software, the SyncML consortium organized the work around establishing an open, common language to synchronize compliant devices, applications, and services running over any network. SyncML is designed to work over HTTP; Wireless Session Protocol (WSP) for wireless Web applications that run over Wireless Application Protocol (WAP); OBEX (an object exchange protocol that runs over infrared and Bluetooth connections and that is built into most operating systems); lower level TCP/IP; and e-mail protocols such as SMTP, POP3, and IMAP. SyncML uses XML to encode commands and data and is designed to run on top of tried and true Web protocols such as HTTP, SSL, and WAP, and therefore is compatible with the applications developed for Web-friendly mobile platforms such as J2ME. A developer looking to add synchronization capabilities to a mobile app needs only the bare minimum tools... The SyncML language is supported by a corresponding SyncML framework that lays out the architecture for a complete end-to-end cross-platform synchronization solution that encompasses nearly all mobile, desktop, and server data sources, but even the SyncML consortium does not aim to override existing end-to-end single platform solutions. Microsoft's ActiveSync technology works great across Windows platforms and will likely stay proprietary. But look for Microsoft to join Palm, IBM, the major mobile phone manufacturers, and the rest of the world in providing hooks to SyncML for those ever-so-common instances where proprietary devices need to talk to each other..." [Note: the SyncML Data Synchronization Specifications and SyncML Device Management Specifications advanced to version 1.1 in February, 2002. The release of the version 1.1 specifications began a 45-day review period; specifications are subject to change prior to final approval (expected) on Tuesday, April 2, 2002.] See "The SyncML Initiative."

  • [March 22, 2002] "Java Architecture for XML Binding (JAXB): A Primer." By Tai-Wei Lin. From [Sun] Java Developer Connection. March 13, 2002. "This article introduces you to the basics of Java Architecture for XML Binding (JAXB) Early Access Implementation v 1.0. You will learn a few basic uses of the API and tools that the EA v 1.0 provides. This paper provides brief explanations on how to create simple binding codes using the API and tools. In addition, this paper also discusses a few situations where JAXB shows its strengths, and is intended for developers who have working understanding of the Java programming language, are familiar with XML, and interested in getting a brief introduction to JAXB. Introduction to JAXB Java Architecture for XML Binding (JAXB) provides an API and tool that allow automatic two-way mapping between XML documents and Java objects. With a given Document Type Definition (DTD) and a schema definition, the JAXB compiler can generate a set of Java classes that allow developers to build applications that can read, manipulate and recreate XML documents without writing any logic to process XML elements. The generated codes provide an abstraction layer by exposing a public interface that allows access to the XML data without any specific knowledge about the underlying data structure. In addition to DTD support, future versions of JAXB will add support for other schema languages, such as the W3C XML Schema. These features enable the JAXB to provide many benefits for the developers..." See: "Java Architecture for XML Binding (JAXB)."

  • [March 22, 2002] "Java Web Services Developer Pack Part 1: Registration and the JAXR API." By Ed Ort. With contributions from Ramesh Mandava. From [Sun] Java Developer Connection (February 28, 2002). ['This first article in a series on the Java Web Services Developer Pack shows you how to use the Java API for XML Registries (JAXR) API to publish and search for Web services in a registry.'] "You will find the following topics covered in this article: (1) An Introduction to Web Services; (2) Web Services Technologies [XML, UDDI, SOAP, ebXML]; (3) Java Technologies and Tools for Web Services; (4) JAXR [Clients and Providers, JAXR Packages, JAXR Example]... This article is the first in a series that describes the Java WSDP. The series highlights the technologies and tools included in the Java WSDP, and shows how you can use those technologies and tools to build Web services and applications that access Web services. This first article focuses on registration, in particular, the JavaTM API for XML Registries (JAXR) API, an API in the Java WSDP that you can use to register a Web service. Later articles in the series will focus on other components of the Java WSDP. However before describing JAXR, let's look at some fundamental technologies that drive the Web services model -- especially those that are specifically pertinent to registration. The Java technologies in the Java WSDP support these fundamental Web services technologies. For example, the JAXR API can be used to access standard registries such as those that conform to UDDI or ebXML. A business can use the JAXR API in a Java program to register its Web services in a standard registry, or search for Web services that are registered in standard registries. [...] This article focused on one of the Java APIs in the Java Web Services Developer Pack: JAXR. This API enables you to request Web service registration operations in the Java platform. The article showed how you can use the JAXR API to register a Web service in a standard business registry such as a UDDI registry. It also showed how you can use the API to search a registry for Web services. Later articles in the series will focus on other components in the Java WSDP, and illustrate how you can use them to access and use Web services in the Java platform."

  • [March 22, 2002] "Deploying Web Services on Java 2, Enterprise Edition (J2EE)." By Qusay H. Mahmoud and [with contributions from] Ramesh Mandava. From [Sun] Java Developer Connection (March 08, 2002). "The Java Web Services Developer Pack (Java WSDP) is an all-in-one download containing key technologies to simplify building of Web services using the Java 2 Platform. JWSDP is a collection of tools and APIs developed by Sun, and other members of the Java community, that allow you to build Web services quickly and easily. The J2EE platform has established itself as the platform of choice for building multi-tiered enterprise applications and has been adopted as the preferred platform for developing enterprise information systems because of its flexibility and scalability. Deploying web services on the J2EE platform is a natural extension. The Java WSDP components can be integrated with the Java 2 Platform, Enterprise Edition (J2EE), and they can be run on J2EE. The SQE team has come up with guidelines for integrating and deploying web services on J2EE 1.3.1. This article: (1) Presents an overview of Web Services; (2) Shows how to set up the J2EE environment for web services; (3) Shows how to integrate Web services with J2EE; (4) Shows how to deploy sample web services on J2EE; (5) Shows how to run web services on J2EE. [...] This article showed the step-by-step instructions for configuring the J2EE SDK 1.3.1 so that web services can be deployed on top of it. The integration process involved copying some JAR files from JWSDP to J2EE, and configuring port numbers and setting security permissions. The rest of the article showed how to deploy sample web services that come with the JWSDP, on the J2EE platform."

  • [March 22, 2002] "Content Repository for Java Technology API." Java Specification Request #170. Specification Lead: David Nuescheler. This JSR "specifies a standard API to access content repositories in Java 2 independently of implementation." Supporting this JSR: Laird Popkin, 3path, Remy Maucherat, Dirk Verbeeck, ATG, Day Software, Deloitte Consulting, Hewlett-Packard, IBM, Nat Billington, Oyster Partners, SAP Portals, Software AG. Description: "The API should be a standard, implementation independent, way to access content bi-directionally on a granular level within a content repository. A Content Repository is a high-level information management system that is a superset of traditional data repositories. A content repository implements "content services" such as: author based versioning, full textual searching, fine grained access control, content categorization and content event monitoring. It is these 'content services' that differentiate a Content Repository from a Data Repository. Many of today's (web) applications are interacting with a content repository in various ways. This API proposes that content repositories have a dedicated, standard way of interaction with applications that deal with content. This API will focus on transactional read/write access, binary content (stream operations), textual content, full-text searching, filtering, observation, versioning, handling of hard and soft structured content... Today, (web) applications have to adapt to every vendor's proprietary API to interact with content repositories. This has the negative effect of locking a large percentage of information assets in vendor specific formats, limiting access to information, impacting system evolution/migration, and availability of third party content management tools. This API will examine solutions to these and other issues deemed important by the expert group. There is no easy way to integrate content-producer-applications (CMS) and content-consumer-applications (CRM, Personalization, Portal, etc.) independently of the actual underlying content repository. The expert group will examine solutions to this problem also... The Content Industry has defined a number of specifications on a protocol level to exchange content (ICE, WebDAV, etc.). However, there is no specification on an API level that addresses the unique requirements of a Content Repository. As well, there exists no Content Repository centric standard that appears to address issues such as version handling, full-text searching, and event-monitoring in a coherent manner... Proposed functional areas: (1) Granular Read/Write Access: This is the bi-directional interaction of content elements. Issues with access on a property level and not just on a 'document' level should be examined. A content transaction is any operation or service invoked as part of a system interaction with a content repository. (2) Versioning: Transparent version handling across the entire content repository, would provide the ability to create versions of any content within the repository and select versions for any content access or modification. (3) Hard- and Soft-structured Content. (4) Event Monitoring (Observation). (5) Full-text Search and filtering: The entire (non-binary) content of the repository could be indexed by a full-text search engine that enables exact and sub-string searching of content. (6) Access Control: Unified, extensible, access control mechanisms will be examined. (7) Object Classes. (8) Namespaces and Standard Properties. 
(9) Locking and Concurrency. (10) Linking: A standard mechanism to soft/hard link items and properties in a repository, along with a mechanism to create relationships in the repository, will be examined..." See the dedicated website and JSR ballot review. Compare, in addition to WebDAV and ICE, the Interwoven Content Services specification.
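    JSR-170 has published no interfaces at this stage, so the following is a purely hypothetical sketch, meant only to suggest how the functional areas above (granular access, versioning, observation, full-text search) might surface in a Java API; every name here is invented.

        import java.io.InputStream;
        import java.util.Iterator;

        // Purely hypothetical -- JSR-170 defines no interfaces at this stage.
        // The methods merely mirror the functional areas listed above.
        public interface RepositorySketch {
            ItemSketch getItem(String path);                  // granular read
            void setProperty(String path, InputStream data);  // granular write
            void checkpoint(String path);                     // versioning
            Iterator search(String fullTextQuery);            // full-text search
            void addListener(String path, ListenerSketch l);  // observation
        }

        interface ItemSketch { String getPath(); }

        interface ListenerSketch { void onEvent(String path, int type); }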

  • [March 22, 2002] "Linking in Context." By Samhaa R. El-Beltagy, Wendy Hall, David De Roure, and Leslie Carr (Department of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, UK). In Journal of Digital Information Volume 2 Issue 3 (March 12, 2002). This paper was first presented at ACM Hypertext 2001 in August in Aarhus, Denmark, where it won the SIGWEB Douglas Engelbart Best Paper Award. ['This paper explores the idea of dynamically adding multi-destination links to Web pages, based on the context of the pages and users, as a way of assisting Web users in their information finding and navigation activities. The work does not make any preconceived assumptions about the information needs of its users. Instead it presents a method for generating links by adapting to the information needs of a community of users and for utilizing these in assisting users within this community based on their individual needs. The implementation of this work is carried out within a multi-agent framework where concepts from open hypermedia are extended and exploited. In this paper, the entities involved in the process of generating and using 'context links' as well as the techniques they employ to achieve their tasks, are described. The result of an experiment carried out to investigate the implications of linking in context on information finding, is also provided.'] "... One of the goals of the work presented is to address the limitation of switching between linkbases according to the context of documents. Another is to enable dynamic creation of links to populate linkbases. Manual authoring of links is an expensive and inefficient process. The Web is full of a wealth of handmade links that can be used to generate links independently of the documents in which they were authored and re-applied again in contexts similar to those in which they were originally created. So, in the context of this work, there are three steps involved in the process of linking in context: The creation of links in context (via link extraction) The propagation of links to users based on user interests (which within this work define user context) The rendering of links based on document context Within the developed multi-agent architecture, two types of agents are responsible for carrying out these tasks: link extraction and contextualizer agents, and user interface agents. Details of the architecture and agents used to implement the work described in this paper can be found in [refs]..." Related references in "XML Linking Language."

  • [March 22, 2002] "Annotated example of Proposed OWL Knowledge Base Language." By Peter F. Patel-Schneider, Ian Horrocks, and Frank van Harmelen. 19-March-2002 or later. "We illustrate the proposed "light" (frame-based) idiom of OWL. These examples are loosely based on the DAML+OIL walkthrough at "Annotated DAML+OIL Ontology Markup" [W3C Note 18-December-2001]. See the W3C OWL requirements document.

  • [March 22, 2002] "Circumstance, Provenance and Partial Knowledge. Limiting the Scope of RDF Assertions." By Graham Klyne. 13-March-2002 or later. This note discusses attempts to explore some aspects of representing unreliable and incomplete information using RDF (and the structure defined as 'reification' by Resource Description Framework (RDF) Model and Syntax Specification). A simple RDF graph consists of triples, each of which corresponds to a statement that is held or asserted to be true. In this, there is no recognition that truth may vary according to circumstance: [1] some statements may be true in some qualifying circumstance (e.g., at some given point in time, or in some defined situation. Holmes is a detective in some well-known fiction, but a Supreme Court Justice in US legal history is a widely-noted example of this); [2] statements may be accepted as true only if they come from a source considered to be reliable (if a car dealer says some car is sound I don't assume it is, but if an independent inspector says so I may feel it is safe to purchase); [3] statements may be recognized as truth only if the recipient is in possession of certain other information (Clark Kent, being the same person as Superman, is a strong person but Lois Lane doesn't know that Clark is Superman so does not accept that Clark is a strong person). Although these examples are very different, there is a common theme that some statement(s) may be considered true or not according to the circumstance in which they are evaluated. In his PhD thesis, section 1.4, R. V. Guha makes a similar argument for contexts being an appropriate solution for a diverse range of problems. This note revisits some topics that I explored in an earlier paper in the light of some subsequent discussions and experiments, and aims to move towards a goal of formalizing the representation of these ideas in RDF, and their corresponding model theoretic denotation. A key idea that I wish to preserve is that the denotation of a URI reference in an RDF graph is largely invariant according to circumstance; what may change is access to information about what is denoted... Initial exploration suggests that the relationship between interpretations in a context structure can be constrained using RDF properties and classes without introducing any inconsistencies into the overall framework. This supposition is not proven, and such proof might be a future project. Future work might also explore whether the relationship between interpretations in a context structure would be further illuminated by using modal logics, and in particular whether the relationships can be characterized in terms of accessibility relationships between the possible worlds of modal logic..."

  • [March 21, 2002] "ASC X12 Reference Model for XML Design Rules." Accredited Standards Committee (ASC) X12 and Data Interchange Standards Association (DISA). Version 0.4. Draft. February 25, 2002. 74 pages. ['This paper was motivated by the action item that X12's Communications and Controls subcommittee (X12C) took at the August 2001 XML Summit to develop 'draft design rules for ASC X12 XML Business Document development'. Acting on that action item, X12C's EDI Architecture Task Group (X12C/TG3) determined that XML design rules could not be developed in a vacuum, without a basis for determining which XML features to use and how to use them. Thus the group also set about developing a philosophical foundation and putting forth some general design principals. This Reference Model covers those topics in addition to a preliminary set of design rules. The approach discussed herein is intended to be the foundation for X12's future XML development. It is consistent with the decisions of X12's Steering Committee to develop its XML work within the ebXML framework. We expect it to undergo further refinement as the work progresses from its current status as a Task Group Reference Model to a full X12 standard.'] "This Reference Model addresses the semantic and syntactic representation of data assembled into business messages. The semantic representation defines an overall architectural model and refines the model to an abstract level of detail sufficient to guide the message development process. The syntactic representation utilizes features of the target syntax, while imposing semantic-to-syntax mapping rules and syntax constraints intended to simplify the task of interfacing business messages to business information systems and processes. The large-scale structure of this architecture has five discrete levels of granularity. Each level builds on the levels below it in manners particular to their differing natures. The five levels are: (1) Template; (2) Module; (3) Assembly; (4) Block; (5) Component. The first two levels, Template and Module, provide features that promote interoperability between national cross-industry standards and proprietary user communities. The remaining three levels, Assembly, Block, and Component have characteristics expressly designed around a rational semantic model for granularity. Specifications of optionality and repetition are supported for all levels with the exception of the Template level. Special attention has been paid to the differing needs of senders and receivers in expressing the use of optionality and repetition required by their particular business practices. The five-level structure of this architecture is designed to provide useful granularity, while at the same time preserving a useful semantic clarity. Design rules come in two basic forms: [1] Syntactic, and [2] Semantic. An example of a syntactic design rule in X12 would be the basic data types, i.e. alphanumeric, date, etc. An example of a semantic design rule in X12 would be the general prohibition against duplication. These two aspects of design cannot stand alone. The existing X12 design rules are a direct outgrowth of the particular X12 syntax and the history that created it. For the ASC X12 XML Reference Model, a semantic design approach has been selected, breaking the EDI lexicon into units for re-use. This approach has some pitfalls that result from a decomposition of EDI issues using only syntax as a guide... 
A primary requirement for this effort has been to meet a need first expressed at the first XML Summit in August 2001. This was a desire for non-X12 participants to contribute to and make use of X12 work, but in a manner that didn't require an all-or-nothing commitment to either the X12 process or X12 conclusions in every detail. The top two levels, Template and Module, directly support this need. An external entity, corporation, organization, or individual can contribute fully-constructed Modules that fit into a Template." See "ANSI ASC X12/XML and DISA." [cache]

  • [March 21, 2002] "The Company Internet." By Mandy Andress. In InfoWorld (March 15, 2002). ['This flexible, granular identity and access management framework is perfect for controlling Web applications and migrating to Web services. Most aspects of the solution can be customized. Support for XML, SAML (Security Assertions Markup Language), and other developing standards is key to NetPoint's flexibility and interoperability. Support for a wide range of Web and directory servers means the solution fits into almost any environment.'] "In steps NetPoint, a new identity management and access control solution from Oblix, which allows you to control how users identify themselves and to determine which services they can access after the authentication process is completed. NetPoint allows you to use the Internet as your corporate network: The solution does not differentiate between requests from company employees and requests made by random Web surfers. That kind of ingenuity impressed us enough to award NetPoint a Deploy rating in our tests. NetPoint has two major components: the Identity System and the Access System. The Identity System allows administrators to create, delete, and manage user information. Software known as the Identity Server processes all user-related requests (including credential management); WebPass, a Web server plug-in, manages the information exchange between the Web server and the Identity Server. The Access System, the second major part, allows administrators to define and enforce policy-based authorization and single sign-on rights. The Access System is configured to prevent everyone but a few select users from making changes. For example, an employee at a key supplier could be given rights to browse inventory data, but not to update it. The Access System consists of three components: the Access Server, WebGate gateway, and AccessXML Server. The Access Server, the heart of the product, processes policy evaluations for access requests. WebGate, an intermediary, takes the requests from the Web server and passes them on to the Access Server for authorization. Finally, the AccessXML Server translates the XML requests from the Web server into Access Server API equivalents..."

  • [March 21, 2002] "JavaOne: Sun to Bake Web Services Into J2EE." By Paul Krill and Tom Sullivan. In InfoWorld (March 19, 2002). "Facing stiff competition from Microsoft's .Net platform, Sun Microsystems this week will detail its forthcoming J2EE (Java 2 Enterprise Edition) 1.4 specification and a second prerelease version of its Web services developer pack. Speaking at the JavaOne conference in San Francisco, executives will announce that the J2EE 1.4 specification supports the full Web services stack, but will not be available for deployment until the first quarter of 2003. Added support for SOAP (Simple Object Access Protocol), UDDI (Universal Description, Discovery, and Integration), and WSDL (Web Services Description Language) rounds out J2EE's support for the set of de facto Web service standards as 1.3 already supports XML. 'It fully and completely implements standards-based Web services,' said George Grigoryev, senior product manager for J2EE at Palo Alto, Calif.-based Sun. Sun is also moving to accelerate developers' ability to build and deploy Web services using the J2EE 1.3 specification, with the availability this week of a second prerelease version of its Web services developer pack. In addition, the company is merging its JCA (J2EE Connector Architecture), JMS (Java Messaging Server), and Entity Beans to help companies make legacy data available via the Web services model, executives report. But Sun's position in the Web services race is not only drawing fire from Microsoft, it now places Java developers in a position of choosing which specification to adopt..."

  • [March 21, 2002] "Java Technology and XML Part 2: API Benchmarks." By Thierry Violleau. From [Sun] Java Developer Connection. March 04, 2002. ['Part 2 of this series tests the sample programs from Part 1, providing information about the performance of different APIs.'] "Neither Java nor XML Technology need an introduction, nor the synergy between the two: 'Portable Code and Portable Data'. With the growing interest in web services and e-business platforms, XML is joining Java in the developer's toolbox. As of today, no less than six extensions to the Java Platform empower the Java developer when building XML-based applications... This second article focuses on the relative performance of these APIs as obtained by running the sample programs presented in the first article. This series will conclude with a third article which gives tips on how to improve the performance of XML-based applications from a programmatic and architectural point of view. The purpose of the tests presented in this paper is primarily to highlight the respective performance of different XML processing techniques: SAX, DOM, XSLT, and the impact of validation against a DTD or an XML Schema. The performances of different API implementations: Xerces, Crimson, Xalan, Saxon, XSLTC, and so on when run on different Java runtimes JDK 1.2 and JDK 1.3 (Client and Server) are also compared. The results presented here don't claim to cover all the API implementations available today but underline that the tradeoff between ease of use and performance of a chosen processing models can be biased by the implementation of the underlying parser, document builder or style sheet engine. [...] In this second article, we have tested the different sample programs presented in the first article and analyzed their respective performance when run in different configurations: with different sizes of processed documents conforming to either a DTD or an XML Schema, with or without validation, with different underlying parser or style sheet engine implementations and with different JVM versions. Taking into account the results presented in this document, the next article will attempt to give tips on how to improve the performance of XML-based applications from a programmatic and architectural point of view."

  • [March 21, 2002] "Java Technology and XML Part 3: Performance Improvement Tips." By Thierry Violleau. From [Sun] Java Developer Connection (March 2002). "Neither Java nor XML Technology need an introduction, nor the synergy between the two: "Portable Code and Portable Data." With the growing interest in web services and e-business platforms, XML is joining Java in the developer's toolbox. As of today, no less than six extensions to the Java Platform empower the developer when building XML-based applications... The first of the three articles in this series gave an overview of the different APIs available to the developer by presenting some sample programs. The differences in performance were addressed in the second article. This third article gives tips on improving the performance of XML-based applications from a programmatic and architectural point of view. XML processing is very CPU, memory, and I/O or network intensive. XML documents are text documents that need to be parsed before any meaningful application processing can be performed. The parsing of an XML document may result either in a stream of events if the SAX API is used, or in an in-memory document model if the DOM API is used. During parsing, a validating parser may additionally perform some validity checking of the document against a predefined schema (a Document Type Definition or an XML Schema). Processing an XML document means recognizing, extracting and directly processing the element contents and attribute values or mapping them to other business objects that are processed further on. Before an application can apply any business logic, the following steps must take place: (1) Parsing; (2) Optionally, validating [which implies first parsing the schema]; (3) Recognizing; (4) Extracting; (5) Optionally, mapping. Parsing XML documents implies a lot of character encoding and decoding and string processing. Then, depending on the chosen API, recognition and extraction of content may correspond to walking through a tree data structure, or catching the events generated by the parser and processing them according to some context. If an application uses XSLT to preprocess an XML document, even more processing is added before the real business logic work can take place... In this article, we presented different performance improvement tips. The first question to ask when developing an XML-based application is 'Should it be XML based?' If the answer is yes, then a sound and balanced architecture has to be designed, an architecture which only relies on XML for what it is good at: open inter-application communications, configuration descriptions, information sharing, or any domain for which a public XML schema may exist. It may not be the solution of choice for unexposed interfaces or for exchanges between components which should be otherwise tightly coupled. Should XML processing be just a pre or post-processing stage of the business logic or should it make sense for the application to have its core data structure represented as documents, the developer will have to choose between the different APIs and implementations considering not only their functionalities and their ease of use, but also their performance. Ultimately, Java XML-based applications are developed in Java, therefore any Java performance improvement rule will apply as well, especially, those regarding string processing and object creation."

  • [March 21, 2002] "A Design and Implementation of XML-Based Mediation Framework (XMF) for Integration of Internet Information Resources." By Kangchan Lee, Jaehong Min, Kishik Park, and Kyuchul Lee. In Proceedings of the 35th Annual Hawaii International Conference on System Sciences. HICSS 2002. Big Island, Hawaii, USA, January 7-10, 2002. Edited by R. H. Sprague. Los Alamitos, CA, USA: IEEE Computer Society, 2002. Abstract. "As the proliferation of the Internet, especially World Wide Web, numerous information resources have been constructed. The characteristics of information resources on the Internet are that the information resources are distributed, autonomous, and heterogeneous. Moreover each information resource has its own query method, data representation, and schema structure. The integration of information resources is one of the most important research issues in the Internet data management. The task of information resources integration system is to answer queries that require extracting and combining data from multiple information sources. In this paper, we propose an XML-based Mediation Framework (XMF) for integrating the Internet information resources. [...] With the recent advances in information technology such as digital libraries, WWW, data warehouse, and CALS, structured and unstructured data have been widely recognized as important information resources. Moreover, information resources on the Internet are often maintained in heterogeneous, distributed, and autonomous information repositories. Thus, the integration of Internet information resources is one of the significant issues. In this paper, we propose a new integration framework, XMF, which provides uniform views over large number of Internet information resources by using only XML and Internet. XML provides self-describing modeling method for capturing semantic of heterogeneous information resources, and the Internet protocol supports the common data communication mechanism. The features of XMF are integrating various kinds of information sources and its application on the Internet, supporting common data model and run-time integration of information resources by using its mediation mechanism and query language. In consequence, XMF supports common architecture and query language for integrating the Internet information resources and user can easily access XMF with uniform method. Furthermore, XMF can be easily implemented with current Internet technology and XML-related software. We anticipate that flexible, efficient, and generalpurpose heterogeneous and distributed information resource integration methodology is needed as huge amount of information is accumulated on the Internet. XMF is the one of the solutions of seamless integration of Internet information resources." Note: Appendix A supplies the DTD for XMR (XMF Mediation Rules.

  • [March 21, 2002] "XML-Based Supply Chain Management -- As SIMPLEX as It Is." By Peter Buxmann, Luis Martín Díaz, and Erik Wüstner (Freiberg University of Technology, Chair of Information Systems / Wirtschaftsinformatik, Lessingstr. 45 09596 Freiberg / Germany). In Proceedings of the 35th Annual Hawaii International Conference on System Sciences. HICSS 2002. Big Island, Hawaii, USA, January 7-10, 2002. Edited by R. H. Sprague. Los Alamitos, CA, USA: IEEE Computer Society, 2002. Abstract. "In this paper we want to examine to what extent XML is able to support the exchange of business documents in supply chains. Thereby we focus on the problem of converting different data formats of participants of the supply chain. First results show that XML and its surrounding standards of the XML family highly accelerate and simplify the conversion process. Therefore, XML allows using a common standard on a lower level, without reducing variety on a higher level, due to the use of different XML vocabularies. First, we examine different approaches for solving the transformation problem. Second, we show how XML can actually be implemented for a Web-based integration in supply chains. We present a java-based prototype that enables document exchange over the Internet using XML business vocabularies for document representation, XSLT for document conversion and presentation, and both DOM and SAX for processing and integrating documents into in-house-systems. [...] In this paper, we present a prototype for supporting the document exchange in supply chains. The use of XML thereby plays a key role: The documents are described with XML syntax and transferred between the partners of the supply chain. The fundamental advantage is that the XML standard only defines a general language for the description of documents, yet it does not determine their content. That means that any kind of business document and thus, for instance, all currently available EDI standards can be represented with XML. Considering this, multiple standardization initiatives have meanwhile arisen, which define business vocabularies that are industry-specific or at least adapted to certain application needs. The partners in the supply chain can thus use XML as a common fundamental language building upon it different business vocabularies, which best meet their specific requirements. As shown in this paper, the conversion between different business vocabularies using XSLT style sheets is possible. A translation between two standards is indeed still necessary; the advantage, however, is that this translation is relatively simple. Our experience shows that it is possible, with appropriate previous knowledge, to create such an XSLT style sheet in one day. Nevertheless, due to the different degrees of detail of the particular vocabularies, a translation without loss of information cannot always be achieved... At this stage of development, SIMPLEX just supports the exchange, the conversion and the integration of business documents based on XML. In a next step, we want to integrate the XML-data described in planning processes of the supply chain. The optimization of these extended planning processes is the fundamental thought of Supply Chain Management. The entire knowledge base of SIMPLEX will remain XML. This will make all the processing and structuring advantages of the XML family available to support inter-organizational planning and optimization procedures. Furthermore we want to test our solution in different live environments..." 
See the SIMPLEX website and the similar white paper.
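
To make the style-sheet conversion concrete, the sketch below maps an order document from one vocabulary to another with XSLT, applied through JAXP. Both vocabularies are invented for the example and are not taken from SIMPLEX or from any real EDI standard; the stylesheet is held inline so the program is self-contained.

    import java.io.StringReader;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    // Converts between two invented order vocabularies with XSLT, the
    // general approach SIMPLEX takes between real XML vocabularies.
    public class VocabularyConversion {
        static final String STYLESHEET =
            "<xsl:stylesheet version='1.0'"
          + "  xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
          + "  <xsl:template match='/PurchaseOrder'>"
          + "    <Order><Item><xsl:value-of select='Product'/></Item>"
          + "    <Qty><xsl:value-of select='Amount'/></Qty></Order>"
          + "  </xsl:template>"
          + "</xsl:stylesheet>";

        static final String DOCUMENT =
            "<PurchaseOrder><Product>Widget</Product>"
          + "<Amount>12</Amount></PurchaseOrder>";

        public static void main(String[] args) throws Exception {
            // One-step translation: the "sender" vocabulary in,
            // the "receiver" vocabulary out on standard output.
            TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(STYLESHEET)))
                .transform(new StreamSource(new StringReader(DOCUMENT)),
                           new StreamResult(System.out));
        }
    }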

  • [March 21, 2002] "XBRL: Standard Bearer of Financial Reporting." By Ivan Schneider. In CMP BankTech (March 05, 2002). "An emerging XML standard called XBRL, or Extensible Business Reporting Language, promises to simplify the mechanics of working with financial statements. As XBRL catches on, analysts will be able to spend less time building intricate spreadsheets from scratch and more time scrutinizing companies' finances and accounting practices-which they're bound to be doing more of in the wake of the Enron scandal. Indeed, XBRL, while nothing more than a standard, could have a fundamental impact on the financial services industry, in the manner that the barcode changed inventory management or the MP3 compression standard upended the music business... The beneficiaries of XBRL aren't limited to analysts or public companies. Virtually all of the participants in the financial information supply chain will find it worthwhile to take advantage of the enhanced data. 'No industry will benefit more from XBRL than banking,' said Coffin. 'A bank plays in numerous places around that supply chain.' To wit, banks have controllers, tax analysts, investor relations personnel, loan officers, credit analysts, and regulatory and compliance officers. Furthermore, converged financial institutions have brokerage, investment banking and research capabilities under the same roof. 'Literally, it hits them in about 20 different ways,' said Coffin. Essentially, XBRL allows preparers of business information to designate the purpose, denomination and time frame for each and every number, statistic and statement in a document, drawing from a standard dictionary of financial terms and accounting classifications. Non-standard elements can also be included, as long as they're also described within the document. Standards for describing information make it far easier for software developers to work with financial data. Accordingly, the industry is gearing up to provide XBRL-compatible tools for providers of business information. IBM, Microsoft, Oracle, PeopleSoft and SAP are among the many software companies involved with the standard-setting body. For example, FRx Financial Reporter, from Microsoft Great Plains, Fargo, N.D., will connect to over 24 general ledger systems. 'Users of the data that need to feed the banks [financial] information will be literally a few mouseclicks away from feeding the banks an XBRL product,' said Rob Blake, the Microsoft representative on the XBRL Committee. Banks, in turn, can benefit greatly from receiving XBRL-formatted financial information from their borrowers. 'It would reduce both the credit risk and the operational risk,' said Philip Walenga, assistant director in the insurance division of the FDIC. Among other things, the ability to systematically process incoming financial information will make it possible for banks to automatically evaluate the fiscal status of a large number of companies, prioritize the workload of loan officers and analysts, and detect loan covenant violations..." See: "Extensible Business Reporting Language (XBRL)."

  • [March 21, 2002] "The Semantic Web." By Seth Grimes. In IntelligentEnterprise Volume 5, Number 6 (March 28, 2002), pages 16-17, 52. ['A semantic Web will enable automated use of disparate, distributed Internet information sources and services.'] "... The semantic Web relies on three technologies filling key roles: (1) XML for syntax and structure; (2) Ontology systems that define terms and their relationships; (3) The Resource Definition Framework (RDF), which provides a model for encoding ontology defined meaning. Other technologies and concepts also come into play: universal resource identifiers, which are globally recognized and unique element definitions, rules-processing (inference) systems, and the usual Internet-infrastructure protocols. An ontology starts with a taxonomy, a structured arrangement of information into classes that categorizes a subject area and relates its elements. Many ontology projects are underway. (If you want to check one out, try OpenCyc, which I first encountered while researching 'Out in the Open, which discussed open-source decision-support tools. OpenCyc is a planned subset of Cycorp's general-knowledge ontology. Because it's open source, you can freely use and help extend it.) The semantic Web effort mandates that you express ontologies in RDF using XML. XML forms a strong foundation for modern, layered approaches to constructing markup language, and it's great for syntax. But without RDF (or an equivalent), XML-based constructs lack meaning. The RDF specification envisions an object-oriented system of classes forming a schema... The semantic Web will be realized through the continued development of standards and technologies including those I've described and others such as the DARPA agent markup language (DAML). DAML is a family of markup languages providing the power to express ontologies that use RDF to provide semantic meaning. DAML languages incorporate artificial intelligence knowledge-representation concepts and are designed to support agent and inference-engine interactions with suitable marked-up sites. Tools such as W3C's Annotea collaborative metadata-annotation system are essential in completing the picture of a web of semantically defined services..."

  • [March 21, 2002] "United Front." By Art Taylor. In IntelligentEnterprise Volume 5, Number 6 (March 28, 2002), pages 35-39. ['Web services are only a partial answer to the complexities of enterprise application integration. Complementary approaches may enable a full solution. This article examines the potential effect of Web services on the information-driven enterprise, why Web services represent only a partial solution to some familiar IT problems, and how using J2EE as a complementary approach can help fill some of these gaps.'] "As I'll explain in this article, even assuming that Web services deliver to their full potential, you would still have work to do to develop strategic business applications for information interchange. Fortunately, Java 2 Enterprise Edition (J2EE) technology provides a potential complement to Web services; together, the two technological approaches can provide a cost-effective, strategic business application solution... despite their differences, Web services and J2EE are complementary. EJB, the middleware components of J2EE, can be exposed as Web services on a number of application servers, including BEA Systems' WebLogic and IBM WebSphere. J2EE Web components (JSPs and servlets) and client components (applets and Web Start applications) can also be developed as consumers of Web services. For example, an enterprise application that needs to access an information asset could do so by consuming a Web service. This application front end could be implemented using C# and acting as a consumer of a Web service created with a C# component deployed in a Web server that exposes that service. Alternatively, the same C# front end could consume a Web service created using an EJB deployed in a J2EE-compliant application server. Yet another alternative would be to have a JSP front end consume a Web service deployed and exposed within Microsoft IIS. The latter two examples highlight one of the more important values of Web services: Components or applications written in different languages can easily exchange data. They also demonstrate that the concept of Web services doesn't specify a technology; rather, it specifies a development paradigm for the interaction of components or applications. The technology is irrelevant as long as the components or applications communicate via XML-formatted messages on top of recommended standards. J2EE offers a large bag of tricks, and in practice, Web services will only be one of the tricks used by developers. Using an open-minded approach where all technologies are considered based on merit, other J2EE technologies will undoubtedly prove more appropriate for many services..."

  • [March 21, 2002] "Compare the Mobile Internet Toolkit to XSLT." From Microsoft Corporation. March 15, 2002. ['Compares the Microsoft Mobile Internet Toolkit to XSLT (Extensible Stylesheet Language Transformation) to create mobile Web applications.'] "Why Is Mobile Development So Challenging? There are a tremendous number of mobile devices available to consumers, and manufacturers are releasing new devices all the time. Each time you develop a mobile Web application, you face the following challenges as a result of this wide variety of devices: (1) Different markup languages, including HTML, compact HTML (cHTML) for Japanese i-mode phones, and WML for wireless application protocol (WAP) phones. (2) Different form factors, including varying screen size, screen orientation (horizontal or vertical), and color, grayscale, or black and white screens. These variables affect content pagination and the type of graphics you must generate. (3) Different device capabilities, including whether the device can display images, initiate a phone call, or receive notification messages. (4) State management, including whether cookies are supported... The Mobile Internet Toolkit contains server-side technology that extends the Microsoft ASP.NET programming model to deliver content to a wide variety of mobile devices. Because each device can have a unique combination of capabilities, the Mobile Internet Toolkit provides an abstraction layer so developers can write applications without worrying about the specific details of each device. The Mobile Internet Toolkit takes advantage of the Microsoft .NET Framework, including the performance gains. In contrast, XSL transformations are typically slower, and the more complex the XSL, the slower the transformation. ... In comparing the Mobile Internet Toolkit to XSL, developers choose the Mobile Internet Toolkit to leverage the rapid application development, performance, and code reuse of ASP.NET. In addition, the Mobile Internet Toolkit integrates seamlessly into Visual Studio .NET, taking advantage of its many benefits, and the Toolkit abstracts mobile device capabilities on two levels..."

  • [March 21, 2002] "Jaggle XML Web Services Application. Overview of the Jaggle Architecture." From Microsoft Corporation. By Leendert Versluijs (Software Engineer), Jeroen Huitink (Infrastructure Engineer), Sander Duivestein (Public Relations, Cap Gemini Ernst & Young). March 19, 2002. ['An overview of the functionality, architecture, and components of this real estate application, composed of XML Web services and implemented on the .NET Framework.'] "Jaggle is a real estate application composed of XML Web services and implemented on the .NET Framework. The application consumes internal and external Web services with each of the internal Web services itself being an autonomous N-Tier application. The application, source code, and documentation are intended for anyone planning and building Web-based applications on the .NET Framework using Microsoft Visual Studio.NET. The Web application guides users through a product selection process, gathering requirements and then returning products that match those requirements. The Web site is specific to the real estate application, while the underlying business logic is a more generic framework for finding and comparing products. The documentation includes architectural models in UML, discussion of design patterns and decisions, and development details... This set of articles is intended for anyone planning to participate in developing Web-based applications in the .NET Framework using the tools provided in the Microsoft Visual Studio .NET environment. This includes architects, developers, programmers, and testers, as well as planners, system managers, consultants, and Microsoft Certified professional trainers. This overview provides the reader with a basis for understanding the more detailed descriptions found in the other articles in this set... The Jaggle Real Estate Web application offers users the opportunity to find real estate products that will best meet their requirements. The Web application guides users through a product selection process, asking them for specific information about their individual requirements and restrictions. Users' answers are converted into criteria, and based upon these criteria, the Web application offers a selection of real estate products. Users can then select from these matching products to view more detailed product information..." See also the Jaggle sample source code.

  • [March 21, 2002] "DocBook V4.2 Release Candidate 1." Posted by Norman Walsh for the OASIS DocBook Technical Committee. DocBook "is general purpose XML and SGML document type particularly well suited to books and papers about computer hardware and software (though it is by no means limited to these applications)... The DocBook Technical Committee is pleased to announce the release of DocBook V4.2 Release Candidate 1 in XML and SGML. DocBook V4.2 incorporates numerous enhancements over DocBook V4.1 and DocBook XML V4.1.2. These changes are documented in the DocBook Document Type specification. Development of DocBook V4.2 is 'finished'. The purpose of the Candidate Release phase is to encourage widespread testing of the latest DocBook release. If no problems are reported in the next 30 days, the DocBook Technical Committee plans to advance DocBook V4.2 to Committee Specification status. Please give DocBook V4.2 a try in your favorite tools and report any problems that you encounter to the docbook@lists.oasis-open.org mailing list." See references in "DocBook XML DTD."

  • [March 21, 2002] "DocBook HTML Forms Module V1.1." Norman Walsh and the OASIS DocBook Technical Committee. The HTML Forms Module is an extension to DocBook XML V4.1.2. It adds support for HTML Forms markup. "Version 1.1 introduces properly parameterized HTML element names (so that the namespace prefix can be changed on a per-document basis) and provides the declaration for that prefix on each HTML element." See the DTD and test instance document.

  • [March 21, 2002] "XML Character Entities Version 0.2." By Norm Walsh [and the OASIS DocBook Technical Committee]. March 19, 2002. In (March 19, 2002). "This Standard defines XML encodings of the 19 standard character entity sets defined in Non-normative Annex D of [ISO 8879:1986]. Added Latin 1, Added Latin 2, Greek Letters, Monotoniko Greek, Russian Cyrillic, Non-Russian Cyrillic, Numeric and Special Graphic, Diacritical Marks, Publishing, Box and Line Drawing, General Technical, Greek Symbols, Alternative Greek Symbols, Added Math Symbols: Ordinary, Added Math Symbols: Binary Operators, Added Math Symbols: Relations, Added Math Symbols: Negated Relations, Added Math Symbols: Arrow Relations, Added Math Symbols: Delimiters. The SGML declarations for these entities use the specific character data (SDATA) entity type that is not supported in XML, so alternative XML declarations are necessary..." See the reference page and .ZIP archive. References: "SGML/XML Entity Sets and Entity Management."

  • [March 20, 2002] "Manning Publications Releases J2EE and XML Development." Information in a communiqué from Helen Trimes. From Manning Press: J2EE and XML Development. By Kurt A. Gabrick and David B. Weiss. April 2002. ISBN: 1930110-308. Print edition: Softbound, 304 pages, $39.95. Ebook edition: PDF format, 1.2 MB, $13.50. "J2EE and XML are important technologies in their own right, but applications that use them together benefit from their synergy. Java and J2EE make a powerful platform for building robust application logic. XML facilitates flexible data storage and manipulation. Developers who properly use XML with J2EE develop the most powerful enterprise systems that can be built today. J2EE and XML Development is a rich yet concise guide that teaches how, where, and why to use XML in each layer of a J2EE application. The book categorizes and explains many recent Java and XML technologies and the ways in which a J2EE application can best use them. It untangles the web of Java APIs for XML, including the JAX family, as well as other popular emerging standards like JDOM, explaining each in terms of its functionality, and illustrating its intended use through examples..." See the blurb. The ebook Edition in PDF is available now.

  • [March 20, 2002] "New Sun Features Will Enhance Web Services Integration." By Jeffrey Burt. In eWEEK (March 19, 2002). "Sun Microsystems "will unveil expanded integration capabilities for its iPlanet Portal and Integration servers, designed to enable developers to more quickly and easily create and deploy Web services applications. The Palo Alto, Calif., company also will announce a joint initiative with IBM to create a portlet API standard, named right now JSR 168. The portal connection specification will enable businesses to aggregate content and data within a portal and move the information between disparate portal platforms, said Sanjay Sarathy, director of developer enablement at Sun... Enhancements to the iPlanet Integration Server include the release of the iPlanet XML Adapter Designer, or XAD, and new import capability for WSDL (Web Services Description Language) and direct support for SOAP (Simple Object Access Protocol). The combination will enable users to integrate legacy applications and will offer end-to-end management of reusable Web services. Also, the XAD framework gives developers the capability to build and deploy XML adapters for the iPlanet Integration Server, EAI Edition. This will enable them to more quickly integrate back-end systems with a company's Web-based system, said Dave Hearn, director of group product marketing for Sun ONE (Open Net Environment) business integration... The joint project with IBM to create a portal specification will help users move Web services between portal platforms, which Sarathy said was becoming the user interface of choice among developers. The specification is being put before the Java Community Project..."

  • [March 20, 2002] "StarOffice Goes Commercial, and Stays Open-Source." By Tom Krazit. In InfoWorld (March 19, 2002). "Sun Microsystems's StarOffice 6.0 will come with enhanced features and added support, but at a price, as Sun aims to attract a wider audience, such as businesses, towards the office-productivity software suite. A less sophisticated version of the product will still be available for free download from OpenOffice.org, the open-source community sponsored by Sun, the company announced Tuesday. 'We are positioning this product as a direct competitor to Microsoft's Office,' said Mike Rogers, vice president and general manager of desktop and office productivity at Palo Alto, California-based Sun. StarOffice is an office suite that includes word processing, spreadsheet, Web publishing and database applications. Along with version 6.0, currently in beta-testing, Sun will throw in 'enhanced support features,' such as online and phone support, training and deployment assistance, said Rogers. 'CIOs (chief information officers) at enterprises are uneasy about (adopting) a product without support and training for their IT staff. Our beta testers told us they wouldn't standardize on a product without that,' which led Sun to make those support features available, Rogers said. It will also come with features such as added fonts, a larger clip-art library, a database, a spell-checker, and other third-party applications. The current version, StarOffice 5.2, has been available for download on http://www.openoffice.org/, or purchased via retail outlets for $39.95, which included a CD and documentation. With the release of 6.0, Sun will stop distributing version 5.2 altogether; free downloads will end and the product will not be shipped either, said Rogers..." [StarOffice 6.0 Beta software "previews new features and capabilities, including improved interoperability with Microsoft Office files, support for XML file formats, integrated creativity and productivity tools, and improved international support with unicode technology..."] See: "StarOffice XML File Format."

  • [March 20, 2002] "A Set Theory Based Approach on Applying Domain Semantics to XML Structures." By Wolfgang Schuetzelhofer (IBM Austria, Obere Donaustrasse 95, A-1020 Vienna, Austria) and Karl M. Goeschka (Vienna University of Technology, Gusshausstrasse 27-29/384, A-1040 Vienna, Austria). Pages 1210-1219 in Proceedings of the 35th Annual Hawaii International Conference on System Sciences. HICSS 2002. Big Island, Hawaii, USA, January 7-10, 2002. Edited by R. H. Sprague. Los Alamitos, CA, USA: IEEE Computer Society, 2002. With 19 references. Abstract. "As XML is gathering more and more importance in the field of data interchange in distributed business to business (B2B) applications, it is increasingly important to provide a formal definition of XML-structures together with a well defined way to map business domain semantics to these structures. An XML-algebra, similar to the relational algebra, is required for the formal definition of operations and transformations and to prove the correctness and completeness of design methods. To develop an XML algebra, we propose a sound mathematical foundation, modeling XML-structures as typed directed graphs based on set theory. Together with a formal method to apply domain semantics to directed graphs we present a three layer meta model to address the separation of structure and content, and we introduce extensible type hierarchies on nodes and links. This allows to model and validate business domain semantics on different levels of abstraction... In this paper we have introduced the concept of a Domain Graph (DoG) as a directed graph with typed nodes and links, being based on what is known as a semantic net. We have outlined, how domain semantics can be applied to a DoG by specifying types and structural constraints. Business domain modeling at the type-level, thus separating structure and content, is proposed as a flexible approach of delivering semantic data. The concept of type hierarchies as well as the approach of link composition have proved to be powerful means of abstraction. In future work, the concept of composition as described in this paper can profitably be generalized introducing the notion of composite graphs. Where a composite graph is a DoG with nodes representing entire graphs called component graphs, and with links representing relationships between component graphs. Assuming a component graph itself is a composite graph, this allows nested graph composition of arbitrary depth. Throughout this paper we have presented static aspects of DoGs using set theory to describe their structure. The set theory based model of a DoG, together with the formal specification of validity, build a sound mathematical basis to develop an XML-algebra allowing to define operations on XML-structures, thus describing dynamic aspects. Manipulating a DoG by inserting, deleting or updating nodes and links, thereby maintaining consistency and validity of the DoG are such dynamic aspects to be described. Set theory as the mathematical basis provides for formal specifications and proofs of correctness and completeness of design methods by proving the equivalence and soundness of transformations. New and extended design methods, which can be formally specified and verified, are seen to be a profitable output of future work based on this paper."

  • [March 20, 2002] "An Ontology-Based HTML to XML Conversion Using Intelligent Agents." By Thomas E. Potok, Mark T. Elmore, Joel W. Reed, and Nagiza F. Samatova (Oak Ridge National Laboratory). In Proceedings of the 35th Annual Hawaii International Conference on System Sciences. HICSS 2002. Big Island, Hawaii, USA, January 7-10, 2002. Edited by R. H. Sprague. Los Alamitos, CA, USA: IEEE Computer Society, 2002. With 13 references. Abstract. "How to organize and classify large amounts of heterogeneous information accessible over the Internet is a major problem faced by industry, government, and military organizations. XML is clearly a potential solution to this problem, however, a significant challenge is how to automatically convert information currently expressed in a standard HTML format to an XML format. Within the Virtual Information Processing Agent Research (VIPAR) project, we have developed a process using Internet ontologies and intelligent software agents to perform automatic HTML to XML conversion for Internet newspapers. The VIPAR software is based on a number of significant research breakthroughs. Most notably, the ability for intelligent agents to use a flexible RDF ontology to transform HTML documents to XML tagged documents. The VIPAR system is currently deployed at the US Pacific Command, Camp Smith, HI, traversing up to 17 Internet newspapers daily." See also "VIPAR Multi-Agent Intelligence Analysis System": "In VIPAR, intelligent software agents have been successfully developed to address challenges facing the intelligence community in quickly gathering and organizing massive amounts of information then distill that information into a form directly and explicitly amenable for use by an intelligence analyst. The Oak Ridge National Laboratory successfully implemented this technology for the US Pacific Command. The VIPAR project resulted in the development of intelligent software agents that read and organized information from electronically available documents (specifically, in this case, dozens of Internet newspapers). The VIPAR system then visually present information, organized according to the needs of the customer, to the analyst. This system automatically and intelligently provides the leveraging of analysts expertise to process and distill information many times faster and more thoroughly than could be done by the intelligence analysts, themselves..." VIPAR Presentation (/PPT) and VIPAR General Description. [cache]

  • [March 20, 2002] "Experiments in the Use of XML to Enhance Traceability between Object-Oriented Design Specifications and Source Code." By Jim Alves-Foss (Center for Secure and Dependable Software, University of Idaho, Moscow, ID 83844-1008), Daniel Conte de Leon (Center for Secure and Dependable Software), and Paul Oman (Schweitzer Engineering, Laboratory, Inc., Pullman, WA 99163-5603). In Proceedings of the 35th Annual Hawaii International Conference on System Sciences. HICSS 2002. Big Island, Hawaii, USA, January 7-10, 2002. Edited by R. H. Sprague. Los Alamitos, CA, USA: IEEE Computer Society, 2002. With 20 references. Abstract. "In this paper we explain how we implemented traceability between a UML design specification and its implementing source code using XML technologies. In our linking framework an XMI file represents a detailed- design specification and a JavaML file represents its source code. These XML-derivative representations were linked using another XML file, an Xlink link-base, containing our linking information. This link-base states which portions of the source code implement which portions of a design specification and vice-versa. We also rendered those links to an HTML file using XSL and traversed from our design specification to its implementing source code. This is the first step in our traceability endeavors where we aim to achieve total traceability among software life- cycle deliverables form requirements to source code... We successfully used Xlink to design and implement links from a detailed-design specification to its implementing source code for our SeaBASS project. An XMI file represented this detailed-design specification and a JavaML file did it for our SeaBASS source code. These XML-derivative representations were linked using an Xlink link-base containing our linking information. This link-base states which portions of our SeaBASS Java source code implement which portions of our SeaBASS design specification. We modified a version of an XSL stylesheet that transforms an XMI design into an HTML table format. The new stylesheet transforms the XMI file and, at the same time, parses the links stated in our link-base. The result of this new stylesheet is an HTML file showing the design of our SeaBASS project in the same previous table format but now containing links to each of the source code files implementing the class at design level. The main advantage in using XML is that it allows us to automate creation and management of links. In this case-study links in our XMI specification and templates in our XSL stylesheet were manually added. We plan to develop a prototype of a software development environment where links between object-oriented design and source code would be automatically created and maintained by the system based on implicit and explicit information given by the software engineer. Using such a tool would allow engineers to develop large software systems while maintaining consistency between objectoriented design specifications and its implementing source code. We plan to use XML-derivates not just for detaileddesign and source code, though for all software life-cycle deliverables, and this work is a proof-of concept that we can implement traceability among those disparate documents by means of XML and Xlink... Our current view of links stated in a link-base is using simple HTML links. More work is needed to add different behavior to the links and not to limit to today browsers' simple links behavior. 
For example add behavior to a link in a way that when we right-click on a class, in a graphical UML design, options to see/edit the source code appear or to navigate to the requirements that this class is implementing. This can be done by means of XSL and/or by means of an integrated software development environment..."
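
The link-base mechanism described here is standard XLink: an extended link groups locators for a design element and for the code implementing it, with an arc giving the traversal direction. The fragment below is a schematic reconstruction with invented element names and URIs, parsed only to confirm well-formedness.

    import java.io.StringReader;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.xml.sax.InputSource;

    // Schematic XLink link-base relating a design element to the
    // source code implementing it. All names and URIs are invented;
    // the paper's actual link-base is not reproduced in the abstract.
    public class LinkbaseSketch {
        static final String LINKBASE =
            "<traceLink xmlns:xlink='http://www.w3.org/1999/xlink'"
          + "           xlink:type='extended'>"
          + "  <design xlink:type='locator' xlink:href='design.xmi#Account'"
          + "          xlink:label='spec'/>"
          + "  <code xlink:type='locator' xlink:href='Account.java.xml'"
          + "        xlink:label='impl'/>"
          + "  <implements xlink:type='arc' xlink:from='spec' xlink:to='impl'/>"
          + "</traceLink>";

        public static void main(String[] args) throws Exception {
            DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
            f.setNamespaceAware(true);  // treat xlink: attributes as namespaced
            f.newDocumentBuilder().parse(new InputSource(new StringReader(LINKBASE)));
            System.out.println("link-base is well-formed");
        }
    }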

  • [March 20, 2002] "XML-Based Available-to-Promise Logic for Small and Medium Sized Enterprises." By Joerg-Michael Friedrich and Jochen Speyerer (Information Systems Research Network, University of Erlangen-Nuremberg, Äusserer Laufer Platz 13-15, 90403 Nuremberg, Germany). In Proceedings of the 35th Annual Hawaii International Conference on System Sciences. HICSS 2002. Big Island, Hawaii, USA, January 7-10, 2002. Edited by R. H. Sprague. Los Alamitos, CA, USA: IEEE Computer Society, 2002. With 22 references. Abstract. "At the Information Systems Research Network (FORWIN), Nuremberg, Germany we have prototypically implemented a lean and flexible available-to-promise application which is integrated with a framework of software components fulfilling the functions of supply chain management (SCM). This project is to demonstrate that it is possible to implement cost-effective and flexible software tailored to the needs of small businesses providing reliable information about product availability. To suit a large variety of companies, the way in which the component influences decisions or automates processes can be adjusted through different parameters, such as timeout, substitution, automatic processing or prioritization of suppliers. In order to integrate all sorts of existing MRP or legacy systems along the supply chain the information flow is organized through a transaction-based exchange of standardized XML documents via the Internet... The necessary software to run the CWSCM application includes the Java 2 runtime environment (JRE), Standard Edition, in version 1.3.0, the Java API for XML processing (JAXP) package in version 1.0.1, and the Xalan XSLT processor version 1.2. All these programs are available for most operating systems and can be freely Instead of sending information between the partners of the SC in plain text format, the software utilizes the XML standard. The structuring of information with the help of tags may seem excessive, since the documents contain a redundant overhead of data, but the use of XML offers several advantages, which legitimate this procedure. First of all, it is simple for the software to control if a document contains all necessary data. Not only is the application able to check whether a document is well formed, by using a DTD, the software can also test the validity of a document. Another advantage of XML can be seen in the fact that data is separated from format instructions. With a cascading style sheet (CSS) or extensible style sheet language transformations (XSLT), a document can be transformed into and represented in a wide variety of formats, e.g., hypertext markup language (HTML) or portable document format (PDF), without the need of recoding. Finally, XML adopted Unicode, a character encoding standard that supports most languages, by providing a unique number for every character. This, and the fact that XML is a very strict standard, ensures a platform and software independent way to communicate, no matter the language."

  • [March 20, 2002] "Using XML to Facilitate Information Management across Multiple Local Government Agencies." By G.M. Bryan, J.M. Curry, C. McGregor, D. Holdsworth and R. Sharply (Centre for Advanced Systems Engineering, University of Western Sydney, Locked Bag 1797, Penrith South DC NSW 1797).. In Proceedings of the 35th Annual Hawaii International Conference on System Sciences. HICSS 2002. Big Island, Hawaii, USA, January 7-10, 2002. Edited by R. H. Sprague. Los Alamitos, CA, USA: IEEE Computer Society, 2002. Abstract. "This paper details a collaborative research initiative between the Penrith City Council, Penrith Australia and the Centre for Advanced Systems Engineering (CASE) at the University of Western Sydney. It details the development of a fully functioning XML-based prototype system that provides for effective integration of services offered by a collaborating group of legacy systems. The key contribution of this work is to provide an open systems based infrastructure that allows collaborating legacy systems, based on heterogeneous database and server platforms, to offer an integrated query service over the Internet... In mediator/translator systems, the mediator contains a centralised knowledge base storing details of how to process a request for a specific type of information. A translator interfaces each data repository to a mediator. The translator converts incoming queries from a system standard format into a format that the underlying database management system can execute. The translator also converts the data resulting from a transaction to conform to the standard required. Examples of mediator / translator based systems include TSIMMIS, BEA System's WebLogic, and IBM's WebSphere... Our approach differs from deployment techniques used in systems based on data gateway or mediator type systems where the required functionally to execute a query is installed in a library which is then replicated throughout system by the system administrator... A typical client interaction with the Information Brokerage would start with a client wishing to access a service and obtaining details of the required service from the Business Exchange. The Business Exchange contains details of electronic services available to clients. The type of information held in a Business Exchange includes a description of the service being offered and details of the service's input/output protocols in the form of two XML documents. (1) The Service Request XML document, which sets out the input requirements of the service and processing instructions. The processing instructions contained in the Service Request document are used by the Service Broker to execute the service. (2) The Output XML document, which, as the name implies, provides the client with a sample output from the system. Once the client has completed the Service Request document they forward it to a Service Broker. Both the Business Exchange and the Service Broker may be located on the client's site or any external site on the Internet. Upon receipt of a Service Request document the Service Broker begins an ACID transaction. The Service Broker unpacks the document and using the Service Provider details (description of XML input and output documents and the URL of the service), processing instructions and input data it contains, executes the required service... The Client Environment provides the user with the tools necessary to build and maintain an interface with the Information Broker. 
The Client Environment consists of four principal components: (1) Service Selection: which allows the client to select a service from a Business Exchange and execute the service on a Service Broker. (2) EditX: an editor that allows the client to create XSL documents to transform both incoming and outgoing XML documents. (3) The Legacy System Interface: which allows the client to directly interface a legacy system to a Service Broker; and (4) GenX: which generates XML documents from non-XML data formats using a sample document..."

  • [March 20, 2002] "Continuous Queries within an Architecture for Querying XML-Represented Moving Objects." By Thomas Brinkhoff and Jürgen Weitkämper (Institute of Applied Photogrammetry and Geoinformatics [IAPG], Fachhochschule Oldenburg/Ostfriesland/Wilhelmshaven, Ofener Str. 16/19, D-26121 Oldenburg, Germany. Pages 136-154 (with 33 references) in Proceedings 7th International Symposium on Spatial and Temporal Databases (SSTD 2001). July 12-15, 2001, Redondo Beach, CA, USA. Pubished in Lecture Notes in Computer Science, Volume 2121. "The development of spatiotemporal database systems is primarily motivated by applications tracking and presenting mobile objects. Another important trend is the visualization and processing of spatial data using XML-based representations. Such representations will be required by Internet applications as well as by location-based mobile applications. In this paper, an architecture for supporting queries on XML-represented moving objects is presented. An important requirement of applications using such an architecture is to be kept informed about new, relocated, or removed objects fulfilling a given query condition. Consequently, the spatiotemporal database system must trigger its clients by transmitting the required information about relevant updates. Such queries are called continuous queries. For processing continuous queries, we have to reduce the volume and frequency of transmissions to the clients. In order to achieve this objective, parameters are defined which model technical restrictions as well as the interest of a client in a distinct update operation. However, delaying or even not transmitting update operations to a client may decrease the quality of the query result. Therefore, measures for the quality of a query result are required. [...] In this paper, an important query in the field of spatiotemporal database systems -- the continuous query -- has been investigated. The work was motivated by applications tracking and presenting mobile objects in an Internet environment where different types of clients including mobile devices are used. For such applications querying moving spatial objects, an architecture has been proposed, which supports an XML-based data representation. This architecture has been the base for discussing continuous queries. After a classification of the update operations in a spatiotemporal database system, a formal definition of the result set of continuous queries has been presented. In order to reduce the volume and frequency of transmissions to the clients, parameters have been defined in order to model technical restrictions as well as the interest of a client in a distinct update operation. A component for performing such a filtering has been integrated into our architecture. However, delaying and not transmitting update operations to a client mean to decrease the quality of the query result. Therefore, two quality measures have been presented. The next step will be the development on efficient algorithms for performing the filtering. A special attention will be given to maintain the quality of the query results. A further requirement to such algorithms concerns their scalability for supporting large number of clients. The definition of continuous queries and the XML-based representation of spatiotemporal objects were based on a quite simple model of spatiotemporal objects. Therefore, future work should cover a definition using a more expressive data model. 
In particular, the support of motion described by a motion vector instead of a constant object position must be investigated. Another aspect is the behavior of the restricting parameters. The resolution of a client may be changed by performing a zoom operation, and the parameters minOps, maxOps and minPeriod may be affected by the traffic of other users of the network connection or by a changed capacity of the connection. For example, using the new mobile telephone standard UMTS, the maximum speed of a connection will depend on the distance of the mobile telephone to the next base station. Therefore, filter algorithms are required that observe varying parameters."
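
    The filtering component turns on exactly these parameters. The sketch below is only an illustration of the buffering idea -- the names minPeriod and maxOps follow the paper, but the types and the policy are invented and are not the authors' algorithm:

        import java.util.ArrayDeque;
        import java.util.Queue;

        // Buffers update operations for one client and decides, from the client's
        // declared parameters, when and how many of them to transmit.
        public class UpdateFilter {
            private final long minPeriodMillis; // shortest allowed interval between transmissions
            private final int maxOps;           // most operations per transmission
            private final Queue<String> pending = new ArrayDeque<String>();
            private long lastSent = 0;

            public UpdateFilter(long minPeriodMillis, int maxOps) {
                this.minPeriodMillis = minPeriodMillis;
                this.maxOps = maxOps;
            }

            public void onUpdate(String operation) {
                pending.add(operation);
                long now = System.currentTimeMillis();
                if (now - lastSent >= minPeriodMillis) {
                    // Send at most maxOps operations; whatever stays buffered is
                    // precisely where the quality of the query result may degrade.
                    for (int i = 0; i < maxOps && !pending.isEmpty(); i++) {
                        send(pending.poll());
                    }
                    lastSent = now;
                }
            }

            private void send(String operation) {
                System.out.println("transmit: " + operation);
            }
        }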

  • [March 20, 2002] "BitCube: A Three-Dimensional Bitmap Indexing for XML Documents." By Jong P. Yoon, Vijay Raghavan, and Venu Chakilam (Center for Advanced Computer Studies, University of Louisiana, Lafayette, LA 70504-4330); Larry Kerschberg (Department of Information and Software Engineering, George Mason University, Fairfax, VA 22030-4444). In Journal of Intelligent Information Systems: Integrating Artificial Intelligence and Database Technologies Volume 17, Numbers 2/3 (December 2001), pages 241-254 (with 13 references). "XML is a new standard for exchanging and representing information on the Internet. Documents can be hierarchically represented by XML-elements. In this paper, we propose that an XML document collection be represented and indexed using a bitmap indexing technique. We define the similarity and popularity operations that are suitable for bitmap indexes. We also define statistical measurements in the BitCube: the center and the radius. Based on these measurements, we describe a new bitmap indexing-based technique to cluster XML documents. The techniques for clustering are motivated by the fact that the bitmap indexes are expected to be very sparse. Furthermore, a 2D bitmap index is extended to a 3D bitmap index called the BitCube. Sophisticated querying of XML document collections can be performed using primitive operations such as 'slice', 'project' and 'dice'. Experiments show that the BitCube can be created efficiently and that the primitive operations can be performed more efficiently with the BitCube than with other alternatives... The main contributions of this paper are (1) the application of bitmap indexing to represent XML document collection as a 3-Dimensional data structure: XML document, XML-element path, and terms or words, (2) the definition of BitCube index based schemes to partition documents into clusters in order to efficiently perform BitCube operations, and (3) a document retrieval technique based on application of BitCube operations to subcubes resulting from the clustering phase. Experiments to show that our bitmap approach improves document clustering and performance of document retrieval on the Internet over alternative approaches are in progress. (4) Even for big XML document collections, the indexing is done in considerable amount of time. The time taken for various BitCube operations remained constant." [cache]

  • [March 20, 2002] "An Aggressive Aggregation of XML Documents for Summary Data Generation." By Jong P. Yoon and Larry Kerschberg. Presented at the Fifth World Multi-Conference on Systemics, Cybernetics and Informatics (SCI 2001) Orlando, Florida, 22-25 July 2001. With 19 references. "Aggregate functions are critically important and widely used to build summary data in WWW. Aggregation queries, that are used to summarize source data, may often result in incorrect answers due to the irregularity of XML data: an XML-element appears with irregular content structure and contains non-atomic or empty content although it follows a DTD or XML Schema. To cope with this problem, we propose an aggressive aggregation method for summarizing XML data. The contribution of this paper includes sound and complete collection of information from irregular XML data, and construction of summary data for XML documents in WWW. The method proposed in this paper can also be used for many other Web-based applications involving semistructured documents for electronic commerce and for OLAP data cubes. [...] This paper has described the 'aggressive' approach to generate summary data for XML documents in WWW. The summary data contains aggregation information of XML documents. In this paper, we considered only a counting method by using XML Schema and the concept hierarchy. We constructed a concept tree in XML which can be easily integrated with user-provided queries. The proposed approach can be easily applied to other aggregation operations such as SUM, MAX, MIN, AVG. The contribution of this paper includes a new method of generating complete summary data about XML documents. Our extended work includes approximate aggregation of XML documents in WWW..." [cache]

  • [March 19, 2002] "Ektron Puts XML Editing in the Browser." By [Seybold Bulletin Staff]. In The Bulletin: Seybold News and Views On Electronic Publishing Volume 7, Number 24 (March 20, 2002). "Flush with the success of its browser-based editing software (eWebEditPro), emerging content-management and graphic design toolmaker Ektron is rolling out a new browser-based XML editor. Positioned in the gap between Altova's XML Spy and the higher-end XML editing products (Arbortext's Epic and SoftQuad's XMetaL), eWebEditPro+XML takes a practical approach to embedding XML-tagging functionality to an editing product aimed at nontechnical users... EwebEditPro+XML has several advantages compared with most of the XML capture facilities currently offered by Web content management vendors. First, it's not limited to wrapping tags around the entire contents of a windowful of text. Because the API makes it easy to grab and transform selected text, eWebEditPro+XML offers out-of-the-box functionality for adding XML markup at a very granular level-for example, designating company or product names as elements within an article. At the same time, the product gives the developer plenty of freedom to craft metadata input forms to go with the element tagging... We think Ektron's new product hits a sweet spot in XML editing-a lightweight (browser-based) client designed for business users. Altova's product is better for developers, and Arbortext and SoftQuad are still better choices for full-tiime editorial use, but in between, the market needs tools that are easy to use out of the box, yet don't compromise too much on functionality. Ektron has struck such a balance, and we'll be surprised if it doesn't draw a crowd of imitators later this year..."

  • [March 19, 2002] "UBL NDR Position Papers." By Members of the UBL Naming and Design Rules Subcommittee (NDR SC). First Public Release. 16-March-2002. 48 pages. Intended audience: EDI experts, business experts, and XML experts interested in the development of an international standard for basic XML business schemas. Version URL: http://oasis-open.org/committees/ubl/200203/ndrsc/review/draft-ndr-20020316.pdf. This PDF document contains four separate papers (also described below): (1) "Position Paper: Definition of Elements, Attributes, and Types"; (2) "Position Paper: Code Lists"; (3) "Elements versus Attributes"; (4) "Position Paper: Modularity, Namespaces and Versioning." These papers are considered part of the first UBL review cycle, "being made available at this time to gain early input from UBL liaison organizations" and for wider public review and comment. To post comments publicly, subscribe to the list ubl-comment@lists.oasis-open.org through the list manager. See also the announcement "UBL Library Content Subcommittee Releases Draft UBL Library of Reusable Types" for references to other UBL Library review materials. The UBL Library Content Subcommittee is developing "a standard XML business library content by taking an existing library (xCBL 3.0) as a starting point and modifying it to incorporate the best features of other existing business and core component libraries. Its goals are to create a BIE Catalog by identifying the Basic Information Entities out of the xCBL Library, to create XML (XSD) Schemas for business document types, and to document a customization methodology." The review package contains a methodology document describing the approach taken in this design work, draft XML Schemas derived from spreadsheets, and sample XML instances of UBL Order documents. The three XML schemas represent the UBL Library, the UBL Order document, and the Core Component Library. Review comment are being accepted through April 08, 2002. See: "Universal Business Language (UBL)." [cache]

  • [March 19, 2002] "Position Paper: Definition of Elements, Attributes, and Types." By Mark Crawford (LMI), Arofan Gregory (CommerceOne), and Eve Maler (Sun). Date: 16-March-2002. Reference: 'draft-arofan-tagspec-03.doc'. 9 pages. Published as one of four papers in "UBL NDR Position Papers." By Members of the UBL Naming and Design Rules Subcommittee (NDR SC). 16-March-2002. "In W3C XML Schema (known as XSD), elements are defined in terms of complex or simple types and attributes are defined in terms of simple types. The rules in this section govern the consistent naming and structuring of these constructs and the manner of unambiguously and thoroughly documenting them... These rules refer to the following concepts taken from ISO 11179 and used subsequently in the ebXML Core Components work... [Object Class; Property Term; Qualifier; Representation Term (RT); Core Component Type (CCT)]... Rules are given below on documenting XML constructs to indicate the unambiguous relationship of each construct to its corresponding Core Component-based semantic representation."

  • [March 19, 2002] "Position Paper: Code Lists." By Eve Maler (Sun). Date: 27-February-2002. Reference: 'draft-maler-codelists-04.doc'. 6 pages. Published as one of four papers in "UBL NDR Position Papers." By Members of the UBL Naming and Design Rules Subcommittee (NDR SC). 16-March-2002. "A code list, for our purposes, is a closed set of codes (possibly with a provision for indicating custom codes) that is defined and maintained by an organization along with documentation of the meaning of each code. A 'code' is a character string (letters, figures or symbols) that for brevity and/or language independence may be used to represent or replace a definitive value or text of an attribute. Codes usually are maintained in code lists per attribute type (e.g., colour)... The mechanism for handling all appearances of codes in UBL markup is the same, whether the code is internal or external. The code is an XML qualified name, or 'QName', consisting of a namespace prefix and a local part separated by a colon... The intent is for the namespace prefix in the QName to be mapped, through the use of the xmlns attribute as part of the normal XML Namespace mechanism, to a URI reference that stands for the code list from which the code comes. The local part identifies the actual code in the list that is desired. Following is an example of a mapping of the 'baskin' prefix to Version 1.0 of a Baskins-Robbins ice cream flavor namespace, assuming that UBL has had to define its own URI reference for this namespace: <IceCream xmlns:baskin="http://www.oasis-open.org/committees/ubl/codelists/BR31-V1.0" IceCreamFlavorCode="baskin:Chocolate"/>..."

  • [March 19, 2002] "Elements versus Attributes." By Gunther Stuhec (SAP). 18-March-2002. 17 pages. Published as one of four papers in "UBL NDR Position Papers." By Members of the UBL Naming and Design Rules Subcommittee (NDR SC). 16-March-2002. "A common cause of confusion, or at least uncertainty, in the design of a schemas is the choice between specifying parts of the document as elements or attributes... Elements are logical units of information in a schema. They represent information objects... Attributes are atomic, referentially transparant characteristics of an object that have no identity of their own. Generally this corresponds to primitive data types (e.g., Strings, Date, etc.). Taking a more logical view, an attribute names some characteristic of an object that models part of its internal state, and is not considered an object in its own right. That is, no other objects have relationships to an attribute of an object, but rather to the object itself... Is the content to be spell-checked? [If 'yes', use an element; if 'no', use an attribute]... The following diagram illustrates a way to find out how want to be an Element or an Attributes necessary to be define it... " [In terms of the Core Components Technical Specification:] "Component Content will be represented as an Element-Value; The Supplementary Components will be represented as Attributes." Note: most characterizations about element and attribute presented in this paper and in previous treatises represent opinions about how one, arguably, ought to best model "content" in XML documents; in most cases, the judgments are arbitrary, as XML 1.0 itself does not make normative statements about element/attribute semantics, nor even about what should constitute "content" or "not-content" from the data modeling perspective. One may recommend guidelines for best practice in particular application scenarios; in general, it will be dangerous as well as antithetical to core principles of XML (having no pre-defined application-level processing semantics, including default non/display semantics) to declare what kind of content should or should not be modeled in element or attribute structures. See other presentations: "SGML/XML: Using Elements and Attributes."

  • [March 19, 2002] "Position Paper: Modularity, Namespaces and Versioning." By Bill Burcham (Sterling Commerce). Date: March 15, 2002. Reference: 'draft-burcham-modnamver-03.doc'. 12 pages, with addditional 3 pages of examples. Published as one of four papers in "UBL NDR Position Papers." By Members of the UBL Naming and Design Rules Subcommittee (NDR SC). 16-March-2002. "There are many possible mappings of XML schema constructs to namespaces and to operating system files. This paper explores some of those alternatives and sets forth some rules governing that mapping in UBL. It addresses three topics related to namespaces: (1) Namespace Structure: What shall be the mapping between namespaces and XML Schema constructs (e.g., type definitions)? (2) Module Structure: What shall be the mapping between namespaces and XML Schema constructs and operating system files? (3) Versioning: What support for versioning of schema shall be provided?..."

  • [March 18, 2002] "Aggregate UDDI Searches with Business Explorer for Web Services. Developers can radically simplify their Web services searches with BE4WS." By Liang-Jie Zhang (Researcher, IBM T.J. Watson Research Center) and Qun Zhou (Software Engineer, IBM Software Group). From IBM developerWorks, Web services. March 2002. ['Many developers believe that Web services will open up a world of interlocking business services that find each other over the Internet, thus integrating disparate code into useful programs. But if this vision is to come to pass, users must be able to find services out there on the vast public network. Current searching APIs are rudimentary at best, and a developer must write a lot of code in order to find the Web services he or she desires. Business Explorer for Web Services (BE4WS) is an alphaWorks technology, based on the Java programming language and XML, that aims to simplify Universal Description, Discovery, and Integration (UDDI) searches for developers and users alike. Liang-Jie Zhang and Qun Zhou walk you through some example code to show you how it's done -- and show you how you can a build a Web-based application that will allow users to find Web services without writing any code at all.'] "Web services are typically published to a public or private UDDI registry. The design of UDDI allows simple forms of searching and allows trading partners to publish data about themselves and their advertised Web services to voluntarily provide categorization data. In general, UDDI can locate businesses whose identities are well known; and users can find out what services those businesses offer and how to interface with them electronically. The current UDDI search mechanism can only focus on a single search criterion, such as business name, business location, business category, or service type by name, business identifier, or discovery URL. From an e-business application developer's point of view, it would be ideal to send a few sequential or programmed search commands to the UDDI registry for information aggregation. Potential information sources could include multiple UDDI registries and other searchable sources. Obviously, there is a need to dramatically extend the current search capability for Web services to improve efficiency and performance. All existing UDDI search engines only support a single UDDI registry. For example, Microsoft's UDDI search technology only allows users to search its UDDI registry, and those searches can only use a single search query, based on one of the following categories: business name, business location, business category, and service type by name, business identifier, or discovery URL. The known taxonomy types include NAICS, UNSPSC, SIC, or a geographic code (GEO); the known identifier types include D-U-N-S, Thomas Registry numbers, and US Tax ID. In this article, we will introduce a newly released technology, BE4WS, an XML-based UDDI-exploring engine that provides developers with standard interfaces for efficiently searching business and service information in individual or multiple UDDI registries. BE4WS is written in Java programming language; it uses information in UDDI Search Markup Language (USML) documents to direct UDDI clients like UDDI4J to conduct complex UDDI searches. You can build BE4WS into your Java programs, or invoke it from a servlet and create a BE4WS Web application or Web service..." See "Universal Description, Discovery, and Integration (UDDI)."

  • [March 15, 2002] "RSS Beta Validator Now Available." Posting from Leigh Dodds to 'rss-dev@yahoogroups.com' mailing list. "I've just uploaded a beta of the revised RSS validator: http://www.ldodds.com/rss_validator/1.0/validator.html. There's a separate form for using the beta. This revised version adds support for the three core modules (DC, Content and Syndication) and provisional support for the proposed modules. Only streaming and taxonomy are not currently supported. I'm hoping to add support for the remaining modules next week. I also fixed up a couple of typos/bugs here and there. The core Schematron schema has actually been heavily reworked and now relies on abstract rules to perform much of the validation -- which is in fact common to many elements. You can grab a beta copy of the original Schematron schemas off of the validator page. Feedback welcomed on or off list..." Documentation: 'Experimental Online RSS 1.0 Validator': "This prototype is based around a Schematron schema for validating RSS 1.0. The schema is used to generate an XSLT stylesheet which performs the actual validation. In this version of the validator the validator produces a simple HTML report listing the errors, as well as copy of the original RSS 1.0 file. This prototype is not meant for production use as yet [2002-03-15], as it's a prototype for demonstration and testing purposes. The validator currently does not generate as nicely formatted a report as I would wish. This is due to limitations in the XSLT processor used by the W3C service. The prototype uses a slightly tweaked version of sch-report2.xsl which generates a tweaked validator..." See also "RSS Validator: A Schematron Schema for RSS." See general references in "RDF Site Summary (RSS)."

  • [March 15, 2002] "Introduction to DAML: Part II." By Roxane Ouellet and Uche Ogbuji. From XML.com. March 13, 2002. ['Uche Ogbuji and Roxane Ouellet return this week with the second part of their introduction to the DARPA Agent Modeling Language, DAML. Their article develops the ontology from the first installment, demonstrating some more advanced DAML concepts.'] "RDF was developed by the W3C at about the same time as XML, and it turns out to be an excellent complement to XML, providing a language for modeling semistructured metadata and enabling knowledge-management applications. The RDF core model is successful because of its simplicity. The W3C also developed a purposefully lightweight schema language, RDF Schema (RDFS), to provide basic structures such as classes and properties. As the ambitions of RDF and XML have expanded to include things like the Semantic Web, the limitations of this lightweight schema language have become evident. Accordingly, a group set out to develop a more expressive schema language, DARPA Agent Markup Language (DAML). Although DAML is not a W3C initiative, several familiar faces from the W3C, including Tim Berners-Lee, participated in its development. The preceeding article in this series presented basic DAML concepts and constructs, explaining the most useful modeling tools DAML puts into the designer's hands. The present article demonstrates more advanced DAML concepts and constructs, expanding on the Super Sports example... So far we have looked at how DAML+OIL gives us richer means for expressing constraints in schemas. If this were all it did, it would still be a welcome advance over RDFS. But it happens to go well beyond that. DAML+OIL gives modelers a rich expressiveness. It is not just a schema language but also an ontology language, providing primitives that support the general representation of knowledge. For one thing, it allows one to express classifications by inference rather than by explicitly listing which resources go into which buckets. Behind this simply-stated idea lies a surprising range of nuance for accommodating the classic difficulty of translating the models we hold in our minds to the models we mold in our code... In the first two articles of this series, we have presented the basics of DAML+OIL by example. There are additional property restrictions based on the cardinality (number of occurrences) of a property for each instance, and there are many nuances we have not covered. DAML+OIL introduces many constructs, and at first it can be a bit vexing to try to remember all the different constructs from RDF, RDFS, and DAML+OIL. The final article will provide some assistance by tabulating all these constructs, noting which specification defines them, and briefly describing their usage." See "DARPA Agent Mark Up Language (DAML)."

  • [March 15, 2002] "XLink: Who Cares?" By Bob DuCharme. From XML.com. March 13, 2002. ['Bob observes that XLink took over four years to reach W3C Recommendation status, at which point it has now been for eight months. Despite all that time, there is still very little activity around XLink. In his article, he sets out to find why there's such a lack of interest in the third member of the original XML trinity.'] "... I don't mean it rhetorically. I really want to know: who out there still cares about XLink? I did care, ever since I first heard about the work on 'XML Part 2: Linking,' as it was called at the announcement of XML's existence at SGML '96. (XSL, before XSLT was split away from XSL-FO, was Part 3). I got excited at the concept of linking that was more powerful than HTML's but easier to understand than HyTime's. I looked forward to the creation of out-of-line links that connected two, three, or more resources into a single link without requiring write access to those resources. I saw how the ability to define and assign link types would ease the end user's difficulty in navigating the growing amount of connected information on the Web. I thrilled to the talk of linkbases becoming a new category of information product to buy and sell, creating new information by making intelligent connections between existing information. XLink is the only XML-related W3C specification that took over four years to get from first Working Draft to Recommendation status. Now that it's been a stable, finished spec for eight months, we're still seeing very little activity. So what's out there? Who cares about XLink?... Perhaps the interest isn't dying away, but was merely shifted to RDF and Topic Maps, technologies that seem to be fulfilling much of the promise of XLink. Perhaps their success represents the triumph of the original ideas of XLink, with out-of-line links and linkbases finding success under different names. This would put a positive spin on the history of XLink, but I'm losing hope on seeing any large-scale use of the elements described in the XLink Recommendation. By describing resource traversal in terms of the ending resource displaying in a new window or the same window, and in terms of when it does so, the link semantics are framed in a way that can be implemented on multiple systems, so it has some of the portability and longer shelf life advantages of markup that is not presentation-oriented. But it's still about how the resources are presented to the user and is, therefore, not markup that you'd store in your core XML database or document collection that you use to generate other formats as needed. Your elements that reference footnotes would store the ID information necessary to find the right footnotes, and your news stories, legal briefs, and judges' decisions that reference legal statues would store the ID information necessary to find those. If you decided that the footnotes or legal statutes should appear in a new window, you'd have a stylesheet add the appropriate XLink markup to the version of the markup being sent to the browser -- if the browser knew what to do with XLink markup..." See "XML Linking Language."

  • [March 15, 2002] "Inside Sablotron: Virtual XML Documents." By Petr Cimprich. From XML.com. March 13, 2002. ['Petr Cimprich, who is behind the Sablotron XSLT processor, explains the internals of Sablotron and expands on a concept of "virtual XML documents", which offer the power of XPath, XSLT and DOM over non-XML data sources.'] "Despite the growing popularity of streaming XML processing, many applications still need or prefer to store an entire XML tree in memory for processing. The internal representation can either stick to the Document Object Model (DOM) or use any other convenient form. DOM-like optimized structures allow fast access to documents using the DOM API methods. On the other hand, the binary representation optimized for the DOM isn't well suited to different kinds of processing, such as XPath and XSLT. The reason is an incompatibility of the DOM and the XPath models: the DOM's 'everything-is-a-node' approach isn't effective for the XPath and slows the resolution of queries down. This is why XPath and XSLT processors usually use their own internal representations rathern than DOM. Whatever internal representation is used, one still needs a convenient interface to access it. The interface needn't be published, as it is typically used for internal purposes only; however, it's hard to imagine a maintainable and extensible XML processor implementation without a well-defined interface to its temporary storage. Beside the fast native interface optimized for specific purposes, the processor can also provide other, standard interfaces allowing to access to documents once they've been parsed. This is true of Sablotron, an Open Source XML processor project I'm currently involved in. I use it here to illustrate the possibilities of XML processors, some of them not deployed quite yet. But back to internals and interfaces; Sablotron uses its own in-memory objects and a set of interface functions optimized for XPath and XSLT, but parsed documents can be accessed and modified via a subset of the standard DOM API at the same time... since the interface working as a base for XPath querying and XSLT transformations can be replaced with user-defined callback functions, external handlers can be used to get an arbitrary XML representation passed to XPath/XSLT directly. What this approach promises is a notable speed increase and a memory consumption decrease when compared to building whole documents. If you would like to experiment with this, I invite you to try out Sablotron. I'm not aware of any other XML processor supporting external handlers currently; information on a similar effort or your experiences with the XPath/XSLT/DOM via callbacks is welcomed."

  • [March 15, 2002] "Processing Model Considered Essential." By Leigh Dodds. From XML.com. March 13, 2002. ['Leigh Dodds sheds light on an unfulfilled requirement for XML, that of a processing model. As the number of XML-related specifications grow, such a model becomes essential in order to fully understand their interactions. For instance, what is the right order in which to process XInclude inclusions and XSLT transformations?'] "This week's XML-Deviant takes a step backwards in an attempt to foreground an issue that has been behind several recent debates in the XML community, namely, the lack of a processing model for XML... It's historical fact that the syntax of XML was defined before its data model, the XML Information Set (Infoset). While this contributed to the speed of delivery of the XML specification, it also lead to a number of subsequent problems; most notably, the discontinuities between the DOM and XPath, both of which define different tree models for XML documents. Looking at the plethora of additional specifications that have been subsequently produced, it is useful to characterize their functionality as specific manipulations on an infoset. For example, XInclude augments an infoset, XSLT transforms an infoset, and schema validation annotates an infoset with type and validity information. While valid in an abstract sense, this perspective is missing a statement of the possible orderings of these operations. Do certain operations need to be performed before others? Must entities be resolved before XSLT processing? Must one canonicalize a document before generating its signature ? How does one specify the order of operations to be carried out on a document? How do I state that I want to do a schema validation only after I've carried out all inclusions? Or vice versa? The W3C held an XML Processing Model Workshop in July, 2001, to begin discussing these issues... Creating an XML application should be like creating a mosaic: piecing together simple, well-defined pieces to create a whole. The complexity and richness should arise from how that whole is constructed. Individual pieces that don't fit should be clipped accordingly. It's time for the W3C to organize its output around a consistent processing model. A processing model is not merely desirable, it's essential..."

  • [March 15, 2002] "Tuning In to iTV: The Opportunities and Challenges of Developing Interactive Television Apps With XML." By John Papageorge (CEO, Media Overdrive). From IBM developerWorks, XML zone. February 2002. ['The dream of using your remote control to interact with television broadcast shows has finally become a reality, thanks to interactive television (iTV). Here, John Papageorge explores the opportunities and challenges of creating applications for the set-top box platforms (such as OpenTV, AOLTV, and Worldgate) that allow for interactive television.'] Also in PDF format.

  • [March 15, 2002] "XPERANTO: Bridging Relational Technology and XML." From International Business Machines Corporation, DB2 Developer Domain. By Catalina Fan, John Funderburk, Hou-in Lam, Jerry Kiernan, and Eugene Shekita (IBM Almaden Research Center, San Jose, CA 95120) and Jayvel Shanmugasundaram (Cornell University). [March 2002.] 9 pages. ['The cutting edge of data management research! The XPERANTO research project enables XML-based applications to leverage relational database technology by using XML views of existing relational data.'] "XML has emerged as the standard data-exchange format for Internet-based business applications. These applications introduce a new set of data management requirements involving XML. However, for the foreseeable future, a significant amount of business data will continue to be stored in relational database systems. Thus, a bridge is needed to satisfy the requirements of these new XML-based applications while still leveraging relational database technology. This paper describes the design and implementation of the XPERANTO middleware system, which we believe achieves this goal. In particular, XPERANTO provides a general framework to create and query XML views of existing relational data. One of the features provided by XPERANTO is the ability to create XML views of existing relational data. XPERANTO does this by automatically mapping the data of the underlying relational database system to a low-level default XML view. Users can then create application-specific XML views on top of the default XML view. These application-specific views are created using XQuery, a general-purpose, declarative XML query language currently being standardized by W3C. XPERANTO materializes XML views on demand, and does so efficiently by pushing down most computation to the underlying relational database engine. Another feature provided by XPERANTO is the ability to query XML views of relational data. This is important because users often desire only a subset of a view's data. Moreover, users often need to synthesize and extract data from multiple views. In XPERANTO, queries are specified using the same language used to specify XML views, namely XQuery. XPERANTO executes queries efficiently by performing XML view composition so that only the desired relational data items are materialized. In summary, XPERANTO provides a general means to publish and query XML views of existing relational data. Users always use the same declarative XML query language (XQuery) regardless of whether they are creating XML views of relational data or querying those views. ... XPERANTO exposes relational data as an XML view. Users can then query these XML views using a general-purpose, declarative XML query language (XQuery), and they can use the same query language to create other XML views. Thus, users of the system always work with a single query language In addition to providing users with a powerful system that is simple to use, the declarative nature of user queries allows XPERANTO to perform optimizations such as view composition and pushing computation down to the underlying relational database system." See also "XPERANTO: Publishing Object-Relational Data as XML" and "IBM Federated Database Technology," by Laura Haas and Eileen Lin. Also earlier: "IBM Spills Beans on Xperanto Database Initiative." General references: (1) "XML and Query Languages"; and (2) "XML and Databases."

  • [March 15, 2002] "New Windows Could Solve Age-Old Format Puzzle -- At A Price." By Mike Ricciuti. In CNET News.com (March 13, 2002). "To achieve the long-elusive goal of easily finding information hidden in computer files, Microsoft is returning to a decade-old idea. The company is building new file organization software that will begin to form the underpinnings of the next major version of its Windows operating system. The complex data software is meant to address a conundrum as old as the computer industry itself: how to quickly find and work with a piece of information, no matter what its format, from any location... [via] a unified data store... The company plans to include the first pieces of the new data store in next release of Windows, code-named Longhorn, which is scheduled to debut in test form next year... Microsoft's first -- and perhaps largest -- challenges will be internal: how to overcome the technical and organizational obstacles it encountered when it set out to solve the very same problem in the early 1990s. At that time, the company launched an ambitious development project to design and build a new technology called the Object File System, or OFS, which was slated to become part of an operating system project code-named Cairo. 'We've been working hard on the next file system for years, and--not that we've made the progress that we've wanted to -- we're at it again,' Ballmer said. While the Cairo project eventually resulted in Microsoft's Windows 2000 operating system, the file system work was abandoned because of complexity, market forces and internal bickering. 'It never went away. We just had other things that needed to be done'... Microsoft executives say the company plans to resurrect the OFS idea with the Longhorn release of Windows. 'This will impact Longhorn deeply, and we will create a new API for applications to take advantage of it,' Allchin said. He said bringing the plan back now makes sense because new technologies such as XML (Extensible Markup Language) will make it much easier to put in place. XML is already a standard for exchanging information between programs and a cornerstone of Microsoft's Web services effort, which is still under development..."

  • [March 14, 2002] "Web Services Architecture Using MVC Style. Access Services Statically and Dynamically At The Same Time." By Naveen Balani (Technical Analyst, Syntel India Ltd.). From IBM developerWorks, Web services. February 2002. ['The Model-View-Controller (MVC) pattern is fairly useful in software engineering of object-oriented applications. This article takes a look at how it can be applied to the call static or dynamic Web services.'] "Web services can be invoked statically using a WSDL service interface and service implementation documents, or dynamically by retrieving the service type definitions and the service implementation via UDDI. But until now, you couldn't do both at the same time. You can now do this using the Model View Controller pattern (or MVC); this architecture supports both dynamic and static Web services. This article is primarily a design exercise and assumes that you know about design patterns and the MVC system. The MVC paradigm is a way of breaking an application, or even just a piece of an application's interface, into three parts: the model, the view, and the controller. The model represents enterprise data and the business rules that govern access to and updates of this data. Often the model serves as a software approximation to a real-world process, so simple real-world modeling techniques apply when defining the model. A view renders the contents of a model. It accesses enterprise data through the model and specifies how that data should be presented. It is the view's responsibility to maintain consistency in its presentation when the model changes. This can be achieved by using a push model, where the view registers itself with the model for change notifications, or a pull model, where the view is responsible for calling the model when it needs to retrieve the most current data. A controller translates interactions with the view into actions to be performed by the model. In a stand-alone GUI client, user interactions could be button clicks or menu selections, whereas in a Web application, they appear as GET and POST HTTP requests. The actions performed by the model include activating business processes or changing the state of the model. Based on the user interactions and the outcome of the model actions, the controller responds by selecting an appropriate view... The MVC architecture has the following benefits: (1) Multiple views using the same model. The separation of model and view allows multiple views to use the same enterprise model. Consequently, an enterprise application's model components are easier to implement, test, and maintain, since all access to the model goes through these components. (2) Easier support for new types of clients. To support a new type of client, you simply write a view and controller for it and wire them into the existing enterprise model.." Article also in PDF format.

  • [March 13, 2002] "XRole: XML Roles for Agent Interaction." By Giacomo Cabri, Letizia Leonardi, and Franco Zambonelli. In Proceedings of the Third International Symposium "From Agent Theory to Agent Implementation", 16th European Meeting on Cybernetics and Systems Research, Vienna, Austria, April 3-5, 2002. "Engineering interactions is a very important issue in the design and development of Internet applications. The wideness, the openness and the uncertainty of the Internet environment call for appropriate methodologies. In this paper we propose XRole, a system that helps in dealing with such a kind of interactions in a modular and effective way. XRole is based on the definition of roles, intended as intermediaries between the application needs and the environment needs. XRole is implemented exploiting the XML language. An application example in the agent-negotiation area shows the effectiveness of the approach." Cited with the MOON Project papers [Mobile Object Oriented Environments]. See similarly "Role-based Infrastructure for Agents", in Proceedings of the 8th IEEE Workshop on Future Trends of Distributed Computing Systems (FTDCS 2001), Bologna, Italy, October 2001. ['...we are exploring the developing of a system for the definition of roles and their concrete exploitation in implemented applications. Such system should address interoperability to suit the openness of the Internet. We are planning to use XML for the definition of roles and XSL for the translation into documentation and real code...' (cache)

  • [March 13, 2002] "The Semantics of DQL." By Ian Horrocks and Sergio Tessaris. March 4, 2002. ['preliminary proposal for a formalisation of the semantics of DQL'] "... As DAML+OIL's RDF syntax is rather verbose we will use the standard DL abstract syntax..." Note: DQL and Description Logics are relevant to the W3C Ontology Web Language (OWL). See also the accepted papers for the 2002 International Workshop on Description Logics (DL2002), including "Two Proposals for a Semantic Web Ontology Language" (Peter F. Patel-Schneider) and "Combining XML and DL for content-based manipulation of documents" (Rim Alhulou and Amedeo Napoli). See also: (1) "Semantic Web Chalk Talk: Amateur Introduction to Description Logics"; and (2) "Description Logics Markup Language (DLML)."

  • [March 13, 2002] "WS-I: Trying to Rise Above the Fray." By Michael Vizard and Steve Gillmor. In InfoWorld (March 12, 2002). ['As the standoff between Sun Microsystems and the founders of the WS-I (Web Services Interoperability Organization) looks like it's about to become a prolonged debate, executives from IBM and Microsoft in a rare joint public relations effort are making their case for WS-I. Sun insists that it should be given a position on the organization's board of directors because that is where the organization's agenda will be determined. The founders of the organization don't necessarily agree, given Sun's historic foot-dragging on XML and Web services in general. In an interview with InfoWorld Editor in Chief Michael Vizard and Test Center Director Steve Gillmor, Bob Sutor, IBM director of e-business standards strategy, and Neil Charney, Microsoft director of .Net platform strategy, claim that the organization's primary goal is to promote Web services interoperability rather than industry turmoil.'] [Charney:] "The first thing to note is that WS-I is not a standards body. Think of it as an organization that's really a standards integrator. From the perspective of a developer who's building Web services, as these things start to really move out into production, the specifications are being generated by a variety of standards bodies. So it's important to have a place for developers to go where a circle has been drawn around the various standards that are out there for this thing called Web services. It's really an implementor's forum, if anything. It's really an attempt to respond to customers telling us that they want to have some sense of confidence that the interoperability of Web services can be assured. The thing they've made very clear is that they want to see leadership and they want to see the various industry leaders align around a shared and common definition of Web services. Customers are looking for guidance and clarity, because there are a variety of standards efforts and standards bodies, and there's a tendency in our industry to not even have the conversation." [Sutor:] "We're not going to put 25 standards all in the W3C or in OASIS. There needs to be some sort of central industry community that helps to make sense of all that... Some subset of every community is always concerned with something. We went out with 55 companies supporting WS-I. There were in fact nine founders, including Oracle, who has done a little bit in the Java community, as well as BEA. We really went out of our way to make sure that we could get a lot of companies in there, and we invited a tremendous number of them. Sun will have to make up its mind about this. Sun is more than welcome. We are working our way through this list of 400-plus inquiries..." See: "Web Services Interoperability Organization (WS-I)."

  • [March 12, 2002] "Group Looks to Join Life Sciences With Web Services." By Ashlee Vance. In ITworld.com News (March 12, 2002). "A consortium of technology heavyweights and life sciences bodies put the finishing touches Monday on the group's agenda designed to make sending research data between organizations easier After several months of work, representatives from Sun Microsystems Inc., IBM Corp., Millennium Pharmaceuticals, the Whitehead Institute and others have finalized the organizational structure of The Interoperable Informatics Infrastructure Consortium (I3C). With board members, an agenda and a Washington D.C.-based contact in place, the group hopes to accelerate its goal of sending complex scientific data across disparate computing networks. The I3C is looking to mimic some of the work being done by airlines, Web sites and phone companies to link parts of different companies' data infrastructure, said Tim Clark, vice president of informatics at Millennium Pharmaceuticals and lead member of I3C. This concept, known as Web services, has emerged as a hot topic as companies try to make it possible, for example, for a consumer to buy an airline ticket and then have the dates of the flight automatically plugged into the consumer's Web-based calendar. Some of the key parts of this process from a computing standpoint are the Java programming language, XML (extensible markup language), SOAP (Simple Object Access Protocol) and UDDI (Universal Description, Discovery, and Integration), standards for creating a consistent set of technologies for exchanging data between various organizations. Members of I3C will look into the roles that Java, XML, SOAP and UDDI can play in making legacy applications and large sets of data used within one organization more accessible to the life sciences industry as a whole, said Jill Mesirov, chief information officer and director of bioinformatics and computational biology at the Whitehead Institute Center for Genome Research... Members of I3C were quick to stress that they do not want to be a standards body, creating protocols and regulations for work in this area. Instead, the group hopes to try various methods for opening up data to more people and present models of what works best."

  • [March 11, 2002] "Using RDF with SOAP. Beyond Remote Procedure Calls." By Uche Ogbuji (Principal Consultant, Fourthought, Inc.). From IBM developerWorks, Web services. February 2002. ['This article examines ways that SOAP can be used to communicate information in RDF models. It discusses ways of translating the fundamental data in RDF models to the SOAP encoding for PC-like exchange, or for directly passing parts of the model in RDF/XML serialized form.'] "SOAP is a transport protocol for carrying XML payloads over lower level Internet protocols. Specifications of the transport prior to 1.2 built in a suggested encoding of XML that is geared towards the serialization of programming language constructs. Such encodings are the staple of what is known as remote procedure call (RPC) systems, which have the common aim of making requests to remote computers look just like local procedure invokations. Other examples of RPC encodings are External Data Representation (XDR), from 'classic' RPC (and defined in RFC 1014), and Common Data Representation (CDR) from CORBA. As a result of bundling an encoding with such relatives, SOAP took on a decidedly application-programming feel, and its usefulness for general data exchange seemed suspect. These early flavors of SOAP generated much controversy. Firstly, mixing transport and data encoding specifications seems to be a very messy approach to communications, and seems to fly in the face of layered protocols that have been the practice in networking for decades. After all, the specification for HTML mark-up is not embedded into the HTTP specification. Secondly, choosing an RPC-like encoding for pre-eminence puts SOAP in an odd spot; it has little more expressive power than pre-XML RPC mechanisms, yet it is practically guaranteed to be less efficient because of XML's verbosity and the more generic architectures of HTTP, SMTP, and the like. It would seem that the only advantage SOAP brought as a next-generation RPC was to unify the Microsoft and CORBA camps; this is important, but certainly not what SOAP appeared to be promising. One important down-side consequence of SOAP-as-RPC is that such a system is completely unsuitable for the next-generation-EDI ambitions of Web services in general. If Web services are to become the new way businesses communicate over networks, they would seem to need a transport mechanism that communicates at the level of business and legal requests, rather than at the level of programming language APIs. And surely enough, the ebXML initiative, whose ambition is to use XML to craft a system for international electronic business communication, originally balked at using SOAP, as did a few other influential organizations... There are other approaches and ideas when it comes to how SOAP and RDF can inter-operate, and indeed it is a topic of constant interest as RDF users discover Web services and vice versa... Certainly more generic systems for serializing XML-based data will only enrich the world Web services." Also in PDF format. See "Resource Description Framework (RDF)."

  • [March 11, 2002] "Path Predicate Calculus: Towards a Logic Formalism for Multimedia XML Query Languages." By Peiya Liu, Amit Chakraborty, and Liang H. Hsu. In Markup Languages: Theory & Practice 3/1 (Winter 2001), pages 93-106 (with 22 references). "Many document query languages are currently proposed for specifying document retrieval. But the formalisms for document query languages are still underdeveloped. An adequate formalism is critical for query language development and standardization. Classical formalisms, relational algebra and relational calculus, are used to evaluate the expressive power and completeness of relational query languages. Most relational query languages embed within them either one or a combination of these classical formalisms. However, these formalisms cannot be directly used for tree document query languages due to different underlying data models. In this paper, we propose a logic formalism, called path predicate calculus, based on a tree document model and paths for querying XML. In the path predicate calculus, the atomic logic formulas are element predicates rather than relation predicates as in relational calculus. In this path predicate calculus, queries are equivalent to finding all proofs of the existential closure of logical assertions in the form of path predicates that document elements must satisfy." General references: "XML and Query Languages."

  • [March 11, 2002] "Complexity of Context-Free Grammars with Exceptions and the Inadequacy of Grammars as Models for XML and SGML." By Romeo Rizzi (Facoltà di Scienze, Dipartimento di Informatica e Telecomunicazioni, Università degli Studi di Trento). In Markup Languages: Theory & Practice 3/1 (Winter 2001), pages 107-116 (with 19 references). "The Standard Generalized Markup Language (SGML) and the Extensible Markup Language (XML) allow authors to better transmit the semantics in their documents by explicitly specifying the relevant structures in a document or class of documents by means of document type definitions (DTDs). Several authors have proposed to regard DTDs as extended context-free grammars expressed in a notation similar to extended Backus-Naur form. In addition, the SGML standard allows the semantics of content models (the right-hand side of productions) to be modified by exceptions. Inclusion exceptions allow named elements to appear anywhere within the content of a content model, and exclusion exceptions preclude named elements from appearing in the content of a content model. Since XML does not allow exceptions, the problem of exception removal has received much interest recently. Motivated by this, Kilpeläinen and Wood have proved that exceptions do not increase the expressive power of extended context-free grammars and that for each DTD with exceptions, we can obtain a structurally equivalent extended context-free grammar. Since their argument was based on an exponential simulation, they also conjectured that an exponential blow-up in the size of the grammar is a necessary devil when purging exceptions away. We prove their conjecture under the most realistic assumption that NP-complete problems do not admit non-uniform polynomial-time algorithms. Kilpeläinen and Wood also asked whether the parsing problem for extended context-free grammars with exceptions admits efficient algorithmic solution. We show the NP-completeness of the very basic problem: given a string w and a context-free grammar G (not even extended) with exclusion exceptions (no inclusion exceptions needed), decide whether w belongs to the language generated by G . Our results and arguments point up the limitations of using extended context-free grammars as a model of SGML, especially when one is interested in understanding issues related to exceptions." A related paper was published as IRST Technical Report 0101-05, Istituto Trentino di Cultura, January 2001 (December 2000: Centro per La Ricerca Scientifica e Tecnologica, Istituto Trentino di Cultura). See the original Postscript and the online abstract. Related references: (1) "Use (and Non-use) of Exceptions in DTDs"; (2) "SGML/XML and Forest/Hedge Automata Theory." [cache]
  • [March 11, 2002] "A Simple Property Set for Contract Architectural Forms." By Sam Hunting. In Markup Languages: Theory & Practice 3/1 (Winter 2001), pages 73-92 (with 14 references). "Because the contract is ubiquitous in commercial life (and thus in life), applications for a contract property set are almost too numerous to be worth mentioning. Therefore, I will simply list a few here: (1) On-line, ready-to-use, boilerplate contracts; (2) Specification for conversion operations; (3) Lending equity to XSL transforms; (4) Electronic commerce; (5) Enterprise modeling; and (6) Semantic overlays to legacy procedural code. These applications may well require different architectures conforming to the contract property set... Conclusion: "Contracts, because of their power and ubiquity, seem a natural target for an international standards effort using property sets. Property sets provide a simple and very powerful mechanism for representing such complex, real-world relationships." See: "Architectural Forms and SGML/XML Architectures."

  • [March 11, 2002] "The Relationship Between General and Specific DTDs: Criticizing TEI Critical Editions." By David J. Birnbaum (Associate Professor and Chair of the Department of Slavic Languages and Literatures, University of Pittsburgh. Email: djbpitt+@pitt.edu; WWW). "The present study discusses the advantages and disadvantages of general vs specific DTDs at different stages in the life of an SGML document based on the example of support for textual critical editions in the TEI. These issues are related to the question of when to use elements, attribute, or data content to represent information in SGML and XML documents, and the article identifies several ways in which these decisions control both the degree of structural control and validation during authoring and the generality of the DTDs. It then offers three strategies for reconciling the need for general DTDs for some purposes and specific DTDs for others. All three strategies require no non-SGML structural validation and ultimately produce fully TEI-conformant output. The issues under consideration are relevant not only for the preparation of textual critical editions, but also for other element-vs-attribute decisions and general design issues pertaining to broad and flexible DTDs, such as those employed by the TEI... Conclusion: "Any of the three strategies discussed above (processing a modified TEI DTD with respect to TEIform attribute values, transformation of a custom DTDs to a TEI structure, and architectural forms) provides a solution to the issues posed by a score-like edition. Specifically, these strategies all permit much greater structural control than is available in the standard TEI DTDs, rely entirely on SGML for all validation, and produce a final document that is fully TEI-conformant." See "Text Encoding Initiative (TEI) - XML for TEI Lite."

  • [March 11, 2002] " Are Politics Eclipsing Sun from the Web Services Scene?" By Eric Knorr and David Berlind. In ZDNet AnchorDesk (March 11, 2002). "A month ago, Microsoft and IBM formed the Web Services Interoperability (WS-I) organization, an industry consortium dedicated to promoting best practices for Web services. It's hard to overstate the WS-I's importance -- mainly because it's the first major industry organization devoted to Web services and boasts dozens of titans as members, including Accenture, BEA Systems, Compaq, Ford, Fujitsu, Hewlett-Packard, Oracle, Qwest, SAP, United Airlines, and VeriSign. More practically, the WS-I is important because ensuring that Web services interoperate and conform to standards is absolutely vital. If they don't -- and Web services enablers or providers factionalize -- the whole proposition falls apart. Too bad the WS-I has already gotten off on the wrong foot. The reason: Sun Microsystems hasn't joined yet -- and the circumstances surrounding its absence smack of hardball politics. Although Bill Gates derided Sun for not signing on during his February 13 introduction of Microsoft's Visual Studio.Net, Sun was invited to join just days before--and then, according to Sun, only as a contributor, not as a board member or founder. In fact, Sun was only informed of the WS-I's existence by IBM on the evening of February 4 (see Web services push attracts a crowd) -- nine days before Gates's comments, and within 48 hours of the WS-I's February 6 launch. That's hardly enough time to do the necessary due diligence when a chief competitor approaches you about joining an industry group, let alone enough time for competitors to credibly rattle their sabers... According to Sun spokesperson Russell Castronovo, Sun sent the WS-I a request to become a board member three weeks ago and still hasn't received a response... The WS-I is reportedly holding its first board meeting this week..." See: "Web Services Interoperability Organization (WS-I)."

  • [March 08, 2002] "Future Development of ISO 639." By Håvard Hjulstad (Convener of ISO/TC37/SC2/WG1 'Coding systems'). Document reference: ISO/TC37/SC2/WG1 N89. Date: 2002-03-04. 4 pages. "ISO 639-1 (alpha-2 code)1 and ISO 639-2 (alpha-3 code)2 are designed to meet the needs of terminology and library applications. The two parts of the standard and the coordinated effort to develop these two parts represent a vast step toward a universally acceptable set of identifiers for linguistic units. In particular the library community has a genuine need to keep the set of identifiers stable. There are at least a nine-digit number of records using these identifiers. Although there is broad acceptance that the present parts of ISO 639 will be developed further, this development needs to be conservative. For the ICT industry and for language resource and language technology applications there is also a genuine need to expand the current set of language identifiers and language identification mechanisms greatly. There may be a need for identifiers for 15-20 times as many linguistic units as the current tables provide. ISO/TC37 is ready to initiate projects to meet these needs. The projects will be carried out within the framework of ISO/TC37/SC2/WG1. It is, however, recognized that it may be necessary to utilize working procedures and organizational structures that are different from most projects under ISO/TC37 and other ISO committees. It will not be possible to meet the requirements as to timeliness without substantial external funding..." See also: (1) the news item of 2002-03-08: "ISO Working Group on Coding Systems Outlines New Language Encoding Initiatives"; (2) "Language Identifiers in the Markup Context." [source .DOC, cache]

  • [March 08, 2002] "Toward a Model for Language Identification. Defining an Ontology Of Language-Related Categories." By Peter G. Constable (SIL Non-Roman Script Initiative, NRSI). Document reference: ISO/TC37/SC2/WG1 N91. February 27, 2002. 34 pages. Draft of paper for the 21st International Unicode Conference, Dublin, Ireland, May 2002. "To deal with the diverse language identification needs, people are looking to the ISO 639 family of standards, which provide over 400 different language identifiers. For those working with hundreds or thousands of less well-known languages, however, this number falls well short of what is needed. Similarly, these standards do not provide mechanisms that accommodate intralanguage distinctions involving parameters such as script. Some protocols have some ability to overcome the limitations in ISO 639 by making reference to the derivative standard provided in RFC 3066, which allows for the creation of tags that add additional qualifiers to the ISO 639 codes, or for the registration of entirely original identifiers. There are potential concerns with introducing a greatly expanded set of tags under the terms of RFC 3066, however, since it could quickly lead to considerable confusion, for reasons I will describe momentarily... This paper is intended to explore what an adequate model of 'language' identification should look like. In particular, it aims to describe the ontology for which 'language' identifiers are needed; that is, the different kinds of language-related entities in the real world that are relevant for IT purposes, and the relationships between them. In view of this ontology, I will also attempt to derive implications for an adequate system of 'language' identifiers to be used in IT applications... in the view presented here, we are dealing with multiple types of categories, all of which are related to language per se but some of which are also somehow different. In other words, not all of the distinctions for which we use 'language' identifiers are between languages. Thus, in making reference to 'language' identification, what is really meant is identification with regard to various types of language-related categories..." See also: (1) the news item of 2002-03-08: "ISO Working Group on Coding Systems Outlines New Language Encoding Initiatives"; (2) "Language Identifiers in the Markup Context." [source]

  • [March 08, 2002] "Tale of Two Rodneys." By Steve Gillmor. In InfoWorld Volume 24, Issue 10 (March 08, 2002), page 66. "... Ed Julson, Sun director of product management Java and XML technologies, told InfoWorld that WS-I 'is the exact opposite approach to the way standards should be developed.' Rather than submitting ideas or early technologies that may or may not be collectively tuned or even completely transformed into standards from where the technology emerges, Julson suggested Microsoft and IBM are developing the technology themselves then trying to push that through the standards body, more or less intact. Julson says WS-I goes a long way backward to proprietary technologies disguised as standards. Rich Green, Sun vice president and general manager Java software and XML technologies, sees it differently. 'Ed works in this organization and he's certainly entitled to his opinion. I'm in charge of defining Sun's strategy and approach with respect to [WS-I], so I'm giving you the actual Sun answer.' Green is supportive of WS-I, or at least a WS-I that includes Sun. 'If we have any concern at all, it is in fact whether or not the mandate that [WS-I] has defined for itself is broad and stringent enough to ensure interoperability. We do have some questions about the model of self-certification, and we're concerned that this body, if it's going to take out this piece of industry real estate, that it actually has enough teeth to ensure interoperability..." See "Web Services Interoperability Organization (WS-I)."

  • [March 08, 2002] "Securing Web Services with Single Sign-On." By Zdenek Svoboda. In TheServerSide.com (March 2002). "In this part of the Web services tutorial we will learn how to secure applications with a single sign-on utlility. We will introduce the simple scenario where the client gets the authentication token from the SSO service and appends it to the outcoming request. The receiving party can validate the incoming token by calling the SSO service. We will also shown how SAML, the standard format for the security information exchange, can enhance the SSO architecture... The basic idea of the single sign-on security architecture is to shift the complexity of the security architecture to the so-called SSO service and thus release other parts of the system from certain security obligations. In the SSO architecture, all security alghorithms are found in the single SSO server which acts as the single and only authentication point for a defined domain. Thus, there is a second benefit to an SSO approach to autnentication/registration: a user has to sign-on only once, even though he may be interacting with many different secure elements within a given domain. The SSO server, which can itself be a Web service, acts as the wrapper around the existing security infrastructure that exports various security features like authentication and authorization... An advanced approach permits the token itself to contain some valuable security information that allows validation without having to call the SSO server each time. The token contains the authentication or authorization information. This information is 'signed' by the SSO server, so provided the token recipient trusts this server, it doesn't have to do any further verification... There is a new standard for exchanging security-related information in XML called Security Assertions Markup Language (SAML). This is currently being completed at OASIS and its first release is expected at the time of this article's publishing. Basically, the security information described by SAML is expressed in the form of assertion statements about security subjects (e.g. users, machines or services). SAML defines the protocol by which the service consumer issues the SAML request and the so-called SAML authority returns the SAML response with assertions. There are three kinds of assertions: The Authentication statement informs about the authentication of a particular subject in a specific time and scope. The Authorization decision allows or denies a subject access to a specific resource. The Attributes further qualify the subject (e.g. credit line info, citizenship etc.). The use of SAML isn't limited to the SSO scenario. It can be used in a much broader sense. If our Web services applications understand SAML it shouldn't be difficult to flexibly reconfigure the security architecture without lenghty re-coding. You can take a look at a the SAML authorization request below. Notice that it contains the user credentials (username and encrypted password in our case) and some descriptions like response requirements, credentials types, etc...." See: "Security Assertion Markup Language (SAML)."

  • [March 08, 2002] "Portal Standards." By Thomas Schaeck and Stefan Hepper. In TheServerSide.com (February 2002). "With the emergence of an increasing number of enterprise portals, a variety of different APIs for portal components, so-called portlets, have been created by different vendors. Similarly, different mechanisms for invocation of remote visual components are being introduced by various vendors. The variety of incompatible interfaces creates problems for application providers, portal customers, and portal server vendors. To overcome these problems, the Java Portlet API and Web Services for Remote Portals (WSRP) standards will provide interoperability between portlets and portals as well as interoperability between portals and visual, user-facing web services. With these standards in place, application providers or portal customers can write portlets, or visual, user-facing web services independent of a specific enterprise portal product. The Java Portlet API will be compatible with WSRP and therefore allow to publish portlets as web services... Web Services for Remote Portals (WSRP) are visual, user-facing web services that plug and play with portals or other applications. They are designed to enable businesses to provide content or applications in a form that does not require any manual content- or application-specific adaptation by consuming portals. As WSRP includes presentation, WSRP service providers determine how their content and applications are visualized for end-users and to which degree adaptation, transcoding, translation, etc. may be allowed. WSRP services can be published into public or corporate service directories (UDDI) where they can easily be found by portals that want to display their content. Web application deployment vendors can wrap and adapt their middleware for use in WSRP-compliant services. Vendors of intermediary applications can enable their products for consuming WSRP services. Using WSRP, portals can easily integrate content and applications from internal and external content providers. The portal administrator simply picks the desired services from a list and integrates them, no programmers are required to tie new content and applications into the portal. To accomplish these goals, the WSRP standard will define a web services interface description using WSDL and all the semantics and behavior that web services and consuming applications must comply with in order to be pluggable; it will also define the meta-information that has to be provided when publishing WSRP services into UDDI directories. The standard allows WSRP services to be implemented in very different ways, be it as a Java/J2EE based web service, a web service implemented on the .NET platform, or a portlet published as a WSRP service by a portal..." See: "Web Services for Remote Portals (WSRP)."

  • [March 08, 2002] "Georgia Portal Driven by Sun." By Dibya Sarkar. In Federal Computer Week (March 07, 2002). "Georgia officials recently signed a $7.3 million contract with Sun Microsystems Inc. to develop an enterprise portal that uses a Web services architecture that will mean greater interoperability among state agencies and integration of current applications. Sun's Web services architecture allows use of XML (Extensible Markup Language) and SOAP (Simple Object Access Protocol), enabling the state to integrate and link its legacy systems, rather than replace them. The platform also allows the state to use new standards-based products and emerging technologies from other vendors. As is the trend among portals, Georgia's will be intentions-based for residents and businesses -- meaning it will be organized around services and information that users want -- rather than agency-based. Several Georgia Department of Motor Vehicle Safety applications are being developed in conjunction with the portal. A projected 400,000 residents are expected to use driver's license renewal applications -- online and via interactive voice response channels -- slated for a July launch. Other DMV applications include being able to take a written driver's test, renew tags and check traffic conditions. Sun selected Atlanta-based EzGov Inc., which specializes in technology for governments, to provide the motor vehicle applications... In a second phase, GTA has requested $8.7 million to help caseworkers from state and local governments and nonprofit agencies better deliver children and family services through a seamless system, according to a press release..."

  • [March 08, 2002] "XML Set to Boost Biometrics." By Margaret Kane. In CNET News.com (March 07, 2002). "A standards group is hoping that a key Web language will provide a standard way for computers and technology to describe human characteristics. The Organization for the Advancement of Structured Information Standards, or OASIS, said Thursday that it has formed a technical committee to develop an XML standard for biometrics... The proposed XCBF standard will define information such as DNA, fingerprints, iris scans and hand geometry for use in identification and authentication. Its basis in XML will help facilitate the transfer of biometric information across the Internet, the organization said..." See references and description in "XML Common Biometric Format (XCBF)."

  • [March 08, 2002] "Database Vendors Keep the XML Faith." By Tom Sullivan and Paul Krill. In InfoWorld (March 07, 2002). "Database archenemies IBM and Oracle are at it again, and the battle over how to store and manage XML data rages on. Both companies are set to issue new versions of their relational databases in the near future, with Oracle planning a May release and IBM slating the next iteration of DB2 for the middle of the year, and both are eyeing XML as a means to extend their data management strategies. Oracle is planning to boost support of XML come May with Oracle9i Release 2, which will be a 'fully unified XML and relational database,' said Robert Shimp, vice president of database product marketing at Oracle, in Redwood Shores, Calif. 'Not only can you, in an Oracle database, store all the traditional transactional processing data, but you can also store full XML documents.' Although Oracle has had basic support for XML since early 1999, support planned for Release 2 will be much more expansive. 'What you can do is with a single SQL query access both the XML and relational data,' Shimp said. For example, a technical support person might field a call about a problem with a specific product, Shimp said. The support person might want to access information about the product as well as credit memos and internal product documents. 'You can look up that information simultaneously with a single query, whereas in the past you would have had to search different databases to find this information,' Shimp said. Oracle's XML work is based on the W3C XML schema data model, to provide its database customers with a standard way to function with applications, Shimp said. Currently, Oracle's database has full XML parsers, an XML schema processor, and a full SQL XML utility for managing XML data. But unified SQL queries are not possible with current iterations of the database, Shimp said. As a part of its strategy for entering what it calls the next wave of data management, IBM is taking a three-pronged approach and working to offer a database system that is capable of managing objects, relational data, and XML documents. Big Blue, based in Armonk, N.Y., plans to extend the core database engine currently in DB2 to include support for XML, with technologies such as new index structures that relate to XML, according to Nelson Mattos, an IBM distinguished engineer and director of IBM's information integration group. Although IBM has supported both objects and relational data in DB2 for some time now, the addition of XML will enhance that support. 'XML gives you a very flexible model to manage all the metadata around objects,' Mattos said... By supporting XML, relational, and object data, IBM's database will be able to interact with XML documents; structured information, such as rows and columns; and data written in object-oriented programming languages, namely Java and C++. To that end, support for the W3C's XML Query standard means that an XML application only needs to know XML Query to get at data residing in DB2..."

  • [March 08, 2002] "Learning C# XML." By Niel Bornstein. From XML.com. March 06, 2002. ['One in a series of articles examining the XML APIs provided by Microsoft's C# language. Many XML programmers currently use Java as their language of choice. Niel Bornstein approaches the C# XML APIs from a Java programmer's perspective. The first installment uses as an example the porting of a SAX-based Java application to use the C# XmlReader class.'] "In his opening keynote at the IDEAlliance XML 2001 Conference in Orlando, Florida, in December, James Clark said: 'Just because it comes from Microsoft, it's not necessarily bad'. With that in mind, I decided to explore what C# has to offer to the Java-XML community. I've been watching the continuing Microsoft story with a vague combination of intrigue and apprehension. You almost certainly know by now that, due to an awkward combination of hubris and court orders, Microsoft has stopped shipping any Java implementation with Windows, choosing instead to hitch its wagon to a star of its own making, C#. As a consumer, I'm not sure whether I like Microsoft's business practices. As a software developer, however, I'm interested in learning new languages and technologies. I've read enough to see that C# is enough like Java to make an interesting porting project. Even if I never write another line of C# code, there is certainly a lot to be learned from how Microsoft has integrated XML into its .NET platform. In this series I'll be porting a few small XML applications, which I've hypothetically written in Java, to C# in order to see if I can improve my Java programming... The first Java application to port to C#, which I call RSSReader, does something that most XML programmers have done at some point: read in an RSS stream using SAX and convert it to HTML. For our purposes, I'll expect to be reading an RSS 1.0 stream using JAXP and outputing out an HTML stream using java.io classes. We'll see that this example ports nicely to the C# XmlReader class. Future examples will convert JDOM to the C# XmlDocument and XmlNode classes, as well as experimenting with ports from an XML databinding framework to ADO.NET. There's a lot to ADO.NET, and I'll discuss some of that as well... unlike Java's XML libraries, all of System.Xml is provided by Microsoft. This means that, among other things, there is a consistent interface and a consistent set of tools for all your XML needs. No need to shop around for parsers and SAX implementations... If you don't want to write either an event loop or callbacks, the read-only, forward-only, stream-based model might not be for you; you might prefer a whole-document model (like, say, DOM). In that case, XmlReader will not appeal to you any more than SAX does. There is another set of tools in C#, starting with XmlDocument, which we'll discuss in the next article, which gives you all the power of a document stored in memory, plus the added convenience of building on what you've already learned."

  • [March 08, 2002] "Creating Efficient MSXML Applications." By Ben Berck. From XML.com. March 06, 2002. ['A report outlining real-life experiences with Microsoft's MSXML processor. Ben Berck describes how his development team turned a resource intensive XML processing application into an efficient scalable one.'] "What happens when you need to parse XML files on the order of 122 MB, with each file originating from a different source application? That was the dilemma my team faced when presented with the challenge of developing our company's new server-based XML rendering engine, along with a proof-of-concept Web site that would allow anyone to upload files and convert them to XML, SVG, HTML, etc. In short, we needed to be able to accommodate everything from Microsoft Word documents to Quark files, none of them small... We first encountered the Big O issue when trying to parse a large XML file using a parser that implements the XML DOM. Such an approach loaded the entire XML file into a memory resident tree structure. This used O(n) time to read the file and O(n) space to store it, which -- at first glance -- seemed acceptable. However, there was more involved than just loading the document. We still needed to write code to read the DOM, analyze it, and carry out tasks. We assumed this analysis, such as scanning for a particular element, would take O(n) time. Surprisingly, we found it to be O(n2) in practice, which is not acceptable... we found that once you hook up MXXMLWriter with a proper implementation of IStream, the rest of the program stays the same. Every time MXXMLWriter accumulates 4,096 Unicode characters (which occupy 8 KB of memory), it invokes the Write method of IStream, which converts the buffer to UTF-8 (which takes another 3.3 KB) and writes it to the file. Regardless of the size of the file or the subdocuments, this implementation will consume less than a dozen kilobytes of memory at a time. That's what you call O(1) space, and it means you'll now have to look elsewhere for the bottleneck on your busy server... I have included sample code for the IFileStream class, which implements IStream, as well as the code that converts a buffer of Unicode characters into UTF-8; if you want to see our XML rendering server in action, visit http://www.createxml.com...

  • [March 08, 2002] "Reading Multiple Input Documents." By Bob DuCharme. From XML.com. March 06, 2002. ['Bob explains how an XSLT script can read multiple input documents with the help of the document() function.'] "When you run an XSLT processor, you tell it where to find the source tree document -- probably in a disk file on a local or remote computer -- and the stylesheet to apply to it. You can't tell the processor to apply the stylesheet to multiple input documents at once. The document() function, however, lets the stylesheet name an additional document to read in. You can insert the whole document into the result tree or insert part of it based on a condition described by an XPath expression. You can even use this function with the xsl:key instruction and key() function to look up a key value in a document outside your source document. To start with a simple example, we'll look at a stylesheet that copies one document and inserts another into the result document..."

  • [March 08, 2002] "All That is Solid Melts Into Air." By Kendall Grant Clark. From XML.com. March 06, 2002. ['Kendall Clark focuses on the ethereal nature of technology. Just when you thought we'd all agreed that HTTP and XML were good things, people come along and challenge the foundation on which we're all building.'] "... One of the newest, most faddish drugs is 'Web Services', which most of the largest corporations in the industry, including IBM, Microsoft, and Sun, are pushing vigorously. Breathless marketeers are calling 'Web Services' the next 'Web revolution', piling high the myths and overblown expectations. And yet, as is too often the case, it's not entirely clear what Web Services is... or are -- the indecision seems to go all the way down to the grammar: is 'Web Services' one thing, an 'architectural vision', perhaps, or is it a lot of disparate or related things?... Eric van der Vlist questioned the extensibility of XML, calling it a myth. As van der Vlist put it, 'XML is based on trees which are not the most extensible structures (compared to tables or triples). If you extend a tree you are likely to break its structure (and existing applications). I would say that trees grow but are not "extended".'..."

  • [March 08, 2002] "The Ontopia Knowledge Suite: An Introduction." Ontopia white paper for the Ontopia Knowledge Suite version 1.3. March 2002. 19 pages. "This white paper gives a quick introduction to version 1.3 of the Ontopia Knowledge Suite (OKS), describing the architecture, functionality, and composition of the suite. There is also some discussion of the possible usage areas of the suite. A basic understanding of the main concepts of topic maps is assumed; for explanations of these, please see the various topic map introductions available on our web site at http://www.ontopia.net/topicmaps... The Ontopia Topic Map Engine is what topic map applications and products use to work with topic maps. This SDK lets applications load topic maps from XML documents, store topic maps in databases, modify and access topic maps, and generally do all an application may need to do with a topic map. The engine has a core topic map API which all applications use to access topic map data, regardless of where those data are stored. Thus, whether the topic map is in-memory, in a database, or a virtual view is all the same to the application... The Query Engine is an implementation of the tolog query language for topic maps. This language can query topic maps for topics of specific types, which participate in certain combinations of associations, and supports inference rules. You can sort query results, and there is also support for counting query matches, and sorting by counts. Using tolog allows complex retrieval operations to be expressed compactly and easily, making it easier to develop and maintain applications. tolog is not a standardized query language, but is provided while ISO completes its standardized TMQL query language. Once completed, Ontopia will provide full support for TMQL... The Full-Text Integration allows topic maps to be indexed and then searched for topics by their names and the contents of their occurrences. This can be very helpful for users new to topic maps who need to find something quickly in a topic map. The Java-based search engine Lucene comes bundled with the integration. Lucene is open source, powerful, robust, scalable, and lightning fast. The Schema Tools are an implementation of the Ontopia Schema Language (OSL), which is a schema language for topic maps that allows the expression of constraints on a topic map. For example, it can be used to say that composers must have only a single name, they must have a date of death and of birth, and they must have composed at least one opera. Using the Schema Tools applications can easily validate whether or not the topic maps they work with follow the prescribed topic map structure. Schemas are also useful as compact and precise documentation of the structure of a topic map..." Paper also available in PDF format. See the news item of 2002-03-08: "Ontopia Knowledge Suite Supports Query and Schema Tools for Topic Maps." [cache]

  • [March 07, 2002] "The OASIS XML-Based Access-Control Markup Language (XACML)." Committee Draft from the OASIS Extensible Access Control Markup Language Technical Committee ("definng an XML specification for expressing policies for information access over the Internet"). March 08, 2002. 37 pages. Document identifier: 'draft-xacml-v0.10'. Location: http://www.oasis-open.org/committees/xacml/docs. Edited by Tim Moses (Entrust) and Simon Godik (Simon Godik). Contributors: Anne Anderson (Sun Microsystems), Bill Parducci (Bill Parducci), Carlisle Adams (Entrust), Ernesto Damiani (University of Milan), Hal Lockhart (Entegrity), Ken Yagen (Crosslogix), Michiharu Kudo (IBM, Japan), Pierangela Samarati (University of Milan), Polar Humenn (University of Syracuse), Sekhar Vajjhala (Sun Microsystems). Posted by Tim Modes to the XACML mailing list. Send comments to: xacml-comment@lists.oasis-open.org. "This specification defines the syntax and semantics for XML-encoded access control rule statements and policy statements. The XACML schema is an extension schema for SAML... The context and schema of XACML are described in two models that describe different aspects of its operation. These models are: the data-flow model and the policy language model... Some of the data-flows shown [in the diagram] may be facilitated by a repository. For instance, the communications between the PDP and the PIP may be facilitated by a repository, or the communications between the PDP and the PRP may be facilitated by a repository or the communication between the PAP and the PRP may be facilitated by a repository. The XACML specification is not intended to place restrictions on the location of any such repository, or indeed to prescribe a particular communication protocol for any of the data-flows. The model operates by the following steps. (1) PAPs write rules and make them available to the PRP. From the point of view of an individual PAP, its rule statements may represent the complete policy for a particular target. However, the PRP may be aware of other PAPs that it considers authoritative for the same target. In which case, it is the PRP's job to obtain all the rules and (if necessary) use a PMP to remove any conflict amongst those rules and combine the rules in accordance with a meta-policy. The result should be a self-consistent rule statement. (2) The PEP sends an authorization decision request to the PDP, in the form of a SAML request. The decision request contains some or all of the attributes required by the PDP to render a decision, in accordance with policy. (3) The PDP locates and retrieves the policy statement applicable to the decision request from the PRP. (4) The PRP returns the complete policy to the PDP in the form of an XACML rule statement or policy instance. The PDP ensures that the decision request is in the scope of the policy statement or rule statement... [#5 - #9]... Policy Language Model: A rule statement contains: a ruleId; a target; an effect; a metaPolicyRef and a condition. Target defines the set of names for the: resources; subjects and actions -- to which the rule statement applies. If the rule statement applies to all entities of a particular type, then the target definition is the root of the applicable name space. An XACML PDP must verify that the resources, subjects and actions identified in an authorization decision request are each included in the target of the rule statement that it uses to evaluate the request. 
MetaPolicyRef specifies the meta-policy by which the rule statement may be combined with other rule statements..." See also the XML schema. References: "Extensible Access Control Markup Language (XACML)."
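
    To make the language model concrete, a rule statement with the parts just named (ruleId, target, effect, metaPolicyRef, condition) might be sketched as follows; the element names and URNs below are illustrative only and do not reproduce the committee draft's actual schema:

        <rule ruleId="urn:example:rule:1" effect="Permit">
          <target>
            <resources>http://example.com/records/</resources>
            <subjects>urn:example:role:physician</subjects>
            <actions>read</actions>
          </target>
          <metaPolicyRef>urn:example:metapolicy:deny-overrides</metaPolicyRef>
          <!-- condition: a further predicate over request attributes -->
          <condition>...</condition>
        </rule>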

  • [March 07, 2002] "XML Five Years On: Simplicity Gives Way to Complexity. [Standards. Will Success Spoil XML Or Just Complicate It?]" By Liora Alschuler and Mark Walter. In Seybold Report: Analyzing Publishing Technology [ISSN: 1533-9211] Volume 1, Number 23 (March 11, 2002), pages 1, 3-12. ['Five years after XML debuted as "simplified SGML," some of the complexity is creeping back in. At the center of this year's controversy are a plethora of schema-language proposals and growing doubt over the wisdom of remaining SGML-compatible. Meanwhile, XML is also being adopted for the data-processing industry -- whose interests don't always coincide with publishers' goals. Our story delineates the differences among the schemas and recaps the varied responses of the publishing-technology vendors. Finally, we look at the current crop of XML editing tools.'] "... [XMetaL:] The third major release of XMetaL, previewed at the December show and due to be available this month, features support for XML Schema as well as DTDs. The new version also adds out-of-the-box support for change tracking, a new Java API (and enhancements to other APIs), a forms editor, support for WebDAV and tighter Office integration for import from Word and Excel... [XML Spy:] Shortly before XML '01, Altova, best-known for its schema-design and -editing tools, announced the release of the 'all-new' XML Spy 4.0 Document Editor, 'an enterprise-wide content-management solution for creating and deploying large volumes of XML in real-world production environments.' This sounded to us like a significant new entrant in what has been a dwindling field of viable choices, so we reviewed it in that context. Since the show, Larry Kim, Altova's marketing director, has backed off somewhat from that positioning, stating that the XML Spy Document Editor is not intended as a ready-to-use environment for full-time authors doing editorial-production work. Rather, at this time its focus is on short, data-driven documents that the organization wants to capture and store as XML... [Topologi Markup Editor:] Rick Jelliffe's editor, called Topologi Markup Editor, is aimed at legal publishers, but it could be a tool with an even bigger future. The design criteria are evident in the editor's strengths: fast, keyboard-centric text entry and markup; smooth configuration management; and support for collaboration across the enterprise. It also promises to be the editor that supports the most XML schema languages, including XML and SGML DTDs, XML Schema, RELAX NG and Schematron... XML editors are gaining traction as the editing interfaces to content-management systems, but it's too early to tell if SoftQuad's head start in schema support will have lasting advantage. While we saw no mass movement away from the desire to do everything in Word, neither are the structured-markup tools designed to work with Word finding a general audience... we expect to see widespread adoption of XSD within Web content management. Because so many Web pages at large sites are built as collections of components, the data-set model of XML Schema makes sense as a way to describe and validate material entering the repository. For vendors in this arena, support for XML Schema will be more critical, because they'll be expected to receive and deliver a wide variety of content -- everything from news stories, press releases and proposals to catalogs, sales orders and personalized, multidimensional, compound Web documents. 
Privately, Web content-management vendors we've spoken with admit that customers are asking about XML Schema support. But most have not made public their timetable for supporting it in their products. We expect to see progress on this front throughout the year... The challenge for XML, as we see it, is to remain responsive to its diverse constituents, a move which could be healthy, in the long run, for the mainstream power brokers as well as those immersed in less popular, more-specialized applications. SoftQuad CTO Peter Sharpe observes that it is ironic that XML has instigated a revolution in database technology while the narrative applications for which it was created have been left relatively unaffected. If the computer industry is to maintain this mainline to innovation and explosive new capability, it would do well to find a way to restore balance in the standard's development process. If not, the publishing community may find that the ripples from the XML data revolution reach farther and with more force than most of us envisioned five years ago..." [excerpts from another very fine article in Seybold Report -rcc]

  • [March 07, 2002] "Content-Management Systems: Comparing Installation Numbers. [Comparing Installations: Numbers, Implications.]" By Luke Cavanagh. In Seybold Report: Analyzing Publishing Technology [ISSN: 1533-9211] Volume 1, Number 23 (March 11, 2002), pages 13-15. ['Firm facts about the vendors' market presence and positioning can help you plan your next system upgrade. Comparative data on content-management systems is hard to find and hard to interpret. So we've rounded up the key facts on the main vendors and boiled everything down to a simple table. We've also classified the products into five market segments for your shopping convenience. We will update the statistics at regular intervals.'] "When you go looking for a new content-management system, it pays to consider a company's health and the relative maturity of its products before making a multi-year commitment to its technology. Is a vendor a raw start-up or veteran supplier? Is this a brand-new product, one that's well-established, or one that's past its prime? There's no 'right' answer to these questions; the supplier and product that make a good choice depend on your organization and the application at hand. But regardless of what you're looking for, it's helpful to know where the vendors sit relative to their competitors. Similarly, it helps to have a general idea about pricing. If you've got $100,000 to spend, it may not be the best use of your time to open discussions with a company that typically installs quarter-million-dollar solutions." The 'Content-Management Systems Installation Overview' (in chart form) reviews 27 systems. "... we've developed a table outlining the leading content-management vendors on the market today. We've also classified them in five categories. The categories are based on several factors: the geographical reach of the company's sales efforts, the number of systems installed, the average price of an installation and the company's software-to-services revenue ratio. The software-to-services ratio can be a hint of high ongoing service costs, and it also shows how well new sales efforts are currently proceeding. The goal of most companies is to keep the software portion of the pie as large as possible... The three XML-based systems listed here are also large in scale and price, but they have not achieved the same level of penetration as those in the first group. They feature built-in XML functionality that manages content at a very granular level and are typically installed in cross-media publishing applications, such as technical documentation or commercial reference publishing. These products have roots in SGML -- which helps explain their niche penetration rates. Now that XML is on the rise, it remains to be seen if the larger players will leverage or subsume these vendors' expertise..."

  • [March 07, 2002] "Progressive On Target With Vasont Cross-Media Content-Management System. New Name, Better Online Functionality for Target 2000." By Mark Walter. In Seybold Report: Analyzing Publishing Technology [ISSN: 1533-9211] Volume 1, Number 23 (March 11, 2002). ['Vasont is version 7 of the turnkey content-management system formerly called Target 2000 from Progressive Information Technologies. New features include a Web interface and toolkit, a graphical workflow designer and XML integration. It's aimed at reference works, and it tries to be entirely media-agnostic. Though the Web interface is too tightly tied to Microsoft Active Server Pages, it has a lot going for it.'] Heralding the product's move from the '90s into the twenty-first century, Progressive Information Technologies has rechristened its Target 2000 content-management system Vasont, coinciding with release 7 of the product. The new version of the software, which is targeted at cross-media publishing, features a new Web interface and toolkit, a graphical workflow designer and several other changes. Overall, it has evolved into a mature and sophisticated product... Vasont is an XML-aware content-management system designed for cross-media publishing of diverse content, including text-heavy reference works, technical documentation and educational materials. Its first customer was W. B. Saunders, which uses it to manage Dorland's Medical Dictionary, a large, single-volume book that is offered on CD-ROM and online. Shortly afterward, Progressive did a similar project for IEEE and its Standard Dictionary of Electrical and Electronics Terms. Since then, Progressive has installed Vasont at a handful of other publishers, including McGraw-Hill for its Science & Technology Encyclopedia; Merck, for use with the Merck Index; and the American Society of Health-System Pharmacists for use with its drug information database, AHFS DI... Progressive has upgraded its ties to third-party editing products in version 7. It has completed integrations with Arbortext's Epic, Corel (formerly SoftQuad) XMetaL and Adobe FrameMaker, and it has built Microsoft Word templates with logic to translate the files to XML. It also has integrated Vasont with Advent's 3B2 composition system... A new Read DTD utility furnishes a front end for setting up a data load based on an XML document type definition (DTD). Progressive has also added subset extraction and loading, so that selected portions of a database can be pulled out and reloaded without affecting the rest of the content... Vasont is the new name for a well-proven system aimed at customers with modest IT resources who want a turnkey system to manage a reference database. For customers with aging systems from Auto-Graphics (another longtime player in reference-database publishing), Vasont's updated Web and third-party authoring integration could be compelling reasons to switch. In the larger market, where Vasont competes against XML-aware content-management systems (Astoria, Documentum, Empolis, XyEnterprise), its 'tag-neutral' approach and strong product-development features dovetail with emerging standards and cross-media publishing needs. Vasont's feature set reflects Progressive's hands-on publishing experience, gained over several decades as a service organization..."

  • [March 06, 2002] "CapeConnect Three Web Services Platform. Technical Overview." Cape Clear Software white paper. [March] 2002. 18 pages. "Cape Clear Software produces the software platform and tools necessary to build new Web Services and to enable legacy systems as Web Services. CapeConnect is a complete Web Services platform that automatically exposes Java, Enterprise JavaBeans (EJB), and CORBA components as Web Services based on Simple Object Access Protocol (SOAP), Web Services Description Language (WSDL), and Universal Description, Discovery, and Integration (UDDI). CapeConnect is ideal for enterprise application integration, especially in heterogeneous environments where multiple types of front-end clients are connected to multiple types of back-end systems. CapeConnect unites COM, Java, J2EE, CORBA, and XML around an open Web Services model. This paper provides a technical overview of the CapeConnect Web Services platform... The CapeConnect architecture contains four core components: (1) The CapeConnect gateway is a servlet that runs in a servlet engine on a Web server. The gateway acts as a communication bridge between remote clients and the CapeConnect XML engine. (2) The CapeConnect XML engine converts SOAP messages from the gateway to Java calls or CORBA calls on back-end components. The XML engine then converts the results of these calls to SOAP messages and returns these messages to the gateway. (3) The CapeConnect J2EE engine is Cape Clear's implementation of the Java 2 Platform, Enterprise Edition (J2EE). (4) The CapeConnect UDDI registry is Cape Clear's implementation of the UDDI 1.0 standard... XML Engine: The core of the CapeConnect system, the XML engine converts SOAP requests into invocations against back-end systems. This translation is entirely dynamic and does not involve code generation or the need to re-deploy business logic. The XML engine can invoke Java classes within the same Java virtual machine (JVM) or can make external calls to EJB or CORBA components running in a separate application server process..."

  • [March 06, 2002] "Cape Clear Targets Integration Market With Web Services." By Richard Karpinski. In InternetWeek (March 06, 2002). "Web-services vendor Cape Clear Software will release a new version of its Web-services development platform next week that will mark its most direct attack yet against traditional enterprise integration rivals. The release of Cape Clear's CapeConnect 3.5 server and CapeStudio 3.5 development platform supports the usual run of Web-services protocols including XML, SOAP, and UDDI. But Cape Clear has added a new integration framework and other capabilities to the latest release in an attempt to offer enterprises a more affordable, standards-based alternative to traditional integration products, said Annrai O'Toole, Cape Clear CEO. Mainstream EAI vendors -- including companies such as Vitria, webMethods, Tibco, SeeBeyond, and others -- re furiously adding Web-services protocols to their integration platforms, which in the past have often been based on proprietary protocols (and more recently, more multi-platform technologies like J2EE)... New CapeConnect features include: XML mapping technology that can link an XML data source with any Web service hosted within the system; an array of built-in legacy connectors; support for both J2EE and .NET; improved database support; upgraded UDDI functionality; and more. CapeConnect 3.5's development pricing starts at $2500 while runtime pricing starts at $10,000..."

  • [March 06, 2002] "AIIM Show Spotlights Content Management." By Cathleen Moore . In InfoWorld (March 06, 2002). "The latest developments in enterprise content management will soak up the limelight at the AIIM (Association for Information and Image Management) 2002 Conference in San Francisco this week, as vendors such as Documentum, Ipedo, Ektron, and Wright Technologies roll out systems for digital asset management and XML control. Documentum at the show is introducing two new digital asset management products based on technology it acquired from the Bulldog Group in December. The Documentum Media Services product is integrated with the company's 4i Enterprise Content Management platform, letting enterprises manage multimedia, Web content, images, and documents from a single platform. Media Services allows users to edit media assets such as video, audio, and images, and combine those assets with other types of content; other features include automatic extraction of media attributes such as format, compression, or color; advanced search capabilities; format conversion capabilities; and streaming media support. The other product, Documentum Digital Asset Management Edition, is a souped-up media platform designed for entertainment or media organizations that distribute broadcast-quality digital assets. It includes specialized features such as tracking of time-based media and a Tape Library Manager for digitizing videotapes...Meanwhile, XML content management will receive some attention at AIIM, as Ipedo unveils its XML Database 3.0. The Ipedo XML Database is designed to ease the integration and management of dynamic XML content over the Internet. Version 3.0 adds a W3C-compliant XML query implementation, XML document versioning, improved SOAP (Simple Object Access Protocol) support, and performance enhancements... Ektron plans to roll out an XML editor aimed at empowering business users to apply XML to Web content. The company's eWebEditPro+XML lets users add and manage XML tags in a semi-structured content view across multiple media and device types..."

  • [March 06, 2002] "Additional XML Security URIs." By Donald E. Eastlake 3rd (Motorola). IETF Internet-Draft. Reference: 'draft-eastlake-xmldsig-uri-02.txt'. January 2002; expires: July 2002. ['Distribution of this draft is unlimited. It is intended to become an Informational RFC and will probably also be published as a W3C Note. Comments should be sent to the author or the XMLDSIG working group'] Abstract: "A number of algorithm and keying information identifying URIs intended for use with XML Digital Signatures and XML Encryption are defined." From the introduction: "XML Digital Signatures have been standardized by the joint IETF/W3C XMLDSIG working group. The Proposed Standard is specified in RFC 3075 and a Draft Standard version is pending before the IESG [XMLDSIG-D]. Canonical XML, which is used by many digital signatures, has been standardized by the W3C and is documented in Informational RFC 3076. In addition, XML Encryption and Exclusive XML Canonicalization are currently being standardized by the W3C. All of these standards and recommendations use URIs to identify algorithms and keying information types. This document is intended to be a convenient reference list of URIs and descriptions for algorithms in which there is substantial interest but which can not or have not been included in the main documents for some reason. Note in particular that raising XML digital signature to Draft Standard in the IETF requires remove of any algorithms for which there is not demonstrated interoperability from the main standards document. This requires removal of the Minimal Canonicalization algorithm, in which there appears to be continued interest, to be dropped from the standards track specification. It is included here..." See references in : (1) See "XML Digital Signature (Signed XML - IETF/W3C)"; (2) "XML and Encryption."

  • [March 06, 2002] "Internet Registry Information Service." By Andrew L. Newton (VeriSign, Inc.). IETF Network Working Group, Internet-Draft. Reference: 'draft-newton-iris-00'. February 19, 2002; expires: August 20, 2002. 26 pages. Abstract: "This document describes an application layer client-server protocol for a framework of representing the query and result operations of the information services of Internet registries. Specified in XML, the protocol defines generic query and result operations and a mechanism for extending these operations for specific registry service needs." Formal XML Syntax: "IRIS is specified in XML Schema notation. The formal syntax presented is a complete schema representation of IRIS suitable for automated validation of IRIS XML instances." Description: "Each of the three types of registries, address, routing, and domain, are considered to occupy their own namespace. This registry namespace is identified by the URI, more specifically a URN, used within the XML instances to identify the XML schema formally describing the information service. A registry information server may handle queries and serve results for multiple registry namespaces. Each registry namespace for which a particular registry operator serves is a registry information service instance. IRIS, and the XML schema formally describing IRIS, does not specify any registry, registry namespace, or knowledge of a particular service instance or set of instances. IRIS is a specification for a framework with which these registry namespaces can be defined, used, and in some cases interoperate. The framework merely specifies the elements for session management and the elements which must be used to derive query elements and result elements. This framework allows a registry namespace to define its own structure for naming, entities, queries, etc. through the use XML namespaces and XML schemas (hence, a registry namespace is identified by the same URI that identifies its XML namespace). In order to be useful, a registry namespace must extend from this framework. The framework does define certain structures that can be common to all namespaces, such as entity references, search continuations, authentication types, and more. A registry namespace may declare its own definitions for all of these, or it may mix its derived definitions with the base definitions. IRIS defines two types of referrals, an entity reference and a search continuation. An entity reference indicates specific knowledge about an individual entity, and a search continuation allows for distributed searches. Both types may span differing registry namespaces and instances. In addition, IRIS specifies requirements for representing entity references as URIs. No assumptions or specifications are made about roots, bases, or meshes of entities..." [cache]

  • [March 06, 2002] "Mission Interoperable: Rivals Aim To Link Web Services." By Richard Karpinski. In InternetWeek (March 06, 2002). "An ad hoc group of Web services pioneers - from large vendors like Microsoft and IBM to small upstarts and one-man development shops - held the latest in a series of hands-on meetings recently to test the interoperability of key Web services specifications. After a pair of meetings focused on SOAP (Simple Object Access Protocol), the group -- dubbed SoapBuilders after the public mailing list that drives the community -- turned its attention to WSDL (Web services Description Language). Participants in the latest round of testing included: BEA, Borland, Cape Clear, Hewlett-Packard, IBM, Macromedia, and Oracle, along with a slew of smaller companies and developers... The group's outlook after its most recent meeting: An early focus on interoperability is serving Web services well, though plenty of detail-oriented work remains to ensure key protocols like SOAP and WSDL work consistently across different clients and servers. Results of the SOAP and WSDL interoperability tests are available via several sources... A WSDL specification has been submitted to the W3C as a Note, and a group of companies is implementing that spec in their products and testing interoperability. Eventually, the W3C will come out with a formal recommendation for describing Web services; vendors will adjust if need be to this W3C guidance. As happened with SOAP, vendors are going through a now well-established process to get WSDL deployed and tested. First, individual vendors read and implement the specification. Next -- and this is what began happening last week -- they test interoperability among different clients, servers, and development tools... The companies involved made a lot of progress in working through WSDL interoperability issues in their most recent session; they'll also keep banging away at publicly accessible servers to resolve remaining issues until the group meets again, most likely within three months. After that, look for the SoapBuilders group to tackle other crucial Web services areas, most likely security, where emerging XML specs are defining how digital signatures and encryption will work in a Web services environment..."

  • [March 06, 2002] "IBM Touts XML for DB2 Database." By Tom Sullivan. In InfoWorld (March 05, 2002). "As part of its strategy for entering what it calls the next wave of data management, IBM plans to offer a database capable of managing objects, relational data, and XML documents. In a briefing with InfoWorld this week, the Armonk, N.Y.-based company detailed plans to extend the core database engine currently in DB2 to include support for XML, with technologies such as new index structures that relate to XML, according to Nelson Mattos, an IBM distinguished engineer and director of IBM's information integration group. While IBM has supported both objects and relational data in DB2 for some time, the addition of XML will enhance that support. 'XML gives you a very flexible model to manage all the metadata around objects,' Mattos said. Mattos said that the idea is to make the core DB2 look like a relational database engine with XML capabilities from the perspective of applications looking for relational data, while making it look like an XML database with relational capabilities or an object database with relational capabilities from the perspectives of applications looking for those data types. To that end, support for the XQuery standard means that an XML application only needs to know XQuery to get at data residing in DB2... Furthering its distributed approach to data management, Big Blue is planning to launch a new version of its Content Manager software in the second quarter of this year."

  • [March 06, 2002] "Vignette Manages Content." By Michael Vizard. In InfoWorld (March 05, 2002). ['In an interview with InfoWorld Editor in Chief Michael Vizard, Vignette CEO Greg Peters and Senior Vice President and General Manager for Strategy and Technology Bill Daniel talk about the strategic role content management plays in unifying data assets across the enterprise.'] "... [Daniel:] We're supporting both the J2EE and the Microsoft .Net technology stacks because we see them as the providers of infrastructure. Web services hold out the promise of standardization of the protocols and the communication and the exchange of information between components of an application inside the firewall and disparate applications or pieces of applications. That is coming, and it's coming fast. It promises to make integration and a kind of aggregation of services quite easy. I think there's also a set of content management-related issues, because what's really flying around in a Web services world is content... immediately you'd better be able to handle XML natively and you'd better have the ability to process and transform XML and understand XML built into your applications. Every day there's a new standard for querying and another standard for interchange of documents between people in a certain industry. It's almost like we've learned how to talk and now we're creating every language known to man. Of course, it has some real advantages over HTML or other ways of storing information because XML has an actual structure to it and it's self-describing. What our customers are telling us [is] that over time they want to use XML as kind of the backbone for content management solutions. They're not necessarily saying that they want to throw everything away and convert it all to XML. They're talking about more future [plans]. We're really working with our infrastructure partners. From a repository point of view, we see the infrastructure providers coming on very strong there. We're focused at the application services level, making sure that we can handle XML and manage XML, but we're not focused on the storage and repositories because we think those issues are being solved very nicely by other vendors..."

  • [March 05, 2002] "Jabber." By Jeremie Miller, Peter Saint-Andre, and James Barry (Jabber Software Foundation). IETF Network Working Group, Internet-Draft. February 21, 2002; expires: August 22, 2002. Reference: 'draft-miller-jabber-00.' Abstract: "This informational document describes the Jabber protocols, a set of open, XML-based protocols developed over a number of years mainly to provide instant messaging and presence services. In addition, this document describes the known deficiencies of the Jabber protocols." From the introduction: "Jabber is a set of open, XML-based protocols for which there exist multiple implementations. These implementations have been used mainly to provide instant messaging and presence services that are currently deployed on thousands of domains worldwide and are accessed by millions of IM users daily. Because a standard description of the Jabber protocols is needed to describe this new traffic growing over the Internet, the current document defines the Jabber protocols as they exist today. In addition, this document describes the known deficiencies of the Jabber protocols; however, this document does not address those deficiencies, since they are being addressed through a variety of standards efforts... The standard transport mechanisms for XML-RPC, SOAP, and other forms of XML data interchange are HTTP and, to a lesser extent, SMTP; yet neither of these mechanisms provides knowledge about the availability of network endpoints, nor are they particularly optimized for the often asynchronous nature of data interchange, especially when such data comes in the form of relatively small payloads as opposed to the larger documents originally envisioned to be the main beneficiaries of XML. By contrast, the existing instant messaging (IM) services have developed fairly robust methods for routing small information payloads to presence-aware endpoints (having built text messaging systems that scale up to millions of concurrent users), but their data formats are unstructured and they have for the most part shunned the standard addressing schemes afforded by URIs and the DNS infrastructure. Given these considerations, the developers of the Jabber system saw the need for open protocols that would enable the exchange of structured information in an asynchronous, near-real-time manner between any two or more network endpoints, where each endpoint is addressable as a URI and is able to know about the presence and availability of other endpoints on the network. Such protocols, along with associated implementations, would not only provide an alternative (and in many cases more appropriate) transport mechanism for XML data interchange, but also would encourage the development of instant messaging systems that are consistent with Internet standards related to network addressing (URIs, DNS) and structured information (XML). The Jabber protocols provide just such functionality, since they support asynchronous XML-based messaging and the presence or availability of network endpoints..." See: "Jabber XML Protocol." [cache]
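
    To make the wire format concrete, here is a minimal, hypothetical Java sketch of the kind of XML stream and message stanza the draft describes, written to a raw TCP socket; stream authentication is elided, and the server name and addresses are invented:

        import java.io.IOException;
        import java.io.OutputStreamWriter;
        import java.io.Writer;
        import java.net.Socket;

        // Hedged sketch: a Jabber client session is one long XML document per
        // direction, opened by a <stream:stream> root element; stanzas are
        // routed by the server as they arrive. Port 5222 is the conventional
        // client port; the addresses below are invented.
        public class JabberSketch {
            public static void main(String[] args) throws IOException {
                Socket s = new Socket("jabber.example.org", 5222);
                Writer out = new OutputStreamWriter(s.getOutputStream(), "UTF-8");
                out.write("<stream:stream to='jabber.example.org'"
                        + " xmlns='jabber:client'"
                        + " xmlns:stream='http://etherx.jabber.org/streams'>");
                out.write("<message to='juliet@jabber.example.org' type='chat'>"
                        + "<body>Hello from a raw socket</body></message>");
                out.write("</stream:stream>");
                out.flush();
                s.close();
            }
        }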

  • [March 05, 2002] "RDF Site Summary 1.0 Modules: Streaming." By Ben Hammersley. Version 1.0. 2002-03-05. Status: Proposed. "This module addresses the additional needs of streaming-media providers. It is seen as an addition to existing standard and proposed modules, especially Dublin Core. The main features involve the associated application for the media stream, the codec the stream is encoded with and additional tags for the segmentation of live/continual broadcasts. It is predominantly technical/practical information: I envisage information such as music style, or video content to be dealt with by Dublin Core, the mod_content etc..." See "RDF Site Summary (RSS)."

  • [March 05, 2002] "RSS Tools and Stuff." By Mark Gibbs. In Network World Fusion (March 04, 2002). "In last week's Gearhead we discussed a standard for news syndication called Rich Site Summary. To recap, RSS lets Internet sites with something to say make their content findable through an XML-formatted file that summarizes what is available and where it is. These summaries are called RSS feeds. Through the good offices of our esteemed Online Executive Editor Adam Gaffin, an RSS feed for Network World's NWFusion Web site is available... A particularly interesting aspect of this feed is the DIY part: The RSS data is created on-the-fly from the output of the search engine used by the site, and you can embed whatever search terms you want in the RSS URL you request, giving you the ability to get just the news you want. Cool... if you want to use RSS feeds, you're going to need a tool for the heavy lifting. May we suggest Headline Viewer (eight gearteeth out of 10) from Vertex Development? Headline Viewer is reasonably functional in that it crashes only occasionally. On the other hand, as it is currently uncharged for, we can't complain. We write "uncharged for," because the software is not actually freeware nor is it commercial yet - Vertex plans to charge for the program when it reaches Version 1.0, and it is currently stalled out at Version 0.97 (Version 1.0 was scheduled for last year but . . . ). Headline Viewer polls a list of publishers for RSS files at intervals ranging from one to 24 hours. As each RSS file is retrieved, the headlines are added to a list that is displayed for the currently selected publisher. Clicking on an item will take you to the URL to which the headline refers. Headline Viewer can load lists of publishers from a selection of built-in aggregators that includes Userland, XMLTree, GrokSoup and NewsIsFree. You can also define your own publishers..." See previously "All the News That's Fit to RSS." References: "RDF Site Summary (RSS)."

  • [March 05, 2002] "Vignette Melds Web Services, Content Management." By Cathleen Moore. In InfoWorld (March 05, 2002). "Throwing its hat into the Web services ring, Vignette on Tuesday plans to add support for two Web services standards to its flagship Web CM (content management) system, V6. Specifically, Vignette will arm V6 with support for SOAP (Simple Object Access Protocol) and WSDL (Web Services Description Language), which form the current core of emerging standards most often associated with Web services along with XML and UDDI (Universal Description, Discovery, and Integration). Support for SOAP and WSDL will allow CM processes to be exposed as a Web service, thereby reducing the cost and complexity of content or application sharing, according to the company. V6 features native support for J2EE (Java 2 Enterprise Edition) and .Net environments, as well as existing support for XML. The new Web services-based CM process exposes content objects through an XML-based API, wraps the objects in a SOAP envelope, and deploys it to an active directory that is used by other companies to find the Web service they want, according to Santi Pierini, vice president of product strategy at Vignette, in Austin, Texas..." See: (1) "Vignette's New Support for Web Services Reduces Complexity and Cost of Delivering Content to Users and Business Applications. Vignette V6 Now Enables Organizations to Expose Any Content Management Process as a Web Service"; and (2) "The State Of New Mexico Selects Vignette V6. Vignette V6 at Core of Strategy to Bring Web Services to eGovernment."

  • [March 05, 2002] "IBM Spells Out Web Services Strategy." By Rob Wright. In CMP VARBusiness (March 04, 2002). "Feeling the heat in the Web services market from top competitors such as BEA Systems, Oracle and Sun Microsystems, IBM on Monday highlighted its own Web services strategy, positioning itself firmly in between the competing Java and .Net standards while declaring that it is the true market leader in the emerging market. Officials from IBM Software Group and IBM Global Services, along with IBM software brands Lotus and Tivoli, fleshed out Big Blue's overall approach to Web services, one that focuses on open standards and support of both Java-based Web services and Microsoft's .Net platform. Most of the attention, however, was focused on IBM's middleware brands, specifically the WebSphere platform and application server, as the key to driving growth and adoption of Web services in the enterprise market. IBM officials say the company will invest $700 million in WebSphere this year, which grew rapidly in 2001 and gained significant market share against BEA's WebLogic application server, which was the market leader last year... To offer such a high degree of connectivity through WebSphere, IBM has rolled out support of all major technologies and Internet standards, which has been a key selling point for Big Blue's Web services push. IBM has been a major contributor to Java, J2EE and UDDI and offers strong support for XML, SOAP, WSDL -- even .Net. While IBM and Microsoft have two distinct and separate Web services strategies and product lines, the technology giants teamed up recently to form the Web Services Interoperability Organization (WS-I), a group focused on developing interoperable standards to connect multiple platforms, applications and programming languages. Accenture, Intel, BEA, Hewlett-Packard and Oracle are also members of WS-I. 'The world is heterogeneous,' says Robert Sutor, director of e-business standards strategy at IBM. 'If customers buy our software and they can't communicate with Microsoft, BEA and other competitors, we fail.' IBM, however, will be walking a fine line with Microsoft. While Big Blue has attacked such threats as BEA, Sun and Oracle, the company is restrained with Microsoft because it sees opportunity around .Net, Microsoft's proprietary Web services platform. IBM officials say they're concentrating on getting Microsoft to support open standards, and it has worked to a degree. Along with forming WS-I, the two companies worked together on developing SOAP... IBM also pointed out that Java is the more popular technology for enterprises buying Web services. In addition, the company cited a recent survey from analyst firm Giga Information Group that showed 32 percent of customers say WebSphere is the most important Web services platform, compared with 22 percent for Microsoft .Net. Going forward, IBM says it will concentrate on developing standards with WS-I, which the company says has had membership inquiries from more than 450 companies, and offering solution providers and software developers more resources and support through its partner program and new partner initiative, WebServices on WebSphere..." See: "Web Services Interoperability Organization (WS-I)."

  • [March 01, 2002] "CSS3 module: Lists." W3C Working Draft 20-February-2002. Edited by Tantek Çelik (Microsoft Corporation) and Ian Hickson. Latest version URL: http://www.w3.org/TR/css3-lists. This first public working draft for the W3C Cascading Style Sheets (CSS) Level 3 'Lists' Module "describes how lists are rendered and offers enhanced list marker styling... The list model in this module differs in some important ways from the list model in CSS2, specifically in its handling of markers. Implementation experience suggested the CSS2 model overloaded the ::before and ::after pseudo-elements with too much behavior, while at the same time introducing new properties when existing properties were sufficient. Most block-level elements in CSS generate one principal block box. In this module, [the authors] discuss two CSS mechanisms that cause an element to have an associated marker: one method associates one principal block box (for the element's content) with a separate marker box (for decoration such as a bullet, image, or number), and the other inserts a marker box into the principal box. Unlike ::before and ::after content, the marker box cannot affect the position of the principal box, whatever the positioning scheme... There are significant changes in this module when compared to CSS2: (1) display:marker has been replaced with ::marker; (2) It is no longer possible to make end markers; (3) The marker-offset property is obsoleted; (4) The marker display type is obsoleted; (5) Markers are now aligned relative to the line box edge, rather than the border edge; (6) Markers now have margins; (7) The introduction of the box list style type as well as a number of alphabetic and numbering types; (8) Error handling rules for unknown list style types were changed to be consistent with the normal parsing error handling rules; (9) The list-item predefined counter identifier has been introduced..." See "W3C Cascading Style Sheets."

  • [March 01, 2002] "The State of Web Services." Interview with Bob Sutor (IBM). By Ellie MacIsaac (Assistant Managing Editor, WebSphere Advisor). In [XML Strategies] Advisor (February 28, 2002). ['Advisor recently spoke with IBM director of e-business standards strategy Bob Sutor about current trends in Web services adoption, and the standards integration work being done by the Web Services Interoperability Organization (WS-I).'] "The Web Services Interoperability Organization (WS-I) is based on providing guidance and clarity both for the developers and the people making investment decisions. They need to know that the products, such as the tools, the runtime, and the Web services themselves are based on open standards. They also need to know which open standards the developers used, and if they used common industry practices to put them together. We want our customers to have the confidence to say, 'Alright, I can use that Web service. I know that will be compliant with what I already have.' We think that confidence will really speed up the adoption of Web services. We all agree Web services is a good idea -- it's hard to argue against it. Therefore, we want to get this technology into our customers' systems as quickly as possible, but they want the reassurance it'll do the job. They want to know that all these promises of interoperability are more than just marketing hype -- that this notion of interoperability is something they can concretely see and measure... We've started doing some preliminary work over the last year around the basic standards such as SOAP and WSDL, which are the basic ways for describing Web services. This organization's timing is very important. We think we're about to enter a period where there will be many more technologies coming into the standardization process. I mentioned some before -- security, reliability, and transactions -- but there are several more. So, we're at the point where the foundation for Web services, in terms of the basic connectivity, has been laid. We've clearly been talking about Web services a lot, and people at least have a vague idea of what Web services are. We want to make sure now that when we move off the basic platform for all the future standardization efforts that will take place in the different organizations, that interoperability is built in -- that it's a requirement from now on to make sure the standard coming from one place will work with the standard coming from another place. There are just too many different specifications to be handling them all in one place. The W3C will not do them all; OASIS will not do them all; same with the OMG, or any of the other big organizations. So, it's assumed that this standardization work will be done in a distributed way. So, we need a central organization, like WS-I, that's neutral and isn't affiliated with any of these organizations, and therefore isn't involved in any of the existing cross-organization politics. We're also neutral regarding programming languages; we don't say you have to do Web services with Java or C# or .NET. So, WS-I is an appropriately neutral industry body that can drive forward the roadmap for Web services to work in a complementary fashion with the standards organizations and finally make this notion of interoperability real..." See: "Web Services Interoperability Organization (WS-I)."

February 2002

  • [February 28, 2002] "Canonical XML Encoding Rules (CXER) for Secure Messages. An ASN.1 Schema for XML Markup." By Phillip H. Griffin. February 19, 2002. Based upon slides and speaker notes from a presentation given at the RSA Security Conference (McEnery Convention Center, San Jose, California). From a communiqué: "The presentation is entitled, 'Canonical XML Encoding Rules (CXER) for Secure Messages - An ASN.1 Schema for XML Markup'. It describes how to use ASN.1, the Abstract Syntax Notation One standards defined by ISO, IEC and ITU-T, as a schema for XML messages. The presentation will be given by Phillip H. Griffin. By using an ASN.1 schema, XML values can be transferred in a compact, efficient binary format. These same values can then be represented and used locally as verbose markup text. This capability allows XML to be used effectively in environments with constraints imposed by mobility and/or limited bandwidth (e.g., wireless communications with personal digital assistants), high volumes of transactions (e.g., Internet commerce), or limited storage capacity (e.g., smart cards)... ASN.1 is a schema for encoded values: Type definitions are based on the X.680-series notation; Types describe the expected general structure of values; Each builtin type defines a class, a set of distinct values; Constraints restrict a class and the validity of values... Using the Canonical XML Encoding Rules (CXER), the same ASN.1 XML Value Notation example can be encoded one and only one way as a single long string containing no 'white-space' characters outside of data..." A free copy of the presentation, including slides and speaker notes, is available online. See: "ASN.1 Markup Language (AML)" and note the new TC: "OASIS Technical Committee for XML Common Biometric Format (XCBF)" which "...will define an ASN.1/XML CBEFF schema, an introduction and overview of canonical DER, PER, and XER, and the processing and security requirements needed for the creation and verification of all cryptographic types defined in X9.84, in the form of XML encoded objects..." [cache]

  • [February 28, 2002] [Review of] Digital Rights Management: Business and Technology. Reviewed by John S. Erickson, Ph.D. (Hewlett-Packard Laboratories). The book is authored by William Rosenblatt, William Trippe and Stephen Mooney (Hungry Minds, Inc., Indianapolis, IN, November 2001; ISBN: 0-7645-4889-1.) "This book was [...] the best, most comprehensive treatment of digital rights management that I have seen to date. The book excels primarily because the authors continually emphasize the overarching business imperatives while considering the applicable technologies, at times in depth. The book does an important service to the industry by combining useful abstract models with considered discussions of real technologies and solutions... Part II, 'The Technology of DRM' methodically introduces the reader to the world of DRM technology, beginning with the conceptual basis for 'rights models' and how these may be crafted to embody a variety of business models (Chapter 4). Chapter 5 presents the authors' 'DRM Reference Architecture,' an extremely useful tool for understanding the system components required in any practical DRM system and their various relationships. Chapter 6 provides a timely and thoughtful treatment of DRM standards activity, be they formal or de facto. The terms 'standard' and 'digital rights management' have traditionally seemed oxymoronic, but the authors demonstrate that progress is being made and DRM standards already have business relevance. For example, readers will find the sections on ICE, a standard protocol for content syndication, and the XrML rights specification language from ContentGuard to be useful and enlightening. Chapters 4-6, in this reviewer's mind, were the 'gems' of the book, providing the most important take-home messages. The final chapter in Part II focuses on significant technologies and major technology players in the industry, including the likes of Digimarc, Adobe, Intertrust, Microsoft and RealNetworks. The authors' thorough treatment of Microsoft's 'Unified DRM'/BlackBox digital rights management technology was especially timely and useful, given the awarding of US Patent 6,330,670 'A Digital Rights Management Operating System' on December 12, 2001... Finally, this reviewer's criticisms: I felt that there could have been a bit more discussion of the impact that various XML-based security initiatives might make on emerging DRM standards or solutions in general. I felt that the digital library community might like to see more consideration of the problems that conventional DRM technologies face when trying to deliver cross-organizational authentication and authorization in manageable ways..." See the online chapter from the book: "[DRM] Technology Standards: Leveling the Playing Field." For references, see "XML and Digital Rights Management (DRM)."

  • [February 28, 2002] "Dublin Core Metadata Initiative Progress Report and Workplan for 2002." By Makx Dekkers (Managing Director, Dublin Core Metadata Initiative) and Stuart L. Weibel (Executive Director, Dublin Core Metadata Initiative). "The Dublin Core Metadata Initiative (DCMI) progressed on many fronts in 2001, including launching important organizational changes, achievement of major objectives identified in the previous year, completion of ANSI standardization, and increased community participation and uptake of the standard. The annual workshop, held in Asia for the first time this past October, was broadened in scope to include a tutorial track and conference. This report summarizes the accomplishments and changes that have taken place in the Initiative during the past year and outlines the workplan for the coming year... DCMI remains committed to its mission to serve the user community, to further develop its role in the wider context of the semantic Web, and to create a stable platform for future developments and outreach to the commercial sector (especially product development and knowledge technologies). DC and OAI: The initial version of the OAI protocol calls for the use of unqualified Dublin Core as the required default metadata set. However, implementation of this recommendation is not without difficulty, and may result in awkward representation of some types of resources. In addition, the current protocol associates an OAI-specific XML schema with the DC namespace rather than pointing to the DCMI-maintained site. Recent concerns about these issues in both communities have resulted in closer liaison between their technical groups, and this is expected to result in collaborative efforts to resolve these issues in the coming year... Expression of simple and qualified Dublin Core in RDF/XML: Two documents were finalized in 2001 and are expected to become DCMI Recommendations in early 2002. The first of these documents explains how to express unqualified Dublin Core metadata in RDF with XML syntax, and contains many encoding examples. The second document addresses the more complex case of encoding qualified Dublin Core in RDF... Development of Library and Government Application Profiles: Two communities have been active in defining how to use Dublin Core metadata in their domain: the library community and the government community. These groups are in the process of defining Application Profiles and identifying additional implementation rules and controlled vocabularies (thesauri, ontologies) that will allow implementations in these domains to achieve a high level of interoperability... Expression of Dublin Core metadata in XML: Following the recommendations on how to express Dublin Core metadata in RDF/XML, a need has been identified for a similar recommendation on how to express DC metadata in XML without the use of RDF. A draft of such a recommendation has been prepared in 2001 and is expected to go through finalization and review in 2002 as one of the activities in the Architecture working group... In 2001, we have seen important achievements, both in the technical area as well as in the organizational restructuring of the Initiative. The 2002 workplan is well underway and moving towards DC-2002, to be held in Florence, Italy in October of this year. The commitment of the many people who invest their time, energy and intellectual resources to develop the Dublin Core gives ample reason for optimism that DCMI will continue to lead the development of cross-disciplinary resource discovery standards for the Web..." See: "Dublin Core Metadata Initiative (DCMI)."

  • [February 28, 2002] "E-Government Strategy. Implementing the President's Management Agenda for E-Government." Simplified Delivery of Services to Citizens. February 27, 2002. From: [US] Executive Office Of The President, Office Of Management And Budget (OMB). 37 pages. "... The E-Government Task Force found that the federal government could significantly improve customer service over the next 18 to 24 months by focusing on 23 high-payoff, government-wide initiatives that integrate agency operations and IT investments (subsequently, payroll processing was added as the 24th E-Government initiative). These initiatives could generate several billion dollars in savings by reducing operating inefficiencies, redundant spending, and excessive paperwork. The initiatives will provide service to citizens in minutes or hours, compared to today's standard of days or weeks. Moreover, by leveraging IT spending across federal agencies, the initiatives will make available over $1 billion in savings from aligning redundant investments..." The 24 initiatives chosen represent a balance of initiatives and resources across the four key citizen groups (individuals, businesses, intergovernmental and internal). The initiatives will integrate dozens of overlapping agency E-Government projects that would have made worse the confusing array of federal Web sites. Additionally, the 24 initiatives represent the priorities of the members of the President's Management Council, who can provide the key leadership support needed to overcome resistance to change. The Government to Business (G2B) initiatives will reduce burden on businesses by adopting processes that dramatically reduce redundant data collection, provide one-stop streamlined support for businesses, and enable digital communication with businesses using the language of E-business (XML)... Plans call for: (1) Integrated Human Resources HR Logical Data Model including metadata and Extensible Markup Language (XML) tags, with a proposal for standard Federal HR data [by 9/30/02]; (2) Complete XML or non-EDI formats (schemas) for electronic filing of 94x tax products (businesses) [by 8/31/02]; (3) Complete Records Management and archival XML schema [by 2/28/03]. See also the press release. Reference from Walter R. Houser. [cache]

  • [February 28, 2002] "Lord of the Schemas, Part 1: Fellowship of the Schema." By Sean McGrath. In XML In Practice (February 21, 2002). "In the Land of Markup where the Schema languages lie One DTD to rule them all, One DTD to find them, One DTD to reify them all and to the objects bind them, In the Land of Markup where the Schema languages lie... This part of our tale chronicles some of the events in MiddleMark that occurred during the Great Years following the Third Age. Our focus is on the emergence of XML during the Great Profiling and, in particular, the growth in power and danger of One Schema Language (known in the Common Tongue as "DTD"). The First Age ended with the Great Battle when the Office Document Architecture (ODA ISO 8613:1988) was smitten by SGML (ISO 8879:1986). SGML was crafted by an ISO committee of high Elves, Dwarves, and Hobbits who worked with runes in the ancient Elven tongues. Their utterances were written in pure Mithril from the Caves of Moria. The lore therein was only viewable when moonlight shone on the parchment of the sacred ISO-bound volumes. These mighty tomes were stored, high upon the sturdy shelves of specialist bookstores from whence only deep dollars could retrieve them. Even the great Gandalf, wielding the sword of HTTP, could not dislodge SGML from behind the ancient gates of www.iso.ch. SGML's promise was plain to see but its magic was buried deep, and only available to the sages of Eldar. These sages made good money as consultants, especially working for the deep-pocketed great armies of the protectors of the Western Way. The history of the Second Age concerns the birth of XML..."

  • [February 28, 2002] "Registration of xmlns Media Feature Tag." By Simon St.Laurent and Ian Graham (Emfisys, Bank of Montreal). February 22, 2002; expires: August 23, 2002. Reference: 'draft-stlaurent-feature-xmlns-02.txt.' Posting from Simon St.Laurent: "A new Internet-Draft of 'Registration of xmlns Media Feature Tag' is available... This draft includes clarifications in its introduction, more explicit notice that the order in which features are listed is unimportant, and a new author (Ian Graham). Please direct comments to ietf-xml-mime@imc.org. Abstract: "This document specifies an xmlns Media Feature per RFC 2506 for identifying some or all of the URIs defining XML namespaces in a given XML resource, and the relative importance of these namespaces. This feature is designed primarily for use with the XML Media Types defined in RFC 3023, to provide additional hints as to the processing requirements of a given XML resource." From the Introduction: "MIME Content-Type identifiers have proven very useful as tools for describing homogeneous information. They do not fare as well at describing content which is unpredictably heterogeneous. XML documents may be homogeneous, but are also frequently heterogeneous. It is not difficult to create, for instance, an XHTML document which also contains RDF metadata, MathML equations, hypertext using XLink, XForms, and SVG graphics. XSLT stylesheets routinely include information in both the XSLT namespace and in the namespaces of the literal result elements. RFC 3023 defines a set of XML media types capable of indicating, among other things, a 'most important' type for an XML resource. For example, the content-type header: Content-Type: application/xslt+xml indicates that the data is XML and that it should be processed by an XSLT processor. In XML terminology, this is more or less the same as saying that the XSLT 'namespace' is the 'most important' namespace relevant to the processing of the data... XML data can contain many different 'types' of data, each identified by a namespace URI, and successful processing of a resource may depend on knowledge of some or all of these. For example, using XSLT to produce XHTML output is likely useful only if the recipient is also capable of processing XHTML. Similarly, a program may be better able to choose among a set of XSLT stylesheets if it knows the namespaces of the results they generate, or a renderer may take advantage of foreknowledge to begin setting up components before content actually arrives. Alternatively, processors working with SOAP envelopes may find it useful to know what they will be facing inside the envelope. The Media feature described in this document can provide additional information about some or all of the namespaces relevant to a given XML resource beyond that indicated by the basic content-type. Such a list can provide guidance to a recipient application as to how best to process the XML data it receives..." See also "XML Media/MIME Types."

  • [February 28, 2002] "VRML Successor Aims For 3D Web." By Richard Karpinski. In Internet Week (February 27, 2002). "The Web3D Consortium this week debuted a new specification it hopes will succeed where past efforts have failed and make 3D graphics a mainstream part of the Web. A draft version of the new X3D -- or Extensible 3D -- standard was unveiled this week. It will be submitted to the International Standards Organization (ISO) later this year. Backers hope X3D will have more success than VRML (virtual reality modeling language), which garnered plenty of attention but never took off. Developers envisioned using VRML to build 3D shopping malls, branded online characters, and interactive product models, among other applications. Whether such uses ever catch the imagination of the everyday Web user remains the big question. Clearly, game boxes like PlayStation 2 and Xbox have made 3D graphics a mainstream phenomenon... In a boost to the fledgling standard, the Motion Picture Experts Group (MPEG) has chosen X3D as the basis for lightweight 3D graphics in the MPEG-4 standard. X3D is built using a Java-based toolkit and XML schemas for lightweight delivery and fast performance. Source code is available under the GNU Lesser General Public License, which could also help the standard take hold..." References: "Web3D Consortium Publishes Draft for Royalty-Free Extensible 3D Standard."

  • [February 28, 2002] "Standards Group Unveils Web3D Specification." By Clint Boulton. In Internet.com Developer News (February 27, 2002). "A kind of 3D version of Extensible Markup Language (XML) has come to the fore this week. The Web3D Consortium said Tuesday that it has dusted up a draft version of the X3D (Extensible 3D) standard to weave 3D graphics into applications for wireless devices, set-top boxes and gaming consoles -- pretty much any computing product. Ultimately, Web3D hopes the spec will serve as the basis for commercial use of a 'open, royalty-free standard' in preparation for submission to the International Standards Organization (ISO) in August. The organization's promise (and mantra) is to deliver '3D Anywhere' over the Web and on broadcast applications. The idea of 3D graphics for the Web is hardly new -- it just hasn't moved much since its seminal days years ago as Virtual Reality Modeling Language (VRML), when greater computing power and more bandwidth were more of an exception than the norm. To date, graphics on small client devices such as personal digital assistants (PDAs) have been nothing to rave about, but X3D, when it's ready to be implemented, could change minds of the wireless persuasion. Orinda, Calif.'s Web3D hopes to rectify lackadaisical graphics with 3D. The spec poses profiles to meet the demands of sophisticated applications, including: an Interchange Profile for exchanging X3D content among authoring and publishing systems; an Interactive Profile to support delivery of lightweight interactive animations; an Extensibility Profile to enable the development of add-on components and robust applications; and a VRML97 Profile to ensure interoperability between X3D and VRML97 legacy content. One aspect of the draft spec, which came to light this past Sunday at the Web3D Symposium in Tempe, Ariz., received approval from the Motion Picture Experts Group (MPEG): MPEG has accepted the X3D Interactive profile as the basis for interactive 3D graphics in the still-being-tinkered-with MPEG-4 multimedia standard. Specifically, the Interactive Profile will allow interoperability between X3D and MPEG-4 content, which would ideally provide a consistent platform for 3D graphics and application development across Web and broadcast environments..." See the news item of 2002-02-28: "Web3D Consortium Publishes Draft for Royalty-Free Extensible 3D Standard."

  • [February 28, 2002] "Apache Xindice XML database 1.0rc2 Released." Announcement posted by Kimbro Staken. "The Apache Xindice team is pleased to announce the release of Apache Xindice 1.0 release candidate 2. Full source code is available under the terms of the Apache Software License and downloads are available from http://xml.apache.org/xindice. Apache Xindice is a native XML database. As such it has basically one purpose, easy management of large quantities of XML data. It is not intended as a competitor for relational databases and is primarily targeted at new application development where XML plays a significant role. The server is currently suitable for medium volume XML storage applications. It supports XPath for queries and XML:DB XUpdate for XML updates. An implementation of the XML:DB XML database API is provided for Java developers and access from other languages is enabled through the download of an XML-RPC plugin. Apache Xindice was formerly known as dbXML. The dbXML source code was donated to the Apache Software Foundation in December 2001. The 1.0 release of Xindice represents the conclusion of the work undertaken by the dbXML project and the official commencement of new development on the Xindice code base. The development team has added two new members and it is expected we'll add several more in the coming weeks. Future development will focus on improved performance, ACID properties, better standards support and better integration with other Apache projects..." See the FAQ document. Related references: see (1) XML Database Products (Ronald Bourret) and (2) "XML and Databases."
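
    As a rough, hedged illustration of the XML:DB API mentioned in the announcement, the following Java sketch registers the Xindice driver and runs an XPath query against a collection; the driver class name and collection URI follow Xindice's documented conventions, while the collection name and query are invented:

        import org.xmldb.api.DatabaseManager;
        import org.xmldb.api.base.Collection;
        import org.xmldb.api.base.Database;
        import org.xmldb.api.base.Resource;
        import org.xmldb.api.base.ResourceIterator;
        import org.xmldb.api.modules.XPathQueryService;

        public class XindiceQuery {
            public static void main(String[] args) throws Exception {
                // Register the Xindice implementation of the XML:DB driver.
                Database db = (Database) Class
                        .forName("org.apache.xindice.client.xmldb.DatabaseImpl")
                        .newInstance();
                DatabaseManager.registerDatabase(db);
                // Open a collection and run an XPath query over its documents.
                Collection col =
                        DatabaseManager.getCollection("xmldb:xindice:///db/addressbook");
                XPathQueryService svc =
                        (XPathQueryService) col.getService("XPathQueryService", "1.0");
                ResourceIterator it = svc.query("//person[fname='John']").getIterator();
                while (it.hasMoreResources()) {
                    Resource r = it.nextResource();
                    System.out.println(r.getContent());
                }
                col.close();
            }
        }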

  • [February 28, 2002] "The Visual Display of Quantitative XML." By Fabio Arciniegas A. From XML.com. February 27, 2002. "The need to display quantitative data stored in XML files is quite common, even when transforming the most basic documents. For example, consider the following cases: (1) Number and type of hits registered in a server log; (2) Percentage of sales by an individual on an annual sales report; (3) Number of technical books vs. the total book count in a book list (almost every XML book in the world has that example); (4) Proportion of annotations per paragraph in a DocBook article. While quantitative XML data is everywhere, a less common thing to find is examples of effective ways to display such information. Most resources will merely show you how to use XSLT to convert XML data to HTML, which is often not nearly enough when you need to explain complex or large sets of data. This article discusses the creation of useful graphical presentations of quantitative XML data using XSLT and SVG... The use of XSLT and SVG opens up exciting new ground for the presentation of XML data on the Web. The correct use of these tools may improve vastly the quality and quantity of information your users can consume, as well as your process to present and create it. The creation of good visual representations of XML data using XSLT is governed by principles and best practices both on the programming and technical graphic design sides. In this article we have examined a few of them while providing an illustration of their implementation. The principles studied can be summarized in the following list. Naturally, this article is not exhaustive, but I hope it whets your appetite for the creation of useful graphic data using XML technologies..." See "W3C Scalable Vector Graphics (SVG)."
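
    The charting logic in the article lives entirely in the XSLT stylesheet; a minimal Java sketch of driving such an XML-to-SVG transformation through the standard JAXP/TrAX API (the file names here are hypothetical) might look like this:

        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.stream.StreamResult;
        import javax.xml.transform.stream.StreamSource;

        public class ChartTransform {
            public static void main(String[] args) throws Exception {
                // Compile the stylesheet that maps quantitative XML to SVG ...
                Transformer t = TransformerFactory.newInstance()
                        .newTransformer(new StreamSource("sales-to-svg.xsl"));
                // ... and apply it to the data, producing an SVG document.
                t.transform(new StreamSource("sales.xml"),
                            new StreamResult("sales-chart.svg"));
            }
        }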

  • [February 28, 2002] "Server Side SVG." By J. David Eisenberg. From XML.com. February 27, 2002. "If you've been using SVG or reading XML.com, you probably know about the Adobe SVG Viewer, and you may have heard of the Apache Batik project. Although Batik is most widely known for its SVG viewer component, it's more than that. Batik, according to its web site, is a 'Java technology-based toolkit for applications that want to use images in the Scalable Vector Graphics (SVG) format for various purposes, such as viewing, generation or manipulation.' The Batik viewer application uses the JSVGCanvas component, which is a Java class that accepts SVG as input and displays it on screen. In this article, we'll use two of the other Batik components, SVGGraphics2D, and the Batik transcoders. The SVGGraphics2D class is the inverse of JSVGCanvas; you draw into an SVGGraphics2D environment using the standard Java two-dimensional graphics methods, and the result is an SVG document. The transcoders take an SVG document as input and produce either JPG or PNG as output. The context in which we'll use these tools is a servlet that generates geometric art in the style of Piet Mondrian. If the client supports SVG, the servlet will return an SVG document. Otherwise, it will return a JPEG or PNG image, depending upon client support for those image formats..." See "W3C Scalable Vector Graphics (SVG)."
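
    A minimal sketch of the SVGGraphics2D pattern described here (draw with ordinary Java2D calls, then stream the result as SVG) might look like the following; the red rectangle merely stands in for the servlet's Mondrian-style output:

        import java.awt.Color;
        import java.io.OutputStreamWriter;
        import java.io.Writer;

        import org.apache.batik.dom.GenericDOMImplementation;
        import org.apache.batik.svggen.SVGGraphics2D;
        import org.w3c.dom.DOMImplementation;
        import org.w3c.dom.Document;

        public class SvgGenSketch {
            public static void main(String[] args) throws Exception {
                // An empty SVG document serves as the generation context.
                DOMImplementation impl =
                        GenericDOMImplementation.getDOMImplementation();
                Document doc = impl.createDocument(
                        "http://www.w3.org/2000/svg", "svg", null);
                SVGGraphics2D g = new SVGGraphics2D(doc);
                // Any Graphics2D drawing works; the result is captured as SVG.
                g.setPaint(Color.red);
                g.fillRect(10, 10, 120, 80);
                Writer out = new OutputStreamWriter(System.out, "UTF-8");
                g.stream(out, true); // true = use CSS style attributes
            }
        }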

  • [February 28, 2002] "Doing That Drag Thang." By Antoine Quint. From XML.com. February 27, 2002. "In last month's article, we took a wee trip in the exciting lands of SMIL-powered SVG animation. In that article, we used XML elements to achieve our goals. Today I will show you around a place that might sound a little scary, but that's just as much fun when you take the time to imagine how many possibilities it offers: it is time to take a look at scripting SVG, for all the nifty interactions that declarative SVG Animation could not handle. As an XML application, SVG benefits from the Document Object Model. The DOM is an object-oriented API for reading from and writing to an XML document. Even if you've never heard of the DOM, you might have had some unfortunate experience with its wayward sibling, DHTML. DHTML really was the combination of HTML, CSS, JavaScript, and a DOM. What made DHTML such a headache is that the two main browser vendors had different DOMs, neither being compliant with the DOM as specified by the W3C. Recent versions of major browsers now support the W3C DOM Level 2, just like the Adobe SVG Viewer, which also offers support for the SVG DOM... Scripting SVG opens up many new possibilities. While client-side scripting is a well-established practice in different environments (especially DHTML and Flash ActionScript), I believe the SVG scripting environment offers a more comprehensive and standards-based approach. Adobe's SVG Viewer version 3.0 offers stable and powerful tools for us to work with in a way that has never been possible before..."
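
    The article's examples are ECMAScript running in Adobe's viewer; purely as an illustration of the same W3C DOM Level 2 pattern, here is how the event-wiring side might look in Java against an events-capable SVG DOM such as Batik's (the element id and the fixed translation are invented, and a real drag handler would also track mousemove and mouseup):

        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.events.Event;
        import org.w3c.dom.events.EventListener;
        import org.w3c.dom.events.EventTarget;

        public class DragSketch {
            public static void wireDrag(Document doc) {
                // Look up the draggable shape and attach a DOM event listener.
                final Element puck = doc.getElementById("puck");
                EventTarget target = (EventTarget) puck;
                target.addEventListener("mousedown", new EventListener() {
                    public void handleEvent(Event evt) {
                        // Reposition the shape by rewriting its transform;
                        // a full drag would compute the delta from the pointer.
                        puck.setAttribute("transform", "translate(10,10)");
                    }
                }, false);
            }
        }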

  • [February 28, 2002] "Handling Attachments in SOAP. Transferring Foreign Objects With Apache SOAP 2.2." By Joshy Joseph (Software Engineer, IBM Software Group). From IBM developerWorks, XML Zone. February 2002. ['Web services will require the ability to send more than just text messages between services in a process. Often it will involve complex data types such as language structures, multimedia files, and even other embedded messages. This article takes a look at how the SOAP with Attachments specification can be used to send such information. It provides a programming example of how to handle custom data type mapping and attachments in your SOAP services.'] "Web services has been well received by the industry as a way to solve complex problems and integrate distributed processes across multiple platforms and systems. Web services achieves this through the use of standards-based protocols like SOAP, WSDL, and UDDI, and through the developments in standards groups. These standards are still evolving and have yet to solidify solutions for all the areas needed. Within these grey areas, you still need to figure out how to transfer custom data types, such as arrays of data structures, between services, as well as how to handle attachments to transfer binary and other complex data files. In this article, I will explain the current protocol and tool support for handling custom data encoding and attachments in Web services. I will also present a simple case study that illustrates how you can use the existing tools and standards to create file upload and download Web services. Before I move on to those topics, let's review how SOAP and WSDL handle data encoding while they work... The XML Protocol committee initiated work to define a set of new software practices or usage scenarios. This includes multiple asynchronous messaging, caching, routing, and the streaming of huge data. It may take some time for the standardization, but you can expect SOAP processors coming to the market with these new features, including data streaming capabilities, rather than just block transfers using direct attachments. BEEP (Blocks Extensible Exchange Protocol) is a new protocol that supports multiple channels over a single TCP connection, which can be used for such asynchronous and streaming types of data communication. Direct Internet Message Encapsulation (DIME) is another lightweight, binary message format that can be used to encapsulate one or more application-defined payloads of arbitrary type and size into a single message construct. DIME has strengths in two areas. The sender either pre-computes the size of the payload or "chunks" the payload into records of fixed size. This helps server applications to compute the memory requirements in advance, and hence increases performance. The ability to specify new media types using a URI mechanism may allow receiving applications to load handlers for new media types. However, this is still primarily a Microsoft-defined specification..." Also in PDF.
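
    As a hedged sketch of the technique the article covers, the following Apache SOAP 2.2 client passes a file as a javax.activation.DataHandler parameter, which the toolkit marshals as a MIME part per the SOAP with Attachments note; the service URN, method name, and endpoint URL are invented:

        import java.net.URL;
        import java.util.Vector;

        import javax.activation.DataHandler;
        import javax.activation.FileDataSource;

        import org.apache.soap.Constants;
        import org.apache.soap.rpc.Call;
        import org.apache.soap.rpc.Parameter;
        import org.apache.soap.rpc.Response;

        public class UploadClient {
            public static void main(String[] args) throws Exception {
                // Wrap the file in a DataHandler; Apache SOAP sends it as an
                // attachment rather than inlining it in the SOAP body.
                DataHandler dh = new DataHandler(new FileDataSource("report.pdf"));
                Call call = new Call();
                call.setTargetObjectURI("urn:FileService");
                call.setMethodName("upload");
                call.setEncodingStyleURI(Constants.NS_URI_SOAP_ENC);
                Vector params = new Vector();
                params.addElement(new Parameter("file", DataHandler.class, dh, null));
                call.setParams(params);
                Response resp = call.invoke(
                        new URL("http://localhost:8080/soap/servlet/rpcrouter"), "");
                if (!resp.generatedFault()) {
                    System.out.println("upload accepted");
                }
            }
        }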

  • [February 28, 2002] "Extending XML Tools with Jacl Scripts. How to extend open-source XML tools with a Java implementation of the Tcl scripting language." By Phil Whiles (Freelance Java Developer, Skyline Computing Ltd.). From IBM developerWorks, XML Zone. February 2002. ['This article shows how to extend open-source Apache XML tools using Jacl, a Java implementation of the popular Tcl scripting language. With Jacl, you can embed scripted functionality within XML or XSL. In addition, due to its Java extensions, you can use Jacl to interact with Java objects within the Java-based Apache tools. While this article shows how to use Jacl with the Ant build tool, the approach is equally valid for extending other Apache XML tools such as Xalan and Cocoon. More than a dozen reusable code samples demonstrate the techniques.'] "Jacl gives Java developers yet another way to manipulate XML. With Jacl, a Java implementation of the popular scripting language Tcl, you can get under the hood and add functionality to your XML build made with Ant or to XSL transformations done with Xalan or Cocoon. For XML and XSL developers, Jacl could open up a whole new world... In addition to implementing most of the Tcl command set, Jacl provides some of its own Tcl commands that allow the programmer to interact with the Java VM. These additional commands allow Jacl code to create Java objects, invoke methods in them, invoke static methods, introspect Java objects, and even bind Jacl listeners to Java events. This ability to use both Tcl and Java code in the same script opens up a world of possibilities. Imagine writing a suite of Java services, components, or building blocks for a system. By using Jacl in the same VM as these components, you can allow for scripted execution and control of your Java components... Jacl will open up all sorts of possibilities for the Ant build process developer. You can also use Jacl to extend Xalan and Cocoon in a similar fashion, and IBM WebSphere now uses Jacl as the scripting language for its command-line interface, WSCP, so you can put your newfound knowledge to use there as well. Maybe this article will even persuade you to use Jacl as a standalone scripting language for your Java projects..."
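
    A minimal sketch of embedding Jacl from Java follows; tcl.lang.Interp is the interpreter class that tools such as Ant's <script> task drive under the covers, and the script text here is invented, showing plain Tcl alongside Jacl's java:: commands calling back into Java objects:

        import tcl.lang.Interp;

        public class JaclSketch {
            public static void main(String[] args) throws Exception {
                Interp interp = new Interp();
                try {
                    // Evaluate ordinary Tcl ...
                    interp.eval("set greeting {Hello from Jacl}; puts $greeting");
                    // ... and Tcl that creates and drives a Java object.
                    interp.eval("package require java\n"
                              + "set buf [java::new java.lang.StringBuffer]\n"
                              + "$buf append {built by a script}\n"
                              + "puts [$buf toString]");
                } finally {
                    interp.dispose();
                }
            }
        }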

  • [February 27, 2002] "Standards-Based Methodology for U.S. E-Government Initiatives." By Alan Kotok. DISA (Data Interchange Standards Association). February, 2002. 33 pages. "E-government efforts now underway can benefit immediately and directly from open standards, which provide consistency and stability, while encouraging interoperability among agencies and applications, as well as fostering innovative solutions. The National Technology Transfer and Advancement Act of 1995 and Office of Management and Budget (OMB) circular A-119 spell out the value of open IT standards, from which agencies and the private sector have derived benefit over the years. In the past year, the Electronic Business XML or ebXML specifications, a joint undertaking of the United Nations Centre for Trade Facilitation and Electronic Business (that uses the acronym UN/CEFACT) and the Organization for the Advancement of Structured Information Standards (OASIS), have begun taking root in the business world. Those specifications, which take advantage of existing Internet standards and encourage migration from existing interchange formats, can provide a framework for the administration's e-government initiatives. This paper outlines a methodology based on open standards for the planning and development of the e-government initiatives. The approach is based on the Open-edi Reference Model, an international standard for the development of e-business specifications, published in 1997, and which plays an important role in the ebXML specifications. The Open-edi Reference Model specifies two complementary views of e-business: a business-operational view that defines the interactions between the parties, and a functional-services view that outlines the technical aspects of the interactions, such as required protocols and interfaces. This approach to e-business planning puts an emphasis on the identification or definition of business processes separate from the technology, which helps create specifications independent of technical implementation. This approach helps ensure vendor-neutral business requirements and encourages interoperability among applications... With their business processes defined, organizations (agencies, companies, or entire industries) can then identify the parties in the transactions, the messages exchanged between the parties, and the data included in those messages. At that point, organizations can identify standards and specifications that apply to those processes, especially where messages or data elements can be reused. This reusability encourages common implementations and thus interoperability, which can reduce costs and open new interactions among the parties. To illustrate the approach, this paper takes three of the 23 Federal e-government initiatives, and for each project recommends a set of business processes identified by ebXML, as well as corresponding electronic data interchange (EDI) transactions and XML schemas defined by the Open Applications Group. Also for each initiative, the paper identifies potential industry vocabularies using XML. Just these few examples suggest areas for interoperability that come from identification of business processes and current open standards..." See the DISA reference page.

  • [February 27, 2002] "Codes for the representation of names of languages -- Part 1: Alpha-2 code. [Codes pour la représentation des noms de langue -- Partie 1: Code alpha-2.]." From ISO/TC 37/SC 2 (Secretariat: SCC). International Standard ISO/FDIS 639-1. Reference: ISO/FDIS 639-1:2002(E/F). Final Draft. 48 pages. Voting begins on 2002-02-28. Voting terminates on 2002-04-28. "ISO 639 provides two language codes, one as a two-letter code (ISO 639-1) and another as a three-letter code (ISO 639-2) for the representation of names of languages. ISO 639-1 was devised primarily for use in terminology, lexicography and linguistics. ISO 639-2 represents all languages contained in ISO 639-1 and in addition any other language, as well as language groups, as they may be coded for special purposes when more specificity in coding is needed. The languages listed in ISO 639-1 are a subset of the languages listed in ISO 639-2; every language code element in the two-letter code has a corresponding language code element in the three-letter code, but not necessarily vice versa. Both language codes are to be considered as open lists. The codes were devised for use in terminology, lexicography, information and documentation (i.e., for libraries, information services, and publishers) and linguistics. ISO 639-1 also includes guidelines for the creation of language code elements and their use in some applications... The alpha-2 code was devised for practical use for most of the major languages of the world that are not only most frequently represented in the total body of the world's literature, but which also comprise a considerable volume of specialized languages and terminologies. Additional language identifiers are created when it becomes apparent that a significant body of documentation written in specialized languages and terminologies exists. Languages designed exclusively for machine use, such as computer-programming languages, are not included in this code..." Background may be found at an ISO 639 web site maintained by Håvard Hjulstad. See: "Language Identifiers in the Markup Context."

  • [February 27, 2002] "IDF: Experts Wrangle With Web Services Barriers." By Dan Neel. In InfoWorld (February 27, 2002). "Although the adoption of Web services is still in its infancy, representatives from leading IT vendors today at the Intel Developer Forum (IDF) discussed the technical challenges that increased distributed computing will pose. Problems such as a lack of existing best practices for deploying Web services, the distribution and balancing of compute cycles across a complex Web service network, and the changes to company business models needed to better accommodate a Web services infrastructure were all mentioned as challenges facing the evolution of Web services. Panel member Eric Schmidt, the technical evangelist for Microsoft, in Redmond, Wash., said the exploding amount of Web-based messaging expected from the growth of Web services means figuring out how to build hardware networks equipped to handle the increased traffic. 'When you think about the CPU cycle time that will be required for Web services, what's the best [hardware] architecture to go after? What should we be asking vendors to build to deliver this degree of messaging?' asked Schmidt. Keith Yedlin, a senior architect for Intel's computer modeling division in Santa Clara, Calif., agreed that Web services will put a strain on hardware as they continue to grow. 'The XML routing and parsing alone will add a huge additional requirement for MIPS [million instructions per second] -- an old measure of a computer's speed and power -- and this could be a potential barrier to the adoption of Web services if you have to add all this new hardware,' he said. Recognizing that there will be an increased amount of data traffic resulting from multiple messaging systems running Web services, Ben Renard, a principal technologist for BEA, in San Jose, Calif., recommended that companies implement Web services first in-house, where testing can be done more easily and more securely..."

  • [February 26, 2002] "RosettaNet Updates Supply Chain Results." By Tom Smith. In InternetWeek (February 26, 2002). "RosettaNet, a consortium focused on developing standards for automated supply chain interactions, on Tuesday delivered a status update on its activities through the end of 2001 in several critical areas. The organization -- backed by technology giants including Intel, Cisco, and others -- said it had been able to effect 450 'partner connections' worldwide from May 2001 to the end of 2001, a result that was made more difficult due to economic conditions and companies' reluctance to pour resources into IT projects, said RosettaNet CEO Jennifer Hamilton. 'The industry has been extremely challenged, and had to struggle through smaller staffs and smaller budgets,' Hamilton said. Financial and market conditions made RosettaNet board members question whether they had too many or too aggressive 'milestones,' she said. Despite that, the organization achieved its participation milestones, as well as goals related to specific processes. In one of those critical areas -- order management -- RosettaNet electronics industry members in Japan implemented multiple Partner Interface Processes (PIPs) that are referred to as business scenarios. These companies deployed RosettaNet order management standards that address regional requirements. In another area -- product discovery/distribution -- RosettaNet DesignWin standards were used to create greater channel efficiency, automated data collection and sharing between partners, and better, faster data reporting..." See "RosettaNet."

  • [February 26, 2002] "W3C Won't Let Patent Fees Enter Standards Process." By Richard Karpinski. In InternetWeek (February 26, 2002). "Responding to howls of protest, the World Wide Web Consortium Tuesday forwarded a revised proposal that backs away from allowing companies to extract royalties for technologies they own and are used as part of W3C standards. The W3C opened debate on the topic last fall. At issue: whether or not to allow vendors to enforce so-called RAND, or reasonable, non-discriminatory, patent claims. If such claims were allowed, vendors would have been able to collect license fees from companies making use of W3C standards that included their patented technologies. Not surprisingly, the proposal drew heated feedback -- including from InternetWeek readers. Critics charged that the proposal undercut the values of the W3C and threatened the openness of the Web. Tuesday, the W3C published a new patent policy draft that removed RAND licensing from the mix, instead focusing on developing new policies for ensuring that technologies contributed to W3C standards be, by policy, 'royalty-free'... While the RAND proposal was removed from this draft, apparently the issue is not completely dead..."

  • [February 26, 2002] "W3C Retreats From Royalty Policy." By Margaret Kane. In CNET News.com (February 26, 2002). "An Internet standards body has retreated from a proposal that would have allowed companies to claim patent rights and demand royalties for technologies used in its standards. The World Wide Web Consortium works with developers, software makers and others to come up with standards for the Web. Generally those standards either use publicly available technology or get the agreement of patent holders not to enforce their patents... In a reference draft being published Tuesday, the W3C has moved back to the 'royalty free' standard... 'The current practice is we set the goal of producing royalty free standards but it doesn't really have a mechanism for enforcing that,' said Daniel J. Weitzner, chair of the patent policy working group at the W3C. 'What we're proposing in this draft is to add a legally binding commitment on the part of anyone who participates in a standard that any patents they have will be available royalty free.'..." See the news item: "W3C Publishes Royalty-Free Patent Policy Working Draft."

  • [February 26, 2002] "W3C Flips, Endorses Royalty-Free Standards." By Scarlet Pruitt. In InfoWorld (February 26, 2002). "Responding to thousands of e-mail messages lobbying against the attachment of royalty fees to World Wide Web Consortium (W3C) standards, the group released a new draft patent policy Tuesday endorsing free specifications. The W3C, whose goal is to develop common protocols to ensure interoperability on the Web, said that the new draft places a 'strong and explicit commitment' to royalty-free standards. The group was met with a flurry of criticism last August when it released its first patent policy working draft, which opened the door for companies to claim patent rights and collect royalties for standards endorsed by the W3C. The group said that it revised its patent policy draft after receiving thousands of e-mail messages from both W3C members and the public expressing concern about the royalty fees. Advocates of open-source software, which is often cooperatively developed and freely available over the Web, were particularly unsettled by the possibility of royalty rates being attached to international Web standards. Although the group has changed its stance on the matter, it said that it still has to figure out how to deal with technology that is only available for a fee..." See the news item: "W3C Publishes Royalty-Free Patent Policy Working Draft."

  • [February 26, 2002] "Government To Give Web Services the Go-Ahead." By [Bellman]. In IT-Director.com (February 26, 2002). "Next month, Andrew Pinder, [UK Office of the e-Envoy], is due to set out a consultation paper for the further development of the UK's eGovernment Interoperability Framework (eGIF). It is expected that this paper will recommend the use of SOAP and UDDI standards for the provision of web services and, in so doing, will provide a massive boost to the web services market. eGIF is a collection of rules, policies and technical specifications designed to create an infrastructure through which central and local government units can share and present information efficiently and with some consistency. The reason that this recommendation will be such a boost is that, if adopted, the choice of standard becomes mandatory for all UK government departments that deliver their public services online. Of course, under the eGovernment initiatives, the aim is to get all of the possible public services into the online domain by 2005. There will be significant demand within the UK for any technology that is incorporated into the eGIF specifications... Given the nature of the solutions that the eEnvoy is seeking for 'joined-up government', the use of SOAP and UDDI as standards for integrating and executing application components is not going to be a great surprise. These are the key features that will ensure that the actual choice of technology vendor will have the smallest possible effect on government's ability to work in a consistent environment..." See: "e-Government Interoperability Framework (e-GIF)."

  • [February 25, 2002] "Using XBRL For Data Reporting." Submitted by the Australian Bureau of Statistics. UN/ECE Statistical Division (February 15, 2002). Statistical Commission and Economic Commission for Europe, Conference of European Statisticians. Working Paper No. 20. Joint UNECE/EUROSTAT Work Session on Electronic Data Reporting (Geneva, Switzerland, 13-15 February 2002). Topic (iii): Metadata, conceptual models and standards. "Over the years, a number of different mechanisms for exchanging data have been developed. Until the Internet and Extensible Markup Language (XML), these mechanisms tended to be proprietary or unique to the application or purpose for which each was created. eXtensible Business Reporting Language (XBRL) is one of the many industry-specific 'languages' of XML. XBRL, hailed as 'the digital language of business', facilitates the reuse of information contained in business reports, providing structure and context for that information... The Australian Bureau of Statistics sees XBRL as the 'language' likely to succeed as the industry accepted 'business reporting language'. Leaders in the accounting profession such as the Financial Accounting Standards Board (FASB) and the International Accounting Standards Committee (IASC) have researched the impact of the Internet on the distribution of financial information and have reached the conclusion that XBRL, or something similar, is needed. XBRL is strongly supported in the Australian accounting and consulting sector. The Australian Prudential Regulatory Authority (APRA) is also strongly supporting XBRL and is already accepting and disseminating information in XBRL... An XBRL taxonomy is not a standard chart of accounts to use, rather, it is a way to map an internally used chart of accounts to common terms used externally. XBRL does not change the underlying accounting and classification differences that exist today in financial reporting." See: "Extensible Business Reporting Language (XBRL)."
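    The mapping idea can be sketched with a simplified, hypothetical instance fragment; the element names and namespace below are invented for illustration, not drawn from an actual XBRL taxonomy, but they show how a reported figure is tagged with an externally defined concept rather than an internal account name:

        <!-- Hypothetical fragment; element names and namespace invented for illustration -->
        <group xmlns:acct="http://example.org/hypothetical-taxonomy"
               period="2001-07-01/2002-06-30">
          <acct:operatingRevenue unit="AUD">1500000</acct:operatingRevenue>
          <acct:operatingExpenses unit="AUD">1200000</acct:operatingExpenses>
        </group>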

  • [February 25, 2002] University of Bologna Test Implementation of XPointer. 2002-02 work in progress. One of "two different implementations of XPointer at the University of Bologna (each one part of a larger project)... The XSLT++ engine is an extended XSLT processor for the generation of meta-information out of a large base of homogeneous XML documents. In our intention, it should match not only nodes of the XML tree, but also arbitrary strings and patterns within the source document. We foresee several extensions, especially for XPointer-based expressions..." See the "very simple interface for an early implementation of XPointer. This work is part of the XSLT++ project, which is being done by Claudio Tasso for his master's thesis in Computer Science at the University of Bologna, under the supervision of Fabio Vitali. This program is released as free software under the terms of the GNU General Public License; see source code. The current implementation extends the XPath libraries available with Xalan. The current implementation is not complete yet; for instance, character escaping is not supported at the moment... Try it: Enter a well-formed XML document in the first textarea, and an XPointer query in the text box..." See W3C XML Pointer, XML Base and XML Linking.
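    For readers unfamiliar with the syntax such an implementation must evaluate, here are two illustrative XPointer fragment identifiers (document and names invented):

        http://example.org/doc.xml#intro
            the bare-name ('raw name') form, selecting the element with ID "intro"
        http://example.org/doc.xml#xpointer(id('intro')/para[2])
            a full XPointer expression, selecting the second para child of that element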

  • [February 25, 2002] "Diameter XML Dictionary." By David Frascone, Mark Jones, and Erik Guttman. AAA Working Group Internet-Draft. Reference: 'draft-frascone-aaa-xml-dictionary-00.txt'. "This document describes a coding of Diameter dictionaries in XML. This coding is intended for use by Diameter implementations to represent Applications, Commands, Vendors, and AVPs... Diameter is an extensible protocol used to provide AAA services to different access technologies. To maintain extensibility, Diameter uses a dictionary to provide it with the format of commands and AVPs. This document describes the representation of the Diameter dictionary using XML..." With XML DTD. On Diameter, see Diameter Base Protocol, draft-ietf-aaa-diameter-08.txt (AAA Working Group, November 2001). "The Diameter base protocol is intended to provide a AAA framework for Mobile-IP, NASREQ and ROAMOPS. This draft specifies the message format, transport, error reporting and security services to be used by all Diameter applications and must be supported by all Diameter implementations. The basic concept behind Diameter is to provide a base protocol that can be extended in order to provide AAA services to new access technologies. Currently, the protocol only concerns itself with Internet access, both in the traditional PPP sense as well as taking into account the ROAMOPS model, and Mobile-IP. Although Diameter could be used to solve a wider set of AAA problems, we are currently limiting the scope of the protocol in order to ensure that the effort remains focused on satisfying the requirements of network access. Note that a truly generic AAA protocol used by many applications might provide functionality not provided by Diameter. Therefore, it is imperative that the designers of new applications understand their requirements before using Diameter..." The AAA (Authentication, Authorization and Accounting) working group "focuses on the development of requirements for Authentication, Authorization and Accounting as applied to network access." [cache]
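    The draft's DTD fixes the concrete format; purely as a sketch of the general shape, a dictionary entry tying an AVP name and code to its data type might look like the following (element and attribute names here are illustrative, not copied from the draft):

        <!-- Illustrative sketch only; see the draft's DTD for the actual format -->
        <dictionary>
          <vendor id="0" name="IETF"/>
          <command name="Capabilities-Exchange" code="257"/>
          <avp name="Session-Id" code="263" type="UTF8String"/>
        </dictionary>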

  • [February 25, 2002] "XML Grove." By Jeff Southard. ['XML Grove' is a working title.] Updated 2001-12-12 or later. "This is a work in progress designed to introduce designers and developers to XML tree concepts and SVG. The XML Grove is a demonstration of XML, XSLT and SVG. The grove is a collection of XML trees. An XSL Transform generates an SVG-based visualization of each XML document tree as...a tree: (1) Element nodes are branches. (2) Attribute nodes are leaves. (3) Text nodes are fruit. Visitors tour the grove and inspect the trees. Each part of the tree is 'hot.' On rollover, it shows its corresponding node value... Requires SVG Viewer 2.0 or higher. If you know XSLT and the basics of SVG, you may find the source files illustrative..." Compare "SVG tree" from Wendell Piez ("draws an SVG 'tree structure' representing the XSLT/XPath infoset of an arbitrary input document...").
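    A greatly simplified sketch of the transform idea (the real stylesheet adds layout and rollover interactivity, which this omits; all names besides the standard namespaces are illustrative):

        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns="http://www.w3.org/2000/svg">
          <xsl:template match="/">
            <svg><xsl:apply-templates/></svg>
          </xsl:template>
          <!-- element nodes become branches -->
          <xsl:template match="*">
            <g>
              <line x1="0" y1="0" x2="0" y2="40" stroke="brown"/>
              <xsl:apply-templates select="@*|node()"/>
            </g>
          </xsl:template>
          <!-- attribute nodes become leaves -->
          <xsl:template match="@*"><circle r="3" fill="green"/></xsl:template>
          <!-- text nodes become fruit -->
          <xsl:template match="text()"><circle r="5" fill="red"/></xsl:template>
        </xsl:stylesheet>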

  • [February 25, 2002] "All the News That's Fit to RSS." By Mark Gibbs. In Network World (February 15, 2002). "Getting our news fix is tough. We have an ongoing quest for new sources of news and better methods of mining those sources. Others obviously must feel the same, as there is now a standard for news syndication called Rich Site Summary (RSS) that has achieved remarkable acceptance (as determined by the number of organizations using it). RSS, also called RDF Site Summary, is an XML-based format that lets Web sites describe and syndicate site content. Actually, according to one of the main culprits in the development of RSS, the infamous Dave Winer of UserLand, 'There is no consensus on what RSS stands for, so it's not an acronym, it's a name.'... RDF stands for Resource Description Framework, a framework for the description and interchange of metadata concerning just about anything that has a uniform resource identifier, or URI... Anyway, in 1999, Netscape released a format for adding news channels to its portal My.Netscape.Com -- this was RSS 0.9, which was based on RDF. There followed a reasonably complex history... The complex history has resulted in multiple versions of the standard being deployed. You will find RSS 0.9, 0.91, 0.92, 0.93 and 1.0 in the field (apparently RSS 0.9 and 0.91 are the most popular). Today, the W3C standard is RSS 1.0. So, what does RSS do for news? Well, according to the O'Reilly Network, it is a 'specification used for distributing news, product announcements, discussion threads, and other assorted content as channels.'... you must be wondering how RSS is actually deployed. First, a Web site that wants to distribute its content (that is, be a publisher) creates an RSS specification of what it has to offer. That file is usually located in the root of your Web site but you can put it anywhere you please. Indeed, a single site could have multiple RSS specifications scattered throughout its structure. The next step is to register with an RSS directory -- see ASPRSS Directories, UserLand, XMLTree, NewsIsFree and GrokSoup. Note that anyone can publish anything, so you'll find many fabulously self-indulgent Web logs among the more useful news sources. Then again, everyone has to start somewhere with banging the rocks together... Next week, a cool utility for accessing news sources via RSS..." See references in "RDF Site Summary (RSS)."
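    To make the mechanics concrete, a minimal RSS 0.91 file of the kind a publisher places on its site looks roughly like this (site name and URLs invented):

        <?xml version="1.0"?>
        <rss version="0.91">
          <channel>
            <title>Example News</title>
            <link>http://www.example.com/</link>
            <description>Headlines from Example.com</description>
            <language>en-us</language>
            <item>
              <title>Widget format ratified</title>
              <link>http://www.example.com/news/widgets.html</link>
              <description>Standards body approves the widget exchange format.</description>
            </item>
          </channel>
        </rss>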

  • [February 22, 2002] "REST and the Real World." By Paul Prescod. From XML.com. February 20, 2002. ['A follow-up by Paul Prescod to his "Second Generation Web Services" article published at the beginning of this month. In his first article, Paul proposed a model called REST as the successor to current web services technology. In "REST and the Real World", he goes on to explain how REST meets real-world requirements such as security and auditing. The REST model has certainly been the center of much debate on the mailing lists recently, proposing as it does a different view from that taken by SOAP advocates.'] "In the last article I described a new model for web services construction. It is called Representational State Transfer (REST), and it applies the principles of the Web to transaction-oriented services, rather than publishing-oriented sites. When we apply the strategy in the real world, we do so using web technologies such as URIs, HTTP, and XML. Unlike the current generation of web services technologies, however, we make those three technologies central rather than peripheral -- we rethink our web service interface in terms of URIs, HTTP, and XML. It is this rethinking that takes our web services beyond the capabilities of the first generation technologies based on Remote Procedure Call APIs like SOAP-RPC. In this article I discuss the applicability to REST of several industry buzzwords such as reliability, orchestration, security, asynchrony, and auditing. Intuitively, it seems that the Web technologies are not sophisticated enough to handle the requirements for large-scale inter-business commerce. Those who think of HTTP as a simple, unidirectional GET and POST protocol will be especially surprised to learn how sophisticated it can be... REST is a model for distributed computing. It is the one used by the world's biggest distributed computing application, the Web. When applied to web services technologies, it usually depends on a trio of technologies designed to be extremely extensible: XML, URIs, and HTTP. XML's extensibility should be obvious to most, but the other two may not be. URIs are also extensible: there are an infinite number of possible URIs. More importantly, they can apply to an infinite number of logical entities called "resources." URIs are just the names and addresses of resources. Some REST advocates call the process of bringing your applications into this model "resource modeling." This process is not yet as formal as object oriented modeling or entity-relation modeling, but it is related. The strength and flexibility of REST comes from the pervasive use of URIs... The best part about REST is that it frees you from waiting for standards like SOAP and WSDL to mature. You do not need them. You can do REST today, using W3C and IETF standards that range in age from 10 years (URIs) to 3 years (HTTP 1.1). Whether you start working on partner-facing web services now or in two years, the difficult part will be aligning your business documents and business processes with your partners'. The technology you use to move bits from place to place is not important. The business-specific document and process modeling is."
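    As an invented example of the style Prescod describes: a purchase order is modeled as a resource with its own URI, and ordinary HTTP methods operate on its XML representation:

        GET /orders/1234 HTTP/1.1
        Host: www.example.com
        Accept: text/xml

        HTTP/1.1 200 OK
        Content-Type: text/xml

        <order xmlns="http://example.com/ns/orders" status="open">
          <item sku="47-Q" quantity="2"/>
        </order>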

  • [February 22, 2002] "SOAP Encodings, WSDL, and XML Schema Types." By Martin Gudgin and Timothy Ewald. From XML.com. February 20, 2002. ['This month's installment of XML Endpoints, our web services column, focuses on the relationship between WSDL, SOAP encoding and W3C XML Schema types. In particular, it looks at how the encoding of a SOAP message is generated when a WSDL description and corresponding schema are not the starting point for using the web service.'] "Using a web service involves a sender and a receiver exchanging at least one XML message. The format of that message must be defined so that the sender can construct it and the receiver can process it. The format of a message includes the overall structure of the tree, the local name and namespace name of the elements and attributes used in the tree, and the types of those elements and attributes. The names and types of the elements and attributes contained in the message can be defined in a schema. The Web Services Description Language (WSDL) can use a schema in this way. And if a WSDL description of the web service is the starting point, then the message format is known before a line of code is written. However, in many cases, the code that is to be exposed as a web service already exists. In other cases, developers are reluctant to start with WSDL, preferring to start with some programmatic data structure. Even in these cases, some description of the web service is needed in order for clients to correctly construct request messages and destructure responses. Ideally that description would still be WSDL; otherwise clients will have to learn to read and understand multiple description languages. So in cases where a schema and associated WSDL are not the starting point, how is the WSDL to be generated and what format do the XML messages have? Many of the SOAP implementations that exist today will happily take a programmatic data type, typically a class definition of some sort, and serialize that type into XML. But in the absence of a schema, how do these implementations decide whether to use elements or attributes? How do they decide what names to give to those constructs and what the overall structure of the tree should be? The answer can be found in the SOAP Encoding section of Part 2 of the SOAP 1.2 specification..." See "Simple Object Access Protocol (SOAP)."
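    The question the column addresses can be made concrete with a hypothetical case: given a programmatic type such as class Person { string name; int age; }, one plausible SOAP-encoded serialization -- with element names and structure chosen by the toolkit rather than by a schema -- would be:

        <!-- Hypothetical toolkit output; element names and namespace are invented -->
        <p:Person xmlns:p="http://example.org/generated"
                  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <name xsi:type="xsd:string">Alice</name>
          <age xsi:type="xsd:int">30</age>
        </p:Person>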

  • [February 22, 2002] "XML 2.0 -- Can We Get There From Here?" By Kendall Grant Clark. From XML.com. February 20, 2002. ['Will there ever be an XML 2.0? Yes, says Kendall Clark in this week's XML-Deviant. The current smorgasbord of XML specifications, which currently jostle against each other for primacy, need tidying up. Kendall looks at Tim Bray's 'skunkworks' draft of an XML 2.0 specification, and the community's reaction.'] "It seems inevitable that the W3C will eventually offer a standards document which it calls XML 2.0. The post-XML 1.0 period has seen the development of too many attendant technologies, has too often heard pleas for refactoring from the development community, and too many XML-standards family warts are now widely conceded for XML 1.0 to last indefinitely. The only interesting questions which remain unanswered are what XML 2.0 will look like and how politically nasty the process that creates it will be... Tim Bray, who was so instrumental in XML 1.0, [offers] a draft for what XML 2.0 might eventually become: Extensible Markup Language - SW (for Skunkworks). Though, it should be said at the outset, Bray offers XML-SW as a highly provisional proposal; or, as he put it, 'nobody so far - not even me - has taken the stand that this is a good idea'. But it is a start. XML-SW is a conglomeration of XML 1.0 2nd edition minus the DTD machinery, including entities, with the addition of namespaces, XML Base and XML infoset. The result, in Bray's view, as well as that of some other notable XML developers, is a net gain of simplicity and elegance. Bray described some of the changes in detail..."

  • [February 22, 2002] "A Generic Fragment Identifier Syntax for URI References." By Jonathan Borden (The Open Healthcare Group) and Simon St.Laurent. IETF Network Working Group, Internet-Draft. February 19, 2002; expires: August 20, 2002. Reference: 'draft-borden-frag-00.txt'. "URI references with fragment identifiers uniquely identify parts of a document. Such identifiers have been specified as SGML/XML IDs, e.g., in HTML. The XPointer specification is intended to serve as a fragment identifier syntax for XML documents. IDs conform to the XPointer 'raw name' form. Specifications constraining the behavior of user agents such as SMIL, XHTML, and SVG have all supported this simple fragment naming convention, though some extend it. Specifications such as XML Namespaces and RDF use URI references as opaque names. Such usage does not depend on resolution of the URI. In such usages, no media type is specified and the proper fragment identifier syntax is undefined. As it has become common practice to use URI references as opaque identifiers, this proposal seeks to provide a minimal definition of what might be identified by a URI reference... Fragment identifier syntax, in practice, is often constant from media type to media type. In order to enable robust use of fragment identifiers, particularly outside a particular HTTP transaction, we propose a generic, media type independent, fragment identifier syntax. This fragment identifier syntax is compatible with current usage of fragment identifiers, and is generally compatible with future proposed syntaxes such as XPointer. This specification does not itself specify how user agents are to process or interpret fragment identifiers, such as may be specified with individual MIME media type registrations; rather, it provides a consistent syntax for fragment identifiers and a registration mechanism for schemes associated with fragment identifier syntaxes..." See the posting.
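    The two forms at issue can be seen side by side (URIs invented): the simple ID-based convention long used in HTML and XML, and a scheme-qualified form of the kind XPointer defines:

        http://example.org/doc.xml#intro                     ID-based 'raw name' form
        http://example.org/doc.xml#xpointer(//chapter[2])    scheme-qualified XPointer form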

  • [February 21, 2002] "XML Inclusions (XInclude) Version 1.0." W3C Candidate Recommendation 21-February-2002. Edited by Jonathan Marsh (Microsoft) and David Orchard (BEA Systems). Version URL: http://www.w3.org/TR/2002/CR-xinclude-20020221. Latest version URL: http://www.w3.org/TR/xinclude/. Previous version URL: http://www.w3.org/TR/2001/WD-xinclude-20010516/. ['The Working Group invites implementation feedback on this specification. We expect that sufficient feedback to determine its future will have been received by 30-April-2002.'] "This document specifies a processing model and syntax for general purpose inclusion. Inclusion is accomplished by merging a number of XML information sets into a single composite Infoset. Specification of the XML documents (infosets) to be merged and control over the merging process is expressed in XML-friendly syntax (elements, attributes, URI references)... Many programming languages provide an inclusion mechanism to facilitate modularity. Markup languages also often have need of such a mechanism. This specification introduces a generic mechanism for merging XML documents (as represented by their information sets) for use by applications that need such a facility. The syntax leverages existing XML constructs -- elements, attributes, and URI references... XInclude has a dependency on XPointer. This adds significantly to the complexity of XInclude implementations. The XML Core Working Group specifically requests feedback on the use of XPointer in XInclude, including the following: (1) Would a subset of XPointer simplify XInclude implementation? Which features should be available in this subset? (2) Would a subset of XPointer assist in building streaming XInclude processors? Which features should be available in this subset? In addition to the specific points above, any feedback on patterns of implementation and use of this specification would be very welcome. Comments on XPointer can also be reported against the XPointer specification. See (1) the mailing list archives and (2) "XML Inclusion Proposal (XInclude)."
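    A minimal example of the syntax (file names invented): the including document merges the infoset of each referenced document at the point of the xi:include element, with xi:fallback supplying content if a resource cannot be fetched:

        <book xmlns:xi="http://www.w3.org/2001/XInclude">
          <xi:include href="chapter1.xml"/>
          <xi:include href="chapter2.xml">
            <xi:fallback><para>Chapter 2 is not available.</para></xi:fallback>
          </xi:include>
        </book>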

  • [February 21, 2002] "Speech Technology For Applications Inches Forward." By Matt Berger. In InfoWorld (February 21, 2002). "An early version of an emerging technology that will allow users to control software applications using the human voice was released to developers Wednesday. A group led in part by Microsoft and Speechworks International, known as the SALT Forum, short for Speech Application Language Tags, released the first public specification of its technology. When completed, the technology would allow developers to add speech 'tags' to Web applications written in XML (Extensible Markup Language) and HTML (Hypertext Markup Language), allowing those applications to be controlled through voice commands rather than a mouse or a keyboard. Other founding members of the SALT Forum include Cisco Systems, Comverse, Intel and Philips Electronics. Nearly 20 other companies have announced support for the effort, according to information at the group's Web site. Version 0.9 of the SALT Forum specification, which is available for download, includes early design details for how a developer would go about adding a speech interface to an application. It also offers suggestions about how developers might consider using the technology to voice-enable Web applications, creating what are known as 'multimodal' programs that can be controlled by both voice and traditional input methods. The SALT specification is also designed for applications that don't have a visual user interface, such as those accessed by telephone. Microsoft said it plans to make use of SALT in three major product areas. It will release in May the beta version of an add-on tool for Visual Studio .Net that will allow developers to voice-enable applications. Those developers will also be provided with the test release of a voice-enabled version of Microsoft's Internet Explorer Web browser... After collecting comments from developers who review the specification, the SALT Forum said it plans to submit the technology to a standards body for review. Microsoft, which hosted the launch of the SALT Forum in October at an event at its Mountain View, Calif. campus, has said it expects the specification to become an open standard that will be available on a royalty-free basis. A rival effort is under way to develop a standard for speech interfaces based on a technology called VoiceXML. That effort is led by a group of companies including IBM, Motorola, AT&T and Lucent Technologies. First announced in early 1999, VoiceXML originally was designed to allow applications to be accessed by telephone. Efforts are under way to add the capability to voice-enable applications that are accessed using the Web..." See: (1) the news item "SALT Forum Publishes Draft Specification for Speech Application Language Tags"; (2) references in "Speech Application Language Tags (SALT)."
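    A rough sketch of the approach, simplified from the kind of examples the 0.9 draft gives (the namespace URI, grammar file, and attribute values below are illustrative assumptions, not quoted from the draft): speech tags sit inside an ordinary HTML page and bind recognition results to a form field:

        <html xmlns:salt="http://example.org/assumed-salt-namespace">
          <body>
            <input name="city" type="text"/>
            <!-- illustrative only; consult the 0.9 draft for the normative syntax -->
            <salt:prompt id="askCity">Which city are you leaving from?</salt:prompt>
            <salt:listen id="getCity">
              <salt:grammar src="cities.grxml"/>
              <salt:bind targetelement="city" value="//city"/>
            </salt:listen>
          </body>
        </html>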

  • [February 21, 2002] "XMLHTTP Control Can Allow Access to Local Files." Microsoft Security Bulletin MS02-008. Date: 21-February-2002. Software: Microsoft XML Core Services. Impact: Information disclosure. "Microsoft XML Core Services (MSXML) includes the XMLHTTP ActiveX control, which allows web pages rendering in the browser to send or receive XML data via HTTP operations such as POST, GET, and PUT. The control provides security measures designed to restrict web pages so they can only use the control to request data from remote data sources. A flaw exists in how the XMLHTTP control applies IE security zone settings to a redirected data stream returned in response to a request for data from a web site. A vulnerability results because an attacker could seek to exploit this flaw and specify a data source that is on the user's local system. The attacker could then use this to return information from the local system to the attacker's web site. An attacker would have to entice the user to a site under his control to exploit this vulnerability. It cannot be exploited by HTML email. In addition, the attacker would have to know the full path and file name of any file he would attempt to read. Finally, this vulnerability does not give an attacker any ability to add, change or delete data. Mitigating Factors: (1) The vulnerability can only be exploited via a web site. It would not be possible to exploit this vulnerability via HTML mail. (2) The attacker would need to know the full path and file name of a file in order to read it. (3) The vulnerability does not provide any ability to add, change, or delete files..."

  • [February 21, 2002] "Microsoft Fortifies Web Services Security." By Brian Fonseca. In Network World (February 21, 2002). "Highlighting efforts to comply with its pledge to improve security, Microsoft at the RSA Conference 2002 this week unveiled sample code for an XML filter to fortify Web services environments. Designed as a plug-in for Microsoft's enterprise firewall, Internet Security and Acceleration (ISA) Server 2000, the software is designed to protect Web services transactions from potential XML-based outside attacks and drop inadequate or suspicious message requests, said Zachary Gutt, technical product manager of ISA Server for Microsoft... Built through Visual Studio.Net and extending the application-layer filtering capability of ISA Server, Gutt said the XML filter will help customers establish a trusted framework to authenticate users, route and authorize Simple Object Access Protocol (SOAP) messages, and verify the integrity of XML data transmissions. Security analyst Chris Christiansen, program director of Internet security at IDC, suggests that escalating levels of XML traffic will open the door for hybrid attacks that could disrupt or overpower Web services, by methods such as buffer overflow or denial-of-service bombardments... Microsoft has said it will add support for Kerberos in its Passport .Net authentication mechanism to extend application interoperability. Kerberos, a network-authentication protocol using strong secret-key cryptography, is already embedded in a variety of Microsoft products and operating systems... The sample XML filter employs a simple algorithm to decide whether an XML request is valid, if the user is allowed to access the Web service behind the firewall, and if the structure and content of the XML document are valid. The free filter is available for download at Microsoft's ISA Server Web site..."

  • [February 21, 2002] "Securing Web Services with ISA Server." By Scott Seely. Microsoft Developer Network. February 2002. 26 pages. ['Create a Web Service whose security is handled by Microsoft ISA Server: look at the Web Service and what it allows callers to do, create a client application that will call Web Methods, and create an ISA Server extension to secure the Web Service... we took a look at how to add security to a Web Service at the method level by creating an ISA Extension. This extension also allowed us to block invalid incoming requests before they even access the Web Service. By using ISA, we were able to reduce the likelihood that a denial of service attack would be successful against the Web Service.'] "Routers and other network hardware can provide protection for the boxes hosting the Web Service. These can filter IP addresses, perform load balancing, and a whole host of other items. Still, a hardware router is not a good place to store custom code for inspecting SOAP messages to authenticate at the Web Method level. In SOAP v1.1, an HTTP header named SOAPAction was created to declare the intent of a SOAP request. Such a mechanism would work great in a hardware router if not for the fact that this header and the SOAP request must agree. If the caller tries to break in by claiming to execute one thing while actually executing something completely different, the hardware solution fails you. So, what do you do? Looking at the title of this article, you have probably guessed that Microsoft Internet Security and Acceleration (ISA) Server can help out. ISA filters are ISAPI filters that are registered with ISA. If you have written ISAPI filters in the past, you already know how to extend ISA Server. The only real difference is in how they are registered (we'll get to how later). In this article, we are going to create a Web Service whose security is handled by ISA Server. We will first look at the Web Service and what it allows callers to do. We will then create a client application that will call the Web Methods. This client will be able to break the Web Service due to some fabricated bugs. Finally, an ISA Server extension will be created to secure the Web Service. The extension will allow us to authenticate users at the Web Method level as well as inspect the XML to assure that the requests themselves are valid..."

  • [February 21, 2002] "CPL: A Language for User Control of Internet Telephony Services." Internet Engineering Task Force, Internet Draft. IPTEL WG. Reference: 'draft-ietf-iptel-cpl-06.txt'. January 15, 2002; expires: July, 2002. By Jonathan Lennox and Henning Schulzrinne (Department of Computer Science Columbia University). "The Call Processing Language (CPL) is a language that can be used to describe and control Internet telephony services. It is not tied to any particular signalling architecture or protocol; it is anticipated that it will be used with both SIP and H.323. The CPL is powerful enough to describe a large number of services and features, but it is limited in power so that it can run safely in Internet telephony servers. The intention is to make it impossible for users to do anything more complex (and dangerous) than describing Internet telephony services. The language is not Turing-complete, and provides no way to write loops or recursion. The CPL is also designed to be easily created and edited by graphical tools. It is based on XML, so parsing it is easy and many parsers for it are publicly available. The structure of the language maps closely to its behavior, so an editor can understand any valid script, even ones written by hand. The language is also designed so that a server can easily confirm scripts' validity at the time they are delivered to it, rather than discovering them while a call is being processed. Implementations of the CPL are expected to take place both in Internet telephony servers and in advanced clients; both can usefully process and direct users' calls. This document primarily addresses the usage in servers. A mechanism will be needed to transport scripts between clients and servers; this document does not describe such a mechanism, but related documents will." See the XML DTD. Document also in Postscript. See "Call Processing Language (CPL)." Compare "Voice Browser Call Control (CCXML)." [cache]
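    A small script in the style of the draft's own examples (addresses invented) conveys the flavor: incoming calls originating in the example.com domain are proxied to a preferred location, and everything else is rejected:

        <?xml version="1.0"?>
        <cpl>
          <incoming>
            <address-switch field="origin" subfield="host">
              <address subdomain-of="example.com">
                <location url="sip:jones@phone.example.com">
                  <proxy/>
                </location>
              </address>
              <otherwise>
                <reject status="reject" reason="Calls from outside example.com not accepted"/>
              </otherwise>
            </address-switch>
          </incoming>
        </cpl>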

  • [February 21, 2002] "Sybase Buys a Seat On Web Services Train." By Wylie Wong. In ZDNet News (February 20, 2002). "Sybase has joined its competitors in planting a stake in the Web services market. The database software maker said Wednesday that it has overhauled its entire line of e-business software and tools to support Web services, a method for building software that lets companies with different computing systems interact and conduct transactions. Sybase is the latest software maker to push Web services, joining Microsoft, Sun Microsystems, IBM, Oracle and others in the emerging market...Sybase has introduced a Web services toolkit that will allow software programmers to build, test and run Web services. The toolkit will also make available services on online directories that support a Web services specification called UDDI (Universal Description, Discovery and Integration). The toolkit works in conjunction with Sybase's PowerDesigner 9.0 software development tool, which supports all the Web services specifications. The company has also added support for Web services in a number of its software products, including its Java application server software that runs e-business and other Web site transactions; its integration server software that allows different business software to communicate; its database software that manages vast amounts of data; and portal server software that allows companies to build portal sites for employees and customers. Sybase's strategy includes Web services support in its iAnywhere Solutions wireless software products, which allow Internet-ready devices such as cell phones to access Web services; and technology called BizTracker for monitoring and managing Web services to ensure they're working properly..."

  • [February 20, 2002] "XPipe - An XML Processing Methodology." By Sean McGrath (CTO, Propylon). February 12, 2002. 80 slides. ['I recently gave a talk about the XPipe approach to XML processing at the XML SIG of New York; the slides from the talk are available...'] "XPipe is an architecture / methodology / framework for developing robust, scalable, manageable XML processing systems, based on proven mechanical manufacturing techniques -- specifically, the assembly line principle and component assembly and re-use... The XPipe philosophy hinges on the fact that every complex XML transformation can be broken down into a series of smaller ones that can be chained together... It is a way of thinking about systems that focuses on structured dataflows rather than Object APIs... The idea is that XPipe become the reference implementation of the architecture and also serve as a focal point for XPipe experimentation and discussion. The XPipe project was instigated by (and is sponsored by) Propylon to promote the assembly line approach to XML processing in the industry..." See the SourceForge project.
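    Purely to convey the assembly-line idea -- the element names below are invented, not XPipe's actual descriptor vocabulary -- a pipeline chaining three small transformations might be declared like this:

        <!-- Hypothetical pipeline descriptor; names invented for illustration -->
        <pipeline name="invoice-to-html">
          <stage transform="normalize-dates.xsl"/>
          <stage transform="merge-customer-data.xsl"/>
          <stage transform="render-html.xsl"/>
        </pipeline>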

  • [February 20, 2002] "Why Microsoft Will Lead Web Services -- For Now." By Eric Knorr. In ZDNet Tech Update (February 20, 2002). "... The latest (and seemingly innocuous) move, which earned a PowerPoint slide in Bill Gates' presentation, was the launch of the Web Services Interoperability Organization (WS-I.org), a group dedicated to 'promote Web services interoperability across platforms, operating systems, and programming languages.' That sounds like a standards organization, right? Nope. In fact, its main function is to promote best practices for Web services. The WS-I will offer 'sample implementations' -- which remind me of Sun's BluePrints, except that the prime objective will be to ensure Web services interoperability (which is fundamental to this whole machine-to-machine thing reaching critical mass). In addition, the WS-I will offer 'profiles' that suggest how groups of Web services protocols might work together. Which protocols are those? Here's where the plot thickens. No agreement has been reached on Web services protocols beyond the gang of four (XML, SOAP, WSDL, and UDDI), which together really provide a modicum of interoperability. Seems like the WS-I will run out of profiles pretty quickly. What other protocols might it be talking about? Bear with me as we rewind to last October, when Microsoft announced several 'WS' standards -- WS-Security, WS-Routing, and so on -- as part of a Global XML Web Services Architecture (GXA). From the beginning, Sun (which was dragged kicking and screaming into Web services) had been hammering Microsoft for supposedly ignoring Web services' security and authentication vulnerabilities. I had assumed at the time that when Microsoft introduced GXA, it was responding to Sun's criticism simply by saying 'we control the .Net universe and these are the protocols we're going to use for Web services security, routing,' and so on. But Microsoft was actually hunting bigger game. See, GXA is a modular framework for additional security and business process protocols, so that Web services developers can simply pick and choose the protocols they need for specific Web services implementations. GXA is intended to be used by the entire industry and to capitalize upon the millions of hours that organizations such as OASIS, the W3C, OMG, and RosettaNet have put into developing everything from ebXML business-to-business schemas to XrML for digital rights management. You see the genius here: We're at a level above all this protocol stuff, says Microsoft, but here's how you should plug it all together. The WS-I may not be a standards organization, but when it starts putting together those protocol profiles, whose modular framework will they reflect? [...]" See: "Web Services Interoperability Organization (WS-I)."

  • [February 20, 2002] "SALT Forum Publishes Specs." By Dennis Callaghan. In eWEEK (February 20, 2002). "The Speech Application Language Tags (SALT) Forum released a working draft of its 0.9 specification for adding speech tags to other Web application development languages on Wednesday. Applications created by that combination would combine speech recognition with text and graphics. The draft, published by the founding members of the SALT Forum -- Cisco Systems Inc., Comverse Inc., Intel Corp., Microsoft Corp., Philips Electronics N.V. and Speechworks International -- lays out the XML elements that make up SALT, the typical application scenarios in which it will be used, the principles which underlie its design, and resources related to the specification. Elements the draft focuses on include speech input, speech output, Dual Tone Multi-Frequency (DTMF -- the generic form of Touch-Tone) input, platform messaging (using Simple Messaging Extension -- SMEX), telephony call control and logging. The SALT Forum is an alternative to a plan pushed by IBM, Motorola Inc. and Opera Software ASA to combine VoiceXML with XHTML to integrate speech and Web application development, and form so-called multi-modal applications. That group has already submitted specifications to the World Wide Web Consortium (W3C). The SALT Forum has indicated that it too will submit to an international standards body but hasn't said which one yet..." See: (1) the news item "SALT Forum Publishes Draft Specification for Speech Application Language Tags"; (2) references in "Speech Application Language Tags (SALT)."

  • [February 20, 2002] "Corel Jumps On XML, .Net for Enterprise Push." By Matt Berger. In InfoWorld (February 20, 2002). "The web services bandwagon Wednesday made a stop at software maker Corel, as the company announced plans to offer a line of enterprise software tools and services for creating "smart" content that can be tied to back-end servers and modified over the Internet. The company unveiled a new initiative called Deepwhite, an umbrella name for a line of content creation software products, as well as enterprise services, that the company plans to begin rolling out later this year. Based on industry-standard technologies such as XML (Extensible Markup Language) and Web services technology developed by Corel investor Microsoft, the company said it would allow customers to design graphics and text documents that can be published for a variety of media and viewed on a variety of computing devices. 'Our intention is to take XML technologies and give customers the ability to create content and not worry about how to format it for different devices,' Derek Burney, Corel's chief executive officer, said in a phone interview. Burney announced Deepwhite during a keynote presentation at the Seybold Seminar being held this week in New York... The company will release a set of XML-based content creation tools that enable software vendors and corporate IT departments to build custom enterprise applications with graphics and other design elements that access data on the fly..."

  • [February 20, 2002] "Corel Launches New Strategy: Deepwhite." By [Seybold Staff]. In The Bulletin: Seybold News and Views On Electronic Publishing Volume 7, Number 19 (February 21, 2002). "Unifying its recent acquisitions under a single brand, Corel today launched Deepwhite, a new moniker for Corel products aimed at corporate software buyers. The longtime vendor of desktop graphic arts applications will look to join the growing ranks of vendors offering server-based products to the enterprise publishing market. Though the announcement was heavy on vision and light on specifics, we do know the new business unit will focus on developing and selling XML-based products aimed at enabling the enterprise to leverage XML for publishing and graphic arts without installing an entirely new foundation of processes and systems. 'It's premature to expect everyone to switch to XML-based content management from the ground up,' Corel CEO Derek Burney told The Bulletin this week. 'But by bolting our technology on to what they already have, it allows people that are not creating XML to still take advantage of XML.' Burney indicated that the technology and sales channels acquired through the company's mergers with Micrografx and SoftQuad in the past year would fuel the Deepwhite initiative. Typically known for selling shrink-wrapped software, Corel will rely on the more system-savvy customer bases and sales staffs of Micrografx and SoftQuad to try and gain momentum as a server vendor. This is not terribly unlike what's happening at Adobe, Corel's chief rival, which is looking to make a similar move into server-based enterprise products, albeit with a focus on XML. Burney also indicated that the company is 'betting heavily' on the W3C's Scalable Vector Graphics (SVG), an XML-based format for generating vector graphics... Burney is correct to say that it's too early yet for widespread adoption of XML-based content management, and also correct in his assessment that XML is best when it remains invisible to the user. But until we've seen any products and had a chance to gauge market reaction, that's about all we can say..."

  • [February 20, 2002] "The Smart Content Revolution." Corel Corporation White paper. 2002-02. "DEEPWHITE is the realization of Corel's plan to deliver enterprise-class solutions to large organizations. Supported by a series of strategic acquisitions and innovative software developments, the company is leveraging the strengths of XML and other open standards to introduce a new platform for enterprise computing. This document explores the DEEPWHITE vision for enterprise content solutions by examining how smart content can reduce content creation costs, maximize content reuse, accelerate productivity, and generate new revenue opportunities. This vision promises to dramatically change the way large organizations create, exchange, and interact with content... XML is the core of smart content as it enables content that is structured, manageable, and based on open standards. XML allows content to be stored in its purest form - data. XML separates content from the presentation layer, which allows content to be output to any presentation format. XML can describe many types of content, including text, numbers, vector graphics, layout, 3D graphics, audio, and animation. In addition, any technology, process, or person can interact with this data..." Also in PDF format.

  • [February 20, 2002] "Resource Directory Description Language (RDDL)." Published by Jonathan Borden. Edited by Jonathan Borden (The Open Healthcare Group) and Tim Bray (Antarcti.ca Systems). Current version: February 18, 2002. Previous version: March 5, 2001. "This document describes the Resource Directory Description Language (RDDL). A RDDL document, called a Resource Directory, provides a package of information about some target, including: Human-readable descriptive material about the target. A directory of individual resources related to the target, each directory entry containing descriptive material and linked to the resource in question. The targets which RDDL was designed to describe are XML Namespaces. Examples of 'individual related resources' include schemas, stylesheets, and executable code designed to process markup from some namespace. A Resource Directory is designed to be suitable for service as the body of an entity returned by dereferencing a URI serving as an XML Namespace name. The Resource Directory Description Language is an extension of XHTML Basic 1.0 with an added element named resource. This element serves as an XLink to the referenced resource, and contains a human-readable description of the resource and machine readable links which describe the purpose of the link and the nature of the resource being linked to. The nature of the resource being linked to is indicated by the xlink:role attribute and the purpose of the link is indicated by the xlink:arcrole attribute. A DTD for RDDL has been defined using Modularization for XHTML..." Jonathan said [XML-DEV]: I've placed a minor and interim update to the RDDL spec at http://www.rddl.org/ -- it contains fixes to some errata, some rewording, and inclusion of a RELAXNG 1.0 schema for RDDL... No substantial changes... More work to follow -- let me know if your favorite erratum hasn't yet made the fix list." Added note from Tim Bray: "[I] should add a word of explanation: the rewording is basically a rewrite of the first two paragraphs of the document, which asserted that a namespace was a 'class of resources' (?) and implied that the RDDL directory was of links from the namespace to the related resources. Michael Sperberg-McQueen pointed out that the links were actually from the descriptions of the related resources to the resources... claiming you're linking to a namespace is a highly philosophically risky proposition... Oh, and fixed a few inconsistencies in the use of terms and got the resource/entity/URI nomenclature right per the RFCs..."
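    A representative resource entry in the pattern the spec describes (the href value and description are invented; the role and arcrole values follow the well-known natures and purposes the spec lists) shows how xlink:role names the nature of the related resource and xlink:arcrole its purpose:

        <rddl:resource xmlns:rddl="http://www.rddl.org/"
                       xmlns:xlink="http://www.w3.org/1999/xlink"
                       xlink:type="simple"
                       xlink:href="mydoc.dtd"
                       xlink:role="http://www.isi.edu/in-notes/iana/assignments/media-types/application/xml-dtd"
                       xlink:arcrole="http://www.rddl.org/purposes#validation">
          <p>A DTD for validating documents in this namespace.</p>
        </rddl:resource>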

  • [February 19, 2002] "The Universal Business Language." By Jon Bosak (Sun Microsystems; Chair, OASIS UBL TC). Presentation given to the U.S. Government XML Working Group, Washington, D.C., 20-February-2002. 17 pages (34 slides). "UBL: (1) Synthesis of existing XML B2B languages [xCBL, cXML, RosettaNet, OAG, etc.]; (2) Primary inputs: xCBL, ebXML core components, ebXML context methodology; (3) Applicable across any sector or domain of electronic trade, transport, and administration [purchasing, payments, logistics, transportation, statistical reporting, social administration, healthcare, etc.]; (4) Interoperable with existing EDI systems; (5) Based on a core library plus a context-sensitive extension mechanism; (6) Unencumbered by intellectual property claims; (7) Intended to become a legal standard for international trade. The big problem: Context. 'Standard' business document components are different when used in different business contexts. Example #1: shipping addresses: Addresses in Japan are different from addresses in the United States; Addresses in the auto industry are different from addresses in other industries. Example #2: invoice items: An invoice for shoes needs item fields for color; an invoice for gourmet coffee needs item fields for grind; Invoices for microprocessor boards have to contain serial numbers for the processor chips to detect substitution in shipment..." Also in HTML format. See: "Universal Business Language (UBL)."

  • [February 19, 2002] "Web Services Quandary. Tomorrow's Business-to-Business E-Comm Requires Navigating the Maze of Conflicting Web Services Standards." By Stephen Lawton. In Network World (February 18, 2002). "Despite the blitzkrieg of standards work, Web services languages today are in roughly the same position as word processors were 15 years ago -- lots of incompatible choices. The standards battle is being waged on two fronts: Consortia are creating competing specifications, as are XML tool developers. Network executives who ignore the war will be letting these groups decide which will become the specifications of choice. A worst-case scenario could find a company building its internal Web services in one language but its competition -- and its suppliers -- building in a different language. Sorting out XML standards is 'a bit of a rat's nest,' says Nathaniel Palmer, chief analyst and director of the Business of Technology practice at The Delphi Group. The World Wide Web Consortium (W3C) is working on high-level infrastructure issues, he notes, while the Organization for the Advancement of Structured Information Standards (OASIS) is defining practical processes users need... XML tool vendors and consortia developing XML schemas often compete. It's not unusual to have several schemas defining the same process. Yet, an application designed to use one schema cannot access a service built to use a different schema. This crowd has created four major competing business-process schemas: IBM has its Web Services Flow Language; Microsoft has XLANG, which is part of BizTalk, a .Net component; OASIS offers its Business Process Service, a component of ebXML; and the Business Process Management Institute (BPMI) has BPML. [Paul] Harmon expects only two of these business-process XML schemas to survive, but can't predict which two. IBM will be seen as the traditional safe bet, but Microsoft is likely to be the least expensive, he says. OASIS has the draw of being developed by a standards group but BPMI's is the most vendor-agnostic. ... In addition to creating business-process schemas, companies such as IBM and Microsoft, and groups such as OASIS, are rapidly building vertical schemas. By some accounts, nearly 100 separate XML-related standards are in various stages of development. Many find consortia to be a good solution. Physicians Insurance Corp. (PIC) Wisconsin is looking to ACORD to specify Web services standards for its members, much like it did for electronic data interchange. Jay Chenoweth, IS manager at the Madison, Wisc., malpractice insurer, says ACORD has its own EDI system. Now it has its own XML approach. ACORD did a lot of work to define XML for insurance providers, by creating the XML vocabulary for the industry..."

  • [February 19, 2002] "Introducing Cocoon 2.0." By Stefano Mazzocchi. From XML.com. February 13, 2002. ['Cocoon project founder Stefano Mazzocchi describes the Apache XML Project's XML/XSLT server application framework. Stefano also gives a history of Cocoon, from its humble origins as a 100-line Java servlet to the sophisticated development platform recently released as Cocoon 2.0.'] "It took two years and three different project leaders to finish Cocoon 2.0, but we made it. It's an XML framework that raises the usage of XML and XSLT technologies for server applications to a new level. Designed for performance and scalability around pipelined SAX processing, Cocoon offers a flexible environment based on the separation of concerns between content, logic, and style. A centralized configuration system and sophisticated caching enable you to create, deploy, and maintain rock-solid XML server applications. Cocoon was designed as an abstract engine that could be connected to almost anything, but it ships with servlet and command line connectors. The servlet connector allows you to call Cocoon from your favorite servlet engine or application server. You can install it beside your existing servlets or JSPs. The command line interface allows you to generate static content as a batch process. It can be useful to pre-generate those parts of your site that are static, some of which may be easier to create by using Cocoon functionalities than directly (say, SVG rasterization or applying stylesheets). For example, the Cocoon documentation and web site are all generated by Cocoon from the command line. Cocoon is now based on the concept of component pipelines. A pipeline works like a UNIX pipe, but instead of passing bytes between STDIN and STDOUT, Cocoon passes SAX events. The three types of pipeline components are generators, which take a request and produce SAX events; transformers, which consume SAX events and produce SAX events; and serializers, which consume SAX events and produce a response. A Cocoon pipeline is composed of one generator, zero or more transformers, and one serializer. As with UNIX pipes, a small number of components give you an incredible number of possible combinations. Think of active Lego bricks for XML manipulation... Cocoon is currently based on many other Apache projects -- Ant, Avalon, Xerces, Xalan, FOP, Batik, Velocity, Regexp -- but due to its high modularity, it has full support for alternative implementations of underlying W3C technologies. The Cocoon development community is one of the more active under the Apache Software Foundation, boasting more than 15 active developers, around 500 subscribers on the development mailing list, and around 1100 on the user list. We consider Cocoon 2.0 stable in both implementation and API: this means that we consider it safe to use in production environments. And it's already being used on many such projects."
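
    For readers new to Cocoon, pipelines are declared in the sitemap. The following is a minimal sketch (the match pattern and file names are invented; the map: prefix is bound to the Cocoon 2.0 sitemap namespace) showing one generator, one transformer, and one serializer:

      <map:match pattern="*.html">
        <map:generate src="docs/{1}.xml"/>
        <map:transform src="stylesheets/doc2html.xsl"/>
        <map:serialize type="html"/>
      </map:match>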

  • [February 19, 2002] "Web Services Pitfalls." By David Orchard. From XML.com. February 06, 2002. ['David Orchard puts the web services model into the context of business requirements, covering such issues as security, billing and provisioning. Orchard concludes that while web services will be an important technology, for the foreseeable future human intervention and contract negotiation will have a large part to play.'] "The latest hot ticket for vendors to sell and journalists to write about is web services. The appeal is natural: web services promise users and developers greater choice of components and services. This article examines perhaps the most futuristic of web services, those offered by a standalone service provider. In particular, it focuses on the infancy of the standards and technology in standalone web services... Web services is an umbrella term used to describe components and services that are addressable and available using web technology. Web services typically take the form of user-oriented, browser-based services; API-accessible components; or system services. A web service could be a browser-based e-mail program, an XML-based interface to an HR system, a SOAP service offered by a machine, a SOAP monitoring service, XML-based integration with an EAI or legacy system, and so on. The standards for the way components in a web service exchange data are crucial. Infrastructure standards being created include SOAP, WSDL, UDDI, SAML, and ebXML, as well as many vertical standards... The real summary is that web services will be used as an enabling technology to integrate applications together more quickly and easily. There are real-world integration problems that are being solved by standards-compliant web services products and deployments. It will be quite some time, if ever, before there is widespread adoption of standalone web services that are usable without significant human intervention and contract negotiation."

  • [February 19, 2002] "Introducing XML::SAX::Machines, Part One." By Kip Hampton. From XML.com. February 13, 2002. ['Kip Hampton's Perl column this month introduces the new Perl module XML::SAX::Machines. This module provides a high-level wrapper that allows chaining of SAX processors, demonstrated in the article by showing how custom tags can be implemented in an Apache mod_perl application.'] With sample code. "In recent columns we have seen that SAX provides a modular way to generate and filter XML content. For those just learning how SAX works, though, the task of hooking up to the correct parser-generator-driver and building chains of filters can be tricky. More experienced SAX users may have a clearer picture of how to proceed, but they often find that initializing complex filter chains is tedious and lends itself to lots of duplicated code... Barrie Slaymaker's outstanding new XML::SAX::Machines addresses both the complexity and the tedium of creating SAX systems. [...] We've only touched the surface of what XML::SAX::Machines can do. Tune in next month when we will delve deeper into the API and show off some of its advanced features."

  • [February 19, 2002] "Message Patterns and Interoperability." By Leigh Dodds. From XML.com. February 13, 2002. ['Leigh Dodds summarizes the conversations in his XML-Deviant column, focusing on types of message patterns and the reaction to the recently announced industry web services consortium, the Web Standards Interoperability Organization.'] "Web services have been the major topic of conversation on XML-DEV this week following a discussion on types of messaging patterns and some wary initial reactions to WS-I, the 'standards integrator' organization."

  • [February 19, 2002] "Web Resource Description Language." By Paul Prescod. Draft only. 2002-02-19 or later. [Replaces Web Service Operations Language and Simple Web Service Behaviour Language. See following bibliographic entry.] "The basic model of a REST Web Service is that services are described as webs of resources. Typically the resources will be represented by dynamically generated XML documents, but that is not necessarily the case. Just as with services based on COM, CORBA or SOAP, it would be nice to have a declaration of a resource's operation in advance so that reliable software can be constructed with less testing. Insofar as a service consists of XML documents, XML schemas provide a partial description of the service. What they do not describe are the transitions from one document to another: the service's runtime behaviour. That's what WRDL does. It is intended to be the IDL/WSDL for HTTP web services. Just as with IDL it may one day make sense to 'bind' WRDL into a statically typed programming language to make the construction of type-incorrect client software impossible..." [Post to XML-DEV: "I had a much better idea of how to do static service declarations (as per IDL or WSDL) for REST web services...It reflects the underlying Web Architecture much more clearly than my old proposal did. It is consequently simpler. It has first-class concepts of 'resource', 'representation', 'method', 'input' and 'output'. Methods are HTTP methods like PUT, GET, DELETE, YOURMETHOD. Inputs are like HTTP method inputs: headers, query params and body (the URI is implied by the resource!). Outputs are like HTTP outputs: status code, headers and body. Resources know what representations they support and you can navigate from resource to resource through hyperlinks without worrying about the XML or HTML syntax of the representation (unless you want to). Following a hyperlink is a type-safe operation. That's about all the concepts in it. A rough proposal for an API (Java-ish syntax to prove it isn't biased towards dynamic languages) is provided. Everything is statically type checked just as with IDL or WSDL. Of course in Python everything would be done at runtime and thus save a build step. When I implement it I think it will be a really cool tool. Until then, I think it is a useful pedagogic tool for those working on REST Zen. If you want to implement it before I get around to it, in whatever language, please do..."]

  • [February 19, 2002] "Web Service Operations Language." By Paul Prescod. Partial draft only. 2002-02-19 or later. "The basic model of a REST Web Service is that services are described as webs of documents. Typically the documents will be dynamically generated, but that is not necessarily the case. Just as with services based on COM, CORBA or SOAP, it would be nice to have a declaration of a service's operation in advance so that reliable software can be constructed with less testing. Insofar as a service consists of documents, especially XML documents, XML schemas provide a partial description of the service. What they do not describe are the transitions from one document to another. That's what WSOL does. It is intended to be the IDL/WSDL for HTTP web services. Just as with IDL it may one day make sense to 'bind' WSOL into a statically typed programming language to make the construction of type-incorrect client software more difficult... Think of a web service as a web of document types. The 'find airline seat' document points to the 'reserve seat' document through a URI. That document points to the 'purchase seat' document through a URI, and so forth. Each document has associated with it an XML Schema, but also an operation description (or perhaps just a fragment of an operation description). The 'root' operation description for a service asserts that a particular URI references a service that conforms to a particular operation description. As you move from document to document following links, each link is strongly and statically typed by the operation description. A failure to conform is a runtime error, just as with non-validation against an XML Schema or WSDL service description..." ['...(maybe this) will give us a concrete language for modelling REST applications... much simpler and easier to read than WSDL and yet more powerful and complete...'] See the YahooGroups "REST Discussion Mailing List" [Discussion about REpresentational State Transfer, the name given to the architecture of the World Wide Web by Roy Fielding] and the 'REST' thread in the XML-DEV list archives.

  • [February 19, 2002] "Web Services for Remote Portals (WSRP)." Note 21-January-2002. Version URL: http://www.ibm.com/developerworks/library/ws-wsrp. Edited by Thomas Schaeck (IBM). Authors: Angel Luis Diaz, Peter Fischer, Carsten Leue, and Thomas Schaeck. "Web Services for Remote Portals (WSRP) are visual, user-facing, web-services-centric components that plug-n-play with portals or other intermediary web applications that aggregate content or applications from different sources. They are designed to enable businesses to provide content or applications in a form that does not require any manual content- or application-specific adaptation by consuming intermediary applications. As Web Services for Remote Portals include presentation, service providers determine how their content and applications are visualized for end-users and to which degree adaptation, transcoding, translation, etc. may be allowed. WSRP services can be published into public or corporate service directories (UDDI) where they can easily be found by intermediary applications that want to display their content. Web application deployment vendors can wrap and adapt their middleware for use in WSRP-compliant services. Vendors of intermediary applications can enable their products for consuming Web Services for Remote Portals. Using WSRP, portals can easily integrate content and applications from many internal and external content providers. The portal administrator simply picks the desired services from a list and integrates them; no programmers are required to tie new content and applications into the portal. To accomplish these goals, the WSRP standard defines a web services interface description using WSDL and all the semantics and behavior that web services and consuming applications must comply with in order to be pluggable, as well as the meta-information that has to be provided when publishing WSRP services into UDDI directories. The standard allows WSRP services to be implemented in very different ways, be it as a Java/J2EE based web service, a web service implemented on Microsoft's .NET platform, or a portlet published as a WSRP service by a portal. The standard enables use of generic adapter code to plug any WSRP service into intermediary applications rather than requiring specific proxy code. WSRP services are WSIA component services built on standard technologies including SOAP, UDDI, and WSDL. WSRP adds several context elements including user profile, information about the client device, locale, and desired markup language passed to them in SOAP requests. A set of operations and contracts are defined that enable WSRP plug-n-play..." See: (1) the news item of January 21, 2002: "Proposal for an OASIS Web Services Remote Portal (WSRP) Technical Committee"; (2) the topic page "Web Services for Remote Portals (WSRP)."

  • [February 19, 2002] "Web Services, Part VIII: Reading DTDs with JavaScript." By Yehuda Shiran and Tomer Shiran. From WebReference.com (February 11, 2002). "In this column we continue our series on Web services. In Part I, we introduced you to this hot topic. In Part II, we showed you how to call Web services. In Part III, we presented the WebService behavior and its four supported methods. In Part IV, we continued our coverage of the WebService behavior by describing its objects and properties. In Part V, we dove into XML and XSLT. In Part VI, we started a miniseries on how to load and manipulate XML files from JavaScript. We continued this miniseries in Part VII, and focused on the DOMDocument's nodes and node types. In this column, we dive into the world of document type definitions (DTDs). The XML format supports entity references. These are parameters you can substitute with real values upon loading. These real values are defined in the DTD file. In this column we'll show you how to interact with DTDs. We'll teach you how to assemble a DTD for both document structure validation, as well as entity substitution. We'll show you what the DOMDocument object looks like in case of entity referencing, and how to insert new entity references into a DOMDocument. In this column you will learn: (1) How to assemble and call DTDs; (2) How to define an XML file structure; (3) How to reference and specify entities; (4) How to load DTDs with a browser; (5) How to load DTDs with JavaScript; (6) How to add entity references on the fly..."
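
    A minimal sketch of the entity substitution the column covers (the document and entity names are invented): the DTD's internal subset declares an entity, and the reference &company; is replaced by its value when the document is loaded:

      <?xml version="1.0"?>
      <!DOCTYPE letter [
        <!ELEMENT letter (#PCDATA)>
        <!ENTITY company "Example Corp.">
      ]>
      <letter>Welcome to &company;!</letter>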

  • [February 19, 2002] "What Web Services Are NOT." By Sriram Rajaraman and Michael Classen. From WebReference.com (February 16, 2002). ['Web Services are the latest, and perhaps hottest, buzz-word in the Web development world. And, as is typically the case with all emerging buzz-words, their exact purpose and definition seem to vary from site to site, and vendor to vendor. To help us unravel the hype, we welcome guest author Sriram Rajaraman, the VP of Engineering at Instantis, who will help us understand what Web services are by explaining what he believes they are not.'] "Although there is certainly some new technology here (mainly directed towards standardization -- you may have heard about Simple Object Access Protocol or SOAP, Web Services Description Language or WSDL, and Universal Description, Discovery and Integration or UDDI), Web services are largely a convenience wrapper around existing technologies like HTTP, SMTP, and XML. A Web service is a way to expose some business functionality over the Internet using the SOAP protocol... It is portable, interoperable, and not tied to any one vendor, and this is precisely why it is useful. Technically speaking, here's how a Web service works. A client wanting to call a function formats a request with SOAP XML encoding, and sends it to the server over any mutually agreeable communication protocol (typically HTTP, but SMTP is possible). The server runs some sort of a listener that accepts the incoming SOAP calls, reads the information from the XML SOAP packets, and maps them to business logic processing 'application' software on the server. This application layer on the server processes the request and returns output to the listener, which formats the output into a response packet in the SOAP XML encoding and returns it to the client. A separate XML file contains a description of the services provided by the server in an encoding format called SDL (Service Descriptor Language), analogous to a type library in COM or an IDL file in CORBA. While most people are in alignment regarding the technology, there continues to be confusion in the description of what Web services are really capable of, and what their attributes are. Rather than describe each element exhaustively, let me take a cut at examining what Web services are NOT..."
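
    The request half of the exchange described above looks roughly like this on the wire (a sketch; the method and parameter names are invented): the client posts a SOAP 1.1 envelope, typically over HTTP, and the listener returns a matching response envelope:

      <SOAP-ENV:Envelope
          xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
        <SOAP-ENV:Body>
          <m:GetOrderStatus xmlns:m="urn:example:orders">
            <orderId>42</orderId>
          </m:GetOrderStatus>
        </SOAP-ENV:Body>
      </SOAP-ENV:Envelope>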

  • [February 19, 2002] "XML Schema and RELAX NG Element Comparison." By Michael Fitzgerald. Reference posted to the RELAX NG TC list. "This document briefly compares XML Schema's 42 elements with RELAX NG's 28 elements. In the table that follows, the first column lists all the XML Schema elements while the second column lists any RELAX NG elements that have a one-to-one relationship, a comparable purpose, or only a roughly similar purpose to XML Schema elements. Elements unique to each language are also listed in separate tables below..." ['I have made an attempt to briefly compare the purpose of XML Schema's elements with RELAX NG's elements. The comparison appears in three tables totaling about 2 and 1/2 pages printed. I would appreciate any comments you have about this document...'] Note also the RELAX NG links on the Wy'east Communications web site. See "RELAX NG" and "XML Schemas."
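
    To make the comparison concrete, here is one and the same declaration -- an element named title containing character data -- in each language (a minimal sketch, not taken from Fitzgerald's tables):

      <!-- XML Schema -->
      <xs:element name="title" type="xs:string"
          xmlns:xs="http://www.w3.org/2001/XMLSchema"/>

      <!-- RELAX NG -->
      <element name="title" xmlns="http://relaxng.org/ns/structure/1.0">
        <text/>
      </element>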

  • [February 19, 2002] "XML at the Heart of Texas A&M Student Services." By Karen D. Schwartz. In ZDNet Tech Update (February 12, 2002). ['Problem: Develop a set of scalable and reliable applications that can communicate with each other despite disparate programming languages, operating systems, and servers. Solution: Write a series of Web-accessible applications in XML.'] "If Texas A&M University is any indication, it takes a village of sophisticated, connected systems to run an institution of higher education. The College Station, Texas-based university, with about 45,000 students and hundreds of faculty and administrators, had existed for years with a cornucopia of diverse systems written for a variety of operating systems in many different programming languages. The applications run the gamut, facilitating everything from student registration and admissions to bill disbursement, housing, and scholarship-tracking... A drastic change of technological direction would have to take place to enable the university not only to run more efficiently, but also to allow it to begin developing much-needed Web services... To find the right technology to solve the problem, Chester and his staff spent nearly three years studying the methods other large universities had employed, and evaluating their own technology options. In the end, it became clear that eXtensible Markup Language (XML) was the best choice, because it would allow them to develop and integrate disparate systems regardless of the platform, language, or application server used. To facilitate the use of XML, the university deployed EntireX Broker, middleware from Software AG that acts as a gateway, allowing developers to work with code from a variety of sources. EntireX accepts XML and translates it for the legacy systems, and vice versa. The middleware also lets Texas A&M programmers reuse existing code to speed development time and reduce errors -- all in a Web-based framework... To prove that XML and EntireX Broker provided the right combination of technology, Chester's team developed several pilot projects combining the two. The first pilot involved erecting a simple Web page where students could check the status of their admission applications -- a significant time-saver for a university dealing with about 25,000 applicants per year. A second pilot involved the development of a gateway through which the university's 30,000 to 50,000 yearly prospective applicants could access information. The pilot tests proved successful, so the team moved on to a true test of its ability to provide far-reaching Web services -- the development of a Web-based class registration system that would allow students to check class availability and enroll for classes. To create the system, the team built many business objects for functions such as adding a class to a student's schedule, selecting the student's entire schedule to display on the screen, and more. Other Web-based applications include a system that allows students to enroll and eventually pay for new student conferences, and an application that allows people to request catalogs, campus maps, and applications..."

  • [February 19, 2002] "Streaming Transformations for XML (STX)." February 18, 2002. Revision 0.01. See the XML-DEV posting of 2002-02-19 from Petr Cimprich, "Streaming Transformations for XML." [The only purpose of this document is to serve as a subject for discussion.] "STX is an XML-based template language for transforming XML documents into other XML documents in a streaming way. It can also be seen as a language to define rules for generic SAX2 filters. The language depends on neither XPath nor DOM. STX claims to be a platform- and implementation-independent language... The idea of a simple XML transformation language is neither new nor surprising, and STX admits inspiration from many sources. Its syntax is very similar to the syntax of XSLT by intention; this feature should allow the wide community of XSLT users to learn most of STX easily. Other sources of inspiration include SAX2 [SAX2] filters such as [NsF], [RgF] or [XFD], all of them exploring possibilities of streaming transformations... STX rules are written in stylesheets in the XML format. The stylesheets should more exactly be called transformation rule files, as they have nothing to do with styles, but the word 'stylesheet' is shorter and, thanks to XSLT, is widely understood to mean a transformation rule file..." See also the web entry point.
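
    Since the proposal was at revision 0.01, concrete syntax was still in flux; the sketch below only conveys the intended XSLT-like feel, and the element names and namespace URI are assumptions rather than quotations from the draft:

      <stx:transform xmlns:stx="http://stx.sourceforge.net/2002/ns">
        <stx:template match="item">
          <!-- emitted as soon as the parser reports the start tag of an item;
               no tree is built, so documents of any size can be processed -->
          <product><stx:process-children/></product>
        </stx:template>
      </stx:transform>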

  • [February 18, 2002] "Inland Revenue CT E-Filing. The Business Case for XBRL." By PricewaterhouseCoopers and Walter Hamscher (Standard Advantage). 12 pages. Revision 1, 20-December-2001. "The UK government has made a broad and deep commitment to establishing a sound technical foundation for e-Government. Within this framework, there is a requirement for standardising data tags and vocabularies across government departments. CT e-Filing is an Inland Revenue initiative that will allow corporate entities to file form CT600 electronically using Extensible Markup Language (XML), along with XML Schema, which defines the arrangement of XML data in a file. This builds on similar XML implementations for SA and PAYE forms. To that end, the Inland Revenue development team has created a standardised set of data tags and structures for representing the CT600 in XML. This was released for consultation to tax and accounting software vendors on 22-November-2001. In the commercial world, over 120 software vendors, accounting firms, and users of financial data have formed an independent consortium in order to standardise the data tags and vocabularies for business reporting, using XML and XML Schema for the same reasons that the Inland Revenue chose to do so. The resulting specification is the eXtensible Business Reporting Language (XBRL) and the community is XBRL.org. What XBRL adds to XML Schema is a framework for defining financial and business performance terms to be used consistently within and across many different software applications. The terms have a fixed meaning, defined and endorsed by professional associations and independent of any particular software application. The framework then allows those terms to be organized in any given business form or report along dimensions that are common in reporting: business entities, the period of reporting, and classification with respect to the type of each data item. Furthermore, XBRL is accompanied by a growing number of vocabularies covering large areas of accounting and financial data -- a core set of UK accounting concepts being among those vocabularies..." [from the author] See "XML Markup Languages for Tax Information."

  • [February 18, 2002] "Unicode in XML and other Markup Languages." Unicode Technical Report #20. Revised version [#6]. W3C Note 18-February-2002. Authored by Martin Dürst and Asmus Freytag. Version URLs: [Unicode] http://www.unicode.org/unicode/reports/tr20/tr20-6.html; [W3C] http://www.w3.org/TR/2002/NOTE-unicode-xml-20020218. Latest version URLs: http://www.unicode.org/unicode/reports/tr20/, http://www.w3.org/TR/unicode-xml/. "This document contains guidelines on the use of the Unicode Standard in conjunction with markup languages such as XML. The Technical Report is published jointly by the Unicode Technical Committee and by the W3C Internationalization Working Group/Interest Group in the context of the W3C Internationalization Activity. The base version of the Unicode Standard for this document is Version 3.2 [see following bibliographic entry]... Both the Unicode Standard and markup technologies are evolving; when appropriate, a new version of this document may be published... The Unicode Standard defines the universal character set. Its primary goal is to provide an unambiguous encoding of the content of plain text, ultimately covering all languages in the world... For document and data interchange, the Internet and the World Wide Web are more and more making use of marked-up text such as HTML and XML. In many instances, markup provides the same, or essentially similar, features to those provided by format characters in the Unicode Standard for use in plain text. Another special character category provided by Unicode is compatibility characters. While there may be valid reasons to support these characters and their specifications in plain text, their use in marked-up text can conflict with the rules of the markup language. Formatting characters are discussed in chapters 2 and 3, compatibility characters in chapter 4. Issues resulting from canonical equivalences and Normalization as well as the interaction of character encoding and methods of escaping characters in markup are discussed in the Character Model for the World Wide Web. The issues of using Unicode characters with marked-up text depend to some degree on the rules of the markup language in question and the set of elements it contains. In a narrow sense, this document concerns itself only with XML, and to some extent HTML. However, much of the general information presented here should be useful in a broader context, including some page layout languages... Many of the recommendations of this report depend on the availability of particular markup. Where possible, appropriate DTDs or Schemas should be used or designed to make such markup available, or the DTDs or Schemas used should be appropriately extended. The current version of this document makes no specific recommendations for the design of DTDs or schemas, or for the use of particular DTDs or Schemas, but the information presented here may be useful to designers of DTDs and Schemas, and to people selecting DTDs or Schemas for their applications. The recommendations of this report do not apply in the case of XML used for blind data transport and similar cases..." See "XML and Unicode."
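
    One small illustration of the report's theme, using HTML-style markup (a sketch, not an example taken from TR #20): rather than bracketing a right-to-left run with the plain-text format characters U+202B RIGHT-TO-LEFT EMBEDDING and U+202C POP DIRECTIONAL FORMATTING, marked-up text can carry the same information in markup:

      <p>The sign read <span dir="rtl">&#x05E9;&#x05DC;&#x05D5;&#x05DD;</span> in large letters.</p>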

  • [February 18, 2002] Proposed Draft Unicode Technical Report #28. Unicode 3.2. Unicode version 3.2.0. By Members of the Editorial Committee. Date 2002-1-21. Version URL: http://www.unicode.org/unicode/reports/tr28/tr28-2. ['This document defines Version 3.2 of the Unicode Standard. This draft is for review with the intention of it becoming a Unicode Standard Annex. The document has been made available for public review as a Proposed Draft Unicode Technical Report. Publication does not imply endorsement by the Unicode Consortium. This is a draft document which may be updated, replaced, or superseded by other documents at any time. This is not a stable document; it is inappropriate to cite this document as other than a work in progress.'] "Unicode 3.2 is a minor version of the Unicode Standard. It overrides certain features of Unicode 3.1, and adds a significant number of coded characters... The primary feature of Unicode 3.2 is the addition of 1016 new encoded characters. These additions consist of several Philippine scripts, a large collection of mathematical symbols, and small sets of other letters and symbols. All of the newly encoded characters in Unicode 3.2 are additions to the Basic Multilingual Plane (BMP). Complete introductions to the newly encoded scripts and symbols can be found in Article IV, Block Descriptions... Additional Features of Unicode 3.2: Unicode 3.2 also features amended contributory data files, to bring the data files up to date against the expanded repertoire of characters. A summary of the revisions to the data files can be found in Article VII, Unicode Character Database Changes. All outstanding errata and corrigenda to the Unicode Standard are included in this specification. Major corrigenda having a bearing on conformance to the standard are listed in Article II, Conformance. Other minor errata are listed in Article VI, Errata. Most notable among the corrigenda to the standard is a further tightening of the definition of UTF-8, to eliminate irregular UTF-8 and to bring the Unicode specification of UTF-8 more completely into line with other specifications of UTF-8. The former UTR #21, Case Mappings has been upgraded in status to a Unicode Standard Annex in Unicode 3.2. This means that UAX #21, Case Mappings is now formally a part of the Unicode Standard..." See "XML and Unicode."

  • [February 18, 2002] "Internationalization Features in XML and XLIFF. Extensible Markup Language and XML Localization Interchange File Format are Powerful Tools for Multilingual Applications." By Ultan Ó Broin (Oracle Corporation). In MultiLingual Computing and Technology Volume 13 Issue 2 [#46] (March 2002), pages 53-55. ISSN: 1523-0309. "In this article I will first look at the internationalization features of XML in terms of what content development teams must do to provide for character set encodings, character representation, language identification, and the presentation and rendering of global content in different languages. Then I will look at what development teams must do to facilitate the localization of XML content and how XML features enhance the localization process... The best way to provide for localization of XML is to use the XML Localisation Interchange File Format (or XLIFF). XLIFF is an XML-based file format for the exchange of localization data, based on OpenTag 1.2 and including features of TMX. It was developed by a group of localization partners including Oracle, Novell, IBM/Lotus, Sun Microsystems, Alchemy, Berlitz, LionBridge, Moravia-IT, and the RWS Group. XLIFF is now maintained under the aegis of the Organization for the Advancement of Structured Information Standards (OASIS). XLIFF defines a specification for an extensible format that caters specifically for localization requirements. It allows any software publisher to produce a single interchange format understandable by any localization service provider. It requires that the format should be tool independent, standardized, and support the whole localization process. The XLIFF data format successfully meets the goal of the separation of localization data and process, providing a focus on automation, stopping the proliferation of internal XML formats, and turning localization into a commodity for all players. Software publishers are freed to focus on producing international products and vendors are freed to focus on translating without managing multiple translation tools or file formats." [excerpt provided by the author] See: (1) "XML Localization Interchange File Format (XLIFF)"; (2) "Language Identifiers in the Markup Context."

  • [February 18, 2002] "Architectural Theses on Namespaces and Namespace Documents." By Tim Bray. ['This document represents only the opinion of its author and has not been reviewed or approved by any other person or organization.'] "A namespace name is defined to be a URI reference. Some URI references may be dereferenced; when this is done, the result is referred to as the namespace document. In December 2000, I co-edited a proposal for Resource Directory Definition Language, an extension of XHTML with a <rddl:resource> element, designed for use in namespace documents. This document is an outline of the architectural principles which led to the design of RDDL, although they were not thought through in this level of detail at that time. [...] (1) It is not strictly necessary for namespace documents to exist. (2) Namespaces vary widely in semantic effect. (3) Namespaces have definitive material... [theses #4-14 follow]" Referenced in XML-DEV 2002-02-18: "I have just posted some arguments about namespaces and namespace documents as a contribution to TAG debate - I suspect many here will be interested; see [the posting]..." See other references in "Namespaces in XML."
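
    The RDDL idea in miniature (a sketch; the schema file name is invented, and the role/arcrole URIs follow RDDL's published conventions): the namespace document stays human-readable XHTML while rddl:resource elements point machine-readably at the namespace's definitive material:

      <rddl:resource xmlns:rddl="http://www.rddl.org/"
          xmlns:xlink="http://www.w3.org/1999/xlink"
          xlink:href="invoice.xsd"
          xlink:role="http://www.w3.org/2001/XMLSchema"
          xlink:arcrole="http://www.rddl.org/purposes#schema-validation">
        An XML Schema for validating documents in this namespace.
      </rddl:resource>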

  • [February 15, 2002] "The e-Service Development Framework (eSDF)." Edited by Tim Benson. Published by the UK Cabinet Office, Office of the e-Envoy. Version 1.0b. 88 pages. February 06, 2002. Released for trial use and evaluation over the next six months. Comments to Adrian Kent (Interoperability Policy Adviser, Technology Strategy, Office of the e-Envoy). "The e-Service Development Framework (eSDF) provides a methodology for developing interoperability specifications for use in the public sector. The focus is on preserving the information content so that the information receiver can use it without loss or change of meaning. This document is an introduction to the Office of the e-Envoy's (OeE) e-Service Development Framework (eSDF) and e-Service Development Process Guidelines, which provide a road map for the development of electronic service delivery throughout the public sector. Overview: (1) Part 1 Section A: The e-Service Development Framework (eSDF) provides a high level map of all of the constituents of the process. (2) Section B: The specification of functional Requirements is the first, vital stage of any design process. The Requirements specification uses the Government Common Information Model (GCIM) and use case analysis as a framework for specifying each service interaction in a way that is appropriate to the domain. (3) Section C: Message Design Specification continues the example into the more technical aspects of message design. The service interactions specified in the Requirements specification are converted into detailed Message Specifications, based on the generic structure of the Government Message Reference Model (GMRM). Separate Message Types are defined for each type of message. These are technology-neutral. The conversion into specific technologies, such as XML schema, is covered in a separate document. (4) Section D: Implementation using XML Schema... The Message Design Specification (MDS) is technology-neutral, but the e-GIF mandates the use of XML schema. Once the MDS has been prepared, the next step is the preparation of one or more XML Schemas describing the actual XML messages to be used. In principle, an XML Schema could be generated automatically from the MDS, provided that all relevant value sets are available in schema form. However, it is more likely that the best that can be generated automatically from the MDS is a useful first draft, or, as described below, tool-based support for a human schema designer. [We here present] a use case for XML schema development using the MDS; a domain model showing the relationships between XML message components and the Message Design Domain Model; and a short discussion of the XML infrastructure required to make the scenario described in the use case a reality." See also the metadata description. Related references in "e-Government Interoperability Framework (e-GIF)."

  • [February 15, 2002] E-Government Metadata Standard (e-GMS). January 09, 2002 ['09/01/02']. Version 0.2. 36 pages. The e-GMS lists the elements and refinements that will be used by the public sector to create metadata for information resources. Draft for consultation. Comments due by 22-February-2002. Please send comments to: Maewyn Cumming, Metadata Policy Adviser, Office of the e-Envoy, Stockley House, 130 Wilton Road, London SW1V 1LQ. XML schema [to be completed]. "The first version of this Standard, as described in the e-GMF, consisted of simple Dublin Core. In this version, additional elements have been added to facilitate information and records management... A mapping description lists the elements in other metadata schemes that each element maps to. The other schemes compared are (1) Dublin Core: the set of metadata elements and refinements developed by the Dublin Core Metadata Initiative, which makes up the core of the e-GMS; (2) AGLS: Australian Government Locator Service; (3) NGDF: The National Geospatial Data Framework; (4) GILS: Government Locator Service, used in the USA; (5) PRO: Metadata elements recommended by the UK Public Record Office. Rationale: "The reasons and policies for developing this standard are outlined in the e-Government Metadata Framework: (1) Modernising Government calls for better use of official information, joined-up systems and policies, and services designed around the needs of citizens. (2) Considerable work has already been done to standardise government information systems so they can be accessed easily from central portals. (3) New systems for the handling of electronic records are being devised. Official records will not always be stored in paper format. (4) Metadata makes it easier to manage or find information, be it in the form of web pages, electronic documents, paper files, databases, anything. (5) For metadata to be effective it needs to be structured and consistent across organisations. (6) The e-GMF is therefore mandated across all government information systems. By association, so is the e-GMS." Also in Word/RTF format; see the metadata description. See: "e-Government Interoperability Framework (e-GIF)." [cache]

  • [February 14, 2002] "Securing Signatures for Web Services." By Paul Festa. In CNET News.com (February 14, 2002). "The premier Web standards body on Thursday recommended a way of signing documents using XML, calling its new digital signature guidelines a key tool for Web services infrastructure. The World Wide Web Consortium's (W3C) XML Signature recommendation, developed in conjunction with the Internet Engineering Task Force (IETF), provides a standard way of signing XML documents so that recipients can verify the identity of the sender and the integrity of the data. Those guarantees are crucial to Web services, an area the W3C has been criticized for neglecting. 'XML Signature is a critical foundation on top of which we will be able to build more secure Web services,' W3C founder and director Tim Berners-Lee said in a statement. 'By offering basic data integrity and authentication tools, XML Signature provides new power for applications that enable trusted transactions of all sorts.' The digital signature is just one tool in a group under construction at the W3C required for secure transactions. While the signature verifies a sender's identity and the data's integrity, an encryption method is required to scramble the message and prevent its being read en route to the recipient. The W3C is at work on XML Encryption..." See the news item "XML-Signature Published as a W3C Recommendation."
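
    The skeleton of a signature produced under the Recommendation (digest and signature values elided here; the #order reference is invented): SignedInfo records what was digested and with which algorithms, and SignedInfo itself is what gets signed:

      <Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
        <SignedInfo>
          <CanonicalizationMethod
              Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/>
          <SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
          <Reference URI="#order">
            <DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
            <DigestValue>...</DigestValue>
          </Reference>
        </SignedInfo>
        <SignatureValue>...</SignatureValue>
      </Signature>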

  • [February 14, 2002] "Exclusive XML Canonicalization Version 1.0." W3C Candidate Recommendation 12-February-2002. Authors/Editors: John Boyer (PureEdge Solutions Inc.), Donald E. Eastlake 3rd (Motorola), and Joseph Reagle (W3C). "Canonical XML specifies a standard serialization of XML that, when applied to a subdocument, includes the subdocument's ancestor context including all of the namespace declarations and attributes in the xml: namespace. However, some applications require a method which, to the extent practical, excludes unused ancestor context from a canonicalized subdocument. For example, one might require a digital signature over an XML payload (subdocument) in an XML message that will not break when that subdocument is removed from its original message and/or inserted into a different context. This requirement is satisfied by Exclusive XML Canonicalization... This specification from the IETF/W3C XML Signature Working Group is a Candidate Recommendation of the W3C. The Working Group believes this specification incorporates the resolution of all last call issues; furthermore it considers the specification to be very stable and invites implementation feedback during this period. The exit criteria for this phase are at least two interoperable implementations over every feature, one implementation of all features, and one report of satisfaction in an application context (e.g., SOAP, SAML, etc.) Note, this specification already has significant implementation experience as demonstrated by its Interoperability Report. We expect to meet all requirements of that report within the two month Candidate Recommendation period (closing April 16, 2002)..."
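
    A sketch of the problem being solved (the namespace URIs are invented): the po:order payload below uses only the po prefix, yet inclusive Canonical XML would copy the ancestor's env and ed declarations into the payload's serialization, so a signature over it would break when the payload is moved into a different envelope; exclusive canonicalization emits only xmlns:po, which is visibly used:

      <env:Envelope xmlns:env="http://example.org/envelope"
                    xmlns:ed="http://example.org/edition">
        <env:Body>
          <po:order xmlns:po="http://example.org/po">
            <po:item>coffee</po:item>
          </po:order>
        </env:Body>
      </env:Envelope>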

  • [February 14, 2002] "Accenture, BEA, Hewlett-Packard, IBM, Intel, Microsoft, and SAP Form Web Services Interoperability Organization (WS-I) to Speed Development and Deployment of Web Services; Provide Support and Roadmap for Developers and Customers." By [WSJ Staff]. In Web Services Journal (February 12, 2002). "A broad group of technology leaders have formed the Web Services Interoperability (WS-I) Organization... Exclusive to WSJ, we interviewed Bob Sutor, director of e-business standards strategy, IBM, for deeper background on the new group. He commented, 'This came about from work IBM and Microsoft had done together with our industry partners on standards efforts. Most important, however, the need for interoperability is something that our customers made very clear to us. We have to ensure that 'interoperability' is not just a marketing term to associate with Web services, we need to make it real and we need to make it measurable.' When asked if it posed any special challenge to bring together this assortment of industry rivals into one organization, Sutor noted, 'No more than usual. After a lot of careful planning you have a limited time to engage with the partners and educate them on the goals of the organization and sign them up for the launch. Overall, it was an exciting experience.' WSJ asked if the group would physically meet as a committee, or will the sharing of ideas and standards occur via Internet cooperation? 'Probably a combination of both,' said Sutor. 'There will be a community meeting at the end of February where we will bring everyone together and form some workgroups. After that the workgroups will use e-mail, have teleconferences and have their own face-to-face meeting schedules. We will later decide the schedule for future full community meetings.'... Related article from WSJ: "W3C Sets Record Straight On New Web Services Alliance: 'WS-I Is Not a Competitor to W3C...They're Choosing Specs, Not Building Them'." See "Web Services Interoperability Organization (WS-I)."

  • [February 14, 2002] "Updating Your System. Is VoiceXML Right for Your Customer Service Strategy?" [Critical Decisions]. By Jonathan Eisenzopf. In New Architect: Internet Strategies for Technology Leaders Volume 7, Issue 3 (March 2002), pages 20-21. ISSN: 1537-9000. "VoiceXML is based on technology that has been used in IVR systems for years and deployed in many Fortune 500 companies. VoiceXML is simply a thin veneer that abstracts the low-level APIs used to develop IVR applications. Voice dialogs are specified by static (or dynamic) XML documents that contain sets of recorded or synthesized prompts and speech recognition grammars. These XML documents are converted by a VoiceXML gateway into low-level commands that interact with the digital signal processors (DSP) and telephony boards in a VoiceXML gateway. It's unlikely that VoiceXML will bring the Web to the phone, however. Despite the hype, VoiceXML isn't well suited as a general-purpose interface for providing telephone access to the Web. Instead, the two areas where it can provide immediate and compelling benefits are customer service and order entry... The airline industry has used IVR systems to provide flight arrival and departure information for some time. This has dramatically reduced costs by eliminating the need for live operators and shortening the average length of each call. However, customers can be frustrated by touch tone IVR systems, and will often press zero in an attempt to reach a live representative. VoiceXML-based IVRs are a better alternative to such systems because they offer speech recognition and text-to-speech capabilities. For example, Amtrak's IVR application lets callers speak to the system, rather than navigating through multiple menus. Before updating its IVR system to use speech recognition, roughly 70 percent of customers using the system would exit to speak with an operator. After the speech recognition technology was added, Amtrak reports that the exit rate was reduced to 30 percent... Although VoiceXML hasn't been widely adopted yet, the fact that technology vendors are taking an interest in the standard is reassuring. With companies like Oracle, HP, Motorola, and IBM jumping on the VoiceXML bandwagon, it's likely that you'll have access to VoiceXML-capable tools the next time you upgrade your application servers and Web development software... Several companies are already working to improve VoiceXML systems to address these issues. As with most technologies, once VoiceXML's appeal broadens and the benefits of deploying IVR solutions as a complement to online e-business applications become more evident, the rate of adoption will increase. If you currently handle order entry and customer support with a combination of online and telephone support, now may be the time to consider VoiceXML as a way to reduce costs and realize greater return on your existing software investments..." Note: New Architect was formerly WebTechniques. See "VoiceXML Forum."
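
    A minimal dialog of the kind described (the prompt wording and grammar file name are invented; the structure follows VoiceXML's form/field model): the gateway renders the prompt, applies the recognition grammar to the caller's speech, and continues once the field is filled:

      <?xml version="1.0"?>
      <vxml version="1.0">
        <form id="arrivals">
          <field name="flightnum">
            <prompt>Which flight would you like information about?</prompt>
            <grammar src="flightnumbers.gram"/>
            <filled>
              <prompt>Flight <value expr="flightnum"/> is on time.</prompt>
            </filled>
          </field>
        </form>
      </vxml>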

  • [February 13, 2002] "A Method for the Unification of XML Data." By Ronaldo dos Santos Mello. Paper presented at the OOPSLA 2001 "Workshop on Objects, XML and Databases." 15 pages (with 19 references). "XML is a common standard for data representation and exchange over the Web. Considering the increasing need for managing data on the Web, integration mechanisms are now required in order to access heterogeneous XML sources. We describe in this paper a unification method for XML schemata. The inputs to the unification method are object-oriented canonical schemata that conceptually abstract local DTDs. The unification process applies specific algorithms and rules to the concepts of the canonical schemata to generate an ontology. The ontology is the result of the semantic integration of the canonical schemata, acting as a front-end for user queries..." See also the slides, and the related paper "A Mediation Layer for Integration of XML Data Sources with Ontology Support." For summaries of other presentations at the OOPSLA 2001 workshop, see the "Workshop Report on Objects, XML and Databases."

  • [February 13, 2002] "Embed Binary Data in XML Documents Three Ways. Using XML for Data Transfer Between B2B Applications." By Gowri Shankar (Software engineer, AQUILA Technologies Pvt. Ltd). From IBM developerWorks, XML Zone. February 2002. ['The major advantages of XML for interoperability of data are its extensibility and its ability to represent all forms of data in text format. XML proves its worth even when dealing with binary data. This article focuses on three ways to represent binary data in XML. The first method uses XML and DTD to represent a binary file or data source in the most appropriate way. The second way uses a simple format where everyone can define their own format to represent the binary data. With the third method, all of the binary data is contained within the XML file.'] "XML has transformed the way data is exchanged, shared, and transferred between disparate applications -- applications with varying technologies, operating platforms, and locations. With all this data movement, the only thing that you have to remember for scalability's sake is to wrap the data through HTTP-enabled markup. The best way to send your data through HTTP is with XML, which is better than HTML for many reasons. Originally, HTML was supposed to handle only text, but today it is commonly used to refer and mark up non-text data as well. So it is quite natural that XML followed suit. Because XML does not follow a specified syntax (as HTML does) and is more extensible than HTML, people use it in any way they wish to mark up all types of data. Still, HTTP is commonly used as the transporting layer; thus XML has to work around many constraints while dealing with binary data. XML is only needed to mark up binary data when it is a part of the total information requested by a user or a client application. Furthermore, the advantage of encompassing binary data in XML is the ease with which it is transported through HTTP... Let's take a look at the three ways to represent or embed binary data in an XML file, listed briefly here and in more detail below: Type 1: Represent the binary data by means of external entity and notation; Type 2: Represent the binary data using MIME data types; Type 3: Embed the binary data in CDATA section..."
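
    The three techniques, compressed into one sketch (the element, attribute, and file names are invented; the Base64 content is truncated):

      <!-- Type 1: external unparsed entity declared with a notation -->
      <!DOCTYPE report [
        <!NOTATION gif SYSTEM "image/gif">
        <!ENTITY logo SYSTEM "logo.gif" NDATA gif>
        <!ELEMENT report EMPTY>
        <!ATTLIST report img ENTITY #REQUIRED>
      ]>
      <report img="logo"/>

      <!-- Type 2: a MIME-typed reference the receiving application resolves -->
      <attachment href="logo.gif" content-type="image/gif"/>

      <!-- Type 3: the bytes themselves, Base64-encoded inside a CDATA section -->
      <image encoding="base64"><![CDATA[R0lGODlhEAAQAPcAAAAA...]]></image>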

  • [February 13, 2002] "XML Documents On The Run: Part 1. SAX Speeds Through XML Documents With Parse-Event Streams." By Dennis M. Sosnoski. In JavaWorld (February 13, 2002). ['Event-driven XML document processing with SAX (Simple API for XML) and SAX2 can greatly improve performance and can avoid document size limits associated with in-memory representations such as DOM (Document Object Model) or JDOM. On the other hand, trying to wrap your brain around event-driven programming can drive you to a career in sales. In this article, Dennis Sosnoski aims to keep your career on track by introducing event-handler basics in Java.'] "One of the oldest approaches to processing XML documents in Java also proves one of the fastest: parse-event streams. That approach became standardized in Java with the SAX (Simple API for XML) interface specification, later revised as SAX2 to include support for XML Namespaces. Event-stream processing offers other advantages beyond just speed. Because the parser processes the document on the fly, you can handle it as soon as you read its first part. Other approaches generally require you to parse the complete document before you start working with it -- fine if the document comes off a local disk drive, but if the document is sent from another system, parsing the complete document can cause significant delays. Event-stream processing also eliminates any document size limits. In contrast, approaches that store the document's representation in memory can run out of space with very large documents. Setting a hard limit on a real-world document's size is often difficult, and potentially a major problem in many applications... In the next article in this series, I'll take the event-based programming approach further with an enhanced handler design that adds more flexibility to your programs. I'll also cover the pull-parser approach. Pull parsing resembles SAX/SAX2 event-stream parsing, but it gives your program control over the stream, which gives you all the advantages of event-stream parsing without the complexities of event-driven programming. Be sure to check back then for the rest of the story on parse-event stream processing of XML in Java." With source code.
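
    What "parse-event stream" means in practice can be seen by annotating a small document with the SAX2 callbacks a parser would fire while reading it (a sketch with abbreviated signatures):

      <?xml version="1.0"?>
      <!-- startDocument()
           startElement(uri="", localName="order", atts: id="42")
             startElement("item")  characters("coffee")  endElement("item")
           endElement("order")
           endDocument() -->
      <order id="42">
        <item>coffee</item>
      </order>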

  • [February 13, 2002] "Gates Casts Visual Studio .Net." By Matt Berger. In InfoWorld (February 13, 2002). "Microsoft's Bill Gates cast his company's .Net initiative wide Wednesday, releasing the final version of the long-anticipated developer toolkit, Visual Studio .Net, as well as the underpinnings of its emerging Web-based development platform, called the .Net Framework. Microsoft's chairman and chief software architect introduced the new application development tools with few bells and whistles, letting market momentum speak for itself. More than 3 million developers are testing and deploying applications with early release versions of Visual Studio .Net and the .Net Framework, the largest testing group in the company's history, according to Microsoft... more important than Microsoft's tools for building new .Net applications, is the final release of the .Net Framework, the technology that will allow these new applications to run on computers, servers and various computing devices such as handhelds... Microsoft has invested all of its resources into developing products around the .Net Framework, Gates said. It plans to spend about $5 billion each year on research and development to outfit its products with support for industry standards such as XML, SOAP (Simple Object Access Protocol) and UDDI (Universal Description, Discovery and Integration)... First the company is adding a layer on top of its software products that use XML as a data type to transfer data. Many of Microsoft's server and desktop software products continue to be updated with new support, including SQL Server database software. The next step, Gates said, is to release new versions of products that will include XML as the central data type. An XML-centric release of SQL Server code-named 'Yukon' is due for release next year..." See the announcement: "Microsoft Launches XML Web Services Revolution With Visual Studio .NET and .NET Framework Bill Gates Highlights Customers and Partners Autodesk, Borland, Computer Associates, Groove Networks, L'Oreal, Macromedia, Merrill Lynch and IBM Reaping Gains on .NET Platform."

  • [February 13, 2002] "Microsoft Sharpens Tools for SQL Server, BizTalk Server." By James Niccolai. In InfoWorld (February 13, 2002). "Microsoft released two developer toolkits Wednesday for building Web services on top of SQL Server 2000, its database product, and BizTalk Server, its software for tying together business applications... The toolkit for SQL Server 2000 is supposed to provide a way for developers to turn applications written for Microsoft's database into Web services, meaning they would be able to interact with applications at other companies or elsewhere in an organization regardless of who created the applications and what platforms they run on. Called the SQL Server 2000 Web Services Toolkit for Microsoft .Net, it packages existing related white papers and Web casts with code samples and Version 3.0 of SQL XML, an update to Microsoft's software for managing XML (Extensible Markup Language) data in a database, said Stan Sorensen, Microsoft director of SQL Server product management. Developers can use the tools to expose stored procedures -- the database functions invoked when an application is running -- and server-side XML templates as Web services. They can also generate automatically the WSDL (Web Services Description Language) code describing the interfaces of a Web service, he said, allowing it to work with other applications... the BizTalk Server 2002 Web Services Toolkit serves a similar purpose as the SQL Server toolkit, helping developers turn BizTalk server business processes into Web services. It includes code samples written in both C# and Visual Basic .Net..." See the press release: "Microsoft Extends XML Web Services Support in .NET Enterprise Servers Through Visual Studio .NET. New Toolkits for SQL Server, BizTalk Server Integrate With Visual Studio .NET For Deepened Support for Building, Deploying and Orchestrating XML Web Services."

  • [February 13, 2002] XLIFF 1.0 Specification. OASIS Committee Specification, 6-Feb-2002. Edited by Yves Savourel and John Reid (Initial contribution editor). Version URL: http://www.oasis-open.org/committees/xliff/documents/xliff-20020206.htm. Abstract: "This document defines the XML Localisation Interchange File Format (XLIFF). The purpose of this format is to store localisable data and carry it from one step of the localisation process to the other, while allowing interoperability between tools." From the introduction: "XLIFF is the XML Localisation Interchange File Format designed by a group of software providers, localisation service providers, and localisation tools providers. It is intended to give any software provider a single interchange file format that can be understood by any localisation provider. It is loosely based on the OpenTag version 1.2 specification and borrows from the TMX 1.2 specification. However, it is different enough from either one to be its own format..." The XML DTD is provided in a separate document. See also the original XLIFF 1.0 Specification produced by the DataDefinition group. References: see "XML Localization Interchange File Format (XLIFF)."
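
    A minimal sketch of what such an interchange file looks like, based on the structure described in the specification (the file name, languages, and identifier below are invented for illustration):

        <?xml version="1.0" encoding="UTF-8"?>
        <xliff version="1.0">
          <file original="app.properties" source-language="en"
                target-language="fr" datatype="plaintext">
            <body>
              <!-- One localisable unit: the source text plus its translation -->
              <trans-unit id="greeting">
                <source>Hello, world</source>
                <target>Bonjour, monde</target>
              </trans-unit>
            </body>
          </file>
        </xliff>

    Each step of the localisation chain can read, enrich, and pass along the same file, which is the tool-to-tool interoperability the format aims for.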

  • [February 13, 2002] "The Platform for Privacy Preferences 1.0 Deployment Guide." W3C Note 11-February-2002. By Martin Presler-Marshall (IBM). Version URL: http://www.w3.org/TR/2002/NOTE-p3pdeployment-20020211. Previous Version URL: http://www.w3.org/TR/2001/NOTE-p3pdeployment-20011130. "The Platform for Privacy Preferences (P3P) provides a way for Web sites to publish their privacy policies in a machine-readable syntax. This guide explains how to deploy P3P on a Web site, and issues that Webmasters and content owners should consider when deploying P3P. This guide is intended for Web site administrators and owners. You can use it whether you operate your own Web server (or many of them), or are responsible for some pages on a server someone else operates. You should have some familiarity with publishing content (HTML files, images, etc.) to a Web server, but do not need to be an expert at configuring and operating Web servers. You also don't need to be a P3P expert. This guide will discuss how to go about deploying P3P. It will discuss: (1) What's involved in deploying P3P on a Web site. (2) How to decide how many P3P policies to use, and how to map those policies onto the Web site. (3) Different ways to publish your privacy policy, and the pros and cons of each. (4) Step-by-step instructions for deploying your privacy policy on various popular Web servers. The W3C maintains a list of P3P implementations which includes pointers to tools which can help with 'how to code a privacy policy in the P3P syntax'." See also the FAQ document "P3P and Privacy on the Web FAQ" and "W3C Privacy Activity Statement." Local references: "Platform for Privacy Preferences (P3P) Project."

  • [February 12, 2002] "Cuneiform for Everyman." By Robert K. Englund (Department of Near Eastern Languages and Cultures, UCLA). In D-Lib Magazine Volume 8 Number 1 (January 2002). [In Brief.] "The Cuneiform Digital Library Initiative, a joint project of the University of California at Los Angeles and the Max Planck Institute for the History of Science, Berlin, proposes to make available to an Internet public the form and contents of the first millennium of writing in ancient Mesopotamia (ca. 3300-2000 B.C.). Now in its second year of funding by the National Science Foundation (Division of Information and Intelligent Systems), the CDLI is pursuing an online distribution of text and image documentation of the ca. 120,000 cuneiform tablets from 3rd millennium Babylonia... The field of Assyriology, heretofore little known beyond the confines of academia, offers to members of related disciplines, and to an interested public, uncharted data documenting the linguistic, historical and intellectual developments of a long-lost age. The cuneiform archives that contain this information were in the second half of the 19th, and throughout the 20th century, excavated in Iraq and deposited in public and private collections spread across the globe... In cooperation with the curators of the leading museums of the world, with an international board of cuneiform specialists and historians of science, and with XML programmers and digital imaging experts, the CDLI has established markup and scanning standards for the entry and archiving of cuneiform texts to insure the long-term compatibility of its data with those of related digital library projects. Two associated online journals will offer a forum for the distribution of articles dealing with early language, writing, paleography, administrative history, mathematics, metrology, and the technology of modern cuneiform editing..." See "Encoding and Markup for Texts of the Ancient Near East."

  • [February 12, 2002] dtd2xs version 1.54. Announcement posted by Joerg Rieger. "We are pleased to announce a new release of 'DTD to XML Schema translator' [dtd2xs]. One may use it to translate a Document Type Definition (XML 1.0 DTD) into an XML schema (REC-xmlschema-1-20010502). The translator can map meaningful DTD entities onto named and therefore reusable XML Schema constructs such as <simpleType>, <attributeGroup> and <group>. The translator can map DTD comments onto XML Schema <documentation> elements. Freely available as a Java class, as a Web tool, and as a Java application..."
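
    To illustrate the kind of translation involved (a generic sketch, not necessarily dtd2xs's exact output), a DTD declaration such as

        <!ELEMENT book (title, author+)>
        <!ATTLIST book isbn CDATA #REQUIRED>

    would map onto an XML Schema construct along these lines:

        <xs:element name="book">
          <xs:complexType>
            <xs:sequence>
              <xs:element ref="title"/>
              <!-- author+ in the DTD becomes maxOccurs="unbounded" -->
              <xs:element ref="author" maxOccurs="unbounded"/>
            </xs:sequence>
            <xs:attribute name="isbn" type="xs:string" use="required"/>
          </xs:complexType>
        </xs:element>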

  • [February 12, 2002] "National Information Standards Organization Reports OpenURL Standardization Moving Forward." From NISO. "NISO, the National Information Standards Organization, reports that Committee AX, the NISO standards committee that is preparing the OpenURL Standard, met on January 24th and 25th at the CNRI headquarters in Reston, Virginia and continues to make significant progress. Eric Van de Velde, the committee chair, summing up progress to date said that existing applications of OpenURL technology only scratch the surface of what else is feasible. Currently, we apply OpenURL technology to bibliographic citations. In the near future, we may apply it to many other types of information: subject headings, legal documents, biological (genome sequences), etc. Currently, we encode OpenURL in an HTTP GET or POST format. In the near future, we may encode OpenURL links in XML. Currently, we think of OpenURL links being provided by specific information providers. In the near future, third parties may provide OpenURL links for any information resource. With these and other opportunities yet to be explored, it is obvious that OpenURL is a technology in its infancy, and we should think of the emerging OpenURL standard as the beginning of a long-term evolutionary process. To insure that progress made so far will continue to have value, the committee adopted the OpenURL draft originally submitted to NISO as Version 0.1 of the standard. This document is available as part of the official record on the committee's web site..." See "NISO Develops an OpenURL Standard for Bibliographic and Descriptive Metadata."

  • [February 12, 2002] "OpenURLs, Citations, and Two-Level SRV-Record-Based Resolution." By Richard L. Goerwitz III. "This article retraces (from the standpoint of academic researchers) the fundamental issues behind the development of open linking strategies, particularly OpenURLs, and shows that, despite many advances afforded by the emerging OpenURL specification, OpenURLs lack not only the overall robustness but also the portability necessary for use in a broad academic context. Left unremedied, the portability problem alone would be enough to doom OpenURLs to oblivion. By establishing a two-level SRV-record-based OpenURL resolution system, however, this problem can be overcome - and a pathway can be opened up towards more robust support of real-world academic usage scenarios..."

  • [February 12, 2002] "Critics Clamor For Web Services Standards." By Paul Festa. In CNET News.com (February 12, 2002). ['As the computer industry increasingly focuses on Web services, the Web's premier standards body is weathering vociferous criticism from members and analysts alike that it is dropping the ball on this hot trend.'] "Discontent on the Web services front is hitting the World Wide Web Consortium (W3C) at a crucial time in the group's history--and during a potentially momentous period for the Web itself. Software makers and traditional businesses are trying to determine what shape Web services will take and whether they will live up to the hype they have generated... In a sign of the growing impatience that software companies have regarding Web services, Microsoft, IBM, BEA Systems and Intel last week launched the Web Services Interoperability Organization (WS-I), a consortium aimed at boosting Web services... it remains to be seen how compatible the systems will be. And while both Microsoft and IBM have joined the WS-I, Sun is conspicuously absent. In addition, the heavyweights have already introduced competing protocols for some aspects of the Web services architecture, for example Microsoft's Xlang and IBM's Web Services Flow Language for work-flow management. IBM said that calling that discrepancy a full-fledged rift is premature, and that the company released its protocol more for industry input than as a set-in-stone specification. Meanwhile, the W3C has responded to its critics with a long list of its recommendations and proposals that most agree will be part of any standardized Web services architecture. In addition, an alphabet soup of Web services-specific standards have been proposed and are under development not only by the W3C but by independent groups including Oasis, UDDI.org and the Business Process Management Initiative. Sources say Microsoft, IBM and other software giants are among those voicing frustration at the W3C's pace on Web services. Those companies, however, are circumspect or silent on the matter in public. Members of the WS-I flatly denied the group was formed in response to the W3C's slowness in creating Web services standards. According to Bob Sutor, IBM's director of e-business standards, the importance of Web services compatibility was first introduced by IBM and Microsoft during a W3C workshop last April, in which 70 companies brainstormed about the future of the field and its requirements. The WS-I says its priority is to educate businesses on how to build compatible Web services. But Sutor reiterated that it also wants to offer guidelines for creating specifications needed in areas such as security and reliability. It has nothing to do with the speed of the W3C or other standards groups, he said..." See "Web Services Interoperability Organization (WS-I)."

  • [February 12, 2002] "Introduction to Web Services." By John Canosa. In Embedded Systems Programming Volume 15 Number 2 (February 2002). ['Web services are coming to the enterprise, and embedded use can't be far behind. Here are the basics.'] "When you hear about Microsoft's .NET, Sun's ONE, HP's e-services and IBM's WebSphere, you are hearing about web services. Most of these organizations speak of web services in the context of Business-to-Business (B2B) and Business-to-Consumer (B2C) information exchange and e-commerce. As you will discover in this article, web services are just as powerful for connecting embedded systems and other distributed intelligent assets into the business enterprise. They can help provide such valuable services as automatically generated service requests, remote diagnostics, and automatic consumables reordering... A web service is a programmable component that provides a service and is accessible over the Internet. Web services can be standalone or linked together to provide enhanced functionality. Buying airline tickets, accessing an online calendar, and obtaining tracking information for your overnight shipment are all business functions that have been exposed to the outside world as web services... Web services consist of methods that operate on messages containing either document-oriented or procedure-oriented information. An architecture that is based on web services is the logical evolution from a system of distributed object-oriented components to a network of services. Web services provide a loosely coupled infrastructure that enables cross-enterprise integration. Web services differ from existing component object models and their associated object model specific protocols, such as CORBA and IIOP, COM and DCOM, and Java and RMI, in that the distributed components are interfaced via non-object-specific protocols. Web services can be written in any language and can be accessed using the familiar and firewall-friendly HyperText Transport Protocol (HTTP)..."

  • [February 11, 2002] "Combining the Power of W3C XML Schema and Schematron." By Eddie Robertsson. "This article shows how to combine W3C XML Schema and Schematron by inserting Schematron rules in the <xs:appinfo> element of the W3C XML Schema... After the W3C ratified W3C XML Schema as a full recommendation on May 2nd 2001 it has become clear that this is the most popular XML Schema language for developers. Many believed that W3C XML Schema would solve all the problems that existed with validation of XML documents but this was never the goal of W3C XML Schema... When W3C XML Schema is not powerful enough there are other options for developers. One option is to find a different XML Schema language that can express all the needed constraints. Another option is to add extra code to your application to check the things not expressible in the W3C XML Schema language. A third option, made available through one of W3C XML Schema's extension mechanisms, is to combine W3C XML Schema with another XML Schema language. This article will provide an explanation and several examples of how Schematron rules can easily be embedded within W3C XML Schemas. Schematron has its strengths where W3C XML Schema has its weaknesses (co-occurrence constraints) and its weaknesses where W3C XML Schema has its strengths (structure and data types). In the examples provided W3C XML Schema is used as far as possible and then the embedded Schematron rules are used to express what cannot be done with W3C XML Schema alone. The following four areas, which W3C XML Schema does not fully address, will be covered: dependant attributes, interleaving of elements, co-occurrence constraints and relationships between different XML documents. A short introduction to Schematron is provided but the reader will need a basic understanding of W3C XML Schema to benefit from the article..." [Posted note on 'xmlschema-dev@w3.org': "I'm currently working on a paper that will explain the details of embedding Schematron rules in the <xs:appinfo> element in a W3C XML Schema. I've put up a draft which contains some background, introduction to Schematron, examples of embedded Schematron rules and how the validation process works. The draft also contains a link to a zip file with all the examples used so you can try it out yourself. All comments are welcome..." For schema description and references, see "XML Schemas."

  • [February 11, 2002] "Combining UML, XML and Relational Database Technologies. The Best of All Worlds For Robust Linguistic Databases." By Larry S. Hayashi and John Hatton (SIL International). Pages 115-124 in Proceedings of the IRCS Workshop on Linguistic Databases (11-13 December 2001, University of Pennsylvania, Philadelphia, USA. Organized by Steven Bird, Peter Buneman and Mark Liberman. Funded by the National Science Foundation). "This paper describes aspects of the data modeling, data storage, and retrieval techniques we are using as we develop the FieldWorks suite of applications for linguistic and anthropological research. Object-oriented analysis is used to create the data models. The models, their classes and attributes are captured using the Unified Modeling Language (UML). The modeling tool that we are using stores this information in an XML document that adheres to a developing standard known as the XML Metadata Interchange format (XMI). Adherence to the standard allows other groups to easily use our modeling work and because the format is XML, we can derive a number of other useful documents using standard XSL transformations. These documents include (1) a DTD for validating data for import, (2) HTML documentation of diagrams and classes, and (3) a database schema. The latter is used to generate SQL statements to create a relational database. From the database schema we can also generate an SQL-to-XML mapping schema. When used with SQL Server 2000 (or MSDE), the database can be queried using XPath rather than SQL and data can be output and input using XML. Thus the Fieldworks development process benefits from both the maturity of its relational database engine and the productivity of XML technologies. With this XML in/out capability, the developer does not need to translate between object-oriented data and relational representation. The result will be, hopefully, reduced development time. Another further implication is the potential for an increased interoperability between tools of different developers. Mapping schemas could be created that allow FieldWorks to easily produce and transfer data according to standard DTDs (for example, for lexicons or standard interlinear text). Data could then be shared among different tools -- in much the same way that XMI allows UML data to be used in different modeling tools... Stroustrup [The C++ Programming Language] states that 'constructing a useful mathematically-based model of an application area is one of the highest forms of analysis. Thus, providing a tool, language, framework, etc., that makes the result of such work available to thousands is a way for programmers and designers to escape the trap of becoming craftsmen of one-of-a-kind artifacts.' UML is an excellent example of such a language and framework. UML tools that make use of XMI provide even greater longevity and availability to the modeling work. XML is also such a language and framework. Because XMI is XML, we have been able to use standard XML tools and the XML functionality of SQL Server to easily derive a number of implementation specific products... UML, XMI and XML provide a stable foundation for data modeling and software development. We expect our UML models to have longevity and we trust that the XMI representation will allow us to easily derive new functionality and better interface implementations as technology changes." See: (1) "Conceptual Modeling and Markup Languages"; (2) "XML and Databases."

  • [February 11, 2002] "Iona Expands Web Services Platform." By Richard Karpinski. In Internet Week (February 06, 2002). "Iona Wednesday shipped new versions of its Web services platform to help large companies integrate their enterprise applications and smaller companies more affordably link into trading partner networks. Iona, perhaps best known for its CORBA technology, began making a major splash in the Web services area earlier this year. At that time, it took its XML Bus technology, which even to its own surprise was being used by customers to solve some sticky enterprise-class problems, and used it as the foundation for its Orbix E2A Web Services platform. That platform provides access to XML-based Web services protocols as a way to integrate traditional enterprise technologies such as J2EE, .NET, CORBA, mainframes and message-oriented middleware... The Collaborate Edition adds new capabilities to the base platform, including improved security, including support for encryption, authentication and digital signatures; packaged adapters for Baan, Siebel, J.D. Edwards, PeopleSoft, SAP and others; and support for protocols including ebXML and RosettaNet..." See the announcement: "IONA Ships Orbix E2A Collaborate and Partner Editions - Industry's First Enterprise Web Services Integration Solutions With ebXML Support. Orbix E2A Eliminates Barriers to Integrating J2EE, .NET, CORBA, Mainframe and Message-Oriented Technologies for Business Efficiency and Return-on-Investment."

  • [February 11, 2002] "A New Wave Nears: Micosoft's Web Services. Microsoft Readies More Sophisticated Development Tools For Web Services." By Aaron Ricadela and Karyl Scott. In InformationWeek (February 11, 2002). "Microsoft has grand plans for its more than 4 million loyal business-software developers, and they involve turning programmers fluent in Windows into architects of a new type of distributed software called Web services. Phase one of the plan gets under way this week, as Microsoft chairman and chief software architect Bill Gates introduces a new suite of development tools, Visual Studio.Net, designed to let programmers build more sophisticated applications -- and potentially be more productive in the process. The promised benefits of Web services include the ability to access everything from a software component to a full-blown application as a service -- possibly offered on a subscription basis -- via the Web. Web services should also simplify the process of application and business-process integration. "For every dollar spent on an application, corporations spend an additional $7 on integration," says Tom Berquist, managing director of research at Goldman Sachs. Microsoft isn't alone among technology vendors providing the tools and infrastructure needed to create this next generation of computing. BEA Systems, IBM, Oracle, Sun Microsystems, and others are on the same track. But Microsoft is off to a fast start, having already delivered underlying technology in its BizTalk and Exchange servers. One exemplary project: Microsoft is turning its giant TerraServer database of topographic maps into a service addressable by any program that adheres to Web-services standards: XML for data integration; the Simple Object Access Protocol for messaging; the Web Services Description Language to describe a program's functions; and the Universal Description, Discovery, and Integration spec to locate components in directories. 'You can say this has been a dream of computer science for many decades,' Gates said recently. 'The momentum is there to drive [Web services] to the same kind of central position that the graphics interface and HTML had across all the different systems in the past'..."

  • [February 11, 2002] "Smart and Simple Messaging. Discover the Hidden Potential of SMS as a Killer App Builder." By Wes Biggs (Senior Software Engineer, kpe). IBM developerWorks, Wireless Zone. February 2002. ['Myriad challenges face developers who want to undertake wireless projects today. While the rosy glow of a wireless future beckons, working with widespread technologies like the Short Messaging System (SMS) requires a different mindset. In this article, Wes Biggs outlines the difficulties facing developers tackling wireless today, and explains how SMS-based solutions could have greater killer app potential than many realize.'] "How can you make your mark in wireless technology today? To answer that question, we'll take a look at what's really happening in wireless right now. In particular, we'll explore the uses of Short Messaging System (SMS) in wireless solutions, both on its own and in combination with emerging technologies such as VoiceXML. I'll explain why SMS has more killer-app potential than you may realize, and offer several concrete examples of how that potential could be applied today... In many cases, the user's caller ID information is available to the VoiceXML service; in others, the user could request a text message by stating his or her phone number and provider. VoiceXML scripts could then feed this data, converted by a speech-recognition tool, to an e-mail engine, which would generate and send the data in SMS format... Similar applications exist in many areas. For example, a phone service might be able to tell you who won the game, but wouldn't it make more sense to read the box score for yourself? VoiceXML combined with SMS could also be used for purchase or trade confirmation on the go (call in, get a text confirmation), dial-up, message-based driving directions, and more. Together, the two technologies can bridge the gap between the limited user interface of a cell phone and the rich interfaces users are accustomed to for data retrieval on the Web... SMS reminders can capitalize on the 'push' nature of the medium to inform people of changes in itineraries and provide other time-sensitive information. Travel booking site Travelocity.com, for example, allows you to receive SMS-based updates to flight times as the airlines change them... Combining VoiceXML and the callback feature makes for a full-circuit input-output loop. Users request data quickly and easily by connecting to a voice-enabled service. They receive information in the form of one or several SMS messages, which they can respond to by calling back the number sent as the sender. The callback number connects them again to the VoiceXML system, where they can request additional data... Until ubiquitous, standardized wireless device user interfaces are a reality, you should keep your applications small and simple. Build for today, but keep the future in mind. Create internal architectures that can be applied to tomorrow's devices, but work with the simple messaging-based applications that are viable now. When building an SMS-enabled service, focus on getting the right data to your audience, not on the presentation of that data. After all, if the hardest piece of your application to develop is the wireless client interface, chances are you're not really providing much value. Don't focus on a 'wow' user interface today, unless you really want to impress the limited audience that has the technology to be wowed. 
Instead, let your wireless solution be a peephole into the high-value machinery of your Internet application or Internet-enabled business..." See "XML Encoding for SMS (Short Message Service) Messages."
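
    A hedged sketch of the voice-in, SMS-out loop described above, in VoiceXML 1.0 (the form is simplified and the server URL is invented; the server-side script would do the lookup and hand the result to an SMS gateway):

        <?xml version="1.0"?>
        <vxml version="1.0">
          <form id="flightinfo">
            <field name="flight" type="digits">
              <prompt>Say or key in your flight number.</prompt>
              <filled>
                <!-- Pass the recognized digits to a script that looks up
                     the itinerary and sends it back as a text message -->
                <submit next="http://example.com/sendsms" namelist="flight"/>
              </filled>
            </field>
          </form>
        </vxml>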

  • [February 11, 2002] "A Tour around W3C XML Recommendations." By Ivan Herman (W3C, Head of Offices). 35 slides. From a presentation at IDA, Singapore, 28-January-2002. The presentation surveys the W3C organization, activities, and standards. "The slides have been prepared in SVG, and need an SVG player or plugin to view them. You may want to check the SVG Implementations page for more details on players and on the latest versions. To ensure a proper display of the slides the latest releases of the players should be used. The first slide is: Titleslide.svgz; if you use older browsers, you might have problems with the compressed version of the slides; you should then use Titleslide.svg..."

  • [February 08, 2002] "[DRM] Technology Standards: Leveling the Playing Field." By Bill Rosenblatt, Bill Trippe, and Stephen Mooney. Chapter 6 [pages 103-137] in Digital Rights Management: Business and Technology (Hungry Minds, Inc, A Wiley Company, 2001-11; ISBN: 0764548891; 288 pages). This chapter (reprinted with permission) provides an overview of DRM standards as of 2002-11, with particular emphasis on DOI and Extensible Rights Markup Language (XrML). Excerpt: "... Rights metadata schemes can be standardized by adopting a rights model, [but] real-world rights specifications are often hard to pin down in a formal rights model, and many uses of content (such as fair use) cannot be accurately represented by a rights model. This implies that a truly comprehensive rights model would have to be large and powerful in order to be useful. The most comprehensive open standard rights model available now is XrML, which is discussed in detail later in this chapter... A useful subset rights model is embodied in the Information and Content Exchange (ICE) protocol, which deals with business-to-business content syndication. ICE is also covered in detail later in this chapter, in the section 'Information and Content Exchange.' A lightweight rights model is also included in the PRISM standard for magazines. The standard includes a set of rights description fields designed to be read and understood by humans rather than machines -- that is, they are really just text descriptions of the rights available on a given piece of content (for example, a magazine article). The more recent MPEG standards for audio, video, and multimedia include hooks for rights models and rights management tools, although they do not specify rights information per se. MPEG-7 also includes rights information, but not in detail -- its content management model includes pointers to detailed rights info, which are outside the scope of the standard (and possibly intended to be the domain of a language such as XrML)... [we here show] various layers of DRM-related standardization and how they relate to the basic standards of the Internet. The lowest layer shows the bedrock Internet standards, HTTP, HTML, and XML. The XML metalanguage forms the basis for many of the DRM-related standards at higher levels, including XrML, ICE, PRISM, and NewsML. The next layer up shows components of DRM that are necessary but not unique to DRM or to the media/publishing industries. These are mostly e-commerce elements, such as transaction processing, encryption, and authentication. The third layer (second from the top) is where most of the action is. These are standards that are specific to the content industries, and they are as we just specified. Some are pure metadata standards, whereas others are protocol standards that contain metadata models. The principal ones are DOI for content identification, XrML and ICE for rights metadata, industry-specific discovery metadata standards, and the various popular content formats. The top layer of the diagram shows those elements that are unlikely to ever become standardized..." For book details, see the online preface and table of contents; available from Amazon. Bill Rosenblatt maintains a DRM information web site. General references: "XML and Digital Rights Management (DRM)." Recent news: "XrML Under Review for the MPEG-21 Rights Expression Language (REL)."

  • [February 07, 2002] "Second Generation Web Services." By Paul Prescod. From XML.com. February 06, 2002. ['The stars of the current web services trend are most definitely the SOAP protocol and its accompanying friends WSDL and UDDI. However, as Paul Prescod points out in our main feature this week, these technologies simply represent the first generation of web service infrastructure. In "Second Generation Web Services" Paul focuses on the underlying "web" parts of web services, and presents a vision for the future of web services that is based on HTTP, XML and URIs.'] "In the early days of the Internet, it was common for enlightened businesses to connect to it using SMTP, NTTP, and FTP clients and servers to deliver messages, text files, executables, and source code... First generation web services are like first generation Internet connections. They are not integrated with each other and are not designed so that third parties can easily integrate them in a uniform way. I think that the next generation will be more like the integrated Web that arose for online publishing and human-computer interactions. In fact, I believe that second generation web services will actually build much more heavily on the architecture that made the Web work, using the holy trinity: standardized formats (XML vocabularies), a standardized application protocol, and a single URI namespace... This next generation of web services will likely adhere to an architectural style called REST, the underlying architectural model of the current Web. It stands for 'representational state transfer'. Roy Fielding of eBuilt created the name in his PhD dissertation. Recently, Mark Baker of Planetfred has been a leading advocate of this architecture. REST explains why the Web has URIs, HTTP, HTML, JavaScript, and many other features. It has many aspects and I would not claim to understand it in detail. In this article, I'm going to focus on the aspects that are most interesting to XML users and developers..." ["REST is a coordinated set of architectural constraints that attempts to minimize latency and network communication while at the same time maximizing the independence and scalability of component implementations. This is achieved by placing constraints on connector semantics where other styles have focused on component semantics. REST enables the caching and reuse of interactions, dynamic substitutability of components, and processing of actions by intermediaries, thereby meeting the needs of an Internetscale distributed hypermedia system..." See "Principled Design of theModernWeb Architecture."]

  • [February 07, 2002] "U.S. Federal XML Guidelines." By Alan Kotok. From XML.com. February 06, 2002. ['The US government has recently issued the first draft of its guidelines for developers in federal agencies working with XML. Alan Kotok has been investigating these guidelines, and finds that they are encouraging in their awareness of XML, but also that the guidelines necessarily raise questions of how one goes about choosing between competing XML technologies.'] "The United States federal government's XML Work Group, a sub-committee of the Chief Information Officers Council (CIOC), drafted its first guidelines that spell out best practices for the use of XML in federal agencies. This document, which begun circulating in early draft form for comment in January 2002, shows that U.S. government agencies, major users of information technology, are trying hard to get their hands around fast-moving XML developments. But the guidelines also show the difficulties the group faces. The CIOC created the XML Work Group in June 2000 and gave it the job of identifying best practices and recommending standards for XML in federal agencies. The U.S. Navy had already written a set of XML guidelines to cover its operations, and the government-wide work group took the Navy's document and generalized it. As of early February 2002, agencies were still reviewing the draft XML document, which, upon approval of the XML Work Group, will be submitted to the CIOC, and eventually to the Office of Management and Budget (OMB) for consideration as government-wide policy. Any practices or guidelines adopted across the federal government will have a large impact on XML developments elsewhere. Just the sheer scale and scope of federal information technology spending can cause ripples throughout the private sector economy..." See the local news item "U.S. Federal CIO Council XML Working Group Issues XML Developer's Guide."

  • [February 07, 2002] "Declaring Keys and Performing Lookups." By Bob DuCharme. From XML.com. February 06, 2002. ['DuCharme returns this week with his monthly XSLT column, "Transforming XML." This time Bob explains the use of XSLT's keys mechanism, which can used for fast lookup of information during stylesheet processing.'] "When you need to look up values based on some other value -- especially when your stylesheet needs to do it a lot -- XSLT's xsl:key instruction and key() function work together to make it easy. They can also make it fast. To really appreciate the use of keys in XSLT, however, let's first look at one way to solve this problem without them. Let's say we want to add information about the shirt elements in the following document to the result tree, with the color names instead of the color codes in the result..." For related resources, see "Extensible Stylesheet Language (XSL/XSLT)."

  • [February 07, 2002] "The Value of Names in Attributes. The Future of History." By Kendall Grant Clark. From XML.com. February 06, 2002. ['Kendall Clark reports on recent debate in the XML developer community over the use of qualified names in attribute values, as employed by XSLT and W3C XML Schema.'] "Namespaces in Attribute Values: The latest scuffle in the long namespace saga concerns qualified names in attribute values, the use of which has prompted some XML programmers, outside the confines of the XML-DEV mailing list, to question the value of attributes at all... That the developer community disagrees about the utility, the design, the impact, and the implementation of qualified names in attribute values is a very good indication that it is a widely, essentially contested issue. I suspect that it, like many such issues, will inevitably be presented to the W3C's Technical Architecture Group, the TAG, for adjudication; which, as we will discover, does not mean that the TAG will actually decide the issue. The TAG bottleneck..." See references in "Namespaces in XML."

  • [February 07, 2002] "Extensible Markup Language - SW (XML-SW)." Skunkworks 7-February-2002. Abstract: "The Extensible Markup Language (XML) provides a set of rules for defining markup languages intended for use in encoding data objects, and specifies behavior for certain software modules that access them." Overview: "This document specifies XML SW. The recipe for the construction of XML SW is as follows: XML 1.0 [XML 2e], minus DTDs (and therefore necessarily entities), plus XML Base, plus the XML Information Set, plus XML Namespaces. The intent is to avoid introducing any modification to the semantics of any of the ingredient specifications, thus all of the syntax and behavior described in this document should be equivalent to that specified in one W3C Recommendation or another." Status: "This document is a private skunkworks and has no official standing of any kind, not having been reviewed by any organization in any way. This draft was assembled by Tim Bray from text edited by himself, John Cowan, Dave Hollander, Andrew Layman, Eve Maler, Jonathan Marsh, Jean Paoli, C. Michael Sperberg-McQueen, and Richard Tobin, the editors of XML's first and second editions, Namespaces in XML, XML Infoset, and XML Base. There should be no suggestion that anybody other than Tim Bray approves of the content or even the existence of the present document. The copyright statement above applies to almost all the text assembled for this document, but should not be taken as an indication that the W3C approves of the contents or existence of this document."

  • [February 07, 2002] "Are Your Standards High?" By Stephenie Cooper. Referenced by Jon Bosak on ubl-comment@lists.oasis-open.org, January 31, 2002. "The great thing about standards is: The solution. We need a metastandard -- a standard to define how a standard is defined. General Principles for names: (1) Choose a name for the standard that can be reduced to a TLA or an FLA; (2) One of the letters in the TLA or FLA should be an 'X' or a 'W'; (3) Data field names should be mostly lower case, with 2-3 capital letters whose positions are randomly chosen. The reuse principle: (1) re-use, re-use, re-use; (2) The stretch goal du jour is re-engineering result-driven, on-demand, top-down (and bottom-up) global systems, growing the bandwidth, leveraging the knowledge base, and fast-tracking proactive, strategic, and backward-compatible multitasking -- all without reinventing the wheel or going down rat holes. Are we on the same page? General Principles: Always use new, generic, standards-neutral terms for everything, so that your standard does not appear to be biased in favor of another standard..." [Jon's comment: At the EIDX/CompTIA meeting in Palo Alto this week, Stephenie Cooper gave a presentation on standards for the creation of standards that will, I hope, provide no guidance whatsoever for the UBL effort. Knowing that this seminal piece is sure to set the standard for standards standardization, I have asked for and received Stephenie's permission to post it on the UBL web site..."

  • [February 07, 2002] "The Functional Programming Language XSLT. A Proof Through Examples." By Dimitre Novatchev. November, 2001. "Until now it was believed that although XSLT is based on functional programming ideas, it is not as yet a full functional programming language, as it lacks the ability to treat functions as a first-class data type. Based on numerous concrete XSLT implementations of some of the major functional programming design patterns, including some of the most generic list-processing and tree-processing functions, this article provides ample proof that XSLT is in fact a full-pledged functional programming language. The presented code forms the base of a first XSLT functional programming library. It is emphasized that a decision to include higher-order functions support in XPath 2.0 will make functional programming in XSLT even more straightforward, natural and convenient... ... The purpose of this article is not only to prove that XSLT can be considered a functional programming language has been fulfilled by providing XSLT implementation for the most major FP design patterns and examples from John Hughes article 'Why Functional Programming matters' (this article contains the code of 35 functions), but as a side effect we have now available what can be considered the first XSLT functional programming library. The full library code, together with test examples demonstrating the use of individual functions, is available at the downloads page of TopXML.COM as specified in the links at the start of this article... "On the other side, the XSLT code of those functions seems too-verbose compared to the corresponding Haskell code. The process of writing functional XSLT code can be made much more straightforward and easier by providing support for higher-order functions in XPath and XSLT, thus definitely improving even further the compactness, readability and reliability of XSLT functional code. It is the ideal time right now for the W3C XPath 2.0 working group to make the decision to provide the necessary support for higher-order functions as part of the standard XPath 2.0 specification. In case this golden opportunity is missed, then generic templates and libraries will be used in the years to come." Download the full PDF (.ZIP) version.

  • [February 07, 2002] "Tunnel Setup Protocol (TSP)." By Marc Blanchet, Regis Desmeules, and Andre Cormier (Viagenie inc.). IETF Network Working Group, Internet-Draft. Reference: draft-vg-ngtrans-tsp-00. June 2001. "This document proposes a control protocol to setup tunnels between a client and a tunnel server or broker. It provides a framework for the negociation of tunnel parameters between the two entities. It is a generic TCP protocol based on simple XML messaging. This framework protocol enables the negociation of any kind of tunnel, and is extensible to support new parameters or extensions. The first target application is to setup IPv6 over IPv4 tunnels which is one of the transition mechanism identified by the ngtrans and ipv6 working groups. This IPv6 over IPv4 tunnel setup application of the generic TSP protocol is defined by a profile of the TSP protocol, in a companion document." [cache]

  • [February 07, 2002] "Can We Really Harmonize Web Services?" By David Smith and Yefim Natis (Gartner Analysts). Gartner Viewpoint. February 7, 2002. "The Web Services Interoperability Organization, a consortium formed by several major information-technology vendors, seeks to address an essential issue surrounding Web services: the promotion of a set of standards enabling Web services to interoperate. However, the new consortium faces a number of significant hurdles, including dealing with various standards bodies as well as managing the threat of eventual vendor or user apathy. Without adequately addressing these and other political challenges, the group could fade into obscurity by the end of 2003. On the other hand, even with its limitations (a limited budget and as of yet no participation by Sun Microsystems), Gartner believes that the group can help boost Web services for the following reasons: (1) The group will bridge the gap between Microsoft and the 'Java community' through the participation of BEA Systems, Hewlett-Packard, IBM and Oracle. The organization will act as a 'standard integrator,' therefore bringing some coherence to the effort carried out concurrently by the W3C (World Wide Web Consortium), OASIS (Organization for the Advancement of Structured Information Standards), OAG (Open Applications Group) and other informal groups. (2) In addition to overcoming political differences, the promotion of uniform standards involving security and transactions will remain a challenge. (3) Powerful vendors such as BEA, IBM, Microsoft, Oracle and SAP will work to establish what Web services will and will not be. (4) The group will help educate the market in Web services and help implement best practices. (5) If the group succeeds and collects enough support and membership, it could emerge as the de facto standards body for Web services..." See "Web Services Interoperability Organization (WS-I)."

  • [February 06, 2002] "Freedom of Expression: Emerging Standards in Rights Management. [Feature Story.]" By Neil McAllister (Senior Technology Editor). In New Architect: Internet Strategies for Technology Leaders Volume 7, Issue 3 (March 2002), pages 36-39. ISSN: 1537-9000. [New Architect was formerly WebTechniques] ['XrML, ODRL, XMCL -- Rights expression languages promise standards-based DRM. But can we really all just get along? Demand for digital content has never been higher, but widespread copyright violations threaten the success of the online media marketplace. Enter DRM. Neil explores how this set of breakthrough technologies is securing intellectual property and making content-driven revenue a reality.'] "It's one thing to standardize a language, but quite another for any one language to become a standard. To help legitimize their efforts, the creators of XMCL, ODRL, and XrML have each announced their intent to submit their work to various prominent standards bodies for ongoing development. The extent to which they've followed through on those promises, however, has varied... In the past, RealNetworks has had a good track record of support for open standards. The company's efforts contributed significantly to the development of the Realtime Streaming Protocol (RTSP) and the Structured Multimedia Integration Language (SMIL). But while RealNetworks's Jeff Albertson announced in June 2001 that the company planned to submit XMCL to the W3C standards body 'within a month,' so far that promise has gone unfulfilled. Supporters of the ODRL Initiative have been somewhat more proactive. In November 2001, the ODRL 1.0 specification was submitted to the ISO/IEC MPEG standards body for consideration as the rights expression language component for the developing MPEG-21 media distribution standard. The submission was backed by companies as diverse as Adobe, IBM, IPR Systems, Nokia, and Panasonic. Surprisingly, RealNetworks also supported it, choosing to merge the XMCL specification into ODRL rather than submit its own language independently. But in the end, it was XrML, and not ODRL, that was chosen as the starting point for the eventual MPEG-21 rights expression language... One way or another, consolidation amongst the various rights expression languages seems inevitable. ContentGuard's track record of success, though limited, makes it the current front-runner. Yet the reputation of the MPEG group may draw more attention to its own eventual, XrML-derived language, which will be specified with the input of both RealNetworks and the ODRL Initiative members. Whatever the outcome, a standard rights expression language is but one brick in the foundation of a viable DRM platform, albeit an important one. True, gaining the support of standards bodies is an essential step. But the long-term goal -- gaining the support of consumers and business customers alike -- still lies ahead..." See: (1) "XML and Digital Rights Management (DRM)"; (2) "XrML Under Review for the MPEG-21 Rights Expression Language (REL)" and (3) "MPEG Rights Expression Language."

  • [February 06, 2002] "SVG - Open for Business." By Michael Classen. From WebReference.com. 2002-02-04. "Scalable Vector Graphics are the prime technology for presentations and business charts. Pie charts, bar graphs, animated or static, values manipulated by the user or static, SVG can do all. And what is missing, added CSS and JavaScript will do. In this introduction we'll generate a bar chart with text, shadows, and some simple animation. Integrating an SVG file into an HTML page is done with the embed tag available in all current browsers. This tag includes a reference to Adobe's plug-in download page, where an appropriate viewer is available for free. Additional arguments are the reference to the SVG file, and the pixel dimensions of the object in the browser page... [concludes with] a small animation of bars on the bar chart. The animation consists of two parts, changing both size and position of the objects. This is necessary because in order to grow the bars from bottom to top, they need to move up and expand at the same time so that the bottom line stays constant... Putting it all together, here is our animated 3D bar graph. Once all browsers support SVG we can forget about stretching one-pixel images into shape in HTML..." References: (1) W3C SVG Web site; (2) "W3C Scalable Vector Graphics (SVG)."

  • [February 06, 2002] "Iona Adds Enterprise Products To Web Services Lineup." By Tom Sullivan. In InfoWorld (February 06, 2002). "Boosting its web services lineup, Iona on Wednesday will ship the Collaborate and Partner editions of its Orbix E2A Web Services Integration Platform. These products round out Iona's Web services integration platform, said John Rymer, vice president of marketing at Waltham, Mass.-based Iona... The Orbix E2A Web Services Integration Platform Collaborate Edition includes QoS, security, and packaged application adapters for SAP, PeopleSoft and Baan. The Collaborate Edition also adds support for the ebXML and RosettaNet protocols. Iona's Partner Edition is designed as a business-to-business integration tool for connecting to trading partners. At the low-end, Iona offers the E2A XMLBus Edition, which is designed for development, deployment and management of Web services and integration... Iona's platform differentiates itself from the other application server vendors wares in that Iona based it on native Web services standards, rather than tacking support for such standards onto existing applications and integration platforms. Companies can build a Web service, or other applications for that matter, and have them call on information from the back-end systems via Iona technology, including an application server, EAI and CORBA tools. Microsoft, for its part, earlier this week released a new incarnation of its data integration platform, BizTalk Server 2002. The latest version of BizTalk includes tighter integration with the Visual Studio.Net toolkit that Microsoft will formally launch next Wednesday, thereby enabling developers to use the server and tools to take advantage of Web services protocols to integrate business processes..." See the announcement: "IONA Ships Orbix E2A Collaborate and Partner Editions - Industry's First Enterprise Web Services Integration Solutions With ebXML Support. Orbix E2A Eliminates Barriers to Integrating J2EE, .NET, CORBA, Mainframe and Message-Oriented Technologies for Business Efficiency and Return-on-Investment."

  • [February 06, 2002] "Using tDOM and tDOM XSLT. A high-performance Tcl-scripted XSLT engine." From IBM developerWorks, XML Zone. February 2002. By Cameron Laird (Vice president, Phaseit, Inc.). ['tDOM is a high-performance, C-coded, DOM-oriented XML processor. tDOM XSLT is an XSLT engine built with tDOM that has extremely good performance in simple tests. tDOM and tDOM XSLT are open-source projects already in mission-critical production for several organizations. This article explains what you need to know to enjoy their advantages.'] "Simple benchmarks show that tDOM is one of the best-performing XML processors currently available. Access through the Tcl 'scripting' language makes for a particularly potent development environment -- fast in both development and execution. A 'dual level' (or two-language) model of development combines the advantages of Tcl and XSLT for different aspects of XML manipulation... tDOM is an open-source, C-coded DOM binding to the Tcl language, created and maintained by Jochen Loewer. tDOM incorporates a version of James Clark's expat, currently based on SourceForge release 1.95.1. expat is renowned for its quality and performance. The most recent release supports DOM Level 2... Several benefits motivated Loewer in his implementation of tDOM. It has a flexible object-oriented syntax. It's very speedy, which is a particular advantage with the large XML documents involved in enterprise-scale cataloguing and procurement work. It is also thrifty on memory usage, and boasts a convenient and powerful XPath implementation. Quantitative measurements of these advantages appear below..." See the SourceForge development directory and the Yahoo discussion forum.

  • [February 06, 2002] "Web Services Initiative Gains Momentum." By Ed Scannell. In InfoWorld (February 06, 2002). "Industry momentum behind the newly created Web Services Interoperability Organization -- formed to promote the consistency in development of Web services -- grew rapidly on Wednesday with 20 new members joining today alone, swelling to 40 the number of companies backing the consortium. Besides the nine founding members, which include IBM, Microsoft, BEA Systems, Intel, Hewlett-Packard, and Accenture, a wide range of vendors and user companies have also joined, including Oracle, Iona, Compaq, Groove Networks, Reuters, Ford, and Daimler-Chrysler. The glaring exception to the list is Sun Microsystems, which figures to play a role in influencing the direction of Web services given its dominant position in the Web server and application development tools markets... In about three weeks the 40 companies that have joined as of today will meet to create some working groups among themselves in order to create three types of deliverables. One such deliverable will be centered around the notion of profiling. These profiles would be a collection of standards from a variety of different standards organizations, and the groups would determine the best ways of mixing and matching them for a particular project. Profiles will be driven mostly by a second deliverable called scenarios, or some of the more practical applications for which most people might use the proposed standards. A third deliverable would be to create benchmarks in order to test Web services ensuring they adhere to the core standards such as SOAP, WSDL, and UDDI..." See "Web Services Interoperability Organization (WS-I)."

  • [February 06, 2002] "IBM and Microsoft to Form Web Services Alliance." By Ilaina Jonas and Siobhan Kennedy (Reuters). From Total Telecom (February 06, 2002). "IBM, Microsoft Corporation, and other fierce technology sector competitors are expected on Thursday to announce an alliance to hammer out standards to make it easier and cheaper for companies to do business over the Web, sources familiar with the project said. The group, to be named the Web Services Interoperability Organization, will work on standards for Web services, the new market for software that makes it easier for different computer systems to share information. This will make it easier for companies to carry out purchasing, insurance checking and other activities online. This is not the first time IBM and Microsoft have joined forces in the name of Web services. They have worked together under the auspices of certain Internet standards groups to develop underlying technical standards for Web services... IBM, Microsoft and others have already joined to create standards including such things as the Web services directory, known as UDDI, and other low-level technical standards like SOAP, WSDL and XML, [John] DiFucci [CIBC World Markets] said..." Source also from NYTimes/Reuters. See "Web Services Interoperability Organization (WS-I)."

  • [February 06, 2002] "Sun Opts Out of IBM, Microsoft Web Services Alliance." By Siobhan Kennedy. Via Reuters. February 06, 2002. "Microsoft Corp., IBM and a host of rival technology competitors on Wednesday said they formed an organization to work on standards to make it easier for companies share information and do business over the Web. The anticipated news sees Microsoft and IBM coming together with a string of fierce rivals in the technology sector -- including Intel Corp., Oracle Corp., SAP AG, Hewlett-Packard Co. and Fujitsu Business Systems Ltd.. The group, called the Web Services Interoperability Organization (WS-I), aims to provide companies with a standard way of using Web services -- the hot new market for software that makes it easier for different computer systems to share information to carry out business tasks such as purchasing or inventory checking over the Web... The organization wants to ensure that companies use the low-level technical standards -- UDDI, WSDL, XML and SOAP -- that govern the development of Web services in the same way, Bob Sutor, IBM's Program Director for XML Technology said. He likened it to the use of the English language. There are lots of valid ways of putting the words together to make sentences, but eventually people develop common phrases that succinctly communicate what they want to say, Sutor said... Web services are designed to overcome these incompatibility problems by wrapping those software applications in such a way that they can be used on any system, be it Java, .Net or some other type of software system. Sun Microsystems, however, was noticeably absent from the list of companies supporting the alliance, although companies such as IBM, BEA and Oracle, which support Java, said they acknowledged that linking software applications to do business was a big issue for their customers... As well as working on existing standards, Sutor said the Web Services Interoperability Organization would also work with Internet standards bodies, like the World Wide Web Consortium, to ensure future Web services standards, governing such areas as security, work together..." See the news item: "IBM and Microsoft Announce Web Services Interoperability Organization (WS-I)."

  • [February 06, 2002] "Throngs Join New Web Services Effort." By Wylie Wong. In CNET News.com (February 06, 2002). "About 50 companies have joined a new nonprofit organization led by Microsoft and IBM to promote Web services. As reported earlier by CNET News.com, Microsoft and IBM on Wednesday launched a new industry consortium, called the Web Services Interoperability Organization, to educate businesses on how to build compatible Web services. Joining Microsoft and IBM as founding members are Accenture, BEA Systems, Hewlett-Packard, Intel, Oracle, SAP and Fujitsu. As founders, they will set the agenda for the organization... Other companies that have signed on to support the new consortium include Akamai Technologies, Compaq Computer, DaimlerChrysler, Ford Motor, Qwest Communications, United Airlines and VeriSign..." See the news item: "IBM and Microsoft Announce Web Services Interoperability Organization (WS-I)."

  • [February 06, 2002] "Leading Tech Competitors Form Web Services Group. WS-I Committed to Interoperability." By Elizabeth Montalbano. In CRN (February 06, 2002). "In a move aimed at helping to promote Web services adoption, a group of technology competitors, including IBM, Microsoft, BEA Systems, Hewlett-Packard, SAP, Intel and Accenture, have joined forces to form a Web services interoperability group. The Web Services Interoperability Organization (WS-I) will speed development and deployment of interoperable Web services across multiple platforms, applications and programming languages, the companies said in a statement. The mission of WS-I will be to guide the implementation of companies deploying Web services, to promote interoperability among Web services and to develop and define a common industry vision for Web services, the statement said. The group also will create a road map for solution providers developing services and customers deploying services. So far, companies that have signed on to support WS-I are AutoDesk, Cape Clear, Compaq Computer, J.D. Edwards, Epicor, Fujitsu, Groove Networks, IONA Technologies, Kana, Macromedia, Plumtree and Qwest Communications, among others. Sun Microsystems, a hardware giant and major proponent of Web services, was noticeably absent from the list... In the press statement, Rod Smith, vice president of emerging technology at IBM, said WS-I will help accelerate Web services deployment beyond those initial companies..."

  • [February 06, 2002] "IBM, Microsoft, BEA Systems to Partner on Web Services." By Ed Scannell. In InfoWorld (February 05, 2002). "In an attempt to ensure consistency in the development of Web services, IBM, Microsoft, and BEA Systems on Thursday will announce a software group the purpose of which will be to promote existing and future standards as defined by the World Wide Web Consortium (W3C) and the Organization for the Advancement Of Structured Information Standards (OASIS). According to those who are familiar with the charter of the new group, called the Web Service Interoperability Organization, it will campaign to better educate developers about how to build Web services as well as advocate the consistency of building block standards such as SOAP (Simple Object Access Protocol), UDDI (Universal Description, Discovery, and Integration), and the WSDL (Web Services Description Language). And, perhaps more importantly, the group will be actively encouraging the consistency of future Web services standards to come that address fundamental capabilities such as transactions management systems, security, identification, and authorization, sources said. 'This group will not be all about developing new standards, but making sure existing ones like SOAP and UDDI interoperate well. If there are differences in SOAP 1.2 and SOAP 1.3, then they might see a need for some testing benchmarks and other tools to make sure things are interoperating more deeply than just being able to talk to each other,' said one industry source familiar with the group's charter. 'They are not coming up with some silver bullet that makes Java and .Net interoperate better or bigger scale things like that,' he said..."

  • [February 06, 2002] "Giants Forging Web Services Consortium ." By Wylie Wong. In CNET News.com (February 05, 2002). "Microsoft, IBM, BEA Systems and Intel on Wednesday are expected to launch a new industry consortium aimed at promoting Web services. The new group--called the Web Services Interoperability Organization--plans to educate businesses on how to build Web services and how to ensure they do it in a compatible way, according to sources familiar with the announcement. The consortium will promote existing and future standards defined by the World Wide Web Consortium and the Organization for the Advancement of Structured Information Standards (OASIS), for example. Other technology companies are expected to join the organization, the sources said..."

  • [February 05, 2002] "MPEG Decides To Adopt ContentGuard's XrML as the Basis for the Rights Expression Language (REL) Component of MPEG-21." From "DRM Watch" [GiantSteps/Media Technology Strategies]. February 1, 2002. Edited by Bill Rosenblatt. "XrML originally had two competitors for adoption in MPEG-21: Open Digital Rights Language (ORDL), from the consultant Renato Iannella in Australia, and RealNetworks' eXtensible Media Commerce Language (XMCL). Shortly before the MPEG-21 meeting in Thailand, RealNetworks dropped XMCL -- which had never even gotten as far as a version 1.0 spec -- and decided to embrace ODRL. MPEG-21 is now working with ContentGuard to address issues it would like to see resolved. Among these issues is the overly large size of XrML expressions that would be downloaded to the small portable devices to which MPEG-21 applies, which is a byproduct of XrML's high complexity. XrML's lack of simplicity will continue to be an issue for ContentGuard to address as it positions the language for further success. XrML's adoption as part of MPEG-21 is good news for ContentGuard and for the cause of badly-needed DRM standardization, but the news should be taken in context. Although the MPEG-21 blessing is important, it is not guaranteed that MPEG-21 -- which is mightily complex itself and is not scheduled to receive ISO standard status until Summer 2003 -- will have much of an impact on the market. Furthermore, perhaps the more important outcome is the demise of XMCL, which was RealNetworks' attempt at mounting an anti-Microsoft initiative by aggregating support from many other firms who compete with Microsoft..." Note: Rosenblatt's 'DRM Watch' summaries of events related to content rights management and their significance in the market. See the news item of 2002-02-05: "XrML Under Review for the MPEG-21 Rights Expression Language (REL)."

  • [February 05, 2002] "MPEG-21 Adopts XrML as Rights Language." By [Seybold Staff]. In The Bulletin: Seybold News and Views On Electronic Publishing Volume 7, Number 17 (February 6, 2002). "The Motion Picture Experts Group (MPEG), the standards family for the encoding of digital audio and video content in a compressed format, has selected ContentGuard's XrML as the basis for the Rights Expression Language (REL) component of its MPEG-21 standard. The purpose of the MPEG-21 standard is to 'define a multimedia framework to enable transparent and augmented use of multimedia resources across a wide range of networks and devices used by different communities.' Rights management and secure delivery are a component of that infrastructure, and XrML will now be the recommended language for expressing rights within MPEG digital content... [additional editorial comment from Bill Rosenblatt; see previous reference] See the news item of 2002-02-05: "XrML Under Review for the MPEG-21 Rights Expression Language (REL)."

  • [February 05, 2002] "WSDL Gets Close Look." By Darryl K. Taft. In eWEEK (February 04, 2002). "Major software vendors have touted WSDL as a key specification for driving the interoperable world of Web services. But some developers say its patrons are using Web Services Description Language and other standards to force enterprises into choosing one vendor's Web services initiative over another's, effectively removing the openness and interoperability that are cornerstones of the technology. WSDL, largely written by IBM and Microsoft Corp., details a machine-readable way to describe Web services. This description includes everything needed to make a Web service call. But where some developers say WSDL, which became a de facto standard after IBM and Microsoft issued the code, is unnecessary and counterproductive, others say the specification needs to be fixed. Still others say it's crucial to Web services. To clarify the matter, the World Wide Web Consortium last month created a working group to look at Web services standards, including WSDL. The group, chaired by Microsoft, has its first conference call this week. Dave Winer, a developer and CEO of UserLand Software Inc., in Millbrae, Calif., said WSDL is an unnecessary piece of standards infrastructure for Web services and exists largely as a way for big companies such as Microsoft, IBM and Sun Microsystems Inc. to set up entry barriers to smaller competitors. Winer said there is no need for WSDL "unless you are trying to lock developers into your development environment, as Microsoft and Sun and IBM surely are trying to do." Instead, developers and vendors should use normal programming techniques or Simple Object Access Protocol schemas, which he and others said are more open and less complex than WSDL. Barriers to competition occur when smaller enterprises must decide whether to bet their Web services strategy on a company such as Microsoft, IBM, Oracle Corp. or Sun, all of which offer 'some level of proprietary code that locks you into their solution,' said Britt Johnston, chief technology officer at NuSphere Corp., a Bedford, Mass., open-source technology vendor... Rich Salz, CTO of Zolera Systems Inc., in Waltham, Mass., and a member of the WSDL working group, said the specification is necessary but flawed: 'The people who say it's not necessary are those who are focusing on simple things'..." See: "Web Services Description Language (WSDL)."

  • [February 05, 2002] "Ents - Character Entity/Reference Replacement." By Simon St.Laurent. 2002-02-05 or later. From the online description: "Ents is a small Java package designed to simplify the process of converting XML entity references to character references and vice-versa. While XML 1.0 entities are useful, the move away from DTDs (as well as difficulties in processing them with non-validating parsers has made them less reliable.) By running Ents over a file, all the character entities may be converted to character references or vice-versa. Ents only processes entities it has rules for, leaving the rest untouched. Ents uses an XML format to specify lists of equivalent entity names and character references. Ents may only be used for single character entities. A file supporting XHTML 1.1 (and HTML) character entities is included, as is support for compiling that information into a class file. The name comes from an abbreviation of entities, as Ents provides reduced functionality from XML 1.0's full set of entity capabilities. The Tolkien connection is pretty intriguing too, however Ents is currently [2002-02-05] in alpha. The core functionality seems complete, but there's still potential for improvement, expansion, and as always, better documentation. (Including RDDL documents for the rules file format.) Ents is distributed under the Mozilla Public License 1.1. For more information, see the javadoc." From the XML-DEV posting: "Ents, like most of my filters, takes a list of rules in an XML format. In this case, the rules define equivalences between character references and entity names. The processor then uses a Java FilterReader to process a document, either replacing entity with references (to send to computers who don't care about entities) or references with names (more typically for humans). The processing is reversible. Ents comes with a rules file built from the entity declarations in Modularization of XHTML. I left in the descriptions of the characters, but they do nothing in processing. Ents rules can easily be embedded in other documents (schemas or whatever), and the rules loader will ignore all content but its own rules. These rules may also be compiled into Java, using a code-generating class that is part of the package. I may be adding support for one-way transformation of general entities into text, as well as a SAXFilter which processes the skippedEntity event using these rules, but neither of those options is presently available..."

  • [February 05, 2002] "XML Character Entities Version 0.2." Edited by Norman Walsh for the OASIS DocBook Technical Committee. Working Draft 04 -February-2002. This Standard defines XML encodings of the 19 standard character entity sets defined in Non-normative Annex D of ISO 8879:1986 (ISO 8879:1986 Information processing -- Text and office systems -- Standard Generalized Markup Language (SGML). 1986. [Caveats: a working draft constructed by the editor; not yet an official committee work product, and may not reflect the consensus opinion of the committee; a few acknowledged discrepancies WRT the character mappings in this draft.] "This Standard defines XML encodings of the standard SGML character entity sets. Non-normative Annex D of [ISO 8879:1986] defines 19 standard SGML character entity sets: Added Latin 1, Added Latin 2, Greek Letters, Monotoniko Greek, Russian Cyrillic, Non-Russian Cyrillic, Numeric and Special Graphic, Diacritical Marks, Publishing, Box and Line Drawing, General Technical, Greek Symbols, Alternative Greek Symbols, Added Math Symbols: Ordinary, Added Math Symbols: Binary Operators, Added Math Symbols: Relations, Added Math Symbols: Negated Relations, Added Math Symbols: Arrow Relations, Added Math Symbols: Delimiters. The SGML declarations for these entities use the specific character data (SDATA) entity type that is not supported in XML, so alternative XML declarations are necessary. In XML, the specific character data of most entities can be expressed as a Unicode character." In addition to the character entity sets, the document defines 'XML Character Elements'. Design rationale: "Named XML entities (except for the five predefined entities) cannot be used if they are not declared. Entity declaration requires either an external or an internal subset. Some classes of applications forbid the occurrence of markup declarations in documents. For these documents, named character entities are inaccessible...we [therefore] introduce an XML vocabulary with the semantics of character entity reference. This Standard defines the semantics of elements and attributes declared in the http://www.oasis-open.org/docbook/xmlcharent/names namespace. This namespace contains exactly one element, char. The char element has two attributes, entity and name. They are mutually exclusive. The entity attribute identifies characters by their character entity names. (The set of valid names is the closed set of names associated with character entity sets defined by this Standard.) Case is significant in entity names. The name attribute identifies characters by their Unicode character names..." Related references: SGML/XML Entity Sets and Entity Management."

  • [February 05, 2002] "Draft schema for character entity definition document type." Posted to XML-DEV by Henry S. Thompson with Subject: "A heavier-weight proposal for character entity definition."

  • [February 04, 2002] "WSFL in Action, Part 1. When Web Services Cooperate." By Ajamu Wesley (Senior software architect, IBM Software Group). From IBM developerWorks, Web services Zone. January 2002. ['This article is based upon a chapter from Programming Web Services with Java (Manning, 2002) which explains how you can use the flexible Web Services Flow Language (WSFL) to create new Web services by integrating other Web services. This is the first of a two part series with a detailed example of how several Web services from different service providers can work in a flow model. The article is a programming example that illustrates the features of the Web Services Flow Language. It assumes that you already know the basic concepts behind workflow and WSFL.] "Let's imagine that Strong.com acts as a brokerage and hosts a Web service which manages client financial accounts. Additionally consider that the New York Stock Exchange (NYSE) hosts a Web service that allows traders to buy and sell stocks just as if they were on the trading floor. Strong.com realizes that they can provide a compelling service to their end users if they integrate these Web services together... When clients request stock purchases from the brokerage service, it kicks off a business process that first checks the real-time stock quote service from our fictional service provider StockQuote.com, to determine the stock price. After determining the stock price, the total price of the transaction may be computed based on the number of shares requested. Strong.com's account manager Web service will determine if the client has enough funds to cover the transaction. After successfully verifying the client's ability to purchase the stock, the trade is executed by invoking the NYSE's virtual trading floor Web service. Thus, Strong.com will define a new stock-trading Web service by integrating its account manager service with StockQuote.com's stock quote service and the NYSE's virtual trading floor Web service. This programmatic integration of remote Web services into a single Web service is accomplished by defining the sequencing and integration of Web service operations within a WSFL flow... In the next part we will look into how to link together these activities and complete the flow..." See also from July 2001: "WSFL and Recursive Composition." Article also in PDF format. See: "Web Services Flow Language (WSFL)."

  • [February 04, 2002] "The 'xmlns:' URI Scheme for XML Namespace Reification and Namespace Prefix Declaration." By Patrick Stickler (Nokia Research Center, Tampere, FI). IETF Network Working Group, Internet-Draft. January 16, 2002. Reference: 'draft-pstickler-xmlns-00'. "This document describes the 'xmlns:' Uniform Resource Identifier (URI) scheme for the reification of XML namespaces and namespace prefix declarations... The 'xmlns:' URI scheme is intended to provide a simple but consistent means by which XML Namespaces can be reified, and which also provides for the association of a namespace prefix with a given namespace. The 'xmlns:' URI scheme belongs to the class of URIs known as Uniform Resource Values (URV) which are themselves a subclass of Uniform Resource Primitives (URP), a class of URI which constitutes a 'WYSIWYG' URI, one which is not dereferencible to and does not denote another web resource, but constitutes a self-contained resource where the full realization of that resource is expressed in the URI itself. ... Because a URP is not dereferencible, and hence does not permit the suffixation of a fragment identifier (there is no such thing as a URP Reference), it is not necessary to escape any hash marks '#' occurring in the namespace URI Reference part of a 'xmlns:' URI..." See also (1) "An Extended Class Taxonomy of Uniform Resource Identifier Schemes" [January 2002]; (2) "The 'qname:' URI Scheme for XML Namespace Qualified Names," IETF 'draft-pstickler-qname-00', which "describes the 'qname:' Uniform Resource Identifier (URI) scheme for the representation of XML Namespace qualified names." See references in "Namespaces in XML."

  • [February 04, 2002] "Parsing Filters." By Ravi Akireddy. In XML Journal Volume 3, Issue 2 (February 2002). "This article presents a simple and open framework that eases custom parsing and checking of any type of document - XML, HTML, text, and so on - through the concept of filters and XML-based configurations. The idea here is to define filters, tie them up in a piped fashion, and stream the input documents through these chained filters - and in the process of streaming, make each filter apply its specific logic on the same streamed source at the same time. A filter is meant for a specific function, and each one serves its own purpose of massaging the provided input stream. A filter could be an implementation of your own specific logic or a wrapper around an existing tool. If you're familiar with the Unix operating system, the concept appears similar to its piping feature. The difference could be that in this case the parameters are supplied through an XML-based configuration file. Filters are made Runnable instances and run as threads. This helps in the simultaneous execution of all the filters in parallel in the input stream. Filters are aggregated in a chain and are applied simultaneously, similar to the concept of pipes in Unix... The filters to be applied are identified through a configurable XML document and are connected by piping the input and output of each filter. Once the pipe is established, input documents are streamed through these piped filters to the desired location. As the documents are being streamed, each filter applies its own filtering logic on it in parallel. The filtering framework provides plug-and-filter functionality by simply manipulating the configuration file. You can apply some filters for one class of documents and other filters for other classes of documents by modifying the configurations.. I've tried to simplify the logic of parsing any kind of document in a custom-specific fashion. This way you can have a mix and match of filtering logic for a variety of techniques. You can apply the logic for making well-formed documents or regular expression filtering or any other kind of massaging to documents. The guts of each one lie in the specific filter and each works independently. To apply a new logic, write a filter and update the filter's configuration XML document; the rest is transparent..."

  • [February 04, 2002] "Analysis of the MARC 21 Bibliographic and Holdings Formats." By Tom Delsey. Prepared for the Network Development and MARC Standards Office Library of Congress. January 4, 2002. [Of probable interest to metadata experts.] "This study was commissioned by the Network Development and MARC standards Office in order to link MARC 21 format data with models identified in major studies that have recently been developed in the area of bibliographic control. Applying the new models from the Functional Requirements for Bibliographic Records (FRBR) and the related The Logical Structure of the Anglo-American Cataloguing Rules to the data elements accommodated in MARC 21 records was a logical step to assist bibliographic data research. The expert largely responsible for both of the above studies carried out the analysis on contract to the Office, Tom Delsey, of Thomas J. Delsey consultancy. By sponsoring the analysis and making it available, the MARC office and others can use the information when analyzing or making decisions on MARC 21 data related to format maintenance, system implementation, and data sharing. It will be an important tool for continuing development of MARC 21... Appendix E contains a detailed mapping of data elements specified in the MARC bibliographic and holdings formats to a defined set of user tasks. For the purposes of this analysis user tasks are divided into three broad categories: (1) tasks pertaining to resource discovery, (2) tasks pertaining to resource use, and (3) tasks pertaining to data management..." See "MARC (MAchine Readable Cataloging) and SGML/XML."

  • [February 04, 2002] "XSlip XSliding Away." By Kurt Cagle. In XML Magazine Volume 3, Number 1 (February 2002), pages 60-64. ['A visual presentation's ancillary documentation is often as important as its content. In this first of a two-part series, you'll learn how to use XML, SVG, and XSLT to organize presentations, separate logical organization from presentation and implementation layers, and create a robust slide show application that you can use with a variety of media.'] "I like flipping XML content back and forth anyway, so the idea of creating an XML slide show is appealing. Surprisingly enough, once you put the basics in place, the same XML is remarkably well suited for creating both the content and the structure of not only a slide show, but also the ancillary support materials. By using XML both to organize the presentation and to separate the logical organization from the presentation and implementation layers, you can create a robust application to use with multiple types of media. I'll outline the basics of how you can create an online slide system, including outputting it into a format suitable for both Web and presentation, and converting it into a PDF document for support materials. I'll also talk a bit about the real strength of XML: separating content (model) from the implementation (control), and separating control from presentation (view)... When you start Microsoft PowerPoint (or other applications such as Corel Harvard Graphics or Sun StarOffice 6), you are immediately presented with a place to add a sequence of slides. The slides are of course linear. After all, a presentation takes place in time, and time is linear -- realistically, you can view only one slide at a time. Consequently, the tool for entering content into slides is also linear. It's also one of the worst approaches to organizing your content... The Document Object Model (DOM) applies at least as much to effective presentation design as it does to programming. Break up your documents logically to get the maximum benefit down the road. Most presentations have definite superstructures, starting with an introductory section in which you establish your credentials and lay out what you'll talk about. You don't rattle off the names of each slide while showing the introduction; instead, you talk about big-picture arcs... You can create flexible, robust, and portable applications by breaking your XML processes into the manipulation of models by a controller mechanism to create a specific view -- in essence, avoiding coupling and entanglements. The principles I've presented here, such as the power of abstracting out the presentation interfaces in favor of the logical ones, illustrate a paradigm that, while hardly new, seems to reach a certain elegance with XML. The design pattern is called the model/view/controller (MVC) pattern..."

  • [February 04, 2002] "Hewlett-Packard's HP SA8250 XML Director." [Review] by Logan G. Harbaugh. In XML Magazine Volume 3, Number 1 (February 2002), pages 19-20. ['HP's SA8250 XML Director gives system architects the ability to improve server-to-server communications using XML.'] "Given the function of XML, a hardware review in this publication may seem strange. However, the HP SA8250 XML Director is in fact a piece of hardware designed to facilitate and improve server-to-server communications using XML. It provides SSL encryption and decryption to allow offloading this server-intensive function from Web servers, as well as XML inspection, rules-based routing of XML traffic, and prioritization. While these features may be of limited interest to software developers, system architects should make themselves familiar with the technology, which can allow architects to build quality of service into their offerings as well as additional enterprise-class features... For instance, the XML Director also recognizes and works with MIME-encoded e-mail traffic, allowing XML e-mail to be separated from other e-mail before processing by an e-mail server. It intercepts HTTP-type 400, 500, and 600 error messages, allowing for session recovery so that clients can be redirected to a working Web server instead of receiving an error message. In addition, the XML Director performs IP load balancing, which allows administrators to build Web farms for redundancy and load scaling by using multiple Web servers that provide the same Web site. Configuring the device is simple. Connect a serial terminal and specify the network-side and server-side network addresses, subnet mask, and router information, as well as the device's IP address and name, and then run the rest of the configuration from any browser... The XML routing rules allow for complex configurations, with AND, OR, NOT, <, >, =, and other operators operating on XML elements, attributes, or function calls. These operators allow you to create a rule that routes traffic to a server or virtual cluster of servers if a particular element exists, or if the element contains a certain value, has a value greater or less than a given value, and so on..."

  • [February 04, 2002] "Architectural Forms: A New Generation." By John Cowan. Draft 2.1. 2002-02-04 or later. "This is a preliminary description of a work in progress called 'Architectural Forms: A New Generation', or AF:NG for short. This document is highly subject to change without notice... AF:NG provides the facilities, but does not employ the syntax, of SGML Architectural Forms. AF:NG is intended to be used in conjunction with the schema language RELAX NG, but is not dependent on it in any way. The purpose of AF:NG is to provide for tightly specified transformations of XML documents, consisting of renaming or omitting elements, attributes, and character data. AF:NG is not intended as a general-purpose transformation language like XSLT or Omnimark. Using AF:NG, a recipient may, instead of specifying a schema to which documents must conform exactly, specify a schema to be applied to the output of an AF:NG transformation. In that way, the actual element and attribute names, and to some degree the document structure, may vary from the schema without rendering the document unacceptable. In particular, it is easy to use AF:NG to reduce a complex document to a much simpler one, when only a subset of the document is of interest to the recipient. The information provided to AF:NG consists of a short XML document called an architectural map, or archmap, plus the appearance of a special attribute called the form attribute within the source document. The name of the form attribute is given in the archmap, and it is the only required portion of the archmap. This draft of AF:NG does not have the ability to map a source attribute into architectural character data..." References: see "Architectural Forms and SGML/XML Architectures."

  • [February 04, 2002] "Zope Page Templates. Defining Dynamic Content Using Attributes." By Amos Latteier. In Dr. Dobb's Journal #333 (February 2002), pages 67-75. [Internet Programming] "Zope Page Templates let you define dynamic content using attributes on existing HTML/XML tags." ZPTs use attributes rather than elements; this allows the use of common HTML/XML editing tools, because the tools are typically more tolerant of unrecognized attributes. ['With some other template systems, content, presentation and logic are all blended into one place and each respective party is expected to keep their hands off of the other's domain. Zope's ZPT solution seeks to adopt a consistent, XML compliant, markup language as a target, embed all logic in namespaced attributes (where they are ignored by the web page tools) and provide a means to supply prototypical content to give the presentation designer a sense of context.'] See the announcement "Zope Corporation Releases Zope 2.5 and Python 2.2": "Zope 2.5 includes Zope Page Templates (ZPT), the new model for dynamically generating pages. ZPT embraces W3C standards by leveraging namespace attributes to insert page directives. This approach allows site designers and developers to work side-by-side, since the interim and final product remains valid HTML. The 2.5 release also offers built-in session tracking, encrypted password support and significant performance improvements. Zope Corporation has just released version 1.2 of its Content Management Framework (CMF)..." The online DDJ article includes the source code.

  • [February 04, 2002] "Relaxing into 2002." By Sean McGrath. In XML In Practice (January 10, 2002). ['RELAX NG is a blend of simplicity and pragmatism aimed at providing a validation system for XML documents. By separating the validation from other XML processing features, RELAX NG keeps DTDs from doing more than they are intended to do.'] "From time-to-time, a markup technology comes along without much fanfare that really changes the way we think about and build XML systems. The SAX API, developed under the tutelage of David Megginson on the XML-DEV mailing list, springs to mind. Without a big brouhaha or marketing budget, without the imprimatur of any august institution or consortium, SAX has quietly become an indispensable part of the XML application development landscape. I strongly suspect that RELAX NG is following in the footsteps of SAX. SAX was a humble blend of simplicity and pragmatism aimed at providing an event-oriented API for XML processing. Similarly, RELAX NG is a humble blend of simplicity and pragmatism aimed at providing a validation system for XML documents. RELAX NG schemas are themselves XML documents. Now those of you who work with XSLT know that can be a mixed blessing. On one hand, you can throw all your XML processing tools at them; on the other hand, they can be somewhat legibility challenged. RELAX NG is a pure delight in this regard, being truly readable after an hour or two of practice. Furthermore, you can intermingle your own markup at will into a RELAX NG schema, which makes adding your own annotations for documentation and module maintenance very simple. Anyone familiar with DTDs that wish to play around with Relax should get DTDinst, a Java based tool that converts DTDs into RELAX NG notation..." See: "RELAX NG."

  • [February 04, 2002] "BizTalk Follows Web Protocol. Business Process Integration Set to Become More Accessible." By Michael Vizard and Heather Harreld . In InfoWorld (February 04, 2002). "Microsoft hopes to make the arcane art of business process integration more accessible to the average developer next week when it ships BizTalk Server 2002. The new release of the company's business process integration server will feature tight integration with Visual Studio .Net, allowing developers to use Web services protocols as a conduit for integrating business processes using a graphical environment. The need to work with cumbersome protocols such as EDI (electronic data interchange) has been a big inhibitor to the adoption of business process integration tools. But a new class of business process integration tools that supports SOAP (Simple Object Access Protocol), XML, and eventually UDDI (Universal Description, Discovery, and Integration) will make this class of enterprise software a more mainstream toolset. For companies actively interested in Web services, this marks a 'very nice next-generation step' for Microsoft, said IDC analyst Rikki Kirzner, in Framingham, Mass. However, Microsoft's success will depend on how well it sells the concept of Web services and how well BizTalk integrates with Unix, she said. In addition to adding support for Visual Studio .Net, Microsoft has added a new seed feature that makes it easier to distribute configuration changes and updates across a supply chain made up of BizTalk servers. Microsoft has also better integrated BizTalk with its Microsoft system administration tools. Those tools can now track events that take place in BizTalk servers and initiate actions based on rules defined by the developer... The business process integration category is widely recognized as having been dominated by companies that created asynchronous messaging tools such as IBM's MQ Series to solve this problem. But those tools are fairly complex and rivals such as Microsoft and BEA are working to create simpler graphical tools that a broader swath of developers could master... Long term, Microsoft hopes to integrate its business process modeling tools with Microsoft Office and offer them to business systems analysts. In addition, it hopes to work with IBM and industry standards bodies such as the World Wide Web Consortium to create industry standards for business workflow..." See the announcement: "Microsoft Announces General Availability of BizTalk Server 2002. Integration With Visual Studio .NET Provides Additional Support For Orchestrating XML Web Services. Tyco International Deploys BizTalk Server 2002 Solution in Less Than Four Months."

  • [February 02, 2002] "Speech Vendors Shout for Standards." By Ephraim Schwartz. In InfoWorld (February 01, 2002). "The battle for speech technology standards is set to escalate next week when a collection of industry leaders submits to the World Wide Web Consortium (W3C) a proposed framework for delivering combined graphics and speech on handheld devices. The VoiceXML Forum, headed by IBM, Nuance, Oracle, and Lucent will announce a proposal for a multimodal technology standard at the Telephony Voice User Interface Conference, in Scottsdale, Arizona. Meanwhile, Microsoft will counter with its own news, using the same conference to announce the addition of another major speech vendor to its SALT (Speech Application Language Tags) Forum. The as yet unnamed vendor intends to rewrite its components to work with Microsoft's speech platform. The announcement will follow the addition of 18 new members to the SALT Forum, a proposed alternative to VXML's multimodal solution. New members of the SALT Forum include Compaq and Siemens Enterprise Networks. Founding members include Cisco, Comverse, Intel, Microsoft, Philips, and SpeechWorks... Most mainstream speech developers are currently creating Voice XML speech applications built on Java and the J2EE (Java 2 Enterprise Edition) environment, and running on BEA, IBM, Oracle, and Sun application servers. This week General Magic and InterVoice-Brite announced a partnership to develop Interactive Voice Recognition (IVR) enterprise solutions for 'J2EE environments,' using General Magic's VXML technology. Until recently Microsoft offered only a simple set of SAPI (speech APIs). Now through acquisition and internal development it has its own powerful speech engine which it is giving away to developers royalty free, said Peter Mcgregor, an independent software vendor creating speech products. Microsoft redeveloped SAPI in Version 5.1 to run on its new speech engine, while simultaneously proposing SALT as an alternative to VXML. Wrapping it all up in a marketing context, Microsoft's Mastan called the company's collection of speech technologies a 'platform,' a term previously not used... The issue over which specification of SALT, not due to be released until sometime later this year, or VXML, whose Version 2 is now out for review, is better is an argument that can only be determined by developers. Each side claims the other's specifications are deficient... IBM's William S. 'Ozzie' Osborne, general manager of IBM Voice Systems in Somers, N.Y.: 'I hope that we get to one standard. Multiple standards fragment the market place and create a diversion. I would like to see us get to a standard that is industry wide and not proprietary. What we are proposing to the W3C, using VXML for speech and x-HTML for graphics in a single program, is cheaper and easier than SALT without having to have the industry redo everything they have done'... Note the 2002-01-31 announcement: "The SALT Forum Welcomes Additional Technology Leaders as Contributors. New Members Add Extensive Expertise in All Aspects of Multimodal and Telephony Application Development and Deployment." See: (1) "VoiceXML Forum"; (2) "Speech Application Language Tags (SALT)."

  • [February 02, 2002] "Building a Knowledge Base of Morphosyntactic Terminology." By William Lewis, (Department of Linguistics, University of Arizona), Scott Farrar, and D. Terence Langendoen. Pages 150-156 in Proceedings of the IRCS Workshop on Linguistic Databases (11-13 December 2001, University of Pennsylvania, Philadelphia, USA; Organized by Steven Bird, Peter Buneman and Mark Liberman; Funded by the National Science Foundation). "This paper describes the beginning of an effort within the Linguist List's Electronic Metastructure for Endangered Languages Data (E-MELD) project to develop markup recommendations for representing the morphosyntactic structures of the world's endangered languages. Rather than proposing specific markup recommendations as in the Text Encoding Initiative (TEI), we propose to construct an environment for comparing data sets using possibly different markup schemes. The central feature of our proposed environment is an ontology of morphosyntactic terms with multiple inheritance and a variety of relations holding among the terms. We are developing our ontology using the Protégé editor, and are extending an existing upper-level ontology known as SUMO... The paper describes the first stage in reaching the second of these goals. Our decision to begin work on the analysis of morphosyntactic terms was based on the recommendations of a markup work group that the Linguist List organized at the Language Digitization Workshop in Santa Barbara, June 21-24, 2001. That group divided the task of developing markup recommendations into several problem areas, and identified morphosyntactic markup as the first problem to be tackled... The architecture for the envisioned system is given [in Figure 4]. The three major components of the E-MELD system are (1) the graphical user interface (GUI), (2) the knowledge base (containing the ontology and query engine), and (3) the database of endangered languages marked up in XML format. The end user will be able to access the E-MELD system via the World Wide Web as the knowledge base and language data will reside together at a remote site. The user may pose queries to the knowledge base in standard search engine format (similar to that of Yahoo or Google). For example, the query 'ergative P2' will return a list of languages and/or actual language data from P2 languages containing ergative constructions. The only requirement that is required is that the documents containing the individual language data be in XML format. The query engine will have access to XML metadata and all language data in each file. Once the envisioned system is implemented only minimal maintenance will be required to add additional language data. Adding new data sets merely requires the ontology manager to interpret the researcher's tagset and to incorporate it into the existing ontology..." See also: (1) "E-MELD: Electronic Metastructure for Endangered Languages Data"; (2) "E-MELD Language Codes Workgroup." See "Electronic Metadata for Endangered Languages Data (EMELD)." [cache]

  • [February 02, 2002] "Morpho-Syntax Ontology. Concept Hierarchy for E-MELD Ontology Project." [Reported] By Scott Farrar (University of Arizona, Tucson, AZ, USA). "The ontology for morpho-syntax terms is now ready for a first review. Keep in mind this is a first attempt and that we're constantly revising and open to critique by the group... The following represents an effort by E-MELD to create an ontology of linguistic concepts. Thus far, only concepts specific to the domain of morpho-syntax has been addressed. The morpho-syntax concepts fit into a larger ontology of 'all things' which is in the following form [entity [physical | abstract]]... The two sub-hierarchies of relevance here are: ContentBearingObject and GrammaticalAttribute. All associated morpho-syntax concepts fall under these two super-concepts. Please click on the following concepts to see documentation. We are also working on a third area which will be under Process. Look for this soon..." See previous bibliographic entry and "Electronic Metadata for Endangered Languages Data (EMELD)."

  • [February 02, 2002] "Introduction to DAML: Part I." By Roxane Ouellet and Uche Ogbuji. From XML.com. January 30, 2002. ['Learn about the DAML language for modeling data on the Semantic Web.'] "RDF was developed by the W3C at about the same time as XML, and it turns out to be an excellent complement to XML, providing a language for modeling semi-structured metadata and enabling knowledge-management applications. The RDF core model is successful because of its simplicity. The W3C also developed a purposefully lightweight schema language, RDF Schema (RDFS), to provide basic structures such as classes and properties. As the ambitions of RDF and XML have expanded to include things like the Semantic Web, the limitations of this lightweight schema language have become evident. Accordingly, a group set out to develop a more expressive schema language, DARPA Agent Markup Language (DAML). Although DAML is not a W3C initiative, several familiar faces from the W3C, including Tim Berners-Lee, participated in its development. This article series introduces DAML, including practical examples and basic design principles. This first article presents basic DAML concepts and constructs, explaining the most useful modeling tools DAML puts into the designer's hands. In the next article we shall take a more in-depth look, introducing more advanced features and outlining a few useful rules of thumb for designers. Keeping the concepts straight between RDF, RDFS and DAML+OIL can be difficult, so the third article will serve as a reference of constructs, describing each, and pointing to the relevant spec where each is defined..." Local references: "DARPA Agent Mark Up Language (DAML)."

  • [February 02, 2002] "Web Services Interoperability." By James Snell. From XML.com. January 30, 2002. ['From Hello World application to SOAP-based Web service.'] "Web services, at their core, are technologies designed to improve the interoperability between the many diverse application development platforms that exist today. And while the reality of Web services interoperability is still less than flattering to the cause, great strides have been made over the last year to move things in the right direction. The key point to remember is that over time, interoperability will improve and the current wrinkles that exist in the process of creating, deploying, and consuming Web services eventually will be ironed out. In this article, I give a brief picture that highlights exactly what interoperability in Web services means. To do so, I am going to pull an example out of O'Reilly's recently published Programming Web Services with SOAP, which I coauthored. The example is the ubiquitous 'Hello World' application evolved into a SOAP-based Web service... In the book, we implement Hello World in the Java, Perl, and .NET environments, and we demonstrate how to invoke each of them from the other two environments..."

  • [February 02, 2002] "Hidden Whitespace, Hidden Meaning." By John Simpson. From XML.com. January 30, 2002. ['Simpson hunts down mysterious newlines and explains the semantics of bare XML.'] "... the extra newlines in your output are there because your XSLT style sheet says to include them. Remember what XSLT does: it transforms a well-formed source tree into a result tree. The source tree is the 'input document'... , the one referred to by xsl:template and xsl:value-of elements in a style sheet; in your case, the source tree [if the style sheet says] 'instantiate in the result tree everything between this xsl:template element's start and end tags', [...] the operative word there is 'everything'."

  • [February 02, 2002] "Document Associations." By Leigh Dodds. From XML.com. January 30, 2002. "This week the XML-Deviant attempts to disentangle the threads of a number of tightly woven discussions that have taken place on XML-DEV recently. The general theme of these discussions is how one associates processing with an XML document. On the surface this may seem like a simple problem, but there are a number of issues that expose some weak points in the XML architecture. Actually, in circumstances where you are exchanging XML between tightly coupled systems, there are very few issues, beyond the usual systems integration gotchas. The difficulties begin to arise in circumstances that make the most of the advantages that markup provides. In these loosely coupled environments data may be subjected to processing and constraints unforeseen by the original information provider. Data tends to take on a life of its own, separate from the producing and consuming systems..."

  • [February 02, 2002] "Welcome Web Services Activity." By Edd Dumbill. From XML.com. January 30, 2002. "This week the World Wide Web Consortium announced the formation of a Web Services Activity. Within the W3C, "Activity" is the name given to an ongoing focus of development encompassing one or more Working Groups. Until this time, the W3C's only participation in the web services world was through the XML Protocol Working Group, which is essentially tidying up SOAP. Since the formation of the XML Protocol Working Group, several companies followed the example of the SOAP team and joined together in ad-hoc groupings to develop the complementary machinery needed to make SOAP work with their programming environments. One technology devised this way was the Web Services Description Language, WSDL, which has become closely intertwined with the use of SOAP. There are pitfalls with trying to standardize something that's brand new, and WSDL has come in for some criticism, in the same way as SOAP did..." See (1) the announcement for the W3C's Web Services Activity, and (2) the local news item.

  • [February 01, 2002] "SAML Brings Security to XML." By Edmund X. DeJesus. In XML Magzine Volume 3, Number 1 (January 11, 2002), pages 35-37. ['SAML uses XML to distribute authentication and authorization information across platforms, organizations, and vendors.'] "Who can you trust? That's the major problem with security -- and one of the major obstacles both within organizations and in cross-organization transactions like B2B. If system A trusts you, does that mean that system B should too? If so, how can A and B exchange security information to perform transactions? Security Assertions Markup Language (SAML) is a new standard that uses XML to encode authentication and authorization information. That information can then be moved between systems within an organization, or between organizations in a transaction. Because its basis is XML, SAML (pronounced 'sam-el') is platform-independent and can move around as simply as text. SAML can be the solution for a variety of security problems facing many organizations today... How SAML achieves the transfer of authentication is fairly straightforward. SAML eschews the kind of hierarchical trust relationship necessary for systems such as Public Key Infrastructure (PKI) and instead uses a peer-to-peer trust model. Two organizations must first agree on the authentication and authorization attributes they require and how they will handle authentication and authorization procedures when the information arrives. For example, if system B trusts system A automatically, then anyone authenticated by A is authenticated for B. In this situation, SAML would pass to B the user credentials that satisfy A. Of course, B may not trust A perfectly. Perhaps A's authentication satisfies only part of what B requires for trust. In this case, A uses SAML to pass to B user credentials that include all of A's criteria and possibly more information, as well. This extra information may be sufficient to satisfy B -- in which case the user is authenticated on B automatically and transparently -- or else B may need to request additional information from the user to complete its authentication. This interaction is more awkward for the user, but can still be less intrusive than entering manually the full authentication information B requires. Although a hierarchical trust system is not required, such a system is still possible with SAML and may be useful in certain circumstances... OASIS working groups are now finalizing SAML. 'The first public version is expected to be ready by the end of 2001,' said Joe Pato, principal scientist at HP Labs and cochair of the OASIS Security Services Technical Committee. The folks at OASIS want to get comments during another quarter to see if they need to address any major problems. Getting that feedback requires a certain critical mass of adopters to incorporate early versions of SAML into their software. Companies including Oblix and Netegrity already have plans to include SAML in the next versions of their products..." See: "Security Assertion Markup Language (SAML)."

  • [February 01, 2002] "Unified XML Content and Data: Oracle's Next-Generation Database. [Oracle9i to Unite Relational Data and XML Content. Content Management.]" By Victor Votsch. In Seybold Report: Analyzing Publishing Technology [ISSN: 1533-9211] Volume 1, Number 21 (February 04, 2001). ['Relational databases work well with structured data, and XML databases are designed for unstructured content. Oracle's next version, due in late spring, will unite both models within the same database. It's an important advance that we think will affect corporate IT shops and publishers alike, eventually chopping the cost of custom-written XML applications down to off-the-shelf levels. In this preview, we describe the Oracle9i product family, discuss its potential applications and assess its impact on users, developers and competitors.'] In the initial version of Oracle9i, released in June 2001, Oracle added XML support directly to the database, primarily to improve performance for accessing transactional content. Some of the data-transformation utilities that had accompanied the 8i application server were brought inside the 9i database, and the developers' kit was enhanced with new functions. The Oracle9i XDK contains a number of tools for moving XML into and out of the database, several of which were enhanced from the Oracle8i release... The second release of Oracle9i Database, slated to become commercially available in the first half of 2002, will improve the performance and tighten the integration of XML handling by the database core. The new features, collectively referred to as Oracle XML DB, unite the SQL and XML metaphors for XML documents and structured data. The new features embrace the W3C XMLSchema data model, providing structured storage for XML data and documents within the repository. Among the features introduced in this release are automatic identification and parsing of XML schemas, navigation that adheres to the XPath standard written by the W3C, and substantial performance enhancements... We see Oracle9i R2 as an important product introduction. One reason, of course, is that Oracle is the leading database vendor and almost anything it does matters to a lot of users. More important, though, we are impressed by Oracle's new functionality. (1) It gives the database administrator and developer greater flexibility for designing systems and manipulating data. XML processing can occur in either the database or a middle tier of logic. (2) The melding of XML and SQL means that organizations can continue to use their existing SQL skills while they figure out how to take advantage of XML. (3) Native support for XML (and related standards such as XML Schema, Xpath and DOM) provides an alternate method for handling unstructured data. (4) It's important to note that these implementations are fully compliant with W3C standards. This is vital for those building complex B2B applications which must communicate with multiple systems..."

January 2002

  • [January 30, 2002] "A Moment of Clarity. The real benefits of Web services may be delivered in the coin of BI [Business Intelligence]." By Justin Kestelyn. In Intelligent Enterprise Volume 5, Number 3 (February 1, 2002), page 6. "... The just-in-time solution is to make information travel across the value chain more quickly than purchase orders do. And what better potential answer than cross-company business intelligence (BI) networks that give every partner involved visibility into customer demand, inventory levels, and logistics? Business Objects is to be commended for its efforts here. As of this writing, the company has beta users test-driving a .Net-based Web services SDK that will let customers build what the company calls 'second-generation BI extranets' -- fully integrated, multipoint BI networks that cross corporate firewalls. (A J2EE edition is expected for general availability in Q2 2002.) According to senior product manager Karl van den Bergh, the SDK represents the first BI application of Web services protocols. Because Web services are self-describing, dynamic, and based on standards such as TCP/IP and XML (and de facto ones such as SOAP), theoretically a partner could use this SDK to build an interactive BI application that supports the analysis of value chain information originating from multiple applications outside the firewall. The partner would 'subscribe' to the required information from a private UDDI directory hosted somewhere along the value chain. Business Objects is onto something here. The beauty of this approach is that it makes BI links with second- and third-tier suppliers more feasible (no expensive point-to-point connections are required) and eliminates the distinction between internal and external data. It also makes the BI process truly collaborative... The fact that strategic business application companies such as Oracle, SAP, PeopleSoft, and Siebel Systems are building Web services frameworks around their products can only fuel the demand for BI networks, although the corporate culture-related challenges of collaborative commerce will still apply. The Web services movement has been focused on improving B2B collaboration via integrated business processes. But the real benefits of Web services may in fact be delivered in the coin of BI, not transaction processing."

  • [January 30, 2002] "A Friendly Interface. OASIS unveils new technical committee for standardizing interactive Web services." By Jeanette Perez. In Intelligent Enterprise Volume 5, Number 3 (February 1, 2002), page 15. Note: The WSCM TC voted to change its name and clarify its charter; the name has been changed to 'Web Services for Interactive Applications' (WSIA) "As Web services technology continues to mature, companies readying Web services need to develop more and more standards, especially in areas such as presentation, interaction, and user interfaces. In response to the increasing demand for interaction standards, the Organization for the Advancement for Structured Information Standards (OASIS) formed a technical committee to develop a Web Services Component Model (WSCM)... Although the big players in the Web services space, Microsoft and Sun Microsystems, have yet to join the committee, many analysts believe the WSCM is a step in the right direction. The Web Services User Interface (WSUI) working group submitted the WSUI specification to OASIS for consideration in the WSCM. WSUI includes influential members from the portal and content markets, such as Epicentric, Documentum Inc., Intraspect Software Inc., Jamcracker Inc., NewsEdge Corp., Securant Technologies Inc. (now owned by RSA Security Inc.), and YellowBrix. Tyler McDaniel, director of application strategies at the Hurwitz Group, discussed in an OASIS release how WSCM and WSUI could help Web services evolve. 'The nascent stage of Web services requires nurturing not just in terms of commercial credibility but also in terms of usable standards,' said McDaniel. 'This concerted effort by OASIS, leveraging the work of WSUI.org, will help the market address a key issue of presenting Web services throughout the Internet ecosystem. With strong vendor leadership, focused through OASIS, enterprises should get the benefit of a thorough specification,' McDaniel said..." See "OASIS to Develop Interactive Web Applications Standard Through a Web Services Component Model (WSCM)"

  • [January 30, 2002] "Web Services: The New Web Paradigm, Part 2." By Rob Cutlip (IBM Software Group). In DB2 Magazine Volume 7, Number 1 (Quarter 1, 2002), pages 40-46. ['Web services that enhance customer experiences can spring up from existing solutions -- and take them one step farther. The right tools can make all the difference.'] Web services, as you may have read in Part I of this article, are self-contained, modular business-process applications based on open standards. They provide a simpler means for businesses to connect their applications with other applications over the Internet or across a network. In Part I of this article, I presented a solution that enabled a financial services data provider to replace its time-consuming quarterly delivery of ASCII-format documents with an XML-based data exchange that lets customers retrieve data on demand. In this installment, I'll show you how to convert that XML solution into a Web service. We will also examine how DB2, using the Web services Object Runtime Framework (WORF), can provide Web services access through SQL, XML, or stored procedures. Finally, we will touch on the use of the IBM WebSphere Studio Application Developer (WSAD) environment for development and deployment of a DB2 Web service... DB2 uses WORF as the underlying framework to support Web service access to DB2. WORF is delivered independently from the DB2 XML Extender Web site or with WSAD and WebSphere Studio Site Developer (WSSD) Technology Preview, a subset of WSAD. When delivered with the WSAD or WSSD, WORF contains a set of tools that automate the process of building of Document Access Definition extension (DADX) Web services, which I'll cover in some detail. The WORF framework supports Web services that make use of DB2 XML Extender collections for query and storage. The XML collections approach enables decomposition of XML documents that can then be stored in DB2 relational tables. It also enables composition of XML documents. In either case, the stored procedures delivered with DB2 XML Extenders perform the data manipulation operations. In addition to the XML approach, WORF also allows the use of SQL UPDATE, DELETE, and INSERT operations and stored procedure invocation. SQL-based Web services don't require the use of XML Extenders because there is no user-defined mapping of XML elements to SQL data. Stored procedures let you provide input parameters dynamically and retrieve the results. When returned, the results include a simple, default XML tagging..." See also Part 1.

  • [January 30, 2002] "UDDI Registries and Reuse." By Anthony Meyer (Technical Director, Flashline.com). ebizQ. (January 28, 2002). "A UDDI registry is a central place to store information about Web services. Registries compliant with the Universal Description, Discovery and Integration (UDDI) specification are already plugged into the Internet and are in use today. Given that the technology for low-coupled reuse is inherent in the Web services methodology, is the UDDI registry the missing piece in the reuse puzzle? [...] In theory, a developer could search the UDDI registry, find a service, discover how to connect to the service and then use the service within the software he/she is developing. The technical potential of Web services is nothing short of revolutionarily. In addition, UDDI makes it a point to discuss what service is being provided instead of how. This is a much-needed change in an environment where platform and language carry so much weight. A service is a service, regardless of what it is running on, and using a service does very little to herd a developer toward one technology corral or another. Function should always be the primary driver of reuse, and UDDI helps drive this point home. But reuse is driven by trust, and UDDI does little to address the issue of trust. A certain level of trust must be established if you expect anything but the most minimal reuse to occur. The data being sent from the service to the reusing developer must be worthy of his/her trust and reliance. Is it timely? Is it accurate? How does it scale? If a business triples its transaction volume, will the services on which it depends run smoothly, or will they slow to a crawl or simply choke? How dependable is the team supporting the service? When there are problems, how quickly are they resolved? What's the turnaround time for a phone call? In what priority will your issue be addressed? Do reliable change management procedures exist? In the case of a disaster--natural or otherwise -- is there a remote failover site? These questions of quality, reliably and dependability are not addressed formally in the current UDDI registry. There is no data to enable service evaluation -- only description. It's the equivalent of a medicine man coming to your door saying 'these little white pills will cure your headaches,' or a list of local hotels but with no prices or ratings. Knowing what something claims to do is very different from verifying that the task is truly and accurately performed..." See "Universal Description, Discovery, and Integration (UDDI)."

  • [January 29, 2002] "Editorial Pointers [on Ontology]." By Diane Crawford (CACM Executive Editor). In Communications of the ACM (CACM) Volume 45, Number 2 (February 2002), page 5. Special issue on 'Ontology: different ways of representing the same concept.' Guest Edited by Michael Gruninger and Jintae Lee. "Early philosophers spoke of ontology as 'the science of being.' The nature of existence; the notion of presence, if you will. We all recognize the concept of a fish, for example (and for cover purposes), but the image we conjure up as we read the word 'fish' is anything but similar. If people and computers-and computers and computers-are to communicate, share, and reuse knowledge seamlessly, we need some agreed-upon vocabulary and more exact specifications with which to convey what we mean. And that's where the science of modern day ontology comes in. In practice, ontology abstracts the essence of a concept and helps to catalog and distinguish various types of objects and their relationships. In truth, even the word 'ontology' means different things in different senses and settings. This month's special section cuts through the vagueness by examining the roots of ontology and its applications in the fields of AI, knowledge engineering, and management, among others. The authors in this section also explore the challenges of designing, evaluating, and deploying ontologies in the real world and within intelligent systems. Guest editors Michael Gruninger and Jintae Lee point out that ontology is garnering attention not only in academic circles, but from industries as diverse as high-tech, financial, medical, and agricultural. We hope this section clarifies the reasons for and the ramifications of this trend..."

  • [January 29, 2002] "Predicting How Ontologies for The Semantic Web Will Evolve." By Henry M. Kim [WWW] (Assistant Professor of Information Systems, Schulich School of Business, York University, Toronto). In Communications of the ACM (CACM) Volume 45, Number 2 (February 2002), pages 48-54. Special issue on 'Ontology'. Abstract [from draft version]: "There is much excitement about ontologies and the Semantic Web, though it is unknown how they will evolve. A way to predict the future of ontologies is to analyze the history of something similar. As opposed to the ontologies as Semantic Web based models of codified knowledge, business forms as paper based models are examined. Business forms innovations are explained in terms of organizational tendency to structure to reduce complexity unless a disruptive innovation increases uncertainty. These explanations are projected to predict that unless reducing uncertainty is more important than reducing complexity, XML will be a better or more proven platform than ontologies. An ontology development tool is identified as possibly a disruptive innovation that raises uncertainty. Then it is predicted that: 1) Ontologies may be widely adopted, if there are ontology development tools that can be practically used by knowledge workers, not necessarily by ontologists; 2) Ontologies are likely to be widely adopted, if an ontology developed by the knowledge worker is of use to the worker irrespective of whether it is used for data sharing. Therefore, ontologies may likely be widely adopted first for software specification; and 3) The first phase in the evolution of the Semantic Web may be the development of de-centralized, adaptive ontologies for software specification... The following summarize the 'XML vs. Ontologies' analysis: A unit is nearly decomposable for purposes of data sharing if it is reasonable to assume that shared understanding can be implicitly or informally applied to interpret data within that unit (a community). Within a near decomposable unit, it is important to reduce complexity in data sharing. If near decomposability cannot be assumed, reducing uncertainty of data sharing by explicitly and formally defining semantics in ontologies may be warranted. Unless reducing uncertainty is more important than reducing complexity, XML will be a better or more proven data sharing platform than ontologies..." See the related online draft version of the paper. [cache]

  • [January 29, 2002] "Evaluating Ontological Decisions With OntoClean." By Nicola Guarino (Institute for System Theory and Biomedical Engineering of the Italian National Research Council [LADSEB-CNR], Padova, Italy) and Christopher Welty (Vassar College, Poughkeepie, NY, USA). In Communications of the ACM (CACM) Volume 45, Number 2 (February 2002), pages 61-65. Special issue on 'Ontology'. This article exposes common misuses of the subsumption relationship and the formal basis for why they are wrong. Ontology Works uses the OntoClean methodology in commercial database integration processes to check for consistency (that formal metaproperties have been expressed). See also "Cleaning-up WordNet's Top-Level" and "Conceptual Analysis of Lexical Taxonomies: The Case of WordNet's Top-Level." References are provided at the web site 'Ontological Foundations of Conceptual Modeling and Knowledge Engineering.'

  • [January 29, 2002] "W3C To Examine Patent Policy." By [Seybold Bulletin Staff]. In The Bulletin: Seybold News and Views On Electronic Publishing Volume 7, Number 16 (January 30, 2002). "The World Wide Web Consortium has called for public comment on how patented technologies should be used in W3C standards... Under current policy, the Consortium requires its members to disclose whether their proposals use any patented technology. (There's no statutory penalty for failure to disclose, but in at least a couple of cases, the U.S. courts have declined to uphold a patent that was knowingly hidden.) In the past, the Consortium membership has refused to adopt any proposals unless the patent owner agreed to royalty-free licensing. But the W3C is worried that it may someday need to adopt a royalty-bearing technology because there is no reasonable alternative. This has provoked howls of outrage from some commentators, particularly in the open-source community. The current draft proposal suggests that the Consortium be open to technologies that can be licensed under 'reasonable and nondiscriminatory' (RAND) terms. The charter document for each working group would then clearly define whether it is trying to develop a royalty-free standard or will accept a RAND standard. It also calls for a dispute resolution process. Patents have their place, but they are not very helpful in getting Internet standards defined and adopted. The whole point of standards is to allow diverse, autonomous networks to cooperate in forming 'the network of all networks.' A standard that some members could not afford would be a poor standard for this purpose. It's hard to argue that W3C should never adopt a necessary technology that can be licensed on reasonable terms. Nevertheless, if such a proposal comes to the floor, we hope W3C looks long and hard for alternatives..." See the news item: "W3C Patent Policy Supports Royalty-Free (RF) Working Groups."

  • [January 25, 2002] "DB2 MQ XML Functions: Using MQSeries and XML Extender from DB2 Applications." By Morgan Tong and Dan Wolfson. From International Business Machines Corporation. January, 2002. "This article describes how MQSeries and DB2 XML Extender can be used together to construct applications that combine XML messaging and database access. We focus on a set of optional DB2 functions and stored procedures that can be installed with DB2 XML Extender Version 7.2. With these functions and procedures, it is possible to support a wide range of applications that use XML messages and MQ functions. These functions and stored procedures provide an easy and yet powerful way to integrate diverse software applications, an essential element for constructing many kinds of systems, such as business-to-business transaction systems and Customer Relationship Management systems. We first provide a quick overview of the DB2 XML Extender, followed by an overview of the DB2 features released in DB2 7.2. With this background in place, we then review the latest capabilities of the DB2 XML Extender that integrates DB2 MQ features with XML messaging. Finally, we illustrate some of the usage examples through an insurance software application... The combination of DB2 MQ functions and the XML Extender can be used to meet a variety of application needs. The new DB2 MQ XML features are easy to use and provide a powerful repertoire of capabilities and functions. Using a set of functions and stored procedures, you can develop database applications more efficiently, especially when integrating a heterogeneous set of applications across a variety of platforms." See 'DB2 XML Extender MQSeries XML Functions and Stored Procedures, Release Notes, Version 7.2'.

  • [January 25, 2002] "What's Next for the UML?" By Roger Smith. From DevX.com (January 11, 2002). ['With better XML support on the way and open-source modeling tools entering the picture, the relatively quiet Unified Modeling Language world is experiencing major upheaval.'] At a book signing in the mid-1980s, not long after I got out of school with my newly minted computer science degree, I ran into E.F. Codd, the IBM researcher who originated the relational database model. As you might expect, Codd was none too enthusiastic about the term and practice of 'object-oriented' development, just then being popularized by methodologists such as Grady Booch, who went on to create (with Jim Rumbaugh and Ivar Jacobsen) the Unified Modeling Language (UML). 'You might as well describe birds as 'air-oriented' creatures,' Codd complained... For the past few years, object and data professionals have brokered an uneasy truce, especially with respect to technology such as Enterprise JavaBeans (EJBs) that are written in Java but typically stored in relational databases such as Oracle or IBM's DB2, which use non-object technology. To overcome the well-known impedance mismatch problem that exists in relational DBMSs, where an application programmer is forced to work in a language (such as Java or C++) that has a syntax, semantics, and type system different from the data manipulation language (that is, SQL) of the DBMS, many developers have resorted to using some sort of object-relational mapping. Author and consultant Scott Ambler has done the majority of the heavy lifting in this area, with his advocacy of a vendor-neutral UML Persistence Model profile defined as part of the upcoming UML 2.0 standard... A profile is defined as a specialized use of UML within the UML specification, and some of the more interesting work currently going on inside the Java Community Process (the Sun-led participatory process that develops and revises Java technology specifications) revolves around the creation of an EJB UML-mapping profile. The JSR-000026 UML profile for EJB defines a set of extensions to UML that can be used to model software implemented with Enterprise JavaBeans in UML. These extensions will let Java IDE, app server and other enterprise tool vendors provide EJB modeling capabilities using UML within their tools, as well as forward and reverse engineering between UML models and EJB implementations. The specification defines an XML DTD for a file placed within the EJB-JAR that identifies a UML model stored in that EJB-JAR and its relationship to other EJBs in the same EJB-JAR. This will allow enterprise tool and framework vendors to use Java's automation and reflection APIs to access UML models stored in EJB-JARs. What is especially compelling about this is that it gives EJB components the capability of self-describing their contents and capabilities, using either use case or other UML diagrams. The proposed profile will also support Extensible Meta-Data Interchange (XMI), the widely used meta-data representation format based on XML. With Windows UML modeling tool prices running upwards of $3,000 per user (with an additional $1,000 to $2,000 surcharge on the UNIX platform), it's arguably true that the high cost of quality modeling tools has kept the majority of developers from adopting object-oriented analysis and design techniques -- which should already have been state of the practice five or six years ago, if not more. 
But driven by open-source modeling tool efforts such as Thorn and ArgoUML, this state of affairs is about to change... Thorn is a UML modeling tool written in Java that allows you to use XML to save the models you create. The purpose of the Thorn modeling tool is to help develop and manage increasingly sophisticated open source development efforts. ArgoUML is a modular and extensible open-source Java/UML project from Tigris.org, a mid-sized open-source community that focuses on building better tools for collaborative software development. Based on the UML 1.3 specification and licensed similar to the Apache web server, ArgoUML provides comprehensive support for XMI, the XML Metadata Interchange format, and OCL (the Object Constraint Language)..."

  • [January 25, 2002] "MiniDOM and PullDOM." By Paul Prescod. June 2000 or later. "MiniDOM is a tiny subset of the DOM APIs. PullDOM is a really simple API for working with DOM objects in a streaming (efficient!) manner rather than as a monolithic tree. The biggest problem with SAX is that it is inconvenient and, to a certain extent, pretty complicated. You have to fill in a lot of methods with predetermined signatures, go through a few pre-parse incantations and organize your code in a callback pattern. You have to take complete control of any state you need to keep. If you don't, you won't even be able to differentiate characters in a title from characters in an emphasis. I'm not saying it's brutally complex, I'm just saying that it isn't as easy as PullDOM. xmllib has most of the same issues. The biggest problem with the standard DOM is that you must parse the whole document into a random access structure, which typically means that you must have a lot of RAM to process a very big document. I get nervous writing software when I know that a big document could crash it. PullDOM has 80% of the speed of SAX and 80% of the convenience of the DOM. There are still circumstances where you might need SAX (speed freak!) or DOM (complete random access). But IMO there are a lot more circumstances where the PullDOM middle ground is exactly what you need... the events stream object is pretty sophisticated. Still, it has limits. You can only expand the current node, because events and nodes relating to any other node are probably lost in the mists of time. Expanding a node pulls that part of the XML document into a random access data structure! That's all you need to know to use PullDOM: the DOM APIs, the parse method, the for-looping convention and the "expandNode" method. You need Python 1.6 or at least a modern version of PyExpat to run this..." The code is available online. [cache]
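
    The pattern Prescod describes survives essentially unchanged in the xml.dom.pulldom module that later shipped with the Python standard library; a small sketch follows (the input file and element names are hypothetical):

        # Stream events, and build a DOM subtree only for the elements we want.
        from xml.dom.pulldom import parse, START_ELEMENT

        doc = parse("big-document.xml")
        for event, node in doc:
            if event == START_ELEMENT and node.tagName == "chapter":
                doc.expandNode(node)  # random access within this subtree only
                for title in node.getElementsByTagName("title")[:1]:
                    print(title.firstChild.data)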

  • [January 25, 2002] "SAXDOMIX 1.0: Free Open-Source Standards-based Framework for Scalable XML Processing." From Devsphere.com. "Currently, SAX and DOM are the most popular APIs for XML parsing. Each of them has PROs and CONs. Why not mixing SAX and DOM to get maximum advantage? The Simple API for XML (SAX) can be used to obtain the content of an XML document as a sequence of events. For example, you'll be notified when the parsing starts and ends, when a start / end tag if found or when a chunk of character data is read. This approach is very efficient because the parser doesn't hold the information in memory and the processing is minimal at the parser's level. In most cases, however, you can't do much with the information of a single event... The Document Object Model (DOM) can be used to build in memory a tree structure containing the information of an entire XML document. The DOM API defines interfaces whose instances are linked in a tree and maintain information about elements, attributes, character data sections, processing instructions, etc. DOM isn't the easiest way to manipulate the information of a document, but it has a huge advantage: a DOM tree is a mirror of the original document's structure and content. This makes DOM the ideal foundation for many XML tools such as XPath and XSLT processors. The problems start to appear when you need to process large documents... The mixing of SAX and DOM can reduce dramatically the memory requirements when the application doesn't need an entire DOM tree in memory. In addition, there are pieces of information that can be extracted directly from the SAX events without the need to build a DOM sub-tree. For example, the application could analyze the attributes of an element and decide if it needs the entire DOM sub-tree rooted by that element. Such a technique improves the performance of the application because the creation of the DOM objects is an expensive operation. Finally, Extensible Stylesheet Language Transformations (XSLT) are the most used way to process XML documents. SAXDOMIX provides special support for this kind of processing. In a mixed transformation, the same XSLT instructions are applied to each DOM sub-tree and the results are inserted into the output document..."

  • [January 25, 2002] "W3C's Chairman Talks About the Momentum of The Web Services Standards Effort." By Eugene Grygo. In InfoWorld (January 24, 2002). [Note: On January 13, 2002 the W3C announced that Dr. Jean-François Abramatic was stepping down as W3C Chairman: "From 1996 to 2001, Jean-François led the Consortium with wisdom and insight. Many thanks and best wishes... Dr. Steven R. Bratt, W3C's new Chief Operating Officer. Steve will oversee worldwide operations, the W3C Process, the Team, strategic plans, budget, legal matters, and major events."] "Diversity will not be the undoing of Web services infrastructure standards, says Jean-François Abramatic, chairman of the International World Wide Web Consortium (W3C) since 1996. Abramatic, who oversees the strategic direction of the W3C, believes it's a good thing for the Web services standards that the W3C oversees -- including XML; SOAP (Simple Object Access Protocol), an XML-based messaging standard; and WSDL (Web Services Description Language) -- to be implemented in a variety of languages and environments. Abramatic, who is also senior vice president of research and development for iLog, which provides customizable, pre-built C, C++, and Java components for application development, spoke in a recent interview with InfoWorld associate news editor Eugene Grygo about the momentum of the Web services standards effort." [Abramatic:] 'The availability of the XML infrastructure has made it [Web services standards effort] possible, I mean reasonable, for Web services to be deployed. And naturally people have invested energy in that direction. So, first of all, I see the XML protocol as the common default front. The SOAP standard has been submitted to W3C, which gives it sort of an official status in the W3C process. We're at the second working draft stage with SOAP, so it's ... on its way to recommendation and there is no reason to doubt that it will not come out. WSDL is at an earlier stage, but it's also built upon SOAP anyway. It's at the level of submission. ... it's been submitted by more than 20 companies, which is a record for the organization. Meaning that, one, it shows the interest and the momentum, and two, it shows the level of first-level consensus in the community... So, those are the two pieces which are on W3C's agenda. UDDI Universal Description, Discovery, and Integration is not'..."

  • [January 25, 2002] "Understanding the Value of Web Services. Intel's Thomas Says Vendors, IT in Danger Of Missing The Real Payoff." By Jeff Moad and Christopher Thomas. In eWEEK Volume 19, Number 3 (January 21, 2002), page 39. "What are web services? The answer depends on whom you ask. To major vendors promoting the idea, Web services are self-describing applications that live online and that, using standards such as SOAP (Simple Object Access Protocol), Web Services Description Language, and Universal Description, Discovery and Integration, can be accessed and used by any client. To others, however, that way of viewing Web services is at best incomplete and, at worst, counterproductive. Intel Corp. Chief e-Strategist Christopher Thomas, for example, said he believes that, by focusing too soon on that ambitious view of Web services, IT managers risk overlooking the real, near-term value of Web services technologies: to solving vexing enterprise integration problems. Thomas explained his views in a recent interview with eWeek Executive Managing Editor Jeff Moad..." Thomas: 'People are doing two things with SOAP. The first thing they're doing is they're wrapping XML as a message. The second thing they're talking about -- which is where most people focus their discussion -- is SOAP as an[RPC (remote procedure call) and a self-describing process. Almost no one is using SOAP as an RPC and a self-describing process. Almost everyone is using XML and XML wrapped in SOAP. The focus on SOAP as an RPC and self-describing process is the residue of an industry that is [used to] capturing developers' mind share and setting the stage for the new round of products coming through and is not necessarily capable of helping IT do something today because they don't have their products ready... In the first generation of Web services, all we're doing is sharing files. We're literally putting XML into the messaging environment... Inside the business, having a central repository of data makes total sense. But how that data is distributed has to change from a tethered connection to that repository to a distributed connection. So I can get that data in more ways than just a terminal accessing it. So that's the change XML offers'..."

  • [January 25, 2002] "Tool Kit Accents Web Services Hurdles." By Peter Coffee. In eWEEK Volume 19, Number 3 (January 21, 2002), pages 11, 14. "Released last week, Office XP Web Services Toolkit invites application developers to transform desktop data containers (such as Excel spreadsheets and Word documents) into active collectors and filters, feeding from a worldwide supply of XML-punctuated data. The tool kit is a relatively svelte download at less than 2MB (before expansion of compressed files), and in eWeek Labs' tests it installed quickly. But it also installed with less visibility as to what it was doing than we'd prefer. When the dust had settled, though, we found ourselves equipped to explore the incorporation of Web services directly into Microsoft Outlook, Access or Excel, aided by included tutorials on XML Document Object Model and SOAP (Simple Object Access Protocol), including complex data types... Before developers get too starry-eyed over the possibilities of Web services, we urge them to reflect on what the tool kit labels as trouble-shooting tips--but that we found to be general caveats for any plans of rapid Web services adoption. For example, many users are still not living in the always-connected world that the Web services paradigm assumes. And many desktop, mobile and handheld systems aren't yet equipped with the software infrastructure (such as Microsoft's SOAP Type Library DLL) that's needed to interpret Web services requests and responses..."

  • [January 25, 2002] "Web Services on Desktop." By Peter Galli. In eWEEK Volume 19, Number 3 (January 21, 2002), page 14. "Office XP Web Services Toolkit and Smart Tag Enterprise Resource Toolkit, released last week, will enable developers to put XML Web services data into Office XP, which Microsoft Corp. officials said will make it easier for users to access important information in applications they use daily... Smart Tag Enterprise Resource Toolkit provides a road map for how to plan, design, implement and deploy robust, scalable smart tags within the enterprise, while Office XP Web Services Toolkit enables developers to apply XML Web services in Office XP, Brown said. Microsoft has already signed up General Motors Corp. for its newly launched pilot program to recruit partners and customers into building Office-based Web services. GM is rolling out Windows 2000, Office XP and some of these tools to about 120,000 workstations... Microsoft's Brown said the tool kits also allow developers to discover XML Web services using the standards-based Universal Description, Discovery and Integration service and integrate them directly into Office XP solutions with a single click..."

  • [January 25, 2002] "XML Standards Updated. Key XML Query 1.0 Specification Moves Closer to Completion Amid Controversy Over Lack of Update Features." By Timothy Dyck. In eWEEK Volume 19, Number 3 (January 21, 2002), pages 44, 48. "The all-too-familiar struggle to satisfy time-to-market simplicity and final-feature-set criteria is in full swing in several key XML standards bodies, the results of which will affect all users of XML. The World Wide Web Consortium just finished one of its busiest periods ever, with 27 publications released last month. Several of these proposals were releases of new or updated working drafts for key forthcoming XML standards, including XQuery 1.0; XPath (XML Path Language) 2.0; XSLT (Extensible Stylesheet Language Transformations) 2.0; and XML 1.1, an update to XML itself. XML-based technologies have become so important to so many powerful organizations that it is now quite difficult to find consensus on how to define higher-level search and data manipulation techniques for XML. As a result, there are a number of overlaps among different standards that provide dissimilar ways of doing similar things. For example, both XPath and XQuery provide ways to search through an XML document and return found data (for example, to search through a list of customer records to find those records where the state element is equal to 'WA' and the credit check element is equal to 'passed'). Likewise, XQuery and XSLT provide different ways to write logic to change the format of XML documents (for example, to reorder elements in an XML document or to generate different kinds of markup from the same source file). The most recent round of standards setting tries to eliminate some of these inconsistencies. XPath 2.0 is now a subset of XQuery 1.0 work, and the goal is for XPath 2.0 expressions to be fully compatible with XQuery 1.0 and generate exactly the same search results. The December XPath 2.0 working draft even states that the XPath 2.0 and XQuery 1.0 working draft documents are generated from common source files. This convergence of XPath and XQuery (and of XPath and XSLT 2.0) is prompting changes in XPath, and early adopters using XPath 1.0 will have to recheck their XPath queries to make sure they still work as intended in XPath 2.0... XML, the basis for all this work, is also being updated with a proposed 1.1 release. Just two major changes are in the update. The first is to upgrade Unicode character support to the current Unicode 3.1. XML 1.0 supports Unicode 3.1 characters in data but restricts metadata such as tag names to Unicode 2.0 characters (at 94,140 characters, Unicode 3.1 is close to triple the size of Unicode 2.0). This change won't affect many developers but is appropriate for an international standard as fundamental as XML. The second change is to make the line-end character sequence used on OS/390 systems a legal line-end symbol in an XML file. This would let OS/390 users edit XML files using native text editing tools and transfer XML files generated on mainframe systems to other systems without any line-ending conversions..."

  • [January 24, 2002] "Sun to Build UDDI Directory for Web Services." By Michael Vizard and Tom Sullivan. In InfoWorld (January 23, 2002). "With the advent of Web services, iPlanet E-commerce Solutions, the Sun/Netscape Alliance, has embarked on an ambitious effort to re-engineer its networking software to support large-scale deployments of distributed components. At the center of that effort will be a new implementation of the company's directory offering that will support UDDI (Universal Description, Discovery, and Integration). UDDI, akin to an online Yellow Pages in which Web services can be registered and located, makes up the core de facto standards for Web services in conjunction with XML, SOAP (Simple Object Access Protocol), and WSDL (Web Services Description Language). iPlanet expects customers to use its directory, which is built on a hierarchical database, to discover, track, and understand the relationship between objects distributed across a network... The new Sun directory offering will become a strategic element of Sun's overall Continuous Java initiative, and as such will be integrated with Sun's JSR (Java Specification Request) and JMS (Java Message Service) technologies, which will be made more accessible to programmers invoking a SOAP interface. Once the directory is in place, the Sun/Netscape Alliance will follow up with a range of distributed networking software, including a distributed file system that will include storage virtualization capabilities and a set of distributed cache management tools. As distributed objects that leverage Web services to communicate begin proliferating around the enterprise, industry analysts said directories are expected to play a key role..."

  • [January 24, 2002] "XML, Web Services to Share Winter Olympic Podium." By John Fontana. In Network World Volume 19, Number 3 (January 21, 2002), page 12. "At next month's Winter Games in Salt Lake City, SchlumbergerSema, the technology service company behind the Olympic network, hopes to prove that XML can handle that same pressure with the grace of a figure skater, the speed of a bobsledder and the accuracy of a biathlete. And in the 17 days of Olympic events, IT executives will get a battle test on XML and Web services, middleware or application code, based on a collection of XML-based protocols, as a key integration technology for corporate systems... SchlumbergerSema, which this year replaces IBM as the architect of the Olympic network, used XML to integrate 30 disparate systems that tie together everything from timing/scoring devices to custom applications to the participant management system. The network, which uses SONET on the WAN and 100M-byte Ethernet on the LAN, spreads across 40 Olympic venues, two data centers and a mission-control facility in and around Salt Lake City... The Olympic network includes 40 applications, 4,500 workstations and laptops, 225 servers, 145 Unix boxes and 32,000 miles of fiber-optic cable. It is run by 3,000 IT staff during the Games. Where IBM took a mostly single-vendor approach, SchlumbergerSema has assembled a consortium of vendors, including Sun, Cisco, Microsoft, Oracle, Xerox, Kodak and Seiko, and tied their products together using custom-written transport protocols, queuing mechanisms and guaranteed delivery technology based on XML. The project started more than two years ago when SchlumbergerSema began developing two XML-based protocols - the On-Venue Transfer Protocol (OVTP) and the Queuing Protocol (QP) - to support standards-based data transport and guaranteed data delivery between disparate systems. SchlumbergerSema also developed a specification called ORIS+, an XML representation of the Olympic Results and Information Service (ORIS), which outlines how event results must be represented. ORIS+ defines the contents of XML data packets, which are then moved between systems using OVTP for transport and QP to guarantee delivery. The model is similar to the Simple Object Access Protocol (message transport) and the Web Services Description Language (packet description), which are foundations of Web services technology... SchlumbergerSema also created an XML 'shim,' software that provides an interface between 11 On-Venue Results (OVR) systems, which collect timing/scoring data and are built on Microsoft's Windows NT and SQL Server, and a centralized data-collection system based on Sun hardware and Oracle databases. The data-collection system is integrated via XML to a suite of SchlumbergerSema custom applications called Info Diffusion, which displays results, athlete biographies, news, weather and travel information. The OVR systems also feed data to a custom application called Commentator Information System, which provides real-time results to TV and radio commentators, such as split times during racing events..."

  • [January 24, 2002] "The Home Depot's Latest Project: XML, Web Services." By Ann Bednarz. In Network World Volume 19, Number 3 (January 21, 2002), page 12. "The Home Depot is set to launch a pilot of its new point-of-sale system, which would replace four sales applications the company maintains in about 1,300 stores... At the core of The Home Depot's development strategy are XML for defining data to be shared between applications and Web services for breaking up monolithic applications into reusable components... The Home Depot's existing POS environment is highly customized and tightly coupled with multiple store systems, including tool rental and special orders. It works well, but The Home Depot found it tough to tailor the system to respond to business changes... Also driving the need for the new POS system is The Home Depot's desire to synchronize information from different sales channels, including special order, phone and Web systems, and share consistent information with consumers and marketing staff... With Web services and XML messaging, companies can isolate - and then reuse - software components that handle specific tasks. The new POS system will replace the four existing systems, and its component-based design will take business changes more easily than packaged applications. The Home Depot's existing POS, tool rental and special order systems don't have common code, but share common functions such as price lookup, tax calculation, tender management and returns authorizations... The Home Depot also uses XML to keep its orders and customer information in synch between the Web site, stores and its central customer database. When a Web order is placed, the company uses XML documents and messaging over HTTP to send the order to be processed in a specific store - for accountability purposes and to make sure the Web site wasn't competing with any particular stores..." [Information from Ray Allen, Senior IS Manager, The Home Depot.] See the news item of January 18, 2002: "ARTS and IXRetail Release XML Price and Digital Receipt Schemas for Retail Industry."

  • [January 24, 2002] "Top Web Services Worry: Security." By John Fontana. In Network World Volume 19, Number 3 (January 21, 2002), pages 1, 10. "The absence of security and reliability is proving to be a major stumbling block in convincing companies that Web services can thrive outside of corporate firewalls. IT executives are finding that Web services technology can ease internal application integration. But for business-to-business integration, the technology is lacking key standards for enterprise-class transactions, according to experts attending last week's Next Generation Web Services Conference, which drew about 700 participants. Informal polls showed security was the top issue among those considering Web services. Work is under way to develop protocols and mechanisms to strengthen the security, reliability and workflow capabilities of Web services, but some experts argue that they might not be robust enough or may overlap, and cause interoperability or integration problems down the road... Web services technology is being touted for its ability to transform application logic housed in disparate systems into components with XML-based interfaces. Those components can be integrated or aggregated into complex business applications or processes. The vision is that Web services from any number of sources could be dynamically combined over the Internet into hybrid applications for business-to-business commerce... Web services specifications that begin to solve those problems are being developed now, including the Extensible Access Control Markup Language (XACML), Security Assertions Markup Language (SAML), XML Key Management (XKMS), XML Encryption, Web Services Flow Language, XML Digital Signature, Business Transaction Protocol and extensions to the Simple Object Access Protocol (SOAP). Meanwhile, IBM has proposed HTTP-R for reliable transport of SOAP messages. And Microsoft is working on a Global XML Architecture, which includes proposed standards called WS-Security and WS-Routing. The Organization for the Advancement of Structured Information Standards is developing ebXML, which includes models for security and standardizing electronic business processes. Others are proposing extensions to SOAP, which can carry directives in the header fields of its messages... Eduardo Fernandez [Department of Computer Science and Engineering at Florida Atlantic University] says XACML and SAML don't follow classic maps for security and might eventually produce errors, and XML Encryption and XKMS overlap in many places. In the interim, a handful of vendors, including IBM, Microsoft, Kenamea, Sonic, Iona, Tibco, Flamenco Networks and Grand Central, are using a collection of standard and proprietary technology in middleware software or services that use security, reliable delivery of messages and transactional integrity of business processes exposed using Web services..." See for example: Security Assertion Markup Language (SAML); Extensible Access Control Markup Language (XACML); XML and Encryption; XML Digital Signature (IETF/W3C); XML Key Management Specification (XKMS).

  • [January 24, 2002] "XML for Data: Modeling Many-To-Many Relationships. Tips and Techniques to Help You Create More Flexible XML Models" By Kevin Williams (CEO, Blue Oxide Technologies, LLC). From IBM developerWorks, XML Zone. January 2002. ['In this column, Kevin Williams takes a look at some options for modeling many-to-many relationships in XML. Several different techniques, and the advantages and disadvantages of each, are discussed. Examples are provided in XML.'] "Relational databases are, by their nature, more flexible than hierarchical data storage structures such as XML. Many relationships that are simple to model in relational databases (such as the relationship between invoices and parts in a shipping system) turn out to be fairly difficult to model in XML. In this column, I'll take a look at a typical many-to-many modeling challenge, and go through some options you have when creating an XML model for that information... A typical modeling puzzle If you have some experience modeling data for relational databases, you know that many-to-many relationships between different relational entities appear all the time. This column uses a common example as its starting point: invoices and parts in a shipping system. This is the classic example of a many-to-many relationship: Invoices may include many parts, and each part may appear on many invoices. Additionally, you may have other information associated with the relationship itself that you need to model. For example, when a part appears on an invoice, it will typically have a quantity and price associated with it... XML provides a couple of different mechanisms for representing relationships between elements. The most commonly used mechanism is the parent-child relationship. This can be used to represent a one-to-one or one-to-many relationship between elements. However, when you try to represent a many-to-many relationship, this mechanism is insufficient, as each element may only have a single parent element. Relationships in XML can also be represented with ID-IDREF(S) attributes. Using these attributes, an element may refer to one or more other elements, viz., by including the value of those elements' ID fields in the pointing element's own IDREF or IDREFS field. While this may seem to be directly analogous to a relational database's key mechanisms, there's one important difference: Most parsers treat these pointers as unidirectional. In other words, given an IDREF or IDREFS field, it's possible to quickly find the element or elements with the associated ID or IDs, but not the other way around. As you'll see when I discuss modeling solutions, this turns out to be a real impediment to design..." Article also in PDF format.

  • [January 24, 2002] "Relax NG, Compared." By Eric van der Vlist. From XML.com. January 23, 2002. ['The RELAX NG schema language explained and compared to W3C XML Schemas.] "This article is a companion to two different works already published on XML.com: my introduction to W3C XML Schema is a tutorial introducing the language's main features, with a progression which I hope is intuitive; and my comparison between the main schema languages, an attempt to provide an objective and practical feature-by-feature comparison between XML schema languages. In this new article, I have taken the same approach as the one used in the W3C XML Schema tutorial but this time I've implemented the schemas using RELAX NG... it provides a good starting point for those of us who know W3C XML Schema and want to quickly point out the differences with RELAX NG. Links are provided throughout to the corresponding sections of the W3C XML Schema tutorial, and you are encouraged to follow both simultaneously... Throughout this comparison, we have seen that one of the main differences between the two languages is a matter of style: while RELAX NG focuses on generic 'patterns', W3C XML Schema has differentiated these patterns into a set of distinct components (elements, attributes, groups, complex and simple types). The result is on one side a language which is lightweight and flexible (RELAX NG) and on the other side a language which gives more 'meaning' or 'semantic' to the components that it manipulates (W3C XML Schema). The question of whether the added features are worth the price in terms of complexity and rigidity is open, and the answer probably depends on the applications. Independently of this first difference between the two, the different positions regarding 'non-determinism' between RELAX NG, which accepts most of the constructs a designer can imagine, and W3C XML Schema, which is very strict, mean that a number of vocabularies which can be described by RELAX NG cannot be described by W3C XML Schema. A way to summarize this is to notice that an implementation such as MSV (the 'Multi Schema Validator' developed by Kohsuke Kawaguchi for Sun Microsystems) uses a RELAX NG internal representation as a basis to represent the grammar described in W3C XML Schema and DTD schemas. This seems to indicate that RELAX NG can be used as the base on which object oriented features such as those of W3C XML Schema can be implemented. The value of an XML-specific object-oriented layer is still to be determined, though, since generic object-oriented tools should be able to generate RELAX NG schemas directly..." See W3C XML Schema and "RELAX NG." For schema description and references, see "XML Schemas."

  • [January 24, 2002] "Digging Animation." By Antoine Quint. From XML.com. January 23, 2002. "Animation is a core feature of SVG. It is a large part of SVG's specification and is based on SMIL Animation. In fact, if you know about SMIL animation already then this article ought to be a doodle or a nice little bit of review. You might want to have the SVG Animation spec chapter handy before we start. My mission in this article is to show you how to recreate one of those nifty gravity animation effects that people gaze at for hours. SVG will certainly not make this kind of animation any more useful than its implementation in Flash, but it is certainly very instructive to create. For a little taste of what we're going to create, have a look at Niklas Gustavsson's original SWF animation, although we're going to add a few enhancements... Adobe's SVG Viewer on my laptop gives me a much smoother animation than the SWF played in the Flash player. By the way, the gzipped version (commonly known as SVGZ) of the final animation is only 926 bytes, while the SWF version is 1.29 KB... I hope you enjoyed this ride through some of the nicest features of SVG animation. There is a lot more to SVG animation, especially the idea of synchronization, which we'll consider in future articles. I think this simple animation highlights some of the key concepts of SVG animation, primarily that it is time-based and property-based, as well as some of its differences from the SWF animation capabilities. It's probably now time for you to read the whole animation chapter of the SVG 1.0 specification and start digging into it yourself. One thing you'll find missing at the moment is an authoring tool for SVG animations. Jasc's WebDraw 1.0 has just been announced and includes support for some of SVG's animation features. And Adobe just recently asked on the SVG-Developers list for feedback on the need for SVG support in their animation tool, LiveMotion. Adobe is soliciting your input on its email wish list. If you want to see SVG grow in terms of user penetration and ease-of-use, make sure Adobe and other vendors hear from you..." On November 21, 2001, Antoine Quint published a survey article "SVG: Where Are We Now?" For Adobe's SVG Viewer, see the SVG Zone; the Adobe SVG Viewer 3.0 is available in 15 languages. See: (1) the W3C SVG Web site; and (2) local references in "W3C Scalable Vector Graphics (SVG)."

  • [January 24, 2002] "Update to Market Impact of the ebXML Infrastructure Specifications." By Mike Rawlins. January 23, 2002. "... I would regard critical mass for ebXML as adoption by most of the Fortune 1000 and a sizeable percentage of other organizations. If we take the U.S. as an example, I recall estimates that there are somewhere between 8 and 10 million businesses that could conceivably use EDI, but only one or two hundred thousand actually do. Considering this abysmal adoption rate for EDI, if ebXML were adopted by even 10% of SMEs it would achieve critical mass... I find it interesting that although there were comments on my assessments of every one of the specifications, the assessment that dominated the discussions was that of the CPA/CPP specification. I hesitate to infer very much from this, but because of it the CPA/CPP assessment is the only one that I am revising. A good point was made that the analogy between the X12 838 transaction set and the CPA/CPP specification is not as valid as I implied (though I think they still have a lot more in common with each other than a negotiated modem handshake!). Configuring systems to achieve secure, reliable data exchange over the public Internet is inherently more complex than configuring EDI systems to exchange data using VANs. Regardless of whether or not the ebXML messaging system is used, the CPA/CPP may be a useful aid in that configuration. However, past experience has shown us that in new technology rollouts, vendors often implement a subset of features that they regard as being the minimum required to meet market demand. Current trends with ebXML software seem to be confirming this..." See: "Electronic Business XML Initiative (ebXML)."

  • [January 22, 2002] "UBL 'Might Help .NET' Says XML Founder." By [XML-J Staff and] Jon Bosak. In XML-Journal (January 22, 2002). ['Officially titled Distinguished Engineer, the highly articulate Bosak has been Sun Microsystems's point man involved with XML ever since a cross-industry group, organized and led by Sun, first drafted it as a simplified subset of SGML capable of supporting the definition of an unlimited number of special-purpose languages optimized for different specific industries and domains... XML-J Industry Newsletter invited the team-spirited metalinguist to bring SYS-CON Media's audience up to speed on the latest far-reaching e-commerce initiative he's spearheading, namely the Universal Business Language (UBL)'] "...[UBL has] formal liaisons from some key industry groups -- EIDX for the electronics industry, ARTS for retail, XBRL for accounting, and RosettaNet for information technology. I expect quite a few more of these vertical industry organizations to establish relationships with UBL as they start to realize that we're solving some basic information exchange problems that are common to all of them. And we're working on getting liaisons from the main EDI standards bodies, though that takes a while... UBL and ebXML complement each other; they're not competitors. So ebXML is not going to replace UBL. The deliverables promised for ebXML never included a designated XML syntax. The whole project was 'syntax neutral' so that the semantic models could be specified in a way that leaves the binding to a specific notation undefined. This lets you produce XML or EDI versions of the data from the same models. Now if we're only interested in a single XML language for standard business forms, then we can simplify this approach by eliminating that journey out to the abstract layer and just define the data model right in the XML schema. As far as I know, no one has yet identified a data modeling requirement for electronic commerce that can't be met with XML schemas... Thanks to ebXML, we've now got secure XML messaging built on SOAP, we've got a consensus on how to form trading partner agreements, which can be done either manually or automatically, we've got the basic specification for a very powerful taxonomy-driven registry, we've got a technology for the discovery and classification of core data components in the data dictionary, and we've got a preliminary understanding of how a library of components can be changed to reflect the current business context in which they're being used. If you add a standard syntax to the infrastructure pieces already defined by ebXML, we're ready to rock and roll. So even though the more visionary pieces of ebXML still have a long way to go, for projects over the next few years I personally consider ebXML version 1.0 essentially done, and what I want to see us do is start using it..." See: "Universal Business Language (UBL)."

  • [January 22, 2002] "The ebXML Registry." By Kristian Cibulskis. In XML-Journal Volume 3, Issue 01 (January 2002). "The Electronic Business Extensible Markup Language, better known as ebXML, aims to allow companies of any size to conduct business electronically via the Internet. Obviously, companies doing business together isn't a new idea. EDI (electronic data interchange) has been used between large businesses to conduct electronic business since the 1960s. However, EDI often requires the implementation of custom protocols and proprietary message formats between the individual companies. Because of this, its use has been restricted to larger corporations that can absorb the initial costs required to do business in this fashion. The goal of ebXML is to provide a flexible, open infrastructure that will let companies of any size, anywhere in the world, do business together. The ebXML effort is jointly sponsored by the United Nations Centre for Trade Facilitation and Electronic Business (UN/CEFACT) and OASIS, the Organization for the Advancement of Structured Information Standards, along with approximately 30 other industry leaders. UN/CEFACT is also the standards body behind EDIFACT, an EDI standard used heavily throughout Europe and the Pacific Rim. The ebXML group has delivered three key components of a next-generation B2B infrastructure: (1) An XML messaging specification; (2) A trading partners agreement specification; (3) A registry/repository specification. A second initiative at OASIS has begun to create a Universal Business Language (UBL), essentially a standard set of XML business documents to be used for B2B transactions. UBL is based on xCBL 3.0, which is freely available and widely deployed. In this article we'll explore the ebXML Registry/Repository, one of the cornerstone components of the ebXML architecture..." See (1) "OASIS ebXML Registry Technical Committee Releases Approved Version 2.0 RIM/RS Specifications"; and (2) "Announcing JAXR RI 1.0 EA." See also: "Electronic Business XML Initiative (ebXML)."

  • [January 22, 2002] "XSL Formatting Objects: Here Today, Huge Tomorrow." By Frank Neugebauer. In XML-Journal Volume 3, Issue 01 (January 2002). "There are two parts to the W3C Recommendation: a transformation part (XSLT) and a formatting part (XSL Formatting Objects, or XSL-FO for short) with the intent being the presentation of XML. However, since XSLT is also its own (more mature) W3C Recommendation, it has enjoyed the attention of developers wishing to transform XML into other markup languages such as HTML. In a very real sense XSLT is how XML is currently being visually presented. Although using XSLT to transform XML to HTML can be very powerful and useful, it also has some serious limitations. For example, HTML, even when combined with Cascading Style Sheets (CSS), can cause unpredictable results when printing. Another limitation is the need for developers to understand the inner workings of the expected eventual output format(s) (e.g., HTML, WML) and to code XSL Stylesheets for each such output format expected. In theory (and partially in practice), XSL Formatting Objects can overcome these shortcomings because the language is a kind of "general" markup language with extensive formatting capabilities without being output format-specific. It may eventually be positioned as the ultimate language for expressing the output format of XML documents across software (e.g., Web browsers, word processors) and hardware (e.g., printers, cell phones) boundaries. Admittedly this speculation is a bit optimistic - the result of the promise brought about by the growing maturity and capability of XSL-FO processors. Currently, a number of maturing XSL-FO processors are capable of rendering such formats as Portable Document Format (PDF), Rich Text Format (RTF), and plaintext, among others... In this article I'll use a demo version of one such processor, RenderX's (www.renderx.com ) XEP, to demonstrate how XSL-FOs work in the hopes of providing you with the same "enlightenment" I experienced when I first started using XSL-FO..." For related resources, see "Extensible Stylesheet Language (XSL/XSLT)."

  • [January 22, 2002] "SOAP Messages with Attachments." By Ian Moraes. In XML-Journal Volume 3, Issue 01 (January 2002). "Organizations are being challenged to partner with other organizations in order to respond more rapidly to new business opportunities, increase the efficiency of business processes, and reduce the time to market for their products. To address these issues, they're typically required to develop interoperability between disparate, legacy applications to support collaborative business processes. This is accomplished by coordinating the exchange of business documents between applications in a predefined manner. For example, two insurance companies with different systems may need to exchange auto insurance claim data, such as a TIFF file for claims processing. Enterprise applications that support these types of requirements are known as business services. Supporting business services through Web applications, commonly known as Web services , requires a careful evaluation of available technologies. An important underlying technology of Web services is Simple Object Access Protocol. SOAP enables applications to communicate with each other in a platform, language, and operating system-independent manner. For those who need to use a Web service to perform a functionality such as sending a document in the form of attachments (e.g., a TIFF file) from one application to another using SOAP, a pertinent specification is the SOAP Messages with Attachments note. This article discusses this emerging W3C note and illustrates how it can be used with the Apache SOAP implementation..." See SOAP Messages with Attachments, W3C Note 11-December-2000; it "defines a binding for a SOAP 1.1 message to be carried within a MIME multipart/related message in such a way that the processing rules for the SOAP 1.1 message are preserved. The MIME multipart mechanism for encapsulation of compound documents can be used to bundle entities related to the SOAP 1.1 message such as attachments. Rules for the usage of URI references to refer to entities bundled within the MIME package are specified." See also: "Simple Object Access Protocol (SOAP)."

  • [January 22, 2002] "JAX Pack! -- Bridging The Gap Between Java and XML Technologies." By Hitesh Seth. In XML-Journal Volume 3, Issue 01 (January 2002). "Java and XML are perfectly married. Java represents a technology evolution for platform-independent development and deployment, and an effective mechanism for achieving distributed computing. XML, a very simple concept, has taken the industry by storm and is revolutionizing how data is represented and exchanged within a company and between enterprises. In a nutshell, Java represents portable code and XML represents portable data -- a perfect marriage. In an effort to fuel this marriage, Sun has launched a set of technologies collectively known as the JAX Pack. JAX Pack -- essentially a bundled set of Java technologies for XML -- consists of JAXP (XML Processing), JAXB (XML Binding), JAXM (XML Messaging), JAX-RPC (XML-based RPC), and JAXR (XML Registries). These technologies for XML are available as separate specifications, APIs, and reference implementations (some specs/implementations are generally available; others are works in progress). However, JAX Pack is also available as a combined set of all the JAX technologies in a single download. The standards/specifications representing the various Java technologies for XML have been developed collaboratively with the Java Community Process. The objective of this article is to walk through these APIs, review their functionality, and, using code examples, illustrate how they can be used within applications..." See other details in: "Sun Microsystems Releases Java XML Pack"

  • [January 22, 2001] "BSML: An Application of XML to Enhance Boundary-Scan Test Data Transportability." By David E. Rolince (Teradyne Inc.). Pages 83-91 (with 3 references) in IEEE 2001 Autotestcon Proceedings. IEEE Systems Readiness Technology Conference, Valley Forge, PA, USA. August 20-23. 2001. "The IEEE 1149.1 standard has successfully defined the rules of boundary scan design implementation. This has served as the catalyst for commercial companies to develop boundary scan test generation tools. Except for Serial Vector Format (SVF), an industry standard practice for expressing test vectors, each commercial offering uses proprietary formats for the data that is used to debug and diagnosis failures that are detected during test execution. This data tends to be in a form that is compatible only with a specific target test system. The introduction of XML (Extensible Markup Language) offers a solution for boundary scan test data that not only standardizes the expression of key information, but also enables this information to be retargeted easily through the use of XSL (Extensible Stylesheet Language) style sheets. The objective of this paper is to show how XML can be applied to boundary scan test data as a way to enable this data to be transported to applications and test systems that utilize this information... Expressing large amounts of highly interrelated data in XML can significantly enhance its utility and transportability. It can also be used as a means for defining data requirements for building databases as well as extracting relevant information from them. XML offers the advantage of expressing information and complex interrelationships among it in a human-readable, data-neutral format that is compatible with web browsers. More importantly it provides a means for expressing information in a non-proprietary format for use between different applications dependent on the same data..." [cache]

  • [January 22, 2001] "Leveraging Web and XML Technologies for Boundary-Scan Test Debug." By David E. Rolince (Teradyne Inc.). Presented at Etronix 2001, Anaheim Convention Center, Anaheim, CA, USA. "While the data produced by commercially available boundary scan test generation tools is not normally associated with web applications, it represents an excellent example of large amounts of highly interrelated data that can exploit XML and web browser technology. Most boundary scan test products provide automatic diagnostics that indicate the source of failures to a test operator during production testing. However, these diagnostics can be incorrect or misleading if invoked during initial test integration and program debug when the source of program failures is more likely to be unrelated to a physical defect on the unit under test (UUT). Nonetheless, the test engineer debugging the problem needs to refer to the boundary scan test database in the process of resolving it. By knowing the expected response of the test, the observed response of the test, and the state of other device leads, clues as to the source of the problem can be obtained. All the necessary data is normally available, but not in a form that is quickly and easily understood. By representing boundary scan data in XML and using a web browser to display the data, debugging failures encountered in testing electronic modules that incorporate boundary scan can be significantly simplified... Given the inherent flexibility and data-neutrality of XML, applying it to the large amount and diverse nature of boundary scan test would seem to help simplify the presentation of that data for initial test integration and debug. Since XML is a language for creating a markup language, the first step to harnessing XML is to develop an "instance document" which defines the XML schema for the specific information in your application. The instance document provides the rules that define the elements and structure of the new markup language. It will also serve as the guidelines for other developers to interface with the application. In our application example, the schema for boundary scan test data is organized in three instance documents with main elements <circuit> for topology data, <boundary_scan_data> for position data, and <SerialVectors> for pattern data. Under each main element are sub-elements that further define the information contained in each schema. In this way XML allows the interrelationships of these schema to be expressed so that collections of related data can be easily located and displayed in a web browser. This is possible because the XML tags in the elements and sub-elements are defined to indicate specific information types..." See also previous bibliographic item referenced. [cache]

  • [January 22, 2002] "A Roundup Of Editors. XML Matters #6." By David Mertz, Ph.D. (Archivist, Gnosis Software, Inc.). From IBM developerWorks, XML Zone. January 2001. ['In this column David Mertz gives an up-to-date review of a half-dozen leading XML editors. He compares the strengths, weaknesses and capabilities of each -- especially for handling text-heavy prose documents. The column addresses the very practical question of just how one goes about creating, modifying, and maintaining prose-oriented XML documents.'] "Working with marked-up prose Perhaps it is obvious enough that the very first requirement of any approach to working with XML documents is assurance that we are producing valid documents in the process. When we use a DTD (or an XML schema), we do so because we want documents to conform to its rules. Whatever tools are used must assure this validity as part of the creation and maintenance process. Most of the tools and techniques I discuss are also a serviceable means of working with more data-oriented XML documents, but the emphasis in this column is working with marked-up prose. A few main differences stand out between prose-oriented XML and data-oriented XML. Some XML dialects, moreover, fall somewhere between these categories, or outside of them altogether (MathML or vector graphic formats are neither prose nor data in the usual ways). Prose-oriented XML formats are generally designed to capture the features one expects on a printed page (and therefore in a word processor). While most such formats aim to capture semantic rather than typographic features (e.g., the concept 'foreign word' rather than the font style 'italic'), their connection to traditional written and read materials is close. On the other hand, data-oriented XML formats mirror more closely the contents of (relational) database formats; the contents can often be thought of as records/attributes (rows/columns), and one expects patterns of recurrent data fields. In prose-oriented XML dialects, one tends to encounter a great deal of mixed content: In data-oriented XML dialects, one tends to encounter little or no mixed content. That is, most 'data' is text that has character-level markup scattered through it... A good text editor for working with XML will have syntax highlighting that is generic for all XML dialects, and also probably the option of configuring something more specific for a given dialect. It will have flexible (maybe regular-expression based) search and replace capabilities. If the text editor can support folding (sometimes called 'code hiding'), that often proves to be of great benefit, especially in handling large documents. Obviously, being able to perform operations on blocks is useful, whether it is indent/outdent, cut/copy/paste, or templating. And probably your favorite text editor (if you are a programmer) is easy to configure to call external programs that take the current working file as input. With a powerful text editor in hand, a few guidelines make working with prose-oriented XML easier..." Article also in PDF format.

  • [January 22, 2002] "An XML Application for Genomic Data Interoperation." By Kei-Hoi Cheung, Yang Liu, Anuj Kumar, Michael Snyder, Mark Gerstein, and Perry Miller. Pages 97-103 (with 12 references) in Proceedings Second Annual IEEE International Symposium on Bioinformatics and Bioengineering (BIBE 2001). Bethesda, MD, USA, 4-6 November 2001. Los Alamitos, CA: IEEE Computer Society, 2001. "As the Extensible Markup Language (XML) becomes a popular or standard language for exchanging data over the Internet/Web, there are a growing number of genome Web sites that make their data available in XML format. Publishing genomic data in XML format alone would not be that useful if there is a lack of development of software applications that could take advantage of the XML technology to process these XML-formatted data. This paper illustrates the usefulness of XML in representing and interoperating genomic data between two different data sources (Snyder's laboratory at Yale and SGD at Stanford). In particular, we compare the locations of transposon insertions in the yeast DNA sequences that have been identified by BLAST searches with the chromosomal locations of the yeast open reading frames (ORFs) stored in SGD. Such a comparison allows us to characterize the transposon insertions by indicating whether they fall into any ORFs (which may potentially encode proteins that possess essential biological functions). To implement this XML-based interoperation, we used NCBIs 'blastall' (which gives an XML output option) and SGD's yeast nucleotide sequence dataset to establish a local blast server. Also, we converted the SGD's ORF location data file (which is available in tab-delimited formal) into an XML document based on the BIOML (BIOpolymer Markup Language) standard..." See: "BIOpolymer Markup Language (BIOML)."

  • [January 22, 2002] "A Metadata Framework for Interoperating Heterogeneous Genome Data Using XML." Pages 110-114 in Proceedings of the American Medical Informatics Association 2001 Annual Symposium, 2001. By Kei-Hoi Cheung, PhD, Yale Center for Medical Informatics Yale Center for Medical Informatics; K. Cheung, Yale University, New Haven, CT; A. Deshpande, Yale University, New Haven, CT; N. Tosches, Yale University, New Haven, CT; S. Nath, Yale University, New Haven, CT; A. Agrawal, Yale University, New Haven, CT; P. Miller, Yale University, New Haven, CT; A. Kumar, Yale University, New Haven, CT; M. Snyder, , Yale University, New Haven, CT. Abstract: "The rapid advances in the Human Genome Project and genomic technologies have produced massive amounts of data populated in a large number of network-accessible databases. These technological advances and the associated data can have a great impact on biomedicine and healthcare. To answer many of the biologically or medically important questions, researchers often need to integrate data from a number of independent but related genome databases. One common practice is to download data sets (text files) from various genome Web sites and process them by some local programs. One main problem with this approach is that these programs are written on a case-by-case basis because the data sets involved are heterogeneous in structure. To address this problem, we define metadata that maps these heterogeneously structured files into a common Extensible Markup Language (XML) structure to facilitate data interoperation. We illustrate this approach by interoperating two sets of essential yeast genes that are stored in two yeast genome databases (MIPS and YPD)..."

  • [January 22, 2002] "Utilizing Multiple Bioinformatics Information Sources: An XML Database Approach." By Raymond K. Wong, and William Shui (School of Computer Science and Engineering, New South Wales University, Sydney, NSW, Australia). Pages 73-80 (with 21 references) in Proceedings Second Annual IEEE International Symposium on Bioinformatics and Bioengineering (BIBE 2001). Bethesda, MD, USA, 4-6 November 2001. Los Alamitos, CA: IEEE Computer Society, 2001. "Biological databanks have proven useful to bioscience researchers, especially in the analysis of raw data. Computational tools for sequence identification, structural analysis, and visualization have been built to access these databanks. This paper describes a way to utilize these resources (both data and tools) by integrating different biological databanks into a unified XML framework. An interface to access the embedded bioinformatic tools for this common model is built by leveraging the query language of XML database management system. The proposed framework has been implemented with the emphasis of reusing the existing bioinformatic data and tools. This paper describes the overall architecture of this prototype and some design issues..." Related document by Raymond K. Wong, F. Lam., S. Graham, and William Shui: "An XML Repository for Molecular Sequence Data," presented at the IEEE International Symposium on Bio-Informatics and Biomedical Engineering, 2000.

  • [January 21, 2002] "Registration of xmlns Media Feature Tag." IETF Network Working Group, Internet-Draft. January 20, 2002, expires: July 21, 2002. By Simon St.Laurent (O'Reilly & Associates). Reference: draft-stlaurent-feature-xmlns-00.txt. Abstract: "This document registers a media feature tag 'xmlns', per RFC 2506, intended for use in a Content-features features header to indicate the XML namespaces used in an XML document. This information augments MIME content-type information, providing a finer granularity of content description for XML documents." From the Introduction: "MIME Content-Type identifiers have proven very useful as tools for describing homogeneous information. They do not fare as well at describing content which is unpredictably heterogeneous. XML documents may be homogeneous, but are also frequently heterogeneous. It is not difficult to create, for instance, an XHTML document which also contains RDF metadata, MathML equations, hypertext using XLink, and SVG graphics. XSLT stylesheets routinely include information in both the XSLT namespace and in the namespace of the format resulting from proper execution of the stylesheet. This document specifies a Media Feature which identifies the URIs used as XML namespaces in a given XML document. While a list of namespaces cannot tell a recipient application everything about the use of those namespaces and there interactions in a given document, it can provide a baseline understanding. A program may be better able to choose among a set of XSLT stylesheets if it knows the namespaces of the results they generate, or a renderer may take advantage of foreknowledge to begin setting up components before content actually arrives. Processors working with SOAP envelopes may find it useful to know what they will be facing inside the envelope. Applications faced with 'unknown' XML namespaces may want to attempt to download RDDL documents to collect information on how to process them. Applications may also choose to reject documents containing unknown namespaces. This feature is designed primarily to be used with the XML Media Types defined in RFC 3023. By providing additional information about the content of the document beyond its overall type, it provides XML applications with a more comprehensive view of information they may (or may not) wish to process, potentially avoiding wasted parsing and processing..." XML-DEV note from S.S.L. 2002-01-21: "While I'm announcing this publication on XML-DEV because it seems relevant to (and was in part inspired by) the RDDL and namespace conversation here over the last week, ietf-xml-mime@imc.org is probably the most appropriate list for discussion..." See also "Resource Directory Description Language (RDDL)." [cache]

  • [January 17, 2002] "XML: Plugging into 'Standard' Hybrids." By Renee Boucher Ferguson. In eWEEK Volume 19, Number 1 (January 07, 2002), pages 24-25. "It was supposed to be so simple. XML would enable companies to move beyond paper-, e-mail- and electronic data interchange-based commerce to the world of Internet transactions... Having such an open platform was supposed to provide a lower-cost way for developing applications that would be universally accessible to all of a company's business partners. Now, more than three years after XML's introduction, IT shops implementing industry-specific variants find themselves looking at multiyear, multimillion-dollar projects that leave two fundamental obstacles unchallenged: how to shift partners from trading through traditional means to trading with XML and how to interoperate with other industries. These vertical-industry XML flavors for many companies have created walls around their Internet trading software that require more code to be written and more expense incurred to make sure that some potential buyers or suppliers can take part in business-to- business e-commerce. What's needed now, in the view of IT managers, software vendors and analysts, is a horizontal XML blueprint of sorts to describe a syntax and vocabulary that vertical industries can use to interoperate with B2B trading software from other verticals. ebXML (electronic business XML) is being touted as one solution -- not just another XML variation but an architecture that provides a horizontal messaging framework. Other cross-industry standards in the works include UBL (Universal Business Language) and XSL (Extensible Stylesheet Language). However, until a universal standard or set of standards is agreed upon, vertical industries will continue to support individual XML standards that do not interoperate. Auto industry electronic trading hub Covisint LLC provides an example of why a horizontal XML standard is needed. The Southfield, Mich., company is endorsing ebXML, but a huge chunk of its supplier base -- companies in the chemicals and plastics industries -- are backing a different vertical-industry XML flavor called CIDX, the Chemical Industry Data Exchange standard... Covisint, for its part, is working with the Open Applications Group Inc. standards body to create an XML schema for the auto industry. There's no question Covisint will have considerable sway in the auto industry in driving ebXML. Jeffrey Cripps, director of industry relations for Covisint, plans to provide financial incentives to suppliers that adhere to the standard... Faced with the proliferation of these XML variants, standards bodies OASIS, or the Organization for the Advancement of Structured Information Standards, and the United Nations Centre for Trade Facilitation and Electronic Business sponsored development of ebXML as a cross-industry electronic glue. ebXML is a modular suite of specifications that provides a messaging and enveloping standard for companies to exchange e-business messages, communicate data in common terms, and define and register business processes. It encompasses five areas: Messaging, a TPP (Trading Partner Profile), a Registry/Repository, a Core Component and a Business Process component. The TPP is a way to represent electronically the parameters that describe a business in XML. The Registry/Repository houses a company's TPP and allows for XML-based queries on specific types of businesses. 
The Core Component draws from data elements that are within EDI -- name, address, dollar amount of transaction -- as well as XML standards to construct new data elements without having to define lower-level XML. The Business Process component allows users to define business scenarios between trading partners that are independent of the underlying technology. However, filling out the technical details of ebXML has not all been smooth sailing. The work on ebXML is split between two organizations that are suffering from political infighting, according to Jeff Eck, vice chair of the OASIS ebXML Implementation, Interoperability & Compliance group, as well as the global product manager for GE Global Exchange Services." Note the sidebar "Seeking ebXML Interoperability." See: (1) "Electronic Business XML Initiative (ebXML)"; (2) "Covisint Supports ebXML Message Specification and OAGIS Standards."

  • [January 17, 2002] "IBM Tools Help Create, Manage Web Services." By Darryl K. Taft. In eWEEK Volume 19, Number 2 (January 14, 2002), page 30. "IBM [has] announced new tools to help Web services developers and service providers create, host and manage Web services. The new tools -- IBM's Web Services Toolkit 3.0 (WSTK), Web Services Hosting Technology and Web Services Gateway -- are available for free, trial download from IBM's Web site at www.alphaworks.ibm.com... WSTK 3.0, the latest version of IBM's Web services tool kit that was initially released in July 2000, offers developers a runtime environment, along with examples to design and execute Web services, and an introduction to Web services development for those starting out. IBM said WSTK 3.0 consolidates Web services-related technologies from its various development and research labs. The functions of WSTK are based on specifications such as SOAP (Simple Object Access Protocol), WSDL (Web Services Description Language), WS-Inspection and UDDI (Universal Description, Discovery and Integration), and run on both Linux and Windows operating systems. New features and functions in WSTK 3.0 include common utility services, a connector for LotusScript applications to Web services, a WSDL document utility, enhanced Apache support and new UDDI for Java technology. The common utility services in the WSTK includes services such as metering, accounting, contracting, common data notification and identity services, the company said. SoapConnect for LotusScript enables LotusScript applications in Lotus Domino and Notes to integrate with Web services. The technology is an implementation of Version 1.1 of SOAP for LotusScript. The WSDLdoc utility parses WSDL documents and delivers HTML documentation describing the Web services. WSTK 3.0 also supports the third generation of the Apache open-source SOAP implementation and IBM's UDDI for Java Version 2 preview. Another component of the announcement, IBM's Web Services Hosting Technology, is a set of management tools that support Web services. IBM officials said this technology supports the provisioning and metering of Web services without requiring code changes, and enables service providers to develop an integrated billing model... The third component of [the] announcement, the Web Services Gateway, provides enhanced security for Web services across firewalls."

  • [January 17, 2002] "IBM Shows Its Commitment to Web Services With New Java Tools." By Yefim Natis and Massimo Pezzini. Gartner Note FT-15-2587. 14-January-2002. ['IBM has released an alpha version of Java-based tools for Web services. But enterprises should wait for the commercial versions before deploying these tools.'] "On 7-January-2002, IBM introduced a pre-release set of Java-based tools designed to enable enterprises to develop, integrate and manage Web services. The new tools can be downloaded for free in an alpha version; IBM plans to make most of them commercially available by 3Q02, either as independent products or as options for WebSphere Application Server or other WebSphere products. IBM also released an update to its Web Services Toolkit... As a vendor for mainstream enterprises and one committed to Java, IBM focuses primarily on introducing Web services to complement established computing models. For example, the Web Services Gateway, which wraps established applications as Web services, complements the already available or announced support for the SOAP family of standards (which includes WSDL and UDDI) in WebSphere Application Server, WebSphere MQ (formerly MQseries), DB2, CICS and other major products. Unlike Microsoft, the other leader in Web services, IBM see its major opportunities in providing Web service infrastructure, not in selling business services over the Web. For this reason, IBM also plans to offer a set of technologies to help true Web service providers, including: (1) Public and private UDDI registries; (2) The extended Web Services Toolkit for developers; (3) Web Services Hosting Technology for tracking and billing commercial Web service transactions. Much of the new technology relies on IBM protocols and application programming interfaces, e.g., for security, tracking and billing -- an approach that could result in an IBM 'flavor' for Web services. IBM is working with Microsoft and others on common standards for these and other essential features of the commercial Web service infrastructure, but complete agreement will likely not occur before 2004. Enterprises should expect follow-up announcements concerning new joint standards or agreements on interoperability between the leading versions of enterprise Web service specifications. By the time IBM releases its Web Services products commercially, some (perhaps most) of the new protocols will change to accommodate this process... This announcement further confirms IBM's commitment to the Web service model, and further strengthens the company's position as a co-leader with Microsoft in Web service technology..." See also the HTML version.

  • [January 16, 2002] "Working XML: Compiling XPaths. HC Kicks Off With a First Implementation of DFA Construction." By Benoît Marchal (Consultant, Pineapplesoft). From IBM developerWorks, XML Zone. January 2002. ['The Java-based Handler Compiler (HC) project for SAX parsing nears its alpha release. This month our columnist describes how he implements the DFA construction algorithm, giving the first concrete example of using the compiler to recognize XPath. Each month in the Working XML column, Benoît Marchal discusses the progress of his open-source projects for XML developers, from design decisions to coding challenges. The current project, HC (short for Handler Compiler), will take some drudgery out of event-based XML parsing by automatically generating the SAX ContentHandler for a list of XPaths.'] "Two columns ago I launched HC, the Handler Compiler, as a new project for this column. The goal of HC is to compile a proxy content handler that matches XPaths to specific methods (the application handler) in SAX parsing. I have found that in my own SAX programming I spend too much time on low-level, repetitive tasks such as state tracking. HC is my attempt to break free from those technicalities. If you have not read the last two columns, I encourage you to review them now as this month I implement the algorithms introduced in the previous columns... Most specifically, in the last column I reviewed algorithms to compile a so-called Deterministic Finite Automaton (DFA). The DFA is a popular algorithm to construct a state machine to recognize patterns. In the case of HC, the patterns are XPaths. Most of the development work for this column has been in implementing the DFA construction algorithm introduced last month. Before the actual algorithm, however, I first had to implement a few utility classes for message display..." See ananas.org, a companion web site for the IBM developerWorks "Working XML" column by Benoît Marchal.

  • [January 16, 2002] "Lightspeed Rescues Astoria." [Edited] by Luke Cavanagh. In The Bulletin: Seybold News and Views On Electronic Publishing Volume 7, Number 14 (January 16, 2002). "For an undisclosed amount of cash up front and royalties for the next two years, Lightspeed Interactive of Pleasanton, CA, has acquired the intellectual assets of Chrystal Software from Xerox. Lightspeed is now the sole source for Chrystal's products -- Astoria, Eclipse and Lingua--as well as Lightspeed's existing iEngine product line. The deal is good news to the more than 100 Chrystal customers who have been in limbo since Xerox pulled the plug on the company in October. The most important component of the acquisition is Astoria, an XML-aware content-management system... According to Pearsall, Lightspeed will continue to develop Astoria and Eclipse and provide on-going maintenance. Eclipse, though, will likely take a back seat, because Lightspeed offers its own Astoria-based delivery solution, iEngine. The complete package gives Lightspeed an end-to-end solution for XML-based content management and publishing to sell to new prospects... This is good news for both Chrystal customers and the market in general. Lightspeed has key developers that understand XML, and it clearly is making a commitment to improve the product, which otherwise would have languished. If Lightspeed delivers on its promises, the market retains a product that is well suited to certain types of XML-based reference applications and offers a distinct alternative to the Web-centric content-management systems that, for the most part, have much weaker XML functionality..." See the announcement: Lightspeed Completes Deal With Xerox Corporation to Acquire Chrystal Software Assets. Acquisition Positions Lightspeed as Major Enterprise Software Player."

  • [January 15, 2002] "Microsoft Ushers Office XP into Web Services World." By Ed Scannell. In InfoWorld (January 14, 2002). "Microsoft on Monday [2002-01-14] will execute one of its first attempts to harness the desktop as a strategic element of its Web services platform with the delivery of its Office XP Web Services Toolkit. The new toolkit gives developers and corporate users the ability to search out multiple Web services from across the Internet and integrate them within the toolkit's development environment. Developers will also be able to cobble together applications from those Web services within the Microsoft Office environment... While he would not discuss specific details, Fitzgerald said that eventually Office XP will play a significant role in making Web services available among clients through a peer to peer (P2P) implementation, most likely using Groove Networks' P2P environment... One enterprise that sees some merit in Microsoft's Office-based Web services strategy is General Motors. The company is thinking of using Office XP on the desktop to link to server-based data on sales and information on the status of orders. That information could be brought down directly into an Excel spreadsheet and integrated with data from other sources such as inventories and manufacturing. It would then be shared among sales people at a single location or across the company... Some of the capabilities in the Office XP Web Services Toolkit include using UDDI to search by keyword or business for Web services that can be imported directly into the Office XP environment. Developers and users can make sure a particular Web service they find is right for them by testing it on any XML-based service and a built-in test page. All source code generated in the Visual Basic for Applications (VBA) class is available to developers so they can see how an XML Web service is accessed using SOAP. Once developers have found an XML-based service they want to add to a specific solution, developers can then add it as a reference with one mouse click, Microsoft reports. All the methods associated with creating an XML Web service are available in VBA through proxy classes, which are created with standard VBA classes based on the 2.0 version of the SOAP toolkit..." See the announcement: "New Microsoft Office XP Tools Bring XML Web Services To Knowledge Workers. New Tools Bring Powerful .NET Experiences to the Desktop Through Integration With XML Web Services."

  • [January 15, 2002] "Microsoft Office XP Web Services Toolkit.". Microsoft MSDN. January 14, 2002. "The Microsoft Office XP Web Services Toolkit brings the power of XML Web services to Office XP by enabling developers to use the Universal Description, Discovery, and Integration (UDDI) Business Registry or the URL to a Web Services Description Language (WSDL) file to reference XML Web services in Office XP solutions directly from within the Visual Basic Editor. The Office XP Web Services Toolkit contains the Web Service References tool for Visual Basic for Applications plus a series of technical articles and samples describing how to use the tool."

  • [January 15, 2002] "Microsoft Smart Tag Enterprise Resource Kit." Microsoft MSDN. January 14, 2002. "The Smart Tag Enterprise Resource Toolkit provides a roadmap on how best to plan, architect, implement, and deploy robust and scalable smart tags within the enterprise. The toolkit includes: (1) A set of whitepapers on planning, implementing, and deploying enterprise smart tags (2) A robust sample, with complete source code, which illustrates how to structure and develop an enterprise smart tag (3) A set of tools that enable developers to more efficiently develop smart tag solutions..."

  • [January 12, 2002] "Tying the Application Knot." By Mario Apicella. In InfoWorld (January 10, 2002). "Web services promise to deliver an open, Web-based architecture to connect business processes, which could potentially turn upside down the way we think of, create, and use software. It's reasonable to predict that more granular applications, sized to solve discrete business problems, will replace today's monolithic, all-encompassing suites. Therefore, companies should be able to support their business by building a mosaic of best-of-breed Web services, choosing (and therefore paying for) only those that satisfy requirements. While the infrastructure for publishing and discovering Web services matures, b-to-b integration vendors such as Tibco and Vitria are realigning their applications, taking advantage of Web services to facilitate business process integration, which in time will entice competitors to do the same. Similarly, SAP, in addition to joining Hewlett-Packard, IBM, and Microsoft to provide UDDI (Universal Description, Discovery, and Integration) registry operator services, has promised to weave Web services capabilities into its applications... UDDI and WSDL (Web Services Description Language) grant companies the ability to find service providers on the Internet and to learn their modus operandi, but these standards lack the scope of defining business rules to orchestrate the overall Web services-based transaction. For example, a simple business process for online consumer sales could be to accept an order only after a satisfactory credit check. Similar rules need to be coded in such a way that the resulting transaction will correctly identify partners' roles and rules of engagement, regardless of the providers and services used to build it. WSCL (Web Services Conversation Language), an interesting set of specifications released in May by Hewlett-Packard, complements the service descriptions of WSDL with essential elements to describe the flow of a business process between partners. WSCL completes a UDDI registry for a company with business process information such as the document formats to exchange, the activities needed to carry on the transaction, and their sequence. However, even with the addition of WSCL, UDDI registries could fall short describing all the roles and interactions that make a cooperative e-business scenario. A competing set of specifications for e-business, called ebXML (E-Business XML), promises a more comprehensive approach to discovering services and partners and defining business scenarios involving multiple parties... New specifications, including WSCL (Web Services Conversation Language) and ebXML (e-business XML), take a more comprehensive approach to describing process flows..." See: "Web Services Conversation Language (WSCL)."

  • [January 12, 2002] "James Clark Awarded First XML Cup." In XML Files: The XML Magazine Issue 33 (January 2002). "Each year IDEAlliance (formerly GCA) hosts the premier XML meeting. The meeting began in the 1980's as a Markup conference. It evolved into the SGML conference in the early 1990's and in 1997 became the SGML/XML conference. With the announcement of XML as a W3C Recommendation in 1998, this conference made its final transition to become the XML conference. According to Marion Elledge, Executive Vice President of IDEAlliance, 'Our XML conference and exposition has been in existence since the XML standard was first announced at our annual event over five years ago. We thought it was time to recognize those who have helped to make XML an important standard. The XML Cup Award was created to honor talented individuals whose vision and contributions have made a lasting impact on XML technology.' James Clark, founder of the Thai Open Source Software Center, has been involved with SGML and XML for more than 10 years, both in contributing to standards and in creating open source software. 'I am truly honored to receive this award,' Clark said. 'And I look forward to the new challenges facing those of us working with XML and other related standards'. James was technical lead of the XML WG during the creation of the XML 1.0 Recommendation. He was editor of the XPath and XSLT Recommendations. He was the main author of the DSSSL (ISO 10179) standard. Clark has also been the author of several open source software applications. Currently, he is chair of the OASIS RELAX NG TC and editor of the RELAX NG specification..." See the photo and the news item from 2001-12-12: "James Clark First Recipient of the IDEAlliance XML Cup Award."

  • [January 12, 2002] "Transformational Interactions for P2P E-Commerce." By Harumi Kuno, Mike Lemon, and Alan Karp (Software Technology Laboratory, HP Laboratories Palo Alto, California). HP Reference: HPL-2001-143 (R.1). October 10, 2001. To be published in and presented at IEEE HICSS-35, Hawaii International Conference on System Services, January 2002. "We propose a facilitator service mechanism that can leverage 'reflected' XML-based specifications (borrowed from the web service domain) to direct and enable coordinated sequences of mes-sage exchanges (conversations) between services. We extend the specification of a message exchange with the ability to specify transformations to be applied to both inbound and outbound documents. We call these extended message exchanges transformational interactions. The facilitator service can use these transformational interactions to allow service developers to decouple internal and external interfaces. This means that services can be developed and treated as pools of methods that can be composed dynamically... We extended the Web Service Conversation Language (WSCL) to meet our need for a conversation definition language that includes document transformation specifications. WSCL is an XML-based specification that defines a service interface in terms of a list of interactions (keyed by document type), a list of transitions that describe legal interaction orderings. We extended our usage of WSCL to include mappings of the input and output document types to corresponding document transformations... WSCL addresses the problem of how to enable services from different enterprises to engage in flexible and autonomous, yet potentially quite complex, business interactions. It adopts an approach from the domain of software agents, modelling protocols for business interaction as conversation policies, but extends this approach to exploit the fact that service messages are XML-based business documents and can thus be mapped to XML document types. Each WSCL specification describes a single type of conversation from the perspective of a single participant. A service can participate in multiple types of conversations. Furthermore, a service can engage in multiple simultaneous instances of a given type of conversation. For example, a service that supports the 'secured album' conversation type [from Figure 1] expects a conversation to begin with the receipt of a LoginRQ or a RegRQ document. Once the service has received one of these documents, then the conversation can progress to either a 'logged in' state or a 'registered' state, depending on the type of message the service generates to return to the client. There are three elements to a WSCL specification: Document type descriptions specify the types (schemas) of XML documents that the service can accept and transmit in the course of a conversation; Interactions model the state of the conversation as the document exchanges between conversation participants; Transitions specify the ordering relationships between interactions... Our solution is unique in that we distinguish between the conversational protocols and service-specific interfaces. This allows us to provide an extremely lightweight solution relieving service developers from the burden of implementing conversation-handling logic. 
In addition, we also introduce transformational interactions that allow facilitator services to leverage document transformations and make possible the automated coordination of complex conversations between peer services that do not support compatible message document types. In the future, we plan to investigate more sophisticated uses of conversation policies. For example, we would like to provide a model for the explicit support of deciding conversation version compatibility. We would also like to explore how to support both nested conversations and multi-party conversations. Finally, we hope to address how to exploit document type relationships when manipulating message documents. For example, we would like to use subtype polymorphism to establish a relationship between a document type accepted as input by an interface specification and a corresponding document type in a conversation specification..." See: "Web Services Conversation Language (WSCL)." [cache]
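
    A toy rendering of the conversation model in Java may help fix the ideas: interactions become states, and transitions are keyed by the type of document exchanged. The LoginRQ/RegRQ document types come from the 'secured album' example quoted above; the state names and transition table are assumptions for illustration only.

        import java.util.HashMap;
        import java.util.Map;

        public class Conversation {
            private final Map transitions = new HashMap(); // "state/docType" -> next state
            private String state = "Start";

            void allow(String from, String docType, String to) {
                transitions.put(from + "/" + docType, to);
            }

            void receive(String docType) {
                String next = (String) transitions.get(state + "/" + docType);
                if (next == null)
                    throw new IllegalStateException(
                        docType + " is not a legal document type in state " + state);
                state = next;
            }

            public static void main(String[] args) {
                Conversation c = new Conversation();
                c.allow("Start", "LoginRQ", "LoggedIn");
                c.allow("Start", "RegRQ", "Registered");
                c.receive("LoginRQ");  // ok: Start -> LoggedIn
                System.out.println("conversation advanced to LoggedIn");
                c.receive("RegRQ");    // throws: not legal after login
            }
        }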

  • [January 11, 2002] "New Routes to XML Data Integration." By Richard Adhikari. In Application Development Trends Volume 9, Number 1 (January 2002), pages 41-44. ['Developers are finding new methods to replace unreliable screen scraping, costly combos of XML databases and coding. The emerging event model scheme and hierarchical views show promise at some IT sites.'] "As budgets are tightened and staff downsized, IT departments have to find new ways of leveraging XML's tagging schema to access data from disparate sources. Screen scraping, the traditional method, has provided unreliable results, and money and manpower constraints make it difficult to use the combination of XML databases and heavy-duty coding some of the larger software corporations require. New methods of leveraging XML's ability are emerging. One involves using an event model, where an event is an intersection of time, topic and location, to describe situations. Another involves new ways of extending the legacy green screen by intercepting the data stream. Yet another approach is to map meta data from disparate databases into XML to create a hierarchical view of data, and to then use advanced distributed query processing technology and software adapters to query data sources in their native format. A fourth approach is aimed at rich data environments; it lets users deal with both XML data and XML documents, and will soon include audio and video capabilities... PRAJA Inc., San Diego, sees business or IT situations in terms of an event model where an event is 'an intersection of time, topic and location,' said CEO Mark Hellinger. Events can have sub-events, as well as components that consist of objects and data, and are a higher level of abstraction than objects for representing data and interrelationships between various types of data or abstractions of that data. PRAJA's ExperienceWare platform generates XML schema to map data from external sources into its event model. Developers first identify the key management and measurement components that can be represented as part of an event. They then use ExperienceWare's XML schema to extract and transform data from disparate sources and load that data dynamically in real time as part of the ExperienceWare engine... Whichever approach developers select for XML data integration, the most important thing to remember is that business needs must drive technology. Gartner Inc. analyst Dale Vecchio said corporations should consider these issues when planning to use XML to access legacy data: (1) It is about the people, not the technology. Is there a culture of change and are people receptive to change? (2) Decide whether you want to use XML as a message description mechanism or a data description mechanism. (3) Figure out what data means. 'XML is an enabling technology that allows you to describe content, but you still need to agree on meaning,' Vecchio said. Does everyone in the organization agree on what a customer is, for example? (4) Figure out whether your approach is inside to outside, or outside to inside. If you are looking to extend legacy systems to new customers, that is inside to outside. If you are building a brand-new application and are looking for a way to get at legacy data as a source of new information for that application, that is outside to in. (5) Figure out how to solve ancillary issues such as data integrity and how to roll back data when there is a failure, for example..."

  • [January 11, 2002] "RDF Declarative Description (RDD): A Language for Metadata." By Chutiporn Anutariya, Vilas Wuwongse, Kiyoshi Akama, and Ekawit Nantajeewarawat. In Journal of Digital Information [ISSN: 1368-7506](January 2002). Metadata: Papers from the Dublin Core 2001 Conference. "RDF Declarative Description (RDD) is a metadata modeling language which extends RDF(S) expressiveness by provision of generic means for succinct and uniform representation of metadata, their relationships, rules and axioms. Through its expressive mechanism, RDD can directly represent all RDF-based languages such as OIL and DAML-family markup languages (e.g., DAML+OIL and DAML-S), and hence allows their intended meanings to be determined directly without employment of other formalisms. Therefore, RDD readily enables interchangeability, interoperability as well as integrability of metadata applications, developed independently by different communities and exploiting different schemas and languages. Moreover, RDD is also equipped with computation and query-processing mechanisms." Full article available in PDF format. See "Resource Description Framework (RDF)." [cache]

  • [January 11, 2002] "XML Speak. Companies Tout XML For Fed Market." By Joab Jackson. In Washington Technology (January 07, 2002), pages 18-20. "'Government agencies are adapting extensible markup language to smooth the flow of information everywhere from Congress to the Pentagon. Anybody can manipulate XML documents on any platform, on any language whatsoever,' said John Taylor, marketing director for U.S. operations of Software AG... Harford County chose an XML-based solution provided by Netherlands-based Seagull. For $100,000, the county extended its database into the Web at a fraction of the cost of a new system. The county's approach is just one of many uses for XML, which government agencies are adapting to smooth the flow of information everywhere from Congress to the Pentagon. Launched in 1996, this open standard is being touted as the lingua franca of the computer world. In February 2001, the U.S. CIO Council created a portal (www.xml.gov) that helps agencies use XML. In August, Congress, the Library of Congress and the Government Printing Office developed XML definitions to make it easier to track bills as they wind their way through Congress. Other XML projects are under way at the departments of Energy and Health and Human Services, and at the Patent and Trademark Office... Another company using XML to develop middleware is Vitria Technology Inc., Sunnyvale, Calif. In November, Vitria introduced its Value Chain Markup Language, an XML definition for purchase orders, invoices and other transactions that replace electronic date interchange specifications. The company is marketing this language to the Department of Defense's Collaborative Defense Department, an initiative to cut costs and improve response times by bringing commercial best practices to the department's logistics management. The new possibilities XML presents may go far beyond extending the life spans of legacy systems. A chief advantage of XML is that it provides the ability for developers to define their own tags that instruct applications how to interpret the data. The Defense Department developed an XML-based standard, called Sharable Content Object Reference Model, or Scorm, that allows e-learning content to be used across multiple end-user devices, The Army National Guard has used Scorm as the basis for its Distributive Training Technology Project, which will provide online, on-demand, multimedia e-learning materials to reserve members across the country in formats such as text and video.... Sebastian Holst, vice president of marketing for Artesia, said the key to the National Guard win was the upfront work the company already completed in XML. As a provider of enterprisewide digital management solutions, Artesia used XML to index media formats, ranging from sound files to video, banking that this open standard would be used widely in the future... With the National Guard contract, that approach paid off. 'We put in tens of millions of dollars of development for our digital asset management, and with very small incremental effort we had it speak Scorm,' Holst said... In November, e-learning software provider SkillSoft Corp., Nashua, N.H., won a cooperative research and development agreement with the Naval Air Warfare Center Training Systems Division to investigate ways of reformatting approximately 8,000 weeks worth of video training to a Web-based instruction. This project, called Red Knot, will encode the training material in XML using SkillSoft's tool set..." 
See also "Shareable Content Object Reference Model Initiative (SCORM)."

  • [January 11, 2002] "Structuring Biographical Data in EAD with the Nomen DTD." By Antonio M. Calvo. In OCLC Systems and Services Volume 17, Number 4 (2001), pages 187-199. ISSN: 1065-075X, Emerald / MCB University Press. Abstract: "Biographical data, including authorized name information, adds depth, richness and retrievability to bibliographic records and archival finding aids. The use of encoded archival description (EAD) has enabled the description of archival collections in fine detail. EAD allows for biographical information to be coded directly into finding aids in several ways. However the process is time consuming and may result in duplication of effort and inconsistency. This article presents the Nomen XML DTD for biographical data, and puts forth the idea that its use could simplify and enhance the encoding of biographical data in EAD. The Nomen DTD provides a record structure for encoding the authorized name, variant names and biographical details of a person or a group being associated with informational items as subjects or creators. The structure of the Nomen DTD is described in relation to the MARC21 name authority format followed by a discussion of how it may be used as a means to create an authority file for EAD biographical data encoding and linking." See: "Nomen Project for Enhanced MARC 21 Name Authority."
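For illustration only, a Nomen-style record pairing an authorized name with variant names and biographical detail might look like the fragment below; the element names here are invented for this sketch and are not drawn from the published Nomen DTD, which should be consulted for the real vocabulary.

    <nomen>
      <authorizedName>Twain, Mark, 1835-1910</authorizedName>
      <variantName>Clemens, Samuel Langhorne, 1835-1910</variantName>
      <biographicalNote>American novelist, essayist and humorist.</biographicalNote>
    </nomen>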

  • [January 11, 2002] "Working out the Bugs in XML Databases." By John Cox. In Network World Volume 19, Number 1 (January 07, 2002), page 24. The article summarizes the pros and cons of special XML repositories. ['As network executives begin to experiment with Web services, they're likely to find that they need a new kind of data store: the XML database. There's a growing belief that XML-based information needs its own database.'] "XML database software products are designed to efficiently store and manage the growing numbers of XML documents that users are creating, especially in Web interactions with business partners and customers. Advocates cite several advantages of XML databases compared with traditional databases: simplicity, ease of application development, ability to search and query XML documents, and fast document retrieval. There's no formal, standard definition of an XML database, although the XML:DB Initiative describes such a database as one that defines a logical model for an XML document (not for the data in the document), and manages documents based on that model. The key point is the database 'thinks and acts' based on XML - XML goes in, and XML comes out, even though these products can physically store the documents in an object or relational database or a proprietary storage model, such as indexed files. The lack of formal definition is just one issue that raises the hackles of critics. They also point to the immaturity of the products and of XML standards; the absence of a standard, reliable query language to match the SQL used in relational databases; and possible data integrity problems... Analysts expect these benefits to fuel a fast-growing market. IDC estimates enterprise spending for XML databases will grow by 130% annually, reaching $700 million in 2004. XML databases will complement relational databases, according to IDC analyst Anthony Picardi - the former being better suited for storing and processing XML documents, the latter for numbers and text. There are plenty of choices for network executives to evaluate, with at least two dozen native XML database products (see XML Database Products). The key vendors include Software AG and eXcelon - which stores documents in its ObjectStore object-oriented database. There are a host of smaller vendors, such as NeoCore, IXIA and XYZFind, working on XML database products. There are also a number of open source projects. One is Xindice, formerly dbXML Core, which now is being handled by The Apache Software Foundation..." See: "XML and Databases."

  • [January 11, 2002] "An Introduction to the XML:DB API." By Kimbro Staken. From XML.com. January 09, 2002. ['The growing number of native XML databases all have different programming interfaces. The XML:DB API is an open source project to provide a unified API for native XML databases.'] "In my last article, 'Introduction to dbXML', I provided an example that used the XML:DB API to access the dbXML server. This time around we'll take a more detailed look at the XML:DB API in order to get a better feel for what the API is about and how it can help you build applications for native XML databases (NXD). Currently, there are about 20 different native XML databases on the market. Among them are commercial products such as Tamino, X-Hive and eXcelon. And open source NXDs include dbXML (now renamed Apache Xindice), eXist, and Ozone/XML. While this selection is a nice thing to see in an emerging market, it makes developing applications quite a bit more difficult. Each NXD defines its own API, which prevents the development of software that will work with more than one NXD without coding for each specific server. If you've worked with relational databases, then you've likely worked with ODBC or JDBC to abstract away from proprietary relational database APIs. The goal of the XML:DB API is to bring similar functionality to native XML databases. The XML:DB API project was started a little over a year ago by the XML:DB Initiative and is currently still evolving. Most of the core framework is stable, and it has already been implemented by dbXML/Xindice and eXist. There's also a reference implementation in Java available, and there are several other implementations in progress, including some for commercial databases... There is much more to the XML:DB API than what's illustrated in this simple example and short article. But I have given you a better idea of what the API is and how it is used. If you want to find out more you should take a look at the XML:DB API site and the dbXML developers guide. The eXist documentation also contains some information about developing with the API. While there is still a lot of work to do on the XML:DB API, what is available today is already usable and provides a solid framework to build on. In fact, projects like Apache Xindice are using the XML:DB API as the primary Java API for accessing the server. Participating in API development is open to anyone who's interested; feel free to join the project mailing list and contribute to the development of the XML:DB API." See: "XML and Databases."

  • [January 11, 2002] "Web Services Acronyms, Demystified." By Pavel Kulchenko. From XML.com. January 09, 2002. ['The coauthor of Programming Web Services with SOAP presents a quick guide to the protocols and the specifications behind more than 20 acronyms related to Web services, from SOAP to XLANG, including a description of how they relate to each other and where each sits on the Web services landscape.'] "More than twenty acronyms related to Web services came to light during this year, and in this article I present a quick guide to the protocols and the specifications behind them, including a description of how they relate to each other and where each sits on the Web services landscape. Some of those acronyms are scrutinized in O'Reilly's recently published Programming Web Services with SOAP, which is a complete guide to using SOAP and other leading Web services standards, including WSDL and UDDI... The Web services architecture is implemented through the layering of several types of technologies. These technologies can be organized into the following four layers that build upon one another: [Discovery, Description, Packaging/Extensions, Transport]... Each layer of the Web services stack addresses a separate business problem, such as security, reliable messaging, transactions, routing, workflow and so on. Addressing the need for standardization in this field, several players have come up with a set of specifications that serve as the foundation for their own versions of a comprehensive Web services architecture... Let's wade through a bit of the acronym soup first to get a sense of the breadth and scope of the proposals floating around right now, and then we'll map those protocols to the architecture stacks being pushed by some of the companies... As you can see, the picture is quite complex. To make things even more confusing, some of these specifications define extensions for SOAP messages (WS-Routing, WS-Security, WS-License and SOAP-DSIG), some define packaging format (SWA, DIME), some define SOAP-based protocol (UDDI and WS-Referral) or XML-based protocol (USML), and others define an XML format for service description or orchestration (WSDL, WSEL, WSFL, WSUI and the rest)..." Principal resources for the article: [1] OASIS, IBM: Web Services Component Model (WSCM); [2] Sun: Open Net Environment (SUN ONE); [3] HP: Services Framework Specification; [4] Microsoft: Global XML Web Services Architecture; [5] UN/CEFACT, OASIS: ebXML.

  • [January 11, 2002] "From Excel to XML." By John Simpson. From XML.com. January 09, 2002. ['John Simpson's XML Q&A column this week focuses on the point where spreadsheets and XML meet, illustrating how you might go about moving a spreadsheet into an XML vocabulary, and also offers more follow-up detail on the topic of naming in XML.'] (1) How do I convert Microsoft Excel data to XML?... Converting a spreadsheet's data to XML is a specific form of the general question, "How do I convert tabular data to XML?" Depending on the spreadsheet in question -- and on the character of the desired XML output -- answering it can be extremely simple or mind-bendingly complex. Let's look at a simple example... (2) XML naming constraints revisited: In last month's column, I discussed why element and attribute names cannot begin with a digit or other (mostly non-alphabetic) character... Rick Jelliffe sent me a note listing some real reasons for the limitation ... One of these reasons should be so obvious to an application developer that I'm embarrassed not to have thought of it. Once you open the door to an element or attribute named, say, 30DaySpan, then you must also allow an element or attribute named simply 30. And then performing mathematical or Boolean operations would become an excruciating guessing game for a human, let alone an XML parser..."
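The simple end of that spectrum is easy to picture: a two-column sheet whose heading row reads Name and Phone reduces naturally to one element per data row, with the headings reused as tag names. The markup below is a hypothetical target vocabulary, not Simpson's own listing.

    <worksheet>
      <row><Name>Ada Lovelace</Name><Phone>555-0100</Phone></row>
      <row><Name>Alan Turing</Name><Phone>555-0199</Phone></row>
    </worksheet>

The heading-to-tag-name move is also exactly where the naming constraints from the second question bite: a column headed 30DaySpan cannot become an element name unchanged.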

  • [January 10, 2002] "Thinking XML: Once Again Round The Block. An Updated Survey of Semantic Transparency in XML." By Uche Ogbuji (Principal consultant, Fourthought, Inc.). From IBM developerWorks XML Zone. January 2002. ['Once again, this column takes a break to look at what's new and what has been neglected in the normal run of discussion. This time, Uche Ogbuji examines a couple of older XML schema systems for common business transactions that are overdue for a look (xCBL, cXML), as well as a new entry to the field (UBL), and some updates in the wide world of RDF.'] "Semantic transparency, shared business semantics, metadata, and knowledge management are all areas of flux as political and philosophical pressures reshape the highly competitive field. In this update, I'll look at XML Common Business Library (xCBL) and Commerce XML (cXML), a pair of technologies that I've previously neglected. I'll also look at Universal Business Language (UBL), a new entrant to the fray of business interchange formats. Finally, I'll look at some changes in the RDF family of specifications. ... Next month, we'll continue with our hands-on look at knowledge management techniques to enhance existing applications. We'll begin to discuss an RDF schema for defining the model of the issue tracker. Along the way, we'll look at some conceptual matters that are important to consider when designing schemata for knowledge systems." Article also in PDF format.

  • [January 10, 2002] "On Database Theory and XML." By Dan Suciu (University of Washington). In SIGMOD Record Volume 30, Number 3 (2001). 7 pages (with 64 references). "Over the years, the connection between database theory and database practice has weakened. We argue here that the new challenges posed by XML and its applications are strengthening this connection today. We illustrate three examples of theoretical problems arising from XML applications... [We describe] three XML research problems, inspired by our own work. XML's semistructured data model represents a paradigm shift for theoretical database research. It is not the first one: for example the object-oriented data model can also be considered a paradigm shift, which generated a vast amount of theoretical and applied research. This time, however, the shift comes from outside the community (XML was imposed on us) and this, at least, easily settles the question of applicability. It offers us a chance both to apply research on old topics (query containment) and to conduct research on new topics (typechecking)... Today the most promising approach to typechecking remains that based on type inference. The XDuce language defines a type inference system for a functional language with recursion; the XQuery algebra defines a type inference system using XML Schema as its type system. Since we know that this approach cannot be as robust as typechecking in general-purpose programming languages, a study of its applicability and limitations is needed. XML storage: XML data is a labeled tree; a relation is a table. The problem of storing XML data in one or several tables is a challenging one, both for theoreticians and practitioners. Since the tree is meant to describe some irregular structure while tables are by definition regular, we are attempting to store some irregular data into a regular data type. In addition to the pure combinatorial aspect, there is a logical aspect to the storage problem: given a storage mapping, one needs to be able to translate queries formulated over the XML data into relational queries formulated over the relational storage. The combination of combinatorics and logic makes the problem particularly appealing. Several approaches have been tried so far. The simplest is to store XML as a graph, in a ternary relation (two columns for the edges, the third for the labels and/or data values). This approach is explored by Florescu and Kossmann. The price one pays for its simplicity is that many self-joins of the edge table are required in order to reconstruct a given XML element: one join for each subelement. Shanmugasundaram et al. ["Relational databases for querying XML documents: limitations and opportunities"] use the DTD (or XML-Schema) to derive a relational schema. One table is created for each element type that can occur in a collection position. This technique works well in practice whenever one has a schema for the XML document. A subtle problem is that the resulting storage is very sensitive to that schema. For example if the content of <person> changes from (name, phone) to (name, phone*) then we need to move all phone numbers to a separate table, although perhaps the XML document has changed very little. The case when the XML document has no schema, or when the schema changes frequently is harder, and has a more dramatic impact on performance...
The challenge in any storage schema is that it has to be flexible enough to accommodate any XML data, yet it has to be as efficient as regular data storage when the XML data happens to be regular. Finding the largest regular subset in an irregular data instance is a problem which can be formulated and addressed theoretically..." See: "XML and Databases." [source]
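The edge-table approach is easiest to see on a tiny fragment. Shredding the document below into a ternary relation yields one row per edge; the integer node identifiers are arbitrary and introduced only for the illustration.

    <person>
      <name>Alice</name>
      <phone>555-0100</phone>
    </person>

    source   target   label
    ------   ------   ------
       0        1     person
       1        2     name
       1        3     phone

Reassembling the <person> element from this relation costs one self-join per subelement, which is precisely the overhead Suciu attributes to the graph-based scheme.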

  • [January 10, 2002] "XML with Data Values: Typechecking Revisited." By Noga Alon (Tel Aviv University), Tova Milo (Tel Aviv University), Frank Neven (Limburgs Universitair Centrum), Dan Suciu (University of Washington), and Victor Vianu (University of California, San Diego). Presented at PODS [Principles of Database Systems] 2001, Santa Barbara, California, USA. 12 pages (with 26 references). "We investigate the typechecking problem for XML queries: statically verifying that every answer to a query conforms to a given output DTD, for inputs satisfying a given input DTD. This problem had been studied by a subset of the authors in a simplified framework that captured the structure of XML documents but ignored data values. We revisit here the typechecking problem in the more realistic case when data values are present in documents and tested by queries. In this extended framework, typechecking quickly becomes undecidable. However, it remains decidable for large classes of queries and DTDs of practical interest. The main contribution of the present paper is to trace a fairly tight boundary of decidability for typechecking with data values. The complexity of typechecking in the decidable cases is also considered... The decidability results highlight subtle trade-offs between the query language and the output DTDs: decidability is shown for increasingly powerful output DTDs ranging from unordered and star-free to regular, coupled with increasingly restricted versions of the query language. Showing decidability is done in all cases by proving a bound on the size of counterexamples that need to be checked. The technical machinery required becomes quite intricate in the case of regular output DTDs and involves a combinatorial argument based on Ramsey's Theorem. For the decidable cases we also consider the complexity of typechecking and show several lower and upper bounds. The undecidability results show that specialization in output DTDs or recursion in queries render typechecking unfeasible. If output DTDs use specialization, typechecking becomes undecidable even under very stringent assumptions on the queries and DTDs. Similarly, if queries can use recursive path expressions, typechecking becomes undecidable even for very simple output DTDs without specialization. Several questions are left for future work. We showed decidability of typechecking for regular output DTDs and queries restricted to be projection free. It is open whether the latter restriction can be removed. With regard to complexity, closing the remaining gaps between lower and upper bounds remains open. Beyond the immediate focus on typechecking, we believe that the results of the paper provide considerable insight into XML query languages, DTD-like typing mechanisms for XML, and the subtle interplay between them." See "XML and Query Languages." [source]

  • [January 09, 2002] "XML and WebSphere Studio Application Developer. Part 1: Developing XML Schema." By Christina Lau (Senior Technical Staff Member, IBM Toronto Lab). In IBM WebSphere Developer Technical Journal (December 30, 2001). "IBM's WebSphere Studio Application Developer is a new application development product that supports the building of a large spectrum of applications using different technologies such as JSP, servlets, HTML, XML, Web services, databases, and EJBs. This is the first of a series of articles that will focus on the XML tools provided with Application Developer. This article covers the XML Schema Editor. It provides a bird's-eye view of the XML Schema Editor that is included in WebSphere Studio Application Developer. In future articles, we will cover more advanced topics such as: (1) Creating schemas from multiple documents; (2) Identity constraints; (3) Generating Java beans from XML Schema; (4) Generating XML documents from XML Schema; (5) How the wildcard works. The XML Schema Editor is a visual tool that supports the building of XML Schema that conforms to the XML Schema Recommendation Specification (May 2001)... The XML Schema Editor has three main views: Outline View, Design View, and Source View. You can use the Outline View to add, remove or rearrange components in your schema. When you select an object in the Outline View, the Design View will display the properties that are associated with that schema component object. You can use the Design View to enter values for the selected object. You can switch to the Source View to edit the schema source directly. The XML Schema Editor also uses the Task View from the workbench for error reporting... The XML Schema specification defines a large number of components such as schema, complexType, simpleType, group, annotation, include, import, element, and attribute, etc. To create a valid schema, you must understand the containment relationships between these components. For example, an annotation element can only appear as the first child of any element. The include, import or redefine elements must appear before any other children of the schema element. An attribute can only be added to a complex type, but not a simple type. A group can only be defined at the schema level, but can be referenced by a complex type, etc. The XML Schema Editor removes the burden of remembering all these details. You can use the Outline View to add schema components via the pop-up menu. The pop-up menu will only display the list of objects that are relevant for the selected object. It will also add the object at the correct location in the XML Schema..." For schema description and references, see "XML Schemas."
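Several of those containment rules are visible in even a minimal hand-written schema of the kind the editor manages (a sketch, not the editor's output; urn:example:other is a placeholder namespace): the import precedes all other children of xs:schema, the annotation comes first within its parent, and the attribute is declared on a complex type.

    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:import namespace="urn:example:other" schemaLocation="other.xsd"/>
      <xs:element name="price">
        <xs:complexType>
          <xs:annotation>
            <xs:documentation>An amount of money.</xs:documentation>
          </xs:annotation>
          <xs:simpleContent>
            <xs:extension base="xs:decimal">
              <xs:attribute name="currency" type="xs:string"/>
            </xs:extension>
          </xs:simpleContent>
        </xs:complexType>
      </xs:element>
    </xs:schema>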

  • [January 09, 2002] "Real-World XML Schema. Good Naming Conventions Extend Beyond Retail." By Paul Golick (Programmer, IBM) and Richard Mader (Executive Director, ARTS). From IBM developerWorks XML Zone. January 2002. ['This article presents a set of 17 broadly applicable practices for using XML. These practices were published by the Association for Retail Technology Standards to aid its development of standardized XML messages for exchange between information technology systems that support retail stores.'] "Does your industry provide a set of best practices for XML Schema to streamline industrywide data integration? If not, perhaps it should follow retail's lead. Since 1993, the Association for Retail Technology Standards (ARTS) of the National Retail Federation (NRF) has been developing a standard data model to help retailers integrate applications and interface point-of-sale (POS) data more easily. The International XML Retail Cooperative (IXRetail) is the ARTS committee that is standardizing XML messages for exchange between IT systems that support retail stores. IXRetail has adapted names and definitions from the ARTS Data Model standard for use in XML messages. IXRetail has also worked on standardizing other aspects of XML technology across the retail industry and among its vendors... XML provides the format for identifying information that applications need, but does not assure that the information needed by the recipient is provided. However, XML provides formatting structures that help obtain this assurance. The XML Schema language elaborates on XML and related specifications to provide a flexible way to describe a shared vocabulary of names that can be used to mark up XML documents. By using a shared schema, applications can use validating parsers to assure that appropriate information is sent or received. IXRetail has chosen XML because of the universal applicability of XML to structured document and data exchange on the Web... The goal of these guidelines is to assist development of standardized XML schemas. Fundamental features include choosing names for descriptive value and continuity with prior industry standards, using local naming to keep message sizes reasonable, and planning for change. We hope that you find some of our results applicable to your needs... This article was initially prepared for publication in two installments in NRF's STORES Magazine..." See: "ARTS IXRetail."

  • [January 07, 2002] "The XML FAQ. Frequently Asked Questions about the Extensible Markup Language." Version 2.1 (2002-01-01). Edited by Peter Flynn. Originally maintained on behalf of the World Wide Web Consortium's XML Special Interest Group. The first version was published 31-January-1997. "This is the list of Frequently-Asked Questions about the Extensible Markup Language. It is restricted to questions about XML: if you are seeking answers to questions about HTML, scripts, Java, databases, or penguins, you may find some pointers, but you should probably look elsewhere as well. It is intended as a first resource for users, developers, and the interested reader, and does not form part of the XML Specification..." Peter's note to XML-DEV: "Version 2.1 of the XML FAQ (January 2002) is now available at http://www.ucc.ie/xml/, served via Cocoon. Other formats available include PostScript and PDF (Letter and A4 formats), plaintext, static HTML, and MS-Word. Please report all errors and problems to me by email. By all means also post queries to the newsgroups and mailing lists, but please copy them to me also if you want them to be actioned..."

  • [January 07, 2002] "An Algorithm for RELAX NG Validation." By James Clark. January 07, 2002. Author's note to XML-DEV: 'I have written a paper describing one possible algorithm for implementing RELAX NG validation. This is the algorithm used by Jing, which I believe has also been adopted by MSV... If you try to use this to implement RELAX NG and something isn't clear, let me know and I'll try to improve the description.' From the introduction: "This document describes an algorithm for validating an XML document against a RELAX NG schema. This algorithm is based on the idea of what's called a derivative (sometimes called a residual). It is not the only possible algorithm for RELAX NG validation. This document does not describe any algorithms for transforming a RELAX NG schema into simplified form, nor for determining whether a RELAX NG schema is correct. We use Haskell to describe the algorithm. Do not worry if you don't know Haskell; we use only a tiny subset which should be easily understandable." Jing is a validator for RELAX NG implemented in Java; it represents an adaptation of the validator for TREX. Jing is written on top of SAX2. See: "RELAX NG."
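For orientation, the schemas such a validator consumes are themselves small XML pattern grammars. A derivative-based implementation of the kind Clark describes walks the instance document and rewrites a pattern like the one below (a stock address-book example, not taken from the paper) as each attribute and child element is matched; validation succeeds if the residual pattern left at the end can match the empty sequence.

    <element name="addressBook" xmlns="http://relaxng.org/ns/structure/1.0">
      <zeroOrMore>
        <element name="card">
          <attribute name="email"/>
          <element name="name"><text/></element>
        </element>
      </zeroOrMore>
    </element>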

  • [January 07, 2002] "Making XML Work in Business." By Alan Kotok. From XML.com. January 02, 2002. ['In this report from the XML 2001 conference, Alan Kotok describes where XML is really working inside businesses.'] "XML was developed to meet the real needs of real organizations, and its novelty and its promise have attracted plenty of attention from technical and business people. For XML to continue to thrive, however, it needs to deliver real value to companies and organizations, particularly in these tough economic times. Several of the sessions at IDEAlliance's recent XML 2001 conference showed how XML can deliver for businesses. But the discussions also suggested that the number of organizations able to take immediate advantage of XML is still quite small, and most businesses will probably not see benefits from XML until further down the road... Una Kearns of Documentum outlined the information management challenges faced by both public and private organizations. These organizations collect and generate massive amounts of information -- what we now call content -- such as catalogs, contracts, requests-for-proposals, product specifications, news items, marketing data, technical documentation, financial analyst reports. The content produced and collected in individual departments often ends up managed differently in various departments, which makes an overall organization-wide strategy difficult. Despite the difficulty of the task, organizations able to tame this beast can reap significant rewards. The most immediate savings come from reusing information across divisional boundaries. For example, capturing a company's product specifications in a way that can be reused directly by its marketing and service documents saves the marketing and service people hours and days of time recreating that information and reduces the potential for inconsistencies and errors... The idea of information reuse is hardly new, especially to participants from earlier XML and SGML conferences, but the relative simplicity of XML should make it more palatable to larger numbers of companies and organizations. The ability with XML to identify key variables and label them with common tags can make information in one department meaningful to other departments..."

  • [January 07, 2002] "Controlling Whitespace, Part Three." By Bob DuCharme. From XML.com. January 02, 2002. ['Bob DuCharme's XSLT column, Transforming XML, progresses to the third and final part of its look at controlling whitespace in XML. This month's edition shows how to add tabs and indentation to the output of an XSLT transformation.'] "In the first and second parts of this three-part series, we looked at techniques for stripping, processing, and adding whitespace when creating a result document from a source document. This month we'll see how to add tab characters to a result document, and how to automate the indenting of a result document according to the nesting of its elements... an indent value of 'yes' is useful if every element in your source document has either character data and no elements as content or elements and no character data as content; but it can lead to unpredictability if your source document has elements that mix child elements with character data..." For related resources, see "Extensible Stylesheet Language (XSL/XSLT)."
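Both techniques fit in a few lines of XSLT 1.0: automatic indentation is switched on in the stylesheet's output declaration, while a literal tab must be emitted explicitly, typically as the character reference &#9; inside xsl:text. The fragment below is a minimal sketch in the spirit of the column, not DuCharme's own code.

    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="xml" indent="yes"/>
      <xsl:template match="item">
        <entry>
          <xsl:text>&#9;</xsl:text>
          <xsl:value-of select="."/>
        </entry>
      </xsl:template>
    </xsl:stylesheet>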

  • [January 07, 2002] "XQuery Questioned." By Leigh Dodds. From XML.com. January 02, 2002. ['As 2001 drew to a close, the XML querying language XQuery became the new hot topic in the world of XML standards. December's debate in the developer community focused on the developing technology, and Leigh Dodds has the low down on the reaction from the XML-DEV list in his XML-Deviant column. He asks whether the XQuery specification should be refactored, and whether it should be released without specifying significant parts of the expected feature set.'] "Examining the discussion following the publication of several new Working Drafts, the XML-Deviant discovers that the plans of the XQuery Working Group are not meeting developer expectations... XML developers haven't been short on reading material over the holiday season with the publication of a sheaf of new W3C Working Drafts. The XQuery and XSL Working Groups were the most prolific. Updated versions of the XQuery use cases, the XQuery draft itself, its data model, and functions and operators have all been released. The latter two specifications are also applicable to XPath 2.0, whose initial draft has also just been published... Developers are invited to direct their comments to the www-xml-query-comments@w3.org mailing list. Significant lobbying from the community seems to be the only way to ensure that XQuery is delivered with anything like what is available in SQL..."

  • [January 07, 2002] "<taglines/> Anti-Awards 2001." By Edd Dumbill. From XML.com. January 02, 2002. ['An antidote to the flattery of industry awards ceremonies, the Anti-Awards take an irreverent look at the XML industry over the past year.'] "As expected, James Clark deservedly scooped up the 'XML Cup' for contributions to the XML industry at XML 2001. To redress the balance in favor of the usual cynical sniping, I'm happy to present the <taglines/> Anti-Awards for 2001, intended to burst some overinflated XML bubbles. The panel of judges had a tough time making the final decisions, having been overwhelmed by the trickle of nominations received during the holiday season. If you feel your company, project, or consortium has been unfairly omitted from the prize winners, please let us know and amends will be made in next year's awards..."

  • [January 04, 2002] "DP9 Service Provider for Web Crawlers." By Xiaoming Liu (Computer Science Department, Old Dominion University, Norfolk, Virginia, USA). In D-Lib Magazine Volume 7 Number 12 (December 2001). ISSN: 1082-9873. The Open Archives Initiative (OAI) team (K. Maly, M. Zubair, M. Nelson, X. Liu) of the Old Dominion University (ODU) Digital Library group has announced DP9 -- a new OAI service provider for web crawlers. "DP9 is an open source gateway service that allows general search engines (e.g., Google, Inktomi, etc.) to index OAI-compliant archives. DP9 does this by providing a persistent URL for repository records and converting this to an OAI query against the appropriate repository when the URL is requested. This allows search engines that do not support the OAI protocol to index the "deep web" contained within OAI-compliant repositories. Indexing OAI collections via an Internet search engine is difficult because web crawlers cannot access the full contents of an archive, are unaware of OAI, and cannot handle XML content very well. DP9 solves these problems by defining persistent URLs for all OAI records and dynamically creating a series of HTML pages according to a crawler's requests. DP9 provides an entry page, and if a web crawler finds this entry page, the crawler can follow the links on this page and index all records in an OAI data provider. DP9 also supports a simple name resolution service: given an OAI Identifier, it responds with an HTML page, a raw XML file, or forwards the request to the appropriate OAI data provider. DP9 consists of three main components: a URL wrapper, an OAI handler and an XSLT processor. The URL wrapper accepts the persistent URL and calls the internal JSP/Servlet applications. The OAI handler issues OAI requests on behalf of a web crawler. The XSLT processor transforms the XML content returned by the OAI archive to an HTML format suitable for a web crawler. XSLT allows DP9 to support any XML metadata format simply by adding an XSL file. DP9 is based on Tomcat/Xalan/Xtag technology from Apache... The DP9 code is available for installation by any interested OAI-compliant repository." See also "Repositories Open Up to Web Crawlers," by Scott Wilson [CETIS], November 28, 2001. OAI References: (1) "Open Archives Metadata Set (OAMS)" and (2) Open Archives Initiative web site.

  • [January 04, 2002] "Distributed Interoperable Metadata Registry." By Christophe Blanchi and Jason Petrone (Corporation for National Research Initiatives). In D-Lib Magazine Volume 7 Number 12 (December 2001). ISSN: 1082-9873. "Interoperability between digital libraries depends on effective sharing of metadata. Successful sharing of metadata requires common standards for metadata exchange. Previous efforts have focused on either defining a single metadata standard, such as Dublin Core, or building digital library middleware, such as Z39.50 or Stanford's Digital Library Interoperability Protocol. In this article, we propose a distributed architecture for managing metadata and metadata schema. Instead of normalizing all metadata and schema to a single format, we have focused on building a middleware framework that tolerates heterogeneity. By providing facilities for typing and dynamic conversion of metadata, our system permits continual introduction of new forms of metadata with minimal impact on compatibility... To describe each metadata schema we adopted Part 3 of the ISO 11179 standard. Part 3 of the standard organizes metadata elements into five general categories: identifying, definitional, relational, representational, and administrative. The specific set of attributes expressed in each of these categories provides a precise, unambiguous description of the nature, context, and conditions of use of each metadata element within a metadata schema. The complete set of metadata element descriptions for a given metadata schema represents that schema's definition. This description enables independent parties to acquire the same understanding of the nature, context, and condition of use of each field of the metadata schema. It is important to note at this point that although we use the ISO 11179 standard to describe our metadata schemas, the framework's mechanisms are not dependent on the standard to function. Indeed, another method for describing the metadata schemas could be used instead of, or in conjunction with, the standard as long as the resulting descriptions precisely and completely describe each metadata schema. To facilitate generation of metadata schema descriptions, we created a Document Type Definition (DTD) that specifies the various attributes for describing a metadata element and that encapsulates some of the rules described in Part 3 of the ISO 11179 standard. Using Extensible Markup Language (XML) simplifies the metadata schema description encoding process and provides an additional level of integrity checking. The use of XML enables the independent generation of accurately encoded metadata schema definitions..." See (1) the ISO 11179 section in the document "Registries and Repositories - XML/SGML Name Registration"; and (2) "XML Registry and Repository."
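An element description encoded against such a DTD would group its attributes under the five ISO 11179 categories, broadly as in the sketch below; the tag names are invented for this illustration and do not reproduce the authors' actual DTD.

    <metadataElement>
      <identifying><name>title</name><identifier>reg:0001</identifier></identifying>
      <definitional>A name given to the resource.</definitional>
      <relational>Corresponds to the Dublin Core title element.</relational>
      <representational><datatype>character string</datatype></representational>
      <administrative><status>recorded</status></administrative>
    </metadataElement>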

  • [January 04, 2002] "RQL: A Declarative Query Language for RDF." By Greg Karvounarakis, Vassilis Christophides, and Dimitris Plexousakis (Institute of Computer Science, FORTH; Heraklion, Greece). In D-Lib Magazine Volume 7 Number 12 (December 2001). ISSN: 1082-9873. "In the next evolution step of the Web, termed the Semantic Web, vast amounts of information resources (i.e., data, documents, programs) will be made available along with various kinds of descriptive information (i.e., metadata). This evolution opens new perspectives for Digital Libraries (DLs). Community Web Portals, E-Marketplaces, etc. can be viewed as the next generation of DLs in the Semantic Web era. Better knowledge about the meaning, usage, accessibility or quality of web resources will considerably facilitate automated processing of available Web content/services. The Resource Description Framework (RDF) enables the creation and exchange of resource metadata as any other Web data. To interpret metadata within or across user communities, RDF allows the definition of appropriate schema vocabularies (RDFS). The most distinctive feature of the RDF model is its ability to superimpose several descriptions for the same Web resources in a variety of application contexts (e.g., advertisements, recommendations, copyrights, content ratings, push channels, etc.), using different DL schemas (many of which are already expressed in RDF/RDFS; see examples). Yet, declarative languages for smoothly querying both RDF resource descriptions and related schemas are still missing... This ability is particularly useful for next generation DLs that require the management of voluminous RDF description bases, and can provide the foundation for semantic interoperability between DLs. For instance, in knowledge-intensive Web Portals, various information resources such as sites, articles, etc. are aggregated and classified under large hierarchies of thematic categories or topics. These descriptions are exploited by push channels aiming at personalizing Portal access (e.g., on a specific theme), using standards like the RDF Site Summary... Motivated by the above issues, we have designed RQL, a declarative query language for RDF descriptions and schemas. RQL is a typed language, following a functional approach (as in ODMG OQL or W3C XQuery). RQL relies on a formal graph model (as opposed to other triple-based RDF query languages) that captures the RDF modeling primitives and permits the interpretation of superimposed resource descriptions by means of one or more schemas..." See "Resource Description Framework (RDF)" and "RDF Site Summary (RSS)."
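The superimposed descriptions that motivate RQL are easiest to picture in RDF/XML, where vocabularies from independent schemas annotate one and the same resource; in the sketch below the rev: namespace stands in for some second community schema.

    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:dc="http://purl.org/dc/elements/1.1/"
             xmlns:rev="urn:example:reviews">
      <rdf:Description rdf:about="http://example.org/reports/42">
        <dc:title>Community Web Portals</dc:title>
        <rev:rating>4</rev:rating>
      </rdf:Description>
    </rdf:RDF>

An RQL query can then traverse both properties, and the schemas that define them, within a single expression.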

  • [January 04, 2002] "Doubt Cast Over Web Standard's Ownership." By Margaret Kane. In CNet News.com (January 3, 2002). "A Canadian company is claiming that a popular Web technology infringes on a patent it owns. The technology in question, Resource Description Framework, is based on Extensible Markup Language (XML) and allows programmers to write software to access Web resources, such as Web page content, music files and digital photos. The RDF standard has been endorsed by the World Wide Web Consortium, which evaluates and recommends standards for Web technologies. Vancouver-based UFIL Unified Data Technologies, a private company, claims that it owns U.S. patent 5,684,985, a "method and apparatus utilizing bond identifiers executed upon accessing of an endo-dynamic information node." The patent was awarded in November 1997. UFIL is working with Toronto-based Patent Enforcement and Royalties Ltd. (PEARL) to enforce the claims. According to press releases on PEARL's Web site, the companies believe as many as 45 companies may be infringing on the patents... The patent claims may also extend to the RDF Site Summary standard, a way to describe Web content that's written in something other than HTML. RSS lets Web sites exchange information about Web site content and e-commerce data, for instance. RSS was originally developed by Netscape Communications, now owned by AOL Time Warner. Netscape's Mozilla browser uses the technology, as do other programs. Daniel Weitzner, technology and society domain leader at the W3C, said the consortium has not been approached directly regarding the patent issue..." See "Resource Description Framework (RDF)" and "RDF Site Summary (RSS)."

  • [January 02, 2002] "What's New in VoiceXML 2.0." By Jim A. Larson. In VoiceXML Review Volume 1, Issue 11 (December 2001). "So what's new with VoiceXML 2.0? Plenty. What was a single language, VoiceXML 1.0, has been extended into several related markup languages, each providing a useful facility for developing web-based speech applications. These facilities are organized into the W3C Speech Interface Framework... VoiceXML 2.0 supports four I/O modes: speech recognition and DTMF as input, with synthesized speech and prerecorded speech as output. VoiceXML 2.0 supports system-directed speech dialogs where the system prompts the user for responses, makes sense of the input, and determines what to do next. VoiceXML 2.0 also supports mixed initiative speech dialogs. In addition, VoiceXML 2.0 supports task switching and the handling of events, such as recognition errors, incomplete information entered by the user, timeouts, barge-in, and developer-defined events. Barge-in allows users to speak while the browser is speaking. VoiceXML 2.0 is modeled after VoiceXML 1.0, designed by the VoiceXML Forum, whose founding members are AT&T, IBM, Lucent, and Motorola. VoiceXML 2.0 contains clarifications and minor enhancements to VoiceXML 1.0. VoiceXML 2.0 also contains a new <log> tag for use in debugging and application evaluation... The W3C Voice Browser Working Group has extended VoiceXML 1.0 to form VoiceXML 2.0 plus several new markup languages, including speech recognition grammar, semantic attachment, and speech synthesis. The speech recognition and speech synthesis markup languages were designed to be used in conjunction with VoiceXML 2.0, as well as with non-VoiceXML applications. The speech community is invited to review and comment on working drafts of these languages." See "VoiceXML Forum."
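A flavor of the new <log> element in context, sketched loosely after the general form of the working draft's examples rather than copied from it (the grammar URI is a placeholder):

    <vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
      <form id="flight">
        <field name="destination">
          <prompt>Which city are you flying to?</prompt>
          <grammar src="cities.grxml"/>
          <filled>
            <log>caller chose <value expr="destination"/></log>
          </filled>
        </field>
      </form>
    </vxml>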

  • [January 02, 2002] "VoiceXML 2.0 from the Inside." By Dr. Scott McGlashan. In VoiceXML Review Volume 1, Issue 11 (December 2001). "With the publication in October 2001 of VoiceXML 2.0 as a W3C Working Draft, VoiceXML is finally on its way to becoming a W3C standard. VoiceXML 2.0 is based on VoiceXML 1.0, which was submitted to the W3C Voice Browser Working Group by the VoiceXML Forum in May 2000. In this article, we examine some of the key changes in the first public working draft of VoiceXML 2.0 as compared to the VoiceXML 1.0 specification... Since the founding of the Voice Browser Working Group in March 1999, the group had the mission of developing a suite of standards related to speech and dialog. These standards formed the W3C Speech Interface Framework and cover markup languages for speech synthesis, speech recognition, natural language and dialog, amongst others. Since the VoiceXML Forum had made clear its intention to develop VoiceXML 1.0 and submit it to the Voice Browser Working Group, the dialog team focused its efforts on specifying requirements for a W3C dialog markup language and providing detailed technical feedback to the Forum as VoiceXML 1.0 evolved. With the submission of VoiceXML 1.0, the dialog team began in earnest the work of developing VoiceXML into a dialog markup language for the Speech Interface Framework. A change request process was established in order to manage requests for changes in VoiceXML 2.0 from members of the Working Group and other interested parties; changes could include editorial fixes, clarifications, and functional enhancements, all the way up to complete redesign of the language. Rather than try to incorporate every possible change into VoiceXML 2.0, we decided to limit the scope of changes..." See "VoiceXML Forum."

  • [January 02, 2002] "First Words: So What's New?" By Rob Marchand. In VoiceXML Review Volume 1, Issue 11 (December 2001). ['This month's column touches on some of the things that you can look for in VoiceXML 2.0, and how it impacts some of the VoiceXML tricks and tips he's introduced throughout the year.'] "The VoiceXML Forum founders (AT&T, Motorola, IBM, and Lucent) prepared the original VoiceXML 1.0 Specification. It was then passed over to the W3C Voice Browser Working Group to be evolved into VoiceXML 2.0. It was released as a public working draft on October 23rd of this year, with public comments being accepted until November 23rd. The process moving forward will include (possibly) additional working drafts, followed by a 'Last Call' working draft. Finally, a 'candidate recommendation' will be made available for final comment, followed by the formalization of VoiceXML 2.0 as a W3C Recommendation. There is still substantial work to go through in moving VoiceXML 2.0 through the W3C process, but the specification itself should now include most substantive changes and features that will be considered for the 2.0 recommendation. The current working draft of VoiceXML 2.0 improves on the VoiceXML 1.0 specification in a number of ways. If you're developing on any of the publicly available developer systems, you probably already have access to these features, or at least some of them..." See "VoiceXML Forum."

  • [January 03, 2002] "SXML as a higher-order markup language and a tool for literate programming." By Oleg Kiselyov. "S-expressions, DOM trees and syntax-heavy XML documents are three different realizations of a hierarchy of containers made of strings and other containers (Infoset). Unlike DOM trees, S-expressions and XML documents both have an external representation. SXML is an S-expression-based, parsed, abstract syntax tree representation of an XML document; as such SXML is concise, expressive and more suitable for queries and transformations than the raw XML... SXML is also suitable for literate XML programming -- design of a markup format. A literate design document should permit a transformation into a well laid-out, easy-to-read hyperlinked user manual. A literate design document should be easy to write. And yet the user manual should be precise enough to allow automatic extraction of a formal specification. SXML fulfills all these roles. SXML is similar to TeX, but far easier to write and read. SXML transformations do the job of 'weaving' a document type specification and of 'typesetting' the user manual." See also: XML and Scheme. References: see (1) the recent news item "xmLP: A New Literate Programming Tool for XML" and (2) "SGML/XML and Literate Programming."

  • [January 02, 2002] "Attributes Versus Elements: The Never-ending Choice." By Sean McGrath (CTO, Propylon). In XML In Practice [ITWorld Newsletter] (December 13, 2001). "Few topics re-occur more frequently, wherever XML developers congregate, than the attributes versus elements debate. The more experience you have of developing XML systems, the murkier the waters surrounding this question. The innocent-sounding question can, and does, spark off debates that touch everything from pragmatism to epistemology to mereology and back again... Most developers start out by thinking that having both elements and attributes is useful and, furthermore, that situations best suited to one or the other are sort of, well, obvious. This is about the time that 'rules of thumb' force their way into your head, such as 'if it appears on the printed page, then it should be an element, otherwise an attribute'; or, 'if it has a fixed number of atomic values, then use an attribute, otherwise use an element' and so on... As you get more familiar with XML, the distinction between an element and an attribute becomes more slippery. Attributes cannot contain markup and are thus guaranteed to be atomic; whether this is good or bad depends on your point of view. Elements are flexible and hierarchical and can have zero or more textual elements in them. Again, either good or bad depending on how you look at it... Somewhere along the line, it occurs to you that attributes and elements are often interchangeable... For a while perhaps, you start using elements exclusively and only hoisting content up into attributes if it is required by some specific program or process. You develop a taste for modern schema languages that blur the distinction between elements and attributes almost to the point of disappearance. For example, the RELAX NG schema language allows elements and attributes to be used practically interchangeably because the syntax of its constraint expressions stays largely the same either way..." [XML In Practice: 'This guide explores how to write well-formed XML documents, model business requirements using XML, how to integrate XML with existing applications and more. It will benefit IT professionals who are involved in designing and deploying B2B solutions.'] References: "SGML/XML: Using Elements and Attributes."
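The interchangeability is plain when the same fact is written both ways; the attribute form is guaranteed atomic, while the element form costs more bytes but is free to acquire internal markup later.

    <!-- as an attribute -->
    <person born="1912-06-23"/>

    <!-- as an element -->
    <person>
      <born calendar="Gregorian">1912-06-23</born>
    </person>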

  • [January 02, 2002] "The Evolution of E-Business: A Look at Web Services. [Standards.]" By Hans Hartman. In Seybold Report: Analyzing Publishing Technology [ISSN: 1533-9211] Volume 1, Number 19 (January 07, 2002). ['HTTP can carry messages across the Net, and XML can define complex semantics for those messages. From those technologies has grown a platform-independent way of finding and describing commercial services. So far, "Web services" has been mainly a buzzword, but we think real utility is almost at hand. We describe the key ingredients (WSDL and SOAP, mainly) and some possible applications.'] "Web services are going to be the Next Big Thing for the Internet. At least, that is what companies such as Microsoft, IBM, Sun, HP, BEA, Bowstreet and Adobe tell us. We're not quite there yet, of course. In fact, much of the technology to create Web services easily is still being developed, and vendors still show examples or prototypes more often than deployed services. Furthermore, it is often unclear what companies mean when they claim to support Web services. In this article, we'll attempt to clarify the concept of Web services and remove some of the confusion. As you may already suspect, by 'Web services' we don't mean pages or transactions that are processed by a Web server. Rather, Web services are software components that interact with one another via the Internet, using standard Web protocols to transfer information between disparate systems... Web services' main thrust is that software components should be able to 'talk to' each other easily. This differentiates them from large client-server or Web applications, which can typically only be bridged to other applications through custom integration processes. With Web services, it is no longer necessary to program for custom APIs (application programmer interfaces) or use complex technologies such as CORBA (Common Object Request Broker Architecture) or DCOM (Distributed Component Object Model)... We fully agree that integrated, smooth-running environments are what printers and publishers want. The question (which is still open) is whether the Web services approach makes them cheaper to build, or offers application developers more choices than traditional application-development methods. The potential clearly seems to be there... Much will depend on whether the vendors keep their promises to adhere to common standards for Web services. Unfortunately, we have to be suspicious. We've seen many examples in the Internet era where vendors have said they were adopting a standard, then added proprietary functionality that destroyed any hope for universal deployment..."

  • [January 02, 2002] "Corel Outlines 2002 Strategy. New Executive Team Plans Major Moves Into Enterprise, Services." By Mark Walter. In Seybold Report: Analyzing Publishing Technology [ISSN: 1533-9211] Volume 1, Number 19 (January 07, 2002). "... Corel company president Derek Burney has begun to implement the long-range strategic plan first articulated in January 2000. Last month Burney and chief financial officer John Blaine hosted a teleconference with Wall Street analysts to outline the company's plans and forecasts for 2002. In the call, Burney outlined his plan to make Corel a more diverse software powerhouse, with products across six market segments. Burney also said that Corel expects to lose money in 2002 as it invests in building its XML solutions and technical illustration businesses based on the acquisition of SoftQuad and Micrografx... The new strategy calls for three divisions serving six specific segments of the software market: consumer graphics, professional graphic arts, business/government/legal, technical illustration, cross-media publishing and enterprise process management... Corel's return to composition software is a bold move. It will take resourcefulness and commitment to pull it off, but fresh competition for Adobe and Quark products will be welcome... Corel faces significant hurdles to building a 'solutions' business from discrete stand-alone products, especially without the database applications that form the core of collaborative systems. For example, Burney cited newspapers as a potential target for Corel cross-media products. We think that's an unlikely segment. In the past 10 years, SoftQuad has never established a presence in the newspaper market. Indeed, quite a few newspaper vendors are going ahead and building their own XML editors, rather than OEM one from SoftQuad or Arbortext. The new version of XMetaL, while innovative in its support for XML Schema and improved with the additions of a forms editor and change tracking, does not offer new features of particular interest to newspapers. We think it more likely that XMetaL 3 will win support in other sectors, such as financial services, product data catalogs and medical information, where support for XML Schema will make more of a difference..."

  • [January 02, 2002] "High-End Page Layout: Quark XPress 5 vs. Adobe InDesign 2." By John Parsons. In Seybold Report: Analyzing Publishing Technology [ISSN: 1533-9211] Volume 1, Number 19 (January 07, 2002). ['Desktop publishing has long been a mature market, yet there are still a few points of distinction between the next versions of Quark XPress and Adobe InDesign. For this article, we examined the beta versions of both. We found that the print-oriented features are roughly at parity, though there are differences in table creation and transparency support. More interesting are the divergences in cross-media production approaches: handling hyperlinks, PDF output, HTML generation, and XML import and export. Though better than before, both programs still fall short here.'] "... Both products tout their XML capabilities for high-end cross-media production. InDesign directly supports importing XML or tagging elements within a document for subsequent export. XPress 5 bundles Avenue.quark, a new XML-import capability that was previously available only in beta. XPress maintains at least two separate, dynamically linked files: one or more print or Web documents and an XML file. Quark's 'Sequence' palette allows the user to create logical groupings of story elements prior to XML export. A major difference between the two approaches is the use of DTDs. XPress 5 requires a DTD and uses it to create valid and well-formed XML. InDesign, on the other hand, does not use DTDs at all. According to Adobe, it always creates well-formed XML, which can subsequently, and independently, be validated. Both products performed as advertised, although we were not able to fully test either program's XML import. Creating, applying and modifying tags were relatively straightforward, especially in InDesign. Both applications allowed tag-to-style and style-to-tag mapping. It quickly became evident, however, that an XML workflow was not something to be undertaken lightly, and that considerable advance planning is a prerequisite... Both Quark XPress 5 and Adobe InDesign 2 are substantial upgrades that fill major gaps in page design and production workflows. Current users, especially those who need to create tables and more robust PDF files, will probably benefit from an upgrade... In our opinion, InDesign's feature set now edges out that of XPress. In some cases, such as transparency support, there is simply no comparison. In others, the differences are subtler and probably not sufficient to be decisive for either side. Adobe will continue to make inroads among system vendors and in some vertical markets, but it will not be easy to increase market share of its stand-alone product in a mature market, dominated by a very capable product: Quark XPress. Cross-media production will be the arena for the next round, and the contenders are just getting started."

