The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: April 28, 2003
XML Articles and Papers. April - June 2002.

XML General Articles and Papers: Surveys, Overviews, Presentations, Introductions, Announcements

References to general and technical publications on XML/XSL/XLink are also available in several other collections.

The following list of articles and papers on XML represents a mixed collection of references: articles in professional journals, slide sets from presentations, press releases, articles in trade magazines, Usenet News postings, etc. Some are from experts and some are not; some are refereed and others are not; some are semi-technical and others are popular; some contain errors and others don't. Discretion is strongly advised. The articles are listed in approximately reverse chronological order of appearance. Publications covering specific XML applications may be referenced in the dedicated sections rather than in the following listing.

June 2002

  • [June 28, 2002] "DSDL Examined." By Leigh Dodds. From XML.com. June 26, 2002. ['In his final column Leigh looks at DSDL, the ISO activity to standardise XML document validation.'] "The core of DSDL will be the Interoperability Framework (Part 1): the glue that binds together the other modules. This week Eric van der Vlist, who is the appointed editor of this section, and Rick Jelliffe have separately produced proposals that aim to explore these kind of framework structures in more detail. The two proposals, neither of which have any formal standing, take very different approaches to the same problem. Van der Vlist's XML Validation Interoperability Framework (XVIF) takes the approach of embedding validation and transformation pipelines within another vocabulary. The specification and online demonstrator both show how this could be achieved by embedding the pipelines within a schema language, but in principle the XVIF is language-neutral so could be embedded within an XSLT transformation for example. XVIF elements just rely on their container to provide the context node on which they will interact. The embedded pipelines may generate other nodes or a simple boolean validation flag. Van der Vlist has produced a prototype that supports using pipelines containing XPath expressions, XSLT transformations, and manipulating content with simple regular expressions, or using Regular Fragmentations. In contrast, Rick Jelliffe's proposal, 'Schemachine' is closer to other pipeline frameworks such as XPipe and Cocoon in that the pipelines are defined by a separate vocabulary. In fact Jelliffe notes that the proposal borrows a lot from XPipe and Schematron in that it has a number of similar elements and structures, e.g., phases. Schemachine divides pipeline elements up into particular roles such as Selectors (e.g., XPath expressions), Tokenizers (e.g., Regular Fragmentations) and Validators (e.g., RELAX NG, Schematron). Jelliffe differentiated XVIF and Schemachine as 'innies and outies'. Technology aside, the important aspect of these proposals is the intent: publicly exploring strawman proposals and implementations to gather feedback before considering standardization. That's a path which seems not only likely to produce viable results, but may actually deliver useful tools that others benefit from in the shorter term..." On DSDL, see: "Document Schema Definition Language (DSDL)." For schema description and references, see "XML Schemas."

  • [June 28, 2002] "Cataloging XML Vocabularies." By Eric van der Vlist. From XML.com. June 26, 2002. ['Eric's been thinking how to solve the problem of choosing which XML vocabulary to use. The solution involves crawling the web for XML namespaces, doing which has revealed interesting statistics on namespace use.'] "I've been involved recently in many discussions and projects oriented around a simple and common question: 'how do I create an XML vocabulary?' The formulation was often different -- 'how do I create a namespace?' or 'how do I publish an XML schema?' -- but the central issue was always about what infrastructure to create and which methods should be used to advertise the newly created vocabulary. Choosing a XML vocabulary today is a very challenging task, quite similar to finding a web page before the development of the big search engines. The main difference is that it can be much more harmful to choose a 'wrong' vocabulary than a 'wrong' page. When I need to choose a XML vocabulary, I want first to have a comprehensive list of vocabularies which could meet my needs. Ideally, I would be using a search engine like Google or AltaVista, but unfortunately, there is no specialized search engine for XML vocabularies. To choose between those candidates, I need as much information as possible and a directory such as DMoz or Yahoo would be of great value. Unfortunately, there are lots of 'schema repositories' covering vocabularies developed by a number of disjoint communities, and this really doesn't help in comparing those vocabularies. Furthermore, these repositories often publish the descriptions provided by the authors, which usually lacks the critical touch brought by the DMoz or Yahoo editors. Finally, I find it very difficult to judge the dynamics of a vocabulary: to distinguish between a two years old specification abandoned by its authors whose usage is slowly declining, and a brand new one with a sharply rising market adoption. Statistics such as those provided by the Netcraft surveys would be invaluable for this purpose. If the bad news is that none of the tools I have mentioned are available, the good news is that most, if not all, the information I need is available somewhere on the Web. In what follows I describe a solution to retrieve and present this information..."

  • [June 28, 2002] "Enforcing Association Cardinality." By Will Provost. From XML.com. June 26, 2002. ['Our main feature this week is the first in a new, ongoing, series focusing on W3C XML Schema, called "XML Schema Clinic." Will Provost will be examining issues in schema design and XML data modeling. In this first installment, Will discusses using W3C XML Schema to control the cardinality of associations between elements in a document type.'] "If you're like me, XML document design brings out your darker side. We like a schema whose heart is stone -- a schema that's just itching to call a candidate document on the carpet for the slightest nonconformity. None of this any-element-will-do namespace validation for us. We enjoy the dirty work: schemas, we think, are best built to be aggressive in ferreting out the little mistakes in the information set that could later confuse the more sheltered and constructive-minded XML application. In this article, we'll practice a bit of that merciless science. Specifically, we'll look at ways to control the cardinality of associations between XML elements. The basic implementation, which we'll review momentarily, of an association between two types is simple enough but is only sufficient for many-to-one relationships. What if multiple references, or zero references, to an item are unacceptable? What if a referenced item may or may not be present? These variations will require other techniques, and these are essential for the truly draconian schema author. This article will use a simple UML notation to illustrate patterns and examples. Knowledge of both XML Schema 1.0 (Part 1 in particular) and UML is assumed, although in developing our notation we'll have a chance to review a little of both... The Unified Modeling Language (UML) provides a basis for a simple notation which will serve our needs in identifying rudimentary design patterns and in illustrating specific examples. Note that many UML-to-Schema mappings are possible; see the 'XMI Production for XML Schema' specification from the OMG for one much more formal option..." See: (1) "XML Schemas"; (2) "XML Metadata Interchange (XMI)."

  • [June 28, 2002] "Variables and Paths." By John E. Simpson. From XML.com. June 26, 2002. ['John Simpson returns with our monthly helping of XML Q&A. In an intruigingly mixed bag of problems John tackles the XSLT chestnut "Can I change the value of a variable?" as well as dealing with stock quotes and generation of valid document trees from DTDs.'] [1] 'Can I change the value of an XSLT variable?' - "Of all the hurdles facing people who're learning XSLT, in my opinion the biggest one is the word "variable" when used in this context. That's because XSLT variables do not vary. Rather, the word signifies that a value is not necessarily known until runtime: it may vary every time a stylesheet is consumed by an XSLT processor in order to perform a transformation. But, once set in a given transformation, the variable's value remains unchanged..." [2] 'How do I extract all possible document paths using just a DTD?' [3] 'Where can I get XML-based currency quotes?'

  • [June 28, 2002] "JXTA v1.0 Protocols Specification." Edited by Michael J. Duigou (Project JXTA) for the JXTA Specification Project. IETF Internet-Draft. June 21, 2002; expires: December 20, 2002. Reference: 'draft-duigou-jxta-protocols-00'. 95 pages. "The JXTA protocols defines a suite of six XML-based protocols that standardize the manner in which peers self-organize into peergroups, publish and discover peer resources, communicate, and monitor each other. The Endpoint Routing Protocol (ERP) is the protocol by which a peer can discover a route (sequence of hops) to send a message to another peer potentially traversing firewalls and NATs. The Rendezvous Protocol (RVP) is used for propagating a message within a peergroup. The Peer Resolver Protocol (PRP) is the protocol used to send a generic query to one or more peers, and receive a response (or multiple responses) to the query. The Peer Discovery Protocol (PDP) is used to publish and discover resource advertisements. The Peer Information Protocol (PIP) is the protocol by a which a peer may obtain status information about another peers. The Pipe Binding Protocol (PBP) is the protocol by which a peer can establish a virtual communication channel or pipe between one or more peers. The JXTA protocols permit the establishment a virtual network overlay on top of physical networks allowing peers to directly interact and organize independently of their network location and connectivity. The JXTA protocols have been designed to be easily implemented on unidirectional links and asymmetric transports..." [cache]

  • [June 28, 2002] "SOAP Version 1.2 Email Binding." By Highland Mary Mountain (Intel), Jacek Kopecky (Systinet), Stuart Williams (HP), Glen Daniels )Macromedia), and Noah Mendelsohn (IBM). W3C Note 26-June-2002. Version URL: http://www.w3.org/TR/2002/NOTE-soap12-email-20020626. Latest version URL: http://www.w3.org/TR/soap12-email. The document has been published as a "result of the Transport Binding Task Force (TBTF), which is part of the XML Protocol WG. The document is meant to be an illustration of the SOAP 1.2 Protocol Binding Framework applied to a well known Internet transport mechanism, Email, specifically RFC2822... The motivation for this document is to illustrate the SOAP 1.2 Protocol Binding Framework and the creation of an alternative protocol binding specification to the Default HTTP binding. This second binding is meant to validate the Protocol Binding Framework for completeness and usability; note that this document is a non-normative description of an Email Binding. It is not the responsibility of this SOAP binding to mandate a specific email infrastructure, therefore specific email infrastructure protocol commands (such as SMTP, POP3, etc) are not covered in this binding document. The underlying email infrastructure and the associated commands of specific email clients and servers along the message path are outside the scope of this email binding... This SOAP binding specification adheres to the SOAP Protocol Binding Framework, and as such uses abstract properties as a descriptive tool for defining the functionality of certain features. Properties are named with XML qualified names (QNames). Property values are determined by the Schema type of the property, as defined in the specification which introduces the property..." Note in this connection that the W3C XML Protocol Working Group has released four SOAP Version 1.2 Last Call Working Drafts: the Primer, Messaging Framework, Adjuncts, and Assertions and Test Collection. See "Simple Object Access Protocol (SOAP)."

  • [June 27, 2002] "Microsoft, IBM Offer WS-Security Specification to OASIS." By Eric Knorr. In ZDNet Tech Update (June 27, 2002). ['Today is a watershed day for Web services.'] "Microsoft, IBM, and VeriSign have submitted WS-Security, a group of Web services security specs first announced last April, to the OASIS standards group. The growing list of all-star players that have already agreed to serve on the OASIS technical committee includes BEA, Cisco, Intel, Iona, Novell, RSA, SAP, and [...] Sun Microsystems. For those who've watched the promise of Web services falter in the face of the often vitriolic Microsoft-Sun rivalry, the news couldn't be more welcome. Bill Smith, Sun's director of Liberty Alliance technology (and a past president of OASIS) sees the decision to submit WS-Security to OASIS as an unmistakable olive branch from Microsoft. 'We were surprised and very pleased that we were approached to participate,' he says. 'You can expect to see us engaged very heavily.' Just as important as the players involved, however, is the decision by Microsoft, IBM, and VeriSign to ensure WS-Security will be royalty-free. Explicitly, no party will be able to collect licensing fees from the use of WS-Security, a stipulation that Smith told me was a prerequisite for Sun's participation. He believes the proposed royalty-free license is 'sufficient in all regards. Had they not done that, we would not have participated.' [...] Bob Sutor, IBM's director of e-business standards strategy downplays the political implications of choosing OASIS, arguing that the standards organization was selected mainly for its history of evangelizing XML directly to business users... Previously, Microsoft and IBM submitted the basic Web services protocols, SOAP and WSDL, to the W3C for approval. Steven VanRoekel, director of Web services technical marketing for Microsoft, is careful to note that 'this is not a departure from the W3C in any way,' contending that the W3C will almost certainly be on the receiving end of other Microsoft standards proposals -- and that OASIS' previous security work simply made it a more appropriate choice this time around. OASIS' foremost security effort has been the Security Assertion Markup Language (SAML), which provides an XML framework for exchanging authentication and authorization credentials. According to Phillip Hallam-Baker, principal scientist for VeriSign, one of the key goals of the technical committee will be to determine exactly how WS-Security and SAML interact. When Hallam-Baker was involved in writing SAML, he says, it was understood another spec would need to spell out the confidentiality and integrity checks required for Web services messages. 'There was kind of a hole in the spec where we said, 'Put WS-Security here,' although we didn't have the name yet.'..."

  • [June 27, 2002] "Microsoft Boosts RosettaNet Support." By Carolyn A. April. In InfoWorld (June 27, 2002). "Microsoft on Thursday [2002-06-27] will bolster its support for business-to-business data exchange among high-tech companies when it ships Version 2.0 of its BizTalk Accelerator for RosettaNet. The latest iteration of the product aims to facilitate RosettaNet implementations through an enhanced suite of development and deployment tools and pre-built support for all of the RosettaNet PIPs (Partner Interface Processes), according to Microsoft officials, who plan to announce the product at the Microsoft High Tech Solutions event in Mountain View, Calif. PIPs provide a standard way for partners to securely conduct such transactions as ordering parts or forecasting inventory. The new BizTalk tools let users import new, customized PIPs from trading partners, as well as integrate their own applications and partners into a RosettaNet hub. A new wizard is featured that enables companies to add and deploy new processes created by a trading partner. Microsoft officials said the wizard gives users the ability to more quickly change, update, and test the processes. BizTalk Accelerator for RosettaNet 2.0 also sports support for additional e-business industry standards, including the Chemical Industry Data Exchange (CIDX) and the Petroleum Industry Data Exchange (PIDX). Defined by an e-business standards consortium of the same name, RosettaNet is a set of XML standards to help technology companies exchange data across suppliers, manufacturers, and partners and drive more b-to-b automation. Along with other XML-based e-business standards, RosettaNet is seen as a cheaper, easier alternative to traditional EDI (electronic data interchange) implementations, according to observers. EDI and RosettaNet can co-exist among companies that need to handle both types of transactions..." Details are in the announcement: "BizTalk Accelerator for RosettaNet Version 2.0 Launched At the Microsoft Silicon Valley High-Tech Solutions Launch. New Version Includes Support for All Current PIPs And Improved Suite of Development, Management and Deployment Tools; Broad Adoption by Industry Leaders Including Intel and Others." See "RosettaNet."

  • [June 27, 2002] "Sun Switches Gears On Encryption." By Wylie Wong. In CNET News.com (June 27, 2002). "Microsoft, IBM and VeriSign have submitted a security specification for Web services to an industry standards body, a move that has won the backing of an unlikely supporter: Sun Microsystems. WS-Security is a 2-month-old technology that encrypts information and ensures that data passed between companies remains confidential. Its three creators -- Microsoft, IBM and VeriSign -- said Thursday they have submitted the specification to a standards body called the Organization for the Advancement of Structured Information Standards (OASIS). Sun had been devising its own rival Web services security specification, but the royalty-free licensing of WS-Security specifications allayed Sun's concerns, a source familiar with the negotiations said. Sun will now focus all its development work on WS-Security and work with its rivals to improve the specification through the OASIS group, said Bill Smith, Sun's director of Liberty Alliance technology. 'They're taking WS-Security into a recognized, open industry organization, and Web infrastructure on a royalty-free basis is an important thing as well,' Smith said. 'You should expect Sun to actively participate in this forum. We will bring whatever we have that can help fill out WS-Security. This is where (the security work) is being done.' Sun's support of WS-Security alleviates concerns about a possible standards war over Web services security. Proponents of Web services have feared that industry squabbling could derail the much-hyped movement. Every software maker has touted Web services as the future of software because such services allow companies to interact and conduct business via the Internet. But Web services won't work unless the entire tech industry coalesces around a single set of standards. Analysts have said lack of security is the biggest obstacle to the adoption of Web services -- and that WS-Security took a big step in addressing the issue. Besides WS-Security, IBM, Microsoft and VeriSign plan to build five more security specifications in the next year and a half to provide additional security measures that businesses may need for Web services. Although Smith declined comment on it, sources said Sun had been quietly working on its own security specification that was royalty free. Over the past several months, Sun executives had expressed concern that IBM and Microsoft might charge 'tolls' to developers -- in the form of royalties on patents -- for using two existing Web services specifications: the Simple Object Access Protocol (SOAP) and Web Services Description Language (WSDL). Neither Microsoft nor IBM has formally stated a desire to charge royalties on the standards, which are in part based on patents held by them..." See: (1) "Web Services Security Specification (WS-Security)"; (2) the announcement "IBM, Microsoft and VeriSign Submit WS-Security Specification to OASIS for Standardization. Advanced Web Services Security Specification Broadly Supported by Industry."

  • [June 27, 2002] "WS-Security Specification Sent to OASIS." By Darryl K. Taft. In eWEEK (June 27, 2002). ['IBM, Microsoft and VeriSign announce they will push their Web services security standard through OASIS.'] "Moving ahead on promises made when they formed the initiative in April, IBM, Microsoft Corp. and VeriSign Inc. Thursday announced that they will submit the latest version of the Web Services Security (WS-Security) specification to the Organization for the Advancement of Structured Information Standards for ongoing development. The WS-Security specification is a leading Web services standards effort to support, integrate and unify multiple security models, mechanisms and technologies, allowing a variety of systems to interoperate in a platform- and language-neutral manner, the companies said. Eric Newcomer, chief technology officer of Iona Technologies Inc., in Waltham, Mass., and a founding member of the working group that will handle the WS-Security standards effort within OASIS, said from his perspective IBM and Microsoft grew 'impatient' with the efforts of the Worldwide Web Consortium (W3C) to deliver a standard around security and Web services... In addition to Iona, many OASIS member companies pledged support for WS-Security, including Baltimore Technologies plc., BEA Systems Inc., Documentum Inc., Entrust Inc., Netegrity Inc., Novell Inc., Oblix Inc., RSA Security Inc., SAP AG, Sun Microsystems Inc., Systinet Corp., Vodafone Group plc. and webMethods Inc... The WS-Security specification, which provides the foundation for that road map, defines a standard set of Simple Object Access Protocol (SOAP) extensions, or message headers, which can be used to implement integrity and confidentiality in Web services applications. Web services are applications that can be accessed through XML and SOAP-based protocols, making them platform- and language-independent. WS-Security provides a foundation layer for secure Web services, laying the groundwork for higher-level facilities such as federation, policy and trust." See: (1) "Web Services Security Specification (WS-Security)"; (2) the announcement "IBM, Microsoft and VeriSign Submit WS-Security Specification to OASIS for Standardization. Advanced Web Services Security Specification Broadly Supported by Industry."

  • [June 27, 2002] "Web Services Security Spec Sent to OASIS." By Thor Olavsrud. In InternetNews.com (June 27, 2002). "Aiming to address critical security issues that still hang over Web services, the powerhouse triumvirate of VeriSign, IBM, and Microsoft Thursday submitted their Web Services Security (WS-Security) specification to the OASIS standards body. Analysts have long considered security one of the missing pieces hampering the adoption of Web services technologies, which allow the rapid integration of disparate platforms and legacy systems, applications and information. Even more tantalizing is the possibility of supply chain integration by tying together the systems of partners, customers and suppliers. But before that can happen on a large scale, the security issues must be addressed. WS-Security defines a set of SOAP (define) extensions which can be used to implement integrity and confidentiality in Web services applications, laying the groundwork for higher-level facilities like federation, policy and trust. The three companies are making the specification available royalty-free, and even Sun Microsystems, which has been opposing IBM and Microsoft by backing the development of rival specifications, threw its support behind the WS-Security specification and said it would participate in the OASIS development effort..."

  • [June 27, 2002] "Combining UDDI and DNS: A Step Towards a Better E-Commerce?" By Anders Rundgren (OBI Express Project Manager). White paper. June 2002. ['The paper describes how OBI Express' Web Service Discovery functions augmented by DNS, contribute to the creation of a convenient, robust, and secure peer-to-peer-based computing environment for business-to-business applications. Or P2P4B2B as we sometimes call this. A similar solution where DNS and Web Service Discovery are used to create a better user experience, and richer functionality, is featured in an enhanced version of VISA's 3D Secure payment system, tentatively called .PAY (pronounced dot-pay). This should be regarded as input to UDDI V3, currently in development.'] "The following is a spin-off from our work with OBI Express, a secure, plug-and-play, Web Services-based, B2B e-commerce standard in development. The adoption of public UDDI-registers have so far been rather limited, indicating that maybe the UDDI-model itself needs some 'adjustments' to ever become mainstream. One weakness seems to be the mixing of 'Yellow Pages' (YP) type of information that is relatively uncritical, with 'e-business interface descriptions' (a.k.a. 'Green Pages') as the latter most likely need frequent and secure updates to work, which is hard to achieve on a global scale using a centralized approach. Although replication can reduce some of the technical problems, the business case for running commercial subordinate UDDI-nodes remains unclear. By splitting registry responsibilities as described below, you get a considerably more scalable model, both technically and business-wise. Another issue is that DNS, rather than UDDI, is probably the most natural 'placeholder' for organizations' e-business interfaces, now, as well as in the future. Therefore we suggest that YP-registers (that do not have to use UDDI) either keep URLs, or (to achieve highest possible 'information robustness'), keep DNS-names to locally maintained and directly accessed repositories containing the actual interface definitions. By eliminating replication, you limit both errors and security issues. It is true that DNS uses replication, but this replication has a more limited scope than required by UDDI..." See: "Universal Description, Discovery, and Integration (UDDI)."

  • [June 26, 2002] "Sun, Others Propose New Web Services Standard." By Matt Berger. In InfoWorld (June 26, 2002). "Sun Microsystems and a group of software vendors are proposing to add another standard to the recipe for building Web services. On Wednesday, the companies detailed a specification that would allow developers to "choreograph" events and transactions that take place between computers when applications and services are accessed over the Internet. The specification is called the Web Service Choreography Interface (WSCI), and it is designed to work with Web services based on the standard data format XML (Extensible Markup Language). Joining Sun in drafting and publishing the specification are software makers SAP, BEA Systems and Intalio. For Web services that make use of several existing Web-based applications, the specification aims to define a standard way for developers to describe which actions must occur and in what order they must take place, so that the Web service they are building can process information in an orderly manner, said Karsten Riemer, an XML architect with Sun. For example, a Web site where users can book airline tickets online might require an application that combines existing Web services for various tasks, such as determining whether the user is a member of a frequent flier program, figuring out which airlines fly to the destination being requested, and checking that the user has sufficient funds in his or her bank account to purchase the ticket..." See: (1) the announcement: "BEA, Intalio, SAP, Sun Publish Web Services Choreography Interface, Take Web Services Collaboration to New Level. New XML-Based Specification Helps Developers Build Web Services for Open Application to Application Collaboration."; (2) the reference page "Web Service Choreography Interface (WSCI)."

  • [June 26, 2002] "BizTalk Accelerator for RosettaNet 2 Ships." By Renee Boucher Ferguson. In eWEEK (June 26, 2002). "Working hard to garner further industry support for RosettaNet, Microsoft Corp., with the Gartner Group, a research firm out of Boston, Mass., surveyed 18,000 suppliers to find out what some of their basic needs are in moving forward with a RosettaNet implementation. The results are wrapped into Microsoft's BizTalk Accelerator for RosettaNet 2.0, to be released tomorrow. Three main design goals emerged from the survey. It turns out suppliers are looking for speed in implementing a standards-based solution, flexibility of change management, and complete support for the standard -- in this case RosettaNet -- according to Microsoft officials. In regard to rapid deployment, Microsoft added a Wizard user interface that allows users to quickly set up new partner relationships and PIPs [partner interface processes] as defined by RosettaNet. Likewise, if a customer or partner modifies or customizes a PIP, the user can deploy the new PIP through the wizard-based UI, according to officials. For more flexible change management, a RosettaNet Console is included in the second iteration, which allows a user to manage trading partnerships, again using a Wizard-based UI. In response to the supplier's need for fuller support of the standard, Microsoft now supports each of the 77 PIPs defined by RosettaNet. It also added support for CIDX [Chemical Industry Data Exchange] and PIDX, the petroleum industry data exchange. Both industries are working hard to utilize RosettaNet in their business-to-business transactions. However, like others in the high-tech manufacturing space for which RosettaNet was originally formed, getting partners to use the XML-based business process standard proves often to be an expensive and difficult undertaking..." See "RosettaNet."

  • [June 26, 2002] "Sun, BEA, Others Publish New Web Services Specification. Sun to Offer Free Tool Around WSCI." By Elizabeth Montalbano. In Computer Reseller News (June 26, 2002). ['Several industry vendors Wednesday unveiled a new XML-based specification to facilitate Web services interoperability.'] "Sun Microsystems, BEA Systems, Intalio and SAP have made the new spec, Web Service Choreography Interface (WSCI), available for review on their Web sites, said Susy Struble, manager of XML industry initiatives at Sun. To help solution providers and developers become familiar with WSCI, Sun will release a new tool, the Sun ONE Web Service Choreography Interface Editor, on its Web site Friday for free download, she added. Karsten Riemer, a Sun XML architect, said WSCI picks up where WSDL leaves off in describing what a Web service does. For instance, WSDL will describe the functions of a particular service, but not how those functions relation to each other, said Riemer. 'WSCI describes all the relationships between all the things you can do [with a Web service],' said Riemer. 'It's the sequence of steps, the glue around individual WSDL operations.' Struble said Sun and the companies that developed WSCI hope to submit the spec to a standards body such as the W3C once companies have had a chance to review WSCI... Struble said Sun is 'pleased to be taking a leadership' role in promoting Web services interoperability with the WSCI spec. She would not comment on whether the spec would become a part of the blueprints WS-I is developing to promote interoperability..." See (1) the announcement: "BEA, Intalio, SAP, Sun Publish Web Services Choreography Interface, Take Web Services Collaboration to New Level. New XML-Based Specification Helps Developers Build Web Services for Open Application to Application Collaboration."; (2) the reference page "Web Service Choreography Interface (WSCI)."

  • [June 26, 2002] "SAN Management Using CIM and WBEM." By Steve Jerman and John Crandall. In InfoStor Volume 6, Number 6 (June 2002), pages 22-24. ['Two standards may alleviate management headaches for both end users and management software developers.'] "Today, a SAN (storage area network) administrator may have a SAN network management tool, storage resource management (SRM) software, and multiple device-specific management tools. All of these tools have various management interfaces (SNMP, Fibre Channel Services, vendor-specific APIs, etc.), resulting in multiple vendors replicating development work and potentially providing inconsistent and incomplete information to SAN administrators. A promising solution to this problem is an interoperable, open environment for storage management based on the Web-Based Enterprise Management (WBEM) standard and the Common Information Model (CIM), both of which were developed by the Distributed Management Task Force (DMTF). WBEM ('webem') is a set of standards developed to unify the management of enterprise computing environments. The DMTF has developed a core set of standards that make up WBEM, including a data model, the CIM standard, an encoding specification using XML, and a transport mechanism using HTTP. The Storage Networking Industry Association (SNIA) has decided to use WBEM as the basis for a standard SAN management interface. The organization has worked with the DMTF to define the necessary modeling extensions to manage a SAN and all of the storage devices in it WBEM is an XML-based management interface using CIM. WBEM consists of three elements: [1] An object model (CIM); [2] An XML encoding specification (xmlCIM), which is written in Document Type Definition (DTD); xmlCIM defines XML elements representing CIM classes and instances; and [3] A transport mechanism, (CIM Operations over HTTP), which describes how to access a CIM model using xmlCIM over HTTP... Today, the SAN management model includes: Device discovery, Topology, Device configuration, Device statistics, Zoning configuration, Asset management, and Software management... The SNIA (Storage Networking Industry Association) believes WBEM is a key technology for SAN management. Designed for heterogeneous SAN and storage management, leveraging existing technologies such as XML and HTTP, WBEM can evolve as new transports and protocols come along..." Note in this connection the proposal for an OASIS Management Protocol Technical Committee which would produce a Management Protocol Specification by June 2003; "the proposed initial scope of this committee will be to develop open industry standard management protocols to provide a web-based mechanism to monitor and control managed elements in a distributed environment based on industry accepted management models, methods, and operations, including, OMI, XML, SOAP, DMTF CIM, and DMTF CIM Operations..." See "DMTF Common Information Model (CIM)."

  • [June 26, 2002] "The Web's Future Passkey." By Lawrence M. Walsh. In Information Security Magazine (June 2002). ['SAML supporters say the standard could provide ubiquitous, transparent Web authorization.'] "Baltimore Technologies recently designed its security management suite, SelectAccess 5.0, as an XML-based application to leverage its access control functions for the emerging world of Web services. A key element of SelectAccess is the Security Assertion Markup Language (SAML), a relatively new standard that's rapidly becoming the de facto means for exchanging user credentials between trusted environments... Developed by the Organization for the Advancement of Structured Information Standards, SAML could be the success story for the next generation of online computing. As Web services and trusted online relationships continue to evolve, many see SAML as the mechanism that will bring single sign-on (SSO) to B2B and B2C environments... SAML's infrastructure is rather simple. To make it work, a Web-based network must have a SAML server deployed on its perimeter. The server sits alongside the Web server and interacts with its back-end access control database. Once a user authenticates to the site, the SAML server will transparently transmit his credentials to every partner site. The SAML server on the other end will automatically accept him as being a trusted user... Given the extent of a partner community, users can transparently pass from site to site without ever touching an access control or authorization mechanism. This transparency, developers believe, will facilitate greater use of online services and information sharing, since users won't have to remember and enter a myriad of authentication information... Granting trust between SAML servers isn't done blindly. SAML doesn't grant users access, say how they should be authenticated or enable automated provisioning for new services. Essentially, it's nothing more than an exchange of information between trusted, known parties. That's where things get a little tricky. While an enabled SAML system will create transparent exchanges of authorization information, the establishment of those trusted relationships must still be done out of band... SAML typically uses digital certificates to authenticate servers to one another--preventing a rogue SAML server from spoofing access rights--and encrypts all data passed between networks. However, the standard doesn't authenticate users; rather, it relies on existing access control and authentication solutions. It also does nothing to protect user identification information stored locally. All of this means partner sites must develop mutual requirements for user authentication and data protection... In addition to Baltimore, other security vendors are incorporating SAML in their products. Waveset and Netegrity are each integrating SAML in their access control products, and Netegrity has already released a toolkit for making existing SiteMinder applications SAML-compliant..." See: (1) "Security Assertion Markup Language (SAML)"; (2) Burton Catalyst Conference 2002 ['Day Two examines industry support for the Security Assertions Markup Language (SAML), Microsoft's proposal to apply Kerberos to the security federation problem, and whether PKI is ready to deliver on its promise of an interoperable public trust network.']

  • [June 26, 2002] OpenTravel Alliance 2002A Message Specification. Public Review Draft 3. OpenTravel Alliance, Inc. Prepared in partnership with Data Interchange Standards Association (DISA). 21-June-2002. 23 pages. The OpenTravel Alliance 2002A Message Specification overview presents "a brief description of the OTA 2002A Specification RQ/RS message pairs. For a more detailed definition of the messages, one should refer to the OTA XML Schema Definition files (XSDs). Major contents: Section 1 - The Air Working Group; Section 2 - The Car Working Group; Section 3 - The Hotel Working Group; Section 4 - Package Tour Messages; Section 5 - Golf Messages; Section 6 - Travel Insurance Messages. Mapping documents that demonstrate the changes between 2001B and 2002A messages for the Car Working Group, Hotel Working Group, Package Tour Messages, and Golf Messages will accompany this final specification." See the news item of 2002-06-26 "OpenTravel Alliance XML Specification Supports Multiple Travel Verticals." [cache]

  • [June 26, 2002] "Is the JCP Adequately Preparing Java for Web Services? A look at the recently released and forthcoming Web services APIs." By Jennifer Orr. In JavaWorld Magazine (June 21, 2002). ['The official release of the newest Java Web Services Developer Pack introduces the Java API for XML Registries and Java API for XML Remote Procedure Call, recently approved through the Java Community Process. The JCP is currently reviewing additional Web services APIs that should prove important to Java Web services development. In this article, Jennifer Orr spotlights the latest Web services technologies and examines how the JCP is responding to Web services.'] "In April 2002, the JCP released the final JAXR version, which gives developers an API for building Web services that interact with standard XML registry specifications, including the two dominant registries: Universal, Description, Discovery, and Integration (UDDI) and ebXML. Regardless of whether a service has been published in a UDDI registry or an ebXML registry, with JAXR, a Web service can discover that service and publish its own services to either registry...Peter Kacandes, senior product manager for Java XML APIs in the Java software products division at Sun praises JAXR for helping developers work more efficiently. 'You learn JAXR and now you have full access to the full range of both the ebXML standard and UDDI spec,' he says. 'With IBM's UDDI4J, for example, programmers have to learn the UDDI4J API, and when they want to use ebXML, they'd have to use some other specific API.' This ability to leverage multiple underlying standards with one API makes developers more efficient says Sun. JAX-RPC, just finalized in June, features similar capabilities. JAX-RPC allows developers to build Web applications that incorporate XML-based RPC (remote procedure call). The RPC mechanism lets a client communicate a remote procedure call to a server. JAX-RPC uses the Web services standards SOAP (Simple Object Access Protocol), WSDL (Web Services Description Language), and XML Schema, and defines how to develop and deploy portable and interoperable Web services with Java..."

  • [June 25, 2002] "IBM to Sharpen Web Services Toolkit." By Ed Scannell. In InfoWorld (June 25, 2002). "IBM on Wednesday [2002-06-26] is expected to roll out a spruced-up version of its WebSphere Software Developer Toolkit for Web Services that contains a UDDI (Universal Description, Discovery, and Integration) repository, the latest APIs for XML, and a built-in database upon which developers can host applications and Web services. One aim of this latest version, which is a follow-up to IBM's first Web services toolkit just more than a year ago, is to hasten the adoption of Java-based Web services among developers in both the Windows and Linux communities... The new version weaves together both tools and core runtime infrastructure that most developers need to design, build, and test Java-based Web services, said Hebner. The resulting services can be deployed to any open Web services platform, he said... In related news IBM will announce on Wednesday that it is contributing two Web services technologies to the Apache Software Foundation in hopes of advancing the adoption of Web services in the open-source world. The first is called the WSIF (Web Services Invocation Framework), a technology for invoking services that are compliant with the WSDL (Web Services Description Language) across a number of different network protocols including SOAP (Simple Object Access Protocol), JMS, and RMI (Remote Methods and Invocation). The second is called the WSIL4J (Web Service Inspection Language for Java), which will allow Java programmers to both access and process Web Services Inspection Language documents on a Web site. Co-developed by IBM and Microsoft, the WS-Inspection specification defines how an application can examine a Web site for those Web services that are available. It is intended to be a complement to UDDI global directory technology by helping discover services on Web sites that are not listed in the UDDI registries... While he declined to say specifically when, Hebner said future versions of the Web services toolkit will integrate new Web services features and functions that support the WS-I's (Web Services Interoperability Organization's) upcoming reference profiles and scenarios. WSIF and WSIL4J have both been open-sourced through the Apache Software License." Both technologies can be downloaded; see WSIF. See: (1) "Universal Description, Discovery, and Integration (UDDI)"; (2) "IBM alphaWorks Releases Web Services Invocation Framework (WSIF)."

  • [June 25, 2002] "Efficient Filtering of XML Documents with XPath Expressions." By Chee-Yong Chan, Pascal Felber, Minos Garofalakis, and Rajeev Rastogi (Bell Laboratories, Lucent Technologies). Pages 235-244 (with 19 references) in Proceedings of IEEE ICDE 2002 [18th International Conference on Data Engineering], San Jose, California, February 2002. "We propose a novel index structure, termed XTrie, that supports the efficient filtering of XML documents based on XPath expressions. Our XTrie index structure offers several novel features that make it especially attractive for large scale publish/subscribe systems. First, XTrie is designed to support effective filtering based on complex XPath expressions (as opposed to simple, single-path specifications). Second, our XTrie structure and algorithms are designed to support both ordered and unordered matching of XML data. Third, by indexing on sequences of element names organized in a trie structure and using a sophisticated matching algorithm, XTrie is able to both reduce the number of unnecessary index probes as well as avoid redundant matchings, thereby providing extremely efficient filtering. Our experimental results over a wide range of XML document and XPath expression workloads demonstrate that our XTrie index structure outperforms earlier approaches by wide margins... Indexing on a carefully-selected set of substrings (rather than individual element names) in the XPEs [XPath expressions] is a key ingredient of our approach that enables us to minimize both the number and the cost of the required index probes. The key intuition here is that a sequence of element names has a lower probability (compared to a single element name) of matching in an input document, resulting in fewer index probes. In addition, since there are fewer indexed XPEs associated with a 'longer' substring key, each index probe is likely to be less expensive as well. To support online filtering of streaming XML data, our XTrie indexing scheme is based on the event-based SAX parsing interface, to implement XML data filtering as the XML document is parsed. This is in contrast to the alternative DOM parsing interface, which requires a mainmemory representation of the XML data tree to be built before filtering can commence. To the best of our knowledge, the only other SAX-based index structure for the XPE retrieval problem is Altinel and Franklin's XFilter ['Efficient Filtering of XML Documents for Selective Dissemination of Information,' VLDB 2000], which relies on indexing the XPE element names using a hashtable structure. By indexing on substrings rather than individual element names, our XTrie index provides a much more effective indexing mechanism than XFilter. A further limitation of XFilter is that its space requirement can grow to a very large size as an input document is parsed, which can also increase the filtering time significantly. Our experimental results over a wide range of XML document and XPath expression workloads validate our claims, demonstrating that our XTrie index structure significantly outperforms XFilter (by factors of up to 4)...[source PS.gz, slides]

  • [June 24, 2002] "UBL and Web Services." By Matthew Gertner. In XML Journal Volume 3, Issue 6 (June 2002). "In this article we examine the question of standardizing Web service semantics... The challenge of representing all but the most trivial Web service semantics in a machine-readable way was well exposed last fall in an article by Clay Shirky. Existing Web service standards, specifically SOAP, WSDL, and UDDI, provide little more than a way for applications to invoke a Web service once they already know what its interface looks like. This is all very well for Web services whose purpose is sufficiently transparent, but it's exactly these services that are the least interesting. After all, we're talking about revolutionizing computing...By far the most comprehensive effort was launched last year under the name Universal Business Language. Structured first as an independent group, UBL was formally accepted as an OASIS Technical Committee last October. UBL brings to the table a number of strengths, including experienced and proven leadership, broad industry and vendor support, and a solid technical foundation. UBL takes as its starting point xCBL, widely accepted as one of the most comprehensive XML-based business libraries. UBL's Library Content subcommittee has been entrusted with the task of harmonizing xCBL with the fruits of EDI's Joint Core Components initative and with other business libraries, including vocabularies for industry verticals. Official liaisons have been appointed to UBL from several vertical standards organizations to ensure that the basic UBL business documents will work across multiple industries. At the same time, UBL's technical subcommittees are specifying the nuts and bolts that will underpin the document library. These include tricky but important decisions about which schema features to use and how to name tags in a clear, concise, and consistent manner. The ebXML context extension methodology is also being adopted and improved in order to produce an automated procedure for creating extended schemas (e.g., for a specific industry, region, or company) that interoperate with the base schemas in the document library. A first version of the UBL document library is scheduled to be completed 12 months into the effort (a draft of the first schema, for Purchase Order, has recently been released). The context extension methodology will be released approximately one year later - sooner if it turns out that this work can be performed in parallel with the creation of the document library... By defining what in essence are the basic interfaces for a complete set of business processes, the UBL effort will have huge implications for Web services. Consider, for example, a Web service for online payment. The core functionality of this service is to receive invoices, create payment request documents based on these invoices, and settle the payment through a bank payment gateway. The defining principle of Web services is that a service of this type should be able to interact in a plug-and-play manner with other Web services, and that it should be replaceable by any other Web service targeting the same functionality. Without UBL this goal isn't achievable because the exact formats of the invoice and payment documents would have to be determined by the implementer of the system, so they wouldn't be interoperable with Web services from other vendors. UBL solves this problem by providing standard formats for the invoice and payment documents. 
Any Web service that produces an invoice (a billing service, for example) can thus interface with the payment service. By using the context methodology, subtle differences in invoice and payment formats can be handled without invalidating the overall approach. In addition, the payment service can be swapped for another one, perhaps with some advantages, such as better terms, higher availability, or interfaces to more banks..." See: "Universal Business Language (UBL)."
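
    [Editorial note: Since the first draft schema (Purchase Order) had only just been released, the fragment below is a hypothetical sketch -- the namespace URI and every element name are invented, not taken from the UBL draft -- meant only to convey the kind of standardized business document the library aims to provide.]

        <!-- Hypothetical sketch: consult the draft UBL Purchase Order
             schema for the actual element names and namespace. -->
        <Order xmlns="urn:example:ubl:order">
          <ID>PO-20020624-001</ID>
          <IssueDate>2002-06-24</IssueDate>
          <BuyerParty><Name>Example Corp.</Name></BuyerParty>
          <SellerParty><Name>Acme Supplies</Name></SellerParty>
          <OrderLine>
            <Quantity unitCode="EA">12</Quantity>
            <Item><Description>Standard widget</Description></Item>
          </OrderLine>
        </Order>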

  • [June 24, 2002] "Mapping XML and Relational Schemas with Clio." By Lucian Popa, Mauricio A. Hernández, Yannis Velegrakis, Renée J. Miller, Felix Naumann, and Howard Ho. In Proceedings 18th International Conference on Data Engineering (ICDE) [San Jose, CA, USA; February 26, 2002 - March 1, 2002] "Merging and coalescing data from multiple and diverse sources into different data formats continues to be an important problem in modern information systems. Schema matching (the process of matching elements of a source schema with elements of a target schema) and schema mapping (the process of creating a query that maps between two disparate schemas) are at the heart of data integration systems. We demonstrate Clio, a semi-automatic schema mapping tool developed at the IBM Almaden Research Center. In this paper, we showcase Clio's mapping engine which allows mapping to and from relational and XML schemas, and takes advantage of data constraints in order to preserve data associations." See the news item 2002-06-24 "IBM Clio Tool Supports Mapping Between Relational Data and XML Schemas."

  • [June 24, 2002] "A Better XML Parser Through Functional Programming." By Oleg Kiselyov (Software Engineering, Naval Postgraduate School). Presented at the Fourth International Symposium on Practical Aspects of Declarative Languages (PADL 2002). 49 pages (slides). Also == pages 209-224 (with 14 references) in the Proceedings, edited by S. Krishnamurthi (Lecture Notes in Computer Science, Number 2257. Berlin: Springer-Verlag). "This paper demonstrates how a higher-level, declarative view of XML parsing as folding over XML documents has helped to design and implement a better XML parser. By better we mean a full-featured, algorithmically optimal, pure-functional parser, which can act as a stream processor. By better we mean an efficient SAX parser that is easy to use, a parser that does not burden an application with the maintenance of a global state across several callbacks, a parser that eliminates classes of possible application errors. This paper describes such better XML parser, SSAX. We demonstrate that SSAX is a better parser by comparing it with several XML parsers written in various (functional) languages, as well as with the reference XML parser Expat. In the experience of the author the declarative approach has greatly helped in the development of SSAX. We argue that the more expressive, reliable and easier to use application interface is the outcome of implementing the parsing engine as an enhanced tree fold combinator, which fully captures the control pattern of the depth-first tree traversal." [See also SSAX and SXML at Sourceforge: "A SSAX functional XML parsing framework consists of a DOM/SXML parser, a SAX parser, and a supporting library of lexing and parsing procedures. The procedures in the package can be used separately to tokenize or parse various pieces of XML documents. The framework supports XML Namespaces, character, internal and external parsed entities, attribute value normalization, processing instructions and CDATA sections. The package includes a semi-validating SXML parser : a DOM-mode parser that is an instantiation of a SAX parser (called SSAX). SSAX is a full-featured, algorithmically optimal, pure-functional parser, which can act as a stream processor. SSAX is an efficient SAX parser that is easy to use . SSAX minimizes the amount of application-specific state that has to be shared among user-supplied event handlers. SSAX makes the maintenance of an application-specific element stack unnecessary, which eliminates several classes of common bugs. SSAX is written in a pure-functional subset of Scheme. Therefore, the event handlers are referentially transparent, which makes them easier for a programmer to write and to reason about..."] Source PS.gz.

  • [June 24, 2002] "Patently Absurd." By Gary L. Reback. In Forbes ASAP (June 24, 2002). ['Patents are no longer invention stimulants, but have become widely abused revenue-generating vehicles. The U.S. Patent and Trademark Office (USPTO) has aided those who abuse the system by allowing anyone to get a patent for almost anything.'] "Too many patents are just as bad for society as too few. There are those who view the patent system as the seedbed of capitalism -- the place where ideas and new technologies are nurtured. This is a romantic myth. In reality, patents are enormously powerful competitive weapons that are proliferating dangerously, and the U.S. Patent and Trademark Office (USPTO) has all the trappings of a revenue-driven, institutionalized arms merchant. My own introduction to the realities of the patent system came in the 1980s, when my client, Sun Microsystems -- then a small company -- was accused by IBM of patent infringement. Threatening a massive lawsuit, IBM demanded a meeting to present its claims. Fourteen IBM lawyers and their assistants, all clad in the requisite dark blue suits, crowded into the largest conference room Sun had. The chief blue suit orchestrated the presentation of the seven patents IBM claimed were infringed, the most prominent of which was IBM's notorious 'fat lines' patent: To turn a thin line on a computer screen into a broad line, you go up and down an equal distance from the ends of the thin line and then connect the four points. You probably learned this technique for turning a line into a rectangle in seventh-grade geometry, and, doubtless, you believe it was devised by Euclid or some such 3,000-year-old thinker. Not according to the examiners of the USPTO, who awarded IBM a patent on the process. After IBM's presentation, our turn came. As the Big Blue crew looked on (without a flicker of emotion), my colleagues -- all of whom had both engineering and law degrees -- took to the whiteboard with markers, methodically illustrating, dissecting, and demolishing IBM's claims. We used phrases like: 'You must be kidding,' and 'You ought to be ashamed.' But the IBM team showed no emotion, save outright indifference. Confidently, we proclaimed our conclusion: Only one of the seven IBM patents would be deemed valid by a court, and no rational court would find that Sun's technology infringed even that one. An awkward silence ensued. The blue suits did not even confer among themselves. They just sat there, stonelike. Finally, the chief suit responded. 'OK,' he said, 'maybe you don't infringe these seven patents. But we have 10,000 U.S. patents. Do you really want us to go back to Armonk [IBM headquarters in New York] and find seven patents you do infringe? Or do you want to make this easy and just pay us $20 million?' After a modest bit of negotiation, Sun cut IBM a check, and the blue suits went to the next company on their hit list. In corporate America, this type of shakedown is repeated weekly. The patent as stimulant to invention has long since given way to the patent as blunt instrument for establishing an innovation stranglehold. Sometimes the antagonist is a large corporation, short on revenue-generating products but long on royalty-generating patents. On other occasions, an opportunistic 'entrepreneur' who only produces patent applications uses the system's overly broad and undisciplined patent grant to shake down a potential competitor...' 
Note: While IBM's name appears here as 'threatening a massive lawsuit,' I understand that any number or large companies could be named in this slot: Reback's argument is not against any one company, but against all companies that (ab)use the US patent system in this way. See related articles in Forbes: (1) "The Smother of Invention" and (2) "Search 500,000 Documents, Review 160,000 Pages in 20 Hours, and Then Do It All Over Again. The detailed life of a patent examiner." References: "Patents and Open Standards."

  • [June 24, 2002] "VoiceXML and the Future of SALT. [Voice Data Convergence.]" By Jonathan Eisenzopf. In Business Communications Review (May 2002), pages 54-59. ['Jonathan Eisenzopf is a member of the Ferrum Group, LLC, which provides consulting and training services to companies that are in the process of evaluating, selecting or implementing VoiceXML speech solutions.'] "The past year has been eventful for VoiceXML, the standard that application developers and service providers have been promoting for the delivery of Web-based content over voice networks. Many recent developments have been positive, as continued improvements in speech-recognition technology make voice-based interfaces more and more appealing. Established vendors are now validating VoiceXML, adding it to their products and creating new products around the technology. For many enterprises, this means that the next time there's a system upgrade, VoiceXML may be an option. For example, InterVoice-Brite customers soon will be able to add VoiceXML access to their IVR platform, which would provide callers with access to Web applications and enterprise databases... The introduction of SALT as an alternative to VoiceXML for multi-modal applications will present alternatives for customers who are not focusing exclusively on the telephone interface. However, VoiceXML is likely to be the dominant standard for next-generation IVR systems, at least until Microsoft and the SALT Forum members begin to offer product visions and complete solution sets..." See: (1) "Speech Application Language Tags (SALT)"; (2) "VoiceXML Forum."

  • [June 24, 2002] "UDDI and LDAP: A Perfect Fit?" By Richard Karpinski. In InternetWeek (June 20, 2002). "As rapidly as some Web services protocols, such as Simple Object Access Protocol (SOAP), have taken hold, others, perhaps most notably Universal Description, Discovery and Integration (UDDI), have met with less success. Most experts still believe that UDDI will ultimately play a key role in the adoption of Web services. It's best described by its own white pages/yellow pages metaphors. At its core, UDDI provides a place for businesses to describe and register -- and for other companies to discover -- the Web services interfaces they are making available to the world. UDDI has been anything but a slam-dunk thus far. The technology hasn't been as flexible as many developers would like. And participation in public UDDI registries as been low. But it's still early in the game. Many early UDDI server implementations were solely for proof-of-concept. Also, many early servers were public deployments, while much of the early work with Web services turned out not to be over the public Internet but behind corporate firewalls. In the end, UDDI isn't strictly necessary to do SOAP-based Web services. But if such architectures are to scale, developers need a place to register their services so they can be easily found and consumed. Ultimately, criticizing early UDDI specifications and deployment efforts was beside the point -- the technology won't really be needed until Web services take firm hold on the enterprise landscape. Despite its early flaws, UDDI is making progress. The current, 2.0 version of the specification, by most accounts, was a major step forward from the initial standard, adding support for so-called multiple entities that made it much easier for businesses to actually map their organizations and workflows to the repository. Market-watchers expect additional steps forward this summer with the release of version 3.0 of UDDI. Yet even more significantly, vendors are beginning to bring UDDI more into the enterprise mainstream. For instance, the past few weeks have seen growing amounts of activity around mapping UDDI repositories into Lightweight Directory Access Protocol (LDAP) directories, which in their own right have emerged as a key enabling e-business technology. Security is already emerging as a key stumbling block for Web services deployments; existing LDAP-based authorization and access schemes can help jump-start those efforts, [Nathan] Owen said... Despite the early hurdles UDDI has faced, developers still believe in the technology, according to a study released this week. According to a poll of developers by Flashline, a provider of software reuse solutions, 55 percent of respondents are currently evaluating internal or external UDDI registries, while 11 percent are already using a UDDI registry to organize access to Web services..." See: "Universal Description, Discovery, and Integration (UDDI)."

  • [June 24, 2002] "Systinet Upgrades Web Services Suite." By Paul Krill. In InfoWorld (June 24, 2002). "Systinet on Monday [2002-06-24] plans to release Version 4.0 of WASP (Web Applications and Services Platform), its suite of products for building and managing Web services that features improvements in areas such as security and performance. A new pricing structure also is being detailed that enables developers to build test applications for free. The platform consists of the WASP Developer development tool; UDDI, a registry for registering Web services in a UDDI (Universal Description, Discovery, and Integration) directory; and Server, a Web services platform available in versions for Java or C++ programming. Systinet has improved security in WASP, offering support for the HTTP Basic and HTTP Digest mechanisms for authentication and encryption. Previous versions of the product supported just SSL (Secure Sockets Layer) security. 'SSL allows you to encrypt a message, but it doesn't give you a mechanism to find out who the [user] is,' said Systinet CTO Anne Thomas Manes. Systinet also is adding a browser-based Web services management console to monitor applications and meter service requests. Performance and scalability have been improved through optimization of the WASP XML processing system, to remove limits to the number of concurrent users.. Systinet also has enhanced capabilities for interoperability with Microsoft .Net, making it easier to use .Net as a client to WASP or vice-versa. Previously, a lot of custom coding was required for this..."

  • [June 24, 2002] "ebXML: The Missing Ingredient for Web Services?" By Klaus-Dieter Naujok. From Web Services Journal (July 2002). "Web Services have the potential to transform eBusiness into a plug-and-play affair. Not only will Web Services simplify how businesses will interconnect, they will also enable businesses to find each other. One reason for the increased interest in Web Services is the promise of interoperability, in the same way that Web pages can be accessed from anywhere on the Internet. However, complex standards are needed to achieve true interoperability, not only at the messaging and transport layer, but also at the business (application) layer. The success of Web Services will depend on how easily businesses will be able to engage interoperability at all levels. There are many efforts in standardizing Web Services, but none of them provide the required features for eBusiness transactions. Web Service standards only address the infrastructure side. This is where ebXML can provide the standards for interoperability at the business layer, making it the standard solution for Web Business Services. In other words, ebXML is the missing ingredient for Web Services..." See other resources on ebXML and UN/CEFACT from the Klaus' Korner.

  • [June 24, 2002] "FTC Issues Complaint Against Rambus, Inc. Deception of Standard-Setting Organization Violated Federal Law." From the US Federal Trade Commission. (June 19, 2002). "The Federal Trade Commission has charged Rambus, Inc., based in Los Altos, California, with violating federal antitrust laws by deliberately engaging in a pattern of anticompetitive acts and practices that served to deceive an industry-wide standard-setting organization, resulting in adverse effects on competition and consumers... According to the FTC's complaint, Rambus participated in JEDEC's SDRAM-related work for more than four years without ever making it known to JEDEC or its members that Rambus was actively working to develop, and did in fact possess, a patent and several pending patent applications that involved specific technologies proposed for, and ultimately adopted in, the relevant standards. By allegedly concealing this information, in violation of JEDEC's operating rules and procedures, and through other alleged bad-faith, deceptive conduct, the complaint charges that Rambus purposefully sought to, and did, convey to JEDEC the materially false and misleading impression that it had no relevant intellectual property rights. Specifically, the complaint alleges that 'Rambus's very participation in JEDEC, coupled with its failure to make required patent-related disclosures, conveyed a materially false and misleading impression - namely, that JEDEC, by incorporating into its SDRAM standards technologies openly discussed and considered during Rambus's tenure in the organization, was not at risk of adopting standards that Rambus could later claim to infringe upon its patents.' In developing standards for use throughout the semiconductor industry, JEDEC has endeavored to promote free competition among industry participants. As the FTC's complaint notes, one of JEDEC's 'basic' rules is that standardization programs conducted by the organization 'shall not be proposed for or indirectly result in -- restricting competition, giving a competitive advantage to any manufacturer, [or] excluding competitors from the market.' The complaint notes that 'JEDEC also has maintained a commitment to avoid, where possible, the incorporation of patented technologies into its published standards, or at a minimum to ensure that such technologies, if incorporated, will be available to be licensed on royalty-free or otherwise reasonable and non-discriminatory terms.' 'Toward this end,' the complaint states, 'JEDEC has implemented procedures designed to ensure that members disclose any patents, or pending patent applications, involving the standard-setting work being undertaken by the organization.'..." See additional references in "Patents and Open Standards."

  • [June 22, 2002] "XMILE: An XML Based Approach for Incremental Code Mobility and Update." By Cecilia Mascolo, Luca Zanolin, and Wolfgang Emmerich (Department of Computer Science, University College London, London, UK). In Automated Software Engineering Volume 9, Number 2 (April 2002), pages 151-165 (with 27 references). "Extensible Markup Language (XML) was originally defined to represent Web content, but it is being increasingly used to define languages, such as XPL, that are used for coding executable algorithms, policies or scripts. XML-related standards, such as XPath and the Document Object Model, permit the flexible manipulation of fragments of XML code, which enables novel code migration and update paradigms. The XMILE approach that we describe in this paper exploits these mechanisms in order to achieve flexible and fine-grained code updates, even without stopping execution. We describe a Java-based prototype that implements XMILE and our experience in using XMILE in the domain of code updates on mobile devices... We show how we are using XMILE in different application domains such as mobile devices application update, programmable networks, and mobile devices application service provision. We plan to continue to apply XMILE in different context to prove its flexibility. We are defining a policy XML based language and we want to use xmile to distribute and update different kind of policies such as security policies. Technologies such as XML RPC, SOAP or JAXM could potentially be integrated with our approach in order to achieve code mobility in more flexible ways: we plan to investigate this research direction. From a logical mobility point of view we are interested in extending our variable binding mechanism allowing dynamic downloading of non present variable in addition to the dynamic local binding..." See: (1) the news item 2002-06-22; (2) the project website for XMILE: XML in Mobile Incremental Logical Environment.[cache]

  • [June 22, 2002] "Legacy Assets Meet SOAP." By Jon Udell. In InfoWorld (June 24, 2002). ['Where are all the Web services? Everywhere, actually. Web services wrappers for database stored procedures, mainframe applications, and components of every flavor are rapidly becoming available... Creating Web services wrappers for stored procedures, Java classes, EJB, COM/COM+ components, and CORBA objects isn't a panacea. But it opens up new integration vistas, and it's a great opportunity to repurpose and refactor these assets.'] "The ongoing conversation about Web services invariably turns to barriers to adoption. A major roadblock, people say, is that would-be consumers of services find few available. This classic chicken-and-egg dilemma may shortly untangle itself. Many existing assets, including SQL stored procedures and CORBA, COM (Component Object Model), and EJB (Enterprise JavaBeans) components, can now be offered as Web services with very little effort. Simply wrapping Web services interfaces around legacy assets is no panacea. But services created in this way, although unglamorous and easy to overlook, are about to become common. And they may be more important than we think. Consider how much of the world's middle-tier business logic lives in the stored procedures of databases. The real number may be impossible to know, but it's a safe bet that for logic that touches transacted data the answer is lots. Today in Oracle, IBM DB2, and Microsoft SQL Server, it is possible to export a stored procedure as a WSDL (Web Services Description Language)-defined, SOAP (Simple Object Access Protocol)-callable service. OpenLink Software's Virtuoso, the forward-looking universal server provides Web services encapsulation for the stored procedures of Oracle and SQL Server as well as those residing in its own native database... For DB2 users, the capability of exposing stored procedures as Web services has been available for about a year, as part of the DB2 XML Extender. IBM has, of course, long been working to unify SQL and XML data management, and this work continues under the name Xperanto... The analogous Microsoft SQL Server add-on, SQLXML Version 3, has been available since February. In May, the latest release of Oracle's application server brought the same exposure capability to authors of PL/SQL procedures. 'Web services are about being open to any programming language,' says Steve Muench, an Oracle consulting product manager and the company's XML evangelist. 'Now we've brought PL/SQL into the fold.' There's nothing magical about turning stored procedures into Web services, but then again there's nothing magical about Web services, either. In 1999 you could, and some people did, offer XML-over-HTTP interfaces to applications and data. But the network effects we're eagerly awaiting will happen only when applications and data can be exposed in a standard way without much head-scratching. Doing SOAP encapsulation of stored procedures may not be sexy. But now that it's a no-brainer, the universe of Web services can expand in an important way... So where are all the Web services? They're crawling out of the woodwork. Virtually every software asset can now be offered, or soon will be able to be offered, as a Web service. We'll have to look elsewhere to find those barriers to adoption..."

  • [June 22, 2002] "Databases Wrestle XML." By Ed Scannell and Paul Krill [with Mark Jones]. In InfoWorld (June 21, 2002). "As XML works its way deeper into the enterprise, the database war heats up, with IBM, Microsoft, and Oracle wrestling over emerging query standards and database architectures. Despite agreement that XML has become the lingua franca of the Internet, a lack of tight integration between XML and databases may hinder the wide-scale deployment of Web services. XML itself may also be an obstacle, as its verbosity carries overhead that can choke application performance. Pursuing its own agenda, IBM is set to announce support for the XML query language XQuery in July when it delivers the next beta version of its DB2 database to select customers. Due to ship later this year, the final product, Xperanto, will more fully exploit XQuery and will build on IBM's goal of seamlessly marrying structured and unstructured data across diverse environments. Meanwhile, Oracle is leveraging existing investments in object databases to flesh out XML support in its XDB architecture and has plans to incorporate XQuery. Also planning XQuery support, Microsoft is working to marry its structured and unstructured databases with SQL and Exchange Server, respectively, wrapping XML in a relational format with SQL XML. At the same time, native XML database players such as Ipedo, Software AG, and NeoCore are gathering steam, seeking direct expression of XML content in the database itself..." See: "XML and Databases."

  • [June 21, 2002] "RELAX NG's Compact Syntax." By Michael Fitzgerald. From XML.com. June 19, 2002. ['The RELAX NG schema language for XML offers a simple approach to writing schemas for XML documents, and is seen as a competitive alternative to W3C XML Schema for many applications. RELAX NG is currently being developed by an OASIS Technical Committee. One of the most recent things to emerge from that committee has been the RELAX NG Compact Syntax. If you've ever written a schema by hand, you'll know all the XML tags are tedious to write and can obscure the meaning of the schema. RELAX NG Compact is a non-XML syntax that makes schemas a lot more readable. In our main feature this week Michael Fitzgerald provides an introduction to this new syntax and the tools that use it.'] "Working with XML Schema is like driving a limousine. It's true that it has some nice appointments (datatypes come to mind), but the wheelbase is a bit on the long side, making it difficult to turn corners easily, and I am inclined to let somebody else do the driving for me. Using RELAX NG, on the other hand, is like driving a sports car. It holds corners amazingly well, and I am much less interested in handing over the keys to anyone. You may prefer to drive a limo over a sports car. But I'll take the sports car any day. You are probably familiar with XML Schema and RELAX NG. Both are schema languages for XML. The former was released by the W3C in May 2001, while the latter was released in December 2001 by OASIS. RELAX NG, which was developed by a small technical committee lead by James Clark, merges Murata Makoto's RELAX and Clark's TREX. It is a simple, yet elegant evolution of the DTD, which is also easy to learn. It is modular in design. The main core of RELAX NG is focused on validation alone and doesn't modify the infoset in the process of validation; in other words, no PSVI. RELAX NG is also part of an ISO draft standard, ISO/IEC DIS 19757-2. RELAX NG schemas were originally written in XML, but there's also a compact, non-XML syntax. While this article doesn't contain an exhaustive review of all the features of RELAX NG, it will give you a good idea of how to use the main parts of the compact syntax. If you don't know much about RELAX NG, I suggest that you read Eric van der Vlist's RELAX NG Compared before finishing this article. I think you'll find the compact syntax quite readable and easy to learn. In some respects, a RELAX NG schema in compact form looks like a context-free grammar, which provides a familiar view of the language, is readily comprehensible, and amenable to parsing..." See: "RELAX NG."

  • [June 21, 2002] "Can XML Be The Same After W3C XML Schema?" By Eric van der Vlist. From XML.com. June 19, 2002. ['Eric van der Vlist has just finished writing a book on W3C XML Schemas for O'Reilly, an experience that has given him a unique perspective on schema languages and W3C XML Schema in particular. Here Eric reflects on the nature of W3C XML Schema and how it could affect XML as a whole'] "The first question to ask is why is W3C XML Schema different? Why am I asking this question about W3C XML Schema and not about DTDs, Schematron, or RELAX NG? The short answer is datatypes and object orientation. These two aspects of W3C XML Schema are tightly coupled. Datatypes are to W3C XML Schema what classes are to object oriented programming languages. Both promote a categorization of information into classes and subclasses, analogous to the taxonomies biologists use to classify species. Although this process of classification or derivation seems natural, it is not universal and is much less visible in other schema languages. To reuse the metaphor of species, a rule based language such as Schematron does not attempt to put a sticker on a species, but rather set of rules defining if an animal belongs to a set of 'valid' animals ('the set of animals having four legs and able to run at least 50 km/h'). Grammar based languages, including RELAX NG and DTD, describe the patterns used to build the animal ('an animal made of a body, a neck, a head, a tail, and four legs')... My reward for digging into the W3C XML Schema Recommendation has been to discover an unexpected pearl far away from the limelight: W3C XML Schema is exceedingly good at associating metadata with elements or attributes...Schematron is about rules, and RELAX NG is about patterns; neither of them describes elements or attributes as such. Schematron can define rules to be checked in the context of an element, and RELAX NG can describe a pattern containing a single element, but W3C XML Schema is the only one which can describe elements and attributes. As long as validation is your primary concern, this may not make much difference, but for attaching metadata to elements and attributes, a language which describes elements and attributes seems to be a better fit..." For schema description and references, see "XML Schemas."

  • [June 21, 2002] "PDF Presentations Using AxPoint." By Kip Hampton. From XML.com. June 19, 2002. ['Kip Hampton provides us our monthly dose of Perl. In his 'Perl & XML 'column Kip introduces AxPoint, a toolkit for producing presentations in PDF. If you've a presentation due any time now, AxPoint could be just the ticket.'] "... we will be examining AxPoint, Matt Sergeant's Perl/XML-based tool for creating visually rich presentations and slideshows in PDF. Originally designed for use exclusively within the AxKit XML publishing environment, AxPoint has recently undergone a major facelift and is now also available outside of AxKit as a simple SAX Handler, XML::Handler::AxPoint... AxPoint is one of those rare applications that strikes a good balance between simplicity and functionality. If you want to get fancy, you can -- for example, the latest version even supports certain SVG primitives, natively -- but if you need something fast, the language itself gets out of the way and lets you focus on the important parts without sacrificing a professional look. If you have presentation deadlines coming up (and I know I do) I strongly encourage you to give AxPoint a close look..." See: "XML and Perl."

  • [June 21, 2002] "Utility Deregulation Requires Effective E-Business Standards." By Alan Kotok (DISA). June 2002. ['This paper analyzes business practices in the gas and electric power utilities industries resulting from deregulation, including the current use of e-business, and offers recommendations for e-business standards to support the goals of deregulation.'] "The utilities industries have relied on voluntary industry standards in some form since the 1960s, but since the trend in deregulation has accelerated, voluntary standards have taken on more importance. For e-business transactions, many companies in the industry successfully use EDI transactions, especially in the wholesale gas and retail electric quadrants. The Petroleum Industry Data Exchange, Gas Industry Standards Board (now North American Energy Standards Board), and Utilities Industry Group have all written implementation guidelines for EDI. With deregulation, and the introduction of trading in energy supplies and futures, as well as electronic marketplaces, the use of XML and Web-based transactions has increased, particularly for companies that had not yet implemented EDI. The paper outlines a high-level strategy for e-business in this more dynamic environment, yet that still recognizes the continuing need for reliability, a heightened need for security, and recent calls for more transparency. The utilities industries should consider developing common business processes, independent of technology and cutting across the traditional boundaries, to provide a roadmap for the development and integration of e-business transactions. Parts of the utilities industries already have experience with this approach to business processes. The paper recommends continuing the use of EDI for high-volume and direct transactions with stable content, while planning for increased use of XML (including ebXML and Web services) where conditions call for more flexibility or intermediaries. The industries should also consider adapting XML practices and vocabularies from the financial services and retail industries that can short cut the development process as well as provide more transparency in electronic transactions..." See the recent news item "Chemical, Petroleum, and Agricultural Industries to Develop eBusiness Standards." [cache]

  • [June 21, 2002] "Sun Wants to Set Web Services Free." By Wylie Wong. In CNet News.com (June 19, 2002). ['Sun Microsystems is attempting to gain popularity among developers in the emerging Web services market by giving away a crucial piece of e-business software.'] "Sun plans to give away an updated basic version of its application server software, a key piece of infrastructure software for building business applications. Application server software is technology that runs e-business and other Web site transactions... Overall, Sun is hoping a free application server will help drive sales of its more expensive products, including servers and other e-business software, such as upgrades to its higher-end application servers and portal software that allows businesses to create portal Web sites for their employees and partners. The company hopes to woo developers who are choosing between its Java-based software--sold under the umbrella name of the Sun Open Network Environment (Sun One) -- and Microsoft's overarching Web services strategy, called .Net. Sun has been battling Microsoft and IBM for popularity in the market but has been playing catch-up with its rivals during the past year. Sun's e-business software, including its application server, lies at the heart of its Web services strategy. Sun has been working to seize more of the initiative, however. It quietly began working on its own royalty-free standard for Web services security and has signed up major partners to back its Liberty Alliance Project, which is working on software standards to let a person use different computer systems while only having to log in once. The Liberty standard, which will govern how to 'federate' different organizations' authentication and authorization systems, is scheduled for release in early July, said Jonathan Schwartz... Sun is building Liberty support into its Sun ONE software. The first Liberty-enabled products -- likely Sun's directory server software widely used to store username-password combinations -- are expected within 45 days. However, that product could be an 'early access,' not a fully supported version, Schwartz said. Sun would like Liberty to grow from a multi-company collaboration to a formal standard endorsed by the World Wide Web Consortium, Schwartz said. 'I'd be thrilled to see it in the W3C,' but that move will be up to the Liberty Alliance partners, not just Sun, to decide'..."

  • [June 21, 2002] "Sun To Give Away J2EE Server." By Richard Karpinski. In InternetWeek (June 19, 2002). "Sun Microsystems on Wednesday said it will offer a basic, but full-featured version of its Java application server for free, not only on its own Solaris operating system but on Windows, Linux, IBM-AIX, and HP-UX as well. That decision will no doubt shake up the application server market, where Sun today badly lags market leaders IBM and BEA Systems. Sun has little to lose by trying to change the rules of the game -- and it is doing so by hoping to commoditize the J2EE app server and make the game more about the rest of the middle-tier 'stack,' including directory, identity, integration, messaging, and portal servers. In addition to the new pricing, the new app server includes fully integration with the Java Web Services Developer Pack, which includes support for SOAP, UDDI, and WSDL, including straightforward ways to bind these XML technologies with Java code and classes... Sun also announced an all-new Sun ONE Developer platform, a full slate of tools for creating, assembling, and deploying Java Web services. The platform includes Sun ONE Studio 4.0 and Sun ONE Application, Portal, Identity, Registry, and Integration server software..."

  • [June 19, 2002] "Information Management: Challenges in Managing and Preserving Electronic Records." From the United States General Accounting Office (GAO). Report to Congressional Requesters. June 2002. 83 pages. Reference: GAO-02-586. See Appendix II: "Approaches to Archiving Electronic Records Provide Partial Solutions." Excerpt: "In the wake of the transition from paper-based to electronic processes, federal agencies are producing vast and rapidly growing volumes of electronic records. The difficulties of managing, preserving, and providing access to these records represent challenges for the National Archives and Records Administration (NARA) as the nation's recordkeeper and archivist. GAO was requested to (1) determine the status and adequacy of NARA's response to these challenges and (2) review NARA's efforts to acquire an advanced electronic records archiving system, which will be based on new technologies that are still the subject of research... The challenge of managing and preserving the vast and rapidly growing volumes of electronic records produced by modern organizations is placing pressure on archives and on the information industry to develop a costeffective long-term preservation strategy that will free electronic records from the constraints of proprietary file formats and software and hardware dependencies... After considerable research in this area, some agreement is being reached on the metadata (data about data) required for preserving electronic records, and some practical applications are using XML (Extensible Markup Language) for creating such metadata. However, there is no current solution to the electronic records archiving challenge, and so archival organizations now rely on a mixture of evolving approaches that generally fall short of solving the long-term preservation problem. The four most common approaches -- migration, emulation, encapsulation, and conversion -- are in use or under consideration by the major archives. NARA is supporting the investigation of a new approach involving records conversion (known as persistent object preservation), but this has yet to mature... The Victoria archive uses XML to encapsulate records along with standardized metadata describing each record in a Victorian Electronic Record Strategy (VERS) format. XML is also used by the National Archives of Australia, which converts files from their native formats to XML versions, while retaining a copy of the original source file. The Australian archives has also developed a metadata model, but it has not yet determined its final preservation metadata requirements... NARA is investigating an advanced form of conversion combined with encapsulation known as persistent object preservation (POP). Under this approach, records are converted by XML tagging and then encapsulated with metadata. According to NARA, the persistent object transformation approach would make electronic records self-describing in a way that is independent of specific hardware and software. The architecture for POP is being developed through the National Partnership for Advanced Computational Infrastructure. The partnership is a collaboration of 46 institutions nationwide (including NARA) and 6 foreign affiliates, with the San Diego Supercomputer Center serving as the technical resource..." See also "Australian Public Record Office Victoria Uses VERS Standard for Records Management."

  • [June 19, 2002] "Imaging Group Clicks on Net Photos ." By David Becker. In CNET News.com (June 17, 2002). "A coalition supported by some of the biggest companies in digital imaging announced Monday an open standard and network intended to simplify ordering photo prints. The International Imaging Industry Association (I3A) -- a nonprofit trade group supported by Eastman Kodak, Hewlett-Packard, Fujifilm and others -- is developing the Common Picture Exchange Environment (CPXe), a new standard for distributing photos over the Internet. The I3A will maintain a directory of retail photofinishers that support the standard and will supervise the network that will allow online photo services, retail photofinishers and other services supporting CPXe standard to exchange images with each other. In one of the most common scenarios, consumers would send images from their PC to an online storage service, order prints and pick them up at a neighborhood photofinishing shop. This process is more similar to the film-based system that consumers are familiar with than that of current digital alternatives such as ordering photos online and waiting for them to arrive in the mail, or using a print kiosk at a photo store, said Lisa Walker, executive director of the I3A... Mark Cook, director of product management for Kodak's digital imaging division, said that while the initial focus will be on connecting consumer PCs with photofinishers, new applications are likely to emerge as support for CPXe proliferates... Ramon Garrido, director of digital imaging programs for Hewlett-Packard, said he expects CPXe to generate widespread support in the imaging industry... CPXe will be based on established Web services standards such as XML and Simple Object Access Protocol (SOAP), making it one of the first major efforts of the much-hyped Web services push aimed at consumers..." See: (1) the technical white paper, and (2) the announcement: "Leading I3A Imaging Companies Develop Industry Initiative To Expand Digital Photofinishing Services for Consumers. Eastman Kodak Company, Fujifilm and HP Give Consumers More Ways to Print Digital Images and Create New Business Opportunities for Retailers."

  • [June 19, 2002] "Reap the Benefits of Document Style Web Services. Web services are not exclusively designed for handling remote procedure calls." By James McCarthy (President and CTO, Symmetry Solutions, Inc.). From IBM developerWorks, Web services. June 2002. "While most Web services are built around remote procedure calls, the WSDL specification allows for another kind of Web services architecture: document style, in which whole documents are exchanged between service clients and servers. In this article, James McCarthy explains what document style is and when you should use it. Buried deep in the Web Service Definition Language (WSDL) specification is a very subtle switch that can turn the SOAP binding of a Web service from a remote procedure call to a pass-through document. The style attribute within the SOAP protocol binding can contain one of two values: rpc or document. When the attribute is set to document style, the client understands that it should make use of XML schemas rather than remote procedure calling conventions. This article will provide a description of this WSDL switch, describe its benefits, and explain when you should use pass-through documents... [Conclusion:] When designing your next Web service, you need to consider all of the options that the current WSDL specification gives you. Before starting with a procedural interface, consider how the service will be used, who will be using it, and the type and volume of information that needs to be exchanged. Designing and developing a document style Web service may require a little more effort, but in many cases the effort will pay off in the quality of information and the reliability of the exchange..."

  • [June 18, 2002] "Understanding Educational Technology Interoperability Standards: An Annotated Resource List." By Raymond Yee (IST -- Interactive University). In Berkeley Computing and Communications Volume 12, Number 3 (Summer 2002). "There has been much recent activity on campus surrounding course management systems, also known as learning management systems (LMS). Purchased solutions, such as Blackboard and WebCT, have been in use for several years. A third system, CourseWeb, is currently being developed by the campus. ETS (Educational Technology Services) and IST are planning the future of learning management systems on campus. Any 'new product will be a confluence of existing University systems, best practices at Berkeley, and developments from other universities and commercial courseware companies'. In many discussions surrounding LMS, the issue of educational technology interoperability standards (hereafter referred to as interoperability standards) arises. One encounters phrases such as 'conforms to all relevant IMS standards' and 'led the pack with first AICC certified product in the authoring tool category and continue to be a frontrunner in SCORM compliance as well as other learning standards'. Interoperability standards are generally portrayed as being beneficial and desirable. But what exactly are these standards about? To help readers unravel the mysteries of interoperability standards, I present the following list of articles, websites, and resources that I have found particularly helpful in this regard..." See the news item of 2002-06-18: "OASIS Discussion List for a Proposed edXML Technical Committee."

  • [June 18, 2002] "An XSL Calculator: The Math Modules of FXSL." By Dimitre Novatchev. ['This article describes the 32 functions of the latest 6 FXSL math modules'] "This article is a follow-up from three recent publications on functional programming in XSLT. Described are various new modules of the XSLT functional programming library FXSL, which implement trigonometric and hyperbolic-trigonometric, exponential and logarithmic functions and their reverse functions. Also implemented are two general methods of solving equations of one real variable. These are used within an XSLT Calculator - an interactive XSLT application driven entirely by a single stylesheet, which depends on an external object for its IO. The code of the implementation demonstrates the use of such powerful, functional programming design patterns as partial application and creation of functions dynamically... Conclusion: We described the newest 6 modules of FXSL, which implement a variety of useful math functions and two numerical methods of finding the root of a continuous function of one real variable. Their use was demonstrated through the construction of a simple yet powerful, accurate and efficient XSL Calculator. The XSL Calculator also demonstrates a reliable method of serializing a series of calls to extension functions, closely following the definition of the Monad class in Haskell." See the SourceForge project "FXSL -- the Functional Programming Library for XSLT." The FXSL functional programming library for XSLT "provides XSLT programmers with a powerful reusable set of functions and a way to implement higher-order functions and use functions as first class objects in XSLT." For related resources, see "Extensible Stylesheet Language (XSL/XSLT)."

  • [June 18, 2002] "XML Stores Get Richer Queries." By Matt Hicks. In eWEEK (June 17, 2002). "Native XML database developers X-Hive Corp., Excelon Corp., Ipedo Inc. and Software AG are adding more support in upcoming releases for emerging standards for such functions as querying. Much of the focus for the developers with their latest crop of XML databases is on bolstering querying capabilities through the XQuery XML data retrieval standard. X-Hive, for example, last week began shipping Version 3.0 of its X-Hive/DB, which supports XQuery, said officials in Rotterdam, Netherlands. Separately, Excelon, in a point release to its Extensible Information Server, due next month, will add full XQuery support. Also in releases planned over the next year and a half, the Burlington, Mass., company aims to support the XForms standard for handling XML forms, officials said. Ipedo, of Redwood City, Calif., is beefing up its current XQuery support. Version 3.1 of its namesake XML database, due next week, will be able to perform updates in the querying language. That release will also include support for the WebDAV, or Web-based Distributed Authoring and Versioning, protocol so documents from popular client applications can be published into the database server, officials said. For its part, Software AG plans to add full XQuery support in the next major release of its Tamino XML Server, Version 4.11, due by the end of the year. That release will also include validation of XML Schema; enterprise-level backup and restore of the database; and improved tools for Web services features such as Universal Description, Discovery and Integration directories, said company officials, in Darmstadt, Germany. All these companies are looking to extend their technological lead over Oracle Corp., IBM and Microsoft Corp., which offer XML add-ons and have plans to embed XML support deeper within their database engines..." See: "XML and Databases."

  • [June 18, 2002] "IBM Submits Mobile Standard to OASIS." By Ephraim Schwartz. In InfoWorld (June 17, 2002). "The mobile infrastructure may soon have its own Web services protocol defining standard methods for presenting enterprise applications on handhelds. IBM submitted to the OASIS (Organization for Advancement of Structured Information Standards) Web services standards body earlier this month a protocol called WSRP (Web Service Remote Portal) to allow wireless devices to access portlets. A portlet is any application launched from a portal server, according to Rod Smith, vice president of Internet Emerging Technologies, at IBM in Raleigh, N.C... When a device calls a portal server, WSRP will tell the server how much screen real estate is available, for example. The portlet aggregates information to build a page and generates it in XML. WSRP would sit on top of SOAP (Simple Object Access Protocol) and WSDL (Web Services Description Language) layers. 'The idea is to make a portlet Web service-ized, and in this way I can generate XML interfaces for voice or other interfaces,' said Smith. But while Smith admitted that WSRP and many of the other middleware protocols are still maturing, one ISV sees the lack of 'mature' standards as a major worry for ISVs..." See Web Services for Remote Portals (WSRP) TC.

  • [June 18, 2002] "Web Services for Remote Portals (WSRP) Overview." By OASIS WSRP Technical Commitee. April, 2002. "Web Services for Remote Portals (WSRP) will define a standard for interactive, user-facing web services that plug and play with portals. WSRP will define: (1) A WSDL interface description for invocation of WSRP services; (2) How to Publish, Find, Bind WSRP services and metadata; (3) Markup Fragment Rules for markup emitted by WSRP services; (4) Applicable Security Mechanisms, Billing information..." See Web Services for Remote Portals (WSRP) TC. [source PPT]

  • [June 18, 2002] "Introduction to the Controlled Trade Markup Language (CTML) Technical Committee." Draft document. May, 2002. Posted 2002-05-19 in a message from Todd Harbour. "This document provides background information for the CTML TC, identifies its goals and objectives, identifies areas of cooperation, approach the controlled trade domain, and status of activities." See (1) the recent announcement: "OASIS Members Form Technical Committee to Develop International Standard for Controlled Trade. Supports Coordinated Management Strategy for Transfer of Sensitive and Strategic Goods"; (2) the main reference page "Controlled Trade Markup Language (CTML)". [cache ZIP/PPT original document; OASIS link 2002-05-23]

  • [June 17, 2002] "application/xenc+xml Media Type Registration." By Joseph M. Reagle Jr. (W3C; Massachusetts Institute of Technology Laboratory for Computer Science). IETF Internet-Draft. Reference: 'draft-reagle-xenc-mediatype-00'. June 2002, expires: October 2002. ['describes a media type (application/xenc+xml) for use with the XML Encryption specification'] "The XML Encryption Syntax and Processing document specifies a process for encrypting data and representing the result in XML. The data may be arbitrary data (including an XML document), an XML element, or XML element content. The result of encrypting data is an XML Encryption element which contains or references the cipher data. The application/xenc+xml media type allows XENC applications to identify XENC documents for processing. Additionally it allows applications cognizant of this media-type (even if they are not XENC implementations) to note that the media type of the decrypted (original) object might a type other than XML. This media-type is only used for documents in which the XENC EncyptedData and EncryptedKey element types appear as the root element of the XML document. XML documents which contain XENC element types in places other than the root element can be described using facilities such as W3C XML-schema or 'Registration of xmlns Media Feature Tag' (StLaurent)..." See: "XML and Encryption."

  • [June 17, 2002] "Microsoft's BizTalk Speaks to Small Biz." By Wylie Wong. In ZDNet News (June 17, 2002). "Microsoft is courting smaller companies with new versions of its business integration software. The software company on Monday released two new versions of its BizTalk Server software, which is designed to help companies link computing systems to enable communications and to conduct e-commerce transactions using Extensible Markup Language (XML). The two new versions are aimed at small and midsized companies. In February, Microsoft released an update to its BizTalk Server, which included new software that helps automate connections between companies, and new technology that manages BizTalk and monitors the status of the transactions. But the $25,000 price tag for the BizTalk Server 2002 Enterprise Edition was limiting the product to large corporations, Microsoft executives said... With BizTalk, Microsoft competes with software makers IBM, Oracle, Tibco, Vitria, WebMethods, SeeBeyond and others in the growing market for integration software. As more companies take their businesses to the Web, systems that were never meant to be integrated, now must be tied together, which makes integration software necessary..." Pricing details are available in the announcement: "Microsoft BizTalk Server 2002 Partner and Standard Editions Bring Integration to the Masses. Microsoft Continues to Lower the Barrier for Integration by Offering Two New Editions Of BizTalk Server 2002 That Meet the Needs of Small to Medium-Sized Businesses." See "BizTalk Framework."

  • [June 17, 2002] "Microsoft Looks to Serve The Little Guys." By Carolyn A. April. In InfoWorld (June 17, 2002). "Looking to lower the cost barriers to connecting to an electronic trading hub, Microsoft on Monday rolled out two entry-level versions of BizTalk Server 2002 -- including one that carries a sub-$1,000 price tag. The new editions, BizTalk Server Standard and BizTalk Partner, are not 'feature-crippled,' according to Microsoft officials, meaning that they will sport all the technology found in their Enterprise Edition brethren. However, the Standard and Partner editions will support a limited number of CPUs, integrated applications, and trading partners... Wolf offered the example of a small supplier in the high-tech industry that sells to a limited number of four manufacturers, each dealing in different electronic exchange formats such as EDI, XML, and flat files. With BizTalk Standard, Wolf said, the small supplier gets the technology necessary to conduct business electronically in those varied formats, but at a deployment price much lower than found in a multipartner, multiapplication trading hub implementation. BizTalk Standard is priced at $6,999. It is limited to one CPU, 10 trading partners, and five applications that users can integrate within the firewall. BizTalk Partner edition, priced at $999, is limited to one CPU, two internal applications, and two trading partners..."

  • [June 17, 2002] "Novell Heats Web Services Agenda." By Peter Sayer, Carolyn A. April, and Paul Krill. In InfoWorld (June 14, 2002). "Attempting to strengthen its Web services plans, Novell this week bid to purchase SilverStream Software, just days after joining the UDDI (Universal Description, Discovery, and Integration) Advisory Group. The $212 million acquisition of Billerica, Mass.-based SilverStream, which specializes in Web services application development tools, will extend Novell's offerings, enabling it to help enterprises deploy advanced Web applications... Novell plans to rebrand SilverStream's eXtend software line as its own, and SilverStream president and CEO David Litwack will become a senior vice president at Novell. In an effort to wield its directory expertise, Novell earlier this month joined the UDDI Advisory Board. "We think that we have expertise in directory technology, and this is a classic directory application," said Winston Bumpus, director of standards at Provo, Utah-based Novell..." [According to the announcement:] "With the acquisition of SilverStream, Novell will be in the position to help enterprises meet these challenges by providing a 'Services Oriented Architecture.' A Services Oriented Architecture is a loosely-coupled, standards-based, process-driven, directory-enabled, network-aware architecture that facilitates the development of Web services-based applications. It enables organizations to get maximum value from the systems, users, devices, business processes and information resources that comprise their corporate assets." See: "Universal Description, Discovery, and Integration (UDDI)."

  • [June 17, 2002] "Web Services Interoperability Standards: Accelerating Web Services Adoption." By Chris Kurt (Microsoft). May 15, 2002. Based on 14 slides. Presentation to the XML.gov XML Working Group. "The shift to Web services is underway [An Internet-native distributed computing model based on XML standards has emerged; Early implementations are solving problems today and generating future requirements; The Web services standards stack is increasing in size and complexity to match functionality requirements]. The fundamental characteristic of Web services is interoperability [Assumes consistency across platforms, applications, and programming languages]. What is needed? (1) Guidance [Implementation guidance and support for Web services adoption; A common definition for Web services] (2) Interoperability [Across platforms, applications, and programming languages; Consistent, reliable interoperability between Web services technologies from multiple vendors; A standards integrator to help Web services advance in a structured, coherent manner]..." Available also in HTML and PPT formats. See: "Web Services Interoperability Organization (WS-I)."

  • [June 17, 2002] "Commerce One Attempts Web Services Make-Over." By Renee Boucher Ferguson. In eWEEK (June 14, 2002). "Commerce One Inc. is reshaping itself in the image of a Web services integration software provider... now redesigning its namesake business-to-business collaboration platform to provide Web services and application integration capabilities. In addition to the new version, called Commerce One 6.0 and due to begin beta testing in the fourth quarter, the company is rewriting its sourcing and procurement applications to be enabled for Web services, officials said. In the meantime, the Pleasanton, Calif., company will merge its Collaborative 5.0 source-to-pay software and Exterprise platform. Together, officials said, the product will create a development platform that will enable companies to connect people and business processes inside and outside the enterprise with native Web services. 'We are in a unique position to fully embrace Web services technology that will ... enable customers to realize substantial costs savings throughout their organizations,' said Commerce One CEO Mark Hoffman... Commerce One also faces the challenge of getting supply chain partners to adhere to standards, according to Tom Restaino, manager of e-business at ITT Industries Inc. and the chair of Commerce One's independent user group. 'The entire [user] community thinks this is a good way to go, since classical procurement is not returning the value,' said Restaino, in Palm Coast, Fla. 'If Commerce One can pull it off and get these other partners to standardize on business practices, they'll go a long way'..."

  • [June 17, 2002] "Office Work Is Never Done." By eWEEK Senior Editor Peter Galli. In eWEEK (June 17, 2002). Interview with Steven Sinofsky, senior vice president of Office at Microsoft. "... [Sinofsky] You can expect to see continued investment in areas like collaboration and XML for analyzing and importing and exporting information. We're still early on the adoption curve of these technologies with many customers. XML technology hasn't quite made it into the mainstream yet, and as we start to improve the support we have on our desktop applications you'll start to see people increasingly interact with XML from their desktop or laptop computers. You'll see far more XML-enabled applications talking to Web services that spew out XML, and we're planning a lot of work in the areas of communication and collaboration. We started adding e-mail and calendaring and desktop management information in Outlook, and that's going to be the No. 1 area where we're investing and innovating. Also, the idea of making organizational intelligence, where XML is used to find and analyze all the key information a customer has, available to Outlook is a key advancement we'd like to make. Smart tags are integral to this. But the next level of XML support is having a server that puts forward a vast amount of XML information, like the entire day's sales or reservations, and then have tools like Excel and Outlook able to report or analyze that information. Or have a tool like Word that can mail merge documents to a whole lot of customers from a Web service that allows Word to connect to a data source, say via a URL, and do a mail merge directly from that data source. And that all comes just from using the existing infrastructure information a company has and exposing them in this standardized XML Web service manner. If you take the investments companies have made in business applications like CRM or ERP, where only a few people in the company benefit from that information, Web services will enable many more people to find value in that corporate investment in those applications and systems... The key thing about software as a service is the kinds of scenarios it enables. Software gets better if you have access to services. Office going forward will continue to become a great client for all Web services and the power of XML is that it's data that describes itself, so we'll all be able to communicate with all sorts of services going forward..."

  • [June 17, 2002] "Microsoft to Give XML Bigger Role in Office." By Peter Galli. In eWEEK (June 17, 2002). "Microsoft Corp., which faces mounting competition and price pressure focused on its Office desktop productivity suite, is set to release the first beta of an Office upgrade. Due later this year, the beta will feature far greater use of XML and Web services for reporting, analyzing, importing and exporting information -- particularly in Outlook and Excel, Steven Sinofsky, senior vice president for Office, in Redmond, Wash., told eWeek. Although many users agreed XML and Web services will become important, some said additional features for Word, Excel and Outlook are unnecessary and could add unwanted complexity. They also said the continued high cost of the product and Microsoft's onerous volume licensing plans are making them reconsider upgrading... Making 'organizational intelligence' -- where XML is used to find and analyze a customer's key information -- available to Outlook is a key advancement Microsoft wants to make to the product, Sinofsky said. While Smart Tags are integral to this strategy, the next level of XML support will center on a server housing a vast amount of XML information, such as a day's worth of sales data or reservations, and then having tools such as Excel and Outlook able to report or analyze the information, Sinofsky said. 'Or [you could] have a Web service that allows a tool like Word to connect to a data source -- say, via a URL -- and do a mail merge directly from that data source,' he said. 'All of this will use a company's existing infrastructure and information and expose them in this standardized XML Web service manner.' As Microsoft improves support on its desktop applications, people will increasingly interact with XML from desktop or laptop computers, Sinofsky said..."

  • [June 17, 2002] "XInterfaces: A New Schema Language for XML." Final thesis by Oliver Nölle. Programming Languages Group, Institute for Computer Science, University of Freiburg, Germany. June 12, 2002. Thesis supervisor, Prof. Dr. Peter Thiemann. 106 pages, with 45 references. Abstract: "A new schema language for XML is proposed to enhance the interoperability of applications sharing a common dataset. An XML document is considered as a semi-structured database, which evolves over time and is used by different applications. An XInterface defines a view of an XML document by imposing constraints on structure and type of selected parts. These constraints are not grammar-based but specify an open-content model, allowing additional elements and attributes to be present anywhere in the document. This enables each application to define and validate its own view on the document, with data being shared between applications or specifically added by one application. XInterfaces feature an explicit type hierarchy, enabling easy extension of existing schemas and documents while guaranteeing conformance of the extended documents to the existing views. This allows data evolution without breaking compatibility of existing applications. Because different applications share one document, access mechanisms are described that guarantee the validity of the document for all applications after modifications. As a proof of concept, a tool was implemented that maps XInterfaces to a class framework in Java, allowing convenient access to those parts of an XML document that are described by XInterfaces." Notes from Oliver Nölle in a posting dated June 16, 2002: "As a final thesis project I created a new schema language named 'XInterfaces'. It is a very simple language, similar to XML Schema in syntax and similar to Schematron in semantics, featuring an explicit type hierarchy which allows multiple inheritance. Although it is a very simple concept, I think it is a very useful one. I'm announcing it here as this language has some features that XML Schema currently does not have, but which I found very useful in certain scenarios (in particular, multiple inheritance and open content model). Maybe later versions of XML Schema are moving towards implementing these features and XInterfaces can be an inspiration. If you are interested, you can find my thesis, a sample validator implementation and related material on the XInterfaces home page..." See also: (1) A sample XInterface type definition which could be applied to this instance document; (2) XInterface type definition for XInterface type definitions; (3) XML Schema schema that defines the syntax of XInterface type definitions; (4) sample implementation of an XInterface schema validator. General references: "XML Schemas." [cache]

  • [June 17, 2002] "XML and the Law. A Gentle Voyage Round XML, Electronic Case Files and Electronic Filing." By Roger Horne (Chambers of Sonia Proudman QC). Presentation given to the Society for Computers and Law, Internet Interest Group, London. June 2002. ['designed as simplistic, but it was billed as a non-technical talk'] See the sample Claim Form (cited in paragraph 34). From the Conclusion: "I firmly believe that XML will soon become the normal output format used in the production of legal documents. It has substantial advantages in relation to pleadings, electronic case files and electronic filing. XML requires the development of DTDs or Schemas and the question that then arises is how should these be developed? In its response to the MCC Report the SCL suggested that the SCL should lead the way. In the MCC paper Court Service suggested that it should lead the way. (An alternative would be the latest legal Charity, BAILII.) I tend to the view that this sort of thing is best dealt with by a non-governmental body, but I do not think that much turns on this provided that Court Service stick to the declaration made towards the end of new Modernising the Civil and Family Courts paper that 'As a key part of CTMP we will ... ensure users, advisers, judges and staff can play an active role in developing the modernisation program' In the United States LegalXML has had a very great influence on the developments of standards. Although it has the support of the judicial authorities its work is carried out in the main by practitioners and academics. In my view practitioners (in the widest sense of that expression) must be fully involved in the process in this country..."

  • [June 17, 2002] "Editing XML Data Using XUpdate and HTML Forms." By Chimezie Ogbuji. From XML.com. June 12, 2002. ['One of the most common tasks when programming a Web application is processing input from a user. This frequently involves taking input from an HTML form and storing it in a database or a file. Our main feature this week shows how the creation of such interfaces can be streamlined using XML. Chimezie Ogbuji outlines a technique using a simple schema language, some XSLT, XUpdate (a language for expressing changes to XML files) and HTML forms. Using these tools most of the work can be taken out of creating Web interfaces for editing data.'] "In this article I will discuss how XUpdate can be used in conjunction with XSLT to write tools for authors of web-based applications that will automatically generate HTML forms for editing various kinds of data. Forms are the primary means that web-based applications collect information from a user. XML, XUpdate, XSLT can be employed to automate this process for arbitrary data types... XUpdate is an XML language for updating arbitrary XML documents. XUpdate makes heavy use of XPath for selecting a set of nodes to modify or remove. XUpdate was the obvious choice to use for updating our arbitrary data documents. And we will be writing a stylesheet which generates XUpdate documents that can be used to update XML application data automatically. The reader should briefly review the specification to refresh her or his knowledge of XUpdate syntax..."

  • [June 17, 2002] "Generating SOAP." By Rich Salz. From XML.com. June 12, 2002. ['Rich follows up on last month's column, which used the Google SOAP API as a way to critique WSDL, with an example of building an application that uses the Google SOAP API.'] "Last month we used the Google web services API to point out some warts in WSDL. This month we'll use the same API to walk through the steps involved in building an application which uses Google. We'll do the implementation in Python. Python is open source and runs on all the popular platforms. Python is the kind of language that's very well-suited to SOAP and XML processing: it's object-oriented, so you can build large-scale programs; it allows rapid development cycles, and it has powerful text manipulation primitive and libraries, including comprehensive Unicode support. It also provides automatic memory management, good support for introspection (i.e., a program can examine its code and datatypes), and has an active XML community... We have a couple of choices for the SOAP stack, each choice bringing its own set of features: (1) SOAP.py -- a small, streaming (SAX-based) parser; (2) SOAPy -- includes basic WSDL (and Schema) support; (3) ZSI -- emphasis on native datatype support, and DOM based. We'll use ZSI because of the emphasis it places on typing, and because I wrote it. ZSI is a pure-Python open source SOAP implementation..."

  • [June 17, 2002] "The IETF, Best Practices and XML Schemas." By Leigh Dodds. From XML.com. June 12, 2002. ['Leigh Dodds has culled the latest discussions from the XML developer community for the XML-Deviant column. Under the spotlight this week is the IETF's recently published draft on XML best practices, and in particular a vigorous debate it ignited over XML schema languages.'] "In this week's XML-Deviant column, I examine an XML best practice guide under development by the IETF, as well as the XML Schema language debate which it has reignited. A new Internet Draft, 'Guidelines for the Use of XML within IETF Protocols', is currently working its way through the IETF process. Its authors have recognized the growing interest in using XML to represent structured data within Internet protocols; thus, they're producing a document that '...describes basic XML concepts, analyzes various alternatives in the use of XML, and provides guidelines for the use of XML within IETF standards-track protocols.' [...] The Internet Draft acknowledges that there is an ongoing schema debate in the XML community and that several alternatives are available, but it concluded that unless a good reason could be identified, W3C XML Schemas (XSD) should be the schema language of choice. This recommendation triggered a response from James Clark which set out in no uncertain terms his view of XML Schema. In the posting Clark underscored RELAX NG's strengths, including stability, open development, the presence of independent interoperable implementations, its grounding in tree automata theory, and its ease of use. Clark explained that in his estimation the key criteria for selection of a schema language should be its ability to '...communicate unambiguously and precisely to a human reader what XML documents are legal for that application; it serves a similar role for XML that ABNF does for text.' The remainder of his message makes a very strong case that RELAX NG (RNG) meets this criterion better than XML Schema... James Clark and others have raised technical issues with XSD which have yet to be clearly answered. However, there are clearly use cases for which XSD is eminently suited, notably data exchange. Its strong typing features, while clearly controversial for many, are needed for some applications. We might then recommend the alternatives, as Rick Jelliffe advises, based on their strengths in particular application domains: XSD for exchange of strongly typed data and RNG for document validation. This certainly seems the safest course at present. Yet there are signs that the RELAX NG developers are setting their sights on addressing this imbalance. In a message to XML-DEV James Clark explained that type assignment could be achieved in a modular way, possibly defined in a separate 'RELAX NG Type Assignment' specification... This area of research also seems to be of interest to Murata Makoto who recently explained that type assignments can be made from RELAX NG schemas without having to perform full validation. While these kind of layered architectures have been discussed for some time, the fact that RELAX NG and DSDL have these as core design principles suggests that they have a very real chance of delivering them."

  • [June 17, 2002] "Keeping Web Services Simple." By Parand Tony Darugar. Guest Editorial. In New Architect Magazine (July 2002). ['Leave heavy lifting to the implementations.'] "... The UDDI specification currently stands at version 2, and the community is working toward version 3, which is scheduled for delivery this year. Many community members envision UDDI being used to track and manage a wide variety of corporate assets, beyond what might typically be regarded as Web services. At a past meeting, members discussed how assets such as Word documents, Excel spreadsheets, and even delivery trucks would be managed using UDDI registries. Versions 2 and 3 of the specification support multiple rich taxonomies for classifying the assets stored in the registry, allowing complex queries to be performed. While these capabilities will certainly be useful to some subsets of users, the majority of Web services users are simply looking for a way to publish and discover SOAP-based Web services. The ability to keep track of a company's physical assets or office locations via a taxonomy in a UDDI directory should be a future consideration at best. UDDI is being directed to solve the universal problem of categorization and classification for almost any type of resource, while most users simply want a Web service registry... As we look to enhance Web services with more capabilities and more standards, we must remember the golden rule: Keep the standards simple, and make the implementations powerful. The most successful standards, such as IP, HTTP, SQL, and SMTP, have gained ubiquity and enterprise-class abilities by keeping the concepts and the scope of what they specify straightforward, and leaving the heavy lifting to the implementations. While standards are an important part of making Web services the next major application development architecture, their implementations and adoption are the real measures of success. Web services won't gain ubiquity if they're too complex, and if the changes aren't driven by end users. Real-world implementations of Web services will make standards like SOAP, WSDL, and UDDI the backbone of the next generation of inter and intra-enterprise applications. We just have to remember to keep our universe well defined -- and simple."

  • [June 17, 2002] "Fate of the Commons: The Danger of Owning Ideas." [Expert Opinion. Crow's Nest.] By Lincoln Stein. In New Architect Magazine (July 2002). "'He who lights his taper at mine, receives light without darkening me'. These are the words of Thomas Jefferson, the first director of the U.S. Patent Office, and one of the heroes in Lawrence Lessig's provocative book The Future of Ideas: The Fate of the Commons in a Connected World. Despite being head of an agency whose raison d'être was protecting intellectual property, Jefferson seems to have been skeptical about treating ideas as a property right. To Jefferson, the realm of ideas was distinct from that of tangible objects. In the tangible world, resources are finite. If two farmers pasture their cows in the same field, the amount of forage available to each farmer's cows will be reduced. By contrast, ideas are inexhaustible. When I share an idea, my access to the idea is in no way diminished. This is one of the themes of Lessig's book -- that the Internet is an intellectual commons built on freely shared ideas, a fertile environment that spawned the greatest period of innovation that we've seen in the last half century. Lessig identifies several factors that made the Internet successful: a "dumb" network that relies on smart devices at the edges, but treats all traffic democratically; humility on the part of its inventors who couldn't predict how the network would be used, and so built for generality; and a tradition of open specifications, open software, and a free exchange of ideas. The other theme of the book is that this period of freedom is rapidly ending as the old guard acts to protect its vested interests. The backlash takes many forms: software patents that turn common sense practices into intellectual property... Many of the issues that Lessig discusses in his book are themes that I've highlighted in this column. I'm extremely concerned about the trend against traditional fair use of copyrighted works. Scholarship, musical composition, and literature are all built on the ability to use our predecessors' ideas in novel and transformative ways. This tradition of fairness balances the original artist's need for compensation with the social good that comes from making creative material available for reuse..." See Lessig's work cited in connection with 'Patents and Open Standards'.

  • [June 17, 2002] "Cadence Design Deploys Meta-Tagging Tool." By Richard Karpinski. In InternetWeek (June 14, 2002). "Electronics manufacturer Cadence Design Systems has implemented new content management tools from Interwoven that pushes content-posting down the food chain while maintaining consistent meta-tagging to make the content easier to organize and search. Cadence said it has deployed Interwoven's TeamSite content management platform and its MetaTagger software for tagging an array of content including HTML files, graphics, video, and office documents. The content platforms will help the company manage it customer-facing marketing and support Web sites...Interwoven's TeamSite provides a simple Web-based interface for posting Web content per pre-defined business rules. The vendor's MetaTagger application automatically classifies content, relieving users of the need to apply complex data taxonomies. Cadence manages more than 250,000 files with TeamSite. One of the benefits of the MetaTagger is that if content categories change, they can be applied site-wide via a simple conversion tool..."

  • [June 17, 2002] "Automatic Categorization: Next Big Breakthrough in Publishing? [Content Management.]" By Luke Cavanagh. In Seybold Report: Analyzing Publishing Technology [ISSN: 1533-9211] Volume 2, Number 6 (June 17, 2002). ['Organizations with millions of digital documents have already bumped up against the limitations of full-text searches, and the costs of manual indexing can be enormous. In principle, it is now possible to find files about a subject even if they don't contain the exact search terms or keywords. We take a look at this nascent field and four of the players. Automated categorization software may come to the rescue, but it's not for the faint of heart -- nor the faint of checkbook.'] "... The job of a taxonomy, or formal classification scheme, is to determine where and under what headings things are filed, much as a library stamps a number on the spine of a book and files it in the stacks accordingly. To end users, taxonomies offer a way to browse items in categories of interest -- just as you might browse a section of books in a library. In contrast, full-text searches are wonderful at pinpointing documents that match specific queries, but they return only those documents that contain the specific search term. A taxonomy not only brings order to the collection for those maintaining the content, but also helps keep it accessible even as it grows over time. While pointing to long-time players Autonomy and Verity as key rivals for potential accounts, most of the competitors are also quick to point out the major differences in their own approaches... Beyond these two, there are dozens of younger vendors that see prime market real estate up for grabs. Among them are Sageware, Smartlogik, Semio, Inxight, Yellowbrix, Quiver, Applied Semantics, Clear Forest, Engenium, Stratify, Nstein, Triplehop Technologies and Data Harmony. Most of these firms focus on unstructured content that is ingested by an organization's digital innards; their products try to decide what is useful and what should be trashed, and also try to put things in the proper drawers. Two concepts drive nearly every vendor offering we've seen: artificial intelligence and better organization... The content-categorization space has evolved from its technology-development phase to a market-testing phase, and we're now at the point where solid contenders will start to emerge. Because most products carry six-figure price tags, large enterprises are the natural testing ground for the various approaches now on the market. Spending six figures for a trial is a plausible option for these companies; they have both the technology-research budgets and the potential payback if the system works..."

  • [June 17, 2002] "Redefining the Publishing Process. [The Latest Word.]" By Laurel Brunner. In Seybold Report: Analyzing Publishing Technology [ISSN: 1533-9211] Volume 2, Number 6 (June 17, 2002). "Increasingly, employees at every step of the production chain have to participate in the editorial process. We recently attended a Pira conference on Redefining the Publishing Process. While many a conference has such a stated goal, this one focused not on technological issues, but on how production methods can contribute to new business models. .. Redefining the 'people part' of the equation is only one component of the task. Fundamental to optimal cost control and content management is automation, and here technology is making a major contribution in all sectors. Much as standard operating systems and hardware platforms liberated production and editorial systems, so data standards and common formats are liberating content management. XML drives virtually any content application, from newspapers such as The Wall Street Journal to technical documentation (e.g., Ford's dealer manuals). It is even relevant in IT, where it is used in support systems, plus such things as a middleware management and Web services. For most presenters at the Pira conference, the issue wasn't so much about whether to use XML, but how to use it. XML is no longer an esoteric hybrid, part SGML and part HTML. It plays a crucial role in many a high-volume production system, from newspapers to tech-doc. XML is consensus technology, unifying digital data management and reaching across boundaries, both within the publishing industry and beyond. The primary concern for publishers should be effective XML implementation as a means of driving throughput and of optimizing resource management. The XML question is not so much if, as how..."

  • [June 17, 2002] "Sansui: Publishing Software from India. Company Debuts New Web-based Newspaper Systems for Ads." By John Parsons and George Alexander. In Seybold Report: Analyzing Publishing Technology [ISSN: 1533-9211] Volume 2, Number 6 (June 17, 2002). "Indian developer Sansui Software has debuted two new Web-based newspaper systems for generating real-estate and automotive ads, as well as new versions of its design and XML plug-ins for InDesign. It has also demonstrated the latest versions of its 'Smart' series of database-driven applications for newspaper production. SmartProperty system is a tool for laying out real estate ads for publication in newspapers or real-estate booklets. The system is designed for the type of publication in which each page contains offerings from a single realty company, arranged in an array several ads high by several wide, with each ad containing a photo and a bit of copy about the property... With SmartNews, stories are held in user-specific locations, where they can be updated, edited and proofed before being sent to print, Web release or archive. The system works with both NewsML and Unicode...The system's automation features include auto-generation of formatted HTML pages from XPress, using XSL templates..."

  • [June 15, 2002] "Web Services Specification to Target Collaboration." By Paul Krill. In InfoWorld (June 14, 2002). "Sun Microsystems, SAP, BEA Systems, and Intalio have developed an XML-based interface description language to describe the flow of messages exchanged in Web services called the WSCI (Web Service Choreography Interface). According to information on Sun's Web site, WSCI describes the 'dynamic interface of the Web service participating in a given message exchange by means of reusing the operations defined for a static interface.' WSCI describes the observable behavior of a Web service in terms of temporal and logical dependencies among exchanged messages. Sequencing rules, correlation, exception handling, and transactions are features. A global, message-oriented view of interactions is provided. The behavioral description provided by WSCI enables developers, architects, and tools to describe and compose a global view of the dynamic of a message exchange by understanding interactions with the Web service, according to Sun's site. Analyst Joanne Friedman, vice president of e-business strategies at Meta Group in Toronto, described WSCI as 'an XML-based language to describe the flow of messages exchanged by a Web service, but in the context of a higher level business process.' WSCI is critical to collaboration in applications such as e-business, Friedman said. The technology would be used to look at the behavioral patterns of messages and the expectations of senders and receivers... In an e-commerce transaction, WSCI would enable a buyer or merchant conducting a transaction to query the Web for a set of carriers to deliver the merchandise..." See details in "Web Service Choreography Interface (WSCI)."

  • [June 14, 2002] "Sun Plays Catch Up With Web Services." By Stephen Shankland. In CNet News.com (June 14, 2002). "Sun Microsystems, sensing it has fallen behind rivals Microsoft and IBM in Web services leadership, is launching a renewed strategy in an attempt to play catch up. Senior Sun executives have issued an edict to internal programmers to quickly create a software framework that addresses what they see as potential security weaknesses in existing Web services standards, a source familiar with the plan said... In its latest Web services effort, Sun hopes to enlist other companies as backers. That technique has proven successful in the Sun-spawned Liberty Alliance Project to counter Microsoft's Passport authentication service. Sun plans to eventually submit its Web services work to a standards body such as the Organization for the Advancement of Structured Information Standards, or OASIS, a consortium developing electronic business standards, or the World Wide Web Consortium, which also administers standards work from Microsoft and IBM. Sun's new specification for streamlining Web services is being developed in partnership with BEA Systems, SAP and Intalio. The specification, called the Web Services Choreography Interface, or WSCI, is a mechanism for describing what messages are sent among computers as a particular Web service is processed. The choreography standard mirrors similar work already underway. Microsoft and IBM have built competing languages called Xlang and Web Services Flow Language (WSFL), respectively, and industry groups such as OASIS and the Business Process Management Initiative (BPMI) are working on their own standards. Sun's WSCI partnership might dovetail with the BPMI work, though, since its partner Intalio is BPMI's founder. Sun also plans to devise a security specification for Web services. The security work might at first blush seem to tread on the toes of the WS-Security initiative, one of several created by Microsoft and IBM. But that initiative is concerned more about security in the sense of encrypted communications and transactions, whereas Sun's appears to involve security in the sense of computers that can't be breached by attackers..." On WSCI, see "Web Service Choreography Interface (WSCI)."

  • [June 14, 2002] "Federated Searching Interface Techniques for Heterogeneous OAI Repositories." By Xiaoming Liu, Kurt Maly, Mohammad Zubair, Qiaoling Hong (Old Dominion University, Norfolk, Virginia USA ); Michael L. Nelson (NASA Langley Research Center, Hampton, Virginia USA); Frances Knudson and Irma Holtkamp (Los Alamos National Laboratory, Los Alamos, New Mexico USA). In Journal of Digital Information Volume 2 Issue 4 (May 2002). "Federating repositories by harvesting heterogeneous collections with varying degrees of metadata richness poses a number of challenging issues: (1) how to address the lack of uniform control for various metadata fields in terms of building a rich unified search interface, and (2) how easily new collections and freshly harvested data in existing repositories can be incorporated into the federation supporting a unified interface? This paper focuses on the approaches taken to address these issues in Arc, an Open Archives Initiative-compliant federated digital library. At present Arc contains over 1M metadata records from 75 data providers from various subject domains. Analysis of these heterogeneous collections indicates that controlled vocabularies and values are widely used in most repositories. Usage is extremely variable, however. In Arc we solve the problem by implementing an advanced searching interface that allows users to search and select in specific fields with data we construct from the harvested metadata, and also by an interactive search for the subject field. As the metadata records are incrementally harvested we address how to build these services over frequently-added new collections and harvested data. The initial result is promising, showing the benefits of immediate feedback to the user in enhancing the search experience as well as in increasing the precision of the user's search..." The Open Archives Initiative Protocol for Metadata Harvesting defines a mechanism for harvesting XML-formatted metadata from repositories. See also "Arc - An OAI Service Provider for Digital Library Federation," published in D-Lib Magazine; 'The Open Archive Initiative (OAI) is one major effort to address technical interoperability among distributed archives. The objective of OAI is to develop a framework to facilitate the discovery of content in distributed archives.' References in: "Open Archives Metadata Set (OAMS)."

  • [June 12, 2002] "Commentary: The ABCs of XML." By Rita Knox (Gartner Analyst). In CNET News.com (June 11, 2002). ['The U.S. government should avoid getting lost in a Daedalian XML effort.'] "The U.S. government should... develop a common XML vocabulary and grammar rather than codifying ahead of time the "sentences" that can be used. XML offers the prospect of creating common data and transaction formats so that the vast number of federal agencies and the enterprises they work with--private companies, nonprofit organizations, state and local governments, and so on--can share information, work together and deliver services more effectively. Since Sept. 11 [2001], achieving this goal has become vitally important to national security. For example, the 50 or so agencies involved in homeland security --i ncluding the Customs Service, CIA, FBI and National Security Agency--collect vast amounts of intelligence but often can't share it with each other very easily because of incompatible data representations, as well as processing systems. In response, the General Accounting Office has proposed that the Office of Management and Budget take responsibility to create a repository of XML schemas and document-type definitions. These constitute the sentences with which particular fields, such as financial services, health care, manufacturing, education and government, communicate. By building a repository, the government hopes to standardize the XML transactions for dealing with the government. All standardization efforts Gartner has seen take the same type of approach... XML standards efforts should minimize the amount of "hard coding" they incorporate and should maximize flexibility. Gartner recommends the following linguistic model for XML standardization: [2] Devise a method for defining, classifying and validating XML vocabulary items. Items would include elements (with their 'part of speech') and management attributes (such as ownership, where used and last revision). [2] Create a grammar to construct transactions. A legal XML transaction would be one constructed according to a publicly defined grammar from publicly accessible vocabulary items (such as public repositories)..." See: "Government Seeks Accord on XML," by Margaret Kane.

  • [June 13, 2002] "W3C Rolls Out New XML Test Suite." By Darryl K. Taft. In eWEEK (June 12, 2002). "The World Wide Web Consortium (W3C) Wednesday released a conformance test suite for XML systems. The test suite, known as the 'XML 1.0 (Second Edition) W3C Conformance Test Suite,' was developed in cooperation with the National Institute of Standards and Technology and enables developers to test an XML system for conformance with W3C's XML 1.0 standard. The test suite was formerly hosted by the Organization for the Advancement of Structured Information Standards (OASIS). The test suite contains more than 2000 test files and provides a set of metrics for determining conformance to the XML 1.0 Recommendation. Both W3C's XML Core Working Group and OASIS' XML Conformance Technical Committee have contributed test cases... According to NIST, the test suite has received the support of the XML community and software vendors. 'The XML developers community has made frequent use of the XML Test Suite since we published the first version in 1999,' said Mark Skall, chief of the Software Diagnostics and Conformance Testing Division at NIST. 'It's clear from the feedback we receive from both individual developers and companies that an effective test suite can drive conformant applications and enhanced interoperability'..." See details in the press release: "World Wide Web Consortium Releases XML Conformance Test Suite. W3C/NIST/OASIS Cooperation Leads to Better XML Conformance."

  • [June 13, 2002] WSCI Spec version 1.0. By BEA Systems, Intalio, SAP, and Sun Microsystems. 102 pages. "WSCI describes how Web Service operations -- such as those defined by WSDL -- can be choreographed in the context of a message exchange in which the Web Service participates. Interactions between services -- either in a business context or not -- always follow and implement choreographed message exchanges (processes). WSCI is the first step towards enabling the mapping of services as components realizing those processes. WSCI also describes how the choreography of these operations should expose relevant information, such as message correlation, exception handling, transaction description and dynamic participation capabilities. WSCI does not assume that Web Services are from different companies, as in business-to-business; it can be used equally well to describe interfaces of components that represent internal organizational units or other applications within the enterprise. Again, WSCI does not address the definition of the process driving the message exchange or the definition of the internal behavior of each Web Service..." See also the WSCI FAQ document and news item.

  • [June 12, 2002] "Industry leaders form Open Mobile Alliance." By Dan Neel and Sumner Lemon. In InfoWorld (June 11, 2002). "A who's who of mobile network companies, wireless handset makers, and IT companies on Wednesday announced the formation of the Open Mobile Alliance. The industry body will drive the adoption of standards for mobile telecommunication services and guarantee interoperability between mobile products and services, according to Jon Prial, vice president of business development at IBM, an alliance partner based in Armonk, N.Y... The Open Mobile Alliance brings together more than 200 companies and consolidates the activities of several industry bodies, such as the WAP Forum and the Wireless Village initiative, according to Prial. Mobile industry experts such as Tim Scannell, a research director with Shoreline Research in Quincy, Mass., said the arrival of the Open Mobile Alliance is more than timely... Among the issues that will be tackled by the Open Mobile Alliance are the development of standards such as XHTML (Extensible Hypertext Markup Language), MMS (Multimedia Message Service) interoperability, and standards for location-based services, Prial said. Future initiatives will focus on developing digital rights management and device management standards, he said. Full interoperability between wireless products and networks should foster a diverse eco-system that benefits both users and vendors alike, Prial said. 'Network operators can invest with confidence that the mobile service are based on an open, interoperable set of standards with less risk of solutions being limited by proprietary alternatives. They'll also be able to greatly expand their choices of technology providers,' Prial said. 'The benefits of the Open Mobile Alliance for both the consumers and the business users will be an extensive amount of mobile services that are interoperable across regions, devices and operator networks'... Getting such a wide range of companies -- many of which are competitors -- to work together toward mobile platform interoperability may not be as difficult as it first may seem, Scannell said. Vendors may conform to mobile interoperability standards, but they will still battle to differentiate themselves in the market with unique features and applications... Representatives for the Open Mobile Alliance gave no specific timeline for the arrival of mobile interoperability standards, and Scannell advised against holding one's breath on immediate results..."

  • [June 12, 2002] "Government Seeks Accord on XML." By Margaret Kane. In CNET News.com (June 12, 2002). "The federal government isn't known as a pioneering early adopter. But growing support within U.S. agencies for the popular XML data exchange format has raised concerns that, for once, things might be moving too fast. And the government's purchasing arm is now studying whether a central directory should be built to prevent Extensible Markup Language (XML) from creating the sort of integration headache it was intended to solve. The General Services Administration has commissioned Booz-Allen & Hamilton to study the creation of a registry and repository for Extensible Markup Language, which has become widely popular in the business world, where it is seen as a way to simplify data exchanges between disparate businesses and software programs. XML lets programmers create definitions for any type of data that will be handled, such as names, product ID numbers and lot sizes, so that computer programs can instantly recognize the information being transferred and handle it properly -- as long as both sides agree on definitions. The language has also been a hit with the government. With a budget of more than $48 billion -- that's 16 times larger than the annual IT expenditure at General Motors, for comparison's sake -- the federal government is easily the largest consumer of computer hardware, software and services on the planet... The GSA study comes after a recent General Accounting Office report questioned whether, without a firm government policy, the use of XML could build out of control... The registry and repository will help by providing a central location for agencies working on XML to list their projects, and to store data definitions used frequently by governmental agencies. Other agencies would then be able to find the data and compare that to the work they are doing. Of course, concerns about incompatible XML definitions exist in the private sector as well. To head off compatibility problems, companies in a given industry, such as insurance or financial services, get together to agree on a standard for that industry. But the concerns are somewhat heightened for government because of its sheer size and basic bureaucratic nature. The federal government spans multiple industries, and must work with outside businesses and contractors, as well as communicate with other agencies with the federal system. So the problem is not just making sure that, say, engineers for one federal agency can exchange documents with private sector engineers, but making sure that they'll be able to communicate with engineers in other branches of government as well..." See also: (1) "US General Accounting Office Releases XML Interoperability Report"; (2) "GSA's Government Without Boundaries Project (GwoB) Publishes Draft XML Schemas"; (3) "DISA to Establish a US DoD XML Registry and Clearinghouse." General references in: "US Federal CIO Council XML Working Group."

  • [June 11, 2002] "Sun Readies Dev Tools Suite, UDDI Package." By Paul Krill. In InfoWorld (June 11, 2002). "Sun Microsystems next week will unveil the Sun ONE (Open Net Environment) Developer Platform, a set of tools and software servers intended to provide IT shops with integration across the life cycle of development. The package will feature Sun's new UDDI (Universal Description, Discovery, and Integration) registry product for setting up registries of Web services available within a firewall or via an extranet. Sun this week also will ship its Sun ONE Studio 4 tool, a component of the tools set and the successor to the Forte for Java package. Sun ONE Studio has been fitted with Web services generation support. Additionally, Sun on June 19 [2002] will detail a new version of its Java-based application server, also part of the developer platform. With the Sun ONE Developer Platform, the company seeks to solve the problem of isolated "silos" of development in an enterprise... Components, each bearing Sun ONE in the name, include: (1) Studio, shipping to online customers this week, is a tool built on the NetBeans open source infrastructure and supporting generation of WSDL (Web Services Description Language)-compliant, SOAP (Simple Object Access Protocol)-compliant, and UDDI-compliant code for Web services. (2) Registry Server, a new product for setting up UDDI registries based on Sun's LDAP implementation. It ships in six to eight weeks. (3) Identity Server, for single sign-on and policy management capabilities. (4) Connector Builder, a new product for building Java adapters enabling Java applications to communicate with back-end systems such as PeopleSoft, SAP, and Oracle. (5) Application Framework, a new version of the tool designed for graphical, rapid application development. It features a plug-in to the Studio product and ships this summer. (6) Portal Server, for portal development. (7) Integration Server, a previously announced product for integration applications. (8) A new version of Application Server, of which Sun officials would not comment on features. Sun is integrating its application server into the Solaris OS. Sun ONE Studio is available in three configurations, including the free Mobile and Community editions, for mobile application and Java servlet development, respectively, and the full-fledged Enterprise Edition, supporting Enterprise JavaBeans and Web services..." See: "Universal Description, Discovery, and Integration (UDDI)."

  • [June 10, 2002] "Novell Joins UDDI Advisory Group." By Darryl K. Taft. In eWEEK (June 07, 2002). ['Novell Inc. on Friday became a member of the UDDI Advisory Group as part of the organization's effort to promote Web services standards.'] "As a member of the group, Novell will help to define and promote the UDDI (Universal Discovery, Description and Integration) specifications, which provide the vehicle for organizing and managing Web services. Novell, of Provo, Utah, is interested in the UDDI Advisory Group as part of the company's effort to solidify the role of directories in Web services, the company said Novell has been making moves in this area recently. Last month, Novell submitted a draft specification to the Internet Engineering Task Force outlining an approach for putting UDDI information into an LDAP (Lightweight Directory Access Protocol) directory. In addition, Novell pushed the DSML (Directory Services Markup Language) Version 2 standard, which was recently adopted and which allows access to a directory through Web services vehicles such as the SOAP (Simple Open Access Protocol) and XML, the company said... Winston Bumpus, director of standards for Novell, said in a statement that the adoption of UDDI 'is essential in order to realize the promise of Web services -- not only to support dynamic transactions among businesses but also to enable seamless integration within the enterprise'..." See: (1) "Universal Description, Discovery, and Integration (UDDI)"; (2) "Directory Services Markup Language (DSML)"; (3) "LDAP Schema for UDDI," by Bruce Bergeson and Kent Boogert (Novell, Inc.).

  • [June 10, 2002] "The Role of Directories in Web Services." By Michael Vizard and Steve Gillmor. In InfoWorld (June 07, 2002). ['As CTO of the Sun ONE (Open Net Environment) software products group, Hal Stern tries to be a unifying force within Sun. Stern, who has been quietly guiding the products that Sun acquired from Netscape, has been fairly effective at getting Sun to line up on a common enterprise software infrastructure. In an interview with InfoWorld Editor in Chief Michael Vizard and Test Center Director Steve Gillmor, Stern outlines where Sun needs to go from here to create the next generation of software infrastructure for Web services using directories.'] [Hal Stern:] "We've been getting a lot of traction around the Web services ideas and a significant amount of traction around the notion of network identity. The problem is categorizing and enumerating all the different places where we store identity information and authorization information, so it's very rewarding to see the amount of attention being paid to Liberty Alliance... If you look at the bigger problem -- Web services security -- I don't think you're going to solve this by adding security stuff to SOAP [Simple Object Access Protocol]. You can make SOAP a little more secure, but if you go up a level, I can guarantee my XML isn't exposed and that my SOAP isn't exposed using SSL [Secure Sockets Layer]. I can validate the contents of my XML using digital signatures. Are the Web services allowed to do what they really asked to go do? What's been offered for them in terms of the range of services? And that becomes a question of context: Where are they? What's their bandwidth? What are they doing? What's their authorization? What's their authentication? How strongly do they authenticate and what's the list of services that have to deal with that? I think a lot of those things live in directories, and the taxonomies for them are kept in directories. Network identity is now the gatekeeper of how you come through a portal, how you come through a delivery vehicle to access this world of services... SOAP and LDAP are complementary. SOAP is a reasonable transport, but if you look at the real, practical aspects of network identity, things that are LDAP-based are going to be accessed indirectly -- maybe through the Web server, through the app server, or through the portal server. You're not going to really go and code to LDAP. XML is another way of coding to the directory..."

  • [June 10, 2002] "The Evolution of Web Services." By Michael Vizard. In InfoWorld (March 27, 2002). ['Prior to taking on the role of CTO at Systinet, Anne Thomas Manes was the peer-to-peer evangelist for Sun Microsystems, where she championed the adoption of SOAP (Simple Object Access Protocol) alongside Java. Today she is working on creating a robust Web services architecture around SOAP. In an interview with InfoWorld Editor in Chief Michael Vizard, Thomas Manes talks about how much work still needs to be done around Web services and why Systinet has a decided edge over rivals.'] "Systinet is a Web services platform company. We build Web services infrastructure. We have a SOAP implementation with full support for WSDL [Web Services Description Language]. We have a UDDI [Universal Description, Discovery, and Integration] implementation, and we're building right now a Web services single sign-on service. I hope at some point it actually will conform to Liberty Project, if Liberty ever actually produces some specifications. We also have as part of the basic platform a set of tools to make it simple and easy for people to build Web services. The tools are designed as plug-ins into existing Java IDEs [integrated development environments], so that you don't need to start a different tool to go create your WSDL files. It's all done from directly within NetBeans or Forte or JBuilder. Our system basically plugs into your favorite Web server or app server. So you don't have to go get something new to support Web services... Almost all the SOAP implementation support very simple request/response-type operations, and not asynchronous stuff. Even though SOAP by itself is by nature a one-way messaging system, the way everybody's implemented it so far, it's a request/response kind of thing. We are adding an asynchronous API into our product that will be available in our next release that's currently scheduled to come out in May. That would allow you to do one-way messaging or solicit responses or notifications and other types of patterns that are of interest in the more advanced world. Most of the SOAP implementations that are out there only support a blocking API, which means you issue a request and your application just sits and waits for a response to come back. We already support the ability to send it over JMS [Java Messaging Server], over MQ Series, or Sonic MQ, or some other type of queuing service. But that still doesn't help you with the non-blocking part of the client interface. We're also adding support for a non-blocking client..."

  • [June 10, 2002] "Web Services Management Platform Emerges." By Paul Krill. In InfoWorld (June 10, 2002). "Looking to fill a niche for provision of Web services management, AmberPoint on Monday is introducing AmberPoint Management Foundation, which is intended to manage XML-based Web applications. Functioning with both Java- and Microsoft .Net-based applications, the product is intended to make Web services-based systems production-ready assets that are easy to monitor and manage, secure and upgrade, according to Oakland, Calif.-based AmberPoint. The company's software is to be delivered as modules that can be non-invasively installed into any existing Web services environment, AmberPoint said... AmberPoint is addressing issues around managing Web services that reside on multiple machines and tracking them in a proactive fashion, said AmberPoint Vice President of Marketing Ed Horst. The suite works by observing XML messages and applying policies to them, Horst said... With AmberPoint, visibility can be provided into contexts such as identity, user, or time of day, and into business content, such as number of units ordered. A business analyst, for example, could see an automatically generated list of targeted business activities or a summary of activities, according to the company. AmberPoint also can limit Web service content to whatever information users have permission to view..."

  • [June 06, 2002] "Maintaining Schemas for Pipelined Stages." By Bob DuCharme. June 6, 2002. ['Comment and criticisms encouraged.'] "The Problem: The whole idea of sending a stream of XML documents through a pipeline instead of through a monolithic process that performs all necessary processing is getting more popular lately, as shown by projects such as XPipe and DSDL. In a production environment, when altering one process's behavior could spell trouble for others that read that process's output as their own input, some sort of contract or defined interface between stages makes it easier to manage the relationship between those processes. In a project that I'm working on, schemas will provide those contracts. In fact, the ease of doing this with schemas over doing it with DTDs is one of the reasons to switch to schemas... But, how could I maintain a set of related schemas used for different stages in the processing of the same document set... Below I've outlined and demoed an approach to doing just that: creating a schema that stores information about which components go into which schemas, as well as a short stylesheet that generates the schemas... I first worked this out with W3C Schema. When I decided to try it with RELAX NG schema, I didn't have to change a byte of the getStage.xsl stylesheet; it worked just fine as it was. All I had to do was to change the getStage.bat driver file to allow for the possibility of reading from and outputing to files with an extension of rng... The example that I made up to test this was quite simple; download schemaStages.zip for the stylesheet, the master schemas, the batch file, the eight extracted schemas, and four sample document files that conform to each W3C/RNG pair of schemas. The test.bat file does all the extractions and schema validations of the sample documents against the various extracted schemas. I would love to hear any suggestions for things to add to the sample master schema to stress test the whole concept a little harder..." See the followup from Eddie Robertsson: "I like the approach suggested by Bob and I can't see any problems with the solution. Another way to do this could be to embedd Schematron rules in another schema language (W3C or RNG)... Each stage in Bob's article hence correspond to a phase in Schematron. The name of the phase that should be used for the validation is sent to the Schematron validator in the same way that the stage name is sent to the getStage.xsl stylesheet. If this is validated manually by a GUI tool that support embedded Schematron rules the available phases can typically be selected by the user in a selection list. You can try out the above examples using the Topologi Schematron Validator [2] which support embedded Schematron rules in W3C XML Schema. The attached zip-file contains the example files..."

  • [June 06, 2002] "Mozilla Finally Turns 1.0." By Paul Festa. In CNET News.com news (June 05, 2002). ['More than four years after the launch of the Mozilla.org open-source project, Mozilla 1.0 is ready to browse.'] "The group released the software on the Web for download Wednesday. Mozilla 1.0 isn't the first browser based on Mozilla code. Netscape Communications, a unit of AOL Time Warner, released Netscape 6.0 in November 2000. That release was largely judged to have been premature. Perhaps because of the negative reaction to that first release of the Mozilla code, and because Mozilla 1.0 is targeted at software developers, the organization added months and years to the development process... Mozilla has long claimed support for open standards as a core part of its mission. With Wednesday's release, Gecko supports World Wide Web Consortium recommendations including HTML 4.0, XML 1.0, the Resource Description Framework (RDF), Cascading Style Sheets level 1 (CSS1), and the Document Object Model level 1 (DOM1). Mozilla 1.0 also offers partial support for Cascading Style Sheets level 2 (CSS2), the Document Object Model level 2 (DOM2), and XHTML. Other standards supported by Gecko include SOAP 1.1, XSLT, XPath 1.0, FIXptr and MathML..." Now available for download; see also the FAQ document. Other details are given in the announcement: "Mozilla.Org Launches Mozilla 1.0. Open Source Browser Suite Powered by Gecko Enables Developers to Create Standards-Based Web Applications and Devices." See: "XML in Mozilla."

  • [June 06, 2002] "Microsoft Exec Calls For Innovative Web Services Apps." By Paul Krill. In InfoWorld (June 06, 2002). "Rather than expect dramatic evolutions, programmers should utilize the Web services 'plumbing' already in place, such as SOAP (Simple Object Access Protocol) and XML Schema, and focus on applications, SOAP co-founder and current Microsoft executive Don Box said during a keynote speech here on Thursday. Speaking at the XML Web Services One conference, Box, an architect at Microsoft, urged developers to focus on innovations at the application level. 'Unless you're a big platform vendor, the place where you should be thinking about innovating is applications. The plumbing is boring' and is nearly completed, Box said. He cited the recently announced Google Web APIs service, for conducting searches over Google from within applications, as an example of an exciting application... An upcoming version of SOAP, Version 1.2, needs to be the final version, Box said. Box also stressed that XML Schema is the dominant technology for Web services. 'Had XML Schema been done in 1998, we would not have done SOAP,' he said. 'The reality is, XML Schema is the foundation for the rest of XML,' said Box. Technologies such as XML Query take XML Schema 'for granted, as a given,' he said. SOAP messages need to function with XML Schema, he said. 'XML Schema is an inevitability. Resistance is futile. There is no point in not embracing this thing and I strongly encourage those of you who work in Web services technologies [to] make sure your story is straight' with respect to XML Schema, Box said. Box also addressed some maligning of the UDDI (Universal Description, Discovery, and Integration) specification at the conference, saying it has the most opportunity for growth and that the upcoming version of UDDI hopefully will fix problems with identifiers..."

  • [June 06, 2002] "Microsoft Builds Web Services With TrustBridge." By Joris Evers. In ComputerWorld (June 06, 2002). "Microsoft Corp. today announced new software that will enable companies to more easily share with business partners and customers information stored in computer systems. The software, code-named TrustBridge and scheduled to be available next year, allows companies using the Windows operating system to share user identities across business boundaries...TrustBridge springs from Web services security work Microsoft has been doing with IBM and VeriSign Inc. The companies developed a specification called WS-Security, which describes how to exchange secure and signed messages in a Web services environment. In addition to TrustBridge, Microsoft announced that its Visual Studio .Net developer package will be updated later this year to include support for digital signatures and encryption for messages sent using Simple Object Access Protocol (SOAP), following the WS-Security specification. Also, .Net Passport, Microsoft's authentication service for the Web, next year will support SOAP over HTTP, Kerberos and the WS-Security specifications. This will enable .Net Passport to merge with TrustBridge and other authentication systems employing WS-Security, Microsoft said. .Net Server, due to be available to customers next year, will support Passport through Active Directory and the Internet Information Service..." See (1) the Roadmap, and (2) the announcement "Microsoft Windows 'TrustBridge' to Enable Organizations to Share User Identities Across Business Boundaries. Cornerstone of Comprehensive Federated Security Product Roadmap Based on XML Web Service Standards."

  • [June 06, 2002] "Wish on a Star Come True." By Maggie Biggs. In InfoWorld Volume 24, Issue 22 (June 03, 2002), page 32. "StarOffice 6.0, the latest release of Sun Microsystems' office productivity suite, promises to give Microsoft a healthy dose of competition while supplying enterprise customers with a real choice in office suites. We found StarOffice to be a stellar solution that is well worth deploying in the enterprise... The StarOffice suite includes five neatly integrated modules that support word-processing (Writer module), spreadsheet (Calc module) and presentation (Impress module) creation in addition to drawing, database, and HTML editing capabilities. These modules compare favorably with Microsoft's business productivity suite. New in this release is default support for XML-based file formats and included document conversion tools to translate binary office document formats into XML. This support allows users to read office documents on any platform that supports a browser while also increasing cross-platform compatibility and reducing file sizes. StarOffice also supports conversion to Adobe PDF. We were able to work on documents, spreadsheets, and presentations using StarOffice and then access the same documents using various versions of Office, including 97, 2000, and XP, without a problem. This interoperability with Office is a real plus for enterprise customers with large numbers of documents. StarOffice does not require you to convert existing documents or switch document formats unless you want to. Equally impressive is StarOffice's integration with existing address books, such as those from Netscape and Outlook, as well as LDAP-or Windows-based contact sources..." See "StarOffice XML File Format" and the 2002-05-15 news item.

  • [June 06, 2002] "Sun Plays With MS Puzzle." By Ed Scannell, Dan Neel, Michael Vizard, and Matt Berger. In InfoWorld Volume 24, Issue 22 (June 03, 2002), pages 31-32. "As Microsoft faces the cost-related controversy surrounding its Software Assurance licensing scheme, Sun Microsystems is hoping renewed interest in its gently priced StarOffice 6.0 suite will finally give it traction in the enterprise. Originally designed to give Sun Solaris OS customers a free office suite that doesn't require an Intel-based PC, StarOffice is now compatible with Windows. Meanwhile, Sun officials are hoping the recent move to add a $75.95 price tag, coupled with liberal usage terms and improved file compatibility features with Microsoft Office, will give both small and large companies pause before automatically rolling over to the next version of Office...Rogers and other Sun officials think Microsoft's controversial licensing plan is already influencing the adoption of StarOffice 6.0... Microsoft's Software Assurance licensing program is causing concern among many users, according to a recent Gartner Group report that states customers could pay as much as 107 percent more for software over the next four years under the Software Assurance plan. And 41 percent of executives polled in an April 2002 survey said they can't afford the new cost structure. Scheduled to begin on July 31, Software Assurance replaces Microsoft's previous bulk discount programs. It will require customers to pay up front for software plus an annual fee that entitles them to upgrades for the life of a contract... Regardless of the odds, Sun's Rogers thinks the new price tag will draw interest, despite protest from some Linux operating systems distributors that Sun is selling open-source-based software..." See: "StarOffice XML File Format."

  • [June 06, 2002] "A Realist's SMIL Manifesto." By Fabio Arciniegas. From XML.com. May 29, 2002. ['A look at the state of the Synchronized Multimedia Integration Language, SMIL, and how it can realistically be used in video and multimedia deployment today. Our main feature this week focuses on SMIL, the W3C's Synchronized Multimedia Integration Language. Although SMIL has been around for several years, and is supported in RealPlayer and Quicktime, it has yet to achieve its full potential in delivering web multimedia. In the first part of "A Realist's SMIL Manifesto", Fabio Arciniegas describes the state of SMIL so far, including liberally illustrated examples of current SMIL concepts.'] "The Synchronized Multimedia Integration Language, SMIL, has a less-than-stellar past but a very interesting future. SMIL 2.0 recaptures the simplicity and practicality of declarative synchronization of media introduced by version 1.0, while adding modularization and content-related features much missed in the early version. The goal of this two-part series is to illustrate best practices and creative uses of SMIL 2.0; in particular the creation of guided-reading documents which push the boundaries of Web narrative technology by combining classic layout and design practices with television-like effects. The present article deals with the problem of enhancing video inexpensively and dynamically with SMIL 1.0 and assumes no prior knowledge of SMIL 1.0. It covers the current state of SMIL; the structure and syntax of the language, with examples; and SMIL 1.0's strengths and flaws. It is meant to get you up to speed with the last three years of SMIL, while the next article will show you what is ahead in the coming years, and how SMIL can be a player in improving narrative technology on the Web... there are many environments where the player conditions are closed and the shortcomings are acceptable, making SMIL a reasonable alternative: (1) Sequencing of advertisement and content inside a particular player. RealPlayer developers use SMIL for this purpose often. (2) Simple prototyping and storyboarding of video content, by elongating the duration of still images. This is an inexpensive and often nice use of SMIL. (3) Closed environments where the elements of content don't change much, but they need to be reorganized in many ways, easily and inexpensively. Think for example of a kiosk in a large museum with pictures of each room, providing directions to users. The pictures don't change at all but depending on where you are and where you want to go, the system must show you a different sequence of pictures. It is a lot cheaper to create and maintain simple text files with SMIL sequences than to edit each sequence as a long video in Premiere (or some other video tool)... In the next article we will look ahead and see how the new modularization and content-related additions to the SMIL language make it an interesting new tool to improve narrative technology on the Web." See: "Synchronized Multimedia Integration Language (SMIL)."

  • [June 06, 2002] "Standard Data Vocabularies Unquestionably Harmful." By Walter Perry. From XML.com. May 29, 2002. ['Last week's XML Europe conference in Barcelona featured many presentations from industry groups who are standardizing vocabularies for their sector. XML vocabularies within and across industries are touted to revolutionize business. Yet Walter Perry argues that they are really an invitation to fraud and abuse -- by "dumbing down" the expertise required in a field to the level of the standard vocabulary.'] "At the onset of XML four long years ago, I commenced a jeremiad against Standard Data Vocabularies (SDVs), to little effect. Almost immediately after the light bulb moment -- you mean, I can get all the cool benefits of web in HTML and create my own tags? I can call the price of my crullers <PricePerCruller>, right beside beside <PricePerDonutHole> in my menu? -- new users realized the problem: a browser knows how to display a heading marked as <h1> bigger and more prominently than a lowlier <h3>. Yet there are no standard display expectations or semantics for the XML tags which users themselves create. That there is no specific display for <Cruller> and, especially, not as distinct from <DonutHole> has been readily understood to demonstrate the separation of data structure expressed in XML from its display, which requires the application of styling to accomodate the fixed expectations of the browser. What has not been so readily accepted is that there should not be a standard expectation for how a data element, as identified by its markup, should be processed by programs doing something other than simple display... Instead of the static mapping of process semantics to particular items of the SDV, we can have processes which demonstrate specific expertise in their instantiation of data for their own unique purposes. They exhibit, that is, the crucial expertise of understanding their own data needs. That expertise permits a process to operate upon data from a variety of sources, in each case available in a form particular to the expertise that created it and without regard to the nature or needs of the process -- or multiple very different processes -- which might consume or manipulate it. Each process produces only one expert rendition or other process outcome. Yet taken together with the variety of similarly expert processes which supply their input data, the group of such processes more than meet the ostensible goal of SDVs in opening the silo to the sharing of data on a many-to-many basis among different expert domains. That goal is not achieved without the effort of a strict discipline in designing process intercommunication and interaction, which I shall describe in a subsequent article, 'The Natural Process Model of XML'. OpEd note: One wonders what justifies the exemption in "programs doing something other than simple display" as articulated above. Are variances in 'display' expecations benign and thus allowable, while all other kinds of processing are seriously at risk? Is it a good thing to countenance hard-coded culturally assumed application-level processing semantics of any kind? Arguably no. Arguably, SDVs are not the problem per se; the problem is implicit and unwarranted assumption regarding an underdefined processing semantic of any kind, for example, as the expectation about whether/how the element content should be "displayed" and the relevance of the attribute values "a", "3", "COMPACT" in the case of <OL TYPE=a START=3 COMPACT>...</OL>.

  • [June 06, 2002] "Transforming Experiences." By John E. Simpson. From XML.com. May 29, 2002. ['John Simpson brings us our monthly dose of his "XML Q&A" column. This month's edition focuses on XSLT and XSL Formatting Objects. John tackles the question of transforming XSL-FO to HTML, and computing attribute values using XSLT.'] "[1] Q: Can I convert XSL Formatting Objects (XSL-FO) to HTML? I have an XSL-FO file that renders a PDF using FOP. Can I use the same XSL-FO file to render to XHTML? If yes, what is the tool to use?... [2] Q: How do I use XSLT to put a 'tag' inside an attribute?..."

  • [June 06, 2002] "XML for Data. XSL Style Sheets: Push or Pull? A look at two authoring techniques for XSL style sheets and how they stack up for data." By Kevin Williams (CEO, Blue Oxide Technologies, LLC). From IBM developerWorks, XML zone. May 2002. ['Columnist Kevin Williams examines the two most common authoring styles used to create XSL style sheets: push and pull. He takes a look at some simple XML and XSL examples and discusses the benefits and drawbacks of each approach.'] "When authoring XSL style sheets, you can use either of two principal authoring styles: (1) The push style is structured around handlers that create output in the output document. The output is based on the type of elements encountered in the source document. Element content is pushed to the appropriate handler. (2) The pull style, on the other hand, builds up the output document by pulling content from the source document as it is needed. The style you choose has a significant impact on the code complexity and maintainability of your style sheets... A data document is intrinsically different from a narrative document in that: (1) Information in a data document appears in an order that can be anticipated in the code. (2) A data document contains no loose text, so no specialized code is required to transfer that text to the output document. Mapping a source data document to a output document usually is no more complex than renaming elements, reordering them, and aggregating them up a level in the target tree (or pushing them down to a more detailed level). If you use the pull model, the mapping is simple and easy to maintain. On the other hand, documents that are narrative-based are exactly the opposite. Information appears in an order that cannot be anticipated easily; some text floats outside of the context of elements so it needs to be moved over to the output document properly. To accurately reproduce the narrative in the output document, the style sheet must handle elements regardless of where they appear, and the push model excels at that. In short, when designing style sheets for data documents, consider using the pull model first. For narrative documents, use the push model if possible. Becoming proficient at writing both kinds of style sheets -- and learning the strengths and weaknesses of each -- will enable you to handle any styling job you might encounter in the future..." For related resources, see "Extensible Stylesheet Language (XSL/XSLT)."

  • [June 06, 2002] "Comparative Jabber Book Review. A Look At Three Titles On the Protocol." By John Zukowski (President, JZ Ventures, Inc.). From IBM developerWorks, XML zone. June 2002. ['Jabber is an open, XML-based protocol you can use to add instant messaging to your applications. If you're interested in learning about Jabber but aren't quite sure where to begin, here's a review of three books on the subject to help you get off to the right start.'] "Instant messengers (IMs) such as AOL Instant Messenger (AIM), ICQ, MSN Messenger, or Yahoo! Messenger require access to vendor-specific servers to facilitate communications between multiple users. While some vendors allow you to add their instant messaging capabilities to your applications, you're still stuck with using their servers and access control. If you want to include instant messaging in your own applications but still maintain control -- such as in a corporate intranet or even over the Internet -- Jabber might be for you. Jabber is an open, XML-based protocol for instant messaging and presence. In this article, I'll look at three books available as of May, 2002, that will help you to understand what Jabber is and how to develop applications using the Jabber protocol. In alphabetical order, the books reviewed are: Instant Messaging in Java, by Iain Shigeoka (Manning, 2002); Jabber Programming, by Stephen Lee and Terence Smelser (M&T Books, 2002); Programming Jabber, by D.J. Adams (O'Reilly, 2002)..." See: "Jabber XML Protocol."

  • [June 06, 2002] "W3C Weighs In On Web Services." By Paul Festa. In CNET News.com news (June 06, 2002). "Five months after reorganizing its Web services work, an influential standards body has released a trio of drafts related to the much-hyped trend. The World Wide Web Consortium (W3C) this week issued the first working draft of the Web Service Description Usage Scenarios. The scenarios are meant to outline real-world uses of Web services that will help the W3C tailor the Web Services Description Language. The release of the scenarios comes a few weeks after the W3C published its Web Services Architecture Requirements and Web Service Description Requirements, also designed to clarify the W3C's goals in standardizing Web services. The three new working drafts are the first related to Web services since the W3C formed its Web Services Activity in January. The releases may help quiet criticisms from companies interested in creating Web services that the W3C hasn't done enough to establish common ground among developers who have been backing a number of fragmented technologies..." See the following bibliographic reference.

  • [June 05, 2002] "Web Service Description Usage Scenarios." Edited by Waqar Sadiq (EDS) and Sandeep Kumar (Cisco Systems). W3C Working Draft 4-June-2002. Version URL: http://www.w3.org/TR/2002/WD-ws-desc-usecases-20020604; latest version URL: http://www.w3.org/TR/ws-desc-usecases. "This is the first W3C Working Draft of the Web Services Description Usage Scenarios document; it describes the Usage Scenarios guiding the development of the Web Service Description specification. The WD is a chartered deliverable of the Web Services Description Working Group (WG), which is part of the Web Services Activity. The Working Group has agreed to publish this document, although this document does not necessarily represent consensus within the Working Group." From the Introduction: "This document describes the use cases of the web services description language. The use cases are meant to capture what is important for a web service to describe itself. There may be several other important aspects of a web service but irrelevant to its operational and interaction descriptions. We believe that following viewpoints would prove useful in describing the use-cases for the web service description language. (1) View Point 1: The web service description defines a contract that the web service implements. The web service client exchanges messages based on this contract. (2) View Point 2: The description language is used by tools to generate proper stubs. These stubs ensure that the stubs implement the expected behavior for the client. (3) View Point 3: The web service description captures information that allows one to reason about them semantically. All the use cases in this document pertain to one or more view-points as described above. Every use case as described in this document has a scenario definition, scenario description, and how it relates to one of the view-points as outlined above. Sample code is based upon the Web Services Description Language (WSDL) 1.1" See also the comments list and the W3C Web Services Activity.

  • [June 04, 2002] "Oblix Delivers Federated Identity Platform." By Richard Karpinski. In InternetWeek (June 04, 2002). "Oblix Inc. on Tuesday said it has begun shipping a new feature for its access-management platform that will let enterprises identify users from multiple sources, including public identity systems such as Microsoft Passport. The new FederatedID Layer of Oblix's NetPoint platform begins to deliver on what has emerged as the holy grail of the security industry: the ability to manage access control and manage identity in a distributed, or federated manner... The NetPoint FederatedID Layer gives customers a choice in deciding how they will manage interoperable authentication, allowing them to use efforts such as .Net Passport and Liberty Alliance, as well as the emerging Secure Access Management Layer (SAML) standard. Oblix is a strong backer of SAML. It says it will participate in the upcoming SAML interoperability demo in July and deliver an SAML-compliant product not long after. Ratification of the SAML 1.0 spec is expected by the end of this month. Oblix NetPoint provides Web access and enterprise identity management. In particular, it helps security managers automate the process of making changes to user identity information and access privileges, a key to maintaining a corporate-wide identity infrastructure. A federated approach to identity management is crucial as enterprises continue to do more and more business with users outside of their own firewalls -- not only customers but suppliers, distributors, and other trading partners as well. NetPoint FederatedID Layer lets users sign in at an accepted third-party service and have that authentication passed over to an enterprise running the NetPoint product, thus allowing the user to avoid signing in for a second time..." See the announcement: "Oblix Announces Availability of the NetPoint FEDERATEDid Layer for Enhanced Identity Management Within the Enterprise. Oblix Delivers Industry's Most Comprehensive Set of Federated Identification Services Including Full Support of Security Assertion Markup Language (SAML)." General references: "Security Assertion Markup Language (SAML)."

  • [June 04, 2002] "LDAP Schema for UDDI." By Bruce Bergeson and Kent Boogert (Novell, Inc.). Updates the version 00 draft of February 2002. IETF Internet Draft. Reference: 'draft-bergeson-uddi-ldap-schema-01.txt.' Category: Informational. May, 2002, expires November, 2002. "This document defines the schema for representing Universal Description Discovery and Integration (referred to here as UDDI) data types in an LDAP [v3] directory. It defines schema elements to represent a businessEntity, a businessService, a bindingTemplate, a tModel, and a publisherAssertion... The information that makes up a registration in UDDI consists of these five data structure types. This division by information type provides simple partitions to assist in the rapid location and understanding of the different information that makes up a registration. The individual instance data managed by a UDDI registry are sensitive to the parent/child relationships found in the schema. A businessEntity object contains one or more unique businessService objects. Similarly, individual businessService objects contain specific instances of bindingTemplate, which in turn contains information that includes pointers to specific instances of tModel objects. It is important to note that no single instance of a core schema type is ever "contained" by more than one parent instance. This means that only one specific businessEntity object (identified by its unique key value) will ever contain or be used to express information about a specific instance of a businessService object (also identified by its own unique key value)..." See: "Universal Description, Discovery, and Integration (UDDI)." [cache]

  • [June 04, 2002] "UDDI Seeks Its Spot." By Paul Krill and Ed Scannell. In InfoWorld Volume 24, Issue 22 (June 03, 2002), pages 41-42. "Is UDDI (Universal Description, Discovery, and Integration) a bust or is it just in an embryonic stage? These questions are being pondered as the Web services directory specification continues to evolve. Adoption rates have not skyrocketed as some expected, prompting many to re-examine where UDDI will fit into the Web services puzzle. The UDDI standard is intended to provide central directories where consumers of Web services can access various services, either within a company's firewall, via an extranet, or on the public Internet. Service providers can register them and make them available via UDDI, which is based on technologies such as XML, HTTP, and DNS. Companies can set up these registries internally and choose to extend access to partners. Microsoft, IBM, Sun Microsystems, and Systinet are among companies that offer, or plan to offer, UDDI products. There are also public UDDI registries deployed by Hewlett-Packard, SAP, and Microsoft... Version 1 of the UDDI specification was announced in September 2000 and Version 2 followed in June 2001. Associates of uddi.org, including Munter, are preparing to release Version 3 in July, focusing on features such as communication between private, semiprivate, and public registries. Security issues also will be addressed. Members of uddi.org also are preparing to turn over jurisdiction of UDDI to a yet-unnamed standards body this summer. A request for proposal document has been published and sent to a selective number of standards development organizations, Munter said... IBM has included support of private UDDI registries in its recently announced WebSphere 5.0 application server. Sun's upcoming UDDI offering will be based on its Sun ONE (Open Net Environment) Directory Server, and will enable setting up of private registries for publishing Web services, company officials said. Novell also planned to update and resubmit in late May a specification entitled 'LDAP schema for UDDI,' originally submitted to the Internet Engineering Task Force (IETF) in February. The spec defines a standard format for representing UDDI data types in an LDAPv3 directory. [See "LDAP Schema for UDDI."] Novell is looking to highlight the similiarities between UDDI regisistries and directories as repositories for Web services, as well as the authentication and security features of directories that could supplement UDDI registry security. Systinet has offered a UDDI server called WASP (Web Application and Services Platform) UDDI since October. The product is available free for download and testing. The company has seen the interest level in WASP UDDI fluctuate over time. 'We had very little interest until about January, then we had a remarkable uptake in downloads,' said Anne Thomas Manes, CTO of Cambridge, Mass.-based Systinet. The spike occurred, Manes said, because Web services have begun to reach 'critical mass,' which may bode well for future adoption of UDDI..." Note the comment of Anne Thomas Manes on UDDI and LDAP (UDDI list): "I'd say that one of the biggest differences between UDDI and LDAP is that UDDI allows you to categorize things, while LDAP doesn't. (This point expands on Andrew's comment about the difference between relational information and hierarchical organization.) In UDDI you can categorize any UDDI entity (a business entity, a business service, or a tModel) using any number of taxonomies. 
A UDDI registry comes with a set of built-in taxonomies (UDDI Type Taxonomy, NAICS, UNSPSC, UDDI Geographic Taxonomy, General Keywords Taxonomy, Owning Business Taxonomy, Relationships Taxonomy, Dun & Bradstreet D-U-N-S Number identifier, and Thomas Registry Suppliers ID). You can also create your own taxonomies. You use these taxonomies to search UDDI..." See: "Universal Description, Discovery, and Integration (UDDI)."
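Manes' point about categorization comes down to the categoryBag element, which attaches any number of taxonomy references to an entity. A hedged sketch (the tModel key is a placeholder for the registry's NAICS taxonomy tModel; the service name and code are invented):

    <businessService serviceKey="...">
      <name>Cruller wholesale</name>
      <categoryBag>
        <!-- one keyedReference per classification; here, a NAICS code -->
        <keyedReference
            tModelKey="uuid:..."
            keyName="Commercial bakeries"
            keyValue="311812"/>
      </categoryBag>
    </businessService>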

  • [June 04, 2002] "Guidelines for the Use of XML within IETF Protocols." Updates the draft "Guidelines For The Use of XML in IETF Protocols" ['draft-hollenbeck-ietf-xml-guidelines-00.txt', April 5, 2002]. By Scott Hollenbeck (VeriSign, Inc.), Marshall T. Rose (Dover Beach Consulting, Inc.), and Larry Masinter (Adobe Systems Incorporated; WWW). Reference: 'draft-hollenbeck-ietf-xml-guidelines-04.txt'. June 4, 2002, expires: December 3, 2002. 33 pages. "The Extensible Markup Language (XML) is a framework for structuring data. While it evolved from SGML -- a markup language primarily focused on structuring documents -- XML has evolved to be a widely- used mechanism for representing structured data. There are a wide variety of Internet protocols being developed; many have need for a representation for structured data relevant to their application. There has been much interest in the use of XML as a representation method. This document describes basic XML concepts, analyzes various alternatives in the use of XML, and provides guidelines for the use of XML within IETF standards-track protocols... It is the goal of the authors that this draft (when completed and then approved by the IESG) be published as a Best Current Practice (BCP)..." Document also available in XML and plain text formats. See also the archives of the 'ietf-xml-use' mailing list, which supports a general discussion on how and when to use XML in IETF protocols. A related posting by James Clark "RELAX NG and W3C XML Schema" in response to section 4.6 of the draft ("... XML Schema should be used as the formalism in the absence of clearly stated reasons to choose another...") led to an XML-DEV thread "XML Schema considered harmful?" See comments from James Clark and Rick Jelliffe. [cache, text]

  • [June 04, 2002] "Analyzing XML Schemas With the Schema Infoset Model. Easily Perform Complex Queries on Your Schemas With This Model." By Shane Curcuru (Advisory Software Engineer, IBM). From IBM developerWorks, XML Zone. June 2002. ['As the use of schemas grows, the need for tools to manipulate schemas grows. The new Schema Infoset Model provides a complete modeling of schemas themselves, including the concrete representations as well as the abstract relationships within a schema or a set of schemas. This article will show some of the power of this library to easily query the model of a schema for detailed information about it; we could also update the schema to fix any problems found and write the schema back out.'] "Although there are a number of parsers and tools that use schemas to validate or analyze XML documents, tools that allow querying and advanced manipulation of schema documents themselves are still being built. The Schema Infoset Model (AKA the IBM Java Library for Schema Components) provides a rich API library that models schemas -- both their concrete representations (perhaps in a schema.xsd file) and the abstract concepts in a schema as defined by the specification. As anyone who has read the schema specs knows, they're quite detailed, and this model strives to expose all the details within any schema. This will then allow you to efficiently manage your schema collection, and empower higher level schema tools -- perhaps schema-aware parsers and transformers... While you can use XSLT or XPath to query a schema's concrete representation in an .xsd file or inside some other .xml content, it is much more difficult to discover the type derivations and interrelationships that schema components actually have. Since the Schema Infoset Model library models both the concrete representation and the abstract concept of the schema, it can easily be used to collect details about its components, even when the schema may have deep type hierarchies or be defined in multiple schema files... Although this is a contrived example, it does show how the library's detailed representation of a schema makes it easy to find exactly the parts of a schema you need. The library provides setter methods for the properties of schema components, so it is easy to update your sample to automatically fix any found types by adding any missing facets. And since the library models the concrete representation of the schema as well, you can write your updated schema back out to an .xsd file..." See "IBM Publishes XML Schema Infoset API Requirements and Development Code" and the announcement.

  • [June 04, 2002] "Wal-Mart Leads Charge For Supply Chain Standard." By Ephraim Schwartz. In InfoWorld (June 04, 2002). "Major retail chains, including Wal-Mart, have been quietly working behind the scenes to implement a single data synchronization standard between trading partners for product definitions and item data. The standard, based on XML-messaging, was created and is being developed by UCCnet, a division of UCC, the Uniform Code Council, a global standards organization responsible for among other things designing and managing the UPC code used by all retailers. UCC is based in Lawrenceville, NJ. Wal-Mart notified all of its suppliers in a letter sent last April that it would like its suppliers to implement the UCCnet SYNCtrack product... Other major retailers sending out similar letters include Ahold USA, Shaw's Supermarkets, Star Market, and Food Lion. The letter to suppliers from Ahold USA, a company that boasts 7,000 supermarkets worldwide with 30 million customers, was a bit more direct and demanding... The data exchanged between any two trading partners has always been paper intensive, according to Gene Alvarez, vice president for the Electronic Business Strategies division, at Meta Group in Stamford, Conn. The goal of the retail giants is to reduce paperwork and the human errors that invariably go along with paper-based processes. The SYNCtrack product will enable a supplier to post product data, making it available to retailers electronically rather than manually. 'It's similar to RosettaNet but with UCC, the standard was in existence so both retailers and manufacturers are used to dealing with it while with RosettaNet the industry is still dealing with collaboration issues,' said Alvarez. Beyond lowering the cost of doing business by making the order and billing process less error prone, a reduction in time to market is also forecast by the UCCnet proponents... while the industry is driving toward consolidation in the standards arena the possibility of a single standard is unlikely said Alvarez..." See: (1) "Uniform Code Council (UCC) XML Program" and (2) "VICS CPFR XML Messaging Standard."

  • [June 03, 2002] "RDF Site Summary 1.0 Modules: Service Status." By Matthew J. Dovey (Oxford University) Katherine Ferguson (Oxford University), and Sebastian Rahtz (Oxford University). From a posting to the the RDF Site Summary 1.0 Specification Working Group list. Draft version 3. June 02, 2002. Proposed RSS module. "This module extends RSS to include elements which allow the description of the status and current availability of services and servers. Some data, such as whether a server is up or not, would normally be generated automatically, whilst other data, such as explanatory text for humans, might be the result of further processing or direct human input. A service as seen from a user's point of view might not be a one-to-one mapping to a service as seen from a system point of view. If, for example, a user service is dependent on more than one system service being available then the user service is available if and only if all those system services are available. This can easily be calculated with both input and output conforming to this specification. (1) ss:lastChecked gives the most recent time that the service was tried. lastSeen gives the most recent time that the service responded. ss:responding is true or false, depending on whether the service responded at the last check. In the case that the service is responding the two times should be equal. If the service is not responding then it is useful to have an approximation of how long it has been down, by comparing the two times. (2) ss:availability and averageResponseTime are statistics which may be calculated as you wish. ss:aboutStats should give access to a description of how these have been calculated so that the statistics are meaningful. (3) ss:statusMessage allows for a message about the service and its status. Often this will not be needed. However messages such as 'This service will be rebooted over lunchtime', 'We know this is unavailable - we're all working on the problem so please stop phoning us' or 'There is an intermittent problem which we are monitoring' could at times be appropriate. A message may be included independent of whether the service is currently responding or not..." See "RDF Site Summary (RSS)."

  • [June 03, 2002] "Startup Bets On XML for Integration." By Carolyn A. April. In InfoWorld (June 03, 2002). "Startup Enosys Software officially launches on Monday, rolling out a suite of data integration software that company officials claim features the first commercially available engine based on the XQuery standard. A self-proclaimed EII (enterprise information integration) company, Enosys joins the rush to help enterprises leverage XML to achieve real-time integration of multiple data types across far-flung systems, according to Raj Pai, vice president of product marketing with the Redwood, City, Calif.-based company... The J2EE (Java 2 Enterprise Edition)-based Enosys Integration Server exploits native XML and an engine based on the XQuery standard to allow access to different data sources -- structured or unstructured -- from a single query, inside or outside of the firewall. It can also filter, translate, and transform the data for consumption by a Web-based application or Web service, according to Pai... The promise of distributed data management inherent in XQuery is also on the radar of industry heavyweights such as IBM, Microsoft, and Sybase, among others. IBM has already touted its use of XQuery as part of its integration strategy for its DB2 database and other core pieces of its software infrastructure... In addition to the core Enosys Integration Server, the company's package includes the Enosys Design Suite, a set of graphical tools for building, testing, and deploying data integration applications, and Enosys Management Tools, a set of utilities for handling such things as data storage population, integration engine logging and reporting, authentication, and access control..." See: (1) "XML and Query Languages"; (2) the announcement: "Enosys Software Introduces the First XML-Based Real-Time Enterprise Information Integration Platform. Early Adopters Of Enosys' XML Query (XQuery) Solution Attest to Reduced Cost of Custom Data Integration, Rapid Deployment of New Web Applications, and Improved Customer Service."

May 2002

  • [May 31, 2002] "Network Edge Courts Apps." By Scott Tyler Shafer and Mark Jones. In InfoWorld Volume 24, Issue 21 (May 27, 2002), pages 1, 27-28. With sidebar: "XML switching marks evolution in Net devices," by Wayne Rash. "Driven by the opportunity to secure the role of edge networking devices in enterprise IT infrastructures, a groundswell of networking vendors is beginning to embrace Web services. Companies including 3Com and Cisco are in the process of developing edge routing devices to embrace technologies such as deep-packet inspection in an effort to meet the emerging demands of Web-based applications. Santa Clara, Calif.-based 3Com, which in May announced its XRN 10 Gigabit Ethernet VoIP (voice over IP) switch, is in the midst of contemplating the impact of enterprise applications on the next generation of networking equipment... Although Cisco officials assert that Web services will not fundamentally change the nature of packet flow through the network, network devices need to adopt filtering and prioritization capabilities based on XML tags, said Bob Gleichauf, director of software development for security solutions. Cisco's vision includes XML load balancing in the LAN, and ultimately building out technologies to support 'global routing of Web services,' Gleichauf said... Cisco and 3Com aren't the only companies looking at traffic management for XML and SOAP (Simple Object Access Protocol) communications. In fact, they're almost on the trailing edge, because XML switching isn't so much something new as it is an upgrade to existing technology. Products that handle layer-7 traffic, and perform operations based on the content of that traffic, have been available for a while, but they've been known by other names such as load balancers, SSL accelerators, or security appliances. What's new is that these devices now work with XML and SOAP. These products work by reading the packets as they pass through the appliance. If the appliance finds what it's looking for -- an XML tag, for example -- it may verify the data, then send the requests contained in the traffic to the server best able to service the request. In addition, these devices usually provide security, such as SSL termination and acceleration, and so they can respond to the attempt to connect to a server, handle the SSL, read the XML tags, and distribute the resulting traffic as necessary..."

  • [May 31, 2002] "JDF Goes Into High Gear. [JDF Gathers Steam, or At Least Promises. Print Workflow.]" By John Parsons. In Seybold Report: Analyzing Publishing Technology [ISSN: 1533-9211] Volume 2, Number 5 (June 3, 2002). ['Only one year after the official release of JDF 1.0, the printing industry is actively pursuing compliance with the XML-based job-ticket standard. At Ipex 2002, numerous products were demonstrated or announced, and even more were said to be in the works. As anticipated, larger system vendors tended to emphasize JDF connections with their existing business partners, while MIS vendors took a more holistic approach.'] "As standards go, the Job Definition Format (JDF) is progressing at a remarkable rate. By some accounts, the original work began in 1999, building on older, non-XML formats, including Adobe's Portable Job Ticket Format (PJTF) and CIP3's Print Production Format (PPF). The concept of JDF was first publicly presented at Seybold Boston 2000; the official 1.0 release came only a year later, and by Print '01 in September, most prepress, print and MIS vendors were at least privately discussing adding JDF to their plans. At this year's Ipex, the standard had just gone to version 1.1, and vendor and customer enthusiasm had reached gold-rush proportions... The CIP4 organization, the official standards body governing JDF, formally published the 1.1 specifications on May 2. Nearly 550 pages long, the document adds many new job-ticket and messaging definitions. However, CIP4 members generally agree, there is still much work to be done. The enhancements in JDF 1.1 include greater support for digital printing, including variable data handling, as well as major enhancements in finishing and shipping definitions, plus improvements in color workflow and e-commerce. The new spec also adds device-capability reporting. This would allow a JDF-based system to determine (among many other examples) the media sizes supported by an output device, and determine if a specific job was within that device's capability. An updated XML schema for JDF will be published shortly, to be followed by an updated, open-source SDK. CIP4's technical officer, Rainer Prosi of Heidelberg, asserted that the new version is backward-compatible with systems developed using JDF 1.0... JDF follows the time-honored tradition of a comprehensive physical job ticket: a reliable reference point for a job that can be understood -- and easily interacted with -- by many individuals. By all accounts, the CIP4 group has begun to effectively translate that idea into the digital realm, where the efficiencies of a common set of rules can benefit almost everyone. However, extreme patience is required..." See "Job Definition Format (JDF)."

  • [May 31, 2002] "Applied Semantics Boosts Its Categorization Skills. Auto-Categorizer Software Adds Standard Taxonomies to The Mix." By Luke Cavanagh. In Seybold Report: Analyzing Publishing Technology [ISSN: 1533-9211] Volume 2, Number 5 (June 3, 2002). ['Automated categorization and summarization of documents is a growing niche, perhaps because there's now so much digital content to organize. A new player in this field, Applied Semantics offers easy setup. Thanks to a built-in ontology with self-learning algorithms, users don't need to "train" the software or write complex rule sets. The latest release adds user-editable taxonomies.'] "In May, Applied Semantics company added four new standard taxonomies to its Auto-Categorizer product, making the semantically aware categorization software more adaptable to a wider variety of industries out of the box. The product will now support the National Library of Medicine's 20,000-term Medical Subject Headings (MeSH), the Universal Standard Products and Services Classification (UNSPC), the U.S. Labor Department's Standard Industrial Classification (SIC) system and an 800-term geographic taxonomy built around the International Organization of Standardization's schema. The new support comes in addition to earlier support for the taxonomies of the Open Directory Project (for Web-site classification) and the International Press Telecommunications Council, which is preeminent among news organizations. The addition of new classification schemes makes the product more appropriate for biomedical, financial service, manufacturing and publishing organizations... When a document is processed in the Applied Semantics engine, the software uses the ontology to apply appropriate metadata to the document (which is stored in XML files for export to other front-end systems, such as a content-management or editorial system), map it to a taxonomy, or create a summary. So, for example, if a document's overriding theme is determined by the system to be 'terrorism,' and the categorization taxonomy being used to store documents has a category named 'terrorism,' the document will automatically be routed to that category..." See the technical paper and announcement of 2002-05-14: "Applied Semantics Unveils New Industry Taxonomies to Support Auto-Categorization. MeSH, SIC, UNSPSC, and ISO Geography Taxonomies Strengthen Out-of-the-Box Capabilities."

  • [May 31, 2002] "Hollywood's Flawed Ploy for Stricter Copy Protection. [Will Uncle Sam Mandate DRM? Digital Right Management.]" By Matt McKenzie. In Seybold Report: Analyzing Publishing Technology [ISSN: 1533-9211] Volume 2, Number 5 (June 3, 2002). ['Hollywood's latest rear-guard attack on digital distribution is the Consumer Broadband and Digital Television Promotion bill that was recently filed in the U.S. Senate. It's bad law and bad economics. And it pits the normally apolitical Silicon Valley entrepreneurs squarely against the movie studios' PACs.'] " "The bill is called the Consumer Broadband and Digital Television Promotion Act, and it was introduced by Sen. Ernest Hollings (D-SC) in March. It has lately run into trouble: Sen. Patrick Leahy (D-VT), the chair of the Senate Judiciary Committee, has vowed not to allow the bill to reach the Senate floor this year. Other leading lawmakers from both parties also oppose the bill, making it extremely unlikely the measure -- or anything like it -- will pass this year. Whether or not the bill passes, however, it represents a major shift in direction from previous legislation dealing with digital copyright protection: For the first time, the government would take upon itself the mandate to impose a digital content security standard. This approach would move well beyond the intent of previous laws, which dealt mostly with protecting the integrity of third-party copy-protection systems... . 'The DMCA [Digital Millennium Copyright Act] was quite technology-neutral,' said Mark Bohannon, general counsel of the Software and Information Industry Association (SIIA). 'This legislation would be much more intrusive'. 'Intrusive' is an understatement. The Hollings bill covers every hardware or software product that stores, transmits or plays digital content; if the private sector could not agree on a digital rights management (DRM) standard, the government would reserve the right to impose one anyway. Devices that did not include approved DRM technology would be illegal to make or sell in the United States. Consumers who attempt to disable the copy protection-- even for a legitimate personal use -- would be committing a crime. If you think this sounds like a terrible idea, you're not alone. Groups that supported laws such as the Digital Millennium Copyright Act and the No Electronic Theft Act don't want anything to do with the Hollings bill... Who does support the Hollings bill? It appears to have been introduced largely at the behest of the major Hollywood studios, with the Disney Corp. leading the way, although other segments of the music and entertainment industries have since jumped on the bandwagon. This isn't surprising, given the motion-picture industry's well-known (and often hysterical) fear of new consumer technologies: In the early 1980s, for example, Motion Picture Association of America President Jack Valenti warned Congress that videocassette recorders posed a dire threat to Hollywood's economic survival... The sheer audacity of Hollywood's power play may provoke a substantial political backlash. Consumer advocates, the American Library Association, and other groups on the losing side of the DMCA debate have suddenly found new allies in the software and commercial publishing industries. Groups such as Digitalconsumer.org are pushing for legislation that would codify fair-use exceptions and give consumers an explicit legal right to time-shift, space-shift and create backup copies of their digital content... 
Although the SIIA and the Business Software Alliance have both issued public statements opposing the bill, the Association of American Publishers has refused to take a public position on it; the AAP's diplomatic 'no comment' on the Hollings bill strikes us as a bad idea... Given the damage a government-mandated DRM standard would do to the technology innovations that made the e-book market possible in the first place, we can't imagine what the AAP would find desirable in a law designed primarily as a full-employment act for Hollywood lawyers..." See relevance to the Internet in "Patents and Open Standards." Related references in "XML and Digital Rights Management (DRM)."

  • [May 31, 2002] "Web Services Start to Appreciate Security. [Web Services Are Catching On. Is Security Catching Up?]" By Hans Hartman. In Seybold Report: Analyzing Publishing Technology [ISSN: 1533-9211] Volume 2, Number 5 (June 3, 2002), pages 11-16. ['When Web services were new, the challenge was to get them working within the firewall. Now, as firms start to exchange Web services outside the firewall, they need to guarantee privacy, authentication, nonrepudiation and auditability at the XML-message level. So a new set of standards is being hammered out.'] The article surveys Network-level security, session-level security, and (especially) message-level security. "... In a Web-services world where open networks make complete network-level security an illusion and where networks of interrelated Web services require end-to-end security, the current thinking among experts is to expand security at the XML message level. Data secured this way will stay secure even if it is being routed between different Web services and intermediaries. Several standards bodies, including OASIS (Organization for the Advancement of Structured Information Standards), IETF (Internet Engineering Task Force) and W3C (World Wide Web Consortium), have proposed standards to add security to XML messages. We will discuss the most important ones. [Includes presentation of] (1) End-to-end confidentiality - XML encryption to transport data safely through intermediaries; (2) End-to-end integrity - XML Digital Signatures to transport data without modification; (3) Authentication - X.509 certificates or Kerberos tickets for secure authentication; XKMS for managing certificates; (4) Authorization and access control - SAML to request authorization decisions from security servers; XACML for authorization rules; (5) Non-repudiation of origin - XML digital signatures to prove that consumer sent the request; (6) Non-repudiation of receipt - XML digital signatures to prove that provider received the request or responded to it; (7) Auditing and accountability - Digital signatures to protect logs from modification; (8) SOAP-based security - WS-Security to incorporate existing security standards into SOAP; (9) Malicious attack prevention, alerting, isolation - Standard Web-server protection measures; SOAP-cognizant firewalls or proxy servers; authentication denial for unknown requestors... With most Web-service security protocols and technologies still being developed, companies will most likely start introducing security for Web services on an incremental basis. For instance, a Web service could require its requestors to obtain a digital certificate from a well-known company such as VeriSign or Thawte. Even at a stage when the SOAP security framework is still in a state of flux, this would go a long way toward reducing the level of insecurity on public networks... much of the progress in developing universally adopted, message-level security standards for Web services depends on the efforts of leading software companies. The good news here is that they are putting considerable effort into developing open standards that could be widely adopted. The challenge will be to keep these standards open and to leave intra-company rivalry and politics out of these standards discussions..."

  • [May 28, 2002] "Subtyping in XML Schema." By Murali Mani. May 16, 2002. "This article covers the problems we have to answer when we consider subtyping for XML Schemas, and two different solutions that exist currently. This is written mainly with reference to the XML-Schema proposal from W3C, and two theoretical frameworks for subtyping given in [two publications]: (1) Subsumption for XML, by Gabriel M. Kuper and Jérôme Siméon; (2) Regular Expression Types for XML, by Haruo Hosoya... Why is typing in XML Schemas different and challenging? In this article, we study the problem which we call the type membership problem. The type-membership problem tries to answer whether a given object, o belongs to type A or not. In traditional programming languages, this problem is answered as follows: Suppose o is declared to be of type B, then o is of type A iff B is a subtype of A. In XML, the type membership problem occurs in two different scenarios -- during type assignment, and during type checking. Type assignment does the following -- given a set of type definitions and a set of objects (subtrees), we try to assign types to each object. Type checking occurs during processing and does the following -- given an expression where we expect an object of type Type1, can we use an object of type Type2?..."

  • [May 28, 2002] "Subsumption for XML." By Gabriel M. Kuper and Jérôme Siméon. Presented at the International Conference on Database Theory (ICDT'2001), January 2001, London, UK. 15 pages. 33 references. "XML data is often used (validated, stored, queried, etc.) with respect to different types. Understanding the relationship between these types can provide important information for manipulating this data. We propose a notion of subsumption for XML to capture such relationships. Subsumption relies on a syntactic mapping between types, and can be used for facilitating validation and query processing. We study the properties of subsumption, in particular the notion of the greatest lower bound of two schemas, and show how this can be used as a guide for selecting a storage structure. While less powerful than inclusion, subsumption generalizes several other mechanisms for reusing types. notably extension and refinement from XML Schema, and subtyping... Intuitively, subsumption captures not just the fact that one type is contained in another, but also captures some of the structural relationships between the two schemas. We show that subsumption can be used to facilitate commonly used type-related operations on XML data, such as type assignment, or for query processing..." See also the earlier technical report "Subsumption for XML Types." Other URLs: [1], [2].

  • [May 24, 2002] "All Things Under Sun." By Michael Vizard and Steve Gillmor [with Jonathan Schwartz]. In InfoWorld (May 24, 2002). "Jonathan Schwartz, Sun's chief strategy officer, on July 1 [2002] will assume the role of executive vice president of software and will inherit a set of major challenges. In particular, Schwartz needs to put Sun's Linux house in order, get IT managers excited about a range of products marketed under the Sun ONE (Open Net Environment) banner, and recapture the industry momentum the company has lost in the wake of being late to embrace and understand Web services. In an interview with InfoWorld Editor in Chief Michael Vizard and Test Center Director Steve Gillmor, Schwartz defends all things under Sun. InfoWorld: To many people in the industry, it seems like Sun has this knee-jerk aversion to XML and anything to do with Microsoft. Under your leadership, what will Sun's position on XML be? Schwartz: I wouldn't argue with anybody that XML is a better portable data platform than almost anything else. But is XML going to be the dominant format for the propagation of information? I don't believe so. I believe it will be the most prevalent for static data, and specifically for textual data. But I think it is a complete fallacy to assume that AOL-Time Warner is going to start broadcasting Harry Potter in XML. They're not. I believe that media types will be the dominant content format on the Web; therefore, MPEG 2 and 4 and all of its progeny, to my mind, are going to be the dominant content type. So looking at the role of Java in our end-to-end platform, Java is an execution environment. In fact, we believe it will be the most pervasive with respect to appearing on a multitude of devices, from set-top boxes, to airline seat-backs, to vending machines, to cellular phones, to Web servers, to app servers, to databases. Unquestionably, it is our end-to-end architecture. But what is going to be the principal transport between and among those devices? That all depends. The cellular network isn't going to use HTTP, and the broadcast cable environment right now doesn't even know what an IP network is. So will HTTP be the principal propagation mechanism for corporate data? Probably. But what do you do with the need for richer expressions or semantic information on top of HTTP? You evolve to things like SOAP and ebXML... Our strategy can be summed up as follows: We believe that every platform that exists will need to have an execution environment of some form. To the extent [that] a portable execution environment needs to exist, we believe Java will be it, because .Net is a waste of time for any consumer company. [Microsoft is] never going to put .Net in front of their users. But what is the principal propagation mechanism to get code to those devices? It may be XML, in the sense that you can send some pretty interesting instructions and information to a device in XML. It may, in fact, be objects -- depending upon the data types that you're attempting to propagate -- because again, if you're going to ship a movie to the seat-back of the GM minivan, you aren't going to do it in XML. But you may ship the billing information around the Harry Potter movie in XML, and that seems like the rational thing to go do. But with respect to our strategy, portable execution is Java, portable data is XML. XML is useful to get the data there, but XML had nothing to do with how the information was displayed..."

  • [May 24, 2002] "Storage Leaders Drum Up New SAN Specification." By Clint Boulton. In InternetNews.com (May 22, 2002). "The collective gunning for data storage interoperability has come up with a new specification to augment the management of storage area networks. It's code-named 'BlueFin,' and heavy-hitting members of the Storage Networking Industry Association are backing it. The meat of the new protocol is the application the Common Information Model (CIM) and Web-Based Enterprise Management (WBEM) technology as a basis for Storage Area Management (SAM). Why are competing hardware and software vendors, such as BMC Software, EMC, Hewlett-Packard, and IBM, taking up this task? Essentially, they want to prove that customers can manage multi-vendor products with a single management application - a Holy Grail of sorts in the niche of SAM. Specifically, the SNIA feels that the enterprise requires a standard for managing devices such as disk arrays, switches, and hosts in a SAN. Such a standard, the group said, should include a common model of device behavior, and a common language to read and set control information. John Webster, senior analyst at the Data Mobility Group, said: 'Without comprehensive standards for management and testing for interoperability, users will be forced to pay artificially high prices for solutions and, as a result, will find it more difficult to achieve the promised value of storage networking,' As for technical details, 'BlueFin,' employs technology from the WBEM initiative that uses the Managed Object Format (MOF) to describe system resources based on a CIM, or view of physical and logical system components. WBEM includes a data model, the Common Information Model (CIM), an encoding specification based on Extensible Markup Language (XML), and a transport mechanism based on Hypertext Transport Protocol (HTTP). SNIA has worked for the last five years on this endeavor with the creator of the CIM and WBEM models, Distributed Management Task Force (DMTF)..." See references in the news item "SNIA Announces Bluefin SAN Management Specification Using WBEM/MOF/CIM."

  • [May 24, 2002] Document Schema Definition Languages (DSDL) -- Part 2: Grammar-based validation -- RELAX NG. From ISO/IEC JTC 1/SC 34/WG 1. Reference: ISO/IEC DIS 19757-2. Date: 2002-05-22. ISO/IEC 19757-2 was prepared by Joint Technical Committee ISO/IEC JTC 1, Information Technology, Subcommittee SC 34, Document Description and Processing Languages. 40 pages [PDF]. See (1) the news item "RELAX NG Published as ISO/IEC DIS 19757-2 (DSDL Part 2)"; (2) "Document Schema Definition Language (DSDL)."

  • [May 24, 2002] "Air Force Tests XML-Based E-Forms." By Dan Caterinicchia. In Federal Computer Week (May 27, 2002). "The [US] Air Force has selected software from PureEdge Solutions Inc. to pilot the development of electronic forms based on Extensible Markup Language in the first phase of a four-year, multimillion-dollar contract. The Air Force has about 14,000 e-forms that are used by more than 700,000 members worldwide, and the PureEdge technology will help transform those static electronic documents into an interactive e-business model based on XML, said Brian Nutt, chief operating officer of the Canada-based company. XML enables agencies to "tag" data and documents so it is easier to exchange information among systems. The Air Force is testing PureEdge's secure Internet Commerce System software, which creates, deploys and manages secure XML forms for online transactions. Air Force laboratory tests have been completed at PureEdge headquarters, and final operational tests are under way at Air Force bases around the world, including seven in the continental United States and others in South Korea, Turkey and Germany. When the software passes the operational tests at those 10 bases, the Air Force and PureEdge will implement XML solutions for the millions of e-form transactions made each year, Nutt said... Technology integrator Enterprise Information Management is managing the overall project, said David Sprenkle, president of the Rosslyn, Va.-based firm. He said company officials are excited about the "opportunity to replace proprietary systems with XML standards-based products," which will save the Air Force time and money... 'The key to our technology is interoperability,' Nutt said. 'The Air Force is using it as a springboard to revamp business processes by using the forms at the front end and then sharing data across Air Force commands and potentially other [Defense Department] commands, and you do that with XML.' PureEdge's other federal customers include the National Institutes of Health, the Federal Trade Commission and the Securities and Exchange Commission. The SEC uses the solution for EDGAR, the Electronic Data Gathering, Analysis and Retrieval system, a database of corporate filings that receives about 1 million hits daily..." See the announcement: "USAF Drills Next Generation XML Technology. PureEdge software subjected to rigorous tests before rollout to 700,000 users." The Internet Commerce System from PureEdge Solutions is "the first enforceable XML replacement for paper forms and processes, and is used by organizations such as the Canadian Deposit Insurance Corporation, Securities & Exchange Commission, JPMorgan Chase, FedEx, Royal Canadian Mounted Police, General Electric and the U.S. Department of Defense."

  • [May 24, 2002] "Exclusive XML Canonicalization Version 1.0." W3C Proposed Recommendation 24-May-2002. Edited by John Boyer (PureEdge Solutions Inc.), Donald E. Eastlake 3rd (Motorola), and Joseph Reagle (W3C). Version URL: http://www.w3.org/TR/2002/PR-xml-exc-c14n-20020524/. Latest version URL: http://www.w3.org/TR/xml-exc-c14n. "Canonical XML specifies a standard serialization of XML that, when applied to a subdocument, includes the subdocument's ancestor context including all of the namespace declarations and attributes in the "xml:" namespace. However, some applications require a method which, to the extent practical, excludes ancestor context from a canonicalized subdocument. For example, one might require a digital signature over an XML payload (subdocument) in an XML message that will not break when that subdocument is removed from its original message and/or inserted into a different context. This requirement is satisfied by Exclusive XML Canonicalization..." Status: "The exit criteria of at least two interoperable implementations over every feature, one implementation of all features, reports of satisfactory performance and adequacy in an application context (e.g., SOAP, SAML, etc.) has been satisfied..." See also the Interoperability Report.

  • [May 24, 2002] "Background Checking 1.0." HR-XML Recommendation. 2002-April-29. Version reference: BackgroundChecking-1_0. Edited by Craig Corner (HireCheck) and Chuck Allen (HR-XML Consortium, Inc.). 82 pages. "HR-XML's Background Checking specification supports requests to third-party providers of background checking services and the return of search results. The specification defines messages to support background check requests and reports... Version 1.0 of the HR-XML Background Checking specification includes two schemas: (1) A schema to support background-check requests to third-party providers. The schema explicitly supports screenings relating to criminal records, department of motor vehicle records, education, employment history, and credit worthiness. In addition, the BackgroundCheck schema is sufficiently flexible to transmit information required to execute custom screenings that a client might arrange with a background checking service provider. (2) A simple schema to transmit background check results to a client of a background checking service provider..." See the announcement for other details: "HR-XML Consortium Approves Background Checking Standard Specification Enables Easy Integration Between Employers and Background-Checking Services." The ZIP archive contains schemas and other documentation. References: "HR-XML Consortium."

  • [May 24, 2002] "Pull Parsing in C# and Java." By Niel Bornstein. From XML.com. May 22, 2002. ['Over the last year, a new model for processing XML has emerged as a middle way between the well established DOM and SAX models. So-called "pull parsing" has found a home in particular in the Microsoft C#/.NET XML API. In the third of his series investigating XML processing in .NET, Niel Bornstein gives an introduction to pull parsing and shows how it is implemented in Java, too.'] "In my first article in this series, I wrote about porting a SAX application called RSSReader to the new Microsoft .NET Framework XmlReader. After publication, I received a message from Chris Lovett of Microsoft suggesting I revisit the subject. As he said, while the code I presented works, my approach was not optimal for the .NET framework; I was still thinking in terms of SAX event driven state machinery. A much easier way to approach this problem is to take advantage of the fact that XmlReader does not make you think this way; and, thus, to write a recursive descent RSS transformer as outlined below. Based on Chris' suggestions, I've also made some other changes, including changing the output mechanism to use the XmlTextWriter, which will take care of generating well formed XHTML on the output side. And following all that, in a reversal of our usual process, I'll port this code back to Java... pull parsers are not unique to the .NET world. The Java Community Process is currently working on a standard called StAX, the Streaming API for XML. This nascent API is, in turn, based upon several vendors' pull parser implementations, notably Apache's Xerces XNI, BEA's XML Stream API, XML Pull Parser 2, PullDOM (for Python), and, yes, Microsoft's XmlReader... A pull parser makes it much easier to process XML, especially when you are processing XML with a well-defined grammar like RSS. This code is much easier to understand and maintain since there's no complex state machine to build or maintain. In fact, this code is completely stateless; the pull parser keeps track of all the state for us. So in that sense a pull parser is a higher level way of processing XML than SAX... But the real bottom line remains that doing it the .NET way means that Microsoft provides all the standards-compliant tools that 90% of developers are likely to need, while the Java way still means putting together a solution from various pieces that you can scrounge from various sources. Some of those pieces come from the Java Community Process and thus represent peer-reviewed, formally approved APIs, but some come from a quick search of the Web..."

  • [May 24, 2002] "Extending SVG for XForms." By Antoine Quint. From XML.com. May 22, 2002. ['On the use of SVG, CSS, and EcmaScript in building XForms apps. In the first of two articles on XForms, Antoine Quint explains how SVG can be used to implement the W3C's replacement for HTML form controls.'] "If you've read my previous columns, it should be clear that SVG provides what programmers and designers need to create interactive 2D graphics. What makes SVG a little special is the versatility it inherits from XML. XML vocabularies are meant to be extended and intermixed with one another. But despite some noteworthy efforts to create XML-aware browsers, like XSmiles, Amaya, and Mozilla, rock-solid SVG-centric browser applications are not quite ready. But it may be possible to start implementing XForms UI controls in SVG. In previous columns we looked at SVG DOM in simple drag and drop or event handling and remote animation control. This month we're going to implement the XForms <button> element with CSS support, which will serve as an applied study on extending SVG's semantics with foreign elements and attributes... Our task has three parts: parsing, instantiating objects, and interacting with them. Parsing is the part where we are going to look up XForms content in our document and try to make some sense of it. We will then use the gathered information to build an internal EcmaScript object model and draw the form controls. Interaction is the part where we use event handlers to create the different states of the button and allow for callbacks to be registered when it is pressed. This month, we'll take care of the XForms <button> element alone, concentrating on the process of integrating a new namespace with SVG; we'll look at the the interactivity and object modeling next month. So our first task is to read an SVG+XForms document and draw a static button... In next month's column we'll add SVG interactivity..." See: "W3C Scalable Vector Graphics (SVG)."

  • [May 24, 2002] "XML Europe 2002 Coverage." By Leigh Dodds. From XML.com. May 22, 2002. XMLDeviant report from XML Europe 2002. [1] Topics on the European Map: "In his opening keynote at the XML Europe 2002 conference, Peter Pappamikail, head of the European Union (EU) Information Resource Management Group, gave some background on their activities within European government and the unique challenges which they face in defining a new information architecture for the EU Parliament. Pappamikail explained that a key policy of the group was to use XML as the underlying technology for their efforts to draw together data, editorial, and metadata standards, among other policy documents. Pappamikail also briefly outlined two current initiatives. The first, ParlML, will attempt to capture best practices and other XML "bricks and tricks" to inform the use of XML in various European parliamentary activities. The second project, MiREG, will define a common metadata framework and associated controlled vocabularies and topic maps for use in public administration across the EU." [2] Published Subjects and Content Structures: "... the need for users of a controlled vocabulary to reach a common understanding of its terms for it to be most effective. This ensures that it will be consistently applied when authoring documents. Dealing with the many different EU languages was one issue which Papamikail highlighted. The difficulties in reaching shared understanding of controlled terms across multiple languages further complicates this scenario. Published Subjects basically allow a subject to be defined in a Topic Map as a concept with an associated URI. One or more names may then be associated with a subject, which might be localized for specific languages. This gives the freedom to support multiple languages but still provide a stable subject resource with which documents can be associated. The ability to associate human-readable documentation with a Published Subject is another important component that will drive shared understandings of the controlled terms." [3] Adaptive Graphics: "The central theme of the presentation was the benefit of attempting to separate content and presentation elements of images to allow the delivery of adaptive or personalized graphical content. Tailoring graphical content to end user preference or need is a feature enabled by the use of SVG to describe an image. For example, it's vastly easier to alter a piece of text content in an SVG image than it is to attempt the same with a raster graphics format. This introduces the potential for on-the-fly image generation as a means to present complex data." [4] The Markup Spectrum: "Rick Jelliffe, 'When Well-formedness is Too Much, and Validity is Too Little' -- The point of illustrating this spectrum of states was to highlight the states useful for different kinds of applications, particularly editing... These concepts have already guided the development of the new Toplogi Collaborative Markup Editor which is currently in late beta with very promising results. It will be interesting to see how these concepts might be applied to the ISO DSDL project, of which Jelliffe is a member. DSDL is currently undergoing a realignment to focus on publishing requirements exclusively, at least in its initial phases. W3C XML Schemas will now be supported as an extension, rather than as one of the core languages, which include Schematron and RELAX NG. 
Jelliffe was keen to avoid any sense that there might be conflict between the two efforts, stressing that each was most useful in distinct but possibly overlapping application areas."

  • [May 23, 2002] "Automating Data Acquisition into Ontologies from Pharmacogenetics Relational Data Sources Using Declarative Object Definitions and XML." By Daniel L. Rubin, Micheal Hewett, Diane E. Oliver, Teri E. Klein, and Russ B. Altman (Stanford Medical Informatics, MSOB X-215, Stanford, CA 94305-5479, USA). In Pacific Symposium on Biocomputing [Online Proceedings] 7:88-99 (2002). With 30 references. "Ontologies are useful for organizing large numbers of concepts having complex relationships, such as the breadth of genetic and clinical knowledge in pharmacogenomics. But because ontologies change and knowledge evolves, it is time consuming to maintain stable mappings to external data sources that are in relational format. We propose a method for interfacing ontology models with data acquisition from external relational data sources. This method uses a declarative interface between the ontology and the data source, and this interface is modeled in the ontology and implemented using XML schema. Data is imported from the relational source into the ontology using XML, and data integrity is checked by validating the XML submission with an XML schema. We have implemented this approach in PharmGKB, a pharmacogenetics knowledge base. Our goals were to (1) import genetic sequence data, collected in relational format, into the pharmacogenetics ontology, and (2) automate the process of updating the links between the ontology and data acquisition when the ontology changes. We tested our approach by linking PharmGKB with data acquisition from a relational model of genetic sequence information. The ontology subsequently evolved, and we were able to rapidly update our interface with the external data and continue acquiring the data. Similar approaches may be helpful for integrating other heterogeneous information sources in order make the diversity of pharmacogenetics data amenable to computational analysis..." See also Pacific Symposium on Biocomputing 2003 conference details. [cache]

  • [May 21, 2002] "Uniform Comparison of Data Models Using Containment Modeling." By E. James Whitehead, Jr. (Department of Computer Science, University of California, Santa Cruz, CA). Paper prepared for the ACM SIGWEB 2002 Hypertext Conference (ACM conference on Hypertext and Hypermedia), June 2002. "Containment data models are a subset of entity relationship models in which the allowed relationships are either a type of containment, storage, or inheritance. This paper describes containment relationships, and containment data models, applying them to model a broad range of monolithic, link server, and hyperbase systems, as well as the Dexter reference model, and the WWW with WebDAV extensions. A key quality of containment data models is their ability to model systems uniformly, allowing a broad range of systems to be compared consistently. ... Containment data modeling provides a modeling mechanism capable of uniformly representing the data models of a wide range of existing hypertext systems. Containment data modeling has been validated by presenting the models of 14 existing hypertext systems and reference models, the broadest survey to date of these models. The uniformity of containment modeling is highlighted by the ability to decompose and model both Dexter and the WWW in the same way as other systems, allowing them to be compared with each other and with other systems using a consistent model. The understanding of Dexter and the WWW that emerges from this process shows them to be non-distinguished peers with other hypertext systems, carrying their own design choices and tradeoffs, but otherwise with no special distinction to their data models. Containment data modeling provides a new technique useful in comparing hypertext system data models. By concisely representing system data models, and then grouping thems together, it is possible to quickly see similarities and differences among systems, including patterns for handling composites, anchors, and links. Since containment data modeling focuses on modeling the static aspects of system data models, it is complementary to architecture-focused models, such as Flag, and formal models of system semantics, such as the FOHM model, or Trellis' Petri-nets. While containment data models are a powerful and useful modeling technique, they alone do not give a complete picture of a hypertext system, and should be used in conjunction with other modeling techniques, providing multiple views of distinct aspects of each system. Containment data models show significant promise for modeling systems in other domains, such as Software Configuration Management and Document Management. In our future work, we look forward to extending this technique to these additional classes of systems, enabling us to perform substantive crossdomain comparison of information management systems..." See Emmet Whitehead's Curriculum Vitae for related papers on WebDAV and hypertext versioning. On WebDAV: "WEBDAV (Extensions for Distributed Authoring and Versioning on the World Wide Web." [cache]

  • [May 21, 2002] "The DocBook Document Type." Version 4.2 Candidate Release 2. OASIS DocBook Technical Committee Working Draft. 21-May-2002. "DocBook is general purpose XML and SGML document type particularly well suited to books and papers about computer hardware and software (though it is by no means limited to these applications). The DocBook Technical Committee maintains the DocBook schema. DocBook is officially available as a Document Type Definition (DTD) for both XML and SGML. It is unofficially available in other forms as well. The Version 4.2 release is a maintainance release. It introduces no backwards-incompatible changes. All valid DocBook 4.1 documents are also valid DocBook 4.2 documents..." [Norman Walsh announced the availability of the second Candidate Release of DocBook V4.2. "Please give it a try and report the results of your efforts. If no errors are reported in the next 30 days, the TC will be able to vote to move it to Committee Specification status." The The DocBook SVG Module Version 1.0 Beta 2 is also available [OASIS DocBook Technical Committee, Working Draft 21-May-2002]. "This module integrates SVG (Scalable Vector Graphics) into DocBook by incorporating the SVG V1.1 DTD using a namespace prefix and extending the content model of DocBook's imageobject element to allow those elements to occur." See: "DocBook XML DTD."

  • [May 21, 2002] "Take the Sting Out of SAX: Generate SAX Parsers with XML Schemas ." By Leon Messerschmidt. In JavaWorld (May 17, 2002). ['Although SAX (the Simple API for XML) parsers are handy tools for parsing XML content, developing and maintaining a SAX parser can prove difficult. In this article, Leon Messerschmidt shows you how to use the information contained in XML Schemas to generate source code for a skeleton SAX parser. You also learn techniques to accomplish common XML parsing tasks.'] "...parsing even a simple XML file can produce a significant amount of source code. SAX's event-driven (as opposed to document-driven) nature also makes the source code difficult to maintain and debug because you must be constantly aware of the parser's state when writing SAX code. Writing a SAX parser for complex document definitions can prove even more demanding; see Resources for challenging real-life examples. We must reduce the work involved in writing an event-handler structure so we have more time to work on actual processing... To lighten our workload, we can automate most of the process of writing the event-handler structure. Luckily, the computer already knows the format of the XML file we will parse; the format is defined in a computer-readable DTD (document type definition) or in an XML Schema. I explore ways to use this knowledge for generating source code that removes the sting from SAX parser development. For this article, I rely on XML Schemas only. Though younger than DTDs, the XML Schema standard will probably replace DTDs in the future. You can easily convert your existing DTD files to XML Schemas with the help of some simple tools. The first step towards building our code generator is to load the information contained in the XML Schema into a memory model. For this article, I use a simple memory model that defines only the XML entity and attribute names, as well as the entities' relationship to each other..."

  • [May 21, 2002] "REST (Representational State Transfer)." XML Technologies Course. By Roger L. Costello (The MITRE Corporation). ['I have created a tutorial on REST. I welcome all comments, suggestions, criticisms.'] See the thread on the XML-DEV forum. Also available in Powerpoint format.

  • [May 21, 2002] "Open eBook Publication Structure 1.2." Open eBook Forum Working Draft. April 17, 2002. Appendix A contains the OeBF Package DTD; Appendix B contains the basic OeBF Document DTD. "The Open eBook Publication Structure (OEBPS) is a specification for representing the content of electronic books. Specifically: (1) The specification is intended to give content providers (e.g., publishers, authors, and others who have content to be displayed) and tool providers minimal and common guidelines which ensure fidelity, accuracy, accessibility, and adequate presentation of electronic content over various electronic book platforms. (3) The goal of this specification is to define a standard means of content description for use by purveyors of electronic books (publishers, agents, authors et al.) allowing such content to be provided to multiple Reading Systems... OEBPS is based on XML because of its generality and simplicity, and because XML documents are likely to adapt well to future technologies and uses. XML also provides well-defined rules for the syntax of documents, which decreases the cost to implementers and reduces incompatibility across systems. Further, XML is extensible: it is not tied to any particular set of element types, it supports internationalization, and it encourages document markup that can represent a document's internal parts more directly, making them amenable to automated formatting and other types of computer processing..." This version: "Original planning for the new OEBPS specification identified, as reported earlier, four areas for enhancement: content provider control over presentation, support for international content, linking and navigation functionality, and metadata support. The WG has worked intensely over the last year and a half and has made considerable progress in several of these areas. However we decided, with the approval of the OEBF Board and Systems Working Group, that rather than delaying the release of completed work in presentation control, the highest priority area, until all enhancements were completed, we should make presentation control work available now, particularly as it could be done with minimal impact on the conformance of existing content. Consequently OEBPS 1.2 is a tightly constrained update to OEBPS 1.0.1: it provides a great deal of new functionality in the area of presentation control, but it deliberately minimizes all other changes to 1.0.1..." Associated note from Allen_Renear (Chair, OeBF Publication Structure Working Group): "Public comment is invited on version 1.2 of the OEBF Publication Structure, now a 'Draft Document' undergoing OEBF membership review ... a reminder that the review period for OEBPS 1.2 'Draft Document' ends on May 27, 2002. Please send any comments to commentsoebps12@openebook.org. We would also appreciate hearing from anyone who has reviewed the draft, regardless of whether or not they have any questions, proposed corrections, or other specific comments." See "Open Ebook Initiative." [cache]

  • [May 20, 2002] "Inktomi Searches XML for Content Answers." By Cathleen Moore. In InfoWorld (May 17, 2002). "Search vendor Inktomi is looking to drive its search technology into the heart of enterprise applications, tapping XML to extract data from structured and semistructured data buried in corporate systems. The Foster City, Calif.-based company this week rolled out the Inktomi Search Toolkit, designed to be embedded into applications such as content management, CRM, ERP, portals, and commerce systems. Spreading its wings beyond intranet search, Inktomi is leveraging XML in combination with the emerging World Wide Web Consortium (W3C) standard XQuery to dig into the structured and semistructured content that proliferates in applications. The problem of extracting information from corporate applications is huge, according to Bill Lange, senior analyst at Boston-based Delphi Group... Existing application search tools fall short because they attempt to use unstructured information retrieval designed for intranets or relational database search, which does not deal well with textual content, said Rahul Lahiri, director of product management at Inktomi. Tapping XQuery, a query language for XML, in combination with Inktomi's Search Toolkit merges traditional keyword search with sophisticated tools for powerful structured queries typically found in relational databases. 'The product we built is very XML-focused. XML is becoming the de facto standard for data presentation in the enterprise,' Lahiri said. 'XML allows you to evolve the data structure, and it adds a tremendous amount of flexibility'... Inktomi also recently teamed with Stratify to marry search with text categorization. The deal aims to boost search precision by allowing users to drill down into documents within specific categories..." See the news item: "New XML-Based Inktomi Search Toolkit Combines Keyword and Parametric Search."

  • [May 20, 2002] "Inktomi Unveils XML-based Search Toolkit." By Ryan Naraine. From Internet.com Developer News. May 14, 2002. "Search technology firm Inktomi Corp Tuesday [2002-05-14] released its Search Toolkit, an XML-based server tool targeting enterprise Web developers. Inktomi, locked in a high-stakes race for market share with upstart Google, is styling the Search Toolkit technology an OEM software for extracting data from structured, unstructured and semi-structured content. The software, which targets enterprise developers and systems integrators, promises retrieve information within content-rich applications such as content management, enterprise portal, CRM and commerce solutions. The Foster City, Calif.-based Inktomi said the Search Toolkit would let customers run provide a server-based architecture, open APIs and a standards-based query language, allowing easy integration with most environments... 'As enterprise applications continue to build upon XML and evolve toward Web services, it is critical that they include search functionality that is fully compatible with XML,' Inktomi said, touting the Search Toolkit as the first OEM software that delivers XML-based retrieval capabilities for content within enterprise applications. Inktomi said the Search Toolkit would provide the unstructured search functionality of a keyword search engine, such as relevance ranking, natural language search and filtering for various file formats. In addition, it would offer XQuery-based structured query capabilities that allow jazzed-up retrieval functions such as parametric searching and retrieval of content based on a document's structure. It is programmed to return results that include references to documents as well as the actual documents or fragments of those documents that contain the precise information requested... The latest product announcement is seen as Inktomi's response to losing its lucrative contract with America Online to rival Google. America Online chose Google to provide editorial search results and paid listings to its various search properties in the United States, including AOL Search, Netscape Search and CompuServe Search..." See the news item: "New XML-Based Inktomi Search Toolkit Combines Keyword and Parametric Search."

  • [May 20, 2002] "Oracle Readies 9i Tools Suite." By Paul Krill. In InfoWorld (May 20, 2002). "Oracle on tuesday will announce general availability of Oracle9i Developer Suite, which enables Java, XML, and SQL developers to build applications for transactional and analytical purposes, according to the company. Team-based development also is supported through a tool called Software Configuration Manager, for managing development teams. The suite features Oracle9i JDeveloper and supports J2EE (Java 2 Enterprise Edition), XML, and Web services standards, the company said. Applications can be built to be interoperable across multiple operating systems and devices, from Web browsers to mobile systems. The suite is built to function with the Oracle9i Application Server and database of the same designation... Java and PL/SQL application development tools in the suite include JDeveloper, a Java, XML, and SQL development environment; Forms Developer, a rapid application development tool; Designer, for modeling and generating requirements and design of database applications; and Software Configuration Manager. The suite will be available for $5,000 per developer from the Oracle Store."

  • [May 20, 2002] "Oracle Announces New Developers' Suite." By Darryl K. Taft. In eWEEK (May 20, 2002). "Oracle Corp. Monday announced the release of Oracle 9i Developer Suite, a new set of tools for building Internet applications and Web services featuring transactional and business intelligence capabilities. The new suite targets Java, XML and SQL developers, and provides full development lifecycle support in a standards-based platform, the company said. Oracle said the toolset supports the latest Java 2 Enterprise Edition, XML and Web services standards. In addition, the Oracle 9i Developer Suite includes new features for building transactional and analytic applications, including Oracle9i Business Intelligence Beans for implementing business intelligence capabilities in Java applications; Oracle9i Software Configuration Manager (SCM) for collaborative team development; enhanced J2EE Design Pattern implementations; a fast Java debugger; and a CodeCoach tool for code optimization, the company said. For Java and PL/SQL developers, Oracle 9i Developer Suite offers tools to help them write applications and web services faster. The tools include Oracle9i JDeveloper, an integrated Java, XML and SQL development environment for J2EE and Web services; Oracle9i Forms Developer, a rapid application development tool for PL/SQL development; Oracle9i Designer, a toolset for modeling, generating and capturing the requirements and design of database applications; and the SCM tool..."

  • [May 20, 2002] "Oracle Bets on XML for 9i Software." By Wylie Wong. In ZDNet News (May 15, 2002). "Oracle, looking to regain market share lost to its rivals, is set to launch a new version of its flagship database software... Topping the list of new features in the release is more fluency in XML (Extensible Markup Language), a Web standard for exchanging information that is a cornerstone of Web services software development... Using XML in a database was a complicated task, requiring the XML data to be translated so it could be stored in the database, Shimp said. 'It didn't run fast enough to be usable.' Now, the updated 9i database can support XML documents, giving businesses faster access to data, he said. The updated database could also spur customers to buy add-on technology from Oracle, such as advanced security and clustering features. Clustering lets businesses harness multiple servers to run a very large database, allowing servers to share work or take over from each other if one fails. The new 9i release will also feature better performance because it includes all the software patches and bug fixes that the company has released since the first version of 9i was launched last June, said George Demarest, Oracle's senior director for database marketing... [Gartner analyst Ted Friedman says although improved XML capabilities in 9i is a logical step for Oracle to take, it won't guarantee future success. See "Commentary: Oracle's Positive Step" as a Gartner Viewpoint, Special to CNET News.com: "The market demand for XML (Extensible Markup Language) support within the database management system is only now starting to build, but to date, most relational database vendors have been slow to include this resource. With this release of 9i, Oracle has taken a solid step toward addressing this issue and making its product competitive with the offerings of XML-grounded database vendors that have generated considerable market interest over their XML database features. Furthermore, Oracle seeks to move ahead of its two chief competitors in this market, Microsoft and IBM, which are also expected to introduce stronger XML support. However, although improved XML capabilities in 9i is a logical step for Oracle to take, it won't guarantee future success. Improved XML support alone isn't going to cause a sudden, massive adoption of 9i--the number of enterprises looking to immediately adopt XML database functionality is still small..."]"

  • [May 20, 2002] "Whirlwind of Web Services Work on Tap." By John Fontana. In Network World (May 20, 2002). "In rapid-fire succession over the next six to eight months, network executives could see up to 30 new protocols emerge designed to advance Web services as a way to support secure and reliable interconnection of transaction-based business applications. The protocols will help mitigate risk, enforce access and use policies, ensure nonrepudiation and guarantee execution and exception handling by defining authentication, authorization, trust, reliable messaging, transactional integrity and workflow. Standards for XML-based digital signatures and encryption already exist. Standards bodies focused on XML include the Organization for the Advancement of Structured Information Standards (OASIS) and the World Wide Web Consortium (W3C)... The groups will be heavily active in the coming months on standardizing recommendations, introducing new specifications, hammering out guidelines for security requirements and focusing on creating consistency across a palette of security initiatives.. Work is under way to tie it all together for use in Web services development tools and other software. Last week, OASIS created the Security Standards Joint Committee (SSJC), an oversight group to ensure consistency among its security working groups. Next month, OASIS will begin work on final approval of SAML and XACML. The SPML specification is set for standards review at year-end, and a fourth focused on digital rights management had its first committee meeting last week. 'If you can show me a PowerPoint slide that describes how security standards tie together, I'll give you a million bucks,' says Darran Rolls, director of technology for Waveset and the co-chair of the SSJC. 'We need common terms and a way to prevent overlap in the specs.' The W3C last month published the first draft of its Web Services Architecture Requirements, including a foundation for security based on accessibility, authentication, authorization, confidentiality, integrity and nonrepudiation. The final draft is due early next year, and the group is working on a proposal to create an umbrella security group that would work on security extensions to SOAP and examine new security proposals, says Philippe Le Hégaret, a member of the W3C technical staff. One such effort to create new protocols is being led by IBM, Microsoft and VeriSign, which by year-end plan to introduce six specifications to extend the WS-Security specification they introduced last month. The trio says it hopes to submit WS-Security, which is built on XML-Sig and XML encryption, to a yet-to-be determined standards body this fall. IBM and Microsoft are at work independently on specifications - Web Services Flow Language and Xlang, respectively - for standardizing workflow, the process of managing the execution of Web services in business processes. IBM's Sutor says the company also is working on a specification for guaranteed delivery of messages, although he would not provide details..."

  • [May 20, 2002] "Vitria To Unveil Web-Services Module." By Antone Gonsalves. In InformationWeek (May 16, 2002). "Vitria Technology Inc. next week will unveil a Web-services module for its application-integration platform. Despite being called a version 2 release, previous versions of the product were not generally available. The new product, which works in conjunction with Vitria's BusinessWare integration platform, provides tools for building and deploying Web-services-based application interfaces without difficult coding. Web services is a term used for a set of emerging technologies posed to become standards for connecting Internet-based applications through XML. XML is a hot technology because it can simplify the complex process of application integration. Companies are using the 20% of the IT budget usually set aside for trying new technology to buy Web-services tools, Aberdeen Group analyst Darcy Fowkes says. "The initial deployments are to save money" in software integration, she says. With Vitria's $30,000 Web services module, a developer can choose an application connected to BusinessWare and let the new tool build a Soap interface for the software. Soap, or Simple Object Access Protocol, is the Web-services communication mechanism used for receiving and sending XML-based data between applications..."

  • [May 20, 2002] "Slowing Standards." By Chad Dickerson (InfoWorld CTO) and Jace Weiser (YellowBrix). In InfoWorld Volume 24, Issue 19 (May 13, 2002), page 63. "Many CTOs are actively involved with Web services and other standards development organizations. But the work of these organizations is progressing too slowly, according to a poll of 45 members of the InfoWorld CTO Network." The charts display level of involvement with different standards organizations and perceptions about whether standards take too long for publication/release. See the associated story "The Standards Body Politic," by Jack McCarthy and "The Politics of Standards."

  • [May 20, 2002] "Hyperlinks Matter." By Jon Udell. In InfoWorld (May 17, 2002). ['Proponents of REST, the Web's foundation architecture, ask, 'Where's the Web in Web services?' SOAP toolkit vendors have been listening, and as a result, the currency of the Web -- the hyperlink -- seems likely to be honored in the Web services realm... The architects of the Web services era often speak of loosely coupled services and low coordination costs. Piggybacking SOAP onto HTTP is one way to achieve those goals. Another is to make sure that URLs identify important resources, both to humans and to software.'] "... A critique of the Web services movement, which emerged last summer and flared up after the release of Google's SOAP (Simple Object Access Protocol) API, reminds us that the Web's convenient duality was no accident, and shouldn't be taken for granted. At the center of this critique was Roy Fielding, who chairs the Apache Software Foundation and co-wrote a number of the Web's core specifications. Fielding's doctoral thesis, published in 2000, identified an architectural style called REST (Representational State Transfer) as the basis of the Web. Resources are the currency of REST. It uses URIs (uniform resource identifiers) to exchange resource representations (documents), by means of a handful of core methods such as HTTP GET and POST. Network effects don't just happen, the RESTians argue. The Web's success was a direct result of the simplicity and generality of this approach. When Google offered its SOAP API, REST proponents argued that it had, in some sense, seceded from the Web. '[Google] deprived the Web of approximately a billion useful XML URIs,' wrote REST proponent Paul Prescod. What ought Google have done to satisfy the naysayers? One undocumented solution, since discontinued, was to support URLs such as http://google.com/xml?q=roy+fielding, so that a simple HTTP GET request would return a package of XML data. Does this kind of thing qualify as a Web service? Absolutely. To see why, consider how a similar kind of service, news syndication based on the RSS (Rich Site Summary) format, has fared. RSS channels, including those launched at InfoWorld, are simply XML files fetched like ordinary Web pages. Some are generated by software, others are written by people. Subscribers to these channels are, likewise, both programs and people. Web sites running RSS aggregators collect channels and transform them into readable HTML. More recently, personal aggregators such as the one in Radio UserLand are putting the program selection into the hands of individuals. The fact that software 'calls' the same URL that a person links to, or sends to a friend in e-mail, goes a long way toward explaining why RSS is one of the more widespread and popular applications of XML. The RESTful nature of RSS can have surprising consequences. For example, Macromedia recently launched an XML-formatted but non-RSS news feed on its developer site. The point of this exercise was to showcase innovative Flash renderings of XML content. Arguably, Macromedia should have provided an RSS rendering of the feed. But the omission was easy to rectify, thanks to another REST-savvy service offered by the W3C (World Wide Web Consortium). The W3C provides a URL-accessible XSLT (Extensible Stylesheet Language Transformations) transformation service. You can use it to transform URL-accessible XML files using URL-accessible XSLT files..."

  • [May 17, 2002] "Filling in the DTD Gaps with Schematron." By Bob DuCharme. From XML.com. May 15, 2002. ['Using Schematron to add greater capabilities to applications using DTDs.'] "Many XML developers, just when they've gotten used to DTDs, are hearing about alternatives and wondering what to do with them. W3C schemas, RELAX NG, Schematron -- which should they go with? What will each buy them? What software support does each have? How much of their current systems will they still be able to use? The feeling of unease behind these questions can be summed up with one question: if I leave DTDs behind to use one of the others, will I regret it? One nice thing about Schematron is its ability to work as an adjunct to the others, including DTDs, so you don't have to leave DTDs behind to take advantage of Schematron. To use Schematron in combination with RELAX NG, Sun's msv validator has an add-on library that lets you check a document against a RELAX NG schema with embedded Schematron rules, but you don't need a combination validator like msv to take advantage of Schematron. There's nothing wrong with checking a document against one type of schema and then checking it against a set of Schematron rules as well. In fact, more and more XML developers are realizing that a pipeline of specialized processes that each check for one class of problems can serve their needs better than a monolithic processor that does most of what they need and several more things that they don't need... This turns out to be the answer to the prayers of many developers wondering about the best way to move on from DTDs. If you have a working system built around DTDs and want to take advantage of the Schematron features that are unavailable in DTDs, you can go ahead and write the Schematron rules that fill in those gaps and continue using your DTD-based system... Once XPath 2.0 is implemented in some XSLT processors, I should be able to do type checking properly without moving beyond my Schematron+DTD combination. And I do have a lot right now: all the new possibilities of Schematron for specifying data constraints without giving up any of the features of DTDs and the extensive support available for them. Schematron support only requires an XSLT processor, and there are plenty of those around. The fact that [almost] all of the examples in this article were real problems that I had to solve for a project unrelated to this article made it clear to me: Schematron can add a lot to XML-based systems currently in production or in development without forcing us to leave DTDs behind until we're good and ready to..." See: "Schematron: XML Structure Validation Language Using Patterns in Trees."

  • [May 17, 2002] "Eric van der Vlist on W3C XML Schema." By [O'Reilly Staff]. From XML.com. May 15, 2002. ['An interview with the author of O'Reilly's "XML Schema" book.'] "Eric van der Vlist, a regular contributor to XML.com, has just completed writing XML Schema: The W3C's Object-Oriented Descriptions for XMLfor O'Reilly, to be published in June 2002. In this interview he explains the importance of XML schema languages, and his motivations for writing the book. "[I have chosen this subject] "because I think that the XML schema languages in general, and W3C XML Schema in particular, are the hot topics of the moment: being at the same time essential and potentially dangerous for XML. I thought that an objective book needed to be written, which would be a kind of map to W3C XML Schema, showing clearly not only the features and goodies but also the pitfalls of this specification... the lack of XML schema languages is simply not economically acceptable! An application must expect that the XML documents used as input follow some kind of structure, in order to be able to understand them. Formalizing this structure as 'XML schemas' enables all kind of productivity, quality and performance improvements by automating tasks such as validation, code generation, data binding, documentation and query optimization... The XML DTD was specified in the XML 1.0 recommendation, published before Namespaces in XML 1.0. The XML DTD ignores the notion of namespace and lacks the flexibility necessary to support them in a simple way. The XML DTD is also a descendant of the SGML DTD, which had been designed for document-oriented applications, and lacks a complete type system -- a requirement for data oriented applications. The W3C had the choice between updating the specification of the DTD or creating a new specification; it chose to start anew. I guess that the interoperability issues linked with any modification of the XML 1.0 recommendation have influenced this decision: it is often easier to create a new standard than to update an existing one, especially when it's a successful one! [...] DSDL proposes a classification of schema languages in three categories: (1) Rule based languages (such as Schematron), defining the rules to be followed by a class of XML documents. (2) Grammar based languages (such as RELAX NG), defining the structure of a class of XML documents as a grammar or a set of patterns. (3) Object oriented languages (disclaimer: I am the editor of this section of the DSDL work), describing a class of XML documents as object oriented structures facilitating the mapping between XML documents and object oriented applications. This classification shows that the XML schema languages are very different and could be considered more complementary than competing. If we had to define these schema languages from scratch today, with the experience we have acquired and putting aside any political considerations, I think that we could even define them as layers: a rule based language would be the foundation of a grammar based language, on top of which an object oriented language could be defined..." Note: Eric van der Vlist's book "explains XML Schema foundations, a variety of different styles for writing schemas, simple and complex types, datatypes and facets, keys, extensibility, documentation, design choices, best practices, and limitations; complete with references, a glossary, and examples throughout." For schema description and references, see "XML Schemas." 
On DSDL, see "Document Schema Definition Language (DSDL)."

  • [May 17, 2002] "XML Schema Languages." By Eric van der Vlist. May 2002. Prepared for the XML Europe 2002 Tutorials. The tutorial follows the classification of XML schema languages proposed by the ISO DSDL Working Group at http://dsdl.org. (1) Introduction; (2) Rule based languages [XSLT and Schematron[; (3) Grammar based languages [RELAX NG]; (4) Object Oriented languages [W3C XML Schema]. See source from the DSDL web site.

  • [May 17, 2002] "Examining WSDL." By Rich Salz. From XML.com. May 15, 2002. ['Rich Salz explains some of the flaws and rough corners of WSDL.'] "Unlike today's Web, web services can be viewed as a set of programs interacting cross a network with no explicit human interaction involved during the transaction. In order for programs to exchange data, it's necessary to define strictly the communications protocol, the data transfer syntax, and the location of the endpoint. For building large, complex systems, such service definitions must be done in a rigorous manner: ideally, a machine-readable language with well-defined semantics, as opposed to parochial and imprecise natural languages. It is possible to define service definitions in English; XML-RPC and the various weblogging interfaces are a notable example. But XML-RPC is a very simple system, by design, with a relatively small set of features; it's ill-suited to the task of building large-scale or enterprise applications. For example, you can't use XML-RPC to send an arbitrary XML document from one system to another without converting it to a base64-encoded string. Almost all distributed systems have a language for describing interfaces. They were often C or Pascal-like, often named similarly: 'IDL' in DCE and Corba, MIDL, in Microsoft's COM and DCOM. The idea is that after rigorously defining the interface language, tools could be used to to parse the IDL and generate code stubs, thus automating some of grungier parts of distributed programming. The web services distributed programming model has an IDL, too; and as you can probably guess, it's the Web Services Definition Language, WSDL... WSDL derives from two earlier efforts by a number of companies; the current de facto standard is a W3C Note submitted by IBM and Microsoft. There's a web services description working group, which is creating the next version of the note for eventual delivery as a W3C standard. So far the group has published a requirements document and some usage scenarios. One reason to like the requirements document is that it renames some of WSDL's more confusing terms... I find WSDL to be a frustrating mixture of verbosity -- most messages are essentially described three times -- and curious supposedly helpful defaults, such as omitting the name of a message in an operation. I'll use the now much-discussed Google WSDL to point some of these out..." See: "Web Services Description Language (WSDL)."

  • [May 17, 2002] "Go Tell It On the Mountain. [RDF Primer]". By Kendall Grant Clark. From XML.com. May 15, 2002. ['As part of the re-framing of the W3C's Resource Description Framework a primer has been produced to accompany the new RDF specifications. Kendall Clark reviews the new document.'] On RDF Primer, W3C Working Draft 26 April 2002. "... After languishing for a relatively long time, the next 12 to 18 months is a make or break time for RDF, as well as for the Semantic Web Activity, of which it is a crucial element. Despite the technical problems with RDF, the biggest impediment to its widespread use to date has been the failure of evangelism, not so much because it was done poorly, though there have been missteps, but, rather, because it was mostly not done at all. And so the arrival of a Primer among the work product of the RDF Core WG is a happy occasion. It offers those of us not serving on an RDF working group, but inclined to do formal or informal evangelization of RDF among our peers, a non-normative but still blessed and trusted ground upon which to base our efforts. As well, it offers curious, potential users a more easily accessible introduction to RDF, and that can only be a good thing. There is some sense in which there being an RDF primer is far more strategically valuable than the sort of primer it is. However, as with other introductory texts, which are often the only texts about a technology users ever read, there are certain principles it is hard to quarrel with. Among these are simplicity and concision. In what ways and to what extent these principles are met, as always, at least partially a subjective question. I for one think that the RDF Primer gets more things right than it gets wrong, but I'm also hopeful that future drafts grow increasingly simplified and concise..." See "Resource Description Framework (RDF)."

  • [May 16, 2002] "A Metadata Registry for the Semantic Web." By Rachel Heery (Research & Development, UKOLN) and Harry Wagner (Office of Research, OCLC / DCMI). "The Semantic Web activity is a W3C project whose goal is to enable a 'cooperative' Web where machines and humans can exchange electronic content that has clear-cut, unambiguous meaning. This vision is based on the automated sharing of metadata terms across Web applications. The declaration of schemas in metadata registries advance this vision by providing a common approach for the discovery, understanding, and exchange of semantics. However, many of the issues regarding registries are not clear, and ideas vary regarding their scope and purpose. Additionally, registry issues are often difficult to describe and comprehend without a working example. This article will explore the role of metadata registries and will describe three prototypes, written by the Dublin Core Metadata Initiative. The article will outline how the prototypes are being used to demonstrate and evaluate application scope, functional requirements, and technology solutions for metadata registries... The W3C Resource Description Framework (RDF) has provided the basis for a common approach to declaring schemas in use. At present the RDF Schema (RDFS) specification offers the basis for a simple declaration of schema. It provides a common data model and simple declarative language. Additional work is underway in the context of the W3C's RDFCore Working Group and the Web Ontology Group to add 'richness' and flexibility to the RDF schema language, to incorporate the features of the DARPA Agent Markup Language (DAML) and the Ontology Interface Layer (OIL) ontology language, and to bring this work to recommendation status. Even as it stands, an increasing number of initiatives are using RDFS to 'publish' their schemas... Metadata schema registries are, in effect, databases of schemas that can trace an historical line back to shared data dictionaries and the registration process encouraged by the ISO/IEC 11179 community. New impetus for the development of registries has come with the development activities surrounding creation of the Semantic Web. The motivation for establishing registries arises from domain and standardization communities, and from the knowledge management community. Examples of current registry activity include: (1) Agencies maintaining directories of data elements in a domain area in accordance with ISO/IEC 11179. [This standard specifies good practice for data element definition as well as the registration process. 
Example implementations are the National Health Information Knowledgebase hosted by the Australian Institute of Health and Welfare and the Environmental Data Registry hosted by the US Environmental Protection Agency.]; (2) The xml.org directory of the Extensible Markup Language (XML) document specifications facilitating re-use of Document Type Definition (DTD), hosted by the Organization for the Advancement of Structured Information Standards (OASIS); (3) The MetaForm database of Dublin Core usage and mappings maintained at the State and University Library in Goettingen; (4) The Semantic Web Agreement Group Dictionary (SWAG), a database of terms for the Semantic Web that can be referred to by humans and software agents; (5) LEXML, a multi-lingual and multi-jurisdictional RDF Dictionary for the legal world; (6) The SCHEMAS registry maintained by the European Commission funded SCHEMAS project, which indexes several metadata element sets as well as a large number of activity reports describing metadata related activities and initiatives..." See also "XML Registry and Repository."
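
  To give a flavor of what such registries index, here is a minimal RDF Schema declaration of the sort an agency might 'publish' (the example.org vocabulary is invented for this sketch; only the RDF and RDFS namespaces are standard):

      <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
               xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">
        <!-- Declare a class of registered things -->
        <rdfs:Class rdf:about="http://example.org/terms#DataElement">
          <rdfs:label>Data Element</rdfs:label>
          <rdfs:comment>A registered unit of data definition.</rdfs:comment>
        </rdfs:Class>
        <!-- Declare a property and constrain the class it applies to -->
        <rdf:Property rdf:about="http://example.org/terms#registrar">
          <rdfs:label>registrar</rdfs:label>
          <rdfs:domain rdf:resource="http://example.org/terms#DataElement"/>
        </rdf:Property>
      </rdf:RDF>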

  • [May 16, 2002] "ebXML: A B2B Standard On Hold." By Eric Knorr. In ZDNet Tech Update (May 15, 2002). "Quick--how do most e-commerce dollars change hands? No, not through big B2B marketplaces such as Covisint. And not through B2C sites such as Amazon. The fat pipe for e-commerce dollars remains electronic data interchange (EDI), a standard born in the 1970s that enables businesses to order and bill electronically, typically for raw materials used in manufacturing. People don't think of EDI as e-commerce because most of its connections are still proprietary rather than Internet-based. But whatever you call it, EDI continues to transfer many times the dollars conveyed by Internet-based commerce, according to most estimates. Most of those EDI dollars flowed among a few hundred big corporations, mainly because EDI links are expensive to establish and maintain. For years, companies that relied on EDI lamented its limited reach and high cost. So in 1999, corporate EDI users, along with IBM, Sun, Ariba, Commerce One, and various other members of the OASIS standards group -- along with the United Nations -- decided to establish a rich common denominator that would make electronic trade relationships easier and cheaper using XML over the Internet. In May 2001, after 18 months of grueling work, ebXML was born. My question of the week is: Why is Microsoft opposing -- and why is IBM all but ignoring -- ebXML? Obviously, ebXML's corporate proponents have money to spend on technology. And they drew on decades of electronic trading experience to develop a common standard for companies to engage in machine-to-machine commerce. Moreover, according to Jon Bosak, a distinguished engineer at Sun (who led the original W3C working group that created the XML 1.0 specification), ebXML is gaining traction among governments all over the world, particularly in Asia. But mention ebXML to Microsoft, and you'll get your head bitten off--while the only IBM product that supports ebXML is the obscure Trading Partner Interchange software... David Burdette, director of Web services product management for Commerce One, acknowledges that ebXML focuses pretty much exclusively on B2B, a narrower scope than Web services. But he also disputes any implication that ebXML (particularly version 2.0 released last month) is not ready for prime time, asserting that it is "completely fit for the purpose of B2B messaging in its current form." Three points argue in favor of the adoption of ebXML: It exists, dozens of governments have launched initiatives that support it, and the EDI people behind it move more dollars over the wire in one year than Internet e-commerce industry has in its history. Sun is the only big technology company still pushing ebXML..." See: "Electronic Business XML Initiative (ebXML)."

  • [May 16, 2002] "Biometric based Digital Signature Scheme." By Rohas Nagpal and Shuchi Nagpal (Asian School of Cyber Laws, India). IETF Internet-Draft. Intended Category: Informational. Reference: 'draft-nagpal-biometric-digital-signature-00.txt'. May 2002; expires: October 2002. "Digital Signatures are fast emerging as a viable information security solution, satiating the objectives of data integrity, entity authentication, privacy, non-repudiation and certification. The technique, as it stands today, faces the problem of the maintenance of the secrecy of the private key. This document provides a conceptual framework for the establishment of a biometric-based key generation scheme. In this scheme, the private key is generated each time a document or record requires to be signed. Such generation is based upon a combination of biometric traits... Biometrics is the use of some physiological or behavioural characteristics for authentication and identification. This technique of authentication is based upon the fact that certain characteristics are never the same in any two people. Biometric recognition systems operate in two modes: one is the identification where the system identifies a person by searching a large database for a match and the other the authentication mode where the systems verifies a person's claimed identity by checking against a previously entered pattern. The techniques included in this method of identification are retina scanning, iris scanning, fingerprint verification, voice verification, facial analysis etc. Biometric based authentication schemes are utilized in sectors like Public services, Law Enforcement and Banking... The authors propose a biometric-based key generation scheme, [in] which a private key of the person would be generated every time that the person wishes to sign (or for that matter decrypt) a record. His public key, on the other hand will always be taken from the records of the bank (or other Government appointed agency) where the key pair was first generated. An elaborate key revocation protocol would have to be worked out based upon current practices with relevant modifications. The private key should be allowed to sign as long as the iris, retina and fingerprint of the person are in adequate physical proximity to the system. Once that proximity is lost, the private key should be permanently erased by the system. This method of key generation would ensure the trust of the public in cryptography and digital signatures to a much larger extent. It would be possible to utilize this method in many applications requiring authentication or identification..." See also: "XML Common Biometric Format (XCBF)." [cache]

  • [May 16, 2002] "XML Standards for Human Resources Move Forward." By Richard Karpinski. In InternetWeek (May 15, 2002). "A group of vendors and companies involved in providing human resources software and services said Wednesday they've approved a new version of XML standards to help automate the exchange of HR information. The HR-XML Consortium approved the Resume 2.0 specification, which provides a standard way for companies to accept, store, and exchange employee resumes. The group said the resume standard could help employers, who would be able to analyze and sort resumes in a more automated fashion, as well as job seekers, who would be able to better communicate their skills to hiring departments. Resume document standards were defined as part of HR-XML's Staffing Exchange Protocol 1.1. But they were specified as Document Type Definitions (DTDs). Resume 2.0 uses the more powerful XML Schema Definition Language, making such documents more extensible and modular, the group said. Part of the advantage of the HR-XML standards is that well-defined documents can help companies better share and exchange data. For instance, the new resume standard can automatically share data with other HR-XML standards, such as specifications for formatting an "employment history" or "background" check document..." See: (1) "HR-XML Consortium"; (2) "HR-XML Publishes Staffing Industry Data Exchange Standards 1.0 (SIDES) with Industry Support"; (3) the announcement "HR-XML Consortium Approves New XML Resume Specification. Versatile Specification Makes Resumes Easier To Search, Index, Match, and Manage." The Resume 2.0 specification is abstracted in the following bibliographic item.

  • [May 16, 2002] "Resume 2.0." Edited by Kim Bartkus. HR-XML Recommendation. 2002-April-30. 38 pages. "This [HR-XML] specification (Resume 2.0) includes the necessary schema for transmitting a resume, including the employment, education, and military history. Additionally, a date schema allows for transmission of partial dates for historical data. For example, an individual may have received a bachelor's degree in 1995, or received special training in Summer 1986... The specification includes models and business cases for both recruiting and contingent staffing transactions. The schema allows the transfer of resume information between employers, suppliers of staff, service/system/procurement vendors and job/position seekers... Although all SEP 2.0 schemas have not all been identified as of this specification, this document will address the following schemas: (1) Resume 2.0; (2) Employment History 2.0; (3) Education History 2.0; (4) Military History 2.0; (5) SEP Dates 2.0. [...] The schema provides for two types of resumes: structured and non-structured. The structured resume is defined as discrete XML elements. A non-structured (non-XML) resume may be included as a URL to the resume or an insertion of the resume text, as opposed to an attachment... The Staffing Exchange Protocol (SEP) 1.1 encompassed three Recruiting and Staffing type transactions. Each transaction was a self-contained monolithic DTD. [1] JobPositionPosting. This is published information about a position, job, opening or other staffing need. [2] JobPositionSeeker. This is information about an individual submitted for consideration by a staffing company or an employer. [3] JobPositionSeekerFeedback. This is feedback information pertaining to the Job/Position Seeker in relation to a set of Job/Position Postings. SEP 2.0 will be a collection of reusable schemas, which may be used a la carte to build a full schema. For example, the Resume schema will be used in the Staffing Industry Data Exchange Standard (SIDES) 1.0 and Background Checking 1.0 specifications. All schemas will be compatible within the SEP 2.0 specification." Extracted from the ZIP archive, which contains the relevant XML schemas and sample documents. See also (1) the announcement "HR-XML Consortium Approves New XML Resume Specification. Versatile Specification Makes Resumes Easier To Search, Index, Match, and Manage"; (2) general references in "HR-XML Consortium." [cache]

  • [May 16, 2002] "IBM Web Services Guru Predicts WSDL Future." By Gavin Clarke. In Computerwire (May 16, 2002). Also printed in The Register. "... IBM's director of e-Business standards Bob Sutor, predicts between 20 and 25 XML-based specifications for web services will eventually be defined by standards bodies like OASIS and the World Wide Web Consortium (W3C). The good part? ISVs and customers are unlikely to implement all 25. Instead developers will mix and match, picking specifications to suit specific needs. WSDL will be the most widely adopted specification, to describe web services. On this latter specification, Sutor is emphatic: web services are defined by whether they are described in WSDL. 'Web services will be fundamentally WSDL based. You have to use WSDL to describe [a web service] and then you can build up from there,' Sutor said. 'There will be a tremendous amount of use for SOAP... but there may be a more optimized protocol.' Sutor believes the Web Services Interoperability (WS-I) Organization will help clear-up confusion over the remaining specifications. Sutor said WS-I will help to ensure interoperability between vendor's implementations of standards. WS-I will also corral groups of standards to suit specific functions, such as security. 'Once standards get implemented you get a different perspective of what you should have and what should have been settled, as people mix and match parts of a paper specification. That's where WS-I can contribute. The raw list of standards is just ammunition," he said..." See: "Web Services Interoperability Organization (WS-I)."

  • [May 16, 2002] "Sun Pitches StarOffice as Low-Cost Microsoft Office Alternative." By Tom Krazit and Ed Scannell. In InfoWorld (May 15, 2002). "Staroffice 6.0 will be available worldwide on May 21 [2002] as a low-cost alternative to Microsoft's Office, Sun Microsystems announced Wednesday... Business customers will pay on a volume basis, with prices starting at $50 per user for a 150-user license and decreasing to $25 per user for 10,000 or more user licenses. StarOffice licenses allow each user to install the software on as many as five machines or devices. Retail users will pay $75.95 for the product, while educational customers need only to pay for the CD-ROM and shipping of the software... Sun designed the new version to use default XML file formats, enabling users to share StarOffice files with other XML-based applications and to download tools from the Internet to customize features, according to Sun. Support for Microsoft's Office product, the dominant office productivity suite, was also increased. Files created in Word or Excel can be read, modified, and saved in their StarOffice counterparts, Rogers said. Users can set Word and Excel as their default file formats in StarOffice. A common complaint lodged against earlier versions of StarOffice pertained to its desktop interface. Previous versions of StarOffice used the StarDesktop, which mimicked a desktop operating system upon launch, displaying icons for the different features and substituting its own toolbar for the user's operating system toolbar. Sun had wanted to standardize the look and feel of the software across different environments, said Rogers, but users disliked it, preferring to work within their chosen desktop environment... Sun will distribute StarOffice through several Linux distributors, who will bundle the StarOffice product into their versions -- or "distributions" -- of the open-source operating system. Sun will license StarOffice 6.0 to MandrakeSoft, SuSE Linux, Turbolinux, and Ximian for inclusion with their products, and additional companies are in negotiations with Sun to bundle either StarOffice or the open-source version Openoffice.org... OpenOffice.org 1.0 was released on May 1. It lacks some of the additional features of 6.0, but is available as a free download. The new open-source version was developed by its namesake open-source community, which is online at www.openoffice.org..." See additional references in the news item.

  • [May 16, 2002] "Office Wars: StarOffice in Attack Mode." By Peter Galli. In eWEEK (May 15, 2002). "StarOffice 6.0, which runs on the Windows, Linux and Solaris platforms and has been available as a free download until now, will be available for $75.95 through retail channels next week, [Sun's Mike] Rogers confirmed. Enterprise customers who make a commitment of 1,000 or more users will pay between $25 and $50 under the new tiered, per-user basis, while education customers will only pay the cost of the CD-ROM and shipping, he said. Stacey Quandt, an analyst at Giga Information Group, also recently wrote in a report that while StarOffice provides good file compatibility, it does not enable macro, pivot table or anything developed in Visual Basic. 'So clients with substantial use of any of these likely would find a migration rather costly due to manual conversions and retraining,' she said. But Rogers disputed this, saying StarOffice 6.0 takes Office compatibility to a new level. Among the most significant new features in the product is the open XML-based default file format, and users would have far more robust Microsoft Office import and export filters, including support for Office XP, redesigned dialog boxes, additional templates, graphics and clip art, he said. 'It works transparently with a variety of file formats, enabling users familiar with other office suites, such as Microsoft Office, to open, save and exchange files. There's also a familiar, integrated and configurable interface, while the integrated desktop has also been removed to support native desktop environments such as GNOME, CDE and Windows,' he said..." See also "OpenOffice.org: Serious Suite Alternative." See additional references in the news item.

  • [May 16, 2002] "Company Claims Patent on 'millions' of E-Commerce Sites." By Sam Costello. In InfoWorld (May 15, 2002). [Paranoia gives way to cash settlements.] "... At issue are two PanIP patents, awarded to Lawrence B. Lockwood, one of the principals of Pangea, in 1996 and 2001. The 1996 patent, number 5,576,951, covers a system 'for composing individualized sales presentations created from various textual and graphical information data sources' using 'the retrieval of interrelated textual and graphical information,' according to the patent filing. The 2001 patent, number 6,289,319, which covers an automated 'financial transaction processing system' in which a computer 'is programmed to acquire credit rating data relating to the applicant from the credit rating service,' according to the filing. Though PanIP is currently only suing 11 companies for allegedly infringing on these patents, the total number of infringing companies could run into the millions, according to Kathleen M. Walker, a private practice attorney based in San Diego who is representing PanIP... PanIP is seeking a one-time $30,000 from each company for a license on both patents for as long as they are valid, she said. That payment would also cover a patent that PanIP claims to have pending that covers 'very similar technology and may bear on these retailers,' Walker added. So far, one of the 11 companies being sued has chosen to settle, Walker said. She declined to release terms of the settlement because that agreement has not been finalized... Though the decision to file suit against 11 small businesses out of the potentially millions of companies that may be infringing the patents, including such e-commerce titans as Amazon.com and eBay, may look bad, 'the patent and trademark laws don't say you have to go after the biggest fish first,' Walker said... These lawsuits aren't the first time that PanIP principal Lawrence Lockwood has initiated legal proceedings against companies he felt were infringing his patents. Lockwood filed a lawsuit against American Airlines in 1994, claiming that American's SABREvision airline reservation system infringed on other patents he holds. Lockwood lost the suit in the U.S. District Court for the Southern District of California and then lost again on appeal in 1997... Though the patents may seem broad, 'when you seek a patent, you try to get it as broad as possible,' said PanIP's attorney Walker..." See: "Patents and Open Standards."

  • [May 15, 2002] "Sun Guns for Archrival with StarOffice 6." By Clint Boulton. In InternetNews.com (May 15, 2002). "Hoping to pry a few fingers from Microsoft Corp.'s seemingly iron-clad grip on the market for e-mail software, Sun Microsystems, Inc. Wednesday unveiled StarOffice 6 to compete with the Redmond, Wash. firm's ubiquitous Office product. StarOffice 6 has been around in some form or another for beta testers since October 2001, and when rumors spread in February 2002 that Sun was thinking of switching gears from its free, open-source versions of StarOffice to a money-making product, users wondered whether Sun would lose credibility in its repeated attacks on Microsoft for being anti-open source and greedy. Others, though, such as Gartner analyst Michael Silver, have lauded Sun's decision to charge for StarOffice and said the Palo Alto, Calif. concern could nab as much as 10 percent of Microsoft's Office market share. In a nutshell, StarOffice 6 allows the creation of documents, spreadsheets and presentations on the Linux, Solaris and Windows platforms. The suite uses an open Extensible Markup Language (XML) based file format as its default, enabling anyone the ability to use widely available tools to open, modify, and share StarOffice content -- including some with Microsoft's Office import and export filters. StarOffice 6 will be available through Linux vendors, PC OEMs, software retailers, Sun's sales force and the StarOffice NOW program. OEMs such as Hyundai, MandrakeSoft, SuSE Linux, Turbolinux, and Ximian will bundle the application in their products..." See references in the news item.

  • [May 15, 2002] "Specializing Domains in DITA. Feature provides for great flexibility in extending and reusing information types." By Erik Hennum (Advisory Software Engineer, IBM Corporation). From IBM developerWorks, XML zone. May 2002. ['In current approaches, DTDs are static. As a result, DTD designers try to cover every contingency and, when this effort fails, users have to force their information to fit existing types. The Darwin Information Typing Architecture (DITA) changes this situation by giving information architects and developers the power to extend a base DTD to cover their domains. This article shows you how to leverage the extensible DITA DTD to describe new domains of information.'] "The Darwin Information Typing Architecture (DITA) is an XML architecture for extensible technical information. A domain extends DITA with a set of elements whose names and content models are unique to an organization or field of knowledge. Architects and authors can combine elements from any number of domains, leading to great flexibility and precision in capturing the semantics and structure of their information. In this overview, you'll learn how to define your own domains... In DITA, the topic is the basic unit of processable content. The topic provides the title, metadata, and structure for the content. Some topic types provide very simple content structures. For example, the concept topic has a single concept body for all of the concept content. By contrast, a task topic articulates a structure that distinguishes pieces of the task content, such as the prerequisites, steps, and results. In most cases, these topic structures contain content elements that are not specific to the topic type. For example, both the concept body and the task prerequisites permit common block elements such as p paragraphs and ul unordered lists. Domain specialization lets you define new types of content elements independently of topic type. That is, you can derive new phrase or block elements from the existing phrase and block elements. You can use a specialized content element within any topic structure where its base element is allowed. For instance, because a p paragraph can appear within a concept body or task prerequisite, a specialized paragraph could appear there, too... [Summary:] Through topic specialization and domains, DITA provides the following benefits: (1) Simpler topic design: The document designer can focus on the structure of the topic without having to foresee every variety of content used within the structure. (2) Simpler topic hierarchies: The document designer can add new types of content without having to add new types of topics. (3) Extensible content for existing topics: The document designer can reuse existing types of topics with new types of content. (4) Semantic precision: Content elements with more specific semantics can be derived from existing elements and used freely within documents. (5) Simpler element lists for authors: The document designer can select domains to minimize the element set. Authors can learn the elements that are appropriate for the document instead of learning to disregard unneeded elements. In short, the DITA domain feature provides for great flexibility in extending and reusing information types. The highlight, programming, and UI domains provided with the base DITA release are only the beginning of what can be accomplished..." 
See the news item 2002-05-15: "IBM's Darwin Architecture Supports Enhancements for Domain Specialization, Content Reuse, and Linking Logic." General references in "Darwin Information Typing Architecture (DITA XML)."
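
  The mechanics rest on DITA's class attribute, which records each element's derivation so that generic processing can fall back to the base type. A minimal sketch, using an invented 'msg' domain that specializes the base keyword element, might look like this:

      <!-- DTD fragment: msgnum is a domain specialization of keyword -->
      <!ELEMENT msgnum  (#PCDATA)>
      <!ATTLIST msgnum
        class  CDATA  "+ topic/keyword msg-d/msgnum "
      >

      <!-- Instance: msgnum is usable wherever keyword is allowed -->
      <p>Look up message <msgnum>ABC1234E</msgnum> in the reference.</p>

  A processor that knows nothing about the msg domain can still treat msgnum as a plain topic/keyword by reading the class attribute.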

  • [May, 2002] "Architecting Content for DITA." By Julio J. Vazquez (D&ID Marketing & Sales, IBM). Technical Report. Research Triangle Park, North Carolina. Reference: TR 29.xxxx. May, 2002. 18 pages. "This article describes how an author can apply the DITA architecture when developing information for a product. It lists the information types defined in DITA and a reasonable approach for analyzing content to fit the types. The article does not teach the structure of the language but focuses on the concepts behind the architecture and how to change your paradigm to embrace the architecture. There are a few examples included that are based on the language as it stands today..." General references in "Darwin Information Typing Architecture (DITA XML)."[cache]

  • [May 10, 2002] "The Wireless, Non-Free, Paperless Future. [Future of News.]" By Leslie Gornstein. In Online Journalism Review. Article posted: 2002-04-18; modified: 2002-05-02. ['Technologies in research labs today will soon change how we get our news. The market downturn has meant bad news for R&D budgets. But most firms say they are still busy developing the next generation of gadgets that will redefine how the news is delivered and received. Rest assured, you won't get it for free.'] "... Here are just a few of the ways gizmos in the pipeline now may change the news experience in the near future: [1] Nothing Will Be Free. The biggest change is that the information you get over your laptop, Palm or pen probably won't be free. And if it is free to peek at, you probably won't be able to copy and paste it, print it, look at it a second time, or store it on your hard drive in any way -- unless you pay for the privilege. A relatively new markup language called XrML is being tweaked by Xerox, Microsoft and others to protect copyrighted materials in the digital domain. At Xerox, researchers are developing a digital rights manager software suite that will let people use XrML to protect everything from songs to written stories. A song being offered for free download at a mall kiosk may go into a 13-year-old's MP3 player, but once the kid gets home, he may find he can't put it on the Net or even upload it to his PC without a message popping up on his computer monitor, asking if he would like to purchase the privilege. The language is being hailed by at least one standards board as the way to shut down piracy and free rides in general. 'DRM (digital rights management) isn't just hacker proofing,' says Michael Miron, chief executive at ContentGuard, a Xerox spin-off that's spearheading XrML rights software in cooperation with Microsoft. 'You need security, but you also need to deal with conditions of use.' What's scary about DRM is that there's nothing built into the forthcoming rights protection software that protects consumers' rights under fair use laws -- the same laws that allow people to tape their favorite TV shows. But Miron says assuring such rights is too much of a burden for his software team, arguing that every country has different intellectual property laws. The rights management revolution is already underway; a Microsoft eBook reader that complies with XrML rights management standards was released last summer. More XrML products are on their way later this year, Miron says..." See analysis of ContentGuard's (XrML) patent claims in the document dedicated to "Patents and Open Standards."

  • [May 10, 2002] "Toward a New Politics of Intellectual Property." By Pamela Samuelson (University of California at Berkeley). Friday, 10 May 2002 Plenary. The Eleventh International World Wide Web Conference (May 7-11, 2002; Sheraton Waikiki Hotel, Honolulu, Hawaii, USA). "... The Hollings bill is unlikely to pass during the current legislative session but it should be taken very seriously. Hollywood won't be satisfied until and unless general purpose computers have been tamed and the Internet has been rearchitected to make it safe for their products. Quite possible in the near term are little 'mini-Hollings' bills focused on specific technologies (e.g., requiring makers of digital televisions to build sets to respond to broadcast 'flags' which would allow or disallow copying of particular programs). Once several of these bills have passed, the momentum for more general legislation is likely to build. To oppose such legislation, it is not enough to say that the Hollings bill or little mini-Hollings bills are brain-dead or unenforceable. If you think general purpose computers and open information environments such as the World Wide Web are valuable, you are going to have to help build a new politics of intellectual property that will preserve these devices and infrastructure... A new politics of intellectual property is needed to counteract the content industry's drive toward ever stronger rights. More importantly, a broader awareness is needed that copyright deeply affects the information environment for us all... Here are some thoughts about who might participate in a new politics of intellectual property aimed at promoting a balanced information ecology. Obvious candidates include authors and artists (who need access to information, a robust public domain, and meaningful fair use rights), educational institutions, libraries, scholarly societies, computing professionals, computer manufacturers and other equipment providers who don't want Hollywood to be in charge of their research and development divisions, telecommunications companies and Internet service and access providers (who want to serve their customers and not become a new branch of the police), consumer groups, civil liberties organizations, and digital media companies who may have some radical business models that just might work if not shut down through litigation by established copyright industry groups who want to protect preferred business models. The agenda of a new politics of intellectual property obviously needs to be about more than just opposing the high protectionist initiatives of copyright industry groups. It should, of course, oppose legislation such as the Hollings bill, but the new politics needs to have a set of affirmative policy objectives of its own..." Note: Pamela Samuelson has been recognized as a leading authority on intellectual property.

  • [May 10, 2002] "Berners-Lee: Keep the Web Royalty-Free." By Edd Dumbill. From O'Reilly Network Weblogs (May 08, 2002). "In his opening keynote today at the Eleventh World Wide Web Conference in Honolulu, Hawaii, Tim Berners-Lee made a strong appeal for the development of the web to continue unencumbered by patent royalties. In a talk entitled 'Specs Count', Berners-Lee outlined how important it was that today's web technology specifications remain open and freely implementable. He described how accessing a web page today involved many layers of standards -- ethernet, IP, TCP, HTTP, MIME, XML, Namespaces, XHTML -- each layer of which relies critically on the layer below. As Berners-Lee is fond of noting, the web is 'not done yet', therefore it is not unreasonable to imagine a future with a similar number of layers built upon the existing ones. For that reason, it is still highly critical that the 'communal' nature of the specifications is preserved. Berners-Lee took off his hat as W3C Director for his speech, stressing that it was delivered as personal opinion: he was highly pointed in his support of royalty free licensing for web technology, a position that doesn't meet universal approval within the consortium. The W3C has itself had a difficult journey through issues of licensing its own standards. Reacting to a large amount of dissent from the web and free software community, it reversed plans to allow RAND ('reasonable and non-discriminatory') licensing terms on its specifications. The new patent policy is that every working group will aim to achieve royalty free licensing terms by the time a spec reaches the final Recommendation stage at the W3C. Outlining both the pros and cons of enforcing royalties on open specifications, Berners-Lee speculated that if the specifications driving the web had not been royalty-free, then none of the 900-strong audience would actually be at the conference. Enforcing royalties discourages adoption both by the open source community, who simply cannot pay royalties, however 'reasonable', and other companies who will shy away from the issues associated with licensing the technology..."

  • [May 09, 2002] "It's Easy If You Know How: Importing, Processing, and Exporting CDISC XML with SAS." By Michael C. Palmer (Zurich Biostatistics). February 2002. 4 pages. "The Clinical Data Interchange Standards Consortium (CDISC) has published an XML-based clinical data standard that will make it easier for a pharmaceutical company or other study sponsor to develop a single, low maintenance, vendor-neutral gateway for clinical trials data, including central lab results and FDA-compliant data archiving. The tutorial will cover the basics of data-centric XML, the structural and semantic requirements of CDISC's Operational Data Model (ODM) standard, and techniques for importing, processing, and exporting clinical data in SAS using the ODM. The tutorial is needed because SAS users accustomed to flat or relational files have found it challenging to import, process, and export CDISC XML. Trouble has come from two characteristics of XML. First, it is a text stream, and, second, it is hierarchical in structure. SAS, on the other hand, is oriented towards data in fields and flat or relational file structures. CDISC is a consortium of some 60 leading pharmaceutical companies and vendors, with a formal liaison to the FDA, pledged to the development of non-proprietary, open standards to streamline the collection, analysis, and review of clinical trials data intended for regulatory submissions...The ODM is an XML vocabulary. XML is just plain text so The ODM is just plain text that can be read in any text reader, such as Notepad on Windows PCs. There is no need to buy proprietary software to read an ODM instance. CDISC freely publishes the precise features of the ODM vocabulary, or DTD in XML jargon, for all on its web site. Anyone can use it..." See: (1) the news item of 2002-05-09: "Clinical Data Interchange Standards Consortium Publishes CDISC Operational Data Model (ODM)"; (2) general references in "Clinical Data Interchange Standards Consortium."

  • [May 09, 2002] "Berners-Lee Issues a Call to Arms." By Anne Chen. In eWEEK (May 09, 2002). "Tim Berners-Lee, director of the World Wide Web Consortium, opened the 11th annual International World Wide Web Conference here on Wednesday by challenging his colleagues not to get bogged down in arguments over existing specifications but to move forward and develop new ones while remaining focused on interoperability between standards... Berners-Lee also reiterated his opposition to allowing any W3C-based specifications to be covered by patent restrictions. Over the last year, the W3C has been embroiled in a controversy after a proposal to allow royalty-encumbered, patented technologies into Web standards attracted intense criticism and debate. Berners-Lee used his keynote as an opportunity to comment that he believes standards should remain royalty-free. 'All the forerunners of HTTP were developed under the ethos that they were for the common all,' he said. 'Looking back at what we have, it doesn't take very much to realize -- we should keep the same mindset. Nobody will do anything if they feel they're doing pro bono work for a company that will charge for new versions.' Berners-Lee said that, in order for the Web to continue to grow, every developer needs to have access to specifications without having to sign a licensing agreement or a non-disclosure agreement. 'What we're building on top is going to be much more exciting than what we've built underneath,' he said. 'This explosion happened because of you. You did it through interoperability and by keeping the Web open to all'..." See: "Patents and Open Standards."

  • [May 09, 2002] "Cisco Unleashes Content Switch, Management Barrage at N+I." By Phil Hochmuth. In Network World (May 09, 2002). "Cisco this week unveiled a new batch of content networking gear aimed at speeding up Web switching and server load balancing at e-commerce-driven businesses. At NetWorld+Interop 2002 Las Vegas, Cisco introduced its newesthigh-end Layer 4-7 switch, the CSS 11500 series, as well as new versions of its Content Switching Module (CSM) for the Catalyst 6509 switch, and new releases of its Content Transformation Engine (CTE) 1400 for supporting VoIP. The company also released new software for managing enterprise content delivery and datacenters. The CSS 11503 and 11506 are three and six-slot modular Layer 4-7 switches that offer increased port density and can support more Secure Sockets Layer sessions than previous CSS 11000 series switches, according to the company. Version 3.1 of the CSM includes improved management features, allowing the device to be managed as part of a network of content switching devices, even though it resides in a Catalyst 6509 chassis. The Content Transformation Engine 1400, introduced last year, converts Web pages into formats readable by non-PC clients, such as PDAs and cellphone Web browsers. The new version now supports Web content conversion for Cisco IP telephone display screens, allowing Cisco phones to act as desktop thin clients with Web access. The new CTE 1400 also includes increased support for Java Script and XML..." See the August 1, 2001 announcement and the white paper, Cisco CTE 1400 Series: Content Transformation Engine Product Overview."

  • [May 09, 2002] "DOD Will Establish XML Registry." By Dawn S. Onley. In Government Computer News Volume 21, Number 6 (May 06, 2002). "The Defense Department has established a clearinghouse and registry for Extensible Markup Language components to get DOD's far-flung IT shops on the same page in their XML use. The Defense Information Systems Agency will oversee the collection and reuse of XML components, Defense CIO John Stenbit and undersecretary Pete Aldridge noted in an April 22, 2002 memorandum. The registry and clearinghouse will be the authoritative source for XML components used by DOD. 'All program managers that use XML as an interchange format must register XML components in accordance with procedures established by DISA,' the memo said. Owen Ambur, co-chairman of the government's XML Working Group, established by the CIO Council to develop an XML registry of data elements and schemas, said the clearinghouse could reach beyond DOD. Once the department demonstrates the usefulness of its registry, the Office of Management and Budget should issue a governmentwide policy similar to the Defense policy, Ambur suggested..." See details in the news item of 2002-04-26: "DISA to Establish a US DoD XML Registry and Clearinghouse."

  • [May 08, 2002] "RELAX NG Compact Syntax." Edited by James Clark, for the OASIS RELAX NG Technical Committee. Working Draft 8-May-2002. The document "specifies a compact, non-XML syntax for RELAX NG. The semantics of this syntax are specified by specifying how the syntax can be translated into the XML syntax. The goals of this syntax are: (1) maximize readability; (2) support all features of RELAX NG; it must be possible to translate a schema from the XML syntax to the compact syntax and back without losing significant information; (3) support separate translation; a RELAX NG schema may be spread amongst multiple files; it must be possible to represent each of the files separately in the compact syntax; the representation of each file must not depend on the other files. The syntax has similarities to XQuery Formal Semantics, to XDuce and to the DTD syntax of XML 1.0." The most recent update: (1) fixes the way annotations get attached to patterns and name-classes; (2) the BNF is annotated with references to applicable constraints (like WFCs and VCs in XML 1.0). See: "RELAX NG."

  • [May 08, 2002] "Microsoft Tests Tools to Make The Web Talk." By Matt Berger and Ephraim Schwartz. In InfoWorld (May 07, 2002). "In another small step toward creating a voice-enabled Web, Microsoft on Tuesday released to developers a test version of its tools for building applications that can be controlled over the Internet using voice commands... the company made available the beta version of its .Net Speech SDK (Software Development Kit), Version 1.0. Used with Microsoft's Visual Studio .Net developer tools, the SDK is designed to add voice to the list of methods for inputting data, which includes the mouse, keyboard, and stylus. The tools are intended to 'help jumpstart the industry' for building speech-enabled Web applications, such as an airline Web site that allows users to make reservations by talking into a microphone on their computer, said James Mastan, group product manager for Microsoft's .Net speech technologies group. While the beta version will allow developers to design speech-enabled applications and Web services for desktop use, the components needed for speech developers to create telephony-based applications using SALT (Speech Applications Language Tags) is still missing a key component. SALT was created by an industry group known as the SALT Forum, whose founding members include Microsoft, Speechworks International, Cisco Systems, and Intel... However, the rival speech development language, VoiceXML, also has yet to publish its set of APIs to interface with telephony boards. Call control tags are just now being developed by the VXML sub-committee of the W3C (World Wide Web Consortium), according to Dennis King, director of architecture for Pervasive Computing at IBM. IBM is a founding member of the W3C... The set of SALT protocols have not yet been submitted to any standards body but will be this summer, according to Mastan. The SALT members have not yet voted on which standards body to submit the protocols to. The .Net Speech SDK can be used to retool an existing Web application developed with Microsoft's popular developer tools, a benefit that Mastan said would spur its use. Features of the SDK include workspaces for programming applications, as well as for creating the spoken questions and answers that a voice-enabled application would need to understand..." Reverences: (1) the MS announcement, "Microsoft Releases Tools That Will Enable Mainstream Developers To Add Speech to Web Applications. Microsoft .NET Speech SDK Version 1.0 Beta Is First Software Product Based on Speech Application Language Tags Specification."; (2) "Speech Application Language Tags (SALT)"; (3) "VoiceXML Forum."

  • [May 08, 2002] "XML Standards for Global Customer Information Management." By Ram Kumar. In DM Review (May 2002). "XML plays a significant role in the efforts many companies are undertaking to integrate e-business and customer information management applications with their enterprise systems. Companies can define XML grammars that deal with customer information management. These can be used to interface data between systems in a more open and simplified way, saving time and reducing error, and to represent customer data in a common language across the different lines of business. However, vendor-proprietary XML grammars can cause problems when data is exchanged between systems. Special XML converters are needed to move data, which defeats the purpose of using XML in the first place. The best solution would be to develop a -- that is, regardless of the geography or culture in which the data resides. The term "global" is very important, particularly when it comes to e-business and e-CRM business models, where organizations deal with customers on a worldwide basis. Global Web customers require 24x7 service. They have language differences, multiple data and shipping formats, and vast differences in demographics, tastes, preferences and so on... Due to the lack of XML standards for managing customer information and exchange/interchange that is open, application-independent and vendor-neutral, approximately two years ago, the Customer Information Quality (CIQ) Technical Committee of OASIS accepted the challenge of building such standards. The committee has now developed three powerful XML languages for customer information management and exchange -- xNAL, xCIL and xCRL... xCRL is the XML standard for managing customer relationship data and is designed to be application-independent, vendor-neutral, open and, most importantly, truly global. xCRL uses xNAL for representing customer name and address data and xCIL for representing other customer information. xCRL is also defined as a separate standard to enable simplicity and flexibility in the usage of the CIQ standards and provides the flexibility to represent customer data at an abstract or a detailed level..." References: (1) OASIS CIQ TC website; (2) "xNAL Name and Address Standard (xNL, xAL)"; (3) "Customer Identity / Name and Address Markup Language (CIML, NAML)"; (4) "Markup Languages for Names and Addresses."

  • [May 07, 2002] "The Languages of the Semantic Web." By Uche Ogbuji. In New Architect Volume 7, Issue 6 (June 2002), pages 30-33. ['If you believe Tim Berners-Lee, the Web has more potential than is realized today. Part of that potential is held in specifications like RDF and DAML+OIL. A new Web where agents do our bidding may not be far off.'] "RDF itself is a handy way to describe resources. Widespread use of such a facility could alleviate many of the current problems with the Web. But RDF by itself only gets us part way toward realizing the Semantic Web, in which agents can infer relationships and act on them. Classification is extremely important on the Semantic Web. Each community of related interests defines categories for the matters that it discusses. For instance, the snowboarding community defines items such as snowboards, parks, tricks, and manufacturers. The definition of a manufacturer in snowboarding terms is related to the definition of a manufacturer in the general business sense. The snowboarding community can enshrine these definitions by creating a schema for its RDF models... The leading ontology system for RDF is the DARPA Agent Markup Language (DAML). DARPA, for those who may have forgotten, is the group that brought us the Internet itself. DAML incorporated useful concepts from the Ontology Inference Layer (OIL), a European project to provide some AI primitives in RDF form. The resulting language is DAML+OIL. DAML+OIL lets us formally express ontologies. W3C RDFS provides primitive classification and simple rules for this, but DAML+OIL goes much further. For instance, DAML+OIL can express that 'any snowboard with plate bindings is a race board,' which makes it unnecessary to then explicitly flag every race board. You might see in this some of the flavor of business rules, which are known in software development circles as the programmatic expression of mandates for the way data must be processed. In fact, one way to look at DAML+OIL is as the business rules for the Semantic Web, yet it's much more flexible than most business-rules-languages in common use. Most of DAML+OIL's power comes from primitives for expressing classifications, as the race boards example illustrates. DAML+OIL provides a toolbox of class expressions, which bring the power of mathematical logic and set theory to the tricky and important task of mapping ontologies through classifications... The Semantic Web is still a way off, if it's attainable at all. To date, RDF and DAML+OIL are our best efforts at reaching it. They address a good number of the problems with the present state of the Web, and further enhancements are on the way. For example, a system of statements that's managed at a certification authority could help establish the validity of RDF statements to minimize metadata spam and other security problems..." See: (1) "DARPA Agent Mark Up Language (DAML)"; (2) "Resource Description Framework (RDF)"; (3) "XML and 'The Semantic Web'." [alt URL]

  • [May 07, 2002] "The Rolls Royce of Security. Are biometrics worth the expense?" By Jerri L. Ledford. In New Architect Volume 7, Issue 6 (June 2002), pages 14-15. "... Biometric security works on a simple principle: no two people are built exactly the same. Whether the actual process involves scanning a body part or imprinting voice patterns, biometric software maps physical characteristics, called 'markers,' that exist in a combination that's unique to each individual. The three methods used most often are fingerprint scans, iris scans, and voice print recognition. Voice printing is considered the least secure and least user-friendly of the three. Keith O'Leary, the United States marketing director for Keyware, a European biometric-enabled applications vendor, says there are some problems with voice printing. For example, outside factors like throat inflammation can cause false readings. Fingerprint scanning is the form of biometric security in widest use today. When a user places his or her finger or thumb on a reading device, the device scans the print and then matches it to a mathematical representation of the known fingerprint for that user. The mathematical representation is built from over 100 identifiable markers found on each fingerprint. At least 70 of these markers must match for authentication to succeed. Iris scans, on the other hand, rely on 266 points of recognition within the iris of the eye. If 150 of those points match, then identification is conclusive. 'Iris recognition is changing the biometric landscape,' says O'Leary. 'With each point that matches over 150, the ability to conclusively verify a [person's identity] is more accurate.' O'Leary explains that fingerprinting is enough in most cases, but it all depends on the company that is deploying the technology. 'What is your security need?' he asks. 'What is your methodology, and what will be the level of security that you want?' In some cases, he says, a single biometric technology might be used at a lower level of security, with additional biometrics added as the user accesses more-critical applications... Choosing a product that supports a wide range of applications is important, agrees Keyware's O'Leary. One shortcoming of the biometric security industry is that it still lacks standards. Perceiving an open playing field, numerous companies have begun to offer biometric security solutions with little concern for interoperability. Among these are the BioAPI Consortium, the National Institute of Standards and Technology (NIST), and the Security Industry Association (SIA). Although it isn't uncommon to have this disparity in standards in a young industry, O'Leary says that 'Lack of standards from a risk assessment perspective should always be a concern. You should look to the people who are forerunners and who are participating in the standards bodies'..." See also the OASIS TC at work on the "XML Common Biometric Format (XCBF)."

  • [May 07, 2002] "Design Patterns for Web Programming. Do you need MVC?" By Al Williams. In New Architect Volume 7, Issue 6 (June 2002), pages 16-17. "... In a traditional application, a single piece of code handles everything. With MVC, you break down your program into three cooperating parts: the Model, the View, and the Controller. The View is the part the user sees. It formats data into an onscreen representation. However, it doesn't actually contain the data. The data resides in the Model. Finally, the Controller portion accepts user commands and modifies the Model. Naturally, the Model has to inform the View that it should update the representation of the data when that data changes. The classic example of this strategy is a spreadsheet program that runs on a personal computer. Using MVC architecture, the Model stores the formulae and other data. When you issue a command to save to a file (or load from a file), the Model handles this action. It also handles the specific logic, like recalculating the entire sheet. The View draws the familiar grid that shows a part of the data (depending on the scroll bar's position). The Controller deals with any process in which the user changes something. This approach has several advantages. The most obvious is that this decoupling allows for easier unit testing. It also isolates changes to a large degree. Changing the way data is displayed is not likely to affect the document object functions. However, an even greater benefit is the ability to mix and match different parts. For example, you might want to add a pie chart View of the data. Not only is this relatively easy, but you can reuse the pie chart View with other applications that use the same style of document object... you can adapt MVC to any of the popular scripting languages. You simply have to partition your code so that one part handles the underlying data, a second part handles the presentation, and a third part handles the changes and control. You might also consider using XML to implement your Model. Because the Model part of MVC is supposed to hold the data, it makes sense that modern MVC architectures would work well with XML. Decoupling the View and Controller logic lets the document code focus on the XML parsing. The Views can just assume that they have the data they need without dealing with the intricacies of XML. Changes in the XML structure only affect the Model code. Of course, with the move to supply XML-based Web services, your document might even be distributed between multiple servers with little difficulty. XML is a great choice for many applications. Of course, one of the nice things about MVC is that with proper design you could use XML, a database, or even an ordinary file for the data source..."

  • [May 07, 2002] "MDA Brings Standards-Based Modeling to EAI Teams." By Tushar K. Hazra. In Application Development Trends Volume 9, Number 5 (May 2002), pages 48-52. ['Application Integration & Management. Model-Driven Architecture is based on UML, MOF and CWM standards. Its creator, the OMG, promises the new modeling standard will evolve to follow and incorporate more standard technologies.'] "The Model-Driven Architecture (MDA) is an innovative approach to constructing an enterprise architecture. Created by the Object Management Group (OMG), MDA abstracts and visualizes business requirements in the form of platform- and implementation technology-independent models, and separates implementation details from business functions. Once interfaces are identified and implementation technologies are selected, models are transformed into platform-specific software architectures. MDA is a new software architecture principle in the sense that it follows, incorporates or supports most industry standards. It enhances the capability of the software architecture to be flexible, extensible and adaptable to new technologies or standards, and to abstract business application components in the early stages of EAI initiatives. Most importantly, MDA enables business experts and software engineers to focus on business risks, issues and concerns, ensuring the effective use of existing and evolving technologies. As described by the OMG, MDA is founded on three established modeling standards -- the Unified Modeling Language (UML), Meta-Object Facility (MOF) and the Common Warehouse Meta-model (CWM). UML is widely accepted and supported by the software industry as a standard modeling notation, and most modeling tool vendors comply with it. EAI initiative teams of any size can share UML-based visual models across platforms. MOF and CWM allow users to capture a specific domain, data warehousing-related information or meta data in UML-based models. With the help of appropriate tools and repositories, EAI teams can exchange and transform the models to define, design and implement desired interfaces independent of a specific platform. As a next step, OMG standard mappings support the generation of CORBA, XMI/XML, J2EE, .NET and Web services-based interfaces. The interfaces can utilize various pervasive services such as directory, transactions, events and security services to build enterprise software architectures. At present, MDA targets space, manufacturing, finance, e-commerce, telecom, healthcare and transportation industry-related EAI initiatives and promises to cover more soon..." See: "OMG Model Driven Architecture (MDA)."

  • [May 07, 2002] "Sun Hails Open-Source Meta Data Repository." By Paul Krill. In InfoWorld (May 07, 2002). "Sun Microsystems on Tuesday [2002-05-07] plans to announce that it has contributed meta data repository modules to the NetBeans open-source development project as part of the Object Management Group's MDA (Model Driven Architecture) effort. The repository can, for example, assist in the development of Web services applications by enabling developers to quickly locate objects to be fitted with SOAP (Simple Object Access Protocol) interfaces, according to Sun. CORBA and other infrastructure standards also can be supported. The NetBeans open-source project is a Java-based effort that is being positioned as a compliant solution for the MDA specification. The OMG MDA is designed to protect software investments by providing a framework in which application infrastructure components, such as databases and programming languages, can be changed without requiring enterprises to change their underlying application architecture. A meta data repository, which can hold information about programming objects so they can be reused, is critical to supporting the MDA, said Drew Engstrom, product line manager for Sun's ONE Studio tools, in Menlo Park, Calif. "When an organization is doing object-based development, typically you end up with a library of hundreds of objects," Engstrom said. The meta data repository is expected to be included in an upcoming version of the NetBeans IDE (integrated development environment), to be known as build 4.0, in six to eight months, according to Engstrom. It has been available in an experimental mode, according to Sun... Other companies that have submitted MDAs include IBM, with WebSphere, and Rational, with XDE..." See: (1) Details in the news item of 2002-05-07; (2) the Sun announcement: "Netbeans: First Open Source Tools Platform to Implement Model Driven Architecture (MDA). MDA to Simplify Implementation of Web Services by Separating Architecture from Deployment Infrastructure." MDA references: "OMG Model Driven Architecture (MDA)."

  • [May 07, 2002] "SAML Gains Steam." By John Fontana. In Network World (May 06, 2002). "An XML protocol that appears on its way to becoming a key building block for standards-based security picked up momentum last week as vendors introduced products and vowed to provide free access to their patents to advance the cause. The efforts are in support of the Security Assertions Markup Language (SAML), a framework for exchanging authentication and authorization credentials over the Web, which promises to give IT executives a way to tie together disparate security systems internally and with business partners. Last week, RSA Security announced that it would offer royalty-free use of two patents it owns that are similar to how SAML functions, therefore quashing concerns that the patents may hamper the acceptance of SAML. Also, Quadrasis, a business unit of Hitachi, introduced a developer tool for building SAML support into connectors that work with its Security Unifier. The product is similar to enterprise application integration software in that it provides a routing and transformation hub and a set of connectors that allow disparate security systems such as authentication systems, single sign-on software and encryption products to work together... SAML is gaining steam as it moves through the standards track at the Organization for the Advancement of Structured Information Standards. Ratification is expected in June. Experts say SAML will make it easier for users to cross security boundaries, especially those between companies that have established trust relationships. Combined with another emerging standard for digital signatures called XML Signatures, companies can exchange signed SAML assertions that confirm a particular user is authenticated and authorized to access certain network services. RSA, which is building SAML into its Web Access Management product called ClearTrust, is offering royalty-free access to U.S. patents that cover one type of SAML assertion called Browser/Post Profile, which basically delivers a digitally signed SAML assertion through an HTML form stored on a browser. Most vendors today, however, are implementing a simpler type of SAML assertion called Browser/Artifact Profile..." See: "Security Assertion Markup Language (SAML)."

  • [May 07, 2002] "Putting the Business in Business Process Modeling." By Richard Adhikari. In Application Development Trends Volume 9, Number 5 (May 2002), pages 53-56. "The concepts of business process reengineering (BPR) and integration are more than 10 years old, but few corporations have implemented them. That is because business process modeling -- the foundation of BPR and integration -- has been poorly applied. Often, IT departments take a techno-centric view of the modeling process rather than a business-centric view, or they do not capture the business process adequately. [1. Business Genetics Inc.] When modeling a business process, you should take a business-centric approach, not a techno-centric approach. [says] Cedric Tyler, president of consultants Business Genetics Inc., Englewood, CO. Tyler advocates breaking down a process into the major grouping of its parts so that it can be understood better. That approach forms the basis of Business Genetics' approach, the Extended Business Modeling Language (xBML). xBML consists of asking the five "W" series of questions: why, what, who, where and when... [2. Enterprise FrameWork] Enterprise FrameWork, from Quincy, Mass.-based Ptech Inc., is an enterprise-modeling tool that includes several types of views for modeling business processes. At the top level, the focus is on exchange of value between high-level business processes -- 'not on how processes are done, but why they are done, the value coming out of them, and what other organizational processes or departments are using that value' said James Cerrato, Ptech's chief product officer... Enterprise FrameWork ships with several pre-defined queries that help users to discover information flow between the different parts of apps and areas of overlap or gaps. Users can also create their own queries graphically by building the path of the questions they want to ask into a diagram. Enterprise FrameWork has a proprietary knowledge-based engine that uses object-oriented semantic network technologies instead of a commercial database. [3. Metaserver] Metaserver offers an integrated development environment that can take a picture created by a business analyst and generate an XML document representing the semantics of the process pictured. The IT department can read and understand that XML document and connect each of the business processes drawn by the analysts to an IT system. Metaserver's product has four parts. First is the Metaserver Modeling Environment. This is a Java Swing-based GUI that generates XML representations of business processes that are 'close to' various business XML standards, Deshpande said. It can be used by business analysts and IT, and incorporates a process metamodel. Next is the connectivity framework. This comes with wizards and XML databases that describe how to connect a process to existing databases and perform marshalling steps like representing a business entity, such as a customer, in a canonical form. XML is the canonical form used. Third is an engine that executes the process. This is based on LINDA, a parallel processing language that implements the concept of a virtual shared memory between a network of computers 'so you don't have to have one physical shared memory multiprocessor,' [Ashish] Deshpande said. Fourth is a Web browser-based management console..." Note the related ADT articles "Go for the exceptions" and "10 rules for modeling business processes." 
See also "Business Process Modeling Language (BPML)" and the work within the ebtWG project teams: Business Process Information Model Exchange Schema; Business Process Specification Schema; Common Business Process Catalog.

  • [May 07, 2002] "XML Watch: Worm's-eye BEEP. Part 2 of An Introduction to the Blocks Extensible Exchange Protocol Standard." By Edd Dumbill (Editor and publisher, xmlhack.com). From IBM developerWorks, XML zone. March 2002. ['In this second article examining BEEP -- Blocks Extensible Exchange Protocol -- Edd builds on the broad principles of BEEP outlined in his previous article, explaining how the protocol is implemented, and providing an example of how it is used in Java.'] "BEEP is a peer-to-peer protocol framework. It isn't as much a ready-made protocol in itself, like HTTP, but a framework on which such protocols can be built. It takes care of many of the features of such protocols so that they don't need to be reinvented. The main areas of functionality are: (1) Separating one message from the next; (2) Encoding messages; (3) Multiple channels of communication over a single connection; (4) Reporting errors; (5) Negotiating encryption; (6) Negotiating authentication. In the previous article, I explained that communication in a BEEP session takes place over one or more channels, which are multiplexed over the transport protocol. I will assume that, per RFC 3081, we're using BEEP over TCP/IP. This need not be the case: A mapping of BEEP could be made onto a different connection-oriented transport protocol. The first channel, channel 0, has a special role and is used for session management. All communication over a channel is defined by a profile, which is basically a description of permissible interactions. For XML-based protocols, a profile could be based around a DTD or schema that specifies the syntax for messages. (Obviously, more than just syntax specification is required to completely define the profile.) When peers connect using BEEP, they request profiles of each other in order to structure their communication. Profiles fall into two categories: tuning and data exchange. Tuning profiles affect the whole session and are typically responsible for security and authentication. A data exchange profile, as indicated above, defines the exchanges on a particular channel... Now that you have a high-level idea of what BEEP does, let's swoop down to the other extreme and investigate how the fundamentals of the protocol are implemented... BEEP offers the Internet protocols world, which includes the growing area of Web services, a plausible alternative to the continued overloading of HTTP. Its flexible nature and deep-rooted support for MIME means it should adapt well to connection-oriented protocol needs. The number of freely available implementations of BEEP is steadily increasing, as is the BEEP user community. Developers creating new application protocols should seriously consider using BEEP as the substrate for their work." See also the abstract for Part 1. References: "Blocks eXtensible eXchange Protocol Framework (BEEP)."

  • [May 06, 2002] "The Danger of Software Patents." By Richard Stallman. Comments on IP Law hearings. US Federal Trade Commission and DOJ, 'Competition and Intellectual Property Law and Policy in the Knowledge-Based Economy: Notice of Public Hearings and Opportunity for Comment.' "This speech is about a way of misusing laws to make software development a dangerous activity. This is about what happens when patent law gets applied to the field of software. It is not about patenting software. That is a very bad way, a misleading way, to describe it, because it is not a matter of patenting individual programs. If it were, it would make no difference, it would be basically harmless. Instead, it is about patenting ideas. Every patent covers some idea. Software patents are patents which cover software ideas, ideas which you would use in developing software. That is what makes them a dangerous obstacle to all software development... People in high tech fields are not generally working on their own and that ideas don't come in a vacuum, they are based on ideas of others; and these people have pretty good chances of getting a job if they need to these days. So this scenario, the idea that a brilliant idea came from this brilliant [person] working alone is unrealistic, and the idea that he is in danger of starving is unrealistic. But it is conceivable that somebody could have an idea and this idea along with 100 or 200 other ideas can be the basis of making some kind of product, and that big companies might want to compete with him. So let's see what happens if he tries to use a patent to stop them. He says 'Oh No, IBM. You cannot compete with me. I've got this patent.' IBM says, 'Let's see. Let's look at your product. Hmmm. I've got this patent and this one and this one and this one and this one and this one, which parts of your product infringe. If you think you can fight against all of them in court, I will just go back and find some more. So, why don't you cross license with me?' And then this brilliant small inventor says 'Well, OK, I'll cross license'. So he can go back and make these wonderful whatever it is, but so can IBM. IBM gets access to his patent and gets the right to compete with him, which means this patent didn't 'protect' him at all. The patent system doesn't really do that. ... if you are a small inventor or work for a small company, the small company will not be able to do this. Small companies cannot get enough patents to do this. Any given patent is pointing in a certain direction. So if a small company has patents pointing there, there and there and somebody over there points a patent at them and says give me your money, they are helpless. IBM can do it [retaliate] because with 9000 patents, they are pointing everywhere; no matter where you are, there is probably an IBM patent pointing at you. So IBM can almost always make you cross license. Small companies can only occasionally make someone cross-license. They will say they want patents for defensive purposes but they won't get enough to be able to defend themselves..." See: (1) W3C's Daniel Weitzner Testifies on Patents and IP Licensing Terms in Open Standards Activity"; (2) "Patents and Open Standards." [cache]

  • [May 06, 2002] "XML in Java: Data Binding with Castor. A Look at XML Data Binding for Java Using the Open Source Castor Project." By Dennis M. Sosnoski (President, Sosnoski Software Solutions, Inc.). From IBM developerWorks, XML Zone. April 2002. ['XML data binding for Java is a powerful alternative to XML document models for applications concerned mainly with the data content of documents. In this article, enterprise Java expert Dennis Sosnoski introduces data binding and discusses what makes it so appealing. He then shows readers how to handle increasingly complex documents using the open source Castor framework for Java data binding. If your application cares more about XML as data than as documents, you'll want to find out about this easy and efficient way of handling XML in Java.' Website description: 'Castor is an open source data binding framework for Java[tm]. It's basically the shortest path between Java objects, XML documents, SQL tables and LDAP directories. Castor provides Java to XML binding, Java to SQL/LDAP persistence, and then some more.'] "Most approaches to working with XML documents in applications put the emphasis on XML: You work with documents from an XML point of view and program in terms of XML elements, attributes, and character data content. This approach is great if your application is mainly concerned with the XML structure of documents. For many applications that care more about the data contained in documents than the documents themselves, data binding offers a much simpler approach to working with XML... The document models discussed in previous articles of this series are the closest alternatives to data binding. Both document models and data binding build document representations in memory, with two-way conversions between the internal representation and standard text XML. The difference between the two is that document models preserve the XML structure as closely as possible, while data binding is concerned only with the document data as used by your application... Data binding is a great alternative to document models in applications that use XML for data exchange. It simplifies your programming because you no longer need to think in terms of XML. Instead, you can work directly with objects that represent the meaning of the data as used by your application. It also offers the potential for better memory and processor performance than document models... Data binding can provide other benefits beyond justprogramming simplicity. Since it abstracts many of the document details, data binding usually needs less memory than a document model approach. Consider, for instance, the two data structures shown in the earlier figures: The document model approach uses 10 separate objects, as compared to two for data binding. With a lot less to build, it may also be faster to construct the data binding representation for a document. Finally, access to the data within your program can be much faster with the data binding approach than with a document model, since you control how the data is represented and stored. I'll get back to these points later. If data binding is such great stuff, when would you want to use a document model instead? The two cases that require a document model are: (1) Your application is really concerned with the details of the document structure. If you're writing an XML document editor, for instance, you'll want to stick to a document model rather than using data binding. (2) The documents you're processing don't follow fixed structures. 
For example, data binding wouldn't be a good approach for implementing a general XML document database..." Also in PDF format. See: "XML and Databases."
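
As a minimal sketch of the style Sosnoski describes: with Castor's default introspection (no mapping file or schema), a plain JavaBean round-trips to XML and back through the static convenience methods of org.exolab.castor.xml.Marshaller and Unmarshaller. The method names follow Castor's 0.9.x documentation; the exact element names Castor emits for this bean may vary, so treat the output shown as indicative.

    import java.io.StringReader;
    import java.io.StringWriter;
    import org.exolab.castor.xml.Marshaller;
    import org.exolab.castor.xml.Unmarshaller;

    // A plain JavaBean: Castor's default introspection maps its properties
    // to XML without any schema or mapping file.
    public class Person {
        private String name;
        private int age;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public int getAge() { return age; }
        public void setAge(int age) { this.age = age; }

        public static void main(String[] args) throws Exception {
            Person p = new Person();
            p.setName("Ada");
            p.setAge(36);

            // Object to XML...
            StringWriter out = new StringWriter();
            Marshaller.marshal(p, out);
            System.out.println(out);

            // ...and XML back to an object.
            Person q = (Person) Unmarshaller.unmarshal(
                    Person.class, new StringReader(out.toString()));
            System.out.println(q.getName() + ", " + q.getAge());
        }
    }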

  • [May 06, 2002] "This Year's Model: Covisint Upgrades Catalog Management." By Richard Karpinski. In InternetWeek (April 23, 2002). "Two things you never want to see getting made: sausage, and aggregate catalogs at industry exchanges such as Covisint. In either case, the process, to put it kindly, is anything but pretty. For Covisint, the auto industry exchange founded by General Motors, Ford, and Daimler/Chrysler, bringing catalogs from industry suppliers online is an arduous, overly manual process. Covisint accepts content in a variety of formats -- from simple flat files to fairly sophisticated XML documents. But before it can present the data to its big buyers, it must bring it into a content repository where it is manually vetted and normalized. The work doesn't stop there: Covisint must also create versions of the catalog content that will work with the procurement systems of its various buyers, including formats for buying platforms from Ariba, Commerce One, SAP, and Oracle, among others... Today, much of that work is organized and driven by a homegrown tool dubbed Covisint Tracker. But as its content management requirements grow -- from 3 million catalog items this summer to more than 6 million by year's end -- that tool is beginning to reach its practical limits. So Covisint's technical team is in the process of testing -- with a global rollout planned for later this summer -- a new catalog management tool from vendor Requisite Technology, which will form the underpinnings of Covisint Tracker 2.0... The new platform will bring important new features to Covisint buyers and suppliers -- from better self-service content management to more seamless support for multiple data formats. It also will lay the groundwork that will allow Covisint to vastly improve its ability to host catalogs that include custom prices and deals for its buyers. Although initially the new Requisite tool will focus mainly on catalogs of indirect goods (commodity items such as office suppliers or PCs), the new capabilities will also help Covisint deliver more sophisticated and custom-designed catalogs of production goods (things like brake components or engine parts) that go right into manufacturing process, the Holy Grail for the exchange and its founders... Today, while Covisint organizes and cleans up supplier catalog content, it ultimately pushes that content out to its carmaker buyers, which host the content behind their own firewalls. Now, at least in part driven by the imminent arrival of the new catalog management platform, Covisint is preparing to launch its first community and marketplace catalogs, which involves hosting custom content and custom prices for buyers within Covisint. Buyers simply need to punch out to that single master catalog to get list, market-driven, and negotiated prices -- as well as features like enhanced search -- all managed and hosted by Covisint, [Covisint's Greg] Wong said..." See also "Covisint Supports ebXML Message Specification and OAGIS Standards."

  • [May 03, 2002] "Modeling XML Applications. Part 3." By Dave Carlson (Ontogenics Corp., Boulder, Colorado). In Software Development Magazine Volume 10, Number 5 (May 2002), pages 45-48. ['The XML Metadata Interchange specification standardizes metamodels, models and model instances among applications -- providing a consistent way to create XML schemas from models. Part 3 of 4.'] "When mapping between UML and XML Schema, the first priority is to enable generation of a valid XML Schema from any UML class diagram. The developer needs no knowledge of the XML Schema specification. Having this capability enables a rapid development process and supports reuse of the model vocabularies in several different deployment languages or environments, because the model isn't overly specialized to XML structure. To meet specific XML design guidelines, customization of the generated schemas must be supported. Several XML experts have told me that the generated schema must be the same as one they would write by hand. This may include choice of global versus local element declarations, or a preference for reusable <group> definitions in the schema context. However, the best hand-authored schemas still follow a consistent set of design principles that I intend to duplicate with UML-assisted design... I didn't start from scratch when creating these mapping rules, but used the XML Meta-data Interchange (XMI) specification from the Object Management Group (OMG) as a foundation. XMI standardizes the exchange of metamodels, models and object instances between applications. The standard is equally applicable to database meta-data and XML schemas, or to any other model that you might define for your application. Simply stated, XMI defines a consistent way to create XML schemas from models and XML document instances from objects that are instances of those models... The XMI specification is written in terms of the Meta Object Facility (MOF), the language used to define the UML metamodel -- essentially an abstract subset of the UML. For additional background on the MOF and XMI -- without jumping into the deep end of the conceptual pool -- I recommend reading the overview in section 2 of the MOF 1.3 specification, plus the design rationale in section 4 of the XMI 1.1 specification... Most common UML tools now support an XMI representation of their models. (The UML standard does not, however, completely define diagram graphics for interchange; stay tuned for UML 2.0, which will correct this deficiency.) Some tools, such as the open source ArgoUML, use XMI as their native file storage format... I envision an XML design tool that operates in loose collaboration with any UML tool that can import and export XMI representations of its models. I've used this XMI format of UML models to create a Web-based tool for bidirectional transformation between UML and XML Schema... In February 2002, Rational Software announced their new XDE UML product, which is integrated into the Eclipse framework. TogetherSoft and WebGain have also committed to support for Eclipse. I have integrated my XML Schema generation and reverse engineering tool into Eclipse; all of the schema examples in this article were generated using this tool... The fourth part of this series will describe techniques for reverse engineering an existing XML Schema into a UML model, again using the XMI representation as an intermediary to gain UML tool independence." See "XML Metadata Interchange (XMI)."

  • [May 03, 2002] "Modeling XML Applications. Part 2." By Dave Carlson (Ontogenics Corp., Boulder, Colorado). In Software Development Magazine (April 2002). "Models are an essential part of system analysis and design, be they specified in UML or defined by the structure of software artifacts such as XML schemas. However, models that are tied too closely to today's deployment technology may be short-lived. If your company decides to switch from J2EE to Microsoft's .NET, will your system's design survive the migration? The answer is yes, if you build in resilience from the start by using a model-driven architecture (MDA). MDA enables a separation of domain models from specific deployment platforms... Let's examine an MDA in which a generic purchase-order specification is refined into two platform-specific models (PSMs) for deployment, one to XML Schema and the other to an Oracle database. Platform-independent models (PIMs) are the cornerstone of distributed systems. Otherwise, without a shared domain model, similar concepts must be designed for each platform, and then those implementations must be synchronized and integrated -- the classic headache when dealing with legacy systems... My preferred approach for mapping between a PIM and a PSM for XML Schema is to define a UML profile that adds XML-specific definitions to a UML model. With this method, the only difference between a PIM and PSM is the addition or subtraction of UML profile information, which is composed of stereotypes and tagged value properties that extend the core UML metamodel for XML design. When reverse-engineering a previously defined XML Schema to UML, I create a logical domain model that includes profile definitions required to re-create an equivalent schema. The PSM may not be structurally identical to the schema (for example, a child element in XML may be mapped to a UML attribute), but the profile stereotypes and tags capture this difference. Then, to create a PIM, I simply remove the XML-specific profile information from this view in UML. In an alternative approach, I define a PSM that is structurally identical to the implementation language. In a relational database, the PSM includes a one-to-one correspondence between classes in UML and tables in the database. The platform-specific model in UML also includes attributes for all primary and foreign keys, and separate classes are included for many-to-many associations. However, this PSM may not represent the good conceptual-level view desired for a PIM domain model. A conceptual view should be close to the terms and structure used by human experts, whereas the PSM must conform to the implementation language. So, to map between the PIM and PSM using this approach, the UML classes and relationships may require significant structural transformation, in addition to the use of a UML profile. This second approach is used by the Rational Rose UML modeling tool for both database modeling and DTD modeling. Rose does not yet include support for XML Schemas. In the case of the Rose PSM for DTD design, each child element in the XML structure is represented as a first-class object in UML, even if it represents a simple string value. To create a PIM from a reverse-engineered model, the PSM must be transformed to a more logical object-oriented structure. As a result, the PIM and PSM are two completely separate model instances, or possibly two packages within a parent model... 
In my next two articles in this series, we'll continue the discussion of these transformation techniques for XML." See also "OMG Model Driven Architecture (MDA)."

  • [May 03, 2002] "Modeling XML Applications With UML. Part 1." By David Carlson. In Software Development Magazine (February 2002). ['Successful business-to-business integration requires sharing underlying domain semantics, processes and policies. Here's how to use UML and XML to gather and display requirements and design with easily understood visual models.'] "By itself, XML is a simple syntax for exchanging data and text-oriented documents, but successful business communication involves more than just a lingua franca. Electronic commerce requires shared models of the underlying domain semantics, business processes and policies. These models are the very essence of business-to-business (B2B) integration. A model may be embedded in the code of an application program that processes the XML documents, or made explicit in the definition of the model's concepts, relationships and constraints. This is the first in a series of articles in which I'll cover three aspects of modeling e-business applications: (1) Processes and communication policies: B2B interactions are not limited to sending a single message, but require a coordinated sequence of the business partners' activities and expectations. (2) Business vocabularies: Each message contains information that may be concise or convoluted. Such content documents are defined by a vocabulary that is shared by the parties engaged in the communication. (3) Web service components: The techniques for applying component-based design to Web services are still emerging, but I'll share some early thoughts about their impact on future development. XML documents are increasingly used to represent and exchange both process and content information when deploying these applications. Process information includes the messaging infrastructure and workflow control that guide the process execution. Many B2B processes are asynchronous and long running, so the XML-based message header information identifies the parties, process and purpose of the message. Business vocabularies define the heart of the message: its content..."

  • [May 03, 2002] "The Promise of UDDI." By Tamara Carter. In Software Development Magazine (April 2002). ['If you build Web services, will the users come?'] "'When should you get involved with Web services?' Mark Colan, an e-business evangelist for IBM, asked the audience at his class "All About UDDI" on Wednesday morning at SD West's Web Services World. 'Now,' he answered, 'so you'll be in a good position when it really gets off the ground.' While the UDDI (Universal Description, Discovery and Integration) specification, part of which emulates a yellow pages directory, certainly offers the promise of greater profit and interactivity by bringing companies and available Web services together, the sparse audience at Colan's talk didn't offer much proof that the public is jumping on the UDDI bandwagon. Colan acknowledged that the public UDDI registry is 'very hit or miss.' As it exists today, the registry may be filled with experiments, bogus information or spam, in addition to useful services. Despite his evangelical role, Colan doesn't think the registry will truly become useful until 2004, the year he believes "we'll really get serious" about Web services. Until that magic date, private UDDI may provide companies with a way to usefully use Web services. According to Colan, you could use UDDI behind your firewall, for example, enabling employees to search within the company for internal or approved service providers and automate transactions beyond current intranet approaches..." See: "Universal Description, Discovery, and Integration (UDDI)."

  • [May 03, 2002] "Sun Readies UDDI Offering." By Paul Krill. In InfoWorld (May 02, 2002). "Sun Microsystems is preparing within a few months to launch a server-based product for setting up UDDI registries for publishing Web services, according to Sun officials. UDDI (Universal Description, Discovery, and Integration) is an industry standard for development of Web services registries, to be deployed either privately within company firewalls or publicly... 'The purpose of the product would be, I would assume, to support the use of registries within a private company, within a company's firewall, or within a set of trusted partners,' said Suzy Struble, manager of XML Web services for Palo Alto, Calif.-based Sun. The product, however, is 'certainly something that could [also] be deployed in a public manner,' added Struble, who is a Sun delegate to uddi.org, which is steering development of the UDDI standard. The UDDI effort will be part of the Sun ONE (Open Net Environment) software initiative and will be important to an upcoming revision of a Sun ONE-based product, according to a Sun representative. Sun recently re-branded several software product lines under the Sun ONE umbrella, including its iPlanet offerings, which feature software such as Web and portal servers. Sun with its UDDI and Web services efforts may be getting a bit ahead of the market, according to one analyst... Sun has no plans to set up a UDDI node for registering publicly available Web services, as some other vendors have done, according to Struble..." See: "Universal Description, Discovery, and Integration (UDDI)."

  • [May 03, 2002] "Open Forum Concept Presentation: Sixth Open Forum on Metadata Registries." By Bruce Bargmeyer (Lawrence Berkeley National Laboratory). 27 pages. Overview of (XML) registries, prepared for the Sixth Open Forum on Metadata Registries, January 20-24, 2003 in Santa Fe, New Mexico. "... registries have some common, overlapping content, which is extended and utilized in different ways. These registries vary according to the intended purpose, granularity of contents, the level of semantics management... (1) OASIS/ebXML XML Registries -- for XML Artifacts; (2) ISO 11179 Metadata Registries -- for Data Semantics. Register Data Elements, components of data elements and groups of data elements. For example, country codes for customer place of residence. Includes: data element concepts, data elements (including representation), value domains, and (multiple) taxonomies; emphasis is on semantic information such as definitions of data elements and value meanings, and stewardship responsibilities. (3) Universal Description, Discovery, and Integration [UDDI] Registries -- for Web-based Business Services; (4) Database System Registries [System Catalogs/Data Dictionaries/ Repositories] -- for schemas, integrity and operational information; (5) Case Tool Registries [Encyclopedias/Repositories] -- for Data model and application program logic; (6) Ontological Registries -- for Concept Structures; (7) Software Component Registries -- supporting the reuse of software components, including basic common elementary objects and object patterns... Open Forum 2003 is a conference drawing together standards developers, software developers and practitioners. The conference is intended to introduce the registries, show how the registries are used and describe the related standards. A major topic will be cooperation between the registries to manage semantics..." See: (1) Forum details in "Open Forum 2003 on Metadata Registries Highlights Data Semantics and Registry Interoperability"; (2) some general references in "XML Registry and Repository." The document is also available in Powerpoint format. [cache]

  • [May 03, 2002] "Privacy and XML, Part 2." By Paul Madsen and Carlisle Adams. From XML.com. May 01, 2002. ['Examining XML-based initiatives for controlling and describing online privacy: Paul Madsen and Carlisle Adams discuss P3P, XACML, XML Encryption and Signature, WS-Security, and SAML. Two weeks ago we published an introduction to privacy issues on the Web. This week we're publishing the follow-up to this article, which describes how XML is being used to address the issue of online privacy.] "There are a number of efforts currently underway in standards bodies and other organizations to address various aspects of privacy with XML-based technologies. (1) P3P The Platform for Privacy Preferences (P3P) is a protocol developed by the World Wide Web Consortium (W3C). P3P (on the server side) defines an XML-based language by which Web sites can describe their privacy policies in a machine readable format. Categories of information include the contact information of the legal entity making the privacy statement, whether users will have access to information collected about them, different types of data being collected, the purpose(s) for collection, and which organizations will have access to the collected data... (2) XACML An OASIS Technical Committee, is producing XACML [XML Access Control Markup Language], a proposal for capturing authorization policies for resources. XACML is expected to address fine-grained control of authorized activities (e.g., read, write, copy, etc.) based on, among other criteria, access requester characteristics ('only Senior VPs and above can view this document'), the protocol over which the request is made ('this data is only viewable if accessed over HTTPS'), and the authentication mechanism ('requester must have authenticated using a Digital ID')... (3) XML Encryption Currently a W3C Candidate Recommendation, XML Encryption is a proposal for an XML vocabulary for capturing the results of an encryption operation performed on arbitrary (but most likely XML) data. A critical feature of the XML Encryption proposal is that it supports the concept of encrypting only specific portions of an XML document, not only minimizing the encryption processing but, more importantly, leaving non-sensitive information in plain text form such that general (i.e., non-security-related) processing of the XML can proceed... (4) XML Signature W3C XML Signature defines an XML Schema for capturing the result of a digital signature operation applied to arbitrary (but often XML) data. Unlike previous non-XML Digital Signature standards, XML Signature has been designed to both account for and take advantage of the Internet and XML... (5) WS Security WS Security is a recent proposal from Microsoft for adding security metadata to SOAP messages. We'll discuss it here in the context of a site requesting user information from .NET My Services [authenticity, confidentiality]... (6) SAML An OASIS initiative, SAML (Security Assertions Markup Language) will provide a standard way to define user authentication, authorization, and attribute information in XML documents. As its name suggests, SAML will allow business entities to make assertions regarding the identity, authorizations, and attributes of a subject to other entities, which may be partner companies, other enterprise applications, and so on... 
Privacy, ensuring that e-business customers remain in control over the personal information they share with online businesses, is one of the key issues facing today's companies as they attempt to take advantage of the great potential the Internet has for creating personalized and trusted relationships with customers. XML, not surprisingly, plays a critical role, both aggravating the problem through the free-flow of information it enables and providing syntax that offers a piece of the technology puzzle for solving the problem..."
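
The element-level granularity that makes XML Encryption attractive for privacy can be shown structurally without any cryptography. The Java sketch below uses only JAXP and fakes the ciphertext rather than encrypting anything: it replaces a single sensitive element with an EncryptedData/CipherData/CipherValue structure in the XML Encryption namespace, leaving the rest of the order in plain text so ordinary processing can continue.

    import java.io.StringReader;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.Node;
    import org.xml.sax.InputSource;

    public class ElementEncryptionShape {
        static final String XMLENC_NS = "http://www.w3.org/2001/04/xmlenc#";

        public static void main(String[] args) throws Exception {
            String order = "<order><item>widget</item>"
                    + "<creditCard>4111 0000 0000 0000</creditCard></order>";
            DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
            f.setNamespaceAware(true);
            Document doc = f.newDocumentBuilder()
                    .parse(new InputSource(new StringReader(order)));

            // Swap only the sensitive element for an EncryptedData element;
            // <item> stays in plain text for general processing.
            Node card = doc.getElementsByTagName("creditCard").item(0);
            Element enc = doc.createElementNS(XMLENC_NS, "EncryptedData");
            Element cipherData = doc.createElementNS(XMLENC_NS, "CipherData");
            Element cipherValue = doc.createElementNS(XMLENC_NS, "CipherValue");
            cipherValue.setTextContent("BASE64-CIPHERTEXT-GOES-HERE"); // no real crypto here
            cipherData.appendChild(cipherValue);
            enc.appendChild(cipherData);
            card.getParentNode().replaceChild(enc, card);

            TransformerFactory.newInstance().newTransformer()
                    .transform(new DOMSource(doc), new StreamResult(System.out));
        }
    }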

  • [May 03, 2002] "Splitting and Manipulating Strings." By Bob DuCharme. From XML.com. May 01, 2002. ['In his monthly column on XSLT, "Transforming XML," Bob DuCharme focuses on using XSLT to manipulate strings, and as ever relishes the opportunity to talk about poetry and wine on XML.com.'] XSLT is a language for manipulating XML documents, and XML documents are text. When you're manipulating text, functions for searching strings and pulling out substrings are indispensable for rearranging documents to create new documents. The XPath string functions incorporated by XSLT give you a lot of power when you're manipulating element character data, attribute values, and any other strings of text that your stylesheet can access. We'll start by looking at ways to use these functions to split up strings and how a PCDATA element might be split into subelements. [...] Next month, we'll look at how to compare two elements to see if they're the same. We'll also look at a way to implement a global string replace with an XSLT stylesheet." For related resources, see "Extensible Stylesheet Language (XSL/XSLT)."

  • [May 03, 2002] "DAML Reference." By Uche Ogbuji and Roxane Ouellet. From XML.com. May 01, 2002. ['The DARPA Agent Markup Language (DAML) is an important part of developing Semantic Web technology. This third installment of our series looking at the DARPA Agent Markup Language provides a quick reference for concepts from RDF, RDF Schema and DAML.'] The reference covers: (1) RDF Syntax Elements; (2) DAML+OIL Syntax Elements; (3) RDF Classes and Properties; (4) RDFS Classes and Properties; (5) DAML Classes and Properties. See also Part I and Part II in the DAML series. General references in "DARPA Agent Mark Up Language (DAML)."

  • [May 03, 2002] "If Ontology, Then Knowledge: Catching Up With WebOnt." By Kendall Grant Clark. From XML.com. May 01, 2002. ['Kendall Clark examines the W3C's Web Ontology Working Group in this week's "XML Deviant" column. The WebOnt WG is developing a language that will be the successor to DAML, intended for use in describing ontologies. Kendall explains the mission of WebOnt and comments on their progress so far.'] "There are at least two broad plans for the direction in which the Web may evolve and, significantly, each of them has XML as a keystone. The first, known colloquially as 'web services', is largely the domain of the largest corporate IT vendors, most notably Microsoft and IBM. The second, the 'Semantic Web', is largely the domain of the W3C, academic, government, and some industry researchers. Curiously, the W3C seems to have taken the position that the Semantic Web, or something very much like it, is inevitable, if the Web is to mature fully... [Here] I introduce one of the major elements of the W3C's Semantic Web initiative, the Web Ontology Working Group ('WebOnt'). In short, WebOnt, co-chaired by Professor Jim Hendler (of University of Maryland) and Professor Guus Schreiber (of University of Amsterdam), has been given the task of developing an ontology vocabulary for use in the Semantic Web (which is to be distinguished from an ontology of the Web, i.e., a formal schema of what there is on the Web: hosts, resources, media types, and the like). This ontology vocabulary or ontology language corresponds to the foundational stratum of Tim Berners-Lee's Web Architecture layer cake diagram. But what is an ontology vocabulary? It is a formal schema which, as the WebOnt Charter puts it, allows for the 'explicit representation of term vocabularies and the relationships between entities in these vocabularies'. Less formally, an ontology language is a markup language -- presumably in XML, but RDF is possible, too -- that allows users to define formal ontologies or schemas... If the Semantic Web is worth pursuing at all, something like a W3C-blessed Web Ontology Language is not only desirable but necessary. Though I lament yet another W3C generically named specification as unhelpful and confusing, the WebOnt WG which has been assembled to produce this crucial element of the Semantic Web is so far proving to be determined and capable. But only time will tell." See: (1) "Requirements for a Web Ontology Language " [W3C Working Draft 07-March-2002]; (2) "W3C Web Ontology Working Group Formed to Extend the 'Semantic Reach' of XML/RDF Metadata Efforts"; (3) "XML and 'The Semantic Web'."

  • [May 02, 2002] "xCRL: Reducing Integration Expense with Standardization." By Ram Kumar. In CRM.Insight (May 02, 2002). "The rapid adoption of e-business has created a new world of interoperability between organizations, systems, processes, platforms, tools and, most importantly, data... If a standard way of defining customer information and relationships existed that was vendor neutral, open (i.e., independent of tools, systems, languages and platforms), and enabled portability and interoperability of data, then it would be possible to reduce the expensive and complex integration problems associated with new business initiatives. The proposed standard -- by the Customer Information Quality Committee of OASIS -- is called Extensible Customer Relationships Language, or xCRL, and is intended to meet this requirement. xCRL, is a set of XML vocabulary specifications for defining customer characteristics such as name, address, age, customer number, e-mail address, and so on. In addition, xCRL describes, in a standard way, how individual customers and organizations interact with one another. As currently defined, xCRL enables users to describe relationships such as person-to-person, person-to-organization or organization-to-organization in a standard way. For example, if a CRM system and an Enterprise Resource Planning system both understood xCRL definitions, they could automatically interoperate without needing expensive, custom integration. This would accelerate the time taken to deploy such systems and allow them to interact more readily with a wider range of other systems..." See also "Customer Identity / Name and Address Markup Language (CIML, NAML)."

  • [May 02, 2002] "Sun Offers Free Version of StarOffice." By Tom Krazit. In InfoWorld (May 01, 2002). "A free version of Sun Microsystems' StarOffice business productivity suite is now available for download from OpenOffice.org, an open-source developer community sponsored by Sun. OpenOffice.org 1.0 provides users with a nearly identical software package to Microsoft's Office suite, featuring word processor, spreadsheet and presentation programs, said OpenOffice.org in a statement Wednesday. The source code from the previous release, StarOffice 5.2, was the code base for both OpenOffice.org 1.0 and StarOffice 6.0, but the two products are both advances over that version. The StarOffice 5.2 software was distributed as a free download prior to the release of 6.0. Sun's desire to capture business customers led it to offer paid support contracts for 6.0, and still allow free downloads of an improved release, OpenOffice.org 1.0, for users who didn't require support or training. StarOffice 6.0, announced in March and slated for availability this month, comes with additional features such as a database and special fonts. Sun, in Palo Alto, Calif., also provides training for StarOffice, which will be priced at under US $100, according to an earlier Sun announcement. E-mail and calendar functions that were disliked by users of StarOffice 5.2 have been removed from OpenOffice.org 1.0, said Zaheda Bhorat, a community manager for OpenOffice.org and a marketing manager for Sun. Future releases will add those functions back in when the community agrees on the best way to do that, she said. The product also contains support for XML, which will allow users to save files to PDAs (personal digital assistants) and other mobile devices when plug-ins for that type of file transfer are completed, said Sam Hiser, co-leader of the marketing project at OpenOffice.org and chief information officer of New York startup ReelAmerica. Users running Linux, Windows, Solaris, and other Unix flavors will be able to run OpenOffice.org 1.0. A port for Macintosh users is in the works, said Bhorat. The software also was set up to work with several different file formats so Microsoft Word and Excel files could be kept and worked with in the new version. However, macros and other specially created programs for the Microsoft products will not work with OpenOffice.org 1.0, Hiser said..." See: "StarOffice XML File Format."

  • [May 01, 2002] "Collaboration-Protocol Profile and Agreement Specification." By OASIS ebXML Collaboration Protocol Profile and Agreement Technical Committee. Version 1.9. April 19, 2002. 157 pages. " The objective of this specification is to ensure interoperability between two Parties even though they may procure application software and run-time support software from different vendors. The CPP defines a Party's Message-exchange capabilities and the Business Collaborations that it supports. The CPA defines the way two Parties will interact in performing the chosen Business Collaboration. Both Parties SHALL use identical copies of the CPA to configure their run-time systems. This assures that they are compatibly configured to exchange Messages whether or not they have obtained their run-time systems from the same vendor. The configuration process may be automated by means of a suitable tool that reads the CPA and performs the configuration process. In addition to supporting direct interaction between two Parties, this specification may also be used to support interaction between two Parties through an intermediary such as a portal or broker... As defined in the ebXML Business Process Specification Schema, a Business Partner is an entity that engages in Business Transactions with another Business Partner(s). The Message-exchange capabilities of a Party may be described by a Collaboration-Protocol Profile (CPP). The Message-exchange agreement between two Parties may be described by a Collaboration-Protocol Agreement (CPA). A CPA may be created by computing the intersection of the two Partners' CPPs. Included in the CPP and CPA are details of transport, messaging, security constraints, and bindings to a Business-Process-Specification (or, for short, Process-Specification) document that contains the definition of the interactions between the two Parties while engaging in a specified electronic Business Collaboration..." From the posting of Dale Moberg (for the OASIS ebXML-CPPA TC), 'OASIS ebXML-CPPA specification 1.9 Initial Public Review': "The OASIS ebXML CPPA Technical Committee has unanimously approved an initial public review period for their version 1.9 core specification. For this initial public review period -- which will last until May 18, 2002 -- all received comments will be considered by the Technical Committee; the committee will then include any changes motivated by the public comments that are received in a final 2.0 version. The Technical Committee will vote at the end of May whether to make version 2.0 an approved Committee Specification of the Technical Committee, and will also vote at that time, whether to submit the 2.0 specification for OASIS membership review and approval as an OASIS Standard. In short, the technical committee is seeking the standards community's comments prior to OASIS membership review so that any issues can be addressed in the version submitted to the membership. The committee considers the specification, and the accompanying schemas, to be essentially stable, and suitable for early implementation. Comments from both analysts and implementers are being solicited." [cache]

  • [May 01, 2002] "Plug-And-Play Portlets." By David L. Margulius. In InfoWorld (April 26, 2002). "Portlet interoperability, currently the subject of two high-profile standards initiatives, is the dream of anyone familiar with today's enterprise portal landscape. The idea is that portlets -- those little portal windows that connect to specific back-end functions -- would be able to run in any portal environment regardless of the specific portal framework for which they had been written. This would save enterprises much of the time and money they currently spend custom-developing portlets for each portal vendor's framework... A real push is on to create a standard portal component or interface model so portlets can be shared and reused across frameworks, and published as services that can be syndicated to and consumed by other portals or applications. JSR (Java Specification Request) 168, a proposal submitted to the Java Community Process, is attempting to identify standard APIs for portlets to plug into J2EE portal servers. The WSRP (Web Services for Remote Portals) technical committee of the Organization for the Advancement of Structured Information Standards (OASIS) is trying to develop a Web services-based, language-independent component model for plugging Web services into portlets, so service providers can implement to a common plug-and-play interface. WSRP services are essentially WSIA (Web Services for Interactive Applications) component services built on SOAP (Simple Object Access Protocol) and WSDL, with the addition of context elements including user profile and information about device, locale, and desired markup language. Now the bad news: Much of the value of enterprise portals lies in the interaction and coordination among the different functions within the overall framework. For example, all portlets in the portal should draw on the same user and account management, personalization, session management, and directory and search capabilities. And perhaps most importantly, they should support process coordination and workflow so that a group of portlets representing services can act in a synchronized way as part of a business process. The problem is that none of the players participating in the two standards efforts really want to standardize interfaces or APIs that will enable this kind of richer portlet coordination... A Web services-based approach would sacrifice performance for broader applicability and interoperability. These interfaces would be less functional (and the question is how much less) especially in areas such as security, management, and QoS (quality of service). Incumbent integration players, and some JSR 168 advocates, paint Web services as wholly inadequate for any serious portlet or application integration, until more robust Web services protocols are developed. Yet both the Java and Web services approaches are pretty much starting from scratch when it comes to higher-level interaction issues such as metadata/directory, process, and identity management -- issues that are being addressed first in the development of other protocols such as WSIL (Web Services Inspection Language), WSFL (Web Services Flow Language), ebXML (E-business XML), and XSD (XML Schema Definition) and likely won't be incorporated into portlet standards for some time..."

  • [May 01, 2002] "A Striking Balance." By Tom Yager. In InfoWorld (April 29, 2002). "Macromedia's tools achieve an uncommonly good balance between visual design and coding capabilities. The newly integrated Dreamweaver MX has much stronger editing and debugging features. ColdFusion MX adds seamless Java/J2EE connectivity, and Web services support is standard in the new ColdFusion and Dreamweaver... The $799 Studio MX bundle... includes Dreamweaver MX, Flash MX, Fireworks MX, and FreeHand 10, along with a single-user edition of ColdFusion MX. The centerpiece of Studio MX is Dreamweaver. The MX release of this Web development environment combines the HomeSite+ code editor, ColdFusion Studio, and Dreamweaver UltraDev to create an integrated tool that's almost as adept at handling scripts, HTML, XML, and databases as it is at layout. Although the old Dreamweaver was comparable in features to FrontPage, Dreamweaver MX is more like Visual InterDev with a cleaner interface and a killer layout engine... Through its integrated Apache Axis SOAP (Simple Object Access Protocol) engine, ColdFusion MX exposes ColdFusion components as Web services. The engine generates WSDL (Web Services Description Language) service description files automatically when a remote client requests them. CFML and CFScript make it easy to consume remote Web services, including services that return complex structured data. ColdFusion MX's XML support is strong and includes WDDX (Web Distributed Data Exchange) for the representation and exchange of complex XML data..."

  • [May 01, 2002] "Future of Macromedia Rests on MX." By [Seybold Bulletin Staff.] In The Bulletin: Seybold News and Views on Electronic Publishing Volume 7, Number 31 (May 1, 2002). "Macromedia this week completed its MX family of products, launching the authoring and server tools that complement the Flash MX player introduced by the company last month. The MX suite ties together Macromedia's authoring and player tools with an important retooling of the ColdFusion server technology that Macromedia acquired in its 2001 merger with Web development up-and-comer Allaire... the upgrade is substantial, particularly in the case of Dreamweaver MX. It adds built-in support for PHP scripting, XHTML export, and import and manipulation of XML Schema documents. It also improves the product's handling of CSS2 style sheets and incorporates UltraDev and ColdFusion Studio (both of which will fade away as stand-alone products). For Web developers, the code libraries, hints and debugging aids are first rate; designers have been given integrated, color-coded code views to complement the visual layout tools. And if you don't like the way the product comes out of the box, nearly everything about it-including the keyboard mappings, color coding, code hints and revamped user interface-is fully configurable by end users... ColdFusion MX is the 'Neo' version of ColdFusion that CTO Jeremy Allaire and his team have been prepping for many months. It recasts ColdFusion-a popular scripting-based Web application server-as a scripting environment able to run on top of J2EE and .Net application servers. As Allaire put it, 'ColdFusion MX defines a new role for ColdFusion in the enterprise. We're making it incredibly easy for mainstream developers to create rich Internet applications and to use XML and Web services.' The easy part is that with ColdFusion MX, Web services written in Java can be encapsulated as tags that the ColdFusion server interprets..." [The Macromedia MX product family: Macromedia Studio MX, Macromedia Dreamweaver MX, Macromedia Flash MX, Macromedia Flash Player 6, Macromedia ColdFusion MX, Macromedia Fireworks MX.]

  • [May 01, 2002] "A More Standard Approach for Document Sciences." By [Seybold Bulletin Staff.] In The Bulletin: Seybold News and Views on Electronic Publishing Volume 7, Number 31 (May 1, 2002). "Embracing the XML standard and a the use of third-party repositories, Document Sciences Corporation (DSC) has reworked its traditional document-composition software into a new product, xPression, which will be the company's main product offering in the future. DSC's focus will remain on facilitating the efficient composition of text-intensive variable documents, including proposals, contracts, policies, and correspondence. Insurance and financial institutions have typically been major customers for DSC... While maintaining the company's traditional market focus, xPression represents a big change in architecture and approach from DSC's legacy offering, Autograph. Where Autograph was considered a tool for creating galleys that could then be broken into pages, xPression takes a frame-oriented approach to layout... but xPression uses native XML as its internal format. XPression has been written to be modular, open and scalable. It is designed to integrate with existing Web portals and to run in standard application-server environments. The user interface of xPression is being rewritten as well..." See also the press release, "Document Sciences Introduces Enterprise Content Automation To Application Server Environment": "XML (Extensible Markup Language) compatibility allows xPression to easily interchange data with an organization's enterprise software and simplifies creation and content of customer communications. xPression's J2EE (Java 2 Enterprise Edition) component design takes advantage of leading-edge open-architecture standards for software development, allowing xPression Content Automation solution to easily integrate with other enterprise applications and easily port to a broad range of hardware platforms. An open content repository provides further connectivity and allows organizations to fully leverage their enterprise data within mission-critical customer communications, speeding time to market and enhancing ROI..."

  • [May 01, 2002] "Business Transaction Protocol." Draft version 0.9.6. Second review draft. May 01, 2002. 179 pages. OASIS Business Transactions TC. Primary authors and editors of the main body of the specification (alphabetical order) Alex Ceponkus, Sanjay Dalal, Tony Fletcher, Peter Furniss, Alastair Green, and Bill Pope. From the Introduction: "BTP is designed to allow coordination of application work between multiple participants owned or controlled by autonomous organizations. BTP uses a two-phase outcome coordination protocol to ensure the overall application achieves a consistent result. BTP permits the consistent outcome to be defined a priori -- all the work is confirmed or none is -- (an atomic business transaction or atom) or for application intervention into the selection of the work to be confirmed (a cohesive business transaction or cohesion). BTP's ability to coordinate between services offered by autonomous organizations makes it ideally suited for use in a Web Services environment. For this reason this specification defines communications protocol bindings which target the emerging Web Services arena, while preserving the capacity to carry BTP messages over other communication protocols. Protocol message structure and content constraints are schematized in XML, and message content is encoded in XML instances. The BTP allows great flexibility in the implementation of business transaction participants. Such participants enable the consistent reversal of the effects of atoms. BTP participants may use recorded before- or after-images, or compensation operations to provide the 'roll-forward, roll-back' capacity which enables their subordination to the overall outcome of an atomic business transaction. Note the OASIS BTP TC charter: "The purpose of this technical committee is to address the agreed set of requirements for the business transaction protocol and produce a final specification for the business transaction protocol that works in conjunction with existing and emerging business messaging and web services standards. The Business Transactions Technical Committe is developing XML-based technology for business transactions on the Internet. Business to business interactions on the Internet pose unique challenges; including transactions that span multiple enterprises and long lasting transactions. The interdependent workflows among multiple trading partners, which drive business transactions, need to be coordinated to ensure that the outcome of the transaction is reliable."

April 2002

  • [April 30, 2002] "Switch Tackles XML Traffic." By Ann Bednarz. In Network World (April 29, 2002), pages 25, 28. "Sarvega, which means 'universal' in Sanskrit, next month is unveiling its debut product - a switch that the start-up says will ease translation, encryption and priority-based routing of XML traffic. Slated for launch at NetWorld+Interop 2002 Las Vegas, Sarvega's XML switch is designed to handle XML traffic, offloading that processing from servers. The switch includes content-based routing features, which lets users set priorities on important transmissions. So for example, a financial services firm might choose to configure the switch to provide better quality of service for trade orders than for customer address changes. The device also can encrypt or unencrypt messages, depending on the level of security required. The device is aimed at large companies that are faced with growing levels of XML traffic as vendors in areas such as enterprise application integration and e-procurement shift from proprietary data formats to XML, says John Chirapurath, a Sarvega co-founder and vice president... The idea of a dedicated network device for routing XML traffic appeals to Bill Rocholl, who is first vice president at ABN Amro's services division and is responsible for the financial institution's North American network environment for commercial and consumer client business. ABN Amro has load-balancing products that make application-level decisions based on packet inspection, but they are not geared for making packet-level decisions about XML content, Rocholl says. Alternatively, software developers can write code to handle XML routing, but if the network grows or something changes, Rocholl doesn't want to have to bring a developer back in to make code changes... Sarvega's switch sits on a LAN and scans for XML-based protocols. Incoming packets might be composed in any of several XML variants - there are multiple dialects that exist today, such as ebXML for general e-business transactions and CIDX for the chemical industry. If necessary, the switch translates the XML document into the format that the recipient requires. Rocholl says XML translation capabilities could be helpful at ABN Amro, which uses standards-based Document Type Definitions to define the structure of information stored in applications. But definitions and schema vary with off-the-shelf XML products, requiring developers to write a translator for these variations. The Sarvega switch could take on translation tasks, Rocholl says..."

  • [April 30, 2002] "RSA Removes Patent Block to SAML Uptake." By ComputerWire Staff. In The Register (April 30, 2002). "RSA Security Inc. yesterday said it will grant royalty-free licenses to any developer that wants to use the Securities Assertions Markup Language (SAML) in their products. The company revealed last month that it has two US patents it believes cover aspects of the XML access control standard. The only caveat RSA is imposing on the royalties is that any other companies which claim to have intellectual property covering parts of SAML must also grant RSA a royalty-free license to use their technology. No other company has yet to disclose an IP interest in any other parts of SAML, but should one come forward, RSA's terms leave it open for them to levy royalties against firms other than RSA. Also, any developer that makes an SAML product using tools from companies that have licensed from RSA, must also license from RSA under the same royalty-free terms, so RSA can keep track of where its IP is being used. The patents in question are 6,085,320 and 6,189,098, both entitled 'Client/Server Protocol for Proving Authenticity'. RSA disclosed the patents after a direct request from the SAML working group, part of the OASIS XML interoperability group Eve Maler, one of Sun Microsystems Inc's engineers in the working group, said the RSA patents 'appear to be essential to the implementation of the SAML specification.' She added that RSA's decision to go royalty-free is a good one for the encouraging uptake of standards in the emerging digital identity space..." For details, see the text of the RSA announcement: "RSA Security Grants Royalty-Free Use of Patents to Vendors Implementing SAML-Based Solutions. License of Patents Aims to Speed Implementations, Interoperability and Adoption of SAML-Based Applications and Products." Also: "Committee Specification Level Documents for the Security Assertion Markup Language (SAML)" and general references in "Security Assertion Markup Language (SAML)."

  • [April 29, 2002] "(WSXL) Web Service Experience Language Version 2." Edited by Angel Diaz, John Lucassen, and Charles F Wiecha (IBM). IBM Note 10-April-2002. Version URI: http://www.ibm.com/developerworks/ws-wsxl2/ [the URL is inactive 04/29]. Posted on IBM developerWorks. "WSXL (Web Services Experience Language) is a Web services centric component model for interactive Web applications, that is, for applications that provide a user experience across the Internet. WSXL is designed to achieve two main goals: enable businesses to deliver interactive Web applications through multiple distribution channels and enable new services or applications to be created by leveraging other interactive applications across the Web. To accomplish these goals, all WSXL component services implement a set of base operations for life cycle management, accepting user input, and producing presentation markup. More sophisticated WSXL component services may be specialized to represent data, presentation, and control. WSXL also introduces a new description language to guide the adaptation of user experience to new distribution channels. User experiences that are implemented using WSXL can be delivered to end users through a diversity of distribution channels - for example, directly to a browser, indirectly through a portal, or by embedding into a third party interactive Web application. In addition, WSXL user experiences can easily be modified, adapted, aggregated, coordinated, synchronized or integrated, often by simple declarative means. New applications can be created by seamlessly combining WSXL applications and adapting them to new uses, to ultimately leverage a worldwide pallet of WSXL component services. WSXL is built on widely accepted established and emerging open standards, and is designed to be independent of execution platform, browser, and presentation markup... WSXL is the next piece of the web services stack. WSXL provides a web services standards based approach for web application development, deployment and maintenance. WSXL enables Dynamic e-Business, by moving from transactions driven by a single business entity to a world of electronic transactions involving multiple business entities who compose and aggregate re-usable web applications. As a result, these applications can leverage new and innovative revenue models." See: "Web Services Experience Language (WSXL)."

  • [April 29, 2002] "Address Structure Harmonization." By David RR Webber (XMLGlobal). Also available in PDF format. In a message posted to the OASIS CIQ TC. April 27, 2002. 10 pages/slides. ['... ideas on using Schema technology to build layers of deeping complexity with address formats -- tied explicitly to business use criteria'] "[The] Objective is to provide layers of address granularity tailored to business use Address use levels: [Level 0 = handwritten postal address -- machine parsed; Level 1 = in country simple postal address -- legacy; Level 2 = extended postal address -- advanced features; Level 3 = shipping / delivery address -- large organization; Level 4 = facilities management -- universal / exotic / global]. Share common noun, definitions, share validation rules and quality factors, Provide means to manage the quality of address content, provide global language and format support." Technology: Use W3C Schema to provide layers of increasingly refining definitions based on business use; Use OAGIS V8 methods to restrict syntax to best-practice techniques; Enable use of ebXML AssemblyDoc technology; Provide migration from legacy address formats; Harmonization of existing and emerging standards to single common base noun dictionary and use templates..." This contribution is pertinent to the CIQ effort to accommodate a range of different (postal) address requirements from various user communities. For the broader framework and multiple standards efforts, see "Markup Languages for Names and Addresses."

  • [April 29, 2002] "Web Services Security Tightens." By Darryl K. Taft and Dennis Fisher. In eWEEK (April 29, 2002). "Since security remains among the key challenges that must be met before Web services can become pervasive, some companies are moving to answer the call. Baltimore Technologies plc. and Hitachi Computer Products Inc.'s Quadrasis business unit this week will each deliver tools to help meet Web services' security challenge. At the heart of these technologies is SAML (Security Assertions Markup Language), an XML-based standard for exchanging security credentials among online business partners. Nearing ratification by OASIS, or the Organization for the Advancement of Structured Information Standards, SAML enables users to sign on to one site and have their security credentials and information transparently transferred across affiliated sites... Quadrasis this week will announce its Enterprise Application Security Integration Developer Tool. It enables users to link security solutions via SAML wrappers and combine them to form a front-line defense for Web services security. The EASI tool is part of the company's EASI Security Unifier, which is based on SAML. Bret Hartman, chief technology officer of Quadrasis in Waltham, Mass., said the EASI Developer Tool is like 'enterprise application integration for security'..." See: "Security Assertion Markup Language (SAML)."

  • [April 29, 2002] "Federation Key to Web Services." By James Kobielus. In Network World (April 29, 2002). "You increasingly will see the term federation used with a new security standard, the XML-based Security Assertions Markup Language (SAML) 1.0, which is nearing ratification by the Organization for the Advancement of Structured Information Standards (OASIS). Web access-management vendors such as IBM/Tivoli, RSA Security/Securant, Netegrity, Oblix, Entegrity, Entrust Technologies and Sun/iPlanet have rallied around SAML 1.0 as a means for establishing standards-based interoperability among their products. As these vendors sell their wares into corporations large and small, SAML-based federation will be critical to knitting organizations' diverse access-management environments into unified business-to-business supply chains. So what precisely is SAML 1.0? At its heart, the standard defines XML/Simple Object Access Protocol-based protocol interactions that support real-time authentication and authorization across federated Web services environments. The standard defines request and response messages that security domains use to exchange authentication, attribute and authorization information in the form of trust-assertion messages about named users and resources. Users log on to their home domains through authentication techniques such as ID/password or Kerberos, and this authentication is communicated to a federated destination site through a SAML authentication assertion... During the next several months, Web access-management vendors will address interoperability issues among their SAML 1.0 implementations. If all goes well with initial interoperability testing, expect to see some commercial SAML 1.0-enabled products this year. But it may take several years before SAML-based products mature to the point where users can implement federated single sign-on and authorization scenarios without having to write excessive amounts of custom code to bridge divergent vendor implementations of the core standard." See: "Security Assertion Markup Language (SAML)."

  • [April 29, 2002] "UDDI Endures Reality Check Debate." By Paul Krill. In InfoWorld (April 26, 2002). "Although UDDI (Universal Description, Discovery, and Integration) has not yet fulfilled its promise to become the public registration technology for Web services, the concept is gaining a steady foothold, a panel of uddi.org members said during a session here at the Software Development Conference and Expo on Thursday. UDDI is designed to provide registries, either public or private, for registering and discovering Web services. Panelists from uddi.org, which is shepherding the technology, were sounded out about the progress of UDDI. The concept is still maturing, said panelist Joel Munter, senior software engineer at Intel, following comments from panel moderator Brent Sleeper, a consultant at Stencil Group, about registries allegedly 'missing in action' and scant services being available. 'There's a reluctance to populate a public registry with Web services,' Munter said. 'I don't think [the lack of services registered] says [UDDI] is not successful,' said panelist Suzy Struble, manager of XML industry initiatives at Sun Microsystems. Many Sun customers still are thinking about using UDDI internally first before considering utilizing public registries, she said. UDDI is well-positioned for use within firewalls, she said. Panelist Claus Von Riegen, XML standards architect at SAP, which has deployed a UDDI business registry, agreed with the notion that public registry development is not the only measurement of success of UDDI. UDDI, he added, can be used for business-to-business integration, enterprise application creation, and for developing a network of business partners..." See: "Universal Description, Discovery, and Integration (UDDI)."

  • [April 27, 2002] "Before the United States Department of Justice and United States Federal Trade Commission Joint Hearings on Competition and Intellectual Property Law and Policy in the Knowledge-Based Economy: Standards and Intellectual Property: Licensing Terms." Testimony of Daniel J. Weitzner (Technology and Society Domain Leader, World Wide Web Consortium). 18-April-2002. Washington, DC USA. Excerpts: "I am the head of the World Wide Web Consortium's (W3C) Technology and Society activities, responsible for development of technology standards that enable the Web to address social, legal, and public policy concerns... My goal is to contribute to the factual basis of your inquiry into antitrust, intellectual property and technical standards by providing an overview of the experience that the World Wide Web community has had with patents over the last four years. This testimony will highlight three main points: (1) First, the Web itself has been possible only in the context of open, royalty-free (RF) standards. (2) Second, the 'reasonable, non-discriminatory terms' (RAND) licensing model common in many traditional standards bodies is unlikely to be accepted in the Web environment. (3) Third, W3C is working hard to develop a Royalty-Free patent policy that encourages the continued evolution of the Web as a universal information space, while respecting our Member's legitimate intellectual property rights... Of critical importance to the rise of electronic commerce as a new marketplace, Web technology allows a wide variety of new systems and technologies to be built on top of the basic architecture of the Web, thus enabling continual innovation in the design of Web-based applications and services. Two key attributes of Web standards are responsible for the ubiquity and flexibility of the Web: 1) simple, extensible design, and 2) open, unencumbered standards... Whether patents and claims related to W3C technologies are in fact valid or not, the risk of costly, time-consuming litigation and possible limitations on use by the right holders, is sufficient to suffocate much of the dynamic development activity that has been driving the Web industry... The strongest reaction [to the W3C Patent Policy Working Group's first proposal] came from various communities of open source developers who declared, (in several thousand emails sent to the W3C public comment mailing list) that a RAND approach would cause open source developers to stop using W3C web standards, impel some to form alternate Web standards, thus balkanizing the Web, and overall constituted a take-over of the Web by large corporate interests... W3C [has] responded to input from W3C Members and the public by adding invited experts to the Patent Policy Working Group to represent the Open Source community, creating a task force within the Patent Policy Working Group to examine how to accommodate the Open Source community, and creating a public home page for the PPWG, publishing public meeting records, and committing to additional public drafts of the framework before the policy was finalized. Based on input from a meeting the W3C Members, the Patent Policy Working Group recently issued a draft royalty free policy. 
The main features of this policy are: (1) RF licensing goals for W3C Recommendations; (2) RF licensing obligations that Working Group participants will undertake as a condition of Working Group membership, along with means of excluding specific patents from those obligations; (3) the definition of a Royalty-Free license; (4) disclosure rules for W3C Members; (5) an exception handling process for situations in which the Royalty-Free status of a specification comes under question..."

  • [April 26, 2002] "Postal Address Template Description Language (PATDL): A Contribution To An International Postal Addressing Standard." By Joe Lubenow (Industry Co-Chair, UPU DMAB Address Management Project Team). February 20, 2002. 27 pages. Included in a 2002-04-26 posting "Postal Address Templates" to the OASIS Customer Information Quality TC. With three appendices: Appendix I: The DTD defines version 1.0 of the Postal Address Template Description Language (PATDL). Preceding the main body of the DTD are comments that explain a number of technical matters not documented elsewhere, such as the syntax of the trigger conditions and the rules governing how they are interpreted. Appendix II: PATDL version of the template designed by Mabel Grein of USPS to represent the street address format using ECCMA codes and ADIS rendition instructions It has been validated using the Postal Address Template Description Language (PATDL) DTD. Appendix III: An annotated example designed to show many of the features of PATDL. It uses ADIS and ECCMA element codes and ADIS rendition instructions. There is an example of natural language element names with a CEN TC 331 element definer. It has been validated using the Postal Address Template Description Language (PATDL) DTD. -- "The intention of this paper is to discuss criteria that are relevant to achieving the goals laid down in these two [previous UPU] resolutions, to examine certain areas of progress in this effort, and to introduce the XML-based Postal Address Template Description Language (PATDL) that demonstrates how many of these goals might be met...The proposed Postal Address Template Description Language (PATDL) is currently expressed as an XML Document Type Definition (DTD). The core features of PATDL, such as the description of templates in terms of elements, the set of rendition instructions, and the trigger conditions, are presented in sufficient detail to support the development of a prototype reference implementation. Other features, such as the specification of character sets and external tables, are indicated but not developed..." [Note from Joe Lubenow: 'This is a proposal made to the] Universal Postal Union (UPU) POST*Code project on postal address templates... The UPU seeks to define address templates on a country by country basis, and given lists of address elements, to determine whether actual addresses can be parsed into the elements, and passed though the templates, to constitute deliverable addresses. This method of describing templates also covers rendition instructions which are used to edit the addresses to fit into a vertically or horizontally constrained space such as an address label, called for as a second step in the UPU project. It is intended to support both single piece mail and business mailings. Elements and rendition instructions from multiple sources, such as OASIS, ECCMA, ADIS, etc., or natural language tags, can be used within the same template... comments or suggestions are very welcome.'] See: "Markup Languages for Names and Addresses" (especially the section on Universal Postal Union (UPU). [source]

  • [April 26, 2002] "DISA to Create XML Clearinghouse." By Christopher J. Dorobek. In Federal Computer Week (April 26, 2002). "The April 22 [2002] memo, signed by John Stenbit, DOD's chief information officer, and Pete Aldridge, undersecretary of Defense for acquisition, technology and logistics, is part of an effort to 'support interoperability and minimize overhead' by creating a single clearinghouse and registry for creating, finding, reusing and unambiguously identifying XML components, the memo says... The memo was praised as a step forward in promoting XML interoperability. 'On a quick glance it looks good, which is what I would expect in light of DOD [and] DISA's leadership along these lines,' said Owen Ambur, co-chairman of the XML Working Group established by the federal CIO Council. 'I believe it is very much in line with what we're aiming to do with registry services at XML.gov,' he said. 'I look forward to continuing to work very closely with DOD toward the establishment of a worldwide set of XML registries that act as one by virtue of compliance with the applicable standards for interoperability.' The XML Working Group, working with the National Institute of Standards and Technology, is developing a registry of 'inherently governmental' data elements, document type definitions and schemas. An April report from the General Accounting Office said that the DOD XML registry is part of the department's effort to promote interoperability vertically within individual projects and horizontally across departmental organizations. Michael Jacobs, data architecture project leader for the Navy's CIO office, which recently issued a draft XML guide, said the DOD policy is a good first step toward formalizing the DOD XML registration requirements..." See the news item: "DISA to Establish a US DoD XML Registry and Clearinghouse."

  • [April 26, 2002] "The Hidden Toll of Patents on Standards." By David Berlind. In ZDNet Tech Update (April 25, 2002). ['My recent reports regarding how Microsoft and IBM's control of next-generation Internet protocols could lead to the two companies' control of the Internet itself have resulted in a flood of responses via e-mail and ZDNet's TalkBack forums.'] "Consider the Web applications you've deployed in support of your business' bottom line. Most companies support simple applications, such as allowing customers to come browse their sites. Browsing is one example of an Internet application where all the protocols needed to make it work are freely available. To run your Web site, you can rely on a set of standard protocols like HTTP, HTML and TCP/IP, without any financial encumbrances from royalty seekers. That set of protocols is called a stack. Other Internet applications rely on other freely available protocol stacks. Examples include FTP for file transfer, NNTP for newsgroups, and SMTP for e-mail. Each application relies on a slightly different stack (usually with the same protocols like TCP/IP at the bottom). Today those stacks, from top to bottom, are royalty-free..." Note: Lawrence Lessig treats this topic at length in The Future of Ideas: The Fate of The Commons in a Connected World.

  • [April 26, 2002] "RealNetworks: MPEG-4 Could Be DOA." By Stefanie Olsen and Gwendolyn Mariano. In ZDNet News (April 25, 2002). ['Proposed licensing fees for MPEG-4, a next-generation video compression standard, could mean its early death on the personal computer, RealNetworks CEO Rob Glaser said in a press conference Wednesday.'] "The licensing structure is putting the technology on a path to become irrelevant in the PC industry," Rob Glaser said after giving a keynote speech at the Streaming Media West conference... His remarks address a fee structure put forth in early February by MPEG LA, a licensing body representing 18 patent holders of the technology. Still under consideration by the group, the plan would require licensees to pay 25 cents for each MPEG-4 product, such as an encoder or decoder, with fees capped at $1 million a year. The plan also suggests charging a per-minute use fee, equivalent to 2 cents for each hour encoded in the format, that includes content on DVDs. Such fees would make it cost prohibitive for media players such as RealNetworks' RealOne or Apple Computer's QuickTime to support the emerging standard. Apple immediately rejected the proposed licensing terms, leaving the future of its QuickTime multimedia technology in limbo. The licensing impasse over MPEG-4 could help Microsoft, which has refused to sign on to the standards effort. Even if acceptable terms are eventually hammered out, the delay will give Microsoft more time to push its proprietary Windows Media format. MPEG-4 is the successor to MPEG-1 and MPEG-2, technologies behind digital broadcast transmissions over cable, satellite and the Internet. Like its predecessors, MPEG-4 comprises audio and video technologies that condense large digital files into smaller ones that can be easily transferred via the Web. It also adds such features as interactivity, e-commerce and digital rights management to audio and video files. RealNetworks is pursuing a dual strategy, Glaser said, partly to offset uncertainty over MPEG-4's future in light of the licensing issue. The company's proprietary media player system, RealSystem, supports MPEG-1 and MPEG-2 as well as MPEG-4. Meanwhile, the company on Wednesday backed the new standard by launching a site that promotes interoperability between different codecs, or video and audio compression and decompression technology... Some streaming media experts said there are few signs so far that MPEG LA feels pressured to drastically overhaul the proposed licensing structure..." OpEd note: This is what happens when standards committees shamelessly fall into the control of large companies who subvert the standards process in order to gain a commercial polyopoly (monopoly by plurality), ensuring that their patented IP is incorporated into open standards. Just say no.

  • [April 26, 2002] "Stallman: 'Patent licenses discriminate'." By Richard Stallman. In ZDNet Tech Update (April 23, 2002). "In order for standards to be useful for the general computer-using public, the standards must be freely implementable by all. In order to give free software a chance to compete, the standards must allow free software implementations. Many standards bodies do not insist on this -- they promulgate patent-restricted standards that the public cannot freely implement and that don't allow free software at all. These standards bodies typically have a policy of obtaining patent licenses that require a fixed fee per copy of a conforming program. They often refer to such licenses by the term 'RAND,' which stands for 'reasonable and non-discriminatory.' That term whitewashes a class of patent licenses that are normally neither reasonable nor non-discriminatory. It is true that these licenses do not discriminate against any specific person, but they do discriminate against the free software community, and that makes them unreasonable. Thus, half of 'RAND' is deceptive and the other half is prejudiced... Standards bodies should recognize that such licenses are discriminatory, and drop the use of the term 'reasonable and non-discriminatory' or 'RAND' to describe them. Until they do so, other writers who do not wish to join in the whitewashing would do well to reject that term. To accept and use it merely because patent-wielding companies have made it widespread is to let those companies dictate the views you express. I suggest the term 'uniform fee only,' or 'UFO' for short, as a replacement for 'RAND'..." See [provisionally] "Patents and Open Standards."

  • [April 26, 2002] "SOAPBuilders Roadmap for Implementing Interoperability Tests." From: 'Web Services Interoperability Forum'. Hosted at XMLBus.com, a web site by IONA Technologies. Project excerpt: "Web services interoperability testing began on the SOAP Builders Newsgroup. Tony Hong of XMethods on Jan 31, 2001 posted the group about an interoperability lab he wanted to start. At that time, most SOAP toolkits focused on the wire protocol and described Web services in terms of the messages expected on the wire, the SOAP interface. Additionally, the primary focus of most SOAP implementations was to enable remote method calls using request response over the Internet. As a result, the initial round of tests defined focused on RPC encoded SOAP messages over HTTP... Web services promise that businesses and individuals will be able to communicate and collaborate, over the Web, with various applications, services and processes that are remotely located. The revolution is in the underlying standards, including XML, UDDI, WSDL and SOAP. The promise of Web services can only be a reality if vendors building Web services solutions cooperate and define a common test suite for Web services Interoperability... Freedom of interaction increases the value of Web services. The WSDL-interop testing ensures that Web services created with different companies using different tools have that freedom. While earlier rounds of interoperability testing have shown that clients can consume WSDL from many different servers, no tests have been created to demonstrate that a wide range of Web service client and server tools can process the same WSDL definitions. This testing is another necessary step for Web services to be successful... Two WSDLs are equivalent if their corresponding SOAP messages on the wire look the same. In much the same way that IDL determines the wire format of an IIOP message, WSDL determines the wire format of a SOAP message. Ultimately, WSDL is decoupled from the wire protocol, so it is important to test to make sure that people interpret WSDL consistently and create the wire format in such a way that can be validated against the WSDL..."

  • [April 25, 2002] "Google's Gaffe." By Paul Prescod. From XML.com. April 24, 2002. ['Paul Prescod explains why moving its API to use SOAP was a backward step for the popular search engine, and argues for a return to a pure HTTP and XML interface. Previous articles demonstrated how SOAP-based services can be re-engineered around pure HTTP.'] "Google's release of an API has been heralded as a bright moment for web services. It is an exciting development, but at the same time there is a subset of the programmer community that is disappointed. Google had a similar XML-based API a year ago, but neither documented nor publicized it. Merely by changing '/search' to '/xml' in any Google query, you could get back an XML representation of the query result. It became a pay-only service last fall, which, while unfortunate, was understandable as the service was a straight money-loser. Imagine the surprise at Google reviving the service, adding new features, documenting and promoting it. But Google also moved it to an inferior technical platform, SOAP RPC running over HTTP POST. On many mailing lists, weblogs and discussion groups the reaction was mixed. It felt like one step forward and two steps back. Google seems to be caught up in the hype of SOAP-based web services. Conversely, Amazon has recently released a program for its partners based upon pure XML, HTTP and URIs. eBay has long had an XML/HTTP/URI-style service. In this article I demonstrate that Google's choice was technologically poor, compared to that of eBay and Amazon. I will show that a Google API based on XML, HTTP and URIs can be simpler to use, more efficient, and more powerful... SOAP-based services are called 'Web Services' because their proponents wish to partake of the Web's success -- yet they don't build on its core technologies, URIs and HTTP. Somehow many have equated SOAP and Web Services but HTTP has been in Service on the Web for more than a decade now and has not yet hit its prime. One other question for you to ponder. If, in general, most deployed Web Services use XML Schema for definition of types, language-specific databinding techniques for interpretation of types and HTTP for the transport of data, then what exactly is SOAP doing? ... Back in the days before XML, when we used SGML, we enthusiasts were like Hobbits in the Shire, hearing tales of the distant wars between Wizards (tools vendors), Dragons (database vendors) and Armies (MIS departments). But we were relatively sheltered from them. Those days have long since passed. What we need to do is gather together a fellowship of like-minded Hobbits, Dwarves, Elves and men and go on a quest to educate the world about the limitations of SOAP-RPC interfaces. You can join the quest by signing the petition available, and if you don't mind registering for Google Groups, you can also participate in the thread discussing this topic. Make sure to spread the topic to other mailing lists, weblogs and discussion forums. XML users have a special understanding of the possibilities of the marriage of hyperlinks, XML and the Web. We have a responsibility to go out of the Shire and educate the other residents of Middle Earth. Google is like the wizard Saruman, a benign and powerful force inexplicably turned from the path of virtue. Nevertheless, I am confident that it can be won back. The white robe awaits the delivery of Google API beta 3 with strong support for URI-addressed XML delivered over HTTP." See also Paul's "Questioning the Google API."

  • [April 25, 2002] "Kicking Out the Cuckoo." By Edd Dumbill. From XML.com. April 24, 2002. ['In his <taglines/> column, Edd Dumbill argues that the W3C is not really the appropriate place for the continuing development of web services technology, which does not seem that concerned about alignment with the Web itself. Web services are a distraction from the true business of developing the Web, and the W3C should stop wasting resources on their development.'] "The genesis of web services technology began with Microsoft and, to a lesser extent, IBM; and the leadership of this movement lies still very much with these organizations. When SOAP first emerged, developers were distinctly wary of a Microsoft technology that seemed to want to own the Web. Microsoft, for its part, would clearly benefit from widespread acceptance of SOAP. Hence the move to the only other party in town at the time, the W3C, made enormous sense: Microsoft gets the stamp of 'open standard' on its technology, and other companies get a forum in which to ensure they can have their turn steering the bandwagon. Cut to the present. SOAP, as specified by Microsoft and IBM, is deployed widely, to the point where its proponents refer to the fact that it is part of 'mission critical' systems. SOAP as per the W3C is still struggling. One and a half years on, the troubled and oversized XML Protocol Working Group is gearing up to release Last Call Working Drafts that stand a significant chance of being rejected by W3C members, thus facing delay. Meanwhile, things start getting out of hand in the world of consortia. Encouraged no doubt by the success of UDDI -- not in producing useful technology, but in press-ganging companies into its membership -- the WS-I organization sprang into being. Spearheaded by Microsoft, IBM and BEA, the Web Services Interoperability Organization is in the business of promoting the interoperability of web services by 'blessing' collections of specifications into profiles. At launch the WS-I avoided direct confrontation with the W3C by stressing a commitment to interoperability testing and creation of profiles. Of course, this makes complete sense. With over 100 member companies, the WS-I could not practically design new, competing, technology. The real effect of the WS-I is to remove the W3C's imprimatur of authority over web services... It has become painfully obvious that accommodating the development of web services technology at the W3C makes little sense. Not only has SOAP's development been a distraction of attention and resources from developing parts of the Web and XML infrastructure, it thumbs its nose at the W3C's other work. SOAP as commonly practiced offers nothing in the way of compatibility with HTML, XSLT, RDF, XLink, or SVG. In addition, it is responsible for the promotion of non-standard URNs (e.g., urn:GoogleSearch), collapses all the resources of an endpoint behind one URI, and uses HTTP in a way that most experts think is highly inappropriate. The 'web' in 'web services' is contemptible. The W3C's attempts in January 2002 to respond to the larger need of web services by creating a Web Service Architecture group now look futile and redundant in the face of the WS-I, established one month later, which will essentially define web service architecture by blessing its component specifications. There can be no doubt that the web service movement is the cuckoo in the W3C's nest. Sooner or later, one of the two will kick the other out..."

  • [April 25, 2002] "When to Use GET?" By Leigh Dodds. From XML.com. April 24, 2002. ['Dipping into the W3C TAG discussions once more, the XML-Deviant reports on the ongoing debate over the correct way to use the HTTP GET method.'] "In his recent XML-Deviant column ('TAG Watch') Kendall Clark examined the current issue list of the W3C TAG [Technical Architecture Group], which includes ' whenToUseGet-7', the resolution of which will help clarify when to use HTTP GET rather than POST. According to the HTTP specification GET is said to have two characteristics: it is safe and idempotent. POST, however, is neither idempotent or safe. A digression into the subtleties of defining these terms could fill a column in itself and has certainly been the topic of a number of mailing list discussions. Safe, for example, should be considered in terms of user accountability rather than side-effects. Retrieving a web page using GET might cause a hit counter to be updated -- clearly a side-effect -- but because the user did not specifically request this change they cannot be held accountable for it's occurrence. An idempotent operation is one that can be carried out repeatedly with the same results. This might lead one to believe that retrieving a dynamic web page is not an idempotent operation -- the page may alter at each request -- yet the page is consistently returning the current state of the requested resource so can be said to have the same result. In resolving this issue the TAG aims to lend its support to the HTTP 1.1 specification, thereby ensuring that web developers are correctly working within the constraints of the Web's architecture. Dan Connolly is currently assigned the job of writing up the TAG recommendations, a draft of which, when circulated publicly this week, sparked another debate... Having highlighted two benefits of conformance to web architecture could bring, the TAG debate has since moved on to suggestions about how the issue might be put to bed. David Orchard seems to be the lone voice on the TAG protesting the findings, but there is obviously a strong swell of support from the web services community. Liaisons between the groups will certainly be a contributing factor toward finding a resolution, while at a technical level a range of options -- from a HTTP GET binding for SOAP to revisiting the entire SOAP specification -- have been suggested..."

  • [April 25, 2002] "Government and Finance Industry Urge Caution on XML." By Alan Kotok. From XML.com. April 24, 2002. ['Away from the turbulence of the web services debate, Alan Kotok has been following recent reports on XML from two important US institutions. The US General Account Office, and NACHA, and electronic payments organization, have both urged their constituents to move cautiously on any commitment to XML.'] "On 5 April 2002, the XML world received a double-dose of sobering news, as reports from both the U.S. General Accounting Office and NACHA, an electronic payments organization, urged their constituents to move cautiously on any commitment to XML. Both reports cite XML's bright promise but express concerns about its stability. Nonetheless, recent events suggest the industry has begun to get the message and started addressing these concerns. The General Accounting Office (GAO) is the U.S. government's auditor and watchdog body. NACHA, which used to be known as the National Automated Clearing House Association, develops standards and best practices in electronic payments and claims to represent some 12,000 financial institutions through its regional affiliates. While the two reports look at different industries and each takes a different focus, they both came to similar conclusions about the state of standards, namely that a single set of business standards needs to emerge to provide businesses with a sense of stability that will encourage investment in the technology. While this conclusion may be accurate in some respects, it does not tell the whole story. The XML standards bodies have recognized for some time the need for stability and interoperability in business standards..." See other details in: (1) "US General Accounting Office Releases XML Interoperability Report"; (2) "XML Formatted Remittance Data in the ACH: A Feasibility Assessment."

  • [April 25, 2002] "Strange Transformations." By John Simpson. From XML.com. April 24, 2002. ['John Simpson deals with issues of handling CDATA and the tensions between outputting HTML vs. XHTML from your XSLT transformations.'] Answers (1) "Can I un-CDATA my CDATA section?" and (2) "I keep losing a trailing space inside my empty-element tags... [what to do?]"

  • [April 25, 2002] "Baltimore to Release SelectAccess 5.0 with SAML." By Sam Costello. In InfoWorld (April 24, 2002). "Baltimore Technologies will announce version 5.0 of its SelectAccess Web access management product on Monday, a release that includes easier configuration, better reporting and support for the SAML (Security Assertions Markup Language) standard. The addition of SAML to the product is perhaps the most important new feature in version 5.0. SAML is an emerging Web standard that should allow different Web access management products to interoperate and exchange security, authentication and permission information about users... Version 5.0 of SelectAccess simplifies the process of adding new users and components to a system, allows user information to be drawn from different directories simultaneously, offers deeper reporting and alerting options and adds support for the authentication of wireless users, she said. The new version of the software allows administrators to more easily and quickly deploy new SelectAccess components by storing configuration details in an LDAP (Lightweight Directory Access Protocol) directory, she said. That configuration data can then be automatically applied to new components -- such as servers and directories -- as they are added to a network, speeding installation of the new component. The new feature also cuts down on the time needed to upgrade configurations, as the new configuration can be created once and then published to all affected components, she said. SelectAccess 5.0 also allows information about users and policies to be extracted from different LDAP directories at the same time, according to Fai. This feature is needed as companies may use separate directories for different groups of users, she said. The new software also offers administrators more detailed and searchable reports, allowing them to be viewed by date, server, user, administrator and other criteria, she said. Administrators can also be notified of events in SelectAccess in more ways in version 5.0, with SNMP (Simple Network Management Protocol) and pager forwarding options, Fai said. Alerts can also be sent to trigger other events, rather than immediately alerting an administrator... Users of WAP (Wireless Application Protocol) devices are also supported in SelectAccess 5.0, she said. Another new protocol supported by the software is SAML, an emerging standard for Web access management products that will allow authentication and access control data to be handed off among Web access management products, she said. SAML support will help SelectAccess users extend Web single-sign-on capabilities beyond their corporate boundaries to partners who may not be using the same Web access management software... Despite the impending ratification, other details still need to be worked out among Web access management vendors. Those include how the data about access control will be described, he said. As as result, initial SAML deployments are likely to offer only a single sign-on to a variety of Web resources, rather than the full capability of the standard..." See (1) "Committee Specification Level Documents for the Security Assertion Markup Language (SAML)"; (2) general references in "Security Assertion Markup Language (SAML)"; (3) the announcement, "Baltimore Introduces the First Commercially Available Implementation of SAML-based Services for Online Partnerships with SelectAccess 5.0. SelectAccess 5.0 Eases Administration, Extends Usability, and Leverages Existing IT Investment."

  • [April 25, 2002] "Challenges for Service Providers When Importing Metadata in Digital Libraries." By Marilyn McClelland, David McArthur, Sarah Giersch (CollegisEduprise, Morrisville); and Gary Geisler (University of North Carolina at Chapel Hill). In D-Lib Magazine Volume 8 Number 4 (April 2002). ISSN: 1082-9873. "Much of the usefulness of digital libraries lies in their ability to provide services for data from distributed repositories, and many research projects are investigating frameworks for interoperability. In this paper, we report on the experiences and lessons learned by iLumina after importing IMS metadata. iLumina utilizes the IMS metadata specification, which allows for a rich set of metadata (Dublin Core has a simpler metadata scheme that can be mapped onto a subset of the IMS metadata). Our experiences identify questions regarding intellectual property rights for metadata, protocols for enriched metadata, and tips for designing metadata services... Much work has been done in the digital library community to facilitate the exchange of metadata between digital libraries. Specifications like Dublin Core, IEEE Learning Object Metadata (LOM), or IMS have been developed for tagging metadata in a standard XML format (IMS Specification). Frameworks are being developed to provide interoperability for digital libraries with many using the protocol for metadata harvesting developed by the Open Archives Initiative (OAI). For the discussion here, we consider a simple, high-level view of the interaction of metadata entities which illustrates the logical separation of data providers and service providers to distinguish the different roles in handling metadata... The iLumina database schema was developed using an entity relationship diagram for the first version of the IMS metadata specification. After mapping the fields in the XML instance of the imported data to the database schema, shortfalls in the database schema came to light. The original design of the iLumina database overlooked datatypes defined in the LOM specification, like langstring and vocabulary. This did not cause problems while all metadata was being created by iLumina catalogers, as the iLumina cataloging is done in English and the iLumina vocabularies are known. The import of others' data highlighted the omission of the datatypes in the schema for persistence of the metadata and the limitations of the current design for future use. Another problem is that we had not made provisions for storing non-IMS metadata. For example, Dublin Core can be mapped into IMS, so iLumina could store DC metadata, but with our current model, the XML generated for export would be in IMS format. One solution is to store the intact XML document in a single field in the database. This appears to be the policy that the National Science, Technology, Engineering and Mathematics Digital Library (NSDL) is evolving for sharing and transacting metadata across its collections, since it now plans to retain native metadata and to map it into the common NSDL schema, based on Dublin Core. Storing the XML instance insures our ability to roundtrip the original document and eliminates the challenge of mapping various XML schemas into the iLumina database schema. Lesson 4: 'Maintain the original XML instance of imported data to preserve all mappings and to be able to roundtrip the original.'..." See: (1) See "IMS Metadata Specification"; (2) "Open Archives Metadata Set (OAMS)."

  • [April 25, 2002] "Metadata Principles and Practicalities." By Erik Duval (Katholieke Universiteit Leuven, Belgium), Wayne Hodgins (Strategic Futurist, Autodesk), Stuart Sutton (Associate Professor, The Information School, University of Washington), and Stuart L. Weibel (Executive Director, Dublin Core Metadata Initiative). In D-Lib Magazine Volume 8 Number 4 (April 2002). ISSN: 1082-9873. "The rapid changes in the means of information access occasioned by the emergence of the World Wide Web have spawned an upheaval in the means of describing and managing information resources. Metadata is a primary tool in this work, and an important link in the value chain of knowledge economies. Yet there is much confusion about how metadata should be integrated into information systems. How is it to be created or extended? Who will manage it? How can it be used and exchanged? Whence comes its authority? Can different metadata standards be used together in a given environment? These and related questions motivate this paper. The authors hope to make explicit the strong foundations of agreement shared by two prominent metadata Initiatives: the Dublin Core Metadata Initiative (DCMI) and the Institute for Electrical and Electronics Engineers (IEEE) Learning Object Metadata (LOM) Working Group. This agreement emerged from a joint metadata taskforce meeting in Ottawa in August, 2001. By elucidating shared principles and practicalities of metadata, we hope to raise the level of understanding among our respective (and shared) constituents, so that all stakeholders can move forward more decisively to address their respective problems. The ideas in this paper are divided into two categories. Principles are those concepts judged to be common to all domains of metadata and which might inform the design of any metadata schema or application. Practicalities are the rules of thumb, constraints, and infrastructure issues that emerge from bringing theory into practice in the form of useful and sustainable systems... XML markup, while still a small part of the total markup on the Web, is the idiom of choice for the encoding and exchange of structured data. The XML namespace facility provides structural capabilities that HTML lacks, making it easier to achieve the principles of modularity and extensibility. The XML Schema specification defines a schema language that allows for the specification of application profiles that will increase the prospects for interoperability. The Resource Description Framework (RDF) promises an architecture for Web metadata and has been advanced as the primary enabling infrastructure of the Semantic Web activity in the W3C. Designed to support the reuse and exchange of vocabularies, RDF is an additional layer on top of XML that is intended to simplify the reuse of vocabulary terms across namespaces. Most RDF deployment to date has been experimental, though there are significant applications emerging in the world of commerce (Adobe's deployment of their XMP standard which is based on RDF). The IEEE Learning Object Metadata standard provides an example of how this critical need for independence between the semantics of metadata and their syntactical representation can be addressed. LOM will be what is known as a 'multi-part standard' where the semantic data model is an independent standard and then each syntactical representation is an independent standard developed as a specific 'binding' of the LOM Data Model standard. 
DCMI also provides recommendations on encoding of Dublin Core metadata in alternative encoding idioms..." See: (1) "Dublin Core Metadata Initiative (DCMI)"; (2) "IEEE LTSC XML Ad Hoc Group [Learning Object Metadata Working Group]."
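    The namespace-based modularity the authors describe is easiest to see in a concrete instance: elements from two vocabularies can coexist in one record, each name qualified by the URI of the element set it belongs to. A minimal sketch in Python (the Dublin Core namespace URI is the real one; the second vocabulary and the record wrapper are invented for illustration):

        import xml.etree.ElementTree as ET

        DC = "http://purl.org/dc/elements/1.1/"  # Dublin Core element set
        EX = "urn:example:lom-ish"               # hypothetical second vocabulary

        ET.register_namespace("dc", DC)
        ET.register_namespace("ex", EX)

        record = ET.Element("record")
        ET.SubElement(record, f"{{{DC}}}title").text = "Introductory Mechanics"
        ET.SubElement(record, f"{{{DC}}}creator").text = "A. Teacher"
        # An element from a different vocabulary coexists without any name clash:
        ET.SubElement(record, f"{{{EX}}}typicalLearningTime").text = "PT2H"

        print(ET.tostring(record, encoding="unicode"))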

  • [April 24, 2002] "Internationalized Resource Identifiers (IRI)." By Martin Dürst (W3C/Keio University) and Michel Suignard (Microsoft Corporation). IETF Network Working Group, Internet-Draft. Reference: 'draft-duerst-iri-00'. 24 pages. ['This document is a product of the Internationalization Working Group (I18N WG) of the World Wide Web Consortium (W3C).'] Abstract: "This document defines a new protocol element, the Internationalized Resource Identifier (IRI), as a complement to the URI (RFC2396). An IRI is a sequence of characters from the Universal Character Set. A mapping from IRIs to URIs is defined, which means that IRIs can be used instead of URIs where appropriate to identify resources. The approach of defining a new protocol element was chosen, instead of extending or changing the definition of URIs, to allow a clear distinction and to avoid incompatibilities with existing software. Guidelines for the use and deployment of IRIs in various protocols, formats, and software components that now deal with URIs are provided... As with URIs, an IRI is defined as a sequence of characters, not as a sequence of octets. This definition accommodates the fact that IRIs may be written on paper or read over the radio as well as being transmitted over the network. The same IRI may be represented as different sequences of octets in different protocols or documents if these protocols or documents use different character encodings and/or transfer encodings. Using the same character encoding as the containing protocol or document assures that the characters in the IRI can be handled (searched, converted, displayed,...) in the same way as the rest of the protocol or document... IRIs are defined similarly to URIs in RFC2396 (as modified by RFC2732), but the class of unreserved characters is extended by adding all the characters of the UCS (Universal Character Set, ISO10646) beyond U+0080, subject to the limitations given in Section 3. Otherwise, the syntax and use of components and reserved characters is the same as that in RFC2396. All the operations defined in RFC2396, such as the resolution of relative URIs, can be applied to IRIs by IRI-processing software in exactly the same way as this is done to URIs by URI-processing software. Characters outside the US-ASCII range MUST NOT be used for syntactical purposes such as to delimit components in newly defined schemes. As an example, it is not allowed to use U+00A2, CENT SIGN, as a delimiter, because it is in the 'iunreserved' category, in the same way as it is not possible to use - as a delimiter, because it is in the 'unreserved' category. Section 1 introduces concepts, definitions, and the scope of this specification. Section 2 discusses the IRI syntax and conversion between IRIs and URIs. Section 3 deals with limitations on characters appropriate for use in IRIs, and with processing of IRIs. Section 4 discusses software requirements for IRIs from an operational viewpoint..." [Motivation/justification:] "The characters in URIs are frequently used for representing words of natural languages. Such usage has many advantages: such URIs are easier to memorize, easier to interpret, easier to transcribe, easier to create, and easier to guess. For most languages other than English, however, the natural script uses characters other than A-Z. For many people, handling Latin characters is as difficult as handling the characters of other scripts is for people who use only the Latin alphabet. 
Many languages with non-Latin scripts do have transcriptions to Latin letters and such transcriptions are now often used in URIs, but they introduce additional ambiguities. The infrastructure for the appropriate handling of characters from local scripts is now widely deployed in local versions of operating system and application software. Software that can handle a wide variety of scripts and languages at the same time is increasingly widespread. Also, there are increasing numbers of protocols and formats that can carry a wide range of characters." See also: (1) "URIs and Other Identifiers"; and (2) "Internationalized Resource Identifiers: From Specification to Testing." [cache]
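    The IRI-to-URI mapping the draft defines is mechanical: each character from U+0080 upward is converted to its UTF-8 octets and each octet is percent-encoded, while ASCII characters (including any existing %XX escapes) pass through unchanged. A minimal sketch, assuming the input is already a syntactically valid IRI:

        def iri_to_uri(iri: str) -> str:
            # Percent-encode the UTF-8 octets of every non-ASCII character;
            # leave the ASCII range untouched.
            return "".join(
                ch if ord(ch) < 0x80
                else "".join("%{:02X}".format(b) for b in ch.encode("utf-8"))
                for ch in iri
            )

        print(iri_to_uri("http://example.org/r\u00e9sum\u00e9"))
        # -> http://example.org/r%C3%A9sum%C3%A9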

  • [April 24, 2002] "[Adobe and DRM]." By Bill Rosenblatt. In DRM Watch Update. The Adobe PDF format "got a boost in the market for DRM-controlled academic content with launches of several new content delivery services. One involves a partnership between OverDrive and Netherlands-based Kluwer Academic Publishers, whereby OverDrive will run an eBookstore based on Adobe Content Server that sells Kluwer titles at ebooks.kluweronline.com. In the other, Ebrary announced partnerships with three information providers to make its academic and literary content, also in PDF format, available in Europe, Asia, Latin America, and Australia." [Analysis:] "These two announcements provide further evidence of Adobe's head start over Microsoft and other eBook technology providers as the format of choice for paid content in the academic market. Because of its "captive audience" properties, the academic market is a landmark early adopter market for DRM-based content delivery. Therefore, success in that market is an important stepping-stone to mass market acceptance for DRM technology..." See: "XML and Digital Rights Management (DRM)."

  • [April 24, 2002] "Database Future Debated." By Paul Krill. In InfoWorld (April 24, 2002). "Whether the future of databases is the traditional, relational and SQL model with XML technologies incorporated into it or a new XML-based model is a matter of debate, according to panelists during a session Tuesday [2002-04-23] at the Software Development Conference & Expo. The fate of XML and SQL dominated the discussion, which featured officials from companies such as Oracle, Sun Microsystems, and IBM. 'I think that XML will become the dominant format for data interchange,' with its flexibility and ability to provide self-description,' said Don Chamberlin, a database technology researcher at IBM. Relational databases, he said, will be fitted with front ends to support XML and process queries based on the XQuery standard... Sun's Rick Cattell, a distinguished engineer at the company, had a less dominant outlook for XML, saying very few people are going to store XQuery data in an XML format. 'I think the momentum behind relational databases is insurmountable,' Cattell said, adding that he was drawing on his experience with object-oriented databases, which were unable to unseat relational databases in enterprise IT shops. Developers, Cattell said, will need tools to convert relational data to XML and vice versa. Another panelist, Daniela Florescu, chief technology officer at XQrl, said she was 'pretty optimistic [about] the performance of XML databases.' Documents will be stored natively in XML, she said. XQrl offers a version of the XQuery XML query language. Currently, performance on the Web is hindered because of translations between Java and XML data formats, Florescu said. 'I don't think we will have good performance as long as we have people marshalling data from XML to Java and back,' Florescu said... Panelists also touched on topic such as tuple space technology, which is intended to make it easier to store and fetch data by recognizing patterns. Tuple space technology is 'interesting, but I wouldn't predict that it's going to take over the world,' since much more research needs to be done and most people are not building production applications based on it, Cattell said. Cattell also said in-memory database technology is a 'no-brainer,' but there is not enough memory available yet to accommodate it... Panelist Jim Melton, consulting member of the technical staff at Oracle, said he is part of a vendor group called SQLX that has been working for a year to define ways to bring SQL and XML closer together. The group in mid-2003 plans to publish a specification called SQL/XML, which will contain publishing functions for the two formats..." See: (1) "XML and Databases"; (2) "Tuple Spaces and XML Spaces."

  • [April 24, 2002] "UDDI 2.0 Provides Ties That Bind." By Timothy Dyck. In eWEEK (April 22, 2002). "With one version's worth of experience under its belt, the UDDI design team has gained valuable insight into the things it missed in UDDI 1.0. UDDI 2.0 takes important steps forward, while 3.0 [...] should really hit its stride... The UDDI 2.0 specification, which was released last June, makes two major changes over UDDI 1.0. First, Version 2.0 provides a way for multiple entities (multiple UDDI BusinessEntity objects) in a UDDI directory to link themselves in a hierarchy or in a horizontal, point-to-point chain. This is especially important for large organizations with many divisions or subsidiaries that will register themselves in a UDDI directory... Second, UDDI 2.0 adds a mechanism for those querying a UDDI directory to verify that the statements an organization makes about itself are true. Claims about categorization (the type of industry a business is in, using standardized industry taxonomies) or identity (any unique identifiers, such as a Dun & Bradstreet D-U-N-S number) can now be defined as checked taxonomies. UDDI 2.0 directories pass information in checked taxonomies through a verification process (left unspecified by the standard) before allowing the entries to be registered. Four organizations -- Hewlett-Packard Co., IBM, Microsoft Corp. and SAP AG -- are hosting beta implementations of public UDDI directories based on UDDI 2.0. As the uddi.org site warns, these registries are for testing and prototyping UDDI 2.0 only, and data stored in them may be lost at any time. IBM and Microsoft continue to maintain production-status UDDI 1.0 registries..." See: "Universal Description, Discovery, and Integration (UDDI)."

  • [April 24, 2002] "A Dynamic Index: Reinventing the Online Help in XML." By Benjamin Guralnik. From MSDN, 'Extreme XML'. April 03, 2002. ['This month's installment of Extreme XML is written by guest columnist Benjamin Guralnik. Benjamin lives in Jerusalem, Israel and is an independent developer specializing in XML/XSLT. He has been a regular contributor to the online community surrounding Microsoft XML products. Benjamin has been pioneering XML-oriented information systems since 1999 when he began working on the award-winning SAS Interactive Handbook, an innovative help aide that reorganized SAS System's native help into a compact and useful interface.'] "While earlier help systems may have worked because they contained less information or fewer topics, the newer help systems are turning remarkably counter-productive in spite of the fact that they incorporate much more information than their prototypes did. The main reason that these combined resources fail to work is that the increasing volume of information is still managed using outdated and inadequate indexing and organization tools that are as ineffective on large sets of documents as they were successful on small ones. Thus, while the information itself is helpful, it is frequently buried by hundreds of irrelevant hits returned by the search engine or by the bewildering Table of Contents (TOC). Could this situation be fixed? You'll have to read the article to find out what role XML can play in solving this problem... The conceptual innovations of the Dynamic Index address both help authors and end-users alike. The users, first and foremost, get a truly versatile aide to work with; one that can show them the material from several different points of view. Help authors, on the other hand, are now set free from the dull task of arranging the hierarchies manually. Moving forward, it allows help authors to focus on the accessibility of their manuals, namely by developing specialized views for different profiles of users, questions, and so on. Moreover, updating and revising the content becomes much easier because the whole process is automated. It is these innovations that I believe will allow us to start seeing comprehensive, XML-powered help systems..."

  • [April 23, 2002] "West Wing Meets Web Services." 'Commentary Box' by Dan Farber. In ZDNet AnchorDesk (April 23, 2002). "... [companies are] angling to establish intellectual property rights and patents that would give them an advantage over competitors. They have the legal and engineering resources, as well as the economic incentive, to shift the balance of power. After all, the giant companies become giants by developing proprietary solutions that often include barriers to entry. Altruism is not a quality that would describe the builders of commercial or digital solutions in any market sector. Yet, the evolution of the Internet at its core levels requires a kind of altruism and cooperation to ensure continued innovation that favours the whole and not the few. Like the telephone or airline systems, there are many points along which value for services can be extracted to benefit the supplier and the customer. The question is at what point in a stack of protocols and standards does value extraction conflict with creating viable options for customers and impede innovation? [...] The entire issue is best summed up by comments made this week by Daniel Weitzner of the W3C before the Department of Justice and Federal Trade Commission: 'W3C, its Members, and many in the independent software developer community around the world who have contributed to the growth of the Web, have spent the last year in active discussion about the proper relationship between patents and Web standards. Our debate is not yet concluded, but we have learned together quite a bit about how important the tacit royalty-free licensing environment of the Web to-date has been for the development of extraordinary economic and social value that has been generated by the World Wide Web. Our commitment is to find an approach the insures the Web's growth into the future as a vibrant engine of technical innovation, economic productivity and social growth. Above all, we will find a solution that provides for the continued universality of the Web as an information medium, and avoids uses of intellectual property rights that could lead to Balkanisation of the Web'."

  • [April 19, 2002] "Assertions and Protocol for the OASIS Security Assertion Markup Language (SAML)." Edited by Phillip Hallam-Baker (VeriSign) and Eve Maler (Sun Microsystems). For the OASIS XML-Based Security Services Technical Committee (SSTC). Maturity level: Committee Specification. Publication date: 19-April-2002. 54 pages. One of the 'final' SAML 1.0 Committee Specification "00" documents. Posted by Eve Maler. "This specification defines the syntax and semantics for XML-encoded SAML assertions, protocol requests, and protocol responses. These constructs are typically embedded in other structures for transport, such as HTTP form POSTs and XML-encoded SOAP messages. The SAML specification for bindings and profiles provides frameworks for this embedding and transport. Files containing just the SAML assertion schema and protocol schema are available... The Security Assertion Markup Language (SAML) is an XML-based framework for exchanging security information. This security information is expressed in the form of assertions about subjects, where a subject is an entity (either human or computer) that has an identity in some security domain. A typical example of a subject is a person, identified by his or her email address in a particular Internet DNS domain. Assertions can convey information about authentication acts performed by subjects, attributes of subjects, and authorization decisions about whether subjects are allowed to access certain resources. Assertions are represented as XML constructs and have a nested structure, whereby a single assertion might contain several different internal statements about authentication, authorization, and attributes. Note that assertions containing authentication statements merely describe acts of authentication that happened previously. Assertions are issued by SAML authorities, namely, authentication authorities, attribute authorities, and policy decision points. SAML defines a protocol by which clients can request assertions from SAML authorities and get a response from them. This protocol, consisting of XML-based request and response message formats, can be bound to many different underlying communications and transport protocols; SAML currently defines one binding, to SOAP over HTTP. SAML authorities can use various sources of information, such as external policy stores and assertions that were received as input in requests, in creating their responses. Thus, while clients always consume assertions, SAML authorities can be both producers and consumers of assertions..." The new SAML Committee Specification deliverables include: (1) SAML Assertions and Protocol, with separate XML Assertion Schema and XML Protocol Schema; (2) SAML Bindings and Profiles; (3) SAML Security and Privacy Considerations [non-normative]; (4) SAML Conformance Program Specification; (5) SAML Glossary. See details in the news item. See general references in: "Security Assertion Markup Language (SAML)."

  • [April 19, 2002] "CSS3 Module: Color." W3C Working Draft 18-April-2002. W3C Last Call Working Draft; comments may be sent until May 17, 2002. Edited by Tantek Çelik (Microsoft Corporation) and Chris Lilley (W3C). With contributions from additional authors: Steven Pemberton (CWI) and Brad Pettit (Microsoft Corporation). Latest version URL: http://www.w3.org/TR/css3-color. "CSS (Cascading Style Sheets) is a language for describing the rendering of HTML and XML documents on screen, on paper, in speech, etc. To color elements in a document, it uses color related properties and respective values. This draft describes the properties and values that are proposed for CSS level 3. It includes and extends them from properties and values of CSS level 2... This document is a draft of one of the 'modules' for the upcoming CSS3 specification. As a set of modules, CSS3 is divided up and profiled in order to simplify the specification, and to allow implementors the flexibility of supporting the particular modules appropriate for their implementations. This module describes CSS properties which allow authors to specify the foreground color and opacity of an element. Additional properties allow specification of the ICC color profile and rendering intent of image content. This module also describes in detail the CSS <color> value type. The Working Group doesn't expect that all implementations of CSS3 will implement all properties or values. Instead, there will probably be a small number of variants of CSS3, so-called 'profiles'. For example, it may be that only the profile for 32-bit color user agents will include all of the proposed color related properties and values..." [As to Profiles:] "Each specification using CSS3 Color must define the subset of CSS3 Color features it allows and excludes, and describe the local meaning of all the components of that subset." See "W3C Cascading Style Sheets."

  • [April 19, 2002] "The 'application/ssml+xml' Media Type." By Steph Tryphonas and Brad Porter (Tellme Networks Inc.). IETF Internet-Draft. Reference: 'draft-tryphonas-ssml-media-reg-00.txt'. March 11, 2002 ['This document defines the 'application/ssml+xml' media type for the Speech Synthesis based markup language'] "Speech Synthesis Markup Language is an XML based markup language for assisting the generation of synthetic speech in web and other applications. The essential role of the markup language is to give authors of synthesizable content a standard way to control aspects of speech output such as pronunciation, volume, pitch, rate and etc. across different synthesis-capable platforms. Feedback or discussion about this draft should be directed to the Voice Browser Working Group public mailing list, www-voice@w3.org with archives at http://lists.w3.org/Archives/Public/www-voice/. Registration of MIME media type application/ssml+xml: MIME media type name: application; MIME subtype name: ssml+xml; File extension: .ssml. See: (1) general references in "W3C Speech Synthesis Markup Language Specification"; (2) news item "W3C Publishes New Speech Synthesis Markup Language Specification." [cache]

  • [April 19, 2002] "Describing and Retrieving Photos Using RDF and HTTP." By Yves Lafon and Bert Bos (W3C). W3C Note 19-April-2002. Latest version URL: http://www.w3.org/TR/photo-rdf. Previous version URL: http://www.w3.org/TR/2000/NOTE-photo-rdf-20000928 This note describes a project for describing and retrieving (digitized) photos with (RDF) metadata. It describes the RDF schemas, a data-entry program for quickly entering metadata for large numbers of photos, a way to serve the photos and the metadata over HTTP, and some suggestions for search methods to retrieve photos based on their descriptions. The data-entry program has been implemented in Java, a specific Jigsaw frame has been done to retrieve the RDF from the image through HTTP. The RDF schema uses the Dublin Core schema as well as additional schemas for technical data. We already have a demo site and there is sample source code available for download. The system can be useful for collections of holiday snapshots as well as for more ambitious photo collections..." Compare "DIG35: Metadata Standard for Digital Images."

  • [April 19, 2002] "Privacy and XML, Part I." By Paul Madsen and Carlisle Adams. From XML.com. April 17, 2002. ['How XML-based initiatives are addressing online privacy, Part One. Paul Madsen and Carlisle Adams give an overview of the issues surrounding online privacy, and also introduce important concepts in privacy. The next article, to be published next week, will investigate the various XML initiatives in this area, and how they link to the concepts explored this week.'] "The widespread uptake of e-commerce has been stalled as much by the inability of businesses to guarantee the privacy preferences of their customers for the personal data entrusted to them as by any other single factor. Of those who are connected but do not purchase online -- which is over half of all Internet users -- over half say their reluctance is due to fear that their personal information will be stolen or misused. In a sense, XML, through the smart data transfer it enables, contributes to the problem. However, a number of XML-based efforts are emerging that offer solutions to some of the major technology issues for privacy... . Microsoft has promised that they will neither use Passport data in this way themselves, nor sell it to others. An organization wishing to participate in a Liberty community will necessarily make the same commitment. .NET My Services will make the user's information available through a published XML API; Microsoft is calling this the 'XML Message Interfaces' (XMI). XMI will simplify for application developers both the retrieval of this information and its integration into their applications (browser-based and non-browser-based). The initial .NET My Services roll-out will include core services like .Net Profile (nicknames, picture, etc.) and .Net Calendar (time and task management), each of which will have an appropriately defined XML Schema. Passport, .NET My Services, the Liberty Alliance, and similar architectures built around Web protocols and services have heightened public awareness of privacy issues with regard to the Internet..."

  • [April 19, 2002] "Hot and Fresh Technology for the Enterprise." By Antoine Quint. From XML.com. April 17, 2002. ['This month's SVG column examines the potential of SVG in the enterprise. In a report on the state of server-side and thin-client SVG, Antoine Quint rests our weary brains from coding, and explains how SVG is being used at an "enterprise" scale.'] "This month I'm taking a break from flooding you with heaps of technical SVG tricks in order to reflect on how SVG has been progressing. In my first article here five months ago I discussed where we were then with SVG. I focused on the application area I was most familiar with, end-user graphics. The situation since then has evolved as the community is still awaiting an SVG-enabled animation authoring tool from a major vendor, as well as counting the months until an SVG viewer achieves complete ubiquity (like Adobe's)... The term 'enterprise graphics' may sound a little barbarous, but it covers professional usage of graphics for business applications like gantt charts, graphs, maps, workflow monitoring, etc. While on the Web SVG faces a very fierce competitor format for high-end interactive graphics (SWF), it is being accepted in a cosier way in the enterprise where no vector-based technology has yet made its mark. Today I'll try to show you how SVG is used in enterprise applications, and how it ranks in usage and power against other client-side technologies like DHTML and SWF. One of the key elements of enterprise graphics solutions is to be able to meld data and graphic presentation together in a seamless way. There are a variety of tools out there which allow this on a variety of platforms. Lately, two graphics giants have come up with new technologies for dynamic serving of high-end graphics. Corel has announced its DEEPWHITE content management system, with a strong focus on providing graphically-enriched content. Adobe has been shipping its AlterCast product for a few months now as a server-side imaging solution tightly integrated with Adobe's applicative workflow. The common point in those two offerings is the strong implication of SVG. XML makes it easy to separate data and rendering. SVG, as an XML technology, also offers that great advantage. This makes SVG a great candidate as a format for use in a graphics framework... SVG is an XML application and greatly benefits from the XML experience of programmers who get to use this technology. SVG is fully compatible with XML standards such as the DOM, and anyone with some DOM experience should be able to construct SVG graphics from a data set. Also, XML is an ubiquitous technology in the application server world, now pushed more than ever by Web Services trends. On top of that, we have seen that major industry players such as Adobe and Corel are pushing SVG-centric server-side graphic frameworks. Finally, SVG offers a real platform on the client-side with Core, CSS, Events, and SVG DOM available for scripting. This offers great tools for drawing, transforming, filtering applications in a completely open, standalone and dynamic way; and all of this with the same DOM technology many developers are familiar with. You can create a complete visualization system on the client-side with SVG without having to query once the server..." Note the book SVG Essentials: Producing Scalable Vector Graphics with XML, by J. David Eisenberg.

  • [April 19, 2002] "Vertical Market Uses of SVG: Agriculture." By Russel Gum. Noted on the W3C SVG website: Radio Weblog now has a SVG via Radio page. Russel uses SVG in a distributed agricultural software system; see the example. The W3C site references several other SVG tools and demonstrations. See for example ILOG's dynamically updated SVG weather map.

  • [April 19, 2002] "Perl and XML on the Command Line." By Kip Hampton. From XML.com. April 17, 2002. ['Kip demonstrates some nifty tricks, including populating a database from an XML document, finding the differences between two documents, and pretty printing a document.'] "Over the last several months we have explored some the of ways that Perl's XML modules can by used to create complex, modern Web publishing systems. Also, the growing success of projects like AxKit, Bricolage, and others shows the combination of Perl and XML to be quite capable for creating large-scale applications. However, in looking at more conceptual topics here recently, together with the fact that the Perl/XML combination is often seen in complex systems, seems to give the impression to the larger Perl community that processing XML with Perl tools is somehow complex and only worth the effort for big projects. The truth is that putting Perl's XML processing facilities to work is no harder than using any other part of Perl; and if the applications that feature Perl/XML in a visible way are complex, it is because the problems that those applications are designed to solve are complex. To drive this point home, this month we will get back to our Perlish roots by examining how Perl can be used on the command line to perform a range of common XML tasks. For our first few examples we will focus on those modules that ship with command line tools as part of their distributions..." See: "XML and Perl."

  • [April 18, 2002] "Google Web API." By Rael Dornfest. From O'Reilly Network (April 11, 2002). "By exposing its cache of over 2 billion Web pages via simple Web services, the Google Web API is a breath of fresh air in a specification-dense yet implementation-sparse arena... Google's arrival at the Open Services experimentation party finds them in good company. Userland's Radio Userland is a wellspring of DIY Web Services bootstrapping. Jabber-RPC transports XML-RPC messages over the Jabber instant messaging framework. Watson provides a stunning example of putting a GUI front-end on Web Services. My own Meerkat Open Wire Services provides open URL-line and XML-RPC interfaces which have reaped some unintented yet wonderful uses... what the Google Web API offers is a SOAP (Simple Object Access Protocol) interface to searching Google's index, accessing info and Web pages from its cache, and checking the spelling of words (even proper names) from the comfort of Google.com's standard search syntax. A freely downloadable Developer's Kit contains: (1) A complete API reference describing the semantics of method calls and fields; (2) Sample SOAP request and response messages; (3) Google Web API WSDL file; (4) A Java library, example program, and Javadoc documentation; (5) A sample .NET program; (6) A simple SOAP::Lite-based Perl script; (7) README, Licensing, and so forth..."

  • [April 18, 2002] "Adobe Pulls FrameMaker Into the XML Limelight. [Cross-media Publishing: New Tricks for an Old Dog.]" By Mark Walter. In Seybold Report: Analyzing Publishing Technology [ISSN: 1533-9211] Volume 2, Number 2 (April 22, 2002). ['The v7 upgrade -- the first in two years -- is notable not only for reviving Frame as a contender for XML authoring but, perhaps more importantly, for introducing FrameMaker as a legitimate server product for cross-media publishing.'] "In this new version, Adobe promises to not only bring the product up to date, but it also has merged the structured and unstructured versions of the product into a single application, so that users can choose the mode in which they want to work... We should point out that FrameMaker's structured mode is not designed for casual use. Adobe has provided an 'XML Cookbook' to guide new users, but its 110 pages illustrate how arcane Frame's SGML and XML configuration process is compared with those of the native XML editors. Mastering the detailed process is worthwhile when you have an editorial team collaborating on similar documents that follow a consistent DTD -- especially if you want to provide a WYSIWYG print view within the editing environment. For occasional use, a native XML editor would be a better choice... The growing demand for XML-based publishing requires configurable tools for structured editing to be coupled with robust tools for cross-media production. FrameMaker 7.0 delivers both of these capabilities in a single desktop package, though the structured-editing features require considerable setup before getting started. Once configured, FrameMaker enables an editorial team to see a print production context even as they work within an XML structure. It is those who want such a print-oriented interface that we think will find FrameMaker 7 attractive compared with Arbortext and SoftQuad. It is at the server level, though, that we're most intrigued by FrameMaker 7's potential. Lacking an XML- and server-enabled version of XPress (which is what designers would like), this product fills a gap in the market for an inexpensive composition server that can be hooked up to Web content-management systems, one that gives companies a tool with which to build solutions for automated multiple-channel delivery. See details and references in the news item: "Adobe Announces Enhanced XML Authoring Support in FrameMaker Version 7.0."

  • [April 18, 2002] "The Road Ahead: Problems Facing the Publishing Industry Today. [Technology Challenges 2002: What Needs to be Solved. Industry Analysis.]" By George Alexander, Peter Dyson, John Parsons, and Mark Walter. In Seybold Report: Analyzing Publishing Technology [ISSN: 1533-9211] Volume 2, Number 2 (April 22, 2002). ['What are the unresolved technology challenges publishing faces today? Is it overcoming barriers to variable-data printing adoption? Rethinking our approach to digital rights management? Deciding which printing technology to use in the future? Are a host of new tools and technologies required . . . or does the industry need to instead focus on developing standards? Inside we identify the publishing industry's most pressing unsolved problems and suggest what needs to be done to solve them.'] "JDF: Printing-plant automation is the second strategy by which printers might survive, although it is far from a sure thing. JDF allows the RIPs, presses, folders, cutters, etc., to record their process results in the job-ticket file, as well as to read instructions from it. Second, it prescribes a standard and extensible format (XML) for this data. That doesn't sound like a big deal, but it is. By putting all the job information into a single format, the information generated at one stage of the production process can be available at other stages. In theory, that could affect job scheduling, inventory replenishment, customer invoicing, employee incentives and the plant's ability to keep statistics... PDF: is 'just a language'; properly used, it has the potential to eliminate errors from a key step of the printing process: the hand-off from the designer to the manufacturer. Strangely, this is being done by making the language less powerful -- that is, by placing constraints on what can be said in the language... Transition to Web-first workflows: Newspapers are fortunate because they have a set of vendors who have recognized the need for change and are introducing editorial systems that marry an XML database to an editing and production environment that takes into account both the printed paper and its companion Web site. Vendors, such as Seinet, Unisys, DTI and others, acknowledge that both media matter to the customer, and they are developing technology accordingly. Once this technology takes hold, business models, not content management, will be newspapers' key concern... DRM: Despite its reliance on cryptography and other esoteric disciplines, the challenge of digital rights management (DRM) isn't about bringing new technology to bear. There's plenty of technology available. The real problem is that today's DRM products offer no apparent user benefits. Because the focus is on locking up content, users perceive such products (rightly, we think) as impeding the full enjoyment of the content they've paid for..." OpEd Note: this "think piece" article is good, as most Seybold Report articles are. I think they slipped in reflecting a notion presumed to be axiomatic in DRM: for-pay content (see "the content they've paid for" above). The real problem with DRM is not "impeding full enjoyment" of reader-pays content, but that DRM models are focused upon protecting the interests of commercial middlemen not the interests of the two principal actors in eBook technology -- the author and the reader. Claims that DRM "protects authors and helps them share in the revenue" may sound plausible, but as we know from the major labels, such arguments carry little conviction. 
References in: "XML and Digital Rights Management (DRM)."

  • [April 18, 2002] "Enterprise Content Integration II: Savantech's Photon Commerce. [Savantech Joins Content Integration Market. Asset Management.]" By Bill Rosenblatt. In Seybold Report: Analyzing Publishing Technology [ISSN: 1533-9211] Volume 2, Number 2 (April 22, 2002). ['Joining Context Media's Interchange Platform in the market for Enterprise Content Integration is Photon Commerce from Savantech. Photon Commerce is a highly flexible "master control center" for a large publisher's enterprise-wide content-management, packaging and distribution functions.'] "Enterprise Content Integration (ECI) is a new type of approach that is predicated on the assumption that, to a large extent, publishing enterprises already have content-management, DAM [Digital Asset Management] or editorial-workflow systems, file servers and other types of content-storing systems deployed at the level of departments, brands or business units. ECI provides a way to tie these systems together without uprooting or disrupting them. ECI systems provide two basic types of functionality: metadata-catalog management and content distribution... Through his experience with Xerox, Dr. Prasad Ram (Savantech's CEO) saw that DRM was only a thin slice of the larger problem that media companies really needed to solve: content aggregation and distribution. As formats and delivery services proliferated, the complexity of getting the right content to the right user or distribution partner at the right time has increased geometrically. Ram and his colleagues saw an opportunity to define an architecture and toolset for helping media companies grapple with this complexity across the enterprise. Savantech's newest product, Photon Commerce, is a digital distribution management solution; it supports metadata catalog management, content packaging, and content distribution management... Photon Commerce can read metadata from three primary types of sources: XML files, relational (ODBC-compliant) databases, and DAM systems (through their proprietary APIs). It maps the metadata schemas inherent in these sources to one common metadata schema, which can be of the customer's choosing (e.g., a book publisher could use ONIX). It stores the metadata internally in a relational database, along with an XML DTD to describe the metadata schema. Metadata fields can be marked as 'indexable,' meaning that the fields are used to identify content items; 'viewable,' meaning that they show up in query results; and 'queryable,' meaning that content items can be searched and retrieved using those fields... We feel that the more important disadvantage of [other] syndication solutions [primarily based on the ICE (Information and Content Exchange) protocol] -- especially the stand-alone products -- is that they provide no consistent interface to a media company's content on the back end; that is, they provide no equivalent of DAM or a metadata catalog. The combination of the metadata catalog with distribution-management functionality in products like Photon Commerce and Interchange Platform is quite attractive..."

  • [April 18, 2002] "Web Services Body Adds 50 Members." By Wylie Wong. In CNET News.com (April 17, 2002). "AT&T, Cisco Systems, and Procter & Gamble are among the latest companies to join a key Web services standards organization aimed at promoting the emerging technology. The Web Services Interoperability Organization (WS-I) said Thursday that it has gained 50 new members, bringing its total membership to more than 100 companies. The organization's stated goal is to promote Web services and ensure that software from various technology makers is compatible. The group, created by IBM and Microsoft in February, is meeting for the first time this week in San Francisco to sketch out its objectives. In the next few years, analysts expect Web services to gain in popularity as a more efficient way to build software. Web services allows businesses with different computing systems to more easily interact and conduct transactions. But analysts and technology buyers warn that Web services won't catch on industrywide unless there is a widely accepted way of linking systems. In a two-day meeting that began Wednesday, organizers say, the WS-I has come up with three initial tasks and has created teams, or 'working groups,' to accomplish them by this fall, said Norbert Mikula, Intel's director of Web services technology and co-chair of WS-I's marketing committee. One team will identify the key specifications for Web services, including Extensible Markup Language (XML), the Simple Object Access Protocol (SOAP), Web Services Description Language (WSDL), and Universal Description, Discovery and Integration (UDDI). It also will create the guidelines for using them to ensure compatibility, Mikula said. Those four specifications have emerged as key to making Web services work across multiple computing systems. To understand how these protocols work together, imagine an ordinary phone call. In Web services parlance, XML represents the conversation, SOAP describes the rules for how to call someone, and UDDI is the phone book. WSDL describes what the phone call is about and how you can participate... Still missing from the WS-I's list of supporters is Sun Microsystems... new members joining the WS-I include Hitachi, Unisys, Ascential Software, Bowstreet, Corel and SilverStream Software. Companies that originally joined included Compaq Computer, DaimlerChrysler, United Airlines and VeriSign..." See the announcement: "WS-I Announces Formation of Key Working Groups; Membership Grows to More Than 100. 100+ Member Community Meets for Initial Working Group Planning Sessions. AT&T, Proctor & Gamble, Sabre and Others Join Industry Effort." References: "Web Services Interoperability Organization (WS-I)."

  • [April 18, 2002] "Expand XSL with Extensions. Technique Helps Expand the Capabilities of XSL's Core Features." By Jared Jackson (Research Associate, IBM). From IBM developerWorks, XML Zone. ['The combined power of XML and XSL for representing, manipulating, and presenting data over the Web and sharing data across differing applications has been clearly demonstrated through the fast acceptance and broad usage of these technologies. Still, most developers familiar with the basics of XML and XSL are not utilizing this power fully. This article shows developers how to use extensions, a technique that allows you to expand the capabilities of XSL.'] "In terms both of power and simplicity, the combination of XML and XSL has revolutionized data storage and manipulation in a way not seen since the early days of the SQL database language. XML provides a clear and independent way of recoding data that is easily shared and understood. Similarly, many people feel that XSL is also easy to read, write, and understand. Clearly, this powerful duo are essential knowledge for everyone involved in the technology industry. The broad scope and small learning curve associated with the basic elements of XSL transformation sometimes acts as a double-edged sword -- yielding broad usage of the core technology but dissuading the majority of developers learning XSL from investigating and using its more advanced and powerful features. This article is written for developers who already have a basic understanding of XML and XSL, and are ready to build on this knowledge. If you are unfamiliar with these technologies, you can find several good introductory articles and tutorials on developerWorks and other Web sites. The article shows you how to use extensions -- a technique present in most XSL processors -- which allows virtually unlimited expansion of the existing capabilities of XSL's core features. This article includes a general description of how to write extensions with code, followed by three specific and widely applicable examples..." For related resources, see "Extensible Stylesheet Language (XSL/XSLT)."

  • [April 18, 2002] "The W3C Needs to Be Royalty Free." By Jim Bell (Hewlett-Packard). In ZDNet Tech Update (April 12, 2002). "Currently a fiery debate is raging concerning whether standards from the World Wide Web Consortium (W3C) should be royalty free or whether some might require patent royalties. The latter approach, referred to as RAND (Reasonable And Non-Discriminatory), is supported by companies such as IBM. Other companies, such as Hewlett-Packard, feel passionately that free, open, technically excellent standards have been the foundation for the Web's success in the past, and that they will be equally fundamental in the future. In particular, we feel that the ability to use W3C standards without charge will be essential to their quick acceptance and to their universal acceptance. Why do we have such a strong preference for royalty-free licensing over RAND licensing for W3C standards? Patent policies for standards organizations may rationally vary based on organizations' specific circumstances and technologies, but in the case of W3C, we view the RAND licensing alternative as fundamentally flawed for reasons including the following: [...] For Hewlett-Packard and many others, including the vast majority of the more than two thousand people who have written to W3C to share their views, the conclusion is clear: To enable the Web to extend its meteoric growth in capabilities, acceptance and beneficial impact, W3C standards must remain royalty free now and in the future." Note: This story originally ran on CNET's News.com on October 12, 2001; it was republished on April 12, 2002 as "historical context" for an April 2002 discussion about legally encumbered standards in the WWW and Web Services arena, including a report of an IBM patent on ebXML-related specifications. See for example the following bibliographic references, and "IBM, Microsoft Plot Net Takeover."

  • [April 18, 2002] "Will Patent Disputes Spoil the Web's Success?" By Adrian Mello. In ZDNet Tech Update (April 17, 2002). "... open standards -- as we have come to know them -- are under threat. That's because some technology vendors are trying to corral parts of the Web that most people believe are openly available standards by filing for patents and then twisting the arms of other companies to pay up or recognize those patents... Patent filings and disputes may play the role of spoiler as vendors like Microsoft, IBM, Sun, and others simultaneously attempt to promote standards while competing for commercial advantage..." The article cites for ewxample British Telecommunications' claims relative to hyperlinking; Unisys patent on GIF (apparently to expire 2003); Microsoft's patent on 'style sheets for publishing system'; Intermind's claims relative to P3P (apparently now resolved in favor of W3C specification clear and free); UFIL Unified Data Technologies' claims pertaining to RDF, etc. On the broader topic of completely botched US patent law, see "Patent Turns Playtime Into Pay Time," citing a "7-year-old Minnesota boy [who] has patented a method for swinging side to side, meaning he could conceivably take playmates to court if they try his new trick without permission. U.S. Patent #6,368,227, issued April 9, describes a method for swinging 'in which a user positioned on a standard swing suspended by two chains from a substantially horizontal tree branch induces side-to-side motion by pulling alternately on one chain and then the other'..." [might be a spoof, but many readers have been perfectly willing to believe the story, as it's so very much in line with the tradition of USPTO's preposterous judgments in awarding software patents]

  • [April 18, 2002] "IBM Drops Internet Patent Bombshell." By David Berlind. In ZDNet Tech Update (April 16, 2002). "A recent IBM patent claim could threaten royalty-free access to a key Internet standard protocol backed by the United Nations. The standard -- called ebXML -- is an XML-based set of definitions for electronic transactions and business collaboration. IBM's patent claim was made in an intellectual property disclosure filed in late March with the Organization for the Advancement of Structured Information Standards (OASIS). Executives from both the United Nations and OASIS said they expected the ebXML specification to be royalty-free and unencumbered by patent claims. Both said they were surprised by the sudden appearance of the disclosure. According to IBM's disclosure statement, the company has one patent and one patent application that it believes are relevant to compliance with ebXML's Collaboration Protocol Profiles (CPPs) and Collaboration Protocol Agreements (CPAs) specifications..." Note: a subsequent communiqué from Martin W. Sachs of IBM cited IBM's official response: that "IBM will not charge royalties for either of the two patents mentioned in the declaration..." A statement from James Bryce Clark of the ebXML Joint Coordinating Committee references the memo of Bob Sutor (IBM) and says the committee expects "to be able to reach a satisfactory resolution." See also the detailed references and the following article by WWong.

  • [April 18, 2002] "IBM ebXML Patent Plan Royalty-Free." By Wylie Wong. In ZDNet Tech Update (April 18, 2002). ['Big Blue says it will not seek royalties on a patented e-commerce Web standard that helps companies in many industries communicate over the Internet.'] "At issue is a Web standard called Electronic Business XML, or ebXML, which allows companies in many industries to communicate over the Web. It was a standard created by a United Nations organization and by the Organization for the Advancement of Structured Information Standards, or OASIS, a consortium of tech companies that includes IBM, Sun Microsystems, BEA Systems and Hewlett-Packard. IBM's patent claim came to light as part of an intellectual property disclosure filed with OASIS in late March. An IBM representative on Thursday said the company owns one patent in the ebXML standard and has another patent pending. But the company has decided not to charge royalties on the patents... An IBM executive last fall said the company has historically worked in both royalty-free and RAND environments, and chooses to use one or the other on a case-by-case basis, depending on the technologies..." See details in the dedicated section.

  • [April 18, 2002] UK GCL (Government Category List) Version 1.0. Part of the UK e-Gif and e-GMF projects. 11 pages. "Making it easier to find information is a key aim of Information Age Government, and is addressed in part by the e-Government Metadata Framework (e-GMF). The Government Category List (GCL) is a list of headings for use with the Subject element of the e-Government Metadata Standard (e-GMS). It will be seen in applications such as UK Online. Subject metatags drawn from the GCL will make it straightforward for website managers to present their resources in a directory structure using the GCL headings... The citizen who wants to find out about a service or aspect of policy faces a daunting challenge, especially when he does not know which department, authority or agency to approach. The sheer quantity of information available has the effect of hiding the nugget sought. Websites and portals like UK Online already offer access to a broad range of public sector information and services. Implementing the directory structure of the GCL will help people navigate through the ocean of resources to the pool where they can find what they want... The number of levels is a compromise between enabling precision and not confusing the user with too much detail. The total number of headings in the scheme is around 350, including 12 at the top level and about 120 at the second level. There are more levels only where there is greater need to discriminate between subjects... A separate Index of over 1400 entries has been prepared to help authors and administrators when meta-tagging. The Index includes many lead-in entries (for example, 'Abattoirs See Meat and livestock industries') to help people find the right heading. It also shows scope notes and cross-references between headings (for example, 'Arms control See also Firearms')..." Version 1.0 of the GCL was issued in January 2002. A 'GCL First Update' was issued in March 2002; a full updated version is due in May 2002. See also the GCL Index (Version 1.0, January 2002) and the UK e-Government Metadata Framework. General references: "e-Government Interoperability Framework (e-GIF)."

  • [April 17, 2002] "Schema Centric XML Canonicalization." By Selim Aissi (Intel), Bob Atkinson (Microsoft), and Maryann Hondo (IBM). Published by UDDI.org. "Copyright (c) 2000-2002 by Accenture, Ariba, Inc., Commerce One, Inc., Compaq Computer Corporation, Fujitsu Limited, Hewlett-Packard Company, i2 Technologies, Inc., Intel Corporation, International Business Machines Corporation, Oracle Corporation, SAP AG, Sun Microsystems, Inc., VeriSign, Inc., and / or Microsoft Corporation." Version 1.0. Working Draft 13-February-2002. An editors' working draft copy circulated for general review, comment, and feedback. Version URL: http://www.uddi.org/pubs/SchemaCentricCanonicalization-20020213.htm. Latest version URL: http://www.uddi.org/pubs/SchemaCentricCanonicalization.htm. "Existing XML canonicalization algorithms such as Canonical XML and Exclusive XML Canonicalization suffer from several limitations and design artifacts (enumerated herein) which significantly limit their utility in many XML applications, particularly those which validate and process XML data according to the rules of and flexibilities afforded by XML Schema. The Schema Centric Canonicalization algorithm addresses these concerns... The Schema Centric Canonicalization algorithm is intended to be complementary in a hand-in-glove manner to the processing of XML documents as carried out by the assessment of schema validity by XML Schema, canonicalizing its input XML instance with respect to all those representational liberties which are permitted thereunder. Moreover, the specification of Schema Centric Canonicalization heavily exploits the details and specification of the XML Schema validity-assessment algorithm itself. In XML Schema, the analysis of an XML instance document requires that the document be modeled at the abstract level of an information set as defined in the XML Information Set recommendation. Briefly, an XML document's information set consists of a number of information items connected in a graph. An information item is an abstract description of some part of an XML document: each information item has a set of associated named properties... Properties on each of these items, for example the [children] property of element information items, connect together items of different types in an intuitive and straightforward way. The representation of an XML document as an infoset lies in contrast to its representation as a node-set as defined in XPath. The two notions are conceptually quite similar, but they are not isomorphic. For a given node-set it is possible to construct a semantically equivalent infoset without loss of information; however, the converse is not generally possible. It is the infoset abstraction which is the foundation of XML Schema, and it is therefore the infoset abstraction we use here as the foundation on which to construct Schema Centric Canonicalization algorithm. The Schema Centric Canonicalization algorithm consists of a series of steps: creation of the input as an infoset, character model normalization, processing by XML-Schema assessment, additional infoset transformation, and serialization..." For schema description and references, see "XML Schemas." [cache]

  • [April 17, 2002] "RadioTV-NewsML 0.1." By the NSK RadioTV-NewsML Team. February 26, 2002. PDF format, 45 pages. Based upon the PPT presentation. The document surveys the design and use of the RadioTV-NewsML Standard based upon TV program distribution in Japan. See the 2002-04-17 news item "IPTC Develops RadioTV-NewsML Standard for Radio/TV Program Information."

  • [April 16, 2002] "E-Government Act of 2002." US Senate Governmental Affairs Committee. S.803, Amended 2002, substitute to the E-Government Act of 2001. Document reference: COE02.138. 86 pages. From Senator Joseph Lieberman, Senator Fred Thompson, and others. The original bill was entered May 1, 2001. The amended version was reported out of the Senate Governmental Affairs Committee on March 21, 2002. Excerpt: "Section 3602. Office of Electronic Government. (a) There is established in the Office of Management and Budget an Office of Electronic Government. (b) There shall be at the head of the Office an Administrator who shall be appointed by the President, by and with the advice and consent of the Senate. (c) The Administrator shall assist the Director in carrying out: (1) all functions under this chapter; (2) all of the functions assigned to the Director under title II of the E-Government Act of 2002; and (3) other electronic government initiatives, consistent with other statutes.... Subject to requirements of this chapter, the Administrator shall assist the Director by performing electronic Government functions as follows: [1-7] (8) Assist the Director in establishing policies which shall set the framework for information technology standards for the Federal Government under section 5131 of the Clinger-Cohen Act of 1996 (40 13 U.S.C. 1441), to be developed by the National Institute of Standards and Technology and promulgated by the Secretary of Commerce, taking into account, if appropriate, recommendations of the Chief Information Officers Council, experts, and interested parties from the private and nonprofit sectors and State, local, and tribal governments, and maximizing the use of commercial standards as appropriate, as follows: (A) Standards and guidelines for interconnectivity and interoperability as described under section 3504. (B) Standards and guidelines for categorizing Federal Government electronic information to enable efficient use of technologies, such as through the use of Extensible Markup Language. (C) Standards and guidelines for Federal Government computer system efficiency and security. Section 3603. Chief Information Officers Council. There is established in the executive branch a Chief Information Officers Council. (e) The Council shall perform the following functions: ... (5) Work as appropriate with the National Institute of Standards and Technology and the Administrator to develop recommendations on information technology standards developed under section 20 of the National Institute of Standards and Technology Act (15 U.S.C. 278g-3) and promulgated under section 5131 of the Clinger-Cohen Act of 1996 (40 13 U.S.C. 1441), as follows: (A) Standards and guidelines for interconnectivity and interoperability as described under section 3504. (B) Standards and guidelines for categorizing Federal Government electronic information to enable efficient use of technologies, such as through the use of Extensible Markup Language. (C) Standards and guidelines for Federal Government computer system efficiency and security..." See the three bibliographic items following. [cache]

  • [April 16, 2002] "XML Group Challenges GAO Report." By Patricia Daukantas. In Government Computer News (April 15, 2002). "The General Accounting Office this month said the lack of uniform, government-specific data structures could dim the Extensible Markup Language's promise of easily searchable, reusable content. GAO recommended the director of the Office of Management and Budget oversee federal planning for XML adoption... [Chuck Allen of] the HR-XML Consortium Inc. of Raleigh, N.C., attacked the report after its April 5 [2002] release, claiming that GAO did not consider the group's achievements. GAO warned that vendors are developing proprietary XML extensions that could eventually stop interoperability. It cited four other challenges... Marion Royal of the General Services Administration's Office of Governmentwide Policy said a repository of XML data structures is still in the pilot stage. Royal is co-chairman of the interagency XML Working Group, which was chartered by the CIO Council to promote use of the markup language. Lisa J. Carnahan, a staff member at the National Institute of Standards and Technology, leads the XML repository development effort. Agencies shouldn't go to the trouble of making XML tags for their own sake, Royal said, but rather should use existing tags if possible... In January, the XML Working Group distributed a draft list of federal-specific XML best practices. The draft, however, does not include a governmentwide vocabulary. The accounting office reviewed XML development standards and related agency challenges at the request of Sen. Joseph I. Lieberman (D-Conn.), chairman of the Senate Governmental Affairs Committee. The committee last month sent Lieberman's E-Government Act of 2001, S. 803, to the full Senate for consideration. The bill would establish an Office of Electronic Government at OMB to promote interagency collaboration on systems and standards. Owen Ambur, a Fish and Wildlife Service systems analyst who is co-chairman of the XML Working Group with Royal, said he found nothing to disagree with in the GAO report. Ambur said private-sector data elements, document type definitions and business process schemas are either undefined or redundantly and inconsistently defined. The working group wants to minimize that confusion within government, he said. The proposed XML repository will register data tags for inherently governmental XML projects. 'That's far and away the greatest value that the working group adds,' Ambur said. Whether agencies need specific XML guidance through legislation is questionable, he said, but certainly they need funding and guidelines. If Congress continues to fund stovepipe systems instead of interoperable ones, 'then they have no one to blame but themselves for the results,' Ambur said..."

  • [April 16, 2002] "Lieberman: Government Needs Plan For Adopting XML." By Gail Repsher Emery. In Washington Technology (April 10, 2002). "Sen. Joseph Lieberman, D-Conn., called on the Bush administration April 10, 2002 to develop a plan that would enable information sharing through governmentwide adoption of extensible markup language, or XML. XML facilitates data sharing by tagging information so it can be read easily by different computer systems. Lieberman, chairman of the Senate Governmental Affairs Committee, made the recommendation in response to a General Accounting Office report that analyzed the challenges of implementing XML... GAO recommended the Office of Management and Budget take the lead in addressing these and other issues that will improve the federal government's adoption of XML. The report's recommendations mirror requirements contained in S. 803, an e-government bill sponsored by Lieberman. 'We have at our fingertips technology that will greatly improve communication among the federal, state and local government agencies and among all Americans,' Lieberman said, 'but we will be unable to utilize it until a comprehensive strategy for its implementation is developed. I call on the administration to follow the recommendations of the GAO so we can take advantage of this important technology.' Provisions in the E-Government Act of 2002, introduced by Lieberman and Sen. Conrad Burns, R-Mont., would require better governmentwide management of XML initiatives. The legislation calls for the administrator of a new Office of Electronic Government, in coordination with the National Institute of Standards and Technology, to develop a policy framework and set standards for the implementation of XML. The bill was approved by the Governmental Affairs Committee March 21 [2002]..." See also "Never Mind The Title -- Get On With The Work," in InformationWeek.

  • [April 16, 2002] "Senate Resurrects E-Gov Bill." By Judi Hasson. In Federal Computer Week (March 25, 2002). "A Senate bill boosting the visibility and funding of key e-government initiatives passed another milestone last week when the Senate Governmental Affairs Committee gave its unanimous approval. The E-Government Act of 2002 is the result of months of negotiations among the panel's chairman, Sen. Joe Lieberman (D-Conn.); Sen. Fred Thompson (R-Tenn.), the committee's ranking member; and the Bush administration, which had opposed one of the bill's major components -- the appointment of a federal chief information officer. Instead, the bill calls for an administrator within a new office of Electronic Government at the Office of Management and Budget. The administrator, who would be confirmed by the Senate, would oversee a fund that provides $386 million during four years for e-government initiatives and related projects. Among its provisions, the bill requires each federal court to establish a Web site to post written opinions issued by the court in a free text-searchable format. It also requires a public directory of federal government Web sites and the development of standards for those sites... Highlights of the E-Government Act of 2002: (1) Provides $386 million, including $345 million for e-government projects over four years. (2) Creates an Office of Electronic Government within the Office of Management and Budget. (3) Appoints a Senate-confirmed administrator in charge of the office. (4) Provides $8 million for a federal bridge certification authority for digital signatures. (5) Provides $7 million for an information technology training center. (6) Asks for tighter privacy and security provisions..." See "E-Government Act of 2002" above for the XML connection. See also the announcement of 2002-03-21: "E-Government Bill Heads to Full Senate. Bill Would Improve Access of Citizens to Government Services and Information."

  • [April 15, 2002] "Across the Universe: [Review of] Virtuoso Universal Server 2.7." By Jon Udell. In InfoWorld (April 15, 2002). ['OpenLink Software's Virtuoso consolidates functions of a SQL database, an XML database, a Web-services tool kit, an application server, and an entire suite of Internet applications. Virtuoso delivers today what Microsoft's Yukon may deliver next year: integrated management of SQL and XML data in the context of a Web-services-aware application server.'] "In 1993 Microsoft first demonstrated Cairo, based on a prototype of its Object File System. The concept is slated to return, as Yukon, in 2003. Meanwhile, the industry hasn't been sitting on its thumbs. Database vendors have been busily converging the two major data-management disciplines: SQL and XML. One of the more forward-looking efforts is Virtuoso, from OpenLink Software. Virtuoso packs more technology into one product than seems possible. It was first sold as a virtual database -- that is, a stand-alone SQL engine that could also use (and extend) foreign data sources. It evolved into a Web application server and now, in Version 2.7, has become as complete an example of a universal server as you are likely to find. Virtuoso 2.7, available for Windows, Linux, Solaris, HP/UX, AIX, and Mac OS X, creates a profound synthesis of SQL and XML data management styles, and wraps Web-services bindings around both. The SQL engine at the core of the product can contain structured data, as well as semistructured data (i.e., XML) and unstructured data (files, images). There's also a tightly integrated WebDAV (Web Distributed Authoring and Versioning) datastore that offers hierarchical access to semistructured and unstructured data. Here's one example of how these pieces can fit together. A SQL query is defined, using the FOR XML clause to produce XML output. The query's results are routed through Virtuoso's built-in XSLT (Extensible Stylesheet Language Transformations) engine, transformed to HTML, and stored in a DAV folder. The HTML report is now available to DAV clients, or to browsers by way of Virtuoso's HTTP-based DAV interface. If the query is defined as real time then the result file will seem empty, because it's just a placeholder; the query will generate the report in real time. Alternatively the query can be defined as persistent. In this case the result file is created, and then refreshed automatically at an administrator-defined interval. The query can optionally produce metadata (either a Document Type Definition or an XML Schema definition) describing the structure of the output. Note that except for writing the SQL query, there's no programming required to achieve these effects. It's all done in Virtuoso's Web-based administration tool... Virtuoso can extend the set of functions available in XPath queries and XSLT (Extensible Stylesheet Language Transformation) stylesheets to include Virtuoso/PL procedures, which can be pure PL constructs or can call on other stored procedures and SOAP services. Imagine a SOAP-based stock-quote service made into an XSLT extension. A Web developer, writing a VSP application that pulls XML documents from the database, can augment ticker symbols with current prices by making SOAP calls directly from the XSLT stylesheet..."
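
[Illustrative sketch: the FOR XML half of the pipeline Udell describes is SQL, but the XSLT half might resemble the stylesheet below, which renders row-per-record query output as an HTML table. The root and row element names and the attribute names are assumptions modeled on typical FOR XML AUTO output, not Virtuoso's actual serialization.]

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="html"/>
      <!-- Render each row of the SQL query's XML output as a table row -->
      <xsl:template match="/">
        <html>
          <body>
            <table border="1">
              <xsl:for-each select="/root/row">
                <tr>
                  <td><xsl:value-of select="@customer"/></td>
                  <td><xsl:value-of select="@total"/></td>
                </tr>
              </xsl:for-each>
            </table>
          </body>
        </html>
      </xsl:template>
    </xsl:stylesheet>

Stored in a DAV folder by the server, the resulting HTML would then be reachable by any DAV client or browser, as the review describes.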

  • [April 15, 2002] "Special Delivery For XML." By Scott Tyler Shafer and Mark Jones. In InfoWorld (April 12, 2002). "Sensing an untapped opportunity, emerging networking vendors are developing XML switches capable of supporting latency-intolerant applications based on Web services. Also known as XML acceleration appliances, these devices are on the verge of release from companies including DataPower Technology, Sarvega Networks, and Forum Systems, and are designed to speed the secure delivery and translation of XML data. The moves come as XML data becomes increasingly prevalent, and the evolution of Web services standards signals a call for IT infrastructure to support the requirements of distributed, loosely coupled applications... Sarvega is developing its XPE Switch, capable of translating and switching XML data, and plans to launch the product at NetWorld+Interop in Las Vegas on May 6, 2002... Cambridge, Mass.-based DataPower is building a Layer 7 switch that translates and transfers XML data at wire speed. According to DataPower CEO Eugene Kuznetsov, the device resides next to a server, handling off-loaded XML translation functions. 'Making the infrastructure aware of XML traffic will be very important,' Kuznetsov said. 'XML acceleration is needed in the same way Web caching, load balancing, and SSL [Secure Sockets Layer] acceleration were before.' Meanwhile, Forum Systems, based in Salt Lake City, is approaching XML switching from the security perspective, focusing on XML parsing and cryptography... Companies such as Forum Systems are attempting to address the complexities brought about by the variety of XML schemas running through the network. In the case of a Web services-based application, an XML-aware switch examines incoming SOAP (Simple Object Access Protocol) envelopes and makes decisions based on the SOAP header, opens the data packets, transforms or encrypts the XML data into a format that can be understood by the network, and speeds it along..."

  • [April 15, 2002] "'Yukon' to Add Run-Time, XML Support." By Peter Galli and Matt Hicks. In eWEEK (April 15, 2002). "Microsoft Corp. is bringing its next SQL Server release, code-named Yukon, in line with its .Net and Web services vision, with new data unification and integration technology. Microsoft plans to embed .Net CLR (Common Language Runtime) in the SQL engine, as well as provide deeper and richer XML support. With .Net CLR integrated, developers will be able to write stored procedures in 23 languages in addition to Microsoft's T-SQL, said Stan Sorensen, director of SQL Server, at the company's TechEd conference here last week... Yukon will also provide technology that enables a better way to store and retrieve XML data of all kinds, which has been referred to internally as Storage+. Yukon will provide native XML support inside the SQL Server engine and support XQuery, a proposed standard query language for XML documents."

  • [April 12, 2002] "A High-level Description of a Draft Reference Model for ISO 13250 Topic Maps." By Steven R. Newcomb and Michel Biezunski (Co-editors, ISO/IEC 13250). Document reference: ISO/IEC JTC 1/SC34 N0298 Rev. 1. From ISO/IEC JTC 1/SC34, Information Technology -- Document Description and Processing Languages. April 04, 2002. [Note the status description: "Draft High-level Description of forthcoming Editor's Draft." A draft of this document was first prepared for the XML Europe 2002 Conference, Barcelona, Spain, May, 2002.] "The purpose of ontologies (sets of knowledge-bearing assertion types), taxonomies (classes of things, ideas, etc.), and vocabularies (such as markup vocabularies) is to allow the human members of communities of interest to communicate among themselves about the things with which their communities are concerned. However, knowledge that is represented in the terms of a specific community may have high value outside its community of origin. How can distinct bodies of knowledge, created and maintained by distinct, non-cooperating communities, be made usefully available in contexts other than their communities of origin? One answer is to merge them with other bodies of knowledge, in conformance with the Reference Model of ISO 13250 Topic Maps... In the draft Reference Model, a topic map is seen as a set of 'assertions', no more and no less. Each assertion asserts the existence of a strongly-typed relationship between some specific set of subjects of conversation. Each such subject is a 'role player' in the assertion; it plays a specific role in the relationship. The ontologies of Applications may include an unbounded number of kinds of assertions ('assertion types'). The roles, the role players, the assertions themselves, and the types of the assertions, are all regarded as subjects, and any of these features of an assertion can be role players in other assertions. Every topic map is a graph, and every assertion within a topic map is a subgraph within that graph. According to the draft Reference Model, graphs of topic maps consist of 'nodes' (also sometimes called 'vertices' or 'vertexes' or 'topics') and four distinct kinds of nondirectional 'arcs' or 'edges' that connect the nodes to one another. The Reference Model establishes a single graphic meta-structure for all assertions... The draft Reference Model is extremely simple: four arc types and two built-in assertion types. Each of these six constructs has a limited number of implications for implementations, and none of these implications prohibits the implementation of systems that distribute knowledge among peer servers that collectively and effectively behave as a single knowledge base. On the basis of the draft Reference Model, distributed systems that can do 'lazy' merging -- merging that is done on an ad hoc basis in order to meet an ephemeral user need -- can be created, even if they do not all implement the same Application." See also ISO/IEC JTC 1/SC34 N0298. See: "(XML) Topic Maps."
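
[Illustrative note: the draft Reference Model is more abstract than any interchange syntax, but a typed association in XTM 1.0 syntax gives the flavor of an 'assertion' -- a strongly-typed relationship among role players. The topic IDs are invented, and the <topic> definitions they reference are omitted for brevity.]

    <topicMap xmlns="http://www.topicmaps.org/xtm/1.0/"
              xmlns:xlink="http://www.w3.org/1999/xlink">
      <!-- Assertion: hamlet plays the role 'work' and shakespeare
           plays the role 'author' in a 'written-by' relationship -->
      <association>
        <instanceOf><topicRef xlink:href="#written-by"/></instanceOf>
        <member>
          <roleSpec><topicRef xlink:href="#work"/></roleSpec>
          <topicRef xlink:href="#hamlet"/>
        </member>
        <member>
          <roleSpec><topicRef xlink:href="#author"/></roleSpec>
          <topicRef xlink:href="#shakespeare"/>
        </member>
      </association>
    </topicMap>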

  • [April 12, 2002] "IBM, Microsoft Plot Net Takeover." By David Berlind, Editorial Director of ZDNet Tech Update. (April 11, 2002). ['Quietly, these two giants have been building a toll booth that could position them to collect royalties on most if not all Internet traffic.'] "While the technologies that form the foundation of that toll booth have yet to be officially recognized as standards by an independent standards body, the collective strength of IBM and Microsoft could be enough to render Internet standards consortia powerless to stop them. The potential for the two giants to erect a toll booth is tied to the likelihood that Web services protocols such as SOAP, WSDL, and UDDI -- and the related ones to which the two companies hold patents -- will one day be as important as the standard protocols (such as TCP/IP and HTTP) on which the Internet is based today... For the most part, standards-setting for the Internet and Web has taken place within the working groups of two organizations: the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C). Until recently, neither organization had maintained a policy requiring vendors to make the intellectual property (IP) they contribute to the standards setting process available on a royalty-free basis. According to W3C Patent Policy Working Group Chairman Danny Weitzner, 'Despite the lack of a policy, there has always been an understanding amongst the various contributors that the Internet and the Web wouldn't be possible or scalable unless their contributions were available to everyone on a royalty-free basis.' But that gentleman's agreement has been tested several times over the years and it could end up being tested again by Microsoft and IBM. According to documents on the W3C's Web site, IBM and Microsoft not only hold patents to specific Web services protocols, but also have no intentions of relinquishing their IP rights to those protocols should they become standards. The documents indicate that the two companies are currently maintaining their rights to pursue a reasonable and non-discriminatory (RAND) licensing framework as opposed to a royalty-free-based framework. The RAND framework is widely acknowledged as the one that keeps a vendor's options open in terms of being able to charge content developers and Internet users a royalty for usage of relevant intellectual property... Against the backdrop of the W3C's emerging plan to adopt a primarily royalty-free-based patent policy, the royalty-free vs. RAND controversy reached full boil last October when Hewlett-Packard withdrew its support as a sponsor of IBM and Microsoft's W3C WSDL submission on the basis that WSDL might not be royalty-free. According to a statement by HP's director of standards and industry initiatives and W3C advisory committee representative Jim Bell, 'HP resigned as a co-submitter of the otherwise excellent Web Services Description Language (WSDL) proposal to W3C solely because other authors refused to let that proposal be royalty free.' [...] In its coverage of the controversy, Linux Today's version of Bell's statement includes a statement from W3C director Tim Berners-Lee declaring Web services protocols like WSDL to be common infrastructure protocols to which the royalty-free licensing framework should apply... [W3C's] Weitzner also acknowledges that, in addition to HP, Apple and Sun are wholeheartedly behind the royalty-free movement too. 
According to Sun's Manager of XML Industry Initiatives Simon Nicholson, 'Anyone should be able to use the specifications that define the Internet infrastructure without charge. We believe the best route to ensuring this is that such specs be licensed under royalty free terms.' Sun backed that position up when it relinquished a set of IP rights it had -- a move that cleared the way for the royalty-free use of the W3C standard for XLink..." OpEd Note: This 4-part article by Berlind uses a sharp sword and a sharp pen, so we may hope that he has the facts straight. Surely the number of villains is greater than two. Part of my weekend reading includes Lawrence Lessig's The Future of Ideas: The Fate of the Commons in a Connected World, after which I expect to have more educated sensibilities on this topic of patents and digital rights. In the meantime, I share Berlind's concern that our freedom of access to information is economically threatened. The MPEG-4 debacle, not to mention the pernicious CBDTPA legislation, is but the tip of the iceberg; arguably, matters are getting worse with MPEG-21 (REL), and with the growing momentum behind companies advocating a monolithic "single standard" for DRM that covers multiple business domains with the promise of RAND royalties for "services" transactions, including health care information access, financial services, and all forms of news publication. For the moment, I cannot see how RAND protects anyone except patent owners. We are fortunate to have one model of leadership (W3C) which has taken the high ground, albeit a difficult route, in defending a Royalty Free standards development environment; we may hope that others begin to read, and to come to their senses. Attendees at the O'Reilly Emerging Technology Conference will not want to miss the keynote and special session on "The Future of Ideas", addressing "the future of innovation in a time when commercial and governmental interests are exercising their control over plumbing, software, content, and patent laws to impede competition." - rcc

  • [April 12, 2002] "Beyond W3C XML Schema." By Will Provost. From XML.com. April 10, 2002. ['XSLT has proven to be a very successful technology, and has moved beyond the relatively narrow scope that drove its design. Even James Clark, XSLT's primary designer, never imagined the many uses XSLT would find. In "Beyond W3C XML Schema" Will Provost demonstrates that there are some aspects of document validation beyond the capabilities of W3C XML Schema. He describes the use of XPath with XSLT for reaching into documents and checking those constraints that can't be enforced with a schema.'] "The XML developer who needs to validate documents as part of application flow may choose to begin by writing W3C XML Schema for those documents. This is natural enough, but W3C XML Schema is only one part of the validation story. In this article, we will discover a multiple-stage validation process that begins with schema validation, but also uses XPath and XSLT to assert constraints on document content that are too complex or otherwise inappropriate for W3C XML Schema. We can think of a schema as both expressive and prescriptive: it describes the intended structure and interpretation of a type of document, and in the same breath it spells out constraints on legal content. There is a bias toward the expressive, though: W3C XML Schema emphasizes "content models", which are good at defining document structure but insufficient to describe many constraint patterns. This is where XPath and XSLT come in: we'll see that a transformation-based approach will let us assert many useful constraints and is in many ways a better fit to the validation problem. (In fact, one might define schema validation as no more than a special kind of transformation; see the paper of van der Vlist.) We'll begin by looking at some common constraint patterns that W3C XML Schema does not support very well and then develop a transformation-based approach to solving them... XPath and XSLT can form a second line of defense against invalid data. The value of this second stage in the validation architecture will be judged by what it can do that W3C XML Schema cannot. Here's a short list of constraint patterns XPath can express well. (1) Structural analysis of the tree as a whole; (2) Weakly-typed designs; (3) Finer control over use of subtypes -- say base types A and B are associated but subtype A2 should only see instances of B2, not B1 or B3, etc.; (4) Single values based on numeric or string calculation -- a number that must be a multiple of three, a string that must list values in a certain order; (5) Relationships between legal single values -- a checksum over a long list of values, or a rule limiting the total number of occurrences of a common token; (6) Constraints that span multiple documents -- for instance a dynamic enumeration where the legal values are listed in a second document, and so cannot be hardcoded into a schema... We've discovered a multi-stage validation architecture based entirely on W3C-standardized technology. Out in the world, another popular transformation-based approach is Schematron, an open source tool which specifies constraint definitions in its own language. Its vocabulary simplifies the XSLT structure shown in the previous section and relies on XPath for its constraint expressions. It also allows for both 'positive' and 'negative' assertions. 
The big difference is that a Schematron schema must be pre-compiled, or 'pre-transformed' if you will, into a validating stylesheet, which once created is the true counterpart to the pure-XSLT transformations used here..." For schema description and references, see "XML Schemas."
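
[Illustrative sketch in the spirit of the article: a second-stage validator for constraint pattern 4 (a number that must be a multiple of three), written as plain XSLT 1.0. The lotSize element name is invented; a real stylesheet would carry one such template per assertion, much as a compiled Schematron schema does.]

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="text"/>
      <!-- Fire only on lotSize elements that violate the constraint -->
      <xsl:template match="lotSize[. mod 3 != 0]">
        <xsl:text>Constraint violation: lotSize </xsl:text>
        <xsl:value-of select="."/>
        <xsl:text> is not a multiple of three&#10;</xsl:text>
      </xsl:template>
      <!-- Suppress the default copying of text nodes so the output
           contains nothing but violation reports -->
      <xsl:template match="text()"/>
    </xsl:stylesheet>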

  • [April 12, 2002] "What's New in XSLT 2.0?" By Evan Lenz. From XML.com. April 10, 2002. ['Interest is strong in the W3C's development of XSLT 2.0. In a follow-up to his previous article on XPath 2.0, Evan Lenz describes the powerful new features proposed by the W3C for XSLT 2.0, which bring more convenient solutions to many of the bugbears involved with working with XSLT 1.0.'] "In my previous article, I went through a brief overview of some of the features that XPath 2.0 offers over and above XPath 1.0. We saw that XPath 2.0 represents a significant increase in functionality for XSLT users. In this article, we'll take a look at some of the new features specific to XSLT 2.0, as outlined in the latest working draft. Again, this assumes that you are familiar with the basics of XSLT/XPath 1.0. XSLT 2.0 goes hand in hand with XPath 2.0. The two languages are specified separately and have separate requirements documents only because XPath 2.0 is also meant to be used in contexts other than XSLT, such as XQuery 1.0. But for the purposes of XSLT users, the two are linked together. You can't use XPath 2.0 with XSLT 1.0 or XPath 1.0 with XSLT 2.0. At least, the W3C is not currently proposing any such combination... XSLT 2.0 includes a number of other useful features that we won't go into in detail here. They include a mechanism for defining a default namespace for XPath expressions, the ability to use variables in match pattern predicates, named sort specifications, the ability to read external files as unparsed text, and so on. In addition, a large part of the XSLT 2.0 specification remains to be written, particularly the material dealing with the construction and copying of W3C XML Schema-typed content... For those of you who can't wait to start trying some of this stuff out, Michael Kay has released Saxon 7.0, which includes an 'experimental implementation of XSLT 2.0 and XPath 2.0'. It implements a number of features in the XSLT 2.0 and XPath 2.0 working drafts, with particular attention to those features that are likely the most stable. I've tested each of the examples in this article, and Saxon 7.0 executes them all as expected..." For related resources, see "Extensible Stylesheet Language (XSL/XSLT)."
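
[Illustrative sketch: one of the features the excerpt mentions, reading an external file as unparsed text, written with the function names that eventually stabilized in XSLT 2.0 (unparsed-text() and tokenize()). The 2002 working draft differed in details, so Saxon 7.0 would not necessarily have accepted this exact form.]

    <xsl:stylesheet version="2.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="xml" indent="yes"/>
      <!-- Pull a plain-text file into the transformation and wrap
           each of its lines in an element -->
      <xsl:template match="/">
        <lines>
          <xsl:for-each select="tokenize(unparsed-text('data.txt'), '\r?\n')">
            <line><xsl:value-of select="."/></line>
          </xsl:for-each>
        </lines>
      </xsl:template>
    </xsl:stylesheet>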

  • [April 12, 2002] "IBM Beefs Up Web Services Toolkit." By Ed Scannell. In InfoWorld (April 12, 2002). "Wasting little time after Thursday's announcement of a Web security specification it is co-developing with Microsoft and VeriSign, IBM on Friday said that it is adding two new security features to the 3.1 version of its Web Services Toolkit. The new additions include an implementation of the SOAP Security Token and the Digital Signature components of the newly announced WS-Security specification. The SOAP Security Token indicates the name, identity, credentials, and capabilities of a user sending a message. This sort of technology can be useful to Web services providers in those situations where they must support users with different authentication mechanisms, according to IBM officials. It also makes it possible for Web services providers to incorporate additional security features into Web services applications over time... The WS-Security specification is an effort to offer guidance to users hoping to build secure and more broadly interoperable Web services. Besides SOAP (Simple Object Access Protocol), the functions within Version 3.1 of the toolkit are based on WSDL, WS-Inspection, and UDDI (Universal Description, Discovery, and Integration). These functions can work with Windows XP, Windows 2000, and Linux..." See the news item: "IBM Web Services Toolkit Supports the WS-Security Specification."

  • [April 12, 2002] "Microsoft Plays XML Politics." By Brian Fonseca and Ed Scannell. In InfoWorld (April 12, 2002). "Microsoft moved to put two smoldering business problems to rest this week with the demise of its My Services initiative for consumers and the signing of a Web services security pact with IBM and VeriSign. The elimination of Microsoft's My Services offering for consumers -- considered a potential competitive threat to its existing enterprise customers -- frees Microsoft to more aggressively market its XML technologies. Microsoft is expected to repackage the technology used in My Services to create the equivalent of an XML application server that will be tightly integrated with Windows XP servers. Microsoft officials said that some time next year, developers and corporate users will be able to take all or some of .Net My Services and related infrastructure products and create new services that, for instance, would allow them to tie together data from multiple sources both inside and outside the firewall... The demise of My Services for consumers also cleared the path for Microsoft to embrace a proposed Web services security protocol put forth by IBM and VeriSign and in so doing made it more likely that Microsoft technologies will be integrated into the forthcoming Project Liberty authentication effort that is being driven by IBM, Sun, Nokia, and a host of other vendors along with enterprise customers such as General Motors, MasterCard, AOL Time Warner, NTT Docomo, and Sony... IBM and Microsoft are pushing their own standards implementations that may run into interference with 'open' standards groups such as the W3C (World Wide Web Consortium) and IETF (Internet Engineering Task Force). But companies such as Sun are wrestling with their positions on emerging standards bodies, including IBM's WS-I (Web Services Interoperability Organization), which are driven largely by vendors..."

  • [April 12, 2002] "Speaking Up for Interop." Editorial, by Dan Ruby. In XML & Web Services Magazine Volume 3, Number 3 (April/May 2002), page 11. "As rhetoric about the interoperability of Web services heats up, watch out for the forces of vendor lock-in. The software platform companies are not as open as they might want you to believe. You couldn't tell that last month when the whole lot of them -- well, all but Sun -- came out politely to support the Web Services Interoperability Organization (WS-I). The point was to be seen saying the right things about interoperability but not to acknowledge the fierce competitive pressures militating against it. As we've seen before on such occasions, Microsoft and IBM shared the role of cruise-ship organizer. So far, the sailing has been smooth. The base Web services protocols have remained mostly unified, with the result that everyone's servers will be able to exchange SOAP messages. But that's not where the action is -- or will be. All you have to do is talk to vendors about their complete application platforms, and you hear a different story. Every vendor's software for messaging, transactions, process, or something else has special, nonstandard capabilities that they assert as a differentiator. So, yes, everyone will be able to round-trip SOAP messages, but where will these services be orchestrated into useful processes and applications? That is destined to occur in someone's semiproprietary integration layer. When two systems come together, one is the integrator and one the integrated. Every platform software company wants to be the single point of integration... Let me be clear to whom I'm referring: Microsoft, IBM, Oracle, Sun, and BEA. There's as much dirty fighting inside the Java camp as any I've seen used by or against Microsoft. I'm more impressed by companies seeking to bridge disparate platforms. Rogue Wave's Web Services Intermediary technology, using Tuple Spaces, is one approach I've seen that seems to address a range of Web services deployment and operational issues..."

  • [April 12, 2002] "Market Scan: WS Interop Board Treads a Fine Line." By Paul Kapustka. In XML & Web Services Magazine Volume 3, Number 3 (April/May 2002), pages 12-13. Earlier version posted February 19, 2002. ['Will the recent formation of some industry heavyweights into one consortium bode well for progress in cross-platform standards?'] "Will the new Web Services Interoperability (WS-I) Organization have greater success than earlier open-systems efforts that flashed and then fizzled? While there might be cause for skepticism, the recent history of agreements in the Web services arena suggests that further progress in cross-platform standards may be possible. Billing their creation as a 'standards integrator' for the Web services marketplace, a large group of vendors and enterprise customers -- including IBM, Microsoft, and Oracle but not Sun Microsystems -- announced the formation of the WS-I Organization, a group meant to 'accelerate the development and deployment' of interoperable Web services. According to WS-I representatives, the group's main goal is to give guidance and reassurance to potential Web services developers and customers who may be hesitant to buy into the marketing claims already being attached to the various Web services product bandwagons. The WS-I also said it will not attempt to develop standards, but will instead endorse and encourage the efforts of existing standards groups... Bob Sutor, director for e-business standards strategy with IBM, said the WS-I's work will be crucial once Web services technologies move past the basic connection levels of Simple Object Access Protocol (SOAP), XML, Web Services Description Language (WSDL), and Universal Description, Discovery, and Integration (UDDI) to include thornier topics like security and transaction support... Gartner's Natis, however, thinks the WS-I may be reaching too high with its stated plans for developing a technology 'road map,' or a list of product-interoperability profiles and 'best practices' for Web services deployment... The WS-I could prove useful, Natis said, if its members were able to reach a consensus on standards that do emerge in functional areas like security and transaction support, where there isn't the current level of agreement found for the basic communication technologies like SOAP and XML." See: "Web Services Interoperability Organization (WS-I)."

  • [April 12, 2002] "Open Source Style Publishing." By Kurt Cagle. In XML & Web Services Magazine Volume 3, Number 3 (April/May 2002), pages 58-63. ['Forget about expensive page layout products -- use open source tools to generate publications fit for print and the Web.'] "Sometimes the journey is more interesting than the destination, and this was never more evident than recently when I decided to put together a small 'magazine' that I could print for speaking engagements and post online. One problem I faced immediately was that I didn't want to spend big bucks for something like Adobe PageMaker or Quark Xpress, and I found most of the lower-end software not worth the price. I decided to look on the task as a challenge -- could I put together a magazine with just a laptop running Apache and Tomcat on a Linux system, the Open Office 6.0 beta, and a glorified text editor for creating XML and XSLT? In other words, could I create a publishing program using only off-the-shelf open-source tools? It turns out the answer is 'yes,' although the system isn't the prettiest thing in the world... The heart of my publishing application is XSL-Formatting Objects (XSL-FO) and the Apache Formatting Object Processor (affectionately known as FOP). XSL-FO is XSLT's little brother -- the formal markup language for describing a complex page description, which was intended to be the target of an XML document transformed by XSLT... for this project I wanted to keep everything open source, and I was itching to play with the XSLT 2.0 feature set, which is supported (in beta form, of course) only by Michael Kay's Saxon 7 parser. Saxon is ideal for this product (and will probably end up being my preferred parser from here on out) because the primary issue in this application was robustness rather than speed. I wrote the article in Open Office in Linux. Besides the pleasure of seeing that an open source solution was possible, these efforts reveal other points. Java (and Perl for that matter) supports unzipping classes that can be used to extract the XML directly from the Open Office .sxw format automatically, which means that you can use a combination of Java, Apache, and XSL to create the foundations of an automated Web publishing solution. The same Office XML can be transformed with a different XSL style sheet to create XHTML, which also allows you to vector your production effectively -- going from word processor directly to published output. Among its other features, Open Office 6.0 (which is also the heart of Sun's Star Office 6.0, now in development) includes its default file format, the Star Office XML Word (SXW) format -- perhaps one of its most powerful features from an XML development standpoint. The SXW format is remarkable because it's only nominally binary and completely nonproprietary. In fact, it is a zipped file that contains a set of five XML documents and a folder of resources such as JPEG, PNG, or GIF images, sound files, movie files, and so forth..."
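
[Illustrative sketch: a minimal XSL-FO instance of the kind FOP renders to PDF; the page geometry and text are arbitrary. FOP's command line of that era could process such a file with something like 'fop -fo mag.fo -pdf mag.pdf', though the exact invocation varies by version.]

    <fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format">
      <fo:layout-master-set>
        <fo:simple-page-master master-name="page"
            page-height="297mm" page-width="210mm" margin="20mm">
          <fo:region-body/>
        </fo:simple-page-master>
      </fo:layout-master-set>
      <fo:page-sequence master-reference="page">
        <fo:flow flow-name="xsl-region-body">
          <fo:block font-size="14pt">Hello, XSL-FO world.</fo:block>
        </fo:flow>
      </fo:page-sequence>
    </fo:root>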

  • [April 12, 2002] "Loosely Speaking." By Adam Bosworth. In XML & Web Services Magazine Volume 3, Number 3 (April/May 2002), page 72. ['Loose coupling is central to the nature of Web services-based application integration.'] "In any robust application-to-application architecture, the two key requirements are a platform-neutral, coarse-grained style of communication and loose coupling. I will take up the former in my next column. For this issue, I will focus on the idea of loose coupling and one of its important implications for Web services: the need to think of SOAP XML messages as documents, not as remote procedure calls (RPC). By loose coupling, I mean that you can be sure that changes to your implementation will not break other applications that are relying on it through the aegis of Web services. That is simple to explain but hard to do. In an ideal application-to-application architecture, each application en-queues well-defined messages to the other applications and de-queues well-defined messages from them. This model is ideal for several reasons... RPC suggests that it is okay to automatically map the parameters or return type into or from XML messages. It isn't. That is a private implementation detail. Everyone's implementations will vary and all implementations will vary over time. RPC also implies that the caller knows the signature and classes of the receiver. In fact, it is a miracle if the one application's classes and parameter order happen to match another's. In the real world, every implementation will have its own classes. Finally, RPC implies a precise conversion of all incoming XML into arguments of known classes or from a return with a known class. This means even a simple addition to an XML message will require a new receiving class, which will break receiving applications. It is better to have a model that is loose in and tight out. By this I mean that the receiver of a message looks for what it needs and ignores the rest. Loose coupling is central to the nature of Web services-based application integration. That's why it seems to me that the right model for XML in Web services is a message-oriented, document-based one rather than one based on remote procedure calls."
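
[Illustrative sketch: a document-literal SOAP 1.1 message of the kind Bosworth recommends. The purchase-order vocabulary and its namespace are invented. A 'loose in' receiver reads only the elements it needs, so an element added in a later revision does not break older consumers -- whereas a strict RPC mapping onto fixed classes would.]

    <soap:Envelope
        xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <po:purchaseOrder xmlns:po="http://example.com/po">
          <po:customer>ACME Corp</po:customer>
          <po:item sku="123-X" qty="12"/>
          <!-- Added in a later revision of the message format; a
               receiver that reads only what it needs ignores it -->
          <po:rushDelivery>true</po:rushDelivery>
        </po:purchaseOrder>
      </soap:Body>
    </soap:Envelope>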

  • [April 12, 2002] "Routing XML Messages." By Claude Duguay. In XML & Web Services Magazine Volume 3, Number 3 (April/May 2002), pages 68-71. ['Take advantage of XML as a primary messaging system standard to build powerful solutions that route your messages accurately.'] "XML is quickly becoming a primary standard for messaging systems, and it provides a mechanism for data interchange that's incomparable at the moment. By applying simple techniques for deciding where your XML documents or messages are routed, you can build powerful solutions that range from large-scale operations to standalone desktop applications or anything in between. One of the great advantages of message-based architectures is the loose coupling and scalable nature of these systems. Messages can be queued, processed in parallel, and routed to different services. Let's take a look at routing XML messages. Much like packet routing, we'll look at the envelope of a message and move a message to a given target based on some simple rules for matching tags and attributes... While other approaches are possible, we'll assume that the first tag in a message, the root tag, will be used to decide where the document should be routed. The same approach can be expanded to apply more complex rules using a similar technique. I think you'll find this discussion a suitable starting point if this turns out to be important to you. To maximize flexibility, we'll develop a set of classes that allow you to associate tag specifications with a routing target. A RoutingTable class holds these associations and a RoutingSerializer reads from an XMLReader and pushes the output to a RoutingTarget. RoutingTarget is an interface, and we'll implement a ConsoleTarget, FileTarget, and a SocketTarget class to show how easy it is to define where your messages can be routed. In practice, you may want to wrap this code with a simple proxy server that uses socket connections to move messages to suitable message processors..."
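
[Illustrative note: the article's design is expressed as Java classes (RoutingTable, RoutingSerializer, RoutingTarget, and friends). Purely to show the kind of rule set such a table encodes, here is an invented XML configuration -- not a vocabulary from the article -- mapping message root tags to console, file, and socket targets like those it describes.]

    <routingTable>
      <!-- Route on the message's root element, per the article's scheme -->
      <route rootTag="PurchaseOrder" target="socket://orders.example.com:5150"/>
      <route rootTag="Invoice" target="file:///var/spool/invoices"/>
      <!-- Fallback for messages whose root tag matches no other rule -->
      <route rootTag="*" target="console:"/>
    </routingTable>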

  • [April 12, 2002] "Database Strategies for Unstructured Content." By Stuart J. Johnston. In XML & Web Services Magazine Volume 3, Number 3 (April/May 2002), pages 18-27. ['Relational and native XML database developers take diverse approaches to managing free-form information data.'] "... Consider using a native XML database as a consolidation point for data that has to be exchanged in an industry-standard format. With its ability to represent and query data from many sources as pure XML, a native XML database can enable levels of data correlation, aggregation, and information mining that would be difficult or impossible to achieve without a central place to standardize data formats and protocols. Although the field is still incipient, native XML databases might help enterprises respond to changes in the business environment more quickly and cheaply than custom approaches... a survey conducted in March 2001 at the Data Administration Management International Symposium by Intellor Group and Wilshire Conferences found that 12 percent of companies had already implemented a native XML database or planned to within the following 12 months. Indeed, advocates say that the use of native XML databases as middle-tier servers between conventional relational databases such as IBM's DB2 and XML-based Web services is catching on. Rather than using translators, the pure XML databases can speed processing time for electronic transactions while off-loading demand from the large-scale, enterprise database. Most native XML databases can communicate with relational systems relatively easily using either ODBC or JDBC drivers, through Extensible Stylesheet Language Transformations (XSLT), or in some cases using XPath. Rather than make the relational database translate SQL data into XML 'on the fly,' argue native XML aficionados, why not off-load most of that work to a native XML database? This approach has several benefits, including consolidation of all XML data in a single repository designed specifically to handle XML information and documents. However, the market is still formative, and many of the products themselves are not yet mature. Many of the players so far have come out with only version 1.0 releases, although a few have 2.0 and 3.0 releases now. But that doesn't mean that even the 1.0 releases aren't useful today, or that now wouldn't be a good time to get up to speed, try some pilot projects, and gain a measure of understanding as to what's good and bad about them. Key to the functionality of XML databases is support for several XML standards or proposed standards, although not every product will support all of them. The proposed standards include XSLT, Document Type Definitions (DTDs), and XML Schemas, as well as XPath (an XML language for addressing parts of XML documents), and XQuery. XQuery is an XML-based query language, an emerging World Wide Web Consortium (W3C) proposed standard for querying data in XML, which includes XPath 2.0 as a subset..."

  • [April 11, 2002] "Security in a Web Services World: A Proposed Architecture and Roadmap." Joint White Paper from IBM Corporation and Microsoft Corporation. Version 1.0. April 7, 2002. 25 pages. The document "defines a comprehensive Web service security model that supports, integrates and unifies several popular security models, mechanisms, and technologies (including both symmetric and public key technologies) in a way that enables a variety of systems to securely interoperate in a platform- and language-neutral manner... In this document we present a broad set of specifications that cover security technologies including authentication, authorization, privacy, trust, integrity, confidentiality, secure communications channels, federation, delegation and auditing across a wide spectrum of application and business topologies. These specifications provide a framework that is extensible, flexible, and maximizes existing investments in security infrastructure. These specifications subsume and expand upon the ideas expressed in similar specifications previously proposed by IBM and Microsoft (namely the SOAP-Security, WS-Security and WS-License specifications)... By leveraging the natural extensibility that is at the core of the Web services model, the specifications build upon foundational technologies such as SOAP, WSDL, XML Digital Signatures, XML Encryption and SSL/TLS. This allows Web service providers and requesters to develop solutions that meet the individual security requirements of their applications... document outlines a comprehensive, modular solution that, when implemented, will allow customers to build interoperable and secure Web services that leverage and expand upon existing investments in security infrastructure while allowing them to take full advantage of the integration and interoperability benefits Web service technologies have to offer... We anticipate concerns about what can be done to ensure interoperability and consistent implementation of the various proposed specifications. To address this, IBM and Microsoft will work closely with standards organizations, the developer community, and with industry organizations such as WS-I.org to develop interoperability profiles and tests that will provide guidance to tool vendors..." Also from IBM and VeriSign. See: "Web Services Security Specification (WS-Security)." [cache]
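
[Illustrative sketch: WS-Security defines a SOAP header block for carrying security tokens. The fragment below is modeled on the username-token examples published with the specification; the wsse namespace URI shown is the one used by the April 2002 draft (it later changed under OASIS standardization), and the credentials are, of course, samples.]

    <soap:Envelope
        xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Header>
        <wsse:Security
            xmlns:wsse="http://schemas.xmlsoap.org/ws/2002/04/secext">
          <!-- One kind of security token: a username with credentials -->
          <wsse:UsernameToken>
            <wsse:Username>Zoe</wsse:Username>
            <wsse:Password>ILoveDogs</wsse:Password>
          </wsse:UsernameToken>
        </wsse:Security>
      </soap:Header>
      <soap:Body>
        <!-- Application payload; the specification describes how it may
             additionally be signed with XML Signature and encrypted
             with XML Encryption -->
      </soap:Body>
    </soap:Envelope>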

  • [April 11, 2002] "Spec Secures Web Services Applications." By Peter Galli and Darryl Taft. In eWEEK (April 11, 2002). "Microsoft Corp., IBM and VeriSign Inc. have joined forces to develop and publish a new security specification for Web services that will form the foundation of their proposed Web services security architecture. The specification, which the parties say is the first such spec to be launched, will be known as WS-Security and is designed to help organizations build secure and broadly interoperable Web services applications. 'The spec is designed to enable message-level, SOAP-level security, specifically encryption, authentication and identity around those messages. What this will really enable is for Web services to communicate in a trusted manner, especially in a scenario where you have multiple actors in a Web service, where it will provide end-to-end security between all of those parties,' [said] Marcie Verdin, director of enterprise services at VeriSign... Bob Sutor, the director for eBusiness strategy at IBM, said the plan is to submit the specification to standards groups, but he declined to say which these would be. 'This is very broad, and no one standards organization is going to be able to do it. The W3C [World Wide Web Consortium] may be involved; we'll just have to wait and see'... 'This is the foundational specification for Web services security, in the same way that two years ago SOAP was a foundational specification', he added. Steven VanRoekel, the director of Web services marketing at Microsoft, said the specification is based on work that had already been done in the W3C around XML encryption and digital signatures. When asked about not including other industry players in the development of the specification, Sutor said, 'We have published several specifications together in the past like SOAP and WSDL [Web Services Description Language]. We put them out there and let the industry and partners comment on them. We plan to do the same with this and will only submit the spec to the standards bodies in a few months after we have received industry input.'... customers have been saying that while the technology is new and innovative, they are not going to use it for anything mission-critical across the Internet until they can be comfortable that their information is secure and stays confidential... IBM and Microsoft have also developed and are publishing a Web services security road map, titled 'Security in a Web Services World.' The document defines additional and related Web services security capabilities within the framework established by the WS-Security specification that Microsoft and IBM plan to develop ... The additional proposed specifications deal with security policies, trust relationships, privacy practices, the management and authentication of message exchanges between parties, trust in heterogeneous federated environments, and the management of authorization data and policies..." See details in the news item of 2002-04-11: "Microsoft, IBM, and VeriSign Promote WS-Security Specifications for Web Services"; also local references in "Web Services Security Specification (WS-Security)."

  • [April 11, 2002] "Tech Giants Partner On Security Specs." By Wylie Wong. In CNET News.com (April 10, 2002). "Microsoft, IBM and VeriSign have teamed to create security specifications for Web services, a move analysts say will help drive adoption of the hyped but still emerging technology... they will release a new specification, called WS-Security, which will encrypt information and ensure that the data being passed between companies remain confidential. The companies, which are announcing the new security initiative at Microsoft's Tech Ed developer conference, also plan to build five more security specifications in the next 12 to 18 months that will provide additional security measures that businesses may need for Web services... WS-Security merges two different security specifications that Microsoft and IBM had worked on separately with help from VeriSign, said executives from the three companies. Microsoft released a security specification, also called WS-Security, in October, but the software giant built the original without collaborating with the other companies. The new version of WS-Security, which combines the work of IBM, Microsoft and VeriSign, is used as part of a SOAP message. It outlines how to use existing World Wide Web Consortium specifications called XML Signature and XML Encryption. "It ensures the message you received is the one that was sent and was not tampered with along the way," said Bob Sutor, IBM's director of e-business standards strategy. "You want to know that the message is from me and not someone spoofing my identity and counterfeiting messages to you." The three companies plan to eventually submit WS-Security to an industry standards body, but have not yet decided on the timing or the body that they will work with..." See: "Web Services Security Specification (WS-Security)."

  • [April 11, 2002] "Microsoft, IBM, Verisign Team on Web Services." By Joris Evers. In InfoWorld (April 11, 2002). "Microsoft, IBM, and VeriSign have devised a way to add integrity- and confidentiality-checking capabilities to upcoming Web services applications, a first step in a broader joint effort to secure Web services, the companies said Thursday. The jointly developed specification, dubbed WS-Security, defines a set of SOAP (Simple Object Access Protocol) extensions and describes how to exchange secure and signed messages in a Web services environment, providing a foundation for Web services security... In addition to the WS-Security specification, Microsoft and IBM said they plan to develop a range of security specifications for Web services together with key customers, partners, and standards organizations such as the W3C (World Wide Web Consortium) and the IETF (Internet Engineering Task Force). Six of the other proposed specifications are WS-Policy, WS-Trust, WS-Privacy, WS-Secure Conversation, WS-Federation, and WS-Authorization. These proposed specifications can be grouped in two categories, with the first three dealing with defining security policies, establishing trust relationships, and implementing privacy policies, and the last three handling the sending and receiving of messages sent between Web services..." See details in the news item of 2002-04-11: "Microsoft, IBM, and VeriSign Promote WS-Security Specifications for Web Services."

  • [April 11, 2002] "XML Namespaces." By Sean McGrath (Propylon). From ITWorld (April 11, 2002). Column 'XML In Practice', an online "guide which explores how to write well-formed XML documents, model business requirements using XML, how to integrate XML with existing applications." ['Namespaces have been causing major disruptions throughout the XML world since their inception. As another such disturbance erupts, the time has come to reevaluate their value.'] "Ostensibly, the namespace Rec is just a simple way of allowing element type names to be globally unique. Unfortunately, in so doing, it introduces rules for defaulting namespaces that significantly complicate any software they touch. XPath, SAX, DOM, XQuery, XSchema, and XSLT -- all of these are caught up in the flapping of the namespace Rec's wings, and it is not a pretty sight! I won't go into the details here, but type any of the previous words into a search engine along with 'namespace' and 'problem' and you will see what I mean. Recently, the W3C introduced a draft of version 1.1 of the namespace Rec. The changes suggested are minor but have caused the namespace debate to erupt again on xml-dev... When someone as knowledgeable as Joe English starts classifying compliant namespace usage patterns into categories called 'neurotic', 'borderline', and 'psychotic', it behooves us to take a step back and look at what is going on here! When the XML world revisits its fundamentals (around 2008 I suspect as these things always take a decade), namespaces will need to have their wings well and truly clipped in order to redress some of the chaos and damage caused from a decade of inconsiderate flapping. My advice: Don't use namespaces at all if you can avoid it. If you can't, then only put them on the root element. Oh, and don't lend money to anyone who tries to tell you namespaces are simple..." References: (1) the thread in XML-DEV postings; (2) the news item, "W3C XML Core Working Group Publishes New Working Drafts for Namespaces in XML"; (3) "Namespaces in XML."

  • [April 11, 2002] "XML Namespaces 1.1." Commentary on the new W3C WDs, by Leigh Dodds. From XML.com [XML Deviant]. April 10, 2002. "Namespaces have probably generated more debate and confusion than any other W3C Recommendation. With the first publication of a Working Draft for Namespace 1.1, which has caused a great surge of discussion on XML-DEV, it seems like the next iteration of the specification won't be any less controversial. Looking through the color coded changes in the new Namespaces in XML 1.1 Working Draft will quickly demonstrate that the document includes minimal changes from the original. Turning to the accompanying requirements document shows that this is entirely intentional. Barring the addition of some minor errata, Namespaces 1.1 will incorporate only a single extra feature, the ability to undeclare a namespace prefix. Putting the changes in context, the requirements also explain that the new specification is to be progressed to Recommendation alongside XML 1.1. Richard Tobin clarified the relationship between the two specifications in an XML-DEV posting: 'It's intended that Namespaces 1.1 will be strongly tied to XML 1.1, so that it will only apply to XML 1.1 documents. XML 1.0-only parsers will reject documents labeled 1.1 anyway, and Namespaces-1.1-aware processors will apply Namespaces 1.0 to 1.0 documents and Namespaces 1.1 to 1.1 documents.' ... As Ronald Bourret's Namespace FAQ explains, while it's possible to undeclare the default namespace, it's not possible to undeclare a namespace associated with a prefix. According to the Namespace 1.1 requirements document this is a significant problem that is affecting several other W3C specifications, including XQuery, SOAP, and XInclude... The precise problem is that, according to the XML Infoset, each element has a number of ' namespace information items', one for each of the namespace declarations currently in-scope for that element. An in-scope namespace is any namespace declared on an individual element or one of it's ancestors. This is problematic because each element ends up carrying around additional 'baggage' in the form of namespace declarations that it doesn't need... One proposal floated during the discussion combining the XML and Namespaces specifications into a single core document. Opinions were divided on whether this was a good suggestion. Some see XML and Namespaces as the real foundation. Others wanted the continued freedom to ignore Namespaces or perhaps adopt alternatives. Tim Bray's Skunkworks XML 2.0 was cited as an example of how this might combination might be achieved, although this raised some additional concerns as Bray's document also removes DTDs from the core specification. This lead to a subsequent discussion on how DTDs could be updated to better support XML Namespaces. The entire thread is too lengthy to summarize here, but it's interesting to note that the ISO DSDL project has a section devoted to 'Namespace-aware processing with DTD syntax' which may yet see this work happen outside of the W3C. In the short term, and despite loud comments from the community, the scope of changes for XML 1.1 and XML Namespaces 1.1 won't be altered beyond their currently stated requirements as Richard Tobin made absolutely clear: ['XML 1.1 aims to resolve various character-level issues (newer Unicode versions, line ends, Unicode normalization). Namespaces 1.1 aims to add undeclaring of prefixes. Both will incorporate errata to the current editions. That's all. 
No throwing out DTDs, no making them work with namespaces. No unification of the two specs...'] . Whether a more substantial reworking to further rationalize these and other specifications will happen is anyone's guess." References: (1) the thread in XML-DEV postings; (2) the news item, W3C XML Core Working Group Publishes New Working Drafts for Namespaces in XML."
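To make the single new feature concrete, a minimal sketch (element names and URIs invented): Namespaces 1.0 permits undeclaring only the default namespace, while the 1.1 Working Draft extends the same mechanism to prefixes.

    <a xmlns="http://example.com/one" xmlns:x="http://example.com/two">
      <!-- Legal in Namespaces 1.0: undeclare the default namespace. -->
      <b xmlns="">
        <!-- New in the Namespaces 1.1 draft: undeclare the prefix x,
             removing it from the in-scope namespaces of c and its
             descendants. This is an error under Namespaces 1.0. -->
        <c xmlns:x=""/>
      </b>
    </a>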

  • [April 11, 2002] "XSLT." By Elliotte Rusty Harold. Notice posted to XML-DEV 2002-04-11. Chapter 17 of Processig XML with Java. This work is being written as "a complete tutorial about writing Java programs that read and write XML documents. It will become an approximately 700 page book that will be published by Addison-Wesley in Spring 2002... This chapter includes three major sections: (1) A brief XSLT tutorial (2) Detailed discussion of the TrAX API for using XSLT with Java (3) Writing XSLT extension functions and elements in Java I'm particularly pleased with the first section. I've written several XSLT tutorials before, but this one is radically different. It considers XSLT primarily as a functional programming language and focuses on the ability to call templates recursively. I doubt there's anything here that hasn't been discovered or invented by someone somewhere before, but certainly I had never seen some of the things you could do with XSLT until I invented them for this chapter. I wouldn't recommend this as your first exposure to XSLT (see instead "XSL Transformations" in XML Bible, Gold Edition) but if you're already familiar with XSLT basics, this chapter may show you a few new tricks..." For related resources, see "Extensible Stylesheet Language (XSL/XSLT)."

  • [April 11, 2002] "Practical Formatting Using XSLFO. (Extensible Stylesheet Language Formatting Objects.)" By G. Ken Holman. Second Edition. 2002-04-05. ISBN 1-894049-09-8. From Crane Softwrights Ltd. 361 pages, free 175-page download preview excerpt. "This revised Second Edition again contains all formatting objects of the W3C XSLFO 1.0 Recommendation with additional text, examples, twice as many diagrams, and new content based on feedback and requests for additions from existing customers. It has been four months since the last edition. The free download preview excerpt (in both A4 and US-letter page sizes) has also been updated to reflect the content of the new edition. If you have downloaded the earlier free preview excerpt, you may wish to pick up a new copy..." For related resources, see "Extensible Stylesheet Language (XSL/XSLT)."

  • [April 11, 2002] "Microsoft Ships Commerce Server, Touts XML Progress." By Richard Karpinski. In InternetWeek (April 11, 2002). "Microsoft this week shipped the next version of its Commerce Server, the first server to support its emerging .Net framework and an important upgrade in its own right for companies building e-commerce sites on the Windows platform. In addition, at its TechEd developer show, Microsoft detailed new progress on the XML and Web services front, including the first release of a live Web service, an XML version of MapPoint Web site. The release of Commerce Server 2002 addresses several shortcomings, including concerns about the lack of scalability and relatively limited personalization and merchandising tools. It is also notable for its built-in support for the .Net platform, which will allow developers using the recently delivered VisualStudio.Net to add custom application functionality to the platform, as well as support for the BizTalk integration server... improvements in Commerce Server 2002 include: the introduction of 'virtual catalogs,' or the ability to aggregate catalogs from multiple suppliers; integration with Passport, Microsoft's single sign-on authentication solution; and a slew of new functionality to tap merchandising and cross-sell/up-sell opportunities..."

  • [April 10, 2002] "An XML Schema for Electronic Records Management." By Leila Naslavsky and Dorrit Gordon. Draft Version 0.1. 35 pages. Prepared in a workgroup lead by Jim Whitehead (UC Santa Cruz). Referenced in a 2002-04-09 post from Owen Ambur (Co-Chair, XML.gov XML Working Group; Vice Chair, Federal Information and Records Managers (FIRM) Council). Comment on the ERM schema from Owen Ambur's post: "DoD Std. 5015.2 embodies legal, logical, and technical requirements that are applicable to all U.S. federal agencies. DoD is certifying COTS products that comply with the standard, and NARA has endorsed the standard for use by all U.S. federal agencies. In a very real sense, agencies that choose not to use one of those systems risk violating the law, not to mention failing to manage their records effectively. Among the projects identified for priority action in the Administration's eGov Strategy is the eRecords project managed by NARA. Among the objectives identified for the project is the following: "Complete the RM and Archival XML Schema" by 2/28/03; see page 17/20." Excerpt from the ERM paper: "Governments, corporations, museums, libraries and other organizations all maintain repositories of documents. As the volume of information maintained by these institutions grows, so does the need for simple, flexible mechanisms for managing that information. Electronic records management (ERM) is a field of study devoted to developing methods for managing this information. This work defines an XML schema to be used as the basis for a records management application (RMA) compliant with the United States Department of Defense's (DoD) standards for RMAs, DoD 5015.2 [Department of Defense 5015.2-STD, "Design Criteria Standard for Electronic Records Management Software Applications," November, 1997]... The XML schema defined in this paper was designed with two key goals in mind: compliance with DoD 5015.2, and flexibility. We chose to use DoD 5015.2 as a basis for our schema. It provides a manageable set of clearly defined metadata elements. The US based ERM research has been driven by the National Archive and Records Administration (NARA) which relies on the DoD 5015.2 standard. An Australian organization, the Public Record Office of Victoria has also published a comprehensive set of ERM standards, the Victorian Electronic Records Strategy (VERS)... The primary goal of this effort was to support the US Department of Defense's requirements for electronic records management. To that end, the metadata included in our schema derives directory from DoD 5015.2. We added a limited number of additional metadata elements. Elements were added either because their presence was inferred from the specification (although not explicitly defined), or because we felt they would add significant utility. Elements which were added only because we felt they would add utility are all optional... The secondary goal of this effort was to create a schema with enough flexibility to be used for other ERM efforts. We chose to split the schema into several pieces. An organization should use only those pieces which are appropriate to its needs. Records are the core of any RMA. Our assumption is that any organization implementing our schema will, at a minimum, use the records portion. The only additional portion required is the organizational schema. The File Schema cannot be implemented without the Record Schema because files contain elements with a type, recordID, which is defined in the Record Schema. 
All other interconnections are via elements of type unsignedInt. These can be implemented with meaningless default values when they do not refer to the code for an existing element. The inclusion of user defined data elements allows organizations to include metadata that is not called for in the DoD 5015.2 specification. While there is still considerable work to be done to ensure that this schema is fully compliant with DoD 5015.2, and to improve its flexibility and support for interoperability, this draft will provide a platform from which to move forward." The Appendix provides a draft mapping between the metadata elements in VERS and those in DoD 5015.2. On VERS, see "Australian Public Record Office Victoria Uses VERS Standard for Records Management." [cache, PDF from the .DOC version]
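A rough schema fragment may help visualize the modular dependency described above (all names here are hypothetical, not taken from the draft): the File Schema depends on the Record Schema because it uses a type such as recordID defined there, while other cross-references are plain unsignedInt codes.

    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <!-- Record Schema fragment (hypothetical): the type that
           other schema modules reference. -->
      <xs:simpleType name="recordID">
        <xs:restriction base="xs:string"/>
      </xs:simpleType>
      <!-- File Schema fragment (hypothetical): a file lists its
           records by recordID and links to an organizational
           unit through a plain unsignedInt code. -->
      <xs:element name="file">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="containsRecord" type="recordID"
                        maxOccurs="unbounded"/>
            <xs:element name="organizationCode" type="xs:unsignedInt"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>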

  • [April 10, 2002] "TechEd: .Net Evangelist Talks Shop." By Matt Berger and Ed Scannell. In InfoWorld (April 10, 2002). "Months after the prime-time release of Microsoft's developer toolkit Visual Studio .Net, the company is offering additional tools and software upgrades that it says will harness the initiative's capabilities for delivering software and services over the Internet. Eric Rudder, Microsoft's senior vice president of developer platforms, who is also the evangelist for the company's broad .Net initiative, kicked off the TechEd 2002 developer conference here with several new products and software announcements as well as testimonials from customers who have begun using the tools to build their own next generation Web applications. Rudder demonstrated Microsoft's first commercially available XML Web service, called MapPoint .Net, which can be used to add mapping and location-based services to applications, Web sites or Web-based services. Now available in its second version, MapPoint .Net has support for the Web service standard SOAP (Simple Object Access Protocol). To be hosted by Microsoft, MapPoint .Net is aimed at helping software developers, portal designers and corporate users easily embed maps, driving instructions, distance calculations and proximity searches into their applications. Among its features, MapPoint .Net is capable of finding street addresses in the U.S., Canada and 11 European countries, with maps and driving directions delivered automatically in nine languages, Microsoft said. On the software side, Microsoft officially unveiled Commerce Server 2002, the first in its line of .Net enterprise servers that includes built-in support for the .Net Framework, a runtime environment that is used to execute applications designed for .Net... Also during Wednesday's keynote, Rudder announced the Exchange 2000 Server XML Web Services Toolkit for .Net. The new toolkit, which is used in conjunction with Visual Studio .Net, offers developer tools and other resources designed to link .Net applications with Exchange. Users will be able to take advantage of Exchange features such as a calendar, workflow scheduling and messaging. The toolkit supplies sample code white papers, and a training course to quicken the design and development of XML Web services." See details in the news item: "Microsoft Announces MapPoint .NET XML Web Service With SOAP API."

  • [April 10, 2002] "Siebel Tackles Application Integration." By Ann Bednarz. In Network World (April 09, 2002). "Siebel Systems Monday [2002-04-08] unveiled plans for a set of tools designed to ease the hassle of linking Siebel eBusiness applications with other enterprise software. Due to be available this summer, the Universal Application Network is built around a set of 200 industry-specific 'business processes' that Siebel is creating. Rather than being tied to a single application, these prepackaged software sequences cross multiple applications to access data needed to complete common business functions. A business process might cover creating a new customer record, for example, or advancing a sales transaction from the quote stage to order placement... Siebel - which is the market leader in customer relationship management (CRM) software - is developing the business processes with some help from system integrators PwC Consulting, Accenture, IBM Global Services and KPMG Consulting. Also contributing technology and expertise to the Universal Application Network effort are integration middleware vendors IBM, SeeBeyond, TIBCO Software, Vitria and webMethods. Siebel recruited these middleware vendors to supply the underlying integration platform for Universal Application Network, and the vendors have agreed to support the business processes specifications Siebel uses. The integration servers will execute the business processes, Siebel says. All together, Universal Application Network consists of three major components: the library of business processes; tools to configure business processes; and the integration server for coordinating inter-application communication. The tools are based on Web services and XML standards. Ultimately, the aim of Universal Application Network is to reduce the need for custom integration, allowing enterprises to quickly and affordably tie their CRM software to other applications, such as inventory, order management or financials..." For details, see the 2002-04-10 news item: "Siebel Systems Announces XML-Based Universal Application Network." See also the announcement "Siebel Systems Launches Universal Application Network: Industry's First Standards-Based, Vendor-Independent Application Integration Solution. Industry Leaders Commit Technology and Expertise to Deliver Complete, Customer-Centric Solution that Speeds Deployment and Delivers Lower Total Cost of Ownership."; also the white paper.

  • [April 10, 2002] "XML Formatted Remittance Data in the ACH: A Feasibility Assessment." From: NACHA, The Electronic Payments Association. Internet Council. Final Draft. April, 2002. 17 pages. ['The Internet Council has just completed its study of the feasibility of passing XML-formatted remittance data through the ACH. The white paper reviews the current state of ACH business-to-business payments and remittance processing, the emerging use of XML in remittance processing, and the perceived future of Internet-based B2B payments. NACHA represents more than 12,000 financial institutions through its 33 regional payments associations, and 600 organizations through its seven industry councils and corporate Affiliate Membership program. The mission of the Internet Council is to facilitate the development of global electronic commerce by enabling businesses, governments, and consumers to utilize present and future payments over open networks in a secure and cost-effective manner'] "The purpose of this document is to summarize the issues pertaining to this question through a SWOT (strengths, weaknesses, opportunities, and threats) analysis of the: (1) current state of ACH business-to-business (B2B) payment and remittance processing; (2) emerging usage of XML in payments and remittance; and (3) perceived future of Internet-based B2B payments. The scope of the analysis is limited to evaluating ACH-based remittance origination and receipt in business-to-business (B2B) payment transactions... Should the ACH enable the exchange of XML remittance data? There is some evidence to support both a 'yes' and a 'no' response. However, it is clear from this research effort that significant issues remain, and it is not possible to definitively answer the question at this time. Based on this conclusion, the Internet Council recommends that the question of XML remittance data in the ACH be tabled for the moment; not enough data exists to move forward in promoting ACH network changes at this time... The Council identified five types of potential changes in market conditions that may warrant the need for XML remittance data in the ACH: (1) Industry adoption of an XML payment standard; (2) Substantial increase in customer or constituent demand; (3) Establishment of regulatory requirements by a legal or governing authority; (4) A legal or risk assessment showing that stakeholders are somehow at risk or have liability for not providing this capability; (5) Evidence of adoption and use on competitive bulk-payments networks." See also the press release: "NACHA Issues Report on XML and the ACH. Internet Council Evaluates B2B Market Conditions; Financial EDI Continues Growth." [cache]

  • [April 10, 2002] "SyncML Intensive: A Beginner's Look at the SyncML Protocol And Procedures." By Chandandeep Pabla (Software Engineer, DCM Technologies). From IBM developerWorks, Wireless. April 2002. ['SyncML is the data synchronization protocol that promises to put an end to wireless data synchronization woes for end users, manufacturers, service providers, and application developers alike. Follow along with wireless programmer Chandan Pabla as he unpacks the SyncML protocol (now in version 1.1) and walks you through the steps of a two-way client/server SyncML procedure.'] "A long-standing obstacle to the advancement of ubiquitous computing has been the lack of a generalized synchronization protocol. Until recently, the available synchronization protocols were proprietary, vendor-specific, and supported synchronization only on selected transports, for specified data types. This has slowed development in the area of mobile computing and been a common source of frustration for users, device manufacturers, service providers, and application developers. SyncML, launched in February 2000, marked the coming together of industry leaders (Ericsson, IBM, Motorola, and Nokia, among others) to resolve the synchronization problem. With the February 2002 release of the SyncML version 1.1 specification, we have our chance to begin working in earnest with what already promises to be a groundbreaking protocol. In this article we'll unpack the SyncML specification. I'll start by describing the essential components of the SyncML protocols. Next, I'll explain the seven different types of SyncML procedure, and walk through the steps of a two-way synchronization procedure. In closing, I'll outline the contents of the SyncML Reference Toolkit, then briefly discuss the overall benefits of using SyncML as the foundation for your wireless device development. [...] we have explored the basic structure and components of the SyncML data synchronization protocol. We've also walked through a simple SyncML synchronization procedure, using that procedure to learn about some of the complexities that can arise in data synchronization, and how they are dealt with in the SyncML framework. We closed with a quick overview of the SyncML RTK, and a discussion of the wide-reaching benefits of SyncML. The future of SyncML is bright. Already, an initiative is in place to expand it scope, with the Device Management Protocol, and application development under the SyncML programming framework is well underway. [The SyncML Device Management protocol (SyncML DM), announced in January 2002, is the first move by the SyncML Initiative to broaden the scope of the SyncML specification. SyncML DM technology will allow third parties such as wireless operators, service providers, and corporate information management departments to remotely configure mobile devices.] With the release of the SyncML version 1.1 specification, it is apparent that SyncML will continue to mature, and will become an increasingly ubiquitous wireless programming technology over the next several years..." See references in "The SyncML Initiative."

  • [April 10, 2002] "Interactive TV Firms Back Tech Standards." By [Reuters]. In CNet News.com (April 10, 2002). "A group of interactive TV content providers on Monday [2002-04-08] unveiled a set of open technology standards for making programs they claim will play on any set-top box or Web-based system. The production standards, developed by interactive TV company GoldPocket Interactive, have the backing of about 90 percent of the companies in the nascent industry, GoldPocket said. Interactive TV is a system for two-way communications via TV sets using remote controls, and many cable and satellite TV operators are beginning to install the systems across the United States... The standards are based on a form of XML (Extensible Markup Language), which is widely compatible with the major interactive set-top boxes on the market today. The company said it has created an advisory committee to shepherd the development of the standards set, including participants from America Online and Turner Broadcasting, both units of AOL Time Warner; Dick Clark Productions; and Game Show Network, a cable TV channel backed by Sony's entertainment division. A draft version of the standards is available now, with final versions of the free standards to be available in May. Separately, GoldPocket also said it has developed a version of its authoring tools integrated with digital video editing technology from Avid Technology, whose editing systems are among the most widely used in the entertainment industry..." See references in the news item: "iTV Production Standards Initiative Creates XML Standards for Interactive Television."

  • [April 09, 2002] "Adobe Freshens FrameMaker with XML, Server Features." By [Seybold Bulletin Staff.] In The Bulletin: Seybold News and Views on Electronic Publishing Volume 7, Number 27 (April 10, 2002). "This week marks the return of FrameMaker to the forefront of the structured-document authoring market. Adobe announced FrameMaker 7, a new version of the venerable desktop publishing program, which, for the first time, is endowed with direct support for XML. Frame Technology introduced FrameMaker+SGML in late 1995, the same year Adobe acquired the company. By early 1996, Adobe claimed that 20 percent of the FrameMaker installed base was considering FrameMaker+SGML. Since the arrival of XML in 1997, however, Adobe has been slow to adapt the product to the syntax and character encoding of the new standard. FrameMaker 6, introduced in the spring of 2000, featured integrated HTML output but no direct XML support, even in the SGML version ('Adobe Debuts FrameMaker 6.0,' in SRIP 4/8, April 2002). In this new version, Adobe not only has brought the product up to date, but also merged the structured and unstructured versions of the product into a single application so that users can choose the mode in which they want to work without installing two products... As before, the structured mode provides context-sensitive structured editing within a WYSIWYG, page-oriented display. The new version adds support for Enhanced Unicode characters (UTF-8 and UTF-16), and enables users to round-trip XML files into and out of FrameMaker. The second significant change in version 7 is that it comes in a certified server configuration... As it has for a decade, FrameMaker remains an excellent desktop layout tool for producing proposals, reports and books... The growing demand for XML-based publishing requires configurable tools for structured editing to be coupled with robust tools for cross-media production. FrameMaker 7.0 delivers both of these capabilities in a single desktop package. It not only provides real competition to Arbortext and SoftQuad at the desktop, but also a capable print-engine server that can be hooked up to a variety of Web content-management systems..." See details and references in the news item: "Adobe Announces Enhanced XML Authoring Support in FrameMaker Version 7.0."

  • [April 09, 2002] "Apache SOAP Type Mapping, Part 2: A Serialization Cookbook. Follow these steps to write your own custom serializers and deserializers." By Gavin Bong (Software Engineer, eUtama Sdn. Bhd.). From IBM developerWorks, Web Services. April 2002. See previously: "Apache SOAP Type Mapping, Part 1: Exploring Apache's Serialization APIs. Learn how to translate the data types in your apps into XML." ['SOAP specifies an encoding to represent common types found in databases, programming languages (for example, Java programming language), and data repositories. Apache SOAP's toolkit supports encoding by supplying a base set of (de)serializers; classes that do the grunt work of mapping Java types to serialized XML representations. Part 1 of this two-part series explored the use of these (de)serializers. Here in Part 2, Gavin Bong shows you how to write your own (de)serializers when none from the toolkit suit your needs. He also provides an example application that demonstrates many of the concepts explored in this series.'] "In the first article in this series, you saw how SOAP maps data types to XML, and learned how to use the serializers and deserializers (hereafter referred to collectively as (de)serializers) included in the Apache SOAP toolkit. In this installment, I'll walk you through a cookbook that will show you how to write your own (de)serializers. I would advise you to have the sources of some of the base (de)serializers available for reference. You may also want to reread the 'Type Mapping Pattern' section in Part 1 to refresh your memory on how type mappings are resolved internally. Once I've finished with the cookbook, I will present a simple application that implements schema-constrained SOAP. The application will describe an interaction in which a purchase order document that's purposely noncompliant with Section 4 encoding is sent using SOAP. [...] I hope that the examples in this article have made clear the theoretical concepts outlined in the first article in this series. If Web services operating across many machines on the network are to become a widespread reality, developers must understand how programmatic objects are transmitted from one machine to another. A better understanding of SOAP's type mapping abilities should help you build better distributed applications and services." See "Simple Object Access Protocol (SOAP)."

  • [April 09, 2002] "Extensible Provisioning Protocol E.164 Number Mapping." By Scott Hollenbeck (VeriSign Global Registry Services). Internet Engineering Task Force, Internet-Draft. Reference: 'draft-ietf-enum-epp-e164-00.txt'. April 5, 2002. Expires: October 5, 2002. "This document describes an Extensible Provisioning Protocol (EPP) extension mapping for the provisioning and management of E.164 numbers representing domain names stored in a shared central repository. Specified in XML, this mapping extends the EPP domain name mapping to provide additional features required for the provisioning of E.164 numbers... This mapping, an extension of the domain name mapping described in ['Extensible Provisioning Protocol Domain Name Mapping'], is specified using the Extensible Markup Language (XML) 1.0 as described in [XML 1.0] and XML Schema notation as described in 'XML Schema Part 1: Structures' and 'XML Schema Part 2: Datatypes.' The Extensible Provisioning Protocol provides a complete description of EPP command and response structures. A thorough understanding of the base protocol specification is necessary to understand the mapping described in this document. Faltstrom/Mealling in 'The E.164 to URI DDDS Application' describe how the Domain Name System (DNS) can be used to identify services associated with an E.164 number. The EPP mapping described in this document specifies a mechanism for the provisioning and management of E.164 numbers stored in a shared central repository. Information exchanged via this mapping can be extracted from the repository and used to publish DNS resource records.. Examples used in this document were chosen specifically to illustrate provisioning concepts for the example resource records... Formal Syntax: An EPP object mapping is specified in XML Schema notation. The formal syntax presented here is a complete schema representation of the object mapping suitable for automated validation of EPP XML instances..." See the XML Schema (text). See also the Charter for the IETF Provisioning Registry Protocol WG. General references in: "Extensible Provisioning Protocol (EPP)." [cache]

  • [April 09, 2002] "Architectures for Intelligent Systems." By John F. Sowa. Revised version. 2002-04-09 or later. ['I have put a new version of the paper "Architectures for Intelligent Systems" on my web site; the major change from the earlier version is the addition of a new concluding section... At ICCS 2002 in Bulgaria, I'll be giving a tutorial on the topic of designing intelligent systems using conceptual graphs. Some of the material in this paper will be included. In particular, I would like to see an architecture along these lines developed as a project for the entire CG community. Existing systems and components could be incorporated into the project by passing CGIF messages to and from the blackboard. New systems could be designed around the blackboard from the start.'] "People communicate with each other in sentences that incorporate two kinds of information: propositions about some subject, and metalevel speech acts that specify how the propositional information is used -- as an assertion, a command, a question, or a promise. By means of speech acts, a group of people who have different areas of expertise can cooperate and dynamically reconfigure their social interactions to perform tasks and solve problems that would be difficult or impossible for any single individual. This paper proposes a framework for intelligent systems that consist of a variety of specialized components together with logic-based languages that can express propositions and speech acts about those propositions. The result is a system with a dynamically changing architecture that can be reconfigured in various ways: by a human knowledge engineer who specifies a script of speech acts that determine how the components interact; by a planning component that generates the speech acts to redirect the other components; or by a committee of components, which might include human assistants, whose speech acts serve to redirect one another. The components communicate by sending messages to a Linda-like blackboard, in which components accept messages that are either directed to them or that they consider themselves competent to handle... A major advantage of a flexible modular framework is that it doesn't have to be implemented all at once. The four design principles, which enabled Unix-like systems to be implemented on anything from a wearable computer to the largest supercomputers, can also support the growth of intelligent systems from simple beginnings to a large 'society of mind,' as Minsky [1985] called it. For an initial implementation, each of the four principles could be reduced to the barest minimum, but any of them could be enhanced incrementally without disturbing any previously supported operations..." See: "XML and 'The Semantic Web'."

  • [April 09, 2002] "XML Efforts Need Focus, GAO Says." By Diane Frank. In Federal Computer Week (April 08, 2002). "Despite multiple initiatives to define common federal standards and requirements for Extensible Markup Language, the lack of central XML guidance could derail interoperability within government, according to a General Accounting Office report... Several agencies oversee federal information policy and standards, starting with the Office of Management and Budget and including the National Institute of Standards and Technology, the General Services Administration and the Defense Information Systems Agency. But each is involved in developing some form of government XML business standards, and that approach will not help foster the intended governmentwide interoperability, according to report. GAO is recommending that OMB, along with NIST and the CIO Council's XML working group, develop a strategy for governmentwide adoption of XML to guide agencies and ensure that XML is addressed in each agency's enterprise architecture..." See a summary in the news item: "US General Accounting Office Releases XML Interoperability Report."

  • [April 08, 2002] "[HR-XML Response to 'Challenges to Effective Adoption of the Extensible Markup Language'.]" By Chuck Allen (Director, HR-XML Consortium, Inc.). Chuck Allen reports on some errors in the GAO report considered to be "damaging to the interests of HR-XML and to U.S. government interests." This document may be considered commentary to the report referenced in "US General Accounting Office Releases XML Interoperability Report" and (bibliographically) in the 'Articles' document. Initial segment: "Dear Mr. Nelligan: I am writing to request that the U.S. General Accounting Office act quickly to correct inaccuracies in the above-referenced report that are damaging to the organization I represent as well as to U.S. government interests... Although the report makes many references to the HR-XML Consortium and discusses the applicability of the Consortium's work to U.S. government entities, the author of the report did not contact HR-XML in researching information for the report. A key finding in the report is that 'potentially useful XML vocabularies are not ready for government-wide adoption.' I do not necessarily dispute this finding. However, I strongly object to information the report provides to support this finding as it relates to human resources (HR) management vocabularies. I am writing to provide an accurate accounting of the HR-XML Consortium's work and its goals. I request that GAO publish this letter as an addendum to the report or otherwise publish a fair and complete accounting of the Consortium's work. I also want to offer some advice to government stakeholders about how the development of vocabularies suitable for government adoption could be accelerated. At the risk of stating the obvious, I can assure you and representatives of any U.S. federal government organization that XML vocabularies for HR are not likely to be ready for government adoption without active involvement by government stakeholders. The sections that follow detail the problems with the report and my recommendations for more productive federal involvement within HR-XML..." See also: "HR-XML Consortium." [cache]

  • [April 08, 2002] "Adobe Supports XML With FrameMaker 7.0." By Matt Berger. In InfoWorld (April 08, 2002). "Adobe Systems... will begin shipping Adobe FrameMaker Version 7.0, the latest release of its software for creating content once and publishing it to a variety of media, a process called multichannel publishing. From a single user interface similar to that of a word processor, Adobe FrameMaker enables users to create content, such as a user manual or sales documentation, and publish it for use in a variety of settings, including the Web, handheld devices, and print. A new feature in the latest version is the ability to create content in XML... In addition to XML, FrameMaker can output content as an Adobe PDF (Portable Document Format) file, in HTML (Hypertext Markup Language) for display on a Web site, or in a format that can be displayed on an e-book reader or handheld device. The San Jose, Calif., graphics software maker is not alone in trying to make publishing easier by helping content creators to put out their work in a variety of formats. Corel recently acquired SoftQuad, which makes XMetaL, an application that can out put content in some of the same data formats as Adobe's software. A longtime player in the market, Arbortext makes another competing product, Epic Editor. Although each company promotes unique features in its software, they all agree on one thing: the importance of XML... A good example of the power of XML publishing is in the software business, where vendors may write a lengthy user manual for each product and then publish it as a printed book, as a CD-ROM and on the Web. Before XML-based tools were available, each time a new version of software was released, a company would have to go through the daunting task of updating the content in each version of its manual individually, Schiavone said..." See details and references in the news item: "Adobe Announces Enhanced XML Authoring Support in FrameMaker Version 7.0."

  • [April 08, 2002] "Adobe Enhancing FrameMaker Software." By David Becker. In CNET News.com (April 08, 2002). "Adobe Systems on Monday [2002-04-08] will announce the release of a new version of its FrameMaker publishing software, which continues the company's push into server programs. FrameMaker is a package of tools for composing documents for print and electronic distribution. The new FrameMaker 7.0 includes a server module and tools, based on XML (Extensible Markup Language), that can automatically reformat the same document for delivery in various forms, such as an HTML Web page, an Acrobat print file or a Palm handheld document... Other features in the new FrameMaker include templates to help desktop users optimize documents for various publishing formats and a simplified, point-and-click WYSIWYG interface that Adobe expects will make the software much more approachable to the average corporate worker. WYSIWYG refers to any technology that enables you to see images onscreen exactly as they will appear when printed out. The new FrameMaker follows several recent moves by Adobe to expand into the server arena. In January, the company released software for managing images throughout a corporate network. A few days later, Adobe announced it was acquiring Accelio, a small Canadian company that makes server software for managing data submitted through electronic forms. The moves have prompted speculation that Adobe may expand into content management software (CMS), server applications produced by companies such as Interwoven and Documentum that manage the flow of text data throughout a business..." Details and references in the news item: "Adobe Announces Enhanced XML Authoring Support in FrameMaker Version 7.0."

  • [April 08, 2002] "Government urges slow road to XML ." By Margaret Kane. In CNET News.com (April 08, 2002). ['A new report is cautioning federal agencies to go slowly in adopting XML.'] "The General Accounting Office found that, in the absence of any formal governmental policy or standard on XML, federal agencies might want to hold back before adopting the Web standard. But the GAO, Congress' investigative arm, did recommend that the government develop an overarching policy so agencies could take advantage of the new technology... XML would seem to be an ideal technology for the government because it could help various and diverse branches quickly exchange data. For instance, crime data could be easily transferred and accessed by multiple law enforcement agencies... The problem, the GAO found, is that while technical definitions of XML have been ironed out, the business definitions are, for the most part, still lagging. Some agencies have been trying to help create the definitions that they need; for instance, the Environmental Protection Agency has been working with state environmental agencies to develop standards for environmental information. But in most cases, the specialized definitions needed are either not yet ready, or not sufficient, for government use, the GAO report said. As an example, it pointed to a workgroup that's been trying to develop a set of human resources standards for the Office of Personnel Management. While a version of HR-related XML is being worked on by a nonprofit consortium, the standard has only two approved data definitions, the GAO said. Meanwhile, the government workgroup has created its own list of 984 data elements, some of which will likely not be included in the public standard because they are specific to government use..." See a summary in the news item: "US General Accounting Office Releases XML Interoperability Report."

  • [April 08, 2002] "Microsoft Revamps Specification for Web Searches." By Wylie Wong. In CNET News.com (April 08, 2002). "Microsoft is revising a data access specification that will allow companies to more easily search databases on the Web. The software kingpin on Monday plans to announce a new version of XML for Analysis, a specification intended to offer better support for data mining and Web services applications. To help with the effort, Microsoft also is to announce that SAS Institute, which specializes in data mining technology, has joined the project. Microsoft, along with Hyperion Solutions, a specialty company focused on data analysis software, created XML for Analysis last year. The specification uses the Simple Object Access Protocol (SOAP), a technology used in Web services, to let Web browser-based programs access back-end data sources for data analysis. The specification allows companies to build online analytical processing (OLAP) and data mining applications that work over the Web. OLAP and data mining are similar operations that entail searching databases for information to compile reports, such as how products are selling in a particular region or on a given date, and for analyzing business data to discover patterns and trends. XML for Analysis is meant to re-create for the Web existing corporate network-based data access specifications. Two older specifications, ODBC and OLE DB, backed by Microsoft and supported by the rest of the computing industry, were created in the 1990s as a standard way of accessing databases from client-based programs. ODBC works mostly with relational databases; OLE DB works with multiple data types, including text, video and data. Both specifications were aimed at programs working on local corporate networks, not the Internet. To address that limitation, Microsoft and its partners built XML for Analysis. The coalition of companies on Monday will announce plans to revise the specification within the next year and submit it to an industry standards body. Microsoft did not specify which standards body it will work with..." See the XML for Analysis (XMLA) web site and the announcement: "Microsoft, Hyperion Welcome SAS as Co-Chair on XML for Analysis Council. Industry Leaders Further Bolster XML for Analysis to Accelerate Deployment Of Web Services in Business Intelligence." Other references: (1) "XML for Analysis"; (2) Java OLAP Interface (JOLAP), JSR 69.

  • [April 08, 2002] "Document Publishing Vendors Tout XML." By Jennifer Mears. In Network World (April 08, 2002). "Adobe this week is releasing the latest version of its FrameMaker authoring software that includes native support for XML, as well as a server-based feature that will enable corporations to create documents that can be shared across workgroups and viewed in multiple formats. The release of FrameMaker 7.0 represents a big push for Adobe into the enterprise market as it strives to provide the tools and services corporations need to re-use and repurpose increasing amounts of content. It also reflects the growing trend among businesses to adopt XML as a way to share content among employees, partners and customers. Software company Corel also recently made a move into the enterprise market with the announcement of its Deepwhite strategy, which employs standards such as XML to create content that can used in a variety of ways. Arbortext is another vendor that has long offered an XML editor and multichannel publishing engine. Businesses are turning to content management systems to organize unstructured content, but find that without a standard such as XML time is spent repackaging content for different uses - print or the Web, for example... XML describes the data within a document, leaving it up to the receiving device to display it in the most appropriate manner. What's more, XML documents can be broken down into components, enabling businesses to reuse some components, while updating other portions of files, if necessary. What companies like Adobe, Corel and Arbortext are doing is providing businesses with a comfortable environment in which to create XML documents..." See details and references in the news item: "Adobe Announces Enhanced XML Authoring Support in FrameMaker Version 7.0."

  • [April 08, 2002] "UBL and Industry XML Standards." Position paper. By the OASIS Universal Business Language (UBL) TC, UBL Marketing Subcommittee. April 02, 2002 [or later]. "XML was designed to specify multiple data formats optimized for different data exchange applications. So the recent explosion of XML specifications resulting from the efforts of industry associations to develop domain-specific markup languages is neither unexpected nor detrimental. In one area, however, these otherwise beneficial XML industry initiatives are creating interoperability problems and impeding the development of inexpensive software. That area is the specification of XML schemas for common business documents such as purchase orders and invoices. While different industries frequently do have slightly different requirements for these common business forms, their similarities far outweigh their differences, and most of the work devoted to the design of these forms in each industry segment is simply wasted effort that would better be deployed in work on XML schemas for the data that is truly specific to a given industry. The goal of UBL is to standardize XML schemas for common business documents so that industry organizations can concentrate on the part of the data interchange problem in which they have special expertise and truly divergent needs... To coordinate the review of UBL schemas by industry organizations, the UBL TC has established a special group, the UBL Liaison Subcommittee, whose members are individuals formally appointed by industry consortia to represent their interests in the UBL work. The members of the UBL Liaison Subcommittee currently include ACORD (insurance industry), ARTS (retail sales), EIDX (electronics industry), RosettaNet (IT industry), X12 (EDI), and XBRL (accounting professionals)..." See "Universal Business Language (UBL)." [cache]

  • [April 08, 2002] "Microsoft Pushes Data Mining in Business Intelligence Protocol." By Paul Krill. In InfoWorld (April 08, 2002). "Looking to promote access to data mining functions via a Web services computing paradigm, SAS will take a co-chairmanship with Microsoft and Hyperion on the XML for Analysis Council, named for a protocol intended to provide an industry-standard messaging interface for accessing business intelligence functions. But an Oracle official charged the effort is Microsoft-proprietary. Hyperion, Microsoft, and SAS plan to announce the new co-chairmanship on Monday in an effort to boost data mining in the XML for Analysis Protocol, which is being designed for access to business intelligence applications, such as OLAP, from databases and other applications such as ERP, message brokers, and portals... XML for Analysis is a SOAP-based XML API for standardizing data access between clients and a data provider over the Web, according to Microsoft. Eventually, the protocol will be submitted as a potential industry standard to an organization such as the W3C, said Sheryl Tullis, product Manager for Microsoft SQL Server, in Redmond, Wash. Version 1.0 of the specification was released a year ago. A draft update to the specification is anticipated shortly... A Microsoft spokesperson rejected Shimp's claims, saying XML for Analysis is for access to diverse data sources, not just Microsoft sources. More than 20 companies have joined the council while Oracle declined an invitation to participate in the effort, the spokesperson said. Hyperion and SAS actually are in both the XML for Analysis and JOLAP camps. A Hyperion official, John Poole, who holds the corporate title of distinguished software engineer, is even chairing the JOLAP committee formed to develop that specification, said Hyperion's Ragnar Edholm, director of strategy and planning, in Sunnyvale, California JOLAP is for Java programmers while XML for Analysis is similar to SQL and can be used in multiple programming languages..." See the XML for Analysis (XMLA) web site and the announcement: "Microsoft, Hyperion Welcome SAS as Co-Chair on XML for Analysis Council. Industry Leaders Further Bolster XML for Analysis to Accelerate Deployment Of Web Services in Business Intelligence." Other references: (1) "XML for Analysis"; (2) Java OLAP Interface (JOLAP), JSR 69.

  • [April 06, 2002] "A Pragmatic Convergence of ebXML and Web Services. Is This a Second Chance for the Industry?" [Contrary Opinion.] By John Ogilvie (Killdara). In XMl Journal Volume 3, Issue 4 (2002). "... There is the de facto consensus view of how to build out basic Web services (see Table 1). The ebXML architects pointed the way in this by abandoning their own attempts to define new transport/routing protocols, instead endorsing SOAP... Standards-writing bodies should take a couple of years off. We have more than enough overlapping XML standards. Like the existing HTML/HTTP standards we depend on today, the existing XML standards are imperfect but workable. It's better to have every vendor implementing the same imperfect standard than to have the industry in limbo waiting for the upcoming perfected standard. Standards-supporting bodies such as the National Institute for Standards and Technology (NIST) and OASIS have a pivotal role to play. These bodies should continue their historical role and build conformance test-beds on the Internet for the de facto Web services standards discussed above. Industry groups such as OAGI, RosettaNet, and HL7 would do well to focus their efforts on defining solid, simple XML payload for their respective industries. Vendors should support this basic Web services framework in their products. It's okay if they sneer at this basic framework as 'lowest-common-denominator' and up-sell customers on their advanced extensions. If there are enough customers who want these advanced extensions, they will end up self-organizing into higher-performance subnetworks. Customers should insist on vendors providing clean support for this 'basic' Web services model. Unless your implementation of Web services is interoperable with others, you're not going to have access to the full universe of available services... There is evidence that this pragmatism has occurred to others as well. On January 3, 2002, the B2B auto exchange Covisint made a firm public commitment to ebXML - but only for the TRP (SOAP) and payload (OAGI) portions. Three weeks later, in an independent effort, a commercial consortium including EDS, Sun, and Killdara demonstrated an ebXML-based network for the retail side of this same industry, 'Dealersphere.' Six software applications communicated using HTTP, ebXML TRP (extended SOAP), and XML payloads derived from the work of the STAR/OAGI standards effort. The security and directory aspects of ebXML weren't implemented, but will be in the production version (assuming the relevant standards are firmed up in 2002). The formation of the Web Services Interoperability (WS-I) consortium in February is a recognition of this problem. We can hope that WS-I will be a pragmatic project, keeping a sharp focus on hard-core interoperability specs and testing..."

  • [April 06, 2002] The Intelligent Wireless Web. Notice for a book (not reviewed) and a useful website. The book is authored by H. Peter Alesso and Craig F. Smith. Addison-Wesley, 2002. ISBN: 0201730634. The book [according to HPA] "presents a coherent picture of the Next Generation Web by weaving together - XML Web Services, 3G wireless devices, speech recognition, artificial intelligence and more... This book explores technology developments in speech recognition, mobile wireless devices, network integration, and software that will be far more responsive to our informational and transactional needs... The Intelligent Wireless Web examines the convergence and synergy among five key technological components: speech used as a primary user interface; wireless personal area networks (WPANs); an integrated wired/wireless network infrastructure; supporting wireless protocols; and intelligent applications. It investigates available technologies and standards that are currently being developed to bring these goals into the mainstream of Internet use: (1) Speech recognition and understanding text-to-speech generators, and Speech Synthesis Markup Language (SSML); (2) Personal Area Networks (PANs), Bluetooth, Jini, and Universal Plug & Play; (3) Spread Spectrum, wireless networks, and the IEEE 802.11 standard; (4) Wireless handheld devices and third-generation TDMA and CDMA; (5) Mobile IP, Wireless Application Protocol (WAP), and Wireless Markup Language (WML); (6) Web Services, .NET, J2EE, SOAP, UDDI, WSDL, and XML; (7) Machine learning and Distributed Artificial Intelligence; (8) Semantic Web Architecture... Appendices list standards organizations, protocols, and security issues. Alesso has 20 years of experience as a group leader at Lawrence Livermore National Laboratory... Ongoing research projects, such as MIT's Project OXYGEN, are used throughout to illustrate elements of the intelligent wireless Web in action. With an understanding of the trends, goals, and technologies described in The Intelligent Wireless Web you will be well-positioned to develop your own strategic planning for the coming world of the ubiquitous Internet. WebIQ.com "is a software research laboratory specializing in developing Web Services capable of running over Semantic Web Architecture."

  • [April 06, 2002] "Pace Picks Up for Biometrics Standards Development. NIST, INCITS, ASC X9, OASIS Work in Tandem to Standardize Biometric Systems." Based upon an interview with Phil Griffin. In ANSI Online (April 03, 2002). "Validating the identity of an individual based simply on their driver's license photograph, password or handwritten signature will no longer be considered the securest method. With the advent of several new standards initiatives on Biometrics, authentication will be achieved through physiological or behavioral characteristics. Accelerated standards development in this critical area is being achieved through the collaborative efforts of formal standards groups, the federal government and consortia. The U.S. government is by far the biggest user of biometric authentication systems. The U.S. Department of Defense, the National Security Agency, the Departments of State, Justice and Transportation (namely the Federal Aviation Administration) and the Federal Bureau of Investigation rely on biometric technologies for data protection, secure access and sign-on applications and more. Essentially, biometrics are automated methods of identifying a person or verifying the identity of a person based on a physiological or behavioral characteristic. Physiological characteristics include hand or finger images, facial characteristics, speaker verification and iris recognition. Behavioral characteristics are traits that are learned or acquired including dynamic signature verification and keystroke dynamics. Simply stated, unlike conventional identification methods such as a driver's license or 'PIN' number that validate identity based on 'what you have or know,' biometrics validate identity based on 'who you are.' It's far easier to falsify an ID card or forge a handwritten signature than it is to alter a voice pattern or change the configuration of an iris... Signed into law by President Bush on October 26, 2001, the Patriot Act (Public Law 107-56) requires the development of technology standards to confirm identity. Leading biometric standardization at the federal level is the National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce and an ANSI member. NIST spearheaded the development of the Common Biometric Exchange File Format (CBEFF), which defines a common set of data elements necessary to support multiple biometric technologies. CBEFF also promotes interoperability of biometric-based application programs and systems by allowing for biometric data exchange... Many in the biometrics industry believe that the financial community has the most to gain from biometric standardization. The newly approved Accredited Standards Committee (ASC) X9 document, X9.84 Biometric Information Management and Security, is geared to integrate biometric information, such as fingerprint, iris scan and voiceprint, for use in the financial services industry. ASC X9, the national standards-setting body for the financial services industry and an ANSI-accredited standards developer, is supported by the American Banker's Association which serves as the group's secretariat. The X9.84 standard defines requirements for managing and securing biometric information such as customer identification and employee verification and was developed in cooperation with two of the industry groups already mentioned in this article, including NIST and BioAPI Consortium, as well as others. 
According to NIST's Fernando Podio, who is also co-chair of the Biometric Consortium and the CBEFF development group, 'The approval of the ANS X9.84 standard is an important step for the biometrics industry. It will be extremely important in accelerating the utilization of highly secure biometric-based financial applications.' One of the latest groups to join the biometrics arena is the Organization for the Advancement of Structured Information Standards (OASIS), which has recently formed a Technical Committee (TC) to create an XML Common Biometric Format (XCBF)..." See "XML Common Biometric Format (XCBF)" and the OASIS TC website.

  • [April 06, 2002] "Managing structured Web service metadata. The realities of managing Web service metadata." By Uche Ogbuji (Principal Consultant, Fourthought, Inc.). From IBM developerWorks, Web services. April 2002. ['This article builds on an earlier developerWorks article on using the Resource Description Framework (RDF) to enhance WSDL, and related to a recent article on using SOAP with RDF. Uche Ogbuji looks at how updates in WSDL affect the techniques presented earlier, and draws on the significant discussion of RDF and Web services description to show how developers can use both to their advantage.'] "About a year and a half ago, I took a close look of how the then just announced Web Services Description Language (WSDL) could benefit from interacting with Web metadata format, RDF, either at the standards level, or at the level of individual developer effort. Since then, there has been a huge amount of activity in the Web services community and in the RDF (and Semantic web) community. Some of this has been very good, as dialog between these groups has taken root, and the technologies put forth by both groups have been improved. Some has also been bad, as the press has somehow conjured up a fantastical struggle between Web services camps and Semantic Web camps at the W3C, fighting for the resources of the consortium. All these developments are a great insight into the development and interaction of next-generation technologies, but in this article, I shall distill the most important developments of interest to developers... The recent RDF model theory and RDF/XML syntax specifications motivate some changes to the approach I have taken for mapping from WSDL to RDF. Other factors have also weighed in. In particular, Eric Prud'hommeaux has worked to improve on my approach by trying to make a mapping of the underlying intent behind the WSDL elements, rather than my own mechanical approach to the mapping. Certainly such a mapping would be more useful for general RDF integration, and one hopes that it is such a rich mapping that it will be accepted by the Web services description working group. I, however, also had a particular aim in my own approach: to provide a mapping straightforward enough to be palatable for non-RDF-types, and as close as possible to an identity transform from XML. With the increasing cooperation between the Web services and RDF camps, such a mechanical mapping is no longer a great necessity, so I shall look a bit more at the big picture for developers... developments in Web services and RDF provide not only richer tools for developer looking to take advantage of both, but a greater spirit of cooperation between the two efforts. The RDF query of WSDL metadata that I demonstrated can easily enough be duplicated using SQL queries, or API calls against some data binding of WSDL, but the benefit of using RDF is that one can then mix WSDL with other important RDF-based metadata, including content description and syndication formats (for example, Prism and RSS), privacy profiles (for example, P3P), and other emerging RDF applications. The more generic the management of Web services information, the more readily productivity output can be enhanced by improved integration..." See "Resource Description Framework (RDF)."

  • [April 06, 2002] "Guidelines For The Use of XML in IETF Protocols." By Scott Hollenbeck (VeriSign, Inc.), Marshall T. Rose (Dover Beach Consulting, Inc.), and Larry Masinter (Adobe Systems Incorporated; WWW). IETF Network Working Group, Internet-Draft. Reference: 'draft-hollenbeck-ietf-xml-guidelines-00.txt'. April 5, 2002. Expires: October 4, 2002. 22 pages. Also available in XML format, using the 'rfc2629.dtd'; see "Using XML for RFCs." Excerpts: "The Extensible Markup Language (XML) is a framework for structuring data. While it evolved from SGML -- a markup language primarily focused on structuring documents -- XML has evolved to be a widely- used mechanism for representing structured data. There are a wide variety of Internet protocols; many have need for a representation for structured data relevant to their application. There has been much interest in the use of XML as a representation method. This document describes basic XML concepts, analyzes various alternatives in the use of XML, and provides guidelines for the use of XML within IETF standards-track protocols... This document is intended to give guidelines for the use of XML content within a larger protocol... It is the goal of the authors that this draft (when completed and then approved by the IESG) be published as a Best Current Practice (BCP)." Section 2 lists 'XML Selection Considerations': "XML is a tool that provides a means towards an end. Choosing the right tool for a given task is an essential part of ensuring that the task can be completed in a satisfactory manner. This section describes factors to be aware of when considering XML as a tool for use in IETF protocols [...]" Section 3 discusses 'XML Alternatives': This document focuses on guidelines for the use of XML, but it's useful to consider why one would use XML as opposed to some other mechanism. This section considers some other commonly used representation mechanisms and compares XML to those alternatives. For example, Abstract Syntax Notation 1 (ASN.1) along with the corresponding Basic Encoding Rules (BER) are part of the OSI communication protocol suite, and have been used in many subsequent communications standards (e.g., the ANSI Information Retrieval protocol and the Simple Network Management Protocol (SNMP). The eXternal Data Representation (XDR) and variations of it have been used in many other distributed network applications (e.g., the Network File System protocol). With ASN.1, data types are explicit in the representation, while with XDR, the data types of components are described externally as part of an interface specification..." Section 4 'XML Use Considerations and Recommendations' notes several aspects of XML and makes recommendations for use. [cache .txt, cache XML format]

  • [April 06, 2002] "GAO Says XML Not Ready For Extensive Government Use." By Patrick Thibodeau. In InfoWorld (April 05, 2002). "XML, the data exchange standard that is being deployed by federal agencies to improve interoperability, is not a mature standard. And without centralized leadership, XML implementations by various agencies could actually hurt the interoperability of government systems, the U.S. General Accounting Office said in a report released Friday. The GAO warned that unless the President's Office of Management Budget improves XML adoption planning, the government could end up with XML implementations that defeat a key goal of the Bush administration's proposed $52 billion federal IT budget -- improving the ability of federal government's vast data network to interoperate... The absence of a complete set of XML standards poses potential development pitfalls "that could limit its potential to facilitate broad information exchange or adversely affect interoperability," the agency said. The lack of complete standards could prompt agencies to develop their own data definitions and proprietary extensions and make changes that could hurt system security. The strong leadership approach advocated by the GAO is not too dissimilar from what occurs in the private sector, said Rob Perry, an analyst at The Yankee Group in Boston. In specific industries, large companies such as an automobile maker often mandate XML standards for business partners..." See a [different] summary in the news item: "US General Accounting Office Releases XML Interoperability Report."

  • [April 05, 2002] "Is Speech Recognition Becoming Mainstream?" By Savitha Srinivasan and Eric Brown (IBM Almaden Research Center). In IEEE Computer Volume 35, Number 4 (April 2002), pages 38-41. IEEE Computer Society. ISSN: 0018-9162. This Guest Editors' Introduction provides an introduction to speech recognition in its two primary modes (using speech as spoken input, or as a data or knowledge source), an introduction to VoiceXML, and an overview of other articles in this IEEE Computer Special Issue on Speech Recogntion. ['Combining the Web's connectivity, wireless technology, and handheld devices with grammar-based speech recognition in a VoiceXML infrastructure may finally bring speech recognition to mass-market prominence.'] "... At the simplest level, speech-driven programs are characterized by the words or phrases you can say to a given application and how that application interprets them. An application's active vocabulary -- what it listens for -- determines what it understands. A speech recognition engine is language-independent in that the data it recognizes can include several domains. A domain consists of a vocabulary set, pronunciation models, and word usage models associated with a specific speech application. It also has an acoustic component reflected in the voice models the speech engine uses during recognition. These voice models can be either unique per speaker or speaker-independent. The domain-specific resources, such as the vocabulary, can vary dynamically during a given recognition session. A dictation application can transcribe spoken input directly into the document's text content, a transaction application can facilitate a dialog leading to a transaction, and a multimedia indexing application can generate words as index terms. In terms of application development, speech engines typically offer a combination of programmable APIs and tools to create and define vocabularies and pronunciations for the words they contain. A dictation or multimedia indexing application may use a predefined large vocabulary of 100,000 words or so, while a transactional application may use a smaller, task-specific vocabulary of a few hundred words. Although adequate for some applications, smaller vocabularies pose usability limitations by requiring strict enumeration of the phrases the system can recognize at any given state in the application. To overcome this limitation, transactional applications define speech grammars for specific tasks. These grammars provide an extension of the single words or simple phrases a vocabulary supports. They form a structured collection of words and phrases bound together by rules that define the set of speech streams the speech engine can recognize at a given time. For example, developers can define a grammar that permits flexible ways of speaking a date, a dollar amount, or a number. Prompts that cue users on what they can say next are an important aspect of defining and using grammars. It turns out that speech grammars are a critical component of enabling the Voice Web... The Voice Web -- triggered by the connectivity that wireless technology and mobile devices offer -- may be the most significant speech application yet. Developers originally included speech recognition technology in the device, but now they house this technology on the server side. 
This trend could lead to the development of powerful mass-market speech recognition applications such as (1) voice portals that provide instant voice access to news, traffic, weather, stocks, and other personal information; and (2) access to corporate information to streamline business processes within the enterprise. With the advent of VoiceXML, the Voice Web has become the newest paradigm for using technology to reinvent e-commerce. VoiceXML lets users make transactions via the telephone using normal speech without any special equipment. Thus, combining the Web's connectivity, wireless technology, and handheld devices with effective grammar-based speech recognition in a VoiceXML infrastructure may finally lead to the elusive mass market that speech recognition developers have chased for decades." See "VoiceXML Forum."
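
As a minimal sketch of how such grammars surface in VoiceXML, consider the hypothetical dialog below; the field name, prompt text, and inline grammar (shown in JSGF-style alternation, one of the grammar formats VoiceXML 1.0 platforms accepted) are all invented for illustration.

    <?xml version="1.0"?>
    <vxml version="1.0">
      <form id="transfer">
        <field name="amount">
          <prompt>How much would you like to transfer?</prompt>
          <!-- The grammar enumerates exactly what the engine listens for -->
          <grammar type="application/x-jsgf">
            ten dollars | twenty dollars | fifty dollars
          </grammar>
          <filled>
            <prompt>Transferring <value expr="amount"/>.</prompt>
          </filled>
        </field>
      </form>
    </vxml>

The prompt/grammar pairing is the point: the prompt cues the caller toward utterances the active grammar can actually recognize.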

  • [April 05, 2002] "Active XML: A Data-Centric Perspective on Web Services." By Serge Abiteboul (INRIA), Omar Benjelloun, Tova Milo, Ioana Manolescu, and Roger Weber. Verso Report, Number 213. 2002. 13 pages (with 48 references). "We propose a peer-based architecture that allows for the integration of distributed data and web services. It relies on a language, Active XML, where (1) documents embed calls to web services that are used to enrich them, and (2) new web services may be defined by XQuery queries on such active documents. Embedding calls to functions or even to web services inside data is not a new idea. Our contribution, however, is turning them into a powerful tool for data and services integration. In particular, the language includes linguistic features to control the timing of service call activations. Various scenarios are captured, such as mediation, data warehousing, and distributed computation. A first prototype is described... We propose Active XML ('AXML' in short) a language that leverages web services for data integration and is put to work in a peer-to-peer architecture. The language enables embedding service calls inside an XML document that is enriched by their results when they are fired... The AXML paradigm allows to turn service calls embedded in XML documents into a powerful tool for data integration. This includes in particular support for various integration scenarios like mediation and data warehousing and distributing computations over the net via the exchange of AXML documents..." source, Postscript]

  • [April 05, 2002] "Electronic Government: Challenges to Effective Adoption of the Extensible Markup Language." By United States General Accounting Office (GAO). Report to the Chairman, Committee on Governmental Affairs, U.S. Senate. Document reference: GAO-02-327. April 5, 2002. 73 pages. [Submitted to The Honorable Joseph I. Lieberman, Chairman, Committee on Governmental Affairs, United States Senate. From David L. McClure, Director, Information Technology Management Issues.] "Dear Mr. Chairman: This report responds to your request that we review the status of Extensible Markup Language (XML) technology and the challenges the federal government faces in implementing it. XML is a flexible, nonproprietary set of standards designed to facilitate the exchange of information among disparate computer systems, using the Internet's protocols. Specifically, we agreed to assess (1) the overall development status of XML standards to determine whether they are ready for governmentwide use and (2) challenges faced by the federal government in optimizing its adoption of XML technology to promote broad information sharing and systems interoperability. The report recommends that the director of the Office of Management and Budget (OMB) take steps to improve the federal government's planning for adoption of XML..." [Results in brief:] "Many standards-setting organizations in the private sector are creating various XML business standards, and it will be important for the federal government to adopt those that achieve widespread acceptance. However, it is not yet clear which business standards meet this criterion. In addition, key XML vocabularies tailored to address specific industries and business activities are still in development and not yet ready for governmentwide adoption. Given that a complete set of XML-related standards is not yet available, system developers must be wary of several pitfalls associated with implementing XML that could limit its potential to facilitate broad information exchange or adversely affect interoperability, including (1) the risk that redundant data definitions, vocabularies, and structures will proliferate, (2) the potential for proprietary extensions to be built that would defeat XML's goal of broad interoperability, and (3) the need to maintain adequate security. In addition to these pitfalls, which all systems developers must address, the federal government faces additional challenges as it attempts to gain the most from XML's potential. Specifically: (1) No explicit governmentwide strategy for XML adoption has been defined to guide agency implementation efforts and ensure that agency enterprise architectures address incorporation of XML; (2) The needs of federal agencies have not been uniformly identified and consolidated so that they can be represented effectively before key standards-setting bodies...; (3) The government has not yet established a registry of government-unique XML data structures (such as data element tags and associated data definitions) that system developers can consult when building or modifying XML-based systems...; (4) Much also needs to be done to ensure that agencies address XML implementation through enterprise architectures so that they can maximize XML's benefits and forestall costly future reworking of their systems..." 
Recommendations for Executive Action: "Given the statutory responsibility of OMB to develop and oversee governmentwide policies and guidelines for agency IT management, we recommend that the director of OMB, working in concert with the federal CIO Council and NIST, develop a strategy for governmentwide adoption of XML to guide agency implementation efforts and ensure that the technology is addressed in agency enterprise architectures. This strategy should, at a minimum, address how the federal government will address the following tasks: (1) Developing a process with defined roles, responsibilities, and accountability for identifying and coordinating government-unique requirements and presenting consolidated, focused input to private sector standards-setting bodies during the development of XML standards... (2) Developing a project plan for transitioning the CIO Council's pilot XML registry effort into an operational governmentwide resource... (3) Setting policies and guidelines for managing and participating in the governmentwide XML registry, once it is operational, to ensure its effectiveness in promoting data sharing capabilities among federal agencies..." [cache, alt URL]

  • [April 05, 2002] "From JDOM to XmlDocument." By Niel Bornstein. From XML.com. April 03, 2002. ['Our main feature this week is the second part in our ongoing series on learning .NET XML programming in C#. In this series, Niel Bornstein focuses on taking typical Java XML programming tasks and investigating how they can be ported to Microsoft's C#/.NET platform. In this installment, Niel looks at DOM programming and ports a Java JDOM application to use .NET's XmlDocument API.'] "The Microsoft .NET framework is becoming well known for its integration of XML into nearly all data-manipulation tasks. In the first article in this series, I walked through the process of porting a simple Java application using SAX to one using .NET's XmlReader. I concluded that there are advantages and disadvantages to each language's way of doing things, but pointed out that if you are not a fan of forward-only, event-based XML parsing, neither one will really fit your fancy. This article focuses on porting whole-document XML programs from Java to C#. [...] I've shown you how to port your SAX Java code to XmlReader and your JDOM Java code to XmlDocument, with a small helping of XPath. These are the basic technologies that most developers are familiar with, and you should now be ready to apply them in your C# programming. But the original task I set out to accomplish was to see what could be learned from Microsoft's XML APIs. In my first article, I concluded that Microsoft's one-stop-shopping is both positive and negative, depending on your point of view. However, I'm beginning to see a greater benefit to this single source of objects; the XmlNodeType that you deal with in XmlReader is exactly the same object that you deal with in DOM. This could easily have the benefit of shortening your learning cycle, as well as making your code more reusable. The Java community could certainly stand to learn something here. In the next installment of this series, I'll take another look at the venerable RSSReader, and make it a better C# program by using XmlReader the way it was meant to be used, as a pull-parser. And I'll compare that to some pull-parsers in the Java world." See also C# in a Nutshell, by Peter Drayton and Ben Albahari (O'Reilly; ISBN: 0-596-00181-9).

  • [April 05, 2002] "Putting Attributes to Work." By Bob DuCharme. From XML.com. April 03, 2002. ['Bob DuCharme's "Transforming XML" column this month looks at the many ways that attributes from a source XML document can be processed in XSLT.'] "Once an XML document is read into a source tree, where an XSLT stylesheet can get a hold of it, there's a lot that the stylesheet can do with the source document's attribute values. This month we'll look at how to convert attributes to result tree elements, how to grab attribute values for more general use, how to test attribute values for specific values, and how to define groups of attributes for easier re-use in multiple places in the result tree... Attributes can play a role in almost any aspect of XSLT development, and this column has outlined just a few of the ways to take advantage of them. See my book XSLT Quickly for even more on ways to take advantage of attribute values in your source documents..." For related resources, see "Extensible Stylesheet Language (XSL/XSLT)."

  • [April 05, 2002] "TAG Watch." By Kendall Grant Clark. From XML.com. April 03, 2002. ['This week's XML-Deviant column catches up with the W3C's Technical Architecture Group (TAG). The TAG was chartered by the W3C to take decisions on key matters in the architecture of the Web, and over the past few months has gotten down to serious business: running head-on into some of the gnarliest issues to plague the XML world over recent years. In "TAG Watch", Kendall Clark examines the TAG's progress so far, looking at the issues it has chosen to pursue, and those it has chosen to decline.'] "Now that a full two months have passed, it seems a good time to look again at what the TAG has accomplished, what's on its agenda at the moment, and where it might be going next. In order to answer those questions, I will be examining the TAG's issues list, the progress and the discussion surrounding parts of its shared document work, as well as a practical issue or two... Browsing the TAG's issues list is an excellent way to come up to speed with the group's work... In addition to the ad hoc, piecemeal work of responding to issues raised by the general Web and XML communities, the TAG is also working on (so far non-authoritative) documents addressing various issues of Web architecture. The TAG's Architecture Categorization seems to be serving as a kind of shared outline or roadmap of the parts of Web architecture which are slated for TAG documentation. A large part of the TAG list traffic during February and March was response, discussion, and commentary on three documents which are part of the Architecture Categorization: "Introduction" by Tim Bray and Dan Connolly, "What Does a URI Identify?" by Norm Walsh and Stuart Williams, and "What Does a Document Mean?" by Paul Cotton and Norm Walsh..."

  • [April 04, 2002] "Making P2P interoperable: Creating Jxta systems. Design P2P systems that extend beyond traditional network boundaries." By Sing Li (Author of Early Adopter Jxta, Wrox Press). From IBM developerWorks, Java technology. April 2002. 'JXTA technology is a set of open, generalized peer-to-peer (P2P) protocols, defined as XML messages, that allow any connected device on the network ranging from from cell phones and wireless PDAs to PCs and servers to communicate and collaborate in a P2P manner.' ['With the rise in popularity of mobile computing and the pervasive application of embedded networkable microprocessors, the TCP/IP protocol is finally showing its age. Jxta has been designed from its inception to extend the reach of the Internet beyond the limitations of today's TCP/IP-based network. In this final installment of developerWorks' series on Jxta, Sing Li illustrates a system that showcases this extension and solves a practical problem. You'll see that Jxta isn't bound by the constraints typical of client-server networks.'] "So far in this series, we've taken a nuts-and-bolts look at how Jxta, a new P2P platform with a Java reference implementation, works. In the first installment, we discovered interoperable features of Jxta. Defined as a set of interoperable protocols, Jxta can be implemented across hardware platforms, operating systems, and programming languages. We also covered the operational model of Jxta and many important concepts, including peers, peergroups, services, and pipes. In the second installment, our focus was on getting Jxta up and running. We explored one Jxta application -- the Jxta shell -- and walked through a scenario for creating pipes and sending messages from one peer to another. We also gained our first programming experience with the Jxta API when we wrote a Jxta shell extension. Thus far, our approach to Jxta has been from the bottom up. This is most natural for those of us with system programming and network engineering backgrounds. In this third and final article of our series, we are turning the world upside down. Working from the perspective of those who work in application-level design and architecture, this article takes a top-down look at Jxta. We start with a specific example problem, analyze and design a solution to that problem, and show how Jxta solves the problem naturally. Along the way, we discover how Jxta changes the networking landscape by juxtaposition, and we also present the design and code of a Jxta service and client. To really highlight the value added by Jxta, I have chosen as an example a global weather information collection system. However, the problem and solution have characteristics that are typical in many business scenarios, such as mobile sales force automation, commodities trading, content distribution, and business-to-business e-commerce, just to name a few. One can juxtapose P2P networks alongside conventional client/server networks to build new network solutions that extend well beyond today's static boundaries. The open source Jxta platform will be a vehicle for these new solutions. I hope this series has inspired you to explore the possibilities that Jxta offers." Also in PDF format.

  • [April 04, 2002] "Collaboration-Protocol Profile and Agreement Specification." By Members of the OASIS ebXML Collaboration Protocol Profile and Agreement Technical Committee: Selem Aissi, Arvola Chan, James Bryce Clark, David Fischer, Tony Fletcher, Brian Hayes, Neelakantan Kartha, Kevin Liu, Pallavi Malu, Dale Moberg, Himagiri Mukkamala, Peter Ogden, Marty Sachs, Yukinori Saito, David Smiley, Tony Weida, Pete Wenzel, and Jean Zheng. Version 1.11. April 04, 2002. 158 pages. "As defined in the ebXML Business Process Specification Schema, a Business Partner is an entity that engages in Business Transactions with another Business Partner(s). The Message-exchange capabilities of a Party may be described by a Collaboration-Protocol Profile (CPP). The Message-exchange agreement between two Parties may be described by a Collaboration-Protocol Agreement (CPA). A CPA may be created by computing the intersection of the two Partners' CPPs. Included in the CPP and CPA are details of transport, messaging, security constraints, and bindings to a Business-Process-Specification (or, for short, Process-Specification) document that contains the definition of the interactions between the two Parties while engaging in a specified electronic Business Collaboration. This specification contains the detailed definitions of the Collaboration-Protocol Profile (CPP) and the Collaboration-Protocol Agreement (CPA)... The objective of this specification is to ensure interoperability between two Parties even though they may procure application software and run-time support software from different vendors. The CPP defines a Party's Message-exchange capabilities and the Business Collaborations that it supports. The CPA defines the way two Parties will interact in performing the chosen Business Collaboration. Both Parties shall use identical copies of the CPA to configure their run-time systems. This assures that they are compatibly configured to exchange Messages whether or not they have obtained their run-time systems from the same vendor. The configuration process may be automated by means of a suitable tool that reads the CPA and performs the configuration process..." See also the XML schema.

  • [April 04, 2002] "ADL and SCORM: Creating a Standard Model For Publishing Courseware. [SCORM Sets a Standard For Publishing Courseware.]" By Mike Letts. In Seybold Report: Analyzing Publishing Technology [ISSN: 1533-9211] Volume 2, Number 1 (April 8, 2002). ['Founded by the U.S. Dept. of Defense in 1997, the Advanced Distributed Learning initiative is a public-private partnership working to create the globally distributed learning environment of the future. Where do higher-education, medical and IT publishers fit in?'] "The U.S. Defense Department buys a lot of instructional materials. Working with academic institutions, standards bodies, private corporations and military suppliers, it is developing a Sharable Content Object Reference Model (SCORM) for reusable modules of learning content that it can use anywhere in the world. Far from playing the role of 800-pound gorilla, the DoD has adopted existing standards wherever possible. But it can't wait for the marketplace to perfect distance-learning technology, and publishers can't afford to ignore its purchasing power... Version 1.2 of the SCORM standard was released in October of last year, and version 1.3 will be released this month. One of the main reasons SCORM has been so readily adopted is that nearly all of the guidelines have been adopted from the various industry segments it serves. As John Purcell put it, 'SCORM is really nothing different from regular XML markup. The ADL cherry-picked the best pieces from these various organizations and wrapped it up. Taking popular and proven XML elements from various industries has been critical to SCORM's success.' In contrast with the way that most computer-based instruction is written today, SCORM demands that content be authored to stay independent from any larger contexts. A lesson on how to apply gauze bandages, for example, must be written in a way that does not depend on it following instructions for disinfecting a wound. By breaking instructional materials down into discrete, self-contained chunks, the DoD hopes to make it possible for its own staff to build customized curricula at a very detailed level... SCORM also requires content to stay independent from the software used to render it. This formal separation of content from software is consistent with the decades-old move toward SGML and XML in high-end documentation. But it is very different from most computer-based instruction materials, which are typically authored in a particular software environment. As a result, SCORM can be expected to spur development of XML-based authoring tools and e-learning systems. The uptake in the civilian economy will not be as explosive as it will be within the defense industry, but the safe bet is that SCORM will see widespread adoption among education publishers. The DoD isn't the only customer interested in SCORM, and publishers are already paying attention. Some publishers have already moved beyond the research phase to trial implementations. AAP and ADL now count approximately 20 SCORM-compliant pilot networks, which are allowing major education publishers to explore the potential ROI of learning objects. The Academic ADL Co-Lab at the University of Wisconsin is helping nearly 40 higher-education institutions evaluate SCORM-compliant tools and technologies. Despite potential roadblocks to SCORM adoption in the private sector, publishers need to take note of SCORM and the DoD's ADL initiative if they don't want to miss the boat..." 
See: "Shareable Content Object Reference Model Initiative (SCORM)."

  • [April 03, 2002] "Apache SOAP Type Mapping, Part 1: Exploring Apache's Serialization APIs. Learn how to translate the data types in your apps into XML." By Gavin Bong (Software Engineer, eUtama Sdn. Bhd.). From IBM developerWorks, Web Services. April 2002. ['SOAP defines a simple wire protocol for transferring application-level data. This protocol can easily carry arbitrary Java types as serialized XML, thanks to its rich and extensible type system. In this article, the first of a two-part series on the type system support found in the Apache SOAP toolkit, Gavin Bong will introduce you to the theoretical underpinnings of SOAP's type system. You'll also learn more about SOAP's programmatic support for serialization and deserialization and conclude with an exploration into the toolkit's internals. A better understanding of how these processes work will help you build your own distributed systems.'] "When data is exchanged electronically, the endpoints of the exchange need to agree in advance on two things: an interaction pattern and a type system. The former is concerned with the architecture of the communication channel (for example, point-to-point versus many-to-one, or blocking versus asynchronous). The latter, on the other hand, is an agreed data format for use in encoding and decoding messages. In this article, I will describe the type system in SOAP, as applicable to the Apache SOAP toolkit. Although the current incarnation of the SOAP toolkit supports both messaging and RPC interaction patterns, this article will concentrate on the latter. Unless otherwise stated, the features discussed in this article apply to Version 2.2 of Apache SOAP, released in May 2001. Where appropriate, I will highlight bug fixes or interface changes that have occurred since that release. [...] In this article, I've explored most of Apache SOAP's out-of-the-box API support for (de)serializing Java types. I've also covered some of the idiosyncracies that may inhibit interoperability with other Java- or non-Java-based SOAP toolkits. In the next installment, I will introduce a cookbook that will guide you in writing your own (de)serializers. You'll also see how you can work with SOAP's non-Section 5 encodings by utilizing a schema language." Article also available in PDF format. See "Simple Object Access Protocol (SOAP)."

  • [April 03, 2002] "Transactional Attitudes: Reliable Composition of Autonomous Web Services." By Thomas Mikalsen, Stefan Tai, and Isabelle Rouvellou (IBM T.J. Watson Research Center, New York, USA). Paper prepared for the June 26, 2002 Workshop on Dependable Middleware-based Systems (WDMS 2002), part of the International Conference on Dependable Systems and Networks (DSN 2002), Washington D.C., June 2002. 10 pages (with 11 references). "The Web services platform offers a distributed computing environment where autonomous applications interact using standard Internet technology. In this environment, diverse applications and systems become the components of intra- and inter-enterprise integration. Yet, transactional reliability, an often critical requirement on such integration, is presently missing from the Web services platform. In this paper, we address this shortcoming and propose the WSTx framework as an approach to Web service reliability. WSTx introduces transactional attitudes to explicitly describe the otherwise implicit transactional semantics, capabilities, and requirements of individual applications. We show how explicit transactional attitude descriptions can be used by a middleware system to automate the reliable composition of applications into larger Web transactions, while maintaining autonomy of the individual applications... [...] In this paper, we introduced the WSTx framework for building reliable transactional compositions from Web services with diverse transactional behavior. We showed how transactional attitudes are used to capture and communicate otherwise implicit transactional semantics and requirements, without compromising the autonomy of the individual transaction participants. Provider transactional attitudes (PTAs) use WSDL extension elements to annotate Web service provider interfaces for Web transactions, according to welldefined transactional patterns; client transactional attitudes (CTAs) are described in terms of well-defined WSDL port types and outcome acceptance criteria. We further outlined the requirements on a middleware system which uses these explicit transactional attitude descriptions to reliably execute web transactions, and offered a conceptual design for a specific middleware implementation that meets these requirements. The WSTx framework uniquely enables a client to program a Web transaction without requiring transaction participants to agree on a common transaction semantic, transaction context representation, and coordination protocol. Further, the concept of transactional attitudes follows and promotes a clean separation of concerns for software engineering. Transactional properties can be isolated from other aspects of a service description, allowing, for example, capability-based service queries. Regarding the system implementation, the design of the WSTx framework uses standard Web services technology only; WSDL and existing WSDL extension mechanisms are sufficient to support the idea of transactional attitudes. The design is also open to accommodate some technologies emerging in the highly dynamic field of Web services. The implementation design for SAM can easily be realized using existing Web services toolkits and standard Web Application servers (such as IBM's Web Services Toolkit). See also the news item. [cache]

  • [April 03, 2002] "Reliability of Composed Web Services From Object Transactions to Web Transactions." By Thomas Mikalsen, Isabelle Rouvellou, and Stefan Tai (IBM T.J. Watson Research Center, New York, USA). Proposal submitted to the OOPSLA 2001 Workshop on Object-Oriented Web Services. "Multiple Web Services often need to be composed within some business process. Existing Web Service standards do not address reliability of such compositions. In this position paper, we argue that reliability can be achieved by adopting and extending existing advanced object transaction processing technology, like the OMG/J2EE Activity Service. We identify and discuss some important problems and research issues related to this approach... A Web Service is a program that can be invoked in a distributed web environment. Web Service technologies like SOAP, WSDL, and UDDI are standards to describe, publish, discover, and use Web Services; they define a development model for integrating applications over the web. SOAP and WSDL are fundamental technologies for an application to issue a request to a Web Service. However, they do not provide support for the application to compose multiple Web Services. With SOAP and WSDL, requests to multiple Web Services are issued individually, but a set of requests cannot be grouped into a single process flow across the web. Because of this shortcoming, additional technologies for web business process specification and management are currently being developed. These include new languages extending WSDL, for example, IBM's WSFL/ WSEL and Microsoft's XLANG... In this paper, we identify research issues in the development of system infrastructure support for reliability of composed Web Services. We investigate the suitability and applicability of object transactions, a common and proven approach to reliability in object systems, to web environments. Further, we outline ideas for a model of web transactions that is based on the OMG/J2EE Activity Service specification, an advanced object transaction service. Overall, we aim at exploring a practical solution to web transactions that is compliant to Web Services standards... Transactions are fundamental to reliable distributed computing, and most commercial distributed object systems provide support for traditional transaction models (e.g., ACID transactions, two-phase commit). Such systems typically support standard architectures, such as the CORBA Object Transaction Service (OTS) and the J2EE Java Transaction Service (JTS). These systems offer a mature, interoperable, and widely used, solution to address reliability. Increasingly, however, these systems are being used to build advanced applications, where traditional transaction semantics are not appropriate. For example, in applications where transactions execute over long periods of time, the concurrency control mechanisms employed to support ACID semantics are unacceptable. In other cases, dependencies between application activities are complex, requiring flexibility in determining transaction outcome. The transaction models required by these applications (so called 'extended transaction models') are rarely directly supported by the system, and are often implemented at the application level; this is typically done in an ad hoc manner, making it difficult to construct, maintain and integrate these applications..." 
See also the online description of the [IBM] Web Services Transactions (WSTx) project: "The Web services platform offers a standards-based distributed computing model for providing and accessing business functions over the public Internet. When such business functions are composed into larger business processes, the reliability of the composition as a whole must be addressed (in addition to the reliability of the individual business functions). The WSTx project addresses the reliability concerns of business processes that utilize Web services technology to build such compositions..." Also available from the OOWS2002 document. [cache]

  • [April 03, 2002] ".NET: Microsoft's Enterprise Ticket? Will .NET Framework bring Microsoft and true language-neutral development into the enterprise?" By Jon Udell. In Enterprise Systems (April 03, 2002) "Even if your enterprise doesn't use Microsoft's stuff, which is highly unlikely, you're probably going to need to interoperate with it. The good news is that as .NET starts to roll out, you'll find that you can. Because principal components of the .NET Framework are already released, enterprise managers and developers need a basic understanding of its benefits and pitfalls... Jim Culbert, Metratech's VP of technology, says that he's talked to major carriers who are 'spending billions on multiple OSSes [Operations Support Systems], and have to manage all of them, but there are no common components, no cleavage points, there's no middle ground for data.' In a traditional enterprise system, he says, software encodes a set of fixed requirements, and when requirements change, the software must also -- a slow, painful and risky process. Businesses that offer flexible services, and that are enmeshed in increasingly dynamic partnership webs, need the kind of speed and flexibility that only a more data-driven approach can deliver. Using Metratech's solution, a service aggregator who wants to add a new partner doesn't write software to make it happen. Rather, the new relationship is expressed as a stream of XML data that feeds into the existing, unmodified billing kernel. In either case, for internal integration or external collaboration, the magic lubricant is XML. It's used to describe both procedural interfaces and data formats. This does not mean fewer interconnect points or data formats. That complexity is an irreducible fact of enterprise systems. What XML does, Culbert says, is represent complexity in a way that machines can process much more cost-effectively, and more accurately, than humans can. In principle, this benefit was available years ago, using CORBA (Common Object Request Broker Architecture). In practice, CORBA never reached the critical mass that XML is achieving, thanks to XML's inherent simplicity and to the rich assortment of off-the-shelf tools that can, as a result, support it... At least this much is clear. Whether or not Microsoft's .NET and Web services initiatives find near-term traction in the enterprise, the elements of the strategy -- componentization, access to the middle tier, XML representation of business data and protocols -- are universal and will matter to everyone. Even if you don't use Microsoft's products, you'll likely need to interoperate with it. As .NET starts to roll out, you'll find that you can."

  • [April 03, 2002] "Peer-to-peer Communications Using XML. How XML Fuels the Next Generation of Information Sharing Applications." By Anne Zieger (Chief analyst, PeerToPeerCentral.com). IBM developerWorks, XML Zone. April 2002. ['XML is a critical ingredient in peer-to-peer information sharing schemes, including grid computing, instant messaging, and Web services. In this article, Anne explores the cutting-edge efforts intended to create a unified P2P fabric based on adaptations of existing XML technology.'] "Sharing information from one networked device directly to another is far from a new idea. After all, remote access software such as LapLink, which has been around for nearly 20 years, can move files and folders from a remote computer, transfer application settings and directories, and update files already stored on another PC. Today's peer-to-peer communications, however, call for an extra layer of sophistication. While programs like LapLink are designed primarily to push file-level information from one PC to another, P2P platforms seek to go further, sharing data and linking applications that might otherwise be incompatible. The engine fueling P2P communications is driven by XML communications. Virtually all emerging P2P platforms rely on XML to make data digestible, a necessary step in a world where users might be on a desktop, laptop, PDA, Pocket PC, or even a server. And XML communications and protocols are built into a number of proposed standards that are rapidly gaining acceptance in the P2P development community. Below, we'll provide examples of how XML is making this next generation of information sharing applications possible. We'll also provide samples of application-specific code and spell out how this code is supporting core P2P functions... Over the next several months, developers have a significant opportunity to participate in the emergence of this range of technologies. Standards for Web services, instant messaging, and grid computing are far from cast in stone, and developer voices can have quite an impact. Ultimately, when gaps are worked out, XML specifications and tools of this kind should offer enterprises ways to share data, integrate applications, manage content, and upgrade messaging in ways previously impossible. In P2P technologies, it seems, XML development has come into its own, as an irreplaceable approach for bridging computing platforms everywhere." See also: (1) Intel's Peer-to-Peer Accelerator kit, which offers extensible enhancements for messaging, file copy, and discovery in the .NET; (2) 2001 P2P Networking Overview: The Emergent P2P Platform of Presence, Identity, and Edge Resources, by Kelly Truelove, Clay Shirky, Lucas Gonze, and Rael Dornfest, published by O'Reilly; (3) the O'Reilly P2P website.

  • [April 02, 2002] "ContentGuard Turns Over XrML to OASIS." From Bill Rosenblatt, in "DRM Watch Update." April 02, 2002. "ContentGuard turned over XrML to OASIS (Organization for the Advancement of Structured Information Standards), the body that oversees standards related to XML and e-business... ContentGuard has been promising for quite a while that they will turn the language over to a suitable standards body for definition and stewardship. OASIS is a fine choice: it ties XrML in with other XML standards, some of which already form part of the basis for the language. OASIS' membership consists primarily of technology vendors, not publishers or other media companies... ContentGuard is able to relinquish control of OASIS' rights language design while retaining the ability to charge for licenses to the language. ContentGuard can do this because it holds patents on any rights language, not just XrML as it exists today. OASIS, unlike other standards bodies (such as the W3C), allows companies like ContentGuard to contribute technology that can be licensed for money, as long as it's on RAND (reasonable and nondiscriminatory) terms. The move to OASIS is a positive step in ContentGuard's ongoing process of making the industry comfortable that it wants to help set good standards that the DRM industry will want and that will move it forward. ContentGuard's patents allow it to take legal action against other organizations that would advance their own rights language. As another important step towards market acceptance, ContentGuard will need to assure the industry that it does not intend to use its patent portfolio as a stick with which to beat other organizations. Only the passage of time will provide this assurance..." References: (1) the TC proposal in "OASIS Members Propose a Rights Language Technical Committee"; (2) announcement of 2002-04-02: "OASIS Members Form Technical Committee to Advance XML Rights Language. ContentGuard, HP, Microsoft, Reuters, Verisign, and Others Collaborate on Standard to Express Usage and Access Rights Across Industries."; (3) general references in "XML and Digital Rights Management (DRM)."

  • [April 02, 2002] "ContentGuard Turns XrML Over to OASIS." By [Seybold Bulletin Staff.] In The Bulletin: Seybold News and Views on Electronic Publishing Volume 7, Number 26 (April 3, 2002). "ContentGuard has found a new home for its XrML rights specification language: the OASIS vendor consortium. The recently formed Rights Language technical committee of OASIS will draft a rights language specification using XrML as the initial submission. Initial members of the technical committee include Hewlett-Packard, Microsoft, Reuters and Verisign, in addition to ContentGuard... According to Bruce Gitlin, ContentGuard's VP of business development and its representative on the OASIS committee, the plan is to develop a core specification, with extensions handling media- and market-specific requirements. This is the same approach taken by MPEG, which recently adopted XrML as its rights specification language... According to Gitlin, because ContentGuard's patent covers any rights language, not just XrML, ContentGuard expects to extract license fees from vendors implementing whatever rights language emerges from OASIS' standards process. The terms and conditions of the license will depend on the implementation, and ContentGuard has pledged that it will license XrML to other vendors under reasonable and nondiscriminatory terms..." References: (1) TC proposal in "OASIS Members Propose a Rights Language Technical Committee"; (2) announcement of 2002-04-02: "OASIS Members Form Technical Committee to Advance XML Rights Language. ContentGuard, HP, Microsoft, Reuters, Verisign, and Others Collaborate on Standard to Express Usage and Access Rights Across Industries."; (3) general references in "XML and Digital Rights Management (DRM)."

