Other collections with references to general and technical publications on XML:
- XML Article Archive: [July 2002] [April - June 2002] [January - March 2002] [October - December 2001] [Earlier Collections]
- Articles Introducing XML
- Comprehensive SGML/XML Bibliographic Reference List
[August 30, 2002] "Guidelines for the Use of XML within IETF Protocols." By Scott Hollenbeck (VeriSign, Inc.), Marshall T. Rose (Dover Beach Consulting, Inc.), and Larry Masinter (Adobe Systems Incorporated). IETF Network Working Group Internet-Draft. Reference: 'draft-hollenbeck-ietf-xml-guidelines-06.txt'. 34 pages. August 22, 2002, expires February 20, 2003. This draft represents version 6 of the document; Appendix A lists 'Changes from Previous Version'. It is the goal of the authors that this draft (when completed and then approved by the IESG) be published as a Best Current Practice (BCP). "The Extensible Markup Language (XML) is a framework for structuring data. While it evolved from SGML -- a markup language primarily focused on structuring documents -- XML has evolved to be a widely-used mechanism for representing structured data. There are a wide variety of Internet protocols being developed; many have need for a representation for structured data relevant to their application. There has been much interest in the use of XML as a representation method. This document describes basic XML concepts, analyzes various alternatives in the use of XML, and provides guidelines for the use of XML within IETF standards-track protocols... This document is intended to give guidelines for the use of XML content within a larger protocol. The goal is not to suggest that XML is the 'best' or 'preferred' way to represent data; rather, the goal is to lay out the context for the use of XML within a protocol once other factors point to XML as a possible data representation solution. The Common Name Resolution Protocol (CNRP) is an example of a protocol that would be addressed by these guidelines if it were being newly defined. This document does not address the use of protocols like SMTP or HTTP to send XML documents as ordinary email or web content.
There are a number of protocol frameworks already in use or under development which focus entirely on "XML protocol" -- the exclusive use of XML as the data representation in the protocol. For example, the World Wide Web Consortium (W3C) is developing an XML Protocol framework based on SOAP (SOAP Version 1.2 Part 1: Messaging Framework, SOAP Version 1.2 Part 2: Adjuncts). The applicability of such protocols is not part of the scope of this document. In addition, there are higher-level representation frameworks, based on XML, that have been designed as carriers of certain classes of information; for example, the Resource Description Framework (Resource Description Framework (RDF) Model and Syntax Specification) is an XML-based representation for logical assertions. This document does not provide guidelines for the use of such frameworks... A discussion forum 'email@example.com' is used for comments on this draft; see the archives. [cache]
[August 30, 2002] "Common Name Resolution Protocol (CNRP)." By Nico Popp (RealNames Corporation), Michael Mealling (VeriSign, Inc.), and Marshall Moseley (Netword, Inc). IETF Network Working Group, Internet-Draft. Reference: 'draft-ietf-cnrp-12.txt'. February 21, 2002. See also the XML DTD from section 5. "People often refer to things in the real world by a common name or phrase, e.g., a trade name, company name, or a book title. These names are sometimes easier for people to remember and type than URLs. Furthermore, because of the limited syntax of URLs, companies and individuals are finding that the ones that might be most reasonable for their resources are being used elsewhere and so are unavailable. For the purposes of this document, a 'common name' is a word or a phrase, without imposed syntactic structure, that may be associated with a resource. This effort is about the creation of a protocol for client applications to communicate with common name resolution services, as exemplified in both the browser enhancement and search site paradigms. Although the protocol's primary function is resolution, it is also intended to address issues of internationalization and localization. Name resolution services are not generic search services and thus do not need to provide complex Boolean query, relevance ranking or similar capabilities. The protocol is a simple, minimal interoperable core. Mechanisms for extension are provided, so that additional capabilities can be added... The protocol consists of a simple request/response mechanism. A client sends one of a few types of requests to a server which responds with the results of that request. All requests and responses are encoded with XML using the DTD found in Section 5. There are two types of requests. One is a general query for a common-name. The other is a request for an object that describes the service and its capabilities. There is only one type of response which is a set of results. 
Results can contain actual result items, referrals and/or status messages. CNRP is completely encapsulated within its XML definition, and is therefore transport-independent in its specification. However, clients need to have a clearly defined means of bootstrapping a connection with a server... Queries are sent by the client to the server. There are two types of queries: (1) A 'special' initial query that establishes the schema for a particular CNRP database and communicates that to the client. The CNRP client will send this query, and in turn receive an XML document defining the query properties that the database supports. (In CNRP, XML is used to define and express all objects.) This query is called the 'servicequery' in the DTD. In the case where a client does not know anything about the Service, the client may assume that it can at least issue the request via HTTP. (2) A 'standard' query, which is the submission of the CNRP search string to the database. The query will conform to the schema that may have been previously retrieved from the service..." See also the IETF Common Name Resolution Protocol WG Charter, The 'go'URI Scheme for the Common Name Resolution Protocol, and the mail list archives. [cache]
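The request flow described above can be sketched in a few lines of Python. The element names below are placeholders invented for illustration; the normative vocabulary is the CNRP DTD in section 5 of the draft.

```python
import xml.etree.ElementTree as ET

def build_common_name_query(name: str) -> bytes:
    # NOTE: "cnrp", "query", and "commonname" are illustrative names only;
    # the real element names come from the DTD in section 5 of the draft.
    root = ET.Element("cnrp")
    query = ET.SubElement(root, "query")
    ET.SubElement(query, "commonname").text = name
    return ET.tostring(root, encoding="utf-8")

# Per the draft, a client that knows nothing else about a service may
# assume it can at least submit this document to the server over HTTP.
payload = build_common_name_query("acme widgets")
```

The same pattern applies to the 'servicequery': the client sends a fixed bootstrap request and receives an XML document describing the properties the database supports.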
[August 30, 2002] "UDDI Takes Step Forward but Isn't Ready for Deployment." By Ray Wagner and John Pescatore (Gartner Research). Gartner FirstTake. Reference: FT-18-0859. 30 August 2002. ['Most major IT vendors support OASIS's new committee to develop the UDDI protocol. However, UDDI will achieve widespread use only at a late stage in the deployment of Web services.'] "On 28 August 2002, the Organization for Structured Information Standards (OASIS) announced the UDDI Specification Technical Committee to oversee the development of Universal Description, Discovery, and Integration (UDDI), a Web service protocol. More than 20 major IT companies have said they will participate, including most major software infrastructure providers... The unprecedented cooperation by industry participants will do much to secure widespread acceptance of UDDI, which provides a common format for enterprises to identify and link to new Web services... However, this specification may not prove as important as other Web service protocols with which it is normally associated, such as Simple Object Access Protocol (SOAP) and the Security Assertion Markup Language (SAML), because UDDI will achieve widespread use only at a late stage in the deployment of Web services. In general, enterprises will not need UDDI initially, either behind the firewall or when they deal with trusted business partners. Supporters will have to resolve many security issues related to UDDI before enterprises can safely expose service information via UDDI. Standard mechanisms need to be defined for such functions as supporting granular access, denial-of-service protection and nonrepudiation. Gartner recommends that enterprises evaluate the output of the UDDI committee for in-depth treatment of UDDI security issues before planning externally exposed use of UDDI. 
Gartner also believes that secure use of Web services will greatly accelerate if the vendors participating in the UDDI committee also participate aggressively in the SAML, Web Services Security (WS-Security), Liberty Alliance and other Web service initiatives related to security..." Also in PDF format. See: "Universal Description, Discovery, and Integration (UDDI)." [cache]
[August 30, 2002] "Q&A: VeriSign's Phillip Hallam-Baker on Web Services Security." By Carol Sliwa. In Computerworld (August 30, 2002). "IT professionals should wait for the Web Services Security specification to be finalized and implemented before they start building sophisticated Web services that extend beyond their company's firewalls, according to the specification's co-author. Phillip Hallam-Baker, principal scientist at Mountain View, Calif.-based VeriSign Inc., said it could take between six months and two years to nail down the WS-Security specification that he helped to write. Hallam-Baker spoke with Computerworld's Carol Sliwa about the state of Web services security during this week's XML Web Services One Conference here. The WS-Security specification was announced in April by IBM, Microsoft Corp. and VeriSign and was turned over to the Organization for the Advancement of Structured Information Standards (OASIS). A technical committee working to advance WS-Security will hold its first face-to-face meeting next week. Hallam-Baker was also senior author of the XML Key Management Specification (XKMS), and he served as editor of the Security Assertion Markup Language core schema and protocol specification. Here's what he had to say... [Excerpts:] "... What Web services are about is machine-to-machine communication. The base technology is XML and XML schema. If we want to narrow it to what types of Web service specifications are you going to be most interested in supporting -- obviously SOAP [Simple Object Access Protocol], WS-Security, XKMS... There are people like myself who are full-time occupied on the development standards, and we push things real hard. If something isn't meeting a deadline, if people are having an argument, I will make things happen. And I not only know everybody in the room, I know all their managers. If two people need to agree, either we will come to an agreement or we'll have a flame-out. We could throw part of the spec out. 
We could split the standards group. If I'm convinced that we're not going to get agreement, I'm going to say, "OK. We're splitting." And people know that that would be bad press... Q: When will WS-Security get nailed down? A: Within a two-year time span, certainly. Within a six-month time span, certainly not. Between the two, well it depends..."
[August 30, 2002] "Out with AOL, in with Jabber." By Paul Festa. In CNET News.com (August 30, 2002). "When America Online closed its door on efforts to standardize instant messaging, a new one may have opened for Jabber. Jabber, the XML-based instant messaging application that interoperates with multiple IM services, is close to winning approval for its own dedicated working group within the Internet Engineering Task Force (IETF), a development that would elevate the technology from one of many competing IM also-rans to that of a potential industry standard. 'They're pushing for a working group,' said Ned Freed, the IETF's co-area director for applications and member of the group's decision-making Internet Engineering Steering Group (IESG). 'I suspect we will be approving it in the very near future.' ... The IETF-proposed standard for instant messaging that AOL abandoned is still in progress. Dubbed SIMPLE (SIP for Instant Messaging and Presence Leveraging Extensions), it is an instant-messaging application of the IETF's Session Initiation Protocol (SIP), a technology with numerous applications apart from IM. SIMPLE proponents, however diminished in strength without AOL's backing, are putting up a fight to resist the Jabber invasion, arguing that the IETF's energies are divided enough as it is without adding another instant messaging protocol to the mix. In fact, there is a large handful of IM-related activities, variously competing and complementary with each other, in progress under the IETF's auspices. In addition to SIMPLE, they include Application Exchange (APEX), a still-ongoing project that even its working group chair acknowledges is unlikely to prosper; the now moribund Presence and Instant Messaging Protocol (PRIM), which backers hope to revive in the future; and the Instant Messaging and Presence Protocol (IMPP), a group working on Common Presence and Instant Messaging (CPIM)... 
Jabber proponents argue that an XML-based protocol would find a warm reception on the Internet, where the number of XML-based documents and applications is burgeoning. And should the IETF approve a Jabber working group, it would start out with an installed base that no other IETF instant messaging activity can match. Jabber now claims that as many as 100,000 of its servers are running across the Internet, with millions of people using the application. Licensees of Jabber's enterprise-grade software include AT&T, Hewlett-Packard, Walt Disney, BellSouth, France Telecom and VA Linux Systems... Jabber -- which exists as both the for-profit Jabber.com and the open-source development group 'The Jabber Software Foundation' -- has much to gain from the potential IETF working group. In addition to the prestige and possible surge in adoption that IETF recognition would bring, Jabber backers are hoping that in exchange for ceding control of the technology to the IETF, they might get valuable technical help in areas where Jabber badly needs it -- namely security and internationalization..." See: "Jabber XML Protocol."
[August 30, 2002] "Resource Description Framework (RDF): Concepts and Abstract Data Model." Edited by Graham Klyne (Clearswift and Nine by Nine) and Jeremy Carroll (Hewlett Packard Labs). Series editor: Brian McBride (Hewlett Packard Labs). W3C Working Draft 29-August-2002. Version URL: http://www.w3.org/TR/2002/WD-rdf-concepts-20020829/. Latest version URL: http://www.w3.org/TR/rdf-concepts/. Produced by the W3C RDF Core Working Group as part of the W3C Semantic Web Activity. The Resource Description Framework (RDF) is a data format for representing metadata about Web resources, and other information. This document defines the abstract graph syntax on which RDF is based, and which serves to link its XML serialization to its formal semantics. It also describes some other technical aspects of RDF that do not fall under the topics of formal semantics, XML serialization syntax or RDF schema and vocabulary definitions (which are each covered by a separate document in this series). These include: discussion of design goals, meaning of RDF documents, key concepts, character normalization and handling of URI references... The normative documentation of RDF falls broadly into the following areas: (1) XML serialization syntax [RDF/XML Syntax Specification (Revised)]; (2) formal semantics [RDF Model Theory]; (3) RDF vocabulary definition language (RDF schema) [RDF Vocabulary Description Language 1.0: RDF Schema], and (4) this document, which covers the following: discussion of design goals, meaning of RDF documents, key concepts, abstract graph syntax, character normalization, and handling of URI references..." See: (1) W3C website section for Resource Description Framework (RDF); (2) local references in "Resource Description Framework (RDF)."
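Informally, the abstract graph syntax the draft defines treats an RDF graph as a set of node-arc-node statements. A minimal sketch in Python (the URIs and values below are invented examples, not taken from the specification):

```python
# An RDF graph modeled, informally, as a set of
# (subject, predicate, object) triples using plain URI strings.
triples = {
    ("http://example.org/doc", "http://purl.org/dc/elements/1.1/creator",
     "Graham Klyne"),
    ("http://example.org/doc", "http://purl.org/dc/elements/1.1/date",
     "2002-08-29"),
}

# Nodes and arc labels fall directly out of the set:
subjects = {s for (s, p, o) in triples}      # graph nodes used as subjects
predicates = {p for (s, p, o) in triples}    # properties (arc labels)
```

This set-of-triples view is what links the XML serialization (one concrete encoding of the graph) to the formal semantics (which assigns meaning to the graph itself).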
[August 30, 2002] "Validation by Instance." By Michael Fitzgerald. From XML.com. (August 28, 2002). ['Michael Fitzgerald shows a convenient way to write schemas for validating XML documents. Rather than modeling the schema from scratch, Michael shows how to derive schemas (DTDs, RELAX NG, and W3C XML Schema) from instance documents.'] "Most people these days develop XML documents and schema with a visual editor of some sort, perhaps Altova's XML Spy, Tibco's TurboXML, xmlHack from SysOnyx, or Oxygen. Some even use several editors on a single project, depending on the strengths of the software. Others prefer to work closer to the bone. I usually develop my schema and instances by hand, using the vi editor, along with other Unix utilities (actually, I use Cygwin on a Windows 2000 box). I don't want to make more work for myself, but I prefer to use free, open source tools that allow me to make low-level changes that suit my needs. If you prefer to work this way, you should enjoy this piece. In this article, I will explore how you can translate an XML document into a Document Type Definition (DTD), a RELAX NG schema, and then into a W3C XML Schema (WXS) schema, in that order. I'll do this with the aid of several open source tools, and I'll also cover a way to validate the original XML instance against the various schemas.  Translating an XML Document into a DTD: To translate the XML document into a DTD, I'll use Michael Kay's DTDGenerator. Originally, DTDGenerator was part of the Saxon XSLT processor, but now it is separate. At just 17kb, it's a pretty small download. DTDGenerator does a fair amount of work for you, but it doesn't produce parameter entities, notation declarations, or entity declarations. It's also not namespace-aware, but DTDs aren't inherently aware of namespaces or qualified names anyway.  Translating the DTD to RELAX NG: James Clark's DTDinst is a Java tool that translates a DTD either into its own XML vocabulary or into a schema in RELAX NG's XML syntax. After downloading and installing dtdinst.jar, you can issue the following command to translate a DTD into RELAX NG: ...  Translating RELAX NG to XML Schema: Trang is another tool written by James Clark. It can take as input a schema written in RELAX NG XML and compact syntax; it can produce RELAX NG XML, RELAX NG compact syntax, DTD, and WXS as output. After downloading Trang (which includes a JAR file for Jing, a RELAX NG validator), unzipping and installing it, you can convert the RELAX NG schema back to a DTD, new-event.dtd ... If you work on the Windows platform, I have also written a set of batch files that will perform all the translations (from instance, to DTD, to RELAX NG, and finally to W3C XML Schema) and then validate against them in one simple step... Using the tools I've described here, you can perform the conversions and validate against the resulting schemas in a matter of seconds. You may still prefer to use a visual editor, but I believe that learning and using these tools can save you time and money..." See general references in "XML Schemas."
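The instance-to-DTD idea can be illustrated with a toy sketch. This is not DTDGenerator -- just a crude inference that derives element declarations from a single instance, ignoring attributes, mixed content, and cardinality, to show what such tools automate:

```python
import xml.etree.ElementTree as ET
from collections import OrderedDict

def infer_dtd(xml_text: str) -> str:
    """Derive a crude DTD from one instance document (toy example only)."""
    root = ET.fromstring(xml_text)
    elements = OrderedDict()

    def walk(el):
        children = [child.tag for child in el]
        if el.tag not in elements:
            if children:
                # First occurrence wins; real tools merge all occurrences.
                elements[el.tag] = "(%s)" % ", ".join(children)
            elif (el.text or "").strip():
                elements[el.tag] = "(#PCDATA)"
            else:
                elements[el.tag] = "EMPTY"
        for child in el:
            walk(child)

    walk(root)
    return "\n".join("<!ELEMENT %s %s>" % (tag, model)
                     for tag, model in elements.items())

doc = "<event><title>Talk</title><date>2002-08-30</date></event>"
print(infer_dtd(doc))
```

This prints three `<!ELEMENT ...>` declarations, one per distinct element in the instance; a real generator would also fold in repeated elements, optionality, and attribute lists.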
[August 30, 2002] "Transporting Binary Data in SOAP." By Rich Salz. From XML.com. (August 28, 2002). ['In his monthly Web services column, XML Endpoints, Rich Salz tackles the problem of sending binary data using SOAP. There are several solutions to this problem, and this month Rich looks at "SOAP Messages with Attachments".'] "... it's not good to try to embed arbitrary binary or XML content into another XML document. This is particularly bad news for SOAP and web services, since SOAP messages are XML documents with a thin layer -- a SOAP bubble, perhaps? -- around them. The right approach is to pull the embedded content out of the XML container, and replace it with a link. Fortunately, SOAP defines the href attribute that makes such linking fairly easy... Usually it's necessary to bundle the data with the message. When this is done, we typically call the SOAP message the payload, and the data that used to be embedded, the attachments. There are three common formats for doing this. In no particular order, they are (1) SOAP Messages with Attachments (SwA), which uses multi-part MIME; (2) DIME, a binary packaging format created by Microsoft; (3) BEEP, a very powerful facility by protocol expert Marshall Rose. We'll look at each of these in turn, starting with SwA for the rest of this column, and DIME and BEEP in subsequent months. While "direct handling of binary data" was explicitly declared to be out of scope for the W3C SOAP working group, this should change once SOAP 1.2 enters the standardization track. Using one of the existing mechanisms seems the most reasonable way to move forward... SOAP Messages with Attachments is a W3C Note, just like SOAP 1.1. It was published in December of 2000, seven months after the SOAP Note. The name turns out to have been unfortunate, having usurped the obvious generic term. SwA is very simple: the first part of the multipart MIME message is the XML SOAP document; the subsequent parts contain the attached data.
The bulk of the document addresses URI resolution, particularly relative URIs. If we ignore them and always use absolute URIs (the current recommendation), the specification becomes even simpler. In the example below, we'll use email-like Message-IDs as our identifiers, as they have the convenient properties of being globally unique and absolute. We'll just attach a prefix to a single Message-ID to distinguish the parts..." See also (1) "Direct Internet Message Encapsulation (DIME)"; (2) "Blocks eXtensible eXchange Protocol Framework (BEEP)." General references in "Simple Object Access Protocol (SOAP)."
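The SwA packaging just described can be sketched with Python's standard email package. The Message-ID, namespace, body element, and attachment bytes below are invented for illustration; only the overall shape (multipart/related, SOAP document first, attachments referenced by absolute cid: URIs) follows the Note:

```python
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# A made-up Message-ID; any globally unique, absolute identifier works.
ATTACHMENT_CID = "photo-1@example.com"

soap_envelope = (
    '<?xml version="1.0"?>'
    '<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">'
    '<env:Body>'
    '<m:AddPhoto xmlns:m="http://example.com/photos" '
    'href="cid:%s"/>'  # absolute cid: URI pointing at the attachment
    '</env:Body></env:Envelope>' % ATTACHMENT_CID
)

msg = MIMEMultipart("related", type="text/xml")
msg.attach(MIMEText(soap_envelope, "xml"))   # first part: the SOAP document

attachment = MIMEApplication(b"...binary image bytes...",
                             "octet-stream")  # subsequent part: the raw data
attachment.add_header("Content-ID", "<%s>" % ATTACHMENT_CID)
msg.attach(attachment)

wire_bytes = msg.as_bytes()
```

Because the SOAP envelope carries only the cid: reference, the XML stays well-formed while the binary data rides alongside it untouched.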
[August 30, 2002] "Nobody REALLY Asked Me, But..." By John E. Simpson. From XML.com. (August 28, 2002). ['The summer sun might possibly have gone to the head of John Simpson, our XML Q&A columnist. In this month's column he investigates XSLT scripts for obscuring XML documents.'] "How can I use XSLT to mask not only the markup, but the content of my XML document?...[beer, hack, more beer, hack] As I said in last August's column, this may be pretty effective at stopping a casual reader of the document. But naturally, it falls down as soon as the reader recognizes the document's ROT-13 nature, because she can fairly easily build a 'de-ROT-13' routine to turn the document back into its cleartext form. Incidentally, continuing to discuss all this as 'ROT-13' encoding is a little misleading. That name derived from the fact that 26 letters could be rotated 13 places to produce a simply coded result. What we've now got rotates 52 letters (including lower- and uppercase variants), 10 digits, and 30 punctuation characters. Thus, this form of the encoding might better be referred to as something like ROT-46, or maybe ROT-26,5,15. If you're interested in pursuing this further on your own, you could rotate the characters an arbitrary number of places -- perhaps driven by a global parameter whose value is passed in from outside the stylesheet..."
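The arbitrary-rotation idea at the end of the column can be sketched outside XSLT. Here is a Python version; `string.punctuation` (32 characters) stands in for the column's 30 punctuation characters, since the exact set is an assumption:

```python
import string

# 52 letters, 10 digits, and a run of punctuation -- roughly the
# alphabet the column describes (the punctuation set is a guess).
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def rot(text: str, places: int) -> str:
    """Rotate every ALPHABET character by `places`; leave others alone."""
    size = len(ALPHABET)
    table = {c: ALPHABET[(i + places) % size] for i, c in enumerate(ALPHABET)}
    return "".join(table.get(c, c) for c in text)

encoded = rot("Hello, world!", 7)
assert rot(encoded, -7) == "Hello, world!"  # rotating back recovers cleartext
```

As the column warns, this is obfuscation rather than encryption: anyone who recognizes the scheme can reverse it just as easily, whatever the rotation count.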
[August 30, 2002] "W3C, OASIS Look For Common Web Services Ground." By Richard Karpinski. In InternetWeek (August 29, 2002). "The World Wide Web Consortium (W3C) and OASIS -- two bodies building critical Web services and security standards -- held a public forum this week to better coordinate their work in this crucial area... The two standard bodies are wrestling with how to avoid overlap while also coordinating their efforts to ensure key Web and XML specifications remain interoperable. The W3C has created core Web standards ranging from HTML to XML, as well as Web services security standards such as XML-Encryption and XML-Signatures. It has also formed a Web Services Architecture group to guide the big picture deployment of these new, more distributed services architectures. OASIS, meanwhile, was first known for its work on the global e-business standard ebXML but has come on particularly strong in the world of Web services and especially XML security. It now runs six technical committees looking at Web services security, including technologies for authentication, access control, provisioning, biometrics, digital rights, and overall Web services security... The W3C and OASIS already work together at an informal level -- and it's important to note that OASIS is actually a W3C member. Overall, the W3C looks to be best at developing infrastructure-level specifications, especially those that affect the World Wide Web. OASIS works a level up, focusing on e-business and increasingly business-driven Web services and security applications that in many cases consume W3C specs..." See references to the Forum on Security Standards for Web Services and list of presentations.
[August 30, 2002] "Iona CTO Touts Web Services 'Standardization Dream'." By Carolyn A. April. In InfoWorld (August 29, 2002). "Unlike earlier distributed computing technologies, Web services and XML give the software industry a chance to finally realize the 'standardization dream' enjoyed by industries such as transportation and manufacturing, said Iona CTO Eric Newcomer here Thursday... Web services interfaces and standards will enable the lashing together of commodity application functions such as billing systems or credit check approval processes, freeing companies to focus on the value-added elements of particular applications... And while more established distributed computing middleware, such as CORBA, features more robust, reliable technology, Web services will ultimately prevail as the dominant system-to-system integration mechanism because it is based on the Internet and standards and affords a higher level of abstraction to developers through XML versus a language like C and the use of IDLs, he said... To get there, however, Newcomer believes the standards around quality of service features such as security, workflow, and transactions will need to be ironed out -- no easy task given increasing fragmentation among vendors and standards bodies. Agreement on this second layer of standards, above the core XML, SOAP, UDDI, and WSDL, will be slower to come because vendors have money at stake around these protocols, he predicted... In addition, Newcomer said the establishment of a standard Web services reference architecture will be essential to adoption. The W3C, of which Newcomer is a member, is currently working on such an architecture and will release a proposal for public review sometime next month, he said..."
[August 30, 2002] "Standards Bodies Seek to Reconcile Web Services Security." By Shawna McAlearney. In Security Wire Digest Volume 4, Number 65 (August 29, 2002). Report from the Boston Forum on Security Standards for Web Services. "Seeking common ground for the implementation of Web security standards, the Organization for the Advancement of Structured Information Standards (OASIS) and the World Wide Web Consortium (W3C) took a small step forward Monday to reconcile differences in integration and resource allocation. 'We are looking at ways in which we can maximize the consistency across the standards,' says Phillip Hallam-Baker, a Web services security architect at VeriSign. 'The whole industry realizes the potential of Web services, but without trust and security Web services are dead on arrival.' According to Hallam-Baker, the W3C and OASIS working groups are addressing different levels of security infrastructure... The key standards under W3C include XML Encryption, XML Signature and eXtensible Key Management Specification (XKMS). OASIS's key standards include eXtensible Rights Markup Language (XrML); WS-Security; Security Assertion Markup Language (SAML); and eXtensible Access Control Markup Language (XACML). 'For example XKMS and SAML both define a mechanism for authenticating SOAP messages,' says Hallam-Baker. 'WS-Security is a level higher, encompassing our experience with XKMS and SAML and providing a framework for applying standards to authenticate and encrypt any type of Web services message..."
[August 30, 2002] "Web Services Security Standards Forum." Technical Keynote by Dr. Phillip M. Hallam-Baker C.Eng. FBCS (VeriSign Inc) presented at the Forum on Security Standards for Web Services, Boston, 26 August, 2002. The Forum was co-sponsored by OASIS and W3C. "What Parts of Web Services Security Should Be Infrastructure? Replicate security context provided by O/S:  Protected Memory (Prevents modification of process state; Prevents interception of function calls; Prevent disclosure);  Access Control (Authentication; Authorization; Auditing). Problem Space:  Infrastructure: Policy, Conversation, Confidentiality, Integrity, Access Control;  Security Infrastructure Services: Trust, Authorization, Authentication, Attributes;  Applications: Funds Transfer, Payroll, Inventory, Purchasing. Without Security and Trust: Web Services are Dead On Arrival. Conclusions: Considerable progress has already been made; Industry wide consensus on value of standards; Basic Infrastructure is in place or in development; There is considerable consensus on the roadmap; Security need not be the show stopper..." [source .PPT]
[August 30, 2002] "OASIS XACML TC and Rights Language TC." By Hal Lockhart (Entegrity). Among the presentations given at the Forum on Security Standards for Web Services, Boston, 26 August, 2002. XACML and RLTC 'Forty Thousand Foot View': Both deal with the problem of Authorization; Both draw requirements from many of the same application domains; Both share many of the same concepts, but in some cases use different terms; Both base specification on XML Schema; Each approaches the problem differently. Types of Authorization Information:  Attribute Assertion (Properties of a system entity, typically a person; Relatively abstract - business context; Same attribute used in multiple resource decisions; Examples: X.509 Attribute Certificate, SAML Attribute Statement, XrML PossessProperty);  Authorization Policy (Specifies all the conditions required for access; Specifies the detailed resources and actions/rights; Can apply to multiple subjects, resources, times...; Examples: XACML Policy, XrML License, X.509 Policy Certificate);  AuthZ Decision (Expresses the result of a policy decision; Specifies a particular access that is allowed; Intended for immediate use; Example: SAML AuthZ Decision Statement). Web Services Security:  SAML, XACML and RLTC Spec can all convey AuthZ Info, carry in SOAP header  Possible use in Policy Advertisement  Issues: Substantial overlap between SAML/XACML & XrML - not clear what is best for what use; Intellectual Property Issues; Controversies over DRM itself; XACML and XrML are complex, will take time to understand. See: (1) "Extensible Access Control Markup Language (XACML)"; (2) "OASIS Rights Language." [source .PPT]
[August 30, 2002] "OASIS Fuels Security Agenda." By Brian Fonseca. In InfoWorld (August 30, 2002). "Next week, 95 individuals representing 56 different companies will meet in Redwood City, Calif. [...] in a new TC (technical committee) being formed by the Organization for the Advancement of Structured Information Standards (OASIS) to address the WS-Security specification, said Kelvin Lawrence, distinguished engineer at Armonk, N.Y.-based IBM and co-chair of the OASIS WS-Security TC. Lawrence said a complete list of accepted members will appear on the OASIS Web site after the meeting. OASIS members that have proposed TC participation include BEA Systems, Cisco, Intel, IBM, Microsoft, Sun Microsystems, Entrust, IONA, Novell, VeriSign, Netegrity, Oblix, SAP, RSA, Baltimore Technologies, OpenNetwork, Systinet, and Documentum. Originally created by Microsoft, IBM, and VeriSign, WS-Security proposes a standard building-block set of SOAP extensions to construct secure Web services and offer support for multiple security tokens, trust formats, signature formats, and encryption technologies. The security standards effort taps into long-held enterprise concerns. According to a Forrester Research report released in June, Web services will remain hidden in the back office until enterprise demands for multiple levels of authentication and encryption, centralized authorization and auditing, seamless message signing, and consumption of external authentication services are met. IBM's Lawrence said three input documents will be discussed at the inaugural WS-Security TC meeting, including the original WS-Security specification and a submission by the OASIS SAML TC to examine how SAML will utilize WS-Security. A WS-Security addendum will also be introduced as a result of 'lessons learned' during a Web services interoperability test between Microsoft .NET and IBM WebSphere servers at the XML Web Services One conference in Boston this week...
This week, the Liberty Alliance Project announced that 30 companies joined its ranks -- boosting total membership to more than 95 companies -- to develop open interoperable specs for federated network identity... According to Rob Cheng, senior iPlatform analyst at Redwood Shores, Calif.-based Oracle and co-chair of the Web Services Interoperability (WS-I) organization's marketing committee, the WS-I is on track to produce Version 1.0 of its WS-I Basic Profile, which will feature sample applications and testing tools, in the fourth quarter. A profile is a set of best practices designed to bridge the gap between standards organizations and end users." References: (1) "Web Services Security Specification (WS-Security)"; (2) "Liberty Alliance Specifications for Federated Network Identification and Authorization"; (3) "Web Services Interoperability Organization (WS-I)."
[August 29, 2002] "OASIS Forms Technical Committee To Tackle UDDI." By Richard Karpinski. In InternetWeek (August 29, 2002). "The OASIS standards group this week launched a new technical committee to oversee the development of UDDI, the registry and lookup technology in the Web services software stack. UDDI.org, which developed the early Universal Description, Discovery and Integration specs, agreed to move UDDI into OASIS last month. At that time, UDDI.org also released version 3.0 of the UDDI specs, which added crucial enterprise functionality such as built-in security and support for digital signatures. At OASIS, UDDI will go through the group's usual standards processes and eventually emerge with a consensus-driven, 1.0 version of a UDDI standard... OASIS CEO Patrick Gannon said that UDDI is a good fit for his group, which is 'really about applying core standards and applying them to business needs.' OASIS has taken on many important standards processes, including ebXML, WS-Security, and others..." See details in the 2002-08-29 news item "OASIS UDDI Specification Technical Committee Continues Work on Web Services Registry Foundations." General references: "Universal Description, Discovery, and Integration (UDDI)."
[August 29, 2002] "Internet Open Trading Protocol Version 2 Requirements." By Donald E. Eastlake 3rd (Motorola). Request for Comments (RFC): 3354. Date: August 2002. I-D Tag: 'draft-ietf-trade-iotp2-req-02.txt'. "This document gives requirements for the Internet Open Trading Protocol (IOTP) Version 2 by describing design principles and scope and dividing features into those which will, may, or will not be included. Version 2 of the IOTP will extend the interoperable framework for Internet commerce capabilities of Version 1 while replacing the XML messaging and digital signature part of IOTP v1 with standards-based mechanisms... IOTP v2 will provide optional authentication via standards-based XML Digital Signatures [RFC 3275]; however, neither IOTP v1 nor v2 provide a confidentiality mechanism. Both require the use of secure channels such as those provided by TLS [RFC 2246] or IPSEC for confidentiality and depend on the security mechanisms of any payment system used in conjunction with them to secure payments..." WG description from the charter: "The Internet Open Trading Protocol is an interoperable framework for Internet commerce. It is optimized for the case where the buyer and the merchant do not have a prior acquaintance and is payment system independent. It can encapsulate and support payment systems such as SET, Mondex, secure channel card payment, GeldKarte, etc. IOTP is able to handle cases where such merchant roles as the shopping site, the payment handler, the deliverer of goods or services, and the provider of customer support are performed by different Internet sites. The working group will document interoperability experience with IOTP version 1 (which has been published as an Informational RFC) and develop the requirements and specifications for IOTP version 2. Version 2 will make use of an independent Messaging Layer and utilize standard XML Digital Signatures." 
See (1) Internet Open Trading Protocol (Trade) Working Group; (2) "Internet Open Trading Protocol (IOTP)." [cache]
[August 29, 2002] "Cape Clear Offers WSDL Editor." By Carolyn A. April. In InfoWorld (August 29, 2002). "Cape Clear Software next week will serve up a graphical WSDL (Web Services Description Language) Editor designed to simplify and encourage development of Web services applications. A core standard around Web services, WSDL provides a standard way to describe what a service does, in terms of its functionality, specifications, inputs, outputs, and accessibility methods. The WSDL Editor, which will be available for free download, is focused on helping developers design the WSDL for a particular Web service up front -- before they do any coding of the application itself, according to officials at Dublin, Ireland-based Cape Clear... [John] Maughan said this top-down tack has several advantages over a 'bottom-up' approach of coding first and then creating WSDL, among them: the separation of design from development and implementation; interoperability across development frameworks such as J2EE and .Net; and the ability for Web services consumers and producers to work in parallel and for corresponding developers to use different languages. Likening the tool to a WYSIWYG HTML editor for building Web pages, Maughan said the WSDL Editor is best used for development projects that start from scratch or for those that center on building a Web service based on an existing XML schema, such as SWIFT or RosettaNet. In cases where a developer plans to expose existing code as a Web service, the better choice for creating WSDL is a 'generator' that automatically spits out WSDL code around the application, he added... Cape Clear officials said the WSDL Editor is also an attempt to encourage Web services development in general..."
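As background to the editor's purpose, a WSDL document separates a service's abstract interface (messages and operations) from the concrete address where it can be reached. A minimal hand-written sketch of the kind of file such a tool produces (the service name, namespace, and operation here are invented for illustration; the SOAP binding element is elided for brevity):

```xml
<definitions name="QuoteService"
             targetNamespace="http://example.com/quotes"
             xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
             xmlns:xsd="http://www.w3.org/2001/XMLSchema"
             xmlns:tns="http://example.com/quotes">
  <!-- The messages (inputs and outputs) the service exchanges -->
  <message name="GetQuoteRequest">
    <part name="symbol" type="xsd:string"/>
  </message>
  <message name="GetQuoteResponse">
    <part name="price" type="xsd:float"/>
  </message>
  <!-- The abstract interface: one operation with its input and output -->
  <portType name="QuotePortType">
    <operation name="getQuote">
      <input message="tns:GetQuoteRequest"/>
      <output message="tns:GetQuoteResponse"/>
    </operation>
  </portType>
  <!-- Concrete endpoint; the referenced tns:QuoteBinding (the SOAP
       binding section) is omitted here for brevity -->
  <service name="QuoteService">
    <port name="QuotePort" binding="tns:QuoteBinding">
      <soap:address location="http://example.com/quotes"/>
    </port>
  </service>
</definitions>
```

Writing this interface first, as Maughan advocates, lets the Java and .Net implementations be generated from it in parallel.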
[August 29, 2002] "XML Web Services: Is the End Near?" By Darryl K. Taft. In eWEEK (August 28, 2002). "For the second day in a row at the XML Web Services One conference here, a keynote speaker got up and signaled the impending end to the Web services era, at least on a standards level. Don Box, an architect in Microsoft Corp.'s developer division, told an audience of Web services conference attendees Wednesday: 'The end of the XML Web services era is near. I predict two years from now we won't have this conference'... Box said Microsoft has been moving awfully fast with its Global XML Architecture (GXA) for Web services; an interoperability demonstration between IBM and Microsoft here at the conference [serves] as evidence that the technology is constantly improving... Box posed the question of why Microsoft is pursuing a Web services strategy. 'Because we hit the wall with the prior technology,' he said. He said Microsoft's COM (Component Object Model) and DCOM (Distributed Component Object Model) hit the wall. 'On the XML front we needed a replacement for DCOM, so XML Web services is the way we went. Microsoft has bet the company on this thing and it is our intention to make all software integration based on Web services.' In addition, he said, some of the Web services standards are mature and need to be finalized. He said a Simple Object Access Protocol (SOAP) 1.3 would be a bad idea because the 1.2 specification already covers all the necessary functionality for a SOAP implementation. 'SOAP 1.2 should be the end of the line,' he said. Box also said Universal Description, Discovery and Integration (UDDI) is the technology of the future, but that may change in 2003. Microsoft is shipping UDDI as part of its operating system, Box said..." See: (1) "Universal Description, Discovery, and Integration (UDDI)"; (2) "Microsoft Announces Web Services Development Kit Technology Preview."
[August 28, 2002] "Liberty Alliance Picks Up More Members." By Matt Berger. In InfoWorld (August 28, 2002). "Another 30 companies have thrown their support behind Liberty Alliance Project, an effort to create a standard technology that allows users to travel password-protected Web sites using a single user name and password. The effort is backed by Sun Microsystems and a number of hardware, software and consumer services companies as diverse as United Air Lines, General Motors and American Express. It now has more than 95 members from private industry, not-for-profit organizations and government, the group said in a statement Wednesday. New members include Sprint, security and authentication technology maker Baltimore Technologies, network management software maker Oblix, and Internet2, a consortium of university researchers, private industry and government agencies that are working to develop and deploy advanced network applications. The Liberty Alliance released the first version of its specification in July based on the security standard SAML (Security Assertion Markup Language). As it is designed, users would be able to use a single online identity to traverse Web sites or gain access to corporate applications and databases that support the Liberty Alliance specification. The specification allows Web site operators to build in functions that allow users to 'opt-in' to share their user name and password with other Liberty-enabled Web sites, as well as a 'global log out' for signing off all participating Web sites in a single action. A similar technology is available from Microsoft through its Passport authentication service, with which users can travel Passport member Web sites without having to re-enter a user name and password each time. Microsoft and the Liberty Alliance have yet to synchronize their efforts, though Microsoft said in July that it would include support for SAML in future versions of its Windows operating system..." 
See: (1) the text of the announcement: "Liberty Alliance Increases Ranks With 30 New Members From Across The Globe"; (2) "Liberty Alliance Specifications for Federated Network Identification and Authorization."
[August 28, 2002] "DataPower Delivers XML Acceleration Device." By Scott Tyler Shafer. In InfoWorld (August 28, 2002). "DataPower Technology on Monday unveiled its network device designed specifically to process XML data. Unlike competing solutions that process XML data in software, DataPower's device processes the data in hardware -- a technology achievement that provides greater performance, according to company officials. The new device, dubbed DataPower XA35 XML Accelerator, is the first in a family of products expected from the Cambridge, Mass.-based startup. The DataPower family is based on a proprietary processing core technology called XG3 that does the analysis, parsing, and processing of the XML data. According to Steve Kelly, CEO of DataPower, the XA35 Accelerator was conceived to meet the steady adoption of XML, the anticipated future proliferation of Web services, and the need to share data between two businesses. Kelly explained that converting data into XML increases the file size up to 20 times. This, he said, makes processing the data very taxing on application servers; DataPower believes an inline device is the best alternative. In addition to the large file sizes, security is also of paramount importance in the world of XML... According to DataPower, most existing solutions to offload XML processing are homegrown and done in software -- an approach the company itself tried initially and found to be inadequate with regard to speed and security. After trying the software path, the company turned to creating a solution that would process XML in hardware... Other content-aware switches, such as SSL (secure socket layer) accelerators and load balancers, look at the first 64 bytes of a packet, while the XA35 provides deeper packet inspection, looking at 1,400 bytes and thus enabling greater processing of XML data, Kelly explained. 
The 1U-high network device has been tested against a large collection of XML and XSL data types and can learn new flavors of the markup language as they pass through the device..." See the announcement "Datapower Technology Delivers Industry's First Wire-Speed Intelligent XML-Aware Network Device. Datapower XA35 XML Accelerator Solves Performance and Scalability Issues for XML Web Services and Enterprise Applications."
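Kelly's file-size point is easy to see in miniature. The sketch below compares a packed binary record with the same record serialized as tagged XML; the field names are invented and the resulting ratio is only an illustration of the direction of the effect, not of DataPower's 20x measurement:

```python
import struct
import xml.etree.ElementTree as ET

# A compact binary record: id (4 bytes), price (8 bytes), quantity (4 bytes).
binary = struct.pack("!IdI", 1001, 42.5, 7)

# The same record as XML, the way a Web service message might carry it.
order = ET.Element("order")
ET.SubElement(order, "id").text = "1001"
ET.SubElement(order, "price").text = "42.5"
ET.SubElement(order, "quantity").text = "7"
xml_bytes = ET.tostring(order)

# The XML form is several times larger than the 16-byte binary form.
ratio = len(xml_bytes) / len(binary)
print(len(binary), len(xml_bytes), round(ratio, 1))
```

The exact ratio depends entirely on tag verbosity, whitespace, and field sizes; real business schemas with long element names push it far higher, which is the inflation that makes software-only parsing taxing on application servers.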
[August 28, 2002] "Standardizing Web Services Nears Completion." By Darryl K. Taft. In eWEEK (August 28, 2002). "The effort to standardize the Web services arena is but six to nine months from completion, but the work necessary to implement all the standards to create a totally services-oriented architecture is at least a year to two years away, according to one IBM executive. Robert Sutor, IBM's director of e-business standards strategy, demonstrated at the XML Web Services One conference here interoperability between an IBM WebSphere-based Web services system and a Microsoft Corp. .Net Web services system. The scenario included a client, a brokerage house and trade desks at an institution, where each point was able to swap code between WebSphere and .Net. 'The demo showed the degree to which WebSphere and .Net could interoperate on a standard level,' Sutor said. The demonstration used the SOAP (Simple Object Access Protocol), WSDL (Web Services Description Language), WS-Security and WS-Attachments... There is still work to be done on Web services security, but the recent move of the WS-Security specification into the OASIS standards organization should help to move that forward, according to Sutor. Indeed, he said, business processing, workflow, transactions and systems management are going to be big areas for the future. 'We'll be spending the next couple of years in standards organizations finalizing these things,' Sutor said. 'The standardization work will continue, but for the big picture we've only got six to nine months on this.' ... Meanwhile, Sutor said the Web Services Interoperability organization, which IBM founded with Microsoft, BEA Systems Inc. and other companies, has played a crucial role already in the Web services arena. 'With WS-I there's much better liaison between OASIS and the W3C..." See: "Web Services Interoperability Organization (WS-I)."
[August 28, 2002] "Check Point Tweaks Firewall To Secure Web Services." By Richard Karpinski. In InternetWeek (August 28, 2002). "Striking back against growing numbers of specialty XML security appliances, firewall and security vendor Check Point Software Tuesday released a free upgrade to its firewall to help enterprises secure XML- and SOAP-based traffic. The new capabilities -- dubbed Application Intelligence Technology -- will be available as a no-charge feature for licensed users of Check Point's firewall and VPN offerings. The market has been flooded with a slew of so-called 'XML firewalls' of late, or standalone servers or appliances that aim to inspect and secure Web services traffic separate from network-level security devices and firewalls. Those vendors say that XML processing and security is so fundamentally different than what happens at a network firewall that it requires a new firewall altogether. Check Point, in comparison, not only believes that XML processing can happen on firewall boxes, but that application-layer security on those firewalls shouldn't just be limited to HTTP- and XML-based traffic, but should support a wide array of application security measures, said April Fontana, Check Point product marketing manager... In addition, by placing XML and network firewall security on the same box, it assures that network-level attacks -- such as denial of service or IP spoofing -- don't take down a standalone XML firewall... Security for XML and SOAP will be available at no additional cost in the latest version of Check Point VPN-1/FireWall-1 Next Generation, Feature Pack 3, starting in September." See the announcement "Check Point Software First to Secure Web Services. XML/SOAP Security Made Possible With Breakthrough Application Intelligence Technology."
[August 28, 2002] "Liberty Alliance Adds Technical Muscle." By Sandeep Junnarkar. In CNET News.com (August 28, 2002). "The Liberty Alliance Project added a new member on Wednesday, boosting its efforts to establish an online authentication plan to compete with Microsoft's Passport online ID system. Bridgewater Systems said it plans to provide technical expertise in network identification and authentication to Liberty's quest to establish new standards in online authentication systems. The Canadian software developer joins a growing number of companies aligned with Sun Microsystems' Liberty Alliance effort. Heavyweights like American Express, America Online and Hewlett-Packard are among the other members. The group is trying to establish a standard method for online identification that would let a computer user log on once, to one Web site, then have other sites recognize that user as authenticated. Bridgewater supplies software to network service providers that allows them to differentiate access to wireline and wireless services based on the identity of the user or the application. This capability, Bridgewater said, lets service providers solve problems such as how to account for services and track them, and how to prevent unauthorized access... Sun and Microsoft... are each rushing to build and market an authentication system that consumers and businesses alike will trust. Such identity systems are an essential ingredient if next-generation Web services are to actually become mainstream, bringing useful new Internet services to businesses and consumers. Sun is counting on Liberty to become part of the pantheon of Web services standards, and it has been pushing to have such specifications be royalty-free. Liberty's 'single sign-on' standard is based on another newly released standard, the Security Assertion Markup Language (SAML)..." 
See: (1) the announcement: "Liberty Alliance Increases Ranks With 30 New Members From Across The Globe"; (2) "Liberty Alliance Specifications for Federated Network Identification and Authorization." [source]
[August 28, 2002] "Answering the Critical Web Services Questions." By Peter Fischer. In Application Development Trends Volume 9, Number 7 (July 2002), pages 51-57. ['The hyperbole surrounding the Web services phenomenon appears to be reaching its peak; now IT developers must determine whether the technology can really be a key enabler for enterprise portals and application integration.'] "Web services' capabilities and features are not new. The idea of providing distributed software services has been around for more than a decade. Technologies like RPC, DCOM and CORBA were built from a client/server foundation focused on creating a standard technology platform on which to accomplish 'true' app-to-app communication by providing access to remote methods. Messaging then came along and freed us from the shackles of synchronous communication, enabling point-to-point and Publish/Subscribe communication models utilizing messages as the 'exchange currency.' ... The technologies that work together to provide this ubiquitous standard connectivity are XML, SOAP, WSDL and UDDI. These technologies work together to provide a Web services model with important functionality: a loosely coupled model for exchanging information; a standard format for packaging and sending data over the wire; the ability to make interface definitions available; the ability to locate and register interest in a service; and the ability to describe the capabilities of a service and information about how to access a service... From a technical perspective, Web services fills in what EAI lacked -- a standard, on-the-wire format to enable app components to exchange messages in an implementation-independent way. Web services, through SOAP and XML, provides this format and 'XML-itizes' app integration. Combining its loosely coupled nature with standard-driven technologies that are 'toolable,' Web services is a good approach to app integration in the small. 
The sweet spot for Web services is sharing business logic in a non-intrusive way. Existing interfaces can be 'e-nabled' for integration within the enterprise and between partners and customers. Web services provide an 'inside out' approach to app integration where existing interfaces are wrapped as components with interfaces specified in XML in WSDL files... The Web services technology stack is permeated by XML. XML provides the lingua franca of Web services; every other Web services technology eats and breathes XML. I use the term 'informational middleware' to describe XML's power, applicability and potential. Web services finally realize XML's promise by providing the standard format for specifying both application interfaces and application messages... Firms should not delay in adopting Web services as an integral component of their IT toolkits. Despite the fact that interfaces like UDDI are still maturing, and security is still a work in progress, the foundation technologies of XML, SOAP and WSDL are mature and well formed..."
[August 27, 2002] "XML-Signature XPath Filter 2.0." W3C Proposed Recommendation 27-August-2002. Authors/Editors: John Boyer (PureEdge Solutions Inc.), Merlin Hughes (Baltimore Technologies Ltd.), and Joseph Reagle (W3C). Version URL: http://www.w3.org/TR/2002/PR-xmldsig-filter2-20020827. The PR specification "describes a new signature filter transform that, like the XPath transform (XML-DSig, section 6.6.3), provides a method for computing a portion of a document to be signed. In the interest of simplifying the creation of efficient implementations, the architecture of this transform is not based on evaluating an XPath expression for every node of the XML parse tree (as defined by the XPath data model). Instead, a sequence of XPath expressions are used to select the roots of document subtrees -- location sets, in the language of XPointer -- which are combined using set intersection, subtraction and union, and then used to filter the input node-set. The principal differences from the XPath transform are: A sequence of XPath operations can be executed in a single transform, allowing complex filters to be more-easily expressed and optimized. The XPath expressions are evaluated against the input document resulting in a set of nodes, instead of being used as a boolean test against each node of the input node-set. To increase efficiency, the expansion of a given node to include all nodes having the given node as an ancestor is now implicit so it can be performed by faster means than the evaluation of an XPath expression for each document node. The resulting node-sets can be combined using the three fundamental set operations (intersection, subtraction, and union), and then applied as a filter against the input node-set, allowing operations such as signing an entire document except for a specified subset, to be expressed more clearly and efficiently..." See IETF/W3C XML Signature Working Group.
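The set-based model the Proposed Recommendation describes can be paraphrased in a few lines of code. This is a hedged sketch of the idea only, using a toy parent-to-children dictionary in place of the XPath data model and invented helper names; it is not the normative transform:

```python
def descendants_or_self(tree, roots):
    """Implicitly expand each selected subtree root to every node below it,
    mirroring the spec's expansion step (done here by simple traversal
    rather than per-node XPath evaluation)."""
    out = set()
    stack = list(roots)
    while stack:
        node = stack.pop()
        out.add(node)
        stack.extend(tree.get(node, []))
    return out


def apply_filters(tree, input_nodes, steps):
    """Apply a sequence of (op, roots) filter steps to the input node-set.
    op is one of 'intersect', 'subtract', or 'union' -- the three
    fundamental set operations the transform combines."""
    result = set(input_nodes)
    for op, roots in steps:
        selected = descendants_or_self(tree, roots)
        if op == "intersect":
            result &= selected
        elif op == "subtract":
            result -= selected
        elif op == "union":
            result |= selected
    return result


# Example: sign the whole document except the subtree rooted at "b".
tree = {"doc": ["a", "b"], "a": ["a1"], "b": ["b1"]}
nodes = {"doc", "a", "a1", "b", "b1"}
signed = apply_filters(tree, nodes, [("subtract", {"b"})])
# signed is now {"doc", "a", "a1"}
```

The 'subtract' step expresses the common case the spec highlights: signing an entire document except a designated subset, without evaluating an XPath expression against every node.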
[August 27, 2002] "What's Behind BEA's Big Bet On Tools?" By Jack Vaughan. In Application Development Trends Volume 9, Number 8 (August 2002), pages 48-55. ['The company started life with the Tuxedo transaction monitor, then its WebLogic Java application server redefined the middleware market. Now BEA Systems will seek to entice a broader group of developers to work with Java.'] "...In the last five years, BEA Systems has hurtled into the top ranks of enterprise software companies. And it did it on the back of one of the hottest products the industry has ever seen. BEA's WebLogic application server was the proverbial right product in the right place at the right time. San Jose, Calif.-based BEA appears to sense it may be time to turn its back on the notion that tools should be completely independent of runtime software, and it is now ready to push tools along with its platforms. It is also wagering that Java needs a mainstream IDE to attract wider use. Thus, the firm is beginning to promote its WebLogic Workshop software at the same time it shows diminished interest in technical Java tools from Santa Clara, Calif.-based WebGain, a third-party tool maker BEA helped fund, but which has struggled in recent months. A number of traits have merged harmoniously, creating success for the WebLogic server over the last few years. The BEA application server was fast, had an architecture that impressed enterprise shops with transaction processing backgrounds, and it adhered carefully to the new J2EE compatibility standard. With the Java application server, the foggy notion of 'middleware' gained definition, and BEA's WebLogic server was the most brilliant example... BEA has begun to tout its new line of WebLogic Workshop tools, which is a change for a company that avoided making tools in the past... BEA has been quickly forced into the role of big league player. But it has not proved shy about driving change. More barbs will come its way. 
Its WebLogic Workshop offering especially will come in for arrows from competitors that challenge whether it is truly standard Java. While BEA has tried to create a way of abstracting complexity without disrupting standard Java, Workshop must rely on useful, callable BEA-created components for handling some common but difficult Java programming tasks. In the loosely knit Java community, that is usually an ingredient for controversy... Having reached nearly $1 billion in yearly revenue, said Gartner's Natis, BEA is a vendor at a crossroads. Getting to higher revenues will be difficult. BEA comes to its present position with some brave plans that, at the very least, help to make the software business interesting. In the end, corporate application managers and programmers will decide, as ever, if the products live up to the promise..."
[August 27, 2002] "Microsoft Delivers Web Services Toolkit." By Richard Karpinski. In InternetWeek (August 27, 2002). "Microsoft this week made available an add-on to its Visual Studio development platform that makes it easier for developers to build applications that support the Web services specifications it is forwarding. The new Microsoft Web Services Development Kit supports specs such as WS-Security, WS-Routing, and WS-Attachments. Though there is broad industry support behind some of these efforts -- especially WS-Security, which now has wide backing and is on a standards track in the OASIS group -- none are yet official standards. Yet the specs are strongly backed by Microsoft, and thus inclusion of them in the popular Visual Studio not only puts them into wide use, it sets up a scenario that could push rivals -- such as Sun Microsystems -- to back the specs or push its developers in another direction. That begins to move the battle over Web services specs out of the standards bodies and into the marketplace. Microsoft, Sun, and others continue to talk a good game on Web services standards, but some major fissures remain. Perhaps most notably, Sun has not yet been asked to join the Web Services Interoperability group as a board member, it says, a requirement for joining that organization. And while Sun has climbed on board WS-Security, the latest major Web services forwarded at OASIS saw IBM, Microsoft, and BEA taking the lead -- and Sun nowhere to be found. The new development kit is available for free download from the MSDN developer Web site. It plugs into the Visual Studio .Net development tool and fits the company's overall .Net framework..." See details in "Microsoft Announces Web Services Development Kit Technology Preview."
[August 27, 2002] "What's New in EJB 2.1?" By Emmanuel Proulx. Published on The O'Reilly Network, ONJava.com (August 14, 2002). "Only a few J2EE application servers are following the EJB 2.0 specification, and already the EJB 2.1 draft specification is out. For you busy folks who want to know about what the future has in store for EJBs but don't have the time to read a 636-page document, here is a quick overview. Fair warning: the specification is a draft, so many parts are incomplete or will change. Quick List of New Features: (1) Message-driven beans (MDBs): can now accept messages from sources other than JMS; (2) EJB query language (EJB-QL): many new functions are added to this language: ORDER BY, AVG, MIN, MAX, SUM, COUNT, and MOD; (3) Support for Web services: stateless session beans can be invoked over SOAP/HTTP. Also, an EJB can easily access a Web service using the new service reference; (4) EJB timer service: a new event-based mechanism for invoking EJBs at specific times; (5) Many small changes: support for the latest versions of Java specifications, XML schema, and message destinations... The ejb-jar.xml standard deployment descriptor is now specified with XML schema, rather than DTD... One of the best features of EJB 2.1 is the support for Web services. This applies to two different areas: accessing an EJB as if it were a Web service, and an EJB directly accessing a Web service. A stateless session EJB can now be accessed through a Web service. In order to do that, the client must use SOAP over HTTP or HTTPS, and use a WSDL descriptor. Furthermore, the stateless session bean must be invoked 'RPC-style'..." See also (1) Enterprise JavaBeans 2.1 as JSR 153; (2) Enterprise JavaBeans, 3rd Edition, By Richard Monson-Haefel.
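For readers who have not seen the EJB-QL additions, a hypothetical finder-method query in an ejb-jar.xml deployment descriptor might look like the following. The bean, method, and field names are invented, and descriptor details may change before the draft specification is final:

```xml
<query>
  <query-method>
    <method-name>findLargeOrders</method-name>
    <method-params>
      <method-param>double</method-param>
    </method-params>
  </query-method>
  <!-- ORDER BY is new in EJB-QL for EJB 2.1; aggregate functions such as
       AVG, MIN, MAX, SUM, COUNT, and MOD are also added -->
  <ejb-ql>
    SELECT OBJECT(o) FROM Order o
    WHERE o.total > ?1
    ORDER BY o.total DESC
  </ejb-ql>
</query>
```

Under EJB 2.0 the ordering and aggregation would have had to be done in application code after the finder returned.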
[August 27, 2002] "W3C, OASIS Meet Over Web Security Standards." By Darryl K. Taft. In eWEEK (August 27, 2002). "Despite the best efforts to come to agreement on Web security standards, two leading standards bodies can best say they have made a start on moving to a common set of standards. At the XML Web Services One Conference in Boston, the Organization for the Advancement of Structured Information Standards (OASIS) and the World Wide Web Consortium (W3C) held an all-day forum to hash out where they need to pool their resources and integrate security standards efforts. 'Standards should be enablers, not limiters,' said Phillip Hallam-Baker, chief engineer at VeriSign Inc., which is a co-author of the WS-Security specification. 'Don't complain if companies don't wait for standards to catch up.' He added, 'Without trust and security, Web services are dead on arrival.' Hallam-Baker said key standards under the W3C include XML Encryption, XML Signature, and the XML Key Management Specification (XKMS), whereas the key standards under OASIS include Extensible Rights Markup Language (XrML), WS-Security, Security Assertion Markup Language (SAML), Provisioning, Biometrics, and Extensible Access Control Markup Language. Some users expressed the need for more cohesion among the standards. However, Hallam-Baker said there is no standards war. 'Either there is genuinely more than one approach that makes sense' or the individual standards can be put together, he said. And although 'there is lots of potential overlap, we're very capable to start it on a very, very specific theme. You're seeing convergence on a single approach,' he added. However, some users said they cannot wait for the standards bodies to come up with standards because they must implement systems today. Patrick Gannon, CEO of OASIS, said, 'It's not just that we're using standards, but we have the ability to get wide adoption of standards... 
There will be more coordination [between the W3C and OASIS] moving forward'..." See "Forum on Security Standards for Web Services."
[August 27, 2002] "Update on SSML [Speech Synthesis Markup Language Specification]." By Daniel C. Burnett. In VoiceXML Review (July/August 2002). "The Speech Synthesis Markup Language (SSML), as its name implies, provides a standardized annotation for instructing speech synthesizers on how to convert written language input into spoken language output. This language has been under development within the Voice Browser Working Group (VBWG) of the World Wide Web Consortium (W3C) for a few years. This article provides a brief update on the status and future of SSML. For background on SSML and an introduction to its features... In April of 2002, the Voice Browser Working Group issued another Working Draft (not a Last Call this time) with some minor content changes. The group is now working towards publication of a new Last Call WD. The April 2002 draft has a fairly small number of changes from the January 2001 draft. It was released primarily to provide XML Schema support for use in VoiceXML and to bring the definition of valid SSML documents in line with that in the other Voice Browser Working Group specifications... The W3C has now moved from encouraging the use of XML Schema to the stronger position of explicitly discouraging the use of DTDs. While the creation of a schema when you already have a DTD is fairly straightforward, the fact that SSML is expected to be embedded in other markup languages (of which VoiceXML is the first example) brought additional requirements to the table: (1) the need to be able to incorporate SSML elements into the host language namespace, (2) the need to modify the SSML elements to add host language-specific attributes and functionality. In the SSML specification the DTD is now informational only, while the schema provides the normative definition of the syntax of the language... 
Any changes for the next [future] draft are likely to fall into two categories: clarifications of ambiguous or confusing features and text, and the addition of features requested or encouraged by other groups in the W3C. Two portions of the specification that were vague in the last Working Draft are the use of the xml:lang attribute and the <say-as> element... The <metadata> element in VoiceXML and SRGS provides a mechanism for expressing information about the document. Both recommend the use of the Resource Description Framework (RDF) syntax and schema as the content format for this element; RDF 'provides a standard way for using XML to represent metadata in the form of statements about properties and relationships of items on the Web.' This element (with suggested content structure) is part of the W3C's Semantic Web Initiative, an attempt to develop standard ways of representing the meaning of XML-structured data on the World Wide Web. As such, it is likely that such a capability will be encouraged for SSML..." See: "W3C Speech Synthesis Markup Language Specification."
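The namespace and embedding points above can be made concrete with a short, hypothetical SSML fragment. The element names (`speak`, `say-as`) and the `xml:lang` attribute follow the Working Draft, but the fragment itself is invented for illustration; a minimal sketch using Python's standard-library XML parser:

```python
import xml.etree.ElementTree as ET

# An illustrative (non-normative) SSML fragment; element and
# attribute names follow the Working Draft, content is invented.
SSML = """<speak xml:lang="en-US">
  The meeting is at
  <say-as type="time">2:30pm</say-as>
  on <say-as type="date:md">8/27</say-as>.
</speak>"""

root = ET.fromstring(SSML)
# ElementTree reports xml:lang under the XML namespace URI.
lang = root.get("{http://www.w3.org/XML/1998/namespace}lang")
print(lang)  # en-US
print([e.get("type") for e in root.iter("say-as")])  # ['time', 'date:md']
```

The `xml:lang` lookup illustrates exactly the kind of host-language ambiguity the draft is clarifying: the attribute lives in the reserved XML namespace, not in the SSML (or VoiceXML) namespace.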
[August 27, 2002] "The IETF Speech Services Control Working Group." By Eric Burger. In VoiceXML Review (July/August 2002). "Speech recognition technology has become an essential building block for a new wave of next generation enhanced services. Speech resources such as automated speech recognition (ASR), text-to-speech (TTS), and speaker verification (SV) are becoming key features in a range of new services that help businesses manage their work force and customer base more efficiently and enable consumers to communicate in compelling new ways. We are just now seeing interesting applications where you speak to an application and it responds to you, such as automated stock trading, airline reservations, and e-mail by phone. Speech resources make these interesting and useful applications possible... Right now, most of these applications are experiments, trials, and limited deployments. There are a number of challenges still facing speech resource providers, application developers, and platform manufacturers. The IESG recently chartered speechsc, or the Speech Services Control Work Group of the IETF, to develop a more effective protocol for speech recognition technology in next generation networks. This article will briefly discuss what speechsc is, what the expected benefits of the protocol will be, the role of the work group, and the speech services vision... The speechsc Work Group will develop protocols to support distributed media processing of audio streams. The focus of the working group is to develop protocols to support ASR, TTS, and SV. The working group will only focus on the secure distributed control of these servers... The work of the group is complementary to work going on in other standards bodies. We are coordinating with ETSI Aurora, ITU-T Study Group 16 Question 15, the W3C Multi-Modal Interaction Work Group, and other groups, as appropriate. 
The speechsc Work Group of the IETF is taking on the interesting work of enabling media servers, VoiceXML Interpreters, arbitrary speech applications, and possibly even wireless handsets to access distributed speech resources. This will enable new and useful applications that are speech driven and integrate multiple media types. The work group will improve upon the existing protocols and produce a robust, extensible protocol that meets the needs of ASR, TTS, and SV today and into the future..." The WG has a Speech Services Control Working Group Discussion list and archives. See also "VoiceXML Forum."
[August 27, 2002] "BEA, Palm Partner On Web Services For Handhelds." By Paul Krill. In InfoWorld (August 27, 2002). "Palm and BEA Systems on Tuesday will announce plans to boost development of Web services-based applications for Palm handheld devices. The plan melds Palm's Reliable Transport infrastructure technology with the BEA WebLogic Server 7.0, BEA's J2EE-based application server platform. Developers, the vendors said, will be able to build Palm applications that can either be wireless or downloaded via a Palm cradle to interface to back-end business logic... Through the partnership, a BEA WebLogic Workshop component called a control will be developed to bridge WebLogic to the Palm, according to Chris Morgan, Palm director of strategic alliances. Palm's Reliable Transport technology will be deployed to take care of low-level communications between the applications server and the handheld, Morgan said. Reliable Transport supports protocols such as GPRS. Web services will be deployed with Reliable Transport to move XML and SOAP messages back and forth between the application server and device, said Morgan... To run applications, the Palm client will require the Reliable Transport technology, an XML parser, SOAP engine and, to run Java code, a Java Virtual Machine from PalmSource. The Palm control resides on the application server. The Palm-BEA arrangement will enable a single application to be developed for both the Palm and the server, Morgan said. 'Right now, to do this, [developers] would have to do all the business logic in the BEA server. Today, they would then have to write a completely different application for the Palm,' he said. Applications such as travel or expense reports could be deployed on the Palm via the arrangement between the companies, according to Morgan. A beta release of the BEA-Palm software combination is due in late 2002, with general availability planned for the first quarter of 2003..." 
Related: "Palm and IBM team to deliver wireless solutions." See the announcement: "Palm and BEA Partner to Mobilize Web Services in the Enterprise. Industry Leaders to Enable Enterprises to Develop and Deploy Web Services and Extend Enterprise Data Access to Palm Handhelds."
[August 27, 2002] "Burning for Web Services." By Brian Fonseca. In InfoWorld (August 27, 2002). "A battle is brewing between traditional firewall players and a new breed of XML-application firewall vendors as both push wares that promise to protect enterprises from the security threats Web services may bring. Analysts say that whereas most of the mainstream firewall players, such as Symantec, Network Associates, Cisco, and even Microsoft, rest on their laurels, a group of startups is emerging to take dead aim at securing Web services. Stepping forward on Tuesday, Check Point Software Technologies will be the first of the stalwarts to make a move in the Web services sector when it unveils a SOAP and XML strategy within its FP3 (Feature Pack 3) software upgrade. Due next month, FP3 will include SSL VPN capabilities and stateful inspection of SOAP and XML traffic within HTTP and HTTPS, said Neal Gehani, senior product manager at Redwood City, Calif.-based Check Point. FP3 will enable Check Point's products to provide an integrated network and application layer that performs authentication, routing, QoS (quality of service), and management of Web services transactions and messages... Matthew Kovar, an analyst at Cambridge, Mass.-based The Yankee Group, said that Check Point has yet to be tested against new applications that require a stand-alone proxy. Kovar also questioned the company's expertise to identify all types of malicious activities Web services and its protocols may bring... Last week, XML firewall upstart Quadrasis introduced its SOAP Content Inspector, an entry-level point for customers to wrap authentication, authorization, and alerts around bidirectional SOAP and XML messages. The software product offers a proxy-based approach that does not depend on a Web server, and it supports fledgling Web services security standards such as WS-Security, Microsoft Passport, and SAML (Security Assertion Markup Language)." 
See the Check Point announcement "Check Point Software First to Secure Web Services. XML/SOAP security made possible with Breakthrough Application Intelligence Technology."
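The application-layer inspection these products perform can be sketched in miniature. The envelope namespace below is the real SOAP 1.1 one, but the allow-list of operations and the sample message are invented; this is a toy illustration of content inspection, not any vendor's implementation:

```python
import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"  # SOAP 1.1
ALLOWED = {"GetQuote"}  # hypothetical allow-list of operations

def inspect(message: str) -> bool:
    """Accept only a well-formed SOAP 1.1 envelope whose Body's
    first child is an allowed operation; reject everything else."""
    try:
        root = ET.fromstring(message)
    except ET.ParseError:
        return False
    if root.tag != f"{{{SOAP_ENV}}}Envelope":
        return False
    body = root.find(f"{{{SOAP_ENV}}}Body")
    if body is None or len(body) == 0:
        return False
    # The local name of the Body's first child names the operation.
    op = body[0].tag.rsplit("}", 1)[-1]
    return op in ALLOWED

msg = (f'<e:Envelope xmlns:e="{SOAP_ENV}"><e:Body>'
       '<GetQuote xmlns="urn:example"/></e:Body></e:Envelope>')
print(inspect(msg))            # True
print(inspect("<not-soap/>"))  # False
```

A real XML firewall would of course go much further (schema validation, signature checks, rate limits), but the default-deny shape is the same.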
[August 26, 2002] "SAML Secures Web Services." By Linda Rosencrance. In ComputerWorld (August 26, 2002). ['The Security Assertions Markup Language (SAML) is an XML-based framework for Web services that enables the exchange of authentication and authorization information among business partners.'] 'If an emerging security specification for Web services from the Organization for the Advancement of Structured Information Standards (OASIS) consortium succeeds, the days of multiple sign-ons could be over for companies and their business partners. OASIS is a worldwide not-for-profit consortium that drives the development, convergence and adoption of e-business standards. Its Security Assertions Markup Language (SAML) Specifications Set 1.0 is a vendor-neutral, XML-based framework for exchanging security-related information, called 'assertions,' between business partners over the Internet. OASIS is scheduled to adopt SAML by the end of November, according to Jeff Hodges, co-chairman of the OASIS Security Services Technical Committee, which developed the specification. SAML is designed to deliver much-needed interoperability between compliant Web access management and security products. The result: Users should be able to sign on at one Web site and have their security credentials transferred automatically to partner sites, enabling them to authenticate once to access airline, hotel and rental car reservations systems through Web sites maintained by associated business partners, for example. SAML addresses the need to have a unified framework that is able to convey security information for users who interact with one provider so they can seamlessly interact with another, according to Hodges. SAML doesn't address privacy policies, however. Rather, partner sites are responsible for developing mutual requirements for user authentication and data protection. The SAML specification itself doesn't define any new technology or approaches for authentication. 
Instead, it establishes assertion and protocol schemas for the structure of the documents that transport security. By defining how identity and access information is exchanged, SAML becomes the common language through which organizations can communicate without modifying their own internal security architectures..." See "Security Assertion Markup Language (SAML)."
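The "assertion schema" idea can be illustrated with a skeletal authentication assertion. The namespace URI is the real SAML 1.0 assertion namespace; the issuer, subject, and overall minimal structure are invented for illustration (a real assertion carries conditions, timestamps, and usually a signature):

```python
import xml.etree.ElementTree as ET

SAML = "urn:oasis:names:tc:SAML:1.0:assertion"  # SAML 1.0 namespace

# A skeletal, hypothetical authentication assertion: the shapes
# (Assertion / AuthenticationStatement / Subject / NameIdentifier)
# follow the SAML 1.0 schema, but every value here is invented.
ASSERTION = f"""<Assertion xmlns="{SAML}" Issuer="airline.example.com">
  <AuthenticationStatement
      AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">
    <Subject><NameIdentifier>alice</NameIdentifier></Subject>
  </AuthenticationStatement>
</Assertion>"""

root = ET.fromstring(ASSERTION)
issuer = root.get("Issuer")
name = root.findtext(f".//{{{SAML}}}NameIdentifier")
print(issuer, name)  # airline.example.com alice
```

This is the single-sign-on payload in miniature: a partner site that trusts `airline.example.com` can extract the subject and grant access without re-authenticating the user.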
[August 26, 2002] "What's New in EJB 2.1?" By Tarak Modi. In Java Pro Magazine Volume 6, Number 10 (October 2002). ['The Enterprise JavaBeans 2.1 specification extends the existing Enterprise JavaBeans 2.0 specification with new features, including support for JAXM message-driven beans, enhancements to EJB QL to support aggregate and other operations, support for linking of messaging destinations, support for web services usages within EJB, and a container-managed timer service.'] "Enterprise JavaBeans (EJBs) and the advent of Web services seem to be a good example of this phenomenon. Since the release of EJB 2.0, Web services and associated standards such as Simple Object Access Protocol (SOAP); Web Services Description Language (WSDL); Universal Description, Discovery, and Integration (UDDI); and Electronic Business Extensible Markup Language (ebXML); among others, have gained tremendous momentum. So it should come as no surprise that the draft specification of EJB 2.1 (which Sun released in June 2002 for public review) includes Web services support as one of its major enhancements. Other enhancements in EJB 2.1 allow you to access stateless session beans as Web services and extend message-driven bean component types to other messaging types. The new spec also includes a container-managed timer service. Let's see what these changes will mean for you... The EJB 2.1 deployment descriptor includes a new element called service-endpoint that contains the fully qualified name of the enterprise bean's Web service endpoint interface. This element is a child of the session-beanType element. Only stateless session beans can have the service-endpoint element in their deployment descriptor. The Web service endpoint is exposed only if it is referenced by a Web service deployment descriptor through the service-endpoint element. If this is done correctly during deployment, the container will generate the appropriate classes that implement the Web service endpoint interface... 
The Web Service endpoint interface facility is available only for stateless session beans. That is, entity beans (container- and bean-managed), stateful session beans, and message-driven beans cannot be made available as Web services. In a way this makes sense because all Web services standards today are meant to support synchronous, stateless Web services, which map very nicely to stateless session beans... Both Java and non-Java clients can access stateless session beans as Web services. A client that is written in Java may access the Web service by means of the Java API for XML-based RPC (JAX-RPC) client APIs, which is part of the Java Web Service Pack release. And of course, all clients can access the Web service through SOAP 1.1 messages over HTTP(S). SOAP messages over other protocols (such as Simple Mail Transfer Protocol [SMTP]) are not yet supported, although such support is included in the SOAP 1.1 specification. To support Web service interoperability, the EJB 2.1 specification requires compliant implementations to support XML-based Web service invocations using WSDL 1.1 and SOAP 1.1 over HTTP 1.1 in conformance with the requirements of JAX-RPC..." See also Enterprise JavaBeans 2.1 as JSR 153.
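The deployment-descriptor mechanics described above can be sketched as a fragment. The `service-endpoint` element and its placement under the session bean's entry follow the article; the bean name, interface name, and surrounding element order are an illustrative guess rather than a verbatim EJB 2.1 descriptor, here checked for well-formedness with Python's standard library:

```python
import xml.etree.ElementTree as ET

# A skeletal ejb-jar.xml-style fragment illustrating the new
# service-endpoint element; bean and interface names are invented
# and the fragment is not a complete, valid EJB 2.1 descriptor.
DESCRIPTOR = """<ejb-jar>
  <enterprise-beans>
    <session>
      <ejb-name>QuoteBean</ejb-name>
      <service-endpoint>com.example.QuoteEndpoint</service-endpoint>
      <session-type>Stateless</session-type>
    </session>
  </enterprise-beans>
</ejb-jar>"""

root = ET.fromstring(DESCRIPTOR)
for bean in root.iter("session"):
    # Per the spec, only stateless session beans may carry
    # a service-endpoint element.
    assert bean.findtext("session-type") == "Stateless"
    print(bean.findtext("service-endpoint"))  # com.example.QuoteEndpoint
```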
[August 26, 2002] "JAXR: A Web Services Building Block." By Sameer Tyagi. In Java Pro Magazine Volume 6, Number 10 (October 2002). [Covers Java API for XML Registries (JAXR), which 'provides a uniform and standard Java API for accessing different kinds of XML Registries; an XML registry is an enabling infrastructure for building, deploying, and discovering Web services.'] "The Java XML (JAX) Pack provides the core set of APIs that facilitate the building of Web services in Java. The JAX Pack is a set of Java APIs that includes the Java APIs for XML Processing (JAXP) and Messaging (JAXM), the Java API for XML-based RPC (JAX-RPC), SOAP with Attachments API for Java (SAAJ) and Java API for XML Registries (JAXR), which was released with the final version of the pack in June. JAXR provides a uniform and standard Java API for accessing XML registries. JAXR is also included in Sun's Web Services Developer Pack version 1.0, which provides tools for Java developers to build, test, and deploy Web services. Although the API itself is quite simple, JAXR represents a critical component for enabling Java Web services. Let's see why... JAXR provides a layer of abstraction to developers and gives them the ability to write applications with a simple and standard API in Java to interact with a varied set of business registries (at present UDDI and ebXML). However, this should not be implicitly construed to mean that JAXR is a new registry specification or is a lowest-common-denominator API. The JAXR architecture is based on the concept of pluggable providers. Developers write applications using a standard JAXR client API and a standard JAXR information model (or domain object model). The JAXR pluggable provider then maps the information model and invocations to the underlying registry's capability and delegates work under the hood to the registry-specific provider, which knows how to interact with that specific registry. 
Because JAXR's information model provides a superset of existing registry models, not all registries support each individual JAXR feature. To group these features logically, each individual method in the JAXR API is assigned a capability level, and providers declare what capability level they support. In practical terms, the JAXR information model is based on the ebXML information model. This makes sense because the ebXML version 2.0 information model is functionally larger than the UDDI version 2.0 information model. JAXR has two capability levels: level 0 and level 1. A capability level 0 from a provider implies support for a UDDI registry, and a level 1 implies support for an ebXML and UDDI registry (support for a higher level by a provider also implies support for a lower level). All JAXR providers are required to support level 0 (and hence UDDI), and support for level 1 is optional. In short, a JAXR client for a standard registry (such as UDDI) is guaranteed to be portable across other providers of that registry..."
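The capability-level rule above ("support for a higher level also implies support for a lower level") is easy to model. This is an illustrative sketch of the idea in Python, not the JAXR API itself, and the provider names are invented:

```python
# Illustrative (non-JAXR) model of pluggable providers declaring
# capability levels: 0 = UDDI-level features, 1 = ebXML + UDDI.

class RegistryProvider:
    def __init__(self, name: str, capability_level: int):
        self.name = name
        self.capability_level = capability_level

    def supports(self, required_level: int) -> bool:
        # Declaring a higher level implies support for all lower ones,
        # so a simple comparison is enough.
        return self.capability_level >= required_level

uddi_only = RegistryProvider("uddi-provider", 0)
ebxml = RegistryProvider("ebxml-provider", 1)

print(uddi_only.supports(0), uddi_only.supports(1))  # True False
print(ebxml.supports(0), ebxml.supports(1))          # True True
```

A portable client written against level 0 features works with both providers; a level 1 client must check `supports(1)` (or catch the failure) before relying on ebXML-only methods.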
[August 26, 2002] "Get to the Top with 10 Wireless Technologies." By Jeff Jurvis (Compaq Global Services). In Java Pro Magazine Volume 6, Number 10 (October 2002). ['Java enables sophisticated wireless applications that can help developers penetrate the enterprise.'] "Java has always been the perfect platform for mobile and wireless applications, but the enabling technologies have made it difficult to deliver those applications. Devices had insufficient processing power, and networks were slow and unreliable. Only now are devices and networks coming up to speed to support the kind of applications we want to run over wireless. Java's emphasis on security and efficient use of network resources makes it ideal for building enterprise applications on small but powerful devices such as smartphones and handhelds. Here are 10 key wireless technologies that do enable sophisticated wireless applications today and why they are important to Java developers." The article covers: Wireless Application Protocol (WAP), Mobile Markup Languages [WML, Compact HTML (cHTML), XHTML Basic and the XHTML Mobile Profile], Multimodal Markup Languages, Short Messaging Service (SMS), SyncML, 802.11b Wireless LANs, Next Generation Wireless Phone Networks, Wireless Security, Java APIs for Bluetooth Wireless Technologies, and JavaPhone API.
[August 26, 2002] "Practicing Safer SAX. [Column: Javatecture.]" By James W. Cooper. In Java Pro Magazine Volume 6, Number 10 (October 2002). ['See how easy it is to write your own XML-parsing system using Java 1.4, which includes all of the common methods for parsing XML documents, including both a SAX and a DOM parser.'] "All of the common methods for parsing XML documents, including both a Simple API for XML (SAX) and a document object model (DOM) parser, are built into Java 1.4, and this prompted me to rewrite some code I have lying around to use these classes. Let's suppose we have a passel of documents on which we want to do some computations. Now, these documents could just be separate files, but if they are short documents, perhaps document abstracts, you will get better performance if you put all of them in one big file. Now, what sort of document analysis might we be doing where we would scan through a bunch of abstracts? Depending on your computational bent, you might try to analyze each document for readability, for occurrence of specific domain terms, or sentence complexity. In this example, we'll just count the number of words in each document..."
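The word-counting exercise Cooper describes translates directly to Python's built-in SAX parser. The `<doc>` wrapper element and the sample abstracts below are invented for illustration:

```python
import io
import xml.sax

class WordCounter(xml.sax.ContentHandler):
    """Count the words inside each <doc> element of one big file."""
    def __init__(self):
        super().__init__()
        self.counts = []
        self._in_doc = False
        self._words = 0

    def startElement(self, name, attrs):
        if name == "doc":
            self._in_doc = True
            self._words = 0

    def characters(self, content):
        # Note: SAX may deliver text in several chunks; splitting on
        # whitespace per chunk is fine for this small illustration.
        if self._in_doc:
            self._words += len(content.split())

    def endElement(self, name):
        if name == "doc":
            self._in_doc = False
            self.counts.append(self._words)

DATA = """<abstracts>
  <doc>XML stream processing at scale.</doc>
  <doc>Counting words with a SAX handler.</doc>
</abstracts>"""

handler = WordCounter()
xml.sax.parse(io.StringIO(DATA), handler)
print(handler.counts)  # [5, 6]
```

Because SAX never builds a tree, the same handler scales to the "one big file of abstracts" case the column describes, in constant memory.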
[August 26, 2002] "Microsoft Readies Specifications Compliance Kit for Web Services." By Paul Krill. In InfoWorld (August 26, 2002). "The Microsoft Web Services Development Kit (WSDK), to be available in a beta version Monday [2002-08-26], will function with the company's Visual Studio .Net development platform. The free download will provide support for three Microsoft-driven specifications that the company wants adopted as industry standards: WS-Security, WS-Attachments, and WS-Routing... A final kit is to be available in approximately two months, followed by periodic updates as new industry standards and specifications emerge, according to Steven VanRoekel, Microsoft director of Web services marketing. But don't look for standards efforts from rival Sun Microsystems, such as the Web Services Choreography Interface (WSCI) submitted to the World Wide Web Consortium (W3C), to be supported in the kit, VanRoekel said... WS-Security is intended to enable secure passing of SOAP messages. WS-Routing supports routing of messages through intermediaries, such as passing an order for a part directly to a vendor. WS-Attachments enables binary attachments, such as a picture, to be attached to a SOAP message... WS-Security has been submitted to OASIS (Organization for the Advancement of Structured Information Standards) while WS-Attachments was sent to the Internet Engineering Task Force (IETF). WS-Routing has not been submitted to a standards body. These three have been included in the kit because they are the most mature of Microsoft's specifications, according to Meyer. Future versions of the kit might add specifications such as: BPEL4WS (business process execution language for Web services), which is intended to ensure that differing business processes can understand each other in a Web services environment; WS-Transaction, for transactional applications; and WS-Coordination, for Web services coordination..." 
See details in the 2002-08-26 news item "Microsoft Announces Web Services Development Kit Technology Preview."
[August 26, 2002] "Microsoft Previews Web Services Kit." By Darryl K. Taft. In eWEEK (August 26, 2002). "Though working in lock step with partners on every other important Web services standard, Microsoft on Monday took a step on its own to advance Web services capabilities. The company announced the availability of the technical preview for the Microsoft Web Services Development Kit (WSDK), which provides the tools developers need to build advanced Web services applications using the latest Web services specifications, such as WS-Security, WS-Routing and WS-Attachments. The WSDK incorporates Microsoft's recent work with partners such as IBM and VeriSign Inc. and also with customers to develop Web services specifications beyond XML and the Simple Object Access Protocol (SOAP), such as WS-Security, that address the core challenges of Web services in a way that is broadly interoperable across heterogeneous systems. In addition, the specifications are designed to be modular so developers using Microsoft's WSDK can incorporate a specific specification functionality, on an as-needed basis, into the different levels of their Web services applications..." See details in the 2002-08-26 news item "Microsoft Announces Web Services Development Kit Technology Preview."
[August 26, 2002] "XMLTK: An XML Toolkit for Scalable XML Stream Processing." Draft version 2002-07. 13 pages. By Iliana Avila-Campillo (Institute for Systems Biology), Todd Green (Xyleme), Ashish Gupta (University of Washington), Makoto Onizuka (NTT Cyber Space Laboratories, NTT Corporation), Demian Raven (University of Washington) and Dan Suciu (University of Washington). Paper prepared for presentation at the PLAN-X Workshop on Programming Language Technologies for XML, October 3, 2002, Pittsburgh, PA, USA. "We describe a toolkit for highly scalable XML data processing, consisting of two components. The first is a collection of stand-alone XML tools, such as sorting, aggregation, nesting, and unnesting, that can be chained to express more complex restructurings. The second is a highly scalable XPath processor for XML streams that can be used to develop scalable solutions for XML stream applications. In this paper we discuss the tools, and some of the techniques we used to achieve high scalability. The toolkit is freely available as an open-source project. Each of the stand-alone XML tools performs one single kind of transformation, but can scale to arbitrarily large XML documents in, essentially, linear time, and using only a moderate amount of main memory. There is a need for such tools in user communities that have traditionally processed data formatted in line-oriented text files, such as network traffic logs, web server logs, telephone call records, and biological data. Today, many of these applications are done by combinations of Unix commands, such as grep, sed, sort, and awk. All these data formats can and should be translated into XML, but then all the line-oriented Unix commands become useless. Our goal is to provide tools that can process the data after it has been migrated to XML. Our second goal is to study highly efficient XML stream processing techniques. 
The problem in XML stream processing is the following: we are given a large number of boolean XPath expressions and a continuous stream of XML documents and have to decide, for each document, which of the XPath expressions it satisfies. In stream applications like publish/subscribe or XML packet routing this evaluation needs to be done at a speed comparable with the network throughput, and scale to large numbers of XPath expressions... We report here one novel technique for stream XML processing called Stream IndeX, SIX, and describe its usage in conjunction with the stand-alone tools. A SIX for an XML file (or XML stream) consists of a sequence of byte offsets in the XML file that can be used by the XPath processor to skip unneeded portions. When used in applications like XML packet routing, the SIX needs to be computed only once for each packet, which can be done when the XML packet is first generated, then routed together with the packet... The work closest to our toolkit is LT XML. It defines a C-based API for processing XML files, and builds a large number of tools using this API. Their emphasis is on completeness, rather than scalability: there is a rich set of tools for searching and transforming XML files, including a small query processor..." See other references in the news item "Stream Index 'SIX' Used in XML Stream Processing Toolkit." Source: Postscript. Also in the PLAN-X Proceedings.
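A toy version of the stream-matching problem helps fix the idea: given a set of path expressions and a streamed document, report which paths the document satisfies, in a single pass. This sketch handles only simple absolute child paths (nothing like full XPath, and it is not XMLTK's technique, which compiles the expressions and uses the SIX offsets to skip data); the document and paths are invented:

```python
import io
import xml.sax

class PathMatcher(xml.sax.ContentHandler):
    """Single-pass matcher for simple absolute paths like /a/b/c."""
    def __init__(self, paths):
        super().__init__()
        self.paths = set(paths)
        self.matched = set()
        self._stack = []

    def startElement(self, name, attrs):
        self._stack.append(name)
        current = "/" + "/".join(self._stack)
        if current in self.paths:
            self.matched.add(current)

    def endElement(self, name):
        self._stack.pop()

DOC = "<log><entry><host>a</host></entry><entry/></log>"
m = PathMatcher(["/log/entry", "/log/entry/host", "/log/miss"])
xml.sax.parse(io.StringIO(DOC), m)
print(sorted(m.matched))  # ['/log/entry', '/log/entry/host']
```

The memory used is proportional to the element nesting depth, not the document size, which is what makes this style of processing viable for packet routing and publish/subscribe; the SIX index further lets the processor seek past subtrees that cannot contribute a match.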
[August 26, 2002] "Registering and Discovering RSS Feeds in UDDI." By Karsten Januszewski (Microsoft Corporation). Microsoft White paper at GotDotNet. "The use of Universal Description, Discovery, and Integration (UDDI) to catalog and discover Rich Site Summary (RSS) news feeds is a logical application of UDDI in its mission of description and discovery of Web services. RSS is one of the most frequently used applications of XML on the Web today. It provides a standard way for organizations and individuals to distribute news on the Internet. One question that arises with RSS is the ability to discover the location of different RSS Feeds. The question of discovery and aggregation of RSS Feeds has the following requirements: (1) Programmatically publish an RSS Feed; (2) Associate metadata (classification, geography, ownership, etc.) with that RSS Feed in an extensible manner; (3) Query for RSS Feeds based on a number of parameters; (4) Perform requirements 1, 2, and 3 in an interoperable, programming language independent way. It is in meeting these requirements that UDDI serves as a solution. UDDI provides a mechanism to register RSS Feeds in a UDDI registry. UDDI has a flexible classification system that can be employed to attribute those feeds with a range of different metadata in an extensible way. Once RSS Feeds are registered in UDDI, users can query for those feeds deterministically across different metadata. Client RSS readers can query UDDI and aggregate different RSS Feeds based on classification information. And, lastly, UDDI is an interoperable, programming language independent service with a comprehensive XML SOAP API for both publication and inquiry." From the announcement: "... a white paper and code sample on registering RSS Feeds in UDDI has been published. The paper walks through publishing and discovering RSS Feeds in UDDI, including a mapping between the two data models, the creation of well-known RSS tModels, and recommendations on classification. 
The code sample provides a sample .NET publication/aggregation tool based on the practice in the paper. An installable .msi file is provided, as is the source code for the C# WinForm. The application is meant only as a sample and is not optimized for usage. (There is no caching of feeds, for example, in the sample application.) Incidentally, a feed one might discover in UDDI according to the practice outlined in the paper is a web log I am maintaining on UDDI -- for the location of the feed, query UDDI based on the paper... The most difficult part of this exercise was modeling RSS version in UDDI. The paper opts for a particular solution; feedback and comments on the solution are welcomed..." See: (1) "RDF Site Summary (RSS)"; (2) "Universal Description, Discovery, and Integration (UDDI)."
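The mapping exercise starts from the feed's own channel metadata. As a sketch of what a publisher would extract before registering a feed, here is a minimal RSS channel parsed with Python's standard library; the feed content is invented, and the registry-record field names are illustrative, not the paper's actual UDDI mapping:

```python
import xml.etree.ElementTree as ET

# A minimal, invented RSS 0.92-style feed document.
RSS = """<rss version="0.92">
  <channel>
    <title>UDDI Weblog</title>
    <link>http://example.org/uddi/rss.xml</link>
    <item><title>Registering feeds</title></item>
  </channel>
</rss>"""

channel = ET.fromstring(RSS).find("channel")
# The kind of metadata a UDDI registration would carry: the feed's
# name, its access point (URL), plus classification added separately.
record = {
    "name": channel.findtext("title"),
    "accessPoint": channel.findtext("link"),
    "items": len(channel.findall("item")),
}
print(record)
```

Everything else the paper describes (tModels for the RSS version, classification taxonomies, geography) would be attached to such a record through UDDI's own metadata structures rather than pulled from the feed.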
[August 26, 2002] "WS-I Sorts Out Web Services Specs." By Darryl K. Taft. In eWEEK (August 26, 2002). "With the number of Web services standards becoming an alphabet soup, enterprises are looking for assurance that the myriad specifications are interoperable. The Web Services Interoperability organization, or WS-I, is taking steps to help. The WS-I recently finished an internal version of its first set of guidelines -- or profiles -- called WSBasic, designed to assist enterprises in developing and running Web services. The beta version is scheduled for release in November, with general availability expected by the end of the year. The group, formed in February by Microsoft Corp., IBM, BEA Systems Inc., Intel Corp. and others, also wants to play a broker role for the various competing standards bodies, in particular the World Wide Web Consortium (W3C) and the Organization for the Advancement of Structured Information Standards (OASIS)... Another key to standards interoperability is cooperation among the major standards groups. At the XML Web Services One Conference in Boston this week, the W3C and OASIS will discuss security standards for Web services. WS-I representatives said their group's profiles will give the standards bodies a middle ground to work around. The WS-I profiles are Web services specifications at specific version levels that include outlines about how they work together, according to Rob Cheng, a WS-I co-chairman and senior IPlatform product analyst at Oracle Corp., in Redwood Shores, Calif. WSBasic includes the core Web services specifications: XML Schema 1.0, SOAP (Simple Object Access Protocol) 1.1, WSDL (Web Services Description Language) 1.1 and UDDI (Universal Description, Discovery and Integration) 2.0... 
Available with the alpha version of WSBasic are sample applications used to demonstrate Web services interoperating on various platforms -- including Windows, Solaris and Linux -- and tools to analyze and test interoperability, said John Kiger, director of Web services technologies at BEA, in San Jose, Calif., and a WS-I board member. Sample applications and testing tools will be beefed up as profiles evolve. WSBasic will be the building block for profiles that will include other standards, such as WS-Transaction and WS-Security, Cheng said. Additional profiles will address issues such as message extensibility, routing, correlation, guaranteed message exchange, signatures, encryption, transactions, process flow and inspection. The development of additional or updated WS-I profiles depends on the continued maturity of Web services specifications, Cheng said. WS-I representatives said they expect that vertical industries will build on the WS-I profiles by adding industry-specific standards to them..." See: "Web Services Interoperability Organization (WS-I)."
[August 26, 2002] "Startup Eyes XML Management, Monitoring." By Tom Sullivan. In InfoWorld (August 25, 2002). "A Portsmouth, N.H.-based startup is looking to carve a place for itself among the growing swirls of XML within organizations. Swingtide on Monday announced itself to the IT industry, with plans to detail forthcoming products in the fourth quarter of this year. Although the company declined to detail those products, CEO David Sweet said that the focus is on XML and XML-based services. The very name connotes its founders' belief that a shift is taking place in which the proverbial programming tide is flowing decidedly away from traditional applications straight toward XML-based services. 'The fundamental purpose for which we founded Swingtide is to manage and monitor the growth of XML within the enterprise,' Sweet said. 'We are seeing a proliferation of XML.' But not everyone sees XML exploding right now. 'To say XML is growing like crazy, that's a bit of a rosy picture. There's not as much proliferation of XML content as we might have expected there to be a year ago,' said Tyler McDaniel, an analyst with consultancy Hurwitz Group, based in Framingham, Mass. 'A big reason for that is because companies have retrenched to make the most out of what they already have.'... Swingtide is also addressing what Sweet called QoB, or Quality of Business. Whereas QoS (Quality of Service) concerns itself with the physical network management, such as speed and performance, availability, and ROI, QoB examines the aspects of 'logical' network management, such as the customer experience, XML service traction and related revenue growth, and what Sweet called Return on Assets. A big part of QoB is to make sure that XML from various applications and sources is interoperable, much the way companies do with applications themselves... Swingtide's founders are no strangers to XML and the world of Web services. 
CEO Sweet, in fact, previously started two other companies with co-founder and chairman Jack Serfass -- Web services company Bowstreet, and Preferred Systems, which was sold to Computer Associates. The final piece of the co-founder triptych is David Wroe, who brings more than thirty years' experience in the financial services and insurance industries. In roles prior to Swingtide, Wroe served as CTO of commercial insurer CNA, and CEO of Agency Management Services..."
[August 26, 2002] "The Query Language TQL." By Giovanni Conforti, Giorgio Ghelli, Antonio Albano, Dario Colazzo, Paolo Manghi, and Carlo Sartiani (Dipartimento di Informatica, Università di Pisa, Pisa, Italy). From among the papers accepted for the Fifth International Workshop on the Web and Databases (WebDB 2002), Madison, Wisconsin - June 6-7, 2002. "This work presents TQL, a query language for semistructured data that can be used to query XML files. TQL substitutes the standard path-based pattern-matching mechanism with a logic-based mechanism, where the programmer specifies the properties of the pieces of data she is trying to extract. As a result, TQL queries are more 'declarative', or less 'operational', than queries in comparable languages. This feature makes some queries easier to express, and should allow the adoption of better optimization techniques. Through a set of examples, we show that the range of queries that can be declaratively expressed in TQL is quite wide. The implementation of the TQL binding mechanism requires the adoption of non-standard techniques, and some of its aspects are still open. In this paper we implicitly report on the current status of the implementation by writing all queries using the version of TQL that has been implemented... Although the language TQL originates from the study of a logic for mobile ambients, for the simplest queries it turns out to be quite similar, in practice, to other XML query languages. However, the expression of queries which involve recursion, negation, or universal quantification, has in TQL a clear declarative nature, while other languages are forced to adopt a more operational approach. All queries presented in this paper are executable in the prototype version of the TQL evaluator, and can be found in the file demo.tql in the standard distribution. 
The current version of the prototype works by loading all data in main memory, but is already based on a translation into an intermediate TQL Algebra, with logical optimizations carried on both at the source and at the algebraic level. The intermediate algebra works on infinite tables of forests, represented in a finite way, and supports such operations as complement, to deal with negation; coprojection, to deal with universal quantification; several kinds of iterators, to implement the | operator; and a recursion operator. TQL is currently based on an unordered nested multisets data model. The extension of TQL's data model with ordering is an important open issue." TQL can be freely downloaded. See "XML and Query Languages." [cache]
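The unordered nested-multisets data model, and the declarative style of negation and universal quantification it supports, can be sketched in a few lines. This is a hypothetical Python encoding for illustration only, not the TQL implementation or its algebra:

```python
def node(label, *children):
    # A document is a label plus an unordered multiset of subtrees;
    # sorting the children's canonical forms makes two trees compare
    # equal exactly when they are equal as nested multisets.
    return (label, tuple(sorted(children)))

def children(tree, label=None):
    """Child subtrees, optionally restricted to one label."""
    return [c for c in tree[1] if label is None or c[0] == label]

def every_author_has_name(bib):
    # Universal quantification stated declaratively, as in a logic-based
    # query: no 'author' child may lack a 'name' child.
    return not any(not children(a, "name") for a in children(bib, "author"))

bib_ok = node("bib",
              node("author", node("name"), node("affil")),
              node("author", node("name")))
bib_bad = node("bib", node("author", node("affil")))
```

The query is phrased as a property of the data (a negated existential) rather than as a traversal, which is the sense in which TQL queries are "less operational" than path-based ones.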
[August 26, 2002] "User Agent Accessibility Guidelines 1.0." W3C Working Draft 21-August-2002. Edited by Ian Jacobs (W3C), Jon Gunderson (University of Illinois at Urbana-Champaign), Eric Hansen (Educational Testing Service). Version URL: http://www.w3.org/TR/2002/WD-UAAG10-20020821/. Latest version URL: http://www.w3.org/TR/UAAG10/. Previous version URL: http://www.w3.org/TR/2001/CR-UAAG10-20010912/. List of changes. "This document provides guidelines for designing user agents that lower barriers to Web accessibility for people with disabilities (visual, hearing, physical, cognitive, and neurological). User agents include HTML browsers and other types of software that retrieve and render Web content. A user agent that conforms to these guidelines will promote accessibility through its own user interface and through other internal facilities, including its ability to communicate with other technologies (especially assistive technologies). Furthermore, all users, not just users with disabilities, are expected to find conforming user agents to be more usable. In addition to helping developers of HTML browsers, media players, etc., this document will also benefit developers of assistive technologies because it explains what types of information and control an assistive technology may expect from a conforming user agent. Technologies not addressed directly by this document (e.g., technologies for braille rendering) will be essential to ensuring Web access for some users with disabilities."
[August 26, 2002] "The Distributive ebXML Grid." By David Lyon (GTD Technologies Pty Limited). Document posted to 'firstname.lastname@example.org'. August 22, 2002. Slides from a Sydney meeting of August 20, 2002. "... why do we need an ebXML Grid? The Grid promises an easier way to do business transactions: The Web is often too slow for business; Web Servers are generally limited by the speed of users clicking from within their Browser; There is excess capacity on the Internet; Not enough business transactions go across the Internet; The Grid has a better business model... The Distributive ebXML Grid is a co-op with a better business model than the current web:  The business model of the web is basically that software developers will build a website for a price. Transactions are low in cost but startup is quite expensive. In practice, few systems can interoperate and there is no financial incentive for interoperability to occur.  The business model of the Distributive ebXML Grid is transaction/subscription based with a percentage paid back to Integrators. The effective cost is lower for all concerned... The 'distributive ebXML Grid' is a 'circuit' or Grid of business computers networked using the Internet (the Grid works on permanently open TCP/IP connections; it can use transactions based on the international UBL standard from ebXML as well as EDI; it has a sustainable business model). Transactions get 'dropped' on the grid and find their way to trading partners' systems. The Grid uses 256 bit security and X.509 Certs and pushes TCP/IP networking to its limits..." 
From the ebXML-DEV posting: "the ebXML Grid concept, loosely based on existing ebXML work such as BPSS, UDDI, Core Components, Message Handling services and also a significant influence from X.500/LDAP...The idea behind the concept was to build a Distributive computer grid or circuit based on/for ebXML to take advantage of dialup, broadband and wireless connections for the ever increasing capabilities of the modern PC. The vision for the project is to one day interconnect tens of thousands of business computers on an ebXML grid. On demonstration will be some 256-bit, 2048-bit and 8192-bit encryption/decryption software for those who are security conscious..." In a note to the OASIS ebXML Registry list, David RR Webber compares the Grid to ArapXML Consortium work led by Todd Boyles. See: "Electronic Business XML Initiative (ebXML)."
[August 26, 2002] "Cryptographically Enforced Conditional Access for XML." Paper presented at the Fifth International Workshop on the Web and Databases (WebDB 2002), Madison, Wisconsin - June 6-7, 2002. By Gerome Miklau and Dan Suciu (University of Washington). [Note: several accepted papers from the workshop cover XML.] "Access control for databases is typically enforced by a trusted server responsible for permitting or denying users access to the database. This server-based protection model is increasingly becoming inconvenient for web based applications. We propose encryption techniques that allow XML documents to be distributed over the web to clients for local processing while maintaining certain access controls. In particular, we focus on conditional access controls, where a user is granted access to certain data elements conditioned on the user's existing knowledge of another part of the data. We believe such access controls are important in practice, and that enforcing them cryptographically on remote instances allows for more efficient data dissemination and processing... An access control model is used to permit or refuse access by subjects to data objects. Subjects are users, or groups of users usually defined by name, network identification or other static affiliation. For XML, objects are documents or parts of documents defined by XPath expressions. Access control in relational database systems, and most proposed XML systems, is enforced by a server that handles all data requests and strictly controls which users can access what data. While this model is sometimes also used in Web applications, it is often too restrictive. As the following examples show, there are a number of advantages to delivering remote copies of the data to clients if access control can be maintained... We propose an approach for publishing XML data on the Web while controlling how the data is accessed. 
In particular, we propose a novel and flexible language of conditional access rules used to define security policy. We explain how to encrypt XML data to enforce these access controls without a trusted server, and we discuss query processing over encrypted data. Our notion of conditional access generalizes the static categorization of subjects into authorization classes. Subjects are not identified by user name or network identifier but by their knowledge. As a special case, access may be conditioned on knowledge of a private key or password in a conventional way. But more generally, subjects qualify for access to an object by virtue of their knowledge of the data. Conditional access rules specify what data values need to be presented by the subject before granting access to other data values. Subject authorization is therefore flexible and dynamic in a way not possible with conventional access classes. The flexibility of our conditional access rules distinguishes our work from other attempts to encrypt data and manage decryption keys..." See W3C XML Encryption WG and the reference page "XML and Encryption." [alt URL, cache]
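The core idea, that the decryption key for one element is derived from another data value the subject must already know, can be sketched as follows. This is an illustrative toy, not the scheme from the paper: a hash-based XOR stream stands in for a real cipher, and the element content and known value are invented.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Derive a pseudo-random byte stream from the key by hashing a counter.
    out = b""
    i = 0
    while len(out) < n:
        out += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        i += 1
    return out[:n]

def seal(data: bytes, known_value: bytes) -> bytes:
    # Conditional access: the key is derived from data the subject must
    # already know (e.g., a value elsewhere in the same XML instance),
    # so the server need not mediate each request.
    key = hashlib.sha256(known_value).digest()
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

secret = b"<salary>90000</salary>"
boxed = seal(secret, b"employee-id:4711")
# XOR is its own inverse, so decryption is the same call with the same
# known value:
recovered = seal(boxed, b"employee-id:4711")
```

The published document can then carry `boxed` in place of the element; only a reader who already knows the conditioning value recovers the plaintext.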
[August 26, 2002] "Top Ten Tips to Using XPath and XPointer." By John Simpson. From XML.com. August 21, 2002. ['Ten tips for XML developers from the author of XPath and XPointer. John Simpson, author of XML.com's monthly XML Q&A column, is the author of the new O'Reilly book on XPath and XPointer. This first feature is Simpson's handy list of ten XPath/XPointer tips which will increase your mastery of these essential tools.'] "XPath and XPointer allow XML developers and document authors to find and manipulate specific needles of content in an XML document's haystack. From mindful use of predicates, to processor efficiency, to exploring both the standards themselves and extensions to them, this article offers ten tips -- techniques and gotchas -- to bear in mind as you use XPath and XPointer in your own work... Beware of whitespace when counting nodes; Keep an open mind about predicates: nested, 'compound,' and so on; The string-value of a node is just a special case of the string-value of a node-set; Remember the difference between value1 != value2 and not(value1 = value2); Find and use a good tool for testing your XPath expressions; Explore EXSLT; Fail-safe your XPointers; Remember to keep namespaces straight, in both XPath and XPointer applications; Don't forget processor efficiency in XPath and XPointer; Keep an eye out for spec changes..." See the book: XPath and XPointer: Locating Content in XML Documents, by John E. Simpson (O'Reilly, August 2002; ISBN: 0-596-00291-2); with full description and Table of Contents. General references: "XML Linking Language."
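The tip about value1 != value2 versus not(value1 = value2) trips up many users once node-sets are involved, because XPath 1.0 compares a node-set to a value existentially over the string-values of its nodes. The Python below models that semantics (it is a model for explanation, not a real XPath evaluator, and the //price node-set is invented):

```python
# XPath 1.0 semantics for node-set comparisons:
#     //price != '10'       true if SOME price differs from 10
#     not(//price = '10')   true only if NO price equals 10
prices = ["10", "20"]  # string-values of a hypothetical //price node-set

some_differs = any(p != "10" for p in prices)     # models //price != '10'
none_equals = not any(p == "10" for p in prices)  # models not(//price = '10')
# With both a 10 and a 20 present, the two expressions disagree:
# some_differs is True, none_equals is False.
```

So the two expressions are only interchangeable when the node-set contains at most one node, which is exactly the case where the distinction goes unnoticed in testing.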
[August 26, 2002] "Business Maps: Topic Maps Go B2B!" By Marc de Graauw. From XML.com. August 21, 2002. ['Marc de Graauw explains how topic maps can help vocabulary interoperability.'] "Interoperability between ontologies is a big, if not the single biggest issue in B2B data exchange. For the foreseeable future there will not be a single, widely accepted B2B vocabulary. Therefore we will need mappings between different ontologies. Since these mappings are inherently situational, and the context is very complex, we cannot expect computers to create more than a small part of those mappings. We need tools to leverage the intelligence of human business experts. We need portable, reusable, and standardized mappings. Topic Maps are an excellent vehicle to provide those 'Business Maps'. (This article presumes a basic understanding of Topic Maps; readers may wish to read A Gentle Introduction to Topic Maps in conjunction with this article.) We have lots of data and descriptions of data. Take for instance the abundance of vocabularies for B2B exchange: xCBL, FinXML, FpML, etc. Those vocabularies can be seen as ontologies. Older EDI technologies such as X.12 and EDIFACT are also ontologies. There are as yet no general standards for B2B vocabularies in XML. The ebXML initiative did not have actual business documents as one of its deliverables. Right now work is being done on the Universal Business Language (UBL) to fill this gap. Besides those 'industry-strength' solutions, there are lots of tailor-made data exchanges between companies, often using nothing more than simple ASCII comma-separated files. Together with their documentation, those ASCII files also constitute ontologies. And even within larger companies many different ontologies exist within the different legacy databases of different departments. Those different data sources present huge interoperability problems... Interoperability between ontologies is one of the most important problems in B2B data exchange. 
For the time being, making mappings will mainly be a human job. Therefore we need a way to leverage human intelligence to make all the required B2B mappings. Portable, reusable mappings would accomplish this. Those mappings would need to store information on business document mappings and the context that applies to those mappings. Topic Maps are an excellent vehicle to store such information, thus yielding Business Maps. The complete samples of Business Maps [InterOperability Topic Map] are available online..." References: "(XML) Topic Maps."
[August 26, 2002] "OSCON 2002 Perl and XML Review." By Kip Hampton. From XML.com. August 21, 2002. ['Kip Hampton reviews the state of Perl and XML at O'Reilly's Open Source Convention.'] "This month, in place of the module introduction or other technical tutorial that usually appears here, I want to offer a few observations about the state of the Perl-XML World in general, based on my experiences at the recent OSCON 2002. It's not at all surprising that XML has been a bit of a tough sell to the Perl community. For some other languages, XML neatly fills a large void in their text processing facilities, a void that Perl did not really have. And the white-hot hype, which is often advanced by those selling applications that use XML, has sent many Perl coders scrambling for the hills; or, depending on their personal bent, for the torches and pitchforks. From the other side, much of the XML World has ignored Perl based on the misconception that it is little more than a muscular shell scripting language. I am pleased to say that I saw evidence of growth in both the Perl and XML communities. Of the Perl developers I spoke with, most seem to have learned that using XML in their applications is neither magic nor poison -- rather, they are beginning to see it as just another technology and set of tools that can be really good for some cases and really bad for others. At the same time, the growing success of Open Source tools like Bricolage and AxKit is proving to some in the larger XML World that Perl and XML can be a powerful combination. It was not uncommon to hear these tools mentioned in the same breath as their Java counterparts..." See "XML and Perl."
[August 26, 2002] Information Technology -- Multimedia Framework -- Part 5: Rights Expression Language -- Committee Draft. Edited by Thomas DeMartini (ContentGuard, US), Xin Wang (ContentGuard, US), and Barney Wragg (UMG, UK). International Organization for Standardization. ISO/IEC JTC 1/SC 29/WG 11: Coding of Moving Pictures and Audio. Document Reference: ISO/IEC JTC 1/SC 29/WG 11/N4942. Date: 2002-07-26, ISO/IEC CD 21000-5. Multimedia Description Schemes Group. Approved at the Klagenfurt, AT Meeting. 115 pages. Normative Annex A documents the REL Architecture; Normative Annex B provides the XML Schemas [Versioned '01' namespace: urn:mpeg:mpeg21:2002:01-REL-NS]; Informative Annex C documents the 'Relationship Between ISO/IEC 21000-5 (REL) and ISO/IEC 21000-6 (RDD)'. From the 'Executive Summary for MPEG-21': "...The vision for MPEG-21 is to define a multimedia framework to enable transparent and augmented use of multimedia resources across a wide range of networks and devices used by different communities. This fifth part of MPEG-21 (ISO/IEC 21000-5) specifies the expression language for issuing rights for Users to act on Digital Items, their Components, Fragments, and Containers..." From the Scope statement: "This document explains the basic concepts of a machine-interpretable language for issuing rights to Users to act upon Digital Items, Components, Fragments, and Containers. It does not provide specifications for security in trusted systems, propose specific applications, or describe the details of the accounting systems required. This document does not address the agreements, coordination, or institutional challenges in building an implementation of this standard. The standard describes the syntax and semantics of the language..." On XML Schemas: "The syntax of REL is described and defined using the XML Schema technology defined by the World Wide Web Consortium (W3C). 
Significantly more powerful and expressive than DTD technology, the extensive use of XML Schema in REL allows for significant richness and flexibility in its expressiveness and extensibility... To that end, a principal design goal for REL is to allow for and support a significant amount of extensibility and customizability without the need to make actual changes to the REL core itself. Indeed, the core itself makes use of this extensibility internally. Other parties may, if they wish, define their own extensions to REL. This is accomplished using existing, standard XML Schema and XML Namespace mechanisms..." Source: see Brad Gandee's posting to the OASIS RLTC. Additional details are given in: (1) the 2002-08-26 news item "New Draft Specifications from MPEG-21 Multimedia Framework Project"; (2) "MPEG Rights Expression Language."
[August 26, 2002] "Validation and Boolean Operations for Attribute-Element Constraints." By Haruo Hosoya (Kyoto University) and Makoto Murata (IBM Tokyo Research Laboratory). Paper prepared for presentation at the PLAN-X Workshop on Programming Language Technologies for XML, October 3, 2002, Pittsburgh, PA, USA. 28 pages. "Algorithms for validation and boolean operations play a crucial role in developing XML processing systems involving schemas. Although much effort has previously been made for treating elements, very few studies have paid attention to attributes. This paper presents validation and boolean algorithms for Clark's attribute-element constraints. The kernel of Clark's proposal is a uniform and symmetric mechanism for representing constraints on elements and those on attributes. Although his mechanism has a prominent expressiveness and generality among other proposals, treating this is algorithmically challenging since naive approaches easily blow up even for typical inputs. To overcome this difficulty, we have developed (1) a two-phase validation algorithm that uses what we call attribute-element automata and (2) intersection and difference algorithms that proceed by a 'divide-and-conquer' strategy... It exploits the fact that it is often the case that we can partition given constraint formulas into orthogonal subparts. In that case, we proceed with the computation on the subparts separately and then combine the results. Although this technique cannot avoid an exponential explosion in the worst case, it appears to work well for practical inputs that we have seen... We have already implemented the validation and boolean algorithms in the XDuce language. For the examples that we have tried, the performance seems quite reasonable. We plan to collect and analyze data obtained from the experiment on the algorithms in the near future..." Source: Postscript. See also: (1) the note from MURATA Makoto; (2) SGML/XML Elements versus Attributes. 
The electronic proceedings also reference this paper.
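Why attributes resist the sequence-based automata used for element content can be seen in a toy check. This illustrates only the unordered-set character of attributes; Clark's constraint language and the paper's attribute-element automata are far more general (they allow co-occurrence constraints between attributes and elements):

```python
def attrs_valid(attrs, required, optional):
    # Element content is a *sequence*, validated by an automaton over an
    # ordered content model; attributes form an *unordered set*, so a
    # naive encoding of required/optional attributes as all admissible
    # orderings blows up combinatorially, while a set check stays linear.
    names = set(attrs)
    return required <= names and names <= required | optional

ok = attrs_valid({"href": "x", "title": "t"},
                 required={"href"}, optional={"title"})
bad = attrs_valid({"title": "t"}, required={"href"}, optional={"title"})
```

The hard part the paper addresses is combining such set-style attribute constraints with element constraints in one symmetric formalism while still validating and intersecting efficiently.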
[August 26, 2002] "TV-Anytime Forum Specification Series: S-2, System Description." Date: 5-April-2002. Reference: SP002v1.2. Informative, with mandatory Appendix B. 39 pages. "This TV-Anytime specification shows the system behavior of a TV-Anytime broadcast system with an interaction channel used for consumer response. It focuses on the use of the TV-Anytime content reference specification in combination with the TV-Anytime metadata specification in a system context. The specification will show examples of how to use both specifications from both static and dynamic viewpoints, i.e., it will highlight the parties involved in the processes and show the interaction between them." Note from Section 5.2, 'XML - a common representation format': "For the purpose of interoperability, the TV-Anytime Forum has adopted XML Schema as the common representation format for documentation of metadata. XML offers many advantages: it allows for extensibility, supports the separation of data from the application, and is widely used. In addition, powerful XML tools are now available such as XSL (XML Stylesheets), XQL (XML Query Language), and XML databases that can be used to process and manage XML data. As a textual format, XML tends to be rather verbose; however, a number of mechanisms are being developed to reduce the bandwidth when necessary. It is important to note that the XML representation of a TV-Anytime document is just that, a representation. It is one possible representation of the metadata; it is not the only representation of the metadata. There is no assumption that TV-Anytime metadata must be represented in XML format. Metadata could be represented by an optimized binary format to conserve bandwidth and aid rapid processing and mapping to a database. It is strongly recommended that if XML is used as exchange syntax for TV Anytime metadata, then that XML should conform to the TV-Anytime Schema. 
This has obvious advantages in the business-2-business realm in addition to the business-2-consumer realm. The following sections introduce the TV-Anytime metadata schemas. They also provide snippets of XML instance documents. Basic knowledge of XML is needed in order to understand the following sections..." See: "TV Anytime Forum." [OASIS RLTC source: ZIP package, Word .DOC.]
[August 26, 2002] "CHANT (CHinese ANcient Texts): A Comprehensive Database of All Ancient Chinese Texts up to 600 AD." By Che Wah Ho (Institute of Chinese Studies, The Chinese University of Hong Kong, Shatin, N.T. Hong Kong). In Journal of Digital Information Volume 3 Issue 2 (August 09, 2002). [Note: the project is "using Extensible Markup Language (XML), for its versatility and better optimization, to mark up the data..."] "The CHinese ANcient Texts (CHANT) database is a long-term project which began in 1988 to build up a comprehensive database of all ancient Chinese texts up to the sixth century AD. The project is near completion and the entire database, which includes both traditional and excavated materials, will be released on the CHANT Web site (www.chant.org) in mid-2002. With more than a decade of experience in establishing an electronic Chinese literary database, we have gained much insight useful to the development of similar databases in the future. We made use of the best available versions of all texts, noting variant readings in footnotes. The biggest problem we encountered is the inclusion of rare and obsolete Chinese characters. For excavated materials, we also have to incorporate a considerable number of inscriptions in the original oracle bones and bronze forms. Since we started building the database, information technology has advanced so rapidly that we had to upgrade the technical devices already in use in the database. Unification of different sub-databases is also a daunting task. To maintain our competitive edge over free online Chinese databases, we need to continue developing new databases employing the existing ones."
[August 23, 2002] "The Layered Markup and Annotation Language (LMNL)." By Jeni Tennison (Jeni Tennison Consulting) and Wendell Piez (Mulberry Technologies). Extended abstract of the presentation at Extreme Markup 2002. [LMNL 'Layered Markup and aNnotation Language' is pronounced "liminal".] "In document-oriented XML development, there's frequently a requirement for several views of the same document to coexist. For example, one view might represent the logical structure of a document into chapters, sections and paragraphs, while another represents the physical manifestation of that document in a particular book, maintaining page and even line breaks. The structures in these different views often overlap -- a page might start in the middle of one paragraph and end after another, for example -- and this makes it difficult for a simple hierarchical structure, such as XML, to represent... All the [previous] approaches have their strengths and weaknesses. Interestingly, they all assume a DAG (directed acyclic graph) as a primary data model, sometimes enhancing it with metainformation at a different level. (Even in SGML CONCUR, this metainformation is provided by the DTD. TexMECS assumes a more complex graph structure, dubbed GODDAG, allowing elements to have multiple parentage.) Recognizing the difficulties of this approach, however (for example, in XSLT it is quite challenging and processor-intensive, albeit not impossible, to perform the splicing and segmenting operations typically required to transform between concurrent structures), the authors postulated it might be worthwhile to start by parsing markup into an entirely different data model. 
Since XML and XSLT already provide us with a strong technology for processing trees (we reasoned), we could always have a tree when we needed one; so we opted to concentrate on a more rudimentary data model that would capture the information we needed, while not itself trying to assert containment or sibling relations (that are too simple to apply to overlapping structures). These could be inferred or specified in other layers of the system. The Core Range Algebra presented by Gavin Nicol at this year's Extreme conference proposes a data model that supports overlapping structures by viewing documents as sequences of characters over which named ranges are defined. To represent more fully the range of document structures encountered in real documents, we have extended this data model to include the concepts of 'layers', which are ranges that fully contain all the ranges that start or end within them, and 'metaranges', which are layers that can be associated with ranges to provide meta-information about their content. This data model can be represented in XML, using any of the methods outlined above, but for ease of writing, we have developed a specialised syntax, the Layered Markup and Annotation Language (LMNL)..." General references: "Markup Languages and (Non-) Hierarchies."
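As the abstract suggests, LMNL ranges may overlap freely. A small illustration in the LMNL syntax as presented at the conference (the range names here are invented, and the snippet reflects the delimiter style of the 2002 proposal):

```
[excerpt}
[page [n}1{n]}[para}This paragraph begins on page one
{page][page [n}2{n]}and ends on page two.{para]
{page]
{excerpt]
```

Here [para} opens a range that {para] closes; the two page ranges overlap the paragraph without nesting inside it, and each [n}...{n] annotation attaches metadata to its page range, which a single XML tree could not express directly.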
[August 23, 2002] "Declaring Trees: The Future of the Evolution of Markup? Just-In-Time-Trees (JITTs)." By Patrick Durusau and Matthew Brook O'Donnell. Based on a paper presented at Extreme Markup 2002. See the slides in PPT and PDF format. "Just-In-Time-Trees (JITTs) arose from the realization that SGML/XML syntax had been overburdened with the tasks of declaring a document root (singular) and the markup to be recognized in the document. (Not actually true for markup recognition in SGML with its concur feature, but that was seldom implemented and so not generally available.) We propose moving the declaration of the document root and the markup to be recognized from that root from markup syntax to pre-processing before passing the document instance to an SGML or XML parser. The move from syntax to processing for these declarations may seem like a small one but it has important implications for the development of markup syntax. Freed from the strictures of the current model, markup syntax can be used to record any number of arbitrary or overlapping structures in a document instance. It is true that processing eventually requires declaration of a traditional tree, but there are many cases (presentation vs. logical layout) where the storage of overlapping hierarchies in a single document instance will be of immediate use. We are exploring ways to relate information from overlapping trees that are found on successive parses of a document instance... We propose that the declaration of the document root and the markup to be recognized should be moved from the syntax layer and made a part of the processing of a text. That change in the model for handling markup removes the various problems with overlapping markup that have been the subject of numerous proposals but few widespread implementations since the rise of SGML. 
Our latest proposal differs from all prior ones in that it allows the use of standard XML software for the processing of texts, while allowing extensive experimentation with markup languages for the encoding of texts. Our argument for markup recognition is grounded in the text of ISO 8879 (concur) and extends that concept to XML by the use of filters to declare the document root and markup to be recognized..." General references: "Markup Languages and (Non-) Hierarchies."
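A minimal sketch of the pre-processing idea: one instance carries two overlapping trees, and each parse "declares" which markup to recognize by filtering the instance before a stock XML parser sees it. The element names and filter are invented for illustration; this is not the authors' implementation:

```python
import re
import xml.etree.ElementTree as ET

# One instance carrying two overlapping trees: logical structure
# (div, para) and physical pagination (page), sharing the root 'text'.
# Note the page break falls in the middle of the second paragraph.
doc = ("<text><div><page><para>One.</para><para>Two "
       "</page><page>three.</para></page></div></text>")

def just_in_time(instance, recognize):
    # Pre-processing filter: tags outside the recognized set are hidden
    # before the instance reaches an ordinary XML parser, so each parse
    # yields a single well-formed tree.
    def keep(match):
        name = re.match(r"</?([A-Za-z]+)", match.group(0)).group(1)
        return match.group(0) if name in recognize else ""
    return re.sub(r"</?[A-Za-z]+>", keep, instance)

logical = just_in_time(doc, {"text", "div", "para"})
physical = just_in_time(doc, {"text", "page"})
para_count = len(ET.fromstring(logical).findall(".//para"))   # 2 paragraphs
page_count = len(ET.fromstring(physical).findall(".//page"))  # 2 pages
```

Each successive parse sees only one tree, so standard XML tooling works unchanged even though the stored instance contains overlapping hierarchies.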
[August 23, 2002] "Meaning and Interpretation of Concurrent Markup." By Andreas Witt (Universität Bielefeld). Presented at ALLC/ACH 2002, Joint International Conference of the Association for Literary and Linguistic Computing and the Association for Computers and the Humanities, July 24 - 28, 2002, University of Tübingen, Germany. "The difficulty of annotating not hierarchically structured text with SGML-based mark-up languages is a problem that has often been addressed... In general, annotated text consists of content and annotations. Annotations are used on a syntactical level. Therefore they are used for assigning a meaning to (parts of) a document. While developing a document grammar the focus should be centred on the content. This point of view is expressed by Sperberg-McQueen, Huitfeldt and Renear (2000). They show how knowledge which is syntactically coded into text by annotations can be extracted by knowledge inference. After summarizing this approach, it will be shown how this technique can be expanded so that it can be used for inferences of separately annotated and implicitly linked documents - documents marked-up according to different document grammars... The described model of knowledge representation can only be used for single documents. However, it will be shown that this model can easily be expanded so that it is applicable for the inference of relations between several separately annotated XML documents with the same primary data. ... The outlined architecture has many advantages. The model allows for structuring text according to multiple concurrent document grammars without workarounds. Furthermore, additional annotations can be subsequently included without changing already established annotations. The annotations are on the one hand independent of each other; on the other hand they are interrelated via the text, allowing for the inference of relations between different levels of annotation. 
The final advantage to be mentioned is that the compatibility of several or all annotations used can be proven automatically. This can be done using a technique originally developed within linguistics, namely unification." See Witt's reference page, with the slide presentation and Python code implemented by Daniel Naber. The author's dissertation Multiple Informationsstrukturierung mit Auszeichnungssprachen. XML-basierte Methoden und deren Nutzen für die Sprachtechnologie presents some of the research. General references: "Markup Languages and (Non-) Hierarchies." [cache]
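The notion of separately annotated documents related only through shared primary data can be sketched as standoff character ranges; one simple inference over two layers is whether they could be merged into a single tree. This is a hypothetical encoding for illustration, not the unification technique the author describes:

```python
def compatible(layer_a, layer_b):
    # Two annotation layers over the same primary text can be merged
    # into one hierarchy only if every pair of ranges either nests or
    # is disjoint; a partial overlap breaks the tree structure.
    def nests_or_disjoint(x, y):
        (s1, e1), (s2, e2) = x, y
        return (e1 <= s2 or e2 <= s1
                or (s1 <= s2 and e2 <= e1) or (s2 <= s1 and e1 <= e2))
    return all(nests_or_disjoint(a, b) for a in layer_a for b in layer_b)

text = "the cat sat"                 # shared primary data
tokens = [(0, 3), (4, 7), (8, 11)]   # word layer
phrases = [(0, 7), (8, 11)]          # NP, VP: nest the tokens cleanly
lines = [(0, 5), (5, 11)]            # layout layer cutting token (4, 7)
```

Because each layer points into the same text rather than into the other layers, new annotation levels can be added without touching existing ones, while relations between levels remain computable from the shared offsets.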
[August 22, 2002] "UDDI Should Flourish Under OASIS." By Dave Kearns. In Network World (August 14, 2002). Network World Directories Newsletter. "I'm not a big fan of UDDI because I think what it does can better (and more economically) be achieved with directory services -- even with DNS. But more and more directory vendors (such as Sun, Computer Associates and Novell) are tossing in the towel and simply developing UDDI front ends for their directory products rather than trying to convince the world that the UDDI repository isn't needed. That's probably the best way to go: Jump on the UDDI bandwagon, but bring the instrument (i.e., the directory) you already know how to play. Now that UDDI is in the OASIS house, the ability to work closely with the folks on the Directory Services Markup Language (DSML), Security Assertions Markup Language (SAML), Business Transactions (BTP), and the various Electronic Business Extensible Markup Language (ebXML) technical committees could go a long way toward making UDDI a usable, rather than just a theoretically possible, technology. In the call for members, UDDI Technical Committee co-chairs Tom Bellwood of IBM and Luc Clément of Microsoft state that the group's scope is 'the support of Web services discovery mechanisms in the following areas: (1) Specifications for Web services registries and Web service interfaces to the registries. (2) Replication or synchronization mechanisms across multiple implementations. (3) Security facilities for access or manipulation of the registry and maintaining data integrity.' Replication and synchronization are something the DSML folks have already started on. Access security was ably demonstrated by SAML recently. Specifying Web services interfaces is part of the purview of the ebXML committees. There is a lot of expertise available at OASIS, and the UDDI people should take advantage of it. 
Adapting someone else's initiative is very often much more efficient and productive than re-inventing the wheel..." See: "Universal Description, Discovery, and Integration (UDDI)."
[August 21, 2002] "Thinking About the Future of Data." By Paul Festa. In CNET News.com (August 21, 2002). Interview with Dave Hollander (W3C). ['A native of Baldwinsville, N.Y., Hollander worked his way through school, driving a truck and working other odd jobs before graduating with a bachelor of science degree from Michigan Tech. These days he is the chief technology officer of Contivo. The 46-year-old Hollander also is the co-author of XML Applications, technical editor of XML Schemas, and a contributor to standards including OAGI, RosettaNet, OBI and the ECO Framework. He co-chairs the World Wide Web Consortium's (W3C) XML Schema working group as well as its Web Services Architecture working group, and is co-author of Namespaces in XML, a W3C recommendation. From his home in Loveland, Colo., Hollander spoke with CNET News.com about the alphabet soup of specifications underlying his work with data integration and Web services.'] "Back in the early '90s I was publishing all the manuals for HP--Unix and others--on CD-ROM. This is pre-Web. And I created a language called SDL (Semantic Delivery Language), which has nothing to do with the Semantic Web! (laughing) There were SGML languages that were around, which we couldn't deliver onto CD-ROM. So SDL was an intermediate language that could be read by CD-ROM browsers, which gave me a good forum for translating from high-level SGML to something closer to the computer... I went from HP to CommerceNet. My interpretation of their charter was to explore the white space between companies -- to look at how companies do business and try to fill the huge gaps with respect to security, payment standards, etc. I was doing a lot of work in catalogs, payment, in trying to define XML standards to help businesses do these kinds of B2B transactions. For me it was a way to stop thinking about documents and manuals and start thinking about those things as being the tools we use in business every day. From CommerceNet I went to Contivo. 
Whether it's documents or B2B, to me the biggest issue turns out to be a transformation problem... It ['transformation'] means, how does the receiver interpret the intent of the sender of the information? The easy one to think about is in language, going from French to German, 'rue' to 'strasse.' Being able to understand that intent, and transform what the sender intended to what the receiver needs to do, becomes a transformation. Semantics is understanding the intent of a thing, of a concept. I like to think of it as the boundary between data and behavior. If you send out the same data and get five different responses, then there are five different semantics associated with that data. In order to do a transformation from 'rue' to 'strasse,' I have to understand the fundamental transformation that this is a street, and whether it's a street or highway or road needs to be differentiated. In order to make meaningful transformations, you have to understand the semantics of the information..."
[August 21, 2002] "Get started with Castor JDO. Learn the basics of object-relational data binding using Castor." By Bruce Snyder (Senior Software Engineer, DigitalGlobe). From IBM developerWorks, Java technology. August 2002. ['A growing number of enterprise projects today call for a reliable method of binding Java objects to relational data -- and doing so across a multitude of relational databases. Unfortunately (as many of us have learned the hard way) in-house solutions are painful to build and even harder to maintain and grow over the long term. In this article, Bruce Snyder introduces you to the basics of working with Castor JDO, an open source data-binding framework that just happens to be based on 100 percent pure Java technology.'] "Castor JDO (Java Data Objects) is an open source, 100 percent Java data binding framework. Initially released in December 1999, Castor JDO was one of the first open source data binding frameworks available. Since that time, the technology has come a long way. Today, Castor JDO is used in combination with many other technologies, both open source and commercial, to bind Java object models to relational databases, XML documents, and LDAP directories... In this article, you'll learn the fundamentals of working with Castor JDO. We'll start with a relational data model and a Java object model, and discuss the basics of mapping between the two. From there, we'll talk about some features of Castor JDO. Using a simple product-based example, you'll learn about such essentials as inheritance (both relational and object-oriented), dependent and related relationships, Castor's Object Query Language implementation, and short versus long transactions in Castor. Because this article is an introduction to Castor, we'll use very simple examples here, and we won't go into depth on any one topic. 
At the end of the article, you will have a good overview of the technology, and a good foundation for future exploration... Just as some people believe that the key to life is a proper understanding and acceptance of one's personal karma, I've found that the key to working with Castor JDO is a proper understanding and implementation of the mapping descriptor. The mapping descriptor provides the connection (the map) between relational database tables and the Java objects. The format of the mapping descriptor is XML -- and for good reason. The other half of the Castor Project is Castor XML. Castor XML provides a superior Java-to-XML and XML-to-Java data binding framework. Castor JDO makes use of Castor XML's ability to unmarshal an XML document into a Java object model for the purposes of reading the mapping descriptor..."
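The central artifact the article describes -- the XML mapping descriptor that connects a database table to a Java class, field by field -- has a simple shape. The class, table, and column names below are hypothetical, and this stdlib Python snippet only parses a descriptor-like document to show that shape; the real object-relational binding is done by Castor itself, in Java.

```python
# A minimal sketch of a Castor-style mapping descriptor (names invented):
# one <class> maps to one table, each <field> maps to one SQL column.
import xml.etree.ElementTree as ET

descriptor = """
<mapping>
  <class name="demo.Product" identity="id">
    <map-to table="PRODUCT"/>
    <field name="id" type="integer">
      <sql name="ID" type="integer"/>
    </field>
    <field name="name" type="string">
      <sql name="NAME" type="char"/>
    </field>
  </class>
</mapping>
"""

root = ET.fromstring(descriptor)
cls = root.find("class")
table = cls.find("map-to").get("table")                    # target table
columns = {f.get("name"): f.find("sql").get("name")        # field -> column
           for f in cls.findall("field")}
```

Reading the descriptor is itself an XML-to-object binding problem, which is why (as the article notes) Castor JDO delegates that job to Castor XML.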
[August 21, 2002] "Web Services Security Addendum." Edited by Chris Kaler (Microsoft). 18-August-2002. From International Business Machines Corporation, Microsoft Corporation, and VeriSign, Inc. Authors: Giovanni Della-Libera (Microsoft), Phillip Hallam-Baker (VeriSign), Maryann Hondo (IBM), Hiroshi Maruyama (IBM), Anthony Nadalin (IBM), Nataraj Nagaratnam (IBM), Hemma Prafullchandra (VeriSign), John Shewchuk (Microsoft), Kent Tamura (IBM), and Hervey Wilson (Microsoft). The Addendum document "describes clarifications, enhancements, best practices, and errata of the WS-Security specification... Since the publication of the WS-Security specification, additional reviews and implementation experiences suggest some additions, clarifications, and corrections to the original specification." Topics include: Errata, ID References, Placement of X.509 Certificates, Message Timestamps, Passing Passwords, Key Identifiers, Key Names, Token Reference Lookup Processing Order, Encrypted Keys, Decrypted Transformation, Certificate Collections, Security Considerations. [A 2002-08-23 note from Anthony Nadalin (IBM), Chris Kaler (Microsoft), and Hemma Prafullchandra (Verisign) reads: "WSS TC Co-Chairs: We are submitting the specification entitled WS-Security Addendum 1.0, August 18, 2002 for consideration within the WSS-TC as a supplement to the existing WS-Security input specification. The attached specification is a result of building our respective WS-Security implementations. The WS-Security Addendum 1.0, August 18, 2002 can be found at the following sites: (1) IBM, (2) Microsoft; (3) Verisign..."] See: "Web Services Security Specification (WS-Security)."
[August 21, 2002] "The Most Important Open-Source Project You've (Probably) Never Heard Of." By Richard Karpinski. In InternetWeek (August 20, 2002). "Almost a year after it donated some $40 million worth of code and tools, IBM is on the warpath once again, drumming up support for the open-source Eclipse project. Eclipse may not be as well known as some other open-source projects, such as Linux, Mozilla, or Apache. It's certainly not as sexy. At its core, Eclipse provides a common platform, user interface, and plug-in framework for integrating development tools. Developers working with the Eclipse framework would be able to plug in different tools from different vendors -- say a Java IDE, a modeling tool, a test environment, an XML editor -- and benefit from a common look and feel and under-the-covers functionality... While admitting that its impact on developers thus far has been 'relatively limited,' Giga Information Group analyst Mike Gilpin said Eclipse is a strong technology and is gathering vendor momentum. 'In terms of getting vendors into the program, if you look at Rational and Borland and some others, they've made good progress,' he said. Large enterprises -- such as German conglomerate Siemens -- have the ability today to build their own tools frameworks. Something like Eclipse could bring that capability to more IT shops, Gilpin said. Eclipse is reaching some important milestones. A beta of version 2.0 of the open-source code for Eclipse was released about a month ago. The final version, along with commercial products supporting the new release, is slated for September. New features in 2.0 include improvements to the platform's project management capabilities, plug-in architecture, and ability to integrate with third-party tools... Like all things in the software industry, Eclipse is as much about strategy as it is about products. For starters, it reflects IBM's ongoing flirtation -- some would say obsession -- with the open-source community. 
IBM has backed the Apache Web server and Linux operating systems with major success. For IBM, with deep legacy roots and a huge services business, open source is a good competitive tool when it comes to commoditized products like baseline servers or even operating systems. It makes its money further up the stack..."
[August 21, 2002] "Exploring XML Encryption, Part 2. Implement an XML Encryption Engine." By Bilal Siddiqui (CEO, WAP Monster). From IBM developerWorks, XML Zone. August 2002. ['In this second installment, Bilal Siddiqui examines the usage model of XML Encryption with the help of a use case scenario. He presents a simple demo application, explaining how it uses the XML Encryption implementation. He then continues with his last implementation of XML Encryption and makes use of JCA/JCE classes to support cryptography. Finally, he briefly discusses the applications of XML Encryption in SOAP-based Web services.'] "In Part 1 of this series ['Demonstrating the secure exchange of structured data'], I gave an introduction to XML Encryption and its underlying syntax and processing. I examined the different tags and their respective use in XML encryption with a simple example of secure exchange of structured data, proposed a Java API for XML Encryption based on DOM, and gave a brief overview of cryptography in Java (JCA/JCE). I start my discussion in this part with an information exchange scenario which demonstrates the use of XML encryption... Consider the process of information exchange between two enterprises. One is an online books-seller and the other is a publisher. When the books-seller wants to purchase books, it submits a purchase order to the publisher. At the publisher's end, the sales department receives this order, processes it, and forwards it to the accounts department. The two enterprises exchange information in the form of XML documents. Since some portion of the document needs to be secure and the rest can be sent insecurely, XML encryption is the natural approach for applying security to distinct portions of the document... According to the books-seller's security policy, the payment information will only be revealed to the accounts department. 
The sales department will need to extract only the name of the book, its item ID and the quantity ordered; because this is insensitive information it can remain insecure. The accounts department will need to decrypt the payment information in the purchase order using a pre-exchanged secret key... Mapping this policy, XML Encryption facilitates the concealment of payment information in the sales department and its disclosure in the accounts department... At this point, it may be useful to ponder a bit on the concept of document-based security. With this security architecture, you can impose security at the document level..." See: (1) XML Encryption Syntax and Processing, W3C Candidate Recommendation 02-August-2002; (2) "XML and Encryption."
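The purchase-order scenario comes down to element-level encryption: the payment element is replaced in place by an EncryptedData element (per the W3C XML Encryption syntax), while the book details remain readable. The sketch below shows only that structural transformation with Python's stdlib; the order's element names are invented, and the cipher value is a placeholder -- a real engine would encrypt with JCA/JCE (as in the article) or another crypto library.

```python
# Structural sketch of element-level XML Encryption (payment data only).
import base64
import xml.etree.ElementTree as ET

XENC = "http://www.w3.org/2001/04/xmlenc#"  # XML Encryption namespace

order = ET.fromstring(
    "<PurchaseOrder>"
    "<Book><Title>XML Handbook</Title><ItemId>B-42</ItemId>"
    "<Quantity>10</Quantity></Book>"
    "<Payment><CardNumber>4111111111111111</CardNumber></Payment>"
    "</PurchaseOrder>"
)

payment = order.find("Payment")
# Placeholder for the real ciphertext a JCA/JCE cipher would produce.
ciphertext = base64.b64encode(ET.tostring(payment)).decode()

enc = ET.Element(f"{{{XENC}}}EncryptedData", Type=f"{XENC}Element")
cipher_value = ET.SubElement(ET.SubElement(enc, f"{{{XENC}}}CipherData"),
                             f"{{{XENC}}}CipherValue")
cipher_value.text = ciphertext

# Swap the sensitive element for its encrypted form; the rest stays clear.
order.remove(payment)
order.append(enc)
serialized = ET.tostring(order).decode()
```

The sales department processes the clear-text book elements as-is; only the accounts department, holding the pre-exchanged key, can reverse the substitution.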
[August 21, 2002] "Dynamic e-Business Using BPEL4WS, WS-Coordination, WS-Transaction, and Conversation Support for Web Services." By Santhosh Kumaran and Prabir Nandi (IBM T. J. Watson Research Center). Presented with the (downloadable) IBM alphaWorks application "Conversation Support for Web Services." "IBM, MS, and BEA have just announced a set of Web services standards for automating business processes on the Web: BPEL4WS for executable specification of business processes, WS-Coordination for specifying an extensible framework for the coordination of actions of distributed applications, and WS-Transaction for coordination types that are used with the framework described in WS-Coordination. Additionally, IBM is making available through alphaWorks a technology called 'Conversation Support for Web Services' for supporting a conversational model of interaction between distributed, autonomous systems. The goal of this document is to articulate a vision in which these complementary technologies work together enabling dynamic e-Business. We begin with a very brief overview of the technologies. Then we introduce a travel reservation scenario and use various levels of sophistication of this scenario to motivate the use of various technologies. We conclude with a summary of the complementary features that the conversation model brings to the table... Conversation Support for Web Services (CS-WS) proposes a set of specifications to support a conversational model of component integration using Web Services. The specifications include an XML dialect to describe a conversation interaction, called Conversation Policy (CP). CPs are preprogrammed interaction patterns or protocols and are used to specify the message formats, sequencing constraints, and timing constraints that define the interaction protocol. 
The other set of specifications extends the Java Connector Architecture APIs, both at the system and application level, to provide a standard runtime framework to execute CPs on a J2EE Application Server... BPEL4WS, WS-Coordination, and WS-Transaction, along with WSDL and UDDI, provide the basis for the programming model for dynamic e-Business. Below we summarize the significant and complementary features that Conversations bring to this model as explained in 'Travel Reservation Scenario: Stage 3': (1) Dynamic and flexible interaction patterns as shown by the ability of the conversation technology to support complex interactions between Acme and customers. (2) Adaptable, open-ended, and extensible B2B integration capability as demonstrated in the ability of the conversation module to account for interface changes. (3) Conversation module serves as a process broker layer dealing with multiple business protocols that the partners employ to provide the business services the BPEL processes expect. For example, different car companies may have different protocols for making a reservation; the conversation module supports the selection of the right protocol while utilizing the same BPEL processes. (4) Message handling based on explicit conversation state. A very good example is shown in the travel reservation example, in which there could be several loops and state changes at the conversation level before a transaction is completed. (5) Executable business protocols, nested protocols, and protocol switching as demonstrated in the travel scenario..." See: (1) "Business Process Execution Language for Web Services (BPEL4WS)"; (2) "Web Services Specifications for Business Transactions and Process Automation" and (3) "IBM alphaWorks Releases Conversation Support for Web Services."
[August 21, 2002] "Conversation-Enabled Web Services for Agents and e-Business." By James E. Hanson, Prabir Nandi, and David W. Levine (IBM T.J. Watson Research Center, Yorktown Heights NY 10598). In the Proceedings of the 2002 International Conference on Internet Computing (IC-02). "In this article we outline some enhancements to the existing Web Services architecture and programming model, which will enable them to support the needs of fully-realized dynamic e-business and software agents -- which have much in common. Of particular importance is conversation support, with its core element, conversation policies. The emergence and continued development of Web Services has brought them to the brink of supporting rich e-business applications. The simplified invocation model afforded by SOAP, the standardized, public description of invocation syntax provided by WSDL and UDDI, and the encapsulation of detailed message-transport plumbing behind a standard invocation framework (WSIF) all are essential stepping-stones toward full support of e-business interactions. But at present, Web Services remain a 'vending machine' model -- that is, they limit themselves to providing a way in which functions can be made available for invocation over the internet... Rich e-business interactions require a more peer-to-peer, proactive, dynamic, loosely coupled mode of interaction. A fully realized e-business acts as both the 'invoker' and 'invokee' in two-sided (or multi-sided), multi-step, complex patterns of interaction with other e-businesses. Its internal business processes are under its unilateral control, both as to what to do in any given interaction, and when and how to make changes; while its interactions with other businesses are mediated by public (or at least commonly held) protocols. Even in cases where there is an agreement in place, the business retains control over the extent to which it follows the agreement. 
Interacting software agents correspond almost exactly with the above description. In terms of how they interact, differences between agents and fully-realized e-business are largely a matter of scale and emphasis. But from a Web Services architectural standpoint, they are synonymous... We are currently developing Conversation Policy XML (cpXML), an XML dialect for describing conversation policies. It permits CPs to be downloaded from third parties (such as standards bodies, providers of conversation-management systems, or specialized protocol-development shops). Once downloaded and fed into a firm's conversation-management system, bindings are added to specify the connections between the decision points of the CP and the firm's business logic. cpXML is intentionally minimalist, restricting itself to describing the message interchanges... Thus, for example, it does not cover the way in which the CP is bound to the business logic. It takes a third-party perspective, describing the message exchanges in terms of 'roles' which are assumed at runtime by the businesses engaged in a conversation. It supports nesting of conversation policies and time-based transitions (such as timeouts on waiting for an incoming message). Its first use will perhaps be as a standard of comparison for evaluating forthcoming developments in flow languages such as WSFL, etc. At this time, it is impossible to judge whether a separate language is needed for specifying conversation policies, or whether a hybridized flow/state-machine description language will be practical..." See also the news item "IBM alphaWorks Releases Conversation Support for Web Services" and Conversation Support for agents, e-business, and component integration. [cache]
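A conversation policy, as described here, is essentially a protocol state machine: roles, the messages each role may send in each state, and the resulting state transitions. The toy policy below (states, roles, and message names all invented) sketches that idea; cpXML itself is an XML dialect and covers much more, including nested policies and time-based transitions.

```python
# Minimal sketch of a conversation policy as a finite-state protocol.
# Table maps (state, sender_role, message) -> next state; anything not
# in the table is disallowed in that state.
POLICY = {
    ("start",    "buyer",  "request_quote"): "quoted",
    ("quoted",   "seller", "quote"):         "deciding",
    ("deciding", "buyer",  "accept"):        "done",
    ("deciding", "buyer",  "reject"):        "done",
}

def run_conversation(messages, policy=POLICY, start="start", final="done"):
    """Check a (sender, message) sequence against the policy; True iff it
    follows the protocol from the start state to the final state."""
    state = start
    for sender, msg in messages:
        key = (state, sender, msg)
        if key not in policy:
            return False  # message not permitted in this state
        state = policy[key]
    return state == final
```

Because the policy is data, not code, it can be downloaded from a third party and bound to a firm's business logic at the decision points, which is exactly the separation cpXML aims for.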
[August 21, 2002] "Internet Registry Information Service (IRIS)." By Andrew L. Newton (VeriSign, Inc., WWW). Network Working Group Internet-Draft. Reference: 'draft-ietf-crisp-iris-core'. August 14, 2002, expires February 12, 2003. "This document describes an application layer client-server protocol for a framework of representing the query and result operations of the information services of Internet registries. Specified in XML, the protocol defines generic query and result operations and a mechanism for extending these operations for specific registry service needs... Each type of Internet registry, such as address, routing, and domain, is identified by a registry identifier (ID). This registry identifier is a URI, more specifically a URN, used within the XML instances to identify the XML schema formally describing the set of queries, results, and entity classes allowed within that type of registry. A registry information server may handle queries and serve results for multiple registry types. Each registry type that a particular registry operator serves is a registry service instance. IRIS and the XML schema formally describing IRIS do not specify any registry, registry identifier, or knowledge of a particular service instance or set of instances. IRIS is a specification for a framework with which these registries can be defined, used, and in some cases interoperate. The framework merely specifies the elements for registry identification and the elements which must be used to derive query elements and result elements. This framework allows a registry type to define its own structure for naming, entities, queries, etc. through the use of XML namespaces and XML schemas (hence, a registry type is identified by the same URI that identifies its XML namespace). In order to be useful, a registry type's specification must extend from this framework. 
The framework does define certain structures that can be common to all registry types, such as references to entities, search continuations, entity classes, and more. A registry type may declare its own definitions for all of these, or it may mix its derived definitions with the base definitions. IRIS defines two types of referrals, an entity URI and a search continuation. An entity URI indicates specific knowledge about an individual entity, and a search continuation allows for distributed searches. Both types may span differing registry types and instances. No assumptions or specifications are made about roots, bases, or meshes of entities. Finally, the IRIS framework attempts to be transport neutral..." See also IETF Cross Registry Information Service Protocol.
[August 21, 2002] "Using the Internet Registry Information Service (IRIS) over the Blocks Extensible Exchange Protocol (BEEP)." By Andrew L. Newton (VeriSign, Inc., WWW). Network Working Group Internet-Draft. Reference: 'draft-ietf-crisp-iris-beep'. August 14, 2002, expires February 12, 2003. "This document specifies how to use the Blocks Extensible Exchange Protocol (BEEP) as the application transport substrate for the Internet Registry Information Service (IRIS) as described in 'draft-ietf-crisp-iris-core-00.txt.' The BEEP profile for IRIS transmits XML instances encoded as UTF-8 using the media-type of "application/xml" according to RFC3023. The BEEP profile for IRIS only has a one-to-one request/response message pattern. This exchange involves sending an IRIS XML instance, which results in a response of an IRIS XML instance. The request is sent by the client using an 'MSG' message containing a valid IRIS XML instance. The server responds with an 'RPY' message containing a valid IRIS XML instance. The 'ERR' message is not used for faults and all responses from the server MUST use the 'RPY' message..."
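The BEEP profile described above is a strict one-to-one exchange: the client sends an IRIS XML instance in a 'MSG' message, and the server always answers with a 'RPY' message carrying an IRIS XML instance, never 'ERR'. The sketch below simulates that pattern in plain Python; BEEP framing is reduced to dictionaries, and the query/result element names are hypothetical, since the real vocabularies come from each registry type's XML schema.

```python
# Toy simulation of the IRIS-over-BEEP one-to-one MSG/RPY exchange.
import xml.etree.ElementTree as ET

def beep_msg(payload_xml):
    # Content is UTF-8 XML with media type application/xml (RFC 3023).
    return {"type": "MSG", "content-type": "application/xml",
            "payload": payload_xml.encode("utf-8")}

def serve(frame):
    """Toy registry server: parse the IRIS query, answer in a RPY frame."""
    assert frame["type"] == "MSG"
    query = ET.fromstring(frame["payload"].decode("utf-8"))
    name = query.findtext("domainName") or "unknown"
    result = f"<response><domain><name>{name}</name></domain></response>"
    # Per the profile, responses -- even faults -- use RPY, not ERR.
    return {"type": "RPY", "content-type": "application/xml",
            "payload": result.encode("utf-8")}

reply = serve(beep_msg(
    "<request><domainName>example.com</domainName></request>"))
```

Keeping faults inside the RPY payload means error reporting stays in the IRIS XML vocabulary rather than leaking into the transport layer.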
[August 21, 2002] "Sun Opens Up With Storage Software Suite." By Scott Tyler Shafer. In InfoWorld (August 19, 2002). "Sun Microsystems on Tuesday will throw its hat into the emerging market for heterogeneous SAN management software, claiming to be the first company to offer support for budding open standards for discovering and managing multi-vendor storage devices. The new software suite, dubbed Sun StorEdge Enterprise Storage Manager (ESM), was created by combining its existing discrete storage software products with new technology based on evolving storage-specific standards, said James Staten, director of strategy for Sun's Storage division, in Mountain View, Calif. These open standards include CIM (Common Information Model), WBEM (Web-Based Enterprise Management), and the Bluefin specification that was recently submitted to the Storage Networking Industry Association under the name SMI (Storage Management Initiative). SNIA says SMI will be formally submitted to a standards body later this year. The decision to adopt open standards in lieu of exchanging proprietary APIs with competitors and partners was an easy choice, explains Steve Guido, a product line manager for Sun's storage division... Guido explained Tuesday's release of the ESM software suite will feature a way to do topology reporting, device configuration, and proactive health diagnostics. Guido added future releases will support an 'expanding device support list' and automation capabilities via the company's existing StorEdge Utilization Suite & Performance Suite. Staten further explained that the StorEdge Resource Management was created by combining the company's current StorEdge Diagnostic Expert software, StorEdge Resource Management and Availability software, StorEdge Traffic Manager, and StorEdge Utilization Suite & Performance Suite..." 
References: (1) "DMTF Common Information Model (CIM)"; (2) "SNIA Announces Bluefin SAN Management Specification Using WBEM/MOF/CIM"; (3) the announcement "Sun Microsystems Delivers New CIM-Compliant San Management Software. Continues To Lead Industry On Storage Open Standards. Complete Storage Management Portfolio Enables Customers to Improve Service Levels and Reduce Total Cost of Ownership."
[August 20, 2002] "Automating Business Processes and Transactions in Web Services. An Introduction to BPEL4WS, WS-Coordination, and WS-Transaction." By James Snell (IBM Emerging Technologies). From IBM developerWorks, Web Services Zone. August 2002. ['The new Business Process Execution Language for Web Services, WS-Transaction, and WS-Coordination specifications provide a comprehensive business process automation framework that allows companies to leverage the power and benefits of the Web Services Architecture to create and automate business transactions. Here we present a high level executive overview of what the three new specifications provide.'] "The role of dynamic e-business within the enterprise is to simplify the integration of business and application processes across technological and corporate domains. The relatively recent advent of Web service technologies such as SOAP, WSDL, and UDDI has helped to evolve our thinking about how distributed applications can connect and work together in an increasingly dynamic way, yielding a more dynamic economic environment. None of these core Web service specifications (SOAP, WSDL, UDDI, etc.) were designed to provide mechanisms by themselves for describing how individual Web services can be connected to create reliable and dependable business solutions with the appropriate level of complexity. The technology industry has not yet produced a single standardized Web services view of how to define and implement business processes so that such connections can be described. To address the concerns and needs of our customers, IBM again teamed with Microsoft and others to develop and propose a Business Process Execution Language for Web Services, a new specification that replaces and offers additional functionality and greater flexibility over previous individual efforts on the IBM Web Services Flow Language (WSFL) and Microsoft XLANG grammar. 
The Business Process Execution Language for Web Services (BPEL4WS or BPEL for short) is an XML-based workflow definition language that allows businesses to describe sophisticated business processes that can both consume and provide Web services. In this document we will introduce the fundamental principles of BPEL as well as two significant and complementary specifications, WS-Coordination and WS-Transaction, also developed jointly by IBM and Microsoft. These deal with how one coordinates the dependable outcome of both short- and long-running business activities. This issue is central to the successful implementation of a distributed business process. To illustrate the function and benefits of the BPEL, WS-Transaction, and WS-Coordination specifications, we will explore the application of those technologies to a real-world business scenario..." See detailed references in "Business Process Execution Language for Web Services (BPEL4WS)."
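Since BPEL4WS is an XML-based workflow language, a process definition is itself just a structured document: activities such as receive, invoke, and reply composed inside a sequence. The fragment below sketches that shape and walks it with Python's stdlib; the activity element names follow the BPEL4WS vocabulary, but the partner and operation names are invented, and real process definitions carry many more attributes (partner links, variables, fault handlers).

```python
# Hedged sketch of the shape of a BPEL4WS process definition.
import xml.etree.ElementTree as ET

process = ET.fromstring("""
<process name="purchaseOrderProcess">
  <sequence>
    <receive partner="customer"         operation="sendPurchaseOrder"/>
    <invoke  partner="shippingProvider" operation="requestShipping"/>
    <invoke  partner="invoiceProvider"  operation="initiatePriceCalculation"/>
    <reply   partner="customer"         operation="sendPurchaseOrder"/>
  </sequence>
</process>
""")

# An engine would execute these in order; here we just list them.
activities = [(a.tag, a.get("operation")) for a in process.find("sequence")]
```

The receive/reply pair shows the "both consume and provide" point from the article: the process is itself exposed as a Web service to the customer while invoking other Web services in between.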
[August 20, 2002] "Java and UDDI Registries." By Paul Tremblett. In Dr. Dobb's Journal [DDJ] (September 2002) #340, pages 34-40. "Applications that require web services send requests to services at advertised URLs. The service processes the request and returns the result. Applications obtain information about how to contact a service (along with other useful data) from business registries such as the Universal Description, Discovery, and Integration (UDDI) project, a platform-independent open framework for describing services and businesses and integrating them via the Internet. Currently more than 200 companies support UDDI... The tools needed to query a UDDI registry are available and easy to use, and the JAXM API is suitable for preparing SOAP messages... Paul Tremblett shows how your Java applications can contact business registries, such as UDDI, and retrieve information from them..." See the associated source code and listings. See: "Universal Description, Discovery, and Integration (UDDI)."
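A UDDI inquiry is ultimately a SOAP message posted to the registry's inquiry URL, which is what a JAXM client assembles under the hood. The sketch below builds a find_business request with Python's stdlib for illustration: the SOAP envelope namespace and the UDDI v2 body element are as published, but the business name is a placeholder, and actually sending the message to a registry is omitted.

```python
# Sketch of the SOAP payload for a UDDI v2 find_business inquiry.
import xml.etree.ElementTree as ET

SOAP = "http://schemas.xmlsoap.org/soap/envelope/"
UDDI = "urn:uddi-org:api_v2"

env = ET.Element(f"{{{SOAP}}}Envelope")
body = ET.SubElement(env, f"{{{SOAP}}}Body")
find = ET.SubElement(body, f"{{{UDDI}}}find_business", generic="2.0")
name = ET.SubElement(find, f"{{{UDDI}}}name")
name.text = "Example Corp"  # placeholder business name

request = ET.tostring(env, encoding="unicode")
```

The registry's response (a businessList) carries the service access points the application then uses to contact the advertised service, closing the loop the article describes.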
[August 20, 2002] Open Management Interface Specification. Version 1.0, Revision 1. From webMethods (Geoff Bullen, Ash Nangia, Doug Stein, Mona He, Prasad Yendluri, Steve Jankowski) and Hewlett-Packard (Art Harkin, Victor Martin, Homayoun Pourheidari, Fu-Tai Shih). 118 pages. Submitted [via Veena Subrahmanyam, HP] 2002-08-20 to the OASIS Management Protocol Technical Committee. "The Open Management Interface (OMI) is an open specification, jointly authored by Hewlett Packard and webMethods, to provide an easy, open way for system management vendors and other interested parties to access and manage the resources associated with an integration platform and its associated business processes... The intent of OMI is to provide an easy, open way for systems management vendors and other interested parties to access and manage the resources associated with an integration platform and its associated business processes. What has been developed is a generic and extendable interface, accessed as a web service (i.e., via SOAP, XML and HTTP). Through this interface consumers can manipulate a set of OMI managed objects that represent the available resources. The OMI specification also defines a set of standard attributes, operations and notifications for each type of OMI managed resource and also a set of relations that can exist between OMI managed objects..." See other background in: (1) "webMethods and HP Release Open Management Interface Specification (OMI) Version 1.0."; (2) "webMethods and HP Announce Availability of the Open Management Interface (OMI) Specification Version 1.0. Companies Answer Customer Demand for Management of Integration Platforms, Business Processes and Web Services." See "Management Protocol Specification."
[August 20, 2002] "ASR On The Fly." By Ellen Muraskin. In Communications Convergence (August 2002), pages 42-55. "Until now, established VoiceXML platform providers have been focused on selling products to carriers and host/service providers - largely in support of voice portal and customer-facing self-service applications. The carrier voice-dialing space has been dominated by makers of monolithic systems, optimized for fast data retrieval and high throughput. But as carrier-level voice dialing becomes more popular, and as the enterprise begins showing interest in speech-mediated call routing and messaging, the VoiceXML market leaders are taking notice: forming partnerships with messaging providers, or downscaling and modifying network-grade platforms for enterprise use... VoiceXML-creation [is] a predictable step in the maturation of the VoiceXML markup language itself, and follows the HTML historical model. This merits examination because the VoiceXML voice markup language itself, now at V. 2.0, has gained traction with developers in the past year, in spite of a marketplace somewhat distracted by the whistle of the multimodal, device-agnostic SALT markup and approaching Microsoft .NET train... As VoiceXML expands its functional range and industry adoption, it begins to repeat HTML history by moving from static to dynamic presentation. No longer content to expose a dynamic database through CGI and Perl scripting, developers now use a web application server (and commonly, Java 2 Enterprise Edition) to deliver up fresh VoiceXML pages at execution time, in small, one-task-per-page applets. Result: just as my amazon.com home page doesn't look like yours, my voice application may not play the same as someone else's... A JSP or ASP-processing app server (BEA's, Tomcat's, Microsoft's, Oracle's, IBM's) runs the Java code that runs your voice site, and dynamically produces VoiceXML pages. 
Such a server has become the platform for a new class of service creation environments and their palettes of precoded servlets... At first glance, their drag-and-drop boxes, fill-ins and call flows resemble the proprietary IVR app gens of yore. But the pages they produce are dynamic - changing the app for each user. They are also portable to a range of VoiceXML-interpreter platforms. The platforms, in turn, run the customer's choice of core speech recognition and synthesis technologies..."
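The static-to-dynamic shift Muraskin describes comes down to rendering a fresh VoiceXML page per request, so the same "page" plays differently for each caller. A minimal sketch of the idea with an invented one-task dialog (the platforms in the article do this from JSP or ASP on a J2EE app server):

```python
import xml.etree.ElementTree as ET

def render_greeting_page(caller_name: str, new_messages: int) -> str:
    """Build a small one-task VoiceXML page tailored to one caller."""
    vxml = ET.Element("vxml", version="2.0")
    form = ET.SubElement(vxml, "form", id="greeting")
    block = ET.SubElement(form, "block")
    prompt = ET.SubElement(block, "prompt")
    prompt.text = f"Hello {caller_name}. You have {new_messages} new messages."
    return ET.tostring(vxml, encoding="unicode")

# Each request gets its own page, just as each Amazon visitor gets a
# different home page.
page = render_greeting_page("Alice", 3)
```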
[August 20, 2002] "Wireless Standards Group Picks Up ODRL." By [Seybold Staff]. In The Bulletin: Seybold News and Views On Electronic Publishing Volume 7, Number 47 (August 21, 2002). "... The Open Mobile Alliance (OMA) [has] released for comment the OMA Download, the specification for securely downloading content objects to OMA-compatible mobile devices. The OMA Download spec includes a subset of ODRL. OMA is a standards body for wireless devices that is the successor organization to the WAP Forum... OMA has a membership that spans all of the major players in the wireless-device space and virtually every technology on its periphery. Various mobile device vendors had been investigating ContentGuard's XrML, but apparently have decided against it. Relevant concerns may have included the relatively large size of XrML's rights-specification files and ContentGuard's licensing requirements for the language. The subset of ODRL that OMA has adopted suffices for secure commerce in ring tones and other small chunks of digital media. While it is unclear whether rights specifications in ODRL are really that much smaller than their equivalents in XrML, there is a stark difference in licensing terms: ODRL is offered on an open-source basis. It is also unclear, of course, when or whether the mobile-device industry will actually implement OMA Download in their services and available devices..." See also Bill Rosenblatt's analysis of the ODRL 1.1 specification release and its use in OMA Download. Other references: (1) ODRL version 1.1; (2) OMA Rights Expression Language Version 1.0; (3) "Proposed Open Mobile Alliance (OMA) Rights Expression Language Based Upon ODRL."
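Part of ODRL's appeal for commerce in ring tones is how little markup a usable rights expression needs. A toy illustration of checking such an expression before releasing content; the element names are only loosely modeled on ODRL and are not taken from the actual OMA subset:

```python
import xml.etree.ElementTree as ET

# A compact, ODRL-style rights expression (illustrative element names).
RIGHTS_XML = """
<rights>
  <agreement>
    <asset id="ringtone-42"/>
    <permission><play/><display/></permission>
  </agreement>
</rights>
"""

def permits(rights_doc: str, action: str) -> bool:
    """Return True if any agreement in the document grants the named action."""
    root = ET.fromstring(rights_doc)
    return root.find(f".//permission/{action}") is not None
```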
[August 20, 2002] "The Web Services Scandal. How Data Semantics Have Been Overlooked in Integration Solutions." By Jeffrey T. Pollock. In EAI Journal Volume 4, Number 8 (August 2002), pages 20-23. Cover Story. ['Just because you know the standards - WSDL, XML, SOAP, and UDDI - don't think you're all set for comprehensive integration. The issue of data semantics has been overlooked by most in the media. Enterprises do so at the risk of failed integration projects.'] "Web services technology, despite its potential benefits, is limited in its ability to work with randomly formatted, non-standard data or data not based on XML... Web services promise to get us out of the vendor lock-in scenario and free us from the 'integrating the integrators' problem. Ultimately, Web services are relying heavily on the standard vocabularies solution to the problem of disparate semantics. These vocabulary standards will no doubt be significant for some industries. But the notion that a single set of standards will enable businesses everywhere to speak precisely the same vocabulary is as misplaced as the idea that EDI solved the integration challenge 30 years ago. At best, the lessons we've learned from industry's work with EDI have shown us that: (1) Defining standard vocabularies is difficult and time-consuming. (2) Once defined, standards don't adapt well. (3) People don't implement standards correctly anyway. There's no effective way to accommodate variant business semantics within the Web services framework. That's not surprising because Web services were designed for a different purpose. In fact, these limitations are really a legacy of traditional business systems integration approaches. Most problems contributing to the high failure rates of integration projects aren't technical in nature. 
The human activities involved in capturing the right kinds of expertise about the information, design intent, business rules, and usage of the data that needs to be interoperable have been core contributors to many of these failures. The issues surround how, and to what degree, proper analysis occurs before attempting to link multiple information systems together. Too many project managers, architects, and business people assume that with enough technology, enough business experts, and enough money, the problem can be solved. The assumption is that, somehow, the work of tying together all this disparate information across businesses, computer systems, company cultures, and international boundaries can be accomplished by putting enough smart people in a room together. Wrong. ... Semantics-based enterprise information interoperability solutions are an emerging category of tools focused on solving the problems of disjointed vocabularies, data definitions, terminology, and world-views of enterprise IT systems. Semantic interoperability solutions let engineers and business people build loosely coupled information webs that can dynamically enable rich, semantically precise collaboration and understanding -- without the need for custom code or standardized vocabularies. Information interoperability solves a completely different set of problems than either Web services or B2Bi and EAI products. It focuses on the logical information infrastructure of an enterprise rather than the physical connectivity and routing infrastructure..." See: "Markup Languages and Semantics."
[August 20, 2002] "Application Integration: Business and Technology Trends." By Eric Roch (Tibco). In EAI Journal Volume 4, Number 8 (August 2002), pages 34-39. ['Standards won't replace EAI, which is now viewed as an essential part of the enabling infrastructure, like databases and networks. EAI will allow the creation of new standards-based systems that will bridge the gap between old and new systems.'] "... Neither Web services nor J2EE extensions will eliminate the need for application integration broker suites. To fully integrate an enterprise using emerging standards would necessitate aligning all the component behavior and data formats for every application in the enterprise. To attempt such a massive effort with application architectures such as J2EE or .NET would require extensive programming to create standards-based interfaces and data transformations. Such an effort would still not address extending processes to the supply chain via the Internet. For example, how can an enterprise standardize on a common XML data exchange schema from the competing standards of RosettaNET, cXML, BizTalk, XEDI, or ebXML when their customers or suppliers are likely to support different standards? An integration broker is necessary to leverage the standards, where available, and provide application adapters and data transformations where standards aren't available... Many companies make software acquisition decisions based on integration issues, when integration architecture should be part of the enabling infrastructure. The use of EAI allows the software selection process to focus on requirements and not on the cost of integration or infrastructure issues. EAI architecture is starting to be viewed similarly to networks, transaction processing, and database management systems -- as part of the enabling infrastructure used to build applications. 
Standards won't replace EAI, yet EAI will enable the creation of standards-based systems combining old and new systems both internal and external to the corporation. The integrated processes within the supply chain will drive new value from existing EIS systems and provide competitive advantage..."
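The broker's core job Roch describes, mapping one partner's XML vocabulary onto another's, can be sketched in a few lines. Both vocabularies below are invented stand-ins rather than real RosettaNet or cXML messages:

```python
import xml.etree.ElementTree as ET

# A partner's (invented) order format.
PARTNER_DOC = '<Order><Item sku="A-100" qty="2"/><Item sku="B-7" qty="1"/></Order>'

def to_internal(partner_xml: str) -> str:
    """Map the partner's vocabulary onto an (invented) in-house format."""
    src = ET.fromstring(partner_xml)
    out = ET.Element("purchaseOrder")
    for item in src.findall("Item"):
        ET.SubElement(out, "line", part=item.get("sku"), quantity=item.get("qty"))
    return ET.tostring(out, encoding="unicode")

internal = to_internal(PARTNER_DOC)
```

A real broker multiplies this by every partner format, which is why adapters and transformations remain necessary wherever the standards diverge.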
[August 19, 2002] "Sun Unveils New Standards-Based Storage Software." By Lucas Mearian. In Computerworld (August 19, 2002). "Sun Microsystems Inc. this week plans to introduce advances to its storage-area network (SAN) management software, including the first use of two standards that are the cornerstone of an industrywide effort to bridge the interoperability gap in multivendor SANs. Sun's latest version of its StorEdge Enterprise Storage Manager software uses the Common Information Model (CIM) and Web-Based Enterprise Management (WBEM), the two primary elements of the Storage Networking Industry Association's draft storage management specification, formerly known as Bluefin. 'I think this really puts the pressure on the vendor community in general but also [on] our partners and competitors to begin developing storage software based on open standards,' said Russ Fellows, strategic marketing manager for Sun's Network Storage Unit. Fellows said Sun's new StorEdge Utilization Suite will allow disk-to-disk archiving and replication over long distances. Sun also announced new products in its tape library line. It said its new StorEdge L25 and L100 tape libraries will offer high capacity in a smaller footprint for midrange applications. StorEdge Enterprise Storage Manager suite offers topology reporting, network health monitoring, diagnostics and device monitoring under a centralized platform. The new software, which starts at $15,000, is available for the Solaris Operating System and is accessible from Linux, HP-UX, and other hosts remotely in a Web-enabled environment. It can also perform array management for Hitachi Data Systems Inc.'s Lightning 9900 storage array..." See the announcement: "Sun Microsystems Delivers New CIM-Compliant San Management Software. Continues To Lead Industry On Storage Open Standards. Complete Storage Management Portfolio Enables Customers to Improve Service Levels and Reduce Total Cost of Ownership." 
See background in: (1) "SNIA Announces Bluefin SAN Management Specification Using WBEM/MOF/CIM"; (2) "DMTF Common Information Model (CIM)."
[August 19, 2002] "Easier Financial Reporting At Hand With XBRL. Nasdaq First Stock Market to Adopt XML-Based Tags for Financial Data." By Eileen Colkin and Paul McDougall. In InformationWeek (August 12, 2002), page 22. "At a time when scrutiny of corporate financial statements has intensified, Microsoft, Nasdaq, and PricewaterhouseCoopers have launched a pilot program to test the viability of using XBRL to provide greater transparency of financial statements, making it easier to report financials over the Internet. The Extensible Business Reporting Language is an open specification that uses XML-based data tags to describe financial data in business reports and databases. Under the pilot program begun last week, investors will have access to XBRL-enabled financial data from the financial reports of 21 semiconductor companies listed on the Nasdaq. Nasdaq extracted company data from Securities and Exchange Commission filings, and PricewaterhouseCoopers consultants tagged each of the documents in XBRL... The backing of three marquee firms will move XBRL from a 'talked about' issue to a more credible solution, says John Hagerty, VP of research at AMR Research. 'The success of this pilot will indeed accelerate the XBRL standard adoption,' he says. Hagerty predicts that XBRL-enabled financial reports will be a requirement within several years..." See: (1) "Pilot Nasdaq-Hosted Web Service Features XBRL Financial Data"; (2) "Extensible Business Reporting Language (XBRL)."
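What tagging a filing in XBRL buys is that each figure becomes a machine-readable fact a program can pull out directly. A greatly simplified sketch; the namespace and element names only imitate the style of an XBRL instance and come from no real taxonomy:

```python
import xml.etree.ElementTree as ET

CI = "urn:example:ci"  # stand-in namespace, not a real XBRL taxonomy
INSTANCE = f"""
<group xmlns:ci="{CI}">
  <ci:revenues contextRef="FY2001" unitRef="USD">7349000000</ci:revenues>
  <ci:netIncome contextRef="FY2001" unitRef="USD">40000000</ci:netIncome>
</group>
"""

def fact(doc: str, local_name: str) -> int:
    """Pull one tagged monetary fact out of the instance document."""
    root = ET.fromstring(doc)
    return int(root.find(f"{{{CI}}}{local_name}").text)

revenues = fact(INSTANCE, "revenues")
```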
[August 19, 2002] "ODRL 1.1 Review. Release of Open Digital Rights Language (ODRL) 1.1 and its endorsement by the Open Mobile Alliance (OMA)." By [Bill Rosenblatt] GiantSteps/Media Technology Strategies. In DRM Watch. August 19, 2002. "ODRL has come quite a way since its pre-1.0 incarnation, where it was a lightweight yet elegant schema for expressing content rights metadata. With version 1.1, it is now a full-fledged rights data model with an associated XML-based specification language that is suitable for use in real-world content distribution services. More particularly, ODRL is now entering the territory of ContentGuard's XrML... Apart from the similarities, a key difference between ODRL and XrML is that ODRL seems more applicable to actual transactions in the media and publishing world, whereas XrML (especially in its latest incarnation, version 2.0) has designs on broader cross-vertical applicability. ODRL's primitives map more directly onto the kinds of license terms that are found in real-world media; for example, it has explicit features for specifying things like resolutions, encoding rates, and file formats for content, whereas XrML does not. This would seem to stem from the fact that XrML began life in a research lab and is now under the care of a company (ContentGuard) that needs to maximize revenue and a standards body (OASIS) that does not confine itself to specific vertical markets; while on the other hand, ODRL derives from real-world implementation experience that Iannella and his IPR colleagues have had in building content management and distribution systems for their clients in publishing and related markets. ODRL is narrower than XrML when it comes to security elements. XrML ambitiously attempts to establish 'trust levels' that can help two systems decide whether or not to engage in a transaction. This concept only scratches the surface of a huge set of issues that are very hard to model properly (just ask the folks at the Liberty Alliance). 
ODRL wisely doesn't touch this area, nor does it deal with content copy protection per se. Both languages specify methods for securing rights specifications -- i.e., documents written in the languages themselves -- via XML Digital Signatures and other related methods. In all, ODRL version 1.1 feels more compact and elegant than the comparatively sprawling, ambitious XrML -- but it's not 'XrML Lite'; it has its own set of complexities. Its endorsement by the OMA has lifted it from its previous status (outside of Australia and other parts of the Pacific Rim, at least) as a model primarily of interest to researchers to become a serious contender in the arena of badly-needed DRM interoperability standards. Yet ODRL must overcome three serious hurdles in order to maintain its momentum. Perhaps the most serious is ContentGuard's patent portfolio..." See: (1) "Proposed Open Mobile Alliance (OMA) Rights Expression Language Based Upon ODRL"; (2) "Open Digital Rights Language (ODRL)"; (3) "XML and Digital Rights Management (DRM)." [27-August-2002 DRM Watch update. "Correction: the story on the rights languages ODRL and XrML, published on August 19th, contained a reference to 'trust levels' in ContentGuard's XrML as part of the language's security features. Trust levels were a feature of older ContentGuard technology and are not included in the current version, 2.0, of XrML. Neither language attempts to define relative trust levels of different entities involved in content rights transactions. DRM Watch regrets the error and has edited the story accordingly..."]
[August 19, 2002] "NeoCore Adds Muscle to XML Database." By Matt Hicks. In eWEEK (August 19, 2002). "NeoCore Inc. on Monday is announcing the next release of its native XML database with a focus on improved performance, new querying capabilities and expanded interface support. Version 2.6 of the NeoCore XML Information Management System (XMS), which will be available August 30, will provide an order of magnitude improvement in query and store performance as well as support for XQuery, the developing XML querying standard, according to officials of the Colorado Springs, CO, company... The latest release also adds support for Java 2 Enterprise Edition with new Enterprise JavaBeans interfaces and provides an HTTP interface for integration. On the scalability front, it includes support for the Solaris 64-bit platform. NeoCore's improvements come as the major relational database vendors -- Oracle Corp., IBM and Microsoft Corp. -- have begun or are planning to add greater support for XML in their respective databases. But, according to [NeoCore CEO Ric] Miles, because NeoCore was built to handle XML, rather than being restructured to handle XML like traditional relational databases, it better manages XML data. It also eliminates 50 percent to 70 percent of the database design effort because of its Digital Data Processing technology, allowing the database to self-construct based on XML. Rex Fowler, CEO of Fowler Software Design LLC, is recommending NeoCore XMS to his clients who are looking to store and manage XML. He said he has also experimented with using it himself and likes its speed and new querying capabilities..."
[August 19, 2002] "XML Firewalls Aid Services." By Darryl K. Taft. In eWEEK (August 19, 2002). "Two technology companies are helping corporate users embrace XML-based information while ensuring the security and integrity of the messages that come into their systems. Quadrasis and Tarari Inc. this week will each introduce so-called XML firewalls that will offer businesses ways of inspecting XML messages before they enter their systems. An XML firewall acts like a traditional firewall in that it intercepts traffic and makes redirection or transformation decisions based on policies, but it can also look inside messages, parse the XML content, and make security and routing decisions... Quadrasis, a division of Hitachi Computer Products (America) Inc., of Waltham, Mass., this week is rolling out Quadrasis/Xtradyne SOAP Content Inspector, software that inspects and secures Simple Object Access Protocol messages and enables enterprises to take Web services outside their networks. Quadrasis developed the technology in cooperation with Xtradyne Technologies AG, of Berlin. The tool secures SOAP-to-SOAP communication via proxy servers with authentication, authorization, audit, alarm and policy techniques, said Quadrasis Chief Technology Officer Bret Hartman. It provides single-sign-on technology and can distinguish between standard HTML and SOAP messages. It includes a SAML (Security Assertion Markup Language) attribute assertion and can sign and verify defined SOAP messages... Meanwhile, Tarari, a San Diego-based spinoff of Intel Corp. that is launching this week, is announcing its combination hardware/software Tarari Content Processors. The processors act as an XML network appliance, reading and certifying every message as well as performing the SOAP filtering."
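The distinguishing move of an XML firewall, as described here, is parsing the message itself and applying policy to the operation inside the SOAP Body rather than only to headers and ports. A minimal sketch of that content inspection, with an invented operation whitelist:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "{http://schemas.xmlsoap.org/soap/envelope/}"
ALLOWED = {"getQuote", "listOrders"}   # invented policy, for illustration

def inspect(soap_message: str) -> bool:
    """Accept only messages whose Body invokes a whitelisted operation."""
    root = ET.fromstring(soap_message)
    body = root.find(f"{SOAP_NS}Body")
    if body is None or len(body) == 0:
        return False
    operation = body[0].tag.rsplit("}", 1)[-1]   # drop any namespace prefix
    return operation in ALLOWED

MSG = ('<e:Envelope xmlns:e="http://schemas.xmlsoap.org/soap/envelope/">'
       '<e:Body><getQuote symbol="SUNW"/></e:Body></e:Envelope>')
```

A production gateway layers authentication, signature verification, and routing on top of this, but the parse-then-decide step is the common core.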
[August 19, 2002] "Web Services Extend Tools' IQ To Other Apps. Cognos Developers' Kit Provides Links to Query and Reporting Capabilities." By Rick Whiting. In InformationWeek (August 19, 2002). "Cognos Inc. will dive into Web services this week when it debuts a software developers' kit for building Web services. The kit lets users extend Cognos' business-intelligence tools to other applications. While developers' kits from Cognos and its competitors are designed to bring business intelligence to a wider audience, analysts say vendors can do more to realize Web services' full potential. The kit allows developers to extend Cognos' Series 7's query and reporting capabilities to other applications and environments. Using XML and Simple Object Access Protocol technology, industry-specific applications such as manufacturing and logistics apps can tap into Cognos data with less integration effort than through direct connections, the company says. The kit also will transform Cognos reports into XML content. Web services link Cognos applications to a portal, an extranet, or apps running on handheld devices. Windows and Unix versions of the kit will be priced at $10,000 per developer... Developers' kits like Cognos' or one from rival Business Objects SA released earlier this year are 'basically a bag of nuts and bolts,' says Philip Russom, an analyst at Giga Information Group. Developers use them to build Web-services wrappers around the business-intelligence tools' APIs--a technical chore. 
And they provide only front-end links to reports generated by the business-intelligence tools, not links between the tools and back-end data sources, he says..." See also the announcement: "Cognos Announces Immediate Availability of Cognos Web Services. Platform Support for Unix, Microsoft -- Extends Value of Broad Range of Cognos Series 7 Business Intelligence."
[August 19, 2002] "Cognos to Unveil Support for Web Services." By Heather Harreld. In InfoWorld (August 16, 2002). "Cognos on Monday plans to unveil support for Web services as its new integration platform will extend its Cognos Series 7 business intelligence to other applications and environments. Using Internet technologies such as XML and SOAP (Simple Object Access Protocol) to build connections between software and multiple applications, Cognos Web Services will allow enterprises and their trading partners to rapidly integrate Cognos into Web, vertical market, and wireless applications, according to company officials from Burlington, Mass.-based Cognos. In addition, Cognos will team with Macromedia to combine Cognos Web Services with Macromedia Flash to provide responsive, "desktop like" applications for delivering customized BI visualizations and reports... The Web services platform, which supports both Unix and Microsoft platforms, will also allow companies to tailor Cognos BI to fit unique needs because developers can customize software, officials said. In addition, Web services will allow companies to display content from a Cognos application customized within an existing user interface, which can then be seamlessly added to existing enterprise portals..." See also the announcement: "Cognos Announces Immediate Availability of Cognos Web Services. Platform Support for Unix, Microsoft -- Extends Value of Broad Range of Cognos Series 7 Business Intelligence."
[August 16, 2002] "Web Services Management Protocol Spec Eyed." By Paul Krill. In InfoWorld (August 15, 2002). "OASIS (Organization for the Advancement of Structured Information Standards) has formed a technical committee to propose an XML-based management protocol specification for Web services, said a Novell official Thursday who is chairing the committee. The OASIS Management Protocol Technical Committee, formed about two weeks ago, is intended to boost distributed systems management over the Internet, according to Novell. Management of Web services through use of Web services is the crux of the effort, said Novell's Winston Bumpus, director of standards for the company, in San Jose, Calif., and chairman of the new OASIS committee. As the industry is building a Web services platform, it is important that there be an infrastructure to manage it, Bumpus said. The protocol would be used for functions such as resource allocation, monitoring, controlling, and troubleshooting, Bumpus said. The committee is reviewing a number of technologies for use in the protocol, including XML, SOAP, OMI (Open Management Interface), DMTF CIM (Distributed Management Task Force Common Information Model), and DMTF CIM Operations. Plans are to have the specification ready in June 2003, with reference implementations to appear next spring and supportive products to be available in late-2003, Bumpus said. The effort is intended to enable companies to not only manage their own services but to also oversee interaction of those services with services from other companies, according to Novell. The work is intended to deliver an industry-standard protocol for managing desktops, services, and networks across an enterprise or Internet environment..." See "Management Protocol Specification."
[August 16, 2002] "The XMLPULL API." By Elliotte Rusty Harold. From XML.com. August 14, 2002. ['XMLPULL, an alternative API for parsing XML. Harold analyzes XMLPULL, which takes a different approach than either SAX or DOM, offering real benefits, but he stops short of recommending it for server-side production use. He explains the flaws in its implementation along the way.'] "Most XML APIs are either event-based like SAX and XNI or tree-based APIs like DOM, JDOM, and dom4j. Most programmers find tree-based APIs to be easier to use, but they are less efficient, especially when it comes to memory usage. A typical in-memory tree is several times larger than the document it models. These APIs are normally not practical for documents larger than a few megabytes in size or in memory-constrained environments. In these situations, a streaming API such as SAX or XNI is normally chosen. However, these APIs model the parser rather than the document. They push the content of the document to the client application as soon as they see it, whether the client is ready to receive that data or not. SAX and XNI are fast and efficient, but the patterns they require are unfamiliar and uncomfortable to many developers. XMLPULL is a new streaming API that can read arbitrarily large documents like SAX. However, as the name indicates, it is based on a pull model rather than a push model. In XMLPULL the client is in control rather than the parser. The application tells the parser when it wants to receive the next data chunk rather than the parser telling the client when the next chunk of data is available. Like SAX, XMLPULL is an open source, parser independent pure Java API based on interfaces that can be implemented by multiple parsers. Currently there are two implementations, both free... XMLPULL can be a fast, simple, and memory-thrifty means of loading data from an XML document whose structure is well known in advance. 
State management is much simpler in XMLPULL than in SAX, so if you find that the SAX logic is just getting way too complex to follow or debug, then XMLPULL might be a good alternative. However, because the existing XMLPULL parsers don't support validation, robustness requires adding a lot of validation code to the program that would not be necessary in the SAX or DOM equivalent. This is probably only worthwhile when the DOM equivalent program would use too much memory. Otherwise, a validating DOM program will be much more robust. The other thing that might indicate choosing XMLPULL over DOM would be a situation in which streaming was important; that is, you want to begin generating output from the input almost immediately without waiting for the entire document to be read..."
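XMLPULL itself is a Java API, but the pull model Harold describes can be seen with Python's standard-library xml.dom.pulldom: the client asks the stream for the next event instead of receiving callbacks, so parsing state lives in ordinary control flow rather than in handler objects:

```python
from xml.dom import pulldom

DOC = "<items><item>one</item><item>two</item></items>"

names = []
events = pulldom.parseString(DOC)      # an event stream the *client* drives
for event, node in events:
    if event == pulldom.START_ELEMENT and node.tagName == "item":
        events.expandNode(node)        # pull in just this subtree on demand
        names.append(node.firstChild.data)
```

The loop variable and the `names` list carry all the state that a SAX handler would have to track across callbacks.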
[August 16, 2002] "XSLT Processing in .NET." By Joe Feser. From XML.com. August 14, 2002. ['Joe Feser explains .NET's support for transforming XML with XSLT'] "This article is meant to help XML developers understand the different ways XSLT transformations can be performed using the .NET framework. It also describes how to use various input sources for an XSLT transformation. In .NET, the System.Xml.Xsl.XslTransform class is used for transforming XML data using an XSLT stylesheet. System.Xml.Xsl.XslTransform supports the XSLT 1.0 syntax, using the http://www.w3.org/1999/XSL/Transform namespace... There are several classes that may be used to read XML and XSLT documents for a transformation. The most versatile of these is the System.Xml.XmlReader class. Since System.Xml.XmlReader is an abstract class, another class must inherit from it. The first class of this type is System.Xml.XmlTextReader, which reads character streams and checks that XML is well-formed, but does not validate the XML against a DTD or schema... Programming and scripting language constructs may also be embedded and utilized in XSLT stylesheets by using the msxsl:script element. The prefix 'msxsl' is assumed to be bound to the urn:schemas-microsoft-com:xslt namespace. Languages supported by the script tag include C#, VB.NET, and JScript.NET, which is the default. An implements-prefix attribute that contains the prefix representing the namespace associated with the script block must also exist in the msxsl:script element. Multiple script blocks may exist in a stylesheet, but only one language may be used per namespace..."
[August 16, 2002] "The Absent Yet Present Link." By Kendall Grant Clark. From XML.com. August 14, 2002. ['Kendall Clark examines XLink's absence from the first public XHTML 2.0 draft.'] "When all is said and done, XHTML is one of the most important W3C specifications because, in principle, it affects the greatest number of users -- at least that's the theory. XHTML, like RDF, is still more often talked about and evangelized than it is actually used, but there are good reasons to think that it will eventually catch on. Which means that the continual evolution of XHTML is a key element of the future health of the Web as a means of intra-human exchange. In last week's column, XHTML 2.0: The Latest Trick, I described the next step in the evolution of XHTML, at least according to the 5 August XHTML 2.0 working draft. The big changes include the addition of navigation lists; significant changes to the way sections are conceptualized, including a real section container element and an unordered section title or heading element (<h>); deprecation of br in favor of line, and of img and applet in favor of object; and the addition of href to every XHTML element. Suffice it to say that if XLink (or, for that matter, XPointer) is absent from XHTML 2.0 because of rivalry or ill will between various W3C Working Groups, that would constitute a rather embarrassing process failure for the W3C; it's the sort of thing that concerned citizens of the Web should reject clearly and unambiguously. You should expect that this issue will end up as a TAG working item in the not-so-distant future..."
[August 15, 2002] "Ventura 10 Throws Down the XML Gauntlet to Frame. [Content Management.]" By Mark Walter (TSR Senior Editor). In Seybold Report: Analyzing Publishing Technology [ISSN: 1533-9211] Volume 2, Number 10 (August 19, 2002). ['Corel has clearly needed an output formatter to go with its XMetaL XML authoring and editing program. Now, in Ventura 10, it has one. The new Ventura keeps many of the features that loyal fans have always loved, and adds XML import and styling. Other features include tags for table properties, direct output of PDF and a preflight checker. And mercifully, the file format hasn't changed; your old publications will still work. There are some issues that Corel should address, but Ventura 10 could offer a real challenge to FrameMaker and the other long-document programs.'] "Hoping to revive a neglected product, Corel has announced version 10 of Ventura Publisher, the first upgrade to the venerable desktop publishing program in four years. The new version introduces XML support that Corel hopes will make Ventura attractive as a companion to XMetaL. Version 10 also updates the product's filters and PDF capabilities... Corel says it is working on a cross-media publishing solution for 2003 and it believes that 'Ventura technology will be core to this application.' Near as we can tell, that means a bundle of Ventura and XMetaL (and quite possibly Micrografx technology) with a third-party content-management system that could handle Web delivery of XML-encoded content. We expect to hear more about Corel's plans in this area this fall... Corel needs a versatile output formatting tool as a companion product to XMetaL, lest XMetaL users turn to competitors such as FrameMaker and Arbortext for their output needs. The new version of Ventura certainly fits that role, and it may find a place in the broader XML-to-print market. 
We believe FrameMaker and Arbortext deserve some competition in that space, but it remains to be seen if Corel is really up to the challenge. In the retail market, there are still die-hard Ventura fans who will be relieved that Corel is once again taking interest in this product. And Corel may also be able to find a niche for Ventura among new users who want more structure than MS Publisher provides and who don't need the creative capabilities of XPress or InDesign. Realistically, though, this product is to be viewed as an add-on for XMetaL. Of course Corel hopes for other sales, but they have to be seen as gravy..." A sidebar "Ventura Milestones" chronicles the development of Ventura Publisher in 18 steps from its first release in 1985. See also the announcement: "Corel Corporation Introduces Corel Ventura 10. Enterprise Publishing Software Provides XML and Enhanced Graphics Support for Visually-Rich Documents."
[August 15, 2002] "Standard Practice." By Aaron Walsh. In New Architect Magazine Volume 7, Issue 09 (September 2002), pages 26-30. ['Backing the wrong standard can mean costly licensing fees. Learn to compete effectively using standards, while avoiding pitfalls along the way.'] "As the host of available standards continues to grow and change rapidly, it becomes increasingly difficult to choose among them. Which standards should we adopt today? Which should we let bake a little longer in hopes of using them tomorrow? And which should we ignore altogether? When building products and services for the Internet or Web, we're forced early in the process to make critical, long-term bets regarding which technologies we'll use. We're asked to choose among various solutions, both proprietary and standardized. And as any software architect can attest, the technology choices you make today can make or break you tomorrow. In addition to the usual technology-specific issues, we must also consider the short- and long-term legal and business impacts of our choices. License fees related to patents and other intellectual property rights (IPR) often accompany both proprietary and standard technologies. Sometimes these fees are obvious; other times they present hidden dangers. These issues are only the beginning when it comes to adopting any new technology solution. Although it's important to perform advanced due diligence and take an eyes-wide-open approach when assessing various options, even these steps don't guarantee a smooth ride -- as MPEG-4 product vendors can attest... Irrespective of any particular standards organization's stance on the issue of IPR in general, the debate often boils down to the fundamental issue of whether patents are good or bad for standards. The question of whether patents protect and advance standards or merely encumber and stall them isn't likely to be resolved soon, if ever. 
On one hand, patents and their associated license fees can be viewed as effective barriers to entry that can extend a significant competitive advantage to those willing to pay for the privilege of using them. This complements the notion that the very best technologies are usually patented and don't come for free. On the other hand, some view patents as a barrier to widespread technology adoption because they encourage for-fee licenses. Many vendors (and especially independent developers) are unwilling -- or unable -- to accept such licenses, no matter how 'reasonable' the terms may be. In some cases, patented technologies can exert a measure of control over licensees above and beyond mere license fees..." See general references in "Patents and Open Standards."
[August 15, 2002] "Supporting Limits on Copyright Exclusivity in a Rights Expression Language Standard. A Requirements Submission to the OASIS Rights Language TC." By Deirdre K. Mulligan and Aaron Burstein, with John Erickson (Principal Scientist, Digital Media Systems Lab, Hewlett-Packard Laboratories). August 13, 2002. 17 pages. Comments submitted by the Samuelson Law, Technology & Public Policy Clinic on behalf of the Clinic and the Electronic Privacy Information Center (EPIC). "Copyright law grants certain rights to purchasers and other users of copyrighted works. It is neither a legal nor a practical requirement for users to declare (or claim) these rights explicitly in order to enjoy them. While the public's legal rights cannot be altered by Digital Rights Management (DRM) systems per se, we can imagine scenarios in which DRM systems may require users to make these kinds of declarations, in order to work around inherent technical limitations. It is therefore essential that a rights expression language (REL) provide the vocabulary necessary for individuals to express, in a straightforward way, the rights that copyright law grants them to use materials. The user's claim of right would provide the essential information for a usage-rights issuing agency to give the user the technical capability to use the work in a particular way... In many instances it is important that both parties in the relationship be able to assert their rights and/or desired terms. True negotiation between parties requires that, at a minimum, the REL provide the vocabulary and syntax to support bi-directional exchanges. Otherwise, the rights transaction reduces to the mere request for and acceptance of an offer of permissions asserted by the rights holder. This document therefore suggests certain accommodations that DRM architectures, and especially their rights expression language components, must make to adequately express certain core principles of copyright law. 
Rights holders must have the means to express that a work is available on terms that reflect existing copyright law, as opposed to the limitations of a simple contract. The REL must also enable rights holders to express the more generous terms -- i.e., copyleft, with attribution -- commonly attached to digital resources today. At a minimum, recipients of works must have the ability to assert their rights as recognized under copyright law, and have these assertions reflected in their ability to use the work. Extending an REL to support a broader range of statements that reflect current law is, however, insufficient. The rights messaging protocol (RMP) layer must also be extended to accommodate both the downstream and upstream assertion of rights. We recognize that the RMP layer is not currently within the scope of this discussion, but we believe that the assumption of a one-way expression of rights has in part led to the current deficiencies in the REL..." Available also in original Word/.DOC format. See also: (1) the collection of Requirements contributions to the OASIS RLTC, Requirements Subcommittee; (2) "Patents and Open Standards."
[August 14, 2002] "SALT Forum Submits Multimodal Spec to W3C." By Ephraim Schwartz. In InfoWorld (August 13, 2002). "The SALT (Speech Application Language Tags) Forum on Tuesday officially submitted Version 1.0 of its specification for consideration by two committees of the W3C (World Wide Web Consortium). The specification was submitted to the Multimodal Working Group and the Voice Browser Working Group of the W3C. The SALT Forum includes many high-tech industry companies, including founding members Cisco, Comverse, Intel, Microsoft, Philips, and SpeechWorks. There has been some controversy surrounding multimodal specifications, there being proponents of the SALT Forum's technology as well as proponents of another multimodal technology submitted to the W3C by the Voice XML Forum. Each group claims to have the better development environment for creating a user interface on mobile devices that would combine voice, touch-screen, and graphical systems to access data. The Voice XML Forum, which includes founding members AT&T, IBM, Lucent, and Motorola, was organized in 1999. Earlier this year its specification, which did not include multimodal development, was approved as a standard markup language for creating voice responses to make menu selections in lieu of making choices by depressing a series of numbers on a keypad. The submission of the SALT Forum specification to the same standards body that is currently reviewing the Voice XML proposal should help to break the deadlock between the two competing standards, according to one industry analyst... Some of the controversy might have been avoided if, when the SALT Forum originally submitted the specification to the W3C, it had been able to announce the fact. 
However, W3C guidelines state that a submitting organization cannot identify the W3C until it officially acknowledges that it is considering a submission, according to Rob Kassel, a member of SALT and product manager for Emerging Technology at SpeechWorks in Boston..." See the news item "SALT Forum Contributes Speech Application Language Tags Specification to W3C."
[August 14, 2002] "SALT Forum Submits Spec to W3C." By Dennis Callaghan. In eWEEK (August 13, 2002). "The SALT Forum announced Tuesday that it has sent its namesake Speech Application Language Tags (SALT) specification to the World Wide Web Consortium for review. SALT is a specification used for adding voice tags to existing application development markup languages so as to develop so-called multimodal applications, which combine speech recognition, speech synthesis, graphics and text. The SALT Forum has asked the W3C Multimodal Interaction Working Group and Voice Browser Working Group to review the SALT specification as part of their standards development for promoting multimodal interaction and voice-enabling the Web, SALT Forum officials said. 'We respect the standards efforts of the W3C and are pleased to bring the SALT specification to W3C Working Groups for their consideration,' said SALT Forum representative Martin Dragomirecky in a statement. 'By making a comprehensive, royalty-free contribution, we hope to accelerate their efforts targeting a new class of mobile devices that support multiple modes of interaction.' Founding members of the SALT Forum include Cisco Systems Inc., Comverse Inc., Intel Corp., Microsoft Corp., Philips Electronics N.V. and Speechworks International... While the SALT Forum had always said it would submit a spec to a standards body, this is the first time it offered to work with the W3C..." See the news item "SALT Forum Contributes Speech Application Language Tags Specification to W3C."
[August 14, 2002] "Converting Between Java Objects and XML with Quick. Integrating Java objects and XML data." By Brett McLaughlin (Author and Editor, O'Reilly and Associates). From IBM developerWorks, XML Zone. August 2002. ['Quick is an open source data binding framework with an emphasis on runtime transformations. This instructional article shows you how to use this framework to quickly and painlessly turn your Java data into XML documents, without the class generation semantics required by other data binding frameworks. Extensive code samples are included.'] "XML has certainly taken the programming world by storm over the last several years. However, the complexity of XML applications, which started out high, has not diminished much in the recent past. Developers still have to spend weeks, if not months, learning the complicated semantics of XML, as well as APIs to manipulate XML, such as SAX and DOM. However, in just the last six to 12 months a new class of XML API, called data binding, has become increasingly popular as a simpler alternative to these more complex APIs. Data binding allows you to directly map between the Java objects and XML, without having to deal with XML attributes and elements. Additionally, it allows Java developers to work with XML without having to spend hours boning up on XML specifications. Quick is one such data binding API -- a project that's geared toward business use in Java applications... Before diving into the details of using Quick, you'll need to download and install the project... Ideally, you'll see some really intriguing functionality here. First, data binding in general can greatly simplify programming tasks, especially when you need to persist data to some type of static storage -- like a file, as shown in this article. Additionally, Quick provides a fast, simple way to achieve this in your own projects..." See also the source code.
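The core idea of data binding -- application code works with plain objects while a binding layer maps fields to XML elements and back -- can be sketched in a few lines. This is a generic illustration in Python, not Quick's actual API, and the class and function names are invented.

```python
# A minimal data-binding sketch (not Quick's API): marshal a plain
# object to XML and unmarshal it back, so application code never
# touches elements or attributes directly.
import xml.etree.ElementTree as ET
from dataclasses import dataclass, fields

@dataclass
class Customer:
    name: str = ""
    email: str = ""

def marshal(obj) -> str:
    """Serialize a dataclass instance to an XML string."""
    root = ET.Element(type(obj).__name__.lower())
    for f in fields(obj):
        ET.SubElement(root, f.name).text = getattr(obj, f.name)
    return ET.tostring(root, encoding="unicode")

def unmarshal(cls, xml_text: str):
    """Rebuild a dataclass instance from its XML form."""
    root = ET.fromstring(xml_text)
    return cls(**{child.tag: child.text for child in root})

doc = marshal(Customer(name="Ada", email="ada@example.org"))
same = unmarshal(Customer, doc)
```

The round trip is lossless for simple string fields, which is the property data binding frameworks generalize to richer object graphs.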
[August 13, 2002] "Travel Giant Sails Into Web Services Age." By John Fontana. In Network World (August 12, 2002). "After 30 years of running closed, proprietary systems as the pre-eminent provider of data to the travel industry, Galileo International said on Monday it has gone live with its first external Web service... Galileo has spent the past two years developing and testing Web services and XML as a way to open up its proprietary systems and provide more flexible access to its wealth of data for a larger number of companies, especially smaller travel agents. With the Web service used by AAA, itinerary data such as time of travel and destination is entered in one spot on the AAA site, and the Web service coordinates as many as six separate queries for data on Galileo's global distribution system (GDS). The GDS runs on a mainframe, and is updated constantly with fare and reservation information from 500 airlines, 227 hotel operators, 32 car rental agencies, 368 tour operators and all the major cruise lines. The itinerary information is aggregated into a single XML document and returned to the AAA Web site. Galileo also plans to offer a booking Web service, which would convert the itinerary information into a booking. The booking Web service would aggregate as many as a dozen transactions into a single Web service that AAA customers can activate with a single click from the Web site. Other Web services will follow for viewing trip information and flight status... Everything AAA needs is now contained in a single Web service, which cuts development time for AAA, since AAA does not have to know the proprietary interfaces needed to access Galileo's GDS. In addition, AAA no longer needs to maintain a special Windows-based server to interact with a data language Galileo developed three years ago called XML Select, a standard way to describe such concepts as a car or hotel. 
XML Select converted Component Object Model (COM) components used in client-side applications into XML documents. Those documents were fed to adapters in Galileo's network that converted the XML into triggers that would touch off transactions on the mainframe..."
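The fan-out/aggregate pattern the article describes -- one request triggering several back-end queries whose results are merged into a single XML document -- can be sketched as follows. The query names and the stub backend are invented for illustration.

```python
# Sketch of the aggregation pattern: one itinerary request fans out to
# several back-end queries (up to six in the system described), and the
# results are combined into a single XML response document.
import xml.etree.ElementTree as ET

def backend_query(kind: str, itinerary: dict) -> ET.Element:
    # Stand-in for a query against the global distribution system.
    el = ET.Element(kind)
    el.set("destination", itinerary["destination"])
    return el

def itinerary_service(itinerary: dict) -> str:
    root = ET.Element("itinerary")
    for kind in ("air", "hotel", "car"):   # illustrative subset
        root.append(backend_query(kind, itinerary))
    return ET.tostring(root, encoding="unicode")

doc = itinerary_service({"destination": "ORD"})
```

The caller sees one document and one interface; the proprietary queries behind it are hidden, which is the development-time saving the article attributes to the service.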
[August 13, 2002] "Nasdaq, PwC, Microsoft Team on XBRL." By Peter Galli. In eWEEK (August 06, 2002). "The Nasdaq Stock Market Inc., PricewaterhouseCoopers (PwC) and Microsoft Corp. on Tuesday will launch a pilot program that allows companies to more easily communicate their financial information over the Internet and which helps investors more easily analyze this data. The pilot program uses Extensible Business Reporting Language (XBRL), a new platform developed for corporate reporting over the Internet and which is based on XML, the universal format for data on the Web. With XBRL, data is tagged to instruct the system how to handle the data in question and enables the user to locate the necessary information without leafing through numerous financial reports, said Mike Willis, a partner at PricewaterhouseCoopers. The pilot program, designed by PwC and stored on Nasdaq hardware, provides access to XBRL data through Microsoft Office. This data will be accessible through the Microsoft Excel interface via a custom solution built by Dell Professional Services. David Jaffe, the lead product manager for Microsoft Office in Redmond, Wash., told eWeek that XBRL and Excel had enabled 'instant analytics,' meaning that analysts could now access accurate information in real time using the tools they were already familiar with... Any interested individual or organization would simply use the add-in, which is accessed in the form of a freely downloadable Excel workbook for Microsoft Office. This would allow them to easily access and analyze the data, he said... The pilot program, whose goal is to showcase XBRL's ability to allow easy comparisons of the financials of companies within a particular industry, will provide investors with remote access to financial data from the financial reports of 21 Nasdaq-listed companies, starting with a company's most recent financials and going back five years. 
The data is formatted in XBRL and publicly available via a Nasdaq-hosted Web Service..." See: "Extensible Business Reporting Language (XBRL)."
[August 13, 2002] Open Digital Rights Language (ODRL). Version: 1.1. Date: 2002-08-08. 70 pages. Edited by Renato Iannella. Version URL: http://odrl.net/1.1/ODRL-11.pdf. Also [to be?] published as a W3C NOTE: http://w3.org/TR/odrl/. XML schemas for ODRL Expression Language and ODRL Data Dictionary in normative appendices A and B. "The Open Digital Rights Language (ODRL) is a proposed language for the Digital Rights Management (DRM) community for the standardisation of expressing rights information over content. The ODRL is intended to provide flexible and interoperable mechanisms to support transparent and innovative use of digital resources in publishing, distributing and consuming of electronic publications, digital images, audio and movies, learning objects, computer software, and other creations in digital form. The ODRL has been submitted to an appropriate standards body for formal adoption and ratification. The ODRL has no license requirements and is available in the spirit of 'open source' software... ODRL complements existing analogue rights management standards by providing digital equivalents, and supports an expandable range of new services that can be afforded by the digital nature of the assets in the Web environment. In the physical environment, ODRL can also be used to enable machine-based processing for rights management. ODRL is a standard language and vocabulary for the expression of terms and conditions over assets. ODRL covers a core set of semantics for these purposes including the rights holders and the expression of permissible usages for asset manifestations. Rights can be specified for a specific asset manifestation (i.e., format) or could be applied to a range of manifestations of the asset. ODRL is focused on the semantics of expressing rights languages and definitions of elements in the data dictionary. ODRL can be used within trusted or untrusted systems for both digital and physical assets. 
However, ODRL does not determine the capabilities or requirements of any trusted services (e.g., for content protection, digital/physical delivery, and payment negotiation) that utilise its language. Clearly, however, ODRL will benefit transactions over digital assets as these can be captured and managed as a single rights transaction. In the physical world, ODRL expressions would need an accompanying system with the distribution of the physical asset. ODRL defines a core set of semantics. Additional semantics can be layered on top of ODRL for third-party value added services with additional data dictionaries. ODRL does not enforce or mandate any policies for DRM, but provides the mechanisms to express such policies. Communities or organisations that establish such policies based on ODRL do so based on their specific business or public access requirements. ODRL depends on the use of unique identification of assets and parties. Agreeing on common identification across sectors is a very difficult problem, which is why identification mechanisms and policies are outside the scope of ODRL. Sector-specific versions of ODRL may address the need to infer information about asset and party identifiers..." References: (1) "Proposed Open Mobile Alliance (OMA) Rights Expression Language Based Upon ODRL"; (2) ODRL website; (3) "Open Digital Rights Language (ODRL)"; (4) "XML and Digital Rights Management (DRM)" [cache]
[August 13, 2002] OMA Rights Expression Language Version 1.0. From Open Mobile Alliance. Proposed Version 28-June-2002. Reference: OMA-Download-DRMREL-v1_0-20020628-p. 29 pages. "Open Mobile Alliance (OMA) Wireless Application Protocol (WAP) is a result of continuous work to define an industry-wide specification for developing applications that operate over wireless communication networks. The scope for the Open Mobile Alliance is to define a set of specifications to be used by service applications. The wireless market is growing very quickly and reaching new customers and services. To enable operators and manufacturers to meet the challenges in advanced services, differentiation, and fast/flexible service creation, WAP defines a set of protocols in transport, session, and application layers... The scope for this specification is to define the rights expression language used to describe the rights governing the usage of DRM content. It addresses requirements such as enabling preview, i.e., test-driving, of content, possibly prior to purchasing, expressing a range of different permissions and constraints, and optimisation of rights objects delivered over constrained bearers. It provides a concise mechanism for expressing rights over DRM content. It is independent of the content being distributed, the mechanism used for distributing the content, and the billing mechanism used to handle the payments... The REL is defined as a mobile profile of ODRL. Rights are the collection of permissions and constraints defining under which circumstances access is granted to DRM content. The structure of the rights expression language enables the following functionality: (1) Metadata such as version and content ID, and (2) The actual rights specification consisting of linking to and providing protection information for the content, and specification of usage rights and constraints. 
Models are used to group rights elements according to their functionality, and thus enable concise definition of elements and their semantics. The following models are used throughout this specification: (1) Foundation model, (2) Agreement model, (3) Context model, (4) Permission model, (5) Constraint model, and (6) Security model. The rights expression language is defined as a mobile profile, i.e., a subset, of ODRL..." See "Proposed Open Mobile Alliance (OMA) Rights Expression Language Based Upon ODRL" [cache]
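A rights expression of the kind ODRL and the OMA REL describe -- a permission over an identified asset, bounded by a constraint -- can be pictured in a short fragment. The namespace URIs and element names below are modeled on ODRL 1.1's expression (o-ex) and data dictionary (o-dd) vocabularies, but this is an illustrative sketch, not a validated instance.

```xml
<!-- Illustrative ODRL-style rights expression: permission to display
     an identified asset, constrained to three uses. Element names and
     namespaces are approximations of the ODRL 1.1 vocabularies. -->
<o-ex:rights xmlns:o-ex="http://odrl.net/1.1/ODRL-EX"
             xmlns:o-dd="http://odrl.net/1.1/ODRL-DD">
  <o-ex:agreement>
    <o-ex:asset>
      <o-ex:context>
        <o-dd:uid>cid:image-4711@example.org</o-dd:uid>
      </o-ex:context>
    </o-ex:asset>
    <o-ex:permission>
      <o-dd:display>
        <o-ex:constraint>
          <o-dd:count>3</o-dd:count>
        </o-ex:constraint>
      </o-dd:display>
    </o-ex:permission>
  </o-ex:agreement>
</o-ex:rights>
```

The same shape -- permission plus constraint over a uniquely identified asset -- underlies the OMA profile's preview ("test-drive") use case described above.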
[August 13, 2002] "Moving into XML Functionality: The Combined Digital Dictionaries of Buddhism and East Asian Literary Terms." By Charles Muller (Toyo Gakuen University, Chiba, Japan) and Michael Beddow (Leeds University, UK). In Journal of Digital Information: Publishing papers on the management, presentation and uses of information in digital environments Volume 3, Issue 2 (August 2002). ['This paper reports on the new developments in the online Digital Dictionary of Buddhism and CJK-English Dictionary, focusing on their implementation in XML. The paper is in two parts: (1) Project Manager's Report, by Charles Muller; (2) Delivering CJK Dictionaries from Pure XML Sources: A Developer's Perspective, by Michael Beddow'] "... The present number of terms included in the DDB (15,000 at the time of writing) is not small, but it represents only a tiny fraction of the terms, names, places, temples, schools, texts, etc., that are included in the entire East Asian Buddhist corpus. Thus, a search for a term conducted by someone whose research interests are significantly different to those of the compilers is likely to draw a blank. A group of scholars of East Asian Buddhism has been developing a comprehensive, composite index drawn from the indexes of dozens of major East Asian Buddhist reference works, which now includes almost 300,000 entries (described in further detail below). The search engine was extended to cover this comprehensive index. In its present state, the DDB may be searched for a term and if not found the search continues on this comprehensive index... Probably the most important thing to stress about the collaboration between Charles Muller and myself [Michael Beddow] on an XML-based delivery platform for the DDB and the CJK-E is that no more than six weeks elapsed between our first contact and the announcement of a fully-functional system (and indeed one that had more functions than either of us had envisaged at the start). 
Perhaps even more noteworthy is that I had the core of the system up and running (in so far as individual entries were being retrieved from the larger files) within a single day of first downloading the data... The layout, ordering and indeed the contents of the delivered HTML can easily be changed by editing a single controlling XSL style sheet, without touching the XML data, so it is easy to act on user comments about the presentation of the material which previously might have required thousands of separate HTML pages be recreated. In other words, the separation of visual design from logical structure that XML allows for is here given full scope. The nature of XML markup has also allowed a significant extension of what the user, specifically of the Digital Dictionary of Buddhism, can be offered. As the very large set of references to Buddhist CJK terms in printed or other digital dictionaries which Muller and his associates have assembled were also marked up in XML, the DDB's facilities could be greatly expanded with little programming effort. If a user looks up a term in the search engine which is not in the DDB, a secondary lookup is performed on the external references data. If the term concerned is found there, the user is offered a listing of the locations in those external sources where the term is defined or explained. Given the very large number of entries in this secondary data collection (c. 300,000 and rising), lookups are assisted by a Berkeley db database (itself automatically built from the core XML) interposed between the client and the XML sources: this is the only instance in the current implementation where information is not located by a direct parse of the core XML files..."
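The two-stage lookup Beddow describes -- search the DDB first, then fall back to the composite index of external reference works -- reduces to a simple dispatch. In this sketch a plain dict stands in for the Berkeley DB index that is built automatically from the XML sources; the sample entries are invented.

```python
# Sketch of the DDB's fallback lookup: a hit in the dictionary proper
# returns the entry; a miss consults the prebuilt index of external
# reference works and returns the locations where the term is treated.
ddb = {"bodhi": "awakening; enlightenment"}
external_index = {"tathata": ["Nakamura p.912", "Soothill p.335"]}

def lookup(term: str):
    if term in ddb:
        return ("entry", ddb[term])
    if term in external_index:
        return ("references", external_index[term])
    return ("miss", None)
```

The interposed index matters only for scale: with roughly 300,000 secondary entries, a keyed database avoids parsing the XML sources on every miss.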
[August 13, 2002] "Many Outputs - Many Inputs: XML for Publishers and E-book Designers." By Terje Hillesund (Stavanger University College, Stavanger, Norway). In Journal of Digital Information: Publishing papers on the management, presentation and uses of information in digital environments Volume 3, Issue 1 (August 2002). "This essay questions the XML doctrine of "one input -- many outputs". In the area of publishing the doctrine says that from one book one can produce many formats and end-products. Supported by insights of linguistics and experiences of writers and editors, I shall claim this assertion to be basically wrong. By examining the main properties of XML I will further, in contrast to the doctrine, argue that XML and related technologies add to the complexity of publishing. New media, new formats and new genres will, powered by XML, lead publishers into a new and challenging state of "many outputs -- many inputs"... Supported by linguistic and semantic theories and practical examples from writing and editing, I argued that the doctrine [one input -- many outputs, especially the way it is used in publishing] is basically wrong. A text almost always belongs to a media-specific genre. Every genre has rules or norms telling the author how to organise subject matter, how to design an argument (or a narrative plot) and how to use words and a vocabulary in shaping the genre's common language style. This makes it a difficult task to take a piece of text and use it inside a text of another genre. Or, for that matter, to take a text from one medium and use it in another medium. It is not easy to use print genres in electronic environments automatically, and especially not electronic genres in print environments. On these grounds I warn against computer scientists and conversion house spokespeople telling publishers that XML will lead them into a relatively uncomplicated era of one book -- many formats. This article shows the opposite to be the case. 
Electronic media give rise to new presentational principles and new genres, and XML is one of the technologies that will add to the diversity of publishing, forcing publishers to develop even more complex production and distribution systems..."
[August 13, 2002] "Tunnel Setup Protocol (TSP): A Control Protocol to Setup IPv6 or IPv4 Tunnels." By Marc Blanchet (Viagenie). IETF Network Working Group, Internet-Draft. Reference: 'draft-vg-ngtrans-tsp-01'. July 1, 2002; expires December 30, 2002. "This document proposes a control protocol to setup tunnels between a client and a tunnel server or broker. It provides a framework for the negotiation of tunnel parameters between the two entities. It is a generic TCP protocol based on simple XML messaging. This framework protocol enables the negotiation of any kind of tunnel, and is extensible to support new parameters or extensions. The first target application is to setup IPv6 over IPv4 tunnels which is one of the transition mechanisms identified by the ngtrans and ipv6 working groups. This IPv6 over IPv4 tunnel setup application of the generic TSP protocol is defined by a profile of the TSP protocol, in a companion document... The Command phase is where the Tunnel Client sends a tunnel request or a tunnel update to the server. In this phase, commands are sent as XML messages. The first line is a 'Content-length' directive that tells the size of the following XML message. This makes it easier for protocol implementations to tell when they have received the whole XML message. When the server sends a response, the first line is the 'Content-length' directive, the second is the return code and the third is the XML message if any. The size of the response given in the 'Content-length' directive is measured from the first character of the return code line to the last character of the XML message..."
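The response framing the draft describes -- a Content-length line, a return-code line, then the XML payload, with the length covering the return code through the end of the XML -- can be sketched directly. The sample return code and XML payload are invented; the framing rule follows the summary above.

```python
# Sketch of TSP response framing: Content-length counts from the first
# character of the return-code line to the last character of the XML.
def frame_response(code: str, xml: str) -> str:
    body = code + "\r\n" + xml
    return "Content-length: %d\r\n%s" % (len(body), body)

def parse_response(raw: str):
    header, _, rest = raw.partition("\r\n")
    length = int(header.split(":")[1])
    body = rest[:length]                 # exactly the declared length
    code, _, xml = body.partition("\r\n")
    return code, xml

msg = frame_response("200 Success", "<tunnel action='accept'/>")
code, xml = parse_response(msg)
```

Length-prefixed framing like this lets an implementation read a complete XML message off a TCP stream without parsing ahead for a terminator.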
[August 13, 2002] "IBM, Microsoft Ride Herd On B2B Web Services." By Eric Knorr. In ZDNet Tech Update (August 13, 2002). "Late last week, IBM and Microsoft published two more seminal Web services specifications--WS-Coordination and WS-Transactions--which will together establish a common business process framework for B2B interaction. For process management app development, IBM's Web Services Flow Language (WSFL) and Microsoft's XML Language (XLang) will also be merged into the Business Process Execution Language for Web Services... VanRoekel [Microsoft] provides a simple example to explain the relationship between WS-Coordination and WS-Transactions: 'If I'm building an engine I'm going to order parts from lots of different vendors. What WS-Coordination does is -- make sure [the requesting message] arrived at the vendor. If the vendor can satisfy the order, maybe it will send a message back to me saying, 'Yes, I've got that part in stock.' WS-Transactions sits above that and says if one of those vendors can't supply a part for the engine--maybe like a piston or something--I'm going to roll back the entire order. And it gives me standard, two-phase-commit transactional support and more complex stuff that might exist in work flow... The union of WSFL and XLang is just as significant as the two new protocols. The greatest value of Web services inside and outside the firewall will be rapid development of ad-hoc, XML-based applications -- and BPEL4WS will provide a more standardized method of doing that, simply by merging two accepted languages into one... It's pretty obvious what's going on here. First and foremost, IBM and Microsoft badly want to get Web services moving, so they're doing what dominant players always do: using their market position to railroad standards and enlist the assistance of important players like BEA along the way (or VeriSign, as happened with WS-Security)..." See details in the news item of 2002-08-12.
[August 13, 2002] "BPEL4WS Analysis." By Jean-Jacques Dubray (Chief Architect, Eigner Precision Lifecycle Management). Posted to 'email@example.com' mailing list with "Subject: BPEL4WS Analysis." Draft August 13, 2002 or later. "... 'BeePEL' does a much better job than BPML at relating the collaboration operations to the private process. Typically this happens by a: receive-flow-reply sequence where the receive and reply point to the same operation (e.g., a request/response operation). In BPML you have to design a 'sub-process' to do just that. The control flow of BeePEL is also better than the one of BPML. However, I remain convinced that the control flow must be 'plug-able'. BeePEL is now ahead of BPML in expressivity and completeness. In particular I have argued in a prior analysis, that BPML is designed really to model 'unit of works' rather than business processes. BeePEL is designed to be able to deal with business process models though it remains fairly 'centric' and would not enable an arbitrary message exchange between two partners to be simply 'observed' by the process engine rather than controlled. Since BPML does not offer and does not want to offer any differentiator compared to BeePEL, it is likely that most process engine vendors will adopt BeePEL rather than BPML. What is happening was inevitable in the context of web services and what is at stake for Microsoft and IBM. A big absentee in this furious battle ... Oracle. Like BPML, BeePEL does not deal with user interactions, just as if there were any worker left in a company that would influence the course of process, or help deal with an exception. I view this as a severe limitation of the web services approaches and as in the case of BPML, it is pretty clear that my product manager would reject the use of BeePEL just as well ... 
The realm of business process modeling is complex, and the many attempts developed in the past ten years have failed to capture all the (numerous) semantics necessary to model real-world scenarios. BeePEL is no exception. It goes barely beyond the level of BPML, i.e., a framework to specify web service compositions. However it is pretty clear that it is better than BPML. Instead, BeePEL, like BPML and its ancestors XLang and WSFL, gets lost in the intricacies of WSDL. Hopefully someone in the web services group will finally see the light soon and discover that if you forget for one moment the API and focus on the message exchange between two roles, one can greatly simplify the definition of collaborations and executable processes. It is pretty clear that BeePEL will benefit from the marketing machines of Microsoft and IBM, probably the two best on earth, and therefore will become the specification of choice..."
[August 09, 2002] "Tech Giants Drive New Web Services." By Wylie Wong. In ZDNet News (August 08, 2002). Microsoft, IBM and BEA Systems plan to announce new specifications Monday that the companies hope will help drive adoption of Web services. The first specification -- called Business Process Execution Language for Web Services -- is a programming language for defining how to combine Web services to accomplish a particular task. The second, WS-Coordination, describes how individual Web services within that task interact. A software programmer, for example, can stitch together Web services into a sequence of operations to accomplish a particular task. The third specification, called WS-Transaction, is used to ensure that transactions all complete successfully or fail as a group. Web services are emerging methods of writing software that allow businesses to interact via the Internet. A travel Web site could use Web services to connect to airlines, hotels and car rental agencies, allowing a traveler to book a flight, a hotel room, and a car at the same time. If all three reservation requests are successful, the traveler can complete the transaction. But if the flight request is not successful, the computing system can undo the hotel room and car rental requests -- and ask the traveler to submit another travel request... The three new specifications are the latest in a series of Web services specifications that Microsoft, IBM and their industry partners have created to advance the Web services effort. In fact, the Business Process Execution Language merges two languages -- Microsoft's XLang and IBM's Web Services Flow Language -- that the two companies originally created separately. In February, the pair created the Web Services Interoperability (WS-I) Organization, an industry group charged with promoting Web services and ensuring that they are compatible. 
In April, Microsoft, IBM and VeriSign released WS-Security, a specification that encrypts information and ensures that the data being passed between companies remain confidential. Before that, IBM and Microsoft created specifications that garnered widespread support from the industry: The Simple Object Access Protocol, a communications technology that glues together different computing systems so businesses can interact and conduct transactions; Universal Description, Discovery and Integration, which lets businesses register in a Web directory to advertise their Web services and find each other easily; and Web Services Description Language, which allows businesses to describe programmatically what a Web service does..." See details in the news 2002-08-12 item "Web Services Specifications for Business Transactions and Process Automation."
[August 09, 2002] "Microsoft, IBM, BEA to Unleash Trio of Web Services Specs." By Carolyn A. April. In InfoWorld (August 08, 2002). Industry heavyweights Microsoft, IBM, and BEA Systems on Monday will unleash a trio of proposed Web services standards that address several unmet needs of the nascent services-oriented application model, according to sources. With these standards, the companies are looking to solidify workflow and business process execution as well as transaction integrity and coordination. Primary among the new proposals is the awkwardly named BPEL4WS (Business Process Execution Language for Web Services), which represents the marriage of two rival standards, WSFL (Web Services Flow Language) from IBM and XLang from Microsoft. An executable language, BPEL4WS is designed to ensure that differing business processes can understand each other in a Web services environment. Many industry observers had expected WSFL to subsume XLang as a standard. The other proposed standards include one for Web services transactions, dubbed WS-Transaction, and one for Web services coordination, called WS-Coordination. The former deals with what experts refer to as non-repudiation and will help to ensure the integrity of Web services transactions, making sure that a transaction happens only once and, if a mistake occurs, that it is compensated for automatically. This becomes particularly important for transactions involving finances, such as purchase orders. Clearly, there is a need to make sure that a purchase order, and a corresponding payment, goes through once and only once. WS-Coordination drills down further into the transaction, providing a standard way of making sure that many simultaneous transactions execute correctly from one system to another, regardless of platform, sources said. The three proposals join an alphabet soup of other Web services standards, including the now mainstream SOAP, XML, UDDI, and WSDL..." 
See details in the news 2002-08-12 item "Web Services Specifications for Business Transactions and Process Automation."
[August 09, 2002] "New Web Services Specs on Horizon." By Darryl K. Taft. In eWEEK (August 9, 2002). "IBM, Microsoft Corp. and BEA Systems Inc. on Friday will announce three new Web services standards to address such areas as transactions, workflow and business process execution. The three companies, two of which -- IBM and Microsoft -- have taken the lead in the Web Services Interoperability organization, will deliver on promises they made to invest in creating a services platform for business interaction. BEA also is a member of the WS-I. Sources said the first of the three standards will be Business Process Execution Language for Web Services. That standard is actually a combination of the Web Services Flow Language from IBM, of Armonk, N.Y., and XLang from Microsoft, of Redmond, Wash., which both did much the same thing. The newly combined offering is a Web services language that brings business processes together under the realm of services and makes sure each service knows what the other is doing. The two companies, along with BEA, also are releasing standards for transactions in a Web services environment -- WS-Transaction -- and for coordination of transactions in a Web services environment: WS-Coordination. The transaction and coordination standards solve the issues of the integrity of Web services transactions and making sure the transaction completes, and how Web services interact with one another..." See details in the news 2002-08-12 item "Web Services Specifications for Business Transactions and Process Automation."
[August 09, 2002] "Recommendations for XML Schema for Qualified Dublin Core." Proposal to the Dublin Core Architecture Working Group. From the Ad-hoc Committee: Timothy Cole (UIUC), Thomas Habing (UIUC), Diane Hillmann (Cornell), Jane Hunter (DSTC), Pete Johnston (UKOLN), Carl Lagoze (Cornell), and Andy Powell (UKOLN). 2002-07-14. "This document is a follow-on to three efforts: (1) Publication of the Dublin Core Qualifiers recommendation document. (2) Publication of the DCMI proposed recommendation Guidelines for Implementing Dublin Core in XML [We propose an amendment to Recommendation 7 of this document to indicate that references to encoding schemes should be represented using an xsi:type attribute of the XML element for that property, in line with the conventions used in the schemas proposed here]; (3) Joint work between the Open Archives Initiative and the DCMI to define an XML schema for unqualified Dublin Core; this work was motivated by the requirements of the base metadata format for the OAI Protocol for Metadata Harvesting, but is useful for other applications that exchange unqualified Dublin Core records. The schema presented in this document conform to the W3C XML Schema (1.0) recommendations. They are suggested rather than prescribed and may, in fact, co-exist with other schema for exchanging Dublin Core metadata. XML schema are interoperability vehicles; the greater number of applications that agree on a single schema the greater the ability to easily share Dublin Core metadata. Therefore, while the committee that formulated this proposal hopes that the proposed schema will be useful to a breadth of applications, we recognize that different functionality, provided by different schema, may be required by some... While the schema presented here are indeed suggested, the functionality they support is congruent with the qualification model in the Dublin Core Qualifiers document. 
Therefore, applications that employ other schema that express additional functionality should recognize that doing so compromises interoperability with applications that use this schema. See also the note of July 26, 2002 in "DCMI release new XML Schema proposal for Dublin Core metadata specification": "The Dublin Core Metadata Initiative (DCMI) have released a proposed new XML Schema for Qualified Dublin Core metadata. According to Carl Lagoze, one of the group producing the proposal, the "continued absence of such solid guidelines on how to deploy qualified Dublin Core has interfered with the interoperability goals of DCMI." An XML Schema binding of unqualified Dublin Core metadata had been released by DCMI previously, but this is the first XML Schema for qualified Dublin Core..." See: "Dublin Core Metadata Initiative (DCMI)."
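The amendment to Recommendation 7 described above can be illustrated with a one-element sketch (the date value is invented; the namespace URIs are the standard DCMI and XML Schema instance namespaces): the encoding scheme is carried by an xsi:type attribute on the property element itself, rather than by a separate qualifier element.

```xml
<!-- Sketch: a Dublin Core date whose value is qualified by the W3CDTF
     encoding scheme, expressed via xsi:type as the proposal recommends. -->
<dc:date xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:dcterms="http://purl.org/dc/terms/"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:type="dcterms:W3CDTF">2002-07-14</dc:date>
```

A validating parser can then check the element content against the datatype the encoding scheme names, which is the interoperability benefit the committee cites.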
[August 09, 2002] "Web Services Introduction." Sample chapter from Web Services Implementation Guide, Volume 1: Getting Started, by Brian E. Travis and Mae Ozkan. Volume published June 2002 by Architag Press (Denver, CO, USA). ISBN: 0-9649602-3-0. This sample includes the book's TOC. "Web Services Implementation Guide was created to fill a void in web services development. While most books discuss solutions to web service implementation from a technology point of view, Web Services Implementation Guide shows a high-level architectural view of the problem and how to solve it. The book then covers the technologies (XML, XSD, SOAP, and WSDL) that are used in web services solutions. From there, the reader will be able to see how each technology fits into a solution for his or her organization. Finally, a useful discussion of web services extensions is covered. This includes reliability, security, attachments, routing, workflow, and others... According to co-author Brian Travis, implementation of web services has been slow because some people see it as a technology problem. 'We see people get frustrated when they try to implement their web services with just tools. That approach solves only part of the problem; it is like thinking you can build an accounting system with just a COBOL compiler. The compiler is required, to be sure, but it is only invoked after needs are analyzed, a system is architected, and the green light is given for implementation. There are great tools for creating web services applications, but those applications will be much more robust if architects and developers understand the larger structural issues.' Co-author Mae Ozkan notes that architects and developers must view their operation in a different way. 'Instead of focusing only on tools, implementers also need to think about their entire operation from a services point-of-view and automate the processes they want to share. 
Only after those processes are automated and working will implementers be able to expose them as services to internal or external systems'..." See the 2002-08-06 announcement "Architag Press Releases Web Services Implementation Guide." [alt URL]
[August 09, 2002] "Tip: Make Choices at Runtime with XSLT Parameters. Use Parameters and Conditionals in Your Style Sheets." By Nicholas Chase (President, Chase and Chase, Inc). From IBM developerWorks, XML Zone. August 2002. ['Extensible Stylesheet Language Transformations provide the ability to perform sophisticated manipulation of data as it is transformed from one form to another. You can increase their capabilities even further through the use of parameters that can be specified at runtime. This tip takes a basic look at using parameters and conditional statements in an XSLT style sheet. This tip uses the Xalan XSL Transformation engine, but any XSLT processor will do. It assumes that you are familiar with XSL transformations.'] "In this tip, I take a single style sheet and repurpose it to provide different results depending on the parameter values entered by the user when the document is actually transformed. The style sheet takes an XML document and transforms it into an HTML document displaying the results of a dog show. The basic style sheet creates a page with information in tables... The advantage of parameters is that you can specify them at runtime, but this capability isn't limited to transformations performed from the command line. With most engines, you can specify transformations when performing them programmatically, as well..." For related resources, see "Extensible Stylesheet Language (XSL/XSLT)."
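The pattern the tip describes can be sketched in a minimal style sheet (the dog/score element names echo the article's dog-show example; the parameter name and values are invented here): a parameter declared with xsl:param carries a default, and an xsl:choose conditional branches on whatever value is supplied at runtime.

```xml
<?xml version="1.0"?>
<!-- Sketch: a runtime parameter driving a conditional in XSLT 1.0 -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Default value; overridable when the transformation is invoked -->
  <xsl:param name="showScores" select="'no'"/>

  <xsl:template match="dog">
    <tr>
      <td><xsl:value-of select="name"/></td>
      <xsl:choose>
        <!-- Branch chosen by the runtime parameter, not by the input -->
        <xsl:when test="$showScores = 'yes'">
          <td><xsl:value-of select="score"/></td>
        </xsl:when>
        <xsl:otherwise>
          <td>withheld</td>
        </xsl:otherwise>
      </xsl:choose>
    </tr>
  </xsl:template>
</xsl:stylesheet>
```

With Xalan-Java's command-line processor the value is supplied as `-PARAM showScores yes`; most other XSLT engines offer an equivalent command-line or programmatic mechanism.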
[August 09, 2002] "XML: The Talk of the Tech Industry." By Eric Lundquist. In eWEEK (August 5, 2002). "When Sun's Jon Bosak led the team that developed the first XML spec in 1996, I doubt he envisioned a day when competitors Oracle and Microsoft would become two of the standard's champions. In Redmond, nearly every phrase heard in the hallways includes XML used as a noun, verb, adjective and overall magic elixir for what ails the technology industry... And as this week's lead eWEEK Labs review states, the latest version of Oracle's database embraces XML wholeheartedly. Oracle9i is a huge, sprawling product difficult for any but the most advanced reviewers to take on. As Labs West Coast Technical Director Timothy Dyck notes, in addition to being a relational and an XML database, 9i Release 2 is also an application server, message server, OLAP server and data mining server. That is about as close as you can come to an IT infrastructure all-in-one product. The Oracle developers have done a good job with the difficult task of incorporating the XML capabilities while also improving the overall performance. An accompanying article by Senior Writer Anne Chen examines whether customers are ready to upgrade to 9i..."
[August 08, 2002] "UML For W3C XML Schema Design." By Will Provost. From XML.com. August 07, 2002. ['Will Provost offers a UML profile for W3C XML Schema'] "Even with the clear advantages it offers over the fast-receding DTD grammar, W3C XML Schema (WXS) cannot be praised for its concision. Indeed, in discussions of XML vocabulary design, the DTD notation is often thrown up on a whiteboard solely for its ability to quickly and completely communicate an idea; the corresponding WXS notation would be laughably awkward, even when WXS will be the implementation language. Thus, UML, a graphical design notation, is all the more attractive for WXS design. UML is meant for greater things than simple description of data structures. Still the UML metamodel can support Schema design quite well, for wire-serializable types, persistence schema, and many other XML applications. UML and XML are likely to come in frequent professional contact; it would be nice if they could get along. The highest possible degree of integration of code-design and XML-design processes should be sought. Application of UML to just about any type model requires an extension profile. There are many possible profiles and mappings between UML and XML, not all of which address the same goals. The XML Metadata Interchange and XMI Production for W3C XML Schema specifications, from the OMG, offer a standard mapping from UML/MOF to WXS for the purpose of exchanging models between UML tools. The model in question may not even be intended for XML production. WXS simply serves as a reliable XML expression of metadata for consumption in some other tool or locale. My purpose here is to discuss issues in mapping between these two metamodels and to advance a UML profile that will support complete expression of a WXS information set... 
The major distinction is that XMI puts UML first, so to speak, in some cases settling for a mapping that fails to capture some useful WXS construct, so long as the UML model is well expressed. My aim is to put WXS first and to develop a UML profile for use specifically in WXS design: (1) The profile should capture every detail of an XML vocabulary that a WXS could express. (2) It should support two-way generation of WXS documents. I suggest a few stereotypes and tags, many of which dovetail with the XMI-Schema mapping. I discuss specific notation issues as the story unfolds, and highlight the necessary stereotypes and tags... David Carlson [Modeling XML Applications with UML: Practical e-Business Applications] has also done some excellent work in this area, and has proposed an extension profile for this purpose. I disagree with him on at least one major point of modeling and one minor point of notation, but much of what is developed here lines up well with Carlson's profile..." See references in: (1) "Conceptual Modeling and Markup Languages"; (2) "XML and 'The Semantic Web'"; (3) "XML Schemas"; (4) "XML Metadata Interchange (XMI)."
[August 08, 2002] "Finding the First, Last, Biggest, Smallest." By Bob DuCharme. From XML.com. August 07, 2002. ['DuCharme explains how to do without a query language when developing XSLT.'] "Sometimes you want to know which element or record is first or last in a given set, or you want to know which element or record has a value that is the greatest or smallest among the corresponding values in that set -- for example, which employee element has the lowest value for a hireDate subelement or attribute. These operations are typically performed by a query language. You don't need a separate query language, however, to do these when you're developing with XSLT. If you can describe a set of nodes from a document with a single-step XPath expression, then you can get the first of those nodes by adding a predicate of [1] to that expression, and you can find out the last one by adding a predicate of [last()]. To get an element or attribute value with the greatest or smallest value in it, you can sort the nodes using any of the sorting options that we saw in last month's column and then use the same predicates to pick out the one at either end..." See also the source code and sample files.
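The technique DuCharme describes can be sketched in a short template (the employee/hireDate names follow the column's example; the staff wrapper and name element are invented here): positional predicates pick the ends of a node set in document order, and xsl:sort plus position() picks the extreme value.

```xml
<!-- Sketch: first/last via predicates, earliest hireDate via sorting -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/staff">
    <!-- First and last employee in document order -->
    <first><xsl:value-of select="employee[1]/name"/></first>
    <last><xsl:value-of select="employee[last()]/name"/></last>
    <!-- Sort by hireDate, then take position 1: the earliest hire -->
    <xsl:for-each select="employee">
      <xsl:sort select="hireDate"/>
      <xsl:if test="position() = 1">
        <earliest><xsl:value-of select="name"/></earliest>
      </xsl:if>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>
```

Inside the sorted xsl:for-each, position() refers to sorted order rather than document order, which is why the same "pick position 1" trick finds the smallest value.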
[August 08, 2002] "XHTML 2.0: The Latest Trick." By Kendall Grant Clark. From XML.com. August 07, 2002. ['Kendall Clark takes a first look at the W3C's draft of XHTML 2.0'] "... Since HTML is going to be around for a very long time, it makes sense to rationalize it, continue evolving it, and, in general, to make it more powerful and more amenable to the kinds of things people want to do with it. There are signs, encouraging in such an early draft, that the W3C Working Group responsible for XHTML 2.0 understands and is working to enact this ideal... One thing web designers have been doing forever is building ad hoc widgets, structures, and scripts to display site navigation elements. In response to this universal need, XHTML 2.0 includes a new kind of list, represented by the element <nl>... XHTML 2.0 includes a group of element attributes which are defined for every element, called the 'Common Attribute Collection' (CAC). Since attribute collections are often used to manage large XML or SGML applications, XHTML 2.0's CAC isn't noteworthy per se. However, XHTML 2.0 moves the href attribute into the CAC, which means that every element in XHTML 2.0 can be a hyperlink. That's the sort of fundamental change that it's hard to evaluate at this very early point, but its immediate utility is pretty obvious... XHTML 2.0 does not yet deprecate the familiar set of h1 through h6 heading elements, though I think there's a pretty good chance that they will be deprecated eventually. XHTML 2.0 does, however, lay the groundwork for their deprecation by adding <section> and <h> elements... XHTML 2.0 retains <cite> as an element but also adds it as an attribute, which has a URI as its value. I like the use of cite as an attribute holding a URI; it's one of those stress points at which the variegated folds of the Semantic and the Human Webs intersect... 
It's not yet clear to me on which elements cite as an attribute is available in XHTML 2.0, but I would like to see it very widely available, perhaps joining href in the CAC. It simply cannot hurt for there to be more links in a global hypertext system, particularly typed links, and ones which offer both humans and machines other nodes on the Web at which related information may be obtained. One fly in this bit of soup, however, is the further conspicuous absence of XLink, which seems applicable to the kind of concern cite addresses. It's clear that XHTML 2.0, despite its warts and omissions, which are by no means critically problematic at such an early stage, is a gesture in the right direction, that it gets more right than wrong. I suggest that the Working Group take its time with XHTML 2.0; given the slow adoption of XHTML 1.1, version 2.0 is not going to be used en masse anytime soon..."
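The two changes Clark highlights, the navigation list and the hyperlink-everywhere CAC, combine naturally in a fragment like the following sketch (the labels and URLs are invented; the element usage follows the 5-August-2002 draft):

```xml
<!-- Sketch: XHTML 2.0's new navigation list. Note href directly on
     <li>, possible because href now sits in the Common Attribute
     Collection and so is allowed on every element. -->
<nl>
  <label>Site navigation</label>
  <li href="/">Home</li>
  <li href="/articles">Articles</li>
  <li href="/about">About</li>
</nl>
```

Because the list's purpose is declared structurally rather than scripted ad hoc, a browser or assistive tool can render it as a menu, a sidebar, or a pull-down as it sees fit.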
[August 08, 2002] "XHTML 2.0." W3C Working Draft 5-August-2002. Edited by Shane McCarron (Applied Testing and Technology), Jonny Axelsson (Opera Software), Beth Epperson (Netscape/AOL), Ann Navarro (WebGeek, Inc.), and Steven Pemberton (CWI; HTML Working Group Chair). Version URL: http://www.w3.org/TR/2002/WD-xhtml2-20020805. Latest version URL: http://www.w3.org/TR/xhtml2. This initial public Working Draft "specifies the XHTML 2.0 Markup Language and a variety of XHTML-conforming modules that support that language... XHTML 2 is a markup language intended for rich, portable web-based applications. It is a member of the XHTML Family of markup languages. It is an XHTML Host Language as defined in XHTML Modularization. As such, it is made up of a set of XHTML Modules that together describe the elements and attributes of the language, and their content model. XHTML 2 updates many of the modules defined in XHTML Modularization 1.0, and includes the updated versions of all those modules and their semantics. XHTML 2 also uses modules from Ruby, XML Events, and XForms. The modules defined in this specification are largely extensions of the modules defined in XHTML Modularization 1.0. This specification also defines the semantics of the modules it includes. So, that means that unlike earlier versions of XHTML that relied upon the semantics defined in HTML 4, all of the semantics for XHTML 2 are defined either in this specification or in the specifications that it normatively references. Even though the XHTML 2 modules are defined in this specification, they are available for use in other XHTML family markup languages. Over time, it is possible that the modules defined in this specification will migrate into the XHTML Modularization specification... either the DTD or the Schema can be used to validate XHTML 2.0 documents..." 
The W3C's HTML Home Page warns: "Note that while the ancestry of XHTML 2 comes from HTML 4, XHTML 1.0, and XHTML 1.1, it is not intended to be backward compatible with its earlier versions. Also, this first draft does not include the implementations of XHTML 2.0 in either DTD or XML Schema form yet. Those will be included in subsequent versions, once the contents of this language stabilizes." See: "XHTML and 'XML-Based' HTML Modules."
[August 07, 2002] "Getting Started With XML Security." By Frederick Hirsch. July 31, 2002. With 32 references. From a collection of referenced papers. "Meeting security requirements for privacy, confidentiality and integrity is essential in order to move business online. With the growing acceptance of XML technologies for documents and protocols, it is logical that security should be integrated with XML solutions. The XML Security standards define XML vocabularies and processing rules in order to meet security requirements. These standards use legacy cryptographic and security technologies, as well as emerging XML technologies, to provide a flexible, extensible and practical solution toward meeting security requirements. The XML Security standards include XML Digital Signature for integrity and signing solutions, XML Encryption for confidentiality, XML Key Management (XKMS) for public key registration, location and validation, Security Assertion Markup Language (SAML) for conveying authentication, authorization and attribute assertions, XML Access Control Markup Language (XACML) for defining access control rules, and Platform for Privacy Preferences (P3P) for defining privacy policies and preferences. Major use cases include securing Web Services (WS-Security) and Digital Rights Management (eXtensible Rights Markup Language 2.0 - XrML)... [Conclusion:] The XML Security standards define XML languages and processing rules for meeting common security requirements. For the most part, these standards incorporate the use of the other XML Security standards, especially the core XML Digital Signature and XML Encryption standards. Another example is the sharing of policy statements by SAML and XACML. This set of interlocking standards has emerged quickly, and, since it is based on a foundation of accepted practices and technologies, should mature quickly. This article has presented a brief introduction to the set of standards and how they work together. 
XML Security standards will be essential to moving business online as XML technologies are adopted for Web Services, Digital Rights Management and other emerging applications. Understanding of how XML may meet authentication, authorization, confidentiality, integrity, signature and privacy requirements will be essential..." See: (1) "Security, Privacy, and Personalization" and (2) "XML and Digital Rights Management (DRM)." [cache 2002-08-07]
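As a point of reference for the core standard Hirsch describes, an XML Digital Signature is structured roughly as follows (a bare-bones sketch: the digest and signature values are elided, and key information is omitted; the algorithm URIs are identifiers defined by the XML-Signature Recommendation):

```xml
<!-- Skeleton of an XML Signature: SignedInfo names what was signed
     and how; SignatureValue signs SignedInfo itself. -->
<Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
  <SignedInfo>
    <CanonicalizationMethod
        Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/>
    <SignatureMethod
        Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
    <Reference URI="">
      <DigestMethod
          Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
      <DigestValue>...</DigestValue>
    </Reference>
  </SignedInfo>
  <SignatureValue>...</SignatureValue>
</Signature>
```

The two-level design (digest the data, then sign the digest manifest) is what lets one signature cover multiple references and lets XML Encryption and SAML reuse the same structure.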
[August 06, 2002] "Seeking a Common E-Commerce Tongue." By Alorie Gilbert. In CNET News.com (August 05, 2002). "E-business standards group RosettaNet has merged with the Uniform Code Council, which administers product code standards used in the retail and grocery industries, the non-profit organizations said Monday. RosettaNet, a consortium of high-tech companies developing Web standards for exchanging data over the Internet, will become a subsidiary of the Uniform Code Council (UCC) as a result of the merger. Founded in 1998, RosettaNet has more than 450 members, including Intel, Cisco Systems, Hewlett-Packard, Oracle, Microsoft and IBM. The UCC administers the Uniform Product Code, the basis for bar codes used by retailers to identify products and electronically track inventory. The organization, established in 1972, caters to 260,000 members that include Wal-Mart, Procter & Gamble, General Mills and Kraft Foods. Although their constituents may be different, the groups said the merger should benefit their respective members. For instance, giant retailer and UCC member Wal-Mart sells many technology products from RosettaNet members Microsoft and HP. By merging, RosettaNet and UCC hope to build a common Web language for exchanging orders, receipts and other business documents among technology manufacturers and retailers. RosettaNet members are also interested in new tracking technology being developed at the Massachusetts Institute of Technology with the support of UCC, Procter & Gamble and Wal-Mart. The new system places a microchip in a product's packaging, acting as a radio transmitter to shuttle information back and forth. The chip can keep track of a product's location and price, as well as how much of the same product is in stock. With such a system, companies could collect instantaneous inventory information, as well as potentially detect counterfeits or monitor theft..." See details in the 2002-08-05 news item "RosettaNet and Uniform Code Council Inc. 
(UCC) Announce Merger."
[August 06, 2002] "Stealth Patents Put Innovation at Risk." By Jim Rapoza. In eWEEK (August 05, 2002). "At the recent Open Source Group Conference, Tim Berners-Lee, the creator of the World Wide Web, spoke out on the need to keep Web standards and Web services patent-free. He knows what he's talking about -- not only do patents threaten the standards put forth by his World Wide Web Consortium, they have even threatened his very invention of the Web, one that has been compared to Gutenberg's creation of the printing press. Just a couple of years ago, British Telecom stunned the world by claiming to have a patent on hyperlinking. Despite the fact that everyone knew that there was prior art discussing hyperlinking before the date of the patent, the sheer threat of it caused a lot of consternation and worry. And just recently, Forgent Networks claimed to have a patent on the technology for the JPEG format, one of the most commonly used image formats on the Web today... I understand that companies losing money big time in today's economy are increasingly looking for pieces of paper in their closets that they can use to increase the value of their stock and to potentially blackmail other tech companies. But there has to be a limit here..." See 'JPEG (Forgent Networks)' in "Patents and Open Standards."
[August 06, 2002] [Book Announcement] XPath, XLink, XPointer, and XML: A Practical Guide to Web Hyperlinking and Transclusion. By Erik Wilde (Computer Engineering and Networks Laboratory [TIK], Department of Information Technology and Electrical Engineering, Swiss Federal Institute of Technology, Zürich) and David Lowe (Faculty of Engineering at the University of Technology, Sydney, Australia). Boston, MA: Addison-Wesley Professional, [July 23] 2002. ISBN: 0-201-70344-0. 304 pages. "This practical reference book documents critical [Web] standards, shifting theory into practice for today's developers who are creating tomorrow's useful, efficient, and information-rich applications and Web sites. Blending advanced reference material with practical guidelines, this authoritative guide presents a historical overview, current developments, and future perspectives in three detailed sections. Part I provides a conceptual framework highlighting current and emerging linking technologies, hypermedia concepts, and the rationale behind the "open" Web of tomorrow. Part II covers the specifics behind the emerging core standards, and then Part III examines how these technologies can be applied and how the concepts can be put to efficient use within the world of Web site management and Web publishing. The book... examines how today's enabling technologies are likely to change the Web of tomorrow. Topics covered in-depth include: (1) Hypermedia concepts and alternatives to the Web; (2) XML Namespaces, XML Base, XInclude, XML Information Set, XHTML, and XSLT; (3) XPath, XLink, and XPointer concepts, strengths, and limitations; (4) Emerging tools, applications, and environments; (5) Migration strategies, from conventional models to more sophisticated linking techniques; (6) Future perspectives on the XPath, XLink, and XPointer standards..." Chapter 6 of the book ("XML Pointer Language," pages 139-168) is available online. See the website description. 
The website transcluding.com is maintained by the authors in support of the book and updated references: "on this Web site you can find information about the subjects discussed in the book, and you can find many pointers to other useful information on the Web..."
[August 06, 2002] "Compressed Accessibility Map: Efficient Access Control for XML." By Ting Yu, Divesh Srivastava, Laks V.S. Lakshmanan, and H. V. Jagadish. In Proceedings of the International Conference on Very Large Databases (VLDB). August 20-23, 2002 Hong Kong, China. 12 pages. "XML is widely regarded as a promising means for data representation, integration, and exchange. As companies transact business over the Internet, the sensitive nature of the information mandates that access must be provided selectively, using sophisticated access control specifications. Using the specification directly to determine if a user has access to a specific XML data item can hence be extremely inefficient. The alternative of fully materializing, for each data item, the users authorized to access it can be space-inefficient. In this paper, we propose a space- and time-efficient solution to the access control problem for XML data. Our solution is based on a novel notion of a compressed accessibility map (CAM), which compactly identifies the XML data items to which a user has access, by exploiting structural locality of accessibility in tree-structured data. We present a CAM lookup algorithm for determining if a user has access to a data item; it takes time proportional to the product of the depth of the item in the XML data and the logarithm of the CAM size. We develop a linear-time algorithm for building an optimal size CAM. Finally, we experimentally demonstrate the effectiveness of the CAM for multiple users on both real and synthetic data sets." See related papers in the publication list from Divesh Srivastava. [cache]
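The structural-locality idea behind the CAM lends itself to a small illustration. The sketch below is not the paper's algorithm or data structure — the tuple-encoded paths, the dict-based map, and the default-deny rule are all assumptions for illustration — but it shows the general shape: answer a lookup by finding the nearest enclosing map entry along the item's root-to-node path, so only a few "boundary" nodes need explicit entries.

```python
def cam_lookup(node_path, cam):
    """node_path: node ids from root to the item; cam: path prefix -> bool."""
    for i in range(len(node_path), 0, -1):  # nearest enclosing entry wins
        prefix = node_path[:i]
        if prefix in cam:
            return cam[prefix]
    return False  # default-deny when no entry covers the item

# Access to the whole "orders" subtree except the "o2" order (illustrative)
cam = {("db", "orders"): True, ("db", "orders", "o2"): False}
print(cam_lookup(("db", "orders", "o1", "price"), cam))  # True
print(cam_lookup(("db", "orders", "o2", "price"), cam))  # False
```

Only two map entries cover arbitrarily many descendants, which is the space saving the paper's compression exploits; the lookup cost grows with the item's depth, matching the depth-proportional bound quoted above.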
[August 06, 2002] "Optimizing the Secure Evaluation Of Twig Queries." By SungRan Cho, Sihem Amer-Yahia, Laks V.S. Lakshmanan, and Divesh Srivastava. In Proceedings of the International Conference on Very Large Databases (VLDB). August 20-23, 2002 Hong Kong, China. 12 pages. "The rapid emergence of XML as a standard for data exchange over the Web has led to considerable interest in the problem of securing XML documents. In this context, query evaluation engines need to ensure that user queries only use and return XML data the user is allowed to access. These added access control checks can considerably increase query evaluation time. In this paper, we consider the problem of optimizing the secure evaluation of XML twig queries. We focus on the simple, but useful, multi-level access control model, where a security level can be either specified at an XML element, or inherited from its parent. For this model, secure query evaluation is possible by rewriting the query to use a recursive function that computes an element's security level. Based on security information in the DTD, we devise efficient algorithms that optimally determine when the recursive check can be eliminated, and when it can be simplified to just a local check on the element's attributes, without violating the access control policy. Finally, we experimentally evaluate the performance benefits of our techniques using a variety of XML data and queries." See also: "Holistic Twig Joins: Optimal XML Pattern Matching," By Nicolas Bruno, Nick Koudas and Divesh Srivastava. ['XML employs a tree-structured data model, and, naturally, XML queries specify patterns of selection predicates on multiple elements related by a tree structure. Finding all occurrences of such a twig pattern in an XML database is a core operation for XML query processing... In this paper, we propose a novel holistic twig join algorithm, TWIGOPTBU, for matching an XML query twig pattern. 
Our technique uses a chain of linked stacks to compactly represent partial results to root-to-leaf query paths, which are then composed to obtain matches for the twig pattern..." [cache]
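The multi-level model described above — a security level either specified at an element or inherited from its parent — can be sketched as the kind of recursive check the paper's query rewriting introduces. The element names, the `level` attribute, and the parent map below are illustrative assumptions, not the paper's notation:

```python
import xml.etree.ElementTree as ET

doc = """<report level="2">
  <summary/>
  <detail level="5"><figure/></detail>
</report>"""
root = ET.fromstring(doc)
# ElementTree has no parent pointers, so build a child -> parent map
parents = {child: parent for parent in root.iter() for child in parent}

def security_level(elem):
    lvl = elem.get("level")
    if lvl is not None:
        return int(lvl)          # level specified directly on the element
    return security_level(parents[elem])  # otherwise inherit from parent

summary = root.find("summary")
figure = root.find(".//figure")
print(security_level(summary))  # 2, inherited from <report>
print(security_level(figure))   # 5, inherited from <detail>
```

The optimizations the paper describes amount to proving, from DTD security information, when this recursive walk can be dropped entirely or replaced by a single local attribute check.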
[August 06, 2002] "Managing Change." By Adam Bosworth (Vice President, Engineering, BEA Systems Inc). In XML & Web Services Magazine Volume 3, Number 5 (August/September 2002). "How can running instances of applications handle changes in business logic? That's the question I posed a few weeks back in an e-mail to a few key internal architects at BEA discussing some of the problems I think we still need to solve. Then I left on a four-day, five-country trip that left me out of the loop on e-mail. The question was meant to address a challenge faced by our customers with really long-running workflows and "conversations." In such cases, the business logic may change while the instances of the prior version of the application are still far from complete. Previously, I had thought this would not be an issue because people would not want to change the business logic of running instances, but simply deploy new applications with the new logic. However, numerous discussions with customers proved that the real world is a weird and wonderful place; people really do want to change business logic on the fly... [Customers?] If it is metadata, they are storing it in XML. If it is state that is essentially transient, they are increasingly managing it in XML. They are doing this because it is easy to write tools to analyze, migrate, and reshape XML to handle change. Customers have learned the hard way that this isn't true of either Java serialization or relational databases. With databases in particular, one of our customers' biggest problems, considering the highly dynamic world they live in, is the inflexibility of data in continuously running systems. Database administrators spend untold fortunes coping with this. Even after working with XML for six years, I'm still pleasantly surprised at the prevalent use of XML for metadata. 
I believe that we are at a point where the two biggest revolutions in computer science of the last 20 years, object-oriented computing and relational databases, have failed us. Because our systems must be available 24x7 for years on end, the methods we have for accommodating change just don't work. Customers running complex operations such as fabrication systems can never shut them down, but they constantly want to fine-tune the operations. In so doing they need to change the shape of the information they need, but cannot easily do so... So who needs an XML database? Anyone dealing with change..." See: "XML and Databases."
[August 05, 2002] "Core Range Algebra: Toward a Formal Model of Markup." By Gavin Nicol (Red Bridge Interactive, Inc.). Presented at the Extreme Markup Languages Conference 2002, Montréal. "There have been a number of formal models proposed for XML. Almost all of these models focus on treating XML as semistructured data, typically as edge-labeled graphs, or as a tree of nodes. While this approach is fine for more traditional database applications, or situations where structure is of paramount importance, it fails to deal with the issues found in the use of XML for text. In particular, unless special functions are introduced, queries that involve text spanning node boundaries are not closed over a set of nodes, and likewise, the returned sets of nodes are not necessarily well-formed XML documents. This paper presents an algebra for data that, when applied to XML, is not only closed over the entire set of operations permissible in more traditional XML query languages, but over the operations that involve text and XML fragments that are not in themselves well-formed... Core Range Algebra, as defined in the paper, is a simple algebra that operates over sequences, ranges, and sequences of ranges. The intent of Core Range Algebra is to provide a basis for manipulating sequences of data, especially text, as a sequence of items, and to provide means for layering higher-level interpretations over that substrate such that operations at the higher level can be defined in terms of the underlying model... What is woefully missing in the existing specification is a means for constructing sequences of ranges from an underlying sequence. Given the Kleene closure, we have a formal basis for creating ranges based on regular expressions. This will not only provide a powerful means for creating ranges over unstructured texts, but also gives a means for inferring structurally recursive data structures, or for inferring higher-level structures over previously inferred structures. 
One obvious further extension is to expand the Core Range Algebra to fully cover XML. An approach much like REX or Regular Fragmentation can be used to build the structures of XML, from the underlying sequences, and with additional functions for range construction such that elements, attributes, etc., can be constructed, a full query language can be defined over the Core Range Algebra defined above... For the notion of parsing XML using regular expressions, regular fragmentations from Simon St. Laurent and REX ['REX: XML Shallow Parsing Using Regular Expressions'] from Robert Cameron are especially applicable. Core Range Algebra provides a formal basis for both approaches..."
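The key move — ranges built over an underlying character sequence rather than over tree nodes — can be illustrated with ordinary regular expressions. This sketch is not the paper's formalism; it only shows why character-offset ranges sidestep the node-boundary closure problem: a range indexes the sequence directly, so it can freely cross markup boundaries.

```python
import re

text = "<p>Call me <em>Ishmael</em>.</p>"

# Each regex match yields a (start, end) range into the sequence,
# regardless of where element boundaries fall in the text
ranges = [(m.start(), m.end()) for m in re.finditer(r"[A-Z]\w+", text)]
print(ranges)                          # character offsets into the sequence
print([text[s:e] for s, e in ranges])  # → ['Call', 'Ishmael']
```

A set of such ranges is trivially closed under further range construction, which is the closure property the algebra seeks for text operations.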
[August 5, 2002] "Oracle Goes XML." By Timothy Dyck. In eWEEK (August 02, 2002). "Oracle Corp. is the first among the big relational database vendors to make major changes to its database in response to XML, shaking up the generally overpriced and underperforming native XML database market something fierce but having a lesser effect on current Oracle database sites. Oracle9i Database Release 2 continues to provide the largest range of features available in a database... All the major database players are moving to strengthen support for XML data and XML query languages in their products. In the case of IBM's DB2 and Microsoft's SQL Server databases, XML technologies and SQL will be on the same level as data access techniques. However, Oracle has gotten there first with its XML DB engine. XML DB is a combination of three technologies: a large set of SQL functions that allows XML data to be manipulated as relational data (through a view or special SQL functions) as well as to retrieve relational table data in XML format; a native XML data type called XMLType that can store XML data either in an object-relational storage format that maintains the XML DOM (Document Object Model) or as the original text document; and a special hierarchical XML index type to speed access to hierarchies of XML files stored in Oracle9i's XML file repository. XML DB also supports XML Schema, the latest standard for defining the structure of XML documents, although it doesn't support the upcoming XML query language, XQuery. Instead, XML DB uses a combination of XPath and SQL to manipulate XML. The database includes an Extensible Stylesheet Language Transformation engine, made accessible through the built-in copy of Apache, that can retrieve XML data from XML DB and transform it into HTML or other formats... 
Previous versions of Oracle and other relational databases support the option of storing XML as text data or extracting data from XML and storing it in normal relational tables, but the interim option of storing data in a format that maintains DOM fidelity (including comments, namespaces, the distinction between elements and attributes, and element ordering) is valuable and is the distinguishing feature of a native XML database. The DOM format doesn't require XML documents to be re-parsed when accessed, and this, in combination with XML and SQL index types, should provide good performance." See: "XML and Databases."
[August 05, 2002] "RosettaNet Merges With UCC." By Heather Harreld. In InfoWorld (August 05, 2002). "In an effort to boost business-to-business integration efforts in the high-technology and retail sectors via XML and Web services standards, the Uniform Code Council (UCC) and RosettaNet have agreed to merge their organizations, officials said Monday. The two organizations will jointly develop an architectural environment designed to provide a common set of objects that businesses can exploit to more easily automate supply chain operations. In addition, the move is designed to provide software vendors more incentive to design products to support the standards via a larger membership base of potential customers... This library of objects will benefit businesses as they seek to launch new B2B relationships with partners, said Hollis Bischoff, an analyst with the Meta Group, Stamford, Conn... Sara Lee's XML-based business processes are complementary to RosettaNet standards, said Barry Beracha, chairman of the UCC Board of Governors and CEO of the Sara Lee Bakery Group. The merger will allow Sara Lee to leverage its strengths and expertise of product identification into the high technology sector, as well as other adjacent industries, he added..." See: (1) the news item, "RosettaNet and Uniform Code Council Inc. (UCC) Announce Merger"; (2) "Uniform Code Council (UCC) XML Program."
[August 05, 2002] "Office Gets Its XML Groove." By Mark Jones and Heather Harreld. In InfoWorld (August 02, 2002). "As Microsoft drives XML into the core of its client software stack, Groove Networks is set to unveil a new toolkit designed to extend the collaborative capabilities of Windows applications. The move is part of Microsoft's overall effort to leverage XML to make Office the de facto front end to any enterprise application, and more specifically tightly couple Office with Microsoft offerings such as Great Plains and BizTalk. Designed as an add-on to Microsoft's .Net development environment, Groove Toolkit for Visual Studio .Net allows developers to rapidly build WinForms-based applications hosted in Groove's Workspace environment. Using C# or VB .Net, the toolkit will ease the development of WinForm applications that leverage the collaborative capabilities of SharePoint Team Services and Groove Workspace, said Jack Ozzie, co-founder and vice president of development, platform, and developer services at Groove Networks. 'We place ourselves in the IDE, [as a] a new project type within a WinForms subproject. You're presented with a WinForm, [and] we bring Groove up in a minitransceiver,' Ozzie explained. Expected to arrive in the fall of 2002, the toolkit builds on Groove's previous work to take advantage of the existing integration between Groove 2.0 and Microsoft Office, sources report... According to Raikes, Microsoft is weaving XML into the core of its Office 11 suite, due in mid-2003, to facilitate services such as notification, research, and improved file sharing between applications. Microsoft is also using XML to integrate both its Great Plains CRM offering and its BizTalk integration engine into Office..."
[August 05, 2002] "Using XInclude." By Elliotte Rusty Harold. From XML.com. July 31, 2002. ['Elliotte Rusty Harold explains joining XML documents together with XInclude. Harold is the coauthor of XML in a Nutshell, 2nd edition.'] "It's often convenient to divide long documents into multiple files. The classic example is a book, which is customarily divided into chapters. Each chapter may be further subdivided into sections. Traditionally one has used XML external entity references to support document division... However, external entity references have a number of limitations. Among them: (1) The individual component files cannot be used independently of the master document. They are not themselves complete, well-formed XML documents. For instance, they cannot have XML declarations or document type declarations and often do not have a single root element. (2) If any of the pieces are missing, then the entire document is malformed. There's no option for error recovery. (3) An entity reference cannot point to a plain text file such as an example Java program or HTML document. Only well-formed XML can be included. XInclude is an emerging W3C specification for building large XML documents out of multiple well-formed XML documents, independently of validation. Each piece can be a complete XML document, a fragmentary XML document, or a non-XML text document like a Java program or an e-mail message. XInclude references external documents for inclusion with include elements in the http://www.w3.org/2001/XInclude namespace. Current support for XInclude is limited, though that is slowly changing. In particular, (1) Libxml, the XML C library for Gnome, includes fairly complete support for XInclude. (2) The Apache Cocoon application server can resolve XIncludes in a document before sending it to a client. Processing instructions in the document's prolog control the exact operations performed and the order they're applied in. 
(3) The 4Suite XML library for Python has an option to resolve XIncludes when parsing. (4) GNU JAXP includes a SAX filter that resolves XIncludes, provided no XPointers are used..." See XML Inclusions (XInclude) Version 1.0 (W3C Candidate Recommendation 21-February-2002).
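XInclude resolution can also be tried from Python's standard library, which ships a small processor in xml.etree.ElementInclude. The document names and the in-memory loader below are illustrative; a real run would load the component files from disk:

```python
import xml.etree.ElementTree as ET
from xml.etree import ElementInclude

# In-memory stand-ins for the component documents (names illustrative)
documents = {"chapter1.xml": "<chapter><title>Intro</title></chapter>"}

book = """<book xmlns:xi="http://www.w3.org/2001/XInclude">
  <xi:include href="chapter1.xml"/>
</book>"""
root = ET.fromstring(book)

def loader(href, parse, encoding=None):
    # parse is "xml" for document inclusion, "text" for raw text inclusion
    if parse == "xml":
        return ET.fromstring(documents[href])
    return documents[href]

ElementInclude.include(root, loader=loader)  # replaces xi:include in place
print(ET.tostring(root).decode())
```

After the call, the xi:include element has been replaced by the chapter's root element, so the master document holds the merged tree — the behavior the column describes for the other processors.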
[August 05, 2002] "Not My Type: Sizing Up W3C XML Schema Primitives." By Amelia Lewis. From XML.com. July 31, 2002. ['Continuing our occasional series of opinion pieces from members of the XML community, Amy Lewis takes a hard look at W3C XML Schema datatypes.'] "Since the application of XML to data representation first gained public visibility, there has been a movement to enhance its type system beyond that originally provided by DTD. Several attempts were made (SOX, XML Data and XML Data Reduced, Datatypes for DTDs, and others) before the W3C handed the problem to the XML Schema Working Group. What is the goal of data type definitions for XML? For one thing, it establishes "strong typing" in XML in a fashion that corresponds with strong typing in programming languages. Various commercial interests have been vocal supporters of strong typing in XML because they see typed generic data representation as their best hope for interoperability and increased automation. With typing in schemas extended into the textual content of simple types, and not just the structural content of complex types, businesses can enforce contracts for data exchange. In other words, strong typing enables electronic commerce. To phrase it a little differently, the data types defined in DTDs were considered inadequate to support the requirements of electronic commerce or, more generally, of commercially reliable electronic information exchange. The publication of W3C XML Schema (or WXS), in which one half of the specification was devoted to the definition of a type library (part two), seemed to resolve the problem. Certainly, with forty-four built-in data types, nineteen of them primitive, it seemed at first glance to cover the field. 
The increasing visibility of WXS and the efforts to layer additional specifications on top of it -- XML Query, the PSVI, data types in XPath 2.0, typing in web services -- have begun to raise serious questions about WXS part two, even among proponents of strong types, including the author of this article. There are two fundamental problems with WXS datatyping. The first is its design: it's not a type system -- there is no system -- and not even a type collection. Rather, it's a collection of collections of types with no coherent or consistent set of interrelations. The second problem is a single sentence in the specification: 'Primitive datatypes can only be added by revisions to this specification'. This sentence exists because of the design problem; lacking a concept for what a primitive data type is, the only way to define new types is by appeal to authority. The data type library is wholly inextensible, internally inconsistent, and bloated in, yet incomplete for, most application domains..." 'Related Reading' from O'Reilly includes XML Schema: The W3C's Object-Oriented Descriptions for XML, by Eric van der Vlist. General references: "XML Schemas."
[August 05, 2002] "Of Grouping, Counting, and Context." By John E. Simpson. From XML.com. July 31, 2002. ['John Simpson explains XSLT keys and the count() function.'] "Q: How do I count the number of elements with a given attribute value? A: Great question. The answer requires knowledge of a couple of XSLT techniques: grouping (with XSLT keys) and using the count() function..." Related reading: XPath and XPointer: Locating Content in XML Documents, by John E. Simpson.
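The question the column answers — counting elements with a given attribute value — can also be tried outside XSLT. In XSLT one would typically combine an xsl:key with count(key(...)); the stdlib Python sketch below uses an XPath attribute predicate instead (the document and names are illustrative):

```python
import xml.etree.ElementTree as ET

doc = """<library>
  <book genre="fiction"/>
  <book genre="fiction"/>
  <book genre="reference"/>
</library>"""
root = ET.fromstring(doc)

# The XPath predicate [@genre='fiction'] keeps only matching elements
fiction = root.findall(".//book[@genre='fiction']")
print(len(fiction))  # 2
```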
[August 05, 2002] "IANAL [I Am Not A Lawyer], but HTH [Hope This Helps]." By Eduardo Gutentag (Sun Microsystems). Presented at the Extreme Markup Languages Conference 2002. Wednesday, August 7, 2002. "Issues relating to IPR (Intellectual Property Rights) have burst into public consciousness over the past 12 months or so, and have attracted considerable attention both among developers and among the specialized press. As always happens in these cases, misinformation competes with information with equal fierceness. In order to reduce the amount of current misinformation, this talk will: (1) examine the usual alphabet soup (IPR, RAND, RF, RAND-Z); (2) compare the various available alternatives; comment on the known status of this matter in a variety of SDOs (W3C, OASIS, IETF, GIS, etc.); (3) illuminate some particular cases of extreme interest (Unisys, Rambus, etc.) and try to penetrate the fog that is brought upon this subject, sometimes through ignorance, sometimes through FUD..." See additional background in "Patents and Open Standards."
[August 03, 2002] "XML to Relational Conversion using Theory of Regular Tree Grammars." By Murali Mani and Dongwon Lee. In Proceedings of the VLDB Workshop on Efficiency and Effectiveness of XML Tools and Techniques (EEXTT), Hong Kong, China, August 2002. 12 pages, with 17 references. "In this paper, we study the different steps of translation from XML to relational models, while maintaining semantic constraints. Our work is based on the theory of regular tree grammars, which provides a useful formal framework for understanding various aspects of XML schema languages. We first study two normal form representations for regular tree grammars. The first normal form representation, called NF1, is used in two scenarios: (a) Several document validation algorithms use the NF1 representation as the first step in the validation process for efficiency reasons, and (b) NF1 representation can be used to check whether a given schema satisfies the structural constraints imposed by the schema language. The second normal form representation, called NF2, forms the basis for conversion of a set of type definitions in a schema language L1 that supports union types (e.g., XML-Schema), to a schema language L2 that does not support union types (e.g., SQL), and is used as the first step in our XML to relational conversion algorithm..." General references: "XML Schemas." [cache]
[August 03, 2002] "NeT and CoT: Translating Relational Schemas to XML Schemas Using Semantic Constraints." By Dongwon Lee, Murali Mani, Frank Chiu, and Wesley W. Chu. Paper prepared for VLDB 2002 (28th International Conference on Very Large Data Bases). "The paper introduces two algorithms, called NeT and CoT, to translate relational schemas to XML schemas using various semantic constraints. The XML schema representation we use is a language-independent formalism named XSchema, which is both precise and concise. A given XSchema can be mapped to a schema in any of the existing XML schema language proposals. Our proposed algorithms have the following characteristics: (1) NeT derives a nested structure from a flat relational model by repeatedly applying the nest operator on each table so that the resulting XML schema becomes hierarchical, and (2) CoT considers not only the structure of relational schemas, but also semantic constraints such as inclusion dependencies during the translation - it takes as input a relational schema where multiple tables are interconnected through inclusion dependencies and converts it into a 'good' XSchema. To validate our proposals, we present experimental results using both real schemas from the UCI repository and synthetic schemas from TPC-H." See similarly "NeT and CoT: Inferring XML Schemas from the Relational World", by Dongwon Lee, Murali Mani, Frank Chiu, and Wesley W. Chu; in Proceedings of ICDE 2002, San Jose, California, February 2002. General references: "XML Schemas." [source]
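The nest operator that NeT applies repeatedly can be illustrated on a flat relation: group the tuples on every attribute except the nested one, and collect that attribute's values into a list, which is what makes the resulting schema hierarchical. The dict-based rows and names below are an illustrative sketch, not the paper's formalism:

```python
from collections import defaultdict

def nest(rows, attr):
    """Nest list-of-dict `rows` on attribute `attr` (illustrative)."""
    groups = defaultdict(list)
    for row in rows:
        # Group on every attribute except the one being nested
        key = tuple(sorted((k, v) for k, v in row.items() if k != attr))
        groups[key].append(row[attr])
    return [{**dict(key), attr: vals} for key, vals in groups.items()]

emp = [
    {"dept": "toys", "name": "ann"},
    {"dept": "toys", "name": "bob"},
    {"dept": "books", "name": "cid"},
]
nested = nest(emp, "name")
print(nested)  # one row per dept, with names collected into a list
```

Each application of the operator removes one level of flatness; repeating it on different attributes yields the nested, XML-shaped structure the paper derives.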
[August 02, 2002] "RELAX NG: The Power Is in the Patterns." By Tom Gaven. In XML Journal Volume 3, Issue 7 (July 2002). "Schema languages are languages that allow you to specify the structure of XML instance documents. RELAX NG is an XML schema language that is considered to be simple, yet powerful. This article gives an overview of an important concept of the RELAX NG schema language called patterns. The power of RELAX NG can be found in its patterns. Schema languages also describe the allowed names of elements and attributes that are found in XML instance documents. And they allow you to specify element ordering, occurrence, and allowed content, like simple text, or datatypes, like integers. Some examples of schema languages are W3C XML Schema, RELAX NG, Schematron, and DTD. RELAX NG differs from other schema languages in that it's built around the concept of patterns. To understand the power of RELAX NG, you must first understand the basic RELAX NG patterns and how they can be combined. Let's begin by taking a look at the following XML instance document..." See: "RELAX NG."
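As a flavor of the patterns the article describes, here is a small RELAX NG schema fragment (XML syntax). The element names, the nesting, and the datatype-library reference are illustrative, not taken from the article; it combines an attribute pattern, ordered element patterns, and the zeroOrMore and optional combinators:

```xml
<element name="book" xmlns="http://relaxng.org/ns/structure/1.0"
         datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes">
  <attribute name="isbn"/>
  <element name="title"><text/></element>
  <zeroOrMore>
    <element name="author"><text/></element>
  </zeroOrMore>
  <optional>
    <element name="pages"><data type="integer"/></element>
  </optional>
</element>
```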
[August 02, 2002] "The Next Generation Database - XDB." By Greg Mable (NuSoft Solutions). In XML Journal Volume 3, Issue 6 (June 2002). "... With the advent of Web services, applications are now free to communicate in a common format - that of an XML document - anywhere on the Web. Where the Web was once built on static content linked together via hypertext, XML takes it to the next level. Instead of users surfing the Internet via HTML pages linked with hyperlinks, we can now build Web-based applications that can be linked via XML documents. Imagine a user clicking on a link to a Web site. This in turn fires off an exchange of an XML document to another application. Here's the key: the XML document. This will be the primary means of information exchange and message passing. With the need to process XML documents comes the need to be able to store, retrieve, and report on them. Hence the need for a management system to handle the flood of XML documents that an application will process. This is where an XML database, XDB, comes in... So what is an XML database, or an XDB? In this article I define what it is, when and why you will need to use one, and what impact it will have on the business world. By the time you finish reading, you just may realize the importance of an XDB and will want to grab your surfboard to ride the next big wave. There are no requirements for how an XDB is expected to physically store XML documents. Some XDBs are built on an object database, others might use compressed files with an indexing scheme, and still others might be built on top of a relational database. At this time XDBs can be classified into two basic types (with a third type on the horizon): native and XML enabled. Native XML database: A native XML database (NXDB) is simply one that was designed from the ground up to store XML documents. 
It might make use of a preexisting technology such as object-oriented data storage techniques, but its mission is to store, retrieve, and update XML documents. XML-enabled database: In the second type, an XML-enabled database (XEDB), extensions are added to a preexisting database management system to support XML documents. An XEDB can be built on top of an existing object-oriented or relational database management system. An XEDB provides a mapping layer between the XML documents and its database structures as well as support for XML-based tools to retrieve and update XML documents. Convergence of NXDB and XEDB: The third type of XDB is in its formative stages, and like a wave approaching the beach, it is about to crest. It can be considered a convergence of the two other types: an XDB that is designed to handle XML documents but is built on a preexisting database technology, combining them into a unified data model and a single repository. In this article I'll briefly describe an example for each of these types..." See: "XML and Databases."
[August 01, 2002] "Data Mining Standards Initiatives." By Robert L. Grossman (Laboratory of Advanced Computing and the National Center for Data Mining, University of Illinois at Chicago), Mark F. Hornick (Data Mining Technologies, Oracle Corp), and Gregor Meyer (Business Intelligence Unit, IBM Corp., San Jose, CA). In Communications of the ACM (CACM) Volume 45, Issue 8 (August 2002), pages 59-61. Special Issue: Evolving Data Mining Into Solutions For Insights. ['Lacking standards for statistical and data mining models, applications cannot leverage the benefits of data mining.'] "... The Predictive Model Markup Language (PMML) is an XML standard being developed by the Data Mining Group, a vendor-led consortium established in 1998 to develop data mining standards. PMML represents and describes data mining and statistical models, as well as some of the operations required for cleaning and transforming data prior to modeling. PMML aims to provide enough infrastructure for an application to be able to produce a model (the PMML producer) and another application to consume it (the PMML consumer) simply by reading the PMML XML data file... PMML consists of the following components: (1) Data dictionary. Defines the input attributes to models and specifies each one's type and value range. (2) Mining schema. Precisely one in each model, listing the schema's attributes and their role in the model; these attributes are a subset of the attributes in the data dictionary. The schema contains information specific to a certain model, while the data dictionary contains data definitions that do not vary by model. It also specifies an attribute's usage type, which can be active (an input of the model), predicted (an output of the model), or supplementary (holding descriptive information and ignored by the model). (3) Transformation dictionary. 
Can contain any of the following transformations: normalization (mapping continuous or discrete values to numbers); discretization (mapping continuous values to discrete values); value mapping (mapping discrete values to discrete values); and aggregation (summarizing or collecting groups of values, such as by computing averages). (4) Model statistics. Univariate statistics about the attributes in the model. (5) Models. Model parameters specified by tags. PMML v.2.0 includes regression models, cluster models, trees, neural networks, Bayesian models, association rules, and sequence models... In PMML v.2.0, inputs to PMML models can be DataFields defined in a data dictionary or DerivedFields defined in the transformation dictionary. The consensus among Data Mining Group members is that the transformation dictionary is powerful enough for capturing the process of preparing data for statistical and data mining models... The main reason so many different data representation and data communication standards exist today is that data mining is used in so many different ways and in combination with so many different systems and services, many requiring their own separate, often-incompatible standards. Although some vendor-led efforts have sought to homogenize terminology and concepts among standards, more work is indeed required. Relatively narrow XML standards, such as PMML, serve as common ground for several emerging standards. For example, SQL/MM Part 6: Data Mining, JSR-73 [Data Mining API], OMG's CWM, and Microsoft's Analysis Services all use PMML in their specifications, providing a base level of compatibility among them all. Meanwhile, two major challenges top the data mining standards agenda: agreeing on a common standard for cleaning, transforming, and preparing data for data mining (PMML v.2.0 represents a first step in this direction); and agreeing on a common set of Web services for working with remote and distributed data (an effort only just beginning)..." 
References: (1) Data Mining Group Web Site; (2) "Predictive Model Markup Language (PMML)"; (3) "Markup Languages and Semantics."
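The components enumerated above can be seen in a skeletal PMML-like fragment. The document below is illustrative only — the element names follow the DataDictionary and MiningSchema components described in the article, but it is not a complete or conformant PMML 2.0 model (the namespace and model content are omitted):

```python
import xml.etree.ElementTree as ET

pmml = """<PMML version="2.0">
  <DataDictionary>
    <DataField name="age" optype="continuous"/>
    <DataField name="risk" optype="categorical"/>
  </DataDictionary>
  <RegressionModel functionName="regression">
    <MiningSchema>
      <MiningField name="age" usageType="active"/>
      <MiningField name="risk" usageType="predicted"/>
    </MiningSchema>
  </RegressionModel>
</PMML>"""
root = ET.fromstring(pmml)

# A PMML consumer can discover a model's inputs (active) and
# outputs (predicted) directly from its mining schema
for mf in root.iter("MiningField"):
    print(mf.get("name"), mf.get("usageType"))
```

This producer/consumer asymmetry — one application emits the XML, another reads just enough of it to apply the model — is the interoperability point the article makes.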
[June 2002] "Concepts and Requirements for XML Network Configuration." By Margaret Wasserman (Wind River). IETF Internet-Draft. Reference: 'draft-wasserman-xmlconf-req-00.txt'. June 2002, expires December 2002. "This document defines basic concepts and describes the requirements for a standard network configuration protocol, which could be used to manage the configuration of networks and networking equipment. The document also discusses a phased approach to developing an XML-based configuration protocol that could provide tangible benefits in the short term, while working towards an XML-based configuration protocol that meets the full requirements...
- Phase One (XML over Secure Transport): The first standardization task of a working group focused on XML configuration should be to standardize how XML data can be transmitted over a secure protocol transport. This would include an explanation of how XML should be encapsulated within the secure transport, and the assignment of a new port number for XML transport connections. This phase would basically define the protocol transport for later XML configuration work.
- Phase Two (XML Operations on Configuration Blocks): For the second phase, an XML configuration WG should focus on defining how operations on blocks of configuration information, representing whole or partial system configurations, could be transferred over the secure transport defined in the first phase. This would include a basic RPC-like mechanism for specifying operations on configuration blocks.
- Phase Three (XML Operations on Data Objects): In phase three, it would become necessary to define a data model, data modeling language, and complete data representation for the XML configuration protocol. In this phase, we would define operations to manipulate individual configuration data objects.
- Phase Four (Multi-System Transactions using XML): Phase four would involve the addition of multi-system transaction support to the XML configuration protocol.
After the completion of phase four, it would be possible to use the XML configuration protocol to configure entire networks, not just individual pieces of networking equipment..." See also "Management Protocol Specification." [cache]
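The draft predates any standardized encoding, but phase two's RPC-like operations on configuration blocks might be sketched along these lines. All element names and the interface data here are invented for illustration; they do not come from the draft:

```xml
<!-- Hypothetical phase-two exchange. First message: the manager asks the
     device to edit part of its running configuration, sent over the
     secure transport defined in phase one -->
<config-request operation="edit">
  <config-block target="running">
    <interface name="eth0">
      <ipv4-address>192.0.2.1</ipv4-address>
      <netmask>255.255.255.0</netmask>
    </interface>
  </config-block>
</config-request>

<!-- Second message: the device's reply, carrying a status result -->
<config-response status="ok"/>
```

The appeal of such a design is that the same request/response envelope works whether the block holds a single interface or an entire system configuration, which is what lets phase three later refine the granularity down to individual data objects without changing the transport.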
[August 01, 2002] "OASIS LegalXML Member Section Electronic Court Filing Technical Committee Draft." Electronic Court Filing Version 1.1. Proposed Standard. Document Number: 12072002CF1.1r1. Version Date: 12-July-2002. Produced by the OASIS LegalXML Electronic Court Filing TC. Workgroup Co-Chairs: John Greacen and Mary McQueen. "A Draft Specification which provides the XML DTD required for Court Filing, updated in light of agreements specified in 'Principles of XML Development for Justice and Public Safety,' August 28, 2001, and as detailed in the 'LegalXML Standards Development Project: Horizontal Elements Draft Standard,' November 28, 2001. The document is intended to describe the information required for electronic court filing and the structure of that information. No information regarding the content of any pleading or other legal devices (e.g., contracts, orders, judgments) is included, other than what is required to accomplish the intended task. The document is a Proposed Standard collaboratively developed by the COSCA/NACM Joint Technology Committee and the OASIS LegalXML Member Section Electronic Court Filing Technical Committee. Portions of this document were derived from the 'Court Filing Straw Man,' collaboratively developed by the U.S. District Court for the District of New Mexico, New Mexico Administrative Office of the Courts, SCT Global Government Solutions, Inc., and West Group... The document includes a DTD to be used to validate the syntax of XML documents used for court filing. Annotations appearing inside the DTD, which add further definition and specification, shall be binding. Appendices are non-normative and may contain well-formed, validated examples. Where conflict arises between an example and the DTD or the body of this document, the body or DTD shall be deemed normative and ruling..." See: "LegalXML Electronic Court Filing TC." [cache]
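To give a flavor of what a DTD-based filing structure looks like, here is a deliberately simplified sketch. Every element name below is hypothetical; this does not reproduce any part of the actual Electronic Court Filing 1.1 DTD:

```dtd
<!-- Hypothetical court-filing envelope: one filing names a court and a
     case, and carries one or more documents (the pleadings themselves) -->
<!ELEMENT Filing (Court, Case, Document+)>
<!ELEMENT Court  (#PCDATA)>
<!ELEMENT Case   (CaseNumber, CaseTitle)>
<!ELEMENT CaseNumber (#PCDATA)>
<!ELEMENT CaseTitle  (#PCDATA)>
<!ELEMENT Document (Title, Content)>
<!ATTLIST Document type CDATA #REQUIRED>
<!ELEMENT Title   (#PCDATA)>
<!ELEMENT Content (#PCDATA)>
```

This mirrors the design principle stated in the specification: the markup describes the information needed to accomplish filing (which court, which case, which documents) while leaving the legal content of the documents themselves unconstrained.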
[August 01, 2002] Proceedings of the Workshop on Knowledge Transformation for the Semantic Web. KTSW 2002. Workshop W7 at the 15th European Conference on Artificial Intelligence. 23-July-2002, Lyon, France. 121 pages. <q>...The workshop attracted a number of high-quality submissions concerning the different transformation issues and models presented in this volume. The book opens with an extended abstract of the invited talk of F. Casati, a discussion of the role of services on the Semantic Web. The first section of the proceedings is devoted to model transformation approaches. The paper on "Effective schema conversions between XML and relational models" by D. Lee, M. Mani, and W. Chu is followed by the paper on "Transforming UML domain descriptions into configuration knowledge bases for the Semantic Web" by A. Felfernig, G. Friedrich, D. Jannach, M. Stumptner, and M. Zanker. Generic model transformation issues are discussed in the paper "On modeling conformance for flexible transformation over data models" by S. Bowers and L. Delcambre. Specific modeling issues are again discussed in the second section: namely, the problem of "Tracking changes in RDF(S) repositories" by A. Kiryakov and D. Ognyanov, "Tracing data lineage using schema transformation pathways" by H. Fan and A. Poulovassilis, and "An algebra for the composition of ontologies" by P. Mitra and G. Wiederhold. The next section of the book is devoted to papers on mapping conceptual models. First, "Knowledge representation and transformation in ontology-based data integration" by S. Castano and A. Ferrara, then "MAFRA - An Ontology MApping FRAmework in the context of the Semantic Web" by A. Maedche, B. Motik, N. Silva and R. Volz. These are followed by the application-driven approaches "Conceptual normalization of XML data for interoperability in tourism" by O. Fodor, M. Dell'Erba, F. Ricci, A. Spada and H. Werthner; and "RDFT: a mapping meta-ontology for business integration" by B. Omelayenko. The fourth section contains papers discussing configuration issues: "Enabling services for distributed environments: ontology extraction and knowledge-base characterization" by D. Sleeman, D. Robertson, S. Potter and M. Schorlemmer; "The "Family of Languages" approach to semantic interoperability" by J. Euzenat and H. Stuckenschmidt; and "A logic programming approach on RDF document and query transformation" by J. Peer. The last section is devoted to poster presentations and system demonstrations: "Information retrieval system based on graph matching" by T. Miyata and K. Hasida; "Formal knowledge management in distributed environments" by M. Schorlemmer, S. Potter, D. Robertson, and D. Sleeman; "Distributed semantic perspectives" by O. Hoffmann and M. Stumptner; "The ontology translation problem" by O. Corcho...</q> See general references in "Markup Languages and Semantics." [cache]
[August 01, 2002] "W3C Hails Semantic Web, Web Services Usage Scenarios." By Paul Krill. In InfoWorld (July 31, 2002). "The World Wide Web Consortium (W3C) on Wednesday [2002-07-31] detailed the release of working drafts of its OWL Web Ontology Language to enable development of the Semantic Web. The Semantic Web is intended to enable more structured, intelligent processes on the Internet, allowing, for example, the automatic lookup of flights and hotel information after a person confirms attendance at a meeting in a specific city, said Ian Jacobs, W3C spokesman, in New York. 'The whole idea of the semantic Web is when you say something, I need to know what you're talking about. The idea is we want computers to know things,' Jacobs said. He described development of the Web Ontology Language as being in its early stages. The OWL Web Ontology Language, in which OWL is to be construed as an acronym for Web Ontology Language, is being designed by the W3C Web Ontology Working Group. The intent is to provide a language that can be used for applications that need to understand content, instead of just human-readable presentation of content, according to W3C. As part of the Semantic Web, machine readability is boosted by XML, RDF, and RDF-S support, which provide a vocabulary for term descriptions. The three working drafts released by W3C are entitled Feature Synopsis, Abstract Syntax, and Language Reference. W3C this week also released a working draft of its Web Services Architecture Usage Scenarios collection, which is intended to provide usage cases and scenarios for generation of Web services... Also, the W3C on August 26 is holding an event entitled 'Forum on Security Standards for Web Services' in Boston. At this event, which is to be part of the XML Web Services One Conference and Expo, relationships will be explored between W3C and OASIS Web services and security specifications. OASIS is co-sponsoring the event..."
On OWL, see "W3C Web Ontology Working Group Releases Working Drafts for OWL Semantic Markup Language" and the main reference page, "OWL Web Ontology Language."
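As a small illustration of the layering the drafts describe (XML syntax, RDF data model, RDF Schema vocabulary, OWL class semantics), an ontology fragment in RDF/XML might look like the following. The namespaces are the standard W3C ones; the class names are invented for the hotel-lookup example above:

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
         xmlns:owl="http://www.w3.org/2002/07/owl#">
  <!-- An OWL class for hotels, with a human-readable label and a
       subclass link giving machines something to reason over -->
  <owl:Class rdf:ID="Hotel">
    <rdfs:label>Hotel</rdfs:label>
    <rdfs:subClassOf rdf:resource="#Accommodation"/>
  </owl:Class>
  <owl:Class rdf:ID="Accommodation"/>
</rdf:RDF>
```

With such statements published, an agent that finds a resource typed as Hotel can infer it is also an Accommodation, which is the kind of machine "knowing" Jacobs describes.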
- XML Articles and Papers July 2002
- XML Articles and Papers April - June, 2002
- XML Articles and Papers January - March, 2002
- XML Articles and Papers October - December, 2001
- XML Articles and Papers July - September, 2001
- XML Articles and Papers April - June, 2001
- XML Articles and Papers January - March, 2001
- XML Articles and Papers October - December, 2000
- XML Articles and Papers July - September, 2000
- XML Articles and Papers April - June, 2000
- XML Articles and Papers January - March, 2000
- XML Articles and Papers July-December, 1999
- XML Articles and Papers January-June, 1999
- XML Articles and Papers 1998
- XML Articles and Papers 1996 - 1997
- Introductory and Tutorial Articles on XML
- XML News from the Press