XML General Articles and Papers: Surveys, Overviews, Presentations, Introductions, Announcements
Other collections with references to general and technical publications on XML:
- XML Article Archive: [July 2003] [June 2003] [May 2003] [April 2003] [March 2003] [February 2003] [January 2003] [December 2002] [November 2002] [October 2002] [September 2002] [August 2002] [July 2002] [April - June 2002] [January - March 2002] [October - December 2001] [Earlier Collections]
- Articles Introducing XML
- Comprehensive SGML/XML Bibliographic Reference List
August 2003
[August 29, 2003] "MIT to Uncork Futuristic Bar Code." By Alorie Gilbert. In CNET News.com (August 29, 2003). "A group of academics and business executives is planning to introduce next month a next-generation bar code system, which could someday replace with a microchip the series of black vertical lines found on most merchandise. The so-called EPC Network, which has been under development at the Massachusetts Institute of Technology for nearly five years, will make its debut in Chicago on Sept. 15, at the EPC Symposium. At that event, MIT researchers, executives from some of the largest global companies, and U.S. government officials intend to discuss their plans for the EPC Network and invite others to join the conversation. The attendee list for the conference reads like a who's who of the Fortune 500: Colgate-Palmolive, General Mills, GlaxoSmithKline, Heinz, J.C. Penney, Kraft Foods, Nestle, PepsiCo and Sara Lee, among others. An official from the Pentagon is scheduled to speak, along with executives from Gillette, Johnson & Johnson, Procter & Gamble and United Parcel Service... EPC stands for electronic product code, which is the new product numbering scheme that's at the heart of the system. There are several key differences between an EPC and a bar code. First, the EPC is designed to provide a unique serial number for every item in the system. By contrast, bar codes only identify groups of products. So, all cans of Diet Coke have the same bar code more or less. Under EPC, every can of Coke would have a one-of-a-kind identifier. Retailers and consumer-goods companies think a one-of-a-kind product code could help them to reduce theft and counterfeit goods and to juggle inventory more effectively. 'Put tags on every can of Coke and every car axle, and suddenly the world changes,' boasts the Web site of the Auto-ID Center, the research group at MIT leading the charge on the project. 'No more inventory counts. No more lost or misdirected shipments. 
No more guessing how much material is in the supply chain -- or how much product is on the store shelves.' Another feature of the EPC is its 96-bit format, which some say is large enough to generate a unique code for every grain of rice on the planet... Working on the standards problem is AutoID, a new arm of the Uniform Code Council, the nonprofit that administers the bar code, or Universal Product Code. AutoID, announced in May, plans to pick up where MIT's Auto-ID Center leaves off, assigning codes, ironing out technical standards, managing intellectual property rights, publishing specifications, and providing user support and training..." See: (1) following bibliographic entry on PML servers; (2) Inaugural EPC Executive Symposium, September 15 - 17, 2003; (3) "Physical Markup Language (PML) for Radio Frequency Identification (RFID)."
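The article's point about the 96-bit format can be made concrete with a short sketch. The field names and widths below (8-bit header, 28-bit manager number, 24-bit object class, 36-bit serial number) follow early Auto-ID Center EPC-96 proposals and are used here only for illustration.

```python
# Sketch of a 96-bit EPC encoder/decoder, using the field widths from
# early Auto-ID Center EPC-96 proposals (8-bit header, 28-bit manager,
# 24-bit object class, 36-bit serial). Widths are illustrative.

FIELDS = [("header", 8), ("manager", 28), ("object_class", 24), ("serial", 36)]

def encode_epc(header, manager, object_class, serial):
    """Pack the four fields into a single 96-bit integer."""
    values = {"header": header, "manager": manager,
              "object_class": object_class, "serial": serial}
    epc = 0
    for name, width in FIELDS:
        value = values[name]
        if value >= 1 << width:
            raise ValueError(f"{name} does not fit in {width} bits")
        epc = (epc << width) | value
    return epc

def decode_epc(epc):
    """Unpack a 96-bit integer back into its named fields."""
    out = {}
    for name, width in reversed(FIELDS):
        out[name] = epc & ((1 << width) - 1)
        epc >>= width
    return out
```

Two cans from the same production line would share the header, manager, and object-class fields and differ only in the serial number; a 36-bit serial space per product class is what makes item-level (rather than SKU-level) identity possible.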
[August 29, 2003] "PML Server Developments." By Mark Harrison, Humberto Moran, James Brusey, and Duncan McFarlane. White Paper. Auto-ID Centre, Institute for Manufacturing, University of Cambridge, UK. June 1, 2003. 20 pages. "This paper extends our previous white paper on our PML Server prototype work. We begin with a brief review of the Auto-ID infrastructure, then consider the different types of essential data which could be stored about a tagged physical object or which relate to it. In our data model we distinguish between data properties at product-class level and at instance-level. Product-class properties such as mass, dimensions, handling instructions apply to all instances of the product class and therefore need only be stored once per product class, using a product-level EPC class as the lookup key. Instance-level properties such as expiry date and tracking history are potentially unique for each instance or item and are logically accessed using the full serialised EPC as the lookup key. We then discuss how a PML Service may use data binding tools to interface with existing business information systems to access other properties about an object besides the history of RFID read events which were generated by the Auto-ID infrastructure. The penultimate section analyses complex queries such as product recalls and how these should be handled by the client as a sequence of simpler sub-queries directed at various PML services across the supply chain. Finally, we introduce the idea of a registry to coordinate the fragmented PML Services on a supply chain in order to perform tracking and tracing more efficiently and facilitate a complex query, which requires iterative access to multiple PML Services in order to complete it... The key to the Auto-ID architecture is the Electronic Product Code (EPC) which extends the granularity of identity data far beyond that which is currently achieved by most bar code systems in use today. 
The EPC contains not only the numeric IDs of the manufacturer and product type (also known as stock-keeping unit or SKU) but also a serial number for each item or instance of a particular product type. Whereas two apparently identical instances or items of the same product type may today have the same bar code, they will in future have subtly different EPCs, which allows each one to have a unique identity and to be tracked independently. In order to minimise the costs of Radio Frequency Identification (RFID) tags, the Auto-ID Centre advocates that only a minimal amount of data (the EPC) should be stored on the tag itself, while the remaining data about a tagged object should be held on a networked database, with the EPC being used as a database key to look up the data about a particular tagged object. Within the Auto-ID infrastructure, the Savant, Object Name Service (ONS) and PML Service are all networked databases of some form. Edge Savants interface directly with RFID readers and other sensors and generate Auto-ID event data, typically consisting of triples (Reader EPC, Tag EPC, Timestamp) and an indication of whether the tag has been 'added' or 'removed' from the field of the tag readers. The Object Name Service (ONS) is an extension of the internet Domain Name Service (DNS) and provides a lookup service to translate an EPC number into an internet address where the data can be accessed. Data about the tagged object is communicated using the Physical Markup Language (PML) and the PML Service provides additional information about the tagged object from network databases. The Physical Markup Language (PML) does not specify how the data should be stored, only how it should be communicated.
It should be possible for many different types of existing information systems to act as data sources to the PML Service, and for the data to be queried and communicated using the PML language and by reference to the PML schema rather than by reference to the particular structure/schema of the various underlying databases in which the values are actually stored..." See "Physical Markup Language (PML) for Radio Frequency Identification (RFID)."
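The class-level/instance-level split in the paper's data model can be sketched as two lookup tables, one keyed by the product-class EPC and one by the full serialised EPC. The EPC string format and the property names below are hypothetical, chosen only to mirror the examples in the paper (mass at class level, expiry date at instance level).

```python
# Sketch of the paper's two-level data model: product-class properties are
# stored once per EPC class, instance properties once per full serialised
# EPC. The store layout and EPC string format are hypothetical.

class PmlStore:
    def __init__(self):
        self.class_data = {}     # keyed by product-class EPC (no serial)
        self.instance_data = {}  # keyed by full serialised EPC

    def put_class(self, epc_class, **props):   # e.g., mass, dimensions
        self.class_data.setdefault(epc_class, {}).update(props)

    def put_instance(self, epc, **props):      # e.g., expiry_date, history
        self.instance_data.setdefault(epc, {}).update(props)

    def lookup(self, epc):
        """Merge class-level and instance-level properties for one item."""
        epc_class = epc.rsplit(".", 1)[0]  # strip serial; format is hypothetical
        merged = dict(self.class_data.get(epc_class, {}))
        merged.update(self.instance_data.get(epc, {}))
        return merged
```

The point of the split is storage economy: mass and handling instructions are recorded once per product class rather than once per can.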
[August 29, 2003] "The End of Systems Integrators?" By Erika Morphy. In CRMDaily.com News (August 29, 2003). "As with the applications themselves, verticalization has become the latest thing in integration technology, says Steve Bonadio, senior program director in Meta Group's enterprise application strategies group. Siebel continues to expand development of its integration tool, UAN, as does SAP with its respective integration product package, xApps. Even PeopleSoft has gotten into the act, rolling out its version of an integration on-ramp this week -- Process Integration Packs, or 'PIPs,' for CRM. The premise behind each of these products is roughly the same: to help customers cut down on integration costs by providing standardized interfaces for business processes and discrete systems and applications. This, of course, was once strictly the domain of systems integrators. Could it be, CIOs of every stripe and size wonder, that their dependence on these service providers will diminish -- if not end -- as more application providers start to pay attention to integration linkages and hooks? The short answer: Not likely. The longer answer: There are other competitive and market-development pressures that are eroding systems integrators' stranglehold on IT budgets... Products such as UAN, xApps and PIPs are making life easier for customers, which is good, as that was their intent. 'Application vendors talk about the fact that integration is too costly, and that is one reason why many companies are hesitant to deploy more enterprise software,' Gartner research analyst Ted Kempf told NewsFactor's CIO Today Magazine. 'So they try to make it easier by providing integration packages.' ... Siebel's Universal Application Network was designed to do just that, Bharath Kadaba, Siebel's vice president and technical manager of UAN, told NewsFactor's CIO Today.
Rather than having a systems integrator, such as webMethods, code all information about business objects and processes into an integration platform by hand, UAN provides models for doing so... First introduced last year, UAN is at heart a tool that is predicated on partnerships with independent enterprise-application integration vendors, such as webMethods and Tibco. Now, Siebel is broadening its functionality to provide vertical expertise. Last week, it announced the availability of UAN integration applications for the communications, media and energy industries on the webMethods integration platform... It is a similar story with SAP's xApps and its renamed tech platform, NetWeaver. NetWeaver leverages Web-services technology to integrate the xApps application with mySAP and other software. The applications, or xApps, automate specific business processes, such as project management. 'What SAP is saying is that the next generation of applications, as far as they are concerned, will not be applications for accounting or CRM, but will be end-to-end business applications -- or even cross-multiple business applications,' Gartner research analyst Simon Hayward told NewsFactor's CIO Today. SAP's first xApps, X-Application Resource and Program Management, aligns corporate resources to specific projects. It is the perfect application for pharmaceutical companies, Tim Bussiek, vice president of xApps marketing, told NewsFactor's CIO Today. Typical big pharma companies might launch 5,000 projects each year, all very expensively staffed and equipped. SAP's new xApps tool allows them to evaluate these projects on an ongoing basis, Bussiek said..."
[August 27, 2003] "BPEL and Business Transaction Management: Choreology Submission to OASIS WS-BPEL Technical Committee." By Tony Fletcher, Peter Furniss, Alastair Green, and Robert Haugen (Choreology Ltd). Copyright (c) Choreology Ltd, 2003, subject to OASIS IPR Policy. Working paper presented to the OASIS Web Services Business Process Execution Language Technical Committee. "An overall motivation for this submission is given in an article by one of the authors, Alastair Green, in the September issue of Web Services Journal (see following bibliographic entry). From the 27-August-2003 posting of Peter Furniss: "... [WRT] the announcements of a raft of issues on "business transaction management". These all relate to the long-promised submission from Choreology on how to handle transactions in BPEL... The submission gives the background and context for the BTM issues and proposes syntax constructs as solutions for [items] 54 to 59" in the issues list... "BTM Issue A (BPEL issue 53), Desirable for WS-BPEL to include Business Transaction Management (BTM) programming constructs which are compatible with WS-T, BTP and WS-TXM, "There are three multi-vendor specifications which address the needs of business transaction management for Web Services: Business Transaction Protocol 1.0 (OASIS Committee Specification, June 2002); WS-Transaction (proprietary consortium, August 2002), and the very recently published WS-TXM (proprietary consortium, August 2003). In our view BTP Cohesions, WS-T Business Activity, and WS-TXM Long-Running Actions are the most relevant aspects of these specifications for WS-BPEL. These aspects overlap to a very high degree, each effectively utilizing a two-phase (promise/decide) outcome protocol. (We should emphasize that there has been little time to analyze or assimilate WS-TXM, so this is a provisional conclusion with respect to that specification). 
WS-BPEL should be equipped with the ability to create and terminate business transactions, and to define process participation in such transactions, in a way which is compatible with the intersection of these three capabilities. This will minimize dependence on future standardization efforts in the BTM area... It should be noted that a 'business transaction' is normally performed in support of some economic transaction -- that it coordinates actions that have an effect on the parties and their relationships that go beyond the lifetime of the transaction itself. Since a BPEL process cannot directly manipulate data with a lifetime longer than the process, but always delegates to a web-service, the invoked web-services will either themselves be participants in the business transaction (strictly, the invocation will trigger the registration of participants) or the BPEL process will register as a participant and then make non-transaction invocations on other web-services. In the former case, the invoked web-services are 'business-transaction aware'; the BPEL process will export the context to them and the web-services will implement the transactional responsibilities internally. Similarly, a BPEL process, as an offerer of a web-service, may import a context from a non-BPEL application -- in which case it is itself a business-transaction aware web-service from the perspective of its caller -- and either registers as a participant or passes the context on in its own invocations..." General references in "Business Process Execution Language for Web Services (BPEL4WS)."
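The two-phase (promise/decide) outcome protocol that the submission identifies as common to BTP Cohesions, WS-T Business Activity, and WS-TXM Long-Running Actions can be sketched as follows. The participant interface is hypothetical, not taken from any of the three specifications.

```python
# Minimal sketch of a two-phase (promise/decide) outcome protocol, the
# pattern the submission finds common to BTP Cohesions, WS-T Business
# Activity, and WS-TXM Long-Running Actions. The participant interface
# is hypothetical.

class Participant:
    def promise(self):  # phase 1: can you commit to your part?
        return True
    def confirm(self):  # phase 2, decision "confirm"
        pass
    def cancel(self):   # phase 2, decision "cancel"
        pass

def run_transaction(participants):
    """Confirm all participants only if every one of them promises."""
    promised = []
    for p in participants:
        if p.promise():
            promised.append(p)
        else:
            for q in promised:  # one refusal cancels those already promised
                q.cancel()
            return "cancelled"
    for p in promised:
        p.confirm()
    return "confirmed"
```

A BPEL engine with such constructs could either export this protocol's context to transaction-aware web services or register the process itself as one of the participants, as the submission describes.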
[August 26, 2003] "Grid Security: State of the Art. Expanded Grid Security Approaches Emerge." By Anne Zieger (Chief Analyst, PeerToPeerSource.com) From IBM developerWorks (August 2003). "Today, emerging grid security efforts are also beginning to address application and infrastructure security issues, including application protection and node-to-node communications. Among other advances, emerging grid security approaches are integrating Kerberos security with PKI/X.509 mechanisms, securing peer connections between network nodes and better protecting grid users and apps from malicious or badly formed code. One of the best-known security approaches for Grid computing can be found within the Globus Toolkit, a widely used set of components used for building grids. The Toolkit, developed by the Globus Project, offers authentication, authorization, and secure communications through its Grid Security Infrastructure (GSI). The GSI uses public key cryptography, specifically public/private keys and X.509 certificates, as the basis for creating secure grids. X.509, perhaps the most widely implemented standard for defining digital certificates, is very familiar to enterprise IT managers, and already supported by their infrastructure. At the same time, it's flexible, and can be adopted neatly for use in the grid. Among the GSI's key purposes are to provide a single sign-on for multiple grid systems and applications; to offer security technology that can be implemented across varied organizations without requiring a central managing authority; and to offer secure communication between varied elements within a grid... Grid security research is just beginning to address the operational and policy issues of concern to enterprise IT managers. Going forward, however, grid security efforts should embrace technologies rapidly, while they're still at the cutting edge of mainstream corporate development. 
For example, in recent months, the Global Grid Forum has begun to look at security in a grid-based Web services environment. GGF is working with Open Grid Services Architecture (OGSA), a proposed Grid service architecture based on the integration of grid and Web services concepts and technologies. Members of the OGSA security group plan to realize OGSA security using the WS-Security standard backed by IBM, Microsoft, and VeriSign Inc. Among other features, WS-Security offers security enhancements for SOAP messaging and methods for encoding X.509 certificates and Kerberos tickets. While the OGSA security group's work is in its early stages, its final work product should be yet another factor contributing to grid's increasing acceptance in enterprise life. With critical technologies like Web services being securely grid-enabled, grid technology should soon be central to just about any enterprise's networking strategy..." Article also in PDF format.
[August 26, 2003] "RSS Utilities: A Tutorial." By Rodrigo Oliveira (Propertyware). From Java Developer Services' technical articles series. August 2003. "RSS ('Really Simple Syndication') is a web content syndication format. RSS is becoming the standard format for syndicating news content over the web. As part of my recent contract with Sun Microsystems, I was tasked with the development of a JSP Tag Library to be used by anybody with a basic understanding of RSS, JavaServer Pages, and HTML. The taglib is mostly geared towards non-technical editors of web sites that use RSS for aggregating news content. My goal was to develop a JSP tag library that would simplify the use of RSS content (versions 0.91, 0.92 and 2.0) in web pages. The RSS Utilities Package is the result of that project. It contains a set of custom JSP tags which make up the RSS Utilities Tag library, and a flexible RSS Parser. This document describes how to use the parser and the library provided in the RSS Utilities Package. The zip [distribution] file contains a jar file, rssutils.jar, providing the classes needed to use the utilities, and a tld file rssutils.tld which defines JSP custom tags for extracting information from RSS documents... The parser was a by-product of the project. Although the parser was developed with the tag library in mind, it is completely self-contained, and it can be used in Java applications. To do so, however, you obviously need to know how to write at least basic Java code; if you know how to write Hello World in the Java language, you are probably all set... The RSS object generated by the parser is a Java object representation of the RSS document found at the provided URL [http://mydomain.com/document.rss]. Use the methods provided by the RSS object to get a handle to other RSS objects, such as Channels and Items. The RssParser can also parse File objects and InputStream objects... 
RSS provides a simple way to add and maintain news -- as well as other content -- on your web site, from all over the web. Even though RSS is a simple XML format, parsing and extracting data out of XML documents hosted elsewhere on the web can be a bit tricky -- or at least tedious -- if you have to do it over and over again. The RSS Utilities Package leverages Custom Tag and XML Parsing technologies to make the "Real Simple Syndication" format live up to its name..." The first release of the RSS Utilities Package is available for download. General references in "[RDF Site Summary | Real Simple Syndication] (RSS)."
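The excerpt does not show the Java API of the RSS Utilities Package itself, so as a rough analogue, here is a Python sketch of what such a parser extracts from an RSS 2.0 document: a channel with its metadata and items. The element names (`channel`, `item`, `title`, `link`) come from the RSS 2.0 format; the function shape is an assumption, not the package's API.

```python
# Python analogue of what an RSS parser like the one described extracts:
# a channel with metadata and a list of items. Element names follow the
# RSS 2.0 format; the return structure is a hypothetical stand-in for
# the package's RSS/Channel/Item objects.
import xml.etree.ElementTree as ET

def parse_rss(xml_text):
    root = ET.fromstring(xml_text)
    channel = root.find("channel")
    return {
        "title": channel.findtext("title"),
        "link": channel.findtext("link"),
        "items": [
            {"title": item.findtext("title"), "link": item.findtext("link")}
            for item in channel.findall("item")
        ],
    }
```

A JSP tag library like the one described wraps exactly this kind of extraction so that page authors can iterate over items without touching the XML.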
[August 26, 2003] "Integrating CICS Applications as Web Services. Extending the Life of Valuable Information." By Russ Teubner. In WebServices Journal Volume 3, Issue 9 (September 2003), pages 18-22. With 4 figures. ['Web services promise to lower the costs of integration and help legacy applications retain their value. This article explains how you can use them to integrate mainframe CICS applications with other enterprise applications.'] "IBM's CICS (Customer Information Control System) is a family of application servers that provides online transaction management and connectivity for legacy applications. There are two basic models for integrating CICS applications as Web services, both of which include the use of adapters. The differences between these models depend upon where the Web services exist, how they operate under the covers, and the types of applications you want to integrate. In this article, we refer to these models as connectors and gateways. Connectors run on the mainframe and can use native interfaces that permit seamless integration with the target application. Gateways run off the mainframe on middle-tier servers and often use traditional methods such as screen-scraping... Connectors allow you to transform your legacy applications into Web services without requiring the use of additional hardware, without changes to the legacy application, and without falling back upon brittle techniques like screen scraping. Compared to gateways, connectors yield better performance by running on the host, and more reliable operation due to the elimination of the many layers data must pass through with screen-scraping... Unlike connectors, gateways typically run on a physical or logical middle tier. Where the gateway runs is important because there are so few options for accessing the host from the middle-tier servers, which means gateways usually involve some form of screen-scraping.
The solution is tightly coupled in that the integration is between the gateway and a specific application. Any changes to the application will break the integration. When gateways communicate with terminal-oriented legacy applications they open a terminal session with the legacy application, send a request to the application, receive the terminal datastream, use HLLAPI to capture the screen data, process the screen data, convert the contents to XML, and ship the XML document to the requester... IBM's CICS Transaction Server includes facilities that allow third-party vendors to create connectors that can immediately enable legacy applications as Web services. These facilities provide additional benefits over gateways, such as improved performance and increased stability compared to their screen-scraping counterparts. By using the same industry-standard technologies as Web services, some connectors make it possible for applications to transparently invoke CICS transactions within a Web services architecture and receive the resulting data as well-formed XML. For organizations that want to retain the value of their CICS applications, the combination of XML-enabling connectors and Web services offers a practical and powerful integration solution. Web services are not a trend, but an industry-wide movement that can provide a long-term solution for companies that want to integrate legacy applications and data with new e-business processes. In the end, companies need to assess the value of the data contained in their CICS applications. Most companies have already determined that such data is highly valuable and they are looking for ways to preserve their investments. Given that recent surveys show the top strategic priorities of CIOs and CTOs are integrating systems and processes, the use of Web services for legacy integration will grow rapidly..." [alt URL]
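The final steps of the gateway sequence described above (capture the screen data, convert the contents to XML, ship the document to the requester) can be sketched briefly. The field names and screen positions below are hypothetical, which is also exactly why such solutions are brittle: any change to the screen layout breaks the map.

```python
# Sketch of the last steps of the gateway sequence: take fields captured
# from a terminal screen buffer and convert them to a well-formed XML
# document for the requester. Field names and (row, column, length)
# positions are hypothetical.
import xml.etree.ElementTree as ET

FIELD_MAP = {"account": (2, 10, 8), "balance": (4, 10, 10)}

def scrape_to_xml(screen_rows):
    """Extract mapped fields from the screen buffer and emit XML."""
    root = ET.Element("response")
    for name, (row, col, length) in FIELD_MAP.items():
        value = screen_rows[row][col:col + length].strip()
        ET.SubElement(root, name).text = value
    return ET.tostring(root, encoding="unicode")
```

A connector, by contrast, receives the transaction's data through a native CICS interface and never depends on where a value happens to be painted on a screen.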
[August 26, 2003] "Structured Documents: Searching XML Documents via XML Fragments." By David Carmel, Yoelle S. Maarek, Matan Mandelbrod, Yosi Mass, and Aya Soffer (IBM Research Lab in Haifa, Mount Carmel, Haifa). Presented July 30, 2003 at the 26th Annual International ACM SIGIR Conference [ACM Conference on Research and Development in Information Retrieval] (Toronto, Canada). Published in the Conference Proceedings, pages 151-158. "Most of the work on XML query and search has stemmed from the publishing and database communities, mostly for the needs of business applications. Recently, the Information Retrieval community began investigating the XML search issue to answer information discovery needs. Following this trend, we present here an approach where information needs can be expressed in an approximate manner as pieces of XML documents or 'XML fragments' of the same nature as the documents that are being searched. We present an extension of the vector space model for searching XML collections via XML fragments and ranking results by relevance. We describe how we have extended a full-text search engine to comply with this model. The value of the proposed method is demonstrated by the relative high precision of our system, which was among the top performers in the recent INEX workshop. Our results indicate that certain queries are more appropriate than others for the extended vector space model. Specifically, queries with relatively specific contexts but vague information needs are best situated to reap the benefit of this model. Finally our results show that one method may not fit all types of queries and that it could be worthwhile to use different solutions for different applications...' We present here an approach for XML search that focuses on the informational needs of users and therefore addresses the search issue from an IR viewpoint.
In the same spirit as the vector space model, where free-text queries and documents are objects of the same nature, we suggest that queries be expressed in the same form as XML documents, so as to compare 'apples and apples'. We present an extension of the vector space model that integrates a measure of similarity between XML paths, and define a novel ranking mechanism derived from this model. We evaluated several implementations of our model on the INEX collection and obtained good evidence that the use of XML fragments with an extended vector space model is a promising approach to XML search. By sticking to the well-known and tested model where the query and document are of the same form, we were able to achieve very high precision on the INEX topics. The initial results also indicate that queries that are well specified in terms of the required contexts are best situated to reap the benefit of more complex context resemblance measures and statistics. However, these results should still be considered as initial due to the limited set of queries studied here. A deeper analysis and more than a few, almost 'anecdotal', queries should be discussed as soon as larger test collections become available. Finally, we are convinced that one method will not fit all types of queries and that it could be worthwhile to use different solutions for different types of applications..." See also the earlier paper online: "An Extension of the Vector Space Model for Querying XML Documents via XML Fragments," by David Carmel, Nadav Efraty, Gad M. Landau, Yoelle S. Maarek, and Yosi Mass [cache]
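The core idea of the extension can be illustrated with a toy example: index each word together with the XML path it appears under, so that a query expressed as an XML fragment matches content appearing in a similar context. The scoring below is a bare dot product over (path, word) features, far simpler than the paper's ranking model, and is meant only to show the representation.

```python
# Toy illustration of indexing XML content as (path, word) features, so
# that queries expressed as XML fragments match content in a similar
# context. The plain dot-product score is a stand-in for the paper's
# full ranking model.
import xml.etree.ElementTree as ET
from collections import Counter

def fragment_features(xml_text):
    """Count (path, word) pairs occurring in an XML fragment."""
    feats = Counter()
    def walk(elem, path):
        here = path + "/" + elem.tag
        for word in (elem.text or "").split():
            feats[(here, word.lower())] += 1
        for child in elem:
            walk(child, here)
    walk(ET.fromstring(xml_text), "")
    return feats

def score(query_xml, doc_xml):
    q, d = fragment_features(query_xml), fragment_features(doc_xml)
    return sum(q[f] * d[f] for f in q)
```

The paper's path-similarity measure relaxes the exact-path match used here, so that, for example, a query context `/article/title` can still give partial credit to matches under a structurally similar path.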
[August 26, 2003] "Development of SNMP-XML Translator and Gateway for XML-Based Integrated Network Management." By Jeong-Hyuk Yoon, Hong-Taek Ju, and James W. Hong. In International Journal of Network Management Volume 13, Issue 4 (July/August 2003), pages 259-276. "The research objective of our work is to develop an SNMP MIB to XML translation algorithm and to implement an SNMP-XML gateway using this algorithm. The gateway is used to transfer management information between an XML-based manager and SNMP-based agents. SNMP is widely used for Internet management, but SNMP is insufficient to manage continuously expanding networks because of constraints in scalability and efficiency. XML based network management architectures are newly proposed as alternatives to SNMP-based network management, but the XML-based Network Management System (XML-based NMS) cannot directly manage legacy SNMP agents. We also implemented an automatic specification translator (SNMP MIB to XML Translator) and an SNMP-XML gateway... We developed a gateway which translates messages between SNMP and XML/HTTP. For this gateway, we proposed a translation algorithm which changes SNMP MIB into the XML Schema as a method of specification translation, and implemented an MIB to XML translator which embodied the algorithm. Also, we defined the operation translation methods for interaction translation. SNMP has limits in scalability and efficiency when managing increasingly large networks. Research on XML-based NMS is evolving to solve these shortcomings of SNMP-based NMS. XML-based NMS uses XML in network management to pass management data produced in large networks. XML-based NMS delivers management data in the form of an XML document over the HTTP protocol. This method is efficient for transferring large amounts of data. However, an XML-based NMS cannot manage the legacy SNMP agent directly.
If a manager cannot communicate with an SNMP agent, it is not practical in the real world, where SNMP is used worldwide. Because most Internet devices are equipped with an SNMP agent, and network management is performed by that agent, we studied how to manage the legacy SNMP agent while simultaneously taking advantage of XML-based network management. Because of the excellent compatibility and user-friendly features of XML, integration of data into XML is expected to accelerate in the future. Specifically, in order to use XML as middleware for information transmission between different systems, a standard method for translating SNMP MIB into XML for the transmission of network and system management information is necessary. In future work, we need to enhance the translation algorithm through a performance evaluation of the algorithm. To improve scalability, we need to study how one manager can manage many SNMP agents distributed across large networks such as enterprise networks. Distributed processing is the method presented here. For example, one XML-based manager governing several distributed SNMP-XML gateways through networks can expand the scope of management..."
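The gateway's interaction translation, turning an SNMP agent response into an XML document for the XML-based manager, can be sketched as follows. The OID-to-name table stands in for what the MIB-to-XML-Schema translator would generate; the two OIDs shown are standard MIB-II system-group objects, but the XML element layout is illustrative only.

```python
# Sketch of interaction translation in an SNMP-XML gateway: turn SNMP
# varbinds (OID/value pairs from an agent response) into an XML document
# an XML-based manager can consume. The OID-to-name table is a stand-in
# for output of the MIB-to-XML translator; the XML layout is illustrative.
import xml.etree.ElementTree as ET

# Two standard MIB-II system-group objects (illustrative subset)
OID_NAMES = {
    "1.3.6.1.2.1.1.1.0": "sysDescr",
    "1.3.6.1.2.1.1.3.0": "sysUpTime",
}

def varbinds_to_xml(varbinds):
    root = ET.Element("snmp-response")
    for oid, value in varbinds:
        name = OID_NAMES.get(oid, "unknown")
        el = ET.SubElement(root, name, oid=oid)
        el.text = str(value)
    return ET.tostring(root, encoding="unicode")
```

In the architecture described, the gateway would serve such a document over HTTP, letting one XML-based manager aggregate responses from many legacy SNMP agents.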
[August 26, 2003] "Universal Plug and Play: Networking Made Easy." By Stephen J. Bigelow. In PC Magazine (September 2003). "A technology called Universal Plug and Play (UPnP) is starting to make networking-configuration hassles a thing of the past. Just as Plug and Play (PnP) technology changed the way we integrate hardware with our PCs, UPnP will ease the way we add devices to a network. With PnP, you no longer need to configure resources for each device manually, hoping there are no conflicts. Instead, each device identifies itself to the operating system, loads the appropriate drivers, and starts operating with minimal fuss. PC-based networks, however, still require a cumbersome setup and configuration process, and devices such as printers, VCRs, PDAs, and cell phones are still difficult or impossible to network... With UPnP, adding devices to your network can be as easy as turning them on. A device can automatically join your network, get an IP address, inform other devices on your network about its existence and capabilities, and learn about other network devices. When such a device has exchanged its data or goes outside the network area, it can leave the network cleanly without interrupting any of the other devices. The ultimate goal is to allow data communication among all UPnP devices regardless of media, operating system, programming language, and wired/wireless connection. To foster such interoperability, UPnP relies on network-related technologies built upon industry-standard protocols such as HTTP, IP, TCP, UDP, and XML... UPnP is an open networking architecture that consists of services, devices, and control points. Services are groups of states and actions. For example, a light switch in your home has a state (either on or off) and an action that allows the network to get or change the state of the switch. Services typically reside in devices. 
A UPnP-compliant VCR might, for example, include tape handling, tuning, and clock services -- all managed by a series of specific actions defined by the developer. Devices may also include (or nest) other devices. Because devices and their corresponding services can vary so dramatically, there are numerous industry groups actively working to standardize the services supported by each device class. Today, there are four standards: Internet Gateway Device (IGD) V 1.0; MediaServer V 1.0 and MediaRenderer V 1.0; Printer Device V 1.0 and Printer Basic Service V 1.0; and Scanner (External Activity V 1.0, Scan V 1.0, Feeder V 1.0, and Scanner V 1.0). Industry groups will produce XML templates for individual device types, which vendors will fill with specific information such as device names, model numbers, and descriptions of services... There is one caveat with regard to UPnP: security..." See the recent news story "UPnP Forum Releases New Security Specifications for Industry Review."
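The service model described above — state variables plus actions that get or change them — can be sketched with the article's own light-switch example. Class and method names here are illustrative, not drawn from any standardized UPnP device template:

```python
class LightSwitchService:
    """Minimal model of a UPnP-style service: one state variable plus
    actions that query or change it. Names are illustrative, not taken
    from a standardized UPnP service definition."""

    def __init__(self):
        self.state = "off"          # state variable: "on" or "off"

    def get_status(self):           # action: let the network read the state
        return self.state

    def set_target(self, value):    # action: let the network change the state
        if value not in ("on", "off"):
            raise ValueError("state must be 'on' or 'off'")
        self.state = value
        return self.state

switch = LightSwitchService()
switch.set_target("on")
print(switch.get_status())  # -> on
```

A real UPnP device would expose these actions over HTTP/SOAP and describe them in an XML device description, but the state/action split is the same.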
[August 26, 2003] "Sun Seeks to Spur App Server Adoption. High Availability Stakes Raised in Upgrade." By Paul Krill. In InfoWorld (August 26, 2003). "Sun Microsystems hopes to make a bold leap in the Java application server space with its upcoming Sun ONE Application Server 7 Enterprise Edition, featuring high availability. Having trailed companies such as BEA Systems and IBM in market share, Sun is looking to turn things around by focusing on a high availability database layer in the product that is based on technology acquired through its acquisition of Clustra Systems in 2002. Sun's high availability technology is intended to ensure 99.999 percent uptime for applications such as e-commerce transactional systems, according to Sun officials, who discussed the technology during a chalk talk session in San Francisco... The high availability database layer features state information on transactions. Transactional loads can be shifted between application servers in the network if needed, Keller said. While the current version of the enterprise application server, release 6.5, has had high availability support, Version 7's support of the Clustra technology boosts real-time database functionality and scalability, to 24 processors per system. Version 7, which is set to ship in September for $10,000 per processor, also is compliant with the J2EE 1.3 Java specification, which features container management support for access to a database without requiring programmer involvement, according to Sun. Load balancing in Version 7 will enable uptime when taking down an application server for maintenance. Additionally, the high availability layer enables performance boosts through the addition of more processors, rather than having to add more application servers... Sun will add J2EE 1.4 compliance to the application server, featuring conformance to Web services specifications, in 2004, Sun officials said..."
[August 26, 2003] "Transacting Business with Web Services, Part I. The Coming Fusion of Business Transaction Management and Business Process Management." By Alastair Green (Choreology Ltd). In WebServices Journal Volume 3, Issue 9 (September 2003), pages 32-35. "Business transaction management (BTM) is a promising new development in general-purpose enterprise software. Most large companies are devoting significant resources to the problem of reliable, consistent integration of application services. BTM offers previously inaccessible levels of application coordination and process synchronization, radically simplifying the design and implementation of transactional business processes. Business process management (BPM) needs to be enriched by BTM for users to see the potential value of BPM realized in practice. XML is already widely deployed as a useful lingua franca enabling the creation of canonical data standards for particular industries, trading communities, and information exchanges. The extended family of Web services standards (clustered around the leading duo of SOAP and WSDL) is gaining growing acceptance as an important way of providing interoperable connectivity between heterogeneous systems. Many organizations are also examining the use of BPM technologies, exemplified by the current OASIS initiative, Web Services Business Process Execution Language (WS BPEL). Increasingly, attention is turning to the special problems associated with building transactional business processes and reliable, composable services. This is where BTM technology comes into its own. In this article I'm going to look at the rationale for and current status of BTM, and how vendors and users are thinking about the integration or fusion of BTM with BPM, particularly in the OASIS BPEL standardization effort. 
BPEL, as a special-purpose programming language designed to make processes portable across different vendors' execution engines, can become a very useful standard programming interface for business transactions in the Web services world... Full-scale BTM software needs to implement interoperable protocols that define three phases of any transactional interaction, whether integrating internal systems, or automating external trades and reconciliations: (1) Phase One: Collaborative Assembly: The business-specific interplays of messages that assemble a deal or other synchronized state shift in the relationship of two or more services. A useful general term for such an assemblage of ordered messages is collaboration protocol. Examples include RosettaNet PIPs, UN/Cefact trade transactions, and the FIX trading protocol. In the future, BPEL abstract processes should help greatly in defining such protocols. Reliable messaging has an important role in this assembly phase, but as a subordinate part of a new, extended concept of GDP (guaranteed delivery and processing). (2) Phase Two: Coordinated Outcome: The coordination of an outcome that ensures that the intended state changes occur in all participant systems, consistent with the business rules or contracts which govern the overall transaction. Examples of relevant coordination protocols are WS-Transaction (Atomic Transaction and Business Activity, supplemented by WS-Coordination) and BTP (the OASIS Business Transaction Protocol) and the recently released WS-TXM (Transaction Management, part of the WS-Composite Application Framework). 
A coordination protocol requires three related sub-protocols: a control protocol, which creates and terminates a coordination or transaction (present in BTP); a propagation protocol, which allows a unique transaction identity to be used to bind participating services to a coordination service (this sub-protocol is mostly defined by WS-Coordination); and an outcome protocol, which allows a coordination service to reliably transmit the instructions of a controlling application to the participants, even in the event of temporary process, processor or network failures. WS-T, BTP, and WS-TXM, contain very similar outcome protocols... (3) Phase Three: Assured Notification: Notification of the result of the transaction to the parties involved, ensuring that they're all confident of their final relationship to their counterparties. Ideally, this requires a reliable notification protocol, which allows the different legal entities or organizational units to receive or check the final outcome, including partial or complete failures..." General references in "Business Process Execution Language for Web Services (BPEL4WS)." [alt URL]
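The outcome protocols named above (WS-Transaction, BTP, WS-TXM) share a common shape: ask every participant to prepare, then confirm all of them only if all agree, otherwise cancel those already prepared. A minimal sketch of that coordinated-outcome pattern, with a hypothetical participant interface invented for illustration:

```python
class Participant:
    """Hypothetical participant in a coordinated outcome."""

    def __init__(self, can_commit):
        self.can_commit = can_commit
        self.outcome = None

    def prepare(self):              # vote on whether the state change can occur
        return self.can_commit

    def confirm(self):
        self.outcome = "confirmed"

    def cancel(self):
        self.outcome = "cancelled"

def coordinate(participants):
    """Sketch of an outcome protocol in the BTP/WS-Transaction style:
    confirm only if every participant votes yes; otherwise cancel all
    participants that had already prepared."""
    prepared = []
    for p in participants:
        if p.prepare():
            prepared.append(p)
        else:
            for q in prepared:
                q.cancel()
            return "cancelled"
    for p in prepared:
        p.confirm()
    return "confirmed"

print(coordinate([Participant(True), Participant(True)]))   # -> confirmed
print(coordinate([Participant(True), Participant(False)]))  # -> cancelled
```

The real protocols add what this sketch omits: durable logging and retransmission so the outcome survives temporary process, processor, or network failures.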
[August 25, 2003] "Macromedia Plays Drag-and-Drop Game." By Gavin Clarke. In Computer Business Review Online (August 25, 2003). "Macromedia Inc is eyeing up Delphi and Visual Basic developers with a web-programming environment exploiting both drag-and-drop and XML web services... Flash MX Professional 2004 is designed to exploit the Flash player's popularity as a deployment environment by introducing development techniques and workflows uncommon to existing Flash development environments but familiar to application coders. Flash currently uses a so-called timeline design metaphor, friendly to visually creative programmers but not those comfortable with drag-and-drop. Macromedia hopes drag-and-drop will attract Microsoft Corp's Visual Basic and Borland Software Corp's Delphi developers whose tools, company president of products Norm Meyrowitz said, haven't morphed to become web centric... Flash MX Professional 2004 also uses web services with scriptable data binding that supports SOAP and XML in addition to Macromedia's Flash Remoting. There is also integration with Microsoft's Visual SourceSafe, to manage source code and project files. Macromedia becomes the latest in a growing string of companies, including BEA and Sun Microsystems Inc, attempting to appeal especially to the Visual Basic crowd... Dreamweaver features enhanced support for Cascading Style Sheets (CSS) that helps reduce consumption of network bandwidth by separating design from content, multi-browser validation to check tags and CSS rules across different browsers, updated Flash Player, and drawing tools in Fireworks for greater control over bitmap and vector images. In an attempt to provide a unified look and feel, Macromedia will also unveil Halo, a set of design guidelines on the principle of Apple Computer Inc's OS X Aqua interface..." Note SYS-CON Media's announcement for a new MX Developer's Journal. See details in the announcement "Macromedia Announces Dreamweaver MX 2004. 
New Version Builds Foundation for Widespread Adoption of Cascading Style Sheets (CSS)." [temp URL]
[August 25, 2003] "Macromedia Unveils MX 2004 Lineup. New versions of Flash, Dreamweaver, Fireworks Slated for September Release." By Dave Nagel. In Creative Mac News (August 25, 2003). With screen shots. "Macromedia has unveiled its new lineup of graphic design tools in the MX 2004 family. These include two new versions of Flash, Flash MX 2004 and Flash MX Professional 2004; Dreamweaver MX 2004; and Fireworks MX 2004. The company has also introduced the new Studio MX 2004 suite, as well as Flash Player 7. All are expected to be available next month for Mac OS X and Windows. Completely new to Macromedia's lineup are split versions of Flash, the standard version and the Professional edition. The standard Flash MX 2004 adds new functionality and gains several workflow enhancements. It includes new Timeline Effects for adding common effects like blurs and drop shadows without scripting; pre-defined behaviors for navigation and media control; ActionScript 2.0 for enhanced interactivity; support for cascading style sheets for producing hybrid Flash and HTML content; spell checking and global search; accessibility features; and Unicode and localization tools. It also gains a high-performance compiler for improving playback considerably, including playback of content created for earlier versions of the Flash Player... The Professional edition includes all of the new features of Flash MX 2004, as well as a beefed-up application development environment for developing rich Internet applications and tools for delivering video with interactivity and custom interfaces. It adds forms-based programming capabilities as an alternative to timeline-based development and offers connectivity to server data with scriptable binding, supporting SOAP, XML and Flash Remoting. For video, Flash Professional includes a streamlined development workflow with Apple Final Cut Pro and other video editing systems. 
With the new Flash Player 7, it provides support for full-motion, full-frame video and progressive downloads. And it includes pre-built components for building custom interfaces and easily compositing text, animated graphics and images into a video presentation. It also gains support for developing content for mobile devices... Dreamweaver and Fireworks have also been boosted into the MX 2004 fold. The primary focus of Dreamweaver MX 2004 is the simplification of cascading style sheets, with the entire design environment built around CSS for precise control over design elements. It offers support for SecureFTP, dynamic cross-browser validation functionality, built-in graphics editing, integration with Microsoft Word and Excel (including copying and pasting formatted tables) and updated support for ASP.NET, PHP and ColdFusion technologies. It will ship with MX Elements for HTML, which includes starter and template components for Web pages, including preset cascading style sheets..." See also the announcement "Macromedia Announces Dreamweaver MX 2004. New Version Builds Foundation for Widespread Adoption of Cascading Style Sheets (CSS)."
[August 25, 2003] "Goals of the BPEL4WS Specification." By Frank Leymann, Dieter Roller, and Satish Thatte. Working document submitted to the OASIS Web Services Business Process Execution Language TC. See the posting from Diane Jordan and the original posting, with attachment. The memo articulates ten (10) overall goals of the "original" BPEL4WS Specification, presented as a record of the "Original Authors' Design Goals for BPEL4WS." It covers: Web Services as the Base, XML as the Form, Common set of Core Concepts, Control Behavior, Data Handling, Properties and Correlation, Lifecycle, Long-Running Transaction Model, Modularization, and Composition with other Web Services Functionality. "This note aims to set forward the goals and principles that formed the basis for the work of the original authors of the BPEL4WS specification. The note is set in context to reflect the considerations that went into the work, rather than being presented as a set of axioms. Much of this material is abstracted from comments and explanations embedded in the text of the specification itself. This is intended to be informative and a starting point for a consensus in the WSBPEL TC for the work of the TC. The goals set out here are also reflected in the charter of the WSBPEL TC... BPEL4WS is firmly set in the Web services world as the name implies. In particular, all external interactions occur through Web service interfaces described using WSDL. This has two aspects: (1) the process interacts with Web services through interfaces described using WSDL and (2) the process manifests itself as Web services described using WSDL. 
We concluded that although the binding level aspects of WSDL sometimes impose constraints on the usage of the abstract operations, in the interests of simplicity and reusability we should confine the exposure of process behavior to the 'abstract' portType (i.e., 'interface') level and leave binding and deployment issues out of the scope of the process models described by BPEL4WS. The dependence is concretely on WSDL 1.1, and should remain so, given the timeline for the WSBPEL TC, and the likelihood that WSDL 1.1 will remain the dominant Web service description model for some time to come. At the same time we should be sensitive to developments in WSDL 1.2 and attempt to stay compatible with them..." Note from Satish Thatte's post: "As promised, the goals document is attached. As I said during the last phone meeting, this document only covers high level design points... If TC members feel that there are any important aspects not yet covered here please let us know and we will try to address those concerns..." General references in "Business Process Execution Language for Web Services (BPEL4WS)." [source .DOC]
[August 22, 2003] "J2ME Connects Corporate Data to Wireless Devices. Sacrificing Proprietary Gimmicks for Software Portability, J2ME Leads the Way." By Tom Thompson. In InfoWorld (August 22, 2003). "The vast differences among portable devices -- Pocket PCs running Windows CE, PDAs running Palm OS or Linux, cell phones running the Symbian OS -- pose significant problems for developers. Even the cell phones from a single vendor such as Motorola (the company I work for) can vary widely in processor type, memory amount, and LCD screen dimensions. Worse, new handsets sporting new features, such as built-in cameras and Bluetooth networking, are released every six months to nine months. For IT managers whose chief concern is that applications running on device A today also run on device B tomorrow, the best choice among development platforms is J2ME, a slimmed-down version of Java tailored for use on embedded and mobile devices. Most handset vendors implement their own Java VM, and third-party VMs provide Java support in Palm and Pocket PC devices. For a broad range of devices, past, present, and future, J2ME provides a high degree of security and application portability -- but not without drawbacks... J2ME limits support for vendor-specific hardware features to accommodate variations among devices. J2ME tackles hardware variations in two ways. First, J2ME defines an abstraction layer known as a configuration, which describes the minimum hardware required to implement Java on an embedded device. The J2ME configuration that addresses resource-constrained devices such as mobile phones and low-end PDAs is the CLDC (Connected Limited Device Configuration). Second, J2ME defines a second abstraction layer, termed a profile, that describes the device's hardware features and defines the APIs that access them. Put another way, profiles extend a configuration to address a device's specific hardware characteristics. 
J2ME currently defines one profile for CLDC devices: the MIDP (Mobile Information Device Profile). In addition to stipulating the basic hardware requirements, the MIDP implements the APIs used to access the hardware... Down the road, the JCP proposes a new JTWI (Java Technology for the Wireless Industry) specification. In JTWI, a number of optional J2ME APIs -- such as MMAPI and WMA (Wireless Messaging APIs) -- become required services. Even in its current state, J2ME offers developers the ability to write once and deploy a business application across the wide range of wireless gear currently available. J2ME's abstraction layers also provide a hedge against vendor lock-in, and they help cope with the rapid changes in today's wireless devices. Developers may have to craft the midlet's interface to address the lowest-common-denominator display, but that's a small price to pay compared with writing a custom client application for each device the corporation owns..." See: (1) "J2ME Web Services Specification 1.0," JSR-000172, Proposed Final Draft 2; (2) "IBM Releases Updated Web Services Tool Kit for Mobile Devices."
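The two abstraction layers described above — a configuration stating the minimum hardware a device class guarantees, and a profile extending it with the APIs that expose device-specific features — can be modeled in a short sketch. The class names, memory figure, and API names below are illustrative, not the CLDC/MIDP specifications' exact values:

```python
class Configuration:
    """Layer 1: minimum hardware a device class guarantees.
    The memory figure is illustrative, not the CLDC spec's exact number."""

    def __init__(self, name, min_memory_kb):
        self.name = name
        self.min_memory_kb = min_memory_kb

class Profile:
    """Layer 2: extends a configuration with the APIs that expose a
    device's specific hardware features (API names are illustrative)."""

    def __init__(self, name, configuration, apis):
        self.name = name
        self.configuration = configuration
        self.apis = set(apis)

    def supports(self, api):
        return api in self.apis

cldc = Configuration("CLDC", min_memory_kb=160)
midp = Profile("MIDP", cldc, ["lcdui", "rms", "http"])
print(midp.supports("lcdui"))  # -> True
```

An application written against the profile's APIs never touches vendor-specific hardware directly, which is what gives J2ME its portability and its hedge against vendor lock-in.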
[August 22, 2003] "'Java Everywhere' is for World Domination. Why the Latest Wireless Buzz Matters to All Developers." By Michael Juntao Yuan. In JavaWorld (August 22, 2003). "The buzzword from the 2003 JavaOne conference was 'Java everywhere.'... Java runtimes are built into more than 150 devices from more than 20 manufacturers. All five major cell phone manufacturers have committed to the Java platform. In addition to manufacturer support, Java has also gained widespread support from the wireless carrier community. Wireless carriers are conservative and wary of any security risks imposed by new technologies. As part of the J2ME specification process, carriers can influence the platform with their requirements. As a result, all major wireless carriers around the world have announced support for J2ME handsets and applications. For developers, J2ME applications can take advantage of not only ubiquitous device support, but also ubiquitous network support. A major effort has been made to support games on J2ME handsets. Mobile entertainment has proven to be an extremely profitable sector. In Europe, simple ring-tone download generated $1.4 billion in revenue last year. In comparison, the entire global J2EE server market is $2.25 billion. J2ME games are content rich, over-the-air downloadable, and micro-payment-enabled. The J2ME gaming sector is projected to grow explosively and create many new Java jobs in the next couple of years. In fact, J2ME games are already the second largest revenue source for Vodafone's content service. Notable recent advances in the J2ME space: (1) The Mobile 3D Graphics API for J2ME [JSR 184] promises to bring 3D action games to Java-enabled handsets. Nokia presented an impressive demonstration at JavaOne. (2) The Advanced Graphics and User Interface Optional Package (JSR 209) will provide Swing and Java 2D support on PDA-type devices. 
At JavaOne, SavaJe Technologies, a smaller vendor, demonstrated a prototype smart phone device running Java Swing. (3) IBM has already ported its SWT [Standard Widget Toolkit] UI framework to Pocket PC devices as part of its Personal Profile runtime offering. (4) The Location API for J2ME [JSR 179] enables novel applications not possible in the desktop world. The API can determine a user's location either from a built-in GPS device or from a phone operator's triangulated location signals in compliance with the enhanced 911 government requirements. (5) The completion of the SIP (Session Initiation Protocol) API for J2ME [JSR 180] enables the development of instant messaging applications on mobile devices. That will finally facilitate convergence between the popular desktop IM applications and wireless SMS messaging systems. (6) The Security and Trust Services API for J2ME [JSR 177] allows J2ME phones to access the device's embedded security element, e.g., the SIM (Subscriber Identity Module) card for GSM phones. JSR 177 enables support for more powerful and flexible security solutions for financial and other mobile commerce applications. (7) The J2ME Web Services Specification [JSR 172] supports Web services clients on mobile devices... The central message from this year's JavaOne is that the long overdue Java client-side revolution has finally arrived in the form of 'Java everywhere.' To paraphrase JavaOne keynote speaker Guy Laurence from Vodafone: the Java mobility train has already left the station, you are either on board or not. Every time you pick up your Java-enabled cell phone, think about the opportunities you might have missed..."
[August 21, 2003] "webMethods Extends UAN Support." By Demir Barlas. In Line56 (August 21, 2003). "For joint Siebel/webMethods customers, a shortcut to solving some potentially messy integration problems. webMethods has extended its support of the Universal Application Network (UAN), developed by customer relationship management (CRM) software provider Siebel, to include applications for the communications, media, and entertainment (CME) industries. UAN reflects a basic Siebel philosophy: the importance of the business process. UAN is a standards-based architecture that serves as a kind of hub, using business processes to drive the ways in which applications communicate. In theory, this means that point-to-point integration between applications can be bypassed, because business processes themselves reach out to UAN, which then touches all applications within that process. If this makes UAN sound like an integration solution in itself, beware; it isn't. UAN derives its efficacy from Siebel's partnerships with integration software providers like webMethods and TIBCO. Scott Opitz, SVP of marketing and business development for webMethods, explains further. 'Siebel's used our tools to build the connections,' he says. 'It's about pre-configuring business processes and eliminating the need for you to define them from scratch to support an integration environment.' In CME, as in other verticals, the integration environment can get tricky. For example, cable companies that until recently provided just one kind of service may also find themselves providing Internet access, video-on-demand, and so forth. That means the same customer could show up in different databases (including Siebel systems), so companies interested in distilling a single view of the customer would either have to do point-to-point integration or rely on something prepackaged, like the UAN. In this context, UAN would also be useful to run industry-specific business processes in the quote-to-cash cycle..." 
See also: (1) the note on UAN from the Siebel white paper; (2) a related article: "WebMethods Releases Integration App Based on Siebel's UAN," by Kimberly Hill, in CRMDaily.com News (August 21, 2003). Details in the announcement "Siebel Systems and webMethods Announce Expanded Offering for Universal Application Network. Siebel Integration Applications for Communications, Media and Energy Industries Now Available on webMethods Integration Platform."
[August 21, 2003] "Web Services Basic Profile for Industry and J2EE 1.4." By Gavin Clarke. In Computer Business Review Online (August 13, 2003). [The WS-I Basic Profile announcement] "means Sun Microsystems' latest server edition of Java, Java 2 Enterprise Edition (J2EE) 1.4, can now proceed to market, with the specification's final publication expected by the end of December. Sun and Java Community members postponed the most recent proposed release date of the already delayed J2EE 1.4, in an attempt to ensure the specification was in lock-step with the industry's latest web services specifications... Sun expects a number of vendors to launch sample J2EE 1.4 applications in the coming months as the Java Community Process (JCP) completes final release of Test Compatibility Kits (TCKs) and reference implementations for certification. The WS-I is, meanwhile, also planning a set of test tools and sample applications for Java and Microsoft Corp's C#, to be made available in the next few months. However, WS-I will not orchestrate or co-ordinate a testing regime for vendors to certify their products are compatible with the Basic Profile. Instead, the organization is relying on goodwill and market pressure to drive certification, hoping ISVs will not want to risk the shame of having a planned WS-I logo removed from their products... Sun believes its own JCP-driven certification process can step in to help ensure conformity, in Java at least, by embedding the Basic Profile 1.0 into the J2EE platform specification. Under JCP rules, J2EE 1.4 vendors must undergo testing using the TCK and reference implementations, ensuring they are conformant with the platform. Mark Hapner, distinguished engineer and chief web services strategist for Sun and the company's WS-I board representative, said: 'We are efficient at taking on the role of WS-I certification.' 
Interoperability is a fundamental issue, and one of the largest issues in Basic Profile 1.0 has been an attempt to ensure consistency in fault handling and error handling between Java and .NET web services. 'If you can't communicate what the fault is, you don't know what to do,' Cheng said. He believes the Basic Profile will mean vendors correctly implement SOAP 1.1, WSDL 1.1, UDDI 2.0, XML 1.0 and XML Schema in products, themselves, so users don't need to build out what is regarded as basic infrastructure. Hapner said the Basic Profile would be integrated into J2EE's component model, viewed as a fundamental building block of Java web services, to support web services' truly 'global computing model'..." See: "WS-I Releases Basic Profile 1.0a Final Specification for Interoperable Web Services."
[August 21, 2003] "NETCONF Configuration Protocol." Edited by Rob Enns (Juniper Networks). IETF Network Working Group, Internet-Draft. Reference: 'draft-ietf-netconf-prot-00'. August 11, 2003, expires February 9, 2004. 73 pages. "There is a need for standardized mechanisms to manipulate, install, edit, and delete the configuration of a network device. In addition, there is a need to retrieve device state information and receive asynchronous device state messages in a manner consistent with the configuration mechanisms. There is great interest in using an XML-based data encoding because a significant set of tools for manipulating ASCII text and XML encoded data already exists... NETCONF uses a remote procedure call (RPC) paradigm to define a formal API for the network device. A client encodes an RPC in XML and sends it to a server using a secure, connection-oriented session. The server responds with a reply encoded in XML. The contents of both the request and the response are fully described in XML DTDs or XML schemas, or both, allowing both parties to recognize the syntax constraints imposed on the exchange. A key aspect of NETCONF is an attempt to allow the functionality of the API to closely mirror the native functionality of the device. This reduces implementation costs and allows timely access to new features. In addition, applications can access both the syntactic and semantic content of the device's native user interface. NETCONF allows a client to discover the set of protocol extensions supported by the server. These 'capabilities' permit the client to adjust its behavior to take advantage of the features exposed by the device. The capability definitions can be easily extended in a noncentralized manner. Standard and vendor-specific capabilities can be defined with semantic and syntactic rigor. The NETCONF protocol is a building block in a system of automated configuration. 
XML is the lingua franca of interchange, providing a flexible but fully specified encoding mechanism for hierarchical content. NETCONF can be used in concert with XML-based transformation technologies such as XSLT to provide a system for automated generation of full and partial configurations. The system can query one or more databases for data about networking topologies, links, policies, customers, and services. This data can be transformed using one or more XSLT scripts from a vendor-independent data schema into a form that is specific to the vendor, product, operating system, and software release. The resulting data can be passed to the device using the NETCONF protocol..." See other details in the news story "IETF Network Configuration Working Group Releases Initial NETCONF Draft." [cache]
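The RPC paradigm described above can be sketched by encoding a request in XML. The `<rpc>` element and `message-id` attribute follow the draft's RPC model; the namespace URI shown is the one later standardized for NETCONF and may not match this early draft exactly:

```python
import xml.etree.ElementTree as ET

# Namespace as later standardized; the 2003 draft may differ.
NETCONF_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_rpc(message_id, operation):
    """Encode a NETCONF-style RPC request as XML: an <rpc> wrapper
    carrying a message-id plus the requested operation element."""
    rpc = ET.Element(f"{{{NETCONF_NS}}}rpc",
                     {"message-id": str(message_id)})
    ET.SubElement(rpc, f"{{{NETCONF_NS}}}{operation}")
    return ET.tostring(rpc, encoding="unicode")

# A client asking the device for its configuration
request = build_rpc(101, "get-config")
print(request)
```

The server's `<rpc-reply>` echoes the same `message-id`, which is how replies are matched to requests over the session.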
[August 20, 2003] "Embedded Markup Considered Harmful." By Norman Walsh. From XML.com (August 20, 2003). "XML is pretty simple: there's plenty of complexity to be found if you go looking for it: if you want, for example, to validate or transform or query it. But elements and attributes in well formed combinations have become the basis for an absolutely astonishing array of projects. Recently I've encountered a design pattern (or antipattern, in my opinion) that threatens the very foundation of our enterprise. It's harmful and it has to stop... It came as a surprise to me when I discovered that the RSS folks were supporting a form of escaped markup. Webloggers often publish a list of their recent entries in RSS and online news sites often publish headlines with it. Like most XML technologies, there's enough flexibility in it to suit a much wider variety of purposes than I could conveniently summarize here. Surprise became astonishment when I discovered that the folks working on the successor to RSS weren't going to explicitly outlaw this ugly hack. When I discovered that this hack was leaking into another XML vocabulary, FOAF, I became outright concerned... The idea of escaping markup goes against the fundamental grain of XML. If this hack spreads to other vocabularies, we'll very quickly find ourselves mired in the same bugward-compatible tag soup from which we have struggled so hard to escape. And evidence suggests that it's already spreading. Not long ago, the question of escaped markup turned up in the context of FOAF. The FOAF specification condones no such nonsense, but one of the blogging tools that produces FOAF reacted to a user's insertion of HTML markup into the 'bio' element by escaping it. The tool vendor in question was quickly persuaded to fix this bug. Escaped markup must stop: there is clear evidence that the escaped markup design will spread if it isn't checked. If it spreads far enough before it's caught, it will become legacy..."
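Walsh's objection can be demonstrated concretely: once markup is escaped, an XML parser sees only character data, so the structure is invisible to XML tooling and the consumer must re-parse the blob by hand. A small sketch:

```python
import xml.etree.ElementTree as ET
from xml.sax.saxutils import escape

entry = "<p>Hello, <em>world</em></p>"

# Escaped markup: the HTML ships as character data, so an XML parser
# sees one opaque text blob rather than elements.
escaped = f"<description>{escape(entry)}</description>"
node = ET.fromstring(escaped)
print(len(list(node)))   # -> 0 (no child elements; structure is lost)
print(node.text)         # the consumer must re-parse this by hand

# Embedded markup: the same content as real elements, visible to any
# XML tool (XPath, XSLT, validation) without a second parsing pass.
embedded = f"<description>{entry}</description>"
print(len(list(ET.fromstring(embedded))))  # -> 1 (the <p> child)
```

This is exactly the "tag soup" risk the article names: the escaped form defeats validation and transformation, the very features that motivated XML in the first place.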
[August 20, 2003] "Should Atom Use RDF?" By Mark Pilgrim. From XML.com (August 20, 2003). ['Mark Pilgrim explains the use of RDF in the new Atom syndication format.'] "The problem with discussing RDF (where that means: 'I think this data format should be RDF') is that you can support any of these four RDF issues (model, syntax, tools, vision), in any combination, while vigorously arguing against the others. People who believe that the RDF conceptual model is a good thing may think that the RDF/XML serialization is wretched, or that there are no good RDF tools for their favorite language, or that the Semantic Web is an unattainable pipe dream, or any combination of these things. People who are familiar with robust RDF tools (such as RDFLib for Python) -- and, thus, never have to look at the RDF/XML serialization because their tools hide it from them completely -- may nonetheless think that RDF/XML is wretched. People who defend the RDF/XML syntax may have nothing polite to say about the vision of the Semantic Web. And around and around it goes... This is a problem with 'I think this format should be RDF' discussions. Many people who are thought to be pro-RDF are, in fact, against it in one or more ways (the model is limiting, the syntax is wretched, the tools are buggy or nonexistent, the vision is stupid). And many people who are perceived as anti-RDF are in fact in favor of it in one or more ways (the model is good, the serialization is no more complex than straight XML, the tools work well enough, the Semantic Web is worth the wait). For the record, I think that the RDF model is sound, the tools work for me, the serialization is wretched, and the Semantic Web is an unattainable pipe dream. If I appear to be wavering over time, sometimes pro-RDF, sometimes anti-RDF, it may be that I'm simply arguing different facets... 
How can we allow you to use your RDF tools on Atom, and do the right thing with reusing existing ontologies, and keep the syntax simple for people who simply want to parse Atom feeds in isolation, as XML? We can make the XSLT transformation normative... every platform that has robust RDF tools (a small but growing number) also has robust XSLT tools. But Atom-as-RDF is not the primary mode of consuming Atom feeds. There are dozens, perhaps more than 100, tools that consume syndication feeds now. Some of them have already been updated to consume Atom feeds and the format hasn't even been finalized yet. Most will be updated once the format is stable. And, to my knowledge, only one (NewsMonster) handles them as RDF, and it already has the infrastructure to transform XML because it does this for six of the seven formats called 'RSS' (the seventh is already RDF). In other words, we're hedging our bets. Whether a vocal minority likes it or not, RDF is very much a minority camp right now. It has a lot to offer -- I saw that first-hand as it forced us to clarify our model -- but it hasn't hit the mainstream yet. On the other hand, it seems perpetually poised to spring into the mainstream. Tool support is obviously critical here (since they help hide the wretched syntax), and the tools are definitely maturing. So should Atom be consumed as RDF? It depends. If you want to, and have the right tools, you can. You'll need to transform it into RDF first, but we'll provide a normative way to do that. If you don't want to, then you don't have to worry about it. Atom is XML..."
[August 20, 2003] "The Semantic Web is Closer Than You Think." By Kendall Grant Clark. From XML.com (August 20, 2003). "The W3C's web ontology language, now called OWL, was advanced to W3C Candidate Recommendation on 19-August-2003. While there is a lot of talk these days about the Semantic Web being the crack-addled pipe dream of a few academic naifs, in reality it's a lot closer to realization than you might be thinking... I'm not suggesting that we stand on the brink of a fully achieved, widespread Semantic Web. I am suggesting that some of the major pieces of the puzzle are now or will soon be in place. OWL, along with RDF, upon which it builds, are two such very major pieces of the Semantic Web puzzle... OWL is an ontology language for the Web, which builds on a rich technical tradition of both formal research and practical implementation, including SHOE, OIL, and DAML+OIL. The technical basis for much of OWL is the part of the formal knowledge representations field known as Description Logics (aka 'DL'). DL is the main formal underpinning of such diverse kinds of knowledge representation formalisms as semantic nets, frame-based systems, and others... OWL includes an RDF/XML interchange syntax, an abstract, non-XML syntax, and three sublanguages or variants, each of different expressivity and implementational complexity (OWL Lite, OWL DL, and OWL Full). The takeaway point is simple: OWL is real stuff; whether it's the right real stuff, whether it can gain critical mass, whether it can or will operate at web scale -- these are and will remain open questions for the foreseeable future. But the foundation is solid... What can be done with an ontology language for the Web? In short, you can formally specify a knowledge domain, describing its most salient features and constituents, then use that formal specification to make assertions about what there is in that domain. 
You can feed all of that to a computer which will reason about the domain and its knowledge for you. And, here's the most tantalizing bit, you can do all of this on, in, and with the Web, in both interesting and powerful ways... OWL has been specifically crafted out of its Webbish forerunners, particularly SHOE and DAML+OIL, to take advantage of some of the interesting things about the Web. What is interesting about the Web? Lots of things, including its scale, its distributedness, its relatively low barriers of access and accessibility. OWL is intended to be an ontology language that has some of these features: it should operate at the scale of the Web; it should be distributed across many systems, allowing people to share ontologies and parts of ontologies; it should be compatible with the Web's ways of achieving accessibility and internationalization; and it should be, relative to most prior knowledge representation systems, easy to get started with, non-proprietary, and open..." See: "W3C Releases Candidate Recommendations for Web Ontology Language (OWL)."
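The "reasoning" Clark describes can be illustrated in miniature. What follows is a hedged sketch, not real OWL syntax or a real DL reasoner: a toy class hierarchy (the class names are invented) and a transitive-closure walk of the kind a Description Logic engine automates when it computes class subsumption.

```python
# A minimal sketch of subsumption reasoning: from asserted subclass
# axioms, infer every class that subsumes a given class. Real OWL
# reasoners handle far richer axioms, but the core idea is the same.
subclass_of = {            # asserted axioms: class -> direct superclass
    "Dachshund": "Dog",
    "Dog": "Mammal",
    "Mammal": "Animal",
}

def ancestors(cls):
    """All classes that subsume cls, via transitive closure."""
    result = []
    while cls in subclass_of:
        cls = subclass_of[cls]
        result.append(cls)
    return result

# Assert an individual's class, then derive its implied memberships.
print(ancestors("Dachshund"))   # ['Dog', 'Mammal', 'Animal']
```

Nothing here states "a Dachshund is an Animal" directly; that fact is inferred, which is exactly the kind of machine-derived conclusion an ontology language is for.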
[August 20, 2003] "OWL Ascends Within Standards Group." By Paul Festa. In CNET News.com (August 18, 2003). "As part of its ongoing effort to give digital documents meaning that computers can understand, the Web's leading standards body advanced a key protocol as a candidate recommendation. The World Wide Web Consortium's (W3C) Web Ontology Language (OWL), a revision of the DAML+OIL Web ontology language, forms just one part of what the consortium calls its 'growing stack' of Semantic Web recommendations. The W3C for years has braved skepticism directed at its Semantic Web initiative, which aims to get computers to 'understand' data rather than to just transfer, store and display documents for computer users. Other documents in the Semantic Web stack include the Extensible Markup Language (XML), a general-purpose W3C recommendation for creating specialized markup languages, and the Resource Description Framework (RDF), which integrates different methods of describing data. OWL, by contrast, goes a step beyond existing recommendations to provide for more detailed descriptions of content. 'OWL can be used to explicitly represent the meaning of terms in vocabularies and the relationships between those terms,' according to the W3C's OWL overview, the first of the set of six OWL drafts released Monday. 'This representation of terms and their interrelationships is called an ontology. OWL has more facilities for expressing meaning and semantics than XML (and) RDF...and thus OWL goes beyond these languages in its ability to represent machine interpretable content on the Web'..." See details in the news story "W3C Releases Candidate Recommendations for Web Ontology Language (OWL)."
[August 20, 2003] "Building Interoperable Web Services: WS-I Basic Profile 1.0." By Jonathan Wanagel, Andrew Mason, Sandy Khaund, Sharon Smith, RoAnn Corbisier, and Chris Sfanos (Microsoft Corporation). From the Microsoft Prescriptive Architecture Group (PAG). Series: Patterns & Practices. August 12, 2003. 133 pages. "This guide covers WS-I Basic Profile contents, use within Microsoft development tools, coding compliance challenges, degrees of freedom for customers and best options based on technical and non-technical requirements. The Guide is intended to help software architects and developers design and code Web services that are interoperable. We emphasize "interoperable" because we assume that you already understand how to implement a Web service. Our goal is to show you how to ensure that your Web service will work across multiple platforms and programming languages and with other Web services. Our philosophy is that you can best achieve interoperability by adhering to the guidelines set forth by the Web Services Interoperability (WS-I) organization in their Basic Profile version 1.0. In this book, we will show you how to write Web services that conform to those guidelines. Focusing on interoperability means there are some Web service issues that fall outside the scope of the discussion. These issues include security, performance optimization, scalability, and bandwidth conservation..." Also available in PDF format... To encourage interoperability, the WS-I is creating a series of profiles which will define how the underlying components of any Web service must work together. 
Chapter 2 [of this Guide] discusses the first of these profiles, called the Basic Profile, and includes the following topics: (1) The Basic Profile's underlying principles; (2) An explanation of the WS-I usage scenarios; (3) An explanation of the WS-I sample application, which demonstrates how to write a compliant Web service; (4) An explanation of the testing tools, which check that your implementation follows the Basic Profile guidelines. Chapter 3 lists some general practices you should follow for writing Web services or clients that conform to Basic Profile. Chapter 4 assigns each of the profile's rules to one of four possible levels of compliancy and, on a rule-by-rule basis, shows how to adjust your code to make your Web service comply with the profile's rules. Chapter 5 assigns each of the profile's rules to one of four possible levels of compliancy and, on a rule-by-rule basis, shows how to adjust your code to make your Web service client comply with the profile's rules. Appendix A groups the Basic Profile's rules according to their level of compliancy for implementing a Web service. Appendix B groups the Basic Profile's rules according to their level of compliancy for implementing a Web service client..." See "WS-I Releases Basic Profile 1.0a Final Specification for Interoperable Web Services."
[August 20, 2003] "Canonical Situation Data Format: The Common Base Event." By IBM Staff Members: David Ogle (Autonomic Computing), Heather Kreger (Emerging Technologies), Abdi Salahshour (Autonomic Computing), Jason Cornpropst (Tivoli Event Management), Eric Labadie (WSAD PD Tooling), Mandy Chessell (Business Integration), Bill Horn (IBM Research - Yorktown), and John Gerken (Emerging Technologies). Reference: ACAB.BO0301.1.1. Copyright (c) International Business Machines Corporation. 66 pages. With XML Schema. IBM submission to the OASIS Web Services Distributed Management TC. "This document defines a common base event (CBE) and supporting technologies that define the structure of an event in a consistent and a common format. The purpose of the CBE is to facilitate effective intercommunication among disparate enterprise components that support logging, management, problem determination, autonomic computing and e-business functions in an enterprise. This document specifies baseline that encapsulate properties common to a wide variety of events, including business, autonomic, management, tracing and logging type events. The event format of the event is expressed as an XML document using UTF-8 or 16 encoding. This document is prescriptive about the format and content of the data that is passed or retrieved from component. However, it is not prescriptive about the ways in which how individual applications are to store their data locally. Therefore, the application requirement is only to be able to generate or render events in this format, not necessarily to store them in this format. The goal of this effort is to ensure the accuracy, improve the detail and standardize the format of events to assist in designing robust, manageable and deterministic systems. The results are a collection of specifications surrounding a 'Common Base Event' definition that serves as a new standard for events among enterprise management and business applications... 
The goal of this work is to provide more than just an element definition for a common event. In addition, an XML schema definition is provided. This document's scope is limited to the format and content of the data; how the data is sent and received and how an application processes the data are outside the scope of this document... When a situation occurs, a 3-tuple must be reported: (1) the identification of the component that is reporting the situation, (2) the identification of the component that is experiencing the situation (which might be the same as the component that is reporting the situation), and (3) the situation itself... The sourceComponentId is the identification of the component that was affected or impacted by the event or situation. The data type for this property is a complex type, as described by the ComponentIdentification type, that provides the data required to uniquely identify a component... The reporterComponentId is the identification of the component that reported the event or situation on behalf of the affected component. The data type for this property is likewise the ComponentIdentification complex type, which provides the data required to uniquely identify a component... The situationInformation is the data that describes the situation reported by the event. The situation information includes a required set of properties or attributes that are common across product groups and platforms, yet architected and flexible enough to allow for adaptation to product-specific requirements..." See also the note from Thomas Studwell posted 2003-08-20 to the OASIS WSDM TC list ['IBM Submits Common Base Events Specification to WS-DM TC']: IBM is pleased to announce the submission of the 'Canonical Situation Format: Common Base Event Specification' (CBE) to the Web Services Distributed Management Technical Committee (WS-DM) of OASIS.
This submission has been developed in collaboration with a number of industry leaders and is being supported in this submission by Computer Associates International, and Talking Blocks, Inc., both key members of the WS-DM TC. This submission will be moved for acceptance by the WS-DM TC for consideration in the WS-DM TC standards on Thursday, August 21, 2003. The general principles behind the CBE specification were presented to the WS-DM TC on July 28 [2003] during the WS-DM TC face to face meeting..." See also "Management Protocol Specification." Update 2003-10: References and Additional Information in the IBM/Cisco press release. [source .DOC]
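The 3-tuple the CBE abstract describes maps naturally onto an XML event document. The sketch below is hedged: the element and attribute names are taken from the property names quoted above (sourceComponentId, reporterComponentId, situation) plus invented component details, and the real CBE schema differs in its full structure, so treat this as illustrative only.

```python
# Hedged sketch: serialize the reporter/source/situation 3-tuple as a
# small XML event document. Element names follow the property names the
# abstract quotes; all component values are hypothetical.
import xml.etree.ElementTree as ET

event = ET.Element("CommonBaseEvent")
ET.SubElement(event, "sourceComponentId",      # component experiencing the situation
              {"component": "DB2", "location": "host-a.example.com"})
ET.SubElement(event, "reporterComponentId",    # component reporting on its behalf
              {"component": "TivoliAgent", "location": "host-a.example.com"})
ET.SubElement(event, "situation",              # the situation itself
              {"categoryName": "StartSituation"})

# The spec requires UTF-8 or UTF-16; emit UTF-8 here.
xml_bytes = ET.tostring(event, encoding="utf-8")
print(xml_bytes.decode("utf-8"))
```

Note that only the wire format is fixed: per the spec's stated scope, an application may store its events however it likes, as long as it can render them in this form.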
[August 20, 2003] "[Unicode] Identifier and Pattern Syntax." By Mark Davis. Public review draft from the Unicode Technical Committee. Reference: Proposed Draft, Unicode Technical Report #31. Date: 2003-07-18. "This document describes specifications for recommended defaults for the use of Unicode in the definitions of identifiers and in pattern-based syntax. It incorporates the Identifier section of Unicode 4.0 (somewhat reorganized) and a new section on the use of Unicode in patterns. As a part of the latter, it presents recommended new properties for addition to the Unicode Character Database. Feedback is requested both on the text of the new pattern section and on the contents of the proposed properties... A common task facing an implementer of the Unicode Standard is the provision of a parsing and/or lexing engine for identifiers. To assist in the standard treatment of identifiers in Unicode character-based parsers, a set of specifications is provided here as a recommended default for the definition of identifier syntax. These guidelines are no more complex than current rules in the common programming languages, except that they include more characters of different types. In addition, this document provides a proposed definition of a set of properties for use in defining stable pattern syntax: syntax that is stable over future versions of the Unicode Standard. There are many circumstances where software interprets patterns that are a mixture of literal characters, whitespace, and syntax characters. Examples include regular expressions, Java collation rules, Excel or ICU number formats, and many others. These patterns have been very limited in the past, and forced to use clumsy combinations of ASCII characters for their syntax. As Unicode becomes ubiquitous, some of these will start to use non-ASCII characters for their syntax: first as more readable optional alternatives, then eventually as the standard syntax. 
For forwards and backwards compatibility, it is very advantageous to have a fixed set of whitespace and syntax code points for use in patterns. This follows the recommendations that the Unicode Consortium made regarding completely stable identifiers, and the practice that is seen in XML 1.1. In particular, the consortium committed to not allocating characters suitable for identifiers in the range 2190..2BFF, which is being used by XML 1.1. With a fixed set of whitespace and syntax code points, a pattern language can then have a policy requiring all possible syntax characters (even ones currently unused) to be quoted if they are literals. This policy preserves the freedom to extend the syntax in the future by using those characters. Past patterns on future systems will always work; future patterns on past systems will signal an error instead of silently producing the wrong results..." Note: See also the 2003-08-20 notice from Rick McGowan (Unicode, Inc.), said to be relevant to anyone dealing with programming languages, query specifications, regular expressions, scripting languages, and similar domains: "The Proposed Draft UTR #31: Identifier and Pattern Syntax will be discussed at the UTC meeting next week. Part of that document (Section 4) is a proposal for two new immutable properties, Pattern_White_Space and Pattern_Syntax. As immutable properties, these would not ever change once they are introduced into the standard, so it is important to get feedback on their contents beforehand. The UTC will not be making a final determination on these properties at this meeting, but it is important that any feedback on them is supplied as early in the process as possible so that it can be considered thoroughly. The draft is found [online] and feedback can be submitted as described there..." General references in "XML and Unicode."
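The report's "recommended default" for identifiers is visible in modern languages: Python 3's own identifier rules follow the UAX #31 default (an XID_Start character followed by XID_Continue characters), and `str.isidentifier()` exposes that check directly. A small demonstration:

```python
# str.isidentifier() applies UAX #31-style default identifier syntax:
# identifiers are not limited to ASCII, but digits cannot lead, and
# syntax/whitespace characters are excluded.
candidates = ["naïve", "Ω_total", "foo2", "2foo", "a-b", "x y"]
for name in candidates:
    print(name, name.isidentifier())
# naïve, Ω_total, foo2 -> True; 2foo, a-b, x y -> False
```

This is exactly the shift the report anticipates: non-ASCII characters become first-class in identifiers, while a stable set of syntax and whitespace code points stays reserved for the pattern language itself.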
[August 19, 2003] "XML for e-Business." By Eve Maler (Sun Microsystems, Inc). Tutorial presentation. July 2003. 105 pages/slides. ['This tutorial was delivered at the CSW Informatics XML Summer School on 28-July-2003, and subsequently edited slightly to incorporate fixes, notes, and timestamps.'] The presentation provides an opportunity to: "(1) learn about the Universal Business Language (UBL) and its significance to, and place in, modern e-business; (2) study UBL's design center and underlying model -- a model that may be useful for many information domains; (3) study UBL as an application of XML, and its lessons for other large XML undertakings; (4) take a look at some real UBL inputs and outputs along the way... UBL is an XML-based business language standard; it leverages knowledge from existing EDI and XML B2B systems; it applies across all industry sectors and domains of electronic trade; it's modular, reusable, and extensible in XML-aware ways; it's non-proprietary and committed to freedom from royalties; it is intended to become a legally recognized standard for international trade... The Electronic Business XML initiative (ebXML) is a joint 18-month effort of OASIS and UN/CEFACT, concluding in May 2001. The work continues in several forums today with over 1000 international participants; the ebXML vision is for a global electronic marketplace where enterprises of any size, anywhere, can find each other electronically and conduct business by exchanging XML messages... The ebXML stack for business web services includes: Message contextualization [Context methodology]; Standard messages [Core components]; Business agreements [CPPA]; Business processes [BPSS]; Packaging/transport [ebMS]... The ebXML Core Components Technical Specification is at version 1.90; it is syntax neutral and ready for mapping. This includes the Context Methodology work, which likewise is syntax neutral rather than syntax bound. 
UBL proposes to flesh out the ebXML stack, aligning the UBL Context Methodology with the ebXML Context Methodology and the UBL Library with the ebXML Core Components... The ebXML Core Components substrate allows for correlation between different syntactic forms of business data that has the same meaning and purpose; UBL is striving to use the CCTS metamodel accurately... UBL offers important and interesting solutions: as a B2B standard, it is user-driven, with deep experience and partnership resources to call on; it is committed to truly global trade and interoperability; its standards process is transparent. As an XML application, it is layered on existing successful standards; it is tackling difficult technical problems without losing sight of the human dimension..." [adapted/excerpted from the .PPT version] See the canonical source files in OpenOffice and Microsoft PPT formats. On UBL, see: (1) OASIS Universal Business Language TC website; (2) general references in "Universal Business Language (UBL)." [cache .PPT]
[August 19, 2003] "Turn User Input into XML with Custom Forms Using Office InfoPath 2003." By Aaron Skonnard. In Microsoft MSDN Magazine (September 2003). "Office InfoPath 2003 is a new Microsoft Office product that lets you design your own data collection forms that, when submitted, turn the user-entered data into XML for any XML-supporting process to use. With an InfoPath solution in place, you can convert all those commonly used paper forms into Microsoft Office-based forms and end the cycle of handwriting and reentering data into your systems. Today organizations are beginning to realize the value of the mountains of data they collect every day, how hard it is to access it, and are striving to mine it effectively. InfoPath will aid in the design of effective data collection systems... The Web Services platform builds on XML by using it for information exchange over protocols like TCP, HTTP, SMTP, and potentially many others. Combining XML with these open protocols makes it possible to build an infrastructure for sharing information between business processes in a standard way. All that is needed to reap the benefits across the enterprise is an easy way to get previously hand-written data into XML. InfoPath, previously known as XDocs, is a new member of the Microsoft Office System of products that let's you do just that. InfoPath provides an environment for designing forms built around XML Schema or Web Services Description Language (WSDL) definitions. In a matter of seconds, you can use InfoPath to build a new form that's capable of outputting XML documents conforming to an XML Schema Definition (XSD) or communicating with a Web Service conforming to a WSDL definition. XML Web Services and InfoPath can be used together to replace their legacy information-gathering techniques. InfoPath is chock-full of functionality, including rich client functionality and off-line capabilities that surpass those of traditional Web Forms. 
Best of all it's much easier to use than traditional Web Services development environments... InfoPath makes it easy for anyone to design, publish, and fill out electronic forms based on XML and Web Services technology, which offers many advantages over traditional techniques used today... This article will focus on the main features of InfoPath..."
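InfoPath's core idea, user-entered field values rendered as an XML document that any downstream process can consume, can be sketched in a few lines. The form fields and element names below are hypothetical, not taken from any real InfoPath form or schema:

```python
# A minimal sketch of form input becoming XML: each field in a
# hypothetical expense-report form maps to one child element.
import xml.etree.ElementTree as ET

form_input = {"name": "Ada Lovelace", "department": "Engineering", "amount": "125.00"}

report = ET.Element("expenseReport")
for field, value in form_input.items():
    ET.SubElement(report, field).text = value   # special characters are escaped for us

print(ET.tostring(report, encoding="unicode"))
```

The point of the article is that InfoPath does this mapping declaratively, driven by an XSD or WSDL definition, so the form designer never writes serialization code at all.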
[August 19, 2003] "The Trojan Document." By Erika Brown. In Forbes Magazine (August 18, 2003). "Bruce Chizen [President and Chief Executive Officer of Adobe Systems Incorporated] is pushing hard to make Adobe more relevant to big business. It's a bold bet that puts the company directly in Microsoft's way. Contractors used to dread getting approval for a Wal-Mart parking lot off a Kansas highway. They had to drive to a local Department of Transportation office, fill out multiple forms and photocopy blueprints; the stack of paper would be mailed to a district office, then to engineering, then to state-agency designers and back again. The ordeal took two months. Six months ago the agency began using new forms software from Adobe Systems, and the process has been cut to three weeks. Now a transportation official scans the contractor's designs, saves the file in Adobe's Portable Document Format, or PDF, and submits it online with the proper electronic forms. As each state official puts his digital signature to the plan, the file sends itself to the next person for approval and ultimately to a database at the Department of Transportation headquarters. 'These forms are so smart they're like their own applications,' says Cynthia Wade, head of technology for the department... Chizen spent the last two years redesigning products, replacing sales staff and buying up smaller firms to gird Adobe for a new assault on the corporate market. The grand plan: Convince companies that every single document they produce should be turned into an Adobe PDF. It used to be that a document created in Acrobat was the only thing that could become a PDF. Now, with Adobe's new software, a Word memo, an Excel spreadsheet, a Web site, a videoclip or a hybrid combination of all four formats can be converted to a PDF. 
Adobe has begun selling software that gives any of these documents the ability to be read by Adobe Reader, as well as tell company servers where to send itself, who can read it, who has made changes to it and what data within it should go into which part of the database. 'The ubiquity of Reader means we can build more applications to take advantage of that platform,' says Chizen. 'It's like what Microsoft has in Office.' Well, not quite. But at least Chizen is showing real chutzpah in stepping between Microsoft and its customers. Adobe has managed to get by for two decades without incurring the wrath of Redmond. Now Microsoft is paying attention. Its new electronic-forms product, InfoPath, is due out later this year with the next version of Office. Like Acrobat, it will use the Internet programming language XML to make forms more interactive but, in typical Microsoft fashion, InfoPath is designed to work within Office and doesn't read Adobe forms... The promotions group at Macy's West has been testing the new Acrobat Pro for two months. Designers, art directors and buyers are huddling over ad copy and catalog pages online. Michael Margolies, Macy's technology director, expects the proofing of print materials will go from days to minutes. Pfizer is using Adobe software to manage its clinical trials. A doctor types into a PDF form on Pfizer's Web site, making the data on patient progress work in real time. Chizen's hunt for new revenue is off to a good start..."
[August 19, 2003] "Acrobat Challenges InfoPath. Adobe Takes a Giant Step Forward Into Direct Competition with Microsoft." By Jon Udell. In InfoWorld (August 15, 2003). "I've always regarded Adobe's PDF as an odd creature, neither fish nor fowl. I'm intensely annoyed when I have to view a multicolumn PDF document onscreen. Some monitors rotate into a portrait orientation, but mine -- and probably yours - are landscape devices. Every time I scroll from the bottom of column No. 1 to the top of column No. 2, I taste the worm at the PDF apple's core... So I was delighted to learn, in a recent conversation with Adobe senior product manager Chuck Myers, that the ongoing integration of XML into PDF is about to shift into high gear... The backstory includes initiatives such as XMP (Extensible Metadata Platform), which embeds XML metadata in PDF files; and Tagged PDF, which enables PDF documents to carry the structural information that can be used, for example, to reflow a three-column portrait layout for landscape mode. So far, though, XML data hasn't been a first-class citizen of the PDF file --especially those PDF files that represent business forms. Acrobat 5 does support interactive forms. It also has a data interchange format called FDF (Forms Data Format), for which an XML mapping exists. But as Myers wryly observes, 'There's one schema, from Adobe, we hope you like it.' Acrobat 6 blasts that limitation out of the water. It supports arbitrary customer-defined schemas, Myers told me. That's a huge step forward, and brings Acrobat into direct competition with Microsoft's forthcoming InfoPath. Look at Adobe's interactive income tax form. That document is licensed, by the Document Server for Reader Extensions, to unlock the form fill-in and digital signature capabilities of the reader. Filling in a form and then signing it digitally is an eye-opening experience. 
It's more interesting now that the form's data is schema-controlled and, Myers adds, can flow in and out by way of WSDL-defined SOAP transactions. The only missing InfoPath ingredient is a forms designer that nonprogrammers can use to map between schema elements and form fields. That's just what the recently announced Adobe Forms Designer intends to be. I like where Adobe is going. The familiarity of paper forms matters to lots of people..." See: (1) "Extensible Metadata Platform (XMP)"; (2) Enhanced Adobe XML Architecture Supports XML/PDF Form Designer and XML Data Package (XDP)"; (3) "Microsoft Office 11 and InfoPath [XDocs]."
[August 19, 2003] "Hands-on XForms. Simplifying the Creation and Management of XML Information." By Micah Dubinko (Cardiff Software). In XML Journal Volume 4, Issue 8 (August 2003). "Organizations have evolved a variety of systems to deal with the increasing levels of information they must regularly process to remain competitive. Business Process Management (BPM) systems presently take a wide variety of shapes, often including large amounts of ad hoc scripting and one-off implementations of business rules. Such systems tend to be developed incrementally, and pose a significant obstacle to continued development and maintenance. A World Wide Web Consortium (W3C) specification called XForms aims to change this situation. This article compares XForms to ad hoc solutions to produce a real-life application: the creation of XML purchase orders... Of the several efforts that are under way to define XML vocabularies for business, the most promising seems to be UBL, the Universal Business Language. At the expense of being slightly verbose, the vocabularies defined by UBL do a remarkable job of capturing all of the minor variations that occur in real-world business documents across diverse organizations. For the sample application I chose a purchase order... Microsoft InfoPath, currently in beta as part of Office System 2003, offers a better user experience than HTML forms, but still relies heavily on scripting through an event-driven model. As the remainder of this article will show, a declarative approach as used in XForms can eliminate a substantial amount of complexity from the overall solution. Since XForms is designed to be used in concert with a 'host language,' I chose a combination of XHTML 1.1 and XForms for the solution, even though a DTD for the combined language isn't available... The two main challenges facing developers deploying XForms solutions today are deciding on a host language and configuring stylesheets for all target browsers. 
Eventually XHTML 2.0, including XForms as the forms module, will be finalized, providing a known and stable target for browsers to implement and designers to write toward. Until that time, however, a reasonable approach is to use XForms elements within XHTML 1.0 or 1.1, without the luxury of DTD validation... XForms has made vast strides in 2003, becoming a technology suitable for production use by early adopters. Already, businesses are using XForms to produce real documents. The combination of an open standard with a wide variety of both free and commercial browsers makes a powerful business case for deploying XForms solutions. Unlike many other XML standards, XForms has remained small, simple, and true to its roots, addressing only well-known and well-understood problems, and providing a universal means to express solutions to these problems. Part of the appeal of XForms is the reuse of proven technologies, such as XPath, for which developers are more willing to invest the time necessary for learning. XForms can also leverage existing XML infrastructure, including XML Schema and Web services components..." A fuller treatment is presented in "UBL in XForms: A Worked Example." See also: (1) W3C XForms: The Next Generation of Web Forms; (2) general references in "XML and Forms." [alt URL]
[August 19, 2003] "XForms Building Blocks." By Micah Dubinko (Cardiff Software). Draft Chapter 2 (20 pages) from XForms Essentials: Gathering and Managing XML Information, [to be] published by O'Reilly & Associates as part of the Safari Bookshelf. 'More Than Forms; A Real-World Example [based upon UBL]; Host Language Issues; Linking Attributes. "This chapter goes into greater detail on the concepts underlying the design of XForms, as well as practical issues that come into play, including a complete, annotated real-world example. A key concept is the relationship between forms and documents, which will be addressed first. After that, this chapter elaborates on the important issue of host languages and how XForms integrates them... Despite the name, XForms is being used for many applications beyond simple forms. In particular, creating and editing XML-based documents is a good fit for the technology. A key advantage of XML-based documents over, say, paper or word processor templates, is that an entirely electronic process eliminates much uncertainty from form processing. Give average 'information workers' a paper form, and they'll write illegibly, scribble in the margins, doodle, write in new choices, and just generally do things that aren't expected. All of these behaviors are manually intensive to patch up, in order to clean the data to a point where it can be placed into a database. With XForms, it is possible to restrict the parts of the document that a given user is able to modify, which means that submitted data needs only a relatively light double-check before it can be sent to a database. One pitfall to avoid, however, is a system that is excessively restrictive, so that the person filling the form is unable to accurately provide the needed data. When that happens, users typically either give bad information, or avoid the electronic system altogether..." 
About the book XForms Essentials: "The use of forms on the web is so commonplace that most user interactions involve some type of form. XForms -- a combination of XML and forms -- offers a powerful alternative to HTML-based forms. By providing excellent XML integration, including XML Schema, XForms allows developers to create flexible, web-based user-input forms for a wide variety of platforms, including desktop computers, handhelds, information appliances, and more. XForms Essentials is an introduction and practical guide to the new XForms specification. Written by Micah Dubinko, a member of the W3C XForms working group and an editor of the specification, the book explains the how and why of XForms, showing readers how to take advantage of them without having to write their own code. You'll learn how to integrate XForms with both HTML and XML vocabularies, and how XForms can simplify the connection between client-based user input and server-based processing. XForms Essentials begins with a general introduction to web forms, including information on history and basic construction of forms. The second part of the book serves as a reference manual to the XForms specification. The third section offers additional hints, guidelines, and techniques for working with XForms..." See also the preceding bibliographic entry, online version of the book, and the author's XML and XForms blog.
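The "relatively light double-check" of XForms-submitted data described above can be sketched in a few lines. This is an illustrative server-side sketch, not code from the book; the element names and the `light_check` helper are hypothetical, standing in for whatever instance data a real form would submit.

```python
import xml.etree.ElementTree as ET

# Hypothetical submitted XForms instance data (element names are illustrative only).
SUBMITTED = """<order>
  <customer>Acme Corp</customer>
  <quantity>12</quantity>
  <notes>rush delivery</notes>
</order>"""

# Fields the form allowed the user to modify; anything else is rejected.
EDITABLE = {"customer", "quantity", "notes"}

def light_check(xml_text, editable):
    """Light server-side double-check of form-submitted XML before database insert."""
    root = ET.fromstring(xml_text)
    problems = []
    for child in root:
        if child.tag not in editable:
            problems.append("unexpected field: %s" % child.tag)
        elif not (child.text or "").strip():
            problems.append("empty field: %s" % child.tag)
    return problems

print(light_check(SUBMITTED, EDITABLE))  # → []
```

Because XForms constrains what the user can edit in the first place, the server check stays this small; a paper form would need far heavier cleanup.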
[August 19, 2003] "Object-Oriented XSLT: A New Paradigm for Content Management." By Pietro Michelucci. In XML Journal Volume 4, Issue 8 (August 2003). "What could be better for managing content than separating data from presentation? How about separating data from data? XSLT can actually be used to allow for different levels of data abstraction; this can reduce the complexity of managing Web content by an order of magnitude and facilitate code reuse. What I'm talking about here is object-oriented XSLT... Isolating content from presentation was the original purpose of stylesheet languages. In the conventional approach, there is just one data layer (XML) and one presentation layer (HTML), with XSL Transformations (XSLT) in between. This two-layer architecture simplifies Web site management by allowing content providers to edit their data without concern for stylistic issues, and, conversely, by permitting graphics designers to set the visual tone without regard for specific content. While the two-layer model has been fruitful, XSL Transformations (XSLT) empower us to extend data abstraction through the use of multiple data layers. Toward this end, I have created a general-purpose XSLT that you can easily use to apply multiple serial XSL transformations to an XML data document. OOX, like OOP, isn't just about stringing together multiple transformations, using extra data layers, or treating schemas like interfaces. It's an approach to content management and Web architecture that involves the judicious application of data abstraction and the reuse of transformation objects. When applied strategically, OOX can result in a low-maintenance Web site that is quickly built, logically organized, and robust to structural content changes... there are feature-rich software tools on the market to facilitate Web development and content management. Many of these tools function by storing proprietary metadata, which describe both structural and thematic aspects of the Web site. 
For example, metadata might be used to programmatically maintain navigation links on all pages of a Web site. These metadata are not directly accessible to the Web developer, so even though the software uses them internally for content management, they may impede fine-level control. Furthermore, migrating from one of these content management tools to another can prove vexing because the tools often do not recognize each other's metadata. In contrast to most content management tools, OOX relies exclusively upon W3C-based technologies. Therefore, in adopting OOX as a Web development paradigm, it is possible to exercise complete control over your Web site without getting locked into proprietary technology. Furthermore, flexible tools can work in concert with OOX development. OOX may not be suitable for all developers. But if you have dabbled in XML and aren't afraid to explore the power afforded by XSLT, you might be surprised at what the latest addition to alphabet soup has to offer for content management..." [alt URL]
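The serial-transformation pipeline at the heart of OOX can be sketched independently of any XSLT engine. Python's standard library has no XSLT processor (with lxml you would chain `lxml.etree.XSLT` objects), so the sketch below uses plain functions as stand-ins for the stylesheet "transformation objects"; all names here are hypothetical.

```python
from functools import reduce

# Each "transformation object" maps one data layer to the next.
# In real OOX these would be XSLT stylesheets; plain functions stand in here.
def add_navigation(doc):
    """Intermediate data layer: enrich raw content with site-wide navigation."""
    doc = dict(doc)
    doc["nav"] = ["home", "about"]
    return doc

def render_html(doc):
    """Presentation layer: final transformation from data to HTML."""
    items = "".join("<li>%s</li>" % i for i in doc["nav"])
    return "<html><body><h1>%s</h1><ul>%s</ul></body></html>" % (doc["title"], items)

def pipeline(*transforms):
    """Compose transforms so each output feeds the next (serial transformation)."""
    return lambda doc: reduce(lambda d, t: t(d), transforms, doc)

site = pipeline(add_navigation, render_html)
print(site({"title": "OOX Demo"}))
```

The point of the extra layer is reuse: `add_navigation` can feed many different presentation transforms, so a structural change to navigation touches one transformation object rather than every page.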
[August 19, 2003] "XML MetaData for Simplifying Web Development." By George M. Pieri and Arnoll Solano. In JavaPro Magazine (August 2003). ['Achieve more efficient code development and maintenance while freeing yourself from object properties and getting new functionality without recompiling.'] "Web application development has become time-consuming. Making a simple change to display a new database field on screen often involves recompiling business classes and then all the resources that use those business classes. You can simplify this Java development process by using XML to deliver your data and to describe the business objects that are responsible for building the data. Using metadata to describe your business objects and presentation components can speed up development... Much of application development revolves around building business objects. These objects usually represent the entities of the system such as customers, invoices, and products. The responsibilities of a typical business object are to retrieve, add, update, delete, and validate data. The data usually comes from a data source, which can be a database such as Microsoft SQL Server or Oracle. In the applications we developed, we use the term databean to describe the typical business object because its primary responsibilities revolve around data. In building business objects, or databeans, it is important to make them stateless to free you from the time-consuming process of maintaining properties. Stateless objects have no properties or instance variables that maintain state, which saves you from having to add get and set methods every time an end user requests a new column to be displayed on one of your Web pages. All the data is returned each time a method is called. This characteristic also has the extra benefit of improving performance because the object can be reused quickly. It is possible to use XML to return the data without having business object properties. 
The start and end tags around the data field represent the field name, which frees you from having to maintain field names ... Representing your visual components with metadata has many advantages. It enables you to add a new column to your view XML and within minutes have it show up automatically on the grid. No longer do you have to modify the HTML and then make sure that everything lines up correctly. The entire color of the grid can be changed just by modifiying the view metadata along with fonts and many other attributes. It is also easy to identify what columns are used for which screens and makes modifications quickly. Using XML to serve up your data helps you have business objects without properties, which speeds up code development and, more importantly, code maintainence. In addition, using XML to describe your data has many more benefits. Metadata can be used to describe your business objects by abstracting out their functionality, DataBean.xml, which allows you to change the SQL behind your business objects without recompiling code. It can also be helpful in describing your presentation layer such as menus and grids that are commonly developed. We have used these approaches successfully and have greatly reduced our development time and have become more efficient at making code modifications..." [alt URL]
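The "stateless databean" idea above — return all data as XML whose tags carry the field names, rather than maintaining getters and setters per field — can be sketched briefly. The article's examples are Java; this is an illustrative Python equivalent, and the `databean_xml` helper and the sample row are hypothetical.

```python
import xml.etree.ElementTree as ET

# Hypothetical row fetched from a data source; column names become XML tags.
ROW = {"customer_id": "42", "name": "Acme Corp", "city": "Chicago"}

def databean_xml(entity, row):
    """Stateless 'databean': returns all data as XML, so there are no per-field
    properties to maintain -- the start/end tags carry the field names."""
    root = ET.Element(entity)
    for field, value in row.items():
        ET.SubElement(root, field).text = value
    return ET.tostring(root, encoding="unicode")

print(databean_xml("customer", ROW))
```

Displaying a new database column then means adding one key to the row (and one entry to the view metadata); nothing is recompiled.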
[August 19, 2003] "Discover Key Features of DOM Level 3 Core, Part 1. Manipulating and Comparing Nodes, Handling Text and User Data." By Arnaud Le Hors and Elena Litani (IBM). From IBM developerWorks, XML zone. August 19, 2003. ['In this two-part article, the authors present some of the key features brought by the W3C Document Object Model (DOM) Level 3 Core Working Draft and show you how to use them with examples in Java code. This first part covers manipulating nodes and text, and attaching user data onto nodes.'] "The Document Object Model (DOM) is one of the most widely available APIs. It provides a structural representation of an XML document, enabling users to access and modify its contents. The DOM Level 3 Core specification, which is now in Last Call status, is the latest in a series of DOM specifications produced by the W3C. It provides a set of enhancements that make several common operations much simpler to perform, and make possible certain things you simply could not do before. It also supports the latest version of different standards, such as Namespaces in XML, XML Information Set, and XML Schema, and thus provides a more complete view of the XML data in memory. The first part of this article covers operations on nodes; the second part focuses on operations on documents and type information, and explains how to use DOM in Xerces. We show you how DOM Level 3 Core can make your life easier when working with nodes, whether it is renaming a node, moving nodes from one document to another, or comparing them. We also show you how DOM Level 3 Core lets you access and modify the text content of your document in a more natural way than having to deal with Text nodes that tend to get in the way. Finally, we explain to you how you can use the DOM Level 3 Core to more easily maintain your own structure that is associated with the DOM... DOM Level 3 can do a lot of work for you. First, it allows you to store a reference to your application object on a Node. 
The object is associated with a key that you can use to retrieve that object later. You can have as many objects on a Node as you want; all you need to do is use different keys. Second, you can register a handler that is called when anything that could affect your own structure occurs. These are events such as a node being cloned, imported to another document, deleted, or renamed. With this, you can now much more easily manage the data you associate with your DOM. You no longer have to worry about maintaining the two in parallel. You simply need to implement the appropriate handler and let it be called whenever you modify your DOM tree. And you can do this with the flexibility of using a global handler or a different one on each node as you see fit. In any case, when something happens to a node on which you have attached some data, the handler you registered is called and provides you with all the information you need to update your own structure accordingly... In Part 2 [of the series], we will show you other interesting features of DOM Level 3 Core, such as how to bootstrap and get your hands on a DOMImplementation object without having any implementation-dependent code in your application, how the DOM maps to the XML Infoset, how to revalidate your document in memory, and how to use DOM Level 3 Core in Xerces..." Article also in PDF format. See: (1) W3C Document Object Model (DOM) website; (2) DOM Level 3 Core Issues List; (3) general references in "W3C Document Object Model (DOM)."
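The key/value-per-node user-data mechanism described above is a DOM Level 3 Java API (`Node.setUserData`/`getUserData` in implementations such as Xerces). Python's stdlib DOM does not implement it, so the sketch below shows the same idea with a hypothetical side table keyed by node; the `UserData` class is an illustration, not part of any DOM API.

```python
import xml.dom.minidom as minidom

# Python's stdlib DOM lacks DOM Level 3 setUserData/getUserData; this side
# table mimics the pattern: any number of keyed objects attached per node.
class UserData:
    def __init__(self):
        self._table = {}          # node -> {key: application object}

    def set(self, node, key, value):
        self._table.setdefault(node, {})[key] = value

    def get(self, node, key):
        return self._table.get(node, {}).get(key)

doc = minidom.parseString("<catalog><item/></catalog>")
item = doc.getElementsByTagName("item")[0]

ud = UserData()
ud.set(item, "app-object", {"price": 9.99})
print(ud.get(item, "app-object"))  # → {'price': 9.99}
```

In a real DOM Level 3 implementation the third argument to `setUserData` is the handler the article describes, invoked when the node is cloned, imported, deleted, or renamed, so the attached data can be kept in sync automatically.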
[August 19, 2003] "Low Bandwidth SOAP." By Jeff McHugh. From O'Reilly WebServices.xml.com (August 19, 2003). "With the mobile phone industry reporting better than expected sales, and news that, by the end of this year, smart phones are expected to outsell hand-held computers, it should come as no surprise that wireless application development is on the rise. Sun recently announced that by the end of 2004 there may well be more than 200,000,000 Java-enabled mobile handsets. Yet, with all the attention being paid to these microdevices (i.e., low resource mobile devices), it's surprising to learn that a developer wishing to build a wireless application using XML, SOAP, and web services is left behind. Why is this? First, a microdevice by definition has an extremely limited amount of memory. Second, traditional packages such as Xerces (for XML) and Axis (for SOAP) are far too large and resource-intensive to work on microdevices. An examination of the Xerces.jar file should aptly demonstrate this fact; it's over one megabyte in size. Microdevices are simply too small to be expected to work with packages originally designed for desktop clients and servers. Fortunately, this issue is well recognized by the larger wireless community. Sun, in particular, is currently in the stage of finalizing JSR 172, a specification that addresses the use of XML, SOAP, and web services on microdevices. The downside is that, given past experience, it's not unreasonable to expect at least ten to twelve months to pass before finalization and widespread implementation. But that shouldn't deter anyone wishing to create a wireless application today, for doing so is quite possible using a powerful, free, and open source package readily available from Enhydra.org. This article explains the basics of building web service servers and clients using Enhydra's KSOAP implementation. A key ingredient for any web services application is SOAP. 
The problem with developing a wireless SOAP/XML application -- and the reason for the above-mentioned JSR 172 -- revolves around the following issues. First, the common XML and SOAP packages currently available are quite large and contain hundreds of classes. Second, these packages depend on features of the Java runtime that simply don't exist on a microdevice. I'm thinking specifically about the Connected Limited Device Configuration (CLDC) specification which did away with nearly the entire core set of Java classes normally present in the J2EE and J2SE distributions: AWT, Swing, Beans, Reflection, and most java.util and java.io classes have simply disappeared. The purpose of this 'bare bones' Java runtime is to accommodate the limited footprint of the KVM -- a low-memory virtual machine running on a microdevice. This is where Enhydra.org comes to the rescue. KSOAP and KXML are two packages available from the web site designed to enable SOAP and XML applications to run within a KVM. They are thin, easy to use, and well documented. Combined into a single jar file, they take up less than 42K... By leveraging KSOAP for your wireless application, you can help make it a more powerful and reliable one. Since much of the infrastructure is provided, you as the developer can spend more time focusing on the important aspects of development such as the business logic..."
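What keeps kSOAP under 42K is that a SOAP 1.1 request is just a small XML envelope. The sketch below builds one with Python's stdlib to show how little is actually on the wire; the `getQuote` method, `urn:quote` namespace, and parameters are hypothetical, and a real kSOAP client would construct the equivalent envelope in Java on the KVM.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_request(method, ns, params):
    """Build a minimal SOAP 1.1 request envelope: Envelope > Body > method call."""
    env = ET.Element("{%s}Envelope" % SOAP_NS)
    body = ET.SubElement(env, "{%s}Body" % SOAP_NS)
    call = ET.SubElement(body, "{%s}%s" % (ns, method))
    for name, value in params.items():
        ET.SubElement(call, name).text = str(value)
    return ET.tostring(env, encoding="unicode")

msg = soap_request("getQuote", "urn:quote", {"symbol": "SUNW"})
print(msg)
print(len(msg), "bytes")  # a few hundred bytes: workable over a low-bandwidth link
```

Parsing the matching response on the device is the job of a pull parser like kXML, which avoids building a full in-memory tree.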
[August 19, 2003] "J2ME Web Services Specification 1.0." JSR-000172. Proposed Final Draft 2. By Jon Ellis and Mark Young. Date: July 14, 2003, Revision 10. Release date: July 18, 2003. Copyright (c) 2003 Sun Microsystems, Inc. 86 pages. The specification has been developed under the Java Community Process (JCP) version 2.1 as Java Specification Request 172 (JSR-172). Comments to jsr-172-comments@sun.com. The specification builds on the work of others, specifically JSR-63 Java API for XML Processing and JSR-101 Java API for XML based RPC. "The broad goal is to provide two new capabilities to the J2ME platform: access to remote SOAP- and XML-based web services, and the parsing of XML data. There is great interest and activity in the Java community in the use of web services standards and infrastructures to provide the programming model for the next generation of enterprise services. There is considerable interest in the developer community in extending enterprise services out to J2ME clients... The main deliverables of the JSR-172 specification are two new, independent, optional packages: (1) an optional package adding XML Parsing support to the platform. Structured data sent to mobile devices from existing applications will likely be in the form of XML. In order to avoid including code to process this data in each application, it is desirable to define an optional package that can be included with the platform; (2) an optional package to facilitate access to XML based web services from CDC and CLDC based profiles. The goal of the 'JAXP Subset' optional package is to define a strict subset wherever possible of the XML parsing functionality defined in JSR-063 JAXP 1.2 that can be used on the Java 2 Micro Edition Platform (J2ME). XML is becoming a standard means for clients to interact with backend servers, their databases and related services. 
With its platform neutrality and strong industry support, XML is being used by developers to link networked clients with remote enterprise data. An increasing number of these clients are based on the J2ME platform, with a broad selection of mobile phones, PDAs, and other portable devices. As developers utilize these mobile devices more to access remote enterprise data, XML support on the J2ME platform is becoming a requirement. In order to provide implementations that are useful on the widest possible range of configurations and profiles, this specification is treating the Connected Limited Device Configuration (CLDC) 1.0 as the lowest common denominator platform... JAX-RPC is a Java API for interacting with SOAP based web services. This specification defines a subset of the JAX-RPC 1.1 specification that is appropriate for the J2ME platform. The functionality provided in the subset reflects both the limitations of the platform (memory size and processing power) and the limitations of the deployment environment (low bandwidth and high latency). The web services API optional package should not depend on the XML parsing optional package; it must be possible to deliver the web services optional package independent of XML parsing... The WS-I Basic Profile (WS-I BP) provides recommendations and clarifications for many specifications referenced by this specification, and its superset -- JAX-RPC 1.1. To provide interoperability with other web services implementations, JAX-RPC Subset implementations must follow the recommendations of the WS-I BP where they overlap with the functionality defined in this specification..." See other details in: (1) the original JSR document; (2) the news story "IBM Releases Updated Web Services Tool Kit for Mobile Devices."
[August 19, 2003] "Use of SAML in the Community Authorization Service." By Von Welch, Rachana Ananthakrishnan, Sam Meder, Laura Pearlman, and Frank Siebenlist. Working paper presented to the OASIS Security Services TC. August 19, 2003. 5 pages. "This document describes our use of SAML in the upcoming release of our Community Authorization Service. In particular we discuss changes we would like to see to SAML to address issues that have come up both with current and planned development. A virtual organization (VO) is a dynamic collection of resources and users unified by a common goal and potentially spanning multiple administrative domains. VOs introduce challenging management and policy issues, resulting from often complex relationships between local site policies and the goals of the VO with respect to access control, resource allocation, and so forth. In particular, authorization solutions are needed that can empower VOs to set policies concerning how resources assigned to the community are used -- without, however, compromising site policy requirements of the individual resources owners. The Community Authorization Service (CAS) is a system that we have developed as part of a solution to this problem. CAS allows for a separation of concerns between site policies and VO policies. Specifically, sites can delegate management of a subset of their policy space to the VO. CAS provides a fine-grained mechanism for a VO to manage these delegated policy spaces, allowing it to express and enforce expressive, consistent policies across resources spanning multiple independent policy domains. Both past and present CAS implementations build on the Globus Toolkit middleware for Grid computing, thus allowing for easy integration of CAS with existing Grid deployments. While our currently released implementation of CAS uses a custom format for policy assertions, the new version currently in development uses SAML to express policy statements. 
In this document we describe our use of SAML, along with some issues we have encountered in its use..." Note on CAS: "Building on the Globus Toolkit Grid Security Infrastructure (GSI), Community Authorization Service (CAS) allows resource providers to specify coarse-grained access control policies in terms of communities as a whole, delegating fine-grained access control policy management to the community itself. Resource providers maintain ultimate authority over their resources but are spared day-to-day policy administration tasks (e.g., adding and deleting users, modifying user privileges)... The second Alpha release (alphaR2) of the Community Authorization Service includes a CAS server, CAS user and administrative clients as well as a CAS-enabled GridFTP server. Other portions of the Globus Toolkit (e.g., the Gatekeeper, MDS, replica management) are not CAS-enabled at this time and are not included in this release... The Globus Toolkit uses the Grid Security Infrastructure (GSI) for enabling secure authentication and communication over an open network. GSI provides a number of useful services for Grids, including mutual authentication and single sign-on... GSI is based on public key encryption, X.509 certificates, and the Secure Sockets Layer (SSL) communication protocol. Extensions to these standards have been added for single sign-on and delegation. The Globus Toolkit's implementation of the GSI adheres to the Generic Security Service API (GSS-API), which is a standard API for security systems promoted by the Internet Engineering Task Force (IETF)..." See general references in "Security Assertion Markup Language (SAML)." [cache]
[August 19, 2003] "Use of SAML for OGSA Authorization." From the Global Grid Forum OGSA Security Working Group. Submitted for consideration as a recommendations document in the area of OGSA authorization. GWD-R, June 2003. 16 pages. "This document defines an open grid services architecture (OGSA) authorization service based on the use of the security assertion markup language (SAML) as a format for requesting and expressing authorization assertions. Defining standard formats for these messages allows for pluggability of different authorization systems using SAML. There are a number of authorization systems currently available for use on the Grid as well as in other areas of computing, such as Akenti, CAS, PERMIS, and VOMS. Some of these systems are normally used in decision push mode by the application -- they act as services and issue their authorization decisions in the form of authorization assertions that are conveyed, or pushed, to the target resource by the initiator. Others are used in decision pull mode by the application -- they are normally linked with an application or service and act as a policy decision maker for that application, which pulls a decision from them... With the emergence of OGSA and Grid Services, it is expected that some of these systems will become OGSA authorization services as mentioned in the OGSA Security Roadmap. OGSA authorization services are Grid Services providing authorization functionality over an exposed Grid Service portType. A client sends a request for an authorization decision to the authorization service and in return receives an authorization assertion or a decision. A client may be the resource itself, an agent of the resource, or an initiator or a proxy for an initiator who passes the assertion on to the resource. This specification defines the use of SAML as a message format for requesting and expressing authorization assertions and decisions from an OGSA authorization service. 
This process can be single or multi-step. In single step authorization, all the information about the requested access is passed in one SAML request to the authorization service. In multi-step authorization, the initial SAML request passes information about the initiator, and subsequent SAML requests pass information about the actions and targets that the initiator wants to access. The SAML AuthorizationDecisionQuery element is defined as the message to request an authorization assertion or decision, the DecisionStatement element is defined as the message to return a simple decision, and the AuthorizationDecisionStatement element as the method for expressing an authorization assertion. By defining standard message formats, the goal is to make these different authorization services pluggable, so that different authorization systems can be used interchangeably in OGSA services and clients..." See also "Security Architecture for Open Grid Services." [cache]
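The shape of the AuthorizationDecisionQuery message described above can be sketched with a few lines of XML construction. This is an illustrative sketch only: the subject DN, resource URI, and action are hypothetical, and a real query would carry additional required attributes and, typically, a digital signature.

```python
import xml.etree.ElementTree as ET

# SAML 1.x namespaces (protocol and assertion).
SAMLP = "urn:oasis:names:tc:SAML:1.0:protocol"
SAML = "urn:oasis:names:tc:SAML:1.0:assertion"

def authz_query(subject, resource, action):
    """Minimal sketch of a SAML 1.x AuthorizationDecisionQuery, as an OGSA
    client might send to an authorization service (many details omitted)."""
    q = ET.Element("{%s}AuthorizationDecisionQuery" % SAMLP, Resource=resource)
    subj = ET.SubElement(q, "{%s}Subject" % SAML)
    ET.SubElement(subj, "{%s}NameIdentifier" % SAML).text = subject
    ET.SubElement(q, "{%s}Action" % SAML).text = action
    return ET.tostring(q, encoding="unicode")

print(authz_query("/O=Grid/CN=Alice", "gridftp://host.example/data", "read"))
```

The service's reply would carry either a bare decision (the DecisionStatement extension above) or a full AuthorizationDecisionStatement assertion the client can push to the resource.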
[August 19, 2003] "Southwest Airlines Shows SAML's Promise." By Terry Allan Hicks, Ray Wagner, and Roberta J. Witty (Gartner). Gartner Research Note. Reference Number: FT-20-7798. August 13, 2003. ['Enterprises that manage large numbers of external identities should consider SAML-based cross-domain trust.'] "On 12 August 2003, Oblix, which develops identity-based security solutions, announced that Southwest Airlines has completed a large-scale implementation of the Oblix NetPoint identity management and access control product. The NetPoint implementation uses single sign-on based on the SAML standard to secure communications within Southwest's internal networks and with its suppliers and other external business partners. Southwest is one of the first to use SAML-enabled identity management on a large scale to perform cross-domain trust. This implementation also marks an early step in the movement toward federated identity management. However, this approach appears to deliver many of the real-world benefits of federated identity management without the use of additional technologies or standards, such as Liberty and WS-Federation. The Oblix system enables Southwest to vouch for the identity of an employee who accesses external partners' networks (for example, an aircraft mechanic looking for technical documentation). The partner grants session access to the Southwest employee without performing authentication on its own site. With this approach, Southwest enjoys enhanced employee productivity, and the external partner does not need to manage credentials for large numbers of outside users. According to some estimates, identity management solutions deliver an average three-year return on investment of as much as 300 percent. The use of standards such as SAML may drive up return on investment still further by offering cost savings for security administration, help desk support and application development..." Note also available in HTML format. 
See: (1) the announcement, "Southwest Airlines Deploys Industry Leading SAML Implementation On Oblix NetPoint. NetPoint SAML Solution Enables User Authentication and Authorization Across Corporate Extranets."; (2) general references in "Security Assertion Markup Language (SAML)."
[August 19, 2003] "A Web Services Strategy for Mobile Phones." By Nasseam Elkarra. From O'Reilly WebServices.xml.com (August 19, 2003). ['Planning to deploy information services on mobile phones? This article gives an overview of the various technologies and routes available for mobile web service development.'] "In most web services presentations, the speaker has a slide of a mobile phone, a PDA, a computer, and other devices communicating with a web service via SOAP and HTTP. You quickly envision a utopia of universal access but overlook the fact that your old Nokia doesn't do XML web services. If you have a J2ME-enabled phone connected to the Internet, it's very possible to interact with web services directly. However, the majority of mobile phone users do not have these phones, which means an alternative mode of access must be provided... VoiceXML is a language for building voice applications much like you hear when calling customer service hotlines. It is an XML-based standard developed by the W3C's Voice Browser Working Group. Most VoiceXML developer portals give you access to a phone number for testing your application; however, VoiceXML is not limited to phones and can actually be accessed by any VoiceXML-enabled client. This client can be the usual phone, but it could also be an existing Web browser with a built-in VoiceXML interpreter. A good example of this is the multimodal browser being developed by IBM and Opera based on the XHTML+Voice (X+V) proposed specification. The term 'multimodal' simply refers to multiple modes of interaction by extending user interfaces to include input from speech, keyboards, pointing devices, touch pads, electronic pens, and any other type of input device. The W3C also has a Multimodal Interaction Working Group that is developing standards to turn the concept of universal accessibility into a reality... 
The Wireless Application Protocol (WAP) is a set of standards to enable wireless access to Internet services from resource-constrained mobile devices. WAP provides an entire architecture to make a mini-Web possible by defining standards such as the Wireless Markup Language (WML) and WMLScript... The Wireless Messaging API (WMA) package gives you access to SMS functionality, but there are third-party packages that are more suitable for XML messaging. kSOAP, another open source project from Enhydra, is a lightweight SOAP implementation suitable for J2ME... With the availability of packet-switched, always-on networks for mobile phones becoming more widespread, mobile access to data will become easier than ever. Web services seem like the natural solution for integration problems, but mobile phones do not have the privilege of guaranteeing support for the core web services technologies. However, you can still effectively deploy a web service for mobile clients by deploying a client interface using existing technologies. Technologies such as SMS, WAP, and VoiceXML can be utilized to make this possible. As more mobile phones support J2ME, you can even choose to deploy a pure SOAP client without the need for a middleman..." See also "Java Web Services in a Nutshell, by Kim Topley, with sample Chapter 3: 'SAAJ (SOAP with Attachments API for Java)'.
[August 19, 2003] "Members Offer Glimpse Inside WS-I Consortium." By John Hogan. In SearchWebServices.com News (August 18, 2003). "Throw 150 software engineers together in a room to discuss interoperability standards and what do you get? A raging debate, certainly. A consensus is a little trickier. For the vendor-backed Web Services Interoperability Organization (WS-I), getting a group of engineers to reach a consensus was a matter of deciding what was critical for making Web services specifications work with one another and dropping debate on everything else. The result was Basic Profile 1.0, a set of guidelines for Web services interoperability that was released at last week's XML Web Services One conference. In a wide-ranging interview with SearchWebServices.com, WS-I board members from IBM Corp. and Oracle Corp. and the chairman of the Basic Profile working group talked about the inner workings of the consortium and how users of Web services technology will soon be able to judge for themselves whether solutions are truly interoperable. 'There's nothing magical about WS-I,' said Rob Cheng, Oracle's representative on WS-I's 11-member board. 'There was a need. There was a demand. There was motivation to do it. So we got together and did it.' [...] Chris Ferris, chairman of the Basic Profile working group and a senior software engineer at IBM, said a perfect example was Simple Object Access Protocol (SOAP) encoding, a method of encoding 'type' information in XML messages. Members argued about what type systems to use between different development platforms. How did they resolve this issue? The working group dropped the idea of SOAP encoding interoperability in favor of XML Schema as the type system for Web services... 'Fully 44% of the [interoperability] issues we tackled, of the 200-odd issues, were around the WSDL specification,' Ferris said. 
The working group had to clarify WSDL and 'clean up the ambiguity aspects of it,' such as how to use it with SOAP and the Universal Description, Discovery and Integration (UDDI) registry. This will likely be the case when the WS-I tackles interoperability for other Web services specifications, Ferris said. Some functions, or options, of an underlying specification will be 'must options' for vendors to follow. Other functions can be added as a service to users, 'but when you do, you're on your own' in terms of interoperability with other products, Ferris said... Glover and Ferris predicted that WS-I has at least 10 years of work ahead to fine-tune various Web services specifications in areas such as security, reliable messaging, management and orchestration. Cheng said the order in which these issues will be handled rests entirely with the demands of WS-I's 170 member companies and the implementation issues that arise as they develop applications that can deliver Web services. Next in line for the WS-I is security interoperability. Ferris said a planning group has already outlined the scope of the effort and is awaiting the final release this month or next of the Web Services Security specification by the OASIS standards group. The focus of the security profile, which Ferris predicted would be complete within a year, will be to narrow down options within the specification to the 'must haves'..." See details in the news story "WS-I Releases Basic Profile 1.0a Final Specification for Interoperable Web Services."
[August 19, 2003] "WS-I Basic Profile Set." By Darryl K. Taft. In eWEEK (August 18, 2003). "After a long period of hype around Web services, the Web Services Interoperability Organization (WS-I) last week announced the official delivery of WS-I Basic Profile 1.0. WS-I BP 1.0 is a set of specifications that guarantee Web services interoperability if users adhere to the profile's guidelines and if vendors include support for it in products. The profile identifies how Web services specifications should be used together to create interoperable Web services. Although WS-I BP 1.0 has been available as a draft standard in public review for almost a year, the formal announcement means several vendors will endorse the profile to guarantee their offerings adhere to the standard, thus eliminating much of the research and guesswork customer organizations had to go through to find interoperable implementations... Rob Cheng, a senior product manager at Oracle Corp., of Redwood Shores, Calif., and chair of the WS-I marketing committee, said when he talks to customers about Web services, 'the real thing they focus on is that companies will not have to worry about plumbing anymore.' 'This profile will reduce cost and complexity and will reduce early-adopter risks. The Basic Profile 1.0 lays the foundation for all the future work we'll be doing,' said Cheng at the XML Web Services One conference here. 'This means developers don't have to delve into the details of the technologies and try to pick and choose what will work,' said Mark Hapner, chief Web services strategist at Sun Microsystems Inc., of Santa Clara, Calif., and Sun's representative on the WS-I board. 'Now there's unanimity amongst the vendors, and there's an underlying set of scenarios represented by the WS-I sample applications.' This fall, the WS-I group will release test tools and sample applications to support the profile, available in both C# and Java.
'The test suite will allow a developer to get a specific analysis about whether they're compliant [with the BP 1.0] spec or not and, if not, what the issues are,' Hapner said..." See details in the news story "WS-I Releases Basic Profile 1.0a Final Specification for Interoperable Web Services."
[August 19, 2003] "OWL Flies As Web Ontology Language. W3C Seeks More Implementations." By Paul Krill. In InfoWorld (August 18, 2003). "The World Wide Web Consortium (W3C) on Tuesday issued its Web Ontology Language, its acronym spelled and pronounced 'OWL,' as a W3C Candidate Recommendation... According to the W3C, OWL is a language for defining structured Web-based ontologies that enable richer integration and interoperability of data across application boundaries. Some implementations already exist. Early adopters include bioinformatics and medical communities, corporate enterprise, and governments. OWL enables applications such as Web portal management, multimedia collections that cannot respond to English language-based search tools, Web services and ubiquitous computing. 'Essentially, an ontology is the definition of a set of terms and how they relate to each other for a particular domain and that can be used on the Web in a number of different ways,' said Jim Hendler, co-chairman of the W3C Web Ontology Working Group, which released OWL... While earlier languages have been used to develop tools and ontologies for specific user communities such as sciences, they were not compatible with the architecture of the World Wide Web in general, in particular the Semantic Web, said W3C. OWL uses both URLs for naming and the linking provided by RDF (Resource Description Framework) to add the following capabilities to ontologies: distributable across many systems; scalable for the Web; compatible with Web standards for accessibility and internationalization; and open and extensible..." Further detail in the news story "W3C Releases Candidate Recommendations for Web Ontology Language (OWL)."
[August 18, 2003] "Nation's ebXML Standard to Be Adopted Asia-Wide." By Sim Kyu-ho. In Korea IT Times [The Electronic Times] (August 18, 2003). "e-Business Extensible Markup Language (ebXML) proposed by Korea will be adopted in the first version of Asian guidelines. Jang Jae-gyung, manager of standard development at the Korea Institute for Electronic Commerce (KIEC), has also been named head of the new agency, which will develop the Asian edition of ebXML. If the initiative turns out to be successful, therefore, the nation's ebXML technology could become a part of global standards. According to KIEC on August 17 [2003], the ebXML Asia Committee decided at the 9th eAC meeting recently held in Bangkok, Thailand to link e-document guidelines proposed by Korea with Hong Kong's e-government project to set up ebXML Asia guidelines and a library before the end of this year. eAC launched a taskforce called the Core Component Task Group, or CCTG, to craft the guidelines, and named Korea and Taiwan to co-chair the organization. At the meeting, attendees also agreed to issue an 'ebXML Asia Interoperability Certificate' to 12 businesses and organizations in 6 countries as a way of guaranteeing messaging functionality and reliability. In Korea, Pos DATA, Korea Trade Information Communication, InoDigital and Samsung SDS will be granted the certificate for their ebXML solutions..." See also the announcement "ebXML Asia Committee Starts New ebXML Interoperability Certification Program. Twelve Organizations Receive Certifications on ebXML Message Service specification 2.0." General references in "Electronic Business XML Initiative (ebXML)." [alt URL]
[August 18, 2003] "ebXML Seen as SME Web Service Enabler. Government, Private Sector Begin Pilot Projects." By Sasiwimon Boonruang. In Bangkok Post (August 06, 2003). "The Government and the private sector have adopted the Electronic Business Extensible Markup Language (ebXML) standard to boost national competitiveness, launching Internet-based paperless trading pilot projects and a collaborative e-tourism project. ebXML is an open standard around web services that will be crucial in three major aspects -- setting standards for data, standards for data interchange, as well as for electronic service interchange -- according to the National Electronics and Computer Technology Centre (Nectec) director Dr Thaweesak Koanatakool. Speaking at a seminar on ebXML Awareness Day last week, Dr Thaweesak noted that these three open standards were important infrastructure necessary to develop one-stop e-government services, for collaborative B2B e-commerce and to provide an opportunity for the local software industry. Meanwhile, the Information and Communications Technology (ICT) Ministry will now appoint a committee on data interchange standards. IT veteran and honorary president of the ATCI Manoo Ordeedolchest said traditional e-commerce was the interaction between humans and computers, but that we would soon be seeing computer-to-computer interactions. But this new economy will not bring benefits here unless SMEs are also part of this electronic business. Smaller firms needed to use ICT, and in order to create competitiveness, ebXML technology or web services were the solution, said Mr Manoo, who is also a consultant to the ICT minister... According to the Commerce Ministry's Business Development deputy director general Skol Harnsuthivarin, the department was now working with other organisations to cope with the problem of data interchange by applying the ebXML standard.
To achieve the target of paperless trading in the year 2005, the department and all agencies in the ministry first have to complete integration within the ministry by the end of next year and then extend it to the external partners. An Internet-based paperless trading pilot project is now being conducted with the cooperation of the Customs Department, the E-commerce Resource Centre (ECRC), the Institute for Innovative IT of Kasetsart University (i3t-KU), the Business Development department and private companies such as Minebea (Thailand), TKK, and CTI Logistics. The project aims to analyse the system in terms of traditional EDI and ebXML, to find a suitable way to promote the utilisation of ICT in SMEs and to boost competitiveness through the B2B e-business. It also pushes for the development of data interchange and service interchange standards in order to accommodate APEC's paperless trading project. Another ebXML pilot project is collaborative e-tourism, conducted by i3t-KU, ECRC, and Datamat. Objectives are to promote SMEs in the tourism industry to use ICT to cut costs, to enhance efficiency and to expand their markets..." General references in "Electronic Business XML Initiative (ebXML)."
[August 18, 2003] "Coalition Uses Web for Emergency Notification. System Uses Web Services, Off-The-Shelf Software." By Grant Gross. In InfoWorld (August 18, 2003). "The 9-1-1 emergency service in Oregon has expanded to include instant notifications to school administrators, hospitals and other people who need timely emergency notifications, thanks to a coalition of Oregon local governments and technology vendors using Web services and off-the-shelf software. The Regional Alliances for Infrastructure and Network Security (RAINS) launched its RAINS-Net technology platform, which sends live emergency information to selected users over the Internet and by cell phone. The creators of RAINS-Net are billing it as an extension of 9-1-1 service, in which the existing computer-aided dispatch system is connected to the Internet and sends alerts to officials who need to know about emergency situations in their neighborhoods... When a 9-1-1 call comes into a dispatch center, the information an operator types into the dispatch center computers can be routed to a cell phone message or a pop-up dialog box on a PC. In the case of an emergency event like a hazardous waste spill, those people on the RAINS-Net network would be notified immediately, and the dialog box might direct them to additional multimedia information, such as a video on how to respond to a hazardous waste spill. The RAINS-Net system, which goes live on Thursday, already has about 1,000 files that provide additional information on emergency situations. In some cases, such as a crime in progress, the RAINS-Net system would wait until police show up on the scene before notifying people on the network, so that police can assess the situation before raising concerns, Jennings said. The system uses the nine-digit zip code to route messages to recipients, so that a school in one neighborhood wouldn't get an emergency message about a fire across town.
The system also has the capability of sending out city-wide emergency messages to appropriate recipients. RAINS-Net initially integrates the technologies of RAINS sponsor companies, including FORTiX, Tripwire, Centerlogic, and Jennings' Swan Island Networks by using XML and Web services. More companies are working with RAINS to integrate their technologies into the RAINS-Net as new capabilities are added..." See also the press release.
[August 18, 2003] "Service-Oriented Architecture Explained." By Sayed Hashimi (NewRoad Software). From O'Reilly ONDotnet.com (August 18, 2003). "SOA (service-oriented architecture) has become a buzzword of late. Although the concepts behind SOA have been around for over a decade now, SOA has gained enormous popularity due to web services. Before we dive in and talk about what SOA is and what the essentials behind SOA are, it is a useful first step to look back at the evolution of SOA. To do that, we simply have to look at the challenges developers have faced over the past few decades and observe the solutions that have been proposed to solve their problems... In the context of SOA, we have the terms service, message, dynamic discovery, and web services. Each of these plays an essential role in SOA. A service in SOA is an exposed piece of functionality with three properties: (1) The interface contract to the service is platform-independent; (2) The service can be dynamically located and invoked; (3) The service is self-contained -- that is, the service maintains its own state. Service providers and consumers communicate via messages. Services expose an interface contract. This contract defines the behavior of the service and the messages they accept and return. Because the interface contract is platform- and language-independent, the technology used to define messages must also be agnostic to any specific platform/language. Therefore, messages are typically constructed using XML documents that conform to XML schema. XML provides all of the functionality, granularity, and scalability required by messages. That is, for consumers and providers to effectively communicate, they need a non-restrictive type system to clearly define messages; XML provides this... Dynamic discovery is an important piece of SOA. At a high level, SOA is composed of three core pieces: service providers, service consumers, and the directory service.
The roles of providers and consumers are apparent, but the role of the directory service needs some explanation. The directory service is an intermediary between providers and consumers. Providers register with the directory service and consumers query the directory service to find service providers. Although the concepts behind SOA were established long before web services came along, web services play a major role in a SOA. This is because web services are built on top of well-known and platform-independent protocols. These protocols include HTTP, XML, UDDI, WSDL, and SOAP. It is the combination of these protocols that makes web services so attractive. Moreover, it is these protocols that fulfill the key requirements of a SOA. That is, a SOA requires that a service be dynamically discoverable and invocable. This requirement is fulfilled by UDDI, WSDL, and SOAP. SOA requires that a service have a platform-independent interface contract. This requirement is fulfilled by XML. SOA stresses interoperability. This requirement is fulfilled by HTTP. This is why web services lie at the heart of SOA... As complexity grows, researchers find more innovative ways to answer the call. SOA, in combination with web services, is the latest answer. Application integration is one of the major issues companies face today; SOA can solve that. System availability, reliability, and scalability continue to bite companies today; SOA addresses these issues. Given today's requirements, SOA is the best scalable solution for application architecture..."
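The provider/consumer/directory triangle described above can be sketched in a few lines of Python. This is a minimal illustration only: the names (Directory, register, lookup, StockQuote) and the in-process "endpoint" are invented for the example, not part of any SOA specification.

```python
# Minimal sketch of the three SOA roles: a directory service that
# providers register with and consumers query at run time.
# All names here are illustrative, not from any SOA standard.

class Directory:
    """Intermediary between service providers and consumers."""
    def __init__(self):
        self._services = {}

    def register(self, name, endpoint):
        # A provider publishes its service under a name.
        self._services[name] = endpoint

    def lookup(self, name):
        # A consumer dynamically discovers a provider.
        return self._services.get(name)

# Provider side: a self-contained service that exchanges messages.
def quote_service(message):
    return {"symbol": message["symbol"], "price": 42.0}

directory = Directory()
directory.register("StockQuote", quote_service)

# Consumer side: discover the service, then send it a message.
service = directory.lookup("StockQuote")
reply = service({"symbol": "XYZ"})
print(reply["price"])
```

In a real deployment the directory would be a UDDI registry, the contract a WSDL document, and the messages SOAP envelopes; the shape of the interaction is the same.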
[August 18, 2003] "DocBook for Eclipse: Reusing DocBook's Stylesheets." By Jirka Kosek. From XML.com (August 13, 2003). ['Use XSLT to integrate your own documentation into the Eclipse IDE.'] "DocBook is a popular tool for creating software documentation among developers. One reason for its success is the existence of the DocBook XSL stylesheets, which can be used to convert DocBook XML source into many target formats including HTML, XHTML, XSL-FO (for print), JavaHelp, HTML Help, and man pages. The stylesheets can be further customized to get other outputs as well. In this article I am going to show you how easily you can integrate DocBook documents into the Eclipse platform help system by reusing existing stylesheets... auxiliary help files are usually XML or HTML-based, so we can use XSLT to generate them. If you have your documentation in DocBook and you want to feed it into the help system, the only thing you need is to extend the existing stylesheets to emit the auxiliary files together with a standard HTML output. That is even easier if you reuse some existing DocBook XSL stylesheet templates. The whole Eclipse platform is developed around the idea of plugins. If you want to contribute your help documents to the Eclipse platform, you have to develop a new help plugin. The plugin is composed of the HTML and image files, the table of contents file in XML, and the manifest file... As the Eclipse help is based on HTML, we can reuse existing stylesheets that generate multiple HTML files from DocBook XML source. However, we need to extend these stylesheets to generate the table of contents file and the manifest file... Software documentation is an area where you can very effectively use XML and XSLT to do multichannel publishing. If you stick to using a well-known and standardized vocabulary like DocBook, you can benefit from usage of existing stylesheets and other conversion tools. 
If you want to plug your DocBook documentation into some new help format, you can quite easily hack existing stylesheets to generate a new output format. The method for creating output for the Eclipse platform help described in this article can be used for almost any HTML based online help system..." General references in "DocBook XML DTD."
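As a rough illustration of the kind of auxiliary file the article describes, the following sketch generates an Eclipse-style table-of-contents file (the <toc>/<topic> structure with label and href attributes used by the Eclipse help system). The article does this with XSLT extensions to the DocBook stylesheets; here the standard library is used instead, and the chapter list is invented.

```python
# Sketch: emit an Eclipse help toc.xml from a list of chapters.
# The <toc>/<topic> element names follow the Eclipse help system;
# the chapters themselves are invented for illustration.
import xml.etree.ElementTree as ET

chapters = [("Introduction", "ch01.html"), ("Installation", "ch02.html")]

toc = ET.Element("toc", label="My Manual")
for label, href in chapters:
    ET.SubElement(toc, "topic", label=label, href=href)

toc_xml = ET.tostring(toc, encoding="unicode")
print(toc_xml)
```

An XSLT customization layer would produce the same output by iterating over the DocBook chapter elements instead of a Python list.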
[August 18, 2003] "XSLT Recipes for Interacting with XML Data." By Jon Udell. From XML.com (August 13, 2003). ['Udell explores alternative ways of making XML data interactive using XSLT.'] "In last month's column, 'The Document is the Database', I sketched out an approach to building a web-based application backed by pure XML (and as a matter of fact, XHTML) data. I've continued to develop the idea, and this month I'll explore some of the XSLT-related recipes that have emerged. Oracle's Sandeepan Banerjee, director of product management for Oracle Server Technologies, made a fascinating comment when I interviewed him recently. 'It's possible,' he said, 'that developers will want to stay within an XML abstraction for all their data sources'. I suppose my continuing (some might say obsessive) experimentation with XPath and XSLT is an effort to find out what that would be like. It's true that these technologies are still somewhat primitive and rough around the edges. Some argue that we've got to leapfrog over them to XQuery or to some XML-aware programming language in order to colonize the world of XML data. But it seems to me that we can't know where we need to go until we fully understand where we are... It's crucial to be able to visualize data. As browsers are increasingly able to apply CSS stylesheets to arbitrary XML, the XHTML constraint becomes less important. The Microsoft browser has been able to do CSS-based rendering of XML for a long time. Now Mozilla can too. Safari doesn't, yet, but I'll be surprised if it doesn't gain that feature soon. So while I'm sticking with XHTML for now, that may be a transient thing. Of more general interest are the ways in which XPath and XSLT can make XML data interactive... The techniques I've been exploring for the past few months are, admittedly, an unorthodox approach to building Web applications. The gymnastics required can be strenuous, and some of the integration is less than seamless. 
But the result is useful, and along the way I've deepened my understanding of XPath and XSLT. Is it really advisable, or even possible, to make XML the primary abstraction for managing data? I'm still not sure, but I continue to think it's a strategy worth exploring..." General references in "Extensible Stylesheet Language (XSL/XSLT)."
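The XPath-driven style of working with XML data that the column describes can be tasted even with the limited XPath subset in the Python standard library. The XHTML-like snippet below is an invented example, not data from the article.

```python
# Selecting nodes from an XHTML-ish fragment with an XPath-style
# expression, much as an XSLT template's select= would.
# The document is invented for illustration.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<div>"
    "<span class='date'>2003-08-18</span>"
    "<span class='title'>XSLT Recipes</span>"
    "</div>"
)

# Match by attribute value (ElementTree supports this XPath subset).
titles = doc.findall(".//span[@class='title']")
print(titles[0].text)
```

A full XSLT processor adds templating and transformation on top of this selection step, which is what makes the "document as database" approach interactive.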
[August 18, 2003] "Introducing Anobind." By Uche Ogbuji. From XML.com (August 13, 2003). ['Uche Ogbuji introduces anobind, his new Python databinding tool.'] "My recent interest in Python-XML data bindings was sparked not only by discussion in the XML community of effective approaches to XML processing, but also by personal experience with large projects where data binding approaches might have been particularly suitable. These projects included processing both data and document-style XML instances, complex systems of processing rules connected to the XML format, and other characteristics requiring flexibility from a data binding system. As a result of these considerations, and of my study of existing Python-XML data binding systems, I decided to write a new Python-XML data binding, which I call Anobind. I designed Anobind with several properties in mind, some of which I have admired in other data binding systems, and some that I have thought were, unfortunately, lacking in other systems: (1) A natural default binding, i.e., when given an XML file with no hints or customization; (2) Well-defined mapping from XML to Python identifiers; (3) Declarative, rules-based system for fine-tuning the binding; (4) XPattern support for rules definition; (5) Strong support for document-style XML, especially with regard to mixed content; (6) Reasonable support for unbinding back to XML; (7) Some flexibility in trading off between efficiency and features in the resulting binding... In this article I introduce Anobind, paying attention to the same considerations that guided my earlier introduction of generateDS.py and gnosis.xml.objectify... Anobind is really just easing out of the gates. I have several near-term plans for it, including a tool that reads RELAX NG files and generates corresponding, customized binding rules. I also have longer-term plans such as a SAX module for generating bindings without having to build a DOM..." See also: (1) Python & XML, by Christopher A.
Jones and Fred L. Drake, Jr.; (2) general references in "XML and Python."
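To make the "natural default binding" idea concrete, here is a toy sketch of what a data binding does. This is not Anobind's actual API (which is not shown in the source); the Node class and the sample document are invented for illustration.

```python
# Toy data binding: element names surface as Python attributes,
# with no hints or customization. NOT Anobind's real API;
# purely an illustration of the default-binding idea.
import xml.etree.ElementTree as ET

class Node:
    def __init__(self, element):
        self._element = element

    def __getattr__(self, name):
        child = self._element.find(name)
        if child is None:
            raise AttributeError(name)
        # Leaf elements bind to their text; containers bind to Nodes.
        return Node(child) if len(child) else child.text

root = Node(ET.fromstring(
    "<book><title>Python &amp; XML</title><year>2002</year></book>"
))
print(root.title)
print(root.year)
```

A real binding system adds the other properties the article lists: declarative customization rules, identifier mangling for awkward element names, mixed-content support, and unbinding back to XML.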
[August 18, 2003] "Binary XML, Again." By Kendall Grant Clark. From XML.com (August 13, 2003). ['The old chestnut of a binary encoding for XML has cropped up once more, this time in serious consideration by the W3C. Kendall Clark comments on the announcement of the W3C's Binary XML Workshop.'] "The [W3C] workshop announcement is interesting in its own right and worth quoting... from a 'steadily increasing demand,' the W3C has decided to get in front of those of its vendor-members which want 'to find ways to transmit pre-parsed XML documents and Schema-defined objects, in such a way that embedded, low-memory and/or low bandwidth devices can' get in on the XML game... The workshop announcement also mentions a few tantalizing details, including talk of 'multiple separate implementers' having some success with an ASN.1 variant of XML... The other interesting thing of note here is that the W3C is talking about a binary variant of (parts of) the XML Infoset. What difference that could make remains to be seen, but it's interesting enough to pay some attention to it. There are at least two issues at this workshop: binary variants and, as the workshop announcement says, 'pre-parsed' artifacts; they seem orthogonal to each other..." The W3C Workshop on Binary Interchange of XML Information Item Sets will be held September 24-26, 2003 in Santa Clara, California. The Workshop goal is "... to study methods to compress XML documents, comparing Infoset-level representations with other methods, in order to determine whether a W3C Working Group might be chartered to produce an interoperable specification for such a transmission format." For background and discussion, see: (1) the workshop Call for Participation; (2) the thread on XML-DEV, including a key posting from Liam Quin (W3C XML Activity Lead); (3) "Fast Web Services" (Sun Microsystems paper).
[August 18, 2003] "JavaOne: Fast Web Services." Presentation by Santiago Pericas-Geertsen and Paul Sandoz (Sun Microsystems). JavaOne 2003 San Francisco, June 2003. "Current Web service application frameworks perform more than an order of magnitude worse than similar technologies that use binary representations for messages (for example, RMI and RMI/IIOP). The performance difference is due to the fact that messages are represented in the XML infoset: the result of which is (i) large message sizes and (ii) slow serialization/deserialization (for example, marshalling/unmarshalling) of messages. A binary representation of the XML infoset has been proposed as a possible solution to this problem. However, the so-called 'Binary Infoset Encoders' have shown only moderate performance improvements for server-side computing and have not been widely adopted. In this talk, we argue that 'Binary Schema-binding Frameworks' offer a much better solution to this problem. This approach relies on the assumption that the schema of a message is known by both peers. This common knowledge, together with suitable encodings, results in small message sizes and fast serialization/deserialization. Abstract Syntax Notation One (ASN.1) is a technology and set of standards for abstractly defining messages for distributed communication that are separate from a set of encodings or message representations. A number of XML-related ASN.1 standards are being defined, namely an XML encoding so that ASN.1 messages can be represented as XML, and a mapping from W3C XML Schema (XSD) to ASN.1 schema. Thus, ASN.1 is perfectly suited for the definition of a Binary Schema-binding Framework that can be easily integrated into Java API for XML-based RPC (JAX-RPC) and Java API for XML Messaging. Our preliminary results show that the resulting performance is comparable to that of RMI/IIOP..." See also the ASN.1 work program description and the following bibliographic entries.
[August 18, 2003] "The Emergence of ASN.1 as an XML Schema Notation." By John Larmouth (Larmouth T & PDS Ltd, Bowdon, UK). Presentation given at XML Europe 2003, May 5-8, 2003. With slides in HTML and .PPT format. "This paper describes the emergence of ASN.1 as an XML schema notation. Use of ASN.1 as an XML schema notation provides the same functionality as use of W3C XML Schema (XSD), but makes compact binary representations of the data available as well as XML encodings. ASN.1 also provides a clear separation of the specification of the information content of a document or message from the actual syntax used in its encoding or representation. Examples of representation differences that do not affect the meaning (semantics) being communicated are the use of an attribute instead of an element in an XML encoding, or of space-separated lists instead of repeated elements. Examples are given of ASN.1 specification of an XML document, and some comparisons are made with XSD and RELAX NG... The focus of ASN.1 is very much on the information content of a message or document. A distinction is drawn between whether changes in the actual representation of a message or document affect its meaning (and hence its effect on a receiving system), or are just variations of encoding that carry the same information. Thus the use of an XML attribute rather than a child element does not affect the information content. Nor does the use of a space-separated list rather than a repetition of an element. ASN.1 tools provide a static mapping of an XML schema definition to structures in commonly-used programming languages such as C, C++ and Java, with highly efficient encode/decode routines to convert between values of these structures and the information content of XML documents.
By contrast, most tools based on XSD or RELAX NG are more interpretive in nature, providing details of the infoset defined by the XML document through enquiries by the application or by notifications to the application (a highly interactive and CPU-intensive procedure)..." See also the ASN.1 website, "What ASN.1 can offer to XML." [PDF from IDEAlliance, cache]
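The paper's central point, that an attribute and a child element can carry the same information content, can be demonstrated by normalizing both encodings to the same set of (name, value) pairs. The two documents below are invented examples, not from the paper.

```python
# Two XML representations of the same information content:
# one using attributes, one using child elements.
# Both documents are invented for illustration.
import xml.etree.ElementTree as ET

as_attribute = ET.fromstring('<person name="Ada" role="engineer"/>')
as_elements = ET.fromstring(
    "<person><name>Ada</name><role>engineer</role></person>"
)

def info_content(elem):
    # Merge attributes and simple child elements into one mapping,
    # discarding the representational difference.
    pairs = dict(elem.attrib)
    pairs.update({child.tag: child.text for child in elem})
    return pairs

print(info_content(as_attribute) == info_content(as_elements))
```

This is exactly the separation ASN.1 draws: the mapping is the semantics, and attribute-versus-element is merely an encoding choice.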
[August 12, 2003] "Fast Web Services." By Paul Sandoz, Santiago Pericas-Geertsen, Kohsuke Kawaguchi, Marc Hadley, and Eduardo Pelegri-Llopart. Sun Microsystems Web Services library. Appendices include a WSDL Example and an ASN.1 Schema for SOAP. With 21 references. August 2003. "Fast Web Services is an initiative at Sun Microsystems aimed at the identification of performance problems in existing implementations of Web Services standards. Our group has explored several solutions to these problems. This article focuses on a particular solution that delivers maximum performance gains. Fast Web Services explores the use of more efficient binary encodings as an alternative to textual XML representations. In this article, we identify the performance of existing technologies, introduce the main goals of Fast Web Services, both from a standards and an implementation perspective, highlight some of the use cases for Fast Web Services, discuss standards and associated technologies needed for Fast Web Services, present an example in which XML and the proposed binary encoding are compared, and describe the Java prototype that has been used to obtain some compelling performance results... Fast must define the interoperability between Fast peers. In addition, it must define the interoperability with existing Web Services that do not support Fast. The approach is to: 'Use Fast when available, and use XML otherwise.' [...] Fast annotations for WSDL allow services to explicitly state that a binding can support the Fast encoding (in addition to XML). Although Fast does not require any modification to WSDL, specifically to the SOAP binding, it may be appropriate to formalize the contract to state clearly that the binding supports Fast in addition to XML... Fast is not a Java-only technology: it is designed to be platform-independent, just like existing Web Services.
This expands the interoperability to non-Java platforms, such as C#, C and C++ or scripting languages such as Perl and Python. Standards are crucial: Fast will not be deployed and implemented by vendors unless it has good standards traction backed by parties influential in the Web Services space. Fast Web Services is designed to maximize the performance of Web Services in a number of domains, while minimizing developer impact and ensuring interoperability. The performance gains from Fast WS are very substantial although its applicability is not universal; there are some issues due to its loss of self-description that are not present when using XML encoding. Performance results obtained from the Java prototype provide compelling evidence that it is possible for a Web Service implementation to perform at speeds close to that of binary equivalents such as RMI and RMI/IIOP. If performance is an issue then Fast may be the answer, and a number of use cases in which Fast can be used were presented. Sun Microsystems is participating in the ITU-T SG-17 to ensure that Fast Web Services is standardized. The majority of the standardization process is complete (or close to being completed) given that X.694 and the ASN.1 encoding rules represent a significant proportion of the work. X.695 represents the finishing touches that are needed for a well-proven technology such as ASN.1 to be applied to Web Services..." See also the preceding item.
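The size advantage of a schema-based binary encoding can be illustrated crudely: when both peers know the message schema, field names need not travel on the wire. The layout below is an invented example, not the X.694/ASN.1 encoding the paper proposes.

```python
# Rough illustration of why schema-based binary encodings shrink
# messages. The binary layout here is invented, not ASN.1/X.694.
import struct

# Textual XML carries the element names in every message.
xml_msg = b"<quote><id>7</id><price>45.25</price></quote>"

# Under a shared schema, the peers agree the payload is one 4-byte
# network-order int followed by one 8-byte double; no names needed.
bin_msg = struct.pack("!id", 7, 45.25)

print(len(xml_msg), len(bin_msg))
```

The trade-off the authors note is visible here too: the binary form is meaningless without the schema, i.e., it loses XML's self-description.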
[August 12, 2003] "The XML Enabled Directory." By Steven Legg (Adacel Technologies Ltd) and Daniel Prager (Department of Computing and Mathematics, Deakin University, Victoria, Australia). IETF Internet Draft. Reference: 'draft-legg-xed-roadmap-00.txt'. Intended Category: Standard Track. 10 pages. "The XML Enabled Directory (XED) framework leverages existing Lightweight Directory Access Protocol (LDAP) and X.500 directory technology to create a directory service that stores, manages and transmits Extensible Markup Language (XML) format data, while maintaining interoperability with LDAP clients, X.500 Directory User Agents (DUAs), and X.500 Directory System Agents (DSAs). This document introduces the various XED specifications. The main features of XED are: (1) semantically equivalent XML renditions of existing directory protocols; (2) XML renditions of directory data; (3) the ability to accept at run time, user defined attribute syntaxes specified in a variety of XML schema languages; (4) the ability to perform filter matching on the parts of XML format attribute values; (5) the flexibility for implementors to develop XED clients using only their favoured XML schema language... The XED framework does not aim for a complete specification of the directory in one schema language (e.g., by translating everything that isn't ASN.1 into ASN.1, or by translating everything that isn't XML Schema into XML Schema), but rather seeks to integrate specifications in differing schema definition languages into a cohesive whole. The motivation for this approach is the observation that although XML Schema, RELAX-NG and ASN.1 are broadly similar, they each have unique features that cannot be adequately expressed in the other languages. Thus a guiding principle for XED is the assertion that the best schema language in which to represent a data type is the language of its original specification. 
Consequently, a need arises for the means to reference definitions not only in different documents, but specified in different schema languages... This document and the technology it describes are a product of a joint research project between Adacel Technologies Limited and Deakin University on leveraging existing directory technology to produce an XML-based directory service..." See the following bibliographic entry ("XED: Schema Language Integration") and initial drafts of several related IETF IDs: (1) "Directory XML Encoding Rules for ASN.1 Types"; (2) "ASN.1 Schema: An XML Representation for ASN.1 Specifications"; (3) "Translation of ASN.1 Specifications into XML Schema"; (4) "Translation of ASN.1 Specifications into RELAX NG"; (5) "LDAP: Transfer Encoding Options"; (6) "XED: Schema Operational Attributes"; (7) "XED: Matching Rules"; (8) "XML Lightweight Directory Access Protocol." [cache]
[August 12, 2003] "XED: Schema Language Integration." By Steven Legg (Adacel Technologies Ltd) and Daniel Prager (Department of Computing and Mathematics, Deakin University, Victoria, Australia). IETF Internet Draft. Reference: 'draft-legg-xed-glue-00.txt'. Intended Category: Standard Track. August 7, 2003. 14 pages. "This document defines the means by which an Abstract Syntax Notation One (ASN.1) specification can incorporate the definitions of types and elements in specifications written in other Extensible Markup Language (XML) schema languages. References to XML Schema types and elements, RELAX NG named patterns and elements, and Document Type Declaration (DTD) element types are supported. Non-ASN.1 definitions are supported by first defining an ASN.1 type whose values can contain arbitrary markup, and then defining constraints on that type to restrict the content to specific nominated datatypes from non-ASN.1 schema definitions. The ASN.1 definitions in this document are consolidated in Appendix A..." [cache]
[August 12, 2003] "Mindreef SOAPscope 1.0. Bring SOAP Protocol into View with Handy Diagnostic Tool." By Joe Mitchko. In Web Services Journal Volume 3, Issue 7 (July 2003), pages 54-55. "Mindreef SOAPscope 1.0 is a Web services diagnostic tool, designed to provide toolkit-independent logging and monitoring of SOAP network traffic. SOAPscope is composed of two components, a network sniffer and a browser-based message viewer. The sniffer component is designed to capture SOAP request and response messages within the HTTP protocol traffic and persist the information to an embedded relational database. The message viewer component is a browser-based Web application that allows a user to view the persisted SOAP request and response messages and more. Since it is browser-based, the viewer opens the door for remote and collaborative debugging sessions. The SOAPscope viewer provides a pseudocode and XML view of message details, and two ways to monitor SOAP traffic -- log view or live view. The log view provides message history and search capabilities while the live view allows for real-time debugging. In addition, a handy WSDL viewer allows you to punch in a WSDL URL and view it in either native XML or in pseudocode mode. Some of the more advanced features of the tool allow you to modify and resend previously captured SOAP requests -- handy for on-the-fly debugging... I found the viewer's user interface to be very clean, easy to read, and relatively uncluttered. The information displayed was basically accurate and bug free. In addition, both the XML and pseudocode views have color-coded text, making it easy to see SOAP-specific tags, namespace information, and message request and response content. All SOAP message content and log information is stored in an embedded database. Although it is basically transparent, you will need to do a little database management in order to purge or back up the database. 
Nothing in the way of log maintenance is provided in SOAPscope for this release. Luckily, database maintenance instructions are included in the documentation and are relatively easy to follow... It's not often that you find a tool that is so well thought out and designed...The amount of functionality provided is just right, neither overloading the GUI with seldom-used features nor leaving you to find some other diagnostic tool because it doesn't do enough..." Update: see the announcement for SOAPscope 2.0: "Mindreef Announces Availability of SOAPscope 2.0. Features First WSDL Interoperability Checker, Including Rules for WS-I Basic Profile 1.0." [alt URL]
[August 12, 2003] "Instant Logging: Harness the Power of log4j with Jabber. Learn How to Extend the log4j Framework with Your Own Appenders." By Ruth Zamorano and Rafael Luque (Orange Soft). From IBM developerWorks, Java technology. August 12, 2003. With source code. ['Not only is logging an important element in development and testing cycles -- providing crucial debugging information -- it is also useful for detecting bugs once a system has been deployed in a production environment, providing precise context information to fix them. In this article, Ruth Zamorano and Rafael Luque, cofounders of Orange Soft, a Spain-based software company specializing in object-oriented technologies, server-side Java platform, and Web content accessibility, explain how to use the extension ability of log4j to enable your distributed Java applications to be monitored by instant messaging (IM)'] "The log4j framework is the de facto logging framework written in the Java language. As part of the Jakarta project, it is distributed under the Apache Software License, a popular open source license certified by the Open Source Initiative (OSI). The log4j environment is fully configurable programmatically or through configuration files, either in properties or XML format. In addition, it allows developers to filter out logging requests selectively without modifying the source code. The log4j environment has three main components: (1) loggers control which logging statements are enabled or disabled. Loggers may be assigned the levels 'ALL, DEBUG, INFO, WARN, ERROR, FATAL, or OFF'. To make a logging request, you invoke one of the printing methods of a logger instance. (2) layouts format the logging request according to the user's wishes. (3) appenders send formatted output to their destinations... The log4j network appenders already provide mechanisms to monitor Java-distributed applications. However, several factors make IM a suitable technology for remote logging in real-time. 
In this article, we cover the basics of extending log4j with your custom appenders, and document the implementation of a basic IMAppender step by step. Many developers and system administrators can benefit from their use..." See also: "Jabber XML Protocol."
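The custom-appender recipe the article describes (subclass a skeleton, implement an append/close lifecycle, honor the configured level, and delegate formatting to a layout) follows log4j 1.x's AppenderSkeleton contract. Since log4j may not be on every reader's classpath, here is the same pattern sketched against the JDK's analogous java.util.logging API, where Handler plays the appender role and Formatter the layout role; the in-memory list destination is purely illustrative (a Jabber appender would send the formatted line over IM instead):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

// Handler is java.util.logging's counterpart to a log4j Appender:
// publish() corresponds to append(), and Formatter to a Layout.
public class ListHandler extends Handler {
    final List<String> lines = new ArrayList<>();

    public ListHandler() {
        setFormatter(new SimpleFormatter()); // the "layout"
    }

    @Override
    public void publish(LogRecord record) {
        if (!isLoggable(record)) {
            return; // honor level/filter checks, as AppenderSkeleton does
        }
        lines.add(getFormatter().formatMessage(record));
    }

    @Override public void flush() { }
    @Override public void close() { lines.clear(); }

    public static void main(String[] args) {
        Logger log = Logger.getLogger("demo");
        log.setUseParentHandlers(false); // keep console quiet in this demo
        ListHandler handler = new ListHandler();
        log.addHandler(handler);
        log.log(Level.WARNING, "disk almost full");
        System.out.println(handler.lines);
    }
}
```

The structural point carries over directly: all delivery-specific work is confined to one method, which is what makes appender-style extension cheap.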
[August 12, 2003] "JBoss Fork Spawns Apache Project. ASF Begins Work on New J2EE Server Called Geronimo." By Robert McMillan. In InfoWorld (August 11, 2003). "A rift between the developers of the open source JBoss J2EE (Java 2 Enterprise Edition) application server has brought the Apache Software Foundation (ASF) into the J2EE game. The ASF announced last week that it had begun work on a new J2EE server called Geronimo, which the foundation believes will be a more business-friendly alternative to the other open source J2EE servers currently available, according to Apache Software Foundation Chairman Greg Stein. Companies such as IBM and BEA Systems sell commercial J2EE servers, but open source implementations of Sun's J2EE specification are popular among developers looking for a low-cost alternative to IBM's WebSphere and BEA's WebLogic... There are already two popular open source J2EE servers in circulation: JBoss and the Jonas server. But both have had difficulties in obtaining J2EE certification from Sun Microsystems, and neither is available under an Apache-style software license, which is considered more conducive to commercial development. 'There isn't a certified server out there, and there certainly isn't one that has a low restriction license like ours,' said Stein. Geronimo will have an easier time obtaining J2EE certification than did its open source rivals, because the ASF's non-profit status makes the application server a candidate for Sun scholarship, which would pay for certification, Stein said. A certified version of Geronimo is expected in the next year..." See: (1) the Apache Geronimo project website and the proposal; (2) Java 2 Platform, Enterprise Edition (J2EE).
[August 12, 2003] "STnG: A Streaming Transformations and Glue Framework." By K. Ari Krupnikov (Research Associate, University of Edinburgh, HCRC Language Technology Group). In [Preliminary] Proceedings for Extreme Markup Languages 2003, held in August 2003, Montréal, Québec. "STnG (pronounced 'sting') is a framework for processing XML and other structured text. In developing STnG, it was our goal to allow complex transformations beyond those afforded by traditional XML transforming tools, such as XSLT, yet make the framework simple to use. We claim that to meet this goal, a system must: (1) support and encourage the use of small processing components; (2) offer a hierarchical tree-like view of its data; (3) factor out facilities for input chunking through a pattern/action model; (4) not provide processing facilities of its own, instead invoking processors written in existing languages. STnG is built around common XML tools and idioms, but can process arbitrary structured text almost as easily as XML. In the first part of this paper, we show how these requirements result in powerful and flexible systems, and how they can be achieved. The balance of this paper describes a processing framework we have developed in Java that implements these requirements... The entire transformation requires only one pass on the source document and is comfortably described in one STnG. We could achieve the same effect with standalone XSLT stylesheets, Java programs, and perhaps a makefile to describe dependencies between these components, as well as between intermediate results. While for some elaborate tasks such complexity is warranted, STnG can simplify many common processing scenarios considerably. 
Note that beyond complexity that grows with the number of different components required to accomplish a task, a standalone application would include considerably more code than the DOM handler fragment in this STnG, as it would need to instantiate and configure a parser and navigate to the desired fragments using custom code, as well as handle potential errors -- all tasks factored out into STnG. More likely than not, this custom code would not be as robust as a standard, reusable component..." See: (1) the Extreme Markup Languages 2003 Program and (2) the event listing.
[August 12, 2003] "XIndirect: Indirect Addressing for XML." By W. Eliot Kimber (ISOGEN International, LLC). In [Preliminary] Proceedings for Extreme Markup Languages 2003, held in August 2003, Montréal, Québec. "This paper describes and explains the XIndirect facility, a W3C Note. The XIndirect Note defines a simple mechanism for representing indirect addresses that can be used with other XML-based linking and addressing facilities, such as XLink and XInclude. XIndirect is motivated primarily by the requirements of XML authoring in which the management of pointers among systems of documents under constant revision cannot be easily satisfied by the direct pointers provided by XLink and XInclude. Indirect addressing is inherently expensive to implement because of both the processing demands of multi-step pointers and the increased system complexity required to do the processing. XLink and XPointer (and by extension, XInclude) explicitly and appropriately avoid indirection in order to provide the simplest possible solution for the delivery of hyperlinked documents, especially in the context of essentially unbounded systems, such as the World Wide Web. XIndirect enables indirect addressing when needed without adding complexity to the existing XML linking and addressing facilities -- by defining indirection as a separate, independent facility, processors that only need to support delivery of documents are not required to support indirection simply in order to support XLink or XInclude. Rather, when indirection management is required, developers of XML information management systems can limit the support for indirection to closed systems of controlled scope where indirection is practical to implement. The paper illustrates some of the key use cases that motivate the need for the XIndirect facility, describes the facility itself, and discusses a reference implementation of the XIndirect facility..." 
See: (1) the Extreme Markup Languages 2003 Program and (2) the event listing.
[August 12, 2003] "Datatype- and Namespace-Aware DTDs: A Minimal Extension." By Fabio Vitali, Nicola Amorosi, and Nicola Gessa (Department of Computer Science, University of Bologna). In [Preliminary] Proceedings for Extreme Markup Languages 2003, held in August 2003, Montréal, Québec. "DTDs and XML Schema are important validation languages for XML documents. They lie at opposite ends of a spectrum of validation languages in terms of expressive power and readability. Unlike other proposals for validation languages, DTD++ provides a DTD-like syntax for XML Schema constructs, thereby enriching the ease of use and reading of DTDs with the expressive power of XML Schema. An implementation as a pre-processor of a Schema-validating XML parser aids in ensuring wide support for the language... The literature seems to agree that schema languages for XML documents lie between the two extremes of a DTD, which has maximum terseness and readability but minimum expressive power, and XML Schema, which has the greatest expressive power but much less clarity and conciseness. Additionally, the coexistence of different schema languages within the same document is still not straightforward. DTD subsets are required when using general entities, and some parsers get overly confused dealing with entities defined in a DTD, and elements and attributes in a different schema document. Our proposal aims at finding a reasonable compromise between the expressive power of XML Schema and the ease of use and compactness of a DTD. What we decided first was that there was no sense in creating a completely new language; extending an existing syntax with features taken from another existing language seemed, and still seems now, a much better approach. Of course, the final result is still incomplete and partial. In particular, support for keys and unique values that exist in XML Schema has not been provided yet. 
Still, the experience so far with the DTD++ language appears to be interesting and rewarding..." See: (1) the Extreme Markup Languages 2003 Program and (2) the event listing.
[August 09, 2003] "New and Improved String Handling." By Bob DuCharme. From XML.com (August 06, 2003). ['In the Transforming XML column Bob DuCharme explains some of the new and improved string handling functions -- for concatenation, search, and replace -- in XSLT/XPath 2.0.'] "In an earlier column, I discussed XSLT 1.0 techniques for comparing two strings for equality and doing the equivalent of a 'search and replace' on your source document. XSLT 2.0 makes both of these so much easier that describing the new techniques won't quite fill up a column, so I'll also describe some 1.0 and 2.0 functions for concatenating strings. Notice that I say '1.0' and '2.0' without saying 'XSLT'; that's because these are actually XPath functions available to XQuery users as well as XSLT 2.0 users. The examples we'll look at demonstrate what they bring to XSLT development. The string comparison techniques described before were really boolean tests that told you whether two strings were equal or not. The new compare() function does more than that: it tells whether the first string is less than, equal to, or greater than the second according to the rules of collation used. 'Rules of collation' refers to the sorting rules, which can apparently be tweaked to account for the spoken language of the content... New features such as data typing and a new data model may make XSLT and XPath 2.0 look radically different from their 1.0 counterparts, but many of these new features are straightforward functions that are familiar from other popular programming languages. The compare(), replace(), and string-join() functions, which will make common coding tasks go more quickly with less room for error, are great examples of this..." For related resources, see "Extensible Stylesheet Language (XSL/XSLT)."
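For readers coming from general-purpose languages rather than XSLT, the three functions DuCharme discusses have close JDK counterparts, which is his closing point about 2.0 borrowing from mainstream languages: compare() behaves like String.compareTo (with java.text.Collator covering locale-aware collation), replace() is regex-based like String.replaceAll, and string-join() like String.join. A rough sketch of the correspondence (the Java calls are analogues, not the XPath functions themselves):

```java
import java.text.Collator;
import java.util.Locale;

public class XPathStringAnalogues {
    public static void main(String[] args) {
        // compare($a, $b): negative, zero, or positive, like compareTo().
        System.out.println("ghi".compareTo("ghj") < 0);

        // Collation-aware comparison, akin to XPath 2.0's collation argument.
        Collator collator = Collator.getInstance(Locale.FRENCH);
        System.out.println(collator.compare("abc", "abd") < 0);

        // replace($input, $pattern, $replacement) is regex-based,
        // much like String.replaceAll().
        System.out.println("2003-08-06".replaceAll("-", "/"));

        // string-join($sequence, $separator) joins items with a separator.
        System.out.println(String.join(", ", "a", "b", "c"));
    }
}
```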
[August 12, 2003] "XACML J2SE Platform Policy Profile." By Anne H. Anderson (Sun Microsystems Laboratories, Burlington, MA, USA). Version 1.28. Updated July 21, 2003. "This document contains a proposed profile for supporting use of OASIS eXtensible Access Control Markup Language (XACML) Version 1.0 policies for applications written for the Java 2 Platform, Standard Edition (J2SE). The proposal recommends creation of a J2SE Policy Provider that accepts XACML policies as its policy input. The Policy Provider accepts standard XACML policies, but also supports new XACML extensions to deal with J2SE-specific objects and concepts. XACML is designed to be extensible, and the proposed extensions are fully in the spirit of those intended. The profile defines mappings between certain Java Class objects and standard XACML Attributes. Such mappings allow Java applications to use standard XACML policies that protect resources accessed by multiple applications, not all of which are written using the Java programming language... This would provide a way for Java applications using the standard Java Policy API to use XACML policies and [other features] that come with XACML, including dynamic policies, use of arbitrary Subject and target attributes independent of the application, distributed policies, policies shared between multiple applications, policies shared between Java and non-Java applications, role based access control, and resource labels..." See also: (1) "A Brief Introduction to XACML"; (2) XACML TC website; (3) general references in "Extensible Access Control Markup Language (XACML)."
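The proposal's central idea, a java.security.Policy provider whose decisions come from XACML evaluation rather than from Java policy files, can be sketched as follows. Only the Policy plumbing here is real JDK API; the XacmlDecider interface is a hypothetical stand-in for an XACML policy decision point, since the actual provider described by Anderson is not public code:

```java
import java.security.Permission;
import java.security.PermissionCollection;
import java.security.Permissions;
import java.security.Policy;
import java.security.ProtectionDomain;

// Hypothetical stand-in for an XACML PDP (policy decision point);
// a real provider would evaluate XACML <Policy> documents here,
// mapping domain and permission attributes per the profile.
interface XacmlDecider {
    boolean permit(ProtectionDomain domain, Permission permission);
}

// A J2SE Policy provider in the spirit of the profile: the standard
// Java Policy API on the outside, XACML evaluation on the inside.
public class XacmlPolicy extends Policy {
    private final XacmlDecider pdp;

    public XacmlPolicy(XacmlDecider pdp) { this.pdp = pdp; }

    @Override
    public boolean implies(ProtectionDomain domain, Permission permission) {
        // Map the Java access request (domain attributes, permission
        // class/name/actions) to an XACML request and evaluate it.
        return pdp.permit(domain, permission);
    }

    @Override
    public PermissionCollection getPermissions(ProtectionDomain domain) {
        return new Permissions(); // decisions are made per-check in implies()
    }

    public static void main(String[] args) {
        XacmlPolicy policy = new XacmlPolicy((d, p) -> p.getName().startsWith("read"));
        System.out.println(policy.implies(null, new RuntimePermission("readFileDescriptor")));
    }
}
```

Because the decision is made dynamically per check rather than precomputed per code source, this shape accommodates the dynamic and shared policies the profile highlights. (Note that java.security.Policy has since been deprecated in modern Java releases.)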
[August 09, 2003] "Web Services Architecture." Edited by David Booth (W3C Fellow / Hewlett-Packard), Hugo Haas (W3C), Francis McCabe (Fujitsu Labs of America), Eric Newcomer (Iona), Michael Champion (Software AG), Chris Ferris (IBM), David Orchard (BEA Systems). W3C Working Draft 8-August-2003. Third public Working Draft, produced by the W3C Web Services Architecture Working Group as part of the W3C Web Services Activity. Latest version URL: http://www.w3.org/TR/ws-arch/. Also in PDF format. ['This document defines the Web Services Architecture. The architecture identifies the functional components, defines the relationships among those components, and establishes a set of constraints upon each to effect the desired properties of the overall architecture. Since the last publication, the concepts and relationships have been organized into five architectural models.'] "Web services provide a standard means of interoperating between different software applications, running on a variety of platforms and/or frameworks. This document (WSA) is intended to provide a common definition of a Web service, and define its place within a larger Web services framework to guide the community. The WSA provides a model and a context for understanding Web services and the relationships between the various specifications and technologies that comprise the WSA. The WSA promotes interoperability through the definition of compatible protocols. The architecture does not attempt to specify how Web services are implemented, and imposes no restriction on how services might be combined. The WSA describes both the minimal characteristics that are common to all Web services, and a number of characteristics that are needed by many, but not all, Web services. The WSA integrates different conceptions of Web services under a common 'reference architecture'. 
There isn't always a simple one to one correspondence between the architecture of the Web and the architecture of existing SOAP-based Web services, but there is a substantial overlap. We offer a framework for the future evolution of Web services standards that will promote a healthy mix of interoperability and innovation. That framework must accommodate the edge cases of pure SOAP-RPC at one side and HTTP manipulation of business document resources at the other side, but focus on the area in the middle where the different architectural styles are both taken into consideration..." See also the Web Services Architecture WG Issues Document.
[August 09, 2003] "XML Namespaces and Training Wheels. It Would Help to Have Tools to Avoid Creating Problems for Downstream Applications." By Jon Udell. In InfoWorld (August 08, 2003). "There is an ongoing controversy in the XML world about the use of a feature called namespaces. By default, every element in an XML document is assigned to the 'empty' namespace, but the document's root element -- or any contained element -- can be assigned to another namespace, identified by a URI (Uniform Resource Identifier). The idea is to be able to mix and match XML vocabularies in a modular way. For example, my Weblog's RSS 2.0 feed includes an experimental element, called <body>, which lives in the XHTML (eXtensible HTML) namespace, not in the (empty) RSS namespace... [But] clearly there's something counter-intuitive about XML namespaces... In general, we don't have much experience creating and using simple XML vocabularies, never mind mixed ones. InfoPath, the first application making a serious bid to enable mainstream folks to routinely gather and use XML data, hasn't even shipped. I think the creators of InfoPath and similar tools -- who hope that use of modular XML vocabularies will turn out to be like riding a bicycle -- ought to provide some training wheels. One thing that complicates use of namespaces, for example, is that their effects on downstream XML applications can be hard to predict. There are a number of equivalent ways to write a mixed-namespace document. But for downstream applications, such as structured search, some of those ways make life much harder than others. Tools that help us visualize the effects of mixing namespaces are an example of what I mean by training wheels. We're going to need them..." General references in "Namespaces in XML."
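Udell's observation that there are several equivalent ways to write the same namespaced document can be made concrete with any namespace-aware parser: a prefixed element and a default-namespace element with different surface spellings resolve to the same (namespace URI, local name) pair, which is what downstream applications actually match on. A small JAXP sketch (the class name is illustrative):

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;
import org.xml.sax.InputSource;

public class NamespaceDemo {
    static Element parseRoot(String xml) {
        try {
            DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
            f.setNamespaceAware(true); // off by default -- a common pitfall
            return f.newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)))
                    .getDocumentElement();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // Two spellings of "a <body> element in the XHTML namespace":
        String prefixed  = "<x:body xmlns:x='http://www.w3.org/1999/xhtml'/>";
        String defaulted = "<body xmlns='http://www.w3.org/1999/xhtml'/>";

        Element a = parseRoot(prefixed);
        Element b = parseRoot(defaulted);

        // Same (namespace URI, local name) pair despite different syntax.
        System.out.println(a.getNamespaceURI().equals(b.getNamespaceURI()));
        System.out.println(a.getLocalName().equals(b.getLocalName()));
    }
}
```

Tools that surface this resolved view, rather than the raw prefixes, are close to the "training wheels" the column asks for.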
[August 09, 2003] "Special Report: E-Gov Under Construction." By Gail Repsher Emery. In Washington Technology (July 21, 2003). ['A look at three prominent programs reveals how far government has come and how far it has to go.'] "When the Office of Management and Budget two years ago unveiled its e-government program, 'e-gov' became a buzzword, and OMB's 25 high-profile projects requiring extensive collaboration among agencies became synonymous with e-government... Much of e-gov focuses on streamlining operations across government to improve efficiency and customer service, whether the customers are citizens, businesses or government entities. For government contractors, this translates into an expanding range of opportunities. Although many e-gov projects carry relatively little value -- perhaps in the low millions -- their potential for follow-on business can be significant. Taking on an agency's e-gov project can raise a company's profile and give it a chance to introduce new services and technologies... At the federal level, new laws, as well as executive attention to government management and technology spending, have helped spur e-gov investment. A case in point is the development of Grants.gov, which will consolidate the grants application process for 26 agencies at a single Web site... Mark Forman, OMB's administrator of e-government and IT, said July 15 [2003] that agencies have begun developing plans to consolidate government operations in criminal investigation, public health information, financial management and human resources. Plans for these new cross-agency IT initiatives should be complete by September, he said. The benefits will be about $4 billion in savings through fiscal 2008 and improved government operations, Forman said. 
For example, federal officials realized that when anthrax was being sent through the mail in 2001, the plethora of public health information systems weren't effectively linking medical facilities to the information they needed. "We probably need two systems, not 18," he said. Integrator opportunities will lie in the use of Web services and XML for the integration of back-office systems -- the systems that conduct government operations -- making data "available to anybody, anywhere, anytime," said Gene Zapfel, a principal at Booz Allen Hamilton Inc. in McLean, Va. Zapfel is responsible for the firm's e-gov projects..." On Grants.gov (electronic storefront for US Federal grants), see: IT Integration Workshop and XML Schema Documents.
[August 09, 2003] "BEA Pins Future on Weblogic as Integration Software. Greatly Expanded Role for Workshop Development Tool." By Robert McMillan. In InfoWorld (August 04, 2003). "Executives from BEA Systems Inc. outlined their vision of BEA as an integration software vendor at the official launch of BEA's WebLogic 8.1 product Monday. WebLogic 8.1, which actually began shipping on July 15, represents a major upgrade to the previous version of the product, WebLogic 7.0 because it is the first time WebLogic's portal, application server and development tool have been so completely integrated... The new version of WebLogic features a greatly expanded role for its WebLogic Workshop development tool, which can now be used to build custom Java applications as well as WebLogic Portal applications. Previously it was only used for the development of Web services, BEA said. This more integrated WebLogic reflects a focus on application integration rather than just software development, said BEA's chief marketing officer, Tod Nielsen. 'We believe that integration and development will become one,' he said. 'All integration projects have some development aspect to them, and all development projects have some integration aspect to them.' As well as continuing to support the emerging Web services standards, BEA will focus on ease of deployment and application security as WebLogic evolves, Nielsen said. The company is currently working on integrating the software it picked up in its February acquisition of security management vendor CrossLogix Inc., he said in an interview after the event... Approximately 100 partners have now issued statements supporting WebLogic 8.1, Nielsen said, and four of them were at the event, including Hewlett-Packard Co., which has trained 300 of its service professionals to support WebLogic 8.1, and Intel Corp., which is working on joint marketing programs for WebLogic on Intel systems... Siebel Systems Inc. 
announced that it has developed three Siebel Business Integration Applications packages for BEA's WebLogic Integration 8.1 software in the communication, media, and energy industries. Supply chain management software company Manugistics Group Inc. announced plans to deliver its suite of software on WebLogic 8.1, which would serve as the preferred application platform for Manugistics..." See the announcement: "BEA WebLogic Platform 8.1 Ships. New Products Offer Faster Time to Value by Converging the Development and Integration of Applications, Portals and Business Processes."
[August 06, 2003] "Extensible 3D: XML Meets VRML." By Len Bullard. From XML.com (August 06, 2003). "The Virtual Reality Modeling Language is very much alive and being used to solve real problems. In this article, we will examine the new VRML standard, Extensible 3D (X3D), as well as software and other resources available to support it. Examples and a short tutorial on the new X3D XML format are provided. A real-time, 3D multimedia language is not meant to be ubiquitous in the sense that all web designers will learn and apply it. VRML is a language for animators and modellers who are specialists. VRML is meant to perform well. If that requirement had been made of HTML, HTML would be Postscript. Even in the fields it is designed for, more optimal languages exist for narrower applications, but VRML nicely hits the sweet spot of complexity, capability, and performance. And for those who value standards, it is standard. Those who say it is a dead language, has no real use, or solves no real problems are simply wrong. With VRML the artistry of 3D animation and the disciplines of software programming and object-oriented design merge in a multimedia/hypermedia modeling language to enable some of the most compelling and useful content on the Web. The new Web 3D standard is X3D (Extensible 3D). Open source libraries and commercial plugins have already been released in beta for this new standard. Exporters from commercial editors and dedicated editors are in development. Freeware editors are already available. The original syntax for VRML 1.0 and VRML97 is the so-called curly bracket syntax familiar to C and C++ programmers. It is compact, has very fast parsing speed, and is context free... While the decision to provide an XML syntax was quite controversial in the early days of designing the successor to VRML97, few dispute the wisdom of that decision today... There are three outstanding browsers that support the XML encoding now. First, Xj3D is a Java browser. 
The second, Flux, is an ActiveX plugin for use inside Internet Explorer. The third is Contact, an OCX for Internet Explorer from Bitmanagement Software GmbH... It's now possible to create real-time simulations using a powerful combination of XML and VRML. The potential for applying XML technologies such as XSLT to combine higher level language descriptions that can then be rendered into free-roaming worlds full of intelligent and even mischievous objects is on the horizon. There are interesting aspects of X3D such as GeoVRML and Human Animation (standard avatars) as well as scripting with exciting applications to such diverse domains as virtual theater and public safety systems. The challenge of creating real-time 3D applications using a standard XML application language for the Web has been realized..." See: (1) "Web3D Consortium Publishes X3D Final Working Draft"; (2) Extensible 3D (X3D) International Draft Standards; (3) general references in "VRML (Virtual Reality Modeling Language) and X3D."
[August 05, 2003] "UML for Web Services." By Will Provost. From O'Reilly WebServices.xml.com (August 05, 2003). ['In recent years, many software developers have used the Unified Modeling Language, UML, as an aid to designing software systems. As new software strategies such as web services emerge, it becomes important that they can be used within the strictures of formal design. In this article Will Provost shows in detail how web service applications can be designed with UML. Provost covers an implementation process that leads from UML to WDSL and W3C XML Schema to program code.'] "You've heard the hype, you've read the literature, and you're convinced that web services is the next step. You know SOAP and WSDL, and you're ready to build something. It's time to take web services to the white board. You don't want to go plunging into your first web-services project without a proper design process, right? Enter the Unified Modeling Language, which is the white board notation for object-oriented analysis and design (and much more), offering a natural fit to RPC-style service design. In a recent XML.com article I talked about the importance of working from WSDL and W3C XML Schema (WXS) as the source language for web-service development, as opposed to starting from a chosen implementation language and generating WSDL and SOAP code. This is right and good, but WSDL is neither comprehensive nor easy enough to work well as a design language. So the process we really want is not just WSDL-to-Impl, but UML-to-WSDL-to-Impl. [That is:] design in UML, express service semantics in WSDL and WXS, and implement in a supporting programming language. One big advantage of a UML design process is that UML -- through stereotypes, mostly -- can express designs over three important domains: WSDL for messaging semantics, WXS for serializable types, and the OO service or client implementation language. 
In an earlier article I laid out a convenient UML notation (known as a 'profile') for WXS types. This article will focus on two new capabilities: (1) Modeling WSDL components for service semantics, such as port types, ports and services; (2) Integrating WSDL, WXS, and traditional OO types for effective modeling of web services: interoperable description, service and client implementations in concise diagrams that explicitly relate these various types... Web-service designs will need to express certain WXS constructs frequently: complex types, built-in simple types, and enumerations are all most tools support so far... I'll be combining types from three different profiles. I'll apply the namespace prefix xs: to the stereotypes from the WXS notation. Similarly, for the few stereotypes I'm about to introduce I'll use wsdl:... To illustrate the UML notation I'll build up a design for a simple service called 'Love Is Blind', which offers an online dating service based on simple matching queries over sex, age, and interest keywords. Members will be registered in a database with attributes for sex, age, interests, nickname, and picture; each will also have a unique member ID and a password. The service will offer a simple use case to walk-in clients: [1] Query the database for matches, receiving a result set of member profiles, [2] Send a love note to one or more members as found in the result set, passing in name, email address, and message text. Other use cases for other roles such as members and administrators will be developed along the way. Our goal will be to develop a UML design document that expresses the WSDL description, including supporting WXS types, the service implementation, and client interactions with the service..."
See also: (1) Learning UML, by Sinan Si Alhir (July 2003); (2) "Web Services Description Language (WSDL)"; (3) Dave Carlson's XMLModeling.com website, which provides UML Models of Common XML Schemas; (4) "Conceptual Modeling and Markup Languages."
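The WXS side of Provost's design process can be suggested with a small sketch. The complex type below models the member profile from the 'Love Is Blind' case study; the field list (member ID, sex, age, interests, nickname) comes from the article's description, while the type name, namespace prefix, and layout are invented for illustration. Python's standard library is used only to check the fragment and list its fields.

```python
import xml.etree.ElementTree as ET

# Hypothetical WXS complex type for the article's member profile.
# The fields mirror the attributes named in the text; the type name
# "MemberProfile" is illustrative only.
schema_fragment = """
<xs:complexType xmlns:xs="http://www.w3.org/2001/XMLSchema"
                name="MemberProfile">
  <xs:sequence>
    <xs:element name="memberID"  type="xs:string"/>
    <xs:element name="sex"       type="xs:string"/>
    <xs:element name="age"       type="xs:int"/>
    <xs:element name="interests" type="xs:string" maxOccurs="unbounded"/>
    <xs:element name="nickname"  type="xs:string"/>
  </xs:sequence>
</xs:complexType>
"""
node = ET.fromstring(schema_fragment)
# Collect the declared field names in document order.
names = [e.get("name") for e in node.iter() if e.tag.endswith("}element")]
print(names)
```

In Provost's profile this type would appear in a class diagram with an xs: stereotype, then be emitted into the schema imported by the service's WSDL.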
[August 05, 2003] "WDSL Tales From the Trenches, Part 3." By Johan Peeters. From O'Reilly WebServices.xml.com (August 05, 2003). ['Continuing the focus on sound design, we have the third and final installment of Johan Peeters' "WSDL Tales from the Trenches." Peeters concentrates on the importance of modeling the data elements involved in web services, and explains the best strategies for using W3C XML Schema to model this data.'] "I examine the type definitions and element declarations in the types element of a WSDL document. Such types and elements are for use in the abstract messages, the message elements in a WSD. WSDL does not constrain data definitions to W3C XML Schema (WXS). However, alternatives to WXS are not covered in this article: the goal of the series is to provide help and guidance with current real-world problems, and I have not seen any of the alternatives to WXS being used for web services on a significant scale to date. This may change in the future: while only the WXS implementation is discussed in the WSDL 1.1 spec, it was always the intention of the WSDL designers to provide several options. The WSDL 1.2 draft's appendix on Relax NG brings this closer to realization. Data modeling with WXS is not for the faint-hearted. It presents a lot of pitfalls. This article will point some of these out and helps you avoid them..." On WSDL 1.2, see the announcement "W3C Releases Three Web Services Description Language (WSDL) 1.2 Working Drafts." The non-normative Appendex E ('Examples of Specifications of Extension Elements for Alternative Schema Language Support') in the WSDL Part 1: Core WD includes a section on RELAX NG: "A RELAX NG schema may be used as the schema language for WSDL. It may be embedded or imported; import is preferred. A namespace must be specified; if an imported schema specifies one, then the [actual value] of the namespace attribute information item in the import element information item must match the specified namespace. 
RELAX NG provides both type and element definitions which appear in the {type definitions} and {element declarations} properties of [Section 2.1.1] 'Definitions Component' respectively..." See also: (1) XML Schema: The W3C's Object-Oriented Descriptions for XML, by Eric van der Vlist; (2) "Web Services Description Language (WSDL)"; (3) general references in "XML Schema Languages."
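To make the arrangement Peeters describes concrete, here is a hedged sketch of a WSDL 1.1 skeleton whose types element embeds a WXS schema that an abstract message then references. The service, message, and element names are invented for illustration; Python's standard library is used only to parse and navigate the skeleton.

```python
import xml.etree.ElementTree as ET

WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"
XSD_NS = "http://www.w3.org/2001/XMLSchema"

# Skeleton of a WSDL 1.1 document: the types element embeds a WXS
# schema, and a message part refers to a declared element. All names
# (GetQuoteRequest, GetQuoteInput, urn:example) are made up; a full
# WSDL would also qualify the part's element reference with a prefix.
wsdl = f"""
<definitions xmlns="{WSDL_NS}" xmlns:xs="{XSD_NS}"
             targetNamespace="urn:example">
  <types>
    <xs:schema targetNamespace="urn:example">
      <xs:element name="GetQuoteRequest" type="xs:string"/>
    </xs:schema>
  </types>
  <message name="GetQuoteInput">
    <part name="body" element="GetQuoteRequest"/>
  </message>
</definitions>
"""
root = ET.fromstring(wsdl)
schema = root.find(f"{{{WSDL_NS}}}types/{{{XSD_NS}}}schema")
print(schema.get("targetNamespace"))
```

Under the WSDL 1.2 draft's Appendix E, the embedded xs:schema could in principle be swapped for (or import) a RELAX NG grammar instead.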
[August 05, 2003] "Build Rich, Thin Client Applications Automatically Using XML." By Laurence Moroney. In DevX Network (July 30, 2003). ['New products let you build highly graphical client-side applications without the performance headaches of applets or the security concerns of ActiveX. Find out what these XWT-based technologies can do.'] "The decoupling of data and presentation in HTML -- by using XML for the data and XSLT for the presentation of data -- has led to much innovation and flexibility, not least of which is the ability to deliver a document as data in XML and deliver custom styling for that document with different XSLTs. The next big trend in decoupling appears to be separating the user interface implementation from the user interface definition. There are countless initiatives, both open source and commercial, that will have at their core this very principle. The next big revolution in the desktop Windows operating system from Microsoft, codenamed Longhorn, is perhaps the most notable of these. With this type of decoupling, when a user interface is defined in a document such as XML, users would not have to download and install their GUIs; they can simply browse them as easily as they browse Web pages. A runtime engine would be present on the desktop, and servers would be able to deliver the GUI to the browser with an XML document. This will be huge for the corporate environment where at present, rich desktops are difficult and expensive to produce and maintain. Corporations are necessarily paranoid about allowing users to download and install binary files, and in general block users from doing this. The only options are to build a rich interface in HTML, or to build Windows applications and install them using a CD... The world is becoming used to XML driving HTML, with XSLT stylesheets to brand the data differently for different users. So why should more complex user interfaces be any different? 
They shouldn't, and that's the philosophy behind these new initiatives, including Longhorn, which will have its user interfaces defined using a language called XAML (XML Application Markup Language). In short, if a user interface is defined using XML as opposed to programmed using something such as C# or Java, then a runtime engine can parse it and render it on screen. Hooks within the XML could link to server-side applications that process information... Some of the better known initiatives in this area are XUL (XML User interface Language) and XWT (XML Windowing Toolkit) in the open source community, and Bambookit -- a commercial offering... Bambookit is a particularly impressive implementation as it is very easy to use and gives some great functionality and the ability to build complex, visually appealing, and high-performing user interfaces. Another player in this space is XWT, an open source initiative; XWT gives you either a Java-based or ActiveX-based runtime, and has the peculiar implementation of having to run off a server. This means that the XWT runtime has to be on a server and the configuration that you want to launch is passed to that server as a parameter. The configuration is an XML file, but it has to be zipped up in a special .XWAR file along with all of its dependencies... XWT has a flexible scripting model, using Javascript to process actions instead of the property pattern that Bambookit utilizes. It can also communicate with middleware using RPC or SOAP. Other players in this space are XUL, Luxor, Thinlets, and JEasy... At present, hand-coding of the XML documents to define the interfaces -- or the middleware servers to generate the interfaces -- is still necessary, but as time unfolds, IDE packages will have UI designers that compile to an XML format..." See: (1) "User Interface Markup Language (UIML)"; (2) Open XUL Alliance; (3) general references in "XML Markup Languages for User Interface Definition."
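The common idea behind XUL, XWT, and XAML -- a runtime engine walking an XML user-interface description and instantiating widgets -- can be caricatured in a few lines. Everything below (the element names, the attributes, the dict-based "widgets") is invented for this sketch and does not reflect the actual vocabulary of any of those languages.

```python
import xml.etree.ElementTree as ET

# A hypothetical XML UI description; a real XUL/XWT/XAML document
# would use its own element vocabulary and namespaces.
ui_xml = """
<window title="Login">
  <label text="Name:"/>
  <button text="OK" onclick="submit()"/>
</window>
"""

def build(node):
    # Map each element to a "widget": its tag becomes the widget kind,
    # its attributes become properties, and child elements recurse.
    widget = {"kind": node.tag, **node.attrib}
    widget["children"] = [build(child) for child in node]
    return widget

tree = build(ET.fromstring(ui_xml))
print(tree["kind"], [c["kind"] for c in tree["children"]])
```

A real runtime would create native or drawn controls instead of dicts, and would wire attributes like onclick to script handlers that can call back to the server.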
[August 05, 2003] "Creating Java Grid Services." By Aaron E. Walsh (Mantis Development Corporation). In Dr. Dobb's Journal (DDJ) (September 2003), pages 18-23. Special Issue on Distributed Computing. ['Aaron uses the Globus Toolkit, a development framework for developing special-purpose grids, to build Java-based grid services and grid clients.'] "Of the many grid technologies available to Java developers, the Globus Project's Globus Toolkit is perhaps the most well known and widely adopted. In this article I briefly discuss the Globus Project, then examine the latest version of the Globus Toolkit, with which you can create Java-based grid services and grid clients... Simply put, a grid service is a special-purpose web service designed to operate in a grid environment. To this end, OGSI uses WSDL to define compliant grid services that are accessible over the Internet using SOAP/HTTP. The current OGSI 1.0 specification implemented by GT3 extends WSDL 1.1 to define grid services and will eventually support WSDL 1.2. OGSI 1.0 defines a component model by extending WSDL 1.1 and XML Schema Definition (XSD) to support a number of useful and innovative enhancements, including stateful web services, support for inheritance of web services interfaces, asynchronous state change notifications, references to service instances, service collections, and service state data..."
[August 05, 2003] "Grid Services Extend Web Services. A Solid Foundation for Service Consumer Reliability." By Dr. Andrew Grimshaw and Steve Tuecke. In Web Services Journal Volume 3, Issue 8 (August 2003), pages 22-26. "The differences between Grid services as defined in the Open Grid Services Infrastructure [OGSI] V.1.0 specification and Web services are few: a Grid service is simply a Web service that conforms to a particular set of conventions. For example, Grid services are defined in terms of standard WSDL (Web Services Definition Language) with minor extensions, and exploit standard Web service binding technologies such as SOAP (Simple Object Access Protocol) and WS-Security (Web Services Security). So Grid services do look like Web services. [But] these Grid service conventions are not superficial in their function; they address fundamental issues in distributed computing relating to how to name, create, discover, monitor, and manage the lifetime of stateful services. More specifically, these conventions support: (1) Named service instances and a two-level naming scheme that facilitates traditional distributed system transparencies; (2) A base set of service capabilities, including rich discovery (reflection) facilities; (3) Explicitly stateful services with lifetime management... At the heart of the Grid service specification is the notion of a named service instance. A service instance is named by a Grid Service Handle (GSH). The GSH is a classic abstract name - it is not necessarily possible to determine by examination of a GSH such things as the location, number, implementation, operational status (e.g., up, down, failed), and failure characteristics of the associated named object. GSHs by themselves are not useful. Instead they must be resolved into a Grid Service Reference (GSR). A GSR contains sufficient information to communicate with the named grid service instance. 
Thus, GSHs and GSRs form a two-level naming scheme in which GSHs serve as abstract names and GSRs provide the method and address of delivery... The OGSI specification provides the foundation upon which a wide range of Grid services will be defined, built, and interconnected. Grid services marry important concepts from the Grid computing community with Web services. They extend basic Web services by defining a two-layer naming scheme that enables support for the conventional distributed system transparencies, by requiring a minimum set of functions and data elements that support discovery, and by introducing explicit service creation and lifetime management. These extensions are more than syntactic sugar. They extend and enhance the capabilities of Web services by providing common, powerful mechanisms that service consumers can rely upon across all services, instead of the service-specific, often ad-hoc approaches typically employed in standard Web services..." [alt URL]
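The two-level naming scheme described above can be sketched as a small resolver: a GSH is a stable abstract key, and resolving it yields a GSR carrying current delivery details, which may be re-bound if the instance moves or is re-hosted. All handle and endpoint values below are invented.

```python
# A minimal sketch of OGSI's two-level naming: an abstract Grid
# Service Handle (GSH) resolves to a Grid Service Reference (GSR)
# that carries enough information to reach the instance.
class HandleResolver:
    def __init__(self):
        self._table = {}

    def register(self, gsh, gsr):
        # A GSR can change over a service's lifetime (e.g., after
        # migration); re-registration replaces the old binding while
        # the GSH stays stable.
        self._table[gsh] = gsr

    def resolve(self, gsh):
        return self._table[gsh]

resolver = HandleResolver()
resolver.register("gsh://example.org/weather/1234",
                  {"binding": "SOAP/HTTP",
                   "endpoint": "http://host-a.example.org:8080/weather"})
gsr = resolver.resolve("gsh://example.org/weather/1234")
print(gsr["endpoint"])
```

The point of the indirection is exactly what the article notes: clients hold the durable GSH, and only the resolution step needs to know where the instance currently lives.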
[August 05, 2003] "Introducing Open Grid Services. An Infrastructure Built on Existing Technologies." By Dr. Savas Parastatidis (Chief Software Architect, North-East Regional e-Science Centre [NEReSC], Newcastle, UK). In Web Services Journal Volume 3, Issue 8 (August 2003), pages 10-14. "In June 2003, the Global Grid Forum (GGF) adopted the Open Grid Services Infrastructure (OGSI) specification as a GGF standard. OGSI is essential to the Open Grid Computing vision as it is the foundation on top of which the building blocks of future Grid applications will be placed. Those building blocks, the Grid services, are being defined by various GGF working groups, with the Open Grid Services Architecture (OGSA) working group orchestrating the entire process. This article introduces OGSA, presents the OGSI specification, and discusses the significant role of Web service standards in Grid computing... The blueprint of the Grid architecture is defined by OGSA as a set of fundamental services whose interfaces, semantics, and interactions are standardized by the GGF working groups. OGSA plays the coordinating role for the efforts of these groups. It identifies the requirements for e-business and e-science applications in a Grid environment and specifies the core set of services and their functionality that will be necessary for such applications to be built, while the technical details are left to the groups... The OGSI working group decided to build the Grid infrastructure on top of Web services standards, hence leveraging the great effort - in terms of tools and support-that the industry has put into the field. Nevertheless, the group identified some key characteristics missing from Web services standards that they thought would be instrumental in building Grid applications, including the ability to create new services on demand (the Factory pattern); service lifetime management; statefulness; access to state; notification; and service groups. 
OGSI is based on the concept of a Grid service instance, which is defined as 'a Web service that conforms to a set of conventions (interfaces and behaviors)'. All services in the OGSA platform must adhere to the conventions specified by OGSI and, therefore, they are all Grid services. It is crucial to note that the term 'Grid service' is used here to refer to all aspects of OGSI. While OGSA has adopted a services-oriented approach to defining the Grid architecture, it says nothing about the technologies used to implement the required services and their specific characteristics. That is the task of OGSI. OGSA is the Grid community's effort to create a services-oriented platform for building large-scale distributed applications. OGSI constructs the foundations of the OGSA platform on top of Web services. In making some of the characteristics it introduces a mandatory part of the infrastructure, OGSI has moved away from the vision of the Web services infrastructure as a collection of interoperable components that are built on top of widely accepted standards. Instead, OGSI brings a flavor of object orientation a la CORBA and J2EE into Grid services (e.g., service data and Grid service instances) and introduces noncompliant extensions to the WSDL standard (e.g., portType inheritance and service data). It's my opinion that OGSI encourages the development of fine-grained, component-based architectures while Web services promote a coarse-grained approach. Due to its nonstandard features, like service data, mandatory statefulness, GSR, and factories, the OGSI specification deviates from common Web services practices. Then again, OGSI is an application of the Web services concept to the Grid application domain so experience may prove that the introduced characteristics are indeed required..." See Global Grid Forum website. [alt URL]
[August 05, 2003] "Build Interoperable Web Services With JSR-109. Understanding the Foundation of JSR-109." By Jeffrey Liu and Yen Lu (Web Services Tools Team at the IBM Toronto Lab). From IBM developerWorks, Web services. August 05, 2003. "JSR-109 facilitates the building of interoperable Web services in the Java 2 Platform, Enterprise Edition (J2EE) environment. It standardizes the deployment of Web services in a J2EE container. This article discusses the server and client programming models defined by JSR-109 and provides code examples. A crucial objective of Web services is to achieve interoperability across heterogeneous platforms and runtimes. Integrating Web services into the Java 2 Platform, Enterprise Edition (J2EE) environment is a big step forward in achieving this goal. JAX-RPC (JSR-101) took the first step into this direction by defining a standard set of Java APIs and a programming model for developing and deploying Web services on the Java platform. JSR-109 builds upon JAX-RPC. It defines a standard mechanism for deploying a Web service in the J2EE environment, more specifically, in the area of Enterprise JavaBean (EJB) technology and servlet containers. Both of these specifications will be integrated into the J2EE 1.4 specification. Together they serve as the basis of Web services for J2EE. This article describes how JSR-109 works with an emphasis on its benefits and illustrates some example scenarios. Before you begin, you should have a working knowledge of the following: WSDL, XML, Web services, JAX-RPC (JSR-101), and EJB components... The J2EE platform defines several roles in the application development cycle. They include: J2EE product provider, application component provider (developer), application assembler, deployer, system administrator, and tool provider. In an attempt to integrate Web services development into the J2EE platform, JSR-109 defines additional responsibilities for some of the existing J2EE platform roles. 
The J2EE product provider is assigned the task of providing the Web services run time support defined by JAX-RPC, Web services container, Web services for J2EE platform APIs, features defined by JAX-RPC and JSR-109, and tools for Web services for J2EE development. In the actual Web services for J2EE development flow, the developer, assembler, and deployer are assigned specific responsibilities... In general, a developer is responsible for providing the following: web service definition, implementation of the Web service, structural information for the Web service, implementation of handlers, Java programming language and WSDL mappings, and packaging of all Web service related artifacts into a J2EE module... JSR-109 provides a preliminary standardized mechanism for deploying Web services in the J2EE environment. There is still room for future improvements, especially in the area of security. Although the demand for secure Web services is as important as having interoperable Web services, the challenge to standardize a security model across heterogeneous platforms remains. As for JSR-109, it defines the security requirements that it attempts to address, but the actual standardization is deferred to a future release of the specification. Security requirements include the following: credential based authentication (for example, HTTP BASIC-AUTH), authorization defined by the enterprise security model, integrity and confidentiality using XML encryption and XML digital signature, audit, and non-repudiation. Besides security improvements, there are areas where JSR-109 can improve that are directly related to JAX-RPC. For example, JAX-RPC defines an in-depth Java <=> XML serialization and deserialization framework. However, JSR-109 does not provide any support in this area. In addition, JSR-109 does not provide a complete representation for the type mapping rules defined by JAX-RPC. It lacks the support for MIME types.
For example, JAX-RPC allows java.lang.String to be mapped to MIME type: text/plain, however, JSR-109 cannot support this mapping as there is no standard representation for MIME types in the JAX-RPC mapping file. Hopefully, some of these issues will be addressed in a future release of the JSR-109 specification..." See "Implementing Enterprise Web Services (JSR 109)", and final release.
[August 05, 2003] "Building a Business Logic Layer Over Multiple Web Services. Leveraging Multiple Web Services to Build a Truly Distributed Web Services Architecture" By Rajesh Zade and Avinash Moharil. In Web Services Journal Volume 3, Issue 8 (August 2003), pages 16-20 (with 5 figures). "This article discusses how to leverage application resident business logic by building a business logic layer over multiple Web services. Many businesses are adopting Web services to gain access to applications and legacy databases that reside inside corporate networks (usually behind corporate firewalls). Web services have changed the B2B model from centrally located, exchange-oriented B2B APIs to distributed, corporate network-resident APIs. This model has gained huge popularity because it gives corporations complete control over the applications they run and also allows them to expose only those areas that are of interest to their business partners and those they deem appropriate to expose to the world. In the future it will be hard to imagine any businesses running all the applications internally without communicating with any other businesses or partners. Since Web services allow businesses to gain access to partial business logic and data residing within other businesses, the technology also opens up a whole new area for building a business logic layer that can operate over several other Web services in real time. The central idea of this article is how to build a logical layer serving as a Web service over other multiple Web services accessing remote applications and analyzing responses obtained from those Web services in real time. We use a simple case study based on JAX-RPC to build two Web services, 'Timesheet' and 'Insurance,' and also build a logical layer, 'Payroll,' to evaluate responses from these two Web services. The 'Payroll' client can easily be represented as another Web service that satisfies requests from such parties as your payroll processing company. 
We will also discuss EAI/BPM service integration techniques... The examples presented here are based on Sun Microsystems' Java Web Services Developer Pack 1.1... " [alt URL]
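The layering the authors describe -- a 'Payroll' service that consults 'Timesheet' and 'Insurance' services and combines their responses -- can be sketched with two stand-in functions. A real deployment would invoke the downstream services through JAX-RPC stubs over SOAP; the function signatures, employee IDs, and figures here are all invented.

```python
# Stand-ins for the two downstream Web services. In the article's
# case study these would be remote JAX-RPC calls; here they are
# local functions with invented data.
def timesheet_hours(employee_id):
    return {"emp1": 160}.get(employee_id, 0)

def insurance_deduction(employee_id):
    return {"emp1": 120.0}.get(employee_id, 0.0)

def payroll(employee_id, hourly_rate):
    # The business logic layer: call both services, then combine
    # their responses in real time into a single pay figure.
    hours = timesheet_hours(employee_id)
    deduction = insurance_deduction(employee_id)
    return hours * hourly_rate - deduction

print(payroll("emp1", 25.0))
```

As the article notes, the payroll function itself could in turn be exposed as a Web service, giving a service layered over other services.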
[August 05, 2003] "Creating Web Services Using GLUE. An Easy Development Framework." By (AVAO Corporation). In Web Services Journal Volume 3, Issue 8 (August 2003), pages 38-43. "GLUE, by The Mind Electric, is a framework for developing and publishing Web services. It is simple, easy, and fast to create Web services. GLUE supports SOAP1.2, WSDL1.1, and UDDI v2. It comes in two editions: GLUE standard is free, and GLUE Professional has more advanced features. The Mind Electric hosts a Yahoo group for developers to post questions and share knowledge. This article provides an introduction to how to use GLUE, including publishing and invoking Web services, working on SOAP messages, using SOAP over JMS, publishing EJBs as Web services, and publishing and inquiry using UDDI... The underlying structure for transporting XML documents is SOAP (Simple Object Access Protocol), a standard packaging mechanism. In our last example, the SOAP message is encapsulated from GLUE. Sometimes you need to work on low-level SOAP messages directly. GLUE provides SOAP-level APIs for you to create, send, and parse SOAP messages... JMS (Java Message Service) provides reliable and guaranteed delivery messaging. GLUE provides a mechanism for sending and receiving SOAP messages over JMS. Built-in adapters are included for MQSeries (IBM), SonicMQ (Sonic Software), TIBCO JMS (TIBCO), and SwiftMQ (IIT Software). For other JMS products, you can write your own adapter... GLUE can publish any stateless session bean as a Web service. It uses a generic IService interface that allows different kinds of objects to be exposed. GLUE includes an implementation of IService called StatelessSessionBean Service that acts as a proxy to forward a SOAP request to the stateless session bean, and return the result as a SOAP response... UDDI (Universal Description, Discovery and Integration) provides a registry of Web services for advertisement, discovery, and integration purposes. 
You can use GLUE to publish your business to a registry, or to search for a business using UDDI..." See details at the GLUE development website.
[August 05, 2003] "Web Services Made Easy with Ruby. A Simple Development Method." By Aravilli Srinivasa Rao (Hewlett-Packard). In Web Services Journal Volume 3, Issue 8 (August 2003), pages 34-37. "This article looks at how to develop a Web service client to access the Web services that are hosted in the Internet and how to develop a Web service with simple steps using Ruby. Ruby is the interpreted scripting language invented by Yukihiro Matsumoto for quick and easy object-oriented programming and is more popular in Japan than in other countries. Ruby's open-source nature makes it free for anyone to use. All data structures in Ruby are objects, but you can add methods to a class or instance of a class during runtime. Because of this, any instance class can behave differently from other instances of the same class. Ruby's dynamic typing nature doesn't require explicit declaration of variable types. It features a true mark-and-sweep garbage collector that cleans all Ruby objects without needing to maintain a reference count... Several libraries have been developed with Ruby, including Ruby/DBI (to access different databases), Ruby/LDAP (to search and operate on entries in the LDAP), XSLT4R (for XSL transformation), XMLRPC4R (for writing and accessing Web services), and SOAP4R (for writing and accessing Web services). SOAP4R is the implementation of Simple Object Access Protocol for Ruby developed by Hiroshi Nakamura. SOAP4R depends on the XML processor, http-access, logging, and date-time packages... SOAP4R supports logging for the client side and server side as well and supports the user-defined data types. SOAP4R provides the ability to customize the mapping between the Ruby and the SOAP Types. It has good support for SOAP Parameters like IN, OUT, INOUT, and return, and supports WSDL as well..." [alt URL]
[August 05, 2003] "A Weblog API For the Grassroots." By Rich Salz. From O'Reilly WebServices.xml.com (August 05, 2003). "Last month I looked at the Necho message format. I compared it to RSS, its predecessor. In this column, I want to look at its API. Joe Gregorio is the main author of the API, written in the IETF RFC format. Joe is using Marshall Rose's xml2rfc package, so various formats are available... I'll be talking about the Atom API, which is used to manipulate what I previously called Necho data. But both of those might end up being called Feedster pretty soon, judging by an entry in the Wiki, whose URL still reflect it's original name, pie... What does the API do? According to the draft, 'AtomAPI is an application level protocol for publishing and editing web resources...' Compare this to RFC 2518, HTTP Extensions for Distributed Authoring -- WEBDAV. According to the WebDAV FAQ: The stated goal of the WebDAV working group is (from the charter) to 'define the HTTP extensions necessary to enable distributed web authoring tools to be broadly interoperable, while supporting user needs', and in this respect DAV is completing the original vision of the Web as a writable, collaborative medium. On the surface there seems to be a lot of overlap, but on closer inspection this isn't quite true. WebDAV spends a lot of time on locking, which is required for distributed authoring of the same document, but probably less germane to the single author/publisher model of a weblog. It's model of a collection nicely maps into a weblog entry and its comments, but it enforces a hierarchical syntax on the URLs which may not be always be possible or even desirable in weblog software. Finally, it defines a suite of HTTP extensions -- new verbs, new headers -- which also make the burden of implementation too great... So far, the most detailed part of the API draft has to do with the manipulation of entries. 
Doing an HTTP POST to the appropriate URL creates a new entry, and the server returns an HTTP Location header with the URL of the new entry. It's also responsible for 'filling out' the entry, adding its own values for the link, id, and timestamp elements... Doing an HTTP GET on the URL obviously retrieves the entry, a PUT replaces the entry with new contents, and DELETE removes it. There's also a search operation..." See 'The AtomAPI' [draft-gregorio-07], which "presents a technique for using XML (Extensible Markup Language) and HTTP (HyperText Transport Protocol) to edit content... AtomAPI is an application level protocol for publishing, and editing web resources. AtomAPI unifies many disparate publishing mechanisms into a single, simple, extensible protocol. The protocol at its core is the HTTP transport of an XML payload..."
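The verb semantics Salz summarizes can be modeled as a tiny in-memory store: POST creates an entry and yields its Location, GET retrieves it, PUT replaces it, and DELETE removes it. The URL scheme and entry fields below are invented; a real AtomAPI server would also fill in the link, id, and timestamp elements of a posted entry.

```python
import itertools

# A toy server-side model of the AtomAPI entry operations.
class EntryStore:
    def __init__(self):
        self._entries = {}
        self._ids = itertools.count(1)

    def post(self, entry):
        # The server assigns the entry's URL; a real implementation
        # would also fill out link, id, and timestamp elements.
        url = f"/entries/{next(self._ids)}"
        self._entries[url] = entry
        return url  # sent back as the HTTP Location header

    def get(self, url):
        return self._entries[url]

    def put(self, url, entry):
        self._entries[url] = entry   # replace with new contents

    def delete(self, url):
        del self._entries[url]

store = EntryStore()
loc = store.post({"title": "First post"})
store.put(loc, {"title": "First post, edited"})
print(loc, store.get(loc)["title"])
```

Mapping each operation onto a plain HTTP verb, rather than defining new verbs and headers as WebDAV does, is precisely what keeps the implementation burden low.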
[August 05, 2003] "RDF Site Summary 1.0 Modules: Context." By Tony Hammond (Elsevier), Timo Hannay (Nature Publishing Group), Eamonn Neylon (Manifest Solutions), and Herbert Van de Sompel (Los Alamos National Laboratory). "This draft RSS 1.0 module is being released to support applications that wish to make use of the ANSI/NISO Draft Standard for Trial Use 'The OpenURL Framework for Context-Sensitive Services'. It is anticipated that a revised version of this module will be resubmitted following a successful ballot by NISO Voting Members when the Draft Standard is published as currently expected NISO Standard Z39.88-2003. It is being published now in a preliminary version to allow RSS 1.0 feeds to make use of the OpenURL Framework data model. It must be emphasized, however, that the OpenURL Framework has not been finalized and details are therefore subject to change. Note that this draft is aligned with the OpenURL Framework DSFTU in following the OpenURL Framework naming architecture which makes use of the URI, ORI, XRI namespaces. Work is currently ongoing to harmonize this naming architecture with the URI naming architecture and to normalize all identifier references within the OpenURL Framework to be URIs. We expect the outcome of this work to coincide with the successful ballot of the Draft Standard... The rationale in defining the RSS 1.0 'Context' module is that this module allows RSS 1.0 feeds to provide contextual information which relates a description of the feed channel or item to the provenance of the feed, the requester of the feed, and the where and how the feed should be processed. Rich descriptions of this contextual information can be provided using both identifiers and metadata. The OpenURL Framework data model exists to support network applications in the provisioning of context-sensitive services..." 
Note on ANSI/NISO Draft Standards For Trial Use (DSFTU): "NISO releases proposed standards as Draft Standards for Trial Use (DSFTU) when there is a need for field experience. This DSFTU (see the DSFTU documents Part 1 and Part 2 on the NISO Committee AX website) has not been balloted by the NISO members and is not a consensus document. It is released for review and trial implementation for the period: May 1, 2003 - November 1, 2003. Publication of this Draft Standard for Trial Use has been approved by the NISO Standards Development Committee. At the end of the Trial Use period, this draft Standard will be revised as necessary and balloted by the NISO Voting Members, will continue in the development cycle, or will be withdrawn..."
[August 04, 2003] "XML Matters: The RXP Parser. An Extremely Fast Validating Parser With a Python Binding." By David Mertz, Ph.D. (Comparator, Gnosis Software, Inc). From IBM developerWorks, XML zone. August 04, 2003. ['RXP is a validating parser written in C that creates a non-DOM tree representation of XML documents. While RXP itself is not well documented -- and not for the faint of heart -- at least two excellent higher level APIs have been built on top of RXP: pyRXP, a Python binding; and LT XML, a collection of utilities and libraries. In this article, David introduces you to RXP, compares it with the expat parser, and briefly discusses pyRXP and LT XML as ways of taking advantage of the speed RXP has to offer without all of its complexity.'] "Readers of this column will have picked up on the fact that while I write here about XML generally, I have a particular fondness for Python tools. I had planned to break with this pattern for this installment, and focus on using RXP with C applications. However, once I took a closer look at the RXP library, I found that the easiest way to utilize it is through the Python module pyRXP. While the underlying RXP GPL library is almost certainly the fastest validating XML parser you can find, the actual parser code is quite under-documented, and comes with just one simple example of a command-line tool. This tool, rxp, is similar to the utility xmlcat.py (which I presented in my tip Command-line XML processing) as well as a variety of similar utilities -- it reads XML documents, validates them, and outputs a canonical form. You can look through the source code for the file rxp.c to see the way that RXP parsing generates a compact document tree as a data structure. On top of RXP itself, the Language Technology Group has built LT XML, which contains a variety of higher-level tools and APIs. A number of additional tools are built using LT XML, including XED (an XML editor).
In this article, I will take a brief look at the tools in LT XML, but my main focus will be examining the RXP tree API as exposed through the pyRXP binding. As far as I can determine, other high-level languages that might naturally have RXP bindings -- such as Perl, TCL, and Ruby -- have not yet grown them. RXP is fast. A C application that uses the (optionally) validating RXP parser is probably not much different in speed than one that uses the non-validating expat parser, which is itself known to be very fast. RXP works by building a compact in-memory tree structure of the XML document being parsed. Failures in parsing are failures in tree building, and a successful parse gives you a data structure that is much more efficient than a DOM representation of XML. Where you need to build a complete data structure out of an XML document, RXP probably edges out expat slightly; and if you need validation, expat is simply not an option. However, for purely sequential processing, or for extracting a small subset of the information in an XML document, expat can be superior, since it doesn't need to save any representation of already processed or already skipped tags. In fact, for sufficiently large documents, expat gains an overpowering advantage -- you rarely want to create an in-memory representation of a 1 GB XML document, and with RXP you have no choice about this. An application built around expat is happy to pull off a few tags of interest as it reads through that much XML, likely utilizing orders of magnitude less memory than the document size. The speed of RXP really stands out in the context of the pyRXP binding..."
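The sequential-processing trade-off described above can be illustrated with Python's standard-library expat binding. The sketch below (document and element names are invented for the example) pulls just the <title> text out of a document via streaming callbacks, never materializing a tree; this is the mode of use where expat beats a tree-builder like RXP on memory, at the cost of giving up validation:

```python
import xml.parsers.expat

# A tiny stand-in document; imagine it being 1 GB of <body> text instead.
DOC = b"<book><title>RXP and Friends</title><body>Lots of text...</body></book>"

titles = []          # collected <title> character data
in_title = [False]   # mutable flag shared with the handlers

def start(name, attrs):
    in_title[0] = (name == "title")

def end(name):
    if name == "title":
        in_title[0] = False

def chars(data):
    # Only character data inside <title> is kept; everything else
    # is discarded as soon as expat hands it to us.
    if in_title[0]:
        titles.append(data)

parser = xml.parsers.expat.ParserCreate()
parser.StartElementHandler = start
parser.EndElementHandler = end
parser.CharacterDataHandler = chars
parser.Parse(DOC, True)  # checks well-formedness only; no DTD validation, no tree

title_text = "".join(titles)
```

Note the joining step: expat is free to deliver character data in multiple chunks, so a robust handler accumulates and concatenates rather than assuming one callback per element.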
[August 04, 2003] "Dispute Exposes Bitter Power Struggle Behind Web Logs." By Paul Festa. In CNET News.com (August 04, 2003). ['As commercial interests have increasingly dominated the Internet, Web logs have come to represent a bastion of individual expression and pure democracy for millions of bloggers.'] "It should come as little surprise that a technology behind blogs -- online chronicles of personal, creative and organizational life -- has manifested the kind of bitter fight for control that is inevitable in any truly democratic institution. The conflict centers on something called Really Simple Syndication (RSS), a technology widely used to syndicate blogs and other Web content. The dispute pits Harvard Law School fellow Dave Winer, the blogging pioneer who is the key gatekeeper of RSS, against advocates of a different format. The most notable of these advocates are Blogger owner Google and Sam Ruby, an influential IBM developer who is now shepherding an RSS alternative through its early stages of development. Winer's opponents are seeking a new format that would clarify RSS ambiguities, consolidate its multiple versions, expand its capabilities, and fall under the auspices of a traditional standards organization. Calls to revise RSS itself fell on deaf ears when Winer decided to freeze its technological core, preventing substantial changes to the heart of the format. The dispute offers a glimpse into the byzantine and highly politicized world of industry standards, where individuals without legal authority over a protocol may nonetheless exercise control over it and where, consequently, personal attacks can become the norm... 'Dave Winer has done a tremendous amount of work on RSS and invented important parts of it and deserves a huge amount of credit for getting us as far as we have,' Tim Bray, a member of the World Wide Web Consortium's (W3C) influential Technical Architecture Group, wrote in a June 23 Web log entry. 
Bray is also a co-creator of Extensible Markup Language (XML), a W3C-recommended language on which RSS is based. 'However, just looking around, I observe that there are many people and organizations who seem unable to maintain a good working relationship with Dave.' [...] Harvard insists that the transfer of the format to its Berkman Center from UserLand should put to rest any questions about Winer's control... Critics reject that argument, saying the format's transfer to Harvard and the creation of an RSS advisory board--which includes Winer--merely obscure his de facto control of the format. 'RSS has always been controlled by a single vendor,' said Mark Pilgrim, a Web developer and professional trainer in Apex, N.C., who works for Washington-based software development and consulting firm MassLight and on the Web Standards Project. 'RSS is not an open format.' [...] The wrangling over RSS has led many to call for the transfer of the syndication format to a formal standards body, where disputes over technologies' direction and development can be settled by working groups that represent a broad array of parties. Yet even if Winer and his detractors agree on that direction, the issue will face another contentious question: Which standards group should get the project? 'Don't go to W3C, which is just too popular these days for its own good,' Bray wrote in his June 23 blog. 'If we could convince W3C to launch a working group (which would take months) there would instantly be 75 or more companies who wanted to join it, because RSS is hot stuff. It's not entirely impossible they could do a good job, but it is entirely possible they could really screw it up.' A better destination for the alternative format, Ruby and Bray suggested, is the IETF, which is less restrictive in letting people post 'request for comment' drafts and which admits individual members rather than dues-paying corporate and organizational representatives..."
Related news: "RSS 2.0 Specification Published by Berkman Center Under Creative Commons License." See general references in "RDF/Rich Site Summary (RSS)."
[August 02, 2003] "Infrastructure-Level Web Services." By Anne Thomas Manes (Burton Group). In Web Services Journal Volume 3, Issue 8 (August 2003), page 58. "Infrastructure-level Web services are Web services that implement part of the distributed computing infrastructure. They help other Web services communicate. In particular, these services make the Web services framework more robust. They provide such functionality as: (1) Security and provisioning; (2) Performance management; (3) Operational management; (4) Metering, billing, and payments; (5) Routing and orchestration; (6) Advertisement and discovery; (7) Caching and queuing; (8) State management and persistence... Consider how much simpler it would be if you were using a distributed security infrastructure based on a service-oriented architecture. Imagine a set of Web services that provides simple, easy-to-use security functionality -- available to all users and all applications regardless of language, platform, application, or location. These trust services include single sign-on, entitlement, signature generation and verification, and key management. From an administrative point of view, you also have policy management and provisioning services. Once you've defined the standard formats and APIs for these services, the functionality can be built into the runtime frameworks so that you no longer need to rely on developers to implement security properly. Security becomes automatic. This isn't just a glossy-eyed dream of the future. The standards community is hard at work making this stuff happen. OASIS Security Assertion Markup Language (SAML) defines standard XML formats for security tokens (authentication and authorization assertions). It also defines standard protocols for single sign-on and entitlement Web services. OASIS Service Provisioning Markup Language (SPML) defines provisioning Web services and is a framework for exchanging user, resource, and service provisioning information.
OASIS Digital Signature Services (DSS) defines standard Web services for signature generation and verification. And W3C XML Key Management Specification (XKMS) defines standard Web services for key management and distribution. Security is not the only infrastructure area moving toward Web services. OASIS UDDI defines a standard advertising and discovery service, and OASIS Web Services Distributed Management (WSDM) will use Web services to manage distributed systems. Peer-to-peer and grid computing systems can also capitalize on a pervasive, distributed set of infrastructure-level services to manage issues such as routing, scheduling, caching, presence, localization, security, state management, and persistence..." [alt URL]
[August 04, 2003] "In Their Orbit. What Drives Business-Technology Innovation? Look to Large Companies' Supply Chains, Not Just Tech Vendors." By Rick Whiting. In InformationWeek (August 04, 2003). "When Wal-Mart Stores Inc. told 100 key suppliers this year that they need to be able to track pallets of merchandise using radio-frequency ID technology by January 2005, it did more than send research and development teams scrambling. It offered the latest example of how supply chains increasingly will become the innovation chains that shape business technology. Wal-Mart didn't discover RFID, and there are companies farther down the path to exploiting it. But the ongoing interconnection of supply chains, and the role that technology plays in enabling them, means the largest (and smartest) companies at the center of those hubs have greater power than ever to shape the pace and focus of technology development... RFID is only the latest example of this kind of market influence. McKesson is working with its major retail customers to adopt a standard for EDI called Electronic Data Interchange-Internet Integration Applicability Statement 2 (EDIINT AS2). Last year, Wal-Mart asked its nearly 10,000 suppliers to begin using the standard as an alternative to expensive value-added networks. Wal-Mart's actions even influence its big-retailer competitors, says Eric Peters, senior VP of products and strategy at Manhattan Associates Inc. Manhattan Associates develops applications for supply-chain and trading-partner management that companies such as Sulyn Industries use in their relationships with big customers such as Wal-Mart to comply with their technical requirements, including RFID and EDIINT AS2. Peters says the Wal-Mart competitors he works with say they can't let Wal-Mart get more than six months ahead in adopting new business-technology practices. They're comfortable letting Wal-Mart take a lead in adopting technology such as RFID, as long as they can be close followers.
'As competitors to Wal-Mart, they can't let them gain too much of a competitive edge,' Peters says. With its relentless chip innovation, Intel has done as much as any company to explore what's possible with IT. But as a major market influencer, Intel also leverages its sway with its suppliers to try to modify how businesses use technology. Intel is a strong proponent of RosettaNet, a set of XML-based standards used to automate business-to-business transactions that it's increasingly using with customers and suppliers. Since 2000, when Intel began using RosettaNet, the company has adapted 28 of its transaction systems, including ordering, payment, and inventory status, to support the standards..." See also "Physical Markup Language (PML) for Radio Frequency Identification (RFID)."
[August 04, 2003] "Portal Standards Take Flight. Vendor Frameworks to Support JSR 168 and WSRP." By Cathleen Moore. In InfoWorld (July 30, 2003). "As two key specifications approach final status, portal vendors are stitching in standards-compliant APIs for delivering content and applications into the portal framework. Both JSR (Java Specification Request) 168 and WSRP (Web Services for Remote Portals) are on pace for final release between mid-August and early September [2003]. JSR 168, shepherded by the Java Community Process, was created to establish a standard portlet programming API. The Organization for the Advancement of Structured Information Standards' (OASIS) WSRP specification, meanwhile, leverages Web-services standards to integrate remote content and applications into portals. Sun Microsystems, IBM, Vignette, Plumtree Software, and BEA Systems are outfitting support into their portals... Leading the pack, Sun last month kicked off the beta release of its Sun ONE Portal Server 6.2, which includes support for JSR 168 via an early-access version of the Sun ONE Portlet Builder 2.0. The final release of the portal, due in September, will include the full JSR 168 implementation, and support for WSRP will be added in the first quarter of 2004, Sun officials said. IBM this month plans to roll out Version 5.0 of its WebSphere Portal featuring an open source implementation of JSR 168. After the JSR 168 and WSRP specs are finalized, IBM will furnish support for both via an incremental update..." See: (1) "JSR 168 Portlet API Specification 1.0 Released for Public Review"; (2) "OASIS Web Services for Remote Portlets - Toward Standardization"; (3) general references in "Web Services for Remote Portals (WSRP)."
[August 04, 2003] "Introducing the Portlet Specification, Part 1. Get Your Feet Wet with the Specification's Underlying Terms And Concepts." By Stefan Hepper and Stephan Hesmer. In JavaWorld (August 01, 2003). ['Portlets are Java-based Web components, managed by a portlet container, that process requests and generate dynamic content. Portals use portlets as pluggable user interface components that provide a presentation layer to information systems. The next step, after servlets in Web application programming, portlets enable modular and user-centric Web applications. The goal of JSR (Java Specification Request) 168, the Portlet Specification, is to enable interoperability between portlets and portals. This specification defines the contract between portlet and portlet container, and a set of portlet APIs that address personalization, presentation, and security. The specification also defines how to package portlets in portlet applications. Part 1 of this two-part series describes the Portlet Specification and explains its underlying concepts. In Part 2, the authors explain the specification's reference implementation and show some portlet examples.'] "With the emergence of an increasing number of enterprise portals, various vendors have created different APIs for portal components, called portlets. This variety of incompatible interfaces generates problems for application providers, portal customers, and portal server vendors. To overcome these problems, JSR (Java Specification Request) 168, the Portlet Specification, was started to provide interoperability between portlets and portals. JSR 168 defines portlets as Java-based Web components, managed by a portlet container, that process requests and generate dynamic content. Portals use portlets as pluggable user interface components that provide a presentation layer to information systems... The IT industry has broadly accepted JSR 168. 
All major companies in the portal space are part of the JSR 168 expert group: Apache, ATG, BEA, Boeing, Borland, Broadvision, Citrix, EDS, Fujitsu, Hitachi, IBM, Novell, Oracle, SAP, SAS Institute, Sun Microsystems, Sybase, TIBCO, and Vignette... A portal is a Web-based application that provides personalization, single sign-on, and content aggregation from different sources, and hosts the presentation layer of information systems. Aggregation is the process of integrating content from different sources within a Webpage. A portal may have sophisticated personalization features to provide customized content to users. Portal pages may have different sets of portlets creating content for different users. [In terms of] basic architecture, the portal Web application processes the client request, retrieves the portlets on the user's current page, and then calls the portlet container to retrieve each portlet's content. The portlet container provides the runtime environment for the portlets and calls the portlets via the Portlet API. The portlet container is called from the portal via the Portlet Invoker API; the container retrieves information about the portal using the Portlet Provider SPI (Service Provider Interface)..." See references in the news story "JSR 168 Portlet API Specification 1.0 Released for Public Review." [Part 2.]
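The portal → portlet container → portlet call chain described above can be caricatured in a few lines. This is plain Python, not the Java Portlet API of JSR 168; all class and method names here are invented for illustration, and it shows only the aggregation step, not personalization or the Invoker/Provider SPIs:

```python
class Portlet:
    """A pluggable UI component that generates a content fragment."""
    def render(self, request):
        raise NotImplementedError

class NewsPortlet(Portlet):
    def render(self, request):
        return f"<div>News for {request['user']}</div>"

class WeatherPortlet(Portlet):
    def render(self, request):
        return f"<div>Weather in {request['city']}</div>"

class PortletContainer:
    """Runtime environment: invokes each portlet through its API."""
    def render_all(self, portlets, request):
        return [p.render(request) for p in portlets]

class Portal:
    """Processes the client request, finds the portlets on the user's
    current page, and aggregates their fragments into one page."""
    def __init__(self, container):
        self.container = container
    def handle(self, request, page_portlets):
        fragments = self.container.render_all(page_portlets, request)
        return "\n".join(fragments)

portal = Portal(PortletContainer())
page = portal.handle({"user": "alice", "city": "Chicago"},
                     [NewsPortlet(), WeatherPortlet()])
```

The point the specification standardizes is the `Portlet`/container contract: as long as a portlet honors that interface, any compliant portal can aggregate it, which is what makes portlets portable across vendors.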
[August 04, 2003] "Orbeon Framework Transforms XML to Java." By Yvonne L. Lee. In Software Development Times (August 01, 2003). "Orbeon Inc. says it has improved the parsing engine in the latest version of its Open XML Framework (OXF) XML transformation middleware, which will ship early this month. The software lets developers turn XML documents into J2EE applications without writing new Java code... OXF 2.0 includes several other features, including the ability to import and export to Adobe's Portable Document Format (PDF) and to Microsoft's Excel format. The new parser (Web Application Controller) lets developers separate site navigation, page validation, page layout, and site presentation into distinct parts of an application... The new version has drivers for importing and exporting XML documents to Adobe's PDF and Microsoft's Excel formats. This would enable developers to build an application where users could work with and update information offline from within a spreadsheet and later upload the batch results..." See: (1) the announcement "Orbeon Ships Version 2.0 of Its Open XML Framework (OXF). Productivity, Flexibility, Reuse, and Web Services Are Not Buzzwords Anymore."; (2) Open XML Framework website.
[August 01, 2003] "BEA Targets IBM With App Server Plan." By Martin LaMonica. In CNET News.com (August 01, 2003). "BEA Systems on Monday [2003-08-04] will launch an effort that's designed to help it regain the lead in the application server market. The company, runner-up to IBM in the market for Java-based software that's used to manage Web transactions, said it plans to officially announce a new version of its WebLogic Platform server software, along with a plan for linking business applications, at a press conference in San Francisco. The plan includes key pieces of BEA's new strategy to tackle the integration software and development tool markets, as it looks for an edge in a highly competitive area. Version 8.1 of BEA's WebLogic Platform may appear at first glance to be a minor upgrade. But CEO Alfred Chuang said the release is the culmination of more than two years and several million dollars' worth of software engineering. WebLogic Platform is a suite of Java-based server applications, based on the Java 2 Enterprise Edition (J2EE) standard, which businesses use to build custom applications. The new release includes corporate portal software, specialized integration software, data access tools, and a set of development tools that BEA hopes will make Java programming much easier. San Jose, Calif.-based BEA once dominated the market for Java server software but lost its lead to IBM, according to Gartner Dataquest research announced in May. The company is facing renewed attacks on its customer base from IBM, Oracle, Sun Microsystems and open-source alternatives such as JBoss Group. Now, the company is basing a comeback on better software for connecting business systems from multiple vendors and on an easy-to-use Java development tool. It still faces competition from deep-pocketed rivals, though, and the task of expanding its base of customers beyond technologically sophisticated, high-level users to more mainstream corporate developers..." 
See earlier: "BEA WebLogic Platform 8.1 Ships. New Products Offer Faster Time to Value by Converging the Development and Integration of Applications, Portals and Business Processes."
[August 01, 2003] "Oracle, Sun and Partners Publish WS Coordination Specification." By [CBDi Newswire Staff]. In CBDI Newswire: Insight for Web Service and Software Component Practice (August 01, 2003). "Last August IBM, Microsoft and BEA published two specifications -- WS-Transaction and WS-Coordination. [With Web Services Composite Applications Framework (WS-CAF)] we now have a further specification covering a very similar area from Oracle, Sun and partners. At a capability level these specifications cover very similar functionality. At first sight the WS-Coordination specification looks to be tightly bound with BPEL, which of course Oracle and Sun are currently in opposition to. Whilst we are predicting that the broader industry will eventually converge around a BPEL-like protocol, it is clear that this is currently very inadequate when compared to ebXML, and it will not happen soon. However in reality the interface between business process steps is a Web Service, and hence an agreed standard protocol. Neither specification has been submitted to a standards body yet, and this latest publication from Oracle, Sun, et al. seems like an attempt to bounce IBM et al. into a standards process that has broader representation. We think this is a good idea. The industry really needs to address how it is going to come together on the standards in the area of complex business transactions, and the arbitrary leadership of IBM, Microsoft plus a chosen partner (in this case BEA) is really inadequate for what is a complex and crucial set of protocols. We assess that the specifications are sufficiently close to be a basis for agreement on that specific area, without requiring the more difficult and contentious area of process scripting to be resolved concurrently..." Note: The "Global XML Web Services Specifications" from Microsoft and its partners include WS-Coordination and WS-Transaction; see also "Understanding GXA."
Commentary is provided in the Web Services Journal articles by Jim Webber and Mark Little: "Introducing WS-Coordination"; "Introducing WS-Transaction Part 1"; "Introducing WS-Transaction Part 2." See references in the news story "Web Services Composite Application Framework (WS-CAF) for Transaction Coordination."
[August 01, 2003] "Truce Called in Java Standards Battle." By Martin LaMonica. In CNET News.com (July 31, 2003). "A closely watched feud over Java standards compliance moved closer to resolution this week, but questions over the value of that standard still linger. Open-source Java software distributor JBoss Group said Monday that it will work to certify its software with the Sun-controlled Java 2 Enterprise Edition (J2EE) standard. That decision reverses the company's previous stand and could resolve a long-standing dispute with Sun over Java certification. J2EE isn't a product. It's a set of specifications used by commercial software makers to build products using Java in a standardized way. A software application written to the J2EE specification should run without change on any J2EE-compatible application server, for instance.... In March, Sun offered JBoss a chance to license the software tests that certify J2EE compliance, but the negotiations promptly broke down. This week, however, JBoss executives said they will seek J2EE certification in order to broaden the software's appeal to large businesses. Sun representatives confirmed that the companies are in discussions. JBoss executives stressed that the certification is largely a symbolic move and doesn't change the company's technology..."
[August 01, 2003] "Nortel Adds Application-Level Security to Alteon Switches." By Larry Hooper. In InternetWeek (August 01, 2003). "Nortel Networks is deepening its push into the security market with new products in its Layer 4-7 Alteon application switch line. As Web services take hold among enterprise clients, building security into Layer 4-7 switching platforms is a natural progression... Application layer security features in the Alteon OS 21.0 release include denial of service protection and XML and SOAP inspection functionality. The XML and SOAP inspection provides Web services-aware traffic management that inspects, classifies and directs Web Services traffic. The new Alteon SSL VPN release 4.1 is designed for companies that use SSL VPNs as their primary means of remote access. The software offers dynamic access control to assess the security level of a client and restrict access accordingly. The release also features auto log-off to automatically time-out a session and purge all cached information after a period of inactivity..." See details in the announcement: "Nortel Networks Announces New Security Products and Security Partner Program. Enhanced Portfolio Offers Customers More Choices in Network Security."
[August 01, 2003] "Netware 6.5 Marks Novell's Open Integration Push." By Ian Palmer. In Enterprise Developer News (July 31, 2003). "Novell, after years of quiet contributions to Open Source projects, including Apache and OpenLDAP, is taking its Open Source efforts to a new level. Novell is the latest in a line of commercial software firms, including IBM, Microsoft and Borland, to begin using Open Source tools and techniques to broaden customer access to their code. Notably, Novell execs have promised that this August's Netware 6.5, along with its support for web services deployment, will bundle some of the most popular Open Source technologies for the enterprise, including Apache 2.0.45, MySQL 4.0.12, Perl 5.8, PHP 4.2.3 and Tomcat 4.1.18. These technologies are the types of solutions developers can integrate with existing legacy software. But adding Open Source to NetWare is just the latest flavor of Novell's growing interest in Open Source. Aside from simply borrowing from Open Source communities, the company has also launched projects aimed to contribute. In April, Novell launched Novell Forge, an Open Source developers' portal and resource modeled after SourceForge. To date, Novell Forge hosts some 200-plus Open Source coding projects for extending NetWare support to other commercial and Open Source technologies..."
[August 01, 2003] "Guess What? Microsoft Won." By Charles Cooper. In CNet News.com (August 01, 2003). An unchecked monopoly: "Three years removed from the breakup order and 10 months after a final settlement was struck, it's fair to ask who really came out on top. One clear winner was the team of high-priced defense lawyers from Sullivan & Cromwell that Microsoft hired for the occasion. Another was the clutch of legal commentators who parlayed their rent-a-quote talents during the trial into comfy gigs. But considering current events, the 'end of Microsoft as we know it' crowd isn't looking quite so hot. While the specter of David Boies may haunt the corridors of Redmond like Banquo at the feast, no would-be challenger has yet emerged to wrest away the crown. Bill Gates' cyberempire is growing richer all the time. To be sure, Microsoft received some unexpected help from the vagaries of the business cycle. A recession came along at just the right time, wiping out many start-ups that were challenging the company while halting an exodus of employees and managers. The net effect of all this is a Microsoft that is more confident and stronger than ever. 'They made either no change or only cosmetic changes to their business practices,' a former executive recently confided to me. The scope of the company's wealth and ambition was on full display last week, when senior management outlined its latest plans to spend $6.8 billion and hire some 5,000 new people. Those are breathtaking numbers for most folks, but they're mere rounding errors when you're sitting on a $49 billion trove. Meanwhile, the company's desktop monopoly is no less valuable than it was when Internet mania busted out in the late 1990s. Microsoft is predictably late with Longhorn, the next major version of the Windows operating system. But Microsoft's being behind schedule with an OS release hardly rates as an event anymore.
The company still winds up registering blockbuster sales..."
[August 01, 2003] "China to Snub MPEG Standard For Own Format." By [Staff]. In CNETAsia (August 01, 2003). "China will have its own audio-video compression standard, as part of moves to shift reliance away from Western formats. According to a report from wire agency Dow Jones, multinationals like Microsoft, IBM and Philips have already signed up to be part of the new standard's working group. The new format is aimed at rivaling technology from the globally-dominant MPEG (Moving Picture Experts Group). MPEG-1 compression is used in the video CD (VCD) format common throughout Asia, while MPEG-2 is used on DVDs. MPEG-4 is widely used for compressing video for Web download and streaming and will also be the worldwide standard for streaming multimedia to third-generation (3G) phones. Hardware manufacturers and content providers pay licensing fees to the MPEG Licensing Authority (MPEG LA) for the use of these compression standards. MPEG LA represents 18 patent holders, including Apple Computer and Sun Microsystems. China, a key manufacturing hub, is also keen to avoid MPEG license fees, said the report. In future, companies selling AV equipment to Chinese consumers will have to pay for the new format's licenses, whose fee has been pegged at 1 yuan per device, or 12 US cents, much lower than current MPEG fees. The competing Chinese standard, known as AVS, will be proposed as a national standard in 2004, according to Huang Tiejun, secretary-general of the Audio Video Coding Standard Workgroup... China is not the only Asian country which has developed a rift with MPEG LA. Japan's mobile video content providers have threatened to drop MPEG-4 compression technology -- touted as crucial for delivering video to mobile handsets -- unless license fees come down..." See 'MPEG-4' in "Patents and Open Standards."
Previous Articles July 2003
[July 30, 2003] "MPEG Standard Addresses Rights." By Paul Festa. In CNET News.com (July 30, 2003). "The Moving Picture Experts Group has completed an effort on two digital rights management technologies intended to increase the MPEG standard's appeal to the recording industry and Hollywood. MPEG announced the completion of parts 5 and 6 of MPEG-21, a member of the MPEG family of multimedia standards that defines how audio and video files can play in a wide range of digital environments. The digital rights management (DRM) capabilities are crucial to MPEG-21, as they are to other emerging multimedia standards, so that publishers in the recording and movie industries will adopt the standard without fear of losing control of copyrighted works. Part 5 of the standard, the Rights Expression Language (REL), lets multimedia publishers designate rights and permissions for how consumers can use their content. The REL expression 'play,' for instance, would let the consumer use the material in a 'read only' mode, while other expressions could allow more flexibility in playback and reproduction. REL also lets consumers establish privacy preferences for their personal data. Part 6 of the standard, the Rights Data Dictionary (RDD), defines terms that publishers can use when working with REL..." Other details are given in the announcement "MPEG Approves Another MPEG-21 Technology." See also the note on the relationship between MPEG-21 Part 5 (viz., Information technology -- Multimedia framework (MPEG-21) -- Part 5: Rights Expression Language ) and the XrML-based 'Rights Language' targeted for development within the OASIS Rights Language Technical Committee; the OASIS RLTC was formed in March 2002 to "use XrML as the basis in defining the industry standard rights language in order to maximize continuity with ongoing standards efforts..." 
Since (a) MPEG Part 5: Rights Expression Language is now (effectively) an ISO FDIS [Final Draft International Standard] and (b) is scheduled to become an ISO Standard in September 2003, and (c) no draft committee specification for an XrML-based rights language has been created within the OASIS RLTC, it appears that the MPEG-21 Part 5 document as an ISO Standard will become the reference standard for the strongly patented ContentGuard/Microsoft XrML rights language technology. References: (1) "MPEG Rights Expression Language"; (2) "Extensible Rights Markup Language (XrML)"; (3) "XML and Digital Rights Management (DRM)."
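The REL 'play' example above can be pictured with a toy license document: a grant ties a principal, a right, and a resource together. This sketch is purely illustrative -- the element names below are stand-ins, not the actual XrML/MPEG-21 Part 5 vocabulary:

```python
# Hypothetical sketch of the kind of grant an REL license expresses.
# Element names here are illustrative stand-ins for the real schema.
import xml.etree.ElementTree as ET

license_xml = """
<license>
  <grant>
    <principal keyHolder="consumer-123"/>
    <right name="play"/>  <!-- 'read only' use of the content -->
    <resource ref="urn:example:track:42"/>
  </grant>
</license>
"""

root = ET.fromstring(license_xml)
grant = root.find("grant")
assert grant.find("right").get("name") == "play"
assert grant.find("resource").get("ref") == "urn:example:track:42"
```

A verifier would evaluate such grants against the requesting principal and the requested action before permitting playback.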
[July 30, 2003] "Identifying and Brokering Mathematical Web Services." By Mike Dewar (Numerical Algorithms Group, NAG). In Web Services Journal Volume 3, Issue 8 (August 2003), pages 44-46. "The MONET project is a two-year investigation into how service discovery can be performed for mathematical Web services, funded by the European Union under its Fifth Framework program. The project focuses on mathematical Web services for two reasons: first, mathematics underpins almost all areas of science, engineering, and increasingly, commerce. Therefore, a suite of sophisticated mathematical Web services will be useful across a broad range of fields and activities. Second, the language of mathematics is fairly well formalized already, and in principle it ought to be easier to work in this field than in some other, less well-specified areas... MSDL is the collective name for the language we use to describe problems and services. Strictly speaking, it is not itself an ontology but it is a framework in which information described using suitable ontologies can be embedded. One of the main languages we use is OpenMath, which is an XML format for describing the semantics of mathematical objects. Another is the Resource Description Framework Schema (RDFS), which is a well-known mechanism for describing the relationship between objects. The idea is to allow a certain amount of flexibility and redundancy so that somebody deploying a service will not need to do too much work to describe it. An MSDL description comes in four parts: (1) A functional description of what the service does; (2) An implementation description of how it does it; (3) An annotated description of the interface it exposes; (4) A collection of metadata describing the author, access policies, etc... There are two main ways in which it is possible to describe the functionality exposed by a service. 
The first is by reference to a suitable taxonomy such as the 'Guide to Available Mathematical Software (GAMS)' produced by NIST, a tree-based system where each child in the tree is a more specialized instance of its parent... The second way to describe the functionality exposed by a service is by reference to a Mathematical Problem Library, which describes problems in terms of their inputs, outputs, preconditions (relationships between the inputs), and post-conditions (relationships between the inputs and outputs). The MSDL Implementation Description provides information about the specific implementation that is independent of the particular task the service performs. This can include the specific algorithm used, the type of hardware the service runs on, and so on... In addition, it provides details of how the service is used. This includes the ability to control the way the algorithm works and also the abstract actions that the service supports. While in the MONET model a service described in MSDL solves only one problem, it may do so in several steps. For example, there may be an initialization phase, then an execution phase that can be repeated several times, and finally a termination phase. Each phase is regarded as a separate action supported by the service... While WSDL does a good job in describing the syntactic interface exposed by a service, it does nothing to explain the semantics of ports, operations, and messages. MSDL has facilities that relate WSDL operations to actions in the implementation description, and message parts to the components of the problem description. In fact the mechanism is not WSDL-specific and could be used with other interface description schemes such as IDL... There are many other aspects of Web services -- not least the ability to negotiate terms and conditions of access, measure the quality of the actual service provided, and choreograph multiple services to solve a single problem -- that are still being worked out. 
The project partners' ultimate goal is to develop products and services based on the MONET architecture, but the viability of this depends to a large extent on solutions to the other emerging issues. While we are confident that this will happen, it is not yet clear what the timescale will be. The MONET project is currently building prototype brokers that can reason about available services using MSDL descriptions encoded in the W3C's OWL. We are also investigating the applicability of this technology to describing services deployed in the Open Grid Service Architecture (OGSA)..." [alt URL]
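The four-part MSDL description and the GAMS-style taxonomy matching described above (each child in the tree a specialization of its parent) can be sketched in a few lines. The class and field names here are illustrative assumptions, not the actual MONET/MSDL schemas:

```python
# Illustrative sketch of MSDL-style service matching; not the real MONET schema.
from dataclasses import dataclass, field

@dataclass
class ServiceDescription:
    """The four parts of an MSDL description, per the article."""
    functional: str                                      # e.g., a GAMS-style taxonomy path
    implementation: dict = field(default_factory=dict)   # algorithm, hardware, ...
    interface: str = ""                                  # annotated WSDL reference
    metadata: dict = field(default_factory=dict)         # author, access policies, ...

def matches(service: ServiceDescription, query_class: str) -> bool:
    # In a GAMS-like tree each child specializes its parent, so a service
    # classified under "H2.a.1" also satisfies a broader query for "H2.a".
    return (service.functional == query_class
            or service.functional.startswith(query_class + "."))

quadrature = ServiceDescription(
    functional="H2.a.1",
    implementation={"algorithm": "adaptive quadrature"})  # hypothetical entry
assert matches(quadrature, "H2.a")
assert not matches(quadrature, "H3")
```

A broker would apply this kind of subsumption test (in MONET's case, via OWL reasoning) when pairing a problem description with candidate services.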
[July 30, 2003] "IBM, CA Square Up to HP on Management." By Keith Rodgers. From LooselyCoupled.com (July 30, 2003). "IBM and Computer Associates teamed up at a key web services standards meeting yesterday [2003-07-29] in a surprise rebuff to a submission by Hewlett-Packard. At stake is the future development path of IT management software. Although the initial purpose of the rival proposals is merely to establish standards that govern web services manageability, the ultimate aim is to roll out the same standards as a foundation for the entire IT management spectrum -- not just management of web services, but management of other IT assets through web services. The established systems management giants are also hoping that, by shifting the emphasis back to the wider management framework, they can recapture the market advantage they've currently ceded in web services management to smaller specialist vendors. HP had grabbed headlines on July 21 [2003], when it formally announced it would submit its Web Services Management Framework to the Web Services Distributed Management (WSDM) committee of e-business standards body OASIS. The HP submission had the backing of eight other developers on the committee, including Sun, Oracle, BEA, Iona, Tibco and webMethods... rivals IBM and CA [have] joined forces with web services management specialist Talking Blocks to present their own vision, dubbed WS-Manageability, to the OASIS meeting... The WS-Manageability proposal stems from work that IBM, Computer Associates and Talking Blocks have done for another standards group, the W3C Web Services Architecture Working Group. A primary concern is to make full use of other emerging 'WS-*' web services standards, such as WS-Policy, that form part of the generic web services platform. 
Any proposed management standard should not stray from its core management mission, they warn, either into defining elements of the generic infrastructure, or into specifying aspects of management applications... The irony of this particular standards battle is that none of the big three systems management vendors -- IBM, HP and CA -- can claim to be leading the field in web services management. It is specialists such as Actional, Amberpoint, Talking Blocks and Infravio who have been making all the running in terms of delivering production software into user deployments, with each of them able to point to several reference customers. [However,] by emphasizing management through web services, the established systems management giants can gain recognition in the web services arena while broadening the issue out to play to their own strengths..." See also: (1) the presentations "Web Services Manageability" and "Management and Web Service Management," referenced in the following bibliographic entry; (2) the HP framework proposal, "HP Contributes Web Services Management Framework Specification to OASIS TC."
[July 29, 2003] "Web Services Distributed Management (WSDM) TC Submission: Web Services Manageability." By Heather Kreger (IBM), Igor Sedukhin (Computer Associates), and Mark Potts (Talking Blocks). PDF from source .PPT. July 24, 2003. 10 pages. Posted to the OASIS WSDM TC list by Ellen Stokes (IBM). Prose adapted from the slides: "#2: As to background, the design started with active involvement of the authors on W3C Web Services Architecture Working Group Management Task Force. To avoid fragmentation of Management Standards the team co-authored a specification to facilitate development of consensus among management vendors. They considered concepts from existing work on Web services as a management platform. The specification is the agreed minimum sufficient set of information that makes Web service endpoints manageable. #3: As to main considerations, the design does not imply the implementation of the manageability or the manager. It captures manageability with XML and WSDL and is consistent with existing standards based Web Service infrastructures. It is consistent with existing management models and standards, uses existing infrastructure mechanisms, has an easily extensible model, is easily implementable, reducing impact on Web service development. #4: As to the intention of the submission, the specification will define the model for the manageability of Web services and define access to this manageability. The access and model can be rendered in (a) WSDL 1.1; (b) GWSDL; (c) CIM Models; (d) WSDL 1.2. The specification identifies requirements for more general Web services standards, but does not define them. The team is submitting the Common Base Event specification (XML event format for management events)... #8: As to an extensible manageability, the topics are extensible (new topics can be created; any topic can have aspects added to them [i.e., define new properties, operations, and events]; and new aspects can be created). 
Manageability information is extensible. #9: As to infrastructure, the specification supports building WS-I basic profile compliant manageable Web services; it leverages non-management specific infrastructure available to us from: (a) WS* e.g., WS-addressing, WS-policy; (b) OGSI e.g., serviceData, notifications; (c) CMM e.g., lifecycle, relationships, metadata. It does not imply a specific management framework. The authorship team intends to submit this work to the OASIS WSDM TC..." See also the presentation "Management and Web Service Management" as posted 2003-07-29; this presentation "offers work to OASIS completed by IBM with contribution from CA and Talking Blocks... It details a frame of reference for Management Applications, Managers, Manageability using Web services and Manageability of Web services. The work also identifies the management concerns pertinent to each and the dependencies in terms of common description that are required..." [source .PPT, cache]
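The topic/aspect extensibility described in slide #8 amounts to a model where any topic can acquire new aspects, each contributing properties, operations, and events. A minimal illustrative sketch follows; the class and aspect names are assumptions, not the submission's own:

```python
# Toy model of the extensible topic/aspect scheme from the WS-Manageability
# slides. Names ("Identification", "vendorX", etc.) are hypothetical.
class Topic:
    def __init__(self, name):
        self.name = name
        self.aspects = {}  # aspect name -> its properties/operations/events

    def add_aspect(self, aspect, properties=(), operations=(), events=()):
        # Any topic can have aspects added to it, each defining new
        # properties, operations, and events.
        self.aspects[aspect] = {"properties": list(properties),
                                "operations": list(operations),
                                "events": list(events)}

identity = Topic("Identification")
identity.add_aspect("base", properties=["serviceId"], operations=["getServiceId"])
# Extensibility: a later, vendor-specific aspect on the same topic.
identity.add_aspect("vendorX", events=["idChanged"])
assert set(identity.aspects) == {"base", "vendorX"}
```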
[July 29, 2003] "Introducing BPEL4WS 1.0. Building on WS-Transaction and WS-Coordination." By Dr. Jim Webber and Dr. Mark Little (Arjuna Technologies Limited). In Web Services Journal Volume 3, Issue 8 (August 2003), pages 28-33. With source code and 3 figures. "The value of BPEL4WS is that if a business is the sum of its processes, the orchestration and refinement of those processes is critical to an enterprise's continued viability in the marketplace. Those businesses whose processes are agile and flexible will be able to adapt rapidly to and exploit new market conditions. This article introduces the key features of Business Process Execution Language for Web Services, and shows how it builds on the features offered by WS-Coordination and WS-Transaction. The BPEL4WS model is built on a number of layers, each one building on the facilities of the previous. The fundamental components of the BPEL4WS architecture consist of the following: (1) A means of capturing enterprise interdependencies with partners and associated service links; (2) A message correlation layer that ties together messages and specific workflow instances; (3) State management features to maintain, update, and interrogate parts of process state as a workflow progresses; (4) Scopes where individual activities (workflow stages) are composed to form actual algorithmic workflows. We'll explore the features of this stack, starting with the static aspects of the application -- capturing the relationship between the Web services participating in workflows -- and on to the creation of workflows using the BPEL4WS activities... BPEL4WS is at the top of the WS-Transaction stack and utilizes WS-Transaction to ensure reliable execution of business processes over multiple workflows, which BPEL4WS logically divides into two distinct aspects. 
The first is a process description language with support for performing computation, synchronous and asynchronous operation invocations, control-flow patterns, structured error handling, and saga-based long-running business transactions. The second is an infrastructure layer that builds on WSDL to capture the relationships between enterprises and processes within a Web services-based environment. Taken together, these two aspects support the orchestration of Web services in a business process, where the infrastructure layer exposes Web services to the process layer, which then drives that Web services infrastructure as part of its workflow activities. The ultimate goal of business process languages like BPEL4WS is to abstract underlying Web services so that the business process language effectively becomes the Web services API. While such an abstract language may not be suitable for every possible Web services-based scenario it will certainly be useful for many, and if tool support evolves it will be able to deliver on its ambition to provide a business analyst-friendly interface to choreographing enterprise systems..." See also: "Introducing WS-Transaction Part II. Using Business Activities," in Web Services Journal Volume 3, Issue 7 (July 2003), pages 6-9. General references in "Business Process Execution Language for Web Services (BPEL4WS)." [alt URL]
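The saga-based long-running business transactions mentioned above pair each completed workflow step with a compensation that can undo its effects if a later step fails. BPEL4WS expresses this declaratively in XML; as a rough procedural sketch (with hypothetical step names):

```python
# Minimal saga sketch: each completed step registers a compensation that is
# run, in reverse order, if a later step fails. Step names are hypothetical.
def run_saga(steps):
    """steps: list of (action, compensation) pairs.
    Returns True on success; on failure, compensates completed steps
    in reverse order and returns False."""
    completed = []
    for action, compensate in steps:
        try:
            action()
            completed.append(compensate)
        except Exception:
            for comp in reversed(completed):
                comp()
            return False
    return True

log = []

def reserve():
    log.append("reserve")

def cancel_reserve():
    log.append("cancel-reserve")

def charge():
    raise RuntimeError("charge failed")  # simulated mid-process failure

ok = run_saga([(reserve, cancel_reserve), (charge, lambda: None)])
assert ok is False
assert log == ["reserve", "cancel-reserve"]
```

The key design point is that compensation is application-defined undo, not rollback: a long-running process cannot hold locks for days, so it records how to reverse each step instead.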
[July 29, 2003] "Double Standards." By Sean Rhody (WSJ Editor-in-Chief). In Web Services Journal Volume 3, Issue 8 (August 2003), page 3. "In June I attended the JavaOne conference... and was reminded, once again, that the lack of a single standards body is a serious roadblock to implementation of Web services... I was further reminded of the mess we're in by some of the Web services presentations. While obviously biased toward Java (it was JavaOne, after all), what really got me was the way everyone needed to explain how this specification came from HP, that standard was developed by W3C, and OASIS has a competing specification to some other specification. It's clear that there are too many bodies producing standards, not to mention too many standards themselves. The Java model works somewhat better, with a single standards organization and the JSR process. Rather than develop competing specifications (SAML or WS-Security, for example), the JCP provides guidance from multiple companies toward the creation of a single standard that all Java vendors will comply with. No one has to decide whether to use BPML or BPEL, or the Java equivalent... I would propose that WS-I become the central Web services body, and that the members of the other bodies treat them as the Supreme Court of Web services. Once they rule on a specification, let there be no further disputes. Let's limit the number of specifications so the innovations can go toward making a smaller set of standards better. Of course the WS-I may not want to act as the final arbiter of Web services fate, and for various reasons, many vendors may not want the WS-I as currently constituted to be the sole determining body for Web services..." On WS-I, see "Web Services Interoperability Organization (WS-I)." [alt URL]
[July 29, 2003] "Microsoft Brings Secure Web Services Closer." By John McIntosh. In IT-Director.com (July 28, 2003). "As the noise of secure communications and identity management continues unabated and vendors clamour at the door, Microsoft's recent announcement of Web Services Enhancements 2.0 might have been missed. This is a significant announcement, not because it comes from Microsoft but because of what it potentially means to the Web Services market and the security market... WSE version 2.0 offers new security features that should simplify development and deployment of secure Web Service applications that span company boundaries and trust domains, connecting and exchanging information with customer and partner systems. According to the Company, WSE 2.0 means that developers can apply security policies to Web services with minimal lines of code and support interoperability across heterogeneous systems. WSE 2.0 does this by building on the security, routing and attachment capabilities of version 1.0 and adds a foundation for building applications based on Web services specifications published by Microsoft and its industry partners including WS-Security, WS-Policy, WS-SecurityPolicy, WS-Trust, WS-SecureConversation and WS-Addressing... There is within the .NET Framework and WSE 2.0 the ability to do many interesting things in terms of secure application development to support integration and federation of security through the value chain. WSE is important because it introduces for the first time the ability to test the theories behind emerging WS-Security standards. Essentially, is it possible to build a system that can securely expose internal systems to partners as Web services, leveraging existing technology investments to generate future revenue opportunities? Without the following new [WSE] capabilities, the answer to that question would probably be no..." 
See details in the news story "Security Featured in Microsoft Web Services Enhancements Version 2.0 Technology Preview."
[July 29, 2003] "Using the WS-I Test Tools." By Yasser Shohoud (Microsoft). July 24, 2003. 18 minutes. Tutorial prepared as an MSDN TV Episode; the presentation is played using the Microsoft Windows Media Player. Summary: "The Web Services Interoperability organization (WS-I) has published a draft version of the Basic Profile Test Tools. Yasser Shohoud shows how to use these tools to test your Web service for WS-I Basic Profile conformance." Details: A Beta Release of the WS-I Testing Tools was issued in April 2003 and is available in C# and Java. The WS-I testing tools are designed to help developers determine whether their Web services are conformant with Profile Guidelines. The WS-I Testing Working Group also published draft [June 26, 2003] versions of the WS-I Monitor Tool Functional Specification and WS-I Analyzer Tool Functional Specification. The WS-I Monitor Tool specification edited by Scott Seely (Microsoft) documents the message capture and logging tool. "This tool captures messages and stores them for later analysis. The tool itself will have to capture messages traveling over different protocols and transports. The first version of this tool will focus on being able to accurately capture HTTP based SOAP messages. Also, while many interception techniques are available, this implementation uses a man in the middle approach to intercept and record messages... The Monitor has two distinct sets of functionality: (1) It is responsible for sending messages on to some other endpoint that is capable of accepting the traffic while preserving the integrity of communication between the two endpoints. (2) It is responsible for recording the messages that flow through it to a log file. One can think of these two pieces as an interceptor and a logger. For this first version of the Monitor, the interceptor and logger functionality will exist in the same application. 
The working group recognizes that we may later desire to separate the interceptor and the logger into two, standalone entities. This design discusses how one would go about structuring an application today that should be able to be broken into separate pieces in future versions..." The WS-I Analyzer Tool specification edited by Peter Brittenham (IBM) documents "the design for Version 1.0 of the analyzer tool, which will be used for conformance testing of WS-I profiles. The purpose of the Analyzer tool is to validate the messages that were sent to and from a Web service. The analyzer is also responsible for verifying the description of the Web service. This includes the WSDL document that describes the Web service, and the XML schema files that describe the data types used in the WSDL service definition. The analyzer tool has a defined set of input files, all of which are used to verify conformance to a profile definition: Analyzer configuration file; Test assertion definition file; Message log file; WSDL for the Web service. The analyzer configuration file and test assertion definition file are described in greater detail in the subsequent sections of the document; the message log file contains the list of messages that were captured by the monitor tool..." See also the WS-I Basic Profile Version 1.0 (Working Group Approval Draft 2003/05/20) and the WS-I Testing Working Group Charter. General references in "Web Services Interoperability Organization (WS-I)."
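The interceptor-plus-logger split described in the Monitor specification can be sketched as two composable pieces. This is an illustrative Python sketch, not the WS-I tool itself (which proxies live HTTP SOAP traffic); the class names and in-memory log are assumptions:

```python
# Sketch of the Monitor's two roles as separable pieces, anticipating the
# possible future split into standalone interceptor and logger components.
import datetime

class Logger:
    """Records every message that flows through the monitor."""
    def __init__(self):
        self.entries = []
    def record(self, direction, message):
        self.entries.append({"timestamp": datetime.datetime.utcnow().isoformat(),
                             "direction": direction, "message": message})

class Interceptor:
    """Man-in-the-middle forwarder: passes each message to the real endpoint
    unchanged (preserving the integrity of the exchange) while the logger
    records request and response."""
    def __init__(self, endpoint, logger):
        self.endpoint, self.logger = endpoint, logger
    def __call__(self, request):
        self.logger.record("request", request)
        response = self.endpoint(request)  # forwarded untouched
        self.logger.record("response", response)
        return response

# Hypothetical endpoint standing in for a real SOAP service.
echo_service = lambda req: "<soap:Envelope>echo</soap:Envelope>"
monitor = Interceptor(echo_service, Logger())
reply = monitor("<soap:Envelope>ping</soap:Envelope>")
assert reply == "<soap:Envelope>echo</soap:Envelope>"
```

The Analyzer would then consume `monitor.logger.entries` offline, checking each captured message against the profile's test assertions.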
[July 29, 2003] "Understanding XML Digital Signature." By Rich Salz. In Microsoft MSDN Library (July 2003). ['This article looks at the XML Digital Signature specification, explaining its processing model and some of its capabilities. It provides a low-level understanding of how the WS-Security specification implements its message security feature. The author surveys the XML DSIG specification using the schema definition to describe the features that are available and the processing that is required to generate and verify an XML DSIG document. He starts with the basic signature element (ds:SignedInfo), looks at how it incorporates references to application content to protect that content, and looks at part of the ds:KeyInfo element to see how an application can verify a signature, and perhaps validate the signer's identity. These three aspects provide the most basic and low-level components of protecting the integrity of XML content.'] "Digital signatures are important because they provide end-to-end message integrity guarantees, and can also provide authentication information about the originator of a message. In order to be most effective, the signature must be part of the application data, so that it is generated at the time the message is created, and it can be verified at the time the message is ultimately consumed and processed. SSL/TLS also provides message integrity (as well as message privacy), but it only does this while the message is in transit. Once the message has been accepted by the server (or, more generally, the peer receiver), the SSL protection must be 'stripped off' so that the message can be processed. As a more subtle point, SSL only works between the communication endpoints. If I'm developing a new Web service and using a conventional HTTP server (such as IIS or Apache) as a gateway, or if I'm communicating with a large enterprise that has SSL accelerators, the message integrity is only good up until the SSL connection is terminated. 
As an analogy, consider a conventional letter. If I'm sending a check to my phone company, I sign the check -- the message -- and put it in an envelope to get privacy and delivery. Upon receipt of the mail, the phone company removes the envelope, throws it away, and then processes the check. I could make my message be part of the envelope, such as by gluing the payment to a postcard and mailing that, but that would be foolish. An XML signature would define a series of XML elements that could be embedded in, or otherwise affiliated with, any XML document. It would allow the receiver to verify that the message has not been modified from what the sender intended. The XML-Signature Syntax and Processing specification (abbreviated in this article as XML DSIG) was a joint effort of the W3C and the IETF. It's been an official W3C Recommendation since February 2002. Many implementations are available..." See general references in "XML Digital Signature (Signed XML - IETF/W3C)."
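The element layout the article walks through -- ds:SignedInfo holding a Reference to the protected content, plus ds:KeyInfo so the receiver can locate the verification key -- can be sketched with ElementTree. This builds only the skeleton and a content digest; a real XML DSIG implementation must also canonicalize SignedInfo and compute an actual signature value per the W3C Recommendation:

```python
# Structural sketch of a ds:Signature element. Only the layout and the
# Reference digest are real here; SignatureValue is deliberately left blank.
import base64, hashlib
import xml.etree.ElementTree as ET

DS = "http://www.w3.org/2000/09/xmldsig#"
ET.register_namespace("ds", DS)

def digest(content: bytes) -> str:
    # XML DSIG 1.0 commonly used SHA-1 digests, base64-encoded.
    return base64.b64encode(hashlib.sha1(content).digest()).decode()

sig = ET.Element(f"{{{DS}}}Signature")
signed_info = ET.SubElement(sig, f"{{{DS}}}SignedInfo")
ref = ET.SubElement(signed_info, f"{{{DS}}}Reference", URI="#payload")
ET.SubElement(ref, f"{{{DS}}}DigestValue").text = digest(b"<Payload Id='payload'/>")
ET.SubElement(sig, f"{{{DS}}}SignatureValue").text = "..."  # not computed in this sketch
key_info = ET.SubElement(sig, f"{{{DS}}}KeyInfo")           # lets the verifier find the key
ET.SubElement(key_info, f"{{{DS}}}KeyName").text = "example-signer"

assert sig.find(f"{{{DS}}}SignedInfo/{{{DS}}}Reference").get("URI") == "#payload"
```

Because the Reference carries a digest of the application content, the signature travels with the XML document itself -- unlike SSL/TLS protection, it survives past the communication endpoint.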
[July 29, 2003] "Sun's Proposed New Web Services Standards." By Charles Babcock. In InformationWeek (July 29, 2003). "Sun is trying to initiate a new round of Web services with a proposal for a set of standards that work on top of XML and Web Services Description Language (WSDL). But Sun and its partners have yet to say to which standards body they will submit their proposed specification... Arjuna Technologies, Fujitsu Software, Iona Technologies, Oracle, and Sun have teamed up to propose that individual Web services be called up and combined to form 'composite applications.' Through Sun's proposed set of standards, such a composite application would be given a shared runtime environment that could determine the specific systems contributing to the service. It also would be given a coordination agent that made sure applications ran in the correct sequence and a transaction manager that supervised transactions across dissimilar applications. The proposed set is called Web Services-Composite Application Framework, or WS-CAF. Today's leading Web services handle such coordination issues 'in a very ad hoc manner, if at all,' says Mark Little, chief team architect for Arjuna. The proposed standards will take the guesswork and ambiguities out of how to coordinate services from scattered systems into one composite application, or new Web service, says Ed Julson, Sun's group manager of Web services standards and technology. The alternative, Julson says, is to go forward with competing methods of resolving service issues, as is the case with two of today's Web-services security standards: Web Services-Security proposed by IBM, Microsoft and VeriSign, and Web Services-Reliability proposed by Fujitsu, Hitachi, NEC, Oracle, Sonic Software, and Sun. Among the standards bodies that might receive the Sun proposal are the Oasis Open consortium of vendors setting XML standards; the World Wide Web Consortium; and the Internet Engineering Task Force. 
'From a pure technology standpoint, the group isn't breaking new ground,' says Stephen O'Grady of Red Monk, a market research group. Sun and partners are making use of existing technologies, sometimes already in use in deployed Web services, he says. But 'it's a novel and unique approach for creating composite applications composed of distinct Web services.' The most significant part of the proposal may prove to be the way it defines a way to manage transactions in the Web-services context, O'Grady says..." See: (1) the news story "Web Services Composite Application Framework (WS-CAF) for Transaction Coordination"; (2) the Arjuna announcement "Arjuna Enables Reliable Web Services-Based Business Applications with Arjuna XTS. Technology to Address the Reliable Coordination Issues Preventing the Early Adoption of Serious E-Business Solutions Through Web Services."
[July 29, 2003] "XHTML-Print." Edited by Jim Bigelow (Hewlett-Packard). W3C Last Call Working Draft, 29-July-2003. Produced by members of the W3C HTML Working Group as part of the W3C HTML Activity. The Last Call review period ends on 7-September-2003. Latest version URL: http://www.w3.org/TR/xhtml-print. Also in PDF. "XHTML-Print is a member of the family of XHTML Languages defined by the W3C Recommendation Modularization of XHTML. It is designed to be appropriate for printing from mobile devices to low-cost printers that might not have a full-page buffer and that generally print from top-to-bottom and left-to-right with the paper in a portrait orientation. XHTML-Print is also targeted at printing in environments where it is not feasible or desirable to install a printer-specific driver and where some variability in the formatting of the output is acceptable... XHTML-Print is not appropriate when strict layout consistency and repeatability across printers are needed. The design objective of XHTML-Print is to provide a relatively simple, broadly supportable page description format where content preservation and reproduction are the goal, i.e., 'Content is King.' Traditional printer page description formats such as PostScript or PCL are more suitable when strict layout control is needed. XHTML-Print does not utilize bi-directional communications with the printer either for capabilities or status inquiries. This document creates a set of conformance criteria for XHTML-Print. It references style sheet constructs drawn from CSS2 and proposed for CSS3 Paged Media as defined in the CSS Print Profile to provide a strong basis for rich printing results without a detailed understanding of each individual printer's characteristics. It also defines an extension set that provides stronger layout control for the printing of mixed text and images, tables and image collections. 
The document type definition for XHTML-Print is implemented based on the XHTML modules defined in Modularization of XHTML." Note: this specification is based "in large part on a work by the same name XHTML-Print from the Printer Working Group (PWG), a program of the IEEE Industry Standard and Technology Organization." See general references in "XHTML and 'XML-Based' HTML Modules."
[July 29, 2003] "Microsoft Plays Hiring Hardball." By Darryl K. Taft. In eWEEK Volume 20, Number 30 (July 28, 2003), pages 1, 16. "Like baseball's New York Yankees, Microsoft Corp. has been paying top dollar for top talent in an effort to dominate the new playing fields of XML and Web services. During the past 18 months, the Redmond, Wash., company has gobbled up some of the best-known XML, Web services and application development brains around. Most recently it hired Cape Clear Software Inc. Chief Technology Officer Jorgen Thelin, who last week announced he would be leaving the Web services infrastructure company to join Microsoft. The effort, which runs counter to Microsoft's traditional strategy of scooping up complementary companies, has left some developers crying foul and claiming the company is only looking to improve its standing among standards groups... Not all developers are happy about the issue, saying that once again the company is using its might irresponsibly. 'Microsoft is trying to buy the standard; you own all the soldiers, and then you win,' said one industry insider, who requested anonymity. 'I have heard hallway grumblings about Microsoft trying to corner the market on Web services experts, especially from companies looking to hire people who can represent them on Web services committees,' said Iona Technologies plc. CTO Eric Newcomer, of Waltham, Mass..."
[July 29, 2003] "Sun, Oracle, Others Propose Transaction Specification. Middleware Vendors Publish Web Services Composite Applications Framework." By James Niccolai and Peter Sayer. In InfoWorld (July 28, 2003). "Sun Microsystems Inc., Oracle Corp., Fujitsu Software Corp., Iona Technologies PLC and Arjuna Technologies Ltd. published the Web Services Composite Applications Framework (WS-CAF), designed to solve problems that arise when groups of Web services are used in combination to complete a transaction or share information, the companies said in a joint statement Monday. They plan to submit WS-CAF to an industry standards group and will allow for its use on a royalty-free basis, moves intended to promote its broad use. The initiative appears to lack support so far from some key Web services players, however, including IBM Corp., BEA Systems Inc. and Microsoft Corp., which were not part of the announcement. Web services use standard technologies such as SOAP (Simple Object Access Protocol) and XML (Extensible Markup Language) to link disparate applications in a way that's supposed to be more affordable and flexible than using proprietary messaging systems. Some transactions, such as purchasing a book, are relatively simple to complete, in part because they can be finished instantaneously. Others, such as fulfilling a purchase order or completing an insurance claim, can take days or weeks to process and as such pose problems for Web services developers, the companies said. WS-CAF aims to solve those problems by defining a set of rules for coordinating transactions in such long-running business processes, the group said. WS-CAF actually is a collection of three specifications: Web Service Context (WS-CTX), Web Service Coordination Framework (WS-CF), and Web Service Transaction Management (WS-TXM)... The new specifications add to an already tangled clump of Web services coordination specifications with varying degrees of support from standards bodies.
One such, BPEL4WS (Business Process Execution Language for Web Services), has been adopted by OASIS, the Organization for the Advancement of Structured Information Standards, but it still leaves significant gaps that need to be filled, according to [Jeff] Mischkinsky..." See details in "Web Services Composite Application Framework (WS-CAF) for Transaction Coordination."
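The core idea of WS-CTX, the foundation layer of WS-CAF, is that cooperating services share a context propagated in SOAP headers. The following sketch is an assumption-laden illustration, loosely modeled on the published drafts; the namespace URI and element names are placeholders, not the specification's actual vocabulary:

```xml
<!-- Illustrative only: a context header identifying the long-running
     activity that several Web services jointly participate in. -->
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
               xmlns:ctx="http://example.org/ws-ctx">
  <soap:Header>
    <ctx:context>
      <ctx:context-identifier>urn:uuid:example-activity-id</ctx:context-identifier>
      <ctx:activity-service>http://example.org/coordinator</ctx:activity-service>
    </ctx:context>
  </soap:Header>
  <soap:Body>
    <!-- application payload; e.g., one booking step of a travel itinerary -->
  </soap:Body>
</soap:Envelope>
```

Each message in the travel-booking scenario described above would carry the same context identifier, letting the coordination service relate the flight, car, and hotel reservations to one composite transaction.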
[July 28, 2003] "Composite Capability/Preference Profiles (CC/PP): Structure and Vocabularies." Edited by Graham Klyne (Nine by Nine), Franklin Reynolds (Nokia Research Center), Chris Woodrow (Information Architects), Hidetaka Ohto (W3C / Panasonic), Johan Hjelm (Ericsson), Mark H. Butler (Hewlett-Packard), and Luu Tran (Sun Microsystems). W3C Working Draft 28-July-2003. Latest version URL: http://www.w3.org/TR/CCPP-struct-vocab/. "This specification was originated by the W3C CC/PP Working Group and has now been passed to the W3C Device Independence Working Group to carry forward towards a Recommendation." Summary: "This document describes CC/PP (Composite Capabilities/Preference Profiles) structure and vocabularies. A CC/PP profile is a description of device capabilities and user preferences that can be used to guide the adaptation of content presented to that device. The Resource Description Framework (RDF) is used to create profiles that describe user agent capabilities and preferences. The structure of a profile is discussed. Topics include: (1) structure of client capability and preference descriptions, and (2) use of RDF classes to distinguish different elements of a profile, so that a schema-aware RDF processor can handle CC/PP profiles embedded in other XML document types. CC/PP vocabulary is identifiers (URIs) used to refer to specific capabilities and preferences, and covers: [1] the types of values to which CC/PP attributes may refer, [2] an appendix describing how to introduce new vocabularies, [3] an appendix giving an example small client vocabulary covering print and display capabilities, and [4] an appendix providing a survey of existing work from which new vocabularies may be derived... It is anticipated that different applications will use different vocabularies; indeed this is needed if application-specific properties are to be represented within the CC/PP framework.
But for different applications to work together, some common vocabulary, or a method to convert between different vocabularies, is needed. (XML namespaces can ensure that different applications' names do not clash, but does not provide a common basis for exchanging information between different applications.) Any vocabulary that relates to the structure of a CC/PP profile must follow this specification. The appendices introduce a simple CC/PP attribute vocabulary that may be used to improve cross-application exchange of capability information, partly based on some earlier IETF work..."
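A small CC/PP profile in RDF/XML might look like the following sketch. The component structure follows the draft, but the `ex:` vocabulary namespace and the attribute names (`displayWidth`, `displayHeight`) are invented here for illustration:

```xml
<!-- Illustrative CC/PP profile: one hardware component with two
     display attributes. Vocabulary URIs are placeholders. -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ccpp="http://www.w3.org/2002/11/08-ccpp-schema#"
         xmlns:ex="http://example.org/device-vocab#">
  <rdf:Description rdf:about="http://example.org/profiles#MyPhone">
    <ccpp:component>
      <rdf:Description rdf:about="http://example.org/profiles#HardwarePlatform">
        <rdf:type rdf:resource="http://example.org/device-vocab#HardwarePlatform"/>
        <ex:displayWidth>320</ex:displayWidth>
        <ex:displayHeight>200</ex:displayHeight>
      </rdf:Description>
    </ccpp:component>
  </rdf:Description>
</rdf:RDF>
```

A server receiving this profile could, for example, select a 320-pixel-wide image variant when adapting content for the device.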
[July 28, 2003] "IBM To Update Portal With Document Management and Collaboration Technology. New WebSphere Software to Ship Next Month." By Elizabeth Montalbano. In CRN (July 28, 2003). "IBM has unveiled an upcoming version of its WebSphere portal that the company says has new capabilities for document management and collaboration. WebSphere Portal Version 5 will feature new functionality that allows solution providers to aggregate content from various back-end resources -- such as human resources, CRM or supply chain applications -- to a single portal, according to IBM. The new software also will include out-of-the-box document management tools that allow solution providers to integrate and manage information, such as a company's financial reports or sales documents, within the portal, IBM executives said. At the same time, the document management tools also will allow end users to view, create, convert and edit basic documents, spreadsheets and other files while working within the portal. In addition, WebSphere Portal Version 5 will include a new Collaboration Center, which leverages IBM Lotus software to allow portal users to interact with various collaborative applications, such as instant messaging, team workplaces and virtual meetings. Other new features in Version 5 include performance enhancements and a simplified installation process..."
[July 27, 2003] "Sun Proposes New Web Services Specifications." By Martin LaMonica. In CNET News.com (July 27, 2003). "Sun Microsystems and a handful of partners have announced they are seeking the approval of Web services specifications for coordinating electronic transactions. Sun, Oracle, Iona Technologies, Fujitsu Software and Arjuna Technologies will submit the specifications, the Web Services Composite Applications Framework (WS-CAF), to either the World Wide Web Consortium or the Organization for the Advancement of Structured Information Standards (OASIS) for development as standards in the next several weeks... WS-CAF, which comprises three individual specifications, proposes a mechanism for coordinating transactions across many machines in multistep business processes. The authors of the specifications hope simplified interactions between Web services will allow companies to assemble business applications with Web services more quickly. The WS-CAF specifications would create a prearranged way to configure systems so that Web services applications from different providers could share important transactional information. For example, administration tools based on WS-CAF would ensure that a consumer making vacation reservations online could coordinate bookings at three different Web sites for travel, car and hotel reservations at the same time. Current business systems have methods for sharing the status of ongoing transactions across different machines. The WS-CAF set of specifications seeks to improve interoperability by standardizing that capability among different providers, said Eric Newcomer, chief technology officer at Iona. The Sun-led group of companies intends to garner input from other IT providers through the standardization process, said Ed Jolson, Sun's group manager for Web services standards..." See details in "Web Services Composite Application Framework (WS-CAF) for Transaction Coordination."
[July 25, 2003] "The Future of XML Documents and Relational Databases. As New Species of XML Documents Are Emerging, Vendors Are Unveiling Increased RDBMS Support for XML." By Jon Udell. In InfoWorld (July 25, 2003). "Having absorbed objects, the RDBMS vendors are now working hard to absorb XML documents. Don't expect a simple rerun of the last movie, though. We've always known that most of the information that runs our businesses resides in the documents we create and exchange, and those documents have rarely been kept in our enterprise databases. Now that XML can represent both the documents that we see and touch -- such as purchase orders -- and the messages that exchange those documents on networks of Web services, it's more critical than ever that our databases can store and manage XML documents. A real summer blockbuster is in the making. No one knows exactly how it will turn out, but we can analyze the story so far and make some educated guesses. The first step in the long journey of SQL/XML hybridization was to publish relational data as XML. BEA Chief Architect Adam Bosworth, who worked on the idea's SQL Server implementation, calls it 'the consensual-hallucination approach -- we all agree to pretend there is a document.' XML publishing was the logical place to start because it's easy to represent a SQL result set in XML and because so many dynamic Web pages are fed by SQL queries. The traditional approach required programmatic access to the result set and programmatic construction of the Web page. The new approach materializes that dynamic Web page in a fully declarative way, using a SQL-to-XML query to produce an XML representation of the data and XSLT to massage the XML into the HTML delivered to the browser. Originally these virtual documents were created using proprietary SQL extensions such as SQL Server's 'FOR XML' clause. There's now an emerging ISO/ANSI standard called SQL/XML, which defines a common approach. 
SQL/XML is supported today by Oracle and DB2. It defines XML-oriented operators that work with the native XML data types available in these products. SQL Server does not yet support an XML data type or the SQL/XML extensions, but Tom Rizzo, SQL Server group product manager at Redmond, Wash.-based Microsoft, says that Yukon, due in 2004, will... Most of the information in an enterprise lives in documents kept in file systems, not in relational databases. There have always been reasons to move those documents into databases -- centralized administration, full-text search -- but in the absence of a way to relate the data in the documents to the data in the database, those reasons weren't compelling. XML cinches the argument. As business documents morph from existing formats to XML -- admittedly a long, slow process that has only just begun -- it becomes possible to correlate the two flavors of data..." See general references in "XML and Databases."
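The contrast between the proprietary and standard approaches described above can be sketched in two short queries. The table and column names here are hypothetical; the first query uses SQL Server's `FOR XML` clause, the second the ISO SQL/XML publishing operators supported by Oracle and DB2:

```sql
-- Proprietary approach: SQL Server's FOR XML clause.
SELECT CustomerID, City
FROM Customers
FOR XML AUTO;

-- Emerging ISO/ANSI SQL/XML approach: explicit XML construction
-- operators (table and column names are illustrative).
SELECT XMLELEMENT(NAME "customer",
         XMLATTRIBUTES(c.CustomerID AS "id"),
         XMLELEMENT(NAME "city", c.City))
FROM Customers c;
-- Each row yields a fragment along the lines of:
--   <customer id="1"><city>Boston</city></customer>
```

The result of the SQL/XML query is the "consensual hallucination" Bosworth describes: a virtual XML document materialized from relational rows, ready to be massaged into HTML with XSLT.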
[July 25, 2003] "Interwoven Unwraps Version 6. New Versions of Platform and Server Offer Content Control for Business Users." By Cathleen Moore. In InfoWorld (July 25, 2003). "Content management vendor Interwoven has launched new versions of its platform and content server with a focus on improving business usability. Interwoven 6 introduces a new customization framework, dubbed the ContentServices UI Toolkit, which is designed to enable customized user interfaces. The release also includes a Web services-based integration toolkit via the Interwoven ContentServices SDK 2.0. The toolkit allows quick and flexible integration with business applications, according to Interwoven officials in Sunnyvale, Calif. Addressing specific business needs, Interwoven 6 also includes offerings aimed at business functions such as sales, services, marketing, and IT. Based on Web services standards, the set of solutions includes a Digital Brand Management offering powered by MediaBin; Sales Excellence portals via integration with portals from IBM, BEA, SAP, Plumtree, Sun, and Oracle; and a Global Web Content Management offering for supporting hundreds of Web properties in multiple locations and languages. Meanwhile, Version 6.0 of the TeamSite content server aims to give business users more control over content via the user-friendly ContentCenter interface... The ContentCenter framework includes ContentCenter Standard, an interface designed for business users featuring customizable, portal style content management components that allow users to easily add or modify content. The ContentCenter Professional is a power user interface containing advanced features such as branching, workflow, virtualization, versioning, security, and tool integration..." See details in the announcement: "Interwoven Releases TeamSite 6.0 - The New Benchmark in the Content Management Industry.
New Release Empowers Enterprises to Boost Workforce Productivity and Enable Faster, Smarter Decision-Making with an All New, Easy, and Customizable User Experience."
[July 25, 2003] "Standards Stupidities and Tech's Future." By Charles Cooper. In CNet News.com (July 25, 2003). "The technology business may employ more brainy people on a per capita basis than any industry in the world. But when it comes to agreeing on technical standards, the behavior of some of these very bright people more resembles the plot in Dumb and Dumber. An outsider looking in would easily assume that, with all this intellectual firepower, these folks would understand their best interests and would be able to decide how to proceed without a major struggle... when it comes to figuring out the best way to get from A to Z, bruising (and pointless) Silicon Valley clashes over standards are the norm rather than the exception. Hang around this business long enough and you realize that, while the actors change, the script lines remain much the same. And each time one of these donnybrooks erupts, the protagonists say they are only working on behalf of what's good for customers... Back on Planet Earth, however, it rates as a monumental waste of time and energy. The latest bit of grandstanding involves the move to set standards for Web services, for which -- surprise, surprise -- Microsoft and Sun Microsystems are happily bickering and backstabbing each other. This is only the latest debacle in their decades-long rivalry, but it comes at a particularly inopportune time for IT customers who are debating where Web services should fit within their operations. Although the Web services world has been inching toward resolving a lot of its issues, this has been a slow-motion story. If Web services is ever going to live up to its hype, the technology industry needs to make sure that its programming standards guarantee the reliability of XML message transmissions. The two sides agree with that notion. From there, however, it's pistols at 20 paces... Microsoft is pushing something that it calls WS-ReliableMessaging, which was co-developed with IBM, BEA Systems and Tibco.
Meanwhile, a competing specification called Web Services Reliable Messaging is being backed by Sun, Oracle, Fujitsu, Hitachi, NEC and Sonic Software..."
[July 24, 2003] "Eve Maler of Sun Microsystems Discusses the Future of Web Services Security." By Janice J. Heiss. From WebServices.org (July 24, 2003). In this article Janice J. Heiss speaks to Sun Microsystems' Eve Maler, vice-chair of the WS-I Basic Security Profile Working Group and currently coordinating editor of the SAML (Security Assertion Markup Language) Technical Committee, seeking an update on the development of Web services security. Maler: [As to when viable Web services security standards will be established] "It's best not to think in black and white terms. There are specifications appearing on the scene that attempt to secure different facets of Web services. As each specification becomes standardized and viable over time, the operation of Web services will be better protected... This may not be fully standardized until late in 2003 and it's important for this work to reflect a clear understanding of the problem space. And after that, there's going to be a lot more work on trust management. So improvements will occur as long as these processes take place in venues that allow the right experts to look at them... Traditional technologies won't always suffice [for Web services security]. First, the trust issues still haven't been fully solved in traditional computing; they haven't scaled to meet our expectations, and Web services present an opportunity to get this right. With Web services, end to end isn't the same as point to point. Messages are going between a requester and a responding service, but they may also pass through several intermediaries, and thus, several possible hubs. Therefore, a technology that focuses solely on securing the transport channel may not be sufficient. You need security technologies that persist past that transient part; without the XML security standards, they don't take advantage of the opportunities inherent in XML's granularity... 
Sun Microsystems is very concerned with the open specification of standards and the specification of systems that don't rely on a single hub to do all the jobs. We have heard some intimations that a system like Passport will ultimately be a federated system so that you won't always have to go through one Web site to start your journey online. That would be a good thing. The Liberty Alliance takes exactly this federated approach to managing and using your electronic identity. What's best is for all of the relevant security infrastructure for Web services to be standardized in an open venue to be seen by all the right eyes, and especially for the IPR (intellectual property rights) terms to be open enough so that implementations can be widely accepted. This is Sun's goal in participating in Web services security standardization, and it's the key for ensuring that no one company can create lock-in..." See: (1) "Security Assertion Markup Language (SAML)"; (2) security standards reference list at "Security, Privacy, and Personalization."
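The federated identity approach Maler describes rests on SAML assertions: signed statements that one party (an identity provider) makes about a subject, which other sites can then trust. A minimal SAML 1.1 authentication assertion might look like this sketch; the identifiers, issuer URL, and subject name are invented for illustration:

```xml
<!-- Illustrative SAML 1.1 authentication assertion: the issuer vouches
     that the subject authenticated by password at a given instant. -->
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:1.0:assertion"
    AssertionID="example-assertion-id"
    Issuer="https://idp.example.org"
    IssueInstant="2003-07-24T12:00:00Z"
    MajorVersion="1" MinorVersion="1">
  <saml:AuthenticationStatement
      AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password"
      AuthenticationInstant="2003-07-24T11:59:30Z">
    <saml:Subject>
      <saml:NameIdentifier>alice@example.org</saml:NameIdentifier>
    </saml:Subject>
  </saml:AuthenticationStatement>
</saml:Assertion>
```

Because the assertion travels with the message rather than being tied to a transport channel, it survives the intermediary hops Maler highlights as the weak point of point-to-point security.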
[July 23, 2003] "Why Choose RSS 1.0?" By Tony Hammond. In XML.com News (July 23, 2003). "RSS, a set of lightweight XML syndication technologies primarily used for relaying news headlines, has been adapted to a wide range of uses from sending out web site descriptions to disseminating blogs. This article looks at a new application area for RSS: syndicating tables of contents for serials publications. Serials newsfeeds -- especially scientific newsfeeds -- differ from regular newsfeeds in that a key requirement for the reader, or more generally for the consumer, of the feed is to be able to cite, or produce a citation for, a given article within the serial. This need for additional information exists across many types of publishing activities. A user may choose not to follow a link directly to some content for whatever good reason, such as preferring to access a locally stored version of the resource. This requires that rich metadata describing the article be distributed along with the article title and link to the article. The need to include metadata within the feed raises the following questions: (1) Which version of RSS best supports the delivery of metadata to users? (2) Which metadata term sets are best employed for supply to users? This article examines both of these issues and then considers how such extensions can actually be used in practice. The primary purpose of syndicating tables of contents for serials is to provide a notification service to inform feed subscribers that a new issue has been published. There are, however, secondary uses for such a syndication service -- that is, to provide access to archival issues resident within a feed repository. The hierarchical storage arrangements for archival issues suggest that one possible resource discovery mechanism might be to have feeds of feeds whereby a feed for an archival volume of issues would syndicate the access URIs for the feeds of the respective issues contained within that volume. 
This arrangement could even be propagated up the hierarchy whereby a subscription year for a given serial might contain the feed URIs for the volumes within that year, or that a serial feed might contain the feed URIs for the subscription years for that serial. Another way of using a feed of feeds would be for a publisher to publish an RSS feed of all sites that it wanted to syndicate. As an example of such a feed Nature Publishing Group now has a feed which delivers the access URIs for all its current production feeds..." Related news: "RSS 2.0 Specification Published by Berkman Center Under Creative Commons License." See general references in "RDF/Rich Site Summary (RSS)."
[July 23, 2003] "Extending RSS." By Danny Ayers. In XML.com News (July 23, 2003). "The boom of weblogs has boosted interest in techniques for syndicating news-like material. In response a family of applications, known as aggregators or newsreaders, has been developed. Aggregators or newsreaders consume and display metadata feeds derived from the content. Currently there are two major formats for these data feeds: RSS 1.0 and RSS 2.0... The names are misleading -- the specifications differ not only in version number but also in philosophy and implementation. If you want to syndicate simple news items there is little difference between the formats in terms of capability or implementation requirement. However, if you want to extend into distributing more sophisticated or diverse forms of material, then the differences become more apparent. The decision over which RSS version to favor really boils down to a single trade-off: syntactic complexity versus descriptive power. RSS 2.0 is extremely easy for humans to read and generate manually. RSS 1.0 isn't quite so easy, as it uses RDF. It is, however, interoperable with other RDF languages and is eminently readable and processible by machines. This article shows how the RDF foundation of RSS 1.0 helps when you want to extend RSS 1.0 for uses outside of strict news item syndication, and how existing RDF vocabularies can be incorporated into RSS 1.0. It concludes by providing a way to reuse these developments in RSS 2.0 feeds while keeping the formal definitions made with RDF... RSS 1.0's strong point is its use of the RDF model, which enables information to be represented in a consistent fashion. This model is backed by a formal specification which provides well-defined semantics. From this point of view, RSS 1.0 becomes just another vocabulary that uses the framework.
In contrast, outside of the relationships between the handful of syndication-specific terms defined in its specification, RSS 2.0 simply doesn't have a model. There's no consistent means of interpreting material from other namespaces that may appear in an RSS 2.0 document. It's a semantic void. But it doesn't have to be that way since it's relatively straightforward to map to the RDF framework and use that model. The scope of applications is often extended, and depending on how you look at it, it's either enhancement or feature creep. Either way, it usually means diminishing returns -- the greater distance from the core domain you get, the more additional work is required for every new piece of functionality. But if you look at the web as one big application, then we can get a lot more functionality with only a little more effort..." General references in "RDF/Rich Site Summary (RSS)."
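The extension mechanism both articles rely on is namespaced RDF vocabularies inside an RSS 1.0 feed. The sketch below shows a table-of-contents item carrying Dublin Core citation metadata, of the kind the serials feeds in the first article require; the journal URLs and article data are invented for illustration:

```xml
<!-- Illustrative RSS 1.0 feed: one channel and one item enriched
     with Dublin Core metadata for citation purposes. -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns="http://purl.org/rss/1.0/"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="http://example.org/journal/current.rdf">
    <title>Example Journal - Current Issue</title>
    <link>http://example.org/journal/</link>
    <description>Table of contents for the current issue</description>
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="http://example.org/journal/article1"/>
      </rdf:Seq>
    </items>
  </channel>
  <item rdf:about="http://example.org/journal/article1">
    <title>An Example Article</title>
    <link>http://example.org/journal/article1</link>
    <dc:creator>A. Author</dc:creator>
    <dc:date>2003-07-23</dc:date>
  </item>
</rdf:RDF>
```

Because `dc:creator` and `dc:date` are ordinary RDF properties, a schema-aware consumer can build a citation from the feed even when the reader never follows the link, which is exactly the serials use case Hammond describes.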
[July 23, 2003] "HP Buys Swedish VoiceXML Company. HP to Expand OpenCall Unit With Purchase of PipeBeach." By Gillian Law. In InfoWorld (July 23, 2003). "Hewlett-Packard plans to expand its OpenCall business unit with the purchase of Swedish VoiceXML company PipeBeach... PipeBeach of Stockholm makes interactive voice products for speech-based information portals, such as sports and traffic information systems and phone banking. Ed Verney, director of interactive media platforms in HP's OpenCall unit, said HP has been working in the VoiceXML area for some time, but that it would have taken a further two years to develop products of a similar quality to PipeBeach's technology. HP will take PipeBeach's principal products, including SpeechWeb and SpeechWeb Portal, and integrate them into its own OpenCall suite of telecommunication software, it said in a statement. SpeechWeb is a VoiceXML platform that lets applications and services, located on standard Web servers, be accessed over the phone. It can automatically understand speech in 30 languages, and can also turn text in those languages into speech, according to PipeBeach's Web site. SpeechWeb Portal makes it easier to give access to different information databases through one phone number, and to personalize services, according to PipeBeach. A provider just has to link the SpeechWeb Portal software to a database to produce a voice service, Verney said. 'It's removed a lot of the guess-work'..." See: (1) details in the announcement "HP Acquires PipeBeach to Strengthen Leadership in Growing VoiceXML Interactive Voice Market. Standards-based Products from PipeBeach Bolster HP OpenCall Portfolio and Enhance HP's Ability to Deliver Speech-based Solutions."; (2) general references in "VoiceXML Forum."
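VoiceXML applications of the kind SpeechWeb serves are ordinary documents hosted on standard Web servers. A minimal phone-banking dialog might look like this sketch; the form and grammar names are invented for illustration:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative VoiceXML 2.0 dialog: prompt for an account number,
     recognize it against a grammar, and confirm it back to the caller. -->
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form id="banking">
    <field name="account">
      <prompt>Please say your account number.</prompt>
      <grammar src="account.grxml" type="application/srgs+xml"/>
      <filled>
        <prompt>Looking up account <value expr="account"/>.</prompt>
      </filled>
    </field>
  </form>
</vxml>
```

The platform fetches the document over HTTP, renders the prompts as speech, and returns recognized input to the server, which is why Verney can describe linking a database to the portal as all a provider needs to do.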
[July 22, 2003] Scalable Vector Graphics (SVG) 1.2. Edited by Dean Jackson (W3C). W3C Working Draft 15-July-2003. Latest version URL: http://www.w3.org/TR/SVG12/. Third public working draft of the SVG 1.2 specification, produced by the W3C SVG Working Group as part of the W3C Graphics Activity within the Interaction Domain. "This document specifies version 1.2 of the Scalable Vector Graphics (SVG) Language, a modularized language for describing two-dimensional vector and mixed vector/raster graphics in XML. This draft of SVG 1.2 is a snapshot of a work-in-progress. The SVG Working Group believes that most of the features here are complete and stable enough for implementors to begin work and provide feedback. Some features already have multiple implementations. [The WD] lists the potential areas of new work in version 1.2 of SVG and is not a complete language description. In some cases, the descriptions in this document are incomplete and simply show the current thoughts of the SVG Working Group on the feature. This document should in no way be considered stable. This version does not include the implementations of SVG 1.2 in either DTD or XML Schema form. Those will be included in subsequent versions, once the content of the SVG 1.2 language stabilizes. This document references a draft RelaxNG schema for SVG 1.1..." See details in the news story "New Scalable Vector Graphics 1.2 Working Draft Positions SVG as an Application Platform."
[July 22, 2003] "Dynamic Scalable Vector Graphics (dSVG) 1.1 Specification." Edited by Gordon G. Bowman. July 09, 2003. Copyright (c) 2003 Corel Corporation. See the expanded Table of Contents and file listing from the distribution package. "This specification defines the features and syntax for Dynamic Scalable Vector Graphics (dSVG), an XML language that extends SVG, providing enhanced dynamic and interactive capabilities that were previously only available via scripting. dSVG is a language for describing UI controls and behaviors in XML [XML10]. It contains eleven types of UI controls ('button', 'checkBox', 'radioButton', 'contextMenu', 'comboBox', 'listBox', 'listView', 'slider', 'spinBox', 'textBox' and 'window'), six categories of behaviors (DOM manipulation, viewer manipulation, coordinate conversion, constraints, flow control and selection ability), and two container elements ('action' and 'share'). dSVG UI controls have intrinsic states (up, down, hover, focus and disabled), which change according to mouse and keyboard events. Their appearances are defined in skins that are completely customizable. These skins can also contain dSVG constraints, which allow the UI controls to be 'intelligently' resized. SVG files with dSVG elements are interactive and dynamic. Behaviors can be directly or indirectly associated to SVG elements or to dSVG UI controls and triggered by specified events. Sophisticated applications of SVG are possible by use of a supplemental scripting language which accesses the SVG Document Object Model (DOM), which provides complete access to all elements, attributes and properties. A rich set of event handlers such as onmouseover and onclick can be assigned to any SVG graphical object. However, scripting has many downsides. Note: The distribution file "contains the proposal submitted to the World Wide Web Consortium (W3C) SVG Working Group to enhance SVG's support of enterprise application development for dynamic interfaces.
It is a technical specification intended for developers, the SVG community, and the SVG working group to assess the content in the proposed changes. It also contains a test suite that includes code not intended for commercial purposes, but provided by Corel to help developers test the specification..." See: (1) details in Dynamic Scalable Vector Graphics (dSVG); (2) "Corel Smart Graphics Studio 1.1 Update Now Available."
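The declarative style dSVG proposes can be sketched as follows. This example is an assumption-heavy illustration: the `dsvg` namespace URI and the exact element and attribute names (`dsvg:button`, `dsvg:setAttribute`) are loose approximations of the draft, not verified markup:

```xml
<!-- Illustrative only: a dSVG button that recolors a rectangle on
     click, replacing what would otherwise require an onclick script. -->
<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:dsvg="http://example.org/dsvg"
     width="200" height="100">
  <rect id="target" x="100" y="10" width="50" height="50" fill="blue"/>
  <dsvg:button x="10" y="10" width="80" height="20" label="Recolor">
    <!-- DOM-manipulation behavior triggered by the button's click event -->
    <dsvg:setAttribute elementID="target" name="fill" value="red"/>
  </dsvg:button>
</svg>
```

The point of the design is that the behavior is data, not code: a viewer that understands the dSVG vocabulary can execute it without embedding a script engine binding to the SVG DOM.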
[July 22, 2003] "SOAP Message Transmission Optimization Mechanism." Edited by Noah Mendelsohn (IBM), Mark Nottingham (BEA), and Hervé Ruellan (Canon). W3C Working Draft 21-July-2003. Latest version URL: http://www.w3.org/TR/soap12-mtom. ['The W3C XML Protocol Working Group has released the first public Working Draft of the SOAP Message Transmission Optimization Mechanism. Inspired by PASWA and enhancing the SOAP HTTP Binding, this technical report presents a mechanism for improving SOAP performance in the abstract and in a concrete implementation.'] "The first part of this document ('Abstract Transmission Optimization Feature') describes an abstract feature for optimizing the transmission and/or wire format of a SOAP message by selectively re-encoding portions of the message, while still presenting an XML Infoset to the SOAP application. This Abstract Transmission Optimization Feature is intended to be implemented by SOAP bindings, however nothing precludes implementation as a SOAP module. The usage of the Abstract Transmission Optimization Feature is a hop-by-hop contract between a SOAP node and the next SOAP node in the SOAP message path, providing no normative convention for optimization of SOAP transmission through intermediaries. Additional specifications could in principle be written to provide for optimized multi-hop facilities provided herein, or in other ways that build on this specification (e.g., by providing for transparent passthrough of optimized messages). The second part ('Inclusion Mechanism') describes an Inclusion Mechanism implementing part of the Abstract Transmission Optimization Feature in a binding-independent way. The third part ('HTTP Transmission Optimization Feature') uses this Inclusion Mechanism for implementing the Abstract Transmission Optimization Feature for an HTTP binding.
This document represents a transmission optimization mechanism which was inspired by a similar mechanism in the PASWA document ('Proposed Infoset Addendum to SOAP Messages with Attachments'). The WG plans to work later on the other parts of that document (assigning media types to binary data in XML infosets and including representations of Web resources in SOAP messages) and to publish other drafts which will include such mechanisms... This specification has currently no well-defined relation with the 'SOAP 1.2 Attachment Feature' specification. However, it may be expected that this specification will supersede the SOAP-AF specification once this specification has reached a stable state..." See also "SOAP 1.2 Attachment Feature," W3C Working Draft 24-September-2002. General references in "Simple Object Access Protocol (SOAP)."
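The Inclusion Mechanism's core idea is that a binary payload is carried outside the XML serialization (for example, as a separate MIME part) while the Infoset presented to the application still contains the element, with an include reference standing in for the base64 text. The element and namespace names below are illustrative, not the draft's actual vocabulary:

```xml
<!-- Illustrative sketch: the <image> content lives in a separate MIME
     part; the include element references it by Content-ID, avoiding
     base64 expansion on the wire. -->
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <uploadPhoto xmlns="http://example.org/app">
      <image>
        <inc:Include href="cid:photo-part@example.org"
                     xmlns:inc="http://example.org/include"/>
      </image>
    </uploadPhoto>
  </soap:Body>
</soap:Envelope>
```

A receiving node reconstitutes the logical Infoset (with the binary data back in place of the include element) before handing the message to the application, which is what lets the optimization remain a hop-by-hop concern.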
[July 22, 2003] "Web Services Security, Part 4." By Bilal Siddiq. In O'Reilly WebServices.xml.com (July 21, 2003). "In this fourth article of the series, the author puts the pieces together to demonstrate the simultaneous use of all four of the XML security standards (XML signature, XML encryption, WSS, and SAML) in one application. He discusses two important and typical web services security application scenarios and presents two topics: first, how the different web services security standards work together in an XML firewall to protect SOAP servers; second, what the different types of security tokens that you can use in WSS messages are and how they are related to digital signatures and encrypted data... [In this series] We have discussed four XML security standards and two application scenarios (direct authentication and sharing of authentication data) in this series of articles. Before we conclude this series, we would like to point to another important XML security standard being developed by W3C and two other application scenarios of web services security. We have also discussed cryptographic keys in this series of articles. In fact, the whole concept of security over the Internet is based on the use of cryptographic keys. The management of cryptographic keys is itself a whole topic, which is of paramount importance. Keeping in mind the importance of key management, W3C is currently developing an XML-based key management standard known as the XML Key Management Specification (XKMS). Refer to the XKMS page at W3C for further details. Transactions are an important web services application. WS-Transaction is an attempt to standardize the transactional framework in web services. You can download the WS-Transaction specification and check the security considerations section of the specification to see that WS-Transaction uses WSS to secure transactional web services. SOAP-based messaging is another important application of web services. 
The ebXML Messaging Services (ebMS) standard by OASIS defines the messaging framework for web services. You can download the ebMS specification from the ebXML Messaging page to see how it uses XML signatures..." See also: (1) Part 3, Part 2, and Part 1.
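Of the four standards the series covers, XML Signature is the one with a stock implementation in the Java platform (the javax.xml.crypto.dsig API, standard since JDK 6). A minimal sketch of an enveloped signature, signed and then verified in memory; the document content and the freshly generated RSA key are illustrative, not from the article:

```java
import java.io.ByteArrayInputStream;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Collections;
import javax.xml.crypto.KeySelector;
import javax.xml.crypto.dsig.CanonicalizationMethod;
import javax.xml.crypto.dsig.DigestMethod;
import javax.xml.crypto.dsig.Reference;
import javax.xml.crypto.dsig.SignedInfo;
import javax.xml.crypto.dsig.Transform;
import javax.xml.crypto.dsig.XMLSignature;
import javax.xml.crypto.dsig.XMLSignatureFactory;
import javax.xml.crypto.dsig.dom.DOMSignContext;
import javax.xml.crypto.dsig.dom.DOMValidateContext;
import javax.xml.crypto.dsig.spec.C14NMethodParameterSpec;
import javax.xml.crypto.dsig.spec.TransformParameterSpec;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class EnvelopedSignature {
    public static boolean signAndVerify(String xml) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true); // required for XML Signature processing
        Document doc = dbf.newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));

        XMLSignatureFactory fac = XMLSignatureFactory.getInstance("DOM");
        // Reference URI "" signs the whole document, minus the signature itself.
        Reference ref = fac.newReference("",
                fac.newDigestMethod(DigestMethod.SHA256, null),
                Collections.singletonList(fac.newTransform(
                        Transform.ENVELOPED, (TransformParameterSpec) null)),
                null, null);
        SignedInfo si = fac.newSignedInfo(
                fac.newCanonicalizationMethod(CanonicalizationMethod.INCLUSIVE,
                        (C14NMethodParameterSpec) null),
                fac.newSignatureMethod(
                        "http://www.w3.org/2001/04/xmldsig-more#rsa-sha256", null),
                Collections.singletonList(ref));

        KeyPair kp = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        fac.newXMLSignature(si, null)
                .sign(new DOMSignContext(kp.getPrivate(), doc.getDocumentElement()));

        // Verify against the public key.
        NodeList nl = doc.getElementsByTagNameNS(XMLSignature.XMLNS, "Signature");
        DOMValidateContext valCtx = new DOMValidateContext(
                KeySelector.singletonKeySelector(kp.getPublic()), nl.item(0));
        return fac.unmarshalXMLSignature(valCtx).validate(valCtx);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(signAndVerify("<order><item>book</item></order>"));
    }
}
```

In a WSS message the same ds:Signature element would sit inside the wsse:Security header and reference the SOAP body rather than the whole document.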
[July 22, 2003] "Groove Tackles Project Management. Workspace Project Edition Taps TeamDirection Tools." By Cathleen Moore. In InfoWorld (July 22, 2003). "Groove Networks has rolled out a project management version of its desktop collaboration software designed to help distributed project teams work together more effectively. Groove Workspace Project Edition bundles project-based collaboration tools from Bellevue, Wash.-based TeamDirection. TeamDirection's Project offering includes project creation tools, status view, role-based permissions, and integration with Microsoft Project. TeamDirection Dashboard, meanwhile, provides cross-project views, filtering and sorting capabilities, and related discussion access. Because project management tools are separate from collaboration and communication products, cross-team and cross-company projects usually require the use of multiple, disconnected applications. This often forces project managers to manually re-enter project updates into a static project document, which is then distributed to team members, according to officials at Groove, in Beverly, Mass. To improve and simplify that process, the Project Edition of Groove Workspace lets project managers create a workspace, and add data manually or through a link to an existing project template or Microsoft Project plan. Team members, who are invited to the workspace via e-mail or instant messaging, each receive a shared, synchronized copy of the tools and project data. Groove software's multi-level presence awareness shows which team members are online and active in the workspace, allowing immediate decision making and problem resolution..." See the product description.
[July 22, 2003] "XML Watch: Tracking Provenance of RDF Data. RDF Tools Are Beginning to Come of Age." By Edd Dumbill (Editor and publisher, xmlhack.com). In IBM DeveloperWorks (July 21, 2003). ['When you start aggregating data from around the Web, keeping track of where it came from is vital. In this article, Edd Dumbill looks into the contexts feature of the Redland Resource Description Format (RDF) application framework and creates an RDF Site Summary (RSS) 1.0 aggregator as a demonstration.'] "A year ago, I wrote a couple articles for developerWorks about the Friend-of-a-Friend (FOAF) project. FOAF is an XML/RDF vocabulary used to describe -- in computer-readable form -- the sort of personal information that you might normally put on a home Web page, such as your name, instant messenger nicknames, place of work, and so on... I demonstrated FOAFbot, a community support agent I wrote that aggregates people's FOAF files and answers questions about them. FOAFbot has the ability to record who said what about whom... The idea behind FOAFbot is that if you can verify that a fact is recorded by several different people (whom you trust), you are more likely to believe it to be true. Here's another use for tracking provenance of such metadata. One of the major abuses of search engines early on in their history was meta tag spamming. Web sites would put false metadata into their pages to boost their search engine ranking... I won't go into detail on the various security and trust mechanisms that will prevent this sort of semantic vandalism, but I will focus on the foundation that will make them possible: tracking provenance... To demonstrate, I'll show you how to use a simple RSS 1.0 document as test data. Recently I set up a weblog site where I force my opinions on the unsuspecting public... 
Though RSS feeds of weblogs and other Internet sites are interesting from a browse-around, ego-surfing perspective, I believe the real value of a project like this is likely to be within the enterprise. Organizations are excellent at generating vast flows of time-sequenced data. To take a simple example, URIs are allotted for things like customers or projects, then RSS flows of activity could be generated and aggregated. Such aggregated data could then be easily sliced and diced for whoever was interested. For instance, administrators might wish to find out what each worker has been doing, project managers might want the last three status updates, higher-level management might want a snapshot view of the entire department, and so on. It is not hard to imagine how customer relationship management (CRM) might prove to be an area where tools of this sort would yield great benefits... The simple example demonstrated in this article only scratches the surface of provenance tracking with RDF. On the Web, where information comes from is just as important as the information itself. Provenance-tracking RDF tools are just beginning to emerge, and as they become more widely used they will no doubt become more sophisticated in their abilities. The Redland RDF application framework is a toolkit that's definitely worth further investigation. It has interfaces to your favorite scripting language; it runs on UNIX, Windows, and Mac OS X..." See general references in: (1) "Resource Description Framework (RDF)"; (2) "RDF Site Summary (RSS)."
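The contexts idea Redland implements, where every triple carries the URI of the document that asserted it, can be sketched without any RDF toolkit as a store of quads rather than triples. The class and method names below are hypothetical, not Redland's API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class QuadStore {
    // A triple plus the URI of the source document that asserted it.
    public record Quad(String subject, String predicate, String object, String source) {}

    private final List<Quad> quads = new ArrayList<>();

    public void add(String s, String p, String o, String source) {
        quads.add(new Quad(s, p, o, source));
    }

    // All sources asserting a given triple: the basis for believing a
    // fact only when several independent (trusted) documents state it.
    public Set<String> sourcesFor(String s, String p, String o) {
        return quads.stream()
                .filter(q -> q.subject().equals(s)
                        && q.predicate().equals(p)
                        && q.object().equals(o))
                .map(Quad::source)
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        QuadStore store = new QuadStore();
        store.add("#edd", "foaf:name", "Edd Dumbill", "http://example.org/a.rdf");
        store.add("#edd", "foaf:name", "Edd Dumbill", "http://example.org/b.rdf");
        store.add("#edd", "foaf:name", "Someone Else", "http://example.org/spam.rdf");
        // Two independent sources corroborate the first claim; one makes the second.
        System.out.println(store.sourcesFor("#edd", "foaf:name", "Edd Dumbill").size());
    }
}
```

A spammed assertion, as in the meta-tag example, shows up immediately as a claim made by only one (untrusted) source.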
[July 22, 2003] "IBM Adds Grid Computing to WebSphere." By Peter Sayer. In ComputerWorld (July 22, 2003). "IBM will add some grid-computing capabilities to the enterprise edition of its WebSphere Application Server, allowing companies to squeeze more performance from disparate Web applications running on clusters of servers through better load balancing, it announced Monday. 'This is something to bring grid capabilities to commercial customers. It's about the ability to balance Web server workloads in a more dynamic way than has ever been possible before,' said Dan Powers, IBM's vice president of grid computing strategy. Grid computing is seen as a way to deliver computing power to applications as it is needed, in much the same way that the power grid delivers electricity from many sources to where it is needed. In this case, rather than assigning fixed functions, such as serving up Web pages or handling back-office transactions, to particular machines in a cluster running WebSphere software, the software update allows each server to take on any task, depending on workload..."
[July 22, 2003] "WSRP: The Web Services Standard for Portals." By Lowell Rapaport. In Transform Magazine (July 2003). "Let's say you have a Web portal that distributes data originating from a remote third party. If the remote application is a Web service, then the portal application can address the service's API. Formatting and displaying the data returned from the Web service is the responsibility of the portal. This is fine if you have just one or two remote Web services to incorporate into your portal, but what happens if you have a dozen? The greater the number of Web services, the higher the cost of integration. Web Services for Remote Portals (WSRP) is an emerging standard designed to simplify integration. WSRP is expected to cut portal development costs by standardizing the way a remotely executed portlet integrates with a portal. WSRP specifies how a Web service downloads its results to a portal in HTML via the Simple Object Access Protocol (SOAP). The specification's goals are similar to JSR 168: both promote the standardization of portlets. 'JSR 168 and WSRP are closely related,' says Carol Jones, chief architect of the WebSphere Portal at IBM. 'Portlets written in JSR 168 will be WSRP compatible. 168 deals with the life cycle of a portlet -- what gets called when it's time for it to render itself. WSRP addresses how you take a portlet and use it on a different portal. Once WSRP is adopted, users should be able to take a portlet from one company's product and install it in another.' JSR 168 integrates portals and portlets at the application layer while WSRP works at the communications layer. Based on XML and SOAP, WSRP portlets are transferred over the Internet using the Hypertext Transfer Protocol and don't require any special programming or security changes to a consumer's firewall. If a JSR 168 portlet is run remotely, the consumer's firewall has to be modified to support distributed Java applications. WSRP is inherently distributed..." 
See: (1) OASIS Web Services for Remote Portlets TC; (2) WSRP specification advances toward an OASIS Open Standard; (3) "JSR 168 Portlet API Specification 1.0 Released for Public Review." General references in "Web Services for Remote Portals (WSRP)."
[July 22, 2003] "Web Services Spending Down But Not Out." By Martin LaMonica. In BusinessWeek Online (July 22, 2003). ['A new Gartner survey finds that Web services projects remain a top priority for corporations despite budget cutbacks that are due to the economic downturn.'] "Shrinking IT budgets have forced corporations to cut back on Web services spending, but such projects still remain a top priority, according to a Gartner report released Wednesday. Web services is an umbrella term for a set of XML-based standards and programming techniques that make it simpler to share information between applications. Once touted as a boon to consumers conducting transactions with e-commerce providers, Web services have instead resonated with corporations as a relatively cost-effective way to integrate disparate systems. In an Internet survey of 111 North American companies, Gartner found that 48 percent of respondents have had to pare back spending on Web services application development projects because of the economic slowdown. A full one-third of survey participants said they are continuing to invest in Web services over the next two years despite the grim economic environment... The findings indicate that corporate America has a strong commitment to using Web services, according to Gartner analysts. Web services development projects are at the top of the list of company priorities and are one of the last budgets to be raided when budget cuts are made. The survey found that in the next 12 months, 39 percent of respondents plan to use Web services to share data between internal applications, such as sales automation and order management systems. And 54 percent expect to use Web services for both internal applications and to share information with outside business partners in the next year..." See details in the Gartner Survey announcement: "Gartner Survey Shows Despite U.S. Economic Slowdown Companies Continuing Web Services Development."
[July 22, 2003] "WSDL First." By Will Provost. In O'Reilly WebServices.xml.com (July 22, 2003). "Web services vendors will tell you a story if you let them. 'Web services are a cinch,' they'll say. 'Just write the same code you always do, and then press this button; presto, it's now a web service, deployed to the application server, with SOAP serializers, and a WSDL descriptor all written out.' They'll tell you a lot of things, but probably most glorious among them will be the claim that you can develop web services effectively without hand-editing SOAP or WSDL. Does this sound too good to be true? Perhaps the case can be made that in some cases SOAP has been relegated to the role of RPC encoding, that it's no more relevant to the application developer than IIOP or the DCOM transport. When it comes to WSDL, though, don't buy it. If you're serious about developing RPC-style services, you should know WSDL as well as you know WXS [W3C XML Schema]; you should be creating and editing descriptors frequently. More importantly, a WSDL descriptor should be the source document for your web service build process, for a number of reasons, including anticipating industry standardization, maintaining fidelity in transmitting service semantics, and achieving the best interoperability through strong typing and WXS. The willingness in some quarters to minimize the visibility of service description betrays a more basic and troubling bias, one which has to do with code-generation paths and development process. It assumes that service semantics are derived entirely from application source code. There are two viable development paths for RPC-style service development: from implementation language to WSDL and vice-versa. In fact, to start from the implementation language is the weaker strategy... WSDL first offers a clear advantage in interoperability of generated components. 
Under the WS-I Basic Profile, and in all typical practice, web services rely on WXS as the fundamental type model. This is a potent choice. WXS offers a great range of primitive types, simple-type derivation techniques such as enumerations and regular expressions, lists, unions, extension and restriction of complex types, and many other advanced features. To put it simply, WXS is by far the most powerful type model available in the XML world. It's more flexible than relational DDLs and much more precise and sophisticated than the type system of many programming languages. Why would we choose to use anything else to express service semantics? What good are WXS's advanced features if they can't be mapped to the implementation language?... For new service development, and even for most adaptations of existing enterprise code assets, the WSDL-to-Impl path is the most robust and reliable; it also fits the consensus vision for widely available services based on progressively more vertical standards. It does a better job of preserving service semantics as designed, and it offers best interoperability based on the rich type model of WXS..." General references in "Web Services Description Language (WSDL)" and "XML Schemas."
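The WXS features the author cites (enumerations, patterns, and other simple-type restrictions) can be exercised from any JAXP-compliant Java runtime via javax.xml.validation, which is one way a WSDL-first build process can enforce the declared types before any implementation code runs. A small sketch validating instances against an inline schema with an enumeration restriction; the element and value names are illustrative:

```java
import java.io.StringReader;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import org.xml.sax.SAXException;

public class WxsCheck {
    // An inline schema using a simple-type restriction (enumeration),
    // one of the WXS features most programming-language type systems lack.
    private static final String XSD =
        "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>" +
        " <xs:element name='status'>" +
        "  <xs:simpleType>" +
        "   <xs:restriction base='xs:string'>" +
        "    <xs:enumeration value='open'/>" +
        "    <xs:enumeration value='closed'/>" +
        "   </xs:restriction>" +
        "  </xs:simpleType>" +
        " </xs:element>" +
        "</xs:schema>";

    public static boolean isValid(String xml) throws Exception {
        Schema schema = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
                .newSchema(new StreamSource(new StringReader(XSD)));
        try {
            schema.newValidator().validate(new StreamSource(new StringReader(xml)));
            return true;
        } catch (SAXException invalid) {
            return false; // instance violates the declared type
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(isValid("<status>open</status>"));
        System.out.println(isValid("<status>mislaid</status>"));
    }
}
```

A code-first toolchain that maps this element to a plain String silently discards the enumeration constraint; starting from the schema keeps it normative.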
[July 22, 2003] "Web Services and Sessions." By Sergey Beryozkin. In O'Reilly WebServices.xml.com (July 22, 2003). "Web services are becoming an important tool for solving enterprise application and business-to-business integration problems. An enterprise application is usually exposed to the outside world as a single monolithic service, which can receive request messages and possibly return response messages, as determined by some contract. Such services are designed according to the principles of a service-oriented architecture. They can be either stateless or stateful. Stateful services can be useful, for example, for supporting conversational message exchange patterns and are usually instance or session-based, but they are monolithic in the sense that the session instantiation is always implicit. In general, a service-oriented approach (simple interactions, complex messages) may be better suited to building stateful web services, especially in the bigger B2B world, where integration is normally achieved through an exchange of XML documents. Coarse-grained services, with their API expressed in terms of the document exchange, are likely to be more suitable for creating loosely coupled, scalable and easily composable systems. Yet there still exists a certain class of applications which might be better exposed in a traditional session-oriented manner. Sometimes a cleaner design can be achieved by assigning orthogonal sets of functionality to separate services, thus using simpler XML messages as a result. Such web services are fine-grained... If you believe that for a particular use case a fine-grained design can result in a better interface, and that a reasonable compromise with respect to those problems can be achieved, then such a route should at least be explored. It is likely we'll see some standardization efforts in this area of state and resource management in the near future. Meanwhile, this article will look at ways of building stateful web services. 
In particular, we highlight different ways of defining service references and identifying individual sessions..."
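The explicit session instantiation the article contrasts with implicit, monolithic state can be sketched independently of any SOAP toolkit: the service hands out an opaque session reference, and every later request quotes it. Class and method names here are illustrative, not from the article:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class SessionService {
    // Server-side conversational state, keyed by session reference.
    private final Map<String, Map<String, String>> sessions = new ConcurrentHashMap<>();

    // Explicit instantiation: the client asks for a session and receives
    // an opaque reference to quote in every subsequent message.
    public String openSession() {
        String id = UUID.randomUUID().toString();
        sessions.put(id, new ConcurrentHashMap<>());
        return id;
    }

    public void put(String sessionId, String key, String value) {
        sessions.get(sessionId).put(key, value);
    }

    public String get(String sessionId, String key) {
        return sessions.get(sessionId).get(key);
    }

    // Ending the conversation releases the server-side state.
    public void closeSession(String sessionId) {
        sessions.remove(sessionId);
    }

    public static void main(String[] args) {
        SessionService svc = new SessionService();
        String id = svc.openSession();
        svc.put(id, "basket", "3 items");
        System.out.println(svc.get(id, "basket"));
        svc.closeSession(id);
    }
}
```

On the wire the session reference would typically travel in a SOAP header or as part of a service-specific endpoint reference; the trade-off the article explores is exactly where that reference lives and who manages its lifetime.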
[July 21, 2003] "Introduction to JSR 168 - The Java Portlet Specification." From Sun Microsystems. Whitepaper. 19 pages. "The Java Specification Request 168 Portlet Specification (JSR 168) standardizes how components for portal servers are to be developed. This standard has industry backing from major portal server vendors. The specification defines a common Portlet API and infrastructure that provides facilities for personalization, presentation, and security. Portlets using this API and adhering to the specification will be product agnostic, and may be deployed to any portal product that conforms to the specification. An example, the Weather Portlet, is provided by Sun to demonstrate the key functionality offered by the Portlet API: action request handling, render request handling, render parameters, dispatching to JavaServer Pages (JSP) technology, portlet tag library, portlet URLs, portlet modes, portlet cache and portlet preferences... IT Managers benefit from the ability to support multiple portal products, thus accommodating the unique business needs of various departments and audiences. The compliant portlets can be deployed to all compliant portal frameworks without extensive engineering changes. For developers, the specification offers code reusability. Developers who want to portal enable their applications can create and maintain one set of JSR 168 compliant portlets. These portlets can be run on any JSR 168 Portlet Specification compliant portal server with few, if any, modifications. 
The Portlet Specification addresses the following topics: The portlet container contract and portlet life cycle management; The definition of window states and portlet modes; Portlet preferences management; User information; Packaging and deployment; Security; JSP tags to aid portlet development..." See also the sample portlet code supplied by Sun. Details in the news story "JSR 168 Portlet API Specification 1.0 Released for Public Review." [cache]
[July 21, 2003] "Identity-Management Plans Draw Praise." By Steven Marlin. In InformationWeek (July 17, 2003). "Liberty Alliance and SAML earn plaudits from the Financial Services Technology Consortium for making single sign-on easier for customers. The Financial Services Technology Consortium, a financial-services research group, last week praised two identity-management proposals, Liberty Alliance and Security Assertion Markup Language, for sparing customers the chore of maintaining multiple sets of IDs and passwords. By supporting single sign-on, Liberty Alliance and SAML have the potential to advance Web services initiatives, the FSTC says. Web services -- online applications that invoke other applications via standard protocols -- now require that users authenticate themselves to each application, analogous to someone having to present a building pass at the front entrance to a building and then again at the elevator, the office door, the lavatory, etc. SAML, an XML-based specification of the Organization for the Advancement of Structured Information Standards (OASIS), defines messages known as assertions containing information such as whether a person has already authenticated himself and whether the person has authority to access a particular resource. By exchanging assertions, online applications verify that users are who they claim to be without requiring them to log in. Liberty Alliance, a 2-year-old project backed by 170 companies, has published a set of technical and business guidelines for a 'federated' identity model in which the user logs in once at the beginning of a transaction and SAML assertions provide authentication at the intermediate stages. By enabling companies to automate the task of authenticating customers, employees, suppliers, and partners, the Liberty Alliance and SAML remove an obstacle to the adoption of Web services. 
Web services' potential can't be realized until organizations can manage trusted relationships without human intervention, says Michael Barrett, president of Liberty Alliance and VP of Internet strategy at American Express... A four-month review by the financial consortium concluded that Liberty Alliance and SAML have the potential to quell consumer fears over identity theft. The review was backed by Bank of America, Citigroup, Fidelity Investments, Glenview State Bank, J.P. Morgan Chase & Co., National City Bank, University Bank, and Wells Fargo Bank. Although banks have moved to protect themselves against attacks from hackers, viruses, and network sabotage, they've been poor at communicating the steps they've taken to protect customers from online fraud, says George Tubin, a senior analyst in TowerGroup's delivery-channels service..." See: (1) "Liberty Alliance Publishes Business Requirements and Guidelines for Identity Federation"; (2) general references in "Liberty Alliance Specifications for Federated Network Identification and Authorization."
[July 21, 2003] "SPML Passes Demo As Multi-Platform Provisioning Specification." By Vance McCarthy. In Enterprise Developer News (July 15, 2003). "OASIS execs passed a hurdle last week, as they successfully demoed the Service Provisioning Markup Language (SPML) as an XML-derived standard for multi-platform provisioning during last week's Catalyst Conference. SPML 1.0 is an XML-derivative that proposes to enable organizations to automate, centralize, and manage the process of provisioning user access to internal and external corporate systems and data. SPML was designed to work with the W3C's recently ratified SOAP 1.2 and the OASIS SAML and WS-Security specifications. Just published on June 1, SPML is now out of OASIS technical committee consideration, and being reviewed by the OASIS membership at large, which could approve the standard in late August. In the demo, a fictitious PeopleSoft employee was remotely created, sending an SPML 'document' via SOAP to the PeopleSoft application. Before arriving at the PeopleSoft application, the document -- or the XML schema -- was sent through a messaging multiplexer, which created a duplicate (or 'sub-document') and sent it to other privileged systems. The implication is that vendor-specific adapters could be replaced by open, standard XML schema which would allow different enterprise systems to more easily and cost-effectively interoperate and keep one another in sync. Aside from PeopleSoft, supporters of SPML include BMC Software, BEA Systems, Novell, Sun Microsystems, Business Layers, Entrust, OpenNetwork, Waveset, Thor Technologies, and TruLogica... Other security standards in process at OASIS include WS-Security for high-level security services, XACML for access control, XCBF for describing biometrics data and SAML for exchanging authentication and authorization information..." 
See: (1) "OASIS Member Companies Host SPML Identity Management Interoperability Event"; (2) "Sun and Waveset Provide Identity Management Solution for PeopleSoft Using SPML"; (3) general references in "XML-Based Provisioning Services."
[July 21, 2003] "XSLT Performance in .NET." By Dan Frumin. In O'Reilly ONDotnet.com (July 14, 2003). "The Microsoft .NET Framework brings with it many new tools and improvements for developers. Among them is a very rich and powerful set of XML classes that allow the developer to tap into XML and XSLT in their applications. By now, everyone is familiar with XML, the markup language that is the basis for so many other standards. XSLT is a transformation-based formatter. You can use it to convert structured XML documents into some other form of text output -- quite often HTML, though it can also generate regular text, comma-separated output, more XML, and so on... Before the Microsoft .NET Framework was released, Microsoft published the XML SDK, now in version 4.0. The XML SDK is COM-based, and so can be used from any development language, not just Microsoft .NET. Its object model is also a little different than the .NET implementation, and therefore requires a bit of learning to use. But in the end, the XML SDK can do the same things for XSLT that the .NET Framework offers. Which raises the question: how do these two engines compare to each other in performance? This article will answer that question... Looking at the results, we can see that in a single end-to-end operation, the cost of the COM overhead can offset the advantages gained in transformation. This is especially true for smaller XML files (20 to 40 nodes). However, the margin of difference grows as the input files grow in size and as the transformation grows in complexity. When dealing with these scenarios, developers should consider using MSXML as well as two techniques to optimize their applications. First, consider storing the XSLT transform objects (including IXSLProcessor) in some shared location (e.g., a static member) for future use. This eliminates the cost of creating and preparing the XSLT objects and allows for a reusable transformation object that can simply be applied to XML input. 
Second, developers should consider creating their own COM object garbage collector for the XML files, especially if they are large in size..."
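The first optimization, keeping prepared transform objects in a shared location, has a direct counterpart in Java's JAXP API, where a compiled Templates object is thread-safe and reusable. A sketch in Java rather than .NET, to keep it self-contained; the stylesheet and input are illustrative:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Templates;
import javax.xml.transform.TransformerConfigurationException;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class CachedTransform {
    private static final String XSLT =
        "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>" +
        "<xsl:output method='text'/>" +
        "<xsl:template match='/greeting'>Hello, <xsl:value-of select='.'/>!</xsl:template>" +
        "</xsl:stylesheet>";

    // Compiled once and kept in a static field: the analogue of caching
    // the prepared XSLT processor objects the article recommends.
    private static final Templates TEMPLATES = compile();

    private static Templates compile() {
        try {
            return TransformerFactory.newInstance()
                    .newTemplates(new StreamSource(new StringReader(XSLT)));
        } catch (TransformerConfigurationException e) {
            throw new IllegalStateException(e);
        }
    }

    public static String transform(String xml) throws TransformerException {
        StringWriter out = new StringWriter();
        // newTransformer() is cheap compared to re-parsing and re-compiling
        // the stylesheet on every request.
        TEMPLATES.newTransformer().transform(
                new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(transform("<greeting>world</greeting>"));
    }
}
```

The per-call cost is reduced to parsing the input and running the compiled transform, which is exactly the saving the article measures when the stylesheet objects are cached across requests.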
[July 21, 2003] "An XML Fragment Reader." By William Brogden. In XML.com (July 21, 2003). ['A lot of XML parsing deals with document fragments, as opposed to complete documents. Unfortunately, XML parsers prefer to deal with entire documents. A Java solution turns out to be simple and quite flexible. It enables you to combine many bits of XML-formatted character streams to feed an XML parser.'] "With the release of the Java SDK 1.4, XML parser classes joined the standard Java release, creating a standard API for parser access. Thus, in the org.xml.sax package, you'll find the InputSource class. An InputSource object can feed a character stream to either a SAX or a DOM parser. You can create an InputSource from a Reader, the basic Java class for streams of characters. A workable plan of attack is to create a class extending Reader that can supply characters to an InputSource from a sequence of character stream sources..."
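A minimal version of the approach Brogden describes, a Reader subclass that concatenates several character sources so the parser sees one well-formed document, might look like this (the class name and the synthetic-root convention are illustrative, not from the article):

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.Arrays;
import java.util.Iterator;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class FragmentReader extends Reader {
    private final Iterator<Reader> parts;
    private Reader current;

    public FragmentReader(Reader... readers) {
        this.parts = Arrays.asList(readers).iterator();
        this.current = parts.hasNext() ? parts.next() : null;
    }

    @Override
    public int read(char[] cbuf, int off, int len) throws IOException {
        if (len == 0) return 0;
        while (current != null) {
            int n = current.read(cbuf, off, len);
            if (n > 0) return n;
            current.close();                           // part exhausted: advance
            current = parts.hasNext() ? parts.next() : null;
        }
        return -1;                                     // all parts consumed
    }

    @Override
    public void close() throws IOException {
        if (current != null) current.close();
    }

    public static void main(String[] args) throws Exception {
        // Two free-standing fragments, made into a complete document
        // by wrapping them in a synthetic root element.
        Reader r = new FragmentReader(
                new StringReader("<root>"),
                new StringReader("<a>one</a>"),
                new StringReader("<b>two</b>"),
                new StringReader("</root>"));
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new InputSource(r));
        System.out.println(doc.getDocumentElement().getChildNodes().getLength());
    }
}
```

Because the parser only ever sees the combined stream, the fragments themselves never need to be materialized into one string, which is the flexibility the article highlights.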
[July 21, 2003] "From XML to Wireless, Office Suites Move With the Times. Enhanced Basics and Added Features Change the Dynamics of Office Suites." By Cecil Wooley. In Government Computer News Volume 22, Number 19 (July 21, 2003). "Office suites are even more indispensable than paper at government agencies. Probably 90 percent of every work task starts within an office suite and combines elements of e-mail, word processing, databases, graphics, spreadsheets, networking, instant messaging and presentations... [In this review] I tested four leading office suites, grading them for quality, ease of use and price. I looked for applications that could interact with one another and, to a lesser extent, with programs that were not in suites. When multiple versions of a suite were available, I chose the version with the most components not designed for specific users -- for example, accountants or Web developers. Microsoft Office, the king of office suites, has by far the largest market share. Office 11, originally due last month, won't arrive until August because of the large number of new features. Microsoft Corp. submitted a late beta version of Office 11 for this review... Overall, Office 11 represents a minor upgrade. The functions that changed since Office XP, especially XML compatibility and the integration of SharePoint Team software, were necessary. The rest was mostly cosmetic. Office remains the top suite for good reason: simple functions, tight integration and excellent business tools for users at all skill levels. Corel WordPerfect Office 11, with mainstays WordPerfect 11, Quattro Pro 11, Presentations 11 and Paradox 10, has run a distant second to Microsoft Office 11 for some time. But since the early 1980s WordPerfect has retained many loyal government users, particularly in legal offices... The WordPerfect file format has changed little since Version 6.1, so archived data is still compatible. 
There was even a tool to convert older documents to XML format plus an XML editor that made using the converted work simple. Corel apparently has embraced XML even more thoroughly than Microsoft has. Corel also stuck to its strengths. I could print documents with all their coding and save them in Adobe Portable Document Format without any additional steps. For the legal community, there's a wizard to draw up court pleadings... I found StarOffice completely functional though lacking many extras such as document sharing. The drawbacks: no contact manager, scheduler or e-mail client. You could always use Microsoft Outlook Express for e-mail, but that would mean adding programs to the suite and eliminating the plug-and-play advantage... If you need a basic office suite and have little to spend, StarOffice 6 can do the job. Just don't look for the extras that most users have come to expect from other office suites..."
[July 21, 2003] "BEA Ships WebLogic Platform 8.1. The Suite Includes BEA's Application Server and JRockit Java Virtual Machine." By James Niccolai. In ComputerWorld (July 18, 2003). "BEA Systems Inc. has announced the general availability of WebLogic Platform 8.1, the latest edition of its suite of Java server software for developing, deploying and integrating business applications. The suite includes BEA's application server and JRockit Java virtual machine, which were released in March 2003, as well as new editions of its portal server, integration server and Workshop development environment. The products can be downloaded together or separately from BEA's Web site..." See the announcement: "BEA WebLogic Platform 8.1 Ships. New Products Offer Faster Time to Value by Converging the Development and Integration of Applications, Portals and Business Processes."
[July 21, 2003] "EIPs More Compelling Than Ever." By Jim Rapoza. In eWEEK (July 21, 2003). "While interest in many enterprise applications has cooled in the last few years, companies remain hot on enterprise information portals. And why not? Portals provide the much-needed ability to integrate and unify access to a company's applications, back-end systems, data sources and content repositories. And unlike many other pricey enterprise applications, EIPs continue to show an excellent return on investment. However, although the attractiveness of portals hasn't changed much, the applications themselves -- as well as the companies that provide them -- have changed a great deal. In eWEEK Labs' last big comparison of EIPs almost two years ago, many of the products we reviewed were moving toward greater use of XML and Java. Based on the products we review here and on other recent stand-alone portal reviews, that move now appears to be complete. In fact, all six of the EIPs we tested this time around are based on Java server technology and use XML heavily in their data structures. Not surprisingly, then, they all did a good job of consuming and creating Web services during our tests. For this eValuation, eWEEK Labs tested many of the major EIPs, which have all been revised during the last few months: Art Technology Group Inc.'s ATG 6.0, BEA Systems Inc.'s WebLogic Portal 8.1, Computer Associates International Inc.'s CleverPath Portal 4.51, Plumtree Software Inc.'s Corporate Portal 5.0, Sybase Inc.'s Enterprise Portal 5.1 and Vignette Corp.'s Application Portal 4.5. We decided not to include in this review portals that are tightly tied to specific back-end applications, such as SAP AG's MySAP... Portal consolidation may be easier now, given that all the systems are similar in their underlying architecture. 
These similarities will also prove to be a boon to companies implementing EIPs: Just a couple of years ago, implementing portals often meant learning new portlet languages and dealing with unfamiliar server applications. Now, expertise in Java and XML is enough to develop for any portal application. Still, these things are far from commodity products. Companies need to answer questions such as the following to ensure that the portal they're buying will meet their needs. Does the portal make application integration simple? Can multiple portal implementations work together? Does the portal integrate well with existing security infrastructures? Can portal systems be easily managed and monitored? When doing a large comparative review such as this one, one product sometimes jumps clearly to the fore -- either through superior capabilities in all areas or a high level of innovation. In our EIP review, no one product was clearly superior to the others, and all of the products did well in our tests. However, several of the products we tested excelled in specific areas. In development of portlets and Web applications, BEA's WebLogic Portal and its WebLogic Workshop provided one of the best environments we've seen for creating these applications. Plumtree Corporate Portal offered very high levels of customization and design flexibility. And Vignette's Application Portal provided the best and most detailed portal administration interface we've seen..." See the announcement on BEA: "BEA WebLogic Platform 8.1 Ships. New Products Offer Faster Time to Value by Converging the Development and Integration of Applications, Portals and Business Processes."
[July 21, 2003] "The Security Components Exchange Protocol (SCXP)." By Yixian Yang (Information Security Center, Beijing University of Posts and Telecom, BUPT). IETF Internet Draft. Reference: draft-yang-scxp-00. June 2003, expires December 2003. Section 7 supplies the SCXP XML DTDs (SCXP DTD, channelType Option DTD, channelPRI Option DTD). "This document describes the Security Components Exchange Protocol (SCXP), an application-level protocol for exchanging data between security components. SCXP supports mutual-authentication, integrity, confidentiality and replay protection over a connection-oriented protocol. SCXP is designed on Blocks Extensible Exchange Protocol (BEEP), and it can be looked upon as a profile of BEEP, in a way. BEEP is a generic application protocol framework for connection-oriented, asynchronous interactions. Within BEEP, features such as authentication, privacy, and reliability through retransmission are provided. A chief objective of this protocol is to exchange data between security components..." See also: "Blocks eXtensible eXchange Protocol Framework (BEEP)."
[July 21, 2003] "SCO Takes Aim at Linux Users." By Stephen Shankland and Lisa M. Bowman. In CNET News.com (July 21, 2003). "SCO Group, a company that says Linux infringes on its Unix intellectual property, announced on Monday that it has been granted key Unix copyrights and will start a program to let companies that run Linux avoid litigation by paying licensing fees. The company, which is at the heart of a controversial lawsuit over Linux code, said it plans to offer licenses that will support run-time, binary use of Linux to all companies that use Linux kernel versions 2.4 and later. SCO sparked a major controversy in the Linux world in March, when it sued IBM, saying the company had incorporated SCO's Unix code into Linux and seeking $1 billion in damages. The company alleged, among other things, trade secret theft and breach of contract. SCO then updated its demands in June, saying IBM owed it $3 billion. In the meantime, it sent out letters to about 1,500 Linux customers, warning them that their use of Linux could infringe on SCO's intellectual property. The claim of copyrights on the Unix code in question may raise the stakes in the dispute. Some attorneys say a copyright claim, which was not included in the earlier allegations against IBM, could be easier for the company to prove. SCO said prices for licensing its Unix System V source code would be announced in coming weeks. Pricing will be based on the cost of UnixWare 7.13, the company's current Unix product. SCO, at least initially, isn't directly targeting home users of Linux, SCO CEO Darl McBride said..."
[July 21, 2003] "XQuery and SQL: Vive la Différence." By Ken North. In DB2 Magazine (Quarter 3, 2003). "Sometimes SQL and XML documents get along fine. Sometimes they don't. A new query language developed by SQL veterans is promising to smooth things over and get everything talking again. It's impossible to discuss the future of the software industry without discussing XML. XML has become so important that SQL is no longer the stock reply to the question, 'What query language is supported by all the major database software companies?' The new kid on the block is XQuery, a language for running queries against XML-tagged documents in files and databases. A specification published by the World Wide Web Consortium (W3C) and developed by veterans of the SQL standards process, XQuery emerged because SQL -- which was designed for querying relational data -- isn't a perfect match for XML documents. Although SQL works quite well for XML data when there's a suitable mapping between SQL tables and XML documents, it isn't a universal solution. Some XML documents don't reside in SQL databases. Some are shredded or decomposed before their content is inserted into an SQL database. Others are stored in native XML format, with no decomposition. And the nature of XML documents themselves poses other challenges for SQL. XML documents are hierarchical or tree-structured data. They're self-describing in that they consist of content and markup (tags that identify the content). In SQL databases, such as DB2, individual rows don't contain column names or types because that information is in the system catalog. The XML model is different. As with SQL, schemas that are external to the content they describe define names and type information. However, it's possible to process XML documents without using schemas. XML documents contain embedded tags that label the content. But unlike SQL, order is important when storing and querying XML documents. 
The nesting and order of elements in a document must be preserved in XML documents. Many queries against documents require positional logic to navigate to the correct node in a document tree. When shredding documents and mapping them to columns, it's necessary to store information about the document structure. Even mapping XML content to SQL columns often requires navigational logic to traverse a document tree. Other requirements for querying XML documents include pattern matching, calculations, expressions, functions, and working with namespaces and schemas... For these and other reasons, the W3C in 1998 convened a workshop to discuss proposals for querying XML and chartered the XML Query Working Group..." General references in "XML and Databases."
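The point about document order and positional logic can be illustrated with a short Python sketch using the standard library's ElementTree (an illustration of the ordered XML data model, not of XQuery syntax; the document content here is invented):

```python
# Unlike rows in an SQL table, XML children are ordered, so positional
# queries like "the second paragraph" are well defined.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<article>"
    "<title>XQuery and SQL</title>"
    "<para>First paragraph.</para>"
    "<para>Second paragraph.</para>"
    "</article>"
)

# ElementTree's limited XPath preserves document order; there is no
# SQL analogue without an explicit ordering column.
second = doc.find("para[2]")
print(second.text)  # Second paragraph.
```

A full XQuery engine adds expressions, functions, and namespace handling on top of this ordered, hierarchical model.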
[July 21, 2003] "Foundry Networks Launches XML Switching for Load Balancer. TrafficWorks Ironware OS is Used in its Upper-Layer Load Balancing Switches." By Matt Hamblen. In ComputerWorld (July 21, 2003). "Foundry Networks Inc. today will release a new version of its TrafficWorks Ironware operating system, which is used in its upper-layer load-balancing switches. A key ingredient of Version 9.0 is XML switching capability to control and direct traffic based on XML tags, which should make it easier to control e-commerce traffic over extranets connected to suppliers and customers, Foundry executives said. Among other features, 9.0 also includes an enhancement to provide denial-of-service (DOS) protection, which protects servers against TCP SYN and TCP ACK attacks, according to San Jose-based Foundry. This protection comes at 1.5 million packets/sec., a 15-fold increase over previous versions. Bryan A. Larrieu, vice president of voice, data and system security at CheckFree Corp. in Atlanta, has been testing Version 9.0 and is especially pleased with the DOS protection improvements. He said he has used prior versions and had hoped for such an enhancement..."
[July 21, 2003] "Content-Centric XML: Coming Soon to an Intranet Near You?" By Robert J. Boeri. In (July 20, 2003). "Content-centric XML hasn't followed its original five-year script. Celebrating its fifth birthday as a standard last February, XML was supposed to supplant HTML, shift the burden of processing Web sites from servers to underutilized client PCs, and achieve the holy grail of 'create once, reuse many times.' Although use of XML to transfer information between applications was one of the World Wide Web Consortium's original goals, emphasis was on content-centric XML: Web pages and documents. What happened...? Although XML originally emphasized text content, multimedia use is also increasing, especially on intranets. And now here's the XML-intranet connection. Intranets often provide employees with external newsfeeds. It's easy linking to these feeds, if you're satisfied with employees jumping outside the firewall or viewing them in a pop-up window. If that's all you want, then basic HTML (or XHTML) works fine. But if you'd like to store that news locally, index and search it, or contribute new content expressed in XML, consider NewsML. 'News Markup Language' is an XML schema conceived by Reuters, developed and ratified by the International Press Telecommunications Council, and increasingly a standard for composing and delivering news. NewsML provides a way to produce news and maintain its metadata, and it supports text and rich media. Intranets can use automated processes to deliver NewsML content to a wide variety of devices such as financial service desktops, Web sites, and mobile phones. Journalists can write news stories using standard XML authoring tools in several languages... Although NewsML isn't a one-size-fits-all model (no model is), its adoption is growing both by news organizations creating syndicated content as well as intranets delivering that content. 
NewsML works its magic with a carefully conceived schema that packages news items, regardless of language or media type, with robust metadata. News 'envelopes' contain one or more news items, which in turn contain one or more components in one or more written or spoken languages. Envelopes describe items with attributes like date and time sent, news service, and priority. Text, images, video, and sound can be packaged in an item as hyperlinks. And Gregor Geiermann, a consultant with NetFederation Interactive Media, actually uses NewsML and has success stories to tell..." General references in "NewsML."
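The envelope-and-items packaging described above can be sketched with the standard library; note that the element and attribute names below are simplified placeholders for illustration, not the actual IPTC NewsML vocabulary:

```python
# A minimal sketch of the NewsML idea: an envelope carrying news items
# with routing metadata, and rich media referenced by hyperlink rather
# than embedded inline. Names are invented, not real NewsML elements.
import xml.etree.ElementTree as ET

envelope = ET.Element("NewsEnvelope", SentDate="20030721", Priority="4")
item = ET.SubElement(envelope, "NewsItem", Language="en")
ET.SubElement(item, "Headline").text = "Content-Centric XML on Intranets"
ET.SubElement(item, "MediaRef", href="http://example.org/photo.jpg")

# A consuming intranet process can route on envelope metadata alone,
# without parsing the item bodies:
print(envelope.get("Priority"), envelope.find("NewsItem/Headline").text)
```

The design choice worth noting is that metadata lives on the envelope, so delivery systems can index, filter, and prioritize feeds without understanding every media type they carry.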
[July 21, 2003] "Auto-ID Center Opens Demo Lab." By [RFID Journal Staff]. In RFID Journal News (July 11, 2003). ['The center today opened a robotic demonstration lab at its facility in Cambridge, England, to show off RFID's manufacturing capabilities.'] "Most of the focus on low-cost RFID has been on moving items from manufacturer to distribution center to store. Today, the Auto-ID Center opened a robotic demonstration at its facility in Cambridge, England, which shows the value of robots being able to identify unique items... The demonstration highlights automatic picking, placing, storage and flexible packaging. The lab has product bins where tagged items are stored before being packed. There is a packing area, where empty gift boxes come in, and a storage area for individual items that haven't been packed. A robot in the middle of the station can perform several different tasks. The robot chooses from a variety of Gillette products, including razors and deodorants, to assemble a gift pack. There are two different types of packaging. As a new package comes into the station, the RFID tag on it tells the robot what type of package it is and triggers the order... [In the Auto-ID Center's system] the RFID tag contains an EPC, a serial number that identifies the unique item. When a reader picks up an EPC code, it sends the number to a computer running something called a Savant. Savants are distributed software programs that manage data. They can, for instance, eliminate duplicate codes if two readers pick up the same item. The Savant sends the EPC to an Object Name Service, which is similar to the Web's Domain Name Service. ONS points the Savant to a Physical Markup Language (PML) server where data on the product is stored. PML is [based upon] XML, created by the Auto-ID Center to describe products in ways computers could understand and respond to. The PML server then sends instructions to the robot. 
Mark Harrison, a research associate at the Auto-ID Center, says that the robot needs only to be connected to the Internet. Instructions can be sent from a PML server located literally anywhere in the world; to reduce latency, of course, it makes sense to use a PML server located fairly close to the robot. Harrison says that the interaction between the item and the robot happens quickly because only a small fragment of the PML file is actually sent to the robot..." Note, on the (evidently misplaced) concern for privacy with respect to RFID: "Big Brother's Enemy," by RFID Journal editor Mark Roberti. See: (1) Auto-ID Center website; (2) "Physical Markup Language (PML) for Radio Frequency Identification (RFID)."
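The reader-to-robot chain described in the two items above (reader, Savant, ONS, PML server) can be sketched as a toy pipeline; every identifier, hostname, and PML fragment below is invented for illustration:

```python
# Hypothetical sketch of the EPC lookup chain: reader -> Savant
# (duplicate elimination) -> ONS (EPC to PML server, DNS-like) ->
# fetch a small PML fragment. All data here is made up.
ONS = {"urn:epc:1.2.24.400": "pml.example.com"}
PML_SERVER = {"urn:epc:1.2.24.400": "<Product><Name>Razor</Name></Product>"}

def savant(reads):
    """Eliminate duplicate EPCs, e.g. when two readers see one item."""
    seen, unique = set(), []
    for epc in reads:
        if epc not in seen:
            seen.add(epc)
            unique.append(epc)
    return unique

def resolve(epc):
    server = ONS[epc]                  # ONS points at the PML server
    return server, PML_SERVER[epc]     # only a small fragment is sent

for epc in savant(["urn:epc:1.2.24.400", "urn:epc:1.2.24.400"]):
    host, fragment = resolve(epc)
    print(host, fragment)
```

This mirrors Harrison's point: the robot needs only network reachability, since the EPC is an opaque key and all product knowledge lives behind ONS.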
[July 20, 2003] "Debate Flares Over Weblog Standards. Despite Technical Battles, Weblogs Prepare to Alter the Collaboration and Content Management Space." By Cathleen Moore. In InfoWorld (July 18, 2003). "Weblogs are poised to roil the status quo of enterprise collaboration and content management despite recent debate regarding the protocols underpinning the technology. Quietly flourishing for years with tools from small vendors, online personal publishing technology has skyrocketed in popularity during the past year, attracting serious interest from megaplayers such as AOL and Google. This summer, AOL plans to launch a Weblog tool dubbed AOL Journals, while Google continues to digest Pyra Labs, acquired earlier this year. Most Weblogs are currently fueled by RSS, known both as Really Simple Syndication and RDF (Resource Description Framework) Site Summary. Based on XML, RSS is a Web publishing format for syndicating content, and it is heralded for its simple yet highly effective means of distributing information online. Although not officially sanctioned by a standards body, the format enjoys wide adoption by RSS content aggregators and publishing systems. Media companies such as the BBC, The New York Times, and InfoWorld currently support RSS... Despite the undisputed popularity and proven utility of RSS, a new standard is emerging in an attempt to lay the foundations for the Weblog's future. Originally dubbed Echo and now rechristened as Atom, the effort is described as a grassroots, vendor-neutral push to address some of the limitations of RSS. Rather than adding to the existing RSS specification, development on these issues has splintered off into a separate effort due to disagreement among community members as to the purpose and direction of RSS. The idea is to build on the foundation of RSS, according to Anil Dash, vice president of business development at Six Apart, a San Francisco-based Weblog vendor. 
'The reason there is a need for something else is that there are new types of data and richer and more complex connections we are trying to do that RSS is not meant to do,' Dash said. Critics charge that the multiple versions of RSS, the number of which ranges between two and five depending on whom you talk to, are causing confusion and are hindering interoperability. 'To date, people [involved with RSS] have failed to converge on one version and make the confusion go away,' Antarctica's Bray said. Other issues with RSS include the lack of an API component for editing and extending Weblogs. RSS uses separate APIs, metaWeblog and Blogger, which are controlled by Userland Software and Google, respectively. Atom will be necessary for enterprises that 'want interoperability or need to exchange data with someone who is outside the firewall,' Six Apart's Dash said..." General references in "RDF Site Summary (RSS)."
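RSS's appeal to aggregators is that a feed is plain XML with a small, predictable structure (channel, items, titles, links). A minimal RSS 2.0 feed can be consumed with nothing but the Python standard library; the feed content below is invented:

```python
# Parsing a minimal RSS 2.0 feed: an <rss> root, one <channel>,
# and <item> entries each carrying a <title> and <link>.
import xml.etree.ElementTree as ET

feed = ET.fromstring(
    "<rss version='2.0'><channel>"
    "<title>Example Weblog</title>"
    "<item><title>Debate Flares Over Weblog Standards</title>"
    "<link>http://example.org/post/1</link></item>"
    "</channel></rss>"
)

channel = feed.find("channel")
print(channel.findtext("title"))
for item in channel.findall("item"):
    print("-", item.findtext("title"), item.findtext("link"))
```

This simplicity is also the source of the versioning dispute: with several slightly different RSS dialects in circulation, aggregators must special-case each one, which is the interoperability gap Atom set out to close.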
[July 16, 2003] "XML Semantics and Digital Libraries." By Allen Renear (University of Illinois at Urbana-Champaign), David Dubin (University of Illinois at Urbana-Champaign), C. M. Sperberg-McQueen (MIT Laboratory for Computer Science), and Claus Huitfeldt (Department for Culture, Language, and Information Technology, Bergen University Research Foundation). Pages 303-305 (with 14 references) in Proceedings of the Third ACM/IEEE-CS Joint Conference on Digital Libraries (JCDL 2003, May 27-31, 2003, Rice University, Houston, Texas, USA). Session on Standards, Markup, and Metadata. "The lack of a standard formalism for expressing the semantics of an XML vocabulary is a major obstacle to the development of high-function interoperable digital libraries. XML document type definitions (DTDs) provide a mechanism for specifying the syntax of an XML vocabulary, but there is no comparable mechanism for specifying the semantics of that vocabulary -- where semantics simply means the basic facts and relationships represented by the occurrence of XML constructs. A substantial loss of functionality and interoperability in digital libraries results from not having a common machine-readable formalism for expressing these relationships for the XML vocabularies currently being used to encode content. Recently a number of projects and standards have begun taking up related topics. We describe the problem and our own project... Our project focuses on identifying and processing actual document markup semantics, as found in existing document markup languages, and not on developing a new markup language for representing semantics in general... XML semantics in our sense refers simply to the facts and relationships expressed by XML markup. It does not refer to processing behavior, machine states, linguistic meaning, business logic, or any of the other things that are sometimes meant by 'semantics'. 
[For example:] (1) Propagation: Often the properties expressed by markup are understood to be propagated, according to certain rules, to child elements. For instance, if an element has the attribute specification lang='de', indicating that the text is in German, then all child elements have the property of being in German, unless the attribution is defeated by an intervening reassignment. Language designers, content developers, and software designers all depend upon a common understanding of such rules. But XML DTDs provide no formal notation for specifying which attributes are propagated or what the rules for propagation are. (2) Class Relationships and Synonymy: XML itself contains no general constructs for expressing class membership or hierarchies among elements, attributes, or attribute values -- one of the most fundamental relationships in contemporary information modeling. (3) Ontological variation in reference: XML markup might appear to indicate that the same thing is-a-noun, is-a-French-citizen, is-illegible, and has-been-copyedited; but obviously either these predicates really refer to different things, or they must be given non-standard interpretations. (4) Parent/Child overloading: The parent/child relations of the XML tree data structure support a variety of implicit substantive relationships... These examples demonstrate several things: what XML semantics is, that it would be valuable to have a system for expressing XML semantics, and that it would be neither trivial nor excessively ambitious to develop such a system. We are not attempting to formalize common sense reasoning in general, but only the inferences that are routinely intended by markup designers, assumed by content developers, and inferred by software designers... 
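The propagation rule in example (1) is exactly the behavior standardized for xml:lang: it applies to all descendants until an intervening reassignment defeats it. Since a DTD cannot state this rule, a processor must implement it, for instance by walking up the tree; a minimal sketch:

```python
# Computing the "effective" language of an element under the
# propagation rule: nearest ancestor-or-self xml:lang wins.
import xml.etree.ElementTree as ET

XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

doc = ET.fromstring(
    '<text xml:lang="de"><div><p>Hallo</p>'
    '<p xml:lang="en">Hello</p></div></text>'
)
# ElementTree has no parent pointers, so build a child-to-parent map.
parent = {c: p for p in doc.iter() for c in p}

def effective_lang(elem):
    while elem is not None:
        if XML_LANG in elem.attrib:
            return elem.attrib[XML_LANG]
        elem = parent.get(elem)
    return None

paras = doc.findall(".//p")
print(effective_lang(paras[0]), effective_lang(paras[1]))  # de en
```

The paper's point is that nothing in the DTD records that lang propagates this way while, say, an id attribute does not; that knowledge lives only in prose and in implementations like the one above.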
The BECHAMEL Markup Semantics Project led by Sperberg-McQueen (W3C/MIT) grew out of research initiated in the late 1990s and is a partnership with the research staff and faculty at Bergen University (Norway) and the Electronic Publishing Research Group at the University of Illinois. The project explores representation and inference issues in document markup semantics, surveys properties of popular markup languages, and is developing a formal, machine-readable declarative representation scheme in which the semantics of a markup language can be expressed. This scheme is applied to research on information retrieval, document understanding, conversion, preservation, and document authentication. An early Prolog inferencing system has been developed into a prototype knowledge representation workbench for representing facts and rules of inference about structured documents." See general references in "XML and 'The Semantic Web'."
[July 16, 2003] "The XML Log Standard for Digital Libraries: Analysis, Evolution, and Deployment." By Marcos André Gonçalves, Ganesh Panchanathan, Unnikrishnan Ravindranathan, Aaron Krowne, and Edward A. Fox (Virginia Polytechnic Institute and State University, Blacksburg, VA); Filip Jagodzinski and Lillian Cassel (Villanova University, Villanova, PA). Pre-publication version of the paper delivered at JCDL 2003. Pages 312-314 (with 5 references) in Proceedings of the Third ACM/IEEE-CS Joint Conference on Digital Libraries (JCDL 2003, May 27-31, 2003, Rice University, Houston, Texas, USA). Session on Standards, Markup, and Metadata. ['The authors describe current efforts and developments building on our proposal for an XML log standard format for digital library (DL) logging analysis and companion tools. Focus is given to the evolution of formats and tools, based on analysis of deployment in several DL systems and testbeds.'] "In 2002 we proposed an XML log standard for digital libraries (DLs), and companion tools for storage and analysis. The goal was to minimize problems and limitations of web servers, search engines, and DL systems log formats (e.g., incompatibility, incompleteness, ambiguity). Accordingly, our new format and tools allow capturing a rich, detailed set of system and user behaviors supported by current DL systems. In this paper, we report advances based on analysis of experimentation and deployment in several DL systems and testbeds. We hope that discussion of this work will move the community toward agreement on some DL log standard, which is urgently needed to support scientific advance... Our next generation DL logger will enhance this communication by allowing direct, peer-to-peer communication between DL components and the (componentized) log tool. 
Following the philosophy of the Open Archives Initiative, we intend to use standard (or slightly extended) lightweight protocols, to allow this direct communication, therefore promoting interoperability and reuse. In particular, the extended OAI (XOAI) set of protocols defined by the ODL approach provides specialized OAI protocols for several DL services and can serve as a foundation for such communications... The design of the log analysis tools is highly object oriented, with little or no coupling between modules. The design makes modification and creation of new modules very easy. When a novel statistic is required, or when a new XML format feature is added, a new module can be built and connected to the already existing set of modules. The modular design of the log analysis tools also will allow for more advanced analysis capabilities to be integrated into future versions. The current document search and browse output statistics provide information about the total number of hits for each document as well as a breakdown of hits based on aspects of the server domain... Our formats and tools have evolved to deal with the results of such experiments. With the interest demonstrated by many DLs and institutions (e.g., CiteSeer, MyLibrary, Daffodil) in adopting the format and tools, we expect soon to release stable versions of both. Once this phase is achieved, other research issues will become the focus of future efforts, such as richer analysis and evaluation, and efficient use of distributed storage..." 
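The modular analysis design described above (independent statistics modules over a shared XML event stream) can be sketched in miniature; the log vocabulary below is invented for illustration and is not the actual DL log standard format:

```python
# Hypothetical XML event log plus one pluggable analysis module.
# A new statistic is just a new function over the same event stream.
import xml.etree.ElementTree as ET
from collections import Counter

log = ET.fromstring(
    "<dlLog>"
    "<event type='search' query='xml semantics'/>"
    "<event type='view' docId='oai:vt:123'/>"
    "<event type='view' docId='oai:vt:123'/>"
    "<event type='view' docId='oai:vt:456'/>"
    "</dlLog>"
)

def hits_per_document(events):
    """Total hits per document, one self-contained analysis module."""
    return Counter(e.get("docId") for e in events if e.get("type") == "view")

print(hits_per_document(log))
```

Because modules only consume the event stream, adding a search-term or domain-breakdown statistic means writing another small function, without touching the logger or the other modules.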
See: (1) the Digital Library XML Logging Standard and Tools project website; (2) "An XML Log Standard and Tool for Digital Library Logging Analysis," in Proceedings of the Sixth European Conference on Research and Advanced Technology for Digital Libraries (Rome, Italy, September 16-18, 2002); (3) background in "Streams, Structures, Spaces, Scenarios, Societies (5S): A Formal Model for Digital Libraries" (Technical Report TR-03-04, Computer Science, Virginia Tech). [cache]
[July 16, 2003] "Logic Grammars and XML Schema." By C. M. Sperberg-McQueen (World Wide Web Consortium / MIT Laboratory for Computer Science, Cambridge MA). Draft version of paper prepared for Extreme Markup Languages 2003, Montréal. "This document describes some possible applications of logic grammars to schema processing as described in the XML Schema specification. The term logic grammar is used to denote grammars written in logic-programming systems; the best known logic grammars are probably definite-clause grammars (DCGs), which are a built-in part of most Prolog systems. This paper works with definite-clause translation grammars (DCTGs), which employ a similar formalism but which more closely resemble attribute grammars as described by [D. Knuth, 'Semantics of Context-Free Languages,' 1968] and later writers; it is a bit easier to handle complex specifications with DCTGs than with DCGs. Both DCGs and DCTGs can be regarded as syntactic sugar for straight Prolog; before execution, both notations are translated into Prolog clauses in the usual notation... Any schema defines a set of trees, and can thus be modeled more or less plausibly by a grammar. Schemas defined using XML Schema 1.0 impose some constraints which are not conveniently represented by pure context-free grammars, and the process of schema-validity-assessment defined by the XML Schema 1.0 specification requires implementations to produce information that goes well beyond a yes/no answer to the question 'is this tree a member of the set?' For both of these reasons, it is convenient to use a form of attribute grammar to model a schema; logic grammars are a convenient choice. In [this] paper, I introduce some basic ideas for using logic grammars as a way of animating the XML Schema specification / modeling XML Schema... 
The paper attempts to make plausible the claim that a similar approach can be used with the XML Schema specification, in order to provide a runnable XML Schema processor with a very close tie to the wording of the XML Schema specification. Separate papers will report on an attempt to make good on the claim by building an XML Schema processor using this approach; this paper will focus on the rationale and basic ideas, omitting many details..." See also the abstract for the Extreme Markup paper [Tuesday, August 5, 2003]: "The XML Schema specification is dense and sometimes hard to follow; some have suggested it would be better to write specifications in formal, executable languages, so that questions could be answered just by running the spec. But programs are themselves often even harder to understand. Representing schemas as logic grammars offers a better approach: logic grammars can mirror the wording of the XML Schema specification, and at the same time provide a runnable implementation of it. Logic grammars are formal grammars written in logic-programming systems; in the implementation described here, logic grammars capture both the general rules of XML Schema and the specific rules of a particular schema." Note: the paper is described as an abbreviated version of "Notes on Logic Grammars and XML Schema: A Working Paper Prepared for the W3C XML Schema Working Group"; this latter document (work in progress 2003-07) provides "an introduction to definite-clause grammars and definite-clause translation grammars and to their use as a representation for schemas." General references in "XML Schemas."
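The core idea, that a schema defines a set of trees and that validity assessment should return annotations rather than a bare yes/no, can be shown with a toy analogue. Real DCTGs are Prolog; this Python sketch (with an invented two-element schema) only mirrors the control structure:

```python
# A schema modeled as a grammar: element name -> the exact sequence of
# child element names it permits. Assessment annotates every element,
# in the spirit of producing more than a yes/no answer.
import xml.etree.ElementTree as ET

SCHEMA = {"book": ["title", "chapter", "chapter"], "title": [], "chapter": []}

def assess(elem):
    """Return (tag, valid) annotations for the whole tree."""
    expected = SCHEMA.get(elem.tag)
    children = [c.tag for c in elem]
    valid = expected is not None and children == expected
    report = [(elem.tag, valid)]
    for child in elem:
        report += assess(child)
    return report

doc = ET.fromstring("<book><title/><chapter/><chapter/></book>")
print(assess(doc))
```

A logic-grammar version gains two things this sketch lacks: the grammar rules can quote the specification's wording directly, and Prolog's unification handles the constraints that plain context-free grammars cannot express.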
[July 15, 2003] "Testing Structural Properties in Textual Data: Beyond Document Grammars." By Felix Sasaki and Jens Pönninghaus (Universität Bielefeld). [Pre-publication draft of paper published] in Literary and Linguistic Computing Volume 18, Issue 1 (April 2003), pages 89-100. "Schema languages concentrate on grammatical constraints on document structures, i.e., hierarchical relations between elements in a tree-like structure. In this paper, we complement this concept with a methodology for defining and applying structural constraints from the perspective of a single element. These constraints can be used in addition to the existing constraints of a document grammar. There is no need to change the document grammar. Using a hierarchy of descriptions of such constraints allows for a classification of elements. These are important features for tasks such as visualizing, modelling, querying, and checking consistency in textual data. A document containing descriptions of such constraints we call a 'context specification document' (CSD). We describe the basic ideas of a CSD, its formal properties, the path language we are currently using, and related approaches. Then we show how to create and use a CSD. We give two example applications for a CSD. Modelling co-referential relations between textual units with a CSD can help to maintain consistency in textual data and to explore the linguistic properties of co-reference. In the area of textual, non-hierarchical annotation, several annotations can be held in one document and interrelated by the CSD. In the future we want to explore the relation and interaction between the underlying path language of the CSD and document grammars..." 
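One of the paper's motivating examples, maintaining co-reference consistency, is a constraint stated from the perspective of a single element that no document grammar can express. A minimal sketch of such a check (element names invented, not the authors' CSD notation):

```python
# A structural constraint beyond the grammar: every <ref target="..."/>
# must point at an id that actually exists elsewhere in the document.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    '<text><w id="w1">she</w><w id="w2">Mary</w>'
    '<ref target="w2"/><ref target="w9"/></text>'
)

ids = {e.get("id") for e in doc.iter() if e.get("id")}
violations = [r for r in doc.iter("ref") if r.get("target") not in ids]
print([r.get("target") for r in violations])  # ['w9']
```

The document grammar stays untouched, which is the point of the CSD approach: the DTD still says only where <ref> may appear, while the constraint document says what a valid <ref> must refer to.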
See: (1) the abstract for LitLin; (2) the research group's publication list; (3) the related paper "Co-reference annotation and resources: a multilingual corpus of typologically diverse languages", in Proceedings of the Third International Conference on Language Resources and Evaluation (LREC-2002); (4) related references in "Markup Languages and (Non-) Hierarchies." [source PDF]
[July 15, 2003] "Identifying Metadata Elements with URIs: The CORES Resolution." By Thomas Baker (Birlinghoven Library, Fraunhofer-Gesellschaft) and Makx Dekkers (PricewaterhouseCoopers). In D-Lib Magazine Volume 9, Number 7/8 (July/August 2003). ISSN: 1082-9873. "On 18-November-2002, at a meeting organised by the CORES Project (Information Society Technologies Programme, European Union), several organisations regarded as maintenance authorities for metadata elements achieved consensus on a resolution to assign Uniform Resource Identifiers (URIs) to metadata elements as a useful first step towards the development of mapping infrastructures and interoperability services. The signatories of the CORES Resolution agreed to promote this consensus in their communities and beyond and to implement an action plan in the following six months. Six months having passed, the maintainers of Global Information Locator Service (GILS), ONIX, MARC 21, CERIF, DOI, IEEE/LOM, and Dublin Core report on their implementations of the resolution and highlight issues of relevance to establishing good-practice conventions for declaring, identifying, and maintaining metadata elements more generally. In June 2003, the resolution was also endorsed by the maintainers of UNIMARC. After presenting the text of the CORES Resolution and its three 'clarifications', the article summarises the position of each signatory organisation towards assigning URIs to its metadata elements, noting any practical or strategic problems that may have emerged... The article closes with a few general observations about these first steps towards the clarification of shared conventions for the identification of metadata elements and perhaps, one can hope, towards the ultimate goal of improving interoperability among a diversity of metadata communities. 
In the six months since the signing of the CORES Resolution, the signatories have worked towards translating their commitments into practical URI assignment and persistence policies. Given the need to evaluate the impact of design decisions and to build consensus in the communities behind the standards, it was perhaps too ambitious to expect that policies could be finalised and URIs assigned within just six months. However, having such a short fuse for such a specific set of tasks has highlighted a number of areas where forms of good practice have yet to emerge... Beyond mandating the assignment of URIs to 'elements', the Resolution left it up to the signatories to decide exactly what that means in the context of a particular standard and which other entities, such as sets of elements or values in controlled vocabularies, should also be so identified. Some interesting questions have arisen in this regard: (1) Should the URI of an element reflect a hierarchical context within which it is embedded? (2) If organisation A creates a URI designating an entity maintained by organisation B, and organisation B then creates its own URI for the same entity, by what etiquette or mechanism can the redundant identifiers be cross-referenced or preferences declared? (3) If semantically identical elements are shared across multiple element sets maintained by an organisation, should they each be assigned a separate URI or share one common URI? (4) Should successive historical versions of an element share a single, unchanging URI, or should each version be assigned its own URI, perhaps by embedding a version number in the URI string? [...] The Resolution leaves it to the signatory organizations what the URIs should look like and explicitly says that no assumptions should be made that URIs resolve to something on the Web... The Resolution is silent about how the URIs assigned can be used in asserting semantic relationships between elements in different sets. 
URIs were seen as a useful common basis for asserting the relationship of elements in a diversity of applications to shared ontologies such as the Basic Semantic Register or the <indecs> Data Dictionary, or to formally express the relationship between two element sets in the machine-processable and re-usable form of an RDF schema. Facilitating the expression and processing of such assertions in the interest of interoperability between different forms of metadata was seen by its signatories as the longer-term significance of the CORES Resolution..."
[July 15, 2003] "Using the OAI-PMH ... Differently." By Herbert Van de Sompel (Digital Library Research and Prototyping, Los Alamos National Laboratory), Jeffrey A. Young (OCLC Office of Research), and Thomas B. Hickey (OCLC Office of Research). In D-Lib Magazine Volume 9, Number 7/8 (July/August 2003). ISSN: 1082-9873. "The Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) was created to facilitate discovery of distributed resources. The OAI-PMH achieves this by providing a simple, yet powerful framework for metadata harvesting. Harvesters can incrementally gather records contained in OAI-PMH repositories and use them to create services covering the content of several repositories. The OAI-PMH has been widely accepted, and until recently, it has mainly been applied to make Dublin Core metadata about scholarly objects contained in distributed repositories searchable through a single user interface. This article describes innovative applications of the OAI-PMH that we have introduced in recent projects. In these projects, OAI-PMH concepts such as resource and metadata format have been interpreted in novel ways. The result of doing so illustrates the usefulness of the OAI-PMH beyond the typical resource discovery using Dublin Core metadata. Also, through the inclusion of XSL stylesheets in protocol responses, OAI-PMH repositories have been directly overlaid with an interface that allows users to navigate the contained metadata by means of a Web browser. In addition, through the introduction of PURL partial redirects, complex OAI-PMH protocol requests have been turned into simple URIs that can more easily be published and used in downstream applications... Through the creative interpretation of the OAI-PMH notions of resource and metadata format, repositories with rather unconventional content, such as Digital Library usage logs, can be deployed. 
These applications further strengthen the suggestion that the OAI-PMH can effectively be used as a mechanism to maintain state in distributed systems. [We] show that simple user interfaces can be implemented by the mere use of OAI-PMH requests and responses that include stylesheet references. For certain applications, such as the OpenURL Registry, the interfaces that can be created in this manner seem to be quite adequate, and hence the proposed approach is attractive if only because of the simplicity of its implementation. The availability of an increasing amount of records in OAI-PMH repositories generates the need to be able to reference such records in downstream applications, through URIs that are simpler to publish and use than the OAI-PMH HTTP GET requests used to harvest them from repositories. This article shows that PURL partial redirects can be used to that end..." General references in "Open Archives Metadata Set (OAMS)."
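The kind of OAI-PMH protocol request that a PURL partial redirect would hide behind a simpler URI can be sketched as follows. The repository base URL and record identifier are invented for illustration; only the `verb`, `identifier`, and `metadataPrefix` parameters come from the protocol itself:

```python
from urllib.parse import urlencode

# Hypothetical repository base URL, for illustration only.
BASE = "http://an.example.org/oai"

def getrecord_url(identifier, metadata_prefix="oai_dc"):
    """Build the OAI-PMH GetRecord request a harvester would
    issue as an HTTP GET -- the long form that a short PURL
    could redirect to."""
    query = urlencode({
        "verb": "GetRecord",
        "identifier": identifier,
        "metadataPrefix": metadata_prefix,
    })
    return f"{BASE}?{query}"

url = getrecord_url("oai:an.example.org:rec-17")
print(url)
```

Publishing a short persistent URL that redirects to a request like this is what makes individual records easy to cite in downstream applications.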
[July 15, 2003] "IBM Introduces EPAL for Privacy Management." By John Fontana. In Network World (July 09, 2003). "IBM has introduced a set of tools that will help companies automatically set and manage privacy policies that govern access to sensitive data stored in corporate applications and databases. IBM's new XML-based programming language called Enterprise Privacy Authorization Language (EPAL) allows developers to build policy enforcement directly into enterprise applications. The move is another in a series by IBM to create a suite of tools and software to support identity management, a broad initiative that relies on user identity to control access and secure systems. EPAL allows companies to translate clearly stated privacy policies into a language a machine can read and act upon. 'You may have a policy that says your primary care physician can look at some private patient data, but only in specific situations,' says Arvind Krishna, vice president of security products for IBM. 'We don't know how to do that with technology, we need a common language. With EPAL, you can go from an English language description of a policy to an XML-based representation of that policy.' Krishna says the key is that privacy is based on the purpose for accessing the information and not just on an identity of the person seeking access. EPAL builds on current privacy specifications, namely the Platform for Privacy Preferences (P3P) that provide privacy controls for information passed between business applications and consumers with browsers. EPAL lets companies use those privacy controls internally with their corporate users. The language will be part of an infrastructure that will include monitors that are built into the interface of corporate applications and databases and perform the enforcement of policies. IBM will use its Tivoli Privacy Manager as a hub that the monitors plug into to check policies. 
The Privacy Manager will store policies, as well as log and audit access to data as a means to document policy enforcement..." See details and references in "IBM Releases Updated Enterprise Privacy Authorization Language (EPAL) Specification."
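The key point Krishna makes, that the access decision hinges on the purpose of use and not just the identity or role of the requester, can be sketched in a few lines. The rules and vocabulary below are invented for illustration and are far simpler than actual EPAL policies:

```python
# Hypothetical, simplified rules in the spirit of EPAL: the same
# role gets different rulings depending on the stated purpose.
RULES = [
    {"role": "primary-care-physician", "data": "patient-record",
     "purpose": "treatment", "ruling": "allow"},
    {"role": "primary-care-physician", "data": "patient-record",
     "purpose": "marketing", "ruling": "deny"},
]

def evaluate(role, data, purpose, default="deny"):
    """Return the first matching ruling; deny when no rule applies."""
    for rule in RULES:
        if (rule["role"], rule["data"], rule["purpose"]) == (role, data, purpose):
            return rule["ruling"]
    return default

print(evaluate("primary-care-physician", "patient-record", "treatment"))  # allow
print(evaluate("primary-care-physician", "patient-record", "marketing"))  # deny
```

In the EPAL architecture described above, rules like these would be authored in XML and checked by enforcement monitors built into the application interfaces.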
[July 15, 2003] "New OpenOffice on the Threshold." By David Becker. In CNET News.com (July 15, 2003). "The first major upgrade of OpenOffice moved a step closer with the introduction of a near-final version of the revamped open-source software. A 'release candidate' version of OpenOffice 1.1 is available now through the Web site of the organization behind the productivity package. With commercial software, the release candidate is the edition sent to manufacturers for distribution. But OpenOffice developers will make a few final tweaks to 1.1 before declaring a final version next month, said Sam Hiser, co-leader of the marketing project for OpenOffice.org. OpenOffice is the free, open-source sibling of Sun Microsystems' StarOffice, a software package that includes a word processor, spreadsheet application and other software tools. The package competes with Microsoft's dominant Office product, but can open and save files in Office formats. While it's dwarfed in market share terms by Microsoft Office, OpenOffice is slowly winning a following, thanks in part to its cost advantages and its ability to work with files created by Microsoft applications. Key additions to OpenOffice 1.1 include the ability to export files in the portable document format (PDF) created by Adobe Systems and in Macromedia's Flash animation format. Both standards are widely used by Web publishers and usually require the use of special authoring software... Version 1.1 also incorporates more support for XML (extensible markup language), the format increasingly embraced as the standard for exchanging data between disparate computing systems. Besides allowing people to save files in industry-standard XML, OpenOffice 1.1 is also designed to work with third-party 'schemas' (custom XML configurations), including those Microsoft plans to use in the upcoming version of Office. 
In addition, OpenOffice 1.1 offers support for non-Latin character sets, allowing easier creation of customized versions of OpenOffice for specific languages. The software is currently available in 30 languages, and another 60 localization projects are under way..." See: (1) "OpenOffice.org XML File Format"; (2) "XML File Formats for Office Documents"; (3) related news OpenGroupware.org Announces Open Source Project for Groupware Server Software.
[July 15, 2003] "Startup Unveils New Web-Services Language." By Charles Babcock. In InformationWeek (July 15, 2003). "A startup called Clear Methods has produced Water, an XML programming language, and a run-time environment for Water code, called Steam Engine, which it's offering as part of a pure Web-services platform... Not only can content be built in XML format and transferred with XML-based messaging, it also can be processed at its destination with XML commands and application code, says Clear Methods CEO Michael Plusch. 'If you have a syntax that's compatible with XML and document representation in XML, you can have a Web-services platform that just lives in XML,' he says, rather than the mix of programming and scripting languages that make up the typical Web site. One goal of XML as a programming language is to avoid the passing of XML data from Perl to Java to perhaps Visual Basic or C as it reaches its destination. Plusch and Clear Methods co-founder Christopher Fry previously founded the portal software firm Bowstreet Inc. They began Clear Methods, a six-employee company, in 2001. The Water syntax was composed and Steam was first deployed as a run-time environment in March 2002; this version is the debut of Water with Steam Engine 3.10. A document hand-off mechanism that's written in XML will be more versatile in handling XML data than a language like Java that struggles to make the connection. But Water and Steam Engine are intended as more than a handshake mechanism. Water is an object-oriented language, and programmers who learn it will build class libraries, a body of code from which a family of software objects may be rapidly built and modified. There are few users to date, but one of them, Ben Koo, an engineering doctoral candidate at MIT, says he's been using an early version of Water for two years in a research project he directs on how to model complex hardware and software systems..." See the Water website.
[July 15, 2003] "Microsoft Previews Upgraded Web Services Pack. Security to Get Boost." By Paul Krill. In InfoWorld (July 15, 2003). Microsoft has released "a preview of an upcoming update to its Web Services Enhancements (WSE) kit, focusing on security. Due for general release by the end of this year, the free kit for Visual Studio .Net users is intended to enable development of what Microsoft describes as advanced Web services. WSE 2.0 features security improvements and enables developers to build Web services that are compliant with a set of Web services specifications released by Microsoft and IBM, including WS-SecurityPolicy and WS-Addressing. These specifications, which have not yet been submitted to an industry standards organization, would receive a volume boost by developers who use Microsoft's kit... Microsoft is releasing an early version of the kit to give users and vendors time to review it and provide feedback. The new version builds on the security, routing, and attachment capabilities of Version 1.0 of WSE. Version 2.0 provides a message-based object model that supports multiple transports, including HTTP and TCP, and asynchronous and synchronous communications, according to Microsoft. In synchronous communications, messages are sent and the sender must wait for a reply, unlike asynchronous communications, in which a request can be sent and retrieved without waiting for a reply. Asynchronous communications is useful for long-running transactions such as with routing of payroll requests or purchase orders... Said Bill Evjen, technical director for Reuters, in Saint Louis: 'The technology is moving so fast that if we had to wait for a standards body like OASIS to approve them, we would really be behind the curve; Reuters is confident that the backing of vendors such as Microsoft and IBM give weight to the specifications'..." See details in the news story "Enhanced Adobe XML Architecture Supports XML/PDF Form Designer and XML Data Package (XDP)."
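The synchronous/asynchronous distinction described above can be sketched in a few lines. This is illustrative only: WSE itself is a .NET toolkit, and the service and message names here are invented:

```python
import asyncio

async def service(request):
    """Simulated long-running operation (e.g., routing a purchase order)."""
    await asyncio.sleep(0.01)
    return f"processed:{request}"

async def synchronous_caller():
    # Synchronous style: send the request, then block until the reply arrives.
    reply = await service("payroll-batch-1")
    return reply

async def asynchronous_caller():
    # Asynchronous style: send the request, keep working, retrieve the
    # reply later without having waited for it.
    pending = asyncio.create_task(service("purchase-order-7"))
    other_work = "invoices reconciled"  # the caller is not blocked here
    reply = await pending               # collect the reply when ready
    return other_work, reply

print(asyncio.run(synchronous_caller()))
print(asyncio.run(asynchronous_caller()))
```

The asynchronous pattern is what makes long-running exchanges such as payroll or purchase-order routing practical: the sender is free between issuing the request and retrieving the reply.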
[July 15, 2003] "Microsoft Bolsters Web Services Security." By Martin LaMonica. In CNET News.com (July 15, 2003). "Microsoft has released a toolkit designed to help software programmers tighten security in Web services applications. The toolkit, called Web Services Enhancements (WSE) version 2, will let companies use the latest security capabilities from Microsoft and other software giants like IBM and Sun Microsystems... Eventually, Microsoft will add the capabilities to its Visual Studio.Net development tool and the .Net Framework, the software 'plumbing' needed to run Web services applications on Windows operating systems. Microsoft is using the latest Web services security mechanisms even though the various specifications are likely to change, according to Microsoft executives. However, the toolkit introduces a programming technique that will allow software developers and administrators to establish security policies that can be altered without having to rewrite existing code. For example, a company could write a policy that would give network administrators access to corporate servers during working hours, but not after-hours. Using the policy authoring mechanisms in the WS-Policy and WS-SecurityPolicy, a developer can alter the policy without having to completely rewrite the application code, noted Rebecca Dias, product manager for advanced Web services at Microsoft..." See details in the news story "Enhanced Adobe XML Architecture Supports XML/PDF Form Designer and XML Data Package (XDP)."
[July 15, 2003] "Computer Associates Tackles Web Services Management. Tool Released for Discovery, Monitoring." By Brian Fonseca. In InfoWorld (July 14, 2003). "In an effort to overcome complexities associated with Web services management, Computer Associates on Monday introduced Unicenter Web Services Distributed Management (WSDM) 1.0, a tool designed to automatically discover and monitor Web services... For the monitoring of Web services within .Net environments and support of ASP .Net, CA introduced Unicenter Management for .Net Framework 3.0. The tool offers service-level reporting, health and performance reporting, and capacity utilization, said Dmitri Tcherevik, vice president and director of Web services at Islandia, N.Y.-based CA. Meanwhile, Unicenter Management for WebSphere Release 3.5 and Unicenter Management for WebLogic 3.5 work within J2EE to discover deployed Web services and their interfaces. Tcherevik said WSDM can analyze information about services, servers, and applications surrounding Web services to enable customers to either take corrective action or allow Unicenter's automated 'self-healing' capability to resolve the problem without human intervention. Supporting both the J2EE and .Net environments, WSDM offers services controls that allow users to disable, enable, or redirect Web services. The product monitors service characteristics of Web services transactions. In effect, one can use WSDM to automatically set alert thresholds and offer centralized management... CA announced the release of eTrust Directory 4.1. The product offers a UDDI implementation to support Web services, featuring the ability to store, replicate, and distribute vast amounts of Web services data." See details in the announcement: "CA Ensures Performance, Reliability and Security of Web Services With New Unicenter and eTrust Solutions. 
Five Advanced Management and Security Offerings Enable IT Organizations To Optimize Service Levels for Enterprise and Customer-Facing Systems."
[July 15, 2003] "CA Unveils Web Services Management Technology." By Steven Burke and Heather Clancy. In InternetWeek (July 15, 2003). "Computer Associates International has announced several new products for facilitating Web services, including Unicenter WSDM, a new management product designed to monitor and track Web services. The product, currently in beta, allows solution providers to quickly respond to lowered service levels or interruptions across servers and storage networks. 'I don't have to have access to someone else's infrastructure to monitor the services, yet I can manage it,' said Dmitri Tcherevik, vice president and director of Web services at CA. CA CTO Yogesh Gupta demonstrated Unicenter WSDM (Web Services Distributed Management) during his CA World keynote Tuesday in Las Vegas. CA Chairman and CEO Sanjay Kumar said CA will not actively recruit partners for Unicenter WSDM, but existing Unicenter channel partners will have the ability to sell and support the product. Tcherevik said Unicenter WSDM is expected to be generally available by the end of the calendar year. The vendor will use the beta program to establish pricing strategy but likely will use a tiered approach based on CA's current FlexSelect program, he said. The software can be deployed in a stand-alone fashion, although additional features are available to those that use it with the Unicenter management console..." See the announcement: "CA Ensures Performance, Reliability and Security of Web Services With New Unicenter and eTrust Solutions. Five Advanced Management and Security Offerings Enable IT Organizations To Optimize Service Levels for Enterprise and Customer-Facing Systems."
[July 15, 2003] "Adobe Expands E-Forms Push." By David Becker. In CNET News.com (July 15, 2003). "Publishing software giant Adobe Systems on Tuesday announced a new electronic forms application that appears to be aimed at Microsoft's upcoming InfoPath product. The as-yet unnamed product, which Adobe plans to introduce next year, allows companies to create and distribute interactive forms using Adobe's portable document format (PDF) and Extensible Markup Language (XML), the fast-spreading standard behind Web services. XML support means data from forms designed with the software can be automatically sucked into back-end software, such as corporate databases and customer relationship management (CRM) systems, eliminating the costly data re-entry associated with paper forms. Microsoft is touting similar advantages for InfoPath, a part of the dramatically revamped Office productivity software line the company plans to introduce in a few months. InfoPath, formerly code-named XDocs, designs XML-based forms and ties them in to back-end software to automate data exchange and delivery. Key differences in the Adobe product include reliance on PDF, a widespread format that can be read by any device equipped with the free Adobe Reader software. InfoPath forms can be used only by those who buy the application. 'We're combining the XML advantage with the best of PDF as far as document integrity and ubiquity,' said Marion Melani, senior product market manager for Adobe's ePaper division. 'When you're conversing system to system, it's just an XML file. But the user gets the full PDF for the visual representation of that document.' InfoPath has the advantage of being tied to Microsoft's equally widespread Word word-processing application, said John Dalton, an analyst for Forrester Research. 'I have a feeling those (Word versus Adobe Reader) are almost red herrings in terms of advantages,' he said. 
The new Adobe software will also include simple tools for adding XML functions to existing PDF forms. Many financial, government and other institutions use PDF for electronic distribution of forms ultimately intended for printout..." See details in the news story "Enhanced Adobe XML Architecture Supports XML/PDF Form Designer and XML Data Package (XDP)."
[July 15, 2003] "Adobe Outlines XML Forms Plans." [Edited] By Patricia Evans. In The Bulletin: Seybold News and Views on Electronic Publishing Volume 8, Number 41 (July 16, 2003). "Seeking to counter the rumors that Microsoft's forthcoming XML forms product (now called InfoPath) means the death of PDF, Adobe this week departed from typical practice and discussed a product that's at least six months away from delivery. Adobe's forthcoming forms software will allow companies to create and distribute interactive forms using PDF and XML. Microsoft is touting similar features for InfoPath (formerly code-named XDocs), a part of the much-touted revamped Office suite the company plans to introduce in a few months. Adobe's product, which is slated to be released next year, relies on PDF, so forms can be viewed by anyone with the free Adobe Reader, whereas InfoPath forms can be used only by those who buy the application. PDF is also cross-platform, while Microsoft is tying its forms to its highly popular Word program and Windows platform. Adobe is working on form-design software that will include simple tools for adding XML functions to existing PDF forms. It features a universal client, which includes Adobe Reader, and also includes an intelligent document tier, which is where PDF documents with XML 'smarts' are created via the forms designer... InfoPath does not threaten the traditional uses of PDF in prepress and the graphic arts... But it very much threatens Adobe's plans to blend PDF and XML in electronic forms-the reason Adobe acquired Accelio, which used to go by the name of JetForms... By breaking tradition and preannouncing the product, Adobe has certainly landed a preemptive strike and alerted the market that when it comes to electronic forms, there will be more than one game in town..." See other details in the news story "Enhanced Adobe XML Architecture Supports XML/PDF Form Designer and XML Data Package (XDP)."
[July 15, 2003] "Liberty Alliance Offers Advice on External ID Federation. The Guidelines Explain How Companies Should Work Together on the ID Effort." By Scarlet Pruitt. In Computerworld (July 10, 2003). "Having already set forth the technical requirements needed to create a federated identity architecture, the Liberty Alliance Project released guidelines this week for how companies should include business partners and customers in their networks, saying it's crucial for the advancement of Web services. The group released the Liberty Alliance Business Guidelines document at the Burton Catalyst Conference in San Francisco on July 8, 2003 outlining how companies should ensure mutual confidence, risk management, liability assessment and compliance when considering wide-scale deployment of federated network identity. The guidelines come on the heels of the group's federated network-identity technical requirements, released last year, and the second set of recommendations, which is available for public review. The nonprofit group represents more than 170 companies and organizations working to develop and deploy open, federated network-identity standards. Members include companies such as Sun Microsystems Inc., SAP AG and American Express Co. The group's open standards for federated identity compete against Microsoft Corp.'s Passport service in the user authentication and identity management arena... The group claimed that extending access to customers, partners and suppliers is the next phase of Web services and advises companies to put processes in place that guard against losses due to identity fraud and leakage of information... The group is expected to release additional guidelines later this year..." See details in "Liberty Alliance Publishes Business Requirements and Guidelines for Identity Federation."
[July 15, 2003] "Microsoft Shifts TrustBridge Towards WS- Roadmap." By Gavin Clarke. In Computer Business Review Online (July 15, 2003). "Microsoft Corp is changing the direction of TrustBridge for Active Directory to fit the WS- roadmap jointly authored with IBM Corp, and is targeting a 2005 release date for the technology. TrustBridge has been re-worked to accept security tokens other than Kerberos, by adding support for WS-Security. TrustBridge will also support the related WS-Federation and WS-Trust among other WS- specifications. Microsoft has also put a rough date on TrustBridge's delivery. TrustBridge will ship in the "wave" of Longhorn, the company's next planned operating system due in 2005, according to XML services architect John Shewchuk. It is not yet clear, though, whether TrustBridge will ship as a feature of Longhorn or a separate product, although Microsoft will offer developers greater access to WS- specifications with a second planned Web Services Enhancements (WSE) toolkit, due soon. TrustBridge was unveiled by Microsoft last July, but has not been spoken about publicly since. However, Microsoft lead product manager Michael Stephenson, speaking after last week's Burton Group's Catalyst 2003 conference, told ComputerWire: "We are shifting direction on directory to build TrustBridge on top of interoperable identity standards." As originally intended, TrustBridge would have enabled Active Directory to take a Kerberos security token and communicate with another Kerberos-based system, not necessarily Active Directory. Stephenson said, though, this required "proprietary" work..." See also "Microsoft Revives TrustBridge for Web Services Role," by John Fontana.
[July 15, 2003] "Serialize XML Data. Saving XML Data Using DOMWriter in XML for the C++ Parser." By Tinny Ng (System House Business Scenarios Designer, IBM Toronto Laboratory). From IBM developerWorks, XML zone. July 15, 2003. ['IBM developer Tinny Ng shows you how to serialize XML data to a DOMString with different encodings. You'll also find examples that demonstrate how to use the MemBufFormatTarget, StdOutFormatTarget, and LocalFileFormatTarget output streams in XML4C/Xerces-C++.'] "Xerces-C++ is an XML parser written in C++ and distributed by the open source Apache XML project. Since early last year, Xerces-C++ has added an experimental implementation of a subset of the W3C Document Object Model (DOM) Level 3 as specified in the DOM Level 3 Core Specification and the DOM Level 3 Load and Save Specification. The DOM Level 3 Load and Save Specification defines a set of interfaces that allow users to load and save XML content from different input sources to different output streams. This article uses examples to show you how to save XML data in this way -- how to serialize XML data into different types of output streams with different encodings. Users can stream the output data into a string, an internal buffer, the standard output, or a file... For more details please refer to the W3C DOM Level 3 Load and Save Specification and the complete API documentation in Xerces-C++..." See: "Document Object Model (DOM) Level 3 Core Specification Version 1.0" (W3C Working Draft 09-June-2003) and "Document Object Model (DOM) Level 3 Load and Save Specification Version 1.0" (W3C Working Draft 19-June-2003). The LS specification "defines the Document Object Model Load and Save Level 3, a platform- and language-neutral interface that allows programs and scripts to dynamically load the content of an XML document into a DOM document and serialize a DOM document into an XML document..." General references in "W3C Document Object Model (DOM)."
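The article's three output targets have rough analogues in any DOM implementation. As an illustration only (using Python's standard-library minidom rather than the Xerces-C++ DOMWriter API the article actually covers), serializing the same document to an in-memory string, to a stream, and to a file-like object looks like this:

```python
from xml.dom.minidom import parseString
import io
import sys

doc = parseString("<note><to>reader</to></note>")

# Target 1: serialize to an in-memory string (cf. MemBufFormatTarget).
as_string = doc.documentElement.toxml()
print(as_string)

# Target 2: serialize to standard output (cf. StdOutFormatTarget);
# the encoding argument controls the XML declaration that is emitted.
doc.writexml(sys.stdout, encoding="UTF-8")
print()

# Target 3: serialize to a file-like object (cf. LocalFileFormatTarget);
# a StringIO stands in here for an open file handle.
buf = io.StringIO()
doc.writexml(buf, encoding="UTF-8")
```

In Xerces-C++ the corresponding choice is made by passing a different XMLFormatTarget subclass to the writer; the serialization call itself stays the same.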
[July 15, 2003] "XML for Data: Reuse it or Lose it, Part 3. Realize the Benefits of Reuse." By Kevin Williams (CEO, Blue Oxide Technologies, LLC). From IBM developerWorks, XML zone. July 08, 2003. ['In the final installment of this three-part column, Kevin Williams looks at some of the ways you can take advantage of the reusable XML components that he defined in the previous two installments of this column. Designing XML with reusable components can, in many ways, create direct and indirect benefits; Kevin takes a quick look at some of the most important. You can share your thoughts on this article with the author and other readers in the accompanying discussion forum.'] "This column builds on the philosophy of XML reuse I described in the first two columns... The first benefit of using reusable components isn't necessarily a direct benefit of the design of XML structures that use components, but it is a natural outcome of the approach. To create components that can be reused, you need to capture solid semantics about those components. These semantics can be extended into the processing code itself to make the programmer's job easier... Another natural benefit of the component-based approach to XML design is the ability to reuse XSLT fragments to ensure a standardized presentation of information across many different documents. Again, this is a natural outcome of capturing good semantics and reusing elements and attributes whenever possible... Another benefit [Class-to-XML mapping (fragment serialization, deserialization)] begins to appear when higher-order elements are reused... By creating XML-aware classes, you can make it possible to reuse parsing and serialization code -- as long as you have properly reused the structures in your XML schemas... You'll find many benefits to designing XML schemas using reusable components. These benefits lead directly to shorter development cycles and simpler maintenance of code. 
If you are designing a large system with many different types of XML documents, taking the time to identify reusable components of those documents early in the development effort benefits that effort in the long term..." See Part 1 "XML Reuse in the Enterprise," which "looks at some of the historical approaches to reusing serialized data, and then shows how XML allows one to break from tradition and take a more flexible approach to document designs"; Part 2 "Understanding Reusable Components" describes "the types of components that can be reused in XML designs and provides examples of each in XML and XML Schema."
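As a minimal illustration of the component reuse Williams advocates (the type and element names below are hypothetical, not drawn from the column), a named XML Schema type can be defined once and then referenced from several document structures, so its semantics, validation rules, and any associated XSLT fragments are captured in one place:

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- A reusable component, defined once as a named type. -->
  <xs:complexType name="AddressType">
    <xs:sequence>
      <xs:element name="Street" type="xs:string"/>
      <xs:element name="City"   type="xs:string"/>
    </xs:sequence>
  </xs:complexType>

  <!-- Two document structures reuse the same component. -->
  <xs:element name="ShipTo" type="AddressType"/>
  <xs:element name="BillTo" type="AddressType"/>
</xs:schema>
```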
[July 15, 2003] "Tip: Send and Receive SOAP Messages with SAAJ. Java API Automates Many of the Steps Required in Generating and Sending Messages Manually." By Nicholas Chase (President, Chase & Chase, Inc). From IBM developerWorks, XML zone. July 10, 2003. ['In this tip, author and developer Nicholas Chase shows you how to use the SOAP with Attachments API for Java (SAAJ) to simplify the process of creating and sending SOAP messages.'] "The foundation of Web services lies in the sending and receiving of messages in a standard format so that all systems can understand them. Typically, that format is SOAP. A SOAP message can be generated and sent manually, but the SOAP with Attachments API for Java (SAAJ) -- an offshoot of the Java API for XML Messaging (JAXM) -- automates many of the required steps, such as creating connections or creating and sending the actual messages. This tip chronicles the creation and sending of a synchronous SOAP message. The process involves five steps: (1) Creating a SOAP connection; (2) Creating a SOAP message; (3) Populating the message; (4) Sending the message; (5) Retrieving the reply. SAAJ is available as part of the Java Web Services Developer Pack 1.2. This package also includes a copy of the Tomcat Web server (so you can host your own service) and sample applications. Setting up the Java Web Services Developer Pack 1.2 is easy -- as long as you send your messages through the included Tomcat Web server. [Once installation is complete] you should be able to send a message from anywhere on your system using a standalone program... The simple application [developed in this article] just outputs the received message, but you can just as easily extract the information from the XML document. 
Also, while this tip demonstrates the synchronous sending and receiving of messages, the JAXM API, available as an optional download, allows for the use of a messaging provider for asynchronous delivery through the use of a ProviderConnection object rather than a SOAPConnection. The provider holds the message until it is delivered successfully. JAXM also allows for the use of profiles, which make it easy to create specialized SOAP messages such as SOAP-RP or ebXML messages..." [Note: 'The Java Web Services Developer Pack (Java WSDP) is an integrated toolkit that allows Java developers to build, test and deploy XML applications, Web services, and Web applications with the latest Web services technologies and standards implementations. Technologies in Java WSDP include the Java APIs for XML, Java Architecture for XML Binding (JAXB), JavaServer Faces, Web Services Interoperability Sample Application, XML and Web Services Security, JavaServer Pages Standard Tag Library (JSTL), Java WSDP Registry Server, Ant Build Tool, and Apache Tomcat container.']
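For readers without the Java WSDP installed, the envelope that SAAJ's MessageFactory.createMessage() builds automatically in steps 2 and 3 can be sketched with the JDK's standard DOM and Transformer APIs (the payload namespace and element name here are hypothetical); SAAJ then layers the connection, send, and reply steps on top:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import java.io.StringWriter;

public class SoapEnvelopeSketch {
    static final String ENV = "http://schemas.xmlsoap.org/soap/envelope/";

    // Build the Envelope/Body skeleton that SAAJ's
    // MessageFactory.createMessage() produces for you.
    static String buildMessage() throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element envelope = doc.createElementNS(ENV, "soap:Envelope");
        doc.appendChild(envelope);
        Element body = doc.createElementNS(ENV, "soap:Body");
        envelope.appendChild(body);
        // Populate the body with an application payload
        // (namespace and element name are hypothetical).
        Element payload = doc.createElementNS("urn:example", "ex:GetPrice");
        body.appendChild(payload);

        StringWriter out = new StringWriter();
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        t.transform(new DOMSource(doc), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(buildMessage());
    }
}
```

In SAAJ itself, steps 1, 4, and 5 reduce to SOAPConnectionFactory.newInstance().createConnection() and connection.call(message, endpoint), which returns the reply message synchronously.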
[July 15, 2003] "Use RosettaNet Based Web Services, Part 1: BPEL4WS and RosettaNet. How to Instantly Add Years of E-Business Experience and Expertise to Your Web Services." By Suhayl Masud (Founder and Lead Architect, Different Thinking). From IBM developerWorks, Web services (July 15, 2003). ['While Web services are a gentle evolution of existing technology, they are a revolution in the way business can be represented in software. However, we cannot realize the full potential of Web services, or see their revolutionary nature, unless we start constructing partner-to-partner e-business dialogues that conduct real business transactions. This series of articles demonstrates the creation of a real e-business dialogue by leveraging the industry leading e-business process specifications from RosettaNet, and translating them to Web services using the expressive and flexible BPEL4WS.'] "The purpose of this series of articles is to demonstrate the true potential of Web services by creating an e-business dialogue that can be used to conduct real business. This e-business dialogue will be based on a real world business problem and the problem will be addressed by using a proven solution from RosettaNet. In this series, I will show you that the most important aspect of Web services is the e-business dialogue; I will explain what these dialogues are and how to construct them for business peers. In this first article in the series, I will cover the following: the true potential of Web services, understanding how to conduct e-business dialogues, advantages of leveraging RosettaNet, introduction to RosettaNet, and translating RosettaNet into Web services. In Parts 2 and 3, I will discuss choreography for Web services and construct a sample end-to-end e-business scenario that demonstrates the benefits of combining RosettaNet and BPEL4WS..." 
See also: (1) "Business Process with BPEL4WS"; (2) general references in "Business Process Execution Language for Web Services (BPEL4WS)" and "RosettaNet."
Earlier XML Articles
- XML Articles and Papers July 2003
- XML Articles and Papers June 2003
- XML Articles and Papers May 2003
- XML Articles and Papers April 2003
- XML Articles and Papers March 2003
- XML Articles and Papers February 2003
- XML Articles and Papers January 2003
- XML Articles and Papers December 2002
- XML Articles and Papers November 2002
- XML Articles and Papers October 2002
- XML Articles and Papers September 2002
- XML Articles and Papers August 2002
- XML Articles and Papers July 2002
- XML Articles and Papers April - June, 2002
- XML Articles and Papers January - March, 2002
- XML Articles and Papers October - December, 2001
- XML Articles and Papers July - September, 2001
- XML Articles and Papers April - June, 2001
- XML Articles and Papers January - March, 2001
- XML Articles and Papers October - December, 2000
- XML Articles and Papers July - September, 2000
- XML Articles and Papers April - June, 2000
- XML Articles and Papers January - March, 2000
- XML Articles and Papers July-December, 1999
- XML Articles and Papers January-June, 1999
- XML Articles and Papers 1998
- XML Articles and Papers 1996 - 1997
- Introductory and Tutorial Articles on XML
- XML News from the Press