Other collections with references to general and technical publications on XML:
- XML Article Archive: [September 2003] [August 2003] [July 2003] [June 2003] [May 2003] [April 2003] [March 2003] [February 2003] [January 2003] [December 2002] [November 2002] [October 2002] [September 2002] [August 2002] [July 2002] [April - June 2002] [January - March 2002] [October - December 2001] [Earlier Collections]
- Articles Introducing XML
- Comprehensive SGML/XML Bibliographic Reference List
[October 29, 2003] "W3C Seeks Re-examination of Eolas Browser Patent." By Matt Hicks. In eWEEK (October 29, 2003). "The World Wide Web Consortium is seeking a reexamination of a Web browser patent that it says threatens to undermine the smooth operation of the Web. On Tuesday [2003-10-28], W3C Director Tim Berners-Lee sent a letter to the United States Patent and Trademark Office formally requesting a reexamination of the patent, U.S. Patent No. 5,838,906. The Web standards group claims that the patent is invalid because 'prior art' (a legal term in patent law referring to whether an invention existed prior to the filing of a patent) was not considered at the time the patent was granted in 1998 or during the trial. 'A patent whose validity is demonstrably in doubt ought not be allowed to undo years of work that have gone into building the Web,' Berners-Lee wrote in his letter to James E. Rogan, undersecretary of commerce for intellectual property in the patent office. In a separate filing with the patent office, the W3C last week outlined examples of prior art, including two publications from a Hewlett Packard Laboratories researcher, Dave Raggett, about a proposed HTML+ specification that it says were published a year before the patent filing. The W3C claims that the publications describe the EMBED tag in HTML+ in an identical way to the EMBED tag in the patent. Beyond the claims of prior art, the W3C also cited the far-flung impact of the patent as a reason for it to be re-examined. As well as Web and software developers being forced to modify Web pages and applications at a considerable expense, millions of Web pages that are no longer being actively maintained but that have historical significance could be broken because no one is responsible for covering the cost of changing them, Berners-Lee wrote..." See details in the news story: "W3C Presents Prior Art Filing to USPTO and Urges Removal of Eolas Patent."
[October 28, 2003] "Web Group Backs Microsoft in Patent Suit." By Steve Lohr. In New York Times (October 29, 2003). W3C, a "leading Internet standards-setting organization took the unusual step yesterday of urging the director of the United States Patent and Trademark Office to invalidate a software patent that the group says threatens the development of the World Wide Web. The move by the World Wide Web Consortium puts the group squarely behind Microsoft in a patent-infringement lawsuit that the company is losing so far. A federal jury ruled against Microsoft in August and awarded $521 million to a former University of California researcher who holds the patent the Web consortium now wants revoked. The Web group contends that the patent, based on work done by Michael Doyle, founder of Eolas Technologies in Chicago, while he was an adjunct professor at the University of California at San Francisco, was improperly granted. In a filing with the patent office, the Web consortium asserts that the ideas in the Eolas patent had previously been published as prior art, a legal term. That prior art was not considered when the patent was granted, or in the Microsoft trial, and thus the patent claims should be invalidated, the consortium contends. In a long letter yesterday, Tim Berners-Lee, the consortium director, who created the basic software standards for the Web, said the patent office should begin a review of the patent 'to prevent substantial economic and technical damage to the operation of the World Wide Web.' In his letter to James E. Rogan, director of the patent office, Mr. Berners-Lee repeatedly emphasized the wider public interest in a review of the patent. If the claims in the patent are upheld and enforced, Mr. Berners-Lee warned, 'the cycle of innovation on the Web would be substantially retarded.' Later, he wrote that the patent, if unchallenged, represented 'a substantial setback for global interoperability and the success of the open Web.'
The technology in question lets a Web browser summon programs automatically over the Internet. The programs that use this technology include those for playing music, videos and animations and exchanging documents over the Internet. The technology has become a standard feature in the software for coding Web pages, called hypertext markup language. To comply with the court ruling, Microsoft has told several software companies and the Web consortium that it plans to make changes in its Internet Explorer browser, the on-ramp to the Web for 90 percent of computer users. That, the Web consortium warned, could force changes in other Internet media software including the Real Networks music player, Apple's QuickTime video program, Macromedia Flash, Adobe's document reader, and Web scripting languages like Sun Microsystems' Java. In addition, the standards group said, Web pages across the Internet might have to be modified to adjust to changes made by Microsoft to comply with the court ruling..." See references in "W3C Presents Prior Art Filing to USPTO and Urges Removal of Eolas Patent."
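The prior-art dispute turns on the EMBED tag described in Dave Raggett's HTML+ drafts. A hedged illustration of the kind of markup at issue follows; the attribute names are typical of mid-1990s EMBED usage, not a quotation from the HTML+ drafts or the patent claims:

```html
<!-- Illustrative only: embedding an external media object that the
     browser hands off to a helper program or plug-in for rendering. -->
<EMBED SRC="movie.mov" WIDTH="320" HEIGHT="240">
```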
[October 27, 2003] "Code Name Avalon: Create Real Apps Using New Code and Markup Model." By Charles Petzold. In Microsoft MSDN Magazine Volume 19 Number 1 (January 2004). "The new presentation subsystem in the next version of Windows, code-named 'Longhorn,' offers powerful new capabilities to developers. This new subsystem, code-named 'Avalon,' allows developers to take advantage of its capabilities through a language -- Extensible Application Markup Language (code-named 'XAML'), as well as through modern object-oriented programming languages such as C#. Because most applications written to Avalon will probably be a mix of XAML and programming code, this article discusses XAML tags used to control page layout along with the procedural code written to respond to events. Avalon consists mostly of a new collection of classes added to the .NET Framework... In addition, Avalon also defines a new markup language you can use in Longhorn that's code-named 'XAML.' You use XAML much like HTML to define a layout of text, images, and controls. Being based on XML, XAML has stricter and much less ambiguous syntax than HTML. It is expected that most XAML will be machine-generated by visual design programs, but it may be a good learning experience to hand-write your own XAML (initially). Most applications written to Avalon will probably contain both program code and XAML. You'll use the XAML for defining the initial visual interface of your application, and write code for doing everything else. You can embed the program code directly in XAML or keep it in a separate file. Everything you can do in XAML you can also do in program code, so it's possible to write a program without using any XAML at all. The reverse is not true, however; there are many tasks that can only be done in program code, so only the simplest applications will consist entirely of XAML. Here's a little snippet of some XAML: <Button Background='LightSeaGreen' FontSize='24pt'>Calculate</Button>... 
XAML has very intimate ties with the Avalon class library: every element type you can use in XAML is actually a class and, specifically, a descendent of the UIElement or ContentElement classes declared in the MSAvalon.Windows namespace. Among the descendents of UIElement is Control, from which are descended all the common user-interface controls such as buttons, scroll bars, list boxes, edit fields, and so forth. Classes derived from ContentElement include Bold and Italic. Of course, the modern programmer wants to know: can I use my own classes as XAML elements? And the answer is: of course you can. XAML is called the Extensible Application Markup Language for a reason. Any class that has a public parameterless constructor and settable properties can be used in XAML... The hierarchical nature of XAML is one reason why a markup language makes more sense than programming code for defining a visual interface. The markup can mimic the hierarchy with nesting and indentation. In fact, when Windows 1.0 was first introduced in 1985, programmers used a text resource script to define the hierarchical structure of their menus and dialog boxes. At the time, the hierarchy only went one level deep, but it was a start. An XAML file is just the resource script... Avalon and XAML represent a departure from Windows-based application programming of the past. In many ways, designing your application's UI will be easier than it used to be and deploying it will be a snap. With a lightweight XAML markup for UI definition, Longhorn-based applications are the obvious next step in the convergence of the Web and desktop programming models, combining the best of both approaches..." See also "XML Markup Languages for User Interface Definition."
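The element-to-class mapping and the nesting Petzold describes can be sketched as follows. This is a hypothetical pre-release fragment extending the article's Button snippet: the DockPanel and Text elements, the namespace URI, and the Click handler name are all illustrative assumptions, and pre-release XAML naming changed between Longhorn builds:

```xml
<!-- Each element corresponds to an Avalon class, each attribute to a
     settable property, and nesting expresses the UI hierarchy. -->
<DockPanel xmlns="http://schemas.microsoft.com/2003/xaml">
  <Text DockPanel.Dock="Top" FontSize="16pt">Enter a value, then press:</Text>
  <Button Background="LightSeaGreen" FontSize="24pt" Click="OnCalculate">
    Calculate
  </Button>
</DockPanel>
```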
[October 27, 2003] "Web Services, XML Touted for Longhorn. Microsoft Again Beats Drum for Planned Operating System." By Paul Krill. In InfoWorld (October 27, 2003). "Microsoft Senior Vice President Jim Allchin at the Microsoft Professional Developers Conference here on Monday touted the upcoming Longhorn release of Windows for its XML, Web services, collaboration, and storage capabilities... Developers, who are receiving early code from Longhorn at the conference, will have to wait a while for the finished Longhorn product. A first beta release is not planned until the second half of 2004, with general availability expected in 2006. For Web services and collaboration, the Indigo technology in Longhorn enables access to multiple elements of the system, including security and trust. Peer-to-peer communications also is supported... Applications can be run as a service, according to Allchin. Synchronization also is provided in Indigo, as is collaboration, providing for unification of contact lists among systems such as real-time messaging and e-mail, he said. The XAML (XML Application Markup Language) functionality in Longhorn provides a markup language enabling development in a declarative programming fashion, according to Allchin. The declarative concept involves separating coding from content. 'XAML allows collaboration between designers and developers,' he said... Also featured in Longhorn is Next Generation Secure Computing Base (NGSCB), formerly called Palladium, for secure booting in hardware and creation of shadowed memory. 'The idea is it's a curtained memory, so it would be very, very hard to penetrate that,' Allchin said..."
[October 27, 2003] "XMPP Instant Messaging." By Peter Saint-Andre and Jeremie Miller (Jabber Software Foundation). IETF Network Working Group, Internet-Draft. Reference: 'draft-ietf-xmpp-im-18'. October 26, 2003. 109 pages. "The Extensible Messaging and Presence Protocol (XMPP) is a protocol for streaming XML elements in order to exchange messages and presence information in close to real time. The core features of XMPP are defined in XMPP Core. These features -- specifically XML streams, stream authentication and encryption, and the <message/>, <presence/>, and <iq/> children of the stream root -- provide the building blocks for many types of near-real-time applications, which may be layered on top of the core by sending application-specific data qualified by particular XML namespaces. This memo describes extensions to and applications of the core features of XMPP that provide the basic functionality expected of an instant messaging (IM) and presence application as defined in RFC 2779 ['Instant Messaging / Presence Protocol Requirements']... For the purposes of this memo, the requirements of a basic instant messaging and presence application are defined by RFC 2779. At a high level, RFC 2779 stipulates that a user must be able to complete the following use cases: (1) Exchange messages with other users (2) Exchange presence information with other users (3) Manage subscriptions to and from other users (4) Manage items in a contact list (in XMPP this is called a 'roster') (5) Block communications to or from specific other users. Detailed definitions of these functionality areas are contained in RFC 2779... While XMPP-based instant messaging and presence meets the requirements of RFC 2779, it was not designed explicitly with RFC 2779 in mind, since the base protocol evolved through an open development process within the Jabber open-source community before RFC 2779 was written.
Note also that although protocols addressing many other functionality areas have been defined in the Jabber community, such protocols are not included in this memo because they are not required by RFC 2779..." See following entry, and general references in "Extensible Messaging and Presence Protocol (XMPP)."
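The three stanza types named above carry the RFC 2779 use cases. A brief sketch, with placeholder addresses, of a chat message, a presence broadcast, and a roster query as defined in the XMPP drafts:

```xml
<!-- A one-to-one chat message. -->
<message from="romeo@example.net/orchard" to="juliet@example.com" type="chat">
  <body>Wherefore art thou?</body>
</message>

<!-- Broadcast availability to subscribed contacts. -->
<presence from="romeo@example.net/orchard"/>

<!-- Request the sender's roster (contact list). -->
<iq from="romeo@example.net/orchard" type="get" id="roster_1">
  <query xmlns="jabber:iq:roster"/>
</iq>
```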
[October 27, 2003] "XMPP Core." By Peter Saint-Andre and Jeremie Miller (Jabber Software Foundation). IETF Network Working Group, Internet-Draft. Reference: 'draft-ietf-xmpp-core-19'. October 26, 2003. 94 pages. "The Extensible Messaging and Presence Protocol (XMPP) is an open XML protocol for near-real-time messaging, presence, and request-response services. The basic syntax and semantics were developed originally within the Jabber open-source community, mainly in 1999. In 2002, the XMPP WG was chartered with developing an adaptation of the Jabber protocol that would be suitable as an IETF instant messaging (IM) and presence technology. As a result of work by the XMPP WG, the current memo defines the core features of XMPP; XMPP IM defines the extensions required to provide the instant messaging and presence functionality defined in RFC 2779 ['Instant Messaging / Presence Protocol Requirements']... Although XMPP is not wedded to any specific network architecture, to date it usually has been implemented via a typical client-server architecture, wherein a client utilizing XMPP accesses a server over a TCP socket... A server acts as an intelligent abstraction layer for XMPP communications. Its primary responsibilities are to manage connections from or sessions for other entities (in the form of XML streams (Section 4) to and from authorized clients, servers, and other entities) and to route appropriately-addressed XML stanzas (Section 9) among such entities over XML streams. Most XMPP-compliant servers also assume responsibility for the storage of data that is used by clients (e.g., contact lists for users of XMPP-based instant messaging and presence applications); in this case, the XML data is processed directly by the server itself on behalf of the client and is not routed to another entity. Compliant server implementations MUST ensure in-order processing of XML stanzas between any two entities...
Most clients connect directly to a server over a TCP socket and use XMPP to take full advantage of the functionality provided by a server and any associated services. Although there is no necessary coupling of an XML stream to a TCP socket (e.g., a client could connect via HTTP polling or some other mechanism), this specification defines a binding of XMPP to TCP only. Multiple resources (e.g., devices or locations) may connect simultaneously to a server on behalf of each authorized client, with each resource differentiated by the resource identifier of a JID (e.g., <node@domain/home> vs. <node@domain/work>) as defined under Addressing Scheme (Section 3). The RECOMMENDED port for connections between a client and a server is 5222, as registered with the Internet Assigned Numbers Authority (IANA)... A gateway is a special-purpose server-side service whose primary function is to translate XMPP into the protocol used by a foreign (non-XMPP) messaging system, as well as to translate the return data back into XMPP. Examples are gateways to Internet Relay Chat (IRC), Short Message Service (SMS), SIMPLE, SMTP, and legacy instant messaging networks such as AIM, ICQ, MSN Messenger, and Yahoo! Instant Messenger. Communications between gateways and servers, and between gateways and the foreign messaging system, are not defined in this document..." See preceding entry, and general references in "Extensible Messaging and Presence Protocol (XMPP)."
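The stream binding described above can be sketched as follows: a client opens a TCP connection to port 5222 and sends a stream header; stanzas then flow as children of the stream root until either side closes it. The namespaces shown are the ones defined in the XMPP Core draft; the domain is a placeholder:

```xml
<stream:stream
    to="example.com"
    xmlns="jabber:client"
    xmlns:stream="http://etherx.jabber.org/streams"
    version="1.0">
  <!-- <message/>, <presence/>, and <iq/> stanzas are exchanged here,
       addressed by JIDs such as node@domain/resource. -->
</stream:stream>
```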
[October 24, 2003] "Industry Commentary: Think Async." By Edwin Khodabakchian (Collaxa). In Web Services Journal Volume 3, Issue 10 (October 2003). "Within the IT industry, we're seeing a steady shift from an RPC (remote procedure call) integration style to asynchronous, document-based integration. More and more developers are realizing that asynchronous services are core to a new, loosely coupled and message-driven architecture. The main benefits that are enabled by asynchronous services and the associated loosely coupled, message-based architecture are: (1) Reliability: A synchronous integration style results in brittle applications where the weakest link in a chain impacts an entire application or business process. Reliable integration requires asynchronous interactions. (2) Scalability: Message-oriented middleware vendors and developers have shown that queues provide for maximum throughput and efficiency when connecting systems with different processing and availability capacities. The same approach can be taken with all integration projects by leveraging asynchronous services. (3) Long-running services: Integration projects frequently bring the greatest value when automating business processes that involve both manual processes and automated systems. Any service that supports or requires manual intervention must have an asynchronous interface due to the time it will likely need to complete... there are many standards (BPEL4WS, SOAP, WSDL, WS-Addressing, WS-Transaction, etc.) that assist in implementing asynchronous architectures. These standards provide a framework for an asynchronous architecture and bring significant value in both the design and implementation phases of an integration project by supporting asynchronous interactions as first-class citizens. 
But all of these standards and technology are the second step - the first step is for developers and architects to learn to be more comfortable designing asynchronous, loosely coupled services and systems..." General references in "Asynchronous Transactions and Web Services."
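The asynchronous, document-style interactions described above are commonly modeled in WSDL as one-way operations paired with a separate callback interface. A hedged WSDL 1.1 fragment follows; all service, operation, and message names are hypothetical:

```xml
<!-- A one-way (fire-and-forget) operation has an <input> but no
     <output>; the response arrives later via a distinct callback
     portType rather than a blocking RPC return. -->
<portType name="PurchaseOrderService">
  <operation name="submitOrder">
    <input message="tns:OrderMessage"/>
  </operation>
</portType>

<portType name="PurchaseOrderCallback">
  <operation name="orderResult">
    <input message="tns:OrderResultMessage"/>
  </operation>
</portType>
```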
[October 24, 2003] "W3C to Broaden Footstep in China." By Bob Liu. In InternetNews.com (October 24, 2003). "The World Wide Web Consortium (W3C) announced it will broaden its footsteps in mainland China with the group's first-ever event to be organized in November. The China International Forum on WWW's Development 2003 will be held on 12-13 November 2003 in Beijing. The event is co-organized by the China Computer Federation and the W3C Office in Hong Kong. China has more than 45 million Internet users and the number is still growing. Chinese is the second most widely used language (behind English) on the Web. Cultural and language differences increase the necessity to pay attention to how the Web grows so that more people can access the Web easily, and so that user agents can render and search Chinese Web pages correctly with a variety of devices. 'The differences in knowledge levels, languages, consumers and producers, etc., have created information asymmetry which hinders international information exchange,' said Professor Shi Zhongzhi. Many international companies have set up research and development laboratories in China to work on Web-related technologies. However, the Web community at large needs more input from Chinese public users, academia and local industry, who depend on the Web in their daily activities..." See China International Forum on WWW's Development 2003, to be held November 12-13, 2003. Related W3C news: "El Consorcio World Wide Web inaugura la Oficina Española en Oviedo, España."
[October 24, 2003] "Navy Deploying Its Battle Plan: SAML." By Anne Chen. In eWEEK (October 20, 2003). "At the U.S. Navy's Space and Naval Warfare Systems Command, the battle plans to gain control of an IT environment with an estimated 200,000 applications center on single-sign-on capabilities and the use of SAML... In 2001, Adm. William Fallon, vice chief of naval operations, created Task Force Web, an initiative to winnow the Navy's thousands of legacy applications. The program called for all Navy applications to be Web-enabled by next year and available to some 720,000 Navy users via the Navy Enterprise Portal. The task proved to be much larger than anyone thought. At the time, the Navy had about 200,000 applications in use, many of which were deployed at the department level and overlapped with those in other Navy units. To control that environment, the Navy decided to deploy a portal based on a Web services architecture. It was decided the portal would be based on open standards, so the Navy chose to build its Web services architecture using the J2EE (Java 2 Platform, Enterprise Edition) environment. The Navy spent about $1 million to develop internally a middleware layer that enables the agency to substitute standards or data definitions without forcing changes to user services or underlying databases. This portal connector links the Navy's disparate legacy applications and Web services... SPAWAR -- which acquires and deploys the technology used in ships and airplanes, as well as in network operating centers in the continental United States and overseas -- decided single sign-on would be the most effective way to handle identity management for users to access the Navy Enterprise Portal... Because of the Navy's need to support personnel and contractors stationed around the globe, SPAWAR chose to support single-sign-on capabilities that are managed as a reusable Web service.
For identity management authorization, SPAWAR decided to use open standards, including SAML; XML; Simple Object Access Protocol; and Universal Description, Discovery and Integration. This led to the Navy's decision earlier this year to pilot Oblix Inc.'s NetPoint Identity Management and Access Control Solution 6.1 because Oblix supports SAML... In the initial phase of the program, SPAWAR deployed NetPoint to handle SAML-enabled, single-sign-on authentication of 5,500 users aboard the battleship USS Teddy Roosevelt, enabling them to access applications that do everything from tracking parts to pinpointing the location of enemy vessels. NetPoint handles the exchange of SAML security assertions between users on the ship and servers onshore, and it automatically logs users in to the Navy Enterprise Portal and its available applications. The deployment of the project was successful enough that the Navy is planning to use NetPoint to provide single-sign-on capabilities to all 720,000 naval users and civilian contractors who access the Navy Marine Corps Intranet. Eventually, that number could reach as high as 3 million because all users associated with the Navy will be able to have their identity managed this way..." General references in "Security Assertion Markup Language (SAML)."
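The security assertions exchanged in such a single-sign-on flow look roughly like the following SAML 1.1 authentication assertion. This is a hedged sketch: the issuer, subject, identifier, and timestamps are placeholders (not drawn from the Navy deployment), and a real assertion would also carry an XML digital signature and validity conditions:

```xml
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:1.0:assertion"
    MajorVersion="1" MinorVersion="1"
    AssertionID="_a1b2c3" Issuer="portal.example.mil"
    IssueInstant="2003-10-24T12:00:00Z">
  <saml:AuthenticationStatement
      AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password"
      AuthenticationInstant="2003-10-24T12:00:00Z">
    <saml:Subject>
      <saml:NameIdentifier>jdoe@example.mil</saml:NameIdentifier>
    </saml:Subject>
  </saml:AuthenticationStatement>
</saml:Assertion>
```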
[October 24, 2003] "Messaging Worlds on Course to Merge. XMPP, SIMPLE Groups Discuss Interoperability." By Cathleen Moore. In InfoWorld (October 24, 2003). "Talks are under way to bridge the gap between rival IETF instant messaging protocols, paving the way for development of new collaboration technologies. Speaking last week at an IM Planet Show panel in San Jose, Calif., representatives from the XMPP (Extensible Messaging and Presence Protocol) and SIMPLE (Session Initiation Protocol for IM and Presence Leveraging Extensions) standards camps said preliminary talks have begun. 'The groups plan to talk more about interoperability at the IETF meeting in November,' said Joe Hildebrand, chief architect at Jabber. Hildebrand said once interoperability work begins, the standards groups will look at combining existing IM technology and developing new technology. Other panelists included Jonathan Rosenberg, SIP and SIMPLE co-author and CTO at dynamicsoft; and Maxime Seguineau, founder, chairman, and CEO of Antepo. Illustrating the potential for interoperability, the nonprofit Jabber Software Foundation also announced last week the public availability of two gateways designed to extend XMPP networks to SIMPLE and Wireless Village IM implementations. The SIMPLE gateway is built for IBM's Lotus Instant Messaging implementation of SIMPLE. Both SIMPLE and XMPP are nearing final ratification by the IETF. IM standards development is 'a big-time issue' due to the significant development and market opportunities in enterprise IM and presence technology, said Rob Batchelder, president of Relevance... Although the initial talks between XMPP and SIMPLE camps stayed safely within the bounds of interoperability, the possibility of merging the efforts into a single standard would yield the most benefit to the industry, Batchelder said..." 
IETF drafts recently published in the Extensible Messaging and Presence Protocol (XMPP) Working Group: XMPP Instant Messaging (September 7, 2003, reference: 'draft-ietf-xmpp-im-17', 118 pages); XMPP Core (September 7, 2003, reference: 'draft-ietf-xmpp-core-18', 95 pages). Additional specifications cited in "Extensible Messaging and Presence Protocol (XMPP)." See also: (1) IETF SIMPLE Working Group Charter [SIP for Instant Messaging and Presence Leveraging Extensions]; (2) "Common Profile for Instant Messaging (CPIM)."
[October 24, 2003] "Denmark Urges Government Support for Open Source." By Matthew Broersma. In ZDNet News (October 24, 2003). "Open source software and open standards are vital for any attempt at e-government, argues a new report from Denmark. Open source software represents a serious alternative to proprietary products, and should be used as a tool to open up software markets to more competition, according to a report carried out under the auspices of the Danish government. The report, which stirred up controversy when it was published in Denmark earlier this month, was released in English this week by the Danish Board of Technology. While a number of governments in Europe and elsewhere are eyeing open source software as a way of cutting costs and stimulating localised software development, the Danish study goes a step further, arguing that public sector support for open-source and open standards may be necessary for there to be any real competition in the software market... The study recommended that governments take an active role in promoting standardised file formats and alternatives to dominant proprietary applications in order to help break a 'de facto monopoly'. 'The ordinary market conditions for standard software will tend towards a very small number of suppliers or a monopoly,' the Board of Technology stated in the report. 'It will only be possible to achieve competition in such a situation by taking political decisions that assist new market participants in entering the market.' The Board was particularly critical of closed, proprietary standards such as Microsoft's Word format, arguing they go against the principles of e-government by requiring citizens to use particular software and reinforcing monopolies. 'A strategy for e-government should not be based on a closed, proprietary standard in a key technology,' the report said. 
'There is no genuine competition at present in the desktop (office software) area, largely due to the fact that Microsoft formats also represent de facto standards for electronic document exchange.' The Board recommended that the Danish government take an active role in promoting an open, XML-based alternative for file formats, either by switching to OpenOffice's XML format or launching an EU-wide project to develop a new format... The Danish Board of Technology urged the government to take action, dismissing the lukewarm approach of other European countries: 'It is... not sufficient for us in Denmark to follow Britain and Germany, for example, in merely recommending that open source should be 'considered'. A more active decision must be taken in those areas where there is a de facto monopoly'..." See "Danish Board of Technology Report Recommends Open Source Software for E-Government."
[October 22, 2003] "Raising the XML Flag." By David Becker. In CNET News.com (October 22, 2003). "The launch of the new Microsoft Office System included a handful of new products and major changes to existing applications that are intended to position them as part of a broad platform for interacting with corporate data. New XML (Extensible Markup Language)-based functions in familiar programs such as Word and Excel expand their role as purely local applications. New applications such as InfoPath, an ambitious attempt to apply electronic forms to internal business processes, and SharePoint collaboration tools further blur the distinction between desktop and server. But Jeff Raikes, vice president of Microsoft's Productivity and Business Services group, insists that XML 'plumbing' is beside the point -- the real news is that the new Office will buy harried workers a few more minutes in their day. He points to a recent third-party study that concludes that Office 2003 improves efficiency enough to save the average office worker two hours a week. 'I think the kind of benefits you see in Office 2003, where information worker productivity can be improved by up to two hours per week, where you can have the kind of cumulative impact where you can pay for the investment in eight months on average -- that has to be at the core of what we do,' he said. Raikes spoke with CNET News.com in conjunction with the Office System launch...' [Raikes:] 'The biggest story, really, is the transformation. People have historically had a narrow view of what Office means to them and their productivity. With Office 2003, and in particular establishing the Office System concept, we're very clearly...signaling a major transformation of what we're doing with Office and our aspiration to really help people in the broad facets of information work..." See: "New Microsoft Office System Marketed to Enterprises."
[October 21, 2003] "Create Web Applets with Mozilla and XML. Mozilla's Simple and Flexible XUL Saves Time When Building Java-less Applets." By Nigel McFarlane. From IBM developerWorks, Web architecture. ['To go beyond simple HTML, historically the only options have been to use Java technology or plug-ins. Now, you have a new way -- write and display applications natively in XML. The Mozilla platform provides such a mechanism. In this article, Nigel McFarlane introduces XUL (the XML User-interface Language). XUL is a set of GUI widgets with extensive cross-platform support that are designed for building GUI elements for applications that have traditional, non-HTML GUIs.'] "The Mozilla platform is a bundle of freely available open source technology that underlies many user-oriented software applications. Some of these applications are desktops and some are development tools, but the most famous ones are Web browsers, including Mozilla, AOL for the Macintosh, Galeon on Linux, and Netscape. Although these browsers are mostly used to display HTML, the platform beneath them offers much more. In particular, the Mozilla platform's extensive support for XML provides an alternative to Java technology for the creation of applets and applications. In this article, I'll demonstrate how to create such applets using XML tags instead of Java classes. It is a refreshingly simple, yet powerful approach. Although the Mozilla platform has its share of object classes (more than a thousand at last count), it is best known for its deep use of XML. For some flavors of XML (like XHTML), the platform provides full rendering support while for other flavors (like RDF), it provides support for data processing only. Rendering support is required if an XML document is to have a visual representation. The platform has rendering support for HTML/XHTML, MathML, optionally SVG, and also its own XUL, covered in this article..."
See: (1) "Extensible User Interface Language (XUL)"; (2) "XML Markup Languages for User Interface Definition."
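The article's central idea -- declaring a GUI with XML tags instead of Java classes -- can be sketched outside Mozilla as well. The snippet below builds a minimal XUL-style window document with Python's standard xml.etree.ElementTree. The tag names (window, label, button) and the namespace URI follow the XUL vocabulary, but the Python wrapper and the applet content are purely illustrative, not part of any Mozilla API.

```python
# Illustrative sketch: a minimal XUL-style document assembled with the
# Python standard library. In Mozilla, a document like this is rendered
# directly as a GUI; here we only show its shape.
import xml.etree.ElementTree as ET

XUL_NS = "http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul"

def make_xul_applet(title: str, label_text: str) -> str:
    """Return a serialized XUL window containing a label and a button."""
    ET.register_namespace("", XUL_NS)  # emit XUL as the default namespace
    window = ET.Element(f"{{{XUL_NS}}}window", {"title": title})
    ET.SubElement(window, f"{{{XUL_NS}}}label", {"value": label_text})
    ET.SubElement(window, f"{{{XUL_NS}}}button", {"label": "OK"})
    return ET.tostring(window, encoding="unicode")

doc = make_xul_applet("Hello", "A Java-less applet")
print(doc)
```

Loaded into a XUL-capable host, each element maps to a native-looking widget, which is what lets such a document replace a Java applet for simple GUIs.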
[October 21, 2003] "Opposition to the H.R. 3261, Entitled 'Database and Collections of Information Misappropriation Act'." Public interest community letter from ARL and others to Rep. James Sensenbrenner of the Committee on the Judiciary and Rep. Billy Tauzin of the Committee on Energy and Commerce expressing opposition to H.R. 3261. (October 21, 2003). "The private sector proponents of the bill have yet to offer a convincing case that existing federal and state laws, including federal copyright law, federal anti-hacking prohibitions, and a variety of state contract and tort laws, are insufficient to provide database producers with adequate protection. They have certainly failed to demonstrate a problem that would justify the fundamental and constitutionally suspect changes to our Nation's information policy called for in the legislation. We represent a broad and diverse coalition of database producers and users consisting of research and educational institutions, consumer and public interest groups, libraries, Internet and communications companies, financial services providers, and large corporate users of information. We believe the open sharing of information has been fundamental to our nation's advancement in knowledge, technology, and culture. We fear that legislation like H.R. 3261 will lead to the growing monopolization of the marketplace for information, where the ability to use facts is increasingly controlled by a small number of international publishing houses..." 
Signatures from: Amazon.com, American Association of Law Libraries, American Civil Liberties Union, American Historical Association, American Library Association, Association of Research Libraries, Bloomberg, LP, Charles Schwab & Co., Inc., CheckFree, Comcast, Computer & Communications Industry Association, Consumer Project on Technology, Digital Future Coalition, Electronic Frontier Foundation, Essential Information, Google, Information Technology Association of America, Media Access Project, Medical Library Association, National Academy of Engineering, National Academy of Sciences, National Business Coalition on E-Commerce and Privacy, NetCoalition, Public Knowledge, SBC Communications, Special Library Association, Society of American Archivists, The River, U.S. Chamber of Commerce, Verizon, Virginia ISP Association, Washington ISP Association, Worldnet Communications, Wyoming ISP Association, and Yahoo! Inc. See also: (1) testimony of Mr. William Wulf (President, National Academy of Engineering) and (2) testimony of Tom Donohue (President and Chief Executive Officer of the U.S. Chamber of Commerce). Donohue: "Our country's basic information policy provides that facts -- the building blocks of information -- cannot be owned..." See summary by Roy Mark. [cache]
[October 17, 2003] "House Panel Approves Database Protection Bill." By Roy Mark. In InternetNews.com (October 17, 2003). "The U.S. House Judiciary Committee's Subcommittee on Courts, the Internet and Intellectual Property voted 10-4 Thursday [2003-10-16] afternoon to approve legislation to prohibit the misappropriation of commercial databases. The measure now goes to the full committee... The Database and Collections of Information Misappropriation Act of 2003 (H.R. 3261), sponsored by Howard Coble (R.-N.C.), allows database owners to sue in civil court for damages arising from the theft of the information and represents, according to Coble, a compromise effort to create a balance between the interests of users and producers of databases. Various versions of the legislation have kicked around Congress for the last eight years with opponents, which include the U.S. Chamber of Commerce and college and university libraries, contending other laws on the books provide remedies for database owners. Thursday those same objections were raised by Democrats on the subcommittee. 'This is the classic solution in search of a problem,' said Rep. Rick Boucher (D.-Va.), who introduced several amendments to the bill that were all defeated. Boucher sought to exempt libraries from the penalty provisions of the bill and also wanted an amendment that would bar database owners from using the bill to protect legal materials produced by courts..." Note: this proposed piece of legislation [alt URL] is opposed especially by universities, research libraries and archivists, Internet Service Providers, the American Civil Liberties Union, and consumer groups. See Coalition opposition to HR 3261.
[October 17, 2003] "M-Commerce, Certifications Next for Liberty Alliance. Federated Network Identity Effort Proceeds." By Paul Roberts. In InfoWorld (October 14, 2003). "Single sign-on standards group the Liberty Alliance Project said Tuesday that it was taking over the work of European mobile computing standards group Radicchio Ltd. and that it will unveil a program to certify products and services for compliance with the Liberty Alliance's federated network identity standards. The announcements come as the trade group looks for ways to increase adoption of Liberty specifications and build a secure foundation for the growth of mobile and wireless transactions... Radicchio is a U.K.-based cross industry group that was created in 1999 to foster a secure platform for conducting transactions using mobile devices such as cell phones and PDAs (personal digital assistants). The group developed a platform called the 'Trusted Transaction Roaming platform,' or t2r, for authenticating mobile device users across different mobile networks. The t2r platform was recently submitted to the European Commission for evaluation. Under an agreement, which is still being negotiated, t2r will be transferred to the Liberty Alliance Project along with any other specifications and assets belonging to Radicchio, according to a statement released by Radicchio Tuesday at the ITU Telecom conference in Geneva. Once the transfer is complete, Radicchio will discontinue operations, according to James van der Beek, senior manager of strategy at Radicchio member Vodafone Group. The t2r platform uses the Liberty Alliance's Federated Identity Architecture, Radicchio said in its statement. The decision to fold Radicchio, which counts leading IT players including VeriSign, Telefonaktiebolaget LM Ericsson, Vodafone and Orange as members, grew out of the realization that the challenge of mobile commerce was converging with that of verifying user identity, Van der Beek said.
'Identity impacts everything and the Liberty Alliance is the place to handle identity,' he said. The merger also fits with the Liberty Alliance's focus on a new generation of identity services, according to Simon Nicholson, chairman of the Business and Marketing Expert Group at the Liberty Alliance and a manager of strategic initiatives at Sun Microsystems. Inheriting the t2r platform will give the Liberty Alliance a head start developing standards for mobile payment and wallet services, Nicholson said. 'It's a logical next step for the Liberty Alliance to solve those future problems,' he said. The Liberty Alliance is also launching a certification program to make sure single sign-on software products and services adhere to the group's published guidelines and interoperate with other Liberty products..." See the announcement "Radicchio to Submit M-Commerce and Security Standards Work to the Liberty Alliance Project." General references in "Liberty Alliance Specifications for Federated Network Identification and Authorization."
[October 17, 2003] "Microsoft and Vodafone: Mobile Web Service Standardisation." By [Computerwire/Datamonitor]. In The Register (October 15, 2003). "Microsoft and Vodafone are to develop XML-based specifications for mobile web services standards. Microsoft has selected telecoms operator Vodafone to help lead development of XML-based specifications for convergence of fixed and mobile applications. Microsoft and Vodafone are to work closely to create web services standards that can extend desktop applications to mobile devices. This could help extend Windows applications to mobile devices giving the desktop franchise new areas of growth. On October 13, 2003, Microsoft and Vodafone announced that they plan to unveil a roadmap of technical specifications later this month, at Microsoft's Professional Developers' Conference (PDC) in Los Angeles. The roadmap, though, continues Microsoft's attempt to subtly stamp its hallmark on web services standards whilst working with a revolving door of partners. In the world of desktop and server-based computing, Microsoft has collaborated with IBM to develop XML specifications and standards for web services. In April 2002, the companies published the WS-roadmap outlining a series of planned security, routing, business process and other specifications. Since then, the companies have consistently met that roadmap, working with a shopping list of partners such as BEA Systems and VeriSign where appropriate, while completely ignoring any similar, external efforts. The announcement appears to indicate that IBM's usefulness does not extend to the mobile sector, and Microsoft is attempting to repeat its success with, arguably, IBM's counterpart in the mobile and cell-phone based computing sector. Vodafone is a global telecommunications player with 123 million customers. Standards could open a back door to broader mobile success for Microsoft's struggling Smartphone operating system. 
Windows is being steamrollered by Sun Microsystems' Java 2 Micro Edition (J2ME) in Asia while Nokia, the world's largest handset manufacturer, has resisted Windows on its devices. Developing specifications that integrate with Smartphone and the limitations of mobile devices would potentially increase the operating system's appeal to operators. Standards could also help extend the desktop-bound Windows applications to mobile devices, breathing additional life into the desktop franchise..."
[October 11, 2003] "ebXML Registry Overview. What Is An ebXML Registry, and What Can It Do?" From the OASIS ebXML Registry Technical Committee. October 2003. 2 pages. "An XML registry is an information system that securely stores XML artifacts (e.g., XML schemas, data elements, etc.) and non-XML artifacts (e.g., other e-business objects), as well as details (metadata) about the artifacts. The storage facility (e.g., a file system or database) that holds registered objects is known as a repository, while the part of the information system that maintains the metadata for the registered objects is known as a registry. ebXML Registry Benefits: (1) Promotes service discovery and maintenance of registered content (2) Enables secure and efficient version control for registered content (3) Promotes unified understanding of registered content in federated registries (4) Ensures availability and reuse of authoritative artifacts. A controlled registration and validation of XML / non-XML artifacts from authoritative sources promotes interoperability between trading partners, and facilitates greater reuse (5) Enables collaborative development. Users can create XML / non-XML artifacts and submit them to an XML registry for use and potential enhancement by authorized parties. The enhanced versions can then be made available for access by other authorized parties. ebXML Registry allows a federation of cooperating registries to act as a single e-Business Registry. This ability enables the tying together of internal applications and the systems of critical trading partners via Seamless Query [search registered content in any registry in a federation, regardless of registry type], Seamless Synchronization [synchronization of registered content between all registries in a federation, regardless of registry type], and Seamless Relocation [relocation of registered content from one registry to another in a federation, regardless of registry type]... 
ebXML Registry enables a secure deployment with reduced implementation / maintenance costs for semantically correct and meaningful information exchange packages. To control how its content is accessed it leverages the XML Signature and XACML standards. Additionally, the ability to employ Policy Decision Points (PDPs) allows it to act as a 'Policy Store' for resources that reside outside its own registry... The ebXML Registry supports e-Business operations in different geographic locations, providing an effective discovery and collaboration tool. For example, (i) locating partners, capabilities, services, documents and business processes; (ii) interacting with partners, subsidiaries, and customers about multiple product lines. More than just a classification tool for electronic yellow pages, it enables the distributed marketing and promotion of products and services, viewing of inventory levels, ordering of goods and return tracking, and accessing account information via the Web..." See also: "New Production-Ready Release of Open Source ebXML Registry." General references in "Electronic Business XML Initiative (ebXML)." [source .DOC and PDF]
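The registry/repository split the overview describes -- the repository holds the artifacts themselves, the registry holds only the metadata about them -- can be sketched in a few lines. The class and field names below are invented for illustration; they do not come from the ebXML Registry specification.

```python
# Minimal sketch of a registry (metadata) over a repository (content).
# Submitting the same name twice yields a new version of the artifact.
import hashlib

class Repository:
    """Content-addressed store for registered artifacts."""
    def __init__(self):
        self._store = {}
    def put(self, content: bytes) -> str:
        key = hashlib.sha256(content).hexdigest()
        self._store[key] = content
        return key
    def get(self, key: str) -> bytes:
        return self._store[key]

class Registry:
    """Metadata records pointing into the repository."""
    def __init__(self, repository: Repository):
        self._repo = repository
        self._records = []
    def submit(self, name: str, content: bytes, classification: str):
        # simple version control: count prior submissions under this name
        version = 1 + sum(r["name"] == name for r in self._records)
        key = self._repo.put(content)
        self._records.append({"name": name, "version": version,
                              "classification": classification, "key": key})
    def query(self, classification: str):
        # discovery by classification, as in a registry browse/query
        return [r for r in self._records
                if r["classification"] == classification]

repo = Repository()
reg = Registry(repo)
reg.submit("invoice.xsd", b"<xs:schema/>", "schemas")
reg.submit("invoice.xsd", b"<xs:schema version='2'/>", "schemas")
print([(r["name"], r["version"]) for r in reg.query("schemas")])
```

A federation, in these terms, would route the same query to several Registry instances and merge the results, which is the "Seamless Query" idea the overview lists.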
[October 11, 2003] "ebXML Case Study: Centers for Disease Control and Prevention, Public Health Information Network Messaging System (PHINMS)." OASIS ebXML Member Section. Contributor: Alan Kotok (ebXML Forum). October 04, 2003. 8 pages. "The Public Health Information Network Messaging System (PHINMS) provides a secure and reliable messaging system for the Public Health Information Network. The Centers for Disease Control and Prevention (CDC) says that there are currently multiple systems in place that support communications for public health labs, the clinical community, and state and local health departments. However, many of these systems operate in isolation, not capitalizing on the potential for a cross-fertilization of data exchange. A crosscutting and unifying framework is needed to better monitor these data streams for early detection of public health issues and emergencies. To meet these requirements, the Public Health Information Network will enable a consistent exchange of response, health, and disease tracking data between public health partners. Ensuring the security of this information is also critical as is the ability of the network to work reliably in times of national crisis... Developed by the Centers for Disease Control and Prevention, PHINMS uses the ebXML infrastructure to securely transmit public health information over the Internet. PHINMS is a generic, standards-based, interoperable and extensible message transport system. It is platform-independent and loosely coupled with systems that produce outgoing messages or consume incoming messages... PHINMS has three major components: the Message Sender, Message Receiver, and Message Handler. The Message Sender functions as the client. It is a Java application that runs on a workstation or server. The Message Sender polls the Transport Queue for outgoing data. The Transport Queue can be a database table or a file system directory. 
When outgoing data is found, the Message Sender packages the data as an ebXML message and sends it to the Message Receiver. The Message Receiver functions as a server. It is a servlet that runs on a J2EE compliant application server. When the Message Receiver receives a message, it processes the message envelope, decrypts the message, verifies the signature and then forwards the message payload to the Message Handler or writes the message directly into a worker queue. The Message Handler can process synchronous messages posted by the message receiver or poll the worker queue. It is a servlet that runs on a J2EE compliant application server. The Message Handler and the Message Receiver can reside on the same system. When the Message Handler receives the message payload from the Message Receiver in synchronous scenarios, it processes the message payload and then sends a response, which contains the Message Handler's status, back to the Message Receiver. In asynchronous scenarios, the message handler polls its worker queue to receive the incoming message..." General references in "Electronic Business XML Initiative (ebXML)." [source PDF]
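The Message Sender's cycle described above -- poll the Transport Queue, package each payload as an ebXML message, hand it off for sending -- can be sketched as a short loop. The envelope fields and service names below are illustrative stand-ins, not the actual ebXML Message Service schema or PHINMS configuration.

```python
# Hedged sketch of a PHINMS-style sender pass: drain a transport queue,
# wrap each payload in an ebXML-message-like envelope, record the send.
import uuid

transport_queue = [b"<case-report/>", b"<lab-result/>"]  # stand-in for a DB table
sent = []

def package(payload: bytes) -> dict:
    """Wrap a payload in a minimal envelope (fields are illustrative)."""
    return {
        "MessageId": str(uuid.uuid4()),  # unique ID for reliable delivery
        "Service": "PHIN",               # hypothetical service name
        "Action": "SubmitReport",        # hypothetical action name
        "Payload": payload,
    }

def poll_once():
    """One pass of the sender loop: package and 'send' everything queued."""
    while transport_queue:
        payload = transport_queue.pop(0)
        sent.append(package(payload))

poll_once()
print(len(sent), sent[0]["Action"])
```

In the real system the receiver side would decrypt, verify the signature, and forward the payload to the Message Handler, as the case study describes.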
[October 11, 2003] "Interview with CEO of OASIS on Framework for Web Services Implementation Technical Committee." From Infocomm Development Authority of Singapore (IDA). October 08, 2003. "Patrick Gannon, President and Chief Executive Officer of the Organisation for the Advancement of Structured Information Standards (OASIS), was recently in Singapore for the launch of the OASIS Framework for Web Services Implementation (FWSI) Technical Committee. OASIS is an international not-for-profit non-governmental organisation (NGO) whose mission is to drive the development, convergence and adoption of e-business standards. The OASIS FWSI Technical Committee, co-chaired by IDA and SIMTech, is the first Asian-led Technical Committee in OASIS... Mr Gannon sees the FWSI Technical Committee as an example of government-sponsored research paying off for the industry. He notes, 'The OASIS FWSI Technical Committee came about as a result of the Web Services Reference Architecture (WSRA) that the Singapore Institute of Manufacturing Technology, a research institute, had developed under funding from IDA and A*STAR. OASIS saw value in the WSRA and suggested that IDA help to form an OASIS Technical Committee based on it.' An industry veteran of 35 years who has held senior e-commerce positions at BEA Systems, Netfish Technologies and the CommerceNet Consortium, Mr Gannon sees this initiative to develop Web Services standards as an excellent example of public-private sector collaboration, which brings together government funding and private sector contributions to offer greater benefits to the broader local business community and, ultimately, the worldwide e-business community..." See also: (1) "Singapore to Lead in Setting Global Standards for Web Services. IDA Singapore and SIMTech to Co-Chair Committee"; (2) other details in the news story "OASIS Announces Framework for Web Services Implementation (FWSI) TC." [cache]
[October 10, 2003] "I.B.M. and Cisco Plan Venture to Develop Software Standards." By Steve Lohr. In New York Times, Technology Section (October 10, 2003). IBM and Cisco Systems announce "that they will jointly develop and promote open software standards intended to simplify the increasingly complex task of managing corporate data networks. The software technologies developed in the collaboration will be shared with the rest of the industry, the companies said. In addition, I.B.M. said it had submitted a new problem-tracing method to a technology standards group so that it could be used throughout the industry... For businesses, the challenge of managing data centers and electronic commerce, analysts agree, is huge. The shift of commerce to the Internet has created numerous technical headaches. A company's computer network must interact with an outside world of all kinds of technologies, vastly increasing security risks and complexity. Industry analysts estimate that up to 80 percent of the information technology budgets at most companies is spent on fixing problems and keeping the systems running. The I.B.M.-Cisco collaboration, analysts say, is a step toward reducing some of those costs by introducing technology that can automate the detection, correction and prevention of problems in technically diverse computer systems... Several large computer companies and software makers including I.B.M., Hewlett-Packard, Sun Microsystems, Computer Associates and Veritas are working on this kind of network management technology. The challenge has also attracted start-ups like Opsware. The companies often use marketing terms like 'autonomic' and 'adaptive' computing to try to convey the appeal of what is a difficult, abstract subject. Today, the companies have somewhat different approaches and often use different technologies. 
The I.B.M.-Cisco collaboration is an effort to create certain standards for systems management technology - essentially creating a common language for naming and reporting the routine actions of computer systems and the failures that occur... The systems management standards being proposed by I.B.M. and Cisco would build on Web services. I.B.M. has proposed a standard way to identify and trace information about problems in large software applications on computer systems, called the Common Base Event format, and submitted it to an independent standards group, the Organization for the Advancement of Structured Information Standards. 'We're trying to put the pieces together, and trying to get buy-in from the rest of the industry,' said Ric Telford, director of technology at I.B.M.'s autonomic computing unit..." Originally published at www.nytimes.com/2003/10/10/technology/10blue.html. For information on Common Base Event (CBE) format, see References and Additional Information in the IBM/Cisco press release.
[October 10, 2003] "IBM, Cisco Help Networks Help Themselves. Companies Announce Drive Toward Self-Diagnostic, Self-Healing Networks." By Stephen Lawson. In InfoWorld (October 10, 2003). "IBM Corp. and Cisco Systems Inc. want to make it easier to diagnose and solve problems in an enterprise's IT infrastructure, even to the point where it can do that by itself... Self-diagnosis and self-healing are key parts of IBM's broader autonomic computing initiative, aimed at creating systems and networks that in many respects run themselves, said Ric Telford, director of architecture and technology in the autonomic computing business of Armonk, New York-based IBM. Companies can never remove the human administrator from the picture completely, but Cisco and IBM's steps should make life easier even when people have to get involved... The IBM-developed Common Base Event (CBE) specification defines a standard format for event logs, which devices and software use to keep track of transactions and other activity. All the components of systems typically have different formats for the information they collect about events, Telford said. For example, if an IS team needs to figure out where something went wrong with an e-business application, they may need to understand 40 different event log formats, he said. Root cause analysis of the problem could require several different administrators -- database, network and so on -- getting involved. As a common format, CBE can simplify that process, Telford said. Future products should use CBE as their native log format, but "log adapters" can define mappings between current proprietary log formats and CBE, he said. IBM now has a team of about 24 engineers developing log adapters for core IBM products, including hardware, software and storage products, according to the company. In August, IBM proposed CBE as a standard to the Organization for the Advancement of Structured Information Standards (OASIS)... 
Another piece of the puzzle is a log and trace analyzer, a visual tool for administrators to study log files in various views. IBM has already made a log and trace analyzer available in its WebSphere Studio application development platform, where developers can use it to work out problems before a product is deployed. For IS administrators using production versions of software, IBM probably will ship a log and trace analyzer as part of its Tivoli system management software, at an undetermined future date, Telford said. Other things that could be standardized include correlation mechanisms -- ways of associating events with one another -- and filters for sifting out the events that are relevant, Telford said. Depending on who at IBM, Cisco or other companies develops methods of doing that, such methods could be proposed to OASIS or another standards body for approval..." For information on Common Base Event (CBE) format, see References and Additional Information in the IBM/Cisco press release.
[October 10, 2003] "IBM, Cisco Push Data Center Standard." By Martin LaMonica. In CNET News.com (October 10, 2003). "IBM and Cisco Systems on Friday announced that they are spearheading an effort to create an industrywide method for troubleshooting glitches in complex-computing data centers. The two companies will pursue the standardization of problem-resolution techniques through submissions to the Organization for the Advancement of Structured Information Standards (OASIS), a standards body. Their initial proposal for a standard is based on IBM's work on autonomic computing, which envisions computing systems that can automatically fix their own problems without human intervention. IBM and Cisco seek to tackle the lack of a common reporting format for application failures in the disparate parts of a corporate data center, Alan Ganek, vice president of autonomic computing at IBM, said Thursday. The individual elements of a data center -- such as servers, software components and networking gear -- each contain a log that helps systems administrators locate the source of a problem. Typically, the logging information from these elements is tracked separately and not collated or compared. This slows down problem resolution, according to Ganek. To expedite the process, IBM and Cisco are proposing a single data format that would allow disparate systems to share troubleshooting information... [IBM's Common Base Event format], based on IBM's existing 'log and trace' tools, aims to provide a way for hardware and software from multiple companies to share logging information. 'If you expect to add intelligence in order to manage systems, you better understand what the systems are doing. And that means you have to put in the instrumentation to capture what's going on in the system,' Ganek said. IBM expects to finish its own set of network problem resolution tools, including the log and trace software, by the end of the year..." 
For information on Common Base Event (CBE) format, see References and Additional Information in the IBM/Cisco press release. See also the OASIS WSDM TC, to which the CBE draft specification was submitted for consideration.
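The "log adapter" idea running through these three articles -- mapping each proprietary log format into one common event record so problems can be correlated across components -- can be sketched as a small translation function. The field names below are illustrative only; they are not the actual Common Base Event schema.

```python
# Sketch of a log adapter: translate one proprietary 'LEVEL component: msg'
# log line into a common event record. With one adapter per source format,
# downstream tools see a single event shape instead of 40 different ones.
import re

def adapt_simple_log(line: str) -> dict:
    """Map a '[level] component: message' line to a common record."""
    m = re.match(r"\[(\w+)\] (\S+): (.*)", line)
    if not m:
        raise ValueError("unrecognized log line")
    level, component, msg = m.groups()
    return {
        "severity": level.upper(),      # normalized severity
        "sourceComponent": component,   # which part of the system spoke
        "msg": msg,                     # free-text detail
    }

event = adapt_simple_log("[error] db-pool: connection refused")
print(event["severity"], event["sourceComponent"])
```

Root-cause analysis across a database, an application server, and a router then becomes a query over uniform records rather than a manual read of several log dialects.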
[October 09, 2003] "Developers Gripe About IE Standards Inaction." By Paul Festa. In CNET News.com (October 09, 2003). "Web developers want to light a fire under Microsoft to get better standards support in the company's Internet Explorer browser, but they can't seem to spark a flame. Gripes have mounted recently over support in IE 6 for Cascading Style Sheets (CSS), a Web standard increasingly important to design professionals. Web developers and makers of Web authoring tools say the software giant has allowed CSS bugs to linger for years, undermining technology that promises to significantly cut corporate Web site design costs. Seeking to goad Microsoft into action, digital document giant Adobe Systems last week unveiled a deal to bolster support for CSS in its GoLive Web authoring tool with technology from tiny Web browser maker Opera Software, whose chief technology officer first proposed CSS nine years ago. Opera maintains an active role in developing CSS through the World Wide Web Consortium (W3C). But standards advocates said it was unclear whether Adobe's action could prod Microsoft into better CSS support, given the lack of browser competition. 'Because it owns the marketplace, Microsoft's under very little pressure to fix remaining IE 6 bugs,' said Jeffrey Zeldman, an independent Web developer and cofounder of the Web Standards Project. 'When it formed this partnership with Opera, Adobe may have wanted to light a fire under IE, but lots of people have wanted to do that and have not been able to.' ... Complaints over Microsoft's CSS support come amid broader criticisms that improvements in browser technology have slowed to a glacial pace since the software giant crushed credible competition in the market--an outcome that some view as ironic given Microsoft's cries during the antitrust trial that court-mandated restraints on its ability to bundle applications would stifle innovation. 
'While it is true that our implementation is not fully, 100 percent W3C-compliant, our development investments are driven by our customer requirements and not necessarily by standards,' said Greg Sullivan, a lead product manager with the Windows client group. When it was pointed out that the most vocal critics of IE's CSS support are Web developers and authoring tool makers, rather than standards bodies, Sullivan said those critics were comparatively few... Developers and toolmakers disagree on the degree to which Adobe and Macromedia can influence Microsoft's decisions with respect to CSS or other IE development. One authority in CSS, a former Netscape engineer now advising companies on their use of the technology, expressed hope that Adobe's deal with Opera could advance CSS and spur changes to IE..." See the Web Standards Project, "a grassroots coalition fighting for standards that ensure simple, affordable access to web technologies for all"; also WaSP's "End of Free IE" opinion.
[October 09, 2003] "Diagramming the XML Family." By Daniel Zambonini. In XML.com (October 08, 2003). ['Daniel Zambonini illustrates the XML family using RDF, SVG, XSLT and XSL-FO.'] "In this article we'll introduce some of the XML family members and discuss how they relate to one another. We'll then use these technologies to create a diagram of their relationships in order to demonstrate how they work together in practice. Of the hundreds of XML technologies in use, we'll limit the scope of this article to the technologies used in the creation of the diagram... We use Apache FOP. FOP converts our plain text XSL-FO into a PDF file, rasterizing the SVG into a diagram on the page (using Apache Batik)... We use terms from the Dublin Core RDF Schema (which makes use of XML, Namespaces and URIs) to create an RDF description for each technology. These are converted to SVG using XSLT and XPath. We could have validated our XSLT file with XML Schema (using the XSLT schema). The SVG diagram is finally embedded into an Adobe PDF document with XSL-FO, resulting in a printable file that contains a diagram of the technology relationships... We finally have our PDF diagram of the XML technologies... When XML and RDF data become ubiquitous on the Web, the potential for querying and displaying the information will be enormous. The tools and underlying technologies are already in place. All that's needed is a greater understanding of the potential that it offers. The growth of these technologies is limited largely by our reluctance to commit..."
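The middle step of Zambonini's pipeline -- turning relationship data into an SVG diagram -- can be sketched without XSLT. The snippet below generates a tiny SVG fragment from a list of technology relationships using the Python standard library; the article itself uses XSLT and XPath for this step, so this is only a stand-in showing the shape of the output.

```python
# Illustrative sketch: emit an SVG fragment listing technology relationships,
# one labelled line per (technology, depends-on) pair. The pairs are sample
# data, not the article's actual RDF descriptions.
import xml.etree.ElementTree as ET

relations = [("XSLT", "XPath"), ("SVG", "XML")]  # (technology, depends-on)

svg = ET.Element("svg", {"xmlns": "http://www.w3.org/2000/svg",
                         "width": "300", "height": "100"})
for i, (tech, dep) in enumerate(relations):
    y = str(30 + 30 * i)                         # stack the labels vertically
    text = ET.SubElement(svg, "text", {"x": "10", "y": y})
    text.text = f"{tech} \u2192 {dep}"

out = ET.tostring(svg, encoding="unicode")
print(out)
```

In the article's full pipeline, a fragment like this is then embedded in an XSL-FO document and rendered to PDF by Apache FOP and Batik.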
[October 09, 2003] "Thinking XML: Semantic Anchors for XML. Universal Identifier Schemes for XML Interchange." By Uche Ogbuji (Principal Consultant, Fourthought, Inc). From IBM developerWorks, XML zone. October 7, 2003. ['XML syntax is just the foundation for data interoperability. The next step is semantic transparency. Some groups are working to address this by defining entire document formats to be adopted wholesale, while other groups are working on ways to express common terminology and concepts at a more granular level. In this installment, Uche Ogbuji looks at XML Topic Maps Published Subjects and Universal Data Element Framework (UDEF), two ideas that take the granular approach by seeking to provide anchors in the semantic stream.'] "XML only provides the most basic foundation towards the goal of universal information interchange. XML is thoroughly established, and a great deal of the effort to build standards on top of XML has been directed towards semantic transparency, which would allow disparate systems to share some understanding of the actual concepts that are represented in some structured form in XML documents. See the inaugural Thinking XML article for a discussion of semantic transparency. Many approaches are taken toward such an ambitious goal, but I tend to classify these into two main categories: (1) Top-down initiatives define entire document formats along with the semantics of all the elements, attributes, and content, usually by reference to relevant industry standards. Examples are OAGIS and Universal Business Language (UBL). (2) Bottom-up initiatives define terms and concepts at the discrete level, independently of the documents in which they would appear. Examples are the ISO Basic Semantics Register (BSR), an effort that unfortunately seems to have stalled, and RosettaNet Dictionaries. Top-down approaches are often less ambitious in scope and positioned for industry backing. 
Bottom-up approaches have broader potential, but are also far more difficult to develop and evangelize. RosettaNet is rather interesting in that it pursues both, providing dictionaries and document schemata. Also, UBL shares close ties with bottom-up efforts in the ebXML space. The terms and concepts formally defined in dictionaries and semantic registries are the anchors on which you can build generalized semantics for communications in XML. In this article, I shall look at two additional initiatives to build such anchors... UDEF offers examples of how to use its IDs in ebXML and OAGIS as well as RDF and XML schemata. But it is still interesting to consider whether top-down or bottom-up approaches will be most crucial in establishing true interoperability at the semantic level. Will it take the establishment of complete and coherent document standards that can be readily used, or do the basic building blocks of a shared terminology have to be in place so that interoperability is possible even without agreement on precise document standards?"
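The "anchor" idea is simple to demonstrate: if two vocabularies annotate their local element names with the same dictionary identifier, a mediator can pair up the fields without either side adopting the other's document format. The registries and IDs below are invented for illustration and do not follow the actual UDEF numbering scheme:

```python
# Two parties' local vocabularies, each annotated with shared
# (hypothetical) dictionary anchor IDs rather than with each other's names.
REGISTRY_A = {"custName": "dict:party.name", "invTotal": "dict:amount.total"}
REGISTRY_B = {"BuyerName": "dict:party.name", "GrandTotal": "dict:amount.total"}

def correlate(reg_a, reg_b):
    """Pair element names from two vocabularies that share an anchor ID."""
    by_id = {anchor: name for name, anchor in reg_b.items()}
    return {a_name: by_id[anchor]
            for a_name, anchor in reg_a.items() if anchor in by_id}

mapping = correlate(REGISTRY_A, REGISTRY_B)
# mapping pairs each of A's field names with B's equivalent
```

No document-level agreement is needed; only the anchor dictionary is shared, which is the bottom-up bet described above.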
[October 08, 2003] "Eolas Files Motion to Enjoin IE." By Paul Festa. In CNET News.com (October 08, 2003). "Eolas Technologies on Monday [2003-10-06] filed a motion to permanently enjoin Microsoft's distribution of its Internet Explorer browser amid a flurry of court filings by both sides in the pivotal patent-infringement case. Eolas, the sole licensee and sublicensor of a browser plug-in patent owned by the University of California, asked the U.S. District Court in Chicago for an injunction against distributing copies of IE capable of running plug-in applications in a way covered by the Eolas patent... The Eolas patent-infringement victory has rattled the Web since it was handed down in August. In its verdict, a jury found that Microsoft's IE browser infringed on an Eolas patent that describes how a browser opens external applications of the type produced by Macromedia, Adobe Systems, RealNetworks, Apple Computer, Sun Microsystems and many other software providers. Lueck said Eolas would still permit Microsoft to distribute IE as is, as long as it's being used in conjunction with an application provider or corporate intranet that has an Eolas plug-in license. So far, Eolas has not granted any such licenses... Microsoft said it is well on its way to side-stepping both the patent and a potential injunction with an IE alteration it previewed Monday, a version of which it expects to introduce early next year in its next version of Windows, code-named Longhorn... [Eolas and the University of California said they are] willing to resolve the case on a very reasonable basis, but Microsoft contested Lueck's characterization of the offer as 'reasonable,' and said the company preferred to pursue its workaround strategy rather than sign a deal. 'In addition, the changes we rolled out for IE are modest and will not have significant impact on consumers or the Web community as a whole,' said Microsoft's Wallent. 
'Based on that, the idea that we would pay more than $630 million to get rid of a single mouse click on a small fraction of Web pages is not something that we're entertaining'..." See: (1) "W3C Opens Public Discussion Forum on US Patent 5,838,906 and Eolas v. Microsoft"; (2) Microsoft Adjusts Windows and IE to Address Eolas Patent Ruling; (3) general references in "Patents and Open Standards."
[October 07, 2003] "A Comparison of Web Services Transaction Protocols. A Comparative Analysis of WS-C/WS-Tx and OASIS BTP." By Mark Little (Arjuna Technologies Ltd) and Thomas J. Freund (IBM). From IBM developerWorks, Web services. October 7, 2003. ['Up to August 2003 there were two contenders for the Web services transaction space: OASIS Business Transactions Protocol (BTP), and the Web Services Transactions (WS-Tx) specification. There have been several subjective articles and comments comparing BTP to WS-Tx, attempting to show that BTP can do everything WS-Tx can and ignoring the important differences that exist. This article will try to give an objective comparison of these two specifications and show how they both attempt to address the problems of running transactions with Web services. At the end of the article it should be apparent how and why WS-Tx and BTP are different, while at the same time illustrating where they do have some commonality.'] "In 2001, a consortium of companies including Hewlett-Packard, Oracle and BEA began work on the OASIS Business Transaction Protocol (BTP), which was aimed at business-to-business transactions in loosely-coupled domains such as Web services. By April 2002 it had reached the point of a committee specification. However, others in the industry, including IBM, Microsoft, and BEA released their own specifications: Web Services Coordination (WS-C) and Web Services Transactions (WS-Tx)... Although we'll examine this in more detail later, the key differences between these specifications can be roughly categorized as follows: (1) BTP is not specifically about transactions for Web services -- the intention was that it could be used in other environments. As such, BTP defines the transactional XML protocol and must specify all of the service dependencies within the specification. WS-C and WS-Tx are specifically for the Web services environment and hence build on the basic definition of a Web services infrastructure. 
(2) The foundations of WS-Tx are based on traditional transaction infrastructures, where there is a strong separation between the functional aspects of business logic and the non-functional aspects of using transactions within an application. BTP essentially started from scratch and requires business-level decisions to be incorporated within the transaction infrastructure... In this paper we'll give an objective analysis of these two transaction protocols and compare and contrast the approaches they have taken. Because there are a number of good texts available on OASIS BTP we will not spend as much time describing that protocol as we will for WS-C and WS-Tx where less information is currently available... A few years ago the world of Web services and transactions looked like requiring new techniques to address the problems that it presented, and BTP was seen as the solution to those problems. Unfortunately, with the benefit of hindsight it did not address what users really want: the ability to use existing enterprise infrastructures and applications and for Web services transactions to operate as the glue between different corporate domains. Although the BTP model has some similarities with WS-Tx, the two specifications differ in some critical areas. For example, transaction interoperability: most enterprise transaction systems do not expose their coordinators through the two-phase protocol. In addition, BTP has many subtle (and some not-so-subtle) impacts on implementations, both at the transaction level and, more importantly, at the user/service level. Much has been made of the fact that ACID transactions aren't suitable for loosely-coupled environments like the Web. However, very little attention has been paid to the fact that these loosely-coupled environments tend to have large strongly-coupled corporate infrastructures behind them. 
Any Web services transactions specification should not ask 'what can replace ACID transactions?', but rather 'how can we leverage what already exists?' Note: The article concludes with a table showing a summary of the various differences and similarities between WS-C/T and BTP. See: (1) "Updated Specifications for the Web Services Transaction Framework"; (2) "Messaging and Transaction Coordination."
[October 07, 2003] "Transactions in Business Processes: A New Model." By Rich Rollman and William Cox (BEA Systems). In Web Services Journal Volume 3, Issue 10 (October 2003), page 34. "Business processes use transactions to ensure that all activities complete as a unit. But because activities in a business process utilize resources from many business partners and can execute for hours, days, or longer, the transactions must be managed differently. Consider a purchasing business process. It might solicit quotes, reserve inventory, issue purchase orders, confirm receipt of items, and transfer funds. For scenarios like this, using a single ACID transaction for the entire business process is impractical. Business processes require a new kind of transaction. Long-running transactions avoid locks on non-local resources, use compensation to handle failures, potentially aggregate smaller ACID transactions, and typically use a coordinator to complete or abort the transaction. In contrast to rollback in ACID transactions, compensation restores the original state, or an equivalent, and is business-specific. The compensating action for making a hotel reservation is canceling that reservation, possibly with a penalty. A number of protocols have been specified for long-running transactions using Web services within business processes. WS-Transaction with WS-Coordination, the OASIS Business Transaction Protocol, and WS-CAF are examples. These protocols use a coordinator to mediate the successful completion or use of compensation in a long-running transaction... Efforts to standardize business processes like Process Definition for Java (JSR 207) and OASIS' WSBPEL will drive the creation and acceptance of standards for long-running transactions and compensation. Working with long-running transactions, these technologies will expand a business's ability to share and execute business processes with and among its trading partners..." 
See also: (1) "OASIS Members Form ebXML Business Process Technical Committee"; (2) "Standards for Business Process Modeling, Collaboration, and Choreography."
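The compensation model the article describes can be reduced to a small, protocol-neutral sketch: each completed step registers an undo action, and on failure the coordinator runs the undos newest-first. This illustrates the general pattern only, not WS-Transaction, BTP, or WS-CAF specifically; all step names are invented:

```python
class LongRunningTransaction:
    """Toy coordinator for the compensation pattern."""

    def __init__(self):
        self._compensations = []

    def step(self, action, compensation):
        """Run one activity; remember how to undo it if needed later."""
        result = action()
        self._compensations.append(compensation)
        return result

    def fail(self):
        """Business failure: compensate completed steps, newest first."""
        while self._compensations:
            self._compensations.pop()()

log = []
txn = LongRunningTransaction()
txn.step(lambda: log.append("reserve hotel"),
         lambda: log.append("cancel hotel (penalty may apply)"))
txn.step(lambda: log.append("reserve flight"),
         lambda: log.append("cancel flight"))
txn.fail()  # e.g., payment was declined downstream
```

Note that, unlike ACID rollback, each undo here is a business action in its own right (a cancellation, possibly with a penalty), exactly as the article contrasts.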
[October 07, 2003] "Why Use DITA to Produce HTML Deliverables? Overcoming the Limitations of HTML." By John Hunt (User Assistance Architect, IBM), Don Day (Lead DITA Architect, IBM), Erik Hennum (DITA Domain Architect, IBM), Michael Priestley (DITA Specialization Architect, IBM), and Dave A. Schell (Chief Strategist and Tools Lead, IBM). From IBM developerWorks, XML zone. October 7, 2003. ['The Darwin Information Typing Architecture (DITA) is an XML-based format for structuring and authoring technical content. This article explores advantages DITA provides for producing HTML content -- including easy global changes, portability through standards, superior linking and Web management, conditional processing, content and design reuse, and better writing through focused content. DITA consolidates all of the benefits in a consistent, overall information architecture that can evolve and grow along with your product information needs and delivery modes, and with the evolution of standard tools for delivering XML as the presentation mechanism.'] The authors address a FAQ like this: Since my output deliverables are exclusively HTML and will stay that way for the foreseeable future, why go the route of creating content in an intermediary XML format and generating HTML? Why not produce HTML directly with an HTML editor? Why go through the extra cycles involved in producing and managing DITA XML content when it seems so much easier to simply write and produce the HTML directly, with tools that were specifically created to support doing that? "DITA leverages the advantages inherent in XML and extends beyond those advantages in the following ways: (1) Easy global changes through customized transforms. With DITA and XSLT, you can update the structure and presentation of an entire information set by applying a consistent, core transform. (2) Portable through standards: Using DITA, product groups and external business partners can easily share and exchange content. 
(3) Linking and Web management: DITA makes it possible to create and maintain cross-topic links from outside the topic itself; you can apply different sets of links in different situations. (4) Conditional processing: With DITA, you can tag parts of a topic by product, audience, or other characteristics. (5) Reuse: You can reuse topics in different collections using maps, and you can reuse content between topics as well, maintaining common elements like definitions, warnings, and product names in a central place. (6) Focused content and better writing: Topic-based authoring produces better writing... It's possible to achieve some of the above benefits through highly disciplined authoring of HTML and subsequent processing of the authored HTML. However, this quickly becomes a bits-and-pieces process. For example, you might tweak HTML to support a form of conditional processing, but in so doing make it difficult to generate a customized presentation. Then, when you tweak the HTML to improve the presentation, you might need to re-work the content and form of the topic navigation links. XML and DITA overcome this bits-and-pieces problem of HTML. DITA consolidates all of the benefits in a consistent, overall information architecture that can evolve and grow along with your product information needs and delivery modes, as well as the evolution of standard tools for delivering XML as the presentation mechanism..." See general references in "Darwin Information Typing Architecture (DITA XML)."
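DITA's conditional processing is easy to demonstrate: elements carry attributes such as audience, and a build step drops whatever does not match the target. A minimal sketch in Python follows (the toy topic is illustrative, not a complete DITA document):

```python
import xml.etree.ElementTree as ET

TOPIC = """<topic id="install">
  <title>Installing</title>
  <body>
    <p>Run the installer.</p>
    <p audience="administrator">Grant service accounts first.</p>
  </body>
</topic>"""

def filter_audience(elem, audience):
    """Remove descendants tagged for a different audience (in place)."""
    for child in list(elem):
        if child.get("audience") not in (None, audience):
            elem.remove(child)
        else:
            filter_audience(child, audience)

root = ET.fromstring(TOPIC)
filter_audience(root, "user")
# The administrator-only paragraph is gone from the "user" build;
# the same source yields a different deliverable for each audience.
```

The same single-source topic can thus feed any number of audience- or product-specific HTML builds, which is the point the authors make against hand-tweaked HTML.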
[October 06, 2003] "Business-Centric Methodology Specification." From the OASIS BCM Technical Committee (Bruce Peat, Mike Lubash, David RR Webber, Eric Okin, Carl Mattocks, Hans Aanesen, Sally St. Amand, Laila Moretto, Dan Pattyn, Paul Kirk, Bob Greeves, and Murali Iyengar). Version 0.05. September 28, 2003. 75 pages. ['This specification covers the requirements associated with the Phase 1 implementation of the BCM, which is limited to defining the BCM vision, and sets out to define a methodology which allows business users and experts to participate in the development process.'] "The Business-Centric Methodology (BCM) for Enterprise Agility and Interoperability is a roadmap for the development and implementation of procedures that produce effective, efficient, and sustainable interoperability mechanisms. The methodology emphasizes 'Business First,' shifting power and responsibility to the users -- customers and business domain experts. Business is defined for this specification in broad terms as the reason for an organization's existence -- their functional domain. The BCM task is to provide an overall roadmap for developing interactions between collaboration partners and within Communities of Interest (CoI). The roadmap can be used for new development, for providing guidance in defining requirements for the procurement of products, and for providing the structure for interfacing to extend legacy applications and services. The BCM offers an approach for managers facing the problem of tying together disparate systems and services. The approach extends the traditional Enterprise Application Integration (EAI) model, which only provides internal viewpoints and reengineering of an organization's processes. The critical BCM take-away is that of providing a holistic solution to the interoperability quandary business and technical managers face today by providing an organizational memory that is persistent. 
This memory is also agnostic to the implementation architecture and enables business personnel to understand, direct and manage the operations. This approach is at the heart of the BCM and is implemented as a series of BCM Templates for each of the architecture layers that the BCM defines. The BCM Templates prompt for the information artifacts required for proper control, understanding, and building of a shared information architectural foundation. The BCM Templates provide for the precise communication required for not only business understanding but also for directing and controlling the application implementation; an example set of BCM Templates are provided in Appendix A. Templates can be used both internally and externally. Ideally collections of BCM Templates are shared across a CoI to foster adoption, promote re-use and align implementation efforts. The BCM is not intended to be an end-point solution but rather a point-of-departure for, and enabler of, downstream analysis, development and implementation. The intent of the BCM is to provide flexible guidance to those tackling the difficult challenge of interoperability at both tactical and strategic levels..." See: (1) BCM TC Call For Comment; (2) BCM TC FAQ document; (3) general references in "Business-Centric Methodology."
[October 06, 2003] "Web Services Orchestration and Choreography." By Chris Peltz (Hewlett-Packard Company). In IEEE Computer Volume 36, Number 10 (October 2003), pages 46-52 (with 6 references). ['Combining Web services to create higher level, cross-organizational business processes requires standards to model the interactions. Several standards are working their way through industry channels and into vendor products.'] "The terms orchestration and choreography describe two aspects of emerging standards for creating business processes from multiple Web services. The two terms overlap somewhat, but orchestration refers to an executable business process that can interact with both internal and external Web services. Orchestration always represents control from one party's perspective. This distinguishes it from choreography, which is more collaborative and allows each involved party to describe its part in the interaction... Proposed orchestration and choreography standards must meet several technical requirements for designing business processes that involve Web services. These requirements address both the language for describing the process workflow and the supporting infrastructure for running it. First, asynchronous service invocation is vital to achieving the reliability and scalability that today's IT environments require. The capability to invoke services concurrently can also enhance process performance. Implementing asynchronous Web services requires a mechanism to correlate requests with each other. Software architects commonly use correlation identifiers for this purpose. The process architecture must also provide a way to manage exceptions and transactional integrity. 
In addition to handling errors and time-out constraints, orchestrated Web services must ensure resource availability for long-running distributed transactions. Traditional ACID (atomicity, consistency, isolation, and durability) transactions are typically not sufficient for long-running, distributed transactions because they cannot lock resources in a transaction that runs over a long time. The notion of compensating transactions offers a way to undo an action if a process or user cancels it. With compensating transactions, each method exposes an undo operation that a transaction coordinator can invoke if necessary. Web services orchestration must be dynamic, flexible, and adaptable to meet changing business needs... While BPEL4WS, WSCI, and BPML work their way through standards processes and into vendor product implementations, other enhancements and issues relevant to Web services orchestration are emerging. IBM researchers have proposed a peer-to-peer model of e-business interaction. They compare current Web services to a vending machine -- a set number of buttons that can be pressed in a predefined order. They propose a conversational model -- more like a telephone call with flexible, dynamic exchanges between the parties at each end. At this time, IBM's Conversation Support for Web Services is the only proposal that claims to support this capability..." See: (1) IBM's "Conversation Support"; (2) "Business Process Execution Language for Web Services (BPEL4WS)"; (3) general references in "Messaging and Transaction Coordination."
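The correlation-identifier mechanism mentioned above can be sketched generically: every request carries a fresh ID, and replies quote it so the orchestrator can match them regardless of arrival order. The message shapes here are invented for illustration, not taken from any of the named specifications:

```python
import uuid

pending = {}  # correlation id -> outstanding request context

def send_request(payload):
    """Tag an outgoing request with a fresh correlation ID."""
    corr_id = str(uuid.uuid4())
    pending[corr_id] = {"payload": payload}
    return {"correlationId": corr_id, "body": payload}  # message "on the wire"

def on_reply(message):
    """Match an incoming reply to its outstanding request."""
    ctx = pending.pop(message["correlationId"], None)
    if ctx is None:
        raise KeyError("reply for unknown or already-answered request")
    return ctx["payload"], message["body"]

req = send_request({"op": "getQuote", "item": "widget"})
# ... later, a reply arrives asynchronously, quoting the request's ID:
reply = {"correlationId": req["correlationId"], "body": {"price": 10}}
matched = on_reply(reply)
```

Because matching is keyed on the ID rather than on arrival order, many such requests can be in flight concurrently, which is what makes asynchronous invocation scale.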
[October 06, 2003] "Turning Software into a Service." By Mark Turner, David Budgen, Pearl Brereton (Keele University, Staffordshire, UK). In IEEE Computer Volume 36, Number 10 (October 2003), pages 38-44 (with 11 references). ['The software as a service model composes services dynamically, as needed, by binding several lower-level services -- thus overcoming many limitations that constrain traditional software use, deployment, and evolution.'] "The authors explore the concept of software as a service, which envisages a demand-led software market in which businesses assemble and provide services when needed to address a particular requirement. The SaaS vision focuses on separating the possession and ownership of software from its use. Delivering software's functionality as a set of distributed services that can be configured and bound at delivery time can overcome many current limitations constraining software use, deployment, and evolution... When developing complex Web services, the lack of a universally accepted protocol that provides all the functionality required at each layer can cause problems. Adding to this confusion is the lack of an overall definition for the actual layers such a stack requires. The many standards organizations and companies involved all have different visions of the layers and protocols that make up the Web services architecture. IBM produced one of the stack's original definitions in its Web Services Conceptual Architecture document. It included the three de-facto standards at the XML-based messaging, service implementation, and description and discovery layers, along with a service flow layer that incorporated IBM's Web Services Flow Language. However, the latter has now been combined with Microsoft's XLANG protocol to produce a new set of protocols: the Business Process Execution Language for Web Services (BPEL4WS). 
The W3C Web Services Architecture group also is working on its own stack version to standardize the required layers, again emphasizing the three basic protocols. Few of the available stacks include any detail on the semantic Web protocols or the more business-oriented Electronic Business using Extensible Markup Language (ebXML). As a result, which technologies to use at each level -- and even which of the available technologies are compatible -- remains unclear. To this end, we propose an updated Web services stack framework that places the currently available initiatives in context. The stack framework consists of several open-systems-interconnection-type layers, with each level using the services of the levels below it: (1) Network - the underlying transport protocol layer; (2) XML-based messaging; (3) Service description - provides the functional description of a Web service in terms of its interface and implementation; (4) Nonfunctional description - protocols at this layer describe a service in terms of its less technical features, such as quality of service, cost, geographic location, number of retries, and legal factors; (5) Conversations - describes the correct data types and sequence of messages or documents a Web service is exchanging; (6) Choreography; (7) Transactions; (8) Business process and workflow; (9) Contracts - the format of the machine-readable contracts; (10) Discovery..."
[October 06, 2003] "CPXe: Web Services for Internet Imaging." By Timothy Thompson, Rick Weil, and Mark D. Wood (Kodak). In IEEE Computer Volume 36, Number 10 (October 2003), pages 54-62 (with 10 references). ['The Common Picture eXchange environment leverages the Web services paradigm to serve the electronic photographic services market, combining open standards for exchanging digital images, orders, and other information with an online directory of service providers.'] "The Common Picture eXchange environment is a highly interoperable service delivery framework that leverages the Web services paradigm to give providers access to an expanded market and offer consumers a broad range of digital imaging services. Multiple providers can register their services in a central directory and precisely characterize their offerings using an extensible catalog and order model... CPXe relies on the universal description, discovery, and integration specification for directory functionality. UDDI is a Web service that lets businesses discover one another and describe how they interact. It provides simple object access protocol interfaces for publishing entries and querying the UDDI registry, and it uses document literal encoding to pass XML-formatted data in SOAP messages. UDDI relies on tModels to represent metadata. 
A tModel is defined by a name, a description, and an overview; how it is used is up to the definer. In CPXe, providers use tModels to classify their services and specify their supported interfaces. UDDI provides a relatively flexible mechanism for searching the directory to find services that categorize themselves using a particular tModel. Because UDDI is not designed to provide the fine level of detail about service offerings that consumers want, each provider also operates a catalog service that describes its own services and products. The CPXe system also implements the service locator concept. Functioning much as a travel agent or sales broker, a service locator consults the UDDI directory to determine available services and queries those services for catalog information. Applications seeking specific kinds of services and products can interact directly with a selected service locator service to identify an appropriate service provider. Service locators may charge for their service and are not tied to a particular vendor. Although applications can interact directly with UDDI, most will interact with a specific service locator service. Portal and end-user application providers typically will implement the service locator service that their application uses. Vendors can enter into business agreements with specific service locator providers to be listed by that provider... CPXe uses the Web Services Description Language to describe the interfaces to all its services... Although the WS-I Basic Profile 1.0 was not completed before development of CPXe, the system's designers were guided by early decisions made by the WS-I Basic Profile Working Group. CPXe adheres to most Basic Profile 1.0 requirements, including use of the document literal message format and SOAP binding in WSDL..." See general references in "Common Picture Exchange Environment (CPXe)."
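The locator's two-stage lookup -- coarse matching on tModels in the directory, then finer filtering against each provider's own catalog -- can be sketched with an in-memory stand-in for UDDI. All provider names, tModel keys, and catalog entries below are invented:

```python
# Toy directory: each entry is a provider's registration, with the
# tModels it claims and a stand-in for its per-provider catalog service.
DIRECTORY = [
    {"provider": "PrintsRUs", "tmodels": ["cpxe:print-service"],
     "catalog": {"4x6 print": 0.20}},
    {"provider": "ShareShack", "tmodels": ["cpxe:share-service"],
     "catalog": {"album": 0.00}},
]

def locate(tmodel, want_product=None):
    """Stage 1: match on tModel; stage 2: filter on catalog detail."""
    hits = [e for e in DIRECTORY if tmodel in e["tmodels"]]
    if want_product is not None:
        hits = [e for e in hits if want_product in e["catalog"]]
    return [e["provider"] for e in hits]

providers = locate("cpxe:print-service", want_product="4x6 print")
```

The split mirrors the article's design rationale: the directory stays coarse-grained and stable, while product-level detail lives with each provider.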
[October 06, 2003] "Ontology-Mediated Integration of Intranet Web Services." By Tse-Ming Tsai, Han-Kuan Yu, Hsin-Te Shih, Ping-Yao Liao, Ren-Dar Yang (Taiwan Institute for Information Industry); Seng-cho T. Chou (National Taiwan University). In IEEE Computer Volume 36, Number 10 (October 2003), pages 63-71 (with 9 references). ['The smart office task automation framework uses Web services, an ontology, and agent components to create an integrated information-service platform that provides user-centric support for automating intranet office tasks.'] "XML-based Web services standards have simplified integration with open Internet protocols and given machines an enhanced ability to communicate. Developers are striving to tap into the depth of mutual understanding that can be culled from the Web's diverse content to provide a vast increase in available services. Dealing with this flood of options will require sweeping automation. To meet this challenge, the authors built their smart office task automation framework -- SOTA -- using Web services, an ontology, and agent components. In SOTA, Web services define programmable application interfaces logically accessible using standard Internet protocols. Applications access these services using ubiquitous Web protocols and data formats such as HTTP and XML, defined through the simple object access protocol (SOAP). SOTA uses the Web Services Description Language (WSDL) to define and describe the programming interface for the SOAP messaging service's independent implementation. The universal description, discovery, and integration (UDDI) specification serves as a centralized services registry that offers a global services market... The SOTA platform uses a mediating ontology to integrate intranet applications, providing a single integrated user interface instead of separate operations. 
By modeling the semantic relationships between Web services interfaces and the mediating ontology, SOTA assumes most of the complex tasks previously performed by hand. During the design phase, SOTA creates a wrapper for existing back-end legacy systems and makes them accessible through Web services interfaces. The domain ontology, which reflects abstract concepts and relationships in the real application domain, provides the architecture's pivotal element. When deploying these systems on the SOTA platform, registering the services interfaces' semantics to corresponding concept properties captured in the mediating ontology is vital. SOTA supports two registering tools for the system annotator -- WSDL Semantic Annotator and Ontology Locator. The Task Process Composer tool helps with constructing a reusable task flow for complex tasks that involve multiple services through the Task Flow Engine. At runtime, SOTA can take plain-text sentences as inputs and serve end users with a single, integrated user-interface form, avoiding the need to rely on the user's knowledge and memory to access the necessary distributed services manually. An authorized context-and-content parser deals with different functions and presentation devices in full view of users. A conventional application-centric platform requires significant user effort to find systems and log onto them, select functions using the mouse, and copy and paste the data. In contrast, SOTA is a user- and task-centric platform that shifts effort away from users so that they can complete their tasks more efficiently..."
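The mediation step -- resolving an abstract task concept to a concrete service operation via registered bindings -- can be sketched as a simple lookup. All concept, service, and operation names below are invented for illustration; in SOTA these bindings would come from annotating WSDL interfaces against the ontology:

```python
# Hypothetical registry populated at deployment time: each ontology
# concept is bound to the (service, operation) pair that realizes it.
ONTOLOGY_BINDINGS = {
    "LeaveRequest.submit": ("HRService", "createLeaveForm"),
    "MeetingRoom.book": ("FacilityService", "reserveRoom"),
}

def resolve(concept):
    """Map an abstract task concept to a concrete service call."""
    try:
        return ONTOLOGY_BINDINGS[concept]
    except KeyError:
        raise LookupError(f"no service registered for concept {concept!r}")

service, op = resolve("MeetingRoom.book")
```

The user (or a task planner acting on parsed plain text) only names the concept; which back-end system answers it is the platform's concern, which is what makes the design user- rather than application-centric.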
[October 03, 2003] "InfoPath Makes Office Shine. Office 2003 Offers Little to Individual Users, But XML Features Should Wow IT Shops." By Tom Yager. In InfoWorld (October 03, 2003). "In terms of providing features that individual users need, the MS Office productivity suite reached the zenith of its evolution with Office 2000. But Office 2003 Professional Enterprise Edition and Professional Edition for retail deliver XML capabilities that are compelling to companies as a whole, and Enterprise Edition's inclusion of InfoPath turns Office into a powerful front end for IT shops rooted in XML (which, if sense prevails, describes all of IT). Office 2003 Enterprise's XML enhancements alone are worth the upgrade cost... Along with XML, InfoPath is, in our estimation, the best new feature to hit Office since real-time spell-checking. InfoPath is an XML editor with a twist: The user never sees the XML or the XML Schema that structures and validates it. Unlike XML features in Word and Excel, which must be set up by one skilled in the XML arts, InfoPath paints forms, validates input, and pumps out squeaky clean, standards-compliant XML without requiring one bit of XML expertise. InfoPath lives up to its billing both in ease of use and the quality of its output... Office 2003 is, in the main, an excellent piece of work. But two key features are missing. Formatting information, which can be relevant to the interpretation of a document, is either stripped from exported XML documents, or retained in a needlessly complex format. Worst of all is the absence of XML support in Outlook, which continues to use an opaque data store for messages. Through XML, Outlook could interact with non-Microsoft mail clients and servers, and more importantly, it could easily incorporate RSS functionality. Office 2003 Enterprise is a fantastic desktop suite, easily deserving of its Very Good rating. 
But Microsoft's decision to deprive most Office users of integrated XML functionality is, to be blunt, idiotic. As a reviewer, I'm obliged to evaluate and score the product at hand, and my rating is accurate for the Enterprise Edition I reviewed. But I add this footnote: To be truly useful, XML support must be consistent across all Office editions, and not limited to the Professional and Enterprise editions. I fear that thousands of Office 2003 users will be left wondering where all of these thrilling new features are..." See general references in "Microsoft Office 11 and InfoPath [XDocs]."
Earlier Articles from September 2003
[September 30, 2003] "OASIS to Build Web Services Framework. Committee Will Define Vendor-Neutral Methodology." By Paul Krill. In InfoWorld (September 30, 2003). "Members of OASIS this week announced plans to develop a global Web services framework to define a methodology for a broad-based, multiplatform and vendor-neutral implementation. The OASIS Framework for Web Services Implementation (FWSI) Technical Committee plans to design a template for Web services deployment to enable systems integrators, software vendors, and in-house developers to build e-commerce solutions more quickly, according to OASIS. The committee will define functionality for building Web services applications and service-oriented architectures. Specifically, the committee will specify a set of functional elements for practical implementation of Web services-based systems. At first glance, the OASIS project appears similar to the Basic Profile for Web services being set up by the Web Services Interoperability Organization (WS-I). But the technical committee expects to complement WS-I, according to OASIS. Committee member Sun Microsystems also is a major supporter of the WS-I Basic Profile. The committee plans to leverage applicable work within OASIS and other standards groups..." See details in the news story "OASIS Announces Framework for Web Services Implementation (FWSI) TC."
[September 30, 2003] "Taking XML's Measure." By David Becker. In CNET News.com (September 23, 2003). "Tim Bray and his colleagues in the World Wide Web Consortium had a very specific mission when they set out to define a new standard seven years ago. They needed a new format for Internet-connected systems to exchange data, a task being handled with increasing awkwardness by HyperText Markup Language. The solution Bray helped concoct was XML (Extensible Markup Language), which has since become one of the building blocks of information technology and today serves as the basic language for disparate computing systems to exchange data. Microsoft is betting heavily on XML-based technology that will turn the new version of Office into a conduit for viewing and exchanging data from backend systems. The biggest players in technology are betting heavily on Web services based on XML. And corporate giants such as Wal-Mart Stores are relying on XML to streamline their business processes. Bray has since gone on to address another big challenge -- the visual representation of data -- with his company, Antarctica, which sells tools that display information from Web searches, corporate portals and other sources in an intuitive map-based format. Bray talked about the spread of XML, challenges in search technology and other concerns with CNET News.com..." [Excerpt, on standards:] "Standards processes don't do well in dealing with new technologies, so I disagree that being ahead of the market is a good thing. The standards process works best when you've got a problem that's already been solved, and we have a consensus on what the right way to go is, and you just need to write down the rules. That's totally what XML was. There had been 15 years of SGML, so there was a really good set of knowledge as to how markup and text should work. 
And the Web had been around for five years, so we knew how URLs (Uniform Resource Locators) worked, and Unicode had been around, so we knew how to do internationalization. XML just took those solved problems, packaged them up neatly and got consensus on it all..."
[September 30, 2003] "Sun: Office 2003 Will 'Protect Microsoft's Monopoly'." By Andrew Colley. In ZDNet Australia (September 30, 2003). "Document protection tools in the next version of Microsoft's office suite represent extremes of proprietary thinking, says a Sun document. Sun Microsystems has expressed concerns that document protection tools that Microsoft will include in Office 2003 will fortify the software giant's domination over enterprise desktops. In a document never before released outside Sun, but shared with ZDNet Australia this week, Laurie Wong, Sun Microsystems software product manager, argued that while document rights management was a positive step, Microsoft was using its rights management regime to protect its 'monopoly'. According to Wong, Microsoft's adoption of rights management services would negate any positive impact that might have resulted from its decision to adopt open standards for its file storage format. 'In summary, on the one hand Microsoft claims to have opened up the storage format from a proprietary binary one to XML, an open one. On the other they have locked this 'open' format up with rights management,' wrote Wong, adding 'Yes, a couple of deck chairs have been shifted around, but you certainly are not on a different ship. It is a vexatious issue, promulgated by the extremes of proprietary thinking'. Wong argued that Windows RMS locks out members of the community using non-Microsoft products by coupling document protection systems to proprietary features of Microsoft's latest server technology. Windows RMS is designed to give enterprises control over their documents by specifying who can access them and how they can be used at the time they are created. Windows RMS requires the list of restrictions attached to each document to be registered on a RMS-capable Microsoft server. The server authenticates each user and issues him or her with a license to use an RMS-protected document. 
Anyone without access to the RMS technology server is effectively locked out of a protected document. When concerns about this were raised when Microsoft announced its rights management technology early this year, the company said that RMS was targeted for internal corporate use and that it could be incorporated into the Passport service for wider community inclusion. However, Wong is not satisfied by either argument. Nodding in the direction of the global divide between the technology haves and have-nots, Wong said that users shouldn't be forced to buy one company's products for the privilege of using widely used document formats. Adding to Wong's concerns, Microsoft has added the capability to apply rights management to emails and Web pages through Outlook 2003 and Internet Explorer..." See: (1) "Microsoft Announces Windows Rights Management Services (RMS)"; (2) general references in "XML and Digital Rights Management (DRM)."
[September 30, 2003] "Adobe's PDF-Everywhere Strategy." By David Becker. In CNET News.com (September 30, 2003). "Adobe Systems wants to put more than a few pulp mills out of business. Formed more than 20 years ago with the mission of ensuring uniform typefaces, the San Jose, Calif.-based software maker has since built a grand e-paper network, with Adobe products replacing or supplementing paper for tasks that range from tax forms to book publishing. But with its Portable Document Format (PDF) now widely used for distributing documents electronically, Adobe now wants to expand the PDF format into a multiplatform foundation for viewing and sharing corporate data. It's an ambitious plan that will likely bring Adobe into more direct competition with Microsoft -- though this would not be the first time the two companies have clashed. Meanwhile, Adobe is looking to extend its reach with publishing and graphics professionals. Adobe Creative Suite, a package of software the company announced earlier this week, combines common applications such as Photoshop with new tools for collaboration and managing files. Among other things, the package is expected to help boost market share for Adobe's InDesign page layout software, one of the company's most competitive products. Adobe CEO Bruce Chizen talked with CNET News.com about its suite approach, the future of the PDF and the possible confrontation with Microsoft, among other issues. [Chizen:] "The market we're going after is different from the market [Microsoft is] focused on. We're focused on those customers and those industries that care about the reliability of the document outside their environment, and they want to have intelligent documents that cut across platforms--and it's where good-enough -- meaning HTML -- is not going to meet their requirements. 
Our industries are banking, insurance, legal, manufacturing, pharmaceuticals, government--places where they want to do business with partners or customers or citizens, where they can't dictate the operating environment. They don't want to tell their customers, "If you want to open a certain document, you have to go out and buy a certain operating system and a certain piece of software... Version Cue is really designed for individual and work groups of 25 or fewer people. And as those individuals scale up, they're going to want a much more comprehensive, administrative-intense solution, and that's when they'll go buy an enterprise solution. And because we use industry standards that are built around XML schemas, we'll integrate well with those solutions. And we already are well along the way of creating partnerships with folks like IBM and Documentum..."
[September 30, 2003] "Create Web Services Using Apache Axis and Castor. How to Integrate Axis and Castor in a Document-Style Web Service Client and Server." By Kevin Gibbs, Brian D. Goodman, and Elias Torres (IBM). From IBM developerWorks, Web services. September 30, 2003. ['Recent work has pointed out the benefits of using Document-style Web services over RPC -- they're cleaner, more natural to XML, and facilitate object exchange. However, Document-style services can be less than straightforward to deploy using Axis, since Axis's data binding framework can be difficult to use, doesn't support some popular features of XML-Schema, and most importantly, lacks validation support. This article addresses those woes by providing a step-by-step tutorial which explains how to integrate Axis with the Castor data-binding framework, creating a best-of-both worlds Web service that combines the Web services prowess of Axis with the data-binding brawn of Castor.'] "RPC-style encoding is ultimately a limiting, unnatural use of its underlying technology, XML. It represents a misuse of technology -- when simple XML alone, in a Document-style service, provides all the expressibility desired. Keeping technology standards in the vein of the most natural, straightforward solutions, like Document style, is the true spirit of Web services, where interfaces are exposed, back-end and middleware systems are hidden, and dynamic discovery, binding, and endless reuse abound. This article shows how to use Castor XML binding to make Document-style Web services within an Apache Axis environment easier, cleaner, and more intuitive. It begins with a discussion of Web service encoding methods and an explanation of why Castor and Axis together make a good solution. It provides instructions and explanations for all of the necessary steps to getting a Document-style Web service up and running -- everything from designing the schema and service to generating the service and client code. 
The article covers configuring Axis to use Castor and attempts to cover any 'gotchas' a developer might encounter as they get their hands dirty... But once you're off the ground, you've got a Web service that gains all the flexibility and clarity of Document-style, the robust Web services support of Axis, and the validation and data binding prowess of Castor. When you've got Document-style services, Castor, and Axis set up, there are a lot of other interesting directions you can go in. For instance, in just a few more lines of code, you can have your server-side Castor objects marshall themselves into an SQL database, using Castor JDO. You can also use the regular expression and validation support of Castor to clean up Web service data so that your service and client have less room for potential bugs in their data..."
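The RPC-versus-Document contrast the authors draw can be sketched with two hypothetical SOAP bodies for the same order-submission operation (all element, namespace, and prefix names here are invented for illustration, not taken from the article's sample service):

```xml
<!-- RPC-style: the body encodes a method call with SOAP-encoded,
     typed parameters; the structure mirrors a programming-language
     signature rather than a business document -->
<soap:Body>
  <ns:submitOrder soap:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
    <orderId xsi:type="xsd:int">42</orderId>
    <quantity xsi:type="xsd:int">3</quantity>
  </ns:submitOrder>
</soap:Body>

<!-- Document-style: the body carries a plain XML business document
     that can be validated against an XML Schema and bound to objects
     with a framework such as Castor -->
<soap:Body>
  <po:purchaseOrder xmlns:po="urn:example:po">
    <po:orderId>42</po:orderId>
    <po:quantity>3</po:quantity>
  </po:purchaseOrder>
</soap:Body>
```

The second form is what the article means by "simple XML alone": the payload is self-describing data, so validation and data binding can be delegated to a schema-aware tool instead of being entangled with the RPC encoding rules.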
[September 30, 2003] "What is Service-Oriented Architecture?" By Hao He. From O'Reilly WebServices.XML.com (September 30, 2003). "Service Oriented Architecture (SOA) is an architectural style whose goal is to achieve loose coupling among interacting software agents. A service is a unit of work done by a service provider to achieve desired end results for a service consumer. Both provider and consumer are roles played by software agents on behalf of their owners. SOA achieves loose coupling among interacting software agents by employing two architectural constraints: (1) a small set of simple and ubiquitous interfaces to all participating software agents. Only generic semantics are encoded at the interfaces. The interfaces should be universally available for all providers and consumers. (2) Descriptive messages constrained by an extensible schema delivered through the interfaces. No, or only minimal, system behavior is prescribed by messages. A schema limits the vocabulary and structure of messages. An extensible schema allows new versions of services to be introduced without breaking existing services... Interfacing is fundamentally important: if interfaces do not work, systems do not work. Interfacing is also expensive and error-prone for distributed applications. An interface needs to prescribe system behavior, and this is very difficult to implement correctly across different platforms and languages. Remote interfaces are also the slowest part of most distributed applications. Instead of building new interfaces for each application, it makes sense to reuse a few generic ones for all applications. Since we have only a few generic interfaces available, we must express application-specific semantics in messages. We can send any kind of message over our interfaces, but there are a few rules to follow before we can say that an architecture is service oriented. 
First, the messages must be descriptive, rather than instructive, because the service provider is responsible for solving the problem. This is like going to a restaurant: you tell your waiter what you would like to order and your preferences but you don't tell the cook how to cook your dish step by step. Second, service providers will be unable to understand your request if your messages are not written in a format, structure, and vocabulary that is understood by all parties. Limiting the vocabulary and structure of messages is a necessity for any efficient communication. The more restricted a message is, the easier it is to understand the message, although it comes at the expense of reduced extensibility. Third, extensibility is vitally important... If messages are not extensible, consumers and providers will be locked into one particular version of a service. Despite the importance of extensibility, it has been traditionally overlooked. At best, it was regarded simply as a good practice rather than something fundamental. Restriction and extensibility are deeply entwined. You need both, and increasing one comes at the expense of reducing the other. The trick is to have a right balance. Fourth, an SOA must have a mechanism that enables a consumer to discover a service provider under the context of a service sought by the consumer. The mechanism can be really flexible, and it does not have to be a centralized registry..."
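The balance He describes between restriction (a constrained vocabulary) and extensibility (room for new service versions) is commonly struck in XML Schema with wildcards. A hypothetical message schema might reserve an extension point like this (element names are invented for illustration):

```xml
<xs:element name="order">
  <xs:complexType>
    <xs:sequence>
      <xs:element name="item" type="xs:string" maxOccurs="unbounded"/>
      <!-- Extension point: a later version of the service may add
           elements from another namespace here without invalidating
           messages exchanged with existing consumers -->
      <xs:any namespace="##other" processContents="lax"
              minOccurs="0" maxOccurs="unbounded"/>
    </xs:sequence>
  </xs:complexType>
</xs:element>
```

The fixed `item` element restricts the core vocabulary, while the lax-processed wildcard keeps the schema open to future versions -- the trade-off the article calls "deeply entwined."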
[September 30, 2003] "QA Framework: Operational Guidelines." Edited by Lofton Henderson, Dominique Hazaël-Massieux, Lynne Rosenthal, and Kirill Gavrylyuk. W3C Candidate Recommendation. 22-September-2003. Latest version URL: http://www.w3.org/TR/qaframe-ops/. Produced by members of the W3C QA Working Group under the W3C Quality Assurance (QA) Activity. This document defines a "common operational framework for building conformance test materials for W3C specifications." It presents operational and procedural guidelines for groups undertaking conformance materials development. This document is one of the QA Framework family of documents of the Quality Assurance (QA) Activity, which includes the other existing or in-progress specifications: Introduction; Specification Guidelines; and, Test Guidelines. The scope of this specification is a set of verifiable requirements for the process and operational aspects of the quality practices of W3C Working Groups. The primary goal is to help the W3C Working Groups (WGs) with the planning, development, deployment, and maintenance of conformance test materials (TM). For this guidelines document, the term conformance test materials includes conformance test suites, validation tools, conformance checklists, and any other materials that are used to check or indicate conformance of an implementation to a specification... As the complexity of W3C specifications and their interdependencies increase, quality assurance becomes even more important to ensuring acceptance and deployment in the market. These guidelines aim to capture the experiences, good practices, activities, and lessons-learned of the Working Groups, and to present them in a comprehensive, cohesive set of documents for all to use and benefit from. 
They thereby aim to: (1) standardize the best of current practice, (2) allow the WGs to reuse what works rather than having to reinvent, (3) which should facilitate and expedite the work of the WGs, (4) and should also promote consistency across the various WG quality activities and deliverables..." See also the "Implementation Plan and Report for the QA Operational Guidelines" and the public archives of the 'www-qa' list.
[September 30, 2003] "An Introduction to StAX." By Elliotte Rusty Harold. In XML.com (September 17, 2003). "Most current XML APIs fall into one of two broad classes: event-based APIs like SAX and XNI or tree-based APIs like DOM and JDOM. Most programmers find the tree-based APIs to be easier to use; but such APIs are less efficient, especially with respect to memory usage. An in-memory tree tends to be several times larger than the document it models. Thus tree APIs are normally not practical for documents larger than a few megabytes in size or in memory-constrained environments such as J2ME. In these situations, a streaming API such as SAX or XNI is normally preferred. A streaming API uses much less memory than a tree API since it doesn't have to hold the entire document in memory simultaneously. It can process the document in small pieces. Furthermore, streaming APIs are fast. They can start generating output from the input almost immediately, without waiting for the entire document to be read. They don't have to build excessively complicated tree data structures that they'll just pull apart again into smaller pieces. However, the common streaming APIs like SAX are all push APIs. They feed the content of the document to the application as soon as they see it, whether the application is ready to receive that data or not. SAX and XNI are fast and efficient, but the patterns they require programmers to adopt are unfamiliar and uncomfortable to many developers. Pull APIs are a more comfortable alternative for streaming processing of XML. A pull API is based around the more familiar iterator design pattern rather than the less well-known observer design pattern. In a pull API, the client program asks the parser for the next piece of information rather than the parser telling the client program when the next datum is available. In a pull API the client program drives the parser. In a push API the parser drives the client. [Now] the next generation API is here. 
BEA Systems, working in conjunction with Sun, XMLPULL developers Stefan Haustein and Aleksandr Slominski, XML heavyweight James Clark, and others in the Java Community Process are on the verge of releasing StAX, the Streaming API for XML. StAX is a pull parsing API for XML which avoids most of the pitfalls I noted in XMLPULL. XMLPULL was a nice proof of concept. StAX is suitable for real work. Like SAX, StAX is a parser-independent, pure Java API based on interfaces that can be implemented by multiple parsers. Currently there is only one implementation, the reference implementation bundled with the draft specification... StAX is a fast, potentially extremely fast, straight-forward, memory-thrifty way of loading data from an XML document whose structure is well known in advance; it will be a very useful addition to any Java developer's XML toolkit..." See details in the following bibliographic reference.
[September 30, 2003] "Streaming API for XML." Java Specification Request (JSR) #173. Specification Lead: Christopher Fry (BEA Systems). Produced under the Java Community Process. Expert Group Members: Arnaud Blandin (Intalio, Inc.), Andy Clark (Apache), James Clark, Christopher Fry (BEA Systems, Specification Lead), Stefan Haustein, Simon Horrell (Developmentor), K. Karun (Oracle), Glenn Marcy (IBM), Gregory M. Messner (Breeze Factor), Aleksander Slominski, David Stephenson (Hewlett-Packard), James Strachan, and Anil Vijendran (Sun Microsystems). JSR-000173 Streaming API for XML Specification 0.7. Proposed Final Draft. August 27, 2003. 61 pages. A reference implementation is included in the ZIP archive containing the draft specification. "This specification describes the Streaming API for XML (StAX), a bi-directional API for reading and writing XML. This document along with the associated API documentation is the formal specification for JSR-173... This document specifies a new API for parsing and streaming XML between applications in an efficient manner. Efficient XML processing is fundamental for several areas of computing, such as XML based RPC and Data Binding... The Streaming API for XML gives parsing control to the programmer by exposing a simple iterator based API and an underlying stream of events. Methods such as next() and hasNext() allow an application developer to ask for the next event (pull the event) rather than handling the event in a callback. This gives a developer more procedural control over the processing of the XML document. The Streaming API also allows the programmer to stop processing the document, skip ahead to sections of the document, and get subsections of the document. The Streaming API for XML consists of two styles: A low-level cursor API, designed for creating object models and a higher-level event iterator API, designed to be used in pipelines and be easily extensible..." 
Background to the StAX design: "Processing XML has become a standard function in most computing environments. Two main approaches exist: (1) the Simple API for XML processing [SAX] and (2) the Document Object Model (DOM). SAX is a low-level parsing API while DOM provides a random-access tree-like structure. One drawback to the SAX API is that the programmer must keep track of the current state of the document in the code each time they process an XML document and thus cannot iteratively process it. Another drawback to SAX is that the entire document needs to be parsed at one time. DOM provides APIs that allow random access and manipulation of an in-memory XML document. At first glance this seems like a win for the application developer. However, this perceived simplicity comes at a very high cost: performance. For very large documents one may be required to read the entire document into memory before taking appropriate actions based on the data..." See preceding bibliographic entry.
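The pull-parsing loop that both articles describe -- the client driving the parser through hasNext()/next() rather than receiving callbacks -- looks like this in the StAX cursor API (a minimal sketch using the standard javax.xml.stream package; the sample document is invented):

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class StaxPullDemo {

    // Counts start-element events by pulling events from the stream.
    // Unlike SAX, the client asks for each event, so it can stop early
    // or skip ahead instead of reacting to callbacks.
    static int countElements(String xml) throws Exception {
        XMLInputFactory factory = XMLInputFactory.newInstance();
        XMLStreamReader reader =
                factory.createXMLStreamReader(new StringReader(xml));
        int count = 0;
        while (reader.hasNext()) {           // client drives the parser
            if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                count++;
            }
        }
        reader.close();
        return count;
    }

    public static void main(String[] args) throws Exception {
        // Three start-element events: order, item, item
        System.out.println(countElements("<order><item/><item/></order>"));
    }
}
```

Because the document is processed one event at a time, memory use stays constant regardless of document size -- the streaming advantage over DOM noted above.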
[September 30, 2003] "Marking Up Bureaucracy." By Paul Ford. From XML.com (September 24, 2003). "If there is a perfect user of XML, it's the huge, sprawling United States government. With thousands of diverse offices, from the Navy to the National Park Service, each federal agency routinely exchanges gigabytes' worth of documents and data with other offices, businesses, and citizens. Organizations as large as the US government rarely move quickly, so at first it's surprising to see so much XML activity underway. Historically, however, many government organizations are not strangers to markup... Right now, centralization is the exception, not the norm. Different XML applications are scattered across different government agencies. The DoD, EPA, IRS, and others create schemas as needed, and apply them internally. In an effort to encourage centralization of all online government services, including those using XML, the White House created the E-government initiative, which divides government technology into three roles: Government-to-Government (G2G), Government-to-Business (G2B), and Government-to-Citizen (G2C). Most effort has been focused on G2G. As described above, one of the major creators and consumers of markup is the Department of Defense. Earlier efforts at standardizing schemas DoD-wide met with significant resistance, so now the DoD uses a 'market-oriented' strategy to manage its own XML registry. According to Owen Ambur, co-founder and co-chair of the XML working group, 'essentially, individual departments are encouraged to post schemas,' and other departments are encouraged to work with existing schemas instead of inventing new ones with the hope that, over time, individual schemas will be identified as most useful and promoted broadly throughout government. 
Much effort is also being applied to the E-Forms for E-Gov project, which is currently creating an infrastructure for using XForms, PDF, and related technologies to allow the myriad different federal forms to be filled out and signed electronically. This technology is expected to be useful both in G2G and G2B and will allow the processing of common forms, like passport applications, applications for federal assistance, and travel vouchers, to be completely automated..."
[September 30, 2003] "Report: Widespread Use of Microsoft Poses Security Risk. Organizations Should Diversify Their Software Mix, Says Industry Group." By Stacy Cowley. In InfoWorld (September 24, 2003). "Whatever Microsoft's strengths or failings as a developer of reliable software, the mere existence of an operating system monopoly is a critical security risk, argues a new report released Wednesday at a Computer & Communications Industry Association (CCIA) gathering in Washington, D.C. Written by seven IT security researchers, CyberInsecurity -- The Cost of Monopoly calls on governments and businesses to consider in their buying decisions the dangers of homogenous systems, and to diversify the software mix deployed in their organizations. It also urges the U.S. government to counterbalance Microsoft's user lock-in tactics by forcing the company to offer multiplatform support for its dominant applications, including Internet Explorer and Microsoft Office products... While Microsoft is a focus of the report, the company isn't solely responsible for the risky situation that now exists, the authors said... None of the report's authors were paid for their contributions, and the CCIA is merely acting as the paper's publisher and did not influence its content, according to the report's instigator, @stake Inc. Chief Technical Officer Dan Geer. The report's conclusions do, however, dovetail with CCIA's push for tighter regulatory controls on Microsoft and for greater diversity in the U.S. federal government's IT systems. The group plans to feature the report at this week's conference, and in its conversations with representatives of Congress and federal agencies. The report's authors said they hope it will aid corporate IT workers in efforts to convince executives at their companies that Microsoft's software shouldn't be deployed by default. 'There isn't a lot of talk about monoculture and security problems. 
Our hope is that we can bring this into the debate,' [Perry] Metzger said. Beyond recommending diversification, the paper suggests steps the U.S. government could take to mitigate the effects of Microsoft's monopoly position. Forced publication of APIs (application program interfaces) for Microsoft's Windows and Office software would help, as would requiring the company to work with other industry vendors on development of future specifications through a process similar to the Internet Society's RFC (request for comments) system, the report said..." Note: The "@stake Inc. Chief Technical Officer Dan Geer" mentioned above was fired in connection with his authorship contribution in this report. See: (1) "Security Expert Geer Sounds Off on Dismissal"; (2) "Former @stake CTO Dan Geer on Microsoft Report, Firing." Bibliographic reference for the report is cited below.
[September 30, 2003] "CyberInsecurity: The Cost of Monopoly. How the Dominance of Microsoft's Products Poses a Risk to Security." By Daniel Geer, Sc.D (Chief Technical Officer, @Stake), Charles P. Pfleeger, Ph.D (Master Security Architect, Exodus Communications, Inc.), Bruce Schneier (Founder, Chief Technical Officer, Counterpane Internet Security), John S. Quarterman (Founder, InternetPerils, Matrix NetSystems, Inc.), Perry Metzger (Independent Consultant), Rebecca Bace (CEO, Infidel), and Peter Gutmann (Researcher, Department of Computer Science, University of Auckland). Published by Computer & Communications Industry Association (CCIA). September 2003. 25 pages. "... As fast as the world's computing infrastructure is growing, security vulnerabilities within it are growing faster still. The security situation is deteriorating, and that deterioration compounds when nearly all computers in the hands of end users rely on a single operating system subject to the same vulnerabilities the world over. Most of the world's computers run Microsoft's operating systems, thus most of the world's computers are vulnerable to the same viruses and worms at the same time. The only way to stop this is to avoid monoculture in computer operating systems, and for reasons just as reasonable and obvious as avoiding monoculture in farming. Microsoft exacerbates this problem via a wide range of practices that lock users to its platform. The impact on security of this lock-in is real and endangers society. Because Microsoft's near-monopoly status itself magnifies security risk, it is essential that society become less dependent on a single operating system from a single vendor if our critical infrastructure is not to be disrupted in a single blow. The goal must be to break the monoculture. Efforts by Microsoft to improve security will fail if their side effect is to increase user-level lock-in. 
Microsoft must not be allowed to impose new restrictions on its customers -- imposed in the way only a monopoly can do -- and then claim that such exercise of monopoly power is somehow a solution to the security problems inherent in its products. The prevalence of security flaws in Microsoft's products is an effect of monopoly power; it must not be allowed to become a reinforcer. Governments must set an example with their own internal policies and with the regulations they impose on industries critical to their societies. They must confront the security effects of monopoly and acknowledge that competition policy is entangled with security policy from this point forward..."
[September 30, 2003] "Java Panel Pondering Web Services, Portal Proposals. J2EE 1.4 Readied for Approval." By Paul Krill. In InfoWorld (September 24, 2003). "Proposals to boost Web services and portal capabilities in Java are up for imminent votes by stewards of the programming language, according to an official at Java inventor Sun Microsystems. Java 2 Platform, Enterprise Edition (J2EE) 1.4, which adds Web services support and backing for the Web Services Interoperability Organization's Basic Profile for Web services, is up for a vote by an executive committee of the Java Community Process (JCP) in the next couple of weeks, said Onno Kluyt, director of the JCP program office at Sun. J2EE 1.4 will be voted on by the JCP Standard Edition Enterprise Edition Executive Committee (SE/EE), with results expected by the end of the year. Up for a vote this week by the SE/EE committee is JSR 168, which is intended to define a standard API allowing developers to write a portlet once and deploy it from any compliant server with little or no recoding. The vote is expected to be finalized in two weeks, according to Sun. JCP in the next two weeks also is conducting elections to its two executive committees. These committees are the ME (Micro Edition) committee, which oversees Java 2 Platform Micro Edition for consumer and embedded systems, and the SE/EE committee, overseeing Java technologies for the server and desktop. Five seats are up on each panel. In place of current member PalmSource, Sun, which nominates 10 members for each panel, is nominating service provider Vodafone to the ME executive committee. JCP members then vote on the nominations..." See also JSR 168 Portlet API. JSR #168 was approved in a final ballot; voting to approve: Apple Computer, Inc., BEA Systems, Borland Software Corporation, Caldera Systems, Cisco Systems, Fujitsu Limited, Hewlett-Packard, IBM, IONA Technologies PLC, Doug Lea, Macromedia, Inc., Nokia Networks, Oracle, SAP AG, Sun Microsystems, Inc.
[September 30, 2003] "Client Quality Reporting for J2EE Web Services. Use SOAP Attachments to Report Client Response Times for Web Services." By Brian Connolly. In JavaWorld (September 19, 2003). This article documents the implementation of "a general-purpose architecture for recording client response times for J2EE (Java 2 Platform, Enterprise Edition) Web services. The response times recorded are actual client response times, so they accurately reflect a user's perspective of the service quality. The sample implementation was built using the Sun ONE (Open Network Environment) Application Server and IDE, but the general approach can be easily adapted to other J2EE implementations... While Web services ease the building of client-server systems, monitoring service quality is a significant problem. Consider a client application that submits a transaction on a user's behalf. A business transaction usually involves several Web service calls: an initial call to submit a work item, subsequent calls to check for completion, and a final call to get the result. Each call is a distinct HTTP/SOAP (Simple Object Access Protocol) exchange. Put yourself in the position of an IT department responsible for monitoring server load and forecasting future needs. The fundamental question you must answer is, 'How well am I serving my clients now, and what will I need to serve them in the future?' Answering this question is difficult if you have only HTTP logs. Clients care about transactions, but since each transaction consists of several HTTP requests, the best you can do to estimate service quality is to develop custom data-mining software that cursors through HTTP logs and builds a model of user transactions. Even so, the information you have is still limited because it can't reflect network transport or client application overhead. This article's key idea is that transaction service quality is best measured by the client.
The approach adopted here allows the client to record actual transaction response times. A client application uploads response time reports to the server by appending them to the next up-bound transaction request. The server strips off these attachments and queues them for storage and offline analysis... This approach can be used to measure accurate response times from the perspective of a client application. The implementation is lightweight. No new network traffic is needed between the client and the server. Metrics payloads are queued for low-priority logging, so server resources can be reserved for application processing..."
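The piggybacking scheme described here can be sketched outside J2EE as well. The following Python fragment is an illustration, not the article's Sun ONE implementation: it packages a SOAP request body plus any queued response-time reports as a SOAP-with-Attachments-style MIME message. The report format, element names, and queue handling are invented for the example.

```python
import time
from email.mime.multipart import MIMEMultipart
from email.mime.application import MIMEApplication
from email.mime.text import MIMEText

def build_request_with_metrics(soap_body: str, pending_reports: list) -> MIMEMultipart:
    """Package a SOAP request plus any queued response-time reports
    as a multipart/related MIME message (SOAP with Attachments style)."""
    msg = MIMEMultipart("related", type="text/xml")
    msg.attach(MIMEText(soap_body, "xml"))       # root part: the SOAP envelope
    for i, report in enumerate(pending_reports):
        part = MIMEApplication(report.encode("utf-8"), "xml")
        part.add_header("Content-ID", f"<metrics-{i}>")
        msg.attach(part)                          # metrics ride along as attachments
    return msg

# Client side: record a transaction's elapsed time, queue the report,
# and piggyback it on the next outbound request -- no extra network traffic.
queue = []
start = time.monotonic()
# ... perform the web service transaction here ...
elapsed_ms = int((time.monotonic() - start) * 1000)
queue.append(f'<report elapsedMs="{elapsed_ms}" tx="submitOrder"/>')

request = build_request_with_metrics(
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
    '<soap:Body/></soap:Envelope>', queue)
queue.clear()  # reports are now in flight; the server strips and queues them
```

The server-side counterpart would detach every `metrics-*` part before dispatching the SOAP body, which is why the approach adds no request/response round trips of its own.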
[September 30, 2003] "Is New Office 2003 Suite Worth the Upgrade?" By Mario Morejon, Vincent A. Randazzese, and Michael Gros. In CRN (September 25, 2003). "Microsoft Office 2003 is slated to launch Oct. 21, and it's already available to volume buyers. But one question looms -- the same question, in fact, that hovers over every Microsoft release. Is the upgrade really worth it? ... One of the most significant developments in Office 2003 is the use of embedded XML throughout, which makes the suite an excellent developer's tool and allows data to be shared easily among applications and users. Developing department-level applications has never been easier now that Microsoft has introduced InfoPath 2003, a client-driven XML form editor that integrates with XML-driven data sources in a variety of ways. InfoPath can query XML-driven databases and has a database wizard that ports Access 2003 tables and converts them into XML forms. The tool also preserves database schemas, so users don't have to reinvent the wheel when integrating their database-driven applications. In the case of SQL Server, database connections are driven via OLE DB... Web services, too, can tie into InfoPath as long as they're discoverable via UDDI, and the UDDI lookup can be done without any coding. But InfoPath isn't perfect yet. SQL joins can't be used because primary keys generated in Access cannot be replicated nor created outside the Access environment. Essentially, a many-to-one relationship violation can occur between tables, and conversion of Access forms into InfoPath forms is not yet always possible. Also, InfoPath chokes up when handling repeating fields in Access forms. As for FrontPage 2003, it's much improved and should no longer be considered an HTML editor for novice users. Test Center engineers predict that the program will now give Macromedia Dreamweaver a run for its money. 
Users can create XML data-driven Web sites in four easy steps and publish them to Microsoft SharePoint Services sites with little understanding of XML or XSLT. Data inside XML files is kept live because XML files are not copied into the pages but are linked to databases instead. Web services also work well with FrontPage, but they have to be published to a catalog before data can be pulled from them. Users can hook up a Web service to a FrontPage site easily -- in just four steps. Another impressive FrontPage feature is the conditional formatting task pane, which controls what is viewable on a page. Users can determine what's viewable by highlighting content and determining what data can be put on a page based on field values in a conditional query. No coding is required to do this..."
[September 30, 2003] "RFID Ripples Through Software Industry. Sun, SAP, Oracle, IBM Integrate RFID Data Into Mainstream Applications." By Ephraim Schwartz. In InfoWorld (September 26, 2003). "Big name vendors including Sun, SAP, Oracle, and IBM have caught the RFID (Radio Frequency Identification) buzz. Spurred in part by a WalMart edict that requires suppliers to tag all shipping cases and pallets with RFID by 2006, the vendors are rewriting their enterprise applications to integrate RFID data.... 'Walmart's marching orders are heard across the industry,' said Joshua Greenbaum, principal, Enterprise Applications Consulting. The changes in the queue include RFID extensions to Oracle's database and application server and SAP R3 applications, higher-level integration of RFID with Sun's SunOne integration platform, and integration with IBM's DB2 Information Integrator to facilitate the handoff of data from RFID readers to enterprise applications. Most industry analysts argue that RFID tagging is a transformational development that will ultimately change the way businesses plan, price, distribute, and advertise products. But for the present, enterprise application vendors are extending their products to handle an expected boom in RFID data. Until now, a bar-coded item sat on a retail shelf and did not generate any data until it was scanned by a bar code reader. And then the data was read only once. RFID, on the other hand, is a passive technology that does not require human interaction to scan. A reader can extract location and product description data from a tagged item every 250 milliseconds. Some readers are capable of reading data from 200 tags per second. The result is a data increase of more than one thousand times above traditional scanning methods. In response, Sun Microsystems is developing a middleware product to manage the influx of RFID data to filter out noise and duplicate data, according to Solutions Product Architect Sean Clark.
Currently in its pilot phase and commercially available by first quarter 2004, Sun's middleware will comply with Savant, an industry standard for this aspect of RFID filtering. 'Savant acts as the buffering layer between readers and enterprise applications,' Clark said. In addition, Sun is writing a software component that will implement its version of the RFID industry standard EPC (Electronic Product Code) Information Service..." See "RFID Resources and Readings."
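Sun's filtering layer is described only at a high level here. As a rough illustration of the duplicate-suppression problem it addresses (this is an invented sketch, not the Savant specification or Sun's middleware), a reader polling every 250 milliseconds will report the same tagged pallet over and over; a small time-window filter collapses those repeats before they reach enterprise applications:

```python
class ReadFilter:
    """Toy duplicate-suppression layer between RFID readers and
    enterprise applications: a tag read is forwarded only if that
    EPC was not already forwarded within the last `window_s` seconds."""
    def __init__(self, window_s: float = 1.0):
        self.window_s = window_s
        self.last_seen = {}  # EPC -> timestamp of last forwarded read

    def accept(self, epc: str, now: float) -> bool:
        last = self.last_seen.get(epc)
        if last is not None and now - last < self.window_s:
            return False          # duplicate read within the window: drop it
        self.last_seen[epc] = now
        return True               # first sighting, or window expired: forward

# A reader polling every 250 ms sees the same item repeatedly;
# only the first read per window survives the filter.
f = ReadFilter(window_s=1.0)
reads = [("epc:1", 0.0), ("epc:1", 0.25), ("epc:1", 0.5),
         ("epc:2", 0.5), ("epc:1", 1.5)]
forwarded = [epc for epc, t in reads if f.accept(epc, t)]
```

Five raw reads collapse to three forwarded events, which is the kind of thousandfold data reduction a buffering layer between readers and applications is meant to provide at scale.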
[September 30, 2003] "What's Next for SQL Server?" By Lisa Vaas. In eWEEK (September 26, 2003). "Users demanded SQL Server bond tighter with Visual Studio .Net, and Microsoft Corp. has since heeded the call, putting into beta testers' hands a version that opens the database up to .Net-compliant languages. The next version of SQL Server, code-named 'Yukon,' was originally slated for a spring 2004 release. That deadline was pushed out to the second half of next year after customers said they expected Yukon to fit hand-in-glove with the next version of .Net, code-named Whidbey. The Yukon beta was released in July to some 2,000 customers and partners. eWEEK recently talked with Microsoft Group Product Manager Tom Rizzo to find out how the .Net integration that customers demanded, along with upcoming features such as native XML and Web Services support, will benefit enterprises." [Rizzo:] "From the data level, we have things like native XML support. You take data from SQL Server, put it into XML format and ship it to anything that understands XML, such as Oracle, which has some XML support, or [IBM's DB2 database]. XML is ultimate interoperability -- it's an industry-standard format, and it's self-describing. You know both the schema of the data as well as the data itself. You don't lose the context when you pass your data around. We upped the level of XML support in Yukon through a number of things. In 2000 we had XML support, but it was shredding. (Shredding is the parsing of XML tag components into corresponding relational table columns.) In Yukon the key thing is we have an XML type. Like you have STRING and NUMBERS and all that inside the database, now you can declare with the native data type XML. Although we had XML support in 2000, and many leveraged it and were happy with it, now we have native support... One reason we [moved to a native data type for XML] is to support XQuery.
Also to support XQuery we had to build code so as to combine XML with relational query language. You can take the relational sorts of queries you're used to in the database world, where people select things from tables with filters on that data. You can combine XQuery statements with such relational queries..." See also "XML and Databases."
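Rizzo's distinction between shredding and a native XML type can be made concrete. This hypothetical Python/SQLite sketch (the schema and sample document are invented; this is not SQL Server code) shows what shredding amounts to: XML elements and attributes are parsed out into ordinary relational columns, after which only plain SQL applies.

```python
import sqlite3
import xml.etree.ElementTree as ET

orders_xml = """
<orders>
  <order id="1"><customer>Acme</customer><total>120.50</total></order>
  <order id="2"><customer>Globex</customer><total>75.00</total></order>
</orders>
"""

# "Shredding": parse XML tag components into corresponding relational
# table columns. The document no longer exists as a unit in the database,
# but ordinary SQL queries work against the shredded rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
for o in ET.fromstring(orders_xml).findall("order"):
    conn.execute("INSERT INTO orders VALUES (?, ?, ?)",
                 (int(o.get("id")), o.findtext("customer"),
                  float(o.findtext("total"))))

rows = conn.execute("SELECT customer FROM orders WHERE total > 100").fetchall()

# With a native XML column type, by contrast, the document would be stored
# intact in an XML-typed column and queried with XQuery rather than shredded
# up front -- which is the Yukon change Rizzo describes.
```

The trade-off the interview hints at: shredding gives fast relational access but throws away document order and structure, while a native XML type keeps the document whole at the cost of needing an XML-aware query language.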
[September 30, 2003] "Web services Players Push Management Barrow. Actional, AmberPoint, Empirix Separately Unveil Wares." By Paul Krill. In InfoWorld (September 29, 2003). "Web services vendors Actional, AmberPoint, and Empirix this week will attempt to improve Web services management capabilities with a host of product releases... AmberPoint is unveiling Exception Manager for resolving operational and business exceptions in Web services systems, while Empirix is bolstering testing and monitoring with its e-Test suite 6.8 and OneSight 4.8. For its part, Actional will announce its SOAPstation Edge XML firewall software and MyServicesPortal dashboard for monitoring Web services activity and service-level agreement compliance. Actional's SOAPstation Edge enables Web services management to be conducted outside a firewall by extending the brokering capabilities of the company's SOAPstation product. An add-on product to SOAPstation, SOAPstation Edge, reduces redundant processing in SOAP messages, providing XML firewall capabilities and processing of messages and management policy in a single message passage, said Dan Foody, CTO at Actional. MyServicesPortal provides a portal for both technical and non-technical persons to examine factors such as how a service network affects a particular customer, said Foody. Users can customize their views of the management system. Service Stabilizer identifies and corrects undesirable operating conditions in Web services and services-based applications before they become problems, according to Actional. For example, Service Stabilizer can detect if a network is overloaded, Foody said... AmberPoint Exception Manager is intended to detect and resolve distributed business exceptions in Web services systems, ranging from simple data entry errors to complex faults, the company said. It enables businesses to react more quickly to operational and business contingencies and minimize inefficiencies... 
Empirix will announce new Web services monitoring capabilities for its e-Test suite and OneSight products. The products feature a script wizard to simplify scripting needed to test and monitor Web services, the company said. This capability is added in Version 6.8 of the e-Test suite and Version 4.8 of OneSight. The e-Test product is used prior to launching applications while OneSight is used to manage and monitor applications in production, according to Joe Alwen, vice president of marketing at Empirix..."
[September 30, 2003] "AmberPoint Introduces Distributed Web Services Exception Management Solution." From CBDi Newswire (September 30, 2003). "Using AmberPoint Exception Manager, enterprises can react to operational and business contingencies, minimize inefficiencies and reduce the costs of maintaining their Web services environments. Due to its distributed, agent-based architecture, AmberPoint Exception Manager is able to detect and resolve distributed exceptions, where the clues to the condition reside in multiple messages. AmberPoint Exception Manager provides capabilities for managing exceptions in distributed Web services environments. In addition to providing visibility into hard-to-resolve operational errors, the solution also handles exceptions that have business impact. For example, if a customer were to place a large order that could not be fulfilled, AmberPoint Exception Manager can alert the appropriate business manager to resolve the situation before the customer encounters the problem... Where Amberpoint is moving the goalposts with its Exception Manager is in recognizing the potential complexity inherent in Web Services. Amberpoint is providing what you might regard as in-flight diagnostic capabilities that allow intelligent response to both business and technical problems. The starting point for the Exception Manager is that in distributed (and particularly federated) systems, it is likely to be commonplace that whilst the symptom of a problem may be obvious, the root cause may not... Amberpoint provides quite sophisticated exception management capabilities including: in-flight message comparison (prior to current, incorrect to working); filtering of messages dependent on a variety of conditions; filtering and identification of specific message combinations; pattern recognition; creation of data for drill down; allowing real time data correction and value update...
The Exception Manager tool is interesting because it will be of particular value in the final stages of testing as well as in production and it demonstrates that Amberpoint is getting real world feedback that they are feeding into the product..."
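The core idea -- an exception whose clues reside in multiple messages -- can be illustrated with a toy correlation pass. This is an invented sketch of the pattern, not AmberPoint's product: messages sharing a transaction id are grouped, and an alert fires only when a particular combination (here, a large order followed by a fulfillment fault) appears together.

```python
def find_distributed_exceptions(messages):
    """Group observed messages by transaction id and flag transactions
    where a suspicious combination of message kinds occurs together.
    `messages` is a list of dicts with 'tx' and 'kind' keys."""
    by_tx = {}
    for m in messages:
        by_tx.setdefault(m["tx"], []).append(m)

    alerts = []
    for tx, msgs in by_tx.items():
        kinds = {m["kind"] for m in msgs}
        # Pattern: a large order AND a fulfillment fault in the same
        # transaction. Neither message alone is an error; together they
        # are the business exception worth alerting a manager about.
        if "order.large" in kinds and "fulfillment.fault" in kinds:
            alerts.append(tx)
    return alerts

stream = [
    {"tx": "t1", "kind": "order.large"},
    {"tx": "t2", "kind": "order.small"},
    {"tx": "t1", "kind": "fulfillment.fault"},
]
flagged = find_distributed_exceptions(stream)
```

The point of the sketch is the shape of the problem: the symptom (a fault message) is obvious, but the business significance only emerges once it is correlated with an earlier message in the same transaction.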
[September 30, 2003] "Object-Oriented Database Field Shrinks Again." By Lisa Vaas. In eWEEK (September 29, 2003). "In a deal worth $26 million, object-oriented database companies Versant Corp. and Poet Holdings Inc. are merging, Versant officials have announced... Versant will swap 1.4 shares of Versant Common Stock for each Poet share. The Versant stock that will be given to Poet shareholders represents about 45 percent of outstanding shares. The move was unanimously approved by both companies' boards of directors but is subject to approval from the Securities and Exchange Commission and from shareholders. Such a merger is unsurprising in what International Data Corp. analyst Carl Olofson has deemed a saturated market for object-oriented databases. The market consists of consumers of very complex data, such as media content and scientific and technical applications... In a statement, Versant officials said the two companies will work on software that delivers storage, integration, analysis and the ability to act on real-time data. Poet's object database, Fast Objects, is used in embedded applications. Versant's object database, VDS, is used in high-performance, large-scale, real-time applications. The merged technologies will be designed to manage real-time, XML, and other hierarchical and navigational data, according to officials... The acquisition is important to Versant as it pursues the emerging technology of JDO (Java Data Objects), Chandra said. The JDO API is used to directly store Java domain model instances into databases. JDO allows developers to create a universal way of accessing data and thus the ability to choose databases from major or minor database vendors such as Oracle Corp, Sybase Inc., Versant or Poet, without the need to make code changes..." General references in "XML and Databases."
[September 30, 2003] "Opinion: Shakeout Looms in Web Services Management." By James Kobielus. In Computerworld (September 25, 2003). "Web services management (WSM) is one of the most innovative sectors in today's IT industry. Despite the general economic slump, dozens of start-ups have ventured into the WSM market over the past few years. Consequently, enterprise customers can choose from many sophisticated tools for managing their complex Web services middleware environments. WSM is no passing fad. WSM tools address a growing need in today's Web-oriented e-business environment. They help companies ensure that the performance, reliability, availability and security of Web services environments continue to comply with service-level agreements and quality-of-service requirements. By contrast, traditional IT management tools can't monitor the end-to-end performance, availability, reliability and security of Web services environments. Typically, organizations deploy management tools associated with particular application, server and network environments. This explains why companies turn to WSM for a holistic view of service performance, as well as invest in enterprise management frameworks from Computer Associates International (CA), Hewlett-Packard (HP), IBM Tivoli and other strategic vendors... But today's WSM market is overcrowded and due for a serious shakeout. Start-ups are having a tough time establishing WSM as a separate market from IT management tools. WSM tools don't eliminate the need for traditional management tools that focus on particular applications, systems and network environments. You can't optimize Web services if you don't have the tools for viewing and fixing problems that originate in the underlying infrastructure. Sensing an opportunity to strengthen their competitive positions, management vendors are adding WSM features to their offerings. Others are bootstrapping themselves into the WSM market through strategic acquisitions. 
We see evidence for the latter trend in CA's recent acquisition of Adjoin and HP's announcement of its intention to buy Talking Blocks. Over the next several years, traditional IT management vendors will dominate the WSM market as they leverage their established customer bases and product families. Likewise, vendors of application servers, integration brokers, operating environments and other Web services platforms will embed WSM features in their offerings..." See OASIS Web Services Distributed Management TC.
[September 30, 2003] "Integrating Services with XSLT." By Will Provost. From O'Reilly WebServices.XML.com (September 30, 2003). "For all the magic that XML, SOAP, and WSDL seem to offer in allowing businesses to interoperate, they do not solve the more traditional problems of integrating data models and message formats. Analysts and developers must still plod through the traditional process of resolving differences between models before the promise of XML-based interoperability is even relevant. Happily, there's more magic out there: having committed to XML, companies can take great advantage of XSLT to address integration problems. With XSLT one can adapt one model to another, which is a tried-and-true integration strategy, implemented in a language optimized for this precise purpose. In this article I'll discuss issues and techniques involved in bringing XSLT into web service scenarios and show how to combine it with application code to build SOAP intermediaries that reduce or eliminate the stress between cooperating data structures... XSLT can make many annoying integration problems go away and with relatively low effort at that. We remember that almost all integration issues will require bidirectional transformation. That is, data that's transformed on its way in, and perhaps stored somewhere, will eventually be requested and sent back out, and it will have to look right to the requester. Form is not the only problem here. It is important to avoid the trap of inbound transformations that produce redundant results for different inputs. In other words, there must be a one-to-one mapping between the external and internal value spaces. Precisely preserving information is key to service adaptation, and this is not always so simple... As wonderful as XSLT is, it's not designed to solve all possible transformation problems. Generally, it's strong on structural work using node sets and progressively weaker working with single values and their components.
String arithmetic, algorithms, and math are notable weak points..." See related resources in "Extensible Stylesheet Language (XSL/XSLT)."
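The bidirectional-transformation requirement is easy to state and easy to get wrong. As a minimal sketch (in Python rather than XSLT, with invented external and internal formats), here a pair of functions stands in for the inbound and outbound stylesheets; the adaptation only works if they are exact inverses, i.e., a one-to-one mapping between the external and internal value spaces.

```python
import xml.etree.ElementTree as ET

# Hypothetical formats: a partner sends <person fullName="...">; internally
# we store <person><first/><last/></person>. inbound() and outbound() stand
# in for an inbound and an outbound XSLT stylesheet.

def inbound(external: str) -> str:
    """External -> internal: split fullName at the first space."""
    p = ET.fromstring(external)
    first, _, last = p.get("fullName").partition(" ")
    internal = ET.Element("person")
    ET.SubElement(internal, "first").text = first
    ET.SubElement(internal, "last").text = last
    return ET.tostring(internal, encoding="unicode")

def outbound(internal: str) -> str:
    """Internal -> external: rejoin the name parts."""
    p = ET.fromstring(internal)
    full = f'{p.findtext("first")} {p.findtext("last")}'
    return f'<person fullName="{full}" />'

doc = '<person fullName="Ada Lovelace" />'
assert outbound(inbound(doc)) == doc  # round trip preserves the document

# The trap the article warns about: if inbound() also normalized data --
# say, collapsing runs of whitespace -- then two distinct external values
# would map to one internal value, the mapping would no longer be
# one-to-one, and the round-tripped data would not look right on the
# way back out.
```

This is exactly the check worth automating when XSLT intermediaries sit between cooperating services: feed representative documents through both transforms and compare.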
[September 30, 2003] "Data Visualization Tools Emerge. Antarctica Systems, Others Help Relay Complex Information." By Cathleen Moore. In InfoWorld (September 29, 2003). "Data visualization is back on the map as a host of emerging vendors unveil products designed to help enterprises analyze reams of information. Antarctica Systems is unwrapping Version 4.0 of its Visual Net software designed to present map-based visual representations of complex data from sources such as databases, BI tools, and ERP applications. The real pain point in applications and data stores is at the UI level, said Tim Bray, founder and CTO of Antarctica Systems. In fact, tools such as BI typically suffer low adoption rates because of their complexity. 'What got everyone using computers is the advent of the GUI,' Bray said. 'We are a GUI for information spaces.' Visual Net 4.0 adds a visual configuration wizard that allows users to point and click to hook up back-end data records to the display front end. In addition, added support for DHTML brings a cleaner, more compelling user interface, Bray said. TheBrain Technologies recently released a Lotus Notes Connector Version 1.0 for its BrainEKP (Enterprise Knowledge Platform), which provides a relational, visual interface for multiple data repositories. The connector allows users to see a graphical representation of Lotus Notes information in the context of company projects, customer accounts, and business processes. Next month Mindjet will add XML support to its MindManager X5 Pro mapping and collaboration software. MindManager creates visual representations of the thinking and planning stages of the collaborative process..." See also the Visual Net 4.0 announcement: "Antarctica Systems Announces Visual Net 4.0. Maximizing Information Display to Reveal Clarity, Truth in Data."
[September 30, 2003] "Developers Show Their Independent Streak, Favoring Web-Based Apps." By Eric Knorr. In InfoWorld (September 26, 2003). "Software behemoths are trying to sell programmers on elaborate new paradigms; but as our survey results show, many programmers aren't buying. Web applications rule the enterprise. That's the indisputable conclusion to be drawn from this year's InfoWorld Programming Survey. Despite imperatives from Microsoft and others that developers abandon server-based HTML apps for fat desktop clients, the ease of 'zero deployment' through the browser continues to win the day. To build those Web apps, significant numbers of programmers favor such humble scripting languages as VBScript and Perl. Contrary to the hype that says Microsoft .Net and the Java elite have a lock on the programming world, many developers have settled on cheaper (and often faster) ways to build the Web applications they need to build. Responses gathered in August come from a group of 804 programmers and their managers. Our survey mirrors trends identified by such research companies as IDC, Gartner, and Forrester... Our respondents aren't afraid of new technology, either. A robust 51 percent say that Web services are part of their server development and 52 percent are employing XML or object-oriented databases. At a solid 40 percent, the uptake on .Net should warm Microsoft's heart, considering that the .Net Framework officially launched only 18 months ago. Adoption of Microsoft's Java-like C# was somewhat less impressive at 22 percent, though still respectable for a new programming language... The war between guerrilla and IT-sanctioned technology has persisted since the first PC slipped in the back door of a big corporation. But there's one thing nearly everyone can agree on: Nobody wants to write it twice if they don't have to.
In our survey, when asked what the biggest obstacle to reusing software is, only 10 percent say programmer disinclination... No matter what languages or tools they use, developers of all stripes are feeling the heat from the business side to respond quickly to business needs. At the high end of application development, Web services and the movement toward SOA (service-oriented architecture) promise to deliver application components that can be recombined ad infinitum with minimal development time. But analysts agree that enterprise adoption of SOA will take many years. Meanwhile, programmers are finding their own way, often using simple scripting tools, to develop the Web applications they need fast..."
[September 30, 2003] "Sun Expands Push For Auto-ID." By Matt Villano. In InternetNews.com (September 19, 2003). "Already a major player in the Auto-ID market, Sun Microsystems this week announced an initiative for delivering the hardware, software and services that enable enterprises to link into the Electronic Product Code (EPC) Network. The announcement coincided with news that the Santa Clara, Calif.-based services firm is creating a new Auto-ID business unit to work to develop and deliver a standards-based Auto-ID/EPC solution down the road. Sun's announcement came just weeks after retail giant Wal-Mart aired a mandate for its suppliers to become EPC compliant by Jan. 1, 2005. According to Jonathan Schwartz, executive vice president for Sun Software, the Sun initiative will help Wal-Mart suppliers and other enterprises integrate real-time supply chain data seamlessly into their existing business processes and enterprise assets, enabling companies to not only meet these new requirements but exceed them... As Schwartz explained it, the technology behind Sun's Auto-ID effort will be similar to the technology behind Radio Frequency Identification (RFID) tags, the microscopic chips that some companies and retailers have considered for security and tracking purposes of clothes and electronics. This kind of EPC technology helps make the supply chain more efficient, safe, and secure by tracking goods every step along the way, reducing threats of counterfeiting, tampering, and terrorism, while increasing compliance with industry and shipping regulations. More specifically, Sun said its software will deliver a dynamic federated service architecture that emphasizes reliability, availability and scalability (RAS) for Auto-ID pilots and deployments. The proposed solutions also will include lifecycle services to maximize the value of Auto-ID deployments, helping customers proactively architect, implement, and manage IT operations in heterogeneous environments.
According to Julie Sarbacker, who will head the new Auto-ID business at Sun, most of the company's EPC offerings will be delivered through the Solaris OE and Linux-based hardware platforms, setting the stage for transparent integration into the EPC Network..." The datasheet says that Sun's EPC initiative highlights an architecture "designed around Auto-ID standards such as EPC, Savant System Interface, Object Name Service (ONS), and Physical Markup Language (PML) supply chains with applications that address counterfeiting, tampering, terrorism, and regulatory compliance..." See: (1) Sun Auto-ID home; (2) the announcement, "Sun Microsystems Announces Vision and Initiative for Enterprise Auto-ID/EPC Deployments. Newly Formed Sun Business Leads Auto-ID/EPC Product and Market Development Efforts." General references in "Radio Frequency Identification (RFID) Resources and Readings."
[September 29, 2003] "OAI-Rights White Paper." By Carl Lagoze (Cornell University Information Science), Herbert Van de Sompel (Los Alamos National Laboratory), Michael Nelson (Old Dominion University Computer Science), and Simeon Warner (Cornell University Information Science). From the Open Archives Initiative. September 26, 2003. "The Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) has become an important foundation for interoperability among networked information systems. It is widely used in a variety of domains including libraries, museums, government, and research. Like any vehicle for exchanging information, the OAI-PMH exists in a context where information holders have concerns about rights to the use of their information. Although the OAI-PMH is nominally about the exchange of metadata, this does not lessen the complexities of rights-related issues: The distinction between content (data) and metadata is fuzzy at best, especially vis-à-vis intellectual property, and many providers are justifiably wary about uncontrolled reuse of rich metadata that represents a significant intellectual effort. Since the only technical restriction on data exchanged via OAI-PMH is that it must use XML encoding, it is entirely feasible to use the protocol for transmission of content itself. Since the primary reason for making metadata available via OAI-PMH is usually eventual access to the resource described by the metadata, guidelines and frameworks for expressing rights to that resource are in the scope of the protocol. As a result of these issues, discussion of rights and their relationship to the OAI-PMH have been frequent throughout work on the protocol. This paper is intended as a foundation for work aimed at incorporating structured rights expressions into the OAI-PMH. This work will be undertaken by a technical group called OAI-rights, and will result in a set of OAI-PMH guidelines scheduled for release in second quarter 2004... 
This paper examines issues and suggests alternatives for the incorporation of rights expressions in the OAI-PMH along three dimensions: (1) Entity association, which covers the association of rights expression with metadata and data (resources). (2) Aggregation association, which covers whether rights expressions can be associated with entities in the OAI-PMH that group other entities. (3) Binding, which covers where rights expressions are placed in protocol responses..." See details in the 2003-09-26 news story: "RoMEO and OAI-PMH Teams Develop Rights Solution Using ODRL and Creative Common Licenses."
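The "entity association" dimension can be sketched concretely. OAI-PMH 2.0 already provides a per-record `<about>` container for descriptive information about a metadata record, and one option the white paper weighs is placing rights expressions there. The fragment below builds such a record with Python's ElementTree; the record content and the shape of the rights element are illustrative only, not the eventual OAI-rights guidelines.

```python
import xml.etree.ElementTree as ET

# Sketch: associate a rights expression with a single record by placing
# it in the record-level <about> container of an OAI-PMH response.
record = ET.Element("record")

header = ET.SubElement(record, "header")
ET.SubElement(header, "identifier").text = "oai:example.org:item-42"
ET.SubElement(header, "datestamp").text = "2003-09-26"

metadata = ET.SubElement(record, "metadata")
ET.SubElement(metadata,
              "{http://purl.org/dc/elements/1.1/}title").text = "Sample item"

# The <about> container carries statements *about* the metadata record,
# which is where a structured rights expression could live. The <license>
# element here is a hypothetical stand-in for an ODRL or Creative Commons
# expression.
about = ET.SubElement(record, "about")
license_el = ET.SubElement(about, "license")
license_el.set("href", "http://creativecommons.org/licenses/by/1.0/")

xml_out = ET.tostring(record, encoding="unicode")
```

Record-level placement answers the first dimension above; the paper's other two dimensions ask whether the same expression could instead hang off an aggregate (a whole repository or set) and exactly which protocol responses must carry it.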
[September 25, 2003] "Patent Politics." By Paul Festa. In CNET News.com (September 25, 2003). "During a recent meeting held at Macromedia's San Francisco headquarters, Silicon Valley companies asked a familiar question: What to do about Microsoft? But the strategy event, sponsored by the World Wide Web Consortium, differed significantly from so many others, at which participants have typically gathered to oppose the software giant's power. This time, Microsoft was the guest of honor. 'There's no doubt that there are some people who are happy to see Microsoft get nailed for anything,' said Dale Dougherty, a vice president at computer media company O'Reilly & Associates. 'But for those of us who are part of the Web, we wanted the browser to be on every desktop. And if it has to be a Microsoft browser, OK.' What a difference a patent suit makes. With one staggering loss at the hands of a federal court jury in Chicago, Microsoft has won the support -- if not the sympathy -- of nearly the entire software industry, from standards organizations to corporate rivals that are rushing to defend the company's Internet Explorer browser... the [court] verdict is increasingly interpreted as a potentially crushing burden on the Web, threatening to force significant changes to its fundamental language, HTML. Microsoft's competitors fear that Eolas' lawyers will target them next, and its partners -- such as Macromedia and Sun Microsystems -- worry that an enjoined IE browser would be prohibited from running their software plug-ins without awkward technology alternatives. The result has been a complex shift of industry dynamics that has turned many traditional alliances and rivalries upside down, prompting long-suffering competitors in the browser market to side with archrival Microsoft. 
At the same time, as the Eolas case has progressed, critics have portrayed company founder and sole employee Mike Doyle as an opportunist, despite his claims to be acting on behalf of the Web against a rapacious captor... Microsoft might still pull out a victory at the appellate level. Moreover, even if Eolas' patent is upheld, the rest of the software industry may very well go with Microsoft's workarounds rather than face the prospect of abandoning development for the universally distributed IE... Given the daunting odds in any challenge to Microsoft, Doyle believes that his struggle exceeds biblical proportions. He said the often-cited comparisons to David and Goliath don't go far enough in conveying the ambition and travails of his quest, which he believes could reverse Microsoft's victory in the so-called browser war and break its control over much of the digital world... 'We're no big fan of Microsoft, but I'm a big fan of the Web,' said Dougherty, who is in charge of online publishing at O'Reilly and testified on behalf of Microsoft in its recent patent trial. 'What worries people is that this is the first successful patent offense on the Web, and lots of other things could be coming.' The prospect of having such a basic necessity as running plug-ins subject to the whim of Eolas has the industry in a near panic -- not least among those organizations whose rules restrict or ban the use of patented technologies, such as open-source browser makers and the W3C. Groups that advocate software that has open-source code say their licenses prohibit them from including patented technologies. The W3C in March reaffirmed its opposition to the use of royalty-encumbered technologies, after a lengthy public battle that ended in a near-ban. 'We have experience and proof that the specter of a fee stops standards development cold,' W3C representative Janet Daly said. 'It doesn't even have to be a firm guarantee. 
All you need is a little bit of fear, uncertainty and doubt that a developer is going to be slapped with a licensing fee, and the developer will leave that technology alone'..." See: (1) the W3C news item from 2003-09-23, "W3C Launches HTML Patent Advisory Group" with the PAG FAQ, Home Page, and Charter; (2) the news story of August 28, 2003: "W3C Opens Public Discussion Forum on US Patent 5,838,906 and Eolas v. Microsoft"; (3) general references in "Patents and Open Standards."
[September 24, 2003] "Grab Headlines From a Remote RSS File. Retrieve Syndicated Content, Transform It, and Display the Result." By Nicholas Chase (President, Chase & Chase, Inc). From IBM developerWorks, XML zone. September 23, 2003. ['In this article, Nick shows you how to retrieve syndicated content and convert it into headlines for your site. Since no official format for such feeds exists, aggregators are often faced with the difficulty of supporting multiple formats, so Nick also explains how to use XSL transformations to more easily deal with multiple syndication file formats.'] "With the popularization of weblogging, information overload is worse than ever. Readers now have more sites than ever to keep up with, and visiting all of them on a regular basis is next to impossible. Part of the problem can be solved through the syndication of content, in which a site makes its headlines and basic information available in a separate feed. Today, most of these feeds use an XML format called RSS, though there are variations in its use and even a potential competing format. This article explains how to use Java technology to retrieve the content of a syndicated feed, determine its type, and then transform it into HTML and display it on a Web site. This process involves five steps: (1) Retrieve the XML feed; (2) Analyze the feed; (3) Determine the proper transformation; (4) Perform the transformation; (5) Display the result. This article chronicles the creation of a Java Server Page (JSP) that retrieves a remote feed and transforms it using a Java bean and XSLT, and then incorporates the newly transformed information into a JSP page. The concepts, however, apply to virtually any Web environment... The application uses a DOM Document to analyze the feed and determine the appropriate stylesheet, but you can further extend it by moving some of that logic into an external stylesheet. 
You can also adapt the system so that it can pull more than one feed, perhaps based on a user selection, with each one creating its own cached file. Similarly, you can enable the user to determine the interval between feed retrievals..." See general references in "RDF Site Summary (RSS)."
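The article's pipeline is JSP plus a Java bean and XSLT; as a language-neutral sketch, the detect-then-transform steps (steps 2 through 4 above) can be shown with Python's standard-library `ElementTree` instead of a stylesheet. The two sample feeds are hypothetical, minimal stand-ins for RSS 2.0 and RDF-based RSS 1.0 documents.

```python
import xml.etree.ElementTree as ET

# Minimal, hypothetical sample feeds in two syndication dialects.
RSS2_FEED = """<rss version="2.0"><channel>
  <item><title>First headline</title><link>http://example.org/1</link></item>
  <item><title>Second headline</title><link>http://example.org/2</link></item>
</channel></rss>"""

RDF_FEED = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns="http://purl.org/rss/1.0/">
  <item><title>RDF headline</title><link>http://example.org/3</link></item>
</rdf:RDF>"""

RSS1_NS = "{http://purl.org/rss/1.0/}"

def detect_format(xml_text):
    """Step 2: inspect the root element to decide which dialect this is."""
    root = ET.fromstring(xml_text)
    tag = root.tag.split('}')[-1]  # strip any namespace qualifier
    return {'rss': 'rss2', 'RDF': 'rss1'}.get(tag, 'unknown')

def extract_headlines(xml_text):
    """Steps 3-4: pick the per-format extraction rule and apply it."""
    fmt = detect_format(xml_text)
    root = ET.fromstring(xml_text)
    if fmt == 'rss2':
        items = root.findall('./channel/item')
    elif fmt == 'rss1':
        items = root.findall(RSS1_NS + 'item')
    else:
        return []
    headlines = []
    for item in items:
        # Titles are namespaced in RSS 1.0 but not in RSS 2.0.
        title = item.find('title')
        if title is None:
            title = item.find(RSS1_NS + 'title')
        headlines.append(title.text)
    return headlines
```

In the article's design the per-format logic lives in separate XSLT stylesheets chosen at runtime; here the same dispatch is a branch on the detected format, which keeps the sketch self-contained.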
[September 24, 2003] "ISO to Require Royalties?" By Kendall Clark. From XML.com (September 24, 2003). ['Kendall Clark on the ISO's proposal to charge for using their country codes.'] "It has come to the attention of the W3C, as well as various other communities, that the ISO is thinking about imposing licensing fees for the commercial use of several of its standards, including 3166, the one which establishes country codes, as well as ISO 639 and ISO 4217, which establish language and currency codes, respectively. [In this] article I provide some background to the present controversy as well as sample the reaction of web and other internet developers and development communities... ISO, rather than taking a step forward to make possible wider access to its standards, is taking the opposite tack. It's considering requiring license fees be paid for the commercial use of the information contained in ISO 639, 4217, and 3166... It's not clear how far reaching this requirement might be. Would it require my Linux distribution maker to pay a fee for selling a CD that contains software, like the Python language and libraries, which uses ISO identifiers? What about all of the Web software, both client and server, which uses language and country codes extensively? The W3C's XML Recommendation makes reference to ISO 639 and 3166. Does that mean any product which uses an XML parser owes the ISO a fee? At least three important institutions have responded to perceived change in the ISO's licensing policy: the W3C, the Unicode Technical Committee, and INCITS... Does the ISO need a reliable means of funding? Absolutely. But it needs, at least in my view, a way which is independent of selling, at least at such exorbitant rates, its standards themselves. 
If it's a truly global standards body, it should be able to find funding from the UN (which might be able or more inclined to fund ISO if the US would pay its delinquent UN dues), from wealthy western nations (why not, since the G7 benefits the most from the ISO's work?), and even from philanthropically-minded individuals and corporations. However, some things which the ISO has standardized -- and language, currency, country identifiers, as well as date-time representations, are among those things -- should be put immediately into the public domain. Some of its standards are simply too crucial and too much in the public trust to be tied in any way to the ISO's revenue model..." See other details in "Standards Organizations Express Concern About Royalty Fees for ISO Codes."
[September 24, 2003] "Secure, Reliable, Transacted Web Services: Architecture and Composition." By Donald F. Ferguson (IBM Fellow and Chairman; IBM Software Group Architecture Board), Tony Storey (IBM Fellow), Brad Lovering (Microsoft Corporation Distinguished Engineer), and John Shewchuk (Microsoft Web Services Architect). With credits to 66 contributors. In Microsoft MSDN Library (September 2003). "The basic set of Web service specifications enables customers and software vendors to solve important problems. Building on their success, many developers and companies are ready to tackle more difficult problems with Web service technology. The very success of Web services has led developers to desire even more capabilities from Web services. Since meaningful tool and communication interoperability has been successful, developers now expect the enhanced functions to interoperate. In addition to basic message interoperability and interface exchange, developers increasingly require that higher-level application services interoperate. Many commercial applications execute in an environment ('middleware' or 'operating systems') that provides support for functions like security and transactions. IBM, Microsoft, and others in the industry are often asked to make Web services more secure, more reliable, and better able to support transactions. In addition we are asked to provide these capabilities while retaining the essential simplicity and interoperability found in Web services today. This paper provides a succinct overview of the set of Web service specifications that address these needs. For the details of the specifications we provide references to the actual documents. The main purpose of this paper is to briefly define the value these specifications provide to our customers. We also describe how these specifications complement each other to compose robust environments for distributed applications.
We face a key engineering challenge: How do we give Web services new security, reliability, and transaction capabilities without adding more complexity than needed? ... IBM, Microsoft, and our partners are developing Web service specifications that can be used as the building blocks for a new generation of powerful, secure, reliable, transacted Web services. These specifications are designed in a modular and composable fashion such that developers can utilize just the capabilities they require. This 'component-like' composability will allow developers to create powerful Web services in a simple and flexible manner, while introducing just the level of complexity dictated by the specific application. This technology will enable organizations to easily create applications using a Service-Oriented Architecture (SOA). Furthermore, IBM and Microsoft have demonstrated secure, reliable, transacted SOA applications that illustrate the richness of the business processes that can be created using this approach. Moreover, these demonstrations have been operating in a federated security environment on a heterogeneous environment consisting of IBM WebSphere and Microsoft .NET software. We anticipate that these Web Service technologies will be available in operating systems and middleware, with tools that will make it even easier for developers to use these technologies..." General references in "Web Services Implementation."
[September 23, 2003] "Experiences with the Enforcement of Access Rights Extracted from ODRL-Based Digital Contracts." By Susanne Guth [firstname.lastname@example.org], Gustaf Neumann, and Mark Strembeck (Department of Information Systems, New Media Lab, Vienna University of Economics and BA, Austria). Prepared for presentation at DRM 2003, October 27, 2003, Washington, DC, USA. 13 pages (with 38 references). "In this paper, we present our experiences concerning the enforcement of access rights extracted from ODRL-based digital contracts. We introduce the generalized Contract Schema (CoSa) which is an approach to provide a generic representation of contract information on top of rights expression languages. We give an overview of the design and implementation of the xoRELInterpreter software component. In particular, the xoRELInterpreter interprets digital contracts that are based on rights expression languages (e.g. ODRL or XrML) and builds a runtime CoSa object model. We describe how the xoRBAC access control component and the xoRELInterpreter component are used to enforce access rights that we extract from ODRL-based digital contracts. Thus, our approach describes how ODRL-based contracts can be used as a means to disseminate certain types of access control information in distributed systems... A contract typically represents an agreement of two or more parties. The contract specifies rights and obligations of the involved stakeholders with respect to the subject matter of the respective contract. Contracts in the paper world can be tailored to meet the needs of a specific business situation or to fit the requirements of individual contract partners. In principle, the same is true for digital contracts as they can be used in the area of digital rights management for example. Most often digital contracts are defined using special purpose rights expression languages (REL) as ODRL, XrML, or MPEG 21 REL for instance. 
In this connection one can differentiate between the 'management of digital rights' and the 'digital management of (arbitrary) rights'. We especially focus on contracts that contain information on digital rights, i.e., rights which are intended to be controlled and enforced in an information system via a suitable access control service -- in contrast to rights which are enforced by legislation or other 'social protocols'... In Section 2 we give an overview of the abstract structure of digital contracts. We especially describe how information within a digital contract is encapsulated in different contract objects. Section 3 then summarizes the contract processing procedures performed by a contract engine. Subsequently, Section 4 introduces the generalized contract schema CoSa and the software components we used to implement our system, before Section 5 shows how ODRL-based digital contracts are mapped to a runtime CoSa object model. Next, Section 6 describes the initialization of the xoRBAC access control service via a mediator component and the subsequent enforcement of the corresponding access rights. Section 7 gives an overview of related work, before we conclude the paper in Section 8..." See also: (1) Open Digital Rights Language (ODRL) Initiative website; (2) ODRL International Workshop 2004; (3) local references in "Open Digital Rights Language (ODRL)"; (4) general references in "XML and Digital Rights Management (DRM)."
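The extract-then-enforce pipeline the paper describes (an interpreter builds a runtime contract model, and an access control component answers queries against it) can be sketched in miniature. The authors' xoRELInterpreter and xoRBAC are real components; the Python below is only a toy analogue, and the contract fragment is a drastically simplified, hypothetical schema, not actual ODRL (which has its own namespaces and a much richer vocabulary).

```python
import xml.etree.ElementTree as ET

# Hypothetical, ODRL-inspired contract fragment (not real ODRL syntax).
CONTRACT = """<agreement>
  <asset id="doc-42"/>
  <party id="alice"/>
  <permission asset="doc-42" party="alice">
    <display/>
    <print/>
  </permission>
</agreement>"""

def extract_rights(contract_xml):
    """Interpreter step: build a CoSa-style runtime model mapping
    (party, asset) pairs to the set of rights the contract grants."""
    root = ET.fromstring(contract_xml)
    rights = {}
    for perm in root.findall('permission'):
        key = (perm.get('party'), perm.get('asset'))
        rights.setdefault(key, set()).update(child.tag for child in perm)
    return rights

def check_access(rights, party, asset, operation):
    """Enforcement step: an access control query against the extracted model.
    Anything not explicitly granted is denied."""
    return operation in rights.get((party, asset), set())

rights = extract_rights(CONTRACT)
```

The point of the intermediate model, as in the paper's CoSa, is that the enforcement component never sees REL syntax at all, so the same access control service could serve contracts written in ODRL, XrML, or another REL.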
[September 23, 2003] "Update: European Parliament Votes to Limit Scope of Software Patents. Issue Still Must be Debated by European Union Member States." By Paul Meller. In InfoWorld (September 24, 2003). "The European Parliament voted in favor of a law that goes some way toward limiting the scope for patents on software programs Wednesday. With 364 voting in favor, 153 against, and 33 abstentions, members of the European Parliament (MEPs) appear to have ignored heavy lobbying from both extremes in the debate by opting for a compromise solution. The Parliament was considering changes to the original text published by the European Commission (EC), the executive branch of the EU. Most of the changes were designed to tighten up the wording of the law to make it harder for people to obtain patents. For example, the MEPs agreed to an amendment which outlaws the patenting of algorithms. Another accepted amendment explicitly outlaws the patenting of business methods, such as the 'one-click' online shopping technique patented in the U.S. by Amazon.com. 'Inventions involving computer programs which implement business, mathematical or other methods and do not produce any technical effect beyond the normal physical interactions between a program and the computer, network or other programmable apparatus in which it is run, shall not be patentable,' the amendment read. This is the first of two votes on the software patent directive in the European Parliament. Before MEPs cast their ballots again, the directive, including the amendments agreed on Wednesday, will be debated by ministers from the 15 EU state governments... MEP Arlene McCarthy, a U.K. member of the Socialist Party, said the Parliament has sent a clear message: 'We do want strict limits on patentability of software. All the amendments that were adopted were in this direction,' she said. 'We have effectively rewritten the directive.' 
McCarthy led the debate when the bill was being discussed at committee stage in the Parliament and also drew up the amendments to be considered at this week's plenary session of the body. She said, however, that she expects the text supported by the Parliament today to be rejected by the 15 member state governments and by the directive's original author, the European Commission..." See: "Patents and Open Standards."
[September 23, 2003] "W3C Investigation Begins on HTML Standard." By Matt Hicks. In eWEEK (September 23, 2003). "The ramifications of the recent Web browser patent verdict against Microsoft Corp. could strike at the heart of the Web's common language -- HTML. The World Wide Web Consortium (W3C) is investigating whether the claims in the patent infringement lawsuit brought by Eolas Technologies Inc. and the University of California could require changes to both the current and future HyperText Markup Language specifications, W3C officials said on Tuesday. Eolas in its lawsuit has claimed that Microsoft infringed on its patent on technology that allows for embedded applications within Web pages, such as applets and plug-ins. Microsoft has disputed the claims and has promised to appeal a $521 million jury verdict handed down in August. Eolas' attorney also has said that the patent could apply to a broad range of Web technology. The W3C is forming a patent advisory group that will decide whether to recommend changes to HTML and could also call on the full standards body to conduct a formal legal analysis of the patent. 'This is a serious issue,' said Philipp Hoschka, W3C deputy director for Europe who also oversees HTML activities. 'As you know, we have tried for our specifications to be royalty free.' Hoschka wouldn't specify what portions of HTML the patent might affect. Determining whether any tags or HTML specifications fall within the patent's claims would be the HTML patent advisory group's role, he said. W3C patent advisory groups typically are formed to avoid royalties as the standards body develops technical specifications and usually involve the W3C member making patent claims, W3C spokeswoman Janet Daly said. In this case, the group will be working without the participation of the patent holder... 
Beyond suggesting changes to HTML, the advisory group also could become involved in the ongoing debate concerning 'prior art' -- a legal term in patent law referring to whether an invention existed prior to the filing of a patent. Hoschka declined to say whether any investigation into the existence of prior art could also lead to the W3C becoming more directly involved in the patent lawsuit. The W3C has sought legal opinions concerning prior art before. In 1999, it concluded after a yearlong examination that the then-proposed Platform for Privacy Preferences (P3P) standard for Web privacy did not infringe on an existing patent. Earlier this month, Lotus Notes creator Ray Ozzie claimed that Microsoft had made prior art arguments during the trial and is expected to use that argument in an appeal..." See: (1) the W3C news item from 2003-09-23, "W3C Launches HTML Patent Advisory Group" with the PAG FAQ, Home Page, and Charter; (2) the news story of August 28, 2003: "W3C Opens Public Discussion Forum on US Patent 5,838,906 and Eolas v. Microsoft"; (3) general references in "Patents and Open Standards."
[September 23, 2003] "Eolas Suit May Spark HTML Changes." By Paul Festa. In CNET News.com (September 19, 2003). ['The World Wide Web Consortium is on the verge of forming a patent advisory group in response to the Eolas patent suit. Fallout from Eolas' patent victory over Microsoft threatens to hit Web developers and HTML itself.'] "As anxiety builds throughout the Web over the patent threatening Microsoft's Internet Explorer browser, the Web's leading standards group is considering modifying the medium's lingua franca itself, HTML, to address the same threat. The World Wide Web Consortium (W3C) is on the verge of forming a patent advisory group, or PAG, in response to the Eolas patent suit, according to sources close to the consortium. That group would conduct a public investigation into the legal ramifications of the patent on Hypertext Markup Language, the signature W3C standard that governs how most of the Web is written, and other specifications related to it... the W3C is said to be contemplating changes to HTML, considered one of the consortium's more mature and settled specifications. The potential problem for HTML is that it describes a way of summoning content located on a server other than the one serving the page in question. The 'object' and 'embed' tags in HTML, consortium members worry, may fall under the wording of the Eolas patent. Options the PAG could recommend include a technical workaround or new wording in HTML and related specifications warning that authors who implement the tags in question should contact the patent holders and take out a license, if necessary. The HTML PAG could also, as have previous PAGs in other working groups, launch a drive to discover 'prior art,' or technologies older than the Eolas patent that could potentially invalidate it in court. The W3C established the PAG system after its P3P privacy preferences recommendation was threatened by patents. 
The groups have since been formed to respond to patent disputes among VoiceXML working group members. The PAG policy was codified with the rest of the W3C's patent-averse policy, which was ratified in March after a rancorous debate..." See: (1) the W3C news item from 2003-09-23, "W3C Launches HTML Patent Advisory Group" with the PAG FAQ, Home Page, and Charter; (2) the news story of August 28, 2003: "W3C Opens Public Discussion Forum on US Patent 5,838,906 and Eolas v. Microsoft"; (3) "Patents and Open Standards."
[September 23, 2003] "OASIS Ratifies SAML 1.1. RSA Supports Latest Version in Products." By Paul Roberts. In InfoWorld (September 19, 2003). "The OASIS Internet standards consortium said Monday that its members ratified SAML (Security Assertion Markup Language) Version 1.1 as an official standard, approving changes to the specification that will improve interoperability with other Web services security standards. The vote assigns the highest level of OASIS (The Organization for the Advancement of Structured Information Standards) ratification to SAML 1.1 and could open the door for wider adoption of the XML (Extensible Markup Language) framework for companies using Web services to conduct high value transactions, according to Prateek Mishra of Netegrity Inc., co-chair of the OASIS Security Services Technical Committee. SAML is a standard that supports so-called 'federated identity' systems in which user authentication and authorization information is securely exchanged between Web sites within an organization or between organizations. SAML enables a user to sign on once to Web-enabled services, instead of having to repeatedly log in when they move from one Web site or Web-enabled application to another... The new version of SAML includes a number of updates and fixes for problems identified in the 1.0 standard, he said. In particular, SAML 1.1 revised guidelines for the use of digital certificates to sign SAML user authentication exchanges, known as SAML assertions. SAML 1.0 standards were vague about how to digitally sign SAML assertions, creating interoperability problems between different companies implementing Web services using the 1.0 standard, Mishra said. Only a 'small group' of companies are currently interested in using digital certificates to sign SAML assertions. 
However, that group is growing, as companies look for ways to exchange sensitive data with employees and business partners while also verifying that digital transactions took place -- a capability known as nonrepudiation... Having handed off the SAML 1.1 standards, OASIS's Security Services Technical Committee is now at work on the SAML 2.0 specification, Mishra said. That version will come with major additions to the standard based on feedback from large companies. Among other things, the group is looking at ways to implement distributed log out, in which three or more Web sites that share a single login session will synchronize when a user terminates that session. OASIS also wants to harmonize SAML 2.0 with the Liberty Alliance's ID-FF layer, another federated identity, single-sign on standard..." See: (1) the announcement, "Security Assertion Markup Language (SAML) Version 1.1 Ratified as OASIS Standard. Baltimore Technologies, BEA Systems, Computer Associates, Entrust, Hewlett-Packard, Netegrity, Oblix, OpenNetwork, Reactivity, RSA Security, SAP, Sun Microsystems, Verisign, and Others Collaborate on Authentication and Authorization."; (2) "Security Assertion Markup Language (SAML)"; (3) "Liberty Alliance Specifications for Federated Network Identification and Authorization."
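To make the object under discussion concrete, the sketch below builds the skeleton of a SAML 1.1-style authentication assertion with Python's standard-library `ElementTree`. This is an illustrative structure only: the issuer and subject values are hypothetical, and the XML Signature step (precisely the part whose underspecification in SAML 1.0 caused the interoperability problems described above) is deliberately omitted.

```python
import xml.etree.ElementTree as ET

# SAML 1.x assertion namespace.
SAML_NS = "urn:oasis:names:tc:SAML:1.0:assertion"

def build_assertion(issuer, subject_name):
    """Skeleton of an unsigned SAML 1.1 authentication assertion.
    A real deployment would sign this with XML Signature (xmldsig);
    that step is omitted here."""
    a = ET.Element(f"{{{SAML_NS}}}Assertion",
                   MajorVersion="1", MinorVersion="1", Issuer=issuer)
    stmt = ET.SubElement(a, f"{{{SAML_NS}}}AuthenticationStatement")
    subj = ET.SubElement(stmt, f"{{{SAML_NS}}}Subject")
    nid = ET.SubElement(subj, f"{{{SAML_NS}}}NameIdentifier")
    nid.text = subject_name

    return a

assertion = build_assertion("https://idp.example.org", "alice@example.org")
xml_bytes = ET.tostring(assertion)
```

Note that the version is carried as two attributes, MajorVersion and MinorVersion, which is how a relying party distinguishes a 1.1 assertion from a 1.0 one.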
[September 23, 2003] "New ISO Fees on the Horizon?" By Evan Hansen. In CNET News.com (September 19, 2003). ['IT standards groups are rallying opposition to an ISO proposal to introduce usage royalties for widely adopted standards, including country codes.'] "Information technology standards groups are raising warning flags over a proposal that could raise fees for commonly used industry codes, including two-letter country abbreviations, used in many commercial software products. At stake is a tentative proposal from the International Organization for Standardization (ISO) to add usage royalties for several code standards, a move that opponents say could weaken standards adherence by forcing software providers to pay a fee for each ISO-compliant product they sell. The standards -- ISO 3166, ISO 4217, ISO 639 -- cover country, currency and language codes, respectively. The backlash illustrates growing sensitivity in software circles over belated intellectual property claims... The proposal is still in the early stages, and may yet be significantly altered or shelved. Still, technology standards groups -- including the International Committee for Information Technology Standards (INCITS), the World Wide Web Consortium (W3C) and the Unicode Technical Committee -- are rallying opposition. 'Charging (usage fees) for these codes would have a big impact on almost every commercial software product, including operating systems,' said Mark Davis, president of software consortium Unicode, which is seeking to set standard character sets for disparate computing systems. 'They're used in Windows, Java, Unix and XML. They're very pervasive.' ... The ISO's claims on the codes stem from copyrights it owns on documents that describe the standards. 
ISO generally does not make its standards freely available, but sells them to fund its operations. Whether those copyrights apply to the codes themselves has not yet been tested, according to opponents of the proposal. 'There has not been a detailed discussion of how they own that copyright for the codes themselves,' said Martin Duerst, W3C Internationalization Activity Lead. 'The copyrights may not apply to individual codes, but only to the whole collection of codes--like a dictionary, where each word is not copyrighted, but the entire collection of words and definitions is copyrighted.' Duerst said the ISO's proposal is troubling because so many other standards groups have adopted the ISO codes. For example, he said, the Internet Engineering Task Force (IETF) has largely adopted the ISO's country codes..." See details and references in the news story "Standards Organizations Express Concern About Royalty Fees for ISO Codes." General references (for language codes) in "Language Identifiers in the Markup Context."
[September 23, 2003] "When Good Institutions Go Bad." By Simon St. Laurent (Editor, O'Reilly & Associates). From O'Reilly Developer Weblogs (September 23, 2003). "The last few weeks have seen a dismaying upturn in the number of semi-public institutions which seem to be out to make a buck rather than a contribution, risking the contributions they've already made. ISO has the potential to cause the largest trainwreck, with plans to require licensing fees from those who use their language codes (ISO 639), country codes (ISO 3166), and currency codes (ISO 4217). The W3C has posted a letter to ISO... It appears that ANSI (the US member body for ISO) is already at work collecting these royalties, as this exchange suggests. Warnings have gone up on the ISO 3166 site as well... I've been a critic of the W3C's structure for a long time now, having doubts about the nature of vendor consortia. On these kinds of issues, however, the W3C seems to be well ahead of its peers. While the process of creating many W3C specifications may remain veiled in mystery, the specifications themselves are open for anyone to implement, free of charge -- and the W3C seems intent on keeping it that way, even in the face of recent patent lunacy. The larger problem this illustrates isn't the greedy nature of everyone, but rather the difficulties of trust in a world where organizations are underfunded and expected to scramble for dollars. Building organizations which are intended to promote the sharing of resources requires an independent source of funds. Otherwise, organizations will end up placing tolls on their results, impeding the very sharing they were set up to create..." See details and references in the news story "Standards Organizations Express Concern About Royalty Fees for ISO Codes." General references (for language codes) in "Language Identifiers in the Markup Context."
[September 22, 2003] "Add XML Parsing to Your J2ME Applications. Combine Mobile Data and Mobile Code on Your Mobile Device." By Soma Ghosh (Application developer, Entigo). From IBM developerWorks, Wireless. September 16, 2003. ['More and more enterprise and Java technology projects are making use of XML as a medium to store data in a portable fashion. But due to the increased processing power demanded by XML parsers, J2ME applications have largely been left out of this trend. Now, however, small-footprint XML parsers for the Java language are emerging that will allow MIDP programmers to take advantage of the power of XML.'] "The fusion of Java and XML technologies creates the powerful combination of portable code and portable data. But where does the Java 2 Platform, Micro Edition (J2ME) fit in? In this article, I'll show some of the progress that has been made in cutting XML parsers down to a size suited to J2ME applications and limited-resource platforms. I'll use the kXML package to write an application for the MIDP profile that can parse an XML document... In this article, you'll see how you can use J2ME to fuse Java technology and XML -- in other words, to fuse portable code with portable data. Designing J2ME applications with embedded parsers can be a challenge because of the resource constraints inherent in J2ME devices. However, with the gradual availability of compact parsers suited to the MIDP platform, XML parsing will soon be a widely used feature of the Java platform on mobile devices... Both push and model parsers require an amount of memory and processing power that is beyond the capabilities of many J2ME devices. To get around those device limitations, a third type of parser, called a pull parser, can be used. A pull parser reads a small amount of a document at once. The application drives the parser through the document by repeatedly requesting the next piece. 
The kXML parser that I'll use in my sample application is an example of a pull parser... You can use XML parsers in J2ME applications to interface with an existing XML service. For example, you could get a customized view of news on your phone from an aggregator site that summarizes headlines and story descriptions for a news site in XML format. XML parsers tend to be bulky, with heavy run time memory requirements. In order to adapt to the MIDP environment, XML parsers must be small to meet the resource constraints of MIDP-based devices. They should also be easily portable, with minimum effort required to port them to MIDP. Two frequently used XML parsers for resource-constrained devices are kXML and NanoXML. kXML is written exclusively for the J2ME platform (CLDC and MIDP). As of version 1.6.8 for MIDP, NanoXML supports DOM parsing..."
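The pull-parsing idea described above is not specific to kXML or Java. As a standard-library illustration of the same pattern, Python's `xml.etree.ElementTree.iterparse` also lets the application drive the parse by pulling events one at a time, holding only a small window of the document in memory; the sample news document below is hypothetical.

```python
import io
import xml.etree.ElementTree as ET

# A tiny, hypothetical syndicated-news document.
NEWS_XML = """<news>
  <story><headline>Parser shrinks to fit phones</headline></story>
  <story><headline>XML goes mobile</headline></story>
</news>"""

def pull_headlines(stream):
    """Pull-style parsing: the loop asks the parser for the next event,
    rather than the parser pushing callbacks or building a full tree."""
    headlines = []
    for event, elem in ET.iterparse(stream, events=("end",)):
        if elem.tag == "headline":
            headlines.append(elem.text)
            elem.clear()  # discard parsed content to keep memory use low
    return headlines

headlines = pull_headlines(io.StringIO(NEWS_XML))
```

The `elem.clear()` call is the memory-saving move that matters on constrained devices: once a piece of the document has been consumed, nothing obliges the application to keep it around.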
[September 22, 2003] "Microsoft Seeks Stronger XML Ties. ERP Vendors Pour Cold Water on Office as Window to Enterprise Applications." By Joris Evers. In InfoWorld (September 19, 2003). "Microsoft's forthcoming Office 2003 suite offers enterprises a promise few vendors or analysts are willing to support. The software giant argues that organizations will realize significant business process improvements by using the Office 2003 suite as a window into back-end enterprise systems. Office 2003's support for XML, Microsoft contends, is the key to bridging this front-end to back-end gap. But enterprise application vendors such as SAP, PeopleSoft, and Siebel Systems are far more interested in using XML for back-end integration, not to support a new front end. SAP, a longtime Microsoft partner, hopes Microsoft's support for XML will improve integration between Office and SAP back-end systems -- as SAP users can already tie Excel to their enterprise applications. But SAP does not expect users to switch from using portals to access data in enterprise systems to using Office... One early adopter is Cooper Tire and Rubber, a Microsoft showcase for using the new made-for-XML InfoPath Office application instead of Word, Excel, Outlook, or PowerPoint. With the help of Microsoft, Cooper Tire is building an XML front end to its customized tire-mold management system. Using XML forms and InfoPath, the company will be able to track the movements of molds between its various locations, said Ron Sawyer, manufacturing IT manager at Cooper Tire. 'Right now, we do not have visibility of the molds as they are in transit, and we make estimates of how long it will take for a mold to get shipped out of one plant and arrive at the other,' Sawyer said. 'We are very new to using XML and wanted to stick with Office and the Microsoft tools because that is our standard.' About 40 employees at Cooper Tire will use XML forms. 
The forms are opened in InfoPath and interact with a Windows 2000 Server system that sends the data on to an Oracle database..." See also: "Microsoft Office 11 and InfoPath [XDocs]" and "XML File Formats."
[September 22, 2003] "Sun Touts Liberty for Digital Rights Management." By Gavin Clarke. In Computer Business Review Online (September 19, 2003). "Sun Microsystems hopes to replicate an industry initiative for federated identity in the field of Digital Rights Management (DRM), to stymie Microsoft Corp's own controversial plans to control distribution of electronic content. The company has thrown its weight behind the OMA wireless group's effort to define a DRM specification on mobile devices. Ultimately, though, Sun hopes to build a coalition of vendors and end-users similar to the Liberty Alliance Project to drive uptake of DRM. Sun CTO John Fowler said a Liberty-style group would have the advantage of including input into specifications from end-users. Sun has helped work on a DRM specification at the Open Mobile Alliance, whose list of 200 members includes hardware vendors, ISVs, mobile specialists and content providers such as AOL Time Warner and Sony Inc. Liberty's members include end-users such as Amex and General Motors... 'Liberty was less about vendors who have technology and about the user,' Fowler said. Sun additionally believes DRM for mobile systems to be important, given the expected growth rates in use of cell phones and other devices. Mobile platforms are also dominated by Sun's Java 2 Micro Edition (J2ME), meaning any DRM specification could ultimately be built into the platform. Ironically, Microsoft is also an OMA member, meaning the company could end up putting its name to DRM work that ultimately competes against its own. OMA is an amalgamation of formerly disparate wireless and mobile vendor groups, formed in June 2002..." General references in "XML and Digital Rights Management (DRM)."
[September 22, 2003] "SOAP Gains Traction: Q&A with Rebecca Dias." By Jack Vaughan. In Application Development Trends (September 19, 2003). Microsoft's Rebecca Dias discusses the status of interoperability between Microsoft's .NET and IBM Java, binary communications, the goals of Web Services Enhancements (WSE) V2, and related topics. Dias: "There's a great deal of traction in terms of just general SOAP message processing and interop, and that goes across the board. Actually, it's more than just IBM and Microsoft, it's the Java world as well as other worlds that exist out there. There are Lisp implementations, for instance, that are finding interop, as well as WSDL and the WS-I basic profile they've defined. There are about 100 partners, if not more, collectively collaborating on profiling how you do interoperability of SOAP, WSDL and the basic Web services protocol. There are also numerous implementations deployed based on that interoperability... The key to [SOA] is the meta data provided to you in the different SOAP headers, so SOAP is very quintessential to that. If a standards specification comes out that defines a different way to do the encoding that is highly and widely adopted, there's no reason why that can't happen. But today, the spec is still SOAP and XML meta data. If you have two intermediaries that are intelligent, that understand and know that the next intermediary hop happens to be in the same technology domain, and knows that we can actually do some kind of binary format from here to there, there's no reason why that can't happen and why your corresponding infrastructure can't support that. And if it ends up going to the next hop, which happens to not be potentially aware or know how to deal with that binary format, those systems had better know how to translate that back to SOAP, otherwise you're losing the whole value of a highly heterogeneous, interoperable system..."
[September 22, 2003] "Sun Touts Fast Web Services Plan. Binary Encodings Key to Proposal." By Paul Krill. In InfoWorld (September 19, 2003). "Researchers at Sun Microsystems are working on an initiative called Fast Web Services, intended to identify and solve performance problems in existing Web services standards implementations. Key to Sun's approach is boosting performance through use of binary encodings as an alternative to textual XML representations. 'Our technology improves both transmission speed, [with] less data transmitted, and processing performance on sender and receivers. The format requires less processor time than XML,' said Marc Hadley, Sun's senior staff engineer for Web technologies, products, and standards, in an e-mail response to questions. Sun plans to have a prototype of Fast Web Services in its Java Web Services Developer Pack early in 2004. Sun Distinguished Engineer Eduardo Pelegri-Llopart gave a presentation on Fast Web Services at the SunNetwork conference in San Francisco this week. Sun believes Web services is going to become the new paradigm for distributed systems going forward, he said. But Web services need to be tuned for performance while enabling interoperability, according to Pelegri-Llopart. 'We're trying to provide better performance. We don't want a solution that is specific to our implementation,' he said. Sun's plan requires changes from developers. 'We believe that developers are to a large degree lazy. They find a concept that they're comfortable with, they take that concept, and push it to the limit,' said Pelegri-Llopart. In Sun's view, the XML-based messaging that lies at the heart of current Web services technology carries with it a performance price. 
XML-based messages require more processing than protocols such as RMI (Remote Method Invocation), RMI/IIOP (RMI Over Internet Inter-ORB Protocol), or CORBA/IIOP; data is represented inefficiently and binding requires computation, according to Sun in a paper published in August. 'The main point here is there is almost an order of magnitude between straightforward Web services using XML encoding and an implementation that takes care of binary encoding,' Pelegri-Llopart said. Fast Web Services attempts to solve bandwidth problems, including on wireless networks, by defining binary-based messages, albeit while losing the self-descriptive nature of XML. Although not an attempt to replace XML messaging, Fast Web Services is intended for use when performance is an issue..." See: (1) "JavaOne: Fast Web Services," presentation by Santiago Pericas-Geertsen and Paul Sandoz (Sun Microsystems); (2) "Fast Web Services," by Paul Sandoz, Santiago Pericas-Geertsen, Kohsuke Kawaguchi, Marc Hadley, and Eduardo Pelegri-Llopart (Sun Microsystems Web Services library; appendices include a WSDL Example and an ASN.1 Schema for SOAP).
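The bandwidth side of Sun's argument is easy to see in a toy comparison. This sketch is not Sun's actual ASN.1-based encoding; it just contrasts the byte cost of the same three numeric fields as self-describing XML text versus a fixed binary record whose layout both sides agree on in advance (losing, as the article notes, XML's self-descriptive nature):

```python
import struct
import xml.etree.ElementTree as ET

# The same payload, once as self-describing XML text...
root = ET.Element("quote")
for name, value in [("id", 42), ("price", 1999), ("qty", 7)]:
    ET.SubElement(root, name).text = str(value)
xml_bytes = ET.tostring(root)

# ...and once as a fixed binary record: three 32-bit big-endian ints.
# This stands in for a schema-driven encoding (e.g., ASN.1 PER), where
# sender and receiver already share the field order and types.
bin_bytes = struct.pack("!iii", 42, 1999, 7)

print(len(xml_bytes), len(bin_bytes))
```

The element names and closing tags that make XML self-describing are exactly the bytes the binary record omits, which is where the transmission-size savings come from.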
[September 22, 2003] "Adobe E-Doc Format Under Siege." By David Becker. In CNET News.com (September 18, 2003). ['Adobe's popular PDF document-sharing format faces challenges from Autodesk and Macromedia, each looking to take a bite out of the market with their own new technology. Analysts say the rivals could be a real threat to Adobe, which attributed a major earnings boost last week to a new line of PDF products.'] "Adobe Systems' portable document format, long a de facto Internet standard, is under fire from competitors looking to muscle in on the electronic document market. Autodesk, the leading maker of drafting software for architectural and engineering documents, recently began an aggressive advertising campaign urging customers to share documents in Autodesk's own Design Web Format (DWF) rather than in Adobe's PDF. In addition, Macromedia introduced FlashPaper, a new component based on the company's widespread Flash animation format that allows documents to easily be incorporated into Web pages and printed... One of the most influential Web design writers, Jakob Nielsen, recently attacked the widespread use of PDF for displaying documents over the Web, declaring the format 'unfit for human consumption.' The challenges come at a key time for San Jose, Calif.-based Adobe, which attributed a major earnings boost last week to a new line of PDF-related products released earlier this year. While PDF is firmly established in the PC world, 'I think there's always the possibility of a real threat,' said Rob Lancaster, an analyst for research firm The Yankee Group. 'Adobe is attempting to entrench itself within business applications, extending the capabilities of PDF beyond its typical role as viewing software, and a big part of that appeal rests on the ubiquity of the viewing capability.' Chuck Meyers, a technology strategist for Adobe's ePaper division, characterized recent swipes at PDF as acknowledgement of the company's success in popularizing the format. 
'The key thing that's happening is that as we get bigger and better...the area we're in is a little bit more interesting a target than it used to be,' he said. 'We're going to take heat from a variety of different directions.' The most pointed business attack has come from Autodesk, whose new advertising and marketing campaign focuses on the supposed faults of PDF for exchanging engineering documents. The campaign comes as a surprise turnaround, after Adobe highlighted compatibility with AutoCAD -- Autodesk's main application for architectural drafting -- as a key selling point for Acrobat Professional, the new high-end version of its PDF authoring tool. Tony Peach, the director of DWF corporate strategy for Autodesk, says the campaign stems from customer inquiries about the best way to exchange engineering documents..."
[September 18, 2003] "No Standard for Standards." By Jim Ericson. In Line56 (September 18, 2003). "History shows the value of uniformity, but portal standards are not yet a path to better workplace advantage... We have written and written about the value of extensible languages and protocols, and lately we've been excited and led some of the cheering for the arrival of portal standards like JSR 168, the Java API for local portlets and WSRP, an interface to assemble and connect third-party portlets. We're as patient as the next bunch, but who wouldn't cheer for ease of content integration and portal interoperability? Well, amid the myriad and venerable Web standards movements in progress, the first 1.0 spec of WSRP has finally just arrived with approval of the OASIS standards body, and JSR 168 is in its latest final draft. All the vendors and integrators have lined up behind the standards with products that support JSR and WSRP. We should be happy for our cause, but we're not because now we know we have only scratched the portals standards surface. There doesn't appear to be a compelling competitive advantage for a standards-adopting first mover and besides, putting portal standards to use today will be nothing so easy as plugging a toaster into a wall socket... JSR 168 builds on existing practices and lets developers create portlets that are interoperable with portal servers and Web applications. It's really designed for local execution of portlets, says Phil Soffer, who manages products and standards compliance at Plumtree. 'JSR 168's biggest strength is simplicity and the tools available from Java vendors can be used right away or with few extensions,' Soffer says. The weaknesses are that quality of service cannot be guaranteed, and that it is hard to scale locally without the addition of multiple application servers. WSRP, or Web Services for Remote Portlets, is a cross-platform standard designed to let portlets execute remotely from a portal server. 
It's a 'plug-'n-play' standard for multiple proprietary portals, Web applications and content sources. A good thing is that the standards are complementary. A developer could build self-service HR into a JSR 168 portlet, wrap it as a WSRP service and expose it to other portals and applications. This way a .NET portal framework could look at a JSR-built portlet through the WSRP 'wrapper.' A not so good thing from a standardization view is that proprietary portal applications like Plumtree's can presently deliver a lot more functionality natively than can be delivered through standards-based interfaces. So rather than running JSR 168 natively, Plumtree puts the standard in a parallel engine that can be used as works best, while retaining native benefits like fault tolerance and caching..." See recently: "Web Services at Apache Hosts WSRP4J Open Source Project for Remote Portlets."
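The wrap-a-portlet-as-a-service pattern described above can be sketched in miniature. This is a hypothetical illustration, not real JSR 168 or WSRP code: the class and method names (HRPortlet, WSRPWrapper, getMarkup) are invented, though WSRP's actual interface does center on a getMarkup operation that returns a markup fragment for the consuming portal to aggregate:

```python
class HRPortlet:
    """Stand-in for a locally executing JSR 168-style portlet:
    it renders a markup fragment, not a whole page."""
    def render(self, user):
        return f"<div>Vacation days left for {user}: 12</div>"

class WSRPWrapper:
    """Toy WSRP-style facade: exposes the portlet's output through a
    generic getMarkup operation, so a remote consumer (even a .NET
    portal framework) sees only the service interface, never the
    Java portlet behind it."""
    def __init__(self, portlet):
        self.portlet = portlet

    def getMarkup(self, user):
        # A real WSRP response travels as a SOAP message; here we just
        # return the fragment a consumer portal would aggregate.
        return {"markupType": "text/html",
                "markup": self.portlet.render(user)}

service = WSRPWrapper(HRPortlet())
response = service.getMarkup("alice")
print(response["markup"])
```

The point of the pattern is the indirection: the consumer depends only on the generic markup-producing operation, which is why the two standards compose rather than compete.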
[September 18, 2003] "The State of the Python-XML Art, 2003." By Uche Ogbuji. From XML.com (September 11, 2003). The author updates his overall Python-XML survey to encompass notable developments over the past year, many of which have been mentioned in the previous XML.com Python articles. This article serves as a ready and rapid index to folks who want to process XML using "the best language available for the purpose." Ogbuji organizes the review in a table according to the areas of XML technology. This will give newcomers to Python a quick look at the coverage of XML technologies in Python and should serve as a quick guide to where to go to address any particular XML processing need. He rates the vitality of each listed project as either "weak", "steady", or "strong" according to the recent visible activity on each project: mailing list traffic, releases, articles, other projects that use it, etc. The table uses these categories for tools supporting Python-based processing: XML parsing engines, DOM, Data bindings and specialized APIs, XPath and XSLT, Schema languages, Protocols, RDF and Topic Maps, Miscellaneous. A year ago the author reported 34 Python-XML projects; this year he adds 24; most of the additions point to the impressive activity that continues on the Python-XML front..." See also "XML and Python."
[September 18, 2003] "Commentary: SOE - Service Oriented Everything?" By CBDi Forum Analyst. In CBDi Newswire (September 17, 2003). "The plethora of Service Oriented acronyms appearing is a sure sign that Service Orientation is the 'next big thing'. As with Object Orientation, expect Service Oriented Programming, Service Oriented Analysis and Design, etc., to take centre stage with developers in the near future. Already some vendors are telling us to 'watch out for SOx' as their product plans begin to take shape, whilst analysts rush to each invent their own SOxx acronyms in typical 'we thought of it first' style. Whatever the acronym, successful adoption of SOx and Web Services will not happen by a process of osmosis, simply allowing technology to drive Service Orientation from the bottom up. At a recent workshop we held for a large global company it was evident that pockets of Web Service adoption were springing up across the organization, often with little visibility between one group and another. This is not unexpected, and should not be looked upon as a bad thing or discouraged. In this case the individual results were successful, and as ever it is often preferable that people prove for themselves that new ideas work rather than have it dictated to them from on high... The CBDI Web Services Roadmap initiative is designed to help organizations properly manage the shift to Web Services and SOA. We provide the roadmap in recognition that this shift is a journey that won't happen overnight, but now that it is evident the take-up of Web Services is accelerating, it looks like a good time to start..."
[September 18, 2003] "A Preview of WS-I Basic Profile 1.1." By Anish Karmarkar. From O'Reilly WebServices.xml.com (September 16, 2003). "On 12th August 2003 WS-I (Web Services Interoperability Organization) announced the release of the final specification of Basic Profile 1.0, a set of recommendations on how to use web services specifications to maximize interoperability. For developers, users, and vendors of web services and web services tools this is a big leap forward toward achieving interoperability in the emerging and fast changing world of web services. But what else has WS-I been working on? WS-I recognizes the fact that Basic Profile 1.0 is just a beginning and that it's a long road toward web services maturity and interoperability. In its mission toward accelerating the adoption of web services and promoting interoperability, the Basic Profile Working Group, which developed Basic Profile 1.0, is tasked with generating Basic Profile 1.1 to incorporate attachments... Basic Profile 1.1, as the name indicates, is the next version of Basic Profile. It builds on 1.0, adding support for SOAP Messages with Attachments (SwA) and WSDL 1.1's Section 5 MIME bindings. As part of the process of releasing a Profile, other Working Groups within WS-I develop sample applications and test tools for the Profile. This ensures that the Profile is implementable and 'debugged' before its final release. Like Basic Profile 1.0, Basic Profile 1.1 will be released with sample applications and test tools. This article provides a preview of Basic Profile 1.1 based on the latest Working Group Draft. The Basic Profile Working Group has been working on Basic Profile 1.1 since January 2003. In the course of its development the WG identified more than 70 technical issues that needed to be resolved. Only a very few minor ones remain. 
Please remember that this preview is based upon a Working Group Draft; as a work in progress, it can (and almost certainly will) be modified as the draft Profile is reviewed and refined... The most widely implemented and accepted attachment technology is MIME. SwA combines MHTML and content-id URIs (CID) for referencing MIME parts in SOAP. Basic Profile 1.1 has selected SwA as the attachment technology and WSDL 1.1 Section 5 MIME bindings for describing SwA. Basic Profile 1.1, as with Basic Profile 1.0, clarifies, fixes, and subsets the relevant specs to make them more interoperable and to remove ambiguities. This addresses a real need that developers and users of web services have when dealing with large binary data and transporting it within a SOAP 1.1 Envelope. The direction that Basic Profile 1.1 has taken fits very nicely with the direction that XMLP WG has taken with respect to attachments for SOAP 1.2, as documented in SOAP Message Transmission Optimization Mechanism (MTOM). Both use MIME and are based on SwA... Interoperable attachments are one of the features most frequently demanded by developers and users of web services. The Basic Profile Working Group addresses this need by including SwA in Basic Profile 1.1, resolving ambiguities, and by filling in the gaps of existing specifications. Furthermore, Basic Profile 1.1 also enables language binding tools to generate appropriate APIs to take full advantage of attachments..." See: (1) "WS-I Releases Basic Profile 1.0a Final Specification for Interoperable Web Services"; (2) Charter v1.1; (3) general references in "Web Services Interoperability Organization (WS-I)."
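The SwA mechanism described above pairs a SOAP envelope with MIME parts referenced by Content-ID (cid:) URIs inside a multipart/related package. A rough sketch of that wire format, built with Python's standard email package rather than any SOAP toolkit (the claim/photo payload and the photo-1 content-id are invented for illustration):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

# Root part: the SOAP 1.1 envelope, which points at the attachment
# by its Content-ID via an href="cid:..." reference.
envelope = MIMEText(
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
    '<soap:Body><claim><photo href="cid:photo-1"/></claim></soap:Body>'
    '</soap:Envelope>',
    "xml",
)

# Attachment part: opaque binary data carried outside the envelope,
# avoiding base64 bloat inside the XML itself.
photo = MIMEApplication(b"\x89PNG...binary data...", "octet-stream")
photo.add_header("Content-ID", "<photo-1>")

# multipart/related with the envelope as the root part.
msg = MIMEMultipart("related", type="text/xml")
msg.attach(envelope)
msg.attach(photo)

wire = msg.as_string()
print("cid:photo-1" in wire, "Content-ID: <photo-1>" in wire)
```

The cid: URI in the envelope and the Content-ID header on the MIME part are the two halves of the reference that Basic Profile 1.1 pins down; the ambiguities the profile resolves are largely about how such references must be formed and matched.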
[September 17, 2003] "Sun Embraces Open-Source Database." By Matt Hicks. In eWEEK (September 17, 2003). "Sun Microsystems Inc. is standardizing its software products on an open-source database to store and manage non-relational data. The company has chosen Sleepycat's Berkeley DB database as the embedded database within its software line. The database is incorporated in key components of the Sun Java Enterprise System, formerly known as Project Orion, and the Sun Java Enterprise Desktop System, formerly known as Project Mad Hatter, both launched on Tuesday. Sleepycat President and CEO Mike Olson said Sun chose Berkeley DB not only because of the technology behind it but also because Sleepycat offers a dual license: It offers a free open-source license for using the database within open-source software and a paid commercial license for software vendors like Sun using Berkeley DB within commercial software..." See: (1) the announcement: "Sun Microsystems Selects Sleepycat Software for New Middleware and Desktop Initiatives. Sun to Standardize on Berkeley DB to Meet Non-Relational Data Management Needs Within Key Components of Sun Java Enterprise System and Sun Java Desktop System." (2) "Sleepycat Software Releases Berkeley DB XML Native XML Database"; (3) "Berkeley DB XML: An Embedded XML Database."
[September 17, 2003] "Microsoft, IBM Toast Next Era of Web Services. Companies Demonstrate Web Service Interoperability on Windows, Linux Platforms." By Paula Rooney. In CRN (September 17, 2003). "Microsoft and IBM united in New York to demonstrate preview code for the next set of Web service protocols designed to enable more complex, secure, cross-company e-business transactions. Microsoft Chairman Bill Gates, on hand with top IBM software executive Steve Mills, said the forthcoming WS-Security, WS-Reliable Messaging and WS-Transaction protocols are designed to enable the kind of e-business relationships many dot.com vendors hyped during the late 1990s. 'Web services are important to the foundation of the Internet, enabling e-commerce to become a reality,' Gates said during a briefing in New York. 'That rich new layer will take Web services to a new level... we hope to see implementation in .NET and Websphere.' At a briefing in New York on Wednesday, Microsoft and IBM together demonstrated early WS-Security, WS-Reliable Messaging and WS-Transaction protocol code working in the form of a supply chain Web service application among a car dealer, manufacturer and supplier. The Web service application -- which replicates the same function as a costly Electronic Data Interchange (EDI) transaction of the past -- was running on disparate systems -- a Windows Server 2003 system, a Linux-based Websphere server from IBM, and a Linux-based wireless handheld. The WS-Security, WS-Reliable Messaging and transactions specifications have been under development for more than a year. The demonstration on Wednesday -- a big milestone in the evolution of Web services -- proved interoperability of systems and the execution of a hassle-free secure, financial transaction between three partners, Gates and Mills said... While the two companies voiced continued commitment to standards, there remain a number of uncertainties that could undermine Web service interoperability, sources note. 
Privately, one IBM executive said the formal adoption of WS-Security by OASIS is expected 'very soon' -- within the next six months. The two other protocols -- WS-Reliable Messaging and WS-Transaction -- are due in 2004 or 2005. However, during the briefing, neither Gates nor IBM's Steve Mills, senior vice president and group executive of IBM's Software Group, could say when compliant products will be delivered, or when the specification will be formally adopted and by which standards body. 'We're still evaluating that,' Gates said. 'WS-Security went to OASIS, that's a possibility. No decision has been made'." Article also published in TechWeb News.com. See: (1) "OASIS WSS TC Approves Three Web Services Security Specifications for Public Review"; (2) "Updated Specifications for the Web Services Transaction Framework"; (3) "Reliable Messaging."
[September 17, 2003] "Web Services Reliable Messaging Update." By Peter Abrahams. In IT-Director.com (September 15, 2003). "In March I wrote two articles about Web Services Reliable Messaging, describing two competing specifications: WS Reliability from Sun, Oracle and friends and WS Reliable Messaging from BEA, IBM, Microsoft and Tibco (BIMT). Since I wrote, some progress has been made. Firstly, OASIS set up a WS Reliable Messaging Technical Committee (WS-RM TC) and based its work on the Sun-Oracle specification... this committee has met several times and improved and expanded the specification... The OASIS specification recognises a Reliable Messaging Process (RMP) that handles reliable delivery on behalf of the application. However, just as with the BIMT specification, there is no definition of the application interface to the RMP... The specification is still very much a work in progress with several comments in the draft saying that sections must be improved or rewritten. On the 4th of September the TC had a face to face meeting. The meeting included the first successful tests of the protocol enabling communications between different implementations from Fujitsu, Hitachi, NEC and Oracle. The test harness included a 'network troublemaker' that simulated various error conditions that could affect the successful message delivery. The tests ran for 36 hours without problem... Looking at the OASIS and the BIMT specification there now seems little functional difference (obviously the detailed syntax is not identical). The only substantive difference I could find is that OASIS sends an acknowledgment (ACK) for each message separately; whereas BIMT has a construct that allows multiple messages to be acknowledged in one ACK. The BIMT construct will improve performance, by reducing message traffic, to some extent but does add an extra layer of complexity to the implementation..." See: "Reliable Messaging."
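The traffic difference between the two acknowledgment styles is easy to quantify in a toy model. This sketch is not either specification's actual syntax; it just contrasts one ACK per delivered message (the per-message style) with collapsing contiguous message numbers into ranges so one ACK can cover a batch (the idea behind the BIMT construct):

```python
def acks_per_message(message_ids):
    """Per-message style: one acknowledgment reply per delivered message."""
    return [("ack", mid) for mid in message_ids]

def acks_ranged(message_ids):
    """Batched-style sketch: collapse contiguous message numbers into
    ranges, so many messages are acknowledged in a single ACK."""
    ranges, start, prev = [], message_ids[0], message_ids[0]
    for mid in message_ids[1:]:
        if mid == prev + 1:
            prev = mid  # extend the current contiguous run
        else:
            ranges.append((start, prev))
            start = prev = mid  # a gap starts a new range
    ranges.append((start, prev))
    return [("ack", ranges)]

# Messages 5 and 6 were lost, so the delivered IDs have a gap.
ids = [1, 2, 3, 4, 7, 8]
print(len(acks_per_message(ids)), len(acks_ranged(ids)))  # 6 1
print(acks_ranged(ids))
```

Six delivered messages cost six replies in the first style and one in the second, which is the performance gain the article describes; the range bookkeeping is the extra implementation complexity it also notes.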
[September 17, 2003] "Gates, Mills Talk Up Web Services in NY." By Michael R. Zimmerman. In eWEEK (September 17, 2003). "Bill Gates and IBM Software chief Steve Mills joined together here today to give an update on their companies' combined work in advancing Web services... Gates and Mills, IBM Software Group's senior vice president and general manager, demonstrated for the first time reliable messaging and secure, authenticated transactions across a federated, heterogeneous environment. They also announced that they plan to take the specifications used to pull off the demonstration, and which the companies have been developing for three years, to open standards bodies soon, and that they would not seek royalties for the specs... The demo was based on an auto dealer/parts scenario that comprised three partners: an auto dealer, a parts supplier, and the parts manufacturer and a high-tech cocktail of DB2, SQL Server, WebSphere and .Net. The dealer was notified upon logging on of a windshield wiper shortage. The crowd followed as the dealer proceeded to place an order with the supplier, who in turn placed an order with the manufacturer. Sounds simple enough, but the underpinnings of the demonstration were actual Web services apps, developed with specs such as the Web Services (WS)-Coordination and WS-Atomic Transaction specs, both of which were created by IBM and Microsoft along with BEA Systems Inc. The former is a 'framework for providing protocols that coordinate the actions of distributed apps,' while the latter 'provides the definition of the atomic transaction coordination type that is to be used with the extensible coordination framework described in WS-Coordination.' Other specs put to work today were the WS-Federation and WS-Reliable Messaging..." See: "Updated Specifications for the Web Services Transaction Framework."
[September 17, 2003] "Microsoft, IBM Push Web Services Advances." By Mike Ricciuti. In CNET News.com (September 17, 2003). "Microsoft and IBM, usually bitter rivals, on Wednesday demonstrated how their competing software packages can interact using Web services and pledged cooperation in establishing additional standards. At a press briefing, Microsoft Chairman Bill Gates and Steve Mills, the executive in charge of IBM's software unit, demonstrated for the first time what Gates termed 'advanced' Web services capabilities designed by the two companies for linking business software. The companies showed off an application that links automotive parts suppliers, manufacturers and dealers via Web services that use new specifications to ensure security, reliable messaging, and transaction support. The companies said the demonstration, which used software from both Microsoft and IBM, including servers running Linux, would have been difficult to accomplish with older technologies... Gates said the new specifications are needed in addition to existing standards such as XML (Extensible Markup Language) and SOAP (Simple Object Access Protocol). 'This rich new layer will take us to the next level,' he said. 'This is the first time anyone has seen this running,' Gates said. 'We think what will come out of this is along the lines of what we did with earlier specifications. We will submit (these specifications) to a standards group as royalty-free standards.' Wednesday's demonstration, which sources said was largely arranged by Microsoft, indicates that the companies could be concerned that Web services isn't being used for the mission-critical applications, as they had envisioned. 'I think there is concern that they need to keep these ideas in people's minds,' Narsu said. 'There seems to be concern that adoption is shallow.' Nearly 90 percent of big companies surveyed earlier this year by Gartner Group said they were using XML, the key Web services technology. 
Most respondents said they were interested in Web services and were in early trials. But Web services is in its infancy. While effective, the technology can only connect applications at a rudimentary level. The advanced capabilities outlined by Gates are needed before Web services can become widely used as a way to link companies, analysts said..."
[September 17, 2003] "Enterprise Transformation: Agile Solutions Requires Developing for 'Choice'." From Defense Finance and Accounting Service [Ms. Audrey Davis, Director for Information and Technology, DFAS CIO]. "The Defense Finance and Accounting Service (DFAS) is in the frontline of systems integration, trying to cope with many legacy systems and thousands of interfaces. One of the primary missions of the agency is to unify financial support functions of an agency of the United States Department of Defense (DoD). Much of this effort has been on eliminating duplication/redundancy of systems through reuse and conformation to standards... This paper starts with sharing lessons-learned that are applicable to many organizations that are transforming themselves to be agile. Then wider needs are covered including: how to be more customer responsive, being proactive rather than reactive, and addressing new business requirements with declining budgets. The same set of principles given here applies for all information systems that offer diffused and distributed content that is difficult to manage, coordinate, and evolve. In this regard we will also be discussing the Business-Centric Methodology (BCM)... The Business-Centric Methodology (BCM) effort underway at OASIS addresses the challenges of agility and interoperability through the adoption of a business first philosophy. The BCM facilitates the capture of decision rationale and involves the business experts to scope, define, relate and manage the business semantics concisely. Business users and customers can communicate concerns and aspects of the business more easily and accurately than developers can. The BCM's declarative approach allows business users to take back the 'steering wheel' of development and integration, much like the car factory evolved from machinist-built Model Ts to the modern factory's process configured by the customer's job order. 
The BCM Contract (job order) approach handles potentially thousands of relevant Choice Points in an organization through patterns defined via predefined BCM Templates, rather than being lost in tactical software programs. The BCM provides a clean separation of concerns in four layers: Conceptual, Business, Extension, and Implementation. Each layer is defined by its primary aspects, which are natural and intuitive means for providing a solution for interoperability. This separation allows for maximum reusability in terms of both components and aspects..." See: (1) "OASIS Forms Business-Centric Methodology Technical Committee"; (2) BCM TC website. [source, cache .DOC]
[September 17, 2003] "Web Services Management Heats Up." By Martin LaMonica. In CNET News.com (September 17, 2003). "The development of a Web services management standard continued to move forward, in a technology area fast becoming the next major competitive race among Web services providers. Computer Associates International, IBM and Web services management start-up Talking Blocks last Thursday submitted a technical specification to the standards group Organization for the Advancement of Structured Information Standards (OASIS) for consideration as an eventual industry standard... The goal of the Web Services Distributed Management (WSDM) technical committee at OASIS is to write a technical blueprint for products that track the performance of applications written according to Web services standards. The standard, due in January of next year, will ensure that Web services management wares from different companies will interoperate. The WSDM technical committee is slated to meet in two weeks to discuss the standard... Weeks before HP announced plans to acquire Talking Blocks, CA quietly purchased Adjoin, another Web services management company. Several other start-ups, including Actional, AmberPoint and Confluent have also introduced Web services management products. Analysts said that investment in the development of Web services management products reflects a growing need among businesses for tools that can spot Web services glitches and ensure that applications run according to predefined performance goals..." See details in the news story: "IBM, Computer Associates, and Talking Blocks Release WS-Manageability Specification." Also: (1) OASIS Web Services Distributed Management TC website; (2) "Talking Blocks, CA, and IBM Announce Submission of Web Services Manageability Standard to OASIS. Leaders in Systems and Web Services Management Create and Jointly Submit Standard to OASIS Web Services Distributed Management Technical Committee."
[September 16, 2003] "Using XPath with SOAP." By Massimiliano Bigatti. From O'Reilly WebServices.XML.com (September 16, 2003). ['Max Bigatti shows that we don't always need heavyweight data binding for RPC-style SOAP processing. With a working example he shows how Java's Jaxen XPath processor can be used to implement a loosely coupled web service.'] "XPath is a language for addressing parts of an XML document, used most commonly by XSLT. There are various APIs for processing XPath. For the purposes of this article I will use the open source Jaxen API. Jaxen is a Java XPath engine that supports many XML parsing APIs, such as SAX, DOM4J, and DOM. It also supports namespaces, variables, and functions. XPath is useful when you need to extract some information from an XML document, such as a SOAP message, without building a complete parser using JAXM (Java API for XML Messaging) or JAX-RPC (Java API for XML-Based RPC). Moreover, the loosely-coupled nature of web services suggests that the use of dynamic data extraction is sometimes better than using static proxies like the ones produced using JAX-RPC. In the article I'll show a JAXM Web Service for calculating statistics and a generic JAXM client that uses the service, demonstrating the use of XPath for generic data extraction. The Jaxen library implements the XPath specification on the Java Platform. Jaxen supports different XML object models, including DOM4J, JDOM, W3C DOM, and Mind Electric's EXML. It supports so many object models by abstracting the XML document using the XML Infoset specification, which provides a representation of XML documents using abstract 'information items'... The full source code is available online. Notice that the full libraries required (JAXM, JAX-RPC, Axis and Jaxen) are not provided. They can be downloaded from the web sites mentioned in the Resources section below. The example uses JWSDP 1.1 JAXM and SAAJ APIs and reference implementations. 
The generic client uses Axis (which is JAXM compliant) and the Jaxen library..."
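The technique the article describes, addressing a value inside a SOAP envelope with an XPath expression instead of generating static service stubs, can be sketched with Python's standard library (the envelope, namespace URIs, and element names below are hypothetical, standing in for the article's Java/Jaxen example):

```python
import xml.etree.ElementTree as ET

# A sample RPC-style SOAP response (hypothetical service and payload).
SOAP_RESPONSE = """\
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <stats:getAverageResponse xmlns:stats="urn:example:statistics">
      <stats:result>42.5</stats:result>
    </stats:getAverageResponse>
  </soap:Body>
</soap:Envelope>
"""

# Namespace prefixes used by the XPath expressions below.
NS = {
    "soap": "http://schemas.xmlsoap.org/soap/envelope/",
    "stats": "urn:example:statistics",
}

def extract_result(soap_xml):
    """Pull the result value out of the SOAP body via XPath,
    with no static proxy or data-binding layer."""
    root = ET.fromstring(soap_xml)
    node = root.find(".//stats:result", NS)
    return None if node is None else node.text

print(extract_result(SOAP_RESPONSE))  # 42.5
```

Because the client only cares about one XPath expression rather than the full message schema, the service can evolve its envelope structure without breaking the caller, which is the loose-coupling argument the article makes for Jaxen.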
[September 16, 2003] "Enabling Smart Objects: Breakthrough RFID-Enabled Supply Chain Execution Infrastructure." Sun Microsystems White Paper: Sun and Auto-ID. September 9, 2003. 32 pages. "Using technology breakthroughs in radio frequency identification (RFID) design, the Massachusetts Institute of Technology (MIT) Auto-ID Center, along with the Uniform Code Council (UCC), is leading a group of more than 90 companies and research centers to define widely supported global standards in reading, finding, and formatting product information. These standards are being designed for use as a next generation of the bar code. The Auto-ID standards will create a cost-effective way to make the supply chain more efficient. The compelling aspect of an Auto-ID enabled operation is the association of information with product movement. The combination of tags, antennas, readers, and local computers ('Savants') provides a near real-time view of product status and location. Many companies have begun trials to determine how this new infrastructure can be best used to make significant improvements in enterprise cost structures or revenue capabilities... The key components of the Auto-ID standard are: Electronic Product Code (EPC), Radio frequency identification (RFID) tags, Tag readers, Savant servers, Object Name Service (ONS), and the Physical Markup Language (PML)... The EPC identifies individual products, but useful information about the product is written in a new, standard computer language called Physical Markup Language (PML). PML is based on the widely accepted, extensible markup language (XML), and is expected to become a universal standard for describing physical objects, processes, and environments. Thus PML can store any information that could be useful; for example, product composition, lot number, and date of manufacture. This information can be used to create new services and strategies. 
For example, a consumer could find out how to recycle a product's packaging, a retailer could set a trigger to lower prices on milk as expiration dates approach, or a manufacturer could recall a specific lot of product. PML is designed to be a dynamic data structure, with information that can be updated over time. For example, the PML record for a product can be updated to store the location of a product as it moves through a supply chain... Once EPC data are detected by the readers, they are passed to The Savant. The Savant acts as event manager, filtering out extraneous EPC reads or events. The ONS Server provides the IP address of a PML Server that stores information pertinent to the EPC. Data from the Savant is passed into the application infrastructure, or operations bus, either locally or over a WAN such as the Internet. From here, the data is made available to virtually any application that can make use of it..." See: (1) "Physical Markup Language (PML) for Radio Frequency Identification (RFID)"; (2) Sun RFID resources at Auto-ID: Reinventing the Global Supply Chain; (3) "Radio Frequency Identification (RFID) Resources and Readings." [cache]
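The Savant's role as an event manager that "filters out extraneous EPC reads" can be illustrated with a small sketch (hypothetical code, not the Auto-ID Center's reference implementation): RFID readers report the same tag many times per second, so the first job of the middleware is to suppress duplicates before events reach the operations bus.

```python
from collections import OrderedDict

class SavantFilter:
    """Toy event manager in the spirit of the Savant described above:
    suppresses duplicate EPC reads within a bounded window of recent tags.
    Illustrative sketch only."""

    def __init__(self, window=100):
        self.window = window
        self.recent = OrderedDict()  # EPC -> seen marker, oldest first

    def process(self, epc_reads):
        """Yield only the first read of each EPC seen within the window."""
        for epc in epc_reads:
            if epc in self.recent:
                continue  # extraneous duplicate read: drop it
            if len(self.recent) >= self.window:
                self.recent.popitem(last=False)  # evict the oldest EPC
            self.recent[epc] = True
            yield epc

reads = ["urn:epc:1", "urn:epc:1", "urn:epc:2", "urn:epc:1"]
print(list(SavantFilter().process(reads)))  # ['urn:epc:1', 'urn:epc:2']
```

A real Savant would then resolve each surviving EPC through ONS to the PML server holding the product's record; the sketch covers only the filtering step.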
[September 16, 2003] "RFID: Driving Benefits Throughout the Supply Chain." By Norm Korey (IBM Global Services). In Wireless Business and Technology Volume 3, Issue 9 (September 2003). "RFID is an emerging, advanced wireless technology for item tagging that enables end-to-end asset awareness. At its core, RFID uses tags, or transponders that, unlike bar code labels, have the ability to store information that can be transmitted wirelessly in an automated fashion to specialized RFID readers, or interrogators. This stored information may be written and rewritten to an embedded chip in the RFID tag. When affixed to various objects, tags can be read when they detect a radio frequency signal from a reader over a range of distances and do not require line-of-sight orientation. The reader then sends the tag information over the enterprise network to back-end systems for processing. RFID tags can be introduced to goods during the manufacturing process, to an individual item, or at a pack, box, or pallet level. RFID systems are also distinguished by their frequency ranges. Low-frequency (30 kHz to 500 kHz) systems have short reading ranges and lower system costs. They are most commonly used in security access, asset tracking, and animal identification applications. High-frequency (850 MHz to 950 MHz and 2.4 GHz to 2.5 GHz) systems, offering long read ranges (greater than 90 feet) and high reading speeds, are used for such applications as railroad car tracking and automated toll collection... The uses of RFID tags are endless: animal identification, security access, anti-theft retail systems, asset and inventory tracking, automatic toll collection, wildlife and livestock tracking, house-arrest monitoring systems, manufacturing work-in-process data, shipping, container and air cargo tracking, fleet maintenance, etc... 
RFID tags will replace traditional barcode technology due to several intrinsic disadvantages of barcodes, including: (1) Loss/damage: Barcodes are prone to loss or damage because they are stuck to the outside of packages and so can easily be damaged; (2) Human interaction: Barcodes require human intervention to operate the barcode scanner; (3) Limited information: Barcodes cannot be programmed or reprogrammed and can provide only the most basic product number information; (4) Line-of-sight constraints: Barcodes require line-of-sight to be read... During the past decade, supply chain management has seen a complete overhaul of traditional logistics procedures as tight integration between warehouses, distribution, and retail has smoothed out duplication and improved time-to-market. Supply chain efficiencies are being driven by improvements in information accuracy and availability. However, further improvements have been constrained by the technology used to track goods through the supply chain. The use of RFID wireless technology changes that, providing organizations with an opportunity to significantly enhance supply chain processes as well as deliver improvements in customer service..." See: (1) "Physical Markup Language (PML) for Radio Frequency Identification (RFID)"; (2) "Radio Frequency Identification (RFID) Resources and Readings."
[September 16, 2003] "James Clark Unveils a New XML Mode for GNU Emacs." By Michael Smith. From XMLHack (September 10, 2003). "James Clark has announced the alpha release of nXML, a new mode for editing XML documents from within GNU Emacs. It's a milestone in that it's the first open-source editing application to enable context-sensitive validated editing against Relax NG schemas. It also provides a clever mechanism for real-time, automatic visual identification of validity errors, along with flexible syntax-highlighting and indenting capabilities. The real-time validation feature is similar to a feature in the Topologi Collaborative Markup Editor, a relatively new commercial application that takes a number of novel approaches to XML editing. The Emacs/nXML implementation works like this: As you are editing a document, nXML does background re-parsing and re-validating of the document in the idle periods between the times when you are actually typing in content. It visually highlights all instances of invalidity it finds in the document. If you then mouse over one of the invalidity-highlighted points in the document, popup text appears describing the validity error..." The resources are available for download. Also of note: on September 5, 2003, a list "emacs-nxml-mode - New XML Mode for Emacs" was started on Yahoo! Groups "for discussion of a new major mode for GNU Emacs for editing XML, with support for RELAX NG. This is under development by James Clark. This group will discuss details of what features the mode should provide and how they should work. Also users will be able to get help on using the mode." See also the new "relaxng-user: A public mailing list for users of RELAX NG" with address email@example.com and the associated relaxng-user archives. General references in "RELAX NG."
[September 16, 2003] "Chicago Show Heralds New 'Internet of Things'. Electronic Product Code Network Launched at Conference." By Paul Roberts. In InfoWorld (September 15, 2003). "A Chicago symposium highlights technology that may fuel the next 50 years of economic growth: a global network of intelligent objects. The EPC (Electronic Product Code) Executive Symposium will run from Monday September 15, 2003 through Wednesday, September 17, and marks the official launch of the Electronic Product Code (EPC) Network, an open technology infrastructure developed by researchers worldwide. The network uses RFID (Radio Frequency ID) tags to enable machines to sense man-made objects anywhere in the world. The Symposium will introduce EPC technology to an audience of corporate executives, explaining how the EPC network works and how to implement EPC technology in corporate supply chain networks, according to the Auto-ID Center. The gathering has the backing of major technology companies including IBM Corp., SAP AG and Sun Microsystems Inc... VeriSign will unveil three new services that will allow organizations to manage EPC data using the Internet: ONS Registry, EPC Service Registry and EPC Information Services. Together, the new services will create a registry, similar to the Internet DNS (Domain Name System), that links an EPC with an IP (Internet Protocol) address. Using the services, companies will be able to use the Internet to track their products in the time between when they leave the manufacturing plant and arrive at the loading dock of a retail outlet, Brendsel said. Unlike the much-publicized 'smart shelf' trials, in which RFID technology is used inside retail outlets to provide real-time merchandise stocking information, companies will be focusing on trials outside the four walls of the retail outlet, he said. Also at the show, Intel Corp. will announce a partnership with ThingMagic LLC of Cambridge, Massachusetts, to deliver a new generation of RFID tag readers. 
The new generation of readers will be built on ThingMagic's Mercury4 Platform and use Intel's IXP420 XScale network processors, improving the power of the readers so that they can process multiple RFID protocols simultaneously, the companies said. When it comes to practical applications for EPC technology, the focus at the Auto-ID EPC Symposium will be on the supply chain..." General references in "Radio Frequency Identification (RFID) Resources and Readings."
[September 16, 2003] "IBM, Others Unveil RFID Offerings. Big Blue Will Offer Consulting and Implementation Services for RFID." By Stephen Lawson. In InfoWorld (September 15, 2003). "IBM Corp. and a truckload of other vendors joined the RFID (Radio Frequency ID) parade Monday at a meeting in Chicago that is shaping up as a coming-out party for the object-identification technology. The EPC (Electronic Product Code) Executive Symposium, running Monday through Wednesday, marks the official launch of the EPC Network, an infrastructure that uses RFID tags to let machines anywhere in the world sense a tagged object. RFID tags are like bar codes except that devices can read the information they contain using radio frequencies. Most participants in the Symposium are highlighting the use of RFID to track products through a corporate supply chain. IBM will offer consulting and implementation services and specialized software to companies that want to start using RFID, the company announced Monday at the show. It will help companies evaluate and adopt the new technology in phases and integrate IBM software into their existing back-end inventory database systems, IBM said in a statement. The software is based on WebSphere Business Integration middleware and can work with WebSphere Application Server, DB2 Information Integrator, Tivoli Access Manager and WebSphere Portal Server... Intermec Technologies Corp. announced the EasyCoder Intellitag PM4i printer, which can encode a product's identifying information into an RFID chip embedded in a label. It can do this while also printing a visible barcode and text onto the label, said Warren Payne, a representative of Intermec, in Everett, Washington. The printer is the first that can encode data to so-called 'frequency-agile' RFID tags made by Intermec, which are visible to reader devices using different frequencies in different countries, according to Doug Hall, director of printer marketing at Intermec. 
A company in Europe could write data to one of these tags using a frequency that's appropriate there and then ship the product to the U.S., where the same tag could be recognized by a reader device that uses another frequency. The printer will be available early next year. Pricing has not yet been set, Hall said. Also at the conference, Intermec demonstrated a system it developed with Georgia-Pacific Corp.'s packaging division in which the packaging producer can manufacture boxes with embedded RFID tags. When a company packs a product in the box, it can encode information in that embedded tag, Payne said. Typically, a company would put a barcode label on the box at the same time so the product could be identified in parts of the supply chain that don't yet use RFID..." See: (1) "IBM Announces Comprehensive New RFID Service. Helping Retailers and Consumer Packaged Goods Companies Boost Accuracy in Picking, Packing, Shipping. Cutting Theft in the Supply Chain."; (2) "Radio Frequency Identification (RFID) Resources and Readings."
[September 16, 2003] "Using XML Schemas Effectively in WSDL Design. Achieve a Higher Degree of Portability With These Best Practices." By Chris Peltz and Mark Secrist (HP Developer Resources). In XML Journal Volume 4, Issue 9 (September 2003). With source code. "Developers are beginning to develop more sophisticated Web services, exchanging complex XML documents rather than simple parameter types. As this shift takes place, development teams begin to grapple with different approaches to designing these Web services interfaces through the use of WSDL. In this article, we will focus on four specific areas of best practices that can be applied, particularly in the use of XML Schemas in a Web services design: XML Schema style, namespaces, XML and WSDL import for modularity, and use of schema types for platform interoperability. Through the use of these techniques, you will be able to achieve a higher degree of portability of your WSDL and XML Schemas and will realize improved reusability and interoperability between a broader collection of Web services platforms... Using a more modular schema design can maximize the potential for reuse in your organization. The proper refactoring and naming techniques can also simplify the generation of implementation classes for your platform. A modular design approach will also require an effective use of namespaces in your XML Schemas. Namespaces provide a mechanism to scope different elements or type definitions in your design. They can simplify how you reference or import types that might exist in external schema files. They can also be used to enforce versioning of your Web services. The techniques that were discussed to modularize XML Schemas can also apply to the design of the WSDL interfaces. If used properly, the import mechanism can provide a great amount of reusability of both the XML Schema types and the WSDL message types. This design can be further enhanced through the use of development and design tools. 
It's important to remember that each Web services platform might manage XML differently. Use of certain XML data types or schema structures may not be supported on certain platforms. In the design, you should pay close attention to these interoperability issues, adding testing where appropriate..." [alt URL]
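The modular import pattern the authors recommend can be illustrated with a small WSDL fragment (the namespaces, file names, and type names below are hypothetical, not taken from the article's source code): reusable schema types live in their own file, scoped by a distinct target namespace, and the WSDL pulls them in rather than declaring them inline.

```xml
<!-- Hypothetical sketch of the import pattern described above. -->
<definitions xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:xsd="http://www.w3.org/2001/XMLSchema"
             xmlns:ord="urn:example:orders"
             xmlns:typ="urn:example:types"
             targetNamespace="urn:example:orders">
  <types>
    <xsd:schema targetNamespace="urn:example:orders">
      <!-- Shared complex types are maintained in OrderTypes.xsd,
           versioned independently of this service interface. -->
      <xsd:import namespace="urn:example:types"
                  schemaLocation="OrderTypes.xsd"/>
      <xsd:element name="submitOrder" type="typ:OrderType"/>
    </xsd:schema>
  </types>
  <message name="SubmitOrderRequest">
    <part name="body" element="ord:submitOrder"/>
  </message>
</definitions>
```

Keeping the types in a separate, namespace-scoped file means several WSDLs (and non-WSDL consumers) can share one definition of `OrderType`, which is the reuse benefit the article attributes to this design.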
[September 15, 2003] "Generation of XML Records across Multiple Metadata Standards." By Kimberly S. Lightle and Judith S. Ridgway (Eisenhower National Clearinghouse, Ohio State University, USA). In D-Lib Magazine (September 19, 2003). "This paper describes the process that Eisenhower National Clearinghouse (ENC) staff went through to develop crosswalks between metadata based on three different standards and the generation of the corresponding XML records. ENC needed to generate different flavors of XML records so that metadata would be displayed correctly in catalog records generated through different digital library interfaces. The crosswalk between USMARC, IEEE LOM, and DC-ED is included, as well as examples of the XML records... Because the native metadata for the ENC collections follow different metadata standards (USMARC and IEEE 1484.12.1-2002 Learning Object Metadata (LOM) Standard) and the metadata to be harvested via the NSDL OAI repository follows the Dublin Core metadata standard, ENC needed to develop crosswalks between these three standard metadata schemas. ENC also needed to generate different flavors of XML records so that metadata would be displayed correctly in catalog records generated through different digital library interfaces. XML is an open, text-based markup language that provides structural and semantic information to data based on a specific schema such as USMARC. These XML records are searched by the Autonomy search engine with the metadata displayed in two different formats: the format used for the ENC DL libraries (Learning Matrix, ICON, and GSDL) and that used for ENC Online. The XML records are also exported in a Dublin Core format, so they are available to the NSDL OAI harvester. 
XML records generated by the Learning Matrix, ICON, and GSDL are based on the IMS Learning Resource Metadata Specification and are the most straightforward to produce -- there is a one-to-one correspondence between the metadata that are entered in the cataloging tool and that which are displayed as part of the catalog record. ENC also has to generate a USMARC XML record from the digital library metadata to be searched via ENC Online. This requires the IEEE LOM metadata to be crosswalked to the USMARC metadata standard. A third flavor of XML record is generated from both USMARC and the IEEE LOM metadata. These XML records have been crosswalked to DC-ED so that they are harvestable by the NSDL and searchable through the NSDL.org interface. A fourth type of XML record is generated so that IEEE LOM metadata can be displayed in a USMARC format via the ENC Online interface. In the future, an XML record will be generated in the IEEE LOM format based on the USMARC metadata used to describe ENC resources... ENC is not unique in its need to produce different flavors of XML records to conform to multiple schemas. Just as ENC chose the IEEE LOM schema, digital libraries should choose a schema that best embodies the nature of their resources and their cataloging goals. Crosswalks that extend interoperability are essential so that the digital library collections can be accessible through a variety of portals and search interfaces. As more organizations share what they have learned as they strive for maximum interoperability of their records that richly describe digital resources, the development of crosswalks will be better understood and more easily accomplished..." See: "IMS Metadata Specification."
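At its core, a crosswalk of the kind ENC built is a mapping from one schema's field names to another's, applied record by record. A minimal sketch, with simplified, hypothetical field names (real USMARC/IEEE LOM/DC-ED crosswalks carry far more structure and conditional logic):

```python
# Illustrative LOM-to-Dublin-Core crosswalk; field names are simplified
# stand-ins, not the actual ENC mapping table.
LOM_TO_DC = {
    "general.title": "dc:title",
    "general.description": "dc:description",
    "lifecycle.contribute.entity": "dc:creator",
    "technical.format": "dc:format",
}

def crosswalk(lom_record, mapping=LOM_TO_DC):
    """Re-key a flat LOM-style record into Dublin Core element names,
    dropping fields that have no defined mapping."""
    return {mapping[k]: v for k, v in lom_record.items() if k in mapping}

record = {"general.title": "Fractions Tutorial", "technical.format": "text/html"}
print(crosswalk(record))
# {'dc:title': 'Fractions Tutorial', 'dc:format': 'text/html'}
```

Note the lossiness the article alludes to: fields outside the mapping are silently dropped, which is why ENC generates a separate XML "flavor" per target schema instead of converting everything through a single intermediate format.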
[September 15, 2003] "Problems Arise During UML 2.0 Finalization. Lack of Clarity, Inability to Implement Specification Cited as Obstacles to Early Adoption." By David Rubinstein. In Software Development Times (September 15, 2003). "The co-chairman of the task force working on the finalization of UML 2.0 has acknowledged that two important problems have emerged during this phase of review, but said they are being fixed and the specification is expected to be released as an Object Management Group Inc. available technology in April 2004. Bran Selic, who is IBM Corp.'s liaison to OMG from the Rational software group, said vendors and academicians trying to implement the UML 2.0 specification, which was approved by the OMG Architecture Board, are raising issues... One of the problems, Selic said, is that new mechanisms used to define the abstract semantics of the language are not scaling up as needed to get the most out of using UML within a Model Driven Architecture. 'We might have to modify the package/merge mechanism. People want to make sure the models will fit on a disk.' The second problem involves removing some flexibility that was built into the compliance scheme to allow software designers to mix and match various parts of UML... The finalization task force posted OMG's final adopted specification on August 8, 2003 and adopters have until mid-September to call problems to the task force's attention. The draft of the final standard, also called the available technology, is set for the end of April 2004. The final available technology has three new capabilities that Selic said users were clamoring for -- the ability to model architectural structures, interactions and activities... The modeling of interactions now will let software designers combine simple interactions into larger sequences, and reuse them across different systems. For example, he said, to define an automated teller machine process, you first must define a sequence to enter a password. 
That password sequence could be reused in other processes that require it. The ability to model the flow of activities uses the BPEL4WS specification developed by IBM and Microsoft..." See: (1) "Unified Modeling Language Version 2.0" (overview from IBM); (2) "OMG Model Driven Architecture (MDA)."
[September 15, 2003] "IBM Package Gets E-Commerce Right." By Jim Rapoza. In eWEEK (September 15, 2003). "IBM's powerful e-commerce application gives enterprises all the capabilities they will need in a single platform and does so without sacrificing quality. However, companies interested in WebSphere Commerce will want to make sure that their needs are high-end enough to justify the high-end cost of the product... During tests, eWEEK Labs was impressed with the breadth of e-business capabilities in WebSphere Commerce 5.5 and the quality of all its features. Unlike many products that try to do everything and end up doing nothing well, WebSphere Commerce is the rare application that does many things and does many of them well. WebSphere Commerce 5.5, which shipped in June, is an excellent platform for running the most complex B2B and B2C e-commerce operations, and it can run both simultaneously, which allows for excellent integration between both sides of a company's business... On the B2B side, WebSphere Commerce now makes it possible to define and maintain a wide variety of value chains for different business models. This makes it possible to create private marketplaces, hosted services and complex multivendor purchase systems. We also liked the improved contracts and RFQ (request for quote) capabilities in WebSphere Commerce. These make it possible for business buyers to define unique product requirements that sellers can attempt to meet through custom design or through existing products... Like many other enterprise applications, WebSphere Commerce, which is based on Java 2 Platform, Enterprise Edition and XML, includes support for delivering and consuming Web services..."
- XML Articles and Papers September 2003
- XML Articles and Papers August 2003
- XML Articles and Papers July 2003
- XML Articles and Papers June 2003
- XML Articles and Papers May 2003
- XML Articles and Papers April 2003
- XML Articles and Papers March 2003
- XML Articles and Papers February 2003
- XML Articles and Papers January 2003
- XML Articles and Papers December 2002
- XML Articles and Papers November 2002
- XML Articles and Papers October 2002
- XML Articles and Papers September 2002
- XML Articles and Papers August 2002
- XML Articles and Papers July 2002
- XML Articles and Papers April - June, 2002
- XML Articles and Papers January - March, 2002
- XML Articles and Papers October - December, 2001
- XML Articles and Papers July - September, 2001
- XML Articles and Papers April - June, 2001
- XML Articles and Papers January - March, 2001
- XML Articles and Papers October - December, 2000
- XML Articles and Papers July - September, 2000
- XML Articles and Papers April - June, 2000
- XML Articles and Papers January - March, 2000
- XML Articles and Papers July-December, 1999
- XML Articles and Papers January-June, 1999
- XML Articles and Papers 1998
- XML Articles and Papers 1996 - 1997
- Introductory and Tutorial Articles on XML
- XML News from the Press