Other collections with references to general and technical publications on XML:
- XML Article Archive: [March 2003] [February 2003] [January 2003] [December 2002] [November 2002] [October 2002] [September 2002] [August 2002] [July 2002] [April - June 2002] [January - March 2002] [October - December 2001] [Earlier Collections]
- Articles Introducing XML
- Comprehensive SGML/XML Bibliographic Reference List
[April 30, 2003] "Using Extensible Markup Language-Remote Procedure Calling (XML-RPC) in Blocks Extensible Exchange Protocol (BEEP)." By Ward K. Harold (IBM, Austin, Texas). IETF Network Working Group, RFC. Reference: Request for Comments #3529. Category: Experimental. April 2003. 15 pages. "XML-RPC is an Extensible Markup Language-Remote Procedure Calling protocol that works over the Internet. It defines an XML format for messages that are transferred between clients and servers using HTTP. An XML-RPC message encodes either a procedure to be invoked by the server, along with the parameters to use in the invocation, or the result of an invocation. Procedure parameters and results can be scalars, numbers, strings, dates, etc.; they can also be complex record and list structures. This document specifies how to use the Blocks Extensible Exchange Protocol (BEEP) to transfer messages encoded in the XML-RPC format between clients and servers... The BEEP profile for XML-RPC is identified as http://iana.org/beep/transient/xmlrpc in the BEEP 'profile' element during channel creation. In BEEP, when the first channel is successfully created, the 'serverName' attribute in the 'start' element identifies the 'virtual host' associated with the peer acting in the server role... In XML-RPC Message Exchange, a request/response exchange involves sending a request, which results in a response being returned. The BEEP profile for XML-RPC achieves this using a one-to-one exchange, in which the client sends a 'MSG' message containing a request, and the server sends back a 'RPY' message containing a response. The BEEP profile for XML-RPC does not use the 'ERR' message for XML-RPC faults when performing one-to-one exchanges. Whatever response is generated by the server is always returned in the 'RPY' message. This memo defines two URL schemes, xmlrpc.beep and xmlrpc.beeps, which identify the use of XML-RPC over BEEP over TCP.
Note that, at present, a 'generic' URL scheme for XML-RPC is not defined... The IANA has registered the profile specified in Section 6.1, and has selected an IANA-specific URI http://iana.org/beep/xmlrpc..." See general references in: (1) "Blocks eXtensible eXchange Protocol Framework (BEEP)"; (2) "XML-RPC." [cache]
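The XML-RPC message format that RFC 3529 carries over BEEP can be inspected with Python's standard library, which serializes and parses XML-RPC messages independently of any transport (the method name and parameters below are invented for illustration; BEEP itself is not part of the standard library):

```python
import xmlrpc.client

# Serialize an XML-RPC request: a <methodCall> element carrying the
# method name and typed <params> (here, two ints).
body = xmlrpc.client.dumps((41, 1), methodname="sample.add")
print(body)

# Parse it back; loads() returns the (params, methodname) pair.
params, method = xmlrpc.client.loads(body)
print(method, params)  # sample.add (41, 1)
```

A body like this would be framed in a BEEP 'MSG' message, with the corresponding <methodResponse> coming back in the 'RPY'.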
[April 30, 2003] "Towards a Core Ontology for Information Integration." By Martin Doerr (Institute of Computer Science, Foundation for Research and Technology, Heraklion, Greece), Jane Hunter (DSTC Pty, Ltd., Brisbane, Australia), and Carl Lagoze (Computing and Information Science, Cornell University, Ithaca NY). In Journal of Digital Information Volume 4, Issue 1 (April 2003). "In this paper, we argue that a core ontology is one of the key building blocks necessary to enable the scalable assimilation of information from diverse sources. A complete and extensible ontology that expresses the basic concepts that are common across a variety of domains and can provide the basis for specialization into domain-specific concepts and vocabularies, is essential for well-defined mappings between domain-specific knowledge representations (i.e., metadata vocabularies) and the subsequent building of a variety of services such as cross-domain searching, browsing, data mining and knowledge extraction. This paper describes the results of a series of three workshops held in 2001 and 2002 which brought together representatives from the cultural heritage and digital library communities with the goal of harmonizing their knowledge perspectives and producing a core ontology. The knowledge perspectives of these two communities were represented by the CIDOC Conceptual Reference Model, an ontology for information exchange in the cultural heritage and museum community, and The ABC Ontology and Model, a model for the exchange and integration of digital library information. This paper describes the mediation process between these two different knowledge biases and the results of this mediation - the harmonization of the ABC and CIDOC/CRM ontologies, which we believe may provide a useful basis for information integration in the wider scope of the involved communities... 
Information integration on the web involves a number of architectural building blocks that are the focus of work of the W3C and the related semantic web community. This work includes mechanisms for information encoding and manipulation (e.g., XML, RDF, XSLT), and ontology construction and reasoning (e.g., RDFS, DAML+OIL, OWL). Information integration also motivates much of the metadata work in the digital library community. Some of this work is focused within specific domains (e.g., FGDC in the geospatial community, IMS LTSC in the educational/instructional community), while other metadata initiatives are looking beyond domain specificity towards providing services across heterogeneous content (e.g., Dublin Core and its goal of cross-domain resource discovery). This paper describes work on a core ontology, arguably another of the building blocks to information integration. The goal of a core ontology is to provide a global and extensible model into which data originating from distinct sources can be mapped and integrated. This canonical form can then provide a single knowledge base for cross-domain tools and services (e.g., resource discovery, browsing, and data mining). A single model avoids the inevitable combinatorial explosion and application complexities that result from pairwise mappings between individual metadata formats and/or ontologies..." [cache]
[April 30, 2003] "Clean Up Your Schema for SOAP. Updating XML Schemas to be SOAP-Friendly." By Shane Curcuru (Advisory Software Engineer, IBM Research). From IBM developerWorks, Web services. April 29, 2003. ['More and more projects are using XML schemas to define the structure of their data. As your repository of schemas grows, you need tools to manipulate and manage your schemas. The Eclipse XSD Schema Infoset Model has powerful querying and editing capabilities. In this article, Shane Curcuru will show how you can update a schema for use with SOAP by automatically converting attribute uses into element declarations.'] "If you've built a library of schemas, you might want to reuse them for new applications. If you already have a data model for an internal purchase order, as you move towards Web services you may need to update it for use with SOAP. SOAP allows you to transport an XML message across a network; the XML body can also be constrained with a schema. However, a SOAP message typically uses element data for its XML body, not attribute data. You'll explore a program that can automatically update an existing schema document to convert any attribute declaration into roughly 'equivalent' element declarations... Given the complexity of XML schemas, you certainly don't want to use Notepad to edit the .xsd files. A good XML editor is not much of a step up -- while it may organize your elements and attributes nicely, it can't show the many abstract Infoset relationships that are defined in the Schema specification. That's where the Schema Infoset Model comes in; it expresses both the concrete DOM representation of a set of schema documents, and the full abstract Infoset model of a schema. Both of these representations are shown through the programmatic API of the Model as well as in the Model's built-in sample editor for schemas...
If you've installed the XSD Schema Infoset Model and Eclipse Modeling Framework (EMF) plugins into Eclipse, you can see the sample editor at work in your Workbench... performing a conceptually simple editing operation on schema documents (turning attributes into elements) can entail a fair amount of work. However, the power of the Schema Infoset Model's representation of both the abstract Infoset of a schema and its concrete representation of the schema documents makes this a manageable task. The Model also includes simple tools for loading and saving schema documents to a variety of sources, making it a complete solution for managing your schema repository programmatically. Some users might ask, 'Why not use XSLT or another XML-aware application to edit schema documents?' While XSLT can easily process the concrete model of a set of schema documents, it can't easily see any of the abstract relationships within the overall schema that they represent. For example, suppose that you need to update any enumerated simpleTypes to include a new UNK enumeration value meaning unknown. Of course, you only want to update enumerations that fit this format of using strings of length of three; you would not want to update numeric or other enumerations... This article presupposes an understanding of schemas in XML and how SOAP works. The sample code included in the zip file works standalone or in an Eclipse workbench..." General references in "XML Schemas."
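The article's attribute-to-element conversion uses the Eclipse Schema Infoset Model's Java API; as a much simplified stand-in, the transformation can be sketched against the concrete schema document alone. This Python sketch (the sample schema is invented) handles only the plain case of attributes declared directly inside a complexType that already has a sequence, with none of the Infoset-level awareness the Eclipse model provides:

```python
import xml.etree.ElementTree as ET

XSD = "http://www.w3.org/2001/XMLSchema"
ET.register_namespace("xs", XSD)

# A toy purchase-order-style schema using attribute data.
schema_doc = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="item">
    <xs:complexType>
      <xs:sequence/>
      <xs:attribute name="sku" type="xs:string"/>
      <xs:attribute name="qty" type="xs:int"/>
    </xs:complexType>
  </xs:element>
</xs:schema>"""

root = ET.fromstring(schema_doc)
for ctype in root.iter(f"{{{XSD}}}complexType"):
    seq = ctype.find(f"{{{XSD}}}sequence")  # assumes a sequence exists
    for attr in ctype.findall(f"{{{XSD}}}attribute"):
        # Re-create each attribute declaration as a local element
        # declaration with the same name and type, then drop it.
        ET.SubElement(seq, f"{{{XSD}}}element", attr.attrib)
        ctype.remove(attr)

out = ET.tostring(root, encoding="unicode")
print(out)
```

A real conversion would also have to handle attribute groups, references, `use="required"` versus element occurrence constraints, and global attribute declarations, which is exactly where the abstract Infoset view earns its keep.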
[April 30, 2003] "Sun Faces Challenges With Java." By Yvonne L. Lee. In Software Development Times (May 01, 2003). "A month before JavaOne, Sun Microsystems Inc.'s annual developer conference, the talk has moved away from the familiar 'How will Java compete against Microsoft's .NET Framework?' to whether Sun continues to be the appropriate standard-bearer for the language and Web services framework, and whether JavaOne continues to represent the interests of the entire Java community. JavaOne 'was all about Java. Now it's more like a Sun user group,' said Scott Hebner, IBM Corp.'s director of marketing for the WebSphere application server. 'I think the Sun presence has become Sun specific more than Java specific.' The issue arises because IBM and BEA Systems Inc., and not Sun, lead the market for Java middleware and application servers, according to both analysts and partners. 'I think the lead is going to be a two-horse race for at least a couple of years,' said Rob Hailstone, International Data Corp.'s European software infrastructure research director. 'My bets this year are a little different from last year,' said Sam Patterson, CEO of ComponentSource, an Atlanta-based distributor of software components for .NET and Java. 'Last year, I would have said IBM. This year, it's BEA.' The reason these market analysts and partners -- as well as financial analysts -- view BEA so strongly is that the San Jose company understands the importance of integration, they say... The problem for Sun, according to Bear Stearns analyst Naveen Bobba, is that it is trying to be both a systems company and a software company without having the resources of, say, an IBM. In this regard, he said, it is competing against not just IBM and BEA, but also Intel and Microsoft. 'The issue for Sun really is, can Sun invest and compete with multiple companies?' said Bobba. Still, having the plethora of competitors may not really be a bad thing. 
In fact, it may prove that Java is the multivendor platform that Sun has been touting, Hailstone said..."
[April 30, 2003] "Sun Readies Revamped Java/XML Integration Server." By Elizabeth Montalbano. In InternetWeek (April 30, 2003). "In a project code-named Ganymede, Sun is working to combine facets of its current integration software products to build a pure J2EE integration server, solution providers said. The new software is part of Sun's Project Orion strategy to 'bake' all of its Java middleware into its Solaris operating system... Sources said the new product will combine elements of Sun's Sun ONE Integration Server, B2B Edition, the former C-based iPlanet ECXpert product; and Sun ONE Integration Server, EAI Edition, formerly the Java-based Forte Fusion product. Sun also will cull pieces of functionality from the former iPlanet Process Manager product and include them in the new server. The new integration server will be J2EE-based and 'loaded on the application server so you can have, as Sun does with Forte Fusion, JMS and XML, with XML over HTTP on the back end,' Guinn said. The new product also will include B2B components of ECXpert, and support the Java Connector Architecture (JCA), a Java standard for integrating legacy systems to Java applications, he added. In order for Project Orion to succeed, Sun must offer integration software that works seamlessly with the rest of its middleware and can be updated in conjunction with other Sun ONE software products, Guinn said. 'This is the continued evolution of the vision from Jonathan Schwartz to get everything on the same release schedule,' he said. 'You're not going to have multiple products that don't support the same technology.' Solution providers said a competitive integration server is the missing link in Sun's Java middleware stack, as the company does not have Java-based software to provide both EAI and B2B integration that is closely linked to the rest of its middleware. 
Solution providers said Sun's plans are in part a response to BEA Systems' move to combine Java application development with application integration in its WebLogic Platform 8.1, due out in its entirety by August. Specifically, BEA's WebLogic Integration, the EAI and B2B software component of the platform, is the product Sun's new integration server will compete directly with..."
[April 30, 2003] "Microsoft to Expose Passport as XML-enabled .NET Web Service This Summer. Officials Give Update on .NET." By Paula Rooney. In CRN (April 30, 2003). "As part of its growing portfolio of .NET services, Microsoft will expose its Passport authentication service as an XML Web service this summer, officials said. The forthcoming Microsoft Passport Web Service, which will support XML and the delivery of Simple Object Access Protocol (SOAP) messages over HTTP, joins existing Microsoft Web services such as .NET Alerts and MapPoint Web Service. The Passport Web Service will also support WS-Security, a specification designed to secure data stored in the online authentication service. Microsoft, IBM and VeriSign submitted the latest version of the Web Services Security (WS-Security) specification to the Organization for the Advancement of Structured Information Standards (OASIS) last June. 'We're exposing the Passport service as a Web service,' said Steven VanRoekel, director of Web service marketing at Microsoft, in an interview with CRN here on Wednesday. 'It'll add WS-Security so there's encryption and digital signatures, and it'll have a WSDL file.' Microsoft currently operates a Passport service on the Internet and has a Passport Software Development Kit (SDK) for developers. However, the availability of XML/SOAP and WS-Security capabilities of the forthcoming Passport Web Service due this summer will make it possible for developers to bind authentication within their own XML Web services. Solution providers say support for WS-Security will help developers of business-to-consumer (B2C) and business-to-business (B2B) applications get over their fears. Microsoft said the Passport Web Service is just the tip of the iceberg as the company begins integrating more advanced Web service specifications such as WS-Security, WS-Reliable Messaging and WS-Transactions into its infrastructure products and services over the next 12 to 18 months.
At the Professional Developers Conference in October, Microsoft is expected to debut a new software development kit for .NET MyServices and other business related Web services based on XML, SOAP, WSDL and UDDI, and WS-I standards, officials noted... Microsoft's latest toolset, for example, supports WS-Security, which will enable developers to harness the forthcoming Passport Web Service into their applications. The Windows Server integrates the latest .NET Framework as well as support for UDDI, which will enable companies to build catalogs of web services, Charney added. In the forthcoming months, Microsoft will continue embedding Web service 'plumbing' into its server applications such as BizTalk and future client applications including the 'Longhorn' Windows XP upgrade to make it a better 'consumer' of XML Web services, officials said... Neil Charney also said that the delivery of important .NET infrastructure products such as Visual Studio.NET and Windows Server 2003 last week and the completion of the Web service functionality for the WS-I specifications are significant milestones that will spur additional development in 2003..."
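What WS-Security concretely adds is a header block inside the SOAP envelope. As a rough sketch with Python's ElementTree, the skeleton below shows a bare UsernameToken (no signature or encryption); the structure is purely illustrative of the header layout, and the wsse namespace URI used here is the later OASIS one, not necessarily what the Passport service will use:

```python
import xml.etree.ElementTree as ET

SOAP = "http://schemas.xmlsoap.org/soap/envelope/"
# Assumption: OASIS WSS 1.0 secext namespace; earlier drafts used others.
WSSE = ("http://docs.oasis-open.org/wss/2004/01/"
        "oasis-200401-wss-wssecurity-secext-1.0.xsd")
ET.register_namespace("soap", SOAP)
ET.register_namespace("wsse", WSSE)

# Build Envelope -> Header -> Security -> UsernameToken, plus an empty Body.
env = ET.Element(f"{{{SOAP}}}Envelope")
header = ET.SubElement(env, f"{{{SOAP}}}Header")
sec = ET.SubElement(header, f"{{{WSSE}}}Security")
tok = ET.SubElement(sec, f"{{{WSSE}}}UsernameToken")
ET.SubElement(tok, f"{{{WSSE}}}Username").text = "example-user"
ET.SubElement(env, f"{{{SOAP}}}Body")

xml_out = ET.tostring(env, encoding="unicode")
print(xml_out)
```

The point of putting security claims in a header rather than the transport layer is that they survive intermediaries: any hop can read or add to the `wsse:Security` block while the Body passes through untouched.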
[April 30, 2003] "Corel Preps Vector Graphics for Business Use." By Rich Seeley. In Application Development Trends (April 30, 2003). "Corel Corp., Ottawa, Canada, today announced the availability of its new Smart Graphics Studio, SVG technology for creating and publishing open-standards graphics. The tools make it possible to take business data and existing graphics such as CAD files and convert them into useful applications, said Ian LeGrow, vice president of new ventures at Corel. Far from playing simply an illustrative role, explained LeGrow, SVG (Scalable Vector Graphics) technology can be deployed in applications to improve productivity. In a presentation with Rob Williamson, product manager for Corel Smart Graphics Studio, the point was illustrated with an example of a power plant using SVG rendering. When a problem arose with cabling in the demo plant, engineers and technicians followed cumbersome processes to identify and locate a specific cable. A query to a database produced identification numbers for the cable, which then had to be searched for manually on a CAD drawing of the cable layout. A new application using Corel Smart Graphics Studio integrates the ID numbers from the database and the CAD drawings onto a single PC screen with the cable in question highlighted so that it can be immediately identified. This represents a productivity gain for the plant workers and could be crucial in an emergency where every second may count, LeGrow and Williamson pointed out. They said this illustrates the firm's motto: 'Smart Graphics are good for business'..." See details in the news story "Corel Smart Graphics Studio Uses SVG for Graphical Applications Development."
[April 30, 2003] "SCO Directs Attention to New Software." By Stephen Shankland. In CNET News.com (April 29, 2003). "SCO Group's fastest-growing revenue source stems from its efforts to enforce proper licensing of its software, but the company announced Web services software Wednesday that could steer some attention back to the company's products as well. The company announced its SCOx framework Tuesday, a plan to build support for Web services standards into its Unix and Linux software. Web services technology -- pioneered largely by companies including Microsoft and IBM -- includes a set of standards that govern how different computers can conduct sophisticated business transactions across the Internet... SCO has begun distributing portions of the Web services work. The company plans to show off some applications at its SCO Forum conference in Las Vegas on August 17. SCO will support basic Web services such as UDDI (Universal Description, Discovery and Integration) and SOAP (Simple Object Access Protocol). The SCOx framework will be able to run software developed for other Web services infrastructure, including Microsoft's .Net and Sun Microsystems' Java 2 Enterprise Edition, SCO said..." See reference following and the announcement: "SCO Outlines SCOx Web Services Strategy for Future Growth. Company to Roll Out Technology Framework for Integrated Web Services Applications at SCO Forum."
[April 30, 2003] "SCOx Overview: Understanding and Getting Started with SCOx." By Erik W. Hughes, Simmi K. Bhargava, and Thor Christianson. SCO Overview White Paper. April 29, 2003. "SCOx is a technology framework used to deliver business applications and online services to key markets. Through this framework, SCOx lets solution providers plug their existing and new applications into a web services environment; provide billable, metered services; and leverage a fleet of resellers for selling the SCOx-enabled applications. Because customers have varied environments and policies, SCOx-enabled applications may be hosted securely from a remote location or installed on the customer's network... SCOx is a strategy and a framework, not a product. The SCOx strategy is how SCO will leverage web services and SCO's ebusiness services to offer new revenue opportunities to SCO resellers and ISV partners. The SCOx framework allows SCO partners to participate in the SCOx strategy, by offering the components partners need to create web services enabled applications and deliver these applications and associated benefits to their customers. SCO's customers will not buy SCOx. They will buy SCO operating systems, applications, online services, and third party products and services from SCO resellers and application vendors that are integrated into the SCOx framework. The SCOx framework is strictly standards based (XML, SOAP). This allows web services enabled applications developed on other platforms (.NET and J2EE) and running on non-SCO operating systems to integrate with the SCOx framework. The result is maximum flexibility for developers and resellers allowing them to provide solutions to heterogeneous environments. In summary the differentiators of SCOx are: (1) SCOx is an open development model. 
Developers can use their choice of tools; (2) SCOx enables integration of .NET applications as well as Java applications; (3) SCOx offers the power and stability of PC Unix as a deployment choice for SCOx applications; (4) SCOx makes available -- through a single dashboard -- a powerful suite of web-based applications which can be integrated with any web services enabled application to offer an enhanced solution to customers; (5) SCOx incorporates an extensive channel of resellers and software developers..."
[April 30, 2003] "Stenbit Details Defense's Plans for a Metadata Registry." By Dawn S. Onley. In Government Computer News (April 30, 2003). "The [US] Defense Department can't achieve net-centricity without a policy for managing the millions of lines of code in use throughout the department, CIO John Stenbit says. As DOD does more business over the Internet, applying metadata tags is a critical first step in controlling data while ensuring ease of access, Stenbit said at the Software Technology Conference. The use of metadata will promote interoperability and software reuse in the secure, networked environment planned for DOD's Global Information Grid. 'The effective management of metadata is essential to implementing the department's net-centric vision,' Stenbit said. To begin this effort, Stenbit early this month sent a memo to service secretaries and Defense agency chiefs outlining a phased approach for creating a DOD Metadata Registry as required in Defense Management Initiative Decision 905. To satisfy the requirements, Defense agencies must report by May 30 to the Defense Information Systems Agency about the types and quantity of Extensible Markup Language metadata they plan to include in the registry. Then, by July 30, agencies must notify DISA of any metadata holdings not written in XML. Agencies have until September 30, 2003 to register all supported XML information resources, such as schema documents..." See: (1) DoD Metadata Registry and Clearinghouse; (2) DoD XML Registry. General references in "DII Common Operating Environment (COE) XML Registry."
[April 30, 2003] "Web Services Finds Royalty-Free OASIS." By Martin LaMonica. In ZDNet News (April 30, 2003). "A standards body has formed a committee -- which includes Microsoft, IBM, BEA and SAP -- to help standardise specifications for automating complex business processes. A group within the Organisation for the Advancement of Structured Information Standards (OASIS) will meet next month to discuss the technical development of Business Process Execution Language for Web services (BPEL), a proposal led by several companies, including IBM, Microsoft, BEA and SAP... Products that adhere to the BPEL specification should make it easier for businesses to create Web services applications that automate multistep business processes such as handling an insurance claim. Web services is a term to describe a set of standards and programming methods for building applications that can easily share data across disparate systems. The build-up to the technical committee's first meeting at Oasis next month has been marked by some controversy. By seeking to standardise the Web services business specification through Oasis, backers of BPEL bypassed a similar effort by standards body the World Wide Web Consortium (W3C). The W3C choreography working group was formed at the beginning of this year with the goal of sorting through several overlapping proposals for automating business processes using Web services. Oracle, Sun Microsystems, BEA and SAP are members of the W3C's choreography working group, but BEA and SAP have decided to throw their weight behind BPEL. IBM and Microsoft were invited to participate in the W3C's standardisation process, but declined. Microsoft attended the first meeting of the W3C's choreography working group, but left after one day... Steven VanRoekel, Microsoft's director of Web services marketing, reiterated that the BPEL authors do not plan to charge royalties on any products based on BPEL.
He indicated that royalties could slow implementation of business process automation products using Web services..."
[April 30, 2003] "OASIS Takes On Workflow Specification. Committee to Craft Standard Based on Spec from IBM, Microsoft, BEA." By John Fontana. In InfoWorld (April 30, 2003). "The Organization for the Advancement of Structured Information Standards on Tuesday agreed to form a committee to investigate crafting a Web services standard for process workflow... OASIS formed the Web Services Business Process Execution Language (WSBPEL) technical committee to take over work on the Business Process Execution Language for Web Services (BPEL4WS) specification. That specification is a workflow language that describes the Web services that need to be executed as part of a business process, the order in which they are executed, and the type of data they share. The new OASIS committee heats up the competition to create a business process workflow standard. The World Wide Web Consortium (W3C) is working on the Web Services Choreography Interface (WSCI), which is supported by Sun and a host of others including BPMI.org, Commerce One, Fujitsu, Intalio, IONA, Oracle, SAP, and SeeBeyond. IBM, Microsoft, and its partners will submit royalty-free Version 1.1 of BPEL4WS to OASIS on May 16, 2003 during the committee's first formal meeting. The committee says it plans to establish a relationship with the W3C. The relationship may take some smoothing over since not more than a month ago Microsoft joined the WSCI group just in time for its first meeting and then abruptly resigned. Now, Microsoft and its partners have firmly established their intention to drive a specification within another standards body..."
[April 29, 2003] "Portal Syndication: Embedding One Web Site's Functionality in Another." By Ivelin Ivanov. From WebServices.xml.com (April 29, 2003). ['Ivelin Ivanov on embedding remote web site functionality with Apache Cocoon.'] "Web site syndication has gotten more popular as sites reference each other not only by a single hyperlink but also by embedding content. The idea was pioneered by Netscape's Rich Site Summary (RSS) XML format. RSS was developed in early 1999 to populate Netscape's My Netscape portal with external newsfeeds ('channels'). Since then RSS has taken on a life of its own and now thousands of sites use it as a 'what's new' mechanism. RSS is an example of an organically grown and widely accepted standard. It was not endorsed by any of the popular standards committees. Even so it quickly became popular and found a large number of creative uses. Lately, however, it has reached its limits. There is a demand for more advanced portal syndication which RSS cannot satisfy. The latest generation web portals demand more than posting news stories. Embedding and personalizing rich content and behavior from remote portals is becoming a necessity. Limited success has been achieved through complex and sophisticated backend integration via proprietary or web services compliant protocols. Recognizing the growing demand, influential organizations have attempted to develop new languages like the Web Services Expression Language (WSXL) from IBM, Web Services Inspection Language (WSIL) from Microsoft, and Web Services for Remote Portals (WSRP) from the OASIS Group. While these efforts are certainly worthwhile and promising, it will most likely take years before they pass the filters of real life trial and failure. All of them require a thick infrastructure layer to support implementations. While possible, it is again unlikely that mainstream deployment will be achieved instantly. 
Fortunately, there is a way to satisfy a large portion of the syndication requirements by applying established technologies and tools. We will illustrate the architecture of a possible solution using Apache Cocoon... The Web Service Proxy component is tightly integrated with the Cocoon framework and is particularly convenient to use in combination with XMLForm to enable syndication of web site functionality..." Related: (1) "RDF Site Summary (RSS)"; (2) "Web Services for Remote Portals (WSRP)"; (3) "Web Services Experience Language (WSXL)"; (4) "Web Services Inspection Language (WSIL)."
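The 'what's new' role RSS plays in this story is easy to see in code. This sketch parses a minimal RSS 0.91-style feed with Python's standard library (the channel and items are made up for illustration; real feeds vary across the several RSS dialects):

```python
import xml.etree.ElementTree as ET

# A minimal RSS 0.91-style document: one channel with headline items.
feed = """<rss version="0.91">
  <channel>
    <title>Example Channel</title>
    <link>http://example.org/</link>
    <item>
      <title>First headline</title>
      <link>http://example.org/1</link>
    </item>
    <item>
      <title>Second headline</title>
      <link>http://example.org/2</link>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(feed)
channel = root.find("channel")
print(channel.findtext("title"))
for item in channel.findall("item"):
    print("-", item.findtext("title"), item.findtext("link"))
```

A flat title-plus-link list like this is exactly the ceiling the article describes: nothing in the format carries behavior or personalization, which is what pushes richer syndication toward WSRP-style approaches or the Cocoon proxying shown here.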
[April 29, 2003] "XML Transactions for Web Services, Part 2." By Faheem Khan. From WebServices.xml.com (April 29, 2003). ['The use of atomic transactions in web services applications. Part 2.'] "In the first part of this series, I introduced federated web service applications and transactional web services, including brief descriptions of the WS-Coordination and WS-Transaction specifications. In this article I discuss and demonstrate the operation of atomic transactions in web services... The WS-Transaction specification defines a set of protocols to support atomic transactions. The term 'atomic transactions' is not specific to web services, of course. It is a concept well known in database applications. In fact, 'database transactions' is effectively a synonym for 'atomic transactions'. WS-Transaction defines the concept of an Atomic Transaction (AT) based on the proven concept of atomic database transactions. The WS-Transaction specification defines an AT as having the following characteristics: It results either in the commitment of all activities or none of them. Activities involved in an AT are treated as an indivisible whole. The entire set of activities -- the atomic whole -- either succeeds or fails. The success of the atomic whole is normally referred to as a 'Commit' operation, while the failure of any single activity results in a 'Rollback' of the set of activities which constitutes the AT. We have seen that a database transaction considers the results of a transaction to be temporary until they are either committed or rolled back. A logical implication of this temporary storage of results is that the transaction is isolated from the rest of the system. If transactions are not isolated, other database operations and transactions may alter the database during the execution of one transaction, thus producing inconsistent or inaccurate results. Isolation in database transactions is a technical issue. But it implies two important side effects.
When we try to keep a transaction isolated, we have to temporarily lock some of the database resources, which means these resources will not be available to other applications during the execution of the transaction. This leads to two side effects: (1) A transaction should take as little time as possible, freeing locked database resources as soon as possible; thus, database transactions are short lived. (2) Only trusted users and applications should be allowed to access the transactional features of a database; being able to lock resources indefinitely is a denial-of-service attack which may be launched by malicious users... In the next part of this series, we will see what happens if things go wrong. Does AT rollback mean that we lose this business opportunity? There should be ways to compensate for rolled back transactions. For example, we might want to buy components from vendors if they are not available in stock. These questions lead us to Business Activities (BA), which I will examine in the next article, particularly how a BA crosses the boundaries of trusted domains..."
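The commit/rollback semantics the article borrows from databases can be seen directly with Python's built-in sqlite3 module. In this sketch (table and quantities invented), two updates form one atomic transaction; when the second violates a constraint, both are rolled back and the data is left exactly as it was:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE stock (part TEXT PRIMARY KEY, qty INTEGER CHECK (qty >= 0))"
)
conn.execute("INSERT INTO stock VALUES ('widget', 5), ('gear', 2)")
conn.commit()

# Atomic transaction: either both debits succeed, or neither does.
try:
    with conn:  # commits on success, rolls back on exception
        conn.execute("UPDATE stock SET qty = qty - 3 WHERE part = 'widget'")
        # 2 - 4 = -2 violates the CHECK constraint and aborts the whole unit.
        conn.execute("UPDATE stock SET qty = qty - 4 WHERE part = 'gear'")
except sqlite3.IntegrityError:
    pass  # the first UPDATE was rolled back along with the second

rows = dict(conn.execute("SELECT part, qty FROM stock"))
print(rows)  # {'widget': 5, 'gear': 2}
```

The rollback also releases the locks taken during the transaction, which is the isolation cost the article describes: while the unit is in flight, other connections contend for those resources, so transactions are kept short.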
[April 29, 2003] "Open Source and Open Standards." By Peter Saint-André. In O'Reilly ONLamp.com (April 29, 2003). ['On the intersection of protocol, source, and community.'] "How critical are open standards to the viability of the open source community? And which is a stronger guarantee of openness in the technology ecosystem: open standards or open code? ... What is a standard? Some people think that when the IETF or W3C approves a protocol or format, it thereby becomes a standard. But standardization is not a matter of approval; rather, it is a matter of acceptance in the market. And what is the market? It's a complex stew of projects and organizations who develop and use the emerging standard. In fact, it looks a lot like the ecosystem of developers and users, but written on a global scale. Not all standards are open (for example, MS Word and PowerPoint). However, when formats and protocols are open, then open implementations that are technically strong are usually (but not always) accepted by the marketplace as standards. Indeed, often a particular implementation of a certain protocol becomes not only a standard but the dominant market-maker. For example, Apache is the dominant web server and a protocol like HTTPng failed to catch on in large part because it lacked support in the Apache community... In the Jabber community, we have pursued something of a hybrid approach. First we simultaneously created the core protocol and open source implementations, then we grew the developer community and user base (as well as the number and range of companies involved in development and deployment). Once that foundation was strong, we finally sought standardization of the core protocol through the IETF's XMPP Working Group, while maintaining a more nimble Jabber-specific community standards process managed by the open Jabber Software Foundation. Only time will tell if Jabber/XMPP becomes a standard for real-time messaging and presence. 
But the Jabber community is definitely focused on strengthening all three legs of the stool: protocol, source, and community. And given everything that's happening with Jabber and XMPP these days, we may well be witnessing the emergence of an Internet standard..." General references in "Extensible Messaging and Presence Protocol (XMPP)."
[April 29, 2003] "Mozilla Plan Sticks to Basics." By Jim Rapoza. In eWEEK (April 28, 2003). "During the last few years, the Mozilla Organization created the blueprint for taking a commercial product and making it open source, converting the old Netscape browser -- which had been vanquished by Microsoft Corp. and Internet Explorer -- into the open-source Mozilla, arguably the best browser on the market today. With this mission accomplished, The Mozilla Organization earlier this month began planning the next phase of Mozilla development. In many ways, it's a big departure from the past. The Mozilla development road map points to a new direction for Mozilla -- one that is more modular and has tighter controls over development... Probably the biggest change for most users will be the move away from the all-encompassing browser suite to individual components that can be easily integrated if a user so chooses. In eWEEK Labs' opinion, this is an excellent move because the massive browser suite that started with the old Netscape Communicator never made much sense to us and was emblematic of the browser wars bloat... Rather than Mozilla, the focus of browser development will now be the stand-alone Firebird (formerly Phoenix) browser. Like Mozilla, Firebird is a cross-platform browser, but it is much leaner and quicker because it is basically just a browser. The name of this component could change in the very near future because Firebird is already the name of an open-source database... Firebird does have an add-on model that makes it simple to bring in additional features and capabilities. Under Firebird, development of the browser will also change: Firebird features a new iteration of the XUL (XML User Interface Language) tool kit. Also getting the stand-alone treatment is the Mozilla mail client, which will now be known as Thunderbird (formerly Minotaur). 
This client will also have an XUL interface and will work across platforms, but it will also be more attractive to businesses and users that want a good open-source mail client but don't want to add an entire browser suite to get one. Other Mozilla components, such as Chatzilla and Composer, are currently up in the air -- it is not clear if they will become stand-alone applications or add-ons to Firebird and Thunderbird..."
[April 29, 2003] "Bleepin' BPELs." By John Taschek. In eWEEK (April 28, 2003). "Is it possible that something once pronounced Bipple and now Be-PEL is shaking up the Web services world? Is something as dry as Business Process Execution Language signaling an important split in Web services standards groups? Are Microsoft, IBM and BEA icing Oracle and Sun and their customers? The answer to each is yes. The decision of IBM, BEA and Microsoft (has anyone realized that the names of the big-three Web services players create an acronym for IBM?) to put a tiny specification for how Web services orchestration should occur into OASIS rather than the W3C sealed the developing split between the two groups. The result of the split is not antagonism, however. OASIS and the W3C are no longer competitors for standards; they're now perfectly complementary... At one time, standards became standards because: (a) they were de facto and in place because a behemoth vendor had tremendous presence in the market with a product -- think Microsoft Word; or (b) the standards bodies agreed through debate and comment to make an underlying communication foundation a reality through a specification -- think the W3C. We're now entering a phase in which standards become standards before the vendors have products and before the specification is fully debated and realized. It's standardization by market dominance, and IBM and Microsoft, with BEA as a tag-along, have mastered how this should happen. The vendors involved could not care less about customers. They're seeing things long before customers see them -- as if they were visionaries when they're actually just big-time capitalists. The problem is that Web services orchestration is years ahead of what most companies want or need to do now. But these vendors are going at it, spiting customers because they can. By the time customers are set to implement, the vendors will be aligned, and customers will have to go with one of the big-three vendors..." 
See: (1) the news story of April 16, 2003: "OASIS Forms Web Services Business Process Execution Language TC (WSBPEL)"; (2) general references in "Business Process Execution Language for Web Services (BPEL4WS)." The document "Business Process Management and Choreography" references related standards activity.
[April 29, 2003] "Bluestream Upgrades XML Database." By Lisa Vaas. In eWEEK (April 29, 2003). "Bluestream Database Software Corp. has released an upgrade to its native XML database that features smoother handling of collaborative content management with XML and binary data. XStreamDB 3.0 has a new resource manager that enables content management via integration with Web or print authoring and publishing software. It now supports Corel XMetaL for XML content authoring using that software's word processor-like view of content. The new version also improves support for Altova XMLSpy for editing data-centric XML documents... Other new features include faster full-text search and indexing, built-in WebDAV server, event triggers, automated backup and the new XStreamDB 3.0 Server Console application for easier server administration. XStreamDB 3.0 also now supports binary document types with MIME-type attributes in addition to native XML document support. It has also acquired derivation by extension and attribute groups, adding to its existing W3C schemas support. The database now has full-text search that features LIKE wildcard matching, found word marking, phrase search and proximity search. XStreamDB 3.0 is a cross-platform database server that runs on Windows NT/2000/XP, Solaris, Mac OS X or Linux. It features a choice of Java API, WebDAV or the XStreamDB 3.0 Explorer application for access to documents and data. The database also supports XQuery with update extensions, full text search, shared resource management and XML schemas with automatic validation. XStreamDB is compliant with the ACID (Atomicity Consistency Isolation Durability) standard for open-source database systems. That standard indicates that the database supports "all or nothing" transactions -- those that either work to their conclusion or refrain from changing data. XStreamDB is a pure Java technology-based server and requires Java 1.3 or 1.4 Runtime Environment..." 
See the announcement "Bluestream Releases XStreamDB 3.0 Native XML Database. Major Upgrade Adds Features for Collaborative Content Management." General references in "XML and Databases."
[April 28, 2003] "OASIS Pushes Global E-Procurement Standardization." By Mark Leon. In InternetNews.com (April 28, 2003). "The Organization for the Advancement of Structured Information Standards (OASIS) consortium has created a forum for government agencies, organizations, and private companies to work together on global e-procurement standards. The new Electronic Procurement Standardization (EPS) Technical Committee will analyze requirements, identify gaps, and make recommendations for improving the interoperability of XML, Internet-based e-procurement systems. Other EPS participants include the Institute for Supply Management, Information Society Standardization System of the European Standards Committee (CEN/ISSS), US National Institute for Governmental Purchasing (NIGP), US National Association of State Procurement Officials (NASPO), RosettaNet, and SeeBeyond. Terri Tracey of the Institute for Supply Management will chair the new committee. 'We are already hearing from other organizations that are interested in joining,' she said. Patrick Gannon, OASIS president and CEO, said input from government and industry is essential to ensure that new e-procurement specifications are neutral and effective. John Ketchell, CEN/ISSS director, said his organization joined the EPS to foster consensus between emerging global standards and regional European requirements. He also said CEN/ISSS plans to start an e-procurement project to complement European legislative initiatives designed to harmonize public e-procurement across EU member states. Rob Rosenthal, senior analyst with IDC in Framingham, Mass., saw the announcement as a positive development, but said no one should expect a universal standard for e-procurement any time soon. 'Procurement has always been a big part of e-commerce,' said Rosenthal, 'but it goes back at least twenty five years with EDI (Electronic Data Interchange) and there are as many EDI standards as there are companies with connections'..." 
See: (1) the news story of 2003-04-04: "OASIS Forms Electronic Procurement Standardization Technical Committee"; (2) general references in "Electronic Procurement Standardization."
[April 28, 2003] "OASIS: Net Procurement Needs to Align." By Paul Festa. In CNET News.com (April 28, 2003). "A Web services standards group has launched an effort to push for uniform practices in the way supplies are bought and sold online. The Organization for the Advancement of Structured Information Standards (OASIS) formed a committee devoted to what it called 'e-procurement' systems, saying existing technology needed standardization... 'Our first priority will be to develop a comprehensive framework for electronic procurement standards, relating existing specifications to those in development,' Terri Tracey, chair of the OASIS Electronic Procurement Standardization (EPS) Technical Committee, said in a statement. 'It is vital that we reach consensus on how these standards fit together. Once we establish our framework and priorities, we will create technical committees within OASIS to advance the necessary standards and implementation processes.' Members of the new committee include the Institute for Supply Management, the Information Society Standardization System of the European Standards Committee (CEN/ISSS), National Institute for Governmental Purchasing (NIGP), National Association of State Procurement Officials (NASPO), RosettaNet and SeeBeyond Technologies..." See: (1) the news story of 2003-04-04: "OASIS Forms Electronic Procurement Standardization Technical Committee"; (2) general references in "Electronic Procurement Standardization."
[April 26, 2003] "Introducing WS-Coordination. A Big Step Toward a New Standard." By Jim Webber and Mark Little (Arjuna Technologies Limited). In WebServices Journal Volume 3, Issue 5 (May 24, 2003). With 5 figures. ['In July 2002, BEA, IBM, and Microsoft released a trio of specifications designed to support business transactions over Web services. These specifications - BPEL4WS, WS-Transaction, and WS-Coordination - together form the bedrock for reliably choreographing Web services-based applications, providing business process management, transactional integrity, and generic coordination facilities respectively.'] "This article introduces the underlying concepts of Web Services Coordination, and shows how a generic coordination framework can be used to provide the foundations for higher-level business processes. In future articles, we will demonstrate how coordination allows us to move up the Web services stack to encompass WS-Transaction and on to BPEL4WS... The fundamental idea underpinning WS-Coordination is that there is indeed a generic need for propagating context information in a Web services environment, which is a shared requirement irrespective of the applications being executed. The WS-Coordination specification defines a framework that allows different coordination protocols to be plugged in to coordinate work between clients, services, and participants. The WS-Coordination specification talks in terms of activities, which are distributed units of work involving one or more parties (which may be services, components, or even objects). At this level, an activity is minimally specified and is simply created, made to run, and then completed... 
Whatever coordination protocol is used, and in whatever domain it is deployed, the same generic requirements are present: (1) Instantiation (or activation) of a new coordinator for the specific coordination protocol for a particular application instance; (2) Registration of participants with the coordinator such that they will receive that coordinator's protocol messages during (some part of) the application's lifetime; (3) Propagation of contextual information between the Web services that comprise the application; (4) An entity to drive the coordination protocol through to completion... The context is critical to coordination since it contains the information necessary for services to participate in the protocol. It provides the glue to bind all of the application's constituent Web services together into a single coordinated application whole. Since WS-Coordination is a generic coordination framework, contexts have to be tailored to meet the needs of specific coordination protocols that are plugged into the framework... WS-Coordination looks set to become the adopted standard for activity coordination on the Web. Out of the box, WS-Coordination provides only activity and registration services, and is extended through protocol plug-ins that provide domain-specific coordination facilities. In addition to its generic nature, the WS-Coordination model also scales efficiently via interposed coordination, which allows arbitrary collections of Web services to coordinate their operation in a straightforward and scalable manner. Though WS-Coordination is generically useful, at the time of this writing only one protocol that leverages WS-Coordination has been made public: WS-Transaction. We'll look at this protocol in our next article." [alt URL]
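The four generic requirements can be sketched in plain code. The Python sketch below uses invented names throughout (these are not the WS-Coordination specification's own elements or API) purely to illustrate activation, registration, context propagation, and completion:

```python
import uuid

class Coordinator:
    """Illustrative sketch only; class, key, and URL names are hypothetical."""

    def __init__(self, coordination_type):
        # (1) Activation: a new coordinator for one application instance
        self.context = {
            "identifier": "urn:activity:" + str(uuid.uuid4()),
            "coordination-type": coordination_type,
            "registration-service": "http://example.org/register",
        }
        self.participants = []

    def register(self, participant):
        # (2) Registration: this participant now receives protocol messages
        self.participants.append(participant)

    def complete(self, message):
        # (4) Completion: drive the plugged-in protocol by fanning messages out
        return [participant(message) for participant in self.participants]

coord = Coordinator("http://example.org/protocols/atomic-transaction")
# (3) Propagation: the context travels with every application message,
# gluing the constituent services into one coordinated whole
outgoing = {"body": "doWork", "context": coord.context}
coord.register(lambda msg: "participant-1 saw " + msg)
coord.register(lambda msg: "participant-2 saw " + msg)
print(coord.complete("prepare"))
```

The point of the sketch is the separation the article stresses: the coordinator and registration machinery are generic, while whatever the participants actually do with "prepare" belongs to the plugged-in protocol.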
[April 26, 2003] "Business Processes: Turning Integration Upside Down." By Jim Mackay (Business Technology Group, webMethods). In WebServices Journal Volume 3, Issue 5 (May 24, 2003). "In the IT world, integration became an issue as soon as the second computer with the second application came online... Perhaps the largest group to latch onto business process integration is enterprise system vendors. The first was J.D. Edwards, with its XBP/XPI approach. SAP developed its version, known as xApps, on top of its NetWeaver platform. PeopleSoft has AppConnect, and the most recent, and probably most publicized, is Siebel's UAN initiative. Each vendor approaches the solution in a slightly different way. SAP and PeopleSoft, for the most part, have built all of the underlying technology themselves as an extension of the latest generation of their core infrastructure and middleware. At this point, they both still outsource adapter support. J.D. Edwards, on the other hand, embeds webMethods technology as its XPI infrastructure, then builds its XBPs, or Cross Business Processes, using webMethods middleware. Siebel has decided that, to achieve true openness and interoperability, it needs to define platform-neutral business processes using standards, and then work with multiple vendors, including webMethods and Tibco, to create packaged solutions to deploy these processes on their respective platforms. Whether employing business processes through the offerings of integration vendors, system integrators, or enterprise software vendors, a key set of requirements must exist in order to be successful: First, a compelling business problem that will return the investment in a short time. Generally, these are business problems endemic to specific industries, so implemented solutions are as close to "packaged" as possible. 
Next, the composite applications that make up the detailed functionality supporting these business processes have to be available, and exposed as services through standards. Finally, a service-oriented integration infrastructure for the design, development, extension, deployment, maintenance and management of the processes must be in place. This integration infrastructure needs to leverage the decoupling that was discussed earlier. It needs to encourage and leverage the separation of the process flow from the human interaction from the data documents and their associated representation and translation. It also must be able to instantiate business processes that were defined without the end implementation and deployment in mind. And last, it must facilitate the management of multiple business processes executing simultaneously across a distributed environment..." [alt URL]
[April 26, 2003] "Exposing Legacy Applications. An Apache SOAP Framework That Provides an Excellent Implementation." By Adelene Ng (Xerox Corporation, Rochester, NY). In WebServices Journal Volume 3, Issue 5 (May 24, 2003). With 10 figures. "In this article, I propose a solution using SOAP that allows legacy client methods to be called from a SOAP client. A SOAP client is just like any program that can be run from the user's computer. Even though other solutions are possible, for example using XML-RPC, SOAP offers the following advantages: Support for a richer set of data types; Allowing users to specify a particular encoding; Namespace aware; Asynchronous; Support for request routing and preprocessing of requests via SOAP headers... I use a three-tier architecture in this implementation... The first tier is the SOAP client, which accesses the legacy application through SOAP. The legacy application resides in another client/server application. The legacy client resides on the Web server -- the middle tier. A wrapper layer is provided around the legacy client application. This allows for the clean separation of legacy calls from the wrapper class, and it isolates the legacy code to a single location, which makes for easy maintenance. The SOAP client accesses the legacy client application on the Web server through the wrapper class. The legacy client, in turn, communicates with the legacy server through ONC-RPC. The legacy server may reside on a different machine and is the third tier in our architecture. This article shows how legacy applications can be exposed as a Web service and accessed through the SOAP client. I also discuss the possibilities of 'chaining' on both the Web server and legacy application tiers. The Apache SOAP implementation provides an excellent framework in which to develop SOAP applications. The underlying mechanism (Call object) makes all the SOAP communications appear to work seamlessly together. 
The user does not need to be exposed to the nitty-gritty details of SOAP in order to make a SOAP call..." [alt URL]
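The wrapper-layer idea is language-neutral, even though Ng's implementation is Java on Apache SOAP. A minimal Python sketch (the legacy interface and method names below are hypothetical) shows how a single wrapper class isolates every legacy call in one place:

```python
class LegacyClient:
    """Stand-in for the ONC-RPC legacy client; this interface is invented."""
    def rpc_get_status(self, job_id):
        return {"job": job_id, "status": "DONE"}

class LegacyWrapper:
    """The wrapper layer: the only code that touches legacy calls, so the
    SOAP-facing surface stays clean and legacy code lives in one location,
    which makes for easy maintenance."""
    def __init__(self, legacy_client):
        self._legacy = legacy_client

    def get_status(self, job_id):
        # Translate the service-friendly call into the legacy RPC call
        raw = self._legacy.rpc_get_status(job_id)
        return raw["status"]

# The SOAP endpoint on the middle tier would delegate to the wrapper,
# never to the legacy client directly.
wrapper = LegacyWrapper(LegacyClient())
print(wrapper.get_status("job-42"))   # -> DONE
```

Swapping the legacy transport (ONC-RPC for something else) then means changing only the wrapper, which is exactly the maintenance argument the article makes.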
[April 26, 2003] "Don't Segment Desktop XML. InfoPath Needs to be Everywhere." By Jon Udell. In InfoWorld (April 25, 2003). "My vocal support for the next version of Microsoft Office has drawn heat in various quarters. Naysayers are convinced that Microsoft will find some way to cripple the XML capabilities of Word, Excel, and InfoPath. I've said they're wrong. These products do XML by the book. Even more importantly, they embody a vision that's eluded the Web services plumbers: people are SOAP endpoints too. Business processes do not exist in some separate universe in which XML packets flow untouched by human hands. We're not just input sources and output sinks. We have to be monitors and exception-handlers too. And when our ubiquitous personal productivity tools enable us to see and touch XML data, we can be. Unfortunately, the power to see and touch XML data now seems a lot less ubiquitous than it did a couple of weeks ago. InfoPath will be bundled only with the volume-licensed Enterprise Edition of Office 2003. It will also be available as a standalone. Customer-defined schemas in Word and Excel will be supported in the Enterprise and Pro editions, but not in the Standard. Microsoft argues that because Pro outsells Standard at retail, and because businesses prefer Enterprise, the stratification of the XML offerings is a customer-driven response... Microsoft now sees schema support and InfoPath as an enterprise play that might -- or might not -- trickle down. I think they're an enterprise play too, but I reject the trickle-down theory. The most vibrant XML applications today are coming from the grassroots up, in the form of RSS-enabled Weblogs... An enterprise can't simply equip all its people with the Enterprise edition of Office and assume it will only do business with other enterprises that are so equipped. The RSS network crosses all boundaries. So does e-mail... The future of XML on the desktop is far from certain. 
Now is not the time to segment a market that has only just begun to grow. I hope Microsoft will reconsider. And I trust that the competition is paying attention..." See: (1) "Microsoft Office 11 and InfoPath [XDocs]"; (2) "XML File Formats for Office Documents."
[April 26, 2003] "Web Services Creation Made Easy. Beta of BEA WebLogic Workshop 8.1 Insulates Developers from J2EE Trials and Tribulations." By Rick Grehan. In InfoWorld (April 25, 2003). "BEA WebLogic Workshop is a combination development/run-time environment, very much in the spirit of IBM's WebSphere Application Developer. But WebLogic Workshop exclusively generates J2EE applications, and it operates at a dizzying level of abstraction compared with similar tools. Whereas J2EE was all about abstracting low-level entities such as database rows and message queues, WebLogic Workshop is all about abstracting J2EE... BEA WebLogic Workshop 8.1 is a unique development environment for creating J2EE Web services that run atop the WebLogic application server. The developer does little interacting with J2EE itself, so you need only a moderate to advanced understanding of Java, and very little understanding of J2EE, to build Web services with the Workshop. This allows for the rapid creation of J2EE applications, yet it also means tethering to a specific metaphor, toolset, run-time framework, and app server. Nevertheless, Workshop's handling of XML is something other development environments should pursue. BEA WebLogic Workshop 8.1 is available in two editions. The Application Developer Edition provides the basic development, testing, and deployment tools described here. The Platform Edition adds capabilities for developing portal applications and is designed to be used in conjunction with BEA's WebLogic Portal 8.1 and WebLogic Integration 8.1... SOAP is the lifeblood of a Workshop Web service, and XML forms its corpuscles. All interactions between components of a Workshop-built J2EE application, as well as between clients and services, communicate via XML wrapped in SOAP. At multiple points of execution, therefore, your code must read and write XML messages. The Workshop provides two tools for this: the XQuery mapper and XMLBeans. 
The XQuery mapper reads an XML schema on one side, an input method to a Web service control on the other side, and allows you to graphically connect the two. A developer can literally drag connecting lines from nodes of a parsed XML schema view on one side and connect them to Java object members on the other. The Workshop does the rest; at run time, XML messages will be unpacked and their contents delivered to the proper object members. XMLBeans take a more procedural approach. The Workshop reads the schema, and produces a bean whose methods allow your application to read, write, and modify XML message elements. All the while, the XMLBean maintains the fidelity of the XML message itself..."
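The XMLBeans approach can be approximated in a few lines. The Python sketch below is hand-written rather than schema-generated, and the quote message format is invented; it only illustrates the core idea of a typed accessor that edits elements in place so the untouched parts of the message keep full fidelity:

```python
import xml.etree.ElementTree as ET

class QuoteBean:
    """Toy stand-in for a schema-generated bean (names are illustrative).
    Typed getters/setters wrap an XML message; edits happen in place, so
    everything the code does not touch is preserved verbatim on output."""
    def __init__(self, xml_text):
        self._root = ET.fromstring(xml_text)

    def get_price(self):
        return float(self._root.findtext("price"))

    def set_price(self, value):
        self._root.find("price").text = str(value)

    def to_xml(self):
        return ET.tostring(self._root, encoding="unicode")

bean = QuoteBean("<quote><symbol>IBM</symbol><price>85.5</price></quote>")
bean.set_price(bean.get_price() + 1.0)
print(bean.to_xml())   # the symbol element is untouched; only price changed
```

In the real XMLBeans tool the accessor classes are generated from the schema rather than written by hand, which is what makes the approach "procedural" compared with the graphical XQuery mapper.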
[April 26, 2003] "Programming for the 21st Century -- o:XML." By Jeff Davies. In ZDNet News (April 25, 2003). "Some years ago, I had an idea about an XML programming language, and made a DTD and XSL to convert the program in XML to Java through transformation via XSLT. Like all good ideas, it turned out I was only one of many with the same idea (having searched SourceForge for similar projects). At the time, I made contact with Martin Klang who had made far more progress on a project with the same idea. Time passed and I lost the files, but recently I recreated them and made a SourceForge project. Then, out of the blue, an e-mail arrived from Martin Klang announcing the availability of ObjectBox 1.0. This is currently under a closed-source license, but he's going to release the next version under the GPL or a similar license. Whereas I and many others had sat on our hands with our ideas, Martin had been working away, and actually developed a working product. It is currently limited to Servlet-type applications, but there is no reason why (for example) the Linux kernel can't be written in XML. See www.o-xml.org to download ObjectBox or look at the object-oriented XML programming language (o:XML) specification. o:XML is actually great for Servlets since XML and HTML can be more easily embedded in the code than in, say, Perl, CGI, or PHP, by virtue of the fact that you can use a full XML editor like XMLSpy to edit the code... As XML parsers are comparatively easy to implement and use, editors have the same parser in them, and therefore can display the hierarchical structure of the program in a tree format easily. Note: such is the nature of XML and the tools that, from a given bit of code in XML, you could easily produce C or Java source code, or indeed Assembler, or, through a more custom SAX-based parser, even a machine-code-generating transformation tool. 
The simplicity and pluggability of the overall solution make it easy to add management layers to the code (like auditing, like integration of functional specification). The move to XML should also make searching for similar blocks of code simpler. This means you can refactor using XP more easily: making sure there are not multiple bits of code doing the same thing in many places, the intention, of course, being to reduce software maintenance costs. Therefore, I see the move to an XML programming language as a precursor to a new generation of cheaper, high-quality, bloat-free software..." See "The o:XML Programming Language," by Martin Klang.
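The program-as-XML idea is easy to demonstrate: walk an XML "program" with an ordinary parser and emit conventional source, the same job an XSLT stylesheet could do. The dialect below is invented for illustration and is not o:XML's actual syntax; the sketch uses Python's standard parser:

```python
import xml.etree.ElementTree as ET

# A toy XML programming dialect (hypothetical, not o:XML syntax).
program = """
<class name="Greeter">
  <method name="greet">
    <print>Hello from XML</print>
  </method>
</class>
"""

def emit_java(root):
    """Walk the XML tree and emit Java-like source for it."""
    lines = ["public class %s {" % root.get("name")]
    for method in root.findall("method"):
        lines.append("    public void %s() {" % method.get("name"))
        for stmt in method.findall("print"):
            lines.append('        System.out.println("%s");' % stmt.text)
        lines.append("    }")
    lines.append("}")
    return "\n".join(lines)

print(emit_java(ET.fromstring(program)))
```

Because the program is just a tree, the same walk could target C or any other backend, which is Davies' point about XML's tooling advantage: the editor, the search tools, and the code generator all share one parser.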
[April 26, 2003] "XML for Data: Reuse It Or Lose It, Part 2. Understanding Reusable Components." By Kevin Williams (CEO, Blue Oxide Technologies, LLC). From IBM developerWorks, XML zone. April 25, 2003. ['In his last column, Kevin Williams explained how component-level reuse in XML designs can decrease code complexity and shorten maintenance cycles. In this article, the second in a series of three, he describes the types of components that can be reused in XML designs and provides examples of each in XML and XML Schema.'] "Now that I've explained the why of component-level reuse in XML document design, I'll delve into the what. Understanding the types of components that you can reuse in XML documents -- and the advantages to doing so -- will help you to identify good reuse opportunities and take advantage of them... The most atomic design construct you can reuse is the datatype. A datatype usually takes one of the built-in datatypes from XML Schema (or another user-defined datatype) and restricts it in some way... Enumerations are lumped together with datatypes in XML Schema, but they are different enough from other datatypes that they deserve their own discussion. An enumeration is a list of allowable values for a particular text-only element or attribute. The classic example is the list of all the states in the United States... As you move higher up the reuse food chain, you start to exit the world of XML Schema and enter the world of Web services. It's a bit out of scope for this column, but I'll just mention that higher-order artifacts -- operations, services, and even orchestrated flows -- can be built up out of these reusable building blocks. For example, you might define a getQuote operation that takes a tickerSymbol datapoint (reused) as its input and returns a stockInfo structure (reused) as its output. A service might then implement the getQuote operation (reused) at a particular endpoint and with particular bindings... 
XML design can be complicated and cumbersome. With the rapid proliferation of XML designs throughout the enterprise, you're in for significant headaches if you don't take the time to see how you can reuse these design artifacts up front. While a good approach to reuse isn't necessarily going to address design requirements for an entire application, most design efforts can probably leverage some existing structures, datapoints, datatypes, and enumerations, and then add just those components that are unique to a particular task. In the next column, I'll show you some approaches to implementing these reusable components in the enterprise..."
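The two most atomic reusable components Williams describes, a restricted datatype and an enumeration, look like this in XML Schema (the type names and value lists here are illustrative, not taken from the article):

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- datatype: a built-in type narrowed by restriction -->
  <xs:simpleType name="tickerSymbolType">
    <xs:restriction base="xs:string">
      <xs:pattern value="[A-Z]{1,5}"/>
    </xs:restriction>
  </xs:simpleType>
  <!-- enumeration: a closed list of allowable values -->
  <xs:simpleType name="usStateType">
    <xs:restriction base="xs:string">
      <xs:enumeration value="CA"/>
      <xs:enumeration value="NY"/>
      <xs:enumeration value="TX"/>
      <!-- remaining states omitted for brevity -->
    </xs:restriction>
  </xs:simpleType>
</xs:schema>
```

Either type can then be reused wherever the value appears, e.g. `<xs:element name="tickerSymbol" type="tickerSymbolType"/>`, which is the component-level reuse the column argues for.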
[April 26, 2003] "Tibco Promotes Flexibility Through BPM and Analytics." By [Staff]. In ComputerWeekly.com (April 25, 2003). "You would think that Tibco Software, an early innovator and leader in enterprise application integration (EAI), would be heavily promoting the addition of open-standard web services interfaces to its traditional messaging architecture and library of EAI adapters. But because web services interfaces have become mandatory for all integration products it has become tough to differentiate based on open protocols. So Tibco is focusing on enabling the dynamic enterprise by putting business process management (BPM) and analytics functionality higher in the stack. In the early days of the real-time enterprise -- the 1990s -- Tibco pioneered messaging and integration technologies for demanding markets such as brokerage trading desks, where packets of information had to be there in a fraction of a second. The supplier then prospered by packaging these tools and selling them as part of an integration server that could tie together any number of diverse applications and legacy systems. Today, Tibco and other pure-play integration suppliers, including WebMethods, Vitria and SeeBeyond, are on a collision course with application suppliers, which have realised the value in providing integration middleware themselves. Other suppliers, such as IBM and BEA Systems, pose a threat as they incorporate both integration and higher-level capabilities into the application server... The supplier is developing vertical, market-specific solutions that try to help enterprise IT leaders leverage the data already running through their pipes. Tibco has developed BPM tools, such as BusinessWorks, that allow enterprise IT managers to define modular business logic components and then to quickly and flexibly reuse them in different business processes as the need arises -- for example, if a product or partner changes or in a mergers and acquisitions situation. 
The supplier also recently introduced a business optimisation dashboard product called BusinessFactor, which aims to deliver real-time performance management technology, for example, for greater supply-chain visibility. Tibco's chief strategist Ram Menon described it as 'closed-loop integration: Connect it together, see what's going on, understand it and react to it'..."
[April 26, 2003] "Judge Upholds Law Requiring ISPs to Name Downloaders. Critics Say Ruling is Big Blow to Internet Users' Privacy." By Grant Gross. In InfoWorld (April 25, 2003). "A U.S. federal judge has again sided with the recording industry in its efforts to subpoena the name of a music downloader, upholding a portion of the Digital Millennium Copyright Act (DMCA) that requires Internet service providers to turn over names of alleged copyright infringers. Critics said the law provides a cheap and easy way for music companies, or anyone else, to find out the names of anonymous Internet users... Verizon, along with 28 consumer and privacy groups and 18 other ISPs and ISP organizations, argued against the subpoenas, which are issued by a court clerk, not a judge, when a copyright holder has a reasonable suspicion of a violation. Verizon argued that the DMCA subpoenas only apply when the files are hosted on the ISP's network, not on a customer's computer. Verizon also argued that the clerk-issued subpoenas open up a potential for abuse, with anyone wanting to know the name of an anonymous Internet user, including pedophiles and stalkers, able to claim a copyright violation and get a subpoena... Verizon said it will appeal Bates' decision, and the company is currently fighting a second subpoena from the RIAA. John Thorne, senior vice president and deputy general counsel for Verizon, said in a statement the company will immediately seek a stay of Bates' order in the U.S. Court of Appeals. The decision goes beyond protecting 'copyright monopolists,' he added in the statement. 'This decision exposes anyone who uses the Internet to potential predators, scam artists and crooks, including identity thieves and stalkers,' Thorne added. 'We will continue to use every legal means available to protect our subscribers' privacy'..."
[April 24, 2003] "XML Grows... and Slows." By Jim Ericson. In Line56 (April 24, 2003). "With standards adoption spreading, some wonder if a messaging glut will lead to infrastructure overload. XML standards and specifications are more and more coming to bear on a variety of business processes inside and outside the four walls in support of new architectures that are flexible and loosely coupled. XML-based messaging in the form of Web services will dominate deployment of new application solutions by 2004, according to Gartner Inc. Gartner sees some 80 percent of all platform vendors now supporting Web service architectures, which it calls the 'next generation of platform middleware.' We can't name all the standards groups out there, but we're all familiar with the effects of too much of a good thing. The openness and flexibility of XML that serves machine and human interactions also comes with a layer of abstraction that is verbose and nested with content that requires extra processing. Performance degrades as messages and their accompanying security, encryption and digital signatures start to pile up. Now, some people are beginning to ask whether a looming glut of XML messaging will become the Los Angeles freeway of network traffic and data processing. 'There's a limit to how fast XML-based messaging can ever run,' says Joshua Greenbaum, principal at Enterprise Applications consulting. 'When you see where all the vendors are moving, like SAP with NetWeaver, there's an assumption [of increasing message traffic]. Until we have a way to speed the marshaling of XML, we'll end up hoping customers won't implement that many message streams in parallel'... By various estimates, the arrival of pervasive XML message traffic is anywhere from two to 10 years out. But we're already seeing workarounds to manifestations of the processing problem. 
Some portal users are adopting batch updates from offline portal specialist BackWeb over real-time connectivity to better address data latency. Another company, Cysive, advocates the use of native language calls over XML behind the firewall as a way to improve data performance..."
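The verbosity concern is easy to see in miniature. As a rough illustration (invented readings, standard library only), the same three numeric measurements encoded as an XML message come out several times larger than a packed binary record:

```python
# Rough size comparison: three measurements as an XML message versus
# a packed binary record of three 8-byte doubles. Data is invented.
import struct
import xml.etree.ElementTree as ET

readings = [("temp", 21.5), ("wind", 3.25), ("humidity", 0.5)]

root = ET.Element("readings")
for name, value in readings:
    el = ET.SubElement(root, "reading", name=name)
    el.text = str(value)
xml_bytes = ET.tostring(root)

bin_bytes = struct.pack("!3d", *(v for _, v in readings))

print(len(xml_bytes), len(bin_bytes))  # the XML form is several times larger
```

Multiply that per-message overhead by parsing, validation, and signature processing across thousands of parallel message streams and the freeway metaphor starts to look apt.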
[April 23, 2003] "SIMPLE-XMPP Interworking." By Daniel-Constantin Mierla (Fraunhofer FOKUS, Berlin, Germany). IETF Network Working Group, Internet-Draft. Reference: 'draft-mierla-simple-xmpp-interworking-00'. April 23, 2003, expires October 22, 2003. "This document describes the behavior for the logical entity known as the SIMPLE-XMPP Interworking Function (SIMPLE-XMPP IWF) that will allow the interworking between the SIMPLE (Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions) and XMPP (eXtensible Messaging and Presence Protocol - also known as Jabber protocol) protocols... SIMPLE [Session Initiation Protocol (SIP) Extension for Instant Messaging] extends the Session Initiation Protocol with Instant Messaging and Presence functionality. The Session Initiation Protocol (SIP) [SIP: Session Initiation Protocol] was designed to initiate and manipulate media 'sessions' between communicating parties. XMPP is an XML-based streaming protocol designed for Instant Messaging and Presence. The primary objective of a SIMPLE-XMPP Interworking function (IWF) is to provide protocol conversion between SIMPLE and XMPP protocols. The document describes the requirements and behavior of the SIMPLE-XMPP Interworking function for conversion of the SIMPLE and XMPP protocols. How to use SIP to initiate XMPP chat sessions or how to initiate sessions over XMPP are not the subject of the present document... [Functionally,] SIMPLE-XMPP IWF can be designed in various ways. This may include coexistence of SIMPLE Servers and/or XMPP Servers with IWF. The co-location of the SIMPLE server and/or XMPP server in conjunction with the IWF is a matter of implementation and not a protocol issue. There shall be no assumptions made for the optional elements and components present in either SIMPLE or XMPP networks. The solution provided here shall work for a minimum configuration required for both protocols. 
There may be recommendations for other configurations, which include optional components..." See: "Extensible Messaging and Presence Protocol (XMPP)." [cache]
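As a toy illustration of one task such a gateway faces (the actual conversion rules belong to the specifications, so treat this only as a sketch), an interworking function must map between SIMPLE's SIP URIs and XMPP addresses (JIDs):

```python
# Toy address mapping a SIMPLE-XMPP gateway might perform; the real
# rules are defined by the protocols, not by this sketch.
def sip_uri_to_jid(uri):
    """Map 'sip:user@domain;params' to a bare XMPP address (JID)."""
    if not uri.startswith("sip:"):
        raise ValueError("not a SIP URI: " + uri)
    return uri[len("sip:"):].split(";", 1)[0]  # drop URI parameters

def jid_to_sip_uri(jid):
    """Map 'user@domain/resource' back to a SIP URI."""
    return "sip:" + jid.split("/", 1)[0]  # drop any XMPP resource part

print(sip_uri_to_jid("sip:alice@example.com;transport=tcp"))  # alice@example.com
print(jid_to_sip_uri("alice@example.com/home"))  # sip:alice@example.com
```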
[April 23, 2003] "An SVG Case Study: Integrated, Dynamic Avalanche Forecasting." By Chris Cochella and Tyler Cruickshank. From XML.com (April 23, 2003). ['An article on using SVG to present integrated information on skiing weather conditions.'] "... We are avid backcountry skiers (backcountry snow conditions are not considered generally useful). A wise backcountry skier is always aware of the specific local and regional weather conditions in the mountains that contribute to avalanche danger. For winter backcountry enthusiasts like us, the problem is that all of the weather data available (i.e., National Oceanic and Atmospheric Administration's National Weather Service) from remote mountain stations and ski areas is scattered throughout the Web -- in various formats, of varying frequency, contained in difficult to read text files, and differing in measured parameters. Cobbling this information together at 6AM prior to skiing is not our idea of fun. Thus, our goal is to collect all of this data in one place and then graphically display related parameters in a Web information appliance. We call this appliance the Avalanche Meteorology Toolkit (AMT) [requires SVG plugin; see also Avalanche.org]. The process of bringing all this information together is messy. But it can be done by using Perl to do data clean up, handing off to a MySQL database for storage, and then to Scalable Vector Graphics (SVG) for a lightweight graphics display, which can be viewed by Adobe's SVG Plugin. The article Data-Driven SVG Apps by Pramod Jain is useful for understanding the multi-tier approach (data storage, data processing, display)... Our solution requires five steps, utilizes two Perl scripts, one SVG information appliance interface, and one SVG file for each weather station... The primary benefits of our approach are the integration of disparate information sources into a single, lightweight display of relevant real-time information. Perl, of course, is great for text processing. 
SVG along with Adobe's server-connection capability has proven to be a very flexible and lightweight means of information display. We are planning the integration of additional dynamic information like avalanche pictures posted by the public, integration of the Utah Avalanche Center forecast advisory, and a regional map of recent avalanches. By combining all this information in one place, backcountry travelers and avalanche forecasters have been able to quickly assess backcountry conditions and make appropriate, informed decisions. There is no end to the possibilities for creating dynamic, lightweight displays of real-time data: other scientific monitoring devices, traffic monitoring, manufacturing processes, financial data, and so on..." See also: (1) "An Internet Based Snow Stability Toolkit for the Temporal and Spatial Display of Meteorological, Snow Stability, and Avalanche Related Data for Forecasters and Backcountry Users"; (2) SVG Essentials, by J. David Eisenberg. General references in "W3C Scalable Vector Graphics (SVG)."
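The display step the authors describe (turning cleaned-up station data into a lightweight SVG graphic) can be sketched as follows; the readings and the naive scaling are invented for illustration, and the authors' own pipeline uses Perl rather than Python:

```python
# Invented station data and naive scaling: render a day of temperature
# readings as a small SVG polyline, the kind of lightweight graphic
# such a toolkit can serve to a browser's SVG plugin.
def temps_to_svg(points, width=200, height=100):
    xs = [h for h, _ in points]
    ys = [t for _, t in points]
    def sx(h):  # hour -> pixel x
        return (h - min(xs)) * width / (max(xs) - min(xs))
    def sy(t):  # temperature -> pixel y (SVG y grows downward)
        return height - (t - min(ys)) * height / (max(ys) - min(ys))
    pts = " ".join("%.1f,%.1f" % (sx(h), sy(t)) for h, t in points)
    return ('<svg xmlns="http://www.w3.org/2000/svg" '
            'width="%d" height="%d">'
            '<polyline points="%s" fill="none" stroke="black"/>'
            '</svg>') % (width, height, pts)

readings = [(0, -5.0), (6, -7.5), (12, -2.0), (18, -4.0)]
print(temps_to_svg(readings))
```

Because the output is plain text, the same generation step slots naturally after any database query, which is much of SVG's appeal for real-time displays.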
[April 23, 2003] "At Microsoft's Mercy." By Kendall Grant Clark. From XML.com (April 23, 2003). ['The future of XML editing is largely in Microsoft's hands, observes Kendall Clark.'] "Even after five long years of XML development, the ideal and ubiquitous 'XML editor for humans' seems more rumor than reality. Could it be that we have underestimated the difficulty of building a tool with which ordinary people can easily and simply create XML content? What troubled me even more, however, was the conclusion I reached in that column, namely, that the XML creation facilities in the next major release of Microsoft Office are the best, realistic hope for the future of the documents side of XML, at least in terms of mass market success. No other entity in the industry (in any industry, for that matter) is as able to swing mass numbers of computer users toward or away from specific technological solutions. That conclusion troubled me then and troubles me today not only because of Microsoft's spotted history, but also because, if it's at all correct, it's not an enviable position for XML to occupy. A significant part of the robust vision of XML's success is the documents part of the XML application spectrum. The potential of XML will not be maximized unless or until it is used to exchange documents, freeing people and organizations from an inappropriate reliance on proprietary document-creation infrastructure and tools. But relying on one company to create ubiquitous end-user tools is no more a good position to occupy than is the one in which the people and organizations rely on a single vendor for proprietary access to their documents and data. In other words, XML is important as a document exchange format insofar as it prevents (or, at least, mitigates) proprietary vendor lock-in, for both people and organizations. 
The XML industry must seek to avoid finding itself in a similar position, namely, one in which it relies upon Microsoft to make XML creation tools for end users ubiquitous. Of course, there are many smaller vendors which are building XML creation tools, many of which are very well designed and engineered. But smaller vendors can only do so much to influence or to oppose the juggernaut... Microsoft's decision to offer the really interesting XML bits of the new Office only in its high-end versions is likely not to be as harmful to as many relevant parties as it might first appear, though it does reaffirm the prudential wisdom of assuming the worst about Microsoft and waiting to be pleasantly surprised by the unexpected..."
[April 23, 2003] "Online Magazines with Apache Cocoon." By Steve Punte. From XML.com (April 23, 2003). ['Apache Cocoon makes publishing magazines easy. Steven Punte brings together HTML and RSS documents to show how Cocoon's XML-directed architecture lends itself to elegant publishing solutions.'] "In order to demonstrate what I call XML-directed solutions using Apache Cocoon, in this article I will discuss how to use Cocoon to create an online magazine. XML-directed solutions are those where XML, rather than a programming language, is used to control the application. If you are considering entering into the world of online publications or are thinking about upgrading your existing technology, consider how elegantly Apache Cocoon provides a publishing framework... In this article I examine a very simple and elegant two-layer solution for online publishing; it presents articles stored in a local repository or directly utilizes feeds from other online magazines and news services. It turns out that the management of articles and of news stories is very similar, and much of this type of content is converging on the use of RSS. Thus, an appropriate architectural tactic is to divide the problem into two parts: the article repository layer and the presentation layer... A key architectural feature of this solution is that no application-specific Java or other procedural software is required. All necessary functionality and operation is achieved using existing off-the-shelf Apache Cocoon components, supplying them with appropriate XML configuration information. Such solutions are "XML directed architectures" and are expected to play an increasingly dominant role thanks to the software component interoperability that XML provides... While still in its infancy, component solutions directed by XML configurations are becoming viable and production-worthy ways of building web applications. 
Apache Cocoon excels in the territory of content presentation solutions and is making progress at addressing more interactive behavior situations with Apache Struts-like additions. The entire application presented in this article is contained in one Cocoon sitemap file and a handful of XSLT templates. Both these files define behavior and can be seen as an application layer on top of a generic, technology-agnostic XML framework. In my next article for XML.com, I will present a generalization of such a framework, which I call X2EE..." See also the X2EE White Paper.
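As a toy analogue of this 'XML-directed' style (the component names below are invented and are not Cocoon's actual sitemap vocabulary), the sketch assembles a pipeline from generic components named in an XML configuration, with no application-specific control code:

```python
# Toy analogue of an XML-directed pipeline: the "sitemap" declares
# which generic transform steps run; the only code is reusable
# components, not application control flow. Names are invented.
import xml.etree.ElementTree as ET

def uppercase_titles(doc):
    for el in doc.iter("title"):
        el.text = (el.text or "").upper()
    return doc

COMPONENTS = {"uppercase-titles": uppercase_titles}

SITEMAP = '<pipeline><transform type="uppercase-titles"/></pipeline>'

def run_pipeline(sitemap_xml, source_xml):
    doc = ET.fromstring(source_xml)
    for step in ET.fromstring(sitemap_xml):
        if step.tag == "transform":
            doc = COMPONENTS[step.get("type")](doc)
    return ET.tostring(doc, encoding="unicode")

print(run_pipeline(SITEMAP, "<articles><title>Online Magazines</title></articles>"))
# <articles><title>ONLINE MAGAZINES</title></articles>
```

Changing the publication's behavior means editing the XML configuration, not recompiling code, which is the property the article highlights in Cocoon's sitemap-plus-XSLT approach.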
[April 23, 2003] "Northrop's Latest Patent: Legitimate or Just 'A Silly Claim'? The Worldwide i-Technology Community Responds." By Jeremy Geelan. In XML Journal (April 23, 2003). "In an odd twist, even though the WWW was conceived by Tim Berners-Lee in 1989 and the first public release of a WWW client and server was in 1991, such 'prior art' (as it is known in patent circles) seems not to have discouraged New Jersey-based Charlie Northrup from claiming patent number 6,546,413 ['Access Method Independent Exchange Using a Communication Primitive']. The result -- for the US Office of Patents and Trademarks granted it after many years of deliberation -- according to many commentators including Maureen O'Gara of LinuxGram, is that Northrup now holds the patent on nothing less universal than what we now call 'Web services'... The claim that Northrup's ideas on Web services are the earliest is 'rot,' according to Uche Ogbuji of Fourthought, Inc. Ogbuji calls it 'a silly claim'... Tim Bray, co-inventor of XML, is equally dismissive. 'Charlie contacted me a couple of times [and] I told him that in my view that patent should not have been granted, and would not stand up in court, due to having failed the 'obviousness' test. Clearly he disagrees.' Bray adds a very strong cautionary note. 'If [Northrup] managed to enforce his claim the consequences would be disastrous; it would become impossible to have Open Source implementations of key pieces of the infrastructure. This would be harmful, perhaps fatal, to the grand plans of those who want to deploy Web services everywhere.' Veteran markup specialist Len Bullard isn't any more sanguine than Bray. 'I can't understand such a patent,' he says. 'There is too much prior art and research in this field. Web services are coming about as a result of standardization, not innovation.' Bullard points out an odd precedent, on the other hand. 
'The patent on stylesheets held by Microsoft,' he notes, 'has never been challenged although the prior art again, is well documented. And the Sun patent on hyperlinking was the same kind of patent: bogus but unchallenged.' 'Unfortunately,' Bullard continues, 'our patent system has come down to a contest of money, lawyers, and stamina because the patent office does not have the expertise or resources to verify obscure claims'..." [Patent application #211266 was filed December 14, 1998, and patent #6,546,413 was granted April 8, 2003. Abstract: "The present invention provides a virtual network, sitting 'above' the physical connectivity and thereby providing the administrative controls necessary to link various communication devices via an Access-Method-Independent Exchange. In this sense, the Access-Method-Independent Exchange can be viewed as providing the logical connectivity required. In accordance with the present invention, connectivity is provided by a series of communication primitives designed to work with each of the specific communication devices in use. As new communication devices are developed, primitives can be added to the Access-Method-Independent Exchange to support these new devices without changing the application source code. A Thread Communication Service is provided, along with a Binding Service to link Communication Points. A Thread Directory Service is available, as well as a Broker Service and a Thread Communication Switching Service. Intraprocess, as well as Interprocess, services are available. Dynamic Configuration Management and a Configurable Application Program Service provide software which can be commoditized, as well as upgraded while in operation."] See: "Patents and Open Standards."
[April 23, 2003] "New Technologies Face Legal Headaches." By Lisa M Bowman. In ZDNet News (April 22, 2003). "Companies face a host of legal land mines that they need to consider when developing emerging technology, attorneys at the O'Reilly Emerging Technology Conference told developers Tuesday. The attorneys said companies are increasingly wielding patent and copyright laws such as the Digital Millennium Copyright Act (DMCA) to thwart competitors and maintain their market share. As a result, the attorneys said, we're heading toward a world where companies increasingly need to consider the legal ramifications of their products... Already, the DMCA has seeped into the market for printer cartridges and garage door openers. Printer maker Lexmark International and automatic garage door maker Skylink successfully have wielded the DMCA to ward off competitors who wanted to develop toners and garage door openers that would interoperate with their products. In both cases, the companies claimed that the competitors broke their protection measures... Von Lohmann said companies that provide automatic updates may come under fire because intellectual property owners could argue that such companies could send out a kill patch if they suspect or learn about a copyright infringement. He said companies such as Microsoft and other software makers that automatically update are starting to worry about such scenarios. 'These issues come up in back rooms all the time,' he said. Von Lohmann also predicted that more and more companies will build authentication measures into their products so that they can wield the DMCA to quash competitors. The law makes the act of breaking authentication measures illegal. 'They will invoke the DMCA to prevent anyone from interoperating with their systems,' he said. 
For example, he said, companies in the instant messaging market, long a battlefield in the interoperability wars, could build some authentication code into their products and then sue anyone who cracks the digital handshakes. The move would discourage competitors from developing products that interoperate because they would have to break the authentication code to do so, an act that could violate the DMCA... Emerging companies also need to be wary of patent claims, according to Fenwick & West lawyer Rajiv Patel. Patel said the biggest trend for companies in the down economy is increasingly leveraging their patents to make money through licensing fees. 'It's an opportunity to generate revenue,' Patel said. He said many companies also are using patents to quash competitors, hoping they'll be discouraged by the costs of fighting patent claims. He said companies can burn through as much as US$50,000 to US$500,000 per month defending themselves against such claims. He urged small companies to develop a patents strategy and portfolio that will work as a strong defence against future claims. Right now, he said, there's a patent landgrab in the wireless communications, security, nanotechnology and biotechnology markets..." See: "Patents and Open Standards."
[April 22, 2003] "Inside Cisco's Eavesdropping Apparatus." By Declan McCullagh. In CNet News.com (April 21, 2003). "Cisco Systems has created a more efficient and targeted way for police and intelligence agencies to eavesdrop on people whose Internet service provider uses their company's routers. The company recently published a proposal that describes how it plans to embed 'lawful interception' capability into its products. Among the highlights: Eavesdropping 'must be undetectable,' and multiple police agencies conducting simultaneous wiretaps must not learn of one another. If an Internet provider uses encryption to preserve its customers' privacy and has access to the encryption keys, it must turn over the intercepted communications to police in a descrambled form. Cisco's decision to begin offering 'lawful interception' capability as an option to its customers could turn out to be either good or bad news for privacy... Marc Rotenberg, head of the Electronic Privacy Information Center, says: 'I don't see why the technical community should hardwire surveillance standards and not also hardwire accountability standards like audit logs and public reporting. The laws that permit 'lawful interception' typically incorporate both components -- the (interception) authority and the means of oversight -- but the (Cisco) implementation seems to have only the surveillance component. That is no guarantee that the authority will be used in a 'lawful' manner'... if you don't like Cisco's decision, remember that they're not the ones doing the snooping. Cisco is responding to its customers' requests, and if they don't, other hardware vendors will. Cisco's Internet draft may be titled 'lawful interception,' but there's no guarantee that the capability will always be used legally. 
If you're looking for someone to blame, consider Attorney General John Ashcroft, who asked for and received sweeping surveillance powers in the USA Patriot Act, along with your elected representatives in Congress, who gave those powers to him with virtually no debate..." See also: OASIS LegalXML Lawful Intercept Technical Committee.
[April 22, 2003] "End-to-End Object Encryption in XMPP." By Peter Saint-Andre (Jabber Software Foundation) and Joe Hildebrand (Jabber, Inc). IETF Network Working Group, Internet-Draft. Reference: 'draft-ietf-xmpp-e2e-02'. April 21, 2003, expires October 20, 2003. 18 pages. "This document describes an end-to-end signing and encryption method for use in the Extensible Messaging and Presence Protocol (XMPP) as defined by XMPP Core and XMPP Instant Messaging. Object signing and encryption enable a sender to encrypt an XML stanza sent to a specific recipient, sign such a stanza, sign broadcasted presence, and signal support for the method defined herein. This document thereby helps the XMPP specifications meet the requirements defined in RFC 2779 ['Instant Messaging / Presence Protocol Requirements']... For the purposes of this document, we stipulate the following requirements: (1) Encryption must work with any stanza type (message, presence, or IQ). (2) The full XML stanza must be encrypted. (3) Encryption must be possible using either OpenPGP or S/MIME. (4) It must be possible to sign encrypted content. (5) It must be possible to sign broadcasted presence. (6) Any namespaces used must conform to The IETF XML Registry..." See: "Extensible Messaging and Presence Protocol (XMPP)." [cache]
[April 22, 2003] "Securing Digital Content." By Dennis Fisher. In eWEEK (April 22, 2003). "As Microsoft Corp. prepares to release the beta version of its anticipated and controversial Rights Management Services, a small security company has been quietly working on technology that could trump Microsoft's and make it easier for companies to control digital content... Cryptography Research Inc. has developed a technology that associates security measures with each piece of content instead of using a generic protection scheme for all copies. The security measures are contained in code that runs on a virtual machine inside a playback device. As the content is decrypted during playback, the virtual machine uses APIs in the playback device to determine whether or how the playback should proceed. The architecture includes a digital watermarking function that would let content owners identify every legal copy of a given piece of content. If a legal copy is duplicated, the illegal version could be traced. Under this system, each playback device would have a unique set of keys for decrypting content. The concept, which the company calls self-protecting digital content, grew from research to uncover a DRM (digital rights management) solution amenable to everyone in the debate over mandating copy protection. 'Both sides are missing the point. Mandating copy protection isn't realistic,' said Paul Kocher, president of Cryptography Research, based here. 'But [content owners] have a real problem. Piracy is illegal, and my job is to solve the security problem.' Kocher said the company has had discussions with Hollywood studios about licensing the technology..." See: "XML and Digital Rights Management (DRM)."
[April 22, 2003] "BINPIDF: External Object Extension to Presence Information Data Format." By Mikko Lonnfors, Eva Leppanen, and Hisham Khartabil (Nokia). IETF SIMPLE WG, Internet-Draft. Reference: 'draft-lonnfors-simple-binpidf-00'. April 7, 2003, expires October 6, 2003. "This memo specifies a methodology whereby external content to a presence information document can be referenced in an XML encoded presence information document (PIDF). The external content can be either transferred directly in the payload of SIP messages or indirectly as an HTTP reference. The external part might contain binary data such as images. The Presence Information Data Format (PIDF) is described in Common Presence and Instant Messaging (CPIM) Presence Information Data Format. It defines a generic XML encoded format to express a presentity's presence information. However, it does not specify any mechanism for how external objects, e.g., pictures, belonging to presence information can be represented in such XML documents. The Content Indirection document provides an extension to the URL MIME External-Body Access-Type to allow any MIME part in a SIP message to be referred indirectly via a URL. In addition there is a need to specify an extension to PIDF in order to use the Content Indirection mechanism for the Presence in a way that the XML encoded presence information is carried directly in a SIP message while external objects are referenced indirectly. There is also a need to deliver the external objects in the payload of a SIP message. The MIME Multipart/Related content type provides a good basis for placing a reference to external contents as multiparts. An extension to PIDF is needed for referencing the multiparts from the PIDF XML formatted presence information. A similar kind of approach of utilizing the MIME Multipart/Related with HTML can be found in 'MIME Encapsulation of Aggregate Documents' (RFC 2557)..." See related references in "Common Profile for Instant Messaging (CPIM)." [cache]
[April 22, 2003] "XML and Java Technologies. Data Binding Part 4: JiBX Usage." By Dennis M. Sosnoski (President, Sosnoski Software Solutions, Inc). From IBM developerWorks, XML zone. April 18, 2003. ['Part 3 described the JiBX internal structure; now find out how you actually use JiBX for flexible binding of Java objects to XML. JiBX lead developer Dennis Sosnoski shows you how to work with his new framework for XML data binding in Java applications. With the binding definitions used by JiBX, you control virtually all aspects of marshalling and unmarshalling, including handling structural differences between your XML documents and the corresponding Java language objects. Want to refactor your code without changing the XML representation? With JiBX you can.'] "[Here] in Part 4, you'll find out how to use the power of this Java-centric approach to data binding in your applications. Most other data binding frameworks for the Java language force you to supply a DTD or W3C XML Schema grammar for your documents, then generate a collection of classes from that grammar. You need to work with these generated classes to use the frameworks, but in most cases you have little or no control over the classes -- they're basically JavaBean-type wrappers around simple data structures along with some added framework code. The whole point of these generated classes is to provide an interface for working with data from your XML documents. The JavaBean wrapper approach is sometimes presented as object-oriented because of the use of get/set methods to access data. In reality, it's about as far from true object-oriented development as you can get, because of the lack of extensibility in the data objects. True object-oriented programming means that objects hide their internal state and implement their own behaviors for working with that state information. This is typically not possible with the generated code approaches. 
With JiBX, binding to XML is treated as an aspect that applies to your classes, not as the primary purpose of those classes. Thus, you use object-oriented designs that are appropriate to your application. It also gives you the freedom to refactor your classes without needing to change the bound XML document structure. This aspect-oriented approach even lets you define multiple bindings to be used with the same classes, so that you can work with multiple input or output XML document structures without having to change your code..."
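JiBX itself works by enhancing Java bytecode, but the core idea it embodies (a binding definition that maps XML onto an existing domain class, leaving the class free to carry real behavior and be refactored independently of the document shape) can be sketched in Python; the class, document, and binding format below are invented for illustration:

```python
# Not JiBX's API: a Python sketch of binding XML to an *existing*
# domain class via a separate binding definition, rather than
# generating bean-style wrapper classes from a schema.
import xml.etree.ElementTree as ET

class Customer:                       # pre-existing domain class
    def __init__(self, name, balance):
        self.name = name
        self.balance = balance
    def credit(self, amount):         # real behavior, not a bean wrapper
        self.balance += amount

# The "binding definition": XML path -> (constructor arg, converter).
CUSTOMER_BINDING = {"name": ("name", str),
                    "account/balance": ("balance", float)}

def unmarshal(xml_text, cls, binding):
    root = ET.fromstring(xml_text)
    kwargs = {attr: conv(root.findtext(path))
              for path, (attr, conv) in binding.items()}
    return cls(**kwargs)

doc = ("<customer><name>Ada</name>"
       "<account><balance>10.5</balance></account></customer>")
c = unmarshal(doc, Customer, CUSTOMER_BINDING)
c.credit(4.5)
print(c.name, c.balance)  # Ada 15.0
```

Because the binding lives outside the class, renaming fields or restructuring `Customer` only requires touching the binding table, not the XML documents, which is the decoupling the article attributes to JiBX's approach.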
[April 22, 2003] "OASIS WS Business Process Execution Language TC Formed." By Lawrence Wilkes. In CBDI Journal (April 22, 2003). "The decision to form WSBPEL reflects the approach that IBM and Microsoft have taken so far in moving the various specifications they have proposed along with others into a standards process. Namely, one of refining and completing their already well-defined specification, rather than making a contribution (along with alternative proposals) to a broader process which might ultimately yield something quite different to their original specification, or at least be harder work in terms of reaching consensus. Though some of their competitors might claim that IBM and Microsoft are not being 'open' enough by adopting this stance, it is in reality a fairly common approach to standards setting, i.e., take a well-defined specification with the backing of industry leaders and endorse that, as opposed to the typically slower process of design by committee. It also reflects the different way that W3C and OASIS work. Whereas W3C seem to take a more 'architected' approach, trying to ensure that each specification has its proper place, OASIS appears to have a looser philosophy -- if enough members want a committee, then they shall have one, regardless of whether there is overlap with other initiatives. It is up to the membership to sort such issues out, not the OASIS board. As such, it is easier for IBM and Microsoft to take their existing specifications (previous example is WS-Security) forward. It will therefore now be interesting to see how IBM and Microsoft respond to the recent formation by IONA, Oracle, Sun, and others of the OASIS WS Reliable Messaging TC. With a well-defined alternative of their own, will IBM and Microsoft form an alternative TC at OASIS? Or go to W3C instead? 
The decision by existing W3C WS-Choreography Working Group members (notably SAP, Novell, Tibco) to now join the OASIS initiative will leave the W3C initiative in a difficult position. Does it focus on developing WSCI, even though the broader industry backing is shifting to BPEL, which does everything WSCI does and more? [...] Organizations, certainly the larger ones, may find themselves in a situation where in the near term at least they will need to support more than one business processing language/interface to reflect the needs of different partners, as well as continuing to support existing EDI and B2B interfaces. A good SOA strategy will be essential in such situations to abstract services away from the multiple interface technologies in use... Perhaps there is room for two (or more) competing 'standards', and let the market eventually decide which one wins. This does run the risk that customers will slow their adoption as they await the eventual winner..." See details in "OASIS Forms Web Services Business Process Execution Language TC (WSBPEL)." General references in "Business Process Management and Choreography."
[April 22, 2003] "Review of Encoded Archival Description on the Internet." By Helen R. Tibbo (School of Information and Library Science, University of North Carolina at Chapel Hill). In D-Lib Magazine Volume 9, Number 4 (April 2003). ISSN: 1082-9873. Review of the book edited by Daniel V. Pitti and Wendy M. Duff. "In the introduction, Pitti and Duff note that they hope this collection of eleven papers 'will provide an effective introduction to archival description and EAD, a useful overview of its use in various contexts, and an insight into its potential to revolutionize archival practices and services and to democratize and extend access to archival resources.' The text, initially published as volume 4, numbers 3/4 2001 of the Journal of Internet Cataloging, succeeds admirably in these goals. Each article is informative and the compilation provides an excellent introduction to Encoded Archival Description and contextualization for its use. The articles discuss the fundamentals of archival arrangement and description and illustrate how EAD facilitates descriptive practice and extends reference and access in an electronic networked environment. EAD, a data structure standard and encoding scheme initially built as an SGML (Standard Generalized Markup Language) DTD (Document Type Definition) for the encoding of electronic archival finding aids, is now an XML (Extensible Markup Language) compliant scheme. EAD maintains the multilevel and hierarchical nature of finding aids and facilitates structured presentation and searching of these tools in networked environments such as the World Wide Web... as new metadata models, initiatives, and standards appear at a dizzying pace, the archival profession and, more broadly, the digital library arena need a thorough analysis of the relationship among these tools. 
Increasingly, archival institutions and other cultural heritage repositories are concerned that their collection metadata be compliant with the Open Archives Initiative (OAI) so that it can be harvested for inclusion in large union repositories built around the Dublin Core standard. Yet, OAI captures but a tiny fraction of the information resident in most finding aids and is even less rich than MARC records, which archivists have long lamented for their inability to represent hierarchical archival metadata. Studies of the relationship between the data represented in EAD, MARC, and the OAI standard are needed and must be analyzed in relationship to how archival users search for information. When will it be most efficacious for researchers to search OAI records in large national or international databases? When will it be best for them to search the Web for the finding aids themselves? Will MARC records in repository online catalogs continue to play a vital role between the OAI and EAD levels of complexity? How will repositories invest in all these metadata schemes so as to best preserve and make accessible their materials?" General references in "Encoded Archival Description (EAD)."
[April 22, 2003] "RSS Needs Fixing." By Tim Bray. In ongoing (April 22, 2003). "There are two big problems with RSS that aren't going away and are just going to have to be fixed to avoid a train-wreck, given the way the RSS thing is taking off. They are first, what can go in a "description" element, and second, the issue of relative URIs... This essay tries to illustrate the problems it talks about. In its RSS description, it tries to mention the <description> tag, with the angle brackets visible, and it contains a relative reference to another ongoing article; one or both of these may have failed in your aggregator... RSS is no longer a science experiment, it's becoming an important part of the infrastructure, which means that a lot of programmers are going to get the assignment of generating and parsing it, and they need better instructions." General references in "RDF Site Summary (RSS)."
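The ambiguity Bray describes can be demonstrated directly: whether angle brackets in a description are entity-escaped changes what a parser hands back. A small standard-library sketch, using a hypothetical feed fragment:

```python
import xml.etree.ElementTree as ET

# Escaped form: the angle brackets are character data, so the parser
# hands back the literal string "<description>".
escaped = "<item><description>Use the &lt;description&gt; tag</description></item>"
item = ET.fromstring(escaped)
assert item.find("description").text == "Use the <description> tag"

# Unescaped form: the same markup is parsed as a child element instead,
# and the text content is silently different -- two aggregators can
# disagree on what the author meant.
unescaped = "<item><description>Use the <description/> tag</description></item>"
item2 = ET.fromstring(unescaped)
outer = item2.find("description")
assert outer.find("description") is not None  # the markup became structure
```

Both fragments are well-formed XML, which is exactly why the spec ambiguity matters: nothing forces generators and aggregators to agree on which interpretation is intended.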
[April 21, 2003] "Thinking XML: Introducing N-Triples. A Simpler Serialization for RDF." By Uche Ogbuji (Principal Consultant, Fourthought, Inc). From IBM developerWorks, XML zone. April 8, 2003. ['RDF/XML isn't the only representation of an RDF model. The W3C developed N-Triples, a format for an RDF representation that is especially suited for test suites. Here, Uche Ogbuji introduces N-Triples using examples converted from RDF/XML.'] "In an earlier article, I used the heading "Repeat after me: There is no syntax". RDF's traditional XML syntax is often maligned, but luckily it is not what makes RDF tick, and the emergence of alternative serializations has always been inevitable. One problem with XML as a serialization syntax is that it is so flexible that it can be difficult to compare desired versus actual results in the process of automated testing. Whether in regression testing or conformance testing, it is often useful to try to normalize XML to some form so that simple text comparisons give meaningful results. The XML community developed XML canonical form for such purposes, and the W3C RDF working group required the same sort of form for RDF while it was developing RDF conformance test suites. One option is to define a canonical form of RDF/XML that matches any graphs, and then canonicalize the resulting XML according to the relevant W3C recommendation. Instead, I think the RDF working group chose the right course in developing a simple and strictly-defined textual format for RDF graphs. This format is named N-Triples, and is incorporated into the RDF Test Cases working draft. In this article I introduce N-Triples, using examples converted from RDF/XML... I do not cover a few nuances; for example, a very strict set of characters is allowed in the syntax, and you must be careful to escape any characters outside these ranges. 
Some characters (in URI references) must be escaped using URI conventions, and others use an N-Triples convention with a leading backslash. If you are writing code to read or write N-Triples, be sure to see the specification for these details. N3, which is pretty popular and is the source of some of the ideas in N-Triples, is one of several efforts aimed at a simple triples-based representation for RDF. But N-Triples has the advantage of being written into a formal specification, and because of its use in the standard RDF test cases, it will probably be implemented by all RDF processors..." See "Resource Description Framework (RDF)."
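As a rough illustration of the format's one-triple-per-line shape (not a conforming serializer -- the URI-versus-literal test and the escape table below are simplifications of what the Test Cases draft actually specifies):

```python
def escape_literal(s):
    """Apply N-Triples backslash escapes for the most common cases;
    the spec defines the full allowed character ranges."""
    return (s.replace("\\", "\\\\").replace('"', '\\"')
             .replace("\n", "\\n").replace("\t", "\\t"))

def to_ntriples(triples):
    """Serialize (subject, predicate, object) tuples, one triple per line,
    terminated by ' .'. A plain string object becomes a literal; anything
    starting with 'http' is treated as a URI reference (a crude heuristic
    for this sketch only)."""
    lines = []
    for s, p, o in triples:
        obj = "<%s>" % o if o.startswith("http") else '"%s"' % escape_literal(o)
        lines.append("<%s> <%s> %s ." % (s, p, obj))
    return "\n".join(lines)

graph = [
    ("http://example.org/doc", "http://purl.org/dc/elements/1.1/title",
     "N-Triples intro"),
    ("http://example.org/doc", "http://purl.org/dc/elements/1.1/creator",
     "http://example.org/uche"),
]
print(to_ntriples(graph))
```

Because each line is self-contained and strictly formatted, two serializations of the same graph can be compared with a line sort and a text diff, which is precisely the property test suites need.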
[April 18, 2003] "RSS 2.0." By Mark Nottingham [WWW]. IETF Network Working Group, Internet-Draft. Reference: 'draft-nottingham-rss2-00'. April 9, 2003, expires October 8, 2003. "This specification documents version 2.0 of RSS (Really Simple Syndication), an XML-based format for syndicating Web content and metadata. This specification provides stable documentation for the RSS 2.0 format [RSS 2.0] as described by Dave Winer of UserLand Software, to assist in implementation. As such, RSS documents conformant with this specification should be conformant with that specification, and vice versa. Note: This Internet-Draft is being made available only to allow the community to ascertain whether the specification described herein is true to the original RSS 2.0 specification. Comments should be directed to the RSS2 Support mailing list. RSS documents MUST be conformant to the XML 1.0 specification. There is explicitly no namespace URI associated with the elements described in this document... The root (top-level) element of an RSS document MUST be the rss element. It has one mandatory attribute, version, which indicates the version of RSS which the document conforms to. Documents conformant to this specification MUST have a version attribute of '2.0'. The rss element MUST contain a channel element... The channel element contains metadata about the RSS channel, as well as the items which comprise it..." See: (1) the announcement 2003-04-17; (2) general references in "RDF Site Summary (RSS)." See the note "Formal Specification of RSS 2.0," by Mark Nottingham. [cache]
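The structural MUSTs quoted above are easy to check mechanically. A minimal sketch using only the Python standard library, with an invented sample feed:

```python
import xml.etree.ElementTree as ET

def check_rss2(doc):
    """Check the structural MUSTs from the draft: root element 'rss'
    (no namespace), version attribute '2.0', and a 'channel' child."""
    root = ET.fromstring(doc)
    assert root.tag == "rss", "root element MUST be rss"
    assert root.get("version") == "2.0", "version attribute MUST be '2.0'"
    assert root.find("channel") is not None, "rss MUST contain a channel"
    return True

feed = """<rss version="2.0">
  <channel>
    <title>Example channel</title>
    <link>http://example.org/</link>
    <description>Channel metadata, per the draft</description>
    <item><title>First post</title></item>
  </channel>
</rss>"""
print(check_rss2(feed))
```

Note the deliberate absence of a namespace lookup: as the draft says, there is explicitly no namespace URI for these elements, so the bare tag names are the right thing to match.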
[April 18, 2003] "IBM, MS Steamrolling W3C and Web Services?" By David Berlind. In ZDNet Tech Update (April 18, 2003). ['Hiding behind a puppet standards group called OASIS, these two 800-pound gorillas appear well on their way to usurping the promise of Web services and flattening the World Wide Web Consortium into irrelevancy. It's time for Tim Berners-Lee's group to take off the gloves.'] "Those who are following the development of Web services standards have been holding their breath, waiting to see how the controversy over choreography and workflow specifications will resolve itself. Most people who are intimately familiar with the issue have been telling me for some time now that a peaceful resolution is likely. But, now that Business Process Execution Language for Web services (BPEL4WS) co-authors BEA, IBM, and Microsoft have announced the submission of the specification to the Organization for the Advancement of Structured Information Standards (OASIS), the controversy is about to evolve into an industry-splitting fracas. On the surface, it looks like just another skirmish among vendors over a standard. But under the covers, it's nothing of the sort. Whereas most such skirmishes take place behind the closed doors of a single consortium, this one pits OASIS, which is emboldened by the combined muscle of IBM and Microsoft, against the de facto leader of Web standards setting -- the World Wide Web Consortium (W3C)... For those of you not up to speed on this controversy, the promise of Web services -- that of a series of specifications unanimously adopted by all vendors -- is on the verge of becoming a train wreck. The lack of consensus over how to handle Web services workflow, choreography, and transactions has devolved into a bifurcation of the industry into two camps -- one that's behind the Web services choreography interface (WSCI) and another that's supporting BPEL4WS.
For a fleeting moment, it looked like a potential resolution was in the works when Microsoft surprised everyone by joining the W3C's working group for Web services choreography. That membership lasted little more than a day. Before the news of Microsoft joining the group had even been reported, Microsoft withdrew its membership, within hours of attending its first meeting... The non-royalty-free Web Services Interoperability Organization (WS-I), whose self-proclaimed but easily questioned mission is to guarantee interoperability through the provision of recommendations and test suites, is another IBM-Microsoft joint venture that's contributing to the theft of the W3C's thunder. It's a disturbing and unfortunate trend. The W3C's insistence on a royalty-free intellectual property policy was a gut-wrenching move for the consortium that demonstrated the thought leadership of W3C director Tim Berners-Lee. If the bleeding is to be stopped, the W3C can no longer confine itself to taking the moral high ground..." Earlier article from Berlind's ZDNet Tech Update series: "Web Services in Serious Jeopardy." See the news story "OASIS Forms Web Services Business Process Execution Language TC (WSBPEL)."
[April 18, 2003] "XMPP Rises to Face SIMPLE Standard, Vendor Coalition Challenges Standard Used by Microsoft and IBM." By Cathleen Moore. In InfoWorld (April 18, 2003). "With the lure of presence-aware applications and systems dangling before them, competitors are warming up for a heated race to establish an industry standard protocol for presence awareness and instant messaging interoperability. Lines are drawn between two protocols currently working their way through the Internet Engineering Task Force (IETF) standards body: the SIP (Session Initiation Protocol)-based SIMPLE (SIP for Instant Messaging and Presence Leveraging Extensions) and the open-source, XML-based protocol XMPP (eXtensible Messaging and Presence Protocol). Vendors are placing bets, hoping to choose the correct side of the market's eventual shakeout. Whereas Microsoft and IBM have thrown their weight behind SIMPLE, a groundswell of support is rising behind XMPP, as Hewlett-Packard, Intel, Hitachi, Sony, and others invest in the technology. Intel's Wireless Communications and Computing Group chose XMPP-based IM vendor Jabber last month. HP plans to deepen its XMPP support with a forthcoming distribution and systems integration deal with Denver-based Jabber. Often referred to as the Linux of IM, XMPP was developed in the open-source community during the late 1990s and then submitted to the IETF for standards consideration. This month XMPP reached working group final-call status within the IETF. The protocol is within months of reaching final ratification as an IM and presence awareness standard, according to Peter Saint-Andre, executive director of the Jabber Software Foundation, the Jabber-sponsored, open-source organization fostering XMPP's development... Whereas enterprise-focused vendors are at least attempting to build toward standards, the consumer networks such as AOL and Yahoo have made no strides toward SIMPLE or XMPP.
Those companies have millions of dollars invested in their own protocols and networks and do not have much economic motivation to interoperate at this time. The need for IM and presence-awareness interoperability will eventually reach a critical juncture as enterprises seek to selectively expose elements of a private presence infrastructure to outside parties such as business partners and suppliers, according to Batchelder. 'Companies will build private presence infrastructures and they will want to tie them to the public IM network and clouds,' he said. SIMPLE may eventually glue IBM and Microsoft together, but it won't work industrywide and will be of limited value. In the end, the industry will most likely rally behind a standard that blends both protocols, Batchelder said. 'What the industry ultimately converges around will likely be a hybrid of XMPP, proprietary ideas, and some generic work in IETF around SIP and SIMPLE,' he said..." General references in "Extensible Messaging and Presence Protocol (XMPP)."
[April 18, 2003] "RSA to Integrate Identification Products." By Peter Judge. In CNET News.com (April 17, 2003). "RSA Security is planning to integrate its products into an identity management system that will be compliant with Liberty Alliance software -- the main rival to Microsoft's Passport. Announcing the strategy at the RSA Conference 2003 here, the company said current products will be brought piece by piece into its system called Nexus. 'It's an incremental change,' said John Worral, product marketing manager at RSA. 'We have interoperability. The next level is integration.' The products will be tightly integrated, so they can be managed in one place. The upgrade will place Liberty-compliant technology at its core. RSA, which is a founding member of Liberty, has been saying that it would release Liberty-compliant software. The Sun Microsystems-backed Liberty Alliance Project is working on a 'single sign-on' standard that lets people who verify their identity on one Web site carry over that authenticated status when moving to other Web sites. Products fitting into Nexus will include ClearTrust access management, RSA mobile one-time access codes, SecurID two-factor authentication and Keon digital certificates. The common services will include user management, identity authority services, access authority and integration services. Other features such as provisioning will be provided by third parties..." See also the RSA Developer Central website 'geared toward developers interested in security issues and trends. Through the RSA Developer Central site, security developers can access white papers, articles and a discussion forum that allows collaboration on security issues. The site was created to serve as a single source for all of the information security software developers' needs, and also to foster the development of a strong, collaborative security development community.
Unique from other commercial security developer sites, the RSA Developer Central site includes a chat function, helping developers benefit from direct conversations and collaborative opportunities, as well as gaining rapid access to RSA Security experts and other industry authorities... The site will contain a wide variety of resources, including: (a) Technical articles on topics such as XML security, wireless security and cryptography basics; (b) Exclusive material geared to the RSA Security developer community, such as selected chapters from RSA Press books and complete documentation for RSA Security developer solutions; (c) Direct links to documentation, sample code and specifications of RSA Security Developer Solutions Group products; (d) News and developments from RSA Laboratories, the research center for RSA Security." On XML security, see a list of security specifications.
[April 18, 2003] "Web Services Encoding and More." By Aaron Skonnard. In MSDN Magazine (May 2003). The XML Files. "There are two SOAP message styles, called document and rpc. Document style indicates that the SOAP body simply contains an XML document. The sender and receiver must agree on the format of the document ahead of time, as in traditional messaging systems like Microsoft Message Queue (MSMQ), MQSeries, and so on. The agreement between the sender and receiver is typically negotiated through literal XML Schema definitions. Hence, this combination is referred to as document/literal. RPC (Remote Procedure Call) style, on the other hand, indicates that the SOAP body contains an XML representation of a method call, much like traditional distributed component technologies such as DCOM, CORBA, and others. RPC style uses the names of the method and its parameters to generate structures that represent a method's call stack (see section 7 of the SOAP 1.1 specification). These structures can then be serialized into the SOAP message according to a set of encoding rules. The SOAP specification defines a standard set of encoding rules for this purpose (see section 5 of the SOAP 1.1 spec) that codify how to map the most common programmatic data structures, such as structs and arrays, into an XML 1.0 format. Since RPC is traditionally used in conjunction with the SOAP encoding rules, the combination is referred to as rpc/encoded. The document/literal approach is more straightforward and easier for toolkits to get right because it simply relies on XML Schema to describe exactly what the message looks like on the wire. SOAP, however, was created before XML Schema existed. And back then they were focused primarily on objects (hence the O in SOAP), which led to the rpc/encoded way of doing things.
Since universal description languages such as XML Schema or Web Services Description Language (WSDL) weren't available back then, the rpc/encoded style had to assume that additional metadata would be available for describing the method call (such as a type library, Microsoft .NET Framework assembly, or Java class file). Such metadata made it possible for toolkits to automatically map between existing components and SOAP messages without explicitly defining what the messages looked like ahead of time. Looking back, rpc/encoded is mostly seen as a stop-gap measure that seemed reasonable in a world without XML Schema. Now that XML Schema is here to stay, document/literal is quickly becoming the de facto standard among developers and toolkits thanks to its simplicity and interoperability results. The WS-I Basic Profile actually disallows the use of SOAP encoding. This is a good indication that rpc/encoded will indeed become deprecated over time. For more information on this, see 'The Argument Against SOAP Encoding' on MSDN Online..." With source code
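The contrast between the two styles is easiest to see in the message bodies themselves. The fragments below are schematic: the SOAP envelope, most namespaces, and the encoding attributes are trimmed for readability, and the operation and types are invented:

```python
import xml.etree.ElementTree as ET

# document/literal: the body carries a schema-described document; the
# wire format is whatever the XML Schema for PurchaseOrder says it is.
doc_literal = """<Body>
  <PurchaseOrder xmlns="urn:example:po">
    <Item sku="42" qty="3"/>
  </PurchaseOrder>
</Body>"""

# rpc/encoded: the body mirrors a call stack -- an element named after
# the method, with one child struct member per parameter (the section 7
# shape; SOAP encoding attributes omitted here).
rpc_encoded = """<Body>
  <OrderItem>
    <sku>42</sku>
    <qty>3</qty>
  </OrderItem>
</Body>"""

doc_root = ET.fromstring(doc_literal)[0]
rpc_root = ET.fromstring(rpc_encoded)[0]
print(doc_root.tag)  # namespace-qualified document root
print(rpc_root.tag)  # the method name
```

The difference in what the root element *means* -- a document type versus a method name -- is the whole story: document/literal needs only a schema, while rpc/encoded needs out-of-band metadata about the method signature.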
[April 18, 2003] "Manipulate XML Data Easily with Integrated Readers and Writers in the .NET Framework." By Dino Esposito. In MSDN Magazine (May 2003). "In the .NET Framework, XmlTextReader and XmlTextWriter provide for XML-driven reading and writing operations. In this article, the author discusses the architecture of readers and how they relate to XMLDOM and SAX parsers. He also shows how to use readers to parse and validate XML documents, how to leverage writers to create well-formed documents, and how to optimize the processing of large XML documents using functions to read and write Base64 and BinHex-encoded text. He then reviews how to implement a stream-based read/write parser that combines the functions of a reader and a writer into a single class... Readers and writers are the foundation of XML data support in the .NET Framework. They represent a primitive API for all XML data access features. Readers work as a new and innovative type of parser that falls somewhere between the true power of XMLDOM and the fast and simple approach carried by SAX. Writers are a parallel universe made of tools designed to simplify the creation of XML documents. Although commonly referred to as pieces of the .NET Framework, readers and writers are actually completely separate APIs. In this article, I discuss how to accomplish key tasks using readers and writers and introduce the architecture of validating parsers. Unifying readers and writers in a single, all-encompassing class is still possible and results in a lightweight, cursor-like XMLDOM model... The .NET Framework provides full support for the XMLDOM parsing model, but not for SAX. There's a good reason for this. The .NET Framework supports two different models of parser: XMLDOM parsers and XML readers. The apparent lack of support for SAX parsers does not mean that you have to renounce the functionality that they offer.
All the functions of a SAX parser can be implemented easily and more effectively by using an XML reader. Unlike a SAX parser, a .NET Framework reader works under the total control of the client application. In this way, the application itself can then pull out only the data it really needs and skip over the remainder of the XML stream. With SAX, the parser passes all available information to the client application, which will then have to either use or discard the information... The XmlValidatingReader class is an implementation of the XmlReader class that provides support for several types of XML validation: DTD, XML-Data Reduced (XDR) schemas, and XSD. DTD and XSD are official recommendations issued by the W3C, whereas XDR is the Microsoft implementation of an early working draft of XML Schemas..." With source code.
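The pull model the article contrasts with SAX can be approximated in Python with iterparse, which likewise lets the application ask for nodes, keep only what it needs, and skip the rest of the stream (the sample data is invented):

```python
import io
import xml.etree.ElementTree as ET

xml_src = io.StringIO(
    "<orders>"
    "<order id='1'><total>10</total></order>"
    "<order id='2'><total>25</total></order>"
    "<audit>bulk data the application does not care about</audit>"
    "</orders>")

# Pull model: the application drives the loop and decides what to keep,
# instead of receiving every node as SAX-style push callbacks.
total = 0
for event, elem in ET.iterparse(xml_src, events=("end",)):
    if elem.tag == "total":
        total += int(elem.text)
    elem.clear()  # discard processed content, keeping memory flat
print(total)
```

This is the inversion of control the article describes: with SAX the parser hands everything over and the client filters; here the client pulls and the uninteresting `audit` content is simply dropped.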
[April 17, 2003] "Intellidimension Application Shows Power of RDF." By Jim Rapoza. In eWEEK (April 17, 2003). "Intellidimension Inc.'s RDF Gateway 1.0 is one of the first applications to address the use of the Resource Description Framework standard, which makes it possible to have metadata interoperate across Web sites and enable everything from powerful search agents to extensive cross-site agent applications. RDF is also the core technology behind the World Wide Web Consortium's semantic Web project, whose goal is to build a Web where the meaning of Web content is understandable to machines. However, adoption of RDF has been slow, especially when compared with XML, on which RDF is based. This is where RDF Gateway, released last month, comes in by providing a powerful server-based system for creating, deploying and managing RDF applications. In tests, eWEEK Labs used RDF Gateway to build a wide variety of Web applications that incorporated and understood RDF, providing a level of content interactivity that would be the envy of most enterprise portals and Web services. RDF Gateway runs as an all-in-one Web server, application server and database, with the database designed to handle RDF content. This works well and makes the product easy to deploy, but we would prefer a more modular approach that would make it possible, for example, to use the database in conjunction with another Web server and application server... The sample applications that Intellidimension provides on its site were a great help in developing for RDF Gateway. We especially liked the sample portal package, which made it possible to build a powerful, interactive information management portal with a slew of user customization options. This portal also showed how well RDF Gateway can make use of RSS (RDF Site Summary), probably one of the most common uses of RDF on the Web today. RSS is used by many Web sites to create summaries of their site content that can be easily used for syndication. 
With this application, it became easy for users to create customizable news sites by combining RSS feeds from a variety of Web and internal sources..." (1) W3C Resource Description Framework (RDF) website; (2) general references in "Resource Description Framework (RDF)."
[April 17, 2003] "Why Use OWL." By Adam Pease (Teknowledge). From xFront XML Collection (April 2003). "When you tell a person something, he can combine the new fact with an old one and tell you something new. When you tell a computer something in XML, it may be able to tell you something new in response, but only because of some other software it has that's not part of the XML spec. That software could be implemented differently in systems that still conform to the XML spec. You might get different answers from those systems. If you tell a computer something new in OWL, it can give you new information, based entirely on the OWL standard itself. A certain set of conclusions are required from any system that conforms to OWL. Systems may be able to provide all sorts of additional services and responses beyond the requirements of the standard but a certain basic set of conclusions will always be required. OWL gives computers one extra small degree of autonomy that can help them do more useful work for people. A set of OWL statements by itself (and the OWL spec) can allow you to conclude another OWL statement whereas a set of XML statements, by itself (and the XML spec) does not allow you to conclude any other XML statements. To employ XML to generate new data, you need knowledge embedded in some procedural code somewhere, rather than explicitly stated, as in OWL..." See also from xFront (Mitre): (1) "Using OWL to Avoid Syntactic Rigor Mortis"; (2) "A Quick Introduction to OWL Web Ontology Language." Other resources from W3C Web-Ontology Working Group (WebOnt) and "OWL Web Ontology Language."
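Pease's point -- that a set of OWL statements plus the OWL spec licenses new statements -- can be illustrated with one of OWL's required entailments, transitivity. The toy closure below is not an OWL reasoner; it just shows the shape of the idea using invented facts:

```python
def infer_transitive(triples, prop):
    """Fixpoint closure for a property declared transitive (as
    owl:TransitiveProperty does): (a p b) and (b p c) entail (a p c)."""
    facts = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, p1, b) in list(facts):
            for (b2, p2, c) in list(facts):
                if p1 == p2 == prop and b == b2 and (a, prop, c) not in facts:
                    facts.add((a, prop, c))
                    changed = True
    return facts

stated = {
    ("Boston", "locatedIn", "Massachusetts"),
    ("Massachusetts", "locatedIn", "USA"),
}
entailed = infer_transitive(stated, "locatedIn")
print(("Boston", "locatedIn", "USA") in entailed)
```

The conclusion ("Boston", "locatedIn", "USA") was never stated, yet any conformant consumer must draw it; an XML document carrying the same two facts entails nothing by the XML spec alone, which is exactly the distinction the article makes.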
[April 17, 2003] "Architect Struts Applications for Web Services. Bring the Power of the MVC Pattern to the Web Services Domain." By Jerome Josephraj (Developer, Ford Europe). From IBM developerWorks, Web services. April 15, 2003. ['When you're converting an enterprise app for use with Web services, the simplest way to do it is to associate a single operation with a single enterprise service. But that's not necessarily the best idea. In this article, Jerome Josephraj shows you how to build Web services applications based on the tried and true Model-View-Controller (MVC) design pattern. To that end, he's adapted Struts, a popular open-source MVC framework, for use in the Web services arena. By examining the sample application outlined here, you'll see how you can use Struts and Web services together.'] "The ever-evolving Java programming language and Sun's J2EE specification have enabled software developers of various disciplines to create distributed computing applications that were previously possible only with relatively proprietary tools. Thus, while some development teams may choose to implement new systems in the Java platform, others will create, enhance, and maintain applications using other skills and then integrate them into an existing heterogeneous distributed application. This situation gives rise to an interoperability challenge. How can a new application interact with an old one? The answer: Web services. Web services are the new holy grail of programming. They make it possible to share and coordinate dispersed, heterogeneous computing resources. In this article, you'll learn one route to this goal. You'll see how to architect an application based on the open-source Struts framework that can integrate with Web services. You should have some background in J2EE and Web services before you begin; I'll briefly introduce the Struts framework and the Model-View-Controller (MVC) pattern... 
The MVC design pattern clearly demarcates the roles of programmers and designers. In other words, it untangles data from business logic. This pattern allows designers to concentrate on the display portion of an application and developers to concentrate on developing the components required to drive the application's functions... Using the architecture outlined here, you can develop enterprise applications that are robust, easy to maintain, and easily integrated with any legacy applications. I hope that you will be ready to start exploring more about Struts and Web services and see how this architecture can be useful to you in your own projects..." See also Apache Jakarta Struts framework.
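The demarcation described above can be reduced to a skeleton: the model and view know nothing about each other, and the controller (the role a Struts Action plays, or a Web service endpoint delegating to one) wires them together. All names below are invented:

```python
class Model:
    """Holds data and business logic only; knows nothing about display."""
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

class View:
    """The display portion; knows nothing about storage or validation."""
    def render(self, items):
        return "Items: " + ", ".join(items)

class Controller:
    """Routes a request to the model, then hands the result to a view."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def handle(self, command, payload=None):
        if command == "add":
            self.model.add(payload)
        return self.view.render(self.model.items)

c = Controller(Model(), View())
c.handle("add", "order-1")
print(c.handle("add", "order-2"))
```

Because the controller is the only seam, swapping the view for a Web service serializer (or the model for a legacy back end) touches one class, which is the maintainability argument the article is making.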
[April 17, 2003] "Sydney Firm to Protect 3G Content." By Nathan Cochrane. In Sydney Morning Herald (April 15, 2003). "A small Sydney maker of copyright enforcement technology has beaten Microsoft for the coveted crown of protecting content consumed by the next generation of multimedia mobile phones. IPR Systems' Open Digital Rights Language (ODRL) version 1.1 has been adopted by mobile makers such as Nokia, Samsung and Sony-Ericsson, operators including Vodafone and open-standards-setting body the Open Mobile Alliance to safeguard copyrighted content distributed over third-generation (3G) networks. ODRL gives content creators the power to determine exactly how their material is to be used, including how many times it can be consumed, for how long it can be consumed before it expires and how many times it can be forwarded, if at all. IPR's four engineers built the Digital Restrictions Management (DRM) language in about two years before version 1 was commercially adopted by Nokia and others in preference to Microsoft's XrML standard, in part due to political reasons, says chief scientist Renato Iannella. 'ODRL is more concise and this is a critical factor for network bandwidth for the telcos,' Iannella says... Research group Datamonitor predicts the market for content over mobile phones will kiss $US 38 billion ($A 62.7 billion) in three years. Iannella says the mobile phone operators and content producers wanted to avoid the 'Napsterisation' of their services... ODRL is an open standard provided free to developers and phone makers in an attempt to spur wide adoption. IPR makes its money by providing systems that best take advantage of the standard, says its business development manager, Fergus Stoddart. 'Nokia started using ODRL internally but they realised the need to follow emerging standards rather than invent another derivative so they decided to contact us and join the ODRL initiative and put their weight behind it,' he says. 
Iannella says users of devices such as Nokia's 3650 multimedia messaging service mobile phone benefit by having explicit rights to forward media once it has been consumed..." From Nokia: "The Nokia Content Publishing Toolkit 2.0 is a stand-alone, off-line Windows application that generates OMA DRM and Download OTA files. These standardized technologies allow protection and control of paid content, as well as improvement of the customer experience, by ensuring that only compatible content is downloaded to the terminal." [See the example screen image of the application and sample ODRL OMA output.] "Forum Nokia provides the information needed to develop applications using the technologies Nokia's products support. Today these technologies include the latest wireless standards and protocols such as Bluetooth, Java, SyncML, MMS and Symbian OS, as well as the established mobile technologies WAP and Smart Messaging..." (1) ODRL Home Page; (2) OMA DRM DTD; (3) "Proposed Open Mobile Alliance (OMA) Rights Expression Language Based Upon ODRL"; (4) "Open Digital Rights Language (ODRL)." General references in "XML and Digital Rights Management (DRM)."
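As a rough illustration of the kinds of constraints the article says ODRL can express (play counts, expiry dates, forwarding limits), here is a toy Python model. All names are invented for this sketch; real ODRL encodes such constraints as XML permission and constraint elements, not as code:

```python
# Toy model of ODRL-style usage constraints: how many times content may
# be consumed, until when, and how often it may be forwarded.
# Field and class names are illustrative, not part of ODRL itself.
from datetime import date


class UsageRights:
    def __init__(self, max_plays=None, expires=None, max_forwards=0):
        self.max_plays = max_plays        # None means unlimited plays
        self.expires = expires            # datetime.date or None
        self.max_forwards = max_forwards  # 0 means no forwarding at all
        self.plays = 0
        self.forwards = 0

    def may_play(self, today):
        if self.expires is not None and today > self.expires:
            return False                  # content has expired
        return self.max_plays is None or self.plays < self.max_plays

    def play(self, today):
        if not self.may_play(today):
            raise PermissionError("play not permitted")
        self.plays += 1

    def forward(self):
        if self.forwards >= self.max_forwards:
            raise PermissionError("forwarding not permitted")
        self.forwards += 1
```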
[April 17, 2003] "Product Plans Afoot for BPEL4WS Spec. W3C Fears Fragmented Standards Efforts." By Paul Krill. In InfoWorld (April 17, 2003). "Major backers of the BPEL4WS specification for Web services business processes are preparing products to implement the technology, which is being submitted for consideration by OASIS as an industry standard. But a spokeswoman for the World Wide Web Consortium (W3C) expressed fears that Web services standardization could be fragmented, with both Organization for Advancement of Structured Information Standards (OASIS) and W3C now pondering Web services business process, or choreography, standards. BPEL4WS, or Business Process Execution Language for Web Services, also has been known as just BPEL. Developed by IBM, Microsoft, and BEA Systems and introduced in August 2002, the specification is intended as a way to describe business processes constructed as a collection of Web services, which is sometimes referred to as choreography, said John Kiger, director of Web services strategy at BEA in San Jose, Calif. 'By this fall, you'll see real products delivered that use BPEL4WS to do business processes between these heterogeneous systems,' said Steven Van Roekel, Microsoft director of Web services, in Redmond, Wash. Microsoft, for example, plans to support BPEL4WS in its BizTalk Server application integration platform, Van Roekel said... BEA will add BPEL4WS to its WebLogic platform later this year to enable business processes to be exchanged across heterogeneous environments, Kiger said. Siebel Systems, meanwhile, has been working with IBM, Microsoft, and BEA on using the specification in the Siebel Universal Application Network offering for multi-application integration, said Hitesh Dholakia, senior manager for Universal Application Network at Siebel, in San Mateo, Calif. 
SAP by the end of this year plans to implement BPEL4WS in various products, including its Exchange Infrastructure, which is a messaging platform for integrating SAP and non-SAP systems, said Sinisa Zimek, director of technology and standards at SAP, in Palo Alto, Calif. Despite supporters' ambitious plans for BPEL4WS, a spokeswoman for W3C, which also is deliberating Web services technologies including choreography, expressed fears that the dual OASIS and W3C efforts would lead to fragmentation and hinder standardization for Web services..." See the news story "OASIS Forms Web Services Business Process Execution Language TC (WSBPEL)."
[April 17, 2003] "Commentary: The Right Web Services Process Standard." By Ted Schadler (Principal Analyst, Forrester Research). In CNET News.com (April 15, 2003). "Applications heavyweights SAP and Siebel Systems have joined BEA Systems, IBM and Microsoft as co-authors on the BPEL4WS business process specification -- and given it to OASIS under royalty-free terms. It's good news for companies focused on Web services. Forrester spoke with executives from those software makers about their plans to submit BPEL4WS, or Business Process Execution Language for Web Services, to OASIS with 22 other co-submitters. We believe that the standards body will extend that specification with the best parts of competing specifications Web Service Choreography Interface (WSCI) and Business Process Modeling Language (BPML) to create a single process standard. Companies should bet on BPEL4WS today because commitments from SAP and Siebel will tip the balance in its favor. BEA, IBM and Microsoft already support the specification. Now SAP and Siebel say they'll implement it in their integration products later this year and will use it for internal application integration as well. Meaning? WSCI and BPML will fade away, and users can start coding to BPEL4WS today, safe in the knowledge that their application links will migrate to the standard when it's ratified... The track record for IBM's and Microsoft's Web service standard is unblemished. Moving from proposal to widely adopted standard is never easy. But recent cases lead Forrester to conclude that the effort will be successful--witness, for example, the successful transition of WS-Security from specification to OASIS standard to WS-I supported standard. The bottom line? Companies should feel confident that real standards for reliable messaging, security and work flow will appear in products by 2004..." 
[RCC Note: The characterization "under royalty-free terms" may be grounded in good authority, but the 2003-04-16 CFP and Charter do not actually address this topic.] "BPEL4WS: The Right Web Service Process Standard," by Ted Schadler, Charles Rutstein, and Sharyn Leaver (Forrester TechStrategy Brief). See details in the news story "OASIS Forms Web Services Business Process Execution Language TC (WSBPEL)."
[April 17, 2003] "Sun Exec Preaches UBL. Touts XML Business Document Standard for E-Commerce." By Paul Krill. In InfoWorld (April 16, 2003). "Universal Business Language (UBL), a proposed OASIS specification for electronic commerce, presents the potential of enabling smaller businesses worldwide to engage in global commerce alongside major vendors, a Sun official said during a presentation here on Tuesday. Jon Bosak, Sun's Distinguished Engineer and chairman of the OASIS (Organization for the Advancement of Structured Information Standards) Technical Committee on UBL, noted that most e-business is done in a business-to-business fashion. He stressed that a UBL infrastructure could level the playing field for smaller companies that need to conduct electronic business with large companies with expensive EDI systems. 'I think this is how the developing world gets into the party,' Bosak said. UBL defines a library of XML-based electronic-business documents for standardizing functions such as purchase orders and invoices. It plugs directly into existing traditional business, legal, and records management practices and eliminates the re-keying of data in existing fax- and paper-based supply chains, according to Bosak. It also fills the 'payload' slot, or document format, in b-to-b commerce frameworks such as the UN/OASIS ebXML initiative and various Web services schemes, said Bosak. Version 0.7 of UBL has just completed a review period; Version 1.0, expected in May, will incorporate the comments received. Adoption by OASIS would be expected by the end of the year if the organization chooses to take such an action, Bosak said..." See: (1) Universal Business Language TC website; (2) general references in "Universal Business Language (UBL)."
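To make the "library of XML-based electronic-business documents" idea concrete, the following sketch assembles a minimal invoice-like document with Python's ElementTree. The element names here are simplified stand-ins, not the normative UBL vocabulary, which is far richer:

```python
# Sketch of a tiny invoice document in the spirit of UBL.
# Element names (Invoice, InvoiceLine, Item, Quantity, Price, Total)
# are illustrative guesses, not the actual UBL 1.0 schema.
import xml.etree.ElementTree as ET


def build_invoice(invoice_id, lines):
    inv = ET.Element("Invoice", {"id": invoice_id})
    for item, qty, price in lines:
        line = ET.SubElement(inv, "InvoiceLine")
        ET.SubElement(line, "Item").text = item
        ET.SubElement(line, "Quantity").text = str(qty)
        ET.SubElement(line, "Price").text = "%.2f" % price
    total = sum(q * p for _, q, p in lines)
    ET.SubElement(inv, "Total").text = "%.2f" % total
    return ET.tostring(inv, encoding="unicode")


xml_doc = build_invoice("INV-001", [("paper", 10, 1.50), ("ink", 2, 8.00)])
```

A standardized document of this kind is exactly what replaces re-keyed fax and paper data in the supply chains Bosak describes.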
[April 17, 2003] Business Process Execution Language for Web Services. Version 1.1. 31-March-2003. 134 pages. By Tony Andrews (Microsoft), Francisco Curbera (IBM), Hitesh Dholakia (Siebel Systems), Yaron Goland (BEA Systems), Johannes Klein (Microsoft), Frank Leymann (IBM), Kevin Liu (SAP), Dieter Roller (IBM), Doug Smith (Siebel Systems), Satish Thatte (Microsoft - Editor), Ivana Trickovic (SAP), and Sanjiva Weerawarana (IBM). Copyright (c) 2002, 2003 BEA Systems, International Business Machines Corporation, Microsoft Corporation, SAP AG, and Siebel Systems. Abstract: "This document defines a notation for specifying business process behavior based on Web Services. This notation is called Business Process Execution Language for Web Services (abbreviated to BPEL4WS in the rest of this document). Processes in BPEL4WS export and import functionality by using Web Service interfaces exclusively. Business processes can be described in two ways. Executable business processes model actual behavior of a participant in a business interaction. Business protocols, in contrast, use process descriptions that specify the mutually visible message exchange behavior of each of the parties involved in the protocol, without revealing their internal behavior. The process descriptions for business protocols are called abstract processes. BPEL4WS is meant to be used to model the behavior of both executable and abstract processes. BPEL4WS provides a language for the formal specification of business processes and business interaction protocols. By doing so, it extends the Web Services interaction model and enables it to support business transactions. BPEL4WS defines an interoperable integration model that should facilitate the expansion of automated process integration in both the intra-corporate and the business-to-business spaces." APPENDIX D of the v1.1 specification supplies the XML (XSD) Schemas. 
In this second public draft release of the BPEL4WS specification (v1.1), a "more modular structure [has been given] to the initial specification announced in August 2002 by Microsoft, IBM, and BEA." XML Schemas include: (1) BPEL4WS V1.1 Schema; (2) Service Link Type Schema v1.1; (3) Service References Schema v1.1; (4) Message Properties Schema v1.1. See details in the 2003-04-16 news story "OASIS Forms Web Services Business Process Execution Language TC (WSBPEL)." General references in "Business Process Execution Language for Web Services (BPEL4WS)." Related specifications are listed in "Business Process Management and Choreography." [PDF source: IBM]
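A minimal executable-process skeleton of the kind the specification describes can be generated programmatically. In the sketch below, the namespace URI is believed to be the BPEL4WS v1.1 one and the receive/invoke/reply activities are real BPEL constructs, but the operation names are invented; verify both against the Appendix D schemas before relying on them:

```python
# Sketch of a skeletal BPEL4WS "receive / invoke / reply" sequence,
# emitted with ElementTree. Namespace URI from memory of the v1.1 spec;
# process and operation names are invented for illustration.
import xml.etree.ElementTree as ET

BPEL_NS = "http://schemas.xmlsoap.org/ws/2003/03/business-process/"
ET.register_namespace("bpel", BPEL_NS)


def q(tag):
    """Qualify a tag name with the BPEL namespace."""
    return "{%s}%s" % (BPEL_NS, tag)


process = ET.Element(q("process"), {"name": "purchaseOrderProcess"})
seq = ET.SubElement(process, q("sequence"))
ET.SubElement(seq, q("receive"), {"operation": "sendPurchaseOrder"})
ET.SubElement(seq, q("invoke"), {"operation": "checkInventory"})
ET.SubElement(seq, q("reply"), {"operation": "sendPurchaseOrder"})

bpel_xml = ET.tostring(process, encoding="unicode")
```

The receive/reply pair models the mutually visible message exchange of an abstract process; the invoke in between is the kind of internal behavior an abstract process would omit.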
[April 15, 2003] "OASIS to Get BPEL4WS Jurisdiction. Web Services Specification Finally Goes to Standards Body." By Paul Krill. In InfoWorld (April 15, 2003). "Microsoft, IBM, and BEA Systems plan to submit their Web services choreography and business process specification, initially proposed in August 2002, to a standards body later this week. The Business Process Execution Language for Web Services (BPEL4WS) specification is expected to be submitted to Organization for the Advancement of Structured Information Standards (OASIS), Carol Geyer, spokeswoman for OASIS, confirmed. 'We anticipate it will probably be tomorrow and a charter will be submitted tomorrow or maybe Thursday,' Geyer said. The proposing companies still are making modifications to the charter for BPEL4WS that they submit to OASIS, she said. According to a source familiar with the announcement, SAP and Siebel are joining the original developers of BPEL4WS, IBM, Microsoft, and BEA, in the submission. BPEL4WS is intended to provide for more automated Web services, which is considered crucial to spread the use of Web services for back-end integration for applications such as e-commerce. The submission of BPEL4WS to a standards organization such as OASIS or World Wide Web Consortium (W3C) has been awaited. A technical committee is to be formed to deliberate on the specification at OASIS, with an initial meeting to be held May 15, the source said. Co-submitters of the technical committee charter include the following: Accenture, Akazi, CGEY, Collaxa, CommerceQuest, EDS, Vignette, FiveSight, Handysoft, HP, i2, JDEdwards, NEC, Novell, OpenStorm, SeeBeyond, SourceCode, TeamPlate, Tibco, Unisys, Ultimus, and WebV2, according to the source. 
One issue, whether the specification would be submitted royalty-free, apparently has been resolved, as all submitters have agreed to not seek royalties, or financial compensation, for their contributions to the specification used in any implementations, according to the source... BPEL4WS also is being upgraded to Version 1.1, although details on improvements were not immediately available..." See details in "OASIS Forms Web Services Business Process Execution Language TC (WSBPEL)." See general references in "Business Process Management and Choreography."
[April 15, 2003] "Web Services Standards Facing a Split?" By Martin LaMonica. In CNET News.com (April 15, 2003). "IBM, Microsoft and BEA Systems plan to submit a high-profile Web services proposal to the OASIS standards body, company executives said, despite an ongoing effort by the World Wide Web Consortium to sort through similar proposals. Led by the three powerhouse companies, about 20 businesses will propose the creation of a technical committee within the OASIS standards body to standardize the Business Process Execution Language for Web Services (BPEL4WS), which is a language for automating complex business processes. The companies, which include SAP and Siebel, will make the submission to OASIS as early as Tuesday, according to the IBM executive. An official announcement from OASIS is expected in about a week... The group of companies backing BPEL also plans to publish an update to the BPEL specification when it is submitted to OASIS. BPEL was originally authored by IBM, Microsoft and BEA. The submission of BPEL to OASIS is the latest move in a series of maneuvers among information technology providers around Web services standards. By using XML-based Web services standards, businesses can more easily share information between disparate systems. The ability to automate a multistep business process using Web services -- called choreography or orchestration -- is an important capability to drive broader adoption of Web services, according to analysts. The World Wide Web Consortium (W3C) last month created a choreography working group to sort out several overlapping standards proposals. The W3C group has garnered membership from several companies, including industry heavyweights Oracle, Sun Microsystems and Hewlett-Packard. The W3C requested that IBM, Microsoft and BEA participate in the choreography working group and submit the BPEL specification for consideration. 
Microsoft representatives attended the first meeting but decided to break with the working group after one day. IBM and Microsoft executives said they decided to submit BPEL to OASIS because it has been active in recent Web services standards efforts, including WS-Security, that the two companies have spearheaded. Although members of the OASIS technical committee will monitor the work done at the W3C's choreography group, there are no plans to officially interact with the W3C, IBM and Microsoft executives said. 'We're hopeful that the choreography field is big enough that there are complementary areas that the (W3C choreography) group can work on, rather than focusing on areas that don't allow us to build momentum,' said Karla Norsworthy, director of e-business technology at IBM..." See details in "OASIS Forms Web Services Business Process Execution Language TC (WSBPEL)." See general references in "Business Process Management and Choreography." General references in "Business Process Execution Language for Web Services (BPEL4WS)."
[April 15, 2003] "Liberty Alliance Moves Ahead." By Peter Judge. In CNET News.com (April 15, 2003). "Proponents of the Liberty Alliance Project, a group developing online identity standards, provided details Tuesday of their Phase Two specifications and demonstrated new features. Liberty held its first public interoperability demonstration at the RSA Conference here with four different applications on display, built with Liberty 1.0 technology from some twenty vendors. The group also released a draft of its Phase 2 specifications, which are expected to become finished standards later this year. 'We've added permissions-based attribute sharing and other features,' said Michael Barrett, president of the Liberty management board and vice president of Internet strategy at American Express. The second version of the Liberty specification maps a way for Web users to exchange information with Web sites without revealing their identity. It is also designed to allow people to specify a set of affiliated sites they can log on to. The demonstrations of Liberty 1.0 technology focused on transactions between businesses and among employees. In one, led by Communicator, an employee was allowed access to several financial services after signing into a single identity server within his company. In another, led by Novell, an employee accessed her pensions and retirement information from external sites through the corporate intranet without having to repeatedly log in. American Express is likely to launch this kind of service soon, hinted Barrett. 'I won't preannounce anything, but we believe there are a number of opportunities.' [...] Beyond the Phase 2 specifications, there will be further enhancements to Liberty's online ID efforts, including more work on policy, said Barrett.
In the future, its specifications will be linked more closely with Web services, which are applications that use Extensible Markup Language (XML)-based protocols to share information between disparate systems. 'Identity is at the heart of the Web service story,' he said. In related news, the Liberty project announced several new members, including Ericsson, bringing the total up to 160. Interest in the specifications comes from all over the world, with companies from the Pacific Rim showing increasing attention..." See details in the news story "Liberty Alliance Releases Phase 2 Specifications for Federated Network Identity." General references in "Liberty Alliance Specifications for Federated Network Identification and Authorization."
[April 15, 2003] "Liberty Alliance Releases Phase 2 Specifications, Demos Vendor Interoperability." By Charlene O'Hanlon. In CRN (April 15, 2003). "The Liberty Alliance released Phase 2 of its draft identity-based Web services specifications and demonstrated interoperability of services with 20 of its member companies. The announcements, made Tuesday at the RSA Conference here, represent a major step forward for the alliance, said Michael Barrett, vice president of Internet Strategy for American Express and management board president of the Liberty Alliance. 'Last year we stood here as a fledgling operation and growing rapidly, but we said little about what we were going to do. Since then we have done a great deal,' he said. Phase 2 of the Liberty Alliance specifications for creating a common and trusted way of building and managing identity-based Web services builds on the Phase 1 specifications, also known as the Liberty Identity Federation Framework, released in July 2002. Phase 2 also includes affiliations, which allows a user to choose to 'federate' -- or create a simplified sign-on for -- a group of affiliated sites, as well as anonymity features, which enable a service to request certain user attributes without needing to know the user's identity..." See details in the news story "Liberty Alliance Releases Phase 2 Specifications for Federated Network Identity." General references in "Liberty Alliance Specifications for Federated Network Identification and Authorization."
[April 15, 2003] "The Fedora Project: An Open-source Digital Object Repository Management System." By Thornton Staples, Ross Wayland, and Sandra Payette. In D-Lib Magazine Volume 9, Number 4 (April 2003). ISSN 1082-9873. "In September 2001, the University of Virginia was awarded a grant from the Andrew W. Mellon Foundation to develop the first digital object repository management system based on the Flexible Extensible Digital Object and Repository Architecture (Fedora)... The Fedora architecture is based on object models that by definition are templates for units of content, called data objects, which can include digital resources, metadata about the resources, and linkages to software tools and services that have been configured to deliver the content in desired ways. These software connections are provided as methods encoded into two kinds of inter-related behavior objects as described below. A Fedora repository provides access to the data objects by leveraging tools and services that are described by the behavior objects. The behavior objects store metadata that describes the operations of the tool/service and the runtime bindings for running the operations. The Web Services Description Language (WSDL) is used to describe the tool/service bindings... The digital resources and the metadata are datastreams in an object model, definitions of which connect the content model either to internal content under the direct control of the repository or to external content that is delivered via HTTP servers. The content of a datastream is identified using a URL. When an object is ingested into a Fedora repository, a URL for a managed datastream is used by the repository system to retrieve the content and store it in the file space under its control; the datastream in the object is updated to be this internal address. When an object contains a datastream defined as external, the URL is stored in the datastream and used by the repository to access the data whenever necessary.
An in-line metadata datastream is a bytestream that is name-spaced XML encoded data stored in the XML instantiation of the object directly, rather than as remote or managed content. From the user's point of view, the linkages to software tools and services (via disseminators) are seen as behaviors upon the units of content. These behaviors can be exploited to deliver varieties of prepared content directly to a web browser. They can also be used to prepare or configure content to be used through some external software application. In a sense, these object models can be thought of as containers that give a useful shape to information poured into them; if the information fits the container, it can immediately be used in predefined ways... In terms of management of collections, Fedora provides a flexible object model and an environment to store digital content and metadata in a secure, extensible repository system. By default, the repository will accept objects encoded in the Metadata Encoding and Transmission Standard (METS), with a few Fedora-specific requirements. Alternatively, objects can be built via client tools (both interactive and batch) that interact with the Fedora Management Web service. In either case, the repository will store digital objects as XML and protect the content datastreams..." See technical details in Sandra Payette and Thornton Staples, "The Mellon Fedora Project: Digital Library Architecture Meets XML and Web Services," Sixth European Conference on Research and Advanced Technology for Digital Libraries. Lecture Notes in Computer Science, Vol. 2459. Springer-Verlag, Berlin Heidelberg New York (2002) pages 406-421.
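The ingest-time handling of managed versus external datastreams described above can be sketched as follows. This is a toy model; none of these names come from Fedora's actual API:

```python
# Toy sketch of the managed/external datastream distinction:
# managed content is copied under repository control at ingest time and
# the datastream URL is rewritten to an internal address; external
# content keeps its URL and is fetched on demand at access time.
# All class and field names are illustrative, not Fedora's API.

class Datastream:
    def __init__(self, ds_id, url, external=False):
        self.ds_id = ds_id
        self.url = url            # where the content currently lives
        self.external = external  # True: resolve via HTTP at access time


class Repository:
    def __init__(self):
        self._store = {}          # stands in for managed file space

    def ingest(self, obj_id, datastream, fetch):
        if not datastream.external:
            internal = "internal://%s/%s" % (obj_id, datastream.ds_id)
            self._store[internal] = fetch(datastream.url)
            datastream.url = internal    # rewrite to internal address
        return datastream

    def access(self, datastream, fetch):
        if datastream.external:
            return fetch(datastream.url)  # whenever necessary
        return self._store[datastream.url]
```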
[April 15, 2003] "State of the Dublin Core Metadata Initiative, April 2003." By Makx Dekkers (Managing Director, DCMI) and Stuart Weibel (Consulting Research Scientist, OCLC Office of Research). In D-Lib Magazine Volume 9, Number 4 (April 2003). ISSN 1082-9873. "The Dublin Core Metadata Initiative continues to grow in participation and recognition as the predominant resource discovery metadata standard on the Internet. With its approval as ISO 15836, DC is firmly established as a foundation block of modular, interoperable metadata for distributed resources. This report summarizes developments in DCMI over the past year, including the annual conference, progress of working groups, new developments in encoding methods, and advances in documentation and dissemination. New developments in broadening the community to commercial users of metadata are discussed, and plans for an international network of national affiliates are described... Version 1.1 of the Dublin Core Metadata Element Set was balloted in the International Standards Organization (ISO) as DIS 15836. In January 2003, the ballot finished with 18 'yes' votes and without any 'no' votes. Publication as ISO 15836 will be forthcoming in the near future. Publication of this standard culminates a series of standardization work by the Internet Engineering Task Force, CEN (the European consortium of standards bodies), the National Information Standards Organization (NISO) in the US, and now in the international domain. Ratification of the Dublin Core Metadata Element Set within ISO illustrates that it is indeed a standard for all nations, serving as a foundation for global metadata interoperability... The specification of the expression of simple Dublin Core metadata in RDF/XML became a DCMI Recommendation in October 2002. 
The specification for the RDF/XML expression of qualified Dublin Core metadata and the finalization of the necessary RDF schemas are being progressed further in co-operation with representatives of W3C, taking into account recent changes in the W3C RDF specifications. The Proposed Recommendation for the expression of Dublin Core metadata in XML was made available for Public Comment during the month of March 2003. A new XML schema for simple Dublin Core metadata was published in December 2002..." General references in "Dublin Core Metadata Initiative (DCMI)."
[April 15, 2003] "Guidelines for Implementing Dublin Core in XML." Dublin Core Metadata Initiative Recommendation. 2003-04-02. Edited by Andy Powell and Pete Johnston (UKOLN, University of Bath). "This document provides guidelines for people implementing Dublin Core metadata applications using XML. It considers both simple (unqualified) DC and qualified DC applications. In each case, the underlying metadata model is described (in a syntax neutral way), followed by some specific guidelines for XML implementations. Some guidance on the use of non-DC metadata within DC metadata applications is also provided... At the same time, a new set of XML schemas is published. These support the encoding of Qualified Dublin Core metadata records, including the use of element refinements and encoding schemes, and follow the conventions described in Guidelines for expressing Dublin Core in XML..." General references in "Dublin Core Metadata Initiative (DCMI)."
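A simple (unqualified) Dublin Core record of the kind the guidelines cover can be serialized in a few lines of ElementTree. The dc namespace URI below is the standard one; the wrapping metadata element is an arbitrary container chosen for this sketch, since simple DC does not prescribe one:

```python
# Sketch: a simple (unqualified) Dublin Core record as XML.
# The dc: namespace URI is the standard DCMES 1.1 one; the "metadata"
# wrapper element is an arbitrary container for this example.
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

record = ET.Element("metadata")
for element, value in [
    ("title", "Guidelines for Implementing Dublin Core in XML"),
    ("creator", "Andy Powell"),
    ("date", "2003-04-02"),
]:
    ET.SubElement(record, "{%s}%s" % (DC, element)).text = value

dc_xml = ET.tostring(record, encoding="unicode")
```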
[April 15, 2003] "XML Transactions for Web Services, Part 1." By Faheem Khan. From WebServices.xml.com (April 15, 2003). ['This article is the first in a multipart examination of the role of transactions in building complex, federated web services. Faheem Khan explains the scenarios in which web service developers can use transactions, as well as two of the most crucial specifications: WS-Transaction and WS-Coordination.'] "Web Services Transaction (WS-Transaction) and Web Services Co-ordination (WS-Coordination) are specifications being jointly prepared by BEA, IBM, and Microsoft. The purpose of these specifications is to define a mechanism for transactional web services. This three-part series of articles describes and demonstrates transactional web services, elaborates why and when we may require web service transactions, and explains how WS-Coordination and WS-Transaction address the transactional requirements of web services. The present article, the first in the series, introduces Service Oriented Architecture (SOA) and SOAP. It elaborates upon how SOA provides a layer of abstraction over conventional web applications and wraps the functionality of HTTP-based server side components. It examines a simple ecommerce scenario in order to discuss these concepts and to demonstrate how web services can federate portions of business logic to other web service applications. The federation of tasks is an important step toward integration across the enterprise; it allows an enterprise to concentrate on its own activities as well as integrate with its customers, suppliers, and business partners. The discussion of federated web services will help readers recognize the need to coordinate a sequence of SOAP requests and responses, so that the total process looks like a business transaction. The last section of this article introduces the WS-Transaction and WS-Coordination specifications, briefly examining how they provide for the requirements of transactional web services.
Service Oriented Architecture: SOA offers a way to present web applications as services. SOAP is perhaps the most popular XML-based protocol to implement SOA in HTTP applications. Let's consider an ecommerce example to elaborate how SOAP can be used to expose the functionality of web applications..."
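The kind of SOAP request such a federated ecommerce scenario involves can be sketched as follows. The SOAP 1.1 envelope namespace below is the standard one, but the operation and parameter names are invented for illustration:

```python
# Sketch: building a minimal SOAP 1.1 request envelope with ElementTree.
# The envelope namespace is standard SOAP 1.1; "CheckInventory" and its
# parameters are invented names for this ecommerce illustration.
import xml.etree.ElementTree as ET

SOAP = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP)


def make_request(operation, params):
    env = ET.Element("{%s}Envelope" % SOAP)
    body = ET.SubElement(env, "{%s}Body" % SOAP)
    op = ET.SubElement(body, operation)      # the federated operation
    for name, value in params.items():
        ET.SubElement(op, name).text = str(value)
    return ET.tostring(env, encoding="unicode")


soap_msg = make_request("CheckInventory", {"sku": "A-100", "quantity": 3})
```

Coordinating a sequence of such request/response pairs so that they succeed or fail as a unit is exactly the problem WS-Coordination and WS-Transaction address.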
[April 15, 2003] "Web Services Specification Still Not Ready for Standardization. BPEL4WS Remains in Founders' Jurisdiction." By Paul Krill. In InfoWorld (April 11, 2003). "A specification for Web services choreography and business processes introduced by Microsoft, IBM, and BEA Systems last August remains under its founders' jurisdiction, despite repeated assurances of its pending submission to an industry standards organization... The specification, Business Process Execution Language for Web Services (BPEL4WS), is gaining momentum in the industry as a way to automate Web services back-end interactions for Web-based integration of applications such as e-business. Companies such as Collaxa and BEA have offered details of product plans for supporting BPEL4WS. The specification's founders have pledged to submit the proposal to a standards organization such as OASIS or W3C for consideration as an industry standard... [IBM's Bob] Sutor said there are political issues that have come about pertaining to differences of opinion on the specification. Asked if the issues had to do with intellectual property rights, pertaining to royalty rights for BPEL4WS founders, Sutor said IBM would not seek any royalties on BPEL4WS, as he has pledged before..." See "Business Process Execution Language for Web Services (BPEL4WS)."
[April 15, 2003] "The Semantic Blog." By Jon Udell. From WebServices.xml.com (April 15, 2003). ['Weblogs and XML-enabled databases can help find the semantic-web sweet spot.'] "... As we consume more of our information by way of RSS feeds, the inability to store, index, and precisely search those feeds becomes more painful. I'd like to be able to work with my RSS data locally, even while offline, in much more powerful ways. One emerging option is the XML layer being added to Sleepycat's Berkeley DB, the database that's embedded in the Mozilla mail/news client, in Movable Type, and in a slew of other programs... I've long dreamed of using RSS to produce and consume XML content. We're so close. RSS content is HTML, which is almost XHTML, a gap that HTML Tidy can close. In current practice, the meat of an RSS item appears in the <description> tag, either as an HTML-escaped (aka entity-encoded) string or as a CDATA element. As has been often observed, it'd be really cool to have the option to use XHTML as well. Then I could write blog items in which the <pre> tag, or perhaps a class='codeFragment' attribute, marks regions for precise search. You or I could aggregate those items into personal XPath-aware databases in order to do those searches locally (perhaps even offline), and public aggregators could offer the same capability over the Web. So I was delighted to see the recent ping-pong match between Sam Ruby and Don Box, each of whom has now demonstrated a valid RSS 2.0 feed (Sam, Don) that includes a <body> element, properly namespaced as XHTML, which carries XPath-friendly content. Excellent! If this idea takes hold, the <description> tag could in principle revert to its original purpose which (at least in my view) was to advertise, rather than fully convey an item..."
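Udell's proposal -- XHTML content carried in a properly namespaced element, searched precisely rather than as an escaped string -- can be demonstrated in miniature with ElementTree's limited XPath support. The feed fragment below is invented; the xhtml namespace URI is the standard one:

```python
# Sketch: an RSS item carrying a namespaced XHTML <body>, queried for
# <pre> code regions. The item content is invented for illustration.
import xml.etree.ElementTree as ET

XHTML = "http://www.w3.org/1999/xhtml"
rss_item = """
<item>
  <title>Semantic blogging</title>
  <description>escaped HTML would go here</description>
  <body xmlns="%s">
    <p>Here is the fix:</p>
    <pre>print("hello, feeds")</pre>
  </body>
</item>
""" % XHTML

item = ET.fromstring(rss_item)
# Structure-aware search for code regions -- impossible against
# entity-encoded HTML in a <description> tag:
code_blocks = [pre.text for pre in item.findall(".//{%s}pre" % XHTML)]
```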
[April 15, 2003] "Implementing WS-Security: A Case Study." By Sam Thompson (jStart Program Manager for Web services, IBM). From IBM developerWorks, Web services. April 2003. ['This article describes how the emerging WS-Security standard was used to secure a Web service that was developed and deployed in the fall of 2002. The article will discuss the security-related requirements of the Web service and how they were met using a combination of HTTPS/SSL, digital certificates, and digital signature technologies. The article will crawl through the WS-Security element of the SOAP message used to trigger the Web service, explaining each section of the WS-Security element in detail. After reading this article, you should have a better understanding about how you can use WS-Security in a production Web service application and feel confident about using this emerging standard in your own projects.'] "... Since 1997, IBM has had a program called jStart to help its customers and business partners work with new emerging technologies. The program's goal is to help early adopters leverage new technologies to help make their businesses more successful. Last fall, the jStart program worked with a company who wanted to provide a business-to-business Web service using the Internet as a transport. They desired a strong level of security and interoperability, and they decided to use a WS-Security approach to secure the SOAP message traffic with their business partners. This paper discusses that project and its use of WS-Security... The paper demonstrates the soundness and overall viability of the draft WS-Security specification by offering itself as a proof-point that secure, mission critical, Web services applications are viable with today's development tools and deployment platforms. 
Yes, in our customer's case, some non-automated, manual steps were required to handle the WS-Security element of our SOAP message, but as support for WS-Security gets folded into the next iteration of the WSDL specification and support is added to the Web services development tools of many vendors, it will only get better..."
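As a rough illustration of the kind of WS-Security element the article walks through, the sketch below assembles a SOAP envelope whose header carries an X.509 token placeholder. This is a hedged sketch in Python's ElementTree, not the customer's actual message: the wsse namespace shown is one of the 2002 draft URIs (the final OASIS standard uses a different one), and the token value is a placeholder.

```python
import xml.etree.ElementTree as ET

SOAP = "http://schemas.xmlsoap.org/soap/envelope/"
# One of the 2002 draft WS-Security namespaces; the final OASIS
# standard uses a different URI.
WSSE = "http://schemas.xmlsoap.org/ws/2002/04/secext"

env = ET.Element(f"{{{SOAP}}}Envelope")
header = ET.SubElement(env, f"{{{SOAP}}}Header")
sec = ET.SubElement(header, f"{{{WSSE}}}Security")
tok = ET.SubElement(sec, f"{{{WSSE}}}BinarySecurityToken")
tok.set("ValueType", "wsse:X509v3")   # an X.509 certificate token
tok.text = "PLACEHOLDER-BASE64-CERT"  # not real certificate data
ET.SubElement(env, f"{{{SOAP}}}Body")

# Unlike HTTPS/SSL, which protects only one hop, this header stays
# with the message end to end.
wire = ET.tostring(env)
```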
[April 15, 2003] "Using Categorization To Distinguish Entries And Create Communities in UDDI. Developing and Using a Validation Service for Checked Categories in UDDI." By Matt Rutkowski, Andrew Hately, and Robert Chumbley (UDDI Development, Emerging Technologies, IBM). From IBM developerWorks, Web services. April 2, 2003. ['This article describes the power of categorization in UDDI to differentiate data according to standard taxonomies and how to use categorization to create a subset of the registry that has been screened by an external party. The use of the UDDI validation service to create the Speed-start Web services community within the IBM UDDI Test registry is described in this article. This Speed-start community is an example of a set of public data that can be distinguished from all of the remaining public data within the same UDDI registry using a simple category based query. UDDI entries that are returned as part of the response to that query have been evaluated at publication time by the Speed-start validation service to ensure that they are Internet accessible Web services.'] "UDDI provides a mechanism to include standard taxonomies that can be used to describe each entry using as many industry standard search terms as needed. Each business, service, or technical model can contain a 'Category Bag' which holds keyed references (that is, categorization codes, locators, or keywords) that can specifically describe its type of business, physical location, and even the exact products and services it offers. These keyed references contain a reference to the classification system or taxonomy, a text field containing the value within that taxonomy and a text field for a human readable description. Using this method of categorization, the UDDI Inquiry API can quickly and efficiently connect businesses and services to exactly the customers that need them... Three of the UDDI taxonomies [NAICS, UNSPSC, GCS] are standards for categorizing entries. 
The fourth taxonomy, UDDI classifications, is a taxonomy that was developed as part of the UDDI specification to provide useful values for categorizing the technical information of Web services. The last taxonomy is useful for associating keywords with an entry, especially those that are not part of the name of the entry. Each of these category systems is uniquely identified by a UDDI entry called a tModel (Technical Model) and can be referenced using its tModelKey... The article provides a general overview of categorization and how it can be used in conjunction with validation services called by UDDI registries to provide a community or screened set of results according to category system specific criteria. The example of the Speed-start community is a simple example of the power of contextual validation services that can greatly enhance the quality of results corresponding to queries for data referencing a particular category or identifier system. Using the information in this article, it should be possible to develop services that greatly enhance the results from UDDI registries such as service quality or reference services..." General references in "Universal Description, Discovery, and Integration (UDDI)."
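The 'Category Bag' mechanism the authors describe can be made concrete. The snippet below (Python ElementTree; the tModelKey is a placeholder, not a registry's real NAICS key) builds the kind of categoryBag a publisher would attach to an entry:

```python
import xml.etree.ElementTree as ET

UDDI = "urn:uddi-org:api_v2"

bag = ET.Element(f"{{{UDDI}}}categoryBag")
ref = ET.SubElement(bag, f"{{{UDDI}}}keyedReference")
# tModelKey references the taxonomy's tModel; this key is a placeholder.
ref.set("tModelKey", "uuid:00000000-0000-0000-0000-000000000000")
ref.set("keyName", "Software Publishers")  # human-readable description
ref.set("keyValue", "51121")               # the code within the taxonomy
```

An inquiry (find_business) carrying the same keyedReference would then match only entries categorized this way, and in the Speed-start case, only entries that also passed the community's validation service at publication time.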
[April 15, 2003] "Liberty Releases Draft of New Spec." By Dennis Fisher. In eWEEK (April 15, 2003). "The Liberty Alliance Project has released a draft of its second-generation specification for federated identity management. In addition to the new spec, the alliance also unveiled its new Identity Web Services Framework, which lays out the components needed to build interoperable identity-based Web services. Alliance members, speaking at a press conference at the RSA Conference here, said they are in discussions with officials from Microsoft Corp. about making the Liberty specifications interoperable with the Redmond, Wash., company's Passport authentication service. The two main enhancements to the new specification are protocols that enable affiliations and anonymity. The affiliations functionality allows users to federate their identities with a selected group of affiliated Web sites. This is seen as a key piece of any identity management service. The anonymity functionality enables users to give a Web site certain pieces of personal information without revealing their identity. The final Phase 2 specification is due for release in the third quarter, following a public comment and review period. The new Identity Web Services Framework (ID-WSF) comprises several separate features designed to give vendors and enterprises a road map for developing interoperable Web services. Among the key features are permission-based attribute sharing, an identity discovery service and security profiles. The security profiles feature describes in detail the requirements necessary for privacy protection and assurance of the integrity and confidentiality of the messages. But probably the most important feature is the permission-based attribute sharing, which gives users the ability to share certain attributes and preferences with a given Web site. That site can then use that data to offer the user personalized services..." 
See details in the news story "Liberty Alliance Releases Phase 2 Specifications for Federated Network Identity." General references in "Liberty Alliance Specifications for Federated Network Identification and Authorization."
[April 15, 2003] "SyncML Device Management: An Emerging Protocol Standard for Managing Devices." By Rajkiran Guru (Software Engineer, IBM India Software Labs). From IBM developerWorks, Wireless. April 2003. ['With pervasive devices overwhelming today's market, developers need a standard protocol to set up and reconfigure devices, update software, and register new services remotely. The SyncML Device Management Protocol helps you do just that without having to commit to a proprietary protocol.'] "Ubiquitous access to information is vital in today's fast-moving computing world. If it isn't already, the market will soon be flooded with different types of pervasive devices. These include Personal Digital Assistants (PDAs) with network access, as well as a new class of not-so-personal, but more consumer-oriented devices, such as in-vehicle information systems, home service gateways, kiosks, and set-top boxes. And as those devices become more popular and more complicated, the task of setting up and reconfiguring devices, updating software, and registering new services automatically becomes more challenging. Therefore, we need a standard protocol that will allow service providers, device manufacturers, and corporate information management departments to perform the following tasks remotely: (1) Configure new devices; (2) Upgrade software on devices; (3) Upload new applications; (4) Perform backup and restoration; (5) Track hardware inventory; (6) Collect data from the devices; (7) Control devices remotely; (8) Implement service discovery and provisioning... This article gives you an under-the-hood look into the SyncML (Synchronization Markup Language) Device Management Protocol -- an emerging and efficient solution that has gained wide support among major industry players... Currently, there are several proprietary protocols you can use to manage devices, but there is no current standard device management protocol. 
If you consider the non-interoperability issues that come with multiple proprietary protocols, this is a disconcerting fact. Unless the industry proposes a standard device management protocol, a plethora of incompatible protocols will consume it, whereas a standard device management protocol would cater to all industry segments. The SyncML Initiative, led by more than 640 companies, including Ericsson, IBM, Nokia, and Motorola, has designed a highly-interoperable device management (DM) protocol. The initiative successfully created an industry standard data synchronization protocol. Now industry leaders in both the client and server segment are in the process of designing and promoting the SyncML Device Management Protocol, in hopes of making it the future standard. The SyncML Initiative is now a part of the Open Mobile Alliance (OMA), which consists of groups like WAP Forum, Location Interoperability Forum, and MMS Interoperability Group. By being a part of this widespread industry organization, SyncML's acceptance as a standard device management solution will likely increase significantly..." General references in "The SyncML Initiative."
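To make the management tasks above concrete, here is an abridged sketch of a SyncML DM request in which a server reads a device's identifier. The skeleton follows the SyncHdr/SyncBody message layout, but it is heavily simplified (session, target, and source details omitted) and the management-object URI is illustrative:

```python
import xml.etree.ElementTree as ET

msg = ET.Element("SyncML")
hdr = ET.SubElement(msg, "SyncHdr")   # abridged: session/target/source omitted
ET.SubElement(hdr, "VerProto").text = "DM/1.1"
body = ET.SubElement(msg, "SyncBody")
get = ET.SubElement(body, "Get")      # server command: read a value
ET.SubElement(get, "CmdID").text = "1"
item = ET.SubElement(get, "Item")
target = ET.SubElement(item, "Target")
# Illustrative management-object URI for the device identifier.
ET.SubElement(target, "LocURI").text = "./DevInfo/DevId"
```

The device would answer with a Results element carrying the requested value; the same Get/Replace command pattern covers configuration, software upgrade, and inventory tasks.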
[April 15, 2003] "Emerging Technology: XML - The End of Security Through Obscurity?" By Andy Dornan. In Network Magazine (April 14, 2003). ['New Web services protocols make data easy to read -- and almost as easy to hide.'] "Web services are something of a paradox. At a time when networkers are increasingly concerned about security, the entire computer industry is pushing toward the Extensible Markup Language (XML), a way to encode information so that it can easily be understood by anyone with a text editor. What's more, vendors are suggesting that network managers use XML to make their internal databases accessible from outside the firewall, and even to run executable code found on the public Internet. It sounds bad, but several new and emerging standards promise to make XML and Web services secure. Rather than undermining network security, Web services may even enhance it. The new XML-based standards are more flexible than current security technologies, allowing different cryptographic keys and algorithms to be applied to different parts of a file. They also enable true end-to-end confidentiality, keeping a message encrypted even as it passes through multiple servers. XML brings security out of its hiding place deep inside the TCP/IP stack, moving it right up to the Application layer..."
[April 15, 2003] "Making Sure Web Services Get Along. Specs Complete a Framework For Reliable, Interoperable Messaging." By Edward J. Correia. In Software Development Times (April 15, 2003). "BEA Systems Inc., IBM Corp., Microsoft Corp. and TIBCO Software Inc. in mid-March published a pair of new specifications the companies claim will improve the reliability and interoperability of messaging for Web services. Combined with the WS-Transaction and WS-Security specifications released last year, the companies said, the specs lay out a complete and interoperable framework reliable enough for mission-critical enterprise application deployment using Web services. WS-ReliableMessaging defines mechanisms for detecting duplicate messages, handling specific message-processing order and attaching levels of delivery assurance. WS-Addressing defines the use of message headers to identify and exchange references to Web services end points. The companies claim that both specs interoperate with existing messaging systems through SOAP bindings and other methods. John Kiger, BEA's director of Web services strategy, claimed the specs fill an important gap in enterprise Web services. 'There's nothing in the current SOAP specification that provides an easy way to ensure that a message is delivered or for end points to know whether a message was sent and received. This WS-ReliableMessaging specification defines a way to ensure delivery of messages and a way for both ends of a Web service to understand what the expectations are for message delivery using SOAP as a transport vehicle. In a sense, it's an expansion of the service capabilities of SOAP'..." See: (1) "New Web Services Specifications for Reliable Messaging and Addressing"; (2) general references in "Reliable Messaging."
[April 15, 2003] "MathML: Enabling Mathematical Functionality on the Web." By Ayesha Malik. In XML Journal Volume 4, Issue 04 (April 2003). "MathML's API consists of two main groups: presentation elements and content elements. Presentation elements describe a mathematical notation's visually oriented two-dimensional structure while content elements describe what the math means. MathML has about 30 presentation elements that accept 50 attributes. Most elements represent templates or patterns for laying out subexpressions... There are about 120 content elements and they accept close to a dozen attributes. Content markup facilitates applications other than display, like computer algebra and speech synthesis. The scope of content markup includes arithmetic, algebra, logic, relations, set theory, calculus, sequences, series, functions, statistics, linear algebra, and vector calculus... MathML is a powerful XML-based markup language for publishing mathematics on the Web. MathML makes it possible to develop Web-based applications for displaying, searching, indexing, archiving, and evaluating mathematical content. It consists of two basic structures: presentation markup for describing math notation, and content markup for describing mathematical objects and functions. The motivation for MathML is a world in which individuals and companies want to move away from proprietary data formats and use XML to express, interchange, and share information. With increasing tool and browser support (both Microsoft and Netscape have publicly declared support for the XML recommendation), MathML's importance and prevalence will continue to grow..." See: (1) "Last Call Draft for Mathematical Markup Language V2.0 Second Edition"; (2) general references in "Mathematical Markup Language (MathML)". [alt URL]
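The presentation/content split is easiest to see side by side. For the expression x squared, presentation markup records the layout (a base with a superscript) while content markup records the meaning (the power function applied to x and 2); the sketch below builds both with Python's ElementTree:

```python
import xml.etree.ElementTree as ET

M = "http://www.w3.org/1998/Math/MathML"

# Presentation markup: "x with a superscript 2" (layout only).
pres = ET.Element(f"{{{M}}}msup")
ET.SubElement(pres, f"{{{M}}}mi").text = "x"
ET.SubElement(pres, f"{{{M}}}mn").text = "2"

# Content markup: "the power function applied to x and 2" (meaning).
cont = ET.Element(f"{{{M}}}apply")
ET.SubElement(cont, f"{{{M}}}power")
ET.SubElement(cont, f"{{{M}}}ci").text = "x"
ET.SubElement(cont, f"{{{M}}}cn").text = "2"
```

A browser or renderer consumes the first form; a computer algebra system or speech synthesizer consumes the second.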
[April 15, 2003] "What's New in XSLT 2.0." By Jeff Kenton. In XML Journal Volume 4, Issue 04 (April 2003). "The XSLT version 1.0 language definition has been an official recommendation of the W3C since 1999. Its use has expanded dramatically in the past 18 months, for processing XML and XML/SOAP security policies and for generating HTML Web pages... Nearly as soon as the language became official, people began proposing to change it. The [revision] effort began as a version 1.1 proposal, which was abandoned in favor of the current Working Draft (WD)... XSLT 1.0 dealt with four types of data: strings, numbers, Booleans, and nodesets. XSLT 2.0 has 48 atomic (built-in) data types, plus lists and unions constructed from them. There are now 16 numeric types; 9 variations of date, time, and duration; plus hexBinary and base64Binary, among others. Users may also create others from the built-in types to suit their needs... XPath 2.0 has generalized its path expressions. Now, a primary expression (literal, function call, variable reference, or parenthesized expression) can appear at any step in the path, rather than just the first step. Basically, this allows any expression that returns a nodeset (sequence of nodes) to appear on either side of a /... One issue that has proven somewhat divisive in version 2.0 is the inclusion of schemas. There are those, including Microsoft and others, who believe that schemas are necessary to what people are doing with the language. Schemas provide both validation of data types in input documents, and clues to the XSLT processor about which data structures to expect. There are others who think it is too complicated, and that schema validation should be a separate process. There are reasonable arguments on both sides. In the end, it was decided to define a conformance level for which schema support was optional. With schema support, there is a new xsl:import-schema declaration. 
Every data type name that is not a built-in name must be defined in an imported schema. XPath expressions will be able to validate values against in-scope schemas, and will be able to use constructors and casts to imported data types. There are still issues to be resolved before the Working Drafts turn into official W3C Recommendations. Committee members have said that they hope the process will be complete by late this summer or early fall. Meanwhile, Michael Kay, who is both the editor of the XSLT 2.0 WD and the creator of Saxon, has made a version of Saxon available that supports most of the new proposal. And, of course, most suppliers of XSLT processors are working to support version 2.0 of the language as soon as it becomes official. The committees are doing a great job improving XSLT, and I expect it to be enthusiastically adopted by the XML community..." See also "Peek Into the Future of XSLT 2.0," by Charles White. For related resources, see "Extensible Stylesheet Language (XSL/XSLT)." [alt URL]
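The xsl:import-schema declaration mentioned above takes roughly the following shape under the Working Draft. The Python below only checks that the sketch is well-formed XML (the standard library cannot execute XSLT 2.0); the schema namespace and file name are invented for illustration:

```python
import xml.etree.ElementTree as ET

XSL = "http://www.w3.org/1999/XSL/Transform"

stylesheet = """\
<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:po="http://example.org/purchase-order">
  <xsl:import-schema namespace="http://example.org/purchase-order"
                     schema-location="po.xsd"/>
  <xsl:template match="/">
    <!-- types from po.xsd are now in scope for validation and casts -->
    <xsl:value-of select="/po:order/@total"/>
  </xsl:template>
</xsl:stylesheet>
"""
doc = ET.fromstring(stylesheet)
import_schema = doc.find(f"{{{XSL}}}import-schema")
```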
[April 15, 2003] "Simplifying the Development of Transactional Web Apps." By David Litwack. In XML Journal Volume 4, Issue 04 (April 2003). "XForms, the next generation of forms to be included in the XHTML standard, and now a W3C Candidate Recommendation, improves on HTML forms by cleanly separating data, logic, and presentation. This new standard will not only make development better structured, but will also pave the way for a new generation of development tools. XForms arrives as an ever-increasing percentage of information is moving across the Internet as XML. As the use of Web services increases, more business systems are being exposed using this standard. With new, easy-to-use tools to define XML integration, transformation, and mapping, there will soon be a huge repository of information and transactions available as distributed XML. The primary benefits of XForms are abstract description of presentation, independent of device; data binding between presentation components and XML instance data; and a range of interaction and logic capabilities without procedural programming. XForms provides an abstract metadata description of presentation components such as selection lists and edit boxes. At runtime, this metadata is processed by 'renderers' - server- or client-side components that translate the abstract to a specific implementation. As a result, XForms may be flexibly rendered in browsers by generating XHTML (either from the server or via a built-in or plug-in renderer), in rich clients by Java or Windows renderers, in specialized document formats such as PDF, and eventually by device-specific, vendor-supplied renderers in a variety of handheld devices... XForms presentation components may be bound to XML instance data, moving this burden to the underlying renderer. XForms will allow for end-to-end Web services, making the consumption of services as simple and high-level as the creation. 
Most dramatically, XForms will enable the next generation of development tools to be more appropriate and productive for the mainstream business developer..." General references in "XML and Forms." [alt URL]
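The data binding described above can be sketched as follows: an XForms model holds XML instance data plus bind declarations, and presentation controls elsewhere in the page reference the same nodes by XPath, with no procedural code. The snippet (Python ElementTree; instance content invented for illustration) shows the model side:

```python
import xml.etree.ElementTree as ET

XF = "http://www.w3.org/2002/xforms"

model_src = f"""\
<xf:model xmlns:xf="{XF}">
  <xf:instance>
    <order><quantity>1</quantity></order>
  </xf:instance>
  <xf:bind nodeset="/order/quantity" type="xsd:integer" required="true()"/>
</xf:model>
"""
model = ET.fromstring(model_src)
bind = model.find(f"{{{XF}}}bind")
# A control elsewhere, e.g. <xf:input ref="/order/quantity"/>, binds
# presentation to this instance data; the renderer does the rest.
```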
[April 15, 2003] "The XML.com Interview: Liam Quin." By Russell Dyer. From XML.com (April 09, 2003). ['Russell Dyer talks to Liam Quin, XML Activity Lead at the World Wide Web Consortium, XML book author, and typography and markup enthusiast.'] "Many people have contributed to the development of XML. One contributor and XML expert who stands out is Liam Quin -- author and co-author of three popular books on XML, and employee of the World Wide Web Consortium (w3c.org) as XML Activity Lead... Quin was born in Northampton, England, son of a Vicar of the Church of England. He graduated from the University of Warwick in Coventry in 1984, earning a bachelor's degree in Computer Science with honors. Early on one could say he was drawn to text. He became interested in literature, ancient text and fonts. He has a reputation in typography as a technical font guru. He is the creator and maintainer of a text retrieval program called lq-text. Quin explains that 'lq-text is useful in keeping an index of all words in a document. You can index hundreds of megabytes of text with it.' With Quin's personal interest in typography and his background in SGML and HTML, he naturally had a deep-rooted attraction to XML in the sense that it is an agreed upon method of delivering text, in a self-described manner. I asked Quin the standard question we're all asked by XML outsiders: 'What is the point of XML, what can one do with it that one couldn't do with a language like Perl?' He responded by saying, 'There's nothing one can do in XML that one can't already do in something else, except for one thing, which is working with other people and sharing data and reusing tools. By having the same format for all sorts of things, that means that the same tools can work on the same documents and you can have all sorts of tools written for the same document'..."
[April 15, 2003] "Processing RSS." By Ivelin Ivanov. From XML.com (April 09, 2003). ['Ivelin Ivanov makes light work of processing RSS files with XQuery. This is the first installment of a new regular column on XML.com, Practical XQuery. Ivelin Ivanov and Per Bothner will be publishing tips on the use of the XQuery language, as well as self-contained example applications.'] "The goal of this article is to demonstrate the use of XQuery to accomplish a routine, yet interesting task; in particular, to render an HTML page that merges RSS news feeds from two different weblogs. RSS has earned its popularity by allowing people to easily share news among and between web sites. And for almost any programming language used on the Web, there is a good selection of libraries for consuming RSS... Readers will benefit from a basic knowledge of the XQuery language; Per Bothner has written an informal introduction to XQuery. Even though XQuery started as an XML-based version of SQL, the language has a very broad application on the Web. In what follows, I will show that XQuery allows RSS feeds to be consumed and processed easily. In fact, we will see that it isn't necessary to use a specialized library. We will utilize only functions of the core language... The fact that XQuery recognizes XML nodes as first-class language constructs, combined with the familiar C-like language syntax, makes it an attractive tool for the problems it was built to solve. It must be noted that although it has a for loop structure, XQuery is a purely functional language. In short, this means that XQuery functions always return the same values given the same arguments. This is an important property of the language, which allows advanced compilation optimizations not possible for C or Java. In the past decade, functional language compilers have shown significant advantages over imperative language compilers. 
Their unconventional syntax and the inertia of imperative languages keep them under the radar of mainstream development. However, the XQuery team seems to recognize these weaknesses and is making an attempt to overcome them..." See: (1) "RDF Site Summary (RSS)"; (2) "XML and Query Languages."
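The article performs the merge in XQuery (roughly, a single FLWOR expression over the two documents); as a standard-library analog of the same task, the Python below folds the items of two inline, invented RSS feeds into one HTML list:

```python
import xml.etree.ElementTree as ET

# Two minimal, invented feeds standing in for the article's weblogs.
feed_a = "<rss><channel><item><title>A1</title></item></channel></rss>"
feed_b = "<rss><channel><item><title>B1</title></item></channel></rss>"

items = []
for feed in (feed_a, feed_b):
    items.extend(ET.fromstring(feed).findall(".//item"))

html = "<ul>" + "".join(
    f"<li>{item.findtext('title')}</li>" for item in items) + "</ul>"
print(html)  # <ul><li>A1</li><li>B1</li></ul>
```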
[April 15, 2003] "Gems From the Archives." By Uche Ogbuji. From XML.com (April 09, 2003). ['In this month's Python and XML column Uche Ogbuji hunts for treasures in the archives of the Python XML SIG, locating interesting tidbits for producing and displaying XML.'] "The Python XML SIG, particularly its mailing list, is the richest resource there is for those looking to use Python for XML processing. In fact, efforts such as XML Bookmark Exchange Language (XBEL), created by the XML-SIG in September of 1998 and now used in more browsers and bookmark projects than any other comparable format, demonstrate this group's value to the entire XML world. We're all developers here, though, and for developers there is nothing as valuable as running code. There has been plenty of running code emanating from the XML-SIG over the years, including PyXML and a host of other projects I have mentioned in this column. But a lot of the good stuff is buried in examples and postings of useful snippets on the mailing list, and not readily available elsewhere. In this and in subsequent articles I will mine the richness of the XML-SIG mailing list for some of its choicest bits of code. I start in this article with a couple of very handy snippets from 1998 and 1999. Where necessary, I have updated code to use current APIs, style, and conventions in order to make it immediately useful to readers. All code in this article was tested using Python 2.2.1 and PyXML 0.8.2... There is more useful code available in the XML-SIG archives, and I will return to this topic in the future, presenting updates of other useful code from the archives..." General references in "XML and Python."
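In the spirit of the snippets the column collects, here is one PyXML-era idiom that survives unchanged in today's standard library: producing XML with the SAX XMLGenerator. The bookmark data echoes XBEL, mentioned above, but the exact elements here are invented:

```python
from io import StringIO
from xml.sax.saxutils import XMLGenerator

out = StringIO()
gen = XMLGenerator(out, encoding="utf-8")
gen.startDocument()
gen.startElement("bookmarks", {})  # cf. XBEL, created by the XML-SIG
gen.startElement("bookmark", {"href": "http://www.python.org/"})
gen.characters("Python home page")
gen.endElement("bookmark")
gen.endElement("bookmarks")
gen.endDocument()
print(out.getvalue())
```

XMLGenerator handles declaration, attribute quoting, and character escaping, which is why the idiom outlived the hand-rolled string formatting it replaced.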
[April 15, 2003] "Axis-orizing Objects for SOAP. Go from Java Objects to a SOAP Web Service with Apache Axis." By Mitch Gitman. In Java World (April 14, 2003). "For developers wishing to bridge the gap between the Java and non-Java worlds, Apache Axis holds great promise. Axis is an open source Java framework for implementing Web services over XML-based SOAP (Simple Object Access Protocol). Yet, developers wishing to convert complex Java classes into SOAP-compliant types are likely to find their greatest challenges not with interoperability, but with Axis itself. This article guides the reader through the minefield of developing and deploying a sophisticated Web service using Axis. Coverage includes creating an Axis test client. Viewed at a high level, the article really describes a three-part procedure: (1) Develop an existing object-oriented Java interface into a Web service interface; (2) Deploy this interface and its supporting classes as an Axis Web service; (3) Develop and deploy a client based on the Web service... the first part is easy; once you realize the relationship between Java objects and their XML type representations, the JAX-RPC rulebook is almost common sense. And you can apply this conversion to any JAX-RPC-compliant Web service engine... creating a service and client on Axis are the tricky parts..."
[April 14, 2003] "New Tools Aim to Secure Corporate IT." By Sandeep Junnarkar and Martin LaMonica. In CNET News.com (April 14, 2003). "A bevy of technology companies -- both big and small -- on Monday unveiled a slew of security products aimed at protecting corporate IT infrastructure. The announcements were made in conjunction with the RSA Conference 2003, which started Sunday and runs through Thursday in San Francisco. Well-known companies at the conference like Hewlett-Packard and VeriSign and lesser-known ones such as Netegrity and Renesas Technology America are showcasing technologies to secure corporate networks, mobile devices, storage systems and Web services. VeriSign said it will introduce software that adds security to Web services applications through its hosted security service. The gateway software resides on a company's premises and connects via the Internet to VeriSign's public key infrastructure (PKI) encryption security service. Using the gateway, businesses can overlay security to Web services applications based on corporate policies, rather than write security into each individual Web service, VeriSign executives said. The software uses the emerging WS-Security standard, which was co-authored by VeriSign, for secure communications. The subscription service starts at $50,000 a year. Meanwhile, security companies Netegrity and Business Layers said they are jointly demonstrating a new XML-based product for identity management. Business Layers makes software designed to allocate and then track computing equipment and network accounts given to employees. The product is designed to allow companies to secure their Web services and to automate, centralize and manage the process of doling out employee access to internal and external corporate systems and data..." See "Business Layers and Netegrity Partner on Industry's First Demonstration of SPML at RSA Conference. 
Vendors Present First XML Specification to Leverage Web Services for Secure Federated Resource Allocation."
[April 14, 2003] "Application Vulnerability Description Language Coined." By John Leyden. In The Register (April 14, 2003). "Security vendors joined together today to back a standard for describing application security vulnerabilities. The new Application Vulnerability Description Language (AVDL), to be managed through the OASIS consortium, provides an 'XML standard to define, categorize and classify application vulnerabilities in a standardized fashion'. The language provides a way for vulnerability scanners, for example, to exchange data with application security software. OASIS has established a Technical Committee to develop the standard. The laudable aim of the standard is to reduce security management headaches, but we have our doubts about whether it will work... First, the security industry is notoriously fragmented. Unlike other market segments, there are scores of vendors selling competitive and incompatible products. Standards are very much the exception rather than the norm. Take the incompatibilities that plagued the public-key infrastructure market, the stateful inspection versus packet filtering approaches to firewalls or the more current intrusion protection versus intrusion detection debate. On the other hand, we're starting to see some sort of consensus (based on 802.1X) on an approach to wireless LAN security, but that comes from equipment vendors more than security firms. Secondly, the list of names (Citadel Security Software, GuardedNet, NetContinuum, SPI Dynamics and Teros) so far signed up for AVDL lacks the real heavy hitters. Cisco, Network Associates, ISS and Symantec don't feature... The first meeting of the full OASIS Technical Committee for AVDL has been scheduled for May 15, 2003. The first candidate AVDL specification will be posted for comment during Q3'03, with final spec due before the end of the year. Additional information on AVDL is available [online]..." 
See: (1) "OASIS Members Collaborate to Address Security Vulnerabilities for Web Services and Web Applications"; (2) OASIS TC to Standardize Classification of Application Vulnerabilities; (3) the news story "OASIS Forms TC for Application Vulnerability Description Language (AVDL)"; (4) AVDL TC Proposal; (5) "Application Security" (general reference document).
[April 14, 2003] "Standards Organizations Share the Stage at RSA. Security Event to Host Liberty Alliance, XML Spec News." By Paul Roberts. In InfoWorld (April 14, 2003). "At the RSA Conference, three separate technology standards organizations will be making announcements: Perhaps the highest profile development on the standards front comes from The Liberty Alliance Project, which appears ready to release draft specifications for the next phase in its network identity architecture on Monday. Representatives from eighteen 'big name' companies will be on hand to demonstrate interoperability among systems based on Liberty Alliance specifications, according to [Simon] Nicholson. Those services will range from projects that are in the 'late beta' phase of testing to those that are in production, and they will highlight how the Liberty Alliance specifications can be used to make user sign-on easier and less complicated, he said... Also at RSA, a group of application security vendors affiliated with the Organization for the Advancement of Structured Information Standards (OASIS) will announce a proposal for a new XML standard for application vulnerabilities. The group, made up of Citadel Security Software, GuardedNet, NetContinuum, SPI Dynamics, and Teros, is promoting the development of the Application Vulnerability Description Language (AVDL), which is intended to standardize information about application vulnerabilities, enabling different products to share vulnerability information in a heterogeneous network environment, according to a statement released by the five companies. The AVDL group submitted its idea to OASIS for study. In turn, OASIS has created a technical committee to develop an XML definition for exchanging information on the security vulnerabilities of applications exposed to networks. A draft specification from the AVDL Technical Committee is scheduled for September, with a final specification due in December, according to OASIS... 
The Information Systems Security Association (ISSA) is making what it calls a 'historic announcement' at RSA. The group, an international non-profit organization made up of information security professionals and practitioners, will announce its intention to take over and complete development of the Generally Accepted Information Security Principles (GAISP). The announcement is quite significant, according to Mike Rasmussen, vice president of marketing for ISSA and an analyst at Forrester Research. 'What GAAP [Generally Accepted Accounting Principles] is for the accounting world, [GAISP] is trying to be for the security world,' Rasmussen said..." See details in: (1) "Liberty Alliance Contributes Phase 1 Network Identity Specifications to OASIS for Consideration in SAML 2.0"; (2) "Leading Application Security Vendors Propose New XML-Based Interoperability Standard Through OASIS. Application Vulnerability Description Language Will Enable Easy Communication Between Products That Find, Block, Fix, and Report Application Security Vulnerabilities."
[April 11, 2003] "Liberty Alliance Submitting Spec to OASIS. Turning Work Over to Standards Body for First Time." By John Fontana. In InfoWorld (April 11, 2003). "Liberty will announce at next week's RSA Conference that the first phase of its work, which was completed in June 2002 and updated in January, will be turned over to the Organization for the Advancement of Structured Information Standards (OASIS). The first phase, which was renamed Identity Federation Framework (ID-FF) in March, is basically Liberty's Version 1.1 specification that outlines single sign-on and account sharing between partners with established trust relationships. The Liberty move may be a reaction to IBM Corp. and Microsoft Corp., which are not Liberty members but are trying to create their own federated identity management framework built on WS-Security, an evolving Web services standard they created and submitted to OASIS... Draft specifications for Liberty's second and third phases of work, which now incorporate the WS-Security protocol for securing Web services messages, also will be introduced at RSA and will outline how to build a permission framework and sets of services for user identities that can be shared across the Internet. The second phase of Liberty's work, called Identity Web Services Framework (ID-WSF), will allow islands of trusted partners to link to other islands of trusted partners and provide users with the ability to control how their identity information is shared. Phase 3, called Identity Services Interface Specifications (ID-SIS), will build services on top of ID-WSF. The two draft specifications are not being submitted to OASIS at this time but will be opened to the usual public review. 'I think it is significant that Liberty is ready to open up to a wider world than its own group,' says Prateek Mishra, co-chair of the Security Services technical committee at OASIS and director of technology and architecture at Netegrity, a Liberty Alliance member. 
Liberty's Version 1.1 specification will become a foundation document to help create Version 2 of OASIS's Security Assertion Markup Language (SAML), according to sources. SAML 1.0 is a standard for exchanging authentication and authorization information and is incorporated into and extended by Liberty's Version 1.1. The hope is that ID-WSF and ID-SIS will eventually extend SAML 2.0 to create a single standards-based environment for federated identity and sharing of identity credentials..." On Liberty Alliance specs and SAML, see "The Liberty Alliance." See also: (1) "Liberty Alliance Specifications for Federated Network Identification and Authorization"; (2) "Security Assertion Markup Language (SAML)."
[April 11, 2003] "Web Services Security: Non-Repudiation." Edited by Eric L. Gravengaard (Reactivity, Inc). Contributors: Grant Goodale, Michael Hanson, Brian Roddy, and Dan Walkowski (Reactivity). Submitted to the OASIS Web Services Security TC. Document identifier: 'web-services-non-repudiation-05'. 11-April-2003. 19 pages. Posted to the WSS Mailing List 11-April-2003 by Eric Gravengaard (Reactivity, Inc). The Web Services Security: SOAP Message Security specification defines the usage of XML Digital Signatures within a SOAP header element to prove the integrity of a SOAP message. While this is useful in the context of non-repudiation to the receiver, it does nothing to guarantee to the sender that the message was delivered properly and without modification. Similarly, when the SOAP requestor receives the SOAP response message there is no way of proving that the SOAP response was generated after receiving and processing the SOAP request. This specification extends the use of XML Digital Signature in the context of WSS: SOAP Message Security to allow senders of SOAP messages to request message disposition notifications that may optionally be signed to prove that the receiver received the SOAP message without modification. The specification also defines a method for embedding SOAP message dispositions in a SOAP message header. This specification constitutes a protocol for voluntary non-repudiation of receipt that, when used systematically, provides cryptographic proof of both parties' participation in a transaction. This specification does not define any mechanism to prove receipt of a message by a non-conformant implementation. Editor's comment: "Reactivity would like to submit this document to the TC for consideration and inclusion in the Web Services Security: SOAP Message Security specification. The Web Services Security: Non-Repudiation proposal (WSNR) defines a standard mechanism for voluntary non-repudiation of receipt. 
The goal of this proposal is to enable the exchange of SOAP messages in an environment where the SOAP Message sender has cryptographic proof that the SOAP Message responder received the request unaltered. This proposal makes use of the XML Signature specification to provide cryptographic proof of integrity and the WSS:SOAP Message Security Core to allow the transport of both receipt requests and receipts within a <Security> header. This submission is made under the OASIS rules regarding intellectual property rights. Reactivity intends the contents of this document to be available for license royalty free..." General references in "Web Services Security Specification (WS-Security)." [source]
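The receipt exchange described above can be illustrated with a toy sketch. The actual WSNR proposal uses XML Signature inside a WSS <Security> SOAP header; here a shared-key HMAC stands in for the signature, and all names, the key handling, and the receipt format are invented for illustration, not taken from the specification. The core idea survives the simplification: the receiver returns a signed disposition over the message digest, and the sender verifies it against the message as originally sent.

```python
import hashlib
import hmac

# Placeholder key: the real proposal would use the receiver's signing
# credentials via XML Signature, not a pre-shared secret.
SHARED_KEY = b"demo-signing-key"

def make_receipt(message: bytes, key: bytes = SHARED_KEY) -> str:
    """Receiver side: produce a signed disposition over the message digest."""
    digest = hashlib.sha256(message).hexdigest()
    signature = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return digest + ":" + signature

def verify_receipt(message: bytes, receipt: str, key: bytes = SHARED_KEY) -> bool:
    """Sender side: confirm the receiver saw the message unaltered."""
    digest, signature = receipt.split(":")
    if digest != hashlib.sha256(message).hexdigest():
        return False  # receipt covers a modified message
    expected = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)
```

If the message is altered in transit, the digest in the returned receipt no longer matches what the sender transmitted, which is precisely the "received without modification" guarantee the proposal is after.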
[April 10, 2003] "Technology Update: WebDAV Secures Collaboration." By Lisa Dusseault (Xythos Software; Co-Chair of the IETF WebDAV Working Group). In Network World (April 07, 2003). "Web-based Distributed Authoring and Versioning is an extension of HTTP that lets users collaborate via the Internet. The Internet Engineering Task Force approved it as a standards-track specification in 1998, and it has been deployed widely on multiple platforms and in applications from many vendors. WebDAV can be found in Web servers such as Apache and Microsoft Internet Information Server and now is also supported by leading document and content management vendors. WebDAV functionality also is embedded in common desktop operating systems, including Windows and Mac OS X, and popular applications from Adobe, Lotus, Microsoft and others... WebDAV access-control lists provide advanced control over read, write and sharing permissions for every file, further improving system security. Analysts recently have suggested that the file management features in WebDAV can make it a cost-effective alternative to traditional document management products. WebDAV imposes a common data model that includes collections, resources, locks and properties, and defines a common syntax using HTTP messages with custom methods, headers and bodies. Extending HTTP, WebDAV defines several methods for file management, such as Copy and Move, and Mkcol for creating new Web folders. The Lock and Unlock methods let a document be protected while the author makes changes. The Propfind and Proppatch methods let folders be browsed and offer flexible management of metadata. All these methods operate on HTTP resources, so any Web server that supports WebDAV provides an integrated system for secure authoring... So, where is WebDAV headed? It's quite possible that WebDAV will remain almost invisible to most users as it becomes part of everyday applications. 
The protocol is fulfilling its promise of extending current file systems beyond the LAN to include just about any user or resource on the Internet..." See: (1) WebDAV Resources; (2) "WebDAV (Extensions for Distributed Authoring and Versioning on the World Wide Web)."
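The Propfind method mentioned above is a good concrete example of WebDAV's "HTTP messages with custom methods, headers and bodies": the client sends an XML body naming the properties it wants, and the server answers with a 207 Multi-Status response. A minimal sketch using Python's standard library follows; the `DAV:` namespace and the `displayname`/`getlastmodified` property names are from the WebDAV specification, while the sample response document is fabricated for illustration.

```python
import xml.etree.ElementTree as ET

DAV = "DAV:"  # the WebDAV XML namespace

def propfind_body(props):
    """Build the XML body of a PROPFIND request for the named DAV: properties."""
    root = ET.Element(f"{{{DAV}}}propfind")
    prop = ET.SubElement(root, f"{{{DAV}}}prop")
    for name in props:
        ET.SubElement(prop, f"{{{DAV}}}{name}")
    return ET.tostring(root, encoding="unicode")

# A fabricated 207 Multi-Status response such as a server might return:
multistatus = """<D:multistatus xmlns:D="DAV:">
  <D:response>
    <D:href>/docs/report.txt</D:href>
    <D:propstat>
      <D:prop><D:displayname>report.txt</D:displayname></D:prop>
      <D:status>HTTP/1.1 200 OK</D:status>
    </D:propstat>
  </D:response>
</D:multistatus>"""

# Collect the resource URLs listed in the response.
hrefs = [h.text for h in ET.fromstring(multistatus).iter(f"{{{DAV}}}href")]
```

Because both request and response are ordinary XML over HTTP, any generic HTTP client plus an XML parser is enough to browse a WebDAV folder, which is a large part of why the protocol embedded so easily into desktop operating systems and applications.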
[April 08, 2003] "Texas Counties Pilot Online Court Filing System." By Dibya Sarkar. In Federal Computer Week (April 08, 2003). "Two Texas counties are currently piloting a Web-based filing system for state and local courts that will be jointly developed by BearingPoint Inc. and Microsoft Corp. The companies are offering the product as a managed service so attorneys can file any type of case document, whether criminal or civil, simple or complex, said Frank Giebutowski, Microsoft's general manager for state and local government. Because it's a managed service, courts don't have to make major capital investments in such a system, he said. Gary Miglicco, BearingPoint's managing director of national e-government solutions, said attorneys would register with the service and pay fees for filing cases round-the-clock. It could eliminate the use of couriers who normally file documents physically at the courts. He said filers also can track the status of their filed document, checking to see whether it's been delivered, if it's being reviewed, or if it has been accepted by the court, similar to the way UPS lets customers view the status of their packages. The companies signed a deal with the TexasOnline Authority, the state portal's governing body, in January 2002 and began development last summer. They launched pilots in Fort Bend and Bexar counties last November and plan to expand to another four counties this summer and then nationwide... 'Use of the service by our courts can save attorneys time, reduce total filing costs, and assist courts in becoming more efficient -- this is especially critical in these challenging economic times,' said Carolyn Purcell, Texas's chief information officer, in a press release. The service, called eFiling for Courts, is an open solution and can interface with any solution, said company officials. 
It is built on Microsoft technologies, including Windows Server 2000, BizTalk Server, SQL Server database, Internet Security and Acceleration Server, and Visual Studio .NET. The service uses LegalXML (Extensible Markup Language) standard schema and Web services standards like Simple Object Access Protocol..." See: (1) LegalXML Electronic Court Filing TC; (2) "Microsoft and BearingPoint Team to Launch Innovative Electronic Filing Solution to State Courts Nationwide. Solution Uses Microsoft .NET Software to Ensure Efficient Exchange of Information Among Attorneys and State Courts."
[April 08, 2003] "W3C Advances Semantic Web Drafts." By Paul Festa. In CNET News.com (April 08, 2003). "Aiming to rehabilitate both the technology and the image of its Semantic Web initiative, the Web's leading standards group has issued a number of updates and promised a spring and summer education campaign. The World Wide Web Consortium (W3C) last week issued five updated working drafts for the Web Ontology Language (OWL): the OWL Overview, Guide, Reference, Semantics and Abstract Syntax, and Use Cases and Requirements. Altogether, the specifications provide the most detailed layer of the W3C's model for describing data on the Web so that computers can 'understand' more about what the data is and how it fits in context with other data. The OWL releases join another six Semantic Web updates that the W3C made early in the year of the Resource Description Framework (RDF): the RDF Concepts and Abstract Syntax; Semantics; Primer; Vocabulary Description Language 1.0: RDF Schema; the revised RDF/XML Syntax Specification; and the Test Cases... 'We're trying to make the Semantic Web as easy as it is now to join relational database tables,' said Eric Miller, activity lead for the W3C's Semantic Web Activity and a research scientist at the Massachusetts Institute of Technology. 'The Semantic Web technologies are designed to enable data from different places to be seamlessly integrated.' Miller described the year's upgrades as an 'overall cleanup,' explaining that testers had found previous versions lacking in terms of stability and interoperability... Miller also defended the W3C's continuing investment in the Semantic Web while businesses are still demanding more work on Web services standards. The two technical areas should be considered complementary and not competitive, Miller said, comparing them to the Web's basic transmission and markup languages..." 
See: (1) details in "W3C Web Ontology Working Group Publishes Last Call Working Drafts"; (2) "OWL Web Ontology Language"; (3) "XML and 'The Semantic Web'".
[April 08, 2003] "Driving Content Management with Digital Rights Management." By Renato Iannella and Peter Higgs. White Paper. IPR Systems Pty Ltd. April 09, 2003. 9 pages. "There is often confusion in the market between the functions of Content Management and Digital Rights Management systems. This is understandable as both are dealing with the production and supply of digital content and share common technologies and techniques. However, they are best thought of as two wheels of the same bicycle: both are necessary to have a valuable and practical marketplace that brings content and relevant usage rights from creators to customers... Content Management has evolved and has now been segmented into Enterprise Content Management, Digital Asset Management, Media Asset Management, and Web Content Management. Similarly, IPR Systems divides DRM into: (1) Digital Property Management (DPM), and (2) Digital Rights Enforcement (DRE). Digital Rights Management (DRM) involves the description, layering, analysis, valuation, trading and monitoring of the rights over an enterprise's assets; both in physical and digital form; and of tangible and intangible value. DRM covers the digital management of rights -- be they rights in a physical manifestation of a work (e.g., a book), or be they rights in a digital manifestation of a work (e.g., an ebook). Clearly, systems that manage and supply content need to interface to, or be closely coupled with, systems that manage rights... As content is created and managed (e.g., version control, digitisation etc), traded via an ecommerce exchange, and delivered to the consumer, appropriate rights information is also captured and managed in parallel. In many cases the CM functions and the DRM functions have high dependencies, such as the protection of the content at the consumer end of the transaction. 
The terms and conditions agreed on in the trade will need to inform the content rendering systems to ensure that the content is only used for the purposes acquired... Current rights management technologies are focused on managing downstream rights: the flow of content from a publishing organisation to consumers. This flow predominantly involves relatively simple usages or 'passive consumption.' For example, the sale of music content, in which the end consumer can only play the audio file. A more complex market requirement is the management of upstream (content sourcing) agreements which can then be aggregated, managed and transformed into downstream usage offers and agreements. A key feature of managing online rights will be the substantial increase in this re-use of digital material on the Web. The pervasive Internet is changing the nature of distribution of digital media from a passive one-way flow (from Publisher to the End User) to a much more interactive cycle where creations are re-used, combined and extended ad infinitum. At all stages, the rights need to be managed and honoured with services of various degrees of sophistication... The next two years will see an explosion in the production and distribution of digital content over the Internet. Much of this content will be re-used, aggregated and transformed to increase its applicability to its various markets. Content Management systems alone will not be able to cope with this need without addressing the rights management requirements. There are applications that are currently being developed that couple Rights Enabled Content Management Systems with open, standards based Rights Management Systems. When the integration is completed they will be at the forefront of the revolution in digital content marketing..." Related references: (1) "Open Digital Rights Language (ODRL)"; (2) "XML and Digital Rights Management (DRM)."
[April 08, 2003] "AIIM Zeroes in On Content Control. Vignette, Atomz, FatWire Roll Out Offerings." By Cathleen Moore. In InfoWorld (April 08, 2003). "Content management wares took center stage at the AIIM 2003 Conference in New York, as vendors unveiled offerings designed to pull together content, portal, and process management capabilities. To that end, Vignette at the show took the wraps off an integrated content management and portal package dubbed the Vignette Application Suites. Incorporating portal technology from its acquisition of Epicentric last year, the Application Suites attempt to provide out-of-the-box content, integration, business process, and collaboration services in a single management platform... Ektron previewed the forthcoming release of its Ektron CMS300 Version 2.5. The updated XML-based Web CM system features support for XSLT, XML validation through DTDs and Schemas, multi-step workflow, and an open API for extending the product to other applications. The system is designed to allow non-technical business users to create XML content in a word processor or forms-like environment... Atomz, meanwhile, launched Atomz Alert, a Web-based service designed to let companies deliver e-mail content alerts to their Web site visitors; with Atomz Alert, site visitors register to be notified when new site content about a specific topic is available... Interwoven and iManage at the show announced the general availability of their jointly developed CDM (Collaborative Document Management) product, which was first unveiled in February. CDM combines collaborative content management technology from iManage with Interwoven's enterprise content management software in an effort to address the overall collaborative document life cycle in enterprises..."
[April 08, 2003] "Sun Wastes Little Time Preparing WS-I 'Wish List'." By Vance McCarthy. In Integration Developer News (April 07, 2003). "Sun Microsystems is wasting little time mapping out its agenda for contributing to the WS-I (Web Services Interoperability Organization), a multi-vendor web services group of more than 160 vendors co-founded by Microsoft and IBM. Sun's long campaign for a seat on WS-I's board came to a successful close last week, as Mark Hapner, a Sun distinguished engineer and chief web services strategist for the company, was voted to a two-year term. Following the vote, Hapner [said] that he felt 'WS-I had made a very good start,' but added that Sun intended to use its new influence as a board member to push for a few issues. 'Our job is to represent the Java and J2EE developer community, and that is what we will do,' Hapner said. Java vendors IBM, BEA, Borland and Rational are all members of WS-I. While WS-I execs are publicly hoping that Sun's membership on the group's board will speed development of web services standards, it's equally possible that Sun could actually slow things down, depending on how you look at it... Hapner wants WS-I to improve the way the group interacts with formal standards bodies, notably the W3C and OASIS. He noted that only two pending web services standards (WS-Security and DIME) had been presented to a formal standards body, and claimed that proposals advocated by WS-I members (including the WS family of standards being proposed by IBM and Microsoft) should also be submitted to standards groups... [Hapner also wants] stricter guidelines for cross-platform interoperability: 'WS-I's approach to interoperability between platforms may need some extra work,' and, as a result, he said he intends to push WS-I to make its conformance procedures more strict, and overall 'improve the quality of platform conformance.' 
Under the current WS-I approach, Hapner claimed, there are some 'very simple ways' for developers to get caught in building non-conformant specs..." General references in "Web Services Interoperability Organization (WS-I)."
[April 08, 2003] "Fast XSLT." By Steve Punte. From XML.com (April 02, 2003). ['The article focuses on the need for speed in XSLT. The widespread adoption of the W3C's XML transformation language has led to a demand for fast, conformant XSLT processors. One way of achieving speed is to pre-compile XSLT stylesheets. Steve Punte concentrates on that technique in "Fast XSLT." Steve tracks the emergence of Apache XSLTC and its modern competitor, Gregor.'] "This article reviews the birth and development of the promising compiled-XSLT engine, Apache XSLTC, and the fierce competition among developers of XSLT engines to be the performance leader... Apache XSLTC began in early 2000 at Sun Microsystems, first with Jacek Ambroziak and eventually turning into a small team. XSLTC is most significantly differentiated from other transformer engines by its distinctive two-phase operation. The XSL stylesheet is first compiled into a Java class called a 'translet' ('transformer servlet', not related to a web application servlet). This stand-alone translet can be loaded as a Java class and run in a different process or even on an entirely different machine. The core translet accepts a DOM-like object for input and produces a SAX-like output stream. Adaptors are provided to convert to all conventional standards. But Jacek Ambroziak has wiped the plate completely clean and started a new XSLT engine, which he calls 'Gregor', at his new firm Ambrosoft. As Jacek says, 'In the case of XSLTC (and especially Gregor) it is not only compilation but sets of particular algorithmic ideas that matter a lot for performance (speed/memory)'. In early testing, Gregor is tentatively showing nearly ten times the performance of Xalan-J and twice that of XSLTC; but it is not a complete product yet. Benchmarks can be misleading... In the end, the question always remains, 'are there sufficiently radical and advantageous architectural approaches that can improve performance yet unexplored and untapped?' 
In the case of XSLT engines, the answer is 'probably', since it's a very young field... Compilation to bytecode is just one of many tactics being utilized in the XSLT performance race. It is possible that additional significant gains will also be obtained in other areas such as intelligent pre-optimization of stylesheets, clever internal data structures, or even a more HotSpot-like run-time optimizer. Most likely the field will remain competitive, each product achieving improvements and gains on a year-to-year basis, new contenders coming onto the scene, and some old ones slipping off. Performance is difficult to quantify and predict and highly dependent upon the exact customer usage. For performance in critical situations it is recommended that a small handful of the dominant engines be tested in a particular application before a final decision is made..." For related resources, see "Extensible Stylesheet Language (XSL/XSLT)."
[April 08, 2003] "Conditional Execution." By Bob DuCharme. From XML.com (April 02, 2003). ['Bob DuCharme is back for his monthly Transforming XML column. Bob explains the use of the conditional constructs xsl:if and xsl:choose in order to selectively execute part of a stylesheet.'] "Most programming languages provide some means of conditional execution, which allows a program to execute an instruction or block of instructions only if a particular condition is true. Many programming languages do this with if statements; the XSLT equivalent is the xsl:if instruction. There's ultimately only one instruction that can or can't happen in XSLT based on the boolean value of an xsl:if element's test expression: nodes get added to the result tree or they don't. The addition of nodes to the result tree is the only end result of any XSLT activity, and the xsl:if element gives you greater control over which nodes get added than template rule match conditions can give you. For example, an xsl:if instruction can base its behavior on document characteristics such as attribute values and the existence (or absence) of specific elements in a document. Many programming languages also offer case or switch statements. These list a series of conditions to check and the actions to perform when finding a true condition. XSLT's xsl:choose element lets you do this in your stylesheets..." For related resources, see "Extensible Stylesheet Language (XSL/XSLT)."
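The two constructs DuCharme discusses can be shown in a short stylesheet fragment. The xsl:if, xsl:choose, xsl:when, and xsl:otherwise elements are standard XSLT 1.0; the "price" vocabulary and output elements here are invented for illustration, not taken from the article.

```xml
<!-- Hypothetical fragment illustrating xsl:if and xsl:choose -->
<xsl:template match="price" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- xsl:if: add nodes to the result tree only when the test is true -->
  <xsl:if test="@currency != 'USD'">
    <note>foreign currency</note>
  </xsl:if>
  <!-- xsl:choose: the first true xsl:when wins; xsl:otherwise is the fallback -->
  <xsl:choose>
    <xsl:when test=". &gt; 100"><band>expensive</band></xsl:when>
    <xsl:when test=". &gt; 10"><band>moderate</band></xsl:when>
    <xsl:otherwise><band>cheap</band></xsl:otherwise>
  </xsl:choose>
</xsl:template>
```

Note that, as the article stresses, both constructs only control which nodes reach the result tree; there is no other side effect for them to have.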
[April 08, 2003] "XML Isn't Too Hard." By Kendall Grant Clark. From XML.com (April 02, 2003). ['Kendall Clark's XML-Deviant column this week returns to the controversy sparked by Tim Bray's claim that XML was "too hard for programmers." Kendall focuses on reaction from XML experts on XML-DEV, and on Bray's follow-up essay, "Why XML Doesn't Suck."'] "... I examined the XML development community's reaction to some recent questions posed by Tim Bray about a perennial bugaboo, XML parsing strategies. To my surprise, the conversation about this issue hasn't faded away and warrants another look, especially since feedback from the XML development community and elsewhere has prompted Bray to compose another missive, 'Why XML Doesn't Suck'... 4 Out of 5 XML Programmers Agree: XML Isn't Too Hard... Among the recent contributors to this ongoing conversation, Liam Quin defended some of Bray's original claims, particularly the one about parsing XML with regular expressions. What Bray was suggesting, in Quin's view, was that Perl 6's expanded regular expression facilities might make some XML parsing jobs easier... But whether Perl 6 gains an even more powerful regular expression engine, and whether that engine spreads from Perl to other languages, is in some ways beside the point. As Sean McGrath pointed out, the discussion seems to oscillate between two poles, namely, 'correctness or input fidelity'. If you process XML with regular expressions, you can know that such processing will not change things like 'entity references, whitespace, attribute delimiters' and so on. But you can't know that your processing will work with every well-formed input. On the other hand, if you process XML with an XML parser, you can know that it will work with every well-formed input. But you can't know that this processing will not alter (or otherwise 'negatively affect') 'entity references, whitespace, attribute delimiters' and so on. 
This conundrum, as expressed by Sean McGrath, sets the programmer on a path past the 'Scylla and Charybdis' of XML processing, namely, parsing or regular expressions. The point of McGrath's tale is to wonder whether XML programmers can have both correctness and input fidelity. Several people answered that you could have both, including Simon St. Laurent..."
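McGrath's correctness-versus-fidelity trade-off is easy to demonstrate concretely. In this sketch (the document and pattern are invented; Python's standard xml.etree stands in for "an XML parser"), a regex edit leaves the untouched bytes exactly as they were but only works on input shaped as the pattern expects, while the parser round-trip accepts any well-formed input but re-serializes, normalizing the attribute delimiters along the way.

```python
import re
import xml.etree.ElementTree as ET

doc = "<note priority='high'>Fish &amp; chips</note>"

# Regex edit: surgical; everything the pattern doesn't match stays
# byte-for-byte intact (single quotes, entity reference, whitespace).
regex_result = re.sub(r"priority='high'", "priority='low'", doc)

# Parser round-trip: works for any well-formed input, but the output is
# a re-serialization -- note the attribute delimiters become double quotes.
tree = ET.fromstring(doc)
tree.set("priority", "low")
parser_result = ET.tostring(tree, encoding="unicode")
```

Here `regex_result` keeps the original single-quoted attribute while `parser_result` comes back double-quoted: correct on all inputs, but not faithful to the input's lexical details.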
[April 08, 2003] "KXML: A Great Find for XML Parsing in J2ME." By Robert Cadena. From DevX.com (April 04, 2003). "Enhydra's kXML is a great little XML parser with a small footprint, making it perfect for J2ME apps. It uses a unique method of DOM manipulation and parsing called pull parsing. Find out whether kXML is a must-have for your toolbox. kXML is a compact library designed for use on J2ME devices, though it may be used in other contexts where a small XML parser is needed, for example, with applets. kXML, a project maintained by the Enhydra organization, supports the following features: (1) XML namespace support; (2) 'Relaxed' mode for parsing HTML or other SGML formats; (3) Small memory footprint [~21 KB]; (4) Pull-based parsing; (5) XML writing support; (6) Optional DOM support; (7) Optional WAP support. In this article I'll go into detail about a few of these features, specifically pull parsing and DOM manipulation, and I'll tell you how to check the effect of the kXML processing on memory. Included with this article are two MIDlet examples with full source that show you how to use kXML... There are two common ways of working with XML: manipulating the DOM or catching parsing events. Manipulating the DOM is a simple way of interacting with XML where the entire XML tree is parsed into a node structure that resides in memory and you can traverse the tree programmatically. It is very simple to use, but because the entire tree resides in memory -- as well as any objects needed to traverse it -- it is memory intensive. In the second method, catching parsing events, the parser traverses the XML data and issues callbacks to a previously registered event listener whenever it encounters particular structures in the data. For example, when the parser encounters an opening tag such as <html> then the event listener would receive an event notifying it of the encounter and pass it any necessary information. 
A parser that implements such a strategy is called a push parser because the parser is 'pushing' the event to a listener. kXML supports DOM parsing and manipulation but not push parsing. Instead, it uses a slightly different method called 'pull' parsing. In contrast to push parsing, pull parsing lets the programmer 'pull' the next event from the parser. In push parsing you would have to maintain the state of the current part of the data you were parsing, and based on the events passed to the listener you would have to take care to restore any previous state variables and save new ones when you were changing to a different state. Pull parsing makes it easier to deal with state changes because you can pass the parser to different functions, which can maintain their own state variables..."
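The control-flow inversion the article describes is not specific to kXML or J2ME. Python's standard library has a close analogue in xml.dom.pulldom, where the caller iterates events at its own pace and can expand a subtree on demand, much like kXML's pull-plus-optional-DOM combination (the kXML API itself differs; the document below is invented for illustration):

```python
from xml.dom import pulldom

xml_src = "<playlist><song>One</song><song>Two</song></playlist>"

titles = []
events = pulldom.parseString(xml_src)
for event, node in events:
    # The caller asks for the next event; no callback registration needed.
    if event == pulldom.START_ELEMENT and node.tagName == "song":
        events.expandNode(node)          # pull the whole subtree on demand
        titles.append(node.firstChild.data)
```

Because the loop owns the control flow, it can simply hand `events` to a helper function when it enters a new element, and that helper keeps its own local state, which is exactly the ease-of-state-handling argument made for pull parsing above.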
[April 08, 2003] "Making Progress in Web Services." By Lisa Vaas. In eWEEK (April 08, 2003). "The Progress Company plans to announce OpenEdge, a development platform for creating applications that consume and generate Web services, at its annual users' conference next week in Boston. Progress will release OpenEdge as a series of products that will consist of a database layer, an application server to interact with the database, a client piece to interact with the application server, a development framework in which to build applications, Progress' Fathom family of application management tools and integration services delivered by Progress' sister company, Progress Sonic. OpenEdge will also feature a component called Web Services Technology Preview that will let customers re-use legacy code to create Web Services. OpenEdge supports Microsoft Corp.'s .Net framework by offering partners the option to integrate the native .Net user interface with Progress-driven business logic. Developers will be able to choose from a variety of user interface techniques and technologies via a collection of business logic components. Officials said that this will enable end users of applications created by Progress' application partners to tailor how their desktops look and function. In the past, Progress technology has focused on turning out interfaces for users of applications such as point-of-sale registers. With .Net integration, applications can be built that feature any type of user interfaces, such as those designed for the Web, devices or the .Net interface, for example..."
[April 08, 2003] "Will the Real Reliable Messaging Please Stand Up? A.K.A. WS-Reliability, WS-ReliableMessaging, or WS-ReliableConundrum?" By Dave Chappell (Vice President and Chief Technology Evangelist, Sonic Software). Sonic Technical Report (Position Paper). April 02, 2003. 9 pages. Text supplied by the author. "... The need for open standards for reliable Web services has become so widely recognized that we now have three competing sets of SOAP-based reliable messaging out there. All were recently announced, and all are named under the de facto branding of 'WS-*'. There is the OASIS WS-Reliability spec that a large number of vendors, including Sonic, are a part of. There is a one-vendor set of specs announced by BEA recently, known as WS-Acknowledgement, WS-Callback, and WS-MessageData. Then less than two weeks after that announcement, along came another competing specification announced jointly by Microsoft, IBM, BEA, and TIBCO, known as WS-ReliableMessaging and WS-Addressing. So it would seem that BEA is even competing with itself in this area. On the plus side, it's a pleasure to see that reliable message-based communications -- a subject that has long been very near and dear to my heart -- has finally become a widely recognized requirement for Web services. So much so that we are witnessing a 'land grab' for mindshare and thought leadership in this area. The world's largest vendors are doing battle in the press, making claims about superior technical prowess, and making accusations about being 'proprietary'. The downside is that the current situation of multiple overlapping specifications is bound to cause further fracturing in the marketplace. The chair of the OASIS Reliable Messaging Technical Committee, Tom Rutt of Fujitsu, has publicly stated an open invitation for all of these to converge under the OASIS TC, and remains optimistic about that happening. Unfortunately, politics may prevail. 
What the user community, and most of the vendor community, wants is one spec that everyone can point at and feel good about, so we can all move on to larger issues. After careful examination, I have come to the conclusion that all of the new specs by-and-large cover the same ground. They all do SOAP-based reliable messaging via acknowledgements, and have varied levels of Quality of Service (QoS) options like at-most-once and at-least-once. All have an ability to specify a URI (URL) as a place to receive asynchronous callbacks, and a message-ID-based mechanism for correlating asynchronous requests with asynchronous 'responses'. The smaller differences include things like duplicate-elimination, acknowledgement timeouts, or purported support for WS-Security. The following is a summary of the three initiatives... Reliable messaging is a cornerstone of any enterprise-capable integration strategy. Regardless of how this WS-Reliable-Conundrum turns out, the world needs to start focusing on the larger issues. Beyond the base reliable messaging protocol, you still need to think about things like how the messages are orchestrated together, how the XML messages get cached and aggregated, and what the buckets are to place things in when good messages go bad. The rapidly emerging ESB category encompasses these types of issues, and allows for multiple reliable protocols to coexist. In the end it's all about the infrastructure that holds it all together in a platform-independent fashion." (1) "OASIS Members Form Technical Committee for Web Services Reliable Messaging"; (2) "BEA Releases Web Services Specifications Supporting Asynchrony, Reliable Messaging, Metadata"; (3) "New Web Services Specifications for Reliable Messaging and Addressing." General references in "Reliable Messaging." [source .DOC]
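All three initiatives rest on the same acknowledgement-and-retry core Chappell describes. As a rough illustrative sketch (none of the class or function names below come from any of the specs), at-least-once delivery with message-ID-based acknowledgement and duplicate elimination can be modeled like this:

```python
# Hypothetical sketch of the common ground the competing specs share:
# acknowledged delivery keyed on a message ID, with duplicates filtered.

class ReliableEndpoint:
    """Receiver that acknowledges every message and filters duplicates."""

    def __init__(self):
        self.seen_ids = set()     # IDs already delivered (duplicate elimination)
        self.delivered = []       # payloads handed to the application

    def receive(self, msg_id, payload):
        ack = {"ack_for": msg_id}          # always acknowledge receipt
        if msg_id not in self.seen_ids:    # deliver at most once per ID
            self.seen_ids.add(msg_id)
            self.delivered.append(payload)
        return ack

def send_at_least_once(endpoint, msg_id, payload, max_retries=3):
    """Sender keeps retrying until an acknowledgement arrives."""
    for _ in range(max_retries):
        ack = endpoint.receive(msg_id, payload)   # redelivery is harmless:
        if ack["ack_for"] == msg_id:              # the receiver drops duplicates
            return True
    return False

ep = ReliableEndpoint()
ok = send_at_least_once(ep, "m1", "order#42")
ep.receive("m1", "order#42")          # a duplicate redelivery
print(ok, ep.delivered)               # True ['order#42']
```

Because acknowledgement and duplicate elimination are separable, a sender can retry freely (at-least-once) while the application still sees each message once, which is the QoS combination all three spec families offer.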
[April 08, 2003] "Web Service Reliability." Nokia Technical Report. 18 pages. Posted by Szabolcs Payrits 2003-04-04 to the Web Services Reliable Messaging TC list. "What is reliability? (1) Guaranteed delivery: ensure that all information to be sent is actually received by the destination or an error is reported. (2) Duplicate Elimination: ensure that all duplicated information can be detected and filtered out. (3) Ordering: communication between parties consists of several individual Message Exchanges. This aspect ensures that Message Exchanges are forwarded to the receiver application in the same order as the sender application issued them. (4) Crash tolerance: ensures that all information prescribed by the protocol is always available regardless of possible physical machine failure. (5) State synchronization: If the MEP is cancelled for any reason then it is desirable for both nodes to set their state as if there were no communication between the parties... Web Service Reliability is a communication layer in a Web Services protocol stack..." See: (1) OASIS Web Services Reliable Messaging TC; (2) general references in "Reliable Messaging."
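The report's third property, Ordering, is the one most often glossed over; it can be sketched as a receiver-side buffer that holds out-of-order exchanges back and releases them in the sender's order. The class name and sequence-number scheme below are illustrative assumptions, not taken from the Nokia report:

```python
# Illustrative sketch of the "Ordering" reliability property: out-of-order
# messages are buffered and delivered to the application in sequence.

class OrderingBuffer:
    def __init__(self):
        self.expected = 1        # next sequence number to deliver
        self.pending = {}        # out-of-order messages held back
        self.delivered = []      # what the application actually sees

    def receive(self, seq, payload):
        self.pending[seq] = payload
        # flush every message that is now in order
        while self.expected in self.pending:
            self.delivered.append(self.pending.pop(self.expected))
            self.expected += 1

buf = OrderingBuffer()
for seq, msg in [(2, "b"), (1, "a"), (4, "d"), (3, "c")]:
    buf.receive(seq, msg)
print(buf.delivered)   # ['a', 'b', 'c', 'd']
```

Note how this interacts with the other properties: the buffer only works if duplicates have already been eliminated, and crash tolerance would require persisting `pending` and `expected` across restarts.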
[April 08, 2003] "XML and Java Technologies: Data Binding Part 3: JiBX Architecture." By Dennis M. Sosnoski (President, Sosnoski Software Solutions, Inc). From IBM developerWorks, XML zone. April 01, 2003. ['Enterprise Java technology expert Dennis Sosnoski gives a guided tour of his JiBX framework for XML data binding in Java applications. After introducing the current frameworks in Part 1 and comparing performance in Part 2, he now delves into the details of the JiBX design that led to both great performance and extreme flexibility for mapping between XML and Java objects. How does JiBX do it? The keys are in the internal structure...'] "Part 1 of this series gives background on why you'd want to use data binding for XML, along with an overview of the available Java frameworks for data binding. Part 2 shows the performance of several frameworks on some sample documents. Here in Part 3 you'll find out the details of the new JiBX framework that delivers such great test scores... JiBX originated as my experiment to explore the performance limits of XML data binding in Java technology. As it progressed beyond the experimental stage, I decided to also make it a test bed for trying out what I call a Java-centric approach to data binding -- as opposed to the XML-centric approach used in generating an object model from an XML grammar. Because of these choices, JiBX deliberately differs from the established data binding frameworks in several respects... At the core of these differences is the parsing technology that's used when unmarshalling documents. The established frameworks are all based on the widely used SAX2 push parser API. JiBX instead uses a newer pull parser API that provides a more natural interface for dealing with the sequence of elements in a document. Because of this different API, JiBX is able to use relatively simple code to handle unmarshalling. 
This simpler unmarshalling code is the basis for the second major difference between JiBX and the other frameworks. The other frameworks either generate their own data classes or use a technique called reflection that provides runtime access to data indirectly, using the class information known to the Java Virtual Machine (JVM). JiBX instead uses code that it adds to your existing classes, working directly with the binary class files generated by a Java compiler. This gives the speed of direct access to data while keeping the flexibility of working with classes that you control. The third JiBX difference goes along with this use of your existing classes. JiBX reads a binding definition file to control how the code it adds to your classes converts instances to and from XML. JiBX isn't the only data binding framework that supports this type of approach -- Castor, for one, also handles this -- but JiBX goes further than the alternatives in allowing you to change the relationship between your classes and the XML document. I'll give details on each of these points in the remainder of this article. Even if you're planning to use an alternative framework for your projects, you'll find some solid food for thought in going through this description of JiBX..."
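JiBX itself is Java and works by enhancing compiled class files, but the pull-parsing style it relies on can be illustrated in any language. The sketch below uses Python's `xml.etree.ElementTree.iterparse`, where the unmarshalling code pulls events in document order rather than supplying SAX-style callbacks; the document and field names are invented for the example:

```python
# Pull-style unmarshalling: the code asks for the next event, so mapping
# XML to an object is straight-line code with no handler state machine.

import io
import xml.etree.ElementTree as ET

doc = "<customer><name>Ada</name><id>7</id></customer>"

customer = {}
for event, elem in ET.iterparse(io.StringIO(doc), events=("end",)):
    if elem.tag in ("name", "id"):        # consume fields as they complete,
        customer[elem.tag] = elem.text    # in document order

print(customer)   # {'name': 'Ada', 'id': '7'}
```

With a SAX2-style push API, the same mapping would need registered callbacks and instance variables to remember which element is currently open; this is the "more natural interface" the article credits for JiBX's relatively simple unmarshalling code.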
[April 08, 2003] "BEA Defends Its Turf." By Darryl K. Taft. In eWEEK (April 04, 2003). "BEA Systems Inc. came out swinging on Friday after a week of taking hits from competitors claiming their technology is better for BEA customers than BEA's own. BEA defended its position in the Java application server market, saying the company has come under fire because the competition knows it has to come after the sector leader... [BEA's] Frieberg said the moves by BEA competitors only go to show that BEA is the application platform environment to catch. 'Most of the vendors that exit the market try to migrate their customers to WebLogic. This demonstrates that we have the best product on the market,' he said, speaking primarily of Hewlett-Packard Co., which got out of the application server space and established close ties to BEA. HP's CEO, Carly Fiorina, gave a keynote address at BEA's eWorld developer conference last month. Bob Sutor, IBM's director of WebSphere infrastructure software, said IBM is extending the capabilities of IBM WebSphere Studio to now allow BEA WebLogic developers to 'use WebSphere Studio to develop, test and deploy out to WebLogic. And we think this is going to be very attractive to developers. The tools BEA has are proprietary to BEA and are not open, nor do they support Eclipse.' Eclipse is the open-source, Java-based application development platform sponsored by IBM. Sutor said the new tools capability is a good option 'for enterprises that have mixed IBM WebSphere and BEA WebLogic application server environments.' And for companies that have WebLogic and are thinking of migrating to WebSphere, this is a good first step..."
[April 08, 2003] "Degrees of Freedom: Virtuoso Universal Server 3.0 Expands the Horizons of Data Access, Integration, and Delivery." By Jon Udell. In InfoWorld Issue 12 (March 24, 2003), pages 1, 19-20. "Virtuoso Universal Server 3.0 Beta is enterprise middleware that unifies SQL, text, XML, and object data; it will play an increasingly vital role in delivering business data to users. It offers aggressive support for publishing Web services from local or foreign databases and for consuming them from anywhere; it's equipped to index and search XML data; it's an all-in-one combination of universal database, application server, and Web-services-savvy integration toolkit. The activation threshold for learning to use this complex, multifunctional product is high... OpenLink's newest iteration of the Virtuoso database engine is geared to take the wild innovation of its predecessor to new heights. Adding to Version 2.7's support for XML, Web services, and Internet standards -- a list longer than your arm -- the forthcoming Version 3.0 will also host ASP.Net pages and user-defined functions written in .Net languages on both Windows and Linux, more than proving it worthy of a look. OpenLink CEO Kingsley Idehen, Virtuoso's mastermind, abides by two basic principles: No. 1, Data management is the foundation of IT; and No. 2, developers who build on that foundation need choices. One dimension of choice is data flavor. To that end, Virtuoso's SQL core embraces XML, has a validating parser and XSLT (Extensible Stylesheet Language Transformation) processor embedded in it, can index and search within XML blobs, can express SQL queries as XML (optionally producing an XML Schema), and supports the still-evolving XQuery 1.0 standard. Another dimension of choice is data access.
In terms of data consumption, Virtuoso, which is rooted in OpenLink's suite of data-access drivers, can augment its own SQL tables and stored procedures by linking to foreign ones such as those from Oracle or Microsoft SQL Server. Its PL (procedure language) also includes primitives used to fetch data by way of HTTP Get, raw SOAP calls, or WSDL-described SOAP calls. As a data provider, Virtuoso supports database-style clients with ODBC, JDBC, and OLE-DB drivers; SOAP clients by means of PL wrappers around local or remote procedures; and WebDAV (Web-based Distributed Authoring and Versioning) clients, including Windows Explorer and Office, by projecting file-system-like views of query results..." See the announcement "OpenLink Software Unleashes Virtuoso 3.0. Virtual Database Integrates Disparate SQL, XML, and Web Services Data Sources."
[April 08, 2003] "XML Development with Eclipse." By Pawel Leszek (Independent software consultant). From IBM developerWorks, Open source projects. April 8, 2003. ['This article gives you an overview of how the Eclipse Platform supports XML (Extensible Markup Language) development. Eclipse does not support XML code editing right out of the box. However, because Eclipse is a platform-independent framework for building developer tools, you can add support for new languages relatively easily.'] "A multitude of XML plug-ins have been developed, and new ones are created all the time. This article focuses on the plug-in called XMLBuddy, because its rich feature set contains most of the functions needed for XML document development. We do touch on other plug-ins when they provide a richer set of user options for specific tasks. This article will familiarize you with the basic XML editing features, but bear in mind that Eclipse is a dynamic framework that puts an endless array of tools and features at your fingertips. Eclipse already includes source code for a very primitive XML editor that offers only XML syntax highlighting. It extends classes included in the org.eclipse.ui.editors package, which provides a standard text editor and a file-based document provider for the Eclipse Platform. This simple XML editor serves as a code example that you can use as a base for your own Eclipse XML plug-ins. Its source code can only be generated from the Eclipse project wizard, and you will need to compile it yourself as described here... The most popular and advanced XML editor plug-in for Eclipse is XMLBuddy, developed by Bocaloco Software. XMLBuddy is a freeware plug-in that enriches Eclipse with XML editing capabilities that include user-configurable syntax coloring, DTD-driven code assist, and validation and synchronized outline view. XML Buddy also adds an XML perspective to the Workspace and new project templates for XML documents and DTDs.
You can install XMLBuddy in the same way as any other Eclipse plug-in... Other helpful XML plug-ins are Transclipse and Eclipse Tidy. Transclipse is a plug-in for XML transformation. It processes XML documents via XSLT with any JAXP-compliant XSL stylesheet processor and XSL-FO documents using the Apache Formatting Objects Processor (FOP). Transclipse is a part of the j2h or Java to HTML plug-in, which converts Java source code to HTML, XHTML, and LaTeX with syntax highlighting. The Eclipse Tidy project provides a plug-in for formatting and printing XML/HTML documents. Visit the categorized Eclipse plug-in registry for more information..."
[April 08, 2003] "XBRL: Standardizing Financial Data." By Amy Rogers. In CRN (April 04, 2003). "In the wake of accounting scandals that have resulted in imprisoned corporate executives and distraught investors, Extensible Business Reporting Language (XBRL), an offshoot of XML, promises to help make public companies more consistent in the way their financial data is transmitted, reported and presented to investors. XBRL could help prevent the financial chicanery allegedly engaged in by corporate entities such as Enron and WorldCom, experts say. It's a royalty-free, open standard under development by about 170 companies and agencies, including the Securities and Exchange Commission and the Federal Deposit Insurance Corporation. The Sarbanes-Oxley Act of 2002, which mandates a clearer set of rules by which companies make financial disclosures, has also added support to the XBRL initiative. With broader adoption, XBRL's proponents say, truly transparent financial information will be made available in annual reports, quarterly statements and other documents... XBRL involves tagging different elements of a financial statement so they can be more readily retrieved from a public company's financial documents. These components can then be pulled into various applications and formats, without having to generate a whole new set of accounting terms. Mary Knox, a senior research analyst in Gartner's financial services area, said it will be interesting to see if and when regulatory agencies mandate XBRL's adoption. In the meantime, 'some of the major accounting firms are very excited about XBRL because it will help them automate their auditing processes,' she said. 'One of the major issues is getting banks to sign on,' she continued. 
But if there were incentives to adopt the XBRL dialect, such as accounting firms realizing cost benefits from the automation of their auditing processes and subsequently passing on those savings to win more banks as customers, any number of financial institutions might just fall in line, Knox said. PricewaterhouseCoopers is one of the accounting firms with resources invested in XBRL. Kim Jones, an independent consultant with his own firm, San Francisco's Spinning Electrons, is under contract to PricewaterhouseCoopers to help it devise its XBRL strategy. Microsoft is also closely aligned with this effort, Jones said..." General references in "Extensible Business Reporting Language (XBRL)."
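The tagging idea the article describes is straightforward to sketch. The fragment below builds a deliberately simplified, non-schema-valid instance (the element names, context, and figures are all invented for illustration) just to show how a tagged financial figure becomes machine-retrievable:

```python
# Simplified sketch of XBRL-style tagging: each financial figure becomes a
# named, machine-readable fact instead of a number buried in prose.
# NOT a valid XBRL instance; names and values are illustrative only.

import xml.etree.ElementTree as ET

facts = {"Revenue": "1000000", "NetIncome": "250000"}

root = ET.Element("xbrl")
for name, value in facts.items():
    fact = ET.SubElement(root, name,
                         contextRef="FY2002",   # which period the figure covers
                         unitRef="USD")         # which unit it is measured in
    fact.text = value

instance = ET.tostring(root, encoding="unicode")
print(instance)

# An auditing application can now pull a figure by tag rather than by
# re-keying it from a printed statement:
revenue = root.find("Revenue").text
```

This retrievability, the same fact carrying the same tag across every filer's documents, is what makes the automated auditing and cross-company comparison described above possible.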
[April 07, 2003] "A Quick Introduction to OWL Web Ontology Language." By Roger L. Costello and David B. Jacobs. OWL Tutorial from The MITRE Corporation, sponsored by DARPA. Designed to provide a quick introduction to the capabilities of OWL. Examples use a Gun License Ontology (the Gun License Ontology provides the data needed for the police computer to link the Robber and the Speeder) and an OWL Camera Ontology (which supports interoperability because the Web Bot is able to dynamically utilize the XML document from the Web site, despite the fact that the XML document uses terminology different from that used to express the query). See also: (1) "W3C Web Ontology Working Group Publishes Last Call Working Drafts"; (2) general references in
[April 07, 2003] "Adobe Updates Acrobat for the XML Era." By Tony Smith. In The Register (April 07, 2003). "Adobe has melded its Portable Document Format (PDF) and XML, updating its Acrobat family of PDF creation tools to version 6.0 in the process. The move encourages organisations to use XML to encode their business information while retaining the popular PDF format to ensure that information can, where appropriate, be shared and published. XML provides a framework not only for formatting documents -- HTML, basically -- but also for incorporating metadata -- information about the information. That makes it possible to incorporate into the data itself workflows for taking source information and generating documents based upon it, whether for internal or external use, archiving, email or printing. XML also makes it possible to share those documents outside the organisation by incorporating all the extra information reader software needs to display or print the document. That's exactly what PDF has offered for the last 15 years or so, albeit based on Adobe's proprietary PostScript document formatting language. Adobe no doubt feels that XML has achieved sufficient momentum in the corporate world -- or is about to -- that it needs to align these two technologies to prevent the one (XML) ousting the other (PDF)... should PDF fall out of favour, Adobe is positioning itself as a provider of document creation workflow tools that enable this world of rich XML documents and process automation driven by the use of document standards. Witness the alliances the company today said it had struck with the likes of Documentum, Open Text, SAP and IBM. In the meantime, while PDF remains the de facto standard for document interchange, Adobe's tools manage the production of PDF files too..."
[April 07, 2003] "Adobe Divides to Conquer." By David Becker. In CNET News.com (April 06, 2003). "Adobe Systems is aiming to make its Acrobat electronic publishing software a standard business tool with new versions of the product that target different classes of office workers... the new offerings include a light-duty version of Acrobat designed to let ordinary workers easily convert documents to PDF. Acrobat Elements will integrate with common applications, including Microsoft's Office, so that most documents can be converted simply by right-clicking on them. Acrobat Elements will be accompanied by Acrobat 6.0 Standard, replacing the current version of Acrobat, and Acrobat 6.0 Professional, a new high-end edition intended for workers using complex applications such as Autodesk's AutoCAD drafting software and Microsoft's Visio diagramming and drawing software... Acrobat 6.0 also includes expanded security features for digitally signing and encrypting documents, making it more feasible for banks and other institutions to send sensitive documents electronically. The new Acrobat enhances support for adding XML functions, allowing PDF documents to become interactive forms that can exchange data with corporate databases. Adobe has initiated partnerships with SAP and other major enterprise software makers and is working on further collaborations to make PDF the preferred presentation layer for exhibiting and exchanging corporate data..." See other details in the news story "Adobe Announces XML Architecture for Document Creation, Collaboration, and Process Management."
[April 05, 2003] "Semantic Interpretation for Speech Recognition." Edited by Luc Van Tichelen (ScanSoft). W3C Working Draft 01-April-2003. Latest version URL: http://www.w3.org/TR/semantic-interpretation/. Updates the WD of 2001-11-16. "This document defines the process of Semantic Interpretation for Speech Recognition and the syntax and semantics of semantic interpretation tags that can be added to speech recognition grammars to compute information to return to an application on the basis of rules and tokens that were matched by the speech recognizer. In particular, it defines the syntax and semantics of the contents of Tags in the Speech Recognition Grammar Specification. Semantic Interpretation may be useful in combination with other specifications, such as the Stochastic Language Models (N-Gram) Specification, but their use with N-grams has not yet been studied. The results of semantic interpretation describe the meaning of a natural language utterance. The current specification represents this information as an ECMAScript object, and defines a mechanism to serialize the result into XML. The W3C Multimodal Interaction Activity is defining a data format (EMMA) for representing information contained in user utterances, and has published the requirements for this data format (EMMA Requirements). It is believed that semantic interpretation will be able to produce results that can be included in EMMA. Grammar Processors, and in particular speech recognizers, use a grammar that defines the words and sequences of words to define the input language that they can accept. The major task of a grammar processor consists of finding the sequence of words described by the grammar that (best) matches a given utterance, or to report that no such sequence exists. In an application, knowing the sequence of words that were uttered is sometimes interesting but often not the most practical way of handling the information that is presented in the user utterance.
What is needed is a computer processable representation of the information, the semantic result, more than a natural language transcript. Semantic Interpretation Tags provide a means to attach instructions for the computation of such semantic results to a speech recognition grammar..."
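The mechanism can be sketched with a toy grammar. The example below does not use the W3C tag syntax (which embeds ECMAScript expressions inside SRGS grammars); it is a hypothetical Python analogue showing the same idea of matched tokens executing tags that build a semantic result object rather than returning a raw transcript:

```python
# Toy analogue of semantic interpretation tags: each recognizable token
# carries a "tag" (here a function) that contributes to a semantic result.
# Grammar, tokens, and field names are invented for illustration.

grammar = {
    "large":     lambda out: out.update(size="large"),
    "small":     lambda out: out.update(size="small"),
    "pizza":     lambda out: out.update(item="pizza"),
    "pepperoni": lambda out: out.setdefault("toppings", []).append("pepperoni"),
}

def interpret(utterance):
    result = {}                      # the semantic result object
    for token in utterance.split():
        if token in grammar:
            grammar[token](result)   # execute that token's tag
    return result

print(interpret("a large pepperoni pizza"))
# {'size': 'large', 'toppings': ['pepperoni'], 'item': 'pizza'}
```

The application receives the structured result, not the word sequence, which is exactly the distinction the Working Draft draws between a transcript and a semantic result.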
[April 05, 2003] "EPP Internationalized Domain Name Mapping." By Edmon Chung and Henry Tong (Neteka). IETF Internet Draft. Reference: 'draft-chung-idnop-epp-idn-00.txt'. April 2003. 22 pages. "This document describes an Extensible Provisioning Protocol (EPP) mapping for the provisioning and management of Internationalized Internet domain names (that includes English alphanumeric domain names) stored in a shared central repository. Specified in XML, the mapping defines EPP command syntax and semantics as applied to domain names. More specifically, EPP-IDN intends to provide a mechanism for explicitly managing and provisioning Reserved Variants and Zone Variants created for a Primary Domain Name..." See: (1) IETF Provisioning Registry Protocol Working Group; (2) "Extensible Provisioning Protocol (EPP)." [cache]
[April 05, 2003] "Extensible Provisioning Protocol." By Scott Hollenbeck (VeriSign Global Registry Services, VeriSign, Inc). IETF Internet Draft. Reference: 'draft-ietf-provreg-epp-09.txt'. March 11, 2003, expires September 11, 2003. 74 pages. "This document describes specifications for the Extensible Provisioning Protocol (EPP) version 1.0, an XML text protocol that permits multiple service providers to perform object provisioning operations using a shared central object repository. EPP is specified using the Extensible Markup Language (XML) 1.0 as described in [XML-REC] and [W3C] XML Schema notation... EPP meets and exceeds the requirements for a generic registry registrar protocol as described in RFC3375. EPP content is identified by MIME media type application/epp+xml. Registration information for this media type is included in an appendix to this document. EPP is intended for use in diverse operating environments where transport and security requirements vary greatly. It is unlikely that a single transport or security specification will meet the needs of all anticipated operators, so EPP was designed for use in a layered protocol environment. Bindings to specific transport and security protocols are outside the scope of this specification. The original motivation for this protocol was to provide a standard Internet domain name registration protocol for use between domain name registrars and domain name registries. This protocol provides a means of interaction between a registrar's applications and registry applications. It is expected that this protocol will have additional uses beyond domain name registration... EPP is a stateful XML protocol that can be layered over multiple transport protocols. Protected using lower-layer security protocols, clients exchange identification, authentication, and option information, and then engage in a series of client-initiated command-response exchanges.
All EPP commands are atomic (there is no partial success or partial failure) and designed so that they can be made idempotent (executing a command more than once has the same net effect on system state as successfully executing the command once). EPP provides four basic service elements: service discovery, commands, responses, and an extension framework that supports definition of managed objects and the relationship of protocol requests and responses to those objects..." See: (1) IETF Provisioning Registry Protocol Working Group; (2) "Extensible Provisioning Protocol (EPP)." [cache]
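The atomicity and idempotency requirements are protocol semantics rather than syntax, so they can be modeled apart from the XML. The toy repository below (all class, method, and object names are illustrative, not from the draft) shows what "same net effect on system state" means for a repeated command:

```python
# Sketch of EPP's idempotency requirement: re-executing a command leaves
# the same net state as executing it once, with no partial success.
# (Real EPP commands are application/epp+xml documents; this only models
# the command semantics, with invented names.)

class Repository:
    def __init__(self):
        self.objects = {}   # provisioned objects, e.g., domain -> sponsor

    def create(self, name, sponsor):
        # atomic: either the object exists with this data afterward, or
        # the command cleanly fails; idempotent: repeating a successful
        # create with identical data changes nothing
        if name in self.objects:
            return self.objects[name] == sponsor
        self.objects[name] = sponsor
        return True

repo = Repository()
first = repo.create("example.com", "registrar-a")   # succeeds
again = repo.create("example.com", "registrar-a")   # retry: same net effect
clash = repo.create("example.com", "registrar-b")   # conflicting data fails
print(first, again, clash, repo.objects)
```

Idempotency matters because EPP is transport-independent: a registrar that never receives a response can safely resend the command without risking a doubled provisioning action.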
[April 05, 2003] "Business Process with BPEL4WS: Learning BPEL4WS, Part 6. Correlation, Fault Handling, and Compensation." By Rania Khalaf and William A. Nagy (Software Engineers, IBM TJ Watson Research Center). From IBM developerWorks, Web Services. April 2003. ['The previous articles have covered the fundamentals of BPEL4WS, providing you with an understanding of the activities defined and how they can be combined together using structured activities and the <link> construct. In this article, we cover the advanced properties of the language that are essential to the definition and execution of a business process. BPEL uses correlation to match returning or known customers with a long-running business process, fault handling to recover from expected as well as unexpected faults, and compensation to "undo" already committed steps in case something goes wrong in the middle of the process or, for example, a client wishes to explicitly cancel a transaction.'] "Message correlation is the BPEL4WS mechanism which allows processes to participate in stateful conversations. It can be used, for example, to match returning or known customers to long-running business processes. When a message arrives for a Web service which has been implemented using BPEL, that message must be delivered somewhere -- either to a new or an existing instance of the process. The task of determining to which conversation a message belongs, in BPEL's case the task of locating/instantiating the instance, is what message correlation is all about. In many distributed object systems, one component of the routing of a message involves examining the message for an explicit instance ID which identifies the destination. Although the routing process is similar, BPEL instances are not identified by an explicit instance field, but are instead identified by one or more sets of key data fields within the exchanged messages. 
For example, an order number may be used to identify a particular instance of a process within an order fulfillment system. In BPEL terms, these collections of data fields which identify the process instance are known as correlation sets. Each BPEL correlation set has a name associated with it, and is composed of WSDL-defined properties. A property is a named, typed data element which is defined within a WSDL document, and whose value is extracted from an instance of a WSDL message by applying a message-specific XPath expression. In WSDL, a propertyAlias defines each such mapping. The mappings are message specific, hence a single property can have multiple propertyAliases associated with it. For example, a WSDL document might say that property name corresponds to part username of WSDL message loginmsg and to part lastname of ordermsg. Together, the properties and propertyAliases provide BPEL authors with a way to reference a single, logical piece of information in a consistent way, even if it might appear in different forms across a set of messages... The next article will provide a runnable example that illustrates both the use of correlation for matching messages to the appropriate instances, as well as the use of a fault handler to catch and take care of errors in the process."
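The propertyAlias machinery can be sketched as a routing table. In the hypothetical example below, one logical property, orderNumber, is located in two differently shaped messages by message-specific paths (the role propertyAliases play in WSDL), and its value selects or instantiates the process instance; message shapes and names are invented for illustration:

```python
# Sketch of BPEL-style correlation: a logical property is extracted from
# each message by a message-specific path, and its value routes the
# message to the right process instance. All names are illustrative.

import xml.etree.ElementTree as ET

# propertyAlias analogue: property -> (message type -> location in message)
aliases = {"orderNumber": {"orderMsg": "./order/id",
                           "shipMsg":  "./shipment/orderRef"}}

instances = {}   # correlation value -> process instance state

def route(msg_type, xml_text):
    doc = ET.fromstring(xml_text)
    key = doc.find(aliases["orderNumber"][msg_type]).text
    inst = instances.setdefault(key, [])    # locate or instantiate
    inst.append(msg_type)
    return key

route("orderMsg", "<m><order><id>42</id></order></m>")
route("shipMsg",  "<m><shipment><orderRef>42</orderRef></shipment></m>")
print(instances)   # {'42': ['orderMsg', 'shipMsg']}
```

Both messages reach the same instance even though neither carries an explicit instance ID, which is precisely the difference the article draws between BPEL correlation sets and the instance fields used in many distributed object systems.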
[April 05, 2003] "Grady Booch Polishes His Crystal Ball." By Michael O'Connell and Grady Booch. From IBM developerWorks. April 3, 2003. ['Grady Booch looks at software development's past, present, and future. Hint: It'll still be difficult.'] "Grady Booch spends his time pondering how to improve software development. As such, he thinks about how current trends -- UML, aspect-oriented programming, Web services, and so on -- will evolve into tomorrow's development environments. Most importantly, Grady believes that we solve the complexity problem by continually raising the level of abstraction. Will J2EE (Java 2 Platform, Enterprise Edition) vanquish Microsoft's .NET? Will Web services combined with XML schemas raise the level of abstraction? Rational Software Chief Scientist Grady Booch is paid to ponder such questions concerning the future of software development. If you think such a man possesses insights into how your development process will evolve over time, you'd be correct... Grady's contributions to easing software developers' lives stretch from co-creating Rational Rose, to writing (at last count) thirteen software development-focused books (see Resources), to co-developing the Unified Modeling Language (UML). As such, Grady believes software development will always prove difficult, but we, the developer community, should strive for improvement. Now Grady is with IBM, since IBM's purchase of Rational Software, and is one of the featured speakers at the IBM developerWorks Live! conference April 9-12 at the Morial Convention Center in New Orleans. We asked him to share a preview of his speech. Grady, being Grady, gave us much more in this exclusive interview with developerWorks..."
[April 04, 2003] "Document Object Model (DOM) Level 3 XPath Specification Version 1.0." Edited by Ray Whitmer (Netscape/AOL). W3C Candidate Recommendation 31-March-2003. Produced as part of the W3C DOM Activity. Latest version URL: http://www.w3.org/TR/DOM-Level-3-XPath. ['W3C has announced the advancement of the Document Object Model (DOM) Level 3 XPath Specification to Candidate Recommendation. The document provides access to a DOM tree using XPath 1.0. Reviews are welcome through May 26, 2003. Implementers are invited to send a message to the DOM public mailing list.'] "XPath 1.0 is becoming an important part of a variety of specifications including XForms, XPointer, XSL, XML Query, and so on. It is also a clear advantage for user applications which use DOM to be able to use XPath expressions to locate nodes automatically and declaratively. This specification was created to map between the Document Object Model's representation of the W3C Information Set and XPath's model to permit XPath functions to be supplied and results returned within the framework of DOM APIs in a standard, interoperable way, allowing also for liveness of data, which is not addressed by the XPath specification but is present in results coming from the DOM hierarchy." See also the Document Object Model (DOM) Conformance Test Suites. Document available in PDF format. General references: "W3C Document Object Model (DOM)."
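The specification concerns DOM APIs specifically, but the convenience it standardizes, locating nodes declaratively with XPath instead of walking the tree by hand, can be shown with the limited XPath subset in Python's stdlib ElementTree (the document below is invented for the example):

```python
# Declarative node location: one XPath-style expression replaces a
# hand-written nested loop over child nodes.

import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<library>"
    "  <book lang='en'><title>Dune</title></book>"
    "  <book lang='fr'><title>Vendredi</title></book>"
    "</library>"
)

# select the titles of English-language books in a single expression
english_titles = [t.text for t in doc.findall("./book[@lang='en']/title")]
print(english_titles)   # ['Dune']
```

The DOM Level 3 XPath specification goes beyond this by defining the evaluation API itself (expression objects, result types, and the "live" result behavior noted above), which a simple query helper like `findall` does not address.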
[April 04, 2003] "Messaging Key to Web Services, CTOs Say. Reliable Messaging Considered At Least As Important As Security." By Tom Sullivan. In InfoWorld (April 04, 2003). "Reliable messaging came to the fore last week at the InfoWorld CTO Forum in Boston as perhaps the most important missing ingredient in the Web services recipe... Although security and orchestration often garner the most attention, several executives at the forum cited standards-based messaging as equally important to the success of Web services. To address that issue, Adam Bosworth, chief architect and senior vice president at San Jose, Calif.-based BEA Systems, revealed in his keynote speech that a forthcoming version of BEA's WebLogic application server will include enhanced message brokering and management features... Web services management also received buzz at the forum, as Winston Bumpus, director of open standards and technologies at Novell and co-chair of OASIS, announced that the organization has formed a technical committee to focus on Web services for managing distributed resources. The Web Services Distributed Management (WSDM) technical committee will include vendors from OASIS, W3C, and the Desktop Management Task Force (DMTF). All the activity on the standards front has chief technologists divided as to whether Web services vendors are paying enough attention to customers' desires..." Related references: (1) "Reliable Messaging"; (2) "OASIS Members Form Technical Committee for Web Services Reliable Messaging"; (3) "New Web Services Specifications for Reliable Messaging and Addressing"; (4) "BEA Releases Web Services Specifications Supporting Asynchrony, Reliable Messaging, Metadata"
[April 04, 2003] "Interactivity, Personalization Expand the Limits of Print." By Andreas Weber (DigitaldruckForum). In The Seybold Report Volume 2, Number 24 (March 31, 2003). ISSN: 1533-9211. "No longer chasing the elusive 'run of one,' interactive variable printing is finding its niche. At the PODi conference in Rome, PPML came of age; at Cebit, we saw how it can fit into the future of print communication. Since March 1999, expert teams in companies large and small have been feverishly debating the details and have managed to reach an agreement on PPML. For the first time in printing history, we have a generally recognized convention for a Personalized Print Markup Language. PPML is no off-the-shelf software product. Rather, it is a kind of digital 'container' in which all the parameters necessary for variable-printing jobs can be collected. The significance of PPML lies in the value it provides: 'maximizing the reuse of assets to minimize transport and printing costs.' JDF, PDF/X-3 and XML are the standards with which PPML most closely interacts... A decisive factor in the success [of PPML] was the founding of PODi, the Print On Demand Initiative (www.podi.org). The organization brought PPML to life in record time. Version 1.0 came out in March 2000, and version 2.0 is being released in April 2003. In parallel, PODi has gathered a unique collection of case studies. It is constantly being updated and is published as a 'book on demand.' PODi spokesperson David deBronkart says: 'It took an incredible amount of work to ferret out these 'best practice' examples and document them properly.' But the effort was worth it; digital printing has never before been documented in such detail and so effectively... In light of PPML and variable data exchange, two trends stand out: (1) Digital printing is moving away from the concept of the 'run of one.' 
(2) Digital printing permits high-volume production, with every piece individualized in real time and press runs in the millions... PPML significantly enriches the workflow for communication via print media. It encompasses preflight, soft-proofing and printing, as well as data verification. Still more important, PPML allows the development of tools for print providers, marketers and agencies that make it easier to plan, lay out and produce individualized printed pieces, and to integrate digital printing into the planning of multi-media advertising campaigns..." See: (1) "Job Definition Format (JDF)"; (2) "Personalized Print Markup Language (PPML)."
[April 04, 2003] "Rights-Expression Language Is Key to Interoperability. [Digital Rights Management.]" By Bill Rosenblatt (President, GiantSteps Media Technology Strategies). In The Seybold Report Volume 2, Number 24 (March 31, 2003). ISSN: 1533-9211. "Most of the pain points concerning digital rights are essentially political and cultural. But a few are technical, such as finding a standard method for describing exactly what rights are conveyed, for what price and under what restrictions. Many proprietary schemes have been proposed and (mostly) discarded. What seems likely to work in the long haul is a rights-expression language that's rich enough, simple enough and open enough to satisfy all parties in this turbulent industry. As it happens, there are two such languages. Trouble is, there needs to be one... One of the technical factors impeding the growth of the digital rights management market is the lack of interoperability among the increasing number of DRM solutions available. Rights expression languages (RELs) offer the promise of packaging assets in different DRM-enabled formats with a single set of business rules, which saves effort and promotes interoperability among different DRM-enabled components of digital-media value chains. We'll take a look at the background of RELs in general and the two most prominent emerging standard RELs: Extensible Rights Markup Language (XrML) from ContentGuard, Inc., and Open Digital Rights Language (ODRL) from IPR Systems Ltd... Today, a growing number of DRM technology vendors are adopting XrML, which is now at version 2.0; details are at www.xrml.org. Microsoft is incorporating either the full language or a subset in all of its DRM solutions, including Media Rights Manager for Windows Media audio and video, Digital Asset Server for Microsoft Reader eBooks and its recently announced Windows Rights Management Services for Windows Server 2003. 
Other adopters include the e-zine vendor Zinio and the Dutch infrastructure-software vendor DMDSecure. All of these companies have licensed ContentGuard's patents, as has Sony -- although the consumer electronics giant has yet to implement any technology based on XrML... ODRL, meanwhile, has made some headway in the wireless industry, thanks to efforts by the wireless giant Nokia. In addition to its adoption by the OMA, Nokia has released an SDK for implementing OMA download applications, and it has implemented the spec in its 3595 phone. ODRL is also supported in an open-source DRM package for the emerging MPEG-4 multimedia format called OpenIPMP, developed by New York-based ObjectLab Inc... For media applications, ODRL has the advantage of being more concise, meaning that rights descriptions in ODRL tend to be more compact than their equivalents in XrML and that ODRL interpreters can be smaller (in memory footprint) than XrML interpreters. The latter factor is especially important in the mobile-device space, where memory is at a premium; that is one reason why the Open Mobile Alliance (www.openmobilealliance.org) favored ODRL over XrML. ODRL also has some media-specific constructs that XrML does not share, including the ability to specify attributes of media objects such as file formats, resolutions and encoding rates..." Related references: (1) "XML and Digital Rights Management (DRM)"; (2) "Open Digital Rights Language (ODRL)"; (3) "Extensible Rights Markup Language (XrML)"; (4) "Patents and Open Standards" [ContentGuard]
[April 04, 2003] "IBM Fortifying XML Query Language. Big Blue Partnering with Microsoft, Oracle to Boost Spec." By Paul Krill. In InfoWorld (April 04, 2003). "IBM is preparing to advance the XQuery XML query language on two fronts: by submitting with Microsoft a test suite for industry consideration and by working with Oracle on a Java API for the language. IBM and Microsoft on Friday [2003-04-04] plan to submit to the World Wide Web Consortium (W3C) a test suite for the as-yet-unfinished XQuery language, said Nelson Mattos, Distinguished Engineer for IBM in charge of information integration efforts in data management, in San Jose, Calif. The suite is simply called the XQuery Test Suite. "One of the major milestones in establishing [XQuery as a] standard is a test suite that validates if an implementation conforms with the specification," Mattos said. The test suite consists of a series of programs that illustrate the different features in XQuery and checks if a given implementation supports the features in a way defined by the standard, Mattos said. He stressed the significance of XQuery as a mechanism for searching and retrieving XML data. Support for XQuery will occur in products such as databases, including IBM's DB2 database, as well as in applications such as content management, document management, and information integration systems... Final W3C approval of XQuery as a formal recommendation, which is tantamount to being an industry standard, is expected later this year, according to a W3C spokesperson. W3C will be looking for other vendors to submit test suites as well, according to W3C. The test suite from IBM and Microsoft is to be submitted to the W3C XML Query Working Group. The test suite provides a framework for comparing specific implementations of XQuery to the W3C specification, according to Microsoft's Michael Rys, program manager for the SQL Server database, in Redmond, Wash. 
Microsoft plans to support XQuery in the Yukon release of SQL Server, due in a beta version by June ... Also planned by IBM within a few weeks is formation, along with Oracle, of an expert group within the Java Community Process that would develop a Java API for XQuery to establish a standard way for a Java program to search for documents written in the XML language. IBM and Oracle would deliver the specification. The Java Community Process is an industry mechanism for adding standard technologies to the Java platform. The proposed API, which would be the subject of a Java Specification Request within JCP, would relate to XQuery in the same manner that JDBC relates to SQL, IBM said..." See the W3C XQuery website. General references in "XML and Query Languages."
[April 04, 2003] "Database Heavyweights Weigh In On XML Standard." By Lisa Vaas. In eWEEK (April 04, 2003). "Relational database heavyweights are pushing the XQuery standard for querying XML documents, with IBM and Microsoft Corp. expected to present a test suite for the standard to the W3C on Friday, and Oracle Corp. recently having posted a prototype of the standard on its site. The test suite IBM and Microsoft will present is considered an important milestone in finalizing a standard for querying XML data. If adopted by the W3C, the test suite will be used to check whether an XQuery implementation performs as standards dictate, thus ensuring that a given technology is portable across multiple applications that conform to the standard... IBM has pledged that when the XQuery standard is finalized, the company will plug the search technology into its DB2 database product family. The products that would adopt XQuery include DB2 Information Integrator, a product that grew out of IBM's Xperanto initiative that's designed to unify, integrate and search scattered repositories and formats of historical and real-time information as if they were one database; DB2 Universal Database; and DB2 Content Manager... According to Mattos, WebSphere Business Integrator, the Informix database, and Business Intelligence products such as Intelligent Miner or the Red Brick Warehouse will also support XQuery when it becomes an official W3C recommendation. XML aficionados differ on how much the XQuery standard matters. Timothy Chester, senior IT manager and adjunct faculty member at Texas A&M University, in College Station, Texas, called it an 'important step in a very small sandbox.' Chester uses XML for systems integration but doesn't query XML documents and thinks it's unlikely that many will give up the tried-and-true structured language of relational databases for XML..." See the W3C XQuery website. General references in "XML and Query Languages."
[April 02, 2003] "Really Simple Asynchronous Web Services." By Eric Newcomer (IONA Technologies). In Computerworld (March 25, 2003). The first word in SOAP (Simple Object Access Protocol) is "simple," but the world of Web services has become very complicated. In early 2000 the SOAP specification was published, and it was quickly followed by the SOAP with Attachments; Universal Description, Discovery and Integration (UDDI); and Web Services Description Language (WSDL) specifications. These core technologies have been very widely adopted, implemented and endorsed. Then came a plethora of proposals for additional functionality, positioned as extensions to the core technologies, such as WS-Security, WS-Transactions, WS-Routing, BPEL, WS-Coordination, HTTP-R and WS-Reliability... Most current Web services applications are used in a connection-oriented manner. That is, in order to use the applications over the Internet, a connection is required between the service requester and service provider. But for the mobile traveler, and for the emerging world of wireless networks, always being connected may not be possible or practical. In the world of high-speed, wireless networking and mobile Internet-enabled devices such as laptops, personal digital assistants (PDAs), cellular telephones and automobiles, it is unlikely that a connection will always and constantly be available. A really simple solution works well in this environment, insulating users from worrying about connectivity loss. If a connection is present, the message is immediately transferred. If a connection is not present, the message is queued up for later transfer. ... File-based transfer mechanisms provide a really simple way to implement asynchronous Web services, resolving common concerns such as occasional connectivity, security and reliable messaging without adding the complexity of extended standards. File-based transfer mechanisms are consistent with current and emerging Web services standards. 
Considerable frustration with current Web services technologies arises when comparing them with RPC-oriented middleware since by definition Web services are not well suited to the RPC-oriented interaction style. Web services are much better suited for the message-oriented interaction style. Extending Web services toward message-oriented middleware architectures provides a better way to improve short-term value than proposing complex RPC-oriented extensions. Client-side applications benefit from the simple asynchronous architecture since they do not have to worry about whether or not they are connected to the Internet; and they can use better GUI tools, work with large documents and participate in long-running business process orchestrations..."
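Newcomer's "really simple" pattern — deliver immediately when connected, spool the message to a file otherwise, and retry later — can be sketched in a few lines. This is an illustrative store-and-forward queue, not code from the article; all names and the spool-file layout are invented:

```python
# Sketch of file-based asynchronous messaging: if the link is down, the
# message is queued as a file; a later pass drains the spool directory
# once connectivity returns. Names and layout are illustrative only.
import os, tempfile, uuid

class StoreAndForward:
    def __init__(self, spool_dir, send):
        self.spool_dir = spool_dir   # where unsent messages wait on disk
        self.send = send             # callable delivering one message; may raise

    def submit(self, message):
        """Try to deliver now; on failure, spool to a file for later."""
        try:
            self.send(message)
        except ConnectionError:
            path = os.path.join(self.spool_dir, uuid.uuid4().hex + ".msg")
            with open(path, "w") as f:
                f.write(message)

    def drain(self):
        """Retry every spooled message once connectivity is back."""
        for name in sorted(os.listdir(self.spool_dir)):
            path = os.path.join(self.spool_dir, name)
            with open(path) as f:
                self.send(f.read())
            os.remove(path)

# Usage: simulate an offline submit followed by a later drain.
spool = tempfile.mkdtemp()
sent = []
online = [False]

def send(msg):
    if not online[0]:
        raise ConnectionError("no link")
    sent.append(msg)

q = StoreAndForward(spool, send)
q.submit("<env>hello</env>")   # offline: message is spooled, not sent
online[0] = True
q.drain()                      # connection restored: message delivered
```

The design choice mirrors the article's argument: the client application never blocks on connectivity, and reliability falls out of the durable spool file rather than from an extended messaging standard.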
[April 02, 2003] "XPath Containment in the Presence of Disjunction, DTDs, and Variables." By Frank Neven (University of Limburg, Belgium) and Thomas Schwentick (Fachbereich Mathematik und Informatik, Philipps-Universität Marburg). Paper presented at ICDT 2003 (Ninth International Conference on Database Theory), January 8-10, 2003, Rettorato dell'Università di Siena, Italy. 32 pages (with 25 references). "XPath is a simple language for navigating an XML tree and returning a set of answer nodes. The focus in this paper is on the complexity of the containment problem for various fragments of XPath. In addition to the basic operations (child, descendant, filter, and wildcard), we consider disjunction, DTDs and variables. With respect to variables we study two semantics: (1) the value of variables is given by an outer context; (2) the value of variables is defined existentially. We establish an almost complete classification of the complexity of the containment problem with respect to these fragments... We have studied the complexity of the containment problem for a large class of XPath patterns. In particular, we considered disjunction, DTDs and variables. Unfortunately, the complexity of almost all decidable fragments lies between coNP and EXPTIME. On the other hand, the size of XPath expressions is rather small. As pointed out, Deutsch and Tannen, and Moerkotte, already obtained undecidability results for XPath containment. We added two more: presence of node-set equality and modest negation or variables with the existential semantics. It would be interesting to have a precise classification of which combination of features makes the problem undecidable. As a next step, navigation along the other XPath axes should also be investigated..." See other publications from the Foundations of XML group at LUC. [source PostScript]
[April 02, 2003] "Automata- and Logic-Based Pattern Languages for Tree-Structured Data." By Frank Neven (University of Limburg) and Thomas Schwentick (Philipps-Universität Marburg). In Semantics in Databases. Revised Papers from the Second International Workshop (Dagstuhl Castle, Germany, January 7-12, 2001). Lecture Notes in Computer Science #2582. Edited by L. Bertossi, G.O.H. Katona, K.-D. Schewe, and B. Thalheim. Berlin/Heidelberg: Springer-Verlag, 2003. 19 pages (with 31 references). The draft/manuscript version (PDF) is linked. "This paper surveys work of the authors on pattern languages for tree-structured data with XML as the main application in mind. The main focus is on formalisms from formal language theory and logic. In particular, it considers attribute grammars, query automata, tree-walking automata, extensions of first-order logic, and monadic second-order logic. It investigates expressiveness as well as the complexity of query evaluation and some optimization problems. Finally, formalisms that allow comparison of attribute values are considered... We survey work investigating well-known formalisms from formal languages and logic as pattern languages for tree-structured data. The main focus is on expressiveness, evaluation complexity, and decision problems relevant to optimization. Although attribute grammars as well as query automata are quite expressive, they are quite complicated formalisms and do not seem to be the basis for an easy-to-use pattern language. Tree-walking automata are much more intuitive. Therefore, we need to understand better their expressiveness. However, the problem whether they capture MSO has been open for a while now and therefore appears to be difficult. The restricted logics we consider might be useful in the design of a pattern language but this requires further work. We merely touch upon the issue of comparison of data values. Undoubtedly it deserves a lot more investigation..." 
Related papers include (1) Automata Theory for XML Researchers (Sigmod Record 31(3), 2002) and (2) Automata, Logic, and XML (CSL 2002, invited talk). See other publications by Frank Neven. [source PostScript]
[April 01, 2003] "The Liberty Alliance." By Paul Madsen. From WebServices.xml.com (April 01, 2003). "For the consumer or employee, federated identity will mean a far more satisfactory on-line experience - as well as new levels of personalization, security, and control over their identity information. The existence of such an infrastructure will open up new business opportunities, including providing economies of scale that lower business costs and expedite the growth of the Internet and e-commerce. Making this happen is what the Liberty Alliance Project is all about... Liberty's first phase focused on enabling simplified sign-on through identity federation; this work is referred to as the Liberty Identity Federation Framework (ID-FF). The Phase 1 specifications, released in July 2002 and updated in January 2003, provide the plumbing for federated identity management: standards for simplified sign-on and federation, or 'linking', among disparate accounts within a group of businesses that have already established relationships. The Liberty Phase 2 specifications, expected in mid-2003, will enhance ID-FF and introduce the Identity Web Services Framework (ID-WSF), which outlines the technical components necessary to build interoperable identity-based web services that meet specific business needs and also protect the privacy and security of users' shared information. 
Phase 2 also includes the introduction of Liberty Alliance Identity Services Interface Specifications (ID-SIS), a collection of specifications built on the Liberty Identity Web Services Framework. These specifications will provide a standard way for companies to build interoperable services like registration profiles, contact books, or calendar, geo-location or alert services. The first service interface specification to be introduced is the ID-Personal Profile, which will define a basic profile template that can be used to build a registration service. As it did for Phase 1 ID-FF, XML will play a key role in Liberty's ID-WSF and subsequent phases. For instance, to enable the permission-based attribute sharing necessary for Web-based identity services that enable users to control their data, there will need to be XML schemas for capturing a user's core profile (e.g., their shipping address, their cell phone number, etc.), and a protocol for requesting such profile information..." General references in "Liberty Alliance Specifications for Federated Network Identification and Authorization."
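The permission-based attribute sharing described above can be sketched as a profile service that returns only the attributes a given requester is permitted to see, serialized as XML. All element names and the permission model below are invented for illustration; the real schemas are defined by Liberty's ID-SIS specifications:

```python
# Sketch of permission-based attribute sharing: a profile service serializes
# only the permitted subset of a user's core profile as XML. Element names
# and the grant model are hypothetical, not from the ID-SIS specifications.
import xml.etree.ElementTree as ET

PROFILE = {"ShippingAddress": "1 Main St", "CellPhone": "555-0100"}
PERMISSIONS = {"shop.example": ["ShippingAddress"]}   # per-requester grants

def profile_response(requester):
    root = ET.Element("PersonalProfile")
    for attr in PERMISSIONS.get(requester, []):
        ET.SubElement(root, attr).text = PROFILE[attr]
    return ET.tostring(root, encoding="unicode")

print(profile_response("shop.example"))
# <PersonalProfile><ShippingAddress>1 Main St</ShippingAddress></PersonalProfile>
```

A requester with no grants gets an empty profile element back, which is the point of the design: the user's data never leaves the service unless a permission says it may.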
[April 01, 2003] "CTO Forum: BEA Exec Hails Service-Oriented Architectures. Web Services, Messaging Play Roles." By Paul Krill. In InfoWorld (April 01, 2003). "Service-oriented architectures leveraging messaging and Web services are key to meeting complicated system integration needs, a BEA executive stressed during a keynote presentation at the InfoWorld CTO Forum conference here on Tuesday... BEA's Adam Bosworth, chief architect and senior vice president of advanced development at the San Jose, Calif.-based company, said enterprises are dealing with hundreds or even thousands of applications that have difficulty communicating with each other... Standardization and flexible, service-oriented architectures are needed to address pressing integration issues, according to Bosworth... BEA, he said, has devised three principles pertaining to application loads: coarse-grained computing is needed to provide for a single trip to a database for each transaction, loose coupling enables interlinked applications to be changed without breaking the architecture, and asynchronous communications provides for delivery. [Bosworth] cited the recently proposed WS-Addressing specification as containing a core model for asynchronous messaging. Web services provide a model for coarse-grained communications, he said, but the key behind Web services is loose coupling. Additionally, Web services offer at least a standard way of communications between applications. Messaging, Bosworth said in an interview after his presentation, remains a clear part of BEA's strategy. 'The role there is very simple; it's reliable delivery,' he said. Bosworth cited ongoing issues with needing a query mechanism for XML data. XSLT currently is being used to query XML but it is not suitable, according to Bosworth. He cited an upcoming version of XML Query as a potential solution. 
Bosworth also said: (1) SOAP technology 'is going to freeze very quickly' after issues with asynchrony and reliable messaging are resolved. (2) BPEL4WS (Business Process Execution Language for Web Services) is attempting to provide for descriptions of business processes and business contracts, but BEA is not certain it can handle the contracts task. 'If it can do this, great. If not, we'll have a white paper on it,' Bosworth said. However, Bosworth said BEA would not push the rival WSCI (Web Services Choreography Interface) model unless it is proven that the BPEL4WS model does not work. BEA has participated in development of both specifications. (3) Instant messaging is important. 'IM gives us a way to push messages and it gives us an architecture of knowing who's available,' he said..."
[April 01, 2003] "Perspective: Pay Attention to BPM." By J. William Gurley. From CNet News.com (March 27, 2003). "Today, there is a new form of enterprise software that has the ability to do for white-collar business processes what Deming did for manufacturing. Delphi Group believes that business process management (BPM) is 'quickly emerging as the moniker for the next killer app in enterprise software.' Believe it or not, this may actually undersell the potential impact of BPM. BPM will not just change the software industry -- it will change industry in general... BPM is a new programming paradigm for the enterprise that leverages browser-based applications, e-mail, global connectivity and enterprise application integration (EAI) infrastructure to deliver a powerful, business-focused programming solution. A mix between workflow, EAI and application development, BPM makes it easy for companies to codify their current processes, automate their execution, monitor their current performance and make on-the-fly changes to improve the current processes... Typically, there are six key components that exist in any BPM solution: (1) IDE: An integrated design environment used to design processes, rules, events and exceptions. Believe it or not, many companies report huge improvements in process design merely from the act of sitting down and creating a structured definition of each process. (2) Process engine: The process engine keeps track of the states and variables for all of the active processes. Within a complex system, there could easily be thousands and thousands of processes with interlocking records and data. As such, this engine is nontrivial. (3) User directory: Administrators define people in the system by name, department, role and even potential authority level. (4) Work flow: This is the communication infrastructure that forwards tasks to the appropriate individual. This can actually be built quite simply by leveraging browsers and corporate e-mail. 
(5) Reporting/process monitoring: Companies need the ability to track the performance of their current processes and the performance of personnel who are executing these processes. (6) Integration: EAI is critical to BPM because all business processes will require data from systems throughout the organization. Widespread adoption of Web services could minimize the amount of effort needed here..."
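Of the six components Gurley lists, the process engine is the one he flags as "nontrivial": it must track the state and variables of every active process instance. A minimal sketch of that idea, with an invented process-definition format (a simple state-to-state map), might look like this:

```python
# Sketch of a BPM "process engine": tracks state and variables for many
# active process instances and advances each along a defined flow.
# The definition format (state -> next state) is illustrative only.
class ProcessEngine:
    def __init__(self, definition):
        self.definition = definition   # maps each state to its successor
        self.instances = {}            # pid -> {"state": ..., "vars": {...}}
        self.next_id = 0

    def start(self, variables):
        """Create a new process instance in the 'start' state."""
        self.next_id += 1
        self.instances[self.next_id] = {"state": "start",
                                        "vars": dict(variables)}
        return self.next_id

    def advance(self, pid):
        """Move one instance to its next state per the definition."""
        inst = self.instances[pid]
        inst["state"] = self.definition[inst["state"]]
        return inst["state"]

# A trivial approval process: start -> review -> approved.
engine = ProcessEngine({"start": "review", "review": "approved"})
pid = engine.start({"amount": 250})
engine.advance(pid)          # instance moves to "review"
engine.advance(pid)          # instance moves to "approved"
```

A production engine would add what the article's other components supply: rules and exceptions (IDE), task routing to people (workflow/user directory), and persistence and monitoring; the core, though, is exactly this bookkeeping of states and variables across thousands of concurrent instances.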
[April 01, 2003] "Curing the Web's Identity Crisis: Subject Indicators for RDF." By Steve Pepper and Sylvia Schwab (Ontopia). Ontopia Technical Report. March 2003. "This paper describes the crisis of identity facing the World Wide Web and, in particular, the RDF community. It shows how that crisis is rooted in a lack of clarity about the nature of 'resources' and how concepts developed during the XML Topic Maps effort can provide a solution that works not only for Topic Maps, but also for RDF and semantic web technologies in general... The heart of the matter is the question 'What do URIs identify?' Today there is no consistent answer to this question... [Also:] 'What is a resource?' [...] Why is this important? Because without clarity on this issue, it is impossible to solve the challenge of the Semantic Web, and it is impossible to implement scalable Web Services. It is impossible to achieve the goals of 'global knowledge federation' and impossible even to begin to enable the aggregation of information and knowledge by human and software agents on a scale large enough to control infoglut. Ontologies and taxonomies will not be reusable unless they are based on a reliable and unambiguous identification mechanism for the things about which they speak. The same applies to classifications, thesauri, registries, catalogues, and directories. Applications (including agents) that capture, collate or aggregate information and knowledge will not scale beyond a closely controlled environment unless the identification problem is solved. And technologies like RDF and Topic Maps that use URIs heavily to establish identity will simply not work (and certainly not interoperate) unless they can rely on unambiguous identifiers... The widely recognized 'identity crisis' of the Web is due to the absence of a formal distinction between information resources and subjects in general. This can be traced back to the definition of 'resource' in RFC 2396. 
Recognition of the important distinction made in Topic Maps between addressable and non-addressable subjects leads to the notion of subject indicators as an indirection mechanism for establishing the identity of subjects that cannot be addressed directly. This allows URIs to be used in two ways - as subject addresses or as subject identifiers - without ambiguity. Syntactic context can be used to determine which mode is intended in any specific instance. The concept of subject indicators also provides a powerful two-sided identification mechanism that can be used by both humans and applications. For RDF and other semantic web technologies to take advantage of this mechanism, changes are required in the underlying data model of RDF and the basic architecture of the Web. Once these are made, the foundation will have been laid for achieving the goals of the semantic web... A solution to the 'identity crisis of the Web' is clearly essential. The purpose of this paper is to offer an explanation of the root causes of the problem and to show how concepts originally developed as part of XML Topic Maps (XTM) offer a solution that can be applied to the semantic web in general..." General references in "(XML) Topic Maps."
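The two-mode use of URIs the paper advocates — a URI as subject address (the URI is the resource itself) versus a URI as subject identifier (the URI points at an indicator about a non-addressable subject) — can be sketched as a merge rule: topics sharing a subject identifier denote the same subject. The data structures below are illustrative, not the XTM syntax:

```python
# Sketch of subject-based identity: two topics denote the same subject if
# they share a subject address (the addressable resource itself) or share
# at least one subject identifier (a URI of an indicator about the subject).
# Dict keys and example URIs are illustrative, not XTM markup.
def same_subject(topic_a, topic_b):
    addr_a = topic_a.get("subjectAddress")
    if addr_a and addr_a == topic_b.get("subjectAddress"):
        return True   # both topics ARE the same addressable resource
    return bool(topic_a.get("subjectIdentifiers", set())
                & topic_b.get("subjectIdentifiers", set()))

composer_1 = {"subjectIdentifiers": {"http://psi.example/puccini"}}
composer_2 = {"subjectIdentifiers": {"http://psi.example/puccini",
                                     "http://other.example/composers#puccini"}}
homepage = {"subjectAddress": "http://www.puccini.example/index.html"}

print(same_subject(composer_1, composer_2))  # True: shared identifier
print(same_subject(composer_1, homepage))    # False: person vs. web page
```

The second result is the paper's central point: a web page about Puccini and the composer himself must never be conflated, and keeping addresses and identifiers in separate slots makes the distinction mechanical.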
[April 01, 2003] "T-Mobile Bet Is Paying Off." By Anne Chen. In eWEEK (April 01, 2003). "T-Mobile is placing a new bet on the potential of Web services to make it easier for external providers of content such as stock and weather updates to tie into its network, increasing revenue opportunities and enriching services for consumers. That bet, too, seems well on its way to paying off. The company's Web services development project -- in which the company has invested $3.17 million -- today connects more than 5 million mobile customers in Austria, Germany and the United Kingdom to 250 content providers... To fully integrate external content providers, T-Mobile needed to do more than just link phone users to the content itself. The company also needed to integrate content providers with T-Mobile systems providing services such as billing, identity management and localization for content customization... Interoperability would be key to T-Mobile's success because the company would be dealing with a wide variety of far-flung content partners running on different computing platforms, such as Java, Perl, PHP and Microsoft Corp. languages. Initially, the company considered using middleware such as BEA Systems Inc.'s Tuxedo and IBM's MQSeries Integrator to accomplish the integration. But Mike Glendinning, a consultant for T-Mobile in London who heads the company's Web services initiatives, preferred the flexibility of XML and the platform-neutral aspects of Web services. 'Part of the goal of using Web services was to put as few constraints on content providers as possible,' said Glendinning. 'We had to be aggressive about interoperability because most of the content partners deal with other mobile operators. The easier it is for them to connect to us, the larger the network we have.' 
With hundreds of content providers to work with, Glendinning decided he wanted to use a Web services tool kit that supported WSDL (Web Services Description Language) and SOAP (Simple Object Access Protocol) because of the standards' maturity, reliability and ease of use. At the end of 2001, T-Mobile deployed WASP (Web Applications and Services Platform) Developer 4.0 and WASP Server Advanced for Java 4.0, from Systinet Corp. WASP Server provides a run-time framework for T-Mobile's Web services including data mapping, security, administration and Java 2 Enterprise Edition integration. WASP Developer, a plug-in extension for Java integrated development environments, enables the development, debugging, testing and deployment of T-Mobile's Web services. Both run on top of BEA's WebLogic 6.1 application server deployed on a variety of Unix systems..."
[April 01, 2003] "ASN1C Mapping of ASN.1 Syntax to XML Schema." White Paper. From Objective Systems, Inc. March 2003. 21 pages. "An effort is currently underway within the ITU-T to map World-Wide Web Consortium (W3C) XML Schema Definition Language (XSD) to Abstract Syntax Notation 1 (ASN.1). But a parallel effort to provide a mapping in the other direction -- from ASN.1 to XSD -- is on hold. Given the current popularity of XSD for defining new standards, it would seem reasonable that ASN.1 to XSD conversion would be of interest to the ASN.1 community. This paper presents such a mapping. It uses as a basis the T1 Standards Committee Draft Standard -- tML Guidelines for Mapping ASN.1 Syntax and Modules to XML Schemas. The Objective Systems' ASN1C ASN.1 Compiler Tool v5.5 will have the capability of producing XML Schema that is consistent with these mappings..." See related references in "ASN.1 Markup Language (AML)." [cache]
[April 01, 2003] "Web Services Security, Part 2." By Bilal Siddiqui. From WebServices.xml.com (April 01, 2003). "In my previous article I discussed the security requirements of web services in B2B integration applications. I also introduced some XML-based security standards from W3C and OASIS. In this article, I will discuss three XML-based security standards -- XML Signatures, XML Encryption and Web Services Security -- which offer user authentication, message integrity and confidentiality features in SOAP communications. You can safely bet that these three standards fill the SOAP security hole I described previously. In what follows I explain how that hole is filled by demonstrating the creation, exchange, and processing of XML messages inside XML firewalls. The discussion of message integrity, user authentication, and confidentiality employs some core concepts: keys, cryptography, signatures, and certificates. I will briefly discuss cryptographic basics... The third and fourth parts of this series will explore further details of XML-based security, the different types of security tokens that we can use with WSS, and the use of XML encryption in WSS messages. In the next article, we will discuss Security Assertions Markup Language (SAML), which provides a way for web service applications to share user authentication information. This sharing of authentication data is commonly referred to as single sign-on. SAML can be used as a security token in WSS applications. The next article will elaborate why, when, and how..." See also "Web Services Security, Part 1," by Bilal Siddiqui (XML.com Web Services March 04, 2003).
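The core concepts Siddiqui mentions (keys, digests, integrity checking) can be sketched with Python's standard library. This is not XML Signature or WS-Security, which use XML canonicalization and typically public-key signatures with certificates; it is only a minimal keyed-digest illustration of how a receiver detects that a message body was altered in transit.

```python
import hashlib
import hmac

# Shared secret between sender and receiver. Real WSS deployments would
# typically rely on X.509 certificates and public-key signatures instead.
SECRET = b"s3cret-key"

def sign(message: bytes) -> str:
    """Compute a keyed digest to attach alongside the message."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, digest: str) -> bool:
    """Recompute the digest and compare in constant time."""
    return hmac.compare_digest(sign(message), digest)

body = b"<soap:Body><transfer amount='100'/></soap:Body>"
tag = sign(body)

assert verify(body, tag)                # intact message passes
tampered = body.replace(b"100", b"999")
assert not verify(tampered, tag)        # any alteration is detected
```

XML Signature adds to this picture the ability to sign selected parts of a document, carry the signature inside the document itself, and bind it to a signer's certificate, which is what makes it suitable for multi-hop SOAP message paths.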
[April 01, 2003] "Native XML Scripting." By Timothy Appnel. From the O'Reilly Developer Weblogs. April 01, 2003. "ECMA announced it is completing what it calls E4X (ECMAScript for XML). The goal of this extension is to standardize the syntax and semantics of a general-purpose, cross-platform, vendor-neutral set of programming language extensions adding native XML support in ECMAScript. John Schneider's article on BEA's dev2dev site illustrates the concept and value of native XML scripting in detail... This is really exciting stuff that should contribute significantly to the development of more lightweight and fluid Internet applications that can take full advantage of the emerging spectrum of Web services in a rapid and efficient manner... Incidentally, Adam Bosworth was instrumental in driving the E4X effort. It is no surprise that BEA is the first to implement it in a product given that Bosworth is their Vice President of Engineering and public face for technologists..." See also: (1) "Web Leaders Agree to Add Native XML to ECMAScript. New ECMA International Standard, ECMAScript for XML (E4X), to Unlock the Power of XML for Web Developers."; (2) "Native XML Scripting in BEA WebLogic Workshop," by John Schneider [BEA DEV2DEV].
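The flavor of E4X can be conveyed with a short sketch based on the draft specification: XML becomes a native data type with literal syntax and dot-notation navigation, rather than something manipulated through DOM calls. This is illustrative only and will not run in an engine without E4X support.

```
// E4X: XML is a native type with literal syntax
var order = <order id="123">
              <item><name>Widget</name><price>9.99</price></item>
            </order>;

var name = order.item.name;   // navigate with dot notation -> "Widget"
var id   = order.@id;         // attributes accessed via @   -> "123"
order.item.price = 10.99;     // in-place update, no DOM API needed
```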
[April 01, 2003] "Getting Your Arms Around Web Services." By Murthy Nukala and M. R. Rangaswami (Sand Hill Group). In Optimize Magazine Issue 17 (March 2003). ['Early adopters are proving the business case and setting best practices for Web services.'] "The term 'Web services' can be confusing even to experienced technology folks. It's not a Web application, but rather a new way to integrate enterprise applications that's faster, easier, and less expensive than traditional integration technologies. Basically, it involves breaking down a large application into smaller components and making these components serviceable, so that other applications can request a 'service' without knowing the system's innards. These service requests take place using open, Web-based standards, which today include the Simple Object Access Protocol (SOAP) and Web Services Description Language... Our study at the Sand Hill Group identified one major distinction: Successful Web-services implementations are strictly business-driven. Enterprise customers say the technology will survive because it makes a business impact, rather than being adopted for technology's sake. The pioneering companies we interviewed are realizing genuine business benefits and plan to continue on the road to adopting Web services. Today, the dream of a real-time business remains just that. In reality, operational inefficiencies prevent companies from achieving this vision. However, technology can improve information access. To get there, systems need to communicate seamlessly and quickly. That's why many IT experts are hopeful about the promise of services-oriented architectures. This model speeds progress toward the real-time enterprise -- and Web-services technology plays an important role in it. The biggest benefit of Web services doesn't occur the first time, but on recurring integration, says the VP of technology at one Fortune 500 financial-services company, who participated in the study. 
Web services allow you to reuse the application interface in another integration setting, within a different business process, or with partners... To gather information about these implementations and a sense of the process through which companies implemented the technology, we studied 60 Web-services projects and interviewed 117 enterprise-software decision makers: Global 2,000 corporate customers, application vendors, EAI vendors, platform vendors, and systems integrators. Our interviews resulted in more than 150 hours of conversation. We found that Web services are usually implemented in three stages: (1) Educate: Most companies conducted pilots and trials to become familiar with the technology and to maximize learning about its advantages and disadvantages and the business problems it can address. (2) Power the engine of value: Conduct small, focused implementations that address a specific business goal. These discrete projects will combine to serve as the engine of value from Web services. (3) Realize global benefits: As more applications become enabled by Web services through completion of discrete projects, developing new applications and integration points takes less time, effort, and money. These benefits accumulate quickly across the company, making its IT systems faster and more flexible..."
[April 01, 2003] "Group Addresses Web Services Security." By Darryl K. Taft. In eWEEK (April 01, 2003). "The Web Services Interoperability Organization Tuesday announced the formation of its Basic Security Profile Working Group (BSPWG). The group will focus on developing a basic profile for Web services security, much like the WS-I has developed a basic profile for achieving overall interoperability. Eve Maler, XML standards architect at Santa Clara, Calif.-based Sun Microsystems Inc., is the chairwoman of the WS-I Basic Security Work Plan task force. Maler said WS-I saw a need to go beyond the Basic Profile 1.0 and address security. 'It's clear that security is an area of great interest in the area of Web services,' and in November 2002, the WS-I formed the Basic Security Work Plan task force to build a plan for dealing with security interoperability issues, she said. 'Our group is a bootstrapping group looking at the problem space as opposed to the solution space,' Maler said. She also noted that the group so far has focused on two main types of security: 'transport layer security and security that adheres to the message' throughout its journey. The group identified and listed several technologies that ought to be profiled, including HTTPS -- or secure HTTP -- OASIS Web Services Security V1.0 and SOAP (Simple Object Access Protocol) attachment security, she said..." See other details in the 2003-04-01 news story "WS-I Charters Basic Security Profile Working Group (BSPWG)." General references in "Web Services Interoperability Organization (WS-I)."
[April 01, 2003] "WS-I Working Group Tackles Security. Aims to Develop Guidelines for Web Services Interoperability." By Paul Roberts. In InfoWorld (April 01, 2003). "A new working group within the Web Services Interoperability Organization (WS-I) will focus on ways to develop secure Web services, according to a statement released Tuesday by the WS-I. The Basic Security Profile Working Group (BSPWG) is chartered with developing guidelines for interoperability between different Web services implementations, referred to as the Basic Security Profile. That security profile will be an extension of the WS-I Basic Profile 1.0, according to WS-I. The WS-I Basic Profile provides guidelines for how Web services specifications such as SOAP, XML, or UDDI should be used together to develop interoperable Web services. In particular, the BSPWG will tackle interoperability issues involving transport security, SOAP messaging security, as well as other issues raised by the WS-I Basic Profile..." See other details in the 2003-04-01 news story "WS-I Charters Basic Security Profile Working Group (BSPWG)." General references in "Web Services Interoperability Organization (WS-I)."
[April 01, 2003] "Architectural Design Patterns for XML Documents." By Kyle Downey. From XML.com (March 26, 2003). ['Kyle Downey describes some patterns for XML document structure that are useful for those designing documents from the ground up.'] "One way programmers try to reuse good ideas about object design is to look to catalogs of design patterns... XML has been used enough now that some high-level patterns are starting to emerge. Some patterns revolve around the low-level details of good schema design, like those put together by Dare Obasanjo in 'W3C XML Schema Design Patterns'; but when you have a blank sheet of paper in front of you and you're ready to start designing your new XML format, you want patterns to guide you at a higher level. This article attempts to document a few whole-document design patterns that have proven themselves in the field..."
[April 01, 2003] "XML Standards for Financial Services." By Ayesha Malik. From XML.com (March 26, 2003). ['Ayesha Malik describes the current state of play in the electronic communications sector of the financial services world and gives an overview of the XML standards that will change the way the industry works.'] "The financial services industry spends billions of dollars on IT development to maintain its competitive edge. Most recently, banks, risk management firms, and insurance companies have been focusing on automating business processes and building systems that reduce the time from negotiating a trade to settling it to running risk analytics on trade positions. This is referred to as Straight Through Processing (STP); according to the Tower Group, the financial services industry will spend over $12.2 billion on STP technology through 2005. STP is currently the biggest challenge and the most hyped technology issue in the financial securities industry, particularly in the United States. The ultimate goal of straight through processing is to replace the traditional phone and fax confirmations with a completely automated loop, from pre-trade communication and deal capture through to clearing and settlement. This involves automating the processes passing trade-related data between investment managers, brokers, clearing agencies, and custodians. The core of STP, therefore, lies in the exchange of information between disparate systems. The structure of the information exchanged must be in a format that is agreed upon by the communicating parties and that is easily manipulated programmatically. The first requirement necessitates the presence of industry standards, and the second points to the use of XML as the data transport language. XML standards for the financial services industry, therefore, are essential for the successful implementation of STP. 
XML documents must carry information for pre-trade analysis such as quote requests, trade information such as security details and order information, post-trade analytics such as market data, and settlement and clearing specifics..." General references in "ISO 15022 XML."
[April 01, 2003] "Choosing Among JCA, JMS, and Web Services for EAI. Positioning Alternative Interface Technologies for Enterprise Integration." By Regis Coqueret (Engagement Manager/Solutions Architect, jStart Emerging Technologies, IBM Software Group) and Marc Fiammante (Senior Consulting IT Architect, SWG EMEA Business Integration Technical Sales, IBM). From IBM developerWorks, Web services. March 2003. ['This article discusses criteria for choosing among J2C Connector Architecture (JCA), Java Message Service (JMS), and Web services implementations, depending on your existing environments, the patterns you want to implement, and your preset requirements for loose or tight coupling.'] "Organizations evolve rapidly, and they seek to meet changing business requirements while managing costs. This means that enterprises desire to structure their own applications in a way that allows easy reorganization of information systems. Major organizational changes such as mergers or the creation of subsidiaries might also introduce new variables into the information system. Enterprises might also need to buy applications on the market or to subcontract part of their business needs, such as ledger or back-office management. There is no guarantee that such services are available on the existing technical framework. As the complexity of information systems increases, development must be simplified. This brought about interest in Enterprise Application Integration (EAI). Still, enterprises must complement EAI with business services and a flexible way to access the resulting, integrated applications. Interface-based architectures currently address this increasing need for flexible access to business services and client independence. Interface-based architectures include such technologies as Web services, J2C Connector Architecture (JCA) and Java Message Service (JMS). 
They also include all of the variations of the command pattern, which isolates client code from the implementation of the business service. You can call such invocation frameworks from EAI middleware, and vice versa. In this article we first discuss the main characteristics of each interface technology, and then the requirements that suggest one or another. After reading this article, you will understand how to position the various technologies and how to choose among them for a particular implementation..."
- XML Articles and Papers March 2003
- XML Articles and Papers February 2003
- XML Articles and Papers January 2003
- XML Articles and Papers December 2002
- XML Articles and Papers November 2002
- XML Articles and Papers October 2002
- XML Articles and Papers September 2002
- XML Articles and Papers August 2002
- XML Articles and Papers July 2002
- XML Articles and Papers April - June, 2002
- XML Articles and Papers January - March, 2002
- XML Articles and Papers October - December, 2001
- XML Articles and Papers July - September, 2001
- XML Articles and Papers April - June, 2001
- XML Articles and Papers January - March, 2001
- XML Articles and Papers October - December, 2000
- XML Articles and Papers July - September, 2000
- XML Articles and Papers April - June, 2000
- XML Articles and Papers January - March, 2000
- XML Articles and Papers July-December, 1999
- XML Articles and Papers January-June, 1999
- XML Articles and Papers 1998
- XML Articles and Papers 1996 - 1997
- Introductory and Tutorial Articles on XML
- XML News from the Press