The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: September 30, 2003
XML Articles and Papers September 2003

XML General Articles and Papers: Surveys, Overviews, Presentations, Introductions, Announcements

September 2003

  • [September 30, 2003] "OASIS to Build Web Services Framework. Committee Will Define Vendor-Neutral Methodology." By Paul Krill. In InfoWorld (September 30, 2003). "Members of OASIS this week announced plans to develop a global Web services framework to define a methodology for a broad-based, multiplatform and vendor-neutral implementation. The OASIS Framework for Web Services Implementation (FWSI) Technical Committee plans to design a template for Web services deployment to enable systems integrators, software vendors, and in-house developers to build e-commerce solutions more quickly, according to OASIS. The committee will define functionality for building Web services applications and service-oriented architectures. Specifically, the committee will specify a set of functional elements for practical implementation of Web services-based systems. At first glance, the OASIS project appears similar to the Basic Profile for Web services being set up by the Web Services Interoperability Organization (WS-I). But the technical committee expects to complement WS-I, according to OASIS. Committee member Sun Microsystems also is a major supporter of the WS-I Basic Profile. The committee plans to leverage applicable work within OASIS and other standards groups..." See details in the news story "OASIS Announces Framework for Web Services Implementation (FWSI) TC."

  • [September 30, 2003] "Taking XML's Measure." By David Becker. In CNET News.com (September 23, 2003). "Tim Bray and his colleagues in the World Wide Web Consortium had a very specific mission when they set out to define a new standard seven years ago. They needed a new format for Internet-connected systems to exchange data, a task being handled with increasing awkwardness by HyperText Markup Language. The solution Bray helped concoct was XML (Extensible Markup Language), which has since become one of the building blocks of information technology and today serves as the basic language for disparate computing systems to exchange data. Microsoft is betting heavily on XML-based technology that will turn the new version of Office into a conduit for viewing and exchanging data from backend systems. The biggest players in technology are betting heavily on Web services based on XML. And corporate giants such as Wal-Mart Stores are relying on XML to streamline their business processes. Bray has since gone on to address another big challenge -- the visual representation of data -- with his company, Antarctica, which sells tools that display information from Web searches, corporate portals and other sources in an intuitive map-based format. Bray talked about the spread of XML, challenges in search technology and other concerns with CNET News.com..." [Excerpt, on standards:] "Standards processes don't do well in dealing with new technologies, so I disagree that being ahead of the market is a good thing. The standards process works best when you've got a problem that's already been solved, and we have a consensus on what the right way to go is, and you just need to write down the rules. That's totally what XML was. There had been 15 years of SGML, so there was a really good set of knowledge as to how markup and text should work. 
And the Web had been around for five years, so we knew how URLs (Uniform Resource Locators) worked, and Unicode had been around, so we knew how to do internationalization. XML just took those solved problems, packaged them up neatly and got consensus on it all..."

  • [September 30, 2003] "Sun: Office 2003 Will 'Protect Microsoft's Monopoly'." By Andrew Colley. In ZDNet Australia (September 30, 2003). "Document protection tools in the next version of Microsoft's office suite represent extremes of proprietary thinking, says a Sun document. Sun Microsystems has expressed concerns that document protection tools that Microsoft will include in Office 2003 will fortify the software giant's domination over enterprise desktops. In a document never before released outside Sun, but shared with ZDNet Australia this week, Laurie Wong, Sun Microsystems software product manager, argued that while document rights management was a positive step, Microsoft was using its rights management regime to protect its 'monopoly'. According to Wong, Microsoft's adoption of rights management services would negate any positive impact that might have resulted from its decision to adopt open standards for its file storage format. 'In summary, on the one hand Microsoft claims to have opened up the storage format from a proprietary binary one to XML, an open one. On the other they have locked this 'open' format up with rights management,' wrote Wong, adding 'Yes, a couple of deck chairs have been shifted around, but you certainly are not on a different ship. It is a vexatious issue, promulgated by the extremes of proprietary thinking'. Wong argued that Windows RMS locks out members of the community using non-Microsoft products by coupling document protection systems to proprietary features of Microsoft's latest server technology. Windows RMS is designed to give enterprises control over their documents by specifying who can access them and how they can be used at the time they are created. Windows RMS requires the list of restrictions attached to each document to be registered on an RMS-capable Microsoft server. The server authenticates each user and issues him or her with a license to use an RMS-protected document. 
Anyone without access to the RMS technology server is effectively locked out of a protected document. When concerns about this were raised when Microsoft announced its rights management technology early this year, the company said that RMS was targeted for internal corporate use and that it could be incorporated into the Passport service for wider community inclusion. However, Wong is not satisfied by either argument. Nodding in the direction of the global divide between the technology haves and have-nots, Wong said that users shouldn't be forced to buy one company's products for the privilege of using widely used document formats. Adding to Wong's concerns, Microsoft has added the capability to apply rights management to emails and Web pages through Outlook 2003 and Internet Explorer..." See: (1) "Microsoft Announces Windows Rights Management Services (RMS)"; (2) general references in "XML and Digital Rights Management (DRM)."

  • [September 30, 2003] "Adobe's PDF-Everywhere Strategy." By David Becker. In CNET News.com (September 30, 2003). "Adobe Systems wants to put more than a few pulp mills out of business. Formed more than 20 years ago with the mission of ensuring uniform typefaces, the San Jose, Calif.-based software maker has since built a grand e-paper network, with Adobe products replacing or supplementing paper for tasks that range from tax forms to book publishing. But with its Portable Document Format (PDF) now widely used for distributing documents electronically, Adobe now wants to expand the PDF format into a multiplatform foundation for viewing and sharing corporate data. It's an ambitious plan that will likely bring Adobe into more direct competition with Microsoft -- though this would not be the first time the two companies have clashed. Meanwhile, Adobe is looking to extend its reach with publishing and graphics professionals. Adobe Creative Suite, a package of software the company announced earlier this week, combines common applications such as Photoshop with new tools for collaboration and managing files. Among other things, the package is expected to help boost market share for Adobe's InDesign page layout software, one of the company's most competitive products. Adobe CEO Bruce Chizen talked with CNET News.com about its suite approach, the future of the PDF and the possible confrontation with Microsoft, among other issues. [Chizen:] "The market we're going after is different from the market [Microsoft is] focused on. We're focused on those customers and those industries that care about the reliability of the document outside their environment, and they want to have intelligent documents that cut across platforms--and it's where good-enough -- meaning HTML -- is not going to meet their requirements. 
Our industries are banking, insurance, legal, manufacturing, pharmaceuticals, government--places where they want to do business with partners or customers or citizens, where they can't dictate the operating environment. They don't want to tell their customers, "If you want to open a certain document, you have to go out and buy a certain operating system and a certain piece of software..." Version Cue is really designed for individuals and work groups of 25 or fewer people. And as those individuals scale up, they're going to want a much more comprehensive, administrative-intense solution, and that's when they'll go buy an enterprise solution. And because we use industry standards that are built around XML schemas, we'll integrate well with those solutions. And we already are well along the way of creating partnerships with folks like IBM and Documentum..."

  • [September 30, 2003] "Create Web Services Using Apache Axis and Castor. How to Integrate Axis and Castor in a Document-Style Web Service Client and Server." By Kevin Gibbs, Brian D Goodman, and Elias Torres (IBM). From IBM developerWorks, Web services. September 30, 2003. ['Recent work has pointed out the benefits of using Document-style Web services over RPC -- they're cleaner, more natural to XML, and facilitate object exchange. However, Document-style services can be less than straightforward to deploy using Axis, since Axis's data binding framework can be difficult to use, doesn't support some popular features of XML-Schema, and most importantly, lacks validation support. This article addresses those woes by providing a step-by-step tutorial which explains how to integrate Axis with the Castor data-binding framework, creating a best-of-both-worlds Web service that combines the Web services prowess of Axis with the data-binding brawn of Castor.'] "RPC-style encoding is ultimately a limiting, unnatural use of its underlying technology, XML. It represents a misuse of technology -- when simple XML alone, in a Document-style service, provides all the expressibility desired. Keeping technology standards in the vein of the most natural, straightforward solutions, like Document style, is the true spirit of Web services, where interfaces are exposed, back-end and middleware systems are hidden, and dynamic discovery, binding, and endless reuse abound. This article shows how to use Castor XML binding to make Document-style Web services within an Apache Axis environment easier, cleaner, and more intuitive. It begins with a discussion of Web service encoding methods and an explanation of why Castor and Axis together make a good solution. It provides instructions and explanations for all of the necessary steps to getting a Document-style Web service up and running -- everything from designing the schema and service to generating the service and client code. 
The article covers configuring Axis to use Castor and attempts to cover any 'gotchas' a developer might encounter as they get their hands dirty... But once you're off the ground, you've got a Web service that gains all the flexibility and clarity of Document-style, the robust Web services support of Axis, and the validation and data binding prowess of Castor. When you've got Document-style services, Castor, and Axis set up, there are a lot of other interesting directions you can go in. For instance, in just a few more lines of code, you can have your server-side Castor objects marshall themselves into an SQL database, using Castor JDO. You can also use the regular expression and validation support of Castor to clean up Web service data so that your service and client have less room for potential bugs in their data..."
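
    The validation gap the authors highlight can be illustrated without Castor itself: the JDK's own javax.xml.validation package performs the same kind of schema check that Castor wires into its data binding. The sketch below rejects a payload whose quantity violates the schema; the order schema and element names are invented for illustration, not taken from the article.

    ```java
    import java.io.StringReader;
    import javax.xml.XMLConstants;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.validation.Schema;
    import javax.xml.validation.SchemaFactory;
    import javax.xml.validation.Validator;
    import org.xml.sax.SAXException;

    public class ValidateSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical schema for a Document-style service payload.
            String xsd =
                "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>" +
                "  <xs:element name='order'>" +
                "    <xs:complexType><xs:sequence>" +
                "      <xs:element name='sku' type='xs:string'/>" +
                "      <xs:element name='quantity' type='xs:positiveInteger'/>" +
                "    </xs:sequence></xs:complexType>" +
                "  </xs:element>" +
                "</xs:schema>";

            Schema schema = SchemaFactory
                .newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
                .newSchema(new StreamSource(new StringReader(xsd)));
            Validator validator = schema.newValidator();

            // Well-formed but invalid: xs:positiveInteger excludes zero.
            String bad = "<order><sku>A-1</sku><quantity>0</quantity></order>";
            try {
                validator.validate(new StreamSource(new StringReader(bad)));
                System.out.println("valid");
            } catch (SAXException e) {
                System.out.println("rejected");
            }
        }
    }
    ```

    In the Castor setup the article describes, this check happens automatically during unmarshalling; the point is that a Document-style endpoint can refuse malformed business data before any application code runs.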

  • [September 30, 2003] "What is Service-Oriented Architecture?" By Hao He. From O'Reilly WebServices.XML.com (September 30, 2003). "Service Oriented Architecture (SOA) is an architectural style whose goal is to achieve loose coupling among interacting software agents. A service is a unit of work done by a service provider to achieve desired end results for a service consumer. Both provider and consumer are roles played by software agents on behalf of their owners. SOA achieves loose coupling among interacting software agents by employing two architectural constraints: (1) a small set of simple and ubiquitous interfaces to all participating software agents. Only generic semantics are encoded at the interfaces. The interfaces should be universally available for all providers and consumers. (2) Descriptive messages constrained by an extensible schema delivered through the interfaces. No, or only minimal, system behavior is prescribed by messages. A schema limits the vocabulary and structure of messages. An extensible schema allows new versions of services to be introduced without breaking existing services... Interfacing is fundamentally important: if interfaces do not work, systems do not work. Interfacing is also expensive and error-prone for distributed applications. An interface needs to prescribe system behavior, and this is very difficult to implement correctly across different platforms and languages. Remote interfaces are also the slowest part of most distributed applications. Instead of building new interfaces for each application, it makes sense to reuse a few generic ones for all applications. Since we have only a few generic interfaces available, we must express application-specific semantics in messages. We can send any kind of message over our interfaces, but there are a few rules to follow before we can say that an architecture is service oriented. 
First, the messages must be descriptive, rather than instructive, because the service provider is responsible for solving the problem. This is like going to a restaurant: you tell your waiter what you would like to order and your preferences but you don't tell their cook how to cook your dish step by step. Second, service providers will be unable to understand your request if your messages are not written in a format, structure, and vocabulary that is understood by all parties. Limiting the vocabulary and structure of messages is a necessity for any efficient communication. The more restricted a message is, the easier it is to understand the message, although it comes at the expense of reduced extensibility. Third, extensibility is vitally important... If messages are not extensible, consumers and providers will be locked into one particular version of a service. Despite the importance of extensibility, it has been traditionally overlooked. At best, it was regarded simply as a good practice rather than something fundamental. Restriction and extensibility are deeply entwined. You need both, and increasing one comes at the expense of reducing the other. The trick is to have a right balance. Fourth, an SOA must have a mechanism that enables a consumer to discover a service provider under the context of a service sought by the consumer. The mechanism can be really flexible, and it does not have to be a centralized registry..."
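
    The first two constraints -- one generic interface, with application semantics carried in descriptive messages rather than in method signatures -- can be sketched in a few lines. The Service interface and payloads below are invented for illustration and are not from the article.

    ```java
    // A single generic interface: the only operation is "process a document".
    // What the service does is described by the message, not by the signature.
    interface Service {
        String process(String xmlMessage);
    }

    public class SoaSketch {
        public static void main(String[] args) {
            // One provider behind the generic interface. The consumer says
            // *what* it wants (an order), never *how* to fulfill it.
            Service orders = xml -> xml.contains("<order>")
                    ? "<ack>accepted</ack>"
                    : "<fault>unknown document</fault>";

            System.out.println(orders.process("<order><sku>A-1</sku></order>"));
            System.out.println(orders.process("<ping/>"));
        }
    }
    ```

    New message types can be added without touching the interface, which is the extensibility point the article stresses; in a real SOA the message vocabulary would be constrained by an extensible schema rather than a string check.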

  • [September 30, 2003] "QA Framework: Operational Guidelines." Edited by Lofton Henderson, Dominique Hazaël-Massieux, Lynne Rosenthal, and Kirill Gavrylyuk. W3C Candidate Recommendation. 22-September-2003. Latest version URL: http://www.w3.org/TR/qaframe-ops/. Produced by members of the W3C QA Working Group under the W3C Quality Assurance (QA) Activity. This document defines a "common operational framework for building conformance test materials for W3C specifications." It presents operational and procedural guidelines for groups undertaking conformance materials development. This document is one of the QA Framework family of documents of the Quality Assurance (QA) Activity, which includes the other existing or in-progress specifications: Introduction; Specification Guidelines; and Test Guidelines. The scope of this specification is a set of verifiable requirements for the process and operational aspects of the quality practices of W3C Working Groups. The primary goal is to help the W3C Working Groups (WGs) with the planning, development, deployment, and maintenance of conformance test materials (TM). For this guidelines document, the term conformance test materials includes conformance test suites, validation tools, conformance checklists, and any other materials that are used to check or indicate conformance of an implementation to a specification... As the complexity of W3C specifications and their interdependencies increases, quality assurance becomes even more important to ensuring acceptance and deployment in the market. These guidelines aim to capture the experiences, good practices, activities, and lessons learned of the Working Groups, and to present them in a comprehensive, cohesive set of documents for all to use and benefit from. 
They thereby aim to: (1) standardize the best of current practice, (2) allow the WGs to reuse what works rather than having to reinvent, (3) facilitate and expedite the work of the WGs, and (4) promote consistency across the various WG quality activities and deliverables..." See also the "Implementation Plan and Report for the QA Operational Guidelines" and the public archives of the 'www-qa' list.

  • [September 30, 2003] "An Introduction to StAX." By Elliotte Rusty Harold. In XML.com (September 17, 2003). "Most current XML APIs fall into one of two broad classes: event-based APIs like SAX and XNI or tree-based APIs like DOM and JDOM. Most programmers find the tree-based APIs to be easier to use; but such APIs are less efficient, especially with respect to memory usage. An in-memory tree tends to be several times larger than the document it models. Thus tree APIs are normally not practical for documents larger than a few megabytes in size or in memory-constrained environments such as J2ME. In these situations, a streaming API such as SAX or XNI is normally preferred. A streaming API uses much less memory than a tree API since it doesn't have to hold the entire document in memory simultaneously. It can process the document in small pieces. Furthermore, streaming APIs are fast. They can start generating output from the input almost immediately, without waiting for the entire document to be read. They don't have to build excessively complicated tree data structures that they'll just pull apart again into smaller pieces. However, the common streaming APIs like SAX are all push APIs. They feed the content of the document to the application as soon as they see it, whether the application is ready to receive that data or not. SAX and XNI are fast and efficient, but the patterns they require programmers to adopt are unfamiliar and uncomfortable to many developers. Pull APIs are a more comfortable alternative for streaming processing of XML. A pull API is based around the more familiar iterator design pattern rather than the less well-known observer design pattern. In a pull API, the client program asks the parser for the next piece of information rather than the parser telling the client program when the next datum is available. In a pull API the client program drives the parser. In a push API the parser drives the client. [Now] the next generation API is here. 
BEA Systems, working in conjunction with Sun, XMLPULL developers Stefan Haustein and Aleksandr Slominski, XML heavyweight James Clark, and others in the Java Community Process are on the verge of releasing StAX, the Streaming API for XML. StAX is a pull parsing API for XML which avoids most of the pitfalls I noted in XMLPULL. XMLPULL was a nice proof of concept. StAX is suitable for real work. Like SAX, StAX is a parser-independent, pure Java API based on interfaces that can be implemented by multiple parsers. Currently there is only one implementation, the reference implementation bundled with the draft specification... StAX is a fast, potentially extremely fast, straightforward, memory-thrifty way of loading data from an XML document the structure of which is well known in advance; it will be a very useful addition to any Java developer's XML toolkit..." See details in the following bibliographic reference.
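
    The pull pattern Harold describes is easy to see in code. This sketch uses the javax.xml.stream package, which later shipped in the JDK itself; in 2003 the same API required the JSR-173 reference implementation on the classpath.

    ```java
    import java.io.StringReader;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamReader;

    public class PullDemo {
        public static void main(String[] args) throws Exception {
            String doc = "<books><book>XML</book><book>SGML</book></books>";
            XMLStreamReader r = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new StringReader(doc));
            // The client drives the parser: it asks for the next event
            // when it is ready, instead of reacting to SAX callbacks.
            while (r.hasNext()) {
                if (r.next() == XMLStreamConstants.START_ELEMENT
                        && "book".equals(r.getLocalName())) {
                    System.out.println(r.getElementText());
                }
            }
            r.close();
        }
    }
    ```

    Only one event is held in memory at a time, which is why this scales to documents far larger than any tree API could handle.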

  • [September 30, 2003] "Streaming API for XML." Java Specification Request (JSR) #173. Specification Lead: Christopher Fry (BEA Systems). Produced under the Java Community Process. Expert Group Members: Arnaud Blandin (Intalio, Inc.), Andy Clark (Apache), James Clark, Christopher Fry (BEA Systems, Specification Lead), Stefan Haustein, Simon Horrell (Developmentor), K. Karun (Oracle), Glenn Marcy (IBM), Gregory M. Messner (Breeze Factor), Aleksander Slominski, David Stephenson (Hewlett-Packard), James Strachan, and Anil Vijendran (Sun Microsystems). JSR-000173 Streaming API for XML Specification 0.7. Proposed Final Draft. August 27, 2003. 61 pages. A reference implementation is included in the ZIP archive containing the draft specification. "This specification describes the Streaming API for XML (StAX), a bi-directional API for reading and writing XML. This document along with the associated API documentation is the formal specification for JSR-173... This document specifies a new API for parsing and streaming XML between applications in an efficient manner. Efficient XML processing is fundamental for several areas of computing, such as XML based RPC and Data Binding... The Streaming API for XML gives parsing control to the programmer by exposing a simple iterator based API and an underlying stream of events. Methods such as next() and hasNext() allow an application developer to ask for the next event (pull the event) rather than handling the event in a callback. This gives a developer more procedural control over the processing of the XML document. The Streaming API also allows the programmer to stop processing the document, skip ahead to sections of the document, and get subsections of the document. The Streaming API for XML consists of two styles: a low-level cursor API, designed for creating object models, and a higher-level event iterator API, designed to be used in pipelines and be easily extensible..." 
Background to the StAX design: "Processing XML has become a standard function in most computing environments. Two main approaches exist: (1) the Simple API for XML processing [SAX] and (2) the Document Object Model (DOM). SAX is a low-level parsing API while DOM provides a random-access tree-like structure. One drawback to the SAX API is that the programmer must keep track of the current state of the document in the code each time they process an XML document and thus cannot iteratively process it. Another drawback to SAX is that the entire document needs to be parsed at one time. DOM provides APIs that allow random access and manipulation of an in-memory XML document. At first glance this seems like a win for the application developer. However, this perceived simplicity comes at a very high cost: performance. For very large documents one may be required to read the entire document into memory before taking appropriate actions based on the data..." See preceding bibliographic entry.
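
    The "bi-directional" half of the specification, writing, mirrors the cursor style: instead of pulling events from a reader, the program pushes events to an XMLStreamWriter. A minimal sketch using the same javax.xml.stream package:

    ```java
    import java.io.StringWriter;
    import javax.xml.stream.XMLOutputFactory;
    import javax.xml.stream.XMLStreamWriter;

    public class WriteDemo {
        public static void main(String[] args) throws Exception {
            StringWriter out = new StringWriter();
            XMLStreamWriter w = XMLOutputFactory.newInstance()
                    .createXMLStreamWriter(out);
            // Each call emits one event; nothing is buffered as a tree.
            w.writeStartDocument();
            w.writeStartElement("book");
            w.writeAttribute("lang", "en");
            w.writeCharacters("StAX");
            w.writeEndElement();
            w.writeEndDocument();
            w.close();
            System.out.println(out);
        }
    }
    ```

    Because output is streamed event by event, a document of any size can be produced in constant memory, the mirror image of the cursor-style reader.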

  • [September 30, 2003] "Marking Up Bureaucracy." By Paul Ford. From XML.com (September 24, 2003). "If there is a perfect user of XML, it's the huge, sprawling United States government. With thousands of diverse offices, from the Navy to the National Park Service, each federal agency routinely exchanges gigabytes' worth of documents and data with other offices, businesses, and citizens. Organizations as large as the US government rarely move quickly, so at first it's surprising to see so much XML activity underway. Historically, however, many government organizations are not strangers to markup... Right now, centralization is the exception, not the norm. Different XML applications are scattered across different government agencies. The DoD, EPA, IRS, and others create schemas as needed, and apply them internally. In an effort to encourage centralization of all online government services, including those using XML, the White House created the E-government initiative, which divides government technology into three roles: Government-to-Government (G2G), Government-to-Business (G2B), and Government-to-Citizen (G2C). Most effort has been focused on G2G. As described above, one of the major creators and consumers of markup is the Department of Defense. Earlier efforts at standardizing schemas DoD-wide met with significant resistance, so now the DoD uses a 'market-oriented' strategy to manage its own XML registry. According to Owen Ambur, co-founder and co-chair of the XML working group, 'essentially, individual departments are encouraged to post schemas,' and other departments are encouraged to work with existing schemas instead of inventing new ones with the hope that, over time, individual schemas will be identified as most useful and promoted broadly throughout government. 
Much effort is also being applied to the E-Forms for E-Gov project, which is currently creating an infrastructure for using XForms, PDF, and related technologies to allow the myriad different federal forms to be filled out and signed electronically. This technology is expected to be useful both in G2G and G2B and will allow the processing of common forms, like passport applications, applications for federal assistance, and travel vouchers, to be completely automated..."

  • [September 30, 2003] "Report: Widespread Use of Microsoft Poses Security Risk. Organizations Should Diversify Their Software Mix, Says Industry Group." By Stacy Cowley. In InfoWorld (September 24, 2003). "Whatever Microsoft's strengths or failings as a developer of reliable software, the mere existence of an operating system monopoly is a critical security risk, argues a new report released Wednesday at a Computer & Communications Industry Association (CCIA) gathering in Washington, D.C. Written by seven IT security researchers, CyberInsecurity -- The Cost of Monopoly calls on governments and businesses to consider in their buying decisions the dangers of homogenous systems, and to diversify the software mix deployed in their organizations. It also urges the U.S. government to counterbalance Microsoft's user lock-in tactics by forcing the company to offer multiplatform support for its dominant applications, including Internet Explorer and Microsoft Office products... While Microsoft is a focus of the report, the company isn't solely responsible for the risky situation that now exists, the authors said... None of the report's authors were paid for their contributions, and the CCIA is merely acting as the paper's publisher and did not influence its content, according to the report's instigator, @stake Inc. Chief Technical Officer Dan Geer. The report's conclusions do, however, dovetail with CCIA's push for tighter regulatory controls on Microsoft and for greater diversity in the U.S. federal government's IT systems. The group plans to feature the report at this week's conference, and in its conversations with representatives of Congress and federal agencies. The report's authors said they hope it will aid corporate IT workers in efforts to convince executives at their companies that Microsoft's software shouldn't be deployed by default. 'There isn't a lot of talk about monoculture and security problems. 
Our hope is that we can bring this into the debate,' [Perry] Metzger said. Beyond recommending diversification, the paper suggests steps the U.S. government could take to mitigate the effects of Microsoft's monopoly position. Forced publication of APIs (application program interfaces) for Microsoft's Windows and Office software would help, as would requiring the company to work with other industry vendors on development of future specifications through a process similar to the Internet Society's RFC (request for comments) system, the report said..." Note: The "@stake Inc. Chief Technical Officer Dan Geer" mentioned above was fired in connection with his authorship contribution in this report. See: (1) "Security Expert Geer Sounds Off on Dismissal"; (2) "Former @stake CTO Dan Geer on Microsoft Report, Firing." Bibliographic reference for the report is cited below.

  • [September 30, 2003] "CyberInsecurity: The Cost of Monopoly. How the Dominance of Microsoft's Products Poses a Risk to Security." By Daniel Geer, Sc.D (Chief Technical Officer, @Stake), Charles P. Pfleeger, Ph.D (Master Security Architect, Exodus Communications, Inc.), Bruce Schneier (Founder, Chief Technical Officer, Counterpane Internet Security), John S. Quarterman (Founder, InternetPerils, Matrix NetSystems, Inc.), Perry Metzger (Independent Consultant), Rebecca Bace (CEO, Infidel), and Peter Gutmann (Researcher, Department of Computer Science, University of Auckland). Published by Computer & Communications Industry Association (CCIA). September 2003. 25 pages. "... As fast as the world's computing infrastructure is growing, security vulnerabilities within it are growing faster still. The security situation is deteriorating, and that deterioration compounds when nearly all computers in the hands of end users rely on a single operating system subject to the same vulnerabilities the world over. Most of the world's computers run Microsoft's operating systems, thus most of the world's computers are vulnerable to the same viruses and worms at the same time. The only way to stop this is to avoid monoculture in computer operating systems, and for reasons just as reasonable and obvious as avoiding monoculture in farming. Microsoft exacerbates this problem via a wide range of practices that lock users to its platform. The impact on security of this lock-in is real and endangers society. Because Microsoft's near-monopoly status itself magnifies security risk, it is essential that society become less dependent on a single operating system from a single vendor if our critical infrastructure is not to be disrupted in a single blow. The goal must be to break the monoculture. Efforts by Microsoft to improve security will fail if their side effect is to increase user-level lock-in. 
Microsoft must not be allowed to impose new restrictions on its customers -- imposed in the way only a monopoly can do -- and then claim that such exercise of monopoly power is somehow a solution to the security problems inherent in its products. The prevalence of security flaws in Microsoft's products is an effect of monopoly power; it must not be allowed to become a reinforcer. Governments must set an example with their own internal policies and with the regulations they impose on industries critical to their societies. They must confront the security effects of monopoly and acknowledge that competition policy is entangled with security policy from this point forward..."

  • [September 30, 2003] "Java Panel Pondering Web Services, Portal Proposals. J2EE 1.4 Readied for Approval." By Paul Krill. In InfoWorld (September 24, 2003). "Proposals to boost Web services and portal capabilities in Java are up for imminent votes by stewards of the programming language, according to an official at Java inventor Sun Microsystems. Java 2 Platform, Enterprise Edition (J2EE) 1.4, which adds Web services support and backing for the Web Services Interoperability Organization's Basic Profile for Web services, is up for a vote by an executive committee of the Java Community Process (JCP) in the next couple of weeks, said Onno Kluyt, director of the JCP program office at Sun. J2EE 1.4 will be voted on by the JCP Standard Edition Enterprise Edition Executive Committee (SE/EE), with results expected by the end of the year. Up for a vote this week by the SE/EE committee is JSR 168, which is intended to define a standard API allowing developers to write a portlet once and deploy it from any compliant server with little or no recoding. The vote is expected to be finalized in two weeks, according to Sun. JCP in the next two weeks also is conducting elections to its two executive committees. These committees are the ME (Micro Edition) committee, which oversees Java 2 Platform Micro Edition for consumer and embedded systems, and the SE/EE committee, overseeing Java technologies for the server and desktop. Five seats are up on each panel. In place of current member PalmSource, Sun, which nominates 10 members for each panel, is nominating service provider Vodafone to the ME executive committee. JCP members then vote on the nominations..." See also JSR 168 Portlet API. JSR #168 was approved in a final ballot; voting to approve: Apple Computer, Inc., BEA Systems, Borland Software Corporation, Caldera Systems, Cisco Systems, Fujitsu Limited, Hewlett-Packard, IBM, IONA Technologies PLC, Doug Lea, Macromedia, Inc., Nokia Networks, Oracle, SAP AG, Sun Microsystems, Inc.

  • [September 30, 2003] "Client Quality Reporting for J2EE Web Services. Use SOAP Attachments to Report Client Response Times for Web Services." By Brian Connoll. In JavaWorld (September 19, 2003). This article documents the implementation of "a general-purpose architecture for recording client response times for J2EE (Java 2 Platform, Enterprise Edition) Web services. The response times recorded are actual client response times, so they accurately reflect a user's perspective of the service quality. The sample implementation was built using the Sun ONE (Open Network Environment) Application Server and IDE, but the general approach can be easily adapted to other J2EE implementations... While Web services ease the building of client-server systems, monitoring service quality is a significant problem. Consider a client application that submits a transaction on a user's behalf. A business transaction usually involves several Web service calls: an initial call to submit a work item, subsequent calls to check for completion, and a final call to get the result. Each call is a distinct HTTP/SOAP (Simple Object Access Protocol) exchange. Put yourself in the position of an IT department responsible for monitoring server load and forecasting future needs. The fundamental question you must answer is, 'How well am I serving my clients now, and what will I need to serve them in the future?' Answering this question is difficult if you have only HTTP logs. Clients care about transactions, but since each transaction consists of several HTTP requests, the best you can do to estimate service quality is to develop custom data-mining software that cursors through HTTP logs and builds a model of user transactions. Even so, the information you have is still limited because it can't reflect network transport or client application overhead. This article's key idea is that transaction service quality is best measured by the client. 
The approach adopted here allows the client to record actual transaction response times. A client application uploads response time reports to the server by appending them to the next up-bound transaction request. The server strips off these attachments and queues them for storage and offline analysis... This approach can be used to measure accurate response times from the perspective of a client application. The implementation is lightweight. No new network traffic is needed between the client and the server. Metrics payloads are queued for low-priority logging, so server resources can be reserved for application processing..."
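The piggybacking mechanism described above can be sketched in a few lines. This is an illustrative stand-in using Python's standard email/MIME machinery rather than the article's actual Sun ONE/J2EE implementation; every class and method name here is invented for the demo:

```python
# Sketch of the piggybacked-metrics idea: response-time reports queued on
# the client ride along as an extra MIME part on the next outbound request,
# and the server strips them off before handling the SOAP body. No extra
# network traffic is generated, matching the article's design goal.
from email import message_from_string
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

class MetricsClient:
    def __init__(self):
        self.pending = []          # response-time reports awaiting upload

    def record(self, operation, millis):
        self.pending.append(f"{operation},{millis}")

    def build_request(self, soap_body):
        """Wrap the SOAP body plus any queued metrics in one multipart message."""
        msg = MIMEMultipart()
        msg.attach(MIMEText(soap_body, "xml"))
        if self.pending:
            metrics = MIMEText("\n".join(self.pending), "csv")
            metrics.add_header("Content-ID", "<response-time-metrics>")
            msg.attach(metrics)
            self.pending = []      # reports are now on their way upstream
        return msg.as_string()

def server_strip_metrics(raw_request):
    """Server side: detach metrics parts and queue them for offline analysis."""
    msg = message_from_string(raw_request)
    body, metrics = None, []
    for part in msg.walk():
        if part.get("Content-ID") == "<response-time-metrics>":
            metrics.append(part.get_payload())
        elif part.get_content_type() == "text/xml":
            body = part.get_payload()
    return body, metrics

client = MetricsClient()
client.record("submitWorkItem", 420)   # client-measured time, in milliseconds
body, metrics = server_strip_metrics(client.build_request("<soap:Envelope/>"))
```

The server hands `metrics` to a low-priority logging queue, so application processing is unaffected, which is the lightweight property the article emphasizes.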

  • [September 30, 2003] "Is New Office 2003 Suite Worth the Upgrade?" By Mario Morejon, Vincent A. Randazzese, and Michael Gros. In CRN (September 25, 2003). "Microsoft Office 2003 is slated to launch Oct. 21, and it's already available to volume buyers. But one question looms -- the same question, in fact, that hovers over every Microsoft release. Is the upgrade really worth it? ... One of the most significant developments in Office 2003 is the use of embedded XML throughout, which makes the suite an excellent developer's tool and allows data to be shared easily among applications and users. Developing department-level applications has never been easier now that Microsoft has introduced InfoPath 2003, a client-driven XML form editor that integrates with XML-driven data sources in a variety of ways. InfoPath can query XML-driven databases and has a database wizard that ports Access 2003 tables and converts them into XML forms. The tool also preserves database schemas, so users don't have to reinvent the wheel when integrating their database-driven applications. In the case of SQL Server, database connections are driven via OLE DB... Web services, too, can tie into InfoPath as long as they're discoverable via UDDI, and the UDDI lookup can be done without any coding. But InfoPath isn't perfect yet. SQL joins can't be used because primary keys generated in Access cannot be replicated nor created outside the Access environment. Essentially, a many-to-one relationship violation can occur between tables, and conversion of Access forms into InfoPath forms is not yet always possible. Also, InfoPath chokes up when handling repeating fields in Access forms. As for FrontPage 2003, it's much improved and should no longer be considered an HTML editor for novice users. Test Center engineers predict that the program will now give Macromedia Dreamweaver a run for its money. 
Users can create XML data-driven Web sites in four easy steps and publish them to Microsoft SharePoint Services sites with little understanding of XML or XSLT. Data inside XML files are kept live because XML files are not copied into the pages but are linked to databases instead. Web services also work well with FrontPage, but they have to be published to a catalog before data can be pulled from them. Users can hook up a Web service to a FrontPage site easily -- in just four steps. Another impressive FrontPage feature is the conditional formatting task pane, which controls what is viewable on a page. Users can determine what's viewable by highlighting content and determining what data can be put on a page based on field values in a conditional query. No coding is required to do this..."

  • [September 30, 2003] "RFID Ripples Through Software Industry. Sun, SAP, Oracle, IBM Integrate RFID Data Into Mainstream Applications." By Ephraim Schwartz. In InfoWorld (September 26, 2003). "Big name vendors including Sun, SAP, Oracle, and IBM have caught the RFID (Radio Frequency Identification) buzz. Spurred in part by a Wal-Mart edict that requires suppliers to tag all shipping cases and pallets with RFID by 2006, the vendors are rewriting their enterprise applications to integrate RFID data.... 'Wal-Mart's marching orders are heard across the industry,' said Joshua Greenbaum, principal, Enterprise Applications Consulting. The changes in the queue include RFID extensions to Oracle's database and application server and SAP R3 applications, higher-level integration of RFID with Sun's Sun ONE integration platform, and integration with IBM's DB2 Information Integrator to facilitate the handoff of data from RFID readers to enterprise applications. Most industry analysts argue that RFID tagging is a transformational development that will ultimately change the way businesses plan, price, distribute, and advertise products. But for the present, enterprise application vendors are extending their products to handle an expected boom in RFID data. Until now, a bar-coded item sat on a retail shelf and did not generate any data until it was scanned by a bar code reader. And then the data was read only once. RFID, on the other hand, is a passive technology that does not require human interaction to scan. A reader can extract location and product description data from a tagged item every 250 milliseconds. Some readers are capable of reading data from 200 tags per second. The result is a data increase of more than one thousand times above traditional scanning methods. In response, Sun Microsystems is developing a middleware product to manage the influx of RFID data to filter out noise and duplicate data, according to Solutions Product Architect Sean Clark. 
Currently in its pilot phase and commercially available by first quarter 2004, Sun's middleware will comply with Savant, an industry standard for this aspect of RFID filtering. 'Savant acts as the buffering layer between readers and enterprise applications,' Clark said. In addition, Sun is writing a software component that will implement its version of the RFID industry standard EPC (Electronic Product Code) Information Service..." See "RFID Resources and Readings."
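The filtering job described above -- collapsing a reader's repeated 250-millisecond sightings of the same tag into single upstream events -- can be sketched as follows. This is a hypothetical illustration of the buffering idea, not Sun's middleware or the Savant interface:

```python
# Minimal sketch of the dedup/noise filter a Savant-style buffering layer
# performs between RFID readers and enterprise applications. A tag that a
# reader reports every 250 ms becomes one "tag seen" event per window.
def filter_reads(reads, window_ms=1000):
    """Drop repeat sightings of a tag within window_ms of its last report.

    reads: iterable of (timestamp_ms, tag_id) pairs, in time order.
    Returns only the sightings that should be passed upstream.
    """
    last_seen = {}     # tag id -> timestamp of last event passed upstream
    events = []
    for timestamp, tag in reads:
        if tag not in last_seen or timestamp - last_seen[tag] >= window_ms:
            events.append((timestamp, tag))
            last_seen[tag] = timestamp
    return events

# A reader reporting one tag every 250 ms for two seconds: eight raw reads
# collapse to one event per one-second window.
reads = [(t, "urn:epc:1") for t in range(0, 2000, 250)]
events = filter_reads(reads)
```

The thousand-fold data increase the article cites is exactly why this kind of reduction has to happen below the enterprise applications rather than inside them.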

  • [September 30, 2003] "What's Next for SQL Server?" By Lisa Vaas. In eWEEK (September 26, 2003). "Users demanded SQL Server bond tighter with Visual Studio .Net, and Microsoft Corp. has since heeded the call, putting into beta testers' hands a version that opens the database up to .Net-compliant languages. The next version of SQL Server, code-named 'Yukon,' was originally slated for a spring 2004 release. That deadline was pushed out to the second half of next year after customers said they expected Yukon to fit hand-in-glove with the next version of .Net, code-named Whidbey. The Yukon beta was released in July to some 2,000 customers and partners. eWEEK recently talked with Microsoft Group Product Manager Tom Rizzo to find out how the .Net integration that customers demanded, along with upcoming features such as native XML and Web Services support, will benefit enterprises." [Rizzo:] "From the data level, we have things like native XML support. You take data from SQL Server, put it into XML format and ship it to anything that understands XML, such as Oracle has some XML support, and [IBM's DB2 database]. XML is ultimate interoperability -- it's an industry-standard format, and it's self-describing. You know both the schema of the data as well as the data itself. You don't lose the context when you pass your data around. We upped the level of XML support in Yukon through a number of things. In 2000 we had XML support but -- it was shredding. (Shredding is the parsing of XML tag components into corresponding relational table columns.) In Yukon the key thing is we have an XML type. Like you have STRING and NUMBERS and all that inside the database, now you can declare with the native data type XML. Although we had XML support in 2000, and many leveraged it and were happy with it, now we have native support... One reason we [moved to a native data type for XML] is to support XQuery. 
Also to support XQuery we had to build code so as to combine XML with relational query language. You can take the relational sorts of queries you're used to in the database world, where people select things from tables with filters on that data. You can combine XQuery statements with such relational queries..." See also "XML and Databases."
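Rizzo's parenthetical definition of shredding can be made concrete with a small standalone sketch. sqlite3 and ElementTree stand in here purely for illustration; this mirrors the idea, not SQL Server's actual mechanism:

```python
# "Shredding": parsing XML tag components into corresponding relational
# table columns, as SQL Server 2000 did. A native XML column type (the
# Yukon approach) would instead store the document whole and expose it
# to XQuery, skipping this step entirely.
import sqlite3
import xml.etree.ElementTree as ET

doc = "<order id='7'><item sku='A1' qty='2'/><item sku='B2' qty='5'/></order>"

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE order_items (order_id INT, sku TEXT, qty INT)")

root = ET.fromstring(doc)
for item in root:
    # Each XML attribute lands in its own relational column.
    con.execute("INSERT INTO order_items VALUES (?, ?, ?)",
                (int(root.get("id")), item.get("sku"), int(item.get("qty"))))

rows = con.execute("SELECT sku, qty FROM order_items ORDER BY sku").fetchall()
```

The cost of shredding is visible even at this scale: the document's shape is gone once it is in the table, so reconstructing or querying the original XML requires reassembly, which is the gap the native XML type closes.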

  • [September 30, 2003] "Web services Players Push Management Barrow. Actional, AmberPoint, Empirix Separately Unveil Wares." By Paul Krill. In InfoWorld (September 29, 2003). "Web services vendors Actional, AmberPoint, and Empirix this week will attempt to improve Web services management capabilities with a host of product releases... AmberPoint is unveiling Exception Manager for resolving operational and business exceptions in Web services systems, while Empirix is bolstering testing and monitoring with its e-Test suite 6.8 and OneSight 4.8. For its part, Actional will announce its SOAPstation Edge XML firewall software and MyServicesPortal dashboard for monitoring Web services activity and service-level agreement compliance. Actional's SOAPstation Edge enables Web services management to be conducted outside a firewall by extending the brokering capabilities of the company's SOAPstation product. An add-on product to SOAPstation, SOAPstation Edge, reduces redundant processing in SOAP messages, providing XML firewall capabilities and processing of messages and management policy in a single message passage, said Dan Foody, CTO at Actional. MyServicesPortal provides a portal for both technical and non-technical persons to examine factors such as how a service network affects a particular customer, said Foody. Users can customize their views of the management system. Service Stabilizer identifies and corrects undesirable operating conditions in Web services and services-based applications before they become problems, according to Actional. For example, Service Stabilizer can detect if a network is overloaded, Foody said... AmberPoint Exception Manager is intended to detect and resolve distributed business exceptions in Web services systems, ranging from simple data entry errors to complex faults, the company said. It enables businesses to react more quickly to operational and business contingencies and minimize inefficiencies... 
Empirix will announce new Web services monitoring capabilities for its e-Test suite and OneSight products. The products feature a script wizard to simplify scripting needed to test and monitor Web services, the company said. This capability is added in Version 6.8 of the e-Test suite and Version 4.8 of OneSight. The e-Test product is used prior to launching applications while OneSight is used to manage and monitor applications in production, according to Joe Alwen, vice president of marketing at Empirix..."

  • [September 30, 2003] "AmberPoint Introduces Distributed Web Services Exception Management Solution." From CBDi Newswire (September 30, 2003). "Using AmberPoint Exception Manager, enterprises can react to operational and business contingencies, minimize inefficiencies and reduce the costs of maintaining their Web services environments. Due to its distributed, agent-based architecture, AmberPoint Exception Manager is able to detect and resolve distributed exceptions, where the clues to the condition reside in multiple messages. AmberPoint Exception Manager provides capabilities for managing exceptions in distributed Web services environments. In addition to providing visibility into hard-to-resolve operational errors, the solution also handles exceptions that have business impact. For example, if a customer were to place a large order that could not be fulfilled, AmberPoint Exception Manager can alert the appropriate business manager to resolve the situation before the customer encounters the problem... Where AmberPoint is moving the goalposts with its Exception Manager is in recognizing the potential complexity inherent in Web Services. AmberPoint is providing what you might regard as in-flight diagnostic capabilities that allow intelligent response to both business and technical problems. The starting point for the Exception Manager is that in distributed (and particularly federated) systems, it is likely to be commonplace that whilst the symptom of a problem may be obvious, the root cause may not... AmberPoint provides quite sophisticated exception management capabilities including: in-flight message comparison (prior to current, incorrect to working); filtering of messages dependent on a variety of conditions; filtering and identification of specific message combinations; pattern recognition; creation of data for drill down; allowing real time data correction and value update... 
The Exception Manager tool is interesting because it will be of particular value in the final stages of testing as well as in production, and it demonstrates that AmberPoint is getting real-world feedback that it is feeding into the product..."

  • [September 30, 2003] "Object-Oriented Database Field Shrinks Again." By Lisa Vaas. In eWEEK (September 29, 2003). "In a deal worth $26 million, object-oriented database companies Versant Corp. and Poet Holdings Inc. are merging, Versant officials have announced... Versant will swap 1.4 shares of Versant Common Stock for each Poet share. The Versant stock that will be given to Poet shareholders represents about 45 percent of outstanding shares. The move was unanimously approved by both companies' boards of directors but is subject to approval from the Securities and Exchange Commission and from shareholders. Such a merger is unsurprising in what International Data Corp. analyst Carl Olofson has deemed a saturated market for object-oriented databases. The market consists of consumers of very complex data, such as media content and scientific and technical applications... In a statement, Versant officials said the two companies will work on software that delivers storage, integration, analysis and the ability to act on real-time data. Poet's object database, Fast Objects, is used in embedded applications. Versant's object database, VDS, is used in high-performance, large-scale, real-time applications. The merged technologies will be designed to manage real-time, XML, and other hierarchical and navigational data, according to officials... The acquisition is important to Versant as it pursues the emerging technology of JDO (Java Data Objects), Chandra said. The JDO API is used to directly store Java domain model instances into databases. JDO allows developers to create a universal way of accessing data and thus the ability to choose databases from major or minor database vendors such as Oracle Corp, Sybase Inc., Versant or Poet, without the need to make code changes..." General references in "XML and Databases."

  • [September 30, 2003] "Opinion: Shakeout Looms in Web Services Management." By James Kobielus. In Computerworld (September 25, 2003). "Web services management (WSM) is one of the most innovative sectors in today's IT industry. Despite the general economic slump, dozens of start-ups have ventured into the WSM market over the past few years. Consequently, enterprise customers can choose from many sophisticated tools for managing their complex Web services middleware environments. WSM is no passing fad. WSM tools address a growing need in today's Web-oriented e-business environment. They help companies ensure that the performance, reliability, availability and security of Web services environments continue to comply with service-level agreements and quality-of-service requirements. By contrast, traditional IT management tools can't monitor the end-to-end performance, availability, reliability and security of Web services environments. Typically, organizations deploy management tools associated with particular application, server and network environments. This explains why companies turn to WSM for a holistic view of service performance, as well as invest in enterprise management frameworks from Computer Associates International (CA), Hewlett-Packard (HP), IBM Tivoli and other strategic vendors... But today's WSM market is overcrowded and due for a serious shakeout. Start-ups are having a tough time establishing WSM as a separate market from IT management tools. WSM tools don't eliminate the need for traditional management tools that focus on particular applications, systems and network environments. You can't optimize Web services if you don't have the tools for viewing and fixing problems that originate in the underlying infrastructure. Sensing an opportunity to strengthen their competitive positions, management vendors are adding WSM features to their offerings. Others are bootstrapping themselves into the WSM market through strategic acquisitions. 
We see evidence for the latter trend in CA's recent acquisition of Adjoin and HP's announcement of its intention to buy Talking Blocks. Over the next several years, traditional IT management vendors will dominate the WSM market as they leverage their established customer bases and product families. Likewise, vendors of application servers, integration brokers, operating environments and other Web services platforms will embed WSM features in their offerings..." See OASIS Web Services Distributed Management TC.

  • [September 30, 2003] "Integrating Services with XSLT." By Will Provost. From O'Reilly WebServices.XML.com (September 30, 2003). "For all the magic that XML, SOAP, and WSDL seem to offer in allowing businesses to interoperate, they do not solve the more traditional problems of integrating data models and message formats. Analysts and developers must still plod through the traditional process of resolving differences between models before the promise of XML-based interoperability is even relevant. Happily, there's more magic out there: having committed to XML, companies can take great advantage of XSLT to address integration problems. With XSLT one can adapt one model to another, which is a tried-and-true integration strategy, implemented in a language optimized for this precise purpose. In this article I'll discuss issues and techniques involved in bringing XSLT into web service scenarios and show how to combine it with application code to build SOAP intermediaries that reduce or eliminate the stress between cooperating data structures... XSLT can make many annoying integration problems go away and with relatively low effort at that. We remember that almost all integration issues will require bidirectional transformation. That is, data that's transformed on its way in, and perhaps stored somewhere, will eventually be requested and sent back out, and it will have to look right to the requester. Form is not the only problem here. It is important to avoid the trap of inbound transformations that produce redundant results for different inputs. In other words, there must be a one-to-one mapping between the external and internal value spaces. Precisely preserving information is key to service adaptation, and this is not always so simple... As wonderful as XSLT is, it's not designed to solve all possible transformation problems. Generally, it's strong on structural work using node sets and progressively weaker working with single values and their components. 
String arithmetic, algorithms, and math are notable weak points..." See related resources in "Extensible Stylesheet Language (XSL/XSLT)."
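The one-to-one mapping requirement Provost describes can be illustrated without an XSLT engine. In this sketch the field names and the dictionary-based adapter are invented for the demo; the point is only that the inbound transformation must be a bijection so the outbound transformation can invert it:

```python
# An inbound adapter maps an external vocabulary onto the internal one;
# the outbound adapter must invert it exactly, or round-tripped data will
# not look right to the requester. Building the outbound table by
# inverting the inbound one guarantees the mapping stays one-to-one
# (a duplicate internal name would silently overwrite an entry here,
# which is exactly the redundant-results trap to avoid).
EXTERNAL_TO_INTERNAL = {"custName": "customer_name", "custNo": "customer_id"}
INTERNAL_TO_EXTERNAL = {v: k for k, v in EXTERNAL_TO_INTERNAL.items()}

def transform_in(record):
    """External partner format -> internal model (the inbound XSLT's job)."""
    return {EXTERNAL_TO_INTERNAL[k]: v for k, v in record.items()}

def transform_out(record):
    """Internal model -> external partner format (the outbound XSLT's job)."""
    return {INTERNAL_TO_EXTERNAL[k]: v for k, v in record.items()}

inbound = {"custName": "ACME", "custNo": "42"}
stored = transform_in(inbound)       # what the intermediary persists
roundtrip = transform_out(stored)    # what goes back out on a later request
```

A real SOAP intermediary would express both directions as stylesheets rather than dictionaries, but the invariant is the same: `transform_out(transform_in(x)) == x` for every message the service accepts.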

  • [September 30, 2003] "Data Visualization Tools Emerge. Antarctica Systems, Others Help Relay Complex Information." By Cathleen Moore. In InfoWorld (September 29, 2003). "Data visualization is back on the map as a host of emerging vendors unveil products designed to help enterprises analyze reams of information. Antarctica Systems is unwrapping Version 4.0 of its Visual Net software designed to present map-based visual representations of complex data from sources such as databases, BI tools, and ERP applications. The real pain point in applications and data stores is at the UI level, said Tim Bray, founder and CTO of Antarctica Systems. In fact, tools such as BI typically suffer low adoption rates because of their complexity. 'What got everyone using computers is the advent of the GUI,' Bray said. 'We are a GUI for information spaces.' Visual Net 4.0 adds a visual configuration wizard that allows users to point and click to hook up back-end data records to the display front end. In addition, added support for DHTML brings a cleaner, more compelling user interface, Bray said. TheBrain Technologies recently released a Lotus Notes Connector Version 1.0 for its BrainEKP (Enterprise Knowledge Platform), which provides a relational, visual interface for multiple data repositories. The connector allows users to see a graphical representation of Lotus Notes information in the context of company projects, customer accounts, and business processes. Next month Mindjet will add XML support to its MindManager X5 Pro mapping and collaboration software. MindManager creates visual representations of the thinking and planning stages of the collaborative process..." See also the Visual Net 4.0 announcement: "Antarctica Systems Announces Visual Net 4.0. Maximizing Information Display to Reveal Clarity, Truth in Data."

  • [September 30, 2003] "Developers Show Their Independent Streak, Favoring Web-Based Apps." By Eric Knorr. In InfoWorld (September 26, 2003). "Software behemoths are trying to sell programmers on elaborate new paradigms; but as our survey results show, many programmers aren't buying. Web applications rule the enterprise. That's the indisputable conclusion to be drawn from this year's InfoWorld Programming Survey. Despite imperatives from Microsoft and others that developers abandon server-based HTML apps for fat desktop clients, the ease of 'zero deployment' through the browser continues to win the day. To build those Web apps, significant numbers of programmers favor such humble scripting languages as VBScript and Perl. Contrary to the hype that says Microsoft .Net and the Java elite have a lock on the programming world, many developers have settled on cheaper (and often faster) ways to build the Web applications they need to build. Responses gathered in August come from a group of 804 programmers and their managers. Our survey mirrors trends identified by such research companies as IDC, Gartner, and Forrester... Our respondents aren't afraid of new technology, either. A robust 51 percent say that Web services are part of their server development and 52 percent are employing XML or object-oriented databases. At a solid 40 percent, the uptake on .Net should warm Microsoft's heart, considering that the .Net Framework officially launched only 18 months ago. Adoption of Microsoft's Java-like C# was somewhat less impressive at 22 percent, though still respectable for a new programming language... The war between guerrilla and IT-sanctioned technology has persisted since the first PC slipped in the back door of a big corporation. But there's one thing nearly everyone can agree on: Nobody wants to write it twice if they don't have to. 
In our survey, when asked what the biggest obstacle to reusing software is, only 10 percent say programmer disinclination... No matter what languages or tools they use, developers of all stripes are feeling the heat from the business side to respond quickly to business needs. At the high end of application development, Web services and the movement toward SOA (service-oriented architecture) promise to deliver application components that can be recombined ad infinitum with minimal development time. But analysts agree that enterprise adoption of SOA will take many years. Meanwhile, programmers are finding their own way, often using simple scripting tools, to develop the Web applications they need fast..."

  • [September 30, 2003] "Sun Expands Push For Auto-ID." By Matt Villano. In InternetNews.com (September 19, 2003). "Already a major player in the Auto-ID market, Sun Microsystems this week announced an initiative for delivering the hardware, software and services that enable enterprises to link into the Electronic Product Code (EPC) Network. The announcement coincided with news that the Santa Clara, Calif.-based services firm is creating a new Auto-ID business unit to work to develop and deliver a standards-based Auto-ID/EPC solution down the road. Sun's announcement came just weeks after retail giant Wal-Mart aired a mandate for its suppliers to become EPC compliant by Jan. 1, 2005. According to Jonathan Schwartz, executive vice president for Sun Software, the Sun initiative will help Wal-Mart suppliers and other enterprises integrate real-time supply chain data seamlessly into their existing business processes and enterprise assets, enabling companies to not only meet these new requirements but exceed them... As Schwartz explained it, the technology behind Sun's Auto-ID effort will be similar to the technology behind Radio Frequency Identification (RFID) tags, the microscopic chips that some companies and retailers have considered for security and tracking purposes of clothes and electronics. This kind of EPC technology helps make the supply chain more efficient, safe, and secure by tracking goods every step along the way, reducing threats of counterfeiting, tampering, and terrorism, while increasing compliance with industry and shipping regulations. More specifically, Sun said its software will deliver a dynamic federated service architecture that emphasizes reliability, availability and scalability (RAS) for Auto-ID pilots and deployments. The proposed solutions also will include lifecycle services to maximize the value of Auto-ID deployments, helping customers proactively architect, implement, and manage IT operations in heterogeneous environments. 
According to Julie Sarbacker, who will head the new Auto-ID business at Sun, most of the company's EPC offerings will be delivered through the Solaris OE and Linux-based hardware platforms, setting the stage for transparent integration into the EPC Network..." The datasheet says that Sun's EPC initiative highlights an architecture "designed around Auto-ID standards such as EPC, Savant System Interface, Object Name Service (ONS), and Physical Markup Language (PML) supply chains with applications that address counterfeiting, tampering, terrorism, and regulatory compliance..." See: (1) Sun Auto-ID home; (2) the announcement, "Sun Microsystems Announces Vision and Initiative for Enterprise Auto-ID/EPC Deployments. Newly Formed Sun Business Leads Auto-ID/EPC Product and Market Development Efforts." General references in "Radio Frequency Identification (RFID) Resources and Readings."

  • [September 29, 2003] "OAI-Rights White Paper." By Carl Lagoze (Cornell University Information Science), Herbert Van de Sompel (Los Alamos National Laboratory), Michael Nelson (Old Dominion University Computer Science), and Simeon Warner (Cornell University Information Science). From the Open Archives Initiative. September 26, 2003. "The Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) has become an important foundation for interoperability among networked information systems. It is widely used in a variety of domains including libraries, museums, government, and research. Like any vehicle for exchanging information, the OAI-PMH exists in a context where information holders have concerns about rights to the use of their information. Although the OAI-PMH is nominally about the exchange of metadata, this does not lessen the complexities of rights-related issues: The distinction between content (data) and metadata is fuzzy at best, especially vis-à-vis intellectual property, and many providers are justifiably wary about uncontrolled reuse of rich metadata that represents a significant intellectual effort. Since the only technical restriction on data exchanged via OAI-PMH is that it must use XML encoding, it is entirely feasible to use the protocol for transmission of content itself. Since the primary reason for making metadata available via OAI-PMH is usually eventual access to the resource described by the metadata, guidelines and frameworks for expressing rights to that resource are in the scope of the protocol. As a result of these issues, discussion of rights and their relationship to the OAI-PMH have been frequent throughout work on the protocol. This paper is intended as a foundation for work aimed at incorporating structured rights expressions into the OAI-PMH. This work will be undertaken by a technical group called OAI-rights, and will result in a set of OAI-PMH guidelines scheduled for release in second quarter 2004... 
This paper examines issues and suggests alternatives for the incorporation of rights expressions in the OAI-PMH along three dimensions: (1) Entity association, which covers the association of rights expression with metadata and data (resources). (2) Aggregation association, which covers whether rights expressions can be associated with entities in the OAI-PMH that group other entities. (3) Binding, which covers where rights expressions are placed in protocol responses..." See details in the 2003-09-26 news story: "RoMEO and OAI-PMH Teams Develop Rights Solution Using ODRL and Creative Common Licenses."
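As a rough illustration of the "binding" dimension, OAI-PMH 2.0 already defines an optional, repeatable <about> container on each record, which is one natural place a structured rights expression could be bound to a metadata record. The element contents below are placeholders for illustration, not the eventual OAI-rights guidelines:

```xml
<record>
  <header>
    <identifier>oai:example.org:item-101</identifier>
    <datestamp>2003-09-26</datestamp>
  </header>
  <metadata>
    <!-- the record's Dublin Core or other metadata format -->
  </metadata>
  <about>
    <!-- a structured rights expression (e.g., an ODRL statement or a
         Creative Commons license reference) bound to this record -->
  </about>
</record>
```

Per-record binding like this answers only one of the paper's three questions; associating a single rights expression with a whole repository or set (the aggregation dimension) would need a different placement in the protocol responses.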

  • [September 25, 2003] "Patent Politics." By Paul Festa. In CNET News.com (September 25, 2003). "During a recent meeting held at Macromedia's San Francisco headquarters, Silicon Valley companies asked a familiar question: What to do about Microsoft? But the strategy event, sponsored by the World Wide Web Consortium, differed significantly from so many others, at which participants have typically gathered to oppose the software giant's power. This time, Microsoft was the guest of honor. 'There's no doubt that there are some people who are happy to see Microsoft get nailed for anything,' said Dale Dougherty, a vice president at computer media company O'Reilly & Associates. 'But for those of us who are part of the Web, we wanted the browser to be on every desktop. And if it has to be a Microsoft browser, OK.' What a difference a patent suit makes. With one staggering loss at the hands of a federal court jury in Chicago, Microsoft has won the support -- if not the sympathy -- of nearly the entire software industry, from standards organizations to corporate rivals that are rushing to defend the company's Internet Explorer browser... the [court] verdict is increasingly interpreted as a potentially crushing burden on the Web, threatening to force significant changes to its fundamental language, HTML. Microsoft's competitors fear that Eolas' lawyers will target them next, and its partners -- such as Macromedia and Sun Microsystems -- worry that an enjoined IE browser would be prohibited from running their software plug-ins without awkward technology alternatives. The result has been a complex shift of industry dynamics that has turned many traditional alliances and rivalries upside down, prompting long-suffering competitors in the browser market to side with archrival Microsoft. 
At the same time, as the Eolas case has progressed, critics have portrayed company founder and sole employee Mike Doyle as an opportunist, despite his claims to be acting on behalf of the Web against a rapacious captor... Microsoft might still pull out a victory at the appellate level. Moreover, even if Eolas' patent is upheld, the rest of the software industry may very well go with Microsoft's workarounds rather than face the prospect of abandoning development for the universally distributed IE... Given the daunting odds in any challenge to Microsoft, Doyle believes that his struggle exceeds biblical proportions. He said the often-cited comparisons to David and Goliath don't go far enough in conveying the ambition and travails of his quest, which he believes could reverse Microsoft's victory in the so-called browser war and break its control over much of the digital world... 'We're no big fan of Microsoft, but I'm a big fan of the Web,' said Dougherty, who is in charge of online publishing at O'Reilly and testified on behalf of Microsoft in its recent patent trial. 'What worries people is that this is the first successful patent offense on the Web, and lots of other things could be coming.' The prospect of having such a basic necessity as running plug-ins subject to the whim of Eolas has the industry in a near panic -- not least among those organizations whose rules restrict or ban the use of patented technologies, such as open-source browser makers and the W3C. Groups that advocate software that has open-source code say their licenses prohibit them from including patented technologies. The W3C in March reaffirmed its opposition to the use of royalty-encumbered technologies, after a lengthy public battle that ended in a near-ban. 'We have experience and proof that the specter of a fee stops standards development cold,' W3C representative Janet Daly said. 'It doesn't even have to be a firm guarantee. 
All you need is a little bit of fear, uncertainty and doubt that a developer is going to be slapped with a licensing fee, and the developer will leave that technology alone'..." See: (1) the W3C news item from 2003-09-23, "W3C Launches HTML Patent Advisory Group" with the PAG FAQ, Home Page, and Charter; (2) the news story of August 28, 2003: "W3C Opens Public Discussion Forum on US Patent 5,838,906 and Eolas v. Microsoft"; (3) general references in "Patents and Open Standards."

  • [September 24, 2003] "Grab Headlines From a Remote RSS File. Retrieve Syndicated Content, Transform It, and Display the Result." By Nicholas Chase (President, Chase & Chase, Inc). From IBM developerWorks, XML zone. September 23, 2003. ['In this article, Nick shows you how to retrieve syndicated content and convert it into headlines for your site. Since no official format for such feeds exists, aggregators are often faced with the difficulty of supporting multiple formats, so Nick also explains how to use XSL transformations to more easily deal with multiple syndication file formats.'] "With the popularization of weblogging, information overload is worse than ever. Readers now have more sites than ever to keep up with, and visiting all of them on a regular basis is next to impossible. Part of the problem can be solved through the syndication of content, in which a site makes its headlines and basic information available in a separate feed. Today, most of these feeds use an XML format called RSS, though there are variations in its use and even a potential competing format. This article explains how to use Java technology to retrieve the content of a syndicated feed, determine its type, and then transform it into HTML and display it on a Web site. This process involves five steps: (1) Retrieve the XML feed; (2) Analyze the feed; (3) Determine the proper transformation; (4) Perform the transformation; (5) Display the result. This article chronicles the creation of a Java Server Page (JSP) that retrieves a remote feed and transforms it using a Java bean and XSLT, and then incorporates the newly transformed information into a JSP page. The concepts, however, apply to virtually any Web environment... The application uses a DOM Document to analyze the feed and determine the appropriate stylesheet, but you can further extend it by moving some of that logic into an external stylesheet. 
You can also adapt the system so that it can pull more than one feed, perhaps based on a user selection, with each one creating its own cached file. Similarly, you can enable the user to determine the interval between feed retrievals..." See general references in "RDF Site Summary (RSS)."
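The retrieve-analyze-transform pipeline Chase describes can be sketched with nothing but the JDK's JAXP transformation API. The tiny stylesheet and one-item feed below are illustrative stand-ins (not code from the article), showing the core step of turning RSS item titles into HTML headlines:

```java
import javax.xml.transform.*;
import javax.xml.transform.stream.*;
import java.io.StringReader;
import java.io.StringWriter;

public class RssHeadlines {
    // Minimal XSLT: turn each RSS <item> into an HTML list entry.
    static final String XSL =
        "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>" +
        "<xsl:output method='html'/>" +
        "<xsl:template match='/'>" +
        "<ul><xsl:for-each select='//item'>" +
        "<li><a href='{link}'><xsl:value-of select='title'/></a></li>" +
        "</xsl:for-each></ul>" +
        "</xsl:template>" +
        "</xsl:stylesheet>";

    public static String transform(String rssXml) throws TransformerException {
        // In the article's design the stylesheet is chosen per feed format;
        // here a single stylesheet stands in for that step.
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(XSL)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(rssXml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        // A one-item feed standing in for the remote XML retrieved in step 1.
        String feed = "<rss version='2.0'><channel><item>" +
                      "<title>Hello</title><link>http://example.org/a</link>" +
                      "</item></channel></rss>";
        System.out.println(transform(feed));
    }
}
```

In a JSP setting, the returned HTML fragment would simply be written into the page; caching the result between retrievals, as the article suggests, avoids re-fetching the feed on every request.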

  • [September 24, 2003] "ISO to Require Royalties?" By Kendall Clark. From XML.com (September 24, 2003). ['Kendall Clark on the ISO's proposal to charge for using their country codes.'] "It has come to the attention of the W3C, as well as various other communities, that the ISO is thinking about imposing licensing fees for the commercial use of several of its standards, including 3166, the one which establishes country codes, as well as ISO 639 and ISO 4217, which establish language and currency codes, respectively. [In this] article I provide some background to the present controversy as well as sample the reaction of web and other internet developers and development communities... ISO, rather than taking a step forward to make possible wider access to its standards, is taking the opposite tack. It's considering requiring license fees be paid for the commercial use of the information contained in ISO 639, 4217, and 3166... It's not clear how far-reaching this requirement might be. Would it require my Linux distribution maker to pay a fee for selling a CD that contains software, like the Python language and libraries, which uses ISO identifiers? What about all of the Web software, both client and server, which uses language and country codes extensively? The W3C's XML Recommendation makes reference to ISO 639 and 3166. Does that mean any product which uses an XML parser owes the ISO a fee? At least three important institutions have responded to the perceived change in the ISO's licensing policy: the W3C, the Unicode Technical Committee, and INCITS... Does the ISO need a reliable means of funding? Absolutely. But it needs, at least in my view, a way which is independent of selling, at least at such exorbitant rates, its standards themselves. 
If it's a truly global standards body, it should be able to find funding from the UN (which might be able or more inclined to fund ISO if the US would pay its delinquent UN dues), from wealthy western nations (why not, since the G7 benefits the most from the ISO's work?), and even from philanthropically-minded individuals and corporations. However, some things which the ISO has standardized -- and language, currency, country identifiers, as well as date-time representations, are among those things -- should be put immediately into the public domain. Some of its standards are simply too crucial and too much in the public trust to be tied in any way to the ISO's revenue model..." See other details in "Standards Organizations Express Concern About Royalty Fees for ISO Codes."

  • [September 24, 2003] "Secure, Reliable, Transacted Web Services: Architecture and Composition." By Donald F. Ferguson (IBM Fellow and Chairman; IBM Software Group Architecture Board), Tony Storey (IBM Fellow), Brad Lovering (Microsoft Corporation Distinguished Engineer), and John Shewchuk (Microsoft Web Services Architect). With credits to 66 contributors. In Microsoft MSDN Library (September 2003). "The basic set of Web service specifications enables customers and software vendors to solve important problems. Building on their success, many developers and companies are ready to tackle more difficult problems with Web service technology. The very success of Web services has led developers to desire even more capabilities from Web services. Since meaningful tool and communication interoperability has been successful, developers now expect the enhanced functions to interoperate. In addition to basic message interoperability and interface exchange, developers increasingly require that higher-level application services interoperate. Many commercial applications execute in an environment ('middleware' or 'operating systems') that provides support for functions like security and transactions. IBM, Microsoft, and others in the industry are often asked to make Web services more secure, more reliable, and better able to support transactions. In addition we are asked to provide these capabilities while retaining the essential simplicity and interoperability found in Web services today. This paper provides a succinct overview of the set of Web service specifications that address these needs. For the details of the specifications we provide references to the actual documents. The main purpose of this paper is to briefly define the value these specifications provide to our customers. We also describe how these specifications complement each other to compose robust environments for distributed applications. 
We face a key engineering challenge: How do we give Web services new security, reliability, and transaction capabilities without adding more complexity than needed? ... IBM, Microsoft, and our partners are developing Web service specifications that can be used as the building blocks for a new generation of powerful, secure, reliable, transacted Web services. These specifications are designed in a modular and composable fashion such that developers can utilize just the capabilities they require. This 'component-like' composability will allow developers to create powerful Web services in a simple and flexible manner, while introducing just the level of complexity dictated by the specific application. This technology will enable organizations to easily create applications using a Service-Oriented Architecture (SOA). Furthermore, IBM and Microsoft have demonstrated secure, reliable, transacted SOA applications that illustrate the richness of the business processes that can be created using this approach. Moreover, these demonstrations have been operating in a federated security environment on a heterogeneous environment consisting of IBM WebSphere and Microsoft .NET software. We anticipate that these Web Service technologies will be available in operating systems and middleware, with tools that will make it even easier for developers to use these technologies..." General references in "Web Services Implementation."
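The "composable header blocks" idea at the heart of this design can be illustrated with plain DOM: each WS-* capability travels as an independent SOAP header, so capabilities can be added or dropped without touching one another. This is a rough sketch, not code from the paper, and the WS-Security and WS-ReliableMessaging namespace URIs shown are from later published versions of those specifications:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import java.io.StringWriter;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class ComposedEnvelope {
    static final String SOAP = "http://schemas.xmlsoap.org/soap/envelope/";

    public static void main(String[] args) throws Exception {
        Document d = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element env = d.createElementNS(SOAP, "soap:Envelope");
        d.appendChild(env);
        Element header = d.createElementNS(SOAP, "soap:Header");
        env.appendChild(header);

        // Each capability is its own header block; security and reliable
        // messaging compose without knowing about each other.
        header.appendChild(d.createElementNS(
            "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd",
            "wsse:Security"));
        header.appendChild(d.createElementNS(
            "http://schemas.xmlsoap.org/ws/2005/02/rm", "wsrm:Sequence"));

        env.appendChild(d.createElementNS(SOAP, "soap:Body"));

        StringWriter out = new StringWriter();
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(d), new StreamResult(out));
        System.out.println(out.toString());
    }
}
```

An intermediary that understands only one of the header blocks can process it and ignore the other, which is the modularity the paper argues for.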

  • [September 23, 2003] "Experiences with the Enforcement of Access Rights Extracted from ODRL-Based Digital Contracts." By Susanne Guth [susanne.guth@wu-wien.ac.at], Gustaf Neumann, and Mark Strembeck (Department of Information Systems, New Media Lab, Vienna University of Economics and Business Administration, Austria). Prepared for presentation at DRM 2003, October 27, 2003, Washington, DC, USA. 13 pages (with 38 references). "In this paper, we present our experiences concerning the enforcement of access rights extracted from ODRL-based digital contracts. We introduce the generalized Contract Schema (CoSa) which is an approach to provide a generic representation of contract information on top of rights expression languages. We give an overview of the design and implementation of the xoRELInterpreter software component. In particular, the xoRELInterpreter interprets digital contracts that are based on rights expression languages (e.g. ODRL or XrML) and builds a runtime CoSa object model. We describe how the xoRBAC access control component and the xoRELInterpreter component are used to enforce access rights that we extract from ODRL-based digital contracts. Thus, our approach describes how ODRL-based contracts can be used as a means to disseminate certain types of access control information in distributed systems... A contract typically represents an agreement of two or more parties. The contract specifies rights and obligations of the involved stakeholders with respect to the subject matter of the respective contract. Contracts in the paper world can be tailored to meet the needs of a specific business situation or to fit the requirements of individual contract partners. In principle, the same is true for digital contracts as they can be used in the area of digital rights management for example. Most often digital contracts are defined using special-purpose rights expression languages (RELs) such as ODRL, XrML, or MPEG-21 REL. 
In this connection one can differentiate between the 'management of digital rights' and the 'digital management of (arbitrary) rights'. We especially focus on contracts that contain information on digital rights, i.e., rights which are intended to be controlled and enforced in an information system via a suitable access control service -- in contrast to rights which are enforced by legislation or other 'social protocols'... In Section 2 we give an overview of the abstract structure of digital contracts. We especially describe how information within a digital contract is encapsulated in different contract objects. Section 3 then summarizes the contract processing procedures performed by a contract engine. Subsequently, Section 4 introduces the generalized contract schema CoSa and the software components we used to implement our system, before Section 5 shows how ODRL-based digital contracts are mapped to a runtime CoSa object model. Next, Section 6 describes the initialization of the xoRBAC access control service via a mediator component and the subsequent enforcement of the corresponding access rights. Section 7 gives an overview of related work, before we conclude the paper in Section 8..." See also: (1) Open Digital Rights Language (ODRL) Initiative website; (2) ODRL International Workshop 2004; (3) local references in "Open Digital Rights Language (ODRL)"; (4) general references in "XML and Digital Rights Management (DRM)."

  • [September 23, 2003] "Update: European Parliament Votes to Limit Scope of Software Patents. Issue Still Must be Debated by European Union Member States." By Paul Meller. In InfoWorld (September 24, 2003). "The European Parliament on Wednesday voted in favor of a law that goes some way toward limiting the scope for patents on software programs. With 364 voting in favor, 153 against, and 33 abstentions, members of the European Parliament (MEPs) appear to have ignored heavy lobbying from both extremes in the debate by opting for a compromise solution. The Parliament was considering changes to the original text published by the European Commission (EC), the executive branch of the EU. Most of the changes were designed to tighten up the wording of the law to make it harder for people to obtain patents. For example, the MEPs agreed to an amendment which outlaws the patenting of algorithms. Another accepted amendment explicitly outlaws the patenting of business methods, such as the 'one-click' online shopping technique patented in the U.S. by Amazon.com. 'Inventions involving computer programs which implement business, mathematical or other methods and do not produce any technical effect beyond the normal physical interactions between a program and the computer, network or other programmable apparatus in which it is run, shall not be patentable,' the amendment read. This is the first of two votes on the software patent directive in the European Parliament. Before the MEPs cast their ballots again, the directive, including the amendments agreed on Wednesday, will be debated by ministers from the 15 EU state governments... MEP Arlene McCarthy, a U.K. member of the Socialist Party, said the Parliament has sent a clear message: 'We do want strict limits on patentability of software. All the amendments that were adopted were in this direction,' she said. 'We have effectively rewritten the directive.' 
McCarthy led the debate when the bill was being discussed at committee stage in the Parliament and also drew up the amendments to be considered at this week's plenary session of the body. She said, however, that she expects the text supported by the Parliament today to be rejected by the 15 member state governments and by the directive's original author, the European Commission..." See: "Patents and Open Standards."

  • [September 23, 2003] "W3C Investigation Begins on HTML Standard." By Matt Hicks. In eWEEK (September 23, 2003). "The ramifications of the recent Web browser patent verdict against Microsoft Corp. could strike at the heart of the Web's common language -- HTML. The World Wide Web Consortium (W3C) is investigating whether the claims in the patent infringement lawsuit brought by Eolas Technologies Inc. and the University of California could require changes to both the current and future HyperText Markup Language specifications, W3C officials said on Tuesday. Eolas in its lawsuit has claimed that Microsoft infringed on its patent on technology that allows for embedded applications within Web pages, such as applets and plug-ins. Microsoft has disputed the claims and has promised to appeal a $521 million jury verdict handed down in August. Eolas' attorney also has said that the patent could apply to a broad range of Web technology. The W3C is forming a patent advisory group that will decide whether to recommend changes to HTML and could also call on the full standards body to conduct a formal legal analysis of the patent. 'This is a serious issue,' said Philipp Hoschka, W3C deputy director for Europe who also oversees HTML activities. 'As you know, we have tried for our specifications to be royalty free.' Hoschka wouldn't specify what portions of HTML the patent might affect. Determining whether any tags or HTML specifications fall within the patent's claims would be the HTML patent advisory group's role, he said. W3C patent advisory groups typically are formed to avoid royalties as the standards body develops technical specifications and usually involve the W3C member making patent claims, W3C spokeswoman Janet Daly said. In this case, the group will be working without the participation of the patent holder... 
Beyond suggesting changes to HTML, the advisory group also could become involved in the ongoing debate concerning 'prior art' -- a legal term in patent law referring to whether an invention existed prior to the filing of a patent. Hoschka declined to say whether any investigation into the existence of prior art could also lead to the W3C becoming more directly involved in the patent lawsuit. The W3C has sought legal opinions concerning prior art before. In 1999, it concluded after a yearlong examination that the then-proposed Platform for Privacy Preferences (P3P) standard for Web privacy did not infringe on an existing patent. Earlier this month, Lotus Notes creator Ray Ozzie claimed to have identified prior art; Microsoft made prior art arguments during the trial and is expected to use that argument in an appeal..." See: (1) the W3C news item from 2003-09-23, "W3C Launches HTML Patent Advisory Group" with the PAG FAQ, Home Page, and Charter; (2) the news story of August 28, 2003: "W3C Opens Public Discussion Forum on US Patent 5,838,906 and Eolas v. Microsoft"; (3) general references in "Patents and Open Standards."

  • [September 23, 2003] "Eolas Suit May Spark HTML Changes." By Paul Festa. In CNET News.com (September 19, 2003). ['The World Wide Web Consortium is on the verge of forming a patent advisory group in response to the Eolas patent suit. Fallout from Eolas' patent victory over Microsoft threatens to hit Web developers and HTML itself.'] "As anxiety builds throughout the Web over the patent threatening Microsoft's Internet Explorer browser, the Web's leading standards group is considering modifying the medium's lingua franca itself, HTML, to address the same threat. The World Wide Web Consortium (W3C) is on the verge of forming a patent advisory group, or PAG, in response to the Eolas patent suit, according to sources close to the consortium. That group would conduct a public investigation into the legal ramifications of the patent on Hypertext Markup Language, the signature W3C standard that governs how most of the Web is written, and other specifications related to it... the W3C is said to be contemplating changes to HTML, considered one of the consortium's more mature and settled specifications. The potential problem for HTML is that it describes a way of summoning content located on a server other than the one serving the page in question. The 'object' and 'embed' tags in HTML, consortium members worry, may fall under the wording of the Eolas patent. Options the PAG could recommend include a technical workaround or new wording in HTML and related specifications warning that authors who implement the tags in question should contact the patent holders and take out a license, if necessary. The HTML PAG could also, as have previous PAGs in other working groups, launch a drive to discover 'prior art,' or technologies older than the Eolas patent that could potentially invalidate it in court. The W3C established the PAG system after its P3P privacy preferences recommendation was threatened by patents. 
The groups have since been formed to respond to patent disputes among VoiceXML working group members. The PAG policy was codified with the rest of the W3C's patent-averse policy, which was ratified in March after a rancorous debate..." See: (1) the W3C news item from 2003-09-23, "W3C Launches HTML Patent Advisory Group" with the PAG FAQ, Home Page, and Charter; (2) the news story of August 28, 2003: "W3C Opens Public Discussion Forum on US Patent 5,838,906 and Eolas v. Microsoft"; (3) "Patents and Open Standards."

  • [September 23, 2003] "OASIS Ratifies SAML 1.1. RSA Supports Latest Version in Products." By Paul Roberts. In InfoWorld (September 19, 2003). "The OASIS Internet standards consortium said Monday that its members ratified SAML (Security Assertion Markup Language) Version 1.1 as an official standard, approving changes to the specification that will improve interoperability with other Web services security standards. The vote assigns the highest level of OASIS (The Organization for the Advancement of Structured Information Standards) ratification to SAML 1.1 and could open the door for wider adoption of the XML (Extensible Markup Language) framework for companies using Web services to conduct high value transactions, according to Prateek Mishra of Netegrity Inc., co-chair of the OASIS Security Services Technical Committee. SAML is a standard that supports so-called 'federated identity' systems in which user authentication and authorization information is securely exchanged between Web sites within an organization or between organizations. SAML enables a user to sign on once to Web-enabled services, instead of having to repeatedly log in when they move from one Web site or Web-enabled application to another... The new version of SAML includes a number of updates and fixes for problems identified in the 1.0 standard, he said. In particular, SAML 1.1 revised guidelines for the use of digital certificates to sign SAML user authentication exchanges, known as SAML assertions. SAML 1.0 standards were vague about how to digitally sign SAML assertions, creating interoperability problems between different companies implementing Web services using the 1.0 standard, Mishra said. Only a 'small group' of companies are currently interested in using digital certificates to sign SAML assertions. 
However, that group is growing, as companies look for ways to exchange sensitive data with employees and business partners while also verifying that digital transactions took place -- a capability known as nonrepudiation... Having handed off the SAML 1.1 standards, OASIS's Security Services Technical Committee is now at work on the SAML 2.0 specification, Mishra said. That version will come with major additions to the standard based on feedback from large companies. Among other things, the group is looking at ways to implement distributed log out, in which three or more Web sites that share a single login session will synchronize when a user terminates that session. OASIS also wants to harmonize SAML 2.0 with the Liberty Alliance's ID-FF layer, another federated identity, single-sign on standard..." See: (1) the announcement, "Security Assertion Markup Language (SAML) Version 1.1 Ratified as OASIS Standard. Baltimore Technologies, BEA Systems, Computer Associates, Entrust, Hewlett-Packard, Netegrity, Oblix, OpenNetwork, Reactivity, RSA Security, SAP, Sun Microsystems, Verisign, and Others Collaborate on Authentication and Authorization."; (2) "Security Assertion Markup Language (SAML)"; (3) "Liberty Alliance Specifications for Federated Network Identification and Authorization."
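The "signed SAML assertion" at issue here is an application of XML digital signatures. As a rough illustration of what signing an assertion involves (using the JDK's `javax.xml.crypto.dsig` API with a throwaway RSA key; the one-element `Assertion` document is a stand-in, not real SAML schema):

```java
import java.io.ByteArrayInputStream;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Collections;
import javax.xml.crypto.dsig.*;
import javax.xml.crypto.dsig.dom.DOMSignContext;
import javax.xml.crypto.dsig.spec.C14NMethodParameterSpec;
import javax.xml.crypto.dsig.spec.TransformParameterSpec;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class SignAssertion {
    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        // Toy assertion; a real one carries subject, conditions, statements, etc.
        Document doc = dbf.newDocumentBuilder().parse(new ByteArrayInputStream(
                "<Assertion><Subject>alice</Subject></Assertion>".getBytes()));

        XMLSignatureFactory fac = XMLSignatureFactory.getInstance("DOM");
        // Enveloped transform: the signature excludes itself from the digest.
        Reference ref = fac.newReference("",
                fac.newDigestMethod(DigestMethod.SHA1, null),
                Collections.singletonList(
                    fac.newTransform(Transform.ENVELOPED, (TransformParameterSpec) null)),
                null, null);
        SignedInfo si = fac.newSignedInfo(
                fac.newCanonicalizationMethod(CanonicalizationMethod.INCLUSIVE,
                    (C14NMethodParameterSpec) null),
                fac.newSignatureMethod(SignatureMethod.RSA_SHA1, null),
                Collections.singletonList(ref));

        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair kp = kpg.generateKeyPair();

        // Append the ds:Signature element inside the assertion and sign it.
        DOMSignContext ctx = new DOMSignContext(kp.getPrivate(), doc.getDocumentElement());
        fac.newXMLSignature(si, null).sign(ctx);

        System.out.println(doc.getDocumentElement()
                .getElementsByTagNameNS(XMLSignature.XMLNS, "Signature").getLength());
    }
}
```

The interoperability problems Mishra describes arose precisely because SAML 1.0 left choices like these (which transforms, where the signature sits, how the key is referenced) underspecified; SAML 1.1 tightened those guidelines.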

  • [September 23, 2003] "New ISO Fees on the Horizon?" By Evan Hansen. In CNET News.com (September 19, 2003). ['IT standards groups are rallying opposition to an ISO proposal to introduce usage royalties for widely adopted standards, including country codes.'] "Information technology standards groups are raising warning flags over a proposal that could raise fees for commonly used industry codes, including two-letter country abbreviations, used in many commercial software products. At stake is a tentative proposal from the International Organization for Standardization (ISO) to add usage royalties for several code standards, a move that opponents say could weaken standards adherence by forcing software providers to pay a fee for each ISO-compliant product they sell. The standards -- ISO 3166, ISO 4217, ISO 639 -- cover country, currency and language codes, respectively. The backlash illustrates growing sensitivity in software circles over belated intellectual property claims... The proposal is still in the early stages, and may yet be significantly altered or shelved. Still, technology standards groups -- including the International Committee for Information Technology Standards (INCITS), the World Wide Web Consortium (W3C) and the Unicode Technical Committee -- are rallying opposition. 'Charging (usage fees) for these codes would have a big impact on almost every commercial software product, including operating systems,' said Mark Davis, president of software consortium Unicode, which is seeking to set standard character sets for disparate computing systems. 'They're used in Windows, Java, Unix and XML. They're very pervasive.' ... The ISO's claims on the codes stem from copyrights it owns on documents that describe the standards. 
ISO generally does not make its standards freely available, but sells them to fund its operations. Whether those copyrights apply to the codes themselves has not yet been tested, according to opponents of the proposal. 'There has not been a detailed discussion of how they own that copyright for the codes themselves,' said Martin Duerst, W3C Internationalization Activity Lead. 'The copyrights may not apply to individual codes, but only to the whole collection of codes -- like a dictionary, where each word is not copyrighted, but the entire collection of words and definitions is copyrighted.' Duerst said the ISO's proposal is troubling because so many other standards groups have adopted the ISO codes. For example, he said, the Internet Engineering Task Force (IETF) has largely adopted the ISO's country codes..." See details and references in the news story "Standards Organizations Express Concern About Royalty Fees for ISO Codes." General references (for language codes) in "Language Identifiers in the Markup Context."

  • [September 23, 2003] "When Good Institutions Go Bad." By Simon St. Laurent (Editor, O'Reilly & Associates). From O'Reilly Developer Weblogs (September 23, 2003). "The last few weeks have seen a dismaying upturn in the number of semi-public institutions which seem to be out to make a buck rather than a contribution, risking the contributions they've already made. ISO has the potential to cause the largest trainwreck, with plans to require licensing fees from those who use their language codes (ISO 639), country codes (ISO 3166), and currency codes (ISO 4217). The W3C has posted a letter to ISO... It appears that ANSI (the US member body for ISO) is already at work collecting these royalties, as this exchange suggests. Warnings have gone up on the ISO 3166 site as well... I've been a critic of the W3C's structure for a long time now, having doubts about the nature of vendor consortia. On these kinds of issues, however, the W3C seems to be well ahead of its peers. While the process of creating many W3C specifications may remain veiled in mystery, the specifications themselves are open for anyone to implement, free of charge -- and the W3C seems intent on keeping it that way, even in the face of recent patent lunacy. The larger problem this illustrates isn't the greedy nature of everyone, but rather the difficulties of trust in a world where organizations are underfunded and expected to scramble for dollars. Building organizations which are intended to promote the sharing of resources requires an independent source of funds. Otherwise, organizations will end up placing tolls on their results, impeding the very sharing they were set up to create..." See details and references in the news story "Standards Organizations Express Concern About Royalty Fees for ISO Codes." General references (for language codes) in "Language Identifiers in the Markup Context."

  • [September 22, 2003] "Add XML Parsing to Your J2ME Applications. Combine Mobile Data and Mobile Code on Your Mobile Device." By Soma Ghosh (Application developer, Entigo). From IBM developerWorks, Wireless. September 16, 2003. ['More and more enterprise and Java technology projects are making use of XML as a medium to store data in a portable fashion. But due to the increased processing power demanded by XML parsers, J2ME applications have largely been left out of this trend. Now, however, small-footprint XML parsers for the Java language are emerging that will allow MIDP programmers to take advantage of the power of XML.'] "The fusion of Java and XML technologies creates the powerful combination of portable code and portable data. But where does the Java 2 Platform, Micro Edition (J2ME) fit in? In this article, I'll show some of the progress that has been made in cutting XML parsers down to a size suited to J2ME applications and limited-resource platforms. I'll use the kXML package to write an application for the MIDP profile that can parse an XML document... In this article, you'll see how you can use J2ME to fuse Java technology and XML -- in other words, to fuse portable code with portable data. Designing J2ME applications with embedded parsers can be a challenge because of the resource constraints inherent in J2ME devices. However, with the gradual availability of compact parsers suited to the MIDP platform, XML parsing will soon be a widely used feature of the Java platform on mobile devices... Both push and model parsers require an amount of memory and processing power that is beyond the capabilities of many J2ME devices. To get around those device limitations, a third type of parser, called a pull parser, can be used. A pull parser reads a small amount of a document at once. The application drives the parser through the document by repeatedly requesting the next piece. 
The kXML parser that I'll use in my sample application is an example of a pull parser... You can use XML parsers in J2ME applications to interface with an existing XML service. For example, you could get a customized view of news on your phone from an aggregator site that summarizes headlines and story descriptions for a news site in XML format. XML parsers tend to be bulky, with heavy run time memory requirements. In order to adapt to the MIDP environment, XML parsers must be small to meet the resource constraints of MIDP-based devices. They should also be easily portable, with minimum effort required to port them to MIDP. Two frequently used XML parsers for resource-constrained devices are kXML and NanoXML. kXML is written exclusively for the J2ME platform (CLDC and MIDP). As of version 1.6.8 for MIDP, NanoXML supports DOM parsing..."
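The pull-parsing pattern the article describes is the application repeatedly asking the parser for the next event, rather than handing control to the parser. kXML itself is not in the JDK, so this sketch uses the JDK's StAX pull API (`javax.xml.stream`), which follows the same drive-the-parser pattern; the `headline` document is an invented example:

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import java.io.StringReader;

public class PullParseDemo {
    public static void main(String[] args) throws Exception {
        // Stand-in for a small feed delivered to the device.
        String doc = "<news><headline>ISO fees</headline>" +
                     "<headline>SAML 1.1</headline></news>";

        XMLStreamReader r = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(doc));
        // The application pulls events one at a time; only the current
        // event is held in memory, which is what suits constrained devices.
        while (r.hasNext()) {
            if (r.next() == XMLStreamConstants.START_ELEMENT
                    && r.getLocalName().equals("headline")) {
                System.out.println(r.getElementText());
            }
        }
        r.close();
    }
}
```

kXML's `XmlPullParser` exposes an analogous `next()`/event-type loop, which is why the approach ports naturally to MIDP-class devices.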

  • [September 22, 2003] "Microsoft Seeks Stronger XML Ties. ERP Vendors Pour Cold Water on Office as Window to Enterprise Applications." By Joris Evers. In InfoWorld (September 19, 2003). "Microsoft's forthcoming Office 2003 suite offers enterprises a promise few vendors or analysts are willing to support. The software giant argues that organizations will realize significant business process improvements by using the Office 2003 suite as a window into back-end enterprise systems. Office 2003's support for XML, Microsoft contends, is the key to bridging this front-end to back-end gap. But enterprise application vendors such as SAP, PeopleSoft, and Siebel Systems are far more interested in using XML for back-end integration, not to support a new front end. SAP, a longtime Microsoft partner, hopes Microsoft's support for XML will improve integration between Office and SAP back-end systems -- as SAP users can already tie Excel to their enterprise applications. But SAP does not expect users to switch from using portals to access data in enterprise systems to using Office... Cooper Tire and Rubber is one such pioneer: a Microsoft showcase for using the new made-for-XML InfoPath Office application instead of Word, Excel, Outlook, or PowerPoint. With the help of Microsoft, Cooper Tire is building an XML front end to its customized tire-mold management system. Using XML forms and InfoPath, the company will be able to track the movements of molds between its various locations, said Ron Sawyer, manufacturing IT manager at Cooper Tire. 'Right now, we do not have visibility of the molds as they are in transit, and we make estimates of how long it will take for a mold to get shipped out of one plant and arrive at the other,' Sawyer said. 'We are very new to using XML and wanted to stick with Office and the Microsoft tools because that is our standard.' About 40 employees at Cooper Tire will use XML forms.
The forms are opened in InfoPath and interact with a Windows 2000 Server system that sends the data on to an Oracle database..." See also: "Microsoft Office 11 and InfoPath [XDocs]" and "XML File Formats."

  • [September 22, 2003] "Sun Touts Liberty for Digital Rights Management." By Gavin Clarke. In Computer Business Review Online (September 19, 2003). "Sun Microsystems hopes to replicate an industry initiative for federated identity in the field of Digital Rights Management (DRM), to stymie Microsoft Corp's own controversial plans to control distribution of electronic content. The company has thrown its weight behind the OMA wireless group's effort to define a DRM specification on mobile devices. Ultimately, though, Sun hopes to build a coalition of vendors and end-users similar to the Liberty Alliance Project to drive uptake of DRM. Sun CTO John Fowler said a Liberty-style group would have the advantage of including input into specifications from end-users. Sun has helped work on a DRM specification at the Open Mobile Alliance, whose list of 200 members includes hardware vendors, ISVs, mobile specialists and content providers such as AOL Time Warner and Sony Inc. Liberty's members include end-users such as Amex and General Motors... 'Liberty was less about vendors who have technology and more about the user,' Fowler said. Sun additionally believes DRM for mobile systems to be important, given the expected growth rates in use of cell phones and other devices. Mobile platforms are also dominated by Sun's Java 2 Micro Edition (J2ME), meaning any DRM specification could ultimately be built into the platform. Ironically, Microsoft is also an OMA member, meaning the company could end up putting its name to DRM work that ultimately competes against its own. OMA is an amalgamation of formerly disparate wireless and mobile vendor groups, formed in June 2002..." General references in "XML and Digital Rights Management (DRM)."

  • [September 22, 2003] "SOAP Gains Traction: Q&A with Rebecca Dias." By Jack Vaughan. In Application Development Trends (September 19, 2003). Microsoft's Rebecca Dias discusses the status of interoperability between Microsoft's .NET and IBM Java, binary communications, the goals of Web Services Enhancements (WSE) V2, and related topics. Dias: "There's a great deal of traction in terms of just general SOAP message processing and interop, and that goes across the board. Actually, it's more than just IBM and Microsoft, it's the Java world as well as other worlds that exist out there. There are Lisp implementations, for instance, that are finding interop, as well as WSDL and the WS-I basic profile they've defined. There are about 100 partners, if not more, collectively collaborating on profiling how you do interoperability of SOAP, WSDL and the basic Web services protocol. There are also numerous implementations deployed based on that interoperability... The key to [SOA] is the meta data provided to you in the different SOAP headers, so SOAP is very quintessential to that. If a standards specification comes out that defines a different way to do the encoding that is highly and widely adopted, there's no reason why that can't happen. But today, the spec is still SOAP and XML meta data. If you have two intermediaries that are intelligent, that understand and know that the next intermediary hop happens to be in the same technology domain, and knows that we can actually do some kind of binary format from here to there, there's no reason why that can't happen and why your corresponding infrastructure can't support that. And if it ends up going to the next hop, which happens to not be potentially aware or know how to deal with that binary format, those systems had better know how to translate that back to SOAP, otherwise you're losing the whole value of a highly heterogeneous, interoperable system..."

  • [September 22, 2003] "Sun Touts Fast Web Services Plan. Binary Encodings Key to Proposal." By Paul Krill. In InfoWorld (September 19, 2003). "Researchers at Sun Microsystems are working on an initiative called Fast Web Services, intended to identify and solve performance problems in existing Web services standards implementations. Key to Sun's approach is boosting performance through use of binary encodings as an alternative to textual XML representations. 'Our technology improves both transmission speed, [with] less data transmitted, and processing performance on sender and receivers. The format requires less processor time than XML,' said Marc Hadley, Sun's senior staff engineer for Web technologies, products, and standards, in an e-mail response to questions. Sun plans to have a prototype of Fast Web Services in its Java Web Services Developer Pack early in 2004. Sun Distinguished Engineer Eduardo Pelegri-Llopart gave a presentation on Fast Web Services at the SunNetwork conference in San Francisco this week. Sun believes Web services is going to become the new paradigm for distributed systems going forward, he said. But Web services need to be tuned for performance while enabling interoperability, according to Pelegri-Llopart. 'We're trying to provide better performance. We don't want a solution that is specific to our implementation,' he said. Sun's plan requires changes from developers. 'We believe that developers are to a large degree lazy. They find a concept that they're comfortable with, they take that concept, and push it to the limit,' said Pelegri-Llopart. In Sun's view, the XML-based messaging that lies at the heart of current Web services technology carries with it a performance price.
XML-based messages require more processing than protocols such as RMI (Remote Method Invocation), RMI/IIOP (RMI Over Internet Inter-ORB Protocol), or CORBA/IIOP; data is represented inefficiently and binding requires computation, according to Sun in a paper published in August. 'The main point here is there is almost an order of magnitude between straightforward Web services using XML encoding and an implementation that takes care of binary encoding,' Pelegri-Llopart said. Fast Web Services attempts to solve bandwidth problems, including on wireless networks, by defining binary-based messages, albeit while losing the self-descriptive nature of XML. Although not an attempt to replace XML messaging, Fast Web Services is intended for use when performance is an issue..." See: (1) "JavaOne: Fast Web Services," presentation by Santiago Pericas-Geertsen and Paul Sandoz (Sun Microsystems); (2) "Fast Web Services," by Paul Sandoz, Santiago Pericas-Geertsen, Kohsuke Kawaguchi, Marc Hadley, and Eduardo Pelegri-Llopart (Sun Microsystems Web Services library; appendices include a WSDL Example and an ASN.1 Schema for SOAP).
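The size-and-processing argument behind binary encodings can be illustrated with a toy comparison. This is not Sun's actual Fast Web Services format (which was ASN.1-based); the order fields below are invented, and plain DataOutputStream stands in for a real binary wire format.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class EncodingSizeDemo {
    // The self-describing XML form: every value is wrapped in tags.
    public static int xmlSize(int orderId, double amount) {
        String xml = "<order><id>" + orderId + "</id><amount>" + amount
                + "</amount></order>";
        return xml.getBytes(StandardCharsets.UTF_8).length;
    }

    // A fixed binary form: 4 bytes for the int, 8 for the double, no tags.
    public static int binarySize(int orderId, double amount) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buf);
            out.writeInt(orderId);
            out.writeDouble(amount);
            out.flush();
            return buf.size();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("XML: " + xmlSize(123456, 99.95) + " bytes, binary: "
                + binarySize(123456, 99.95) + " bytes");   // 52 vs 12 bytes
    }
}
```

The binary form is smaller and needs no text-to-number conversion on receipt, which is the performance gain Sun points to; what it gives up, as the article notes, is XML's self-descriptiveness.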

  • [September 22, 2003] "Adobe E-Doc Format Under Siege." By David Becker. In CNET News.com (September 18, 2003). ['Adobe's popular PDF document-sharing format faces challenges from Autodesk and Macromedia, each looking to take a bite out of the market with their own new technology. Analysts say the rivals could be a real threat to Adobe, which attributed a major earnings boost last week to a new line of PDF products.'] "Adobe Systems' portable document format, long a de facto Internet standard, is under fire from competitors looking to muscle in on the electronic document market. Autodesk, the leading maker of drafting software for architectural and engineering documents, recently began an aggressive advertising campaign urging customers to share documents in Autodesk's own Design Web Format (DWF) rather than in Adobe's PDF. In addition, Macromedia introduced FlashPaper, a new component based on the company's widespread Flash animation format that allows documents to easily be incorporated into Web pages and printed... One of the most influential Web design writers, Jakob Nielsen, recently attacked the widespread use of PDF for displaying documents over the Web, declaring the format 'unfit for human consumption.' The challenges come at a key time for San Jose, Calif.-based Adobe, which attributed a major earnings boost last week to a new line of PDF-related products released earlier this year. While PDF is firmly established in the PC world, 'I think there's always the possibility of a real threat,' said Rob Lancaster, an analyst for research firm The Yankee Group. 'Adobe is attempting to entrench itself within business applications, extending the capabilities of PDF beyond its typical role as viewing software, and a big part of that appeal rests on the ubiquity of the viewing capability.' Chuck Meyers, a technology strategist for Adobe's ePaper division, characterized recent swipes at PDF as acknowledgement of the company's success in popularizing the format. 
'The key thing that's happening is that as we get bigger and better...the area we're in is a little bit more interesting a target than it used to be,' he said. 'We're going to take heat from a variety of different directions.' The most pointed business attack has come from Autodesk, whose new advertising and marketing campaign focuses on the supposed faults of PDF for exchanging engineering documents. The campaign comes as a surprise turnaround, after Adobe highlighted compatibility with AutoCAD -- Autodesk's main application for architectural drafting -- as a key selling point for Acrobat Professional, the new high-end version of its PDF authoring tool. Tony Peach, the director of DWF corporate strategy for Autodesk, says the campaign stems from customer inquiries about the best way to exchange engineering documents..."

  • [September 18, 2003] "No Standard for Standards." By Jim Ericson. In Line56 (September 18, 2003). "History shows the value of uniformity, but portal standards are not yet a path to better workplace advantage... We have written and written about the value of extensible languages and protocols, and lately we've been excited and led some of the cheering for the arrival of portal standards like JSR 168, the Java API for local portlets and WSRP, an interface to assemble and connect third-party portlets. We're as patient as the next bunch, but who wouldn't cheer for ease of content integration and portal interoperability? Well, amid the myriad and venerable Web standards movements in progress, the first 1.0 spec of WSRP has finally just arrived with approval of the OASIS standards body, and JSR 168 is in its latest final draft. All the vendors and integrators have lined up behind the standards with products that support JSR and WSRP. We should be happy for our cause, but we're not because now we know we have only scratched the portals standards surface. There doesn't appear to be a compelling competitive advantage for a standards-adopting first mover and besides, putting portal standards to use today will be nothing so easy as plugging a toaster into a wall socket... JSR 168 builds on existing practices and lets developers create portlets that are interoperable with portal servers and Web applications. It's really designed for local execution of portlets, says Phil Soffer, who manages products and standards compliance at Plumtree. 'JSR 168's biggest strength is simplicity and the tools available from Java vendors can be used right away or with few extensions,' Soffer says. The weaknesses are that quality of service cannot be guaranteed, and that it is hard to scale locally without the addition of multiple application servers. WSRP, or Web Services for Remote Portlets, is a cross-platform standard designed to let portlets execute remotely from a portal server. 
It's a 'plug-'n-play' standard for multiple proprietary portals, Web applications and content sources. A good thing is that the standards are complementary. A developer could build self-service HR into a JSR 168 portlet, wrap it as a WSRP service and expose it to other portals and applications. This way a .NET portal framework could look at a JSR-built portlet through the WSRP 'wrapper.' A not so good thing from a standardization view is that proprietary portal applications like Plumtree's can presently deliver a lot more functionality natively than can be delivered through standards-based interfaces. So rather than running JSR 168 natively, Plumtree puts the standard in a parallel engine that can be used as works best, while retaining native benefits like fault tolerance and caching..." See recently: "Web Services at Apache Hosts WSRP4J Open Source Project for Remote Portlets."

  • [September 18, 2003] "The State of the Python-XML Art, 2003." By Uche Ogbuji. From XML.com (September 11, 2003). The author updates his overall Python-XML survey to encompass notable developments over the past year, many of which have been mentioned in the previous XML.com Python articles. This article serves as a ready and rapid index to folks who want to process XML using "the best language available for the purpose." Ogbuji organizes the review in a table according to the areas of XML technology. This will give newcomers to Python a quick look at the coverage of XML technologies in Python and should serve as a quick guide to where to go to address any particular XML processing need. He rates the vitality of each listed project as either "weak", "steady", or "strong" according to the recent visible activity on each project: mailing list traffic, releases, articles, other projects that use it, etc. The table uses these categories for tools supporting Python-based processing: XML parsing engines, DOM, Data bindings and specialized APIs, XPath and XSLT, Schema languages, Protocols, RDF and Topic Maps, Miscellaneous. A year ago the author reported 34 Python-XML projects; this year he adds 24; most of the additions point to the impressive activity that continues on the Python-XML front..." See also "XML and Python."

  • [September 18, 2003] "Commentary: SOE - Service Oriented Everything?" By CBDi Forum Analyst. In CBDi Newswire (September 17, 2003). "The plethora of Service Oriented acronyms appearing is a sure sign that Service Orientation is the 'next big thing'. As with Object Orientation, expect Service Oriented Programming, Service Oriented Analysis and Design, etc., to take centre stage with developers in the near future. Already some vendors are telling us to 'watch out for SOx' as their product plans begin to take shape, whilst analysts rush to each invent their own SOxx acronyms in typical 'we thought of it first' style. Whatever the acronym, successful adoption of SOx and Web Services will not happen by a process of osmosis, simply allowing technology to drive Service Orientation from the bottom up. At a recent workshop we held for a large global company it was evident pockets of Web Service adoption were springing up across the organization often with little visibility between one group and another. This is not unexpected, and should not be looked upon as a bad thing or discouraged. In this case the individual results were successful, and as ever it is often preferable that people prove for themselves that new ideas work rather than have it dictated to them from on high... The CBDI Web Services Roadmap initiative is designed to help organizations properly manage the shift to Web Services and SOA. We provide the roadmap in recognition that this shift is a journey that won't happen overnight, but now that it is evident the take-up of Web Services is accelerating, it looks like a good time to start..."

  • [September 18, 2003] "A Preview of WS-I Basic Profile 1.1." By Anish Karmarkar. From O'Reilly WebServices.xml.com (September 16, 2003). "On 12th August 2003, WS-I (Web Services Interoperability Organization) announced the release of the final specification of Basic Profile 1.0, a set of recommendations on how to use web services specifications to maximize interoperability. For developers, users, and vendors of web services and web services tools this is a big leap forward to achieving interoperability in the emerging and fast changing world of web services. But what else has WS-I been working on? WS-I recognizes the fact that Basic Profile 1.0 is just a beginning and that it's a long road toward web services maturity and interoperability. In its mission toward accelerating the adoption of web services and promoting interoperability, the Basic Profile Working Group, which developed Basic Profile 1.0, is tasked with generating Basic Profile 1.1 to incorporate attachments... Basic Profile 1.1, as the name indicates, is the next version of Basic Profile. It builds on 1.0, adding support for SOAP Messages with Attachments (SwA) and WSDL 1.1's Section 5 MIME bindings. As part of the process of releasing a Profile, other Working Groups within WS-I develop sample applications and test tools for the Profile. This ensures that the Profile is implementable and 'debugged' before its final release. Like Basic Profile 1.0, Basic Profile 1.1 will be released with sample applications and test tools. This article provides a preview of Basic Profile 1.1 based on the latest Working Group Draft. The Basic Profile Working Group has been working on Basic Profile 1.1 since January 2003. In the course of its development the WG identified more than 70 technical issues that needed to be resolved. Only a very few minor ones remain.
Please remember that this preview is based upon a Working Group Draft; as a work in progress, it can (and almost certainly will) be modified as the draft Profile is reviewed and refined... The most widely implemented and accepted attachment technology is MIME. SwA combines MHTML and content-id URIs (CID) for referencing MIME parts in SOAP. Basic Profile 1.1 has selected SwA as the attachment technology and WSDL 1.1 Section 5 MIME bindings for describing SwA. Basic Profile 1.1, as with Basic Profile 1.0, clarifies, fixes, and subsets the relevant specs to make them more interoperable and removes ambiguities. This addresses a real need that developers and users of web services have when dealing with large binary data and transporting it within a SOAP 1.1 Envelope. The direction that Basic Profile 1.1 has taken fits very nicely with the direction that XMLP WG has taken with respect to attachments for SOAP 1.2, as documented in SOAP Message Transmission Optimization Mechanism (MTOM). Both use MIME and are based on SwA... Interoperable attachments are among the features most frequently demanded by developers and users of web services. The Basic Profile Working Group addresses this need by including SwA in Basic Profile 1.1, resolving ambiguities, and by filling in the gaps of existing specifications. Furthermore, Basic Profile 1.1 also enables language binding tools to generate appropriate APIs to take full advantage of attachments..." See: (1) "WS-I Releases Basic Profile 1.0a Final Specification for Interoperable Web Services"; (2) Charter v1.1; (3) general references in "Web Services Interoperability Organization (WS-I)."
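For readers unfamiliar with SwA, a sketch of what such a message looks like on the wire may help: a MIME multipart whose root part is the SOAP 1.1 envelope and whose second part is binary content referenced from the envelope by a cid: URI. The boundary string, Content-IDs, and element names below are illustrative, not taken from the profile.

```java
// Builds an illustrative SOAP-with-Attachments message by hand so the
// multipart structure is visible; real toolkits assemble this for you.
public class SwaMessageDemo {
    public static String build(String photoCid) {
        String boundary = "MIME_boundary";
        return "MIME-Version: 1.0\r\n"
             + "Content-Type: multipart/related; type=\"text/xml\"; boundary=\""
                 + boundary + "\"\r\n"
             + "\r\n"
             + "--" + boundary + "\r\n"                    // root part: the envelope
             + "Content-Type: text/xml; charset=UTF-8\r\n"
             + "Content-ID: <soap-envelope>\r\n"
             + "\r\n"
             + "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
             + "<soap:Body><submitClaim>"
             + "<photo href=\"cid:" + photoCid + "\"/>"    // points at the MIME part
             + "</submitClaim></soap:Body></soap:Envelope>\r\n"
             + "--" + boundary + "\r\n"                    // second part: the binary data
             + "Content-Type: image/jpeg\r\n"
             + "Content-ID: <" + photoCid + ">\r\n"
             + "\r\n"
             + "...binary JPEG bytes...\r\n"
             + "--" + boundary + "--\r\n";                 // closing boundary
    }

    public static void main(String[] args) {
        System.out.println(build("photo-1@example.com"));
    }
}
```

The point the profile work addresses is that the binary part travels outside the envelope, so it need not be base64-encoded inside the XML; the cid: reference ties the two parts together.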

  • [September 17, 2003] "Sun Embraces Open-Source Database." By Matt Hicks. In eWEEK (September 17, 2003). "Sun Microsystems Inc. is standardizing its software products on an open-source database to store and manage non-relational data. The company has chosen Sleepycat's Berkeley DB database as the embedded database within its software line. The database is incorporated in key components of the Sun Java Enterprise System, formerly known as Project Orion, and the Sun Java Enterprise Desktop System, formerly known as Project Mad Hatter, both launched on Tuesday. Sleepycat President and CEO Mike Olson said Sun chose Berkeley DB not only because of the technology behind it but also because Sleepycat offers a dual license: It offers a free open-source license for using the database within open-source software and a paid commercial license for software vendors like Sun using Berkeley DB within commercial software..." See: (1) the announcement: "Sun Microsystems Selects Sleepycat Software for New Middleware and Desktop Initiatives. Sun to Standardize on Berkeley DB to Meet Non-Relational Data Management Needs Within Key Components of Sun Java Enterprise System and Sun Java Desktop System." (2) "Sleepycat Software Releases Berkeley DB XML Native XML Database"; (3) "Berkeley DB XML: An Embedded XML Database."

  • [September 17, 2003] "Microsoft, IBM Toast Next Era of Web Services. Companies Demonstrate Web Service Interoperability on Windows, Linux Platforms." By Paula Rooney. In CRN (September 17, 2003). "Microsoft and IBM united in New York to demonstrate preview code for the next set of Web service protocols designed to enable more complex, secure, cross-company e-business transactions. Microsoft Chairman Bill Gates, on hand with top IBM software executive Steve Mills, said the forthcoming WS-Security, WS-Reliable Messaging and WS-Transaction protocols are designed to enable the kind of e-business relationships many dot.com vendors hyped during the late 1990s. 'Web services are important to the foundation of the Internet, enabling e-commerce to become a reality,' Gates said during a briefing in New York. 'That rich new layer will take Web services to a new level... we hope to see implementation in .NET and WebSphere.' At a briefing in New York on Wednesday, Microsoft and IBM together demonstrated early WS-Security, WS-Reliable Messaging and WS-Transaction protocol code working in the form of a supply chain Web service application among a car dealer, manufacturer and supplier. The Web service application -- which replicates the same function as a costly Electronic Data Interchange (EDI) transaction of the past -- was running on disparate systems: a Windows Server 2003 system, a Linux-based WebSphere server from IBM, and a Linux-based wireless handheld. The WS-Security, WS-Reliable Messaging and transactions specifications have been under development for more than a year. The demonstration on Wednesday -- a big milestone in the evolution of Web services -- proved interoperability of systems and the execution of a hassle-free secure, financial transaction between three partners, Gates and Mills said... While the two companies voiced continued commitment to standards, there remain a number of uncertainties that could undermine Web service interoperability, sources note.
Privately, one IBM executive said the formal adoption of WS-Security by OASIS is expected 'very soon' -- within the next six months. The two other protocols -- WS-Reliable Messaging and WS-Transaction -- are due in 2004 or 2005. However, during the briefing, neither Gates nor IBM's Steve Mills, senior vice president and group executive of IBM's Software Group, could say when compliant products will be delivered, or when the specification will be formally adopted and by which standards body. 'We're still evaluating that,' Gates said. 'WS-Security went to OASIS, that's a possibility. No decision has been made'." Article also published in TechWeb News.com. See: (1) "OASIS WSS TC Approves Three Web Services Security Specifications for Public Review"; (2) "Updated Specifications for the Web Services Transaction Framework"; (3) "Reliable Messaging."

  • [September 17, 2003] "Web Services Reliable Messaging Update." By Peter Abrahams. In IT-Director.com (September 15, 2003). "In March [2003] I wrote two articles about Web Services Reliable Messaging, describing two competing specifications: WS Reliability from Sun, Oracle and friends and WS Reliable Messaging from BEA, IBM, Microsoft and Tibco (BIMT). Since I wrote, some progress has been made. Firstly OASIS set up a WS Reliable Messaging Technical Committee (WS-RM TC) and based its work on the Sun-Oracle specification... this committee has met several times and improved and expanded the specification... The OASIS specification recognises a Reliable Messaging Process (RMP) that does that on behalf of the application. However, just as with the BIMT specification, there is no definition of the application interface to the RMP... The specification is still very much a work in progress with several comments in the draft saying that sections must be improved or rewritten. On the 4th of September the TC had a face to face meeting. The meeting included the first successful tests of the protocol enabling communications between different implementations from Fujitsu, Hitachi, NEC and Oracle. The test harness included a 'network troublemaker' that simulated various error conditions that could affect the successful message delivery. The tests ran for 36 hours without problem... Looking at the OASIS and the BIMT specification there now seems little functional difference (obviously the detailed syntax is not identical). The only substantive difference I could find is that OASIS sends an acknowledgment (ACK) for each message separately; whereas BIMT has a construct that allows multiple messages to be acknowledged in one ACK. The BIMT construct will improve performance to some extent by reducing message traffic, but does add an extra layer of complexity to the implementation..." See: "Reliable Messaging."
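The acknowledgment difference the author describes can be sketched as follows. The message strings here are invented for illustration; they are not the actual OASIS or BIMT syntax.

```java
import java.util.ArrayList;
import java.util.List;

// Contrasts per-message acknowledgment (one ACK per delivered sequence
// number) with a single range acknowledgment covering the same messages.
public class AckDemo {
    // OASIS-style in the draft described above: one ACK per message.
    public static List<String> perMessageAcks(int firstSeq, int lastSeq) {
        List<String> acks = new ArrayList<>();
        for (int seq = firstSeq; seq <= lastSeq; seq++) {
            acks.add("ACK seq=" + seq);
        }
        return acks;
    }

    // BIMT-style construct: one ACK naming the whole contiguous range,
    // cutting acknowledgment traffic at the cost of extra receiver logic.
    public static String rangeAck(int firstSeq, int lastSeq) {
        return "ACK range=" + firstSeq + "-" + lastSeq;
    }

    public static void main(String[] args) {
        System.out.println(perMessageAcks(1, 5).size() + " acks vs 1: " + rangeAck(1, 5));
        // prints 5 acks vs 1: ACK range=1-5
    }
}
```

The receiver in the range case must track which sequence numbers it has seen so it can name a correct range, which is the added implementation complexity the article mentions.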

  • [September 17, 2003] "Gates, Mills Talk Up Web Services in NY." By Michael R. Zimmerman. In eWEEK (September 17, 2003). "Bill Gates and IBM Software chief Steve Mills joined together here today to give an update on their companies' combined work in advancing Web services... Gates and Mills, IBM Software Group's senior vice president and general manager, demonstrated for the first time reliable messaging and secure, authenticated transactions across a federated, heterogeneous environment. They also announced that they plan to take the specifications used to pull off the demonstration, and which the companies have been developing for three years, to open standards bodies soon, and that they would not seek royalties for the specs... The demo was based on an auto dealer/parts scenario that comprised three partners -- an auto dealer, a parts supplier, and the parts manufacturer -- and a high-tech cocktail of DB2, SQL Server, WebSphere and .Net. The dealer was notified upon logging on of a windshield wiper shortage. The crowd followed as the dealer proceeded to place an order with the supplier, who in turn placed an order with the manufacturer. Sounds simple enough, but the underpinnings of the demonstration were actual Web services apps, developed with specs such as the Web Services (WS)-Coordination and WS-Atomic Transaction specs, both of which were created by IBM and Microsoft along with BEA Systems Inc. The former is a 'framework for providing protocols that coordinate the actions of distributed apps,' while the latter 'provides the definition of the atomic transaction coordination type that is to be used with the extensible coordination framework described in WS-Coordination.' Other specs put to work today were the WS-Federation and WS-Reliable Messaging..." See: "Updated Specifications for the Web Services Transaction Framework."

  • [September 17, 2003] "Microsoft, IBM Push Web Services Advances." By Mike Ricciuti. In CNET News.com (September 17, 2003). "Microsoft and IBM, usually bitter rivals, on Wednesday demonstrated how their competing software packages can interact using Web services and pledged cooperation in establishing additional standards. At a press briefing, Microsoft Chairman Bill Gates and Steve Mills, the executive in charge of IBM's software unit, demonstrated for the first time what Gates termed 'advanced' Web services capabilities designed by the two companies for linking business software. The companies showed off an application that links automotive parts suppliers, manufacturers and dealers via Web services that use new specifications to ensure security, reliable messaging, and transaction support. The companies said the demonstration, which used software from both Microsoft and IBM, including servers running Linux, would have been difficult to accomplish with older technologies... Gates said the new specifications are needed in addition to existing standards such as XML (Extensible Markup Language) and SOAP (Simple Object Access Protocol). 'This rich new layer will take us to the next level,' he said. 'This is the first time anyone has seen this running,' Gates said. 'We think what will come out of this is along the lines of what we did with earlier specifications. We will submit (these specifications) to a standards group as royalty-free standards.' Wednesday's demonstration, which sources said was largely arranged by Microsoft, indicates that the companies could be concerned that Web services isn't being used for the mission-critical applications, as they had envisioned. 'I think there is concern that they need to keep these ideas in people's minds,' Narsu said. 'There seems to be concern that adoption is shallow.' Nearly 90 percent of big companies surveyed earlier this year by Gartner Group said they were using XML, the key Web services technology. 
Most respondents said they were interested in Web services and were in early trials. But Web services is in its infancy. While effective, the technology can only connect applications at a rudimentary level. The advanced capabilities outlined by Gates are needed before Web services can become widely used as a way to link companies, analysts said...."

  • [September 17, 2003] "Enterprise Transformation: Agile Solutions Requires Developing for 'Choice'." From Defense Finance and Accounting Service [Ms. Audrey Davis, Director for Information and Technology, DFAS CIO]. "The Defense Finance and Accounting Service (DFAS) is in the frontline of systems integration, trying to cope with many legacy systems and thousands of interfaces. One of the primary missions of the agency is to unify financial support functions of an agency of the United States Department of Defense (DoD). Much of this effort has been on eliminating duplication/redundancy of systems through reuse and conformation to standards... This paper starts with sharing lessons-learned that are applicable to many organizations that are transforming themselves to be agile. Then wider needs are covered including: how to be more customer responsive, being proactive rather than reactive, and addressing new business requirements with declining budgets. The same set of principles given here applies for all information systems that offer diffused and distributed content that is difficult to manage, coordinate, and evolve. In this regard we will also be discussing the Business-Centric Methodology (BCM)... The Business-Centric Methodology (BCM) effort underway at OASIS addresses the challenges of agility and interoperability through the adoption of a business first philosophy. The BCM facilitates the capture of decision rationale and involves the business experts to scope, define, relate and manage the business semantics concisely. Business users and customers can communicate concerns and aspects of the business more easily and accurately than developers can. The BCM's declarative approach allows business users to take back the 'steering wheel' of development and integration, much like the car factory evolved from machinist-built Model Ts to the modern factory's process configured by the customer's job order. 
The BCM Contract (job order) approach handles potentially thousands of relevant Choice Points in an organization through patterns defined via predefined BCM Templates, rather than being lost in tactical software programs. The BCM provides a clean separation of concerns in four layers: Conceptual, Business, Extension, and Implementation. Each layer is defined by its primary aspects, which are natural and intuitive means for providing a solution for interoperability. This separation allows for maximum reusability in terms of both components and aspects..." See: "OASIS Forms Business-Centric Methodology Technical Committee"; (2) BCM TC website. [source, cache .DOC]

  • [September 17, 2003] "Web Services Management Heats Up." By Martin LaMonica. In CNET News.com (September 17, 2003). "The development of a Web services management standard continued to move forward, in a technology area fast becoming the next major competitive race among Web services providers. Computer Associates International, IBM and Web services management start-up Talking Blocks last Thursday submitted a technical specification to the standards group Organization for the Advancement of Structured Information Standards (OASIS) for consideration as an eventual industry standard... The goal of the Web Services Distributed Management (WSDM) technical committee at OASIS is to write a technical blueprint for products that track the performance of applications written according to Web services standards. The standard, due in January of next year, will ensure that Web services management wares from different companies will interoperate. The WSDM technical committee is slated to meet in two weeks to discuss the standard. ... Weeks before HP announced plans to acquire Talking Blocks, CA quietly purchased Adjoin, another Web services management company. Several other start-ups, including Actional, AmberPoint and Confluent have also introduced Web services management products. Analysts said that investment in the development of Web services management products reflects a growing need among businesses for tools that can spot Web services glitches and ensure that applications run according to predefined performance goals..." See details in the news story: "IBM, Computer Associates, and Talking Blocks Release WS-Manageability Specification." Also: (1) OASIS Web Services Distributed Management TC website; (2) "Talking Blocks, CA, and IBM Announce Submission of Web Services Manageability Standard to OASIS. Leaders in Systems and Web Services Management Create and Jointly Submit Standard to OASIS Web Services Distributed Management Technical Committee."

  • [September 16, 2003] "Using XPath with SOAP." By Massimiliano Bigatti. From O'Reilly WebServices.XML.com (September 16, 2003). ['Max Bigatti shows that we don't always need heavyweight data binding for RPC-style SOAP processing. With a working example he shows how Java's Jaxen XPath processor can be used to implement a loosely coupled web service.'] "XPath is a language for addressing parts of an XML document, used most commonly by XSLT. There are various APIs for processing XPath. For the purposes of this article I will use the open source Jaxen API. Jaxen is a Java XPath engine that supports many XML parsing APIs, such as SAX, DOM4J, and DOM. It also supports namespaces, variables, and functions. XPath is useful when you need to extract some information from an XML document, such as a SOAP message, without building a complete parser using JAXM (Java API for XML Messaging) or JAX-RPC (Java API for XML-Based RPC). Moreover, the loosely-coupled nature of web services suggests that the use of dynamic data extraction is sometimes better than using static proxies like the ones produced using JAX-RPC. In the article I'll show a JAXM Web Service for calculating statistics and a generic JAXM client that uses the service, demonstrating the use of XPath for generic data extraction. The Jaxen library implements the XPath specification on the Java Platform. Jaxen supports different XML object models, including DOM4J, JDOM, W3C DOM, and Mind Electric's EXML. It supports so many object models by abstracting the XML document using the XML Infoset specification, which provides a representation of XML documents using abstract 'information items'... The full source code is available online. Notice that the full libraries required (JAXM, JAX-RPC, Axis and Jaxen) are not provided. They can be downloaded from the web sites mentioned in the Resources section below. The example uses JWSDP 1.1 JAXM and SAAJ APIs and reference implementations. 
The generic client uses Axis (which is JAXM compliant) and the Jaxen library..."
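
The dynamic-extraction approach the article describes can be sketched as follows. This uses the JDK's built-in javax.xml.xpath API as a stand-in for Jaxen (whose API is similar), and the envelope, the urn:example:stats namespace, and the getAverage operation below are invented for illustration:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class SoapXPathDemo {
    // A minimal SOAP 1.1 envelope; the getAverage operation and its
    // urn:example:stats namespace are hypothetical, invented for this sketch.
    static final String SOAP =
        "<soap:Envelope xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/'>" +
        "<soap:Body>" +
        "<getAverage xmlns='urn:example:stats'>" +
        "<value>10</value><value>20</value><value>30</value>" +
        "</getAverage>" +
        "</soap:Body>" +
        "</soap:Envelope>";

    /** Pull data out of the envelope with XPath instead of a generated proxy. */
    static String demo() throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder()
                          .parse(new InputSource(new StringReader(SOAP)));

        XPath xpath = XPathFactory.newInstance().newXPath();
        // Matching on local-name() keeps the query loosely coupled to
        // whatever namespace prefixes the sender happened to use.
        String first = xpath.evaluate("//*[local-name()='value'][1]", doc);
        double count = (Double) xpath.evaluate(
                "count(//*[local-name()='value'])", doc, XPathConstants.NUMBER);
        return first + " " + (int) count;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints "10 3"
    }
}
```

The same extraction in Jaxen replaces the XPathFactory step with one of its model-specific XPath classes evaluated against the parsed document; the loose coupling comes from the query itself, not the engine.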

  • [September 16, 2003] "Enabling Smart Objects: Breakthrough RFID-Enabled Supply Chain Execution Infrastructure." Sun Microsystems White Paper: Sun and Auto-ID. September 9, 2003. 32 pages. "Using technology breakthroughs in radio frequency identification (RFID) design, the Massachusetts Institute of Technology (MIT) Auto-ID Center, along with the Uniform Code Council (UCC), is leading a group of more than 90 companies and research centers to define widely supported global standards in reading, finding, and formatting product information. These standards are being designed for use as a next generation of the bar code. The Auto-ID standards will create a cost-effective way to make the supply chain more efficient. The compelling aspect of an Auto-ID enabled operation is the association of information with product movement. The combination of tags, antennas, readers, and local computers ('Savants') provides a near real-time view of product status and location. Many companies have begun trials to determine how this new infrastructure can be best used to make significant improvements in enterprise cost structures or revenue capabilities... The key components of the Auto-ID standard are: Electronic Product Code (EPC), Radio frequency identification (RFID) tags, Tag readers, Savant servers, Object Name Service (ONS), and the Physical Markup Language (PML)... The EPC identifies individual products, but useful information about the product is written in a new, standard computer language called Physical Markup Language (PML). PML is based on the widely accepted, extensible markup language (XML), and is expected to become a universal standard for describing physical objects, processes, and environments. Thus PML can store any information that could be useful; for example, product composition, lot number, and date of manufacture. This information can be used to create new services and strategies. 
For example, a consumer could find out how to recycle a product's packaging, a retailer could set a trigger to lower prices on milk as expiration dates approach, or a manufacturer could recall a specific lot of product. PML is designed to be a dynamic data structure, with information that can be updated over time. For example, the PML record for a product can be updated to store the location of a product as it moves through a supply chain... Once EPC data are detected by the readers, they are passed to The Savant. The Savant acts as event manager, filtering out extraneous EPC reads or events. The ONS Server provides the IP address of a PML Server that stores information pertinent to the EPC. Data from the Savant is passed into the application infrastructure, or operations bus, either locally or over a WAN such as the Internet. From here, the data is made available to virtually any application that can make use of it..." See: (1) "Physical Markup Language (PML) for Radio Frequency Identification (RFID)"; (2) Sun RFID resources at Auto-ID: Reinventing the Global Supply Chain; (3) "Radio Frequency Identification (RFID) Resources and Readings." [cache]
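
As a rough sketch of the idea, a PML-style record might carry per-item data like the following; the element and namespace names here are invented for illustration and are not taken from the published PML schemas:

```xml
<!-- Illustrative sketch only: the element names and namespace URI are
     hypothetical, not quoted from the Auto-ID Center's PML schemas. -->
<pml:Product xmlns:pml="urn:example:pml">
  <pml:EPC>01.0000A89.00016F.000169DC0</pml:EPC>
  <pml:LotNumber>LN-2003-0915</pml:LotNumber>
  <pml:DateOfManufacture>2003-09-01</pml:DateOfManufacture>
  <!-- Updated over time as the item moves through the supply chain. -->
  <pml:Location timestamp="2003-09-15T08:30:00Z">DC-Chicago-Dock4</pml:Location>
</pml:Product>
```

Because the record is XML, the same data can feed a recall application, a pricing trigger, or a recycling lookup without any per-application format.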

  • [September 16, 2003] "RFID: Driving Benefits Throughout the Supply Chain." By Norm Korey (IBM Global Services). In Wireless Business and Technology Volume 3, Issue 9 (September 2003). "RFID is an emerging, advanced wireless technology for item tagging that enables end-to-end asset awareness. At its core, RFID uses tags, or transponders that, unlike bar code labels, have the ability to store information that can be transmitted wirelessly in an automated fashion to specialized RFID readers, or interrogators. This stored information may be written and rewritten to an embedded chip in the RFID tag. When affixed to various objects, tags can be read when they detect a radio frequency signal from a reader over a range of distances and do not require line-of-sight orientation. The reader then sends the tag information over the enterprise network to back-end systems for processing. RFID tags can be introduced to goods during the manufacturing process, to an individual item, or at a pack, box, or pallet level. RFID systems are also distinguished by their frequency ranges. Low-frequency (30KHz to 500KHz) systems have short reading ranges and lower system costs. They are most commonly used in security access, asset tracking, and animal identification applications. High-frequency (850MHz-950MHz and 2.4GHz-2.5GHz) systems, offering long read ranges (greater than 90 feet) and high reading speeds, are used for such applications as railroad car tracking and automated toll collection...The uses of RFID tags are endless: animal identification, security access, anti-theft retail systems, asset and inventory tracking, automatic toll collection, wildlife and livestock tracking, house-arrest monitoring systems, manufacturing work-in-process data, shipping, container and air cargo tracking, fleet maintenance, etc... 
RFID tags will replace traditional barcode technology due to several intrinsic disadvantages of barcodes, including: (1) Loss/damage: Barcodes are prone to loss or damage because they are stuck to the outside of packages and so can easily be damaged; (2) Human interaction: Barcodes require human intervention to operate the barcode scanner; (3) Limited information: Barcodes cannot be programmed or reprogrammed and can provide only the most basic product number information; (4) Stock storage space constraints: Barcodes require line-of-sight to be read... During the past decade, supply chain management has seen a complete overhaul of traditional logistics procedures as tight integration between warehouses, distribution, and retail has smoothed out duplication and improved time-to-market. Supply chain efficiencies are being driven by improvements in information accuracy and availability. However, further improvements have been constrained by the technology used to track goods through the supply chain. The use of RFID wireless technology changes that, providing organizations with an opportunity to significantly enhance supply chain processes as well as deliver improvements in customer service..." See: (1) "Physical Markup Language (PML) for Radio Frequency Identification (RFID)"; (2) "Radio Frequency Identification (RFID) Resources and Readings."
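
The two frequency bands the article quotes amount to a small lookup table; a minimal sketch, using exactly the band boundaries given above (note the article's taxonomy omits other real RFID bands, such as 13.56MHz):

```java
public class RfidBands {
    /** Classify a carrier frequency (in Hz) using the ranges quoted in the article. */
    static String classify(double hz) {
        if (hz >= 30e3 && hz <= 500e3) {
            // Short read range, lower system cost: security access,
            // asset tracking, animal identification.
            return "low-frequency";
        } else if ((hz >= 850e6 && hz <= 950e6) || (hz >= 2.4e9 && hz <= 2.5e9)) {
            // Long read range (greater than 90 feet), high reading speed:
            // railroad car tracking, automated toll collection.
            return "high-frequency";
        }
        return "outside the bands described in the article";
    }

    public static void main(String[] args) {
        System.out.println(classify(125e3)); // prints "low-frequency"
        System.out.println(classify(915e6)); // prints "high-frequency"
    }
}
```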

  • [September 16, 2003] "James Clark Unveils a New XML Mode for GNU Emacs." By Michael Smith. From XMLHack (September 10, 2003). "James Clark has announced the alpha release of nXML, a new mode for editing XML documents from within GNU Emacs. It's a milestone in that it's the first open-source editing application to enable context-sensitive validated editing against Relax NG schemas. It also provides a clever mechanism for real-time, automatic visual identification of validity errors, along with flexible syntax-highlighting and indenting capabilities. The real-time validation feature is similar to a feature in the Topologi Collaborative Markup Editor, a relatively new commercial application that takes a number of novel approaches to XML editing. The Emacs/nXML implementation works like this: As you are editing a document, nXML does background re-parsing and re-validating of the document in the idle periods between the times when you are actually typing in content. It visually highlights all instances of invalidity it finds in the document. If you then mouse over one of the invalidity-highlighted points in the document, popup text appears describing the validity error..." The resources are available for download. Also of note: on September 5, 2003 a list "emacs-nxml-mode - New XML Mode for Emacs" was started on Yahoo!Groups "for discussion of a new major mode for GNU Emacs for editing XML, with support for RELAX NG. This is under development by James Clark. This group will discuss details of what features the mode should provide and how they should work. Also users will be able to get help on using the mode." See also the new "relaxng-user: A public mailing list for users of RELAX NG" with address relaxng-user@relaxng.org and the associated relaxng-user archives. General references in "RELAX NG."

  • [September 16, 2003] "Chicago Show Heralds New 'Internet of Things'. Electronic Product Code Network Launched at Conference." By Paul Roberts. In InfoWorld (September 15, 2003). "A Chicago symposium highlights technology that may fuel the next 50 years of economic growth: a global network of intelligent objects. The EPC (Electronic Product Code) Executive Symposium will run from Monday September 15, 2003 through Wednesday, September 17, and marks the official launch of the Electronic Product Code (EPC) Network, an open technology infrastructure developed by researchers worldwide. The network uses RFID (Radio Frequency ID) tags to enable machines to sense man-made objects anywhere in the world. The Symposium will introduce EPC technology to an audience of corporate executives, explaining how the EPC network works and how to implement EPC technology in corporate supply chain networks, according to the Auto-ID Center. The gathering has the backing of major technology companies including IBM Corp., SAP AG and Sun Microsystems Inc... VeriSign will unveil three new services that will allow organizations to manage EPC data using the Internet: ONS Registry, EPC Service Registry and EPC Information Services. Together, the new services will create a registry, similar to the Internet DNS (Domain Name System), that links an EPC with an IP (Internet Protocol) address. Using the services, companies will be able to use the Internet to track their products in the time between when they leave the manufacturing plant and arrive at the loading dock of a retail outlet, Brendsel said. Unlike the much-publicized 'smart shelf' trials, in which RFID technology is used inside retail outlets to provide real-time merchandise stocking information, companies will be focusing on trials outside the four walls of the retail outlet, he said. Also at the show, Intel Corp. will announce a partnership with ThingMagic LLC of Cambridge, Massachusetts, to deliver a new generation of RFID tag readers. 
The new generation of readers will be built on ThingMagic's Mercury4 Platform and use Intel's IXP420 XScale network processors, improving the power of the readers so that they can process multiple RFID protocols simultaneously, the companies said. When it comes to practical applications for EPC technology, the focus at the Auto-ID EPC Symposium will be on the supply chain..." General references in "Radio Frequency Identification (RFID) Resources and Readings."
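
The DNS-like registry described above resolves an EPC, rather than a hostname, into a network address. A heavily simplified sketch of how such a lookup name might be formed; the field ordering, the dropping of the serial number, and the onsepc.com suffix are assumptions made for illustration, not details quoted from the ONS specification:

```java
public class OnsLookupSketch {
    /**
     * Turn a dotted EPC (header.manager.class.serial) into a DNS-style
     * query name that a resolver could look up like any other domain.
     * The transformation shown here is hypothetical and illustrative.
     */
    static String toQueryName(String epc) {
        String[] f = epc.split("\\.");            // e.g. {"01","0000A89","00016F","000169DC0"}
        StringBuilder sb = new StringBuilder();
        // Drop the item serial number (last field) and reverse the rest,
        // most-specific label first, as DNS names are written:
        for (int i = f.length - 2; i >= 0; i--) {
            sb.append(f[i]).append('.');
        }
        return sb.append("onsepc.com").toString();
    }

    public static void main(String[] args) {
        System.out.println(toQueryName("01.0000A89.00016F.000169DC0"));
        // prints "00016F.0000A89.01.onsepc.com"
    }
}
```

The answer to such a query would be the address of a PML server holding the product's data, just as a DNS answer points at a host.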

  • [September 16, 2003] "IBM, Others Unveil RFID Offerings. Big Blue Will Offer Consulting and Implementation Services for RFID." By Stephen Lawson. In InfoWorld (September 15, 2003). "IBM Corp. and a truckload of other vendors joined the RFID (Radio Frequency ID) parade Monday at a meeting in Chicago that is shaping up as a coming-out party for the object-identification technology. The EPC (Electronic Product Code) Executive Symposium, running Monday through Wednesday, marks the official launch of the EPC Network, an infrastructure that uses RFID tags to let machines anywhere in the world sense a tagged object. RFID tags are like bar codes except that devices can read the information they contain using radio frequencies. Most participants in the Symposium are highlighting the use of RFID to track products through a corporate supply chain. IBM will offer consulting and implementation services and specialized software to companies that want to start using RFID, the company announced Monday at the show. It will help companies evaluate and adopt the new technology in phases and integrate IBM software into their existing back-end inventory database systems, IBM said in a statement. The software is based on WebSphere Business Integration middleware and can work with WebSphere Application Server, DB2 Information Integrator, Tivoli Access Manager and WebSphere Portal Server... Intermec Technologies Corp. announced the EasyCoder Intellitag PM4i printer, which can encode a product's identifying information into an RFID chip embedded in a label. It can do this while also printing a visible barcode and text onto the label, said Warren Payne, a representative of Intermec, in Everett, Washington. The printer is the first that can encode data to so-called 'frequency-agile' RFID tags made by Intermec, which are visible to reader devices using different frequencies in different countries, according to Doug Hall, director of printer marketing at Intermec. 
A company in Europe could write data to one of these tags using a frequency that's appropriate there and then ship the product to the U.S., where the same tag could be recognized by a reader device that uses another frequency. The printer will be available early next year. Pricing has not yet been set, Hall said. Also at the conference, Intermec demonstrated a system it developed with Georgia-Pacific Corp.'s packaging division in which the packaging producer can manufacture boxes with embedded RFID tags. When a company packs a product in the box, it can encode information in that embedded tag, Payne said. Typically, a company would put a barcode label on the box at the same time so the product could be identified in parts of the supply chain that don't yet use RFID..." See: (1) "IBM Announces Comprehensive New RFID Service. Helping Retailers and Consumer Packaged Goods Companies Boost Accuracy in Picking, Packing, Shipping. Cutting Theft in the Supply Chain."; (2) "Radio Frequency Identification (RFID) Resources and Readings."

  • [September 16, 2003] "Using XML Schemas Effectively in WSDL Design. Achieve a Higher Degree of Portability With These Best Practices." By Chris Peltz and Mark Secrist (HP Developer Resources). In XML Journal Volume 4, Issue 9 (September 2003). With source code. "Developers are beginning to develop more sophisticated Web services, exchanging complex XML documents rather than simple parameter types. As this shift takes place, development teams begin to grapple with different approaches to designing these Web services interfaces through the use of WSDL. In this article, we will focus on four specific areas of best practices that can be applied, particularly in the use of XML Schemas in a Web services design: XML Schema style, namespaces, XML and WSDL import for modularity, and use of schema types for platform interoperability. Through the use of these techniques, you will be able to achieve a higher degree of portability of your WSDL and XML Schemas and will realize improved reusability and interoperability between a broader collection of Web services platforms... Using a more modular schema design can maximize the potential for reuse in your organization. The proper refactoring and naming techniques can also simplify the generation of implementation classes for your platform. A modular design approach will also require an effective use of namespaces in your XML Schemas. Namespaces provide a mechanism to scope different elements or type definitions in your design. They can simplify how you reference or import types that might exist in external schema files. They can also be used to enforce versioning of your Web services. The techniques that were discussed to modularize XML Schemas can also apply to the design of the WSDL interfaces. If used properly, the import mechanism can provide a great amount of reusability of both the XML Schema types and the WSDL message types. This design can be further enhanced through the use of development and design tools. 
It's important to remember that each Web services platform might manage XML differently. Use of certain XML data types or schema structures may not be supported on certain platforms. In the design, you should pay close attention to these interoperability issues, adding testing where appropriate..." [alt URL]
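
The modular-schema advice can be illustrated with a small xsd:import example; the namespaces, file name, and type names below are hypothetical:

```xml
<!-- Hypothetical example: a WSDL-facing schema reuses a shared
     "common types" schema via xsd:import instead of redefining types. -->
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:common="urn:example:common:v1"
            targetNamespace="urn:example:orders:v1"
            elementFormDefault="qualified">

  <!-- Pull in shared types (Address, Money, ...) from their own namespace;
       versioning the namespace URI (v1) supports service evolution. -->
  <xsd:import namespace="urn:example:common:v1"
              schemaLocation="common-types-v1.xsd"/>

  <xsd:element name="PurchaseOrder">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="shipTo" type="common:Address"/>
        <xsd:element name="total"  type="common:Money"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>
```

A WSDL document can then import this schema in its types section, so the same PurchaseOrder definition serves several service interfaces.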

  • [September 15, 2003] "Generation of XML Records across Multiple Metadata Standards." By Kimberly S. Lightle and Judith S. Ridgway (Eisenhower National Clearinghouse, Ohio State University, USA). In D-Lib Magazine (September 19, 2003). "This paper describes the process that Eisenhower National Clearinghouse (ENC) staff went through to develop crosswalks between metadata based on three different standards and the generation of the corresponding XML records. ENC needed to generate different flavors of XML records so that metadata would be displayed correctly in catalog records generated through different digital library interfaces. The crosswalk between USMARC, IEEE LOM, and DC-ED is included, as well as examples of the XML records... Because the native metadata for the ENC collections follow different metadata standards (USMARC and IEEE 1484.12.1-2002 Learning Object Metadata (LOM) Standard) and the metadata to be harvested via the NSDL OAI repository follows the Dublin Core metadata standard, ENC needed to develop crosswalks between these three standard metadata schemas. ENC also needed to generate different flavors of XML records so that metadata would be displayed correctly in catalog records generated through different digital library interfaces. XML is an open, text-based markup language that provides structural and semantic information to data based on a specific schema such as USMARC. These XML records are searched by the Autonomy search engine with the metadata displayed in two different formats: the format used for the ENC DL libraries (Learning Matrix, ICON, and GSDL) and that used for ENC Online. The XML records are also exported in a Dublin Core format, so they are available to the NSDL OAI harvester. 
XML records generated by the Learning Matrix, ICON, and GSDL are based on the IMS Learning Resource Metadata Specification and are the most straightforward to produce -- there is a one-to-one correspondence between the metadata that are entered in the cataloging tool and that which are displayed as part of the catalog record. ENC also has to generate a USMARC XML record from the digital library metadata to be searched via ENC Online. This requires the IEEE LOM metadata to be crosswalked to the USMARC metadata standard. A third flavor of XML record is generated from both USMARC and the IEEE LOM metadata. These XML records have been crosswalked to DC-ED so that they are harvestable by the NSDL and searchable through the NSDL.org interface. A fourth type of XML record is generated so that IEEE LOM metadata can be displayed in a USMARC format via the ENC Online interface. In the future, an XML record will be generated in the IEEE LOM format based on the USMARC metadata used to describe ENC resources... ENC is not unique in its need to produce different flavors of XML records to conform to multiple schemas. Just as ENC chose the IEEE LOM schema, digital libraries should choose a schema that best embodies the nature of their resources and their cataloging goals. Crosswalks that extend interoperability are essential so that the digital library collections can be accessible through a variety of portals and search interfaces. As more organizations share what they have learned as they strive for maximum interoperability of their records that richly describe digital resources, the development of crosswalks will be better understood and more easily accomplished..." See: "IMS Metadata Specification."
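
At its core, a crosswalk of this kind is a field-mapping table applied while regenerating a record. A toy sketch, with simplified stand-in field names rather than ENC's actual USMARC/IEEE LOM/DC-ED crosswalk:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CrosswalkSketch {
    // A toy crosswalk from IEEE-LOM-style paths to Dublin Core elements.
    // These mappings are simplified stand-ins, not the published crosswalk.
    static final Map<String, String> LOM_TO_DC = new LinkedHashMap<>();
    static {
        LOM_TO_DC.put("general/title", "dc:title");
        LOM_TO_DC.put("general/description", "dc:description");
        LOM_TO_DC.put("lifecycle/contribute/entity", "dc:creator");
        LOM_TO_DC.put("technical/format", "dc:format");
    }

    /** Re-key a flat metadata record via the crosswalk, dropping unmapped fields. */
    static Map<String, String> crosswalk(Map<String, String> lomRecord) {
        Map<String, String> dc = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : lomRecord.entrySet()) {
            String dcField = LOM_TO_DC.get(e.getKey());
            if (dcField != null) dc.put(dcField, e.getValue());
        }
        return dc;
    }

    public static void main(String[] args) {
        Map<String, String> lom = new LinkedHashMap<>();
        lom.put("general/title", "Fractions Workshop");
        lom.put("technical/format", "text/html");
        System.out.println(crosswalk(lom));
        // prints "{dc:title=Fractions Workshop, dc:format=text/html}"
    }
}
```

Real crosswalks are rarely one-to-one, which is why ENC reports that fields can be merged, split, or lost in translation between schemas.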

  • [September 15, 2003] "Problems Arise During UML 2.0 Finalization. Lack of Clarity, Inability to Implement Specification Cited as Obstacles to Early Adoption." By David Rubinstein. In Software Development Times (September 15, 2003). "The co-chairman of the task force working on the finalization of UML 2.0 has acknowledged that two important problems have emerged during this phase of review, but said they are being fixed and the specification is expected to be released as an Object Management Group Inc. available technology in April 2004. Bran Selic, who is IBM Corp.'s liaison to OMG from the Rational software group, said vendors and academicians trying to implement the UML 2.0 specification, which was approved by the OMG Architecture Board, are raising issues... One of the problems, Selic said, is that new mechanisms used to define the abstract semantics of the language are not scaling up as needed to get the most out of using UML within a Model Driven Architecture. 'We might have to modify the package/merge mechanism. People want to make sure the models will fit on a disk.' The second problem involves removing some flexibility that was built into the compliance scheme to allow software designers to mix and match various parts of UML... The finalization task force posted OMG's final adopted specification on August 8, 2003 and adopters have until mid-September [2003] to call problems to the task force's attention. The draft of the final standard, also called the available technology, is set for the end of April 2004. The final available technology has three new capabilities that Selic said users were clamoring for -- the ability to model architectural structures, interactions and activities... The modeling of interactions now will let software designers combine simple interactions into larger sequences, and reuse them across different systems. For example, he said, to define an automated teller machine process, you first must define a sequence to enter a password. 
That password sequence could be reused in other processes that require it. The ability to model the flow of activities uses the BPEL4WS specification developed by IBM and Microsoft..." See: (1) "Unified Modeling Language Version 2.0" (overview from IBM); (2) "OMG Model Driven Architecture (MDA)."

  • [September 15, 2003] "IBM Package Gets E-Commerce Right." By Jim Rapoza. In eWEEK (September 15, 2003). "IBM's powerful e-commerce application gives enterprises all the capabilities they will need in a single platform and does so without sacrificing quality. However, companies interested in WebSphere Commerce will want to make sure that their needs are high-end enough to justify the high-end cost of the product... During tests, eWEEK Labs was impressed with the breadth of e-business capabilities in WebSphere Commerce 5.5 and the quality of all its features. Unlike many products that try to do everything and end up doing nothing well, WebSphere Commerce is the rare application that does many things and does many of them well. WebSphere Commerce 5.5, which shipped in June, is an excellent platform for running the most complex B2B and B2C e-commerce operations, and it can run both simultaneously, which allows for excellent integration between both sides of a company's business... On the B2B side, WebSphere Commerce now makes it possible to define and maintain a wide variety of value chains for different business models. This makes it possible to create private marketplaces, hosted services and complex multivendor purchase systems. We also liked the improved contracts and RFQ (request for quote) capabilities in WebSphere Commerce. These make it possible for business buyers to define unique product requirements that sellers can attempt to meet through custom design or through existing products... Like many other enterprise applications, WebSphere Commerce, which is based on Java 2 Platform, Enterprise Edition and XML, includes support for delivering and consuming Web services..."

  • [September 12, 2003] "Saving the Browser." By Ray Ozzie. In Ray Ozzie's Weblog. September 12, 2003 [or later]. An account of Lotus Notes development. "Some months back I became aware of the patent US 5,838,906 and the Eolas lawsuit against Microsoft, and followed a bit of conversation on the Net related to it. As many, I believed the issue would quickly go away because of ample prior art. Regrettably, this seems not to be the case. It now seems that perhaps the browser itself and the browsing experience may have to be nontrivially modified as a result of the judgment. Although a bit late, if some of us perhaps dust off our old code, is there a chance that we could still save the browser through demonstration of clear prior art? For my own interest, and for the record, I recently spent a little time pursuing my intuition that Lotus Notes R3 might be viable prior art relative to the patent in question. I am not an attorney, and I am surely not well versed in the nuances of the case, but it seems to me after initial investigation that there is indeed quite a bit of relevance. I pursued this with the assistance of my brother, Jack Ozzie, and with another of my employees at Groove, Rob Slapikoff. Both Jack and Rob worked for me at Iris Associates in the development of Lotus Notes. Although I am personally responsible for a good deal of the 'browser' code in question, I asked Jack to help because he specifically did all of the work related to our 'object linking and embedding' technologies -- first a Lotus technology referred to internally as DIP (Document Interchange Protocol), and later in loose collaboration with the Windows and Excel teams on what was referred to as either CDP (Compound Document Protocol) or OLE (Object Linking and Embedding). I asked Rob to help because he was, in essence, a Lotus Notes 'solution developer' at the time, and was very familiar with how one would quickly weave together a solution involving multiple applications. 
When I began this investigation, I thought that it might be challenging to recreate a scenario, given the feature set available in Notes R3, that was close to what was described in the patent. In fact, however, the hard part was only in putting together a computing environment that ran Notes R3. Once we had Notes running, it only took about 15 minutes to reproduce what I've shown below, and there was no programming involved. Meaning, everything done was done with just the out-of-the-box UI of both Notes and Excel..." See: (1) W3C Mail Archives for 'public-web-plugins@w3.org'; (2) "W3C Opens Public Discussion Forum on US Patent 5,838,906 and Eolas v. Microsoft."

  • [September 12, 2003] "Sun Updates J2EE for Web Services." By David Becker. In CNET News.com (September 12, 2003). "Sun Microsystems has released a preliminary version of an update to its Java 2 Enterprise Edition software, with support for a major new Web services standard. Sun announced late Thursday that a qualification release of the source code for version 1.4 of Java 2 Enterprise Edition (J2EE) is available to licensees. The release is intended to give developers an early look at additions to the J2EE code, so they can start building applications around the new features. J2EE has become one of the most significant variations on Sun's Java programming language, serving as the basis for a myriad of Web applications. The most significant addition to version 1.4 is support for Basic Profile, the comprehensive Web services standard released last month by the Web Services Interoperability Organization (WS-I). The WS-I is a consortium whose 150 members include representatives from major software makers and corporate customers. The WS-I profile is designed to allow disparate computing systems to exchange data, thus encouraging adoption of Web services. It includes specifications for some of the current building blocks of Web services, such as Simple Object Access Protocol (SOAP) 1.1, Web Services Description Language (WSDL) 1.1, Universal Description Discovery and Integration (UDDI) 2.0 and Extensible Markup Language (XML) formats. Ralph Galantine, Sun's group marketing manager for Java Web services technology, said the Santa Clara, Calif., company moved quickly on WS-I support. It did this based on feedback from J2EE licensees participating in the Java Community Process, which it uses to guide development of the various Java flavors..." See details in the news story: "Sun Announces J2EE V1.4 Support for WS-I Compliant Web Services Applications."

  • [September 12, 2003] "Web Services Portal Standard Approved." By Grant Gross. In Network World (September 12, 2003). "The OASIS standards consortium has approved a standard that members say will make it easier and cheaper to publish data on Web portals... WSRP eliminates the need for content aggregators to choose between hosting a content source at the location of the portal server and writing different code for each remote content source, according to OASIS. Instead, WSRP would allow developers to write portal applications, called portlets, in the environment they like, without having to write new code for every proprietary portal. 'It takes enormous cost out of the equation,' said Rich Thompson, chairman of the OASIS WSRP Technical Committee. 'You also have the ability to get (content) out to a much larger audience very quickly.' WSRP allows remote portlet Web services to be created in several ways, such as using Java/J2EE or Microsoft's .Net platform. Web portals can include consumer-oriented Web sites, such as Yahoo.com, as well as corporate internal information sites. Support for WSRP is already available in a corporate portal product offered by Plumtree Software of San Francisco, Thompson noted, and other vendors are looking at offering WSRP-compatible portal software. There is also some interest from the Apache open source community, he added. Twenty-five OASIS member companies, including IBM, Microsoft, Novell and Vignette, helped work on the WSRP standard. Several OASIS members praised the release of the WSRP standard. 'By providing a 'plug-n-play' standard that enables developers to capture portal content from compliant sources and make that content available to users in readily accessible portlets, WSRP unleashes the full potential power of Web services,' Dmitri Tcherevik, vice president and director of Web services at Computer Associates, said..."
See details in the news story: "Web Services for Remote Portlets Specification Approved as OASIS Standard."

  • [September 11, 2003] "IE Patent Endgame Detailed." By Paul Festa. In CNET News.com (September 09, 2003). "Microsoft has suffered another legal setback in the patent dispute with software developer Eolas and is now advising Web authors on workarounds, as new details emerge of its plans to tweak Internet Explorer. A federal judge last week rejected Microsoft's post-trial claim that Eolas had misrepresented the facts in the patent case, which claimed the software giant had stolen browser technology relating to plug-ins... last week's loss on claims of 'inequitable conduct' heightened the sense that not only Microsoft but the entire Web may soon be forced to make substantial adjustments -- and that pages around the Web and on private intranets will have to be rewritten to work with an altered IE. 'If you're currently using a plug-in, you will have to change your pages quite significantly,' said one person familiar with Microsoft's post-verdict plans. 'There might be tools to help you do so, but currently they don't exist.' Regardless of whether the court orders Microsoft to change IE, the software giant has been conferring with its own engineers and those of companies that rely on the browser's ability to automatically launch and display multimedia programs with plug-ins -- an ability the court held to be, in its current form, an infringement of the Eolas patent. Now Microsoft, while expressing optimism that it will ultimately prevail over Eolas in the courts, is advising Web authors to take precautions and prepare for a post-Eolas world... While declining to comment on the specifics of the meeting or its plans for IE, Microsoft did warn that the Eolas patent threatened more than just Internet Explorer. 'This is not an issue just for IE,' said Wallent. 'This is a potential issue for Netscape Navigator, for Opera and for other browser vendors. This is an industry issue.' 
One attendee of the meeting who asked not to be named said that while Microsoft's workarounds were technically promising, their legal soundness was uncertain. Worse, this attendee said, the implementation of the workarounds would require a huge amount of work on the part of Web authors..." See also: "Patents and Open Standards."

  • [September 10, 2003] "Ten Favorite XForms Engines." By Micah Dubinko. From XML.com (September 10, 2003). "Although XForms is largely described as an update to the decade-old classic HTML forms technology, XForms is also finding a home in many fresh areas where standards are increasingly vital, like content management and workflow systems. As a result, there are a large number of XForms engines currently under development by companies large and small. According to reports, at the time of its publication as a Proposed Recommendation, W3C XForms was the most widely implemented W3C specification ever. This presents a challenge to those thinking about trying out XForms. This article offers a good starting point for XForms research. For each XForms engine, this article describes the software, system requirements, and other useful information as well as a screenshot. Keep in mind, too, there are even more XForms engines (in various stages of development) than are presented here... Microsoft InfoPath, part of the Office 2003 System, offers similar functionality to many of the applications listed here. Microsoft's application sports a fantastic user interface for end users, despite an insistence on providing layout through nested tables. The internal format InfoPath uses, however, is an XSLT-generated modified version of XHTML, not XForms. A future article will provide a more in-depth comparison between InfoPath and XForms engines..." General references in "XML and Forms."

  • [September 10, 2003] "DRM: Some Restrictions May Apply." By Jim Rapoza. In eWEEK (September 08, 2003). "DRM is a technology that has yet to prove its worth, either in the area of intellectual property protection -- its traditional domain -- or in the corporate arena, where vendors are hoping to make inroads... Microsoft's forthcoming Rights Management Services for Windows Server 2003, expected this fall, could make it possible for companies to apply strong DRM restrictions to standard content created in Microsoft Office. Because the system will work in conjunction with Office and the server, it will be possible to apply protections to the content within Office, then manage the permissions through access to the server. One potentially very large drawback to this solution is that it will work only with the Microsoft 2003 series, meaning companies not using Office 2003, Outlook 2003 or Windows Server 2003 will not be able to take advantage of these protections. If the content can be viewed in a browser, however, it can be accessed using a plug-in for Internet Explorer. Solutions from document creation vendors are another possible source of DRM protection for companies. For example, Adobe Systems Inc. provides some fairly detailed rights management capabilities within Acrobat. Using these features, companies can embed rights within the documents, ensuring that protections accompany the documents wherever they go. Embedded DRM has the advantage of being portable and letting approved users access documents anywhere and at any time. Still, companies must understand that no matter what their choice, DRM is not a cure-all. In eWEEK Labs' experience, any DRM restrictions can be easily defeated through the use of remote control, digital cameras, or even pen and paper. Keep this in mind when deploying DRM because if the restrictions become too annoying, even a normally honest user might tap one of these methods to get around the hassles of DRM. 
And once this happens, your whole investment in DRM goes out the window..." General references in "XML and Digital Rights Management (DRM)."

  • [September 09, 2003] "Semantic Web Enabled Web Services." By Dieter Fensel (University of Innsbruck) and Christoph Bussler (Oracle Corporation). From the Resources Collection of the Semantic Web Services Initiative (SWSI). April 2003. 36 pages (slides). "Web Services will transform the web from a collection of information into a distributed device of computation. In order to reach its full potential, appropriate description means for web services need to be developed. For this purpose we developed a full-fledged Web Service Modeling Framework (WSMF) that provides the appropriate conceptual model for developing and describing web services and their composition. The philosophy of WSMF is based on the principle of maximal de-coupling complemented by a scalable mediation service. This is a prerequisite for applying semantic web technology for web service discovery, configuration, comparison, and combination. This presentation provides a vision of web service technology, discussing the requirements for making this technology workable, and sketching the Web Service Modeling Framework..." See also the earlier (2002) paper "The Web Service Modeling Framework WSMF" by the same authors. The Semantic Web Services Initiative (SWSI) is "an ad hoc initiative of academic and industrial researchers, many of whom are involved in DARPA and EU funded research projects. The SWSI mission is threefold: (1) to create infrastructure that combines Semantic Web and Web Services technologies to enable maximal automation and dynamism in all aspects of Web service provision and use, including (but not limited to) discovery, selection, composition, negotiation, invocation, monitoring and recovery; (2) to coordinate ongoing research initiatives in the Semantic Web Services area; (3) to promote the results of SWSI work to academia and industry..." General references in "Markup Languages and Semantics."

  • [September 09, 2003] "From UML to BPEL: Model Driven Architecture in a Web Services World." By Keith Mantell (IT Architect, IBM). From IBM developerWorks, Web Services. September 9, 2003. "The Business Process Execution Language for Web Services (BPEL) is an XML-based standard for defining how you can combine Web services to implement business processes. It builds upon the Web Services Description Language (WSDL) and XML Schema Definition (XSD). This article describes a new tool, part of the Emerging Technologies Toolkit version 1.1 (ETTK) released on alphaWorks, which takes processes defined in the Unified Modeling Language (UML) and generates the corresponding BPEL and WSDL files to implement that process. This capability is used to highlight some of the benefits of the OMG's Model Driven Architecture (MDA) initiative. Raising the level of abstraction at which development occurs will in turn deliver greater productivity, better quality, and insulation from underlying changes in technology... BPEL provides an XML notation and semantics for specifying business process behavior based on Web Services. A BPEL4WS process is defined in terms of its interactions with partners. A partner may provide services to the process, require services from the process, or participate in a two-way interaction with the process. Thus BPEL orchestrates Web Services by specifying the order in which it is meaningful to call a collection of services, and assigns responsibilities for each of the services to partners. You can use it to specify both the public interfaces for the partners and the description of the executable process... The UML to BPEL mapping tool is able to take models of processes developed in a UML tool, such as IBM Rational's XDE or Rose, and convert them to the correct BPEL and WSDL files necessary to implement that process. 
The Emerging Technologies Toolkit version 1.1 (ETTK) is an environment for trying out interesting new technologies, and now comes in two flavors: autonomic and webservices. This article focuses on the latter... The UML profile allows developers to use normal UML skills and tools to develop Web services processes using BPEL4WS. This approach enables service-oriented BPEL4WS components to be incorporated into an overall system design utilizing existing software engineering practices. Additionally, the mapping from UML to BPEL4WS permits a model-driven development approach in which BPEL4WS executable processes can be automatically generated from UML models. This approach highlights how the notion of MDA can be applied to other areas and at higher levels of abstraction, and insulate the developer from changes in the technology..." Note: the IBM Emerging Technologies Toolkit Version 1.1.1 "contains a WS-Policy demo, Self-Healing/Optimizing Autonomic Computing demo, Autonomic Computing Toolset, Common Base Event Data Format, Web Services Integration, Web Services Failure Recovery, IBM Grid Toolbox infrastructure along with a Grid Software Manager, WS-Reliable Messaging demo, and a JMX Bridge..." See also the live demo.
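The partner-interaction model described above takes concrete form as ordered BPEL activities such as receive, invoke, and reply. The sketch below hand-writes (and then parses) the kind of skeleton a UML-to-BPEL tool would generate; the process, partner, and operation names are invented for illustration, and the namespace URI is the one used by BPEL4WS 1.1.

```python
import xml.etree.ElementTree as ET

BPEL_NS = "http://schemas.xmlsoap.org/ws/2003/03/business-process/"

# Hand-written skeleton of a BPEL4WS process; all names are invented.
BPEL_PROCESS = f"""\
<process name="LoanApproval" xmlns="{BPEL_NS}">
  <sequence>
    <receive partner="customer" portType="lns:loanPT"
             operation="request" variable="loanRequest"
             createInstance="yes"/>
    <invoke partner="assessor" portType="lns:assessPT"
            operation="check" inputVariable="loanRequest"
            outputVariable="assessment"/>
    <reply partner="customer" portType="lns:loanPT"
           operation="request" variable="assessment"/>
  </sequence>
</process>
"""

root = ET.fromstring(BPEL_PROCESS)
sequence = root.find(f"{{{BPEL_NS}}}sequence")
# The ordered activity names are the orchestration itself.
activities = [child.tag.split("}")[1] for child in sequence]
```

The sequence reads as the article describes: accept a partner's request, call another service, and reply to the original partner, with responsibility for each step assigned to a named partner.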

  • [September 09, 2003] "XKMS Does the Heavy Work of PKI." By Rich Salz (DataPower Technology). In Network World (September 08, 2003). "Public-key infrastructure is well suited for securing Web services, but PKI deployment is too cumbersome and costly for the technology to achieve widespread use. An upcoming standard from the World Wide Web Consortium aims to reduce the costs of PKI without sacrificing its benefits. XML Key Management Specification (XKMS) borrows the best of PKI without reducing scalability or security. XKMS creates a trust service that shields clients from complexity by providing an XML interface to PKI. The proposed standard is in the last-call phase with the W3C and several vendors are starting to develop XKMS toolkits and applications. PKI scales well because it does not require an online service such as the Kerberos Key Distribution Center (KDC). Because Kerberos uses shared-secret cryptography, the KDC is a likely target for hacker attacks. And because it contains so much sensitive information, it is usually not widely replicated, making it a potential single point of failure. PKI avoids both of these issues by using a set of public and private keys: Private keys are held only by an individual party; public keys can be distributed widely. With a PKI-secured message, an online service such as the KDC is not needed for any two parties to communicate securely. In addition, the ability to have a hierarchical key structure, and real-time analysis of the path through the hierarchy, makes it possible for parties to securely communicate without prior business arrangement. With XKMS, a client and application server share an XKMS service to validate each other and to process requests between them. XKMS replaces many PKI protocols and data formats, such as Certificate Revocation Lists, Online Certificate Status Protocol, Lightweight Directory Access Protocol, Certificate Management Protocol and Simple Certificate Enrollment Protocol, with one XML-based protocol. 
XKMS also can be implemented client-to-client, server-to-client, or server-to-server... Traditionally, with PKI all trust decisions are offloaded to the crypto consumer. This requires complicated programming libraries and configuration information. For an example of this, look at the "trusted issuers" list in the security parameters section of your Web browser. With XKMS, trust decisions are given to a common server so they can be centralized and applied consistently across platforms. The only configuration information an XKMS client needs is the URL of the server, and the certificate the server will be using to sign its replies. Different trust models can be supported by using different URLs... Many XML Web services standards, including Security Assertions Markup Language and WS-Security, use digital signatures to protect the content of authentication and message data. Although it has not yet received the publicity that those specifications have received, XKMS might be the specification that makes Web services implementation feasible..." See: (1) "XML Key Management Specification (XKMS)"; (2) Security specifications.
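To make that "one XML-based protocol" concrete: an XKMS client asks the trust service to resolve or validate a key by sending a small request document. The following is a minimal sketch of a Locate-style request; the key name, the namespace URIs (taken from the 2002-2003 working drafts), and the idea of POSTing to a configured service URL are illustrative assumptions, not a normative message.

```python
import xml.etree.ElementTree as ET

XKMS_NS = "http://www.w3.org/2002/03/xkms#"
DSIG_NS = "http://www.w3.org/2000/09/xmldsig#"

def build_locate_request(key_name: str) -> str:
    """Sketch of an XKMS LocateRequest: ask the trust service for the
    key binding associated with key_name (a hypothetical identifier)."""
    req = ET.Element(f"{{{XKMS_NS}}}LocateRequest")
    query = ET.SubElement(req, f"{{{XKMS_NS}}}QueryKeyBinding")
    key_info = ET.SubElement(query, f"{{{DSIG_NS}}}KeyInfo")
    ET.SubElement(key_info, f"{{{DSIG_NS}}}KeyName").text = key_name
    # A real client would POST this to its configured XKMS service URL
    # and verify the service's signature on the response.
    return ET.tostring(req, encoding="unicode")

locate = build_locate_request("mailto:alice@example.com")
```

This is exactly the simplification the article claims: the client needs only the service URL and the service's signing certificate, and all revocation and path-validation machinery stays on the server side.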

  • [September 09, 2003] "WhereNet Adds BI to RFID Asset Management. Rules-Based Engine Uses Wireless Tags to Help Companies Keep Track of Resources." By Ephraim Schwartz. In InfoWorld (September 08, 2003). "WhereNet, a Santa Clara, Calif., company that helps companies wirelessly track the location of everything from shopping carts to shipping containers, will announce this week that it is adding a business intelligence, rules-based engine to its location-based software. The first iteration of the application, WhereSoft Yard Version 4.0, is targeted at deconsolidators -- companies that take imported cargo typically brought in by container ships, break it down, and send out the contents across the country to domestic warehouses and regional distribution centers. WhereSoft Yard is transitioning location-based data from tracking to resource management by using a rules engine along with the real-time location system... The Yard management application goes beyond knowing what containers have come into the yard to determining who the carrier is, what terminal the container came from, and how to keep like equipment from the same shipper next to each other. 'When a drayman brings in the next load, we want him to drop a load and pick up an empty [container]. If they are next to each other, he is in and out quickly.' The software will allow NYK to increase dock door usage, reduce yard congestion, and increase the number of daily turns in the yard. While WhereNet has long been working with RFID (Radio Frequency Identification) technology for improved resource management, Version 4.0 is the first of its kind, according to one analyst. 'It is new. It could be a big deal. Companies have location data in their database, the items associated with a container -- now WhereNet is providing an event management application on top of that,' said Bret Kinsella, global lead for Sapient Supply Chain group in Cambridge, Mass. RFID tagging technology is finding a wide array of uses. 
In April 2002, WhereNet was instrumental in a pilot program for a supermarket chain that put RFID tags on all of its shopping carts and handbaskets in a test store. The purpose was to track customer movement in order to understand traffic patterns and design stores more efficiently. The WhereNet solution is part of a bigger supply chain story whose goal it is to have all supply chain participants make decisions off the same set of data..." See: (1) the press release, "Robust Rules Engine Integrated with Location System Automates Processes and Increases Throughput at High-Volume Yard Near Ports of Los Angeles/Long Beach."; (2) "Physical Markup Language (PML) for Radio Frequency Identification (RFID)."

  • [September 09, 2003] "Cisco Extends IP Phone Lineup. New Features Include Color Display and Touchscreen." By Stephen Lawson. In InfoWorld (September 09, 2003). "Desktop IP (Internet Protocol) phones inched closer to computing platforms on Tuesday with Cisco Systems Inc.'s announcement of a phone with a color touchscreen and the addition of XML (Extensible Markup Language) application support to two of its less expensive phones. IP phones can send and receive data as well as voice calls over the same kind of network that carries Web pages and application traffic. Cisco makes a wide range of IP phones, some of which already can be used as platforms for XML applications such as instant messaging, inventory checking, employee directories, flight schedules and headline news services...Cisco [has] unveiled the IP Phone 7970G, its first phone with a color display and a touchscreen. The new features make the phone easier to use and will let developers write applications that use images, said Troy Trenchard, Director of Product Marketing for Cisco's IP Communications group. In addition, the company upgraded its lower-end 7905G and 7912G phones with XML support. Those less expensive phones have small monochrome displays best suited to text-based software, he said. The introductions came on the eve of Cisco's Innovation Through Convergence Expo in Santa Clara, Calif., where Cisco partners will show off a variety of XML applications that can run on the phones... Two customers for the 7970G illustrate how its color image display can be used, according to Cisco's Trenchard. The Greater Toronto Airports Authority, in Toronto, Ontario, plans to set up the phones at security posts and send out images of at-large criminal suspects so guards can see who to look out for. The city of Herndon, Virginia, will use the phones at public safety agencies and distribute alerts about missing persons to them, he said. 
The 7970G can take the place of a PC for these kinds of applications, letting organizations bypass the management costs and security concerns about PCs, Trenchard said. The 7970G will begin shipping by the end of this year and be generally available in the first quarter of 2004 at a list price not to exceed US$995, according to Cisco. The 7905G and 7912G are available now for $135 and $165, respectively. The enhanced XML capabilities are set to become available at no cost in the upcoming release of Cisco CallManager software near the end of this year..." See also additional information appended to the Cisco announcement.

  • [September 09, 2003] "The Color of IP Telephony. Cisco Debuts the First Voice-Over-IP Phone With a Full-Color Touch Screen." By David M. Ewalt. In InformationWeek (September 09, 2003). "Cisco Systems introduced some color to the world of IP telephony on Tuesday [2003-09-09], releasing the first-ever voice-over-IP phone with a full-color touch screen... Because the 7970G has an improved display and can run programs written in XML, it can be used for a number of applications where the presence of a complete desktop PC might not be desired. Retail outlets can run quick price and inventory lookups. Hotels can put room-service and hospitality menus in each guest room. Or contact centers can augment the amount of data they give agents by providing them with a second screen of customer information... The 7970G will be available in the first quarter of next year, priced at $995... Cisco also unveiled changes to two of its entry-level phones, the 7905G and the 7912G. Both models will be updated with XML support, allowing them to support text-based applications. The phones are available now at $135 and $165, respectively, but the enhanced XML capabilities won't be available until Cisco releases the newest version of its CallManager software at the end of this year..." Note: "The Cisco IP Phones 7905G and 7912G support all XML tags for text and audio listed in Developing Cisco IP Phone Services... The Cisco IP Phones 7905G and 7912G support GET, POST plus the following HTTP headers: [1] INCOMING MSGs: Date, Expires, Refresh, Set-Cookie, Location, and Content-Length. [2] OUTGOING MSGs: Accept-Language, Connection, Cookie, Host, and Transfer-Encoding... Playing .raw audio files and unicast RTP streams is supported... XML tags for graphics are not supported on the Cisco IP Phones 7905G and 7912G. Given the smaller size of the display and no grey-scale support on these displays, graphics are not... 
The Cisco IP Phones 7905G and 7912G support the following XML Schema Instance (XSI) URLs only: (1) UserData:a:d; (2) Dial:<number>; (3) DialLine:<number>; (4) RTPTx://<IP> (Unicast RTP); (5) RTPRx://<IP> (Unicast RTP); (6) RTPTx://Stop (7) RTPRx://Stop..." [excerpted from "Q & A: XML Applications with Cisco IP Phones 7905G and 7912G"]. See details in the announcement: "Cisco Systems Unveils Color IP Telephone, A Powerful New Platform for Network-Based Productivity Applications. XML Developers at Cisco ITC Expo Demonstrate Advanced Business Applications on Cisco IP Phones."
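The "XML tags for text" the Q&A refers to describe small XML objects that a phone-services web server returns and the phone renders. As a sketch, the following builds a text-only CiscoIPPhoneText object of the kind the monochrome 7905G/7912G can display; the element names follow the object documented in "Developing Cisco IP Phone Services," while the field contents are invented in the spirit of the Herndon public-safety example above.

```python
import xml.etree.ElementTree as ET

def build_phone_text(title: str, prompt: str, body: str) -> str:
    """Build a CiscoIPPhoneText object: a text-only screen for IP
    phones with monochrome displays such as the 7905G and 7912G."""
    root = ET.Element("CiscoIPPhoneText")
    ET.SubElement(root, "Title").text = title    # shown at top of screen
    ET.SubElement(root, "Prompt").text = prompt  # shown at bottom
    ET.SubElement(root, "Text").text = body      # scrollable body text
    return ET.tostring(root, encoding="unicode")

# Invented alert content, for illustration only.
alert = build_phone_text("Missing Person Alert",
                         "Press Exit when done",
                         "Last seen 09/08 near the town center.")
```

The phone fetches such a document over HTTP (the GET/POST support listed above) from the service URL configured in CallManager, so any web server that emits this XML can drive the phone's display.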

  • [September 06, 2003] "Consultancies Aim to Ease Web Services Woes." By Jack McCarthy and Ed Scannell. In InfoWorld (September 06, 2003). ['IT services companies help deliver on the promise of a new computing paradigm. From InfoWorld's Special Report on "Outsourcing Web Services", Web Services on a Platter'] "A couple of years ago, in the hype over Web services, IT leaders were told that their prayers would at last be answered. A simple set of XML-based protocols would enable IT to create reusable application building blocks that could be recombined ad infinitum, slashing application-development and maintenance costs. And because Web services components were accessible over HTTP, they would herald a new era of zero-cost business integration. Of course, that hasn't happened... The promise remains compelling -- and most enterprise developers have at least given Web services a whirl. But planning, deploying and managing an enterprisewide Web services implementation can be dauntingly complex. So guess who's ready to jump in and lend a hand? IBM Global Services -- along with other monster consultancies, including Accenture, BearingPoint, Cap Gemini, Deloitte Consulting, EDS, and Hewlett-Packard's new Services division. All now offer Web services 'solutions' as part of their overall IT services portfolio. Meet the new boss; same as the old boss... 'When you mention SOAs to most IT shops, they are very hesitant,' Borges says. 'It's almost like looking at an ERP project 15 years ago. The notion of an SOA or ERP package is appealing and looks simple. But once you get into it, it is not. On the back end you have to do the preparatory work for a Web services or XML-based integration, such as building a connector. On the applications side, you have to do the proper data quality preparation, and that has always been a very expensive component.' 
In addition, Borges adds, IBM believes engineering an SOA rises above the level of IT and into the 'business stack,' where business processes must be re-examined or sometimes re-engineered to create an application infrastructure of reusable components. Rolling out a complete SOA implementation may be a long and complicated process, but it should prove cost effective over time, especially in businesses that still suffer from poorly integrated systems. The ultimate goal, after all, is self-service, so the business side can experiment with recombining applications to meet specific demands without overloading IT... Big consultancies that provide an overarching Web services solution can be divided into two groups: IBM and Hewlett-Packard on the one hand (both with their own huge hardware and software portfolios) and on the other, the large, independent organizations that pick and choose from third-party technology..."

  • [September 06, 2003] "Portal Vendors Rallying Around Standards." By Dennis Callaghan. In eWEEK (September 5, 2003). "Vignette Corp. became the latest portal software developer to announce new products built on the Web Services for Remote Portlets standard 1.0, which was ratified this week by the Organization for the Advancement of Structured Information Standards (OASIS). WSRP 1.0 can be used for interoperability between .Net and Java-based portal elements. Plumtree Software Inc. announced earlier this week new products that support WSRP 1.0, as well as Java Specification Request (JSR) 168. Vignette plans to deliver beta-level support for WSRP consumption in its Vignette Application Portal during the second half of 2003, company officials said. In the first half of 2004, Vignette expects to continue with support for WSRP consumption and add support for WSRP publishing in Vignette Application Portal and Vignette Application Builder. This support will allow the software to publish, consume and manage remote Web services as portlets within the Vignette portal administration framework, officials said. In Vignette Application Portal, this means organizations will be able to subscribe to compliant Web services, provision those services for any number of portals and deliver visual, user-facing portlets to end users. Vignette Application Builder support for WSRP will allow organizations to produce customized portal applications that take advantage of existing enterprise application data and can be consumed by any compliant portal server, Vignette officials said. Plumtree earlier this week released the WSRP Portlet Consumer, a software component that acts as the intermediary between the portal and the raw WSRP portlet, or WSRP producer. 
The Plumtree WSRP Portlet Consumer can run on the same platform as the producer, on the portal, or on a middle tier between the portal and the WSRP producer, so that customers can scale the portal deployment to many business units, each with their own portlets... Both Plumtree and Vignette are also supporting JSR 168, which is nearing completion by the Java Community Process. The proposed standard is designed to establish a common interface for portlets to enhance efficiency of application delivery through portals..." See also: (1) "Plumtree Ships Products to Fully Support WSRP and Proposed JSR 168 Portlet Standards. Plumtree's Standards Implementation Consumes Portlets from BEA, Citrix, IBM, and Oracle."; (2) OASIS Web Services for Remote Portlets TC website; (3) "JSR 168 Portlet API Specification 1.0 Released for Public Review"; (4) general references in "Web Services for Remote Portals (WSRP)."

  • [September 06, 2003] "Microsoft Moves Forward on DRM." By David Becker. In CNET News.com (September 04, 2003). "Microsoft moved forward on its digital rights management strategy this week, releasing the first of several Windows add-ons associated with the technology and revealing pricing on its server software for corporate rights management. A key part of the company's strategy involves limiting access to digital files ranging from office memos to software applications. Primary components of the plan include Windows Rights Management Services (WRMS), server software that will manage access to corporate documents, and new Information Rights Management tools included in Office 2003, the forthcoming update of the company's widespread productivity package. The software giant also has spoken of broader plans for building 'next-generation secure computing base' technology, formerly known as Palladium, into a range of products. One of the first publicly available components in the rights management strategy is the Windows Rights Management Client, a free Windows add-on Microsoft released for download earlier this week. The client will be necessary for viewing any documents or files that tie into Windows Rights Management Services, including secure documents created in Office 2003. Versions of the client are available for the XP, Me, 98SE and 2000 versions of Windows and Windows Server 2003... Microsoft also revealed its pricing strategy this week for Windows Rights Management Services, the server software that will work in conjunction with the Windows Server 2003 operating system to track privileges for secured files. The software itself will be free for Windows Server 2003 users to install, but customers will have to pay for a client access license for every user who needs to access files protected by WRMS. Individual licenses will cost $37 per user, or $185 for a pack of five licenses. 
An 'external connector license' that allows blanket access for people outside a corporate network to access secured documents will cost $18,066..." See: (1) "New Office Locks Down Documents"; "Microsoft Announces Windows Rights Management Services (RMS)"; (3) general references in "XML and Digital Rights Management (DRM)."

  • [September 06, 2003] "Introducing the Portlet Specification, Part 2. The Portlet API's Reference Implementation Reveals Its Secrets." By Stefan Hepper and Stephan Hesmer. In JavaWorld (September 05, 2003). ['In this second and final article in Stefan Hepper and Stephan Hesmer's portlet series, the authors move beyond the Portlet API basics outlined in Part 1 to detail the API's reference implementation (RI), known as Pluto. They also offer a series of example portlets to illustrate how you can extend the API's standard functions.'] "Enterprise portal vendors use pluggable user-interface components, known as portlets, to provide a presentation layer to information systems. Unfortunately, in the past each vendor defined its own portlet API, producing incompatibilities across the industry. To standardize the process, the Java community launched Java Specification Request (JSR) 168: the Portlet Specification. Part 1 of this two-part series examined JSR 168 in detail. This final article focuses on the Portlet API's reference implementation (RI), also known as Pluto. The RI provides a working example portlet from which you can launch your own development efforts. We describe the RI's architecture, including the portlet container's plug-in concept and how to reuse the container in other projects; we explain how to install and use the RI, as well as how to quickly deploy portlets. The article concludes with a series of progressively more complex portlet examples... Pluto normally serves to show how the Portlet API works and offers developers a working example platform from which they can test their portlets. However, it's cumbersome to execute and test the portlet container without a driver, in this case, the portal. Pluto's simple portal component is built only on the portlet container's and the JSR 168's requirements. 
In contrast, the more sophisticated, open source Apache Jetspeed project concentrates on the portal itself rather than the portlet container, and considers requirements from other groups... The portal Web application processes the client request, retrieves the portlets on the user's current page, and then calls the portlet container to retrieve each portlet's content. The portal accesses the portlet container with the Portlet Container Invoker API, representing the portlet container's main interface supporting request-based methods to call portlets from a portal's viewpoint. The container's user must implement the portlet container's Container Provider SPI (Service Provider Interface) callback interface to get portal-related information. Finally, the portlet container calls all portlets via the Portlet API. [...] As we see, the Portlet Specification's RI features two main components: a portal and a portlet container. The portal acts as a simple test driver to run the portlet container. The portlet container acts as a generic component quickly adaptable to run in other portals, such as Jetspeed. The example portlets outlined in this article employ many important Portlet API concepts. Going forward, you can extend the examples further by using all of the Portlet API and Servlet API features. For example, you could enhance the Bookmark portlet with a servlet that outputs a complete markup inside a new window, such as a print preview, and communicates with the portlet via the HttpSession. Indeed, because portlets represent such powerful technologies, the possibilities are boundless..." See also: (1) Part 1; (2) "JSR 168 Portlet API Specification 1.0 Released for Public Review."
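The call chain described above (the portal handles the request, retrieves the portlets on the user's current page, and asks the container to render each one) can be sketched conceptually in Python. This is an analogue only: the real JSR 168 interfaces are Java APIs, and every class and method name below is invented for illustration.

```python
# Toy model of the portal -> portlet container -> portlet call chain.
# Names (Portlet, PortletContainer, Portal) are illustrative stand-ins,
# not the actual JSR 168 / Pluto Java interfaces.

class Portlet:
    """Analogue of a portlet: produces a markup fragment for its window."""
    def __init__(self, title, body):
        self.title = title
        self.body = body

    def render(self, request):
        return f"<div class='portlet'><h2>{self.title}</h2>{self.body}</div>"


class PortletContainer:
    """Analogue of Pluto's container: invoked by the portal for each portlet."""
    def __init__(self, portlets):
        self.portlets = portlets

    def render_portlet(self, portlet_id, request):
        return self.portlets[portlet_id].render(request)


class Portal:
    """Analogue of Pluto's simple test-driver portal component."""
    def __init__(self, container, page):
        self.container = container
        self.page = page  # ids of the portlets on the user's current page

    def handle_request(self, request):
        # Aggregate each portlet's fragment into the full page markup.
        fragments = [self.container.render_portlet(pid, request)
                     for pid in self.page]
        return "<html><body>" + "".join(fragments) + "</body></html>"


container = PortletContainer({
    "bookmarks": Portlet("Bookmarks", "<ul><li>developerWorks</li></ul>"),
    "weather": Portlet("Weather", "<p>Sunny</p>"),
})
page = Portal(container, ["bookmarks", "weather"]).handle_request({})
print(page)
```

The separation mirrors the article's point: the portal is just a driver, while the container is a generic component that a different portal (such as Jetspeed) could reuse by supplying its own driver.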

  • [September 06, 2003] "Don't Let DRM Lock You In. A Q and A with John Fowler, CTO, Sun Software." By John Fowler. In Features: Sun News, Video, and Resources (September 2003). "By including a proprietary digital rights management system in Microsoft Office, any data created in Microsoft Office can be read and used only by Microsoft tools, which means that you must use the Microsoft platform. Don't think about this just in relation to PCs; this also extends to other kinds of devices that you use on your network. You are locking your data to a single vendor. This has always been Microsoft's strategy, but in the past it has been possible to work around this strategy. With Office 2003 and the inclusion of DRM, it will be impossible to work around. So in this case you can think of Microsoft as owning your data and it not being owned by you, because in order to read your data you have to have licensed technology from Microsoft and only Microsoft. From a long-term strategic standpoint, this locks you into Microsoft as your only technology provider to read your data. One of the other impacts is that Office 2003 documents will not be readable by prior versions of Office. Those of you who implement Office 2003 will also have to require company-wide changes for suppliers and customers to exchange documents. This is a wonderful example of how Microsoft can extract licensing fees from someone because they take their dominant position with Office and use it to require an upgrade... Examples of alternative technologies are StarOffice and OpenOffice, which are distributed widely on several platforms, including Windows, where they are very popular. The OpenOffice project is open, and all its data formats are open and currently going through a standardization process in the OASIS standards body. 
What this means to the customer is that your data will always be yours and you will never be locked out from being able to read and write your own data... Microsoft is intent on keeping its monopolistic position. Office is the dominant software productivity package. One of the primary behaviors of a monopolist is the inability to innovate in a competitive environment... I encourage everyone to look at the alternative technologies that are available today, such as StarOffice and OpenOffice. They provide interoperability with all software productivity programs, including Microsoft Office. If you are considering Office 2003, I would avoid upgrading as long as possible. I also encourage everyone to express their dissatisfaction to Microsoft and encourage them to join OASIS to work on industry standards for desktop productivity software..." See: (1) "New Office Locks Down Documents"; (2) "Microsoft Announces Windows Rights Management Services (RMS)"; (3) "Microsoft Releases Windows Rights Management Client"; (4) "XML and Digital Rights Management (DRM)"; (5) "XML File Formats for Office Documents."

  • [September 05, 2003] "IBM Exec Touts Need for BPEL Support, SOAs." By Barbara Darrow and Elizabeth Montalbano. In CRN (September 02, 2003). "Now that many plumbing issues have been sorted out, it's time to bring business process integration, transaction support and systems management into the Web services realm, according to one IBM executive. Toward that end, IBM is building BPEL (Business Process Execution Language) support -- as well as WS-Security support -- into its WebSphere application server, Tivoli systems management and other IBM products, said Bob Sutor, director of WebSphere Infrastructure Software for IBM Software. IBM already supports SOAP, WSDL and UDDI in most of its middleware software. BPEL is an emerging specification that would give programmers a way to formally describe processes underlying business applications so that they can be exposed and linked to processes in other applications. IBM and Microsoft submitted the spec to the Organization for the Advancement of Structured Information Standards (OASIS) for approval. For a while it appeared that BPEL was on a collision course with another specification effort backed by Oracle and others and winding its way through the World Wide Web Consortium (W3C) but those two efforts now appear to be converging. IBM is not the only vendor beating the BPEL drum. Microsoft has said that BPEL support will be built into upcoming BizTalk Server... 'In the next six months, I see a big focus on transactions and systems management, not just a lot of yelling and screaming,' Sutor said. Vendors, customers and solution providers now have to sort out where traditional in-house systems management ends and Web services management begins, Sutor told CRN. Sutor also said he sees a growing need for BPEL support and the adoption of service-oriented architectures, a move to more modular, loosely coupled application development. 
Service Oriented Architectures, or SOAs, are the latest incarnation of the distributed object architectures, exemplified by the older heterogeneous CORBA (Common Object Request Broker Architecture) and Microsoft-centric DCOM (Distributed Component Object Model) worldviews... IBM insists that its game plan will preserve existing investments in legacy applications, and claims that Microsoft's .Net worldview requires companies to rip and replace older applications and infrastructure. Instead of junking things, why not replace 'green screens' with Web interfaces, Sutor said. 'CICS has worked great for 35 years, why throw it out? Microsoft's model is to yank everything out even if it's [just] three or four years old. Well now they're seeing resistance to that from customers.' Of course, IBM, unlike Microsoft, stands to reap huge services revenue from knitting together diverse systems. IBM Global Services (IGS) makes billions doing just that..." See: (1) "Integrating CICS Applications as Web Services. Extending the Life of Valuable Information."; (2) "Business Process Execution Language for Web Services (BPEL4WS)."

  • [September 05, 2003] "Web Services Standards Fail to Unite." By Martin LaMonica. In ZDNet News (September 02, 2003). "IBM and Microsoft are declining to come onboard a unified Web services reliable-messaging specification, preferring to make their own way. A technical committee [forged ahead] on Thursday with the development of a Web services reliable-messaging specification without the backing of industry heavyweights IBM and Microsoft. Companies that back the specification (Fujitsu, Hitachi, NEC, Oracle and Sun Microsystems) demonstrated on Thursday how products based on the proposed Web Services Reliability standard can interoperate as designed. The proof-of-concept took place at a meeting of the Web Services Reliability technical committee of the standards body Organisation for the Advancement of Structured Information Standards (OASIS). Reliable messaging is considered one of the most pressing additions to help drive adoption of Web services, which is a set of industry guidelines for building applications that can easily share information. Reliable-messaging standards are needed to help define how information can be shared between software programs as reliably as within a single application. Analysts said the lack of a single standard could ultimately hinder adoption of Web services. Despite the need for an industrywide standard, reliable messaging has been marred by rivalries among competing information technology providers. The Web Services Reliability specification was submitted to OASIS in February for consideration as an industrywide standard. The reliable-messaging function is designed to guarantee that data sent between computers via messages will arrive at the intended destination..." See: (1) "Fujitsu, Hitachi, NEC and Oracle Showcase Reliable Web Services. OASIS Members Demonstrate Successful Interoperability of Reliable Messaging for Web Services at Technical Committee Meeting." 
(2) "Reliable Messaging"; (3) "OASIS Members Form Technical Committee for Web Services Reliable Messaging"; (4) OASIS Web Services Reliable Messaging TC website.
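The guarantee described above (data sent via messages arrives at its destination) is typically built from acknowledgements, retries, and receiver-side duplicate detection. The sketch below illustrates that general pattern in Python; it is not the WS-Reliability protocol itself, and the channel, message shape, and loss behavior are all invented for illustration.

```python
# At-least-once delivery via retries, made effectively once-only by
# duplicate detection on a message id. A generic sketch, not WS-Reliability.
import itertools

class UnreliableChannel:
    """Drops every other send attempt to simulate a lossy transport."""
    def __init__(self, receiver):
        self.receiver = receiver
        self._counter = itertools.count()

    def send(self, message):
        if next(self._counter) % 2 == 0:
            return None                            # message lost, no ack
        return self.receiver.deliver(message)      # delivered, ack returned

class Receiver:
    def __init__(self):
        self.seen_ids = set()
        self.processed = []

    def deliver(self, message):
        if message["id"] not in self.seen_ids:     # duplicate detection
            self.seen_ids.add(message["id"])
            self.processed.append(message["body"])
        return {"ack": message["id"]}              # always acknowledge

def send_reliably(channel, message, max_retries=5):
    """Retry until an acknowledgement arrives or retries are exhausted."""
    for _ in range(max_retries):
        ack = channel.send(message)
        if ack is not None:
            return ack
    raise RuntimeError("delivery failed after retries")

receiver = Receiver()
channel = UnreliableChannel(receiver)
send_reliably(channel, {"id": "m1", "body": "purchase order"})
send_reliably(channel, {"id": "m1", "body": "purchase order"})  # duplicate resend
print(receiver.processed)  # the order is processed exactly once
```

The retry loop is what makes delivery reliable over a lossy transport; the id check is what keeps the inevitable duplicates from causing a purchase order to be processed twice.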

  • [September 05, 2003] "XQuery from the Experts: Influences on the Design of XQuery. Book Excerpt Explores the Origins of the XML Query Language." By Don Chamberlin (IBM Fellow, Almaden Research Lab). From IBM developerWorks, XML zone. September 03, 2003. Excerpted from Chapter 2, "Influences on the Design of XQuery," in the book XQuery from the Experts: A Guide to the W3C XML Query Language (Addison-Wesley). "Early in its history, the XML Query Working Group confronted the question of whether XML is sufficiently different from other data formats to require a query language of its own. The SQL language is a very well established standard for retrieving information from relational databases and has recently been enhanced with new facilities called 'structured types' that support nested structures similar to the nesting of elements in XML. If SQL could be further extended to meet XML query requirements, developers could leverage their considerable investment in SQL implementations, and users could apply the features of these robust and mature systems to their XML databases without learning a completely new language. Given these incentives, the working group conducted a study of the differences between XML data and relational data from the point of view of a query language: (1) Relational data is 'flat,' organized in the form of a two-dimensional array of rows and columns. In contrast, XML data is 'nested', and its depth of nesting can be irregular and unpredictable... (2) Relational data is regular and homogeneous. Every row of a table has the same columns, with the same names and types. This allows metadata -- information that describes the structure of the data -- to be removed from the data itself and stored in a separate catalog. XML data, on the other hand, is irregular and heterogeneous... (3) Like a stored table, the result of a relational query is flat, regular, and homogeneous. The result of an XML query, on the other hand, has none of these properties. 
For example, the result of the query 'Find all the red things' may contain a cherry, a flag, and a stop sign, each with a different internal structure... (4) Because of its regular structure, relational data is 'dense' -- that is, every row has a value in every column. This gave rise to the need for a 'null value' to represent unknown or inapplicable values in relational databases. XML data, on the other hand, may be 'sparse'...; (5) In a relational database, the rows of a table are not considered to have an ordering other than the orderings that can be derived from their values. XML documents, on the other hand, have an intrinsic order that can be important to their meaning and cannot be derived from data values. This has several implications for the design of a query language... The significant data model differences summarized above led the working group to decide that the objectives of XML queries could best be served by designing a new query language rather than by extending a relational language. Designing a query language for XML, however, is not a small task, precisely because of the complexity of XML data. An XML 'value,' computed by a query expression, may consist of zero, one, or many items, each of which may be an element, an attribute, or a primitive value. Therefore, each operator in an XML query language must be well defined for all these possible inputs. The result is likely to be a language with a more complex semantic definition than that of a relational language such as SQL..." See also: (1) W3C XML Query (XQuery) website; (2) "XML and Query Languages"; (3) "Meet the Experts: Don Chamberlin."
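Point (3) above, that the result of an XML query can be heterogeneous, is easy to see with a small example. The sketch below uses the XPath subset in Python's standard library rather than XQuery, since the point concerns the data model rather than query syntax; the document itself is invented.

```python
# "Find all the red things" over a small XML document: the result set mixes
# elements with different tags and different internal structures.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<things>
  <fruit color="red"><name>cherry</name></fruit>
  <flag color="red"><country>Japan</country><shape>rectangle</shape></flag>
  <sign color="red"><text>STOP</text><sides>8</sides><material>metal</material></sign>
  <fruit color="yellow"><name>banana</name></fruit>
</things>
""")

# Match any element, at any depth, carrying color="red".
red_things = doc.findall(".//*[@color='red']")

print([el.tag for el in red_things])         # different element types
print([len(list(el)) for el in red_things])  # different internal structures
```

A relational result set could never look like this: every row would have to share one schema, whereas here each matched element has its own tag and its own children.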

  • [September 05, 2003] "XML Matters: TEI -- the Text Encoding Initiative. An XML Dialect for Archival and Complex Documents." By David Mertz (Encoder, Gnosis Software, Inc). From IBM developerWorks, XML zone. ['XML is usually thought of as a markup technique utilized by programmers to encode computer-oriented data. Even DocBook and similar document-oriented DTDs focus on preparation of technical documentation. However, the real roots of XML are in the SGML community, which is largely composed of publishers, archivists, librarians, and scholars. The Text Encoding Initiative uses XML in the markup of literary and linguistic texts. TEI allows useful abstractions of typographic features of source documents, but in a manner that enables effective searching, indexing, comparison, and print publication -- something not possible with publications archived as mere photographic images.'] "The Text Encoding Initiative (TEI) is a decade older than XML itself, and older than other common documentation encoding XML schemas like DocBook. Specifically, TEI was developed -- in initial SGML form -- in 1987, almost an eternity in Internet time. Despite its age, TEI works at a different level than any other markup format that I am aware of, and remains the best solution to a certain class of problems... TEI aims to [enable encoding of] all the semantically significant aspects of literary texts, both old ones that predate XML technology, or indeed, computers in general, and newly created ones. Certainly the words themselves are the most important semantic feature of prose or poetical texts. But throughout the history of print -- or of writing in general -- other typographic features have been added to texts to encode subsidiary aspects of their meaning. 
The use of presentation elements -- such as various types of emphasis, indentation and margins, tables, pagination, line breaks (as in verse), graphics, and decorations -- has enhanced, elaborated, or modified the meanings of the words in books, essays, pamphlets, flyers, bills, poems, liturgicals, and all the other forms literary works take. Moreover, mere typographic features sometimes require an interpretive effort to fully decipher. As a trivial example, many books use italics both to mark foreign words and to mark the titles of other books. The semantic aspect of italicization depends on the verbal context, but clearly authors usually use such marks with distinct intentions. TEI aims to allow the markup of texts in a way that distinguishes all such meaningful aspects. TEI is not really just an 'XML schema'; it is more like a whole family of schemas, related in their general goal but varying in details of the tags and attributes used. In part, these schemas differ in being supported by different DTDs (or RELAX NG schemas). For example, TEI-Lite is a greatly simplified form of TEI that aims to support '90% of the needs of 90% of the TEI user community' (according to the TEI Web site). And other specializations are available as well. But even apart from actual specializations or subsets of the full TEI tag set, most users will utilize only a few of the tags available in the TEI DTD they are using. Different documents demand different markup, and different projects allow differing degrees of granularity... any tool that can work with XML can work with TEI. DTDs are available for several TEI variations, as are XSLT stylesheets of various sorts. Naturally, customizations for working with TEI in Emacs, Framemaker, and MS-Word can be found at the TEI Web site. An XMetaL customization is also downloadable. An interesting online tool provided by the initiative lets you customize an XSLT stylesheet to produce just the HTML output you desire. 
A Web form lets you select a variety of options, then returns a stylesheet reflecting your customizations..." See: (1) "TEI PizzaChef Tool Supports XML DTD Generation"; (2) general references in "Text Encoding Initiative (TEI) - XML for TEI Lite."
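The distinction the article draws, marking what italics mean rather than the italics themselves, is the heart of TEI encoding. The fragment below builds a small example with Python's standard XML library; <title> and <foreign> are genuine TEI element names for exactly these two uses of italics, but the sentence is invented and the fragment omits the full TEI document structure (TEI header, text body, and so on).

```python
# The same typographic feature (italics) carries two different meanings;
# TEI marks each meaning explicitly instead of recording "italic".
import xml.etree.ElementTree as ET

p = ET.Element("p")
p.text = "She read "

title = ET.SubElement(p, "title")      # italics meaning: a book title
title.text = "Don Quixote"
title.tail = " and savored its "

foreign = ET.SubElement(p, "foreign")  # italics meaning: a foreign word
foreign.set("xml:lang", "es")
foreign.text = "gracia"
foreign.tail = "."

out = ET.tostring(p, encoding="unicode")
print(out)
```

A search for book titles or an index of foreign vocabulary can now be computed directly from the markup, which is exactly what a photographic image of the italicized page cannot support.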

  • [September 05, 2003] "HP Buys Talking Blocks. Web Services Management Technology Gained." By Matt Hamblen. In InfoWorld (September 04, 2003). "Hewlett-Packard announced Wednesday that it has signed a deal to acquire San Francisco-based Talking Blocks for its Web services management technology. The deal is expected to close by the end of the month, and until that time, no financial details will be released, said Nora Denzel, senior vice president of HP's Software Global Business unit. Denzel said Talking Blocks' Service Oriented Architecture helps integrate different internal systems with the systems of external business partners, serving as a communication pipe between IT infrastructure and business needs. The SOA software will support HP's Web Services Management Framework, which was announced earlier this year. SOA will be sold as a separate product to HP customers and will be an integrated part of HP's OpenView software... Stephen Elliot, an analyst at market research firm IDC, said the new software will allow HP to 'better collect management information from third-party products' made by IBM's Tivoli unit and Computer Associates International Inc. The Talking Blocks acquisition 'could potentially put them in a better spot' in the Web services management marketplace..." See HP Adaptive Enterprise strategy and other details in the announcement "HP to Acquire Talking Blocks. Technology Cuts Cost and Simplifies Management of Heterogeneous Resources, Advances HP Adaptive Enterprise Strategy."

  • [September 04, 2003] "Auto Industry Portal Kicks Into High Gear." By Ellen Messmer. In Network World (September 01, 2003). "The Big Three automakers are finally ready to make Covisint, the business-to-business Web portal they founded three years ago to reach suppliers, the central engine in their e-commerce and messaging systems. At last week's Auto-Tech conference, DaimlerChrysler said by year-end it would phase out its private supplier extranet and use Covisint to do business with its 9,000 suppliers. Ford said it intends to use Covisint to exchange electronic data interchange (EDI) orders and design data. And although GM will continue to use its own SupplyPower Web portal, the company said it expects Covisint to support its most important XML-based priority-messaging and document delivery service. This is a significant change from Covisint's current role as an online catalog and auction house for almost 100,000 registered trading partners, and more in keeping with the grand vision laid out when Covisint was announced in 2000. While it's taken time to rev up the engine, Covisint is now bringing in about $60 million in fees and expects to be profitable next year, said Brad Pfeiffer, client relationship director to Ford. 'We're just going to focus now on what we were founded to do -- provide an industry portal and the messaging piece,' he said. Even though the Big Three continue to bicker about some things, such as the use of XML, they say Covisint is now central to their e-commerce strategies with suppliers. Each car company is making more business applications available through the Covisint portal... Although [GM] is only using Covisint for online auctions with suppliers today, it eventually wants to use Covisint as a hub for XML-based messaging. 'Covisint is our messaging strategy,' Hanna said. 
GM announced last week that it is participating in what Covisint calls the priority-messaging pilot project with Ford, DaimlerChrysler and suppliers Delphi, Lear and JCI. According to Bill Penn, Covisint's chief architect, that involves exchanging time-critical documents in XML format in a way that allows automated updates to be made to the back-end applications of carmakers and suppliers. The pilot starts in November and 'we'll do it using the brokering technology in WebMethods,' Penn said. Like Ford, GM wants to see the auto industry convert from EDI to XML. However, GM prefers the version of XML documents known as ebXML defined by the Open Applications Group. GM has started using SeeBeyond middleware internally to integrate XML-based ebXML data into 57 different applications, Hanna said. The fact that Ford and GM are butting heads over which versions of XML to use to define business documents might complicate the situation, but it won't run the effort off the road because translation between XML documents is possible, Penn said. The automakers also said they hope to reconcile their XML differences soon..." See the Covisint 2003-08-26 announcement: "Covisint Previews New Covisint Connect Messaging Service at AIAG's AUTO-TECH 2003. New Service on Track For Market Introduction Later This Year."

  • [September 04, 2003] "IBM in New Push For On-Demand Computing." By Peter Judge. In Techworld (September 03, 2003). "IBM is set for a new campaign to sell 'utility computing' or so-called 'computing on demand'. This much-touted phrase stands for the concept whereby data centres (including storage, networking and server capacity) can be allocated to different tasks to increase efficiency. A new product, Tivoli Intelligent Orchestrator, shifts tasks between servers, while the company has announced a new UK customer for on-demand computing, drinks company Diageo. Orchestrator is part of Project Symphony, an IBM effort starting this autumn to sell utility computing to different kinds of customer... Tivoli Intelligent Orchestrator is the first IBM version of ThinkControl, a provisioning product IBM acquired in May, when it bought Think Dynamics. Written in J2EE, ThinkControl allocates computing power across a group of servers, increasing the ability to allocate resources, and reducing the need to have spare idle servers. The software supports other standards like XML, SNMP, SOAP and Open Grid Services Architecture, so re-jigging it to support different middleware and hardware platforms has been no trouble, said Carter. The product originally supported BEA middleware and Oracle databases; IBM added support for its DB2 database and WebSphere middleware. It runs on Linux or Windows, and manages Linux, Windows, AIX or Sun Solaris servers -- with support for both HP-UX and IBM zSeries mainframes coming shortly... Orchestrator is currently being put through its paces in an IBM showpiece based round the US Open tennis tournament. Orchestrator is being used to recycle spare cycles from the website to speed up protein structure calculations in IBM's research labs. The website, which expects some 23 million visitors, is being run on IBM pSeries RISC Linux systems, but demand will be very varied. 
An xSeries Intel server running Windows is using Orchestrator to manage spare processing capacity. The research centre has a bank of pSeries Unix boxes running complex protein folding applications which require large amounts of processing..."

  • [September 04, 2003] "Self-Healing Systems." By Colleen Frye. In Application Development Trends (September 01, 2003). "Automation is not a new concept -- tools and technologies have become more 'autonomic' with each generation. And the idea of delivering IT resources on an as-needed basis to respond to the peaks and valleys of demand -- prioritized by business need -- has been bandied about for years. Recently, though, the leading platform vendors have all rolled out plans for the data center of the future and are starting to deliver the technology that will make this vision possible. IBM has rolled out its 'Autonomic Blueprint' and 'on-demand' initiative. Microsoft announced its Dynamic Systems Initiative (DSI). Sun Microsystems has a detailed plan for its N1 technologies and utility computing. And Hewlett-Packard (HP) has its Adaptive Enterprise strategy. The plans encompass both hardware and software, outsourcing and in-sourcing. Large systems management vendors like Candle, Computer Associates and others are laying out strategies for how they will support these plans... IBM's Autonomic blueprint details an architecture infrastructure that embraces Web services and a variety of open standards, including the Open Grid Services Architecture (OGSA) and The Open Group's Application Resource Measurement (ARM). The architecture is centered on intelligent control loops that collect information, make decisions and then make adjustments in the system... For its part, Microsoft has also laid out a blueprint. Microsoft's Dynamic Systems Initiative (DSI) is said to unify hardware, software and service vendors around a software architecture that centers on the System Definition Model (SDM). The SDM is said to provide a common contract between development, deployment and operations across the IT life cycle. SDM is a live XML-based blueprint that captures and unifies the operational requirements of applications with data center policies... 
Launched in May, HP's Adaptive Enterprise strategy also focuses on more closely linking business and IT. As part of the initiative, HP announced new Adaptive Enterprise services, including a set of business agility metrics, and new methodologies for designing and deploying application and network architectures to support constantly changing business needs. Also announced was software for virtualizing server environments and new self-healing solutions for HP OpenView. Hewlett-Packard's Darwin Reference Architecture is a framework for creating a business process-focused IT environment that dynamically changes with business needs; HP has also upgraded its ProLiant blade servers... Sun Microsystems has also laid out its plans for both a more dynamic data center and utility computing. Sun's N1 architecture comprises foundation resources, virtualization, provisioning, and policy and automation. Foundation resources are the various IT components already in place. Virtualization allows for the pooling of those resources. Provisioning maps business services onto the pooled resources, and policy and automation enable a customer to create rules defining performance objectives for a given service. Based on set policies, N1 will manage the environment, adding and removing resources as needed to maintain service-level objectives..."
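The "intelligent control loop" pattern at the center of IBM's blueprint (collect information, make decisions, make adjustments) can be caricatured in a few lines. The sketch below is a toy: the load metric, the policy thresholds, and the server counts are all invented for illustration, not drawn from any vendor's product.

```python
# Toy autonomic control loop: monitor a load metric, compare it against a
# simple policy, and adjust the number of provisioned servers accordingly.

def control_loop(load_samples, servers=2, target_per_server=50):
    """Collect (load sample) -> decide (compare to policy) -> adjust (capacity)."""
    history = []
    for load in load_samples:
        per_server = load / servers
        if per_server > target_per_server:
            servers += 1                              # provision another server
        elif per_server < target_per_server / 2 and servers > 1:
            servers -= 1                              # release an idle server
        history.append(servers)
    return history

# Demand ramps up, then falls back; capacity follows with a small lag.
print(control_loop([80, 150, 220, 220, 90, 40]))
```

Real autonomic systems add the pieces a toy omits: damping so the loop does not oscillate, multiple metrics, and business-priority policies deciding which workload gets the capacity.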

  • [September 04, 2003] "Proposed Provisioning Technology Set To Go." By John Fontana. In ComputerWorld (September 04, 2003). "A forthcoming XML-based standard is living a double life. It is expected to foster integration of current provisioning and identity management software now and will evolve to support Web services in the future. The proposed standard is the Service Provisioning Markup Language (SPML) 1.0, which is set for ratification Oct. 31 by the Organization for the Advancement of Structured Information Standards (OASIS). The 1.0 specification is designed to help network executives break the logjam that holds back interoperability among current provisioning systems. These systems let companies automatically set up and deactivate user accounts across corporate networks and applications. But critics, namely IBM Corp. and Microsoft Corp., say SPML in its 1.0 form lacks features beyond simple addition and deletion of users. They say it's not flexible enough to integrate into the palette of Web services standards they are developing, known as WS-* (pronounced WS-Star), which includes WS-Security and WS-Federation. The two companies are working with OASIS to correct those shortcomings. The protocol, therefore, appears to satisfy short-term corporate needs while creating a starting point for developing a long-term solution that will work within Web services deployments. 'What this means is that SPML 1.0 will not become the be-all and end-all provisioning standard,' says Daniel Blum, an analyst with Burton Group. 'Something else will come along.' He says Microsoft and Web services standards partner IBM, which last year acquired provisioning vendor and SPML co-creator Access360, have valid points on the long-term viability of SPML... 
The interoperability SPML fosters was demonstrated in July when 10 vendors - BMC Software Inc., Business Layers Inc., Critical Path Inc., Entrust Inc., MyCroft, OpenNetwork Technologies Inc., PeopleSoft Inc., Sun Microsystems Inc., Thor Technologies Inc. and Waveset Technologies Inc. - held an interoperability test to show the addition and creation of users across their provisioning systems. 'Enterprise architects should start to consider SPML as real, deployable and valuable,' says Darran Rolls, chairman of the Provisioning Services Technical Committee (PSTC) at OASIS and director of technology for Waveset. What's also becoming real is the relationship between SPML and the Security Assertion Markup Language (SAML), an XML-based standard for exchanging user authentication and authorization data across corporate systems that OASIS ratified in October 2002. Together, SAML and SPML provide a standard way to create user accounts and then validate these users as part of an identity management infrastructure. The two are the glue for integrating Web single sign-on and provisioning software. SPML can use a SAML credential as one way to identify users to be provisioned to corporate systems..." General references in "XML-Based Provisioning Services."
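The kind of message SPML standardizes, a request to add a user carrying an identifier and a set of attributes, can be sketched as follows. The element names below only loosely approximate SPML 1.0 and omit its namespaces entirely; consult the OASIS specification for the real schema. The identifier type URN and the attribute names are invented.

```python
# Rough sketch of an SPML-style "add user" provisioning request.
# Element names approximate SPML 1.0 only loosely; namespaces are omitted.
import xml.etree.ElementTree as ET

def build_add_request(user_id, attributes):
    """Serialize an add-user request with an identifier and named attributes."""
    req = ET.Element("addRequest")

    ident = ET.SubElement(req, "identifier")
    ident.set("type", "urn:example:userid")   # invented identifier type
    ET.SubElement(ident, "id").text = user_id

    attrs = ET.SubElement(req, "attributes")
    for name, value in attributes.items():
        attr = ET.SubElement(attrs, "attr")
        attr.set("name", name)
        ET.SubElement(attr, "value").text = value

    return ET.tostring(req, encoding="unicode")

xml_out = build_add_request("jsmith", {"email": "jsmith@example.com",
                                       "role": "engineer"})
print(xml_out)
```

The value of standardizing even this much is the interoperability the article describes: any compliant provisioning system can parse another vendor's request, and a SAML assertion can supply the authenticated identity that such a request provisions.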

  • [September 04, 2003] "HP to Grid-Enable Entire Product Line." By Jeffrey Burt. In eWEEK (September 4, 2003). "Hewlett-Packard Co. is adding to its Adaptive Enterprise strategy this week by announcing plans to grid-enable its entire product line and acquire a new company. The Palo Alto, Calif., company already grid-enables its PA-RISC, Integrity, Alpha and ProLiant server lines by incorporating the Globus Toolkit and OGSA (Open Grid Services Architecture) 2.0, according to Nick van der Zweep, director of utility computing in HP's Enterprise Systems Group. Over the next 12 to 18 months, HP will integrate the Globus Toolkit as well as the next version of the OGSA standard, 3.0 -- which is due in October -- into its entire line of consumer and commercial products, from handheld devices up to its largest servers, van der Zweep said. 'Everything that HP ships will be grid-enabled,' he said. In addition, HP is creating a specific consulting group within its HP Services unit to deal with grid-based platforms. The group will offer management, deployment and support of grid architectures, van der Zweep said. Grid platforms fit well within HP's Adaptive Enterprise initiative, which is designed to virtualize IT resources to enable administrators to quickly allocate and deploy resources in response to business demands, he said. Grids enable IT administrators to turn most resources, from computers to Web services to storage and applications, into services, making them more easily accessible and manageable..." See details and references in the news story "HP Integrates Industry Grid Standards Across All Enterprise Product Lines."

  • [September 04, 2003] "HP to Grid-Enable All Systems, Offers Grid Services." By James Niccolai. In Network World Fusion (September 4, 2003). "HP reiterated its broad commitment to grid computing on Thursday, saying it would add grid capabilities to all of its systems over the next two to three years. The company is also broadening its service offerings to help businesses adopt the grid computing model, and now has 5,000 to 6,000 consultants in place worldwide to help customers get grids up and running, said Nora Denzel, senior vice president of HP's global software division. The consultants will provide management, deployment and lifecycle support for grid environments, she said. The product and service plans are intended to extend HP's Adaptive Enterprise strategy to make IT systems more responsive to its clients' business needs, the company said. Grid computing promises to let businesses treat groups of servers and storage equipment as if they were a single large machine, and to assign computing resources to applications on an as-needed basis. Proponents say it will help businesses save money by allowing them to use computing resources more efficiently, and can also make applications more reliable. System vendors like IBM and Sun have also been outlining grid strategies, and Oracle at its user conference next week will provide more details about its own efforts to grid-enable its database and other software. For its part, HP will integrate emerging grid standards, including the Globus Toolkit and Open Grid Services Architecture (OGSA), into all of its enterprise systems over the next two to three years, as well as into products like handheld computers and printers, Denzel said. 'In the hardware we'll ship grid software that has been integrated and tested, so that when you do go to a grid environment a system will be able to put itself onto the grid easily and quickly and there will be no testing required,' she said. 
HP also has a 'huge' software effort underway to simplify the creation and management of grids, including efforts to revamp its OpenView management tools for the task..." See: "HP Integrates Industry Grid Standards Across All Enterprise Product Lines."

  • [September 03, 2003] "Key Standards For Web Services-Enabled Portals Nearing Final Approval. JSR 168, WSRP Being Prepped for Industry Adoption." By Elizabeth Montalbano. In CRN (September 03, 2003). "Two key technology standards that link Web services to portals are a step closer to final approval by their respective standards bodies, industry sources said. JSR 168, the standard for remote portals out of the Java Community Process (JCP), recently went into final draft, which means it should be finalized in the next 30 days, according to a Sun Microsystems spokeswoman from the JCP, which creates and oversees Java standards. JSR 168 defines a standard API to provide a common interface for aggregating several content sources and application front ends into one portal, according to the JCP. Sun already provides support for JSR 168 in its beta of Sun ONE Portal Server 6.2, available now. Meanwhile, the OASIS standards consortium this week has finalized the Web Services for Remote Portlets (WSRP) standard, which is backed by BEA Systems, IBM, Oracle, Plumtree Software and others as a way to create remote portlets that are platform- and language-independent, according to sources familiar with OASIS' plans... WSRP enables solution providers aggregating application and information through portals to easily integrate Web services into portals without having to write adapters or code to communicate with the platform running the services, according to OASIS. Since WSRP isn't tied to any specific language, it will work in tandem with JSR 168, said a Plumtree spokeswoman. In fact, the spokeswoman said vendors supporting Java-based portals aim to make version 2.0 of JSR 168 compatible with WSRP. 
To support Web services-enabled portal development, Plumtree Wednesday unveiled a set of development tools called the Plumtree Enterprise Web Development Kit, which provides developer tools for building personalized, interactive applications from Web services running on different platforms. The kit includes sample code and documentation for Java and .Net development environments, as well as software that bridges incompatibilities between Web services..." See: "JSR 168 Portlet API Specification 1.0 Released for Public Review."
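The aggregation problem both standards address can be illustrated with a toy model. Everything below is invented for illustration: the real standards define a Java portlet API (JSR 168) and SOAP operations (WSRP). The point in both cases is the same, though: the portal needs only one common calling convention rather than per-platform adapter code.

```python
# Conceptual sketch of the aggregation problem JSR 168 and WSRP address:
# a portal composes one page from markup fragments produced by independent
# "portlets". The interfaces here are invented for illustration.
def weather_portlet(request):
    return "<div class='portlet'>Sunny, 22 C in %s</div>" % request["city"]

def news_portlet(request):
    return "<div class='portlet'>Top story: OASIS finalizes WSRP</div>"

def render_portal(portlets, request):
    # One common calling convention -- no per-platform adapter code --
    # is what both standards aim to provide.
    fragments = [p(request) for p in portlets]
    return "<html><body>%s</body></html>" % "".join(fragments)

page = render_portal([weather_portlet, news_portlet], {"city": "Boston"})
print(page)
```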

  • [September 03, 2003] "Microsoft's Patent Loss Rattles Tech Community. Company Says it is Responding by Making Changes to Internet Explorer." By Paul Roberts. In InfoWorld (September 03, 2003). "Companies with products that work on the Internet are slowly waking up to the broad implications of a recent judgement against software behemoth Microsoft Corp. in a patent infringement case. The $520 million award to Eolas Technologies Inc. of Chicago and the University of California (UC) stemmed from a 1999 lawsuit in which Eolas and UC charged Microsoft with infringing on a 1998 patent owned by the university and licensed to Eolas. However, the verdict could spell trouble for a wide range of popular Web-based products and services, experts agree. That patent, U.S. number 5,838,906, was developed by Eolas president Michael Doyle at the University of California in San Francisco and covers technology that enables small computer programs, often referred to as 'applets' or 'plug-ins,' to be embedded in Web pages and interacted with through Web browsers like Microsoft's Internet Explorer. In response to the judgement against it, Microsoft said last week that it will be making changes to Internet Explorer (IE) that may affect a 'large number of existing Web pages,' according to a statement by the World Wide Web Consortium (W3C). In addition to pursuing post-trial motions against Eolas, Microsoft is also evaluating what changes may be necessary and will not comment on its work, according to company spokesman Jim Desler. The Redmond, Wash., company stands by its claims that it did not infringe on the Eolas patent, but will work to minimize the effect on customers of changes to IE and is cooperating with the W3C to coordinate that effort, he said. 
Computer security experts initially hailed the announcement, speculating that the ruling might spell the end of Microsoft's ActiveX controls, notoriously insecure software components that allow software developers to integrate specialized functionality with Web pages. But technology and legal experts agree that the ruling could affect a wide range of technology companies with products that interact with Web browsers, or services that rely on customer interaction through Web browsers... W3C is concerned about the implications of Eolas' patent claim, according to Janet Daly, the organization's head of communications. 'There certainly are concerns whenever patent issues ... appear to be relevant to basic technology. That gets the attention of the W3C membership,' she said. Past patent claims, such as those affecting the P3P (Platform for Privacy Preferences ) standard, have stopped development or the implementation of development standards, she said. As in that case, the W3C has legal and technology experts analyzing the Eolas patent and the legal decisions that led to the company's court victory over Microsoft, according to Daly. That analysis could take six months or more, but the group will make its findings public once they are known..." See: (1) "W3C Opens Public Discussion Forum on US Patent 5,838,906 and Eolas v. Microsoft"; (2) "Patents and Open Standards."

  • [September 02, 2003] "Opera, Mozilla Release New Browser Betas." By John Borland. In CNET News.com (August 28, 2003). "Two of the last remaining serious Web browser rivals to Microsoft's Internet Explorer each released new versions this week, promising faster and more stable surfing. The Mozilla project, which is creating an open-source version of the Netscape browser, released the beta, or public test of its version 1.5 software on Wednesday. Opera Software, the Norway-based commercial Web software developer, released an updated version of its latest browser on Thursday. Opera's 'technological lead is further expanded with today's release,' Jon von Tetzchner, the company's CEO, said in a statement. 'The feedback from our testers has been unison: Opera 7.20 significantly boosts speed and performance.' For years, the two browsers have been largely responsible for supporting the population of Web surfers who don't want to use Microsoft's Internet Explorer. But that group remains small. According to OneStat.com, more than 95 percent of Web surfers use Microsoft's browser today. By contrast, about 1.6 percent of surfers use Mozilla, and just 0.6 percent use Opera, OneStat estimates. Those figures may be somewhat undercounted, since Opera users often set their software to tell Web sites it is actually Internet Explorer in order to avoid configuration problems... According to the project's release notes, the new Mozilla beta release offers better Internet Relay Chat support, a spell-checker for the e-mail software, better XML support, and faster loading and improved standards support. Opera's beta version offers faster loading, improvements in the version associated with handheld computers, support for Hebrew and Arabic languages and other tweaks..." See the Opera announcement "First Opera for Bidirectional Languages. Opera 7.20 for Windows Fine-Tunes Speed and Performance."

  • [September 02, 2003] "Macromedia Tools Move Beyond Animation. New Forms Environment Turns Flash from Animation Engine to Business Building Block." By Charles Babcock. In InformationWeek (August 27, 2003). "Blue Iris is a system now in use at Alice Hyde Medical Center in Malone, NY, and other hospitals. The latest version of Blue Iris was built using Macromedia Inc.'s Flash MX Professional development tool, which lets doctors access information from wherever they can open a browser window, 'unchaining them from green-screen computer terminals' typically found in hospital patient systems, says Martin Fincham, VP of marketing at Mitem. In addition, the information presented to a doctor might show patient temperature, blood pressure, and other vital signs based on the latest readings. By clicking on the information, the doctor can get more vital-sign information, stretching back into the patient's history, without switching systems or making queries to a database. Mitem developers are using Flash MX Professional 2004, introduced this week, to build more enterprise-oriented and less entertainment-oriented applications, Fincham says. Blue Iris now has ways of presenting more information on a single screen, with the capability of expanding the area of the screen that has the information that the doctor is interested in... The change to a forms environment is one of many changes to Flash MX and other elements of the Macromedia Web-site product line introduced this week. Dreamweaver MX 2004 now supports Cascading Style Sheets, a World Wide Web Consortium standard since 1997, which sets and then implements a set of style conventions on a set of Web pages. Cascading Style Sheets 'are difficult to use to author pages. You need a WYSIWYG tool to use them,' says Meyrowitz, and Dreamweaver MX now provides one. 
Dreamweaver MX also includes MX Elements for HTML, or user interface components and effects that allow one view of information to dissolve or blur into another, and other special effects..." See the announcement: "Macromedia Announces Dreamweaver MX 2004. New Version Builds Foundation for Widespread Adoption of Cascading Style Sheets (CSS)." Related on CSS: "W3C CSS Working Group Publishes Three Cascading Style Sheets Working Drafts"; see general references in "W3C Cascading Style Sheets."

  • [September 02, 2003] "Typeless Schemas and Services." By Rich Salz. In O'Reilly WebServices.xml.com/ (September 02, 2003). ['Top web service thinkers are moving to a more document-centric approach.'] "... I want to look at what Noah Mendelsohn, Tim Ewald, and Don Box have been saying about W3C XML Schema and web services... Then next month, we'll look at how to use these ideas to drastically simplify WSDL. Noah Mendelsohn was one of the editors of XML Schema Part 1: Structures, the specification for the XML Schema Language. He's also an editor for many of the SOAP 1.2 specifications. Last October he spoke about XML Schema, with a talk titled 'what you might not know.' He said that schemas are used for three things: (1) Contracts: agreeing on formats. Think of this as distributed type safety, because your C/C++ compiler can't do type checking across a process boundary. (2) Tools: Know what the data will be. Think of this as making code wizards possible, automatically building bindings between data and your local programming language. (3) Validation: getting what was expected. Think of this as run-time type-checking, the cousin to the first item because you can't just blindly trust the sender. The difference between contracts and validation is important. The implementation of traditional RPC systems did not make this distinction, because RPC was all about preserving the function signature 'across the wire.' The contract specified what you were going to receive, and the validation decoded the network data and built the appropriate local datum... One of the great fissures in the XML community can be expressed as those who like the W3C XML Schema type system, and those who abhor it. Web services have, so far, been forced into the former camp, unnecessarily antagonizing the latter..."
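Mendelsohn's contract/validation distinction can be made concrete with a small sketch (not from the article): the contract states the fields and types both sides agreed on, while validation re-checks incoming data at run time, because the receiver cannot blindly trust the sender across a process boundary.

```python
# A minimal illustration of the contract / validation distinction:
# the contract is the agreed format; validation is the run-time
# type check applied to data actually received over the wire.
CONTRACT = {"part_number": str, "quantity": int}  # the agreed format

def validate(message, contract):
    """Run-time type check of a decoded message against the contract."""
    for field, expected_type in contract.items():
        if field not in message:
            raise ValueError("missing field: %s" % field)
        if not isinstance(message[field], expected_type):
            raise TypeError("%s should be %s" % (field, expected_type.__name__))
    return message

ok = validate({"part_number": "AB1234", "quantity": 5}, CONTRACT)
try:
    validate({"part_number": "AB1234", "quantity": "five"}, CONTRACT)
except TypeError as e:
    print("rejected:", e)
```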

  • [September 02, 2003] "What Interoperability Isn't." By Will Provost. In O'Reilly WebServices.xml.com/ (September 02, 2003). ['Will Provost examines apparent interoperability problems in web services.'] "The single buzzword 'interoperability' has grown to encompass a broad range of problems and potential problems. Sadly, it is no longer a precise term. To address the various so-called 'interoperability issues' and to solve problems, we must first break down this large set. In so doing we find that many issues are not fundamentally about interoperability over message content: some are more traditional enterprise integration problems, some are merely conventional. Reasonable expectations about Web-service interoperability can be set only after the range of true interoperability issues is understood. In this article we'll consider a few simple, common problems encountered in Web-service development. We'll challenge each of these apparent interoperability problems and sift them out into a few different classifications, many of which are really solved problems from other domains... multiple parties often face challenges in operating together, but not all such challenges fit the more narrow definition of 'interoperability issues' within the Web-services realm. Another case in point is a company that is considering adherence to a new industry specification, spelled out precisely in WSDL. Between their own existing practice, potential cooperating businesses, and the standard itself, analysts observe a number of mismatches: over basic data types, structures, and interaction styles. Probably the simplest specific problem is the difference between the WSDL-described part number type and the fields the company currently holds in its databases: one is 8 characters and one is 6. This is mundane, perhaps, but is just the tip of an iceberg, and at any rate, surely such gritty challenges are at the heart of the Web-service interoperability problem? 
Well, let's just say we're getting warmer. Yes, type mismatches are the stuff of interop failures. The question -- still, as with the naming-convention issue -- is, 'Do these mismatches result from diverse mappings from a shared statement -- such as a common WSDL descriptor?' They do not. In other words, what we're seeing in this case is not a problem of interoperability, but of integration. Yes, the same gruesome beast we've been fighting for all these years returns in a new disguise, and, in fact, our weapons will be the same ones..."
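The article's part-number example can be sketched as an integration shim rather than an interoperability fix. The padding rule below is invented for illustration; a real mapping would follow whatever the industry WSDL specifies for the 8-character field.

```python
# Sketch of the article's part-number mismatch as an integration problem:
# the company's 6-character codes must be mapped to the industry spec's
# 8-character field. The zero-padding rule is an invented example.
def to_wire_format(internal_part):
    if len(internal_part) != 6:
        raise ValueError("internal part numbers are 6 characters")
    return internal_part.rjust(8, "0")   # assumed: left-pad with zeros

def from_wire_format(wire_part):
    if len(wire_part) != 8:
        raise ValueError("wire part numbers are 8 characters")
    return wire_part.lstrip("0").rjust(6, "0")  # recover the 6-char code

assert from_wire_format(to_wire_format("AB1234")) == "AB1234"
```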

  • [September 02, 2003] "Documentum Debuts AIS. Documentum Improves Authoring-Tool Integration. [Content Management.]" By Mark Walter. In The Seybold Report: Analyzing Publishing Technologies Volume 3, Number 10 (August 29, 2003). ['Authoring Integration Services is a group of APIs that make it easy to tie XPress, InDesign and dozens of other tools into the Documentum repository. In effect, AIS confers first-class status on desktop authoring programs.'] "This month Documentum introduced Authoring Integration Services, three new programming interfaces that will make it much easier for developers to tie popular desktop editing and production tools into a Documentum repository. Developed by Documentum's Rich Media business unit, the new authoring services are part of a larger effort by that Documentum group to target enterprise publishing as a solution set, complementing its current efforts in marketing and e-learning... Since the late 1970s, metropolitan newspapers have had systems that let multiple writers and editors simultaneously edit different stories on the same page -- and edit those stories to specific news holes -- because their system manages each story as a separate component, and keeps the geometry of the page in the database, where it can be accessed by the editing client. XML-based reference-publishing applications typically integrate content management with desktop authoring tools, such as Arbortext Epic or SoftQuad XMetaL. Any catalog-production system worth its salt provides a tight link between the page makeup tool and the database, so that changes on the pages are reflected back into the database. Documentum's Authoring Integration Services provides three standard interfaces for integrating authoring tools -- FTP, WebDav and File Sharing -- that as a group go significantly beyond the basic ODMA and WebDav implementations that suffice for most vendors... 
Tony Freeman, executive VP at DeepBridge, a well-known publishing-systems integrator based in New York, described the impact of the new services from an integrator's perspective: 'AIS moves key content-creation applications like XPress and InDesign from afterthought to first-class status in the Documentum suite. Now we can take advantage of the mothership's extraordinary workflow systems, deep XML handling, and integrated Web content management. The next challenge will be to craft a complete editorial or copy-management system by tightly integrating CopyDesk, InCopy, and linked Word files. The Documentum APIs permit this functionality.' ... Documentum is lately establishing itself as a clear front-runner in content management for enterprise publishing. The breadth and depth of its Authoring Integration Services only increases its lead..."

  • [September 02, 2003] "Army Opens Up to Interoperability." By Dawn S. Onley. In Government Computer News (September 01, 2003). Lt. Gen. Steven W. Boutelle recently became a three-star general and is now the Army's CIO. He manages command, control, communications and computers programs. He says his goals include bolstering bandwidth and improving the networks that connect Army systems and other military units. [On what the Army is doing to promote interoperability:] "The Army, along with the other services, is focused on several things to support interoperability. One is communications systems. They must be able to communicate, be they radio, wire or digital. We're doing that through the Joint Tactical Radio System program. JTRS will bring together legacy waveforms and some new waveforms. That will give us a common radio. The second piece is common standards. Most systems and all the services have converged on IP Version 4 moving to Version 6. IPv6 will be very expensive and very painful, but with tremendous rewards. There will be quite a few years that we run both IPv4 and IPv6. But even when you get the communications systems talking and the protocols talking, the next thing you have to do is get down to the data elements and have them talk. The Defense Department has tried for many, many years to get common data elements. I think our fallback position is probably Extensible Markup Language (XML) for interoperability and data elements between different types of systems... We should never use a proprietary standard or a military standard, unless there is a terribly pressing reason that we cannot come up with a commercial standard to meet the needs. We need to use commercial standards to take advantage of the strides made by industry. Let industry make the investments to improve those items, and then we buy them off the shelves. Let industry push the upgrades..."

  • [September 02, 2003] "A Compact Syntax for W3C XML Schema." By Erik Wilde. From XML.com (August 27, 2003). ['Erik Wilde introduces a compact alternative syntax for W3C XML Schema.'] "W3C XML Schema (WXS) is a very powerful and also a rather complex schema language. One of the problems when working with WXS is the fact that it uses an XML syntax, which makes schemas verbose and hard to read. In this article I describe a compact text-based syntax for WXS, called XML Schema Compact Syntax (XSCS), which reuses well known syntactic constructs from DTDs; and I also present a Java implementation for converting the compact syntax to the XML syntax and vice versa. The W3C XML Schema specification is based on the model of schema components, which are abstract representations of various WXS constructs (such as simple types, complex types, attributes, elements, and various other things). W3C XML Schema also defines an XML representation of these components, but the separation of the specification into the abstract components and the XML syntax makes it obvious that WXS's XML syntax can be replaced. WXS XML syntax is meant to be consumed by machines; it can be parsed and transformed using standard XML technologies and thus fits well into the XML landscape. However, XML is verbose. And the WXS XML syntax is often criticized as being too complex. Indeed it is a complex language, but the syntactic complexity could be alleviated by introducing a new syntax which is more appropriate for human users. This approach has been inspired by RELAX NG Compact Syntax, which defines an alternative syntax for RELAX NG's XML syntax. RELAX NG's compact syntax has become quite popular and makes it much easier for beginners to start using the language and for experts to be able to deal with complex schemas. XSCS's goal is to accomplish the same for WXS. When working with WXS, the syntax in many cases is a problem, especially when schemas are large and hard to read. 
As a result, WXS development tools in most cases invent their own representation, often a graphical one... We've implemented Java software that transforms WXSs between XML syntax and XSCS and vice versa. The software is available from the XSCS Project Page. The XSCS parser consists of two components: the generated parser class and a class that generates a DOM representation in WXS XML syntax. When converting from XSCS to XML, a DOM tree of the schema is first generated and then written to a file using a standard DOM serializer module. From XML to XSCS, the process starts by parsing the XML file using a standard DOM parser and then handing over the generated DOM tree to the XSCS serializer component. All coding and tests have been conducted using the Xerces parser library; other DOM implementations could be used too. Adding XSCS support to existing WXS tools is easy to do because the syntax does not change any of the semantics of WXS. It simply requires an additional parser and serialization module. However, XML/XSCS conversion can also be done separately using our Java tools or an equivalent implementation. Currently, there is no WXS processor supporting XSCS syntax directly, but first and foremost XSCS is intended to be an interface for human users, who can use the existing Java tool to transform from and to XSCS..."
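A toy version of such a converter, in the spirit of the XSCS tool, might turn a compact, RELAX-NG-compact-style declaration into the far more verbose WXS XML syntax. The compact grammar accepted below is invented for illustration and is not the actual XSCS grammar from Wilde's article.

```python
# Toy compact-to-XML converter for one kind of declaration. The compact
# grammar ("element name {xsd:type}") is invented for illustration; the
# real XSCS grammar is defined in Wilde's article and tooling.
import re
import xml.etree.ElementTree as ET

XSD_NS = "http://www.w3.org/2001/XMLSchema"

def compact_to_xml(decl):
    """Convert e.g. 'element price {xsd:decimal}' to an xs:element node."""
    m = re.match(r"element\s+(\w+)\s*\{\s*xsd:(\w+)\s*\}", decl)
    if not m:
        raise ValueError("unrecognized declaration: %r" % decl)
    name, xsd_type = m.groups()
    el = ET.Element("{%s}element" % XSD_NS, name=name, type="xs:" + xsd_type)
    return ET.tostring(el, encoding="unicode")

print(compact_to_xml("element price {xsd:decimal}"))
```

Even this one-line declaration expands into a namespaced XML element, which is the verbosity gap the compact syntax is meant to close.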

  • [September 02, 2003] "A Report From Extreme Markup Languages 2003." By James Mason. From XML.com (August 27, 2003). ['Jim Mason, one of the co-chairs of the Extreme Markup Languages conference, reports on this recent annual gathering of deeply involved XML enthusiasts and innovators.'] "The annual family reunion for connoisseurs of structured documents, Extreme Markup Languages, gathered again in Montréal, August 4-8. Tommie Usdin (Mulberry Technologies) chaired, assisted by Debbie Lapeyre (Mulberry), Steve Newcomb (Coolheads Consulting), Michael Sperberg-McQueen (W3C), and me. What brings us back to Montréal every year? It's not just the food, which can indeed be wonderful. What we like most at Extreme is the opportunity for networking, controversy, and intellectual challenge. From Usdin's opening keynote, 'It's the Markup, Stupid', to Sperberg-McQueen's 'Playing by the Rules', the latest edition of his eagerly awaited annual wrap-up, the focus was on what makes markup work and how we can stretch its limits... Every year there are a few papers which stand out simply because they demonstrate neat tricks with markup applications. Thomas Passin (Mitretek Systems) showed how he uses topic maps to manage bookmarks across multiple web browsers. David Birnbaum (University of Pittsburgh) presented a technique for correlating medieval manuscript collections. And Ken Holman (Crane Softwrights) delivered a real tour de force: generating XSLT stylesheets using the techniques of 'literate programming'... A theme that ran through the whole conference, not just in the presentations but also in the conversations, was that some things seemed to be rushed into standardization before they were fully tested or their consequences understood. 
In 'Typing in Transformations' Jeni Tennison (Jeni Tennison Consulting) looked at the problems introduced into XSLT 2.0 by the requirements for strong typing; her comments were the cause of a frequent lament that ran something like, 'If Jeni's having problems, what hope is there for the rest of us?' A frequently discussed subject was the adverse consequences of letting XQuery have undue influence on datatypes, XPath, and other standards that are needed for more than just database operations. The number of papers looking at schema languages -- and proposing alternatives to W3C Schema -- seemed to reflect a general malaise. As an alternative, some even suggested extending DTDs to support datatypes, namespaces, and XSLT..." See: (1) the event listing for Extreme Markup Languages 2003; (2) the online conference proceedings.

  • [September 02, 2003] "XML Exposes Rich Network Data. Network, Systems Management Vendors Tackle Web Services Limitations." By Scott Tyler Shafer. In InfoWorld (September 01, 2003). "Until recently, the enterprise was primarily concerned about Web services development and deployment scenarios. Now, networking and systems management vendors are paving the way for companies to discover and manage Web Services at run time. Computer Associates, Hewlett-Packard, and IBM are among the vendors looking to speed the adoption of Web services architectures by building additional capabilities into existing systems management platforms. Meanwhile, at the network layer, F5 and Datapower Technology are altering network management platforms to gather richer information on the health and performance of various network elements via new XML and SOAP interfaces. The moves suggest vendors are addressing gaps in Web services management at deeper infrastructure layers. Accelerating the effort is an emerging standard called WSDM (Web Services Distributed Management), pronounced 'wisdom'. Proposed to the OASIS standards body in July, WSDM is a model for managing a Web Services-oriented architecture. Born of HP's work on the Web Services Management Framework, the WSDM specification is expected to be complete by January 2004. It will define a standard way to use and manage Web services. Enterprise customers will benefit from the ability to define and manage the performance and availability attributes of Web services architectures, according to Hewlett-Packard... CA, IBM, and HP are all racing to release management modules that will help enterprises manage existing and future Web Services. Hewlett-Packard is working on HP OpenView Web Service Management Engine, which was developed in March and is the foundation for the proposed WSDM standard. Due out in late fall, the management engine is described by Smith as a collection of tools that manage Web services environments. 
Specifically, the tool allows an enterprise to provision a Web services-based application and create SLAs on performance and availability. It also determines the authentication and authorization requirements for subscribing to the new application. The engine itself will intercept packets and route them to the appropriate requested Web service. CA for its part is working on Unicenter WSDM. Currently in beta with customers, the platform also focuses on performance and availability of Web services. 'What we're focused on building with WSDM is a standard set of metrics regarding health and availability of Web services applications,' Hochhauser said. 'With WSDM, a business partner can understand what is happening on both their side of the application and the business partner's side.' IBM is adding WSDM support functions to its Tivoli products. According to David Cox, an architect at Tivoli Systems, Tivoli has created an events and monitoring application that measures system and transaction performance. It will be ready at the end of the year..."
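The kind of health and availability data these tools gather can be sketched generically. The code below is not the WSDM specification, just an illustration of collecting availability and latency metrics for a managed service; `probe` stands in for a real SOAP or HTTP call to the service endpoint.

```python
# Generic sketch (not the WSDM specification) of health and availability
# metrics a management engine might collect for a web service. `probe`
# stands in for an actual SOAP/HTTP call to the managed endpoint.
import time

class ServiceMonitor:
    def __init__(self, probe):
        self.probe = probe          # callable: True if the service responds
        self.checks = 0
        self.successes = 0
        self.latencies = []

    def check(self):
        start = time.perf_counter()
        try:
            ok = self.probe()
        except Exception:
            ok = False              # a failed call counts against availability
        self.latencies.append(time.perf_counter() - start)
        self.checks += 1
        self.successes += bool(ok)
        return ok

    def availability(self):
        return self.successes / self.checks if self.checks else 0.0

monitor = ServiceMonitor(probe=lambda: True)   # stand-in for a live endpoint
for _ in range(4):
    monitor.check()
print("availability: %.0f%%" % (100 * monitor.availability()))
```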

  • [September 02, 2003] "New Office Locks Down Documents." By David Becker. In CNET News.com (September 02, 2003). "As digital media publishers scramble to devise a foolproof method of copy protection, Microsoft is ready to push digital rights management into a whole new arena -- your desktop. Office 2003, the upcoming update of the company's market-dominating productivity package, for the first time will include tools for restricting access to documents created with the software. Office workers can specify who can read or alter a spreadsheet, block it from copying or printing, and set an expiration date. The technology is one of the first major steps in Microsoft's plan to popularize Windows Rights Management Services, a wide-ranging plan to make restricted access to information a standard part of business processes. Analysts say it represents a badly needed new avenue for boosting sales of Microsoft's server software and an opportunity to lock out competitors, including older versions of Office. It also gives businesses that skipped on the last round or two of Office upgrades a new reason to bite this time... The new rights management tools splinter to some extent the long-standing interoperability of Office formats. Until now, PC users have been able to count on opening and manipulating any document saved in Microsoft Word's '.doc' format or Excel's '.xls' in any compatible program, including older versions of Office and competing packages such as Sun Microsystems' StarOffice and the open-source OpenOffice. But rights-protected documents created in Office 2003 can be manipulated only in Office 2003. 'There's certainly a lock-in factor,' said Matt Rosoff, an analyst with Directions on Microsoft. 'Microsoft would love people to use Office and only Office. They made very sure that Office has these features that nobody else has.' 
Information Rights Management (IRM) tools will be included in the professional versions of all Office applications, including the Word processor and Excel spreadsheet programs. To use IRM features, businesses will need a server running Microsoft's Windows Server 2003 operating system and Windows Rights Management Services software. The server software will record permission rules set by the document creator, such as other people authorized to view the document and expiration dates for any permissions. When another person receives that document, they briefly log in to the Windows Rights Management server -- over the Internet or a corporate network -- to validate the permissions..." See other details in "Microsoft Announces Windows Rights Management Services (RMS)."
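The server-side check described above can be modeled in a few lines. This is an invented simplification, not Microsoft's actual Rights Management Services protocol: the server stores permission rules per document, and a recipient's requested action is validated against them, including the expiration date.

```python
# Simplified model (invented for illustration) of the rights-management
# flow: the server records per-document permission rules, and each
# requested action is checked against the rules and the expiration date.
from datetime import date

RIGHTS = {
    "q3-forecast.xls": {
        "users": {"alice": {"read", "edit"}, "bob": {"read"}},
        "expires": date(2003, 12, 31),
    },
}

def is_allowed(doc, user, action, today):
    rule = RIGHTS.get(doc)
    if rule is None:
        return True                      # unprotected document
    if today > rule["expires"]:
        return False                     # all permissions have lapsed
    return action in rule["users"].get(user, set())

print(is_allowed("q3-forecast.xls", "bob", "read", date(2003, 10, 1)))
print(is_allowed("q3-forecast.xls", "bob", "edit", date(2003, 10, 1)))
print(is_allowed("q3-forecast.xls", "alice", "edit", date(2004, 1, 5)))
```

Note how the lock-in the analysts describe falls out of this design: a reader that cannot reach the rights server, or does not speak its protocol, cannot evaluate the rules at all.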

  • [September 02, 2003] "Location Object Authorization Policies." By Hannes Tschofenig (Siemens AG), Jorge R. Cuellar (Siemens AG), John B. Morris, Jr (Director, Internet Standards, Technology & Policy Project, Center for Democracy and Technology), Henning Schulzrinne (Columbia University, Department of Computer Science), and James M. Polk (Cisco Systems). IETF GEOPRIV Working Group, Internet Draft. Reference: 'draft-tschofenig-geopriv-authz-00.txt'. August 2003, expires: February 2004. 18 pages. "The policy rules defined in this document extend the Extensible Markup Language (XML) Configuration Access Protocol (XCAP) and in particular the XML schema in "Extensible Markup Language (XML) Configuration Access Protocol (XCAP) Usages for Setting Presence Authorization." Geopriv adds authorization policies beyond what is offered in XCAP-USAGE. The XML schema in XCAP-USAGE is extended with Geopriv specific content as described in this document. The authorization policies described in this document try to satisfy the Elements E through L defined in the Core Draft. Section 2 enumerates the Elements E through L with a description of a possible way to address them. This includes XML schema snippets and examples. Section 3 in a future version will provide a full XML schema..." See: (1) "IETF Publishes Internet Drafts for XML Configuration Access Protocol (XCAP)"; (2) SIP for Instant Messaging and Presence Leveraging Extensions (simple) IETF Working Group. [cache]

  • [September 02, 2003] "A Presence-based GEOPRIV Location Object Format." By Jon Peterson (NeuStar, Inc). IETF GEOPRIV Working Group, Internet Draft. Reference: 'draft-peterson-geopriv-pidf-lo-01'. September 2, 2003, expires March 2, 2004. 16 pages. Section 2.2.3 provides the XML Schema definition for this extension to PIDF. "This document describes an object format for carrying geographical information on the Internet. This location object extends the Presence Information Data Format (PIDF), which was designed for communicating privacy-sensitive presence information and which has similar properties... Geographical location information describes a physical position in the world that may correspond to the past, present or future location of a person or device. Numerous applications used in the Internet today benefit from sharing location information (including mapping/navigation applications, 'friend finders' on cell phones, and so on). However, such applications may disclose the whereabouts of a person in a manner contrary to the user's preferences. Privacy lapses may result from poor protocol security (which permits eavesdroppers to capture location information), inability to articulate or accommodate user preferences, or similar defects common in existing systems. The privacy concerns surrounding the unwanted disclosure of a person's physical location are among the more serious that confront users on the Internet. Consequently, a need has been identified to convey geographical location information within an object that includes a user's privacy and disclosure preferences and which is protected by strong cryptographic security. Previous work has observed that this problem bears some resemblance to the general problem of communicating and securing presence information on the Internet. Presence provides a real-time communications disposition for a user, and thus has similar requirements for selective distribution and security. 
Therefore, this document extends the XML-based Presence Information Data Format (PIDF) to allow the encapsulation of location information within a presence document. This document does not invent any format for location information itself. Numerous already existing formats based on civil location, spatial coordinates, and the like have been developed in other standards fora. Instead, this document defines an object that is suitable for both identifying and encapsulating pre-existing location information formats, and for providing adequate security and policy controls to regulate the distribution of location information over the Internet..." See also "Presence Information Data Format (PIDF)." [cache]

  • [September 02, 2003] "RPID: Rich Presence Information Data Format." Edited by Henning Schulzrinne (Department of Computer Science, Columbia University). With Vijay Gurbani (Lucent), Paul Kyzivat (Cisco Systems), and Jonathan Rosenberg (dynamicsoft). Internet Engineering Task Force, Internet Draft. Reference: 'draft-ietf-simple-rpid-00.txt'. July 31, 2003, expires: January 2004. 20 pages. Section 8 provides XML Schema Definitions. "The Rich Presence Information Data Format (RPID) adds elements to the Presence Information Data Format (PIDF) that provide additional information about the presentity and its contacts. This information can be translated into call routing behavior or be delivered to watchers, for example. The information is designed so that much of it can be derived automatically, e.g., from calendar files or user activity... The PIDF definition describes a basic presence information data format for exchanging presence information in CPIM-compliant systems. It consists of a <presence> root element, zero or more <tuple> elements carrying presence information, zero or more <note> elements and zero or more extension elements from other name spaces. Each tuple defines a basic status of either 'open' or 'closed'. This document provides additional status information for presentities and defines a Rich Presence Information Data Format for Presence (RPID) to convey this information. This extension has three main goals: (1) Provide rich presence indication that is at least as powerful as common commercial presence systems. Such feature-parity simplifies transition to CPIM-compliant systems, both in terms of user acceptance and protocol conversion. (2) Maintain backwards-compatibility with PIDF, so that PIDF- only watchers and gateways can continue to function properly, naturally without access to the functionality described here. We make no assumptions how the information in the RPID is generated. 
Experience has shown that users are not always diligent about updating their presence status. Thus, we want to make it as easy as possible to derive RPID information from other information sources, such as calendars, the status of communication devices such as telephones, typing activity and physical presence detectors as commonly found in energy-management systems. The information in a presence document can be generated by a single entity or can be composed from information published by multiple entities. Many of the elements correspond to data commonly found in personal calendars. Thus, we attempted to align some of the extensions with the usage found in calendar formats such as iCal and xCal..." See also "Presence Information Data Format (PIDF)." [cache]

Earlier Articles August 2003

  • [August 29, 2003] "MIT to Uncork Futuristic Bar Code." By Alorie Gilbert. In CNET News.com (August 29, 2003). "A group of academics and business executives is planning to introduce next month a next-generation bar code system, which could someday replace with a microchip the series of black vertical lines found on most merchandise. The so-called EPC Network, which has been under development at the Massachusetts Institute of Technology for nearly five years, will make its debut in Chicago on Sept. 15, at the EPC Symposium. At that event, MIT researchers, executives from some of the largest global companies, and U.S. government officials intend to discuss their plans for the EPC Network and invite others to join the conversation. The attendee list for the conference reads like a who's who of the Fortune 500: Colgate-Palmolive, General Mills, GlaxoSmithKline, Heinz, J.C. Penney, Kraft Foods, Nestle, PepsiCo and Sara Lee, among others. An official from the Pentagon is scheduled to speak, along with executives from Gillette, Johnson & Johnson, Procter & Gamble and United Parcel Service... EPC stands for electronic product code, which is the new product numbering scheme that's at the heart of the system. There are several key differences between an EPC and a bar code. First, the EPC is designed to provide a unique serial number for every item in the system. By contrast, bar codes only identify groups of products. So, all cans of Diet Coke have the same bar code more or less. Under EPC, every can of Coke would have a one-of-a-kind identifier. Retailers and consumer-goods companies think a one-of-a-kind product code could help them to reduce theft and counterfeit goods and to juggle inventory more effectively. 'Put tags on every can of Coke and every car axle, and suddenly the world changes,' boasts the Web site of the Auto-ID Center, the research group at MIT leading the charge on the project. 'No more inventory counts. No more lost or misdirected shipments. 
No more guessing how much material is in the supply chain -- or how much product is on the store shelves.' Another feature of the EPC is its 96-bit format, which some say is large enough to generate a unique code for every grain of rice on the planet... Working on the standards problem is AutoID, a new arm of the Uniform Code Council, the nonprofit that administers the bar code, or Universal Product Code. AutoID, announced in May, plans to pick up where MIT's Auto-ID Center leaves off, assigning codes, ironing out technical standards, managing intellectual property rights, publishing specifications, and providing user support and training..." See: (1) following bibliographic entry on PML servers; (2) Inaugural EPC Executive Symposium, September 15 - 17, 2003; (3) "Physical Markup Language (PML) for Radio Frequency Identification (RFID)."

  • [August 29, 2003] "PML Server Developments." By Mark Harrison, Humberto Moran, James Brusey, and Duncan McFarlane. White Paper. Auto-ID Centre, Institute for Manufacturing, University of Cambridge, UK. June 1, 2003. 20 pages. "This paper extends our previous white paper on our PML Server prototype work. We begin with a brief review of the Auto-ID infrastructure, then consider the different types of essential data which could be stored about a tagged physical object or which relate to it. In our data model we distinguish between data properties at product-class level and at instance-level. Product-class properties such as mass, dimensions, handling instructions apply to all instances of the product class and therefore need only be stored once per product class, using a product-level EPC class as the lookup key. Instance-level properties such as expiry date and tracking history are potentially unique for each instance or item and are logically accessed using the full serialised EPC as the lookup key. We then discuss how a PML Service may use data binding tools to interface with existing business information systems to access other properties about an object besides the history of RFID read events which were generated by the Auto-ID infrastructure. The penultimate section analyses complex queries such as product recalls and how these should be handled by the client as a sequence of simpler sub-queries directed at various PML services across the supply chain. Finally, we introduce the idea of a registry to coordinate the fragmented PML Services on a supply chain in order to perform tracking and tracing more efficiently and facilitate a complex query, which requires iterative access to multiple PML Services in order to complete it... The key to the Auto-ID architecture is the Electronic Product Code (EPC) which extends the granularity of identity data far beyond that which is currently achieved by most bar code systems in use today. 
The EPC contains not only the numeric IDs of the manufacturer and product type (also known as stock-keeping unit or SKU) but also a serial number for each item or instance of a particular product type. Whereas two apparently identical instances or items of the same product type may today have the same bar code, they will in future have subtly different EPCs, which allows each one to have a unique identity and to be tracked independently. In order to minimise the costs of Radio Frequency Identification (RFID) tags, the Auto-ID Centre advocates that only a minimal amount of data (the EPC) should be stored on the tag itself, while the remaining data about a tagged object should be held on a networked database, with the EPC being used as a database key to look up the data about a particular tagged object. Within the Auto-ID infrastructure, the Savant, Object Name Service (ONS) and PML Service are all networked databases of some form. Edge Savants interface directly with RFID readers and other sensors and generate Auto-ID event data, typically consisting of triples of three values (Reader EPC, Tag EPC, Timestamp) and an indication of whether the tag has been 'added' or 'removed' from the field of the tag readers. The Object Name Service (ONS) is an extension of the internet Domain Name Service (DNS) and provides a lookup service to translate an EPC number into an internet address where the data can be accessed. Data about the tagged object is communicated using the Physical Markup Language (PML) and the PML Service provides additional information about the tagged object from network databases. The Physical Markup Language (PML) does not specify how the data should be stored, only how it should be communicated. 
It should be possible for many different types of existing information systems to act as data sources to the PML Service, and for the data to be queried and communicated using the PML language and by reference to the PML schema rather than by reference to the particular structure/schema of the various underlying databases in which the values are actually stored..." See "Physical Markup Language (PML) for Radio Frequency Identification (RFID)."

  • [August 29, 2003] "The End of Systems Integrators?" By Erika Morphy. In CRMDaily.com News (August 29, 2003). "As with the applications themselves, verticalization has become the latest thing in integration technology, says Steve Bonadio, senior program director in Meta Group's enterprise application strategies group. Siebel continues to expand development of its integration tool, UAN, as does SAP with its respective integration product package, Xapps. Even PeopleSoft has gotten into the act, rolling out its version of an integration on-ramp this week -- Process Integration Packs, or 'PIPs,' for CRM. The premise behind each of these products is roughly the same: to help customers cut down on integration costs by providing standardized interfaces for business processes and discrete systems and applications. This, of course, was once strictly the domain of systems integrators. Could it be, CIOs of every stripe and size wonder, that their dependence on these service providers will diminish -- if not end -- as more application providers start to pay attention to integration linkages and hooks? The short answer: Not likely. The longer answer: There are other competitive and market-development pressures that are eroding systems integrators' stranglehold on IT budgets... Products such as UAN, Xapps and PIPs are making life easier for customers, which is good, as that was their intent. 'Application vendors talk about the fact that integration is too costly, and that is one reason why many companies are hesitant to deploy more enterprise software,' Gartner research analyst Ted Kempf told NewsFactor's CIO Today Magazine. 'So they try to make it easier by providing integration packages.' ... Siebel's Universal Application Network was designed to do just that, Bharath Kadaba, Siebel's vice president and technical manager of UAN, told NewsFactor's CIO Today. 
Rather than having a systems integrator, such as webMethods, code all information about business objects and processes into an integration platform by hand, UAN provides models for doing so... First introduced last year, UAN is at heart a tool that is predicated on partnerships with independent enterprise-application integration vendors, such as webMethods and Tibco. Now, Siebel is broadening its functionality to provide vertical expertise. Last week, it announced the availability of UAN integration applications for the communications, media and energy industries on the webMethods integration platform... It is a similar story with SAP's xApps and its renamed tech platform, NetWeaver. NetWeaver leverages Web-services technology to integrate the xApps application with mySAP and other software. The applications, or xApps, automate specific business processes, such as project management. 'What SAP is saying is that the next generation of applications, as far as they are concerned, will not be applications for accounting or CRM, but will be end-to-end business applications -- or even cross-multiple business applications,' Gartner research analyst Simon Hayward told NewsFactor's CIO Today. SAP's first Xapps, X-Application Resource and Program Management, aligns corporate resources to specific projects. It is the perfect application for pharmaceutical companies, Tim Bussiek, vice president of xApps marketing, told NewsFactor's CIO Today. Typical big pharma companies might launch 5,000 projects each year, all very expensively staffed and equipped. SAP's new xApps tool allows them to evaluate these projects on an ongoing basis, Bussiek said..."

  • [August 27, 2003] "BPEL and Business Transaction Management: Choreology Submission to OASIS WS-BPEL Technical Committee." By Tony Fletcher, Peter Furniss, Alastair Green, and Robert Haugen (Choreology Ltd). Copyright (c) Choreology Ltd, 2003, subject to OASIS IPR Policy. Working paper presented to the OASIS Web Services Business Process Execution Language Technical Committee. "An overall motivation for this submission is given in an article by one of the authors, Alastair Green, in the September issue of Web Services Journal (see following bibliographic entry). From the 27-August-2003 posting of Peter Furniss: "... [WRT] the announcements of a raft of issues on "business transaction management". These all relate to the long-promised submission from Choreology on how to handle transactions in BPEL... The submission gives the background and context for the BTM issues and proposes syntax constructs as solutions for [items] 54 to 59" in the issues list... "BTM Issue A (BPEL issue 53), Desirable for WS-BPEL to include Business Transaction Management (BTM) programming constructs which are compatible with WS-T, BTP and WS-TXM, "There are three multi-vendor specifications which address the needs of business transaction management for Web Services: Business Transaction Protocol 1.0 (OASIS Committee Specification, June 2002); WS-Transaction (proprietary consortium, August 2002), and the very recently published WS-TXM (proprietary consortium, August 2003). In our view BTP Cohesions, WS-T Business Activity, and WS-TXM Long-Running Actions are the most relevant aspects of these specifications for WS-BPEL. These aspects overlap to a very high degree, each effectively utilizing a two-phase (promise/decide) outcome protocol. (We should emphasize that there has been little time to analyze or assimilate WS-TXM, so this is a provisional conclusion with respect to that specification). 
WS-BPEL should be equipped with the ability to create and terminate business transactions, and to define process participation in such transactions, in a way which is compatible with the intersection of these three capabilities. This will minimize dependence on future standardization efforts in the BTM area... It is should be noted that a 'business transaction' is normally performed in support of some economic transaction -- that it coordinates actions that have an effect on the parties and their relationships that go beyond the lifetime of the transaction itself. Since a BPEL process cannot directly manipulate data with a lifetime longer than the process, but always delegates to a web-service, the invoked web-services will either themselves be participants in the business transaction (strictly, the invocation will trigger the registration of participants) or the BPEL process will register as a participant and then make non-transaction invocations on other web-services. In the former case, the invoked web-services are 'business-transaction aware'; the BPEL process will export the context to it and the web-services will implement the transactional responsibilities internally. Similarly, a BPEL process, as an offerer of a web-service, may import a context from a non-BPEL application -- in which case it is itself a business-transaction aware web-service from the perspective of its caller -- and either registers as a participant or passes the context on in its own invocations..." General references in "Business Process Execution Language for Web Services (BPEL4WS)."

  • [August 26, 2003] "Grid Security: State of the Art. Expanded Grid Security Approaches Emerge." By Anne Zieger (Chief Analyst, PeerToPeerSource.com) From IBM developerWorks (August 2003). "Today, emerging grid security efforts are also beginning to address application and infrastructure security issues, including application protection and node-to-node communications. Among other advances, emerging grid security approaches are integrating Kerberos security with PKI/X.509 mechanisms, securing peer connections between network nodes and better protecting grid users and apps from malicious or badly formed code. One of the best-known security approaches for Grid computing can be found within the Globus Toolkit, a widely used set of components used for building grids. The Toolkit, developed by the Globus Project, offers authentication, authorization, and secure communications through its Grid Security Infrastructure (GSI). The GSI uses public key cryptography, specifically public/private keys and X.509 certificates, as the basis for creating secure grids. X.509, perhaps the most widely implemented standard for defining digital certificates, is very familiar to enterprise IT managers, and already supported by their infrastructure. At the same time, it's flexible, and can be adopted neatly for use in the grid. Among the GSI's key purposes are to provide a single sign-on for multiple grid systems and applications; to offer security technology that can be implemented across varied organizations without requiring a central managing authority; and to offer secure communication between varied elements within a grid... Grid security research is just beginning to address the operational and policy issues of concern to enterprise IT managers. Going forward, however, grid security efforts should embrace technologies rapidly, while they're still at the cutting edge of mainstream corporate development. 
For example, in recent months, the Global Grid Forum has begun to look at security in a grid-based Web services environment. GGF is working with Open Grid Services Architecture (OGSA), a proposed Grid service architecture based on the integration of grid and Web services concepts and technologies. Members of the OGSA security group plan to realize OGSA security using the WS-Security standard backed by IBM, Microsoft, and VeriSign Inc. Among other features, WS-Security offers security enhancements for SOAP messaging and methods for encoding X.509 certificates and Kerberos tickets. While the OGSA security group's work is in its early stages, its final work product should be yet another factor contributing to grid's increasing acceptance in enterprise life. With critical technologies like Web services being securely grid-enabled, grid technology should soon be central to just about any enterprise's networking strategy..." Article also in PDF format.

  • [August 26, 2003] "RSS Utilities: A Tutorial." By Rodrigo Oliveira (Propertyware). From Java Developer Services' technical articles series. August 2003. "RSS ('Really Simple Syndication') is a web content syndication format. RSS is becoming the standard format for syndicating news content over the web. As part of my recent contract with Sun Microsystems, I was tasked with the development of a JSP Tag Library to be used by anybody with a basic understanding of RSS, JavaServer Pages, and HTML. The taglib is mostly geared towards non-technical editors of web sites that use RSS for aggregating news content. My goal was to develop a JSP tag library that would simplify the use of RSS content (versions 0.91, 0.92 and 2.0) in web pages. The RSS Utilities Package is the result of that project. It contains a set of custom JSP tags which make up the RSS Utilities Tag library, and a flexible RSS Parser. This document describes how to use the parser and the library provided in the RSS Utilities Package. The zip [distribution] file contains a jar file, rssutils.jar, providing the classes needed to use the utilities, and a tld file rssutils.tld which defines JSP custom tags for extracting information from RSS documents... The parser was a by-product of the project. Although the parser was developed with the tag library in mind, it is completely self-contained, and it can be used in Java applications. To do so, however, you obviously need to know how to write at least basic Java code; if you know how to write Hello World in the Java language, you are probably all set... The RSS object generated by the parser is a Java object representation of the RSS document found at the provided URL [http://mydomain.com/document.rss]. Use the methods provided by the RSS object to get a handle to other RSS objects, such as Channels and Items. The RssParser can also parse File objects and InputStream objects... 
RSS provides a simple way to add and maintain news -- as well as other content -- on your web site, from all over the web. Even though RSS is a simple XML format, parsing and extracting data out of XML documents hosted elsewhere on the web can be a bit tricky-- or at least tedious -- if you have to do it over and over again. The RSS Utilities Package leverages Custom Tag and XML Parsing technologies to make the "Real Simple Syndication" format live up to its name..." The first release of the RSS Utilities Package is available for download. General references in "[RDF Site Summary | Real Simple Syndication] (RSS)."

  • [August 26, 2003] "Integrating CICS Applications as Web Services. Extending the Life of Valuable Information." By Russ Teubner. In WebServices Journal Volume 3, Issue 9 (September 2003), pages 18-22. With 4 figures. ['Web services promise to lower the costs of integration and help legacy applications retain their value. This article explains how you can use them to integrate mainframe CICS applications with other enterprise applications.'] "IBM's CICS (Customer Information Control System) is a family of application servers that provides online transaction management and connectivity for legacy applications. There are two basic models for integrating CICS applications as Web services, both of which include the use of adapters. The differences between these models depend upon where the Web services exist, how they operate under the covers, and the types of applications you want to integrate. In this article, we refer to these models as connectors and gateways. Connectors run on the mainframe and can use native interfaces that permit seamless integration with the target application. Gateways run off the mainframe on middle-tier servers and often use traditional methods such as screen-scraping... Connectors allow you to transform your legacy applications into Web services without requiring the use of additional hardware, without changes to the legacy application, and without falling back upon brittle techniques like screen scraping. Compared to gateways, connectors yield better performance by running on the host, and more reliable operation due to the elimination of the many layers data must pass through due to screen-scraping... Unlike connectors, gateways typically run on a physical or logical middle tier. Where the gateway runs is important because there are so few options for accessing the host from the middle-tier servers, which means gateways usually involve some form of screen-scraping. 
The solution is tightly coupled in that the integration is between the gateway and a specific application. Any changes to the application will break the integration. When gateways communicate with terminal-oriented legacy applications they open a terminal session with the legacy application, send a request to the application, receive the terminal datastream, use HLLAPI to capture the screen data, process the screen data, convert the contents to XML, and ship the XML document to the requester... IBM's CICS Transaction Server includes facilities that allow third-party vendors to create connectors that can immediately enable legacy applications as Web services. These facilities provide additional benefits over gateways, such as improved performance and increased stability compared to their screen-scraping counterparts. By using the same industry-standard technologies as Web services, some connectors make it possible for applications to transparently invoke CICS transactions within a Web services architecture and receive the resulting data as well-formed XML. For organizations that want to retain the value of their CICS applications, the combination of XML-enabling connectors and Web services offers a practical and powerful integration solution. Web services are not a trend, but an industry-wide movement that can provide a long-term solution for companies that want to integrate legacy applications and data with new e-business processes. In the end, companies need to assess the value of the data contained in their CICS applications. Most companies have already determined that such data is highly valuable and they are looking for ways to preserve their investments. Given that recent surveys show the top strategic priorities of CIOs and CTOs are integrating systems and processes, the use of Web services for legacy integration will grow rapidly..." [alt URL]

  • [August 26, 2003] "Structured Documents: Searching XML Documents via XML Fragments." By David Carmel, Yoelle S. Maarek, Matan Mandelbrod, Yosi Mass, and Aya Soffer (IBM Research Lab in Haifa, Mount Carmel, Haifa). Presented July 30, 2003 at the 26th Annual International ACM SIGIR Conference [ACM Conference on Research and Development in Information Retrieval] (Toronto, Canada). Published in the the Conference Proceedings, pages 151-158. "Most of the work on XML query and search has stemmed from the publishing and database communities, mostly for the needs of business applications. Recently, the Information Retrieval community began investigating the XML search issue to answer information discovery needs. Following this trend, we present here an approach where information needs can be expressed in an approximate manner as pieces of XML documents or 'XML fragments' of the same nature as the documents that are being searched. We present an extension of the vector space model for searching XML collections via XML fragments and ranking results by relevance. We describe how we have extended a full-text search engine to comply with this model. The value of the proposed method is demonstrated by the relative high precision of our system, which was among the top performers in the recent INEX workshop. Our results indicate that certain queries are more appropriate than others for the extended vector space model. Specifically, queries with relatively specific contexts but vague information needs are best situated to reap the benefit of this model. Finally our results show that one method may not fit all types of queries and that it could be worthwhile to use different solutions for different applications.' .. We present here an approach for XML search that focuses on the informational needs of users and therefore addresses the search issue from an IR viewpoint. 
In the same spirit as the vector space model, where free-text queries and documents are objects of the same nature, we suggest that queries be expressed in the same form as XML documents, so as to compare 'apples and apples'. We present an extension of the vector space model that integrates a measure of similarity between XML paths, and define a novel ranking mechanism derived from this model. We evaluated several implementations of our model on the INEX collection and obtained good evidence that the use of XML fragments with an extended vector space model is a promising approach to XML search. By sticking to the well-known and tested model where the query and document are of the same form, we were able to achieve very high precision on the INEX topics. The initial results also indicate that queries that are well specified in terms of the required contexts are best situated to reap the benefit of more complex context resemblance measures and statistics. However, these results should still be considered as initial due to the limited set of queries studied here. A deeper analysis and more than a few, almost 'anecdotal', queries should be discussed as soon as larger test collections become available. Finally, we are convinced that one method will not fit all types of queries and that it could be worthwhile to use different solutions for different types of applications..." See also the earlier paper online: "An Extension of the Vector Space Model for Querying XML Documents via XML Fragments," by David Carmel, Nadav Efraty, Gad M. Landau, Yoelle S. Maarek, and Yosi Mass [cache]
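The paper's central idea — a vector space score that discounts term matches by how well their XML contexts resemble each other — can be illustrated with a small sketch. The `path_resemblance` measure and the weights below are invented for illustration; they are not the authors' actual formulas:

```python
# Hedged sketch of an extended vector space model for XML fragments:
# terms carry a context path, and matching terms contribute their weight
# product scaled by a path-resemblance factor.

def path_resemblance(query_path, doc_path):
    """Fraction of query path tags matched, in order, within the document path."""
    qi = 0
    for tag in doc_path:
        if qi < len(query_path) and tag == query_path[qi]:
            qi += 1
    return qi / len(query_path) if query_path else 1.0

def score(query_terms, doc_terms):
    """Vector-space dot product where each (term, path) pair is a dimension."""
    total = 0.0
    for (q_term, q_path), q_w in query_terms.items():
        for (d_term, d_path), d_w in doc_terms.items():
            if q_term == d_term:
                total += q_w * d_w * path_resemblance(q_path, d_path)
    return total

# Query fragment: the word 'retrieval' expected under /article/title
query = {("retrieval", ("article", "title")): 1.0}
doc = {("retrieval", ("article", "section", "title")): 0.8,
       ("database", ("article", "title")): 0.5}
print(round(score(query, doc), 3))  # → 0.8
```

A query path that is a subsequence of the document path scores full resemblance here, which loosely reflects the paper's observation that well-specified contexts benefit most from such measures.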

  • [August 26, 2003] "Development of SNMP-XML Translator and Gateway for XML-Based Integrated Network Management." By Jeong-Hyuk Yoon, Hong-Taek Ju, and James W. Hong. In International Journal of Network Management Volume 13, Issue 4 (July/August 2003), pages 259-276. "The research objective of our work is to develop a SNMP MIB to XML translation algorithm and to implement an SNMP-XML gateway using this algorithm. The gateway is used to transfer management information between an XML-based manager and SNMP-based agents. SNMP is widely used for Internet management, but SNMP is insufficient to manage continuously expanding networks because of constraints in scalability and efficiency. XML based network management architectures are newly proposed as alternatives to SNMP-based network management, but the XML-based Network Management System (XML-based NMS) cannot directly manage legacy SNMP agents. We also implemented an automatic specification translator (SNMP MIB to XML Translator) and an SNMP-XML gateway... We developed a gateway which translates messages between SNMP and XML/HTTP. For this gateway, we proposed a translation algorithm which changes SNMP MIB into the XML Schema as a method of specification translation, and implemented an MIB to XML translator which embodied the algorithm. Also, we defined the operation translation methods for interaction translation. SNMP has limits in scalability and efficiency when managing increasingly large networks. Research on XML-based NMS is evolving to solve these shortcoming of SNMP-based NMS. XML-based NMS uses XML in network management to pass management data produced in large networks. XML-based NMS delivers management data in the form of an XML document over the HTTP protocol. This method is efficient for transferring large amounts of data. However, an XMLbased NMS cannot manage the legacy SNMP agent directly. 
If a manager cannot communicate with an SNMP agent, it is not practical in the real world where SNMP is used worldwide. Because most Internet devices are equipped with an SNMP agent, and network management was performed by the agent, we studied how to manage the legacy SNMP agent while simultaneously using the advantages of XML-based network management. Because of the excellent compatibility and user-friendly features of XML, integration of data into XML is expected to accelerate in the future. Specifically, in order to use XML as middleware for information transmission between different systems, a standard translation method to change SNMP MIB to XML within transmission of information for network and system management is necessary. In future work, we need to enhance the translation algorithm through a performance evaluation of the algorithm. To improve scalability, we need to study how one manager can manage many SNMP agents distributed across large networks such as enterprise networks. Distributed processing is the method presented here. For example, one XML-based manager governing several distributed SNMP-XML gateways through networks can expand the scope of management..."
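The specification-translation step the authors describe — changing SNMP MIB object definitions into XML Schema declarations — amounts, at its simplest, to a mapping from SMI types to XSD types. The type table and MIB entries below are an illustrative guess, not the paper's actual algorithm:

```python
# Hedged sketch of SNMP-MIB-to-XML-Schema specification translation.
# The SMI-to-XSD mapping here is invented for illustration.
SMI_TO_XSD = {
    "INTEGER": "xsd:int",
    "OCTET STRING": "xsd:string",
    "Counter32": "xsd:unsignedInt",
    "TimeTicks": "xsd:unsignedInt",
}

def mib_object_to_xsd(name, smi_type):
    """Emit an XML Schema element declaration for one MIB object."""
    xsd_type = SMI_TO_XSD.get(smi_type, "xsd:string")  # default fallback
    return f'<xsd:element name="{name}" type="{xsd_type}"/>'

# A few objects from the standard MIB-II 'system' and 'interfaces' groups
mib = [("sysDescr", "OCTET STRING"), ("sysUpTime", "TimeTicks"), ("ifNumber", "INTEGER")]
for name, smi_type in mib:
    print(mib_object_to_xsd(name, smi_type))
```

A real translator would also have to render MIB tables, OIDs, and access constraints into schema structure, which is where most of the algorithm's complexity lies.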

  • [August 26, 2003] "Universal Plug and Play: Networking Made Easy." By Stephen J. Bigelow. In PC Magazine (September 2003). "A technology called Universal Plug and Play (UPnP) is starting to make networking-configuration hassles a thing of the past. Just as Plug and Play (PnP) technology changed the way we integrate hardware with our PCs, UPnP will ease the way we add devices to a network. With PnP, you no longer need to configure resources for each device manually, hoping there are no conflicts. Instead, each device identifies itself to the operating system, loads the appropriate drivers, and starts operating with minimal fuss. PC-based networks, however, still require a cumbersome setup and configuration process, and devices such as printers, VCRs, PDAs, and cell phones are still difficult or impossible to network... With UPnP, adding devices to your network can be as easy as turning them on. A device can automatically join your network, get an IP address, inform other devices on your network about its existence and capabilities, and learn about other network devices. When such a device has exchanged its data or goes outside the network area, it can leave the network cleanly without interrupting any of the other devices. The ultimate goal is to allow data communication among all UPnP devices regardless of media, operating system, programming language, and wired/wireless connection. To foster such interoperability, UPnP relies on network-related technologies built upon industry-standard protocols such as HTTP, IP, TCP, UDP, and XML... UPnP is an open networking architecture that consists of services, devices, and control points. Services are groups of states and actions. For example, a light switch in your home has a state (either on or off) and an action that allows the network to get or change the state of the switch. Services typically reside in devices. 
A UPnP-compliant VCR might, for example, include tape handling, tuning, and clock services -- all managed by a series of specific actions defined by the developer. Devices may also include (or nest) other devices. Because devices and their corresponding services can vary so dramatically, there are numerous industry groups actively working to standardize the services supported by each device class. Today, there are four standards: Internet Gateway Device (IGD) V 1.0; MediaServer V 1.0 and MediaRenderer V 1.0; Printer Device V 1.0 and Printer Basic Service V 1.0; and Scanner (External Activity V 1.0, Scan V 1.0, Feeder V 1.0, and Scanner V 1.0). Industry groups will produce XML templates for individual device types, which vendors will fill with specific information such as device names, model numbers, and descriptions of services... There is one caveat with regard to UPnP: security..." See the recent news story "UPnP Forum Releases New Security Specifications for Industry Review."
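The UPnP service model described above — a service is a group of states and actions, as in the light-switch example — can be mimicked in a few lines. This toy class only illustrates the concept; real UPnP services are described in XML device templates and invoked over SOAP, not Python method calls:

```python
# Toy model of a UPnP-style service: one state variable plus actions
# that let the network get or change it. Names are illustrative.
class SwitchPowerService:
    def __init__(self):
        self.target = False  # state variable: the switch position (on/off)

    def set_target(self, new_value: bool):
        """Action: the network changes the state of the switch."""
        self.target = new_value

    def get_status(self) -> bool:
        """Action: the network reads the state of the switch."""
        return self.target

switch = SwitchPowerService()
switch.set_target(True)
print(switch.get_status())  # prints True
```

The nesting the article mentions (a VCR device containing tape, tuner, and clock services) would correspond to a device object aggregating several such service objects.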

  • [August 26, 2003] "Sun Seeks to Spur App Server Adoption. High Availability Stakes Raised in Upgrade." By Paul Krill. In InfoWorld (August 26, 2003). "Sun Microsystems hopes to make a bold leap in the Java application server space with its upcoming Sun ONE Application Server 7 Enterprise Edition, featuring high availability. Having trailed companies such as BEA Systems and IBM in market share, Sun is looking to turn things around by focusing on a high availability database layer in the product that is based on technology acquired through its aquisition of Clustra Systems in 2002. Sun's high availability technology is intended to ensure 99.999 uptime for applications such as e-commerce transactional systems, according to Sun officials, who discussed the technology during a chalk talk session in San Francisco... The high availability database layer features state information on transactions. Transactional loads can be shifted between application servers in the network if needed, Keller said. While the current version of the enterprise application server, release 6.5, has had high availability support, Version 7's support of the Clustra technology boosts real-time database functionality and scalability, to 24 processors per system. Version 7, which is set to ship in September for $10,000 per processor, also is compliant with the J2EE 1.3 Java specification, which features container management support for access to a database without requiring programmer involvement, according to Sun. Load balancing in Version 7 will enable uptime when taking down an application server for maintenance. Additionally, the high availability layer enables performance boosts through the addition of more processors, rather than having to add more application servers... Sun will add J2EE 1.4 compliance to the application server, featuring conformance to Web services specifications, in 2004, Sun officials said..."

  • [August 26, 2003] "Transacting Business with Web Services, Part I. The Coming Fusion of Business Transaction Management and Business Process Management." By Alastair Green (Choreology Ltd). In WebServices Journal Volume 3, Issue 9 (September 2003), pages 32-35. "Business transaction management (BTM) is a promising new development in general-purpose enterprise software. Most large companies are devoting significant resources to the problem of reliable, consistent integration of application services. BTM offers previously inaccessible levels of application coordination and process synchronization, radically simplifying the design and implementation of transactional business processes. Business process management (BPM) needs to be enriched by BTM for users to see the potential value of BPM realized in practice. XML is already widely deployed as a useful lingua franca enabling the creation of canonical data standards for particular industries, trading communities, and information exchanges. The extended family of Web services standards (clustered around the leading duo of SOAP and WSDL) is gaining growing acceptance as an important way of providing interoperable connectivity between heterogeneous systems. Many organizations are also examining the use of BPM technologies, exemplified by the current OASIS initiative, Web Services Business Process Execution Language (WS BPEL). Increasingly, attention is turning to the special problems associated with building transactional business processes and reliable, composable services. This is where BTM technology comes into its own. In this article I'm going to look at the rationale for and current status of BTM, and how vendors and users are thinking about the integration or fusion of BTM with BPM, particularly in the OASIS BPEL standardization effort. 
BPEL, as a special-purpose programming language designed to make processes portable across different vendors' execution engines, can become a very useful standard programming interface for business transactions in the Web services world... Full-scale BTM software needs to implement interoperable protocols that define three phases of any transactional interaction, whether integrating internal systems, or automating external trades and reconciliations: (1) Phase One: Collaborative Assembly: The business-specific interplays of messages that assemble a deal or other synchronized state shift in the relationship of two or more services. A useful general term for such an assemblage of ordered messages is collaboration protocol. Examples include RosettaNet PIPs, UN/Cefact trade transactions, and the FIX trading protocol. In the future, BPEL abstract processes should help greatly in defining such protocols. Reliable messaging has an important role in this assembly phase, but as a subordinate part of a new, extended concept of GDP (guaranteed delivery and processing). (2) Phase Two: Coordinated Outcome: The coordination of an outcome that ensures that the intended state changes occur in all participant systems, consistent with the business rules or contracts which govern the overall transaction. Examples of relevant coordination protocols are WS-Transaction (Atomic Transaction and Business Activity, supplemented by WS-Coordination) and BTP (the OASIS Business Transaction Protocol) and the recently released WS-TXM (Transaction Management, part of the WS-Composite Application Framework). 
A coordination protocol requires three related sub-protocols: a control protocol, which creates and terminates a coordination or transaction (present in BTP); a propagation protocol, which allows a unique transaction identity to be used to bind participating services to a coordination service (this sub-protocol is mostly defined by WS-Coordination); and an outcome protocol, which allows a coordination service to reliably transmit the instructions of a controlling application to the participants, even in the event of temporary process, processor, or network failures. WS-T, BTP, and WS-TXM contain very similar outcome protocols... (3) Phase Three: Assured Notification: Notification of the result of the transaction to the parties involved, ensuring that they're all confident of their final relationship to their counterparties. Ideally, this requires a reliable notification protocol, which allows the different legal entities or organizational units to receive or check the final outcome, including partial or complete failures..." General references in "Business Process Execution Language for Web Services (BPEL4WS)." [alt URL]
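The 'coordinated outcome' phase can be sketched as a prepare/confirm-or-cancel exchange between a coordinator and its enrolled participants. This mirrors the general shape shared by protocols like BTP and WS-Transaction, not any one specification's actual message set, and the class and method names are invented:

```python
# Illustrative outcome-protocol sketch: the coordinator collects prepare
# votes, then drives all participants to the same final state.
class Participant:
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.outcome = None

    def prepare(self):
        """Vote on whether this participant can make its state change durable."""
        return self.can_commit

    def confirm(self):
        self.outcome = "confirmed"

    def cancel(self):
        self.outcome = "cancelled"

def coordinate(participants):
    """Confirm everywhere only if every participant votes yes; else cancel everywhere."""
    if all(p.prepare() for p in participants):
        for p in participants:
            p.confirm()
        return "confirmed"
    for p in participants:
        p.cancel()
    return "cancelled"

parts = [Participant("booking"), Participant("payment", can_commit=False)]
print(coordinate(parts))  # prints cancelled: one participant could not commit
```

What the real protocols add beyond this sketch is precisely the reliability the article emphasizes: logging and retransmission so the outcome survives temporary process, processor, or network failures.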

  • [August 25, 2003] "Macromedia Plays Drag-and-Drop Game." By Gavin Clarke. In Computer Business Review Online (August 25, 2003). "Macromedia Inc is eyeing up Delphi and Visual Basic developers with a web-programming environment exploiting both drag-and-drop and XML web services... Flash MX Professional 2004 is designed to exploit the Flash player's popularity as a deployment environment by introducing development techniques and workflows uncommon to existing Flash development environment but familiar to application coders. Flash currently uses a so-called timeline design metaphor, friendly to visually creative programmers but not those comfortable with drag-and-drop. Macromedia hopes drag-and-drop will attract Microsoft Corp's Visual Basic and Borland Software Corp's Delphi developers whose tools, company president of products Norm Meyrowitz said, haven't morphed to become web centric... Flash MX Professional 2004 also uses web services with scriptable data binding that supports SOAP and XML in addition to Macromedia's Flash Remoting. There is also integration with Microsoft's Visual SourceSafe, to manage source code and project files. Macromedia becomes the latest in a growing string of companies, including BEA and Sun Microsystems Inc, attempting to appeal especially to the Visual Basic crowd... Dreamweaver features enhanced support for Cascading Style Sheets (CSS) that helps reduce consumption of network bandwidth by separating design from content, multi-browser validation to check tags and CSS rules across different browsers, updated Flash Player, and drawing tools in Fireworks for greater control over bitmap and vector images. In attempt to provide a unified look and feel, Macromedia will also unveil Halo, a set of design guidelines on the principle of Apple Computer Inc's OS X Aqua interface..." Note SYS-CON Media's announcement for a new MX Developer's Journal. See details in the announcement "Macromedia Announces Dreamweaver MX 2004. 
New Version Builds Foundation for Widespread Adoption of Cascading Style Sheets (CSS)." [temp URL]

  • [August 25, 2003] "Macromedia Unveils MX 2004 Lineup. New versions of Flash, Dreamweaver, Fireworks Slated for September Release." By Dave Nagel. In Creative Mac News (August 25, 2003). With screen shots. "Macromedia has unveiled its new lineup of graphic design tools in the MX 2004 family. These include two new versions of Flash, Flash MX 2004 and Flash MX Professional 2004; Dreamweaver MX 2004; and Fireworks MX 2004. The company has also introduced the new Studio MX 2004 suite, as well as Flash Player 7. All are expected to be available next month for Mac OS X and Windows. Completely new to Macromedia's lineup are split versions of Flash, the standard version and the Professional edition. The standard Flash MX 2004 adds new functionality and gains several workflow enhancements. It includes new Timeline Effects for adding common effects like blurs and drop shadows without scripting; pre-defined behaviors for navigation and media control; ActionScript 2.0 for enhanced interactivity; support for cascading style sheets for producing hybrid Flash and HTML content; spell checking and global search; accessibility features; and Unicode and localization tools. It also gains a high-performance compiler for improving playback considerably, including playback of content created for earlier versions of the Flash Player... The Professional edition includes all of the new features of Flash MX 2004, as well as a beefed-up application development environment for developing rich Internet applications and tools for delivering video with interactivity and custom interfaces. It adds forms-based programming capabilities as an alternative to timeline-based development and offers connectivity to server data with scriptable binding, supporting SOAP, XML and Flash Remoting. For video, Flash Professional includes a streamlined development workflow with Apple FInal Cut Pro and other video editing systems. 
With the new Flash Player 7, it provides support for full-motion, full-frame video and progressive downloads. And it includes pre-built components for building custom interfaces and easily compositing text, animated graphics and images into a video presentation. It also gains features for developing content for mobile devices... Dreamweaver and Fireworks have also been boosted into the MX 2004 fold. The primary focus of Dreamweaver MX 2004 is the simplification of cascading style sheets, with the entire design environment built around CSS for precise control over design elements. It offers support for SecureFTP, dynamic cross-browser validation functionality, built-in graphics editing, integration with Microsoft Word and Excel (including copying and pasting formatted tables) and updated support for ASP.NET, PHP and ColdFusion technologies. It will ship with MX Elements for HTML, which includes starter and template components for Web pages, including preset cascading style sheets..." See also the announcement "Macromedia Announces Dreamweaver MX 2004. New Version Builds Foundation for Widespread Adoption of Cascading Style Sheets (CSS)."

  • [August 25, 2003] "Goals of the BPEL4WS Specification." By Frank Leymann, Dieter Roller, and Satish Thatte. Working document submitted to the OASIS Web Services Business Process Execution Language TC. See the posting from Diane Jordan and the original posting, with attachment. The memo articulates ten (10) overall goals of the "original" BPEL4WS Specification, presented as a record of the "Original Authors' Design Goals for BPEL4WS." It covers: Web Services as the Base, XML as the Form, Common set of Core Concepts, Control Behavior, Data Handling, Properties and Correlation, Lifecycle, Long-Running Transaction Model, Modularization, and Composition with other Web Services Functionality. "This note aims to set forward the goals and principals that formed the basis for the work of the original authors of the BPEL4WS specification. The note is set in context to reflect the considerations that went into the work, rather than being presented as a set of axioms. Much of this material is abstracted from comments and explanations embedded in the text of the specification itself. This is intended to be informative and a starting point for a consensus in the WSBPEL TC for the work of the TC. The goals set out here are also reflected in the charter of the WSBPEL TC... BPEL4WS is firmly set in the Web services world as the name implies. In particular, all external interactions occur through Web service interfaces described using WSDL. This has two aspects: (1) the process interacts with Web services through interfaces described using WSDL and (2) the process manifests itself as Web services described using WSDL. 
We concluded that although the binding level aspects of WSDL sometimes impose constraints on the usage of the abstract operations, in the interests of simplicity and reusability we should confine the exposure of process behavior to the 'abstract' portType (i.e., 'interface') level and leave binding and deployment issues out of the scope of the process models described by BPEL4WS. The dependence is concretely on WSDL 1.1, and should remain so, given the timeline for the WSBPEL TC, and the likelihood that WSDL 1.1 will remain the dominant Web service description model for some time to come. At the same time we should be sensitive to developments in WSDL 1.2 and attempt to stay compatible with them..." Note from Satish Thatte's post: "As promised, the goals document is attached. As I said during the last phone meeting, this document only covers high level design points... If TC members feel that there are any important aspects not yet covered here please let us know and we will try to address those concerns..." General references in "Business Process Execution Language for Web Services (BPEL4WS)." [source .DOC]

  • [August 22, 2003] "J2ME Connects Corporate Data to Wireless Devices. Sacrificing Proprietary Gimmicks for Software Portability, J2ME Leads the Way." By Tom Thompson. In InfoWorld (August 22, 2003). "The vast differences among portable devices -- Pocket PCs running Windows CE, PDAs running Palm OS or Linux, cell phones running the Symbian OS -- pose significant problems for developers. Even the cell phones from a single vendor such as Motorola (the company I work for) can vary widely in processor type, memory amount, and LCD screen dimensions. Worse, new handsets sporting new features, such as built-in cameras and Bluetooth networking, are released every six months to nine months. For IT managers whose chief concern is that applications running on device A today also run on device B tomorrow, the best choice among development platforms is J2ME, a slimmed-down version of Java tailored for use on embedded and mobile devices. Most handset vendors implement their own Java VM, and third-party VMs provide Java support in Palm and Pocket PC devices. For a broad range of devices, past, present, and future, J2ME provides a high degree of security and application portability -- but not without drawbacks... J2ME limits support for vendor-specific hardware features to accommodate variations among devices. J2ME tackles hardware variations in two ways. First, J2ME defines an abstraction layer known as a configuration, which describes the minimum hardware required to implement Java on an embedded device. The J2ME configuration that addresses resource-constrained devices such as mobile phones and low-end PDAs is the CLDC (Connected Limited Device Configuration). Second, J2ME defines a second abstraction layer, termed a profile, that describes the device's hardware features and defines the APIs that access them. Put another way, profiles extend a configuration to address a device's specific hardware characteristics. 
J2ME currently defines one profile for CLDC devices: the MIDP (Mobile Information Device Profile). In addition to stipulating the basic hardware requirements, the MIDP implements the APIs used to access the hardware... Down the road, the JCP proposes a new JTWI (Java Technology for the Wireless Industry) specification. In JTWI, a number of optional J2ME APIs -- such as MMAPI and WMA (Wireless Messaging APIs) -- become required services. Even in its current state, J2ME offers developers the ability to write once and deploy a business application across the wide range of wireless gear currently available. J2ME's abstraction layers also provide a hedge against vendor lock-in, and they help cope with the rapid changes in today's wireless devices. Developers may have to craft the midlet's interface to address the lowest-common-denominator display, but that's a small price to pay compared with writing a custom client application for each device the corporation owns..." See: (1) "J2ME Web Services Specification 1.0," JSR-000172, Proposed Final Draft 2; (2) "IBM Releases Updated Web Services Tool Kit for Mobile Devices."

  • [August 22, 2003] "'Java Everywhere' is for World Domination. Why the Latest Wireless Buzz Matters to All Developers." By Michael Juntao Yuan. In JavaWorld (August 22, 2003). "The buzzword from the 2003 JavaOne conference was 'Java everywhere.'... Java runtimes are built into more than 150 devices from more than 20 manufactures. All five major cell phone manufactures have committed to the Java platform. In addition to manufacturer support, Java has also gained widespread support from the wireless carrier community. Wireless carriers are conservative and wary of any security risks imposed by new technologies. As part of the J2ME specification process, carriers can influence the platform with their requirements. As a result, all major wireless carriers around the world have announced support for J2ME handsets and applications. For developers, J2ME applications can take advantage of not only ubiquitous device support, but also ubiquitous network support. A major effort has been made to support games on J2ME handsets. Mobile entertainment has proven to be an extremely profitable sector. In Europe, simple ring-tone download has generated $1.4 billon in revenue last year. In comparison, the entire global J2EE server market is $2.25 billion. J2ME games are content rich, over-the-air downloadable, and micro-payment-enabled. The J2ME gaming sector is projected to grow explosively and create many new Java jobs in the next couple of years. In fact, J2ME games are already the second largest revenue source for Vodafone's content service. Notable recent advances in the J2ME space: (1) The Mobile 3D Graphics API for J2ME [JSR 184] promises to bring 3D action games to Java-enabled handsets. Nokia presented an impressive demonstration at JavaOne. (2) The Advanced Graphics and User Interface Optional Package (JSR 209) will provide Swing and Java 2D support on PDA-type devices. 
At JavaOne, SavaJe Technologies, a smaller vendor, demonstrated a prototype smart phone device running Java Swing. (3) IBM has already ported its SWT [Standard Widget Toolkit] UI framework to Pocket PC devices as part of its Personal Profile runtime offering. (4) The Location API for J2ME [JSR 179] enables novel applications not possible in the desktop world. The API can determine a user's location either from a built-in GPS device or from a phone operator's triangulated location signals in compliance with the enhanced 911 government requirements. (5) The completion of the SIP (Session Initiation Protocol) API for J2ME [JSR 180] enables the development of instant messaging applications on mobile devices. That will finally facilitate convergence between the popular desktop IM applications and wireless SMS messaging systems. (6) The Security and Trust Services API for J2ME [JSR 177] allows J2ME phones to access the device's embedded security element, e.g., the SIM (Subscriber Identity Module) card for GSM phones. JSR 177 enables support for more powerful and flexible security solutions for financial and other mobile commerce applications. (7) The J2ME Web Services Specification [JSR 172] supports Web services clients on mobile devices... The central message from this year's JavaOne is that the long overdue Java client-side revolution has finally arrived in the form of 'Java everywhere.' To paraphrase JavaOne keynote speaker Guy Laurence from Vodafone: the Java mobility train has already left the station, you are either on board or not. Every time you pick up your Java-enabled cell phone, think about the opportunities you might have missed..."

  • [August 21, 2003] "webMethods Extends UAN Support." By Demir Barlas. In Line56 (August 21, 2003). "For joint Siebel/webMethod customers, a shortcut to solving some potentially messy integration problems. webMethods has extended its support of the Universal Application Network (UAN), developed by customer relationship management (CRM) software provider Siebel, to include applications for the communications, media, and entertainment (CME) industries. UAN reflects a basic Siebel philosophy: the importance of the business process. UAN is a standards-based architecture that serves as a kind of hub, using business processes to drive the ways in which applications communicate. In theory, this means that point-to-point integration between applications can be bypassed, because business processes themselves reach out to UAN, which then touches all applications within that process. If this makes UAN sound like an integration solution in itself, beware; it isn't. UAN derives its efficacy from Siebel's partnerships with integration software providers like webMethods and TIBCO. Scott Opitz, SVP of marketing and business development for webMethods, explains further. 'Siebel's used our tools to build the connections,' he says. 'It's about pre-configuring business processes and eliminating the need for you to define them from scratch to support an integration environment.' In CME, as in other verticals, the integration environment can get tricky. For example, cable companies that until recently provided just one kind of service many also find themselves providing Internet access, video-on-demand, and so forth. That means the same customer could show up in different databases (including Siebel systems), so companies interested in distilling a single view of the customer would either have to do point-to-point integration or rely on something prepackaged, like the UAN. 
In this context, UAN would also be useful to run industry-specific business processes in the quote-to-cash cycle..." See also: (1) the note on UAN from the Siebel white paper; (2) a related article: "WebMethods Releases Integration App Based on Siebel's UAN," by Kimberly Hill, in CRMDaily.com News (August 21, 2003). Details in the announcement "Siebel Systems and webMethods Announce Expanded Offering for Universal Application Network. Siebel Integration Applications for Communications, Media and Energy Industries Now Available on webMethods Integration Platform."

  • [August 21, 2003] "Web Services Basic Profile for Industry and J2EE 1.4." By Gavin Clarke. In Computer Business Review Online (August 13, 2003). [The WS-I Basic Profile announcement] "means Sun Microsystems' latest server edition of Java, Java 2 Enterprise Edition (J2EE) 1.4, can now proceed to market, with the specification's final publication expected by the end of December. Sun and Java Community members postponed the most recent proposed release date of the already delayed J2EE 1.4, in an attempt to ensure the specification was in lock-step with the industry's latest web services specifications... Sun expects a number of vendors to launch sample J2EE 1.4 applications during in coming months as the Java Community Process (JCP) completes final release of Test Compatibility Kits (TCKs) and reference implementations for certification. The WS-I is, meanwhile, also planning a set of test tools and sample applications for Java and Microsoft Corp's C Sharp will be made available in the next few months. However, WS-I will not orchestrate or co-ordinate a testing regime for vendors to certify their products are compatible with the Basic Profile. Instead, the organization is relying on goodwill and market pressure to drive certification, hoping ISVs will not want to risk the shame of having a planned WS-I logo removed from them... Sun believes its own JCP-driven certification process can step in to help ensure conformity, in Java at-least, by embedding the Basic Profile 1.0 into the J2EE platform specification. Under JCP rules, J2EE 1.4 vendors must need to undergo testing using the TCK and reference implementations, ensuring they are conformant with the platform. Mark Hapner, distinguished engineer and chief web services strategist for Sun and the company's WS-I board representative, said: 'We are efficient at taking on the role of WS-I certification.' 
Interoperability is a fundamental issue, and one of the largest issues in Basic Profile 1.0 has been an attempt to ensure consistency in fault handling and error handling between Java and .NET web services. 'If you can't communicate what the fault is, you don't know what to do,' Cheng said. He believes the Basic Profile will mean vendors correctly implement SOAP 1.1, WSDL 1.1, UDDI 2.0, XML 1.0 and XML Schema in products themselves, so users don't need to build out what is regarded as basic infrastructure. Hapner said the Basic Profile would be integrated into J2EE's component model, viewed as a fundamental building block of Java web services, to support web services' truly 'global computing model'..." See: "WS-I Releases Basic Profile 1.0a Final Specification for Interoperable Web Services."
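
The fault-handling consistency discussed above can be made concrete with a small sketch. The following Python snippet parses a SOAP 1.1 fault envelope of the kind the Basic Profile constrains; the fault code and message shown are invented for illustration, not taken from any WS-I test case.

```python
import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

# An illustrative SOAP 1.1 fault; the faultstring text is invented.
fault_xml = """<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <soap:Fault>
      <faultcode>soap:Client</faultcode>
      <faultstring>Malformed purchase order</faultstring>
    </soap:Fault>
  </soap:Body>
</soap:Envelope>"""

def read_fault(envelope: str):
    """Return (faultcode, faultstring) from a SOAP 1.1 envelope, or None."""
    root = ET.fromstring(envelope)
    fault = root.find(f"./{{{SOAP_ENV}}}Body/{{{SOAP_ENV}}}Fault")
    if fault is None:
        return None
    # Per SOAP 1.1, faultcode and faultstring are unqualified child elements.
    return fault.findtext("faultcode"), fault.findtext("faultstring")

print(read_fault(fault_xml))  # ('soap:Client', 'Malformed purchase order')
```

If both the Java and the .NET side emit faults in this one well-defined shape, a client can always extract something actionable, which is exactly the consistency the profile is after.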

  • [August 21, 2003] "NETCONF Configuration Protocol." Edited by Rob Enns (Juniper Networks). IETF Network Working Group, Internet-Draft. Reference: 'draft-ietf-netconf-prot-00'. August 11, 2003, expires February 9, 2004. 73 pages. "There is a need for standardized mechanisms to manipulate, install, edit, and delete the configuration of a network device. In addition, there is a need to retrieve device state information and receive asynchronous device state messages in a manner consistent with the configuration mechanisms. There is great interest in using an XML-based data encoding because a significant set of tools for manipulating ASCII text and XML encoded data already exists... NETCONF uses a remote procedure call (RPC) paradigm to define a formal API for the network device. A client encodes an RPC in XML and sends it to a server using a secure, connection-oriented session. The server responds with a reply encoded in XML. The contents of both the request and the response are fully described in XML DTDs or XML schemas, or both, allowing both parties to recognize the syntax constraints imposed on the exchange. A key aspect of NETCONF is an attempt to allow the functionality of the API to closely mirror the native functionality of the device. This reduces implementation costs and allows timely access to new features. In addition, applications can access both the syntactic and semantic content of the device's native user interface. NETCONF allows a client to discover the set of protocol extensions supported by the server. These 'capabilities' permit the client to adjust its behavior to take advantage of the features exposed by the device. The capability definitions can be easily extended in a noncentralized manner. Standard and vendor-specific capabilities can be defined with semantic and syntactic rigor. The NETCONF protocol is a building block in a system of automated configuration. 
XML is the lingua franca of interchange, providing a flexible but fully specified encoding mechanism for hierarchical content. NETCONF can be used in concert with XML-based transformation technologies such as XSLT to provide a system for automated generation of full and partial configurations. The system can query one or more databases for data about networking topologies, links, policies, customers, and services. This data can be transformed using one or more XSLT scripts from a vendor-independent data schema into a form that is specific to the vendor, product, operating system, and software release. The resulting data can be passed to the device using the NETCONF protocol..." See other details in the news story "IETF Network Configuration Working Group Releases Initial NETCONF Draft." [cache]
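
As a rough illustration of the RPC paradigm described above, the following Python sketch builds a NETCONF <rpc> request for a configuration datastore using only the standard library. The namespace URI is the one later standardized for NETCONF; the wire details in the early draft-ietf-netconf-prot-00 may differ.

```python
import xml.etree.ElementTree as ET

# Assumed namespace: this is the NETCONF base namespace from later
# revisions of the protocol; the -00 draft may use a different one.
NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

def make_get_config(message_id: str, source: str = "running") -> str:
    """Build a NETCONF <rpc> request asking for a configuration datastore."""
    ET.register_namespace("", NC)  # serialize with a default namespace
    rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": message_id})
    get_config = ET.SubElement(rpc, f"{{{NC}}}get-config")
    src = ET.SubElement(get_config, f"{{{NC}}}source")
    ET.SubElement(src, f"{{{NC}}}{source}")  # e.g. <running/>
    return ET.tostring(rpc, encoding="unicode")

req = make_get_config("101")
print(req)
```

The server's reply would be a matching XML document, so both sides of the exchange can be validated against the DTDs or schemas the draft calls for.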

  • [August 20, 2003] "Embedded Markup Considered Harmful." By Norman Walsh. From XML.com (August 20, 2003). "XML is pretty simple, although there's plenty of complexity to be found if you go looking for it: if you want, for example, to validate or transform or query it. But elements and attributes in well formed combinations have become the basis for an absolutely astonishing array of projects. Recently I've encountered a design pattern (or antipattern, in my opinion) that threatens the very foundation of our enterprise. It's harmful and it has to stop... It came as a surprise to me when I discovered that the RSS folks were supporting a form of escaped markup. Webloggers often publish a list of their recent entries in RSS and online news sites often publish headlines with it. Like most XML technologies, there's enough flexibility in it to suit a much wider variety of purposes than I could conveniently summarize here. Surprise became astonishment when I discovered that the folks working on the successor to RSS weren't going to explicitly outlaw this ugly hack. When I discovered that this hack was leaking into another XML vocabulary, FOAF, I became outright concerned... The idea of escaping markup goes against the fundamental grain of XML. If this hack spreads to other vocabularies, we'll very quickly find ourselves mired in the same bugward-compatible tag soup from which we have struggled so hard to escape. And evidence suggests that it's already spreading. Not long ago, the question of escaped markup turned up in the context of FOAF. The FOAF specification condones no such nonsense, but one of the blogging tools that produces FOAF reacted to a user's insertion of HTML markup into the 'bio' element by escaping it. The tool vendor in question was quickly persuaded to fix this bug. Escaped markup must stop: there is clear evidence that the escaped markup design will spread if it isn't checked. If it spreads far enough before it's caught, it will become legacy..."
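
A small experiment makes Walsh's complaint concrete: escaped markup survives parsing only as an opaque string, while real child elements remain addressable to XML tools. The <description> samples below are invented, RSS-like fragments.

```python
import xml.etree.ElementTree as ET

# One element carries escaped HTML (the pattern Walsh criticizes),
# the other carries the same content as real child elements.
escaped = "<description>&lt;p&gt;Hello &lt;em&gt;world&lt;/em&gt;&lt;/p&gt;</description>"
real    = "<description><p>Hello <em>world</em></p></description>"

e = ET.fromstring(escaped)
r = ET.fromstring(real)

# Escaped markup is just character data: the parser sees no elements,
# so generic XML tools cannot query or transform the embedded HTML.
print(len(list(e)), repr(e.text))  # 0 '<p>Hello <em>world</em></p>'
print(len(list(r)), r[0].tag)      # 1 p
```

To do anything structural with the escaped version, a consumer must run a second, out-of-band HTML parse over the text content, which is exactly the "tag soup" regression the article warns about.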

  • [August 20, 2003] "Should Atom Use RDF?" By Mark Pilgrim. From XML.com (August 20, 2003). ['Mark Pilgrim explains the use of RDF in the new Atom syndication format.'] "The problem with discussing RDF (where that means: 'I think this data format should be RDF') is that you can support any of these four RDF issues (model, syntax, tools, vision), in any combination, while vigorously arguing against the others. People who believe that the RDF conceptual model is a good thing may think that the RDF/XML serialization is wretched, or that there are no good RDF tools for their favorite language, or that the Semantic Web is an unattainable pipe dream, or any combination of these things. People who are familiar with robust RDF tools (such as RDFLib for Python) -- and, thus, never have to look at the RDF/XML serialization because their tools hide it from them completely -- may nonetheless think that RDF/XML is wretched. People who defend the RDF/XML syntax may have nothing polite to say about the vision of the Semantic Web. And around and around it goes... This is a problem with 'I think this format should be RDF' discussions. Many people who are thought to be pro-RDF are, in fact, against it in one or more ways (the model is limiting, the syntax is wretched, the tools are buggy or nonexistent, the vision is stupid). And many people who are perceived as anti-RDF are in fact in favor of it in one or more ways (the model is good, the serialization is no more complex than straight XML, the tools work well enough, the Semantic Web is worth the wait). For the record, I think that the RDF model is sound, the tools work for me, the serialization is wretched, and the Semantic Web is an unattainable pipe dream. If I appear to be wavering over time, sometimes pro-RDF, sometimes anti-RDF, it may be that I'm simply arguing different facets... 
How can we allow you to use your RDF tools on Atom, and do the right thing with reusing existing ontologies, and keep the syntax simple for people who simply want to parse Atom feeds in isolation, as XML? We can make the XSLT transformation normative... every platform that has robust RDF tools (a small but growing number) also has robust XSLT tools. But Atom-as-RDF is not the primary mode of consuming Atom feeds. There are dozens, perhaps more than 100, tools that consume syndication feeds now. Some of them have already been updated to consume Atom feeds and the format hasn't even been finalized yet. Most will be updated once the format is stable. And, to my knowledge, only one (NewsMonster) handles them as RDF, and it already has the infrastructure to transform XML because it does this for six of the seven formats called 'RSS' (the seventh is already RDF). In other words, we're hedging our bets. Whether a vocal minority likes it or not, RDF is very much a minority camp right now. It has a lot to offer -- I saw that first-hand as it forced us to clarify our model -- but it hasn't hit the mainstream yet. On the other hand, it seems perpetually poised to spring into the mainstream. Tool support is obviously critical here (since they help hide the wretched syntax), and the tools are definitely maturing. So should Atom be consumed as RDF? It depends. If you want to, and have the right tools, you can. You'll need to transform it into RDF first, but we'll provide a normative way to do that. If you don't want to, then you don't have to worry about it. Atom is XML..."

  • [August 20, 2003] "The Semantic Web is Closer Than You Think." By Kendall Grant Clark. From XML.com (August 20, 2003). "The W3C's web ontology language, now called OWL, was advanced to W3C Candidate Recommendation on 19-August-2003. While there is a lot of talk these days about the Semantic Web being the crack-addled pipe dream of a few academic naifs, in reality it's a lot closer to realization than you might be thinking... I'm not suggesting that we stand on the brink of a fully achieved, widespread Semantic Web. I am suggesting that some of the major pieces of the puzzle are now or will soon be in place. OWL, along with RDF, upon which it builds, are two such very major pieces of the Semantic Web puzzle... OWL is an ontology language for the Web, which builds on a rich technical tradition of both formal research and practical implementation, including SHOE, OIL, and DAML+OIL. The technical basis for much of OWL is the part of the formal knowledge representations field known as Description Logics (aka 'DL'). DL is the main formal underpinning of such diverse kinds of knowledge representation formalisms as semantic nets, frame-based systems, and others... OWL includes an RDF/XML interchange syntax, an abstract, non-XML syntax, and three sublanguages or variants, each of different expressivity and implementational complexity (OWL Lite, OWL DL, and OWL Full). The takeaway point is simple: OWL is real stuff; whether it's the right real stuff, whether it can gain critical mass, whether it can or will operate at web scale -- these are and will remain open questions for the foreseeable future. But the foundation is solid... What can be done with an ontology language for the Web? In short, you can formally specify a knowledge domain, describing its most salient features and constituents, then use that formal specification to make assertions about what there is in that domain. 
You can feed all of that to a computer which will reason about the domain and its knowledge for you. And, here's the most tantalizing bit, you can do all of this on, in, and with the Web, in both interesting and powerful ways... OWL has been specifically crafted out of its Webbish forerunners, particularly SHOE and DAML+OIL, to take advantage of some of the interesting things about the Web. What is interesting about the Web? Lots of things, including its scale, its distributedness, its relatively low barriers of access and accessibility. OWL is intended to be an ontology language that has some of these features: it should operate at the scale of the Web; it should be distributed across many systems, allowing people to share ontologies and parts of ontologies; it should be compatible with the Web's ways of achieving accessibility and internationalization; and it should be, relative to most prior knowledge representation systems, easy to get started with, non-proprietary, and open..." See: "W3C Releases Candidate Recommendations for Web Ontology Language (OWL)."
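
The kind of machine reasoning Clark describes can be suggested with a toy example: a subclass hierarchy stated as triples, plus a transitive-closure query over it. This is a deliberately tiny stand-in for a Description Logic reasoner written for illustration; it is not OWL, and the class names are invented.

```python
# Ontology facts as (subclass, superclass) triples, invented for the example.
subclass_of = {
    ("Dachshund", "Dog"),
    ("Dog", "Mammal"),
    ("Mammal", "Animal"),
}

def superclasses(cls, facts):
    """All classes reachable via subclass-of edges (transitive closure)."""
    found, frontier = set(), {cls}
    while frontier:
        nxt = {sup for (sub, sup) in facts if sub in frontier}
        frontier = nxt - found
        found |= nxt
    return found

# The asserted facts never state "Dachshund is an Animal";
# the reasoner derives it.
print(sorted(superclasses("Dachshund", subclass_of)))  # ['Animal', 'Dog', 'Mammal']
```

Real DL reasoners go far beyond this (property restrictions, cardinality, consistency checking), but the basic move is the same: derive assertions that were never explicitly stated.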

  • [August 20, 2003] "OWL Ascends Within Standards Group." By Paul Festa. In CNET News.com (August 18, 2003). "As part of its ongoing effort to give digital documents meaning that computers can understand, the Web's leading standards body advanced a key protocol as a candidate recommendation. The World Wide Web Consortium's (W3C) Web Ontology Language (OWL), a revision of the DAML+OIL Web ontology language, forms just one part of what the consortium calls its 'growing stack' of Semantic Web recommendations. The W3C for years has braved skepticism directed at its Semantic Web initiative, which aims to get computers to 'understand' data rather than to just transfer, store and display documents for computer users. Other documents in the Semantic Web stack include the Extensible Markup Language (XML), a general-purpose W3C recommendation for creating specialized markup languages, and the Resource Description Framework (RDF), which integrates different methods of describing data. OWL, by contrast, goes a step beyond existing recommendations to provide for more detailed descriptions of content. 'OWL can be used to explicitly represent the meaning of terms in vocabularies and the relationships between those terms,' according to the W3C's OWL overview, the first of the set of six OWL drafts released Monday. 'This representation of terms and their interrelationships is called an ontology. OWL has more facilities for expressing meaning and semantics than XML (and) RDF...and thus OWL goes beyond these languages in its ability to represent machine interpretable content on the Web'..." See details in the news story "W3C Releases Candidate Recommendations for Web Ontology Language (OWL)."

  • [August 20, 2003] "Building Interoperable Web Services: WS-I Basic Profile 1.0." By Jonathan Wanagel, Andrew Mason, Sandy Khaund, Sharon Smith, RoAnn Corbisier, and Chris Sfanos (Microsoft Corporation). From the Microsoft Prescriptive Architecture Group (PAG). Series: Patterns & Practices. August 12, 2003. 133 pages. "This guide covers WS-I Basic Profile contents, use within Microsoft development tools, coding compliance challenges, degrees of freedom for customers and best options based on technical and non-technical requirements. The Guide is intended to help software architects and developers design and code Web services that are interoperable. We emphasize "interoperable" because we assume that you already understand how to implement a Web service. Our goal is to show you how to ensure that your Web service will work across multiple platforms and programming languages and with other Web services. Our philosophy is that you can best achieve interoperability by adhering to the guidelines set forth by the Web Services Interoperability (WS-I) organization in their Basic Profile version 1.0. In this book, we will show you how to write Web services that conform to those guidelines. Focusing on interoperability means there are some Web service issues that fall outside the scope of the discussion. These issues include security, performance optimization, scalability, and bandwidth conservation..." Also available in PDF format... To encourage interoperability, the WS-I is creating a series of profiles which will define how the underlying components of any Web service must work together. 
Chapter 2 [of this Guide] discusses the first of these profiles, called the Basic Profile, and includes the following topics: (1) The Basic Profile's underlying principles; (2) An explanation of the WS-I usage scenarios; (3) An explanation of the WS-I sample application, which demonstrates how to write a compliant Web service; (4) An explanation of the testing tools, which check that your implementation follows the Basic Profile guidelines. Chapter 3 lists some general practices you should follow for writing Web services or clients that conform to Basic Profile. Chapter 4 assigns each of the profile's rules to one of four possible levels of compliancy and, on a rule-by-rule basis, shows how to adjust your code to make your Web service comply with the profile's rules. Chapter 5 assigns each of the profile's rules to one of four possible levels of compliancy and, on a rule-by-rule basis, shows how to adjust your code to make your Web service client comply with the profile's rules. Appendix A groups the Basic Profile's rules according to their level of compliancy for implementing a Web service. Appendix B groups the Basic Profile's rules according to their level of compliancy for implementing a Web service client..." See "WS-I Releases Basic Profile 1.0a Final Specification for Interoperable Web Services."

  • [August 20, 2003] "Canonical Situation Data Format: The Common Base Event." By IBM Staff Members: David Ogle (Autonomic Computing), Heather Kreger (Emerging Technologies), Abdi Salahshour (Autonomic Computing), Jason Cornpropst (Tivoli Event Management), Eric Labadie (WSAD PD Tooling), Mandy Chessell (Business Integration), Bill Horn (IBM Research - Yorktown), and John Gerken (Emerging Technologies). Reference: ACAB.BO0301.1.1. Copyright (c) International Business Machines Corporation. 66 pages. With XML Schema. IBM submission to the OASIS Web Services Distributed Management TC. "This document defines a common base event (CBE) and supporting technologies that define the structure of an event in a consistent and common format. The purpose of the CBE is to facilitate effective intercommunication among disparate enterprise components that support logging, management, problem determination, autonomic computing and e-business functions in an enterprise. This document specifies a baseline that encapsulates properties common to a wide variety of events, including business, autonomic, management, tracing and logging type events. The event is expressed as an XML document using UTF-8 or UTF-16 encoding. This document is prescriptive about the format and content of the data that is passed to or retrieved from a component. However, it is not prescriptive about the ways in which individual applications are to store their data locally. Therefore, the application requirement is only to be able to generate or render events in this format, not necessarily to store them in this format. The goal of this effort is to ensure the accuracy, improve the detail and standardize the format of events to assist in designing robust, manageable and deterministic systems. The results are a collection of specifications surrounding a 'Common Base Event' definition that serves as a new standard for events among enterprise management and business applications... 
The goal of this work is to provide more than just an element definition for a common event. In addition, an XML schema definition is provided. This document's scope is limited to data format and content of the data; how the data is sent and received and how an application processes the data is outside the scope of this document... When a situation occurs, a 3-tuple must be reported: (1) the identification of the component that is reporting the situation, (2) the identification of the component that is experiencing the situation (which might be the same as the component that is reporting the situation), and (3) the situation itself... The sourceComponentId is the identification of the component that was affected or was impacted by the event or situation. The data type for this property is a complex type as described by the ComponentIdentification type that provides the required data to uniquely identify a component... The reporterComponentId is the identification of the component that reported the event or situation on behalf of the affected component. The data type for this property is a complex type as described by the ComponentIdentification type that provides the required data to uniquely identify a component... The situationInformation is the data that describes the situation reported by the event. The situation information includes a required set of properties or attributes that are common across product groups and platforms, yet architected and flexible enough to allow for adaptation to product-specific requirements..." See also the note from Thomas Studwell posted 2003-08-20 to the OASIS WSDM TC list ['IBM Submits Common Base Events Specification to WS-DM TC']: IBM is pleased to announce the submission of the 'Canonical Situation Format: Common Base Event Specification' (CBE) to the Web Services Distributed Management Technical Committee (WS-DM) of OASIS. 
This submission has been developed in collaboration with a number of industry leaders and is being supported in this submission by Computer Associates International, and Talking Blocks, Inc., both key members of the WS-DM TC. This submission will be moved for acceptance by the WS-DM TC for consideration in the WS-DM TC standards on Thursday, August 21, 2003. The general principles behind the CBE specification were presented to the WS-DM TC on July 28 [2003] during the WS-DM TC face to face meeting..." See also "Management Protocol Specification." [source .DOC]
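
The 3-tuple at the heart of the CBE (reporting component, affected component, situation) can be sketched as a small XML event. The element and attribute names below merely paraphrase the property names quoted above (sourceComponentId, reporterComponentId, situation); the real schema in the IBM submission is considerably richer and may differ in structure.

```python
import xml.etree.ElementTree as ET

def make_event(source: str, reporter: str, situation: str) -> str:
    """Render a CBE-style event as XML; names are illustrative, not the
    actual CBE schema."""
    event = ET.Element("CommonBaseEvent")
    # Affected component and reporting component may be the same.
    ET.SubElement(event, "sourceComponentId", {"component": source})
    ET.SubElement(event, "reporterComponentId", {"component": reporter})
    sit = ET.SubElement(event, "situation")
    sit.text = situation
    return ET.tostring(event, encoding="unicode")

print(make_event("db-server-01", "monitor-agent", "ConnectSituation"))
```

Because the spec is prescriptive only about the interchange format, a producer could store events however it likes internally and render this XML on demand.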

  • [August 20, 2003] "[Unicode] Identifier and Pattern Syntax." By Mark Davis. Public review draft from the Unicode Technical Committee. Reference: Proposed Draft, Unicode Technical Report #31. Date: 2003-07-18. "This document describes specifications for recommended defaults for the use of Unicode in the definitions of identifiers and in pattern-based syntax. It incorporates the Identifier section of Unicode 4.0 (somewhat reorganized) and a new section on the use of Unicode in patterns. As a part of the latter, it presents recommended new properties for addition to the Unicode Character Database. Feedback is requested both on the text of the new pattern section and on the contents of the proposed properties... A common task facing an implementer of the Unicode Standard is the provision of a parsing and/or lexing engine for identifiers. To assist in the standard treatment of identifiers in Unicode character-based parsers, a set of specifications is provided here as a recommended default for the definition of identifier syntax. These guidelines are no more complex than current rules in the common programming languages, except that they include more characters of different types. In addition, this document provides a proposed definition of a set of properties for use in defining stable pattern syntax: syntax that is stable over future versions of the Unicode Standard. There are many circumstances where software interprets patterns that are a mixture of literal characters, whitespace, and syntax characters. Examples include regular expressions, Java collation rules, Excel or ICU number formats, and many others. These patterns have been very limited in the past, and forced to use clumsy combinations of ASCII characters for their syntax. As Unicode becomes ubiquitous, some of these will start to use non-ASCII characters for their syntax: first as more readable optional alternatives, then eventually as the standard syntax. 
For forwards and backwards compatibility, it is very advantageous to have a fixed set of whitespace and syntax code points for use in patterns. This follows the recommendations that the Unicode Consortium made regarding completely stable identifiers, and the practice that is seen in XML 1.1. In particular, the consortium committed to not allocating characters suitable for identifiers in the range 2190..2BFF, which is being used by XML 1.1. With a fixed set of whitespace and syntax code points, a pattern language can then have a policy requiring all possible syntax characters (even ones currently unused) to be quoted if they are literals. This policy preserves the freedom to extend the syntax in the future by using those characters. Past patterns on future systems will always work; future patterns on past systems will signal an error instead of silently producing the wrong results..." Note: See also the 2003-08-20 notice from Rick McGowan (Unicode, Inc.), said to be relevant to anyone dealing with programming languages, query specifications, regular expressions, scripting languages, and similar domains: "The Proposed Draft UTR #31: Identifier and Pattern Syntax will be discussed at the UTC meeting next week. Part of that document (Section 4) is a proposal for two new immutable properties, Pattern_White_Space and Pattern_Syntax. As immutable properties, these would not ever change once they are introduced into the standard, so it is important to get feedback on their contents beforehand. The UTC will not be making a final determination on these properties at this meeting, but it is important that any feedback on them is supplied as early in the process as possible so that it can be considered thoroughly. The draft is found [online] and feedback can be submitted as described there..." General references in "XML and Unicode."
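
The identifier recommendations discussed above are visible in practice in modern languages. Python 3's identifier rules, for instance, are based on the same UAX #31 defaults: letters from any script may appear in an identifier, while whitespace, syntax characters, and leading digits are excluded. (Python 3 postdates this draft; the example is purely illustrative.)

```python
# str.isidentifier() applies Unicode identifier rules: non-ASCII letters
# are fine, but pattern-syntax characters ('-'), whitespace, and a
# leading digit all disqualify a string.
samples = ["count", "café", "数量", "my-var", "2fast", "x y"]
for s in samples:
    print(f"{s!r}: identifier? {s.isidentifier()}")
```

A lexer built on these defaults accepts the first three samples and rejects the rest, which is exactly the cross-language consistency the report aims for.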

  • [August 19, 2003] "XML for e-Business." By Eve Maler (Sun Microsystems, Inc.). Tutorial presentation. July 2003. 105 pages/slides. ['This tutorial was delivered at the CSW Informatics XML Summer School on 28-July-2003, and subsequently edited slightly to incorporate fixes, notes, and timestamps.'] The presentation provides an opportunity to: "(1) learn about the Universal Business Language (UBL) and its significance to, and place in, modern e-business; (2) study UBL's design center and underlying model -- a model that may be useful for many information domains; (3) study UBL as an application of XML, and its lessons for other large XML undertakings; (4) take a look at some real UBL inputs and outputs along the way... UBL is an XML-based business language standard; it leverages knowledge from existing EDI and XML B2B systems; it applies across all industry sectors and domains of electronic trade; it's modular, reusable, and extensible in XML-aware ways; it's non-proprietary and committed to freedom from royalties; it is intended to become a legally recognized standard for international trade... The Electronic Business XML initiative (ebXML) is a joint 18-month effort of OASIS and UN/CEFACT, concluding in May 2001. The work continues in several forums today with over 1000 international participants; the ebXML vision is for a global electronic marketplace where enterprises of any size, anywhere, can find each other electronically and conduct business by exchanging XML messages... The ebXML stack for business web services includes: Message contextualization [Context methodology]; Standard messages [Core components]; Business agreements [CPPA]; Business processes [BPSS]; Packaging/transport [ebMS]... The ebXML Core Components Technical Specification is at version 1.90; it is syntax neutral and ready for mapping. This includes the Context Methodology work, which likewise is syntax neutral rather than syntax bound. 
UBL proposes to flesh out the ebXML stack, pairing the UBL Context Methodology with the ebXML Context Methodology and the UBL Library with the ebXML Core Components... The ebXML Core Components substrate allows for correlation between different syntactic forms of business data that has the same meaning and purpose; UBL is striving to use the CCTS metamodel accurately... UBL offers important and interesting solutions: as a B2B standard, it is user-driven, with deep experience and partnership resources to call on; it is committed to truly global trade and interoperability; its standards process is transparent. As an XML application, it is layered on existing successful standards; it is tackling difficult technical problems without losing sight of the human dimension..." [adapted/excerpted from the .PPT version] See the canonical source files in OpenOffice and Microsoft PPT formats. On UBL, see: (1) OASIS Universal Business Language TC website; (2) general references in "Universal Business Language (UBL)." [cache .PPT]

  • [August 19, 2003] "Turn User Input into XML with Custom Forms Using Office InfoPath 2003." By Aaron Skonnard. In Microsoft MSDN Magazine (September 2003). "Office InfoPath 2003 is a new Microsoft Office product that lets you design your own data collection forms that, when submitted, turn the user-entered data into XML for any XML-supporting process to use. With an InfoPath solution in place, you can convert all those commonly used paper forms into Microsoft Office-based forms and end the cycle of handwriting and reentering data into your systems. Today organizations are beginning to realize the value of the mountains of data they collect every day and how hard that data is to access, and they are striving to mine it effectively. InfoPath will aid in the design of effective data collection systems... The Web Services platform builds on XML by using it for information exchange over protocols like TCP, HTTP, SMTP, and potentially many others. Combining XML with these open protocols makes it possible to build an infrastructure for sharing information between business processes in a standard way. All that is needed to reap the benefits across the enterprise is an easy way to get previously hand-written data into XML. InfoPath, previously known as XDocs, is a new member of the Microsoft Office System of products that lets you do just that. InfoPath provides an environment for designing forms built around XML Schema or Web Services Description Language (WSDL) definitions. In a matter of seconds, you can use InfoPath to build a new form that's capable of outputting XML documents conforming to an XML Schema Definition (XSD) or communicating with a Web Service conforming to a WSDL definition. XML Web Services and InfoPath can be used together to replace legacy information-gathering techniques. InfoPath is chock-full of functionality, including rich client functionality and off-line capabilities that surpass those of traditional Web Forms. 
Best of all, it's much easier to use than traditional Web Services development environments... InfoPath makes it easy for anyone to design, publish, and fill out electronic forms based on XML and Web Services technology, which offers many advantages over traditional techniques used today... This article will focus on the main features of InfoPath..."
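
The core InfoPath idea, user-entered form fields becoming an XML document that downstream processes consume, can be sketched in a few lines. The form fields and element names below are invented for illustration; InfoPath itself emits XML conforming to a designer-supplied XSD or WSDL definition rather than this ad hoc shape.

```python
import xml.etree.ElementTree as ET

def form_to_xml(fields: dict) -> str:
    """Serialize collected form fields as a simple XML document.
    Element names are hypothetical, not an InfoPath schema."""
    root = ET.Element("expenseReport")
    for name, value in fields.items():
        ET.SubElement(root, name).text = str(value)
    return ET.tostring(root, encoding="unicode")

# Values a user might have typed into the form.
xml_doc = form_to_xml({"employee": "A. Skonnard", "total": "125.40"})
print(xml_doc)
```

Once the data is XML, any schema-aware process (validation, a Web service call, a database load) can pick it up without re-keying, which is the cycle the article says InfoPath ends.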

  • [August 19, 2003] "The Trojan Document." By Erika Brown. In Forbes Magazine (August 18, 2003). "Bruce Chizen [President and Chief Executive Officer of Adobe Systems Incorporated] is pushing hard to make Adobe more relevant to big business. It's a bold bet that puts the company directly in Microsoft's way. Contractors used to dread getting approval for a Wal-Mart parking lot off a Kansas highway. They had to drive to a local Department of Transportation office, fill out multiple forms and photocopy blueprints; the stack of paper would be mailed to a district office, then to engineering, then to state-agency designers and back again. The ordeal took two months. Six months ago the agency began using new forms software from Adobe Systems, and the process has been cut to three weeks. Now a transportation official scans the contractor's designs, saves the file in Adobe's Portable Document Format, or PDF, and submits it online with the proper electronic forms. As each state official puts his digital signature to the plan, the file sends itself to the next person for approval and ultimately to a database at the Department of Transportation headquarters. 'These forms are so smart they're like their own applications,' says Cynthia Wade, head of technology for the department... Chizen spent the last two years redesigning products, replacing sales staff and buying up smaller firms to gird Adobe for a new assault on the corporate market. The grand plan: Convince companies that every single document they produce should be turned into an Adobe PDF. It used to be that a document created in Acrobat was the only thing that could become a PDF. Now, with Adobe's new software, a Word memo, an Excel spreadsheet, a Web site, a videoclip or a hybrid combination of all four formats can be converted to a PDF. 
Adobe has begun selling software that gives any of these documents the ability to be read by Adobe Reader, as well as tell company servers where to send itself, who can read it, who has made changes to it and what data within it should go into which part of the database. 'The ubiquity of Reader means we can build more applications to take advantage of that platform,' says Chizen. 'It's like what Microsoft has in Office.' Well, not quite. But at least Chizen is showing real chutzpah in stepping between Microsoft and its customers. Adobe has managed to get by for two decades without incurring the wrath of Redmond. Now Microsoft is paying attention. Its new electronic-forms product, InfoPath, is due out later this year with the next version of Office. Like Acrobat, it will use the Internet programming language XML to make forms more interactive but, in typical Microsoft fashion, InfoPath is designed to work within Office and doesn't read Adobe forms... The promotions group at Macy's West has been testing the new Acrobat Pro for two months. Designers, art directors and buyers are huddling over ad copy and catalog pages online. Michael Margolies, Macy's technology director, expects the proofing of print materials will go from days to minutes. Pfizer is using Adobe software to manage its clinical trials. A doctor types into a PDF form on Pfizer's Web site, making the data on patient progress work in real time. Chizen's hunt for new revenue is off to a good start..."

  • [August 19, 2003] "Acrobat Challenges InfoPath. Adobe Takes a Giant Step Forward Into Direct Competition with Microsoft." By Jon Udell. In InfoWorld (August 15, 2003). "I've always regarded Adobe's PDF as an odd creature, neither fish nor fowl. I'm intensely annoyed when I have to view a multicolumn PDF document onscreen. Some monitors rotate into a portrait orientation, but mine -- and probably yours -- are landscape devices. Every time I scroll from the bottom of column No. 1 to the top of column No. 2, I taste the worm at the PDF apple's core... So I was delighted to learn, in a recent conversation with Adobe senior product manager Chuck Myers, that the ongoing integration of XML into PDF is about to shift into high gear... The backstory includes initiatives such as XMP (Extensible Metadata Platform), which embeds XML metadata in PDF files; and Tagged PDF, which enables PDF documents to carry the structural information that can be used, for example, to reflow a three-column portrait layout for landscape mode. So far, though, XML data hasn't been a first-class citizen of the PDF file -- especially those PDF files that represent business forms. Acrobat 5 does support interactive forms. It also has a data interchange format called FDF (Forms Data Format), for which an XML mapping exists. But as Myers wryly observes, 'There's one schema, from Adobe, we hope you like it.' Acrobat 6 blasts that limitation out of the water. It supports arbitrary customer-defined schemas, Myers told me. That's a huge step forward, and brings Acrobat into direct competition with Microsoft's forthcoming InfoPath. Look at Adobe's interactive income tax form. That document is licensed, by the Document Server for Reader Extensions, to unlock the form fill-in and digital signature capabilities of the reader. Filling in a form and then signing it digitally is an eye-opening experience.
It's more interesting now that the form's data is schema-controlled and, Myers adds, can flow in and out by way of WSDL-defined SOAP transactions. The only missing InfoPath ingredient is a forms designer that nonprogrammers can use to map between schema elements and form fields. That's just what the recently announced Adobe Forms Designer intends to be. I like where Adobe is going. The familiarity of paper forms matters to lots of people..." See: (1) "Extensible Metadata Platform (XMP)"; (2) "Enhanced Adobe XML Architecture Supports XML/PDF Form Designer and XML Data Package (XDP)"; (3) "Microsoft Office 11 and InfoPath [XDocs]."
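To make Myers's "one schema" remark concrete, the contrast can be sketched roughly as follows. The XFDF fragment follows the general shape of Adobe's XML mapping for FDF; the second fragment shows the same data under a customer-defined schema, as Acrobat 6 permits. All field names, values, and the custom namespace are invented for illustration.

```xml
<!-- (a) FDF's XML mapping (XFDF): Adobe's fixed vocabulary.
     Details illustrative; field name and value are invented. -->
<xfdf xmlns="http://ns.adobe.com/xfdf/">
  <fields>
    <field name="income"><value>42000</value></field>
  </fields>
</xfdf>

<!-- (b) Acrobat 6: the same data under a customer-defined schema
     (namespace and element names invented for illustration). -->
<taxReturn xmlns="urn:example:tax-forms">
  <income>42000</income>
</taxReturn>
```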

  • [August 19, 2003] "Hands-on XForms. Simplifying the Creation and Management of XML Information." By Micah Dubinko (Cardiff Software). In XML Journal Volume 4, Issue 8 (August 2003). "Organizations have evolved a variety of systems to deal with the increasing levels of information they must regularly process to remain competitive. Business Process Management (BPM) systems presently take a wide variety of shapes, often including large amounts of ad hoc scripting and one-off implementations of business rules. Such systems tend to be developed incrementally, and pose a significant obstacle to continued development and maintenance. A World Wide Web Consortium (W3C) specification called XForms aims to change this situation. This article compares XForms to ad hoc solutions to produce a real-life application: the creation of XML purchase orders... Of the several efforts that are under way to define XML vocabularies for business, the most promising seems to be UBL, the Universal Business Language. At the expense of being slightly verbose, the vocabularies defined by UBL do a remarkable job of capturing all of the minor variations that occur in real-world business documents across diverse organizations. For the sample application I chose a purchase order... Microsoft InfoPath, currently in beta as part of Office System 2003, offers a better user experience than HTML forms, but still relies heavily on scripting through an event-driven model. As the remainder of this article will show, a declarative approach as used in XForms can eliminate a substantial amount of complexity from the overall solution. Since XForms is designed to be used in concert with a 'host language,' I chose a combination of XHTML 1.1 and XForms for the solution, even though a DTD for the combined language isn't available... The two main challenges facing developers deploying XForms solutions today are deciding on a host language and configuring stylesheets for all target browsers. 
Eventually XHTML 2.0, including XForms as the forms module, will be finalized, providing a known and stable target for browsers to implement and designers to write toward. Until that time, however, a reasonable approach is to use XForms elements within XHTML 1.0 or 1.1, without the luxury of DTD validation... XForms has made vast strides in 2003, becoming a technology suitable for production use by early adopters. Already, businesses are using XForms to produce real documents. The combination of an open standard with a wide variety of both free and commercial browsers makes a powerful business case for deploying XForms solutions. Unlike many other XML standards, XForms has remained small, simple, and true to its roots, addressing only well-known and well-understood problems, and providing a universal means to express solutions to these problems. Part of the appeal of XForms is the reuse of proven technologies, such as XPath, for which developers are more willing to invest the time necessary for learning. XForms can also leverage existing XML infrastructure, including XML Schema and Web services components..." A fuller treatment is presented in "UBL in XForms: A Worked Example." See also: (1) W3C XForms: The Next Generation of Web Forms; (2) general references in "XML and Forms." [alt URL]
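The declarative approach the article advocates can be sketched as an XHTML 1.1 host document carrying an XForms model. This is a hedged sketch only: the purchase-order element names are simplified stand-ins, not the actual UBL vocabulary, and the submission URL is invented. Note that the quantity constraint is declared on a bind, with no event-driven script anywhere.

```xml
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:xforms="http://www.w3.org/2002/xforms"
      xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <head>
    <title>Purchase Order</title>
    <xforms:model>
      <xforms:instance>
        <PurchaseOrder xmlns="">
          <BuyerName/>
          <Quantity>1</Quantity>
        </PurchaseOrder>
      </xforms:instance>
      <!-- Declarative typing and constraint: no scripting needed -->
      <xforms:bind nodeset="/PurchaseOrder/Quantity"
                   type="xsd:integer"
                   constraint=". &gt; 0"/>
      <xforms:submission id="po" method="post"
                         action="http://example.org/orders"/>
    </xforms:model>
  </head>
  <body>
    <xforms:input ref="/PurchaseOrder/BuyerName">
      <xforms:label>Buyer</xforms:label>
    </xforms:input>
    <xforms:input ref="/PurchaseOrder/Quantity">
      <xforms:label>Quantity</xforms:label>
    </xforms:input>
    <xforms:submit submission="po">
      <xforms:label>Submit order</xforms:label>
    </xforms:submit>
  </body>
</html>
```

On submission, the form controls write back into the instance and the whole XML purchase order is posted, which is what lets the same markup serve as both form and document.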

  • [August 19, 2003] "XForms Building Blocks." By Micah Dubinko (Cardiff Software). Draft Chapter 2 (20 pages) from XForms Essentials: Gathering and Managing XML Information, [to be] published by O'Reilly & Associates as part of the Safari Bookshelf. 'More Than Forms; A Real-World Example [based upon UBL]; Host Language Issues; Linking Attributes. "This chapter goes into greater detail on the concepts underlying the design of XForms, as well as practical issues that come into play, including a complete, annotated real-world example. A key concept is the relationship between forms and documents, which will be addressed first. After that, this chapter elaborates on the important issue of host languages and how XForms integrates them... Despite the name, XForms is being used for many applications beyond simple forms. In particular, creating and editing XML-based documents is a good fit for the technology. A key advantage of XML-based documents over, say, paper or word processor templates, is that an entirely electronic process eliminates much uncertainty from form processing. Give average 'information workers' a paper form, and they'll write illegibly, scribble in the margins, doodle, write in new choices, and just generally do things that aren't expected. All of these behaviors are manually intensive to patch up, in order to clean the data to a point where it can be placed into a database. With XForms, it is possible to restrict the parts of the document that a given user is able to modify, which means that submitted data needs only a relatively light double-check before it can be sent to a database. One pitfall to avoid, however, is a system that is excessively restrictive, so that the person filling the form is unable to accurately provide the needed data. When that happens, users typically either give bad information, or avoid the electronic system altogether..." 
About the book XForms Essentials: "The use of forms on the web is so commonplace that most user interactions involve some type of form. XForms -- a combination of XML and forms -- offers a powerful alternative to HTML-based forms. By providing excellent XML integration, including XML Schema, XForms allows developers to create flexible, web-based user-input forms for a wide variety of platforms, including desktop computers, handhelds, information appliances, and more. XForms Essentials is an introduction and practical guide to the new XForms specification. Written by Micah Dubinko, a member of the W3C XForms working group and an editor of the specification, the book explains the how and why of XForms, showing readers how to take advantage of them without having to write their own code. You'll learn how to integrate XForms with both HTML and XML vocabularies, and how XForms can simplify the connection between client-based user input and server-based processing. XForms Essentials begins with a general introduction to web forms, including information on history and basic construction of forms. The second part of the book serves as a reference manual to the XForms specification. The third section offers additional hints, guidelines, and techniques for working with XForms..." See also the preceding bibliographic entry, online version of the book, and the author's XML and XForms blog.
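The chapter's point about restricting which parts of the document a user may modify is itself expressed declaratively in XForms, through model item properties on bind elements. A minimal hedged sketch (node names hypothetical, not from the book's example):

```xml
<!-- Lock down computed or approval-only parts of the instance;
     everything not bound this way stays editable. -->
<xforms:bind nodeset="/PurchaseOrder/OrderTotal" readonly="true()"/>
<xforms:bind nodeset="/PurchaseOrder/ApprovalCode" relevant="false()"/>
```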

  • [August 19, 2003] "Object-Oriented XSLT: A New Paradigm for Content Management." By Pietro Michelucci. In XML Journal Volume 4, Issue 8 (August 2003). "What could be better for managing content than separating data from presentation? How about separating data from data? XSLT can actually be used to allow for different levels of data abstraction; this can reduce the complexity of managing Web content by an order of magnitude and facilitate code reuse. What I'm talking about here is object-oriented XSLT (OOX)... Isolating content from presentation was the original purpose of stylesheet languages. In the conventional approach, there is just one data layer (XML) and one presentation layer (HTML), with XSL Transformations (XSLT) in between. This two-layer architecture simplifies Web site management by allowing content providers to edit their data without concern for stylistic issues, and, conversely, by permitting graphics designers to set the visual tone without regard for specific content. While the two-layer model has been fruitful, XSL Transformations (XSLT) empower us to extend data abstraction through the use of multiple data layers. Toward this end, I have created a general-purpose XSLT stylesheet that you can easily use to apply multiple serial XSL transformations to an XML data document. OOX, like OOP, isn't just about stringing together multiple transformations, using extra data layers, or treating schemas like interfaces. It's an approach to content management and Web architecture that involves the judicious application of data abstraction and the reuse of transformation objects. When applied strategically, OOX can result in a low-maintenance Web site that is quickly built, logically organized, and robust to structural content changes... there are feature-rich software tools on the market to facilitate Web development and content management. Many of these tools function by storing proprietary metadata, which describe both structural and thematic aspects of the Web site. 
For example, metadata might be used to programmatically maintain navigation links on all pages of a Web site. These metadata are not directly accessible to the Web developer, so even though the software uses them internally for content management, they may impede fine-level control. Furthermore, migrating from one of these content management tools to another can prove vexing because the tools often do not recognize each other's metadata. In contrast to most content management tools, OOX relies exclusively upon W3C-based technologies. Therefore, in adopting OOX as a Web development paradigm, it is possible to exercise complete control over your Web site without getting locked into proprietary technology. Furthermore, flexible tools can work in concert with OOX development. OOX may not be suitable for all developers. But if you have dabbled in XML and aren't afraid to explore the power afforded by XSLT, you might be surprised at what the latest addition to alphabet soup has to offer for content management..." [alt URL]
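The serial-transformation idea at the heart of OOX can be sketched with the standard javax.xml.transform (TrAX) API. The two inline stylesheets and their vocabularies below are invented stand-ins for the article's data and presentation layers, not its actual code: the first transform maps the raw data vocabulary to an intermediate data layer, and the second maps that layer to HTML.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class SerialTransform {
    // Layer 1: raw data vocabulary -> intermediate vocabulary (invented)
    static final String XSL1 =
        "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>" +
        " <xsl:template match='/catalog'>" +
        "  <page><title><xsl:value-of select='@name'/></title></page>" +
        " </xsl:template>" +
        "</xsl:stylesheet>";

    // Layer 2: intermediate vocabulary -> HTML presentation (invented)
    static final String XSL2 =
        "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>" +
        " <xsl:output method='html'/>" +
        " <xsl:template match='/page'>" +
        "  <h1><xsl:value-of select='title'/></h1>" +
        " </xsl:template>" +
        "</xsl:stylesheet>";

    // Apply one XSL transformation to an XML string, returning the result.
    static String transform(String xml, String xsl) throws Exception {
        Transformer t = TransformerFactory.newInstance()
            .newTransformer(new StreamSource(new StringReader(xsl)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String data = "<catalog name='Autumn Catalog'/>";
        String intermediate = transform(data, XSL1); // data layer -> data layer
        String html = transform(intermediate, XSL2); // data layer -> presentation
        System.out.println(html);
    }
}
```

Because each stage reads and writes plain XML, any stage can be reused against a different source document that conforms to the same intermediate vocabulary, which is the "transformation object" reuse the article describes.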

  • [August 19, 2003] "XML MetaData for Simplifying Web Development." By George M. Pieri and Arnoll Solano. In JavaPro Magazine (August 2003). ['Achieve more efficient code development and maintenance while freeing yourself from object properties and getting new functionality without recompiling.'] "Web application development has become time consuming. Making a simple change to display a new database field on screen often involves recompiling business classes and then all the resources that use those business classes. You can simplify this Java development process by using XML to deliver your data and to describe the business objects that are responsible for building the data. Using metadata to describe your business objects and presentation components can speed up development... Much of application development revolves around building business objects. These objects usually represent the entities of the system such as customers, invoices, and products. The responsibilities of a typical business object are to retrieve, add, update, delete, and validate data. The data usually comes from a data source, which can be a database such as Microsoft SQL Server or Oracle. In the applications that we developed we use the term databean to describe the typical business object because its primary responsibilities revolve around data. In building business objects, or databeans, it is important to make them stateless to free you from the time-consuming process of maintaining properties. Stateless objects have no properties or instance variables that maintain state, which saves you from having to add get and set methods every time an end user requests a new column to be displayed on one of your Web pages. All the data is returned each time a method is called. This characteristic also has the extra benefit of improving performance because the object can be reused quickly. It is possible to use XML to return the data without having business object properties. 
The start and end tags around the data field represent the field name, which frees you from having to maintain field names ... Representing your visual components with metadata has many advantages. It enables you to add a new column to your view XML and within minutes have it show up automatically on the grid. No longer do you have to modify the HTML and then make sure that everything lines up correctly. The entire color of the grid can be changed just by modifying the view metadata along with fonts and many other attributes. It is also easy to identify which columns are used for which screens and to make modifications quickly. Using XML to serve up your data helps you have business objects without properties, which speeds up code development and, more importantly, code maintenance. In addition, using XML to describe your data has many more benefits. Metadata can be used to describe your business objects by abstracting their functionality out into DataBean.xml, which allows you to change the SQL behind your business objects without recompiling code. It can also be helpful in describing your presentation layer such as menus and grids that are commonly developed. We have used these approaches successfully and have greatly reduced our development time and have become more efficient at making code modifications..." [alt URL]
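A minimal sketch of the "stateless databean" idea, under the article's own term but with invented class and field names; the canned rows stand in for a JDBC result set. Because the tags carry the field names, exposing a new column means touching only the emitted XML (or, in the article's fuller scheme, the SQL in DataBean.xml), never adding accessor methods.

```java
public class CustomerBean {
    // Stateless "databean": no properties, no getters/setters.
    // Every call returns all data as XML; the start/end tags carry
    // the field names, so a new column never requires a new accessor.
    public static String getCustomers() {
        // Canned rows standing in for a database result set.
        String[][] rows = {
            {"1001", "Acme Corp"},
            {"1002", "Globex"},
        };
        StringBuilder xml = new StringBuilder("<customers>");
        for (String[] r : rows) {
            xml.append("<customer><id>").append(r[0])
               .append("</id><name>").append(r[1])
               .append("</name></customer>");
        }
        return xml.append("</customers>").toString();
    }

    public static void main(String[] args) {
        System.out.println(getCustomers());
    }
}
```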

  • [August 19, 2003] "Discover Key Features of DOM Level 3 Core, Part 1. Manipulating and Comparing Nodes, Handling Text and User Data." By Arnaud Le Hors and Elena Litani (IBM). From IBM developerWorks, XML zone. August 19, 2003. ['In this two-part article, the authors present some of the key features brought by the W3C Document Object Model (DOM) Level 3 Core Working Draft and show you how to use them with examples in Java code. This first part covers manipulating nodes and text, and attaching user data onto nodes.'] "The Document Object Model (DOM) is one of the most widely available APIs. It provides a structural representation of an XML document, enabling users to access and modify its contents. The DOM Level 3 Core specification, which is now in Last Call status, is the latest in a series of DOM specifications produced by the W3C. It provides a set of enhancements that make several common operations much simpler to perform, and make possible certain things you simply could not do before. It also supports the latest version of different standards, such as Namespaces in XML, XML Information Set, and XML Schema, and thus provides a more complete view of the XML data in memory. The first part of this article covers operations on nodes; the second part focuses on operations on documents and type information, and explains how to use DOM in Xerces. We show you how DOM Level 3 Core can make your life easier when working with nodes, whether it is renaming a node, moving nodes from one document to another, or comparing them. We also show you how DOM Level 3 Core lets you access and modify the text content of your document in a more natural way than having to deal with Text nodes that tend to get in the way. Finally, we explain to you how you can use the DOM Level 3 Core to more easily maintain your own structure that is associated with the DOM... DOM Level 3 can do a lot of work for you. First, it allows you to store a reference to your application object on a Node. 
The object is associated with a key that you can use to retrieve that object later. You can have as many objects on a Node as you want; all you need to do is use different keys. Second, you can register a handler that is called when anything that could affect your own structure occurs. These are events such as a node being cloned, imported to another document, deleted, or renamed. With this, you can now much more easily manage the data you associate with your DOM. You no longer have to worry about maintaining the two in parallel. You simply need to implement the appropriate handler and let it be called whenever you modify your DOM tree. And you can do this with the flexibility of using a global handler or a different one on each node as you see fit. In any case, when something happens to a node on which you have attached some data, the handler you registered is called and provides you with all the information you need to update your own structure accordingly... In Part 2 [of the series], we will show you other interesting features of DOM Level 3 Core, such as how to bootstrap and get your hands on a DOMImplementation object without having any implementation-dependent code in your application, how the DOM maps to the XML Infoset, how to revalidate your document in memory, and how to use DOM Level 3 Core in Xerces..." Article also in PDF format. See: (1) W3C Document Object Model (DOM) website; (2) DOM Level 3 Core Issues List; (3) general references in "W3C Document Object Model (DOM)."
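The user-data mechanism the authors describe can be sketched with the DOM Level 3 Core API as it later shipped in the JDK's bundled parser; the key name and attached value here are invented. The handler re-registers the data on the destination node when the implementation reports a clone, which is exactly the "let it be called whenever you modify your DOM tree" pattern from the article.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.UserDataHandler;

public class UserDataDemo {
    // Attach application data to a node under a key, and keep it attached
    // across cloning by re-registering it from the UserDataHandler callback.
    static String cloneKeepsUserData() throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element item = doc.createElement("item");
        doc.appendChild(item);

        UserDataHandler keep = new UserDataHandler() {
            public void handle(short op, String key, Object data,
                               Node src, Node dst) {
                if (op == UserDataHandler.NODE_CLONED && dst != null) {
                    dst.setUserData(key, data, this); // copy onto the clone
                }
            }
        };
        item.setUserData("app:status", "reviewed", keep);

        Node clone = item.cloneNode(true); // handler fires with NODE_CLONED
        return (String) clone.getUserData("app:status");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(cloneKeepsUserData());
    }
}
```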

  • [August 19, 2003] "Low Bandwidth SOAP." By Jeff McHugh. From O'Reilly WebServices.xml.com (August 19, 2003). "With the mobile phone industry reporting better than expected sales, and news that, by the end of this year, smart phones are expected to outsell hand-held computers, it should come as no surprise that wireless application development is on the rise. Sun recently announced that by the end of 2004 there may well be more than 200,000,000 Java-enabled mobile handsets. Yet, with all the attention being paid to these microdevices (i.e., low resource mobile devices), it's surprising to learn that a developer wishing to build a wireless application using XML, SOAP, and web services is left behind. Why is this? First, a microdevice by definition has an extremely limited amount of memory. Second, traditional packages such as Xerces (for XML) and Axis (for SOAP) are far too large and resource-intensive to work on microdevices. An examination of the Xerces.jar file should amply demonstrate this fact; it's over one megabyte in size. Microdevices are simply too small to be expected to work with packages originally designed for desktop clients and servers. Fortunately this issue is well recognized by the larger wireless community. Sun, in particular, is currently finalizing JSR 172, a specification that addresses the use of XML, SOAP, and web services on microdevices. The downside is that, given past experience, it's not unreasonable to expect at least ten to twelve months to pass before finalization and widespread implementation. But that shouldn't deter anyone wishing to create a wireless application today, for doing so is quite possible using a powerful, free, and open source package readily available from Enhydra.org. This article explains the basics of building web service servers and clients using Enhydra's KSOAP implementation. A key ingredient for any web services application is SOAP. 
The problem with developing a wireless SOAP/XML application -- and the reason for the above-mentioned JSR 172 -- revolves around the following issues. First, the common XML and SOAP packages currently available are quite large and contain hundreds of classes. Second, these packages depend on features of the Java runtime that simply don't exist on a microdevice. I'm thinking specifically about the Connected Limited Device Configuration (CLDC) specification which did away with nearly the entire core set of Java classes normally present in the J2EE and J2SE distributions: AWT, Swing, Beans, Reflection, and most java.util and java.io classes have simply disappeared. The purpose of this 'bare bones' Java runtime is to accommodate the limited footprint of the KVM -- a low-memory virtual machine running on a microdevice. This is where Enhydra.org comes to the rescue. KSOAP and KXML are two packages available from the web site designed to enable SOAP and XML applications to run within a KVM. They are thin, easy to use, and well documented. Combined into a single jar file, they take up less than 42K... By leveraging KSOAP for your wireless application, you can help make it a more powerful and reliable one. Since much of the infrastructure is provided, you as the developer can spend more time focusing on the important aspects of development such as the business logic..."
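KSOAP's own classes are not reproduced here; as a hedged, stdlib-only sketch of what such a thin client has to put on the wire, the following builds a minimal SOAP 1.1 envelope by hand and then parses the method parameter back out, as a server-side handler might. The service namespace, operation name, and parameter are invented for illustration.

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class SoapEnvelopeDemo {
    // Hand-build a minimal SOAP 1.1 request. A thin toolkit like KSOAP
    // wraps exactly this kind of envelope construction in a small API.
    static String buildRequest(String symbol) {
        return
          "<soap:Envelope xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/'>" +
          "<soap:Body>" +
          "<getQuote xmlns='urn:example:quotes'>" +   // invented service ns
          "<symbol>" + symbol + "</symbol>" +
          "</getQuote>" +
          "</soap:Body></soap:Envelope>";
    }

    // Extract the method parameter from the envelope.
    static String readSymbol(String envelope) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);
        Document doc = f.newDocumentBuilder()
            .parse(new ByteArrayInputStream(envelope.getBytes("UTF-8")));
        return doc.getElementsByTagNameNS("urn:example:quotes", "symbol")
                  .item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        String env = buildRequest("XYZ");
        System.out.println(readSymbol(env)); // XYZ
    }
}
```

The envelope itself is a few hundred bytes; the point of the KSOAP/KXML packages is to produce and consume exactly this format within a KVM's footprint.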

  • [August 19, 2003] "J2ME Web Services Specification 1.0." JSR-000172. Proposed Final Draft 2. By Jon Ellis and Mark Young. Date: July 14, 2003, Revision 10. Release date: July 18, 2003. Copyright (c) 2003 Sun Microsystems, Inc. 86 pages. The specification has been developed under the Java Community Process (JCP) version 2.1 as Java Specification Request 172 (JSR-172). Comments to jsr-172-comments@sun.com. The specification builds on the work of others, specifically JSR-63 Java API for XML Processing and JSR-101 Java API for XML based RPC. "The broad goal is to provide two new capabilities to the J2ME platform: access to remote SOAP- and XML-based web services, and the parsing of XML data. There is great interest and activity in the Java community in the use of web services standards and infrastructures to provide the programming model for the next generation of enterprise services. There is considerable interest in the developer community in extending enterprise services out to J2ME clients... The main deliverables of the JSR-172 specification are two new, independent, optional packages: (1) an optional package adding XML Parsing support to the platform. Structured data sent to mobile devices from existing applications will likely be in the form of XML. In order to avoid including code to process this data in each application, it is desirable to define an optional package that can be included with the platform; (2) an optional package to facilitate access to XML based web services from CDC and CLDC based profiles. The goal of the 'JAXP Subset' optional package is to define a strict subset wherever possible of the XML parsing functionality defined in JSR-063 JAXP 1.2 that can be used on the Java 2 Micro Edition Platform (J2ME). XML is becoming a standard means for clients to interact with backend servers, their databases and related services. 
With its platform neutrality and strong industry support, XML is being used by developers to link networked clients with remote enterprise data. An increasing number of these clients are based on the J2ME platform, with a broad selection of mobile phones, PDAs, and other portable devices. As developers utilize these mobile devices more to access remote enterprise data, XML support on the J2ME platform is becoming a requirement. In order to provide implementations that are useful on the widest possible range of configurations and profiles, this specification is treating the Connected Limited Device Configuration (CLDC) 1.0 as the lowest common denominator platform... JAX-RPC is a Java API for interacting with SOAP based web services. This specification defines a subset of the JAX-RPC 1.1 specification that is appropriate for the J2ME platform. The functionality provided in the subset reflects both the limitations of the platform (memory size and processing power) and the limitations of the deployment environment (low bandwidth and high latency). The web services API optional package should not depend on the XML parsing optional package; it must be possible to deliver the web services optional package independent of XML parsing... The WS-I Basic Profile (WS-I BP) provides recommendations and clarifications for many specifications referenced by this specification, and its superset -- JAX-RPC 1.1. To provide interoperability with other web services implementations, JAX-RPC Subset implementations must follow the recommendations of the WS-I BP where they overlap with the functionality defined in this specification..." See other details in: (1) the original JSR document; (2) the news story "IBM Releases Updated Web Services Tool Kit for Mobile Devices."
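The XML-parsing half of JSR-172 is, roughly, a strict subset of JAXP centered on event-driven (SAX-style) parsing, since holding a DOM tree is impractical in a KVM's memory budget. The same push-event model can be sketched on J2SE; the document and element names below are invented.

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class JaxpSubsetDemo {
    // Event-driven parsing: the parser pushes start/characters/end events
    // through a handler, so only the current element's text is ever held.
    static String elementText(String xml, final String wanted) throws Exception {
        final StringBuilder out = new StringBuilder();
        SAXParserFactory.newInstance().newSAXParser().parse(
            new ByteArrayInputStream(xml.getBytes("UTF-8")),
            new DefaultHandler() {
                boolean inWanted;
                public void startElement(String uri, String local,
                                         String qName, Attributes atts) {
                    if (qName.equals(wanted)) inWanted = true;
                }
                public void endElement(String uri, String local, String qName) {
                    if (qName.equals(wanted)) inWanted = false;
                }
                public void characters(char[] ch, int start, int len) {
                    if (inWanted) out.append(ch, start, len);
                }
            });
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(elementText("<msg><to>device</to></msg>", "to"));
    }
}
```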

  • [August 19, 2003] "Use of SAML in the Community Authorization Service." By Von Welch, Rachana Ananthakrishnan, Sam Meder, Laura Pearlman, and Frank Siebenlist. Working paper presented to the OASIS Security Services TC. August 19, 2003. 5 pages. "This document describes our use of SAML in the upcoming release of our Community Authorization Service. In particular we discuss changes we would like to see to SAML to address issues that have come up both with current and planned development. A virtual organization (VO) is a dynamic collection of resources and users unified by a common goal and potentially spanning multiple administrative domains. VOs introduce challenging management and policy issues, resulting from often complex relationships between local site policies and the goals of the VO with respect to access control, resource allocation, and so forth. In particular, authorization solutions are needed that can empower VOs to set policies concerning how resources assigned to the community are used -- without, however, compromising site policy requirements of the individual resources owners. The Community Authorization Service (CAS) is a system that we have developed as part of a solution to this problem. CAS allows for a separation of concerns between site policies and VO policies. Specifically, sites can delegate management of a subset of their policy space to the VO. CAS provides a fine-grained mechanism for a VO to manage these delegated policy spaces, allowing it to express and enforce expressive, consistent policies across resources spanning multiple independent policy domains. Both past and present CAS implementations build on the Globus Toolkit middleware for Grid computing, thus allowing for easy integration of CAS with existing Grid deployments. While our currently released implementation of CAS uses a custom format for policy assertions, the new version currently in development uses SAML to express policy statements. 
In this document we describe our use of SAML along with some issues we have encountered in its use..." Note on CAS: "Building on the Globus Toolkit Grid Security Infrastructure (GSI), Community Authorization Service (CAS) allows resource providers to specify coarse-grained access control policies in terms of communities as a whole, delegating fine-grained access control policy management to the community itself. Resource providers maintain ultimate authority over their resources but are spared day-to-day policy administration tasks (e.g., adding and deleting users, modifying user privileges)... The second Alpha release (alphaR2) of the Community Authorization Service includes a CAS server, CAS user and administrative clients as well as a CAS-enabled GridFTP server. Other portions of the Globus Toolkit (e.g., the Gatekeeper, MDS, replica management) are not CAS-enabled at this time and are not included in this release... The Globus Toolkit uses the Grid Security Infrastructure (GSI) for enabling secure authentication and communication over an open network. GSI provides a number of useful services for Grids, including mutual authentication and single sign-on... GSI is based on public key encryption, X.509 certificates, and the Secure Sockets Layer (SSL) communication protocol. Extensions to these standards have been added for single sign-on and delegation. The Globus Toolkit's implementation of the GSI adheres to the Generic Security Service API (GSS-API), which is a standard API for security systems promoted by the Internet Engineering Task Force (IETF)..." See general references in "Security Assertion Markup Language (SAML)." [cache]

  • [August 19, 2003] "Use of SAML for OGSA Authorization." From the Global Grid Forum OGSA Security Working Group. Submitted for consideration as a recommendations document in the area of OGSA authorization. GWD-R, June 2003. 16 pages. "This document defines an open grid services architecture (OGSA) authorization service based on the use of the security assertion markup language (SAML) as a format for requesting and expressing authorization assertions. Defining standard formats for these messages allows for pluggability of different authorization systems using SAML. There are a number of authorization systems currently available for use on the Grid as well as in other areas of computing, such as Akenti, CAS, PERMIS, and VOMS. Some of these systems are normally used in decision push mode by the application -- they act as services and issue their authorization decisions in the form of authorization assertions that are conveyed, or pushed, to the target resource by the initiator. Others are used in decision pull mode by the application -- they are normally linked with an application or service and act as a policy decision maker for that application, which pulls a decision from them... With the emergence of OGSA and Grid Services, it is expected that some of these systems will become OGSA authorization services as mentioned in the OGSA Security Roadmap. OGSA authorization services are Grid Services providing authorization functionality over an exposed Grid Service portType. A client sends a request for an authorization decision to the authorization service and in return receives an authorization assertion or a decision. A client may be the resource itself, an agent of the resource, or an initiator or a proxy for an initiator who passes the assertion on to the resource. This specification defines the use of SAML as a message format for requesting and expressing authorization assertions and decisions from an OGSA authorization service. 
This process can be single or multi-step. In single step authorization, all the information about the requested access is passed in one SAML request to the authorization service. In multi-step authorization, the initial SAML request passes information about the initiator, and subsequent SAML requests pass information about the actions and targets that the initiator wants to access. The SAML AuthorizationDecisionQuery element is defined as the message to request an authorization assertion or decision, the DecisionStatement element is defined as the message to return a simple decision, and the AuthorizationDecisionStatement as the method for expressing an authorization assertion. By defining standard message formats the goal is to allow these different authorization services to be pluggable to allow different authorization systems to be used interchangeably in OGSA services and clients..." See also "Security Architecture for Open Grid Services." [cache]
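The query half of that exchange might look roughly like the following SAML 1.1 request. This is a hedged sketch: the request ID, timestamp, subject DN, resource URI, and action value are all invented for illustration, not taken from the GGF document.

```xml
<samlp:Request xmlns:samlp="urn:oasis:names:tc:SAML:1.0:protocol"
               xmlns:saml="urn:oasis:names:tc:SAML:1.0:assertion"
               MajorVersion="1" MinorVersion="1"
               RequestID="rid-42" IssueInstant="2003-06-01T12:00:00Z">
  <!-- Ask: may this initiator perform this action on this resource? -->
  <samlp:AuthorizationDecisionQuery
      Resource="gridftp://storage.example.org/vo-data/">
    <saml:Subject>
      <saml:NameIdentifier
          Format="urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName"
          >CN=Jane Grid User,O=Example VO</saml:NameIdentifier>
    </saml:Subject>
    <saml:Action
        Namespace="urn:oasis:names:tc:SAML:1.0:action:rwedc">Read</saml:Action>
  </samlp:AuthorizationDecisionQuery>
</samlp:Request>
```

The service's response would carry either a decision or a full AuthorizationDecisionStatement assertion that the client can push on to the resource.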

  • [August 19, 2003] "Southwest Airlines Shows SAML's Promise." By Terry Allan Hicks, Ray Wagner, and Roberta J. Witty (Gartner). Gartner Research Note. Reference Number: FT-20-7798. August 13, 2003. ['Enterprises that manage large numbers of external identities should consider SAML-based cross-domain trust.'] "On 12 August 2003, Oblix, which develops identity-based security solutions, announced that Southwest Airlines has completed a large-scale implementation of the Oblix NetPoint identity management and access control product. The NetPoint implementation uses single sign-on based on the SAML standard to secure communications within Southwest's internal networks and with its suppliers and other external business partners. Southwest is one of the first to use SAML-enabled identity management on a large scale to perform cross-domain trust. This implementation also marks an early step in the movement toward federated identity management. However, this approach appears to deliver many of the real-world benefits of federated identity management without the use of additional technologies or standards, such as Liberty and WS-Federation. The Oblix system enables Southwest to vouch for the identity of an employee who accesses external partners' networks (for example, an aircraft mechanic looking for technical documentation). The partner grants session access to the Southwest employee without performing authentication on its own site. With this approach, Southwest enjoys enhanced employee productivity, and the external partner does not need to manage credentials for large numbers of outside users. According to some estimates, identity management solutions deliver an average three-year return on investment of as much as 300 percent. The use of standards such as SAML may drive up return on investment still further by offering cost savings for security administration, help desk support and application development..." Note also available in HTML format. 
See: (1) the announcement, "Southwest Airlines Deploys Industry Leading SAML Implementation On Oblix NetPoint. NetPoint SAML Solution Enables User Authentication and Authorization Across Corporate Extranets."; (2) general references in "Security Assertion Markup Language (SAML)."

  • [August 19, 2003] "A Web Services Strategy for Mobile Phones." By Nasseam Elkarra. From O'Reilly WebServices.xml.com (August 19, 2003). ['Planning to deploy information services on mobile phones? This article gives an overview of the various technologies and routes available for mobile web service development.'] "In most web services presentations, the speaker has a slide of a mobile phone, a PDA, a computer, and other devices communicating with a web service via SOAP and HTTP. You quickly envision a utopia of universal access but overlook the fact that your old Nokia doesn't do XML web services. If you have a J2ME-enabled phone connected to the Internet, it's very possible to interact with web services directly. However, the majority of mobile phone users do not have these phones, which means an alternative mode of access must be provided... VoiceXML is a language for building voice applications much like those you hear when calling customer service hotlines. It is an XML-based standard developed by the W3C's Voice Browser Working Group. Most VoiceXML developer portals give you access to a phone number for testing your application; however, VoiceXML is not limited to phones and can actually be accessed by any VoiceXML-enabled client. This client can be the usual phone, but it could also be an existing Web browser with a built-in VoiceXML interpreter. A good example of this is the multimodal browser being developed by IBM and Opera based on the XHTML+Voice (X+V) proposed specification. The term 'multimodal' simply refers to multiple modes of interaction by extending user interfaces to include input from speech, keyboards, pointing devices, touch pads, electronic pens, and any other type of input device. The W3C also has a Multimodal Interaction Working Group that is developing standards to turn the concept of universal accessibility into a reality... 
The Wireless Application Protocol (WAP) is a set of standards to enable wireless access to Internet services from resource-constrained mobile devices. WAP provides an entire architecture to make a mini-Web possible by defining standards such as the Wireless Markup Language (WML) and WMLScript... The Wireless Messaging API (WMA) package gives you access to SMS functionality but there are third party packages that are more suitable for XML messaging. kSOAP, another open source project from Enhydra, is a lightweight SOAP implementation suitable for J2ME... With the availability of packet-switched, always-on networks for mobile phones becoming more widespread, mobile access to data will become easier than ever. web services seem like the natural solution for integration problems, but mobile phones do not have the privilege of guaranteeing support for the core web services technologies. However, you can still effectively deploy a web service for mobile clients by deploying a client interface using existing technologies available. Technologies such as SMS, WAP, and VoiceXML can be utilized to make this possible. As more mobile phones support J2ME, you can even choose to deploy a pure SOAP client without the need for a middleman..." See also "Java Web Services in a Nutshell, by Kim Topley, with sample Chapter 3: 'SAAJ (SOAP with Attachments API for Java)'.
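The RPC-style SOAP 1.1 messages that a lightweight client such as kSOAP exchanges can be sketched in a few lines. This is an illustrative sketch in Python rather than J2ME, and the service method and namespace below are hypothetical, not taken from any real API.

```python
import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_call(method, ns, params):
    """Build a minimal SOAP 1.1 envelope for an RPC-style call."""
    env = ET.Element(f"{{{SOAP_ENV}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_ENV}}}Body")
    call = ET.SubElement(body, f"{{{ns}}}{method}")
    for name, value in params.items():
        ET.SubElement(call, name).text = str(value)
    return ET.tostring(env, encoding="unicode")

# Hypothetical flight-status service; method and namespace are invented.
msg = soap_call("getFlightStatus", "urn:example:flights", {"flightNo": "123"})
```

The payload stays small and regular, which is exactly what makes SOAP feasible on resource-constrained J2ME devices once a compact parser is available.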

  • [August 19, 2003] "Members Offer Glimpse Inside WS-I Consortium." By John Hogan. In SearchWebServices.com News (August 18, 2003). "Throw 150 software engineers together in a room to discuss interoperability standards and what do you get? A raging debate, certainly. A consensus is a little trickier. For the vendor-backed Web Services Interoperability Organization (WS-I), getting a group of engineers to reach a consensus was a matter of deciding what was critical for making Web services specifications work with one another and dropping debate on everything else. The result was Basic Profile 1.0, a set of guidelines for Web services interoperability that was released at last week's XML Web Services One conference. In a wide-ranging interview with SearchWebServices.com, WS-I board members from IBM Corp. and Oracle Corp. and the chairman of the Basic Profile working group talked about the inner workings of the consortium and how users of Web services technology will soon be able to judge for themselves whether solutions are truly interoperable. 'There's nothing magical about WS-I,' said Rob Cheng, Oracle's representative on WS-I's 11-member board. 'There was a need. There was a demand. There was motivation to do it. So we got together and did it.' [...] Chris Ferris, chairman of the Basic Profile working group and a senior software engineer at IBM, said a perfect example was Simple Object Access Protocol (SOAP) encoding, a method of encoding 'type' information in XML messages. Members argued about what type systems to use between different development platforms. How did they resolve this issue? The working group dropped the idea of SOAP encoding interoperability in favor of XML Schema as the type system for Web services... 'Fully 44% of the [interoperability] issues we tackled, of the 200-odd issues, were around the WSDL specification,' Ferris said. 
The working group had to clarify WSDL and 'clean up the ambiguity aspects of it,' such as how to use it with SOAP and the Universal Description, Discovery and Integration (UDDI) registry. This will likely be the case when the WS-I tackles interoperability for other Web services specifications, Ferris said. Some functions, or options, of an underlying specification will be 'must options' for vendors to follow. Other functions can be added as a service to users, 'but when you do, you're on your own' in terms of interoperability with other products, Ferris said... Glover and Ferris predicted that WS-I has at least 10 years of work ahead to fine-tune various Web services specifications in areas such as security, reliable messaging, management and orchestration. Cheng said the order in which these issues will be handled rests entirely with the demands of WS-I's 170 member companies and the implementation issues that arise as they develop applications that can deliver Web services. Next in line for the WS-I is security interoperability. Ferris said a planning group has already outlined the scope of the effort and is awaiting the final release this month or next of the Web Services Security specification by the OASIS standards group. The focus of the security profile, which Ferris predicted would be complete within a year, will be to narrow down options within the specification to the 'must haves'..." See details in the news story "WS-I Releases Basic Profile 1.0a Final Specification for Interoperable Web Services."

  • [August 19, 2003] "WS-I Basic Profile Set." By Darryl K. Taft. In eWEEK (August 18, 2003). "After a long period of hype around Web services, the Web Services-Interoperability organization last week announced the official delivery of WS-I Basic Profile 1.0. WS-I BP 1.0 is a set of specifications that guarantee Web services interoperability if users adhere to the profile's guidelines and if vendors include support for it in products. The profile identifies how Web services specifications should be used together to create interoperable Web services. Although WS-I BP 1.0 has been available as a draft standard in public review for almost a year, the formal announcement means several vendors will endorse the profile to guarantee their offerings adhere to the standard, thus eliminating much of the research and guesswork customer organizations had to go through to find interoperable implementations... Rob Cheng, a senior product manager at Oracle Corp., of Redwood Shores, Calif., and chair of the WS-I marketing committee, said when he talks to customers about Web services, 'the real thing they focus on is that companies will not have to worry about plumbing anymore.' 'This profile will reduce cost and complexity and will reduce early-adopter risks. The Basic Profile 1.0 lays the foundation for all the future work we'll be doing,' said Cheng at the XML Web Services One conference here. 'This means developers don't have to delve into the details of the technologies and try to pick and choose what will work,' said Mark Hapner, chief Web services strategist at Sun Microsystems Inc., of Santa Clara, Calif., and Sun's representative on the WS-I board. 'Now there's unanimity amongst the vendors, and there's an underlying set of scenarios represented by the WS-I sample applications.' This fall, the WS-I group will release test tools and sample applications to support the profile, available in both C# and Java. 
'The test suite will allow a developer to get a specific analysis about whether they're compliant [with the BP 1.0] spec or not and, if not, what the issues are,' Hapner said..." See details in the news story "WS-I Releases Basic Profile 1.0a Final Specification for Interoperable Web Services."

  • [August 19, 2003] "OWL Flies As Web Ontology Language. W3C Seeks More Implementations." By Paul Krill. In InfoWorld (August 18, 2003). "The World Wide Web Consortium (W3C) on Tuesday issued its Web Ontology Language, its acronym spelled and pronounced 'OWL,' as a W3C Candidate Recommendation... According to the W3C, OWL is a language for defining structured Web-based ontologies that enable richer integration and interoperability of data across application boundaries. Some implementations already exist. Early adopters include bioinformatics and medical communities, corporate enterprise, and governments. OWL enables applications such as Web portal management, multimedia collections that cannot respond to English language-based search tools, Web services and ubiquitous computing. 'Essentially, an ontology is the definition of a set of terms and how they relate to each other for a particular domain and that can be used on the Web in a number of different ways,' said Jim Hendler, co-chairman of the W3C Web Ontology Working Group, which released OWL... While earlier languages have been used to develop tools and ontologies for specific user communities such as sciences, they were not compatible with the architecture of the World Wide Web in general, in particular the Semantic Web, said W3C. OWL uses both URLs for naming and the linking provided by RDF (Resource Description Framework) to add the following capabilities to ontologies: distributable across many systems; scalable for the Web; compatible with Web standards for accessibility and internationalization; and open and extensible..." Further detail in the news story "W3C Releases Candidate Recommendations for Web Ontology Language (OWL)."

  • [August 18, 2003] "Nation's ebXML Standard to Be Adopted Asia-Wide." By Sim Kyu-ho. In Korea IT Times [The Electronic Times] (August 18, 2003). "e-Business Extensible Markup Language (ebXML) proposed by Korea will be adopted in the first version of Asian guidelines. Jang Jae-gyung, manager of standard development at the Korea Institute for Electronic Commerce (KIEC), has also been named head of the new agency, which will develop an Asian edition of ebXML. If the initiative turns out to be successful, therefore, the nation's ebXML technology could become a part of global standards. According to KIEC on August 17 [2003], the ebXML Asia Committee decided at the 9th eAC meeting recently held in Bangkok, Thailand to link e-document guidelines proposed by Korea with Hong Kong's e-government project to set up ebXML Asia guidelines and a library before the end of this year. eAC launched a taskforce called the Core Component Task Group, or CCTG, to help craft the guidelines, and named Korea and Taiwan to co-chair the organization. At the meeting, attendants also agreed to issue 'ebXML Asia Interoperability Certificate' to 12 businesses and organizations in 6 countries as a way of guaranteeing messaging functionality and reliability. In Korea, Pos DATA, Korea Trade Information Communication, InoDigital and Samsung SDS will be granted the certificate for their ebXML solutions..." See also the announcement "ebXML Asia Committee Starts New ebXML Interoperability Certification Program. Twelve Organizations Receive Certifications on ebXML Message Service specification 2.0." General references in "Electronic Business XML Initiative (ebXML)." [alt URL]

  • [August 18, 2003] "ebXML Seen as SME Web Service Enabler. Government, Private Sector Begin Pilot Projects." By Sasiwimon Boonruang. In Bangkok Post (August 06, 2003). "The Government and the private sector have adopted the Electronic Business Extensible Markup Language (ebXML) standard to boost national competitiveness, launching Internet-based paperless trading pilot projects and a collaborative e-tourism project. ebXML is an open standard around web services that will be crucial in three major aspects -- setting standards for data, standards for data interchange, as well as for electronic service interchange -- according to the National Electronics and Computer Technology Centre (Nectec) director Dr Thaweesak Koanatakool. Speaking at a seminar on ebXML Awareness Day last week, Dr Thaweesak noted that these three open standards were important infrastructure necessary to develop one-stop e-government services, for collaborative B2B e-commerce and to provide an opportunity for the local software industry. Meanwhile, the Information and Communications Technology (ICT) Ministry will now appoint a committee on data interchange standard. IT veteran and honorary president of the ATCI Manoo Ordeedolchest said traditional e-commerce was the interaction between humans and computers, but that we would soon be seeing computer-to-computer interactions. But this new economy will not bring benefits here unless SMEs were also part of this electronic business. Smaller firms needed to use ICT and in order to create competitiveness ebXML technology or web services were the solution, said Mr Manoo, who is also a consultant to the ICT minister... According to Commerce Ministry's Business Development deputy director general Skol Harnsuthivarin, the department was now working with other organisations to cope with the problem of data interchange by applying the ebXML standard. 
To achieve the target of paperless trading in the year 2005, the department and all agencies in the ministry first have to complete integration within the ministry by the end of next year and then extend it to the external partners. An Internet-based paperless trading pilot project is now being conducted with the cooperation of the Customs Department, the E-commerce Resource Centre (ECRC), the Institute for Innovative IT of Kasetsart University (i3t-KU), the Business Development department and private companies such as Minebea (Thailand), TKK, and CTI Logistics. The project aims to analyse the system in terms of traditional EDI and ebXML, to find a suitable way to promote the utilisation of ICT in SMEs and to boost competitiveness through the B2B e-business. It also pushes for the development of data interchange and service interchange standards in order to accommodate APEC's paperless trading project. Another ebXML pilot project is collaborative e-tourism, conducted by i3t-KU, ECRC, and Datamat. Objectives are to promote SMEs in the tourism industry to use ICT to cut costs, to enhance efficiency and to expand their markets..." General references in "Electronic Business XML Initiative (ebXML)."

  • [August 18, 2003] "Coalition Uses Web for Emergency Notification. System Uses Web Services, Off-The-Shelf Software." By Grant Gross. In InfoWorld (August 18, 2003). "The 9-1-1 emergency service in Oregon has expanded to include instant notifications to school administrators, hospitals and other people who need timely emergency notifications, thanks to a coalition of Oregon local governments and technology vendors using Web services and off-the-shelf software. The Regional Alliances for Infrastructure and Network Security (RAINS) launched its RAINS-Net technology platform, which sends live emergency information to selected users over the Internet and by cell phone. The creators of RAINS-Net are billing it as an extension of 9-1-1 service, in which the existing computer-aided dispatch system is connected to the Internet and sends alerts to officials who need to know about emergency situations in their neighborhoods... When a 9-1-1 call comes into a dispatch center, the information an operator types into the dispatch center computers can be routed to a cell phone message or a pop-up dialog box on a PC. In the case of an emergency event like a hazardous waste spill, those people on the RAINS-Net network would be notified immediately, and the dialog box might direct them to additional multimedia information, such as a video on how to respond to a hazardous waste spill. The RAINS-Net system, which goes live on Thursday, already has about 1,000 files that provide additional information on emergency situations. In some cases, such as a crime in progress, the RAINS-Net system would wait until police show up on the scene before notifying people on the network, so that police can assess the situation before raising concerns, Jennings said. The system uses the nine-digit zip code to route messages to recipients, so that a school in one neighborhood wouldn't get an emergency message about a fire across town. 
The system also has the capability of sending out city-wide emergency messages to appropriate recipients. RAINS-Net initially integrates the technologies of RAINS sponsor companies, including FORTiX, Tripwire, Centerlogic, and Jennings' Swan Island Networks by using XML and Web services. More companies are working with RAINS to integrate their technologies into the RAINS-Net as new capabilities are added..." See also the press release.

  • [August 18, 2003] "Service-Oriented Architecture Explained." By Sayed Hashimi (NewRoad Software). From O'Reilly ONDotnet.com (August 18, 2003). "SOA (service-oriented architecture) has become a buzzword of late. Although the concepts behind SOA have been around for over a decade now, SOA has gained extreme popularity of late due to web services. Before we dive in and talk about what SOA is and what the essentials behind SOA are, it is a useful first step to look back at the evolution of SOA. To do that, we have to simply look at the challenges developers have faced over the past few decades and observe the solutions that have been proposed to solve their problems... In the context of SOA, we have the terms service, message, dynamic discovery, and web services. Each of these plays an essential role in SOA. A service in SOA is an exposed piece of functionality with three properties: (1) The interface contract to the service is platform-independent; (2) The service can be dynamically located and invoked; (3) The service is self-contained -- that is, the service maintains its own state. Service providers and consumers communicate via messages. Services expose an interface contract. This contract defines the behavior of the service and the messages they accept and return. Because the interface contract is platform- and language-independent, the technology used to define messages must also be agnostic to any specific platform/language. Therefore, messages are typically constructed using XML documents that conform to XML schema. XML provides all of the functionality, granularity, and scalability required by messages. That is, for consumers and providers to effectively communicate, they need a non-restrictive type of system to clearly define messages; XML provides this... Dynamic discovery is an important piece of SOA. At a high level, SOA is composed of three core pieces: service providers, service consumers, and the directory service. 
The roles of providers and consumers are apparent, but the role of the directory service needs some explanation. The directory service is an intermediary between providers and consumers. Providers register with the directory service and consumers query the directory service to find service providers. Although the concepts behind SOA were established long before web services came along, web services play a major role in a SOA. This is because web services are built on top of well-known and platform-independent protocols. These protocols include HTTP, XML, UDDI, WSDL, and SOAP. It is the combination of these protocols that make web services so attractive. Moreover, it is these protocols that fulfill the key requirements of a SOA. That is, a SOA requires that a service be dynamically discoverable and invocable. This requirement is fulfilled by UDDI, WSDL, and SOAP. SOA requires that a service have a platform-independent interface contract. This requirement is fulfilled by XML. SOA stresses interoperability. This requirement is fulfilled by HTTP. This is why web services lie at the heart of SOA... As complexity grows, researchers find more innovative ways to answer the call. SOA, in combination with web services, is the latest answer. Application integration is one of the major issues companies face today; SOA can solve that. System availability, reliability, and scalability continue to bite companies today; SOA addresses these issues. Given today's requirements, SOA is the best scalable solution for application architecture..."
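The provider/consumer/directory triangle described above can be reduced to a few lines of code. The sketch below is a toy in-memory stand-in for a UDDI-style directory (the service name and endpoint URLs are invented), showing registration on the provider side and run-time discovery on the consumer side.

```python
class DirectoryService:
    """Toy in-memory stand-in for a UDDI-style directory service."""

    def __init__(self):
        self._registry = {}

    def register(self, service_name, endpoint):
        # Provider side: advertise an implementation of a named service.
        self._registry.setdefault(service_name, []).append(endpoint)

    def lookup(self, service_name):
        # Consumer side: discover available providers at run time.
        return list(self._registry.get(service_name, []))

directory = DirectoryService()
directory.register("StockQuote", "http://provider-a.example/quote")
directory.register("StockQuote", "http://provider-b.example/quote")
endpoints = directory.lookup("StockQuote")  # consumer picks one dynamically
```

Because the consumer binds to whatever the lookup returns rather than to a hard-coded endpoint, providers can be added or replaced without changing consumer code, which is the dynamic-discovery property the article emphasizes.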

  • [August 18, 2003] "DocBook for Eclipse: Reusing DocBook's Stylesheets." By Jirka Kosek. From XML.com (August 13, 2003). ['Use XSLT to integrate your own documentation into the Eclipse IDE.'] "DocBook is a popular tool for creating software documentation among developers. One reason for its success is the existence of the DocBook XSL stylesheets, which can be used to convert DocBook XML source into many target formats including HTML, XHTML, XSL-FO (for print), JavaHelp, HTML Help, and man pages. The stylesheets can be further customized to get other outputs as well. In this article I am going to show you how easily you can integrate DocBook documents into the Eclipse platform help system by reusing existing stylesheets... auxiliary help files are usually XML or HTML-based, so we can use XSLT to generate them. If you have your documentation in DocBook and you want to feed it into the help system, the only thing you need is to extend the existing stylesheets to emit the auxiliary files together with a standard HTML output. That is even easier if you reuse some existing DocBook XSL stylesheet templates. The whole Eclipse platform is developed around the idea of plugins. If you want to contribute your help documents to the Eclipse platform, you have to develop a new help plugin. The plugin is composed of the HTML and image files, the table of contents file in XML, and the manifest file... As the Eclipse help is based on HTML, we can reuse existing stylesheets that generate multiple HTML files from DocBook XML source. However, we need to extend these stylesheets to generate the table of contents file and the manifest file... Software documentation is an area where you can very effectively use XML and XSLT to do multichannel publishing. If you stick to using a well-known and standardized vocabulary like DocBook, you can benefit from usage of existing stylesheets and other conversion tools. 
If you want to plug your DocBook documentation into some new help format, you can quite easily hack existing stylesheets to generate a new output format. The method for creating output for the Eclipse platform help described in this article can be used for almost any HTML based online help system..." General references in "DocBook XML DTD."
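The article's approach generates the Eclipse table-of-contents file with XSLT templates layered on the DocBook stylesheets. As a rough sketch of the target format only (Eclipse help uses a toc.xml of nested topic elements with label and href attributes; the chapter titles and filenames below are invented), the same output can be modeled in Python:

```python
import xml.etree.ElementTree as ET

def eclipse_toc(book_label, topics):
    """Emit an Eclipse-help toc.xml tree from (title, href, children) tuples."""
    toc = ET.Element("toc", {"label": book_label})
    def add(parent, entries):
        for title, href, children in entries:
            node = ET.SubElement(parent, "topic", {"label": title, "href": href})
            add(node, children)  # recurse into subsections
    add(toc, topics)
    return ET.tostring(toc, encoding="unicode")

# Hypothetical manual structure, mirroring chunked DocBook HTML output
toc_xml = eclipse_toc("Example Manual", [
    ("Introduction", "ch01.html", []),
    ("Reference", "ch02.html", [("Options", "ch02s01.html", [])]),
])
```

In the XSLT version this nesting falls out naturally from the DocBook section hierarchy, which is why reusing the chunking templates makes the plugin so cheap to produce.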

  • [August 18, 2003] "XSLT Recipes for Interacting with XML Data." By Jon Udell. From XML.com (August 13, 2003). ['Udell explores alternative ways of making XML data interactive using XSLT.'] "In last month's column, 'The Document is the Database', I sketched out an approach to building a web-based application backed by pure XML (and as a matter of fact, XHTML) data. I've continued to develop the idea, and this month I'll explore some of the XSLT-related recipes that have emerged. Oracle's Sandeepan Banerjee, director of product management for Oracle Server Technologies, made a fascinating comment when I interviewed him recently. 'It's possible,' he said, 'that developers will want to stay within an XML abstraction for all their data sources'. I suppose my continuing (some might say obsessive) experimentation with XPath and XSLT is an effort to find out what that would be like. It's true that these technologies are still somewhat primitive and rough around the edges. Some argue that we've got to leapfrog over them to XQuery or to some XML-aware programming language in order to colonize the world of XML data. But it seems to me that we can't know where we need to go until we fully understand where we are... It's crucial to be able to visualize data. As browsers are increasingly able to apply CSS stylesheets to arbitrary XML, the XHTML constraint becomes less important. The Microsoft browser has been able to do CSS-based rendering of XML for a long time. Now Mozilla can too. Safari doesn't, yet, but I'll be surprised if it doesn't gain that feature soon. So while I'm sticking with XHTML for now, that may be a transient thing. Of more general interest are the ways in which XPath and XSLT can make XML data interactive... The techniques I've been exploring for the past few months are, admittedly, an unorthodox approach to building Web applications. The gymnastics required can be strenuous, and some of the integration is less than seamless. 
But the result is useful, and along the way I've deepened my understanding of XPath and XSLT. Is it really advisable, or even possible, to make XML the primary abstraction for managing data? I'm still not sure, but I continue to think it's a strategy worth exploring..." General references in "Extensible Stylesheet Language (XSL/XSLT)."
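The kind of XPath selection Udell leans on is available even in minimal toolkits. A small sketch (the document and status values are invented for illustration; Python's xml.etree supports this predicate subset of XPath):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<entries>
  <entry status="open"><title>Fix login bug</title></entry>
  <entry status="done"><title>Write release notes</title></entry>
  <entry status="open"><title>Update stylesheet</title></entry>
</entries>
""")

# Select the titles of open entries with an XPath-style attribute predicate.
open_titles = [e.findtext("title")
               for e in doc.findall(".//entry[@status='open']")]
```

The same expression dropped into an XSLT select attribute drives a view of the data, which is what makes XPath a plausible "primary abstraction" for interactive XML applications.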

  • [August 18, 2003] "Introducing Anobind." By Uche Ogbuji. From XML.com (August 13, 2003). ['Uche Ogbuji introduces anobind, his new Python databinding tool.'] "My recent interest in Python-XML data bindings was sparked not only by discussion in the XML community of effective approaches to XML processing, but also by personal experience with large projects where data binding approaches might have been particularly suitable. These projects included processing both data and document-style XML instances, complex systems of processing rules connected to the XML format, and other characteristics requiring flexibility from a data binding system. As a result of these considerations, and of my study of existing Python-XML data binding systems, I decided to write a new Python-XML data binding, which I call Anobind. I designed Anobind with several properties in mind, some of which I have admired in other data binding systems, and some that I have thought were, unfortunately, lacking in other systems: (1) A natural default binding, i.e., when given an XML file with no hints or customization; (2) Well-defined mapping from XML to Python identifiers; (3) Declarative, rules-based system for fine-tuning the binding; (4) XPattern support for rules definition; (5) Strong support for document-style XML, especially with regard to mixed content; (6) Reasonable support for unbinding back to XML; (7) Some flexibility in trading off between efficiency and features in the resulting binding... In this article I introduce Anobind, paying attention to the same considerations that guided my earlier introduction of generateDS.py and gnosis.xml.objectify... Anobind is really just easing out of the gates. I have several near-term plans for it, including a tool that reads RELAX NG files and generates corresponding, customized binding rules. I also have longer-term plans such as a SAX module for generating bindings without having to build a DOM..." See also: (1) Python & XML, by Christopher A. 
Jones and Fred L. Drake, Jr.; (2) general references in "XML and Python."
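The "natural default binding" idea, mapping XML elements to Python attributes with no configuration, can be illustrated with a naive sketch. This is not Anobind's implementation, just a minimal example of the general technique (repeated child elements collapse into lists; the sample document is invented):

```python
import xml.etree.ElementTree as ET

class Bound:
    """Naive default binding: child elements become attributes of the
    bound object; repeated children of the same name become a list."""

    def __init__(self, element):
        self.text = (element.text or "").strip()
        for child in element:
            value = Bound(child)
            current = getattr(self, child.tag, None)
            if current is None:
                setattr(self, child.tag, value)
            elif isinstance(current, list):
                current.append(value)
            else:
                setattr(self, child.tag, [current, value])

book = Bound(ET.fromstring(
    "<book><title>Processing XML</title>"
    "<author>Jones</author><author>Drake</author></book>"))
```

A real binding system layers the harder problems on top of this core: identifier mangling for tag names that are not valid Python attributes, mixed content, and rule-driven customization of the mapping.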

  • [August 18, 2003] "Binary XML, Again." By Kendall Grant Clark. From XML.com (August 13, 2003). ['The old chestnut of a binary encoding for XML has cropped up once more, this time in serious consideration by the W3C. Kendall Clark comments on the announcement of the W3C's Binary XML Workshop.'] "The [W3C] workshop announcement is interesting in its own right and worth quoting... from a 'steadily increasing demand,' the W3C has decided to get in front of those of its vendor-members which want 'to find ways to transmit pre-parsed XML documents and Schema-defined objects, in such a way that embedded, low-memory and/or low bandwidth devices can' get in on the XML game... The workshop announcement also mentions a few tantalizing details, including talk of 'multiple separate implementers' having some success with an ASN.1 variant of XML... The other interesting thing of note here is that the W3C is talking about a binary variant of (parts of) the XML Infoset. What difference that could make remains to be seen, but it's interesting enough to pay some attention to it. There are at least two issues at this workshop: binary variants and, as the workshop announcement says, 'pre-parsed' artifacts; they seem orthogonal to each other..." The W3C Workshop on Binary Interchange of XML Information Item Sets will be held September 24-26, 2003 in Santa Clara, California. The Workshop goal is "... to study methods to compress XML documents, comparing Infoset-level representations with other methods, in order to determine whether a W3C Working Group might be chartered to produce an interoperable specification for such a transmission format." For background and discussion, see: (1) the workshop Call for Participation; (2) the thread on XML-DEV, including a key posting from Liam Quin (W3C XML Activity Lead); (3) "Fast Web Services" (Sun Microsystems paper).

  • [August 18, 2003] "JavaOne: Fast Web Services." Presentation by Santiago Pericas-Geertsen and Paul Sandoz (Sun Microsystems). JavaOne 2003 San Francisco, June 2003. "Current Web service application frameworks perform more than an order of magnitude worse than similar technologies that use binary representations for messages (for example, RMI and RMI/IIOP). The performance difference is due to the fact that messages are represented in the XML infoset: the result of which is (i) large message sizes and (ii) slow serialization/deserialization (for example, marshalling/unmarshalling) of messages. A binary representation of the XML infoset has been proposed as a possible solution to this problem. However, the so-called 'Binary Infoset Encoders' have shown only moderate performance improvements for server-side computing and have not been widely adopted. In this talk, we argue that 'Binary Schema-binding Frameworks' offer a much better solution to this problem. This approach relies on the assumption that the schema of a message is known by both peers. This common knowledge, together with suitable encodings, results in small message sizes and fast serialization/deserialization. Abstract Syntax Notation One (ASN.1) is a technology and set of standards for abstractly defining messages for distributed communication that are separate from a set of encodings or message representations. A number of XML-related ASN.1 standards are being defined, namely an XML encoding so that ASN.1 messages can be represented as XML, and a mapping from W3C XML Schema (XSD) to ASN.1 schema. Thus, ASN.1 is perfectly suited for the definition of a Binary Schema-binding Framework that can be easily integrated into Java API for XML-based RPC (JAX-RPC) and Java API for XML Messaging. Our preliminary results show that the resulting performance is comparable to that of RMI/IIOP..." See also the ASN.1 work program description and the following bibliographic entries.
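
The core assumption of a schema-binding framework, that both peers already know the message schema, means field names never travel on the wire. A minimal sketch (not Sun's implementation; the order schema below is invented) makes the size difference concrete:

```python
# Illustration of schema-bound binary encoding vs. self-describing XML.
# Both peers are assumed to share the schema (id: uint32, price: float64,
# qty: uint16), so the binary form carries values only, no tag names.
import struct

SCHEMA = "!IdH"  # network byte order: uint32, float64, uint16 = 14 bytes

def encode_binary(msg_id, price, qty):
    return struct.pack(SCHEMA, msg_id, price, qty)

def encode_xml(msg_id, price, qty):
    return (f"<order><id>{msg_id}</id><price>{price}</price>"
            f"<qty>{qty}</qty></order>").encode("utf-8")

binary = encode_binary(42, 19.99, 3)
text = encode_xml(42, 19.99, 3)
print(len(binary), len(text))  # the binary form is a fraction of the XML size
```

The trade-off the talk acknowledges follows directly: the binary bytes are meaningless without the shared schema, whereas the XML form remains self-describing.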

  • [August 18, 2003] "The Emergence of ASN.1 as an XML Schema Notation." By John Larmouth (Larmouth T & PDS Ltd, Bowdon, UK). Presentation given at XML Europe 2003, May 5-8, 2003. With slides in HTML and .PPT format. "This paper describes the emergence of ASN.1 as an XML schema notation. Use of ASN.1 as an XML schema notation provides the same functionality as use of W3C XML Schema (XSD), but makes compact binary representations of the data available as well as XML encodings. ASN.1 also provides a clear separation of the specification of the information content of a document or message from the actual syntax used in its encoding or representation. Examples of representation differences that do not affect the meaning (semantics) being communicated are the use of an attribute instead of an element in an XML encoding, or of space-separated lists instead of repeated elements. Examples are given of ASN.1 specification of an XML document, and some comparisons are made with XSD and RELAX NG... The focus of ASN.1 is very much on the information content of a message or document. A distinction is drawn between whether changes in the actual representation of a message or document affect its meaning (and hence its effect on a receiving system), or are just variations of encoding that carry the same information. Thus the use of an XML attribute rather than a child element does not affect the information content. Nor does the use of a space-separated list rather than a repetition of an element. ASN.1 tools provide a static mapping of an XML schema definition to structures in commonly-used programming languages such as C, C++ and Java, with highly efficient encode/decode routines to convert between values of these structures and the information content of XML documents. 
By contrast, most tools based on XSD or RELAX NG are more interpretive in nature, providing details of the infoset defined by the XML document through enquiries by the application or by notifications to the application (a highly interactive and CPU-intensive procedure)..." See also the ASN.1 website, "What ASN.1 can offer to XML." [PDF from IDEAlliance, cache]
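
Larmouth's central distinction, that an attribute-based and an element-based rendition can carry identical information content, is easy to demonstrate. The two fragments below (invented for illustration) differ in XML representation but yield the same values:

```python
# Two XML representations, one information content: ASN.1's view is that
# attribute-vs-element is an encoding choice, not a semantic one.
import xml.etree.ElementTree as ET

as_attribute = ET.fromstring('<price currency="USD">10.50</price>')
as_element = ET.fromstring(
    '<price><currency>USD</currency><amount>10.50</amount></price>')

info_a = (as_attribute.get("currency"), as_attribute.text)
info_b = (as_element.findtext("currency"), as_element.findtext("amount"))
print(info_a == info_b)  # True: same (currency, amount) either way
```

A schema notation focused on information content can map both forms to the same abstract value, which is what lets ASN.1 tools emit either XML or a compact binary encoding from one specification.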

  • [August 12, 2003] "Fast Web Services." By Paul Sandoz, Santiago Pericas-Geertsen, Kohsuke Kawaguchi, Marc Hadley, and Eduardo Pelegri-Llopart. Sun Microsystems Web Services library. Appendices include a WSDL Example and an ASN.1 Schema for SOAP. With 21 references. August 2003. "Fast Web Services is an initiative at Sun Microsystems aimed at the identification of performance problems in existing implementations of Web Services standards. Our group has explored several solutions to these problems. This article focuses on a particular solution that delivers maximum performance gains. Fast Web Services explores the use of more efficient binary encodings as an alternative to textual XML representations. In this article, we identify the performance of existing technologies, introduce the main goals of Fast Web Services, both from a standards and an implementation perspective, highlight some of the use cases for Fast Web Services, discuss standards and associated technologies needed for Fast Web Services, present an example in which XML and the proposed binary encoding are compared, and describe the Java prototype that has been used to obtain some compelling performance results... Fast must define the interoperability between Fast peers. In addition, it must define the interoperability with existing Web Services that do not support Fast. The approach is to: 'Use Fast when available, and use XML otherwise.' [...] Fast annotations for WSDL allow services to explicitly state that a binding can support the Fast encoding (in addition to XML). Although Fast does not require any modification to WSDL, specifically to the SOAP binding, it may be appropriate to formalize the contract to state clearly that the binding supports Fast in addition to XML... Fast is not a Java-only technology: it is designed to be platform-independent, just like existing Web Services. 
This expands the interoperability to non-Java platforms, such as C#, C and C++ or scripting languages such as Perl and Python. Standards are crucial: Fast will not be deployed and implemented by vendors unless it has good standards traction backed by parties influential in the Web Services space. Fast Web Services is designed to maximize the performance of Web Services in a number of domains, while minimizing developer impact and ensuring interoperability. The performance gains from Fast WS are very substantial although its applicability is not universal; there are some issues due to its loss of self-description that are not present when using XML encoding. Performance results obtained from the Java prototype provide compelling evidence that it is possible for a Web Service implementation to perform at speeds close to that of binary equivalents such as RMI and RMI/IIOP. If performance is an issue then Fast may be the answer, and a number of use cases in which Fast can be used were presented. Sun Microsystems is participating in the ITU-T SG-17 to ensure that Fast Web Services is standardized. The majority of the standardization process is complete (or close to being completed) given that X.694 and the ASN.1 encoding rules represent a significant proportion of work. X.695 represents the finishing touches that are needed for a well proven technology such as ASN.1 to be applied to Web Services..." See also the preceding item.
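
The paper's fallback rule, "Use Fast when available, and use XML otherwise," amounts to a capability check against what a peer's WSDL binding advertises. A hypothetical sketch (the real contract is expressed as WSDL annotations, not application code):

```python
# Hypothetical sketch of the Fast Web Services fallback rule:
# negotiate the Fast binary encoding only when the peer advertises it,
# and keep self-describing XML as the universal fallback.
def choose_encoding(peer_bindings):
    """peer_bindings: the set of encodings a peer's WSDL advertises."""
    return "fast" if "fast" in peer_bindings else "xml"

print(choose_encoding({"xml", "fast"}))  # fast
print(choose_encoding({"xml"}))          # xml
```

Because XML is always in the advertised set, a Fast-capable client never fails against a legacy service; it merely loses the performance gain.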

  • [August 12, 2003] "The XML Enabled Directory." By Steven Legg (Adacel Technologies Ltd) and Daniel Prager (Department of Computing and Mathematics, Deakin University, Victoria, Australia). IETF Internet Draft. Reference: 'draft-legg-xed-roadmap-00.txt'. Intended Category: Standard Track. 10 pages. "The XML Enabled Directory (XED) framework leverages existing Lightweight Directory Access Protocol (LDAP) and X.500 directory technology to create a directory service that stores, manages and transmits Extensible Markup Language (XML) format data, while maintaining interoperability with LDAP clients, X.500 Directory User Agents (DUAs), and X.500 Directory System Agents (DSAs). This document introduces the various XED specifications. The main features of XED are: (1) semantically equivalent XML renditions of existing directory protocols; (2) XML renditions of directory data; (3) the ability to accept, at run time, user-defined attribute syntaxes specified in a variety of XML schema languages; (4) the ability to perform filter matching on the parts of XML format attribute values; (5) the flexibility for implementors to develop XED clients using only their favoured XML schema language... The XED framework does not aim for a complete specification of the directory in one schema language (e.g., by translating everything that isn't ASN.1 into ASN.1, or by translating everything that isn't XML Schema into XML Schema), but rather seeks to integrate specifications in differing schema definition languages into a cohesive whole. The motivation for this approach is the observation that although XML Schema, RELAX-NG and ASN.1 are broadly similar, they each have unique features that cannot be adequately expressed in the other languages. Thus a guiding principle for XED is the assertion that the best schema language in which to represent a data type is the language of its original specification. 
Consequently, a need arises for the means to reference definitions not only in different documents, but specified in different schema languages... This document and the technology it describes are a product of a joint research project between Adacel Technologies Limited and Deakin University on leveraging existing directory technology to produce an XML-based directory service..." See the following bibliographic entry ("XED: Schema Language Integration") and initial drafts of several related IETF IDs: (1) "Directory XML Encoding Rules for ASN.1 Types"; (2) "ASN.1 Schema: An XML Representation for ASN.1 Specifications"; (3) "Translation of ASN.1 Specifications into XML Schema"; (4) "Translation of ASN.1 Specifications into RELAX NG"; (5) "LDAP: Transfer Encoding Options"; (6) "XED: Schema Operational Attributes"; (7) "XED: Matching Rules"; (8) "XML Lightweight Directory Access Protocol." [cache]

  • [August 12, 2003] "XED: Schema Language Integration." By Steven Legg (Adacel Technologies Ltd) and Daniel Prager (Department of Computing and Mathematics, Deakin University, Victoria, Australia). IETF Internet Draft. Reference: 'draft-legg-xed-glue-00.txt'. Intended Category: Standard Track. August 7, 2003. 14 pages. "This document defines the means by which an Abstract Syntax Notation One (ASN.1) specification can incorporate the definitions of types and elements in specifications written in other Extensible Markup Language (XML) schema languages. References to XML Schema types and elements, RELAX NG named patterns and elements, and Document Type Declaration (DTD) element types are supported. Non-ASN.1 definitions are supported by first defining an ASN.1 type whose values can contain arbitrary markup, and then defining constraints on that type to restrict the content to specific nominated datatypes from non-ASN.1 schema definitions. The ASN.1 definitions in this document are consolidated in Appendix A..." [cache]

  • [August 12, 2003] "Mindreef SOAPscope 1.0. Bring SOAP Protocol into View with Handy Diagnostic Tool." By Joe Mitchko. In Web Services Journal Volume 3, Issue 7 (July 2003), pages 54-55. "Mindreef SOAPscope 1.0 is a Web services diagnostic tool, designed to provide toolkit-independent logging and monitoring of SOAP network traffic. SOAPscope is composed of two components, a network sniffer and a browser-based message viewer. The sniffer component is designed to capture SOAP request and response messages within the HTTP protocol traffic and persist the information to an embedded relational database. The message viewer component is a browser-based Web application that allows a user to view the persisted SOAP request and response messages and more. Since it is browser-based, the viewer opens the door for remote and collaborative debugging sessions. The SOAPscope viewer provides a pseudocode and XML view of message details, and two ways to monitor SOAP traffic -- log view or live view. The log view provides message history and search capabilities while the live view allows for real-time debugging. In addition, a handy WSDL viewer allows you to punch in a WSDL URL and view it in either native XML or in pseudocode mode. Some of the more advanced features of the tool allow you to modify and resend previously captured SOAP requests -- handy for on-the-fly debugging... I found the viewer's user interface to be very clean, easy to read, and relatively uncluttered. The information displayed was basically accurate and bug free. In addition, both the XML and pseudocode views have color-coded text, making it easy to see SOAP-specific tags, namespace information, and message request and response content. All SOAP message content and log information is stored in an embedded database. Although it is basically transparent, you will need to do a little database management in order to purge or back up the database. 
Nothing in the way of log maintenance is provided in SOAPscope for this release. Luckily, database maintenance instructions are included in the documentation and are relatively easy to follow... It's not often that you find a tool that is so well thought out and designed...The amount of functionality provided is just right, neither overloading the GUI with seldom-used features nor leaving you to find some other diagnostic tool because it doesn't do enough..." Update: see the announcement for SOAPscope 2.0: "Mindreef Announces Availability of SOAPscope 2.0. Features First WSDL Interoperability Checker, Including Rules for WS-I Basic Profile 1.0." [alt URL]

  • [August 12, 2003] "Instant Logging: Harness the Power of log4j with Jabber. Learn How to Extend the log4j Framework with Your Own Appenders." By Ruth Zamorano and Rafael Luque (Orange Soft). From IBM developerWorks, Java technology. August 12, 2003. With source code. ['Not only is logging an important element in development and testing cycles -- providing crucial debugging information -- it is also useful for detecting bugs once a system has been deployed in a production environment, providing precise context information to fix them. In this article, Ruth Zamorano and Rafael Luque, cofounders of Orange Soft, a Spain-based software company specializing in object-oriented technologies, server-side Java platform, and Web content accessibility, explain how to use the extension ability of log4j to enable your distributed Java applications to be monitored by instant messaging (IM)'] "The log4j framework is the de facto logging framework written in the Java language. As part of the Jakarta project, it is distributed under the Apache Software License, a popular open source license certified by the Open Source Initiative (OSI). The log4j environment is fully configurable programmatically or through configuration files, either in properties or XML format. In addition, it allows developers to filter out logging requests selectively without modifying the source code. The log4j environment has three main components: (1) loggers control which logging statements are enabled or disabled. Loggers may be assigned the levels 'ALL, DEBUG, INFO, WARN, ERROR, FATAL, or OFF'. To make a logging request, you invoke one of the printing methods of a logger instance. (2) layouts format the logging request according to the user's wishes. (3) appenders send formatted output to their destinations... The log4j network appenders already provide mechanisms to monitor Java-distributed applications. However, several factors make IM a suitable technology for remote logging in real-time. 
In this article, we cover the basics of extending log4j with your custom appenders, and document the implementation of a basic IMAppender step by step. Many developers and system administrators can benefit from their use..." See also: "Jabber XML Protocol."
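
The logger/layout/appender architecture the article describes is not Java-specific. As a compact analogue (kept in Python for consistency with the other sketches here; the article's own code is Java and log4j), the standard `logging` module exposes the same extension point: subclass `Handler` (log4j's appender), attach a `Formatter` (log4j's layout), and override `emit()` to deliver the record. A real IMAppender would send the formatted string over Jabber instead of collecting it in a list.

```python
# Python-logging analogue of a custom log4j appender: subclass Handler,
# override emit(), and let a Formatter play the role of a log4j layout.
# An IM appender would send self.format(record) over Jabber here.
import logging

class CollectingHandler(logging.Handler):
    def __init__(self):
        super().__init__()
        self.messages = []

    def emit(self, record):
        self.messages.append(self.format(record))  # deliver to destination

log = logging.getLogger("demo")
log.setLevel(logging.DEBUG)
handler = CollectingHandler()
handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
log.addHandler(handler)

log.warning("disk almost full")
print(handler.messages)  # ['WARNING disk almost full']
```

The key design point carries over from log4j exactly: the logger and layout machinery is untouched, and only the delivery step is customized.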

  • [August 12, 2003] "JBoss Fork Spawns Apache Project. ASF Begins Work on New J2EE Server Called Geronimo." By Robert McMillan. In InfoWorld (August 11, 2003). "A rift between the developers of the open source JBoss J2EE (Java 2 Enterprise Edition) application server has brought the Apache Software Foundation (ASF) into the J2EE game. The ASF announced last week that it had begun work on a new J2EE server called Geronimo, which the foundation believes will be a more business-friendly alternative to the other open source J2EE servers currently available, according to Apache Software Foundation Chairman Greg Stein. Companies such as IBM and BEA Systems sell commercial J2EE servers, but open source implementations of Sun's J2EE specification are popular among developers looking for a low-cost alternative to IBM's WebSphere and BEA's WebLogic... There are already two popular open source J2EE servers in circulation: JBoss and the Jonas server. But both have had difficulties in obtaining J2EE certification from Sun Microsystems, and neither is available under an Apache-style software license, which is considered more conducive to commercial development. 'There isn't a certified server out there, and there certainly isn't one that has a low restriction license like ours,' said Stein. Geronimo will have an easier time obtaining J2EE certification than did its open source rivals, because the ASF's non-profit status makes the application server a candidate for a Sun scholarship, which would pay for certification, Stein said. A certified version of Geronimo is expected in the next year..." See: (1) the Apache Geronimo project website and the proposal; (2) Java 2 Platform, Enterprise Edition (J2EE).

  • [August 12, 2003] "STnG: A Streaming Transformations and Glue Framework." By K. Ari Krupnikov (Research Associate, University of Edinburgh, HCRC Language Technology Group). In [Preliminary] Proceedings for Extreme Markup Languages 2003, held in August 2003, Montréal, Québec. "STnG (pronounced 'sting') is a framework for processing XML and other structured text. In developing STnG, it was our goal to allow complex transformations beyond those afforded by traditional XML transforming tools, such as XSLT, yet make the framework simple to use. We claim that to meet this goal, a system must: (1) support and encourage the use of small processing components; (2) offer a hierarchical tree-like view of its data; (3) factor out facilities for input chunking through a pattern/action model; (4) not provide processing facilities of its own, instead invoking processors written in existing languages. STnG is built around common XML tools and idioms, but can process arbitrary structured text almost as easily as XML. In the first part of this paper, we show how these requirements result in powerful and flexible systems, and how they can be achieved. The balance of this paper describes a processing framework we have developed in Java that implements these requirements... The entire transformation requires only one pass on the source document and is comfortably described in one STnG. We could achieve the same effect with standalone XSLT stylesheets, Java programs, and perhaps a makefile to describe dependencies between these components, as well as between intermediate results. While for some elaborate tasks such complexity is warranted, STnG can simplify many common processing scenarios considerably. 
Note that beyond complexity that grows with the number of different components required to accomplish a task, a standalone application would include considerably more code than the DOM handler fragment in this STnG, as it would need to instantiate and configure a parser and navigate to the desired fragments using custom code, as well as handle potential errors -- all tasks factored out into STnG. More likely than not, this custom code would not be as robust as a standard, reusable component..." See: (1) the Extreme Markup Languages 2003 Program and (2) the event listing.
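
The pattern/action model that STnG factors out can be sketched with a streaming parser: a driver matches patterns as events arrive and dispatches small action components, in a single pass over the source. This illustration uses bare element names as patterns; STnG's actual rule language is richer.

```python
# Sketch of the pattern/action idea: a streaming driver matches element
# names (the patterns) and dispatches small action callables, one pass
# over the input. (Illustrative only, not STnG's actual framework.)
import xml.sax

class PatternActionHandler(xml.sax.ContentHandler):
    def __init__(self, actions):
        self.actions = actions   # {element name: callable(text)}
        self.current = None
        self.buffer = []

    def startElement(self, name, attrs):
        if name in self.actions:
            self.current, self.buffer = name, []

    def characters(self, content):
        if self.current:
            self.buffer.append(content)

    def endElement(self, name):
        if name == self.current:
            self.actions[name]("".join(self.buffer))  # fire the action
            self.current = None

titles = []
handler = PatternActionHandler({"title": titles.append})
xml.sax.parseString(
    b"<lib><title>STnG</title><title>XIndirect</title></lib>", handler)
print(titles)  # ['STnG', 'XIndirect']
```

The point the paper makes is visible even at this scale: parser setup, navigation, and error handling live in the framework once, while each action stays a small reusable component.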

  • [August 12, 2003] "XIndirect: Indirect Addressing for XML." By W. Eliot Kimber (ISOGEN International, LLC). In [Preliminary] Proceedings for Extreme Markup Languages 2003, held in August 2003, Montréal, Québec. "This paper describes and explains the XIndirect facility, a W3C Note. The XIndirect Note defines a simple mechanism for representing indirect addresses that can be used with other XML-based linking and addressing facilities, such as XLink and XInclude. XIndirect is motivated primarily by the requirements of XML authoring in which the management of pointers among systems of documents under constant revision cannot be easily satisfied by the direct pointers provided by XLink and XInclude. Indirect addressing is inherently expensive to implement because of both the processing demands of multi-step pointers and the increased system complexity required to do the processing. XLink and XPointer (and by extension, XInclude) explicitly and appropriately avoid indirection in order to provide the simplest possible solution for the delivery of hyperlinked documents, especially in the context of essentially unbounded systems, such as the World Wide Web. XIndirect enables indirect addressing when needed without adding complexity to the existing XML linking and addressing facilities -- by defining indirection as a separate, independent facility, processors that only need to support delivery of documents are not required to support indirection simply in order to support XLink or XInclude. Rather, when indirection management is required, developers of XML information management systems can limit the support for indirection to closed systems of controlled scope where indirection is practical to implement. The paper illustrates some of the key use cases that motivate the need for the XIndirect facility, describes the facility itself, and discusses a reference implementation of the XIndirect facility..." 
See: (1) the Extreme Markup Languages 2003 Program and (2) the event listing.
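
The management benefit XIndirect targets can be shown with a toy model: links address stable indirect names, and a separate table maps each name to its current direct target, so a revision updates one table entry instead of every pointer. (Hypothetical data model for illustration; this is not XIndirect's syntax, and the paths are invented.)

```python
# Minimal model of indirect addressing: pointers name entries in an
# indirection table rather than concrete targets, so revisions touch the
# table, not the documents. (Not XIndirect's actual representation.)
indirection_table = {
    "intro": "chapters/overview.xml#sec1",
    "api": "reference/v2/api.xml#all",
}

def resolve(address, table, max_hops=10):
    """Follow indirect names until a direct target is reached."""
    hops = 0
    while address in table:
        address = table[address]
        hops += 1
        if hops > max_hops:          # guard against circular indirection
            raise ValueError("indirection loop")
    return address

# A revision moves the overview; only the table changes, not the links.
indirection_table["intro"] = "chapters/intro-v2.xml#sec1"
print(resolve("intro", indirection_table))  # chapters/intro-v2.xml#sec1
```

The cost the paper notes is also visible: every link traversal now pays for one or more table lookups, which is why XIndirect confines indirection to closed, controlled systems rather than adding it to XLink or XInclude themselves.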

  • [August 12, 2003] "Datatype- and Namespace-Aware DTDs: A Minimal Extension." By Fabio Vitali, Nicola Amorosi, and Nicola Gessa (Department of Computer Science, University of Bologna). In [Preliminary] Proceedings for Extreme Markup Languages 2003, held in August 2003, Montréal, Québec. "DTDs and XML Schema are important validation languages for XML documents. They lie at opposite ends of a spectrum of validation languages in terms of expressive power and readability. Unlike other proposals for validation languages, DTD++ provides a DTD-like syntax for XML Schema constructs, thereby enriching the ease of use and reading of DTDs with the expressive power of XML Schema. An implementation as a pre-processor of a Schema-validating XML parser aids in ensuring wide support for the language... The literature seems to agree that schema languages for XML documents lie between the two extremes of a DTD, which has maximum terseness and readability but minimum expressive power, and XML Schema, which has the greatest expressive power but much less clarity and conciseness. Additionally, coexistence of different schema languages within the same document is still not straightforward. DTD subsets are required when using general entities, and some parsers get overly confused dealing with entities defined in a DTD, and elements and attributes in a different schema document. Our proposal aims at finding a reasonable compromise between the expressive power of XML Schema and the ease of use and compactness of a DTD. What we decided first was that there was no sense in creating a completely new language; extending an existing syntax with features taken from another existing language seemed, and still seems now, a much better approach. Of course, the final result is still incomplete and partial. In particular, support for keys and unique values that exist in XML Schema has not been provided yet. 
Still, the experience so far with the DTD++ language appears to be interesting and rewarding..." See: (1) the Extreme Markup Languages 2003 Program and (2) the event listing.

  • [August 09, 2003] "New and Improved String Handling." By Bob DuCharme. From XML.com (August 06, 2003). ['In the Transforming XML column Bob DuCharme explains some of the new and improved string handling functions -- for concatenation, search, and replace -- in XSLT/XPath 2.0.'] "In an earlier column, I discussed XSLT 1.0 techniques for comparing two strings for equality and doing the equivalent of a 'search and replace' on your source document. XSLT 2.0 makes both of these so much easier that describing the new techniques won't quite fill up a column, so I'll also describe some 1.0 and 2.0 functions for concatenating strings. Notice that I say '1.0' and '2.0' without saying 'XSLT'; that's because these are actually XPath functions available to XQuery users as well as XSLT 2.0 users. The examples we'll look at demonstrate what they bring to XSLT development. The string comparison techniques described before were really boolean tests that told you whether two strings were equal or not. The new compare() function does more than that: it tells whether the first string is less than, equal to, or greater than the second according to the rules of collation used. 'Rules of collation' refers to the sorting rules, which can apparently be tweaked to account for the spoken language of the content... New features such as data typing and a new data model may make XSLT and XPath 2.0 look radically different from their 1.0 counterparts, but many of these new features are straightforward functions that are familiar from other popular programming languages. The compare(), replace(), and string-join() functions, which will make common coding tasks go more quickly with less room for error, are great examples of this..." For related resources, see "Extensible Stylesheet Language (XSL/XSLT)."
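
The semantics of the three XPath 2.0 functions DuCharme highlights map directly onto familiar operations in general-purpose languages, which is his closing point. As a quick analogue (Python here, since these are XPath functions rather than any one host language's API): `compare()` is a three-way string comparison, `replace()` is regex substitution, and `string-join()` joins a sequence with a separator.

```python
# Python analogues of the XPath 2.0 string functions discussed in the
# column (semantic illustration only, not an XSLT implementation).
import re

def compare(a, b):
    """Three-way comparison, like fn:compare(): -1, 0, or 1."""
    return (a > b) - (a < b)

print(compare("apple", "banana"))                 # -1
print(re.sub(r"\d+", "#", "room 101, floor 3"))   # room #, floor #  (replace)
print("-".join(["2003", "08", "06"]))             # 2003-08-06  (string-join)
```

One caveat the analogue hides: fn:compare() orders strings according to a collation, which can be tuned for the language of the content, whereas plain Python comparison is by code point.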




Hosted By
OASIS - Organization for the Advancement of Structured Information Standards

Document URI: http://xml.coverpages.org/xmlPapers200309.html  —  Legal stuff
Robin Cover, Editor: robin@oasis-open.org