Other collections with references to general and technical publications on XML:
- XML Article Archive: [August 2002] [July 2002] [April - June 2002] [January - March 2002] [October - December 2001] [Earlier Collections]
- Articles Introducing XML
- Comprehensive SGML/XML Bibliographic Reference List
[September 30, 2002] "XML to Drive Office Update." By Peter Galli. In eWEEK (September 30, 2002). "The next version of Microsoft Corp.'s Office productivity suite will come with XML support baked into Word, allowing users to, among other things, more effectively mine their data. Code-named Office 11, the suite will feature built-in support for XML in Word, allowing developers to create 'smart' documents that automatically search for code or updates as needed. In addition, the software -- the first beta of which is expected to be announced by company CEO Steve Ballmer at Gartner Symposium in Orlando, Fla., on October 9 -- will allow developers to use Word as a development platform to create XML templates and solutions, as well as re-purpose content with database and Web service interaction, said Jeff Raikes, Microsoft's group vice president of productivity and business services, in an interview with eWEEK... Arbitrary schemas, also known as 'open' or 'customer' schemas, let users define their own tags in a way that suits their businesses. In Office 11, applications will be able to interact with customer-defined schemas -- unlike Office XP, in which Excel 2002 uses Spreadsheet XML..." See also the image for "Programmable Task Pane and Baked-In XML Let Data be Imported Into Word File" and the text from the interview: "[Raikes:] what we are doing that is very exciting is using XML capabilities more fully in the next major release such that the Office tools become a way to connect into functionality that will be specific to certain verticals. For example, we were just reviewing the idea of Excel accepting information done in a schema for financial reporting, so you can compare different companies. There's a vertical characteristic to that, but it's not a specific product for that. Today we have a small-business version [and a] student and teacher license edition, but there will be nothing beyond this. 
But there are things that people will do that will supplement Office for various vertical industries... Office 11 is a big step forward in this regard because of the arbitrary schema approach, where you have a standard schema for reporting financial information and immediately be able to access that data without massaging it. In Word too, which is a tool people use to create content. If you can have access to XML schema associated with content databases within your organization, you can dramatically pull that boilerplate text... the arbitrary XML schema element is the most interesting because your ability to connect up with business application systems grows phenomenally because you don't have to write Excel specifically for a financial reporting schema or a content management schema. It just knows how to accept a new schema and do the right thing. And that's what we do in Office 11... our Task Pane now also becomes programmable in XML so that as part of accepting that schema the actions that are associated with how to operate on that schema can be defined by the person who programmed the solution as well. So in effect it becomes a mechanism that guides you through the solution that's built on Office..." See: "Microsoft Office 11 and XDocs."
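The "arbitrary schema" idea Raikes describes -- an application that accepts a customer-defined schema and "just knows how to do the right thing" -- can be sketched generically. The following is an illustrative sketch only: the XML vocabulary and function names are invented, and this is not Microsoft's implementation. The point is that a tool can map records from an arbitrary customer schema into tabular rows without schema-specific code.

```python
import xml.etree.ElementTree as ET

# Hypothetical customer-defined financial-reporting vocabulary
# (element names invented for illustration).
REPORT = """<financials company="Contoso">
  <quarter name="Q1"><revenue>100</revenue><costs>80</costs></quarter>
  <quarter name="Q2"><revenue>120</revenue><costs>90</costs></quarter>
</financials>"""

def rows(xml_text, record_tag):
    """Turn each <record_tag> element into a dict of its attributes
    and child-element text, with no schema-specific code."""
    root = ET.fromstring(xml_text)
    out = []
    for rec in root.iter(record_tag):
        row = dict(rec.attrib)
        for child in rec:
            row[child.tag] = child.text
        out.append(row)
    return out

print(rows(REPORT, "quarter"))
# [{'name': 'Q1', 'revenue': '100', 'costs': '80'},
#  {'name': 'Q2', 'revenue': '120', 'costs': '90'}]
```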
[September 30, 2002] "The Importance of TMX." By David Pooley (SDL International). In Globalization Insider: The LISA Newsletter Volume XI, Number 3.6 (September 26, 2002). ISSN: 1420-3693. TMX Special Issue. "TMX stands for Translation Memory eXchange. OSCAR (Open Standards for Container/Content Allowing Re-use) is the LISA Special Interest Group responsible for its definition. OSCAR members include translation tool developers, service providers and other interested parties (e.g., large translation clients). They came together over five years ago to specify a way in which translation memory data could be exchanged between tools and/or vendors with little or no loss of critical data in the process. OSCAR has recently voted TMX version 1.4 as an accepted standard... TMX is an XML format for the interchange of translation memory data. As such, it consists of elements (with attributes) that provide information about translation 'segments'. The size of a segment is not pre-defined and it will usually be a phrase, sentence or paragraph. For most tools using TMX, the default segment size is a sentence. Within each segment of TMX, there are optional elements that provide information about the formatting contained in the segment (change of font, hyperlink etc.). TMX also provides for the definition of text 'subflows' such as footnotes and index entries..." Note: The TMX 1.4a Specification (OSCAR Recommendation, 10-July-2002) "defines the Translation Memory eXchange format (TMX). The purpose of the TMX format is to provide a standard method to describe translation memory data that is being exchanged among tools and/or translation vendors, while introducing little or no loss of critical data during the process... TMX is defined in two parts: (1) A specification of the format of the container (the higher-level elements that provide information about the file as a whole and about entries). 
In TMX, an entry consisting of aligned segments of text in two or more languages is called a Translation Unit (the <tu> element); (2) A specification of a low-level meta-markup format for the content of a segment of translation-memory text. In TMX, an individual segment of translation-memory text in a particular language is denoted by a <seg> element. TMX is XML-compliant. It also uses various ISO standards for date/time, language codes, and country codes. TMX files are intended to be created automatically by export routines and processed automatically by import routines. TMX files are 'well-formed' XML documents that can be processed without explicit reference to the TMX DTD. However, a 'valid' TMX file must conform to the TMX DTD, and any suspicious TMX file should be verified against the TMX DTD using a validating XML parser..." References in: (1) the TMX website; (2) "Translation Memory Exchange"; (3) "Markup and Multilingualism."
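The container/segment structure described above can be shown with a minimal TMX fragment. The <tu>, <tuv>, and <seg> element names follow the TMX specification; the header attribute values and the small Python helper below are illustrative only.

```python
import xml.etree.ElementTree as ET

# Minimal, illustrative TMX 1.4 document: one Translation Unit (<tu>)
# holding aligned English and French segments (<seg>) inside <tuv>
# language variants. Header attribute values are placeholders.
TMX_SAMPLE = """<?xml version="1.0"?>
<tmx version="1.4">
  <header creationtool="example" creationtoolversion="0.1"
          segtype="sentence" o-tmf="plain" adminlang="en"
          srclang="en" datatype="plaintext"/>
  <body>
    <tu>
      <tuv xml:lang="en"><seg>Hello world</seg></tuv>
      <tuv xml:lang="fr"><seg>Bonjour le monde</seg></tuv>
    </tu>
  </body>
</tmx>"""

def segments_by_language(tmx_text):
    """Return {language: [segment text, ...]} for every <tuv> variant."""
    root = ET.fromstring(tmx_text)  # raises ParseError if not well-formed
    out = {}
    for tuv in root.iter("tuv"):
        lang = tuv.get("{http://www.w3.org/XML/1998/namespace}lang")
        for seg in tuv.iter("seg"):
            out.setdefault(lang, []).append(seg.text)
    return out

print(segments_by_language(TMX_SAMPLE))
# {'en': ['Hello world'], 'fr': ['Bonjour le monde']}
```

Because the file is plain well-formed XML, an import routine like this can process it without the TMX DTD; full validation against the DTD is a separate step, as the specification notes.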
[September 30, 2002] "TMX 1.4a." By Yves Savourel (OSCAR). In Globalization Insider: The LISA Newsletter Volume XI, Number 3.6 (September 26, 2002). ISSN: 1420-3693. TMX Special Issue. "... a standard is only as good as its implementations. TMX follows that rule as well. A compliance kit is incorporated with the new version. This should help developers to implement solid and interoperable TMX functionalities. Tool vendors can develop import and export functions so their applications can read and write TMX documents. Those TMX files must be valid, that is: well-formed XML that can be validated against the TMX DTD. However, some aspects of the implementation cannot be verified by the DTD (for example: what type of inline elements the document uses to enclose inline codes). One way to verify a tool does a good job is to provide test cases and check that the model TMX documents [in the compliance kit] are the same as the ones generated by the tool. TMX 1.4a has two [certification] levels: Level 1 is for TM with no inline codes (e.g., strings from a resource file), Level 2 is for formats that have inline codes (e.g., HTML content, where bold, italics, etc. are inline codes). Depending on what type of original format you are working with, you should get TMX Level 1 or Level 2. A tool that offers HTML support but doesn't generate TMX documents with inline codes is not TMX-compliant. Also keep in mind that tools may only import TMX, only export TMX, or do both. There are compliance tests for each of those aspects... In general, standards such as TMX, OLIF, TBX, or XLIFF are good because they allow the users to have their assets -- whatever they are -- stored in a common and open format. This permits them to use various applications with the same data, and to migrate to newer and better tools without losing too much data..." 
[Next version:] "Some additional work to be done would be to provide an XML schema for TMX, in addition to the current DTD, so we can take full advantage of XML features. A possible addition, linked to XML Schema, would be to allow for non-TMX constructs inside a TMX document, using XML namespaces. This would be more flexible than the <prop> element and the ts attribute currently used for extensibility purposes. And finally, there is the yet-to-be-resolved issue of segmentation. This is not a problem specific to TMX - it affects any TM repository and translation tool in general. Hopefully the newly created Segmentation and Word Count Working Group at OSCAR will be able to bring some solution to the problem, but this will take time..." References in: (1) the TMX website; (2) "Translation Memory Exchange"; (3) "Markup and Multilingualism."
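The compliance idea described above -- checking that a tool's generated TMX matches the model documents in the kit -- amounts to comparing two XML files while ignoring incidental differences. A rough sketch follows; a real compliance check would also validate against the TMX DTD and use proper XML canonicalization rather than this whitespace-stripping shortcut.

```python
import xml.etree.ElementTree as ET

def canonical(tmx_text):
    """A rough canonical form: parse and re-serialize after dropping
    whitespace-only text nodes, so incidental formatting differences
    between two TMX exports are ignored. (A real compliance check
    would use full XML canonicalization.)"""
    root = ET.fromstring(tmx_text)
    for el in root.iter():
        if el.text and not el.text.strip():
            el.text = None
        if el.tail and not el.tail.strip():
            el.tail = None
    return ET.tostring(root)

# Two hypothetical exports of the same translation unit, differing
# only in indentation and quoting.
a = "<tmx version='1.4'><body><tu><tuv xml:lang='en'><seg>Hi</seg></tuv></tu></body></tmx>"
b = """<tmx version='1.4'>
  <body><tu><tuv xml:lang='en'><seg>Hi</seg></tuv></tu>
</body></tmx>"""

print(canonical(a) == canonical(b))  # True
```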
[September 30, 2002] "A Conceptual Markup Language that Supports Interoperability between Business Rule Modeling Systems." By Jan Demey, Mustafa Jarrar, and Robert Meersman (VUB STARLab, Vrije Universiteit Brussel, Belgium). Paper prepared for CoopIS 2002 (Tenth International Conference on Cooperative Information Systems, October 30 - November 1, 2002, University of California, Irvine). 17 pages. "The Internet creates a strong demand for standardized exchange not only of data itself but especially of data semantics, as this same internet increasingly becomes the carrier of e-business activity (e.g., using web services). One way to achieve this is in the form of communicating 'rich' conceptual schemas. In this paper we adopt the well-known conceptual modeling (CM) technique of ORM (Object Role Modeling), which has a rich complement of business rule specification, and develop ORM-ML, an XML-based markup language for ORM. Clearly domain modeling of this kind will be closely related to work on so-called ontologies and we will briefly discuss the analogies and differences, introducing methodological patterns for designing distributed business models. Since ORM schemas are typically saved as graphical files, we designed a textual representation as a marked-up document in ORM-ML so we can save these ORM schemas in a more machine-exchangeable way that suits networked environments. Moreover, we can now write style sheets to convert such schemas into another syntax, e.g., pseudo natural language, a given rule engine's language, first order logic... The ORM conceptual schema methodology is fairly comprehensive in its treatment of many 'practical' or 'standard' business rules and constraint types. Its detailed formal description makes it an interesting candidate to nontrivially illustrate our XML-based ORM markup language as an exchange protocol for representing ORM conceptual models... We describe the main elements of the ORM-ML grammar and demonstrate it using a few selected elementary examples. 
A complete formal definition of the grammar for this ORM-ML is an XML Schema instance... ORM-ML allows the representation of any ORM schema without loss of information or change in semantics, except for the geometry and topology (graphical layout) of the schema (e.g., location, shapes of the symbols), which, however, we may easily provide as a separate graphical style sheet to the ORM schema... Verbalization of a conceptual model is the process of writing its facts and constraints in pseudo natural language sentences, which assumedly allows non-experts to (help) check, validate, or even build conceptual schemas. In ORM-ML, generating such verbalizations from agreed templates (i.e., 'template NL' syntax) parameterized over the ORM schema is done by building separate XML-based style sheets. Moreover, multilingual style sheets also become easier by translating these template sentences into different languages, with their parameter values (which come from the ORM schema) translated by a human or machine..." See: (1) "STARLab Object Role Modeling Markup Language (ORM-ML) Represents ORM Models in XML"; (2) "STARLab ORM Markup Language (ORM-ML)." [cache]
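The template-based verbalization described above can be sketched in miniature. The element and attribute names below are invented for illustration and do not reflect the actual ORM-ML grammar; the point is only that a constraint marker plus per-language templates, parameterized over the schema, yields a pseudo-natural-language sentence.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified stand-in for an ORM-ML fact type: a binary
# fact "Person works for Company" carrying a uniqueness constraint.
# Names here are invented, not the real ORM-ML schema.
FACT = ET.fromstring(
    "<fact constraint='uniqueness'>"
    "<role player='Person'/><predicate>works for</predicate>"
    "<role player='Company'/></fact>")

# One template per (constraint, language). Adding a translated template
# is all that another output language requires; the parameter values
# would be translated separately, as the paper notes.
TEMPLATES = {("uniqueness", "en"): "Each {0} {1} at most one {2}."}

def verbalize(fact, lang):
    """Fill the matching template with the fact's roles and predicate."""
    roles = [r.get("player") for r in fact.findall("role")]
    pred = fact.findtext("predicate")
    return TEMPLATES[(fact.get("constraint"), lang)].format(
        roles[0], pred, roles[1])

print(verbalize(FACT, "en"))  # Each Person works for at most one Company.
```

In the paper's design this transformation would be an XML style sheet rather than Python, but the template-over-schema mechanism is the same.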
[September 30, 2002] Employment Tax e-file System Implementation and User Guide 2003. Forms 940, 940PR, 941, 941PR, 941SS, and Related Schedules. US Internal Revenue Service, Electronic Tax Administration. IRS Publication 3823 (Draft). 135 pages. Appendix B: Name Control Conventions; Appendix C: Street Abbreviations; Appendix D: Postal Service State Abbreviations and Zip Code Ranges; Appendix E: Sample Form 9041; Appendix F: Sample Form 8633; Appendix G: Glossary of Terms. "This publication contains the procedural guidelines and validation criteria for the Employment Tax e-file System. Planned implementation of the System for Internal Revenue Service (IRS) Processing Year 2003 includes the following forms, schedules, and attachments: Form 941; Forma 941PR; Form 941SS; Form 940; Form 940PR; Form 941 Schedule B; Forma 941PR Anexo B; Form 941c; Forma 941c; PIN Registration; Payment Record. Formatted return files will be transmitted electronically via the IRS Electronic Management System (EMS), located at the Tennessee Computing Center (TCC). Formatted PIN Registration files will be transmitted electronically via EMS, located at the Austin Submission Processing Center (AUSPC). Software Developers and Transmitters should use the guidelines provided in this document along with electronically published XML Schemas, and Test Scenarios in order to develop and test their software for use with this system. You may obtain additional copies of this publication by calling 1-800-829-3676. The publication is also available electronically on the IRS Web Site, in the 94x XML Developers' Forum (www.irs.gov)..." See references in the news item "US Internal Revenue Service Establishes Online XML Developers' Forum for Employment Tax E-file System." [cache]
[September 30, 2002] "OMB Names Microsoft and IBM Tools as E-Gov Platforms." By Jason Miller. In Volume 21, Number 28 (September 16, 2002), page 7. "Slowly, a plan is emerging for taking the administration's 24 Quicksilver projects from ideas on paper to interactive online services. Mark Forman, the Office of Management and Budget's associate director for IT and e-government, said by next summer agencies will choose from two Web service platforms that will let the e-government initiatives more easily handle transactions. He said the platforms will be the IBM Grid Computing Platform and the next generation of Microsoft .Net using Extensible Markup Language. The platforms will build on what OMB identified in April as the two underlying technologies for the 24 projects, Java 2 Enterprise Edition and .Net... To provide links between disparate applications, the selected Web services platforms use technologies such as Web Services Description Language; Universal Description, Discovery and Integration; and Simple Object Access Protocol. The links between unrelated applications will let agencies share transaction engines or services more easily because different pieces of software or hardware will have interfaces with one another through XML schemas. Forman said there are many areas where Web services could cut costs by integrating functions. For instance, financial management could include debt collection, payment processing and reporting applications, he said. The keys to applying a Web services platform include identifying agencies' common functions and interdependencies and evaluating the barriers that prevent them from sharing information, Forman said. The plan for meshing some services comes as many agencies are preparing to release the second iterations of their initial sites and portals, although some have yet to roll out their first online services... Two other projects came online earlier this summer, Forman said. 
The Health and Human Services Department put an E-Grants portal prototype online, and the Treasury Department's Simplified and Unified Tax and Wage Reporting project put the 94x series of IRS forms on the Web..." See: (1) preceding bibliographic entry; (2) Federal Enterprise Architecture Program Management Office (FEA-PMO) - 24 Presidential Priority E-Gov Initiatives; (3) IRS (US Internal Revenue Service) e-file system.
[September 30, 2002] "RLG Best Practice Guidelines for Encoded Archival Description." By RLG EAD Advisory Group. August 2002. 34 pages. "The objectives of the guidelines are: (1) To facilitate interoperability of resource discovery by imposing a basic degree of uniformity on the creation of valid EAD-encoded documents and to encourage the inclusion of elements most useful for retrieval in a union index and for display in an integrated (cross-institutional) setting. (2) To offer researchers the full benefits of XML in retrieval and display by developing a set of core data elements to improve resource discovery. It is hoped that by identifying core elements and by specifying 'best practice' for those elements, these guidelines will be valuable to those who create finding aids, as well as to vendors and tool builders. (3) To contribute to the evolution of the EAD standard by articulating a set of best practice guidelines suitable for interinstitutional and international use. These guidelines can be applied to both retrospective conversion of legacy finding aids and the creation of new finding aids... The document focuses on general issues that cross institutional boundaries." A 2002-09-27 posting from Merrilee Proffitt provides background to this publication; see "RLG Best Practice Guidelines for Encoded Archival Description Now Available": "These guidelines were developed by the RLG EAD Advisory Group between October 2001 and August 2002 to facilitate interoperability of resource discovery by imposing a basic degree of uniformity on the creation of valid EAD-encoded documents, encourage the inclusion of particular elements, and develop a set of core data elements. 
In fall 2001, RLG charged a reconstituted EAD Advisory Group with revising RLG's existing guidelines for three reasons: (1) an awareness that encoding practices have evolved considerably since pioneering repositories began submitting finding aids under the original 1998 RLG encoding guidelines; (2) an appreciation that the community of EAD practitioners has grown markedly since then, including a significant expansion outside the United States; (3) the knowledge that the impending release of EAD 2002, the updated version of the DTD, would of itself require changes in the encoding guidelines. Nine experienced EAD users worked with program officer Merrilee Proffitt to evaluate and rework the existing guidelines; members of the group surveyed best practice documents from a number of different repositories and projects before beginning their task. Group members settled on two key objectives. One was to identify and define the use of a minimal set of EAD elements and attributes complete enough to assure that information in finding aids is adequate to serve the users' needs and yet parsimonious enough to prevent excessive encoding overhead on the creators. Their second objective was to assure that the guidelines stand a reasonable chance of meeting the needs of an international encoding community..." See references in: (1) RLG EAD Support Site; (2) EAD Round Table Help Pages; (3) "Encoded Archival Description (EAD)."
[September 30, 2002] "Web Services Wars Take Artistic Turn." By Stuart J. Johnston. In XML Magazine Volume 3, Number 6 (October/November 2002), pages 8-9. ['Choreography or orchestration? Industry leaders duke it out over standards for process assembly and management.'] "Lack of standards for service orchestration and business process modeling has been acknowledged as one of the thorniest problems slowing widespread adoption of Web services to date. Several XML-based workflow description languages have sprung up, but so far none has been adopted as a standard. In addition, some experts and vendors have suggested that an additional higher-level syntax, which some term 'choreography,' is also needed. In late June, Sun Microsystems and three partners introduced just that -- a proposed choreography standard aimed at filling in the gaps between existing orchestration technologies. Developed by Intalio, BEA Systems, SAP, and Sun, the Web Services Choreography Interface specification (WSCI) -- pronounced 'whiskey' -- is an XML specification for the flow of messages between interacting Web services... Current orchestration languages -- such as Microsoft's XLang, IBM's Web Services Flow Language (WSFL), and the Business Process Management Initiative's Business Process Modeling Language (BPML) -- focus on the nuts and bolts of passing business-oriented messages back and forth. To date, this has been a very task-oriented approach, although XLang does attempt to deliver some functionality in what some term 'the choreography layer.' ... In early August, IBM and Microsoft, along with BEA Systems, announced a new specification resulting from a merger of XLang and WSFL [BPEL4WS]... In addition to the orchestration announcement, the IBM/Microsoft coalition proposed two other XML subspecifications: WS-Transaction and WS-Coordination. 
If adopted, WS-Transaction could threaten work done on other transaction languages, possibly including Business Transaction Protocol (BTP), a standards project overseen by the Organization for the Advancement of Structured Information Standards (OASIS). While both coalitions say they will submit their specifications to a recognized standards body, neither of them has committed to which one or how soon. A likely choice for either spec would be OASIS, because it governs Electronic Business using XML (ebXML) as well as Universal Description, Discovery, and Integration (UDDI), and is the body to which Microsoft, IBM, and VeriSign submitted the WS-Security specification in June. The question of standards bodies also raises questions as to whether any of the parties plan to charge royalties for the use of intellectual property they may have embedded in either specification. The Sun coalition has already announced that WSCI is available on a royalty-free basis. At press time, however, the IBM/Microsoft coalition would say only that it would follow the royalty policies of the standards body they submit it to... BEA Systems is part of both coalitions, something that analysts say points to the company's potential role as a peacemaker between the warring factions..." See: "Business Process Execution Language for Web Services (BPEL4WS)."
[September 28, 2002] "ConTeXtualized Local Ontology Specification via CTXML." By Paolo Bouquet and Stefano Zanobini (Dept. of Computer Information and Communication Technologies, University of Trento, Italy), Antonia Donà and Luciano Serafini (ITC-Irst, Trento, Italy). Presented at the AAAI-02 Workshop on Meaning Negotiation (MeaN-02), July 28, 2002, Edmonton, Alberta, Canada. 8 pages, with 6 references. "In many application areas, such as the semantic web, knowledge management, distributed databases, it has been recognized that we need an explicit way to represent meanings. A major issue in all these efforts is the problem of semantic interoperability, namely the problem of communication between agents using languages with different semantics. Following [a paper by Bonifacio, Bouquet, and Traverso], we claim that a technological infrastructure for semantic interoperability between 'semantically autonomous' communities must be based on the capability of representing local ontologies and mappings between them, rather than on the attempt of creating a global, supposedly shared, conceptualization. The goal of this paper is to define a theoretical framework and a concrete [XML-based] language for the specification of local ontologies and mappings between them... Despite the effort to define standard semantics for various domains, people seem to resist such an attempt at homogenization. Partly, this is due to practical problems (it can be very costly to change the overall organization of a database, or the classification of large collections of documents). But we believe that there are also theoretical reasons why this homogeneity is not accepted, and in the end is not even desirable. In fact, lots of cognitive and organizational studies show that there is a close relationship between knowledge and identity. 
Knowledge is not simply a matter of accumulating 'true sentences' about the world, but is also a matter of interpretation schemas, contexts, mental models, [and] perspectives which allow people to make sense of what they know. Therefore, any attempt at imposing external interpretation schemas (and a definition of meaning always presupposes some interpretation schema, at least implicitly) is perceived as an attack on an individual's or a community's identity. Moreover, interpretation schemas are an essential part of what people know, as each of them provides an alternative lens through which reality can be read. Thus, imposing a single schema is always a loss of global knowledge, as we throw away possibly innovative perspectives... If we accept that interpretation schemas are important, then we need to approach the problem of semantic interoperability from a different perspective. Instead of pushing towards a greater uniformity, we need a theoretical framework in which: (1) different conceptualizations (called 'local ontologies') can be autonomously represented and managed (and, therefore, we call them contextualized); (2) people can discover and represent relationships between local ontologies; (3) the relationships between local ontologies can be used to provide semantic-based services without destroying the 'semantic identity' of the involved parties... We see meaning negotiation as the process that dynamically enables agents to discover relationships between local ontologies. The goal of this paper is to create an 'environment' in which the preconditions for meaning negotiation are satisfied. In particular, on the one hand, we define a theoretical framework in which local ontologies and mappings between them can be represented; on the other hand, we provide a language for describing what we call a context space, namely a collection of contexts and their mappings; this language is called ConTeXt Markup Language (CTXML) and is based on XML and XML-Schema. 
Local ontologies are represented as contexts... we will use knowledge management as our main motivation for contextualized local ontologies; however, as we said at the beginning, we believe that similar motivations can be found in any semantically distributed application, e.g., the semantic web..." General references: "Conceptual Modeling and Markup Languages." [cache]
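The notion of a context space -- a set of local ontologies (contexts) plus explicit mappings between their concepts -- can be sketched with an invented data structure. This is not CTXML's actual schema (which the paper defines in XML Schema); it only illustrates the shape of the idea.

```python
# Hypothetical sketch of a "context space": two local ontologies given
# as concept paths, plus an explicit mapping between concepts. All
# names are invented for illustration.
context_space = {
    "contexts": {
        "c1": ["Arts/Photography", "Arts/Painting"],
        "c2": ["Hobbies/Photo"],
    },
    "mappings": [
        # (source context, source concept, relation, target context, target concept)
        ("c1", "Arts/Photography", "equivalent", "c2", "Hobbies/Photo"),
    ],
}

def related(space, ctx, concept):
    """Concepts in other contexts that are mapped to (ctx, concept)."""
    return [(tc, t) for (sc, s, rel, tc, t) in space["mappings"]
            if sc == ctx and s == concept]

print(related(context_space, "c1", "Arts/Photography"))
# [('c2', 'Hobbies/Photo')]
```

The key design point, matching the paper's claim, is that each context keeps its own conceptualization; interoperability lives entirely in the mapping entries, not in a merged global ontology.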
[September 28, 2002] "Linguistic Based Matching of Local Ontologies." By Bernardo Magnini, Luciano Serafini, and Manuela Speranza (ITC-Irst Istituto per la Ricerca Scientifica e Tecnologica, Trento, Italy). Presented at the AAAI-02 Workshop on Meaning Negotiation (MeaN-02), July 28, 2002, Edmonton, Alberta, Canada. "This paper describes an automatic algorithm of meaning negotiation that enables semantic interoperability between local overlapping and heterogeneous ontologies. Rather than reconciling differences between heterogeneous ontologies, this algorithm searches for mappings between concepts of different ontologies. The algorithm is composed of three main steps: (i) computing the linguistic meaning of the label occurring in the ontologies via natural language processing, (ii) contextualization of such a linguistic meaning by considering the context, i.e., the ontologies, where a label occurs; (iii) comparing the contextualized linguistic meanings of two ontologies in order to find a possible matching between them... differences, but by designing systems that will enable interoperability (in particular, semantic interoperability) between autonomous communities. Autonomous communities organize their 'local knowledge' according to a local ontology. A local ontology is a set of terms and relations between them used by the members of the autonomous community to classify, communicate, update, and, in general, to operate with local knowledge. Materializations of a local ontology can be, for instance, the logical organization of a web site used by the community to share information, the directory structure of a shared file system, the schema of a database used to store common knowledge, the tag-structure of an XML schema document used to describe documents or services shared by the members of the community. 
In all these cases, we think that two of the main intuitions underlying local ontologies are the following: (1) Each community (team, group, and so on) within an organization has its own conceptualization of the world, which is partial (i.e., covers only a portion of the world), approximate (i.e., has a degree of granularity), and perspectival (i.e., reflects the community's viewpoint on the world -- including the organization and its goals and processes); (2) There are possible mappings between different and autonomous conceptualizations. These mappings cannot be defined beforehand, as they presuppose a complete understanding of the two conceptualizations, which in general is not available. This means that these mappings are discovered dynamically via a process that we call meaning negotiation... In Section 2 we define a theoretical framework, the context space, where local ontologies and mappings between local ontologies are represented. A context space is composed of a set of contexts and a set of mappings. Contexts are the main data structure used to represent local knowledge; mappings represent the results of matching two (or in general many) contexts. In the Section 'Linguistic-based interpretation' we describe the computing of the local semantics of a context. Knowledge in a context is represented by structured labeled 'small' linguistic expressions, such as complex noun phrases, prepositional phrases, abbreviations, etc. The semantics of this structure is computed by combining the semantics of each single label... In the last section we describe how the local semantics of the labels of different contexts are compared in order to find possible overlaps and mappings between two structures and finally we draw some conclusions..." [cache]
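As a toy illustration of the label-matching idea (steps i and iii above), one can normalize labels to token sets via a synonym table and test for overlap. This drastically simplifies the paper's algorithm -- no real word-sense disambiguation and no contextualization step is performed -- and the synonym table is invented for the example.

```python
# Toy stand-in for linguistic label matching between two local
# ontologies: each node label is reduced to a bag of lowercase tokens,
# a tiny synonym table stands in for WordNet-style sense resolution,
# and two labels "match" when their normalized token sets intersect.
SYNONYMS = {"image": "picture", "photo": "picture", "auto": "car"}

def normalize(label):
    """Lowercase, tokenize, and map each token to a canonical synonym."""
    return {SYNONYMS.get(tok, tok) for tok in label.lower().split()}

def match(label_a, label_b):
    """True when the two labels share at least one canonical token."""
    return bool(normalize(label_a) & normalize(label_b))

print(match("Photo albums", "Image gallery"))  # True
print(match("Cars", "Motorbikes"))             # False
```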
[September 28, 2002] "Framework for a Music Markup Language." By Jacques Steyn. Paper presented at MAX 2002 - International Conference Musical Application using XML, September 19 - 20, 2002, State University of Milan, Italy. "Objects and processes of music that would be marked with a markup language need to be demarcated before a markup language can be designed. This paper investigates issues to be considered for the design of an XML-based general music markup language. Most present efforts focus on CWN (Common Western Notation), yet that system addresses only a fraction of the domain of music. It is argued that a general music markup language should consider more than just CWN. A framework for such a comprehensive general music markup language is proposed. Such a general markup language should consist of modules that could be appended to core modules on a needs basis... What is lacking is an HTML-like music markup language; one that is as simple, yet powerful enough. Creating such a language has become possible after the introduction of XML, but there is as yet no widely accepted language for music, and those that have been introduced focus only on small and particular subsets of CWN (Common Western Notation). Known attempts at XML-based music markup languages are: 4ML (Leo Montgomery), FlowML (Bert Schiettecatte), MusicML (Jeroen van Rotterdam), MusiXML (Gerd Castan), and MusicXML (Michael Good), all of which focus on subsets of CWN. ChordML (Gustavo Frederico) focuses on simple lyrics and chords of music. MML ('Music Markup Language', Jacques Steyn) is the only known attempt to address music objects and events in general. In this paper I will investigate the possible scope of music objects and processes that need to be considered for a comprehensive or general music markup language that is XML-based. The proposed general music markup language, in this case MML, is a work in progress and far from complete. 
It is possible that further modules will be introduced, or that the organization of modules will change due to practical demands. But even in its incomplete state it presently seems to be the only XML-based attempt to describe a very large scope of the domain of music. Other current attempts at marking up music focus on a subset of CWN, which is useful in the early days of an XML-based markup language addressing music issues, but they do not address important issues such as performed music or playlists. Hopefully MML can serve as a basis for future joint efforts to comprehensively describe music using XML as basis..." Also available in printable HTML and PDF. See: "XML and Music." [cache, conference reference]
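The modular design the paper argues for (optional modules appended to a core) can be illustrated with a small XML fragment. The element names below are invented for illustration; they are not MML's actual vocabulary, which the excerpt does not give.

```python
# Hypothetical illustration of a modular music-markup document: a core
# notation module plus an appended performance module. Element and
# attribute names are invented here, not actual MML vocabulary.
import xml.etree.ElementTree as ET

doc = """
<music>
  <core>
    <note pitch="C4" duration="quarter"/>
    <note pitch="E4" duration="quarter"/>
  </core>
  <performance tempo="120">
    <dynamics level="mf"/>
  </performance>
</music>
"""
root = ET.fromstring(doc)
pitches = [n.get("pitch") for n in root.iter("note")]
tempo = root.find("performance").get("tempo")
modules = [child.tag for child in root]
```

A processor that only understands the core module can simply ignore the <performance> sibling, which is the "appended on a needs basis" property the framework calls for.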
[September 28, 2002] "The Music Encoding Initiative (MEI)." By Perry Roland (Digital Library Research & Development Group, University of Virginia Library). Paper presented at MAX 2002 - International Conference Musical Application using XML, September 19 - 20, 2002. State University of Milan, Italy. ['This paper draws parallels between the Text Encoding Initiative (TEI) and the proposed Music Encoding Initiative (MEI), reviews existing design principles for music representations, and describes an Extensible Markup Language (XML) document type definition (DTD) for modeling music notation which attempts to incorporate those principles.'] "... TEI is mute regarding the 'proper' way to compose text. Even when texts are initially created using the TEI DTD, they are still essentially transcriptions of an ur-text. Similarly, the MEI does not attempt to encode all musical expression, but instead limits itself to the written form of music, i.e., common music notation (CMN). Like the TEI, the MEI must also remain unconcerned with how music is created. It is not primarily an aid to musical composition just as the TEI does not function as an aid in the creation of text. Some may see the adoption of CMN as the basis for encoding as too limiting. Legitimate arguments could be made for an entirely new form of music notation for the purpose of electronic transcription. However, common music notation is applicable to a wide range of contemporary and, perhaps more importantly, historical music. It has been eloquently described by Selfridge-Field as 'the cornerstone of all efforts to preserve a sense of the musical present for other and later performers and listeners'. Given its expressiveness, extensibility, nearly universal usage, and longevity, there seems to be little reason not to adopt CMN as the starting point for the MEI. The fact that the MEI fundamentally conceives of music as notation does not limit its usefulness for encoding performance and analytical information. 
While it cannot rival a human rendition, a basic performance suitable for many purposes may be mechanically derived from the notation. Of course, any additional information necessary to complete this process may also be encoded. Likewise, descriptive and critical information may be included to assist bibliographic and analytical applications. Ultimately, a limited scope makes the design of a representation easier. For example, both the pitch and rhythm models can be greatly simplified when non-CMN requirements are not considered... Because progress toward an encoding standard for music notation is much more feasible when not locked into constant re-invention of past wheels, large parts of the design of the MEI DTD are drawn from existing standards. On the largest scale, the MEI is modeled upon the TEI. At lower levels, the Acoustical Society of America (ASA) system is used to record pitch information, performance-specific data is encoded using elements which have similar names and functions as those in the Musical Instrument Digital Interface (MIDI) standard, most of the markup for text is designed to be familiar to users of HTML, and TEI header and Dublin Core elements form the basis of the meta-data components. Of course, the Unicode standard underlies the character encoding model for XML, obviating the need to re-invent special character encoding schemes. Finally, while it is not a formal standard, a well-known, authoritative source [Gardner Read, Music Notation: A Manual of Modern Practice, 2nd ed., 1979] has been used as the basis for the grammar for music notation parts of the MEI..." An alpha version XML DTD is available. See: (1) "Music Encoding Initiative (MEI)"; (2) general references in "XML and Music." [cache, conference reference]
[September 28, 2002] "A Comparison of XPDL, BPML, and BPEL4WS." By Robert Shapiro (President and Chief Technology Officer, Cape Visions). Published by ebPML.org. 'Rough Draft' version 1.4, August 27, 2002. 17 pages. "The Business Process Modeling Language (BPML) is representative of a new family of process definition languages intended for expressing abstract and executable processes that address all aspects of enterprise business processes, including in particular those areas important for Web-based services. Microsoft's XLANG is another member of this family, as is IBM's Web Services Flow Language (WSFL). These latter two have now been combined in BPEL4WS. In this paper we focus on a comparison of BPML with XPDL, the WfMC proposed standard for an XML-based process definition interchange language. Comments in red have been added to extend the comparison to BPEL4WS, hereafter abbreviated to BPEL... Our primary objective is to clarify the differences between the BPML and XPDL (and BPEL) paradigms. We are interested in exposing what can be done with one language and cannot be done, or done only with difficulty, in the other. When simple extensions are possible, we propose them. We are also concerned about the work being done by the three standards organizations: WfMC, OMG, and BPMI..." Note: "ebPML.org promotes a new vision for IT infrastructures shared by many and based on the convergence of several technologies and standards, including but not limited to: Business Process Management Systems, ebXML, Web services, and Content standards such as OAGIS the standard of the open application group, or RosettaNet." See: (1) "Business Process Execution Language for Web Services (BPEL4WS)"; (2) "XML-Based Workflow and Process Management Standards: XPDL, Wf-XML"; (3) "Business Process Modeling Language (BPML)." [source .DOC 2002-09-28, fetch from www.ebpml.org/ for update]
[September 27, 2002] "Sun Sneak-Previews Next Version of J2EE." By Richard Karpinski. In InternetWeek (September 26, 2002). "Sun Microsystems this week is previewing the latest version of its J2EE server environment, the first release to include full, baked-in support for Web services protocols. The preview is happening at its JavaOne Japan Developer Conference, and includes a first look at some of the Web services integration Sun has planned for J2EE 1.4. Application server vendors are just beginning to roll out servers supporting the last J2EE release, 1.3, which despite its 'dot' moniker included many significant upgrades. Users also are only beginning to move up to J2EE 1.3. Despite the work involved in a major J2EE upgrade, enterprises are closely watching the latest move, particularly the tighter integration of Web services protocols into the J2EE platform. J2EE 1.4 includes support for UDDI and ebXML registries, SOAP transactions, XML schemas and processing, and the Web Services Description Language (WSDL). Most application-server vendors already provide fairly comprehensive Web services protocol support, but a formal J2EE version release bakes that support right into the standard platform. 'Using J2EE v 1.4, Web services developers won't have to carefully pick and choose in order to achieve interoperability. They will get it by design,' said Mark Hapner, Sun's architect and co-specification lead for J2EE v 1.4, in a statement..." See "Java 2 Platform, Enterprise Edition 1.4 (J2EE 1.4) Specification (JSR 151)."
[September 27, 2002] "Content at Your Fingertips: Better Ways to Classify and Tag." By Michael P. Voelker (Equinox Communications, Inc). In Transform Magazine (October 2002). ['From manual to automated approaches and from content creation to content searching, metatagging helps businesses combat infoglut.'] "[A taxonomy provides] the structure of topics and subtopics that comprises a virtual filing cabinet in which content can be sorted. Placing content into a topic 'bucket' requires no special technology at its most basic level; anyone who's chosen a specific subfolder in which to save a file has done as much. But this fully manual approach becomes impractical and potentially inaccurate as the volume of information increases. It also doesn't necessarily allow for searchable retrieval of content, something that the application of metadata to content does. There are two schools of thought as to when metadata should be applied to content (in a process known as metatagging). The first school advocates applying tags at creation, a theory that, not surprisingly, many vendors in the content management and taxonomy software vendor community support. The second school calls for categorization of content at the search end using various algorithms that analyze content for meaning. These algorithms aren't dependent upon metadata applied to the content along the creation path. This method, again not surprisingly, is championed by categorization and search vendors. Anyone who has followed technology headlines has seen convergence in the marketplace between classification and search vendors and between classification and content management technologies in recent months. For example, search vendor Inktomi acquired classification vendor Quiver in July, and divine acquired Northern Light in January. 
Where acquisitions haven't occurred, partnerships have, such as taxonomy and classification vendor Stratify integrating with Plumtree and BEA portal offerings, and content management vendor Interwoven integrating with Inktomi. Of course, by definition content management has a classification component, and vendors have worked to enhance the tagging capabilities of their products. For example, Stellent recently added a Content Categorizer that suggests metadata to users when documents are checked in. Interwoven's MetaTagger suggests metadata fields to users and also provides fully automated tagging, if desired. Standard taxonomies (geographic locations and SIC codes) combined with any customized vocabularies developed at the installation ensure the consistency of available metadata that is applied. In automated mode, MetaTagger uses a hybrid approach of rules-based and statistical analyses to suggest metadata and can also 'discover' a customized taxonomy by reviewing collections of stored content. Documentum added auto-categorization and auto-tagging to its content management platform this summer via a new Content Intelligence Services (CIS) module. With prebuilt and custom-designed taxonomies as guides, CIS uses pattern recognition and rules-based logic to assign metatags to content in documents, Web pages, XML components and media assets. The process can be fully automated or combined with manual review..." See also (1) Taxonomy and Classification Vendors and (2) Balancing Human Intervention With Automation.
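The hybrid "rules-based plus statistical" categorization the article attributes to these products can be sketched generically. This is not any vendor's actual engine; the rule table, training vocabulary, and scoring are invented to show the two-pass shape: deterministic taxonomy rules first, a statistical fallback second.

```python
# Generic sketch of hybrid auto-categorization (rules first, then a
# simple statistical fallback). Not MetaTagger, CIS, or any real engine.
from collections import Counter

RULES = {  # hand-built taxonomy rules: keyword -> category
    "invoice": "Finance",
    "routing": "Networking",
}

TRAINING = {  # tiny vocabulary per category for the statistical fallback
    "Finance": "budget revenue invoice quarterly audit",
    "Networking": "packet routing switch latency bandwidth",
}

def categorize(text: str) -> str:
    words = text.lower().split()
    # Pass 1, rules-based: a matching keyword decides immediately.
    for w in words:
        if w in RULES:
            return RULES[w]
    # Pass 2, statistical: score categories by shared vocabulary.
    scores = Counter()
    for cat, vocab in TRAINING.items():
        scores[cat] = len(set(words) & set(vocab.split()))
    return scores.most_common(1)[0][0]

tag1 = categorize("Q3 invoice processing delays")    # caught by a rule
tag2 = categorize("switch latency measurements")     # falls to statistics
```

In a real product the statistical pass would be a trained classifier over a prebuilt or "discovered" taxonomy, and the suggested tag would typically go to a human for review rather than being applied blindly.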
[September 27, 2002] "'Bluefin' to Provide Standard SAN Management Interface." By Roger Reich. In InfoStor Volume 6, Number 9 (September 2002), pages 22-24. "The industry is primed to tackle issues surrounding storage management, one of the top concerns of storage users today. For example, the Storage Networking Industry Association (SNIA) recently announced the launch of the Storage Management Initiative (SMI), a program dedicated to developing a storage management standard. At the heart of SMI is the Bluefin specification. The Bluefin specification for storage area network (SAN) management began years ago, when SANs had just emerged and multi-vendor interoperability problems loomed large. At the time, no standard interface existed to allow products from multiple vendors to reliably interoperate for the purpose of monitoring and controlling resources in a storage network. Interface technology at the time (developed primarily for the networking, or "pre-SAN," industry) was unable to provide reliable and secure control of resources in complex, heterogeneous SANs. And no single vendor was capable of driving a de facto interface for SAN management... In 2000, the Partner Development Program (PDP) consortium was established, with 17 member companies: BMC Software, Brocade, Computer Associates, Compaq, Dell, EMC, Emulex, Gadzoox, Hewlett-Packard, Hitachi Data Systems, IBM, JNI, Prisa Networks, QLogic, StorageTek, Sun, and Veritas. This consortium began work on a specification code-named "Bluefin." The objective was to create a standard that would be transferred to the SNIA for completion. The PDP group embraced a new object-oriented interface technology, called Web-Based Enterprise Management (WBEM), being developed by the Distributed Management Task Force (DMTF) as a foundation for Bluefin. The object model that will be expressed through the WBEM architecture is an extension of the Common Information Model (CIM), also developed by the DMTF.
The Bluefin specification will help to accelerate work completed by the DMTF and SNIA. The SNIA's Disk Resource Management (DRM) Technical Working Group has laid the groundwork for developing CIM/WBEM technology for use in vendor products and held its first public demonstration of storage management using the technology in 1999..." [Note: "The core of Bluefin is an object model, built with the CIM (Common Information Model) standard, and a language binding and protocol solution that employs CIM-XML (CIM operations over HTTP), and SLP. Bluefin goes beyond just specifying the object model and documents what implementations need to do in order to achieve interoperability."] See: (1) "SNIA Announces Bluefin SAN Management Specification Using WBEM/MOF/CIM"; (2) "DMTF Common Information Model (CIM)."
[September 27, 2002] "Sun Software Supports CIM." By Lisa Coleman. In InfoStor Volume 6, Number 9 (September 2002), pages 1, 20. "Claiming to be the first systems vendor to offer storage management software based on the Common Information Model (CIM) standard, Sun Microsystems recently released its StorEdge Enterprise Storage Manager (ESM) software. ESM rounds out Sun's storage management software line by providing storage area network (SAN) visualization, topology reporting, device configuration, and diagnostics in a centralized platform, according to Steve Guido, product line manager in Sun's network storage product group. But relative to competing products, CIM compliance may be the primary differentiating feature. 'CIM is the basis for all of our open standards work, and it will [afford] long-term customer benefits in terms of scalability and rapidly accelerating device support by providing interoperability among various components in SANs and other topologies,' says Guido. The software is also compliant with the Web-Based Enterprise Management (WBEM) standard and is part of the Sun Open Net Environment (SunONE). Steve Kenniston, an analyst with the Enterprise Storage Group, says that Sun's ESM is on par with other vendors' management software, but differs in its support for the CIM and WBEM standards. 'It has all the features and function sets at a standards-based level to interoperate with not only Sun's management console, but also management consoles from [other vendors]. It really opens up what they'll be able to manage,' says Kenniston... Sun officials cite EMC as their main competitor, while acknowledging that vendors such as BMC Software, IBM, and Veritas also offer some of the same capabilities that ESM provides... Although Sun claims to be the first systems vendor to release CIM-based SAN management software, another company is also claiming to be first with CIM-based management software. 
StorScape, a joint venture of Eurologic Systems and Hermes SoftLab, is expected to release CIM-based storage management software next month..." See "DMTF Common Information Model (CIM)."
[September 27, 2002] "Holistic Web Services." By Jeremy Allaire. From Jeremy Allaire's Radio: An exploration of media, communications and applications over the Internet. September 24, 2002. "It's been really interesting to watch the continuing efforts of the Internet industry to define and deliver platforms for web services. I've been involved in helping to define XML protocols for distributed computing for a long time, and Macromedia has put together an extremely powerful yet simple set of software to help people build web services. Last week's Web Services II conference, held by InfoWorld, pulled together the latest thinking and technology in this crucial emerging space. You can read extensive coverage of the event here. But I've continued to be annoyed by the narrow-minded thinking that has dominated discussions of web services to date. Like many IT technologies, the central thrust of the web services worldview has emerged from the classic middleware infrastructure providers, which has in turn colored our thinking on web services significantly. The essential problem is that web services as defined by a core collection of XML protocols (SOAP, WSDL, UDDI) focuses almost entirely on API-level application integration, rather than a broader view of how software will be created, distributed and consumed in the future. Most importantly, it lacks any real notion of what the user experience of web services will be (though there are nascent standards efforts such as the Portlet specs, and WSUL)... The over-focus on protocols and middleware reflects the political economy of the IT industry, where control of programming languages, APIs and runtimes draw the most attention because the stakes are so high. There are many people who have actively considered a broader view of web services, one that encompasses the client-side user experience as well as the back-end plumbing that enables transparent use of logic and data in the network. 
But those people have been few and far between, and certainly not very visible in the broader industry discourse on web services..."
[September 27, 2002] "IBM Plans WebSphere for Web Services." By Matt Berger. In InfoWorld (September 26, 2002). "IBM Corp. is working to develop new software combining elements of its WebSphere and Tivoli product lines that is designed to allow companies to monitor, meter and bill for Web services, a company executive explained Wednesday. Known as Project Allegro, the effort aims to provide an infrastructure for hosting Web-based applications and services, said Bob Sutor, IBM's director of e-business standards strategy. One example could be a Web-based application run by a bank that gives people access to their banking information using standards-based transactions to collect data from multiple sources. Allegro will be able to monitor the health of the Web service, meter how often customers access the service, and manage the process for charging and billing customers that access a service through what IBM called "electronic contracts." Those contracts could be designed to charge users per transaction or on a subscription basis, Sutor said... Using standard technologies such as XML (Extensible Markup Language) and SOAP (Simple Object Access Protocol), the software is being designed to manage any number of Web-based applications no matter where they reside, whether it be on a company's internal servers or elsewhere on the Web. Customers most suited to use Allegro would include carriers that provide wireless services to customers; application service providers that host things like customer relationship management software; as well as enterprises that use Web services internally to allow data to be shared across disparate servers, according to IBM... The software will combine pieces of IBM's WebSphere application server and portal server, in addition to its Tivoli management and security software. An early implementation of the concept was embodied in a project IBM detailed late last year called the Web Services Hosting Technology, Sutor said..."
[September 27, 2002] "IBM Readying New WebSphere Software For Web Services Hosting. Project Allegro Product Available Next Year." By Elizabeth Montalbano. In CRN (September 25, 2002). "IBM is readying new software to provide a Java- and XML-based infrastructure for the hosting of Web services, an IBM executive told CRN Wednesday. The product, currently in development under Project Allegro, will be branded under the WebSphere software line and combine functionalities from WebSphere application server, portal and commerce products, as well as Tivoli security and systems management software, said Bob Sutor, IBM's director of e-business standards strategy. The new product is built on a host of Web services standards such as WSDL and SOAP and will rely heavily on the WS-Security standard, for which IBM recently announced support in its Tivoli and WebSphere products, Sutor said. IBM will make a reference implementation of the product available on its developerWorks site by the end of the year, with the full product available sometime next year, Sutor said. IBM has not decided on the name or pricing of the product, he added. Sutor used the example of a human resources outsourcing function to illustrate what the Project Allegro product will do. Sutor said that if a company has a Web-based application it wants to offer as a Web service -- e.g., the ability to access salary history and other human resources functions online -- there are contract, billing, and user registration functions to consider... 'The definition of Web services can sometimes confuse people because they're not sure if you're talking about the application or all the computer science that goes along with that, such as SOAP and WSDL,' Sutor said.
He said that Allegro actually brings those two definitions together into one product, showing how IBM is using the Web services standards it has built into its software products as a 'modular framework' to build an infrastructure allowing people to offer Web-based services... Sutor admitted that the flexible definition of Web services -- which has been used both to define a Web-based application offered as a service, as well as the actual linking of applications using XML over the Internet -- might be puzzling to solution providers and companies that want to use the technology..."
[September 27, 2002] "Extending RSS 2.0 With Namespaces." By Morbus Iff [aka Kevin Hemenway]. 2002-09. "With the recent release of RSS 2.0 by Userland, there's been a healthy amount of discussion over the smallest part of the spec: Extending RSS. This document attempts to clarify that section, by discussing the creation of the blogChannel Module from Dave Winer, and the underlying principles of namespaces. This document is not intended to be the 'end-all, be-all' of namespace discussions. For the sake of simplicity, I've left a lot of things out and I'm not talking in all the 'right technical terms'. If you have a healthy knowledge of namespaces already, you'll probably find something to nitpick about, and that's ok... this document is not definitive but rather an end-level, low-tech bundle of joy. This is a work-in-progress... Without getting overly complicated, a namespace is like a toy chest. 'You can buy as many toys as you want', your Mother says, 'but at the end of the day, be sure to put them away'. You know your Mother has a reason for saying this, because you've seen your Father fall a few times, thus breaking your new toys. The loss of time, money, and back pain mowing lawns is not a pleasant one..." See also RSS-DEV and "RDF Site Summary (RSS)."
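The extension pattern the article discusses amounts to declaring a namespace on the <rss> element and qualifying module elements with its prefix, so RSS 2.0 processors that don't know the module can skip them. A minimal sketch follows; the blogChannel namespace URI and <blogChannel:blink> element are given here as I recall Winer's module, so treat them as illustrative rather than authoritative.

```python
# Minimal RSS 2.0 feed extended with a namespaced module element, in the
# pattern the article describes. Namespace URI and element name follow
# the blogChannel module as recalled; verify against the module's docs.
import xml.etree.ElementTree as ET

feed = """<?xml version="1.0"?>
<rss version="2.0"
     xmlns:blogChannel="http://backend.userland.com/blogChannelModule">
  <channel>
    <title>Example Weblog</title>
    <link>http://example.org/</link>
    <description>Toy feed</description>
    <blogChannel:blink>http://example.org/friend</blogChannel:blink>
  </channel>
</rss>
"""
NS = {"bc": "http://backend.userland.com/blogChannelModule"}
root = ET.fromstring(feed)
channel = root.find("channel")
title = channel.findtext("title")          # plain RSS 2.0 element
blink = channel.find("bc:blink", NS).text  # namespaced module element
```

The key point is that the namespace URI, not the prefix, identifies the module: a consumer matches `{http://backend.userland.com/blogChannelModule}blink` whatever prefix the producer chose.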
[September 25, 2002] "At the Center of The Patent Storm." By Paul Festa and Daniel J. Weitzner (Director, W3C Technology and Society Activity). In CNET News.com (September 25, 2002). ['If you want to adhere to the latest official protocol for building a Web application, it could cost you. Welcome to the latest controversy roiling the World Wide Web Consortium, the standards body responsible for shepherding Web technologies like XML and HTML. This particular controversy began brewing nearly a year ago when the W3C first contemplated a change that would let its working groups incorporate technologies that already had intellectual property claims -- and royalties -- attached to them. The W3C subsequently backed away from that stance in the face of strong opposition and reaffirmed its policy of only recommending royalty-free technologies. But the issue is far from over... Weitzner spoke to CNET News.com about the fate of the W3C's royalty-free policy and potential royalty exception, and about the role of the Semantic Web.'] Excerpts: "What's the core motivation, then, for promoting a royalty-free policy? [Weitzner:] "The critical concern that has led us to push so hard for a royalty-free policy is that for all the different Web software implementers it would be terribly hard to negotiate with the patent holders. They don't have their own patent portfolios or intellectual property lawyers. How are they going to do it? That doesn't mean they can go and steal the IP, but we want to prevent putting them into a situation where they have to negotiate on such a lopsided playing field. We want to create standards that can be implemented without infringing on patents. In some cases, patent holders will still see it in their advantage to disseminate their technology for free... The goal is to avoid these very difficult, time-consuming licensing negotiations that require that implementers have lawyers and patent portfolios that they can trade with.
We think there are very important pieces of the Web that can be developed by people who don't have those resources." ... Like the open-source groups? [Weitzner:] "The open-source community has played a really important role at the W3C because, clearly, royalty-bearing standards create a fundamental problem for open-source software. But the need for royalty-free standards would exist even if there were no open-source solutions. What has been so successful about the Web is that its technology can be implemented everywhere by large and small developers, who don't have to worry about these licensing negotiations. The need for royalty-free standards goes beyond just the needs of the open-source community..." See: (1) Patent Policy Working Group Royalty-Free Patent Policy [W3C Working Draft 26-February-2002]; (2) earlier summary of the W3C patent policy discussions; (3) general references in "Patents and Open Standards."
[September 25, 2002] "Introduction to Xindice. An Open Source Native XML Database System." By Arun Gaikwad (Independent Software Consultant). From IBM developerWorks, Web architecture, XML zone. September 2002. ['This article is an introduction to an Open Source Native XML Database System, called Xindice (pronounced zeen-dea-chay). It is also an introduction to Native XML Database concepts.'] "Xindice is an Open Source Native XML Database System. In this article, you will learn how to: (1) Install Xindice; (2) Create and delete collections; (3) Insert and delete documents into these collections; (4) Use XQuery to query these documents. You can perform these operations on the command line or embed them in Java programs using the Java API. You will also learn to use the Java API to write JDBC style programs to communicate with Xindice. An XML Database System is something which you may think is unnecessary but once you start using it, you wonder how you would survive without it. I say this from personal experience. When I first heard of Native XML Database Systems about two years ago, I completely ignored them thinking that it was just hype. At that time, I was involved in the development of a project for a large financial brokerage company. We were using XML to send and receive financial feed data. It was necessary to save the feed data in some kind of permanent storage. As a Relational Database programmer, my first choice was to use a Relational Database System to save these XML documents. I decided to use CLOBs (Character Large Objects) with a modern RDBMS to save these documents. Since the RDBMS supported a Java API to insert and retrieve CLOBs, this was a very easy task. As our project evolved, I found that this approach had a major drawback. This was nothing but DIDO (Document In, Document Out). Retrieving partial documents or nodes from a DOM tree was not possible. 
A tool which saved the XML documents, performed database-like queries on nodes, and retrieved partial or full documents would have been very useful. This is when NXDs came into the picture. If I had to do this project all over again, I would definitely use an NXD. If you need simple DIDO functionality, you might want to use an RDBMS to save your documents, but for extended functionality such as Query and Update you should consider an NXD. Sometimes people try to save XML documents into Normalized Relational Database tables by mapping the document nodes into Relational format. This is not always easy. It is relatively easy to build an XML document from RDBMS tables, but not to store them, because XML documents are hierarchical and almost free-form..." Also available in PDF format. See: "XML and Databases."
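The DIDO drawback the author describes can be shown concretely: when XML is stored as a CLOB, the database can only return whole documents, and any node-level query means parsing the full text client-side. The sketch below uses sqlite3 and ElementTree as stand-ins for the RDBMS and the client parser; the table and feed data are invented for illustration.

```python
# Illustration of the Document-In-Document-Out drawback: XML stored as a
# CLOB in an RDBMS (sqlite3 here) comes back only as whole documents, so
# retrieving one node means fetching and parsing everything client-side.
import sqlite3
import xml.etree.ElementTree as ET

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE feeds (id INTEGER PRIMARY KEY, doc TEXT)")
db.execute(
    "INSERT INTO feeds (doc) VALUES (?)",
    ("<feed><quote symbol='XYZ'>42.5</quote>"
     "<quote symbol='ABC'>17.0</quote></feed>",),
)

# Document out: the database can only hand back the entire CLOB...
(doc,) = db.execute("SELECT doc FROM feeds WHERE id = 1").fetchone()

# ...so partial retrieval (a single <quote> node) happens outside the
# RDBMS, in application code. A native XML database would instead let a
# path query like /feed/quote[@symbol='XYZ'] run inside the store.
root = ET.fromstring(doc)
xyz = root.find("quote[@symbol='XYZ']").text
```

With thousands of large feed documents, shipping and parsing each whole CLOB to answer a node-level question is exactly the cost an NXD's server-side query engine avoids.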
[September 25, 2002] "Canonical XML Encoding Rules (cXER) for Secure Messages. An ASN.1 Schema for Secure XML Markup." By Phil Griffin (Griffin Consulting). Presentation prepared for the RSA Conference 2002 Europe, October 7 - 10, 2002, Le Palais des Congrès de Paris, Paris, France. 19 pages. The ZIP package contains PPT format with additional notes. "ASN.1 is a schema for encoded values: Types describe general structure of abstract values; Each builtin type defines a class, a set of distinct values; Constraints restrict a class and the validity of values; Encoding rules define how abstract values are transferred... Encoded ASN.1 values are binary or text:  Binary and XML Canonical Forms (Distinguished Encoding Rules => DER, Canonical XML Encoding Rules => cXER)  Each DER encoding maps to a cXER value; The Canonical XML Encoding Rules (cXER) are defined in: ISO/IEC 8825-4 -- ITU-T X.693 ASN.1 XML Encoding Rules (XER). The same ASN.1 value is cXER encoded in one and only one way as a single long string containing no 'white-space' characters outside of data... ASN.1 XML Benefits:  A single schema for all values, Binary and text encodings are all based on ASN.1 types (Eliminates multiple schema mappings and Provides an efficient schema for XML values)  ASN.1 <=> XML communications (ASN.1 applications can send and receive XML values, Efficient transfer, simple signature processing)..." See: (1) OASIS XML Common Biometric Format (XCBF) TC; (2) the related paper "X9.84:2002: Biometric Information Management and Security." [ZIP source with included PPT, from Griffin Consulting]
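The defining property quoted above -- the same abstract value is encoded "in one and only one way," with no whitespace outside of data -- is the general idea of a canonical form. The sketch below illustrates that idea only; it is NOT the ISO/IEC 8825-4 / ITU-T X.693 cXER rules, just a toy canonicalizer that sorts attributes and strips layout whitespace so two serializations of one value collapse to one string.

```python
# Toy canonical-form illustration (NOT actual cXER): serialize an
# element tree deterministically -- attributes sorted, no whitespace
# outside character data -- so one abstract value yields one string.
import xml.etree.ElementTree as ET

def canonical(elem) -> str:
    attrs = "".join(f' {k}="{v}"' for k, v in sorted(elem.attrib.items()))
    children = "".join(canonical(c) for c in elem)
    text = (elem.text or "").strip()  # drop layout whitespace
    return f"<{elem.tag}{attrs}>{text}{children}</{elem.tag}>"

# Two different serializations of the same abstract value...
a = ET.fromstring('<sig alg="rsa" hash="sha1"><v>MA==</v></sig>')
b = ET.fromstring('<sig  hash="sha1"   alg="rsa">\n  <v>MA==</v>\n</sig>')
# ...canonicalize to one and only one string.
```

That determinism is what makes canonical encodings suitable for signatures: signer and verifier compute the digest over byte-identical input regardless of how the value was originally written out.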
[September 24, 2002] "RSA Solves Your Company's Identity Crisis." By Steve Gillmor. In InfoWorld (September 24, 2002). ['As a founding member of the Liberty Alliance and a driver for the SAML (Security Assertion Markup Language) standard, RSA Security is actively involved in finding ways to secure business transactions. CEO Art Coviello met with InfoWorld Test Center Director Steve Gillmor and InfoWorld editors to talk about the strategic importance of security standards for Web services and explain the differences between identity management and authentication.'] "The big strategic thing that is happening right now is the definition of standards around Web services and around security standards for Web services. We're integrally involved in that through our association as a founding member of the Liberty Alliance and as one of the drivers for the SAML standard. The Liberty Alliance is basically designed to create a standard for federated identities so that your identity can be passed to multiple Web sites and be recognized. Creating that standard would allow RSA to create a security solution that makes that identity a trusted identity. It's a very important development, not only for commerce on the Internet but [also as] an opportunity for us to provide a secure solution for these identities... Identity management is more than just the provisioning and creation of an identity. Where RSA adds tremendous value ... is adding the trust, the verification that you are who you say you are. We do that with a combination of technologies: time-synchronous tokens, digital certificates, and the ability to manage biometric information if that's one of the ways you use to establish [identity]. It's obviously a heck of a lot more than just creating an identity. It's also more than creating a digital certificate. It's managing those identities, protecting those identities, and making sure that people can trust that that identity is really you...
One of the things that we're working on is the SAML, which is a standard for passing on these identity credentials [and] privileges. Not only proving that you are who you say you are, but [verifying] what you get to do, what your authorizations are. For instance, as a purchasing person, you have the ability to sign off on [certain items]. These assertions can be passed along using this standard and understood by other applications that comply with the standard. RSA would be providing the material that would go into the SAML assertion. Not only the trusted identity but also, with our Web access management product RSA ClearTrust, we have an authorization engine that defines what rights and privileges you have and signs you on to multiple applications across the Web..." See: "Security Assertion Markup Language (SAML)."
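Coviello's description -- an assertion carrying both the trusted identity and the authorizations attached to it -- can be sketched in miniature. The element names below are illustrative only, not the actual SAML 1.0 schema, and the issuer and privilege values are invented:

```python
# A much-simplified sketch of the idea behind a SAML assertion: one XML
# statement carrying both who you are (authentication) and what you may
# do (authorization). Element names are hypothetical, not the SAML schema.
import xml.etree.ElementTree as ET

def build_assertion(subject, issuer, privileges):
    a = ET.Element("Assertion", {"Issuer": issuer})
    ET.SubElement(a, "Subject").text = subject       # who you are
    attrs = ET.SubElement(a, "AttributeStatement")   # what you may do
    for p in privileges:
        ET.SubElement(attrs, "Attribute", {"Name": "privilege"}).text = p
    return ET.tostring(a, encoding="unicode")

xml_text = build_assertion("alice@example.com", "ExampleIdP",
                           ["sign-purchase-orders"])
```

A relying application that understands the shared vocabulary can then grant the purchasing privilege without re-authenticating the user -- the single-sign-on scenario the interview describes.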
[September 24, 2002] "The Liberty Alliance Gets New Members, and a President." By Sebastian Rupley. In PC Magazine (September 24, 2002). "The Liberty Alliance -- a consortium of business and technology companies seeking to implement federated standards for authenticating online identities -- has announced 26 new member companies and the appointment of a new president. Michael Barrett, vice president of Internet strategy at American Express, one of the leading partner companies in the Liberty Alliance, will head the organization. The news comes on the heels of Sun Microsystems' delivering the first software layer for implementing Liberty Alliance applications and two weeks after an announcement that a slew of new companies had joined the alliance. Now that the Liberty Alliance has released its 1.0 specification for Liberty-enabled Web services, and Sun has produced a software layer to allow companies to build and test applications, one of the remaining questions is who will oversee standards... the Liberty specification from July was created by the founders of the alliance -- 17 companies, including Sun and AOL -- but that the founders were not officially designated as overseers of Liberty standards. The announcement of Barrett's new role as alliance president did not include any information about new policies for overseeing Liberty standards, although Barrett's role will clearly be to oversee the disparate solutions that various companies may deliver... The other top administrators of the Liberty Alliance include vice president Ian Johnson, who is senior director of strategic technologies at Vodafone, and secretary Bill Smith, who is director of Liberty Alliance technology for Sun Microsystems. Among the 26 new companies joining the alliance are Discover Card, Merck, and Wells Fargo..." See: (1) the announcement: "Liberty Alliance Project Announces New Management Board President and New Members. 
Michael Barrett, American Express to Help Guide Progress of Growing Consortium - Now More than 120 Companies Strong."; (2) LA complete list of members; (3) Sun Offers Developers Interoperability Prototype for Liberty; (4) "Liberty Alliance Specifications for Federated Network Identification and Authorization."
[September 24, 2002] "IBM Releases WebSphere Studio 5.0." By Stacy Cowley. In InfoWorld (September 24, 2002). "Version 5.0 of IBM's WebSphere Studio software for developing Web applications is scheduled for release Tuesday, featuring new tools for working with legacy applications written in Cobol and PL1 and support for the latest batch of open-source standards and software. WebSphere Studio 5.0 is the first piece of IBM's WebSphere 5.0 line to launch, and it will be followed soon by other products including WebSphere Application Server 5.0, a version of which is built into the new WebSphere Studio. The software comes in two versions. WebSphere Studio Application Developer is for building, testing, debugging and deploying Java and J2EE applications. The more advanced IBM WebSphere Studio Enterprise Developer includes additional tools for working with legacy applications... Version 5.0 brings to WebSphere Studio new features for coordinating the workflow of multiple back-end applications. It also adds support for J2EE, Version 7.2 of both Red Hat Linux and SuSE Linux, Version 2.0 of the IBM-backed Eclipse open-source IDE (integrated development environment) and several new Web services standards. The key advantage of the new WebSphere Studio is its openness, said WebSphere Director of Marketing Scott Hebner. More than 175 plug-ins from vendors including Rational Software, Interwoven, and Macromedia are now compatible with Eclipse, giving developers significant customization choices and the freedom to focus on building applications rather than integrating tools, he said..." See the announcement: "First In Industry, IBM Delivers Single, Cross-Enterprise Development Environment For Web Services. IBM WebSphere Studio Version 5 Relies on Latest Open Technologies to Advance Web Services."
[September 24, 2002] "Analysis: IBM's Tool Strategy -- How WebSphere Stacks Up." By Richard Karpinski. In InternetWeek (September 24, 2002). "IBM is placing a major emphasis on development tools as part of its next-generation IT strategy. It's a bit of a departure -- tools have never been a big business -- but one that many other leading software players are aping as well. Here's how IBM's tool strategy stacks up with some key rivals... IBM vs. Microsoft: This is the key battle of course, and one IBM is throwing all its weight behind. Microsoft's strength is its ubiquity, which starts with Windows, obviously, but also extends to its development tools, from its C++ tools on through the popular Visual Basic and Web-age features like ASP and now .Net. IBM's major weapon here is Eclipse, its open-source development tools framework project, which it hopes can give it the developer seats and third-party market that will help it catch up with Microsoft's Visual Studio.Net. Can support for Linux, Eclipse, Apache, and other open-source projects -- backed with its deep expertise in the legacy world of mainframes and message queuing -- help IBM topple Microsoft? The verdict is out... IBM vs. Sun: Now [IBM and Sun] are pursuing similar tools strategies. They both have development tools, open-source tools frameworks (IBM's Eclipse to Sun's NetBeans) and the full suite of middleware server software. Sun has been much later to the game in supporting Web services (Microsoft's early backing of SOAP arguably threw Sun for a loop) and is just now catching up with what it calls 'LAMP' -- Linux, Apache, MySQL, and PHP. But Sun is fully backing those technologies now. For these two competitors, the thing to watch is who can better play the standards and open-source game..."
[September 24, 2002] "The Java Architecture for XML Binding (JAXB)." Public Draft. Version 0.7. Status: Pre-FCS. September 12, 2002. 188 pages. Edited by Joseph Fialli and Sekhar Vajjhala. The JAXB Public Draft and API Documentation are available for download. The Java Architecture for XML Binding (JAXB) "provides an API and tools that automate the mapping between XML documents and Java objects. JAXB makes XML easy to use by compiling an XML schema into one or more Java technology classes. The combination of the schema derived classes and the binding framework enable one to perform the following operations on an XML document: (1) unmarshal XML content into a Java representation; (2) access, update and validate the Java representation against schema constraints; (3) marshal the Java representation of the XML content into XML content. JAXB gives Java developers an efficient and standard way of mapping between XML and Java code. Java developers using JAXB are more productive because they can write less code themselves and do not have to be experts in XML. JAXB makes it easier for developers to extend their applications with XML and Web Services technologies. The public version of the specification is available with the following enhancements over the previous released early access version V0.21: Support for a subset of W3C XML Schema and XML Namespaces; More flexible unmarshalling and marshalling functionality; Validation process enhancements..." From the JAXB V0.7 Introduction: "The primary components of the XML data-binding facility described in this specification are the binding compiler, the binding framework, and the binding language. (1) The binding compiler transforms, or binds, a source schema to a set of content classes in the Java programming language. As used in this specification, the term schema includes the W3C XML Schema as defined in the XML Schema 1.0 Recommendation [XSD Part 1 and Part 2]. 
(2) The binding runtime framework provides the interfaces for the functionality of unmarshalling, marshalling, and validation for content classes. (3) The binding language is an XML-based language that describes the binding of a source schema to a Java representation. The binding declarations written in this language specify the details of the package, interfaces and classes derived from a particular source schema..." As of 2002-09, 'Java Architecture for XML Binding (JAXB)' is in Public Draft Review through The Java Community Process. A pre-release version of the Reference Implementation and User's Guide is expected by early Q4 CY2002. Note: The Java Web Services Developer Pack (Java WSDP) is also available for download; it has been tested on Solaris 2.8, Solaris 2.9, Windows 2000, Windows XP, and RedHat Linux 7.2. Java WSDP contains: Java API for XML Messaging (JAXM), Java API for XML Processing (JAXP), Java API for XML Registries (JAXR), Java API for XML-based RPC (JAX-RPC), SOAP with Attachments API for Java (SAAJ), JavaServer Pages Standard Tag Library (JSTL), Java WSDP Registry Server, Web Application Deployment Tool, Ant Build Tool, and Apache Tomcat 4.1.2 container. Java WSDP is "an integrated toolset that in conjunction with the Java platform allows Java developers to build, test and deploy XML applications, Web services, and Web applications. The Java WSDP provides Java standard implementations of existing key Web services standards including WSDL, SOAP, ebXML, and UDDI as well as important Java standard implementations for Web application development such as JavaServer Pages (JSP) technology pages and the JSP Standard Tag Library. These Java standards allow developers to send and receive SOAP messages, browse and retrieve information in UDDI and ebXML registries, and quickly build and deploy Web applications based on the latest JSP standards." [cache JAXB v07]
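JAXB itself is a Java API, but the three operations the draft enumerates -- unmarshal, validate, marshal -- can be shown in a language-neutral way. The sketch below is a toy Python analogue, not JAXB; the PurchaseOrder structure and its constraint are invented for illustration:

```python
# Toy analogue of the three JAXB operations: unmarshal XML into an object,
# validate it against a (toy) schema constraint, and marshal it back to XML.
# The PurchaseOrder type is hypothetical, not from any JAXB schema.
import xml.etree.ElementTree as ET

class PurchaseOrder:
    def __init__(self, item, quantity):
        self.item, self.quantity = item, quantity

    @classmethod
    def unmarshal(cls, xml_text):          # (1) XML -> object representation
        root = ET.fromstring(xml_text)
        return cls(root.findtext("item"), int(root.findtext("quantity")))

    def validate(self):                    # (2) check a schema-style constraint
        if self.quantity < 1:
            raise ValueError("quantity must be positive")

    def marshal(self):                     # (3) object -> XML content
        root = ET.Element("purchaseOrder")
        ET.SubElement(root, "item").text = self.item
        ET.SubElement(root, "quantity").text = str(self.quantity)
        return ET.tostring(root, encoding="unicode")

po = PurchaseOrder.unmarshal(
    "<purchaseOrder><item>widget</item><quantity>3</quantity></purchaseOrder>")
po.validate()
round_trip = po.marshal()
```

In real JAXB the class above would be generated by the binding compiler from an XML Schema, rather than written by hand -- which is the productivity claim the specification makes.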
[September 24, 2002] "Liberty Alliance Plans Interoperability with Passport." By John Blau. In InfoWorld (September 24, 2002). "The Liberty Alliance Project, which is developing Web technology to facilitate single-sign-on authentication, plans to support interoperability between its system and Microsoft's rival Passport system. 'We see opportunities for interoperability between Passport and Liberty Alliance; this option could be part of a 1.1 specification, possibly later this year,' said Paul Madsen, product manager at Entrust in Addison, Texas, on Tuesday at The Burton Group's Catalyst conference in Munich, Germany. Entrust is a member of the Liberty Alliance consortium, which is made up of vendors, service providers, and enterprise users. The Liberty Alliance, which unveiled its first public release in July, is promoting a standard specification that will allow users to travel the Internet and access applications over networks using a single sign-on. Users logging into a Web site supporting the specification, for instance, could then visit other password-protected Web sites that support the technology without having to sign in again... The Passport single sign-on service allows users to access password-protected sites that support the Microsoft technology without having to re-enter their user name and password each time. According to Madsen, the Liberty Alliance is working on a 2.0 version, which will further simplify the sign-on process. The group hopes to release that version in the first quarter of 2003, he said. At least one industry observer views the Liberty Alliance largely as a U.S.-dominated group, although its members include companies from Japan, the U.K., Germany, and Finland..." See: "Liberty Alliance Specifications for Federated Network Identification and Authorization."
[September 23, 2002] "Comparison of DAML-S and BPEL4WS." By Sheila McIlraith and Dan Mandell (DAML Research Project, Knowledge Systems Lab, Stanford University). Initial Draft. September 05, 2002 (or later). With 9 references. "... DAML-S and BPEL4WS have broad and somewhat complementary objectives. DAML-S's ServiceProfile complements and extends ideas in UDDI. DAML-S's ServiceGrounding connects the application level content description of a service to communication level descriptions in WSDL. It is the ServiceModel (aka ProcessModel) in DAML-S that relates most closely to the business process model in BPEL4WS. Both provide a mechanism for describing a business process model. With so many candidate formalisms for describing a business process (e.g., XLANG, WSFL, BPMI, BPML, now BPEL4WS, etc.) DAML-S was designed to be agnostic with respect to a process model formalism. Rather, it aimed to provide the vocabulary and agreed upon (necessary) properties for a process model. In so doing, we hoped to remain compatible with what we anticipated would eventually be an agreed upon standard for process modeling. If such a standard did not come to pass, DAML-S would provide a way of talking about different process models, in keeping with the approach and spirit of NIST's Process Specification Language (PSL). Here are some of the features that distinguish/differentiate DAML-S from BPEL4WS..." See: (1) DAML-based Web Service Ontology (DAML-S); (2) "Business Process Execution Language for Web Services (BPEL4WS)." [Posting to W3C list]
[September 23, 2002] "DAML-S: Web Service Description for the Semantic Web." Presented at The First International Semantic Web Conference (ISWC), June, 2002. 15 pages (with 26 references). By 'The DAML Services Coalition' (alphabetically: Anupriya Ankolenkar, Mark Burstein, Jerry R. Hobbs, Ora Lassila, David L. Martin, Drew McDermott, Sheila A. McIlraith, Srini Narayanan, Massimo Paolucci, Terry R. Payne and Katia Sycara). "In this paper we present DAML-S, a DAML+OIL ontology for describing the properties and capabilities of Web Services. Web Services -- Web-accessible programs and devices -- are garnering a great deal of interest from industry, and standards are emerging for low-level descriptions of Web Services. DAML-S complements this effort by providing Web Service descriptions at the application layer, describing what a service can do, and not just how it does it. In this paper we describe three aspects of our ontology: the service profile, the process model, and the service grounding. The paper focuses on the grounding, which connects our ontology with low-level XML-based descriptions of Web Services. 
We are developing a DAML+OIL ontology for Web Services, called DAML-S, with the objective of making Web Services computer-interpretable and hence enabling the following tasks: discovery, i.e., locating Web Services (typically through a registry service) that provide a particular service and that adhere to specified constraints; invocation or activation and execution of an identified service by an agent or other service; interoperation, i.e., breaking down interoperability barriers through semantics, and the automatic insertion of message parameter translations between clients and services; composition of new services through automatic selection, composition and interoperation of existing services; verification of service properties; and execution monitoring, i.e., tracking the execution of complex or composite tasks performed by a service or a set of services, thus identifying failure cases, or providing explanations of different execution traces. To make use of a Web Service, a software agent needs a computer-interpretable description of the service, and the means by which it is accessed. This paper describes a collaborative effort by BBN Technologies, Carnegie Mellon University, Nokia, Stanford University, SRI International, and Yale University, to define the DAML-S Web Services ontology. We [...] discuss the important problem of the grounding, i.e., how to translate what is being sent in a message to or from a service into how it is to be sent. In particular, we present the linking of DAML-S to the Web Services Description Language (WSDL). DAML-S complements WSDL, by providing an abstract or application level description lacking in WSDL..." See (1) DAML-based Web Service Ontology (DAML-S); (2) "DARPA Agent Mark Up Language (DAML)." [cache]
[September 23, 2002] "Arbortext Helps US Airways' Mechanics Find Information Faster." By Linda Rosencrance. In ComputerWorld (September 23, 2002). "US Airways Group Inc., for example, must create, publish and maintain more than 13 different publications that support the maintenance of its entire fleet of 300 aircraft, which handle more than 1,400 flights every day. Finding the right information used to take US Airways' mechanics as long as 15 minutes using a combination of microfilm and paper documents, says Stanley Davis, manager of electronic publications at the Arlington, Va.-based airline. To shorten those delays, the airline turned to publishing software from Ann Arbor, Mich.-based Arbortext Inc. to overhaul its documentation production process and convert its manuals from print to an electronic format, says Davis. Using Epic Editor, Arbortext's XML-based authoring and editing software, US Airways created a central data store of content components that can easily be searched, managed, tracked and improved, Davis explains. Changes that occur in one manual are now easily reflected in other manuals... Based on XML and related standards, Arbortext's Epic Editor is used to create information in a media-independent format that can be stored on a file system or in a content management system. Epic creates a single XML-based source of information and automates the publishing to all types of media, including the Web, print, CD-ROM and wireless devices. The software constrains the author so the structure and content of the information conform to the XML data model the system designer specifies. Because of these constraints, the information is consistent across all platforms. Out of the box, Epic Editor works with file systems and Documentum Inc.'s Documentum 4i. An optional adapter integrates Epic Editor with Oracle Corp.'s 8iFS repository. Also available are integrations with repositories from Empolis GmbH and Xyvision Enterprise Solutions Inc. 
Epic Editor is compatible with Windows 95, 98, 2000 and NT 4.0, and Solaris 7 and 8... 'The ultimate success of your implementation depends on your data model, so that's the one area where you must not skimp,' says P.G. Bartlett, a spokesman for Arbortext. 'Whatever investment you make in outside experience will be returned many times in lower implementation costs and greater rewards'..."
[September 23, 2002] "OAGIS 8: Practical Integration Meets XML Schema." By Mark Feblowitz. In XML Journal Volume 3, Issue 9 (September 2002), pages 22-28. "When asked to join the OAGIS modernization project (OAGIS 8), I leapt at the chance. Here were two renowned specifications just waiting to get acquainted: (1) Open Applications Group's OAGIS, the solid, proven Integration Specification, an early XML application (1998) with a lot of miles on it, and (2) W3C's XML Schema Recommendation, a sophisticated new metamodeling chassis, ready to be road tested. The outcome exceeded expectations: OAGIS's established and widely used family of horizontally focused, DTD-encoded interchange messages was updated, eliminating major usability issues inherent in a DTD-based implementation of this scale. At the same time, Schema enabled the horizontal OAGIS specification to employ an extensible architecture, OAGIS 8 Overlay Extensibility, to address the specific needs of vertical industries. Vertical market fit versus broad horizontal reusability has long been a sticking point between competing integration standards - Overlay Extensibility enabled an approach that leverages the strengths of both models. Developing a specification that promotes both horizontal coverage and vertical specialization wouldn't have been possible without Schema's advanced capabilities. However, the experience also opened our eyes to some challenging aspects of developing a usable, practical XML Schema solution to a complex real-world problem. This article describes the Open Applications Group Integration Specification, discusses the enhancements made possible by rearchitecting to Schema, and explores the challenging aspects of applying current Schema technology. Despite those challenges, OAGI architects were able to work with Schema to craft a new OAGIS that sustains proven strengths and adds desirable and innovative features, most notably, Overlay Extensibility..." 
See the six associated figures:  Purchase Order Interchange, Integration Scenario;  E-business Integration Scenario;  EAI Integration Scenario;  Purchase Order BOD schema definition;  Extending PurchaseOrderHeader;  Table 1: Characteristics of OAGIS 8 BODs. General references: see "Open Applications Group."
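The overlay idea the article credits to XML Schema -- a vertical industry deriving from a horizontal type rather than forking it -- rests on `xs:extension`, which DTDs cannot express. The fragment below is a hypothetical sketch of that pattern; the type and element names are invented for illustration and are not the OAGIS 8 schemas:

```xml
<!-- Hypothetical sketch of overlay-style extension: a vertical industry
     derives from a horizontal, OAGIS-like type via xs:extension.
     All names are invented for illustration. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- horizontal (cross-industry) type -->
  <xs:complexType name="PurchaseOrderHeader">
    <xs:sequence>
      <xs:element name="DocumentId" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
  <!-- vertical overlay: inherits the horizontal content, adds its own -->
  <xs:complexType name="AerospacePurchaseOrderHeader">
    <xs:complexContent>
      <xs:extension base="PurchaseOrderHeader">
        <xs:sequence>
          <xs:element name="ContractNumber" type="xs:string"/>
        </xs:sequence>
      </xs:extension>
    </xs:complexContent>
  </xs:complexType>
</xs:schema>
```

Tools that understand the horizontal type can still process the vertical document's shared core, which is how the approach "leverages the strengths of both models."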
[September 23, 2002] "CDuce: A White Paper." By Véronique Benzaken, Giuseppe Castagna, and Alain Frisch. From the CDuce project. Version 0.27. June 2002 (or later). 15 pages (with 12 references). "In this paper, we present the functional language CDuce, discuss some design issues, and show its adequacy for working with XML documents. Peculiar features of CDuce are a powerful pattern matching, first class functions, overloaded functions, a very rich type system (arrows, sequences, pairs, records, intersections, unions, differences), precise type inference and a natural interpretation of types as sets of values. We also discuss how to add constructs for programming XML queries in a declarative (and, thus, optimizable) way and finally sketch a dispatch algorithm to demonstrate how static type information can be used in efficient compilation schemas." Note: "The starting point of the work on semantic subtyping and CDuce was XDuce. CDuce extends XDuce with first-class and late-bound overloaded functions, and generalizes the boolean connectives (explicit union, intersection, negation types)... CDuce ('seduce') is a new typed functional language with innovative features: (1) a rich type algebra, with recursive types and arbitrary boolean combinations (union, intersection, complement); (2) a natural notion of subtyping, which allows one to use a value of a given type where a value of a supertype is expected; (3) overloaded functions with late binding (dynamic dispatch); (4) a powerful pattern matching operation, with dynamic dispatch on types and recursive patterns. Although CDuce is a general programming language, it features several characteristics that make it well adapted to XML document manipulation (transformation, extraction of information, creation of documents). 
Our point of view and our guideline for the design of CDuce is that a programming language for XML should take XML types into account to allow: [i] static verifications of properties for the applications (for instance, ensuring that a transformation produces a document of the expected type); [ii] good integration in a general purpose typed programming language; [iii] static optimizations of applications and storage (knowing the type of a document seems important to store and extract information efficiently)..." See bibliographic entry following. [source Postscript]
[September 23, 2002] "Semantic Subtyping: Theoretical Foundations for the CDuce Type System." By Alain Frisch, Giuseppe Castagna, and Véronique Benzaken. Paper presented at LICS 2002 (IEEE Symposium on Logic in Computer Science), July 22-25, 2002, Copenhagen, Denmark. 10 pages. "Usually subtyping relations are defined either syntactically by a formal system or semantically by an interpretation of types in an untyped denotational model. In this work we show how to define a subtyping relation semantically, for a language whose operational semantics is driven by types; we consider a rich type algebra, with product, arrow, recursive, intersection, union and complement types. Our approach is to 'bootstrap' the subtyping relation through a notion of set-theoretic model of the type algebra. The advantages of the semantic approach are manifold. Foremost we get 'for free' many properties (e.g., the transitivity of subtyping) that, with axiomatized subtyping, would require tedious and error prone proofs. Equally important is that the semantic approach allows one to derive complete algorithms for the subtyping relation or the propagation of types through patterns. As the subtyping relation has a natural (inasmuch as semantic) interpretation, the type system can give informative error messages when static type-checking fails. Last but not least the approach has an immediate impact in the definition and the implementation of languages manipulating XML documents, as this was our original motivation." See CDuce referenced in the preceding bibliographic entry. [source Postscript]
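The paper's central move -- interpret each type as a set of values, so that subtyping is set inclusion and the boolean connectives come "for free" -- can be demonstrated in miniature. This sketch uses a finite universe of values; a real system such as CDuce works symbolically over infinite sets, so this is an illustration of the idea only:

```python
# Toy illustration of semantic subtyping: a type denotes a set of values,
# subtyping is set inclusion, and union/intersection/complement are the
# ordinary set operations. The finite universe is a simplification.
UNIVERSE = frozenset(range(10))

def interp(t):
    """Map a type expression (nested tuples) to its set of values."""
    kind = t[0]
    if kind == "base":  return frozenset(t[1])
    if kind == "or":    return interp(t[1]) | interp(t[2])   # union type
    if kind == "and":   return interp(t[1]) & interp(t[2])   # intersection
    if kind == "not":   return UNIVERSE - interp(t[1])       # complement
    raise ValueError(kind)

def subtype(s, t):
    # s <: t  iff  the values of s are included in the values of t.
    # Properties like transitivity hold "for free" from set theory.
    return interp(s) <= interp(t)

even = ("base", {0, 2, 4, 6, 8})
small = ("base", {0, 1, 2, 3})
print(subtype(("and", even, small), even))  # True: an intersection is a subtype
```

The payoff the authors claim follows directly: no axiomatized subtyping rules to prove sound, and complete algorithms fall out of the set-theoretic model.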
[September 23, 2002] "SVG - The Future of Web Rendering?" By Bill Trippe. In The Gilbane Report Volume 10, Number 6 (July/August 2002), pages 1-11. "... there is still a critical gap between graphically-rich content that is difficult if not impossible to integrate with other enterprise data, and XML data that can be integrated with virtually any enterprise application but usually ends up rendered as graphically-challenged HTML. This month we publish an excerpt from SVG for Designers: Using Scalable Vector Graphics for Next-Generation Web Sites, a new book by Bill Trippe and Kate Binder, published by McGraw Hill... in our article Bill looks at why Scalable Vector Graphics (SVG) has the potential to fill this gap. Whether you think SVG will take over the Web or not, it is difficult not to be intrigued with what SVG can do... SVG holds this promise for a few simple reasons. First, vector graphics are a necessary complement to the bitmap graphic formats such as JPG and GIF that now dominate the Web. Vector graphics mean better quality and greater precision for many types of illustrations and artwork, especially technical illustrations and other kinds of artwork created by computer-aided design programs. Second, SVG brings an industry standard approach to creating vector graphics on the Web. Up until now, there have been only proprietary methods for creating vector graphics. Third, and, perhaps most importantly, the SVG standard provides more than vector graphics handling, as it allows for the incorporation of vector graphics, bitmap graphics, text, style sheets, and scripts. Users of SVG can not only create stand-alone illustrations; they can also create and exercise greater control over the design of entire Web pages. They can also flexibly incorporate other text, other graphics, data, and scripts. And finally, because SVG files are text files, they can be easily generated and manipulated, allowing for applications like data-driven graphics and personalization. 
SVG gives the graphic designer, using virtually the current standard industry toolbox, the power to create live Web images. Unlike bitmap images, SVG images can dynamically update as the designer, the Web developer, or the end user enter or change data and otherwise interact with the Web image. SVG files can be scripted to automatically take this information and modify the existing graphic or regenerate the graphic. Importantly, SVG often provides this flexibility using less disk space and memory, providing faster upload and download times, and putting more creative control into the graphic designer's hands than current static bitmap technology... Ultimately, SVG will prove itself in how it is used in real-world applications. The compelling thing is that SVG is an entirely open, entirely textual format. It can be easily generated from a database for applications such as dynamic page serving. It can also be modified on the fly for such applications as personalization... Adobe is building SVG support into their products, as is Corel. Perhaps more significantly, database vendors and content management companies are adding SVG support, as they understand well how central SVG is likely to become to Web development and publishing..." Book reference: SVG For Designers: Using Scalable Vector Graphics in Next-Generation Web Sites, by Bill Trippe and Kate Binder (ISBN: 0072225297; August 2002; see Amazon.com). General references: "W3C Scalable Vector Graphics (SVG)."
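The "easily generated from a database" point is concrete: because SVG is plain text, a data-driven graphic needs nothing more than string handling. A minimal sketch (the chart layout and dimensions are invented for the example):

```python
# Minimal sketch of data-driven SVG: generate a bar chart as plain text,
# as one might from a database query for a dynamically served page.
def bar_chart_svg(values, bar_width=20, scale=10):
    bars = []
    for i, v in enumerate(values):
        h = v * scale  # bar height in user units
        bars.append(
            f'<rect x="{i * (bar_width + 5)}" y="{100 - h}" '
            f'width="{bar_width}" height="{h}" fill="navy"/>'
        )
    return ('<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">'
            + "".join(bars) + "</svg>")

svg = bar_chart_svg([3, 7, 5])
```

Regenerating the graphic when the underlying data changes is just re-running the function -- the "live Web images" scenario the excerpt describes, with no bitmap rendering step.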
[September 23, 2002] "Father of Java Has His Eye on 'Jackpot'." By Darryl K. Taft and James Gosling. In eWEEK (September 23, 2002). ['In the seven years since its introduction, Java has made rapid advancements into the enterprise. In the burgeoning field of Web services, Sun Microsystems Inc.'s Java-based technology is in tight competition with Microsoft Corp.'s .Net initiative for the heart and soul of developers. James Gosling, a Sun vice president and fellow, and father of Java, spoke with eWEEK Senior Writer Darryl K. Taft about Web services, the future of Java, open source software and its impact on the software business, and Sun's success in the tools business.'] "... I've been mostly working on analysis and transformation tools based on having a complete semantic model of the application... It's one where I have the application as a database and then can do analysis on it, though it's not exactly a database, it's more data structure. I keep sort of an annotated parse tree, which means that instead of the way that most tools look at programs as a series of lines and text, with punctuation and letters, left to right on a page, top to bottom, I actually have all of the different entities all related so I can do things like find all the places a particular variable was used, trivially. If I want to rename a class, that's a trivial operation. You're including accounting for changing that name every place that it's occurred. It's easy for me to do things like if you've got any such variable and you make it private, then go and find all the places where that variable is used, turn them into invocations of accessor methods, and if the accessor methods don't exist then construct them as well. That's not a difficult thing to do in my experiment test bed. It's still kind of early. It's still a research labs project. The project is called Jackpot." [...] 
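Gosling's point -- treat the program as an annotated parse tree rather than lines of text, so "find all the places a particular variable was used" becomes trivial -- is easy to demonstrate with Python's standard `ast` module as a stand-in for Jackpot's (unpublished) data structure:

```python
# Sketch of the parse-tree view of a program: once the source is a tree
# of related entities rather than text, finding every use of a variable
# is a simple traversal. (A stand-in for the idea, not Jackpot itself.)
import ast

def find_uses(source, name):
    """Return the line numbers of every occurrence of `name` in `source`."""
    tree = ast.parse(source)
    return [node.lineno for node in ast.walk(tree)
            if isinstance(node, ast.Name) and node.id == name]

code = "x = 1\ny = x + 2\nprint(x, y)\n"
print(find_uses(code, "x"))  # [1, 2, 3]
```

Rename-refactoring is the same traversal with a rewrite at each hit, which is why Gosling calls it "a trivial operation" in this representation.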
On Web Services: "The way that I look at it, people have been building Web services under different names for 20 or 30 years, so there are a lot of protocols that people have used to build communications between components and across networks, and they've been in pretty wide use for quite a long time. The distinctions that SOAP brings to the party over CORBA and XML -- some of them are interesting, but I wouldn't call any of them life-changing. One of the descriptions of XML is that it is HTML for a silicon-based life form. Namely it's this observation that we've been building distributed systems for years out of using CORBA and RMI [Remote Method Invocation] and all of that. But as a matter of common practice, people haven't been doing a lot of interconnection between disjointed organizations that also are distributed. And we've had these facilities available on the Web for years through HTML -- things like auctioning and booking reservations. What people have done when they want to write an application that finds things on an auction service is they've essentially screen-scraped the HTML and that's worked perfectly well. It's awkward. One way of looking at XML is as a way of cleaning up that process. So I don't see how that changes everything. There's certainly a mindset and a business proposition people have to answer about -- do you want to have what services you are offering on the Web available to other organizations available through something other than the Web? So do you want to have the ability to have applications that other people write talk to your airline reservations or auctioning or online payments or whatever? And I think the real hard issues there are the business ones..."
[September 23, 2002] "Transform Data Into Web Applications With Cocoon. Use Java to Implement Logic in Cocoon." By Lajos Moczar. In Java World (September 20, 2002). ['If you've read about Apache Cocoon or started dabbling with it, you might be wondering about the best approach for implementing your custom Java logic into it. This article will get you coding with XSPs (Extensible Server Pages) and actions. Lajos Moczar walks you through a few examples of each, including database and email examples, and wraps up with some design principles that'll help you figure out how and when to use these components.'] Cocoon is officially defined as an XML publishing engine, and while technically correct, the description does not do the product justice. The best way to understand Cocoon is to view it as a framework for generating, transforming, processing, and outputting data. Think of Cocoon as a machine that receives data from a wide variety of sources, applies various processing to it, and spits data out in the desired format... We could also define Cocoon as a data flow machine. That is, when you use Cocoon, you define the data paths or flows that produce the pages that make up your Web application. Even a simple hello-world.html page has a data flow defined to serve the page. In Cocoon, you can implement logic in four main ways: (1) using components called transformers, which do exactly what their name implies -- they transform incoming data according to the rules they are given (the classic example is the TraxTransformer, which you can see in action in the pipeline above); (2) in the pipeline, using various components that help choose the correct processing path based on various request/session/URI settings; (3) in the pipeline, using stock or custom Java-processing units called actions; (4) using input files that mix Java and content -- these are called Extensible Server Pages (XSPs). This article covers this list's last two approaches: XSPs and actions. 
If you develop with Cocoon to any extent, you'll end up using them and probably liking them. Plus, you'll be happy to know that in both cases, you are essentially programming within a servlet context. More correctly, both components (in fact, all Cocoon components) have access to request, response, session, and context objects. Much of the logic you implement interacts with these objects in some way... the overview of XSPs and actions gives you some idea of the possibilities that Cocoon offers... the components provide rather well defined areas in which to implement your own logic. The built-in logicsheets and actions that come with Cocoon can help you do things that you would have to code from scratch in another framework. The advantage is that you can get your Cocoon-based application up and running much faster. And when you couple this with all the other powerful components that Cocoon offers -- like matchers, selectors, generators, transformers, serializers, and readers -- you can build yourself quite powerful Web applications..." [alt URL]
[September 23, 2002] "Netegrity Ships SAML-Ready Security Platform." By Richard Karpinski. In InternetWeek (September 23, 2002). "Netegrity said Monday it has begun shipping a new version of its access- and identity-management platform, its first supporting Security Assertion Markup Language (SAML) standards. The SAML 1.0 standard, which was highlighted in an interoperability test earlier this summer, is expected to be approved by the OASIS group by the end of this month. It provides a mechanism for enterprises to trade so-called 'authentication' tokens between different systems, which will enable applications such as single sign-on. Netegrity's SiteMinder 5.5 enables federated identity and security via support for SAML, Microsoft's .Net Passport platform, and the Kerberos authentication technology. SiteMinder 5.5 enables a proprietary SiteMinder identity to be mapped to a SAML-based identity. SiteMinder creates a standards-based SAML assertion for that individual and makes it available for other sites to consume. As for Microsoft Passport, SiteMinder lets users log in just one time with their Passport ID and then log into all Passport-enabled Web sites and enterprise apps that support Passport authentication. Kerberos support, meanwhile, lets users log in to their Microsoft desktops using Windows credentials and get single sign-on to the SiteMinder-protected environment..." See the announcement: "Netegrity Ships SiteMinder 5.5 with SAML, Passport, and Kerberos Support. Enables Enterprises to Extend their Security Infrastructure with Federated Identity Services."
[September 23, 2002] "Practical Matters Rule IBM's Tactics With Competitors." By Brier Dudley. In Seattle Times (September 23, 2002). Excerpt from an edited transcript of a recent interview The Seattle Times had with Steve Mills, IBM Software Division (Senior Vice President and Group Executive). [Q:] How are XML (extensible markup language) and Web services progressing and when will they be broadly adopted? [Mills:] "We've been looking to XML for quite a few years as a way to improve the interfaces between applications, between business processes. The track to leverage XML in Web services is a very long-term, multidecade process. It's a very long process when one considers what are in fact millions of applications that have to be in some way interfaced through this Web services technology and linked together. It's an important change in the industry but a very long change." [Q:] What about the current activity around Web services? [Mills:] "It is certainly rolling now in the sense that we're seeing early adopter activity. Hundreds of businesses around the world are beginning to work with the technology and look for ways to apply it. But you've got to think about it not in terms of single use of the technology but widespread deployment. It's still rising on the hype curve but it will reach its peak here from a business standpoint probably over the next year. I would suspect as we get into 2004 everybody will accept this as a commonplace thing. It's like repaving all of the roads in Seattle -- it's probably an important thing to do but you're not going to do it all at once." [Q:] Does IBM plan to collect royalties on the Web-services standards it's developing with Microsoft? [Mills:] "There's been no collection on any of the Web-services standards or proposed standards that have come out so far, nor are there any plans to collect royalties on any of those things. 
I think that the reality is that what has been coming out here, we've been pushing into OASIS and W3C, which are the accepted standards bodies, which have all come out royalty-free..."
[September 21, 2002] "Web Services Security Core Specification." Edited by Phillip Hallam-Baker (VeriSign), Chris Kaler (Microsoft), Ronald Monzillo (Sun), and Anthony Nadalin (IBM). Working Draft 01. 20-September-2002. 46 pages. Document identifier: WSS-Core-01. Posted 2002-09-21 by Anthony Nadalin to the WSS TC as "WSS-Core Draft." Comments from external reviewers may be sent to the 'wss-comment' mailing list. From the document abstract: "This specification describes enhancements to SOAP messaging to provide quality of protection through message integrity, message confidentiality, and single message authentication. These mechanisms can be used to accommodate a wide variety of security models and encryption technologies. This specification also provides a general-purpose mechanism for associating security tokens with messages. No specific type of security token is required; it is designed to be extensible (e.g., support multiple security token formats). For example, a client might provide proof of identity and proof that they have a particular business certification. Additionally, this specification describes how to encode binary security tokens, a framework for XML-based tokens, and describes how to include opaque encrypted keys. It also includes extensibility mechanisms that can be used to further describe the characteristics of the tokens that are included with a message." From the section 1 Introduction: "...the focus of this specification is to describe a single-message security language that provides for message security that may assume an established session, security context and/or policy agreement... The Web services security language must support a wide variety of security models. 
The following list identifies the key driving requirements for this specification: Multiple security token formats; Multiple trust domains; Multiple signature formats; Multiple encryption technologies; End-to-end message-level security and not just transport-level security." Note in the posting: "Here is the initial draft of WSS-Core for review. Below is a high level overview of items that were done to achieve the initial draft: (1) merged WS-Security and WS-Security Addendum; (2) merged the framework from WS-Security XML Token into WSS-Core; (3) removed specifics on Kerberos and X509 tokens from the Binary Security Token section in WS-Security." See: (1) Web Services Security TC (WSS) website; (2) "Web Services Security Specification (WS-Security)."
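As a sketch of what a message secured this way looks like on the wire, the following builds a SOAP envelope whose header carries a wsse:Security element containing a UsernameToken, the simplest of the multiple token formats the specification is designed to carry. The wsse namespace URI and the Ping body element are assumptions for illustration (early drafts used schemas.xmlsoap.org URIs); confirm the exact URI against the draft you are implementing.

```python
import xml.etree.ElementTree as ET

# The SOAP 1.1 envelope namespace is standard; the wsse URI below is an
# ASSUMPTION modeled on the early WS-Security drafts.
SOAP = "http://schemas.xmlsoap.org/soap/envelope/"
WSSE = "http://schemas.xmlsoap.org/ws/2002/04/secext"

envelope = ET.Element(f"{{{SOAP}}}Envelope")
header = ET.SubElement(envelope, f"{{{SOAP}}}Header")

# Security tokens ride in a wsse:Security header block, separate from
# the application payload in the Body.
security = ET.SubElement(header, f"{{{WSSE}}}Security")
token = ET.SubElement(security, f"{{{WSSE}}}UsernameToken")
ET.SubElement(token, f"{{{WSSE}}}Username").text = "zoe"

body = ET.SubElement(envelope, f"{{{SOAP}}}Body")
ET.SubElement(body, "Ping")  # hypothetical application payload

print(ET.tostring(envelope, encoding="unicode"))
```

In a real deployment the Security header would also carry XML Signature and XML Encryption structures for the integrity and confidentiality mechanisms the abstract describes.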
[September 21, 2002] "Web-Services Security Quality of Protection." By Tim Moses (Entrust), with review and comment by Zahid Ahmed. Strawman document posted 2002-09-20 to the WSS QoP TC discussion list. September 17, 2002. 11 pages. "Problem statement: WSS allows Web-service providers to implement a security policy. The term security policy is used in this context to mean: 'a statement of the requirements for protecting arguments in a WS API, including: (1) how actors are to be authenticated, using what mechanisms and with what parameter value ranges; (2) which XML elements are to be encrypted, for what individual recipients, recipient roles or keys, using what algorithms and key sizes; (3) which XML elements are to be integrity protected, using what mechanisms, with which algorithms and key sizes, and (4) what additional qualifications the service consumer must demonstrate in order to successfully access the API'. This is a relatively restrictive use of the term 'security policy'. A more comprehensive definition addresses such requirements as: (1) privacy (retention period, intended usage, further disclosure); (2) trust (initial parameters of the signature validation procedure, including those keys or authorities that are trusted directly, policy identifiers, maximum trust path length), and (3) non-repudiation (requirements for notarization and time-stamping)..." Note the context: "The attached strawman contains some technical details of the proposed approach, including pidgin XSD. But, these details are provided only for the benefit of those of us who have difficulty dealing with 'the abstract'. What I am trying to say is: 'let's debate concepts, not minutiae'. The latter is the job of a TC..." See the news item of 2002-09-21: "Discussion Forum for Web Services Security Quality of Protection." [source .DOC]
[September 20, 2002] "Euro-XML." By Rick Jelliffe. From XML.com (September 18, 2002). ['Rick Jelliffe gives us the lowdown on how to cope with the euro character in XML documents, covering the ramifications for XML and HTML, and the special cases for Windows character set encodings.'] "The new European currency, the euro, has a symbol € in Unicode 3.2 as character U+20AC. How can we use it with XML? There are three ways of representing the euro in XML: (1) numeric character references, (2) character entity references, and (3) direct characters. This article examines these and other more arcane but important ramifications... You can enter the euro character as data in element content or attribute values using numeric character references in any XML document: hexadecimal &#x20AC; or decimal &#8364;. This character is allowed both in XML 1.0 and the proposed XML 1.1. Numeric character references will not be recognized in CDATA marked sections and cannot be used in XML names, such as element names, attribute names and IDs... A friendlier alternative is to use the standard entity &euro;. This can be used in the same places that you can use numeric character references. An entity must have a declaration. The most failsafe approach is to supply your own: make sure your document has a DOCTYPE declaration with the following declaration as part of its internal subset... Third, if you are using UTF-8 or UTF-16, then you can enter the character directly. Your GUI may provide a mechanism, and editors aimed at publishing will also provide some mechanism..."
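All three representations parse to the same character; a sketch using Python's standard library (the <p> wrapper element is invented for the example):

```python
import xml.etree.ElementTree as ET

EURO = "\u20ac"  # U+20AC EURO SIGN

# (1) Numeric character references, hexadecimal and decimal.
assert ET.fromstring("<p>&#x20AC;</p>").text == EURO
assert ET.fromstring("<p>&#8364;</p>").text == EURO

# (2) The &euro; entity, declared in the internal subset of the
# DOCTYPE so that even a non-validating parser can expand it.
doc = '<!DOCTYPE p [ <!ENTITY euro "&#x20AC;"> ]><p>&euro;</p>'
assert ET.fromstring(doc).text == EURO

# (3) The character entered directly in a UTF-8 encoded document.
direct = '<?xml version="1.0" encoding="UTF-8"?><p>\u20ac</p>'.encode("utf-8")
assert ET.fromstring(direct).text == EURO

print("all three representations yield", EURO)
```

The internal-subset declaration in (2) is exactly the failsafe approach the article recommends: the document carries its own definition of &euro; rather than relying on an external DTD.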
[September 20, 2002] "The State of the Python-XML Art." By Uche Ogbuji. From XML.com (September 18, 2002). ['... the first installment of a new column on XML.com, Python & XML. Well-known XML Pythoneer Uche Ogbuji will be writing monthly on using Python and XML together. To kick things off, Uche starts by providing a high-level view of the various XML processing tools available for Python.'] "Welcome to the first Python-XML column. Every month I'll offer tips and techniques for XML processing in Python and close coverage of particular packages. Python is an excellent language for XML processing, and there is a wealth of tools and resources to help the intrepid developer be productive. In what follows I'll survey these tools and resources, giving a sense of how broadly Python supports XML technologies and giving you a head start on the more in-depth topics to follow. One of the best things about Python-XML is the active community of practitioners and contributors. From introductory texts to references to mailing lists, these resources will provide answers to most questions worth asking about Python and XML... The following table lists the currently available Python-XML software that I judge to be significant... The user interface specifications in question are in XML, but this is not really enough to call it an XML processing tool for Python. However, you can certainly use the tools I mention for convenient manipulation of pyglade specifications. The general rules of thumb for including software are, first, whether it implements a technology or set of technologies strongly associated with XML; and, second, whether it does so in a way that is useful for any arbitrary XML file I may want to process. I've organized the table according to the areas of XML technology. This will give newcomers to Python a quick look at the coverage of XML technologies in Python and should serve as a quick guide to where to go to address any particular XML processing need. 
I rate the vitality of each listed project as either 'weak', 'steady' or 'strong' according to the recent visible activity on each project: mailing list traffic, releases, articles, other projects that use it, etc... In the next article I'll tour the many facilities added to core Python by the PyXML package..." See: (1) Python & XML, by Christopher A. Jones and Fred L. Drake, Jr; (2) [older] references in "XML and Python."
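The two broad API families the survey covers, tree-style (DOM) and event-style (SAX), are both available in the Python standard library. A minimal sketch with an invented document, doing the same extraction both ways:

```python
import xml.sax
from io import StringIO
from xml.dom import minidom

DOC = "<catalog><title>Python and XML</title><title>XML Processing</title></catalog>"

# Tree-style (DOM) processing: load the whole document, then navigate it.
dom = minidom.parseString(DOC)
dom_titles = [node.firstChild.data for node in dom.getElementsByTagName("title")]

# Event-style (SAX) processing: react to parse events as the document
# streams by, never holding the whole tree in memory.
class TitleHandler(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.titles = []
        self._chunks = None  # accumulates text while inside a <title>
    def startElement(self, name, attrs):
        if name == "title":
            self._chunks = []
    def characters(self, content):
        if self._chunks is not None:
            self._chunks.append(content)
    def endElement(self, name):
        if name == "title":
            self.titles.append("".join(self._chunks))
            self._chunks = None

handler = TitleHandler()
xml.sax.parse(StringIO(DOC), handler)

assert dom_titles == handler.titles == ["Python and XML", "XML Processing"]
```

The trade-off the survey's table organizes around is visible even here: DOM is shorter to write, SAX scales to documents too large to hold in memory.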
[September 20, 2002] "XML Canonicalization." By Bilal Siddiqui. From XML.com (September 18, 2002). ['Canonicalization is used to determine the logical equivalence of two XML documents and forms a vital part of the W3C's XML digital signature technology. In part one, Bilal Siddiqui explains why canonicalization is important, and how the W3C specification says it should be done.'] "This two part series discusses the W3C Recommendations Canonical XML and Exclusive XML Canonicalization. In this first part I describe the process of XML canonicalization, that is, of finding the simplified form of an XML document, as defined by the Canonical XML specification. We'll start by illustrating when and why we would need to canonicalize an XML document... XML defines a format for structuring data so that information can be meaningfully interchanged between communicating parties. The rules for XML authoring are flexible in the sense that the same document structure and the same piece of information can be represented by different XML documents. Consider [these two] Listings which are logically equivalent, i.e., they follow the same document structure (the same XML Schema) and are meant to convey the same information. In spite of being logically equivalent, the XML files of Listings 1 and 2 do not contain the same sequence of characters (or sequence of bytes or octets). In this case the character and octet sequences of the two XML files differ due to the order of attributes appearing in the room element. There can be other reasons for having different octet streams for logically equivalent XML documents. The purpose of finding the canonical (or simplified) form of an XML document is to determine logical equivalence between XML documents. W3C has defined canonicalization rules such that the canonical form of two XML documents will be the same if they are logically equivalent. 
Whenever we are required to determine whether two XML documents are logically equivalent, we will canonicalize each of them and compare the canonical forms octet-by-octet. If the two canonical forms contain the same sequence of octets, we will conclude that the two XML files are logically equivalent. Before we start exploring the technical details of the canonicalization process, let's see when and why you would need to test logical equivalence between XML documents. ... In the second article in this series, we will take this concept further and discuss more advanced concepts such as dealing with parts of XML documents, CDATA sections, comments and processing instructions. We will also discuss tricky situations where the canonicalization process renders XML documents useless for their intended function..."
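Python 3.8+ ships a Canonical XML implementation, xml.etree.ElementTree.canonicalize, which makes the attribute-order scenario easy to reproduce (the <reservation>/<room> document below is invented to mirror the article's room element):

```python
import xml.etree.ElementTree as ET

# Two logically equivalent documents: same structure, same data, but the
# attributes of <room> appear in a different order, so the raw octet
# sequences differ.
listing1 = '<reservation><room smoking="no" type="double"/></reservation>'
listing2 = '<reservation><room type="double" smoking="no"/></reservation>'
assert listing1 != listing2

# Canonical XML sorts attributes (among other normalizations, such as
# expanding empty-element tags), so the canonical forms are
# octet-for-octet identical.
c14n1 = ET.canonicalize(listing1)
c14n2 = ET.canonicalize(listing2)
assert c14n1 == c14n2
print(c14n1)
```

Comparing c14n1 and c14n2 byte-for-byte is precisely the equivalence test the article describes, and it is why XML digital signatures sign the canonical form rather than the raw bytes.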
[September 20, 2002] "Brother, Can You Spare a DIME? Direct Internet Message Encapsulation." By Rich Salz. From XML.com (September 18, 2002). ['"XML Endpoints" is Rich Salz's Web services column. This month Rich continues his look at how to send binary attachments with SOAP, covering the DIME protocol and the WS-Attachments specification.'] "Last month we talked about the reasons for associating attachments with SOAP messages, and we looked at the initial SOAP Messages with Attachments (SwA) note. This month we look at Direct Internet Message Encapsulation (DIME), a binary message format; and we'll also look briefly at the WS-Attachments specification, which provides a generic framework for SOAP attachments, and a definition for a DIME-based instantiation of that framework. Both specifications are being developed through the IETF and are available as internet drafts. Interestingly, they are not being developed through an official IETF Working Group but are being published as the work of individuals. Microsoft created DIME and was the original promoter for its adoption; the current DIME draft has IBM and Microsoft authors, as does the WS-Attachments document... If we read between the lines, we can conclude that DIME is a part of the global XML Architecture effort led by IBM and Microsoft... Like SwA or MIME, DIME is a message format -- it is not a network protocol like HTTP. The biggest surprise to most XML developers will be that DIME is a binary format: fields have fixed size, as opposed to being terminated by a newline character, numbers often have fixed sizes, bytes are written in a specified order -- the common 'network byte order': most significant byte first -- and so on... A DIME message consists of a series of one or more records joined together to make a single application message. Records aren't numbered: their order is implied by their position in the data stream. 
Thus DIME requires a stream protocol like TCP and is unsuitable for UDP, a datagram protocol. Many multimedia protocols are 'lossy', and for them a datagram approach makes sense. Multiple units of application data or payloads can be encapsulated in a single DIME message stream. Payloads that don't fit into a single DIME record packet can be divided into chunks and sent in pieces..." See: "Direct Internet Message Encapsulation (DIME)."
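The fixed-size, big-endian framing Salz describes can be sketched with Python's struct module. This record layout is invented for illustration and is NOT the actual DIME header (real DIME packs version, MB/ME/CF flags, and type fields into bit fields), but it shows the same ingredients: network byte order, fixed-size length fields, and 4-byte padding.

```python
import struct

def pack_record(record_type: bytes, payload: bytes) -> bytes:
    """Pack one simplified record: big-endian ('>', network byte order)
    fixed-size length fields, then the type and payload, each padded to
    a 4-byte boundary as DIME records are."""
    def pad(b: bytes) -> bytes:
        return b + b"\x00" * (-len(b) % 4)
    header = struct.pack(">HI", len(record_type), len(payload))
    return header + pad(record_type) + pad(payload)

def unpack_record(data: bytes):
    """Recover (record_type, payload) from one packed record."""
    type_len, payload_len = struct.unpack(">HI", data[:6])
    offset = 6
    record_type = data[offset:offset + type_len]
    offset += type_len + (-type_len % 4)  # skip padding
    payload = data[offset:offset + payload_len]
    return record_type, payload

# Round-trip one record; a full message would be a sequence of these,
# their order implied by position in the stream.
msg = pack_record(b"text/xml", b"<ping/>")
assert unpack_record(msg) == (b"text/xml", b"<ping/>")
```

Because record boundaries are computed from the length fields rather than from delimiters, a receiver must see the bytes in order and without loss, which is exactly why DIME wants a stream transport like TCP.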
[September 20, 2002] "Sun Releases Liberty Alliance Tool." By Wylie Wong. In CNET News.com (September 18, 2002). ['Sun Microsystems on Wednesday unveiled a new open-source software development tool designed to help businesses start testing and building online identification systems using the new Liberty Alliance standard.'] "Sun executives say the Java-based tool is the first open-source implementation of the Liberty Alliance standard and a prototype of Sun's forthcoming server software, called Identity Server 6.0, which will manage computer users' access and authentication. The Liberty Alliance Project is an effort to establish a universal online authentication system that serves as an alternative to Microsoft's proprietary Passport online ID system. Both efforts have the same goal: let people surf the Web without having to constantly re-enter passwords, names and other data at different sites. About half a dozen companies -- including Sun, Novell, RSA Security and Entrust -- have announced they are planning to support Liberty in their software products. The Liberty Alliance Project, which has the support of big-name companies such as United Airlines, American Express, MasterCard and General Motors, released the first version of the standard in July. Sun rounded up Liberty Alliance partners, but Sun Chief Executive Scott McNealy recently told CNET News.com that the idea for the project came from Visa International. Sun executives, who plan to release Identity Server 6.0 at year's end, said they developed the new open-source tool because their customers wanted to test out the Liberty standard. Businesses that use the open-source tool can transfer their software code and re-use it with Sun's Identity Server, said Andrew Eliopoulos, Sun's marketing director for the product..." See references in the news item "Sun Offers Developers Interoperability Prototype for Liberty."
[September 20, 2002] "The Next Wave of Integration Platforms." By Don Roedner (IONA). In EAI Journal Volume 4, Number 9 (September 2002), pages 16-19. ['Just as the Web changed information access, Web services will change integration and how we look at applications. The next generation of integration brokers will leverage Web standards to create an information superhighway for enterprise applications.'] "... the next generation of integration brokers must provide more than the ability to leverage Web service standards. They must offer a platform that provides a set of services required for integration. Like the last generation of integration brokers, these platforms will include adapters to existing applications and technologies, data transformation tools, and BPM services... While traditional integration brokers are good at exchanging data between systems, a Web services approach is more of a service or action approach. It's about invoking an action or performing a behavior, rather than just moving data around. This approach is of a higher-level granularity, since a typical Web service is a composition of many applications. Using this approach, information assets are exposed as services, orchestrated into higher-level functions, and exposed as complete services. Other internal applications, or external trading partners, will access these services. The integration broker must supply services that allow the orchestration of smaller components into larger ones. It must also supply the platform necessary for non-engineering staff to expose, deploy, and manage those services. To support trading partner collaboration on the Internet, these integration brokers must support popular business-to-business (B2B) protocols such as RosettaNet and e-business eXtensible Markup Language (ebXML). 
The next generation of integration brokers must take all the benefits of the current generation, add support for Web services integration standards, and provide a platform to support enterprise and inter-enterprise deployments. The result will be an infrastructure that lets IT departments use the methodologies and tools most appropriate to the task at hand. They'll be able to: (1) Use process, data, and functional integration methods when appropriate (2) Create services from existing mainframe applications and consume them using Microsoft tools (3) Perform integration initiatives incrementally without sacrificing a sound IT architecture. The next generation of integration brokers will leverage Web standards to create an information superhighway for enterprise applications..."
[September 20, 2002] "Electronic Business Registries." By JP Morgenthal. In EAI Journal Volume 4, Number 9 (September 2002), pages 13-14. ['This article is a brief history of leading registry standards and where they are headed. Unfortunately, as no single group is looking at the bigger picture, there is an integration issue.'] "... recent developments in the area of Enterprise Application Integration (EAI) and Business-to-Business (B2B) electronic commerce have forced a need for electronic business registries that computers use to locate other computers and services. Here, since the computer is the consumer, the underlying implementation and associated access interface are a high design priority. Indeed, there's been a growth in the number of these registries -- each with differing information models and application programming interfaces -- lowering the opportunity for these registries to be used simultaneously and to be interoperable. Here's a brief overview of the current leading registry standards that satisfy the needs of machine-based consumption... [X.500, Domain Name Service (DNS), Electronic Business XML (ebXML), Universal Description, Discovery, and Integration (UDDI), Microsoft .NET Passport, Verisign (Certificate authorities: Public Key Infrastructure 'PKI' standards and emerging W3C standards)...] Each of the registries we've covered was created for a specific purpose. The purpose of X.500 was to support X.400 messaging standards. UDDI was designed to support the needs of a growing Web Services community. ebXML was designed to be the next-generation Electronic Data Interchange (EDI). Unfortunately, no one group is looking at the bigger picture -- that the information stored in each of these registries is highly applicable to other applications beyond those originally intended. This leaves the need to support multiple interfaces against the same data set or to define interoperability across all standards. 
We have an integration issue again because these standards were all defined with a narrow focus. Because of the large body of installed registries, each with its own information models and service interfaces, it has become increasingly complex to think about consolidation. Vendors have put significant effort and dollars into their implementations and applications, which they wrote and deployed against a particular standard..."
[September 19, 2002] "Sun Offers Building Blocks for Liberty Alliance Applications." By Sebastian Rupley. In PC Magazine (September 18, 2002). "As part of its SunNetwork event in Silicon Valley this week, Sun Microsystems announced one of the first interoperability prototype technologies based on the Liberty Alliance 1.0 specification. Sun Microsystems is a founding member of the Liberty Alliance Project, which advocates open standards for protecting identities online and has over 100 member companies. Sun officials foresee alliance participants using the new software layer to test their own solutions for authenticating external network users and for cross-departmental network authentication of customers. 'A primary thing that people miss about the Liberty Alliance is that it's not just for establishing cross-company online trust and identifying users who come from the outside,' said Andy Eliopoulos, Sun's senior director of product marketing for ID management solutions, in an interview with PC Magazine. 'Many companies have customer information stored in 'silos' within various departments. One department has no idea whether to authenticate a customer who may be known to another department. With this new tool we can help developers build solutions to do all these kinds of Liberty-enabled authentication'... Sun would like to have its own products become part of the infrastructure for new Web services, so the prototype offering partly serves self-interest. The company anticipates Liberty Alliance partners using the prototype in conjunction with the Sun ONE Identity Server 6.0, which is the primary hub around which all of Sun's identity management efforts revolve. The Identity Server facilitates policy-driven access management to network services, identity administration, and directory services. But the new technology from Sun is designed to let developers pick outside products as well, says Eliopoulos. 
'Our Identity Server is based on open standards, including SAML 1.0 (Security Assertion Markup Language) and SOAP (Simple Object Access Protocol),' he says. 'The new prototype we have works with open standards, and that can happen with Identity Server or with other products.' As specifications and prototypes for technology based on Liberty Alliance technology emerge, one big question that remains is exactly who oversees common practices and standards for the technology. That issue could become important as large Liberty partners, such as American Express and Lufthansa, seek to establish cross-company trust online. 'As of right now, there is no official overseeing body for Liberty Alliance standards, but that could change,' says Eliopoulos. 'The Liberty specification, from July, was created by the founders of the Liberty Alliance, and that's 17 companies, but it remains to be seen who will oversee standards'..." See: (1) Interoperability Prototype for Liberty; (2) "Liberty Alliance Specifications for Federated Network Identification and Authorization."
[September 19, 2002] "Next-Gen Web Services: CTOs Grapple With Security, Data Transformation." By Heather Harreld. In InfoWorld (September 19, 2002). "As enterprises begin to gravitate toward Web services to build collaborative applications by creating reusable business processes linked via XML to form the next generation of enterprise applications, they must first grapple with issues such as security and data transformation, according to a Thursday panel of CTOs at the InfoWorld Next-Generation Web Services II: The Applications conference. As opposed to the traditional definition of collaboration encompassing end users, Web services collaboration is focused squarely on tying together business processes to allow companies to nimbly create new applications as quickly as they are conceived, to slash the time and cost of application development. Possible collaborative applications could be most effectively leveraged in situations such as a marketplace, where monumental connectivity problems have stifled many companies from effectively linking with their partners. Or they could be used to track order fulfillment rates within a supply chain in real-time, or by banks to authorize credit card usage, according to the panelists... Textron is using SOAP in a messaging broker at the integration layer while eyeing ebXML (e-business XML) as the next version of EDI. Although not necessarily focusing on end-users collaborating, these collaborative applications will evolve to be a set of deterministic interfaces that people know they can count on, while simultaneously offering up a piece of code that can be reused, said Todd Johnson, president of JamCracker..." The Next-Generation Web Services II conference runs through September 20 in Santa Clara, California.
[September 19, 2002] "Federated Identity Face-Off." Interview by Stuart J. Johnston and Dan Ruby. In XML Magazine (October/November 2002). ['In a virtual debate, IBM's Bob Sutor and Sun's Simon Phipps pull no punches on competing federated identity management strategies.' See also the reference list for Federated Identity Resources] Excerpts: Now that WS-Security is in OASIS and has gained Sun's support, how do you see it evolving? [Phipps:] "We're very happy to have WS-Security brought into OASIS so that it can make a contribution to the ongoing security discussion. Microsoft and IBM have done an about-face and have introduced it as a royalty-free specification into OASIS, and we felt that that needed to be encouraged and embraced. So what we've been embracing, fundamentally, is the contribution more than the proposal. There are a couple of principles that we are committed to. One of them is working with open communities to evolve new marketplaces that are level playing fields. Sun is committed to doing that through OASIS, and the fact that the SAML work was already going on at OASIS means that we're deeply committed to SAML." [Sutor]: Now that it is in a standards body, we would expect WS-Security to morph -- you never know how much. We would certainly expect inputs from other people. Look at how we brought SOAP 1.1 to the W3C two years ago, and fairly soon now we expect a SOAP 1.2 to come out, and it's not exactly the same. A lot of what they're doing with SOAP 1.2 has to do with how you bind it to underlying transports. That wasn't fully expressed in SOAP 1.1. But this was an open effort and whatever the industry decided SOAP 1.2 needed, that was done..." Where are there points of overlap or competition between WS-Security and Liberty Alliance? [Sutor]: "Liberty is not a Web services spec. WS-Security defines a set of SOAP extensions, using SOAP just as it is designed. 
For example, it provides a convention for how to put a SAML assertion in a SOAP header, which will support Liberty as it defines its protocols, conventions, and workflow. From a Web-services perspective, WS-Security is the more fundamental spec..." [Phipps:] "Liberty is a movement by the users of network identity to specify what they need for vendors to provide for them. On the other hand, the ideas in WS-Security up until now were those developed in-house by a monopolist, and only then crossed the border into being a contribution to an open discussion. Liberty's proposed mechanism is comparable with the whole WS-Security road map that has been articulated. WS-Security is just a small piece of an architecture. It says nothing about how to federate..." See: (1) "Web Services Security Specification (WS-Security)"; (2) "Liberty Alliance Specifications for Federated Network Identification and Authorization."
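The convention Sutor describes -- carrying a SAML assertion inside a SOAP header -- can be sketched in a few lines. This is a hand-rolled illustration, not a conformant implementation: the namespace URIs and the AssertionID attribute are draft-era placeholders, and a real WS-Security header would also carry signatures or security tokens.

```python
import xml.etree.ElementTree as ET

# Illustrative namespace URIs only; the 2002 draft-era WS-Security and
# SAML URIs changed across revisions, so treat these as placeholders.
SOAP = "http://schemas.xmlsoap.org/soap/envelope/"
WSSE = "http://schemas.xmlsoap.org/ws/2002/07/secext"
SAML = "urn:oasis:names:tc:SAML:1.0:assertion"

def soap_with_saml_assertion(assertion_id, body_text):
    """Build a SOAP envelope whose wsse:Security header carries a
    (placeholder) SAML assertion, per the convention WS-Security defines."""
    env = ET.Element(ET.QName(SOAP, "Envelope"))
    header = ET.SubElement(env, ET.QName(SOAP, "Header"))
    security = ET.SubElement(header, ET.QName(WSSE, "Security"))
    assertion = ET.SubElement(security, ET.QName(SAML, "Assertion"))
    assertion.set("AssertionID", assertion_id)
    body = ET.SubElement(env, ET.QName(SOAP, "Body"))
    body.text = body_text
    return ET.tostring(env, encoding="unicode")

envelope = soap_with_saml_assertion("a-001", "order payload")
print(envelope)
```

The point of the convention is simply that the security metadata travels in the standard SOAP Header while the business payload stays untouched in the Body, which is why Sutor calls WS-Security "using SOAP just as it is designed."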
[September 19, 2002] "Age of Discovery. [End Tag.]" By Adam Bosworth (Vice President, Engineering, BEA Systems Inc). In XML Magazine (October/November 2002). "In 1998 I delivered a paper at an XML meeting in Paris calling for an open architecture that would allow all applications on all machines on all platforms to interact. Central to that vision is the concept of information discovery, in which a data model is exposed and a query language is used to query across the data model. In short, databases. But that kind of information discovery is still far from the reality on today's Web. The vision described in the paper has been held back because of delays in the rollout of trust mechanisms, a querying standard, and database support for XML documents. We are slowly getting these problems solved, and within 12 to 18 months, we should see solid versions of relational databases that expose their information as 'virtual XML documents' that can be queried using XML Query. But one more challenge remains. The problem is that XML doesn't distinguish between real objects, properties, and relations. For example, a city is a real object found by a name and a region in which it is situated. A purchase order is a real object found by a PO. A person is a real object typically described by some ID. The person's name or date of birth is a property, however. Line items and ship instructions are properties of the purchase order. Population and elevation are properties of the city. As for relations, a person may 'live' in a city. The purchase order may have been 'ordered by' a person. And so on. But in XML, a <PERSON> tag looks just like an <AGE> tag looks just like a <CITY> tag. R.V. Guha, who has been thinking about these problems for longer than XML has been around, has a suggestion that I think will be helpful, a simple usable convention for identifying real objects and referencing them in any XML document..."
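Bosworth's complaint can be made concrete. In the sketch below, the first document shows that nothing in plain XML marks PERSON as a real object rather than a property; the second applies one possible convention -- hypothetical oid/oref attributes, not Guha's actual proposal, which the article does not spell out -- so that software can tell objects from properties.

```python
import xml.etree.ElementTree as ET

# In plain XML nothing distinguishes a real object from its properties:
ambiguous = ET.fromstring("<PERSON><NAME>Ada</NAME><AGE>36</AGE></PERSON>")
print([el.tag for el in ambiguous.iter()])  # every tag looks alike

# One possible convention (hypothetical -- not Guha's actual proposal):
# a real object carries an 'oid' attribute, a reference to one carries
# 'oref', and attribute-free elements are plain properties.
annotated = ET.fromstring(
    '<PurchaseOrder oid="po-17">'
    '<LineItem>widget</LineItem>'
    '<OrderedBy oref="person-9"/>'
    '</PurchaseOrder>')

def real_objects(root):
    """Collect the tags of elements marked as real objects."""
    return [el.tag for el in root.iter() if "oid" in el.attrib]

print(real_objects(annotated))  # only PurchaseOrder carries an oid
```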
[September 19, 2002] "Xperanto Brings XML Queries to DB2. IBM Takes A Hybrid Approach to Management of Enterprise Data." By Lee Sherman. In XML Magazine (October/November 2002). "With data scattered amongst relational databases, XML documents, flat files, and e-mail stores, enterprises are looking for a single standard that can present a common view of this data. The action around Web services has so far focused on component interfaces such as Java and .Net, but with its recently unveiled Xperanto initiative IBM is recognizing that there is a whole set of Web services that are not built using interfaces to applications but are instead based on old-fashioned data access. What's needed, say analysts such as the META Group's Dan Scholler, is a high-level interface to those data stores that can query Web services. One answer, at least as far as IBM is concerned, is to allow for both the creation and consumption of Web services using standard SQL rather than wait for companies to come up to speed on XQuery, a proposed language for accessing information in XML documents. 'XQuery does allow you to do this in a pure XML environment, but there isn't a skill set out there and it's not clear what the advantage is over doing it in SQL,' he said. Instead, Xperanto takes a hybrid approach to data management, focusing on providing native XML support with the core relational database engine. Web services appear as functions within the SQL environment. Xperanto includes four major capabilities, said Nelson Mattos, IBM distinguished engineer and director of information integration. The first is the ability to federate data wherever that data is stored -- through the DB2 relational connect and data joiner functionality. Next is data placement (through the DB2 replication product family), the ability to move the data closer to an application once it has been integrated to improve performance and address issues of scalability. 
This capability is followed by the translation layer, which allows the data to be transformed and presented in an XML format using native XML interfaces (through the DB2 XML extender, which has been in DB2 since 1999, and MQ Series middleware). Once that's done, it becomes possible to invoke Web services to access and manipulate data in real time... Xperanto is a broad initiative that allows DB2 to play several different roles in the creation and consumption of Web services. It extends the reach of DB2-based applications to real-time data. It allows DB2 to act as the infrastructure for Universal Description, Discovery, and Integration (UDDI) -- through the alphaWorks product. And finally, it allows it to serve as a repository for XML artifacts, such as style sheets..."
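The "translation layer" idea -- query with ordinary SQL, present the result as XML -- can be approximated with any relational engine. The sketch below uses sqlite3 purely as a stand-in for DB2; Xperanto's actual interfaces (the DB2 XML Extender and related SQL functions) differ.

```python
import sqlite3
import xml.etree.ElementTree as ET

# A toy relational store; sqlite3 stands in for DB2 here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "Acme", 120.0), (2, "Globex", 75.5)])

def orders_as_xml(conn):
    """Run ordinary SQL, then present the rows as an XML document --
    the essence of a relational-to-XML translation layer."""
    root = ET.Element("orders")
    for oid, customer, total in conn.execute(
            "SELECT id, customer, total FROM orders ORDER BY id"):
        order = ET.SubElement(root, "order", id=str(oid))
        ET.SubElement(order, "customer").text = customer
        ET.SubElement(order, "total").text = str(total)
    return ET.tostring(root, encoding="unicode")

print(orders_as_xml(conn))
```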
[September 19, 2002] "The Importance of Metadata: Reification, Categorization and UDDI." By Karsten Januszewski (Microsoft Corporation). From the Microsoft XML Web Services Developer Center (September 2002). ['Look behind the UDDI metadata structure to see how to best employ it within a UDDI registry, both in the UDDI Business Registry (UBR) and in UDDI Services of Microsoft Windows .NET Server; see how to create custom categorization schemes that allow users to solve particular problems in description and discovery.'] "Categorization is arguably the most important feature of Universal Description, Discovery and Integration (UDDI), yet it is the least understood. The ability to attribute metadata to services registered in UDDI, and then run queries based on that metadata is absolutely central to the purpose of UDDI at both design time and run time. This article will explain the thinking behind the UDDI metadata structure and then demonstrate how to best employ that metadata structure within a UDDI registry, both in the UDDI Business Registry (UBR) and in UDDI Services of Windows .NET Server. It explains how to create custom categorization schemes that allow users to solve particular problems in description and discovery... UDDI provides typed metadata through several means: First, three of the four central entities in UDDI (providers, services and tModels) can be adorned with what might be thought of as property bags: collections of typed name/value pairs that describe that given entity. Each of the properties in the bag comes from a known classification system... Adorning UDDI entities with these property bags provides entities with the critical metadata and context that can be used to discover and consume them. The corollary to adorning an entity with properties is the ability to search for that entity based on those properties. The UDDI API was designed to support a complex range of queries based on metadata ascribed to these bags. 
Queries are written that look for properties based on the classification scheme they are associated with. In other words, in writing a query "to find services in the United States", one must provide not only the appropriate value that represents the United States but also the classification scheme from which that value originates. In this way, queries can be written that have contextual intelligence about the properties being searched for. Other features make the UDDI query engine able to handle a range of scenarios. For example, queries can do an exact match of all the properties in a bag or can match just one property in a bag. Or, a query can search across bags contained in both providers and services. The querying capacity in the UDDI API provides a great deal of flexibility in terms of writing focused, precise queries. Through these two parallel facilities -- adorning properties to entities and searching for entities based on well-known properties -- UDDI entities are reified. Below, the article will delve into exactly how to accomplish this... Classification and typed metadata is key to the ability of UDDI to solve the problems of reification of data both in the enterprise and in the public sphere. Well-architected Web service software applications will employ UDDI as an infrastructure, taking advantage of the many possibilities of employing this complex categorization system to different entities for both design-time and run-time usage..." See: "Universal Description, Discovery, and Integration (UDDI)."
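The property-bag model the article describes is easy to sketch as a data structure: each entity carries (classification-scheme, value) pairs, and a query must name the scheme as well as the value. The scheme keys below (uddi:geo, uddi:naics) are invented stand-ins for real UDDI classification schemes, and match_all mirrors the match-all-properties versus match-any behavior mentioned above.

```python
# Each service's 'property bag' is a set of (scheme, value) pairs.
# Scheme identifiers here are hypothetical, not real tModel keys.
services = {
    "svc-po-submit": {("uddi:geo", "US"), ("uddi:naics", "51121")},
    "svc-invoice":   {("uddi:geo", "DE"), ("uddi:naics", "51121")},
}

def find(services, wanted, match_all=True):
    """Return services whose bag holds all (or, with match_all=False,
    at least one) of the wanted (scheme, value) pairs."""
    op = set.issubset if match_all else (lambda w, bag: bool(w & bag))
    return sorted(name for name, bag in services.items()
                  if op(set(wanted), bag))

# A query for 'services in the United States' must supply both the
# scheme and the value, exactly as the article describes.
print(find(services, {("uddi:geo", "US")}))
print(find(services, {("uddi:naics", "51121")}))
```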
[September 18, 2002] "Sun Offers Liberty Development Tool." By James Niccolai. In InfoWorld (September 18, 2002). "Sun Microsystems has released an open-source tool for developers that will allow them to begin testing network identity applications that use the Liberty Alliance specification, the company said Wednesday at the SunNetwork conference here... Launched in July, version 1.0 of the Liberty specification should allow users to sign on once to a Web site or network application, and then visit other sites without having to re-enter their password. Later versions will also store a credit card number, address and other information, making it more convenient to shop and use other services on the Web, proponents say. The specification was developed by the Liberty Alliance Project, a group led by Sun that also includes prominent businesses such as United Airlines, American Express, and General Motors. It was developed as an alternative to Microsoft's Passport, which provides single sign-on access to Web sites that support that technology. Sun pitched the tool, called the Interoperability Prototype for Liberty, as the first open-source implementation of the Liberty Alliance specification based on Java. Applications tested with it will be compatible with Sun's Sun ONE Identity Server 6.0 product, which is in beta now and will be Sun's first commercial product with built-in support for the technology when it is launched later this year..." [Website description: "Interoperability Prototype for Liberty is the first open-source implementation of the Liberty Alliance Version 1.0 specification based on Java technology. IPL is designed to help developers learn how the Liberty Alliance Version 1.0 specification can be implemented. Written for the Java 2 platform, IPL provides the foundation for building Liberty into applications and testing interoperability between Liberty-compliant solutions such as the Sun ONE Identity Server version 6.0.
IPL consists of sample Java source code libraries, implementing the Liberty version 1.0 specification, and is not designed for commercial deployment. IPL is licensed as open source under the Sun Microsystems Open Source License."] See: (1) Interoperability Prototype for Liberty; (2) "Liberty Alliance Specifications for Federated Network Identification and Authorization"; (3) the text of the announcement in "Sun Announces the Sun Interoperability Prototype - Industry's First Identity Prototype for Developers. Offers 'Hands On' Experience to Accelerate Network Identity Application Development and Interoperability. Based on the Liberty Alliance v1.0 Specification."
[September 19, 2002] "IBM Supports WS-Security Spec in Products." By Richard Karpinski. In InternetWeek (September 18, 2002). "IBM on Wednesday detailed plans to begin supporting the recently announced WS-Security specification -- which it helped co-author -- in products including its WebSphere application server and Tivoli management platform. WS-Security provides a framework for securing Web services, from how to apply encryption and authentication technologies to how to ensure the integrity of Web services messages. The spec defines a series of SOAP extensions that add security features to the Web services protocol stack. IBM's move represents one of the first implementations of this important new spec, which just recently was placed on a standards track at the OASIS group... Security concerns are often cited as one of the biggest barriers to Web services adoption. The core Web services protocols, such as SOAP, do not have built-in security mechanisms, so enterprises need a way to secure SOAP messages they'll be sending over open networks like the Internet. Only when security issues are settled will Web services move beyond corporate firewalls. IBM said version 5 of its WebSphere application server will support WS-Security in the fourth quarter; Tivoli Access Manager will add support early next year. In addition to basic Web services security, IBM will add features to enable federated identity management capabilities to its software products, which will make it possible for users to more flexibly consume Web services..." See: "Web Services Security Specification (WS-Security)."
[September 19, 2002] "Understanding XML Namespaces." By Aaron Skonnard (DevelopMentor). First published in MSDN Magazine, July 2001. Updated July 2002. "Namespaces are the source of much confusion in XML, especially for those new to the technology. Most of the questions that I receive from readers, students, and conference attendees are related to namespaces in one way or another. It's actually kind of ironic since the Namespaces in XML Recommendation is one of the shorter XML specifications, coming in at just under 10 pages, excluding appendices. The confusion, however, is related to namespace semantics as opposed to the syntax outlined by the specification. To fully understand XML namespaces, you must know what a namespace is, how namespaces are defined, and how they are used. The rest of this column is dedicated to answering these three questions, both syntactically and abstractly. By the time you finish reading this, you'll understand how namespaces affect the family of XML technologies... A namespace is a set of names in which all names are unique. For example, the names of my children could be thought of as a namespace, as could the names of California corporations, the names of C++ type identifiers, or the names of Internet domains. Any logically related set of names in which each name must be unique is a namespace... A namespace is a set of names in which all names are unique. Namespaces in XML make it possible to give elements and attributes unique names. Although namespaces tend to be the source of much confusion, they're easy to comprehend once you become familiar with how they're defined and used, both syntactically and abstractly..." See references in "Namespaces in XML."
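Skonnard's point is easy to demonstrate: two vocabularies can both define a title element, and the namespace URI keeps the names distinct. The URIs below are invented for the example.

```python
import xml.etree.ElementTree as ET

# Two vocabularies both define a <title> element; namespaces keep them
# distinct. The namespace URIs are made up for this illustration.
doc = ET.fromstring(
    '<doc xmlns:bk="http://example.org/book" '
    'xmlns:p="http://example.org/person">'
    '<bk:title>XML Basics</bk:title>'
    '<p:title>Dr.</p:title>'
    '</doc>')

# ElementTree expands each prefix into a {namespace-URI}local-name
# 'universal name', so the two titles never collide.
book_title = doc.find("{http://example.org/book}title").text
honorific = doc.find("{http://example.org/person}title").text
print(book_title, honorific)
```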
[September 18, 2002] "Model Programs. Business Process Modeling Tools Help Put the Enterprise In Perspective." By Kevin Jonah. In Government Computer News Volume 21, Number 27 (September 09, 2002), pages 52-54. "The main benefits to government of BPM tools are clear. They help agencies make better spending decisions and comply with regulations, and provide a road map for cross-agency collaboration. But the corresponding arrival of BPM religion in the government and a new wave of application technologies has offered another benefit: the opportunity to reuse all that modeling information to devise new automated processes, which reduces software development costs and speeds the response of agencies to e-government requirements... IDEF [a key set of standards for most government data and process modeling] was created in the 1970s by the Air Force's Program for Integrated Computer Aided Manufacturing. It was extended by the National Institute of Standards and Technology with support from the Defense Department's Office of Corporate Information Management and issued as a Federal Information Processing Standard... IDEF now consists of 16 specifications for various types of information modeling. The specification most relevant to business process modeling is the IDEF3 Process Description Capture Method. IDEF3 is a format for capturing information about the relationship between events -- the steps in a process -- and the situations, or states that occur within the process... new standards for describing the underlying information in models have been developed, making it possible to more easily move model data from one type of analysis tool to another, and to quickly generate automated processes with models. The most important of these new modeling languages are the Unified Modeling Language (UML) and Business Process Modeling Language (BPML)... While UML doesn't correspond directly to IDEF3, some modeling tools can bridge the gap. 
Popkin Software's System Architect, for example, can move models from IDEF3 to UML use cases and back. BPML is a different animal from UML and IDEF3. It is a dialect of XML designed for the world of asynchronous distributed systems -- in other words, Web services. The first draft of BPML was made public on March 8 last year. While IDEF3 and UML are used to capture information about processes, BPML is intended to actually drive automated processes, according to the Business Process Modeling Initiative, a consortium of companies that is developing BPML and a related standard, the Business Process Query Language. BPML connects automated processes across applications through Web services and application messaging standards such as the Simple Object Access Protocol, Electronic Business XML, RosettaNet and Microsoft BizTalk. It incorporates data flow, event flow and control of the process, along with providing for business rules, transaction requirements and security roles within a process. While many companies have announced that they will support BPML, few have implemented it. BPML is still something of a work in progress. But major infrastructure companies like IBM Corp., Hewlett-Packard Co. and Sun Microsystems Inc. have thrown their support behind BPML. Middleware and application vendors, and even major corporate customers like General Electric and insurer Swiss Re, also are on board, so BPML eventually will have a major impact... Microsoft, BEA Systems Inc. and IBM recently announced the Business Process Execution Language for Web Services (BPEL4WS), which is closely related to BPML. Popkin said he sees the two converging..." See "Business Process Modeling Language (BPML)" and "Business Process Execution Language for Web Services (BPEL4WS)."
[September 18, 2002] "Sun Official Urges Convergence On Web Services." By Paul Krill. In InfoWorld (September 18, 2002). "Competing proposals on multiple Web services choreography specifications should be deliberated on by an industry standards organization, not by individual vendors each pursuing their own path, a Sun official said Wednesday at the SunNetwork 2002 conference here... Noting competing efforts for choreography of Web services, to boost business-to-business transactions, Sun's Mark Bauhaus, vice president of Java Web services, stressed the need for industry unity... There have been discussions about having appropriate standards for Web services choreography within business-to-business transactions and for internal, intra-corporate communications, said Bauhaus. 'What we'd like to do is get these into a royalty-free, open environment,' Bauhaus said. Sun in August submitted to W3C a proposed specification called Web Services Choreography Interface (WSCI), for an XML-based interface description language to describe the flow of messages exchanged in Web services. The proposal was considered important for Web services in areas such as e-business. Shortly thereafter, IBM, Microsoft, and BEA released details of a proposed specification called Business Process Execution Language for Web Services (BPEL4WS), to serve a similar purpose. BPEL4WS has not been submitted to a standards body. Last week, the W3C Web Services Architecture Working Group, at the request of Oracle, recommended forming a new working group to ponder convergence of Web services choreography specifications. The vote was affirmative, although Microsoft reportedly voted against it, and BEA and IBM reportedly abstained. BEA also has participated in WSCI. Sun officials on Wednesday would not say how Sun voted on the measure, but Bauhaus said the Oracle proposal represented a step in the right direction. 
An analyst said Microsoft and Sun were vying for the hearts and minds of developers in the area of Web services..."
[September 18, 2002] "IBM Takes the Wraps Off Web Services Security Software." By Ed Scannell. In InfoWorld (September 18, 2002). "IBM officials on Wednesday announced software that helps developers and corporate users build more secure Web services, which the company will incorporate into WebSphere Application Server 5.0 and Tivoli's Access Manager 4.1 later this year and early next. The new software is essentially intended to manage high-volume business transactions as well as serve to integrate critical functions within Tivoli and WebSphere. It will adhere to the WS-Security specification which IBM co-authored with Microsoft, company officials said. IBM said the announcement represents its first effort to deliver on its promise of software that allows developers and users to deploy federated identification-based services from within its key middleware products... IBM Tivoli Access Manager 4.1, scheduled for a November release, will feature new federated identity management interfaces that enable customers to plug in support for identity standards. This next release will initially feature out-of-the-box support for the XML Key Management Specification (XKMS), company officials said. IBM will extend this capability to include support for various identity standards such as the Security Assertions Markup Language (SAML), Kerberos, XML Digital Signatures, and other security token formats as they mature in standards organizations. Additionally, IBM will support secure token management, trust brokering, integrated identity mapping, and credential mapping services. Version 5 of WebSphere Application Server will support WS-Security in the fourth quarter, and IBM's Tivoli Access Manager 4.1 will add it early next year, company officials said. This specification defines a standard set of SOAP extensions that can be used to provide integrity and confidentiality in Web services applications, they said.
The new Web services trust broker software can allow organizations to automate the process of entering into trusted business relationships, regardless of the type of security mechanism used by the other company. IBM's intent is to support the broadest range of brokering methods such as Microsoft TrustBridge, Kerberos tokens, Public Key Infrastructure (PKI) credentials, and other means of delegating trust that develop in the future. IBM plans to deliver this software in Tivoli and WebSphere software..." See the announcement: "IBM Unveils Industry's First Software for Secure Web Services. Websphere and Tivoli Software Enable Businesses to Securely Extend Web Services Applications to Business Partners, Customers, Suppliers."
[September 18, 2002] "Localization Within a Document Format. Tailor your documents to fit a wide range of languages and cultural conventions." By Uche Ogbuji (Principal Consultant, Fourthought, Inc). From IBM developerWorks, XML zone. September 2002. ['Internationalization support is one of XML's key strengths. Unfortunately, too few XML formats provide mechanisms for localizing content. This tip shows you how to develop localized XML formats.'] "One of the key strengths of XML is its support for internationalization. Its core character set is Unicode, and it provides mechanisms to support more regionally popular systems -- such as the ISO-8859 variants in Europe, Shift-JIS in Japan, or BIG-5 in China. This is good. Fortunes are spent refitting applications for international deployment after they have been originally developed with a parochial point of view. Yet there is more to internationalization than support for international character repertoires. It is also important to be able to represent information in a way that can be tailored to a particular set of language and cultural conventions. This is what's known as localization... In the data format itself (which is where XML comes in) some aspects of localization, such as date format and order of names, can be addressed with basic XML facilities. One approach is to use international standard forms; a good example of this is dates, where it is best to use the ISO 8601 standard... Another common localization issue is presenting translations of labels, messages, descriptions, and the like. XML 1.0 provides for the specification of the language used in element content and attribute values... The xml:lang attribute can have any value allowed by RFC 1766. This means that one can use values representing primary designations of languages (en for English, es for Spanish, and so forth).
You can be more specific by adding the region where the language variant used is prevalent (for example, en-US for American English, en-GB for British English, or es-MX for Mexican Spanish). Notice that you do not need to declare a namespace here: The xml namespace is implicitly defined in every document. Also note that the language designation affects all children of the relevant element, and all other descendant content. And even though the xml:lang attribute is given special mention in the XML specification, you must still provide for it in your schema." See: (1) "Markup and Multilingualism"; (2) "Language Identifiers in the Markup Context"; (3) "XML and Unicode"
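The xml:lang selection described above, with fallback from a regional variant to the primary language tag, can be sketched with the standard library. Note how the implicit xml prefix resolves to a fixed namespace URI; the sample document and the fallback rule are illustrative, not mandated by any spec.

```python
import xml.etree.ElementTree as ET

# The 'xml' prefix is implicitly bound to this URI in every document.
XML_NS = "http://www.w3.org/XML/1998/namespace"

doc = ET.fromstring(
    '<messages>'
    '<greeting xml:lang="en">Hello</greeting>'
    '<greeting xml:lang="en-GB">Good day</greeting>'
    '<greeting xml:lang="es">Hola</greeting>'
    '</messages>')

def pick(doc, lang):
    """Prefer an exact xml:lang match, then fall back to the primary
    language designation (en-US -> en), RFC 1766 style."""
    by_lang = {el.get(f"{{{XML_NS}}}lang"): el.text
               for el in doc.findall("greeting")}
    return by_lang.get(lang) or by_lang.get(lang.split("-")[0])

print(pick(doc, "en-GB"))  # exact regional match
print(pick(doc, "en-US"))  # no en-US entry, falls back to plain 'en'
```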
[September 18, 2002] "Grid Computing: Electrifying Web Services." By Dirk Hamstra. In Web Services Journal Volume 2, Issue 9 (September 2002), pages 60-64. "Grid computing makes it possible to dynamically share and coordinate dispersed, heterogeneous computing resources. Flexibility and ubiquity are essential characteristics of Web services technologies such as WSDL (Web Services Description Language), SOAP (Simple Object Access Protocol), and UDDI (Universal Description, Discovery, and Integration). The Open Grid Services Architecture (OGSA) combines technologies to unlock and exploit grid-attached resources. OGSA defines mechanisms to create, manage, and exchange information between Grid Services, a special type of Web service. The architecture uses WSDL extensively to describe the structure and behavior of a service. Service descriptions are located and discovered using Web Services Inspection Language (WSIL). By combining elements from grid computing and Web services technologies, OGSA establishes an extensible and interoperable design and development framework for Grid Services that includes details for service definition, discovery, and life-cycle management. Information systems are no longer defined just by what they can process, but also by where they can connect. For example, the growing demand for computing power in simulation and engineering design projects is increasingly satisfied by 'on-demand' sharing of CPU cycles and disk space across distributed networks. The ultimate goal for these interconnected networks, or grids, is to make IT power as commonplace and omnipresent as electricity... Grid computing can be described as the coordinated, transparent, and secure sharing of IT resources across geographically distributed sites based on accepted computing standards... From a technical perspective, OGSA is critical to streamlining and accelerating the creation and deployment of grid applications.
The architecture unifies existing grid development approaches and is extensible from a technical perspective to incorporate future developments. However, to expand the reach of grid applications beyond the level of a single enterprise the OGSA needs to more thoroughly address issues concerning: (1) Use of WSDL extensions; (2) Definition and use of service ports; (3) Heterogeneous, end-to-end, security; (4) Grid Service manageability... The extensibility of WSDL has implications for interoperability with existing WSDL clients and servers. Full interoperability and accommodation of non-OGSA WSDL clients by grid servers will accelerate the adoption of the extensibility elements. This is important since the handling of extensions is spotty at best in most current toolkits..." See: "Web Services Description Language (WSDL)."
[September 18, 2002] "Cross-Site Scripting. Use a Custom Tag Library to Encode Dynamic Content." By Paul S. Lee (I/T Architect, IBM Global Services). From IBM developerWorks, Security. September 2002. ['Cross-site scripting is a potentially dangerous security exposure that should be considered when designing a secure Web-based application. In this article, Paul describes the nature of the exposure, how it works, and gives an overview of some recommended remediation strategies. The article demonstrates that the majority of the attacks can be eliminated when a Web site uses a simple custom tag library to properly encode the dynamic content.'] "Most Web sites today add dynamic content to a Web page, making the experience for the user more enjoyable. Dynamic content is content generated by some server process, which when delivered can behave and display differently to the user depending upon their settings and needs. Dynamic Web sites have a threat that static Web sites don't, called 'cross-site scripting,' also known as 'XSS.' A Web page contains both text and HTML markup that is generated by the server and interpreted by the client browser. Web sites that generate only static pages are able to have full control over how the browser user interprets these pages. Web sites that generate dynamic pages do not have complete control over how their outputs are interpreted by the client. 'The heart of the issue is that if untrusted content can be introduced into a dynamic page, neither the Web sites nor the client has enough information to recognize that this has happened and take protective actions,' according to the CERT Coordination Center, a federally funded research and development center that studies Internet security vulnerabilities and provides incident response. Cross-site scripting is gaining popularity among attackers as an easy exposure to find in Web sites.
Every month cross-site scripting attacks are found in commercial sites and advisories are published explaining the threat. Left unattended, your Web site's ability to operate securely, as well as your company's reputation, may fall victim to these attacks. This article is written to raise awareness of this emerging threat and to present a solution implementation for Web applications to avoid this kind of attack..."
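The remediation the article recommends -- encoding dynamic content before it reaches the page -- reduces, in the simplest case, to escaping HTML metacharacters; a JSP custom tag would wrap exactly this step. The sketch below uses Python's html.escape as a stand-in for the article's Java tag library.

```python
import html

def encode_dynamic(value):
    """Encode untrusted dynamic content so the browser renders it as
    text rather than interpreting it as markup or script."""
    return html.escape(value, quote=True)

# An injected script tag comes out inert once encoded.
payload = '<script>alert("xss")</script>'
print(encode_dynamic(payload))
```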
[September 18, 2002] "Forever Free Services?" By P.J. Connolly and Tom Yager. In InfoWorld Issue 37 (September 13, 2002), page 42. Test Center Research Report, Web Services Applications. "Web Services, so the story goes, typify the best aspects of modern business software. Their foundation technologies, including XML, SOAP, and WSDL, were hewn by the sharpest minds in the IT industry and made free for the benefit of all. But as vendors turn Web services into products, skeptics fear that the technology's openness may be bait. Standards could be subverted into springboards for patents, proprietary adaptations may hamper interoperability, and open-source implementations could be subjected to licenses." See the dialog between Tom and P.J. [Tom] "Enterprise computing founders such as Digital Equipment, Intel, and IBM spent fortunes on R&D to create what everyone now takes for granted. They recovered those costs through fees and licenses, the application of which did not slow the acceptance of what they created. You simply can't expect commercial-grade technology to develop without a price tag attached... Web services, like the Web itself, are gaining traction because they're built from brilliant ideas that make it easier to do business. The minds that hatch such ideas don't come cheap. If the ubergeeks' employers can't profit from their inventions, what will motivate vendors to fund the research that evolves into standards? Unless you propose to move all of these brainy people to a bread-and-water commune, IT will have to pay for those deep thoughts... It's natural for vendors to explore all possible ways to extract revenue from their inventions. Patents and licenses generate income that companies would be foolish to disregard. When vendors act reasonably in that pursuit, as major Web services players have so far, the market rewards them..." [P.J.] "There's nothing wrong with making money from the sweat of your brow. 
I simply object to seeing the standards process abused by vendors trying to lock users into proprietary solutions. Besides, you're forgetting how much of the foundation work for the Internet was done in the nonprofit sector. Large chunks of Sun's Solaris, in particular, and a lot of other TCP/IP implementations, including Microsoft's, still bear the copyright of the Regents of the University of California for a reason... It's smart for vendors to put their efforts behind developing standards for Web services. Also, it's a good idea for standards bodies to resist the temptation to adopt proposals that could become the subject of future holdups through transfers of patent ownership. Forgent's claim to own the rights to the JPEG specification is just the kind of problem that IT doesn't need to relive every few years. If a vendor wishes to have a patented technology adopted as a standard, it must be prepared to assign its rights to the standards body for the lifetime of the patent... Here's the tough part: Exactly what is the difference between reasonable returns and shameless greed? Sure, we all know it when we see it, but sometimes, market discipline doesn't kick in until it's too late..." General references in "Patents and Open Standards."
[September 18, 2002] "XML-Style PKI. Does XKMS Have the Key?" By Jon Udell. In InfoWorld Issue 37 (September 13, 2002), pages 1, 16. ['The XML Key Management Specification offers hope of freeing developers from the pit of PKI despair. XKMS addresses one of the chief obstacles to workable Web services security: the complexity of Public Key Infrastructure. The problems are bigger than XKMS can solve, but it takes important steps in the right direction. XKMS has lots of right ideas: minimal client footprint, service-oriented architecture, DNS integration, and trust-provider agnosticism. The emerging model of Web services could benefit from all these things, but the road to XKMS adoption is tarred with inertia.'] "In discussions about Web services security, a large elephant enters the room: Public Key Infrastructure. PKI is a foundation of the trust services to which the SAML (Security Assertions Markup Language) and Liberty Alliance specifications refer. It also enables the signing and encryption of parts of documents as described by the WS-Security spec. Long before the Web services revolution began, PKI deployment and use was lagging behind expectations. E-commerce drove the adoption of server-side certificates, but client-side certificates, which can authenticate users to Web sites as well as sign and encrypt e-mail, never caught on. The emerging end-to-end style of Web services is going to force the issue. Channel security (that is, an HTTPS connection) won't be flexible enough for business documents that route through a chain of intermediaries, each responsible for signing, encrypting, or validating parts of those documents. Granular, item-level security is coming, and that's going to require more cryptographic keys, more certificate chains, and more people who know how to make all this stuff work... Nobody pretends there is an easy way out of the dilemma. 
Nevertheless, the XKMS (XML Key Management Specification), originally sponsored by VeriSign, Microsoft, and webMethods, takes important steps in the right direction. First and foremost, it pushes the logic of finding and validating certificates out of the client and into the cloud. XKMS is a Web service; if clients of that service can shed hard-coded certificate-processing logic, it will help in several ways. Mobile devices, in particular, could be streamlined. As VeriSign principal scientist Phillip Hallam-Baker points out, certificate processing is unwieldy both in terms of code (about 750KB) and data (VeriSign's Certificate Revocation List has grown to 3MB). Everyone would benefit from the dynamic nature of the service-oriented approach. In addition to insulating clients from these kinds of flaws, XKMS promises to shield them from the vicissitudes of normal PKI evolution -- for example, the shift from batch-mode certificate checking using certificate revocation lists to real-time checking using the OCSP (Online Certificate Status Protocol). What XKMS doesn't do is offload core cryptographic operations, including key generation and signing, from the client... XKMS is abstract enough to support alternative certification schemes such as PGP's (Pretty Good Privacy) Web of trust, or the linked local namespaces of SPKI/SDSI (Simple Public Key Infrastructure/Simple Distributed Security Infrastructure, or 'spooky/sudsy'), an idea that influenced the design of Groove. These systems enable natural bottom-up trust, arising from ordinary discourse, as opposed to synthetic top-down trust rooted in institutional authorities..." See: (1) "XML Key Management Specification (XKMS)"; (2) "Security Assertion Markup Language (SAML)."
[September 18, 2002] "XML and Personalized Financial Plans." By Kenneth J. Hughes, Karl B. Schwamb, and Hans Tallis. In XML Journal Volume 3, Issue 9 (September 2002). "XML greatly facilitates the design of publishing systems whose capabilities include not only control of formatting, but also control of content selection according to the needs of the individual reader. Customers of financial services firms don't need to settle for advice written to a general readership; they can receive plans so personalized and uniquely suited to them that they'll feel a team of experts conferred regarding their situations and wrote specific, 200-page recommendations for planning their financial futures. Through the application of XML standards and technologies to the financial planning process, the system described in this article automatically produces highly personalized financial plans. The article emphasizes the role XML plays in such a financial planning system, and presents techniques that are applicable to any document personalization system that must operate and evolve in a production setting. Specifically, we discuss the use of: (1) XMLC in the Web presentation layer for managing the data that drives personalization; (2) XSLT for assembling personalized documents; (3) DocBook for representing personalized documents; (4) Meta-XSLT for generating user-readable documentation of the personalization process... The customer data is stored in a conventional relational database - it's extracted by custom Java code that performs numerous calculations to analyze the customer's situation and produce tailored recommendations. The analysis results are represented in a custom XML document that's completely data-centric. XSLT stylesheets are employed in a "pull" style to conditionally include text fragments and graphics. The target representation of the result is in DocBook, a rich XML application for representing books and articles... 
Once a financial plan has been generated in the intermediate DocBook form, the standard DocBook XML stylesheets are used to produce output in XHTML (for the Web) or RTF (for postprocess manual customization). PDF output, for high-quality print production, is also produced with the assistance of custom XSL-FO stylesheets. The fact that source code for these standard stylesheets is available is an important risk-reduction factor; they can be customized to produce the desired output styling, if necessary. Because these stylesheets are included with DocBook, they greatly reduce the amount of custom code needed to produce high-quality output... Some key areas of financial analysis are best represented by rules that can be executed within a rule-based system. While several rule standards have been proposed, such as SRML, RuleML, and BRML, none have the status of an approved standard, and no path exists for documenting the rules. Taking a wider look at the financial planning landscape, it's clear that once a plan is produced, customers prefer to have their plans monitored as they make updates to their financial situation. Many of the XML standards that already exist for trading and exchanging financial transaction data, such as IFX, could be used to update customer data over time and to perform plan monitoring. The monitoring activity can be used to make automatic portfolio adjustments, notify customers of deviations from plan goals, and cross-sell and up-sell financial instruments..."
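The "pull"-style conditional assembly the article describes can be sketched as a small XSLT fragment that selects text based on the customer's analysis data and emits DocBook markup. This is an illustrative sketch only: the element names (analysis, riskTolerance) and the recommendation text are invented, not taken from the system described.

```xml
<!-- Illustrative sketch of pull-style XSLT assembly into DocBook.
     All data element names here are hypothetical. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/analysis">
    <chapter>
      <title>Investment Recommendations</title>
      <!-- Conditionally pull in a text fragment according to the
           customer's analyzed situation -->
      <xsl:choose>
        <xsl:when test="riskTolerance = 'conservative'">
          <para>We recommend shifting assets toward fixed-income
          funds.</para>
        </xsl:when>
        <xsl:otherwise>
          <para>Your current equity allocation matches your stated
          goals.</para>
        </xsl:otherwise>
      </xsl:choose>
    </chapter>
  </xsl:template>
</xsl:stylesheet>
```

Each personalized plan is thus just the union of fragments whose conditions the customer's data satisfies, which is what makes the DocBook intermediate form convenient for downstream XHTML, RTF, and PDF rendering.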
[September 18, 2002] "XML Excellence at Ford Motor Company." By Tim Thomasma (Senior Technical Specialist, Ford Motor Company's Infrastructure Architecture group; Co-chair of the Automotive Industry Action Group's XML/EDI Work Group). In XML Journal Volume 3, Issue 9 (September 2002), pages 10-14. "XML and Web services promise to support new rapid-response business practices: as soon as a customer returns a product for repair, the information generated from this business event can be sent automatically to manufacturing, design, customer service, and finance people as well as to systems and databases - and those of the suppliers - so that all the appropriate resources of the extended enterprise are immediately engaged in satisfying this customer now and in the future. Instead of handcrafted point-to-point interfaces and integration connections, we need standard connection points that are well known throughout the computing industry and supported by all products. This is XML excellence, applied to integration. We think the promise is achievable. Most of the pieces are in place. We see several elements of XML excellence that are involved in realizing the promise: (1) Use the overall structure and integration best practices of the Open Applications Group Integration Specification (OAGIS); (2) Use and support relevant standards; (3) Manifest excellence in a focused solution that delivers business value to the company; (4) Drive use and compliance internally and externally to the company; (5) Plan for broad deployment of these techniques - event-driven messaging, Web services, and richer electronic collaboration... The U.N. Centre for Trade Facilitation and Electronic Business (UN/CEFACT) has recommended OAGIS as a temporary document syntax for payloads using the ebXML infrastructure, while its new ebXML Core Components specification proposal goes through standardization. In addition to the UN/CEFACT, the Open Applications Group, Inc. 
(OAGI), maintains close relationships with the Organization for the Advancement of Structured Information Systems (OASIS) and the Web Services Interoperability Organization (WS-I). The OAG, OASIS, and WS-I consortia generally have representatives from the same software companies on their governing boards. From the start, OAGIS was designed for use across all industries and in every geography. The software companies that built it, including JD Edwards, Oracle, PeopleSoft, QAD, and SAP, intended to use it to reduce the costs of application integration for their customers, no matter what industry or country their customers do business in. We try to avoid creating our own application-specific or Ford Motor Company-specific XML document definitions for application integration or electronic collaboration with trading partners..."
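An OAGIS Business Object Document (BOD) of the kind Ford standardizes on pairs a control area (who sent it, and which verb/noun operation it requests) with a data area holding the payload. The skeleton below approximates the OAGIS 7.x conventions from memory and should be treated as schematic, not as an exact rendering of any OAGIS DTD; note that OAGIS really does spell the element CNTROLAREA.

```xml
<!-- Schematic OAGIS-style BOD: verb PROCESS applied to noun PO.
     Element names approximate OAGIS 7.x; details may differ. -->
<PROCESS_PO>
  <CNTROLAREA>
    <BSR>
      <VERB>PROCESS</VERB>
      <NOUN>PO</NOUN>
      <REVISION>003</REVISION>
    </BSR>
    <SENDER>...</SENDER>
    <DATETIME qualifier="CREATION">...</DATETIME>
  </CNTROLAREA>
  <DATAAREA>
    <!-- purchase-order payload goes here -->
  </DATAAREA>
</CNTROLAREA>
</PROCESS_PO>
```

The verb/noun pairing is what makes BODs reusable across industries: the same PROCESS verb combines with many nouns (PO, INVOICE, and so on) rather than each integration defining its own message vocabulary.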
[September 17, 2002] "Metadata for DRM. Rights Expression Languages Overview." By Grace Agnew (Rutgers, The State University of New Jersey). Presentation given at the NSF Middleware Initiative (NMI) and Digital Rights Management (DRM) Workshop, September 9, 2002, Georgetown University Conference Center, Washington, D.C., USA. 41 pages. A Rights Expression Language (REL) "(1) Documents offers and agreements between rights holders and end users, providing rights to license, distribute, access and use resources; (2) Communicates rights, conditions on the exercise of rights, and other context relevant to the rights transactions; (3) Defines the parties and concepts engaged in offers or agreements for the exercise of rights that are exercised against content; (4) Expresses the underlying business model(s) of the community sharing the DRM; (5) Employs data dictionary and a standard syntax to provide interoperable, logically consistent, semantically precise documentation for rights transactions... Administrative metadata records provenance, fixity, context, reference, structure, and management of resources, rights metadata may be a subset... an integration of administrative, descriptive, structural and rights metadata..." Examples are given in XrML and ODRL markup notation. Note the related paper published in D-Lib Magazine (July/August 2002): "Federated Digital Rights Management: A Proposed DRM Solution for Research and Education." See also the referenced resources used in connection with the workshop. General references in "XML and Digital Rights Management (DRM)." [PPT format, cache]
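To give a flavor of the ODRL notation the presentation's examples use, the fragment below sketches an offer permitting display and printing of an e-book. The structure and the o-ex/o-dd namespace split follow ODRL 1.x conventions as best recalled; the identifiers are invented and the fragment is illustrative, not a validated ODRL instance.

```xml
<!-- Illustrative ODRL 1.x-style offer; identifiers are hypothetical -->
<o-ex:rights xmlns:o-ex="http://odrl.net/1.1/ODRL-EX"
             xmlns:o-dd="http://odrl.net/1.1/ODRL-DD">
  <o-ex:offer>
    <o-ex:asset>
      <o-ex:context>
        <o-dd:uid>urn:example:ebook:12345</o-dd:uid>
      </o-ex:context>
    </o-ex:asset>
    <o-ex:permission>
      <o-dd:display/>
      <o-dd:print/>
    </o-ex:permission>
  </o-ex:offer>
</o-ex:rights>
```

The same offer could be expressed in XrML with a different vocabulary and data dictionary, which is precisely the interoperability question the REL comparison addresses.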
[September 17, 2002] "PDF on the Fly, Part 2: Users and Systems." [Workflow] By Bernd Zipper. In The Seybold Report Volume 2, Number 11 (September 10, 2002), pages 8-13. ['When you need more speed or features than Adobe's Distiller offers, you can turn to a system or code library from one of the firms in this article. In addition to describing the products' strengths and weaknesses, we interview some publishers whose workflows are regularly turning out PDF on the fly.'] "In the first part of this story, which was published in our August 19 issue, we described several code libraries that programmers might use to integrate a complete system that generates PDF from Web forms, databases and other dynamic information sources. Now we will examine some alternative approaches, ranging from XSL formatters to packaged server applications, and some specific customer experiences... Among the most interesting approaches is to combine PDF creation with XML-coded data. Some companies are writing proprietary code, and we discuss those below. There is also an XSL formatter being developed under the aegis of the Apache Software Foundation... Lots of background information, along with full source code, is available on the Web site. As with all open-source projects, it is primarily of use to programmers and integrators. One such integrator is Pansite, a company based in Essen, Germany; originally developed under the code name 'Datasheet,' the software subsequently became Panpage version 1.0. Panpage is completely based on Apache's Formatting Objects... On the commercial side, we see two firms playing an active role. One, a pioneer in the conversion of PDF and PostScript via XSL-FO, is the U.S. vendor RenderX, which has been active in this area since 1999. The 'native' XEP Rendering Engine that RenderX sells will convert XSL-FO files (W3C Recommendation of Oct. 15, 2001) into PDF or PostScript.
The application is written entirely in Java and will run on any system that supports Java 2 (JDK/JRE 1.2.1). Even Microsoft's Java VM is supported. A glance at the company's home page is worthwhile. There you can see examples of the various capabilities of the Rendering Engine. XEP can be obtained from the Web site as an 'evaluation edition,' and you can also find licensing information there. XSL Formatter V2, from the Japanese company Antenna House, also supports conversion from XSL into PDF. The XSL Formatter provides an integrated Windows ActiveX Control formatting engine and is constructed from modules. According to the company, the application can easily be integrated into existing environments via a command-line interface or a COM interface. The end-user workflow proves to be very easy to set up afterward. After previewing the XML data in a browser, the contents of the browser window are 'virtually printed' and simultaneously converted from XML to PDF...XSL-FO is a highly interesting technology that is only at the beginning of its development. Many of the large vendors understand this, and Quark, Adobe and Corel are already developing plans to implement this technology in their own products. But the smaller companies, which are frequently more innovative and adventuresome, have a head start -- particularly in the development of Java applications..."
[September 17, 2002] "Spanish Developer Readying New Cross-Media System." By Luke Cavanagh. In The Seybold Report Volume 2, Number 11 (September 10, 2002), pages 1, 18-20. "Quasar is introducing a system based on Adobe InDesign, Java servlets, a comprehensive workflow plan and replicable databases. Despite some initial restrictions in text editing and XML handling, we think this is a promising contender. A pilot installation will help smooth the rough edges... Among Quasar's most interesting developments is the real-time integration of Adobe InDesign with a proprietary, browser-based text editor. Unlike many systems on the market, the Quasar customer won't get any choices for the editor or for the pagination engine. That approach can be successful, as Digital Technology (DTI) has demonstrated, but it can backfire if there are weaknesses in the implementation... The system handles metadata nicely, and the forms can be easily changed by using Adobe GoLive. Assets with IPTC metadata attached will have these fields pre-populated upon import to the system...While Quasar's cross-media output is strong, the editor does not fully support XML. Files are stored as XML in the database, but they are only tagged to the paragraph level. And the tags are not assigned by users, but are instead generated based on proprietary markup placed in the file by the system. This architectural decision was based both on Quasar's belief that users shouldn't have to learn XML skills and on its intent to avoid having to validate XML data. The result may be a red flag to buyers considering the new functionality of Adobe InCopy 2.0 (which Quasar does not support) or EidosMedia's Xsmile editor. On the plus side, the editor has some strong features, such as advanced table management and bidirectional links to Microsoft Office modules. The table program has some worthy features, such as the ability to create tables within tables. 
Perhaps more interesting is the ability to take a table into Microsoft Excel, perform Excel calculations and return it to the editing program with the results of the calculations intact...based on our brief introduction to the Quasar system, we aren't convinced yet that it offers a real advantage over EidosMedia's Methode, Seinet's Xtent, or even the offerings of traditional newspaper vendors such as CCI Europe..."
[September 17, 2002] "What Are Topic Maps?" By Lars Marius Garshol. From XML.com (September 11, 2002). ['Anyone who has attended an XML conference over the last two years or so is likely to be aware of Topic Maps, an XML technology that helps in indexing and describing the relationships between information items in documents. In "What Are Topic Maps?" Lars Marius Garshol introduces Topic Maps, explains what they look like, and what they can do.'] "Many years ago, I started looking into SGML and XML as a way to make information more manageable and findable, which was something I had been working on for a long time. It took me several years to discover that, although SGML and XML helped, they did not actually solve the problem. Later I discovered topic maps, and it seemed to me that here was the missing piece that would make it possible to really find what you were looking for. This article is about why I still think so... The topic map takes the key concepts described in the databases and documents and relates them together independently of what is said about them in the information being indexed... this means managing the meaning of the information, rather than just the information. The result is an information structure that breaks out of the traditional hierarchical straitjacket that we have gotten used to squeezing our information into. A topic map usually contains several overlapping hierarchies which are rich with semantic cross-links like 'Part X is critical to procedure V.' This makes information much easier to find because you no longer act as the designers expected you to; there are multiple redundant navigation paths that will lead you to the same answer. You can even use searches to jump to a good starting point for navigation. The most common use for topic maps right now is to build web sites that are entirely driven by the topic map, in order to fully realize their information-finding benefits.
The topic map provides the site structure, and the page content is taken partly from the topic map itself, and partly from the occurrences. This solution is perfect for all sorts of portals, catalogs, site indexes, and so on. Since a topic map can be said to represent knowledge about the things it describes, topic maps are also ideal as knowledge management tools... So, to sum up, topic maps make information findable by giving every concept in the information its own identity and providing multiple redundant navigation paths through the information space. These paths are semantic, and all points on the way are clearly identified with names and types that tell you what they are. This means you always know where you are, which prompted Charles Goldfarb to call topic maps 'the GPS of the information universe.' Topic maps also help by making it possible to relate together information that comes from different sources through merging and published subjects. A future article will discuss this..." See the list of 'Tools and references' at the end of the article and "(XML) Topic Maps."
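The article's 'Part X is critical to procedure V' cross-link can be rendered concretely in XTM 1.0 syntax: two topics and a typed association between them. The topic IDs and names below are invented for illustration; only the element structure follows the XTM 1.0 vocabulary.

```xml
<!-- Minimal XTM 1.0 sketch: two topics joined by a typed association.
     IDs and names are hypothetical. -->
<topicMap xmlns="http://www.topicmaps.org/xtm/1.0/"
          xmlns:xlink="http://www.w3.org/1999/xlink">
  <topic id="critical-to">
    <baseName><baseNameString>is critical to</baseNameString></baseName>
  </topic>
  <topic id="part-x">
    <baseName><baseNameString>Part X</baseNameString></baseName>
  </topic>
  <topic id="procedure-v">
    <baseName><baseNameString>Procedure V</baseNameString></baseName>
  </topic>
  <association>
    <instanceOf><topicRef xlink:href="#critical-to"/></instanceOf>
    <member><topicRef xlink:href="#part-x"/></member>
    <member><topicRef xlink:href="#procedure-v"/></member>
  </association>
</topicMap>
```

Because the association lives outside any one document hierarchy, a topic-map-driven site can surface it from either end: from the page about Part X or from the page about Procedure V.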
[September 17, 2002] "Simple Text Wrapping." By Antoine Quint. From XML.com (September 11, 2002). ['Antoine Quint celebrates the end of the long French summer holidays with another installment of "Sacre SVG!" One of the more irritating omissions from the SVG specification is support for text wrapping. In his column this week Antoine shows how you can implement text wrapping yourself.'] "SVG 1.0 includes support for manipulating and representing text. There's an entire chapter devoted to text in the specification. Text in SVG is real text; to write Hello World! in an SVG document, you have to write something like <text>Hello World!</text>. This comes in handy with regard to accessibility as it means that SVG text is searchable and indexable. Looking through the chapter we can see a number of appealing text features: precise text positioning, support for bidirectional text, text on a path, and so on. However, you'll find that text wrapping is missing. Let's see what can be done with the current set of SVG 1.0 features to extend it to do some simple text wrapping... The main thing is to be able to break a string into a multiline paragraph, given a column width. Next, we might take a crack at text alignment: left, right, center, and full justification. Line-breaking will only be done on spaces, no funny stuff with hyphens or dictionaries. That's it. For refinements, we'll consider CSS for font properties, line intervals, and text rendering quality. But we also want to provide a nice architecture for our component; we're going to give it a nice XML front-end... [thus] we've come up with a pretty neat and useful extension to SVG by using it as a 2D Graphics API. But the great thing is that it's more than an API. It's also got an XML front-end and allows us to build higher-level blocks with a higher level of semantics. We will explore all of this further with XForms-related work in the coming months..." See: "W3C Scalable Vector Graphics (SVG)."
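Since SVG 1.0 has no automatic wrapping, the output of a wrapping component like the one described is a series of <tspan> elements, each carrying one computed line and repositioned with the x and dy attributes. The sketch below shows the target markup shape; the text and dimensions are arbitrary, and a real component would compute the break points from the column width and font metrics.

```xml
<!-- What hand-wrapped text looks like in SVG 1.0: one <tspan> per line,
     reset to the left margin (x) and moved down one line interval (dy) -->
<svg xmlns="http://www.w3.org/2000/svg" width="220" height="80">
  <text x="10" y="20" font-size="14">
    <tspan x="10" dy="0">Text in SVG 1.0 has no</tspan>
    <tspan x="10" dy="18">automatic wrapping, so each</tspan>
    <tspan x="10" dy="18">line is its own tspan.</tspan>
  </text>
</svg>
```

The text remains real, searchable text throughout, which is the accessibility point the column emphasizes.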
[September 17, 2002] "Identity Crisis." By Kendall Grant Clark. From XML.com (September 11, 2002). ['In the XML-Deviant column this week, Kendall Clark continues his look at the W3C Technical Architecture Group's "Architectural Principles of the World Wide Web" document, the TAG's definitive document of how the Web works. This week Kendall looks at the document's concept of identity of resources.'] "Members of the W3C's Technical Architecture Group (TAG) are preparing the 'Architectural Principles of the World Wide Web' (APW), a document intended to serve as a definitive statement of what the TAG has discovered and defined about what makes the Web work. As I described last week, the APW contains four substantive sections: an introduction, a section on identifiers and resources, a section on formats, and a section on protocols. The structure of the document reflects the structure of the Web's architecture, which the APW says consists of identifiers, formats, and protocols. In last week's column I discussed the APW's introduction and some general issues of terminology, especially the confusion, as I see it, of principle with practice. In this week's column, I examine APW Section 2, Identifiers and Resources..." See: "W3C Publishes Working Draft of Architectural Principles of the World Wide Web."
[September 17, 2002] "What Are XForms?" By Micah Dubinko. From XML.com (September 11, 2002). ['As the XForms specification evolves at the W3C, Micah has been keeping his article up to date. This week we're publishing the third revision, which includes changes made to XForms in August.'] "A new technology, XForms, is under development within the W3C and aims to combine XML and forms. The design goals of XForms meet the shortcomings of HTML forms point for point: (1) Excellent XML integration [including XML Schema]; (2) Provide commonly-requested features in a declarative way, including calculation and validation; (3) Device independent, yet still useful on desktop browsers; (4) Strong separation of purpose from presentation; (5) Universal accessibility. This updated article gives an introduction to XForms, based on the 21-August-2002 Working Draft, which is described as being a close precursor to a Candidate Recommendation draft... What does the future hold for XForms? One certainty is that connected devices are becoming more diverse than ever, appearing in all shapes and sizes. Web forms no longer lead sheltered lives on the desktop. By offering a more flexible, device-independent platform, XForms will provide interactivity on such devices. Most organizations now have substantial investments in XML. Since XForms technology reads and writes XML instance data, it enables existing investments to be leveraged as building blocks in a larger system more smoothly than with ordinary XHTML forms. Additionally, Web Services (or the Semantic Web, depending on your upbringing) will increase the amount of information interchange over the Web, which will in turn increase the need for information to be entered -- through XForms. Those who have struggled with traditional forms and associated scripting will appreciate the consistent set of form controls and declarative actions in the specification, as well as the powerful and familiar XSLT-like processing model. 
Calculations, validations, and business logic will also be expressible without scripts. Updated forms are one of the major changes in XHTML 2.0, the most significant change to HTML since its conception in 1993. Developers and users alike are looking forward to the final W3C Recommendation for XForms..." See: (1) W3C XForms - The Next Generation of Web Forms; (2) "W3C Publishes Preview Candidate Recommendation for XForms Specification"; (3) See: "XML and Forms."
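The declarative calculation and typing the article highlights can be sketched as an XForms model: XML instance data, a bind that computes a total without any script, and an XML Schema type constraint. This follows the general shape of the 2002 working drafts; the namespace URI changed across drafts, and the order/quantity/price vocabulary here is invented for illustration.

```xml
<!-- Sketch of an XForms model per the 2002 drafts; details approximate -->
<xforms:model xmlns:xforms="http://www.w3.org/2002/xforms"
              xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xforms:instance>
    <order xmlns="">
      <quantity>1</quantity>
      <price>9.99</price>
      <total/>
    </order>
  </xforms:instance>
  <!-- Declarative calculation: no script required -->
  <xforms:bind nodeset="/order/total"
               calculate="/order/quantity * /order/price"/>
  <!-- Declarative validation via an XML Schema datatype -->
  <xforms:bind nodeset="/order/quantity" type="xsd:integer"/>
</xforms:model>
```

Form controls such as xforms:input then bind to these instance nodes by reference, which is the "strong separation of purpose from presentation" in the design goals: the same model can drive different renderings on different devices.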
[September 17, 2002] "Web Services-Portal Is on Tap for New Mexico." By Jim Rapoza. In eWEEK (September 16, 2002). "The state of New Mexico is harnessing the power of portal technology and Web services to transform a loosely related collection of government sites into an integrated portal that is organized by communities of interest rather than agency focus. Because the state's Web sites differ not only in content but also in the technology behind them (using everything from Java 2 Enterprise Edition to .Net to standard Web tools), the New Mexico state portal (known as the Multi-Agency Portal, or MAG Portal) needed to be neutral when it came to technology. Portal project members also chose these platforms because of their support for Web services standards such as SOAP (Simple Object Access Protocol) and Web Services Description Language. A key requirement of the portal initiative was that there be almost no rewriting of Web applications and services already in use. 'Our vision was to use Web services to enable remote procedures,' Stafford said. Although many services in the MAG Portal consist of standard Web applications, others are newly developed Web services. Security is also a major issue in the portal and Web services development. In addition to traditional issues such as access control and securing the services, Stafford and his staff had to deal with a large number of potentially unsecured systems within their networks, meaning that safe internal access wasn't a foregone conclusion... To deal with some of these security issues, the state turned to Oblix Inc.'s NetPoint. The main focus of the NetPoint implementation is to maintain identity management and handle LDAP-based access control. In addition, it was possible for the development team to begin some work in securing services through Security Assertion Markup Language [SAML] because Oblix is a big backer of this standard for securing Web services and because NetPoint includes SAML implementations. 
In addition to internal implementation of Web services, many of the agencies are working on outward-facing Web services. 'Some agencies are already building extranets to exchange information using XML with external government agencies,' Martinez said. The Environment Department, for example, is working on sharing information with the federal Environmental Protection Agency. 'The states and the EPA are working together on defining XML schemas for information that we already share, such as air quality reports,' she said..." See: (1) "Security Assertion Markup Language (SAML)"; (2) "Environmental Protection Agency (EPA) Central Data Exchange (CDX)."
[September 17, 2002] "Adventures in High-Performance XML Persistence, Part 2. Benchmarking XML Parsing." By Cameron Laird (Vice president, Phaseit, Inc). From IBM developerWorks, XML zone. September 2002. ['XML-oriented applications vary enormously in performance. This article, the second in a series on XML persistence, presents basic information you should know about XML parsing, including several principles for measuring XML parsing performance that are important for any XML developer who wants more speed.'] "In this series on XML performance, I frequently return to three general engineering principles: (1) The performance of two functionally identical programs can often vary by orders of magnitude; (2) It's often less costly to tune a technology that an organization can accept, rather than impose an unfamiliar technology, even if the latter option provides more speed; (3) There are clear benefits to articulating project requirements clearly and completely. Project requirements need to be operational, or objective. For example, a requirement that states 'The system must respond to the user's query within four seconds, when the application is run on a 400 MHz Pentium with 64 MB of RAM' is far more serviceable than 'We should use a fast parser.' Before applying these principles to the marketplace for XML parsers, I'd like to tie up a few loose ends. I've added an addendum, 'When to use an XML database', to Part 1 of this series, which introduced XML persistence. Persistence mechanisms work together with parsing to make up the most important parts of an XML programming infrastructure... 
When the performance of a particular project lags behind what it needs to be, the following are generally the least expensive tactics with the potential to give significant improvements: Faster hardware; Programmatic caches or pre-fetches; An API (SAX instead of DOM, for example) that matches the specific application well and therefore conserves maintainability, but also demands fewer processing resources; A faster parsing engine... In general, expat-based parsers are faster than good Java-based ones by a factor of at least two. Specialty XML parsers best expat by about the same factor. Different parsers for the same language can vary in speed by two full orders of magnitude. In analyzing parsing performance, especially with DOM, keep an eye on memory usage. As with raw performance, this varies greatly between different parsers and application algorithms. The next installment of this series on XML performance will methodically step through a practical exercise of measuring and enhancing the speed of a demonstration application..." See also Part 1, "Adventures in high-performance XML persistence, Part 1. A high-performance Tcl-scripted XSLT engine."
[September 17, 2002] "Microsoft: All XML, All The Time." By Charles Cooper. In CNET News.com (September 12, 2002). ['Jim Allchin is not buying the argument that there's any confusion about Microsoft's message on Web services. If anything, he says, it's just the opposite.'] "Earlier this summer, Bill Gates allowed that Microsoft's .Net Web services strategy was progressing more slowly than anticipated. But Allchin, who is responsible for Microsoft's platform strategy, maintains that the industry adoption of XML (Extensible Markup Language) Web services is proceeding apace, any speed bumps notwithstanding. 'In terms of the overall vision of XML Web services, I think it's been quite successful,' he says. Allchin, who recently oversaw the first update to Microsoft's Windows XP operating system, sat down with CNET News.com to discuss XML and the future of consumer-oriented Web services, a segment that was once promoted as the future of online commerce... [CC: 'the adoption of XML has been uneven'] JA: 'I don't know if I agree with you. Within the customer-facing, OK. But inside businesses, there are tons of apps being written. If anything (is) surprising me, it's the opposite. Think about what we have today: We have only the basics--SOAP (Simple Object Access Protocol), UDDI (Universal Description, Discovery and Integration), and three or four others--and we do have WS security. If you go back to the beginning of this year, we didn't even have that. With Hailstorm, there's no question we can have a long conversation about it. We made a push there, and we learned a bunch of stuff and we really retrenched. With XML Web services, we came out with Visual Studio.Net. It's a rock-solid product. We've had lots of good feedback, and in terms of interconnecting devices of all types, I think it's the right vision, and I think it's happening'...."
[September 16, 2002] "Iona Easing Web Services Integration." By Darryl K. Taft. In eWEEK (September 16, 2002). "Iona's Orbix E2A XMLBus Edition v5.4 delivers new features that enable the bridging of Common Object Request Broker Architecture (CORBA) systems to Web services, the company said... Examples of integration scenarios in which Iona customers have begun to deploy Web services to optimize their CORBA investments include integrating CORBA applications with other internal applications -- typically based on Java 2 Enterprise Edition, mainframe or Microsoft technologies -- without tampering with the stable CORBA system or incurring the expense of developing new CORBA interface code. Businesses also are exposing existing CORBA systems -- and their information or functionality -- to other parts of the organization, where CORBA previously had been a barrier to interoperability. New features of the Waltham, Mass., company's Orbix E2A Web Services Integration Platform XMLBus Edition v5.4 include CORBA data type support, including direct support for common CORBA design patterns such as the Factory Model; support for Iona's Orbix 2000, Orbix 3 and ORBacus, and Borland VisiBroker CORBA technologies; and secure SSL-based dispatching of Web services to CORBA systems with propagation of security credentials..." See details in the news item of 2002-09-16: "IONA Orbix E2A XMLBus Version 5.4 Connects Corba with Web Services."
[September 16, 2002] "Iona Links CORBA Roots With Web Services." By Richard Karpinski. In InternetWeek (September 16, 2002). "Iona, a vendor with deep roots in the CORBA world, released a new version of its Web services platform that links these two approaches to distributed enterprise computing. Iona's Orbix E2A XMLBus 5.4 delivers new features for fusing CORBA -- Common Object Request Broker Architecture -- systems to Web services. CORBA is a rigorous, complex distributed-computing system that has had great success in some circles (especially in defense, high-tech/telecom and finance) but never was able to take root on a wide basis. It was an answer, in many respects, to Microsoft's COM distributed computing model, but it could never reach a similar critical mass. In many ways CORBA had its thunder stolen by Java and J2EE, but true CORBA backers note that most Java deployments -- relying more on servlets and JSPs than true distributed Enterprise Java Beans -- don't come close to the power or elegance of a full CORBA deployment. Adding Web services protocols to the CORBA mix gives CORBA users an easy way to get data and program calls in and out of a distributed CORBA system, said Rebecca Dias, IONA's Web services product manager..." See details in the news item of 2002-09-16: "IONA Orbix E2A XMLBus Version 5.4 Connects Corba with Web Services."
[September 16, 2002] "X-Fetch Performer 2.1 Benchmark Documentation." Benchmark and product information. From Republica Corporation. September 2002. 16 pages. "As XML is applied in new fields of data processing, the performance and stability of XML tools must be considered more carefully in IT decisions. This document displays a performance comparison between the most common XML processing techniques according to the benchmark package published in IBM developerWorks in September 2001. There are two different approaches to XML handling: (1) Event-based string parsing, for example SAX parsing; (2) Using object models, like DOM. Both approaches have their pros and cons, but neither is universally (in all respects) better than the other. Republica's contribution to XML-based e-business is the combining of the best features of these techniques in the EJB-compatible X-Fetch Performer... Event-based parsing means that the XML-reading component (called the XML parser) constructs an event queue out of the input XML document. This queue is then interpreted by the application (i.e., the component that needs the information appearing in the document). This approach is fast and consumes little memory: even the largest documents and data streams can be fluently processed. The main cons of event-based parsing are code complexity (which leads to losses in design and implementation resources) and the fact that the XML document cannot be modified, nor can a new document be generated... To be able to generate or modify an XML document, one has to build an object representation of the document. This means that all compounds appearing in XML (e.g., elements, attributes, processing instructions) are stored in a data structure (usually tree-form), and modifications of that structure are (eventually) rendered as modifications in the original XML document. 
The process of forming the object representation out of a given XML document is called 'parsing', and the operation of producing an XML string out of an object representation is called 'serialization'. Object models provide better access to data and tools for manipulating XML. However, object models consume memory and they cannot operate on data streams... X-Fetch Performer provides access to data via both techniques, with the additional features: XML Parsing and Generation; XML Validation (DTD and Schema); XML Filtering and Content-based Routing (patent-pending technology); Efficient Data Queries (XPath); EJB Compatibility; Built-in Interfaces for SAX and DOM; User Manuals (also containing tutorials and examples with full Java source code); On-line Helpdesk Support..." See the announcement of 2002-09-16: "Republica Releases X-Fetch Performer 2.1, Accelerating XML Application Development."
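The parse/modify/serialize cycle that the Republica document describes is exactly what an object-model (DOM-style) API provides and an event-based parser cannot. A minimal sketch using Python's `xml.dom.minidom` as a stand-in for any DOM implementation (it is not X-Fetch Performer's API):

```python
import xml.dom.minidom

# Parsing: build an in-memory object representation of the document.
doc = xml.dom.minidom.parseString(
    "<catalog><book id='1'><title>XML Basics</title></book></catalog>")

# The object model permits modification -- impossible with pure
# event-based parsing, which only reports what it reads.
book = doc.getElementsByTagName("book")[0]
book.setAttribute("inStock", "yes")

new_book = doc.createElement("book")
new_book.setAttribute("id", "2")
doc.documentElement.appendChild(new_book)

# Serialization: render the modified tree back into an XML string.
out = doc.documentElement.toxml()
print(out)
```

The cost, as the document notes, is that the whole tree lives in memory, so this approach does not scale to unbounded data streams.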
[September 16, 2002] "Intell Chief Calls for Knowledge Base." By Dan Caterinicchia. In Federal Computer Week (September 14, 2002). "Data authored and tagged in Extensible Markup Language (XML) and combined with search capabilities across governmental databases is a key element in ensuring that the types of intelligence lapses associated with last year's terrorist attacks do not repeat themselves, according to the Marine Corps' top intelligence official. Brig. Gen. Michael Ennis, director of intelligence at Marine Corps headquarters, said the information to prevent last September's attacks was available for intelligence community users to find, but they did not have the ability to analyze and act on it in a timely fashion. He said the daily briefings from most Defense Department intelligence offices are all 'cut and paste' jobs with no analysis. 'The difference between a database and knowledge base is that a knowledge base is written in XML and tagged so the user can create the knowledge they want,' Ennis said during a September 12  panel at the Homeland Security and National Defense Symposium in Atlantic City, N.J. Ennis said there are literally thousands of databases that must be tapped in the government's evolving homeland security vision, but the challenge is producing a 'tailored, fused product' for users and turning that data into knowledge that can be acted upon... The government should follow private-sector examples of data sharing, including Travelocity, Amazon.com, Napster and MapQuest, he said, adding that the common threads that make those firms successful in their respective domains are: (1) Compatible file formats. (2) Distributed search capabilities. (3) The use of portlets, style sheets and wizards to guide users. (4) The flexibility and timeliness of the data available. 
Ennis said defense officials would love to have MapQuest-like capabilities on the Secret Internet Protocol Router Network, where they could put in numerous specifications for an area and get a tailored map from the National Imagery and Mapping Agency within seconds. That capability is not currently possible..."
[September 16, 2002] "Web Services to Unify Applications." By Carolyn A. April and Ed Scannell. In InfoWorld (September 16, 2002). "As IT executives wrestle with islands of disconnected enterprise apps, vendors are scurrying to offer simpler application bridges driven by XML, Web services standards, and prebuilt business processes. CRM giant Siebel is the latest application vendor seeking to advance its integration play through open standards. The company this week will reveal the first fruits of its UAN (Universal Application Network) initiative in the form of a technology deal with middleware stalwart Tibco, building on the company's agreement last week with IBM. At the same time, San Francisco-based CRM outfit Salesforce.com has revealed plans to evolve its XML technology into a SOAP service, paving the way for easier integration of its hosted service with enterprise applications... San Mateo, Calif.-based Siebel's UAN initiative, first announced in April, is a federated, partner-based project geared toward developing a set of prepackaged business processes and common object models that will allow the company's CRM software to more easily integrate with other applications such as SAP's ERP suite. Siebel hopes these out-of-the-box templates for processes such as 'quote to order' will lower the pain threshold for melding applications and, in turn, encourage wary customers to upgrade their Siebel package. The resulting process library, sold by Siebel, will be available this fall, according to company officials. Other UAN integration partners, including webMethods, Vitria, and SeeBeyond, are updating their middleware engines to run the Web services-based business processes. Microsoft, which is readying a CRM application for its .Net platform, does not currently belong to UAN but 'is exploring the possibility with Siebel currently and will likely join,' said Dave Wascha, BizTalk Server product manager at Microsoft in Redmond, Wash. 
As part of this week's deal with Tibco, Siebel will be using Tibco's BusinessWorks development environment and tools to help build the best-practice processes as well as test and certify them, according to Raj Masaruwala, Tibco COO, who touted the broad use of Web services standards, including XSLT for its transformation of data models in different applications... Siebel is one of many racing toward Web services and XML as the latest application glue, including PeopleSoft with Apps Connect, SAP with its xApps, and J.D. Edwards with OneWorld XP, according to Jon Derome, an analyst at Boston-based Yankee Group..."
[September 16, 2002] "Intel Spinoff Looks to Boost Processing Power." By Jennifer Mears. In Network World (August 19, 2002). "Randy Smerik convinced Intel's management to spin off the company [focusing on high-speed content processing], and he incorporated it as Tarari last month. The company publicly launches this week with $13 million from Crosspoint Venture Partners and XMLFund in the bank; a new headquarters in San Diego; and a staff of 37. Intel holds a minority investment in Tarari. The spinoff plans to ship its first product, a PCI card that will boost network security processing, later this year. Tarari is building what Smerik refers to as 'content processors,' specialized processing engines that can look inside packets and intelligently route traffic based on internal payloads. Tarari's core technology is a silicon platform that is based on reprogrammable hardware, ASICs and software that can be designed to process specific applications - and do it at gigabit speeds. Tarari plans to sell the plug-and-play devices to OEMs that would fit them into network equipment and servers to boost intelligent processing power... Tarari's product will make offerings from companies such as Cisco, IBM, F5 Networks, Symantec, Oracle, Microsoft and Hewlett-Packard run better. Initially, Tarari will introduce specialized engines for network security and XML-based Web services, where Smerik's team sees the most need. 'Enterprises, data centers, service providers and telecommunications companies all want to raise the bar in how they can intelligently control and handle traffic,' Smerik says. 'They want to put virus checking in the network,' he says. 'They want to have XML switching and XML processing on servers. And they want to do all that without bogging down the network, which is what happens today. We solve that pain.' 
Companies such as Array Networks, Nauticus Networks and Inkra all have products that speed Secure Sockets Layer (SSL) acceleration, and others such as DataPower Technologies and Sarvega tackle XML processing. Smerik looks at such companies not as competitors, but as possible partners..." See the 2002-08-20 announcement.
[September 16, 2002] "Intel Spin-Off Tarari Tackles Layer 7 Processing." By Craig Matsumoto. In EE Times (August 19, 2002). "An Intel Corp. spin-off is using reprogrammable hardware to tackle the Extensible Markup Language (XML)-processing and virus-detection markets, both of which require deep inspection of incoming packets. Launching Monday (2002-08-19), Tarari Inc. (San Diego) is not chasing the same market as network processors, which tend to concentrate on the headers of Internet Protocol packets. Tarari's chips would handle so-called Layer 7 processing, which refers to the application layer atop the Open Systems Interconnect reference model. Layer 7 information includes the actual data being sent -- the content of a Web page, for example. Some network processors can tap Layer 7 information, but primarily for classification duties. Tarari officials say their hardware targets more complex functions -- specifically, XML processing and virus detection -- both of which require detailed examination of a packet's payload. Those functions can be handled in software, but only at limited speeds... Similar ideas are being touted in network security, where startups are developing dedicated hardware for traditionally software-based products such as firewalls..."
[September 16, 2002] "A Proposal for Unifying the Efforts of UBL and UN/CEFACT's Core." From Ray Walker, via Jon Bosak. Geneva, September 13, 2002. 2 pages. [A posting from Jon Bosak (Chair, OASIS UBL TC) to the UBL Comment list references a proposal to unify UBL and UN/CEFACT efforts. The document's proposal is said to be scheduled for discussion at the next UBL meeting 1-4 October 2002 in Burlington, MA, USA.] "Groups working under the auspices of the United Nations Centre for Trade Facilitation and Electronic Business (UN/CEFACT) and the Organization for the Advancement of Structured Information Standards (OASIS) are currently progressing the development of complementary electronic business specifications. Specifically, these are the core component specifications developed with UN/CEFACT and the Universal Business Language (UBL) specifications developed by the OASIS UBL Technical Committee... Taking into account that [...] it is proposed that the OASIS UBL TC and UN/CEFACT core component and related syntax activities be incorporated into the work plan of the UN/CEFACT Applied Technologies Group (ATG)... All members of the OASIS UBL TC will be granted full member status within the UN/CEFACT Forum, ATG, and ATG working groups immediately upon ratification of this proposal. They shall continue to enjoy membership rights in accordance with the terms of reference of the ATG. UN/CEFACT encourages the OASIS UBL TC members to join the work of other groups or working groups. The resulting work will be published by UN/CEFACT as royalty free, no license required, technical specifications to the world's electronic business community..." General references in: (1) "Universal Business Language (UBL)"; (2) UN/CEFACT Core Components Specifications Project Team; (3) "Joint UN/EDIFACT and ASC X12 Core Component Development Initiative."
[September 16, 2002] "Tame the Information Tangle: XML Data Management Systems." By Paul Sholtz. In New Architect Magazine Volume 7, Issue 10 (October 2002), pages 36-40. "Encoding information in XML and exposing it on the Web will help overcome these hurdles and enable fine-tuned, database-like queries on a global scale. Of course, if all the world's data is to be encoded in XML, we'll need more efficient ways to store and manage large volumes of XML data. To address that need, a new breed of document storage and management systems has appeared that's been specially optimized for publishing XML documents on the Web... If creating and maintaining relational data mappings seems like too much work for the scope of your XML application, one attractive alternative is to use a native XML database (NXD). The concept of a native XML database was first introduced by Software AG during the marketing campaign for its Tamino product line. Since then, the term has come into common usage among other companies developing similar products. NXDs are optimized for the storage and management of XML documents. Like other modern data management systems, they provide support for transactions, security, concurrent access, and query languages. Formally, a native XML database can be defined as a data management system that exhibits the following characteristics: (1) XML documents are the fundamental unit of logical storage in the system (similar to the way in which rows in a table are the fundamental unit of logical storage in a relational database system). (2) The system defines a logical model for XML documents, and stores and retrieves documents according to that model. At the very least, the model must include support for elements, attributes, PCDATA, and document order. Some examples of logical models that meet these requirements include the XPath data model and the XML InfoSet. (3) The system is independent of any underlying physical storage model. 
For example, it could be implemented using relational, hierarchical, object-oriented, or proprietary storage formats. NXDs are often a good choice for storing document-centric XML information. For example, NXDs support XML query languages that let you perform highly specialized queries like "find all documents where the second paragraph contains an italicized word." Most NXDs provide other powerful and sophisticated text-searching features, such as thesaurus support, word stubbing (for matching all forms of a word: swim, swam, and swimming, for example), and proximity searches (find all instances where the word "lake" occurs within five words of "swim"). These are extremely useful features when you're working with traditional documents, although they are usually much less important if you are working with data-centric XML information. There are other reasons you might want to consider using an NXD. Many such repositories are able to understand a DTD or an XML Schema, and can therefore provide data validation on the fly, as information is stored or updated. NXDs can also persist information such as document order, processing instructions, comments, CDATA sections, and entity usage, while many systems that attempt to store XML data into relational databases cannot..." See: "XML and Databases."
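The "second paragraph contains an italicized word" query that the article uses as its example of NXD document-centric search can be expressed in the XPath subset supported by Python's `ElementTree`. This is a minimal sketch of the query style, not tied to any particular NXD product:

```python
import xml.etree.ElementTree as ET

# Two toy documents standing in for a document collection.
docs = [
    "<doc><p>plain</p><p>has <i>emphasis</i> here</p></doc>",
    "<doc><p>plain</p><p>also plain</p></doc>",
]

# XPath-style selection: keep only documents whose second <p> child
# contains an <i> (italicized) element. The [2] predicate is 1-based.
matches = [d for d in docs
           if ET.fromstring(d).find("./p[2]/i") is not None]
print(len(matches))  # only the first document qualifies
```

A real NXD would evaluate such expressions against persistent indexes rather than re-parsing each document, which is where the performance advantage comes from.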
[September 16, 2002] "When Good Servers Meet Bad Clients: Coherity XML Database (CXD) 3.0." By Kurt Cagle. In New Architect Magazine Volume 7, Issue 10 (October 2002), pages 48-49. ['CXD has a fast database engine that can handle large numbers of documents. Offers a wide range of search query and transformation formats, including XSQL, XPath, XSLT, and, in the next version, XML Query. Supports Web services, SOAP, and .NET interfaces.'] "Coherity introduced its Coherity XML Database (CXD) for midrange to high-end enterprise customers in March. Its first customer was Hewlett-Packard, and there's enough in CXD to make its adoption understandable. CXD is currently supported on Windows NT/2000, Linux, Solaris, and HP-UX... The CXD system involves three principal layers. The back-most tier, the database itself, is a high-performance indexing system that creates distinct 'nodes' for each XML file entered into the system. These nodes are arranged linearly -- there is no direct connection between different nodes (or 'rows,' as they're called in the product documentation), though you can create indirect connections by using internal keys and other references. Once entered, both the data and the connections are indexed for quick retrieval. The result is a system that makes it possible to add content without the need for explicit schema requirements on the documents themselves, which is all too often the reality when working with various XML documents. You can associate schemas with source documents, however, making for more efficient indexing. But the flexibility of not being forced to use schemas makes CXD a good choice for handling large batches of legacy documents. This back tier is perhaps CXD's strongest feature. The database system is remarkably fast, and it easily can handle the inevitable scaling that all document-centric databases face. The system's middle tier involves four distinct methods of querying content: XSQL, XPath, XSLT, and XML Query... 
The Coherity CXD database server can definitely handle the increasing amount of XML circulating through the enterprise. Its indexing architecture is well designed and optimized for any number of applications, and it is well positioned to take advantage of Web services as they become more prevalent. On the other hand, the CXD client interface is a toy at best -- barely up to the simple task of administering the few key pieces of information necessary to make the application run, and useless as a tool for performing any meaningful work. Coherity says it has added a new and enhanced GUI and documentation revisions in the September software update..."
[September 14, 2002] "Physical Markup Language Update." By Christian Floerkemeier and Robin Koh. Auto-ID Center, Massachusetts Institute of Technology, Cambridge, MA, USA. June 2002. 9 pages. [The article gives 'an overview of the efforts to develop a Physical Markup Language (PML). The main goal of the Physical Markup Language is to provide a common, standardized vocabulary to represent and distribute information related to Auto-ID enabled objects. In this document, the types of information modeled in this vocabulary and their main usage scenarios are discussed. This brief also describes the division of the development effort into a PML Core component and a PML Extension component. The former focuses on developing a vocabulary to model the data directly generated by the Auto-ID infrastructure -- such as location and telemetry information. The work related to the PML Extensions leverages existing developments in the world of e-commerce standards to combine the low-level, instance-specific Auto-ID generated data with high-level product- and process-related information.'] "The main use of the PML language is to act as a common interface between the various components in the Auto-ID infrastructure... To facilitate the orderly development of the Physical Mark-Up Language (PML), research has been divided initially into two primary sections: PML Core and PML Extensions. PML Core provides the common standardized vocabulary to distribute information directly captured from the Auto-ID infrastructure e.g., location, composition and other telemetric information. As this level of data was not readily available before Auto-ID, PML Core has to be developed to represent it. PML Extensions are used to integrate information that is not generated by the Auto-ID infrastructure and is aggregated from other sources. The first extension that will be implemented is the PML Commerce Extension. 
The PML Commerce Extension involves the rich choreography and process standards that enable transactions within and between organizations to take place. Many organizations are already working on these standards and Auto-ID will evaluate and integrate the ones that best fit its users' requirements... The PML development team decided to regard XML Schemas as the implementation syntax for the PML and rely on the Unified Modeling Language (UML) to represent the model and share the PML definitions with its users. UML was chosen since it was perceived as a widely adopted standard for system specification and design. This approach will allow us to benefit from the advantages of XML Schemas over DTDs and at the same time still enable us to easily share the definitions and underlying models with a wider audience... The original XML specification did not include a way to combine several vocabularies when composing a document. This capability is however essential if the reuse of industry standard vocabularies is to be promoted rather than forcing each application to reinvent the same definitions. The XML Namespace specification was written to address this requirement. The framework to support PML Core and PML Extensions will be based on a combination of XML Schemas and Namespaces. The XML Schemas define and document the vocabulary. They also intend to allow for a straightforward validation of structural and semantic accuracy. The XML Namespaces enable the reuse of existing XML-based e-commerce standards within the framework..." See details in: "Physical Markup Language (PML)." [cache]
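The combination of XML Schemas and Namespaces that the brief describes -- mixing a PML Core vocabulary with a reused e-commerce vocabulary in one document -- can be illustrated with a toy example. Note that the namespace URIs and element names below are invented stand-ins, not the Auto-ID Center's actual PML definitions:

```python
import xml.etree.ElementTree as ET

# Hypothetical namespace URIs; the real PML namespaces differ.
PML = "urn:example:pml-core"
COM = "urn:example:pml-commerce"

# One document draws on two vocabularies, kept distinct by namespaces:
# Auto-ID-generated data (location) alongside commerce data (order ref).
doc = f"""
<pml:Observation xmlns:pml="{PML}" xmlns:com="{COM}">
  <pml:Location>Dock-4</pml:Location>
  <com:OrderRef>PO-1234</com:OrderRef>
</pml:Observation>
"""

root = ET.fromstring(doc)
# Namespace-qualified lookups: same local-name collisions would be
# impossible because each element is bound to its vocabulary's URI.
loc = root.find(f"{{{PML}}}Location").text
order = root.find(f"{{{COM}}}OrderRef").text
print(loc, order)
```

This is exactly the reuse the brief argues for: the commerce vocabulary can come from an existing e-business standard rather than being reinvented inside PML.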
[September 14, 2002] "Wireless World: Eerie Possibilities." By Ephraim Schwartz. In InfoWorld Issue 36 (September 09, 2002), page 28. "Every January, Saudi Arabia gets millions of pilgrims from around the world coming to Mecca for the hajj. It creates a huge problem for logistics, crowd control, and security. To that end, Luxoft is developing the software that works with an RFID tag -- think of it as a smart UPC code -- that will be given to each visitor as part of their visa. Unlike the UPC code on a product in the supermarket, which describes the product, the tag being developed by Luxoft identifies each person in detail, including name, country of origin, where they are staying, and even what language they speak. Stationary RFID readers around Mecca will pick up the data on each passerby for the purpose of monitoring crowd flow and predicting where people are going and how situations might unfold. The data will flow into a command center, Suvorov says, and depending on the information, local team leaders in the field could delay an event or change the direction of the group they are leading. For pilgrims, there might be kiosks with readers. If visitors are lost, the system can read their tags and give them directions, in their native language, to where they are staying. Luxoft is also working with Boeing to add RFID tags to critical parts in commercial airlines. Why? All parts have limitations, and they need to be maintained. There appears to be a black market for stolen airplane parts, Suvorov says, some of which are even recovered from crash sites. What if one such part ends up on a commercial airplane but its location makes it extremely difficult to get at? A self-describing RFID tag could provide a maintenance person with information about the part. Never mind that it was stolen; the first thing the mechanic needs to know is if it still works..." 
On the XML connection, see the preceding bibliographic reference and: (1) "Auto-ID Center Uses Physical Markup Language in Radio Frequency Identification (RFID) Tag Technology"; (2) "Integration Tightens ERP, Supply Chain Connections: RFID, XML Technologies Play Key Role In Ongoing Strategy," by Rick Gurin.
[September 14, 2002] "Speeding Up SOAP." By James R. Borck. In InfoWorld (September 13, 2002). "Companies gauging the ROI of adding Web services to their application-delivery mix must consider the oft-overlooked costs associated with increased network overhead and processing loads. Although Web services may represent a boon for developers, their weighty text-based XML messages, which are many times larger than the payload they're responsible for carrying, demand opening, rewrapping, and pushing across highly distributed network paths. As SOAP requests and responses mount, processing and communications capabilities will quickly show their shortfall. Aiming to reduce the friction in multitier application architectures, Chutney Technologies released its flagship product, Apptimizer HA (High Availability) 4.1 for SOAP, which caches and reuses high-volume calls to data and programmatic objects... Apptimizer is interoperable with most enterprise application servers, including BEA WebLogic, IBM WebSphere, Microsoft IIS, and Sun ONE (Open Net Environment). Installation of a single engine supports multiple servers and clusters simultaneously -- a consideration of note when examining pricing. Fortifying Apptimizer's socket-based communications is SmartSocket technology from Talarian, now Tibco, which we weigh as a plus for guaranteeing reliability in real-time platforms such as this. The Java-based administration interface offered a good vantage for centrally managing our server and cache farms. Although we would like to see Apptimizer grow to include more proactive server-administration tools, embracing performance tuning and alert messaging, or SNMP for example, we were able to easily perform basic operations on our clusters, such as replication and backup... Apptimizer supports both Apache and Microsoft SOAP with support for Microsoft SOAP 3 slated for updating in November. 
SOAP objects and responses, as well as WSDL, now become cacheable entities replete with comparable methods of expiration, session control, and logging. The toolkit represents a good foray into a developing architecture, although we would have preferred to see better monitoring and administration tools specific to the SOAP engine, as well as inclusion of transport protocols other than HTTP..." See the announcement from Chutney Technologies: "Chutney Technologies Eliminates Web Services Bottlenecks with New Chutney Apptimizer for SOAP."
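The core idea behind caching and reusing high-volume SOAP responses can be sketched generically. The design below (request-body hash as cache key, per-entry expiration) is an invented illustration of the technique, not Chutney Apptimizer's actual implementation:

```python
import hashlib
import time

class SoapResponseCache:
    """Illustrative sketch: cache SOAP responses keyed on the request
    XML, with time-based expiration, so repeated identical requests
    skip the back-end service entirely."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (expires_at, response_xml)

    def _key(self, request_xml):
        # Identical request bodies hash to the same key.
        return hashlib.sha1(request_xml.encode("utf-8")).hexdigest()

    def get(self, request_xml):
        entry = self.store.get(self._key(request_xml))
        if entry and entry[0] > time.time():
            return entry[1]
        return None  # miss or expired

    def put(self, request_xml, response_xml):
        self.store[self._key(request_xml)] = (
            time.time() + self.ttl, response_xml)

cache = SoapResponseCache(ttl_seconds=60)
req = "<soap:Envelope>...getQuote request...</soap:Envelope>"
first = cache.get(req)                 # miss: None on first call
cache.put(req, "<soap:Envelope>...quote response...</soap:Envelope>")
second = cache.get(req)                # hit: cached response returned
print(first is None, second is not None)
```

A production design would also need cache invalidation hooks for state-changing operations, which is where most of the real engineering effort lies.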
[September 14, 2002] "WS Standards Hit the Streets. Toolkits Tackle Standards." By Carolyn A. April, Heather Harreld, and Paul Krill. In InfoWorld Issue 36 (September 09, 2002), pages 1, 34-36. "Promising access to a wider set of standards, developer toolkits are emerging to address growing enterprise demand for internal Web services and application integration. As toolkits from the likes of Microsoft, IBM, and Cape Clear arrive to help developers expose enterprise applications as Web services, momentum builds around a broader effort to unify standards. To that end, the W3C this month is expected to make available for public review a proposed Web services reference architecture, according to Eric Newcomer, who is working with the W3C on the draft design and is the CTO of Iona in Dublin, Ireland... The proposed architecture defines the relationships and roles of the Web services sender, receiver, and intermediaries such as a third-party security layer or billing service, Newcomer said. Additionally, the architecture will help define the functionality that gets added on top of a Web services message, and it will show how to represent such things as the registry, metadata, and semantic rules. 'It will bring order to chaos,' Newcomer said. 'It can be used to guide future specifications for Web services as well.' The architecture comes as enterprises continue to wrestle with the deployment of Web services to integrate business applications. The toolkits further this process by promising to link systems and business units in nontraditional ways. In August, Microsoft released a beta version of its WSDK (Web Services Development Kit) to enable developers to build Web services that comply with the company's WS-Security, WS-Attachments, and WS-Router specifications... 
IBM for its part has released a development environment to boost deployment of Web services in conjunction with existing applications, said Stefan Van Overtveldt, WebSphere's director of technical marketing at IBM. The kit supports SOAP, UDDI, WSDL, and WS-Security. New kits will be released with support for new standards, Van Overtveldt said. 'The key thing for us is to have a toolkit that corresponds with a specific level of Web services so that you're guaranteed [interoperability] as you develop applications.'" See also the sidebar 'Dueling Toolkits: Microsoft vs. IBM,' by Jon Udell: "Although Microsoft's WSDK is just catching up to IBM's WSTK with regard to WS-Security and DIME -- which the WSTK demonstrated in July alongside the more conventional MIME-oriented SOAP with Attachments -- it breaks new ground with support for (and demonstration of) WS-Routing/WS-Referral. The SOAP router, which dedicates the ASP.Net interface for processing HTTP requests with custom handlers, acts on the To and Via elements of the WS-Routing specification. In the demo included with the toolkit, a SOAP message bounces from one instance of a service to another; a referral file uses the rewriting feature of WS-Referral to alter the route dynamically. These are the early days for this technology, but the prospects are intoxicating... The XKMS (XML Key Management Specification), which pushes a chunk of PKI complexity into the cloud, offers some hope. VeriSign and Entrust implement XKMS services today, and IBM's WSTK includes an early XKMS demo. But there's a scary amount of inertia to overcome. Until we get key distribution and management schemes that people can understand and use, Web services security is speeding toward a brick wall'..."
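The To and Via elements Udell mentions are carried in a SOAP header block that spells out the message path. As a minimal sketch of that idea (namespace URI and endpoint URLs here are hypothetical placeholders, not taken from the WS-Routing draft), a routed envelope can be assembled like this:

```python
# Sketch: a SOAP envelope carrying a WS-Routing-style path header with a
# final destination ("to") and ordered intermediaries ("via"). The routing
# namespace URI and the soap:// endpoints are illustrative assumptions.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ROUTE_NS = "urn:example:ws-routing"  # hypothetical, not the real draft URI

def build_routed_envelope(to_url, via_urls, body_xml):
    """Return a serialized envelope whose header lists the message path:
    one final destination plus the hops a SOAP router forwards through."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    header = ET.SubElement(env, f"{{{SOAP_NS}}}Header")
    path = ET.SubElement(header, f"{{{ROUTE_NS}}}path")
    ET.SubElement(path, f"{{{ROUTE_NS}}}to").text = to_url
    fwd = ET.SubElement(path, f"{{{ROUTE_NS}}}fwd")
    for url in via_urls:  # each intermediary hop, in forwarding order
        ET.SubElement(fwd, f"{{{ROUTE_NS}}}via").text = url
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    body.append(ET.fromstring(body_xml))
    return ET.tostring(env, encoding="unicode")

msg = build_routed_envelope(
    "soap://final.example.org/service",
    ["soap://routerA.example.org", "soap://routerB.example.org"],
    "<ping xmlns='urn:example:demo'/>")
```

A referral file, in this model, would simply rewrite the list of via hops before the router forwards the message.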
[September 14, 2002] "Controlling Media." By Stuart J. Johnston. In InfoWorld Issue 36 (September 09, 2002), pages 14-16. "... Although many companies have taken advantage of rich media for years, they typically used it only on the corporate network, which had the bandwidth and security to warrant it. Transmitting proprietary information outside the firewall until recently has been bandwidth-constrained and risky. Further, PCs are open devices that offer would-be intruders many points to intercept the data streams. As corporate use of audio and video increases, IT will need to increase control over those streams. The shift to multimedia communications, both inside and outside the firewall, means that DRM (digital rights management) is fast becoming another tool in the IT department's information security kit. New and soon-to-be-released media services products from RealNetworks and Microsoft, both of which contain proprietary DRM systems, take the next step toward transforming streaming audio and video into competitive essentials. Both companies' DRM technologies provide granular approaches to media licensing, allowing IT to specify users' abilities to access digital streams, what they can do with them, and for how long. DRM systems typically are comprised of a server that stores and transmits the audio and video data, a licensing authority that often resides on the same server, and rights-enabled clients. Microsoft's Windows Media Player 9 and accompanying server and media-encoding products entered beta on Sept. 4. RealOne Player 2 entered beta in late August. Meanwhile, RealNetworks shipped final code of its Helix Universal Server in July. Microsoft's media server software will ship simultaneously with WMP9 later this year. One question still to be addressed is what level of security can really be achieved? Both systems use PKI technology that allows only the intended receiver of a stream or a file to open it. 
Still, Microsoft's DRM system was hacked last year, requiring the company to quickly upgrade it. Although that may strike fear into the hearts of record labels and movie studios more than into IT shops, it does give pause. Although RealNetworks' DRM hasn't been hacked, company officials acknowledge that it's only a matter of time..." See: "XML and Digital Rights Management (DRM)."
[September 14, 2002] "Liberty For All?" By P.J. Connolly. In InfoWorld Issue 36 (September 09, 2002), page 26. "... I finally dug out the specifications for authentication and identity federation that the PR folks at the Liberty Alliance Project sent to me back in mid-July and looked them over as promised. Of course, those documents were already two-month-old drafts by the time I got them, and I would be surprised to learn that nothing's changed since May. Obviously, one thing that has changed is the underlying goal, or at least the name by which we call it. The drafts still refer to 'single sign-on,' but Liberty's spokesfolk have already softened that to 'simplified sign-on,' and wisely so... What appeals to me most about the Liberty Alliance project is that it's open to just about anyone who wants to play by the rules -- and because those rules aren't set exclusively by one vendor seeking world domination, the playing field is relatively level... The second thing that grabs me is that adopting Liberty doesn't require a major upheaval of a site's authentication scheme. Most of the 'Liberty-specific' details -- particularly the XML schema that Liberty uses as a framework -- can be slid into existing authentication methods without affecting site security or stability. These specs are the start of Liberty's efforts. For now, drafters recognize that the best they can do is recommend; requirements will undoubtedly be part of future iterations. That's basically a good thing -- it can't be easy to draft a specification when foundation technologies including SOAP, SAML (Security Assertion Markup Language), and thin client markup languages such as HDML (Handheld Device Markup Language) and WML (Wireless Markup Language) are just coming together. Under these circumstances, it would be nigh impossible for those driving the Liberty Alliance to come up with a meaningful branding program. 
But a 'Liberty logo' indicating that XYZ Corp.'s Web site conforms to or uses the Liberty federation methods is necessary. Sure, not having a logo may spare the project's membership from a nasty bun-fight over who gets to display the logo. But I'd feel a lot safer about federating my identity between sites if I knew how it was being done, or at least whose method I was trusting..." See: "Liberty Alliance Specifications for Federated Network Identification and Authorization."
[September 14, 2002] "Web Services Applications." By Jon Udell. In InfoWorld (September 13, 2002). ['Business process integration and application integration are the top goals for Web services, but first Web services must cross the last mile to the desktop. Our survey shows that a broad mix of client-side technologies is preparing to meet the need. The browser remains the dominant Web services client, but there is a growing demand for context-preserving user interfaces and real-time two-way communication. Early solutions show promising innovation, but they push ahead of standards. For now, survey respondents are watching and waiting.'] "When the 2002 InfoWorld Web Services Applications Survey asked readers to name their top three goals for Web services, respondents cited business process integration (63 percent), application integration (58 percent), and Web-based application development (57 percent). Achieving these objectives will require more than an XML-enabled business Web that carries conversations made of SOAP messages among machines and applications. The modernization of EDI, left unfinished in the wake of the dot-com flameout, will finally go forward and wring vast inefficiencies out of b-to-b processes. But where are the people in this mechanized utopia? They are the consumers of the goods and services that the system cranks out. More subtly, they are the touchpoints of the business Web. Consider the canonical Web services example, an XML-ized purchase order making its way through a purchase cycle. At each step, a person must apply the grease of human judgment that keeps the cogs turning. How that person interacts with the cloud of Web services is a crucial question that affects both Web services plumbing and application design... The browser is today, and will for some time remain, the dominant way to interact with Web services on the desktop. More accurately it's a platform that supports many different modes of interaction. 
Cloud-based SOAP clients can reflect data into the browser as HTML. The browser can host Java, ActiveX, Flash, or other kinds of components that make SOAP calls; or the browser itself can make SOAP calls using its built-in script engine. The browser can also suck in raw XML data and process it locally, perhaps even while offline, using built-in parsing and transformation engines. As a broader definition of Web services takes hold, the browser's role may solidify even more. SOAP is morphing from an RPC (Remote Procedure Call) protocol into a general document exchange protocol. The schism that had divided the SOAP and REST (Representational State Transfer) architectural styles has narrowed. Alongside Google's purely SOAP-oriented API now stands Amazon's hybrid API; it responds to SOAP calls but also supports standard Web URLs that route queries through server-side XSLT (Extensible Stylesheet Language Transformation) to produce vanilla XML or HTML results..." See the sidebar 'A New Breed of Smart Desktop': "The three technologies -- Altio's AltioLive, Digital Harbor's PiiE (Professional Interactive Information Environment), and Fourbit's Fablet -- use XML not only to describe application behavior and local/remote componentry, but also to normalize data drawn from disparate sources into a common pool that can be locally sorted, queried, and stored. Each uses a proprietary storage scheme, but all are well-positioned to use -- and could help drive the market for -- embedded XML databases..."
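The hybrid-API contrast can be made concrete: the same request expressed as a plain Web URL (the REST style) and as a SOAP document (the RPC style). The endpoint, parameter names, and namespace below are invented for illustration, not Amazon's actual API:

```python
# Sketch of the two access styles the article contrasts. All endpoint
# URLs, parameter names, and namespaces are hypothetical examples.
from urllib.parse import urlencode

REST_ENDPOINT = "http://xml.example.com/onca/xml"  # hypothetical

def rest_query_url(keyword, fmt="xml"):
    """REST style: the whole request is an ordinary, bookmarkable URL."""
    return REST_ENDPOINT + "?" + urlencode({"KeywordSearch": keyword, "f": fmt})

def soap_query(keyword):
    """SOAP style: the same request as an XML document to be POSTed."""
    return (
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        "<soap:Body>"
        f'<KeywordSearch xmlns="urn:example:search">'
        f"<keyword>{keyword}</keyword></KeywordSearch>"
        "</soap:Body></soap:Envelope>"
    )

url = rest_query_url("xml books")
envelope = soap_query("xml books")
```

Server-side, a single XSLT layer can answer the URL form with HTML or vanilla XML while the SOAP form is dispatched to the same back end.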
[September 14, 2002] "Collaxa WSOS 2.0: An Introduction." A Collaxa Technical White Paper available from the company development website. Draft version: WSOS 2.0 beta 1. Last modified: September 6, 2002. 23 pages. ['This white paper describes how the Collaxa Web Service Orchestration Server and BPEL Scenario can integrate web services into collaborative business processes and long-running business transactions. The Collaxa WSOS is based, soup-to-nuts, on open standards (XML, SOAP, WSDL, BPEL4WS, BTP, WS-Coordination and WS-Transaction) and interoperates with Microsoft .Net, IBM WebSphere and BEA Workshop services'] "The orchestration requirements (asynchronous interactions, flow coordination, business transaction management and activity monitoring) are common to all applications that need to coordinate multiple synchronous and asynchronous web services into a multi-step business transaction. Implementing them in custom code as part of each service-oriented application is complex and difficult to maintain... A new set of Web Services standards (BPEL4WS, WS-Transaction and WS-Coordination) and a new category of software infrastructure called the Web Service Orchestration Server are emerging to reduce the cost and complexity associated with delivering and managing these types of distributed, process-centric, service-oriented applications... The Collaxa Web Service Orchestration Server helps enterprises reduce the cost and complexity of orchestrating web services into long-running business transactions and collaborative business processes. It is based on interoperability standards (such as XML, SOAP, WSDL, BPEL4WS, WS-Coordination, WS-Transaction and JMS) and works with your existing IT infrastructure including portals, application servers and messaging infrastructure... 
The BPEL Scenario is an innovative and flexible orchestration abstraction that enables developers to capture the flow, interaction logic and business rules that tie a set of services into an end-to-end business process. The Orchestration Server encapsulates the facilities needed to execute BPEL Scenarios and guarantee the integrity of the long-running business transaction or collaborative business process. The Orchestration Console provides administration, debugging capabilities and activity monitoring to help enterprises manage distributed business processes..."
[September 14, 2002] "WSOS Tunes Up Services." By James R. Borck. In InfoWorld (July 5, 2002). ['WSOS is a cost-effective tool for conducting transactions in the unreliable world of distributed Web services. Easy-to-use Java constructs allow developers to quickly build branched, parallel-tasking services-based applications. The solid debugging and management console will advance ongoing development and administration efforts.'] "Web Services aim to reduce the complexity and cost of business process automation. But using application components distributed beyond the control of any single IT department raises serious reliability concerns. Knowing that CTOs must better orchestrate BPM (business process management) in such environments, Collaxa brings to market WSOS (Web Service Orchestration Server). WSOS provides mechanisms to control the sequencing and flow of Web services conversations by stitching ad hoc services together with underpinnings of reliability. WSOS establishes reliability by enabling recovery from failure and ensuring the safe completion of a business process by maintaining persistence, even when transactions are extended in time and involve multiple partners... The run-time environment core of WSOS is the Orchestration Server, which provides an application server container that coordinates the ebb and flow of services interactions. Deployed to the container is Collaxa's ScenarioBeans, a JSP (JavaServer Pages)-like approach that has most everything a developer needs to model application communications via Web services. Based on standards such as XML, SOAP (Simple Object Access Protocol), WSDL (Web Services Description Language), and BTP (Business Transaction Protocol), ScenarioBeans contain business logic, deployment descriptors, and the elements necessary to make an application available as a Web service, including WSDL files and SOAP listeners. 
In all, with familiar programming constructs and new features such as hot deployable applications, WSOS offers the underpinnings for substantial time savings during development and deployment... for all its capabilities, its limited breadth also hinders its usefulness. Although Collaxa has imminent plans to release support for the open-source JBoss, provisions for any enterprise-class application servers, such as Oracle9i and IBM WebSphere, are not expected until year's end. Also, it uses the RPC (Remote Procedure Call) style of SOAP services, making it incompatible with Microsoft's document style layout. It has no integrated communication support for other XML-based BPM languages on the table, such as WSFL (Web Services Flow Language) or XLANG. But as it matures, WSOS should prove a reliable framework for building Web services..."
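The "branched, parallel-tasking" pattern the review praises (fan out to several services, join the results, then branch) can be sketched independently of Collaxa's Java-based ScenarioBeans API, which is not shown here. This is a minimal illustration of the orchestration pattern only, with mock services standing in for real web service calls:

```python
# Sketch of the orchestration pattern: invoke two (mock) services in
# parallel, join on both results, then branch -- as a BPEL-style flow
# plus switch would. Service names and the 1000-unit credit limit are
# invented for illustration.
import asyncio

async def call_service(name, delay, result):
    """Stand-in for an asynchronous web service invocation."""
    await asyncio.sleep(delay)
    return name, result

async def purchase_flow(amount):
    # Flow coordination: credit check and inventory check run in parallel.
    credit, stock = await asyncio.gather(
        call_service("credit", 0.01, amount <= 1000),
        call_service("inventory", 0.01, True),
    )
    # Branch on the joined results.
    if credit[1] and stock[1]:
        return "order-confirmed"
    return "order-rejected"

status = asyncio.run(purchase_flow(500))
```

An orchestration server adds what this sketch omits: persistence of the flow's state so the transaction survives failures over hours or days.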
[September 14, 2002] "Wal-Mart Mandates Secure, Internet-Based EDI For Suppliers." By Richard Karpinski. In InternetWeek (September 12, 2002). "This past week, Wal-Mart made a little-noted disclosure that could nonetheless have a huge impact on its bottom line -- and on e-business in general. It said it has begun rolling out a new EDI platform from vendor iSoft Corp. that will eventually drive its tens of thousands of suppliers to do EDI transactions with Wal-Mart via the Internet -- rather than over more expensive value-added networks (VANs). In a related note, vendor IPNet Solutions rolled out its own AS2 solution for Wal-Mart suppliers last week. Wal-Mart and its vendors will be leveraging the relatively new AS2 standard, which adds significant security and scalability to Internet-based EDI. It is testing AS2-based EDI with its largest suppliers today and is beginning to reach out to small and medium suppliers this month. Analysts cite the retail giant's move as a major landmark that could have major e-business repercussions... Wal-Mart has more than 14,000 suppliers who process more than $217 billion worth of transactions via EDI annually. The AS2 standard grows out of work in the Internet Engineering Task Force to build a more reliable and secure way of moving messages -- not just EDI transaction sets but any sort of message -- via the Internet. It includes built-in support for public key infrastructure (PKI), which secures every transaction sent over the network, not just the communications channel itself. Vendor iSoft's Commerce Suite Software takes advantage of the AS2 standard to provide trading community management, public key infrastructure technology, and IP-based secure communication infrastructure. Data transmitted over public and private global networks using AS2 will be digitally signed, secure and non-repudiated..." 
See also: (1) EDI over the Internet-AS2 interop testing (Drummond); (2) announcement August 27, 2002: "Over 20 Different AS1 and AS2 Software Products Certified by DGI for UCC Sponsored Interoperability Tests": "Drummond Group Inc. (DGI), a vendor neutral consultancy and leader in software interoperability testing, announced today a list of 23 products successfully passing the latest round of interoperability testing for the AS1 and AS2 EDI/XML over the Internet specifications. The AS1 and AS2 standards offer companies a direct and secure method to communicate EDI or XML transactions over the Internet, which saves money and adds flexibility and control on how the data is utilized or reported. The tests were sponsored under the Uniform Code Council Inc. (UCC) Interoperability Compliance Program... The following companies passed the AS2 test: bTrade, inc., Cleo Communications, Hewlett Packard, Cyclone Commerce, Global eXchange Services, Intertrade Systems Corporation, IPNet Solutions, iSoft, Sterling Commerce, TIBCO Software Inc., Vitria and webMethods, Inc... AS2 (Applicability Statement 2) is the draft specification standard by which vendor applications communicate EDI (or other data such as XML) over the Internet using HTTP. When implemented, AS2 will enable users to connect, deliver and reply to data securely and reliably over HTTP."
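An AS2 exchange is, at bottom, an HTTP POST whose headers identify the trading partners and request a signed receipt (an MDN). As a rough sketch of that shape (the partner identifiers and message ID below are invented, and the digest here is only a stand-in for AS2's real S/MIME signing and encryption machinery):

```python
# Sketch of an AS2-style HTTP POST. Real AS2 wraps the payload in
# S/MIME with X.509 signatures; the SHA-1 MIC computed here only
# illustrates what the sender keeps to verify the returned receipt.
# Partner names and the message ID are hypothetical.
import base64, hashlib

def as2_post(payload: bytes, sender: str, receiver: str, message_id: str):
    """Return (headers, mic): the HTTP headers for the POST, plus the
    payload digest the sender retains to check against the MDN receipt."""
    mic = base64.b64encode(hashlib.sha1(payload).digest()).decode()
    headers = {
        "AS2-Version": "1.1",
        "AS2-From": sender,
        "AS2-To": receiver,
        "Message-ID": message_id,
        "Content-Type": "application/edi-x12",
        # Requesting a receipt is what makes the exchange non-repudiable:
        "Disposition-Notification-To": sender,
    }
    return headers, mic

edi = b"ISA*00*..."  # a (truncated) X12 interchange
headers, mic = as2_post(edi, "SUPPLIER01", "WALMART", "<msg-1@example.com>")
```

Because both the request and the receipt travel over plain HTTP, the VAN drops out of the loop entirely, which is where the cost savings come from.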
[September 14, 2002] "Data Security Oath." By Matt Hicks. In eWEEK Volume 19, Number 36 (September 09, 2002), page 31. "For IBM fellow Rakesh Agrawal, modern database systems need to take a cue from the medical profession by adopting a trusted relationship between enterprises collecting data and the customers providing it that is similar to the one between physicians and patients... Hippocratic databases would negotiate the privacy of information provided by a consumer to companies. The database owner would have a policy built into the database about storage and retrieval of personal information, and the database donor would be able to accept or deny it. Each piece of data would have specifications of the database owner's policies attached to it. The policy would specify the purpose for which information is collected, who can receive it, the length of time the data can be retained and the authorized users who can access it... Phil Bernstein, a senior researcher at Microsoft Research, in Redmond, Wash., agreed with the concept of a Hippocratic database but said privacy can't stop there: It needs to extend beyond databases to areas such as applications, system engineering and XML protocols... IBM researchers have prototyped the Hippocratic database concept to work with the P3P (Platform for Privacy Preferences) standard from the World Wide Web Consortium, which helps determine the data a Web site can collect. P3P allows a Web site to encode its collection and use practices in XML in a way that can be compared with a user's preferences. The standard itself doesn't include a way to enforce that a site follows its policy, but the prototype allows for the database to check whether the site owner's and user's preferences match. With the Hippocratic database and its components, metadata tables would be defined for each type of information collected, IBM officials said. 
A Privacy Metadata Creator would generate the tables to determine who should have access to what data and how long that data should be stored. A Privacy Constraint Validator would check whether a site's privacy policies match a user's preferences..." See the W3C Platform for Privacy Preferences (P3P) Project.
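The Privacy Constraint Validator's job (check a site's declared collection policy against a user's preferences before accepting data) can be sketched in a few lines. The field names and policy vocabulary below are invented for illustration; real P3P policies are XML documents with a fixed vocabulary of purposes, recipients, and retention levels:

```python
# Sketch of a P3P-style constraint check: accept data only if every
# item the site collects stays within the user's stated limits.
# The dictionary schema here is an invented simplification of P3P.
def policy_acceptable(site_policy, user_prefs):
    """True iff each collected item satisfies the user's constraints."""
    for item, terms in site_policy.items():
        limit = user_prefs.get(item)
        if limit is None:
            return False  # user never agreed to this item being collected
        if terms["retention_days"] > limit["max_retention_days"]:
            return False
        if not set(terms["recipients"]) <= set(limit["allowed_recipients"]):
            return False
    return True

site = {"email": {"retention_days": 30, "recipients": ["ours"]}}
user = {"email": {"max_retention_days": 90, "allowed_recipients": ["ours"]}}
ok = policy_acceptable(site, user)
```

The Hippocratic-database proposal goes a step further than P3P itself: the same constraints would be attached to the stored rows as metadata and enforced at query time, not merely declared at collection time.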
[September 14, 2002] "Data Launched on Path to Integration." By Renee Boucher Ferguson. In eWEEK Volume 19, Number 36 (September 09, 2002), page 13. "Ascential Software Corp. and SAP AG are rolling out tools that will enable data normalization -- a key step in making Web-services-based application integration work. The next version of Ascential's DataStage data integration platform, code-named Twister, will add Web services functions to complement the metadata management and data quality assurance features in the data movement tool. Twister, by supporting Simple Object Access Protocol, Web Services Description Language and XML, provides wizards that allow IT managers to treat Web services sessions as a source and target of data in the data integration process. Users will be able to take integration events and expose them as Web services through standard protocols. This will allow companies to create and manage scalable, complex data integration infrastructures and enable specific functions as Web services, according to officials at Ascential, of Westboro, Mass... Dale Powers, enterprise data architect at NStar, a utility provider in Westwood, Mass., is using Ascential to build a data warehouse and data marts that will conduct performance and reliability analysis for NStar's distribution network. Powers wants to maintain a repository of rules as he defines data movement processes. Powers is exploring ways to capture external customer data and move it inside NStar's firewalls, perhaps through Web services. The idea of integrating metadata from multiple environments becomes even more important as companies look to conduct more business-to-business and business-to-consumer transactions, he said. 'The underlying challenge is always having your data quality at a level that can send a message,' said Powers. 'I am assuming XML is at the core, and messages have data and metadata. 
If you can't translate that and relate that to your internal nomenclature, your messages can't have the meaning you want them to.' For instance, an enterprise has to ensure that the word 'customer' means the same thing in data sources that are being integrated, or data corruption will likely result..."
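The nomenclature problem Powers describes is concrete: before records from two sources can be merged, their field names must be translated into one internal vocabulary. A minimal sketch, with source field names invented for illustration:

```python
# Sketch of field-name normalization across data sources. The mapping
# table and the source schemas ("cust_no", "clientId", ...) are
# hypothetical examples, not any vendor's actual metadata.
CANONICAL = {
    "cust_no": "customer_id",    # source A's name for the customer key
    "clientId": "customer_id",   # source B's name for the same thing
    "cust_nm": "customer_name",
    "clientName": "customer_name",
}

def normalize(record):
    """Rename each field to the internal canonical name, if one is known."""
    return {CANONICAL.get(k, k): v for k, v in record.items()}

a = normalize({"cust_no": 7, "cust_nm": "NStar"})
b = normalize({"clientId": 7, "clientName": "NStar"})
```

Tools like DataStage maintain exactly this kind of mapping as managed metadata rather than ad hoc code, which is what makes the rules reusable across integration jobs.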
[September 14, 2002] "Developing Enterprise Java Applications Using DB2 Version 8." By Grant Hutchison (DB2/IBM Integration Center, IBM Toronto Lab). From IBM, DB2 Developer Domain. September 2002. "IBM DB2 Universal Database (UDB) supports all the key Internet standards, making it an ideal database for use on the Web. It has in-memory speed to facilitate Internet searches and complex text matching combined with the scalability and availability characteristics of a relational database. DB2 UDB supports WebSphere, Java, and XML technology, which make it easy for you to deploy your e-business applications. DB2 Version 8 also adds self-managing and resource tuning (SMART) database technology to enhance the automation of administration tasks. DB2 Universal Database supports many types of Java programs. It provides driver support for client applications and applets written in Java using JDBC. It also provides support for embedded SQL for Java (SQLJ), Java user-defined functions (UDFs), and Java stored procedures. This paper discusses the Java application development environment provided by the DB2 UDB Universal Developer's Edition Version 8 (UDE)... DB2 UDB XML technology: ... To facilitate storing XML documents as a set of columns, the DB2 XML Extender provides an administration tool to aid the designer with XML-to-relational database mapping. The Document Access Definition (DAD) is used to maintain the structural and mapping data for the XML documents. The DAD is defined and stored as an XML document, making it simple to manipulate and understand. New stored procedures are available to compose or decompose the document. New XML features for DB2 Version 8 include: (1) XML schema validation; (2) XML stylesheet (transformation) support; (3) SQL/XML enhancements including: XMLAGG, XMLATTRIBUTES, XMLELEMENT, and XML2CLOB... Mapping from XML to relational data is simple using the WebSphere Studio RDB to XML mapping editor. 
The WebSphere Studio product is a replacement for the previous DB2 XML Extender Wizard used to create document access definition (DAD) files. You can map columns in one or more relational tables to elements and attributes in an XML document. You can generate a DAD script to either compose XML documents from existing DB2 data, or decompose XML documents into DB2 data. You can also create a test harness to test the generated DAD file..."
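The SQL/XML publishing functions named above compose XML directly from relational rows: XMLELEMENT wraps a value in a tag, and XMLAGG concatenates a set of elements produced across rows. As a rough sketch of that behavior in Python terms (the sample column values are invented; consult the SQL/XML documentation for the exact DB2 syntax):

```python
# Sketch of what XMLELEMENT and XMLAGG do, modeled in Python. A DB2
# query along these lines (syntax per SQL/XML; verify against the docs)
# might read:
#   SELECT XML2CLOB(XMLELEMENT(NAME "dept",
#            XMLAGG(XMLELEMENT(NAME "emp", e.lastname))))
#   FROM employee e
def xmlelement(name, content):
    """Wrap a value in an element, as XMLELEMENT does for one row."""
    return f"<{name}>{content}</{name}>"

def xmlagg(elements):
    """Concatenate elements produced across rows, as XMLAGG does."""
    return "".join(elements)

rows = ["HAAS", "THOMPSON", "KWAN"]  # sample lastname column values
doc = xmlelement("dept", xmlagg(xmlelement("emp", r) for r in rows))
```

The real functions additionally handle XML escaping, attributes (XMLATTRIBUTES), and conversion of the result into a CLOB (XML2CLOB), which this sketch ignores.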
[September 13, 2002] "Proposal to Unify Web Services Standards Gets Backing." By James Niccolai and Paul Krill. In InfoWorld (September 13, 2002). "A proposal by Oracle that could help unify emerging specifications for orchestrating Web services met with a mostly positive reaction Thursday at a meeting of the World Wide Web Consortium. The database vendor asked a W3C working group to form a new industry-wide working group whose charter would be to find consensus among a handful of emerging Web services standards for choreographing business-to-business transactions. Oracle said it was concerned that too many overlapping specifications, supported by various vendors, already exist. The proposal was put to a vote Thursday at a meeting of the W3C's Web Services Architecture Working Group in Washington. The proposal drew 16 votes in favor, eight abstentions, and one vote against, from Microsoft, according to Jeff Mischkinsky, Oracle's director of Web services standards, who attended the meeting... According to the W3C, two votes were taken. The first, to determine whether action needs to be taken on Web services choreography and orchestration, was unanimous. The second vote, on whether that work should be done within the W3C, received support from an 'overwhelming majority,' said W3C spokeswoman Janet Daly. The W3C would not provide official tallies on who voted in what way. According to Oracle's Mischkinsky, among those abstaining were BEA Systems and IBM. A spokesman for BEA confirmed that his company was among those who abstained but declined further comment, saying the meeting was not intended to be a public one. Spokespeople for IBM and Microsoft were unable to confirm their companies' positions late Thursday. Microsoft declined comment on Friday as well. 
Microsoft, BEA, and IBM have promoted a Web services choreography specification dubbed BPEL4WS (Business Process Execution Language for Web Services), which was proposed shortly after Sun Microsystems submitted a similar plan, called Web Services Choreography Interface, to W3C. BPEL4WS has not been formally submitted for W3C perusal. The working group's vote does not necessarily mean a new working group will be formed for Web services choreography, Daly said. The entire W3C must now ponder the issue. Under Oracle's proposal, the working group would help define a unified choreography language that would be based on WSDL. It argued that Web services standards should be developed in the open and be made available on a royalty-free basis. Novell has made a similar argument. Daly said technology recommended by W3C must be available on a royalty-free basis. At issue is a specification that would provide developers with a high-level view of how different types of Web services could be brought together, or 'choreographed,' to form larger, more complex applications. For example, the specification might provide a standard way of describing a Web service for authorizing credit cards which could then be tied into, say, an online auction service..."
[September 11, 2002] "Oracle Appeals to W3C on Web Services." By Wylie Wong. In ZDNet News (September 11, 2002). "In order to avoid a conflict over Web services standards, Oracle has asked the World Wide Web Consortium to decide on the language to use. Oracle hopes to avert a battle over rival efforts to create Web services standards by asking the leading oversight group to pick a winner. On Thursday, the software giant will ask the World Wide Web Consortium (W3C) to decide on the standard 'choreography' language that will allow multiple Web services to work together within and between businesses. At least two proposals are on the table. In June, Sun Microsystems created the Web Services Choreography Interface, or WSCI, in partnership with SAP, Intalio and others. In August, Microsoft and IBM merged their competing languages -- called Xlang and Web Services Flow Language (WSFL), respectively -- to create a combined language called Business Process Execution Language for Web Services (BPEL4WS). Executives from BEA Systems, which helped create both WSCI and BPEL4WS, said in August that they would work with the rest of the industry to settle on one standard. Oracle executives say they hope the W3C agrees to the task -- and either chooses one of the specifications as the standard or combines them into a single standard. Don Deutsch, Oracle's vice president of standards strategy and architecture, said the main goal is to settle on one royalty-free standard to prevent the emerging Web services market from fragmenting. Web services won't work effectively unless the entire tech industry coalesces around a single set of standards, analysts say. Royalty-free licensing is also important in ensuring mass adoption because it prevents patent holders from charging people to use the specification, Deutsch added... Oracle's proposal could help ease tensions between Sun and the tandem of Microsoft and IBM. 
Sun has been embroiled in a bitter feud with Microsoft and IBM over Web service standards, including a year-long dispute over Sun's desire to join a Web services coalition that Microsoft and IBM created to promote the technology. Sun executives, over the past several months, have been the most vocal in publicly expressing concern that IBM and Microsoft have the ability to charge 'tolls' to developers -- in the form of royalties on patents -- for using the Web services specifications they jointly have created, such as Simple Object Access Protocol (SOAP) and Web Services Description Language (WSDL). Neither Microsoft nor IBM has formally stated a desire to charge royalties on the standards, which are in part based on patents held by them. Sun, for example, first balked at supporting a Web services security specification, called WS-Security, until its three creators -- Microsoft, IBM and VeriSign -- agreed to make the technology royalty-free. Oracle executives said they expect that the W3C's Web Services Architecture Working Group will vote on Thursday on its proposal to create a committee to take the handful of existing choreography specifications and create a unified standard..."
[September 10, 2002] "OASIS to Aid Web Services Management." By Darryl K. Taft. In eWEEK (September 10, 2002). "The Organization for the Advancement of Structured Information Standards, or OASIS, standards consortium Tuesday announced the formation of a technical committee to facilitate distributed systems management over the Internet. Called the OASIS Management Protocol Technical Committee, the new group will seek ways to help businesses manage their own Web services and oversee their interaction with services offered by other companies, OASIS said. The OASIS Management Protocol will be designed to manage desktops, services and networks across an enterprise or across the Internet. OASIS officials said the organization is looking at several Web services standards and operations for potential use in the Management Protocol, including XML, the Simple Object Access Protocol (SOAP), Open Model Interface (OMI) and the Distributed Management Task Force's Common Information Model (CIM). 'The widespread need for the integration of systems and network management tools is causing the industry to take a more holistic approach to the management of networks--and Web services provide the ideal vehicle for making that happen,' Winston Bumpus, director of open technologies and standards at Novell Inc. and chair of the OASIS Management Protocol Technical Committee, said in a statement. 'Our work at OASIS will help level the playing field and allow companies to manage systems regardless of the platform they use.' The Management Protocol joins several Web services standards currently being developed within OASIS, officials of the organization said. 
Other specifications include Universal Description, Discovery and Integration (UDDI) for discovery, Electronic Business using eXtensible Markup Language (ebXML) for electronic business commerce, WS-Security for secure Web services, Web Services for Interactive Applications (WSIA) for interactive Web applications, Web Services for Remote Portals (WSRP) for remote portals and others, OASIS officials said. Meanwhile, database vendor Sybase Inc. announced it has received ebXML Messaging Interoperability Certification for its Sybase Web Services Integrator technology..." See: (1) the announcement "OASIS Members to Develop Web Services Management Protocol"; (2) general references in "Management Protocol Specification."
[September 10, 2002] "Get Ready for XForms. Next generation of Web forms will help you build online forms that are extensible and suitable for any platform." By Joel Rivera and Len Taing (Summer Interns, IBM Research). From IBM developerWorks, XML Zone. September 2002. ['Traditional HTML forms violate many of the tenets of good markup language design, frequently mixing presentation and data. In this article, Joel Rivera and Len Taing introduce you to XForms, an extension of XHTML that represents the next generation of Web forms. Though XForms is still in an embryonic state, it holds great promise: For instance, a form written with XForms can be written once and displayed in optimal ways on several different platforms. This article will give you a head start on this important emerging XML technology.'] "XForms enables support for multiple devices and structured form data, like XML documents. With XForms, developers can also generate dynamic Web forms without scripting, include multiple forms within the same page, and constrain data in various useful ways. Finally, while each of the XForms parts -- namely the data model, the view, and the controller -- is completely separable and usable with other technologies, significant additional value can be realized through how well these parts integrate together into an application. In this primer, we present an introduction to some of the most useful aspects of XForms, and guide you through a simple application example. This article is based on the XForms 1.0 Working Draft, issued in July 2002... With XForms, you can define Web forms in a way that successfully separates purpose from presentation. You focus more effort on the content of the form and the data being collected, and less on the style of presentation. The language defines a powerful event model that obviates the need for custom scripts to handle simple, form-related tasks. With XForms, the developer's primary focus is on the data to be collected. 
Using standard XML schemas, the structure and type of the data is explicitly defined. XForms extends this model by allowing for the specification of additional constraints and dependencies. The XForms processor evaluates and enforces these constraints without the need for additional code. The processor checks data types and constraints before the data is submitted for processing. The XForms specification also allows for the creation of dynamic forms through data-driven conditionality. You no longer must write special code to generate custom forms based on user responses. XForms can adapt forms on the fly, as data is collected and conditions change. Navigation through XForms is handled by the XForms event model, independent of the client rendering. You can present the same XForms as a single page on one client and as multiple pages on another, without having to worry about saving state and presenting appropriate navigation controls. Because of its concise specification and powerful features, forms created with XForms tend to be much easier to maintain than traditional Web forms. Code is not intermixed with presentation markup. Additional data type checking code is not necessary. Data structure is divorced from form presentation markup..." See: "XML and Forms."
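The model/view separation the authors describe is visible in the markup itself. The fragment below is a minimal, hypothetical XForms document (element names follow the July 2002 Working Draft; the payment instance and submission URL are invented for illustration), parsed with Python's standard library to show that the instance data and the form controls are distinct subtrees bound together only by a ref expression:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical XForms fragment: the data model lives in the
# head, the form controls in the body, linked by a ref expression.
DOC = """
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:xforms="http://www.w3.org/2002/xforms">
  <head>
    <xforms:model>
      <xforms:instance>
        <payment xmlns="">
          <amount>0</amount>
        </payment>
      </xforms:instance>
      <xforms:submission id="submit"
          action="http://example.org/submit" method="post"/>
    </xforms:model>
  </head>
  <body>
    <xforms:input ref="/payment/amount">
      <xforms:label>Amount</xforms:label>
    </xforms:input>
  </body>
</html>
"""

XF = "{http://www.w3.org/2002/xforms}"
root = ET.fromstring(DOC)

# The instance data is plain XML, completely separate from the UI markup.
amount = root.find(f".//{XF}instance/payment/amount")
controls = root.findall(f".//{XF}input")
print(amount.text, controls[0].get("ref"))
```

Because the control carries only a reference into the model, a different client could render the same document with different widgets, or across multiple pages, without touching the data definition.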
[September 10, 2002] "Oracle Calls For Web Services Unity." By Paul Krill. In InfoWorld (September 10, 2002). "Oracle is looking to get industry consensus on a Web services specification for choreographing interactions in business-to-business transactions, an alternative to the current plethora of proposed specifications devised to meet this aim. The database vendor wants the World Wide Web Consortium at its meeting in Washington beginning Wednesday to form an industry-wide working group to guide development of standards governing Web services choreography. Under Oracle's plan, the working group will develop a unified choreography language to be based on WSDL. The proposal is to be made at a meeting of the W3C Web Services Architecture Working Group. Oracle pledges to have support from several other companies. 'We think that we're in a key decision point as to whether Web services activities [should] be carried out in a relatively closed environment with results that may not be freely available to be widely implemented versus being developed in an open forum, with the results being available on a royalty-free basis,' said Don Deutsch, Oracle vice president of standards strategy and architecture. Web services choreography pertains to developing standard XML-based mechanisms for b-to-b collaboration across supply chains on the Internet, according to Oracle. Organizations are afraid that projects started today will need to be significantly reworked if vendors do not reconcile differences, the company said. A number of choreography-related specifications have been proposed recently, including BPEL4WS (Business Process Execution Language for Web Services), from IBM, Microsoft and BEA Systems, and Web Service Choreography Interface, from Sun Microsystems. Other proposals include WSCL (Web Services Conversation Language), BPML (Business Process Modeling Language), and ebXML BPSS (Business Process Specification Schema). 
Oracle's plan seeks to include deliverables such as a requirements document, usage scenarios and specifications of the choreography language and associated XML Schema and a test suite for interoperability..."
[September 10, 2002] "Web Services for Programmers." By Cameron Laird. In Byte Magazine (September 09, 2002). "For the past year, web services have been the Next Big Thing. Leaving aside the marketing hoopla and the strategic intrigues that swirl around their future, though, web services are simply useful additions to a programmer's toolkit... One way to think of web services is as the latest realization of RPC. The first RPCs made the network transparent for function invocations. Web services do that, and are also able to pass whole objects across network connections, even between dissimilar platforms running programs written in different languages... Web services have many logically distinct parts, and correspondingly many new acronyms to learn. Keep in mind the overall architecture: If the Web is about making information and services available to humans, web services do the same for computer processes. Data on the web generally appears in .html source, and is transported by the Hypertext Transfer Protocol (HTTP). Web services combine many of the same ideas in slightly different ways. They generally rely on XML-RPC or the Simple Object Access Protocol (SOAP) to transport data. These are protocols that can be layered over HTTP or other transport methods. HTTP is the most popular base for XML-RPC and SOAP right now, simply because there's so much infrastructure in place to support HTTP... Here's the summary, then: XML-RPC and SOAP are RPCs that can work across languages, across processor architectures, and across operating systems. They work over HTTP, so they slip through current-generation firewalls easily. They're universal -- they can deal in any sort of computer data. SOAP is the fancier big brother of XML-RPC -- it does more, at the cost of more complexity. WSDL is the "schema language" that formalizes SOAP signatures. UDDI gives a registry of WSDL documents, or equivalently, SOAP-based services. 
This is how the industry understands these acronyms now, even though the strict definitions differ in details such as the relation between SOAP and HTTP..."
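The RPC view of web services is easy to make concrete: Python's standard library ships an XML-RPC marshaller, so the wire format can be built and inspected without any network endpoint. The sketch below marshals a call to a hypothetical add() method (the method name and arguments are invented for illustration):

```python
import xmlrpc.client

# Marshal a call to a hypothetical add() method into the XML-RPC wire
# format. Over HTTP, this body would be POSTed to the service endpoint;
# here we only build and inspect the payload.
payload = xmlrpc.client.dumps((5, 3), methodname="add")
print(payload)

# The receiving side unmarshals the same bytes back into native values.
params, method = xmlrpc.client.loads(payload)
print(method, params)
```

The payload is ordinary XML text carrying a <methodCall> with typed <value> elements, which is exactly why it travels across languages, architectures, and firewalls as easily as any other HTTP request body.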
[September 10, 2002] "Documentum Delivers New Version of Document-Management Tools." By Richard Karpinski. In InternetWeek (September 10, 2002). "Documentum on Tuesday released a new version of its enterprise content-management platform, featuring expanded collaboration capabilities and the ability to manage so-called 'fixed' content such as images and records. The new Documentum 5 release offers a single platform for managing enterprise documents, Web content, digital assets, and now fixed content -- such as records, reports, and scanned images. All of these different content types are stored and managed via an integrated repository, which can be fed into and accessed from across the enterprise... Documentum 5 also includes new tools to enable real-time, cross-enterprise collaboration, including the ability to jointly create and archive content and integrate tools such as chat, e-mail, desktop sharing, and online conferencing with the document platform. Documentum 5 also adds support for so-called fixed content management capabilities, including the ability to handle reports, records, images, and scanned documents. For instance, reports management, commonly referred to as 'output management,' captures and stores reports from enterprise applications in the Documentum repository in PDF or XML and distributes them through Documentum's Web publishing capabilities. The new platform also adds security features, including enhanced authentication through native public key infrastructure (PKI) and certificate support, audit trail enhancements, and full content encryption. Content is further secured within and beyond the repository with digital rights-management capabilities..." See details in the announcement: "Documentum Delivers Major New Release of Market Leading Enterprise Content Management Platform -- Documentum 5. 
Extends ECM with next-generation capabilities including reports management, records management, collaboration, trusted content services and rapid deployment."
[September 10, 2002] "Native XML Management with Documentum." Documentum Technical White Paper. June 2002. 13 pages. "Managing XML entails a number of very specific requirements. These requirements are met by an industrial-strength content management platform with native XML capabilities: (1) Storing and managing XML. An XML content management solution must be able to store and manage XML documents in their native format, preserving the hierarchical structure and links between components and documents. Further, the system must validate the syntax and structure of XML content to ensure that it's well formed, allowing reuse by all applications. (2) Search capabilities. Powerful searching is another critical requirement for a native XML repository and so is the ability to add intelligence to XML content through automatic tagging and classification. In addition to full-text searches of the content itself, the system must provide the ability to search on content attributes. A native XML management system exposes its search capabilities through a query language as well as through an API. (3) Content transformation. XSL transformation is extremely important for delivery of XML content to any number of applications. XSL handles the automated transformation of the XML content into the required format and form factor. An enterprise XML content management solution has to provide a robust and flexible XSLT (Extensible Stylesheet Language Transformations) engine. (4) Flexible access. Applications must have easy and flexible access to XML content. This access should be possible from standard development environments such as J2EE or .NET. The capability to store and manage XML content natively becomes particularly important when applications attempt to retrieve the content from the repository in the required transformation. Only a solution managing XML natively can accomplish this task with little complexity and lots of flexibility... 
Documentum provides the only enterprise-grade content management solution with native XML capabilities across the entire content management platform. Documentum treats XML the same way as any other content including documents, Web content, and rich media, while understanding and supporting all XML-specific requirements and leveraging the capabilities of XML. Unlike other vendors, Documentum supports XML content natively without need for a separate XML solution or repository. With the Documentum end-to-end XML content management platform, global companies are well positioned to leverage their content using XML for increased efficiency and competitive advantage..."
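Requirement (2) above, attribute-based search over natively stored XML, can be sketched generically. The code below is not Documentum's API; it is a minimal illustration (the repository document, its attributes, and the query are all invented) of why preserving hierarchy pays off: a single query can address structure and attributes at once.

```python
import xml.etree.ElementTree as ET

# Generic illustration (not Documentum's actual API) of attribute-based
# search over natively stored XML: because the hierarchy is preserved,
# the query addresses both structure and attribute values directly.
repo = ET.fromstring("""
<repository>
  <doc id="d1" lang="en"><title>Installation Guide</title></doc>
  <doc id="d2" lang="de"><title>Installationshandbuch</title></doc>
</repository>
""")

# ElementTree's limited XPath subset: select docs by attribute value.
hits = repo.findall(".//doc[@lang='en']")
print([d.get("id") for d in hits])
```

A repository that shredded XML into opaque blobs would need full-text search plus application code to recover the same answer; keeping the tree intact makes it a one-line query.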
[September 10, 2002] "Open Source Software Use Within UK Government." By the UK Office of the e-Envoy. [UK] Open Source Software Policy Version 1. Date: 15/7/02. 8 pages. "Open Source Software (OSS) is software whose source code is openly published, is often developed by voluntary efforts and is usually available at no charge under a licence defined by the Open Source Initiative which prevents it from being redistributed under a more restrictive licence. The UK's response [...] has been through mandating open standards and specifications in its e-Government Interoperability Framework (e-GIF) and allowing market driven products to support these. It is now considered necessary to have a more explicit policy on the use of OSS within UK Government and this document details that policy. It does however need to be read in conjunction with current advice and guidance on procurement matters from OGC. UK Government in this context includes central government departments and their agencies, local government, the devolved administrations as voluntary partners, and the wider public sector, e.g., non-departmental public bodies (NDPBs) and the National Health Service... The key decisions of this policy are as follows: (1) UK Government will consider OSS solutions alongside proprietary ones in IT procurements. Contracts will be awarded on a value for money basis. (2) UK Government will only use products for interoperability that support open standards and specifications in all future IT developments. (3) UK Government will seek to avoid lock-in to proprietary IT products and services. (4) UK Government will consider obtaining full rights to bespoke software code or customisations of COTS (Commercial Off The Shelf) software it procures wherever this achieves best value for money. (5) UK Government will explore further the possibilities of using OSS as the default exploitation route for Government funded R&D software..." Cf. 
Sincere Choice from Bruce Perens, "MS 'Software Choice' scheme a clever fraud": "Microsoft's new 'Software Choice' campaign is all for your right to choose... as long as you choose Microsoft. It's too bad that Intel and the U.S. Government couldn't see through the rhetoric. Microsoft has responded with a clever Software Choice campaign that, read quickly, appears to fight discrimination and call for choice, while actually promoting policies that would lock out Free Software. For example, it promotes the embedding of royalty-bearing software patents into "open" standards. Of course Free Software producers don't charge copyright royalty fees, and thus can't afford to pay for patent royalties, so they would not be able to implement any standard that contains royalty-bearing patents..." References: "Patents and Open Standards." [UK document, PDF broken, also in Word .DOC format]
[September 10, 2002] "Unicode: The Quiet Revolution." By Jim Felici. In The Seybold Report Volume 2, Number 10 (August 19, 2002), pages 11-15. ['Revolutions are supposed to be noisy [but] systematically, quietly, thoroughly, Unicode has changed the way every major operating system works, the way nearly every new document is created. It has put multilingual intelligence into Internet search engines... Most recall that Unicode has something to do with two-byte characters. While that was once true, it isn't any longer. This article looks at Unicode's practical impacts and the direction of the ongoing revolution.'] "...The people who create Web search engines can't embrace Unicode fast enough; for them, it's a revolution that couldn't come too soon. Unicode allows them to create a single search system that will work as well in China and Delhi as in Moscow and New York (not to mention working for New Yorkers in Beijing and Russians in Delhi). A single search can be multilingual, and the same search can be made from anywhere. Database vendors are equally enthusiastic, and for the same reasons; archiving and retrieval issues in repositories are essentially the same as they are on the Web. Nonstandard encodings create information ghettoes, where data can be concealed by virtue of the way it was written. Under the new regime, legacy encodings can be decoded one last time and all data converted into the lingua franca Unicode format. But at this point, Unicode hits a wall: language. It can match numbers to characters, but it's not Unicode's job to match characters to languages... A single code point may identify a particular character, but this says nothing about the language that character was chosen to express. Nor does it say anything about how that character should look, as single Han characters may also vary substantially in form from place to place... 
The ink that's used to write most standards isn't dry before various parties begin to tack on amendments, enhancements and personalizations... Interestingly, the opposite is happening with Unicode. For example, the use of private use areas (PUAs) -- ranges of code points set aside for characters that haven't been made part of the standard Unicode character set -- is being discouraged except in closed environments, simply because the code-point definitions aren't standard. 'Many application developers seem to be coming to the conclusion that PUA characters are more trouble than they're worth,' according to John Hudson at Tiro Typeworks in Vancouver, BC. 'Adobe, who have been using PUA assignments for many years, recently decided to draw back from this approach and try to abandon it completely. PUA code points are simply too unreliable.' Most common Latin typographic variants such as alternate ligatures have been given Unicode code points. The myriad alternate forms for many characters in typefaces such as ITC Avant Garde as well as in Asian ideographic languages are also accommodated, with the help of smart font formats, such as OpenType. More standardization by font vendors will translate into more accurate document imaging than ever before, with fewer and fewer exceptions to the rule. 'My gut feeling is that we are still on the learning curve,' says Ken Whistler, a Unicode founding father, now working in internationalization software at Sybase and as technical director of Unicode, Inc., 'but that the worst of the annoyances will be out of the way in the next five years.' Thomas Phinney, program manager for western fonts at Adobe Systems, agrees that the worst part of the switch to Unicode is behind us. 'Five years ago,' he says, 'we were perhaps one tenth of the way up the adoption curve, and now we're something like one-third of the way. 
Although I fully expect there to be significant 'holdout' applications, even in five years, we'll be over the top hump of the curve. Unfortunately, that final tailing-off period will take a long time'..." See "XML and Unicode."
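The PUA ranges discussed above are visible from any Unicode-aware language. In Python, for instance, a code point in the Basic Multilingual Plane's Private Use Area (U+E000 through U+F8FF) carries the general category 'Co' and has no name in the Unicode character database, which is precisely why its meaning depends on private agreement rather than the standard:

```python
import unicodedata

# U+E000 is the first code point of the BMP Private Use Area.
pua_char = "\ue000"

# Category 'Co' marks private-use characters; compare a standard letter.
print(unicodedata.category(pua_char))   # private use
print(unicodedata.category("A"))        # uppercase letter

# A PUA code point has no name in the Unicode character database,
# so nothing outside a closed system can say what it means.
print(unicodedata.name(pua_char, "<no standard name>"))
```

This is also why the article notes that Unicode is no longer "two-byte": code points now run past U+FFFF, and supplementary planes contain further, larger private-use areas.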
[September 10, 2002] "Netegrity Marks New Era of Access Control." By Steve Gillmor. In InfoWorld (September 06, 2002). ['Test Center Director Steve Gillmor and InfoWorld editors sat down with Netegrity CEO Barry Bycoff and CTO Deepak Taneja to discuss Netegrity's integrated approach, enterprise portals, and the impact of Web services.'] Q: 'Can you sort out for us all the federation approaches out there?' Bycoff: You mentioned Web services and Liberty. We've been asked to become a founding member of WS-I. We are very active [as a] sponsor of the Liberty Alliance. So we are all about standards. This is the company that built the predecessor to SAML [Security Assertion Markup Language], prior to turning it over to OASIS. So we're promoting standards. That's our way of winning. Keeping our architecture open is another major focus for us, because we realize that there are competitive products out there and we need to support open standards like JSR168 [for] portal interoperability and some of the provisioning standards that are coming. So we're very active [in support of] standards and our architecture is very open, because of the environments we support... Taneja: As this concept of an enterprise portal starts to take over, as companies start to think about working with their business partners, suppliers, customers, and so on through a single set of infrastructure components, they have started to realize that they in fact cannot expect to deal with a single security system or a single identity management system. So even though they might standardize on a single access control solution or a single identity management system, they're going to have to work with their partners and suppliers, who in fact will have different security engines, different identity engines. 
And that raises the kinds of scenarios where people are getting authenticated by one company in one spot and then are trying to do something or trying to access an application that's owned by a second company. That is the typical scenario that a lot of these standards bodies and pseudo standards bodies or alliances are trying to deal with. And it all has to do with [the question], how do you make multiple security engines, [full] identity engines in a distributed world, work with each other? ...I think the Liberty Alliance certainly is using SAML, so they're not trying to reinvent the wheel at that lowest level. They're trying to go one step beyond where the SAML committee went. So I don't think we'll have competing standards as far as that basic request response model and the definition of the assertion itself [are concerned]. What we may have is another layer of standards that show up a level above the simple SAML approach. And it's not clear yet whether standards above and beyond SAML will in fact be accepted widely. SAML certainly at this point has wide acceptance. Microsoft has agreed to support SAML as part of the WS-Security initiative, so a SAML assertion showing up inside a SOAP envelope is something that Microsoft will be able to parse, and in fact is willing to generate as well. The Liberty Alliance, as I said, is using SAML, so we feel pretty confident that SAML will be an important standard. The application server vendors, both BEA and IBM, are committed to supporting SAML. At the recent Burton Group conference, all of the security vendors pretty much came out in support of SAML. There was a great interoperability demonstration. So SAML, I think, is going to be accepted. What happens beyond SAML remains to be seen..." See: "Security Assertion Markup Language (SAML)."
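The scenario Taneja describes, a SAML assertion travelling inside a SOAP envelope, can be sketched schematically. The namespaces below are the SOAP 1.1 envelope and SAML 1.0 assertion namespaces; everything else is drastically simplified (real assertions carry subjects, conditions, and digital signatures, and the issuer and ID here are invented):

```python
import xml.etree.ElementTree as ET

SOAP = "http://schemas.xmlsoap.org/soap/envelope/"
SAML = "urn:oasis:names:tc:SAML:1.0:assertion"

# Schematic only: a (heavily simplified) SAML assertion carried in a
# SOAP header, as in the WS-Security scenario described above.
env = ET.Element(f"{{{SOAP}}}Envelope")
header = ET.SubElement(env, f"{{{SOAP}}}Header")
assertion = ET.SubElement(header, f"{{{SAML}}}Assertion",
                          {"AssertionID": "a123", "Issuer": "example.org"})
ET.SubElement(env, f"{{{SOAP}}}Body")

xml_bytes = ET.tostring(env)
print(xml_bytes.decode())
```

The point of the design is visible even in this toy: the assertion rides in the header, so the relying party can check who authenticated the caller without the message body, or the application that produced it, knowing anything about the partner's identity engine.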
[September 10, 2002] "What Matters?" By C. M. Sperberg-McQueen. Closing Keynote. Extreme Markup Languages Conference, Montréal, 9-August-2002. ['A lightly revised version of the closing address of the Extreme Markup Languages conference in August 2002'] "... there's a third thing that is equally important. Both the serialization form and the data structure stand in a natural relation, an inescapable relation, with a well-understood mechanism for validation. This is not just any tree; it is a parse tree for a sentence in the language defined by the grammar expressed in our DTD. If that sentence is in that language, the document is valid. If it's not, the document is not valid. Everything linguists and computer scientists have learned about context-free languages since 1957 comes to bear here. Until we have alternatives, not just for the surface syntax and the data structure, but for validation, all of our thought experiments about other things we could do, other markup languages we could design, will remain what they are now: important, useful, interesting, but just thought experiments. Until we provide validation, a natural data structure, and a readable serial syntax, we don't have anything that can seriously compete with SGML or XML. Those three things working together help explain why SGML is an outstanding exception to what we might call Arms's Rule -- for Bill Arms, the head of the Corporation for National Research Initiatives in Washington, best known to many of us as the former employer of Guido van Rossum, the inventor of Python. 
Arms once said he had a rule for figuring out what technology was going to matter (in the betting-on-the-future sense) -- you never need to wait more than five years, according to Arms's Rule, from the time you first hear about a new technology, because within five years either the technology will have succeeded and it will be universal and it will be obvious to you that this is something you have to learn about; or it will have failed and disappeared and it will be obvious that it is not something you have to learn about. So when he heard about SGML in 1986 or 1987, he said, 'That's interesting,' and he waited five years. And he found an exception to his rule. SGML hadn't succeeded, in the sense that it hadn't completely dominated its obvious application niche -- bearing in mind of course that because of the absence of a fixed predefined semantics the application niche for SGML is information processing, which covers a lot of ground. Even within the obvious or 'traditional' areas, SGML was not universally accepted, but it also hadn't disappeared. There were people who had started using SGML, and they were certainly frustrated that the rest of the world hadn't also adopted SGML; but, unlike early adopters of most other technologies which don't achieve universal uptake, they weren't giving up and going home. They were saying, 'No, no: we're right, you're wrong, this is better, why should we give it up?' And ten years later, in 1996, it was still approximately the same situation. There is a fierce loyalty of people who got interested in SGML in the days before XML, and of many people, too, who came to the party only with XML, because of these three things working together: the serial form, the data structure, and validation. Personally, I think validation may have been the most important..."
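Sperberg-McQueen's point that validation is language membership can be made concrete: a DTD content model such as (title, para+) is a regular expression over child element names, so checking validity is checking whether the sequence of children is a sentence in that regular language. The toy validator below (not a real DTD parser; the element names are invented) does exactly that:

```python
import re
import xml.etree.ElementTree as ET

# Toy illustration of DTD-style validation: the content model
# (title, para+) becomes a regular expression over child element
# names, so validity is ordinary language membership.
CONTENT_MODELS = {"section": re.compile(r"title(,para)+$")}

def valid(element):
    # Flatten the children into a "sentence" of tag names.
    children = ",".join(child.tag for child in element)
    model = CONTENT_MODELS[element.tag]
    return bool(model.match(children))

ok = ET.fromstring("<section><title/><para/><para/></section>")
bad = ET.fromstring("<section><para/><title/></section>")
print(valid(ok), valid(bad))  # True False
```

Everything known about regular and context-free languages then applies for free, which is the "1957 onward" inheritance the talk refers to.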
[September 10, 2002] "Web Services: Still Not Ready for Prime Time." By Ben Worthen. In CIO Magazine (September 01, 2002). ['Web services are Internet or other IP-based network applications built with four emerging standards, including XML, which allow the applications to talk to each other without human intervention. Early adopters are seeing the promise with small-scale implementations. But deployment remains a challenge. The Internet is a poor platform for today's version of the technology. Also, there are no industry-accepted open security standards for XML. Early adopters are configuring workarounds. Some companies are setting up VPNs to essentially bring the user behind the company's firewall. Health services company CareTouch set up a VPN to run a Web service that lets patients schedule an appointment with physical therapists. The dedicated connection uses middleware to guarantee delivery.'] "...By far the biggest concern among Web services users is security. A recent survey, 'Enterprise Development Management Issues 2002,' by Santa Cruz, Calif.-based market researcher Evans Data found that security and authentication were the number-one hurdles for 48 percent of the 400 IT executives interviewed -- more than double that of the runner-up, bandwidth, at 22 percent. Mark Hansen, vice president and CIO of Long Grove, Ill.-based Kemper Insurance, says that security is actually two separate problems: one technical, one business-based. On the technical front, there are no industry-accepted security standards for XML. And even if there were, nobody is sure on the business side of what the contract language would have to be in order to convince CIOs that they could safely use a company that their Web services found on the Internet for important transactions. Who are you going to trust -- and how is that trust going to be validated -- in the ideal world of Web service talking to Web service? 
While it's just a matter of time until standard security protocols emerge, it is enough to give a CIO trying to use Web services a headache. Currently, every XML security protocol on the market is a proprietary vendor offering and therefore not truly open. Hugo Haas, Web services activity lead with the Cambridge, Mass.-based World Wide Web Consortium (W3C), the standards-setting group, says that at this point W3C hasn't even finished determining everything an XML security standard would require, let alone deciding on a standard. Until the security issues are cleared up, the one-time transactions that would come from a Web services "yellow pages" (known as universal description, discovery and integration, or UDDI) are only a dream..."
[September 10, 2002] "IBM Gears Up for Modeling." By Darryl K. Taft. In eWEEK Volume 19, Number 36 (September 09, 2002), pages 1, 13. "IBM is readying new features for its open-source development platform that will speed application development, but what users are most intrigued by is planned full support for Model Driven Architecture. The support for MDA, which will be included in Eclipse Version 2.0 when it is released this fall, will enable developers to create applications based on models rather than hand coding, which also reduces cost. To make it happen, IBM will tap its EMF (Eclipse Modeling Framework) technology and incorporate it into Eclipse 2.0, according to company officials. MDA will be a core development technology for integrating tools with Eclipse. Eclipse currently includes limited support of MDA through plug-ins from Rational Software Corp., said Sridhar Iyengar, an IBM distinguished engineer with the company's Application and Integration Middleware group, in Raleigh, N.C. EMF, internal IBM technology the company uses in its WebSphere integration solutions, is a step above other options, which enable integration of tools and data only at the metadata and model levels, said Iyengar... Iyengar said the technology is key for users developing Web services and UML (Unified Modeling Language) and XML applications. Along with MDA support, Eclipse 2.0 will include a new plug-in design featuring wizards to ease deployment of Eclipse plug-ins; support for Sun Microsystems Inc.'s Java Development Kit 1.4; and enhanced team programming models to enable a team of developers to work together more easily using Eclipse. But support for MDA is the key, said Eric Newcomer, chief technology officer at Iona Technologies plc., a Web services and enterprise application integration products supplier... Rational -- which also licenses its modeling technology to Microsoft Corp. -- Hewlett-Packard Co. and Borland Software Corp. 
are among the growing number of development tool vendors that support MDA. In May, Sun introduced MDA support in its NetBeans open-source development platform..." See: "OMG Model Driven Architecture (MDA)."
[September 10, 2002] "Finding the Right Formula For UDDI. Web-services Specification to Play Key Role." By Jeffrey Schwartz. In VARBusiness (September 06, 2002). "As customers start using Web services to link disparate applications, they will need a way to efficiently organize and keep track of all those Web services. Most experts believe they will do that through repositories based on a specification called Universal Description, Discovery and Integration (UDDI). Some believe UDDI will be as pervasive in directories and repositories as XML is in defining metadata and SOAP is in encapsulating software components. Simply put, UDDI makes it possible to search any registry for specific Web services, such as business rules and SOAP-based components. In addition to helping users and applications find specific Web services, UDDI will allow them to query data that describes how those services are used. 'UDDI is really a registry of Web services, wherever they're deployed, and descriptions of how to interact with them,' says Chris Kurt, program manager for UDDI.org at Microsoft and a member of Microsoft's XML standards team. UDDI also specifies a set of APIs to interact with the registry and provides references on how to interface with applications. UDDI registries have basic white and yellow pages models for finding services. Solution providers will increasingly find UDDI implemented in various forms of repositories, including application servers, middleware, databases and directories. IBM is already shipping a UDDI registry based on its WebSphere application server suite; Novell recently announced plans to release a UDDI server by year's end that will run atop its eDirectory software; and Microsoft's forthcoming .Net servers will support UDDI through microcode in the new server platforms and via its SQL Server database. While UDDI will be key to tying together Web services, the market is still nascent. 
'Everyone's trying to get mind share,' says Michael Neuenschwander, an analyst at the Burton Group. In July, work on UDDI was turned over to Oasis, a standards body that governs e-business and Web-services standards, including XML. The move coincided with the release of version 3 of the UDDI spec, which gives the standard key enterprise capabilities, such as support for XML-based security and policy management, internationalization and a subscription API that generates messages when changes are made to a UDDI repository. And late last month, Oasis announced the formation of a technical committee that will develop the technical standards and best practices. Members of the technical committee include BEA Systems, Hewlett-Packard, IBM, Microsoft, Novell, Oracle, Sun and Verisign, among others... In order for Web services to proliferate, customers and solution providers should be looking at UDDI as a key component of that infrastructure. IBM's Bob Sutor describes it as a catch-22. 'You need to have a registry to get the growth of Web services, but you need a whole bunch of Web services to put in the registry to make it useful,' Sutor says. So, what will make UDDI useful? It's likely to proliferate within organizations for sharing business logic among applications. For example, Microsoft's Kurt says, if an enterprise wants to make a change-of-address service originally built for an HR application available to other apps, a UDDI registry can help internal developers, or even end users, find the software components and business rules for using those programs..." See: "Universal Description, Discovery, and Integration (UDDI)."
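The inquiry APIs mentioned above are themselves SOAP messages. As a rough sketch (the business name is invented; `urn:uddi-org:api_v2` is the UDDI version 2 API namespace), a white-pages lookup by business name can be assembled with nothing more than Python's standard library:

```python
# A minimal sketch of the message a UDDI client sends to a registry's
# inquiry endpoint: a SOAP envelope whose body carries a find_business
# query (the "white pages" lookup by business name).
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
UDDI_NS = "urn:uddi-org:api_v2"

def build_find_business(name: str) -> str:
    """Return a SOAP request that searches a UDDI registry by name."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    find = ET.SubElement(body, f"{{{UDDI_NS}}}find_business",
                         {"generic": "2.0"})
    ET.SubElement(find, f"{{{UDDI_NS}}}name").text = name
    return ET.tostring(envelope, encoding="unicode")

request = build_find_business("Acme Travel")
print(request)
```

The request would be POSTed over HTTP to the registry's inquiry URL; the response is a businessList of matching businessInfo records, from which a client can drill down to the service bindings.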
[September 10, 2002] "Seybold: XML, Weblogs Make Mark on Publishing Industry." By Matt Berger. In InfoWorld (September 09, 2002). "The publishing industry is in a state of flux as new technologies emerge for managing content and delivering it to end users, a panel of industry pundits said Monday at the opening of the Seybold publishing industry conference here. True to the adage, content is still king, according to Patricia Seybold, a noted publishing industry analyst and chief executive of her self-titled consulting and analyst company in Boston. However, new technologies are redefining what the publishing industry should consider content... Dozens of vendors will be on hand here to discuss new software products developed to address the growing market for Web-based publishing. Adobe Systems has sent some of its top executives to promote its publishing software while Documentum, Arbortext and others will show their latest offerings for content management. The introduction of XML (Extensible Markup Language) and Web services have created new opportunities for content producers, Seybold said. Published information no longer has to be static. By employing these new Web-based technologies, content producers can deliver information in a personalized way, depending on who is reading it and what type of device they are using. Tim Bray, one of the co-creators of XML, spoke Monday alongside Seybold. He discussed what he sees as other defining changes in the industry, such as improved graphical user interfaces, and stated in passing that as far as tagging data goes, 'nobody is doing it' in the publishing business. Despite his doubts about XML's uptake, a number of industry players are latching onto the technology and the industry-specific schemas designed for tagging content, Seybold said. Reuters Group, for example, uses an XML schema known as NewsML to tag articles so that they can be more easily repurposed and better managed through the production process, she said. 
Another example is mechanic tools maker Snap-on, which built an online product catalog that offers information and e-commerce tools for its entire inventory of tools, Seybold said. The company tagged each product with an XML description so that partners can repurpose the information for use on their own Web sites, and so customers can gather relevant information related to each tool. Notable about the Kenosha, Wisconsin, company, Seybold said, is that it employs experienced auto mechanics to author the data in the XML tags. It is an example of how companies should 'marry the subject-matter expert with the content producers' so that published information can grow in value..."
[September 10, 2002] "BPML Kick-Start from System Architect." By Peter Williams. In InformaticsOnline (September 09, 2002). ['Business process modelling language gets first implementation'] "The new business process modelling language (BPML) 1.0 specification will be given a kick-start with its first implementation in a new release of System Architect enterprise modelling tool from Popkin Software. BPML provides a formal approach to modelling end-to-end business processes. It also supports XML-based process definitions to help communication between multiple vendors' systems and modelling tools used for web services. It is developed by the Business Process Management Initiative (BPMI) organisation, a large consortium of vendors and users that includes IBM, Hewlett Packard (HP), BEA, Sun and SAP, and modelling tools companies Rational, Casewise and Popkin... 'BPML is desperately needed but it has been slow coming,' said Tim Jennings, research production director at analyst Butler Group. 'Integration is no longer an IT problem but a business issue. Businesses are beginning to think in terms of business processes instead of technical links.' Jennings said that some BPML specialists with their own proprietary tools had probably been reluctant to migrate to the new standard. Along with Popkin's move, the rapid progression towards web services could help establish BPML as the standard of choice for modelling business processes. The recently announced business process execution language for web services (BPEL4WS) initiative driven by IBM, BEA and Microsoft, uses semantically similar notation to BPML 1.0, sharing the same keywords..." See: (1) the announcement from Popkin "Popkin Software to Offer Integrated Support for Release 1.0 of Business Process Modeling Language (BPML). New Standard Offers Transactional and Collaborative Business Process Modeling."; (2) "Business Process Modeling Language (BPML)."; (3) "Business Process Execution Language for Web Services (BPEL4WS)."
[September 10, 2002] "Business Process with BPEL4WS: Learning BPEL4WS, Part 2. Creating a Simple Process." By Rania Khalaf (Software Engineer, IBM TJ Watson Research Center). From IBM developerWorks, Web services. August 2002. ['The recently released Business Process Execution Language for Web Services (BPEL4WS) specification is positioned to become the Web services standard for composition. It allows you to create complex processes by creating and wiring together different activities that can, for example, perform Web services invocations, manipulate data, throw faults, or terminate a process. These activities may be nested within structured activities that define how they may be run, such as in sequence, or in parallel, or depending on certain conditions. This series of articles aims to give readers an understanding of the different components of the language, and teach them how to create their own complete processes. The first part of the series will take readers through creating their first simple process. Subsequent parts will extend the example in different ways to illustrate and explain the key parts of the language, including data manipulation, correlation, fault handling, compensation, and the different structured activities in BPEL4WS.'] "In order to demonstrate how activities may be created and aggregated with BPEL4WS, I will describe a simple example that processes loan requests. This article will illustrate the main aspects of a composition, as well as show how the WSDL descriptions of services relate to and are used by the BPEL4WS process definition. A complete process is created while explaining the use of partners for interaction, containers for holding messages, and the activities for interacting with the outside world, namely <receive>, <reply>, and <invoke>. In addition to describing how the process will run, I also show how to deploy and run it using the BPWS4J engine available on alphaWorks... 
In the next part of this article, I will go through some more parts of the BPEL4WS language and illustrate their usage by adding more activities to the loan approval example. To avoid confusion, the additions will keep bringing the sample closer to the one in the specification and BPWS4J release. In the meantime, you may want to read the other articles available about the language and the runtime..." Also in PDF format. See "Business Process Execution Language for Web Services (BPEL4WS)."
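The shape of the composition the article builds can be illustrated without BPEL4WS syntax at all: a <receive> takes the incoming request, an <invoke> calls a partner service, and a <reply> returns the decision. The following toy version in Python is only a schematic of that control flow; the assessor service and the approval rule are invented for the sketch:

```python
# A toy illustration (not BPEL4WS itself) of a loan-approval
# composition: receive a request, invoke a risk-assessment partner,
# reply with a decision.
def assessor(message):
    # Stands in for an invoked Web service; the threshold is invented.
    return "low" if message["amount"] < 10_000 else "high"

def loan_approval_process(request):
    # <receive>: the process is instantiated by an incoming message.
    message = request
    # <invoke>: call the risk-assessment partner service.
    risk = assessor(message)
    # <reply>: send the decision back to the original caller.
    return {"accept": "yes" if risk == "low" else "no"}

print(loan_approval_process({"name": "J. Smith", "amount": 9_000}))
```

In real BPEL4WS each of these steps is an XML activity wired to a WSDL-described partner, and the message lives in a declared container rather than a local variable.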
[September 10, 2002] "Business Process with BPEL4WS: Understanding BPEL4WS, Part 1. Concepts in Business Processes." By Sanjiva Weerawarana and Francisco (Paco) Curbera (Research Staff Members, IBM TJ Watson Research Center). From IBM developerWorks, Web services. August 2002. ['The recently released Business Process Execution Language for Web Services (BPEL4WS) specification is positioned to become the Web services standard for composition. It allows you to create complex processes by creating and wiring together different activities that can, for example, perform Web services invocations, manipulate data, throw faults, or terminate a process. These activities may be nested within structured activities that define how they may be run, such as in sequence, or in parallel, or depending on certain conditions. This series of articles aims to give readers an understanding of the different components of the language, and teach them how to create their own complete processes. The first part of the series will take readers through creating their first simple process. Subsequent parts will extend the example in different ways to illustrate and explain the key parts of the language, including data manipulation, correlation, fault handling, compensation, and the different structured activities in BPEL4WS.'] "Today Web services can communicate with each other, advertise themselves, and be discovered and invoked using industry-wide specifications. However, until last week, linking these services together into a business process or a composition gave the user a number of conflicting specifications to choose from -- as was the case with WSFL from IBM and XLANG from Microsoft. The Business Process Execution Language for Web Services (BPEL4WS) represents the merging of WSFL and XLANG, and with luck, will become the basis of a standard for Web service composition. 
BPEL4WS combines the best of both WSFL (support for graph-oriented processes) and XLANG (structural constructs for processes) into one cohesive package that supports the implementation of any kind of business process in a very natural manner. In addition to being an implementation language, BPEL4WS can be used to describe the interfaces of business processes as well -- using the notion of abstract processes. We will elaborate further on this in future articles... We briefly explain the main underlying concepts of BPEL4WS, considering the overall view of what BPEL4WS is about and then partners, faults, compensation, and lifecycle. In future articles of this series we expect to discuss various specific aspects of BPEL4WS in detail..." Also available in PDF format. See "Business Process Execution Language for Web Services (BPEL4WS)."
[September 06, 2002] "JUNOScript: An XML-based Network Management API." By Philip A. Shafer and Rob Enns (Juniper Networks). IETF Network Working Group, Internet-Draft. Reference: 'draft-shafer-js-xml-api-00'. August 27, 2002, expires February 25, 2003. "JUNOScript is an XML-based API for managing devices. It allows access to both operational and configuration data using a simple RPC mechanism. Sessions can be established using a variety of connection-oriented access methods. This document describes the framing protocol, message content, and capabilities of this API. Design decisions and motivations are also discussed. No attempt is made to formally define a protocol, but rather to document the capabilities of JUNOScript as part of the discussion of XML-based network management... In January 2001, Juniper Networks introduced an Application Programming Interface (API) for the JUNOS network operating system. This API is part of our XML-based Network Management (XNM) effort and is marketed under the trademark JUNOScript. This document describes the protocol used by the API and provides some insight into its design and implementation. JUNOScript allows full access to both operational and configuration data using a light-weight remote procedure call (RPC) encoded in XML. JUNOScript uses a simple model, designed to minimize both the implementation costs and the impact on the managed device. The model does not require on-box tools such as XSLT, since these may limit the size and number of machines that can implement the model. We aimed for simplicity and ease of implementation in most design issues, but not at the expense of expressiveness... By using XML-based tools, configuration data for an entire network can be retrieved from a database and transformed into a format acceptable by each particular managed device. 
It can then be handed to the device, the results tallied with those of other devices, and results displayed in multiple presentation formats, such as HTML or PDF... A JUNOScript session consists of a series of RPCs between the client and the server. The client sends an <rpc> element and receives an <rpc-reply> element in response. The contents of each direction of the JUNOScript session form a complete XML document, allowing the entire session to be saved to an XML file for post-processing, or to be used as a component of a UNIX pipeline. The session begins with each side sending an XML declaration and the start tag for the top-level element, <junoscript>. Each of these includes a set of attributes that direct the operation of the other side. The XML declaration must contain the version of XML in use and should contain the character-set encoding... Experience with XML-based network management protocols under JUNOS has been both fruitful and educational. JUNOScript has been well received among developers, in-house testers, customers, and third party application vendors..." [cache]
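The client side of the framing described above can be sketched in a few lines. Everything here follows the draft's description (XML declaration, <junoscript> start tag, <rpc>-wrapped commands forming one well-formed document per direction); the particular RPC name is only illustrative:

```python
# A sketch of the client half of a JUNOScript session. The server's
# half mirrors it, answering each <rpc> with an <rpc-reply>.
def open_session(version: str = "1.0") -> str:
    # The session opens with an XML declaration and the <junoscript>
    # start tag, whose attributes direct the other side's behavior.
    return (f'<?xml version="1.0" encoding="us-ascii"?>\n'
            f'<junoscript version="{version}">')

def rpc(command: str) -> str:
    # Each transaction wraps a command element in an <rpc> element.
    return f"<rpc><{command}/></rpc>"

# Closing </junoscript> makes the whole stream a complete XML document.
session = [open_session(), rpc("get-interface-information"), "</junoscript>"]
print("\n".join(session))
```

Because each direction is one complete XML document, the transcript can be saved and post-processed with ordinary XML tooling, which is exactly the pipeline-friendliness the draft emphasizes.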
[September 06, 2002] "The PERMIS X.509 Role Based Privilege Management Infrastructure." By David W. Chadwick and Alexander Otenko (I.S. Institute, University of Salford, Manchester, UK). Pages 135-140 in Proceedings of the Seventh ACM Symposium on Access Control Models and Technologies [SACMAT 2002], sponsored by SIGSAC (ACM Special Interest Group on Security, Audit, and Control), June 3-4, 2002, Monterey, California, USA. "This paper describes the output of the PERMIS project, which has developed a role based access control infrastructure that uses X.509 attribute certificates (ACs) to store the users' roles. All access control decisions are driven by an authorization policy, which is itself stored in an X.509 attribute certificate, thus guaranteeing its integrity. All the ACs can be stored in one or more LDAP directories, thus making them widely available. Authorization policies are written in XML according to a DTD that has been published at XML.org. The Access Control Decision Function (ADF) is written in Java and the Java API is simple to use, comprising just three methods and a constructor. There is also a Privilege Allocator, which is a tool that constructs and signs attribute certificates and stores them in an LDAP directory for subsequent use by the ADF... In order to control access to a resource, both authentication and authorization are needed... The latest version of X.509, due to be published in 2002, is the first edition to standardize an authorization technique and this is based on attribute certificates and Privilege Management Infrastructures (PMIs). A PMI is to authorization what a PKI is to authentication. Consequently there are many similar concepts shared between PKIs and PMIs. 
A public key certificate (PKC) is used for authentication and maintains a strong binding between a user's name and his public key, whilst an attribute certificate (AC) is used for authorization and maintains a strong binding between a user's name and one or more privilege attributes. The entity that digitally signs a public key certificate is called a Certification Authority (CA), whilst the entity that digitally signs an attribute certificate is called an Attribute Authority (AA). The root of trust of a PKI is sometimes called the root CA, whilst the root of trust of the PMI is called the Source of Authority (SOA). CAs may have subordinate CAs that they trust, and to which they delegate the powers of authentication and certification. Similarly, SOAs may delegate their powers of authorization to subordinate AAs. If a user needs to have his signing key revoked, a CA will issue a certificate revocation list (CRL). Similarly if a user needs to have his authorization permissions revoked, an AA will issue an attribute certificate revocation list (ACRL)... The PERMIS project wanted to specify the authorization policy in a language that could be both easily parsed by computers, and read by the SOAs, with or without software tools. We looked at various pre-existing policy languages e.g., Ponder [The Ponder Policy Specification Language] and Keynote [The KeyNote Trust-Management System Version 2], but found that none were ideally suited to our needs. We decided that XML was a good candidate for a policy specification language, since there are lots of tools around that support XML, it is fast becoming an industry standard, and raw XML can be read and understood by many technical people (as opposed to ASN.1, for example, which uses a binary encoding). First we specified an XML DTD for our X.500 PMI RBAC Policy. The DTD is a meta-language that holds the rules for creating the XML policies... The generality of the PERMIS API has already proven its worth. 
In another research project at Salford we are designing an electronic prescription processing system. We have found that the PERMIS API can be easily incorporated into the electronic dispensing application. With a suitable policy the ADF is able to make decisions about whether a doctor is allowed to issue a prescription or not, whether a pharmacist is allowed to dispense a prescription or not, and whether a patient is entitled to free prescriptions or not..." Cache paper, DTD.
[September 06, 2002] "RBAC Policies In XML for X.509 Based Privilege Management." By David W. Chadwick and A. Otenko (University of Salford). Paper presented at IFIP / SEC 2002 (17th International Conference on Information Security), May 7-9, 2002, Cairo, Egypt. Department of Electronics and Electrical Communications, Faculty of Engineering, Cairo University. Organized by IFIP Technical Committee 11 on Security and Protection of Information Processing Systems. "This paper describes a role based access control policy template for use by privilege management infrastructures where the roles are stored as X.509 Attribute Certificates in an LDAP directory. There is a brief description of the X.509 privilege management model, and how it can be used to implement RBAC. Policies that conform to the template are written in XML, and the template is specified as a DTD. A future version will specify it as an XML schema. The policy is designed to be used by the PERMIS API, a Java specification for an Access Control Decision Function based on the ISO 10181 Access Control Framework and the Open Group's AZN API... The X.509 RBAC policy defines the subject and target domains governed by the policy, the role hierarchies supported by the policy, which roles may be assigned to which subjects by which trusted SOAs, and which roles are needed to perform which actions on which targets under which conditions. In policy based RBAC, a policy is defined which states the rules for assigning roles to users, and permissions to roles. The policy can then be used to control the accesses to all the targets within the policy domain. We have specified an API in the Java language, the Permis API, that reads in the XML policy, parses it, and then uses it to control access to targets within the policy domain. The API caller, typically an application gateway, passes the authenticated name of the user, and this is used to retrieve the user's attribute certificates (ACs) from the configured LDAP directory. 
Each signed AC is checked against the policy, and nonconformant ACs are discarded. Valid roles are extracted from the remaining ACs. The API caller then passes the user's requested action on his chosen target, and again this is checked against the policy. The API returns either granted or denied to the caller. In this way, a single policy can be used to control access to all the resources in a domain... We believe the policy is widely applicable to many different types of application, and we are already using it for electronic tendering, electronic prescribing, and several database access applications..." See the EC PERMIS Project website and Salford references. Note previous item. [alt URL, cache]
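The decision flow both PERMIS papers describe -- retrieve the authenticated user's roles, then check the requested action and target against the policy's role rules -- can be reduced to a small schematic. The real ADF is a Java API driven by signed X.509 attribute certificates pulled from LDAP and validated against an XML policy; the dictionaries below are invented stand-ins for those pieces:

```python
# A toy version of an RBAC access-control decision function.
ROLE_PERMISSIONS = {            # which roles may do which action on which target
    "doctor":     {("issue", "prescription")},
    "pharmacist": {("dispense", "prescription")},
}
USER_ROLES = {                  # stands in for roles extracted from valid ACs
    "alice": {"doctor"},
    "bob":   {"pharmacist"},
}

def decide(user: str, action: str, target: str) -> str:
    """Return 'granted' or 'denied', as the PERMIS API does."""
    roles = USER_ROLES.get(user, set())
    allowed = any((action, target) in ROLE_PERMISSIONS.get(role, set())
                  for role in roles)
    return "granted" if allowed else "denied"

print(decide("alice", "issue", "prescription"))   # granted
print(decide("bob", "issue", "prescription"))     # denied
```

The point of the PERMIS design is that only the two tables change from application to application: the caller supplies the authenticated name and the requested action/target, and a single policy governs every resource in the domain.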
[September 06, 2002] "OpenDRM: A Standards Framework for Digital Rights Expression, Messaging and Enforcement." By John S. Erickson (Hewlett-Packard Laboratories, Norwich, Vermont, USA). The OpenDRM Project. Revised September 2002. 11 pages. Paper prepared for the NSF Middleware Initiative (NMI) and Digital Rights Management (DRM) Workshop, September 9, 2002. ['The lack of open, accessible, interoperable standards for digital rights management has often been cited by stakeholders as a leading cause for the slow adoption of DRM technologies. The fact that layered standards can contribute to interoperability should be obvious; that DRM standards developed in an open environment can contribute to the public interest is a more subtle, but equally important point. This document is a collection of thoughts that I have been developing and maintaining for several years on the notion of a multi-layered, open DRM standards architecture, which I think of as OpenDRM. Some aspects of this argument have been articulated in earlier works...']. Excerpts: "At its core, OpenDRM should provide a framework that defines open interfaces between at least three architectural levels of abstraction: rights expression languages, rights messaging protocols and mechanisms for policy enforcement and compliance... So-called rights expression languages provide the basis for expressing rights information and usage control policies, and as such can supply the payload vocabularies for a variety of rights messaging applications including: (1) Intellectual property rights (IPR) information discovery; (2) Simple policy expression, including constraints on access to resources; (3) Rights negotiation and trading, including rights requests and/or claims by information users; (4) The expression of rights agreements and electronic contracts (e-Contracts). Minimally, a rights language should provide vocabulary and syntax for the declarative expression of rights and rights restrictions. 
In order to guarantee interoperability and evolvability, we would expect a rights language to be inherently extensible: it must provide an open-ended way to express rights not anticipated by the language 'core.' Such extensions might accommodate new operations on content, including uses that are specific to particular media domains, or new contextual constraints... We believe that rights languages will prove to be important enablers of interoperability for systems that mediate access to resources. Although the principles underlying IPR-specific metadata have been around since at least 1993, the importance of standardized vocabularies and formats for expressing IPR policies has only recently begun to be appreciated at the application level [e.g., XRML, ODRL]. More recently, at least one alternative rights metadata schema has been developed for articulating a rightsholder's desire to turn over to the public domain specific rights to works [Creative Commons]. It should be assumed that IPR policies must be expressed at different levels of abstraction, and therefore different vocabularies will be appropriate. To avoid chaos and to facilitate interoperability, we believe that some sort of rights language ontology will be required: a set of reusable terms for the basic rights concepts that can be mapped to different syntaxes. This is precisely the work that was begun by the <indecs> project in 1998, and is a fundamental basis of ODRL and, more completely, the <indecs>2-RDD Rights Data Dictionary project... This concept of interoperability through shared ontology is similar to what the ebXML working group has been trying to achieve with their core components approach to building interoperable business objects. Following that model, existing or future IPR expression languages could interoperate through translation via this shared semantic layer, rather than necessarily forcing applications and services to use a single common language..." 
See: (1) "XML and Digital Rights Management (DRM)"; (2) "OASIS Rights Language." [cache]
[September 06, 2002] "Testing Tools Are Key To Web Services' Success. Early adopters are finding value in the new breed of products." By Mary Hayes and Rick Whiting. In InformationWeek Issue 904 (September 02, 2002), pages 56-60. "Web services have the potential to vastly improve everything from software coding to the user experience. But the fundamentals of Web services -- assembling applications over the Web using open interfaces and protocols -- create their own challenges. What can companies do to ensure that the Web services they build will operate flawlessly and stand up to the rigors of E-business?... Several technologies are integral to Web services, including XML; the Simple Object Access Protocol (SOAP) messaging protocol; the Universal Description, Discovery, and Integration (UDDI) format for application identification; and the Web Services Description Language (WSDL), a common language for application description. These technologies work together to retrieve and deliver data, and the world's biggest software vendors, including IBM, Microsoft, and Oracle, have agreed to support them in their Web-oriented applications and operating systems. The Parasoft SOAPtest tool that ABN Amro is using measures three main areas of quality testing: functionality (how well the program works), load (how it performs under strain), and regression (whether any code changes result in problems). SOAPtest can evaluate both the performance of SOAP transactions at the server level and the user experience at the client level. It does black-box testing, which compares the actual responses from a Web service to the desired responses, and white-box testing, which tests the internal construction of the components that provide Web services... Large-scale, distributed Web-services applications present testing challenges for companies such as Covarity Inc., which develops applications for financial-services companies, including .Net Web services used to connect borrowers and loan providers. 
'You don't control the configurations,' CTO Jeff Fedor says. 'You don't control the environment as much as you do in an enterprise. We're surprised every day by user configurations.' Covarity uses Rational's XDE Web-services development tool and the vendor's Purify tool to check new Web-services applications for memory leaks, Quantify to test newly developed code for performance bottlenecks, and Robot for load-testing..."
[September 06, 2002] "Weaving A Web of Ideas. Engines that search for meaning rather than words will make the Web more manageable." By Steven M. Cherry. In IEEE Spectrum Online (September 2002). "... If we couldn't build intelligent software agents to navigate a simplistic Web, can we really build intelligence into the 3 billion or 10 billion documents that make up the Web? ... The first step is to get a fulcrum under the mountain and lift it, and it is well under way. That fulcrum is the extensible markup language (XML)... XML builds on a second fundamental Web technique: coding elements in a document... The resource description framework (RDF) is the third component of the Semantic Web. An RDF makes it possible to relate one URI to another. It is a sort of statement about entities, often expressing a relation between them. An RDF might express, for example, that one individual is the sister of another, or that a new auction bid is greater than the current high offer. Ordinary statements in a language like English can't be understood by computers, but RDF-based statements are computer-intelligible because XML provides their syntax -- marks their parts of speech, so to speak... The Semantic Web notion that ties all the others together is that of an ontology -- a collection of related RDF statements, which together specify a variety of relationships among data elements and ways of making logical inferences among them. A genealogy is an example of an ontology. The data elements consist of names, the familial relationships... 'Syntax,' 'semantics,' and 'ontology' are concepts of linguistics and philosophy. Yet their meanings don't change when used by the theorists in the Semantic Web community. Syntax is the set of rules or patterns according to which words are combined into sentences. Semantics is the meaningfulness of the terms -- how the terms relate to real things. 
And an ontology is an enumeration of the categories of things that exist in a particular universe of discourse (or the entire universe, for philosophers)... Valuable as the Semantic Web might be, it won't replace regular Web searching. Peter Pirolli, a principal scientist in the user interface research group at the Palo Alto Research Center (PARC), notes that usually a Web querier's goal isn't an answer to a specific question. 'Seventy-five percent of the time, people are engaged in what we call sense-making,' Pirolli says. Using the same example as Berners-Lee, he notes that if someone is diagnosed with a medical problem, what a family member does first is search the Web for general information. 'They just want to understand the condition, possible treatments, and so on.' PARC researchers think there's plenty of room for improving Web searches. One method, which they call scatter/gather, takes a random collection of documents and gathers them into clusters, each denoted by a single topic word, such as 'medicine,' 'cancer,' 'radiation,' 'dose,' 'beam.' ... The method works by precomputing a value for every word in the collection in relation to every other word. 'The model is a Bayesian network, which is the same model that's used for describing how long-term memory works in the human brain,' Card says. According to this picture of long-term memory (there are others), neurons are linked to one another in a weighted fashion (represented by synapses)... The current king of the Web search world, Google, doubts the Web will ever be navigable by computers on their own. For Autonomy, Bayesian networks are the starting point for improved searches. The heart of the company's technology, which it sells to corporations like General Motors and Ericsson, is a pattern-matching engine that distinguishes different meanings of the same term and so 'understands' them as concepts. 
Autonomy's system, by noting that the term 'engineer' sometimes occurs in a cluster with others like 'electricity,' 'power,' and 'electronics' and sometimes with 'cement,' 'highways,' and 'hydraulics,' can tell electrical from civil engineers. In a way, Autonomy builds an ontology without XML and RDFs..." See: "XML and 'The Semantic Web'."
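The RDF idea the article sketches -- statements that relate one URI to another, plus ontology rules that let software draw inferences -- fits in a few lines once triples are written out explicitly. The URIs, the facts, and the aunt rule below are all invented for illustration:

```python
# RDF-style statements as (subject, predicate, object) triples.
triples = {
    ("ex:Mary", "ex:sisterOf", "ex:John"),
    ("ex:John", "ex:parentOf", "ex:Sue"),
}

def infer_aunts(facts):
    """Apply one toy ontology rule: a parent's sister is an aunt."""
    inferred = set(facts)
    for (a, p1, b) in facts:
        for (c, p2, d) in facts:
            if p1 == "ex:sisterOf" and p2 == "ex:parentOf" and b == c:
                inferred.add((a, "ex:auntOf", d))
    return inferred

print(("ex:Mary", "ex:auntOf", "ex:Sue") in infer_aunts(triples))
```

This is exactly the genealogy example in miniature: no single statement says Mary is Sue's aunt, but a machine holding the triples and the rule can conclude it, which is the kind of inference plain keyword search cannot make.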
[September 05, 2002] "XML Method Call (XMC)." By Adam Megacz (The XWT Foundation). IETF Internet-Draft. Reference: 'draft-megacz-xmc-01.txt'. September 01, 2002, expires February 2003. "This memo describes the XML Method Call (XMC) protocol. XMC is a simple presentation-layer protocol for network transactions with method call semantics and payloads consisting of small object trees. XMC specifies the request and response protocol, an XML representation for the object trees, and a tree-encoding for graphs. XMC is forward and backward compatible with XML-RPC. XMC clients can make calls to XML-RPC servers, and XML-RPC clients can make calls to XMC servers... Each network transaction consists of a single request sent in its entirety by the client, followed by a single response sent in its entirety by the server. Both the request and the response consist of object trees built from a small set of primitive types. The object trees are encoded in XML, providing an easily human-readable wire format which aids debugging. A standardized mapping from graphs to trees is provided for applications which need to encode multi-reference data. XMC assumes that the request and response are each small enough to be held entirely in main memory before being passed up to the application layer..." [cache]
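Because XMC is wire-compatible with XML-RPC, the request and response trees it describes can be produced and parsed with any XML-RPC library, such as Python's standard xmlrpc.client module. The method name and parameters here are invented for the example:

```python
# Encoding and decoding an XML-RPC/XMC-style method call with the
# Python standard library.
import xmlrpc.client

# Client side: encode a method call whose payload is a small object
# tree of primitive types (here, a struct of a string and an int).
request = xmlrpc.client.dumps(({"symbol": "IBM", "count": 10},),
                              methodname="quotes.get")

# Server side: decode the request back into (parameters, method name).
params, method = xmlrpc.client.loads(request)
print(method)
print(params[0]["symbol"])
```

In a real deployment the encoded request would travel in its entirety from client to server and the encoded response back, matching the single-request/single-response transaction model the draft specifies.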
[September 05, 2002] "BPML | BPEL4WS: A Convergence Path Toward a Standard BPM Stack." BPMI.org Position Paper. August 15, 2002. ['The release by BEA, IBM, and Microsoft of BPEL4WS, a new language for the modeling of executable business processes, adds another candidate specification to the emerging standard BPM stack. The future directions announced for BPEL4WS follow the footsteps of BPML identifying possible paths of convergence for the BPM industry.'] "Prior to this BPEL4WS release, the emerging BPM industry has been considering multiple alternative paths for the modeling of executable business processes. Microsoft pioneered the adoption of the Pi-Calculus model with XLANG, IBM rejuvenated the use of Petri Nets with WSFL, and BPMI.org unified the two approaches with BPML 1.0. Alongside such parallel efforts, other organizations advocated radically different approaches for business process modeling, such as ebXML BPSS developed by OASIS... BPML and BPEL4WS share similar roots in Web Services (SOAP, WSDL, UDDI), take advantage of the same XML technologies (XPath, XSDL), and are designed to leverage other specifications (WS-Security, WS-Transactions). Beyond these areas of commonality, BPML supports the modeling of real-world business processes through its unique support for advanced semantics such as nested processes and complex compensated transactions, capabilities BPEL4WS has yet to address. The authors of the BPEL4WS specification acknowledge such limitations in Section 13 of their recent draft, thus identifying a clear path of convergence toward a model similar to BPML's. Now that the BPM industry has started to consolidate on a common vision for Business Process Management, BPMI.org's original mission is more critical than ever. The Initiative's mission is to promote and develop the use of Business Process Management (BPM) through the establishment of standards for process design, deployment, execution, control, and optimization. 
In that respect, BPMI.org is not only interested in the execution side of business processes -- currently covered by specifications such as BPML and BPEL4WS -- but also their design by business analysts through the development of the Business Process Modeling Notation (BPMN), as well as their deployment, control, and optimization, through the development of the Business Process Query Language (BPQL). With such developments, BPMI.org remains the first and only independent organization fully dedicated to the development of a royalty-free BPM stack... Web Service Choreography Interface (WSCI), which was approved as a note by the W3C on August 8, 2002, is best described as a process interface definition language for business processes, and is the largest common denominator of BPML and BPEL4WS. By offering 'out-of-the-box' interoperability across these two languages as well as ebXML BPSS and WfMC's XPDL, WSCI has greatly contributed to the consolidation of a standard BPM stack..." See: (1) "Business Process Modeling Language (BPML)"; (2) "Business Process Execution Language for Web Services (BPEL4WS)." [cache]
[September 05, 2002] "Web Services Firm Defies IT Giants with BPML Standard." By [CW360 Staff]. In ComputerWeekly.com (September 05, 2002). ['Business process management specialist Popkin Software has announced plans to support Business Process Modelling Language (BPML) 1.0 in its System Architect enterprise modelling tool.'] "BPML, proposed by the Business Process Management Initiative (BPMI), which features Popkin as an author and member, provides a standard for the modelling of executable end-to-end business processes. It supports XML schema-based definitions for streamlining communications among systems and modelling tools used in Web services. However, BPML overlaps with a newly introduced specification, Business Process Execution Language for Web Services (BPEL4WS), proposed by industry heavyweights IBM, Microsoft, and BEA Systems in August. IBM and BEA are both members of BPMI. BPEL4WS 'competes directly with BPML,' said Martin Owen, head of consulting at Popkin in the UK and the company's BPMI representative. 'Both serve the same purpose. There's a lot of overlap between the two,' Owen said. BPML has a two-year to three-year lead in development over BPEL4WS, but neither specification can be considered superior to the other at the moment, he said. Popkin has been examining both standards, and talks about cooperation between the two camps have been ongoing, Owen said; the company intends to comply with all modelling techniques in its tool. Proponents of BPML have been supportive of BPEL4WS, saying the two specifications are so similar technically that they are likely to converge..." See: (1) the announcement from Popkin "Popkin Software to Offer Integrated Support for Release 1.0 of Business Process Modeling Language (BPML). New Standard Offers Transactional and Collaborative Business Process Modeling."; (2) "Business Process Modeling Language (BPML)."; (3) "Business Process Execution Language for Web Services (BPEL4WS)."
[September 05, 2002] "Schema Extraction from XML Collections." By Boris Chidlovskii (Xerox Research Centre Europe, Grenoble Laboratory, Meylan, France; WWW). Pages 291-292 in Proceedings of the Second ACM/IEEE-CS Joint Conference on Digital Libraries (JCDL 2002, Portland, Oregon, USA, July 14-18, 2002). Program Session: Federating and Harvesting Metadata. "XML Schema language has been proposed to replace Document Type Definitions (DTDs) as the schema mechanism for XML data. This language consistently extends grammar-based constructions with constraint- and pattern-based ones and has higher expressive power than DTDs. As schemas remain optional for XML, we address the problem of XML Schema extraction. We model the XML schema as extended context-free grammars and develop a novel extraction algorithm inspired by methods of grammatical inference. The algorithm also copes with the schema determinism requirement imposed by XML DTDs and XML Schema languages... Our contribution in this paper is three-fold. First, we adopt the XML schema formalism based on extended context-free grammars (ECFG) with range regular expressions allowed in nonterminal productions; such regular expressions combine grammatical forms and constraints for nonterminals and element groups similarly to constructions of XML Schema language. Second, with the proposed schema model, we address the problem of schema extraction from XML collections. The ECFG-based schema model makes the extraction problem more complex than in the DTD extraction case, so we identify three important components, namely, (1) induction of the context-free grammars from XML documents represented as structured examples, (2) generalization of content strings into regular expressions, and (3) constraining datatypes for simple XML elements. The second problem is the same as with the DTD extraction, but the first and third ones are relevant to the powerful schema mechanisms offered by novel XML schema languages. 
For the first problem, we extend the method of CFG inference from structural examples. For datatype constraining, we develop an algorithm based on the subsumption relationships among elementary datatypes in XML Schema language. For content generalization, we propose a solution alternative to the DTD extraction, in order to cope with the occurrence constraints in schemas. Third, we address the determinism requirement, imposed by both XML DTDs and XML Schema to easily validate XML data against corresponding schema. Determinism can essentially constrain the power of the ECFG model, as a large part of grammars does not provide the feature. We study both horizontal and vertical determinism, which address the ease of vertical and horizontal navigation in an XML document tree... We have developed a novel schema extraction algorithm for XML documents. To our knowledge, this is the first attempt to induce XML schema that unifies the expressive power of ECFGs and the determinism requirement. We have identified three important components of the extraction algorithm, namely, the grammar induction itself, content generalization, and tight datatype identification, and have developed sophisticated solutions for each of them..." See the related paper "Schema Extraction from XML Data: A Grammatical Inference Approach" presented at the KRDB 2001 Workshop on Knowledge Representation and Databases, Rome, Italy, September 15, 2001. [cache KRDB 2001 paper]
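The content-generalization step the paper describes (turning observed child sequences into regular-expression-like content models) can be sketched naively: collapse consecutive repeats of one element into name+. This is an invented simplification of the authors' grammar-inference machinery, not their algorithm.

```python
import xml.etree.ElementTree as ET
from itertools import groupby

def content_models(xml_text, tag):
    """Collect the child sequence of every <tag> element and generalize
    consecutive repeats into 'name+', a crude analogue of inferring a
    content-model regular expression from structured examples."""
    models = set()
    for node in ET.fromstring(xml_text).iter(tag):
        parts = []
        for child_tag, run in groupby(child.tag for child in node):
            count = sum(1 for _ in run)
            parts.append(child_tag if count == 1 else child_tag + "+")
        models.add("(" + ", ".join(parts) + ")")
    return models

sample = ("<lib>"
          "<book><title/><author/><author/></book>"
          "<book><title/><author/></book>"
          "</lib>")
models = content_models(sample, "book")
```

Here the two books yield the distinct models "(title, author)" and "(title, author+)"; merging them into a single generalized model, under the determinism requirement, is exactly the harder problem the paper tackles.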
[September 05, 2002] "Architectures for Intelligent Systems." By John F. Sowa. In IBM Systems Journal Volume 41, Number 3 (2002). Special Issue on Artificial Intelligence. "People communicate with each other in sentences that incorporate two kinds of information: propositions about some subject, and metalevel speech acts that specify how the propositional information is used -- as an assertion, a command, a question, or a promise. By means of speech acts, a group of people who have different areas of expertise can cooperate and dynamically reconfigure their social interactions to perform tasks and solve problems that would be difficult or impossible for any single individual. This paper proposes a framework for intelligent systems that consist of a variety of specialized components together with logic-based languages that can express propositions and speech acts about those propositions. The result is a system with a dynamically changing architecture that can be reconfigured in various ways: by a human knowledge engineer who specifies a script of speech acts that determine how the components interact; by a planning component that generates the speech acts to redirect the other components; or by a committee of components, which might include human assistants, whose speech acts serve to redirect one another. The components communicate by sending messages to a Linda-like blackboard, in which components accept messages that are either directed to them or that they consider themselves competent to handle..." Note from a posting of Sowa: "This paper outlines an architecture that is being developed by the new company, VivoMind LLC, which is in the process of being organized now. That architecture is based on message passing technology that we intend to make publicly available in order to make it easier to incorporate independently developed modules into an AI system. Some modules may be free, open-source code, and others may be commercial, proprietary code. 
But as long as they observe the common interfaces specified in the architecture, they can all communicate and interact as part of a distributed system..." Also available from the author's website. See: "Conceptual Modeling and Markup Languages."
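The Linda-like blackboard Sowa describes can be sketched minimally: components post messages, and a component takes a message either because it is addressed to it or because its own guard judges it competent to handle it. The message fields and guard functions below are invented for illustration, not VivoMind's design.

```python
class Blackboard:
    """Minimal Linda-style blackboard: components post messages, and
    each component removes the messages its guard function accepts."""
    def __init__(self):
        self._messages = []

    def post(self, message):
        self._messages.append(message)

    def take(self, accepts):
        """Remove and return the first message the guard accepts,
        or None if nothing matches."""
        for i, message in enumerate(self._messages):
            if accepts(message):
                return self._messages.pop(i)
        return None

board = Blackboard()
board.post({"act": "question", "to": "planner", "body": "next step?"})
board.post({"act": "assertion", "topic": "parsing", "body": "done"})

# A parser component accepts anything on a topic it handles,
# whether or not the message was addressed to it.
msg = board.take(lambda m: m.get("topic") == "parsing")
```

The guard function plays the role of the "competence" test in the paper: routing is decided by the receivers, so the architecture can be reconfigured just by changing which guards the components run.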
[September 05, 2002] ebXML Message Service Specification. Version 2.0. Produced by the OASIS ebXML Messaging Services Technical Committee. 1-April-2002. 70 pages. ['Version 2.0 of this Technical Specification document is presented to the OASIS membership for consideration as an OASIS Technical Specification, April 2002.'] "This specification is one of a series of specifications realizing the vision of creating a single global electronic marketplace where enterprises of any size and in any geographical location can meet and conduct business with each other through the exchange of XML based messages. The set of specifications enable a modular, yet complete electronic business framework. This specification focuses on defining a communications-protocol neutral method for exchanging electronic business messages. It defines specific enveloping constructs supporting reliable, secure delivery of business information. Furthermore, the specification defines a flexible enveloping technique, permitting messages to contain payloads of any format type. This versatility ensures legacy electronic business systems employing traditional syntaxes (i.e. UN/EDIFACT, ASC X12, or HL7) can leverage the advantages of the ebXML infrastructure along with users of emerging technologies... The specification defines the ebXML Message Service Protocol enabling the secure and reliable exchange of messages between two parties. It includes descriptions of: (1) the ebXML Message structure used to package payload data for transport between parties, (2) the behavior of the Message Service Handler sending and receiving those messages over a data communications protocol. This specification is independent of both the payload and the communications protocol used. Appendices to this specification describe how to use this specification with HTTP and SMTP..." 
See also: (1) the 2002-09-05 announcement "ebXML Messaging Service Specification Approved As OASIS Standard"; (2) "Electronic Business XML Initiative (ebXML)."
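A rough sketch of the enveloping idea: the ebXML MessageHeader, carried in the SOAP Header, identifies the parties, service, and action independently of whatever payload the message carries. The element names below follow the specification's MessageHeader outline, but the namespace URI and all values are placeholders, not the official ones.

```python
import xml.etree.ElementTree as ET

EB = "urn:example:ebxml-msg"  # placeholder, not the official namespace URI

def message_header(sender, receiver, service, action, message_id):
    """Build the routing metadata that travels with any payload type."""
    hdr = ET.Element(f"{{{EB}}}MessageHeader")
    for tag, party in (("From", sender), ("To", receiver)):
        el = ET.SubElement(hdr, f"{{{EB}}}{tag}")
        ET.SubElement(el, f"{{{EB}}}PartyId").text = party
    ET.SubElement(hdr, f"{{{EB}}}Service").text = service
    ET.SubElement(hdr, f"{{{EB}}}Action").text = action
    data = ET.SubElement(hdr, f"{{{EB}}}MessageData")
    ET.SubElement(data, f"{{{EB}}}MessageId").text = message_id
    return hdr

hdr = message_header("urn:buyer", "urn:seller", "PurchaseOrder",
                     "SubmitOrder", "msg-001")
```

Because routing lives entirely in this header, the payload slots can hold UN/EDIFACT, ASC X12, HL7, or XML content unchanged, which is the versatility the specification emphasizes.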
[September 05, 2002] "OASIS Group Approves Key Part of ebXML Specification." By Richard Karpinski. In InternetWeek (September 05, 2002). "The OASIS group, a key maker of Web services standards, Thursday said a new e-business messaging specification has been approved as an OASIS standard. The ebXML Message Service Specification 2.0 was approved by OASIS' membership, a requirement for final standards approval. The new standard provides a secure method for exchanging electronic business transactions over the Internet. It is the culmination of work by OASIS and the United Nations Center for Trade Facilitation and Electronic Business, or UN/CEFACT. The new service 'effectively bridges legacy EDI with emerging Web services-based infrastructure, technologies, interaction patterns, and XML documents,' according to Brian Gibb, vice chair of the OASIS ebXML Messaging Services technical committee and director of standards and applied technology for Sterling Commerce. The OASIS standards process is a lengthy one. First, the messaging specification was created by a group technical committee. It was then deployed by at least three organizations. That was followed by a 90-day open review followed by the balloting of the full OASIS membership. ebXML Messaging Service v2 was developed by Commerce One, Cyclone Commerce, eXcelon, Fujitsu, GE Global Exchange, IBM, Intel, Mercator, SAP, SeeBeyond, Sonic Software, Sterling Commerce, Sun Microsystems, Sybase, and webMethods, among others..." See also: (1) the 2002-09-05 announcement "ebXML Messaging Service Specification Approved As OASIS Standard"; (2) "Electronic Business XML Initiative (ebXML)."
[September 05, 2002] "Web Services and Context Horizons." By Clay Shirky (WWW). In IEEE Computer Magazine Volume 35, Number 9 (September 2002), pages 98-100. ['For Web services to succeed, software engineers must rethink what the terms "local" and "global" mean in an Internet-scale operating system.'] "In their current state, Web services are simply plumbing for the exchange of XML documents using SOAP. The most widely known example of a Web service is the Google API, which lets remote applications send search requests to the Google search engine packaged as a SOAP call. Google executes the request and returns a structured XML document containing the results to the calling program. This lets software designers build Internet searching directly into an application, without having to write a custom screen-scraper function to extract Google results from an ordinary Web page. More interesting possibilities lie in using Web services to integrate multiple functions... A single request can trigger a cascade of subrequests until it becomes impossible to design a Web services application that knows in advance all the parts it will ultimately use... There are many kinds of context horizons, some specific to individual applications. Financial institutions, for example, require various atomic transactions that are difficult to link together via the kind of 'loose coupling' that the Web relies on. Three of the most important are trust horizons, semantic horizons, and coordination horizons. (1) Trust horizons: Unlike the one-to-one matching between individual users and machines that typically occurs today, trust horizons will require more subtle constructions. The obvious tools for handling trust horizons -- a set of identities for people, machines, and transactions capable of validation and tracking -- do not yet exist, and the concept is surrounded by contentious privacy and control issues. 
As Web services grow, the imperative to manage increasingly complex trust horizons will drive development of Internet-scale identity and authorization systems... (2) Semantic horizons: Web services' current state is analogous to international snail mail. The ability to send a letter from one country to another does not guarantee that the recipient will be able to read it. The problem is lack of a global ontology, a single framework for describing everything. A seemingly simple phrase such as 'catalytic' means one thing to an automotive engineer, another to a biochemist, and yet another to a business consultant. Because Web services must support numerous user groups, a request passing from one context to another with a subtly altered meaning will be a constant possibility. Indeed, the ease of creating XML documents will motivate more people to propose standards. Sorting out the semantic collisions, and determining how to write applications that know what to do when they are talking to remote Web services that do not share their semantic scope, represent significant challenges... (3) Coordination horizons: There is no limit on the number or type of Web services that could be woven together to create a particular application, nor is there a limit to the number of applications a given Web service can participate in. Depending on the handling of trust horizons, a given Web service might not even know how many different applications it is part of, and a given calling program might not know how many Web services are participating in a given request... To succeed even moderately, Web services will require a new software engineering philosophy that redefines what the terms local and global mean in a large-scale networked environment..."
[September 04, 2002] "Tool Gives WSDL Programmers a Hand." By Darryl K. Taft. In eWEEK (September 04, 2002). "In a move to establish itself as a key provider of technology for creating and editing Web Services, Cape Clear Software Inc. Wednesday announced a free editor for the Web Services Definition Language (WSDL). The Cape Clear WSDL Editor delivers an environment for rapid WSDL development and supports both novice and experienced programmers, the company said. Cape Clear officials said the WSDL Editor includes wizards that eliminate some of WSDL's complexity; WSDL validation, which simplifies testing; and support for the rapid creation of Web Services from XML Schema. Other features include the import of any XML Schema, including industry standards; support for WSDL validation, where WSDL is tested against WSDL Schema; support for WSDL profiling, so WSDL can be validated against customized profiles for specific requirements such as compatibility with Web Services Interoperability Organization profiles; support for advanced WSDL capabilities, such as imports, faults, Simple Object Access Protocol (SOAP) headers, multiple bindings and parameter ordering; and support for the latest WSDL specification. 'The WSDL Editor is to Web Services development what WYSIWYG HTML editors were to Web page development,' said John Maughan, business manager for Cape Clear's CapeStudio Web services development, in a statement. 'It offers an intuitive graphical environment for the design of Web Services and, in particular, assists developers who wish to create Web Services from existing XML interfaces... Many developers are struggling with the complexities of WSDL; the WSDL Editor is designed to help them out'..." See: (1) the news item "Cape Clear Software Releases Free WSDL Editor Graphical Tool"; (2) "Web Services Description Language (WSDL)."
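The kind of structural check such an editor automates can be sketched by walking a WSDL 1.1 document and listing the operations each portType declares. The namespace URI is the standard WSDL 1.1 one; the tiny quote service below is invented.

```python
import xml.etree.ElementTree as ET

WSDL = "http://schemas.xmlsoap.org/wsdl/"  # WSDL 1.1 namespace

def port_type_operations(wsdl_text):
    """Map each portType name to the operations it declares, the sort
    of inventory a WSDL editor shows before validation."""
    root = ET.fromstring(wsdl_text)
    return {pt.get("name"): [op.get("name")
                             for op in pt.findall(f"{{{WSDL}}}operation")]
            for pt in root.iter(f"{{{WSDL}}}portType")}

sample = """<definitions xmlns="http://schemas.xmlsoap.org/wsdl/">
  <portType name="QuoteService">
    <operation name="getQuote"/>
    <operation name="listSymbols"/>
  </portType>
</definitions>"""
ops = port_type_operations(sample)
```

A real editor validates the whole document against the WSDL schema and interoperability profiles; this sketch only shows why machine-readable service descriptions make such tooling straightforward.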
[September 04, 2002] "Get Up To Speed With SMIL 2.0. An XML-Based Approach to Integrating Multimedia into Web Content." By Anne Zieger (Chief Analyst and founder, PeerToPeerCentral.com). From IBM developerWorks, XML Zone. September 2002. ['SMIL 2.0, the Synchronized Multimedia Integration Language, has begun to establish itself as an important new approach for integrating multimedia into Web content. SMIL, which offers XML-based approaches for controlling the timing and presentation of multimedia elements, has begun to attract the support of many large software vendors and toolmakers, making it increasingly accessible for developers. In this article, Anne Zieger provides an overview of SMIL and describes several tools available to make SMIL coding simpler.'] "For developers outside the multimedia world, the Synchronized Multimedia Integration Language, or SMIL, may be something of an obscure technology. But at least among a few key players, SMIL has begun to establish itself as an important approach to presenting multimedia online. SMIL support has crept into technologies backed by Adobe, Microsoft, and perhaps most prominently, media delivery leader Real Networks. A wide variety of smaller vendors have begun to provide SMIL authoring tools and players as well. In days to come, as support for the current 2.0 specification grows, working with SMIL could become a standard strategy for any developer whose work requires some form of multimedia asset control. If the growing roster of tool creators is any indication, building presentations in SMIL should become easier as well... SMIL is an XML-based language that allows authors to write interactive multimedia presentations without using multimedia management tools such as Macromedia Director. Authors can describe the timing of multimedia presentations, associate hyperlinks with media objects and define the layout of the presentation onscreen. 
The SMIL 2.0 spec, for its part, is a series of markup modules defining semantics and XML syntax for certain SMIL functions ... As SMIL's popularity grows, developers are branching out into tools and tactics borrowed from other coding environments. Independent projects adding power or functionality to SMIL include PerlySMIL, a tool that creates dynamic SMIL files using Perl, and Cheshire Cat, a project that integrates SMIL with industry standard multimedia authoring tool Macromedia Director. Future projects bringing SMIL into other programming worlds seem likely, with Java technology-related projects an especially likely target: (1) Soja, a Java-based SMIL 1.0 player already created by the French non-profit development house Helio; (2) Schmunzel SMIL 1.0 player created in Java technology by SunTREC Salzburg; (3) X-SMILES, a Java-based open browser supporting XML. As SMIL 2.0 adoption continues, Java technology projects embracing the 2.0 standard are almost certain to follow. The already flourishing group of tools for SMIL is also likely to grow in coming months, as Web design specialists reach out for new multimedia options and multimedia houses continue to seek smoother Web delivery..." See: (1) W3C Synchronized Multimedia website; (2) "Synchronized Multimedia Integration Language (SMIL)."
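SMIL's core timing containers run children in parallel (par) or in sequence (seq). A naive duration calculator over a toy fragment shows those semantics; real SMIL clock values are richer than the plain seconds assumed here, and the media file names are invented.

```python
import xml.etree.ElementTree as ET

def duration(el):
    """Seconds a SMIL fragment plays: <par> lasts as long as its
    longest child, <seq> as long as all children in turn, and media
    elements read their 'dur' attribute (assumed to be plain seconds)."""
    if el.tag == "par":
        return max((duration(child) for child in el), default=0.0)
    if el.tag == "seq":
        return sum(duration(child) for child in el)
    return float(el.get("dur", "0s").rstrip("s"))

presentation = ET.fromstring("""<seq>
  <par>
    <img src="slide1.png" dur="5s"/>
    <audio src="narration.rm" dur="7s"/>
  </par>
  <img src="slide2.png" dur="3s"/>
</seq>""")
```

The slide and narration play together for 7 seconds (the longer of the two), then the second slide adds 3 more, so the presentation runs 10 seconds in total.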
[September 04, 2002] "Explore Online XML Data With Java Programming. Forecast the Weather and More With XML Parsing." By Dan Becker (Software developer, IBM Software Group). From IBM developerWorks, Java technology. September 2002. ['As publishing information on the Internet becomes more prevalent, it makes sense to discover and query this information. This article explains how to use Java programming to get Web-based XML data, parse the data, filter out the elements and attributes you need, and to perform work with the requested information. This article will enable you to adapt this code to explore all sorts of Web data. The article was first published in the August 2002 issue of the IBM developerWorks Journal, edited by Theodore J. Shrader.'] "The Extensible Markup Language (XML) is one technology that you can use to describe data to make it easier for businesses and consumers to share information. Examples of XML information on the Web include weather information, stock tickers, package shipment trackers, airline fares, auction prices, and jokes of the day. Why would you want to access this data? Perhaps you want to save and track the weather data for your hometown, or perhaps you want to write a small utility to track your packages. Assuming that you are a Java developer, you will find much of this XML information easy to parse and manipulate. Although there are HTML pages that display this information, much of the data originates in XML format and is converted to HTML at the Web server. Many Web sites offer information in both formats: HTML for older browsers and surfers who simply want to see the data, and XML format for net-savvy programmers who want to collect and analyze the data. Rather than scraping the data from an HTML page, it is much easier to use Java programming to parse and collect the XML information. First, standard XML parsers exist and are easy to download. 
Second, because the structure of a document might change over time, often it is easier to locate XML elements and attributes than HTML tags... The article explains how you can take advantage of Java programming to explore online XML data. Perhaps the biggest step is to find sites that are publishing the content in which you are interested. Once you find a site that interests you, the process of extracting data is the same for all XML documents: first, you request the document, then you parse it, and finally, you filter out the element and attribute data that are interesting. By using the standard XML parsers, you get a more robust tool than writing one yourself. In addition, by using an XML document, your code is more capable of handling any data rearrangement that an HTML parser might miss..."
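The request-parse-filter loop the article describes takes only a few lines once the document is in hand. The article uses Java; the same steps are sketched here in Python with an invented weather feed.

```python
import xml.etree.ElementTree as ET

def pick(xml_text, tag, attr):
    """The filter step: parse the document and pull one attribute
    from every matching element."""
    return [el.get(attr) for el in ET.fromstring(xml_text).iter(tag)]

# A made-up feed standing in for the XML a weather site might publish.
feed = """<weather>
  <city name="Austin" temp="92"/>
  <city name="Dublin" temp="61"/>
</weather>"""
```

In practice the first step would fetch the feed over HTTP (for example with urllib.request.urlopen) before parsing; selecting by element and attribute names, rather than scraping HTML layout, is what makes the code robust to page redesigns.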
[September 03, 2002] "Microsoft, IBM Diverge on Services Standards." By Darryl K. Taft. In eWEEK (September 02, 2002). "As base specifications for Web services standards begin to reach maturity, Microsoft Corp. and IBM, longtime partners in the arena, appear to be moving in different directions to fill proprietary needs. As a result, developers could ultimately be forced into one of several development camps. To date, the two companies have worked together to ensure compatibility and interoperability of the foundations of Web services. For instance, last week at the XML Web Services One conference here, each company showed Java 2 Enterprise Edition and .Net code being swapped in and out in a basic stock-trading Web services application. However, beyond the basic XML, SOAP (Simple Object Access Protocol), Web Services Description Language, and Universal Description, Discovery and Integration standards, and more recent WS (Web Services)-Security, WS-Coordination and WS-Transaction specifications, the companies are diverging and setting the groundwork for their own specialties... Robert Sutor, IBM's director of e-business standards strategy, in Armonk, N.Y., said IBM has no plans to support WS-Routing and may, down the road, support a routing specification of its own. Sutor said veering off on certain standards is only natural and is part of the process of competing... IBM is also striking out on its own, building a Web services structure to deal with SLAs (service-level agreements), said Giovanni Pacifici, manager of Service Management Middleware at the IBM T.J. Watson Research Center, in Hawthorne, N.Y. The technology provides a language to express SLA contracts, support guaranteed performance and handle complex dynamic fluctuations in service demand, Pacifici said. This SLA management system enables service providers to offer the same Web service at different performance levels, depending on contracts with their customers. 
The technology will become available through IBM's AlphaWorks Web Services tool kit this month and will feature the WSLA XML-based document that gives SLA definitions..."
[September 03, 2002] "Users Cast Wary Eye at Web Services." By Carol Sliwa. In Computerworld (September 02, 2002). ['IT managers are interested but worry about immature standards, lack of skills.'] "IT professionals on an exploratory mission at last week's XML Web Services One conference here expressed keen interest in testing out new technologies to address some of their most painful application integration headaches. But their interest was tempered by a variety of concerns, including immature and sometimes overlapping standards, the potential for differing implementations of those standards by vendors and a dearth of skills at some companies to build Web services that use standard Internet technologies such as XML and the Simple Object Access Protocol to link disparate applications... One ray of hope for [Forum] attendees such as Ensign was a daylong joint presentation by two of the groups working on key Web services standards -- the World Wide Web Consortium and the Organization for the Advancement of Structured Information Standards. But even though the cooperative spirit was encouraging, some users were left with just as many questions as answers... Advanced security issues such as rights management are of great concern to financial services firms as well as to publishers such as LexisNexis, which manages content from a wide range of sources and must control access to meet its business obligations to its content providers and customers. Ensign said he now sees potential overlap among three standards -- Security Assertion Markup Language, Extensible Access Control Markup Language and Extensible Rights Markup Language... 'That's an expensive problem to solve if we have to invent our own solution to every single permissions issue as it comes along,' Ensign said. 
He added that if standards are implemented by vendors in a clear and consistent way, 'our customers and our external service providers can afford to implement their end of any of these service bargains'... 'Having been burned several times, I still need something that's multivendor and interoperable and not driven by one or two vendors, even if they're really good ideas,' agreed Stephen Whitlock, a Seattle-based enterprise security architect at The Boeing Co. 'We need some assurance that it's going to work, that we can switch vendors if we need to.' Whitlock said he looks forward to the day when standards are finalized to address data security at the endpoints of a transaction, since Secure Sockets Layer protects data only during transmission..."
[September 03, 2002] "Information Aggregation." By Carolyn A. April, Heather Harreld, and Matt Berger. In InfoWorld (August 30, 2002). "XML's coming-of-age as a data equalizer is fueling a furious information management push as vendors ranging from BEA Systems and IBM to a cadre of small players aim to simplify the way companies access data scattered across the enterprise. EII (enterprise information integration) technology is middleware that sits on top of applications and other systems. It provides transactional access to data from such disparate sources as packaged applications, e-mail, or content management servers, and delivers it in standard XML format to external targets. Although approaches to EII vary from XML querying to data modeling, they all eliminate the need to physically upload and centralize data, unlike ETL (extraction, translation, and loading) tools for data warehousing or content management databases. Instead, EII leaves data where it is, leveraging metadata repositories across multiple back-end sources to pull information transparently into new applications or portals... EII allows users to query and search more data types tucked away in systems across the enterprise, while also easing integration and development with one-time-only coding to myriad sources. For example, a developer building an enterprise portal could issue a single request for specific data and the EII engine would cull every back-end source, transform the relevant data, and return it in XML format, even if it was originally stored in multimedia such as video. Application server giant BEA is the latest heavyweight to get behind EII with an initiative last month called Liquid Data. Liquid Data is expected to sit in front of databases and file systems, allowing users to search for data in various locations, including databases from Microsoft, Oracle, and IBM. 
BEA, which has been moving aggressively to offer integration solutions, has not announced how it will position Liquid Data in its product line or when it will be released. Database giants IBM, Microsoft, and Oracle are concocting ways to leverage XML and enable multiformat data access, transformation, and integration. Meanwhile, the EII space is seeing a raft of small companies rolling out solutions ahead of their larger brethren... For its part, IBM is touting its forthcoming Xperanto initiative, which is expected to exploit XQuery as well as the XML Schema standard to describe data and XSLT to carry out transformation as data moves in and out of systems, according to IBM officials. Xperanto will be delivered as part of the next version of DB2 Universal Database as a way to present a federated approach to data integration through XML, text search, and data mining technologies, said Nelson Mattos, director of information integration and distinguished engineer at IBM's Silicon Valley Labs..." On BEA's Liquid Data, see the presentation by Colin D. Harnwell "BEA Liquid Data: Turning Distributed Data into Information."
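The single-request pattern described above (query multiple back ends in place, return unified XML) can be sketched with two mock sources; every name and data shape here is invented for illustration, not taken from Liquid Data or Xperanto.

```python
import xml.etree.ElementTree as ET

def federated_lookup(customer_id, crm_rows, orders_xml):
    """EII-style sketch: answer one request by reading a row store and
    an XML feed where they live, then returning one merged XML result,
    without copying either source into a central warehouse."""
    result = ET.Element("customer", id=customer_id)
    row = next(r for r in crm_rows if r["id"] == customer_id)
    ET.SubElement(result, "name").text = row["name"]
    for order in ET.fromstring(orders_xml).iter("order"):
        if order.get("customer") == customer_id:
            result.append(order)
    return ET.tostring(result, encoding="unicode")

crm = [{"id": "c1", "name": "Acme Corp"}]          # mock relational rows
orders = '<orders><order customer="c1" total="99"/></orders>'  # mock feed
merged = federated_lookup("c1", crm, orders)
```

Production EII engines do this declaratively, typically compiling an XQuery over metadata about each source into per-source queries; the sketch only shows the leave-data-in-place, return-XML contract.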
- XML Articles and Papers August 2002
- XML Articles and Papers July 2002
- XML Articles and Papers April - June, 2002
- XML Articles and Papers January - March, 2002
- XML Articles and Papers October - December, 2001
- XML Articles and Papers July - September, 2001
- XML Articles and Papers April - June, 2001
- XML Articles and Papers January - March, 2001
- XML Articles and Papers October - December, 2000
- XML Articles and Papers July - September, 2000
- XML Articles and Papers April - June, 2000
- XML Articles and Papers January - March, 2000
- XML Articles and Papers July-December, 1999
- XML Articles and Papers January-June, 1999
- XML Articles and Papers 1998
- XML Articles and Papers 1996 - 1997
- Introductory and Tutorial Articles on XML
- XML News from the Press