The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: August 31, 2007
XML Daily Newslink. Friday, 31 August 2007

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
IBM Corporation http://www.ibm.com



Future of HTTP at Center of Debate
Carolyn Duffy Marsan, Network World

Experts are now weighing the options of 'tweaking' or 'revamping' HTTP, the Internet's main publishing protocol. "The Hypertext Transfer Protocol (HTTP) is the main way that information is published and retrieved from the World Wide Web. The protocol was developed jointly by the IETF and the World Wide Web Consortium (W3C). In use on the Web since 1990, HTTP was standardized in 1999 in RFC 2616. The IETF recently held a session to debate whether HTTP should be tweaked to fix known errors or completely reworked to address its well-known security weaknesses. Internet luminaries are lining up on either side of the debate. Pushing for minimal corrections of HTTP are Web inventor Tim Berners-Lee, the W3C and engineers from Microsoft, Adobe, and HP. This camp outlined its recommendations for tweaking HTTP in a draft document published by the IETF in June 2007: "The current plan is to incorporate known errata, and to update the specification text according to the current IETF publication guidelines." Others are arguing for the standards bodies to bolster the authorization mechanisms available for HTTP and make them mandatory. This camp argues that built-in authorization would help eliminate the widespread problems of spoofing and phishing, although it also would remove anonymity. HTTP doesn't include built-in security. Instead, two optional security mechanisms known as basic and digest access authentication are outlined in a separate standard known as RFC 2617. Neither of the standardized HTTP authentication methods is popular. Instead, most Web developers use HTML forms with session keys stored in cookies to secure HTTP communications and ensure message confidentiality and integrity. However, cookies have well-known security and privacy problems, too. Participants in the IETF debate favored setting up a new working group to fix HTTP's errors and create a document that outlines known security holes. Whether the group would address HTTP authentication mechanisms remains to be seen. The IETF has tried but failed twice before to establish a working group to fix HTTP problems." [Note: Mark Nottingham, co-chair of the Chicago HTTPbis BOF at the IETF 69, has prepared a BoF Summary document and a proposed "HyperText Transfer Protocol Revision (http-bis) Charter". He actively maintains an "RFC2616bis Issues" list; see URI references in the WG Proposal document.]
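
As a concrete illustration of the contrast the article draws, the following sketch shows an HTTP request authenticated with RFC 2617 Basic authentication next to the cookie-based approach most sites use instead. It is only a minimal example written against the standard java.net.HttpURLConnection and java.util.Base64 APIs; the host name, credentials, and session identifier are placeholders invented for the illustration.

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Base64;

    public class HttpAuthSketch {
        public static void main(String[] args) throws Exception {
            // RFC 2617 Basic authentication: credentials travel in the Authorization
            // header (base64-encoded, not encrypted, so TLS is still needed).
            URL url = new URL("https://www.example.org/protected/report");
            HttpURLConnection basic = (HttpURLConnection) url.openConnection();
            String credentials = Base64.getEncoder()
                    .encodeToString("alice:secret".getBytes("UTF-8"));
            basic.setRequestProperty("Authorization", "Basic " + credentials);
            System.out.println("Basic auth status: " + basic.getResponseCode());

            // The pattern the article says most developers use instead: an HTML form
            // login sets a session key in a cookie, and later requests replay it.
            HttpURLConnection cookie = (HttpURLConnection) url.openConnection();
            cookie.setRequestProperty("Cookie", "SESSIONID=3f9a2c");
            System.out.println("Cookie-based status: " + cookie.getResponseCode());
        }
    }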

See also: references cited in the HTTP WG Proposal


Apache Tuscany SCA Java 0.99 is Released
Developer Team, Apache Tuscany Project

Members of the Apache Tuscany team have announced the 0.99-incubating release of the Java SCA project. Apache Tuscany provides a runtime based on the Service Component Architecture. SCA is a set of specifications aimed at simplifying SOA application development which are being standardized at OASIS as part of Open Composite Services Architecture (Open CSA). This release of Apache Tuscany SCA builds on the stability and modularity established with the previous releases and includes a more complete implementation of the SCA specifications, support for distributed SCA domains, SCA policy, OSGi implementation types, and pub/sub support with notification components. Start-up time and memory footprint of the runtime have been reduced, and there have been numerous bug fixes. This is expected to be the last point release before the 1.0 final release. Apache Tuscany welcomes your help; any contribution, including code, testing, improving the documentation, or bug reporting, is always appreciated. The Tuscany community is working to create a robust and easy-to-use infrastructure that simplifies the development of service-based application networks and addresses real business problems posed in SOA. An essential characteristic of SOA is the ability to assemble new and existing services to create brand new applications that may consist of different technologies. Service Component Architecture (SCA) defines a 'simple' service-based model for the construction, assembly, and deployment of services (existing and new ones). Tuscany is an effort undergoing incubation at the Apache Software Foundation (ASF), sponsored by the Apache Web Services PMC. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision-making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.
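
For a sense of how an application touches the Tuscany runtime, here is a minimal client sketch modeled on the embedded-domain samples distributed with the Java SCA releases of this period. The class and method names (SCADomain.newInstance, getService, close) are recalled from those samples and should be checked against the 0.99 distribution; the composite file name, component name, and LoanService interface are invented for the example.

    import org.apache.tuscany.sca.host.embedded.SCADomain;

    public class LoanClient {

        // Hypothetical service interface implemented by a component in the composite.
        interface LoanService {
            boolean approve(String customerId, double amount);
        }

        public static void main(String[] args) {
            // Boot the embedded Tuscany runtime from a composite file on the classpath
            // ("loan.composite" is a made-up name for this sketch).
            SCADomain domain = SCADomain.newInstance("loan.composite");
            try {
                // Look up a service by its Java interface and component name.
                LoanService loans = domain.getService(LoanService.class, "LoanServiceComponent");
                System.out.println("Approved: " + loans.approve("customer-42", 5000.0));
            } finally {
                domain.close();
            }
        }
    }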

See also: the Apache Tuscany Project


Introducing Service Component Architecture (SCA)
David Chappell, Chappell and Associates Technical Paper

"This overview provides an architectural introduction to SCA. The goal is to provide a big-picture view of what this technology offers, describe how it works, and show how its various pieces fit together. Service Component Architecture (SCA) defines a general approach to doing both of these things. Now owned by OASIS, SCA was originally created by a group of vendors, including BEA, IBM, Oracle, SAP, and others. The SCA specifications define how to create components and how to combine those components into complete applications. The components in an SCA application might be built with Java or other languages using SCA-defined programming models, or they might be built using other technologies, such as the Business Process Execution Language (BPEL) or the Spring Framework. Whatever component technology is used, SCA defines a common assembly mechanism to specify how those components are combined into applications... Different people look at SCA in different ways. The specifications offer plenty of options, and so when someone says 'SCA', he might mean any or all of the things these specs define. Similarly, different vendors are almost certain to emphasize different aspects of SCA. One vendor might support SCA's assembly aspects and its new programming model for Java components, for example, but not the C++ version of this model. Another might support only SCA's assembly aspects, completely ignoring the new Java and C++ programming models. And since these specifications explicitly allow vendor extensions, look for each vendor to provide some customization in its SCA products. Still, SCA is unquestionably an interesting technology. By providing an alternative to older approaches such as EJB and JAX-WS, it can offer a new way to create Java business logic for a service-oriented world. By providing an assembly mechanism for components implemented using various technologies, it can help knit together an increasingly diverse environment." Background from the Blog entry: "Service Component Architecture (SCA) isn't an especially simple technology. It's also hard to figure out just by reading the specs. To help make clear what SCA is, I've written a white paper that introduces the topic. My original goal for this paper was to create a straight tutorial that all of SCA's creators would agree was accurate. This turned out to be impossible. Different vendors have different perspectives on how SCA should be presented and even on which parts of the technology are important. Accordingly, while I've done my best to be accurate and objective, I'm certain that there will be some disagreement with how I've described or positioned a few things. Still, my hope is that there's value in an overview of the topic from somebody with no particular vendor ax to grind."

See also: the blog entry


The Archived-At Message Header Field
Martin Duerst (ed), IETF Internet Draft

IETF has announced the availability of a new Internet Draft in the online Internet-Drafts directories, edited by Martin Duerst (Aoyama Gakuin University, Japan). RFC 2369 ("The Use of URLs as Meta-Syntax for Core Mail List Commands and their Transport through Message Header Fields") defines a number of header fields that can be added to Internet messages such as those sent by email distribution lists or in netnews. One of them is the 'List-Archive' header field that describes how to access archives for the list. This allows access to the archives as a whole, but not to individual messages. There is often a need or desire to refer to the archived form of a single message. This memo defines a new email header field, 'Archived-At:', to provide a direct link to the archived form of an individual email message. This provides quick access to the location of a mailing list message in the list archive. It can also be used independently of mailing lists, for example in connection with legal requirements to archive certain messages. If one has the message, why would one need a pointer to it? It turns out that such pointers can be extremely useful. A user may want to refer to messages in a non-message context, such as on a Web page, in an instant message, or in a phone conversation. In such a case, the user can extract the URI from the 'Archived-At' header field, avoiding the search for the correct message in the archive. A user may want to refer to other messages in a message context. Referring to a single message is often done by replying to that message. However, when referring to more than one message, providing pointers to archived messages is a widespread practice. The 'Archived-At' header field makes it easier to provide these pointers. A user may want to find messages related to a message at hand. The user may not have received the related messages, and therefore needs to use an archive. The user may also prefer finding related messages in the archive rather than in her MUA, because messages in archives may be linked in ways not provided by the MUA. The 'Archived-At' header field provides a link to the starting point in the archive from which to find related messages. Mailing list expanders and email archives are often separate pieces of software. It may therefore be difficult to create an 'Archived-At' header field in the mailing list expander software. One way to address this difficulty is to have the mailing list expander software generate an unambiguous URI, e.g. a URI based on the message identifier of the incoming email, and to set up the archiving system so that it redirects requests for such URIs to the actual messages. If the email does not contain a message identifier, a unique identifier can be generated. Such a system has been implemented and is already in production use at W3C; source code for this implementation is available online. [Update: An IETF-Announce posting on September 10, 2007 reported that the IESG has approved "The Archived-At Message Header Field" (draft-duerst-archived-at-09.txt) as a Proposed Standard. "This document has been reviewed in the IETF but is not the product of an IETF Working Group. The IESG contact person is Chris Newman. Protocol Quality: This was discussed on the message header field review mailing list. During development of this specification, a decision was made to allow folding whitespace in the middle of the URL, but to otherwise use a simpler and more restrictive syntax than RFC 2369, which covers a related topic. It has been used by the W3C and Gmane."]
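
As a small illustration of what the header looks like in practice, the following JavaMail (javax.mail) sketch sets and reads an 'Archived-At' field on a message. The message identifier and the archive URI pattern are invented for the example, since the draft leaves the choice of URI up to the archive.

    import java.util.Properties;
    import javax.mail.Session;
    import javax.mail.internet.MimeMessage;

    public class ArchivedAtSketch {
        public static void main(String[] args) throws Exception {
            Session session = Session.getInstance(new Properties());
            MimeMessage msg = new MimeMessage(session);

            // A list expander that knows its archive's URI pattern can derive the
            // archive location from the incoming Message-ID (hypothetical values).
            String messageId = "20070831123456.GA1234@example.org";
            String archiveUri = "http://lists.example.org/archives/msg?id=" + messageId;

            // The draft's syntax places the URI in angle brackets.
            msg.setHeader("Archived-At", "<" + archiveUri + ">");

            // A receiving client or archive tool can read the pointer back out.
            String[] archivedAt = msg.getHeader("Archived-At");
            System.out.println("Archived-At: " + archivedAt[0]);
        }
    }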

See also: the W3C message-id code


Better Navigation: Site Navigation Links
Norm Walsh, Blog

"It's been ages since I wrote about site navigation links. As the XProc specification works its way towards Last Call, I'm reminded of their value. As a spec editor, whenever review begins in earnest, I get a large number of suggestions of the form: 'In 3.4.5, I suggest XXX instead of YYY'. Or words to that effect: the commentor makes brief reference to some part of the specification followed by some suggestion about the prose therein. For simple editorial changes, I can just search for the affected prose in my Emacs buffer, make the suggested change, and move on. But for technical changes or more sweeping editorial suggestions, its often necessary (or at least useful) to review the prose as it currently appears in the spec. That means [somehow] finding '3.4.5', and that means searching or scrolling or finding the Table of Contents and clicking. Finding the Table of Contents also involves some degree of searching or scrolling, so that's not always fastest. What would be nice is a quick way to go directly to the right place: in other words, site navigation links. So: by tweaking the stylesheets to produce a bunch of 'link' elements in the HTML 'head', I get nearly instant access to all the likely places (Section, Subsection, Chapter, Appendix, Versions [rel="alternate"]). In Firefox, you need the 'cmSiteNavigation' toolbar extension; I don't know what you might need in other browsers. The 'Sections', 'Chapters', and 'Appendices' pulldown link menus give you access to what you'd expect. I took advantage of the 'Bookmarks' pulldown to provide direct access to all of the STEPS described in the specification. You can play with these links yourself in the current Editor's Draft. [Ed. note: I tried this, using the cmSiteNavigation toolbar extension; it's by far the fastest method I'm (now) aware of for direct linking to inner document locations and alternate versions.]


Is WS-Transaction Useable in the Real World Today?
Ian Robinson, Blog

The WS-Transaction specifications can be used by a Web service to include the processing of the service in a distributed transaction. But Web services themselves are often simply a means to integrate existing applications into new composites and/or a means to expose those applications to new types of clients/channels. So, in a bottom-up design where new Web services that can exploit WS-Transaction capabilities are wrapping existing back-end applications, do the back-end applications have to have been designed to be used with transactions? It depends... The back-end application might provide a core business service that has been doing its job well for many years and runs in an environment that ensures the transactional integrity of any data updates performed by the application, such as a CICS program. Or the back-end application might be a database stored procedure, or a purchase order workflow, or anything else. WS-AtomicTransaction (WS-AT) is typically useful only when the back-end application runs in an environment that supports some form of distributed two-phase commit (2PC), although the precise manner of the 2PC really doesn't matter since the WS-AT provider can adapt the AT protocol messages to the desired domain-specific transaction protocols. WS-BusinessActivity (WS-BA) is typically useful when any work performed by the back-end application can be undone/reversed through a compensation handler that drives another back-end application, or the same back-end application with a different set of data. WebSphere Application Server (WAS) provides robust runtime support and application assembly for Web services to exploit WS-AT or WS-BA, which can be used in highly available configurations as well as mediated/proxied topologies. The development tasks associated with supporting WS-AT and WS-BA in WAS are focused on application assembly and configuration rather than Java programming, the only coding requirement being the CompensationHandler class (a plain old Java object) in the case of WS-BA.
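
To illustrate the compensation idea in the last sentence, here is a deliberately generic sketch of a compensation handler written as a plain Java object. The CompensationHandler interface shown is hypothetical and is not the actual WebSphere API, whose package, method names, and data types differ; it only captures the pattern of driving a second back-end call to reverse the forward work.

    // Hypothetical interface standing in for the product-specific one.
    interface CompensationHandler {
        void compensate(byte[] compensationData); // undo the forward work
        void close(byte[] compensationData);      // forward work stands; nothing to undo
    }

    // Hypothetical gateway to the existing back-end application.
    class OrderGateway {
        void cancelOrder(String orderId) {
            // e.g. invoke the CICS program, stored procedure, or workflow
            // that reverses the original order.
        }
    }

    public class CancelOrderHandler implements CompensationHandler {
        private final OrderGateway backEnd = new OrderGateway();

        public void compensate(byte[] compensationData) {
            // The forward service recorded whatever it needs to undo itself,
            // here simply an order identifier.
            String orderId = new String(compensationData);
            backEnd.cancelOrder(orderId);
        }

        public void close(byte[] compensationData) {
            // The business activity completed successfully, so no compensation runs.
        }
    }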

See also: the May 2007 WS-TX announcement


Web 2.0 Firms Try to Tie Mashups Into SOAs
Shane Shick, InfoWorld

Nexaweb and Kapow Technologies are partnering up to help companies create Web 2.0 apps that tie into their SOA and help automate business processes. Companies that want to move beyond cutting and pasting information into Excel spreadsheets will have to adopt a set of APIs and create lightweight mashup applications that automate business processes. The joint webcast event was the first of a projected series of online discussions hosted by software provider Nexaweb, which also announced a partnership with Kapow Technologies. Under the agreement, Kapow will help companies implement its Mashup Server, a visual scripting tool for making Web 2.0 applications, along with Nexaweb's platform, which focuses on AJAX (asynchronous JavaScript and XML) within service-oriented data systems. The idea is that companies that are trying to create a more efficient IT environment based on SOA could also develop online applications that tie into those systems. Kapow Technologies CTO Stefan Andreasen said the partnership would facilitate the creation of so-called enterprise mashups, which combine services and network-based data into multiple applications and make them available through a dashboard of some kind. These kinds of tools make it much easier for companies to offer self-service features to their users, he said, while making use of existing IT infrastructure. The mashups will give users better-automated tasks and the ability to deliver software programs more quickly; they would in essence be what developers are calling rich Internet applications (RIAs) that use APIs, such as SOAP and RSS, and offer a downloadable "offline" mode as well as an online one.


Office Open XML Standardization to Drag Into Next Year
John Fontana, Network World

The long and contentious battle to standardize Office Open XML won't end this weekend when ISO member countries cast votes, but is likely headed for a special meeting where specific questions regarding the 6,000-page specification will need to be resolved. The vote, slated for September 2, 2007, is one of the last phases of nearly five months of work by the International Organization for Standardization (ISO) on a proposal to standardize Ecma-376 Office Open XML (ooXML). The specification is derived from Microsoft's Office Open XML, which is the default file format in Office 2007. Ecma deemed Office Open XML a standard in December 2006, and the ISO has been working on a fast-track proposal to consider doing the same. The issue has polarized the industry, with detractors questioning Microsoft's true intentions. The ISO has already approved the OpenDocument Format (ODF) as a standard, giving it credibility among organizations that prefer standards-based technology, and Microsoft is gunning to land the same designation for the specification it presented to Ecma. Critics say that with all the politicking going on from both sides, handicapping the September 2 outcome is nearly impossible. The September 2, 2007 vote, rather than resolving the issue once and for all, will raise technical and other questions about the specification that the ISO, Microsoft, and Ecma may have to answer at a special week-long meeting slated for early 2008. The September 2 vote will be among the 140 ISO member countries, and it is from that process that questions will arise regarding technical and other aspects of ooXML, also known as DIS 29500 at the ISO. Those questions will come from member countries that vote 'no with comments.' The subcommittees that work under the Joint Technical Committee 1 (JTC 1), which is responsible for information technology standardization at ISO, are not required to consider comments from countries that vote 'yes with comments.' If the comments warrant the ISO's special Ballot Resolution Meeting (BRM), it will be held February 25-29, 2008, in Geneva, but there is no guarantee that it will be needed. And if it is, it could run much longer than a week, depending on the number of comments. The conclusion is that whatever happens September 2, and whatever spin is put on the results by Microsoft or the opponents of ooXML, the question of ISO standardization is far from over.

See also: Standard ECMA-376


Sponsors

XML Daily Newslink and Cover Pages are sponsored by:

BEA Systems, Inc. http://www.bea.com
EDS http://www.eds.com
IBM Corporation http://www.ibm.com
Primeton http://www.primeton.com
SAP AG http://www.sap.com
Sun Microsystems, Inc. http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2007-08-31.html
Robin Cover, Editor: robin@oasis-open.org