This issue of XML Daily Newslink is sponsored by:
Sun Microsystems, Inc. http://sun.com
- Lombardi Moving BPM Online: Process Discovery as a Service
- Put to the Test: Lombardi Takes BPM Mainstream
- Standalone XQuery Implementation in .NET?
- Grindstone to the GRDDL
- Enterprise Mashups Meet SOA
- BEA Systems Position Paper: W3C Workshop on Web of Services for Enterprise Computing
- TAG Paper for W3C Workshop on Web of Services for Enterprise Computing
Lombardi Moving BPM Online: Process Discovery as a Service
Ephraim Schwartz, InfoWorld
The announcement last week from Lombardi Software that it will add a SaaS (software as a service) component dubbed Blueprint to its on-premises BPM suite highlights both the continuing growth of the hosted model and its limitations. In offering Blueprint, Lombardi becomes one of the first vendors to offer a SaaS solution for BPM. However, Blueprint is not Lombardi's entire BPM suite. Rather, it is one component, covering the preliminary process-discovery portion of BPM, during which people-to-people collaboration is essential. An end-to-end SaaS solution for BPM may still be a way off. The hosted solution uses a shared workspace within the browser that non-technical users can use, but it also supports the Business Process Modeling Notation (BPMN) standard for business analysts comfortable working with a standard industry tool. The underlying data model for the on-premises and SaaS components is identical, so once the processes that need improvement are defined, the data moves back and forth between the hosted solution and the systems behind the firewall through a feed similar to RSS. According to Phil Gilbert, Lombardi CTO, SaaS is the right delivery mechanism to tap the many different players who only understand their piece of the workflow puzzle.
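The RSS-like synchronization described above can be pictured as an on-premises agent polling a feed of process-model changes. Lombardi's actual wire format is not public; the Atom-style feed layout, entry fields, and URN below are purely illustrative assumptions.

```python
import xml.etree.ElementTree as ET

# Hypothetical Atom-style feed of process-model updates published by the
# hosted Blueprint service; the real feed format is not documented here.
FEED = """\
<feed xmlns="http://www.w3.org/2005/Atom">
  <entry>
    <id>urn:process:order-fulfillment</id>
    <title>Order Fulfillment v3</title>
    <updated>2007-02-20T10:00:00Z</updated>
  </entry>
</feed>"""

ATOM = "{http://www.w3.org/2005/Atom}"

def changed_processes(feed_xml):
    """Return (id, title) pairs for each process-model entry in the feed."""
    root = ET.fromstring(feed_xml)
    return [(e.findtext(ATOM + "id"), e.findtext(ATOM + "title"))
            for e in root.findall(ATOM + "entry")]
```

An agent behind the firewall would fetch such a feed periodically and pull down only the models whose `updated` timestamp has advanced, which is how feed-based sync avoids opening inbound holes in the firewall.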
See also: BPMN
Put to the Test: Lombardi Takes BPM Mainstream
Derek Miers, Intelligent Enterprise
In late 2005, the Object Management Group (OMG) began working on a Business Process Maturity Model (BPMM) standard designed to help organizations assess and grow their process management capabilities. Understanding process maturity helps managers assess their performance in the right context and chart a course toward achieving larger corporate goals. Lombardi Software, a pure-play business process management suite (BPMS) vendor, is taking this issue head on by embedding BPMM capabilities into its core TeamWorks product, into the Lombardi for Office (LFO) add-on product and into Blueprint, its just-announced on-demand modeling tool set. Think of Blueprint as a process capture tool that blends Wiki-style editing with WebEx collaboration and a Six Sigma problem-solving focus. Alternatively, you could just regard Blueprint as an on-demand Business Process Modeling Notation (BPMN) standard environment that lets a project team (including modeling neophytes) collaborate over process development. The model storage format is based on OMG's new Business Process Description Metamodel (BPDM) standard, so models can potentially be moved to any other BPDM-enabled modeling tool or deployment environment. Although BPDM has not yet been formally released or adopted, Lombardi has been very active in its development. Look for more vendors to adopt this standard in the coming months. With LFO and Blueprint, Lombardi looks beyond BPMS power users and engages ordinary business users who need and want the benefits of process improvement. The collaborative, on-demand nature of Blueprint demonstrates a new direction that may help take BPM mainstream.
Standalone XQuery Implementation in .NET?
Staff, Microsoft XML Team's WebLog
XQuery 1.0 and XPath 2.0 are now W3C Recommendations, thanks in part to the contributions of several Microsoft employees over the years. An earlier draft of the XQuery specification is supported in SQL Server 2005, and you can send an XQuery to the server using the ADO.NET that shipped in Visual Studio 2005. These features are becoming widely used. Now that the XQuery family of specifications is complete, it's fair to ask what our implementation plans might be. Several years ago we had been working on an implementation of XQuery in the .NET Framework that operated over standalone XmlDocuments, and we showed the work in progress in the first beta of what became Visual Studio 2005. That was not shipped, for several reasons. The most compelling reason was that it was obvious in 2004 that the W3C Recommendation would not be complete before that product release was frozen. Another reason was that this seemed like it was just XSLT in a different skin, whereas what our customers really wanted was a more powerful implementation that provided advanced features such as in-memory indexes and support for mapping parts of the query to the back-end server, including use of the XML datatype and XQuery provided by SQL Server. This would have been much more work, and would have required us to go far beyond what the W3C was standardizing, so our focus back then was to make sure the back-end server support for XQuery was rock solid. We have had occasional requests from our user community for a client-side implementation of XQuery that operates over standalone XmlDocuments, but we see no clear groundswell of demand yet. We very much wish to hear from our user community about their requirements that could be met with XSLT 2.0 and XQuery 1.0. We announced last week that we are actively working on an XSLT 2.0 implementation.
Grindstone to the GRDDL
Dave Beckett, Journalblog
My Raptor RDF parsing / serialising library has been doing GRDDL processing of a sort, to make RDF triples, for several years, but it was only in Raptor 1.4.14, announced 31 January, that I finally got round to managing the recursion through XML Namespace URIs and HTML head profiles, so that I was covering the majority of the spec. That was my coding over the Christmas break I took in the UK. In the last few weeks I have been working on the GRDDL tests, some of which themselves are in beta, and getting my code through them, or fixing the tests. I'm happy that finally I've got to the stage where I think either: (a) I pass a test or (b) the test has the wrong result. I'm currently waiting for the answer to my last report to the WG, and they could still change or add to the spec, but I expect it'll be Last Call very shortly. I'll wait until their reply before I ship a new version of Raptor with the most recent changes, which you can read now in the draft release notes if you want to know more. So apart from diving into Raptor Subversion, you can kick the tires of the fixes right now with the Raptor parser demo for GRDDL; you'll need a URI with some GRDDL-compatible markup for the URI box. [Note: "GRDDL is a mechanism for Gleaning Resource Descriptions from Dialects of Languages. This GRDDL specification introduces markup based on existing standards for declaring that an XML document includes data compatible with the Resource Description Framework (RDF) and for linking to algorithms (typically represented in XSLT), for extracting this data from the document. The markup includes a namespace-qualified attribute for use in general-purpose XML documents and a profile-qualified link relationship for use in valid XHTML documents. The GRDDL mechanism also allows an XML namespace document (or XHTML profile document) to declare that every document associated with that namespace (or profile) includes gleanable data and for linking to an algorithm for gleaning the data."]
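The namespace-qualified attribute mechanism the note describes can be sketched in a few lines: a GRDDL-aware processor first gleans the transformation URIs declared on the document. The sample document and XSLT URI below are hypothetical, and the fetch-and-apply-XSLT step a full processor (such as Raptor) performs is deliberately omitted.

```python
import xml.etree.ElementTree as ET

# GRDDL's data-view namespace, per the W3C GRDDL specification.
GRDDL_NS = "http://www.w3.org/2003/g/data-view#"

# A minimal general-purpose XML document declaring a GRDDL
# transformation via the namespace-qualified attribute (made-up content).
DOC = """\
<review xmlns:grddl="http://www.w3.org/2003/g/data-view#"
        grddl:transformation="http://example.org/glean-review.xsl">
  <title>Raptor 1.4.14</title>
</review>"""

def transformation_uris(xml_text):
    """Glean the space-separated transformation URIs declared on the
    root element. A full GRDDL processor would then fetch each URI and
    apply the (typically XSLT) algorithm to produce RDF triples."""
    root = ET.fromstring(xml_text)
    attr = root.get("{%s}transformation" % GRDDL_NS)
    return attr.split() if attr else []
```

The XHTML profile-based variant works analogously, except the link to the gleaning algorithm is discovered through the document's head profile rather than a root-element attribute.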
See also: GRDDL
Enterprise Mashups Meet SOA
Dave Linthicum, InfoWorld
The line is blurring between the enterprise and the Web. Mashups live on that porous perimeter, offering the reusability of an SOA plus very rapid development using prebuilt services outside the firewall. Soon, we may live in a world where it's difficult to tell where the enterprise stops and the Web begins. But just having the ability to create mashups doesn't mean they'll be valuable. You need to properly provision and manage the services available for mashups and understand their purpose and place in an SOA. The task is threefold. First, you must prepare existing infrastructure to support mashups. Second, you need to understand your requirements. And third, you've got to wrap your head around the potential value that mashups can and cannot bring. Although mashups originate with Web 2.0, which epitomizes development on the fly, mashups in the enterprise require preparation. You need to build and support an SOA that's "mashable" with services and content, as well as with APIs that are both local and remote to the enterprise. Among other things, that means existing enterprise application services must be able to access Internet-hosted services safely. Google Maps mashups, which hook the wildly popular mapping service to some database that includes street addresses, have become almost cliche. More complex mashups approach composite applications (those that are made up of many services), an advanced SOA concept. For instance, you could mash up a customer database with marketing metrics, then mash up the results even further with sales forecast processes. You own and maintain some of the information and services; some are accessible over the Internet. So, who's providing these services? SaaS (software as a service) players such as Salesforce.com seem to have the largest number of enterprise-class services, with service marketplaces such as StrikeIron in the mix, as well as services from vertical sites such as finance, retail, and health care. 
All have provisioned services, data, and content that are consumable over the Web. Mashup preparation can be divided into six familiar stages: requirements, design, governance, security, deployment, and testing. These are core architectural bases you must touch if you are to arrive safely in the promised land of mashups on top of an SOA.
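The customer-database-plus-marketing-metrics example above boils down to a join across data sources you own and data sources you reach over the Web. The records, keys, and field names below are invented stand-ins; a real mashup would fetch the remote side over HTTP from a provider such as a SaaS API.

```python
# Hypothetical local customer database (owned and maintained in-house).
customers = {
    "c1": {"name": "Acme Corp", "city": "Austin"},
    "c2": {"name": "Globex", "city": "Boston"},
}

# Hypothetical marketing metrics, as if fetched from a remote service.
metrics = {
    "c1": {"campaign_clicks": 420},
    "c2": {"campaign_clicks": 97},
}

def mash_up(local, remote):
    """Join local records with remote metrics on their shared key,
    letting remote fields augment (or override) the local record."""
    return {key: {**record, **remote.get(key, {})}
            for key, record in local.items()}

combined = mash_up(customers, metrics)
```

Mashing up the `combined` result with yet another service (the article's sales-forecast step) is just another application of the same join, which is why composite applications are described as an advanced SOA concept rather than a different mechanism.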
See also: Mashup Platform Vendors
BEA Systems Position Paper: W3C Workshop on Web of Services for Enterprise Computing
David Orchard, BEA Systems White Paper
Much of the Web Services infrastructure is in place within and without the W3C. Messaging specifications that are final or fairly close to final: SOAP 1.2, XOP/MTOM, WS-Addressing, WS-ReliableMessaging, WS-Security (and other WS-Security* specifications), WS-Transactions. Description formats are similarly in advanced stages: WS-Policy and WSDL 2.0. Discovery efforts in UDDI are finished. As well, WS-BPEL is well advanced. There is a base level of interoperability between the specifications defined in the WS-I Profiles, and there are more profiles emerging for the later specifications. There are other messaging, description and discovery efforts that are not in the standards process: WS-MetadataExchange, WS-Eventing, WS-Transfer, WS-Management*. There are some areas that have made little public progress: intermediary support, client-side routing, message flow control. A certain faction of developers doing 'services' is promoting XML using HTTP as a transfer protocol, sometimes called REST. They do not support using the various WS-* specifications for message transfer. At the same time as the development of 'Web Services 1.0', Web 2.0 technologies have been gaining in popularity, such as 'mashups' that perform Web integration. There are clearly two architectures in play: the WS-* architecture that promotes many operations (typically on fewer resources) and the REST architecture that promotes few operations (a generic interface) on more resources. There is a need for more appropriate machine-readable descriptive capabilities for Web-based services. The promise of WSDL 2.0 has not materialized and is unlikely to do so. Part of this need is better support and higher productivity for AJAX and non-AJAX clients interacting with many different components/services/widgets. This would foster increased productivity in Web-based services, and potentially provide for integration with the description-centric Web services community.
An observation is that there does not appear to be technology available for easily integrating Web services with the Web either from offering SOAP services to Web clients or the converse of SOAP/WSDL clients consuming REST services.
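The missing bridge the paper observes can be illustrated in miniature: wrapping a REST-style XML representation in a SOAP 1.2 envelope for a WSDL/SOAP client, and unwrapping it again for a Web client. This is a sketch of the kind of translation layer the paper says is not readily available, not an existing product; the `<status>` payload is invented.

```python
import xml.etree.ElementTree as ET

# SOAP 1.2 envelope namespace, per the W3C SOAP 1.2 Recommendation.
SOAP_ENV = "http://www.w3.org/2003/05/soap-envelope"

def wrap_in_soap(payload_xml):
    """Wrap a REST-style XML representation in a SOAP 1.2 envelope so
    that a SOAP/WSDL client could consume a REST resource."""
    return ('<env:Envelope xmlns:env="%s"><env:Body>%s</env:Body>'
            '</env:Envelope>' % (SOAP_ENV, payload_xml))

def unwrap_soap(envelope_xml):
    """Extract the first child of the SOAP Body as serialized XML,
    i.e. expose a SOAP response as a plain Web representation."""
    root = ET.fromstring(envelope_xml)
    body = root.find("{%s}Body" % SOAP_ENV)
    return ET.tostring(body[0], encoding="unicode")

resource = "<status>ok</status>"
roundtrip = unwrap_soap(wrap_in_soap(resource))
```

The hard parts the paper alludes to lie outside this sketch: mapping WSDL operations onto URIs and HTTP methods, and carrying WS-Addressing, security, and reliability headers across the boundary.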
TAG Paper for W3C Workshop on Web of Services for Enterprise Computing
Noah Mendelsohn (for the W3C TAG), White Paper
Although a few TAG members have direct experience building and supporting enterprise-grade networking systems, most of us have far deeper knowledge of the World Wide Web and of the technologies that have been used to build it. This white paper is intended to set out a few of the issues as we understand them, and to share some ideas about architectural tradeoffs. We do not attempt here to suggest what "the right answers" should be, but rather to offer some ideas that we hope will promote useful discussion. In keeping with the overall style of the workshop, we focus mainly on analyses motivated by use cases, and conclude with some discussion of the implications. Specifically, we ask the question: should WS and the Web be disjoint systems that share some technology, or should the two be more tightly integrated? To explore that question, we present as use cases three variations on the same theme: providing Web and/or WS-based control and query of an Internet-connected printer. The first use case discusses a traditional Web-based control interface; the second explores the characteristics of a pure WS-based approach; the third presents a printer that supports both interfaces simultaneously. Ten years ago, it would have been unusual to find a Web server embedded in a printer. Indeed, if one had asked a printer manufacturer about including such a capability, one might have gotten a quite puzzled response: "We know how to control printers. The Web is for getting stock quotes and reading news reports. They're different." Yet today, it's common to find Web servers embedded in printers. Web Services resources, however, are often not enabled for Web access. In this note we ask whether those resources too might benefit from better Web support. Given that much of the WS stack already uses Web technologies such as URIs and HTTP, we also ask a related question: are those Web technologies being used by WS in a way that maximizes value?
The value of network effects is extraordinary when hundreds of millions of resources are interconnected on a global scale. There is also great value in exploiting the nearly ubiquitous deployment of Web proxies, user agents, Web-enabled databases, and other tools, all of which depend on appropriate use of Web technologies such as URIs, HTTP GET, etc. So, we think that integration of Web and WS technologies is worth at least very careful thought.
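The TAG's third use case, a printer exposing both interfaces over the same state, can be sketched as two handlers sharing one status model. The status fields, URI path, and `GetStatus` operation name are hypothetical, chosen only to show how little separates the two interfaces when they serve the same representation.

```python
import xml.etree.ElementTree as ET

SOAP_ENV = "http://www.w3.org/2003/05/soap-envelope"

# Hypothetical shared printer state behind both interfaces.
PRINTER = {"pages_printed": 12345, "toner_percent": 61}

def handle_web_get(path):
    """Plain-Web interface: GET a URI, receive an XML representation
    (cacheable by ordinary Web proxies, linkable from any page)."""
    if path == "/status":
        return ("<status><pagesPrinted>%d</pagesPrinted>"
                "<tonerPercent>%d</tonerPercent></status>"
                % (PRINTER["pages_printed"], PRINTER["toner_percent"]))
    raise KeyError(path)

def handle_soap(envelope_xml):
    """WS interface: a GetStatus operation POSTed in a SOAP envelope
    returns the same representation wrapped in a response Body."""
    root = ET.fromstring(envelope_xml)
    body = root.find("{%s}Body" % SOAP_ENV)
    if body is not None and len(body) and body[0].tag.endswith("GetStatus"):
        return ('<env:Envelope xmlns:env="%s"><env:Body>%s</env:Body>'
                '</env:Envelope>' % (SOAP_ENV, handle_web_get("/status")))
    raise ValueError("unsupported operation")
```

Serving both from one model is what lets the GET variant participate in the network effects the paper prizes (proxies, caches, links) while the SOAP variant remains available to WS-* tooling.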
See also: the program listing
XML Daily Newslink and Cover Pages are sponsored by:
BEA Systems, Inc. http://www.bea.com
Sun Microsystems, Inc. http://sun.com
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: email@example.com
Newsletter unsubscribe: firstname.lastname@example.org
Newsletter help: email@example.com
Cover Pages: http://xml.coverpages.org/