This issue of XML Daily Newslink is sponsored by:
- Process Component Models: The Next Generation in Workflow?
- Achieving Separation of Concerns Using BPEL
- Employ Metadata to Enhance Search Filters
- DITA: Reusable XML
- Strategic Security: Get a Handle on Authentication
- The Ranvier URL Mapper: Letting URL Structure Invoke Application Work
- A Look at the First HTML 5 Working Draft
- Proposed Recharter of IETF Public-Key Infrastructure (X.509 PKIX) WG
Process Component Models: The Next Generation in Workflow?
Tom Baeyens, InfoQueue
This article argues that the gap between the analysis and the implementation of business processes is far bigger than the marketing of today's workflow tools might suggest, and it proposes a more realistic way of dealing with this situation. The current standards and initiatives are explained in enough depth that you can see how they relate to the broader movements and why. In the discussions, I'll identify the strengths and weaknesses of each technology and describe the proper and improper ways of using them. At the end, a new type of workflow technology called the process component model is introduced. This type of framework can handle multiple process languages, including languages that better support the transition from analysis process diagrams to executable processes. BPEL is an executable process language that is well suited for integration purposes, but it is not suited for supporting Business Process Management because of its tight coupling with technical service invocations. BPMN serves analysts in drawing analysis diagrams, but it is not executable. XPDL is a less widely adopted file format, which might be superseded by BPDM. The gap between analysis languages and executable languages still remains too big to be practical. To create a more realistic approach to BPM that can achieve widespread adoption, we need to start by making a better distinction between analysis process models and executable process models. Once we abandon the idea that non-technical business analysts can draw production-ready software in diagrams, we can arrive at a much more realistic and practical approach to business process management. When linking an analysis process model with an executable process implementation, the key is not to include too many of the sophisticated details of the analysis process notation in the diagram.
By using only the intersection of what the analysis language and the executable process language offer, a common language can be created for business analysts and developers, based on a single diagram. Different environments and different functional requirements call for different executable process languages. The current idea that one process language could cover all forms of BPM, workflow, and orchestration is simply too ambitious. And if such an effort were to succeed, the resulting process language would be far too complex for practical use...
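The "intersection" idea can be sketched in a few lines of Python. This is purely illustrative: the construct names below are invented for the example and are not taken from any real BPMN or BPEL schema. Each language's supported constructs are modeled as a set, and diagrams are restricted to the intersection so analysts and developers share one vocabulary.

```python
# Hypothetical construct sets for an analysis notation and an
# executable language; the names are illustrative only.
BPMN_ANALYSIS = {"task", "exclusive_gateway", "parallel_gateway",
                 "timer_event", "ad_hoc_subprocess", "annotation"}
EXECUTABLE = {"task", "exclusive_gateway", "parallel_gateway",
              "timer_event", "service_invocation"}

# The shared diagram vocabulary: constructs both sides understand.
COMMON = BPMN_ANALYSIS & EXECUTABLE

def validate(diagram_nodes):
    """Reject diagrams that stray outside the common subset."""
    unsupported = set(diagram_nodes) - COMMON
    if unsupported:
        raise ValueError(f"not executable as drawn: {sorted(unsupported)}")
    return True

validate(["task", "exclusive_gateway"])   # accepted
# validate(["ad_hoc_subprocess"])         # would raise ValueError
```

A diagram that passes `validate` means the same thing to the analyst and to the engine, which is the single-diagram goal the article describes.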
See also: BPEL4People references
Achieving Separation of Concerns Using BPEL
Stephen B. Morris, Informit
The vast majority of software producers focus exclusively on domain-specific solutions. In this way, software is becoming more customized and, correspondingly, less generic. While some end users (particularly large corporate customers) may be able to request features that closely fit their business processes, it's likely that most of us end up with a poor fit between our deployed software and our business process needs. The end result is massive cross-vendor duplication of software development that tries to implement code as well as business process logic. An interesting separation of concerns is becoming possible by the use of BPEL (Business Process Execution Language), which allows for business process logic to be expressed in a specific language and to be tied into external software. This reduces (and potentially eliminates) the need to code business process logic in a traditional programming language (such as Java or C++/C). In turn, this provides a clear separation between software features and business processes. By taking the business process logic (e.g., workflow management) out of the application code, the latter becomes simpler and more focused. In this article, I'll review the idea and merits of separating software features from business processes in the context of BPEL. Along the way, we'll see how this leads neatly to the need for highly generic software. The latter is (in my opinion) a pressing concern for all software developers... I think that IT should endeavor to become as streamlined as possible and BPEL/web services suggests itself as a possible path to take. By removing business process logic from code, we would see the potential emergence of generic software for web services use. Business process logic would then reside in a BPEL layer that would orchestrate the required service calls. This would help to reduce the growing complexity of software and systems.
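The separation Morris describes can be sketched without a real BPEL engine: business process logic lives in a declarative structure, and a small generic engine "orchestrates" the service calls. The service names and process definition below are invented for illustration; a real BPEL document would express the same sequencing and branching in XML.

```python
# Generic "services" -- in a real deployment these would be web
# service invocations rather than local functions.
def check_credit(ctx):
    ctx["credit_ok"] = ctx["amount"] < 1000

def ship_order(ctx):
    ctx["status"] = "shipped"

def reject_order(ctx):
    ctx["status"] = "rejected"

SERVICES = {"check_credit": check_credit,
            "ship_order": ship_order,
            "reject_order": reject_order}

# The "process layer": control flow expressed as data, analogous to
# (but much simpler than) what a BPEL document would express.
ORDER_PROCESS = [
    {"invoke": "check_credit"},
    {"switch": lambda ctx: "ship_order" if ctx["credit_ok"]
                           else "reject_order"},
]

def run(process, ctx):
    """A tiny generic engine: it knows nothing about orders."""
    for step in process:
        if "invoke" in step:
            SERVICES[step["invoke"]](ctx)
        elif "switch" in step:
            SERVICES[step["switch"](ctx)](ctx)
    return ctx

result = run(ORDER_PROCESS, {"amount": 250})  # result["status"] == "shipped"
```

The point of the sketch is that `run` is fully generic: changing the business process means editing `ORDER_PROCESS`, not the application code.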
Employ Metadata to Enhance Search Filters
James Leigh, DevX.com
In this article the author shows how to use metadata to pool information already resident in an application, creating a flexible search interface that reduces complexity and increases users' productivity. Easily customizable and configurable software is becoming increasingly important, and a flexible search interface is one way in which software is becoming more configurable. The key to achieving this flexibility is metadata. Consider an application that stores customer, item, and order information in a database. The interface for searching through orders could apply any number of filters, but presenting all possible combinations together can very quickly become overwhelming for users. It is often beneficial to allow some customization or configuration in choosing the appropriate filters, based on several factors including the business process, the individual's role, and the user's specific needs. With traditional query templates, the complexity grows quickly with every new search filter that is added. However, by using metadata to model a query and its filters, you can reduce the complexity of the software while creating a more flexible solution... Metadata can be loosely defined as data about data, or in this case search-filter data about order data. The World Wide Web Consortium (W3C), the group responsible for XML standards, recommends using the Resource Description Framework (RDF) for representing metadata (in XML or other formats). You can store RDF in a variety of formats, but the example discussed here uses an RDF/XML file because it has the widest support. XML tags are used to structure the RDF/XML file format: the outer tags represent resources, their nested tags represent properties, and inside the property tags is a property value, which may be text or another resource tag.
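The core idea can be sketched briefly. This is a minimal illustration, not the article's actual code: the column names, operators, and parameterized-SQL style below are assumptions. Each filter is described as data, so adding a filter means adding a dictionary entry rather than writing a new query template.

```python
# Search-filter metadata: each entry describes one available filter.
# Column names and operators are invented for the example.
FILTERS = {
    "customer_name": {"column": "customers.name", "op": "LIKE"},
    "order_total":   {"column": "orders.total",   "op": ">="},
    "item_sku":      {"column": "items.sku",      "op": "="},
}

def build_where(criteria):
    """Turn {filter_name: value} into a WHERE clause plus parameters."""
    clauses, params = [], []
    for name, value in criteria.items():
        meta = FILTERS[name]
        clauses.append(f"{meta['column']} {meta['op']} ?")
        params.append(value)
    return " AND ".join(clauses), params

where, params = build_where({"customer_name": "%Smith%",
                             "order_total": 100})
# where  -> "customers.name LIKE ? AND orders.total >= ?"
# params -> ["%Smith%", 100]
```

The same metadata could also drive the user interface, listing only the filters appropriate to a given role, which is the configurability the article is after.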
DITA: Reusable XML
Rutrell Yasin, Government Computer News
IBM and software vendor JustSystems have announced the availability of a methodology that allows organizations to break up huge Extensible Markup Language (XML) documents into reusable pieces. "The Darwin Information Typing Architecture (DITA) Maturity Model," co-authored by IBM and JustSystems, is the first step-by-step process for implementing DITA, officials from the companies said. DITA can be applied to content that is highly branded or regulated and broadly leveraged, including technical documents, marketing materials and regulatory filings, according to Paul Wlodarczyk, vice president of solutions consulting at JustSystems: "The DITA Maturity Model recognizes that each organization is adopting DITA at its own pace. So, the model starts from square one, laying out the key steps that any organization can take to successfully adopt DITA." One of DITA's most attractive features is its support for incremental adoption. However, organizations at different stages of adoption claim radically different numbers for cost of migration and return on investment. To address these issues, the DITA Maturity Model divides DITA adoption into six levels, each with its own required investment and associated return on investment. As a result, users can assess their own capabilities and goals relative to the model and choose the initial adoption level appropriate for their needs and schedule.
Strategic Security: Get a Handle on Authentication
Roger A. Grimes, InfoWorld
It's a common dilemma: You host multiple Web-accessible applications, for both internal customers and external users. A few of your developers are keeping up with the latest programming trends and security models, while some of your highest-seniority employees are stuck in programming models that were outdated a decade ago. You've got a hodgepodge of access and authentication methods, along with a lot of client-server interaction, a little bit of Web services and SOA, and Citrix or Terminal Services thrown in. There are even a few people still dialing in on phone lines to access dumb terminal-based applications. Truth be told, if someone asked what you thought of the situation, you'd reply that it's a house of cards just waiting to be pushed over by the right inquisitive hacker. You've got to get control of your applications and authentication models, so where do you start and what do you do? There are six broad areas that you'll need to address: education, strategy, standardization, policies, remediation, and retirement. Education: educate people about the various authentication components. Essentially, you want to explain identity, authentication, authorization, and access control (and accounting/auditing), or simply AAA, as parts of a systematic process, each of which can be accomplished using various methods. And you want to push for more maturity on each of those concepts. If single users end up with multiple identities, you need an identity management system (or perhaps federated identities, if multiple companies are involved). You want to move authentication from passwords to something more sophisticated, such as two-factor authentication. You want to move access control from Discretionary Access Controls (DAC) to client-server impersonation and eventually Role-Based Access Control (RBAC). Finally, the data you protect must be categorized according to sensitivity and protected accordingly...
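The DAC-to-RBAC move mentioned above can be illustrated with a toy sketch. The role and permission names here are invented; the point is only the shape of the model: permissions attach to roles, and users attach only to roles, so granting access never means editing per-user permission lists.

```python
# Toy RBAC model -- role names and permission strings are invented.
ROLE_PERMISSIONS = {
    "clerk":   {"order:read"},
    "manager": {"order:read", "order:approve"},
    "auditor": {"order:read", "audit:read"},
}
USER_ROLES = {"alice": {"manager"}, "bob": {"clerk"}}

def allowed(user, permission):
    """RBAC check: a user is allowed if any of their roles grants it."""
    return any(permission in ROLE_PERMISSIONS[role]
               for role in USER_ROLES.get(user, ()))

allowed("alice", "order:approve")  # True  (manager role grants it)
allowed("bob", "order:approve")    # False (clerk role does not)
```

Under DAC, by contrast, each resource owner grants rights to individual users ad hoc, which is exactly the hodgepodge the article says organizations should migrate away from.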
The Ranvier URL Mapper: Letting URL Structure Invoke Application Work
David Mertz, IBM developerWorks
The responsibility of a Uniform Resource Identifier (URI) is to uniquely name a resource in the world. The most familiar subset of URIs is Uniform Resource Locators (URLs), which take on the additional responsibility of providing a description of how to obtain the named resource (in other words, a network location and protocol to use in fetching an electronic document or stream). Some URIs are Uniform Resource Names (URNs) without being URLs; that is, they name a resource but do not provide specific details on how to obtain it... The oldest and most popular Web servers have generated URIs whose form directly mirrors the file system structure of the machine that hosts resources. The URI specification itself defines a hierarchical "path" component of URIs (though a path is potentially empty), but does not require any literal mapping between a URI path structure and a file system. Ranvier is a Python package you can integrate into Web application frameworks to map incoming URL requests to source code. It does this by a mechanism of delegation-and-consumption, which differs from more common regular expression-based URL rewriting. Ranvier also serves as a central registry of all the URLs in a Web application and can itself generate the URLs necessary for cross-linking pages. The registry function allows Ranvier to ensure the integrity of links and automate coverage analysis. Ranvier is pure Python code and does not have any third-party dependencies; it should be usable (with a bit of adaptor code) in any Python-based Web application framework... There are certainly many cases where domain resources are semantically hierarchical, and not merely in a way that mirrors peculiarities of the development framework and tools used in implementation. Ranvier provides a flexible way of organizing dispatch of functional aspects of URI processing into multiple reusable blocks of code.
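The delegation-and-consumption idea can be sketched in plain Python. This is written in the spirit of Ranvier but does not use its actual API; the class and resource names are invented. Each resource consumes the leading path segment and delegates the remainder to a child resource, so dispatch follows the URL's hierarchy rather than a flat table of regular expressions.

```python
# Illustrative sketch, not Ranvier's real API: resources consume one
# path segment each and delegate the rest downward.
class Folder:
    def __init__(self, **children):
        self.children = children

    def handle(self, segments, ctx):
        head, rest = segments[0], segments[1:]
        return self.children[head].handle(rest, ctx)

class Leaf:
    def __init__(self, name):
        self.name = name

    def handle(self, segments, ctx):
        ctx["args"] = segments  # leftover segments become arguments
        return f"handled by {self.name}"

root = Folder(users=Folder(profile=Leaf("profile-page")),
              orders=Leaf("order-view"))

def dispatch(url):
    ctx = {}
    result = root.handle([s for s in url.split("/") if s], ctx)
    return result, ctx

dispatch("/users/profile")    # ('handled by profile-page', {'args': []})
dispatch("/orders/42/items")  # ('handled by order-view', {'args': ['42', 'items']})
```

Because every resource object is reachable from the root, the same tree could also be walked to enumerate all valid URLs, which is the registry and link-integrity function the article attributes to Ranvier.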
A Look at the First HTML 5 Working Draft
Charles Humble, InfoQueue
See also: HTML 5 references
Proposed Recharter of IETF Public-Key Infrastructure (X.509 PKIX) WG
Staff, IETF Announcement
The IESG Secretary announced the availability of a proposed modified charter submitted for the Public-Key Infrastructure (X.509) PKIX working group in the Security Area of the IETF. The IESG has not made any determination as yet. As proposed: "The PKIX Working Group was established in the fall of 1995 with the goal of developing Internet standards to support X.509-based Public Key Infrastructures (PKIs). Initially PKIX pursued this goal by profiling X.509 standards developed by the CCITT (later the ITU-T). Later, PKIX initiated the development of standards that are not profiles of ITU-T work, but rather are independent initiatives designed to address X.509-based PKI needs in the Internet. Over time this latter category of work has become the major focus of PKIX work, i.e., most PKIX-generated RFCs are no longer profiles of ITU-T X.509 documents. PKIX has produced a number of standards track and informational RFCs... PKIX will continue to track the evolution of ITU-T X.509 documents, and will maintain compatibility between these documents and IETF PKI standards, since the profiling of X.509 standards for use in the Internet remains an important topic for the working group... PKIX will pursue new work items in the PKI arena if working group members express sufficient interest, and if approved by the cognizant Security Area director. For example, certificate validation under X.509 and PKIX standards calls for a relying party to use a trust anchor as the start of a certificate path. Neither X.509 nor extant PKIX standards define protocols for the management of trust anchors. Existing mechanisms for managing trust anchors, e.g., in browsers, are limited in functionality and non-standard. There is considerable interest in the PKI community to define a standard model for trust anchor management, and standard protocols to allow remote management. Thus a future work item for PKIX is the definition of such protocols and associated data models."
See also: the earlier PKIX WG Charter
XML Daily Newslink and Cover Pages are sponsored by:
BEA Systems, Inc.        http://www.bea.com
Sun Microsystems, Inc.   http://sun.com
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: firstname.lastname@example.org
Newsletter unsubscribe: email@example.com
Newsletter help: firstname.lastname@example.org
Cover Pages: http://xml.coverpages.org/