This issue of XML Daily Newslink is sponsored by:
SAP AG http://www.sap.com
- DSDL Part 7: Character Repertoire Description Language (CREPDL)
- An Empirical Examination of Open Standards Development
- SMS: The Short Message Service
- Business Process Expert: BPMN Choreography and Multi-Pool Processes
- IODEF/RID over SOAP
- Cultural Considerations for SOA Adoption in the Federal Sector
- WSO2 Bringing Ruby to SOA
- 2008 Summer Olympic Games via NBC Universal and Silverlight 2.0
- Web 3.0: Chicken Farms on the Semantic Web
- Shred XML Documents Using DB2 pureXML
- Spring Web Flow for Better Workflow Management in JSF
DSDL Part 7: Character Repertoire Description Language (CREPDL)
ISO/IEC JTC 1/SC 34 Secretariat, FCD Ballot Version Announcement
Toshiko KIMURA (Japanese Industrial Standards Committee/ITSCJ) of the ISO/IEC JTC 1/SC 34 Secretariat reported on the availability of the draft specification ISO/IEC FCD 19757-7, Information technology -- Document Schema Definition Languages (DSDL)—Part 7: Character Repertoire Description Language (CREPDL). This [FCD] International Standard defines a set of Document Schema Definition Languages (DSDL) that can be used to specify one or more validation processes performed against Extensible Markup Language (XML) documents. A number of validation technologies are standardized in DSDL to complement those already available as standards or from industry. The main objective of this International Standard is to bring together different validation-related technologies to form a single extensible framework that allows technologies to work in series or in parallel to produce a single or a set of validation results. The extensibility of DSDL accommodates validation technologies not yet designed or specified. This part of ISO/IEC 19757 provides a language for describing character repertoires. Descriptions in this language may be referenced from schemas. Furthermore, they may also be referenced from forms and stylesheets... Clause 5 introduces kernels and hulls of repertoires. Clause 6 describes the syntax of CREPDL schemas. Clause 7 describes the semantics of a correct CREPDL schema; the semantics specify when a character is contained by a repertoire described by a CREPDL schema. Clause 8 defines CREPDL validators and their behaviours. Clause 9 defines conformance of CREPDL processors. Finally, Annex A provides examples of the application of CREPDL. 
According to DSDL Part 1, the multi-part 19757 International Standard "integrates constraint description technologies into a suite that: (1) provides user control of names, order and repeatability of information objects and their properties—elements and their attributes; (2) allows users to identify restrictions on the coexistence of information objects; (3) allows specific information objects within structured documents to be validated; (4) allows restrictions to be placed on the contents of specific elements and attributes, including restrictions based on the content of other elements in the same document; (5) allows the character set that can be used within specific elements to be managed, based on the application of the ISO/IEC 10646 Universal Multiple-Octet Coded Character Set (UCS); (6) allows default values to be assigned to element contents and attribute values, and provides facilities for predefined fragments of structured data to be incorporated within documents; (7) extends SGML DTDs to include functions such as namespace-controlled validation and datatypes by adapting XML techniques for these capabilities to SGML."
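The containment semantics described for Clause 7, deciding whether a character belongs to a repertoire described by a schema, can be sketched conceptually. The following Python sketch shows the kind of check a repertoire validator performs; the code-point ranges below are invented for illustration and are not taken from any real CREPDL schema.

```python
# Conceptual sketch of a repertoire check in the CREPDL spirit: a
# repertoire is modeled as a set of UCS code-point ranges, and a string
# is valid when every character falls inside some range.
# The ranges here are illustrative, not from the standard.

REPERTOIRE = [
    (0x0020, 0x007E),   # printable ASCII
    (0x00A0, 0x00FF),   # Latin-1 Supplement
]

def in_repertoire(ch, ranges=REPERTOIRE):
    """Return True if the character's code point lies in any range."""
    cp = ord(ch)
    return any(lo <= cp <= hi for lo, hi in ranges)

def validate(text):
    """Return the characters outside the repertoire (empty list = valid)."""
    return [ch for ch in text if not in_repertoire(ch)]

print(validate("Caf\u00e9"))       # all characters accepted: []
print(validate("na\u00efve \u20ac"))  # the euro sign is rejected
```

A real CREPDL processor would read the ranges from a schema document rather than hard-coding them, and would also handle the kernel/hull distinction introduced in Clause 5.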
See also: DSDL references
An Empirical Examination of Open Standards Development
Rajiv Shah and Jay P. Kesan, Conference Presentation
This project uses empirical data to provide insights into the impact of open standards. This work moves beyond the existing literature by considering a large number of open standards, instead of handpicked case studies. The results of this research will be timely, as governments are advocating and sometimes mandating the use of open standards... We found inequalities in the impact of open standards that suggest a power law relationship. The implication is that standards organizations need to recognize this property and shift their strategies during the development process. First, standards organizations should happily accept that many standards will not become widely popular; this is a natural consequence of creating lots of standards. It follows that open standards organizations should be flexible and adaptable in their approach to the development of open standards. Some open standards will require special guidance or rules because of their enormous impact, and vice-versa. After all, standards that are likely to have a high impact are often recognizable during the development process: they usually have more participants, are longer, and provoke more divisive debates. These results agree with our regressions, which found that longer standards generated higher impact. Standards organizations should not be afraid to play politics by instituting different rules or procedures to address problems during standards development. Second, standards organizations should recognize that many high-quality and potentially useful standards may be overlooked... The analysis of the impact of open standards found that the duration of the development process does not affect the impact of a standard. This finding carries significant policy implications, as reforms are currently underway to speed up the IETF development process. [Citation via Bob Glushko]
See also: the Conference Highlights
SMS: The Short Message Service
Jeff Brown, Bill Shipman, and Ron Vetter; IEEE Column "How Things Work"
While most cell phones are used for their original intent—making telephone calls wirelessly—these devices are also loaded with other features that are often little used or even ignored. One feature that users have begun to fully exploit in recent years is the short message service or text messaging. This basic service allows the exchange of short text messages between subscribers. SMS technology evolved out of the Global System for Mobile Communications standard, an internationally accepted cell phone network specification the European Telecommunications Standards Institute created. Presently, the 3rd Generation Partnership Project maintains the SMS standard. SMS messages are handled via a short message service center that the cellular provider maintains for the end devices. The SMSC can send SMS messages to the end device using a maximum payload of 140 octets. This defines the upper bound of an SMS message to be 160 characters using 7-bit encoding. It is possible to specify other schemes such as 8-bit or 16-bit encoding, which decreases the maximum message length to 140 and 70 characters, respectively. Text messages can also be used for sending binary data over the air... There are significant differences between aggregators, and you should look closely at more than one before choosing. Several companies that sell aggregator services do not actually maintain SMPP connections with carrier SMSCs, but go through another aggregator instead. The custom APIs that aggregators provide differ widely—some require a constant socket connection, while others use XML over HTTP and do not rely on a constant connection. Some aggregators have relatively inexpensive testing programs and allow you to test their API on a demonstration short code. Next-generation SMS applications will incorporate location-based capabilities that are now being incorporated into mobile handsets. 
This will enable a new set of innovative services that are targeted and personalized, further refining mobile advertising models and driving revenue growth for carrier operators, aggregators, and mobile content providers... We have established the hardware, software, and network infrastructure necessary to build and deploy advanced SMS applications. We have registered a short code and established a new company, Mobile Education, to develop two-way SMS-based applications.
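The payload arithmetic described above (140 octets, giving 160, 140, or 70 characters depending on the data coding scheme) can be verified with a few lines of Python:

```python
# Maximum SMS message length for a single 140-octet payload under the
# three standard data coding schemes mentioned in the article.

PAYLOAD_OCTETS = 140

def max_chars(bits_per_char):
    """Characters that fit in the payload at the given encoding width."""
    return (PAYLOAD_OCTETS * 8) // bits_per_char

for name, bits in [("GSM 7-bit", 7), ("8-bit data", 8), ("UCS-2 16-bit", 16)]:
    print(f"{name}: {max_chars(bits)} characters")
```

The 7-bit case works out because 140 octets hold 1120 bits, and 1120 / 7 = 160 septets; concatenated (multi-part) messages reduce these limits further because a user data header consumes part of each segment's payload.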
See also: the reference page
Business Process Expert: BPMN Choreography and Multi-Pool Processes
Bruce Silver, SAP BPX Article Series
This article is the final installment, Part 6, in the article series "BPMN and the Business Process Expert." OMG's Business Process Modeling Notation (BPMN) "has become the standard language of the Business Process Expert, usable for descriptive process modeling, simulation analysis, and even executable implementation design of end-to-end business processes. BPMN extends the familiar swimlane flowchart paradigm with events, the key to incorporating exceptions into process models and mapping to today's SOA middleware." Part 6 addresses the most basic concept of all: What is a process? More specifically, what is a single BPMN process, as opposed to multiple processes linked by message flow choreography in a single business process diagram (BPD)? In the real world, an end-to-end business process may be composed of multiple BPMN processes interacting through choreography. We know that a BPMN process is confined to a pool, and a BPD can contain multiple pools, but what does a pool really signify? A pool is simply a container for a BPMN process. If your BPD does not have more than one pool, the pool containing your process might not even be drawn—but it is always there. If your diagram shows choreography between your process and an external process, such as a requesting client or an invoked service provider, the pool names may indicate the organization behind each process, but each pool actually represents a process, not an organization... This series of articles explains BPMN's ability to represent end-to-end business processes in diagrams that business people can understand, yet which retain remarkable precision and expressive power. Unlike traditional process modeling notations, BPMN puts events and exception handling right in the diagram itself, without requiring specification, or even knowledge, of the technical implementation.
The combination of this business-friendly 'abstract' representation with precise orchestration semantics lets BPMN process models serve as the foundation of executable process implementations, with implementation properties layered on top of the model. These implementation properties are added by IT, often in direct collaboration with business, and leveraging a common underlying model. This type of collaborative approach is essential if BPM is to realize its promise of improved agility and responsiveness to changing business needs. If BPMN is the 'language' of this emerging collaboration, the Business Process Expert is the agent of change. Knowing how to use BPMN to model processes correctly and effectively has become the critical skill for all BPXs to master.
See also: the article series
IODEF/RID over SOAP
Kathleen M. Moriarty and Brian H. Trammell (eds), IETF Internet Draft
Members of the IETF Extended Incident Handling (INCH) Working Group have released an updated version of the specification for "IODEF/RID over SOAP." The Incident Object Description Exchange Format (IODEF) specification describes an XML document format for the purpose of exchanging data between CSIRTs or those responsible for security incident handling for network providers (NPs). The defined document format provides an easy way for CSIRTs to exchange data in a form that can be easily parsed. In order for IODEF documents to be shared between entities, a uniform method for transport is necessary. SOAP provides a layer of abstraction and enables the use of multiple transport protocol bindings. IODEF documents and extensions will be contained in an XML Real-time Inter-network Defense (RID) envelope inside the body of a SOAP message. For some message types, the IODEF document or RID document may stand alone in the body of a SOAP message. The RIDPolicy class of RID (e.g., policy information that may affect message routing) will appear in the SOAP message header. HTTP/TLS (RFC 4346) has been selected as the required SOAP binding for exchanging IODEF/RID messages. The primary reason for selecting HTTP/TLS is the existence of multiple successful implementations of SOAP over HTTP/TLS, and its being widely understood, despite the additional overhead associated with this combination. Excellent tool support exists to ease the development of applications using SOAP over HTTP. BEEP may actually be better suited as a transport for RID messages containing IODEF documents, but does not yet have wide adoption. Standards exist for the HTTPS or HTTP/TLS binding for SOAP, and a standard is in development for SOAP over BEEP... Documents intended to be shared among multiple constituencies must share a common format and transport mechanism. The Incident Object Description Exchange Format (IODEF) defines a common XML format for document exchange.
This draft outlines the SOAP wrapper for all IODEF documents and extensions to facilitate an interoperable and secure communication of documents. The SOAP wrapper allows for flexibility in the selection of a transport protocol. SOAP will be used to provide the messaging framework and can make distinctions as to how messages should be handled by each participating system. SOAP has been selected because of the flexibility it provides for binding with transport protocols, which can be independent of the IODEF/RID messaging system.
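The message layout the draft describes (a RID envelope in the SOAP Body carrying an IODEF document, with RIDPolicy in the SOAP Header) can be sketched with Python's standard XML tooling. The RID namespace URI below is a placeholder, not the one registered by the draft:

```python
# Sketch of the IODEF/RID-over-SOAP message layout: RIDPolicy in the
# SOAP Header (routing-relevant policy), the RID envelope with its
# IODEF payload in the SOAP Body. Namespace URIs for RID are
# illustrative placeholders, not the draft's registered values.
import xml.etree.ElementTree as ET

SOAP = "http://www.w3.org/2003/05/soap-envelope"
RID = "urn:example:params:xml:ns:iodef-rid"    # placeholder namespace
IODEF = "urn:ietf:params:xml:ns:iodef-1.0"

env = ET.Element(f"{{{SOAP}}}Envelope")

# Policy information that may affect message routing goes in the header.
header = ET.SubElement(env, f"{{{SOAP}}}Header")
ET.SubElement(header, f"{{{RID}}}RIDPolicy")

# The RID envelope, wrapping the IODEF document, goes in the body.
body = ET.SubElement(env, f"{{{SOAP}}}Body")
rid_msg = ET.SubElement(body, f"{{{RID}}}RID")
ET.SubElement(rid_msg, f"{{{IODEF}}}IODEF-Document")

print(ET.tostring(env, encoding="unicode"))
```

A conforming implementation would of course populate the IODEF-Document with real incident data and transmit the envelope over HTTP/TLS as the draft requires.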
See also: IODEF specification references
Cultural Considerations for SOA Adoption in the Federal Sector
Judith Myerson, IBM developerWorks
This article focuses on the cultural considerations across organizational boundaries in the federal sector. It shows how one can assemble the building blocks of SOA while maintaining adherence to appropriate organizational cultural norms. You can implement SOA using any service-based technology with loose coupling among interacting software agents. All of these approaches cast SOA as a paradigm shift from IT as a technology provider to IT as a business enabler, with a corresponding cultural shift in the way IT is done. This article looks closely at the cultural considerations involved with SOA development and implementation in the federal sector. It explains: (1) the process of adopting SOA with an SOA reference model, an SOA maturity model, and SOA governance; (2) what's missing from the Federal Enterprise Architecture (FEA); (3) suggestions for managing cultural changes in the federal sector. The SOA reference model provides a foundation for building discoverable, reusable services central to a federal agency's mission. On top of this foundation is the SOA maturity model, used to gauge the quality and maturity of SOA adoption by the agency. SOA governance provides the mechanisms to control the desired behavior of the organization as the SOA adoption matures. One model example is the Office of Management and Budget (OMB) Federal Enterprise Architecture (FEA). Also known as the Component-Based Architecture (CBA), this model focuses on providing services to citizens, not to other SOA participants. The FEA starts with the Performance Reference Model (PRM), followed by the lower layer of the Business Reference Model (BRM). Next is the Service Components Reference Model (SRM), which is built upon the Data Reference Model (DRM) and Technical Reference Model (TRM), plus a security and privacy profile that describes how to integrate information security requirements into the reference models.
What's apparently missing from the FEA is the SOA Security Reference Model (SSRM), which allows it to go beyond the security and privacy profile. The SSRM helps to address the security requirements of the SOA due to increased exposure to risks and vulnerabilities of loose coupling of the services and operations across organizational boundaries... SOA security must be factored into the SOA life cycle, reflecting the fact that security is a business requirement and not just a technology attribute. By including security in SOA life cycle management, all documents must be evaluated to ensure that security requirements are met. An SSRM helps address the security requirements of SOA, as security is applicable to the entire SSRM—across infrastructure, application, business services, and development services. This model consists of IT security services, security policy infrastructure, business security services, and security enablers. Governance and risk management provide the mechanism to implement and enforce security policies within the larger SOA environment. The SSRM has also been the subject of interest in military software engineering.
See also: the FEA web site
WSO2 Bringing Ruby to SOA
Paul Krill, InfoWorld
With the release of open-source software, WSO2 seeks to bridge the Ruby programming language and the Ruby on Rails Web framework with the SOA and Web services spaces. The company is set to debut WSO2 WSF/Ruby (Web Services Framework for Ruby) 1.0, providing a Ruby extension to support the Web Services WS-* stack. Ruby developers can incorporate the security and reliable messaging capabilities needed for trusted, enterprise-class SOAP-based Web services. The product also supports the alternative REST (Representational State Transfer) style of Web services. While Ruby has been popular in the Web 2.0 realm, sometimes it needs to talk to legacy architectures. With the new framework, developers could build a Web application using Ruby and then hook into enterprise infrastructures, such as JMS (Java Message Service) queues. For example, a Web site might be built with Ruby that then needs to link to an order fulfillment system based on an IBM mainframe or minicomputer. WSO2 Chairman/CEO Sanjiva Weerawarana: "Ruby, as you know, has become a very popular language the last few years, and what we are enabling is for Ruby to become part of an enterprise SOA architecture." With the December release of Ruby on Rails 2.0, the builders of Rails swapped out a SOAP library and replaced it with REST capabilities. In doing so, David Heinemeier Hansson, the founder of Rails, stressed that SOAP and its attendant WS-* stack had become too complex. But Weerawarana countered that REST may not always be sufficient: "The REST preference is a perfectly fine position to take if you don't need any kind of these security and reliability infrastructure capabilities." WSO2's framework would replace the SOAP capabilities removed in Rails 2.0... WSF/Ruby 1.0 binds WSO2's Web Services Framework for C into Ruby to provide an extension based on three Apache projects.
These include: Axis2/C, a Web services runtime that supports both REST and SOAP; Sandesha/C, supporting WS-Reliable Messaging; and Rampart/C, for WS-Security capabilities. See: "WSO2 Web Services Framework for Ruby Brings Enterprise-Class SOA Capabilities to Popular Web Language. WSO2 Ruby Extension Combines Ruby on Rails Ease of Use with the Security and Reliability that Enterprises Demand."
2008 Summer Olympic Games via NBC Universal and Silverlight 2.0
S. "Soma" Somasegar, Somasegar's WebLog
See also: the Silverlight Wikipedia article
Web 3.0: Chicken Farms on the Semantic Web
Jim Hendler, IEEE Computer "Web Technologies" Column
The explosive growth of blogs, wikis, social networking sites, and other online communities has transformed the Web in recent years. The mainstream media has taken notice of the so-called Web 2.0 revolution -- stories abound about events such as Facebook's huge valuation and trends like the growing Hulu-YouTube rivalry and Flickr's role in the current digital camera sales boom. However, a new set of technologies is emerging in the background, and even the Web 2.0 crowd is starting to take notice... The Semantic Web involves several chicken-and-egg problems. First, the applications require, in part or whole, data that is available for sharing either within or across an enterprise. Represented in RDF, this data can be generated from a standard database, mined from existing Web sources, or produced as markup of document content. Machine-readable vocabularies for describing these data sets or documents are likewise required. The core of many Semantic Web applications is an ontology, a machine-readable domain description, defined in RDFS or OWL. These vocabularies can range from a simple "thesaurus of terms" to an elaborate expression of the complex relationships among the terms or rule sets for recognizing patterns within the data. Web 3.0 applications require extensions to browsers, or other Web tools, enhanced by Semantic Web data. As in the early days of the Web when we were creating HTML pages without being quite sure what to do with them, for a long time people have been creating and exchanging Semantic Web documents and data sets without knowing exactly how Web applications would access and use them. The advent of RDF query languages, particularly SPARQL, makes it possible to create three-tiered Semantic Web applications similar to standard Web applications. These in turn can present Semantic Web data in a usable form to end users or to other applications, eliciting more obvious value from the emerging Web of data and documents... 
What we see in Web 3.0 is the Semantic Web community moving from arguing over chickens and eggs to creating its first real chicken farms. The technology might not yet be mature, but we've come a long way, and the progress promises to continue for a long time to come.
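The role SPARQL plays in those three-tiered applications is essentially pattern matching over a graph of (subject, predicate, object) triples. The pure-Python sketch below illustrates that idea in miniature; the graph and vocabulary are invented for illustration, and a real application would use an RDF library and a SPARQL engine rather than this toy matcher.

```python
# Toy illustration of the idea behind a SPARQL query: matching a triple
# pattern against an RDF-style graph of (subject, predicate, object)
# triples. Terms beginning with '?' are variables. The data and
# vocabulary below are invented for illustration.

triples = {
    ("ex:Alice", "ex:knows", "ex:Bob"),
    ("ex:Bob", "ex:knows", "ex:Carol"),
    ("ex:Alice", "ex:worksAt", "ex:AcmeCorp"),
}

def match(pattern, graph):
    """Yield one variable-binding dict per triple matching the pattern."""
    for triple in graph:
        bindings = {}
        ok = True
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                bindings[term] = value      # bind the variable
            elif term != value:
                ok = False                  # constant term mismatch
                break
        if ok:
            yield bindings

# "SELECT ?who WHERE { ?who ex:knows ex:Bob }" in miniature:
print([b["?who"] for b in match(("?who", "ex:knows", "ex:Bob"), triples)])
```

Layering such queries between an RDF data store and a conventional Web front end is what makes the three-tiered architecture described above feel familiar to Web developers.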
See also: the reference page
Shred XML Documents Using DB2 pureXML
Salvador Ledezma and Bert Van Der Linden, IBM developerWorks
This article discusses two methods for XML decomposition in DB2 for Linux, UNIX, and Windows. As XML data continues to proliferate in the enterprise, it is not always possible to store the XML as XML. Perhaps you are working with a legacy data architecture, or other requirements restrict your storage to be relational. In fact, it is not uncommon to send and receive messages as XML data, while the messages are composed from and decomposed into relational data. Two features of DB2 help in this situation, where the data cannot be stored as XML: SQL/XML publishing functions and XML decomposition. The SQL/XML publishing functions help compose XML data from relational data. This article focuses on ways to "shred" XML data in DB2. Shredding is the process of mapping XML elements and attributes into relational tables and columns. One way to shred in DB2 is through the use of an annotated XML schema. If an XML schema exists for the data, this is the easiest and fastest way to perform decomposition; if the mapping is significantly complex and involves multiple tables, existing tools automate both the mapping and decomposition steps. Another, perhaps less well-known, method for shredding is the SQL/XML function XMLTABLE, which is useful when an XML schema does not exist. Using the XMLTABLE function can be more complex, since the decomposition steps must be manually coded: the developer must explicitly state, using XQuery expressions, how a particular XML element maps to a table and column. Still, it is this flexibility that makes XMLTABLE decomposition more powerful than annotated XML schema decomposition, giving it the ability to perform some types of mappings that annotated XML schema decomposition cannot. This article shows decomposition examples using both annotated XML schemas and the XMLTABLE function, including some examples that annotated XML schema decomposition does not support but XMLTABLE does. It concludes with a comparison of best practices for each method and provides some recommendations for their use.
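What XMLTABLE does inside DB2, mapping path expressions over an XML document to rows and columns, can be illustrated outside the database with a few lines of Python. The document and column mapping below are invented for illustration and do not correspond to the article's examples:

```python
# Conceptual analogue of XMLTABLE shredding: map each repeating element
# of an XML document to a relational-style row, with columns drawn from
# attributes and child elements via path expressions.
# The document and mapping are invented for illustration.
import xml.etree.ElementTree as ET

doc = """<orders>
  <order id="1"><customer>Acme</customer><total>250</total></order>
  <order id="2"><customer>Globex</customer><total>99</total></order>
</orders>"""

root = ET.fromstring(doc)
rows = [
    # one row per <order>; this plays the role of XMLTABLE's row and
    # column XQuery expressions
    (order.get("id"), order.findtext("customer"), int(order.findtext("total")))
    for order in root.findall("order")
]
print(rows)  # [('1', 'Acme', 250), ('2', 'Globex', 99)]
```

In DB2 itself the same shape of mapping is expressed declaratively, with XQuery expressions inside the XMLTABLE function, and the resulting rows can be inserted directly into relational tables.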
See also: the pureXML Wiki
Spring Web Flow for Better Workflow Management in JSF
Ravi Shankar Nair, Java World Magazine
JavaServer Faces is a Java-based Web application framework that simplifies the development of user interfaces for Java Web applications. Unlike traditional request-driven MVC Web frameworks, JSF uses a component-based approach: the state of UI components is saved when the client requests a new page and restored when the request returns. Out of the box, JSF uses JavaServer Pages as its display technology, but it can also accommodate other display technologies, such as the XML User Interface Language. JSF is a successful technology for component-based Web UI development. One of the main advantages of JSF's component-based approach is that you need not revamp your application every time client technologies evolve, as they tend to do. JSF's popularity is also growing with the emergence of JSF extension frameworks like Seam, MyFaces, and ICEfaces... Enterprise systems require more advanced workflow management than JSF provides. JSF falls short when it comes to embedding rules that determine the destination and routing of non-JSF pages, as well as handling exceptions and global transitions. In this article I share my experience of integrating the open source Web application framework Spring Web Flow with JSF. JSF beginners through intermediate developers will benefit from the practical examples, illustrations, and code snippets in the article. You will learn something about JSF and Spring Web Flow, and be introduced to an integrated development solution that leverages them both. Spring Web Flow is a very powerful open source solution for implementing navigation logic and managing application state, especially in rapidly evolving application scenarios. As I've shown, it integrates very nicely with JSF. It also integrates well with Spring MVC and Struts.
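The navigation logic that Spring Web Flow adds on top of JSF is expressed declaratively in an XML flow definition. The minimal sketch below shows the general shape of such a definition; the state names, view names, and event identifiers are invented for illustration and are not from the article.

```xml
<!-- Minimal sketch of a Spring Web Flow definition. The state, view,
     and event names here are invented for illustration. -->
<flow xmlns="http://www.springframework.org/schema/webflow">
    <view-state id="enterOrder" view="enterOrder.xhtml">
        <transition on="submit" to="confirmOrder"/>
        <transition on="cancel" to="finish"/>
    </view-state>
    <view-state id="confirmOrder" view="confirmOrder.xhtml">
        <transition on="confirm" to="finish"/>
    </view-state>
    <end-state id="finish"/>
</flow>
```

Because transitions, exception handling, and flow-scoped state all live in this one artifact rather than in scattered page code, the flow definition is what gives JSF applications the workflow management the article describes.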
XML Daily Newslink and Cover Pages are sponsored by:
BEA Systems, Inc.        http://www.bea.com
Sun Microsystems, Inc.   http://sun.com
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: email@example.com
Newsletter unsubscribe: firstname.lastname@example.org
Newsletter help: email@example.com
Cover Pages: http://xml.coverpages.org/