This issue of XML Daily Newslink is sponsored by:
- OASIS XLIFF Version 1.2 to be Considered for Standardization
- Introduction to the Eclipse Business Intelligence and Reporting Tools
- Apache Wicket 1.3 Set for Java Web Development
- IESG Approves GEOPRIV Revised Civic Location Format for PIDF-LO
- The Good in WS-*
- An IPFIX-Based File Format
- Spolsky (and Usdin and Piez) on Specs
- JSF Testing Tools
OASIS XLIFF Version 1.2 to be Considered for Standardization
Staff, OASIS Announcement
Members of the OASIS XML Localization Interchange File Format (XLIFF) Technical Committee have submitted an approved Committee Specification document set for XLIFF 1.2 to be considered as an OASIS Standard. The XLIFF 1.2 Specification defines the XML Localization Interchange File Format (XLIFF), designed by a group of software providers, localization service providers, and localization tools providers. The purpose of this vocabulary is to store localizable data and carry it from one step of the localization process to the next, while allowing interoperability between tools. It is intended to give any software provider a single interchange file format that can be understood by any localization provider. The specification is tool-neutral, supports the entire localization process, and supports common software, document data formats, and markup languages. The specification provides an extensibility mechanism to allow the development of tools compatible with an implementer's data formats and workflow requirements. The extensibility mechanism provides controlled inclusion of information not defined in the specification. XLIFF is loosely based on the OpenTag version 1.2 specification and borrows from the TMX 1.2 specification. However, it is different enough from either one to be its own format. The Version 1.2 specification set includes a Core prose document, XML schemas, a Representation Guide for HTML, a Representation Guide for Java Resource Bundles, and a Representation Guide for Gettext PO (defining a guide for mapping the GNU Gettext Portable Object file format to XLIFF).
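To make the interchange idea concrete, here is a small sketch of extracting source/target pairs from an XLIFF-style document. The element names and namespace follow the XLIFF 1.2 drafts, but treat this as illustrative only; the specification defines many required attributes and a richer content model than shown here.

```python
# Sketch: localizable strings travel between tools as <trans-unit>
# source/target pairs inside an XLIFF document. Consult the XLIFF 1.2
# specification for the authoritative content model.
import xml.etree.ElementTree as ET

XLIFF_NS = "urn:oasis:names:tc:xliff:document:1.2"

doc = """<xliff version="1.2" xmlns="urn:oasis:names:tc:xliff:document:1.2">
  <file original="hello.txt" source-language="en" target-language="fr"
        datatype="plaintext">
    <body>
      <trans-unit id="1">
        <source>Hello world</source>
        <target>Bonjour le monde</target>
      </trans-unit>
    </body>
  </file>
</xliff>"""

def extract_pairs(xliff_text):
    """Return (id, source, target) triples for every trans-unit."""
    root = ET.fromstring(xliff_text)
    ns = {"x": XLIFF_NS}
    return [(tu.get("id"),
             tu.findtext("x:source", namespaces=ns),
             tu.findtext("x:target", namespaces=ns))
            for tu in root.iterfind(".//x:trans-unit", ns)]

print(extract_pairs(doc))
```

Because the format is tool-neutral XML, any localization tool that understands this structure can consume the same file, which is exactly the interoperability goal the TC describes.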
See also: the announcement
Introduction to the Eclipse Business Intelligence and Reporting Tools
Jason Weathersby, InfoQueue
Eclipse's Business Intelligence and Reporting Tools (BIRT) project is an open source project based on the popular Eclipse IDE and is used to build and deploy reports in a Java/J2EE environment. Some of the key downloads available with the project include: (1) BIRT Designer: Used to construct reports. At the center of BIRT is the report designer, which is a set of Eclipse plug-ins that make up the designer perspective, providing drag-and-drop capabilities to quickly design reports. The report designs are created and stored in an XML format. (2) Report Editor: The Report Editor is used to construct the report and acts as a canvas for positioning and formatting report elements. Within this View, there are tabs for Layout, Master Page, Script, XML Source, and Preview. The XML Source tab displays the XML source code for the report design. It is possible to edit the XML within this tab, although it is generally best to use the Layout View. (3) Web Viewer: An example J2EE application used to deploy reports, containing a JSP tag library to ease the integration with existing web applications. Once report development is complete, the reports can be deployed using the BIRT example Web Viewer. The viewer has been improved for BIRT 2.2 and is an AJAX-based J2EE application that illustrates using the BIRT engine to generate and render report content. (4) BIRT Charting package: Supports building sophisticated actionable charts. The BIRT project had its first major release in the summer of 2005 and has garnered over a million downloads since its inception. The BIRT project web site includes an introduction, tutorials, downloads, and examples of using BIRT. In this article, we will begin by describing the BIRT designer, which is used to build report designs, and conclude by discussing the example BIRT Viewer, which is used to deploy the designs and generate the completed reports.
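Because the designer stores report designs as XML, a design can be inspected or transformed outside the IDE. The snippet below uses a deliberately simplified, hypothetical design document (the real BIRT design schema is far richer) just to illustrate the "design as XML document" point from the article.

```python
# Hypothetical, simplified report-design XML -- NOT the actual BIRT schema.
# Illustrates that a stored design is ordinary XML a tool can walk.
import xml.etree.ElementTree as ET

design = """<report>
  <data-sources>
    <oda-data-source name="orders_db"/>
  </data-sources>
  <body>
    <label name="title"><text>Quarterly Orders</text></label>
    <table name="orders_table"/>
  </body>
</report>"""

root = ET.fromstring(design)
# List every named report element, much as a designer outline view might.
elements = [(el.tag, el.get("name")) for el in root.iter() if el.get("name")]
print(elements)
```

This is also why the XML Source tab works: the Layout view and the XML view are two editors over the same underlying document.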
Apache Wicket 1.3 Set for Java Web Development
Paul Krill, InfoWorld
See also: the Apache Wicket Project
IESG Approves GEOPRIV Revised Civic Location Format for PIDF-LO
Martin Thomson and James Winterbottom (eds), IETF Internet Draft
The Internet Engineering Steering Group (IESG) announced that the "Revised Civic Location Format for PIDF-LO" specification has been approved as an IETF Proposed Standard. The document defines an XML format for the representation of civic location. This format is designed for use with PIDF Location Object (PIDF-LO) documents and replaces the civic location format in RFC 4119 ("A Presence-based GEOPRIV Location Object Format"). The format is based on the civic address definition in PIDF-LO, but adds several new elements based on the civic types defined for DHCP, and adds a hierarchy to address complex road identity schemes. The format also includes support for the "xml:lang" language tag and restricts the types of elements where appropriate. The approved version -07 Internet Draft document was reviewed by the GEOPRIV working group, where it reached consensus for publication as an IETF RFC. Document Quality: The XML Schema contained within this document has been checked against Xerces-J 2.6.2. In addition to updating RFC 4119, the document is also a normative reference in IETF 'draft-ietf-ecrit-lost' ("LoST: A Location-to-Service Translation Protocol"). There are three known implementations of this specification. The IETF GEOPRIV Working Group was chartered to assess the authorization, integrity, and privacy requirements that must be met in order to transfer location information, or to authorize the release or representation of such location information through an agent.
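A sketch of what a civic address in the revised format might look like follows. The element names (country, A1..A6, RD, HNO) and the namespace reflect my reading of the draft's civic types; verify both against the published RFC before relying on them.

```python
# Sketch of a revised civic location element with an xml:lang tag.
# Element names and namespace are assumptions from the draft, not verified
# against the final RFC.
import xml.etree.ElementTree as ET

CA_NS = "urn:ietf:params:xml:ns:pidf:geopriv10:civicAddr"
XML_NS = "http://www.w3.org/XML/1998/namespace"
ET.register_namespace("ca", CA_NS)

def q(tag):
    """Qualify a tag name in the civic address namespace."""
    return "{%s}%s" % (CA_NS, tag)

addr = ET.Element(q("civicAddress"), {"{%s}lang" % XML_NS: "en"})
for tag, text in [("country", "AU"), ("A1", "NSW"), ("A3", "Wollongong"),
                  ("RD", "Northfield"), ("HNO", "2")]:
    ET.SubElement(addr, q(tag)).text = text

print(ET.tostring(addr, encoding="unicode"))
```

Note how the road name (RD) and house number (HNO) are separate elements; splitting these civic types apart is part of what lets the format model complex road identity schemes.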
The Good in WS-*
Stuart Charlton, Blog
Responding to Ganesh Prasad and Steve Vinoski on matters of REST vs SOAP/WS-* ("it would greatly clear the air if a REST advocate sat down and listed out things in SOAP/WS-* that were 'good' and worth adopting by REST"), Stuart Charlton surveys what's good and what could be improved in WS-Security, WS-Trust, WS-SecureConversation, WS-Coordination, WS-AtomicTransaction, WS-Choreography Description Language, SAML, WS-BPEL, and other specifications. With respect to WS-BPEL, Charlton says: "It raises the abstraction bar for a domain language specifying sequential processes... It's more focused on programmers (and hence, vendors selling programmer tools) than on the problem space of BPM and Workflow; it relies on a central orchestrator, and thus seems rather like a programming language in XML. It's very XML focused; binding to specific languages requires a container-specific extension like Apache WSIF or JCA or SCA or .... BPEL4People and WS-HumanTask are a work in progress; considering the vast majority of business processes involve people, I'd say this is a glaring limitation. BPEL treats data as messages, not as data that has identity, provenance, quality, reputation, etc. [What's happening here in the RESTful world?] I think there is a big opportunity for a standard human tasklist media type. I haven't scoured around the internet for this, if anyone knows of one, please let me know. This would be a win for several communities: the BPM community today has no real standard, and neither does the REST community. The problem is pretty similar whether you're doing human tasks for a call center or for a social network, whether social or enterprise. Look at Facebook notifications as a hint. Semantics might include "activity", "next steps", "assignment", etc. One could map the result into a microformat, and then we'd have Facebook-like mini-feeds and notifications without the garden wall. 
As for a "process execution language" in the REST world, I think, if any, it probably would be a form of choreography, since state transitions occur through networked hypermedia, not a centrally specified orchestrator.
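No standard human tasklist media type existed at the time of writing, so the following is purely an illustrative sketch of what one entry in such a feed might carry, using the semantics Charlton names ("activity", "next steps", "assignment"); every field name here is hypothetical.

```python
# Hypothetical tasklist entry -- an illustration of Charlton's suggestion,
# not any real or proposed media type. All field names are invented.
import json

task_entry = {
    "id": "urn:example:task:42",           # hypothetical identifier scheme
    "activity": "Approve expense report",
    "assignment": {"assignee": "alice@example.com", "role": "manager"},
    "next-steps": ["approve", "reject", "request-more-info"],
    "due": "2007-12-01T17:00:00Z",
}

# A mini-feed of such entries could drive Facebook-style notifications
# for either an enterprise workflow or a social application.
feed = {"tasks": [task_entry]}
print(json.dumps(feed, indent=2))
```

The point of a shared media type is exactly this symmetry: the same entry structure would serve a call-center worklist and a social-network notification stream.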
See also: Stuart Charlton's BEA blog
An IPFIX-Based File Format
Brian H. Trammell (et al. eds), IETF Internet Draft
Members of the IP Flow Information Export (IPFIX) Working Group have released an initial -00 Internet Draft for "An IPFIX-Based File Format." The IPFIX WG has developed a MIB module for monitoring IPFIX implementations. Means for configuring these devices have not been standardized yet. Per its charter, the WG is developing an XML-based configuration data model that can be used for configuring IPFIX devices and for storing, modifying, and managing IPFIX configuration parameter sets; this work is performed in close collaboration with the NETCONF WG. The IETF Proposed Standard "Information Model for IP Flow Information Export" defines an XML-based specification of templates, abstract data types, and IPFIX Information Elements that can be used for automatically checking the syntactical correctness of IPFIX Information Element specifications. The new "IPFIX-Based File Format" document describes a file format for the storage of flow data based upon the IPFIX Message format. It proposes a set of requirements for flat-file, binary flow data file formats, then applies the IPFIX message format to these requirements to build a new file format. This IPFIX-based file format is designed to facilitate interoperability and reusability among a wide variety of flow storage, processing, and analysis tools... [Note, in relation to W3C's Efficient XML Interchange (EXI) Working Group Charter and Deliverables:] Over the past decade, XML markup has emerged as a new 'universal' representation format for structured data. It is intended to be human-readable; indeed, that is one reason for its rapid adoption. However, XML has limited usefulness for representing network flow data. Network flow data has a simple, repetitive, non-hierarchical structure that does not benefit much from XML. An XML representation of flow data would be an essentially flat list of the attributes and their values for each flow record.
The XML approach to data encoding is very heavyweight when compared to binary flow encoding. XML's use of start- and end-tags, and plain-text encoding of the actual values, leads to significant inefficiency in encoding size. Typical network flow datasets can contain millions or billions of flows per hour of traffic represented. Any increase in storage size per record can have a dramatic impact on flow data storage and transfer sizes. While data compression algorithms can partially remove the redundancy introduced by XML encoding, they introduce additional overhead of their own. A further problem is that XML processing tools require a full XML parser... This leads us to propose the IPFIX Message format as the basis for a new flow data file format.
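The draft's size argument can be made concrete by encoding one flow record both ways and comparing byte counts. The binary field layout below is illustrative, not the actual IPFIX template encoding; the element names are borrowed from IPFIX Information Element names for flavor.

```python
# Compare one flow record encoded as XML vs. fixed-width binary.
# The binary layout is an assumption for illustration, not real IPFIX.
import struct
import socket

# A toy 5-tuple-plus-counters flow record.
flow = {"src": "192.0.2.1", "dst": "198.51.100.2",
        "sport": 443, "dport": 51234, "proto": 6,
        "packets": 1200, "bytes": 960000}

xml_rec = ("<flow><sourceIPv4Address>%(src)s</sourceIPv4Address>"
           "<destinationIPv4Address>%(dst)s</destinationIPv4Address>"
           "<sourceTransportPort>%(sport)d</sourceTransportPort>"
           "<destinationTransportPort>%(dport)d</destinationTransportPort>"
           "<protocolIdentifier>%(proto)d</protocolIdentifier>"
           "<packetDeltaCount>%(packets)d</packetDeltaCount>"
           "<octetDeltaCount>%(bytes)d</octetDeltaCount></flow>" % flow)

# Binary: two IPv4 addresses, two 16-bit ports, one 8-bit protocol,
# two 64-bit counters -- 29 bytes, network byte order, no padding.
binary_rec = struct.pack("!4s4sHHBQQ",
                         socket.inet_aton(flow["src"]),
                         socket.inet_aton(flow["dst"]),
                         flow["sport"], flow["dport"], flow["proto"],
                         flow["packets"], flow["bytes"])

# The XML record is roughly an order of magnitude larger per flow.
print(len(xml_rec), len(binary_rec))
```

Multiplied across millions or billions of flows per hour, that per-record overhead is the storage and transfer cost the draft is arguing against.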
Spolsky (and Usdin and Piez) on Specs
Michael Sperberg-McQueen, Blog
Joel Spolsky [...] has an interesting riff on specifications and their discontents, which feels relevant to the perennial topics of improving the quality of W3C (and other) specs, and of the possible uses of formalization in that endeavor. Excerpt (from the Talk at Yale): "the hard-core geeks tend to give up on all kinds of useful measures of quality, and basically they get left with the only one they can prove mechanically, which is, does the program behave according to specification. And so we get a very narrow, geeky definition of quality: how closely does the program correspond to the spec. Does it produce the defined outputs given the defined inputs. The problem, here, is very fundamental. In order to mechanically prove that a program corresponds to some spec, the spec itself needs to be extremely detailed. In fact the spec has to define everything about the program, otherwise, nothing can be proven automatically and mechanically. Now, if the spec does define everything about how the program is going to behave, then, lo and behold, it contains all the information necessary to generate the program! And now certain geeks go off to a very dark place where they start thinking about automatically compiling specs into programs, and they start to think that they've just invented a way to program computers without programming. Now, this is the software engineering equivalent of a perpetual motion machine." [... Sperberg-McQueen:] "In their XML 2007 talk on 'Separating Mapping from Coding in Transformation Tasks', Tommie Usdin and Wendell Piez talk about the utility of separating the specification of an XML-to-XML transform ('mapping') from its implementation ('coding'), and provide a lapidary argument against one common way of trying to make a specification more precise: 'Code-like prose is hard to read.' Has there ever been a more concise diagnosis of many readers' problems with the XML Schema spec?
I am torn between the pleasure of insight and the feeling that my knuckles have just been rapped, really hard. [Deep breath.] Thank you, ma'am, may I have another?"
See also: Talk at Yale Part 1 of 3
JSF Testing Tools
Srini Penchikala, InfoQ
Unit testing JSF-based web applications has been considered difficult because of the constraints of testing JSF components outside the container. Most web-tier testing frameworks follow a black-box testing approach, where developers write test classes against the web components to verify that the rendered HTML output is what is expected. Frameworks such as HtmlUnit, HttpUnit, Canoo WebTest, and Selenium fall into this category. The limitation of these frameworks is that they only test the client side of a web application. But this trend is changing with the recently released JSFUnit and other JSF testing frameworks such as Shale Test and JSF Extensions that support white-box testing of both client and server components of the web application. Projects like Eclipse Web Tools Platform (WTP) and JXInsight are also helping in the development and testing of JSF applications... JSFUnit, which is built on HttpUnit and Apache Cactus, allows integration testing and debugging of JSF applications and JSF AJAX components. It can be used for testing both client- and server-side JSF artifacts in the same test class. With the JSFUnit API, the test class methods can submit data on a form and verify that managed beans are properly updated. JSFUnit includes support for RichFaces and Ajax4jsf components. The Beta 1 version of this framework was released last month, and the second beta release is scheduled for the end of next month.
XML Daily Newslink and Cover Pages are sponsored by:
BEA Systems, Inc.        http://www.bea.com
Sun Microsystems, Inc.   http://sun.com
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: email@example.com
Newsletter unsubscribe: firstname.lastname@example.org
Newsletter help: email@example.com
Cover Pages: http://xml.coverpages.org/