The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: March 30, 2007
XML Daily Newslink. Friday, 30 March 2007

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
BEA Systems, Inc. http://www.bea.com



First Working Draft for Rule Interchange Format Core Design
Harold Boley and Michael Kifer (eds), W3C Technical Report

Members of W3C's Rule Interchange Format (RIF) Working Group have published a First Public Working Draft for "RIF Core Design." The Working Group invites comments through 27-April-2007. The RIF Core document "specifies the core design for a format that allows rules to be translated between rule languages and thus transferred between rule systems. RIF Core defines a set of foundational concepts shared by all RIF dialects. The overall RIF design takes the form of a layered architecture organized around the notion of a dialect. A dialect is a rule language with a well-defined syntax and semantics. This semantics must be model-theoretic, proof-theoretic, or operational in this order of preference. Some dialects might be proper extensions of others (both syntactically and semantically) and some may have incompatible expressive power. However, all dialects are required to extend the RIF Core dialect. From a theoretical perspective, RIF Core corresponds to the language of definite Horn rules (Horn Logic) with equality (and with a standard first-order semantics). Syntactically, however, RIF Core has a number of extensions to support features such as objects and frames, URIs as identifiers for concepts, and XML Schema data types. These features make RIF a Web language. However, RIF is designed to enable interoperability among rule languages in general, and its uses are not limited to the Web. The semantics of RIF has provisions for future extensions towards dialects that support pure FOL, dialects that support negation as failure (NAF), business (or production) rules, reactive rules, and other features. Eventually, it is hoped that RIF dialects will cover a number of important paradigms in rule-based specification and programming. Our main target paradigms include production rules, logic programming, FOL-based rules, reactive rules, and normative rules (integrity constraints). The central part of RIF Core is its Condition Language. 
The condition language defines the syntax and semantics for the bodies of the rules in the core of RIF and the syntax for the queries. However, it is hoped that the condition language will have wider applicability in RIF. In particular, it might be reusable as a sublanguage for specifying the conditional part of the bodies in production rules, reactive rules, and in normative rules.
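The draft notes that RIF Core corresponds to the language of definite Horn rules. A minimal, illustrative sketch of what evaluating such rules means — naive forward chaining over ground facts — can make the correspondence concrete; the tuple encoding below is invented for illustration and is not RIF syntax:

```python
# Illustrative sketch: naive forward chaining over definite Horn rules,
# the logic RIF Core corresponds to. Atoms are ground tuples; this is
# not RIF syntax, just the underlying inference idea.

def forward_chain(facts, rules):
    """Derive the closure of `facts` under definite Horn `rules`.

    Each rule is (head, body): if every atom in the body is known,
    the head becomes known. Iterate until nothing new is derived.
    """
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in known and all(atom in known for atom in body):
                known.add(head)
                changed = True
    return known

facts = {("parent", "ann", "bob"), ("parent", "bob", "cid")}
rules = [
    # ancestor(X,Y) :- parent(X,Y), instantiated for these constants
    (("ancestor", "ann", "bob"), [("parent", "ann", "bob")]),
    (("ancestor", "bob", "cid"), [("parent", "bob", "cid")]),
    # transitivity, likewise instantiated
    (("ancestor", "ann", "cid"),
     [("ancestor", "ann", "bob"), ("ancestor", "bob", "cid")]),
]
closure = forward_chain(facts, rules)
```

A real RIF Core engine would of course work with variables and unification rather than pre-instantiated ground rules; the fixpoint structure is the same.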

See also: Business Rule Languages


What Else Should Schematron Have?
Rick Jelliffe, O'Reilly Technical

"Schematron uptake is on the increase, and the beta implementation of ISO Schematron is chugging away. The relevant working group at ISO (ISO/IEC JTC1 SC34 WG1) has asked me to look into preparing an update for the standard; most of the other ISO DSDL family of schema languages have just been through a round of corrections based on initial experience, and I want to prepare something by the end of May [2007]. There won't be any changes that would break existing ISO Schematron schemas. And I don't think there would be any extra logical apparatus or changes to the class of logic required; and certainly nothing that would prevent implementation in XSLT 1 by default. I am interested in gathering a wish list, especially things where you have extended Schematron. The candidates I see at the moment include: (1) A new annex with the Query Language Binding for XSLT v2. (2) Editorial corrections, in particular changes coming from Murata's Japanese translation for JIS; translation involves one of the best kinds of review a standard can get. (3) Suggestions from the new Schematron-Love-In mailing list: for example, some users would like SVRL to be more thorough, and there are some sentences or concepts in the standard that are perfectly clear to the editor but which apparently require mind-reading abilities. (4) The W3C Rules Interchange Format RIF core design is a source of review material too. Schematron itself is based on a very simple framework that provides extra-logical abstractions (patterns, phases, diagnostics, abstract patterns), with an emphasis on ready implementability rather than any concern with exposing a formal class of logic. ISO Schematron does specify Schematron using predicate logic, and my intent is that the spec does not ignore logic-theoretic categorization, but Schematron always errs on the side of pragmatism: what abstractions will help users? What low-hanging fruit do XSLT and XPath allow?
What source code actually exists that demonstrates implementability and actual requirements? I certainly expect that some uncomplicated RIF rulesets may be convertible into Schematron. Should we add an element 'reject' with the same operation as 'report', allowing implementations to fail validity where a reject test succeeds, where a report would not? (5) A couple of recent implementation projects that Topologi and Allette Systems have been doing, integrating Schematron into a larger cradle-to-grave design.
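Schematron's core abstraction — patterns of rules pairing an XPath context with assertion tests — is simple enough to sketch in a few lines. The toy below uses only the limited XPath subset in Python's standard `xml.etree` module (real Schematron implementations typically compile to XSLT), and the rule/message structure is an invented simplification:

```python
# Toy model of Schematron's pattern/rule/assert idea using the limited
# XPath subset in Python's stdlib. Real ISO Schematron compiles schemas
# to XSLT; this sketch only shows the shape of the evaluation.
import xml.etree.ElementTree as ET

def validate(doc_xml, patterns):
    """Return a list of failure messages (an empty list means valid).

    `patterns` maps a context path to (test_path, message): every node
    matched by the context must have at least one node matching the test.
    """
    root = ET.fromstring(doc_xml)
    failures = []
    for context, (test, message) in patterns.items():
        for node in root.findall(context):
            if node.find(test) is None:   # assert failed for this context
                failures.append(message)
    return failures

doc = "<library><book><title>RIF</title></book><book/></library>"
patterns = {".//book": ("title", "every book needs a title")}
report = validate(doc, patterns)   # one failure: second book lacks a title
```

Schematron's phases, diagnostics, and abstract patterns layer on top of exactly this kind of context-plus-test evaluation.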

See also: Schematron as ISO DSDL Part 3


RDFa Use Cases: Scenarios for Embedding RDF in HTML
Ben Adida and Michael Hausenblas (eds), W3C Technical Report

The W3C XHTML2 Working Group and the Semantic Web Deployment Working Group have jointly published the First Public Working Draft for "RDFa Use Cases: Scenarios for Embedding RDF in HTML." Current web pages, written in HTML, contain significant inherent structured data. When publishers can express this data more completely, and when tools can read it, a new world of user functionality becomes available, letting users transfer structured data between applications and web sites. An event on a web page can be directly imported into a user's desktop calendar. A license on a document can be detected so that the user is informed of his rights automatically. A photo's creator, camera setting information, resolution, and topic can be published as easily as the original photo itself, enabling structured search and sharing. RDFa is a syntax that expresses this structured data using a set of elements and attributes that embed RDF in HTML. An important goal of RDFa is to achieve this RDF embedding without repeating existing HTML content when that content is the structured data. RDFa is designed to work with different XML dialects, e.g. XHTML1, SVG, etc., given proper schema additions. In addition, RDFa is defined so as to be compatible with non-XML HTML. An XHTML document marked up with RDFa constructs should validate, and a non-XML HTML document marked up with RDFa remains compliant. RDFa uses existing HTML constructs and HTML-compatible extensions to specify RDF 'content'. It is not about embedding RDF/XML syntax into HTML documents. This "Use Cases" document presents the major use cases where embedding structured data in HTML using RDFa provides significant benefit. Each use case explores how publishers, tool builders, and consumers benefit from RDFa. In parallel, the reader is encouraged to look at the RDFa Primer, and RDFa Syntax.
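The embedding idea — triples read directly from attributes on existing markup — can be sketched with a simplified extractor. The `about` and `property` attribute names follow the RDFa drafts, but the resolution rules below (inherited subjects, element text as the object) are a drastic simplification of the real processing model:

```python
# Rough sketch of RDFa's embedding idea: subject/property/object triples
# read from host-language attributes. Attribute names follow the RDFa
# drafts ('about', 'property'); the resolution logic is greatly simplified.
import xml.etree.ElementTree as ET

def extract_triples(xhtml):
    root = ET.fromstring(xhtml)
    triples = []

    def walk(el, subject):
        subject = el.get("about", subject)   # 'about' sets a new subject
        prop = el.get("property")
        if prop is not None:                 # element text is the object
            triples.append((subject, prop, (el.text or "").strip()))
        for child in el:
            walk(child, subject)

    walk(root, "")
    return triples

page = ('<div about="http://example.org/event">'
        '<span property="cal:summary">XML Conference</span>'
        '<span property="cal:dtstart">2007-03-30</span></div>')
triples = extract_triples(page)
```

Note how the structured data is the visible content itself — the event summary is not repeated in hidden metadata, which is precisely the goal the draft states.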

See also: the W3C Semantic Web


The Real Issues With XPDL, BPEL, and BPMN
Bruce Silver, Intelligent Enterprise Weblog

Keith Swenson is one of the true superheroes of BPM, and a pioneer in the development of interoperability standards. Known for his stalwart defense of XPDL, he periodically feels called upon to insist that XPDL does not compete with BPEL, usually adding that XPDL is actually better. But I've always felt that Keith obscures the real difference between XPDL and BPEL and their relationships to the "real" BPM standard, which is BPMN. XPDL captures the diagram while BPEL captures the process semantics. Keith dismisses the latter as just the information an "execution engine" would need to know. Technically that's true of BPEL, I suppose. But which of these best represents the process model? The bottom line is that neither XPDL nor BPEL today meets the real need of the BPM community, which is a portable serialization of process models—not diagrams, models—that is independent of implementation architecture. OMG is supposedly developing that based on BPDM, its formal metamodel for BPMN, now nearing finalization. I said last spring at OMG Think Tank that in BPDM's absence, XPDL had a window of opportunity to become the de facto serialization standard for BPMN. But by focusing on diagrams not models, and positioning itself versus BPEL not BPDM, XPDL has let that window close. They might argue that adding BPMN compliance rules and semantics to XPDL is not their job but OMG's. But that was in fact the opportunity, soon to disappear. Here's the puzzling part. I've actually seen a draft of BPDM and see no signs of a BPMN schema. Actually I found the thing near-incomprehensible; there was something about MOF and XMI but not a schema. It made me wonder whether BPDM would actually include a schema for BPMN, or just some kind of production rules that ensure conformance to the BPDM metamodel. If OMG does not publish a BPMN schema, I see more consternation in BPM-Land and a second chance for XPDL to get it right.

See also: Standards for Business Process Modeling


How SOA Increases Your Security Risk
Bert Latamore, ComputerWorld

Service-oriented architecture changes the security equation by introducing a greater reliance on third parties for application development and operation. According to Ray Wagner, managing vice president of information security and privacy at Gartner Inc., SOA may increase the number of security-related exchanges hugely. "Doing this hundreds of times an hour may have implications for computing loads, but it really is just a change of degree," not a qualitative change. A second major exposure is more technical and harder to intercept. "XML basically can contain any kind of executable or data, including things designed to do damage," Wagner warns. Again, every organization accepting XML-encoded files, which is the vast majority of organizations today, is exposed already. But SOA promises to increase the number of XML transfers—and, therefore, the exposure—by orders of magnitude, while the huge volume of these transmissions in the SOA architecture also complicates the problem of intercepting the occasional piece of malware in that flow, even as it attracts increasing attention from criminals. Products are already appearing to address this problem. Crossbeam Systems Inc., a unified threat management (UTM) vendor focused on SOA security, and Forum Systems Inc. have created an alliance to combine Crossbeam's X-Series security services switches, a high-performance, high-reliability UTM solution, with Forum's XWall Web services firewall and the Forum Sentry Web services gateway for a best-of-breed solution for intercepting malware in XML and other transmissions entering the enterprise. A third concern, Wagner says, is that the session model for identity management does not fit the more complex needs of SOA. In a simple transaction, the user authenticates at the beginning of the session, and that authentication carries through the session.
However, in an SOA model, the user may initiate a transaction and disconnect from the server, while the transaction flows through a group of back-end services, so the user has no direct connection to the final transaction. "The most promising approach to this solution uses the Security Assertion Markup Language (SAML) to create a representative identity that can be attached to the transaction."
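The pattern Wagner describes — an identity assertion that travels with the transaction instead of relying on a live session — can be sketched as follows. The element and namespace names are modeled on SAML 2.0 but heavily simplified; a real assertion would also carry conditions, timestamps, and an XML signature:

```python
# Sketch of a SAML-style "representative identity" attached to a
# transaction, as the article describes. Names are modeled on SAML 2.0
# (Assertion/Issuer/Subject/NameID) but this omits signing, conditions,
# and everything else a real deployment requires.
import xml.etree.ElementTree as ET

SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion"

def make_assertion(subject, issuer):
    """Build a minimal detached identity assertion for `subject`."""
    assertion = ET.Element(f"{{{SAML_NS}}}Assertion")
    ET.SubElement(assertion, f"{{{SAML_NS}}}Issuer").text = issuer
    subj = ET.SubElement(assertion, f"{{{SAML_NS}}}Subject")
    ET.SubElement(subj, f"{{{SAML_NS}}}NameID").text = subject
    return assertion

def attach_to_transaction(transaction, assertion):
    """Back-end services verify the attached assertion, not a session."""
    transaction.append(assertion)
    return transaction

txn = ET.Element("Transaction", {"id": "tx-1"})
txn = attach_to_transaction(
    txn, make_assertion("alice@example.org", "https://idp.example.org"))
xml_bytes = ET.tostring(txn)
```

Because the assertion is self-contained, each back-end service in the chain can check who initiated the transaction long after the user has disconnected.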

See also: SAML references


Chopping Down Trees: How To Build Flatter BPEL Processes
Michael Havey, SOA World Magazine

The natural visualization of a business process is of boxes and arrows arranged in a tree-like formation. A large process with numerous conditional paths forms a rather expansive tree that can't fit on a computer screen or printed page. If the process has loops, these are often represented as arrows pointing back to earlier boxes, resulting in an untidy graph structure. Although BPEL isn't a visual process language, its XML representation can form code trees that are no less cumbersome. A receive inside a sequence inside a flow inside a switch inside a pick, even if properly indented, can make a coder see double. This technique article shows how to model BPEL 1.1 processes in a special flat form that represents even the most onerous processes in just a few levels of structure. A process modeled in this form, represented visually, more closely resembles a neat pile of sticks than a tree. Aesthetics aside, the flat approach is fundamentally better suited to SOA orchestration than the tree approach. Flat BPEL is good SOA. The first example provides some tips on how to map, element-by-element, existing BPEL processes to the flat form. Concurrency is a notable exception. In BPEL, concurrent execution of activities is modeled with a flow activity. There are two sorts of flows: those that merely perform a set of actions in parallel and those that model what state machine guru David Harel calls orthogonal states (or a set of states that apply to the same entity simultaneously). Two possible design approaches are to flatten the flow into mutually exclusive form, or to explicitly build the logic to support orthogonal states. Concurrency is a delicate subject, and the flattening heuristic does not work in all cases. The moral of the example is to preserve the spirit of flat even when faced with the challenge of orthogonal states.
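The flattening idea itself — replacing nesting with a flat table of named steps linked explicitly — is language-independent and easy to illustrate. The data model below is invented for illustration (it is not BPEL syntax): nested `("seq", [...])` trees become a flat dictionary of steps chained by name:

```python
# Illustrative sketch of the article's flattening idea: a nested activity
# tree becomes a flat table of named steps with explicit next-links, a
# fixed few levels deep instead of arbitrarily nested. The ("seq", ...) /
# ("do", ...) encoding is invented here and is not BPEL syntax.

def flatten(activity, steps=None, counter=None):
    """Flatten a nested activity tree into a flat dict of steps.

    Returns (first_step, last_step, steps); each step links to the next
    by name, so control flow is explicit rather than implied by nesting.
    """
    if steps is None:
        steps, counter = {}, [0]
    kind = activity[0]
    if kind == "do":                      # leaf activity becomes one step
        name = f"step{counter[0]}"
        counter[0] += 1
        steps[name] = {"action": activity[1], "next": None}
        return name, name, steps
    # "seq": flatten the children, then chain them with explicit links
    first = last = None
    for child in activity[1]:
        c_first, c_last, _ = flatten(child, steps, counter)
        if first is None:
            first = c_first
        if last is not None:
            steps[last]["next"] = c_first
        last = c_last
    return first, last, steps

process = ("seq", [("do", "receive"),
                   ("seq", [("do", "transform"), ("do", "invoke")]),
                   ("do", "reply")])
start, end, steps = flatten(process)
# steps now chains receive -> transform -> invoke -> reply at one level
```

Loops and branches would add steps whose `next` is chosen at runtime, which is exactly where the flat form pays off: a back-arrow is just a link to an earlier step name, not a structural contortion.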


LogiXML Adds Geographic Information System Integration To Logi 8
Staff, LogiXML Announcement

LogiXML continues to add value to Logi 8, the company's new interactive, Web-based business intelligence (BI) platform, by introducing Geographic Information System (GIS) data and technology through a partnership agreement with ESRI, the world leader in GIS. Logi 8 is a pure Web-based, unified and XML-based solution that offers robust BI functionality, including managed and ad hoc reporting, OLAP analysis and BI data services, accessible by technical and non-technical users in organizations of all sizes. The GIS Mapping features of Logi 8 support integration of GIS data with organizational data to present compelling geographic representations that let customers visualize the spatial component of their business data. This integration provides ways to project and promote understanding and decision making around key trends in terms of their geographic impact. For example, this approach helps to answer questions like, "Where are the closest business locations to my current location?" You can then further drill down or drill through to other related reports to answer questions such as, "What is the revenue for a particular location?" The initial service will offer a connection to ESRI's Arc Web Service that allows Logi 8 users to integrate GIS data into reports through a subscription to the ESRI service. Additional service options from ESRI and links to other GIS sources will be added to the Logi 8 GIS Mapping as customer demand for this service expands. Logi Report, LogiXML's free reporting product, is now part of many applications in production at thousands of organizations around the world.
Used by thousands of small and medium-sized organizations worldwide, LogiXML products are built on standards-based technologies for easy integration, upgraded on an aggressive schedule to maintain technology leadership, and are cost-effectively priced to support implementation by Independent Software Vendors (ISVs), Value Added Resellers (VARs), consulting companies and user organizations.


60-Mile Signal in San Francisco
Eric Griffith, Wi-Fi Planet

Wi-Fi signals going long distances are nothing new—many contests have been held to see how far the signals can be extended, year after year. But this one isn't a contest: Intel has set up a Wi-Fi link in downtown San Francisco that it claims is capable of reaching 60 miles (100 kilometers). Intel has also developed a "steerable antenna" (with some tech developed at the State University in Russia; it and U of C both have Intel facilities) that can steer a Wi-Fi signal around obstacles like buildings and trees. These are directed signals, not omni-directional ones. The steerable antenna on a tower would, in theory, be immune to being knocked out of alignment. Besides the fact that this first attempt at a steerable antenna was made of more wood and wires than the fake panels behind the Millennium Falcon's cockpit, Intel is reserving the technology for emerging markets. Intel has plans to serve such areas with its Classmate PC—a $300 laptop program with the same goal as the One Laptop Per Child program—to get kids computing, even those in remote villages with no wired infrastructure, let alone wireless. Eventually, a Wi-Fi signal could be bounced to a village and the smart antennas could steer the signal to villagers. The theory is that towers with Wi-Fi antennas might cost significantly less than doing the same thing with WiMax or other existing long-distance wireless technology. It also avoids the need for licensing spectrum, since Wi-Fi runs on globally unlicensed radio frequencies.


Copyright: Fair Use Is Your Friend
David DeJean, InformationWeek

Nine out of 10 people would probably tell you copyright is all about big companies maximizing their revenue from the content they own at the expense of the consumer. The 10th person would tell you copyright is a cornerstone of our American way of life, but he'd turn out to be a lawyer for the RIAA, the Recording Industry Association of America. In fact, copyright is as much about your right to make fair use of copyrighted content as it is about the "intellectual property" of corporations. For 11 minutes of quiet, reassuring good sense on the subject I recommend a podcast interview with Anthony Falzone, executive director of the Fair Use Project at Stanford University. Falzone offers a definition of fair use and provides some examples from the latest fair-use case law. He outlines four factors that help determine whether a use of copyrighted material is fair, and emphasizes the transformative nature of the use, which was a concept that was new to me. What he says is that if your use of a copyrighted work doesn't serve a substantially different purpose than its original use, then you're probably violating its copyright: if you intend to criticize something on a TV news program, for example, but merely rerun the entire program and add a comment at the end, you haven't transformed it sufficiently to defend against copyright violation. [Rodney Green, VP of Corporate Development at IMN, and Anthony Falzone, Executive Director of the Fair Use Project at Stanford University, examine fair use in copyright law, especially as it pertains to user-generated content available on the Internet. Tony first defines what is meant by fair use and provides some classic examples. They then turn the discussion to the four factors that tend to guide fair use analysis, including the nature of the use you make, the nature of the work being borrowed, how much of a work you can use, and the effect of your new work on the market for the original material.
They close by exploring special considerations that one needs to be aware of when using copyrighted material for commercial purposes.]

See also: Creative Commons Project


Sponsors

XML Daily Newslink and Cover Pages are sponsored by:

BEA Systems, Inc. http://www.bea.com
IBM Corporation http://www.ibm.com
Primeton http://www.primeton.com
SAP AG http://www.sap.com
Sun Microsystems, Inc. http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2007-03-30.html
Robin Cover, Editor: robin@oasis-open.org