The OASIS Cover Pages: The Online Resource for Markup Language Technologies

Last modified: January 02, 2009
XML Daily Newslink. Friday, 02 January 2009

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation

Five New Working Drafts for Rule Interchange Format Specification
Harold Boley, Gary Hallmark (et al., eds), W3C Technical Reports

W3C has announced the publication of five new Working Drafts by the Rule Interchange Format (RIF) Working Group, developed as part of the W3C Semantic Web Activity. Comments are invited through January 23, 2009. W3C's Rule Interchange Format (RIF) Working Group was chartered to produce a core rule language plus extensions which together allow rules to be translated between rule languages and thus transferred between rule systems. The Working Group endeavors to balance the needs of a diverse community, including Business Rules and Semantic Web users, by specifying extensions for which it can articulate a consensus design and which are sufficiently motivated by use cases. Since the 2008-07 Last Call Working Draft of RIF Basic Logic Dialect (BLD), the Working Group has been developing other key dialects, components, and test cases. The new publications include: (1) "RIF Use Cases and Requirements", with minor changes, specifies use cases and requirements for the W3C Rule Interchange Format, a family of rule interchange dialects that allows rules to be translated between rule languages and thus transferred between rule systems. The purpose of this RIF-UCR document is to provide a reference to the design of RIF and a guide for users and implementers to the current technical specifications of RIF dialects. RIF-UCR also delivers a structured context for formulating future technical specifications of further RIF dialects. Each dialect targets a cluster of similar rule languages and enables platform-independent interoperation between them. (2) "RIF Core" defines a common subset of RIF-BLD and RIF-PRD based on RIF-DTB 1.0. The RIF-Core presentation syntax and semantics are specified as restrictions on RIF-BLD. The XML serialization syntax of RIF-Core is specified via a mapping from the presentation syntax, and a normative XML schema is also provided.
(3) "RIF Datatypes and Built-Ins 1.0", with various improvements, specifies a list of primitive datatypes, built-in functions, and built-in predicates expected to be supported by RIF dialects such as the RIF Basic Logic Dialect. Each dialect supporting a superset or subset of the primitive datatypes, built-in functions, and built-in predicates defined here shall specify these additions or restrictions. Some of the datatypes are adopted from XML Schema datatypes; a large part of the definitions of the listed functions and operators are adopted from "XQuery 1.0 and XPath 2.0 Functions and Operators". (4) "RIF Production Rule Dialect (PRD)" now supplies complete operational semantics. Production rules are rule statements defined in terms of both individual facts or objects and groups of facts or classes of objects. They have an if part, or condition, and a then part, or action. The condition is like the condition part of logic rules (as covered by the basic logic dialect of the W3C rule interchange format, RIF-BLD). The then part contains actions, which differs from the conclusion part of logic rules, which contains only a logical statement. Actions can add, delete, or modify facts in the knowledge base, and can have other side effects. (5) "RIF Test Cases" is an early-stage test suite describing the test cases developed by the Rule Interchange Format (RIF) Working Group, intended to aid in the conformance evaluation of RIF implementations and thus promote interoperability. The test cases can help identify problems both with the software developed to implement RIF specifications and with the specifications themselves.
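The condition/action split that distinguishes production rules from logic rules can be illustrated with a minimal forward-chaining sketch. This is a hypothetical illustration of the general technique, not RIF-PRD syntax or semantics: each rule pairs a condition over a fact base with an action that may add, delete, or modify facts, and rules fire until no rule changes anything.

```python
# A minimal forward-chaining production-rule sketch (illustrative only, not
# RIF-PRD): each rule is a (condition, action) pair over a set of facts.

def run_production_rules(facts, rules, max_cycles=100):
    """Fire rules repeatedly until no rule changes the fact base (a fixpoint)."""
    facts = set(facts)
    for _ in range(max_cycles):
        changed = False
        for condition, action in rules:
            if condition(facts):
                before = frozenset(facts)
                action(facts)          # actions may add, delete, or modify facts
                if frozenset(facts) != before:
                    changed = True
        if not changed:
            break
    return facts

# Hypothetical example: a "gold customer" rule whose conclusion triggers a
# second, discount-granting rule.
rules = [
    (lambda f: ("customer", "alice") in f and ("spent_over", "alice", 1000) in f,
     lambda f: f.add(("gold", "alice"))),
    (lambda f: ("gold", "alice") in f,
     lambda f: f.add(("discount", "alice", 10))),
]
result = run_production_rules({("customer", "alice"), ("spent_over", "alice", 1000)}, rules)
```

The second rule fires only because the first rule's action changed the knowledge base, which is exactly the operational (rather than purely logical) behavior the PRD draft formalizes.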

See also: the W3C news item

StratML Digests Obama Administration's Agenda
Joab Jackson, Government Computer News

The working group for the Strategy Markup Language (StratML) has rendered the Obama-Biden agenda for the upcoming administration into an Extensible Markup Language-based format, one that highlights the strategic goals for the next administration. The Strategy Markup Language (StratML) Part 1 [StratML Core, ANSI/AIIM 21-200Y] is produced through AIIM by members of the U.S. Federal CIO Council XML Working Group and XML Community of Practice (xmlCoP). The StratML standard "defines an XML vocabulary and schema for the core elements of strategic plans. It formalizes practice that is commonly accepted but often implemented inconsistently. StratML will facilitate the sharing, referencing, indexing, discovery, linking, reuse, and analyses of the elements of strategic plans, including goal and objective statements as well as the names and descriptions of stakeholder groups and any other content commonly included in strategic plans. It should enable the concept of "strategic alignment" to be realized in literal linkages among goal and objective statements and all other records created by organizations in the routine course of their business processes. StratML will facilitate the discovery of potential performance partners who share common goals and objectives and/or either produce inputs needed or require outputs produced by the organization compiling the strategic plan, and facilitate stakeholder feedback on strategic goals and objectives." The plan for the Obama-Biden agenda was also submitted to a searchable repository of other StratML plans, which was developed as a prototype by XML server vendor MarkLogic. Although it was not specifically written for government use, StratML could potentially be useful for agencies. The 1993 Government Performance and Results Act (GPRA) mandates that agencies develop long-term strategic plans and annual performance reports. StratML will offer a uniform format for presenting these plans and, eventually, reporting on their success.
StratML uniformity could benefit agencies and other interested parties. For example, the GPRA requires agencies to identify their stakeholders when assembling their strategic plans. To do this, agencies often hold meetings, or solicit feedback in other ways, with varying degrees of success. Codifying the Obama/Biden agenda has also helped the StratML development team further define what still needs to be done to StratML. While the AIIM working group is finalizing the StratML core, future work will be done to incorporate terms for describing performance plans and reports, and to allow users to place goals and objectives in multiple categories of taxonomies.
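To make the idea of an XML vocabulary for strategic plans concrete, here is a small sketch built with Python's standard library. The element names (StrategicPlan, Goal, Objective, Stakeholder) are illustrative approximations of the concepts described above; the normative element set and structure are defined by the ANSI/AIIM StratML Core schema, not by this sketch.

```python
# Illustrative only: a StratML-style plan assembled with the standard library.
# Element names approximate the concepts in the article, not the real schema.
import xml.etree.ElementTree as ET

plan = ET.Element("StrategicPlan")
ET.SubElement(plan, "Name").text = "Example Agency Plan"

goal = ET.SubElement(plan, "Goal")
ET.SubElement(goal, "Name").text = "Improve service delivery"

objective = ET.SubElement(goal, "Objective")
ET.SubElement(objective, "Description").text = "Cut average response time in half"

stakeholder = ET.SubElement(plan, "Stakeholder")
ET.SubElement(stakeholder, "Name").text = "Citizens"

xml_text = ET.tostring(plan, encoding="unicode")
```

Once every agency's plan shares one structure like this, the indexing, discovery, and cross-plan linking the standard promises become simple queries over a common vocabulary.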

See also: the StratML web site

WS-SX, WS-TX, and WS-RX Specifications Submitted for OASIS Standard Ballot
Staff, OASIS Announcement

OASIS announced that nine specifications from the Web Services Secure Exchange (WS-SX) TC, Web Services Transaction (WS-TX) TC, and Web Services Reliable Exchange (WS-RX) TC have been submitted for ballot as OASIS Standards beginning 2009-01-16. Specifications from the WS-SX TC include: WS-SecurityPolicy 1.3, WS-SecureConversation 1.4, and WS-Trust 1.4. The OASIS WS-SX TC was chartered to define extensions to OASIS Web Services Security to enable trusted SOAP message exchanges involving multiple message exchanges and to define security policies that govern the formats and tokens of such messages. Specifications from the WS-TX TC include: Web Services Business Activity (WS-BusinessActivity) Version 1.2, Web Services Atomic Transaction (WS-AtomicTransaction) Version 1.2, and Web Services Coordination (WS-Coordination) Version 1.2. The OASIS WS-TX TC was chartered to define a set of protocols to coordinate the outcomes of distributed application actions, specifying an extensible framework for developing coordination protocols through continued refinement of the Web Services Coordination (WS-Coordination v 1.0) specification submitted to the TC; in addition, the TC was tasked to continue refinement of protocols for two coordination types that use the WS-Coordination framework: atomic transaction (AT) and business activity (BA), based on the Web Services Atomic Transaction (WS-AtomicTransaction v 1.0) and Web Services Business Activity (WS-BusinessActivity v 1.0). Specifications from the WS-RX TC include: Web Services Make Connection (WS-MakeConnection) Version 1.1, Web Services Reliable Messaging Policy Assertion (WS-RM Policy) Version 1.2, and Web Services Reliable Messaging (WS-ReliableMessaging) Version 1.2. 
The OASIS WS-RX TC was chartered to define a protocol for reliable message exchanges between two Web services, through continued development of the Web Services Reliable Messaging specification submitted to the TC, and to define a mechanism by which Web services express support for reliable messaging as well as other related useful parameters; this mechanism would be based upon the Web Services Reliable Messaging Policy Assertion ("WS-RM Policy") specification.
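The core idea behind reliable messaging can be sketched independently of the WS-ReliableMessaging wire format. The following is a concept illustration only (none of these class or method names come from the specification): the sender numbers messages within a sequence and retransmits anything the receiver has not acknowledged, yielding at-least-once delivery even when individual transmissions are lost.

```python
# Concept sketch of reliable messaging (not the WS-ReliableMessaging protocol):
# sequence-numbered messages, acknowledgements, and retransmission of unacked
# messages until the whole sequence is delivered.

class Receiver:
    def __init__(self):
        self.delivered = {}            # sequence number -> message

    def receive(self, seq, msg, drop=False):
        if drop:                       # simulate a message lost in transit
            return None                # no acknowledgement reaches the sender
        self.delivered[seq] = msg      # duplicates simply overwrite (idempotent)
        return seq                     # acknowledgement

def send_reliably(messages, receiver, lost_first_try):
    """Retransmit until every sequence number is acknowledged."""
    unacked = dict(enumerate(messages, start=1))
    attempt = 0
    while unacked:
        attempt += 1
        for seq in sorted(unacked):    # sorted() copies, so deletion is safe
            ack = receiver.receive(seq, unacked[seq],
                                   drop=(attempt == 1 and seq in lost_first_try))
            if ack is not None:
                del unacked[seq]
    return attempt

r = Receiver()
attempts = send_reliably(["a", "b", "c"], r, lost_first_try={2})
```

The policy-assertion half of the WS-RX work addresses the complementary question: how a service advertises, in machine-readable form, that it supports this kind of exchange.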

JavaScript and jQuery: Web Apps as Highly Interactive as Desktop Apps
Riccardo Govoni,

Modern browsers have greatly improved their performance, development tools, and compatibility. Even though most web sites still adhere to the page paradigm, rendering their content as it would appear in a newspaper or book, browsers can support highly interactive applications that rival traditional desktop apps. Historically, developers who wanted to support complex or highly visual application interactions had to choose between desktop applications that were local to the user's PC and ones built on top of browser plugins like Flash. Today, JavaScript has advanced enough to support this class of applications as well. Powerful JavaScript libraries provide abstractions from browser incompatibilities and low-level implementation details. They make it simple to design interfaces that behave in ways familiar to the user (for example, drag-and-drop, selection and repositioning of visual items, panning, zooming, mouse gestures, and so on). This article demonstrates how to leverage this power using JavaScript and jQuery to deliver a new class of web applications. jQuery is the popular JavaScript library that abstracts away browser incompatibilities and offers a programmer-friendly interface based on CSS selectors and method chaining. Included with the article are an example that mimics a full-featured desktop application and the full source code... To use jQuery in your web applications, you need only include one JavaScript file in your HTML code. You can then manipulate the Document Object Model (DOM), use CSS selectors to identify parts of the page that you want to affect, and apply mutator methods that modify all the matching elements... jQuery offers many more methods to transform the DOM and bind callbacks to all possible browser events, and is highly extensible.
For the purposes of this article, the example uses two additions to the bare library: jQuery UI, a set of specialized functions to deal with visual interactions, and the mousewheel plugin, which allows an app to react to scrolling events generated by the mouse wheel.... [The example shows] how to transform a static web page into an appealing interactive experience. We leverage JavaScript and jQuery DOM manipulation functions to mimic complex interactions that are traditionally reserved for heavier client applications. Now these interactions can live in the lightweight environment of a browser window as well. Many other features are left to explore, such as mouse gestures and keyboard events binding. A natural next step would be to evolve the demo interface into a complete desktop and layout manager backed by a model that supports real-time filtering, layout, data mutations, and interfacing with external data providers. For this purpose, I started an open-source project, Rhizosphere, for creating the ultimate JavaScript interactive experience and innovative data visualizations...
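The "CSS selectors plus method chaining" interface described above is an instance of the fluent-interface pattern: a query object holds the matched elements, and every mutator returns that object so calls compose. A language-neutral toy sketch (illustrative pseudo-DOM, not jQuery's API):

```python
# Toy stand-in for jQuery-style selection and chaining (not jQuery itself).
# Elements are plain dicts; a Query wraps the matched set, and each mutator
# returns self, mirroring $("div.note").addClass("note").css("color", "red").

class Query:
    def __init__(self, elements):
        self.elements = elements       # the "matched set"

    def add_class(self, name):
        for el in self.elements:
            el.setdefault("classes", []).append(name)
        return self                    # returning self is what enables chaining

    def css(self, prop, value):
        for el in self.elements:
            el.setdefault("style", {})[prop] = value
        return self

dom = [{"tag": "div"}, {"tag": "div"}]
Query(dom).add_class("note").css("color", "red")
```

Because every mutator applies to all matched elements and returns the set, a single chained expression replaces the explicit loops and null checks of raw DOM code.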

Standards and Opportunities: When Smart Buildings Meet the Smart Grid
Toby Considine,

In December 2008, the group working to advance the OpenADR specification to a U.S. national, and perhaps international, standard began to hold discussions in an open forum at OASIS. OpenADR (Automated Demand Response) is a specification developed in California for that state's regulated electricity providers. Demand-Response (DR) refers to live negotiations between the grid and its end nodes (buildings) to reduce demand before a shortfall causes problems. DR is a very important first step on the road to transacted energy, and solves some big problems in the short term... The interface between the grid and the buildings should not concern itself with the underlying technology and control protocols. It should not be based upon BACnet, or OPC, or LON, or any number of other low-level control system protocols. The interface must be one that enables business decisions. Control systems should offer up service interfaces for choreographed response. Whatever offers and counter-offers DR requires, whether the amount of load to shed, the maximum load to use, or the time to respond, must be in the interface, but no deep process. The smart grid to building/industry/home interface is about how the Service Oriented Building can respond to the Service Oriented Grid. Just as in other services, the underlying processes should be hidden.
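The kind of business-level interface described above can be sketched as follows. This is a hypothetical illustration of the offer/counter-offer idea, with invented names; it is not OpenADR, and real DR signals carry far more detail. Each building exposes only a business decision (how much load it can shed, how fast), never its BACnet/OPC/LON internals, and the grid accepts offers until the shortfall is covered.

```python
# Hypothetical sketch of a business-level demand-response negotiation
# (not OpenADR): buildings offer load shed; the grid accepts offers,
# preferring the fastest responders, until the shortfall is covered.

class Building:
    def __init__(self, name, sheddable_kw, response_minutes):
        self.name = name
        self.sheddable_kw = sheddable_kw
        self.response_minutes = response_minutes

    def offer_shed(self, requested_kw):
        # The building answers with a business decision; how it sheds the
        # load (HVAC setback, lighting, etc.) stays hidden behind the service.
        return min(self.sheddable_kw, requested_kw)

def negotiate_shed(buildings, shortfall_kw):
    accepted = {}
    remaining = shortfall_kw
    for b in sorted(buildings, key=lambda b: b.response_minutes):
        if remaining <= 0:
            break
        shed = b.offer_shed(remaining)
        if shed > 0:
            accepted[b.name] = shed
            remaining -= shed
    return accepted, max(remaining, 0)

buildings = [Building("plant", 300, 15), Building("office", 120, 5)]
accepted, unmet = negotiate_shed(buildings, 350)
```

The point of the sketch is the boundary: everything the grid sees is expressed in kilowatts and minutes, which is exactly the "service interface for choreographed response" the article calls for.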

See also: the OASIS discussion list

DeepEarth: A Mapping Control Using Silverlight
Abel Avram, InfoQueue

DeepEarth is a mapping control combining Microsoft's Virtual Earth with Silverlight 2.0. The open source project was released on CodePlex by its creators, a team of .NET enthusiasts. "DeepEarth is a mapping control powered by the combination of Microsoft's Silverlight 2.0 platform and the DeepZoom (MultiScaleImage) control. At its core, it builds on these innovative technologies to provide an architecture for bringing layers for services, data providers, and your own custom mapping elements together into an impressive user experience. Also featured are in-depth examples of how you can leverage Virtual Earth Web Services to take advantage of advanced GIS service functionality... DeepEarth Version 1.0 represents a stable mapping platform ready for your project. It was conceived in a series of blog posts that, through the collaborative efforts of readers, exposed the then-undocumented feature of Silverlight 2 allowing a custom tile source to be configured for the MultiScaleImage (DeepZoom) control. Half a dozen developers from different sides of the globe now contribute to the project... DeepEarth provides imagery as tile layers, with a robust Virtual Earth implementation supporting the official token-based tile access and web services. Other features include: a fully implemented map control with a property and event model; a fully templated set of map navigation controls; layers for the inclusion of Points, LineStrings, and Polygons (OGC); a conversion library between geographic and screen coordinate systems; geocoding (finding an address); reverse geocoding (getting an address from a point on the map); routing (directions); marquee zoom selection (by default Ctrl-click and drag, or toggled from the menu); and map rotation..." DeepEarth has a lot of catching up to do with Google Earth, which offers 3D views of certain areas, allowing the user to fly around buildings, along with many other features, such as lights and shadows simulating the real view under the sun.
DeepEarth is available for use under the OSI-approved Microsoft Public License (Ms-PL).
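The "geography to screen coordinate" conversion mentioned in the feature list rests on standard Web Mercator tile math, the same scheme used by most tiled map services. A minimal sketch of the widely used slippy-map formula (this is the generic formula, not DeepEarth's own code):

```python
# Standard Web Mercator tile math: which (x, y) tile contains a given
# longitude/latitude at a given zoom level, where a zoom level z splits
# the world into 2^z x 2^z tiles.
import math

def lonlat_to_tile(lon_deg, lat_deg, zoom):
    """Return the (x, y) indices of the tile containing a lon/lat point."""
    n = 2 ** zoom                                    # tiles per axis
    x = int((lon_deg + 180.0) / 360.0 * n)           # linear in longitude
    lat_rad = math.radians(lat_deg)
    # Mercator projection: latitude is stretched via asinh(tan(lat)).
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Greenwich (lon 0, lat 51.48) at zoom 4 falls in tile (8, 5).
```

A control such as MultiScaleImage then only needs this mapping, plus the inverse for hit-testing, to know which tiles to request as the user pans and zooms.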

See also: the DeepEarth development web site

Workflow Orchestration Using Spring AOP and AspectJ
Oleg Zhurakousky, InfoQueue

Scenario: You need to implement a flow-like process, preferably embedded, and you want it to be configurable, extensible, and easy to manage and maintain. Do you need a full-scale BPM engine, which comes with its own load of abstractions that might seem heavy for the simple flow orchestration you are looking for, or are there lightweight alternatives you can use without committing to one? This article demonstrates how to build and orchestrate a highly configurable and extensible yet lightweight embedded process flow using Aspect Oriented Programming (AOP) techniques. The current examples are based on Spring AOP and AspectJ; however, other AOP techniques could be used to accomplish the same results. The problem centers on the process itself: what is a process? A process is a collection of coordinated activities that lead to accomplishing a set goal. An activity is a unit of instruction execution, and is the building block of a process. Each activity operates on a piece of shared data (context) to fulfill part of the overall goal of the process. Parts of the process goal that have been fulfilled signify accomplished facts, which are used to coordinate execution of the remaining activities. This essentially redefines the process as nothing more than a pattern of rules operating on the set of facts to coordinate execution of the activities which define the process. In order for the process to coordinate execution of the activities, it must be aware of the following attributes: (1) Activities: the activities defining this process; (2) Shared data/context: defines the mechanism for sharing data and facts accomplished by the activities; (3) Transition rule: defines which activity comes next after the end of the previous activity, based on the registered facts; (4) Execution decision: defines the mechanism to enforce the transition rule; (5) Initial data/context (optional): the initial state of the shared data to be operated on by this process...
The approach here demonstrates how to use two layers of AOP to assemble, orchestrate, and control the process flow... The approach is lightweight and embedded. It uses existing Spring infrastructure and is built on the premise that a process is a collection of orchestrated activities. Each activity is a POJO and is completely unaware of any infrastructure/controller components that manage it. This presents several benefits. Aside from the typical architectural benefits of loose coupling, and with the ever-growing popularity and adoption of technologies such as OSGi, keeping activities and activity-invocation control separate opens the door for an activity to be implemented as an OSGi service, allowing each activity to become an independently managed unit (deployed, updated, un-deployed, etc.). Testing is another benefit... Separating control logic (intercepting filters) from business logic (POJO activities) also allows you to plug in a more sophisticated rules façade to process fact rules, and it simplifies testing, since exercising the transition-control logic does not affect the business logic implemented by the underlying activity.
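The model of the five attributes above (activities, shared context, facts, transition rule, execution decision) can be sketched compactly. The article's own implementation uses Spring AOP and AspectJ in Java; the following is a language-neutral Python sketch with invented activity names, in which a plain driver loop stands in for the AOP interception layer. The activities are plain functions that know nothing about the controller, exactly as the article's POJO activities know nothing about their aspects.

```python
# Sketch of the article's process model (Spring AOP/AspectJ in the original;
# a plain driver loop stands in for the interception layer here). Activities
# are plain functions; a transition rule picks the next one from the facts.

def validate(ctx):
    ctx["facts"].add("validated")           # record an accomplished fact

def enrich(ctx):
    ctx["order"]["priority"] = "high"       # activities mutate shared data
    ctx["facts"].add("enriched")

def ship(ctx):
    ctx["facts"].add("shipped")

def transition_rule(facts):
    """Transition rule: decide the next activity from accomplished facts."""
    if "validated" not in facts:
        return validate
    if "enriched" not in facts:
        return enrich
    if "shipped" not in facts:
        return ship
    return None                             # process goal reached

def run_process(initial_context):
    """Execution decision: enforce the transition rule until completion."""
    ctx = {"facts": set(), **initial_context}
    while (activity := transition_rule(ctx["facts"])) is not None:
        activity(ctx)                       # activity works on shared context
    return ctx

ctx = run_process({"order": {"id": 7}})
```

Because the driver consults only the fact set, reordering or inserting activities means changing the transition rule, not the activities themselves, which is the loose coupling the article is after.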

Wide-Area Networks: A Floating Backbone for Internet over the Ocean
Michael C. Jaeger and Gero Muehl, IEEE Distributed Systems Online

In remote areas of the world, data transmission and access to the Internet often require expensive satellite connections or low-bandwidth radio links. Access at sea adds the difficulties of a drifting host platform and significant cost, making high-bandwidth Internet connections unfeasible. For example, equipping an aircraft with Boeing's Connexion service has cost about US$500,000, and equipping a ship costs $60,000. Additionally, such services have monthly fees: more than $1,000 per month per ship using Connexion. Sufficient customer demand can help distribute costs, such as on a long-haul airplane, which usually carries more than 500 passengers over 24 hours. However, remote locations, cargo or small ships, and so on don't have the means to distribute high base costs over many clients. Remote areas need a low-cost, non-satellite-based solution that provides reliable high-bandwidth Internet coverage. Our system offers such a solution, in which a mesh of individual sensor nodes over the ocean, essentially a wide-area sensor network, forms an Internet backbone... Apart from classic fields in computer science, such as network routing, creating the backbone embraces emerging fields such as autonomic computing and sensor networks, as well as transmission protocol engineering and aspects of oceanography. Engineering and deploying such a system doesn't necessarily require a large undertaking. Rather, we anticipate that the scientific community will establish a common standard for the proposed mesh that allows independent design and development of various node types for the different deployment options. Some standards and protocols already exist. However, when it comes to negotiation and routing, some areas require proposals, evaluation, and testing before being standardized.
After conducting a conceptual analysis of our proposed solution and setting the theoretical foundations, researchers must evaluate the system within a realistic testbed under real-world conditions at a low-budget scale. A preliminary version of the backbone could be installed in a contained area such as the Baltic Sea.
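Routing in a mesh of drifting nodes is one of the areas the authors flag as needing evaluation before standardization. One candidate family of techniques, offered here purely as an illustration and not as the authors' proposal, is greedy geographic forwarding: each node hands the packet to the neighbor currently closest to the destination, re-evaluated hop by hop as nodes drift.

```python
# Illustrative greedy geographic forwarding over a small floating mesh
# (not the authors' protocol). Each node forwards to the neighbor nearest
# the destination; if no neighbor is closer, the packet is stuck in a
# "void" and a recovery strategy would be needed.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_route(positions, links, src, dst):
    """Return the hop-by-hop path from src to dst, or None if routing fails."""
    path = [src]
    while path[-1] != dst:
        here = path[-1]
        nxt = min(links[here], key=lambda n: dist(positions[n], positions[dst]))
        if dist(positions[nxt], positions[dst]) >= dist(positions[here], positions[dst]):
            return None                # local minimum: a void in the mesh
        path.append(nxt)
    return path

# Four buoys in a rough line; coordinates would shift as the nodes drift.
positions = {"A": (0, 0), "B": (1, 0.5), "C": (2, 0), "D": (3, 0.2)}
links = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
path = greedy_route(positions, links, "A", "D")
```

The appeal for a drifting mesh is that the scheme needs only current neighbor positions, not a global topology, though handling voids and churn is exactly the kind of open problem the article says still requires testing.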


XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation
Microsoft Corporation
Oracle Corporation
Sun Microsystems, Inc.


Hosted By
OASIS - Organization for the Advancement of Structured Information Standards

Sponsored By

IBM Corporation
ISIS Papyrus
Microsoft Corporation
Oracle Corporation

