This issue of XML Daily Newslink is sponsored by:
ISIS Papyrus http://www.isis-papyrus.com
- W3C Working Group Updates Multimodal Architecture and Interfaces
- Common Alerting Protocol (CAP) Based Data-Only Emergency Alerts Using the Session Initiation Protocol (SIP)
- Using the Analytic Hierarchy Process to Evaluate Cloud Applications
- Cloud Data Management Interface (CDMI) Media Types
- Apache Whirr Development Team Announces Whirr for Cloud Services
- OGC Calls for Participation in FAA SAA Information Dissemination Pilot
- Tackling Architectural Complexity with Modeling
- XForms for Libraries: An Introduction
W3C Working Group Updates Multimodal Architecture and Interfaces
Jim Barnett (ed), W3C Technical Report
Members of the W3C Multimodal Interaction Working Group have published a revised version of the Multimodal Architecture and Interfaces specification, updating the previous Working Draft of 2009-12-01. A diff-marked version of this document is available for comparison purposes. The principal normative changes from the previous draft are: the inclusion of state charts for modality components; the addition of a 'confidential' field to life-cycle events; the removal of the 'media' field from life-cycle events.
This document "describes a loosely coupled architecture for multimodal user interfaces, which allows for co-resident and distributed implementations, and focuses on the role of markup and scripting, and the use of well defined interfaces between its constituents. MMI provides for interoperability among modality-specific components from different vendors—for example, speech recognition from one vendor and handwriting recognition from another.
The MMI framework is motivated by several basic design goals: (1) Encapsulation: the architecture should make no assumptions about the internal implementation of components, which will be treated as black boxes; (2) Distribution: the architecture should support both distributed and co-hosted implementations; (3) Extensibility: the architecture should facilitate the integration of new modality components; for example, given an existing implementation with voice and graphics components, it should be possible to add a new component like a biometric security component without modifying the existing components; (4) Recursiveness: the architecture should allow for nesting, so that an instance of the framework consisting of several components can be packaged up to appear as a single component to a higher-level instance of the architecture; (5) Modularity: the architecture should provide for the separation of data, control, and presentation.
The Interaction Manager, which coordinates the different modalities, is the Controller in the MVC paradigm. The Interaction Manager (IM) is responsible for handling all events that the other Components generate. Normally there will be specific markup associated with the IM instructing it how to respond to events. This markup will thus contain a lot of the most basic interaction logic of an application. Existing languages such as SMIL, CCXML, SCXML, or ECMAScript can be used for IM markup as an alternative to defining special-purpose languages aimed specifically at multimodal applications. The IM fulfills multiple functions. For example, it is responsible for synchronization of data and focus, etc., across different Modality Components as well as the higher-level application flow that is independent of Modality Components. It also maintains the high-level application data model and may handle communication with external entities and back-end systems..."
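The IM's event-routing role described above can be sketched in a few lines. This is purely illustrative: the event-name vocabulary loosely follows MMI life-cycle events, but the handler registry and dispatch logic here are invented, not part of the specification.

```python
# Toy sketch of an MMI-style Interaction Manager routing life-cycle
# events to modality components. Event names echo the MMI life-cycle
# vocabulary; the registry/dispatch mechanics are invented.

class InteractionManager:
    def __init__(self):
        self.components = {}   # component id -> handler callable
        self.log = []          # record of (event name, source)

    def register(self, component_id, handler):
        self.components[component_id] = handler

    def dispatch(self, event):
        # Life-cycle events carry source, target, and context fields.
        self.log.append((event["name"], event["source"]))
        target = event.get("target")
        if target in self.components:
            return self.components[target](event)
        return None

im = InteractionManager()
im.register("voice", lambda ev: "voice handled " + ev["name"])
result = im.dispatch({"name": "StartRequest", "source": "im",
                      "target": "voice", "context": "ctx-1"})
```

In a real deployment the dispatch step would cross a transport boundary (the components may be distributed), which is exactly why the architecture treats each component as a black box behind life-cycle events.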
Common Alerting Protocol (CAP) Based Data-Only Emergency Alerts Using the Session Initiation Protocol (SIP)
Brian Rosen, Henning Schulzrinne, Hannes Tschofenig (eds), IETF Internet Draft
Members of the IETF Emergency Context Resolution with Internet Technologies (ECRIT) Working Group have published an initial public working draft for Common Alerting Protocol (CAP) Based Data-Only Emergency Alerts Using the Session Initiation Protocol (SIP). From the document Abstract: "The Common Alerting Protocol (CAP) is an XML document format for exchanging emergency alerts and public warnings. CAP is mainly used for conveying alerts and warnings between authorities and from authorities to citizens. This document describes how data-only emergency alerts can utilize the same CAP document format.
Data-only emergency alerts may be similar to regular emergency calls in the sense that they have the same emergency call routing and location requirements; they do not, however, lead to the establishment of a voice channel. There are, however, data-only emergency alerts that are targeted directly to a dedicated entity responsible for evaluating the alerts and for taking the necessary steps, including triggering an emergency call towards a Public Safety Answering Point (PSAP).
In the architectural model there are two envisioned usage modes: targeted and location-based emergency alert routing. The targeted mode has a deployment variant where a device is pre-configured to issue an alert to an aggregator that processes these messages and performs whatever steps are necessary to react appropriately to the alert. In many cases the device has the address of the aggregator pre-configured, and corresponding security mechanisms are in place to ensure that only alerts from authorized devices are processed... In another scenario the alert is routed using location information and the Service URN. In this case the device issuing the alert may not know the message recipient (in case the LoST resolution is done at an emergency services routing proxy rather than at the end host). In any case, a trust relationship between the alert-issuing device and the PSAP cannot be assumed, i.e., the PSAP is likely to receive alerts from entities it cannot authorize...
Several security considerations are recognized when using SIP to make data-only emergency alerts utilizing CAP. For example, an adversary could forge or alter a CAP document to report false emergency alarms. To avoid this kind of attack, the entities must ensure that proper mechanisms for protecting the CAP documents are employed, e.g., signing the CAP document itself; the CAP specification describes the signing of CAP documents. This does not protect against a legitimate sensor sending prank alerts after being compromised. Another threat is theft of a CAP document described in this document and its replay at a later time. Certain attributes make the CAP document unique for a specific sender and provide time restrictions; it is recommended to make use of SIP security mechanisms, such as SIP Identity, to tie the CAP message to the SIP message... When an entity receives a CAP message it has to determine whether the entity distributing the CAP messages is genuine, to avoid accepting messages that are injected by adversaries. For some types of data-only emergency calls the entity issuing the alert and the entity consuming the alert have a relationship with each other, and hence it is possible (using cryptographic authentication) to verify whether a message was indeed issued by an authorized entity. There are, however, other types of data-only emergency calls where there is no such relationship between the sender and the consumer. In that case incoming alerts need to be treated more carefully, as the possibilities to place prank calls are higher than with regular emergency calls, which at least set up an audio channel...."
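The replay-protection point above rests on a few CAP fields that together make a message unique and time-bounded. A minimal sketch, using Python's standard library to build a CAP 1.2 alert skeleton: the element names and namespace follow CAP 1.2, but the identifier, sender, and timestamp values are invented for illustration.

```python
# Build a minimal CAP 1.2 alert and extract the fields a receiver can
# use to detect replays (identifier + sender + sent). Element names
# follow CAP 1.2; all values below are invented examples.
import xml.etree.ElementTree as ET

CAP_NS = "urn:oasis:names:tc:emergency:cap:1.2"

def build_alert(identifier, sender, sent):
    alert = ET.Element("{%s}alert" % CAP_NS)
    for tag, text in (("identifier", identifier), ("sender", sender),
                      ("sent", sent), ("status", "Actual"),
                      ("msgType", "Alert"), ("scope", "Private")):
        el = ET.SubElement(alert, "{%s}%s" % (CAP_NS, tag))
        el.text = text
    return alert

def uniqueness_key(alert):
    # These three fields make the CAP message unique for a specific
    # sender and provide the time restriction discussed above.
    get = lambda t: alert.findtext("{%s}%s" % (CAP_NS, t))
    return (get("identifier"), get("sender"), get("sent"))

a = build_alert("alert-001", "sensor@example.com",
                "2010-10-01T12:00:00-00:00")
key = uniqueness_key(a)
```

A receiver keeping a cache of recently seen keys can reject duplicates; as the draft notes, binding the CAP body to the SIP message (e.g. via SIP Identity) is still needed to stop an adversary re-wrapping a stolen document.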
Using the Analytic Hierarchy Process to Evaluate Cloud Applications
Brijesh Deb, IBM developerWorks
In this article the author demonstrates a step-by-step application portfolio assessment approach to determining the suitability of your enterprise applications for the cloud, based on the Analytic Hierarchy Process (AHP). "Given the concerns and risk involved in cloud computing initiatives, each enterprise has to assess its application portfolio based on its business imperatives, technology strategy, and risk appetite before embarking on a flight into the clouds. For this assessment, which involves multiple competing criteria of varied nature, impact, and priority, the author demonstrates how a multi-dimensional statistical approach using the Analytic Hierarchy Process (AHP) can help decide which, if any, of your enterprise applications belong in the cloud.
Some of the questions businesses need to ask themselves before undertaking cloud initiatives are: (1) What factors should I consider for cloud enablement of my enterprise applications? How do I judge different competing priorities? (2) How do I identify the applications and services that are best suited for moving to a cloud environment based on business priority and technical fitment? (3) How do I prioritize enterprise applications and services for a 'phase-smart' cloud enablement? How can I avoid that 'gut feeling' and bring objectivity into the evaluation? (4) What are the different risks involved?
An approach based on the Analytic Hierarchy Process (AHP) can be used in a corresponding analysis. The techniques used in the AHP quantify relative priority for a given set of criteria on a ratio scale. AHP offers advantages over many other MCDA methods. It provides a comprehensive structure to combine both quantitative and qualitative criteria in the decision-making process. AHP also brings the ability to judge the consistency of the analysis process: this helps reduce anomalies and heighten objectivity.
There are several components, or steps, involved in using AHP to evaluate the suitability of an application for the cloud. These include defining criteria hierarchy, determining criteria priority, comparing your application against the criteria, and calculating overall AHP score..."
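The "determining criteria priority" step can be made concrete with a small example. The sketch below uses the common column-normalization approximation for deriving an AHP priority vector from a pairwise comparison matrix; the three criteria and the judgment values are invented for illustration.

```python
# AHP priority derivation via column normalization: normalize each
# column of the pairwise comparison matrix, then average each row.
# matrix[i][j] states how much more important criterion i is than j
# on the usual 1-9 ratio scale (and matrix[j][i] is its reciprocal).

def ahp_priorities(matrix):
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    normalized = [[matrix[i][j] / col_sums[j] for j in range(n)]
                  for i in range(n)]
    return [sum(normalized[i]) / n for i in range(n)]

# Invented judgments for three criteria, e.g. business criticality,
# data sensitivity, and scalability need:
judgments = [
    [1.0,   3.0,   5.0],
    [1/3.0, 1.0,   3.0],
    [1/5.0, 1/3.0, 1.0],
]
weights = ahp_priorities(judgments)   # sums to 1.0; first criterion dominates
```

Repeating this for the application-vs-criteria comparisons and combining the two levels of weights yields the overall AHP score mentioned above; a production analysis would also compute the consistency ratio to catch contradictory judgments.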
Cloud Data Management Interface (CDMI) Media Types
Krishna Sankar and Arnold Jones (eds), IETF Internet Draft
An initial level -00 IETF Internet Draft has been published for Cloud Data Management Interface (CDMI) Media Types. This document "describes several Internet media types defined for the Cloud Data Management Interface (CDMI) by the Storage Networking Industry Association (SNIA). CDMI is the functional interface that applications will use to create, retrieve, update and delete data elements from the cloud. As part of this interface the client will be able to discover the capabilities of the cloud storage offering and use this interface to manage containers and the data that is placed in them. In addition, metadata can be set on containers and their contained data elements through this interface.
A storage cloud is a storage service hosted either on-premise or off-premise, and accessed across a network. An important part of the cloud model, in general, is the concept of a pool of resources that is drawn from, on demand, in small increments (smaller than what one would typically purchase by buying equipment). By abstracting data storage behind a set of service interfaces and delivering it on demand, a wide range of actual offerings and implementations are possible. The only type of storage that is excluded from this definition is that which is delivered, not based on demand, but on fixed capacity increments.
The CDMI defines a set of functional interfaces (data paths) and management interfaces (control paths) to create, retrieve, update, and delete data elements from a storage cloud. Another important concept in this standard is that of metadata. When managing large amounts of data with differing requirements, metadata is a convenient mechanism to express those requirements in such a way that underlying data services can differentiate their treatment of the data to meet those requirements. CDMI also defines an extensible metadata system for storage clouds.
The access control of the CDMI end point URLs is beyond the scope of this specification; if required, applications should use appropriate URL authentication and authorization techniques. For fine-grained control of the CDMI objects, the CDMI specification defines Access Control Lists (ACLs) and Access Control Entries (ACEs)... The CDMI specification has a set of metadata fields to facilitate access and other audit markers. The CDMI metadata system is extensible, and implementations can add more metadata as required by the security posture of the domain..."
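The draft's registered media types are what a client puts on the wire to say which kind of CDMI resource it is reading or writing. A minimal sketch of composing such request headers follows; the five media type strings are the ones defined for CDMI, while the version header value is an assumption for illustration.

```python
# Map CDMI resource kinds to their registered media types and compose
# the headers a client would send on a CDMI read. The media type
# strings follow the CDMI draft; the version value is an assumption.

CDMI_MEDIA_TYPES = {
    "object":     "application/cdmi-object",
    "container":  "application/cdmi-container",
    "domain":     "application/cdmi-domain",
    "queue":      "application/cdmi-queue",
    "capability": "application/cdmi-capability",
}

def cdmi_headers(resource_kind, version="1.0"):
    media_type = CDMI_MEDIA_TYPES[resource_kind]
    return {
        "Accept": media_type,                       # expected response type
        "X-CDMI-Specification-Version": version,    # protocol version
    }

headers = cdmi_headers("container")
```

Discovering a cloud's capabilities, as the summary describes, would be the same pattern with the `capability` media type against the offering's capabilities URI.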
Apache Whirr Development Team Announces Whirr for Cloud Services
Tom White, Apache Announcement
Members of the Apache Whirr Development Team have announced the first release of Apache Whirr as a set of libraries for running cloud services such as Apache Hadoop, ZooKeeper, and Cassandra. Currently in the Apache Incubator, Whirr provides: "(1) A cloud-neutral way to run services—you don't have to worry about the idiosyncrasies of each provider; (2) A common service API—the details of provisioning are particular to the service; (3) Smart defaults for services—you can get a properly configured system running quickly, while still being able to override settings as needed..."
According to the project documentation: "The choice of cloud provider should be a configuration option in Whirr, rather than requiring the user to use a different API. This may be achieved by using cloud-neutral provisioning and storage libraries such as jclouds, libcloud, or fog. However, Whirr's API should be built on top of these libraries and should not expose them to the user directly. In some cases using cloud-specific APIs may be unavoidable (e.g. EBS storage), but the amount of cloud-specific code in Whirr should be kept to a minimum, and hopefully pushed to the cloud libraries themselves over time.
Whirr prefers minimal image management: Building and maintaining cloud images is a pain, especially across cloud providers where there is no standardization, so Whirr takes the approach of using commonly available base images and customizing them on or after boot. Tools like cloudlets look promising for maintaining custom images, so this is an avenue worth exploring in the future...
The Whirr API should not be bound to a particular version of the API of the service it is controlling. For example, you should be able to launch two different versions of Hadoop using the same Whirr API call, just by specifying a different configuration value. This is to avoid combinatorial explosion in version dependencies... Whirr doesn't mandate any particular solution. Runurl is a simple solution for running installation and configuration scripts, but some services may prefer to use Chef or Puppet. The details should be hidden from the Whirr user..."
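The "provider as a configuration option" principle quoted above is a common pattern, illustrated here as a toy sketch (Whirr itself is Java and layers over jclouds; the provider names and classes below are invented):

```python
# Toy illustration of provider-neutral provisioning: callers select a
# provider by configuration value, and provider-specific details stay
# behind a common interface. Names and classes here are invented.

class Ec2Provisioner:
    def launch(self, n):
        return ["ec2-node-%d" % i for i in range(n)]

class RackspaceProvisioner:
    def launch(self, n):
        return ["rs-node-%d" % i for i in range(n)]

PROVIDERS = {"ec2": Ec2Provisioner, "rackspace": RackspaceProvisioner}

def launch_cluster(config):
    # Swapping providers means changing one config value, not the API.
    provisioner = PROVIDERS[config["provider"]]()
    return provisioner.launch(config["nodes"])

nodes = launch_cluster({"provider": "ec2", "nodes": 2})
```

The same shape covers Whirr's service-version goal: the version, like the provider, becomes a configuration value rather than a compile-time binding.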
See also: the Whirr Design document
OGC Calls for Participation in FAA SAA Information Dissemination Pilot
Staff, Open Geospatial Consortium Announcement
"The Open Geospatial Consortium (OGC) has issued a Request for Quotations/Call for Participation (RFQ/CFP) to solicit proposals in response to requirements for the Special Activity Airspace (SAA) Dissemination Pilot, sponsored by the US Federal Aviation Administration (FAA). Responses are due by October 18, 2010. The FAA SAA Dissemination Pilot will extend the SAA SWIM Services to enable the dissemination of SAA information (including updates and schedule changes) to National Airspace System (NAS) stakeholders and other external users via services that implement OGC Web Services (OWS) standards.
In support of the Next Generation Air Transportation System (NextGen), the FAA SWIM program seeks to achieve systems interoperability and information management for diverse Air Traffic Management (ATM) systems using Service-Oriented Architectures (SOA). The FAA SWIM capabilities include supporting the exchange of SAA information between operational ATM systems. SAAs include Special Use Airspaces (SUA), regions of airspace designated for use by the military, ensuring that no other traffic uses that airspace during scheduled times... Navin Vembar, the AIM Modernization Segment 2 Program Manager at the FAA, notes that 'the FAA SAA Dissemination pilot is an opportunity to work closely with industry to demonstrate that OGC Web Services and international standards facilitate the FAA's efforts to communicate with our stakeholders in an automated manner'...
OGC pilots are part of OGC's Interoperability Program, a global, hands-on collaborative prototyping program designed to rapidly develop, test and deliver proven candidate specifications into OGC's Specification Program, where they are formalized for public release. OGC Interoperability Initiatives are designed to encourage rapid development, testing, validation and adoption of OGC standards..."
See also: the OGC Interoperability Initiatives
Tackling Architectural Complexity with Modeling
Kevin Montagne, ACM Queue
"The ever-increasing might of modern computers has made it possible to solve problems once thought too difficult to tackle. Far too often, however, the systems for these functionally complex problem spaces have overly complicated architectures. In this article I use the term architecture to refer to the overall macro design of a system rather than the details of how the individual parts are implemented. The system architecture is what is behind the scenes of usable functionality, including internal and external communication mechanisms, component boundaries and coupling, and how the system will make use of any underlying infrastructure (databases, networks, etc.).
It should be standard practice to research the architectural options for new systems—or when making substantial overhauls to existing ones. The experiments should be with lightweight models rather than a full system, but it is vital that these models accurately capture the evolving behavior of the system. Otherwise the value of the modeling process is diminished and may lead to erroneous conclusions. I typically start by trying to understand the functional problem space in an abstract fashion...
There are numerous technical modalities to consider when designing or evaluating architecture: performance, availability, scalability, security, testability, maintainability, ease of development, and operability. The priority ordering of these modalities may differ across systems, but each must be considered. How these modalities are addressed and their corresponding technical considerations may vary by system component. For example, with request/reply and streaming updates, latency is a critical performance factor, whereas throughput may be a better performance factor for flow-through message processing or bulk-request functionality.
Modeling is an iterative process. It should not be thought of as just some type of performance test. Here is a list of items that could be added to further the evaluation process. [Good design will]: (a) Use the model to evaluate various infrastructure choices; these could include messaging middleware, operating system and database-tuning parameters, network topology, and storage system options; (b) Use the model to create a performance profile for a set of hardware, and use that profile to extrapolate performance on other hardware platforms; any extrapolation will be more accurate if the model is profiled on more than one hardware platform; (c) Use the performance profiles to determine if multiple instances of the publisher (horizontal scaling) are likely to be required as the system grows..."
XForms for Libraries: An Introduction
Ethan Gruber, Chris Fitzpatrick, Bill Parod, Scott Prater; Code4Lib
"XForms applications can be used to create XML metadata that is well-formed and valid according to the schema, and then saved to (or loaded from) a datastore that communicates via REST or SOAP. XForms applications provide a powerful set of tools for data creation and manipulation, as demonstrated by some projects related to library workflows that are described in this paper.
XForms, like its close cousin, XSLT, inhabits the grey zone between being a programmable toolkit and a data serialization standard. Simple forms can be created fairly rapidly, though they will be more verbose than their HTML counterparts. However, the benefits of XForms do not really begin to manifest themselves until you begin to design forms with complex structures, dependencies, and runtime behaviors. If all you want to do is create a simple, static address form, you would be better off sticking with HTML. However, if you want to create a metadata editor that fully encapsulates and enforces the constraints of a mature and rich standard, such as MODS or EAD, the time spent mastering XForms will pay off in the long run.
Given the sustainability of the standard and the ability to create and edit complex XML models, XForms has great potential for the library community. Not only can input forms and XML serializations be created with XForms, but applications can be woven into digitization and publication workflows, controlled vocabularies can be managed, and web services commonly used in libraries can be easily hooked to the forms. With numerous institutions exploring the standard, XForms holds promise for becoming a mainstream form of application development over the next several years..."
See also: the W3C Forms Working Group
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: firstname.lastname@example.org
Newsletter unsubscribe: email@example.com
Newsletter help: firstname.lastname@example.org
Cover Pages: http://xml.coverpages.org/