The OASIS Cover Pages: The Online Resource for Markup Language Technologies

Last modified: August 09, 2010
XML Daily Newslink. Monday, 09 August 2010

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
ISIS Papyrus

OASIS SCA-Bindings Technical Committee Releases Specs for Public Review
Mike Edwards and Anish Karmarkar (eds), OASIS Public Review Drafts

Members of the OASIS Service Component Architecture / Bindings (SCA-Bindings) Technical Committee have approved two Committee Draft specifications for public review through October 05, 2010. This OASIS TC was chartered to standardize bindings for SCA services and references to various communication protocols, technologies, and frameworks. For each SCA binding technology under development, the TC is evolving the respective starting-point contribution documents to produce one or more specification documents, XML Schema definition documents, possible WSDL documents, and possible language-dependent artifacts as appropriate, from which compliant tools and runtimes for that SCA binding technology can be built.

TestCases for the SCA Web Service Binding Specification Version 1.1 (Committee Draft 01 / Public Review 01) defines "the TestCases for the SCA Web Service Binding specification. The TestCases represent a series of tests that an SCA runtime must pass in order to claim conformance to the requirements of the SCA Web Service Binding specification...

The SCA Bindings test cases follow a standard structure. They are divided into two main parts: (1) Test Client, which drives the test and checks that the results are as expected; (2) Test Application, which forms the bulk of the testcase and which consists of Composites, WSDL files, XSDs and code artifacts such as Java classes, organized into a series of SCA contributions. The basic idea is that the Test Application runs on the SCA runtime that is under test, while the Test Client runs as a standalone application, invoking the Test Application through one or more service interfaces...."
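The two-part structure can be sketched in outline. This is a purely illustrative sketch: the class and method names below are hypothetical, and a real Test Application is a set of SCA composites deployed on the runtime under test, invoked over a binding, not in-process Python.

```python
# Minimal sketch of the two-part SCA testcase structure: a Test Application
# exposing a service interface, and a Test Client that drives the test and
# checks results. All names here are hypothetical illustrations, not part
# of the SCA specifications.

class TestApplication:
    """Stands in for the composites and artifacts deployed on the runtime under test."""

    def echo_service(self, message: str) -> str:
        # A trivial "service interface" the client can invoke.
        return f"echo: {message}"


class TestClient:
    """Runs standalone, drives the test, and checks that results are as expected."""

    def __init__(self, app: TestApplication):
        self.app = app

    def run(self) -> bool:
        response = self.app.echo_service("hello")
        return response == "echo: hello"


result = TestClient(TestApplication()).run()
print(result)
```

In the real testcases the client-to-application call crosses the binding under test (e.g., a Web service), which is exactly what makes the invocation a conformance check.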

The Test Assertions for the SCA Web Service Binding Version 1.1 Specification (Committee Draft 01 / Public Review 01) defines "the Test Assertions for the SCA Web Service Binding specification. The Test Assertions represent the testable items relating to the normative statements made in the SCA Assembly specification. The Test Assertions provide a bridge between the normative statements in the specification and the conformance TestCases which are designed to check that an SCA runtime conforms to the requirements of the specification..."

See also: Test Assertions for SCA Web Service Bindings

Implementation Feedback Invited for W3C XMLHttpRequest Specification
Anne van Kesteren (ed), W3C Technical Report

The W3C Web Applications Working Group now invites implementation feedback on the Candidate Recommendation of the XMLHttpRequest specification. In connection with this CR, members of the WG have maintained a disposition-of-comments document for the most recent Last Call Working Draft. A list of changes is available via a Web view or through the CVS instance.

The XMLHttpRequest specification defines an API that provides scripted client functionality for transferring data between a client and a server.

Detail: "The XMLHttpRequest object implements an interface exposed by a scripting engine that allows scripts to perform HTTP client functionality, such as submitting form data or loading data from a server. It is the ECMAScript HTTP API. The name of the object is XMLHttpRequest for compatibility with the Web, though each component of this name is potentially misleading. First, the object supports any text-based format, including XML. Second, it can be used to make requests over both HTTP and HTTPS; some implementations support protocols in addition to HTTP and HTTPS, but that functionality is not covered by this specification. Finally, it supports 'requests' in a broad sense of the term as it pertains to HTTP; namely all activity involved with HTTP requests or responses for the defined HTTP methods.

In order to exit the Candidate Recommendation (CR) stage, the following criteria for the XMLHttpRequest specification must have been met: (1) There will be at least two interoperable implementations passing all test cases in the test suite for this specification. An implementation must be available (i.e., for download), shipping (i.e., not private), and not experimental (i.e., intended for a wide audience). The working group will decide when the test suite is of sufficient quality to test interoperability and will produce implementation reports, hosted together with the test suite. (2) A minimum of six months of the CR stage will have elapsed (i.e., not until after 3 February 2011); this is to ensure that enough time is given for any remaining major errors to be caught. The CR period will be extended if implementations are slow to appear. (3) Text, which can be in a separate document, exists that explains the security considerations for this specification. This may be done in a generic manner, as they are most likely applicable to various APIs. The working group will decide whether the text is of sufficient quality..."
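The data flow the specification describes, a script submitting data to a server and reading back a text response, can be illustrated outside the browser with Python's standard library. This is an analogue of the XHR round trip, not the XHR API itself:

```python
# Illustrative analogue of the XMLHttpRequest round trip, using only the
# Python standard library: a throwaway local HTTP server plays the web
# server, and urllib plays the in-browser XHR object. This is not the XHR
# API itself -- just the same client/server data flow.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class EchoHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        body = self.rfile.read(length)        # the "form data" sent by the client
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"got: " + body)     # any text-based format works

    def log_message(self, *args):             # keep output quiet
        pass


server = HTTPServer(("127.0.0.1", 0), EchoHandler)   # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/"
reply = urllib.request.urlopen(url, data=b"name=value").read()  # data= makes it a POST
server.shutdown()
print(reply.decode())
```

In a browser the same round trip would be driven by an XMLHttpRequest object's open() and send() calls, with the response read from the object once the request completes.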

See also: the W3C Web Applications Working Group

New IETF Internet Draft: Considerations for Captive Portals in HTTP
Mark Nottingham (ed), IETF Internet Draft

A new Informational IETF Internet Draft, Considerations for Captive Portals in HTTP, has been published by Mark Nottingham, Chair of the IETF Hypertext Transfer Protocol Bis (HTTPBIS) Working Group. "'Captive portals' are a commonly-deployed means of obtaining access credentials and/or payment for a network. This initial (level -00) Internet Draft discusses issues of their use for HTTP applications, and proposes one possible mitigation strategy."

From the document Introduction: "It has become common for networks to require authentication, payment and/or acceptance of terms of service before granting access. Typically, this occurs when accessing 'public' networks such as those in hotels, trains, conference centres and similar networks. While there are several potential means of providing credentials to a network, these are not yet universally supported, and in some instances the network administrator requires that information (e.g., terms of service, login information) be displayed to end users. In such cases, it has become widespread practice to use a 'captive portal' that diverts HTTP requests to the administrator's web page. Once the user has satisfied requirements (e.g., for payment, acceptance of terms), the diversion is ended and 'normal' access to the network is allowed.

Typically, this diversion is accomplished by one of several techniques: (1) IP interception: all requests on port 80 are intercepted and sent to the portal; (2) HTTP redirects: all requests on port 80 are intercepted and an HTTP redirect to the portal's URL is returned; (3) DNS interception: all DNS lookups return the portal's IP address. In each case, the intent is that users connecting to the network will open a Web browser and see the portal. This memo examines the HTTP-related issues that these techniques raise, and proposes a potential mitigation strategy...

Since clients cannot differentiate between a portal's response and that of the HTTP server that they intended to communicate with, a number of issues arise. It is sometimes believed that using HTTP redirection to direct traffic to the portal addresses these issues. However, since many clients automatically 'follow' redirects, this is not a good solution... The heart of the problem is that the client does not understand that a response from the portal does not represent the requested resource. As such, the response needs to indicate that it is non-authoritative. In HTTP, response status codes indicate the type of response, and therefore defining a new one is the most appropriate way to do this..."
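The mitigation sketched in the draft, a status code that marks a response as non-authoritative, would let clients classify responses explicitly. A minimal sketch follows; the status code number and the function name are illustrative assumptions, not anything taken from the draft:

```python
# Sketch of how a client might treat responses on a captive network if a
# dedicated "non-authoritative" status code existed, as the draft proposes.
# The code number 511 is an illustrative placeholder, and the function name
# is hypothetical -- neither is taken from the draft.

PORTAL_STATUS = 511  # placeholder for the proposed new status code


def classify_response(status, location=None):
    """Classify an HTTP response received on a possibly-captive network."""
    if status == PORTAL_STATUS:
        # An explicit signal: the response comes from the portal, not the
        # origin server, so the client should neither cache nor trust it.
        return "portal"
    if status in (301, 302, 303, 307) and location:
        # Today's common technique: a redirect to the portal. The client
        # cannot tell this apart from a legitimate redirect, which is
        # exactly the ambiguity the draft describes.
        return "ambiguous-redirect"
    return "authoritative"


print(classify_response(511))                                   # -> portal
print(classify_response(302, "http://portal.example/login"))    # -> ambiguous-redirect
print(classify_response(200))                                   # -> authoritative
```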

See also: the discussion thread

PMML: The Power of Predictive Analytics and Open Standards
Alex Guazzelli, IBM developerWorks

The Predictive Model Markup Language (PMML) is the de facto standard language used to represent predictive analytic models. It allows predictive solutions to be easily shared between PMML-compliant applications. With predictive analytics, for example, the petroleum and chemical industries build solutions to predict machinery breakdown and ensure safety. 'PMML is an XML-based markup language developed by the Data Mining Group (DMG) to provide a way for applications to define models related to predictive analytics and data mining and to share those models between PMML-compliant applications.'

PMML is supported by many of the top statistical tools. As a result, the process of putting a predictive analytics model to work is straightforward since you can build it in one tool and instantly deploy it in another. In a world in which sensors and data gathering are becoming more and more pervasive, predictive analytics and standards such as PMML make it possible for people to benefit from smart solutions that will truly revolutionize their lives.

PMML is the brainchild of the Data Mining Group, a vendor-led committee composed of commercial and open source analytic companies. As a consequence, most of the leading data mining tools today can export or import PMML. A mature standard which has evolved over the past 10 years, PMML can represent not only the statistical techniques used to learn patterns from data, such as artificial neural networks and decision trees, but also pre-processing of raw input data and post-processing of model output...
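As a rough illustration of the kind of XML involved, the fragment below declares two input fields in a PMML DataDictionary and reads them back. The element names follow the general DMG schema, but the fragment is a sketch: it has not been validated against the PMML schema, and real models carry far more detail (mining schemas, the model itself, transformations).

```python
# A minimal, illustrative PMML-style document parsed with the standard
# library. Element names (PMML, DataDictionary, DataField) follow the
# general DMG schema, but this fragment is an unvalidated sketch.
import xml.etree.ElementTree as ET

pmml_doc = """
<PMML version="4.0" xmlns="http://www.dmg.org/PMML-4_0">
  <DataDictionary numberOfFields="2">
    <DataField name="temperature" optype="continuous" dataType="double"/>
    <DataField name="pressure" optype="continuous" dataType="double"/>
  </DataDictionary>
</PMML>
"""

ns = {"pmml": "http://www.dmg.org/PMML-4_0"}
root = ET.fromstring(pmml_doc)
fields = [f.get("name") for f in root.findall(".//pmml:DataField", ns)]
print(fields)
```

Because the model is plain XML like this, any PMML-compliant consumer can load what another tool produced, which is the interoperability point the article makes.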

Open standards most definitely need to be part of the equation. To benefit fully from predictive solutions and data analysis, systems and applications need to be able to exchange information easily by following standards... The adoption of PMML by the major analytic vendors is a great example of companies embracing interoperability. IBM, SAS, MicroStrategy, Equifax, NASA, and Zementis are part of DMG, and open-source companies such as KNIME and Rapid-I are also part of the committee..."

See also: earlier PMML references

A Streaming XSLT Processor
Michael Kay, Paper Presented at Balisage Markup Conference 2010

"In the architecture of most XSLT processors, the XML parser is used to build a tree representation of the source document in memory. XSLT instructions are then executed, which cause the evaluation of XPath expressions, which select nodes from the source tree by navigating around this tree. Because the XPath axes (child, descendant, parent, ancestor, preceding-sibling, and so on) allow navigation around this tree in arbitrary directions, it is necessary for the entire tree to be held in memory for the duration of the transformation. For some XML documents, this is simply not feasible: even sample datasets representing virtual 3D city models run to 44 Gbytes in size...

It has long been recognized that the need to hold the source tree in memory is a serious restriction for many applications... This paper describes how streaming is implemented in the Saxon XSLT processor (Saxonica). This is influenced by the work of the W3C specification, but it is by no means an exact match to the specification in its current form: many features that should be streamable according to the specification are not yet streamable in Saxon, while Saxon succeeds in streaming some constructs that are non-streamable according to XSLT 2.1.

Successive releases of Saxon, some predating this work and some influenced by it, have provided partial solutions to the streaming challenge with increasing levels of sophistication. At the time of writing, there are many ideas in the specification that are not yet implemented in Saxon, and there are some features in the Saxon implementation that are not yet reflected in the specification. Nevertheless, development of the language standard and of an industrial implementation are proceeding in parallel, which is always a promising indicator that standards when they arrive will be timely and viable. Both the language and the implementation, however, still need a lot more work.

Saxon's approach to the problem is based on using a push architecture end-to-end, to eliminate the source tree as an intermediary between push-based XML parsing/validation and pull-based XPath processing. Implementing the entire repertoire of XPath expressions and XSLT instructions in an event-based pipeline is challenging, to say the least. However, enough has been completed to show that the undertaking is viable, and a large enough subset is already available to users to enable some serious large-scale transformation tasks to be performed..."
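The streaming idea itself, independent of Saxon's implementation, can be illustrated with event-based parsing in Python: handle each element as the parser emits it, then discard it, so the full tree never resides in memory.

```python
# Illustration of the streaming idea (not Saxon's implementation): process
# an XML document event by event, discarding each element after handling
# it, so memory use stays flat no matter how large the input grows.
import io
import xml.etree.ElementTree as ET

# Stand-in for an arbitrarily large source document.
big_doc = io.BytesIO(
    b"<cities>"
    + b"".join(b"<city><name>c%d</name></city>" % i for i in range(1000))
    + b"</cities>"
)

count = 0
for event, elem in ET.iterparse(big_doc, events=("end",)):
    if elem.tag == "city":
        count += 1      # the "transform" step: here we merely count
        elem.clear()    # free the finished subtree -- the key streaming move
print(count)
```

A streaming XSLT processor faces the much harder job of executing arbitrary stylesheet logic over such an event pipeline, which is why, as the paper notes, only a subset of XPath and XSLT can be streamed today.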

Browser 'Privacy Modes' Not So Private After All
John P. Mello, Network World

"All the major web browsers have a privacy mode that's supposed to cover a user's tracks after he or she finishes an Internet session, but a trio of researchers have found those modes fail to purge all traces of a Net surfer's activities.

For instance, Mozilla Firefox has a 'custom protocol handler' mechanism that creates URLs that hang around even after a user leaves privacy mode...

Secure certificates can also be used to thwart the purpose of privacy modes. Firefox, Internet Explorer and Safari all support the use of SSL client certificates. A website, through JavaScript, can instruct a browser to generate an SSL client public/private key pair. That key pair is retained by the browser even after the privacy session ends. In addition, if a site uses a self-signed certificate, IE and Safari will store it locally in a Microsoft certificate vault, and it stays there when the privacy session ends. So anyone who knows where to look can find it and glimpse into a user's Internet travels.

Gaurav Aggarwal and Dan Boneh of Stanford University, together with Collin Jackson of Carnegie Mellon University, wrote a paper scheduled to be presented next week at the Usenix Security Symposium in Washington, D.C. The bottom line from the trio's research: don't do anything in privacy mode that you wouldn't do with the boss looking over your shoulder..."

W3C Working Draft for Emotion Markup Language (EmotionML) 1.0
Marc Schroeder (ed), W3C Technical Report

Members of the W3C Multimodal Interaction Working Group have published a second public working draft for Emotion Markup Language (EmotionML) Version 1.0. Abstract: "As the web is becoming ubiquitous, interactive, and multimodal, technology needs to deal increasingly with human factors, including emotions. The present draft specification of Emotion Markup Language 1.0 aims to strike a balance between practical applicability and scientific well-foundedness. The language is conceived as a 'plug-in' language suitable for use in three different areas: (1) manual annotation of data; (2) automatic recognition of emotion-related states from user behavior; and (3) generation of emotion-related system behavior."

As with any standard format, the main goal of EmotionML is twofold: to allow a technological component to represent and process data, and to enable interoperability between different technological components processing that data.

Use cases for EmotionML can be grouped into three broad types: [A] Manual annotation of material involving emotionality, such as annotation of videos, of speech recordings, of faces, of texts, etc.; [B] Automatic recognition of emotions from sensors, including physiological sensors, speech recordings, facial expressions, etc., as well as from multi-modal combinations of sensors; [C] Generation of emotion-related system responses, which may involve reasoning about the emotional implications of events, emotional prosody in synthetic speech, facial expressions and gestures of embodied agents or robots, the choice of music and colors of lighting in a room, etc. Interactive systems are likely to involve both analysis and generation of emotion-related behavior; furthermore, systems are likely to benefit from data that was manually annotated, be it as training data or for rule-based modelling. Therefore, it is desirable to propose a single EmotionML that can be used in all three contexts.

Concrete examples of existing technology that could apply EmotionML include: (1) Opinion mining / sentiment analysis in Web 2.0, to automatically track customers' attitudes regarding a product across blogs; (2) Affective monitoring, such as ambient assisted living applications for the elderly, fear detection for surveillance purposes, or using wearable sensors to test customer satisfaction; (3) Character design and control for games and virtual worlds; (4) Social robots, such as guide robots engaging with visitors; (5) Expressive speech synthesis, generating synthetic speech with different emotions, such as happy or sad, friendly or apologetic; (6) Emotion recognition (e.g., for spotting angry customers in speech dialog systems); (7) Support for people with disabilities, such as educational programs for people with autism..."
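As a rough illustration, the fragment below is a minimal EmotionML-style annotation parsed with Python's standard library. The element names and namespace follow the working draft, but treat the fragment as an unvalidated sketch rather than a conforming document:

```python
# A minimal, illustrative EmotionML annotation parsed with the standard
# library. The <emotion>/<category> structure and the namespace follow the
# working draft, but this fragment is an unvalidated sketch.
import xml.etree.ElementTree as ET

emotionml_doc = """
<emotionml xmlns="http://www.w3.org/2009/10/emotionml">
  <emotion>
    <category name="happy"/>
  </emotion>
</emotionml>
"""

ns = {"emo": "http://www.w3.org/2009/10/emotionml"}
root = ET.fromstring(emotionml_doc)
categories = [c.get("name") for c in root.findall(".//emo:category", ns)]
print(categories)
```

The same small vocabulary is meant to serve all three use-case families above: a human annotator, a recognizer, and a generation component can each read or emit the identical markup.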

See also: the W3C Multimodal Interaction Activity

Hitachi ID Upgrades Password Manager
Dave Kearns, Network World

"Hitachi ID (formerly M-Tech, and still located in Calgary) has just released version 7.0 of its Password Manager product. According to Hitachi ID CTO Idan Shoham, Version 7 'was designed from the ground up to be the most advanced platform for managing authentication factors. We started with our next-generation technology platform to get the best possible scalability, flexibility and reliability. We then added capabilities to support one-stop management of all of a user's authentication factors—passwords, security questions, OTP tokens, smart cards, voice biometrics, hard disk encryption keys and more. The end result is that Password Manager 7.0 is the first true authentication management platform on the market'...

Password Manager 7.0 is also the first release where telephone-based password and PIN reset (Hitachi ID Telephone Password Manager) and enterprise single sign-on (Hitachi ID Login Manager) are included in the base product and price. I like products that do one thing (in this case, authentication) in all its aspects and do it very well. Hitachi ID's Password Manager 7.0 certainly fits that description..."

From the Hitachi ID announcement: "Password Manager 7.0 introduces a series of significant new features: (1) Authentication chains, supporting context-sensitive and multi-step processes for validating user identity; examples include protecting Extranet-facing deployments against attack by robots using CAPTCHAs, leveraging mobile phones as an authentication factor, and integrating with consumer authentication services from vendors such as VeriSign or RSA. (2) User classes for flexible delegation of security rights; examples include regional help desks with local authority to reset passwords, and empowering managers to reset passwords for their (direct or indirect) subordinates. (3) A password synchronization service on Windows servers, which minimizes the code running in the operating system kernel and supports advanced features such as user filtering and a retry queue. (4) Enhancements to the managed user enrollment system, enabling large organizations to control the pace at which users are invited to complete their profiles. (5) Many new reports to track user profiles and login accounts, orphan and dormant accounts, enrollment progress and more. (6) Official support for Windows 7 clients, including 64-bit versions.

Password Manager ships with connectors for over 100 types of systems and applications, including common on-premises systems such as Active Directory and SAP and cloud-hosted systems such as Google Applications and WebEx. It also includes integrations with the Windows login process, 10 types of help desk incident management applications, e-mail systems, full disk encryption products and more..."
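The "authentication chains" concept, a context-selected sequence of checks that must all pass, can be illustrated generically. This is purely hypothetical code, not Hitachi ID's implementation:

```python
# Generic sketch of an "authentication chain": an ordered sequence of
# independent checks that must all pass before the user is authenticated;
# different contexts (e.g., an Extranet-facing deployment) could select
# different chains. Purely illustrative -- not how Password Manager works.

def captcha_check(ctx):
    return ctx.get("captcha_solved", False)   # protects against robots

def password_check(ctx):
    return ctx.get("password_ok", False)

def otp_check(ctx):
    return ctx.get("otp_ok", False)           # e.g., a mobile-phone factor


extranet_chain = [captcha_check, password_check, otp_check]


def authenticate(chain, ctx):
    """Succeed only if every step in the chain passes for this context."""
    return all(step(ctx) for step in chain)


print(authenticate(extranet_chain,
                   {"captcha_solved": True, "password_ok": True, "otp_ok": True}))
print(authenticate(extranet_chain, {"password_ok": True}))
```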

See also: the Hitachi ID announcement


XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation
ISIS Papyrus
Microsoft Corporation
Oracle Corporation
