The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: September 09, 2010
XML Daily Newslink. Thursday, 09 September 2010

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
ISIS Papyrus

W3C Invites Implementation Feedback on Geolocation API Specification
Andrei Popescu (ed), W3C Technical Report

Members of the W3C Geolocation Working Group now invite implementation feedback on the Candidate Recommendation of the Geolocation API Specification. The Geolocation API defines a high-level interface to location information associated only with the device hosting the implementation, such as latitude and longitude. The API itself is agnostic of the underlying location information sources. Common sources of location information include Global Positioning System (GPS) and location inferred from network signals such as IP address, RFID, WiFi and Bluetooth MAC addresses, and GSM/CDMA cell IDs, as well as user input.

W3C publishes a technical report as a Candidate Recommendation to indicate that the document is believed to be stable, and to encourage implementation by the developer community. A Geolocation API Preliminary Implementation Report is available and will be updated during the Candidate Recommendation period. The exit criterion for this document to enter the Proposed Recommendation stage is a minimum of two independent and interoperable user agents that implement each feature of the Geolocation API, as determined by passing the user-agent tests defined in the test suite developed by the Working Group. In addition, a minimum of two Web sites must pass the Tests for Web Sites portion of the test suite.

The Tests for Web Sites section supplies a checklist of the requirements detailed in the 'Privacy considerations for recipients of location information' section of the API specification. "Recipients must only use the location information for the task for which it was provided to them. Recipients must dispose of location information once that task is completed, unless expressly permitted to retain it by the user. Recipients must also take measures to protect this information against unauthorized access. If location information is stored, users should be allowed to update and delete this information. The recipient of location information must not retransmit the location information without the user's express permission. Care should be taken when retransmitting and use of encryption is encouraged... Recipients must clearly and conspicuously disclose the fact that they are collecting location data, the purpose for the collection, how long the data is retained, how the data is secured, how the data is shared if it is shared, how users may access, update and delete the data, and any other choices that users have with respect to the data..."

The Geolocation API specification is limited to providing a scripting API for retrieving geographic position information associated with a hosting device. The relevant geographic position information is provided in terms of World Geodetic System (WGS 84) coordinates. The scope of the specification does not include providing a markup language of any kind, nor does it include defining new URI schemes for building URIs that identify geographic locations.
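The asynchronous, callback-driven shape of the API described above can be sketched with a small mock. The real interface is exposed to JavaScript in the browser as `navigator.geolocation`; the Python rendering below is purely illustrative, the coordinates are made up, and only the call pattern (success callback receiving a position with WGS 84 coordinates) mirrors the specification:

```python
# Illustrative mock of the W3C Geolocation API call shape.
# Real implementations expose this to JavaScript in the browser;
# this Python sketch only mirrors the asynchronous pattern.

class Coordinates:
    def __init__(self, latitude, longitude, accuracy):
        # WGS 84 decimal degrees; accuracy in meters
        self.latitude = latitude
        self.longitude = longitude
        self.accuracy = accuracy

class Position:
    def __init__(self, coords, timestamp):
        self.coords = coords
        self.timestamp = timestamp

class Geolocation:
    """Mimics navigator.geolocation: the caller supplies success and
    error callbacks; the source of the fix (GPS, Wi-Fi, cell ID) is
    deliberately opaque, as in the spec."""
    def __init__(self, fix=None):
        self._fix = fix  # pretend location source

    def get_current_position(self, success, error=None):
        if self._fix is not None:
            success(Position(Coordinates(*self._fix), timestamp=0))
        elif error is not None:
            error("POSITION_UNAVAILABLE")

results = []
geo = Geolocation(fix=(48.8584, 2.2945, 20.0))  # made-up fix
geo.get_current_position(lambda pos: results.append(
    (pos.coords.latitude, pos.coords.longitude)))
print(results)  # [(48.8584, 2.2945)]
```

Note that the caller never learns which location source produced the fix, which is exactly the source-agnosticism the specification describes.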

See also: the W3C Geolocation Working Group

Is Context-Aware Computing Ready for the Limelight?
George Lawton, IEEE Computing Now

"Context-aware computing—which leverages information about users and their environments to improve the interactions among them — has been around for almost two decades. However, it has been implemented commercially only in limited applications, such as those using location information to find nearby friends or stores. This has occurred mainly because of a lack of relevant standards, few devices with the capabilities necessary to perform context-aware computing, and limited sources of context-related information to draw on. Now context-aware computing is ready to take off, due to significant improvements in social networking, mobile technology, smart phones, and sensor networks...

Context-aware computing can improve business processes such as sales, inventory, scheduling, and purchasing by tailoring the way an application presents information to customers and employees, formulating suggestions, and automating some parts of the decision-making process. For example, a sales application might prominently list the products that customers purchased in the past to make it easier for them to find what they want, or it might suggest sales-call opportunities based on a customer's buying history.

A basic system includes a contextually responsive application and hardware elements such as PCs, sensors, and switches. Some of the most useful sources of contextual data include location (captured, for example, from a GPS system), identity (from information a system gathers about a user), activity (such as from a smart-phone-based to-do list), and time (from the system clock). A smart phone is well-suited to context-aware computing because it brings together multiple data streams, including those related to user location and movement, and communications history. Today, context-aware systems can also gather information from social networks...
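The sales example above (ranking products by purchase history and local context) can be sketched as a toy rule. Every name, rule, and data value below is an invented illustration of how contextual inputs such as location-derived stock and identity-derived history might be combined, not any vendor's actual algorithm:

```python
# Toy sketch of a context-aware suggestion rule: combine location
# (store stock), identity (purchase history), and a simple ranking.
# All names and rules here are hypothetical illustrations.

def suggest_products(context, catalog):
    """Rank catalog items: previously purchased items first, then
    items stocked at the user's current store, alphabetically."""
    history = set(context.get("purchase_history", []))
    local_stock = set(context.get("store_stock", []))
    def rank(item):
        return (item not in history, item not in local_stock, item)
    return sorted((i for i in catalog if i in history or i in local_stock),
                  key=rank)

ctx = {
    "location": "store_42",                      # e.g. from GPS
    "purchase_history": ["coffee", "filters"],   # identity-derived
    "store_stock": ["coffee", "tea", "mugs"],    # location-derived
}
print(suggest_products(ctx, ["tea", "coffee", "filters", "mugs"]))
# ['coffee', 'filters', 'mugs', 'tea']
```

A production system would draw these context fields from the sensor, clock, and social-network sources the article lists rather than a literal dictionary.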

[Examples:] Radiant Logic released Context Browser, a Web-based application that lets users search contextually across structured information stored in different places. Globys recently launched the Mobile Occasions contextual marketing service for cellular-phone providers. It combines billing and demographic information, as well as other data, to target advertisements to customers. Ryerson University's student-based Digital Media Zone and Appear Networks are testing an application that users with disabilities could download to an Android smart phone to help them navigate the Paris subway... Cisco's Collaboration in Motion initiative would integrate location, activity, behavior, network traffic patterns, and other types of information from the company's routers... AeroScout has implemented Cisco's platform commercially in applications and handheld devices that gather contextual information about a patient and hospital resources, and enable healthcare workers to quickly find the type of equipment and supplies needed...."

IETF Draft: Heterogeneous Wireless Network Security Architecture
Kaizhi Huang (ed), IETF Internet Draft

Members of the IETF IP Security Maintenance and Extensions (IPSECME) Working Group have published an initial (level -00) specification, Heterogeneous Wireless Network Security Architecture. From the document abstract: "After analysis and comparison of domestic and international wireless network security schemes and standards, security threats to heterogeneous wireless networks are investigated in theory and practice. Based on a comprehensive summary of how current security standards are applied, of existing security vulnerabilities, and of hidden security risks that may emerge in the future, a 'network layered, security classified, and trusted domain separated' security standard model is proposed, and corresponding standards for a heterogeneous wireless network security architecture are also put forward."

From the analysis: "Multiple heterogeneous wireless networks (WLAN, WiMAX, etc.) will coexist for a long time in next-generation broadband wireless communication networks. A variety of network access technologies, such as TD-SCDMA, WCDMA, and cdma2000 for wireless wide area networks, WiMAX for wireless metropolitan area networks, and Wi-Fi for WLANs, provide more efficient services for different people and different needs, and also provide ways to achieve mobility, personalization of communication, and multimedia applications.

At the same time, these varied wireless networks impose higher requirements on performance, and especially on security, so specialized protection mechanisms are needed. Users must be authenticated and subject to access control at network entry so that the confidentiality and integrity of transmitted data are protected. However, wireless broadband network security cannot be guaranteed as a whole, because the security standards of the different wireless networks are independent and mutually incompatible. Therefore, standards for a heterogeneous wireless network security architecture need to be developed.

By tracking domestic and international wireless network security standards, the draft presents an analysis and comparison of the various security mechanisms of wireless networks. Security threats that a wireless network may suffer are summarized in theory and practice. The architecture evaluates the usability of security protocols and encryption algorithms in wireless networks in terms of throughput, security, fault tolerance, and the balance between efficiency and other aspects, and estimates the anti-attack capability of information security systems (including encryption algorithms, security protocols, key management, etc.). A brief description of existing wireless broadband network standards in the literature is also presented..."

See also: the IETF IP Security Maintenance and Extensions (IPSECME) Working Group

Compromising Twitter's OAuth Security System
Ryan Paul, Ars Technica

"Twitter officially disabled Basic authentication this week, the final step in the company's transition to mandatory OAuth authentication. Sadly, Twitter's extremely poor implementation of the OAuth standard offers a textbook example of how to do it wrong. This article will explore some of the problems with Twitter's OAuth implementation and some potential pitfalls inherent to the standard. I will also show you how I managed to compromise the secret OAuth key in Twitter's very own official client application for Android.

OAuth is an emerging authentication standard that is being adopted by a growing number of social networking services. It defines a key exchange mechanism that allows users to grant a third-party application access to their account without having to provide that application with their credentials. It also allows users to selectively revoke an application's access to their account...

Applications that communicate with OAuth-enabled services can use a set of keys—called the consumer key and consumer secret—to uniquely identify themselves to the service. This allows the OAuth-enabled service to tell the user what third-party application is gaining access to their account during the authorization process... If the key is embedded in the application itself, it's possible for an unauthorized third party to extract it through disassembly or other similar means. It will then be possible for the unauthorized third party to build software that masquerades as the compromised application when it accesses the service.... The function of the consumer secret is really just to let the remote OAuth-enabled Web service know who is making the request—kind of like a user agent string. In the context of a desktop or mobile client application, it's basically superfluous and shouldn't be trusted in any capacity...
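The consumer secret's role becomes concrete if you look at how an OAuth 1.0a client signs a request. Below is a minimal sketch of the RFC 5849 signature base string and HMAC-SHA1 signature; the URL and parameter values are placeholders, and a real client would also handle token credentials, nonces, and timestamps more completely than shown:

```python
# Minimal sketch of how an OAuth 1.0a client uses its consumer
# secret: build the signature base string and sign with HMAC-SHA1.
# URL and parameter values are placeholders, not a real endpoint.
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_request(method, url, params, consumer_secret, token_secret=""):
    # 1. Percent-encode and sort the request parameters
    norm = "&".join(f"{quote(k, safe='')}={quote(v, safe='')}"
                    for k, v in sorted(params.items()))
    # 2. Signature base string: METHOD & URL & normalized params
    base = "&".join(quote(s, safe="") for s in (method, url, norm))
    # 3. Signing key: consumer secret & token secret
    key = f"{quote(consumer_secret, safe='')}&{quote(token_secret, safe='')}"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

sig = sign_request(
    "GET", "https://api.example.com/1/statuses",   # placeholder URL
    {"oauth_consumer_key": "app-key", "oauth_nonce": "abc",
     "oauth_timestamp": "1284000000"},
    consumer_secret="app-secret")
print(len(sig))  # a base64-encoded HMAC-SHA1 digest is 28 characters
```

This also makes the article's attack obvious: anyone who extracts `consumer_secret` from a shipped client binary can produce identical signatures and masquerade as that application.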

I don't think that OAuth is a failure or a dead end. I just don't think that it should be treated as an authentication panacea to the detriment of other important security considerations... I think that OAuth 2.0, the next version of the standard, will address many of the problems and will make it safer and more suitable for adoption. The current IETF version of the 2.0 draft still requires a lot of work, however. It still doesn't really provide guidance on how to handle consumer secret keys for desktop applications, for example. In light of the heavy involvement in the draft process by Facebook's David Recordon, I'm really hopeful that the official standard will adopt Facebook's sane and reasonable approach to that problem..."

See also: the IETF Open Authentication Protocol (OAuth) Working Group

Public Review for WebCGM v2.1 Errata Document
Lofton Henderson (ed), OASIS Public Review Document

Members of the OASIS CGM Open WebCGM Technical Committee have approved a Committee Draft specification, WebCGM v2.1 Errata, for public review through September 21, 2010. Most of the nine errata in this specification are minor, since the OASIS Standards Approval Process limits errata to proposed corrections that do not constitute a Substantive Change. A 'Substantive Change' is a change to a specification that would require a compliant application or implementation to be modified or rewritten in order to remain compliant.

Development of WebCGM is being undertaken jointly with W3C, where WebCGM 2.1 is related to the previous W3C work on WebCGM 1.0 and 2.0. WebCGM 2.0 was simultaneously published by W3C as a Recommendation and by OASIS as an OASIS Standard. The two versions are identical in technical content, differing only in the formatting and presentation conventions of the two organizations.

"Computer Graphics Metafile (CGM) is an ISO standard, defined by ISO/IEC 8632:1999, for the interchange of 2D vector and mixed vector/raster graphics. WebCGM is a profile of CGM, which adds Web linking and is optimized for Web applications in technical illustration, electronic documentation, geophysical data visualization, and similar fields. First published (1.0) in 1999 and followed by a second (errata) release in 2001, WebCGM unifies potentially diverse approaches to CGM utilization in Web document applications. It therefore represents a significant interoperability agreement amongst major users and implementers of the ISO CGM standard...

The design criteria for WebCGM aim at a balance between graphical expressive power on the one hand, and simplicity and implementability on the other. A small but powerful set of standardized metadata elements supports the functionalities of hyperlinking and document navigation, picture structuring and layering, and enabling search and query of WebCGM picture content..."

See also: the OASIS announcement

Specifying Local Civic Address Fields in PIDF-LO
James Winterbottom, Martin Thomson, Richard Barnes (eds), IETF Internet Draft

Members of the IETF Geographic Location/Privacy (GEOPRIV) Working Group have issued a first public working draft for the Standards Track specification Specifying Local Civic Address Fields in PIDF-LO. The document describes how to specify local civic elements in the Geopriv civic schema maintaining backward compatibility with existing specifications and implementations. Support for providing local civic elements over DHCP is also described.

The IETF Revised Civic Location Format for Presence Information Data Format Location Object (PIDF-LO) RFC specification defines an XML format for the representation of civic location. This format is designed for use with PIDF-LO documents and replaces the civic location format in RFC 4119. The format is based on the civic address definition in PIDF-LO, but adds several new elements based on the civic types defined for Dynamic Host Configuration Protocol (DHCP), and adds a hierarchy to address complex road identity schemes. The format also includes support for the 'xml:lang' language tag and restricts the types of elements where appropriate.

The Geopriv civic location specification of RFC 5139 already "defines an XML schema that is intended to allow the expression of civic location in most countries. However, it was recognized that some countries may require a profile or guidance on how to specify local addresses using the elements defined in RFC 5139, and so RFC 5774 was produced to provide this function. Subsequent to these specifications being produced, a number of individual contributions have been made trying to add civic elements that address local jurisdictional requirements. These contributions were specified in such a way that they broke backward compatibility for protocols, equipment, and other standards already using the RFC 5139 specification.

From the new I-D specification: "The civic schema of RFC 5139 defines an ordered structure of elements that can be combined to describe a street address. The XML extension point at the bottom of the schema is used to include address elements of local significance into the main civic body. For example, suppose the Central Devon Canals authority wishes to introduce a new civic element called "bridge". The authority must define an XML namespace and define the "bridge" element within that namespace. The namespace needs to be a URI and needs to be unique, for example ""... Nodes that receive the location information but don't understand the locally specified address elements can safely ignore them, yet still interpret the main civic elements from RFC 5139 and so maintain backward compatibility. Where the information is passed to local applications, such as a LoST server for emergency call routing, the significance of the localized elements can be safely applied. This allows localized address elements to be included in a location response from a LIS using HELD without modification being required to the HELD protocol or the HELD client on the device... In networks that elect to use DHCP to provide civic address information to clients, three new CATypes are defined to address this basic functionality...."
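The "bridge" scenario above can be sketched with a few lines of XML building. The canal-authority namespace URI and element values below are hypothetical; the civic-address namespace is the one registered for the Geopriv civic schema. The point is the last step: a consumer that only knows the standard namespace skips the local element and still reads the address, which is the backward compatibility the draft describes:

```python
# Sketch of the RFC 5139 extension point: a locally defined
# "bridge" element in its own namespace alongside standard civic
# elements. The canal-authority namespace URI is hypothetical.
import xml.etree.ElementTree as ET

CIVIC = "urn:ietf:params:xml:ns:pidf:geopriv10:civicAddr"
LOCAL = "http://canals.example.org/civic"  # hypothetical namespace

ca = ET.Element(f"{{{CIVIC}}}civicAddress")
ET.SubElement(ca, f"{{{CIVIC}}}country").text = "UK"
ET.SubElement(ca, f"{{{CIVIC}}}A1").text = "Devon"
ET.SubElement(ca, f"{{{LOCAL}}}bridge").text = "Halberton Bridge"

# A consumer that does not know the local namespace simply skips
# those elements, preserving backward compatibility:
known = [e.text for e in ca if e.tag.startswith(f"{{{CIVIC}}}")]
print(known)  # ['UK', 'Devon']
```

A locally aware application (the draft's LoST-server example) would instead look for the `bridge` element in its known namespace and apply it.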

See also: the IETF Geographic Location/Privacy (GEOPRIV) Working Group Status Pages

Autonomous User Interfaces for Mobile Apps
Robi Karp, DDJ

"The Autonomous User Interface (AUI) is a revolutionary approach to UI design and implementation that goes beyond the custom themes, icon sets, and color schemes common on many mobile phones and other intelligent devices. Through scriptable, autonomous UI coding, AUI lets OEMs, developers, integrators, and other ecosystem participants completely control and customize the look-and-feel of the end-user experience.

With traditional design methodologies, application code 'owns' the particulars of UI implementation, determining the type, orientation, placement, and other attributes of objects on the display (buttons, widgets, etc.), the flow of their use and the callback code that powers those elements. The attributes of a UI design are thereby set in the original design and are only minimally mutable downstream, by channel partners, third-parties and end-users. Some UI and application frameworks support theming—customization of color schemes, menu text styles, window frames, widget sets, etc. However, the fundamental structure and flow of an application UI remains set in stone—a closed box as imagined by the original design team.

On the other hand, the concept of an Autonomous User Interface lets application developers specify generic or abstract presentation of controls, widgets and even content, giving downstream developers the freedom to brand and customize...
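The idea of abstract controls with downstream customization can be sketched in a few lines. Everything here is invented for illustration (the article does not publish an AUI data model): the application declares controls by role and action only, and a separate branding script, which an OEM or integrator could replace, decides the concrete widget and label:

```python
# Toy sketch of the "autonomous UI" idea: the application declares
# abstract controls; a downstream customization script chooses the
# concrete widgets and labels. All names here are invented.

# Application side: roles and actions only, no widget choices.
app_ui = [
    {"role": "confirm", "action": "send_message"},
    {"role": "cancel",  "action": "discard_draft"},
]

# Downstream branding script: maps abstract roles to concrete
# widgets; an OEM or channel partner swaps this without touching
# the application code.
oem_theme = {
    "confirm": {"widget": "big_green_button", "label": "Send"},
    "cancel":  {"widget": "text_link",        "label": "Discard"},
}

def render(ui, theme):
    """Bind each abstract control to the theme's concrete widget."""
    return [{**theme[c["role"]], "action": c["action"]} for c in ui]

print(render(app_ui, oem_theme))
```

Because the application never names a widget, the "fundamental structure and flow" the article mentions stays mutable all the way down the distribution chain.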

This capability expands the market and extends the lifetime of application code by offering developers and other ecosystem players new opportunities to brand, differentiate and refresh device software. In today's dynamic landscape of multiple application OSes (especially in mobile), it's important to build a strong base product that can be easily tailored for different packages, channels and markets..."

Upload XML Bulk Data to Google App Engine Persistent Object Database
Joseph P. McCarthy, IBM developerWorks

The Google App Engine, which launched in April 2008, included a method to upload bulk data stored in CSV files using Python. Java language support followed a year later. To date, however, App Engine lacks Java-native support for bulk uploads, and CSV remains the only data format supported by the bulk-upload tool.

The supplied development environment for applications creates a local database to persist data during development, and the site itself allows data to be stored as persistent objects, or entities. These entities are created using Plain Old Java Objects (POJOs) annotated with Java Data Object (JDO) annotations. However, the environment has no way to upload data directly between the two (local and deployed) databases.

This article explores various methods to store data from XML documents on the App Engine persistent database... The simplest method to add the data from an XML document to the datastore on GAE is to upload the document as part of the application and use a custom SAX-based parser to create a class in the application based on each entry in the document.

With the SOAP-based client and server method, a way to upload bulk XML data is now available to Java developers... This solution is restricted both by the maximum number of characters you can enter into a text area and by the 30-second timeout Google enforces on requests sent to GAE. If the document is not parsed and the objects made persistent within 30 seconds, the server will throw an exception and the objects will not be created. SOAP is a protocol that allows XML messages to be sent and received over the Internet. To create each employee, you use a SOAP service running on GAE, one record at a time. You can reuse the same handler class as before, but rather than run it on the server, you use it as a client..."
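The SAX approach described above, one object created per entry in the document, can be sketched as follows. The article's code is Java; this is a Python rendering of the same pattern, and the `<employee>`/`<name>` element names and sample data are invented for illustration:

```python
# SAX-style parsing sketch: build one object per <employee> entry,
# the same way the article's handler creates a persistent entity
# per record. Element names and data are invented illustrations.
import xml.sax

class Employee:
    def __init__(self, name):
        self.name = name

class EmployeeHandler(xml.sax.ContentHandler):
    """Collects an Employee object for each <employee> element."""
    def __init__(self):
        super().__init__()
        self.employees = []
        self._text = []
        self._in_name = False

    def startElement(self, tag, attrs):
        if tag == "name":
            self._in_name, self._text = True, []

    def characters(self, content):
        if self._in_name:
            self._text.append(content)

    def endElement(self, tag):
        if tag == "name":
            self._in_name = False
            self.employees.append(Employee("".join(self._text)))

doc = ("<staff><employee><name>Ada</name></employee>"
       "<employee><name>Grace</name></employee></staff>")
handler = EmployeeHandler()
xml.sax.parseString(doc.encode(), handler)
print([e.name for e in handler.employees])  # ['Ada', 'Grace']
```

On App Engine the `endElement` step would make each object persistent via JDO instead of appending to a list, which is why the 30-second request timeout bounds the size of document this approach can handle.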


XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation
ISIS Papyrus
Microsoft Corporation
Oracle Corporation


Hosted By
OASIS - Organization for the Advancement of Structured Information Standards
