XML Daily Newslink. Thursday, 15 July 2010

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
ISIS Papyrus http://www.isis-papyrus.com



W3C Last Call Review for Mobile Web Application Best Practices
Adam Connors and Bryan Sullivan, W3C Technical Report

Members of the W3C Mobile Web Best Practices Working Group have published a Last Call Working Draft for Mobile Web Application Best Practices. The W3C Membership and other interested parties are invited to review the document and send comments to the public comment list through 6-August-2010. Specific instructions for sending implementation feedback on the document are available in the implementation feedback template.

"The goal of this document is to aid the development of rich and dynamic mobile Web applications. It collects the most relevant engineering practices, promoting those that enable a better user experience and warning against those that are considered harmful... For the purposes of this document, the term 'Web application' refers to a Web page (XHTML or a variant thereof + CSS) or collection of Web pages delivered over HTTP which use server-side or client-side processing (e.g. JavaScript) to provide an 'application-like' experience within a Web browser. Web applications are distinct from simple Web content in that they include locally executable elements of interactivity and persistent state...

In a world where the line between mobile and non-mobile is necessarily blurred, a document that restricts its focus solely to best practices that are uniquely mobile would most likely be very short. With this in mind, the focus of this document is to address those aspects of Web application development for which there are additional, non-trivial concerns associated with the mobile context. This applies equally to the limitations of the mobile context (e.g. small screen, intermittent connectivity) and to the additional scope and features that should be considered when developing for the mobile context (e.g. device context / location, presence of personal data on the device, etc.)...

This document is placed into Last Call based on implementation feedback received during the previous Candidate Recommendation phase. A diff version against the Candidate Recommendation publication is also available... In the absence of substantive last call comments, the Mobile Web Best Practices Working Group expects to request advancement of this document to Proposed Recommendation once (a) sufficient reports of implementation experience have been gathered to demonstrate that the Mobile Web Application Best Practices are implementable and are interpreted in a consistent manner, and (b) an implementation report has been produced indicating the results of using each best practice for the Web sites/pages considered..."

See also: the W3C Mobile Web Best Practices Working Group


Healthcare Use of Identity Federation
John Moehrke, Blog

"This is exciting times in Identity Federation... I think that SAML is a specifically useful protocol for Identity Federation for the purpose of identifying users requesting cross-enterprise based transactions. This is specifically the purpose behind the IHE Cross-Enterprise User Assertion Profile. This profile does not fully leverage all of the power of SAML, but tries to constrain SAML just enough to get Healthcare going at using this technology.

IHE is now extending this profile with more attributes about the user: specifically, a descriptive string for the user, their organization, an identifier for their organization, their National Provider Identifier, and so on. It also adds their role and the purpose of their request, values that might be used for access control and/or audit logging.
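
As a rough sketch of the kind of assertion content involved, the fragment below shows a SAML attribute statement carrying user, organization, role, and purpose-of-use attributes. The attribute names and values are illustrative stand-ins (note the urn:example prefix), not the profile's normative identifiers; consult the XUA profile itself for those.

  <saml:AttributeStatement xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
    <!-- Attribute names are invented for illustration; the XUA profile
         extension defines the normative URNs for these concepts -->
    <saml:Attribute Name="urn:example:subject:subject-id">
      <saml:AttributeValue>Dr. Jane Smith</saml:AttributeValue>
    </saml:Attribute>
    <saml:Attribute Name="urn:example:subject:organization">
      <saml:AttributeValue>Community Hospital</saml:AttributeValue>
    </saml:Attribute>
    <saml:Attribute Name="urn:example:subject:role">
      <saml:AttributeValue>medical doctor</saml:AttributeValue>
    </saml:Attribute>
    <saml:Attribute Name="urn:example:subject:purposeofuse">
      <saml:AttributeValue>TREATMENT</saml:AttributeValue>
    </saml:Attribute>
  </saml:AttributeStatement>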

There is also the recent release from the White House of 'The National Strategy for Trusted Identities in Cyberspace'. From my reading of it, their goals are in the right place, they do seem to understand the potential misuse, and they do seem to understand that the only way we can move forward today is to force the issue. This force is not to force a solution, but rather to force the discussion and encourage development of specific, reasonable use-cases. I think that Healthcare could be a very useful use-case, inclusive of Health Information Exchanges (HIE) and Personal Health Records (PHR). My biggest concern with this initiative is that they seem to be leaning toward a Certificate (PKI) based solution, and may not see the power of SAML...

I think the power of how OpenID and SAML can be used together is shown by a Clinical Trials project that Medtronic was involved with. Their solution is documented nicely in Kim Cameron's Identity Blog article 'Southworks Seeds Open Source Claims Transformer' [Southworks has put its work bridging OpenID and WS-Federation into an open source project]. This is a project that released its solution as open source and used both OpenID and SAML, choosing whichever was the right tool for each specific job..."

See also: Kim Cameron's Identity Blog article


nature.com OpenSearch: A Case Study in OpenSearch and SRU Integration
Tony Hammond, D-Lib Magazine

"This paper provides a case study of OpenSearch and SRU integration on the nature.com science publisher platform. These two complementary search methodologies are implemented on top of a common base service and provide alternate interfaces into the underlying search engine. Specific points addressed include query strings, response formats, and service control and discovery. Current applications and future work directions are also discussed.

The reasons for seeking to develop an SRU service were various. We wanted to be able to provide off-site search functionality to complement our hosted search. We were also accustomed to fielding queries about our support for Z39.50, with all the promise of federated searching that Z39.50 offered. Concurrently we had been following the development of SRU as a next-generation replacement for Z39.50. This Web-based technology mix (XML over HTTP) projected a much better return on investment for us than building (or commissioning) a service based on what is essentially a pre-Web technology—one with its own wire protocol and architectural reference points. Further, the work on search web services in OASIS augured well for SRU becoming a standards-based query language and search protocol.

SRU is an initiative to bring Z39.50 functionality to the Web and is firmly grounded in both structured queries and responses. Specifically, a query can be expressed in the high-level query language CQL, which is independent of any underlying implementation. Result records are returned using any registered W3C XML Schema format and are transported within a defined XML wrapper format for SRU. The SRU 2.0 draft provides support for arbitrary result formats based on media type...
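
For a flavor of the interface, a generic SRU searchRetrieve request carries a CQL query and paging parameters in the URL (shown here wrapped across lines; the host is a placeholder, not the nature.com endpoint):

  http://example.org/sru?operation=searchRetrieve
      &version=1.2
      &query=dc.title%3D%22semantic+web%22
      &startRecord=1
      &maximumRecords=10
      &recordSchema=dc

The query parameter is the URL-encoded CQL expression dc.title="semantic web"; recordSchema asks for results in Dublin Core.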

OpenSearch, by contrast, was created by Amazon's A9.com and is a simple means of interfacing to a search service by declaring a URL template and returning a common syndicated format. It therefore allows for loosely organized result sets while not constraining the query. There is support for search operation control parameters (pagination, encoding, etc.), but no constraints are placed on the query string, which is regarded as opaque. OpenSearch is thus a means to interface to arbitrary search APIs (both standard and proprietary) and to retrieve results using a common list-based format. OpenSearch is a plug-and-play technology... The nature.com OpenSearch implementation makes use of both technologies: it builds on a solid SRU base service but also presents a rich set of OpenSearch-type result formats. Both interfaces are equally supported in nature.com OpenSearch..."
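
The URL-template mechanism is declared in an OpenSearch description document. The minimal example below uses a placeholder endpoint rather than the actual nature.com service:

  <?xml version="1.0" encoding="UTF-8"?>
  <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
    <ShortName>Example Search</ShortName>
    <Description>Illustrative description document for an OpenSearch service</Description>
    <!-- {searchTerms} is filled with the opaque query string;
         {startPage?} is an optional paging parameter -->
    <Url type="application/atom+xml"
         template="http://example.org/search?q={searchTerms}&amp;page={startPage?}"/>
  </OpenSearchDescription>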

See also: the OASIS Search Web Services Technical Committee


Semantically Enhancing Collections of Library and Non-Library Content
James E. Powell, Linn Marks Collins, Mark L. B. Martinez; D-Lib Magazine

"Many digital libraries have not made the transition to semantic digital libraries, and often with good reason. Librarians and information technologists may not yet grasp the value of semantic mappings of bibliographic metadata, they may not have the resources to make the transition and, even if they do, semantic web tools and standards have varied in terms of maturity and performance. Selecting appropriate or reasonable classes and properties from ontologies, linking and augmenting bibliographic metadata as it is mapped to triples, data fusion and re-use, and considerations about what it means to represent this data as a graph, are all challenges librarians and information technologists face as they transition their various collections to the semantic web.

This paper presents some lessons we have learned building small, focused semantic digital library collections that combine bibliographic and non-bibliographic data, based on specific topics. The tools map and augment the metadata to produce a collection of triples. We have also developed some prototype tools atop these collections which allow users to explore the content in ways that were either not possible or not easy to do with other library systems...

The semantic web depends upon simple statements of fact. A statement contains a predicate, which links two nodes: a subject and an object. The semantic web allows for the use of numerous sources for predicates. This enables making fine-grained assertions about anything... In all, we use about 25 properties and classes from a dozen ontologies in our mapping of disparate data sources into instance data serialized as RDF/XML, and loaded into Sesame semantic repositories manually and via the openRDF API. We have developed mapping tools that map and augment content in MARC XML format, OAI Dublin Core content from OAI repositories, and RSS and Atom news feed content, and we also have custom tools for processing structured (XML) content and data from relational databases. The XML and database mapping tools currently must be adapted for each new data source, so they represent one of the more brittle and labor-intensive aspects of the technologies.
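
As a minimal illustration of the kind of instance data such a mapping produces (the record URI and choice of Dublin Core properties are invented for this example, not taken from the authors' collections):

  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
           xmlns:dc="http://purl.org/dc/elements/1.1/">
    <!-- One bibliographic record as triples: the record URI is the subject,
         each dc: property a predicate, each value an object -->
    <rdf:Description rdf:about="http://example.org/biblio/record/42">
      <dc:title>An Example Title</dc:title>
      <dc:creator>Author, Example</dc:creator>
      <dc:date>2010</dc:date>
    </rdf:Description>
  </rdf:RDF>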

Users can export files representing these graphs via web services. Supported formats include Pajek's NET format, GraphViz's DOT format, the GUESS data format, and GraphML. This enables more sophisticated users to use other graph visualization tools for exploring the data, or to merge data from multiple sources. Restricting ourselves to a handful of ontologies by no means limits us in the future. The semantic web allows us to add new triples, using new predicates, whenever the need arises. There is no database schema to update, and the SPARQL query language is forgiving enough to allow changes to the underlying data, with minimal or no changes necessary for queries to continue to work against old and new data..."
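
For instance, a SPARQL query like the following (written against the hypothetical record above) keeps working unchanged as new predicates are added to the store, because it matches only the patterns it names:

  PREFIX dc: <http://purl.org/dc/elements/1.1/>
  SELECT ?record ?title ?creator
  WHERE {
    ?record dc:title ?title .
    # Records without a creator still match, thanks to OPTIONAL
    OPTIONAL { ?record dc:creator ?creator }
  }
  LIMIT 25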

See also: the OpenRDF.org web site


Framework for Emergency Calling Using Internet Multimedia
Brian Rosen, Henning Schulzrinne, James Polk, Andrew Newton; IETF I-D

Members of the IETF Emergency Context Resolution with Internet Technologies (ECRIT) Working Group have published a revised version of the Framework for Emergency Calling Using Internet Multimedia, updating the previous draft of July 27, 2009. The IETF has standardized various aspects of placing emergency calls. This document describes how all of those component parts are used to support emergency calls from citizens and visitors to authorities.

An emergency call can be distinguished from any other call by a unique Service URN (RFC 5031) that is placed in the call set-up signaling when a home or visited emergency dial string is detected. Because emergency services are local to specific geographic regions, a caller must obtain its location prior to making emergency calls. To get this location, either a form of measurement (for example, GNSS) is used, or the endpoint is configured with its location from the access network's Location Information Server (LIS) using a Location Configuration Protocol (LCP). The location is conveyed in the SIP signaling with the call. The call is routed based on location using the LoST protocol (RFC 5222), which maps a location to a set of PSAP (Public Safety Answering Point) URIs. Each URI resolves to a PSAP or an Emergency Services Routing Proxy (ESRP) that serves as an incoming proxy for a group of PSAPs. The call arrives at the PSAP with the location included in the INVITE request.
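
Schematically, the resulting call setup might begin like the heavily abbreviated INVITE below; all identifiers are placeholders, and the location itself travels as a PIDF-LO body part referenced from the Geolocation header:

  INVITE urn:service:sos SIP/2.0
  Via: SIP/2.0/TCP client.example.com;branch=z9hG4bK74bf9
  From: <sip:caller@example.com>;tag=9fxced76sl
  To: <urn:service:sos>
  Call-ID: 3848276298@client.example.com
  Geolocation: <cid:location@client.example.com>
  Content-Type: multipart/mixed; boundary=boundary1

  (body omitted: an SDP session description plus a PIDF-LO location object)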

Devices that create media sessions and exchange audio, video and/or text, that have the capability to establish sessions to a wide variety of addresses, and that communicate over private IP networks or the Internet, should support emergency calls. Traditionally, enterprise support of emergency calling is provided by the telephony service provider to the enterprise. In some more recent systems, the enterprise PBX assists emergency calling by providing finer-grained location in larger enterprises. In the future, the enterprise may provide the connection to emergency services itself, not relying on the telephony service provider...

Location can be specified in several ways: Civic, Geospatial (geo), Cell tower/sector... In IETF protocols, both civic and geospatial forms are supported. The civic forms include both postal and jurisdictional fields. A cell tower/sector can be represented as a geo point or polygon, or as a civic location. Other forms of location representation must be mapped into either a geo or civic form for use in emergency calls. For emergency call purposes, conversion of location information from civic to geo or vice versa prior to conveyance is not desirable; the location should be sent in the form in which it was determined. Conversion between geo and civic requires a database..."
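
In PIDF-LO terms (RFC 4119), a civic location carried with a call looks roughly like the following; the address values are of course placeholders:

  <presence xmlns="urn:ietf:params:xml:ns:pidf"
            xmlns:gp="urn:ietf:params:xml:ns:pidf:geopriv10"
            xmlns:ca="urn:ietf:params:xml:ns:pidf:geopriv10:civicAddr"
            entity="pres:caller@example.com">
    <tuple id="loc1">
      <status>
        <gp:geopriv>
          <gp:location-info>
            <ca:civicAddress>
              <ca:country>US</ca:country>
              <ca:A1>NY</ca:A1>       <!-- state -->
              <ca:A3>New York</ca:A3> <!-- city -->
              <ca:RD>Broadway</ca:RD> <!-- road -->
              <ca:HNO>123</ca:HNO>    <!-- house number -->
            </ca:civicAddress>
          </gp:location-info>
          <gp:usage-rules/>
        </gp:geopriv>
      </status>
    </tuple>
  </presence>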

See also: IETF Emergency Context Resolution with Internet Technologies (ECRIT)


VALARM Extensions for iCalendar
Cyrus Daboo (ed), IETF Internet Draft

Members of the IETF Calendaring and Scheduling Standards Simplification (CALSIFY) Working Group have published a Standards Track Internet Draft for the specification VALARM Extensions for iCalendar. The document defines a set of extensions to the iCalendar VALARM component to enhance use of alarms and improve interoperability between clients and servers.

From the Introduction: "The IETF iCalendar specification defines a set of components used to describe calendar data. One of those is the 'VALARM' component which appears as a sub-component of 'VEVENT' and 'VTODO' components. The 'VALARM' component is used to specify a reminder for an event or to-do. Different alarm actions are possible, as are different ways to specify how the alarm is triggered.
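
For reference, a minimal RFC 5545 alarm as it stands today, before these extensions, is a 'VALARM' such as the following, which pops up a reminder fifteen minutes before the event starts (identifiers and dates are illustrative):

  BEGIN:VEVENT
  UID:event-1@example.com
  DTSTAMP:20100715T120000Z
  DTSTART:20100720T150000Z
  SUMMARY:Status meeting
  BEGIN:VALARM
  ACTION:DISPLAY
  DESCRIPTION:Status meeting in 15 minutes
  TRIGGER:-PT15M
  END:VALARM
  END:VEVENT

A client that snoozes or dismisses this alarm currently has no standard property with which to record that fact, which is exactly the gap the draft addresses.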

As iCalendar has become more widely used and as client-server protocols such as CalDAV have become more popular, several issues with 'VALARM' components have arisen. Most of these relate to the need to extend the existing 'VALARM' component with new properties and behaviors to allow clients and servers to accomplish specific tasks in an interoperable manner. For example, clients typically need a way to specify that an alarm has been dismissed by a calendar user, or has been 'snoozed' by a set amount of time. To date, this has been done through the use of custom 'X-' properties specific to each client implementation, leading to poor interoperability.

This specification defines a set of extensions to 'VALARM' components to cover common requirements for alarms not currently addressed in iCalendar. Each extension is defined in a separate section below. For the most part, each extension can be supported independently of the others, though in some cases one extension will require another. In addition, this specification describes mechanisms by which clients can interoperably implement common features such as 'snoozing'..."

See also: the IETF Calendaring and Scheduling Standards Simplification WG Status Pages


Top Five Scripting Languages on the JVM
Andrew Binstock, InfoWorld

"Anyone who has followed software development tools during the last decade knows that the term 'Java' refers to a pair of technologies: the Java programming language and the Java Virtual Machine (JVM). The Java language is compiled into bytecodes that run on the JVM.

The language and the JVM, however, have been increasingly moving in opposite directions. The language has grown more complex, while the JVM has become one of the fastest and most efficient execution platforms available. On many benchmarks, Java equals the performance of binary code generated by compiled languages such as C and C++. The increasing complexity of the language and the remarkable performance, portability, and scalability of the JVM have created an opening for a new generation of programming languages.

In this article, I examine a handful of these languages, comparing and contrasting them, and identifying the needs they satisfy particularly well. I limit myself to the JVM languages that are free and open source... The JVM scripting languages today naturally divide into two groups based on their rate of adoption. Groovy and JRuby fall into the popular camp, while the others are niche players—that is, they appeal to a small community at present. The languages I've focused on are Groovy, JRuby, Fantom, Jython, and Scala. There are a few other candidates, namely Clojure, JavaFX, and NetRexx...

Groovy is an object-oriented language that is compiled to bytecode. Its principal syntactical trait is its close similarity to Java, but with much of the clutter removed. Groovy also provides high-level constructs for handling standard tasks such as string processing, consuming or generating XML, unit testing, and so on—all of which can save developers significant time... Conclusion: Groovy and JRuby lead a strong field, with Scala, Fantom, and Jython following behind...."
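
As a small taste of the clutter reduction described above, this illustrative Groovy snippet consumes XML with the standard XmlSlurper class, with no explicit parser setup or DOM traversal:

  // Parse a document and read a value using GPath expressions
  def feed = new XmlSlurper().parseText('<books><book>Groovy in Action</book></books>')
  println feed.book.text()   // prints: Groovy in Action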


An Introduction to Mashups4JSF
Hazem Saleh, IBM developerWorks

"Creating mashups in web applications can be a headache. Developers need to know intensive JavaScript, RSS, and Atom parsing, JSON parsing, and parsing other formats. You also need to study the low-level APIs provided by the mashup service providers and write a great deal of code to integrate the mashups with the web applications.

The Mashups4JSF components interact with the mashup services through the client-side APIs or the REST APIs offered by the mashup service providers. Mashups4JSF provides a set of factories that wrap the implemented services for each mashup service provider. For now, Mashups4JSF has factories for Google, Yahoo!, YouTube, Twitter, and Digg. This architecture allows you to easily add services for the currently supported mashup service providers and to add more factories for new mashup service providers. Another advantage of this architecture is that the wrapped mashup services are totally decoupled from the Mashups4JSF components, so the mashup services can be used independently.

This article illustrates the architecture of Mashups4JSF, the configuration of the library, and how to create a mashup application with a few lines of code using Mashups4JSF and the IBM JSF Widget Library (JWL) on the WebSphere Application Server V7.0 and JSF 2...
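
To make the 'few lines of code' claim concrete, a declarative mashup component on a JSF page could look something like the sketch below. The m: namespace and tag name are invented for illustration and are not Mashups4JSF's actual tag library:

  <html xmlns="http://www.w3.org/1999/xhtml"
        xmlns:h="http://java.sun.com/jsf/html"
        xmlns:m="http://example.org/mashups"> <!-- hypothetical tag library -->
    <h:body>
      <!-- A single hypothetical tag standing in for the JavaScript,
           feed parsing, and provider API calls described above -->
      <m:twitterTimeline user="someUser" maxResults="10"/>
    </h:body>
  </html>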

Mashups4JSF aims to offer declarative mashups to the development community as a complement to the work done by GMaps4JSF. In future articles, I will explain other features of Mashups4JSF, such as the ATOM/RSS feed producer service, give more interactive examples of the other Mashups4JSF components, and illustrate how Mashups4JSF can work inside a portlet environment..."


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation        http://www.ibm.com
ISIS Papyrus           http://www.isis-papyrus.com
Microsoft Corporation  http://www.microsoft.com
Oracle Corporation     http://www.oracle.com
Primeton               http://www.primeton.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/


