The OASIS Cover Pages: The Online Resource for Markup Language Technologies

Last modified: September 10, 2010
XML Daily Newslink. Friday, 10 September 2010

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
ISIS Papyrus

Mobile Development Tool RESTs on CouchDB
Ted Samson, InfoWorld

"With computing devices continuing to emerge in varying shapes and sizes running competing mobile platforms, developing apps for these items keeps getting trickier. Ideally, you could code an application once and have it run fluidly on any device, be it a smartphone, a mini-pad with a 5-inch display, or a full-size tablet. Yet even getting a native iPhone app to display properly on an iPad is no easy task.

CouchOne (formerly CouchIO) has announced a new mobile app dev platform called CouchMobile, aimed at easing cross-platform development by allowing programmers to write Web applications one time, scale horizontally, and share data and applications across any computing platform or mobile device they choose, including the cloud.

CouchMobile is based on the highly respected CouchDB, CouchOne's post-relational database for writing HTML5 applications. CouchDB includes replication and sync features to boost and maintain application performance when network connections are slow, spotty, or down. Its ties to CouchDB are a strong advantage, but CouchMobile's success will likely depend on how well it integrates with the top mobile platforms. According to the company, CouchDB integrates with Android; HP's webOS will support syncing of locally stored data..."

Related from Cloudant: "Cloudant has just released a Java View Server for CouchDB. This means that Map-Reduce jobs can be written not only in Erlang and interpreted languages like JavaScript or Python, but also in JVM-based languages. The approach will be discussed at the CouchDB community meeting this week... The main advantage cited is the massive number of Java libraries available for all kinds of functionality that could be relevant in map-reduce tasks. The second is the more reliable static typing, but that needs to be proven..."
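The view-server contract the Cloudant note refers to is easy to see in miniature: a map function emits key/value pairs per document, and a reduce function folds the values per key; which language the map function is written in (JavaScript, Erlang, or a JVM language) is independent of that contract. A minimal emulation in plain Python (no CouchDB involved; the document shapes are made up for illustration):

```python
from collections import defaultdict

def map_doc(doc, emit):
    """Map step: emit one (key, value) pair per tag on a document."""
    for tag in doc.get("tags", []):
        emit(tag, 1)

def reduce_values(values):
    """Reduce step: sum the counts emitted for each key."""
    return sum(values)

def run_view(docs):
    """Run the map function over every document, then reduce per key."""
    emitted = defaultdict(list)
    for doc in docs:
        map_doc(doc, lambda k, v: emitted[k].append(v))
    return {key: reduce_values(vals) for key, vals in emitted.items()}

docs = [
    {"_id": "a", "tags": ["couchdb", "mobile"]},
    {"_id": "b", "tags": ["couchdb"]},
]
print(run_view(docs))  # {'couchdb': 2, 'mobile': 1}
```

In CouchDB itself the map and reduce functions live in a design document and the server handles incremental re-indexing; the sketch only shows the functional shape.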

See also: Cloudant's Java-based view server for CouchDB

Associating Style Sheets with XML Documents 1.0 (Second Edition)
James Clark, Simon Pieters, Henry S. Thompson (eds), W3C PER

W3C announced the publication of the specification Associating Style Sheets with XML Documents 1.0 (Second Edition) as a W3C Proposed Edited Recommendation. This document "describes how style sheets may be associated with an XML document by including one or more processing instructions with a target of 'xml-stylesheet' in the XML document's prolog. Authors might have particular intentions as to how user agents are to present the information contained in their XML documents. This specification provides a non-intrusive mechanism, using a processing instruction, to provide links to one or more style sheets, i.e., resources specifying the desired rendering in a designated language. User agents will use these resources to control presentation of XML." Public comment is invited through October 14, 2010.

This second edition incorporates all known errata as of the publication date, clarifies several areas left unspecified in the earlier edition, and has been restructured to allow other specifications to reuse the rules for parsing pseudo-attributes from a string.
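The pseudo-attribute syntax being factored out for reuse is essentially name="value" pairs inside the processing instruction's data. A rough illustrative parser, assuming well-formed input and skipping the spec's error handling entirely:

```python
import re

# Matches name="value" or name='value' pseudo-attribute pairs.
# This pattern is a simplification of the spec's grammar.
PSEUDO_ATTR = re.compile(r'(\w[\w.-]*)\s*=\s*("([^"]*)"|\'([^\']*)\')')

def parse_pseudo_attrs(pi_data):
    """Return a dict of pseudo-attributes from a PI's data string."""
    return {
        m.group(1): m.group(3) if m.group(3) is not None else m.group(4)
        for m in PSEUDO_ATTR.finditer(pi_data)
    }

# Data of a PI such as <?xml-stylesheet href="common.css" type="text/css"?>
pi = 'href="common.css" type="text/css" media="screen"'
attrs = parse_pseudo_attrs(pi)
print(attrs["href"], attrs["type"])  # common.css text/css
```

A conforming processor must additionally detect duplicated pseudo-attributes and malformed input, which this sketch does not attempt.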

Some of the major changes which have been made since the first edition: (1) Provided definitions for a number of terms used but not defined in the first edition; (2) Added a conformance section, distinguishing between processor and document conformance, all of which was implicit in the first edition; (3) Identified a number of error cases, which were implicit in the first edition's appeal to the parallel with element start tag processing, and specified expected processor behaviour...

(4) In recognition of deployed processor behaviour, allowed 'xml-stylesheet' processing instructions to be ignored unless they are among the children of the document information item; (5) Added a number of references, but removed the explicit dependence on the HTML 4.0 specification by adding descriptions of the meanings of each of the pseudo-attributes consistent with their HTML 4.0 use but brought up-to-date; (6) Removed the (non-normative) Rationale section, as it contained a number of out-of-date assumptions; (7) Made the type pseudo-attribute optional, as agreed by existing erratum..."

Manage Amazon Identity and Access Management (IAM) with CloudBerry
Staff, Announcement

"As always, we are trying to stay on top of the new functionality offered by Amazon S3 to offer the most compelling Amazon S3 and CloudFront client on the Windows platform. Identity and Access Management (IAM) is a new addition to the AWS family that supports: (1) Identity management: enables you to manage your private identity space under your AWS account. It will allow you to create, update, and delete both identities and groups in your own space. (2) Capability management: allows you to control what permissions individual identities will have in your AWS environment. (3) Least privilege: enables you to lock down your AWS environment and provide identities with only the least privilege required when accessing AWS resources under your control. (4) Per-user usage tracking: enables you to track usage of your AWS resources on a per-identity basis...
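The least-privilege idea in item (3) is expressed through policy documents attached to identities. As a sketch, here is a hypothetical policy granting an identity read-only access to a single S3 bucket; the field names follow the published AWS policy grammar, but the bucket name is invented for illustration:

```python
import json

# Hypothetical least-privilege policy: the identity may only list the
# bucket and read its objects. "example-bucket" is a made-up name.
policy = {
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ]
}

# Policies are submitted to IAM as JSON documents.
print(json.dumps(policy, indent=2))
```

Anything not explicitly allowed is denied, which is what makes this "least privilege": the identity cannot touch EC2, SQS, or any other bucket.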

In the newer release of CloudBerry Explorer we are introducing support for the IAM service. Although our focus is on S3, you can use CloudBerry Explorer to manage the IAM service as it relates to other AWS services such as EC2, SimpleDB, SQS, etc.

Users can have any combination of credentials that AWS supports—an AWS access key, an X.509 certificate, a password for web app logins, or a Multi-Factor Authentication (MFA) device for stronger authentication. This allows users to interact with AWS in any manner that makes sense for them: an employee might have both an AWS access key and a password; a software system might have only an AWS access key to make programmatic calls; and an outside contractor might have only an X.509 certificate to use the EC2 command-line interface...

Content Invalidation allows you to remove an object from CloudFront edge locations prior to the expiration time set on that object. Invalidation is designed for unexpected cases where you need to remove an object from an edge location. For instance, you might use invalidation to fix an encoding error on a video you uploaded with a long expiration period, or to update the CSS file for your website if it changes unexpectedly..."

See also: the AWS IAM FAQ document

Flexible Routing in the Cloud
George Lawton, IEEE Computing Now

"Cloud computing gives businesses flexible resource access that lets them offer multitiered pricing for different infrastructure classes of services such as CPU, storage, and total network throughput. However, network latency remains an uncontrolled variable in cloud computing because no tools yet exist to offer the same kind of flexibility for different network service classes.

At Georgia Institute of Technology, researchers are working to develop a Transit Portal (TP) that will let cloud applications dynamically change network routes to meet a particular application's need and a user's willingness to pay more for it...

Latency measures the lag that packets experience in traveling the network. In a perfect world, packets would travel to their destinations at the speed of light, 186,000 miles per second. But in the real world, physical, electronic, and topological characteristics of the route slow packets down. Latency can be mission critical in some highly interactive applications. For example, some stock market trading firms are starting to move their computer trading facilities as physically close to the stock exchanges as possible to minimize the travel time. Flexible network routing would let companies buy a higher class of route when an application requires it and pay less for applications such as data backup...
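The speed-of-light figure puts a hard floor under latency that no routing class can beat, which is why trading firms relocate physically. A back-of-the-envelope calculation (the 700-mile distance is a rough illustrative number, and real fiber paths are slower still, since light in fiber travels at roughly two-thirds of its vacuum speed):

```python
SPEED_OF_LIGHT_MPS = 186_000  # miles per second, in vacuum

def min_rtt_ms(miles):
    """Minimum possible round-trip time in milliseconds over a distance,
    assuming signals travel at the vacuum speed of light."""
    return 2 * miles / SPEED_OF_LIGHT_MPS * 1000

# New York to Chicago is roughly 700 miles as the crow flies:
print(round(min_rtt_ms(700), 2))  # 7.53 (ms)
```

Observed round trips over that distance are typically several times this floor, and that controllable excess is the part a service like Transit Portal could let applications pay to reduce.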

TP could also be useful outside the cloud to large-scale Internet service providers who want control over how Internet traffic reaches their systems—especially for latency-sensitive applications such as game servers and content distribution... Large network operators typically have network points of presence in geographically dispersed locations, which are connected to other network operators by very expensive carrier-grade routers. A TP connects to special routers at multiple locations to mimic this functionality for smaller users. Disaster recovery is another TP application area..."

Session Initiation Protocol (SIP) Recording Metadata
Ram Mohan R, Parthasarathi R, P. Kyzivat (eds), IETF Internet Draft

Members of the IETF SIP Recording (SIPREC) Working Group have published an initial level -00 specification for Session Initiation Protocol (SIP) Recording Metadata. This IETF working group was chartered to determine requirements and produce a specification for a protocol that will manage delivery of media from an end-point that originates media, or that has access to it, to a recording device. PBX and recording vendors today implement proprietary, incompatible mechanisms to facilitate recording. A standard protocol will reduce the complexity and cost of providing such recording services... Privacy and security of conversations are significant concerns. The working group will make sure that any protocol specified addresses these concerns and includes mechanisms to alert users to the fact that a session they are participating in is being recorded...

Session recording is a critical requirement in many communications environments such as call centers and financial trading. In some of these environments, all calls must be recorded for regulatory, compliance, and consumer protection reasons. Recording of a session is typically performed by sending a copy of a media stream to a recording device. This document describes the metadata model as viewed by the Session Recording Server (SRS).

A Recording Session Group represents a collection of related Recording Sessions maintained by the SRS. Recording Session Groups are optional—they need not be present in the common case where Recording Sessions are independent of one another. A group with multiple sessions might arise when recorders of the same or different communication sessions independently initiate Recording Sessions....
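The grouping described above can be pictured as a simple containment relationship. A hypothetical sketch in Python (class and field names are illustrative, not taken from the draft's actual metadata schema):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RecordingSession:
    """One Recording Session, tied to the communication session (CS)
    whose media it records."""
    session_id: str
    communication_session_id: str

@dataclass
class RecordingSessionGroup:
    """Optional grouping of related Recording Sessions at the SRS."""
    group_id: str
    sessions: List[RecordingSession] = field(default_factory=list)

# Two independently initiated recordings of the same communication
# session end up in one group:
group = RecordingSessionGroup("grp-1")
group.sessions.append(RecordingSession("rs-1", "cs-42"))
group.sessions.append(RecordingSession("rs-2", "cs-42"))
print(len(group.sessions))  # 2
```

In the independent case the group layer simply disappears, which is why the draft makes it optional.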

Security Considerations: "The Recording Session is fundamentally a standard SIP dialog and media session and therefore makes use of existing SIP security mechanisms for securing the Recording Session and Media Recording Metadata..."

See also: the IETF Session Recording Protocol (SIPREC) Working Group

French Start-Up BonitaSoft Bringing Open-Source BPM to U.S.
Dave Rosenberg, CNET

"A relatively new French open-source start-up, BonitaSoft, a maker of open-source business process management (BPM) software, is set to soon make landfall in the U.S. BonitaSoft aims to provide an open-source alternative to the proprietary suites from the likes of IBM, Oracle, and SAP that dominate the BPM market. Other French open-source start-ups in the U.S. market include Talend and eXo.

The company is built around the open-source Bonita project, first developed in 2001 at the French National Institute for Research in Computer Science and Control (INRIA). The development team was then hired by French software giant Bull to help develop business process applications, but was eventually spun off to develop its own specialized BPM offering.

BonitaSoft was founded in June 2009 and is led by CEO and Bonita co-creator Miguel Valdés Faura. As an indicator of the level of interest in a commercial offering, Valdés points to the surge in downloads it has seen since its first commercial release at the beginning of the year, averaging 80,000 downloads per month; as a comparison, it took eight years for the Bonita project to reach 100,000 downloads..."

According to Jeremy Lipp's blog: "Cloud computing is a key subject of interest for us at BonitaSoft, as it gives users a lot of benefits such as improved QoS, high availability, flexibility, and fast deployment of new services without IT staff being overwhelmed by infrastructure issues. The BonitaSoft service team has successfully deployed Bonita Open Solution on Windows Azure and has also implemented it on other private and public cloud platforms such as Amazon EC2..."

See also: the BonitaSoft web site

IP Flow Information Export (IPFIX) Mediation: Problem Statement
Atsushi Kobayashi and Benoit Claise (eds), IETF Approved RFC

IETF announced that a new Informational Request for Comments (RFC) specification is now available in online RFC libraries: IP Flow Information Export (IPFIX) Mediation: Problem Statement. This document "describes some problems related to flow-based measurement that network administrators have been facing, and then it describes IPFIX Mediation applicability examples along with the problems.

Flow-based measurement is a popular method for various network monitoring usages. The sharing of flow-based information for monitoring applications having different requirements raises some open issues in terms of measurement system scalability, flow-based measurement flexibility, and export reliability that IP Flow Information Export (IPFIX) Mediation may help resolve. IP Flow Information Export (IPFIX) Mediation fills the gap between restricted metering capabilities and the requirements of measurement applications by introducing an intermediate device called the IPFIX Mediator."

Background: "An advantage of flow-based measurement is that it allows monitoring large amounts of traffic observed at distributed Observation Points. While flow-based measurement can be applied to any one of various purposes and applications, it is difficult to apply it to multiple applications with very different requirements in parallel. Network administrators need to adjust the parameters of the metering devices to fulfill the requirements of every single measurement application. Such configurations are often not supported by the metering devices, either because of functional restrictions or because of limited computational and memory resources, which inhibit the metering of large amounts of traffic with the desired setup..."
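Conceptually, the IPFIX Mediator sits between the metering devices and the measurement applications, reshaping raw flow records into the form one application needs so the metering devices don't have to. A toy illustration (the record fields are simplified stand-ins for IPFIX Information Elements, not the real protocol encoding):

```python
from collections import defaultdict

# Raw flow records as a metering device might export them.
raw_flows = [
    {"src": "10.0.0.1", "dst": "10.0.0.9", "bytes": 1200},
    {"src": "10.0.0.1", "dst": "10.0.0.7", "bytes": 800},
    {"src": "10.0.0.2", "dst": "10.0.0.9", "bytes": 500},
]

def mediate(flows):
    """Mediator step: aggregate flow records into per-source byte
    totals for an application that only needs that granularity."""
    totals = defaultdict(int)
    for flow in flows:
        totals[flow["src"]] += flow["bytes"]
    return dict(totals)

print(mediate(raw_flows))  # {'10.0.0.1': 2000, '10.0.0.2': 500}
```

Several mediators with different aggregation rules can consume the same raw export stream, which is how one set of metering devices can serve applications with very different requirements.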

The IETF IP Flow Information Export (IPFIX) Working Group, part of the IETF Operations and Management Area "has specified an Information Model to describe IP flows and the IPFIX protocol to transfer IP flow data from IPFIX exporters to collectors. Several implementers have already built applications using the IPFIX protocol. As a result of a series of IPFIX interoperability testing events the WG has produced guidelines for IPFIX implementation and testing as well as recommendations for handling special cases such as bidirectional flow reporting and reducing redundancy in flow records... The major current goal of the WG is developing solutions that meet the new requirements without modifying the core IPFIX protocol specifications. For example, the Working Group is developing an XML-based configuration data model that can be used for configuring IPFIX devices and for storing, modifying and managing IPFIX configurations parameter sets; this work is performed in close collaboration with the NETCONF WG..."

See also: the IETF IP Flow Information Export (IPFIX) Working Group Status Pages

HTML5: Getting to Last Call
Philippe Le Hégaret and Maciej Stachowiak, Blog

"We started to work on HTML5 back in 2007 and have been going through issues since then. In November 2009, the HTML Chairs instituted a decision policy, which allowed us to close around 20 issues. We now have around 200 bugs and 25 issues on the document.

In order to drive the Group to Last Call, the HTML Chairs, following advice from the W3C Team, produced a timeline to get to the initial Last Call for HTML5. The W3C Team expresses its strong support for the chairs of the HTML Working Group in their efforts to lead the group toward an initial Last Call according to the published timeline. All new bugs related to the HTML5 specification received after the first of October 2010 will be treated as Last Call comments, with possible exceptions granted by the Chairs. The intention is to get to the initial Last Call and have a feature-complete document... We encourage everyone to send bugs prior to October 1 and keep track of them in order to escalate them to the Working Group if necessary..."

From the memo of the HTML WG co-Chairs to the WG: "The Chairs have been discussing with the team the need for a timeline to get to Last Call. The W3C Team has strongly urged us to create a timeline to drive an initial Last Call candidate, and we agree that this is essential to progress HTML5.

The key aspect of this timeline is that there will be a cutoff date for bugs to be considered before Last Call; any bugs after that date will instead be treated as Last Call feedback. That date is October 1, 2010. Any bug in the system by that date (even if reopened, resolved, or escalated) will be resolved before Last Call, and we will fully process any that are escalated to issues. Each date in this timeline is a deadline with a consequence. If the date is missed, the consequence will be applied. If there are any issues that WG Members feel are critical to address before Last Call, then it is essential to make sure they are in the bug system ASAP and that other required milestones are met..."

See also: Maciej Stachowiak's memo

NSF: Time for an Internet Do-Over
Bob Brown, Network World

"The U.S. National Science Foundation (NSF) has doled out grants worth up to $32 million in total to a pack of universities dedicated to rethinking everything about the Internet, from its core routing system to its security architecture, and to addressing the emergence of cloud computing and an increasingly mobile society."

From the NSF announcement: "Earlier this year, NSF challenged the network science research community to look past the constraints of today's networks and engage in collaborative, long-range, transformative thinking inspired by lessons learned and promising new research ideas. The Directorate for Computer and Information Science and Engineering (CISE) at the National Science Foundation (NSF) announced today awards for four new projects as part of the Future Internet Architecture (FIA) program. These awards will enable researchers at dozens of institutions across the country to pursue new ways to build a more trustworthy and robust Internet.

The FIA projects include leaders in computer science and electrical engineering as well as experts in law, economics, security, privacy, and public policy. The program will support 60 researchers at over 30 institutions across the country. The four basic research and system design projects funded under FIA explore different dimensions of the network architecture design space and emphasize different visions of future networks. NSF anticipates that the teams will explore new directions and a diverse range of research thrusts within their research agenda but also work together to enhance and possibly integrate architectural thinking, concepts and components, paving the way to a comprehensive trustworthy network architecture of the future.

Awards: (1) Named Data Networking: The NDN architecture moves the communication paradigm from today's focus on 'where' to 'what', i.e., the content that users and applications care about. By naming data instead of their location (IP address), NDN transforms data into first-class entities. While the current Internet secures the communication channel or path between two communication points and sometimes the data with encryption, NDN secures the content and provides essential context for security. (2) MobilityFirst: proposes an architecture centered on mobility as the norm, rather than the exception. The architecture uses generalized delay-tolerant networking (GDTN) to provide robustness even in the presence of link/network disconnections. GDTN, integrated with the use of self-certifying public key addresses, provides an inherently trustworthy network. (3) NEBULA: an architecture in which cloud computing data centers are the primary repositories of data and the primary locus of computation; data centers are connected by a high-speed, extremely reliable and secure backbone network [with] new trustworthy data, control and core networking approaches to support the emerging cloud computing model of always-available network services. (4) The eXpressive Internet Architecture (XIA): addresses the growing diversity of network use models and the need for trustworthy communication; provides intrinsic security in which the integrity and authenticity of communication is guaranteed [with] flexible context-dependent mechanisms for establishing trust between the communicating principals, bridging the gap between human and intrinsically secure identifiers..."
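The "intrinsic security" thread running through NDN and XIA can be illustrated with a toy content-naming scheme: if a name embeds a cryptographic hash of the content's bytes, any receiver can verify integrity from the name alone, with no trust in the channel the data arrived over. This sketch is purely illustrative and matches neither project's actual naming format:

```python
import hashlib

def name_for(content: bytes) -> str:
    """Derive a self-certifying name from the content itself."""
    return "sha256:" + hashlib.sha256(content).hexdigest()

def verify(name: str, content: bytes) -> bool:
    """Check received content against the name it was requested by."""
    return name == name_for(content)

data = b"future internet architecture"
name = name_for(data)
print(verify(name, data))         # True
print(verify(name, b"tampered"))  # False
```

Contrast this with today's model, where security properties attach to the channel (e.g., TLS to a host) rather than to the data, so the guarantee evaporates once the data is cached or relayed.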

See also: the NSF announcement


XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation
ISIS Papyrus
Microsoft Corporation
Oracle Corporation


Hosted By
OASIS - Organization for the Advancement of Structured Information Standards
