The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: August 19, 2010
XML Daily Newslink. Thursday, 19 August 2010

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation http://www.microsoft.com



W3C Launches New Web Performance Working Group
Staff, W3C Announcement

W3C has announced the formation of a new Web Performance Working Group within the W3C Rich Web Client Activity. Its principal mission is to define methods to measure aspects of application performance of user agent features and APIs. The Initial Chairs are Arvind Jain (Google) and Jason Weber (Microsoft), and the W3C Team Contact is Philippe Le Hégaret.

To be successful, the Web Performance Working Group is expected to have ten or more active participants for its duration. The Chairs and specification Editors are expected to contribute one day per week towards the Working Group. The Working Group will also allocate the necessary resources for building Test Suites for each specification. The group encourages questions and comments on its public mailing lists. The Working Group also welcomes non-Members to contribute technical submissions for consideration, with agreement from each participant to Royalty-Free licensing of those submissions under the W3C Patent Policy.

Scope: "As Web browsers and their underlying engines include richer capabilities and become more powerful, web developers are building more sophisticated applications where application performance is increasingly important. Developers need the ability to assess and understand the performance characteristics of their applications using well-defined, interoperable methods. The Web Performance Working Group's deliverables include user agent features and APIs to measure aspects of application performance. These deliverables will apply to desktop and mobile browsers and other non-browser environments where appropriate, and will be consistent with Web technologies designed in other working groups including HTML, CSS, WebApps, DAP and SVG... In order to advance to Proposed Recommendation, each specification is expected to have two independent implementations of each feature defined in the specification."

Deliverables: (1) Navigation Timing: an interoperable means for site developers to collect real-world performance information from the user agent while loading the root document of a webpage. The user agent captures the end-to-end latency associated with loading a webpage from a web server. This might include timing associated with the network, timings associated with loading the document, and information about how the page was loaded, for example the number of network requests. (2) Resource Timing: an interoperable means for site developers to collect real-world performance information from the user agent while loading resources specified from the root document of a webpage. This might include timing associated with the network and timings associated with loading the resource in the document, where a resource may be one of the following elements: iframe, img, script, object, embed, and link. (3) User Timings: an interoperable means for site developers to capture timing information with a developer-supplied name. The user agent captures the time stamp at the point and time specified in the code executing in the user agent.
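As an illustration of the Navigation Timing deliverable, the per-phase latencies are simple differences between timing marks. The attribute names below follow the Navigation Timing draft (navigationStart, domainLookupStart, and so on); the timestamp values are invented for this Python sketch, since in practice they would come from the browser:

```python
# Illustrative breakdown of page-load latency from Navigation Timing
# marks. The millisecond timestamps are invented for the example; in a
# browser they would be captured by the user agent while loading the page.
timing = {
    "navigationStart":   1282222000000,
    "domainLookupStart": 1282222000005,
    "domainLookupEnd":   1282222000030,
    "connectStart":      1282222000030,
    "connectEnd":        1282222000075,
    "requestStart":      1282222000076,
    "responseStart":     1282222000180,
    "responseEnd":       1282222000260,
    "loadEventEnd":      1282222000900,
}

def phases(t):
    """Derive per-phase latencies (ms) as differences of raw timing marks."""
    return {
        "dns":      t["domainLookupEnd"] - t["domainLookupStart"],
        "connect":  t["connectEnd"] - t["connectStart"],
        "request":  t["responseStart"] - t["requestStart"],
        "response": t["responseEnd"] - t["responseStart"],
        "total":    t["loadEventEnd"] - t["navigationStart"],
    }

print(phases(timing))
```

The "total" entry is the end-to-end latency the charter mentions; the others break it down by network phase.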

See also: the W3C Rich Web Clients Activity


AMQP 1.0 Business Messaging Recommendation Ready for Implementation
Staff, Advanced Message Queuing Protocol Team Announcement

An announcement on the Advanced Message Queuing Protocol (AMQP) public web site notes that AMQP Version 1.0 has been voted to 'Recommended' status: "This means that the AMQP protocol has reached the stage where it is stable enough to build product to. We expect middleware implementors to begin engineering products to the AMQP 1.0 specification." In addition, a ballot has been opened for PMC10, Adoption of Testing Suite for AMQP 1.0 messaging specification: "This vote, if successful, indicates acceptance of the testing suite proposal required to fulfil step 9 of the agreed process with respect to the AMQP 1.0 messaging specification, which was voted to 'Recommendation' by resolution PMC09."

"AMQP is an open Internet Protocol for Business Messaging. The AMQP Working Group collaborates on specifications for messaging infrastructure that provides a facility for connecting messaging-dependent applications. AMQP's scope covers messaging within and between firms, with applicability to both business and infrastructure messaging...

Though many networking protocol needs have been addressed, a large gap exists in common guaranteed-delivery messaging middleware. AMQP fills that gap... AMQP enables complete interoperability for messaging middleware: both the networking protocol and the semantics of broker services are defined in AMQP. The AMQP model explicitly defines the server's semantics because interoperability demands the same semantics for any server implementation. To enable technology-neutral interoperability, AMQP defines an efficient wire-level format with modern features..."

From the Recommendation Specification (2010-08-17 19:17:19, SVN 1093): "AMQP is divided up into separate layers. At the lowest level we define an efficient binary peer-to-peer protocol for transporting messages between two processes over a network. Secondly, we define an abstract message format, with concrete standard encoding. Every compliant AMQP process must be able to send and receive messages in this standard encoding..." Corporations participating in the AMQP Working Group are: Bank of America, N.A.; Barclays Bank PLC; Cisco Systems, Inc.; Credit Suisse; Deutsche Boerse Systems; Envoy Technologies Inc.; Goldman Sachs; INETCO Systems Limited; Informatica Corporation; JPMorgan Chase Bank & Co.; Microsoft Corporation; Novell; Progress Software; Rabbit Technologies (VMware); Red Hat, Inc.; Software AG; Solace Systems, Inc.; Tervela, Inc.; TWIST Process Innovations; VMware, Inc.; WSO2, Inc.; and 29West, Inc. (Informatica).
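The layering the specification describes, a binary transport framing distinct from a standard message encoding, can be illustrated with a toy Python sketch. To be clear, this is not the AMQP 1.0 wire format, whose frames and type system are defined by the specification itself; it only mimics the separation of layers:

```python
# Toy illustration of protocol layering: a binary framing layer that
# carries a separately-encoded message body. Real AMQP 1.0 frames carry
# size, data-offset, type, and channel fields, and bodies use AMQP's own
# type encoding; this sketch only shows the idea of two distinct layers.
import struct

def frame(payload: bytes) -> bytes:
    """Transport layer: length-prefix the encoded message (big-endian u32)."""
    return struct.pack(">I", len(payload)) + payload

def unframe(data: bytes) -> bytes:
    """Peer side: read the length prefix, then extract the body."""
    (size,) = struct.unpack(">I", data[:4])
    return data[4:4 + size]

# Message layer: some standard encoding of the application message
# (UTF-8 text stands in for AMQP's type system here).
message = "settle trade 42".encode("utf-8")
wire = frame(message)
assert unframe(wire).decode("utf-8") == "settle trade 42"
```

Because both layers are fully specified, any compliant peer can decode what any other peer sends; that is the interoperability claim behind defining both the wire protocol and the message encoding in one standard.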

See also: notes on the AMQP 'Recommendation' Status


IETF Internet Draft for Internationalized Email Headers
Abel YANG and Shawn Steele, IETF Internet Draft

Members of the IETF Email Address Internationalization (EAI) Working Group have published a -02 level specification for Internationalized Email Headers, intended ultimately to obsolete RFC 5335.

From the document abstract: "Full internationalization of electronic mail requires not only the capabilities to transmit non-ASCII content, to encode selected information in specific header fields, and to use non-ASCII characters in envelope addresses. It also requires being able to express those addresses and the information based on them in mail header fields. This document specifies a variant of Internet mail that permits the use of Unicode encoded in UTF-8, rather than ASCII, as the base form for Internet email header fields. This form is permitted in transmission only if authorized by an SMTP extension, as specified in an associated specification..."

Background: "Mailbox names often represent the names of human users. Many of these users throughout the world have names that are not normally expressed with just the ASCII repertoire of characters, and would like to use more or less their real names in their mailbox names. These users are also likely to use non-ASCII text in their common names and subjects of email messages, both received and sent. This protocol specifies UTF-8 as the encoding to represent email header field bodies.

The traditional format of email messages allows only ASCII characters in the header fields of messages. This prevents users from having email addresses that contain non-ASCII characters. It further forces non-ASCII text in common names, comments, and in free text (such as in the Subject: field) to be encoded (as required by MIME format per RFC 2047). This specification describes a change to the email message format that is related to the SMTP message transport change described in the associated document, and that allows non-ASCII characters in most email header fields. These changes affect SMTP clients, SMTP servers, mail user agents (MUAs), list expanders, gateways to other media, and all other processes that parse or handle email messages."
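The legacy encoding referred to above can be seen with Python's standard email module: today a non-ASCII Subject must be wrapped in an ASCII-only RFC 2047 "encoded word", which the EAI work would make unnecessary whenever the new SMTP extension is negotiated. The sample string is arbitrary:

```python
# How a non-ASCII Subject is carried under the traditional rules: as an
# RFC 2047 encoded word, produced here with the standard email.header
# module. The EAI drafts instead allow the raw UTF-8 form on the wire
# when the corresponding SMTP extension has been negotiated.
from email.header import Header, decode_header

subject = "Grüße"                      # non-ASCII text a user might type
encoded = Header(subject, "utf-8").encode()
print(encoded)                         # ASCII-only =?utf-8?...?= form

# The receiving agent reverses the transformation:
(raw_bytes, charset), = decode_header(encoded)
print(raw_bytes.decode(charset))       # back to the original text
```

Note that the encoded form contains only ASCII characters, which is exactly the constraint the EAI specifications are designed to lift.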

See also: the IETF Email Address Internationalization (EAI) Working Group


OGC Seeks Comments on Sensor Web Enablement (SWE) Candidate Standard
Staff, Open Geospatial Consortium Announcement

The Open Geospatial Consortium members are seeking comments on the specification Earth Observation Satellite Tasking Extension for OGC Sensor Planning Service (SPS). The SPS configuration proposed in this profile "supports the programming of Earth Observation (EO) sensor systems. The candidate standard describes a single SPS configuration that can be supported by many satellite data providers who have existing facilities for managing sensor system programming requests.

This SPS standard defines interfaces for queries that provide information about the capabilities of a sensor and how to task the sensor, where the sensor may be any type of sensor with a digital interface. The SPS and EO-SPS standards are part of the OGC Sensor Web Enablement (SWE) suite of standards. SWE standards enable developers to describe, discover, task, and access any Internet or Web accessible sensor, transducer and sensor data repository.

Sensor technology, computer technology and network technology are advancing together while demand grows for ways to connect information systems with the real world. Linking diverse technologies in this fertile market environment, integrators are offering new solutions for plant security, industrial controls, meteorology, geophysical survey, flood monitoring, risk assessment, tracking, environmental monitoring, defense, logistics and many other applications.

Other (pending) SWE standards from OGC include: (1) Observations & Measurements (O&M): general models and XML encodings for observations and measurements; (2) Sensor Model Language (SensorML): standard models and XML Schema for describing the processes within sensor and observation processing systems; (3) Transducer Markup Language (TML): a conceptual model and XML encoding supporting real-time streaming of observations and tasking commands to and from sensor systems; (4) Sensor Observation Service (SOS): an open interface for a web service to obtain observations and sensor and platform descriptions from one or more sensors..."

See also: OGC Sensor Web Enablement (SWE)


RightScale Scales Up To 1.3 Million Servers In View
Charles Babcock, InformationWeek

RightScale announced that it was managing over one million servers in the cloud through its management platform. That would be a million virtual, not physical, machines, but the number is still impressive. Maybe the return on cloud computing lies in supplying front-end management as much as in infrastructure.

RightScale has built the right tools and management platform to package up enterprise workloads and launch them in the cloud. It does not operate a cloud infrastructure itself, with server hardware and disks. Rather, it can help a customer select a database system from its application catalogue, such as IBM's DB2, tie it to an application server and Web server with a load balancer, and send the combination to Amazon's EC2...

If the customer went straight to Amazon, he'd have to provision each server himself and hope they worked together. RightScale vouches for the integration and monitors the workload. Through the load balancer, it adds another application server if demand warrants it...

RightScale on August 17, 2010 added Windows to its management platform. It had previously managed Linux workloads. CTO Thorsten von Eicken said in his blog that RightScale has been chipping away at a mountain of work to support Windows 2003 and 2008.


Business Intelligence on the Cheap with Apache Hadoop and Dojo
Michael Galpin, IBM developerWorks

This article provides a simple example of crunching big data with Hadoop: an easy way to process large amounts of data that can provide valuable insight into your business. In this article, we use Hadoop's map/reduce capabilities directly. If you start using Hadoop for many use cases, you will definitely want to look at higher-order frameworks built on top of Hadoop that make it easier to write map/reduce jobs.

Two such excellent, open source frameworks are Pig and Hive, both of which are also Apache projects. Both use a more declarative syntax that requires much less programming. Pig is a data-flow language developed by members of the core Hadoop team at Yahoo, while Hive is more SQL-like and was developed at Facebook. You can use either of these, or raw map/reduce jobs, to produce output that can be consumed by a web application...

No matter what kind of business you have, the importance of understanding your customers and how they interact with your software cannot be overstated... Once you know what you want to measure and analyze, you need to make sure your application is emitting the data needed to quantify your users' behavior. You may be lucky and already be emitting this data: perhaps everything you need is already being emitted as part of some kind of transaction in your system and is being recorded to a database, or perhaps it is already being written to an application or system log. Often neither of these will be the case, and you will need to modify your system configuration or your application to log or record the information you need.

Once you have crunched all of the data, you may be left with very valuable information in a concise format, but it's probably sitting in a database or in an XML file on a filer somewhere. That may be fine for you, but chances are that this data needs to be put into the hands of business analysts and executives in your company. They are going to expect a report of some sort that is both interactive and visually appealing... Hadoop is based on the map/reduce paradigm. The idea is to take some unwieldy set of data, transform it into just the data that you are interested in (this is the map step), and then aggregate the result (this is the reduce step). In this example you start with Apache access logs and turn them into a dataset that just contains how many requests you are getting from the various browsers...
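The map/reduce flow described above (map access-log lines to (browser, 1) pairs, then reduce by summing per key) can be sketched in plain Python. The log lines and the crude User-Agent matching below are invented for illustration; a real Hadoop Streaming job would split this into separate mapper and reducer scripts reading stdin:

```python
# Toy map/reduce pass over Apache access-log lines in the style of a
# Hadoop Streaming job: map each line to a (browser, 1) pair, then
# reduce by summing counts per key. Lines and matching rules are made up.
from collections import defaultdict

LOG_LINES = [
    '1.2.3.4 - - [19/Aug/2010] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0 (Windows) Firefox/3.6"',
    '5.6.7.8 - - [19/Aug/2010] "GET /a HTTP/1.1" 200 128 "-" "Mozilla/5.0 (X11) Chrome/5.0"',
    '9.9.9.9 - - [19/Aug/2010] "GET /b HTTP/1.1" 404 64 "-" "Mozilla/4.0 (compatible; MSIE 8.0)"',
    '1.2.3.4 - - [19/Aug/2010] "GET /c HTTP/1.1" 200 256 "-" "Mozilla/5.0 (Windows) Firefox/3.6"',
]

def map_line(line):
    """Map step: extract a browser family from the User-Agent field."""
    for browser in ("Firefox", "Chrome", "MSIE"):
        if browser in line:
            return (browser, 1)
    return ("Other", 1)

def reduce_counts(pairs):
    """Reduce step: sum the 1s emitted for each key."""
    totals = defaultdict(int)
    for key, count in pairs:
        totals[key] += count
    return dict(totals)

counts = reduce_counts(map_line(l) for l in LOG_LINES)
print(counts)  # {'Firefox': 2, 'Chrome': 1, 'MSIE': 1}
```

On a cluster, Hadoop runs the map step on many machines in parallel and sorts the pairs by key before the reduce step; the per-browser totals are what a Dojo chart on the reporting side would consume.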

See also: Apache Hadoop


Standards and Open Source for Cloud Computing
Dave West, InfoQ

"Three recent announcements highlight the evolving cloud ecosystem in favor of openness and standards. Red Hat has moved its Deltacloud effort to the Apache Incubator. According to David Lutterkort, 'The main reason for this move is that we've heard from several people that they really liked the idea of Deltacloud, and the concept of a true open source cloud API, but didn't like it as a Red Hat project. Apache Incubator is a well-established place for parties with different interests to collaborate on a specific code base, so it seemed the logical place to address these concerns'...

Rackspace announced its OpenStack project: OpenStack is an open-source cloud platform designed to foster the emergence of technology standards and cloud interoperability. Lew Moorman [President, Cloud and CSO at Rackspace] says 'We are founding the OpenStack initiative to help drive industry standards, prevent vendor lock-in and generally increase the velocity of innovation in cloud technologies'...

The Distributed Management Task Force (DMTF) has released two documents, 'Architecture for Managing Clouds' and 'Use Cases and Interactions for Managing Clouds', that are intended to lay the groundwork for DMTF's next step: naming an API working group to draft APIs for infrastructure as a service. OpenStack and Apache Deltacloud have similar goals: building lightweight REST APIs that allow cloud provider access over HTTP... Standards and open source projects for cloud computing infrastructure and management seem to be needed, and needed quickly..."

Note: The OASIS Identity in the Cloud Technical Committee was chartered recently "to harmonize definitions/terminologies/vocabulary of Identity in the context of Cloud Computing; to identify and define use cases and profiles; and to identify gaps in existing Identity Management standards as they apply in the cloud..." A draft set of use cases is available for review on the TC Wiki.

See also: the OASIS ID-Cloud Use Case Categories for cloud standardization


Data Storage at Density Equivalent to 2.5 Terabits Per Square Inch
Martyn Williams, InfoWorld

"Toshiba detailed a breakthrough in data storage that it says paves the way for hard drives with vastly higher capacities than today's. The breakthrough has been made in research on bit-patterned media, a magnetic storage technology being developed for future hard disk drives.

In today's drives, magnetic material is spread across the surface of the disk and bits of data are stored across several hundred magnetic grains, but the technology is reaching its limit.

Prototypes of the media have been made before but Toshiba says its engineers have, for the first time, succeeded in producing a media sample in which the magnetic bits are organized into a pattern of rows.

Toshiba's sample media is still in the prototype stage, but is built at a density equivalent to 2.5 terabits per square inch. Contrast that with Toshiba's current highest-capacity drive, which is based on existing technology and has a density of 541 gigabits per square inch, about one-fifth that of the new technology. Toshiba expects the first drives based on bit-patterned media to hit the market around 2013..."


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
ISIS Papyrus http://www.isis-papyrus.com
Microsoft Corporation http://www.microsoft.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2010-08-19.html
Robin Cover, Editor: robin@oasis-open.org