A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover
This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation http://www.microsoft.com
Headlines
- W3C Last Call for Widget Packaging and Configuration Specification
- Under-Estimating XML as Just a Tree
- ApacheCon North America 2010: Servers, The Cloud, and Innovation
- The Case Against Data Lock-in
- HTML5: The Jewel in the Open Web Platform
- New IESG Statement on the Selection and Role of Document Shepherds
- Wireless Interconnect and the Potential for Carbon Nanotubes
W3C Last Call for Widget Packaging and Configuration Specification
Marcos Cáceres (ed), W3C Technical Report
Members of the W3C Web Applications Working Group have published a Last Call Working Draft for the Widget Packaging and Configuration specification. The Last Call review period ends on October 26, 2010. The purpose of this Last Call is to give interested parties an opportunity to review changes made during the Candidate Recommendation (CR) phase. Once two or more implementers demonstrate that they can pass the test suite, the Working Group intends to progress this specification to Proposed Recommendation.
The "Widget Packaging and Configuration" specification standardizes a packaging format for software known as widgets. This specification "is part of the Widgets family of specifications, which together standardize widgets as a whole. Widgets are client-side applications that are authored using Web standards such as HTML5, but whose content can also be embedded into Web documents. The specification relies on PKWare's Zip specification as the archive format, XML as a configuration document format, and a series of steps that runtimes follow when processing and verifying various aspects of a package. The packaging format acts as a container for files used by a widget...
The configuration document is an XML vocabulary that declares metadata and configuration parameters for a widget. The steps for processing a widget package describe the expected behavior and means of error handling for runtimes while processing the packaging format, configuration document, and other relevant files..."
Among the changes since the last publication: "From implementation experience, the working group found that implementers were not supporting the optional elements from the ITS specification. Both the I18n and the Web Applications Working Groups strongly believe that user agents should provide means for authors to localize content, but also understand that introducing a separate namespace, which is how ITS tags were formerly specified here, is a burden for both authors and implementers. As such, since the last publication, the Working Group coordinated with the i18n Core Working Group to replace the optional dependency on the its:dir attribute and its:span element with an equivalent attribute and element in the widget namespace. This specification now defines a global dir attribute and a span element, as well as support for them in the processing model..."
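As a concrete illustration of the configuration document described above, the following is a minimal sketch, parsed with Python's standard library, of a hypothetical config.xml that exercises the new global dir attribute and span element. The widget namespace and element names follow the Last Call draft, but the widget content itself is invented for illustration.

    # Hypothetical widget configuration document; only the namespace and
    # element names come from the Last Call draft, the rest is invented.
    import xml.etree.ElementTree as ET

    CONFIG = """\
    <widget xmlns="http://www.w3.org/ns/widgets" dir="rtl">
      <name short="Demo">A right-to-left name containing an
        <span dir="ltr">embedded left-to-right fragment</span></name>
      <content src="index.html"/>
    </widget>"""

    NS = "{http://www.w3.org/ns/widgets}"
    root = ET.fromstring(CONFIG)
    print(root.get("dir"))                       # rtl
    print(root.find(NS + "content").get("src"))  # index.html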
See also: the W3C Rich Web Client Activity
Under-Estimating XML as Just a Tree
Rick Jelliffe, O'Reilly Blog
"Programmers and academics often think and theorize about XML as kind of tree data structure. And so indeed it is. But it is also allows much more: it is a series of different graph structures composed into or imposed on top of that tree... Many people only use the tree structures in XML by choice (Content, Attributes, Elements and Comments: CACE?), and then find themselves having to revisit and perhaps reinvent the same kinds of data structures provided by the bits of XML they don't use. This is neither good nor bad: it depends on the case. I am not saying that XML IDs or entities are perfect or imperfect, for example: merely that they provide since they spring out of solving particular problems, we can see them as one way of revealing a general problem or solution space...
So [I provide here] a table showing what I think are four different simultaneous data structures that are available in vanilla XML. In this view: (1) The elements in XML form a tree (a single-rooted, directed, acyclic graph with no shared nodes), and a particular kind of tree: an ordered, typeable tree with labelled nodes that can have properties (a kind of attribute-value tree?) and unique identifiers, with unlabelled, property-less text as its only possible leaves, and whose edges are unlabelled and have no properties. (2) Imposed on this we have a graph structure made using the ID/IDREF links. (3) Underneath the elements, the document is composed as a structure of parsed entities. Most XML documents are made of a single entity (one file or one Web resource), but the entity mechanism allows a document to be constructed from multiple sources of text; these parsed entities form an acyclic directed ordered graph. (4) Then, above this, are links that point outside the document (again using the entity mechanism): non-XML entities such as graphics form a star, XML documents form a graph (e.g., the Web), and I have tacked on, for completeness, the old SGML SUBDOC feature, which allows documents to be nested...
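To make layer (2) concrete, here is a minimal Python sketch of an ID/IDREF-style graph imposed on the element tree. The attribute names ("id", "ref") and the document are invented, and since no DTD is supplied the parser has no real ID typing, so the links are resolved purely by convention; note that the cross-references below are cyclic, which the tree alone could not express.

    # The tree gives each element exactly one parent; ID/IDREF links form
    # a second, possibly cyclic, graph on top of it.
    import xml.etree.ElementTree as ET

    DOC = """\
    <book>
      <chapter id="ch1">See <xref ref="ch2"/> for details.</chapter>
      <chapter id="ch2">Referenced back from <xref ref="ch1"/>.</chapter>
    </book>"""

    root = ET.fromstring(DOC)
    by_id = {el.get("id"): el for el in root.iter() if el.get("id")}

    for xref in root.iter("xref"):
        target = by_id[xref.get("ref")]
        print(xref.get("ref"), "->", target.tag, target.get("id"))
    # prints: ch2 -> chapter ch2
    #         ch1 -> chapter ch1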
Most of the built-in XML layers have more ambitious re-workings: instead of XML parsed entities you may be able to use XInclude; instead of ID/IDREF you may find XSD's KEY/KEYREF better (but probably not); instead of external entities you may find XLink (for navigation) or RDF (for semantic links) better. No-one would (or could) use SUBDOC, but would go for XML Namespaces plus XInclude...
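For instance, the XInclude alternative to parsed entities can be sketched with the standard library's xml.etree.ElementInclude module; the fragment name and content below are invented, and a custom loader serves in-memory fragments in place of real files.

    # XInclude as a composition layer: the xi:include element is replaced
    # by the loaded fragment, much as a parsed entity would be expanded.
    import xml.etree.ElementTree as ET
    from xml.etree import ElementInclude

    FRAGMENTS = {"chapter1.xml": "<chapter>Included content</chapter>"}

    DOC = """\
    <book xmlns:xi="http://www.w3.org/2001/XInclude">
      <xi:include href="chapter1.xml"/>
    </book>"""

    def loader(href, parse, encoding=None):
        return ET.fromstring(FRAGMENTS[href])  # parse == "xml" here

    root = ET.fromstring(DOC)
    ElementInclude.include(root, loader=loader)
    print(ET.tostring(root, encoding="unicode"))
    # <book>
    #   <chapter>Included content</chapter>
    # </book>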
The thought comes as to whether some of the additional XML standards are attempts to convert more of the Ns into Ys, or at least whether it would be a more rational approach to improving XML (or its successor's) expressive power while keeping things simple. For example, XML Schemas has its own composition mechanism (xs:include) and its own hierarchy mechanism (type derivation), its own internal linking system (global/local types) and an external linking system (import). But in the composition layer, how to handle circular inclusions or multiple inclusions (graphs or acyclic graphs) is necessarily defined..."
ApacheCon North America 2010: Servers, The Cloud, and Innovation
Staff, Apache Software Foundation (ASF) Announcement
ApacheCon North America 2010 (ApacheCon NA) will be held November 1-5, 2010 in Atlanta, Georgia, USA. The theme of this year's ApacheCon is "Servers, The Cloud, and Innovation", featuring highly relevant, professionally directed presentations that demonstrate specific problems and real-world solutions.
"Apache developers, users, enthusiasts, software architects, administrators, executives, and community managers will learn to successfully develop, deploy, and leverage existing and emerging Open Source technologies critical to their businesses. Hands-on trainings and general conference sessions will cover in-depth dozens of Apache products such as Cassandra, Geronimo, Hadoop, Lucene, Tomcat, and the Apache HTTP Server.
Special events during the week include BarCampApache, a Hackathon, MeetUps, an expo hall, receptions, and ample networking opportunities with peers and new connections. Both BarCampApache and the ASF Project MeetUps are open to the public free of charge. Example sessions: (1) Hadoop + friends/Cloud Computing: Apache Hadoop is a framework for running applications on large clusters built of commodity hardware. A wide variety of companies and organizations use Hadoop to deliver petabyte-scale computing and storage using off-the-shelf equipment. The Hadoop framework transparently provides applications with both reliability and data motion. Hadoop implements a computational paradigm named Map/Reduce... (2) Tuscany: Apache Tuscany is a lightweight, open-source infrastructure based on Service Component Architecture (SCA). SCA defines a simple-to-use, service-based model for the construction of service components and the assembly and deployment of composite applications in a distributed network. Tuscany integrates with the Apache platform and extends the SCA specification with support for Web 2.0 protocols (Atom, JSON-RPC), data bindings (JAXB, Axiom, JSON, etc.), and integration of Web 2.0 toolkits like Dojo. With Apache Tuscany, you can create SCA components in Java, BPEL, or scripting languages and assemble SCA components with other components like EJBs...
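As a rough illustration of the Map/Reduce paradigm mentioned in the Hadoop session description, here is a minimal word-count sketch in the style of a Hadoop Streaming job. The shuffle/sort step that the Hadoop framework performs between the map and reduce phases is simulated with an in-memory sort, and all cluster, storage, and job-configuration details are omitted.

    # Word count in the Map/Reduce style; run as e.g.
    #   echo "to be or not to be" | python wordcount.py
    import sys
    from itertools import groupby

    def mapper(lines):
        # Map phase: emit a (word, 1) pair for every word seen.
        for line in lines:
            for word in line.split():
                yield word, 1

    def reducer(pairs):
        # Reduce phase: sum the counts per word (input sorted by key).
        for word, group in groupby(pairs, key=lambda kv: kv[0]):
            yield word, sum(count for _, count in group)

    if __name__ == "__main__":
        pairs = sorted(mapper(sys.stdin))  # stands in for Hadoop's shuffle/sort
        for word, total in reducer(pairs):
            print(word, total)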
Established in 1999, the all-volunteer ASF oversees nearly one hundred fifty leading Open Source projects, including Apache HTTP Server -- the world's most popular Web server software, powering more than 130 million websites worldwide. Today, more than 300 individual Members and 2,300 Committers successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide through thousands of software solutions distributed under the Apache License. The community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is funded by individual donations and corporate sponsors that include AMD, Basis Technology, Facebook, Google, HP, Microsoft, Progress Software, VMware, and Yahoo!. Additional sponsors include Matt Mullenweg, AirPlus International, BlueNog, Intuit, Joost, and Two Sigma Investments..."
See also: the ApacheCon announcement
The Case Against Data Lock-in
Brian W. Fitzpatrick, ACM Queue
"Until recently, users rarely asked whether they could quickly and easily get their data out before they put reams of personal information into a new Internet service. They were more likely to ask questions such as: 'Are my friends using the service?' 'How reliable is it?' and 'What are the odds that the company providing the service is going to be around in six months or a year?'
Users are starting to realize, however, that as they store more and more of their personal data in services that are not physically accessible, they run the risk of losing vast swaths of their online legacy if they don't have a means of removing their data...
At Google, our attitude has always been that users should be able to control the data they store in any of our products, and that means that they should be able to get their data out of any product. Period. There should be no additional monetary cost to do so, and perhaps most importantly, the amount of effort required to get the data out should be constant, regardless of the amount of data. Individually downloading a dozen photos is no big inconvenience, but what if a user had to download 5,000 photos, one at a time, to get them out of an application? That could take weeks of their time...
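The constant-effort point can be sketched against a purely hypothetical bulk-export API: one short script retrieves a dozen photos or five thousand with the same user effort. The URL and response fields below are invented for illustration and do not correspond to any real service.

    # Hypothetical paginated export: the loop scales with the data so the
    # user does not have to.
    import json
    import urllib.request

    EXPORT_URL = "https://example.com/api/photos/export"  # invented

    def export_all_photos():
        page_token = None
        while True:
            url = EXPORT_URL + ("?page=" + page_token if page_token else "")
            with urllib.request.urlopen(url) as resp:
                batch = json.load(resp)
            for photo in batch["photos"]:
                yield photo           # e.g., write each item to disk
            page_token = batch.get("next_page")
            if not page_token:
                break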
Allowing users to get a copy of their data is just the first step on the road to data liberation: we have a long way to go to get to the point where users can easily move their data from one product on the Internet to another. We look forward to this future, where we as engineers can focus less on schlepping data around and more on building interesting products that can compete on their technical merits—not by holding users hostage. Giving users control over their data is an important part of establishing user trust, and we hope that more companies will see that if they want to retain their users for the long term, the best way to do that is by setting them free..."
HTML5: The Jewel in the Open Web Platform
Philippe Le Hégaret, W3C Blog
"Over the past two weeks, I traveled across the U.S. from New York to San Francisco to talk about HTML5 with developers and W3C member organizations. This continues my global tour which has also taken me earlier to France and Japan. I am inspired by the enthusiasm for the suite of technical standards that make up what W3C calls the 'Open Web Platform.' The Open Web Platform to us is HTML5, a game-changing suite of tools that incorporates SVG, CSS and other standards that are in various stages of development and implementation by the community at W3C. Recent demos show the potential of certain features of HTML5, however, the platform in its entirety is still a work in progress. At this stage community feedback plays an important role in ensuring that the HTML5 specification is the highest quality. The power of this platform is that it is so comprehensive...
The challenge presented by HTML5 is the need to test, refine, and mature certain aspects of the specification in order to support the early adopters, the innovators, and the engineers who are embracing this technology today. W3C recently opened a call for developers to submit their issues by October 1, 2010, in order to speed the process of standardizing and implementing HTML5. In addition, because HTML5 is seeing early adoption, there is a need to refine the draft specification to support the work of those who are pushing this technology out into the public domain.
From week to week, we see promising examples of the potential of HTML5 demonstrated by impressive displays of 3D animation, navigation and video technologies... The video community is requesting more features in our support of HTML5 video (more metadata support, chapters, quality feedback). The television industry is just starting to think about having APIs to control television channels or the TV remote. The electronic book industry would like to have better text support, in particular vertical text, in CSS. Several companies met this week to talk about supporting audio and video teleconferencing in HTML (ICE, STUN, notification API for incoming calls, etc.)...
The adoption of HTML5 by browser vendors and other members of the IT community is an important factor in the ongoing traction of the platform. We want to hear from those already working with the draft specification so we can use the test cases to identify interoperability issues that need to be addressed leading up to Last Call in May 2011..."
See also: the HTML5 Editor's Draft
New IESG Statement on the Selection and Role of Document Shepherds
Staff, IESG Announcement
The Internet Engineering Steering Group (IESG) has published a new statement providing guidance on the selection of a Document Shepherd for documents from IETF working groups and documents from individuals. The updated statement augments the text of RFC 4858, which defines the role of the Document Shepherd for documents from IETF working groups. The two published write-up templates summarize a number of technical requirements for specification quality that apply to Write-Ups for Individual Submissions via the IESG and for IETF Working Group submissions.
From the new statement: "Experience has shown that a successful Document Shepherd need not be the working group chair or secretary. In fact, the IESG encourages the working group chair to select an active working group participant who has a strong understanding of the document content, is familiar with the document history, and is familiar with the IETF standards process. The Document Shepherd of a working group document should not be an author or editor of the document.
Not all individual submissions have a Document Shepherd other than an author or editor of the document. When there is one, the Document Shepherd is selected by the Responsible Area Director in consultation with the document authors or editors..."
Topics addressed in the QA write-ups from a Document Shepherd include a listing of existing implementations of a protocol; "whether WG consensus behind the document is strong; whether the document has had adequate review both from key WG members and from key non-WG members, and whether the Document Shepherd has any concerns about the depth or breadth of the reviews that have been performed; whether the document needs more review from a particular or broader perspective (e.g., security, operational complexity, someone familiar with AAA, internationalization, or XML); whether the Document Shepherd has personally verified that the document satisfies all the automated checks reported by ID nits; controversy about particular points, or cases where decisions/consensus was particularly rough; whether sections of the document written in a formal language, such as XML code, BNF rules, MIB definitions, etc., validate correctly in an automated checker; if the document specifies protocol extensions, whether reservations are requested in appropriate IANA registries; whether the IANA registries are clearly identified; and, if the document creates a new registry, whether it defines the proposed initial contents of the registry and an allocation procedure for future registrations..."
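One of the checks above, that sections written in a formal language validate correctly in an automated checker, might look like the following Python sketch for XML sections; the snippets are invented, and a real shepherd would also run idnits and format-specific validators.

    # Well-formedness check over XML sections extracted from a draft
    # (the extraction step is assumed; the snippets here are invented).
    import xml.etree.ElementTree as ET

    xml_sections = [
        "<example><item>ok</item></example>",
        "<example><item>missing close tag</example>",
    ]

    for i, snippet in enumerate(xml_sections, 1):
        try:
            ET.fromstring(snippet)
            print(f"section {i}: well-formed")
        except ET.ParseError as err:
            print(f"section {i}: FAILED ({err})")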
See also: the Document Shepherd Write-Up for Individual Submissions via IESG
Wireless Interconnect and the Potential for Carbon Nanotubes
Alireza Nojeh and André Ivanov, IEEE Design and Test of Computers
"With the ever-shrinking dimensions of electronic devices and increasing IC densities, CMOS technology is now at a point where interconnect delay and power consumption exceed or are comparable to gate delay and power consumption. In addition to power loss, the resulting thermal issues add significant design and engineering challenges. Coupled with the increasing complexity of interconnect routing, these issues create a bottleneck to further scaling
New materials are being investigated to replace traditional copper interconnect wires and interlayer dielectrics. Recent work suggests that we could be closer to having a nanotube-based on-chip communication system than we think. Over the past two decades, nanotubes have gained ever-increasing popularity because of their attractive electrical, mechanical, thermal, and optical characteristics. Their unique electronic properties include the ability to carry current at densities as high as 10⁹ A/cm², orders of magnitude higher than traditional copper and silver wires.
For wires, the materials that have perhaps attracted the most attention are carbon nanotubes. A carbon nanotube is a hollow cylindrical structure made of carbon atoms, with a nanoscale diameter and a length that can reach centimeters. It can consist of only one layer of atoms (a single-walled carbon nanotube, or SWNT), or it can include a number of coaxial layers with progressively larger diameters (a multiwalled carbon nanotube, or MWNT). Each SWNT can be thought of as graphene (one layer of graphite) rolled along a certain direction in its plane... On the mechanical side, nanotubes have much to offer: although they are relatively flexible in the lateral dimensions, the sp² carbon-carbon bond makes SWNTs extremely strong along their axis...
Together with their hollow structure, this makes them unique candidates for lightweight, ultra-strength composite materials. Nanotubes also show attractive actuation behavior and are being investigated as high-force actuators for applications such as artificial muscles. Thermal conductivity in nanotubes along their axis is very high, and this makes them good candidates for heat-sink structures for ICs. Due to their high surface-to-volume ratio, nanotubes can form the basis of highly sensitive chemical and biological sensors. They also have applications as high-brightness electron sources, as well as in disease treatment and hydrogen storage..."
Sponsors
XML Daily Newslink and Cover Pages sponsored by:
IBM Corporation | http://www.ibm.com
ISIS Papyrus | http://www.isis-papyrus.com
Microsoft Corporation | http://www.microsoft.com
Oracle Corporation | http://www.oracle.com
Primeton | http://www.primeton.com
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/