XML Daily Newslink. Wednesday, 22 April 2009

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Oracle Corporation http://www.oracle.com



Paper Submissions Due for Balisage: The Markup Conference 2009
Organizers, Balisage Markup Conference Announcement

Members of the Balisage Markup Conference Committee have published a reminder that the last day for Conference and Symposium paper/presentation proposals is April 24, 2009. The Balisage 2009 Conference will be held August 11-14, 2009, in Montréal, Canada. Balisage is a peer-reviewed conference designed to meet the needs of markup theoreticians and practitioners who are pushing the boundaries of the field. It's all about the markup: how to create it; what it means; hierarchies and overlap; modeling; taxonomies; transformation; query, searching, and retrieval; presentation and accessibility; making systems that make markup dance (or dance faster to a different tune in a smaller space) — in short, changing the world and the web through the power of markup... It's a conference about XSD, XQuery, RDF, UBL, SGML, LMNL, XSL-FO, XTM, SVG, MathML, OWL, TexMECS, RNG, and other things that may or may not yet be associated with such acronyms. We welcome papers about topic maps, document modeling, markup of overlapping structures, ontologies, metadata, content management, and other markup-related topics at Balisage.

Also, on Monday, August 10, 2009, a co-located event will be held: "International Symposium on Processing XML Efficiently: Overcoming Limits on Space, Time, or Bandwidth." This Symposium, chaired by Michael Kay (Saxonica), will feature discussions of software design to facilitate processing, document design to facilitate processing, management of XML applications in a processing-intensive environment, and measuring the efficiency of XML processing. Overview: "XML has become so ubiquitous that people are trying to apply it in some very hostile environments. From small mobile and embedded devices to web servers and financial messaging gateways delivering thousands of transactions a second, from terabyte-sized (or infinite) documents to databases that contain zillions of tiny documents, people expect XML to sit quietly in the background and not make a nuisance of itself. Yet some of the current technologies don't scale particularly well: XSLT and XQuery, for example, can quickly run out of memory as document sizes increase, while the costs of getting XML in and out of databases can bring a system to a halt. XML processing efficiency might be improved with streaming transformations and queries, faster parsing, document projection, parallel processing, application-specific optimization, and many other techniques. In this symposium, we'll talk about the state of the art in any or all of these areas, focusing on what impact such techniques can have at a system level: what practical effect they might have on the performance problems faced by real user workloads, or on the ability of XML to reach into areas where the costs have previously been prohibitive. Can performance benefits be achieved without sacrificing XML's hallmark attractions: validation, flexibility, and high-level declarative programming? In the best Balisage tradition, our aim is to bring together theory and practice: researchers, product engineers, academics, developers, and users..." Note: the online Proceedings for past conferences (Balisage Series on Markup Technologies) have been updated for easy access and navigation.
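
As a purely illustrative sketch of one technique on the symposium's list, streaming processing with Python's standard library keeps memory use flat for arbitrarily large documents; the file name and element names below are hypothetical:

    # Hypothetical sketch: stream a large XML file and aggregate one field
    # without ever holding the whole tree in memory.
    import xml.etree.ElementTree as ET

    def stream_totals(path):
        total, count = 0.0, 0
        context = ET.iterparse(path, events=("start", "end"))
        _, root = next(context)                # grab the root element first
        for event, elem in context:
            if event == "end" and elem.tag == "transaction":
                total += float(elem.findtext("amount", default="0"))
                count += 1
                root.clear()                   # discard children already processed
        return count, total

    if __name__ == "__main__":
        count, total = stream_totals("transactions.xml")
        print(count, "transactions, total", total)

The same idea, generalized, is what streaming transformations, queries, and document projection aim to provide without giving up the declarative programming model.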

See also: the Balisage Series on Markup Technologies Proceedings


Is The Atom Publishing Protocol a Failure?
Dilip Krishnan, InfoQueue

Atom expert Joe Gregorio: "The Atom Publishing Protocol is a failure." Joe Gregorio said this in the title of a BitWorking blog article, admitting to having met his blogging-hyperbole quotient for the day. In a post largely about how AtomPub's level of adoption has fallen far short of expectations, Joe writes that "There are still plenty of new protocols being developed on a seemingly daily basis, many of which could have used AtomPub, but don't." Joe attributes AtomPub's inability to become the 'one true protocol' to browser innovations. The world is a different place than it was when Atom and AtomPub started back in 2002: browsers are much more powerful, Javascript compatibility is increasing among them, there are more libraries to smooth over the differences, and connectivity is on the rise. So, in the face of all those changes, let's see how some of the original motivations behind AtomPub are holding up. According to Joe, some of the key capabilities that AtomPub was designed to address are either easily available as a result of advances in browser technologies or are no longer a significant differentiator... Joe points to successful implementations of AtomPub in a variety of services, and concludes, saying "the advances in browsers and connectivity have conspired to keep AtomPub from reaching the widespread adoption"... Other use cases are still holding up over time, such as migrating data from one platform to another. Probably the biggest supplier of AtomPub-based services is Google with the Google Data APIs, but it also has support from other services; just recently I noticed that Flickr offers AtomPub as a method to post images to a blog...

From Joe Gregorio's blog: "AtomPub isn't a failure, but it hasn't seen the level of adoption I had hoped to see at this point in its life. There are still plenty of new protocols being developed on a seemingly daily basis, many of which could have used AtomPub, but don't. Also, there is a large amount of AtomPub being adopted in other areas, but that doesn't seem to be getting that much press; ala, I don't see any Atom-Powered Logo on my phones like Tim Bray suggested. [One motivation for Atom] was for a common interchange format. The idea was that with a common format you could build up libraries and make it easy to move information around. The 'problem' in this case is that a better format came along in the interim: JSON. JSON, born of Javascript, born of the browser, is the perfect 'data' interchange format, and here I am distinguishing between 'data' interchange and 'document' interchange. If all you want to do is get data from point A to B then JSON is a much easier format to generate and consume, as it maps directly into data structures, as opposed to a document-oriented format like Atom, which has to be mapped manually into data structures, and that mapping will be different from library to library. The other aspect is that plain old HTML has become a lot more consumable in recent years thanks to the work on HTML5. If you need a hypertext document format you can reach for HTML these days and don't have to resort to XML-based formats. The latter is a huge shift in thinking for me personally..."
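
Gregorio's 'data' versus 'document' distinction is easy to see in a few lines of code. The sketch below (field names and values are invented) shows the same item consumed as JSON, which arrives as a native data structure, and extracted from an Atom entry, where the mapping into a data structure has to be written by hand:

    # Illustrative comparison: JSON maps straight into data structures,
    # while an Atom (XML) entry needs a hand-written mapping.
    import json
    import xml.etree.ElementTree as ET

    json_doc = '{"id": "urn:example:1", "title": "Hello", "updated": "2009-04-22T00:00:00Z"}'
    item = json.loads(json_doc)        # a plain dict, no mapping code needed
    print(item["title"])

    atom_doc = """<entry xmlns="http://www.w3.org/2005/Atom">
      <id>urn:example:1</id>
      <title>Hello</title>
      <updated>2009-04-22T00:00:00Z</updated>
    </entry>"""
    NS = {"atom": "http://www.w3.org/2005/Atom"}
    entry = ET.fromstring(atom_doc)
    item_from_atom = {                 # mapping chosen by hand; differs per library
        "id": entry.findtext("atom:id", namespaces=NS),
        "title": entry.findtext("atom:title", namespaces=NS),
        "updated": entry.findtext("atom:updated", namespaces=NS),
    }
    print(item_from_atom["title"])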

See also: Joe Gregorio's blog article


Announcement: NIST Key Management Workshop
Sara Caswell, NIST Announcement

A NIST Key Management Workshop will be held June 8-9, 2009 at the U.S. National Institute of Standards and Technology, Gaithersburg, Maryland, USA. Registration is required by May 18, 2009. Overview: "Key management is a fundamental part of cryptographic technology and is considered the most difficult aspect associated with its use. Of particular concern are the scalability of the methods used to distribute keys and the usability of these methods. NIST is undertaking an effort to improve the overall key management strategies used by the public and private sectors in order to enhance usability of cryptographic technology, provide scalability across all cryptographic technologies, and support a global cryptographic key management infrastructure. The first step in achieving this goal is to conduct a workshop to identify: (1) the various obstacles in using the key management methodologies currently in use; (2) the alternative technologies that need to be accommodated; (3) alternative strategies useful in achieving the stated goal; and, (4) approaches for transitioning from the current methodologies to the most desirable method...

There will be no registration fee for this workshop. Participation includes: [a] physically attending the workshop at NIST; [b] viewing the workshop presentations via WebCast at remote locations; [c] presentations; [d] discussion; [e] providing written comments and recommended relevant topics of interest... U.S. National Institute of Standards and Technology (NIST) publications on security (including encryption and key management) have played a prominent role for many years, especially for government applications. FIPS Publications are issued by NIST after approval by the Secretary of Commerce pursuant to Section 5131 of the Information Technology Reform Act of 1996 (Public Law 104-106) and the Federal Information Security Management Act of 2002 (Public Law 107-347). NIST Special Publications in the 800 series present documents of general interest to the computer security community. The Special Publication 800 series was established in 1990 to provide a separate identity for information technology security publications. This Special Publication 800 series reports on ITL's research, guidelines, and outreach efforts in computer security, and its collaborative activities with industry, government, and academic organizations. Contact: Elaine Barker (technical and program questions) or Sara Caswell (administrative questions).


Security Flaw Leads Twitter, Others to Pull OAuth Support
Caroline McCarthy, CNET News.com

A security hole in OAuth, the open-source protocol that acts as a 'valet key' for users' log-in information, has led services like Twitter and Yahoo to temporarily pull their support. Some developers were dismayed when Twitter pulled its support for OAuth, which it had only recently started to implement: blogger Jesse Stay wrote in a post about other restrictions to Twitter's developer API that its removal of OAuth is one of a number of recent examples of how the microblogging service has 'pulled the rug out from under its developers.' Here are the basics: the hole makes it possible for a hacker to use social-engineering tactics to trick users into exposing their data. The OAuth protocol itself requires tweaking to remove the vulnerability, and a source close to OAuth's development team said that there have been no known violations, that the team has been aware of the issue for a few days now, and that it has been coordinating responses with vendors. A solution should be announced soon. This is a particularly big deal for Twitter, as OAuth prevents users of a service from having to hand over their passwords to third-party services that use that service's application program interface (API), and Twitter relies heavily on developer-created enhancements to the service, from clients like Twhirl and TweetDeck to statistics and analytics applications...

Eran Hammer-Lahav, the OAuth community coordinator for this specific threat, spoke to CNET News: 'We have been aware of this threat for about a week now, and we have been coordinating with all known providers to help them understand the threat and deploy whatever mitigating factors they can,' Hammer-Lahav said, adding that full details will be made available on the OAuth Web site at midnight Pacific time on Thursday. 'There are no known exploits of this, so there are no reported attacks and the providers have either already deployed matters to address this or are doing it right now.' He highlighted Twitter's role in helping to keep things on the down-low at its own expense; when the service disabled OAuth, it did not mention that there was a security hole at its root..."

From Eran Hammer-Lahav's blog: "There is a pretty good story behind this. That is, how we found and managed the OAuth protocol security threat identified last week. In many ways, the story is much more important and interesting than the actual technical details of the exploit, and I promise to tell it in detail soon. For everyone involved, this was a first-of-a-kind experience: managing a specification security hole (as opposed to a software bug) in an open specification, with an open community, and no clear governance model. Where do you even begin? But right now, I know you want the technical details. If you are reading this, I assume you have a basic understanding of how OAuth works. If you don't, please take a few minutes to read at least the first two parts of my Beginner's Guide to OAuth. The first part will give you a general overview and the second will take you through the user workflow. It is the workflow that is important here. The rest of the guide deals with security and signatures, but the content of these posts, surprisingly, is not involved in this attack. I'll start with what this attack is not about: it does not involve stealing usernames or passwords; it does not involve stealing tokens, secrets, or Consumer Keys; in fact, no part of the OAuth signature workflow is involved. This is not a cryptographic attack. And it does not violate the principle of a user granting access to a specific application. All that remains intact..."

See also: Eran Hammer-Lahav's blog


HTML 5 Differences from HTML 4 Drafts Published
Ian Hickson and David Hyatt (eds), W3C Technical Report

Members of the W3C HTML Working Group have published a new Working Draft of HTML 5. HTML 5 adds to the language of the Web: features to help Web application authors, new elements based on research into prevailing authoring practices, and clear conformance criteria for user agents in an effort to improve interoperability. This particular draft specifies how authors can embed SVG in non-XML text/html content, and how browsers and other UAs should handle such embedded SVG content. The HTML5 specification is intended for authors of documents and scripts that use the features defined in the specification, for implementors of tools intended to conform to it, and for individuals wishing to establish the correctness of documents or implementations with respect to its requirements. This document is probably not suited to readers who do not already have at least a passing familiarity with Web technologies, as in places it sacrifices clarity for precision and brevity for completeness. More approachable tutorials and authoring guides can provide a gentler introduction to the topic. In particular, readers should be familiar with the basics of DOM Core and DOM Events before reading this specification. An understanding of WebIDL, HTTP, XML, Unicode, character encodings, JavaScript, and CSS will be helpful in places but is not essential. Implementors of HTML5 should be aware that this specification is not stable. Implementors who are not taking part in the discussions are likely to find the specification changing out from under them in incompatible ways. Vendors interested in implementing this specification before it eventually reaches the Candidate Recommendation stage should join the appropriate mailing lists and take part in the discussions.

See also: the list of changes


Sun Expands MySQL Identity
Sean Michael Kerner, InternetNews.com

Though it's likely soon to become part of Oracle, Sun is still rolling out new product releases. Sun recently announced new open source identity federation capabilities with OpenSSO, as well as integrated MySQL capabilities for identity management. Managing identity and access in an increasingly globally distributed environment is no easy task, but with the new releases, Sun is aiming to make it easier. The enhancements for its identity business include giving OpenSSO (the "SSO" stands for "single sign-on") new abilities to interoperate with Google Apps Premier Edition, as well as providing Sun's users with more deployment choices. Sun's identity announcements had already been in progress prior to the announcement of Oracle's acquisition. Oracle has its own identity management and federation products, and it's unclear at this point what Oracle has in store once it completes the acquisition of Sun later this year. In the meantime, Sun is pushing forward with its plans on identity. OpenSSO is an open source product that enables Web access management, handling single sign-on and authorization. One of its key features is federation, a standards-based single sign-on approach for applications outside the boundaries of an organization. Daniel Raskin, Sun's chief identity strategist: "We're now using OpenSSO to extend the capabilities of an organization to include SaaS applications within their single sign-on network. That means that, for example, an employee could log onto a Sun portal, click on Google Mail, and get access with their own enterprise credentials using federation technology." Raskin explained that the federation capability makes use of the SAML standard for federation. He added that with OpenSSO, Sun is now providing an easy workflow that lets users federate with Google Apps without the need to provision users directly inside of Google. The new federation capability is not the result of a partnership with Google; rather, Raskin noted that Sun is making use of published Google specifications and APIs... While OpenSSO is available as a freely available open source product, Sun also sells a commercially supported version. Raskin does not expect that enterprises will forgo the commercial version just because OpenSSO is available for free; the commercial version includes support, which Raskin argued is critical in identity management... Sun is also expanding the availability of OpenSSO to the Amazon EC2 cloud service. Sun's Glassfish middleware server is also going to be available on EC2 as part of an expanded cloud offering from Sun. Sun already makes MySQL and Solaris available on EC2. Raskin explained that being on EC2 makes it easier for Sun partners to test out the technology without needing their own infrastructure.

See also: the Open Web SSO (OpenSSO) Project


W3C Web Applications WG Publishes Four First Public Working Drafts
Ian Hickson (et al., eds), W3C Technical Report

Members of the W3C Web Applications Working Group have published four First Public Working Drafts of specifications for APIs that enhance the open Web platform as a runtime environment for full-featured applications. (1) "Web Storage" defines two APIs for persistent data storage in Web clients: one for accessing key-value pair data and another for accessing structured data. The first storage mechanism is designed for scenarios where the user is carrying out a single transaction, but could be carrying out multiple transactions in different windows at the same time. Cookies don't really handle this case well. For example, a user could be buying plane tickets in two different windows, using the same site. If the site used cookies to keep track of which ticket the user was buying, then as the user clicked from page to page in both windows, the ticket currently being purchased would "leak" from one window to the other, potentially causing the user to buy two tickets for the same flight without really noticing. The second storage mechanism is designed for storage that spans multiple windows and lasts beyond the current session. In particular, Web applications may wish to store megabytes of user data, such as entire user-authored documents or a user's mailbox, on the client side for performance reasons. (2) The "Web Workers" specification defines an API for running scripts in the background independently of any user interface scripts. This allows for long-running scripts that are not interrupted by scripts that respond to clicks or other user interactions, and allows long tasks to be executed without yielding to keep the page responsive. Workers (as these background scripts are called) are relatively heavyweight and are not intended to be used in large numbers; for example, it would be inappropriate to launch one worker for each pixel of a four-megapixel image. Generally, workers are expected to be long-lived, to have a high start-up performance cost, and to have a high per-instance memory cost. (3) "The Web Sockets API" specification defines an API that enables Web pages to use the Web Sockets protocol for two-way communication with a remote host. (4) The "Server-Sent Events" specification defines an API for opening an HTTP connection for receiving push notifications from a server in the form of DOM events. The API is designed such that it can be extended to work with other push notification schemes such as Push SMS. The Web Storage, Web Sockets API, and Server-Sent Events specifications were previously published as parts of the HTML 5 specification, but will now each become Recommendation-track deliverables within the Web Applications Working Group.
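
Of the four drafts, Server-Sent Events is the easiest to illustrate end to end, since the wire format is a long-lived HTTP response carrying "data:" lines. The following minimal server sketch is an assumption-laden illustration (the port, payload, and update rate are invented), not code from the specification:

    # Minimal Server-Sent Events sketch: hold an HTTP response open and push
    # "data:" lines in the text/event-stream format. Details are illustrative.
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class EventStream(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/event-stream")
            self.send_header("Cache-Control", "no-cache")
            self.end_headers()
            for n in range(5):                   # a real server would loop indefinitely
                self.wfile.write(f"data: tick {n}\n\n".encode("utf-8"))
                self.wfile.flush()               # the blank line terminates each event
                time.sleep(1)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), EventStream).serve_forever()

A browser client would consume such a stream through the draft's EventSource interface, receiving each pushed line as a DOM event.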

See also: the W3C news item


IETF Internet Draft: OAuth Request Body Hash
Brian Eaton and Eran Hammer-Lahav (eds), IETF Internet Draft

This initial (-00) draft specification for OAuth Request Body Hash defines a method to extend the OAuth signature to include integrity checks on HTTP request bodies with content types other than 'application/x-www-form-urlencoded'. From the document Introduction: "The OAuth Core specification provides body integrity checking only for 'application/x-www-form-urlencoded' request bodies. Other types of request bodies are left unsigned. An eavesdropper or man-in-the-middle who captures a signed request URI may be able to forward or replay that URI with a different HTTP request body. Nonce checking and the use of https can mitigate this risk, but may not be available in some environments. Even when nonce checking and https are used, signing the request body provides an additional layer of defense. This specification describes a method to provide an integrity check on non-form-encoded request bodies. The normal OAuth signature base string is enhanced by adding an additional parameter with the hash of the request body. An unkeyed hash is used for the reasons described in Appendix C. This extension is forward compatible: Service Providers that have not implemented this extension can verify requests sent by Consumers that have implemented this extension. If the Service Provider implements this specification, the integrity of the body is guaranteed. If the Service Provider does not check body signatures, the remainder of the request will still validate using the OAuth Core signature algorithm. This specification is only useful when cryptographic signatures are used. The OAuth "PLAINTEXT" signature algorithm does not provide integrity checks for any portion of the request and is not supported by this specification... The specification deliberately uses an unkeyed hash algorithm (SHA-1) to provide an integrity check on the body instead of a keyed hash algorithm such as HMAC-SHA1. This decision was made because signing arbitrary octet streams is poor cryptographic hygiene. It can lead to unexpected problems with cryptographic protocols. For example, consider a proxy that uses OAuth to add authentication information to requests sent by an untrusted third party. If the proxy signs arbitrary octet streams, the third party can use the proxy as an oracle to forge authentication messages. Including the result of an unkeyed hash in the normal signature base string allows the proxy to add an integrity check on the original message without creating a signing oracle...
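
A minimal sketch of the mechanics described above, assuming the extension's body-hash parameter (oauth_body_hash) carries the base64-encoded SHA-1 digest of the raw request body; the payload and parameter values here are invented:

    # Sketch of the body-hash computation only, not a complete OAuth client:
    # SHA-1 the raw request body, base64-encode the digest, and include the
    # result as an extra parameter so the signature also covers the body.
    import base64
    import hashlib

    def oauth_body_hash(body: bytes) -> str:
        return base64.b64encode(hashlib.sha1(body).digest()).decode("ascii")

    body = b"<payment><amount>10.00</amount></payment>"   # hypothetical XML payload
    oauth_params = {
        "oauth_consumer_key": "example-consumer",          # illustrative value
        "oauth_body_hash": oauth_body_hash(body),
    }
    print(oauth_params["oauth_body_hash"])

Because the hash is unkeyed, a proxy can add it for a third party's request body without becoming a signing oracle, which is the design rationale the draft gives for choosing SHA-1 over a keyed HMAC.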

See also: The OAuth Core Protocol


VMware Has Launched vSphere, the OS of the Cloud
Abel Avram, InfoQueue

VMware has announced vSphere, dubbed the operating system of the cloud, a virtualization solution that helps businesses transform their data centers into private clouds and moves VMware ahead in the virtualization market. VMware vSphere intends to turn the data center into a cloud. VMware wants to help businesses make good use of their existing IT infrastructure investments by turning their data centers into private clouds. Instead of managing dozens or hundreds of individual systems, each with its own hardware, operating system, and applications, vSphere gives companies a single ecosystem that hides the network, storage, and computing details, and provides security, resource, and application management in one package... Besides helping businesses create their own clouds, VMware is already working with customers like Terremark to transform their business model from selling computing power per server to selling CPU cycles and gigabytes of memory, effectively moving them into the cloud business model. Another solution built on vSphere is based on the idea of tying installed software to the person it is installed for rather than to the hardware it runs on. The person could use a thin client or a thick one, the hardware could change over time, and the operating system could be changed or updated, but the user should not be affected. That will be done with VMware vSphere and View by introducing a layer of abstraction through virtualization.

From the announcement: "With a wide range of groundbreaking new capabilities, VMware vSphere 4 brings cloud computing to enterprises in an evolutionary, non-disruptive way -- delivering uncompromising control with greater efficiency while preserving customer choice. As the complexity of IT environments has continued to increase over time, an increasing share of customers' IT budgets is spent on simply trying to 'keep the lights on'. With the promise of cloud computing, customers are eager to achieve the benefits, but struggle to see the path to getting there. Leveraging VMware vSphere 4, customers can take pragmatic steps to achieve cloud computing within their own IT environments. With these internal clouds, IT departments can dramatically simplify how computing is delivered in order to help decrease its cost and increase its flexibility, enabling IT to respond more rapidly to changing business requirements. VMware vSphere 4 will aggregate and holistically manage large pools of infrastructure (processors, storage, and networking) as a seamless, flexible and dynamic operating environment. Any application -- an existing enterprise application or a next-generation application -- runs more efficiently and with guaranteed service levels on VMware vSphere 4. For enterprises, VMware vSphere 4 will bring the power of cloud computing to the datacenter, slashing IT costs while dramatically increasing IT responsiveness. For hosting service providers, VMware vSphere 4 will enable a more economic and efficient path to delivering cloud services that are compatible with customers' internal cloud infrastructures. Over time, VMware will support dynamic federation between internal and external clouds, enabling private cloud environments that span multiple datacenters and/or cloud providers..."

See also: the announcement


Intel Finds Stolen Laptops Can Be Costly
Brooke Crothers, CNET News.com

A laptop's value is more than meets the eye. In a study announced Wednesday, Intel says stolen laptops cost corporate owners more than $100,000 in some cases. The study on notebook security, commissioned by Intel and conducted by the Ponemon Institute, states that laptops lost or stolen in airports, taxis, and hotels around the world cost their corporate owners an average of $49,246, "reflecting the value of the enclosed data above the cost of the PC," Intel said. Analyzing 138 instances of lost and stolen notebooks, the study based the $49,246 figure on costs associated with replacement, detection, forensics, data breach, lost intellectual property, lost productivity, and legal, consulting, and regulatory expenses, Intel said. Data breach alone represents 80 percent of the cost... The average cost if the notebook is discovered missing the same day is $8,950, according to the study. After more than one week, this figure can reach as high as $115,849. In addition to the obvious need for vigilance, countermeasures include encryption and data-deletion security services. The study found that data encryption makes the most significant difference in the average cost: a lost notebook with an encrypted hard-disk drive is valued at $37,443, compared with $56,165 for an unencrypted version, the study says.


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
Microsoft Corporation http://www.microsoft.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com
Sun Microsystems, Inc. http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2009-04-22.html
Robin Cover, Editor: robin@oasis-open.org