XML Daily Newslink. Thursday, 06 May 2010

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation http://www.microsoft.com



W3C Final Incubator Report on Model-Based User Interfaces
Juan G-Calleros, Gerrit Meixner, Fabio Paternò (et al, eds), W3C XG Report

The Model-Based UI XG Final Report has been published by members of the W3C Model-Based User Interfaces Incubator Group (MBUI-XG). During the last year the MBUI-XG has evaluated research on MBUIs, including end-to-end models that extend beyond a single Web page, and has assessed the potential of such models as a framework for developing context-sensitive Web applications. The MBUI-XG hopes to enable a new generation of Web authoring tools and runtimes that will make it much easier to create tomorrow's Web applications and to tailor them for a wide range of user preferences, device capabilities and environments. To achieve this, the MBUI-XG has evaluated research on MBUI as a framework for authoring Web applications, with a view to proposing work on related standards.

The Final Report provides an overview of the main results achieved by the Incubator Group. W3C has also organized a Workshop on Future Standards for Model-Based User Interfaces (13-14 May 2010, Rome) to identify opportunities and challenges for new open standards in this area, particularly concerning the semantics and syntaxes of task, abstract and concrete user interface models...

The purpose of Model-Based Design is to identify high-level models which allow designers to specify and analyse interactive software applications at a more semantics-oriented level, rather than starting immediately at the implementation level. This allows them to concentrate on the more important aspects without being immediately entangled in implementation details, and then to have tools which update the implementation so that it remains consistent with the high-level choices.

Thus, by using models which capture semantically meaningful aspects, designers can more easily manage the increasing complexity of interactive applications and analyse them both during development and when they have to be modified. After the relevant abstractions for models have been identified, the next issue is specifying them through a suitable language that enables integration within development environments, so as to facilitate the work of designers and developers. For this purpose, the notion of a User Interface Description Language (UIDL) has emerged in order to express any such model... Nowadays, the increasing availability of new interaction platforms has raised new interest in model-based approaches, which allow developers to define the input and output needs of their applications, vendors to describe the input and output capabilities of their devices, and users to specify their preferences. However, such approaches should still allow designers good control over the final result in order to be effective.
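
The split between abstract and concrete user interface models is easiest to see in code. The sketch below is purely illustrative (the type and function names are ours, not the report's): a single abstract interactor that an authoring tool could map to different concrete widgets per target platform.

```typescript
// Hypothetical sketch of the model-based idea: an abstract interactor
// that is independent of any concrete toolkit or platform.
type AbstractInteractor =
  | { kind: "selection"; label: string; options: string[] }
  | { kind: "textInput"; label: string };

// One possible concrete rendering, targeting HTML; a voice or mobile
// renderer could map the same abstract model to entirely different widgets.
function toHtml(i: AbstractInteractor): string {
  switch (i.kind) {
    case "selection":
      return `<label>${i.label} <select>` +
        i.options.map((o) => `<option>${o}</option>`).join("") +
        `</select></label>`;
    case "textInput":
      return `<label>${i.label} <input type="text"></label>`;
  }
}

// A change at the abstract level (say, adding an option) propagates to
// every concrete rendering, which is the consistency benefit described above.
console.log(toHtml({ kind: "selection", label: "Country", options: ["IT", "DE", "US"] }));
```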

See also: W3C Model-based User Interfaces


OASIS SCA-C Technical Committee Releases Drafts for Public Review
Staff, OASIS Announcement

Members of the OASIS Service Component Architecture / C and C++ (SCA-C-C++) Technical Committee have released four Committee Draft specifications for public review through July 04, 2010. Part of the OASIS Open-CSA member section, this TC was chartered to develop specifications that standardize the use of C and C++ technologies within an SCA domain.

Service Component Architecture Client and Implementation Model for C Test Assertions Version 1.1 (edited by Bryan Aupperle, David Haney, and Pete Robbins) defines the Test Assertions for the SCA C Client and Implementation Model specification. The Test Assertions represent the testable items relating to the normative statements made in the SCA C Client and Implementation Model specification. The Test Assertions provide a bridge between the normative statements in the specification and the conformance TestCases which are designed to check that an SCA runtime conforms to the requirements of the specification...

Service Component Architecture Client and Implementation Model for C Testcases Version 1.1 defines the TestCases for the SCA C Client and Implementation Model specification. The tests described in this document are related to the Test Assertions described in SCA Client and Implementation Model for C Test Assertions. The testcases are structured in the same manner as the testcases for the SCA Assembly specification as described in the SCA Assembly testcases document.

Service Component Architecture Client and Implementation Model for C++ Test Assertions Version 1.1 defines the Test Assertions for the SCA C++ Client and Implementation Model specification. The Test Assertions represent the testable items relating to the normative statements made in the SCA C++ Client and Implementation Model specification. The Test Assertions provide a bridge between the normative statements in the specification and the conformance TestCases which are designed to check that an SCA runtime conforms to the requirements of the specification...

Service Component Architecture Client and Implementation Model for C++ Testcases Version 1.1 defines the TestCases for the SCA C++ Client and Implementation Model specification. The TestCases represent a series of tests that an SCA implementation must pass in order to claim conformance to the requirements of the SCA C++ Client and Implementation Model specification.

See also: the OASIS Service Component Architecture / C and C++ (SCA-C-C++) TC


NIST Releases Final Public Draft for Assessing the Security Controls
Staff, National Institute of Standards and Technology Announcement

The U.S. National Institute of Standards and Technology (NIST) announced the publication of the Final Public Draft for Special Publication 800-53A, Revision 1: Guide for Assessing the Security Controls in Federal Information Systems and Organizations. This final draft of Special Publication 800-53A, Revision 1 provides guidelines for developing security assessment plans and associated security control assessment procedures that are consistent with Special Publication 800-53, Revision 3, Recommended Security Controls for Federal Information Systems and Organizations, August 2009, including updates as of 05-01-2010.

The final draft of Special Publication 800-53A, Revision 1, developed by the Joint Task Force Transformation Initiative Working Group is part of the ongoing initiative to develop a unified information security framework for the federal government and its contractors. This publication represents the third in a series of publications being developed under the auspices of the Joint Task Force Transformation Initiative.

For the past three years, NIST has been working in partnership with the Office of the Director of National Intelligence (ODNI), the Department of Defense (DOD), and the Committee on National Security Systems (CNSS) to develop a common information security framework for the federal government and its contractors. The updated security assessment guideline incorporates best practices in information security from the United States Department of Defense, the Intelligence Community, and civil agencies, and includes security control assessment procedures for both national security and non-national security systems. The guideline for developing security assessment plans is intended to support a wide variety of assessment activities in all phases of the system development life cycle, including development, implementation, and operation...

The important changes described in Special Publication 800-53A, Revision 1, are part of a larger strategic initiative to focus on enterprise-wide, near real-time risk management; that is, managing risks from information systems in dynamic environments of operation that can adversely affect organizational operations and assets, individuals, other organizations, and the Nation. The increased flexibility in the selection of assessment methods, assessment objects, and depth and coverage attribute values empowers organizations to place the appropriate emphasis on the assessment process at every stage in the system development life cycle. For example, carrying out an increased level of assessment early in the system development life cycle can provide significant benefits by identifying weaknesses and deficiencies in the information system early and by facilitating more cost-effective solutions...

See also: the NIST announcement


Character Set and Language Encoding for Hypertext Transfer Protocol (HTTP) Header Field Parameters
Julian F. Reschke (ed), IETF Internet Draft

The Internet Engineering Steering Group (IESG) has approved the specification Character Set and Language Encoding for Hypertext Transfer Protocol (HTTP) Header Field Parameters as an IETF Proposed Standard. This document is a clarification and simplification of an existing protocol specification, and reviewers indicate the simplifications are well-judged and match practical use in HTTP. Also, the draft claims that, as of January 2010, there were at least three independent implementations of the encoding defined in document Section 3.2: Konqueror (trunk), Mozilla Firefox, and Opera. Graham Klyne is the Document Shepherd for this document. Alexey Melnikov is the IETF Responsible Area Director.

This specification defines a simplified mechanism for encoding arbitrary (e.g., non-ASCII) characters in HTTP header field parameters. By default, message header field parameters in Hypertext Transfer Protocol (HTTP) messages cannot carry characters outside the ISO-8859-1 character set. RFC 2231 defines an escaping mechanism for use in Multipurpose Internet Mail Extensions (MIME) headers. This document specifies a profile of that encoding suitable for use in HTTP header fields. There are multiple HTTP header fields that already use RFC 2231 encoding in practice (Content-Disposition) or might use it in the future (Link). The purpose of this document is to provide a single place where the generic aspects of RFC 2231 encoding in HTTP header fields are defined.

By default, message header field parameters in Hypertext Transfer Protocol (HTTP) messages cannot carry characters outside the ISO-8859-1 character set. RFC 2231 defines an encoding mechanism for use in Multipurpose Internet Mail Extensions (MIME) headers. This document specifies an encoding suitable for use in HTTP header fields which is compatible with a profile of the encoding defined in RFC 2231.

When to Use the Extension: Section 4.2 of RFC 2277 requires that protocol elements containing human-readable text are able to carry language information. Thus, the ext-value production ought always to be used when the parameter value is of a textual nature and its language is known. Furthermore, the extension ought also to be used whenever the parameter value needs to carry characters not present in the US-ASCII character set. Note that it would be unacceptable to define a new parameter that would be restricted to a subset of the Unicode character set.
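
To make the ext-value encoding concrete, here is a small sketch based on the RFC 2231 profile the draft defines (the function name is ours): the value is the character set, an optional language tag, and a percent-encoded byte sequence, joined by single quotes.

```typescript
// Sketch of producing an ext-value (charset "'" [language] "'" value-chars)
// for an HTTP header field parameter such as Content-Disposition's filename*.
function encodeExtValue(value: string, language = ""): string {
  const encoded = Array.from(new TextEncoder().encode(value))
    .map((byte) => {
      const ch = String.fromCharCode(byte);
      // attr-char per the draft: ALPHA / DIGIT / a small set of marks;
      // every other byte is percent-encoded.
      return /[A-Za-z0-9!#$&+.^_`|~-]/.test(ch)
        ? ch
        : "%" + byte.toString(16).toUpperCase().padStart(2, "0");
    })
    .join("");
  return `UTF-8'${language}'${encoded}`;
}

// "€ rates" encodes to UTF-8''%E2%82%AC%20rates
console.log(`Content-Disposition: attachment; filename*=${encodeExtValue("\u20AC rates")}`);
```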

See also: the W3C Discussion List


Revised IETF Internet Draft: Web Linking
Mark Nottingham (ed), IETF Internet Draft

A revised level -10 draft specification has been produced for Web Linking, published originally under the title Link Relations and HTTP Header Linking. Changes in version -09 include: Corrected 'ptoken / ptokenchar' BNF; Disallow multiple title* parameters; Prefer title* over title when available; Remove "\" from ptokenchar; Explain why mailing list isn't archived; Define default language for title and title*, based on Content-Language, when present. Changes in -10 (result of IESG review): clarified media BNF; added various security considerations; updated registration procedures; added more detail to 'payment' relation; corrected 'hub' relation. On 2010-05-07, IESG announced that version -10 had been approved and advanced to IETF Proposed Standard level: 'This is not a WG document. However, it was well-reviewed on the HTTPBIS WG mailing list. The document focuses on registration and review of HTTP Link relations, and has been reviewed by IANA.'

Overview: A means of indicating the relationships between resources on the Web, as well as indicating the type of those relationships, has been available for some time in HTML, and more recently in Atom (RFC 4287). These mechanisms, although conceptually similar, are separately specified. However, links between resources need not be format-specific; it can be useful to have typed links that are independent of their serialisation, especially when a resource has representations in multiple formats. To this end, this document defines a framework for typed links that isn't specific to a particular serialisation or application. It does so by re-defining the link relation registry established by Atom to have a broader domain, and adding to it the relations that are defined by HTML.

Furthermore, an HTTP header-field for conveying typed links was defined in Section 19.6.2.4 of RFC 2068, but removed from RFC 2616, due to a lack of implementation experience. Since then, it has been implemented in some User-Agents (e.g., for stylesheets), and several additional use cases have surfaced. Because it was removed, the status of the Link header is unclear, leading some to consider minting new application-specific HTTP headers instead of reusing it. This document addresses this by re-specifying the Link header as one such serialisation, with updated but backwards-compatible syntax...

In this specification, a link is a typed connection between two resources that are identified by IRIs (RFC 3987), and comprises a context IRI, a link relation type, a target IRI, and (optionally) target attributes. A link can be viewed as a statement of the form "{context IRI} has a {relation type} resource at {target IRI}, which has {target attributes}." Note that in the common case, the context IRI will also be a URI (RFC 3986), because many protocols (such as HTTP) do not support dereferencing IRIs. Likewise, the target IRI will be converted to a URI in serialisations that do not support IRIs (e.g., the Link header). This specification does not place restrictions on the cardinality of links; there can be multiple links from and to a particular IRI, and multiple links of different types between two given IRIs. Likewise, the relative ordering of links in any particular serialisation, or between serialisations (e.g., the Link header and in-content links), is not specified or significant in this specification; applications that wish to consider ordering significant can do so...
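
A hedged sketch of that model in code (our own types and names, not the draft's ABNF), rendering typed links in the Link header form the draft's examples use:

```typescript
// A typed link per the draft's model: target IRI, relation type, and
// optional target attributes. The context IRI is implied by the message
// carrying the header, so it does not appear in the serialisation.
interface TypedLink {
  targetIri: string;                // the link target
  rel: string;                      // registered or extension relation type
  attrs?: Record<string, string>;   // optional target attributes (title, type, ...)
}

function toLinkHeader(links: TypedLink[]): string {
  return links
    .map(({ targetIri, rel, attrs = {} }) => {
      const params = Object.entries(attrs)
        .map(([k, v]) => `; ${k}="${v}"`)
        .join("");
      return `<${targetIri}>; rel="${rel}"${params}`;
    })
    .join(", ");
}

// Link: <http://example.com/TheBook/chapter2>; rel="previous"; title="previous chapter"
console.log("Link: " + toLinkHeader([
  { targetIri: "http://example.com/TheBook/chapter2", rel: "previous",
    attrs: { title: "previous chapter" } },
]));
```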

See also: the document history


The Emperor's New APIs: On the (In)Secure Usage of New Client-side Primitives
Steve Hanna, Eui Chul Richard Shin (et al), Conference Paper

This paper on The Emperor's New APIs was prepared for 'W2SP 2010: Web 2.0 Security and Privacy 2010', held in conjunction with the 2010 IEEE Symposium on Security and Privacy. Since 1980, the IEEE Symposium on Security and Privacy has been a premier forum for computer security research, presenting the latest developments and bringing together researchers and practitioners.

Paper excerpts: "Several new browser primitives have been proposed to meet the demands of application interactivity while enabling security. To investigate whether applications consistently use these primitives safely in practice, we study the real-world usage of two client-side primitives, namely postMessage and HTML5's client-side database storage. We examine new purely client-side communication protocols layered on postMessage (Facebook Connect and Google Friend Connect) and several real-world web applications (including Gmail, Buzz, Maps and others) which use client-side storage abstractions. We find that, in practice, these abstractions are used insecurely, which leads to severe vulnerabilities and can increase the attack surface for web applications in unexpected ways. We conclude the paper by offering insights into why these abstractions can potentially be hard to use safely, and propose the economy of liabilities principle for designing future abstractions. The principle recommends that a good design for a primitive should minimize the liability that the user undertakes to ensure application security...

A recurring problem in these designs is that these abstractions are not designed with the economy of liabilities principle in mind, i.e., they rely significantly on the developers to ensure security. In this paper, we find this to be true of two recent client-side abstractions: postMessage, a cross-domain communication construct, and client-side persistent storage (HTML5 and Google Gears). In the case of postMessage, we reverse engineered the client-side protocols and systematically extracted the security-relevant checks in the code to find new vulnerabilities in them. In the case of client-side storage, we find that applications do not sanitize database outputs, which can lead to a stealthy, persistent, client-side XSS attack. We found bugs in several prominent web applications including Gmail and Google Buzz and uncovered severe new attacks in major client-side protocols like Facebook Connect and Google Friend Connect...

We hope our study encourages future primitives to be designed with the economy of liabilities principle in mind. We offer some enhancements to the current APIs to shift the burden of verifying and ensuring security properties from the developer to the browser. And, we encourage developers to scrutinize their applications for similar problems using automated techniques..."
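
The postMessage findings boil down to checks the browser leaves to the developer. The sketch below is not code from the paper; it illustrates the receive-side origin check whose omission the authors repeatedly found (TRUSTED_ORIGIN and handleProtocolMessage are hypothetical names):

```typescript
// The kind of origin discipline postMessage pushes onto developers.
const TRUSTED_ORIGIN = "https://partner.example.com"; // assumed trusted peer

window.addEventListener("message", (event: MessageEvent) => {
  // The recurring vulnerability class described in the paper: skipping this
  // check lets any page that can obtain a window reference inject protocol
  // messages into the application.
  if (event.origin !== TRUSTED_ORIGIN) return;
  handleProtocolMessage(event.data);
});

function handleProtocolMessage(data: unknown): void {
  // Even after the origin check, treat the payload as untrusted input:
  // sanitize it before writing it into the DOM or client-side storage,
  // per the paper's storage-based XSS findings.
  console.log("message from trusted origin:", data);
}

// When sending, pin the target origin rather than passing "*":
//   otherWindow.postMessage(payload, TRUSTED_ORIGIN);
```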

See also: the program listing for the IEEE Symposium on Security and Privacy


Updated IETF Internet Draft: The OAuth 2.0 Protocol
Eran Hammer-Lahav, David Recordon, Dick Hardt (eds), IETF Internet Draft

Members of the IETF Open Authentication Protocol (OAuth) Working Group have released an updated draft for the specification The OAuth 2.0 Protocol. Draft version -02 removes the restriction on 'redirect_uri' including a query, adds a 'scope' parameter, and provides an initial proposal for a JSON-based token response format. The specification was authored with the participation and based on the work of Allen Tom (Yahoo!), Brian Eaton (Google), Brent Goldman (Facebook), Luke Shepard (Facebook), Raffi Krikorian (Twitter), and Yaron Goland (Microsoft).

OAuth provides a method for making authenticated HTTP requests using a token: an identifier used to denote an access grant with specific scope, duration, and other attributes. Tokens are issued to third-party clients by an authorization server with the approval of the resource owner. OAuth defines multiple flows for obtaining a token to support a wide range of client types and user experiences. Specifically, it defines the use of OAuth over HTTP per RFC 2616 or HTTP over TLS 1.0 as defined by IETF RFC 2818; other specifications may extend it for use with other transport protocols.

From the Introduction: "With the increasing use of distributed web services and cloud computing, third-party applications require access to server-hosted resources. These resources are usually protected and require authentication using the resource owner's credentials (typically a username and password). In the traditional client-server authentication model, a client accessing a protected resource on a server presents the resource owner's credentials in order to authenticate and gain access. Resource owners should not be required to share their credentials when granting third-party applications access to their protected resources. They should also have the ability to restrict access to a limited subset of the resources they control, to limit access duration, or to limit access to the HTTP methods supported by these resources.

OAuth provides a method for making authenticated HTTP requests using a token, where tokens are issued to third-party clients by an authorization server with the approval of the resource owner. Instead of sharing their credentials with the client, resource owners grant access by authenticating directly with the authorization server which in turn issues a token to the client. The client uses the token (and optional secret) to authenticate with the resource server and gain access. For example, a web user (resource owner) can grant a printing service (client) access to her protected photos stored at a photo sharing service (resource server), without sharing her username and password with the printing service. Instead, she authenticates directly with the photo sharing service (authorization server) which issues the printing service delegation-specific credentials (token)..."
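
A minimal sketch of that pattern, with hypothetical URLs and names: the client presents the issued token to the resource server instead of the resource owner's password. The exact Authorization scheme and token-response fields vary across OAuth 2.0 draft revisions, so treat those details as assumptions.

```typescript
// Hypothetical resource protected by the photo sharing service.
const RESOURCE_URL = "https://photos.example.net/api/albums";

async function fetchProtectedResource(accessToken: string): Promise<unknown> {
  const res = await fetch(RESOURCE_URL, {
    // Early OAuth 2.0 drafts used an "OAuth" scheme here; later revisions
    // settled on "Bearer". Either way, no username/password is sent.
    headers: { Authorization: `OAuth ${accessToken}` },
  });
  if (!res.ok) {
    throw new Error(`resource server rejected token: ${res.status}`);
  }
  return res.json();
}

// The token itself comes from the authorization server, e.g. as a JSON
// response in the style draft -02 proposes (field names illustrative):
//   { "access_token": "vF9dft4qmT", "expires_in": 3600 }
```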

See also: the IETF Open Authentication Protocol (OAuth) Working Group


Improvements Made to the Final Open XML SDK 2.0 for Microsoft Office
Brian Jones, Blog

"A previous post announced the release of the Open XML SDK 2.0 with promise of a list of improvements and breaking changes made to the SDK compared to the December 2009 CTP. We made a few tweaks to the SDK... One of the changes we made to the SDK is around supporting autosave for document creation. Back in August 2009 we released a CTP that included autosave functionality when modifying parts within the Open XML package. This functionality allowed for changes to be automatically saved into the package, without the need to call 'Save()' methods. We also provided a mechanism to turn off this functionality. This functionality worked great when opening existing files. However, we never added such functionality for newly created documents. That's what we added this time around for the final version of the SDK...

Improved Namespace Processing: The older CTPs used a predefined mapping between XML prefixes and the different namespaces. In other words, the SDK assumed a specific prefix for a specific namespace. For example, a WordprocessingML Paragraph would always write out the element 'p' with the prefix 'w', such as 'w:p'. The issue that some of you brought up is that the SDK didn't handle custom prefixes defined in XML fragments if those prefixes conflicted with the predefined list of prefixes in the SDK. With the final version of the SDK we fixed this issue. The final version of the SDK should be a lot more robust..."

From the download site: "Open XML is an open standard (ECMA-376, also approved as ISO/IEC 29500) that defines a set of XML schemas for representing spreadsheets, charts, presentations, and word processing documents. Microsoft Office Word 2007, Excel 2007, and PowerPoint 2007 all use Open XML as the default file format. The Open XML file formats are useful for developers because they use an open standard and are based on well-known technologies: ZIP and XML...

The Open XML SDK 2.0 for Microsoft Office is built on top of the System.IO.Packaging API and provides strongly typed part classes to manipulate Open XML documents. The SDK also uses the .NET Framework Language-Integrated Query (LINQ) technology to provide strongly typed object access to the XML content inside the parts of Open XML documents. The Open XML SDK 2.0 simplifies the task of manipulating Open XML packages and the underlying Open XML schema elements within a package. The Open XML Application Programming Interface (API) encapsulates many common tasks that developers perform on Open XML packages, so you can perform complex operations with just a few lines of code.

The tools package contains the Open XML SDK v2.0 Productivity Tool for Office and the documentation for the Open XML SDK v2.0. The Open XML SDK 2.0 Productivity Tool for Microsoft Office provides a number of features designed to improve your productivity and accelerate your learning while working with the SDK and Open XML files. Features include the ability to generate Open XML SDK 2.0 source code based on document content, compare source and target Open XML documents to reveal differences and to generate source code to create the target from the source, validate documents, and display documentation for the Open XML SDK v2.0, the ECMA-376 v1 standard, and the Microsoft Office implementation notes..."
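
Because an Open XML document is just a ZIP package of XML parts, it can also be inspected without the SDK using any generic ZIP library. A hedged sketch (using the third-party JSZip library under Node.js, not part of the Open XML SDK; "word/document.xml" is the main WordprocessingML part):

```typescript
import { readFile } from "node:fs/promises";
import JSZip from "jszip"; // npm install jszip

// Open a .docx as what it is underneath: a ZIP of XML parts.
async function readMainPart(path: string): Promise<string> {
  const zip = await JSZip.loadAsync(await readFile(path));
  const part = zip.file("word/document.xml");
  if (!part) throw new Error("not a WordprocessingML package");
  return part.async("string"); // the raw XML of the document body
}

readMainPart("report.docx").then((xml) => console.log(xml.slice(0, 200)));
```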

See also: the Open XML SDK 2.0 download


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation: http://www.ibm.com
ISIS Papyrus: http://www.isis-papyrus.com
Microsoft Corporation: http://www.microsoft.com
Oracle Corporation: http://www.oracle.com
Primeton: http://www.primeton.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/


