A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover
This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation http://www.microsoft.com
Headlines
- CloudAudit: Automated Audit, Assertion, Assessment, and Assurance API
- TestCases for the SCA Policy Framework Version 1.1 Specification
- W3C Last Call Working Drafts for Authoring Tool Accessibility Guidelines
- IETF Timezone Service Protocol Specification
- Beyond Web Browser Cookies: HTML5 Web Storage
- Use of SAML in Name Attributes for the GSS-API EAP Mechanism
- First Look: Firefox 4 Beta 1 Shines on HTML5
CloudAudit: Automated Audit, Assertion, Assessment, and Assurance API
Christofer Hoff, Sam Johnston (et al., eds), IETF Internet Draft
IETF has published a first public working draft for the specification CloudAudit 1.0 - Automated Audit, Assertion, Assessment, and Assurance API (A6). This (for now) Experimental Internet Draft describes "an open, extensible and secure interface that allows cloud computing providers to expose Audit, Assertion, Assessment, and Assurance (A6) information for cloud infrastructure (IaaS), platform (PaaS), and application (SaaS) services to authorized clients. The goal of CloudAudit is to provide a common interface that allows Cloud providers to automate the Audit, Assertion, Assessment, and Assurance (A6) of their environments and to allow authorized consumers of their services to do likewise via an open, extensible and secure set of interfaces. CloudAudit is a volunteer cross-industry effort drawing on the best minds and talent from Cloud, networking, security, audit, assurance, and architecture backgrounds..."
From the document Introduction: "CloudAudit provides a common interface, naming convention, set of processes and technologies utilizing the HTTP protocol to enable cloud service providers to automate the collection and assertion of operational, security, audit, assessment, and assurance information. This enables duly authorized and authenticated consumers and brokers of cloud computing services to automate requests for this data and metadata. CloudAudit supports the notion of requests for both structured and unstructured data and metadata aligned to compliance and audit frameworks. Specific compliance framework definitions and namespaces ('compliance packs') will be made available incrementally. The first CloudAudit release is designed to be as simple as possible so that it can be implemented by creating a consistent namespace and directory structure and placing files on a standard web server that implements HTTP. Subsequent releases may add the ability to write definitions and assertions, and to request that new assertions be generated (e.g., a network scan). That is, while 1.x versions are read-only, subsequent releases may be read-write.
A duly authorized and authenticated client will typically interrogate the service and verify compliance with local policy before making use of it. It may do so by checking certain pre-defined parameters (for example, the geographical location of the servers, compliance with prevailing security standards, etc.) or it may enumerate some/all of the information available and present it to an operator for a manual decision. This process may be fully automated, for example when searching for least cost services or for an alternative service for failover..."
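Because CloudAudit 1.x is read-only and rides on plain HTTP, such a client-side compliance check reduces to fetching files from a well-known directory tree. The following TypeScript sketch illustrates the idea; the host name, namespace path, and manifest file name are all invented for this example and are not normative values from the draft.

    // Hypothetical CloudAudit (A6) client check. The URL layout below is
    // illustrative only; consult the draft for the normative namespace rules.
    async function fetchAssertion(providerHost: string, namespacePath: string): Promise<string> {
      // CloudAudit 1.x is read-only: a simple GET against a directory
      // structure hosted on a standard web server.
      const url = `https://${providerHost}/.well-known/cloudaudit/${namespacePath}/manifest.xml`;
      const response = await fetch(url);
      if (!response.ok) {
        throw new Error(`Provider returned ${response.status} for ${url}`);
      }
      return response.text();
    }

    // Example: retrieve a (hypothetical) compliance-pack assertion before
    // deciding whether to use the service.
    fetchAssertion("cloud.example.com", "org/cloudaudit/compliance/iso27002")
      .then((manifest) => console.log("Assertion manifest:", manifest))
      .catch((err) => console.error("Compliance check failed:", err));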
According to the CloudAudit web site FAQ document: "[CloudAudit development includes representatives] from CSC, Stratus, Akamai, Microsoft, VMware, Google, Amazon Web Services, Savvis, Terremark, Rackspace, etc... We have cross-section representation across industry, we're building namespaces, specifications, and those are not done in the dark. They're done with a direct contribution of the cloud providers themselves, because they understand how important it is to get this information standardized. Otherwise, you're going to be having ad-hoc comparisons done of services which may not portray your actual security services capabilities or security posture accurately. We have a huge amount of interest, a good amount of participation, and a lot of alliances that are already bubbling with other cloud standards..."
See also: the CloudAudit web site
TestCases for the SCA Policy Framework Version 1.1 Specification
David Booz (ed), OASIS Public Review Draft
Members of the OASIS Service Component Architecture / Policy (SCA-Policy) Technical Committee have released TestCases for the SCA Policy Framework Version 1.1 Specification (Committee Draft 01/Public Review 01) for public review and comment through September 06, 2010. This document defines the TestCases for the SCA Policy specification. The TestCases represent a series of tests that an SCA runtime must pass in order to claim conformance to the requirements of the SCA Policy specification.
The SCA Policy testcases follow a standard structure. They are divided into two main parts: (1) the Test Client, which drives the test and checks that the results are as expected; and (2) the Test Application, which forms the bulk of the testcase and consists of Composites, WSDL files, XSDs, and code artifacts such as Java classes, organized into a series of SCA contributions. The basic idea is that the Test Application runs on the SCA runtime that is under test, while the Test Client runs as a standalone application, invoking the Test Application through one or more service interfaces.
The test client is designed as a standalone application. The version built here is a Java application using the JUnit test framework, although in principle the client could be built using another implementation technology. The test client contains configuration information about the testcase: metadata identifying the Test Application in terms of the SCA Contributions that are used and the Composites that must be deployed and run, plus data indicating which service operation(s) must be invoked, with input data and expected output data, including exceptions for expected failure cases...
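The actual test client is the Java/JUnit application just described; purely as a language-neutral illustration of the same pattern, the TypeScript sketch below shows how a testcase configuration pairs a service invocation with expected output, including expected-failure cases. All names here are invented for the example.

    // Illustrative sketch of the test-client pattern (the real client is
    // Java/JUnit): invoke a service operation on the deployed Test
    // Application and compare the result, or the thrown error for
    // expected-failure cases, against the testcase's expectations.
    interface TestCaseConfig {
      contributions: string[];           // SCA contributions forming the Test Application
      composites: string[];              // composites that must be deployed and run
      operation: () => Promise<string>;  // the service invocation under test
      expectedOutput?: string;           // expected response for success cases
      expectFailure?: boolean;           // true when an exception is the expected outcome
    }

    async function runTestCase(config: TestCaseConfig): Promise<boolean> {
      try {
        const actual = await config.operation();
        return !config.expectFailure && actual === config.expectedOutput;
      } catch (err) {
        return config.expectFailure === true;  // the failure was expected
      }
    }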
"Test Assertions for the SCA Policy Framework 1.1 Specification" (a separate document) describes the testable items relating to the normative statements made in the SCA Policy Framework specification. The Test Assertions provide a bridge between the normative statements in the specification and the conformance TestCases that are designed to check that an SCA runtime conforms to the requirements of the specification... "SCA Policy Framework Version 1.1" presents tha SCA framework and its usage: The capture and expression of non-functional requirements is an important aspect of service definition and has an impact on SCA throughout the lifecycle of components and compositions. SCA provides a framework to support specification of constraints, capabilities and QoS expectations from component design through to concrete deployment. The document describes the SCA policy association framework that allows policies and policy subjects specified using WS-Policy and WS-PolicyAttachment, as well as with other policy languages, to be associated with SCA components..."
See also: the OASIS announcement
W3C Last Call Working Drafts for Authoring Tool Accessibility Guidelines
Jan Richards, Jeanne Spellman, Jutta Treviranus (eds), W3C Technical Reports
Members of the W3C Authoring Tool Accessibility Guidelines Working Group (AUWG) have published two Last Call Working Drafts for public review through September 02, 2010. Both specifications are parts of a series of accessibility guidelines published by the W3C Web Accessibility Initiative (WAI). The Web Accessibility Initiative develops strategies, guidelines, and resources to help make the Web accessible to people with disabilities.
The AUWG charter includes: development of a version of ATAG which is compatible with other documents, such as the Web Content Accessibility Guidelines (WCAG), and reflective of current practice; development of techniques for implementing ATAG 2.0 in a range of different types of authoring tools; development of methods for evaluating conformance of authoring tools; tracking related work in other working groups, commenting on and integrating it as appropriate; facilitating the evaluation of authoring tools for conformance to the guidelines; and working with authoring tool developers on implementation methods and techniques... WAI is supported in part by: the U.S. Department of Education's National Institute on Disability and Rehabilitation Research, the European Commission's Information Society Technologies Programme, HP, IBM, Microsoft Corporation, SAP, Verizon Foundation, and Wells Fargo.
The Last Call Authoring Tool Accessibility Guidelines (ATAG) 2.0 Working Draft provides "guidelines for designing web content authoring tools that are both (1) more accessible to authors with disabilities and (2) designed to enable, support, and promote the production of accessible web content by all authors. It includes recommendations for assisting authoring tool developers to make the authoring tools that they develop more accessible to people with disabilities, including blindness and low vision, deafness and hearing loss, learning disabilities, cognitive limitations, motor difficulties, speech difficulties, and others. Accessibility, from an authoring tool perspective, includes addressing the needs of two (potentially overlapping) user groups with disabilities: (A) authors of web content, whose needs are met by ensuring that the authoring tool user interface itself is accessible, and (B) end users of web content, whose needs are met by ensuring that all authors are enabled, supported, and guided towards producing accessible web content..."
The companion Last Call WD Implementing ATAG 2.0: A Guide To Understanding and Implementing Authoring Tool Accessibility Guidelines 2.0 provides non-normative information to authoring tool developers who wish to satisfy the success criteria in ATAG 2.0. This document includes additional information about the intent of the success criteria, examples of how the success criteria might be satisfied, and references to other related resources. Although the normative definitions and requirements for ATAG 2.0 can all be found in the ATAG 2.0 document itself, the concepts and provisions may be new to some people. Implementing ATAG 2.0 provides a non-normative extended commentary on each guideline and each success criterion to help readers better understand the intent and how the guidelines and success criteria work together. It also provides examples that the Working Group has identified for each success criterion...
See also: the Implementing ATAG 2.0 guide
IETF Timezone Service Protocol Specification
Michael Douglass and Cyrus Daboo (eds), IETF Internet Draft
IETF has published an initial level -00 Internet Draft of the specification for a Timezone Service Protocol. It defines a timezone service protocol that allows reliable, secure and fast delivery of timezone information to client systems such as calendaring and scheduling applications or operating systems. The authors credit CalConnect (The Calendaring and Scheduling Consortium) for advice on this specification via the CalConnect Timezone Technical Committee. The overall service is made up of several layers that include Contributors, Publishers, Root Providers, Local Providers, and Clients.
From the Introduction: "Timezone information, in general, combines a Universal Coordinated Time (UTC) offset with daylight saving time (DST) rules. Timezones are typically tied to specific geographic and geopolitical regions. Whilst the UTC offset for particular regions changes infrequently, DST rules can change frequently and sometimes with very little notice (sometimes hours before a change comes into effect).
Calendaring and scheduling systems, such as those that use iCalendar, as well as operating systems, critically rely on timezone information to determine the correct local time. As such, they need to be kept up to date with changes to timezone information. To date there has been no fast and easy way to do that. Oftentimes, timezone data is supplied in the form of a set of data files that have to be "compiled" into a suitable database format for use by the client application or operating system. In the case of operating systems, those changes often only get propagated out to client machines when there is an operating system update, and such updates may not be frequent enough to ensure accurate timezone data is always in place.
This specification defines a timezone service protocol based on HTTP that allows for fast, reliable and accurate delivery of timezone information to client systems. A further specification defines an XML schema for timezone data that can be used as an interchange format between client and server or between servers. This can be used as an alternative to iCalendar VTIMEZONE component data, when such data is not appropriate. This specification does not specify the source of the timezone information. It is assumed that a reliable and accurate source is available. Nor does it address the need for global timezone identifiers for timezone data."
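As a minimal sketch of what such a client might look like, the TypeScript below issues a single HTTP request for one timezone. The endpoint URL and query parameter names are assumptions made for illustration, not values quoted from the draft.

    // Minimal timezone-service client sketch. The "action"/"tzid" query
    // parameters and the endpoint path are illustrative assumptions.
    async function getTimezone(serviceUrl: string, tzid: string): Promise<string> {
      const url = `${serviceUrl}?action=get&tzid=${encodeURIComponent(tzid)}`;
      const response = await fetch(url);
      if (!response.ok) {
        throw new Error(`Timezone service returned ${response.status}`);
      }
      // The payload might be iCalendar VTIMEZONE data or the alternative XML
      // interchange format defined in the companion specification.
      return response.text();
    }

    getTimezone("https://tz.example.com/timezone", "America/New_York")
      .then((data) => console.log(data));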
See also: the IETF Timezone XML Specification
Beyond Web Browser Cookies: HTML5 Web Storage
Peter Lubbers and Brian Albers, DDJ
"HTML5 Web Storage is an API that makes it easy to persist data across web requests. Before the Web Storage API, remote web servers had to store any data that persisted by sending it back and forth from client to server. With the advent of the Web Storage API, however, developers can now store data directly in a browser for repeated access across requests, or to be retrieved long after you completely close the browser, thus greatly reducing network traffic. One more reason to use Web Storage is that this is one of few HTML5 APIs that is already supported in all browsers, including Internet Explorer 8.
By using this simple API, developers can store values in easily retrievable JavaScript objects, which persist across page loads. By using either 'sessionStorage' or 'localStorage', developers can choose to let values survive either across page loads in a single window or tab, or across browser restarts, respectively. Stored data is not transmitted across the network and is easily accessed on return visits to a page. Furthermore, larger values, as high as a few megabytes, can be persisted using the HTML5 Web Storage API. This makes Web Storage suitable for document and file data that would quickly blow out the size limit of a cookie... Values stored in localStorage or sessionStorage can be browsed, much as cookies can, in Chrome's Developer Tools, Safari's Web Inspector, and Opera Dragonfly. These interfaces also let users remove storage values as desired and easily see what values a given web site is recording as they visit its pages..."
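The API itself is small. In the TypeScript fragment below (the key names are invented for the example), both storage objects expose the same key-value interface; only their lifetimes differ.

    // localStorage survives browser restarts; sessionStorage lasts only for
    // the lifetime of the window or tab. Both share one simple interface.
    localStorage.setItem("draftDocument", "Dear reader, ...");
    sessionStorage.setItem("currentTicket", "flight-42A");

    // Values persist across page loads with no network traffic involved.
    const draft = localStorage.getItem("draftDocument");    // "Dear reader, ..."
    const ticket = sessionStorage.getItem("currentTicket"); // scoped to this tab

    // Entries can be removed individually or cleared wholesale.
    localStorage.removeItem("draftDocument");
    sessionStorage.clear();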
The W3C Web Storage specification "defines an API for persistent data storage of key-value pair data in Web clients. It introduces two related mechanisms, similar to HTTP session cookies, for storing structured data on the client side. The first is designed for scenarios where the user is carrying out a single transaction, but could be carrying out multiple transactions in different windows at the same time. Cookies don't really handle this case well. For example, a user could be buying plane tickets in two different windows, using the same site. If the site used cookies to keep track of which ticket the user was buying, then as the user clicked from page to page in both windows, the ticket currently being purchased would 'leak' from one window to the other, potentially causing the user to buy two tickets for the same flight without really noticing. To address this, this specification introduces the sessionStorage IDL attribute. Sites can add data to the session storage, and it will be accessible to any page from the same site opened in that window...
The second storage mechanism is designed for storage that spans multiple windows, and lasts beyond the current session. In particular, Web applications may wish to store megabytes of user data, such as entire user-authored documents or a user's mailbox, on the client side for performance reasons. Again, cookies do not handle this case well, because they are transmitted with every request. The localStorage IDL attribute is used to access a page's local storage area..."
See also: the W3C Web Storage specification
Use of SAML in Name Attributes for the GSS-API EAP Mechanism
Sam Hartman and Josh Howlett (eds), IETF Internet Draft
An initial draft has been published in IETF for the specification of Name Attributes for the GSS-API EAP Mechanism. According to the abstract: "The naming extensions to the Generic Security Services Application Programming Interface (GSS-API) provide a mechanism for applications to discover authorization and personalization information associated with GSS-API names. The Extensible Authentication Protocol GSS-API mechanism allows an Authentication/Authorization/Accounting (AAA) peer to provide authorization attributes alongside an authentication response. It also provides mechanisms to process Security Assertion Markup Language (SAML) messages provided in the AAA response. This document describes the necessary information to use the naming extensions API to access that information."
Details: "SAML assertions can carry attributes describing properties of the subject of the assertion. For example, an assertion might carry an attribute describing the organizational affiliation or e-mail address of a subject. According to Section 8.2 and 2.7.3.1 of the OASIS SAML Core standard, the name of an attribute has two parts. The first is a URI describing the format of the name. The second part, whose form depends on the format URI, is the actual name; currently, GSS-API name attributes take the form of a single URI.
Administrators may need to type SAML attribute names into configuration files or otherwise tell applications how to find attributes. It is desirable to support accessing these attributes from applications that have no awareness of SAML. So, the GSS-API attribute name should be something that an administrator can reasonably easily construct from a SAML attribute name. In particular, adding or removing URI escapes, base64 encoding or similar transformations would significantly decrease usability.
Instead, it seems desirable to extend GSS-API naming extensions to support concepts such as SAML names where the format is specified separately. The format of GSS-API attribute names should be changed. If no space character is found in the name, then the name is interpreted as a URI describing the attribute. Otherwise, the portion from the beginning of the buffer to the first space is interpreted as a URI describing the form and interpretation of the rest of the buffer; this portion is known as the attribute type URI..."
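A sketch of that proposed convention in TypeScript follows; the parsing rule is taken from the draft text above, while the sample attribute values are hypothetical (using SAML's URI name-format identifier and an OID-named attribute).

    // Proposed convention: if the buffer contains a space, the portion
    // before the first space is the attribute type URI and the remainder
    // is the attribute name; otherwise the whole buffer is a URI naming
    // the attribute directly.
    function parseGssAttributeName(buffer: string): { typeUri: string; name?: string } {
      const firstSpace = buffer.indexOf(" ");
      if (firstSpace === -1) {
        return { typeUri: buffer };  // plain URI-named attribute
      }
      return {
        typeUri: buffer.slice(0, firstSpace),  // e.g. a SAML attribute name-format URI
        name: buffer.slice(firstSpace + 1),    // the SAML attribute name itself
      };
    }

    // Hypothetical example: a SAML attribute named by OID under the URI name format.
    parseGssAttributeName(
      "urn:oasis:names:tc:SAML:2.0:attrname-format:uri urn:oid:2.5.4.42"
    );
    // => { typeUri: "urn:oasis:names:tc:SAML:2.0:attrname-format:uri",
    //      name: "urn:oid:2.5.4.42" }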
See also: SAML references
First Look: Firefox 4 Beta 1 Shines on HTML5
Peter Wayner, InfoWorld
"The Firefox 4 beta version brings the browser that much closer to taking over everything on the desktop. There are fewer reasons for anyone to interact with an extra plug-in or the operating system...
More important [than Firefox 4's new Chrome-like interface] are the many new features generally lumped together under the catchall standard HTML5, a specification that's still a draft but has become more of a rallying cry for AJAX, JavaScript, endless tags, and life beyond plug-ins... Many of the enticing new features open up new opportunities for AJAX and JavaScript programmers to add more razzle-dazzle and catch up with Adobe Flash, Adobe AIR, Microsoft Silverlight, and other plug-ins. The CSS transitions, still 'partially supported' in Firefox 4 Beta 1, give programmers the chance to set up one model for changing the CSS parameters without writing a separate JavaScript function to do it. The browser just fades and tweaks the CSS parameters over time.
MathML and SVG data are now a bit easier to mix right in with old-fashioned text. The Canvas and optional WebGL layers can create custom images in the browser without waiting for a server to deliver a GIF... Firefox 4 also adds an implementation of the WebSockets API, a tool for enabling the browser and the server to pass data back and forth as needed, making it unnecessary for the browser to keep asking the server if there's anything new to report... Converting this information into HTML tags is becoming more fluid. The Mozilla release notes, for instance, brag that Firefox 4's parser is 20 percent faster at interpreting the innerHTML calls generated by dynamic JavaScript.
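For readers who haven't tried the API, a minimal TypeScript sketch of the push model follows. The endpoint URL and message strings are placeholders, and production code should feature-detect before relying on WebSockets in any browser of this era.

    // Minimal WebSocket usage: the server pushes data when it has
    // something new, so the page no longer needs to poll.
    const socket = new WebSocket("ws://example.com/updates");  // placeholder endpoint

    socket.onopen = () => {
      socket.send("subscribe:news");  // tell the server what we care about
    };

    socket.onmessage = (event: MessageEvent) => {
      console.log("Server pushed:", event.data);
    };

    socket.onclose = () => {
      console.log("Connection closed");
    };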
[Some performance values] still lag behind the competition. [But] there are areas in which Firefox still leads. Firefox's collection of extensions and plug-ins is still broader and more developed than any other. Firefox 4 nurtures this advantage by making it possible to turn the different extensions on and off without restarting. Firefox is also taking the lead by implementing Google's WebM video standard... The browser programmers are taking the best from each other, and this is competition at its finest..."
Sponsors
XML Daily Newslink and Cover Pages sponsored by:
IBM Corporation | http://www.ibm.com
ISIS Papyrus | http://www.isis-papyrus.com
Microsoft Corporation | http://www.microsoft.com
Oracle Corporation | http://www.oracle.com
Primeton | http://www.primeton.com
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/