This issue of XML Daily Newslink is sponsored by:
ISIS Papyrus http://www.isis-papyrus.com
- OAuth Dynamic Client Registration Protocol for User-Managed Access
- New W3C Working Drafts for RDFa Core 1.1 and XHTML+RDFa 1.1
- OASIS Public Review: Product Life Cycle Support DEXs Version R4
- Google's Vision for Cloud Identity
- W3C Recommendation: XHTML Modularization 1.1 Second Edition
- Worse is Better, or Is It?
- The Best Web Browser: Chrome, Firefox, Internet Explorer, Opera, Safari?
OAuth Dynamic Client Registration Protocol for User-Managed Access
Christian Scholz, Maciej Machulak, Eve Maler (eds), IETF Internet Draft
Members of the IETF Open Authentication Protocol (OAuth) Working Group have published a first public working draft for the Standards Track Internet Draft OAuth Dynamic Client Registration Protocol. The goal in this proposed OAuth Dynamic Client Registration protocol is for an authorization server "to provide a client with a client identifier and optionally a client secret in a dynamic fashion. To accomplish this, the authorization server must first be provided with information about the client, with the client-name being the minimal information provided. In practice, additional information will need to be furnished to the authorization server, such as the client's homepage, icon, description, and so on."
From the document Introduction: "This informal draft discusses a number of requirements for and approaches to automatic registration of clients with an OAuth authorization server, with special emphasis on the needs of the OAuth-based User-Managed Access protocol (UMA-Core). In some use-case scenarios it is desirable or necessary to allow OAuth clients to obtain authorization from an OAuth authorization server without the two parties having previously interacted. Nevertheless, in order for the authorization server to accurately represent to end-users which client is seeking authorization to access the end-user's resources, a method for automatic and unique registration of clients is needed... The dynamic registration protocol proposed here is envisioned to be an additional task to be performed by the OAuth authorization server, namely registration of a new client identifier and optional secret and the issuance of this information to the client. This task would occur prior to the point at which the client wields its identifier and secret at the authorization server in order to obtain an access token in normal OAuth fashion."
Use Cases: "The UMA protocol involves two instances of OAuth flows. In the first, an end-user introduces a host (essentially an enhanced OAuth resource server) to an authorization manager (an enhanced OAuth authorization server) as a client of it, possibly without that host having obtained client identification information from that server previously. In the second, a requester (an enhanced OAuth client) approaches a host and authorization manager to get and use an access token in approximately the normal OAuth fashion, again possibly without that client having obtained client identification information from that server previously. Both the host-as-client and the requester-as-client thus may need dynamic client registration in order for the UMA protocol flow to proceed. The needs for inter-party trust vary in different UMA use cases... In cases where high-sensitivity information is being protected or where a regulatory environment puts constraints on the building of trust relationships, such as sharing health records with medical professionals or giving access to tax records to outsourced bookkeeping staff, static means of provisioning client identifiers may be imposed."
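To make the registration exchange concrete, the sketch below shows the kind of information a client might submit and receive. This is illustrative only: the field names and values are assumptions for the sake of the example, not the draft's normative parameter names.

```python
# Hypothetical dynamic-registration exchange (field names are
# illustrative assumptions, not taken from the Internet Draft).
import json

# Information furnished to the authorization server, with the client
# name as the minimal required item; homepage, icon, and description
# are the kinds of additional information the draft anticipates.
registration_request = {
    "client_name": "Example Photo Printer",
    "client_homepage": "https://printer.example.net/",
    "client_icon": "https://printer.example.net/icon.png",
    "client_description": "Prints photos held at a user's photo host.",
}

# A hypothetical response: the dynamically issued client identifier
# and optional client secret, which the client later wields at the
# authorization server to obtain an access token in normal OAuth fashion.
registration_response = {
    "client_id": "dyn-8f2c1a9e",
    "client_secret": "0e1b-example-secret",
}

print(json.dumps(registration_request, indent=2))
```

The point of the exchange is simply that the identifier/secret pair is minted at this step rather than provisioned statically in advance.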
An editor's note (Eve Maler): "The UMA group has produced the following I-D as input to the OAuth discovery/registration/binding discussion. We wanted to set forth our requirements (knowing that there may be other requirements from the wider community) and propose some solutions that meet them. If further discussion seems to warrant an updating of this draft, we're happy to do that. If you have interest in getting involved in UMA-specific work, feel free to drop me a note. - Eve"
New W3C Working Drafts for RDFa Core 1.1 and XHTML+RDFa 1.1
Ben Adida, Mark Birbeck, Shane McCarron (eds), W3C Technical Reports
Members of the W3C RDFa Working Group have published working draft specifications for RDFa Core 1.1: Syntax and Processing Rules for Embedding RDF Through Attributes and XHTML+RDFa 1.1: Support for RDFa via XHTML Modularization. RDFa Core is "a specification for attributes to express structured data in any markup language. The embedded data already available in the markup language (e.g., XHTML) is reused by the RDFa markup, so that publishers don't need to repeat significant data in the document content... RDFa shares some of the same goals with microformats. Whereas microformats specify both a syntax for embedding structured data into HTML documents and a vocabulary of specific terms for each microformat, RDFa specifies only a syntax and relies on independent specification of terms (often called vocabularies or taxonomies) by others. RDFa allows terms from multiple independently-developed vocabularies to be freely intermixed and is designed such that the language can be parsed without knowledge of the specific vocabulary being used...
From 'RDFa Core 1.1': "The current Web is primarily made up of an enormous number of documents that have been created using HTML. These documents contain significant amounts of structured data, which is largely unavailable to tools and applications. When publishers can express this data more completely, and when tools can read it, a new world of user functionality becomes available, letting users transfer structured data between applications and web sites, and allowing browsing applications to improve the user experience: an event on a web page can be directly imported into a user's desktop calendar; a license on a document can be detected so that users can be informed of their rights automatically; a photo's creator, camera setting information, resolution, location and topic can be published as easily as the original photo itself, enabling structured search and sharing.
Embedded data already available in markup languages (e.g., XHTML) is reused by the RDFa markup, so that publishers don't need to repeat significant data in the document content. The underlying abstract representation is RDF, which lets publishers build their own vocabulary, extend others, and evolve their vocabulary with maximal interoperability over time. The expressed structure is closely tied to the data, so that rendered data can be copied and pasted along with its relevant structure. The rules for interpreting the data are generic, so that there is no need for different rules for different formats; this allows authors and publishers of data to define their own formats without having to update software, register formats via a central authority, or worry that two formats may interfere with each other..."
From 'XHTML+RDFa 1.1: Support for RDFa via XHTML Modularization': "RDFa Core 1.1 itself defines attributes and syntax for embedding semantic markup in Host Languages. This document defines one such Host Language. This language is a superset of XHTML 1.1, integrating the attributes as defined in RDFa Core 1.1. This document is intended for authors who want to create XHTML-Family documents that embed rich semantic markup..."
See also: W3C RDFa API
OASIS Public Review: Product Life Cycle Support DEXs Version R4
Tor Arne Irgens (ed), OASIS Public Review Draft
Members of the OASIS Product Life Cycle Support (PLCS) Technical Committee have released the approved "Product Life Cycle Support DEXs Version R4" specification for public review. The review period ends 24-August-2010.
This OASIS group collaborates on the deployment of an international standard for product data exchange (ISO 10303) to support complex engineered assets from concept to disposal. The TC was chartered to establish structured data exchange and sharing capabilities for use by industry to support complex engineered assets throughout their total life cycle. These Data Exchange Sets (DEXs) are based upon ISO 10303 (STEP) Application Protocol 239 (Product Life Cycle Support). The OASIS Product Life Cycle Support TC is responsible for defining, developing, testing, and publishing OASIS Product Life Cycle Support DEXs, and for liaison with ISO TC 184/SC4. The TC will coordinate these activities with relevant OASIS Technical Committees and promote the use of OASIS Product Life Cycle Support DEXs across industries and governments worldwide.
From the specification Introduction: "The business goals of the OASIS PLCS DEXs are to satisfy three significant requirements for the owners/operators of complex products and systems such as aircraft, ships, and power plants, namely: (1) Reduction in the total cost of ownership; (2) Increased asset availability; (3) Effective information management throughout the product lifecycle.
Lifecycle data needed 'is often distributed over multiple IT systems and organizations, and historically has been difficult to access and consolidate. The PLCS standard provides a large, integrated information model covering the whole lifecycle. The PLCS standard provides the basic mechanisms enabling neutral file exchanges between IT systems and organisations. This helps remove delays and costs for both the end user of the product and the supplier, and is particularly important for service-based contracts such as 'power-by-the-hour'... The information content of PLCS covers: The identification and composition of a product design from a support viewpoint; The definition of documents and their applicability to products and support activities; The identification and composition of individual products; Configuration management activities, over the complete life cycle; Activities required to sustain product function; The resources needed to perform such activities; The planning and scheduling of such activities; The capture of feedback on the performance of such activities, including the resources used; The capture of feedback on the usage and condition of a product; The definition of the support environment in terms of people, organizations, skills, experience and facilities..."
See also: the OASIS announcement
Google's Vision for Cloud Identity
Eric Sachs, Conference Paper
This presentation by Eric Sachs (Senior Product Manager, Google Security) was given at the 2010 Cloud Identity Summit. The summit included a session on 'Dissecting Cloud Identity Standards', premised on the observation that secure Internet identity infrastructure requires standard protocols, interfaces, and APIs. The summit goals were to help make sense of the alphabet soup presented to end-users, including OpenID, SAML, SPML, XACML, OIDF, ICF, OIX, OSIS, OAuth (IETF), OAuth WRAP, SSTC, WS-Federation, WS-SX (WS-Trust), IMI, Kantara, Concordia, Identity in the Clouds (a new OASIS TC), Shibboleth, the Cloud Security Alliance, and TV Everywhere...
Sachs' paper overviews Google's goals in identity services: to increase growth and to provide a more seamless user experience. Google provides federated identity services for over 2 million businesses and hundreds of millions of users. He explains why Google has made such a large investment in technologies such as OpenID and OAuth, and how consumer websites and enterprise-oriented websites are connecting... [Excerpts:] Broad Net-wide goals are to: (1) Reduce friction on the Internet by improving collaboration between users, especially between companies; promoting data sharing between users and their service providers; and enhancing user experience through personalization and increased signup rates. (2) Increase user confidence in the security of the Internet by reducing password proliferation and re-use across sites; promoting high adoption of multi-factor authentication; and advancing user/enterprise-controlled data-sharing...
As to eliminating passwords by using open standards: No one company can do this on its own. Consistency in user interface/experience is critical. Support from major players is a must (Microsoft, Facebook, Google, Yahoo, AOL, etc.). The solution must support not just consumers, but also small/medium-sized businesses and enterprises, and it must work globally. It's not just web apps: the solution must support iPhone apps, POP/IMAP apps, Windows apps, Mac apps, Linux apps, Blackberry apps, etc. If the app's website has no password for the user, what does the user type in the login box? It's the same problem as OpenID and SAML. On a web login page, we redirect via SAML/OpenID. What do you do from a login page that is not in a web browser?
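The web-page redirect mentioned above can be sketched as follows. The snippet builds the URL a login page might redirect the browser to; the parameter names follow OAuth 2.0 draft conventions, and the endpoint, client identifier, and callback URL are all hypothetical.

```python
# Hedged sketch: constructing the redirect URL a web login page might
# use to send the user to an identity provider. Endpoint and values
# are hypothetical; parameter names follow OAuth 2.0 conventions.
from urllib.parse import urlencode

AUTHZ_ENDPOINT = "https://provider.example.com/authorize"  # hypothetical

params = {
    "response_type": "code",                    # ask for an authorization code
    "client_id": "s6BhdRkqt3",                  # issued at (possibly dynamic) registration
    "redirect_uri": "https://client.example.org/cb",
    "scope": "openid",
}
redirect_url = AUTHZ_ENDPOINT + "?" + urlencode(params)
print(redirect_url)
```

The unsolved problem Sachs points at is exactly that a desktop mail client or phone app has no browser page to redirect from, so this flow does not transfer directly to non-web apps.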
Multi-factor authentication unlocks a market for multi-factor auth vendors, especially mobile phone/network providers; usability is greatly improved by linking a user/employee's single identity provider with multi-factor authentication..."
See also: the online Summit presentations
W3C Recommendation: XHTML Modularization 1.1 Second Edition
Shane McCarron, Daniel Austin, Subramanian Peruvemba (eds), W3C Technical Report
W3C announced the publication of XHTML Modularization 1.1 Second Edition as a final Recommendation. The 'Recommendation' level of standardization in W3C's model of specification maturity means that the document has been reviewed by W3C Members, by software developers, and by other W3C groups and interested parties, and is endorsed by the Director as a W3C Recommendation. It is a stable document and may be used as reference material or cited from another document. W3C's role in making the Recommendation is to draw attention to the specification and to promote its widespread deployment. This enhances the functionality and interoperability of the Web. This standard was produced by the W3C XHTML 2 Working Group as part of the HTML Activity. The document is available in several non-normative formats: Single HTML file, PostScript version, PDF version, ZIP archive, or Gzip'd TAR archive.
The second edition of version 1.1 of XHTML Modularization defines an abstract modularization of XHTML and implementations of the abstraction using XML Document Type Definitions (DTDs) and XML Schemas. This modularization provides a means for subsetting and extending XHTML, a feature needed for extending XHTML's reach onto emerging platforms. This specification is intended for use by language designers as they construct new XHTML Family Markup Languages. This specification does not define the semantics of elements and attributes, only how those elements and attributes are assembled into modules, and from those modules into markup languages. This update includes several minor updates to provide clarifications and address errors found in version 1.1.
The specification supersedes the previous edition of XHTML Modularization 1.1, reflecting mostly minor corrections to ensure consistency among various markup languages that rely upon XHTML Modularization. Most significant among these are: (1) Changing the datatype of the 'class' attribute so that it permits an empty value, restoring the historical behavior in which 'class' was permitted to be empty. (2) Moving the 'name' attribute for the 'form' and 'img' markup elements out of the legacy module and into their base modules; this attribute is required for some scripting constructs. (3) Changing the datatype of the 'usemap' attribute from 'IDREF' to 'URIREF', because most user agents require that map references be relative URIs that are local to the document.
"The modularization of XHTML refers to the task of specifying well-defined sets of XHTML elements that can be combined and extended by document authors, document type architects, other XML standards specifications, and application and product designers to make it economically feasible for content developers to deliver content on a greater number and diversity of platforms... XHTML Modularization is a decomposition of XHTML 1.0, and by reference HTML 4, into a collection of abstract modules that provide specific types of functionality. These abstract modules are implemented in this specification using the XML Schema and XML Document Type Definition languages. The rules for defining the abstract modules, and for implementing them using XML Schemas and XML DTDs, are also defined in this document..."
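The 'usemap' change noted among the corrections above can be illustrated with a small check: user agents expect a same-document fragment reference such as "#map1" rather than a bare ID. The helper below is an illustrative sketch, not part of the specification.

```python
# Sketch of why 'usemap' moved from IDREF to a URI datatype: most
# user agents expect a relative URI local to the document, i.e. a
# fragment-only reference like "#map1". Helper is illustrative only.
from urllib.parse import urlparse

def is_local_map_reference(usemap_value):
    """True if the value is a relative URI local to the document."""
    parts = urlparse(usemap_value)
    return (parts.scheme == "" and parts.netloc == ""
            and parts.path == "" and parts.fragment != "")

print(is_local_map_reference("#map1"))        # fragment-only reference
print(is_local_map_reference("maps.html#m"))  # points outside the document
```

Under IDREF the value "#map1" would have been invalid (IDREFs carry no '#'), so the URI datatype better matches what deployed user agents actually require.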
Worse is Better, or Is It?
Eliot Kimber, Dr. Macro's XML Rants Blog
"At the just-concluded Balisage Conference, Michael Sperberg-McQueen brought up the (apparently) famous 'worse is better' essay by Richard P. Gabriel... Gabriel's original argument is essentially that software that chooses simplicity over correctness and completeness has better survivability for a number of reasons, and cites as a prime example Unix and C, which spread precisely because they were simple (and thus easy to port) in spite of being neither complete functionally nor consistent in terms of their interfaces (user or programming). Gabriel then goes on, over the years, to argue against his own original assertion that worse is better and essentially falls into a state of oscillation between 'yes it is' and 'no it isn't'...
Thinking about 'worse is better' and Gabriel's inability to decide conclusively whether it is actually better led me to a conclusion: Gabriel can't decide because both sides of his dichotomy are in fact wrong. In the New Jersey approach, 'finished' is defined by the implementors with no obvious reference to any objective test of whether they are in fact finished. At the same time, the MIT approach falls into the trap that agile methods are designed explicitly to avoid, namely overplanning and implementation of features that may never be used...
Both the MIT and New Jersey approaches ultimately fail because they are not directly requirements driven in the way that agile methods are and must be. Or put another way, the MIT approach reflects the failure of overplanning and the New Jersey approach reflects the failure of underplanning. Agile methods, as typified by Extreme Programming, attempt to solve the problem by doing just the right amount of planning, and no more, and that planning is primarily a function of requirements gathering and validation in the support of iteration.
To that degree, agile engineering is much closer to the worse is better approach, in that it necessarily prefers simplicity over completeness and it tends, by its start-small-and-iterate approach, to produce smaller solutions faster than a planning-heavy approach will..."
See also: the early 1991 paper
The Best Web Browser: Chrome, Firefox, Internet Explorer, Opera, Safari?
Peter Wayner, InfoWorld
"The challenges of choosing a Web browser are greater now because the browser is becoming the home for almost everything we do. Do you have documents to edit? There's a website for that. Did you miss a television show? There's a website for that. Do you want to announce your engagement? There's a website for that too. The Web browser handles all of that and more... On one hand, the programs are as close to commodities as there are in the computer industry. The core standards are pretty solid and the job of rendering the document is well understood. Most differences can be smoothed over when the Web designers use cross-platform libraries like jQuery...
It's easy for a programmer to be enthusiastic about Google's Chrome 5.0 because Google has been emphasizing some of the things that programmers love. Chrome sticks each Web page in a completely separate process, which you can see by opening up Windows Task Manager. If some Web programmer creates an infinite loop or a bad AJAX call in a Web page, Chrome isolates the trouble. Your other pages can keep on running... Best for: People who want to juggle many windows filled with code that crashes every so often. Worst for: People who get upset when a website breaks because the developer tested the site on IE only...
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: email@example.com
Newsletter unsubscribe: firstname.lastname@example.org
Newsletter help: email@example.com
Cover Pages: http://xml.coverpages.org/