XML Daily Newslink. Tuesday, 30 September 2008

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Primeton http://www.primeton.com



W3C Announces Workshop on Security for Access to Device APIs from the Web
Dominique Hazael-Massieux, W3C Announcement

W3C has issued a public invitation for participation in a "Workshop on Security for Access to Device APIs from the Web," to be hosted by Vodafone in London on December 10-11, 2008. Workshop co-chairs include Nick Allot (OMTP) and Thomas Roessler (W3C). The goal of this workshop is to bring together people from a wide variety of backgrounds (API designers, security experts, usability experts, ...) to discuss the security challenges involved in allowing Web applications and widgets to access the APIs that control device features, and to advise the W3C on appropriate next steps for any gap that needs to be addressed with new technical work. Rationale: As the Web becomes a ubiquitous development platform, application developers need access to the features available on the computers or devices on which their Web application (through a browser or through a widget) is running. With the emergence of the Web as a compelling alternative to locally installed applications, security issues are an increasing obstacle to realizing the full potential of the Web, in particular when Web application developers need access to features not traditionally available in the browsing environment: cameras, GPS systems, connectivity and battery levels, launching external applications, access to personal data (e.g., calendar or address book), etc. W3C membership is not required in order to participate in the Workshop, but position papers are required and must be submitted by email no later than October 30, 2008. Topics suitable for position papers and discussion points include: (1) Existing frameworks on desktop and mobile platforms to regulate security policies for specific APIs; (2) Similarities and differences of the security approaches in desktop and mobile platforms, in a browser and in a widgets environment; (3) Usability of security-relevant user interactions; issues and opportunities in the mobile environment; (4) Safe language and API subsets, and models for application use of such subsets; (5) Policy-based trust delegation mechanisms; (6) Reducing the attack surface exposed by Web page scripts; (7) Role of authentication of users and applications in securing API access; (8) Increasing awareness of good security practices for Web applications; (9) Usability of security and privacy policies. [Note other W3C Workshops and Symposia.]

See also: W3C Mobile Web Initiative


OAuth: HTTP Authorization Delegation Protocol
Eran Hammer-Lahav and Blaine Cook, IETF Internet Draft

A version -00 Standards-track IETF Internet Draft has been submitted for the "OAuth: HTTP Authorization Delegation Protocol." The editors note that this specification is substantially identical to OAuth Core 1.0; an associated posting clarifies the differences. The OAuth protocol enables websites or applications (Consumers) to access Protected Resources from a web service (Service Provider) via an API, without requiring Users to disclose their Service Provider credentials to the Consumers. More generally, OAuth creates a freely implementable and generic methodology for API authentication. An example use case is allowing printing service printer.example.com (the Consumer) to access private photos stored on photos.example.net (the Service Provider) without requiring Users to provide their photos.example.net credentials to printer.example.com. OAuth does not require a specific user interface or interaction pattern, nor does it specify how Service Providers authenticate Users, making the protocol ideally suited for cases where authentication credentials are unavailable to the Consumer, such as with OpenID. OAuth aims to unify the experience and implementation of delegated web service authentication into a single, community-driven protocol. OAuth builds on existing protocols and best practices that have been independently implemented by various websites. An open standard, supported by large and small providers alike, promotes a consistent and trusted experience for both application developers and the users of those applications... OAuth includes a Consumer Key and matching Consumer Secret that together authenticate the Consumer (as opposed to the User) with the Service Provider. Consumer-specific identification allows the Service Provider to vary access levels to Consumers (such as un-throttled access to resources). Service Providers should not rely on the Consumer Secret as a method to verify the Consumer identity, unless the Consumer Secret is known to be inaccessible to anyone other than the Consumer and the Service Provider. The Consumer Secret may be an empty string (for example when no Consumer verification is needed, or when verification is achieved through other means such as RSA)...
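
The Consumer Key/Secret mechanics lend themselves to a short illustration. Below is a minimal Python sketch of OAuth 1.0-style HMAC-SHA1 request signing; the URL, Consumer Key, and Consumer Secret are placeholders, and a real Consumer must follow the draft's exact percent-encoding and parameter-collection rules rather than this simplified version.

import base64
import hashlib
import hmac
import time
import urllib.parse
import uuid

def sign_request(method, url, params, consumer_secret, token_secret=""):
    """Build an OAuth 1.0-style signature base string and sign it with HMAC-SHA1."""
    encode = lambda s: urllib.parse.quote(str(s), safe="")
    # Request parameters are sorted and percent-encoded before concatenation.
    normalized = "&".join(f"{encode(k)}={encode(v)}" for k, v in sorted(params.items()))
    base_string = "&".join([method.upper(), encode(url), encode(normalized)])
    # The signing key is the Consumer Secret plus the (possibly empty) Token Secret.
    key = f"{encode(consumer_secret)}&{encode(token_secret)}".encode()
    return base64.b64encode(hmac.new(key, base_string.encode(), hashlib.sha1).digest()).decode()

# Placeholder Consumer credentials issued by the Service Provider (photos.example.net).
oauth_params = {
    "oauth_consumer_key": "dpf43f3p2l4k3l03",   # placeholder Consumer Key
    "oauth_nonce": uuid.uuid4().hex,            # unique per request
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_timestamp": str(int(time.time())),
    "oauth_version": "1.0",
}
print(sign_request("GET", "http://photos.example.net/photos",
                   oauth_params, consumer_secret="kd94hf93k423kf44"))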

See also: the associated posting


OASIS Public Review Draft: UBL Guidelines for Customization Version 1.0
Michael Grimley, Mavis Cournane, Tim McGrath (et al., eds), OASIS PRD

Members of the OASIS Universal Business Language (UBL) Technical Committee have submitted an approved Committee Draft of "UBL Guidelines for Customization Version 1.0" for public review through November 29, 2008. The OASIS UBL TC has produced a vocabulary that, for many user communities, can be used 'as is.' However, the TC also recognizes that some user communities must address use cases whose requirements are not met by the UBL off-the-shelf solution. These Guidelines are intended to aid such users in developing custom solutions based on UBL. The goal of these UBL customization guidelines is to maintain a common understanding of the meaning of information being exchanged between specific implementations. The determining factors governing when to customize may be business-driven, technically driven, or both. The decision should be driven by real-world needs balanced against perceived economic benefits. The UBL library and document schemas have been developed from conceptual models based on the principles of the ebXML Core Component Technical Specification. These are then expressed in W3C XML Schema (XSD), based upon the UBL Naming and Design Rules. It is these schemas that may be used to both specify and validate UBL conformance. It is recommended that a similar approach be followed when customizing UBL... Customizations of UBL may be refined even further for different scenarios. A profile characterizes the choreography of an interchange. A given document type may have two different sets of constraints in two different profiles of the same customization. For example, an invoice instance used in the choreography of a Basic Procurement profile may not require as many information entities as an invoice instance used in the different choreography of an Advanced Procurement profile. Thus the three dimensions of the version of a set of UBL document structural constraints are defined by the UBL version (standard), the business process context version (customization), and the choreography version (profile). An instance claiming to satisfy the document constraints for a particular profile in a customization asserts this in the UBLCustomizationID and UBLProfileID entities...
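
As a small illustration of that last point, the Python sketch below reads the customization and profile assertions from a UBL invoice instance. It assumes the UBLCustomizationID and UBLProfileID entities appear as the CustomizationID and ProfileID elements in the standard UBL 2.0 common basic components namespace; the file name is hypothetical.

import xml.etree.ElementTree as ET

# Assumed: standard UBL 2.0 common basic components namespace.
CBC = "urn:oasis:names:specification:ubl:schema:xsd:CommonBasicComponents-2"

tree = ET.parse("invoice.xml")   # hypothetical UBL Invoice instance
root = tree.getroot()
customization = root.findtext(f"{{{CBC}}}CustomizationID")
profile = root.findtext(f"{{{CBC}}}ProfileID")
print("Customization (business process context):", customization)
print("Profile (choreography):", profile)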

See also: the announcement


The Derivatives Crisis and Standards
Rick Jelliffe, O'Reilly Technical

"What I have found particularly interesting in my brief excursions into the dens of the technical beavers of large financial institutions is how much the move to standards requires shoehorning: these institutions typically have very large investments in transaction or closed-world systems with long life-cycles. Indeed, some of them the lifecycle is so long, and the facilities so mission critical to the business, that for all intents and purposes they will never go away. Anyone who has implemented the ACORD insurance standard for XML will probably attest to how many non-ACORD extensions are necessary. And I have heard of a financial institution where mortgage requests from brokers get processed disconnected from information about which broker made the request: the broker has to re-send information in order to get billed; this is not because of some reasonable anonymising system for privacy or whatever, it is apparantly merely because their approval system was designed before the mortgage broker market existed and has never been upgraded... We hear a lot of talk about Web 2.0, but has the financial sector even got to Web 1.0, really? [...] The reduction in the cost of memory and data transmission has naturally changed the economics of information, and lead to the idea that organizations can be (internally and externally) generous information with rather than parsimonious. The XML phenomenon has piggybacked and lead this change. Rather than having database reports with highly targeted and specific requests, to reduce the number of joins for instance, the idea is to reduce the amount of filtering at the DBMS side, and have data reports available with perhaps more than is necessary for specific tasks, but which thereby becomes useful for multiple projects and new, even spontaneous, uses... Making sure that data reports are never disconnected from this kind of primary metadata seems to me to be a pre-condition for proper electronic aggregation, where you need to be able to de-aggregate both for auditing and to allow disentangling of complex information (such as derivatives.) The other aspect is of universal identification: I have been exploring this in the PRESTO columns in this blog. Universal identification means not that every XML document should be available on the WWW with a URL, but that every significant piece of data at every significant level of granularity in an organization should have a clear, ubiquitous, hierarchical identifier regardless of whether the information can be retrieved using that identifier at any particular point in time..."


Sun Launches Open Source OpenSSO for Identity Management
Neil Roiter, SearchSecurity.com

Sun Microsystems' OpenSSO Enterprise is a major upgrade over its Sun Java System Access Manager predecessor, and analysts say it's an intriguing open source model for major commercial products. OpenSSO Enterprise combines access management, federation and secure Web services in a single product. It was built in collaboration with the OpenSSO project, which is based on Access Manager code. The core components are available for download. Sun Microsystems Inc. has staked a lot on its open source initiatives to enhance its stature in the development community, strengthen its offerings, and, of course, boost sales... John Barco, Sun's director of product management, said OpenSSO represents the company's overall strategy for making all operating system software open source. He cited transparency about the product, the code and the development roadmap, so customers know what features are coming. In that vein, the new model will give customers the option of downloading fully tested product updates at three-month intervals, or waiting for the full annual update release. Barco said the open source approach allows this kind of schedule, as community participation helps vet new releases... Open source aside, OpenSSO packs a lot more than the last Access Manager release: (1) Access management with an embedded directory server, OpenDS, so OpenSSO can be implemented without necessarily configuring or deploying a stand-alone directory. Barco said that OpenDS is purpose-built for embedded technologies and telcos; it's not meant to compete with or supplant the company's SunOne enterprise directory. (2) The federation is a hub-and-spoke architecture, the spokes being easy-to-implement packages called, somewhat cutely, Fedlets (reminiscent of Big Fix's Fixlets?). The architecture, Barco said, allows enterprises to create federation partners by simply sending a small (8.5 MB) Fedlet package. The partner adds the Fedlet to the appropriate container, filter or application to create a quick SAML 2.0-based relationship. (3) The Secure Web services component includes a security token service, which can also be deployed standalone to support third-party products.

See also: the announcement


Combined Presence Schemas Utilizing RELAX NG
Jari Urpalainen (ed), IETF Internet Draft

This memo describes a set of Presence Information Data Format (PIDF) and PIDF-extension schemas written in the RELAX NG schema language. Unlike with the current W3C XML Schema language, it is possible to write reasonably forwards- and backwards-compatible combined presence schemas. These RELAX NG schemas are stricter than the W3C Schemas, and thus the instance documents that validate with these schemas follow the intended content model more closely. In particular, these schemas are targeted at actual implementations in order to decrease interoperability problems. The set includes schemas for PIDF, DataModel, RPID, CIPID, CAPS and LocationTypes. These schemas are more restrictive than the corresponding W3C XML Schemas, i.e., if instance documents validate according to these schemas, they should also validate with the W3C XML Schemas. These schemas are provided as informative material for applications that wish to utilize RELAX NG as a validation tool. All schemas written with the RELAX NG schema language are based on patterns. The W3C Schema datatypes are used to constrain the element and attribute content in these schemas. The model for these schemas is based on the approximate chronological order of appearance of these schemas: the PIDF schema is the baseline schema, and the DataModel schema includes it while adding some extensions. Then the RPID schema includes the DataModel schema and defines new extensions, etc. When an implementation wants to validate an instance document, it just has to provide a single schema, e.g., an RPID reference, to the validator, as that schema will include all the others. Extension points, i.e., where 'any' wildcards are used in W3C XML Schemas, are described by adding a similar extension definition which can be extended by using the combine="interleave" pattern rule. The wildcard definition must be redefined in extension schemas since name classes must not overlap with the interleave pattern. The schemas presented in this memo are thus deterministic and unambiguous, although determinism is not a general requirement of the RELAX NG schema language. The ability to easily redefine extension points can help to detect implementation errors when an application does not have any extensions beyond (e.g.) RPID and DataModel...
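
The "single schema" validation flow described above can be exercised with any RELAX NG processor. The sketch below assumes the lxml Python bindings and that the draft's schemas have been saved locally under the hypothetical file names shown.

from lxml import etree

# Only the top-most schema is handed to the validator; it pulls in the schemas
# it includes (DataModel, then PIDF) per the memo's layering.
relaxng = etree.RelaxNG(etree.parse("rpid.rng"))   # hypothetical local copy of the RPID schema
presence_doc = etree.parse("presence.xml")         # a PIDF/RPID presence document to check
if relaxng.validate(presence_doc):
    print("instance follows the combined RPID content model")
else:
    print(relaxng.error_log)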

See also: RELAX NG as ISO DSDL, Part 2


Real Web 2.0: Open, Geographic Information Systems at Geonames.org
Uche Ogbuji, IBM developerWorks

The most wonderful thing about the open data aspect of Web 2.0 is that sometimes such resources include all the data you need to create your own little magic corners of the Web. GeoNames is one of those sites and services that is not just indispensable in its own right, but is also an important ingredient in other indispensable services. It's a Web site built around a well-designed, freely accessible database of geographical information. GeoNames is a database, Web service, and destination site for all things geographical. It has a rich, RESTful API and offers Semantic Web features using Linking Open Data conventions. The GeoNames geographical database is available for download free of charge under a Creative Commons attribution license. It contains over eight million geographical names and consists of 6.5 million unique features, of which 2.2 million are populated places, along with 1.8 million alternate names. All features are categorized into one of nine feature classes and further subcategorized into one of 645 feature codes. The data is accessible free of charge through a number of web services and a daily database export. GeoNames is already serving over 11 million web service requests per day. GeoNames integrates geographical data such as names of places in various languages, elevation, population, and other attributes from various sources. Numerous types of queries can be performed on GeoNames, for example: (1) Find places near a postal code, by country—returning an XML file or JSON feed; (2) Find the postal codes near a given latitude/longitude—returning an XML file; (3) Find the 'children' of a given geographical feature, for example the provinces within a country, or the settlements within a province, returning an XML file or JSON feed; (4) Find geocoded Wikipedia articles near a given latitude/longitude, postal code, or place name—returning an XML file or JSON feed; (5) Find all neighbors of a country; (6) Find the weather stations and their most recent weather observations within a bounding box of four latitude/longitude pairs—returning an XML file; (7) Get the time zone at a given latitude/longitude; (8) Get the elevation in meters for a latitude/longitude representing a land area... GeoNames is also an anchor of what some are calling "Web 3.0"—the Semantic Web. In an earlier installment of this column I discussed Linking Open Data (LOD), which is a practical initiative for enabling the Semantic Web. GeoNames is a key part of LOD, thanks to its support of RDF metadata for places (in fact, it supports a very detailed ontology to try to establish very clear context for everything). In that earlier discussion I described using HTTP 303 codes to give URIs to non-computable resources such as people and abstract qualities. GeoNames uses this approach to give places URIs suitable for the Semantic Web...
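
As a concrete illustration of query type (2) above, here is a Python sketch against the GeoNames JSON web service. The endpoint name and the required 'username' parameter reflect the current free service and are assumptions on my part; the account name is a placeholder, and the exact interface should be confirmed against the GeoNames API documentation.

import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "lat": 47.3,         # latitude of interest
    "lng": 9.0,          # longitude of interest
    "username": "demo",  # placeholder GeoNames account name
})
url = "http://api.geonames.org/findNearbyPostalCodesJSON?" + params
with urllib.request.urlopen(url) as response:
    data = json.load(response)
for entry in data.get("postalCodes", []):
    print(entry.get("postalCode"), entry.get("placeName"))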

See also: W3C Basic Geo (WGS84 lat/long) Vocabulary


To Render or Not to Render XBRL
Diane Mueller, O'Reilly Broadcast

One of the interesting long-running conversations in the XBRL technical community has been the discussion of what the market wants and needs in terms of rendering XBRL content. For a long time, we debated whether or not XBRL data would ever be rendered. The main camp on one side of this many-sided conversation held fast to the belief that XBRL data would only ever be transmitted from machine to machine and would never be seen by the human eye. However, there was a counter-argument that accountants and regulators (who only recently put down their pencils and paper to switch to spreadsheets and keypads) would still have serious trust issues and would at least need to see the XBRL content rendered in order to audit and sign off on the content before submitting or publishing the data to various stakeholders... Financial reports have a long and checkered history, and the value proposition of XBRL is that it will help dispel some of the obfuscation that goes on in the reporting of financial data to the public, ensuring accurate, high-quality, valid information to feed the voracious supply chain that consumes the stuff. As I watch the meltdowns on Wall Street and the effects of the mortgage crisis and listen to the mud-slinging in the election campaigns, I can't help but hark back to the Enron, WorldCom and Arthur Andersen meltdowns that were some of the impetuses for the remaining Big 4 to get behind the XBRL technology bandwagon in the first place... The question of rendering currently centers on creating a 'canonical' rendering of a financial report... Getting the 'canonical' rendering into the hands of the consuming stakeholders is an important aspect of financial reporting. In the Internet age, it's all about the 'eyeballs on the glass' approach, i.e., what our computer screens show us is what we believe, and enabling the applications we use to consume this content accurately is the key to successful business decisions. The XBRL technical community has addressed this issue in three approaches...

See also: XBRL references


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com
Sun Microsystems, Inc. http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2008-09-30.html
Robin Cover, Editor: robin@oasis-open.org