XML Daily Newslink. Thursday, 16 October 2008

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
IBM Corporation http://www.ibm.com



Site-Wide Metadata for the Web
Mark Nottingham and Eran Hammer-Lahav (eds), IETF Internet Draft

Mark Nottingham and Eran Hammer-Lahav of Yahoo! have published an initial -00 Internet Draft specifying "Site-Wide Metadata for the Web." According to the commentary in Mark's blog: "Metadata discovery is a nagging problem that's been hanging around the Web for a while. There have been a few stabs at this problem, but no real progress. This is both unfortunate and worrisome, because as the next generation of Web-based protocols informed by REST, Web 2.0 and the like roll out, they're going to need a way to find and talk about metadata on the Web in an automated fashion... The immediate need is for XRDS-Simple; Eran wanted a way to find security metadata for a site, and in discussion we agreed that rather than re-inventing the wheel for the Nth time, we'd try to do it right... and so 'site-meta' was born: an ultra-simple, lightweight and minimally intrusive way to find a Web site's metadata."

From the draft: "It is increasingly common for Web-based protocols to require the discovery of policy or metadata about a site before communicating with it. For example, the Robots Exclusion Protocol specifies a way for automated processes to obtain permission to access resources; likewise, the Platform for Privacy Preferences tells user-agents how to discover privacy policy beforehand. While there are several ways to access per-resource metadata (e.g., HTTP headers, WebDAV's PROPFIND), the overhead associated with them often precludes their use in these scenarios. When this happens, it is common to designate a 'well-known location' for site metadata, so that it can be easily located. However, this approach has the drawback of risking collisions, both with other such designated 'well-known locations' and with pre-existing resources."

From the FAQ: Use of a META tag or microformat in the root resource would constrain the format of a site's root resource to be HTML or similar. While extremely common, that isn't universal (e.g., mobile sites, machine-to-machine communication, etc.); also, some root resources are very large, which would place additional overhead on clients and intervening networks. Why not use response headers on the root resource, and have clients use HEAD? This is attractive, in that you could either put metadata directly in response headers or refer to a resource in a manner similar to site-meta; however, it requires an extra round-trip for metadata discovery, which is unacceptable in some scenarios. The primary use cases are described in the specification introduction: when it's necessary to discover metadata or policy before a resource is accessed, and/or to describe metadata for a whole site (or large portions of it), site-meta is appropriate; in other cases (e.g., fine-grained metadata that doesn't need to be known ahead of time), other mechanisms are more appropriate. Why scope metadata to be site-wide? The alternative is to allow scoping to be dynamic and determined locally, but this has its own issues, which usually come down to: (a) an unreasonable number of requests to determine authoritative metadata, and (b) increased complexity, with a higher likelihood of implementation and interoperability (or even security) problems. Besides, many mechanisms on the Web already presume a site scope (e.g., robots.txt, P3P, cookies, JavaScript security), and the effort and cost required to mint a new URI authority is small and shrinking...
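As a rough illustration of the discovery pattern the draft describes, the following Java sketch fetches a site's metadata document from a single well-known location. The "/site-meta" path and the example host are assumptions made here for illustration only; the actual location and document format are defined by the draft itself and may change in later revisions.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Illustrative sketch only: fetch a site's metadata from one well-known
    // location, in the spirit of the site-meta proposal. The "/site-meta"
    // path and the host below are assumptions, not normative values.
    public class SiteMetaFetch {
        public static void main(String[] args) throws Exception {
            String host = args.length > 0 ? args[0] : "www.example.org";
            URL url = new URL("http", host, "/site-meta");   // assumed well-known path
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            int status = conn.getResponseCode();
            if (status == HttpURLConnection.HTTP_OK) {
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), "UTF-8"));
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);   // the site-wide metadata document
                }
                in.close();
            } else {
                // A 404 here would simply mean the site publishes no site-wide metadata.
                System.out.println("No site metadata found (HTTP " + status + ")");
            }
        }
    }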

See also: Mark Nottingham's blog


New W3C Recommendation for "RDFa in XHTML: Syntax and Processing"
Ben Adida, Mark Birbeck (et al., eds), W3C Technical Report

W3C announced that the Semantic Web Deployment Working Group and the XHTML2 Working Group have published the following W3C Recommendation: "RDFa in XHTML: Syntax and Processing. A Collection of Attributes and Processing Rules for Extending XHTML to Support RDF." This specification provides publishers with a standard way to express structured data on the Web within XHTML. A companion document, "RDFa Primer: Bridging the Human and Data Webs," was published as a W3C Working Group Note.

The current Web is primarily made up of an enormous number of documents that have been created using HTML. These documents contain significant amounts of structured data, which is largely unavailable to tools and applications. When publishers can express this data more completely, and when tools can read it, a new world of user functionality becomes available, letting users transfer structured data between applications and web sites, and allowing browsing applications to improve the user experience: an event on a web page can be directly imported into a user's desktop calendar; a license on a document can be detected so that users can be informed of their rights automatically; a photo's creator, camera setting information, resolution, location and topic can be published as easily as the original photo itself, enabling structured search and sharing.

RDFa is a specification for attributes to express structured data in any markup language. This document specifies how to use RDFa with XHTML. The rendered, hypertext data of XHTML is reused by the RDFa markup, so that publishers don't need to repeat significant data in the document content. The underlying abstract representation is RDF, which lets publishers build their own vocabulary, extend others, and evolve their vocabulary with maximal interoperability over time. The expressed structure is closely tied to the data, so that rendered data can be copied and pasted along with its relevant structure. The rules for interpreting the data are generic, so that there is no need for different rules for different formats; this allows authors and publishers of data to define their own formats without having to update software, register formats via a central authority, or worry that two formats may interfere with each other.

RDFa shares some use cases with microformats. Whereas microformats specify both a syntax for embedding structured data into HTML documents and a vocabulary of specific terms for each microformat, RDFa specifies only a syntax and relies on independent specification of terms (often called vocabularies or taxonomies) by others. RDFa allows terms from multiple independently-developed vocabularies to be freely intermixed and is designed such that the language can be parsed without knowledge of the specific term vocabulary being used.
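To make the attribute mechanism concrete, here is a small Java sketch that scans an RDFa-annotated XHTML fragment for 'about', 'property' and 'content' attributes and prints the statements they encode. It is deliberately not a conformant RDFa processor (no prefix resolution, no chaining rules), and the Dublin Core fragment is an invented example.

    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;
    import java.io.ByteArrayInputStream;

    // Simplified illustration of the idea behind RDFa: attributes such as
    // 'about', 'property' and 'content' embed RDF statements in the markup.
    // NOT a conformant RDFa processor; the example fragment is illustrative.
    public class RdfaSketch {
        public static void main(String[] args) throws Exception {
            String xhtml =
                "<div xmlns:dc='http://purl.org/dc/elements/1.1/' about='http://example.org/report'>" +
                "  <h1 property='dc:title'>Quarterly Report</h1>" +
                "  <p>Written by <span property='dc:creator'>Alice Example</span>" +
                "     on <span property='dc:date' content='2008-10-16'>October 16, 2008</span>.</p>" +
                "</div>";

            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xhtml.getBytes("UTF-8")));
            String subject = doc.getDocumentElement().getAttribute("about");

            NodeList all = doc.getElementsByTagName("*");
            for (int i = 0; i < all.getLength(); i++) {
                Element e = (Element) all.item(i);
                if (e.hasAttribute("property")) {
                    // 'content' overrides the rendered text, so the data value
                    // and the human-readable display can differ.
                    String object = e.hasAttribute("content")
                            ? e.getAttribute("content") : e.getTextContent();
                    System.out.println("<" + subject + "> " + e.getAttribute("property")
                            + " \"" + object + "\"");
                }
            }
        }
    }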

See also: the RDFa Primer


Using WS-Trust Support in Metro to Secure Web Services
Jiandong Guo, Blog

Metro is a high performance, extensible, easy-to-use web services stack. It combines the JAX-WS reference implementation with Project Tango. Project Tango, also called Web Services Interoperability Technology or WSIT, implements numerous WS-* standards to enable interoperability with other implementations and to provide Quality of Service (QoS) features such as security, reliability, and transaction support. Metro is available in the open source, enterprise-ready GlassFish v2 application server as well as in the modular GlassFish v3 application server. Metro also runs in the Apache Tomcat web container, and it has been successfully used in other application servers.

An earlier article introduced Metro's support (through WSIT) for web services security. This support implements the following web services security specifications published by OASIS: WS-Security, WS-SecurityPolicy, WS-SecureConversation, and WS-Trust. This article focuses on the support in Metro for WS-Trust. You will learn the basics of WS-Trust and its Security Token Service (STS) framework; you'll also learn about the support in Metro for WS-Trust and STS. A sample application package accompanies the article. The package includes sample applications that demonstrate how to enable web service security using STS-issued tokens associated with various types of proof keys, such as a symmetric proof key, a public proof key, or no proof key...

WS-Trust is a WS-* specification that provides extensions to the WS-Security specification. WS-Security provides the basic framework for message-level security in web services; WS-Trust builds on that base to specify a framework for brokering trust across different security domains. It specifically deals with the issuing, renewing, and validating of security tokens, as well as with ways to establish, assess the presence of, and broker trust relationships between participants in a secure message exchange.

In WSIT, you specify authentication requirements in a security policy which you attach to the Web Services Description Language (WSDL) file for the service. For example, an X509Token assertion in a security policy may specify that an X509 certificate from the client is required for the client to authenticate to the service. In this case, the service understands the client's identity as represented by its certificate. However, if the client and the service are in different security domains, they have no direct trust relationship. In that case, you can use an STS to authenticate the client. The STS is an authority trusted by the client and the service. You can also use an STS to issue a security token, that is, a collection of claims such as name, role, and authorization code, for the client to access the service. If you use an STS, you need to change that assertion in the security policy to indicate that the client must call an STS first to get a security token. The security token is usually a Security Assertion Markup Language (SAML) token... [This article concludes with a summary of the WS-Trust support in Metro.]
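The following plain-Java sketch illustrates the trust-brokering flow described above: the client authenticates to an STS, receives an issued token (a collection of claims), and presents that token to a service that trusts the STS rather than the client directly. All class and method names here are hypothetical; this is a conceptual illustration, not Metro's or WSIT's API, where the equivalent behavior is driven by the WS-SecurityPolicy assertions attached to the WSDL.

    // Conceptual sketch of the WS-Trust flow, NOT Metro's API.
    public class StsFlowSketch {

        // An issued token: a collection of claims vouched for by the STS.
        static class IssuedToken {
            final String issuer;
            final String subject;
            final String claims;
            IssuedToken(String issuer, String subject, String claims) {
                this.issuer = issuer; this.subject = subject; this.claims = claims;
            }
        }

        // The Security Token Service: an authority trusted by client and service.
        static class SecurityTokenService {
            final String name;
            SecurityTokenService(String name) { this.name = name; }
            IssuedToken issue(String clientId, String clientCredential) {
                // A real STS authenticates the client (e.g., X.509, username token)
                // and typically issues a signed SAML assertion; this just fakes it.
                if (!"secret".equals(clientCredential))
                    throw new SecurityException("client not authenticated");
                return new IssuedToken(name, clientId, "role=customer");
            }
        }

        // The target service trusts tokens from a known STS, even though it has
        // no direct trust relationship with the client itself.
        static class Service {
            final String trustedIssuer;
            Service(String trustedIssuer) { this.trustedIssuer = trustedIssuer; }
            String call(IssuedToken token, String request) {
                if (!trustedIssuer.equals(token.issuer))
                    throw new SecurityException("token issuer not trusted");
                return "processed '" + request + "' for " + token.subject
                        + " (" + token.claims + ")";
            }
        }

        public static void main(String[] args) {
            SecurityTokenService sts = new SecurityTokenService("ExampleSTS");
            Service service = new Service("ExampleSTS");
            // 1. Client authenticates to the STS and obtains an issued token.
            IssuedToken token = sts.issue("alice", "secret");
            // 2. Client presents the issued token to the service instead of
            //    authenticating to the service directly.
            System.out.println(service.call(token, "getAccountBalance"));
        }
    }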

See also: the GlassFish Metro web site


TAG Work In Recent Months: October 2008
Dan Connolly and Stuart Williams, W3C TAG Report

Members of the W3C Technical Architecture Group (TAG) have published an October 2008 update on the core technologies of the Web (HTML, HTTP, and URIs) and on the work of the TAG, which continues to focus on the corresponding areas of formats, protocols, and naming. The TAG was created in February 2001. Three TAG participants are appointed by the Director and five TAG participants are elected by the Advisory Committee. The mission of the TAG is stewardship of the Web architecture. Included in this mission is building consensus around principles of Web architecture, resolving issues involving Web architecture, and helping to coordinate cross-technology architecture developments inside and outside W3C. From the October report: (1) Formats: HTML and ARIA, namespace documents. The TAG devoted half of its 3-day September F2F to the topic of "HTML and The Web"; the TAG plans to meet with the HTML WG during TPAC 2008 to discuss modularisation of the HTML5 specification... The "Self-Describing Web" finding was updated on 12-May-2008 and 8-September-2008; it discusses features of the Web that support reliable, ad hoc discovery of information, including formats such as Atom, RDFa, GRDDL, XML, and RDF. (2) Protocols: widget packaging, HTTP scalability, linking. In a 19-June-2008 discussion of issue scalabilityOfURIAccess-58, the TAG encouraged use of XML catalogs as a caching mechanism to mitigate the load that automated access to DTDs and schemas puts on the W3C web site... (3) Naming (XRIs): The TAG is working on comparing HTTP and DNS to other approaches to persistent naming, such as info: and xri:, under issue URNsAndRegistries-50. While the URNs, Namespaces and Registries draft has been preempted by other work since the last update of August 2006, the TAG discovered a 31 May deadline on a ballot to approve XRI Resolution v2.0 as an OASIS Standard and summarized its position: the TAG recommends against XRI. In an attempt to facilitate dialog after this somewhat awkward step, the TAG and the OASIS XRI TC held a joint meeting on 3 July and continue the discussion by email in www-tag, with the goals of improving understanding of the respective positions and publicly recording points of agreement and, if necessary, irresolvable disagreement. Discussion with the OASIS XRI TC has continued through October 2008, with improving levels of mutual understanding and much less talking past one another. The XRI TC is currently exploring alternate approaches to meeting its requirements using existing URI schemes, with the intention of developing an approach aligned with the Architecture of the World Wide Web...

See also: the TAG home page


OpenStreetMap: User-Generated Street Maps
Mordechai Haklay and Patrick Weber, IEEE Pervasive Computing

The authors, from University College London, discuss the GPS eXchange Format and OpenStreetMap (OSM). OpenStreetMap is a free, editable map of the whole world; it allows you to view, edit, and use geographical data in a collaborative way from anywhere on Earth. The wide availability of high-quality location information has enabled mass-market mapping based on affordable GPS receivers, home computers, and the Internet. Although a range of projects based on user-generated mapping has emerged, OpenStreetMap (OSM) is probably the most extensive and effective project currently under development. OSM follows the peer production model that created Wikipedia; its aim is to create a set of map data that's free to use, editable, and licensed under new copyright schemes... As of May 2008, OSM had more than 33,000 registered users (with approximately 3,500 currently active contributors), and data contributions continue to grow quickly...

A key motivation for this project is to enable free access to current geographical information; in European countries, accurate digital geographical information is considered expensive and out of the reach of individuals, small businesses, and community organizations. In the US, where basic road information is available through the US Census Bureau's Tiger (Topologically Integrated Geographic Encoding and Referencing)/Line program, the details provided are limited to streets and roads only. In addition, owing to the high cost of mapping, the Tiger system's update cycles are infrequent and don't take into account rapid changes. Commercial geographical information products from providers such as NAVTEQ are also expensive and aren't available to individual users in an accessible format...

At the core of OSM data management, it's easy to see how open source philosophy permeates the project's technical infrastructure. OSM is built iteratively, using the principle that the simplest approach to any problem is the best way to ensure the success of the project as a whole. OSM's developers deliberately steered away from using existing standards for geographical information from standards bodies such as the Open Geospatial Consortium (OGC), for example its WMS standard. They felt that most such tools and standards are hard to use and maintain, citing performance issues with, for instance, MapServer (a popular open source WMS) and a lack of adaptability of OGC-compliant software packages to support wiki-style behavior...

Access to the core OSM database is provided by a dedicated RESTful API, which is implemented in Ruby on Rails and supports authentication, enabling users to add, update, and delete geographical features. The API accepts and outputs data in OSM XML, a dedicated data transport format developed for the project that replicates the database's specific entity model. All editing tools use this API for accessing and updating the main database. As a result, editing and presentation tools can be developed independently from the database, with the lightweight communication protocol acting as glue between the elements of OSM's GeoStack... Significantly, vendors of commercial GIS packages, such as CadCorp SIS and Global Mapper, have recently included OSM XML data support out of the box. Users have converted OSM information for use on a multitude of devices, including mobile phones, PDAs, and GPS units. A community-maintained software package lets users translate OSM data into the Garmin IMG GPS map format, despite this format's proprietary nature and lack of documentation...
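As a rough sketch of what consuming the RESTful API can look like, the following Java program requests a small bounding box of OSM XML and counts the nodes and ways it contains. The endpoint and API version shown are assumptions (check the current OSM API documentation), and real clients should respect the project's bounding-box limits and usage policy.

    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import java.net.URL;

    // Sketch of reading from OSM's RESTful API. The exact endpoint and API
    // version are assumptions; the bounding box is deliberately tiny.
    public class OsmFetchSketch {
        public static void main(String[] args) throws Exception {
            // bbox = min longitude, min latitude, max longitude, max latitude
            String bbox = "-0.1340,51.5225,-0.1315,51.5240";   // a few blocks in central London
            URL url = new URL("https://api.openstreetmap.org/api/0.6/map?bbox=" + bbox);

            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(url.openStream());   // OSM XML: <osm><node/>...<way/>...</osm>

            int nodes = doc.getElementsByTagName("node").getLength();
            int ways  = doc.getElementsByTagName("way").getLength();
            System.out.println("Downloaded " + nodes + " nodes and " + ways + " ways");
        }
    }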

See also: OpenStreetMap web site


Apache Tuscany Enables SOA Solutions
Staff, SOA World Magazine News

Apache Tuscany, a new Top-Level Project of the Apache Software Foundation (ASF), has announced the release of version 1.3.2 of its Service Component Architecture (SCA) for Java. Apache Tuscany provides a robust, highly extensible infrastructure for building, deploying, running, and managing Service Oriented Architecture (SOA) solutions, streamlining the development process of service-based application networks and addressing real business problems posed in SOA. Service Oriented Architecture solutions utilize new and existing services to create new applications that may consist of different technologies.

"We continue to receive enthusiastic support for Tuscany's simple, highly extensible Service Component Architecture (SCA), Service Data Objects (SDO), and Data Access Service (DAS) subprojects," said Anthony Elder, ASF Vice President and Chair of the Apache Tuscany Project Management Committee. "Becoming an ASF Top-Level Project and great ideas for future development -- including improvements in application server integration, distributed runtimes and Web 2.0 support -- underscore how Tuscany continues to go from strength to strength."

Service Component Architecture (SCA) Support Across Protocols and Technologies: Service Component Architecture defines a simple, service-based model for the construction, assembly, and deployment of a network of services (both existing and new ones) that are defined in a language-neutral way. SCA can be used with a wide variety of existing middleware technologies, enabling existing assets to be leveraged. Apache Tuscany supports many different binding protocols and programming technologies and works with a variety of container models. Its modular architecture makes it easy to integrate with other technologies, and it runs on Apache's Tomcat and Geronimo projects as well as other application servers.

"Using Tuscany helps our leading banking customers to fulfill the SOA architecture and business component specification," said Chris Cheng, Vice President of Primeton Technologies Ltd. "Primeton has provided a consulting service for one of China's leading financial services companies, to help them fulfill the SOA architecture, business component specifications, and the container implementation. Now the enterprise architecture, business component specifications, and container implementation have been approved for use in all enterprise-wide applications in the future."

In addition, Apache Tuscany implements the SCA Version 1.0 specifications as defined by the Open Service Oriented Architecture Collaboration. Apache Tuscany also supports the specifications being standardized by OASIS Open CSA and provides additional support for OSGi, scripting languages, and Web 2.0-related technologies such as RSS and DWR.
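For readers unfamiliar with SCA, the following Java sketch shows what a minimal component can look like in the style of Tuscany's Java samples, using the OSOA annotations from the SCA 1.0 Java specification that Tuscany 1.x implements. The greeting service itself is an invented example, and a real deployment would also wire the components together in a .composite file.

    // Minimal sketch of an SCA component in the style of Tuscany's Java samples.
    // The org.osoa.sca.annotations package reflects the SCA 1.0 Java spec;
    // the service itself is illustrative only.
    import org.osoa.sca.annotations.Reference;
    import org.osoa.sca.annotations.Remotable;
    import org.osoa.sca.annotations.Service;

    @Remotable
    interface GreetingService {
        String greet(String name);
    }

    @Remotable
    interface NameFormatter {
        String format(String name);
    }

    // The implementation declares its dependency with @Reference; the SCA runtime
    // injects whatever component the composite binds to it (a local Java class,
    // a web service, a script, ...), which is how SCA stays technology-neutral.
    @Service(GreetingService.class)
    class GreetingServiceImpl implements GreetingService {

        @Reference
        protected NameFormatter formatter;

        public String greet(String name) {
            return "Hello, " + formatter.format(name) + "!";
        }
    }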

See also: the Apache announcement


REST for Java Developers: A Resource-Oriented Approach to Web services
Brian Sletten, JavaWorld Magazine

Representational State Transfer (REST) is an architectural style for creating, maintaining, retrieving, and deleting resources. REST's information-driven, resource-oriented approach to building Web services can both satisfy your software's users and make your life as a developer easier. This article, the first in a four-part series by REST expert Brian Sletten, introduces the concepts that underlie REST, explains the mechanisms that RESTful applications use, and explores the benefits of REST.

The Web has become the mind-boggling global information ecosystem it is not by accident, but because of specific technology choices made along the way. Roy Fielding, originator of the term REST, documented these choices in his acclaimed Ph.D. thesis. He highlighted the goal of building networked software systems exhibiting the properties of Performance, Scalability, Generality, Simplicity, and Modifiability. Performance, as a property, is a quality of responsiveness: given the networked nature of these software systems, we always want to avoid paying a latency penalty. Scalability is a property that indicates how many users can simultaneously access a service. Generality allows these systems to solve a wide variety of problems. The more moving parts a software system has and the more complex its interactions, the harder it is to prove that it does what it is supposed to do; we would like our systems to be as simple as possible and extensible in the face of new requirements, new technologies, and new use cases.

As an architectural style, REST is simple and flexible, and it allows the various communicating pieces to change over time. This flexibility gives you the resilience to embrace the change that inevitably comes in the form of new use cases, new technologies, new requirements, or a new understanding of your domains. Software achieves scalability by talking to logical names that might map to a load balancer that redirects to multiple back-end responders; clients do not need to know which specific machine they are communicating with. To meet the needs of this kind of environment, you need an integration approach that separates out the concerns of: (1) the things we care about; (2) how we refer to them; (3) how we manipulate them; (4) how we choose to represent them for creation, updates, and retrieval...
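A small Java sketch can make that separation of concerns concrete: one stable URI names the resource, the uniform HTTP verbs manipulate it, and media types select its representation. The host and resource path below are hypothetical.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Sketch of resource-oriented interaction over HTTP's uniform interface.
    // The host and resource URI are hypothetical; the point is one stable name,
    // a small set of verbs, and representations chosen by media type.
    public class RestSketch {

        static final String RESOURCE = "http://api.example.org/orders/1234";  // logical name

        public static void main(String[] args) throws Exception {
            // Retrieve the resource, asking for an XML representation.
            HttpURLConnection get = (HttpURLConnection) new URL(RESOURCE).openConnection();
            get.setRequestMethod("GET");
            get.setRequestProperty("Accept", "application/xml");
            System.out.println("GET    -> " + get.getResponseCode());

            // Update it by PUTting a new representation to the same URI.
            HttpURLConnection put = (HttpURLConnection) new URL(RESOURCE).openConnection();
            put.setRequestMethod("PUT");
            put.setDoOutput(true);
            put.setRequestProperty("Content-Type", "application/xml");
            OutputStream out = put.getOutputStream();
            out.write("<order><status>shipped</status></order>".getBytes("UTF-8"));
            out.close();
            System.out.println("PUT    -> " + put.getResponseCode());

            // Remove it; no verb vocabulary beyond HTTP's is needed.
            HttpURLConnection del = (HttpURLConnection) new URL(RESOURCE).openConnection();
            del.setRequestMethod("DELETE");
            System.out.println("DELETE -> " + del.getResponseCode());
        }
    }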


What's Next After Web 2.0? Here's What You Told Us...
Richard MacManus, ReadWriteWeb

Has the world arrived at one of those giant inflexion points (we asked), where one Web era is usurped by another? We asked you to leave a comment in the post telling us what you think will be next. Many of you did just that, and the post was also fortunate enough to reach the Digg front page, where it received 100 additional comments. Finally, we polled our friends on Twitter today and got many great replies. This article is an attempt to synthesize, analyze, and categorize all of the responses from RWW, Digg, and Twitter. What is next after Web 2.0? Jason Palmer claimed that XMLHttpRequest and AJAX drove Web 2.0. He thinks that "the next wave will come once HTML 5 and CSS 3 are fully supported on all popular browsers. This will, again, give developers more toys to play with, and expand the boundaries of entrepreneurs." [...] Will the Semantic Web ever arrive? Several commenters were optimistic... Mark Johnson, Powerset/Microsoft Program Manager, commented that "the next era of the Web will represent greater understanding of computers." He went on to suggest that "if Web 1.0 was about Read and Web 2.0 was about Read/Write, then Web 3.0 should be about Read/Write/Understand." Specifically, he said that "a computer that can understand should be able to: find us information that we care about better (e.g., smart news alerts), make intelligent recommendations for us (e.g., implicit recommendations based on our reading/surfing/buying behavior), aggregate and simplify information... and probably lots of other things that we haven't yet imagined, since our computers are still pretty dumb." [...] Tim O'Reilly, whose company coined the term 'Web 2.0' in 2004, has lately been pushing for developers to tackle the hard problems of the world. Education is one area ripe for Web innovation; Harley of WorldLearningTree recently submitted his suggestions on how to revolutionize online education to Google's "Project10ToThe100" contest... Privacy and security have been hot issues in the Web 2.0 era, but they will become even more important in the next -- as education, health, and other 'real world' apps take center stage... John McCrea foresees the walls coming down "and a new open stack (OpenID, OAuth, Portable Contacts, XRDS-Simple, OpenSocial, microformats) enables seamless interoperability, with users in control." [...] The jury is still out on whether Web 2.0 has officially ended. Of course, the Web is iterative, and so version numbers don't really mean anything. But even so, we may see more of a focus on 'real world' problems from now on and a move away from consumer apps as the primary focus.


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
Microsoft Corporation http://www.microsoft.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com
Sun Microsystems, Inc. http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2008-10-16.html
Robin Cover, Editor: robin@oasis-open.org