XML Daily Newslink. Wednesday, 25 November 2009

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation http://www.microsoft.com



LinkedIn Development Platform Comes to Life
Clint Boulton, eWEEK

"LinkedIn has followed through on its plans to launch a development platform, opening the site 'Developer.linkedin.com' to let software programmers put LinkedIn's profile content into their business applications and Websites. Microsoft is using LinkedIn to add profile information to Microsoft Office 2010 email users with the Outlook Social Connector. Twitter application TweetDeck will support the LinkedIn platform in its next version, allowing TweetDeck users to access their LinkedIn network updates from within TweetDeck, which will add a LinkedIn column...

Developers will be able to freely use the REST-based APIs LinkedIn has created and made available. The LinkedIn platform uses the OAuth standard so that applications can access a user's profile information and network content only through a secure, user-authorized log-in... The platform launch comes more than a year after LinkedIn drummed up a lot of notice for the platform with partners Google, Amazon, Six Apart, WordPress, Box.net, Huddle, SlideShare and TripIt..."
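
As a rough illustration of the pattern described above (not from the article): a minimal Python sketch of an OAuth 1.0a-signed call to a REST profile endpoint, assuming the third-party requests-oauthlib package; the endpoint path and all credentials are placeholders.

    from requests_oauthlib import OAuth1Session

    # Credentials as issued by LinkedIn's developer site (placeholders here)
    session = OAuth1Session(
        client_key="YOUR_API_KEY",
        client_secret="YOUR_API_SECRET",
        resource_owner_key="USER_ACCESS_TOKEN",
        resource_owner_secret="USER_ACCESS_TOKEN_SECRET",
    )

    # Fetch the authenticated member's own profile; the path is illustrative
    response = session.get("https://api.linkedin.com/v1/people/~")
    print(response.status_code, response.text)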

See also: LinkedIn Developer Network - OAuth Authentication


OAuth Web Resource Authorization Profiles (OAuth WRAP)
Allen Tom, Brian Eaton, Dick Hardt, Yaron Goland; OAuth Community Draft

The 'OAuth WRAP' specification is one of several now being developed under the terms of the Open Web Foundation Agreement by Google, Microsoft, Yahoo! (and others). From OAuth WRAP Version 0.9.7.2: "The OAuth Web Resource Authorization Profiles (OAuth WRAP) allow a server hosting a Protected Resource to delegate authorization to one or more authorities. An application (Client) accesses the Protected Resource by presenting a short lived, opaque, bearer token (Access Token) obtained from an authority (Authorization Server). There are Profiles for how a Client may obtain an Access Token when acting autonomously or on behalf of a User."

An associated OAuth WRAP discussion group was set up for the Web Resource Authorization Protocol (WRAP) profiles in OAuth, aka OAuth WRAP: "While similar in pattern to OAuth 1.0a, the WRAP profiles have a number of important capabilities that were not available previously in OAuth. This specification is being contributed to the IETF OAuth Working Group."
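
To make the flow concrete (a sketch, not normative): in the WRAP pattern a Client POSTs its credentials to the Authorization Server, receives a short-lived opaque Access Token, and presents it as a bearer token to the Protected Resource. The parameter and header names below follow the 0.9.x draft as best understood; all URLs and credentials are hypothetical.

    import requests
    from urllib.parse import parse_qs

    # Step 1: obtain a short-lived Access Token from the Authorization Server
    token_resp = requests.post(
        "https://authz.example.com/wrap/token",
        data={"wrap_name": "my-client", "wrap_password": "s3cret"},
    )
    access_token = parse_qs(token_resp.text)["wrap_access_token"][0]

    # Step 2: present the opaque bearer token to the Protected Resource
    resource = requests.get(
        "https://api.example.com/protected/photos",
        headers={"Authorization": 'WRAP access_token="%s"' % access_token},
    )
    print(resource.status_code)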

See also: the OAuth WRAP Wiki


Metalink/HTTP: Mirrors and Checksums in HTTP Headers
Anthony Bryan, Neil McNab (et al., eds), IETF Internet Draft

An updated version of the IETF Standards Track Internet Draft "Metalink/HTTP: Mirrors and Checksums in HTTP Headers" has been published. This version introduces the use of Metalink/XML for partial file checksums. "Partial file checksums can be used to detect errors during the download. Metalink servers are not required to offer partial file checksums, but they are encouraged to do so. If the computed checksum of the downloaded object does not match the Instance Digest, the client fetches the Metalink/XML, where partial file checksums may be found; from these it can detect which server returned incorrect data, determine what portion of the downloaded data can be recovered, and fetch again only what is needed.

Metalink/HTTP is an alternative representation of Metalink information, which is usually presented as an XML-based document format. Metalink/HTTP attempts to provide as much functionality as the Metalink/XML format by using existing standards such as Web Linking, Instance Digests in HTTP, and ETags. Metalink/HTTP is used to list information about a file to be downloaded. This can include lists of multiple URIs (mirrors), Peer-to-Peer information, checksums, and digital signatures.

Identical copies of a file are frequently accessible in multiple locations on the Internet over a variety of protocols (such as FTP, HTTP, and Peer-to-Peer). In some cases, users are shown a list of these multiple download locations (mirrors) and must manually select a single one on the basis of geographical location, priority, or bandwidth. This distributes the load across multiple servers, and should also increase throughput and resilience. At times, however, individual servers can be slow, outdated, or unreachable, but this cannot be determined until the download has been initiated. Users will rarely have sufficient information to choose the most appropriate server, and will often choose the first in a list, which may not be optimal for their needs and leads to a particular server getting a disproportionate share of the load.

This document describes a mechanism by which the benefit of mirrors can be automatically and more effectively realized. All the information about a download, including mirrors, checksums, digital signatures, and more can be transferred in coordinated HTTP Headers. Metalink/HTTP transfers the knowledge of the download server (and mirror database) to the client. Clients can fall back to other mirrors if the current one has an issue. With this knowledge, the client is enabled to work its way to a successful download even under adverse circumstances. All this is done transparently to the user and the download is much more reliable and efficient. In contrast, a traditional HTTP redirect to a mirror conveys only extremely minimal information - one link to one server, and there is no provision in the HTTP protocol to handle failures. Furthermore, in order to provide better load distribution across servers and potentially faster downloads to users, Metalink/HTTP facilitates multi-source downloads, where portions of a file are downloaded from multiple mirrors (and optionally, Peer-to-Peer) simultaneously..."
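
By way of illustration (not from the draft itself): a minimal Python sketch of a client reading Metalink/HTTP metadata, assuming a server that advertises mirrors via Web Linking (Link headers with rel="duplicate") and a whole-file checksum via an Instance Digest header; the URL is hypothetical and the header parse is simplified.

    import re
    import requests

    resp = requests.head("http://download.example.com/file.iso")

    # Mirrors arrive as Link headers with rel="duplicate"; requests folds
    # repeated headers into one comma-separated value
    link_header = resp.headers.get("Link", "")
    mirrors = re.findall(r'<([^>]+)>\s*;\s*rel="?duplicate"?', link_header)

    # The whole-file checksum arrives as an Instance Digest, e.g. 'SHA-256=...'
    digest = resp.headers.get("Digest")

    print("mirrors:", mirrors)
    print("digest:", digest)

A client would verify the downloaded bytes against the digest and, on a mismatch or a dead server, fall back to one of the advertised mirrors.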

See also: the Metalink discussion group


Momentum Builds for CMIS Open Content Management Standard
Chris Kanaracus, The Industry Standard and New York Times

"A proposed standard meant to help content management systems communicate with each other has steady momentum, and an initial version could be finalized early next year. Content Management Interoperability Services (CMIS) was first announced in September 2008. It outlines a standardized Web services interface for sharing content across multiple CMS (content management system) platforms. Organizations face difficulties when integrating information from various content repositories, because specialized connectors typically have been required for each system.

Both customers and vendors stand to gain from CMIS. It should cut the number of one-off integrations and the custom development work end users currently must do; in addition, software vendors won't have to build and support a wide range of connectors, according to 451 Group analyst Kathleen Reidy... The specification is supported by the content management industry's biggest players, including EMC, Adobe, Microsoft, Open Text, IBM and SAP. Open-source CMS vendor Alfresco is also a backer. The company said Monday [2009-11-23] it has included support in the 3.2 version of its platform for CMIS 1.0, which is now in a public review period scheduled to end Dec. 22. CMIS' inclusion in Alfresco 3.2 will enable users to get a hands-on look during the review period..."
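
For a feel of the interface (illustrative, not from the article): CMIS 1.0's AtomPub binding exposes each repository as a workspace in an AtomPub service document. A minimal Python sketch, with a hypothetical endpoint since the service-document URL varies by vendor:

    import requests
    import xml.etree.ElementTree as ET

    resp = requests.get("http://cms.example.com/cmis/atom")  # hypothetical URL
    root = ET.fromstring(resp.content)

    # The AtomPub service document lists one workspace per repository
    ns = {"app": "http://www.w3.org/2007/app",
          "atom": "http://www.w3.org/2005/Atom"}
    for workspace in root.findall("app:workspace", ns):
        title = workspace.find("atom:title", ns)
        print("repository:", title.text if title is not None else "(unnamed)")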

See also: the Alfresco Software announcement


Discovery and OAuth and User-Managed Access (UMA)
Eve Maler, Blog

"If you saw the ProtectServe status update from the Internet Identity Workshop in May 2009, you may want to check out our progress on what has become 'User-Managed Access (UMA, pronounced like the actress). The purpose of the UMA Work Group is to develop specs that let an individual control the authorization of data sharing and service access made between online services on the individual's behalf, and to facilitate interoperable implementations of the specifications.

The proposition still centers on helping individuals gain better control of their data-sharing online, along with making it easier for identity-related data to live where it properly should—rather than being copied all over the place so that all the accuracy and freshness leaks out. On our wiki you'll now find a fledgling spec that profiles OAuth and its emerging discovery mechanisms XRD and LRDD. We're also starting to collect a nice little bunch of diagrams and such, to help people understand what we're up to...

Briefly, the UMA protocol has four distinct parties vs. OAuth's three: there's an authorizing user, a consumer/client (which we call a 'requester'), an SP/server (which we call a 'host'), and an authorization manager. We compose three instances of OAuth to introduce all these parties appropriately to each other: there's user/host/AM (three-legged), requester/host (two-legged), and requester/AM (another two-legged). Because of our goals to allow most of these parties to meet fairly dynamically, we are leaning quite heavily on XRD and LRDD for discovery; various simplifying assumptions could probably be made to streamline this picture, however..."
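
A data-only Python sketch (not a protocol implementation) enumerating the three composed OAuth exchanges just described; the party names follow the post:

    legs = [
        ("user / host / authorization manager", "three-legged OAuth",
         "the user introduces the host to the AM and authorizes it"),
        ("requester / host", "two-legged OAuth",
         "the requester approaches the host for access"),
        ("requester / authorization manager", "two-legged OAuth",
         "the requester obtains an authorization decision from the AM"),
    ]
    for parties, pattern, purpose in legs:
        print("%-18s %s: %s" % (pattern, parties, purpose))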


Learning to YODL: Building York's Digital Library
Peri Stracchino and Yankui Feng, Ariadne

The authors describe a year's progress in building a digital library infrastructure for multimedia resources at the University of York. "It was decided to build the architecture using Fedora Commons as the underlying repository, with the user interface being provided by Muradora. Fedora Commons is a widely used and stable open source repository architecture with active user and developer communities, whilst Muradora has a much smaller user and developer base and suffers from less stable project funding.

Fedora Commons comes with a rich set of APIs but with no user interface of its own. As we needed to get a working system up and running quickly, we felt that Muradora provided an acceptable short-term solution despite the risks of working with a comparatively unstable project...

Muradora uses Extensible Access Control Markup Language (XACML) to describe access control policies, and also provides a graphical user interface from which to set access restrictions on individual resources, from Collection level down to individual images, clips or audio files. However, because of the complex copyright requirements relating to our stored material, the out-of-the-box access control was not sufficiently fine-grained to meet our needs, and we have had to do further bespoke development. In particular, we needed to be able to restrict access on the basis of user role (public, member of staff, undergraduate student, teacher, administrator, taught postgraduate student) and of membership of specific course modules. As a student or teacher is likely to be a member of multiple course modules, this also implies that it must be possible for a single user to have multiple roles...

Areas for future development include further work on the access control architecture. It is often the case that a flat list of modules and manual role-by-role application of policies is not scalable. There is a need to apply access control to roles in a more managed and sustainable way. A hierarchical representation of roles would be a clearer and more efficient way to display roles. In addition, a flexible search interface is needed to query appropriate role-based or collection/object-based policies. Time-limited policies are also needed to manage short-term policies, e.g. policies crossing academic year(s)..."
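
As a rough illustration of the multi-role requirement described above (a Python sketch of the decision logic only; the actual policies are written in XACML, and all role and module names here are hypothetical):

    # Roles that may always view a resource, regardless of module membership
    PRIVILEGED_ROLES = {"member of staff", "teacher", "administrator"}

    def can_view(resource_modules, user_roles, user_modules):
        # Permit if the user holds any privileged role, or is enrolled in a
        # course module for which the resource is cleared
        return bool(user_roles & PRIVILEGED_ROLES) or bool(user_modules & resource_modules)

    # A taught postgraduate student enrolled in two modules: access is granted
    # because one enrolment matches the resource's module list
    print(can_view({"HIS001"}, {"taught postgraduate student"}, {"HIS001", "ART002"}))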

See also: the Fedora Repository Project and the Fedora Commons


Proposal for a Data Management API within the GridRPC
Y. Caniou, E. Caron (et al., eds), OGF Proposed Recommendation

Gregory Newby, Open Grid Forum (OGF) Editor, announced the release of a Proposed Recommendation for the document "Proposal for a Data Management API within the GridRPC." The document is open for public comment.

"The goal of this document is to define a data management extension to the GridRPC API for End-User applications... The motivation of the data management extension is to provide explicit functions to handle data exchanges between a data storage service, a GridRPC platform, and the client. The GridRPC API defines an RPC mechanism to access Network Enabled Servers. However, an application needs data to run and generates some output data, which have to be transferred. As the size of data may be large in grid environments, it is mandatory to optimize transfers of large data by avoiding useless exchanges. Several cases may be considered depending on where data are stored: on an external data storage, inside the GridRPC platform or on the client side. In all these cases, the knowledge of 'what to do with these data?' is owned by the client. Hence, the GridRPC API must be extended to provide functions for explicit and simple data management..."
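
The pattern can be sketched as follows (illustrative Python only; the actual proposal extends the C-based GridRPC API, and every name below is invented for the example): the client passes a handle to remotely stored data instead of shipping the bytes with every call.

    class DataHandle:
        """Reference to data held on a storage service or inside the platform."""
        def __init__(self, uri):
            self.uri = uri

    def grid_call(service, *args):
        # Real middleware would resolve handles server-side, so large inputs
        # move at most once between storage, platform, and client
        resolved = [a.uri if isinstance(a, DataHandle) else a for a in args]
        print("calling %s with %s" % (service, resolved))

    matrix = DataHandle("storage://example.org/datasets/matrix-42")
    grid_call("solve", matrix)      # only the reference crosses the wire
    grid_call("visualize", matrix)  # reused without a second transfer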

This document was produced by members of the Grid Remote Procedure Call Working Group (GRIDRPC-WG). This Working Group was chartered to create an OGF Recommendation for a grid-enabled, remote procedure call (RPC) mechanism. "The first document entitled A GridRPC Model and API for End-User Applications has been completed and published as GFD-R.52. Currently we are concentrating on the second recommendation document, which defines GridRPC API for middleware developers that extends the model and GridRPC API for end-users, and also focuses on data management mechanism within the GridRPC model. The GridRPC middleware tools developed will further lower the barrier to acceptance for grid use by hiding the tremendous amount of infrastructure necessary to make grids work, while providing even higher-level abstractions for domain-specific middleware."

Note: Members of the OGF's OGSA Authorization Working Group also published Proposed Recommendation versions for these three specifications: "Use of XACML Request Context to Obtain an Authorisation Decision", "Use of SAML to retrieve Authorization Credentials", and "Use of WS-TRUST and SAML to access a Credential Validation Service."

See also: the OGF public comment instructions


First Look: Microsoft SharePoint 2010 Beta
Martin Heller, InfoWorld

"Microsoft SharePoint 2010 is a major upgrade from SharePoint 2007 in several areas. It has a much improved user interface, especially for online editing... Visual Studio 2010 supports a dozen kinds of SharePoint 2010 projects, can deploy both sandboxed and farm-wide custom projects, and can debug code deployed to SharePoint. SharePoint Designer, essentially a customized version of Expression Web, supports site design at a professional level and even allows nonprogrammers to build simple applications. Interfaces via .Net, REST, XML, and JavaScript allow programmers to tie a SharePoint site to line of business applications and databases, as well as to other applications into SharePoint...

SharePoint has long been a versatile platform for all sorts of internal and public Web sites, with an emphasis on group collaboration sites, and SharePoint 2010 has greatly improved and expanded those capabilities. It is more flexible and more capable, has a much improved user interface, and does a better job of implementing multilingual sites... SharePoint 2010 supports wiki markup (specifically, MediaWiki-compatible links) and wiki-style WYSIWYG editing pretty much everywhere... Metadata, in the form of tags, formal taxonomies, user-created folksonomies, and bookmarks, adds another dimension of classification to site navigation and content-based search. SharePoint 2010 supports all of these and can use them for targeting list content to specific audiences, for routing documents to specific libraries and folders, for displaying tag clouds, and for searching...

I have to compliment Microsoft for this new degree of openness. In the past, the company has hesitated to offer open interfaces to its server products, leaving the impression that it wanted to lock developers and customers into its platform. Now, Microsoft has allowed developers to integrate with SharePoint in whatever way is most convenient for the application at hand, using open, standard methods..."
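
As one concrete (and hypothetical) example of the REST interface mentioned above: SharePoint 2010 exposes lists as OData feeds through ListData.svc. A minimal Python sketch; the site URL, list name, and credentials are placeholders, and it assumes the server accepts basic authentication.

    import requests

    site = "http://sharepoint.example.com/sites/team"
    resp = requests.get(
        site + "/_vti_bin/ListData.svc/Announcements",   # a list as an OData feed
        headers={"Accept": "application/json"},
        auth=("DOMAIN\\user", "password"),
    )

    payload = resp.json()["d"]
    # Depending on the OData JSON flavor, the collection may be wrapped in
    # a "results" member or returned as a bare array
    items = payload.get("results", payload) if isinstance(payload, dict) else payload
    for item in items:
        print(item["Title"])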


Buffalo Ships First USB 3.0 Hard Drive
Lance Whitney, CNET News.com

"Buffalo Technology seems to have won the race as the first vendor to actually ship a USB 3.0 hard drive. The company announced that it is shipping its new SuperSpeed USB 3.0 external DriveStation HD-HXU3. Tapping into the speed of the new USB 3.0 spec, the drive can push data at least three times faster than a USB 2.0 drive... Since the USB 3.0 Promoter Group finalized the new USB 3.0 standard about a year ago, vendors have been pushing to get their new products out the door...

With its higher transfer rates, the new USB standard is ideal for moving around large images as well as huge audio and video streams. As such, USB 3.0 is seen as competition for other high-speed transfer technologies, such as eSATA and FireWire. Though USB 3.0 offers a theoretical maximum burst rate of 600MBps (4.8 gigabits per second), neither the Buffalo drive nor the rival Freecom drive will come close to that mark at this point. Freecom has rated its drive at 130 megabytes per second, while a Buffalo representative told me his company's drive would average around 120MBps. USB 3.0 has been promoted as offering speeds up to 10 times faster than USB 2.0, but manufacturers will need time to rev up their new drives to approach that threshold..."
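
A quick unit check on the figures quoted above (Python arithmetic only; the real-world USB 2.0 baseline of roughly 35MBps is an assumption typical of 2009-era drives):

    BITS_PER_BYTE = 8

    raw_gbps = 4.8                           # cited USB 3.0 signaling budget
    print(raw_gbps * 1000 / BITS_PER_BYTE)   # 600.0 MBps theoretical ceiling

    usb2_raw = 480 / BITS_PER_BYTE           # USB 2.0 high speed: 480 Mbit/s -> 60 MBps raw
    print(120 / usb2_raw)                    # Buffalo's ~120 MBps is 2x the raw USB 2.0 ceiling
    print(120 / 35)                          # ~3.4x a realistic USB 2.0 drive, matching the claim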


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
Microsoft Corporation http://www.microsoft.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com
Sun Microsystems, Inc. http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/


