The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: March 21, 2008
XML Daily Newslink. Friday, 21 March 2008

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
SAP AG http://www.sap.com



W3C Last Call Working Draft: Cool URIs for the Semantic Web
Leo Sauermann and Richard Cyganiak (eds), W3C Technical Report

W3C announced that members of the Semantic Web Education and Outreach (SWEO) Interest Group have published the Last Call Working Draft of "Cool URIs for the Semantic Web." The document is intended to become a W3C Interest Group Note: a tutorial explaining decisions of the Technical Architecture Group (TAG) to newcomers to Semantic Web technologies. It was initially based on the DFKI Technical Memo TM-07-01, was subsequently published as a W3C Working Draft in December 2007, and has been reviewed by the TAG and the Semantic Web Deployment Group (SWD).

The document is a practical guide for implementers of the RDF specification; its title is inspired by Tim Berners-Lee's article "Cool URIs don't change". It explains two approaches for RDF data hosted on HTTP servers. The intended audience is Web and ontology developers who have to decide how to model their RDF URIs for use with HTTP; applications using non-HTTP URIs are not covered. The document is an informative guide covering selected aspects of previously published, detailed technical specifications.

The Resource Description Framework (RDF) allows users to describe both Web documents and concepts from the real world (people, organisations, topics, things) in a computer-processable way. Publishing such descriptions on the Web creates the Semantic Web. URIs (Uniform Resource Identifiers) are very important here, providing both the core of the framework itself and the link between RDF and the Web. The document presents guidelines for their effective use: it discusses two strategies, called 303 URIs and hash URIs, gives pointers to several Web sites that use these solutions, and briefly discusses why several other proposals have problems.

It is important to understand that a URI can identify both a thing (which exists outside of the Web) and a Web document describing that thing. For example, the person Alice is described on her homepage; Bob may not like the look of the homepage, but fancy the person Alice. So two URIs are needed: one for Alice, and one for the homepage or an RDF document describing Alice. The question is where to draw the line between the case where either is possible and the case where only descriptions are available.

According to the W3C guidelines in "Architecture of the World Wide Web, Volume One," we have a Web document (there called an information resource) if all of its essential characteristics can be conveyed in a message. Examples are a Web page, an image, or a product catalog. The URI identifies both the entity and, indirectly, the message that conveys its characteristics. In HTTP, a status 200 response code should be sent when a Web document has been accessed; a different setup is needed when publishing URIs that are meant to identify entities.
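The practical difference between the two strategies can be sketched in a few lines. With 303 URIs, a request for the URI of a non-document thing is answered with an HTTP 303 See Other redirect to a document describing it; with hash URIs, the thing's URI carries a fragment, which the client strips before dereferencing, so the server only ever sees the document URI. The following sketch is illustrative only: the paths /id/, /doc/, /data/ and the example.com URI are hypothetical conventions, not mandated by the draft.

```python
# Illustrative sketch (not from the W3C draft): how a server might
# implement the 303-URI strategy, and why hash URIs need no redirect.
# All URIs and path conventions below are hypothetical examples.
from urllib.parse import urldefrag

def respond(path: str, accept: str):
    """Return (status, location-or-body) for a request to a 303-style site."""
    if path.startswith("/id/"):          # URIs under /id/ name real-world things
        thing = path[len("/id/"):]
        if "text/html" in accept:        # content negotiation: human-readable doc
            return (303, f"/doc/{thing}")
        return (303, f"/data/{thing}")   # machine-readable RDF description
    return (200, f"<contents of {path}>")  # ordinary Web document

# Hash URIs: the fragment never reaches the server, so the thing URI and
# the document URI are distinguished purely on the client side.
doc_uri, frag = urldefrag("http://example.com/about#alice")
```

Content negotiation on the Accept header lets the same thing-URI lead humans to an HTML page and RDF-aware clients to machine-readable data.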

See also: the W3C Semantic Web Education and Outreach (SWEO) Interest Group


Quark Delves Into Publishing Workflow
Joab Jackson, Government Computer News

Publishing software company Quark has introduced new software poised to help tame increasingly unwieldy publishing production routines. Quark announced the release of DPS earlier this month at the AIIM International Exposition and Conference in Boston. The newly released Quark Dynamic Publishing Solution sets out to confront a growing problem experienced by organizations that publish a lot of material: that of keeping track of the material as it is used across different media. Design publication tools such as Quark's QuarkXPress and Adobe's InDesign have been ill-suited to reformatting designed material for the Web, so the process of moving printed material to the Web tends to be time-consuming and sometimes still manual.

According to the product description: "Quark Dynamic Publishing Solution (Quark DPS) consists of multiple software components, including desktop tools for creating content and server-based technology for automating publishing workflows. It is based on open standards to allow for easy integration with enterprise content management systems and other business applications. Dynamic publishing automates the creation and delivery of information across multiple channels, from print to Web, email, and beyond. It allows users to create reusable components of information that can be combined to create various types of documents for any audience. Dynamic publishing also automates the page-formatting process, allowing for the production of print, Web, and electronic content from a single source of information.

"Quark uses XML (Extensible Markup Language) as the underlying data format for your information because its capabilities line up perfectly with dynamic publishing's requirements. XML lets you break down your information into components of any size that may be useful. For example, an article might include a title, subtitle, and body copy, which itself might consist of a number of components such as paragraphs. Some of those components may be reused across multiple articles or documents, enabling you to create a single source where one change can update many documents. In addition, XML enforces the absolutely consistent structure that makes automation possible. Without this consistency, the only option would be to continue the labor-intensive effort of hand-crafting pages indefinitely. XML allows information to exist independently of its formatting. By applying formatting separately, through an automated process, XML-based information can easily be published in multiple formats and multiple types of media..."
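The single-source idea the description refers to is easy to demonstrate with the standard library. The article structure below (title, subtitle, body paragraphs) follows the example in the text, but the element names and the two renderings are hypothetical, not Quark DPS formats:

```python
# Illustrative sketch of single-source publishing (not Quark's actual
# format): one XML article, two independently generated renderings.
import xml.etree.ElementTree as ET

ARTICLE = """\
<article>
  <title>Dynamic Publishing</title>
  <subtitle>One source, many outputs</subtitle>
  <body><p>XML separates content from formatting.</p><p>Formatting is applied later.</p></body>
</article>"""

def to_html(xml_text: str) -> str:
    """Render the article as an HTML fragment."""
    root = ET.fromstring(xml_text)
    paras = "".join(f"<p>{p.text}</p>" for p in root.find("body"))
    return f"<h1>{root.findtext('title')}</h1><h2>{root.findtext('subtitle')}</h2>{paras}"

def to_plain(xml_text: str) -> str:
    """Render the same article as plain text, from the same source."""
    root = ET.fromstring(xml_text)
    lines = [root.findtext("title").upper()] + [p.text for p in root.find("body")]
    return "\n".join(lines)
```

Because formatting lives only in the rendering functions, editing one paragraph in the XML source updates every output channel at once, which is the automation the product description claims XML's consistency makes possible.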

See also: Quark Dynamic Publishing Solution (DPS) - 'Why XML'


An Extensible Markup Language (XML) Configuration Access Protocol (XCAP) Diff Event Package
Jari Urpalainen (ed), IETF Internet Draft

This document describes an "xcap-diff" SIP event package with which clients can receive notifications of partial changes to Extensible Markup Language (XML) Configuration Access Protocol (XCAP) resources. The initial synchronization and document updates are based on the XCAP-Diff format. XCAP (RFC 4825) is a protocol that allows clients to manipulate XML documents stored on a server; these XML documents serve as configuration information for application protocols. As an example, RFC 4662 resource list subscriptions (also known as presence lists) allow a client to have a single SIP subscription to a list of users, where the list is maintained on a server. The server obtains presence for those users and reports it back to the client.

Another specification, "Extensible Markup Language (XML) Document Format for Indicating a Change in XML Configuration Access Protocol (XCAP) Resources," defines a data format which can convey the fact that an XML document managed by XCAP has changed. This data format is an XML document format called an XCAP diff document. The format can indicate that a document has changed and provide its previous and new entity tags (ETags). It can also optionally include a set of patch operations which indicate how to transform the document from the version prior to the change to the version after it.

As defined in this XCAP Diff Event Package memo, an "XCAP Component" is an XML element or an attribute which can be updated or retrieved with the XCAP protocol. "Aggregating" means that while XCAP clients update only a single XCAP component at a time, several of these modifications can be aggregated together with the XML-Patch-Ops semantics. When a client starts an "xcap-diff" subscription, it may not be aware of all the individual XCAP documents it is subscribing to. This can, for instance, happen when a user subscribes to his/her collection of a given XCAP Application Usage where several different clients update the same XCAP documents. The initial notification can give the list of those documents which the authenticated user is allowed to read. The references and the strong ETag values of these documents are shown so that a client can separately fetch the actual document contents over HTTP. After these document retrievals, subsequent SIP notifications can contain patches to these documents using XML-Patch-Ops semantics. While the initial document synchronization is based on separate HTTP retrievals of full documents, XML elements or attributes can be received "in-band", that is, straight within the 'xcap-diff' notification format.
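The synchronization behavior described above (full HTTP fetches keyed by strong ETags at startup, then in-band patches) can be modeled roughly as follows. This is a hypothetical sketch of client-side decision logic only, not the event package's actual wire format:

```python
# Hypothetical sketch of a client's synchronization logic for an
# "xcap-diff" subscription. It models only the decision described in the
# text: fetch a full document over HTTP when the local copy is missing or
# its ETag no longer matches; otherwise apply the in-band patch.
def sync(local, notification):
    """local: {doc_uri: etag} for documents the client holds.
    notification: list of (doc_uri, prev_etag, new_etag, patch) tuples,
    where patch is None when no in-band diff was included.
    Returns the list of actions the client should take; updates `local`."""
    actions = []
    for uri, prev, new, patch in notification:
        if uri not in local or patch is None or local[uri] != prev:
            actions.append(("fetch", uri))         # out of sync: full HTTP GET
        else:
            actions.append(("patch", uri, patch))  # apply in-band XML diff
        local[uri] = new                           # record the new strong ETag
    return actions
```

The ETag comparison is what makes in-band patching safe: a patch is only applied when the client provably holds the exact version the patch was computed against.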

See also: XCAP in RFC 4825


OpenLiberty-J Client Library for Liberty Web Services (ID-WSF 2.0)
Staff, OpenLiberty.org Announcement

OpenLiberty.org, the global open source community working to provide developers with resources and support for building interoperable, secure, and privacy-respecting identity services, has announced the release of OpenLiberty-J, an open source Liberty Web Services (ID-WSF 2.0) client library designed to ease the development and accelerate the deployment of secure, standards-compliant Web 2.0 applications. OpenLiberty.org will hold a public webcast to review OpenLiberty-J on April 2, 2008 at 8:00 am US Pacific Time.

OpenLiberty-J enables application developers to quickly and easily incorporate the enterprise-grade security and privacy capabilities of the proven, interoperable Liberty Alliance Identity Web Services Framework into identity-consuming applications such as those found in enterprise service-oriented architectures (SOAs), Web 2.0 social networking environments, and client-based applications on PCs and mobile devices. Released today as a beta under the Apache 2.0 license, the OpenLiberty-J code is available for review and download at OpenLiberty.org. OpenLiberty-J is based on J2SE and on open source XML, SAML, and web services libraries from the Apache Software Foundation and Internet2, including OpenSAML, a product of the Internet2 Shibboleth project. The library implements the Liberty Advanced Client functionality of the Liberty Web Services standards. Developers can immediately begin using the OpenLiberty-J code to build a wide range of new identity applications that are secure and offer users a high degree of online privacy protection.

"With the release of OpenLiberty-J, developers now have a comprehensive library of open source code to begin driving security and privacy into applications requiring identity management functionality," said Conor P. Cahill, Principal Engineer, Intel, and OpenLiberty-J contributor.
"OpenLiberty.org encourages the global open source community to begin working with the code and welcomes contributions to further the evolution of OpenLiberty-J as the project moves from beta to general availability later this year."

See also: the OpenLiberty.org web site


Demand for Interop Fuels J2EE, Microsoft Unity
Vance McCarthy, Integration Developer News

This article aims to give developers and architects an armchair tour of the scope and depth of how leading J2EE vendors are working with Microsoft to push the availability of next-generation interop technologies and best practices. Last month's JavaOne put J2EE/.NET interop in the spotlight like never before: Sun and Microsoft technical experts stood together on a Moscone stage in San Francisco and debuted co-developed interop technologies for helping J2EE developers secure traffic between J2EE and .NET platforms. If JavaOne is any indication, the fences between J2EE and .NET are definitely coming down.

Simon Guest, an interop specialist and senior program manager on Microsoft's Architecture Strategy Team, presented at JavaOne. Following Microsoft's Andrew Layman co-keynote with Sun's Mark Hapner, Guest commented, "we got really good applause from the audience. A lot of developers came by our booth to tell us they were glad we were there, which was good to hear"; the implication being that Java users and developers are also telling Java vendors it's OK to work closely with Microsoft on interop.

J2EE/.NET interop is "extremely important" to IBM customers, according to Jeff Jones, IBM's director of strategy for information management software: "Customers tell us that .NET has come more front and center for them, so our focus on .NET interop has intensified. [IBM and Microsoft] now have a jointly staffed lab in Kirkland, Washington. At that lab, IBM has woven support into DB2 for .NET devs, and made great progress with our ability to interop with Windows Server 2003 and the upcoming 2005 version..."

BEA is also intensifying its interop programs with Microsoft, but its approach is a bit different from Big Blue's. BEA execs say J2EE/.NET interop will be key to providing better unified support for .NET and J2EE programming models, making it easier for developers and architects to program in a mixed environment. Earlier this spring, BEA introduced its AquaLogic Service Bus, an abstraction layer designed to sit above Java/J2EE and .NET environments...

For Sun Microsystems there are very compelling reasons to partner with Microsoft and work to improve J2EE/.NET interop tools and approaches. Customers of both companies are demanding interoperability at all levels, but perhaps most importantly, interop must come with a unified security model. As Sun and Microsoft interop experts joined together on the JavaOne stage, McNealy demonstrated a new interop standard, dubbed Message Transmission Optimization Mechanism (MTOM). MTOM enables developers to send binary attachments between Java and .NET using Web services, while retaining the protections offered by WS-* security and reliability specs...
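MTOM's benefit is easy to quantify. Without it, binary attachments embedded in SOAP messages are typically base64-encoded, which inflates them by roughly one third; MTOM instead transmits the raw bytes as a separate MIME part referenced from the XML. A quick check of the base64 overhead (illustrative arithmetic only, with made-up sample data):

```python
# Base64 encodes every 3 input bytes as 4 output characters, so inlining
# binary data in an XML message costs about 33% extra on the wire. MTOM
# avoids this by sending the raw bytes in a separate MIME part.
import base64

payload = bytes(range(256)) * 300        # 76,800 bytes of sample binary data
encoded = base64.b64encode(payload)      # the inlined, text-safe form
overhead = len(encoded) / len(payload)   # ratio is 4/3 for unpadded input
```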


Opinion: WSO2 Mashup Server Takes First Steps
Steven Nunez, ComputerWorld

Mashups (composite applications) promise the ability to easily create useful new applications from existing services and Web applications. By combining data from multiple sources across the Web, and from within the enterprise, mashups can help distill important information for people who would otherwise need to gather and distill it manually. Composite applications in 2008 are in the "early adopter" phase, with companies exploring their uses and potential in the enterprise. There's no lack of entrants in the field; a quick search turned up at least 20 different mashup platforms, both commercial and open source. Products such as JackBe Presto, Nexaweb Enterprise Web 2.0 Suite, and Kapow's RoboSuite illustrate the range of approaches.

WSO2's Mashup Server is aimed at Web developers seeking a complete environment for building, deploying, and administering composite applications. It's clear that the WSO2 Mashup Server design team gave some thought to what such developers would need to create mashups, and for those with an understanding of JavaScript, XML, and AJAX, this toolset makes developing mashups simple. Parsing XML in JavaScript is usually a difficult and tedious task, but the inclusion of Mozilla's E4X (ECMAScript for XML) makes it simpler. JSON (JavaScript Object Notation) would be a good alternative communication mechanism, and hopefully future versions will include the option of returning JSON objects as well.

Hosted Objects are objects hosted within the WSO2 Mashup Server that provide access to remote data sources. These objects are written in Java and provide access to APP (Atom Publishing Protocol) resources, RSS feeds, e-mail, and instant messaging services (although only for sending messages), among others. One of the more useful, if more complicated, hosted objects is the "scraper" object, which uses Web-Harvest to screen-scrape Web pages that do not provide Web services. From the enterprise standpoint, significant omissions are the lack of JMS and SQL hosted objects.

Creating the client side of the mashup is straightforward: using the generated JavaScript stubs, you simply need to include them in the Web page that's consuming the service... Mooshup.com is a community of mashup authors, where they can develop, share, discover, and run JavaScript-powered mashups. The site is powered by the WSO2 Mashup Server, which is available as a free open source download.
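The appeal of E4X, and of a JSON return option, is terse access to structured service results. E4X itself is JavaScript-only, but the idea of flattening a small XML result into JSON for browser-side consumption can be illustrated with the Python standard library; the <quote> document below is a made-up example, not an actual WSO2 service response:

```python
# Rough illustration (not WSO2 code): converting a small, flat XML
# service result into JSON, the format the review hopes future Mashup
# Server versions will offer natively.
import json
import xml.etree.ElementTree as ET

XML_RESULT = "<quote><symbol>XYZ</symbol><price>42.5</price></quote>"

def xml_to_json(xml_text: str) -> str:
    """Flatten a one-level XML element into a JSON object string."""
    root = ET.fromstring(xml_text)
    return json.dumps({root.tag: {child.tag: child.text for child in root}})
```

A browser client can consume such a JSON payload directly, with no XML parser on the page at all, which is exactly why the review flags JSON output as a desirable addition.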

See also: the Mooshup.com developer community web site


Sponsors

XML Daily Newslink and Cover Pages are sponsored by:

BEA Systems, Inc. - http://www.bea.com
EDS - http://www.eds.com
IBM Corporation - http://www.ibm.com
Primeton - http://www.primeton.com
SAP AG - http://www.sap.com
Sun Microsystems, Inc. - http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2008-03-21.html
Robin Cover, Editor: robin@oasis-open.org