This issue of XML Daily Newslink is sponsored by:
BEA Systems, Inc. http://www.bea.com
- Atom Feed Paging and Archiving Becomes an IETF Proposed Standard
- W3C Issues Last Call for Delivery Context: Client Interfaces (DCCI) 1.0
- DITA Version 1.1 Submitted to OASIS Members for Approval as a Standard
- IESG Approves Requests for Registration of New Media Types
- Sun Looks to Steal Linux Thunder with Project Indiana
- The XML Pipeline Processor: Alpha Testing
- Meet Jena, a Semantic Web Platform for Java
- Eric Newcomer on the Future of OSGi
- BEA Teams with HP and Intel in SOA Demonstration Center
- A Behind-The-Scenes Look at How DRM Becomes Law
Atom Feed Paging and Archiving Becomes an IETF Proposed Standard
Staff, IESG Announcement
The Internet Engineering Steering Group announced the approval of "Feed Paging and Archiving" as an IETF Proposed Standard. Lisa Dusseault reviewed the specification for the IESG. "This document is an important improvement on RFC 4287, adding the ability to download only part of a published Atom feed at a time. Many servers already provide only part of a feed (e.g. a blog with hundreds of archived entries) when clients request the feed itself, and the information about how to get the rest of the feed is *simply not there* with existing standards (or if there, human-readable only). Although this is not a WG document, it was reviewed by the members of the AtomPub Working Group. There was no significant dissent, and there was definitely support for implementing it. It is already implemented in the Apache Abdera project." Syndicated Web feeds using such formats as Atom are often split up into multiple documents to save bandwidth, allow 'sliding window' access, or for other purposes. The "Feed Paging and Archiving" specification formalizes two types of feeds that can span one or more feed documents: 'paged' feeds and 'archived' feeds. Additionally, it defines 'complete' feeds to cover the case when a single feed document explicitly represents all of the feed's entries. Each has different properties and trade-offs: (1) Complete feeds contain the entire set of entries in one document, and can be useful when it isn't desirable to 'remember' previously-seen entries. (2) Paged feeds split the entries among multiple temporary documents. This can be useful when entries in the feed are not long-lived or stable, and the client needs to access an arbitrary portion of them, usually in close succession. (3) Archived feeds split them among multiple permanent documents, and can be useful when entries are long-lived and it is important for clients to see every one. The semantics of a feed that combines these types is undefined by this specification.
Although they refer to Atom normatively, the mechanisms described herein can be used with similar syndication formats. The document has been edited and released in twelve revisions by Mark Nottingham under four different titles: Feed History: Enabling Stateful Syndication [v. 00 - 03]; Feed History: Enabling Incremental Syndication [v. 04 - 05]; Extensions for Multi-Document Syndicated Feeds [v. 06]; Feed Paging and Archiving [v. 07 - 11].
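As an illustration (feed and entry URLs hypothetical), a feed document in a 'paged' feed advertises its neighbors with Atom link relations, while archived and complete feeds are marked with extension elements from the feed-history namespace:

```xml
<feed xmlns="http://www.w3.org/2005/Atom"
      xmlns:fh="http://purl.org/syndication/history/1.0">
  <title>Example Feed</title>
  <id>http://example.org/feed</id>
  <updated>2007-07-20T12:00:00Z</updated>
  <!-- paged feed: point clients at the adjacent pages -->
  <link rel="self" href="http://example.org/feed?page=2"/>
  <link rel="first" href="http://example.org/feed"/>
  <link rel="previous" href="http://example.org/feed?page=1"/>
  <link rel="next" href="http://example.org/feed?page=3"/>
  <!-- an archived feed document would instead carry <fh:archive/> and
       'prev-archive'/'next-archive' links; a complete feed carries
       <fh:complete/> -->
  <entry>
    <title>An entry</title>
    <id>http://example.org/entries/42</id>
    <updated>2007-07-20T12:00:00Z</updated>
  </entry>
</feed>
```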
See also: Atom References
W3C Issues Last Call for Delivery Context: Client Interfaces (DCCI) 1.0
Keith Waters, Rafah A. Hosn (et al., eds), W3C Technical Report
The W3C Ubiquitous Web Applications Working Group has published a Last Call Working Draft for "Delivery Context: Client Interfaces (DCCI) 1.0 - Accessing Static and Dynamic Delivery Context Properties". The document defines platform- and language-neutral programming interfaces that provide Web applications access to a hierarchy of dynamic properties representing device capabilities, configurations, user preferences and environmental conditions. An ever-increasing variety of devices with an increasing range of capabilities is gaining the ability to access the Web. This is providing the opportunity for the development of novel applications that are sensitive to such capabilities and can make use of them. Devices often keep a significant amount of information about their current operating state. Information such as the current battery level, the ambient light level of the surroundings, the bandwidth available on the currently connected network, and whether or not the user has muted audio output is often available. The normal mechanism for accessing such information is via system-specific interfaces provided within the device. The interface described in this specification provides a more general mechanism that allows this kind of information to be made available to scripts running in Web pages within a browser on the device. Applications built using script-based techniques, such as AJAX, have shown how dynamic capabilities can be included in Web pages. Scripts that use DCCI will also be able to use dynamic information about the device, network and user preferences to influence their behavior. W3C's Ubiquitous Web Applications Working Group seeks to simplify the creation of distributed Web applications involving a wide diversity of devices, including desktop computers, office equipment, home media appliances, mobile devices (phones), physical sensors and effectors (including RFID and barcodes).
This will be achieved by building upon existing work on device independent authoring and delivery contexts by the former DIWG, together with new work on remote eventing, device coordination and intent-based events.
See also: the W3C news item
DITA Version 1.1 Submitted to OASIS Members for Approval as a Standard
Staff, OASIS Announcement
Members of the OASIS Darwin Information Typing Architecture (DITA) Technical Committee have released an approved Committee Specification of the DITA 1.1 specification for consideration as an OASIS Standard. Statements of use have been provided by IBM, JustSystems (XMetaL), Flatirons Solutions, PTC-Arbortext, and Comtech Services, Inc. DITA is an architecture for creating topic-oriented, information-typed content that can be reused and single-sourced in a variety of ways. It is also an architecture for creating new topic types and describing new information domains based on existing types and domains. The process for creating new topic types and domains is called specialization. Specialization allows the creation of very specific, targeted document type definitions while still sharing common output transforms and design rules developed for more general types and domains, in much the same way that classes in an object-oriented system can inherit methods of ancestor classes. DITA topics are XML-conforming. As such, they are readily viewed, edited, and validated with standard XML tools, although some features such as content referencing and specialization may benefit from customized support. Version 1.1 of the Darwin Information Typing Architecture (DITA) is made up of four distinct units: an architectural specification, a language specification, and the DTD and Schema implementations of the language.
Additional functionality in DITA 1.1: (1) A 'bookmap' specialization for encoding book-specific information in a DITA map; (2) A 'glossentry' specialization for glossary entries; (3) Indexing specializations for see, see-also, page ranges, and sort order; (4) Improvements to graphic scaling capability; (5) Improved short description flexibility through a new 'abstract' element; (6) Specialization support for new global attributes, such as conditional processing attributes; (7) Support for integration of existing content structures through the 'foreign' element; (8) Support for new kinds of information and structures through the 'data' and 'unknown' elements; (9) Formalization of conditional processing profiles. The work of the OASIS DocBook TC and the OASIS Open Document Format TC is related to DITA in the documentation realm; however, the goals and design of DITA provide different capabilities for potentially wider user communities.
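As a flavor of the new topic types, a minimal glossary entry using the 'glossentry' specialization (element names per the DITA 1.1 language specification; the content itself is a made-up example) might read:

```xml
<glossentry id="dtd">
  <glossterm>Document Type Definition</glossterm>
  <glossdef>A set of markup declarations that defines the structure
    permitted in a class of XML documents.</glossdef>
</glossentry>
```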
See also: the announcement
IESG Approves Requests for Registration of New Media Types
Staff, IESG Secretary Announcement
Postings from the IESG Secretary announce the approval of requests to register two new media types: 'application/wspolicy+xml' and 'application/wsdl+xml'. Application media types are registered with IANA (the Internet Assigned Numbers Authority, operated by ICANN, the Internet Corporation for Assigned Names and Numbers). Registration of media types based on XML is described in the IETF Request for Comments #3023 ("XML Media Types"). The registration template for 'application/wspolicy+xml' is presented in Appendix A of the "Web Services Policy 1.5 - Framework" specification. That appendix defines the 'application/wspolicy+xml' media type which can be used to describe Web Services Policy documents serialized as XML. Either 'wsp:Policy' or 'wsp:PolicyAttachment' could be the root element of such a document. The registration template for 'application/wsdl+xml' is presented in Appendix A of the W3C Recommendation "Web Services Description Language (WSDL) Version 2.0 Part 1: Core Language." The 'application/wsdl+xml' media type can be used to describe WSDL 2.0 documents serialized as XML.
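In practice, a server returning a policy document would label the response with the new media type; a minimal (hypothetical) exchange looks like:

```
HTTP/1.1 200 OK
Content-Type: application/wspolicy+xml

<wsp:Policy xmlns:wsp="http://www.w3.org/ns/ws-policy">
  ...
</wsp:Policy>
```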
See also: WSDL 2.0
Sun Looks to Steal Linux Thunder with Project Indiana
Paul Krill, InfoWorld
Sun plans to release binaries in Spring 2008 for its OpenSolaris Unix platform, similar to how Linux is offered, as part of the company's Project Indiana. Having already offered up Solaris to open source via the OpenSolaris project, Sun will expand its proselytizing of the platform by releasing binaries. Project Indiana seeks to combine what Sun described as the best of Solaris—its enterprise-class capabilities, innovation, and backward compatibility—with the best of Linux—its distribution model, community, and its being free and open source. Pre-releases of Project Indiana are expected to start this fall. Also featured as part of the project will be short release cycles, with a new downloadable release offered every six months. Developers will get the latest Solaris innovations without having to build the Solaris code. With the project, Sun is moving to a two-tier development environment in which enterprise customers can get the commercial version of Solaris and developers can access the Indiana binary version. The Indiana variant will feature ease of installation, network-based package management, and Solaris's ZFS (Zettabyte File System) as the default file system. ZFS snapshots capture system states to assist in problem resolution.
See also: IBM and AIX 6
The XML Pipeline Processor: Alpha Testing
Norm Walsh, Blog
Norman Walsh, co-editor of the W3C specification "XProc: An XML Pipeline Language," announced the creation of a permanent status page for 'the XML Pipeline Processor'. The XML Pipeline Processor is an implementation of the XProc specification being developed by the W3C to address questions about the XML processing model. "The first alpha version is now available... The XML Pipeline Processor runs XProc pipelines. It is a command-line application. This version of the XML Pipeline Processor is supposed to implement the 6-July-2007 version of the XProc specification. The current release of the XML Pipeline Processor is 0.0.1, from 10-July-2007. This is a very alpha release; it is implemented in Java and should run on any platform that supports Java 1.5 or later and has a command line. It is available under the terms of either the GNU General Public License Version 2 only ("GPL") or the Common Development and Distribution License ("CDDL"). The repository contains not only the Java sources, but also ancillary files and the Netbeans project that is used to build it." Caveats from the release notes: "While you're encouraged to experiment with xproc and report bugs and problems that you encounter, it is very definitely not yet complete or ready for production. The principal goals of this project are to provide a complete and correct implementation of 'XProc: An XML Pipeline Language' in a time frame that's useful for evaluating the specification as it develops. Secondary goals include extensibility and performance. Consistent with the principal goals, very little effort has been expended to create a stand-alone, self-contained distribution. In the future, it's likely that the number of prerequisites will be reduced..."
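By way of illustration, an XProc pipeline is itself an XML document that chains processing steps together; a two-step sketch in the style of the July 2007 draft (element names follow that draft and may change as the specification evolves) might read:

```xml
<p:pipeline xmlns:p="http://www.w3.org/ns/xproc">
  <!-- expand any XIncludes in the source document -->
  <p:xinclude/>
  <!-- pass the expanded result through unchanged -->
  <p:identity/>
</p:pipeline>
```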
See also: the project web site
Meet Jena, a Semantic Web Platform for Java
Ian Dickinson, DevX.com
Tools for developing semantically aware applications are rapidly growing more Java friendly. This article takes a closer look at Jena, an open source toolkit for processing and working with semantic web data. Programmers who want to develop semantic web applications have a growing range of tools and libraries to choose from. One such tool, the Jena platform, is an open source toolkit for processing Resource Description Framework (RDF), Web Ontology Language (OWL), and other semantic web data. Jena is a free, open source (under a liberal BSD license) Java platform for processing semantic web data. In this case, 'semantic web' refers particularly to the approach based on the World Wide Web Consortium (W3C) Semantic Web standards, especially RDF, OWL, and SPARQL. This discussion introduces Jena's Model abstraction that provides the container interface for collections of RDF triples, which are data linked by relationships. 'Model' is one of the key components of Jena's approach to handling RDF data. We explore its core capabilities along with some of the extensions of the basic Model that are built in to Jena to give you a working knowledge of Jena code that will load, process, query, and write RDF data and ontologies. Jena's built-in RDB Model adapters work with a specific triple store table layout, but there are other tools that extend Model to cover repositories other than triple stores, such as native relational tables or LDAP servers. The examples discussed here created models programmatically, but it's also possible to describe models using a declarative vocabulary (in RDF, naturally) and have this description assembled into a Jena Model object. Jena's schemagen tool can automate the translation of ontology terms into Java constants that can be used by Java programs to access RDF and OWL data.
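As a flavor of the Model API discussed in the article, a minimal sketch (resource URI and title are hypothetical; requires the Jena 2 libraries on the classpath) creates an in-memory model, adds one statement, and serializes it:

```java
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Resource;
import com.hp.hpl.jena.vocabulary.DC;

public class JenaSketch {
    public static void main(String[] args) {
        // create an empty, in-memory Model
        Model model = ModelFactory.createDefaultModel();
        // add one RDF statement: <http://example.org/doc> dc:title "..."
        Resource doc = model.createResource("http://example.org/doc");
        doc.addProperty(DC.title, "An example document");
        // write the model out as RDF/XML
        model.write(System.out, "RDF/XML");
    }
}
```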
See also: W3C Semantic Web
Eric Newcomer on the Future of OSGi
Mark Little, InfoQ
Eric Newcomer is the CTO of IONA Technologies. He's had a long and distinguished career, spanning time at DEC, writing one of the distributed transaction 'bibles' and several best-selling books on Web Services, and participating in many of the important Web Services specifications and standards, such as WS-CAF and WS-TX. Eric is now co-chair of the OSGi Enterprise Expert Working Group and agreed to talk to us about OSGi, Java and ESB. Newcomer: "What really got OSGi into the spotlight was Eclipse. And I think that's still probably how most people would know about OSGi—the Eclipse platform is an implementation of OSGi, and every Eclipse plug-in that you download and install uses OSGi behind the scenes... As anyone who's worked in multiple organizations knows, each one has its own unique approach and process. The OSGi process starts with a formalization of requirements into RFPs (Request for Proposal) and then once some number of RFPs are approved by the Requirements Committee, the Expert Group members can start creating RFCs (Request for Comment), which are the design documents identifying proposed solutions to the requirements. After that (or perhaps concurrently) EG members can develop a reference implementation and then someone (preferably not the same people as those coding the RI) develops the conformance test. A specification is only complete once there's an RI and a conformance test. So we are still relatively at the beginning of the process, but making good progress. The EEG was created in December by the OSGi Board, and held its initial meeting at the end of January in Dublin, Ireland. At that time several "workstreams" were identified. Leaders were assigned to the various workstreams and from that the 13 or so current RFPs were created. A couple of weeks ago the EEG voted to submit seven of the RFPs for approval, so with that we are effectively starting on the design stage.
Based on the RFPs submitted we will be spending a lot of time figuring out how to map existing enterprise technologies onto OSGi, such as Spring, SCA, JEE, JBI, Web services, and perhaps others.
See also: the OSGi Alliance
BEA Teams with HP and Intel in SOA Demonstration Center
Staff, BEA Announcement
BEA Systems, Inc., a world leader in enterprise infrastructure software, today announced that it has teamed with HP and Intel to open the BEA Center of Innovation in McLean, Virginia. The Center is dedicated to the development of an open community of commercial and Federal organizations blending government Information Technology (IT) solutions with commercial best practices to enable easier adoption of Service-Oriented Architecture (SOA) in the public sector. The new BEA Center of Innovation features an SOA demonstration center, a collaborative SOA learning environment, and an SOA research and development lab. In collaboration with BEA's alliance partners, HP and Intel, customers can learn how to integrate capabilities from the processor core up through the computer platform, operating environments, and application platform, providing a holistic view of next-generation SOA directions. The BEA Center of Innovation will also feature some of the latest technologies and patterns being deployed as part of SOA, including virtualization, event-driven services, Java Real-time, and Web 2.0 technologies.
A Behind-The-Scenes Look at How DRM Becomes Law
Cory Doctorow, InformationWeek
Technology, public policy, open data, and law: the author looks at the back-room dealing that allowed entertainment companies and electronics companies to craft public policy on digital rights management. "Otto von Bismarck quipped, 'Laws are like sausages, it is better not to see them being made.' I've seen sausages made. I've seen laws made. Both processes are pleasant in comparison to the way anti-copying technology agreements are made. This technology, usually called 'Digital Rights Management' (DRM), proposes to make it hard for your computer to copy some files. Because all computer operations involve copying, this is a daunting task—as security expert Bruce Schneier has said, 'Making bits harder to copy is like making water that's less wet.' At root, DRMs are technologies that treat the owner of a computer or other device as an attacker, someone against whom the system must be armored. Like the electrical meter on the side of your house, a DRM is a technology that you possess, but that you are never supposed to be able to manipulate or modify. Unlike your meter, though, a DRM that is defeated in one place is defeated in all places, nearly simultaneously. Why manufacture a device that attacks its owner? One would assume that such a device would cost more to make than a friendlier one, and that customers would prefer not to buy devices that treat them as presumptive criminals. DRM technologies limit more than copying: they limit uses, such as viewing a movie in a different country, copying a song to a different manufacturer's player, or even pausing a movie for too long. Surely, this stuff hurts sales: Who goes into a store and asks, 'Do you have any music that's locked to just one company's player? I'm in the market for some lock-in'..."
See also: DRM and XML
XML Daily Newslink and Cover Pages are sponsored by:
BEA Systems, Inc.
Sun Microsystems, Inc.
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: firstname.lastname@example.org
Newsletter unsubscribe: email@example.com
Newsletter help: firstname.lastname@example.org
Cover Pages: http://xml.coverpages.org/