The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: August 28, 2008
XML Daily Newslink. Thursday, 28 August 2008

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
Oracle Corporation

Oracle Enterprise Pack for Eclipse Supports Java EE Development
Staff, Oracle Announcement

Oracle has announced the availability of the Oracle Enterprise Pack for Eclipse (OEPE) as the first free Eclipse 3.4 environment to support Oracle WebLogic Server 10g Release 3. With the release of Oracle Enterprise Pack for Eclipse and Oracle Workshop for WebLogic Server, Oracle continues to provide a versatile portfolio of open source and proprietary productivity tools enabling Java developers to reduce project development time while improving quality. The Oracle Enterprise Pack for Eclipse (OEPE) is a set of plug-ins designed to support Java EE development, especially where Eclipse is your preferred Integrated Development Environment (IDE). It installs as a plug-in to your existing Eclipse, or will install Eclipse for you, and supports your favorite server or servlet engine. Oracle Enterprise Pack for Eclipse is the latest in a series of products that combine technology from Oracle Fusion Middleware and BEA Systems. These products illustrate the rapid progress that Oracle is making in combining market-leading technologies from the two companies into a unified product offering. For organizations using the Eclipse IDE, Oracle Enterprise Pack for Eclipse delivers a certified set of Eclipse plug-ins to accelerate database, Java, Java Enterprise Edition (Java EE) and Oracle WebLogic Server development. Key new development features of Oracle WebLogic Server are supported by this release, including FastSwap, the ability to redefine Java classes without redeployment, which helps developers streamline lengthy iterative development cycles. A Strategic Developer and Eclipse Foundation Board Member, Oracle has a long-standing history of active participation in the Eclipse community.
Oracle currently leads several Eclipse-based projects including: (1) JavaServer Faces (JSF) Tools—an extensible tooling infrastructure and exemplary tools for building JSF-based, web-enabled applications; (2) Dali Java Persistence Architecture (JPA) Tooling; (3) BPEL Tools: The goal of the BPEL Project is to add comprehensive support to Eclipse for the definition, authoring, editing, deploying, testing and debugging of WS-BPEL 2.0 processes. WS-BPEL (Web Services Business Process Execution Language), or BPEL, is a vendor-neutral specification being developed by OASIS to specify business processes as a set of interactions between web services. By providing these tools, this project aims to build a community around support for BPEL in Eclipse. (4) Eclipse Data Tools Platform; (5) Eclipse Persistence Services (EclipseLink).

See also: the OEPE description and download

The Session Initiation Protocol (SIP) Pending Additions Event Package
Gonzalo Camarillo (ed), IETF Internet Draft

The Internet Engineering Steering Group (IESG) has announced approval of "The Session Initiation Protocol (SIP) Pending Additions Event Package" as an IETF Proposed Standard. The IETF Session Initiation Proposal Investigation (SIPPING) Working Group was chartered to document the use of SIP for several applications related to telephony and multimedia, and to develop requirements for extensions to SIP needed for those applications. Numerous approved RFCs and Internet Drafts use XML formats in protocol descriptions. The 'SIP Pending Additions Event Package' specification defines a SIP event package whereby user agents can subscribe to the consent-related state of the resources that are being added to a resource list that defines a translation. The framework for consent-based communications in SIP identifies the need for users manipulating the translation logic at a relay (e.g., adding a new recipient) to be informed about the consent-related status of the recipients of a given translation. That is, the user manipulating the translation logic needs to know which recipients have given the relay permission to send them SIP requests. A user agent subscribes to a relay using the Pending Additions event package. NOTIFY requests within this event package can carry an XML document in the "application/resource-lists+xml" format (per RFC 4826) or in the "application/resource-lists-diff+xml" format, which is based on XML patch operations ("An Extensible Markup Language (XML) Patch Operations Framework Utilizing XML Path Language (XPath) Selectors"). A document in the "application/resource-lists+xml" format provides the user agent with the whole list of resources being added to a resource list along with the consent-related status of those resources. A document in the "application/resource-lists-diff+xml" format provides the user agent with the changes the list of resources being added has experienced since the last notification sent to the user agent...
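As a rough illustration of the notification payload, the sketch below parses a minimal RFC 4826 "application/resource-lists+xml" document with Python's standard library to list the resources a user agent would see. The sample entries and the idea of deriving a flat URI list are illustrative; the actual Pending Additions consent-state extensions are defined in the draft itself and are not reproduced here.

```python
import xml.etree.ElementTree as ET

# Minimal RFC 4826 resource list. The entries are made-up examples;
# the consent-related state carried by the Pending Additions package
# is omitted for brevity.
DOC = """<?xml version="1.0" encoding="UTF-8"?>
<resource-lists xmlns="urn:ietf:params:xml:ns:resource-lists">
  <list name="friends">
    <entry uri="sip:alice@example.com"/>
    <entry uri="sip:bob@example.org"/>
  </list>
</resource-lists>"""

NS = {"rl": "urn:ietf:params:xml:ns:resource-lists"}

def pending_entries(xml_text):
    """Return the URIs listed in a resource-lists document."""
    root = ET.fromstring(xml_text)
    return [e.get("uri") for e in root.findall(".//rl:entry", NS)]

print(pending_entries(DOC))
# prints ['sip:alice@example.com', 'sip:bob@example.org']
```

A "resource-lists-diff+xml" notification would instead carry XPath-selector patch operations against a document like this one, so the user agent can update its view without re-fetching the whole list.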
Other Internet Drafts from the IETF Session Initiation Proposal Investigation (SIPPING) Working Group have entered Last Call or have been issued as IETF Proposed Standard: (1) "Extensible Markup Language (XML) Format Extension for Representing Copy Control Attributes in Resource Lists" is now a Proposed Standard; (2) "A Document Format for Requesting Consent" is also now a Proposed Standard; (3) "A Session Initiation Protocol (SIP) Event Package for Session-Specific Session Policies" is in Last Call review; (4) see also "A User Agent Profile Data Set for Media Policy."

See also: the IETF Session Initiation Proposal Investigation (SIPPING) WG Charter

Mt. Sinai Medical Center Looks to Open Standards for Patient Smartcards
Ellen Messmer, Network World

Mt. Sinai Medical Center in New York City, which five years ago pioneered the practice of giving out a smartcard to patients to store identity and healthcare records, is realigning its focus to support open standards that could get other hospital systems supporting smartcards, too. Mt. Sinai Medical Center has issued about 14,000 of the smartcards to patients through the pilot program that started at the Elmhurst Hospital Center affiliated with Mt. Sinai's School of Medicine. Mt. Sinai Medical Center now plans a redesign of its patient smartcard to adhere to an [XML-based] open standard known as the "Continuity of Care Record" (CCR) with the anticipation that other medical institutions in the New York area and elsewhere might support patient smartcards, too. The Mt. Sinai-issued smartcard, which stores the patient's personal information, lab results and other medical records, is updated every time the smartcard is placed in a card reader with access to the specialized database of the hospital information system which acts as the smartcard data repository... If the federal government approved the idea, Mt. Sinai would like to be able to put a standards-based healthcare application on a PIV card, [VP Paul] Contino says. The immediate effort, though, entails Mt. Sinai switching to an XML-based standard called CCR that was jointly developed by several organizations, including ASTM International, Massachusetts Medical Society and HIMSS. Contino says Mt. Sinai will be steering its patient smartcard project toward using CCR, with the goal of also encouraging other hospital systems to adopt it in order to establish a multi-hospital system where different healthcare providers one day will be able to accept each other's issued patient smartcards for purposes of sharing patient-related data.
To date, there appear to be few patient smartcard projects in the United States, Contino says, but the potential is there to get hospitals to support them more widely through adherence to open standards.
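Because CCR is plain XML, any card reader application can extract record data with a standard parser. The sketch below pulls lab-result descriptions from a skeletal CCR fragment; the fragment is an illustration only (element names follow the ASTM CCR schema in outline, and a real record carries many more required sections), not a validating sample.

```python
import xml.etree.ElementTree as ET

# Illustrative CCR fragment; not a complete, schema-valid record.
CCR = """<ContinuityOfCareRecord xmlns="urn:astm-org:CCR">
  <Body>
    <Results>
      <Result>
        <Description><Text>Hemoglobin A1c</Text></Description>
      </Result>
    </Results>
  </Body>
</ContinuityOfCareRecord>"""

NS = {"ccr": "urn:astm-org:CCR"}
root = ET.fromstring(CCR)
# Collect the human-readable descriptions of each lab result.
results = [t.text for t in root.findall(".//ccr:Description/ccr:Text", NS)]
print(results)  # prints ['Hemoglobin A1c']
```

The point of the open format is exactly this: another hospital's system, with no knowledge of Mt. Sinai's internal database, could read the same card with equally generic code.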

See also: the Continuity of Care Record (CCR)

Create Time-Availability Maps with Perl and Google Earth
Nathan Harrington, IBM developerWorks

Time-availability maps provide a listing of who is most likely to be available for a certain hour in a certain location. Nationwide and international teams, flexible work hours, and four-day workweeks all contribute to change in when and where teams work together. For example, a map can show which users in different parts of the country are likely to receive messages during a particular time window. Instant messaging logs, phone-usage records, group calendars, badge-reader access, and any number of other time-related data records are suitable for creating these time-availability maps. This article presents tools and code to help you find the best times to reach various members of your team across geographical areas using Google Earth. The article focuses on the extraction of perhaps the most common form of availability data: e-mail headers. A person is most likely to be available around the times they are most active sending messages. Each user is assigned the appropriate geographical area, and KML is created with designator fade depths based on message count per hour. Using Google Earth's time-slider feature, including animation and total time window selection, helps visualize the resulting availability map for users throughout a geographic area. Using a common method of message time-tracking (email headers) and a program to generate [XML-based] Keyhole Markup Language (KML) with appropriate settings, this article demonstrates useful visualization techniques using Google Earth's TimeSpan feature and the "time slider." One of Google's many excellent KML documentation pages lists an example with a rough outline of U.S. states in KML. Comprising about 13,000 points, these rough outlines provide an excellent basis for highlighting states...
With the incredible Google Earth interface, and the custom code above for creating KML, you can build your own time-availability maps for a wide variety of applications. Consider extracting login and activity times from your instant messaging logs, phone records, or other sources to build additional and overlapping data sets. Expand the time windows, or focus more on a minute-to-minute spread of information through your company or customer set. Extract your Web-server visitor information and build a municipality-specific set of place marks and designators to explore where and when your Web-site visitors see your content.
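The core of the technique described above — KML Placemarks whose TimeSpan covers one hour and whose icon opacity fades with per-hour message count — can be sketched in a few lines of Python. This is not the article's own code; the coordinates, date, and scaling are illustrative assumptions. KML colors are aabbggrr hex, so the alpha byte is what produces the "fade depth."

```python
import xml.etree.ElementTree as ET

def availability_placemark(name, lon, lat, hour, count, max_count):
    """Build a KML Placemark whose icon opacity scales with message
    count for one hour-long TimeSpan. Date and coordinates are
    illustrative placeholders."""
    alpha = int(255 * count / max_count)  # more messages -> more opaque
    pm = ET.Element("Placemark")
    ET.SubElement(pm, "name").text = name
    span = ET.SubElement(pm, "TimeSpan")
    ET.SubElement(span, "begin").text = f"2008-08-28T{hour:02d}:00:00Z"
    ET.SubElement(span, "end").text = f"2008-08-28T{hour + 1:02d}:00:00Z"
    style = ET.SubElement(pm, "Style")
    icon = ET.SubElement(style, "IconStyle")
    # KML color order is alpha, blue, green, red.
    ET.SubElement(icon, "color").text = f"{alpha:02x}ffffff"
    point = ET.SubElement(pm, "Point")
    ET.SubElement(point, "coordinates").text = f"{lon},{lat},0"
    return pm

kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
doc = ET.SubElement(kml, "Document")
doc.append(availability_placemark("alice", -74.0, 40.7, 14, 12, 20))
print(ET.tostring(kml, encoding="unicode"))
```

Loading the resulting file in Google Earth and dragging the time slider across the day then animates each person's availability; swapping the Point for the state-outline polygons mentioned above gives the per-state shading the article describes.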

See also: the KML Documentation

Closing the Gap Between Web 2.0 and the Semantic Web
Jana Herwig, Social Computing Magazine

Two days ago in upper Austria, the BarCamp Traunsee, subtitled "Social Media Review Camp", took place, which I had co-organized and which was co-sponsored by our firm, the Semantic Web Company. Andreas Blumauer, also of SWC, joined me on the first day, hosting a session about and giving an introduction to Linked Data. Given the angle of the BarCamp, he gave it to an audience of Web 2.0 people (i.e. consultants, marketers, developers, communications people). And was he able to bridge the gap between 2.0 and 3.0? [...] If Semantic Web people start explaining their concepts to 'other species', they very soon start juggling acronyms and technical lingo, in particular names and abbreviations from the Semantic Web Stack—understandably so, as URIs, XML and RDF form the very foundation, on the technological side... Just as the Semantic Web interfaces are only about to become more accessible to Web 2.0 people, I think a vital next step in promoting the Semantic Web is to find human-readable explanations of its technologies. What seems to be pegged in people's minds is that you have to have an API to make mashups, and that mashups are what constitutes one of the miracles of Web 2.0... You can and must have both semantic technologies like NLP (natural language processing) and open standards such as defined in the Semantic Web stack. It's not like one is for the 'cool kids' (or Web 2.0 kids) and the other one for the 'geeks'. If anything, I'd say that the 'cool kids' are probably more interested in improving the service of just their site (making the industry and software market more diverse, if there are enough of them), whereas the 'geeks' work towards global exchange through the definition and further development of open standards (and make sure the 'cool kids' don't get trapped in their data silos). In the end, once the Semantic Web reaches maturity, it will need both of them.

See also: the November 2008 Web 2.0 Summit

State of the Semantic Web: Know Where to Look
Brian Sletten

Those looking for evidence of progress on the Semantic Web do not have to look far. Several major projects and companies are embracing the vision and technology stack like never before. Semantic Web technologies are here in many important ways, and you are most likely using these technologies on a daily basis, even if indirectly. The success of these technologies is not simply a question of everyone adopting the same models and the same terms; it is about a rich and vibrant ecosystem of data, documents, and software tied together in useful ways... First of all, sites like Delicious, Flickr, and other folksonomy-based sites demonstrate that when the bar is lowered and the value is demonstrated, people will happily contribute tags and other metadata. Delicious can filter out typos and bogus tags by looking at the most common terms for a page. The challenge to the proponents of Semantic Web technologies is to make it as easy to select terms from standard and shared vocabularies as it is to type arbitrary tags. Secondly, new technologies are eliminating the need to convert data to RDF directly. These include Gleaning Resource Descriptions from Dialects of Languages (GRDDL), RDFa, and SPARQL endpoints. GRDDL and RDFa allow RDF to be produced through standard transformations from existing XML and XHTML resources. Simple markup, no more complicated than current presentation markup, allows proper metadata to be mixed into the presentation structure without domain-specific hackery like microformats. With these tools in place and supported by certain content publishers, it will be trivial to support publication metadata, licensing information, geotagging information, and the like from the pages you visit. It is also possible to link this extracted information to different data sources for further discovery. SPARQL endpoints allow RDF views into both RDF and non-RDF data.
Some projects leverage other technologies, such as Mulgara Semantic Store, which uses D2RQ in its Relational Resolver to allow RDF queries to include results from non-RDF data sources. This kind of combination allows the RDF model to be populated with content from existing Customer Relationship Management (CRM), Enterprise Resource Planning (ERP), and other relational systems. There is no need to convert the data and store it as RDF; it is generated on the fly... As RDF data is made available publicly on the web and in the Enterprise, it allows for technologies to create relationships across data sources. The Linking Open Data project has gained tremendous momentum in the past year and is now connecting billions of triples worth of data together through billions of links. As an example, consider the thousands of Wikipedia volunteers who curate the concepts and relationships that keep the site up-to-date and (presumably) accurate. These include facts such as that the Louvre is a museum in Paris, France. These terms and relationships are now converted monthly into RDF and are exposed at DBPedia. It is now possible to take a term from Wikipedia, query DBPedia for metadata about this term, and convert the alternate names for the term and its geographic information into a Flickr query for pictures constrained to a specific location.
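The Louvre example above can be made concrete with a query against DBpedia's public SPARQL endpoint. The sketch below only builds the request URL rather than fetching it, so it runs offline; the `dbo:location` property mirrors common DBpedia usage but property names vary between dataset releases, so treat the query as illustrative.

```python
from urllib.parse import urlencode

# Hypothetical query: where is the Louvre, according to DBpedia?
QUERY = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?place WHERE {
  <http://dbpedia.org/resource/Louvre> dbo:location ?place .
}
"""

def endpoint_url(query, endpoint="http://dbpedia.org/sparql"):
    """Build a GET URL asking a SPARQL endpoint for JSON results."""
    params = urlencode({"query": query,
                        "format": "application/sparql-results+json"})
    return f"{endpoint}?{params}"

url = endpoint_url(QUERY)
print(url)
```

Issuing that URL with any HTTP client returns SPARQL results as JSON; substituting other Wikipedia-derived resource URIs is how the geographic Flickr mashup described above gets its coordinates.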

See also: the W3C Semantic Web Activity

Data Points: Service-Driven Apps With Silverlight 2 And WCF
John Papa, MSDN Magazine

It is true that Silverlight 2 makes it easy to build rich internet applications (RIAs) with tremendous graphical prowess. However, it is also true that Silverlight 2 makes it easy to build highly functional line-of-business (LOB) applications. Silverlight 2 supports a subset of the powerful XAML-based data binding that Windows Presentation Foundation (WPF) enables. The XAML binding markup extensions in Silverlight 2 simplify the process of binding entities to Silverlight controls. Because they run completely on the client computer, Silverlight applications are isolated from the entities that are managed on the server. Therefore service-based communication through technologies such as RSS, Representational State Transfer (REST), and Windows Communication Foundation (WCF) must be available. Fortunately, Silverlight 2 supports interaction with these and other communication pathways, enabling Silverlight applications to interact seamlessly with back-end LOB applications. In this article I demonstrate how to build a Silverlight 2 user interface that communicates through WCF to interact with business entities and a database. The business logic, entity model, and data-mapping code can be consumed by any presentation tier. I create WCF services to be consumed by a Silverlight 2 app and set up the server hosting the WCF services to allow cross-domain invocation. The Silverlight 2 sample application is made up of a handful of user controls and styles. The presentation layer communicates with the server using asynchronous calls through WCF. It uses a WCF service reference to allow the Silverlight application to communicate with the service according to the service's operation contracts and data contracts... I began building the sample app with the lower layers, starting with the WCF service itself. You can build a WCF service that can communicate with a Silverlight app by creating a new WCF project in Visual Studio. 
A standard WCF service can be invoked by a Silverlight app as long as the Silverlight app has a binding of type basicHttpBinding. You must either make sure you change the default binding of the WCF service from wsHttpBinding to basicHttpBinding or create a new binding of type basicHttpBinding. For example, the web.config file of the sample WCF service host app contains the following XML that defines the service configuration... Silverlight 2 data-binding features are simple to implement using the declarative binding syntax shown in the sample application. The bindings listen for the PropertyChanged event so they can update their targets. In this column, I show how easy it is to implement binding through Silverlight, how to communicate with WCF services to interact with business objects and a database, and how to define a ClientAccessPolicy.xml file to allow and restrict remote communications.
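The article's own web.config listing is not reproduced in this digest, but the basicHttpBinding endpoint it describes generally takes the shape below. The service and contract names are placeholders, not those of the sample application.

```xml
<!-- Hypothetical web.config fragment: "SampleApp" names are
     placeholders for the article's actual service and contract. -->
<system.serviceModel>
  <services>
    <service name="SampleApp.ProductService">
      <endpoint address=""
                binding="basicHttpBinding"
                contract="SampleApp.IProductService" />
    </service>
  </services>
</system.serviceModel>
```

The key detail for Silverlight 2 clients is the `binding="basicHttpBinding"` attribute, since Silverlight's networking stack cannot negotiate the WS-* protocols that wsHttpBinding requires.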

SSO Summit Session: OAuth and WS-Trust
Eve Maler, Pushing String Blog

Finally, here are the additional notes I took on the OAuth/WS-Trust session Ashish Jain moderated at the SSO Summit... we discussed use cases for having a security token service in its most basic form. There are 'syntactic' reasons to need to exchange tokens: (1) Going from a proprietary token format to a standard one (e.g., Kerberos to SAML); (2) Going from one standard token format to another (e.g., SAML1.1 to SAML2); (3) Going from one proprietary token format to another. The participants considered this pretty much a necessary evil for integration purposes—a tactical need that is likely to subside over time as standard token formats stabilize, converge, etc. We saw both internal and cross-domain uses for this, but identified today's WS-Trust sweet spot as being within enterprises where multiple token formats are (still) in use. Then there are semantic reasons to exchange tokens. For example, 'identity oracle' use cases might have a need for this -- handing out a cooked/computed assertion that someone is 'over 25' rather than sharing their actual date of birth. There are as many unique use cases here as one can imagine. I noted that Liberty ID-WSF has a few of these baked into services that it has defined, but they don't currently use WS-Trust. A group is taking the first steps in a rapprochement here... [Note: Eve references the "Liberty Web Services Harmonization SIG", chartered "to harmonize ID-WSF and WS-Trust. The goal of this SIG is to expediently identify and document the use cases that drive this harmonization need, and will serve to prioritize the technical work that will ultimately be done in another forum(s). Another key goal of this SIG is to come to a consensus on where the technical work will take place that meets the industry harmonization requirements as captured in the initial slate of approved use cases... 
This SIG is designed to become an active public discussion forum for the collection, development, and analysis of use cases that are met by ID-WSF leveraging WS-Trust composability and functionality. Members will develop a detailed public roadmap that provides recommended next actions to the current stewards of the ID-WSF and WS-Trust standards (i.e. the Liberty Alliance Technology Expert Group and OASIS WSSX TC, respectively) for the technical work required, and will foster awareness of and participation in this harmonization effort from the broadest industry cross-section possible..."]
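The "identity oracle" idea discussed in the session — returning a cooked, computed assertion such as "over 25" instead of the raw date of birth — reduces to a tiny computation at the token service. The sketch below shows the logic only; the claim dictionary is an illustrative stand-in, not a real WS-Trust or SAML token format.

```python
from datetime import date

def over_25_assertion(date_of_birth, today=None):
    """Return a minimal 'identity oracle' style claim: whether the
    subject is over 25, without disclosing the date of birth itself.
    The claim shape here is illustrative, not a standard token."""
    today = today or date.today()
    # Subtract one if this year's birthday has not yet occurred.
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day))
    return {"claim": "over-25", "value": age >= 25}

print(over_25_assertion(date(1980, 5, 1), today=date(2008, 8, 28)))
# prints {'claim': 'over-25', 'value': True}
```

In a real deployment the relying party would receive only the signed boolean claim, which is precisely the privacy benefit that motivates the semantic (as opposed to merely syntactic) token-exchange use cases above.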

See also: the Harmonization SIG

New Direction for 'JavaScript 2'
Paul Krill, InfoWorld

Standardization efforts for the next version of JavaScript have taken a sharp turn this month, with some key changes in the Web scripting technology's direction. JavaScript creator Brendan Eich, CTO of Mozilla, has helped forge a consensus on how to proceed with the direction for JavaScript's improvements. The biggest change in JavaScript 2's direction is that the ECMAScript 4 project has been dropped. That change resolves a long-simmering debate as to whether ECMAScript 3.1 or ECMAScript 4 should be the basis of JavaScript 2. This decision at the ECMA International standards group overseeing the JavaScript standard unites the ECMA International Technical Committee 39, including Eich, with Google and Microsoft around the "Harmony" road map. The "Harmony" road map starts with an effort to finalize ECMAScript 3.1, essentially a rationalization of the current version, and produce two interoperable implementations by spring 2009. "I think you could characterize 3.1 as a maintenance release," says John Neumann, chair of the technical committee. The ECMAScript 3.1 effort will formalize bug fixes but also standardize across all implementations some of the improvements made in the field, Neumann says. That's key, so applications written for one browser will work in another. After the ECMAScript 3.1 effort, work will then proceed on a more significant ECMAScript successor dubbed Harmony. Plans for both ECMAScript 3.1 and Harmony call for providing tools to help developers more easily implement security. That plan will require the technical committee to codify security practices; the committee plans to meet this week to discuss security.

See also: the Ecma announcement

