XML Daily Newslink. Thursday, 18 October 2007

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Primeton http://www.primeton.com



Revised Civic Location Format for PIDF-LO
Martin Thomson and James Winterbottom (eds), IETF Internet Draft

Members of the IETF Geographic Location/Privacy (GEOPRIV) Working Group have released an updated version of "Revised Civic Location Format for PIDF-LO." The work was produced within the IETF Real-time Applications and Infrastructure Area. RFC 4119 "A Presence-based GEOPRIV Location Object Format" defines a location object which extends the XML-based Presence Information Data Format (PIDF), which was designed for communicating privacy-sensitive presence information and which has similar properties. RFC 4776 "Dynamic Host Configuration Protocol (DHCPv4 and DHCPv6) Option for Civic Addresses Configuration Information" further defines information about the country, administrative units such as states, provinces, and cities, as well as street addresses, postal community names, and building information. The option allows multiple renditions of the same address in different scripts and languages. This document ("Revised Civic Location Format for PIDF-LO") augments the GEOPRIV civic form to include the additional civic parameters captured in RFC 4776. The document also introduces a hierarchical structure for thoroughfare (road) identification, which is employed in some countries. New elements are defined to allow for even more precision in specifying a civic location. The XML schema (Section 4, 'Civic Address Schema') defined for civic addresses allows for the addition of the "xml:lang" attribute to all elements except "country" and "PLC", which both contain language-neutral values. The IETF GEOPRIV Working Group was chartered to assess the authorization, integrity, and privacy requirements that must be met in order to transfer [location] information, or to authorize the release or representation of such information through an agent. As more and more resources become available on the Internet, some applications need to acquire geographic location information about certain resources or entities. These applications include navigation, emergency services, management of equipment in the field, and other location-based services. But while the formatting and transfer of such information is in some sense a straightforward process, the implications of doing so, especially with regard to privacy and security, are [underspecified]. Also in scope: authorization of requestors and responders; authorization of proxies (for instance, the ability to authorize a carrier to reveal what timezone one is in, but not what city); and an approach to the taxonomy of requestors, as well as to the resolution or precision of the information given to them.
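
For illustration, a civic address in the revised format might look like the following fragment. The element names follow the revised civic address schema; the address values themselves are invented for this example:

    <civicAddress xml:lang="en-AU"
        xmlns="urn:ietf:params:xml:ns:pidf:geopriv10:civicAddr">
      <country>AU</country>     <!-- country (language-neutral value) -->
      <A1>NSW</A1>              <!-- national subdivision (state) -->
      <A3>Wollongong</A3>       <!-- city -->
      <RD>Flinders</RD>         <!-- thoroughfare (road) name -->
      <STS>Street</STS>         <!-- street suffix -->
      <HNO>2</HNO>              <!-- house number -->
      <PC>2500</PC>             <!-- postal code -->
      <PLC>store</PLC>          <!-- place type (language-neutral value) -->
    </civicAddress>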

See also: the earlier story


Update XML in DB2 9.5
Matthias Nicola and Uttam Jain, IBM developerWorks

This article discusses the W3C "XQuery Update Facility" specification in the context of IBM DB2 9.5. The XQuery Update Facility extends the XML Query language, XQuery, with expressions that can be used to make persistent changes to instances of the XQuery 1.0 and XPath 2.0 Data Model (XDM). It provides facilities to perform any or all of the following operations on an XDM instance: insertion of a node, deletion of a node, modification of a node by changing some of its properties while preserving its identity, and creation of a modified copy of a node with a new identity. One of the most significant new features in IBM DB2 9.5 for Linux, UNIX, and Windows is the XML update functionality. The previous version, DB2 9, introduced pureXML support for storing and indexing XML data and querying it with the SQL/XML and XQuery languages, but modifications to an XML document were performed outside of the database server, followed by an update of the full document in DB2. DB2 9.5 now introduces the XQuery Update Facility, a standardized extension to XQuery that allows you to modify, insert, or delete individual elements and attributes within an XML document. This makes updating XML data easier and provides higher performance. When DB2 9.5 executes the UPDATE statement, it locates the relevant document(s) and modifies the specified elements or attributes. This happens within the DB2 storage layer; that is, the document stays in DB2's internal hierarchical XML format the entire time, without any parsing or serialization. Concurrency control and logging happen at the level of full documents. Overall, this new update process can often be 2x to 4x faster than the [DB2 9 pureXML] process. This article describes how to perform such XML updates with XQuery transform expressions. You'll see how to embed a transform in UPDATE statements to permanently change data on disk, and in queries, to modify XML data "on the fly" while reading it out, without permanently changing it. The latter can be useful if applications need to receive an XML format that differs from the one in the database.
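
As a minimal sketch of the technique the article covers (the table, column, and element names below are hypothetical), an SQL UPDATE statement can embed an XQuery transform expression through XMLQUERY to change a single element value inside a stored document:

    UPDATE customers
      SET info = XMLQUERY(
        'copy $new := $d
         modify do replace value of $new/customerinfo/phone with "555-0123"
         return $new'
        PASSING info AS "d")
      WHERE id = 1001;

The same transform expression can be placed in a SELECT statement instead, rewriting the document "on the fly" as it is read out while leaving the stored copy untouched.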

See also: IBM Systems Journal


Semantic Web Services, Part 1
David Martin and John Domingue, IEEE Intelligent Systems

Semantic Web services (SWS) has been a vigorous technology research area for about six years. A great deal of innovative work has been done, and a great deal remains. Several large research initiatives have been producing substantial bodies of technology, which are gradually maturing. SOA vendors are looking seriously at semantic technologies and have made initial commitments to supporting selected approaches. In the world of standards, numerous activities have reflected the strong interest in this work. Perhaps the most visible of these is SAWSDL (Semantic Annotations for WSDL and XML Schema). SAWSDL recently achieved Recommendation status at the World Wide Web Consortium. SAWSDL's completion provides a fitting opportunity to reflect on the state of the art and practice in SWS—past, present, and future. This two-part installment of 'Trends & Controversies' discusses what has been accomplished in SWS, what value SWS can ultimately provide, and where we can go from here to reap these technologies' benefits. The essays in this issue effectively define service technology needs from a long-term industry perspective. Brodie starts by recognizing that, although industry has embraced services as the way forward on some of its most pressing problems, SOA is a framework for integration rather than the solution for integration. He outlines the contributions that are needed from semantic technologies and the implications for computing beyond services. Leymann emphasizes the broad scope of service-related technical requirements that must be addressed before SWS can effectively meet businesses' IT needs and semantically enabled SOA can be regarded as an enterprise solution rather than a mere packaging of applications. He argues that a great deal remains to be done in several important areas.
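
For readers who have not seen SAWSDL, the mechanism is deliberately lightweight: a modelReference attribute points from a WSDL or XML Schema construct to a concept in an external semantic model. A sketch (the ontology URI here is hypothetical):

    <xs:element name="OrderRequest"
        xmlns:xs="http://www.w3.org/2001/XMLSchema"
        xmlns:sawsdl="http://www.w3.org/ns/sawsdl"
        sawsdl:modelReference="http://example.org/ontology#PurchaseOrder"/>

Notably, SAWSDL says nothing about the ontology language used at the far end of that URI; the annotation hook is standardized while the semantics remain an open choice.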

See also: W3C SAWSDL


Semantic Web Visions: A Tale of Two Studies
Seth Grimes, Intelligent Enterprise Weblog

Professor Jorge Cardoso of the University of Madeira, Portugal, has written a very interesting paper titled "The Semantic Web Vision: Where are We?" Cardoso defines the Semantic Web as "a machine-readable World Wide Web" and he notes "a significant evolution of standards as improvements and innovations allow the delivery of more complex, more sophisticated, and more far-reaching semantic applications." Cardoso posted to a variety of technical e-mail lists to solicit survey responses and sent 40 personal invitations. Two-thirds of the 627 responses came from academia and 18% from industry, with 16% of respondents working in both academia and industry. He asked survey participants to report their use of ontology editors, ontology languages, and reasoning engines (software applications that derive new facts or associations from existing information). Refer to his paper for the detailed findings. Over 50% of respondents reported using ontologies for either or both of two purposes: to share common understanding of the structure of information among people or software agents (69.9%) and to enable reuse of domain knowledge (56.3%). These are knowledge management functions, stepping-stones on the path to the vision of autonomous software agents negotiating the Web that Tim Berners-Lee first articulated over ten years ago. Only 12.4% of answers indicated use of ontologies for purposes that are, perhaps, closer to actualization of that vision, for "code generation, data integration, data publication and exchange, document annotation, information retrieval, search, reasoning, annotating experiments, building common vocabularies, Web service discovery or mediation, and enabling interoperability." Nonetheless, Cardoso concludes that "70% of people working on the Semantic Web are committed to deploying real-world systems that will go into production in less than 2 years."

See also: W3C Semantic Web


Knowledge Services on the Semantic Web
Gregoris Mentzas, Kostas Kafentzis, Panos Georgolios; CACM Preprint

In this article we present a Semantic Web-enabled architecture for trading knowledge assets. The most suitable environment for technologically supporting Web-enabled knowledge provision services is the use of Semantic Web services. In this area, we should note the recent work of the Semantic Annotations for WSDL (SAWSDL) Working Group of the W3C, whose objective is to develop a mechanism to enable semantic annotation of Web services descriptions. In our work we developed multifaceted ontological structures in order to define the necessary modeling primitives for describing knowledge provision services, which go beyond common Web services such as flight booking or book selling. The knowledge service utilizes the content and context ontology for a twofold purpose: to discover knowledge objects within a collection, and to be discovered as a service, that is, to determine its identity. We have specified an enhanced Universal Description, Discovery, and Integration (UDDI) platform known as k-UDDI, which enables the discovery, negotiation, and invocation of knowledge services through the incorporation of reference ontologies that semantically enrich the Web services infrastructure. The k-UDDI holds all reference ontologies that allow a common understanding of services, facilitating semantically enhanced service discovery, the handling of IPR and business-specific issues, and, finally, negotiation processes that generate sound contracts. Knowledge service discovery is provided by the discovery service of the registry, which is exposed via a Web service interface. As knowledge services will be traded, mechanisms are needed to support negotiation and contracting tasks. We make use of our negotiation ontology and develop a flexible negotiation mechanism that enables bargaining between the service provider and requester concerning the terms and conditions of use of a knowledge service. [Also published in CACM 50/10 (October 2007), 53-58.]

See also: UDDI references


The Search Engine Unfriendliness of Web 2.0
Stephan Spencer, SearchEngineLand.com

Wouldn't it be great if all those whiz-bang Web 2.0 interactive elements based on AJAX (Asynchronous JavaScript and XML) and Flash, such as widgets, gadgets, and Google Maps mashups, were search engine friendly? Unfortunately, that's not the case. In fact, these technologies are inherently unfriendly to search engine spiders. So, if you intend to harness Web 2.0 technologies for wider syndication, increased conversion, improved usability, and greater customer engagement, you'd better read on, or you'll end up missing the boat when it comes to better search engine rankings. When it comes to AJAX and Flash, the onus is on you to render them search engine friendly; the major search engines just can't cope with these Web 2.0 technologies very well at all. Some search engines, including Google, have rudimentary means of extracting content and links from Flash. Nonetheless, any content or navigation embedded within Flash will, at best, rank poorly in comparison to a static, HTML-based counterpart and, at worst, not even make it into the search engine's index. Google's view on Flash is that it doesn't provide a user-friendly experience: Flash is wholly inaccessible to the vision-impaired, unrenderable on devices such as mobile phones and PDAs, and can't be accessed without broadband connectivity. In particular, Google frowns on navigational elements presented exclusively in Flash. Given this stance, Google isn't likely to make big improvements in how it crawls, indexes, and ranks Flash files anytime soon. So it's in your hands either to replace those Flash elements with a more accessible alternative like CSS/DHTML or to employ a Web design approach known as "progressive enhancement." AJAX poses problems for spiders similar to those of Flash, because AJAX also relies on JavaScript, and search engine spiders can't execute JavaScript commands. AJAX can be used to pull data seamlessly in the background onto an already loaded Web page, sparing the user the "click-and-wait" frustrations associated with more conventional Web sites, but the additional content that's pulled in via AJAX is invisible to the spiders unless it's preloaded into the page's HTML and simply hidden from the user via CSS. Here, progressive enhancement renders a non-JavaScript version of the AJAX application for spiders and JavaScript-incapable browsers. A low-tech alternative to progressive enhancement is to place an HTML version of your AJAX application within noscript tags.
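
A minimal sketch of progressive enhancement for AJAX content follows (the URLs and element IDs are invented for this example). The link resolves to a real HTML page for spiders and JavaScript-less browsers, while script-capable browsers intercept the click and fetch a fragment instead. For brevity this uses the native XMLHttpRequest object and omits the ActiveX fallback that older Internet Explorer versions require:

    <a href="/products/widgets.html" id="widgets-link">Widgets</a>
    <div id="panel">Select a category above.</div>

    <script type="text/javascript">
      // Enhance the plain link: load the fragment in place and
      // cancel the normal navigation. Spiders never take this path;
      // they simply follow the href to the static page.
      document.getElementById("widgets-link").onclick = function () {
        var xhr = new XMLHttpRequest();
        xhr.onreadystatechange = function () {
          if (xhr.readyState === 4 && xhr.status === 200) {
            document.getElementById("panel").innerHTML = xhr.responseText;
          }
        };
        xhr.open("GET", "/fragments/widgets.html", true);
        xhr.send(null);
        return false; // suppress the default link behavior
      };
    </script>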


Why Microsoft Should Not Support SCA
David Chappell, Blog

Will Microsoft support Service Component Architecture (SCA)? It seems unlikely... First, it's important to understand that SCA is purely about portability—it has nothing to do with interoperability. To connect applications across vendor boundaries, SCA relies on standard Web services, adding nothing extra. This is an important point, but it's often lost (or misunderstood) in SCA discussions. Because some of SCA's supporters describe it as a standard for SOA, people assume it somehow enhances interoperability between products from different vendors. This just isn't true, and so Microsoft not supporting SCA will in no way affect anyone's ability to connect applications running on different vendor platforms. But what about portability? Just as the various Java EE specs have allowed some portability of code and developer skills, SCA promises the same thing. Wouldn't Microsoft supporting SCA help here? The answer is yes, but only a little. To explain why, it's useful to look separately at the two main things SCA defines: programming models for creating components in various languages and an XML-based language for defining composites from groups of these components... While some SCA skills portability will occur—at least everybody will be describing components and composites using the same terms—I'm doubtful that SCA will do much to help move applications from one vendor's SCA product to another. Put another way, don't look to SCA to play a big role in reducing vendor lock-in...
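
To make the second of those two things concrete: an SCA composite is an XML file that wires components together. A hypothetical example using the SCA 1.0 (OSOA) namespace:

    <composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
               name="OrderProcessing">
      <component name="OrderService">
        <implementation.java class="example.OrderServiceImpl"/>
        <!-- wire this component's reference to the component below -->
        <reference name="billing" target="BillingService"/>
      </component>
      <component name="BillingService">
        <implementation.java class="example.BillingServiceImpl"/>
      </component>
    </composite>

Everything in this wiring is local to one SCA runtime; the moment a reference must reach a different vendor's platform, it goes over an ordinary Web service binding, which is Chappell's point about portability versus interoperability.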

See also: the OASIS SCA TCs


Sponsors

XML Daily Newslink and Cover Pages are sponsored by:

BEA Systems, Inc. http://www.bea.com
EDS http://www.eds.com
IBM Corporation http://www.ibm.com
Primeton http://www.primeton.com
SAP AG http://www.sap.com
Sun Microsystems, Inc. http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2007-10-18.html
Robin Cover, Editor: robin@oasis-open.org