The OASIS Cover Pages: The Online Resource for Markup Language Technologies
XML Daily Newslink. Tuesday, 09 September 2008

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Sun Microsystems, Inc. http://sun.com



Services Mashups: The New Generation of Web Applications
Djamal Benslimane et al. (eds), IEEE Internet Computing

The Internet and related technologies have created an interconnected world in which we can exchange information easily, process tasks collaboratively, and form communities among users with similar interests to achieve efficiency and improve performance. Use of lighter-weight approaches to services, especially for Web applications, is increasing; here, Web APIs and RESTful (Representational State Transfer) services reign supreme... Recently, in the context of the Web, the mashup concept has emerged: a way to create new Web applications by combining existing Web resources, using their data and Web APIs. Researchers have developed a huge number of such Web 2.0 applications. Mashups are about information sharing and aggregation to support content publishing for a new generation of Web applications. By extension, service mashups aim to design and develop novel and modern Web applications based on easy-to-accomplish end-user service compositions. Combining Web service technologies with fresh content, collaborative approaches (such as Web 2.0 technologies, tags, and microformats), and possibly Web data management and semantic technologies (RSS, RDFa, Gleaning Resource Descriptions from Dialects of Languages [GRDDL], and the SPARQL Protocol and RDF Query Language) is an exciting challenge for both academic and industrial researchers building a new generation of Web-based applications. Researchers have created various mashup tools and platforms that let developers and end users access and compose the data that Web applications provide. IBM's QEDWiki, Yahoo! Pipes, Google Mashup Editor, and Microsoft's Popfly are well-known examples of mashup platforms that users have widely adopted. Yet these platforms and associated tools represent only early and limited sets of capabilities that are sure to be followed by more powerful and flexible alternatives... Key issues must be addressed to improve the sharing (registration and publication), finding (search and discovery), reusing (invocation), and integrating (mediation and composition) of services... The first key challenge is semantic heterogeneity. Compared to data, services can present a broader form of heterogeneity, and correspondingly the Web services research community has identified a broader form of semantics: data (I/O), functional (behavioral), nonfunctional (quality of service, policy), and execution (runtime, infrastructure, exceptions). Several research projects have examined semantics for traditional (WSDL or SOAP) Web services to help address heterogeneity and mediation challenges, and the community took a step toward supporting semantics for Web services when it adopted Semantic Annotations for WSDL (SAWSDL) as a W3C Recommendation in August 2007. Now, attention has shifted to using semantics for community-created content, as with Semantic MediaWiki, and for Web APIs and RESTful services, as with hRESTS, SA-REST (Semantic Annotation of RESTful Services), and smart mashups. We believe that existing mashup approaches and tools must move one step further and use semantic approaches to deal with service interoperability and integration (including mediatability). To do so, we must keep an eye on how new solutions might be built upon existing Semantic Web technologies, using Web 2.0 and Semantic Web approaches and technologies that complement each other.
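
To make the service-mashup pattern concrete, here is a minimal sketch in Python of the aggregation step that platforms such as Yahoo! Pipes automate. The endpoints, field names, and data are invented for illustration and are not services named in the article; a real mashup would fetch the JSON responses with urllib.request.urlopen.

    import json

    # Hypothetical response from a listings API, e.g.
    # http://api.example.com/listings?city=nyc (invented endpoint).
    LISTINGS = ('[{"zip": "10001", "addr": "10 Main St", "rent": 2400},'
                ' {"zip": "10003", "addr": "99 Elm Ave", "rent": 1950}]')
    # Hypothetical response from a neighborhood-ratings API.
    RATINGS = '[{"zip": "10001", "score": 8.5}, {"zip": "10003", "score": 7.1}]'

    def mashup(listings_json, ratings_json):
        """Join two services' data on a shared key: the aggregation step."""
        ratings = {r["zip"]: r["score"] for r in json.loads(ratings_json)}
        return [dict(listing, rating=ratings.get(listing["zip"]))
                for listing in json.loads(listings_json)]

    for row in mashup(LISTINGS, RATINGS):
        print(row)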

See also: Semantic Annotations for WSDL and XML Schema (SAWSDL)


W3C XML Schema Definition Language (XSD): Component Designators
Mary Holstege and Asir S. Vedamuthu (eds), W3C Technical Report

Members of the W3C XML Schema Working Group have published "W3C XML Schema Definition Language (XSD): Component Designators," updating an earlier draft of 2005-03-29. This specification defines a scheme for identifying XML Schema components as specified by W3C "XML Schema Part 1: Structures" and "XML Schema Part 2: Datatypes." This version incorporates all Working Group decisions through 2008-07-25; some twenty-four changes made since the last public Working Draft are presented in the Status section. The document has been reviewed by the XML Schema Working Group, which agreed to its publication as a Working Draft; comments on the document should be made in W3C's public installation of Bugzilla. Part 1 of the W3C XML Schema Definition Language (XSD) Recommendation defines schema components, and its Section 2.2 divides the inventory of schema components into three classes: (1) primary components: simple and complex type definitions, attribute declarations, and element declarations; (2) secondary components: attribute and model group definitions, identity-constraint definitions, and notation declarations; (3) "helper" components: annotations, model groups, particles, wildcards, and attribute uses. However, a QName (prefix:localname) is not sufficient to the task of designating any schema component. Obtaining a useful system of naming XML Schema components poses several key technical challenges: designators must either include full expanded names or define namespace bindings; designators must distinguish named components in different symbol spaces from one another; designators must provide a means of distinguishing locally scoped element and attribute declarations with the same name; designators must provide for designatable unnamed components, such as anonymous type definitions, wildcards, and the schema description component; and designators must function in the face of redefinitions. The schema description schema component may represent the amalgamation of several distinct schema documents, or none at all. It may be associated with any number of target namespaces, including none. It may have been obtained for a particular schema assessment episode by dereferencing URIs given in schemaLocation attributes, by an association with the target namespace, or by some other application-specific means. In short, there are substantial technical challenges to defining a reliable designator for the schema description, particularly if that designator is expected to serve as a starting point for the other components encompassed by that schema. This specification therefore divides the problem of constructing schema component designators into two parts: defining a designator for an assembled schema, and defining a designator for a particular schema component or components, understood relative to a designated schema.
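
To see why a bare QName cannot designate every component, consider the small sketch below (the schema content is invented for illustration, not taken from the specification): two locally scoped element declarations share the name "title", and one type definition is anonymous, so none of the three can be picked out by prefix:localname alone.

    import xml.etree.ElementTree as ET

    XS = "http://www.w3.org/2001/XMLSchema"

    # Invented schema: two locally scoped "title" declarations in
    # different scopes, plus an anonymous complex type.
    SCHEMA = """
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:complexType name="bookType">
        <xs:sequence>
          <xs:element name="title" type="xs:string"/>
        </xs:sequence>
      </xs:complexType>
      <xs:complexType name="chapterType">
        <xs:sequence>
          <xs:element name="title">
            <xs:complexType>  <!-- anonymous: has no QName at all -->
              <xs:sequence>
                <xs:element name="subtitle" type="xs:string" minOccurs="0"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:sequence>
      </xs:complexType>
    </xs:schema>
    """

    root = ET.fromstring(SCHEMA)
    decls = [el for el in root.iter(f"{{{XS}}}element") if el.get("name") == "title"]
    # Prints 2: a designator must also encode the scope (bookType vs.
    # chapterType), which is what the component-designator scheme provides.
    print(len(decls), 'distinct element declarations share the QName "title"')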

See also: the W3C XML Activity


Overlay Data on Maps Using XSLT, KML, and the Google Maps API, Part 2: Transform and Use the Data
Jake Miles, IBM developerWorks

This two-part article series shows how to develop an application for a real estate brokerage that displays all available apartment listings as clickable Placemarks on Google Maps. Part 1 showed an application that collects the apartment listing information from the user, uses the Google Geocoder Web service to turn each street address into its geographical coordinates (longitude and latitude), and stores the coordinates in the database along with the address information. In this second installment, you use this data to produce a KML overlay document and display it in Google Maps and Google Earth. First, we use stored procedures to produce XML from MySQL. Then, with XSLT and a technique called Muenchian grouping, we transform the XML data into a KML document containing the overlay information: one Placemark for each apartment building. The pop-up balloon for each Placemark displays the available apartment listings in that building. Finally, we use the Google Maps API to display the KML overlay in a Google Map embedded within the Web site. These articles only scratch the surface of what's possible, especially since you can create 3D polylines and polygons in KML, not just Placemarks displaying textual information. You can leverage Google Maps, Google Earth, and the Google Geocoder on almost any Web site that deals with address information, and with XSLT you can transform any XML data that contains coordinate data into exciting KML overlays.
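
For readers who haven't met Muenchian grouping before, the sketch below shows the core of the technique, using lxml to run an XSLT 1.0 stylesheet from Python: an xsl:key indexes listings by their building's coordinates, and only the first listing in each key group emits a Placemark. The element names and data are invented for illustration; the article's actual stylesheet and schema differ.

    from lxml import etree

    # Invented listing data standing in for the XML exported from MySQL.
    LISTINGS = etree.XML("""
    <listings>
      <listing><addr>10 Main St</addr><apt>2A</apt><lat>40.71</lat><lng>-74.00</lng></listing>
      <listing><addr>10 Main St</addr><apt>3B</apt><lat>40.71</lat><lng>-74.00</lng></listing>
      <listing><addr>99 Elm Ave</addr><apt>1C</apt><lat>40.73</lat><lng>-73.99</lng></listing>
    </listings>""")

    STYLESHEET = etree.XML("""
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns="http://www.opengis.net/kml/2.2">
      <!-- Index every listing by its building's coordinates. -->
      <xsl:key name="by-building" match="listing" use="concat(lat, ',', lng)"/>
      <xsl:template match="/listings">
        <kml><Document>
          <!-- Muenchian grouping: keep only the first listing per key group. -->
          <xsl:for-each select="listing[generate-id() =
              generate-id(key('by-building', concat(lat, ',', lng))[1])]">
            <Placemark>
              <name><xsl:value-of select="addr"/></name>
              <!-- The balloon lists every apartment in this building. -->
              <description>
                <xsl:for-each select="key('by-building', concat(lat, ',', lng))">
                  <xsl:value-of select="concat('Apt ', apt, ' ')"/>
                </xsl:for-each>
              </description>
              <Point>
                <coordinates><xsl:value-of select="concat(lng, ',', lat)"/></coordinates>
              </Point>
            </Placemark>
          </xsl:for-each>
        </Document></kml>
      </xsl:template>
    </xsl:stylesheet>""")

    # Prints one Placemark per building: 10 Main St (Apt 2A, Apt 3B)
    # and 99 Elm Ave (Apt 1C).
    print(etree.XSLT(STYLESHEET)(LISTINGS))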

See also: article Part 1


Writing Functional Code with RDFa
Michael Hausenblas, DevX.com

RDFa is a current W3C Candidate Recommendation [as of 20-June-2008], and large organizations such as Yahoo! are implementing RDF in their search engine technologies. Now is an excellent time to learn how to use this set of XHTML extensions to produce RSS news feeds. News feeds in all their manifestations, both with and without RDF, have a long tradition as structured data on the Web. RDF (the data model) can represent relations between entities; for example, one relation between a human and a feed could be conceived as 'creator'. In contrast, HTML is about structure and presentation: the semantics of the conveyed data are not, and cannot be, represented. Presentation-oriented formats such as HTML are useful for users, but they typically require rather expensive back-end processing (along with heuristics). However, the RDF data model is useless without serialization syntaxes for exchanging representations online, and to date RDF/XML is the only official RDF serialization syntax available for developers to use... An RDF serialization must persist a graph structure (be it in XML or in another form), and because the order of a graph is irrelevant, interoperability issues can arise in use cases where order is important, for example in news feeds. When HTML is used as the container for RDF, however, structural elements can be preserved while defining and carrying arbitrary vocabularies (such as FOAF, SIOC, Dublin Core, DOAP, etc.) [...] Converting an RSS 1.0 feed into an XHTML+RDFa representation is likely of little value on its own. However, using such a feed in a reader (such as netvibes.com) would be a first step. SPARQLScript has a nice demo of how to create semantic mashups. Further, linking the content (or specific metadata) of a feed item to a dataset such as DBpedia or Geonames makes new use cases possible. From integrating other sources (for example, mapping hash tags from microblogs to DBpedia entities) to cross-site queries regarding a certain user, the possibilities are limited only by your imagination. The downside to using RDFa is that not every tool currently supports it. For example, to query the example feed, you would naturally use SPARQL; however, most SPARQL engines today require RDF/XML as input. Therefore, you'd need an RDFa processor such as the RDFa Distiller to convert the RDFa serialization into an RDF/XML serialization.
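
As a sketch of that final step, the snippet below takes RDF/XML as a distiller would emit it and queries it with rdflib, a Python RDF library shown here as one stand-in for the SPARQL engines the article mentions. The feed content is invented for illustration.

    import rdflib

    # Invented RDF/XML standing in for the output of an RDFa processor
    # such as the RDFa Distiller run over an XHTML+RDFa feed.
    RDF_XML = """<?xml version="1.0"?>
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:rss="http://purl.org/rss/1.0/"
             xmlns:dc="http://purl.org/dc/elements/1.1/">
      <rss:item rdf:about="http://example.org/feed/item1">
        <rss:title>First post</rss:title>
        <dc:creator>Alice</dc:creator>
      </rss:item>
      <rss:item rdf:about="http://example.org/feed/item2">
        <rss:title>Second post</rss:title>
        <dc:creator>Bob</dc:creator>
      </rss:item>
    </rdf:RDF>"""

    g = rdflib.Graph()
    g.parse(data=RDF_XML, format="xml")  # RDF/XML input, as most engines expect

    QUERY = """
    PREFIX rss: <http://purl.org/rss/1.0/>
    PREFIX dc:  <http://purl.org/dc/elements/1.1/>
    SELECT ?title ?creator
    WHERE { ?item rss:title ?title ; dc:creator ?creator . }
    """
    for title, creator in g.query(QUERY):
        print(f"{title} by {creator}")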

See also: the SPARQLer general purpose processor


How Will We Interact with the Web of Data?
Tom Heath, IEEE Internet Computing

This article discusses some ways in which our interaction with the Web of data might differ from how we interact with the established Web of documents, and what this might mean for both users and producers of Web content. Machine-readable data, given explicit semantics and published online, coupled with the ability to link data in distributed data sets, are the Semantic Web's key selling points. Together, these features allow aggregation and integration of heterogeneous data on an unprecedented scale, and machines will do the grunt work for us. However, without a human somewhere in this process to reap the rewards of these new capabilities, the endeavour is meaningless. Far from removing humans from the equation, a Web of machine-readable data (the Semantic Web; we also call it the 'Web of data') creates significant challenges and opportunities for human-computer interaction. To date, the Semantic Web community has mostly been busy developing the technical infrastructure to make the Web of data feasible in principle and publishing linked data sets to make it a reality. RDF is a W3C specification for making statements about things in machine-readable form. These statements each consist of a subject, predicate, and object, hence the name triples. In most cases, the subject of a triple is a uniform resource identifier (URI) that can identify anything the data publisher chooses, be that a person, a place, a document in the Web, an abstract concept—in short, anything... The document in which a particular RDF graph is published becomes primarily an indicator of provenance, rather than representing the definitive packaging of a certain slice of data or content. Of far greater relevance than the documents themselves are the things described in those documents: the people, places, and concepts. It's at the level of 'things' that browsers for the Web of data should operate. Providing simple browsers for RDF triples, and the documents in which they're published, is one option for enabling people to interact with this information space. The one-page-at-a-time style of browsing, which we know well from the Web of documents, would make nothing of the potential we now have for integrated views of data assembled from numerous locations. So, Semantic Web browsers must not simply echo the underlying representation of the data. Instead, they must treat 'things,' in the broadest sense, as first-class citizens of the interface.
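
To make the triple model concrete, here is a minimal sketch using rdflib in Python; the person, URIs, and FOAF properties are illustrative choices, not examples from the article.

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    FOAF = Namespace("http://xmlns.com/foaf/0.1/")

    g = Graph()
    alice = URIRef("http://example.org/people/alice")  # subject: a URI naming a person

    g.add((alice, RDF.type, FOAF.Person))              # each add() is one (s, p, o) triple
    g.add((alice, FOAF.name, Literal("Alice")))
    g.add((alice, FOAF.based_near, URIRef("http://example.org/places/dublin")))

    # A Web-of-data browser should present "Alice" (the thing) rather
    # than the document these triples happen to be published in.
    for s, p, o in g:
        print(s, p, o)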


OpenOffice.org 3 Edges Towards Release
Serdar Yegulalp, InformationWeek

OpenOffice.org's first release candidate for version 3.0 hit the tubes yesterday. It's an evolutionary, rather than revolutionary, edition of the open source office suite. It isn't to OO.o 2 what, say, Office 2007 was to Office 2003, but it's solid and, most importantly, noticeably faster. If there was one complaint I heard about OO.o more than any other, it was "It's too slow!" On Windows, RC1 of version 3 opens all of its apps pretty snappily even without the Quickstarter running in the system tray. A well-engineered program shouldn't need a crutch like the Quickstarter in the first place, so I disabled it to see how well things worked without it. Another major addition is native (if not feature-complete) support for Microsoft Office macros, something enormously useful to people trying to make a jailbreak from Office who find they can't due to the need to support legacy macros. Financial houses and law firms seem to be the two groups most heavily dependent on Office VB macros, so maybe some success stories from their side of the fence will compel those with far less ambitious needs to make the jump, too. One of the big [advances] for me has been OO.o's native .PDF support. Version 3 adds the ability to import and edit .PDFs via a plugin, and a much more detailed and powerful .PDF export dialog. Normally I use a print driver to do .PDF export, but anything that exports natively within an app gets precedence. Version 3 also reads Office 2007's OOXML documents pretty transparently: none of the Word docs I opened with it gave me problems. But on the whole I'd rather convert to and use the standard OASIS document format to avoid any cross-compatibility issues. On that note, since OO.o 3 uses the most recent version of the OASIS document format (1.2), don't save anything as 1.2 if you intend to also open it in older versions of OO.o; you can force saving documents in the older version of the format through the menus.

See also: the web site


NYC's 911 System Upgraded to Accept Photos, Video
Steven Musil, CNET News.com

New York City is touting a new weapon in its war on crime: cell phone cameras. Tipsters in New York City can now send photos and video from computers and Web-enabled cell phones and PDAs to the city's 911 and non-emergency hot lines to report crimes and quality-of-life issues such as potholes, officials announced Tuesday. While many cities' emergency systems are equipped to accept text messages, this is believed to be the first system that is also able to process photos and video. When 911 callers tell police operators that photos or video related to their complaint are available, a detective with the New York Police Department's Real Time Crime Center will call back to receive the images. Depending on the case, the images may be shared with the public at large, with police officers on patrol, with individual detectives, or with other law enforcement agencies, according to city officials. The images may also be used to help in assessing and responding to emergencies... In preparation for the upgrade, more than 12,000 new computers were reportedly installed in precincts around the city, and police operators received special training on how to handle emergency calls that contain images or video. New York Mayor Michael Bloomberg praised the technology's ability to deliver information instantaneously to the city's 911 operators, who handle 11 million calls annually.


Application Lifecycle Management Meets Model-Driven Development
John Carrillo and Scott McKorkle, DDJ

The combination of ALM and MDD gives you the connected workflow you need to handle the development of even the most complex applications and systems. Application lifecycle management (ALM) has evolved into an ecosystem of integrated processes and domain technologies for the system and software development lifecycle. ALM establishes a framework that you can use to catalog and manage customer requirements, plan a portfolio of development projects to address those requirements, manage design, development, testing, and deployment, and manage change throughout the entire process. Model-driven development (MDD), on the other hand, lets you more accurately design, simulate, and validate the complex behavior of distributed, mission-critical systems. The model-driven process uses visual aids to accurately describe and define system objectives and solutions. Scientific and technical industries, such as aerospace, defense, and telecommunications, depend on MDD to increase the quality and efficiency of complex software and systems through modeling. The combination of ALM and MDD creates a rich environment of connected processes and interacting solutions that is proving invaluable for successful systems and software development projects. This is a welcome advance, given the state of today's complex development environments: a major challenge for many organizations is finding a way to integrate all aspects of the development lifecycle in an intuitive yet formal manner to deliver long-lasting, business-critical products, systems, and applications. Moreover, in combination, ALM and MDD provide substantial gains in the optimization of development lifecycle processes. MDD is a natural fit within the ALM framework: the integration of the two lets you optimize products and services and serves as a framework for the fast, accurate, and coordinated design and development of architectures, applications, and products... With MDD, traceability is established throughout development. Each feature can be traced back through the model to its originating requirement, while extraneous features (those "thrown in" by well-intentioned developers) are quickly exposed, eliminating the expense and bloat of unintended feature creep. You can also simulate and validate system behavior, which adds a whole new dimension to constructing complex applications like those based on service-oriented architectures (SOA).


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation: http://www.ibm.com
Oracle Corporation: http://www.oracle.com
Primeton: http://www.primeton.com
Sun Microsystems, Inc.: http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2008-09-09.html
Robin Cover, Editor: robin@oasis-open.org