XML Daily Newslink. Thursday, 04 September 2008

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Primeton http://www.primeton.com



W3C Proposed Recommendation for 'RDFa in XHTML: Syntax and Processing'
Ben Adida, Mark Birbeck (et al., eds), W3C Technical Report

W3C's Semantic Web Deployment Working Group and XHTML2 Working Group have published the Proposed Recommendation for "RDFa in XHTML: Syntax and Processing. A Collection of Attributes and Processing Rules for Extending XHTML to Support RDF." A companion "RDFa Implementation Report" describes implementations of the RDFa Syntax and Processing rules, that is, implementations that are able to parse an XHTML+RDFa document and generate an RDF graph according to the processing rules.

The modern Web is made up of an enormous number of documents that have been created using HTML. These documents contain significant amounts of structured data, which is largely unavailable to tools and applications. When publishers can express this data more completely, and when tools can read it, a new world of user functionality becomes available, letting users transfer structured data between applications and web sites, and allowing browsing applications to improve the user experience: an event on a web page can be directly imported into a user's desktop calendar; a license on a document can be detected so that users can be informed of their rights automatically; a photo's creator, camera setting information, resolution, location, and topic can be published as easily as the original photo itself, enabling structured search and sharing.

RDFa is a specification for attributes to express structured data in any markup language; this document specifies how to use RDFa with XHTML. The rendered, hypertext data of XHTML is reused by the RDFa markup, so that publishers don't need to repeat significant data in the document content. The underlying abstract representation is RDF, which lets publishers build their own vocabulary, extend others, and evolve their vocabulary with maximal interoperability over time. The expressed structure is closely tied to the data, so that rendered data can be copied and pasted along with its relevant structure. The rules for interpreting the data are generic, so that there is no need for different rules for different formats; this allows authors and publishers of data to define their own formats without having to update software, register formats via a central authority, or worry that two formats may interfere with each other.

RDFa shares some use cases with microformats. Whereas microformats specify both a syntax for embedding structured data into HTML documents and a vocabulary of specific terms for each microformat, RDFa specifies only a syntax and relies on independent specification of terms (often called vocabularies or taxonomies) by others. RDFa allows terms from multiple independently developed vocabularies to be freely intermixed and is designed such that the language can be parsed without knowledge of the specific term vocabulary being used. This document is a detailed syntax specification for RDFa, aimed at: (1) those looking to create an RDFa parser, and who therefore need a detailed description of the parsing rules; (2) those looking to recommend the use of RDFa within their organisation, and who would like to create some guidelines for their users; (3) anyone familiar with RDF who wants to understand more about what is happening 'under the hood' when an RDFa parser runs.
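
As a rough illustration of the attribute-to-triple mapping described above, the following Python sketch uses rdflib to build the single RDF triple that an RDFa processor would extract from a small XHTML fragment; the markup, document URI, and data are invented for this example:

    from rdflib import Graph, Literal, Namespace, URIRef

    # Hand-built sketch of the RDFa idea: attributes on rendered XHTML
    # map to RDF triples. Given markup such as
    #   <div about="/photos/sunset.jpg">
    #     <span property="dc:creator">Anna</span>
    #   </div>
    # an RDFa processor would add the triple below to the output graph.
    DC = Namespace("http://purl.org/dc/elements/1.1/")
    g = Graph()
    g.bind("dc", DC)
    g.add((URIRef("http://example.com/photos/sunset.jpg"),
           DC.creator,
           Literal("Anna")))
    print(g.serialize(format="turtle"))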

See also: the RDFa Implementation Report


NIST SmartGrid Domain Experts Workgroup
Toby Considine, AutomatedBuildings.com

This article presents notes from an August 5, 2008 meeting, taken by Toby Considine (Systems Specialist, Facility Services, University of North Carolina, Chapel Hill): "NIST (National Institute of Standards and Technology) started by making a strong claim for ownership in this area, citing Title XIII, 1305 of EISA 2007. NIST set out an aggressive agenda, including a preliminary report at GridWeek 2008 on 9/24 and a NIST workshop on developing standards at Grid Interop in Atlanta, November 11-13, 2008. NIST wants to have tight working relationships with the target SDOs (Standards Development Organizations) in place before 2009. NIST and the GridWise Architectural Council are working together to steer the standards work toward e-commerce and interactions with building operations and with the building occupants. The Council identified OASIS as the critical SDO for its e-commerce expertise, a view strongly seconded by the NIST secretary and by the domain leaders. A secondary interest was noted in relationships with the NBIMS (National Building Information Model Standard), and BuildingSmart was identified. NBIMS provides standards for describing building operations and energy models. NIST would like to fast-track standardization of building services to lie alongside energy models. There was some interest in and discussion of reaching out to FIATECH to accelerate operations/energy-use interactions... The goal of the SmartGrid standardization efforts is to design the information exchange and informational interoperability needed to enable healthy markets to emerge around energy use in buildings. Success was defined as enabling buildings to trade their energy. The group was in violent agreement that we needed to work on business-to-business interactions, and not on machine-to-machine interactions. Services inside the building would be coordinated by the business processes of the occupants. Grid messages would go to the business agent of the occupants. Interactions, including pricing and bidding, would be between the grid agents and the building agents..."

See also: XML and Web Services for Facilities Automation Systems


Common YANG Data Types for the YANG Data Modeling Language
Juergen Schoenwaelder (ed), IETF Internet Draft

Members of the IETF NETCONF Data Modeling Language (NETMOD) Working Group have published an initial -00 Internet Draft for "Common YANG Data Types." Contributors include Andy Bierman, Martin Bjorklund, Balazs Lengyel, David Partain, and Phil Shafer. YANG is a data modeling language used to model configuration and state data manipulated by the NETCONF protocol (RFC 4741), NETCONF remote procedure calls, and NETCONF notifications; the YANG specification describes the syntax and semantics of the language, how the data model defined in a YANG module is represented in XML, and how NETCONF operations are used to manipulate the data. The YANG language supports a small set of built-in data types and provides mechanisms to derive other types from the built-in types. This document introduces a collection of common data types derived from the built-in YANG data types. The definitions are organized in several YANG modules. The "yang-types" module contains generally useful data types. The "inet-types" module contains definitions that are relevant for the Internet protocol suite, while the "ieee-types" module contains definitions that are relevant for IEEE 802 protocols. These derived types are generally designed to be applicable for modeling all areas of management information. Appendix A, 'XSD Translations', presents XML Schema (XSD) translations of the types defined in this document; Appendix B, 'RelaxNG Translations', provides the RelaxNG translations for Core YANG Derived Types, Internet Specific Derived Types, and IEEE Specific Derived Types.
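
As a loose illustration of how a derived type narrows a built-in type, the Python sketch below mimics a pattern-restricted string typedef; the regular expression is a simplified approximation of an ipv4-address check, not the exact pattern from the draft's "inet-types" module:

    import re

    # Mirrors how a YANG typedef narrows the built-in "string" type with
    # a "pattern" restriction. The regex is a simplified approximation.
    IPV4_OCTET = r"(25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])"
    IPV4_ADDRESS = re.compile(rf"^{IPV4_OCTET}(\.{IPV4_OCTET}){{3}}$")

    def is_ipv4_address(value: str) -> bool:
        """Return True if value satisfies the (simplified) derived type."""
        return bool(IPV4_ADDRESS.match(value))

    assert is_ipv4_address("192.0.2.1")
    assert not is_ipv4_address("256.1.1.1")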

See also: the YANG Data Modeling Language for NETCONF


Google Geocoder Web Service: Overlay Data on Maps Using XSLT, KML, and the Google Maps API
Jake Miles, IBM developerWorks

Google Maps has become the ubiquitous map technology on the Web, allowing users to instantly bring up geographical maps and pan and zoom around them, including 360-degree views of the street at eye level. Google Earth provides a 3D photographic encyclopedia of the Earth, letting you pan and zoom an image of the Earth at various heights. With the Google Maps API, you can embed Google Maps into your own Web pages. Add KML, an XML language used to describe geographical information such as coordinates on the Earth, and you can overlay your own visual and textual data onto the maps. You can also import KML data into Google Earth and project your own 3D data onto the Earth as the user pans and zooms. In this two-part article series, you will combine KML with the Google Maps API and XSLT to create data overlays for display in Google Maps and Google Earth. You will create an example application for a real-estate brokerage that lets a broker enter listings for apartments through an HTML form, uses Google's Geocoder Web service to translate those addresses into longitudes and latitudes, and then creates KML overlays from the database of apartment listings. In Part 1, you build the first half of the application, which collects the apartment listing information from the user, uses the Google Geocoder Web service to turn the street address into its geographical coordinates (longitude and latitude), and stores those coordinates in the database along with the address information. In Part 2 of the series, you use stored procedures to produce XML data from a MySQL query, XSLT to transform that data into KML overlay data, and the Google Maps API to display the KML on a map embedded in the Web site...

In the example data for Nine Inch Nails' album "The Slip", rendered using Google Earth and KML, the height of each spike reflects the number of downloads recorded at that location, and was created (one must assume) by drawing a line in KML at the download's longitude and latitude, from altitude 0 to an altitude proportionate to the number of downloads at that location. One critical detail missing from this visualization is the ability to map customer addresses, or at least postal codes, to their geographical coordinates on the Earth, because all custom KML data is positioned on the Earth using longitude, latitude, and altitude coordinates. To solve this problem, Google recently made available the Google Geocoder Web service, which takes a street address and returns KML data describing the address to whatever accuracy is possible, including its latitude and longitude. Once you have these coordinates, you can overlay textual and visual data on 2D maps and the 3D globe as creatively as your imagination allows. [Note: KML is an international standard maintained by the Open Geospatial Consortium (OGC).]
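
To make the KML side of this concrete, here is a minimal Python sketch (standard library only) that turns one already-geocoded listing into a KML Placemark; the listing name and coordinates are invented for illustration, and note that KML orders coordinates as longitude,latitude[,altitude]:

    import xml.etree.ElementTree as ET

    KML_NS = "http://www.opengis.net/kml/2.2"

    def listing_to_kml(name: str, lon: float, lat: float) -> str:
        """Build a single KML Placemark for a geocoded listing."""
        ET.register_namespace("", KML_NS)
        kml = ET.Element(f"{{{KML_NS}}}kml")
        placemark = ET.SubElement(kml, f"{{{KML_NS}}}Placemark")
        ET.SubElement(placemark, f"{{{KML_NS}}}name").text = name
        point = ET.SubElement(placemark, f"{{{KML_NS}}}Point")
        # KML coordinates are longitude,latitude,altitude.
        ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = f"{lon},{lat},0"
        return ET.tostring(kml, encoding="unicode")

    # Invented example listing near Chapel Hill, NC.
    print(listing_to_kml("2BR apartment, Chapel Hill", -79.0558, 35.9132))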

See also: the Google Maps API


Business Process Semantics: An Opportunity for Convergence
Stephen Zisk, DM Review Magazine

Knowledge management professionals help businesses improve performance by formalizing the understanding of what the business is accomplishing, how well it is doing, and what is impeding the business both internally and externally. Knowledge management is a wide-ranging discipline, looking at the business in terms of requirements, data collection, and improvements. One unifying principle is the recognition that all businesses change and that improvement requires embracing change. This article examines two approaches to improving a business and discusses how these approaches—business process management (BPM) and semantic Web technology—may be used together to improve business agility in the face of change... A BPMS's focus on process often puts a prescriptive and sequential spin on data modeling. However, this is somewhat countered by the inclusion of business rules, with their focus on policy and their descriptive or declarative approach to business and process decisions and the data needed to make them...

Semantics is the study of meaning in communication. Semantic data models, or ontologies, are formal models of domains of interest and have been explored for more than 25 years as part of artificial intelligence and knowledge management, for computer reasoning, data classification and normalization, and linguistic and text analysis. The core standards for the semantic Web, managed as part of the World Wide Web Consortium (W3C) Semantic Web Activity, include the Resource Description Framework (RDF), which defines abstract statements and provides an XML-based implementation language, and the Web Ontology Language (OWL), which extends RDF to define ontologies. Together, OWL and RDF implement an object-relationship model allowing creation of a directed graph, a network of objects and relationships describing data. This model is potentially richer and more flexible than the traditional data models used to implement relational databases. BPMS is a maturing software sector with well-understood benefits and a good choice of software, but with just enough focus on data and information management to execute processes. Bringing semantic Web technology into a BPM practice offers rigorous and explicit data modeling and allows integrated rules engines to reason over, classify, and improve data quality in the BPMS...

Existing standards for BPM and rules are immature and do not play well with the more mature standard set for the semantic Web. Berners-Lee, in his Rule Interchange Format (RIF) announcement referenced previously, said: "A Rule Interchange Format will, for example, help businesses find new customers, doctors validate prescriptions and banks process loan applications." This is the promise of an advanced BPMS, and standards committees could help reduce silo standards. If enlightened vendors can deliver merged BPM and semantic Web capabilities with a strong integrated rules approach, businesses will adopt the solution as their next-generation data and process architecture.
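
As a small illustration of the directed-graph model the article contrasts with fixed relational schemas, the following Python sketch uses rdflib with an invented vocabulary (ex:Order, ex:placedBy, and so on are hypothetical terms, not from any standard):

    from rdflib import Graph, Literal, Namespace, RDF

    # Resources linked by named relationships form a directed graph.
    EX = Namespace("http://example.org/biz#")
    g = Graph()
    g.bind("ex", EX)

    g.add((EX.order42, RDF.type, EX.Order))
    g.add((EX.order42, EX.placedBy, EX.customer7))
    g.add((EX.order42, EX.amount, Literal(1250.00)))
    g.add((EX.customer7, EX.segment, Literal("enterprise")))

    # Any node can gain new relationships later without a schema
    # migration, which is the flexibility claimed over relational models.
    print(g.serialize(format="turtle"))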

See also: Standards for Business Process Modeling


OpenSocial Foundation Launches with Google, Yahoo, MySpace
Clint Boulton, eWEEK

Google, Yahoo, and MySpace have formally launched the OpenSocial Foundation to garner support for the OpenSocial data portability effort. The group's goal is to make sure that OpenSocial will remain open and free for developers and anyone else contributing to the specification. OpenSocial "provides a common set of APIs for building social applications that run across multiple websites. With standard JavaScript and HTML, developers can create apps that access a social network's friends and update feeds. This forum is the place for anyone interested in OpenSocial to post questions, share ideas, and participate in community discussion around the APIs... OpenSocial applications use Google's gadget architecture, but with extensions that provide programmatic access to social data within its container environment. Similar to gadgets, OpenSocial apps are hosted XML documents with HTML/JavaScript within their bodies. Social apps have most of the infrastructure of gadgets available to them, but with a few minor exceptions... One of the initial environments for social apps that use the OpenSocial APIs is Orkut. Other OpenSocial-enabled websites are expected to launch support for developers soon... OpenSocial provides a standard way for websites to expose their social graph and more. Seeing the activities of other people helps you stay up to date with your friends, and allows everything from resumes to videos to spread virally through the graph. OpenSocial also provides a way for application data to persist on a social networking site, as well as specifying the different ways that an application can be viewed within an OpenSocial container..." The new OpenSocial Foundation has selected five of the seven board members who will preside over the group's governance: Google's David Glazer, credited with leading the OpenSocial API efforts; hi5's Anil Dharni; Flixster's Joe Greenstein; MySpace's Allen Hurff; and Yahoo's Sam Pullara. In an unusual move, and one that underscores the open nature of the group, the board will also include two community representatives, who will be selected by participants of the OpenSocial Foundation in the coming weeks.
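
As a rough sketch of the style of call the OpenSocial RESTful API specification describes (fetching a user's friends as JSON), consider the Python fragment below; the host, user id, and response field names shown are placeholders, and real containers require OAuth-signed requests, which are omitted here:

    import json
    import urllib.request

    # Container-specific REST endpoint; "social.example.com" and the
    # user id are placeholders, not a real OpenSocial container.
    BASE = "https://social.example.com/rest"
    user_id = "12345"

    # Unauthenticated GET for illustration only; production calls must
    # carry OAuth credentials per the container's requirements.
    with urllib.request.urlopen(f"{BASE}/people/{user_id}/@friends") as resp:
        friends = json.load(resp)

    for entry in friends.get("entry", []):
        print(entry.get("displayName"))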

See also: the OpenSocial RESTful API Specification


Jeff Barr Discusses Amazon Web Services
Ryan Slobojan, InfoQ

In this interview from QCon London 2008, Amazon Web Services (AWS) Evangelist Jeff Barr discusses SimpleDB, S3, EC2, SQS, cloud computing, how the different Amazon services interact within an application, the origins of AWS, SimpleDB and Microsoft SQL Server Data Services, globalization of the AWS cloud, the March AWS outage, SimpleDB stored procedures, and converting between AMIs and VMware.

Excerpts: "For the last couple of years what we've been doing at Amazon is opening up our own computing infrastructure to outside developers. So the same very reliable, very scalable technologies that we use for our own applications are now available to developers for things like storage and messaging and computing and database storage as well. They are all open and accessible to developers on basically a pay-as-you-go basis... We have SimpleDB for structured storage, we have S3 for more block storage, EC2 is the compute cloud, SQS for messaging. We have the Flexible Payment Service for point-to-point money transfer, and DevPay is a way to take other services and put your own business model around those services... There is a common authentication service that goes across all the different services. So once you create your Amazon developer account, you use the same private key, public key mechanism to access those services. The services are running inside the same datacenter, so there is no charge for bandwidth between services inside the datacenter. So a good example is if you have data stored in S3, you're going to pull it over to EC2 for processing, do all that processing, send it back to S3—that bandwidth back and forth doesn't cost us, so we don't charge developers for that bandwidth. So then a common payment mechanism is another common aspect of the services, and then finally what happens is developers will then put these together...

"A very common architecture is to use the Simple Queuing Service as the messaging between different parts of a scalable app. So a very common one I talk about all the time is Podango and their podcast processing. They have a number of different functional units running on different EC2 instances, and each different kind of functional unit, be it transcoding or assembly or different kinds of processing, is driven by a separate queue, and there are one or more EC2 instances pulling off of each queue. If a queue is getting too busy, if it's taking too much time to do work, they can simply ramp up the number of EC2s working on a given queue, which makes it very easy to scale. They can be functional at a very, very low level of processing—if they only have a few podcasts per hour they can process that; if they get thousands or tens of thousands, the queues are going to get a little bit bigger, the system automatically senses how busy the queues are and then scales up in response..."
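
The queue-driven scaling pattern Barr describes can be sketched in Python with boto3 (today's AWS SDK for Python, which postdates this 2008 interview); the queue name, backlog threshold, and AMI id below are invented for illustration:

    import boto3

    sqs = boto3.client("sqs")
    queue_url = sqs.get_queue_url(QueueName="transcode-jobs")["QueueUrl"]

    # Check how deep the work backlog is.
    attrs = sqs.get_queue_attributes(
        QueueUrl=queue_url,
        AttributeNames=["ApproximateNumberOfMessages"],
    )
    backlog = int(attrs["Attributes"]["ApproximateNumberOfMessages"])

    # If the backlog grows past a threshold, start another EC2 worker;
    # real code would also scale back down and bound the instance count.
    if backlog > 100:
        ec2 = boto3.client("ec2")
        ec2.run_instances(
            ImageId="ami-0123456789abcdef0",  # placeholder worker AMI
            MinCount=1,
            MaxCount=1,
            InstanceType="t3.micro",
        )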

See also: Amazon Web Services


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation          http://www.ibm.com
Oracle Corporation       http://www.oracle.com
Primeton                 http://www.primeton.com
Sun Microsystems, Inc.   http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2008-09-04.html
Robin Cover, Editor: robin@oasis-open.org