XML Daily Newslink. Wednesday, 17 September 2008

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
IBM Corporation http://www.ibm.com



Understanding Devices Profile for Web Services, Web Services Discovery, and SOAP-over-UDP
Ram Jeyaraman (ed), Microsoft Contribution to OASIS WS-DD TC

An inaugural F2F meeting of the OASIS Web Services Discovery and Web Services Devices Profile (WS-DD) Technical Committee was held September 16-18, 2008 in Redmond, WA, hosted by Microsoft Corporation. In connection with this meeting, the base specifications have been contributed to the TC, along with a white paper and presentation on SOAP-over-UDP. Microsoft has contributed the paper "Understanding Devices Profile for Web Services, Web Services Discovery, and SOAP-over-UDP," whose primary goal is to increase understanding of the DPWS, WS-Discovery, and SOAP-over-UDP specifications, using relevant usage scenarios to illustrate and motivate their use.

From the document Introduction: "From the point of view of the DPWS, WS-Discovery, and SOAP-over-UDP specifications, a service is a Web Service. DPWS defines a special service, called a device, as a Web Service whose function is to participate in discovery and to describe other services available or hosted in the same physical device container. The type for this hosting service is 'wsdp:Device'. Services that are not DPWS devices may also participate in WS-Discovery...

DPWS is a profile of Web Services protocols consisting of a minimal set of implementation constraints to enable secure Web Service messaging, discovery, description, and eventing on resource-constrained endpoints. It defines an extensible metadata model for describing the characteristics of devices, and a metadata format that allows services of the 'wsdp:Device' type to describe their hosted services, such as printer and scanner services. It also defines a policy assertion that allows devices to indicate compliance with this profile, and it provides guidance on security.

WS-Discovery provides a lightweight, Web Services based dynamic discovery protocol to locate Web Services. It does not require specialized network intermediaries to aid discovery, and it is transport independent: it may be used over HTTP, UDP, or other transports. It defines a compact signature format for cryptographically signing WS-Discovery protocol messages sent over UDP transport. WS-Discovery allows discovery of services in ad hoc networks with a minimum of networking services (e.g., no DNS or directory services), and it leverages network services (e.g., DNS or directory services), where they exist, to reduce network traffic in managed networks. It allows for smooth transitions between ad hoc and managed networks...

SOAP-over-UDP is a binding of SOAP to UDP (User Datagram Protocol) transport, covering message patterns (one-way, request-response, and multicast), message encoding, a URI scheme, security considerations, message re-transmission behavior, and duplicate detection mechanisms. The binding describes how to transmit a SOAP message inside a UDP packet, including guidance on how to secure the SOAP message in spite of the UDP packet size limitation of 64 KB..."
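
The SOAP-over-UDP binding can be exercised with ordinary datagram sockets. Below is a minimal sketch in Python that multicasts a WS-Discovery Probe and listens briefly for ProbeMatches. It assumes the well-known IPv4 multicast address 239.255.255.250 and port 3702 and the April 2005 namespace URIs from the contributed specifications (the OASIS TC may change these), and it omits the binding's retransmission and duplicate-detection rules.

```python
# Minimal SOAP-over-UDP sketch: multicast a WS-Discovery Probe and
# listen briefly for ProbeMatches. Address/port and namespace URIs are
# those of the contributed (pre-OASIS) specifications.
import socket
import uuid

PROBE = """<?xml version="1.0" encoding="utf-8"?>
<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
            xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing"
            xmlns:d="http://schemas.xmlsoap.org/ws/2005/04/discovery">
  <s:Header>
    <a:Action>http://schemas.xmlsoap.org/ws/2005/04/discovery/Probe</a:Action>
    <a:MessageID>urn:uuid:{msgid}</a:MessageID>
    <a:To>urn:schemas-xmlsoap-org:ws:2005:04:discovery</a:To>
  </s:Header>
  <s:Body>
    <d:Probe/>  <!-- empty Probe: match any Target Service -->
  </s:Body>
</s:Envelope>"""

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3.0)  # best-effort transport; time out rather than block
msg = PROBE.format(msgid=uuid.uuid4()).encode("utf-8")
sock.sendto(msg, ("239.255.255.250", 3702))  # one UDP datagram per SOAP message

try:
    while True:
        data, addr = sock.recvfrom(65535)  # UDP datagrams top out near 64 KB
        print(addr, data[:120], b"...")
except socket.timeout:
    pass
```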

See also: the WS-DD TC posting


Discovery and HTTP: XRI, XRDS-Simple, OpenID, OAuth, and POWDER
Eran Hammer-Lahav, Blog

Discovery is the process by which machines find out information about web resources, enabling them to interact with previously unknown services. It centers on locating and retrieving the resource metadata and parsing it. The challenge is making this workflow consistent with the web architecture and the HTTP protocol, while at the same time addressing key scalability requirements and efficiencies. Put simply: a server is trying to interact with an unfamiliar resource (identified by a URL). The server must first find out where the resource's metadata resides, fetch it, parse the metadata, and learn how to interact with the resource. This definition of discovery makes a clear distinction between the process used to find the metadata and the format used to provide it. First find it, then parse it; and to find it, start from the resource's URL.

Different document schemas offer varying levels of complexity and features, each created to address different use cases. XRDS, POWDER, and even robots.txt offer significantly different approaches to encoding resource metadata. Each defines a different schema for describing resources, sharing some general concepts but with a different focus and approach...

Debating the suitability of these schemas without a concrete application is futile. The key to this discussion is that each of these schemas offers a different balance between complexity and functionality, and it is the market's job to decide which one is the most suitable for each application. The XRDS, POWDER, and other communities should not try to merge their work into a single solution, nor should any of them try to dismiss or ignore the others. Instead, they should focus on where they are in agreement, and where there is no value in competing approaches...

The intention of this post is not to identify a single solution, but to show in a single comprehensive list what solutions have been discussed. It does, however, show that some combination of the 'Link' header with a dynamic mapping approach to metadata location will produce the closest match to the list of requirements. I would like to see the XRI, XRDS-Simple, OpenID, OAuth, and POWDER communities begin a dialog around this that will move us quickly towards a single road to discovery. We can continue to disagree on the destination...
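
The 'Link' header approach the post favors is easy to picture in code. The Python sketch below issues a HEAD request to a resource, reads its Link headers, and returns the target of the first one whose relation matches; the relation name "describedby" (proposed in the POWDER work) and the example URL are assumptions for illustration, not fixed by any of the specifications discussed.

```python
# Sketch of Link-header discovery: ask the resource itself where its
# metadata lives, then fetch that metadata document.
import re
import urllib.request

def discover_metadata_url(resource_url, rel="describedby"):
    """Return the target of the first Link header carrying the given rel."""
    req = urllib.request.Request(resource_url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        for value in resp.headers.get_all("Link") or []:
            # e.g.  Link: <https://example.com/meta>; rel="describedby"
            m = re.match(r'\s*<([^>]+)>\s*;.*rel="?([^";]+)"?', value)
            if m and rel in m.group(2).split():
                return m.group(1)
    return None

meta = discover_metadata_url("https://example.com/photo/123")  # hypothetical URL
if meta:
    with urllib.request.urlopen(meta) as resp:
        metadata_doc = resp.read()  # then parse as XRDS-Simple, POWDER, etc.
```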

See also: the W3C ESW Wiki 'Uniform Access to Links and Properties'


Towards an International Address Standard
Serena Coetzee, Antony Cooper (et al.), GSDI-10 Conference Paper

This paper was presented at the Tenth International Conference for Spatial Data Infrastructure (GSDI-10). "Address standards have been developed, and are still being developed, by a number of countries (e.g., South Africa, Australia, New Zealand, the United Kingdom, Denmark, and the United States of America) and by international organizations such as the Universal Postal Union (UPU), the International Organization for Standardization (ISO), and the Organization for the Advancement of Structured Information Standards (OASIS). More recently, these standards have tended to include geospatial components and to cater for forms of service delivery beyond the postal, such as goods delivery, connecting utilities, routing emergency services, and providing a reference context for presenting other information. The time is right for bringing these various initiatives together to develop one common international address standard. Such a standard will promote interoperability and reusability of address-related software tools by providing one common framework for their developers. It will facilitate the development of spatial data infrastructures (SDIs), particularly those that span national borders, and facilitate data discovery through geospatial portals. An international address standard will also help developing countries without widespread addressing systems speed up the process of assigning addresses and maintaining address databases...

Table 11, which describes ten addressing standards ("Overview of issues addressed in the address standards"), shows that most of the address standards include geo-referencing by coordinates; describe all kinds of addresses (as opposed to only postal addresses); provide data models; use UML to describe those data models; and use XML as an encoding format. Some of the standards include metadata, and a few include data quality, though the trend is to specify data quality measures in a separate standard...

The authors believe that the best approach is to develop a new international address standard within ISO/TC 211, as addresses are a fundamental geospatial data theme, and because developing the standard within ISO will allow the broadest participation from governments, academia, industry, NGOs, civil society, and international organizations such as UPU and OASIS. In particular, involvement by relevant organizations will be encouraged to secure the broadest possible participation. However, developing the international address standard within ISO implies that copies of the standard must be bought, so we propose either to develop an abstract standard with regional profiles or to develop the standard jointly with an organization that makes its standards available for free. This will help ensure that the standard reaches the local authorities who ultimately have to implement it in their areas of jurisdiction..."
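
For readers unfamiliar with the encodings the survey compares, the sketch below builds the kind of geo-referenced, XML-encoded address record most of the surveyed standards model. All element and attribute names here are invented for illustration and are not drawn from any of the ten standards.

```python
# Hypothetical sketch of a structured, geo-referenced address record of
# the kind the surveyed standards model; element names are illustrative.
import xml.etree.ElementTree as ET

addr = ET.Element("Address", countryCode="ZA")
ET.SubElement(addr, "Thoroughfare", number="123").text = "Church Street"
ET.SubElement(addr, "Locality").text = "Pretoria"
ET.SubElement(addr, "PostalCode").text = "0002"
geo = ET.SubElement(addr, "GeoReference", crs="EPSG:4326")  # WGS 84 coordinates
ET.SubElement(geo, "Latitude").text = "-25.7461"
ET.SubElement(geo, "Longitude").text = "28.1881"

print(ET.tostring(addr, encoding="unicode"))
```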

See also: Markup Languages for Names and Addresses


Microsoft CSS Vendor Extensions
Harel M. Williams, The Windows Internet Explorer Weblog

"As you may know, all browsers have a set of CSS features that are either considered a vendor extension (e.g. -ms-interpolation-mode), are partial implementations of properties that are fully defined in the CSS specifications, or are implementation of properties that exist in the CSS specifications, but aren't completely defined. According to the CSS 2.1 Specification, any of the properties that fall under the categories listed previously must have a vendor specific prefix, such as '-ms-' for Microsoft, '-moz-' for Mozilla, '-o-' for Opera, and so on. As part of our plan to reach full CSS 2.1 compliance with Internet Explorer 8, we have decided to place all properties that fulfill one of the following conditions behind the '-ms-' prefix: (1) If the property is a Microsoft extension—not defined in a CSS specification/module; (2) If the property is part of a CSS specification or module that hasn't received Candidate Recommendation status from the W3C; (3) If the property is a partial implementation of a property that is defined in a CSS specification or module. This change applies to the following properties [see the Table], and therefore they should all be prefixed with '-ms-' when writing pages for Internet Explorer 8 (please note that if Internet Explorer 8 users are viewing your site in Compatibility View, they will see your page exactly as it would have been rendered in Internet Explorer 7, and in that case the prefix is neither needed nor acknowledged by the parser... We understand the work involved in going back to pages you have already written and adding properties with the '-ms-' prefix, but we highly encourage you to do so in order for your page to be written in as compliant a manner as possible. However, in order to ease the transition, the non-prefixed versions of properties that existed in Internet Explorer 7, though considered deprecated, will continue to function in Internet Explorer 8. Changes in the filter property syntax: Unfortunately, the original filter syntax was not CSS 2.1 compliant. For example, the equals sign, the colon, and the commas [highlighted in red] are illegal... Since our CSS parser has been re-designed to comply with standards, the old filter syntax will be ignored as it should according to the CSS Specification. Therefore, it is now required that the defined filter is fully quoted. In order to guarantee that users of both Internet Explorer 7 and 8 experience the filter, you can include both syntaxes. Due to a peculiarity in our parser, you need to include the updated syntax first before the older syntax in order for the filter to work properly in Compatibility View; this is a known bug and will be fixed upon final release of IE8..."

See also: W3C Cascading Style Sheets


W3C Issues Last Call: WebCGM 2.1
Benoit Bezaire and Lofton Henderson (eds), W3C Technical Report

W3C's WebCGM Working Group has published the First Public Last Call Working Draft of WebCGM 2.1. Comments are welcome through 01-November-2008. Computer Graphics Metafile (CGM) is an ISO standard, defined by ISO/IEC 8632:1999, for the interchange of 2D vector and mixed vector/raster graphics. WebCGM is a profile of CGM which adds Web linking and is optimized for Web applications in technical illustration, electronic documentation, geophysical data visualization, and similar fields. First published (1.0) in 1999, WebCGM unifies potentially diverse approaches to CGM utilization in Web document applications, and it therefore represents a significant interoperability agreement amongst major users and implementers of the ISO CGM standard. The design criteria for WebCGM aim to balance graphical expressive power on the one hand against simplicity and implementability on the other. A small but powerful set of standardized metadata elements supports hyperlinking and document navigation, picture structuring and layering, and search and query of WebCGM picture content.

The present version, WebCGM 2.1, refines and completes the features of the major WebCGM 2.0 release. WebCGM 2.0 added a DOM (API) specification for programmatic access to WebCGM objects and a specification of an XML Companion File (XCF) architecture, and it extended the graphical and intelligent content of WebCGM 1.0...

This document was developed by the WebCGM Working Group, part of the W3C Graphics Activity. The Working Group expects to advance this Working Draft to Recommendation status. This WebCGM 2.1 specification is based on a work of the same name, the WebCGM 2.1 OASIS Committee Specification; this initial W3C version is substantially identical in technical content to that Committee Specification. The WebCGM 2.1 specification is related to the previous W3C work on WebCGM 1.0 and 2.0. WebCGM 2.0 was simultaneously published by W3C as a Recommendation and by OASIS as an OASIS Standard; the two versions are identical in technical content, differing only in the formatting and presentation conventions of the two organizations. It is agreed that WebCGM 2.1 will be progressed by the same collaborative process, resulting in a technically identical Recommendation and OASIS Standard.

See also: the W3C WebCGM Working Group


U.S. National Vulnerability Database Updated, Upgraded
William Jackson, Government Computer News

The National Institute of Standards and Technology has incorporated Mitre's Common Platform Enumeration (CPE) in the latest version of the National Vulnerability Database (NVD), a comprehensive repository of information on potential vulnerabilities in computer systems. NIST is applying the CPE product-naming scheme in the NVD dictionary that identifies names of products such as operating systems and applications. Experienced systems administrators and security analysts can get by with informal naming systems for platforms and products when they are dealing with vulnerabilities and configuration issues, but automated security practices require a more consistent and structured naming scheme that allows tools and people to identify the IT platforms to which a vulnerability or security guidance applies. With a clear naming scheme, administrators can generate IT platform names consistently and predictably.

NIST made more than 80,000 updates to NVD in preparation for the latest upgrade, which enables greater automation of security processes. Data in the earlier NVD product dictionary was suitable only for human use because its structure was loosely defined; the new dictionary enables the data to be used in machine-to-machine communications. For example, a database of network assets listing hardware, software, patches, and service packs can be correlated with a database of security vulnerabilities, thereby identifying vulnerabilities that might be present on instances of software. That is made possible by linking NVD's large repository of vulnerability information to standard product names. NVD is a collection of 36 programs with a database back end and a Web browser front end. Researchers in NIST's Computer Security Division, with support from the Homeland Security Department's National Cyber Security Division, developed the database...

NVD is the U.S. government repository of standards-based vulnerability management data represented using the Security Content Automation Protocol (SCAP). This data enables automation of vulnerability management, security measurement, and compliance. NVD includes databases of security checklists, security-related software flaws, misconfigurations, product names, and impact metrics. It supports the Information Security Automation Program (ISAP), the U.S. government multi-agency (OSD, DHS, NSA, DISA, and NIST) program for security automation, and it is the U.S. government content repository for SCAP. NVD is a product of the NIST Computer Security Division and is sponsored by the Department of Homeland Security's National Cyber Security Division. As of 2008-09-16, the NVD contained 32,704 CVE vulnerabilities, 161 checklists, 151 US-CERT alerts, 2,260 US-CERT vulnerability notes, and 2,097 OVAL queries, with a CVE publication rate of 12 vulnerabilities per day.
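
The machine-to-machine correlation that structured CPE names enable is easy to sketch. The Python below parses CPE 2.2 URIs (cpe:/{part}:{vendor}:{product}:{version}...) and matches an asset inventory against a vulnerability record; the product names and the CVE identifier are illustrative placeholders, not taken from the live NVD feed.

```python
# Sketch of asset-to-vulnerability correlation by structured CPE name,
# instead of free-form product strings. CPE 2.2 URI components are
# colon-delimited; omitted trailing components match anything.

def parse_cpe(uri):
    """Split a CPE 2.2 URI into its named components."""
    assert uri.startswith("cpe:/")
    fields = ["part", "vendor", "product", "version",
              "update", "edition", "language"]
    return dict(zip(fields, uri[len("cpe:/"):].split(":")))

def affects(asset_cpe, vuln_cpe):
    """True when every component the vulnerability entry specifies
    agrees with the asset's name."""
    a, v = parse_cpe(asset_cpe), parse_cpe(vuln_cpe)
    return all(a.get(k) == val for k, val in v.items() if val)

inventory = ["cpe:/a:apache:http_server:2.2.9",
             "cpe:/o:linux:linux_kernel:2.6.26"]
vulnerability = {"id": "CVE-2008-XXXX",  # illustrative placeholder
                 "cpe": "cpe:/a:apache:http_server:2.2.9"}

exposed = [c for c in inventory if affects(c, vulnerability["cpe"])]
print(exposed)  # ['cpe:/a:apache:http_server:2.2.9']
```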

See also: the National Vulnerability Database


IETF Internet Draft: Tags for Identifying Languages
Addison Phillips and Mark Davis (eds), IETF Internet Draft

Addison Phillips (Globalization Architect, Lab126) has announced the release of "Tags for Identifying Languages," version 17. This IETF Internet Draft (draft-ietf-ltru-4646bis-17) describes the structure, content, construction, and semantics of language tags for use in cases where it is desirable to indicate the language used in an information object. It also describes how to register values for use in language tags and the creation of user-defined extensions for private interchange. This release will form the basis for an upcoming Working Group last call review.

"There are many reasons why one would want to identify the language used when presenting or requesting information. The language of an information item or a user's language preferences often need to be identified so that appropriate processing can be applied. For example, the user's language preferences in a Web browser can be used to select Web pages appropriately. Language information can also be used to select among tools (such as dictionaries) to assist in the processing or understanding of content in different languages. Knowledge about the particular language used by some piece of information content might be useful or even required by some types of processing; for example, spell-checking, computer-synthesized speech, Braille transcription, or high-quality print renderings. One means of indicating the language used is by labeling the information content with an identifier or 'tag'. These tags can also be used to specify the user's preferences when selecting information content, or for labeling additional attributes of content and associated resources. Sometimes language tags are used to indicate additional language attributes of content. For example, indicating specific information about the dialect, writing system, or orthography used in a document or resource may enable the user to obtain information in a form that they can understand, or it can be important in processing or rendering the given content into an appropriate form or style.

This document specifies a particular identifier mechanism (the language tag) and a registration function for values to be used to form tags. It also defines a mechanism for private use values and future extension. A language tag is composed from a sequence of one or more 'subtags', each of which refines or narrows the range of language identified by the overall tag. Subtags, in turn, are a sequence of alphanumeric characters (letters and digits), distinguished and separated from other subtags in a tag by a hyphen. There are different types of subtag, each of which is distinguished by length, position in the tag, and content: each subtag's type can be recognized solely by these features. This makes it possible to extract and assign some semantic information to the subtags, even if the specific subtag values are not recognized. Thus, a language tag processor need not have a list of valid tags or subtags (that is, a copy of some version of the IANA Language Subtag Registry) in order to perform common searching and matching operations. The only exceptions to this ability to infer meaning from subtag structure are the grandfathered tags listed in the productions 'regular' and 'irregular'..."

[Note: For language identification, the XML Recommendation notes that "it is often useful to identify the natural or formal language in which the content is written." A special attribute named 'xml:lang' may be inserted in documents to specify the language used in the contents and attribute values of any element in an XML document; the values of the attribute are language identifiers as defined by IETF RFC 1766/3066/4646, "Tags for the Identification of Languages."]
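
The length-and-position rule the draft describes can be sketched directly. The Python below classifies the subtags of a tag such as 'zh-Hant-TW' without any registry lookup; it deliberately ignores extension, private-use, extended-language, and grandfathered forms, so it is a simplification of the draft's grammar, not an implementation of it.

```python
# Sketch: infer each subtag's type from length, position, and content
# alone, as the draft describes, with no IANA registry lookup.
import re

def classify_subtags(tag):
    parsed, seen = [], set()
    for i, sub in enumerate(tag.split("-")):
        if i == 0 and re.fullmatch(r"[A-Za-z]{2,8}", sub):
            kind = "language"                       # primary language subtag
        elif (re.fullmatch(r"[A-Za-z]{4}", sub)
              and not seen & {"script", "region"}):
            kind = "script"                         # 4 letters, e.g. Hant
        elif (re.fullmatch(r"[A-Za-z]{2}|[0-9]{3}", sub)
              and "region" not in seen):
            kind = "region"                         # 2 letters or 3 digits
        else:
            kind = "variant"                        # catch-all in this sketch
        seen.add(kind)
        parsed.append((kind, sub))
    return parsed

print(classify_subtags("zh-Hant-TW"))
# [('language', 'zh'), ('script', 'Hant'), ('region', 'TW')]
print(classify_subtags("de-CH-1996"))
# [('language', 'de'), ('region', 'CH'), ('variant', '1996')]
```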

See also: Language Identifiers in the Markup Context


Interop: Capitalizing On Web 2.0 With Wikis, Social Networking
K. C. Jones, InformationWeek

As Interop's focus turned to energy and efficiency during the annual conference at the Javits Center in New York City Tuesday, Energy Boot Camp added a wiki on "Ways to Save the Earth." While Interop's primary focus in terms of energy is on how to reduce consumption by IT systems, the wiki offers a wider range of ideas on how to contribute to sustainability. The page includes tips on reducing the carbon footprint of IT and links to other sites with tips for eco-marketing and for promoting sustainability within an organization. Taking into account the time and energy it takes to shut down and power a PC back up, the Interop wiki recommends turning off laptops and desktops if they will be inactive for about 15 minutes. The "Ways to Save the Earth" wiki also covers power-management settings, turning off screen savers, and saving energy on printing. In addition to recommending better paper and printing fewer pages, the wiki recounts a paper-saving solution generated by elementary school students in Massachusetts: they pointed out that narrowing margins can save resources, including the oil and fresh water used to produce paper.

See also: the Interop Conference web site


RFID Comes to New York State Driver's Licenses
Don Sears, eWEEK Blog

The borders of New York have become easier to cross by car or cruise ship. The state of New York has made available new RFID-enabled driver's licenses (Enhanced Driver License, or EDL) that allow U.S. citizens of the state to forgo a passport for border crossings among the immediate North American neighbors and 17 countries in the Caribbean. The DMV, assessing the benefits, says these ID cards will kill a number of inconvenient birds with one mobile ID stone: "The documents also speed border crossing, cost less than a passport and fit in your wallet." They do not replace passports, but appear to be a way for the state to build up New York's homeland security database. New York wasn't the first state to attempt something like this, but it appears to be the first to make something actually happen...

An EDL or ENDID is an approved travel identification document for land and sea border crossings between the U.S. and Canada, Mexico, Bermuda, and the Caribbean, and an EDL is also a driver license. An EDL or ENDID is not acceptable for air travel between these countries... There is no personal identification information recorded on the RFID tag; the tag contains only a unique number assigned to it that verifies issuance of the document to one individual. These IDs/driver's licenses are good for domestic air travel, much like current New York state driver's licenses, and cost only an additional $30 on top of the normal license cost. It appears that one of the key motivations for this initiative is economic... I wonder if we are inching closer to a national ID database for homeland security? I think, ultimately, that will depend on who is elected in November. One more thing: for those Big Applers who are worried about their patriotism being questioned, these new ID cards have a U.S. flag on them.

See also: RFID resources


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com
Sun Microsystems, Inc. http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/




Document URI: http://xml.coverpages.org/newsletter/news2008-09-17.html
Robin Cover, Editor: robin@oasis-open.org