Last modified: November 09, 2009
XML Daily Newslink. Monday, 09 November 2009

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation http://www.microsoft.com



Public Review: Extensible Resource Descriptor (XRD) Version 1.0
Eran Hammer-Lahav and Will Norris (eds), OASIS Public Review Draft

Members of the OASIS Extensible Resource Identifier (XRI) Technical Committee have approved a Committee Draft of the Extensible Resource Descriptor (XRD) Version 1.0 specification for public review through January 06, 2010.

The XRD document is related to Extensible Resource Identifier (XRI) Version 3.0 (Working Draft 02, August 20, 2009), which provides the normative technical specification for XRI generic syntax and normalization rules. XRI provides a common language for structured identifiers that may be used to share semantics across protocols, domains, systems, and applications. It builds directly on the structure and capabilities of URI (Uniform Resource Identifier) and IRI (Internationalized Resource Identifier): XRI is a profile of URI and IRI syntax and normalization rules for producing URIs or IRIs that carry additional structure and semantics beyond those specified in the URI and IRI definitions given in IETF RFCs.

XRD is "a simple generic format for describing resources. Resource descriptor documents provide machine-readable information about resources (resource metadata) for the purpose of promoting interoperability, and assist in interacting with unknown resources that support known interfaces...

For example, a web page about an upcoming meeting can provide in its descriptor document the location of the meeting organizer's free/busy information to potentially negotiate a different time. The descriptor for a social network profile page can identify the location of the user's address book as well as accounts on other sites. A web service implementing an API protocol can advertise which of the protocol's optional components are supported..."
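To make the idea concrete, here is a minimal sketch of building such a descriptor with the Python standard library. The namespace URI is taken from the XRD 1.0 draft, but the element layout, subject URI, and link relations below are illustrative assumptions, not normative examples from the specification:

```python
import xml.etree.ElementTree as ET

# Namespace from the XRD 1.0 draft; element names below are illustrative.
XRD_NS = "http://docs.oasis-open.org/ns/xri/xrd-1.0"

def build_descriptor(subject, links):
    """Build a minimal XRD-style descriptor: a subject URI plus typed links."""
    ET.register_namespace("", XRD_NS)
    xrd = ET.Element("{%s}XRD" % XRD_NS)
    ET.SubElement(xrd, "{%s}Subject" % XRD_NS).text = subject
    for rel, href in links:
        ET.SubElement(xrd, "{%s}Link" % XRD_NS, {"rel": rel, "href": href})
    return ET.tostring(xrd, encoding="unicode")

# Hypothetical descriptor for the meeting-page example above: the page's
# descriptor points a client at the organizer's free/busy information.
doc = build_descriptor(
    "http://example.com/meetings/2009-11-09",
    [("http://example.com/rel/organizer-freebusy",
      "http://example.com/users/alice/freebusy")],
)
print(doc)
```

A client that understands the free/busy link relation could then fetch that resource to negotiate a different meeting time, without any prior knowledge of the meeting page itself.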

See also: the OASIS announcement


DataCache API First Working Draft Published
Nikunj R. Mehta (ed), W3C Technical report

Members of the W3C Web Applications (WebApps) Working Group have released a First Public Working Draft for the DataCache API specification. The WG invites public comment, and the latest stable version of the editor's draft for this specification is always available on the W3C CVS server. The W3C WebApps Working Group, "a merger of the WebAPI and WAF Working Groups, is chartered to develop standard APIs for client-side Web Application development. This work will include both documenting existing APIs such as XMLHttpRequest and developing new APIs in order to enable richer web applications."

From the specification Introduction: "Web applications often encounter seemingly random disconnections or network slowdowns, which degrade application responsiveness and availability and, therefore, user experience. Working around network issues entails specially written applications, synchronization programs, and data protocols for specific platforms. The standard HTTP caches built into existing user agents are under no obligation to locally store a cacheable resource and do not provide any guarantees about off-line serving of HTTP resources. An application cache (per HTML5) can hold static representations of a set of pre-defined resources that can be served locally. However, applications cannot alter this set of resources programmatically. Moreover, an application cache cannot satisfy requests other than GET and HEAD.

To address this limitation, this specification introduces data caches. Instead of a static manifest resource listing the resources to be cached, a data cache can be modified programmatically. Web applications can add or remove resources in a data cache which can then be statically served by the user agent when that resource is requested. This specification also provides embedded local servers to dynamically serve off-line representations of resources such as in response to unsafe HTTP methods, e.g., POST...

The specification does not introduce a new programming model for Web applications as data caches and embedded local servers are transparently pressed into action by the user agent, depending on system conditions. This means that existing applications can be used unchanged in environments that are not affected by network unreliability. Applications can be altered to use APIs specified in this document, only if they require improved responsiveness. Such applications can seamlessly switch between on-line and off-line operation without needing explicit user action..."
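The real DataCache API is a JavaScript interface implemented by the user agent, but the core idea — a cache whose contents can be changed programmatically, with local serving when the network fails — can be sketched in a few lines. The class and method names below are illustrative, not the W3C API:

```python
class DataCache:
    """Sketch of the DataCache idea: a programmatically modifiable cache.

    Unlike a static application-cache manifest, entries can be added or
    removed at runtime, and a cached representation is served when the
    network path fails.
    """

    def __init__(self):
        self._store = {}

    def capture(self, uri, body):
        """Add or update a cached representation programmatically."""
        self._store[uri] = body

    def release(self, uri):
        """Remove a resource from the cache programmatically."""
        self._store.pop(uri, None)

    def get(self, uri, fetch):
        """Try the network first; on failure, fall back to the local copy."""
        try:
            body = fetch(uri)
            self._store[uri] = body  # refresh the local copy on success
            return body
        except OSError:
            if uri in self._store:
                return self._store[uri]  # served off-line
            raise

def offline_fetch(uri):
    """Stand-in for a network fetch while disconnected."""
    raise OSError("network unreachable")

cache = DataCache()
cache.capture("/profile", "<html>cached profile</html>")
print(cache.get("/profile", offline_fetch))  # serves the off-line copy
```

This mirrors the seamless on-line/off-line switch described above: the caller's code path is the same whether the body came from the network or the cache.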

See also: the W3C news item


vCard Extensions to WebDAV (CardDAV)
Cyrus Daboo (ed), IETF Internet Draft

Members of the IETF vCard and CardDAV (VCARDDAV) Working Group have released an updated version of the specification vCard Extensions to WebDAV (CardDAV). A list of changes is presented in Appendix A (Change History) and in the diff publication formats. Document Section 10 (pages 38-47) provides XML Element Definitions (CARDDAV:addressbook, CARDDAV:supported-collation, CARDDAV:addressbook-query, CARDDAV:address-data, CARDDAV:allprop, CARDDAV:prop, CARDDAV:filter, CARDDAV:limit, CARDDAV:addressbook-multiget, etc).

This specification, "vCard Extensions to WebDAV (CardDAV)," defines extensions to the Web Distributed Authoring and Versioning (WebDAV) protocol to specify a standard way of accessing, managing, and sharing contact information based on the vCard format.

Address books containing contact information are a key component of personal information management tools, such as email, calendaring and scheduling, and instant messaging clients. To date, several protocols have been used for remote access to contact data, including Lightweight Directory Access Protocol (LDAP, defined in RFC 4510), Internet Message Support Protocol (IMSP), and Application Configuration Access Protocol (ACAP), together with SyncML for synchronization of such data. WebDAV, defined in IETF RFC 4918, offers a number of advantages as a framework or basis for address book access and management. Most of these advantages boil down to a significant reduction in design, implementation, interoperability-testing, and deployment costs: (1) the ability to use multiple address books with hierarchical layout; (2) the ability to control access to individual address books and address entries per WebDAV ACL; (3) server-side searching of address data; (4) well-defined internationalization support through WebDAV's use of XML; (5) use of vCards as a well-defined address schema to enhance client interoperability. A key disadvantage of address book support in WebDAV is the lack of change notification; many of the alternative protocols also lack this ability, but an extension for push notifications could easily be developed...

vCard is in widespread use in email clients and mobile devices as a means of encapsulating address information for transport via email, or for import/export and synchronization operations. An update to vCard (vCard Version 4) is currently being developed and is compatible with this specification..."
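As a rough illustration of the XML element definitions mentioned above, the following sketch assembles a CARDDAV:addressbook-query REPORT body with the Python standard library. The CARDDAV namespace URN is taken from the specification text, but the exact element layout should be checked against the current draft; the property names are illustrative:

```python
import xml.etree.ElementTree as ET

# DAV: is the WebDAV namespace; the CARDDAV URN is from the draft text.
DAV = "DAV:"
CARDDAV = "urn:ietf:params:xml:ns:carddav"

def addressbook_query(vcard_props):
    """Sketch of a CARDDAV:addressbook-query REPORT body that asks the
    server to return selected vCard properties, filtered server-side."""
    ET.register_namespace("D", DAV)
    ET.register_namespace("C", CARDDAV)
    query = ET.Element("{%s}addressbook-query" % CARDDAV)
    prop = ET.SubElement(query, "{%s}prop" % DAV)
    ET.SubElement(prop, "{%s}getetag" % DAV)
    data = ET.SubElement(prop, "{%s}address-data" % CARDDAV)
    for name in vcard_props:
        ET.SubElement(data, "{%s}prop" % CARDDAV, {"name": name})
    # Illustrative filter: match on the formatted-name (FN) property.
    filt = ET.SubElement(query, "{%s}filter" % CARDDAV)
    ET.SubElement(filt, "{%s}prop-filter" % CARDDAV, {"name": "FN"})
    return ET.tostring(query, encoding="unicode")

body = addressbook_query(["VERSION", "FN", "EMAIL"])
print(body)
```

A body like this would be sent in a WebDAV REPORT request against an address book collection, letting the server do the searching rather than the client downloading every vCard.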

See also: the vCard XML Schema


Content Management Interoperability Services (CMIS) Public Review
Ethan Gur-esh, Microsoft Enterprise Content Management (ECM) Team Blog

This blog posting was supplied by Ethan Gur-esh, Secretary of the OASIS CMIS Technical Committee. "... after working with many other vendors like Alfresco, Nuxeo, OpenText, Oracle, SAP, and others on the CMIS specification, forming a Technical Committee at OASIS to deliver that specification as a truly open standard, and having four 'plug-fest' events where we've tested actual (prototype) implementations of the specification together to make sure it would work in the real world, I'm thrilled to announce that on October 23, 2009, Version 1.0 of the CMIS specification entered OASIS' public review process...

At this point, pretty much every vendor in the ECM space is really motivated to start supporting CMIS in their respective products. We've all seen the excitement from customers about CMIS... Of course, the prerequisite for all this is a final, OASIS-ratified 1.0 standard. While several companies have released prototypes based on interim drafts (which are wonderful proof-points that CMIS is ready for real-world implementation), look for vendors to start disclosing specific plans once the specification is final...

Those of you who attended the SharePoint Conference last week have seen that SharePoint 2010 is looking pretty shiny and polished. But until the CMIS 1.0 specification is final, we can't realistically commit to exact dates when our CMIS support would be ready. This means that our plans need to be flexible to balance the following needs: (1) not rushing the finalization of the CMIS 1.0 specification in a way that would compromise its quality; (2) releasing CMIS support for SharePoint 2010 as soon as possible to meet the interoperability needs of our customers and partners... We're definitely looking forward to having the CMIS standardization process complete so we can lock down our plans to the point where we can share additional details..."

See also: OASIS Public Review for Content Management Interoperability Services (CMIS) v1.0


XML Denial of Service Attacks and Defenses
Bryan Sullivan, MSDN Magazine Security Briefs

Denial of service (DoS) attacks are among the oldest types of attacks against Web sites. Documented DoS attacks exist at least as far back as 1992, which predates SQL injection (discovered in 1998), cross-site scripting (JavaScript wasn't invented until 1995), and cross-site request forgery... From the beginning, DoS attacks were highly popular with the hacker community, and it's easy to understand why. A single 'script kiddie' attacker with a minimal amount of skill and resources could generate a flood of TCP SYN (for synchronize) requests sufficient to knock a site out of service...

Over the years, SYN flood attacks have been largely mitigated by improvements in Web server software and network hardware. However, lately there has been a resurgence of interest in DoS attacks within the security community—not for 'old school' network-level DoS, but instead for application-level DoS and particularly for XML parser DoS.

XML DoS attacks are extremely asymmetric: to deliver the attack payload, an attacker needs to spend only a fraction of the processing power or bandwidth that the victim needs to spend to handle the payload. Worse still, DoS vulnerabilities in code that processes XML are also extremely widespread. Even if you're using thoroughly tested parsers like those found in the Microsoft .NET Framework System.Xml classes, your code can still be vulnerable unless you take explicit steps to protect it. This article describes some of the new XML DoS attacks. It also shows ways for you to detect potential DoS vulnerabilities and how to mitigate them in your code..."
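The asymmetry the article describes is easy to quantify with the classic "billion laughs" entity-expansion payload: each entity expands to several copies of the previous one, so a document of a few hundred bytes forces a naive parser to materialize gigabytes. The sketch below builds the payload shape (without parsing it) and shows one blunt mitigation for untrusted input — refusing any document that declares a DTD at all, the approach taken by libraries such as defusedxml. Exact parser behavior varies, so treat this as illustrative:

```python
import xml.etree.ElementTree as ET

# Each entity a(i) expands to `fanout` copies of a(i-1), so expanding the
# outermost reference yields fanout**levels copies of the innermost string.
fanout, levels = 10, 9
entities = ['<!ENTITY a0 "lol">']
for i in range(1, levels + 1):
    refs = ("&a%d;" % (i - 1)) * fanout
    entities.append('<!ENTITY a%d "%s">' % (i, refs))
bomb = "<!DOCTYPE r [%s]><r>&a%d;</r>" % ("".join(entities), levels)

payload_size = len(bomb)                       # well under a kilobyte
expanded_size = len("lol") * fanout ** levels  # 3 * 10**9 bytes expanded

# Mitigation sketch: inline entity definitions are rarely legitimate in
# data feeds, so reject any untrusted document that carries a DTD.
def parse_untrusted(xml_text):
    if "<!DOCTYPE" in xml_text:
        raise ValueError("refusing XML with a DTD from an untrusted source")
    return ET.fromstring(xml_text)
```

The attacker spends a few hundred bytes of bandwidth; the victim spends gigabytes of memory — which is exactly the asymmetry that makes this class of attack attractive.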


Nine in Ten Web Applications Have Serious Flaws
Thomas Claburn, InformationWeek

"The number of software vulnerabilities detected has risen to the point that almost 9 out of 10 Web applications have flaws that could lead to the exposure of sensitive information. Cenzic's 'Web Application Security Trends Report, Q1-Q2 2009', released on Monday, says that more than 3,100 vulnerabilities were identified in the first half of the year, 10% more than the number identified in the second half of 2008...

Cenzic says that SQL Injection and Cross Site Scripting vulnerabilities played a role in 25% and 17% of all Web attacks respectively. In recent years, Mozilla's Firefox has tended to have a higher number of vulnerabilities than Internet Explorer, but Firefox bugs have been fixed more quickly than those affecting Internet Explorer. Thus, Mozilla has argued that the number of days that users were vulnerable represents a more useful security metric than a comparison of vulnerabilities. Members of the Firefox team have also argued that the security of Firefox and Internet Explorer can't easily be compared because Mozilla's security process is open..."

See also: Application Security Standards


Tim Berners-Lee: Machine-readable Web Still a Ways Off
Joab Jackson, Government Computer News

"Despite recent initiatives such as the U.S. Data.gov site, the idea of a machine-readable Web extolled by World Wide Web creator Sir Tim Berners-Lee still faces many obstacles, he admitted during a talk at the International Semantic Web Conference, held this week in Chantilly, VA, USA. Formats found on Data.gov, such as spreadsheets or even application programming interfaces, don't do enough to help the reusability of data, he said. Neither are there enough commercial products available to make Web site transitions to the new semantic Web formats easy...

Berners-Lee has long extolled the virtues of annotating the Web with machine-readable data. This week's conference of semantic Web enthusiasts, however, offered him the chance to discuss in-depth the challenges of getting the rest of the Web world to start using the technologies and approaches he advocates...

He said that the use of RDF should not require building new systems or changing the way site administrators work, reminiscing about how many of the original Web sites were linked back to legacy mainframe systems. Instead, scripts can be written in Python, Perl, or other languages to convert data in spreadsheets or relational databases into RDF for end users... The idea of enabling the Semantic Web so it can be shared seems to be gaining at least some traction, not least because of efforts that disregard some of the more advanced notions of the Semantic Web, such as ontology-building, in favor of simply linking data sources..."
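The kind of conversion script Berners-Lee describes can be very small. The sketch below turns spreadsheet rows (CSV) into RDF triples in N-Triples syntax; the base URI, column names, and property URIs are made up for illustration, and a real conversion would use an agreed vocabulary:

```python
import csv
import io

# Hypothetical base URI for minting subject and property URIs.
BASE = "http://example.gov/dataset/"

def rows_to_ntriples(csv_text, key_column):
    """Convert CSV rows to N-Triples: one subject per row (from the key
    column), one triple per remaining non-empty cell."""
    triples = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        subject = "<%s%s>" % (BASE, row[key_column])
        for column, value in row.items():
            if column == key_column or not value:
                continue
            predicate = "<%sproperty/%s>" % (BASE, column)
            triples.append('%s %s "%s" .' % (subject, predicate, value))
    return "\n".join(triples)

sheet = "id,agency,budget\n17,EPA,42000000\n"
print(rows_to_ntriples(sheet, "id"))
```

The point of the approach is that the spreadsheet or database stays the system of record; the script republishes its contents as linked data without anyone changing how they work.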

See also: on Linked Data


What DNS Is Not
Paul Vixie, ACM Queue

"DNS (Domain Name System) is a hierarchical, distributed, autonomous, reliable database. The first and only of its kind, it offers real-time performance levels to a global audience with global contributors. Every TCP/IP traffic flow, including every World Wide Web page view, begins with at least one DNS transaction. DNS is, in a word, glorious. To underline our understanding of what DNS is, we must differentiate it from what it is not. The Internet economy rewards unlimited creativity in the monetization of human action, and fairly often this takes the form of some kind of intermediation. For DNS, monetized intermediation means lying. The innovators who bring us such monetized intermediation do not call what they sell 'lies,' but in this case it walks like a duck and quacks like one, too.

What DNS is not is a mapping service or a mechanism for delivering policy-based information. DNS was designed to express facts, not policies. Because it works so well and is ubiquitous, however, it's all too common for entrepreneurs to see it as a greenfield opportunity. Those of us who work to implement, enhance, and deploy DNS and to keep the global system of name servers operating will continue to find ways to keep the thing alive even with all these innovators taking their little bites out of it.

These are unhappy observations and there is no solution within reach because of the extraordinary size of the installed base. The tasks where DNS falls short, but that people nevertheless want it to be able to do, are in most cases fundamental to the current design. What will play out now will be an information war in which innovators who muscle in early enough and gain enough market share will prevent others from doing likewise—DNS lies vs. DNS security is only one example..."

Note: "Paul Vixie is president of Internet Systems Consortium (ISC), a nonprofit company that operates the DNS F root name server and that publishes the BIND software used by 80% of the Internet for DNS publication. He is also chairman of American Registry for Internet Numbers (ARIN), a nonprofit company that allocates Internet number resources in the North American and Caribbean regions."

See also: American Registry for Internet Numbers (ARIN)


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
Microsoft Corporation http://www.microsoft.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com
Sun Microsystems, Inc. http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2009-11-09.html
Robin Cover, Editor: robin@oasis-open.org