The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: September 18, 2008
XML Daily Newslink. Thursday, 18 September 2008

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
IBM Corporation

Document Schema Definition Languages (DSDL) Part 1: Overview
Members, ISO/IEC JTC 1/SC 34/WG 1 DSDL Final Committee Draft

On behalf of ISO/IEC JTC 1/SC 34 and the IPSJ/ITSCJ, Toshiko Kimura has announced the availability of a Final Committee Draft and ballot for "Information Technology — Document Schema Definition Languages (DSDL) — Part 1: Overview." This draft, distributed for review and comment, is open for ISO ballot through 2009-01-19. ISO/IEC 19757 consists of the following parts: Part 1: Overview; Part 2: Regular-grammar-based validation (RELAX NG); Part 3: Rule-based validation (Schematron); Part 4: Namespace-based validation dispatching language (NVDL); Part 5: Datatypes; Part 7: Character repertoire description language (CRDL); Part 8: Document schema renaming language (DSRL); and Part 9: Datatype- and namespace-aware DTDs. ISO/IEC 19757 defines a set of Document Schema Definition Languages (DSDL) that can be used to specify one or more validation processes performed against Extensible Markup Language (XML) or Standard Generalized Markup Language (SGML) documents; XML is an application profile of SGML (ISO 8879). A document model is an expression of the constraints to be placed on the structure and content of documents validated against the model, together with the information set that needs to be transmitted to subsequent processes. Since the development of Document Type Definitions (DTDs) as part of ISO 8879, a number of validation technologies have been developed by various formal and informal consortia, notably the World Wide Web Consortium (W3C) and the Organization for the Advancement of Structured Information Standards (OASIS). DSDL standardizes a number of validation technologies to complement those already available as standards or from industry. Historically, when many applications act on a single document, each application inefficiently duplicates the task of confirming that validation requirements have been met.
Furthermore, such tasks and expressions have been developed and utilized in isolation, without consideration of how the features and functionality available in other technologies might enhance validation objectives. The main objective of ISO/IEC 19757 is to bring together different validation-related tasks and expressions to form a single extensible framework that allows technologies to work in series or in parallel to produce a single validation result or a set of results. The extensibility of DSDL accommodates validation technologies not yet designed or specified. In the past, different design and use criteria have led users to choose different validation technologies for different portions of their information. Bringing together information within a single XML document sometimes prevents existing document models from being used to validate sections of data. By providing an integrated suite of constraint description languages that can be applied to different subsets of a single XML document, ISO/IEC 19757 allows different validation technologies to be integrated under a well-defined validation policy.
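To make the complementarity concrete: a grammar-based schema (DSDL Part 2, RELAX NG) constrains element structure, while a rule-based language (Part 3, Schematron) asserts co-occurrence constraints a grammar cannot easily express. The sketch below imitates a single Schematron-style rule in plain Python with the standard library XML parser; the document, rule, and function name are invented for illustration and real Schematron expresses such assertions as XPath in an XML schema.

```python
# Illustrative sketch only: a Schematron-style rule check (DSDL Part 3)
# expressed in plain Python. Real Schematron states the same constraint
# as an XPath assertion inside an XML rule document.
import xml.etree.ElementTree as ET

DOC = """<order>
  <item qty="2" price="9.50"/>
  <item qty="0" price="4.00"/>
</order>"""

def check_rule(xml_text):
    """Rule: every item must have qty > 0 -- a value constraint that a
    grammar-based schema such as RELAX NG cannot easily express."""
    root = ET.fromstring(xml_text)
    failures = []
    for i, item in enumerate(root.findall("item")):
        if int(item.get("qty", "0")) <= 0:
            failures.append(f"item {i}: qty must be positive")
    return failures

print(check_rule(DOC))  # one failure, for the second item
```

Under the DSDL framework, a validation policy could run a grammar check and rule checks like this one in series or in parallel over the same document and combine their results.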

See also: the home page

Updated Working Draft for Access Control for Cross-Site Requests
Anne van Kesteren (ed), W3C Technical Report

Members of the W3C Web Applications Working Group have published a Working Draft for "Access Control for Cross-Site Requests." It is expected that this document will progress along the W3C Recommendation track. Web application technologies commonly apply same-origin restrictions to network requests. These restrictions prevent a Web application running at one origin from obtaining data retrieved from another origin, and also limit the number of unsafe HTTP requests that can be automatically launched toward destinations that differ from the running application's origin. In Web application technologies that follow this pattern, network requests typically use ambient authentication and session management information, including HTTP authentication and cookie information. This specification extends this model in several ways: (1) Web applications are enabled to annotate the data that is returned in response to an HTTP request with a set of origins that should be permitted to read that information by way of the user's Web browser. The policy expressed through this set of origins is enforced on the client. (2) Web browsers are enabled to discover whether a target resource is prepared to accept cross-site HTTP requests using non-GET methods from a set of origins. The policy expressed through this set of origins is enforced on the client. (3) Server-side applications are enabled to discover that an HTTP request was deemed a cross-site request by the client Web browser, through the Origin HTTP header. This extension enables server-side applications to enforce limitations on the cross-site requests that they are willing to service. This specification is a building block for other specifications, so-called hosting specifications, which will define the precise model by which this specification is used. Among others, such specifications are likely to include XMLHttpRequest Level 2, XBL 2.0, and HTML 5 (for its server-sent events feature).
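The server-side half of points (1) and (3) can be sketched as follows: inspect the Origin request header the browser supplies, and echo an Access-Control-Allow-Origin response header only for origins the resource trusts. The header names come from the specification family described above; the allow-list, function name, and example origins are assumptions for illustration only.

```python
# Minimal sketch of server-side cross-site access control: the browser
# sends an Origin header on cross-site requests, and the server grants
# read access by echoing Access-Control-Allow-Origin. Enforcement of the
# resulting policy happens on the client (the browser).
ALLOWED_ORIGINS = {"https://app.example.com"}  # assumed allow-list

def cors_response_headers(request_headers):
    origin = request_headers.get("Origin")
    if origin in ALLOWED_ORIGINS:
        # Tells the browser that this origin may read the response body.
        return {"Access-Control-Allow-Origin": origin}
    return {}  # no header: the browser withholds the cross-site response

print(cors_response_headers({"Origin": "https://app.example.com"}))
print(cors_response_headers({"Origin": "https://evil.example"}))
```

Note the division of labor the draft describes: the server only annotates responses with policy; the browser is the component that actually blocks disallowed cross-site reads.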

See also: the W3C Web Applications (WebApps) Working Group

Mozilla Joins Stopbadware Effort
Matt Hines, eWEEK Blog

The fine folks over at Harvard University's Berkman Center for Internet and Society have added another powerful ally to their Stopbadware online malware filtering project by snaring Mozilla, maker of the Firefox browser and other open source software, to officially cooperate in the effort. Mozilla joins the veritable online "who's who" of powerful sponsors that have already committed their time, money, and expertise to the project, which scans the Web for sites that spew nefarious content and then adds them to a list accessed by search engines that then warn users not to visit said URLs. Long credited for the near-constant work being done to secure its code base to improve the stability and safety of its browsers under the directorship of its esteemed security lead Window Snyder, Mozilla joins the likes of Google, PayPal, Lenovo, VeriSign, AOL, and Trend Micro in providing intelligence to Stopbadware. Stopbadware blogger Maxim Weinstein noted that the two organizations have been working together for some time already, with members of the Mozilla team sharing ideas with the researchers during the development of Firefox 3's malware warning screen. One of the biggest effects that Stopbadware has had with its work, beyond warning end users to steer clear of dangerous sites, is in pushing U.S.-based registrars and ISPs to keep malware-distributing URLs from surviving on their services. On a grass-roots level, in many cases the smaller mom-and-pop sites that end up on the project's listings don't even realize that their sites have been infected and are distributing questionable content until they are informed of their status by the clearinghouse.

See also: the web site

IESG Announces Approval of Two Drafts on IETF IPR Policy
Joel M. Halpern, Scott Bradner, Jorge Contreras (eds), IETF I-Ds

The Internet Engineering Steering Group (IESG) announced that two Internet Drafts on IPR policy have been approved: (1) "Advice to the Trustees of the IETF Trust on Rights to be Granted in IETF Documents" has been approved as an Informational RFC, and (2) "Rights Contributors Provide to the IETF Trust" has been approved as a Best Current Practice (BCP) document. The two Internet Drafts were produced by the IETF Intellectual Property Rights Working Group, and are now in the IESG processing stage. The documents were reviewed by the IPR Working Group and by IETF counsel; they were also reviewed by Russ Housley for the IESG. The IETF policies about intellectual property rights in Contributions to the IETF are designed to ensure that such Contributions can be made available to the IETF and Internet communities while permitting the authors to retain as many rights as possible. Of course, Contributors grant some rights to the IETF. The IETF Trust holds and manages those rights on behalf of the IETF. The Trustees of the IETF Trust are responsible for that management. This management includes granting the licenses to copy, implement, and otherwise use IETF contributions, among them Internet-Drafts and RFCs. The Trustees of the IETF Trust accept direction from the IETF regarding the rights to be granted. The most contentious part of the debate was whether or not to freely allow the production of modified versions of the material outside the IETF context. The rough consensus was that code has to be modifiable in order to be useful, while the arguments for allowing modification of prose text were not compelling. These documents will not come into force until they are published as RFCs. The IETF Trust is in the final stages of approval for the license agreement that meets the requirements contained in these documents. The IETF Chair or IETF Trust Chair will provide the URL; however, the RFC numbers for these documents are needed for the license agreement.
Coordination is needed to ensure that these documents come out together.

See also: Rights Contributors Provide to the IETF Trust

Card Use Can Stem ID Theft, Microsoft Says
Jabulani Leffall, Application Development Trends

Microsoft this week released a white paper on identity theft with the aim of starting a "vendor-neutral" discussion on the use of "information cards" as an Internet security solution... The appeal for collaboration comes as Microsoft is already well into implementing its Windows CardSpace technology. CardSpace is Microsoft's current information card technology. It's a client application for Windows operating systems that stores digital identities... From the 'Introduction' to "Online Identity Theft: Changing the Game: Protecting Personal Information on the Internet": "Personally identifying information (PII) in digital form is the lifeblood of the Internet age. Because individuals, organizations, businesses and governments have been willing to trust service providers with such PII, the past decade has seen a tremendous variety of new uses for the Internet. Access to PII has helped fuel explosive growth in e-commerce and e-government applications as well as various online communities. Online banking and investing services, travel and shopping Web sites, and electronic filing of tax returns and license renewals are all examples of how the Internet is enabling economic opportunity, efficiency and personal convenience in addition to offering countless other benefits. But along with the benefits, concerns about protecting PII are also escalating. Armed with personal information gathered online and offline through phishing attacks, spyware, social engineering scams and other illicit methods, identity thieves are stealing billions of dollars through unauthorized transactions and new lines of credit opened fraudulently in the name of unwitting consumers. Online fraud is undermining confidence in the Internet and slowing the growth of online commerce and other services. In 2006, 12 percent of EU residents aged 16 to 74 said they avoided online purchases because of security concerns.
In comparison, 57 percent said they had used the Internet and 30 percent said they shopped online in 2007. Identity theft is not only a threat faced by consumers but also a significant concern for organizations as they handle growing volumes of PII and use it in more diverse ways. Widely publicized leaks of sensitive data from custodians such as financial institutions, credit bureaus and government agencies are eroding public trust in the Internet and threatening to dampen online commerce and services. This paper outlines a set of near-term tactics for mitigating online identity theft as well as a longer-range strategic vision for fundamentally 'changing the game' with regard to how people assert their identity on the Internet and how such identity claims are verified by other parties during an online interaction or transaction. It also offers recommended actions for government and industry leaders to help establish the infrastructure necessary for creating a more trustworthy Internet.

See also: the Microsoft Paper

Using PHP's MDB2_Schema, an XML-Based Database Schema Manager
Octavia Andreea Anghel

The MDB2_Schema library from the PHP Extension and Application Repository (PEAR) is a powerful solution for preserving and using database schemas for different kinds of Relational Database Management Systems (RDBMSs). Using the examples in this article, you will have the basics for using a tool specially designed to work with database schemas. It's very easy to use, and it offers a high degree of flexibility, especially considering that this is only a beta version. Because MDB2_Schema stores database schemas in XML format, it's independent of any particular RDBMS. You can execute basic SQL statements such as CREATE, ALTER, DROP, and INSERT directly through MDB2_Schema. MDB2_Schema supports reverse engineering, and the format is compatible with both Microsoft Access (.mdb) and Internet Information Server's Metabase files. Metabase is a package that allows developers to write database applications in PHP that are independent of the DBMS (DataBase Management System). The main advantage of Metabase is that developers only need to learn and use one set of commands to implement applications that can run against many different DBMSs. The MDB2_Schema package requires other PEAR packages to operate properly. At minimum, you will need to install the MDB2 package and at least one database driver. To connect to a specified database you first need to set up a DSN (Data Source Name), which can be a string or an array that defines the parameters for the connection: the RDBMS type, the protocol, the host specification, the username, the password, and the database name. The article shows how to extract the structure, the content, and the entire database into three different files using the current database definition. To do that, you'll use two methods: dumpDatabase() and getDefinitionFromDatabase(). Reverse-engineering a database: the reverse of dumping a database is also possible, namely creating a database from the dumped schema document 'structure.xml'. To do that, you use the parseDatabaseDefinitionFile() and createDatabase() methods... It will be interesting to monitor the evolution of this PEAR package and to see what the first stable release brings.
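The DSN string mentioned above follows the familiar URL-like shape phptype://username:password@host/database. As a language-neutral illustration of what such a string encodes (the example credentials and database name are invented, and this parser is a sketch, not part of MDB2), it can be pulled apart with Python's standard library:

```python
# Sketch: decompose a PEAR-MDB2-style DSN string into its connection
# parameters. The DSN values here are invented for the example.
from urllib.parse import urlparse

def parse_dsn(dsn):
    u = urlparse(dsn)
    return {
        "phptype": u.scheme,          # RDBMS driver, e.g. mysql, pgsql
        "username": u.username,
        "password": u.password,
        "hostspec": u.hostname,
        "database": u.path.lstrip("/"),
    }

print(parse_dsn("mysql://manager:secret@localhost/inventory"))
```

In PHP, the same DSN could equivalently be supplied as an associative array of these parts, which is the "string or array" choice the article refers to.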


XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation
Oracle Corporation
Sun Microsystems, Inc.


Hosted By
OASIS - Organization for the Advancement of Structured Information Standards

Sponsored By

IBM Corporation
ISIS Papyrus
Microsoft Corporation
Oracle Corporation


