The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: March 11, 2008
XML Daily Newslink. Tuesday, 11 March 2008

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Primeton http://www.primeton.com



What is in the New Draft of OOXML?
Rick Jelliffe, O'Reilly Articles

Update on Office Open XML (OOXML): In late February 2008, a week-long Ballot Resolution Meeting (BRM) was held in Geneva, Switzerland. It was attended by 120 individual delegates from about 34 different National Standards Bodies. The outcome of the meeting was a series of editor's instructions to allow a new draft of the standard to be created: usually these instructions are completely specific, though there may be some general ones, for example to use one term rather than another globally.

The results of the BRM are available online, and National Bodies now have one month (to the end of March 2008) to decide whether the changed draft meets their requirements. For the new draft to pass, it will require five National Bodies (of the 'P' class) to switch from Abstain or No votes (remembering that No with Comments may mean 'Conditional Yes'). Of the 1027 Editor's responses, the BRM addressed 189 through specific resolutions and discussions at the BRM, and the rest through a paper ballot in which each National Body in attendance voted: this accepted 825 of the Editor's recommendations and rejected 13. In the paper ballot, National Bodies could abstain on issues of lesser interest to them...

The changes from the BRM usually relate either to correcting bugs or to better documentation. Additions to functionality tended to be limited to providing better accessibility and better internationalization, rather than completing or expanding the general feature set. The Editor's Disposition of Comments clearly tried to reduce the amount of gratuitous breakage of documents or applications, and the explicit resolutions of the BRM continued this policy (IMHO).

Meanwhile, at the time of writing (March 2008), OASIS has been working on ODF 1.2, which is slated to improve several important ODF weak spots, in particular relating to formulas and metadata. It is mooted for re-submission to ISO during 2008.
If the new draft is adopted as a standard, it does not remain static but can be maintained by the relevant ISO/IEC JTC 1 committee, SC34 (Document Description and Processing Languages). Procedures exist for National Bodies to submit Defect Reports, which again attract the Editor's attention and National Body voting acceptance, so the kind of process seen at the BRM becomes an ongoing effort, if there is enough interest from National Bodies. The upshot is that, if DIS 29500 mark II and ODF 1.2 both get accepted as standards, by the end of 2008 we should have two standards which together can thoroughly cover the field of representing current and legacy office documents, each representing one of the two dominant commercial traditions; both under active and significantly open maintenance to fill in the remaining gaps and to repair broken parts; with clear cross-mapping to allow interconversion; with an increasing level of modularity so that they can share their component parts; and at least with a feasible agenda of co-evolution and other kinds of convergence.

See also: Tim Bray's blog


SCAP Narrows Security Gap
William Jackson, Government Computer News

Released by NIST last spring, the Security Content Automation Protocol (SCAP) is a suite of tools to help automate vulnerability management and evaluate compliance with federal information technology security requirements. It is an expansion of the National Vulnerability Database with an automated checklist that uses a collection of recognized standards for naming software flaws and configuration problems in specific products. SCAP has done a lot to help agencies in the uphill battle against security vulnerabilities, but it hasn't yet gotten them over the top. NIST is now accrediting independent labs for a SCAP product evaluation program, vendors are producing scanning tools using the protocol, and agencies are using them to automate compliance with IT security regulations. The more mature standards in the suite include:

(1) Common Vulnerabilities and Exposures (CVE), from Mitre: standard identifiers and a dictionary for security vulnerabilities related to software flaws.
(2) Open Vulnerability and Assessment Language (OVAL), also from Mitre: a standard Extensible Markup Language for security testing procedures and reporting.
(3) Extensible Configuration Checklist Description Format (XCCDF), from the National Security Agency and NIST: a standard XML for specifying checklists and reporting results.
(4) Common Vulnerability Scoring System (CVSS), from the Forum of Incident Response and Security Teams: a standard for conveying and scoring the impact of vulnerabilities.

Less mature standards are:

(5) Common Configuration Enumeration (CCE), from Mitre: standard identifiers and a dictionary for system security configuration issues.
(6) Common Platform Enumeration (CPE), from Mitre: standard identifiers and a dictionary for platform and product naming.
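As a sketch of how these pieces fit together, an XCCDF checklist rule can name an OVAL definition as its automated check. The rule id, referenced file, and definition id below are invented for illustration; only the namespace URIs and element names follow the XCCDF/OVAL conventions.

```xml
<!-- Hypothetical XCCDF rule; ids and the referenced file are invented. -->
<Rule id="rule-password-min-length"
      xmlns="http://checklists.nist.gov/xccdf/1.1">
  <title>Minimum password length</title>
  <description>Passwords must be at least 12 characters.</description>
  <!-- The check delegates the actual test to an OVAL definition. -->
  <check system="http://oval.mitre.org/XMLSchema/oval-definitions-5">
    <check-content-ref href="example-oval.xml"
                       name="oval:org.example:def:101"/>
  </check>
</Rule>
```

A SCAP scanner evaluates the referenced OVAL definition on the target system and reports the result against the rule, which is what lets compliance checking be automated end to end.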

See also: Application Security Standards


Defining NETCONF Data Models using Document Schema Definition Languages (DSDL)
Rohan Mahy, Sharon Chisholm, and Ladislav Lhotka (eds), IETF Internet Draft

Members of the IETF Network Configuration (NETCONF) Working Group have published an updated draft of the specification "Defining NETCONF Data Models using Document Schema Definition Languages (DSDL)." The document describes a concrete proposal for creating NETCONF and other IETF data models using the RELAX NG schema language and the Schematron validation language, which are both parts of ISO's Document Schema Definition Languages (DSDL) standard. Appendix D presents the DHCP schema in RELAX NG compact syntax; for those who prefer the XML syntax of RELAX NG, the "dhcp.rnc" file was converted to "dhcp.rng" using Trang.

The NETCONF Working Group has completed a base protocol used for configuration management. This base specification defines protocol bindings and an XML container syntax for configuration and management operations, but does not include a modeling language or accompanying rules for how to model configuration and status information (in XML syntax) carried by NETCONF. The IETF Operations area has a long tradition of defining data for SNMP Management Information Bases (MIBs) using the SMI to model its data.

The approach to data modeling described in this Internet Draft uses the two most mature parts of the ISO Document Schema Definition Languages (DSDL) multi-part standard: RELAX NG and Schematron. The proposal then goes on to define additional processing and documentation annotation schemas. RELAX NG is a mature, traditional schema language for validating the structure of an XML document. Schematron is a rule-based schema validation language which uses XPath expressions to validate content and relational constraints. In addition, this document defines and reuses various annotation schemas which can provide additional metadata about specific elements in the data model, such as textual descriptions, default values, relational integrity key definitions, and whether data is configuration or status data.
This combination was created to specifically address a set of Netconf-specific modeling requirements, and in addition should be useful as a general purpose data modeling solution useful for other IETF working groups. The authors believe that reusing schema work being developed and used by other standards bodies provides substantial long-term benefits to the IETF management community, so this proposal attempts to reuse as much existing work as possible.
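To illustrate the division of labor described above, the sketch below pairs a RELAX NG compact grammar (document structure) with a Schematron rule (a relational constraint that grammar-based schemas cannot express). The element names loosely echo the draft's DHCP example but are simplified here and should be treated as hypothetical.

```
# Structure, in RELAX NG compact syntax: two lease-time parameters.
element dhcp {
  element default-lease-time { xsd:unsignedInt },
  element max-lease-time { xsd:unsignedInt }
}

<!-- Relational constraint, in Schematron: one value bounded by another,
     a rule the grammar alone cannot state. -->
<sch:pattern xmlns:sch="http://purl.oclc.org/dsdl/schematron">
  <sch:rule context="dhcp">
    <sch:assert test="number(default-lease-time) &lt;= number(max-lease-time)">
      default-lease-time must not exceed max-lease-time
    </sch:assert>
  </sch:rule>
</sch:pattern>
```

Validating an instance is then a two-pass affair: the RELAX NG grammar checks structure, and the Schematron pattern checks the cross-element constraint.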

See also: Document Schema Definition Languages (DSDL)


DocBook 5.0: The Definitive Guide Updates
Norm Walsh, Blog

DocBook is a very popular set of tags for describing books, articles, and other prose documents, particularly technical documentation. DocBook is an XML vocabulary normatively defined using RELAX NG and Schematron. But DocBook is older than any of those technologies: it was originally an SGML vocabulary described with the standard SGML Document Type Definition, or DTD... I recently published a new version of DocBook 5.0: The Definitive Guide (TDG5). Over the weekend, I finally sat down and updated chapters 4 (Publishing DocBook Documents) and 5 (Customizing DocBook). There's still not much in chapter 4, but chapter 5 is much improved, I think. The element descriptions in the reference are now up-to-date with the official release of DocBook V5.0.

In the original Definitive Guide, the content models were expressed in DTD syntax. The DTD, in turn, was constructed from parameter entities, which are really a string substitution or macro language. Expand all the parameter entities, reformat the text, and you get something that's terse, but relatively easy to learn to read. In DocBook V5.0, the content models are expressed in RELAX NG. While RELAX NG has a compact syntax, the patterns aren't simple string substitutions. In addition, a few of the patterns exploit co-constraints, which are tricky to read.

One option for displaying them is simply to leave all the patterns in place [code for 'biblioid']. If you want to know what can go in a 'biblioid' element, you don't care what I called the patterns. The solution I reached eventually was to expand the patterns, simplify them where possible, and present them as lists... That works, mostly, but it's a bit hard to read when the list gets very long. For many DocBook elements, the list of inlines is quite long. Recently, I decided to try grouping related elements in the list; that seems to be an improvement.
[As implemented in the online book,] In a JavaScript-aware browser, you can click on the graphics to expand part or all of the list; if you click the '[x]', the grouping is removed, restoring the original presentation. On a non-JavaScript-aware browser, all of the lists are shown expanded...
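The contrast between the two schema styles can be seen in miniature below. Both fragments are simplified, hypothetical stand-ins, not the real DocBook declarations: the DTD parameter entity expands by plain string substitution, while the RELAX NG named pattern is a structured definition that other patterns (and co-constraints) can reference.

```
<!-- DTD style: a parameter entity is macro-like string substitution. -->
<!ENTITY % inline.mix "emphasis | link | code">
<!ELEMENT para (#PCDATA | %inline.mix;)*>

# RELAX NG compact syntax: named patterns, not string substitution.
db.inline = db.emphasis | db.link | db.code
db.para = element para { (text | db.inline)* }
```

Expanding `%inline.mix;` mechanically yields the readable DTD content model; expanding `db.inline` requires following pattern definitions, which is why the reference presentation has to do that expansion for the reader.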

See also: the online book


Two Ajax and XSLT Approaches: Transforming XML in Ajax with XSLT
Mark Pruett, IBM developerWorks

Part 1 of this series, 'XML processing in Ajax,' introduced a problem specification: to build a weather badge that can be inserted easily into any Web page. The weather badge is constructed using Ajax techniques and uses data provided by the United States National Weather Service (NWS). That NWS data is provided in an XML format, updated every 15 minutes. This installment looks at the second and third approaches, which share one thing in common: they both use XSLT.

The XSLT language differs from many other computer languages in that its syntax is valid XML. This can be a trifle confusing if you're used to the C, Java, Perl, or Python languages. XSLT is a language to query XML and transform it into other formats. This is precisely the problem I have with my weather data: it's packaged as XML, but I want something more user- (and browser-) friendly. And the NWS data contains a lot more information than the weather badge requires, so some technique is needed to extract just the data items I need. XSLT can handle both these requirements. As with other programming languages, like Perl or Ruby, you execute XSLT by running it through a language interpreter, often called an XSLT processor. But XSLT is not a general-purpose programming language; it exists to translate a single XML data file. So most XSLT processors require two input files: the XSLT program and the XML file it transforms...

A third approach uses some elements of the first two: an Apache Web proxy brings the XML back to the browser for further processing, and XSLT handles the actual translation from XML to HTML. Invoking an XSLT processor inside a browser is more compute-intensive than Approach 1, which merely accessed the needed data elements directly from the DOM tree. For this simple example, the extra compute time is unlikely to be noticed by the user. But if the XML is large or the XSLT translation is complex, the user might notice an unacceptable delay while the browser churns out the results. The flip side is that the code needed to manually trudge through the complex DOM tree of a large XML file can lead to JavaScript that's difficult to write and more difficult to maintain. You also need to consider the type of browser and computer your users are running. Are these high-end workstations with memory and processor power to spare? Or are they old, underpowered machines? The answers to these questions determine whether complex JavaScript XSLT processing is a good idea for your application.
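As a sketch of the XSLT side of these approaches, a stylesheet like the following could reduce an NWS-style observation document to the handful of fields a weather badge needs. The element names (`current_observation`, `location`, `temp_f`, `weather`) are illustrative assumptions, not a guaranteed match for the actual NWS schema.

```xml
<?xml version="1.0"?>
<!-- weather-badge.xsl: extract only the fields the badge displays. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html"/>
  <!-- Match the (hypothetical) document root and emit badge markup,
       discarding everything else in the feed. -->
  <xsl:template match="/current_observation">
    <div class="weather-badge">
      <span class="place"><xsl:value-of select="location"/></span>
      <span class="temp"><xsl:value-of select="temp_f"/>&#176;F</span>
      <span class="sky"><xsl:value-of select="weather"/></span>
    </div>
  </xsl:template>
</xsl:stylesheet>
```

The same stylesheet works in both XSLT approaches; what changes is only where the processor runs, on the server behind the proxy or inside the browser.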


Ajax and XML: Ajax for Tables
Jack D. Herrington, IBM developerWorks

When people think of Ajax and Web 2.0, they mostly remember the visual elements of the user experience. It's the feel of working in-place, without the page refresh, that gives Ajax its distinctive appeal. It's not completely hype: The page refresh of traditional HTML applications does cause a blink and a reload that even on the fastest connections presents a visual context change. In this article, the author shows several techniques—both with Ajax and without—that demonstrate this context change-free approach to user experience design. He starts with the simplest example of Ajax user experience, the tabbed window. The article shows a few of the different types of interface elements that you can build with Ajax, PHP, and the Prototype.js library. Tabs present the easiest way to put a lot of data in a relatively small amount of real estate. And the fantastic Prototype.js JavaScript library makes building Ajax-enabled tabbed windows in Dynamic HTML (DHTML) easy. The first demonstration of building tables with Ajax is through the use of an XML request to the server over Ajax. The value of this technique is two-fold. First, it loads the data on demand and can be updated in place, which makes for a pleasant user experience. Second, the technique requires an XML data source, which is valuable not just for the Ajax code but for any client looking to consume your data through XML.
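The in-place table update described above separates cleanly into a transport step (Prototype's Ajax.Request fetching the XML) and a rendering step that rebuilds the table body. The sketch below shows only the rendering step as a plain function so it stands alone; `buildTableRows` and the row-object shape are hypothetical names for illustration, not code from the article.

```javascript
// Hypothetical rendering helper: turn an array of row objects
// (already parsed out of the server's XML response) into <tr>/<td>
// markup. In the page, an Ajax.Request onSuccess callback would
// assign the result to the table body's innerHTML.
function buildTableRows(rows) {
  var out = [];
  for (var i = 0; i < rows.length; i++) {
    var cells = '';
    for (var key in rows[i]) {
      if (Object.prototype.hasOwnProperty.call(rows[i], key)) {
        cells += '<td>' + rows[i][key] + '</td>';
      }
    }
    out.push('<tr>' + cells + '</tr>');
  }
  return out.join('\n');
}
```

In the Prototype.js version, the transport side would look roughly like `new Ajax.Request(url, { method: 'get', onSuccess: function (t) { tbody.innerHTML = buildTableRows(parse(t.responseXML)); } })`, with `parse` standing in for the XML-walking step.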

See also: the live demo of paged tables with Ajax


E-health Consortium's New Partners for the Patient Data Repository Plan
Craig Stedman, ComputerWorld

The Dossia electronic health records consortium was dealt a blow last summer, when a highly publicized development deal that it announced in late 2006 with a nonprofit research organization ended in a legal dispute. But according to Dossia's chief technology officer, the group—which consists of Wal-Mart Stores Inc., Intel Corp., AT&T Inc. and five other large companies—is back up off the mat and plans to make the Web-based health records system broadly available to employees of its founding members later this year. The consortium also has started trying to entice other employers to join the group. And in the spring, Dossia plans to publish an API for the system and accelerate its efforts to get health care providers to agree to input the medical records of patients. Initially, the consortium is targeting health care organizations in geographic areas where its founding members have large numbers of employees. To try to ensure that only patients and the health care professionals they designate can access the medical records, Dossia is encrypting the data at both the application and database levels. It also is storing identifying information in a separate database, apart from the medical records themselves. In addition to Wal-Mart, Intel, AT&T and Cardinal Health, which sells medical instruments and supplies as well as health care software products, Dossia's current members include Applied Materials Inc., BP America Inc., Pitney Bowes Inc. and Sanofi-Aventis U.S. LLC. Dossia CTO Dave Hammond said the group is also doing development work aimed at enabling the health record system to scale "so it can support millions of people." In addition, the Dossia team is implementing "a very granular security model" that will enable patients to specify who can and can't access their data, down to the level of specific medical tests or immunizations.

See also: XML in Healthcare


Sponsors

XML Daily Newslink and Cover Pages are sponsored by:

BEA Systems, Inc. http://www.bea.com
EDS http://www.eds.com
IBM Corporation http://www.ibm.com
Primeton http://www.primeton.com
SAP AG http://www.sap.com
Sun Microsystems, Inc. http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/




Document URI: http://xml.coverpages.org/newsletter/news2008-03-11.html  —  Legal stuff
Robin Cover, Editor: robin@oasis-open.org