XML Daily Newslink. Wednesday, 27 August 2008

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Oracle Corporation http://www.oracle.com



Mapping of YANG to DSDL
Ladislav Lhotka (ed), IETF Internet Draft

This memo describes the algorithm for translating YANG data models to a schema (or multiple schemas) utilizing a subset of the DSDL schema languages together with a limited number of other annotations. The IETF NETMOD working group complements the results of the NETCONF WG by addressing the data modeling issues. The major item in the NETMOD charter is a new data modeling language called YANG. This language, being based on SMIng (RFC 3216), builds on the experience of previous network management systems, most notably SNMP. However, since NETCONF chose Extensible Markup Language (XML) as the method for encoding both configuration data and their envelope (RPC layer), this work can and should also benefit from the body of knowledge, standards and software tools that have been established in the XML world. To this end, YANG also provides an alternative syntax called YIN that is able to represent the same information using XML. Despite the syntactic differences, the information models of YANG and YIN are virtually identical and conversion between YANG and YIN is straightforward in both directions. However, having data models expressed in an XML syntax is not by itself sufficient for leveraging the existing XML know-how and tools. It is also necessary to convey the meaning of YANG models and present it in a way that the existing XML tools can understand. As a matter of fact, YANG/YIN can be viewed as yet another XML schema language. While there are several aspects that make YANG models specific to the NETCONF realm, for the most part the grammatical and semantic constraints that the models express can be equivalently represented in the general-purpose XML schema languages such as W3C XML Schema, RELAX NG, Schematron and others. Therefore, one of the chartered items of the NETMOD WG is to define a mapping from YANG to the Document Schema Definition Languages (DSDL) that is being standardized as ISO/IEC 19757. The DSDL framework comprises a set of XML schema languages that address grammar rules, semantic constraints and other data modeling aspects but also, and more importantly, can do it in a coordinated and consistent way... The aim is to map as much structural, datatyping and semantic information as possible from YANG to DSDL with annotations so that the resulting schema(s) can be used with standard XML tools for a relatively comprehensive validation of the contents of configuration datastores. The most important schema language in the DSDL framework is RELAX NG.
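
To make the goal concrete, here is a minimal sketch (not taken from the draft) of how a schema produced by such a YANG-to-DSDL mapping might be used in practice: validating the XML contents of a configuration datastore against a generated RELAX NG grammar with the Python lxml library. The file names are hypothetical placeholders.

    from lxml import etree

    # RELAX NG grammar, e.g. emitted by a YANG-to-DSDL mapping tool
    relaxng = etree.RelaxNG(etree.parse("interfaces.rng"))

    # Contents of a configuration datastore, serialized as XML
    config = etree.parse("interfaces-config.xml")

    if relaxng.validate(config):
        print("configuration is valid against the generated schema")
    else:
        print(relaxng.error_log)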

See also: the IETF NETCONF Data Modeling Language (NETMOD) Working Group


Unicode Consortium Proposes Update to Unicode Character Encoding Model
Ken Whistler, Mark Davis, Asmus Freytag (eds), Unicode Technical Report Draft

Rick McGowan of the Unicode Consortium announced the availability of a draft update to Unicode Technical Report #17, "Unicode Character Encoding Model." The review period for this new item closes on October 27, 2008. This 'Proposed Update' document, presented in color-coded diff format, revises the previous version 'tr17-5', published 2004-09-09. UTR #17, Unicode Character Encoding Model, is being updated to correct the titles for various references. The model has been resynchronized to bring it back up to date for Unicode 5.0. The text has also been edited fairly extensively for readability and consistency. If you have comments for official UTC consideration, please submit them through the Unicode feedback and reporting page. The Character Encoding Model report "describes a model for the structure of character encodings. The Unicode Character Encoding Model places the Unicode Standard in the context of other character encodings of all types, as well as existing models such as the character architecture promoted by the Internet Architecture Board (IAB) for use on the internet, or the Character Data Representation Architecture (CDRA) defined by IBM for organizing and cataloging its own vendor-specific array of character encodings. The four levels of the Unicode Character Encoding Model can be summarized as: (1) ACR (Abstract Character Repertoire): the set of characters to be encoded, for example, some alphabet or symbol set; (2) CCS (Coded Character Set): a mapping from an abstract character repertoire to a set of nonnegative integers; (3) CEF (Character Encoding Form): a mapping from a set of nonnegative integers that are elements of a CCS to a set of sequences of particular code units of some specified width, such as 32-bit integers; (4) CES (Character Encoding Scheme): a reversible transformation from a set of sequences of code units (from one or more CEFs) to a serialized sequence of bytes. The IAB model, as defined in IETF RFC 2130, distinguishes three levels: Coded Character Set (CCS), Character Encoding Scheme (CES), and Transfer Encoding Syntax (TES). However, four levels need to be defined to adequately cover the distinctions required for the Unicode character encoding model. One of these, the Abstract Character Repertoire, is implicit in the IAB model. The Unicode model also gives the TES a separate status outside the model, while adding an additional level between the CCS and the CES..."
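
The four levels are easy to see with ordinary programming-language codecs. The following short Python sketch (an illustration, not part of UTR #17) shows one coded character passing through the CCS, CEF, and CES levels: the same code point yields different code-unit sequences and byte serializations depending on the encoding form and scheme chosen.

    ch = "\u20ac"                   # EURO SIGN, code point U+20AC (CCS level)
    print(ord(ch))                  # 8364: the nonnegative integer assigned by the CCS
    print(ch.encode("utf-8"))       # b'\xe2\x82\xac': UTF-8 code units serialized as bytes
    print(ch.encode("utf-16-be"))   # b' \xac': UTF-16 code units, big-endian scheme
    print(ch.encode("utf-16-le"))   # b'\xac ': same encoding form, different byte order
    print(ch.encode("utf-16"))      # BOM plus code units: the full UTF-16 encoding scheme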

See also: XML and Unicode


A 3D Exploration of the HTML Canvas Element
Greg Travis, DevX.com

The HTML Canvas, an element of the upcoming HTML 5 specification, allows you to efficiently draw arbitrary graphics at the primitive or individual pixel level. This article shows how to implement a 3D rendering using the HTML Canvas. Vector graphics abound on the web, and they come in a variety of formats, including Flash and SVG. HTML Canvas, one of the newer incarnations, occupies a different niche from other vector graphics systems. While SVG is a declarative graphics file format that can be rendered by any kind of program and Flash is built around a complete multimedia system (including browser plug-in libraries, the ActionScript scripting language, and content-creation tools), HTML Canvas is HTML. In fact, HTML Canvas is part of the upcoming HTML 5 specification. As such, the HTML Canvas is integrated into the DOM tree, which means it can be accessed from JavaScript. Thus, the HTML Canvas allows you to do many of the things that Flash and SVG renderers can do... The HTML Canvas bridges the gap between HTML markup and individual pixels. It allows you to efficiently draw arbitrary graphics at the level of individual drawing primitives or even at the level of individual pixels. And it lets you do it right from JavaScript. This article describes the implementation of a simple 3D game using the HTML Canvas (HC). HC is currently designed for 2D graphics, but in the end, 3D graphics are rendered as 2D graphics, so HC is fine for 3D as well. And, because HC is implemented natively, you can get a pretty decent frame rate... Over the years, web designers and programmers have put a great deal of work into tricking HTML elements and CSS style declarations into doing unusual things, all in the name of pixel-accurate layout. It's always been hard to attain pixel-level accuracy, because HTML was meant to protect you from layout details. As you have learned from the examples in this article, the HTML Canvas enables you to bridge the gap between HTML markup and individual pixels... Summary, from the HTML 5 draft: "The canvas element is used in contexts where embedded content is expected. It represents a resolution-dependent bitmap canvas, which can be used for rendering graphs, game graphics, or other visual images on the fly. Authors should not use the canvas element in a document when a more suitable element is available. For example, it is inappropriate to use a canvas element to render a page heading: if the desired presentation of the heading is graphically intense, it should be marked up using appropriate elements (typically H1) and then styled using CSS and supporting technologies such as XBL..."
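
The claim that "3D graphics are rendered as 2D graphics" comes down to a perspective projection: each 3D point is reduced to 2D screen coordinates before being drawn with ordinary 2D primitives. The article's own code targets JavaScript and the canvas 2D context; the following language-neutral Python sketch shows only the projection step, with the viewer distance and scale chosen arbitrarily.

    def project(x, y, z, viewer_distance=4.0, scale=200.0, screen_w=640, screen_h=480):
        """Map a 3D point to 2D screen coordinates with a simple pinhole model."""
        factor = scale / (viewer_distance + z)   # farther points shrink toward the center
        sx = screen_w / 2 + x * factor
        sy = screen_h / 2 - y * factor           # screen y grows downward
        return sx, sy

    # Project the corners of a unit cube; the resulting 2D points could then be
    # connected with moveTo()/lineTo() calls on a canvas 2D context.
    for corner in [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]:
        print(corner, "->", project(*corner))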

See also: 'The canvas element' in the HTML 5 draft


The Digital Stakhanovite: Metadata Conundrums
Karl Dubost, W3C Blog

"There is an increasing number of people living in a digital era. Not only does the environment become digital, but their own life products are digital as well. I'm not the earliest adopter, but I have used computers since 1983, the Internet since 1991 and digital photography since 1993. I have accumulated around 418,000 emails and around 45,000 photos. Emails have basic metadata (author, subject, date), which helps to create proper indexing and search. It could certainly be refined with sophisticated search algorithms. Digital photos now have EXIF metadata (including the date and technical parameters of the camera) but not much else. My brain associates these photos, ordered in a dated space, with a list that I maintain of my very rough location. It helps me remember in which city a photo was taken, but nothing more. Here lies the challenge. Giving more precise metadata to these photographs would certainly be useful for my own consumption, but it would be a full-time job. [Looking at] what could be done automatically through the devices, location is certainly one thing that should happen more and more often; see the new Nikon Coolpix P6000 with GPS and the geolocation activity being created at W3C [Geolocation Working Group] right now. The challenges become bigger when you want to share these photos with a larger public... The difficulty is that the right solution is more social than technical. Giving meaningful alternative information for the images you put online really depends on the context... There are many possible cases. The big issue is how we design the technology so that it will accommodate a maximum of use cases (social contexts) without making it impossible for others to exist. There is not yet a definitive answer..."

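The EXIF block Dubost mentions is easy to inspect programmatically. As a small illustration (not from the blog post), the following Python sketch uses the Pillow library to pull the date and camera parameters out of a photo; "photo.jpg" is a placeholder file name.

    from PIL import Image, ExifTags

    img = Image.open("photo.jpg")
    exif = img.getexif()                       # base EXIF/TIFF tags, if present
    for tag_id, value in exif.items():
        tag = ExifTags.TAGS.get(tag_id, tag_id)
        if tag in ("DateTime", "Make", "Model"):
            print(tag, value)
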
See also: the W3C Geolocation Working Group Draft Charter


A Short Introduction to Cloud Platforms: An Enterprise-Oriented View
David Chappell, White Paper

The coming shift to cloud computing is a major change in our industry. One of the most important parts of that shift is the advent of cloud platforms. As its name suggests, this kind of platform lets developers write applications that run in the cloud, or use services provided from the cloud, or both. Different names are used for this kind of platform today, including on-demand platform and platform as a service (PaaS). Whatever it's called, this new way of supporting applications has great potential... If every development team that wishes to create a cloud application must first build its own cloud platform, we won't see many cloud applications. Fortunately, vendors are rising to this challenge, and a number of cloud platform technologies are available today. The goal of this overview is to categorize and briefly describe those technologies as they're seen by someone who creates enterprise applications... A new kind of application platform doesn't come along very often. But when a successful platform innovation does appear, it has an enormous impact. Think of the way personal computers and servers shook up the world of mainframes and minicomputers, for example, or how the rise of platforms for N-tier applications changed the way people write software. While the old world doesn't go away, a new approach can quickly become the center of attention for new applications. Cloud platforms don't yet offer the full spectrum of an on-premises environment. For example, business intelligence as part of the platform isn't common, nor is support for business process management technologies such as full-featured workflow and rules engines. This is all but certain to change, however, as this technology wave continues to roll forward. Cloud platforms aren't yet at the center of most people's attention. The odds are good, though, that this won't be true five years from now. The attractions of cloud-based computing, including scalability and lower costs, are very real. If you work in application development, whether for a software vendor or an end user, expect the cloud to play an increasing role in your future. The next generation of application platforms is here..."

See also: Chappell's blog


Who Provides What in the Cloud
John Edwards, InfoWorld

The news that AT&T has joined the rapidly growing ranks of cloud computing providers reinforces the argument that the latest IT outsourcing model is well on its way to becoming a classic disruptive technology. By enabling datacenter operators to "publish" computing resources, such as servers, storage, and network connectivity, cloud computing provides a pay-by-consumption scalable service that's usually free of long-term contracts and is typically application- and OS-independent. The approach also eliminates the need to install any on-site hardware or software. Currently dominated by Amazon.com and several small startups, cloud computing is increasingly attracting the interest of industry giants, including Google, IBM, and now AT&T. "Everyone and their dog will be in cloud computing next year," predicts Rebecca Wettemann, vice president of research at Nucleus Research, a technology research firm. Yet James Staten, an infrastructure and operations analyst at Forrester Research, warns that prospective adopters need to tread carefully in a market that he describes as both immature and evolving. Staten notes that service offerings and service levels vary widely between cloud vendors. "Shop around," he advises. "We're already seeing big differences in cloud offerings." To help cut through the confusion, we provide [in this article] a rundown of some major cloud providers, both current and planned, all offering resources that go beyond basic services such as SaaS (software as a service) applications and Web hosting...


IESG Approves Draft RFC 2822 for Internet Message Format Specification
Peter W. Resnick (ed), IETF Draft Standard

The Internet Engineering Steering Group (IESG) has announced the publication of Internet Draft 'draft-resnick-2822upd-06' ("Internet Message Format") as an IETF Draft Standard. When finalized as a new RFC, this specification will obsolete the current Standards Track RFC 2822 (Internet Message Format) published in April 2001. The RFC for 'resnick-2822upd-06' will be published together with the revision of RFC 2821, "Simple Mail Transfer Protocol." Many people contributed to this document, including folks who participated in the Detailed Revision and Update of Messaging Standards (DRUMS) Working Group of the Internet Engineering Task Force (IETF), the chair of DRUMS, the Area Directors of the IETF, and people who simply sent their comments in via email. This document specifies the Internet Message Format (IMF), a syntax for text messages that are sent between computer users, within the framework of "electronic mail" messages. The specification is an update to RFC 2822, which itself superseded RFC 822, updating it to reflect current practice and incorporating incremental changes that were specified in other RFCs such as RFC 1123. In the context of electronic mail, messages are viewed as having an envelope and contents. The envelope contains whatever information is needed to accomplish transmission and delivery. The contents comprise the object to be delivered to the recipient. This specification applies only to the format and some of the semantics of message contents. It contains no specification of the information in the envelope. However, some message systems may use information from the contents to create the envelope. It is intended that this specification facilitate the acquisition of such information by programs. This specification is intended as a definition of what message content format is to be passed between systems. Though some message systems locally store messages in this format (which eliminates the need for translation between formats) and others use formats that differ from the one specified in this specification, local storage is outside of the scope of this specification... Document Quality: The current document represents implementation experience from the past seven years of email practice since RFC 2822 was published. As an update intended to move the Internet message format to Draft Standard status, the key issue was to remove features not implemented by vendors and to tighten down the specification to represent what has been implemented.
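
The separation the specification draws between message contents (header fields plus body) and the delivery envelope is visible in most mail libraries. As a small illustration (the addresses are placeholders), the following Python sketch builds a message in the format this document defines and prints its wire form; the SMTP envelope would be supplied separately when the message is actually submitted for delivery.

    from email.message import EmailMessage
    from email.utils import formatdate, make_msgid

    msg = EmailMessage()
    msg["From"] = "alice@example.org"
    msg["To"] = "bob@example.net"
    msg["Subject"] = "Internet Message Format example"
    msg["Date"] = formatdate(localtime=True)
    msg["Message-ID"] = make_msgid()
    msg.set_content("This body, together with the header fields above, forms the\n"
                    "message contents; the SMTP envelope is constructed separately.")

    print(msg.as_string())   # the header-fields-plus-body format defined by the specification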

See also: 'rfc2821bis' for Simple Mail Transfer Protocol


A Look at the Open Virtualization Format Specification
Denise Dubie, Network World

With the VMworld 2008 Conference upon us next month, the challenges virtualization presents to IT managers are top of mind for many vendors and industry organizations. One group in particular, the Distributed Management Task Force, or DMTF, in September 2007 announced the acceptance of a draft specification that promised to simplify virtualization interoperability, security and management. According to a DMTF paper about the Open Virtualization Format Specification, the OVF "describes an open, secure, portable, efficient and extensible format for the packaging and distribution of software to be run in virtual machines (VM)." The proposed format uses existing packaging tools to combine one or more VMs with a standards-based XML wrapper that gives the virtualization platform (from VMware, Microsoft, Citrix, or others) a portable package, which includes installation and configuration parameters for the VMs. The OVF could also help IT managers understand how virtual machines have been changed throughout their lifecycle. For instance, if a VM template is cloned and that clone has changed from the master template, IT managers need to know what has changed to be able to troubleshoot performance problems on the VM. According to John Suit, Fortisphere CTO and principal founder, understanding the relationships among VMs and the history of changes a particular instance has undergone will aid IT managers looking to manage and optimize their multi-platform virtual environment. And a standard such as OVF will make it possible to track such information across heterogeneous virtualization platforms. Fortisphere develops software to prevent configuration drift and enable automated policy-based management in virtual environments. Chris Wolf, senior analyst with Burton Group: "OVF marks a new era in virtualization: one which includes never-seen-before interoperability in the enterprise. The whole IT community collectively stands to benefit from this landmark standard."
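
Conceptually, an OVF package is a set of virtual disk images plus an XML descriptor recording what they are and how the target platform should instantiate them. The following Python sketch generates a toy descriptor of that shape; the element and attribute names are simplified placeholders for illustration, not the normative OVF schema.

    import xml.etree.ElementTree as ET

    envelope = ET.Element("Envelope")
    refs = ET.SubElement(envelope, "References")
    ET.SubElement(refs, "File", {"id": "disk1", "href": "webserver-disk1.vmdk"})

    vs = ET.SubElement(envelope, "VirtualSystem", {"id": "webserver"})
    hw = ET.SubElement(vs, "VirtualHardware")
    ET.SubElement(hw, "Item", {"resource": "memory", "quantity": "2048", "units": "MB"})
    ET.SubElement(hw, "Item", {"resource": "cpu", "quantity": "2"})

    ET.ElementTree(envelope).write("webserver.ovf", xml_declaration=True, encoding="utf-8")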

See also: Open Virtual Machine Format Specification (OVF)


Flash Storage Today
Adam Leventhal, ACM Queue

The past few years have been an exciting time for flash memory. The cost has fallen dramatically as fabrication has become more efficient and the market has grown; the density has improved with the advent of better processes and additional bits per cell; and flash has been adopted in a wide array of applications... The brunt of the current effort to bring flash to primary storage has taken the form of SSDs (solid-state disks), flash memory packaged in hard-drive form factors and designed to supplant conventional drives. This technique is alluring because it requires no changes to software or other hardware components, but the cost of flash per gigabyte, while falling quickly, is still far more than hard drives. Only a small number of applications have performance needs that justify the expense... By combining the use of flash as an intent-log to reduce write latency with flash as a cache to reduce read latency, we can create a system that performs far better and consumes less power than other systems of similar cost. It's now possible to construct systems with a precise mix of write-optimized flash, flash for caching, DRAM, and cheap disks designed specifically to achieve the right balance of cost and performance for any given workload, with data automatically handled by the appropriate level of the hierarchy. It's also possible to address specific performance problems with directed rather than general solutions. Through the use of smarter software, we can build systems that integrate different technologies to extract the best qualities of each. Further, the use of smarter software will allow flash vendors to build solutions for specific problems rather than gussying up flash to fit the anachronistic constraints of a hard drive. ZFS is just one example among many of how one could apply flash as a log and a cache to deliver total system performance. Most generally, this new flash tier can be thought of as a radical form of HSM (hierarchical storage management) without the need for explicit management. [Note: See also "Flash Memories: Successes and Challenges" by Stefan K. Lai in IBM Journal of Research and Development.]
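
The tiering Leventhal describes can be pictured as a simple read path: serve from DRAM if possible, then from a flash read cache, and only then from disk, warming the faster tiers on the way back. The following Python sketch is purely conceptual (it is not ZFS code) and models each tier as a dictionary.

    def read_block(block_id, dram_cache, flash_cache, disk):
        if block_id in dram_cache:           # fastest, smallest tier
            return dram_cache[block_id]
        if block_id in flash_cache:          # flash used purely as a read cache
            data = flash_cache[block_id]
        else:
            data = disk[block_id]            # cheap, slow capacity tier
            flash_cache[block_id] = data     # warm the flash cache
        dram_cache[block_id] = data
        return data

    disk = {n: "data-%d" % n for n in range(10)}
    dram, flash = {}, {}
    print(read_block(3, dram, flash, disk))  # first read comes from disk, caches are warmed
    print(read_block(3, dram, flash, disk))  # second read is served from DRAM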

See also: SSDs for laptops


U.S. Tax Dollars at Work: Technical Flaws Hobble Watch List
Wyatt Kash, Government Computer News

A variety of technical flaws in an upgrade of the system that supports the U.S. government's terrorist watch list have drawn congressional fire and raised concerns that the entire system might be in jeopardy. The concerns are over a program called Railhead, which was intended to improve information sharing, fusing and analysis of terrorist intelligence data across government agencies. The program was designed to eventually take the place of the Terrorist Identities Datamart Environment (TIDE), which is the central data repository on international terrorists' identities. The multi-year upgrade program, valued at approximately half a billion dollars, is being led by the National Counterterrorism Center, part of the Office of the Director of National Intelligence... According to the subcommittee report, initial plans calling for replacing the legacy database and its online interface were scrapped in favor of converting the system to use XML (Extensible Markup Language). But one of two Railhead design teams raised concerns that XML would substantially increase the size of data files and slow down transmission times to the 30 separate networks accessing the system. The resulting design delays were compounded by concerns about the system's security, the fact that certain data wouldn't move to the new system, and issues concerning whether the system would properly handle sensitive but unclassified data. Recent software testing failures, though normal for a project of this nature, raised further questions about whether the system's overall design had deeper flaws... Problems with the system's development came to a head in recent weeks. The government has fired most of the 862 contractors from a variety of companies who were working on the project, according to a report in the August 22, 2008 edition of the Wall Street Journal. [Note: ComputerWorld reports that the $500 million IT project designed to prevent terrorist attacks is a failure, and cannot even handle basic search terms like 'and,' 'or,' and 'not'...]

See also: ComputerWorld


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation: http://www.ibm.com
Oracle Corporation: http://www.oracle.com
Primeton: http://www.primeton.com
Sun Microsystems, Inc.: http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2008-08-27.html
Robin Cover, Editor: robin@oasis-open.org