XML Daily Newslink. Tuesday, 13 May 2008

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Primeton http://www.primeton.com



Improve the Performance of Your XML Applications Using Xerces-C++
David A. Cargill and Khaled Noaman, IBM developerWorks

Xerces-C++ is a validating XML parser that is provided as a shared library. The library includes interfaces for DOM and SAX. Specifically, SAXParser is an interface for the SAX 1.0 specification; SAX2XMLReader is an interface for the SAX 2.0 specification; XercesDOMParser is an interface for the DOM specification; and DOMBuilder is an implementation of the Load interface of the DOM Level 3.0 Abstract Schemas and Load and Save specification. Xerces-C++ has four scanners: IGXMLScanner, WFXMLScanner, DGXMLScanner, and SGXMLScanner. Choosing the scanner appropriate to your scenario is important for obtaining better performance. IGXMLScanner, the default, is an all-purpose scanner that not only checks well-formedness but also validates XML documents against DTDs and/or XML Schemas. WFXMLScanner, on the other hand, handles only well-formedness checking, not grammar validation. If you're only concerned with the well-formedness of the document, use WFXMLScanner; use DGXMLScanner if you're only doing DTD validation, and SGXMLScanner if you're only doing XML Schema validation. This article provides several tips on how to use the Xerces-C++ parser to improve the performance of your applications.


Concordia Project Sponsors Entitlements Management Workshop
Staff, Concordia Announcement

The Concordia Project, a global cross-industry initiative formed by members of the standards community to drive harmonization and interoperability among open standards, policy initiatives, and protocols, today announced a public standards-based policy and entitlements management workshop taking place from 10:00am to 5:00pm at the Burton Catalyst Conference in San Diego on June 23, 2008. The public meeting is sponsored by Liberty Alliance and Burton Group and is the first Concordia event to focus on policy and entitlements management and associated standards such as XACML and WS-Policy. The interactive session will feature use-case and interoperability scenarios presented by representatives from the defense, government, and manufacturing sectors. During the June 23, 2008 workshop, early deployers of policy and entitlement management solutions will present requirements for policy management, including entitlement and fine-grained authorization in the enterprise, to a panel of policy and technology experts from the Concordia community. The panel will identify and discuss commonalities and potential options for successfully addressing the use-case scenarios. The Concordia community will then work collaboratively to prioritize the next steps involved in developing solutions to meet deployer requirements, based on the use cases presented at the workshop and those submitted to the Concordia community through its public wiki. The June 23, 2008 workshop is the sixth public face-to-face meeting Concordia members have held and follows the RSA Conference 2008 event where the community held its first interoperability demonstrations. Nearly 600 people attended that public workshop, where FuGen Solutions, Internet2, Microsoft, Oracle, Ping Identity, Sun Microsystems, and Symlabs demonstrated interoperability scenarios designed to meet deployer requirements using Information Card, Liberty Alliance, and WS-* identity protocols. Previous meetings have taken place at RSA Conference 2007, Catalyst 2007, Digital ID World, the Identity Open Space (IOS), and the Internet Identity Workshop (IIW). All organizations and individuals interested in contributing to the deployment of standardized policy frameworks and proven interoperable standards-based solutions are encouraged to attend the workshop.

See also: the Concordia Project web site


New PyAMF Release Improves Support for Google App Engine
Moxie Zhang, InfoQueue

PyAMF 0.3.1 was released this week, just in time to meet the increased interest in Python and RIA generated by the recent preview release of Google App Engine and the announcement of Adobe's Open Screen Project. PyAMF is an open source project that provides Action Message Format (AMF) support for Python. This allows for AMF-based communication between Python-powered Web servers and rich Internet application (RIA) clients in Flash, Flex, or AIR. Google App Engine (GAE) enables users to build Web applications on the same scalable systems used by Google, so that they can expand from one user to millions of users without the need to rebuild infrastructure. The release of PyAMF 0.3.1 improves support for GAE and introduces a new AMF gateway for GAE Web applications. Adobe's Open Screen Project further opens core Adobe data/file formats, such as SWF and FLV, the fundamental elements for Flash/Flex-based RIA applications. Adobe Integrated Runtime and Flash Player use AMF for communication between an application and a remote server. AMF encodes remote procedure calls (RPC) into compact binary representations that can be transferred over HTTP/HTTPS or the RTMP/RTMPS protocol. PyAMF enables the development of Flex-based RIA applications with a GAE backend, thus putting RIA onto the Google cloud computing platform.
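As a rough illustration of what an AMF gateway looks like on the Python side, the sketch below exposes a single service through PyAMF's generic WSGI remoting gateway. It is a minimal sketch, not PyAMF's documentation: the service name 'example.echo', the echo function, and the local port are invented, and under Google App Engine the same WSGI application object would be handed to the App Engine runtime (or to the GAE-specific gateway this release introduces) rather than served with wsgiref.

    # Minimal AMF remoting gateway sketch using PyAMF's WSGI gateway.
    # Assumes PyAMF is installed; service name and port are illustrative only.
    from wsgiref.simple_server import make_server
    from pyamf.remoting.gateway.wsgi import WSGIGateway

    def echo(data):
        # Return whatever the Flash/Flex/AIR client sent, unchanged.
        return data

    # Map service names (as invoked from ActionScript via RemoteObject or
    # NetConnection.call) to plain Python callables.
    services = {
        'example.echo': echo,
    }

    # The gateway is itself a WSGI application: it decodes AMF requests,
    # dispatches them to the mapped callables, and encodes AMF responses.
    application = WSGIGateway(services)

    if __name__ == '__main__':
        httpd = make_server('localhost', 8080, application)
        print('AMF gateway listening on http://localhost:8080/')
        httpd.serve_forever()

An ActionScript client would then point its remoting endpoint at that URL and invoke example.echo.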


TIBCO Demos High-Performance SCA on ActiveMatrix Service Grid
Staff, TIBCO Announcement

TIBCO Software Inc. announced a demonstration of how the combination of SCA and the TIBCO ActiveMatrix Service Grid provides a highly scalable Service Oriented Architecture (SOA) for enterprise developers looking to reap the benefits of both models. Using SCA helps in part by laying a common framework for organizing, modeling, and composing services within the enterprise. SCA goes beyond previous interoperability standards and provides a design standard that gives enterprise architects and developers a model-driven approach to creating composite service-oriented applications. "ActiveMatrix Service Grid enables companies to achieve extreme scalability by providing service virtualization through an open, extensible service platform based on a proven foundation," said Matt Quinn, senior vice president of engineering and technology strategies at TIBCO. "Service virtualization provides developers with fast and easy-to-use tools that enable them to quickly get their jobs done, not devote their limited working hours to rebuilding common components." Developers can use SCA to provide a simpler and more service-oriented approach to building business logic. ActiveMatrix Service Grid helps by providing an OSGi-based deployment environment for service enablement in high-performance application environments, allowing organizations to gain the agility and flexibility of an SOA without sacrificing performance. Using TIBCO ActiveMatrix Service Grid and SCA, developers can go beyond Java and BPEL services to also have Java coupled with .NET, offering a powerful combination for enterprise customers to manage their broader technology infrastructure.

See also: TIBCO Standards Support


Do New Web Tools Spell Doom for the Browser?
Neil McAllister, InfoWorld

Gone are the static pages and limited graphics of fifteen years ago. In their place are lush, highly interactive experiences, as visually rich as any desktop application. The Web has become the preferred platform for enterprise application delivery, to say nothing of online entertainment and social software. Example: Twhirl, a desktop client for the Twitter online service. Double-click its icon and the application launches in seconds. Its window is small and stylized, with an attractive, irregular border and configurable color schemes. What few controls it has are convenient and easy to use. It's sleek, fast, and unobtrusive. In short, it's everything that navigating to the Twitter Web site with a browser is not. Although it looks and feels like an ordinary desktop application, Twhirl's UI is rendered with HTML, CSS, Flash, and ActionScript. Essentially, it's a Web app... At Mozilla, platform evangelist Mark Finkle explores new ways for current browser technology to better meet the needs of today's Web apps. Finkle is project lead for Prism, software from Mozilla Labs that offers a middle ground between AIR's desktop integration and the traditional browser experience. Prism is a tool for creating SSBs (site-specific browsers)—applications designed to work exclusively with a single Web application, but without the menus, toolbars, and accoutrements of a normal Web browser... Despite differences in approach between AIR and Gears, Adobe and Google actually share a common vision. Both companies aim to extend the current Web browsing experience with new features that allow developers to deliver RIAs more easily. And, because Web developers, too, have diverse goals and methods, the traditional browser is unlikely to disappear as an application-delivery platform, even as desktop-based Web apps proliferate... Significantly, both Adobe and Google also rely heavily on open source code. Google Gears is 100 percent open source, while AIR incorporates the open source WebKit rendering engine and the SQLite data storage library. An important effect of this is that code contributed by one company can actually benefit the other. This informal collaboration, combined with the formal Web standards process, ensures that the future development of the Web remains a dialogue, not a debate.


Proposed Open Architecture for XML Authoring and Localization (OAXAL) TC
Staff, OASIS Announcement

OASIS announced that a draft charter has been submitted to establish a new OASIS Technical Committee, "Open Architecture for XML Authoring and Localization (OAXAL)." OAXAL represents a method to exploit technical documentation assets by extending the usefulness of core XML-related standards in a comprehensive open-standards-based architecture. OAXAL allows system builders to create an integrated environment for document creation and localization. The OAXAL TC will deliver an Open Architecture for XML Authoring and Localization Reference Model, which will demonstrate the integration of the standards listed below to present a complete automated package from authoring through translation. Authors are provided with a systematic way to identify and store all previously authored sentences and are encouraged to reuse existing sentences; such sentences are likely to have been previously translated, providing a means to increase translation matches. OAXAL makes use of several international standards, including: (1) W3C ITS, an XML vocabulary that defines translatability rules for a given XML document type. (2) xml:tm, XML-based text memory, a LISA OSCAR standard for author and translation memory. (3) SRX, Segmentation Rules Exchange, an XML vocabulary defining segmentation rules for each language. (4) GMX, Global Information Management Metrics Exchange, a LISA OSCAR standard for word and character counts and metrics exchange. (5) TMX, Translation Memory eXchange, a LISA OSCAR standard for exchanging translation memories. (6) Unicode TR29, the primary Unicode standard defining word and sentence boundaries. (7) DITA, Darwin Information Typing Architecture, an OASIS standard for component-based XML publishing. (8) XLIFF, XML Localization Interchange File Format, an OASIS standard for exchanging localization data. Further information on OAXAL is provided by Andrzej Zydron in an XML.com article, "OAXAL: Open Architecture for XML Authoring and Localization."
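To make the role of the exchange formats a little more concrete, the sketch below (standard-library Python only) turns a toy sentence store into a minimal XLIFF 1.2 file. The sentences, identifiers, and file names are invented for illustration; a real OAXAL pipeline would derive them through xml:tm text memory, SRX segmentation, and TMX leveraging rather than a hard-coded dictionary.

    # Sketch: emit a minimal XLIFF 1.2 document from a toy sentence store.
    # All content is invented; only the XLIFF element names are standard.
    import xml.etree.ElementTree as ET

    XLIFF_NS = 'urn:oasis:names:tc:xliff:document:1.2'
    ET.register_namespace('', XLIFF_NS)

    def q(name):
        # Qualify an element name in the XLIFF namespace.
        return '{%s}%s' % (XLIFF_NS, name)

    # Previously authored sentences, keyed by a stable identifier; a French
    # target is present when the translation memory already holds a match.
    sentences = {
        's1': ('Press the red button to stop the machine.',
               'Appuyez sur le bouton rouge pour arreter la machine.'),
        's2': ('Wear protective gloves at all times.', None),
    }

    xliff = ET.Element(q('xliff'), version='1.2')
    file_el = ET.SubElement(xliff, q('file'), attrib={
        'original': 'manual.xml', 'datatype': 'xml',
        'source-language': 'en', 'target-language': 'fr'})
    body = ET.SubElement(file_el, q('body'))

    for unit_id, (source, target) in sorted(sentences.items()):
        unit = ET.SubElement(body, q('trans-unit'), id=unit_id)
        ET.SubElement(unit, q('source')).text = source
        if target is not None:            # leveraged from translation memory
            ET.SubElement(unit, q('target')).text = target

    print(ET.tostring(xliff, encoding='unicode'))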

See also: the XML.com article


Extending and Versioning Languages: Compatibility Strategies
David Orchard (ed), W3C Draft TAG Finding

An updated compatible versioning strategies document was announced by the editor of the draft TAG Finding, incorporating comments from reviewers. The document focuses on providing information on how a language can be designed for forwards-compatible versioning, often the hardest type of versioning to plan for. It also provides motivation for versioning and some discourse on incompatible and backwards-compatible versioning. Separate documents contain the versioning terminology definitions and XML-specific versioning material... The evolution of languages by adding, deleting, or changing syntax or information is called versioning. Making versioning work in practice is one of the most difficult problems in computing. Arguably, the Web rose dramatically in popularity because support for evolution and versioning was built into HTML and HTTP. Both systems provide explicit extensibility points and rules for understanding extensions that enable their decentralized extension and versioning. This finding describes general problems and techniques in evolving systems in compatible ways. The terminology definitions used throughout are expressed in [the companion document on Versioning Terminology]. A number of design patterns and rules are discussed with a focus on enabling language changes such that newer version(s) of a language are processable by software that only understands the older version(s) of the language, aka forwards compatibility. There are a few crucial good practices that enable forwards-compatible versioning in a language: (1) the language should be extensible; (2) any extensions in a text of the language should have a well-defined default meaning, which often is that the extension conveys no information and can be ignored; (3) if the texts of the language contain version identifiers, then a given language version should define a set of compatible future version identifiers.
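A small sketch of practices (1)-(3) in Python, using the standard library's ElementTree: the hypothetical "person" vocabulary, its element names, and the version identifiers are all invented for illustration, but the pattern (accept a declared set of compatible version identifiers and give unknown extension elements the default meaning "ignore") is the forwards-compatibility behavior the finding describes.

    # Sketch: a version 1.0 consumer reading a later document that carries
    # extensions it has never seen. Vocabulary and version ids are invented.
    import xml.etree.ElementTree as ET

    KNOWN_ELEMENTS = {'name', 'email'}       # what the 1.0 reader understands
    COMPATIBLE_VERSIONS = {'1.0', '1.1'}     # identifiers it agrees to accept

    LATER_DOCUMENT = """\
    <person version="1.1">
      <name>Alice Example</name>
      <email>alice@example.org</email>
      <mobile>+1-555-0100</mobile>
    </person>
    """

    def read_person(xml_text):
        root = ET.fromstring(xml_text)
        version = root.get('version', '1.0')
        if version not in COMPATIBLE_VERSIONS:
            raise ValueError('incompatible document version: %s' % version)
        person = {}
        for child in root:
            if child.tag in KNOWN_ELEMENTS:
                person[child.tag] = child.text
            # Unknown extension elements (e.g. <mobile>) get the default
            # meaning "conveys nothing to this processor" and are ignored.
        return person

    print(read_person(LATER_DOCUMENT))
    # {'name': 'Alice Example', 'email': 'alice@example.org'}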

See also: Extending and Versioning Languages, Terminology


Red Hat's Fedora 9 Loads Portable Desktop On USBs
Charles Babcock, InformationWeek

Fedora has gained an application in version 9 that captures an image of a user's preferred desktop and loads it onto a USB device. The feature allows anyone with a low-cost, 1-GB or 2-GB memory stick to carry a desktop around for use on any common x86 instruction set hardware. Fedora is the community-developed version of Linux that Red Hat issues frequently to get changes and new features out into the community. Fedora project leader Paul Frields agreed that the ability to create a transportable Linux desktop on a USB device could ultimately have uses in the enterprise, but he was unwilling to predict when such a feature might find its way into the slower-moving, more carefully tested Red Hat Enterprise Linux. Nevertheless, Frields said the ability to transport a Linux desktop on a low-cost memory device opens up several possibilities among users and device manufacturers. Such an approach to desktop mobility fits in with goals of producing 'low-heat-producing, low-power-consuming mobile devices that could run off a USB key.' It would also give the Fedora project an additional way to popularize its work. Any visitor to a Fedora advocate at a trade show booth could walk away with a version of the operating system on a pocket device... Live USB images will work with the remaining memory space on a device that already contains files, although it's probably wise to have at least 1 GB available, Frields said. But 2 GB is better if a browser and a set of applications and related data are part of the desktop... Live USB images create a logical layer above the desktop image on a USB that allows changes to be made to the operating system, applications, or files that accompany them without disturbing the overall configuration of the device. It creates an overlay file system: you can add data or create documents, or update the Firefox browser for viewing the Web. Fedora images or snapshots can be downloaded from the Fedora Project URL listed on the Red Hat Web site. The operating system can be combined with such small-footprint applications as AbiWord word processing; Evolution e-mail, calendar, and address book; and Gnumeric spreadsheet.


Distributed Version Control Systems: A Not-So-Quick Guide Through
Sebastien Auvray, InfoQueue

Since Linus Torvalds' presentation at Google about 'git' in May 2007, adoption of and interest in Distributed Version Control Systems have been rising steadily. We will introduce the concept of Distributed Version Control, see when to use it, why it may be better than what you're currently using, and have a look at three actors in the area: git, Mercurial, and Bazaar. A Version Control System (or SCM) is responsible for keeping track of several revisions of the same unit of information. It's commonly used in software development to manage source code projects. The historical and first VCS of choice was CVS, started in 1986. In December 1999, in order to manage the mainline kernel sources, Linus chose BitKeeper, described as "the best tool for the job". Prior to this, Linus had been integrating each patch manually. While all its predecessors worked in a client-(central-)server model, BitKeeper was the first VCS to allow a truly distributed system in which everybody owns their own master copy. Due to licensing conflicts, BitKeeper was later abandoned in favor of git. Other systems following the same model are available: Mercurial, Bazaar, darcs, and Monotone. Distributed Version Control Systems take advantage of the peer-to-peer approach: clients can communicate with each other and maintain their own local branches without having to go through a central server/repository. Synchronization then takes place between the peers, who decide which changesets to exchange. This results in some striking differences from, and advantages over, a centralized system: (1) No canonical, reference copy of the codebase exists by default; only working copies. (2) Disconnected operation: common operations such as commits, viewing history, diffing, and reverting changes are fast, because there is no need to communicate with a central server. Even if a central server exists (for a stable, reference, or backup version), it shouldn't be queried as heavily as in a centralized VCS setup if distribution is used well. (3) Each working copy is effectively a remote backup of the codebase and change history, providing natural protection against data loss. (4) Experimental branches: creating and destroying branches are simple and fast operations. (5) Collaboration between peers is made easy.
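As a concrete, if contrived, illustration of the peer-to-peer model, the short Python script below drives the Mercurial command line to create two fully independent repositories that commit locally and then pull changesets directly from each other, with no central server involved. It is only a sketch: it assumes the 'hg' executable is installed and on the PATH, and the directory names, file contents, and user names are made up; the equivalent git or Bazaar commands follow the same shape.

    # Sketch of a peer-to-peer DVCS exchange using the Mercurial CLI.
    # Assumes 'hg' is on the PATH; all names and contents are invented.
    import os
    import subprocess
    import tempfile

    def hg(cwd, *args):
        # Run an hg command inside the given directory.
        subprocess.check_call(['hg'] + list(args), cwd=cwd)

    base = tempfile.mkdtemp()
    alice = os.path.join(base, 'alice')
    bob = os.path.join(base, 'bob')

    os.mkdir(alice)
    hg(alice, 'init')                                  # Alice's own full repository

    with open(os.path.join(alice, 'README'), 'w') as f:
        f.write('hello from alice\n')
    hg(alice, 'add', 'README')
    hg(alice, 'commit', '-m', 'initial commit', '-u', 'alice')   # purely local commit

    hg(base, 'clone', alice, bob)                      # Bob clones directly from Alice

    with open(os.path.join(bob, 'README'), 'a') as f:
        f.write('a change from bob\n')
    hg(bob, 'commit', '-m', 'bob adds a line', '-u', 'bob')      # offline, no server needed

    hg(alice, 'pull', bob)                             # peers exchange changesets directly
    hg(alice, 'update')                                # bring Alice's working copy current
    hg(alice, 'log')                                   # both peers now hold the full history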

See also: the Mercurial web site


Sponsors

XML Daily Newslink and Cover Pages are sponsored by:

BEA Systems, Inc.       http://www.bea.com
IBM Corporation         http://www.ibm.com
Primeton                http://www.primeton.com
Sun Microsystems, Inc.  http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/


