This issue of XML Daily Newslink is sponsored by:
Oracle Corporation http://www.oracle.com
MadCap Does DITA
Bob Doyle, DITA Newsletter
MadCap Software has announced a beta program and road map that will integrate DITA XML into several MadCap products, including Flare, Blaze, Analyzer, Lingo, X-Edit, and the forthcoming Team Server. The suite of tools will allow creating, managing, translating, and publishing DITA content without third-party tools like the DITA Open Toolkit. Mike Hamilton, MadCap VP of Product Management, says the new tools will work alongside any DITA tools your organization already has in place. You can import existing DITA topics or whole projects. You can analyze the content, for example locating phrases that are similar but not identical; making phrases truly identical will enhance reuse and lower translation costs in Lingo, their integration of DITA authoring with translation memory... This is a major move for MadCap, which has joined the OASIS Technical Committee and will help in the development of the DITA standard. The MadCap DITA initiative has three phases. In the first phase, MadCap Software will add DITA support to four products: (1) MadCap Flare, the company's flagship product for single-source, multi-channel publishing; (2) MadCap Blaze, its topic-based publishing application for long print documents; (3) MadCap Analyzer, for reporting on, analyzing, and proactively suggesting improvements to content; (4) MadCap Lingo, its integrated authoring and translation memory system. With MadCap Flare and Blaze, authors will be able to import DITA projects and topics as raw XML content and, using the XML editor, change the style sheets to get the desired look and structure. Authors will then have the option to publish the output as DITA content; as print formats such as Microsoft Word, DOCX, and XPS, or Adobe FrameMaker, PDF, and AIR; and as a range of HTML and XHTML online formats. MadCap's software handles the DITA transforms, so authors don't have to...
See also: MadCap DITA
High-Performance XML Parsing in Python With lxml
Liza Daly, IBM developerWorks
Python has never suffered from a scarcity of XML libraries. Since version 2.0, it has included the familiar xml.dom.minidom and related pulldom and Simple API for XML (SAX) models. Since 2.4, it has included the popular ElementTree API. In addition, there have always been third-party libraries that offer higher-level or more pythonic interfaces. While any XML library is sufficient for simple Document Object Model (DOM) or SAX parsing of small files, developers are increasingly faced with larger datasets and a need for real-time parsing of XML in a Web services context. Meanwhile, experienced XML developers may prefer XML-native languages such as XPath or XSLT for their compactness and expressivity. It would be ideal to have access to the declarative syntax of XPath while retaining the general-purpose functionality available in Python. lxml is the first Python XML library that demonstrates high-performance characteristics and includes native support for XPath 1.0, XSLT 1.0, custom element classes, and even a pythonic data-binding interface. It is built on top of two C libraries: libxml2 and libxslt. They provide most of the horsepower behind the core tasks of parsing, serializing, and transforming. Many software products come with the pick-two caveat, meaning that you must choose only two: speed, flexibility, or readability. When used carefully, lxml can provide all three. XML developers who have struggled with DOM performance or with the event-driven model of SAX now have the chance to work with higher-level pythonic libraries. Programmers coming from a Python background who are new to XML have an easy way to explore the expressivity of XPath and XSLT. Both coding styles can co-exist happily in an lxml-based application.
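A minimal sketch of the XPath 1.0 and XSLT 1.0 support the article describes; the document, element names, and expressions below are illustrative, not taken from the article:

```python
from lxml import etree

# A small illustrative document (invented for this example)
xml = b"""<catalog>
  <book id="bk101"><author>Gambardella</author><price>44.95</price></book>
  <book id="bk102"><author>Ralls</author><price>5.95</price></book>
</catalog>"""

root = etree.fromstring(xml)

# Full XPath 1.0, including functions and predicates:
# select the authors of books priced under 10
cheap = root.xpath("//book[number(price) < 10]/author/text()")
print(cheap)  # ['Ralls']

# XSLT 1.0 applied in-process: transform the catalog to plain text
xslt = etree.XML(b"""<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="book">
    <xsl:value-of select="author"/><xsl:text>&#10;</xsl:text>
  </xsl:template>
</xsl:stylesheet>""")
result = str(etree.XSLT(xslt)(root))
```

The same `etree` objects flow through parsing, XPath evaluation, and XSLT transformation, which is the "both coding styles co-exist" point the article makes.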
See also: earlier references on XML and Python
HTTP Enabled Location Delivery (HELD)
Mary Barnes, James Winterbottom (et al., eds) IETF Internet Draft
Members of the Geographic Location/Privacy (GEOPRIV) Working Group have published an updated Internet Draft of the "HTTP Enabled Location Delivery (HELD)" specification. The specification defines an extensible XML-based protocol that enables a Device to retrieve Location Information (LI) from a Location Information Server (LIS) within an access network. The protocol can be bound to any session-layer protocol, particularly those capable of MIME transport; this document describes the use of HyperText Transfer Protocol (HTTP) and HTTP over Transport Layer Security (HTTP/TLS) as transports. The location of a Device is information that is useful for a number of applications. The L7 Location Configuration Protocol (LCP) problem statement and requirements document provides some scenarios in which a Device might rely on its access network to provide location information. The LIS service applies to access networks employing both wired technology (e.g., DSL, cable) and wireless technology (e.g., WiMAX), with varying degrees of Device mobility. Section 7 provides the XML Schema Definition, presented as a formal definition of the "application/held+xml" format. The specification identifies two types of location information that may be retrieved from the LIS. Location may be retrieved by value: the Device acquires a literal location object describing its location. The Device may also request that the LIS provide a location reference in the form of a location URI or set of location URIs, allowing the Device to distribute its LI by reference. Both methods can be provided concurrently from the same LIS to accommodate application requirements for different types of location information...
A "presence" parameter may be included in the "locationResponse" message when specific locationTypes (e.g., "geodetic" or "civic") are requested or a "locationType" of "any" is requested. The LIS MUST follow the subset of the rules relating to the construction of the "location-info" element in the PIDF-LO Usage Clarification, Considerations and Recommendations document in generating the PIDF-LO for the presence parameter...
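The by-value/by-reference exchange can be sketched with the Python standard library. The namespace and element names below are taken from my reading of the HELD draft and should be treated as assumptions; this is an illustrative fragment, not a conformant client (a real Device would POST the request body over HTTP or HTTP/TLS with Content-Type application/held+xml, and the response URI here is invented):

```python
import xml.etree.ElementTree as ET

# Namespace assumed from the HELD draft
HELD_NS = "urn:ietf:params:xml:ns:geopriv:held"
ET.register_namespace("", HELD_NS)

# Build a locationRequest asking for both a literal location
# ("geodetic", by value) and a location reference ("locationURI")
req = ET.Element(f"{{{HELD_NS}}}locationRequest")
lt = ET.SubElement(req, f"{{{HELD_NS}}}locationType", exact="false")
lt.text = "geodetic locationURI"
body = ET.tostring(req, encoding="unicode")

# Parse a hypothetical locationResponse carrying location by reference
response = (
    f'<locationResponse xmlns="{HELD_NS}">'
    '<locationUriSet expires="2009-01-01T13:00:00Z">'
    '<locationURI>https://lis.example.com/loc/357yc6s64</locationURI>'
    '</locationUriSet>'
    '</locationResponse>'
)
root = ET.fromstring(response)
uris = [u.text for u in root.iter(f"{{{HELD_NS}}}locationURI")]
```

With `exact="false"` the request tolerates the LIS returning what it can; the Device would then hand the returned location URI to applications rather than the literal location object.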
XML for Publishing: Move Content to XML Without Missing a Beat
Kay Whatley, IBM developerWorks
When you move into an XML-based publishing environment, the right approach can save time and even make new publishing paradigms possible, provided you properly plan and design the structure. XML is a powerful medium for content: it turns your documents from a mash of text and objects into a sortable, adjustable, hierarchical collection of pieces. Evaluating existing unstructured content is imperative to reaching both short-term and long-term publishing goals. This article describes how to convert documents designed for print publishing into structured documents. The sections that follow cover the logical musts to ensure that publishing is possible, and even easier, following a transition to XML. The focus is on how to design structure for your content... You can analyze your documents and prepare a structure that fits them, using elements that you name and design. Alternatively, you can adopt an industry-standard structure (for example, MIL-SPEC, DITA, or DocBook). If you need to conform to an existing structure, you might need to change your content to make it fit the selected structure. To use DITA with the example given previously, you might decide to rewrite the paragraphs that follow the procedural steps... The move from unstructured publishing to structured publishing takes time and effort. If you plan in advance, you can avoid problems and ensure a smooth transition. Such a transition comes from careful consideration of options, pilot projects to gauge conversion time in advance, and advance setup for fast conversion. Ensuring that logical elements, attributes, and hierarchy are in place means publishing can continue smoothly after converting to XML.
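As a toy illustration of the kind of up-conversion the article describes, the sketch below maps flat, print-style paragraphs (identified here by invented word-processor style names) into a hierarchical XML document; the element names (document/section/title/para) are illustrative, not taken from any standard DTD:

```python
import xml.etree.ElementTree as ET

# Flat print-oriented content: (style, text) pairs, as might be
# exported from a word processor (data invented for this example)
flat = [
    ("Heading1", "Installing the Widget"),
    ("Body", "Unpack the box."),
    ("Body", "Attach part A to part B."),
    ("Heading1", "Troubleshooting"),
    ("Body", "If the light is red, reseat part A."),
]

doc = ET.Element("document")
section = None
for style, text in flat:
    if style == "Heading1":
        # Each heading opens a new section in the hierarchy
        section = ET.SubElement(doc, "section")
        ET.SubElement(section, "title").text = text
    else:
        # Body text becomes a para inside the currently open section
        ET.SubElement(section, "para").text = text

print(ET.tostring(doc, encoding="unicode"))
```

Once content is nested this way, the "sortable, adjustable, hierarchical collection of pieces" the article mentions falls out for free: sections can be reordered, extracted, or republished independently.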
Requirements of Japanese Text Layout Draft Published
Yasuhiro Anan, Hiroyuki Chiba, Junsaburo Edamoto (et al., eds), W3C TR
Participants from four W3C Groups (the CSS, Internationalization Core, SVG, and XSL Working Groups), as part of the Japanese Layout Task Force, have published an update of "Requirements of Japanese Text Layout." This document describes requirements for general Japanese layout realized with technologies like CSS, SVG, and XSL-FO. The document is mainly based on a standard for Japanese layout, JIS X 4051; however, it also addresses areas which are not covered by JIS X 4051. This draft contains most of the material which the task force intends to publish as a Group Note in December 2008. A Japanese version is also available. Learn more about W3C's Internationalization Activity. Japanese composition exhibits several differences from Western composition. Major differences include: (1) the use of not only the horizontal writing mode but also the vertical writing mode; (2) the fact that, in principle, ideographic (cl-19), hiragana (cl-15), and katakana (cl-16) characters are full-width and fixed-width, and are composed using solid setting. Accordingly, this document mainly explains the characteristics of Japanese composition. Section 1 addresses the basics of specifying Japanese text composition. Section 2 explains the characteristics of letters and symbols used in Japanese composition, their differences in vertical and horizontal writing modes, and the design and adaptation of 'kihon-hanmen'. Section 3 explains line composition methods for ideographic characters (cl-19), hiragana (cl-15), katakana (cl-16), and punctuation marks, together with ruby (inter-line pronunciation information and annotation) and the mixing of Japanese and Latin letters.
See also: the Japanese Layout Task Force
Selected from the Cover Pages, by Robin Cover
Microsoft has announced a new identity management strategy under the code name 'Geneva'. This single, simplified, claims-based identity model includes support for several standards in the federated identity space, including SAML 2.0, WS-Federation, and WS-Trust. Components include the 'Geneva' Framework for building claims-aware .NET applications, 'Geneva' Server, and Windows CardSpace 'Geneva'. A Beta release was unveiled at the Microsoft PDC and is available for download. 'Geneva' is Microsoft's open platform for simplified user access based on claims. In the Geneva context, claims "describe identity attributes and can be used to drive application and other system behaviors with an open architecture that implements the industry's shared Identity Metasystem vision... The Identity Metasystem is a shared industry vision that defines a single identity model for the enterprise, federation, and the consumer and citizen Web. Claims issued by security token services (STS) are used in the Identity Metasystem to help applications make user access decisions across applications and systems regardless of location or architecture. Claims are delivered inside security tokens produced by an STS, and can disclose identity information selectively." To maximize administrative efficiency, 'Geneva' automates federation trust configuration and management using the new harmonized federation metadata format (based on SAML 2.0 metadata) that was recently adopted by the OASIS WSFED TC. WS-Trust is provided to support Information Card based Identity Selectors from third parties, as well as Windows CardSpace. WS-Federation is required to maintain interoperability with existing federations being operated by government agencies, military organizations, and business enterprises around the world.
XML Daily Newslink and Cover Pages sponsored by:
Sun Microsystems, Inc.
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: firstname.lastname@example.org
Newsletter unsubscribe: email@example.com
Newsletter help: firstname.lastname@example.org
Cover Pages: http://xml.coverpages.org/