A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover
This issue of XML Daily Newslink is sponsored by:
IBM Corporation http://www.ibm.com
Headlines
- IBM Lotus Symphony Supports Open Document Format (ODF)
- W3C Launches eGovernment Activity to Help Empower Citizens
- A Simple ISO NVDL Script for Preparing ODF XML for Validation
- Yahoo Opens Address Book Interface
- Offline Web Applications
- XProc: Meta-Programming and Rube Goldberg
- A Dynamic Host Configuration Protocol (DHCP) based Location-to-Service Translation Protocol (LoST) Discovery Procedure
- OASIS Members Submit Charter for Proposed DITA Adoption TC
- Enable Firefox Extensions for the Semantic Web
- PowerBuilder as a Client for UPS Web Services
IBM Lotus Symphony Supports Open Document Format (ODF)
Staff, IBM Announcement
IBM has announced the commercial-grade, general availability of Lotus Symphony, a suite of free, ODF-based software tools for creating and sharing documents, spreadsheets, and presentations. The three core tools comprising Lotus Symphony—Lotus Symphony Documents, Lotus Symphony Spreadsheets, and Lotus Symphony Presentations—handle the majority of office productivity tasks that most people perform. Quick reference cards and online tutorials on the Lotus Symphony Web site show how easy it is to transfer documents between Symphony and Word, PowerPoint, or Excel. IBM offers a set of powerful, open APIs for extending Lotus Symphony with a wide range of plug-ins, including those built on Eclipse and the Universal Network Object (UNO) component model. These APIs can empower business people to tap into business processes such as enterprise resource planning and customer relationship management directly from their desktops. Companies and governments can integrate Lotus Symphony tools into their custom applications and connect to myriad data sources, allowing individuals to work in a single view while presenting and updating data from multiple sources instantly. IBM is offering a free developer toolkit on the Symphony site that enables individual users of Lotus Symphony, as well as independent software developers, to create plug-ins (software adaptors) and composite applications (mashups). These can transform static documents into living information streams capable of managing primary business functions such as shipping, sales, and fulfillment. This announcement affirms IBM's commitment to evolving office productivity software from static, financially draining software to a dynamic, cost-effective tool that allows businesses to invest in more innovative pursuits. Launched in September 2007, Lotus Symphony has been downloaded by nearly one million individuals in an open public Beta program. Lotus Symphony is a truly global product, available in 24 languages, developed by a worldwide team anchored in Beijing, China, and improved through the community of individual users on the Symphony Web site. While Lotus Symphony remains a free, easy download from the Web with free online, moderated support, IBM is also announcing fee-based services to support the needs of large organizations. The optional service, IBM Elite Support for Lotus Symphony 1.0, delivers unlimited remote technical support at a level consistent with other IBM software products, via an annual subscription to IBM's Passport Advantage or Passport Advantage Express volume licensing programs.
See also: the Lotus Symphony web site
W3C Launches eGovernment Activity to Help Empower Citizens
Staff, W3C Announcement
W3C has announced the launch of a new forum for governments, citizens, researchers, and other stakeholders to investigate how best to use Web technology for good governance and citizen participation. This forum is open to the public, and W3C invites any person or organization interested in eGovernment to join the new eGovernment Interest Group. The group is the culmination of several years of work by W3C in this area, including two Workshops on eGovernment in 2007, one in Europe and one in North America. eGovernment refers to the use of the Web or other information technologies by governing bodies to interact with citizens, between departments and divisions, and between governments themselves. Like any information provider, governments have found it useful and efficient to interact with customers (citizens) via the Internet, allowing them to file tax returns online, take driver's education classes, apply for a visa, and vote. Access to information, and efficient and secure interactions, contribute to fair governance. Interoperable, Open Web Standards have benefited governments around the world in the past several years, including those from W3C in the areas of XML, Semantic Web, Accessibility, Internationalization, and Mobile access. These standards make it possible for people with diverse capabilities, using various devices, to access information. Open standards also make it more likely that data will remain available long into the future, increasing the value of investments in the creation and gathering of data. Semantic Web standards in particular lend themselves to data aggregation (mashups) and thus to collaboration (planned and unplanned) among government agencies and with other eGovernment actors. Semantic Web technology also helps in the management of accountability, which can reduce errors and build trust. The new Interest Group, co-chaired by Kevin Novak (American Institute of Architects) and José M. Alonso (W3C/CTIC), will develop good practices and guidelines for the use of Open Web Standards in governance, and will identify and document where current technology does not adequately address stakeholder needs. The Interest Group will seek to work closely with other W3C Working Groups and international organizations; potential liaisons listed in the charter include the European Commission, the Organization for Economic Co-operation and Development (OECD), OASIS, the Organization of American States (OAS), the International Council for Information Technology in Government Administration (ICA), and the World Bank eDevelopment Thematic Group.
See also: the W3C eGovernment Activity
A Simple ISO NVDL Script for Preparing ODF XML for Validation
Rick Jelliffe, O'Reilly Articles
ISO Namespace Validation Dispatching Language (NVDL) is a little language for taking an XML document, sectioning it off into single-namespace sections, attaching or detaching these sections in various ways, and then sending the resulting sections to the appropriate validation scripts. NVDL solves several problems that come up with namespaces, and, as with DSRL, takes a very different approach from XSD (not to say one is better or worse: they have different capabilities and therefore may even be used together). One of these problems is that often the official schema has a wildcard to say "at this point you can put any element", but you really want to limit this to your own elements only, and you don't want to edit the official schemas (and thereby create versioning and configuration issues). Another of these issues can be found in ODF. It allows foreign elements anywhere, and in order to validate against the schemas you have to strip these out. However, this does not mean just removing the foreign elements and their children: you have to leave the non-foreign descendants in place. This is something that W3C XSD cannot really handle well. You can have a wildcard to allow foreign elements and process them laxly, so that when you come to an ODF namespace you start validating, but you don't have the capability of validating that these elements are correct against the content model you want on the parent of the wildcard. You lose synch... [The ODF spec constraint says, in part:] 'Conforming applications either shall read documents that are valid against the OpenDocument schema if all foreign elements and attributes are removed, or shall write documents that are valid against the OpenDocument schema if all foreign elements are removed before validation takes place.' Hmmm, seems like a job for NVDL... [This article's sample script shows] a nice declarative way to specify the validation pre-processing, which can actually be run with the various NVDL processors available.
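As a hedged illustration only (the article's actual script may differ; the schema filename is a placeholder; real ODF content uses many namespaces under the urn:oasis:names:tc:opendocument:xmlns: prefix, matched below via NVDL's '*' wildcard; and a complete script would also handle foreign attributes via match="attributes"), a minimal NVDL script in this spirit validates the ODF sections while unwrapping foreign elements so that their ODF descendants stay in place:

    <?xml version="1.0" encoding="UTF-8"?>
    <rules xmlns="http://purl.oclc.org/dsdl/nvdl/ns/structure/1.0"
           startMode="odf">
      <mode name="odf">
        <!-- The ODF root element starts a section validated against the schema -->
        <namespace ns="urn:oasis:names:tc:opendocument:xmlns:*">
          <validate schema="OpenDocument-schema-v1.1.rng" useMode="in-odf"/>
        </namespace>
        <anyNamespace>
          <reject/>
        </anyNamespace>
      </mode>
      <mode name="in-odf">
        <!-- Nested ODF sections are attached back into the validated content -->
        <namespace ns="urn:oasis:names:tc:opendocument:xmlns:*">
          <attach useMode="in-odf"/>
        </namespace>
        <!-- Foreign elements are unwrapped: the elements themselves go away,
             but their non-foreign descendants remain, per the ODF clause -->
        <anyNamespace>
          <unwrap useMode="in-odf"/>
        </anyNamespace>
      </mode>
    </rules>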
See also: NVDL resources
Yahoo Opens Address Book Interface
Stephen Shankland, CNET NEWS.com
Fulfilling a second major part of its promise to make the internal workings of its Web site more extroverted, Yahoo is opening the interface for its address book for outside use. The move could mean that Yahoo, struggling under business pressures but still a stronghold of Web activity, could become more tightly tied to others' Web services. For example, a programmer starting up a social networking site could use the interface to send invitations to a member's list of contacts stored at Yahoo. Yahoo users have stored more than 500 million address books, and the service is used by more than 150 million unique users each month. Opening the address book API (application programming interface) is the second major step taken so far in executing the Yahoo Open Strategy that Chief Technology Officer Ari Balogh announced in April 2008. Yahoo Open Strategy is an attempt to link the company more with other Internet activities rather than remain a sealed-off, if sprawling, Internet domain. Through its open strategy, the company envisions outside programmers building Web applications on Yahoo's site, Yahoo services being incorporated into outside applications, and social connection information within Yahoo being used more widely. Some highlights of what you can do with the API, according to the announcement: (1) Obtain unique identifiers (i.e., email addresses) to help build a social network; (2) Look up phone numbers for mobile and SMS applications; (3) Look up email addresses for content-sharing applications—for example, you can enhance the "share with friend" capability of your site, making it easy for users to look up their contacts by combining the Address Book API with the YUI auto-complete widget; (4) Make it a breeze for your users to send gifts; they can add addresses from their Yahoo! Address Book with almost no typing... XML Versioning and Validation: "All XML documents returned by the API start with an XML prolog containing both an XML version/encoding declaration, and a Document Type Declaration. The Document Type Declaration (DOCTYPE) points to a versioned external DTD. Although a DTD is referenced, the XML document is standalone. Clients of the API do not need to DTD-validate server responses at run-time. XML documents that are POSTed to the API must start with an XML prolog that contains at least the XML version and encoding declaration. A Document Type Declaration can be used, and if it is, it must match what the Address Book servers were using when that particular integration was implemented. Independently of whether a DTD is being referenced or not, the XML document must be standalone. Address Book servers will not fetch and parse a remote DTD. This use of DOCTYPE provides an informal versioning mechanism for the XML API. However, even without this mechanism, properly implemented clients should be, by design, forwards compatible, and Address Book servers will be written to be backwards compatible."
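For illustration of the versioning mechanism described above (the DOCTYPE system identifier below is hypothetical, not Yahoo's actual DTD URL, and the element names are placeholders), a response prolog of this kind would look like:

    <?xml version="1.0" encoding="utf-8" standalone="yes"?>
    <!-- Versioned external DTD reference; clients need not fetch or
         DTD-validate it at run-time, since the document is standalone -->
    <!DOCTYPE contacts SYSTEM "http://example.yahooapis.com/v1/contacts-1.0.dtd">
    <contacts>
      ...
    </contacts>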
See also: the Address Book XML/JSON API Developer Guide
Offline Web Applications
Anne van Kesteren and Ian Hickson (eds), W3C Technical Report
W3C announced the release of an "Offline Web Applications" specification published as a Working Group Note by the W3C HTML Working Group, part of the HTML Activity. Users of typical online Web applications are only able to use the applications while they have a connection to the Internet. When they go offline, they can no longer check their e-mail, browse their calendar appointments, or prepare presentations with their online tools. Meanwhile, native applications provide those features: e-mail clients cache folders locally, calendars store their events locally, presentation packages store their data files locally. In addition, while offline, users are dependent on their HTTP cache to obtain the application at all, since they cannot contact the server to get the latest copy. The HTML 5 specification provides two solutions to this: a SQL-based database API for storing data locally, and an offline application HTTP cache for ensuring applications are available even when the user is not connected to their network. This document highlights these features (SQL, offline application caching APIs, as well as online/offline events, status, and the localStorage API) from HTML 5 and provides brief tutorials on how these features might be used to create Web applications that work offline... (1) The client-side SQL database in HTML 5 enables structured data storage. This can be used to store e-mails locally for an e-mail application or for a cart in an online shopping site. The API to interact with this database is asynchronous, which ensures that the user interface doesn't lock up. Because database interaction can occur in multiple browser windows at the same time, the API supports transactions. To create a database object you use the 'openDatabase()' method on the Window object. It takes four arguments: a database name, a database version, a display name, and an estimated size, in bytes, of the data to be stored in the database. (2) Offline Application Caching APIs: The mechanism for ensuring Web applications are available even when the user is not connected to their network is the manifest attribute on the html element. The attribute takes a URI to a manifest, which specifies which files are to be cached...
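A minimal sketch of the two mechanisms together (the file names, database name, and size here are illustrative, and the syntax follows the HTML 5 draft current at the time): the page declares a cache manifest on its html element and opens a client-side database with the four-argument 'openDatabase()' call described above.

    <!DOCTYPE html>
    <html manifest="app.manifest"> <!-- URI of the cache manifest -->
    <head>
      <title>Offline notes</title>
      <script type="text/javascript">
        // openDatabase(name, version, displayName, estimatedSize in bytes)
        var db = window.openDatabase("notes", "1.0", "Offline Notes", 200000);
        // The API is asynchronous and transaction-based, so the UI never blocks
        db.transaction(function (tx) {
          tx.executeSql("CREATE TABLE IF NOT EXISTS note " +
                        "(id INTEGER PRIMARY KEY, body TEXT)");
          tx.executeSql("INSERT INTO note (body) VALUES (?)",
                        ["available offline"]);
        });
      </script>
    </head>
    <body>...</body>
    </html>

The manifest itself (served as text/cache-manifest) is simply a list of URIs to cache, beginning with the line 'CACHE MANIFEST'.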
See also: the W3C HTML Activity Statement
XProc: Meta-Programming and Rube Goldberg
Kurt Cagle, DevX.com
XProc, the XML Pipeline Language, is designed as a way of describing a set of declarative processes. This article demonstrates how XProc neatly solves a number of problems that tend to transcend working with any one single XML operational language. Declarative programming can take a little getting used to, especially if your standard mode of operation is working with languages like Java or C#. In essence, such programming requires that you think not of objects, properties, and methods but rather of rules, filters, and pipelines. Indeed, one reason that the future is looking increasingly declarative is that the web, as a network, does not lend itself well to being described as a collection of objects with methods and properties. That resistance is at least part of the reason why SOA (service oriented architecture) essentially requires that you build an entire infrastructure on top of the web just to make it work properly... XProc heralds a significant shift in the building of XML pipelines and web applications. The specification itself will likely be out either late in 2008 or early in 2009, and already a few XML database creators are exploring the deployment of XProc within their own systems, either as something that can be invoked from within other processes (such as an XQuery call) or as scriptable entities in their own right. Because of its declarative nature, it is also not hard to foresee a point in the near future where XProc will be used to marshal actions across multiple server environments, though this first specification only hints at that vision—in short, XProc has the potential to become a vehicle for larger scale multi-system orchestration. In the more immediate term, you can get a first glimpse of XProc via prototype implementations [available online]. If other standards like XSLT 2.0 and XQuery are any indication, adoption of XProc is likely to be slow at first, given the presence of commercial workflow systems, but like those two standards, it should pick up quickly once one or two solid implementations appear, as XProc addresses problems that transcend any single XML operational language. Developers are moving to ever higher levels of abstraction as programming moves beyond single-processor environments or even standard client/server architectures, and it is likely that XProc will be one of the languages leading the charge to that next level.
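To make the declarative style concrete, here is a minimal sketch of a pipeline in the working-draft XProc vocabulary (the file names are illustrative): each step implicitly reads the previous step's output, so the whole flow is expressed as a sequence of rules rather than as imperative code.

    <p:pipeline xmlns:p="http://www.w3.org/ns/xproc" name="prepare">
      <!-- Expand XInclude references in the source document -->
      <p:xinclude/>
      <!-- Validate the expanded document against a RELAX NG schema -->
      <p:validate-with-relax-ng>
        <p:input port="schema">
          <p:document href="schema.rng"/>
        </p:input>
      </p:validate-with-relax-ng>
      <!-- Transform the validated document for delivery -->
      <p:xslt>
        <p:input port="stylesheet">
          <p:document href="render.xsl"/>
        </p:input>
      </p:xslt>
    </p:pipeline>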
See also: XProc - An XML Pipeline Language
A Dynamic Host Configuration Protocol (DHCP) based Location-to-Service Translation Protocol (LoST) Discovery Procedure
H. Schulzrinne, J. Polk, H. Tschofenig (eds), IETF Internet Draft
The Internet Engineering Steering Group (IESG) announced the approval of the specification "A Dynamic Host Configuration Protocol (DHCP) based Location-to-Service Translation Protocol (LoST) Discovery Procedure" as an IETF Proposed Standard. It describes how a LoST client can discover a LoST server using DHCP. Although the LoST specification has been implemented, there are no known implementations of the DHCP-based discovery procedure; from a deployment point of view, it is likely that the DNS-based discovery procedure will be available before this document sees deployment. The document was produced by members of the IETF Emergency Context Resolution with Internet Technologies (ECRIT) Working Group, part of the Real-time Applications and Infrastructure Area. Emergency service numbers like 911 and 112 relate to an emergency service context, and depend on a broad, regional configuration of service contact methods and a geographically-constrained context of service delivery. These calls are intended to be delivered to special call centers equipped to manage emergency response. Successful delivery of an emergency service call within those systems requires both an association of the physical location of the originator with an appropriate emergency service center and call routing to deliver the call to the center. However, calls placed using Internet technologies do not use the same systems to achieve those goals, and the common use of overlay networks and tunnels (either as VPNs or for mobility) makes meeting them more challenging. There are, however, Internet technologies available to describe location and to manage call routing, and the IETF ECRIT Working Group was chartered to describe when these may be appropriate and how they may be used. The "Location-to-Service Translation Protocol (LoST)" specification describes an XML-based protocol for mapping service identifiers and geospatial or civic location information to service contact Uniform Resource Locators (URLs). LoST servers can be located anywhere, but a placement closer to the end host, e.g., in the access network, is desirable; such a placement improves the resiliency of emergency service communication in disaster situations with intermittent network connectivity. In order to interact with a LoST server, the LoST client eventually needs to discover the server's IP address. Several mechanisms can be used to learn this address, including manual configuration. In environments where the access network itself either deploys a LoST server or knows a third party that operates one, DHCP can provide the end host with a domain name. This domain name is then used as input to the DNS-based resolution mechanism described in LoST, which reuses the URI-enabled NAPTR specification. This "Discovery Procedure" document specifies a DHCPv4 and a DHCPv6 option that allows LoST clients to discover local LoST servers.
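As a sketch of the resulting discovery flow (the domain name and URI below are invented for illustration): DHCP hands the host a domain name, and the host feeds that name into the URI-enabled NAPTR resolution that LoST defines, which might yield a record along these lines:

    ; Hypothetical U-NAPTR record mapping the DHCP-supplied domain
    ; to the HTTPS URI of a nearby LoST server
    example.net.  IN NAPTR 100 10 "u" "LoST:https" "!.*!https://lost.example.net/lost!" .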
See also: the Location-to-Service Translation Protocol (LoST)
OASIS Members Submit Charter for Proposed DITA Adoption TC
Staff, OASIS Announcement
A new "DITA Adoption Technical Committee" has been proposed by fourteen members of OASIS. According to the draft Charter, the OASIS DITA Adoption Technical Committee members "will collaborate to provide expertise and resources to educate the marketplace on the value of the DITA OASIS standard. By raising awareness of the benefits offered by DITA, the Technical Committee increases the demand for, and availability of, DITA conforming products and services, resulting in a greater choice of tools and platforms and expanding the DITA community of users, suppliers, and consultants. Since DITA adoption is stronger in the US than in the rest of the world, especially the European Union, the Technical Committee will actively solicit participation from non-US members and help to facilitate providing information promoting DITA adoption globally... The DITA Adoption Technical Committee is closely allied with the DITA Technical Committee. The DITA TC is responsible for the development, maintenance, and enhancement of the DITA specification and the language reference. As such, the DITA TC concentrates on clearly defining and developing the technical content of the specification. It also supports the work of several subcommittees that are creating industry-specific specializations of the DITA specification. The DITA Adoption TC concentrates on the promotion of the DITA standard to the global user community, helping to encourage DITA adoption in new industries, new areas of content creation, and new organizations. To this end, the DITA Adoption TC's focus is on building public awareness of the standard, educating potential users in the standard, and ensuring that miscommunications that may exist are quickly corrected."
See also: DITA references
Enable Firefox Extensions for the Semantic Web
Rob Crowther, IBM developerWorks
The upcoming Firefox 3.0 release has built-in support for microformats in the form of an API that you can access from a Firefox extension. This article provides a simple example showing how to use this API from within your extension code. We take a skeleton 'Hello World' extension and give it the ability to store an hCard from any Web page and then use that stored hCard to populate a Web form. To follow along with this tip you need a basic understanding of how extensions are built for Firefox. Fortunately, if you write JavaScript and HTML, you already have almost all the knowledge you need. We take a standard Firefox extension template and quickly give it the ability to use the hCard microformat—thanks to the new APIs in Firefox 3.0. You can see that the API makes manipulating microformat data very easy. With very little code, you can build an extension that is a massive time saver, and adding similar features to your own extensions takes very little work. As a next step, you might consider generalizing the paste action to provide a mapping file for any form, allowing an hCard microformat to fill it. A shared repository of mappings could vastly simplify the process of filling in forms on the Web.
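As a hedged sketch of what such extension code looks like (based on the Microformats.js module that Firefox 3.0 ships; the options object and the alert here are purely illustrative), chrome-privileged code can read hCards from the current page roughly like this:

    // Import Firefox 3.0's built-in microformats module into extension code
    Components.utils.import("resource://gre/modules/Microformats.js");

    var doc = gBrowser.contentDocument;             // page in the active tab
    var cards = Microformats.get("hCard", doc, {}); // all hCards in the page
    if (cards.length > 0) {
      // An hCard object exposes vCard-style properties such as fn
      alert("First hCard on this page: " + cards[0].fn);
    }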
See also: the hCard microformat documentation
PowerBuilder as a Client for UPS Web Services
Victor A. Reinhart, SYS-CON JDJ
"Does your shipping department have any of these problems: Extra charges due to incorrect addresses? Difficulty tracking packages? How about duplicate entries? If your shipping department is really ancient, you may even have rolls of pre-printed UPS shipping labels. UPS offers a totally free solution where you directly access their Web Service using XML. All data is secured via HTTPS. There's no software to install, no expiration date, and no proprietary database. It's a lightweight elegant solution. In case you have problems, UPS provides support. Imagine that, support for a free solution! This technology is mature—in three years, we've never had to change our application due to a change in the XML specs. To get started, go to 'ups.com', Business Solutions, Portfolio of Services, and pick UPS Online Tools. This article covers the UPS Shipping Tool. You'll need to get this tool from UPS and sign up with them... This solution [as sketched in this article] has served us well. All we need now is a label printer for each PC. eBay has many of these printers for sale and even our oldest printer has never had a problem. As an added bonus, the Zebra printer has its own language, which lets us print other labels from PowerBuilder as well. You can also use the 'Shipping Tool' to void a shipment. We use the Tracking Tool too. Every day, a batch job uses a DataWindow to select all outstanding shipments. For each one, it uses the Tracking Tool and looks up the status based on the Tracking Number. The response tells us whether the package was delivered, and when. We store this right back in our database. This shows that PowerBuilder can act as a client for the UPS Web Services. Using these same techniques, you could also code for FedEx or other Web Services too."
See also: the earlier UPS Online Tools notice
Sponsors
XML Daily Newslink and Cover Pages are sponsored by:
BEA Systems, Inc. | http://www.bea.com
IBM Corporation | http://www.ibm.com
Primeton | http://www.primeton.com
Sun Microsystems, Inc. | http://sun.com
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/