A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover
This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation http://www.microsoft.com
Headlines
- Call for Implementations of 'XProc: An XML Pipeline Language'
- The XBRL Mandate is Here: Is IT ready?
- XBRL International Technical Working Groups Roadmap
- W3C XML Schema Definition Language (XSD): Component Designators
- Kojax: Mobile AJAX from Microsoft?
- OpenID OAuth Extension
- An Interview with Evan Prodromou
Call for Implementations of 'XProc: An XML Pipeline Language'
Norman Walsh, Alex Milowski, and Henry Thompson (eds), W3C Technical Report
W3C's XML Processing Model Working Group now invites implementation of the Candidate Recommendation for "XProc: An XML Pipeline Language." This specification describes the syntax and semantics of XProc, a language for describing operations to be performed on XML documents. A pipeline consists of steps. Like pipelines, steps take zero or more XML documents as their inputs and produce zero or more XML documents as their outputs. The inputs of a step come from the web, from the pipeline document, from the inputs to the pipeline itself, or from the outputs of other steps in the pipeline. Document status: The XML Processing Model Working Group believes that this specification sufficiently addresses the use cases and requirements that it set out to address in the published Use Cases document. The Last Call Working Draft of this specification resulted in a number of comments, all of which have been addressed by the Working Group. The Last Call comments and their disposition are summarized in the Working Group's Disposition of Comments document. The changes made between Last Call and this Candidate Recommendation draft are highlighted in a draft with revision markup. The W3C publishes a Candidate Recommendation to indicate that the document is believed to be stable and to encourage implementation by the developer community. A test suite covering the primary XProc constructs has been created. The Working Group expects to request that the Director advance this document to Proposed Recommendation once the Working Group has completed the test suite and demonstrated at least two interoperable implementations for each test. Based on known implementations, the Working Group does not plan to request advancement to Proposed Recommendation before 01 February 2009. An ongoing implementation report will be maintained.
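As a sketch of what such a pipeline document looks like (the p: element names follow the XProc Candidate Recommendation; the choice of steps and the schema.xsd reference are illustrative assumptions), a two-step pipeline that expands XIncludes and then validates the result might be written as:

```xml
<p:pipeline xmlns:p="http://www.w3.org/ns/xproc">
  <!-- Step 1: expand any XInclude references in the source document -->
  <p:xinclude/>
  <!-- Step 2: validate the expanded document against a schema
       (schema.xsd is a hypothetical example document) -->
  <p:validate-with-xml-schema>
    <p:input port="schema">
      <p:document href="schema.xsd"/>
    </p:input>
  </p:validate-with-xml-schema>
</p:pipeline>
```

The output of the p:xinclude step flows implicitly into the validation step, illustrating the point above that a step's inputs may come from the outputs of other steps in the pipeline.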
See also: Calabash, An XProc implementation
The XBRL Mandate is Here: Is IT ready?
Ephraim Schwartz, InfoWorld
Given all the pressures IT is under, another compliance initiative may seem to be one too many. There is such a mandate: to submit financial reports using XBRL (Extensible Business Reporting Language) tags. How much will the XBRL mandate add to IT's burden? At first, the burden will be small, but it will increase over time, as will the opportunity to use XBRL for better internal operations, not just for reporting compliance. To make corporate financial information more easily available to stockholders, and to make sure companies are really reporting the same things, the federal government has mandated the use of XBRL. The first SEC deadline for public companies with a market cap of $5 billion or more to submit financial reports in interactive data, aka XBRL format, is set for December 15, 2008. A year later, most Fortune 1500 companies must provide interactive XBRL data, and a year after that, all public companies will be required to submit their annual 10-K and quarterly 10-Q financial reports as interactive data. After that, companies should expect the SEC to require more financial documents to be published in XBRL format, and other government agencies to begin mandating its use as well, says Diane Mueller, vice president of XBRL Development for JustSystems, an XML tools provider. John Stantial, assistant comptroller at the conglomerate United Technologies Corp. (UTC), expects to see the Department of Labor, the Internal Revenue Service, and the Bureau of Economic Analysis adopt XBRL as a requirement. So what must IT do to make its company's financial reporting XBRL-compliant? Under the SEC's initial reporting requirements, there is not much case for IT involvement, says Mike Willis, a partner at auditor PricewaterhouseCoopers and the founding chairman of XBRL International, a supply-chain consortium representing more than 600 companies.
Most of the initial XBRL effort is to tag the financial statements with the correct taxonomy, using XBRL markup terms, to describe a particular financial concept or fact in, say, a profit-and-loss statement. That effort is entirely a financial reporting activity, notes David Blaszkowsky, director of the SEC's Office of Interactive Disclosure. The work can be done in-house or outsourced to a financial publisher...
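To make the tagging concrete, here is a minimal sketch of an XBRL instance fragment. The xbrli namespace is the standard XBRL 2.1 instance namespace; the us-gaap prefix, the NetIncomeLoss element, the entity identifier, the period, and the amount are illustrative assumptions, not taken from any real filing or taxonomy release:

```xml
<xbrli:xbrl xmlns:xbrli="http://www.xbrl.org/2003/instance"
            xmlns:iso4217="http://www.xbrl.org/2003/iso4217"
            xmlns:us-gaap="http://xbrl.us/us-gaap/2008-03-31">
  <!-- The context identifies the reporting entity and period (values are made up) -->
  <xbrli:context id="FY2008">
    <xbrli:entity>
      <xbrli:identifier scheme="http://www.sec.gov/CIK">0000000000</xbrli:identifier>
    </xbrli:entity>
    <xbrli:period>
      <xbrli:startDate>2008-01-01</xbrli:startDate>
      <xbrli:endDate>2008-12-31</xbrli:endDate>
    </xbrli:period>
  </xbrli:context>
  <!-- The unit says the fact is measured in US dollars -->
  <xbrli:unit id="USD">
    <xbrli:measure>iso4217:USD</xbrli:measure>
  </xbrli:unit>
  <!-- One tagged fact: net income for the period, to the nearest million -->
  <us-gaap:NetIncomeLoss contextRef="FY2008" unitRef="USD"
                         decimals="-6">1234000000</us-gaap:NetIncomeLoss>
</xbrli:xbrl>
```

The point of the markup is that each figure on a profit-and-loss statement becomes one such tagged element, which software can then select, compare, and check automatically.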
See also: earlier XBRL references
XBRL International Technical Working Groups Roadmap
Staff, XBRL Announcement
Hugh Wallis (Director of Technical Standards at XBRL International Inc) published an announcement of an updated XBRL Standards roadmap. The XBRL International Standards Board (XSB) is responsible for managing the production of the consortium's technical materials. It is charged with setting priorities for the creation of new material and ensuring all material is of a uniformly high quality, with the goal of accelerating adoption of XBRL around the world. "From time to time the XBRL International Standards Board (XSB) publishes a roadmap of the planned activities for which it is responsible. The latest such roadmap has just been published and is available from the XSB page on the website." XBRL "is a language for the electronic communication of business and financial data which is revolutionising business reporting around the world. It provides major benefits in the preparation, analysis and communication of business information. It offers cost savings, greater efficiency and improved accuracy and reliability to all those involved in supplying or using financial data. XBRL is one of a family of 'XML' languages which is becoming a standard means of communicating information between businesses and on the internet. The idea behind XBRL, Extensible Business Reporting Language, is simple. Instead of treating financial information as a block of text, as in a standard internet page or a printed document, it provides an identifying tag for each individual item of data. This is computer readable. For example, company net profit has its own unique tag. The introduction of XBRL tags enables automated processing of business information by computer software, cutting out laborious and costly processes of manual re-entry and comparison. Computers can treat XBRL data 'intelligently': they can recognise the information in an XBRL document, select it, analyse it, store it, exchange it with other computers and present it automatically in a variety of ways for users.
XBRL greatly increases the speed of handling of financial data, reduces the chance of error and permits automatic checking of information. XBRL is being developed by an international non-profit consortium of approximately 450 major companies, organisations and government agencies. It is an open standard, free of licence fees. It is already being put to practical use in a number of countries and implementations of XBRL are growing rapidly around the world." XBRL has liaisons with ISO TC 68 (ISO 20022), OMG, and UN/CEFACT TBG 12.
See also: the XBRL International Standards Board
W3C XML Schema Definition Language (XSD): Component Designators
Mary Holstege and Asir S. Vedamuthu (eds), W3C Technical Report
Members of the XML Schema Working Group have published a Last Call Working Draft of the "W3C XML Schema Definition Language (XSD): Component Designators" specification. This document defines a scheme for identifying XML Schema components as specified by "XML Schema Part 1: Structures" and "XML Schema Part 2: Datatypes." Comments are welcome through January 19, 2009. This version incorporates all Working Group decisions through 2008-10-31. It has been reviewed by the Working Group, and the Working Group has agreed to its publication as a Last Call Working Draft. The following changes were made since the last public Working Draft: (1) Clarified the normalization required for canonical schema component designators and revised the EBNF accordingly. (2) Removed an unclear non-goal concerning namespace validity. (3) Fixed an error in the EBNF for ExtensionAccessor. Summary: "Part 1 of the W3C XML Schema Definition Language (XSD) defines schema components in three classes: (a) primary components: simple and complex type definitions, attribute declarations, and element declarations; (b) secondary components: attribute and model group definitions, identity-constraint definitions, and notation declarations; (c) 'helper' components: annotations, model groups, particles, wildcards, and attribute uses. In addition there is a master schema component, the schema component representing the schema as a whole; this component is referred to as the schema description component in this specification... The schema description schema component may represent the amalgamation of several distinct schema documents, or none at all. It may be associated with any number of target namespaces, including none at all. It may have been obtained for a particular schema assessment episode by de-referencing URIs given in schemaLocation attributes, by an association with the target namespace, or by some other application-specific means.
In short, there are substantial technical challenges to defining a reliable designator for the schema description, particularly if that designator is expected to serve as a starting point for the other components encompassed by that schema. The specification divides the problem of constructing schema component designators into two parts: defining a designator for an assembled schema, and defining a designator for a particular schema component or schema components, understood relative to a designated schema..."
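For a sense of what such designators look like, the draft uses an XPointer-style syntax in which optional xmlns() parts bind prefixes and an xscd() part walks from the schema description component down to a particular component along named axes. The sketch below is illustrative only: the po prefix, namespace, and component names are invented, and the exact axis names should be checked against the draft's normative grammar:

```
xmlns(po=http://example.com/purchase-order)
xscd(/schemaElement::po:purchaseOrder)

xmlns(po=http://example.com/purchase-order)
xscd(/type::po:PurchaseOrderType/schemaAttribute::orderDate)
```

The first designator identifies a hypothetical top-level element declaration; the second identifies an attribute declaration reached relative to a named complex type, showing how component designators are understood relative to a designated schema.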
See also: the W3C XML Schema Working Group
Kojax: Mobile AJAX from Microsoft?
Darryl K. Taft, eWEEK
Microsoft is reportedly working on a mobile AJAX technology code-named Kojax. The goal of Kojax is said to be to enable developers to create interactive mobile applications using a combination of Visual Studio tools and JavaScript... According to published reports, Microsoft is working on an AJAX-style mobile application development environment code-named Kojax, designed to help developers create mobile applications, purportedly for use in emerging markets. AJAX is a Web development technique used for creating interactive Web applications. The code name for the technology brings to mind Kojak, the tough, bald-headed, lollipop-licking cop from the '70s-era TV crime drama of the same name. Kojak's catchphrase was, "Who loves ya, baby?" Microsoft must be hoping AJAX developers will dig Kojax. However, the company would not comment on the project. Microsoft blogger and code-name maven Mary Jo Foley, who uncovered the Kojax name and information, said, "Kojax is a mobile development platform, according to my sources, that will allow Microsoft- and third-party-developed applets [to] run in an AJAX-like way, using a combination of Visual Studio tools and JavaScript, on Java-based mobile phones." [...] With Kojax, it is quite possible that Microsoft will offer a friendlier mobile development solution that lets developers tap into the power of the mobile browser and JavaScript. Indeed, standards organizations such as the OpenAjax Alliance, the World Wide Web Consortium and others have been working on the concept of mobile AJAX for the last year or more. Microsoft is a key member of the OpenAjax Alliance's IDE working group that is looking at issues such as mobile AJAX, said Jon Ferraiolo, an IBM engineer and director of the OpenAjax Alliance, a consortium of vendors and organizations working to promote AJAX interoperability.
OpenID OAuth Extension
Dirk Balfanz, Breno de Medeiros, David Recordon (et al, eds), Community Working Draft
This community draft specification describes a mechanism for combining an OpenID authentication request with the approval of an OAuth request token. The OpenID OAuth Extension describes how to make the OpenID Authentication and OAuth Core specifications work well together. In its current form, it addresses the use case where the OpenID Provider and the OAuth Service Provider are the same service. To provide a good user experience, it is important to present a combined authentication and authorization screen for the two protocols. This extension describes how to embed an OAuth approval request into an OpenID authentication request to permit combined user approval. For security reasons, the OAuth access token is not returned in the URL; instead, a mechanism to obtain the access token is provided. The specific mechanisms proposed are extensions to the OpenID authentication request and response, and also to the assertion verification mechanism, found in Sections 9, 10, and 11 of the "OpenID Authentication" specification (V2.0, Final, December 2007). Before requesting authentication, the Combined Consumer must have obtained an unauthorized OAuth request token (Section 6.1 of OAuth); it must also have performed OpenID discovery and (optionally) created an association with the OP, as indicated in the preamble to Section 9 of the "OpenID" spec... The proposal takes the approach of insulating each protocol from the other, both for backwards compatibility and to enable OpenID and OAuth to evolve and incorporate additional features without requiring reviews of the combined usage described here. In particular: (1) OpenID full compatibility: The OpenID identity provider (OP) MAY safely announce the endpoint supporting the OAuth extension to all relying parties, whether or not they support the extension as well. The use of a separate service-type announcement for Combined Provider endpoints provides a mechanism for auto-discovery of OAuth capabilities by RPs.
(2) OAuth token compatibility: The OAuth tokens approved via this mechanism MAY be used identically to tokens acquired through alternative mechanisms (e.g., via standard OAuth), without requiring special considerations for either functionality or security..." [Note: belated reference for a draft still in discussion as of 2008-11-25.]
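As a sketch of how the embedding looks on the wire (the openid.oauth.* parameter names and namespace URI follow the community draft; the consumer key, scope URL, and token value are made-up examples), the Combined Consumer adds a few OAuth parameters to an ordinary OpenID authentication request, and the approved request token comes back in the positive assertion:

```
# Added to the OpenID authentication request:
openid.ns.oauth=http://specs.openid.net/extensions/oauth/1.0
openid.oauth.consumer=consumer.example.com
openid.oauth.scope=http://provider.example.com/feeds/

# Returned in the positive assertion (note: no access token in the URL):
openid.ns.oauth=http://specs.openid.net/extensions/oauth/1.0
openid.oauth.request_token=ab12cd34
```

The Combined Consumer then exchanges the now-authorized request token for an access token over the regular OAuth back channel, which is why the access token itself never travels in the URL.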
See also: Internet Identity Workshop IIW2008b topics
An Interview with Evan Prodromou
Howard Wen, DevX.com
This article presents "An Interview with Evan Prodromou, the Developer Behind the Open Source Twitter Clone." Prodromou is the author of Laconica, an open source tool that lets anyone set up their own Twitter clone. He discusses the technical challenges of microblogging and why it's not a fad. Identi.ca, his current project, is his attempt to develop a free network service using shared, open data. But to the uninitiated, the site and service look, and function, very much like a clone of Twitter. The big difference is that the software running Identi.ca, which Prodromou has also been developing, Laconica, is free and open source. People can copy the code and use it on their own servers. Prodromou: "IM tends to not have a lot of persistence. You don't expect to be able to find a particular message or something that someone said in IM kept around on the web forever, and that's something that does happen with microblogging. We expect things to be persistent. There's also an expectation of real-time conversation with IM, whereas in microblogging the conversation can happen over a period of days. So it's a more extended conversation. Finally, most IM conversations are one-to-one. With microblogging, even a relatively antisocial person will usually end up with 50, 60, 100 people listening to them... As someone who's very active in open source software and open source web software, I'm very interested in how much of our online life we are putting into the 'roach motel' model: you put your data in and it won't come out... Laconica is written with PHP and MySQL. We have off-line processing daemons that do a lot of the same kind of work that the Twitter off-line daemons do. They do routing and they do sending stuff out over different channels. We don't have a dedicated queuing server built into the system right now, but that will be in an upcoming version... There's a good case to be made for using microblogging in the enterprise.
When people in different departments in a company need to know what each is doing, it's a great way to keep people up-to-date. I think corporate microblogging is going to become a big part of what people do in their company. So I don't think it's just a fad. It's going to be ubiquitous, and, if we're careful and smart, we can make it into a protocol that everyone can use, that lots of different vendors can support and implement, and that we can build on top of, instead of having competing services..."
Sponsors
XML Daily Newslink and Cover Pages sponsored by:
IBM Corporation | http://www.ibm.com |
Microsoft Corporation | http://www.microsoft.com |
Oracle Corporation | http://www.oracle.com |
Primeton | http://www.primeton.com |
Sun Microsystems, Inc. | http://sun.com |
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/