XML Daily Newslink. Monday, 12 July 2010

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
ISIS Papyrus http://www.isis-papyrus.com



OASIS Releases OpenDocument Version 1.2 for Public Comment
Staff, OASIS Announcement

Members of the OASIS Open Document Format for Office Applications (OpenDocument) Technical Committee have announced a 60-day public review for the Open Document Format for Office Applications (OpenDocument) Version 1.2 specification, ending September 06, 2010. OpenDocument Version 1.2 [Committee Draft 05] "specifies the characteristics of an XML-based, application-independent and platform-independent digital document file format, as well as the characteristics of software applications which read, write and process such documents. This standard is applicable to document authoring, editing, viewing, exchange and archiving, including text documents, spreadsheets, presentation graphics, drawings, charts and similar documents commonly used by personal productivity software applications."

For illustrative purposes, OpenDocument Version 1.2 "describes functionality using terminology common in desktop computing environments that contain a display terminal, keyboard and mouse, attached to a computer hosting an operating system with a graphical user interface which includes user interface controls such as input controls, command buttons, selection boxes, etc. However, the specification is not limited to such environments; it supports the use of alternative computing environments, other form factors, non-GUI consumers and producers, and the use of assistive technologies, using analogous user interface operations..."

This specification is distributed in three parts: Part 1 defines an XML schema for office documents. The XML schema defined herein is designed for transformations using XSLT and processing with XML-based tools. Part 2 defines a formula language for OpenDocument documents. OpenFormula is a specification of an open format for exchanging recalculated formulas between office applications, in particular, formulas in spreadsheet documents. OpenFormula defines data types, syntax, and semantics for recalculated formulas, including predefined functions and operations. Using OpenFormula allows document creators to change the office application they use, exchange formulas with others (who may use a different application), and access formulas far in the future, with confidence that the recalculated formulas in their documents will produce equivalent results if given equivalent inputs.
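
As a rough illustration of how a recalculated OpenFormula expression is typically carried in a spreadsheet document (a sketch only; the cell content is invented, and the table: and office: namespace usage reflects common ODF practice rather than text quoted above), the formula travels in a table:formula attribute with an "of:" prefix alongside the cached result:

```python
# Minimal sketch: an OpenFormula expression as it commonly appears on an
# ODF spreadsheet cell in content.xml. Cell values are invented.
import xml.etree.ElementTree as ET

TABLE_NS = "urn:oasis:names:tc:opendocument:xmlns:table:1.0"
OFFICE_NS = "urn:oasis:names:tc:opendocument:xmlns:office:1.0"

cell_xml = (
    '<table:table-cell '
    f'xmlns:table="{TABLE_NS}" xmlns:office="{OFFICE_NS}" '
    'table:formula="of:=SUM([.A1:.A2])" '
    'office:value-type="float" office:value="3"/>'
)

cell = ET.fromstring(cell_xml)
print(cell.get(f"{{{TABLE_NS}}}formula"))   # of:=SUM([.A1:.A2])
print(cell.get(f"{{{OFFICE_NS}}}value"))    # cached recalculated result: 3
```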

Part 3 defines the package format for OpenDocument documents: OpenDocument defines a package file to store the XML content of a document as separate parts together with associated binary data as file entries in a single package file. These file entries may be compressed to further reduce the storage taken by the package. This package is a Zip file, whose structure is described in Appendix C. OpenDocument Packages impose additional structure on the Zip file to accomplish the representation of OpenDocument Format documents. A document within a package may consist of a set of files creating a unit, for instance the set of files specified by OpenDocument Part 1. These files may be located in the root of the package, or within a directory. If they are located in the root of the package, the document they constitute is simply called the document. If they are located within a directory, the document they constitute is called a sub document. A package may contain multiple sub documents, but only a single document can be contained in the root of the package. Unless otherwise stated, the term document refers to the document contained in the root of the package, which may include sub documents..."
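
Because the Part 3 package is an ordinary Zip file, a few lines of standard-library Python are enough to look inside one; this is a sketch only, and the file name example.odt is hypothetical:

```python
# Sketch: list the file entries of an OpenDocument package and read the
# root document's content.xml. The package is a plain Zip file.
import zipfile

with zipfile.ZipFile("example.odt") as pkg:   # hypothetical file name
    for entry in pkg.namelist():
        print(entry)          # e.g. mimetype, content.xml, META-INF/manifest.xml
    content = pkg.read("content.xml").decode("utf-8")
    print(content[:200])      # first part of the root document's XML content
```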

See also: the OASIS announcement


W3C RDF Workshop Report Highlights JSON, Turtle, and Other Formats
Staff, W3C Announcement

W3C has announced the publication of a Workshop Report on the RDF Next Steps Workshop held June 26-27, 2010 in Stanford, Palo Alto, CA, hosted by the National Center for Biomedical Ontology (NCBO). Resource Description Framework (RDF), the first layer of the Semantic Web, became a W3C Recommendation in 1999. As a result of the R&D activities and the publication of newer standards like SPARQL, OWL, POWDER, or SKOS, and also due to the large-scale deployment and applications, a number of issues regarding RDF came to the fore. Some of those are related to features that are not present in the current version of RDF but which became necessary in practice (e.g., the concept of Named Graphs)... Workshop participants concluded that support for JSON, Turtle, and for 'Named Graphs' are top priorities for any future work on RDF. Participants also highlighted the importance of compatibility with existing deployment.

From the Workshop Report Executive Summary: "The Resource Description Framework (RDF), the first layer of the Semantic Web, became a W3C Recommendation in 1999. A major revision was published in 2004, adding a few features, clarifying the syntax and semantics, and retaining compatibility with existing deployment. In the years since then, RDF has been widely implemented and has been adopted in various industries for a wide range of applications. Now, in June 2010, the W3C held a workshop to gather feedback and begin to determine if another revision of RDF is warranted and, if so, which elements should be added or clarified.

The workshop submissions, presentations, and discussions indicated a strong demand for a few features to be added in a compatible manner. Participants also expressed considerable resistance to doing anything which would disrupt or confuse existing deployment efforts. While some participants expressed a strong desire to change certain elements of the design of RDF, there was general agreement that the negative impact from doing so, in nearly all cases, made it unwise.

Recommended Next Steps: The workshop submissions, presentations, and discussions indicated a strong demand for a few features to be added in a compatible manner. Based on the results of the Workshop, and on the community feedback, W3C should consider chartering an RDF Working Group at its earliest convenience to address those issues. Workshop participants urge the W3C to use the workshop summary table as guidance in the production of the working group charter. Workshop participants made it clear that any Working Group chartered should be clearly directed to respect, support, and advance existing deployment, avoiding any changes which would negatively impact current RDF users... Some high priorities include: Standardize Model for Graph Identification; Modify Semantics to Support Graph Identification; Switch to Improved Inference Rules; Standardize a JSON RDF Syntax; Make Turtle a W3C Standard; Add Graphs to Turtle; Add Graphs to RDF/XML; Specify Linked Data Style of RDF; Weakly Deprecate some RDF/XML Features; Have Explicit Support for Annotations; Align RDF Semantics with SPARQL..."
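
As a rough illustration of why standardizing Turtle was a priority, the same statement is far terser in Turtle than in RDF/XML. The sketch below uses the third-party rdflib package (an assumption on our part; it is not mentioned in the report) to parse a Turtle triple and re-serialize it as RDF/XML:

```python
# Sketch only: parse one Turtle triple and re-serialize it as RDF/XML.
# Requires the third-party rdflib package; the example data is invented.
from rdflib import Graph

turtle = """
@prefix ex: <http://example.org/> .
ex:workshop ex:recommends ex:Turtle .
"""

g = Graph()
g.parse(data=turtle, format="turtle")
print(g.serialize(format="xml"))   # the same triple, as verbose RDF/XML
```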

See also: the associated CFP


LISA Hosts Project for Open-Source Enterprise-Level Translation Tools
Staff, Localization Industry Standards Association Announcement

"IBM is partnering with LISA (Localization Industry Standards Association), Welocalize, Cisco, and Linux Solution Group e.V. (LiSoG) to create an open source project that provides a full-featured, enterprise-level translation workbench environment for professional translators.

LISA's stated mission is to develop technical open standards for the globalization process to facilitate international business. Teaming up with industry thought leaders, LISA is announcing the OpenTM2 Project. Not only does OpenTM2 provide a public and open implementation of a translation workbench environment that serves as the reference implementation of existing localization industry standards, such as TMX, it also aims to provide standardized access to globalization process management software. Along with LISA's stated mission on localization industry standards, this initiative provides LISA the opportunity to reinvigorate existing localization standards with an open reference implementation.
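
TMX, one of the exchange standards OpenTM2 is intended to reference-implement, is itself an XML format for exchanging translation memories. The sketch below shows the general shape of a translation unit and reads it with the Python standard library; the element layout follows common TMX usage and the sample segments are invented, not quoted from the announcement:

```python
# Illustrative only: the rough shape of a TMX translation unit, parsed with
# the standard library. Sample text and header values are invented.
import xml.etree.ElementTree as ET

tmx = """<tmx version="1.4">
  <header creationtool="OpenTM2" creationtoolversion="1.0" srclang="en"
          datatype="plaintext" segtype="sentence" adminlang="en" o-tmf="tm"/>
  <body>
    <tu>
      <tuv xml:lang="en"><seg>Save the document.</seg></tuv>
      <tuv xml:lang="de"><seg>Speichern Sie das Dokument.</seg></tuv>
    </tu>
  </body>
</tmx>"""

root = ET.fromstring(tmx)
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"
for tu in root.iter("tu"):
    pairs = {tuv.get(XML_LANG): tuv.findtext("seg") for tuv in tu.iter("tuv")}
    print(pairs)   # {'en': 'Save the document.', 'de': 'Speichern Sie das Dokument.'}
```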

OpenTM2, based on the open-source version of IBM TranslationManager/2, offers the first full-featured, enterprise-level translation workbench environment in the open-source world that allows translators to produce high quality translation in a cost-effective manner. With consistency in tooling and exchange standards in an open environment, translators are no longer limited to the choice of expensive proprietary software with very limited interoperability with other commercial tooling. Through partnership with other key open source initiatives in the industry, the intent is to provide an open platform for service integration from an end-to-end process point of view..."

Arle Lommel, LISA's head of Open Standards Activities: 'The announcement of OpenTM2 is a critical step toward providing a true open exchange environment for translation software technologies. It is very encouraging to see open standards being pushed to the forefront of the globalization business. An end-to-end localization technology solution will help level the playing field by providing an easier and less costly means for customers to increase their translation volumes'..."

See also: John Yunker 'Translation Memory Goes Open Source'


W3C Web Internationalization FAQs: Using 'B' and 'I' Elements of HTML5
Richard Ishida, Posting to W3C List www-international@w3.org

"The HTML5 specification redefines b and i elements to have some semantic function, rather than purely presentational. However, the simple fact that the tag names are 'b' for bold and 'i' for italic means that people are likely to continue using them as a quick presentational fix. This article explains why that can be problematic for localization (and indeed for restyling of pages in a single language), and echoes the advice in the specification intended to address those issues...

A general issue: Using 'b' and 'i' tags (elements) can be problematic because it keeps authors thinking in presentational terms, rather than helping them move to properly semantic markup. At the very least, it blurs the ideas. To an author in a hurry, it is tempting to just use one of these tags in the text to make it look different, rather than to stop and think about things like portability and future-proofing. Internationalization problems can arise because presentation may need to differ from one culture to another, particularly with respect to things like bold and italic styling... Just because an English document may use italicisation for emphasis, document titles, and idiomatic phrases in a foreign language, it doesn't follow that a Japanese translation of the document will use a single presentational convention for all three types of content. Japanese authors may want to avoid both italicization and bolding, since their characters are too complicated to look good in small sizes with these effects.

[So] You should bear in mind that the content of a 'b' markup element may not always be bold, and that of an 'i' element may not always be italic. The actual style is dependent on the CSS style definitions. You should also bear in mind that bold and italic may not be the preferred style for content in certain languages. You should not use 'b' and 'i' tags if there is a more descriptive and relevant tag available. If you do use them, it is usually better to add class attributes that describe the intended meaning of the markup, so that you can distinguish one use from another..."
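
A minimal sketch of that advice (the class names and sample sentence are invented for illustration): annotate 'b' and 'i' with class attributes that record why they are used, so that each use can be restyled independently later. The snippet uses Python's standard html.parser just to list those annotations:

```python
# Sketch of the advice above: give 'b'/'i' class attributes that describe the
# intended meaning, so a translation can restyle each use without new markup.
from html.parser import HTMLParser

sample = (
    '<p>The <i class="taxonomy">Felis silvestris catus</i> purred, '
    'which the reviewer found <b class="keyword">remarkable</b>.</p>'
)

class BIClassLister(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag in ("b", "i"):
            print(tag, dict(attrs).get("class"))

BIClassLister().feed(sample)
# prints:  i taxonomy
#          b keyword
```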

From the HTML5 draft specification: "The 'i' element represents a span of text in an alternate voice or mood, or otherwise offset from the normal prose, such as a taxonomic designation, a technical term, an idiomatic phrase from another language, a thought, a ship name, or some other prose whose typical typographic presentation is italicized... The 'b' element represents a span of text to be stylistically offset from the normal prose without conveying any extra importance, such as key words in a document abstract, product names in a review, or other spans of text whose typical typographic presentation is boldened... As with the 'i' element, authors are encouraged to use the class attribute on the 'b' element to identify why the element is being used, so that if the style of a particular use is to be changed at a later date, the author doesn't have to go through annotating each use. The 'b' element should be used as a last resort when no other element is more appropriate. In particular, headings should use the 'h1' to 'h6' elements, stress emphasis should use the em element, importance should be denoted with the strong element, and text marked or highlighted should use the mark element..."

See also: the draft HTML5 specification


Microsoft Unveils 'Turnkey' Cloud Appliance
Paul McDougall, InformationWeek

"Microsoft has unveiled a preconfigured system designed to help businesses move to cloud computing quickly and efficiently without disrupting existing IT operations. The Windows Azure platform appliance consists of the Windows Azure cloud operating system, Microsoft SQL Azure, and, according to the company, 'a Microsoft-specified configuration' of network, storage, and server hardware. HP, Dell, and Fujitsu to offer versions of the system, while eBay signs on as first customer...

HP's pact with Microsoft extends a $250 million infrastructure-to-application partnership the companies announced earlier this year. HP will provide data center hosting services for Azure users... Dell will use the Windows Azure platform appliance as a foundation through which it will deliver its Dell platform-as-a-service Cloud. Fujitsu, for its part, will deploy the appliance in its datacenters in Japan to serve its internal needs and those of some of its customers.

eBay, meanwhile, will become one of the first enterprise users of the Windows Azure platform appliance. The online auctioneer plans to employ the appliance at two of its datacenters. eBay has wrapped up a successful test of Windows Azure, and has, somewhat ironically, handed off its listings page for Apple iPads to Microsoft's public cloud..."

According to the Microsoft announcement: "Microsoft provides a comprehensive and integrated service and server platform that allows customers and partners to deploy clouds when and where they want. The Windows Azure platform offers a standardized service platform; the customizable Windows Server platform lets customers and partners build public and private clouds that leverage existing investments with maximum flexibility... The company also introduced the Microsoft Management and Virtualization Solution Incentive and the Private Cloud Deployment Kit, which offers financial rewards and guidance to partners as they build virtualization and private cloud solutions.... [Microsoft's] Muglia disclosed new details regarding Microsoft code name 'Dallas,' an information service powered by the Windows Azure platform that provides developers and information workers access to third-party premium data sets and Web services..."

See also: the Microsoft announcement


Federated Authentication Beyond The Web: Problem Statement and Requirements
Hannes Tschofenig (ed), IETF Internet Draft

An initial (level -00) Internet Draft has been published for Federated Authentication Beyond The Web: Problem Statement and Requirements. From the abstract: "It is quite common that application developers and system architects are in need of authentication and authorization support in a distributed environment. At least three parties need to cooperate, namely the end host, the identity provider, and the relying party. At the end of the exchange the identity provider asserts identity information or certain attributes to the relying party without exposing the user's long-term secret to the relying party. While the problem sounds challenging and interesting, it is not new. In fact, various IETF groups have produced specifications to solve this problem, such as Kerberos, RADIUS, and Diameter. Outside the IETF, various Single Sign-On solutions for HTTP-based applications have been developed as well..."

The typical setup for a three-party protocol is the 'Three Party Authentication Framework', so it may come as a surprise that there are actually four parties... With three-party protocols there are a number of different protocol variants possible, as the available crypto literature shows... A real-world entity is behind the end host and is responsible for establishing some form of contract with the identity provider, even if it is only as weak as completing a web form and confirming the verification email.

We assume that the identity provider and the relying party belong to different administrative domains. Very often there is some form of relationship between the identity provider and the relying party. This is particularly important when the relying party wants to use information obtained from the identity provider for authorization decisions and when the identity provider does not want to release information to every relying party (or only under certain conditions). While it is possible to have a bilateral agreement between every identity provider and every relying party, on an Internet scale this setup does require some intermediary, the 'stuff-in-the-middle'...

Is it possible to design a system that builds on top of successful protocols to provide non-Web-based protocols with a solid starting point for authentication and authorization in a distributed system? The solution MUST make use of the AAA infrastructure (RADIUS and Diameter). Ideally, modifications at AAA servers SHOULD be kept to a minimum. Modifications to the AAA infrastructure that affect operational aspects must not be made. The next requirement concerns security: the relying party must not come into possession of the long-term secret of the entity that is authenticated towards the AAA server. Since there is no single authentication mechanism that will be used everywhere, there is another associated requirement: the authentication framework MUST allow for the flexible integration of authentication mechanisms..."

See also: the Project Moonshot presentation from IETF 77


FAO and UNESCO-IOC/IODE Support Open Access DSpace Ontology Repository
Staff, UN Food and Agriculture Organization Announcement

"The United Nations agencies of FAO (Food and Agriculture Organization) and UNESCO-IOC/IODE have announced a joint initiative to provide a customized version of DSpace using standards and controlled vocabularies in oceanography, marine science, food, agriculture, development, fisheries, forestry, natural resources and related sciences.

The Hasselt University Library produced a customized version of DSpace called OceanDocs for the International Oceanographic Data and Information Exchange (IODE) of the Intergovernmental Oceanographic Commission of UNESCO (IOC) and adapted it to the standards of the Oceanographic community. The OceanDocs Network, created in 2004, now has some 50 members. The FAO customized DSpace using the AGRIS Application Profile (AP) and is developing a plug-in for the use of controlled vocabularies, such as AGROVOC, for communities in food, agriculture, development, fisheries, forestry, natural resources and related sciences.

The communities supported by FAO and UNESCO-IOC/IODE are synergistic and the standards on metadata and controlled vocabularies are similar for both. A common repository development is a logical result. Hasselt University Library will create for FAO and UNESCO-IOC/IODE a new version called AgriOceanDocs DSpace which will be available from August 1, 2010. It will integrate the previous developments of both Agencies in one customized version of DSpace.

The communities of FAO and UNESCO-IOC/IODE active in oceanography and food, agriculture, development, fisheries, forestry, natural resources and related sciences will be provided with bespoke repository software based on DSpace to offer Open Access to the literature. They will use the same high standards for metadata, thesauri and other ontologies, ensuring advanced access to the scientific publications in the field and the possibility to create new services for their researchers..."


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
ISIS Papyrus http://www.isis-papyrus.com
Microsoft Corporation http://www.microsoft.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2010-07-12.html
Robin Cover, Editor: robin@oasis-open.org