The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: August 17, 2010
XML Daily Newslink. Tuesday, 17 August 2010

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation

W3C Leads Discussion at TypeCon 2010 on New Web Open Font Format (WOFF)
Staff, W3C Announcement

"W3C is attending TypeCon 2010 this week for community discussion about Web Open Font Format (WOFF), the new open format for enabling high-quality typography for the Web. WOFF expands the typographic palette available to Web designers, improving readability, accessibility, internationalization, branding, and search optimization. Though still in the early phases of standardization, WOFF represents a pivotal agreement among browser vendors, foundries and font service providers who have convened at W3C to address the long-standing goal of advancing Web typography.

Rich typographic choice, the ability to preserve brand identity online, and improved readability of Web content stand out as the most visible benefits of improved Web typography. However, styling real text instead of using images of text provides many other benefits. Text may be rendered as speech, which improves accessibility for people who are blind or have low vision. Real text is discoverable through search engines. There are also many written languages for which widely available typefaces have not existed; WOFF will thus make it possible to create content for the Web in more of the world's languages.

The mission of the Web Fonts Working Group, part of the Fonts Activity, is to develop specifications that allow the interoperable deployment of downloadable fonts on the Web. Existing specifications (CSS3 Fonts, SVG) explain how to describe and link to fonts, so the main focus will be the standardisation of font formats suited to the task, and a specification defining conformance (for fonts, authoring tools, viewers ...) covering all the technology required for WebFonts.

As the relevant specifications are all implemented, and either standardised (OpenType by ISO/IEC 14496-22:2009, SVG by the SVG 1.1 Recommendation) or mature (WOFF, EOT, CSS3 Fonts) the group would be chartered to only make the minimal changes needed for interoperability and standardisation. In addition, the provision of interoperable font formats would allow the testing of CSS3 Fonts, speeding it to Recommendation status..."

See also: the WOFF Frequently Asked Questions (FAQ) document

Typographic Pizzazz: Coming to a Web Near You
Stephen Shankland, CNET

"Your favorite font could soon be coming to the Web. That's because of a new technology called Web Open Font Format, or WOFF, that has attracted support from all the right players: browser makers, standards groups, typography designers, and online services to ease licensing. The technology, just now ready enough to use, is making something of a debut this week at the TypeCon conference in Los Angeles. WOFF supports use of different typefaces for much closer control over typography. Sometimes it involves choosing the most practical typeface for a job -- say, readability of long tracts of text or legibility of headlines. Sometimes it involves bringing style to words. WOFF packages up a category of fonts called Sfnt ('spline font') that includes fonts encoded with OpenType, TrueType, and Open Font Format technology.
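As background to the sfnt packaging mentioned above, a WOFF file begins with a fixed 44-byte header that can be decoded in a few lines of Python. This is a minimal sketch following the field layout in the WOFF File Format 1.0 specification; the demo bytes below are synthetic, not taken from a real font:

```python
import struct

# WOFF 1.0 header: 44 bytes, big-endian, field names per the W3C spec.
WOFF_HEADER = struct.Struct(">4sIIHHIHHIIIII")
FIELDS = ("signature", "flavor", "length", "numTables", "reserved",
          "totalSfntSize", "majorVersion", "minorVersion",
          "metaOffset", "metaLength", "metaOrigLength",
          "privOffset", "privLength")

def parse_woff_header(data: bytes) -> dict:
    """Parse the fixed-size WOFF header into a field-name -> value dict."""
    fields = dict(zip(FIELDS, WOFF_HEADER.unpack_from(data)))
    if fields["signature"] != b"wOFF":
        raise ValueError("not a WOFF file")
    return fields

# Synthetic header for demonstration: an OpenType/CFF flavor ('OTTO'),
# 12 tables, version 1.0, no metadata or private-data blocks.
demo = WOFF_HEADER.pack(b"wOFF", 0x4F54544F, 400, 12, 0,
                        1000, 1, 0, 0, 0, 0, 0, 0)
hdr = parse_woff_header(demo)
print(hdr["numTables"])  # → 12
```

The `flavor` field carries the sfnt version of the wrapped font, which is how one WOFF container can package OpenType, TrueType, or Open Font Format data.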

WOFF grew out of cooperation among Erik van Blokland from type foundry LettError, Tal Leming from type foundry Type Supply, and Jonathan Kew of Mozilla. It's steadily accumulated allies, and some final pieces have now fallen into place...

For example, in browser support, Apple has added support in prototype builds of WebKit, the browser engine used by Safari. The four other major browsers already had signed up for WOFF. Adobe also said it will offer several Adobe fonts for Web use through a font subscription service called TypeKit... On the browser front, Mozilla was first, but Opera and, notably, Microsoft signed on as well. Next came Google's Chrome. Apple didn't comment for this story, but the fact that the nightly build of Safari's WebKit supports WOFF speaks volumes...

Most folks are familiar with picking fonts in word processors and other applications. Yet only now, after several false starts, is the possibility being built into the Web. Web designers today have significant limitations when it comes to fonts. The biggest one is the limited 'Web-safe' list of fonts that can be expected on most computers. Among them are Arial, Verdana, Georgia, Times New Roman, Trebuchet. To be sure, HTML permits tags that invoke boldface and italic, but that's very far removed from what, for example, a magazine layout designer might expect..."

See also: the WOFF File Format 1.0 specification

Public Review of SAML Profiles: Metadata and Service Provider Protocol
Scott Cantor (ed), OASIS Public Review Drafts

Members of the OASIS Security Services (SAML) Technical Committee have published two approved Committee Drafts for public review through October 13, 2010. OASIS encourages feedback from potential users, developers and others, whether OASIS members or not, for the sake of improving the interoperability and quality of OASIS technical work. Interested parties are invited to join the TC as it continues further development of its specifications. The OASIS IPR Policy applies to the work of this technical committee; OASIS invites anyone aware of IPR claims that may be essential to the implementation of these specifications to disclose them, so that notice can be posted to the notice page for this TC's work.

SAML V2.0 Metadata Profile for Algorithm Support Version 1.0 profiles the SAML V2.0 Metadata specification, which includes an element allowing entities to describe the XML Encryption algorithms they support. This specification defines metadata extension elements to enable entities to describe the XML Signature algorithms they support, and a profile for using both elements to enable better algorithm agility for profiles that rely on metadata... The use of the 'md:EncryptionMethod' element in the SAML V2.0 Metadata specification is not completely defined, and there is no comparable support for communicating the XML Signature algorithms supported by an entity... There are more general standards for the description of security requirements of communicating endpoints, such as 'WS-SecurityPolicy'. This specification is not intended as a replacement for such mechanisms, but is directed at systems with fewer requirements that are already designed around SAML V2.0 Metadata.
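As a sketch of what such an extension might look like on the wire, the snippet below builds a metadata fragment with Python's standard ElementTree. The extension namespace URI and element names ('SigningMethod', 'DigestMethod') are assumptions based on the draft's description above; check them against the published Committee Draft before relying on them:

```python
import xml.etree.ElementTree as ET

MD = "urn:oasis:names:tc:SAML:2.0:metadata"
# Assumed extension namespace for the algorithm-support profile:
ALG = "urn:oasis:names:tc:SAML:metadata:algsupport"

ET.register_namespace("md", MD)
ET.register_namespace("alg", ALG)

# An entity advertising which signature and digest algorithms it supports.
entity = ET.Element(f"{{{MD}}}EntityDescriptor",
                    {"entityID": "https://sp.example.org"})
ext = ET.SubElement(entity, f"{{{MD}}}Extensions")
ET.SubElement(ext, f"{{{ALG}}}SigningMethod",
              {"Algorithm": "http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"})
ET.SubElement(ext, f"{{{ALG}}}DigestMethod",
              {"Algorithm": "http://www.w3.org/2001/04/xmlenc#sha256"})

xml_out = ET.tostring(entity, encoding="unicode")
print(xml_out)
```

A relying party that consumes this metadata could then prefer SHA-256-based algorithms when signing messages for this entity, which is the "algorithm agility" the profile is after.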

Service Provider Request Initiation Protocol and Profile Version 1.0 defines a generic browser-based protocol by which a request can be made to a service provider to initiate a protocol-specific request for authentication, and to ask that particular options be used when making such a request. Modern standards for browser-based Single Sign-On (SSO) typically include the ability to initiate the authentication process from either the identity provider (IdP) or service provider (SP) participating in the exchange. However, the standards typically lack a defined mechanism for asking either end to actually initiate the process, relying on proprietary interfaces, or on the user agent accessing a protected resource at the service provider.

IdP-initiated SSO assumes a variety of information is known at the time of a request, including the identity provider itself and its location, protocol features and binding/profile details to apply, how to express the desired resource to access, etc. In general, it suffers by leaving the service provider 'out of the loop' in formulating the request and applying its own decision-making in doing so. On the other hand, SP-initiated SSO suffers from a lack of standardization, particularly when support for 'deep-linking', or unauthenticated access to resources within a protected system, is lacking. Many complex deployments are unable to fully support direct access in that fashion, and require special conventions or work-arounds that are often propagated to links constructed outside of the affected site, creating brittle links and maintenance challenges. A standard protocol for invoking the SSO functionality available at a service provider in an abstracted, protocol-neutral fashion solves both problems..."
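The request-initiation idea reduces to a browser redirect carrying a few query parameters. A minimal sketch follows, with hypothetical parameter names ('target', 'entityID') standing in for whatever the draft normatively defines:

```python
from urllib.parse import urlencode

def build_initiation_url(sp_endpoint: str, target: str, idp_entity_id: str) -> str:
    """Compose a browser redirect asking the SP to start SSO toward a
    chosen IdP and then land on a 'deep' protected resource.

    The parameter names used here are illustrative placeholders, not
    necessarily the draft's normative query-string names.
    """
    return sp_endpoint + "?" + urlencode(
        {"target": target, "entityID": idp_entity_id})

url = build_initiation_url(
    "https://sp.example.org/Login",
    "https://sp.example.org/reports/2010/q2",
    "https://idp.example.edu/idp")
print(url)
```

Because the SP, not the link author, translates this into a protocol-specific authentication request, links built this way stay valid even when the SP changes bindings or profile options, which is the brittleness problem described above.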

See also: the Service Provider Request Initiation Protocol

Implementation Report for DomainKeys Identified Mail (DKIM) Signatures
Murray S. Kucherawy (ed), IETF Internet Draft

An RFC4871 Implementation Report has been published by IETF as an initial level -00 Internet Draft, supporting the work of a rechartered IETF Working Group 'IETF Domain Keys Identified Mail (DKIM)' and a revision of the "DomainKeys Identified Mail (DKIM) Signatures" RFC published as a Proposed Standard in May 2007. The new document contains an implementation report for the IESG covering DKIM in support of the advancement of that specification along the Standards Track.

DomainKeys Identified Mail (DKIM) "permits a person, role, or organization that owns the signing domain to claim some responsibility for a message by associating the domain with the message. This can be an author's organization, an operational relay or one of their agents. DKIM separates the question of the identity of the signer of the message from the purported author of the message. Assertion of responsibility is validated through a cryptographic signature and querying the signer's domain directly to retrieve the appropriate public key. Message transit from author to recipient is through relays that typically make no substantive change to the message content and thus preserve the DKIM signature.
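The key-retrieval step quoted above is concrete enough to sketch: per RFC 4871, the verifier queries DNS for a TXT record at <selector>._domainkey.<domain>, where both values come from the signature header's s= and d= tags. A short Python helper (the sample header is invented, with its bh= and b= values elided):

```python
def dkim_key_query(signature_header: str) -> str:
    """Return the DNS name a verifier queries for the signer's public key,
    built from the s= (selector) and d= (domain) tags of a DKIM-Signature
    header, per RFC 4871."""
    tags = dict(item.split("=", 1)               # maxsplit=1: b= values
                for item in signature_header.split(";")  # contain '=' padding
                if "=" in item)
    tags = {k.strip(): v.strip() for k, v in tags.items()}
    return f"{tags['s']}._domainkey.{tags['d']}"

sig = "v=1; a=rsa-sha256; d=example.com; s=mail2010; c=relaxed/simple; bh=...; b=..."
print(dkim_key_query(sig))  # → mail2010._domainkey.example.com
```

The TXT record found at that name carries the public key used to validate the signature, which is how DKIM ties responsibility for a message to the signing domain without any prior relationship between sender and verifier.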

The IETF DKIM specification has now reached a level of maturity sufficient to consider its advancement along the standards track. Enclosed is a summary of collected interoperability data provided from sources that are aggregating such information as well as from a more formal DKIM interoperability event that took place in October 2007. That interoperability event included participants from all of the following organizations: Alt-N Technologies, AOL, AT&T Inc., Bizanga Ltd., Brandenburg InternetWorking, Brandmail Solutions, ColdSpark, Constant Contact, Inc., DKIMproxy, Domain Assurance Council, Google Inc., ICONIX Inc., Internet Initiative Japan (IIJ), Ironport Systems, Message Systems, Port25 Solutions, Postfix, Sendmail, Inc., StrongMail Systems, and Yahoo! The handful of interoperability issues that did arise pointed to weaknesses or ambiguities in DKIM and resulted in several errata being opened via the RFC Editor web site. These are being addressed in an RFC4871bis draft effort that is now starting from within the DKIM working group..."

The IETF Domain Keys Identified Mail (DKIM) Working Group has been rechartered to "switch its focus to refining and advancing the DKIM protocols. The current deliverables for the DKIM working group are: (1) Advance the base DKIM protocol (RFC 4871) to Draft Standard—as a first priority for the working group. (2) Collect data on the deployment, interoperability, and effectiveness of the base DKIM protocol, with consideration toward updating the working group's informational documents. (3) Collect data on the deployment, interoperability, and effectiveness of the Author Domain Signing Practices protocol (RFC 5617), and determine if/when it's ready to advance on the standards track, update it at Proposed Standard, advance it to Draft Standard, deprecate it, or determine another disposition, as appropriate. (4) Update the overview and deployment/operations documents; these are considered living documents, and should be updated periodically, as we have more real-world experience. (5) Consider issues related to mailing lists, beyond what is already documented. This includes considerations for mailing list software that supports or intends to support DKIM, as well as considerations for DKIM/ADSP deployment in the presence of mailing lists that do not have such support..."

See also: the bis-00 version of DomainKeys Identified Mail (DKIM) Signatures

DCMI Forms Metadata Provenance Task Group
Michael Panzer, Dublin Core Metadata Initiative Announcement

"The Dublin Core Metadata Initiative (DCMI) has recently started a task group to address the issue of metadata provenance. The group aims to create a shared model of the data elements required (a so-called 'application profile') to satisfactorily describe an aggregation of metadata statements in order to collectively import, access, use and publish facts about the quality, rights, timeliness, data source type, trust situation, etc. of the described metadata. Thus, we will not be concerned with the general issue of provenance metadata, but only with the provenance (in the broad sense of additional information) of metadata. In essence, we are dealing with metametadata. The Task Group is led by Kai Eckert of the University of Mannheim and Michael Panzer of OCLC, who have become members of the DCMI Advisory Board. We are currently seeking participants from communities who would be interested in working collaboratively on this DCMI initiative.

Since metadata provenance information is itself metadata, it should be possible to republish it in much the same way, in conjunction with the metadata statements or sets it describes. The task group is also charged with creating usage guidelines that outline ways to connect 'content' metadata aggregations with their provenance metadata in a way robust enough to survive multiple republication processes.

An initial proposal for a task group was submitted to the Advisory Board during DC-2009 in Seoul. In the meantime, Kai Eckert and Michael Panzer have participated in the weekly teleconferences of the W3C Provenance XG. The Incubator Group is charged with producing requirements and state-of-the-art reports on the area of provenance for Semantic Web technologies, both of which will be used as input for defining the AP. Should follow-on activities of the Incubator Group include standardization, the AP developed by the task group would provide input both as a use case and as a source of requirements...

Dublin Core Metadata Initiative (DCMI) is an open organization engaged in the development of interoperable metadata standards that support a broad range of purposes and business models. DCMI's activities include work on architecture and modeling, discussions and collaborative work in DCMI Communities and DCMI Task Groups, annual conferences and workshops, standards liaison, and educational efforts to promote widespread acceptance of metadata standards and practices."

See also: the W3C Provenance Incubator Group

Java Web services: WS-Security Without Client Certificates
Dennis Sosnoski, IBM developerWorks

"Many WS-Security configurations require both client and server to use public/private key pairs, with X.509 certificates to vouch for the ownership of the public keys. This is probably the most widely used technique for signing or encrypting messages with WS-Security, and it does have some advantages.

In particular, client certificates provide strong client identity verification and strong signature guarantees on requests. But it also has drawbacks, including the performance overhead of asymmetric encryption and the management headaches of obtaining and maintaining certificates for each client.

Using asymmetric encryption with public/private key pairs for signing and encrypting messages is simple, at least conceptually. You use your private key to sign messages and the recipient's public key to encrypt messages. Anyone with access to your public key (which is generally wrapped within layers of authentication in the form of an X.509 certificate) can verify the signature you generated using your private key, whereas only the owner of the corresponding private key can decrypt a message encrypted with a public key. If the client doesn't have a public/private key pair, you can't use full asymmetric encryption. The alternative is symmetric encryption, but with symmetric encryption you must have a secret key known only to the parties involved in a message exchange. How can you establish such a secret key?

The technique that WS-Security uses is to have the client generate a secret-key value, which is then encrypted using asymmetric encryption with the server's public key and embedded in the request message in an 'xenc:EncryptedKey' token. The client can use this secret key—or for better security, a separate key derived from the secret key—to encrypt and/or sign the request message, and the server can do the same with the response message. There's no need for the server to send the secret key back to the client, because the client already has it available..."
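The derived-key idea can be illustrated in a few lines of Python. This is not the key-derivation function WS-SecureConversation actually specifies (that is its own P_SHA-1 construction); it is only a sketch of expanding one wrapped secret into separate signing and encryption keys so that neither usage exposes the other:

```python
import hashlib
import hmac
import os

# The value the client would wrap in the xenc:EncryptedKey token.
secret = os.urandom(32)

def derive(secret: bytes, label: bytes, length: int = 32) -> bytes:
    """Illustrative HMAC-based expansion of a shared secret: distinct
    labels yield independent-looking keys from one secret."""
    return hmac.new(secret, label, hashlib.sha256).digest()[:length]

sign_key = derive(secret, b"signature")
enc_key = derive(secret, b"encryption")
assert sign_key != enc_key  # separate keys for the two roles
```

Because both sides hold `secret` after the asymmetric unwrap, the server can derive the same pair of keys locally and protect its response with fast symmetric operations, which is exactly the performance win over full asymmetric processing described above.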

Preventing Future Oil Spills with Software-Based Event Detection
S.S. Iyengar, S. Mukhopadhyay, C. Steinmuller, Xin Li; IEEE Computer

"The Deepwater Horizon oil spill is one of the largest and costliest in history, with far-reaching effects on the Gulf Coast that will be felt for decades to come... While BP's milestone maneuver on 19-July-2010 to stop the flow of oil may ultimately succeed, it raises the question: could this awful tragedy have possibly been prevented in the first place? Instead of the chaos occurring today, could the company simply have ordered a maintenance crew to the rig or perhaps shut it down?

BP has released documents indicating that it had concerns about the rig as far back as mid-2009, as well as numerous warnings that something was amiss prior to the blowout. The warnings were apparently ignored in BP's decision-making process since the low probability of a disaster masked the risk associated with such action. Complex event processing (CEP) systems detect problems in mission-critical, real-time applications and generate intelligent decisions to modulate the system environment. In the case of deepwater oil drilling, advanced CEP technology could have helped to prevent the current crisis in the Gulf of Mexico...

At LSU, we have developed the Cognitive Information Management Shell (CIM Shell), a CEP system that can analyze complex events and activities and adapt rapidly to evolving situations in a wide variety of environments. By archiving past events and cross-referencing them with current events, the system can discover deep patterns and then act upon them. Agent-based techniques continually adjust CIM Shell's parameters in near real time to adapt the system to changing environments, and human operators can easily add information to tweak the system, making goals easier to achieve... We've used the system in numerous scenarios, including soil and water management and video control in the presence of frame losses. Deployed over a hardware cloud infrastructure with GPU accelerators, it can handle around 100,000 events and make a million inferences every second.
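The CIM Shell itself is not publicly available, but the event-correlation pattern the article describes can be sketched generically: a rule that fires when related low-level events cluster inside a sliding time window, the kind of check that might have flagged repeated rig warnings. All names and thresholds below are invented for illustration:

```python
from collections import deque

class EventCorrelator:
    """Toy complex-event-processing rule: raise an alert when too many
    warning events arrive within a sliding time window. Loosely in the
    spirit of the CEP systems described above, not the CIM Shell API."""

    def __init__(self, threshold: int, window: float):
        self.threshold = threshold   # warnings needed to trip the rule
        self.window = window         # seconds of history to consider
        self.recent = deque()        # timestamps of recent warnings

    def observe(self, timestamp: float, kind: str) -> bool:
        """Feed one event; return True if the alert condition is met."""
        if kind != "warning":
            return False
        self.recent.append(timestamp)
        # Drop warnings that have aged out of the window.
        while self.recent and timestamp - self.recent[0] > self.window:
            self.recent.popleft()
        return len(self.recent) >= self.threshold

rig = EventCorrelator(threshold=3, window=60.0)
alerts = [rig.observe(t, "warning") for t in (0, 10, 20, 200)]
# third warning within 60s trips the rule; the isolated fourth does not
```

A production CEP engine adds persistence, cross-referencing against archived events, and rule languages far richer than this, but the core loop of windowing and pattern matching is the same.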

The practical uses of distributed event detection and monitoring and systems like CIM Shell are enormous, ranging from enterprise management to golf-course sprinklers to a hospital patient's intravenous pump. Some examples: (1) Drone ships, robotic freighters with a small human crew that are capable of docking themselves and avoiding ocean hazards, could revolutionize the shipping industry. (2) Advanced credit-monitoring systems that better detect anomalies in purchase activity could help prevent identity theft. (3) Power-conservation systems could remotely shut down machines after long periods of inactivity or if no one is there to operate them. (4) A security system could order an inventory after too many seemingly unrelated disturbances in a warehouse..."

See also: the OASIS Symptoms Automation Framework (SAF) TC

FRBR in Practice: Helsinki Celia Library for the Visually Impaired
Wendy Taylor and Kathy Teague, Ariadne

"We are anticipating the launch of the new library management system to our readers in the fourth quarter of 2011 — an opportune moment to review our cataloguing practice and investigate the possibility of cataloguing the accessible format, e.g. braille, at the manifestation level rather than as a holding attached to the bibliographic record describing the print book. [Certain negative effects of this choice] could be corrected by Functional Requirements for Bibliographic Records (FRBR). In order to test this theory we needed to have a better understanding of FRBR and how it actually works. We applied for a Ulverscroft/IFLA Best Practice Award to visit the Celia Library.

What is FRBR? IFLA established a Study Group to define the purpose of bibliographic records and how they could be changed to be more effective in an increasingly networked environment. The group defined how bibliographic records should help users to perform the following functions: search and find materials; from the search results, identify the key object of their interest; select a version/representation of the object which matches the user's needs; acquire access to the item. From these functions an entity-relationship model was developed which includes three groups of entities. Group 1: the products of artistic or intellectual creation—work (a distinct intellectual or artistic creation), expression (intellectual or artistic realisation of a work), manifestation (physical embodiment of an expression of a work), item (single example of a manifestation). Group 2: the persons or bodies responsible for producing the Group 1 entities. Group 3: subjects which describe concept, object, event, or place... Accessible materials for blind and partially sighted people are usually reproductions of print works making them ideal for FRBR description.
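The Group 1 hierarchy maps naturally onto nested record types, with two manifestations (say, braille and DAISY) sharing one expression of one work. A sketch in Python follows; the titles and formats are invented examples, not data from the article:

```python
from dataclasses import dataclass

# FRBR Group 1 entities: each level refines the one above it.
@dataclass
class Work:                 # distinct intellectual or artistic creation
    title: str

@dataclass
class Expression:           # realisation of a work (e.g. the English text)
    work: Work
    form: str

@dataclass
class Manifestation:        # embodiment of an expression (e.g. a braille edition)
    expression: Expression
    format: str

@dataclass
class Item:                 # a single exemplar of a manifestation
    manifestation: Manifestation
    barcode: str

work = Work("Pride and Prejudice")
text = Expression(work, form="English text")
braille = Manifestation(text, format="contracted braille")
daisy = Manifestation(text, format="DAISY audio")
copy1 = Item(braille, barcode="BR-0001")
```

Cataloguing at the manifestation level, as the article proposes, means `braille` and `daisy` each get their own bibliographic record while still linking back to the same expression and work, which is what lets search results collapse cleanly for readers.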

RNIB produces the majority of its titles in more than one format. A standard print work is scanned to create a master XML file which is used to produce accessible copies in braille, audio or giant print: (1) Braille is a system of raised dots, which enables blind people to read with the tips of their fingers. Contracted braille (formerly known as grade two) is a system in which a single character, or cell, can represent a whole word or group of letters; (2) RNIB Talking Books are in DAISY format. DAISY is an acronym for Digital Accessible Information System and is now recognised worldwide as a standard for audio books. The DAISY format is a digital reading format that can combine audio, text and graphical information in one production; (3) Books with a font size of over 18pt are defined as 'giant print'...

We have heard a lot about FRBR but seeing it in practice is entirely different. It works as we have speculated but it helps a great deal to have our conjecture confirmed. The knowledge gained during our visit to the Helsinki Celia Library fills in the gaps of our understanding of FRBR. It confirms our theory that FRBR would help simplify the display of search results for our readers. We feel surer that we can change our cataloguing practice from cataloguing the print book with alternative format holdings to cataloguing the alternative format at the manifestation level without detrimental effect on our readers... Each alternative format having its own bibliographic record will allow for easier extraction of data from the LMS, something we do frequently at RNIB NLS, in order to create reading lists to help our readers choose what they want to read..."

See also: Barbara Tillett's overview in FRBR, A Conceptual Model for the Bibliographic Universe


XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation
ISIS Papyrus
Microsoft Corporation
Oracle Corporation


Hosted By
OASIS - Organization for the Advancement of Structured Information Standards
