The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: February 23, 2010
XML Daily Newslink. Tuesday, 23 February 2010

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
Sun Microsystems, Inc.

W3C Advances Speech Synthesis Markup Language (SSML) Version 1.1 to PR
Daniel C. Burnett, Zhi Wei Shuang (et al., eds), W3C Technical Report

The W3C Voice Browser Working Group has published a Proposed Recommendation for the specification Speech Synthesis Markup Language (SSML) Version 1.1. The W3C Membership and other interested parties are invited to review the document and send comments to the Working Group's public mailing list through 23-March-2010. Known implementations of SSML are documented in the Implementation Report, along with the associated test suite.

The Voice Browser Working Group has sought to develop standards to enable access to the Web using spoken interaction. The Speech Synthesis Markup Language Specification is one of these standards and is designed to provide a rich, XML-based markup language for assisting the generation of synthetic speech in Web and other applications. The essential role of the markup language is to provide authors of synthesizable content a standard way to control aspects of speech such as pronunciation, volume, pitch, rate, etc. across different synthesis-capable platforms.

The Speech Synthesis Markup Language specification is based upon the JSGF and/or JSML specifications, which are owned by Sun Microsystems. SSML is part of a larger set of markup specifications for voice browsers developed through the open processes of the W3C. A related initiative to establish a standard system for marking up text input is SABLE, which tried to integrate many different XML-based markups for speech synthesis into a new one. The activity carried out in SABLE was also used as the main starting point for defining the Speech Synthesis Markup Requirements for Voice Markup Languages; since then, SABLE itself has not undergone any further development.

The intended use of SSML is to improve the quality of synthesized content. Different markup elements impact different stages of the synthesis process. The markup may be produced either automatically, for instance via XSLT or CSS3 from an XHTML document, or by human authoring. Markup may be present within a complete SSML document or as part of a fragment embedded in another language, although no interactions with other languages are specified as part of SSML itself. Most of the markup included in SSML is suitable for use by the majority of content developers; however, some advanced features like phoneme and prosody (e.g. for speech contour design) may require specialized knowledge..."
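The elements mentioned above can be illustrated with a small generated document. The sketch below uses Python's standard `xml.etree.ElementTree` to build a minimal SSML 1.1 fragment controlling rate, pitch, and pronunciation; the element and attribute names (`speak`, `prosody`, `phoneme`) follow the SSML specification, while the text content is illustrative only.

```python
import xml.etree.ElementTree as ET

SSML_NS = "http://www.w3.org/2001/10/synthesis"
XML_NS = "http://www.w3.org/XML/1998/namespace"
ET.register_namespace("", SSML_NS)  # serialize SSML as the default namespace

def build_ssml() -> str:
    """Build a minimal SSML 1.1 document as a string."""
    # Root <speak> element; version and xml:lang are required by the spec.
    speak = ET.Element(f"{{{SSML_NS}}}speak",
                       {"version": "1.1", f"{{{XML_NS}}}lang": "en-US"})
    # <prosody> controls delivery: here a slower rate and a raised pitch.
    prosody = ET.SubElement(speak, f"{{{SSML_NS}}}prosody",
                            {"rate": "slow", "pitch": "+10%"})
    prosody.text = "Welcome to the demonstration. "
    # <phoneme> pins down pronunciation with an IPA transcription.
    phoneme = ET.SubElement(speak, f"{{{SSML_NS}}}phoneme",
                            {"alphabet": "ipa", "ph": "t\u0259\u02c8m\u0251\u02d0to\u028a"})
    phoneme.text = "tomato"
    return ET.tostring(speak, encoding="unicode")
```

As the excerpt notes, markup like this can be emitted automatically (e.g. by an XSLT pass over XHTML) or authored by hand; either way the synthesis platform consumes the same elements.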

See also: the SSML Implementation Report

U.S. Federal ICAM Security Assertion Markup Language (SAML) 2.0 Profile
Terry McBride, Matt Tebo, John Bradley, and Dave Silver (eds), ICAM Technical Report

A draft specification has been uploaded to the OASIS Security Services (SAML) TC repository for comment: Federal Identity, Credentialing, and Access Management: Security Assertion Markup Language (SAML) 2.0 Profile.

"Security Assertion Markup Language (SAML) 2.0 Profile as described in this document has been adopted by Federal Identity, Credential, and Access Management (ICAM) for the purpose of Level of Assurance (LOA) 1, 2, and 3 identity authentication and holder-of-key assertions for binding keys or other attributes to an identity at LOA 4. Proper use of this Profile ensures that implementations: (1) Meet Federal standards, regulations, and laws; (2) Minimize risk to the Federal government; (3) Maximize interoperability; (4) Provide end users (e.g., citizens) with a consistent context or user experience at a Federal Government site...

This Profile is a deployment profile based on the Organization for the Advancement of Structured Information Standards (OASIS) SAML 2.0 specifications, and the Liberty Alliance eGov Profile v.1.5. This Profile relies on the SAML 2.0 Web Browser SSO Profile to facilitate end user authentication. This Profile does not alter these standards, but rather specifies deployment options and requirements to ensure technical interoperability with Federal government applications. Where this Profile does not explicitly provide guidance, the standards upon which this Profile is based take precedence. In addition, this Profile recognizes the eGov Profile conformance requirements, and to the extent possible reconciles them with other SAML 2.0 Profiles. The objective of this document is to define the ICAM SAML 2.0 Profile so that persons deploying, managing, or supporting an application based upon it can fully understand its use in ICAM transaction flows..."
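The Web Browser SSO Profile the ICAM document builds on begins with the service provider issuing an authentication request. A minimal sketch of such a `<samlp:AuthnRequest>`, built with Python's stdlib `ElementTree`, is shown below; the namespaces and attribute names come from the SAML 2.0 specifications, while the issuer and consumer-service URLs are hypothetical placeholders, not ICAM endpoints.

```python
import datetime
import uuid
import xml.etree.ElementTree as ET

SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"
SAML = "urn:oasis:names:tc:SAML:2.0:assertion"
ET.register_namespace("samlp", SAMLP)
ET.register_namespace("saml", SAML)

def build_authn_request(issuer: str, acs_url: str) -> str:
    """Build a minimal SAML 2.0 <samlp:AuthnRequest> for Web Browser SSO."""
    req = ET.Element(f"{{{SAMLP}}}AuthnRequest", {
        "ID": "_" + uuid.uuid4().hex,  # xs:ID values must not start with a digit
        "Version": "2.0",
        "IssueInstant": datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ"),
        "AssertionConsumerServiceURL": acs_url,
    })
    # <saml:Issuer> identifies the requesting service provider.
    ET.SubElement(req, f"{{{SAML}}}Issuer").text = issuer
    return ET.tostring(req, encoding="unicode")
```

A deployment profile like ICAM's constrains how such messages are used (bindings, assurance levels, attribute contents) rather than changing their syntax.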

Identity, Credential and Access Management (ICAM) is a Subcommittee of the Information Security and Identity Management Committee (ISIMC), established in September 2008 by the U.S. Federal CIO Council. ICAM's mission is to foster "effective government-wide identity and access management, enabling trust in online transactions through common identity and access management policies and approaches, aligning federal agencies around common identity and access management practices, reducing the identity and access management burden for individual agencies by fostering common interoperable approaches, ensuring alignment across all identity and access management activities that cross individual agency boundaries, and collaborating with external identity management activities through inter-federation to enhance interoperability."

See also: the ICAM description

Identifying Opportunities for Improvement in Security Architecture
Gunnar Peterson, Blog

"A report that should surprise nobody is available: people pick predictable passwords... After the security breach, database security firm Imperva analysed the passwords used, publishing a report entitled Consumer Password Worst Practices. The data found that the most common passwords were: 1. 123456, 2. 12345, 3. 123456789, 4. Password, 5. iloveyou, 6. princess, 7. rockyou, 8. 1234567, 9. 12345678, 10. abc123... The analysis revealed a large amount of users had chosen 'easy-to-crack' passwords, the most common being '123456', which was chosen by 290,731 users, or almost one percent.

What is surprising is that this "secret" along with other great "unknowns" like Social Security Number, is what's used in standard practice Information Security to bootstrap the whole access control program. When the users are spoofed, phished, pharmed, and otherwise tricked into clicking on something that allows the bad guys in, they're routinely insulted as "how could they be so dumb" to click on that or whatever. But is it any dumber than architecture that relies on storing and shipping the dynamite and the detonator in the same truck?

When the Object logic and data are stored in the same domain as the Subject *and* the Subject's secrets: does this make any sense? I will now bash my head on my desk repeatedly. The current architecture is not a security architecture in any meaningful sense, it's an operational and deployment convenience. If you are building out security architecture like this for Web apps, Web Services, and Cloud, then please stop. Step away from the keyboard and look at using something else...

From a security architecture standpoint there's no real excuse to spray username/password around everywhere any more. The role of architecture is to separate concerns and place functionality and ownership in places where they have the most knowledge and resources to accomplish the task. In identity the knowledge and resources required to identify and manage users are totally separate from those required to identify and manage apps and servers, yet they are often combined into a lowest common denominator. The new identity standards like Information Cards, SAML, and OAuth are widely supported in products, best of breed and open source. Investigate them and find the best fit for your company and systems. The only thing worse than a weak/guessable password is lots and lots of weak/guessable username/passwords..."
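The Imperva top-ten list quoted above makes for a trivially enforceable blacklist. The sketch below shows the simplest possible check; it is illustrative only, since a real deployment would check against a much larger list (and, as the post argues, would not lean on passwords alone).

```python
# Top ten passwords from the Imperva analysis quoted above.
COMMON_PASSWORDS = {
    "123456", "12345", "123456789", "password", "iloveyou",
    "princess", "rockyou", "1234567", "12345678", "abc123",
}

def is_predictable(password: str) -> bool:
    """Flag passwords on the common list (case-insensitive) or shorter than 8 chars."""
    return password.lower() in COMMON_PASSWORDS or len(password) < 8
```

Note that '123456' alone covered almost one percent of the breached accounts, so even this ten-entry filter would have blocked a disproportionate share of the weakest choices.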

An Extensible Format for Email Feedback Reports
Yakov Shafranovich, John Levine, Murray Kucherawy (eds), IETF Internet Draft

Members of the IETF Messaging Abuse Reporting Format Working Group have released the Internet Draft "Extensible Format for Email Feedback Reports", which defines an extensible format and MIME type that may be used by network operators to report feedback about received email to other parties. This format is intended as a machine-readable replacement for various existing report formats currently used in Internet email.

The following requirements apply to feedback reports, whose actual specification is defined later in the document: (1) they must be both human and machine readable; (2) a copy of the original email message (both body and header) or the message header must be enclosed in order to allow the receiver to handle the report properly; (3) the machine readable section must provide the ability for report generators to share meta-data with receivers; (4) the format must be extensible.

'Introduction': As the spam problem continues to expand and potential solutions evolve, network operators are increasingly exchanging abuse reports among themselves and other parties. However, different operators have defined their own formats, and thus the receivers of these reports are forced to write custom software to interpret each. In addition, many operators use various other report formats to provide non-abuse-related feedback about processed email. This memo seeks to define a standard extensible format by creating the "message/feedback-report" MIME type for these reports. This format and content type are intended to be used within the scope of the framework of the "multipart/report" content type defined in RFC 3462. While there has been previous work in this area (e.g. "Proposed Spam Reporting BCP Document" and "Abuse Reporting Standards Subgroup of the ASRG"), none of them have yet been successful. It is hoped that this document will have a better fate.

This format is intended primarily as an Abuse Reporting Format (ARF) for reporting email abuse but also includes support for direct feedback via end user mail clients, reports of some types of virus activity, and some similar issues. This memo also contains provision for extensions should other specific types of reports be desirable in the future... This document only defines the format and MIME content type to be used for these reports. Determination of where these reports should be sent, how trust among report generators and report recipients is established, and reports related to more than one message are outside the scope of this document. It is assumed that best practices will evolve over time, and will be codified in future documents.
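The three-part structure the draft describes (a human-readable part, a machine-readable `message/feedback-report` part, and the enclosed original message) maps directly onto Python's stdlib `email` package. The sketch below is a minimal illustration assuming the draft's `Feedback-Type`, `User-Agent`, and `Version` fields; the reporter name is a made-up placeholder.

```python
from email.message import Message
from email.mime.message import MIMEMessage
from email.mime.multipart import MIMEMultipart
from email.mime.nonmultipart import MIMENonMultipart
from email.mime.text import MIMEText

def build_feedback_report(original: Message) -> MIMEMultipart:
    """Assemble a multipart/report feedback report around an original message."""
    report = MIMEMultipart("report")
    report.set_param("report-type", "feedback-report")
    # Part 1: human-readable explanation of the report.
    report.attach(MIMEText("This is an email abuse report for a message received "
                           "from the sender shown in the enclosed message.\n"))
    # Part 2: machine-readable fields, per the draft's message/feedback-report type.
    machine = MIMENonMultipart("message", "feedback-report")
    machine.set_payload("Feedback-Type: abuse\r\n"
                        "User-Agent: ExampleReporter/1.0\r\n"
                        "Version: 1\r\n")
    report.attach(machine)
    # Part 3: the original message (header and body), enclosed for the receiver.
    report.attach(MIMEMessage(original))
    return report
```

The `multipart/report` envelope is the RFC 3462 framework the draft cites; only the second part's content type and fields are new in this specification.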

See also: the formation of the IETF MARF Working Group

PGP Offers Enterprise Key Management to Consolidate Encryption Control
Neil Roiter, Network Computing

"PGP Key Management Server, announced recently, aims to consolidate key management across third-party applications and devices, including custom applications, which typically lack built-in capabilities. PGP says enterprises are struggling with managing disparate certificate authorities for e-commerce, payment systems, file transfers and other processes. Wireless access points are another problem area, as large enterprises often use WLAN gear from multiple vendors, requiring separate key management for each. PGP Key Management Server allows enterprises to manage third-party application and device keys, consolidate controls and reduce risk of data loss because of a corrupted or lost encryption key.

PGP's entry into the enterprise key management arena comes at a time when encryption is becoming pervasive. In the past, enterprises shied away from encryption projects, in part because of key management burdens. Today, they often don't have a choice as data security programs and compliance mandates, such as PCI DSS, HIPAA/HITECH, and a smorgasbord of state data protection laws, have pushed encryption into the enterprise. Meanwhile, web applications, storage systems, mobile devices, corporate databases, e-mail and other technologies increasingly rely on encryption to secure information in transit, authenticate transactions and protect data at rest..."

From the PGP announcement: The PGP Key Management Server provides a versatile foundation to centralize management of encryption throughout the enterprise to help organizations take control over their encryption keys, strengthen security, and reduce operational cost. Key features in this release include: (1) Support for heterogeneous environments: Asymmetric, Symmetric and Proprietary Keys; Architected for multi-protocol support—KMIP, OPAL, IEEE 1619.3, PKCS 11; Support for desktops, servers, and devices. (2) Out of the Box, automated management and deployment for certificates: Provisioning and lifecycle management including expiration dates and renewal; SSL, VPN, Wireless Access, etc. (3) Generalized automation agent: Simple integration on 30+ operating system versions...

Phillip Dunkelberger, president and CEO at PGP Corporation: "Key management has become a huge challenge for Fortune 1000 companies, government organizations and other institutions who can no longer approach security 'one device' at a time—instead they need a cross platform, trusted data protection strategy... Managing a handful of encryption keys is one thing, but when organizations get more sophisticated and start to encrypt multiple laptops, smartphones, drives, servers and tapes, the task can become daunting. Our customers have been looking for a complete approach to key management that addresses data protection in a strategic manner... Deployment and the need to manage encryption technology has grown dramatically over the past several years as hard drives, servers, databases, smartphones and flash drives virtually all come provisioned with encryption. Additionally, SSL certificates are ubiquitous in server and cloud environments, and bring their own set of administrative challenges. Each new encryption product introduces yet another set of key management responsibilities that compound the administrative overhead and cost for IT and security departments. The PGP® Key Management Server 3.0 provides enterprises with a unified enterprise key management platform that is essential for streamlining security operations and providing the highest level of data protection..."

See also: the PGP announcement

Use DITA, PHP, and Blobs for Building Executable Process Models
Thomas G. Freund, IBM developerWorks

"The complexity facing embedded systems architects today is daunting because of added requirements in safety, reliability, and network accessibility. Yet, the tools typically used are often a step behind large-scale software spaces and do not provide the ability to transition smoothly between the detailed device level and a total system view. This article shows how to use open source standards such as DITA and PHP and tools such as blob representations to create a system-level environment to address these needs...

The Darwin Information Typing Architecture (DITA) defines an XML architecture for designing, writing, managing, and publishing many kinds of information both in print and on the Web. It provides a way to organize knowledge about anything in an organization in a consistent structure that is computer sensible and that humans can understand. This organized knowledge can range from assembly instructions for a piece of machinery to procedures for billing customers or recipes for a chemical solvent.

DITA is a collection of XML constructs that represents a variety of ways in which primarily textual knowledge can be structured and stored. These constructs are called topics—a unit of information that is self-contained and can, for example, answer a single question. Several topics can relate to each other by means of DITA maps, or documents that help you navigate among a series of topics.

The DITA task topic describes how to complete a procedure to accomplish a specific goal. In other words, it answers questions of the type, 'How do I . . . ', where its key constructs are: 'prereq': Provides the information needed to start a process; 'context': Provides background information for carrying out the steps of a task; 'steps': Specific actions that must be followed to accomplish a task, where steps must contain one or more command elements describing the particular action that must be accomplished; 'result': The expected outcome for the task; 'example': An optional example illustrating use of the task; 'postreq': Optional tasks that can be performed after the successful completion of this task... DITA is made up of XML constructs, one of which is the DITA task topic. What is needed is a streamlined tool to traverse the task structure. Enter PHP and its DOM traversal classes. Translating tasks into a useful structure is done through the strong parsing and DOM traversal capabilities of a command-line PHP script..."
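The article does this traversal with PHP's DOM classes; an analogous sketch in Python's stdlib `ElementTree` is shown below. The task-topic elements (`taskbody`, `steps`, `step`, `cmd`, `result`) are DITA's own; the billing content is a made-up example in the spirit of the article.

```python
import xml.etree.ElementTree as ET

# A small DITA task topic, with hypothetical content.
TASK_XML = """\
<task id="bill-customer">
  <title>Bill a customer</title>
  <taskbody>
    <prereq>An open customer account.</prereq>
    <steps>
      <step><cmd>Open the billing screen.</cmd></step>
      <step><cmd>Enter the invoice amount.</cmd></step>
      <step><cmd>Submit the invoice.</cmd></step>
    </steps>
    <result>The customer receives an invoice.</result>
  </taskbody>
</task>
"""

def extract_commands(task_xml: str) -> list:
    """Walk the task topic and collect the <cmd> text of every <step>."""
    root = ET.fromstring(task_xml)
    return [cmd.text for cmd in root.findall(".//steps/step/cmd")]
```

Extracting the ordered `cmd` list is exactly the "executable process model" step: once the commands are in a plain sequence, any runtime (PHP or otherwise) can dispatch them.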

See also: DITA references

YANG Module for NETCONF Monitoring
Mark Scott and Martin Bjorklund (eds), IETF Internet Draft

Members of the IETF Network Configuration (NETCONF) Working Group have published an updated specification for the YANG Module for NETCONF Monitoring. The document defines a NETCONF data model to be used to monitor the NETCONF protocol. The monitoring data model includes information about NETCONF datastores, sessions, locks and statistics. This data facilitates the management of a NETCONF server. This document also defines methods for NETCONF clients to discover data models supported by a NETCONF server and defines a new NETCONF 'get-schema' operation to retrieve them.

The Network Configuration Protocol (NETCONF) defined in this document provides mechanisms to install, manipulate, and delete the configuration of network devices. It uses an Extensible Markup Language (XML)-based data encoding for the configuration data as well as the protocol messages. The NETCONF protocol operations are realized on top of a simple Remote Procedure Call (RPC) layer... YANG (A data modeling language for NETCONF) is a data modeling language used to model configuration and state data manipulated by the Network Configuration Protocol (NETCONF) protocol, NETCONF remote procedure calls, and NETCONF notifications; YANG is used to model the operations and content layers of NETCONF...

The specification YANG Module for NETCONF Monitoring provides information about NETCONF sessions and supported schema as defined in I-D 4741bis. Considerations such as different schema formats, feature optionality and access controls can all impact the applicability and level of detail the NETCONF server sends to a client during session setup. The methods defined in this document address the need for further means to query and retrieve schema and NETCONF state information from a NETCONF server. These are provided to complement existing base NETCONF capabilities and operations and in no way affect existing behaviour. A new 'get-schema' operation is also defined to support explicit schema retrieval via NETCONF..."
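Since NETCONF operations are XML carried in an `<rpc>` envelope, the new operation is easy to picture. The sketch below assembles a 'get-schema' request with Python's stdlib `ElementTree`; the element names and the monitoring namespace follow the draft, while the `message-id` value is an arbitrary illustration (any client-chosen identifier works).

```python
import xml.etree.ElementTree as ET

NC_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"
MON_NS = "urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring"

def build_get_schema(identifier: str, version: str = "", fmt: str = "yang") -> str:
    """Build a NETCONF <get-schema> RPC to retrieve one schema from a server."""
    # <rpc> envelope with a client-chosen message-id, per the NETCONF RPC layer.
    rpc = ET.Element(f"{{{NC_NS}}}rpc", {"message-id": "101"})
    op = ET.SubElement(rpc, f"{{{MON_NS}}}get-schema")
    # <identifier> names the schema; <version> and <format> narrow the request.
    ET.SubElement(op, f"{{{MON_NS}}}identifier").text = identifier
    if version:
        ET.SubElement(op, f"{{{MON_NS}}}version").text = version
    ET.SubElement(op, f"{{{MON_NS}}}format").text = fmt
    return ET.tostring(rpc, encoding="unicode")
```

The server's reply carries the requested schema text in the `<rpc-reply>`, which is how a client can fetch the very YANG modules it then uses to interpret the server's data.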

See also: the IETF Network Configuration (NETCONF) Working Group


XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation
Microsoft Corporation
Oracle Corporation
Sun Microsystems, Inc.

Hosted By
OASIS - Organization for the Advancement of Structured Information Standards

Sponsored By

IBM Corporation
ISIS Papyrus
Microsoft Corporation
Oracle Corporation

