The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: April 13, 2010
XML Daily Newslink. Tuesday, 13 April 2010

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation http://www.microsoft.com



SNIA Publishes Cloud Data Management Interface (CDMI) Version 1.0
Staff, Storage Networking Industry Association Announcement

At the SNW Spring 2010 Conference, "the Storage Networking Industry Association (SNIA) announced formal approval of the Cloud Data Management Interface (CDMI) as a SNIA Architecture standard. This milestone marks the first industry-developed open standard for cloud computing and will allow for interoperable cloud storage implementations from cloud service providers and storage vendors. Details are provided in the 173-page specification ('Cloud Data Management Interface Version 1.0') and in the announcement: 'Industry's First Cloud Standard to Be Used for Interoperable Cloud Storage... SNIA Announces Cloud Data Management Interface Standard'.

The standard is applicable to public, private and hybrid storage clouds and is expected to be implemented by service providers and cloud infrastructure vendors for all cloud deployment models. More than just a data path to the cloud, the CDMI also includes the ability to manage service levels that data receives when it is stored in the cloud.

The CDMI also includes a common interoperable data exchange format for securely moving data and its associated data requirements from cloud to cloud. The new standard, produced in record time for an open standard, was created by the SNIA Cloud Storage Technical Work Group (TWG), which consists of more than 180 members from more than 60 different organizations around the world. The Cloud Storage TWG, which was started at Storage Networking World Spring 2009, completed the specification in less than twelve months.

From the Introduction: 'When discussing cloud storage and standards, it is important to distinguish the various resources that are being offered as services. These resources are exposed to clients as functional interfaces (data paths) and are managed by management interfaces (control paths). We explore the various types of interfaces that are part of offerings today and show how they are related. We propose a model for the interfaces that can be mapped to the various offerings and that form the basis for rich cloud storage interfaces into the future. Another important concept in this specification is that of metadata. When managing large amounts of data with differing requirements, metadata is a convenient mechanism to express those requirements in such a way that underlying data services can differentiate their treatment of the data to meet those requirements. The appeal of cloud storage is due to some of the same attributes that define other cloud services: pay as you go, the illusion of infinite capacity (elasticity), and the simplicity of use/management. It is therefore important that any interface for cloud storage support these attributes, while allowing for a multitude of business cases and offerings, long into the future...'"
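The data path described above is ordinary HTTP carrying JSON. As a minimal sketch of the client side (the container path and payload are hypothetical; the spec-version header and `application/cdmi-object` media type follow CDMI conventions), a request to store a data object might be assembled like this:

```python
import json

# Hedged sketch: builds (but does not send) an HTTP PUT for a CDMI data
# object. CDMI carries data objects as JSON over HTTP, tagged with a
# specification-version header and CDMI-specific media types.
CDMI_VERSION = "1.0"

def cdmi_put_data_object(path, value, mimetype="text/plain"):
    """Return the method, path, headers, and JSON body for a CDMI object PUT."""
    headers = {
        "X-CDMI-Specification-Version": CDMI_VERSION,
        "Content-Type": "application/cdmi-object",
        "Accept": "application/cdmi-object",
    }
    body = json.dumps({"mimetype": mimetype, "value": value})
    return "PUT", path, headers, body

method, path, headers, body = cdmi_put_data_object(
    "/cdmi/backups/report.txt", "quarterly numbers")
```

The service-level management the announcement mentions rides on the same interface: metadata attached to the JSON object, not a separate protocol.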

See also: the SNIA announcement


Updated Working Draft: Media Fragments URI 1.0
Raphaël Troncy and Erik Mannens (eds), W3C Technical Report

Members of the W3C Media Fragments Working Group have published a Second Public Working Draft of the "Media Fragments URI 1.0" specification, produced as part of the W3C Video on the Web Activity. It addresses requirements identified in the companion "Use Cases and Requirements for Media Fragments" document.

The Media Fragments 1.0 specification defines the syntax for constructing media fragment URIs and explains how to handle them when used over the HTTP protocol. The syntax is based on the specification of particular field-value pairs that can be used in URI fragment and URI query requests to restrict a media resource to a certain fragment... It provides for a media-format independent, standard means of addressing media fragments on the Web using Uniform Resource Identifiers (URI). In the context of this document, media fragments are regarded along three different dimensions: temporal, spatial, and tracks. Further, a fragment can be marked with a name and then addressed through a URI using that name. The specified addressing schemes apply mainly to audio and video resources—the spatial fragment addressing may also be used on images.
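Because the fragment is built from name-value pairs, it yields to ordinary URI parsing. A small sketch (the dimension names `t` and `xywh` are from the draft; the URI itself and the handling of defaults are illustrative, and percent-decoding subtleties are not addressed):

```python
from urllib.parse import urlsplit, parse_qsl

def parse_media_fragment(uri):
    """Return the media-fragment dimensions (e.g. t, xywh, track, id)
    of a URI as a dict of raw string values."""
    return dict(parse_qsl(urlsplit(uri).fragment))

def parse_temporal(t_value):
    """Split a temporal 't=start,end' value into (start, end) seconds;
    an omitted start defaults to 0, an omitted end to None."""
    start, _, end = t_value.partition(",")
    return (float(start) if start else 0.0,
            float(end) if end else None)

dims = parse_media_fragment(
    "http://example.com/video.ogv#t=10,20&xywh=160,120,320,240")
clip = parse_temporal(dims["t"])
```

Since the dimensions live in the URI fragment, a client can resolve them locally; the draft also explains how they map onto HTTP range-style interactions with the server.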

Background: "Audio and video resources on the World Wide Web are currently treated as 'foreign' objects, which can only be embedded using a plugin that is capable of decoding and interacting with the media resource. Specific media servers are generally required to provide for server-side features such as direct access to time offsets into a video without the need to retrieve the entire resource. Support for such media fragment access varies between different media formats and inhibits standard means of dealing with such content on the Web...

The media fragment URIs specified in this document have been implemented and demonstrated to work with media resources over the HTTP and RTP/RTSP protocols. Existing media formats in their current representations and implementations provide varying degrees of support for this specification. It is expected that over time, media formats, media players, Web Browsers, media and Web servers, as well as Web proxies will be extended to adhere to the full specification... Example uses are the sharing of such fragment URIs with friends via email, the automated creation of such fragment URIs in a search engine interface, or the annotation of media fragments with RDF..."

See also: the W3C Media Fragments Working Group


Mashups and the Enterprise Mashup Markup Language (EMML)
Arun Viswanathan, DDJ

"Mashups are an architectural style that combines data and/or content from different sources or sites. Mashups are normally differentiated by use, architectural style, and data. While consumer mashups have been in use for a while, we are now seeing them move into the enterprise. In a general sense, what differentiates a consumer mashup from an enterprise mashup is that enterprise mashups are built following standard guidelines, such as those promoted by the Open Mashup Alliance (OMA), a standards effort proposed by mashup vendors. The OMA defines the Enterprise Mashup Markup Language (EMML), which is used to define mashups in a standardized manner. A mashup defined this way can be deployed in any mashup runtime implemented according to the specifications provided by the OMA.

In this article, I examine the importance of the OMA, the mashup architecture it proposes, and the ease with which developers can create, deploy, and test a mashup developed to the EMML specification... EMML supports an extensive set of operations and commands to handle simple to complex processing needs. EMML also supports feeding the results of one mashup as input to another. All in all, this should give you a head start toward developing more complicated mashups by referring to the detailed EMML documentation provided on the OMA portal.

EMML is a declarative, domain-specific language for mashups that eliminates the complex, procedural programming otherwise required to create them. It is an open specification, and the language is free to use. EMML thus removes vendor lock-in and allows portability of mashup solutions. The OMA has released the EMML specification, the EMML schema, and an open source reference implementation where EMML scripts can be deployed and tested...

An EMML file is a mashup script with a ".emml" extension that uses the Enterprise Mashup Markup Language. The mashup script defines the services, operations, and responses to be constructed based on the results generated. Input to an EMML script can come from data sources such as XML, JSON, JDBC, Java objects, or primitive types. EMML provides a uniform syntax to call any of the service styles: REST, SOAP, RSS/Atom, RDBMS, POJO, or web clipping from HTML pages. Complex programming logic is also supported through an embedded scripting engine which supports JavaScript, Groovy, JRuby, XPath, and XQuery. The EMML script is then deployed onto any J2EE-compliant application server where the EMML runtime has been installed. The mashup is then accessible as a REST service using a URL with the mashup name. The mashup service returns the result in XML format..."
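Per the last paragraph, a deployed mashup is reached as a REST service named after the mashup and returns XML. A client-side sketch (the runtime base URL, mashup name, parameter, and response element names below are all hypothetical, not from the EMML specification):

```python
from urllib.parse import urljoin, urlencode
import xml.etree.ElementTree as ET

# Hypothetical base URL of an EMML runtime deployed on an app server.
EMML_RUNTIME = "http://localhost:8080/emml/"

def mashup_url(name, **params):
    """Address a deployed mashup by its name; inputs go in the query string."""
    url = urljoin(EMML_RUNTIME, name)
    return url + ("?" + urlencode(params) if params else "")

url = mashup_url("stockQuotes", symbol="IBM")

# The service replies in XML; an illustrative response might be
# handled like this (element names invented for the sketch):
sample_response = "<result><quote symbol='IBM'>128.5</quote></result>"
price = float(ET.fromstring(sample_response).findtext("quote"))
```

The point of the article stands out here: the client needs no vendor SDK, only a URL and an XML parser.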

See also: the OMA EMML Documentation


W3C Revised Working Draft: Widget Updates
Marcos Cáceres and Robin Berjon (eds), W3C Technical Report

Members of the W3C Web Applications Working Group have published a revised Working Draft for the Widget Updates specification, updating the previous draft of 2008-10-07. This specification defines a process and a document format to allow a user agent to update an installed widget package with a different version of a widget package. A widget cannot update itself; instead, a widget relies on the user agent to manage the update process. A user agent can perform an update over HTTP and from non-HTTP sources (e.g., directly from a device's memory card or hard disk). This "Widget Updates" specification is part of the Widgets Family of Specifications; it takes into account the recommendations from the Widget Updates Patent Advisory Group and considers the large set of prior art the PAG found. The WG was chartered to provide specifications that enable improved client-side application development on the Web, including specifications both for application programming interfaces (APIs) for client-side development and for markup vocabularies for describing and controlling client-side application behavior.

'Widgets' are client-side applications that are authored using Web standards, but whose content can also be embedded into Web documents. The specification relies on PKWare's Zip specification as the archive format, XML as a configuration document format, and a series of steps that runtimes follow when processing and verifying various aspects of a package. The packaging format acts as a container for files used by a widget. The configuration document is an XML vocabulary that declares metadata and configuration parameters for a widget. The steps for processing a widget package describe the expected behavior and means of error handling for runtimes while processing the packaging format, configuration document, and other relevant files...

There are a multitude of reasons why authors might want to update a widget, including addressing security vulnerabilities, making performance enhancements, and adding new features. Sometimes authors may even want to revert to a previous version of a widget, if it is found that a newly deployed version contains issues or vulnerabilities. To facilitate the process of updating widgets, this specification introduces an XML element, called update-description, to be included in a widget's configuration document, and an XML format, called an Update Description Document (UDD). This specification also defines the rules that govern the interactions between the user agent, the UDD, and the updated widget.

On the one hand, the update-description element provides an author with a means to point to a UDD. On the other hand, the UDD provides metadata about an update, including: (1) a means to describe the purpose of the update; (2) a means to indicate the version number of the potential update; (3) a link from which the updated widget package can be retrieved... If a user agent determines, via the strategies defined in this specification, that two widget packages are not the same version, and if the user consents, the user agent will attempt to replace the currently installed widget package with the potential update. Updates are designed to retain any locally stored data, so as to protect end-users from losing data that a widget may have stored at runtime.
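The user agent's check thus reduces to fetching the UDD the widget points at and comparing versions. A sketch of that step (the UDD element and attribute names below are illustrative stand-ins, not the draft's normative vocabulary; the `widgets` namespace is assumed from the related packaging specification):

```python
import xml.etree.ElementTree as ET

# Hypothetical UDD; consult the Working Draft for the normative format.
UDD_XML = """<update-info xmlns="http://www.w3.org/ns/widgets" version="2.1">
  <details>Fixes a security vulnerability in the feed parser.</details>
</update-info>"""

NS = "{http://www.w3.org/ns/widgets}"

def parse_udd(xml_text):
    """Pull the advertised version and human-readable purpose from a UDD."""
    root = ET.fromstring(xml_text)
    return {"version": root.get("version"),
            "details": (root.findtext(NS + "details") or "").strip()}

def update_available(installed_version, udd):
    # Per the draft's model, an update is offered when the two widget
    # packages are "not the same version" -- no ordering is assumed here,
    # which also covers the revert-to-earlier-version case.
    return udd["version"] != installed_version

udd = parse_udd(UDD_XML)
offer = update_available("2.0", udd)
```

Treating "different" rather than "newer" as the trigger is what lets the same machinery serve both upgrades and the rollbacks mentioned above.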

See also: the W3C Web Applications (WebApps) Working Group


NIST Guide to Protecting Personally Identifiable Information (PII)
Pat O'Reilly, NIST Computer Security Division Announcement

"NIST has released Special Publication 800-122, "Guide to Protecting the Confidentiality of Personally Identifiable Information (PII)." SP 800-122 provides practical, context-based guidelines for identifying PII and determining what level of protection is appropriate for each instance of PII. The document also suggests safeguards that may offer appropriate levels of protection for PII and provides recommendations for developing response plans for incidents involving PII.

"The escalation of security breaches involving personally identifiable information (PII) has contributed to the loss of millions of records over the past few years. Breaches involving PII are hazardous to both individuals and organizations. Individual harms may include identity theft, embarrassment, or blackmail. Organizational harms may include a loss of public trust, legal liability, or remediation costs.

To appropriately protect the confidentiality of PII, organizations should use a risk-based approach; as McGeorge Bundy once stated, 'If we guard our toothbrushes and diamonds with equal zeal, we will lose fewer toothbrushes and more diamonds.' This document provides guidelines for a risk-based approach to protecting the confidentiality of PII. The recommendations in this document are intended primarily for U.S. Federal government agencies and those who conduct business on behalf of the agencies, but other organizations may find portions of the publication useful. Each organization may be subject to a different combination of laws, regulations, and other mandates related to protecting PII, so an organization's legal counsel and privacy officer should be consulted to determine the current obligations for PII protection. For example, the Office of Management and Budget (OMB) has issued several memoranda with requirements for how Federal agencies must handle and protect PII. To effectively protect PII, organizations should implement the following recommendations..."

Summary: "(1) Organizations should identify all PII residing in their environment. (2) Organizations should minimize the use, collection, and retention of PII to what is strictly necessary to accomplish their business purpose and mission. (3) Organizations should categorize their PII by the PII confidentiality impact level. (4) Organizations should apply the appropriate safeguards for PII based on the PII confidentiality impact level. (5) Organizations should develop an incident response plan to handle breaches involving PII. (6) Organizations should encourage close coordination among their chief privacy officers, senior agency officials for privacy, chief information officers, chief information security officers, and legal counsel when addressing issues related to PII..."

See also: NIST Special Publications


Password Attack on Apache Hosted JIRA, Bugzilla, or Confluence
Staff, Apache Infrastructure Team Incident Report

From the Apache.org Incident Report of 04/09/2010: "Apache.org services recently suffered a direct, targeted attack against our infrastructure, specifically the server hosting our issue-tracking software. The Apache Software Foundation uses a donated instance of Atlassian JIRA as an issue tracker for our projects. Among other projects, the ASF Infrastructure Team uses it to track issues and requests. Our JIRA instance was hosted on brutus.apache.org, a machine running Ubuntu Linux 8.04 LTS...

If you are a user of the Apache-hosted JIRA, Bugzilla, or Confluence, a hashed copy of your password has been compromised. JIRA and Confluence both use a SHA-512 hash, but without a random salt. We believe the risk to simple passwords based on dictionary words is quite high, and most users should rotate their passwords. Bugzilla uses SHA-256 with a random salt. The risk for most users is low to moderate, since pre-built password dictionaries are not effective, but we recommend users still remove these passwords from use. In addition, if you logged into the Apache JIRA instance between April 6th and April 9th, you should consider your password compromised, because the attackers changed the login form to log passwords...

What Happened? — On April 5th [2010], the attackers, via a compromised Slicehost server, opened a new issue, INFRA-2591. This issue contained the following text: 'ive got this error while browsing some projects in jira http://tinyurl.com/XXXXXXXXX [obscured]'... TinyURL is a URL redirection and shortening tool. This specific URL redirected back to the Apache instance of JIRA, at a special URL containing a cross-site scripting (XSS) attack. The attack was crafted to steal the session cookie of the user logged in to JIRA. When this issue was opened against the Infrastructure team, several of our administrators clicked on the link. This compromised their sessions, including their JIRA administrator rights. At the same time as the XSS attack, the attackers started a brute-force attack against the JIRA login.jsp, attempting hundreds of thousands of password combinations. On April 6th, one of these methods was successful...

What are we changing? — We have remedied the JIRA installation issues with our reinstall. JIRA is now installed by root and runs as a separate daemon with limited privileges. For the time being we are running JIRA in a httpd-tomcat proxy config with [new] rules... We will be making one-time-passwords mandatory for all super-users, on all of our Linux and FreeBSD hosts. We have disabled caching of svn passwords, and removed all currently cached svn passwords across all hosts..."

See also: skeptikal.org blog


Virtualization and Cloud Security Modeled on NAC
Andreas M. Antonopoulos, Network World

"Virtualization and cloud computing have disrupted the security industry to its core. We have not quite figured out how to deal with very dynamic infrastructure while most security is implemented in a mostly static ring of devices surrounding the resources they protect. We're still arguing about where the security should be positioned: in a hardware device outside the virtualized pool of resources, or embedded in the hypervisor, or running in a virtual machine? The answer is both, but the real issue is how to orchestrate and coordinate between the two. When it comes to orchestrating security for a very dynamic environment, the answer somewhat surprisingly comes from network access control (NAC)...

There are two big problems with securing virtualized resources. First, the resources may be dynamic and transient: servers are cloned and launched unexpectedly, and they may move around with VMware's VMotion or an equivalent. Second, security requires both network and computing affinity. With virtualization, those two things are at odds: getting nearest to the network flows puts you in the hypervisor or virtual machine, where computation power is limited and shared with the actual workloads. An appliance gives you compute power with specialized hardware but moves you away from the workloads' I/O.

Ideally, you should have compute-expensive tasks done outside the pool on dedicated hardware, with the network interception and control points closest to the workload and working with the hypervisor. Ideally, the two would collaborate with each other and with the virtualization system through orchestration.

That's exactly the set of problems that NAC attempts to address. With NAC you have endpoints (laptops, smartphones, desktops, printers) connecting to switches ad-hoc and in a transient fashion. Security must be coordinated between the stuff that runs on the endpoint (antivirus, policies and so on) and the stuff that needs to run in the network (firewalls, intrusion detection/prevention) while applying policies dynamically as each endpoint arrives on the scene..."

See also: the Trusted Computing Group (TCG)


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
ISIS Papyrus http://www.isis-papyrus.com
Microsoft Corporation http://www.microsoft.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2010-04-13.html  —  Legal stuff
Robin Cover, Editor: robin@oasis-open.org