The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: August 25, 2010
XML Daily Newslink. Wednesday, 25 August 2010

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
ISIS Papyrus http://www.isis-papyrus.com



DMTF Launches DASH/CDM Conformance Programs and Certification Registry
Staff, Distributed Management Task Force Announcement

The Distributed Management Task Force (DMTF), an organization bringing the IT industry together to collaborate on systems management standards development, validation, promotion, and adoption, has "announced the availability of conformance programs for its Common Diagnostic Model (CDM) 1.0 and Desktop and mobile Architecture for System Hardware (DASH) 1.0 standards. Fourteen (14) products from DMTF member companies Dell, HP, and Intel have been tested and validated for conformance to these standards.

DMTF conformance programs provide test suites to vendors, enabling them to test DMTF-based solutions for conformance to their respective specifications. Vendors who validate their products' test results are eligible to list their products in the DMTF Certification Registry. This provides IT customers and vendors with increased confidence that products listed in the registry are manageable as defined by DMTF standards.

Part of the DMTF's Common Information Model (CIM), the Common Diagnostic Model (CDM) specification is widely used throughout the industry to evaluate the health of hardware systems in multi-vendor environments. Companies supporting the CDM 1.0 conformance program include Dell, HP, IBM, and Realtek. Desktop and Mobile Architecture for System Hardware (DASH) is a suite of specifications that standardize the management of desktop and mobile systems independent of machine state, operating platform, or vendor. Companies supporting the DASH 1.0 conformance program include AMD, Dell, Intel, VMware, American Megatrends, Inc., Broadcom, Fujitsu, Hitachi, HP, IBM, and Realtek. Additionally, DMTF is developing a conformance program for Systems Management Architecture for Server Hardware (SMASH). SMASH 1.0 is a command line protocol that provides a common management interface for managing a heterogeneous server environment.

Each conformance program has developed its own set of test tools based on the DMTF specifications to drive conformance testing. Using these tools, vendors test products in their own labs using the appropriate DMTF conformance test suite and submit results to DMTF for validation and certification..."

See also: DMTF Conformance Programs


IETF Internet Draft: Simple HTTP State Management Mechanism
Adam Barth (ed), IETF Internet Draft

A first public IETF Internet Draft has been published for a Standards Track Simple HTTP State Management Mechanism. This document describes a simple HTTP state management mechanism, called 'cake', that lets HTTP servers maintain stateful sessions with HTTP user agents. This mechanism is harmonized with the same-origin security model and provides both confidentiality and integrity protection against active network attackers. In addition, the mechanism is robust to cross-site request forgery attacks.

From the document Introduction: "HTTP does not provide servers with a robust mechanism for tracking state between requests. The dominant HTTP state management mechanism in use on the Internet, known as cookies, has a number of historical infelicities that impair its security. In particular, cookies have the following serious defects: (1) Cookies provide no integrity protection against active network attackers. Even if the example.com HTTP server always employs TLS, a network attacker can manipulate the server's cookies by spoofing responses. (2) Cookies assume that a given host name trusts all of its superdomains and siblings. In particular, 'students.example.edu' can manipulate the cookies used by 'grades.example.edu', potentially resulting in security vulnerabilities. (3) Cookies indicate only which user agent issued a given HTTP request. They provide no information about why the user agent issued that request. This design flaw leads many HTTP servers to be vulnerable to cross-site request forgery attacks, in which the attacker tricks the server into performing an action on behalf of the user by causing the user agent to issue an HTTP request to the server with the user's cookies.

This document defines a simple HTTP state management mechanism that addresses these shortcomings of cookies. In this mechanism, the server stores a secret key at the user agent, called the 'cake-key'. When the user agent issues subsequent HTTP requests to the server, the user agent sends a string, called a cake, containing an HMAC (using the cake-key) of the security-origin that generated the request. By whitelisting expected cakes, the server can accept requests from origins of its choice, mitigating cross-site request forgery vulnerabilities.

Unlike cookies, which can leak from one host to another and from one scheme to another (e.g., http vs. https), the cake-key is scoped to a security-origin. Therefore, an active network attacker who might compromise 'http://example.com' cannot manipulate the state for 'https://example.com'..."
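The cake computation described above can be sketched in a few lines. This is only an illustration of the idea, not the draft's wire format: the choice of SHA-256 as the HMAC hash, the hex encoding, and all function and variable names below are assumptions.

```python
import hashlib
import hmac

def make_cake(cake_key: bytes, security_origin: str) -> str:
    # The cake is an HMAC, keyed with the server-issued cake-key,
    # over the security-origin that generated the request.
    # (Hash choice and encoding are illustrative, not from the draft.)
    mac = hmac.new(cake_key, security_origin.encode("ascii"), hashlib.sha256)
    return mac.hexdigest()

def origin_allowed(cake: str, cake_key: bytes, whitelist: list[str]) -> bool:
    # Server side: accept the request only if its cake matches the
    # cake of a whitelisted origin (constant-time comparison).
    return any(
        hmac.compare_digest(cake, make_cake(cake_key, origin))
        for origin in whitelist
    )

key = b"server-issued-secret"
cake = make_cake(key, "https://example.com")
assert origin_allowed(cake, key, ["https://example.com"])
assert not origin_allowed(cake, key, ["http://attacker.example"])
```

Because the cake binds the request to its originating security-origin, a forged cross-site request carries the wrong cake and is simply not on the server's whitelist.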


Red Hat Submits Deltacloud APIs as Potential Industry Standard
Joab Jackson, InfoWorld

"As the industry call for cloud interoperability grows more fervent, open source enterprise software company Red Hat has submitted its cloud platform, Deltacloud, to the DMTF (Distributed Management Task Force) as a potential standard for cloud interoperability. Lack of interoperability among different cloud providers is one of the major concerns that prevent enterprises from adopting cloud computing, according to Gary Chen, an IDC research manager covering enterprise virtualization software, in a presentation that accompanied Red Hat's announcement.

Red Hat launched Deltacloud in September 2009 as a set of open source APIs that could be used to move cloud-based workloads among different IaaS (infrastructure as a service) providers, such as Amazon and Rackspace. To encourage external contributions to Deltacloud, Red Hat relinquished the Deltacloud code base to the Apache Incubator, a repository for programs that may eventually be overseen by the Apache Software Foundation.

The company also started a site, called APIwanted.org, where external parties can submit suggestions for additional APIs and other desired functionality for Deltacloud. In addition to Red Hat itself, other companies participating in the development of Deltacloud, or using it in some way, include Cisco, Dell, Hewlett-Packard, IBM, Ingres, and Intel...

DMTF's Cloud Management Working Group will consider adopting Deltacloud as a standard. DMTF oversees existing standards such as CDM (the Common Diagnostic Model), DASH (the Desktop and Mobile Architecture For System Hardware), and OVF (the Open Virtualization Format)..."

See also: the Red Hat announcement


BPMN Model Interchange: Clearing the Hurdles
Bruce Silver, BPMS Watch Blog

"I've been thinking a lot about the XML side of Business Process Model and Notation (BPMN). While we usually think of BPMN as a diagramming standard, it is also, in principle, a model interchange standard: an XML format that can be exported from tool A and imported into tool B. BPMN 2.0, XPDL 2.1 (for BPMN 1.2), and XPDL 2.2 (for BPMN 2.0) all purport to deliver this. In reality, however, BPMN model interchange faces serious (some would say insurmountable) hurdles. I have been working on a number of tools to overcome these obstacles.

To achieve BPMN model interchange, you need: (1) An explicitly enumerated set of interchangeable model elements and attributes. The full BPMN 2.0 schema is too open-ended for unrestricted interchange... (2) Modeling tools that unambiguously support all the elements and attributes in those conformance classes, meaning the mapping of diagram shapes and labels to the standard is unambiguous... (3) XML export (as BPMN 2.0 or XPDL) from those tools. Process Modeler exports either BPMN 2.0 or (in the Pro edition) XPDL 2.1 for BPMN 1.2. There are a couple of problems with it in the current build, but I expect those to be fixed shortly. Visio 2010 Premium has no native XPDL or BPMN export...

You also need: (4) Validation of user-created models to support effective interchange, where four distinct types of validation are required: [1] Adherence to the rules of the BPMN specification; [2] Adherence to the palettes specified by the Descriptive and Analytic conformance classes; [3] Schema validation of the exported XML, either XPDL or BPMN 2.0; you might think that tools would always produce schema-valid XML, but they don't; [4] Adherence to certain conventions that allow unambiguous serialization of the diagram; these conventions go beyond the requirements in the BPMN spec, and they could be thought of as style rules. (5) Process modeling and executable process design tools that can import and edit BPMN 2.0 or XPDL that passes all of the aforementioned validation tests. This is the biggest gap right now. The primary need is to export from a business-friendly BPMN tool like itp or Visio 2010 and import into a BPMN 2.0-based BPMS like Oracle BPM11g...
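Validation type [3] above, schema validation of the exported XML, is normally delegated to an XSD-aware library, but as the author notes, exporters do not always produce even structurally sane output. As a minimal illustration only, a pre-check like the following can catch well-formedness and convention problems before full XSD validation; the function name, the root-element check, and the "every flow element carries an id" convention are assumptions of this sketch, not part of Silver's tools.

```python
import xml.etree.ElementTree as ET

# BPMN 2.0 model namespace as published by OMG.
BPMN_NS = "http://www.omg.org/spec/BPMN/20100524/MODEL"

def precheck_bpmn(xml_text: str) -> list[str]:
    """Return a list of problems found in an exported BPMN 2.0 file.

    This is only a pre-check (well-formedness, expected root element,
    id conventions); real schema validation needs an XSD-capable
    library such as lxml plus the official BPMN 2.0 schemas.
    """
    problems = []
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError as exc:
        return [f"not well-formed: {exc}"]
    if root.tag != f"{{{BPMN_NS}}}definitions":
        problems.append(f"unexpected root element: {root.tag}")
    for elem in root.iter():
        local = elem.tag.rsplit("}", 1)[-1]
        # Interchange conventions (beyond the spec) typically require
        # stable ids on flow elements so diagram references resolve.
        if local in ("process", "task", "sequenceFlow") and "id" not in elem.attrib:
            problems.append(f"<{local}> missing id attribute")
    return problems

sample = (
    '<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL">'
    '<process id="p1"><task id="t1"/></process></definitions>'
)
assert precheck_bpmn(sample) == []
```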

With the tools I will be providing, we should be able to get through step 4.[4], and I'm hoping to begin closer dialog with Oracle (and others) on step 5... I plan to have some of them available, at least for my clients and students, in the next few weeks..."

See also: the Business Process Model and Notation (BPMN)


OASIS SCA-J Technical Committee Approves Test Cases/Assertions Specs
Mike Edwards and David Booz (eds), OASIS Public Review Drafts

Members of the OASIS Service Component Architecture / J (SCA-J) Technical Committee have released two approved Committee Drafts for public review through October 24, 2010. This TC was chartered to develop specifications that standardize the use of Java technologies within a Service Component Architecture (SCA) domain.

The TestCases for the SCA_J Common Annotations and APIs Version 1.1 Specification defines the TestCases for the SCA Java Common Annotations and APIs specification. The TestCases represent a series of tests that an SCA runtime must pass in order to claim conformance to the requirements of the SCA Java Common Annotations and APIs specification.

SCA/J CAA test cases follow a standard structure, divided into two main parts: (1) the Test Client, which drives the test and checks that the results are as expected; and (2) the Test Application, which forms the bulk of the test case and which consists of Composites, WSDL files, XSDs, and code artifacts such as Java classes, organized into a series of SCA contributions. The basic idea is that the Test Application runs on the SCA runtime that is under test, while the Test Client runs as a standalone application, invoking the Test Application through one or more service interfaces. The test client is designed as a standalone application. The version built here is a Java application which uses the JUnit test framework, although in principle the client could be built using another implementation technology. The test client is structured to contain configuration information about the testcase, which consists of metadata identifying the Test Application in terms of the SCA Contributions that are used and the Composites that must be deployed and run, and data indicating which service operation(s) must be invoked, with input data and expected output data (including exceptions for expected failure cases)...
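The draft itself builds the Test Client in Java with JUnit, but, as it notes, the same structure could be expressed in another technology. The sketch below shows the shape of such a client (metadata naming the contributions and composite, plus an invocation checked against expected output) using Python's unittest; every name here, including the contribution and composite identifiers and the `invoke` stand-in, is hypothetical.

```python
import unittest

class SampleCaaTestCase(unittest.TestCase):
    """Sketch of an SCA/J CAA-style test client (all names hypothetical).

    Metadata identifies the Test Application: which SCA contributions
    are used and which composite must be deployed and run.
    """
    CONTRIBUTIONS = ["Test_Sample_0001", "General"]
    COMPOSITE = "Test_Sample_0001.composite"

    def invoke(self, operation: str, payload: str) -> str:
        # Stand-in for the real service invocation: an actual client
        # would call the deployed Test Application over one of its
        # service interfaces on the SCA runtime under test.
        return f"{operation}({payload}) invoked"

    def test_operation_with_expected_output(self):
        # Input data and expected output data, per the testcase metadata.
        result = self.invoke("operation1", "input1")
        self.assertEqual(result, "operation1(input1) invoked")

    def test_expected_failure_case(self):
        # Expected-failure cases check that the right exception surfaces.
        with self.assertRaises(TypeError):
            self.invoke("operation1")  # missing payload argument
```

The value of the structure is the separation: the runtime under test only ever sees the Test Application, so the same client metadata can be pointed at any conformant SCA runtime.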

The Test Assertions for the SCA_J Common Annotations and APIs Version 1.1 Specification defines the Test Assertions for the SCA/J CAA specification. The Test Assertions represent the testable items relating to the normative statements made in the SCA Java Common Annotations and APIs specification. The Test Assertions provide a bridge between the normative statements in the specification and the conformance TestCases which are designed to check that an SCA runtime conforms to the requirements of the specification.

See also: Test Assertions


First Public Working Draft: Prohibiting SSL Version 2.0
Sean Turner and Tim Polk (eds), IETF Internet Draft

An initial level -00 IETF Internet Draft has been published for the Standards Track specification Prohibiting SSL Version 2.0. This document requires that when TLS clients and servers establish connections, they never negotiate the use of Secure Sockets Layer (SSL) version 2.0. This document updates the backward compatibility sections found in 'The Transport Layer Security (TLS) Protocol', published as IETF RFC 5246.

"Many protocols specified in the IETF rely on Transport Layer Security (TLS) for security services. This is a good thing, but some TLS clients and servers also support negotiating the use of SSL version 2.0; however, this version does not provide the expected level of security. SSL version 2.0 has known deficiencies. This document describes those deficiencies, and it requires that TLS clients and servers never negotiate the use of SSL version 2.0.

SSL version 2.0 deficiencies include: (1) Message authentication uses MD5 ('The MD5 Message-Digest Algorithm'). Most security-aware users have already moved away from any use of MD5. (2) Handshake messages are not protected. This permits a man-in-the-middle to trick the client into picking a weaker cipher suite than they would normally choose. (3) Message integrity and message encryption use the same key, which is a problem if the client and server negotiate a weak encryption algorithm. (4) Sessions can be easily terminated. A man-in-the-middle can easily insert a TCP FIN to close the session, and the peer is unable to determine whether or not it was a legitimate end of the session... Because of the deficiencies noted in the previous sections, TLS implementations MUST NOT support SSL 2.0. The specific changes to TLS, including earlier versions, are as follows: [i] TLS clients MUST NOT use SSL 2.0 ClientHello messages. [ii] TLS servers MUST NOT accept SSL 2.0 ClientHello messages..."
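On a current TLS stack the draft's requirement can be expressed directly as configuration. The sketch below uses Python's ssl module and, assuming a modern OpenSSL build (which drops SSLv2 support entirely), sets an explicit protocol floor; note that TLS 1.2 is a stricter floor than the draft itself requires, which only prohibits SSL 2.0.

```python
import ssl

# Client context that can never negotiate SSL 2.0: setting an explicit
# minimum protocol version makes the policy visible in configuration.
# (TLS 1.2 as the floor is this sketch's choice, stricter than the draft.)
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Anything below the floor, including an SSL 2.0 ClientHello response,
# now fails the handshake instead of silently downgrading.
assert ctx.minimum_version == ssl.TLSVersion.TLSv1_2
```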

Credit: "The idea for this document was inspired by discussions between Peter Saint-Andre, Simon Josefsson, and others on the XMPP mailing list. We would also like to thank Paul Hoffman, Yaron Sheffer, Nikos Mavrogiannopoulos, Yngve Pettersen, Marsh Ray, and Martin Rex for reviews and comments on earlier versions of this document."

See also: The Transport Layer Security (TLS) Protocol Version 1.2 (RFC 5246)


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation: http://www.ibm.com
ISIS Papyrus: http://www.isis-papyrus.com
Microsoft Corporation: http://www.microsoft.com
Oracle Corporation: http://www.oracle.com
Primeton: http://www.primeton.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/

Document URI: http://xml.coverpages.org/newsletter/news2010-08-25.html
Robin Cover, Editor: robin@oasis-open.org