XML Daily Newslink. Wednesday, 28 April 2010

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Oracle Corporation http://www.oracle.com



Emerging Cloud Identity Management Standards Boost Cloud Security
Laura Smith, TechTarget Search Security

"Cloud computing identity management standards groups are clamoring to ensure the open and secure exchange of identities among cloud providers and their customers. One of the largest efforts to create cloud computing identity management standards is InCommon, whose participants include more than 116 institutions of higher education, forty-one service providers, and six federal agencies and nonprofit groups. The group coordinates common definitions and guidelines for security, privacy and data interchange among identity providers (such as higher-education institutions) and cloud service providers to validate that both parties are who they commit to be and are acting in good faith.

This information then is encapsulated in metadata that is included within certificates, allowing the identity provider and the service provider to share information. InCommon presently uses two community-developed products for exchanging information: Security Assertion Markup Language (SAML), an XML-based standard for communicating identity information between organizations; and Shibboleth, a Web-based single sign-on service that supports authentication for remote service requests....
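For readers who have not seen SAML's shape, the following is a minimal sketch of the kind of XML an identity provider asserts about a user. It assumes the SAML 2.0 assertion namespace (urn:oasis:names:tc:SAML:2.0:assertion); the issuer, subject, and attribute values are hypothetical, and it does not reproduce InCommon's or Shibboleth's actual message flow, conditions, or signatures.

    # Minimal sketch of a SAML-style assertion document (illustrative only).
    # Issuer, subject, and attributes below are hypothetical examples.
    import xml.etree.ElementTree as ET

    SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion"
    ET.register_namespace("saml", SAML_NS)

    def build_assertion(issuer: str, subject: str, attributes: dict) -> bytes:
        """Build a bare-bones assertion: who asserts, about whom, with which attributes."""
        assertion = ET.Element(f"{{{SAML_NS}}}Assertion", {"Version": "2.0", "ID": "_example-1"})
        ET.SubElement(assertion, f"{{{SAML_NS}}}Issuer").text = issuer
        subj = ET.SubElement(assertion, f"{{{SAML_NS}}}Subject")
        ET.SubElement(subj, f"{{{SAML_NS}}}NameID").text = subject
        stmt = ET.SubElement(assertion, f"{{{SAML_NS}}}AttributeStatement")
        for name, value in attributes.items():
            attr = ET.SubElement(stmt, f"{{{SAML_NS}}}Attribute", {"Name": name})
            ET.SubElement(attr, f"{{{SAML_NS}}}AttributeValue").text = value
        return ET.tostring(assertion, encoding="utf-8")

    # Example: a campus identity provider asserting an affiliation attribute.
    print(build_assertion("https://idp.example.edu/idp", "student@example.edu",
                          {"eduPersonAffiliation": "member"}).decode())

A production assertion in an InCommon-style federation would additionally carry validity conditions, a digital signature, and metadata-driven trust; Shibboleth handles those details for deployers.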

Another standards group, the Trusted Cloud Initiative (TCI), aims to help cloud providers develop industry-recommended, secure and interoperable solutions. The TCI will build on the cloud computing best practices guidelines recently published by the CSA with regard to identity provisioning, authentication, federation and authorization in a cloud environment. Those recommendations are heavily geared to open standards. For example, the CSA recommends: (1) Using standard connectors provided by cloud providers that preferably are built on the Service Provisioning Markup Language (SPML) schema. If a cloud provider does not currently offer SPML, the CSA recommends that enterprises request it. (2) Evaluating proprietary authentication schemas used by Software as a Service and Platform as a Service providers, such as a shared encrypted cookie. The general preference should be for using open standards. (3) Making sure that applications are designed to accept such formats as SAML or WS-Federation, a specification that allows disparate security realms to broker information about identities, attributes and authentication. (4) Checking that any local authentication service implemented by a cloud provider is compliant with the Initiative for Open Authentication (OATH), which is a collaboration of device, platform and application vendors that hope to foster the use of strong authentication across networks, devices and applications...
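The OATH reference in recommendation (4) centers on HMAC-based one-time passwords (HOTP, RFC 4226) and the time-based variant (TOTP). A minimal sketch of the HOTP computation using only the Python standard library is shown below; the shared secret and counter are illustrative values, not a real provisioning flow.

    # Sketch of the HOTP algorithm (RFC 4226), the core of OATH-style strong authentication.
    # The secret and counter are illustrative values only.
    import hashlib
    import hmac
    import struct

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        """Compute an HMAC-SHA1 one-time password from a shared secret and a moving counter."""
        msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
        digest = hmac.new(secret, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                           # dynamic truncation offset
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    # The RFC 4226 test secret "12345678901234567890" yields 755224 for counter 0.
    print(hotp(b"12345678901234567890", 0))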

Yet another group, the Jericho Forum, has proposed a cloud architecture that uses security and identity management across all levels of the cloud (infrastructure, platform, software, process) in a design it calls collaboration-oriented architecture (COA). This concept involves a computer system designed to use third-party services beyond the "perimeter," or area of control. COA would organize the identification, authentication and authorization credentials of organizations, individuals and systems in a standardized form that could be validated across cloud platforms..."

See also: the InCommon Federation


Softerra Releases Open Source SPML Library for .NET
Staff, Softerra Announcement

Softerra announced that it has developed an open source SPML (Service Provisioning Markup Language) library that allows any interested party to enable SPML-interchange between its corporate applications and management platforms.

The "OASIS Service Provisioning Markup Language (SPML) Version 2" specification, ratified as an OASIS Standard in April 2006, "defines the concepts and operations of an XML-based provisioning request-and-response protocol. In the SPML model, a Requesting Authority (RA) or requestor is a software component that issues well-formed SPML requests to a Provisioning Service Provider (for example, portal applications that broker the subscription of client requests to system resources, or service subscription interfaces within an Application Service Provider). In an end-to-end integrated provisioning scenario, any component that issues an SPML request is said to be operating as a requestor. This description assumes that the requestor and its provider have established a trust relationship between them... A Provisioning Service Provider (PSP) or provider is a software component that listens for, processes, and returns the results for well-formed SPML requests from a known requestor. For example, an installation of an Identity Management system could serve as a provider..."

The Softerra SPML2 Library "is written in C# for creating SPML-enabled applications using Microsoft's .NET development framework. It supports the SPML version 2.0 specification based on DSML v2 Profile. However, it is flexible enough to be extended for using with any custom capabilities and profiles, according to Eugene Pavlov, Softerra Product Manger.

"A standard such as SPML helps to solve problems of identity interchange between any company's specific platforms or networks. Softerra's SPML2 Library will make it easier to take advantage of SPML. It is free for all kinds of use, including commercial use. Softerra offers code samples and test cases for the library. It helps you interpret, configure and issue standards-compliant provisioning requests across diverse identity infrastructures. Time is saved and the requirements are met... Softerra initially created the SPML2 Library for internal use while developing the SPML-enabled Softerra Adaxes v.2010.1. After several months of development and enhancement, Softerra decided to share the results with all interested parties..."

See also: the Softerra SPML2 open source library


Cloud Security Alliance Releases Cloud Controls Matrix Document
Staff, CSA Announcement

The Cloud Security Alliance has announced the availability of version 1.0 of the CSA Cloud Controls Matrix, a catalog of cloud security controls aligned with key information security regulations, standards, and frameworks. The matrix is based upon the CSA Security Guidance for Critical Areas of Focus in Cloud Computing... The CSA Cloud Controls Matrix is intended to help a wide range of IT practitioners bridge the gap between traditional security frameworks and guidance specific to cloud computing. It is initially available in spreadsheet form; future versions will be delivered in formats such as XML to ease solution integration.

The Cloud Security Alliance Controls Matrix (CM) is specifically designed to provide fundamental security principles to guide cloud vendors and to assist prospective cloud customers in assessing the overall security risk of a cloud provider. The CSA CM provides a controls framework that gives a detailed understanding of security concepts and principles aligned to the Cloud Security Alliance guidance in thirteen domains.

The foundations of the Cloud Security Alliance Controls Matrix rest on its customized relationship to other industry-accepted security standards, regulations, and controls frameworks such as the HITRUST CSF, ISO 27001/27002, ISACA COBIT, PCI, and NIST, and will augment or provide internal control direction for SAS 70 attestations provided by cloud providers. As a framework, the CSA CM provides organizations with the needed structure, detail and clarity relating to information security tailored to the cloud industry.

The CSA CM strengthens existing information security control environments by emphasizing business information security control requirements, identifies and reduces consistent security threats and vulnerabilities in the cloud, provides standardized security and operational risk management, and seeks to normalize security expectations, cloud taxonomy and terminology, and security measures implemented in the cloud..."
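The matrix itself ships as a spreadsheet, but conceptually each row is one cloud control mapped to the external frameworks listed above. Purely as an illustration of how an organization might consume such a mapping programmatically (this is not the CCM's actual schema or column layout, and the control and references are invented), a sketch:

    # Illustrative data structure for a controls-matrix row: one cloud control mapped to
    # several external frameworks. Control ID, text, and mappings are invented examples.
    from dataclasses import dataclass, field

    @dataclass
    class Control:
        control_id: str                                # e.g. a domain-scoped identifier
        domain: str                                    # one of the CSA guidance domains
        description: str
        mappings: dict = field(default_factory=dict)   # framework name -> reference

    example = Control(
        control_id="IS-EXAMPLE-01",
        domain="Information Security",
        description="Illustrative control: encrypt tenant data at rest.",
        mappings={"ISO 27001/27002": "A.10.x", "COBIT": "DS5.x",
                  "PCI DSS": "3.x", "NIST": "SP 800-53 SC-x"},
    )

    # List the external frameworks this control maps to.
    print(sorted(example.mappings))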

See also: the CSA Cloud Controls Matrix announcement


Business Entities and the Business Entity Definition Language (BEDL)
Prabir Nandi, Dieter König, Simon Moser, Rick Hull (et al), IBM Technical Paper

"Part 1 of this series introduces the concept of business entities as a means of representing the business view of data. It proposes two new standards, the Business Entity Definition Language (BEDL) and BPEL4Data, an extension to WS-BPEL for the holistic design and execution of process with Business Entities. Part 2 will cover the BPEL4Data language elements in depth, and discuss the architecture that brings together the BPEL family of languages (WS-BPEL, WS-HumanTask) with BEDL in execution scenarios.

The specification and deployment of business processes and operations is crucial to the successful management of medium and large-scale enterprises. In most business process management tool suites, data is treated mostly as an afterthought. Activities and their flows are the main abstractions, and the data manipulated by the processes is essentially hidden in process variables... Over the past decade, a new approach to business process and operations modeling has emerged that is based on Business Entities. Business Entities provide a new basis for specifying business operations that combines data and process at a fundamental level... This article introduces a way to take advantage of the Business Entity approach while still using and building upon standards such as WS-BPEL and BPMN. This enables the use of Business Entities in conjunction with the large industrial investment in, and vast embedded base of, tooling for these process-centric approaches. This article introduces a new proposed standard, called Business Entity Definition Language (BEDL), and describes how you can use it alongside process-centric technologies such as WS-BPEL and BPMN.

In order to bring the advantages of the Business Entities perspective into the existing world of WS-BPEL, BPMN, and similar process-centered technologies, this article introduces the BEDL variant of the Business Entity notion. This variant focuses on four essential aspects of Business Entities: information model, (macro-level) lifecycle model, access policies based on role and lifecycle state, and notifications of state and data change events. Unlike much of the existing literature and practical applications of Business Entities to date, the BEDL variant does not include mechanisms to specify the detailed processing steps involved in a Business Entity lifecycle model; these can be specified in a separate but integrated fashion using, for example, WS-BPEL or BPMN.
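BEDL's own XML syntax is not reproduced in this summary, but the four aspects can be pictured with a small data-structure sketch. The field names and values below are invented for illustration; they are not BEDL elements.

    # Illustrative sketch of the four BEDL aspects for one Business Entity type.
    # Field names and values are invented for this example; they are not BEDL syntax.
    from dataclasses import dataclass, field

    @dataclass
    class BusinessEntityType:
        name: str
        information_model: dict                                 # attribute name -> type
        lifecycle_states: list                                  # macro-level lifecycle model
        access_policies: dict = field(default_factory=dict)     # (role, state) -> allowed ops
        notifications: list = field(default_factory=list)       # state/data change events

    order = BusinessEntityType(
        name="PurchaseOrder",
        information_model={"orderId": "string", "total": "decimal", "customer": "string"},
        lifecycle_states=["Created", "Approved", "Shipped", "Closed"],
        access_policies={("buyer", "Created"): ["read", "update"],
                         ("manager", "Created"): ["read", "approve"]},
        notifications=["onStateChange(Approved)", "onDataChange(total)"],
    )

    # A WS-BPEL or BPMN process would implement the steps that move the entity between
    # these states, while the BEDL-style definition governs data, access, and notifications.
    print(order.lifecycle_states)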

More specifically, we will describe an approach for specifying a complete Business Operations Model (BOM) using a BEDL specification that specifies the relevant Business Entity types, in combination with a WS-BPEL (or BPMN) specification that specifies the various processing steps used in conjunction with those Business Entity types. A runtime implementation of such BOMs can be supported using software components dedicated to implementing the BEDL specification and a conventional WS-BPEL (or BPMN) engine..." [Note: the article is available in PDF, along with sample schema files (XSD) and BEDL instance file (.bedl) in a ZIP package. See comments from Paul Vincent, Jean-Jacques Dubray, and Boris Lublinsky.]

See also: the PDF


A Scalable Lightweight Join Query Processor for RDF Data
Medha Atre, Vineet Chaoji, Mohammed Zaki, James Hendler; WWW2010 Paper

This paper was presented in the WWW 2010 Conference forum on Semi-Structured Data. "RDF is being used extensively to represent data from the fields of bioinformatics, life sciences, social networks, and Wikipedia as well. Since disk space is getting cheaper, storing this huge RDF data does not pose as big a problem as executing queries on it. Querying these huge graphs requires scanning the stored data and the indexes created over it, reading that data into memory, executing query algorithms on it, and building the final results of the query. Hence, a desired query processing algorithm is one which: (i) keeps the underlying size of the data small (using compression techniques), (ii) can work on the compressed data without uncompressing it, and (iii) doesn't build large intermediate results...

The Semantic Web community, until now, has used traditional database systems for the storage and querying of RDF data. The SPARQL query language also closely follows SQL syntax. As a natural consequence, most of the SPARQL query processing techniques are based on database query processing and optimization techniques. For SPARQL join query optimization, previous works like RDF-3X and Hexastore have proposed to use 6-way indexes on the RDF data. Although these indexes speed up merge-joins by orders of magnitude, for complex join queries generating large intermediate join results, the scalability of the query processor still remains a challenge.

In this paper, we introduce (i) BitMat — a compressed bit-matrix structure for storing huge RDF graphs, and (ii) a novel, lightweight SPARQL join query processing method that employs an initial pruning technique, followed by a variable-binding-matching algorithm on BitMats to produce the final results. Our query processing method does not build intermediate join tables and works directly on the compressed data. We have demonstrated our method against RDF graphs of up to 1.33 billion triples — the largest among results published until now (single-node, non-parallel systems), and have compared our method with the state-of-the-art RDF stores — RDF-3X and MonetDB.

Our results show that the competing methods are most effective with highly selective queries. On the other hand, BitMat can deliver 2-3 orders of magnitude better performance on complex, low-selectivity queries over massive data..."
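The pruning idea can be pictured with a toy bitset sketch: keep candidate variable bindings as bit vectors and intersect them with bitwise AND before materializing any join rows. This is a drastic simplification of BitMat for illustration only; it is not the authors' data layout or algorithm.

    # Toy illustration of bitset-based pruning for an RDF join, in the spirit of BitMat:
    # candidate variable bindings are kept as bit vectors and intersected with bitwise AND
    # before any join rows are materialized. This simplifies the paper's actual structures.

    # A tiny RDF graph as (subject, predicate, object) triples.
    triples = [
        ("alice", "knows",   "bob"),
        ("alice", "knows",   "carol"),
        ("bob",   "knows",   "carol"),
        ("bob",   "worksAt", "acme"),
        ("carol", "worksAt", "acme"),
    ]

    # Dictionary-encode every resource so each one owns a bit position.
    resources = sorted({r for s, _, o in triples for r in (s, o)})
    bit = {r: 1 << i for i, r in enumerate(resources)}

    def matches(pattern):
        """Bit vector of bindings for the single variable (None) in a triple pattern."""
        vec = 0
        for s, p, o in triples:
            if (pattern[0] in (None, s)) and (pattern[1] == p) and (pattern[2] in (None, o)):
                vec |= bit[s if pattern[0] is None else o]
        return vec

    # Query: ?x such that alice knows ?x AND ?x worksAt acme.
    candidates = matches(("alice", "knows", None)) & matches((None, "worksAt", "acme"))
    print([r for r in resources if bit[r] & candidates])   # -> ['bob', 'carol']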

See also: the WWW2010 conference papers listing


Compellent Launches New ZFS-Based Storage System
Chris Preimesberger, eWEEK

"Compellent, a progressive-thinking data storage company that was an early mover to the idea of providing unified connectivity, has launched the first network-attached system based entirely on the open-source Zettabyte File System (ZFS). The 128-bit ZFS, which numerous storage network developers have described as extraordinarily fast, is an in-demand open-source software package for handling unstructured data in any type of block, file or drive as a virtualized single pool of storage. ZFS, developed mostly at Sun Microsystems several years ago, is based on a transactional object model that removes most of the traditional constraints associated with I/O operations...

In its new zNAS storage system, Compellent has added its own secret sauce: consolidated file- and block-level storage on a ZFS-based platform using its own Fluid Data architecture. The architecture increases storage utilization by automatically (according to predesigned policies) tiering file storage at the block level, by thin-provisioning storage for unstructured data, and by delivering rapid data recovery and site-to-site replication..."

According to the announcement: "Unstructured data is expected to grow 60 percent annually, and managing separate pools of file and block data can quickly become costly and complex. Compellent zNAS unified storage integrates the highly scalable ZFS file system with a Fluid Data architecture to consolidate all enterprise data into a single, virtualized pool. zNAS is ideal for midsize and large enterprises with mixed Unix, Linux and Windows CIFS/NFS environments. Regardless of the size or type of data, with Compellent zNAS, administrators can actively, intelligently manage data throughout its lifecycle with block-based applications such as Thin Provisioning, Automated Tiered Storage, Replays (continuous snapshots), Boot from SAN and Thin Replication.

The Fluid Data architecture enables the Compellent SAN to actively manage data at a more granular level. Detailed information about each block is captured in action, providing unprecedented system intelligence inside the volume. A sophisticated data movement engine uses this metadata, or data about the data, to intelligently store, recover and manage your data... Business applications are implemented faster, information to make decisions is always available, new technologies are instantly deployed, and data is continuously protected against downtime and disaster..."

See also: the Wikipedia summary of ZFS


Cisco Expands Cloud-Based Security Services
Ellen Messmer, Network World

"In the latest chapter of what it calls its 'Secure Borderless Network' initiative, Cisco announced an expanded reporting capability for its ScanSafe Web-filtering service as well as the addition of a data-loss prevention option for the company's cloud-based e-mail security service.

Cisco, which acquired ScanSafe in December, says its offering now provides user behavior trends, details on any company policy violations, malware statistics and forensic analysis information... NewPage, a Miamisburg, Ohio, coated-paper manufacturer, uses ScanSafe to control Web usage for thousands of employees; it has been testing the new reporting tool for a few months and has seen a dramatic improvement...

Cisco also announced it's adding a DLP [data loss prevention] and encryption capability to its IronPort-based hosted e-mail security service, which customers can use in lieu of installing the IronPort appliance on their own premises. The DLP service option for the cloud is based on technology that Cisco licenses from RSA and had already added to the IronPort appliance last year.

Cisco says the new cloud-service option includes a way to transmit TLS-protected e-mail from the customer's e-mail server to a Cisco data center — Cisco claims it will have 33 of these data centers globally by year-end — where the e-mail would be filtered to make sure it doesn't contain sensitive information before re-transmitting it. Cisco acknowledges it's competing against Google's Postini service, which has some basic DLP features... Pricing on the ScanSafe service typically runs $2 to $5 per user per month, and the DLP feature in Cisco's e-mail security service costs between $1.25 and $1.50 per user per month..."
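The mechanics of handing outbound mail to such a filtering relay over TLS are standard SMTP. A minimal sketch using Python's standard library follows; the relay hostname, credentials, and addresses are placeholders, not Cisco service endpoints, and the DLP and encryption policy would be applied by the hosted service, not by this client code.

    # Minimal sketch of relaying outbound mail to a filtering service over STARTTLS.
    # Hostname, credentials, and addresses are placeholders, not Cisco endpoints.
    import smtplib
    from email.mime.text import MIMEText

    RELAY_HOST = "smtp-relay.example.net"   # hypothetical hosted-security relay
    RELAY_PORT = 587

    msg = MIMEText("Quarterly numbers attached.")
    msg["Subject"] = "Report"
    msg["From"] = "sender@example.com"
    msg["To"] = "recipient@example.org"

    with smtplib.SMTP(RELAY_HOST, RELAY_PORT) as server:
        server.starttls()                        # protect the hop to the filtering relay
        server.login("example-tenant", "example-password")
        server.send_message(msg)                 # relay applies DLP/encryption policy downstream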

See also: the Cisco Data-Loss Prevention announcement


Expanding the Cloud: Opening the AWS Asia Pacific Region
Werner Vogels, Blog

"Amazon Web Services [AWS] took an "important step in serving customers worldwide: the AWS Asia Pacific (Singapore) Region is now launched. Customers can now store their data and run their applications from the Singapore location in the same way they do from the other U.S. and European Regions.

On the importance of Regions in Cloud Computing: "Quite often 'The Cloud' is portrayed as something magically transparent that lives somewhere in the internet. This portrayal can be a desirable and useful abstraction when discussing cloud services at the application and end-user level.

However, when speaking about cloud services in terms of Infrastructure-as-a-Service (IaaS), it is very important to make the geographic locations of services more explicit. There are four main reasons to do so: (1) Performance — For many applications and services, data access latency to end users is important. You need to be able to place your systems in locations where you can minimize the distance to your most important customers. The new Singapore Region offers customers in APAC lower-latency access to AWS. (2) Availability — The cloud makes it possible to build resilient applications to make sure they can survive different failure scenarios... By placing instances in different Availability Zones, developers can build systems that can survive many complex failure scenarios. The Asia Pacific (Singapore) region launches with two Availability Zones. (3) Jurisdictions — Some customers face regulatory requirements regarding where data is stored. AWS Regions are independent, which means objects stored in a Region never leave the Region unless you transfer them out. For example, objects stored in the EU (Ireland) Region never leave the EU. Customers thus maintain control and maximum flexibility to architect their systems in a way that allows them to place applications and data in the geographic jurisdiction of their choice. (4) Cost-effectiveness — Cost-effectiveness continues to be one of the key decision making factors in managing IT infrastructure, whether physical or cloud-based. AWS has a history of continuously driving costs down and letting customers benefit from these cost reductions in the form of reduced pricing...
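On point (1), a deployer can measure which Region is closest in network terms before placing systems there. A rough sketch follows; the endpoint hostnames are assumptions based on AWS's regional endpoint naming and should be checked against current documentation, and real latency testing would use repeated measurements from representative client locations rather than a single connect.

    # Rough sketch: estimate network proximity to candidate regions by timing TCP connects.
    # Endpoint hostnames are assumptions (typical AWS regional endpoint naming); verify them.
    import socket
    import time

    CANDIDATE_ENDPOINTS = {
        "us-east-1":      "ec2.us-east-1.amazonaws.com",
        "eu-west-1":      "ec2.eu-west-1.amazonaws.com",
        "ap-southeast-1": "ec2.ap-southeast-1.amazonaws.com",   # the new Singapore Region
    }

    def connect_time(host: str, port: int = 443, timeout: float = 3.0) -> float:
        """Seconds to establish a TCP connection; infinity if unreachable."""
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return time.monotonic() - start
        except OSError:
            return float("inf")

    # Pick the region with the lowest single-sample connect time (a crude proxy for latency).
    timings = {region: connect_time(host) for region, host in CANDIDATE_ENDPOINTS.items()}
    print(min(timings, key=timings.get), timings)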

Regions have become a very important tool for worldwide rollout of applications. The uniformity of the environment allows customers who have built applications for one Region to easily launch the application in a different Region. For example, there is a large European Insurance company that is looking to expand their EU-based product offerings to the Asia Pacific market..."

See also: the announcement


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
ISIS Papyrus http://www.isis-papyrus.com
Microsoft Corporation http://www.microsoft.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2010-04-28.html
Robin Cover, Editor: robin@oasis-open.org