XML Daily Newslink. Monday, 23 August 2010

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
ISIS Papyrus http://www.isis-papyrus.com



Consensus on the Future of Standards-Based Provisioning and SPML
Mark Diodati, Gartner Blog Network

"I had the honor of facilitating the Standards-Based Provisioning Special Interest Group at this year's Catalyst conference. The participants believe that standards-based provisioning is at a crossroads and wish to publish the following statement. The statement is based upon our conversation; all of the participants have reviewed it... The group readily achieved a consensus about two things: the need for standards-based provisioning and the qualities required for successful provisioning standard.

The participants have firsthand experience with the difficulties of proprietary provisioning from the perspective of both vendor and end-user organizations. The SIG meeting was particularly timely, as OASIS is evaluating the need for an SPML v3 standard. Additionally, the SaaS market is at a critical juncture as vendors look for standards-based solutions to the provisioning problem...

The second iteration of the SPML standard was approved in the spring of 2006 and included additional capabilities and operational modes. Because it tried to address every possible use case, building interoperable provisioning services on the SPML v2 standard became impractical. Organizations wishing to use SPML must write provisioning services specifically for each vendor's SPML implementation (if the vendor supports SPML at all). The difficulty in building a single, interoperable provisioning service has made the adoption of SPML by application developers a non-starter. Without adoption by enterprise and cloud application developers, SPML will not be adopted. In conclusion, the SPML v2 standard is broken.

The next iteration of SPML should focus on solving 'the connector problem' and provisioning use cases for cloud-based applications. That is, the next version of SPML should readily enable the development of simple, standards-conformant provisioning services for both enterprise and cloud applications. The next iteration of SPML needs to become simpler. It must support a simple use case for conformant, standards-based provisioning services. Additionally, the SPML standard is too imbalanced; it places too much burden on target applications. The participants assert that the next version of SPML must possess a 'simple' profile to be successful. The simple profile should include the following qualities [list follows]..."

See also: a proposal for SPML TNG/v3


IETF Internet Draft: The Atom Link "inline" Extension
James M. Snell (ed), IETF Internet Draft

An initial level -00 IETF Internet Draft has been published for the specification of The Atom Link 'inline' Extension. This specification is Informational: an IETF Informational specification "is published for the general information of the Internet community, and does not represent an Internet community consensus or recommendation. The Informational designation is intended to provide for the timely publication of a very broad range of responsible informational documents from many sources, subject only to editorial considerations and to verification that there has been adequate coordination with the standards process..."

The Atom Link 'inline' Extension "adds a mechanism to the Atom Syndication Format which publishers of Atom Feed and Entry documents can use to embed representations of linked resources into a child element of the 'atom:link' element."

Details: "The 'atom:inline' element may be used as the child of an 'atom:link' element to embed representations of the resource referenced by the containing 'atom:link'. The 'atom:inline' element may contain a type attribute whose value specifies the MIME media type of the representation contained by the element. If the type attribute is not provided, Atom Processors must behave as though the type attribute were present with a value equal to that specified by the containing 'atom:link' elements type attribute. The value of the type attribute must not be a composite type... An 'atom:link' element may contain any number of 'atom:inline' elements, but must not contain more than one with the same type attribute value. If the value of the type attribute begins with 'text/' (case insensitive), the content of 'atom:inline' must not contain child elements.

If the value of the type attribute is an XML media type, per IETF RFC 3023, or ends with '+xml' or '/xml' (case insensitive), the content of the 'atom:inline' element may include child elements and should be suitable for handling as the indicated media type. This would normally mean that the 'atom:inline' element would contain a single child element that would serve as the root element of the XML document of the indicated type. For all other values of the type attribute, the content of 'atom:inline' must be a valid Base64 encoding, as described in RFC 3548, section 3. When decoded, it should be suitable for handling as the indicated media type. In this case, the characters in the Base64 encoding may be preceded and followed in the 'atom:inline' element by white space, and lines are separated by a single newline (U+000A) character. The 'atom:inline' element may have an 'xml:lang' attribute, whose content indicates the natural language for the element and its descendants."
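
For illustration only (this fragment is not taken from the draft), a feed entry using the extension might look like the following sketch. The entry metadata and URIs are placeholders, and the fragment assumes 'atom:inline' is expressed in the Atom namespace, as the 'atom:' prefix used throughout the draft suggests:

    <entry xmlns="http://www.w3.org/2005/Atom">
      <id>urn:example:hypothetical-entry</id>
      <title>Example entry</title>
      <updated>2010-08-23T00:00:00Z</updated>
      <!-- The linked resource is embedded inline so consumers can avoid a second fetch. -->
      <link rel="alternate" type="application/xhtml+xml"
            href="http://example.org/2010/08/example">
        <!-- XML media type: the content is a single child element that serves
             as the root of the embedded document. -->
        <inline type="application/xhtml+xml">
          <div xmlns="http://www.w3.org/1999/xhtml">
            <p>An embedded copy of the linked resource.</p>
          </div>
        </inline>
      </link>
    </entry>

An 'atom:inline' element with a 'text/...' type would instead carry plain character content with no child elements, per the rules quoted above.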

See also: Atom references


Apache Software Foundation Releases Commons Compress Version 1.1
Christian Grobmeier, Apache Announcement

The Apache Commons Compress development team announces the release of Commons Compress Version 1.1. Commons Compress defines an API for working with compression and archive formats. These include the bzip2 and gzip compression formats and the ar, cpio, jar, tar, and zip archive formats. Source and binary distributions are available for download from the Apache Commons download site.

"The compress component is split into compressors and archivers. While compressors (un)compress streams that usually store a single entry, archivers deal with archives that contain structured content represented by ArchiveEntry instances which in turn usually correspond to single files or directories. Currently the bzip2 and gzip formats are supported as compressors where gzip support is provided by the java.util.zip package of the Java class library.

The ar, cpio, tar and zip formats are supported as archivers where the zip implementation provides capabilities that go beyond the features found in java.util.zip. The compress component provides abstract base classes for compressors and archivers together with factories that can be used to choose implementations by algorithm name. In the case of input streams the factories can also be used to guess the format and provide the matching implementation... The stream classes all wrap around streams provided by the calling code and they work on them directly without any additional buffering. Compress provides factory methods to create input/output streams based on the names of the compressor or archiver format as well as factory methods that try to guess the format of an input stream...
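
As a rough sketch of the factory-based usage described above (not code taken from the announcement), the following Java fragment lists the entries of an archive whose format is guessed from the stream; the file name is a placeholder:

    import java.io.BufferedInputStream;
    import java.io.FileInputStream;
    import java.io.InputStream;
    import org.apache.commons.compress.archivers.ArchiveEntry;
    import org.apache.commons.compress.archivers.ArchiveInputStream;
    import org.apache.commons.compress.archivers.ArchiveStreamFactory;

    public class ListArchive {
        public static void main(String[] args) throws Exception {
            // Placeholder file name; any supported archive format (ar, cpio, tar, zip) works.
            InputStream in = new BufferedInputStream(new FileInputStream("example.tar"));
            // The factory guesses the archive format from the stream, which is why the
            // wrapped stream must support mark/reset (hence the BufferedInputStream).
            ArchiveInputStream archive = new ArchiveStreamFactory().createArchiveInputStream(in);
            ArchiveEntry entry;
            while ((entry = archive.getNextEntry()) != null) {
                // Each ArchiveEntry usually corresponds to a single file or directory.
                System.out.println(entry.getName() + " (" + entry.getSize() + " bytes)");
            }
            archive.close();
        }
    }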

New features in this version 1.1 release include: (1) Command-line interface to list archive contents; (2) Tar implementation support for Pax headers; (3) ZipArchiveInputStream can optionally extract data that used the STORED compression method and a data descriptor; (4) The ZIP classes will throw specialized exceptions if any attempt is made to read or write data that uses zip features not supported (yet); (5) 'ZipFile#getEntries' returns entries in a predictable order; (6) The 'Archive*Stream' and 'ZipFile' classes now have 'can(Read|Write)EntryData' methods that can be used to check whether a given entry's data can be read/written; (7) The ZIP classes now detect encrypted entries; (8) Improved exception messages in ArchiveStreamFactory; (9) ArchiveEntry now has a 'getLastModifiedDate' method..."

See also: the Apache announcement


Supporting the Semantic Linked Data Web: Raptor RDF Syntax Library V2
Dave Beckett, Software Announcement

"Today I released the first beta version of Raptor 2 (Raptor RDF Syntax Library). This is the culmination of about 9 months work refactoring the Raptor 1 codebase... I know that Raptor 2 is not going to place Raptor 1 for applications for some time, so this is a separately installed library with a new location for the header file and a new shared library base...

"Raptor is a free software / Open Source C library that provides a set of parsers and serializers that generate Resource Description Framework (RDF) triples by parsing syntaxes or serialize the triples into a syntax. The supported parsing syntaxes are RDF/XML, N-Triples, TRiG, Turtle, RSS tag soup including all versions of RSS, Atom 1.0 and 0.3, GRDDL and microformats for HTML, XHTML and XML and RDFa. The serializing syntaxes are RDF/XML (regular, and abbreviated), Atom 1.0, GraphViz, JSON, N-Quads, N-Triples, RSS 1.0 and XMP. The typical sequence of operations to parse is to create a parser object, set various handlers and options, start the parsing, send some syntax content to the parser object, finish the parsing and destroy the parser object...

Raptor was designed to work closely with the Redland RDF library (RDF Parser Toolkit for Redland) but is entirely separate. It is a portable library that works across many POSIX systems (Unix, GNU/Linux, BSDs, OSX, cygwin, win32).

In the Version 2 beta, a major addition is a raptor_world object that is used as a single object to hold on to all shared resources and configuration... The addition of the world object meant that each constructor for an object in raptor now takes that object, so it can get access to the shared configuration and resources. That in itself meant the change was extensive and broad in scope. The single place to manage resources means it's easier to ensure proper cleanup and deal with library-wide issues..."
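
As a minimal sketch of that sequence (not code from the release announcement), the following C fragment parses an RDF/XML document at a placeholder URI and prints each triple; it assumes the Raptor 2 header name and entry points as commonly documented (raptor_new_world, raptor_new_parser, raptor_parser_parse_uri):

    /* Compile against raptor2, e.g.: cc example.c $(pkg-config --cflags --libs raptor2) */
    #include <stdio.h>
    #include <raptor2.h>

    /* Statement handler: called once per RDF triple produced by the parser. */
    static void print_triple(void *user_data, raptor_statement *triple)
    {
      raptor_statement_print_as_ntriples(triple, stdout);
      fputc('\n', stdout);
    }

    int main(void)
    {
      /* The raptor_world object holds all shared resources and configuration. */
      raptor_world *world = raptor_new_world();

      /* Constructors take the world object so they can reach shared state. */
      raptor_parser *parser = raptor_new_parser(world, "rdfxml");
      raptor_parser_set_statement_handler(parser, NULL, print_triple);

      /* Placeholder URI for an RDF/XML document. */
      raptor_uri *uri = raptor_new_uri(world,
          (const unsigned char *)"http://example.org/data.rdf");
      raptor_parser_parse_uri(parser, uri, NULL);

      raptor_free_uri(uri);
      raptor_free_parser(parser);
      raptor_free_world(world);
      return 0;
    }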

See also: the Raptor reference page


Policy Frameworks for Protecting Privacy in the Cloud
Peter Fleischer, Blog

Peter Fleischer writes: "I had the privilege of sharing a podium in Dublin last week at the Institute of International and European Affairs with the Irish Data Protection Commissioner. We were invited to discuss policy frameworks for protecting privacy in the Cloud. The talks are posted at the IIEA's web site..."

"This event focused on the regulatory issues on the topic of Cloud Computing, in terms of the EU Data Protection Framework. The topic is of particular interest from both a business and a policy perspective in terms of the ongoing debate on the Smart Economy, and the possible regulatory changes required in order to maximise the potential of Cloud Computing.

Peter Fleischer is Google's Global Chief Privacy Counsel. Mr. Fleischer has over 10 years' experience in the field of data protection, and previously worked as Director of Regulatory Affairs and Privacy Lead for Microsoft. Billy Hawkes was appointed by the Government as Data Protection Commissioner in July 2005 for a five-year term. Prior to his appointment, he worked in various government departments, including the Departments of Finance, Enterprise, Trade & Employment and Foreign Affairs."

The presentations (video and audio formats) are available from the Institute of International and European Affairs (IIEA) web site, and on YouTube.

See also: Updating the EU Data Protection Framework for Cloud Computing


CloudAudit Gets Real
George Hulme, InformationWeek

"For enterprises, one of the biggest challenges with cloud computing include transparency into the operational, policy and regulatory, and security controls of cloud providers. For cloud providers, one of their pressing challenges is answering all of the audit and information gathering requests from customers and prospects...

Not being able to assess and validate compliance and security efforts within various cloud computing models is one of the biggest challenges cloud computing now faces. First, when a business tries to query a cloud provider, there may be lots of misunderstanding about what is really being asked for. Additionally, cloud providers can't spend all of their time fielding questions about how they manage their infrastructure. And, regrettably, not many public cloud providers offer much transparency into their controls. And no, SAS 70 audits don't really account for much of anything when it comes to security.

CloudAudit aims to change that. The group is developing a common way for cloud computing providers to automate how their services can be audited and assessed and assertions provided on their environment for Infrastructure-, Platform-, and Software-as-a-Service providers. Consumers of these services would also have an open, secure, and extensible way to use CloudAudit with their service providers... The group currently boasts about 250 participants, including end users, auditors, system integrators, and cloud providers, representing companies such as Akamai, Amazon Web Services, enStratus, Google, Microsoft, Rackspace, VMware, and many others.

Last week the group released its first specification to the IETF as a draft, as well as CompliancePacks that map control objectives to common regulatory mandates, such as HIPAA, PCI DSS, and ISO27002 and COBIT compliance frameworks..."

See also: the initial CloudAudit 1.0 specification


Cloud Computing by Government Agencies: Business And Security Challenges
Shahid N. Shah, IBM developerWorks

"This article briefly describes the procurement challenges and then jumps into advising government cloud service purchasers on the positives and negatives of security in the cloud, and how to manage their potential vendors' security risks. The security threats are covered from the government's point of view, but smart cloud vendors will take it as a preview of what questions the government might ask them and prepare accordingly.

The United States federal government has the largest annual IT budget of any organization, almost $80 billion in 2010 alone. To save money and improve services, the government is beginning to adopt a cloud-first approach towards procuring new and replacement systems. The business cases and technical benefits for moving into the cloud are the same for the government as they are for other firms, only the savings and challenges are much bigger. Government agencies have two special challenges: procurement and security.

While some benefits are associated with moving into the cloud, most IT professionals see many reasons to be afraid. Most of these questions — multi-tenancy, encryption, and compliance concerns — boil down to trust. Here are common questions that security professionals ask: (1) How do I know I can trust your (the vendor's) security model? Will your documentation and process be transparent? How do I know that you're properly responding to audit findings? (2) Can your proprietary implementation be easily examined to uncover faults? (3) Do you support Trusted Internet Connections (TICs) with full auditing for Internet traffic bandwidth utilized by the government? TICs are being mandated and most cloud providers don't even know what TIC is. (4) How do you track classified data leaks into unclassified systems? What kinds of processes do you have in place to deal with a hard drive wipe in case you shared data for multiple customers on a single hard drive? How do you satisfy concerns around liability of mixing classified and unclassified data? (5) How do you guarantee that government data stays on servers physically located within the continental United States? There are stringent rules written by Congress and regulations enacted by different presidential administrations that require this to be the case. (6) Are backups outside of your system boundary? Is the transport over a secure connection and encrypted at a remote location? Is it encrypted during transit offsite? [...]

Proper identity management and access control are difficult already within the boundaries of a single entity's IT system. But when a system crosses boundaries, with part of it being within an internal network and a smaller or larger part being in the cloud within another vendor's environment, it becomes even more difficult. If you help secure such hybrid systems, be sure to consider hiring practices at your cloud vendors. Their hiring policies should be at least as stringent as your own. How will the vendor notify you when individuals who work on your data or systems join and leave?..."


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
ISIS Papyrus http://www.isis-papyrus.com
Microsoft Corporation http://www.microsoft.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2010-08-23.html
Robin Cover, Editor: robin@oasis-open.org