This issue of XML Daily Newslink is sponsored by:
IBM Corporation http://www.ibm.com
- W3C Publishes XML Base (Second Edition) as a Recommendation
- FEMA Request for Information (RFI) for Identity Management Solutions
- Full-Disk Encryption: Drive Makers Settle on a Single Encryption Standard
- Extensible Markup Language Evidence Record Syntax
- Ian Robinson Discusses REST, WS-* and Implementing an SOA
- W3C Group Note for XHTML Media Types Second Edition
- Beehive Collaboration Service Interface (CSI) and Programming Model
- Google Finally Enables Offline Access for Gmail
W3C Publishes XML Base (Second Edition) as a Recommendation
Jonathan Marsh and Richard Tobin (eds), W3C Technical Report
The W3C XML Core Working Group has published the W3C Recommendation for "XML Base (Second Edition)." This document is an Edited Recommendation of the W3C. It supersedes the previous W3C Recommendation of 27-June-2001. The changes are summarized in an appendix. This second edition is not a new version of XML Base; its purpose is to clarify a number of issues that have become apparent since the first edition was published. A companion document "Testing XML Base Conformance" is also available. Background: The XML Linking Language (XLink) defines Extensible Markup Language (XML) 1.0 constructs to describe links between resources. One of the stated requirements on XLink is to support HTML 4.01 linking constructs in a generic way. The HTML BASE element is one such construct which the XLink Working Group has considered. BASE allows authors to explicitly specify a document's base URI for the purpose of resolving relative URIs in links to external images, applets, form-processing programs, style sheets, and so on. This document describes a mechanism for providing base URI services to XLink, but as a modular specification so that other XML applications benefiting from additional control over relative URIs but not built upon XLink can also make use of it. The syntax consists of a single XML attribute named xml:base. The deployment of XML Base is through normative reference by new specifications, for example XLink and the XML Infoset. Applications and specifications built upon these new technologies will natively support XML Base. The behavior of xml:base attributes in applications based on specifications that do not have direct or indirect normative reference to XML Base is undefined. This specification does not attempt to specify which text strings in a document are to be interpreted as URIs. That is the responsibility of each XML vocabulary. The question addressed by this specification is: given a relative URI in an XML document, what base URI is it resolved against? 
It is expected that a future RFC for XML Media Types will specify XML Base as the mechanism for establishing base URIs in the media types it defines.
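The resolution rule the Recommendation describes, in which the nearest in-scope xml:base attribute (itself resolved against any outer bases) supplies the base URI for relative references, can be sketched in Python. The document and URIs below are illustrative only:

```python
# Minimal sketch of xml:base resolution: walk the tree accumulating the
# base URI from xml:base attributes, then resolve a relative reference
# against the base in effect at the target element.
from urllib.parse import urljoin
import xml.etree.ElementTree as ET

XML_BASE = "{http://www.w3.org/XML/1998/namespace}base"

doc = """<doc xml:base="http://example.org/today/">
  <section xml:base="hotpicks/">
    <item ref="pick1.xml"/>
  </section>
</doc>"""

def effective_base(root, target, document_uri):
    """Depth-first walk, folding each xml:base into the inherited base."""
    def walk(elem, base):
        base = urljoin(base, elem.get(XML_BASE, ""))
        if elem is target:
            return base
        for child in elem:
            found = walk(child, base)
            if found is not None:
                return found
        return None
    return walk(root, document_uri)

root = ET.fromstring(doc)
item = root.find(".//item")
base = effective_base(root, item, "http://example.org/")
print(urljoin(base, item.get("ref")))
# http://example.org/today/hotpicks/pick1.xml
```

Note that which attributes (here, `ref`) hold relative URIs is exactly the question the spec leaves to each XML vocabulary; only the base-URI computation is XML Base's concern.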
FEMA Request for Information (RFI) for Identity Management Solutions
Alice Lipowicz, Federal Computer Week
The U.S. Federal Emergency Management Agency wants innovative pricing options for software that would control access and identity management of the agency's computer systems to deal with surges in use. From the text: "This Request for Information (RFI) is to assist the Federal Emergency Management Agency (FEMA) in the identification of potential options and to obtain Identity Management Solution vendor and product information for solutions that are able to support Agency requirements. FEMA is seeking to identify suitable Commercial-off-the-Shelf (COTS) and/or Government-off-the-Shelf (GOTS) solutions that address all or part of the capabilities needed. Responses to this RFI are due by 2:00 PM EST on February 6, 2009... In addition to the normal capabilities that exist in COTS identity management and access control solutions, FEMA has some special requirements that are necessary to support the Agency mission. These unique use cases must be addressed for any Identity Management Solution to be effective in the FEMA environment. These use cases include, but are not limited to: (1) Allow FEMA personnel and contractors to be granted system access upon validating that the appropriate level of background investigations has been completed. (2) Allow state and local partners (external non-FEMA individuals) to access systems that allow them to apply for various types of grants (e.g., Mitigation, Fire Grants, Public Assistance, etc), perform Preliminary Damage Assessments, communicate and plan for disasters, etc. (3) Allow First Responders (external non-FEMA individuals) to access training material and courses provided by the National Fire Academy in Emmitsburg, MD. First Responders access FEMA-owned portals and Learning Management Systems to receive some distance training. (4) Allow disaster victims to access FEMA portals and web-based applications to apply for assistance (e.g., FEMA Individual Assistance). 
These disaster victims may be displaced and will have to prove that they are who they say they are (e.g., ID-proofing), and may be using publicly available computers (e.g., public library and/or shelters) to access FEMA systems. (5) Allow FEMA to integrate with its Federal Government partners (e.g., Small Business Administration, SSA, etc.) and other business partners that might provide logistics support to move supplies to a disaster location. (6) Allow for FEMA users to simultaneously be assigned to multiple organizations and disasters (e.g., a HQ employee who works in Finance with a certain set of privileges may also be deployed to the Field and have a completely different set of privileges)..." Example requirements: 4.2.5 (Access Control) "Support for Role Based Access Control (RBAC) and a strong model to enforce the separation of duties"; 4.5.2 (Service Oriented Architecture Enablement) "Ability to provide authentication interoperability mechanisms such as Security Assertion Markup Language (SAML)." [FEMA source]
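Requirement 4.2.5 asks for RBAC with an enforced separation of duties. A purely illustrative Python sketch (the role names are hypothetical, not taken from the RFI) of rejecting a mutually exclusive role assignment:

```python
# Illustrative only: a static separation-of-duties constraint in a toy RBAC
# model. A user may not hold two roles declared mutually exclusive, e.g.
# requesting and approving the same class of grant.
EXCLUSIVE = {frozenset({"grant_requester", "grant_approver"})}

class User:
    def __init__(self, name):
        self.name = name
        self.roles = set()

    def assign(self, role):
        # Reject the assignment if it would pair two exclusive roles.
        for pair in EXCLUSIVE:
            if role in pair and self.roles & (pair - {role}):
                raise ValueError(f"separation of duties: {role} conflicts "
                                 f"with an already assigned role")
        self.roles.add(role)

u = User("hq_finance")
u.assign("grant_requester")
try:
    u.assign("grant_approver")
except ValueError as e:
    print(e)  # the conflicting assignment is rejected
```

A production deployment would of course evaluate such constraints centrally (and dynamically, per session), which is what the RFI's "strong model" language points toward.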
Full-Disk Encryption: Drive Makers Settle on a Single Encryption Standard
Lucas Mearian, Computerworld
"The world's six largest computer drive makers today published the final specifications for a single, full-disk encryption standard that can be used across all hard disk drives, solid state drives (SSD) and encryption key management applications. Once enabled, any disk that uses the specification will be locked without a password — and the password will be needed even before a computer boots. The three 'Trusted Computing Group (TCG)' specifications cover storage devices in consumer laptops and desktop computers as well as enterprise-class drives used in servers and disk storage arrays. 'This represents interoperability commitments from every disk drive maker on the planet,' said Robert Thibadeau, chief technologist at Seagate Technology and chairman of the TCG. 'We're protecting data at rest. When a USB drive is unplugged, or when a laptop is powered down, or when an administrator pulls a drive from a server, it can't be brought back up and read without first giving a cryptographically-strong password. If you don't have that, it's a brick. You can't even sell it on eBay.' By using a single, full-disk encryption specification, all drive manufacturers can bake security into their products' firmware, lowering the cost of production and increasing the efficiency of the security technology. For enterprises rolling out security across PCs, laptops and servers, standardized hardware encryption translates into minimum security configuration at installation, along with higher performance with low overhead. The specifications enable support for strong access control and, once set at the management level, the encryption cannot be turned off by end-users. Whenever an operating system or application writes data to a self-encrypting drive, there is no bottleneck created by software, which would have to interrupt the I/O stream and convert the data 'so there's no slowdown,' Thibadeau said..." 
Arshad Noor (Chair of the OASIS Enterprise Key Management Infrastructure [EKMI] Technical Committee) wrote in a posting: "The article neglects to mention that it's also transparent to an attacker if he/she has compromised a software layer between the application and the FDE drive, which explains why the CEO of Heartland Payment Systems is calling for end-to-end encryption now that they have discovered 'encryption religion'..."
See also: the posting of Arshad Noor
Extensible Markup Language Evidence Record Syntax
Aleksej Blazic, Svetlana Saljic, Tobias Gondrom (eds), IETF Internet Draft
The IETF Long-Term Archive and Notary Services (LTANS) Working Group was chartered "to define requirements, data structures and protocols for the secure usage of archive and notary services. In many scenarios, users need to be able to ensure and prove the existence and validity of data, especially digitally signed data, in a common and reproducible way over a long and possibly undetermined period of time. Cryptographic means are useful, but they do not provide the whole solution. For example, digital signatures (generated with a particular key size) might become weak over time due to improved computational capabilities, new cryptanalytic attacks might "break" a digital signature algorithm, public key certificates might be revoked or expire, and so on... Long-term non-repudiation of digitally signed data is an important aspect of PKI-related standards. Standard mechanisms are needed to handle routine events, such as expiry of signer's public key certificate and expiry of trusted time stamp authority certificate. A single timestamp is not sufficient for this purpose. Additionally, the reliable preservation of content across change of formats, application of electronic notarizations, and subsequent notary services require standard solutions." This document ("Extensible Markup Language Evidence Record Syntax") specifies XML syntax and processing rules for creating evidence for long-term non-repudiation of existence of data. XMLERS incorporates alternative syntax and processing rules to ASN.1 ERS syntax by using the XML language. Section 6 presents the XSD Schema for the Evidence Record. Due to the differences in XML processing rules and other characteristics of the XML language, XMLERS does not present a direct transformation of ERS in ASN.1 syntax. The XMLERS syntax is based on different processing rules as defined in RFC 4998, and it does not, for example, support the import of ASN.1 values in XML tags. 
Creating evidence records in XML syntax must follow the steps as defined in this draft. XMLERS is a standalone draft and is based on RFC 4998 conceptually only. Evidence Record Syntax in XML format is based on long-term archive service requirements as defined in RFC 4810 ("Long-Term Archive Service Requirements"). XMLERS syntax delivers the same (level of) non-repudiable proof of data existence as ASN.1 ERS. The XML syntax supports archive data grouping (and de-grouping) together with a simple or complex time-stamp renewal process. Evidence records can be embedded in the data itself or stored separately as a standalone XML file.
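Conceptually, ERS evidence (in either syntax) reduces a group of archived objects to a single hash-tree root, which is then time-stamped; renewing the time-stamp re-protects the whole group at once. A simplified Python sketch of the idea (RFC 4998's actual algorithm sorts and concatenates node values in a precisely specified binary order and handles renewal chains; this shows only the general shape):

```python
# Illustrative hash-tree reduction: a group of archived data objects is
# reduced to one root hash, which is what the archive time-stamps.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def root_hash(leaves):
    """Reduce leaf objects to a single root hash, pairing level by level."""
    level = [h(obj) for obj in leaves]
    while len(level) > 1:
        pairs = [level[i:i + 2] for i in range(0, len(level), 2)]
        # Hash the sorted concatenation of siblings, so the root does not
        # depend on the order objects were grouped in.
        level = [h(b"".join(sorted(p))) for p in pairs]
    return level[0]

docs = [b"contract.pdf bytes", b"signature.p7s bytes"]
print(root_hash(docs).hex())
```

The payoff of the tree structure is that one time-stamp (and one renewal) covers an arbitrarily large archive group, while each object can still carry a compact per-object reduction path as its evidence record.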
Ian Robinson Discusses REST, WS-* and Implementing an SOA
Ryan Slobojan interviews Ian Robinson, InfoQ
In this interview from QCon San Francisco 2008, Ian Robinson discusses REST vs. WS-*, REST contracts, WADL, how to approach company-wide SOA initiatives, how an SOA changes a company, SOA and Agile, tool support for REST, reuse and foreseeing client needs, versioning and the future of REST-based services in enterprise SOA development. Robinson is a Principal Consultant with ThoughtWorks, where he specializes in the design and delivery of service-oriented and distributed systems. As to WS-* or REST? Robinson: "I think it is always going to depend; we are always going to have heterogeneous environments within the enterprise. There are likely technologies that are already in place, applications that are already in place that use WS-*, and it is unlikely that we would want to replace those just to impose some kind of uniform solution. A lot of the stacks offer a kind of homogeneous development environment. And if we are developing the internals of an application or the internals of a service we can certainly take advantage of a lot of those WS-* compliant applications and interfaces. I think once we are looking for tremendous reach and scalability, when we are looking to extend across organizational boundaries, then we might want to look at more RESTful solutions... So I am building up effectively this kind of social network of contracts. So all these different artifacts we can generate off the top of something produced with this DSL. It's something that I am playing with at the moment, but that seems to me to be a way of being able to express my expectations of a message and then create type classes that are really dedicated to those expectations, towards servicing those expectations. So I've kind of got a bit off track in terms of your question around REST tool support. 
In developing some solutions recently with Atom and AtomPub, what I've really wanted to ensure is that the protocol and the way in which those clients are interacting with those feeds is being adhered to, so I want to create a whole bunch of unit tests around the service that is generating those feeds... I want to be able to assert that specific HTTP headers are coming back, certain response codes in response to a particular stimulus. What I found, and I was doing this on the .Net framework, what I found was that you can very quickly get into that HTTP context, but for every test, what you are having to do is actually instantiate a service over HTTP and communicate with it. So what I have done is just create a simple wrapper around that HTTP context, it's an interface that I own, and then I can mock it out and obviously set expectations with regard to that mock. And then I've got a separate bunch of tests that just assert that specific implementations of that interface actually delegate to the .Net framework. One of the things that we want to be doing is testing the protocol; like I said, that's in terms of status codes, headers, that kind of stuff...
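The testing approach Robinson describes, owning a thin interface over the framework's HTTP context, mocking it in unit tests to assert on status codes and headers, and keeping a separate small suite proving the real implementation delegates to the framework, might look like this in Python (his work was on .NET; all names here are illustrative):

```python
# Sketch of protocol-level unit testing through a mocked HTTP wrapper.
from unittest.mock import Mock

class HttpContext:
    """The thin interface you own over the framework's HTTP context."""
    def set_status(self, code): ...
    def set_header(self, name, value): ...

def serve_feed(ctx: HttpContext, feed_exists: bool):
    """Hypothetical Atom feed handler, driven entirely through the wrapper."""
    if not feed_exists:
        ctx.set_status(404)
        return
    ctx.set_status(200)
    ctx.set_header("Content-Type", "application/atom+xml")

# No live service, no socket: mock the wrapper and assert on the protocol.
ctx = Mock(spec=HttpContext)
serve_feed(ctx, feed_exists=True)
ctx.set_status.assert_called_once_with(200)
ctx.set_header.assert_called_once_with("Content-Type",
                                       "application/atom+xml")
```

The design choice is the same one he makes: the fast, numerous tests exercise the protocol against the mock, and only a handful of integration tests need to prove that the concrete `HttpContext` implementation forwards to the real framework.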
W3C Group Note for XHTML Media Types Second Edition
Shane McCarron (ed), W3C Technical Report
W3C announced that a Group Note has been published for "XHTML Media Types Second Edition: Serving the Most Appropriate Content to Multiple User Agents from a Single Document Source." The specification was produced by members of the XHTML2 Working Group, chartered "to fulfill the promise of XML for applying XHTML to a wide variety of platforms with proper attention paid to internationalization, accessibility, device-independence, usability and document structuring. This mission includes providing an essential piece for supporting rich Web content that combines XHTML with other W3C work on areas such as math, scalable vector graphics, synchronized multimedia, and forms, in cooperation with other Working Groups." This 'XHTML Media Types' Note contains suggestions about how to format XHTML to ensure it is maximally portable, and how to deliver XHTML to various user agents—even those that do not yet support XHTML natively. Many people want to use XHTML to author their web pages, but are confused about the best ways to deliver those pages in such a way that they will be processed correctly by various user agents. This document is intended to be used by document authors who want to use XHTML today, but want to be confident that their XHTML content is going to work in the greatest number of environments. The suggestions in this document are relevant to all XHTML Family Recommendations at the time of its publication... Note that, because of the lack of explicit support for XHTML (and XML in general) in some user agents, only very careful construction of documents can ensure their portability, as presented in Appendix A. 
If you do not require the advanced features of XHTML Family markup languages (e.g., XML DOM, XML Validation, extensibility via XHTML Modularization, semantic markup via XHTML+RDFa, Assistive Technology access via the XHTML Role and XHTML Access modules, etc.), you may want to consider using HTML 4.01 in order to reduce the risk that content will not be portable to HTML user agents. Even in that case authors can help ensure their portability AND ease their eventual migration to the XHTML Family by ensuring their documents are valid and by following the relevant guidelines in Appendix A.
See also: the W3C XHTML2 Working Group
Beehive Collaboration Service Interface (CSI) and Programming Model
Eric Chan (et al., eds), Oracle Technical Documentation
Members of the Oracle Beehive Development team have published "Beehive Collaboration Service Interface (CSI): An Overview of the CSI Programming Model." This document is planned for contribution to the OASIS Integrated Collaboration Object Model for Interoperable Collaboration Services (ICOM) Technical Committee. Excerpt: "The Beehive Collaboration Service Interface (as described in Java docs oracle.csi.controls) is Beehive Server's internal data-access API, which is used to develop standards-based public interfaces as well as protocol-specific services. In the following sections we provide an overview of the CSI programming components, which include Controls, Entity Handles, Entity Snapshots and Projections, Entity Updaters, Filters/Predicates, and Object Events. The entity classes in CSI Java docs may include the effects of de-normalizations, which are the results of implementation issues. Thus CSI entity classes may not look exactly like the Beehive Object Model (BOM). CSI is provided to OASIS ICOM TC primarily to augment the behavior and operational aspects of the object model. When there is a discrepancy between BOM and CSI entity classes the object model described in Beehive Object Model takes precedence over the CSI entity model in the Java docs. Key Concepts of CSI: CSI organizes all the methods available to manipulate entities in BOM into a number of logical groups called controls. Each control provides methods to create, retrieve, update, and delete entities of a specific high level type. For example, there is a control to manipulate workspaces and another to manipulate documents. The Java doc package summary documents the controls available in Beehive. Any application that uses CSI must first decide which controls it needs and then use the control locator to locate and invoke the controls from its execution environment... 
[Subsequent sections present Control Locator and Controls, Data Access, Access Control, Entity Model, List Filter Predicates, Optimistic Locking, and Object Events.] The Interface Summary documents AccessControl, AddressBookControl, BondControl, CalendarControl, CategoryControl, CommunityControl, CompressableControl, ConferenceAdapterControl, ConferenceControl, DocumentControl, EmailControl, ExternalArtifactControl, FaxMessageControl, ForumControl, HeterogeneousFolderControl, InstantMessageControl, LabelControl, LinkControl, LockControl, OperationStatusControl, PreferenceControl, PresenceControl, ReminderControl, ResourceDirectoryControl, SearchControl, SubscriptionControl, TaskControl, TimeZoneControl, TopicControl, UserDirectoryControl, UserReportsControl, VersionableControl, VersionConfigurationControl, VoiceMessageControl, WikiControl, WorkflowControl, WorkspaceControl, etc.
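The pattern described, locate a control through the control locator and then drive CRUD operations through it, can be sketched language-neutrally in Python (the real API is Java, in oracle.csi.controls; every class and method name below is illustrative, not Beehive's):

```python
# Toy control-locator pattern: controls group CRUD methods for one high-level
# entity type, and applications obtain them through a locator.
class DocumentControl:
    """Illustrative control for one entity type (documents)."""
    def __init__(self):
        self._docs = {}

    def create(self, name, content):
        self._docs[name] = content
        return name                  # a handle to the new entity

    def retrieve(self, handle):
        return self._docs[handle]

class ControlLocator:
    """Resolves a control name to an instance for the caller's environment."""
    _registry = {"DocumentControl": DocumentControl}

    @classmethod
    def locate(cls, name):
        return cls._registry[name]()

ctl = ControlLocator.locate("DocumentControl")
handle = ctl.create("readme.txt", "hello")
print(ctl.retrieve(handle))  # hello
```

The grouping-by-entity-type design keeps each control's surface small (one control per workspace, document, calendar, and so on), which is how CSI arrives at the long interface list above.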
See also: the proposed ICOM TC Charter
Google Finally Enables Offline Access for Gmail
Clint Boulton, eWEEK
Google's long-awaited offline access for Gmail is here, bringing a sigh of relief to users of Google's messaging and collaboration software. Google Apps Standard Edition users will be able to access it immediately with a few steps, while consumers will see a more gradual rollout. The move should put Google on a more level playing field in cloud computing versus Microsoft, Yahoo's Zimbra, Zoho and others with e-mail clients that already provide offline access... In Web application parlance, offline access is when users can access application data even when they're not connected to the Web. Google will soon follow Gmail offline access with offline access to Google Calendar. This will initially be available to Google Apps users only. Created in Google's Gmail Labs, offline access will enable Gmail to load in a Web browser without a Web connection. Users will be able to read, archive or write messages. Users can hit send on composed messages, which will remain in the Gmail outbox. When the user's computer reconnects online, Gmail will push the messages from its queue toward their recipients... Offline access for Gmail consumer and business users is a major step for Google, which is trying to compete with Microsoft, Yahoo's Zimbra and other e-mail providers by making Gmail as robust as possible for its tens of millions of users. This is particularly important for users who are trying to access their application data in areas with spotty Internet connections, or with no Web connections at all. Air travel, for example, tends to be the biggest stumbling block for applications that don't let their users access data offline... To enable offline access, Google Apps Standard Edition users should sign into Gmail and click 'Settings', then select the Labs tab and pick Enable next to the Offline Gmail feature prompt. After clicking 'Save Changes', users should see an Offline link in the upper right-hand corner of the account. Click this link to begin offline data synchronization. 
While users of the free standard edition can follow these instructions immediately, Google Apps Premier Edition and Google Apps Education Edition users will need their domain administrators to enable Gmail Labs for everyone on the domain first. However, in businesses where admins have marked the New Features check box in the admin console, users will be able to turn on the Gmail offline feature through Gmail Labs...
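The queue-and-flush behavior described above, where composed messages wait in a local outbox and are pushed to recipients when connectivity returns, is the general pattern behind any offline-capable Web app. A minimal sketch (illustrative only, not Gmail's implementation):

```python
# Toy offline outbox: messages queue locally while offline and flush in
# order once the connection comes back.
from collections import deque

class OfflineOutbox:
    def __init__(self, transport):
        self.queue = deque()
        self.transport = transport   # callable that actually delivers
        self.online = False

    def send(self, message):
        self.queue.append(message)   # always queue first
        self.flush()                 # delivers immediately if online

    def set_online(self, online):
        self.online = online
        self.flush()

    def flush(self):
        while self.online and self.queue:
            self.transport(self.queue.popleft())

sent = []
box = OfflineOutbox(sent.append)
box.send("draft while offline")      # stays queued: still offline
box.set_online(True)                 # reconnect: queue is pushed
print(sent)  # ['draft while offline']
```

Queueing through the same `send` path whether online or offline is what makes the feature invisible to the user: hitting send always "works," and only delivery timing differs.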
XML Daily Newslink and Cover Pages sponsored by:
Sun Microsystems, Inc. http://sun.com
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: email@example.com
Newsletter unsubscribe: firstname.lastname@example.org
Newsletter help: email@example.com
Cover Pages: http://xml.coverpages.org/