This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation http://www.microsoft.com
- Thales Announces Appliance for Standards-Based Encryption Key Management
- The Key to Strong Encryption: Matching the Right Tools to the Job
- Apache Tuscany Development Team Releases Tuscany SCA 2.0-M2
- Establishing Location URI Contexts using HTTP-Enabled Location Delivery
- Sam Ruby on HTML Reunification
- Working with Intermediate jQuery: The UI Project
- AI and Cultural Heritage: Semantic Classification of Byzantine Icons
Thales Announces Appliance for Standards-Based Encryption Key Management
Staff, Thales e-Security Announcement
Thales, a leader in information systems and communications security, has announced Thales's Encryption Manager for Storage (TEMS), "the industry's first standards-based encryption key management appliance for storage managers." TEMS supports IEEE P1619.3 and is designed to support the OASIS Key Management Interoperability Protocol (KMIP), along with certain proprietary key management interfaces from leading storage vendors. This eliminates the need for storage professionals to deploy multiple key management systems and gives storage vendors the option to partner with an independent key management provider rather than develop and maintain their own key management systems. TEMS is available as a ready-to-use appliance that consolidates and automates the management of encryption keys for storage systems in a transparent manner, delivering unified, fine-grained and auditable encryption key security controls. This enables organizations to adopt storage encryption with the confidence that their encryption keys are under control and secured in a cost-effective and future-proof management system. Jon Oltsik, senior analyst with Enterprise Strategy Group: "As data breaches continue to embarrass companies and incur real costs, security initiatives have naturally focused on the storage infrastructure. The use of encryption within the switching fabric, back-up tapes, drives, arrays, and host adapters is rapidly becoming essential for safeguarding sensitive information, but many organizations are concerned about reliability and data recoverability issues." TEMS is the first standards-based key manager available with draft IEEE P1619.3 key management standard support and will support the final specification, due out in early 2010. Subsequent releases will also support the recently announced OASIS KMIP key management standard, originally co-authored by Thales with other leading vendors.
Through collaboration with partners TEMS will support legacy or proprietary key management interfaces to provide storage professionals with the flexibility and freedom to utilize encryption at various points within their storage environments and to take advantage of pre-certified integration with their preferred storage systems. TEMS benefits from a secure and highly scalable architecture designed to meet FIPS 140-2 and builds on Thales's core cryptographic expertise and reputation for deploying strong cryptography in support of a wide variety of applications. TEMS will be available in July 2009.
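The KMIP standard mentioned above centers on managing keys through a defined lifecycle (the specification names states such as Pre-Active, Active, Deactivated, and Destroyed). The sketch below models that lifecycle as a simple state machine; it is purely illustrative and is not the TEMS product API or the KMIP wire protocol, and the class and transition names are this example's own simplification.

```python
# Illustrative sketch of a KMIP-style key lifecycle state machine.
# KMIP defines more states and transitions (e.g., Compromised); this
# simplified table covers only the common happy path.
ALLOWED_TRANSITIONS = {
    "Pre-Active": {"Active", "Destroyed"},
    "Active": {"Deactivated"},
    "Deactivated": {"Destroyed"},
    "Destroyed": set(),
}

class ManagedKey:
    """A managed key object that enforces legal lifecycle transitions."""

    def __init__(self):
        self.state = "Pre-Active"

    def transition(self, new_state: str) -> None:
        if new_state not in ALLOWED_TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

key = ManagedKey()
key.transition("Active")       # key is now usable for cryptographic operations
key.transition("Deactivated")  # no longer used to protect new data
key.transition("Destroyed")    # key material disposed of
```

A centralized key manager enforcing transitions like these is what lets an appliance offer the "auditable encryption key security controls" the announcement describes.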
See also: IEEE P1619.3 Key Management
The Key to Strong Encryption: Matching the Right Tools to the Job
William Jackson, Government Computer News
"Cryptography can be a powerful tool—so powerful, in fact, that the U.S. federal government has regulated the export of the technology for national-security reasons. But no matter how strong it is, no single cryptographic tool can solve all your data security needs. Security technologies each have strengths and limitations, even when they use the same types of algorithms and key strengths... Is the data to be protected at rest or in transit? Where is it being stored? Is it sensitive or highly classified? How valuable is it to someone else? How long is it likely to remain valuable? Are you worried primarily about theft or about loss? Answers to questions such as those can help determine whether you want a broad tool such as full-disk encryption, a more granular tool such as file encryption, or a specialty tool such as format-preserving encryption. Your answers can help determine what strength key you should be using. With the increasing power of small handheld devices and growing efficiency of cryptographic tools, the computational overhead of cryptography is becoming less of an issue when selecting a tool. Organizations probably should go with the strongest cryptographic tools available, experts say, because bad guys can more easily crack encryption as computers become more powerful... Cryptography is the scrambling of a message according to a formula or algorithm so that only someone with the proper key can unscramble and read it. It has been around for millennia, but until recently, it has been used largely by governments because of the difficulty of generating, distributing and securing the keys that are adequately strong and complex...
Wayne Grundy, director of the Transglobal Secure Collaboration Program (TSCP): "The big challenge is: How do you protect the keys?" TSCP was formed in 2002 by the United Kingdom's Ministry of Defence to define technical specifications for secure collaboration between governments and among contractors. Its members include the U.S. Defense Department, the Dutch government, and a handful of major international defense contractors, including BAE Systems, Boeing, EADS, Lockheed Martin, Northrop Grumman, Raytheon and Rolls-Royce. Specifications for a secure e-mail standard developed by TSCP use a trusted public-key infrastructure model, similar to the U.S. government's Federal PKI Bridge. The specifications also include a set of policies and procedures for vetting and managing an organization's identity and access controls. This would assure users that an e-mail is securely encrypted and that the senders and receivers are who they say they are and are entitled to access the contents. Although the U.S. government is still a leader in strong cryptography, the development of powerful computing technology and the invention of public-key cryptography have moved it into the private sector... Agencies generally are required to encrypt sensitive data that resides on mobile devices, from laptop PCs to handheld devices, and there are a variety of techniques to choose from, including full-disk encryption, virtual disk or volume encryption, and file and folder encryption... Virtual disk encryption or volume encryption encrypts most of the drive but not all of it. An encrypted device can be booted without decrypting, but authentication is needed to access data. File and folder encryption is more granular, allowing the encryption of specific files or folders. This method can be convenient because it lets a user encrypt only the data that is sensitive, but it can also be less convenient because accessing that data can mean decrypting multiple files.
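The article's tool-selection questions (data at rest or in transit? whole device or specific files? does the format need to survive encryption?) can be sketched as a toy decision helper. This is purely illustrative; the rules and return strings are this example's own simplification, not a standard or the article's recommendation.

```python
# Illustrative mapping from the article's selection questions to the
# tool categories it discusses. Real selection also weighs key strength,
# classification level, and recoverability requirements.
def suggest_tool(data_at_rest: bool, whole_device: bool,
                 preserve_format: bool) -> str:
    if not data_at_rest:
        # Data in transit calls for a transport-level tool
        return "transport encryption (e.g., TLS)"
    if preserve_format:
        # e.g., encrypting a credit-card number that must stay 16 digits
        return "format-preserving encryption"
    if whole_device:
        # Broad protection for a lost or stolen laptop
        return "full-disk encryption"
    # Granular protection for selected sensitive data
    return "file/folder encryption"

print(suggest_tool(data_at_rest=True, whole_device=True,
                   preserve_format=False))  # prints "full-disk encryption"
```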
Apache Tuscany Development Team Releases Tuscany SCA 2.0-M2
Staff, Apache Foundation Announcement
The Apache Tuscany development team has announced the Version 2.0 M2 release of the Java SCA project. Apache Tuscany provides a runtime environment based on the Service Component Architecture (SCA). SCA is a set of specifications aimed at simplifying SOA application development, being standardized by OASIS as part of the Open Composite Services Architecture (Open CSA). This milestone release is a further step towards the 2.0 final release; highlights include improved spec compliance, initial support for Distributed OSGi (RFC 119), new Webapp integration with support for various web technologies/frameworks, and Maven Archetypes to make developing SCA applications easier. Details: The Apache Tuscany SCA 2.0-M2 release includes implementations of the main SCA specifications and recent updates from Open CSA drafts including: SCA Assembly Model V1.1; SCA Policy Framework V1.1; SCA Java Common Annotations and APIs V1.1; SCA Java Component Implementation V1.1; SCA Web Services Binding V1.1; SCA WS-BPEL Client and Implementation V1.1; and portions of SCA JEE Integration V1.1. It also includes implementations of many features not yet defined by SCA specifications, including SCA bindings for RMI, databindings for JAXB, Axis2's AXIOM, DOM, SAX and StAX, and integration with various web frameworks. The Tuscany SCA Runtime can be configured as a single node SCA domain or as an SCA domain distributed across multiple nodes. In addition Tuscany SCA supports the following host-deployment options: (1) running standalone; (2) running in an OSGi-enabled runtime environment (Equinox); (3) running with distributed nodes across multiple JVMs...
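For readers unfamiliar with SCA assembly, an application is described declaratively as a composite of components. The fragment below is a hypothetical minimal composite (the component, class, and namespace of the target application are illustrative, assuming the OASIS SCA 1.1 assembly namespace); it is not taken from the Tuscany release.

```xml
<!-- Hypothetical minimal SCA composite; names are illustrative -->
<composite xmlns="http://docs.oasis-open.org/ns/opencsa/sca/200912"
           targetNamespace="http://example.com/store"
           name="Store">
  <component name="CatalogComponent">
    <!-- Java implementation class exposing the component's service -->
    <implementation.java class="com.example.store.CatalogImpl"/>
    <service name="Catalog">
      <!-- Expose the service over the SCA Web Services binding -->
      <binding.ws/>
    </service>
  </component>
</composite>
```

A runtime such as Tuscany reads this composite, instantiates the components, and wires their services and references, which is what allows the same assembly to run standalone, in OSGi, or distributed across nodes.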
See also: Distributed OSGi
Establishing Location URI Contexts using HTTP-Enabled Location Delivery
James Winterbottom, Hannes Tschofenig, Martin Thomson (eds), IETF Internet Draft
This memo describes a protocol extension for the HTTP-Enabled Location Delivery (HELD) protocol. Section 5 presents the XML Schema. The HELD specification defines an extensible XML-based protocol that enables the retrieval of Location Information (LI) from a Location Information Server (LIS) by a Device. This protocol can be bound to any session-layer protocol, particularly those capable of MIME transport. This document describes the use of HyperText Transfer Protocol (HTTP) and HTTP over Transport Layer Security (HTTP/TLS) as transports for the protocol. It identifies two types of location information that may be retrieved from the LIS. Location may be retrieved from the LIS by value; that is, the Device may acquire a literal location object describing the location of the Device. The Device may also request that the LIS provide a location reference in the form of a location URI or set of location URIs, allowing the Device to distribute its LI by reference. Both of these methods can be provided concurrently from the same LIS to accommodate application requirements for different types of location information. The LIS service applies to access networks employing both wired technology (e.g., DSL, Cable) and wireless technology (e.g., WiMAX) with varying degrees of Device mobility. This document describes a protocol that can be used to acquire LI from a LIS within an access network. The I-D "Establishing Location URI Contexts using HTTP-Enabled Location Delivery (HELD)" describes a method that allows a Target to manage its location information on a LIS through the application of constraints invoked by accessing a location URI. Constraints described in this memo restrict how often location can be accessed through a location URI, how long the URI is valid for, and the type of location information returned when a location URI is accessed. Extension points are also provided...
A location URI that is provided by a LIS using the basic HELD specification is essentially immutable once retrieved. No means is provided of controlling how the URI is used. A default policy is applied to the URI, which is fixed until the location URI expires; a Location Recipient in possession of the location URI can retrieve the Target's location until the expiry time lapses. This basic mechanism may be reasonable in a limited set of applications, but is unacceptable in a broader range of applications. In particular, the ability to change policy dynamically is required to better protect the privacy of the Target. Two new forms of HELD request are defined by this document...
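To make the by-reference model concrete, a basic HELD request for a location URI looks roughly like the fragment below. This is an illustrative sketch assuming the HELD XML namespace (urn:ietf:params:xml:ns:geopriv:held); consult the HELD specification for the normative schema and request semantics.

```xml
<!-- Illustrative HELD request asking the LIS for a location reference
     (a location URI) rather than a literal location value -->
<locationRequest xmlns="urn:ietf:params:xml:ns:geopriv:held">
  <locationType exact="true">locationURI</locationType>
</locationRequest>
```

The extension described in this memo then lets the Target attach constraints (access count, validity period, granularity of the returned location) to such a URI instead of relying on the fixed default policy.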
Sam Ruby on HTML Reunification
Sam Ruby, intertwingly Blog
At the present time, the HTML 5 document is a browser behavior specification and a list of author conformance requirements. The first part is essentially uncontroversial—nobody, including the browser vendors, likes it, but it is what it is, and it ain't changing. Authors of libraries recognize this too. Everybody is OK with that. The second part is the source of seemingly unending controversy. Are alt attributes required? What should be considered a conformance error in SVG? Is RDFa legal? The current draft hasn't been built based on consensus, and this needs to be resolved prior to Last Call... Meanwhile, extensibility and the relative roles of the HTML and XHTML2 working groups were hot topics at the AC meeting last month. Steven Pemberton and I have been having a productive discussion, and we've also consulted with a number of AC representatives. This post is the result of those discussions. Quick summary: (1) I'd like to retain intact the design principles for platform design as worked out by the browser vendors in the HTML Working Group in 2007. (2) I'd like to relax slightly the design principles for language design, and give considerable latitude to existing designs, as well as grandfather in exceptions. (3) I want to explore the idea of dropping the assumption that the current HTML working group has the sole responsibility for, and absolute dominion over, authoring guidelines... For platform features (i.e., the ones that impact browser implementations), consensus by browser vendors is essential. I will also note that there are platform features in both of the current HTML5 and XHTML2 published working drafts which do not yet enjoy that level of consensus... For language features, the bar can (and should) be much lower. Today's browsers have a default rendering and a default mapping to the DOM for unknown markup. In many cases (e.g., attributes), that means that such markup will not be visible.
An unfortunate consequence of this is that an invaluable feedback loop is lost, and therefore data quality will suffer. We need to agree up front that it is entirely the responsibility of library developers to make this stuff visible. Experience has shown that validators, while necessary and important, are not sufficient. FWIW, a similar observation can be made about 'lang' attributes for non-CJK languages. Meanwhile, many of the requirements for unfettered language level extensibility come from vendors which produce content via XML pipelines. Love it or hate it (and there are plenty in the latter camp even within XML circles), XML namespaces are the way to do such extensibility. Ben Adida (correctly, in my opinion) observed that the working group that has responsibility for an XML serialization of HTML needs to be aware of, and respect, XML namespaces as *a* mechanism for extensibility. It was also widely observed that those with such pipelines don't find the distinction between HTML and XHTML to be a useful one, as it is only the party that controls the final transfer (and therefore the content-type) that has any control over the serialization. The obvious implication of this is that markup requirements for language features will bleed through between one serialization and the other. Direct observation has corroborated this. My conclusion is that xmlns attributes, and both element and attribute names containing colons, need to be allowed in conformant HTML. It needs to be noted that such nodes are placed into the DOM today differently by HTML and XML parsers. This is unfortunate, but given the experience of Opera, it appears to be beyond our ability to correct at this point...
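The point about HTML and XML parsers treating namespaced nodes differently can be demonstrated with Python's standard-library parsers (used here as stand-ins for a browser's HTML parser and an XML pipeline; the document and namespace URI are made up for the example):

```python
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

doc = '<root xmlns:ex="http://example.org/ns"><ex:item/></root>'

# XML parser: the ex: prefix is resolved against the declared namespace,
# so the element's name carries the full namespace URI.
xml_tag = ET.fromstring(doc)[0].tag
print(xml_tag)  # prints "{http://example.org/ns}item"

# HTML-style parser: the colon is kept literally in the tag name;
# the prefix is never resolved and xmlns:ex is just another attribute.
tags = []

class Collector(HTMLParser):
    def handle_starttag(self, tag, attrs):
        tags.append(tag)

Collector().feed(doc)
print(tags)  # prints "['root', 'ex:item']"
```

The same bytes thus yield two differently named nodes depending on the parser, which is exactly the divergence the post says is now beyond correcting.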
See also: an HTML 5 draft
Working with Intermediate jQuery: The UI Project
Michael Abernethy, IBM developerWorks
See also: the jQuery web site
AI and Cultural Heritage: Semantic Classification of Byzantine Icons
P. Tzouveli, N. Simou, G. Stamou, S. Kollias; IEEE Intelligent Systems
This article discusses the use of fuzzy description logics and patterns to automatically determine the sacred figure depicted in an icon. As the amount of the Web's cultural content grows, search and retrieval procedures for that content become increasingly difficult. Moreover, Web users need more efficient ways to access huge amounts of content. So, researchers have proposed sophisticated browsing and viewing technologies, raising the need for detailed metadata that effectively describes the cultural content. Several annotation standards have been developed and implemented, and Semantic Web technologies provide a solution for the semantic description of collections on the Web. Unfortunately, the semantic annotation of cultural content is time consuming and expensive, making it one of the main difficulties of cultural-content publication. Therefore, the need for automatic or semi-automatic analysis and classification of cultural assets has emerged... Some cultural domains are appropriate for automatic analysis and classification methods. Byzantine icon art is one of them. The predefined image content and the low variability of the image characteristics support the successful application of image analysis methods... Byzantine iconography follows a unique convention of painting. The artistic language of Byzantine painters is characterized by apparent simplicity, overemphasized flatness, unreal and symbolic colors, lack of perspective, and strange proportions. The sacred figures are set beyond real time and real space through the use of gold backgrounds. From Art Manual to Semantic Representation: Although the knowledge in Dionysios's manual concerns vague concepts such as 'long hair,' 'young face,' and so on, it's quite strict and formally described. Consequently, we can create an ontological representation of this knowledge using OWL. In this way, the ontology's axiomatic skeleton will provide the terminology and restrictions for Byzantine icons...
The project's 'Knowledge representation and reasoning' subsystem consists of terminological and assertional knowledge and a reasoning engine. These types of knowledge are the basic components of a knowledge-based system based on DLs, a structured knowledge-representation formalism with decidable reasoning algorithms. DLs have become popular, especially because of their use in the Semantic Web (as in OWL DL, for example). DLs represent a domain's important notions as concept and role descriptions. To do this, DLs use a set of concept and role constructors on the basic elements of a domain-specific alphabet. This alphabet consists of a set of individuals (objects) constituting the domain, a set of atomic concepts describing the individuals, and a set of atomic roles that relate the individuals. The concept and role constructors that are used indicate the expressive power and the name of the specific DL. Here, we use SHIN, an expressive subset of OWL DL that employs concept negation, intersection, and union; existential and universal quantifiers; transitive and inverse roles; role hierarchy; and number restrictions. Results: We evaluated our system on a database, provided by the Mount Sinai Foundation in Greece, containing 2,000 digitized Byzantine icons dating back to the 13th century. The icons depict 50 different characters; according to Dionysios, each character has specific facial features that make him or her distinguishable. Evaluation of the Byzantine-icon-analysis subsystem produced promising results. The subsystem's mean response time was approximately 15 seconds on a typical PC. In the semantic-segmentation module, the face detection submodule reached 80 percent accuracy. In most cases, the failure occurred in icons with a destroyed face area...
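To illustrate the fuzzy side of the approach, fuzzy DLs commonly interpret concept intersection, union, and negation with Zadeh-style min/max/complement operators over membership degrees in [0, 1]. The sketch below applies those operators to made-up membership degrees for one detected face; the concept names and the pattern are hypothetical, not taken from the article's ontology.

```python
# Zadeh-style fuzzy operators commonly used in fuzzy DL semantics
def f_and(a: float, b: float) -> float:   # concept intersection
    return min(a, b)

def f_or(a: float, b: float) -> float:    # concept union
    return max(a, b)

def f_not(a: float) -> float:             # concept negation
    return 1.0 - a

# Hypothetical membership degrees produced by image analysis
degrees = {"LongHair": 0.8, "YoungFace": 0.3, "Beard": 0.6}

# Degree to which the face matches the (made-up) character pattern
# "LongHair AND (Beard OR NOT YoungFace)"
score = f_and(degrees["LongHair"],
              f_or(degrees["Beard"], f_not(degrees["YoungFace"])))
print(score)  # prints "0.7"
```

A reasoner over such degrees can then rank the 50 candidate characters by how well each character's pattern of facial features matches the analyzed icon, which is how the classification described above proceeds.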
See also: the Web Ontology Language (OWL)
XML Daily Newslink and Cover Pages sponsored by:
Sun Microsystems, Inc. http://sun.com
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: email@example.com
Newsletter unsubscribe: firstname.lastname@example.org
Newsletter help: email@example.com
Cover Pages: http://xml.coverpages.org/