XML Daily Newslink. Monday, 24 May 2010

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation http://www.microsoft.com



W3C Forms New Library Linked Data Incubator Group
Staff, W3C Announcement

W3C has announced the creation of the Library Linked Data Incubator Group as part of the W3C Incubator Activity. A W3C Incubator Group (XG) is an initiative to foster development of emerging Web-related technologies. Incubator Activity work is not on the W3C standards track, but in many cases it serves as a starting point for a future Working Group. The scope of an Incubator Group is expected to be Web-based applications built upon the infrastructure of the Web, ideally focused on potentially foundational technologies.

The mission of the W3C Library Linked Data Incubator Group is "to help increase global interoperability of library data on the Web, by bringing together people involved in Semantic Web activities." The initiative focuses on Linked Data for the library community and beyond, building on existing initiatives and identifying collaboration tracks for the future.

This incubator group has been initiated by actors from national libraries, university libraries and research units, library vendor companies, and other interested stakeholders. Its scope, however, is not limited to libraries as institutions, but is meant to involve other cultural heritage institutions, partners from the publishing industry, and other relevant domains... Digital libraries rely heavily on existing building blocks of librarianship, such as: (1) metadata schemas—MODS, MADS, METS; (2) metadata models for libraries, which are now evolving towards the Web, e.g., FRBR, FRAD, RDA; (3) standards and protocols for building interoperability beyond the library domain—OAI-ORE, SKOS, SRU/CQL; (4) (digital) library systems shifting from an integrated vision towards a networked environment, e.g., Europeana, WorldCat, the VIAF project...
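
For readers less familiar with these vocabularies, here is a minimal, purely illustrative sketch of a library subject heading published as Linked Data using SKOS in RDF/XML; the example.org URIs and labels are invented for illustration:

    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:skos="http://www.w3.org/2004/02/skos/core#">
      <!-- A hypothetical subject heading exposed as a SKOS concept -->
      <skos:Concept rdf:about="http://example.org/subjects/cataloging">
        <skos:prefLabel xml:lang="en">Cataloging</skos:prefLabel>
        <skos:altLabel xml:lang="en">Cataloguing</skos:altLabel>
        <!-- Links the concept into a broader subject hierarchy -->
        <skos:broader rdf:resource="http://example.org/subjects/library-science"/>
      </skos:Concept>
    </rdf:RDF>

Once headings are published this way, any Linked Data client can dereference the concept URI and follow skos:broader links across institutions.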

The group will explore how existing building blocks of librarianship, such as metadata models, metadata schemas, standards and protocols for interoperability, and library systems and networked environments, can encourage libraries to bring their content to the Web and generally re-orient their approaches to data interoperability towards the Web, also reaching out to other communities. It will also envision the library community as a potential major provider of authoritative datasets (persons, topics...) for the Linked Data Web. As these evolutions raise the need for a shared standardization effort within the library community around (Semantic) Web standards, the group will refine the knowledge of this need, express requirements for standards and guidelines, and propose a way forward for the library community to contribute to further Web standardization actions...

See also: the Library Linked Data Incubator Group Charter


IEEE ICSG Malware Working Group Releases XML Schema for Data Sharing
Staff, IEEE-SA Industry Connections Security Group Announcement

The IEEE Standards Association (IEEE-SA) Industry Connections Security Group (ICSG) has announced the availability of an XML schema designed to support sharing of information about malware. This XML schema is "designed to facilitate the quick, cost-effective sharing of samples of malware (malicious software such as viruses, worms and spyware) by computer-security organizations. AVG Technologies, McAfee Inc., Microsoft Corp., Panda Security, Sophos, Symantec Corp. and Trend Micro have adopted the flexible ICSG solution as part of their efforts to more quickly deliver the protection that their users most urgently need... Anti-virus companies, Internet service providers (ISPs), law-enforcement agencies, testing bodies and other organizations may [download] the XML schema for use in sharing malware samples with other bodies with whom they have arranged to exchange information."
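
The announcement does not reproduce the schema itself, but a metadata record accompanying a shared sample might look roughly like the following sketch; every element name here is a hypothetical illustration, not taken from the actual ICSG schema:

    <!-- Hypothetical sketch only; element names are NOT the ICSG schema -->
    <malwareSample>
      <fileName>invoice.exe</fileName>
      <fileSize>48128</fileSize>
      <!-- Hashes let recipients match samples against their own holdings -->
      <hash type="md5">9e107d9d372bb6826bd81d3542a419d6</hash>
      <hash type="sha1">2fd4e1c67a2d28fced849ee1bb76e7391b93eb12</hash>
      <firstSeen>2010-05-20T14:32:00Z</firstSeen>
      <classification>trojan</classification>
    </malwareSample>

The value of a shared schema is exactly this kind of predictability: each exchanging partner can validate and ingest records without per-partner format negotiation.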

According to Jeff Green, ICSG chair and senior vice president of McAfee Labs: 'ICSG has provided a much-needed collaborative environment for the computer-security industry to come together quickly and tackle our most pressing issues as they arise. The introduction of an easily adaptable XML schema for sharing malware samples is an important first deliverable, and we have already identified an array of other areas -- packer usage, application and cloud-computing security, and privilege management -- where ICSG can deliver unique value.' Vincent Weafer, vice president of Symantec Security Response, added: 'ICSG's XML schema not only makes the process significantly more efficient, it also enables an organization to prioritize the threats. It all adds up to faster rollout of more relevant protection for our customers'...

ICSG formed in 2009 as a global effort to pool experience and resources in combating the systematic and rapid rise in threats to computer security. Since then, membership has more than doubled, to fifteen (15) organizations. In addition to creating the schema for sharing malware samples, ICSG has begun developing guidelines and definitions for identifying bad or good usage of 'packer' software, which is frequently used for compressing malware for hard-to-detect distribution via executable files. Other areas to be addressed by ICSG include application and cloud-computing security and privilege-management protocols..."

ICSG is "a group of computer security entities that have come together to work on common goals and industry issues. The key focus is to solve security issues. In the past few years, attackers have shifted away from mass distribution of a small number of threats to micro distribution of millions of distinct threats. ICSG was established, under the umbrella of the IEEE Standards Association (IEEE-SA) Industry Connections program out of the desire by many in the security industry to pool their experience and resources in response to the systematic and rapid rise in new malware being introduced to the market."

See also: the IEEE ICSG Malware Working Group


Revised IETF Internet Draft for Web Host Metadata
Eran Hammer-Lahav, IETF Internet Draft

An updated Informational IETF Internet Draft has been published for the specification host-meta: Web Host Metadata. This specification describes a method for locating host metadata for Web-based protocols. The level -09 revision: (1) removes the 'hm:Host' XML element due to lack of use cases, since protocols with signature requirements can define their own way of declaring the document's subject for this purpose; (2) makes minor editorial changes; (3) changes following redirections to a 'MUST'; (4) updates references.

From the specification Introduction: "Web-based protocols often require the discovery of host policy or metadata, where 'host' is not a single resource but the entity controlling the collection of resources identified by Uniform Resource Identifiers (URI) with a common URI host, as defined by RFC 3986. While these protocols have a wide range of metadata needs, they often define metadata that is concise, has simple syntax requirements, and can benefit from being stored in a common location used by other related protocols.

Because there is no URI or resource available to describe a host, many of the methods used for associating per-resource metadata (such as HTTP headers) are not available. This often leads to the overloading of the root HTTP resource, e.g. 'http://example.com/', with host metadata that is not specific to the root resource and often has nothing to do with it... This memo therefore registers the 'well-known' URI suffix 'host-meta' in the Well-Known URI Registry established by RFC 5785, and specifies a simple, general-purpose metadata document for hosts, to be used by multiple Web-based protocols.

The host-meta document uses the XRD 1.0 document format as defined by 'Extensible Resource Descriptor (XRD) Version 1.0' (OASIS TC Committee Draft 16), which provides a simple and extensible XML-based schema for describing resources. This memo defines additional processing rules needed to describe hosts. Documents MAY include any XRD element not explicitly excluded. The host-meta document root MUST be an 'XRD' element. The document SHOULD NOT include a 'Subject' element, as at this time no URI is available to identify hosts. The use of the 'Alias' element in host-meta is undefined and NOT RECOMMENDED. The subject (or 'context resource', as defined by the IETF 'Web Linking' Internet Draft) of the XRD 'Property' and 'Link' elements is the host described by the host-meta document. However, the subject of 'Link' elements with a 'template' attribute is the individual resource whose URI is applied to the link template... Clients obtain the host-meta document for a given host by making an HTTPS GET request to the host's port 443 for the '/.well-known/host-meta' path. If the request fails to produce a valid host-meta document, clients make an HTTP GET request to the host's port 80 for the same path. Servers MUST support at least one but SHOULD support both ports..." [Note: see updated version -10]
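
To make this concrete, here is a minimal host-meta document of the kind the draft describes, served from '/.well-known/host-meta'; the 'lrdd' link relation and the template URI are illustrative values, not mandated by the specification:

    <?xml version="1.0" encoding="UTF-8"?>
    <XRD xmlns="http://docs.oasis-open.org/ns/xri/xrd-1.0">
      <!-- Property: metadata whose subject is the host itself -->
      <Property type="http://example.com/lang">en-us</Property>
      <!-- Link with a template: its subject is the individual resource
           whose URI is substituted for {uri}, not the host -->
      <Link rel="lrdd" template="https://example.com/describe?uri={uri}"/>
    </XRD>

Note the absence of 'Subject' and 'Alias' elements, in line with the processing rules quoted above.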

See also: the XRD Committee Draft


The NoSQL Alternative: High-Performance Database Options
Ken North, DDJ

A new generation of low-cost, high-performance database software is rapidly emerging to challenge SQL's dominance in distributed processing and Big Data applications. Some companies have already traded SQL's rich functionality for these new options that let them create, work with, and manage large data sets. A big reason for this movement, dubbed NoSQL, is that different implementations of Web, enterprise, and cloud computing applications have different requirements of their databases. Not every app requires rigid data consistency, for example. Also, when an application uses data distributed across hundreds or even thousands of servers, simple economics points to using no-cost server software as opposed to paying per-processor license fees...

Applications such as online transaction processing, business intelligence, customer relationship management, document processing, and social networking don't have identical needs for data, query, or index types, nor do they have equivalent requirements for consistency, scalability, and security. For example, BI applications run analytical and decision-support queries that can exploit bitmap indexes for operations with gigabyte- or terabyte-sized databases. Web analytics, drug discovery, financial modeling, and similar applications look to distributed systems for efficiently processing gigabyte- to terabyte-sized data sets... Social network applications such as Facebook and Amazon.com have adopted BASE (basically available, soft state, eventually consistent) properties over the more familiar ACID (atomicity, consistency, isolation, durability) ones to serve their massive Web user communities of millions...

A variety of data stores are gaining popularity for creating applications for scalable Web sites and elastic environments such as the private or public cloud. Distributed key-value stores are great when you don't need SQL rule enforcement, strong consistency, complex queries, integrated queuing, or the ability to operate with operational databases that exceed available RAM...

New low-latency data stores provide scalability for applications that don't require rich query and analytics capabilities. Amazon has developed SimpleDB, and Google developed Bigtable. Other low-latency, open source options include Cassandra, Hypertable, MongoDB, Project Voldemort, Redis, Tokyo Tyrant, and Dynamo, the database used for Amazon S3, which as of March 2010 was hosting 102 billion objects...


OpenID v.Next Core Protocol Working Group Charter
Dick Hardt, Posting to OpenID Specs Council Mailing List

Members of the OpenID community have submitted a Charter proposal for a new OpenID v.Next Core Protocol Working Group. The goal is to produce a core protocol specification or family of specifications for OpenID v.Next that address the limitations and drawbacks present in OpenID 2.0 that limit OpenID's applicability, adoption, usability, privacy, and security. Compatibility with OpenID 2.0 is an explicit non-goal for this work.

Specific chartered goals are to: (1) define core message flows and verification methods; (2) enable support for controlled release of attributes; (3) enable aggregation of attributes from multiple attribute sources; (4) enable attribute sources to provide verified attributes; (5) enable the sources of attributes to be verified; (6) enable support for a spectrum of clients, including passive clients per current usage, thin active clients, and active clients with OP functionality; (7) enable authentication to and use of attributes by non-browser applications; (8) enable optimized protocol flows combining authentication, attribute release, and resource authorization; (9) define profiles and support features intended to enable OpenID to be used at levels of assurance higher than NIST SP800-63 v2 level 1; (10) ensure the use of OpenID on mobile and other emerging devices; (11) ensure the use of OpenID on existing browsers with URL length restrictions; (12) define an extension mechanism for identified capabilities that are not in the core specification; (13) evaluate the use of public key technology to enhance security, scalability, and performance; (14) evaluate inclusion of single sign-out; (15) evaluate mechanisms for providing redundancy; (16) complement OAuth 2.0; (17) minimize migration effort from OpenID 2.0; (18) seamlessly integrate with and complement the other OpenID v.Next specifications; (19) deprecate redundant or unused mechanisms.

The anticipated audience or users of the work include implementers of OpenID Providers, Relying Parties, Active Clients, and non-browser applications utilizing OpenID. Related work being done in other WGs or organizations includes the OpenID Authentication 2.0 and related specifications, for example Attribute Exchange (AX), Contract Exchange (CX), Provider Authentication Policy Extension (PAPE), Artifact Binding (AB), and the draft User Interface (UI) Extension. Also: OAuth 2.0, SAML 2.0 Core, and SAML Authn Context. Initial members of the OpenID v.Next Core Protocol Working Group include (proposers) Dick Hardt, Michael B. Jones, Breno de Medeiros, Ashish Jain, George Fletcher, John Bradley, Nat Sakimura, and Shade.

See also: the OpenID Society


Alfresco Announces Activiti Project as an Apache 2 Licensed BPM Engine
Josh Long, InfoQ

"Alfresco Software, makers of a leading open source enterprise content managment (ECM) system announced Monday their open source Activiti Business Process Managment (BPM) project, led by jBPM creator, former JBoss jBPM lead and BPM authority Tom Baeyens. Joram Barrez, also a former jBPM team member, joins him as a core Activiti developer. Alfresco has long embedded jBPM in their product offering, and will continue to support it going forward. Ultimately, Alfresco will also include Activiti in future releases.

Activiti is a new, Apache 2 licensed open source project that offers a lightweight, embeddable BPM engine with BPMN 2.0 support. In the BPM market, there are many specifications that (arguably) never quite offered leadership on all the main problems solved by a BPM or workflow engine. BPEL is often criticized for providing too limited a runtime model to build more complex processes. More confusingly, BPMN 1.0 emerged and specified a very rich set of symbols to describe processes, but did not specify execution semantics, as BPEL did. Many vendors clamored to build BPMN tools that round-tripped to BPEL, but this was untenable, as BPEL could not describe many things that could be drawn in BPMN..."
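
To see what "execution semantics" means in practice, here is a minimal sketch of an executable BPMN 2.0 process definition; the process and task identifiers are invented, and the namespace is the one used by the BPMN 2.0 specification text:

    <definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
                 targetNamespace="http://example.org/bpmn">
      <!-- isExecutable marks this as a runnable process definition,
           the key addition over the purely notational BPMN 1.0 -->
      <process id="approveInvoice" isExecutable="true">
        <startEvent id="start"/>
        <sequenceFlow id="flow1" sourceRef="start" targetRef="review"/>
        <userTask id="review" name="Review invoice"/>
        <sequenceFlow id="flow2" sourceRef="review" targetRef="end"/>
        <endEvent id="end"/>
      </process>
    </definitions>

The same XML that a modeling tool draws as boxes and arrows is what an engine such as Activiti can deploy and run, which is precisely what BPMN 1.0 round-tripping to BPEL could not guarantee.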

From the Alfresco announcement: "An independently run and branded open source project, Activiti will work independently of the Alfresco open source ECM system. Activiti will be built from the ground up to be a lightweight, embeddable BPM engine, but also designed to operate in scalable Cloud environments. Activiti will be liberally licensed under Apache License 2.0 to encourage widespread usage and adoption of the Activiti BPM engine and BPMN 2.0, which is being finalized as a standard by the OMG...

While Activiti will become Alfresco's default business process engine, Alfresco will continue to support jBPM as well as other business process engines currently integrated with its ECM software. Alfresco will also offer support, maintenance and indemnity for the Activiti suite alongside the Alfresco Enterprise Edition... The first alpha release of Activiti indicates the breadth of capabilities intended for the project, including: (1) Activiti Engine: A simple JAR file containing the Process Virtual Machine and BPMN process language implementation; (2) Activiti Probe: A system administration console to control and operate the Activiti Engine; (3) Activiti Explorer: A simple end-user application for managing task lists and executing process tasks; (4) Activiti Modeler: A browser-based and Ajax-enabled BPMN 2.0 process modeling tool designed for business analysts..."

See also: the Alfresco announcement


Novell and Symplified: Cloud-based Identity Management Gets a Boost
Ellen Messmer, Network World

"Giving network managers a way to provide access, single sign-on and provisioning controls in cloud-computing environments are getting a boost from both Novell and a much smaller competitor, start-up Symplified. Novell said its Identity Manager 4.0 product, expected out in the third quarter, will be able to work with Salesforce.com and Google Apps, as well as Microsoft SharePoint, and SAP applications to support a federated identity structure in the enterprise.

Symplified broke new ground with what it's calling Trust Cloud for EC2, software that provides access management, authentication, user provisioning and administration, single sign-on and usage auditing for enterprise applications running on the Amazon EC2 platform. It can be ordered through Symplified's Trust Cloud site and automatically deployed on the Amazon EC2 virtual-machine instances that customers request under an arrangement with Amazon.

Like Google and Salesforce.com, Amazon supports the Security Assertion Markup Language (SAML) protocol, seen as a standard building block for identity management interoperability. But only about 5% of the estimated 2,200 service providers in the burgeoning cloud-computing market appear to support SAML, Symplified's Eric Olden says, so Symplified also elected to support a variety of non-SAML-based protocols, such as those used at Taleo, a cloud-based recruiting and personnel management application provider.
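
For reference, the SAML building block mentioned here is an XML assertion passed from an identity provider to a service provider; a skeletal SAML 2.0 assertion looks roughly like this (issuer, subject, and timestamps are invented for illustration):

    <saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                    ID="_abc123" Version="2.0"
                    IssueInstant="2010-05-24T12:00:00Z">
      <!-- The identity provider vouching for the user -->
      <saml:Issuer>https://idp.example.com</saml:Issuer>
      <saml:Subject>
        <saml:NameID
            Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">user@example.com</saml:NameID>
      </saml:Subject>
      <!-- The authentication event the service provider trusts -->
      <saml:AuthnStatement AuthnInstant="2010-05-24T12:00:00Z">
        <saml:AuthnContext>
          <saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:Password</saml:AuthnContextClassRef>
        </saml:AuthnContext>
      </saml:AuthnStatement>
    </saml:Assertion>

In a federated setup, the cloud application accepts this assertion instead of maintaining its own password store, which is why SAML support among providers matters.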

Analyst Glazer says cloud computing is having a profound effect on vendors in the identity management arena, which spent years arguing over and developing SAML, only to find that one of its most promising uses is not just within the fortress of the enterprise, controlling provisioning and other functions on corporate networks, but now also in the cloud... Novell intends to charge about $29.95 to $50 per user for Identity Manager 4.0, while Symplified's Trust Cloud for EC2 costs $1 per user per application..."


VMware's SpringSource Partners with Google on Cloud Computing
John K. Waters, Application Development Trends

VMware and Google are joining forces to make life easier for developers aiming their apps at the cloud. The two companies have announced a series of collaborations that will enable Java developers to use Google and VMware tools on cloud apps and deployments of Spring Java applications on the Google App Engine. VMware CEO Paul Maritz unveiled the collaboration plan this week during a keynote at the Google I/O conference in San Francisco.

The collaboration plan includes new support for Spring Java apps on the Google App Engine; combining the capabilities of Spring Roo, a next generation rapid application development tool, with the Google Web Toolkit (GWT); and tighter integration of VMware's Spring Insight performance tracing technology and Google's Speed Tracer.

From the text of the Google announcement: "We're excited to announce our work with VMware to connect our developer tools, making it possible to create rich, multi-device web applications that can be hosted in a variety of Java-compatible hosting environments. Call it cloud portability for the enterprise—productively build apps that you can deploy onto Google App Engine for Business, a VMware environment, or other infrastructure such as Amazon EC2. As part of this announcement, we're providing early access to these tools: you can start using them right now by downloading the latest milestone version of VMware's SpringSource Tool Suite (STS). If you prefer to wait for the general release, you can sign up to be notified...

Tools include: (1) Spring Roo: With Spring Roo, a next-generation rapid application development tool, Java developers can easily build full applications in minutes, using the Java Persistence API (JPA) to connect to new or existing databases. Roo outputs standard Java code; (2) Google Web Toolkit SDK: New data presentation widgets in Google Web Toolkit speed development of traditional enterprise applications, increase performance and interactivity for enterprise users, and make it much easier to create engaging mobile apps with a fraction of the investment previously required; (3) SpringSource Tool Suite: Using the Eclipse-based SpringSource Tool Suite, developers can now choose to deploy their application in their current VMware vSphere environment, in VMware vCloud, directly to Google App Engine for Business, or elsewhere; (4) Google Web Toolkit Speed Tracer: Speed Tracer now helps developers identify and fix performance problems not only in the client and network portions of their apps, but also on the server..."

See also: the Google announcement


Seagate Momentus XT Hybrid SSD Boots Into a New Dimension
Brian Chee, InfoWorld

"Seagate's new XT Hybrid SSD costs just $156 for 500GB of disk and a 4GB SSD (solid-state drive), and it fits in a laptop. Not only that, but it is operating system independent: The NAND flash memory is directly integrated into the drive controller board and does not require any specific software to be loaded on the operating system...

Other drive vendors such as SilverStone are shipping what is in essence an SSD front-end cache in a 3.5-inch drive tray, but the SilverStone product is strictly a cache and doesn't seem to have any sort of adaptive technology. Another vendor, Raidon, offers an SSD that seems to serve as a read cache, since the SSD in this case mirrors what's on the spinning platters. Both are good efforts, but they leave me wanting.

Seagate's product isn't just a hard disk with a 4GB SSD cache—the SSD is adaptive, meaning it keeps tabs on your most frequently used data in order to serve it up in an instant. Seagate's NDA briefing stressed that the first time you try to load anything from the hybrid drive, you won't see any difference in speed from any other 7,200-rpm disk drive. However, each subsequent load will get faster and faster. How much faster? Mileage will of course vary, but Seagate claims as much as 40 percent. Naturally, I ran my own tests to find out..."

From the announcement: "Seagate announced channel and OEM shipments of the Momentus XT drive, the world's fastest 2.5-inch laptop PC hard drive, combining SSD-like performance with the massive capacity and much lower cost of HDDs. The Momentus XT drive also features Adaptive Memory -- a groundbreaking new technology from Seagate that learns and optimizes the drive's performance to each user by moving frequently used information into the flash memory for faster access. The Momentus XT solid state hybrid drive boots up to 100 percent faster than traditional 5400RPM drives, the mainstream spin speed for laptop PCs, and sets new benchmarks for real-world system performance for laptops and gaming systems... The unique Adaptive Memory technology works by identifying patterns in how often certain digital data is used, and then moving the most frequently used information to the embedded solid state memory for faster access -- effectively tailoring hard drive performance to each user and their applications..."

See also: the Seagate announcement


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation          http://www.ibm.com
ISIS Papyrus             http://www.isis-papyrus.com
Microsoft Corporation    http://www.microsoft.com
Oracle Corporation       http://www.oracle.com
Primeton                 http://www.primeton.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/


