XML Daily Newslink. Wednesday, 25 February 2009

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Sun Microsystems, Inc. http://sun.com



OASIS Public Review: Identity Metasystem Interoperability Version 1.0
Michael B. Jones and Michael McIntosh (eds), OASIS Approved CD

Members of the OASIS Identity Metasystem Interoperability Technical Committee have approved a Committee Draft version of the "Identity Metasystem Interoperability Version 1.0" specification for public review. The review period extends through April 27, 2009. This specification is intended for developers and architects who wish to design identity systems and applications that interoperate using the Identity Metasystem Interoperability specification. An Identity Selector and the associated identity system components allow users to manage their Digital Identities from different Identity Providers and employ them in various contexts to access online services. In this specification, identities are represented to users as 'Information Cards'. An Information Card provides a visual representation of a Digital Identity for the end user. Information Cards contain a reference to an IP/STS that issues Security Tokens containing the Claims for that Digital Identity. The 'Information Card Model' refers to the use of Information Cards containing metadata for obtaining Digital Identity claims from Identity Providers and then conveying them to Relying Parties under user control. Information Cards can be used both with applications hosted on Web sites and accessed through Web browsers and with rich client applications that directly employ Web services. The Identity Metasystem Interoperability specification prescribes a subset of the mechanisms defined in WS-Trust 1.2, WS-Trust 1.3, WS-SecurityPolicy 1.1, WS-SecurityPolicy 1.2, and WS-MetadataExchange to facilitate the integration of Digital Identity into an interoperable token issuance and consumption framework using the Information Card Model. It documents the Web interfaces used by browsers and Web applications that employ the Information Card Model. Finally, it extends WS-Addressing's endpoint reference by providing identity information about the endpoint that can be verified through a variety of security means, such as HTTPS or the wealth of WS-Security specifications. This profile constrains the schema elements/extensions used by the Information Card Model, as well as the behaviors required of conforming Relying Parties, Identity Providers, and Identity Selectors.
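
The claim-conveyance flow above can be hard to visualize from prose alone. The Java-style sketch below is purely illustrative: none of these types or method names come from the IMI specification or its schema; they simply name the moving parts (Relying Party, Identity Selector, IP/STS) and show the order in which claims travel under user control.

    // Hypothetical sketch of the Information Card token flow; the types
    // and method names below are NOT defined by IMI 1.0.
    class TokenPolicy {}          // the claims a Relying Party requires
    class InformationCard {}      // user-visible reference to an IP/STS
    class SecurityToken {}        // signed claims issued by the IP/STS

    interface RelyingParty {
        TokenPolicy requiredClaims();
        void accept(SecurityToken token);
    }

    interface IdentitySelector {
        InformationCard chooseCard(TokenPolicy policy);  // user picks a card
    }

    interface IdentityProviderSts {
        SecurityToken issue(InformationCard card, TokenPolicy policy);
    }

    class InformationCardFlow {
        static void login(RelyingParty rp, IdentitySelector selector,
                          IdentityProviderSts ipSts) {
            TokenPolicy policy = rp.requiredClaims();            // 1. RP states its policy
            InformationCard card = selector.chooseCard(policy);  // 2. user selects a matching card
            SecurityToken token = ipSts.issue(card, policy);     // 3. IP/STS issues a token (WS-Trust)
            rp.accept(token);                                    // 4. token conveyed to the RP
        }
    }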

See also: the OASIS IMI Technical Committee formation


NIST Guidelines Released for Secure Use of Digital Signatures, Hashing
William Jackson, Government Computer News

Guidelines for the secure use of approved hash algorithms have been updated by the U.S. National Institute of Standards and Technology (NIST), providing the technical specifications for the latest Federal Information Processing Standards (FIPS). NIST publications on security (including encryption and key management) have played a prominent role for many years, especially in government applications. FIPS Publications are issued by NIST after approval by the Secretary of Commerce pursuant to Section 5131 of the Information Technology Management Reform Act of 1996 (Public Law 104-106) and the Federal Information Security Management Act of 2002 (Public Law 107-347). The newly released Special Publication 800-107, "Recommendation for Applications Using Approved Hash Algorithms," provides guidelines for achieving the needed level of security when using algorithms approved in FIPS 180-3. Hash functions compute a fixed-length digest, or hash, of a document or message, which can be used to verify that the document has not been altered. The agency has also released guidelines for ensuring that digitally signed documents cannot be tampered with by a second party. Special Publication 800-106, titled "Randomized Hashing for Digital Signatures," specifies ways to enhance the security of cryptographic hash functions used in digital signatures by randomizing the message. Special Publications in the NIST 800 series present documents of general interest to the computer security community. The series was established in 1990 to provide a separate identity for information technology security publications; it reports on the research, guidelines, and outreach efforts of NIST's Information Technology Laboratory (ITL) in computer security, and on ITL's collaborative activities with industry, government, and academic organizations.
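
As a concrete illustration, the Java snippet below computes a fixed-length digest with SHA-256, one of the hash functions approved in FIPS 180-3. The second step is only a sketch of the general idea behind randomized hashing (mixing a fresh random value into the message before it is hashed and signed); SP 800-106 specifies an exact randomization scheme that this snippet does not reproduce.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.SecureRandom;

    public class HashDemo {
        public static void main(String[] args) throws Exception {
            byte[] message = "important document".getBytes(StandardCharsets.UTF_8);

            // Fixed-length digest with a FIPS 180-3 approved function.
            MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
            byte[] digest = sha256.digest(message);
            System.out.println("SHA-256 digest: " + digest.length + " bytes");

            // Illustration only: SP 800-106 defines a specific randomization
            // scheme; here we merely mix a fresh random value into the
            // message before hashing, which is the general idea.
            byte[] rv = new byte[16];
            new SecureRandom().nextBytes(rv);
            sha256.reset();
            sha256.update(rv);
            sha256.update(message);
            byte[] randomizedDigest = sha256.digest();
            System.out.println("randomized digest: " + randomizedDigest.length + " bytes");
        }
    }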

See also: NIST Special Publications


ICT Solutions Using SOA and Web Services
Staff, European Communities epractice.eu News

The eGovernment Workshop jointly organised by the OASIS eGov Member Section and the Belgian Federal Ministry of Finance took place on 19-February-2009 in Brussels, and the slides have been made available free online. The objective of this one-day Workshop was to introduce ICT solutions in the context of public finance, including tax, benefits, and payments, and to exchange experience amongst professionals through cases focusing on SOA (service-oriented architecture) and web-based services. The Workshop brought together a wide range of officials from various ministries concerned with tax, revenue, and related public finance issues, members of the secretariats of several international and European organisations, and representatives of technology companies and industry/trade associations. The OASIS eGovernment Member Section developed the Workshop to promote interoperability and the use of open standards in this rapidly expanding sector, which is expected to continue to grow despite the financial crisis afflicting major markets.

See also: the online proceedings


W3C eGovernment Stakeholder Meeting Welcomes IT and Policy Representatives
Staff, W3C Announcement

The World Wide Web Consortium (W3C) announced that the W3C eGovernment Interest Group will hold a special stakeholder meeting, hosted by the American Institute of Architects on 12-13 March in Washington, DC, to address the goals, benefits, and limitations of implementing electronic government. The two-day meeting provides a global forum for IT and policy representatives from government and industry to address the political, legal, financial, and social factors that affect the successful implementation of open government initiatives. The goal of the forum is to document progressive solutions for electronic government and to develop a road map for Web standards that realize open and interoperable solutions. Representatives from global government organizations and private industry will hear presentations from Ellen Miller, co-founder and executive director of the Sunlight Foundation; John Sheridan, Head of e-Services and Strategy for The National Archives of the United Kingdom; Kevin Novak, Vice President, Integrated Web Strategy and Technology, The American Institute of Architects; and Jose M. Alonso, W3C eGovernment Activity Lead and CTIC Fellow based in Spain. In addition to presentations, the meeting will focus discussion on use cases and potential road maps related to the following topics: (1) government transparency, openness, and interoperability of data; (2) citizen participation and engagement with government information; (3) use of social media among governments and citizenry; (4) incorporation of standard technologies to reduce costs and increase the productivity of information technologies. The W3C eGovernment Interest Group was chartered in June 2008, recognizing that governments throughout the world needed assistance and guidance in achieving the promises of electronic government through technology and the Web. Members of the Interest Group have been developing Draft Use Cases to identify the main issues that governments face when using the Web. Examples include: Strategic Plans; Persistent URIs; Identification Authentication; Risk Management for Social Media; Plain Language; XSD for DRM; Data Sharing Policy Expression; Semantic MyPage; Your Website is your API; US Privacy OWL; FEA-RMO OWL; Linked Open Government; Common Service Model. A report will be released later this year identifying the main issues and proposing how they can be solved. The IG will focus on the unique and diverse needs of governments throughout the developed and developing world in enabling electronic service and information delivery, as well as providing opportunities for discovery, interaction, and participation.

See also: the W3C eGovernment IG use cases


Revised Internet Draft: Link Relations and HTTP Header Linking
Mark Nottingham (ed), IETF Internet Draft

Significant changes have been made in the version -04 release of the I-D "Link Relations and HTTP Header Linking." This specification defines relation types for Web links and a registry for them. It also defines how to send such links in HTTP headers with the 'Link' header-field. Background: a means of indicating the relationships between resources on the Web, as well as the type of those relationships, has been available for some time in HTML, and more recently in Atom (IETF RFC 4287). These mechanisms, although conceptually similar, are separately specified. However, links between resources need not be format-specific; it can be useful to have typed links that are independent of the format, especially when a resource has representations in multiple formats. To this end, this document defines a framework for typed links that isn't specific to a particular serialisation or context of use. It does so by re-defining the link relation registry established by Atom to have a broader scope, and by adding to it the relations defined by HTML. Furthermore, an HTTP header-field for conveying typed links was defined in RFC 2068, but removed from RFC 2616 due to a lack of implementation experience. Since then, it has been implemented in some User-Agents (e.g., for stylesheets), and several additional use cases have surfaced. Because it was removed, the status of the Link header is unclear, leading some to consider minting new application-specific HTTP headers instead of reusing it. This document addresses the problem by re-specifying the Link header with updated but backwards-compatible syntax. Note that Atom conveys links in the 'atom:link' element, with the "href" attribute indicating the target IRI and the "rel" attribute containing the relation type. The context of the link is either a feed IRI or an entry ID, depending on where it appears; generally, feed-level links are candidates for transmission as a Link header. When serialising an atom:link into a Link header, it is necessary to convert target IRIs (if used) to URIs. Atom defines extension relation types in terms of IRIs; this specification defines them as URIs, to aid in their comparison. Atom allows registered link relation types to be serialised as absolute URIs, because a base URI is defined for the registry. Such relation types SHOULD be converted to the appropriate registered form so that they are not mistaken for extension relation types. Note also that while the Link header allows multiple relations to be associated with a single link, atom:link does not; in this case, a single link-value may map to several atom:link elements. As with HTML, atom:link defines some attributes that are not explicitly mirrored in the Link header syntax, but they may also be used as link-extensions...
Changes in -04: Defined context as a resource, rather than a representation; removed the concept of link directionality, relegating it to a deprecated Link header extension; split relation types into registered (non-URI) and extension (URI); changed wording around finding URIs for registered relation types; changed target and context URIs to IRIs (but not extension relation types); added RFC 2231 encoding for the 'title' parameter, with explicit BNF for title*; added i18n considerations; specified how to compare relation types; changed the registration procedure to Designated Expert; softened language around the presence of relations in the registry; added the 'describedby' relation; re-added the 'anchor' parameter, along with a security consideration for third-party anchors; softened language around HTML4 attributes that aren't directly accommodated; various tweaks to the abstract, introduction, and examples.
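
To make the header concrete: a link-value carries the target URI in angle brackets, followed by semicolon-separated parameters such as rel and title. The Java sketch below parses a representative link-value with a deliberately simplified pattern; a conforming parser would also need to handle RFC 2231 encoded title* parameters, the anchor parameter, and quoted strings containing commas.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class LinkHeaderDemo {
        // Simplified: <URI-Reference> followed by ;-separated parameters.
        // Does not handle commas inside quoted strings or title* encoding.
        private static final Pattern LINK_VALUE = Pattern.compile(
            "<([^>]*)>((?:\\s*;\\s*[\\w*]+\\s*=\\s*(?:\"[^\"]*\"|[^;,\\s]+))*)");

        public static void main(String[] args) {
            String header = "<http://example.com/TheBook/chapter2>; "
                          + "rel=\"previous\"; title=\"previous chapter\"";
            Matcher m = LINK_VALUE.matcher(header);
            while (m.find()) {
                System.out.println("target URI: " + m.group(1));
                System.out.println("parameters: " + m.group(2).trim());
            }
        }
    }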

See also: the discussion list archive


Towards a Plugin Architecture for XRX Web Applications
Dan McCreary, O'Reilly Technical

Several of us in the XML community have been creating XRX web applications for about a year now, and we have a small but quickly growing library of web applications that we tend to reuse over and over for our customers. In software development, XRX is a web application architecture based on XForms, REST, and XQuery. XRX applications store data on both the web client and the web server in XML format and do not require a translation between data formats. XRX is considered a simple and elegant application architecture because of the minimal number of translations needed to transport data between client and server systems. The XRX architecture is also tightly coupled to W3C standards (CSS, XHTML 2.0, XPath, XML Schema) to ensure XRX applications will be robust in the future. Because XRX applications leverage modern declarative languages on the client and functional languages on the server, they are designed to empower non-developers who are not familiar with traditional procedural languages such as JavaScript, Java or .Net... Many of these web apps have begun life as a simple XML Schema file and have then been transformed from an XML Schema into a full XRX web application using a set of XQuery transformations. Our goal is to use a Model-Driven Development style where the applications are generated directly from the XML Schemas. They have all the features of a typical web application (create, read, update, delete, search), and I am now getting to the stage where most of us can create a new XRX application from a simple element-centric XML Schema in about an hour, with a little help from some of the tools in the still very rough XRX Wikibook and XRX code samples. One of the things we find challenging is the need to quickly take these applications and re-purpose them for other customers. Our goal is to be able to take a folder for a web application and just drag it into an "apps" folder, just like installing a software package. But we also want the application to be instantly 'registered' by the web site, just as an Eclipse plugin is registered. This turns out to be very easy, since we can take advantage of fast XQueries that rely on native XML indexes. To make our applications a little more portable, we have started to create a single XML file that describes the functionality of each XRX application. This file (which we call the app-info.xml file) has much of the information necessary for shared web site application services. When we add a new web application to a web site, we would like the application and its icon to appear in the site's main applications menu. The site breadcrumbs should be updated to reflect the application label when you navigate into the application collections, and the "New" menu should allow new items to be created. The site-wide search functions should include the new items managed by the new application [etc.]... What you see is that some of these features are web-site driven and some of them are similar to the functions of an Eclipse plugin. This reflects the fact that many people are using XRX as a web-application development environment. It also reflects the fact that fast keyword index search is now assumed to be a built-in function of many XRX applications. Most importantly, we would like these XRX applications to be portable between any web servers and databases that support XQuery.
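
The registration idea is easy to sketch. In a real XRX site the scan would itself be an XQuery running over the native XML database, but the Java sketch below shows the same logic against a filesystem "apps" folder. Note that the element names inside app-info.xml ("label" here) are assumptions for illustration only, since the article does not publish that file's schema.

    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.NodeList;

    public class AppRegistry {
        public static void main(String[] args) throws Exception {
            // Any subfolder of "apps" containing an app-info.xml file is
            // treated as an installed XRX application, Eclipse-plugin style.
            File[] apps = new File("apps").listFiles(File::isDirectory);
            if (apps == null) return;
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            for (File app : apps) {
                File info = new File(app, "app-info.xml");
                if (!info.exists()) continue;
                Document doc = dbf.newDocumentBuilder().parse(info);
                // <label> is an assumed element name, used for the site's
                // applications menu and breadcrumbs in this sketch.
                NodeList labels = doc.getElementsByTagName("label");
                String label = labels.getLength() > 0
                        ? labels.item(0).getTextContent() : app.getName();
                System.out.println("registered application: " + label);
            }
        }
    }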

See also: XRX via Wikipedia


OSGi Takes Off Among Enterprise Service Bus Providers
Darryl K. Taft, eWEEK

"The Open Services Gateway Initiative Service Platform is catching on as a platform of choice for enterprise service bus solutions. The OSGi Alliance announced that many leading ESB providers in the market have demonstrated their commitment to the platform through the current or planned employment of OSGi technology in their ESBs and products that utilize ESBs. The OSGi Service Platform delivers the dynamic module system for Java to providers and their customers, modularizing and componentizing the Java platform and allowing applications to be adapted remotely and in real time, OSGi Alliance officials said. OSGi technology is a component integration platform with a service-oriented architecture and lifecycle capabilities that enable dynamic delivery of services. Leading vendors deploying ESBs on the OSGi Service Platform include Progress Software, Red Hat and TIBCO Software Inc. These market innovators note the clear advantages OSGi technology provides as a platform for ESBs. Other vendors that are not part of the OSGi Alliance, such as WSO2, also have standardized on the OSGi platform. 'OSGi technology and ESBs are a natural match,' said Gordon van Huizen, chief technology officer for Progress Software. 'First, OSGi technology speeds time to market for new ESB features and functions and improves manageability of ESB deployments. Second, the OSGi programming model offers the potential to create custom components that work seamlessly with out-of-the-box ESB functions. Ultimately, I believe we are looking at the possibility of OSGi technology enabling a 'best-of-breed' style approach to the ESB market, in which OSGi technology becomes the 'platform' for the ESB and vendors develop OSGi bundles to plug in.'[...]


What Does Woz See in Solid-State Drives?
Brooke Crothers, CNET News.com

What does Steve Wozniak know about solid-state drives that we don't? David Flynn, the chief technology officer of SSD start-up Fusion-io, provides some insight into why the Apple co-founder is joining the company as chief scientist. I talked with Flynn on the phone about what the Salt Lake City start-up, founded in 2006, does and what attracted Wozniak. Enterprise solid-state drives typically offer much better performance than even the fastest hard-disk drives. Fusion-io claims that its ioDrive improves storage performance by as much as 1,000 times over traditional disk arrays while operating at a fraction of the power and at a tenth of the total cost of ownership. Flynn offered an analogy to describe what his company hopes to achieve. "The 3D accelerator decimated the vertically integrated companies like SGI and Evans & Sutherland," he said. "They used to be able to charge hundreds of thousands of dollars for workstations." But inexpensive, off-the-shelf 3D graphics cards from companies like 3dfx, Nvidia, and ATI Technologies changed all of this in the late 1990s... Fusion-io's technology is pegged to IOPS (input/output operations per second), and companies such as Citibank and American Express are increasingly looking at server performance through the IOPS lens, according to Samsung, which makes both hard-disk drives and solid-state drives. Enterprise SSDs process 100 times the number of IOPS per watt as a typical 15K-RPM 2.5-inch server hard-disk drive, according to Samsung data... Lower power consumption is also a plus: enterprise solid-state drives consume less than 25 percent of the power of a 15K hard-disk drive, according to data provided by Samsung in October. Performance and low power consumption, however, aren't enough, according to Flynn. Because enterprise solid-state drives are a relatively new technology, reliability is crucial. Fusion-io offers a technology called "Flashback" protection: extra chips that can jump in to take over immediately if there is a failure. "This is at the chip level. It's not wear-out that's the problem, it's chips that short out" because of the high voltages, Flynn said. Here are some more specifics Flynn offered: currently, Fusion-io can achieve just shy of 1 terabyte of storage by using three 320GB cards (3 x 320GB = 960GB). "We're doubling density per module and doubling the number of modules per card, so we're going to have 1.3TB on a single PCI Express card"...


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
Microsoft Corporation http://www.microsoft.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com
Sun Microsystems, Inc. http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2009-02-25.html
Robin Cover, Editor: robin@oasis-open.org