The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: May 13, 2010
XML Daily Newslink. Thursday, 13 May 2010

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
ISIS Papyrus http://www.isis-papyrus.com



First Public Working Draft of XSL Transformations (XSLT) Version 2.1 Published
Michael Kay (ed), W3C Technical Report

Members of the W3C XSL Working Group have published the First Public Working Draft for XSL Transformations (XSLT) Version 2.1. This specification has been developed in conjunction with XPath 2.1 and other documents that underpin both XSLT and XQuery. Although the development of this family of documents is coordinated, it has not been possible on this occasion to publish them simultaneously, and there may therefore be imperfect technical alignment between them. This will be corrected in later drafts.

A transformation in the XSLT language is expressed in the form of a stylesheet, whose syntax is well-formed XML conforming to the Namespaces in XML Recommendation. A stylesheet generally includes elements that are defined by XSLT as well as elements that are not defined by XSLT. XSLT-defined elements are distinguished by use of a designated namespace, referred to in this specification as the XSLT namespace; this specification is thus a definition of the syntax and semantics of the XSLT namespace. The term stylesheet reflects the fact that one of the important roles of XSLT is to add styling information to an XML source document, by transforming it into a document consisting of XSL formatting objects (see XSL-FO), or into another presentation-oriented format such as HTML, XHTML, or SVG. However, XSLT is used for a wide range of transformation tasks, not exclusively for formatting and presentation applications.

A transformation expressed in XSLT describes rules for transforming zero or more source trees into one or more result trees. The structure of these trees is described in the Data Model specification. The transformation is achieved by a set of template rules. A template rule associates a pattern, which matches nodes in the source document, with a sequence constructor. In many cases, evaluating the sequence constructor will cause new nodes to be constructed, which can be used to produce part of a result tree. The structure of the result trees can be completely different from the structure of the source trees. In constructing a result tree, nodes from the source trees can be filtered and reordered, and arbitrary structure can be added. This mechanism allows a stylesheet to be applicable to a wide class of documents that have similar source tree structures.
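The template-rule model described above (a pattern matched against source nodes, paired with a sequence constructor that builds result nodes) can be sketched as a toy model in Python. This is an illustration of the processing model only, not a real XSLT processor; the rule table and helper names are invented for the example.

```python
# Toy model of XSLT template rules: each rule pairs a pattern (here,
# simply a tag name) with a sequence constructor (a function that
# builds new result nodes from the matched source node).
import xml.etree.ElementTree as ET

def li_rule(node):
    # Sequence constructor: build an <li> result node from an <item>.
    li = ET.Element("li")
    li.text = node.text
    return [li]

RULES = {"item": li_rule}   # pattern -> sequence constructor

def apply_templates(node):
    # For each child, fire the matching template rule; with no match,
    # fall through to recursing on the children (a default rule).
    out = []
    for child in node:
        rule = RULES.get(child.tag)
        if rule is not None:
            out.extend(rule(child))
        else:
            out.extend(apply_templates(child))
    return out

source = ET.fromstring("<list><item>alpha</item><item>beta</item></list>")
result = ET.Element("ul")                 # result tree root
result.extend(apply_templates(source))
print(ET.tostring(result, encoding="unicode"))
# -> <ul><li>alpha</li><li>beta</li></ul>
```

Note how the result tree's structure (`ul`/`li`) is independent of the source tree's structure (`list`/`item`), which is the point of the template-rule mechanism.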

The main focus for enhancements in XSLT 2.1 is the requirement to enable streaming of source documents. This is needed when source documents become too large to hold in main memory, and also for applications where it is important to start delivering results before the entire source document is available. While implementations of XSLT that use streaming have always been theoretically possible, the nature of the language has made it very difficult to achieve this in practice. The approach adopted in this specification is twofold: it identifies a set of restrictions which, if followed by stylesheet authors, will enable implementations to adopt a streaming mode of operation without placing excessive demands on the optimization capabilities of the processor; and it provides new constructs to indicate that streaming is required, or to express transformations in a way that makes it easier for the processor to adopt a streaming execution plan..."

See also: the W3C XSL Working Group


Microsoft Active Directory Federation Services (ADFS) 2.0
Staff, Geneva Team Blog

On May 5, 2010, Microsoft announced the general availability of Active Directory Federation Services (ADFS) Version 2.0: "This release for Windows Server 2008 and 2008 R2 will make it easier to work across companies and leverage the cloud, and to develop secure applications, all while using industry-standard, interoperable protocols."

Some of the key features of Active Directory Federation Services 2.0 are as follows. ADFS enables a single user access model: native single sign-on lets users have one account and one password across diverse systems, improving user productivity. It supports access both on-premises and in the cloud: identities can be used seamlessly between on-premises software and cloud services. ADFS uses standard protocols, including the WS-* and SAML protocols, enabling applications based on different programming models, languages, and devices to interoperate. It also supports enhanced federated identity management: partner organizations can manage their own identities while securely sharing and accepting identities with each other in order to make access decisions, minimizing the need to manage authentication credentials for other parties.

ADFS is available as an integrated server role. AD FS 2.0 is a server role within Windows Server 2008 R2 that can be easily deployed and managed using Server Manager, instead of being handled as an added feature, as in Windows Server 2003 R2. It integrates seamlessly with SharePoint 2010 and AD RMS for secure collaboration across organizations.

ADFS 2.0 provides enhanced developer experiences: it is built on Windows Identity Foundation (WIF) which enables .NET developers to externalize identity logic from their application, improving developer productivity, enhancing application security, and enabling interoperability. It now supports improved administration, including simple and effective trust setup and management features which enable IT to easily connect to cloud or partner organizations' systems...

See also: an Active Directory Federation Services 2.0 Overview


Eugenio Pace on Identity Federation, WIF, and ADFS 2.0
Jon Arild Toerresdal and Eugenio Pace, InfoQ

Microsoft has entered the cloud, and customers are looking into moving their applications to this new platform. In doing so, authentication and identity management need to be addressed. InfoQ editor Jon Arild Toerresdal talked to Eugenio Pace, Senior Program Manager in the Patterns & Practices team, about the recent federation and identity technologies released by Microsoft.

Excerpts (Eugenio): "Many applications today take on the responsibility of authenticating their users. They do this by managing user credentials, usually in the form of username/password pairs or some other shared secret. The result is a proliferation of user account repositories, one in each application. In many situations this is just fine, but in many other scenarios this approach has serious limitations. For example: if each application stores user accounts in its own database, then it is very difficult to provide a 'single sign-on' experience. There are also increased management tasks, like creating new user accounts, since you have to do so in each application. There are also increased security risks: if a user is no longer valid, you might forget to disable their account in one of the applications and inadvertently allow access when you shouldn't. Identity Federation (and the related claims-based authentication architecture) aims at solving all these challenges by factoring user authentication out of applications and moving it to a specialized service: an identity provider...

One of the key scenarios where identity federation shines is cloud-based applications. I believe that as companies move more and more applications to the cloud, a claims-based approach to identity will be the preferred approach... Microsoft has recently released the Windows Identity Foundation (WIF), a .NET library to claims-enable .NET applications. WIF provides higher-level abstractions that significantly lower the bar of entry to the world of identity federation and claims. WIF ships with an SDK and tools integrated into Visual Studio to simplify the development experience even further. WIF is designed to abstract developers away from all the details of validating, parsing, and exchanging security tokens. It implements the WS-Federation and WS-Trust protocols and can handle SAML 1.1 and 2.0 security tokens. Developers will mostly deal with higher-level abstractions such as IPrincipal and IIdentity...
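The claims-based pattern Pace describes (authentication factored out to an identity provider, with the application only validating tokens and reading claims) can be sketched in miniature. WIF itself is a .NET library, so everything below, including the shared-key trust setup and function names, is an invented stand-in for illustration, not WIF's API.

```python
# Illustrative sketch of claims-based authentication: the application
# never checks a password; it verifies a signed token from a trusted
# identity provider and reads claims out of it. The HMAC shared key
# stands in for real trust material (certificates, federation metadata).
import hashlib
import hmac
import json

SHARED_KEY = b"idp-and-app-shared-secret"    # hypothetical trust material

def issue_token(claims):
    # Done by the identity provider: serialize claims and sign them.
    body = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return body, sig

def validate_token(body, sig):
    # Done by the application: verify the signature, then trust the claims.
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("token not issued by a trusted provider")
    return json.loads(body)

body, sig = issue_token({"name": "alice", "role": "approver"})
claims = validate_token(body, sig)
print(claims["role"])   # the app makes access decisions from claims
# -> approver
```

The point of the pattern is visible even at this scale: the application's authorization logic consumes claims and never touches a credential store.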

ADFS v2 can currently authenticate users against AD only. However, ADFS v2 provides a rich and extensible engine for issuing claims that can draw on various attribute stores. Out of the box, ADFS v2 can issue claims from information stored in LDAP directories and SQL repositories. You can also develop your own custom attribute provider..."


Extensible Markup Language Evidence Record Syntax
Aleksej J. Blazic, Svetlana Saljic, Tobias Gondrom (eds), IETF Internet Draft

Members of the IETF Long-Term Archive and Notary Services (LTANS) Working Group have published an updated -05 version of the specification Extensible Markup Language Evidence Record Syntax. In many scenarios, users must be able to demonstrate the existence at a given time, the integrity, and the validity of data, including signed data, over long or undetermined periods of time. The purpose of the document is to define an XML Schema and processing rules for the Evidence Record Syntax in XML format. The document is related to the initial ASN.1 syntax for Evidence Record Syntax as defined in RFC 4998.

Background: "The evolution of electronic commerce and electronic data exchange in general requires the introduction of non-repudiable proof of data existence as well as data integrity and authenticity. Such data and non-repudiable proof of existence must endure for long periods of time, even when the information used to prove data existence and integrity weakens or ceases to exist. Mechanisms such as digital signatures do not provide absolute reliability on a long-term basis. Algorithms and cryptographic material used to create a signature can become weak in the course of time, and information needed to validate digital signatures may become compromised or simply cease to exist, for example because a certificate service provider goes out of business. Providing a stable environment for electronic data on a long-term basis requires the introduction of additional means to continually provide an appropriate level of trust in evidence of data existence, integrity, and authenticity...

XMLERS does not supplement the RFC 4998 specification; it introduces the same approach, but with a different format and different processing rules. The Extensible Markup Language (XML) format is already recognized by a wide range of applications and services and has been selected as the de facto standard for many applications based on data exchange. The introduction of an evidence record syntax in XML format broadens the horizon of XML use and presents a syntax harmonized with a growing community of XML-based standards, including those related to security services (e.g., XMLDSig or XAdES). Due to the differences in XML processing rules and other characteristics of the XML language, XMLERS is not a direct transformation of ERS in ASN.1 syntax. The XMLERS syntax is based on processing rules that differ from those defined in RFC 4998, and it does not support, for example, the import of ASN.1 values into XML tags. Creating evidence records in XML syntax must follow the steps as defined in this draft.

An Evidence Record may be generated and maintained for a single data object or for a group of data objects that form an archive object. A data object (a binary chunk or a file) may represent any kind of document or part of one. Dependencies among data objects, their validation, or any relationship other than "a data object is a part of a particular archived object" are out of the scope of this draft. An Evidence Record maintains a close relationship to time-stamping techniques. However, time-stamps as defined in RFC 3161 can cover only a single unit of data and do not provide processing rules for maintaining the long-term stability of time-stamps applied to a data object. Evidence for an archive object is created by acquiring a time-stamp from a trustworthy authority for a specific value that is unambiguously related to one or more data objects..."
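The evidence-creation step described above (one time-stamp covering many data objects via a single value unambiguously related to all of them) is realized in RFC 4998 with a hash tree: a Merkle tree whose root is the value submitted to the time-stamp authority. The sketch below illustrates only that hash-tree idea in Python, not the XMLERS encoding or any of its processing rules.

```python
# Sketch of the hash-tree construction behind evidence records: hash
# each archived data object, then repeatedly hash adjacent pairs until
# a single root remains. A time-stamp over the root then serves as
# evidence of existence for every object in the archive group.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(data_objects):
    level = [h(obj) for obj in data_objects]     # leaf hashes
    while len(level) > 1:
        if len(level) % 2:                       # odd count: duplicate last
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])      # hash adjacent pairs
                 for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical archive group of three data objects.
archive = [b"contract.pdf bytes", b"invoice.xml bytes", b"signature blob"]
root = merkle_root(archive)
print(root.hex())
```

Changing any one data object changes the root, so a single time-stamp over the root binds the existence and integrity of the whole group at that point in time.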

See also: the IETF Long-Term Archive and Notary Services WG Charter


FAQs for Smart Grid Interoperability
Crisie Charan-Thomas and Ken Sinclair, AutomatedBuildings.com

Charan-Thomas: "If you visit a hundred different web sites, you will find a hundred different definitions of a Smart Grid. Since there is no globally centralized Smart Grid authority, we may never have a single definition, but that's not necessarily a bad thing. However, in analyzing many of the definitions that are out there, you will find a couple of elements common to almost all of them: a two-way flow of information coupled with a two-way flow of power. When you think about it, at least one of these two items is involved in every application we talk about in the Smart Grid: distributed generation, demand response, renewable integration, energy storage, consumer energy management, smart appliances, advanced metering infrastructure, substation automation, and so on...

Since much of the desired functionality and many of the applications we seek to enable in the Smart Grid depend on reliable communications, there is a definite role for IP networking as a globally accepted and universally known communications protocol. Naturally there are groups that believe IP should be the only communications protocol used in the grid, just as there are groups that believe it shouldn't be allowed at all. Like it or not, IP is already in the grid, inasmuch as various AMI companies are building IP adapters for smart meters, and IP networking is built into the C12 standards. For devices capable of supporting the logical addressing schemes characteristic of IP networking implementations, it will be a viable alternative...

IEC and NIST are two of the leading bodies promoting the Smart Grid, and each has produced its own roadmap. Both groups have gone to great lengths to come up with a list of standards, and they share a number of them in common. In terms of approach, NIST and IEC have considered slightly different sets of applications to be enabled by the Smart Grid. The NIST list names demand response and consumer energy efficiency, wide-area situational awareness, energy storage, electric transportation, advanced metering infrastructure, distribution grid management, cyber security, and network communications as the priority applications for the Smart Grid.

The IEC list of applications includes high-voltage DC, blackout prevention, distribution management and automation, substation automation, distributed energy resources, advanced metering infrastructure, demand response, smart homes, electric storage, and electromobility. While the two efforts have different governing organizations, a number of NIST participants in the Smart Grid Interoperability Panel (SGIP) are also members of the IEC Strategy Group 3 (SG3) for Smart Grid..."


Automatic Call Handling (ACH) RESTful Interface
Rifaat Shekh-Yusef and Theo Zourzouvillys (eds), IETF Internet Draft

An initial level -00 Internet Draft has been published for an "Automatic Call Handling (ACH) RESTful Interface" specification.

From the document's Introduction: "The Session Initiation Protocol (SIP), as defined in RFC 3261, is a protocol used for establishing calls for real-time communication between users. Some systems allow for automatic treatment of calls arriving for a specific user. Some of this treatment takes place before the call is presented to the endpoint, while other treatment takes place after the endpoint has received the call indication. Some automatic treatment can be set by the system administrator, while other treatment can be set by the end user. This automatic treatment of incoming calls is referred to as automatic call handling (ACH).

This document is focused on the automatic call handling (ACH) features described in the "An Analysis of Automatic Call Handling (ACH) Implementation Issues in the Session Initiation Protocol (SIP)" draft. The specification defines a RESTful interface that allows a RESTful client to directly affect the Automatic Call Handling (ACH) behavior at a domain authoritative for a specific SIP address... The document is limited to a subset of network provided features and does not support more complex operations such as time-of-day routing. The following features will be provided in the first version of this protocol: (1) Call forwarding (unconditional for all calls, on busy, no answer, not reachable); (2) Barring (incoming and outgoing); (3) DND—Do Not Disturb...
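As a hypothetical illustration of the kind of resource a client of such an interface might manipulate, the snippet below constructs a JSON body for a call-forwarding rule. The field names and the resource path in the comment are invented for this sketch; the actual representations are defined by the -00 draft itself.

```python
# Hypothetical sketch only: the JSON fields and the resource path below
# are invented for illustration, not taken from the ACH draft.
import json

aor = "sip:alice@example.com"            # the address-of-record being configured

rule = {
    "feature": "call-forwarding",
    # The draft lists forwarding conditions: unconditional, on busy,
    # no answer, and not reachable.
    "condition": "no-answer",
    "target": "sip:voicemail@example.com",
}

body = json.dumps(rule)
# A RESTful client would send this body to the server authoritative for
# the domain part of the AOR, e.g. (invented path):
#   PUT https://example.com/ach/sip:alice@example.com/forwarding
print(body)
```

Because the configuration lives at the system authoritative for the AOR's domain, a change made this way would apply to every binding for that AOR, as the draft notes.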

The actions taken under the control of the configuration settings established by these RESTful operations are made at the system authoritative for the domain part of the AOR. This has the implication that any configuration change will apply to all bindings for the AOR..."


JRuby 1.5 Released
John K. Waters, Application Development Trends

The JRuby community has released the latest upgrade of its 100 percent Java implementation of the Ruby programming language, JRuby 1.5. This release completes one of this open source community's longest development cycles. According to a blog posting, it took nearly five months to complete the upgrades and bug fixes in this release.

JRuby "is a Java implementation of the Ruby programming language, being developed by the JRuby team. It is free software released under a three-way CPL/GPL/LGPL license. JRuby is tightly integrated with Java to allow the embedding of the interpreter into any Java application with full two-way access between the Java and the Ruby code—similar to Jython for the Python language..."

The JRuby 1.5 release comes with 1,300 revisions and 432 bug fixes. There's also a new native access framework designed for performance and better FFI support, a native launcher for *nix platforms, Ant support and Rake-Ant integration, and "better and better" support for Windows. JRuby developers will also be glad to find multiple performance improvements for Ruby-to-Java calling, along with improvements in correctness, memory use, and speed.

The Ruby 1.8.7 standard library has also been updated in this release, as have RubyGems (to 1.3.6) and RSpec (to 1.3.0). API improvements based on user input (JSR-223, BSF, RedBridge, etc.) have been incorporated, and the ruby-debug tool is now installed by default. Look also for many fixes for Rails 3, including start-up time improvements, reduced memory use for Java class metadata, faster loading of Java classes, and jar-in-jar support in the classloader..."

See also: What's New in JRuby Version 1.5


Ice Cream With (RFID) Chips to Go: The Real-Time Scoop on Flavors
Jaikumar Vijayan, ComputerWorld

Businesses are using RFID tags to track everything from large shipping containers and livestock to tiny electronic components. It's unlikely, though, that any other business is using radio frequency identification technology for the same purpose as Izzy's Ice Cream Cafe in St. Paul, Minn. The shop, which epitomizes the classic mom-and-pop business, has concocted almost 100 flavors of ice cream and serves 32 flavors at any one time. Until this week, customers had little way of knowing whether their favorite flavors—Peppermint Bon Bon, Cherries Jubilee and Dulce de Leche, to name a few—were available until they arrived at the counter.

Not anymore. On Monday, Izzy's started using RFID technology to give customers real-time updates on all the available flavors in its dipping cabinet, the glass-covered case where the tubs of ice cream are displayed...

RFID readers stuck in the dipping cabinets scan tags attached to the signs that go above each ice cream tub to give customers updated information on available ice cream flavors. Each time one tub of ice cream is replaced with a new flavor, an employee swaps out the RFID tag in front of the tub with the one corresponding to the new flavor.

RFID readers in the dipping cabinet scan the tags 22 times every second and send the information to a system that then projects a series of dots representing different flavors onto a wall in the store... With limited space in front of the cabinet, most ice cream shops resort to listing available flavors on display boards behind the counter. It's a system that is manually intensive and prone to errors, especially when a shop sells as many flavors as Izzy's does, Sommers said. And it results in too many crestfallen customers at the order counter after they learn that their favorite flavor is sold out..."


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
ISIS Papyrus http://www.isis-papyrus.com
Microsoft Corporation http://www.microsoft.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2010-05-13.html
Robin Cover, Editor: robin@oasis-open.org