A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover
This issue of XML Daily Newslink is sponsored by:
Oracle Corporation http://www.oracle.com
Headlines
- It's the F-Word: Everything is Federated
- Review: Web Services Human Task (WS-HumanTask) Specification Version 1.1
- New IETF Draft: Link Extension to iCalendar
- ISOC Panel Notes Rise in IPv6-Related Activity
- Heartland Ramps Up First End-To-End Encryption
- Deriving New Business Insights With Big Data
- IETF Last Call Public Review for Web Host Metadata Specification
- W3C Publishes Report on Model-Based UI Workshop
- Creating Mobile Web Applications with HTML 5: Canvas, CSS3, and More
- WSO2 Adds Rules Server to SOA Platform
It's the F-Word: Everything is Federated
Leif Johansson, IETF Journal
"Everything is federated these days. In some cases, particularly when it is waved at a problem, the F-word 'Federated' is not well defined. In other cases there are clearly defined semantics for the word federation. Federated authentication is one of those cases... In the Enterprise Authentication Model (late 1990s), the purpose of authentication was to establish a security context between the two endpoints of a communications channel, using technologies such as GSS-API (Generic Security Service Application Program Interface), SASL (Simple Authentication and Security Layer), or TLS (Transport Layer Security), together with identifiers that represented users. Back then authentication involved two parties: the client and the server...
In January 2001, the OASIS Security Services Technical Committee (SSTC) convened to begin work on what became SAML (Security Assertion Markup Language), a technology that has become one of the cornerstones of federated authentication. SAML can support several use cases, but the most commonly deployed pattern is called Browser Web SSO. This particular profile of SAML involves three actors: the user, the identity provider, and the service provider. SAML uses XML-based messages and relies on public key cryptography (although not necessarily on a public key infrastructure such as X.509 or PKIX) to sign and encrypt those messages.
The IETF has been mostly absent from this field, but that may soon be changing. IETF 77, held in Anaheim, California, in March 2010, saw the first bar BoF session for the Moonshot project. The aptly named project, which is partly funded by JANET and by the GEANT3 project, aims high, but the potential benefits justify the effort. It brings together a wide range of IETF standards, including EAP, GSS-API, and RADIUS, together with SAML to construct a federation framework that may provide many current IETF protocols, including SSH, NFSv4, and IMAP, with access to federated identity. If successful, it would not only extend identity federation beyond Web-only applications, it would also provide a general trust framework for the Internet.
Other work related to federated authentication that might end up being done in the IETF addresses alternative approaches that have been proposed to bridge Web-centric identity (such as SAML or OpenID) and SASL. The driving force behind many of these efforts is the need to federate messaging, calendaring, and virtual-world protocols, which are among the more important examples of applications where the browser isn't the obvious choice of client. The needs of the mobile market and its focus on apps may well turn out to be a boon for federated identity. While HTTP is often used as a protocol layer, typically through RESTful service calls, the client is not always a browser in the traditional sense..."
See also: the Project Moonshot presentation from IETF 77
Review: Web Services Human Task (WS-HumanTask) Specification Version 1.1
Luc Clément, Dieter König, Vinkesh Mehta, Ralf Mueller (et al, eds), OASIS PRD
Members of the OASIS WS-BPEL Extension for People (BPEL4People) Technical Committee have released Committee Draft 10 / Public Review Draft 04 of "Web Services Human Task (WS-HumanTask) Specification Version 1.1" for public review through July 12, 2010.
"The concept of human tasks is used to specify work which has to be accomplished by people. Typically, human tasks are considered to be part of business processes. However, they can also be used to design human interactions which are invoked as services, whether as part of a process or otherwise. This specification introduces the definition of human tasks, including their properties, behavior, and a set of operations used to manipulate human tasks. A coordination protocol is introduced in order to control the autonomy and life cycle of service-enabled human tasks in an interoperable manner...
One of the motivations for WS-HumanTask was the increasingly important need to allow any application to create human tasks in a service-oriented manner. Human tasks had traditionally been created by tightly coupled workflow management systems (WFMS). In such environments the workflow management system managed the entirety of a task's lifecycle, an approach that provided no means to affect a task's lifecycle from outside the workflow management environment (other than for a human to actually carry out the task). Particularly significant was the inability of applications to create a human task in such tightly coupled environments...
The component within a WFMS typically responsible for managing the lifecycle of a task (also known as a workitem) is called a Workitem Manager. Using this approach, the WFMS no longer incorporates a workitem manager but rather interacts with a Task Processor. In this architecture the Task Processor is a separate, standalone component exposed as a service, allowing any requestor to create tasks and interact with them. It is the Task Processor's role to manage its tasks' lifecycle and to provide the means to 'work' on tasks. By separating the Task Processor from the WFMS, tasks can be used in the context of a WFMS or of any other WS-HumanTask application (also referred to as the Task Parent). A special case of a business process acting as the Task Parent of a human task is described by the BPEL4People specification..."
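The separation described above can be sketched as a toy state machine. The class, state names, and operations below are a simplified subset inspired by the WS-HumanTask lifecycle, for illustration only; they are not the spec's actual service interface.

```python
# Sketch: a standalone Task Processor that owns the task lifecycle, so that
# any requestor (a Task Parent) can create and drive tasks via service calls,
# rather than a tightly coupled workitem manager inside one WFMS.

class TaskProcessor:
    # Simplified transitions; the real spec defines a richer lifecycle.
    TRANSITIONS = {
        ("Ready", "claim"): "Reserved",
        ("Reserved", "start"): "InProgress",
        ("InProgress", "complete"): "Completed",
        ("InProgress", "fail"): "Failed",
    }

    def __init__(self):
        self.tasks = {}

    def create_task(self, task_id, subject):
        """Any application may create a task -- the decoupling the spec introduces."""
        self.tasks[task_id] = {"subject": subject, "state": "Ready"}

    def operate(self, task_id, op):
        """Apply a lifecycle operation and return the resulting state."""
        task = self.tasks[task_id]
        key = (task["state"], op)
        if key not in self.TRANSITIONS:
            raise ValueError(f"illegal {op!r} in state {task['state']!r}")
        task["state"] = self.TRANSITIONS[key]
        return task["state"]

tp = TaskProcessor()
tp.create_task("t1", "Approve expense report")
for op in ("claim", "start", "complete"):
    print(op, "->", tp.operate("t1", op))
```

The point of the sketch is the architectural split: the Task Parent only calls `create_task` and lifecycle operations; the state machine lives entirely in the Task Processor.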
See also: the OASIS WS-BPEL Extension for People (BPEL4People) TC
New IETF Draft: Link Extension to iCalendar
Michael Douglass (ed), IETF Internet Draft
Members of the IETF Calendaring and Scheduling Standards Simplification (CALSIFY) Working Group have published an initial level -00 Internet Draft for the Standards Track specification Link Extension to iCalendar. This specification introduces a new iCalendar property LINK to provide ancillary information for iCalendar components.
Overview: "The current iCalendar standard (RFC 5545) lacks a general-purpose method for referencing additional, external information relating to calendar components. This document proposes a method for referencing typed external information that can provide additional information about an iCalendar component (such as a VCARD). The method is general purpose and may be used anywhere the need to reference additional information arises. The new LINK property is closely aligned with the Link header defined in the IETF Internet Draft 'Web Linking', which specifies relation types for Web links, defines a registry for them, and defines the use of such links in HTTP via the Link header field...
The LINK property defines a typed reference or relation to external metadata or related resources. By providing type and format information as parameters, clients and servers are able to discover interesting references and make use of them, perhaps for indexing or for presenting interesting links to the user. Many of these relations are designed to handle common use cases in event publication. It is generally important to provide information about the organizers of such events. Sponsors also wish to be referenced in a prominent manner. In social calendaring it is often important to identify the active participants in the event, for example a school sports team, and the inactive participants, for example the parents. The property will also allow references to other data that has a time component. For example, in the power industry it allows the creation of schedules of power usage linked to related information about the amount and cost.
For example, the RFC 5545 LOCATION property provides only an unstructured text value for specifying the location where an event (or 'TODO' item) will occur. This is inadequate for use cases where structured location information (e.g. address, region, country, postal code) is required or preferred, and it limits widespread adoption of iCalendar in those settings. Using LINK, structured information about the venue, such as address, city, region/state, and postal code, can be communicated, perhaps using a VCARD object. Servers and clients can retrieve the vCard object when storing the event and use it to index by geographic location. As another example, a calendar item can reference a video feed for the event. This provides event publishers with a means to attract consumers to their sites while providing a service directly accessible from the users' calendar client..."
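As a rough sketch of how such a component might look on the wire, the following Python snippet emits a VEVENT carrying a hypothetical LINK property that points to a vCard with the structured venue address. The parameter names (FMTTYPE, REL), the 'describedby' relation, and all URLs are assumptions for illustration; consult the -00 draft for the actual syntax.

```python
# Sketch: a VEVENT using a hypothetical LINK property to reference a vCard
# that carries the structured venue address RFC 5545 LOCATION cannot express.

def fold(line, limit=75):
    """Fold a content line per RFC 5545: continuation lines begin with a space."""
    out = []
    while len(line) > limit:
        out.append(line[:limit])
        line = " " + line[limit:]
    out.append(line)
    return "\r\n".join(out)

event_lines = [
    "BEGIN:VEVENT",
    "UID:20100701-1234@example.com",
    "DTSTART:20100815T190000Z",
    "SUMMARY:Season opener",
    # Unstructured text location, as RFC 5545 allows today:
    "LOCATION:Riverside Stadium",
    # Hypothetical LINK to a vCard carrying the structured address:
    'LINK;FMTTYPE=text/vcard;REL="describedby":http://example.com/venues/riverside.vcf',
    "END:VEVENT",
]

ical = "\r\n".join(fold(l) for l in event_lines)
print(ical)
```

A client storing this event could dereference the linked vCard and index the event by the venue's city or postal code, which the plain LOCATION text does not support.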
See also: the IETF Calendaring and Scheduling Standards Simplification (CALSIFY) Working Group
ISOC Panel Notes Rise in IPv6-Related Activity
Carolyn Duffy Marsan, IETF Journal
"Momentum surrounding IPv6 is picking up, and IETF participants should be ready for it to snowball soon, according to an Internet Society (ISOC) panel held in Anaheim, California, during the IETF meeting. Leslie Daigle, chief Internet technology officer at ISOC, said she saw an increase in IPv6-related activity during 2009. She pointed out that Japan published its IPv6 action plan last year, while the U.S. government required IPv6 in its acquisition regulations. Australia moved the deadline for transitioning its whole government to IPv6 up to 2012. Leslie also said that such Internet service providers (ISPs) as Hurricane Electric, Verizon, and Comcast were stepping up their efforts to deploy IPv6...
Geoff Huston, chief scientist at APNIC and a longtime IETF participant, said he's been trying to measure IPv6 deployment. He researched three sets of data: Border Gateway Protocol table entries, DNS queries, and dual-stack Web server access. He said the number of routing table entries for IPv6 grew from 1,000 to 3,000 between 2008 and 2010... Geoff said it was hard to quantify IPv6 activity by looking at DNS data, but by studying Web server ratios, he estimated that IPv6 represents 1 percent of Internet traffic today...
Jason Livingood, executive director of Internet Systems Engineering at Comcast, said customer response to the ISP's announcement of IPv6 trials this year has been very strong. Comcast is testing three IPv6 transition mechanisms developed by the IETF: 6RD, dual stack lite, and native dual stack over cable and fiber.
Panelist David Temkin, network engineering manager at Netflix, said he was surprised at how easy it has been to deploy IPv6: 'We rely on a CDN (content delivery network) for the bulk of our movie streaming. We host our own website and most of the content that goes behind that. Both the internal integration of our website and our corporate network and the external integration with Limelight for an IPv6 CDN was very straightforward'..."
See also: the Wikipedia IPv6 description
Heartland Ramps Up First End-To-End Encryption
Ellen Messmer, Network World
"Heartland Payment Systems, the victim last year of a massive data breach of sensitive card data, vowed after that devastating event to develop new security gear based on end-to-end encryption between itself and its merchants to prevent such a breach from occurring again. That's now taking shape, but slowly.
There is as of yet no end-to-end encryption requirement for debit- and credit-card processing, though the Payment Card Industry (PCI) Security Standards Council, which sets technical standards used by payment processors and merchants, is expected to weigh in on that topic in its upcoming PCI standard in October, 2010...
Unwilling to delay action after last year's devastating discovery of a data breach that has so far cost it well over $100 million in fines and associated costs, Heartland has spearheaded its own multimillion-dollar end-to-end encryption technology effort to keep cybercriminals at bay...
Heartland CEO Bob Carr says the definition of end-to-end encryption may end up varying, but in Heartland's case it means protecting card data, particularly the track data, from the moment it is swiped at the merchant to the entry point of Heartland's network, and keeping it encrypted as it passes through Heartland's network. However, the encryption currently stops at the card brands, such as Visa and MasterCard, and the data is not encrypted on through to the banks. Carr thinks the most vulnerable points that hackers will try to exploit are the interconnections between merchant and payment processor, but he acknowledges that as the industry evolves to better protect these routes, hackers will undoubtedly look for the weakest link in the chain..."
See also: Cryptographic Key Management
Deriving New Business Insights With Big Data
Stephen Watt, IBM developerWorks
"Emerging capabilities to process vast quantities of data are bringing about changes in technology and business landscapes. This article examines the drivers, the new landscape, and the opportunities available to analytics with Apache Hadoop.
From an enterprise standpoint, all of this information has been getting increasingly difficult to store in traditional relational databases and even data warehouses. This growth raises difficult questions for practices that have been in place for years. For instance: How does one query a table with a billion rows? How can one run a query across all of the logs on all of the servers in a data center? Further compounding the issue, much of the information that needs to be processed is unstructured or semi-structured text, which is difficult to query.
When data exists in this quantity, one of the processing limitations is that it takes a significant amount of time to move the data. Apache Hadoop has emerged to address these concerns with its unique approach of moving the work to the data and not the other way around. Hadoop is a cluster technology comprising two separate but integrated runtimes: the Hadoop Distributed File System (HDFS), which provides redundant storage of data; and map/reduce, which allows user-submitted jobs to run in parallel, processing the data stored in the HDFS. Although Hadoop is not well suited to every scenario, it provides clear performance benefits..."
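The map/reduce model itself can be illustrated without a cluster. The following Python sketch runs a word count, the canonical example, through explicit map, shuffle, and reduce phases; a real Hadoop job would instead ship these functions to the nodes holding the HDFS blocks, which is the "move the work to the data" idea described above.

```python
# Sketch of the map/reduce model Hadoop implements, in plain Python.
# The shuffle/sort stage between the two phases is simulated with a dict.

from collections import defaultdict

def map_phase(line):
    """Emit (word, 1) pairs -- the 'map' side of a word count."""
    for word in line.lower().split():
        yield word, 1

def reduce_phase(word, counts):
    """Sum all counts for one key -- the 'reduce' side."""
    return word, sum(counts)

def word_count(lines):
    shuffled = defaultdict(list)          # simulated shuffle: group by key
    for line in lines:
        for word, n in map_phase(line):
            shuffled[word].append(n)
    return dict(reduce_phase(w, c) for w, c in shuffled.items())

print(word_count(["big data big cluster", "big jobs"]))
# {'big': 3, 'data': 1, 'cluster': 1, 'jobs': 1}
```

Because each map call sees only one line and each reduce call only one key, both phases can run in parallel across a cluster, which is what gives Hadoop its scalability.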
What is Hadoop? [From the web site:] "The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing. Hadoop includes several sub-projects, including (1) Hadoop Common: The common utilities that support the other Hadoop subprojects; (2) Chukwa: A data collection system for managing large distributed systems; (3) HBase: A scalable, distributed database that supports structured data storage for large tables; (4) HDFS: A distributed file system that provides high throughput access to application data; (5) Hive: A data warehouse infrastructure that provides data summarization and ad hoc querying; (6) MapReduce: A software framework for distributed processing of large data sets on compute clusters; (7) Pig: A high-level data-flow language and execution framework for parallel computation; (8) ZooKeeper: A high-performance coordination service for distributed applications..."
See also: the Apache Hadoop Project
IETF Last Call Public Review for Web Host Metadata Specification
Eran Hammer-Lahav (ed), IETF Internet Draft
The Internet Engineering Steering Group (IESG) has received a request to consider Web Host Metadata [I-D draft-hammer-hostmeta-13.txt] as an IETF Proposed Standard. The IESG plans to make a decision in the next few weeks, and solicits final comments on this action. Please send substantive comments to the IETF mailing lists by 2010-07-23. The editor notes: 'This is the conclusion of the discovery work; after many drafts and proposals, this is the last draft to be submitted. It includes the surviving parts of the original LRDD proposal (Link-based Resource Descriptor Discovery), which has been abandoned.'
The "Web Host Metadata" specification describes a method for locating host metadata as well as information about individual resources controlled by the host. Web-based protocols often require the discovery of host policy or metadata, where 'host' is not a single resource but the entity controlling the collection of resources identified by Uniform Resource Identifiers (URIs) with a common URI host, per RFC 3986. While Web protocols have a wide range of metadata needs, their metadata is often concise, has simple syntax requirements, and can benefit from being stored in a common location used by other related protocols.
Because there is no URI or representation available to describe a host, many of the methods used for associating per-resource metadata (such as HTTP headers) are not available. This often leads to the overloading of the root HTTP resource (e.g. 'http://example.com/' ) with host metadata that is not specific or relevant to the root resource itself.
This memo registers the well-known URI suffix 'host-meta' in the Well-Known URI Registry established by [RFC5785], and specifies a simple, general-purpose metadata document format for hosts, to be used by multiple web-based protocols. In addition, there are times when a host-wide scope for policy or metadata is too coarse-grained. host-meta provides two mechanisms for providing resource-specific information: (1) Link Templates: links using a URI template instead of a fixed target URI, providing a way to define generic rules for generating resource-specific links by applying the individual resource URI to the template. (2) Link-based Resource Descriptor Documents (LRDD, pronounced 'lard'): descriptor documents providing resource-specific information, typically information that cannot be expressed using link templates. LRDD documents are linked to using link templates with the 'lrdd' relation type..."
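A link template can be illustrated in a few lines of Python. The template value and hostnames below are hypothetical; the `{uri}` placeholder and the 'lrdd' relation follow the pattern described in the draft.

```python
# Sketch: expanding a host-meta Link Template for one specific resource.
# A client that fetched /.well-known/host-meta and found this template
# could generate the resource-specific LRDD link as shown.

from urllib.parse import quote

def expand_template(template, resource_uri):
    """Substitute the percent-encoded resource URI into a link template."""
    return template.replace("{uri}", quote(resource_uri, safe=""))

# A host-meta document for example.com might carry a template like this,
# telling clients where to find per-resource LRDD descriptors:
lrdd_template = "http://example.com/describe?uri={uri}"

print(expand_template(lrdd_template, "http://example.com/photos/42"))
# http://example.com/describe?uri=http%3A%2F%2Fexample.com%2Fphotos%2F42
```

One template in the host-wide document thus yields a distinct descriptor link for every resource, which is how host-meta avoids the coarse-grained, host-only scope the text mentions.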
W3C Publishes Report on Model-Based UI Workshop
Dave Raggett, Blog
"The published report from the W3C Workshop on Future Standards for Model-Based User Interfaces is now available. The workshop took place in central Rome on 13-14 May 2010, and focused on ideas for making it easier to create Web applications that can be delivered across many kinds of devices and that adapt dynamically to the context.
To achieve this, it is necessary to separate out different kinds of design concerns, and this is where models come in. There has been a steady stream of research work in this area for many years (including the W3C MBUI Incubator Group), and the workshop was held to bring together researchers to examine whether it is now timely to launch standards work. The workshop participants recommended that W3C consider starting a new Working Group on meta-models as a basis for exchange between different markup languages for model-based authoring tools. We hope to make a start on this later this year with help from the EU Serenoa project..."
From the Workshop report: "Web application developers face increasing difficulties due to wide variations in device capabilities, in the details of the implementation languages they support, the need to support assistive technologies for accessibility, the demand for richer user interfaces, the suites of programming languages and libraries, and the need to contain costs and meet challenging schedules during the development and maintenance of applications... Research work on model-based design of context-sensitive user interfaces has sought to address the challenge of reducing the costs for developing and maintaining multi-target user interfaces through a layered architecture that separates out different concerns: Application task models, data and meta-data; Abstract Interface—device and modality independent, e.g. select 1 from N; Concrete Interface—device and/or modality dependent, e.g. use of radio buttons, but implementation language independent; Implementation on specific devices with wide variations in hardware and software capabilities—e.g. HTML+JavaScript, SVG, Java, .NET or Flash... This architecture focuses on design and separates off the implementation challenges posed by specific delivery channels...
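The layered separation the report describes can be sketched in a few lines: one abstract interactor ("select 1 from N") rendered differently per concrete target. The renderer, target names, and markup below are illustrative only, not drawn from any standard.

```python
# Sketch: an abstract, modality-independent interactor mapped to different
# concrete interfaces -- radio buttons for HTML, a spoken menu for voice.

def render_select_one(options, target):
    """Map the abstract 'select 1 from N' interactor to a concrete widget."""
    if target == "html":                      # graphical modality: radio buttons
        items = "".join(
            f'<label><input type="radio" name="choice" value="{o}">{o}</label>'
            for o in options)
        return f"<fieldset>{items}</fieldset>"
    if target == "voice":                     # voice modality: spoken menu
        return "Say one of: " + ", ".join(options)
    raise ValueError(f"unknown target: {target}")

print(render_select_one(["red", "green"], "voice"))
# Say one of: red, green
```

The design point is that the application author states only the abstract interactor and its options; the choice of radio buttons versus a spoken menu is deferred to the concrete-interface layer.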
The Workshop's final session focused on next steps, and the opportunity for launching new standards work at W3C. The main focus should be on meta-models as a basis for exchange between different markup languages at different layers in the Cameleon reference framework. This should address the needs for adaptation, validation, accessibility and multimodal interaction. Further considerations include migratory user interfaces, and user interfaces involving multiple devices and distributed applications. A key target for the final UI would be HTML5, SVG and associated JavaScript libraries, e.g. jQuery and Dojo. Target devices would include mobile, notepad, desktop, and large screens on tables or walls, together with gestural input..."
See also: the W3C Report
Creating Mobile Web Applications with HTML 5: Canvas, CSS3, and More
Michael Galpin, IBM developerWorks
"HTML 5 comes with plenty of new features for mobile Web applications, including visual ones that usually make the most impact. Canvas is the most eye-catching of the new UI capabilities, providing full 2-D graphics in the browser. In this article you learn to use Canvas as well as some of the other new visual elements in HTML 5 that are more subtle but make a big difference for mobile users...
The article is a whirlwind tour of many of the new UI-related features in HTML 5, from new elements to new styles to the drawing canvas. With a few notable exceptions at the end, these features are all available for you to use in the WebKit-based browsers found on the iPhone and on Android-based devices. Other popular platforms like the BlackBerry and Nokia smartphones are getting more powerful browsers that leverage the same technologies you have looked at in this article.
As a mobile Web developer you have the opportunity to target a wide range of users with visual features more powerful than anything you have ever had access to with HTML, CSS, and JavaScript on desktop browsers. The previous four parts of this series talked about many other new technologies (like geolocation and Web Workers) that are available to you on these amazing new mobile browsers. The mobile Web is not some weaker version of the Web you have programmed for years; it is a more powerful version full of possibilities..."
See also: the HTML5 specification
WSO2 Adds Rules Server to SOA Platform
Paul Krill, InfoWorld
Open source enterprise SOA vendor WSO2 has launched WSO2 Business Rules Server for building business rules within SOA. Based on the open source Drools business rules management system, the product separates business logic from infrastructure code. Business rules can be encapsulated in more accessible forms so that they remain accurate and reflect current business needs, WSO2 said. Drools developers can integrate rules into SOA implementations... Pluggable rules engines in Business Rules Server are based on the Java Specification Request (JSR) 94 rules engine API standard; the Drools engine is provided as the default engine. Other features of Business Rules Server include a wizard interface, a management console, SOA interoperability, and quality of service. WS-* and REST Web services are supported. A rule repository offers versioning and rollback, governance, and lifecycle management.
From the announcement: "WSO2 BRS is the newest addition to WSO2 Carbon, the award-winning, open source middleware platform, which uses the highly regarded OSGi framework to provide a flexible, modular architecture. By adding the rule functionality of WSO2 BRS, rules become a first-class component of the complete WSO2 Carbon platform. As a result, many SOA patterns can benefit from rules, for example: (1) Incorporating rule-based logic into workflows run by the WSO2 Business Process Server (WSO2 BPS). (2) Using rule-based message mediation within the WSO2 Enterprise Service Bus (WSO2 ESB). (3) Augmenting a governance scheme with rules using the WSO2 Governance Registry. (4) Targeting and filtering alerts within the WSO2 Business Activity Monitor (BAM) using rules. (5) Implementing Web services backed by a rule set. Like all WSO2 middleware products, WSO2 BRS runs on the common Carbon framework, which provides enterprise-class management, security, clustering, logging, statistics, and tracing, along with support for both on-premise and cloud deployments...
WSO2 BRS provides a smooth process for exposing rules as services within an SOA. Key features include: a wizard interface for easily exposing a rule as a service; a management console, which makes all WSO2 Carbon service management features available for rule services; standard SOA interoperability and Quality of Service (QoS), including secure, reliable WS-* and REST services; a rule repository based on WSO2 Registry, including versioning and rollback, governance, and lifecycle management; Web 2.0 community features, including the ability to share, tag, comment on, and rate rules; and pluggable rule engines based on the JSR-94 rule engine API standard, with the popular Drools engine provided as the default..."
See also: the WSO2 announcement
Sponsors
XML Daily Newslink and Cover Pages sponsored by:
IBM Corporation | http://www.ibm.com |
ISIS Papyrus | http://www.isis-papyrus.com |
Microsoft Corporation | http://www.microsoft.com |
Oracle Corporation | http://www.oracle.com |
Primeton | http://www.primeton.com |
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/