XML Daily Newslink. Tuesday, 19 February 2008

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
SAP AG http://www.sap.com



Protect Your Project Zero Applications with OpenID
Todd Kaplinger and Gang Chen, IBM developerWorks

Access control-based security of application resources is one of the core features of Project Zero. The OpenID Foundation describes OpenID as an open, decentralized, free framework for user-centric digital identity. OpenID takes advantage of existing Internet technology (URI, HTTP, SSL, Diffie-Hellman) and recognizes that people are already creating identities for themselves, whether at a blog, a photo stream, or a profile page. With OpenID you can easily transform one of these existing URIs into an account you can use at sites that support OpenID logins. Project Zero adopted OpenID as part of its security offering. In this article, the third and final part of the series, you learn about Project Zero security and how to leverage OpenID authentication, define security rules for the application, and extend a user registry... OpenID provides increased flexibility for application deployment by enabling applications to leverage third-party authentication providers for handling authentication. OpenID providers have become very common as more users want a single profile across multiple sites for blogs, wikis, and other social networking activities. Additionally, many Web sites do not want to maintain, or require users to repeatedly provide, the same profile information just to confirm that user credentials are valid. We hope this final article in the series has helped you learn how to use OpenID in the Project Zero platform to achieve decentralized authentication, and that the entire series has helped you understand best practices for building the all-important security features into your Zero applications. As a developer of fast-paced, user-driven Web 2.0 applications, you know how vital security is to both your customers and your business.
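
To make the OpenID redirect step concrete, here is a minimal Java sketch (not taken from the article) that builds an OpenID 2.0 checkid_setup authentication request by hand. The provider endpoint, claimed identifier, return URL, and realm are all hypothetical, and Project Zero's own configuration-driven handling of this flow is not shown.

    import java.io.UnsupportedEncodingException;
    import java.net.URLEncoder;

    // Illustrative only: builds the kind of OpenID 2.0 authentication request a
    // relying party redirects the browser to. All URLs below are hypothetical.
    public class OpenIdRedirect {
        public static String buildCheckIdSetupUrl(String opEndpoint, String claimedId,
                                                   String returnTo, String realm)
                throws UnsupportedEncodingException {
            String enc = "UTF-8";
            StringBuilder url = new StringBuilder(opEndpoint);
            url.append("?openid.ns=").append(URLEncoder.encode("http://specs.openid.net/auth/2.0", enc));
            url.append("&openid.mode=checkid_setup");
            url.append("&openid.claimed_id=").append(URLEncoder.encode(claimedId, enc));
            url.append("&openid.identity=").append(URLEncoder.encode(claimedId, enc));
            url.append("&openid.return_to=").append(URLEncoder.encode(returnTo, enc));
            url.append("&openid.realm=").append(URLEncoder.encode(realm, enc));
            return url.toString();
        }

        public static void main(String[] args) throws Exception {
            // Hypothetical provider endpoint discovered from the user's identifier URI.
            System.out.println(buildCheckIdSetupUrl(
                    "https://openid.example.org/auth",
                    "http://alice.example.org/",
                    "http://zero.example.com/openid/return",
                    "http://zero.example.com/"));
        }
    }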


Conference Event Package Data Format Extension for Centralized Conferencing (XCON)
Gonzalo Camarillo, Srivatsa Srinivasan (et al., eds), IETF Internet Draft

Members of the IETF Centralized Conferencing (XCON) Working Group have published an initial Internet Draft for "Conference Event Package Data Format Extension for Centralized Conferencing (XCON)." The XCON framework defines a notification service that provides updates about a conference instance's state to authorized parties using a notification protocol. The "Data Format Extension" memo specifies a notification mechanism for centralized conferencing which reuses the SIP (Session Initiation Protocol) event package for conference state. Additionally, the notification protocol specified in this document supports all the data defined in the XCON data model (i.e., the data model originally defined in RFC 4575) plus all the extensions, as well as a partial notification mechanism based on XML patch operations. Section 5.4 provides an XML Schema for partial notifications. Generating large notifications to report small changes does not meet the efficiency requirements of some bandwidth-constrained environments; the partial notification mechanism specified in this section is a more efficient way to report changes in conference state. In order to obtain notifications from a conference server's notification service, a client subscribes to the 'conference' event package at the server as specified in RFC 4575. The NOTIFY requests within this event package can carry an XML document in the "application/conference-info+xml" format. Additionally, per this specification, NOTIFY requests can also carry XML documents in the "application/xcon-conference-info+xml" and "application/xcon-conference-info-diff+xml" formats. A document in the "application/xcon-conference-info+xml" format provides the user agent with the whole state of a conference instance. A document in the "application/xcon-conference-info-diff+xml" format provides the user agent with the changes the conference instance's state has undergone since the last notification sent to the user agent.
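
The partial notification idea is easiest to see in code. The following Java sketch is a deliberately simplified stand-in for the XML patch operations the draft builds on: it replaces a single element of a locally cached conference document, selected by an XPath expression. Namespaces and the draft's actual diff vocabulary are omitted, and the method name is hypothetical.

    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathConstants;
    import javax.xml.xpath.XPathFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Node;

    // Simplified illustration of applying one "replace" patch operation to a cached
    // conference-state document. The selector and replacement come from a (hypothetical)
    // partial notification; the real diff format is defined by the Internet Draft.
    public class PartialNotification {
        public static void applyReplace(Document cachedState, String selector, Node replacement)
                throws Exception {
            XPath xpath = XPathFactory.newInstance().newXPath();
            Node target = (Node) xpath.evaluate(selector, cachedState, XPathConstants.NODE);
            if (target == null) {
                // If local state has drifted, fall back to requesting a full notification.
                throw new IllegalStateException("Selector matched nothing; request full state");
            }
            Node imported = cachedState.importNode(replacement, true);
            target.getParentNode().replaceChild(imported, target);
        }
    }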

See also: the IETF Centralized Conferencing (XCON) Working Group


Universal Middleware: What's Happening With OSGi and Why You Should Care
Dave Chappell and Khanderao Kand, SOA World Magazine

The Open Services Gateway Initiative (OSGi) Alliance is working to realize the vision of a "universal middleware" that will address issues such as application packaging, versioning, deployment, publication, and discovery. In this article we'll examine the need for the kind of container model provided by the OSGi, outline the capabilities it would provide, and discuss its relationship to complementary technologies such as SOA, SCA, and Spring. Enterprise software is often composed of large amounts of complex, interdependent logic, making it hard to adapt readily to changing business requirements. You can enable this kind of agility by following a Service Oriented Architecture (SOA) pattern that refactors a system into application modules, grouped by business function, that expose their public functionality as services (interfaces)... we'll explain how an OSGi container would solve these problems. We'll begin with an introduction to the OSGi's solution to the problem, its concepts, and its platform, and then we'll delve into the evolution of the OSGi from its past in the world of embedded devices to its future in enterprise systems. We'll also explain the relationship between the OSGi and other initiatives, containers, and technologies to provide a comprehensive picture of the OSGi from the perspective of software development... Conceptually, both SCA and OSGi provide a composite model for assembling a services-based composite application that can expose some services to the external world as well as invoke external services. In OSGi R4, declarative services define a model to declare a component in XML, capturing its implementation and references. Besides SCA-like component-level information, the OSGi model captures additional information to control runtime behavior. For example, R4 provides bind/unbind methods to track the lifecycle of, or dynamically manage, target services. SCA metadata defines wires between components or from a component to a reference in its composite model...
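
As a small illustration of the bind/unbind methods mentioned above, the class below shows the Java side of a hypothetical OSGi Declarative Services component; the matching OSGI-INF XML descriptor (not shown) would name these methods in its reference declaration so the container can inject and withdraw the target service dynamically.

    import org.osgi.service.log.LogService;

    // Hypothetical Declarative Services component implementation. The component's XML
    // descriptor would declare this class and point its reference at the bind/unbind
    // methods below so the runtime can track the LogService's lifecycle.
    public class GreetingComponent {
        private volatile LogService log;

        // Called by the Declarative Services runtime when a LogService becomes available.
        protected void bindLog(LogService log) {
            this.log = log;
        }

        // Called when that LogService goes away, for example when its bundle is stopped.
        protected void unbindLog(LogService log) {
            if (this.log == log) {
                this.log = null;
            }
        }

        public void greet(String name) {
            LogService current = log;
            if (current != null) {
                current.log(LogService.LOG_INFO, "Hello, " + name);
            }
        }
    }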

See also: the InfoQueue article on OSGi


Codecs, Metadata, and Addressing: Video on the Web Workshop Report
Staff, W3C Announcement

W3C has announced the publication of a report on the W3C Video on the Web Workshop. Thirty-seven organizations discussed video and audio codecs, spatial and temporal addressing, metadata, digital rights management, accessibility, and other topics related to ensuring the success of video as a "first class citizen" of the Web. W3C thanks Cisco for hosting the Workshop, which took place 12-13 December 2007 simultaneously in San Jose, California and Brussels, Belgium. Five major areas of possible work emerged from the Workshop: video codecs, metadata, addressing, cross-group coordination, and best practices for video content. The W3C team will work with interested parties to evaluate the situation with regard to video codecs and determine what, if anything, W3C can do to ensure that codecs, containers, and related technologies for the Web encourage the broadest possible adoption and interoperability. As for metadata, one direction would be to create a Working Group tasked with defining a simple common ontology across the existing standards, a mapping between this ontology and those standards, and a roadmap for extending the ontology, including information related to copyright and licensing rights. W3C should also consider creating a group to investigate the important issue of addressing. The goal would be to: (1) provide a URI syntax for temporal and spatial addressing; (2) investigate how to attach metadata to spatial and temporal regions when using RDF or other existing specifications, such as SMIL or SVG. A group working on guidelines and best practices for effective video and audio content on the Web could also be useful; it would look at the entire delivery chain from producers to end users, covering content delivery, metadata management, accessibility, and device independence. Also available online: forty-two position papers and the Workshop minutes.

See also: the W3C Workshop position papers


RESTful SOA Using XML
Adriaan de Jonge, IBM developerWorks

Service Oriented Architecture (SOA) is used in companies that have large numbers of applications for employees in different departments with varying responsibilities. Many of these applications share functionalities, but the combinations of functionalities, user-interface specifics, and usability requirements differ. Like many enterprise architectures, SOA follows a multitier model, but it doesn't stop there. Within the server, functionalities are divided over separate services. A client can consume one or more of the services, and one service can be consumed by many clients. The result is a loosely coupled architecture that promotes the reusability of existing software. SOA fits particularly well in large companies that have several hundred poorly integrated applications and that need to clean up their IT infrastructures. SOA is a proven practice, capable of working effectively in large environments. Adapters can be used to translate legacy applications into services that integrate as backends to modern applications. Middleware technology is available to orchestrate services and control access to specific functionalities in the service. Because the need for SOAs is highest in this area, vendors of middleware technology typically focus their products on large and heavyweight solutions. Usually, SOA is implemented with the SOAP protocol, described by a Web Services Description Language (WSDL) document. Although many developer tools make it relatively easy to work with SOAP and WSDL, I consider them heavyweight technology, because they're hard to work with if you don't use those tools. You can implement SOA just as well by sending simple messages over Hypertext Transfer Protocol (HTTP). Basically, this is what RESTful Web services do. Representational State Transfer (REST; the name was coined by Roy Fielding) isn't a protocol or technology: it's an architectural style. REST, a lightweight alternative to SOAP, is resource oriented rather than action oriented. It's often summarized as reducing remote procedure calls to GET, POST, PUT, and DELETE requests over HTTP. In my opinion, this is the second important step.
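
To see how plainly such RESTful calls can be made without SOAP tooling, here is a minimal Java client sketch using only java.net.HttpURLConnection; the resource URL and XML payload are hypothetical.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.Scanner;

    // Minimal RESTful client sketch: plain HTTP verbs against a hypothetical order resource.
    public class RestClient {
        public static void main(String[] args) throws Exception {
            URL resource = new URL("http://services.example.com/orders/42"); // hypothetical

            // GET: read the current representation of the resource.
            HttpURLConnection get = (HttpURLConnection) resource.openConnection();
            get.setRequestMethod("GET");
            System.out.println("GET -> " + get.getResponseCode());
            try (Scanner in = new Scanner(get.getInputStream(), StandardCharsets.UTF_8.name())) {
                while (in.hasNextLine()) System.out.println(in.nextLine());
            }

            // PUT: replace the representation with a new XML document.
            HttpURLConnection put = (HttpURLConnection) resource.openConnection();
            put.setRequestMethod("PUT");
            put.setDoOutput(true);
            put.setRequestProperty("Content-Type", "application/xml");
            byte[] body = "<order><status>shipped</status></order>".getBytes(StandardCharsets.UTF_8);
            try (OutputStream out = put.getOutputStream()) {
                out.write(body);
            }
            System.out.println("PUT -> " + put.getResponseCode());
        }
    }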


Web Services Connector for JMX Enters Public Review
Jean-Francois Denise, Blog

JSR 262 has now entered the Public Review phase. New JMX types are supported for MBean operations: NotificationResult, NotificationFilterSupport, AttributeChangeNotificationFilter, and MBeanServerNotificationFilter. This allows the JSR 262 connector to support the new Event Service being defined by JSR 255, which has MBean operations that use those types. JSR 262 defines a way to use Web Services to access JMX instrumentation remotely. It provides a way to use the server part of the JMX Remote API to create a Web Services agent exposing JMX instrumentation, and a way to use the client part of the API to access the instrumentation remotely from a Java application. It also specifies the WSDL definitions used, so that the instrumentation will be available from clients that are not based on the Java platform, or from Java platform clients accessing the instrumentation directly using the JAX-RPC API. The Web Services Connector for Java Management Extensions (JMX) Agents Reference Implementation Project develops and evolves the reference implementation of the JSR 262 specification. JSR 262 defines a connector for JMX that uses Web Services to make JMX instrumentation available remotely; JMX connector semantics are preserved when connecting from a JMX client. The WS-Management standard from the DMTF is the protocol used by the connector. The connector also allows native WS-Management clients to interoperate with a JMX agent; such clients can be written in the Java language or in other languages (C, C#, JavaScript, Perl, and so on). The JMX technology was developed through the Java Community Process (JCP) program and was one of the earliest JSRs (JSR 3). It was subsequently extended by the JMX Remote API (JSR 160). The future evolutions of both JSRs have now been merged into a single JSR to define version 2.0 of the JMX specification (JSR 255). A management interface, as defined by the JMX specification, is composed of named objects called Management Beans, or MBeans. MBeans are registered with an ObjectName in an MBean server. To manage a resource or resources in your application, you create an MBean that defines its management interface and then register that MBean in your MBean server. The content of the MBean server can then be exposed through various protocols, implemented by protocol connectors or protocol adaptors.
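
The MBean workflow described above looks like this in plain Java. The management interface and ObjectName are hypothetical; exposing the MBean server through the JSR 262 Web Services connector rather than the default RMI connector would change only the connector a remote client uses, not this registration code.

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    // Hypothetical management interface: by JMX convention the interface name ends
    // in "MBean" and matches the implementation class name.
    interface CacheStatsMBean {
        int getHitCount();
        void reset();
    }

    class CacheStats implements CacheStatsMBean {
        private int hits;
        public int getHitCount() { return hits; }
        public void reset() { hits = 0; }
        public void recordHit() { hits++; }
    }

    public class RegisterMBean {
        public static void main(String[] args) throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            // Hypothetical ObjectName; the domain and keys are chosen by the application.
            ObjectName name = new ObjectName("com.example.app:type=CacheStats");
            server.registerMBean(new CacheStats(), name);
            System.out.println("Registered " + name
                    + "; now reachable through any connector exposed by this MBean server.");
        }
    }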

See also: the Web Services Connector for JMX Agents Project


Access Control for Cross-site Requests
Anne van Kesteren (ed), W3C Technical Report

W3C announced that the Web Application Formats (WAF) Working Group has released a new snapshot of the editor's draft of "Access Control for Cross-site Requests." The WAF Working Group is part of the Rich Web Clients Activity in the W3C Interaction Domain. The new draft includes recent HTTP header name changes and incorporates a new proposal for limiting the number of requests made with non-GET methods to different URIs that share the same origin. In addition to those technical changes, it makes the previously implicit requirements and use cases explicit by listing them in an appendix, and it contains a short FAQ on design decisions. Summary: "In Web application technologies that follow this pattern, network requests typically use ambient authentication and session management information, including HTTP authentication and cookie information. This specification extends this model in several ways: (1) Web applications are enabled to annotate the data that is returned in response to an HTTP request with a set of origins that should be permitted to read that information by way of the user's Web browser. The policy expressed through this set of origins is enforced on the client. (2) Web browsers are enabled to discover whether a target resource is prepared to accept cross-site HTTP requests using non-GET methods from a set of origins. The policy expressed through this set of origins is enforced on the client. (3) Server-side applications are enabled to discover that an HTTP request was deemed a cross-site request by the client Web browser, through the Access-Control-Origin HTTP header. This extension enables server-side applications to enforce limitations on the cross-site requests that they are willing to service. This specification is a building block for other specifications, so-called hosting specifications, which will define the precise model by which this specification is used. Among others, such specifications are likely to include XMLHttpRequest Level 2, XBL 2.0, and HTML 5 (for its server-sent events feature)." According to the editor's note: "We expect the next draft to go to Last Call so hereby we're soliciting input, once again, from the Forms WG, HTML WG, HTTP WG, TAG, Web API WG, and Web Security Context WG..."
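
Server-side enforcement, sketched below as a hypothetical Java servlet, hinges on the Access-Control-Origin request header named in the draft. The response header written here follows later naming conventions and is shown purely for illustration; the draft's header names were still changing at the time, and the trusted origin is made up.

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical servlet: uses the Access-Control-Origin request header (as described
    // in the draft summarized above) to decide whether to service a cross-site request.
    public class CrossSiteResourceServlet extends HttpServlet {
        private static final String TRUSTED_ORIGIN = "http://app.example.org"; // hypothetical

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            String origin = req.getHeader("Access-Control-Origin");
            if (origin == null) {
                // No cross-site marker: treat as a same-site request and serve normally.
                resp.getWriter().println("<data>full response</data>");
            } else if (TRUSTED_ORIGIN.equals(origin)) {
                // Illustrative response header only; not the draft's exact wire format.
                resp.setHeader("Access-Control-Allow-Origin", origin);
                resp.getWriter().println("<data>response permitted for " + origin + "</data>");
            } else {
                resp.sendError(HttpServletResponse.SC_FORBIDDEN, "Cross-site request not permitted");
            }
        }
    }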

See also: the W3C Web Application Formats Working Group


GRDDL: Gleaning Information From Embedded Metadata
Brian Sletten, DevX.com

This article explains how to put GRDDL-enabled agents to the task of extracting valuable information from machine-processable metadata embedded in documents, courtesy of prevailing semantic web standards. HTML and XHTML traditionally have had only modest support for metadata tags. The World Wide Web Consortium (W3C) is working on including richer metadata support in HTML/XHTML with emerging standards such as RDF with attributes (RDFa), embedded RDF (eRDF), and so on. These standards allow more specific metadata to be attached to different structural and presentation elements, which provides a unified information resource. Gleaning Resource Descriptions from Dialects of Languages (GRDDL, pronounced "griddle") offers a solution to the embedded metadata problem in a flexible, inclusive, and forward-compatible way. It allows the extraction of standard forms of metadata (RDF) from a variety of sources within a document. People usually associate XHTML with GRDDL, but it is worth noting that GRDDL is useful for extracting standardized RDF metadata from other XML structures as well. GRDDL theoretically supports a series of naming conventions and standard transformations, but it does not require everyone to agree on particular markup strategies. It allows you to normalize metadata extraction from documents using RDFa, microformats, eRDF, or even custom markup schemes. The trick is to identify the document as a GRDDL-aware source by specifying an HTML metadata profile; the profile indicates to any GRDDL-aware agents that the standard GRDDL profile applies. Anyone wishing to extract metadata from the document should identify any relevant 'link' elements with a 'rel' attribute of 'transformation' and apply the referenced transformation to the document itself. This approach avoids the conventional problem of screen scraping, where the client has to figure out how to extract information; with GRDDL, the publisher indicates a simple, reusable mechanism to extract relevant information. While there is currently no direct support for GRDDL in any major browser, that situation is likely to change in the near future. Until then, it is not at all difficult to put a GRDDL-aware proxy between your browser and GRDDL-enabled pages, which is what the Piggy Bank Firefox extension from MIT's SIMILE Project does.
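
The core of what a GRDDL-aware agent does can be sketched with the standard JAXP transformation API: fetch the source document, apply the XSLT named by its transformation link, and keep the resulting RDF/XML. Both URLs below are hypothetical; a real agent would discover the stylesheet from the document's profile and its 'link' element with rel='transformation', and the source would need to be well-formed XHTML.

    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    // Sketch of the gleaning step of a GRDDL-aware agent: apply the transformation a
    // document advertises in order to extract RDF from it. URLs are hypothetical.
    public class GrddlGlean {
        public static void main(String[] args) throws Exception {
            String sourceDoc = "http://example.org/people/alice.html";   // GRDDL-enabled page
            String transform = "http://example.org/xsl/hcard2rdf.xsl";   // advertised stylesheet

            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(transform));
            // The output is standard RDF/XML, ready for any RDF toolkit.
            t.transform(new StreamSource(sourceDoc), new StreamResult(System.out));
        }
    }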

See also: the W3C GRDDL Working Group


Sponsors

XML Daily Newslink and Cover Pages are sponsored by:

BEA Systems, Inc. http://www.bea.com
EDS http://www.eds.com
IBM Corporation http://www.ibm.com
Primeton http://www.primeton.com
SAP AG http://www.sap.com
Sun Microsystems, Inc. http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2008-02-19.html
Robin Cover, Editor: robin@oasis-open.org