XML Daily Newslink. Wednesday, 07 January 2009

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Oracle Corporation http://www.oracle.com



Let's Talk More about SPML (Service Provisioning Markup Language)
Mark Diodati, Burton Group Blog

Jackson Shaw and James McGovern have been blogging recently about one of my favorite topics: Service Provisioning Markup Language (SPML). I'd like to contribute to the discussion... One thing that organizations using SPML should do is to secure the service from an authentication, authorization, and encryption perspective. In most instances, because the number of SPML requestors and providers (this is terminology specific to SPML) is small, organizations are opting to manually configure the requesting authority and the provisioning service provider with static passwords or certificate lists to establish trust between the provisioning service components. These authentication techniques don't provide authorization services in any meaningful sense. A large SPML implementation requires authorization services to determine the rights of the requesting authority to manage the specific user on the respective provisioning service target. In our opinion, the multi-tenancy (call it cloud-based if you like) use case is an example of a large SPML implementation: one must build the requisite authorization and authentication services to support the provisioning service. SPML's lack of authentication and authorization capabilities highlights the broader issues we see with the emergence of identity services. An authorization service requires authentication services in order to have any utility whatsoever. The authorization and authentication services may be consolidated (one big authorization and authentication service) or discrete (two separate services). One example of a discrete authorization service is an XACML authorization service that leverages the user's SiteMinder SMSESSION ticket for authentication... As for federation and federated provisioning, the lack of provisioning capabilities remains an operational impediment. Several years ago, a Liberty Alliance Technical Expert Group began working on a way to 'harmonize' SPML and SAML. While the services would remain separate 'pipes', the TEG was working on a way to harmonize the user attribute schema across the two services...
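To make the authorization gap concrete, here is a minimal plain-Java sketch of the kind of check the author argues a provisioning service provider must perform before executing a request: given an authenticated requesting authority, an SPML operation, and a provisioning target, decide whether the request may proceed. Everything here (class names, the grant table, the example identifiers) is invented for illustration; SPML itself defines no such authorization model, which is precisely the gap being described.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch only: SPML does not define an authorization model.
// A provisioning service provider would consult something like this (or an
// external policy decision point such as an XACML service) after
// authenticating the requesting authority but before executing the request.
public class ProvisioningAuthorizer {

    // Grants keyed by requesting authority; each entry is "operation@target".
    private final Map<String, Set<String>> grants = new HashMap<String, Set<String>>();

    public void grant(String requestingAuthority, String operation, String target) {
        Set<String> allowed = grants.get(requestingAuthority);
        if (allowed == null) {
            allowed = new HashSet<String>();
            grants.put(requestingAuthority, allowed);
        }
        allowed.add(operation + "@" + target);
    }

    public boolean isPermitted(String requestingAuthority, String operation, String target) {
        Set<String> allowed = grants.get(requestingAuthority);
        return allowed != null && allowed.contains(operation + "@" + target);
    }

    public static void main(String[] args) {
        ProvisioningAuthorizer authz = new ProvisioningAuthorizer();
        authz.grant("hr-system", "addRequest", "corporate-directory");
        System.out.println(authz.isPermitted("hr-system", "addRequest", "corporate-directory"));    // true
        System.out.println(authz.isPermitted("hr-system", "deleteRequest", "corporate-directory")); // false
    }
}

In a real deployment this decision would more plausibly be delegated to a dedicated policy decision point, as the article's XACML example suggests, rather than hard-coded inside the provider.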

See also: the OASIS Provisioning Services TC


Draft TAG Finding: The Self-Describing Web
Noah Mendelsohn (ed), W3C Draft Tag Finding

The editor has announced the publication of an updated version of "The Self-Describing Web" as a draft TAG Finding. W3C created the Technical Architecture Group (TAG) to document and build consensus around principles of Web architecture and to interpret and clarify these principles when necessary, to resolve issues involving general Web architecture brought to the TAG, and to help coordinate cross-technology architecture developments inside and outside W3C. This draft covers most of the changes agreed at a recent TAG face-to-face meeting (with some exceptions noted), including integration and adaptation of the handwritten suggestions provided by Tim Berners-Lee. Document abstract: "The Web is designed to support flexible exploration of information by human users and by automated agents. For such exploration to be productive, information published by many different sources and for a variety of purposes must be comprehensible to a wide range of Web client software, and to users of that software. HTTP and other Web technologies can be used to deploy resource representations that are self-describing: information about the encodings used for each representation is provided explicitly within the representation. Starting with a URI, there is a standard algorithm that a user agent can apply to retrieve and interpret such representations. Furthermore, representations can be what we refer to as grounded in the Web, by ensuring that specifications required to interpret them are determined unambiguously based on the URI, and that explicit references connect the pertinent specifications to each other. Web-grounding ensures that the specifications needed to interpret information on the Web can be identified unambiguously. When such self-describing, Web-grounded resources are linked together, the Web as a whole can support reliable, ad hoc discovery of information. This finding describes how document formats, markup conventions, attribute values, and other data formats can be designed to facilitate the deployment of self-describing, Web-grounded Web content."
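The retrieval step the finding describes can be illustrated with a few lines of standard Java: dereference a URI over HTTP and read the Content-Type header, which names the media type (and, from there, the specification) governing interpretation of the returned representation. This is only a sketch of the first link in that chain of specifications; the URI shown is simply an example.

import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: fetch a representation and read the media type that makes it
// self-describing. Per the finding, the media type registration points the
// agent to the specification that says how to interpret the bytes received.
public class SelfDescribingFetch {
    public static void main(String[] args) throws Exception {
        URL uri = new URL("http://xml.coverpages.org/");  // example URI
        HttpURLConnection conn = (HttpURLConnection) uri.openConnection();
        conn.setRequestMethod("GET");

        System.out.println("HTTP status: " + conn.getResponseCode());
        System.out.println("Media type:  " + conn.getContentType());
        conn.disconnect();
    }
}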

See also: W3C TAG Findings


Apache Commons Project Announces Release of Commons Digester v2.0
Rahul Akolkar, Apache News Online

"The Apache Commons project would like to announce the immediate availability of Commons Digester 2.0. Commons Digester 2.0 is a major release with breaking changes and requires a minimum of JDK 1.5. Details can be found in the release notes. The Digester Component: Many projects read XML configuration files to provide initialization of various Java objects within the system. There are several ways of doing this, and the Digester component was designed to provide a common implementation that can be used in many different projects. Basically, the Digester package lets you configure an XML-to-Java object mapping module, which triggers certain actions called rules whenever a particular pattern of nested XML elements is recognized. A rich set of predefined rules is available for your use, or you can also create your own. Advanced features of Digester include: (1) Ability to plug in your own pattern matching engine, if the standard one is not sufficient for your requirements. (2) Optional namespace-aware processing, so that you can define rules that are relevant only to a particular XML namespace. (3) Encapsulation of Rules into RuleSets that can be easily and conveniently reused in more than one application that requires the same type of processing... The Apache Commons Proper is dedicated to one principal goal: creating and maintaining reusable Java components. The Commons Proper is a place for collaboration and sharing, where developers from throughout the Apache community can work together on projects to be shared by the Apache projects and Apache users. Commons developers will make an effort to ensure that their components have minimal dependencies on other libraries, so that these components can be deployed easily. In addition, Commons components will keep their interfaces as stable as possible, so that Apache users (including other Apache projects) can implement these components without having to worry about changes in the future..."

See also: the Commons Digester project web site


Semantics: Not RDF, but Enrichment, Classification, and Taxonomy
Kurt Cagle, XML.com

Within the realm of computational semantics, there is still a fairly broad disconnect between triple-based semantics (the use of RDF or Turtle notation to create atomic assertions) and the realm of semantics as reflected on the web. I do not expect this to change much in 2009, save perhaps that the gulf between the two will likely just get wider... One area that I feel is poised to really take off in the next year is content enrichment. Enrichment involves taking a collection of text, running a series of rules and contextual filters on the data looking for names, events, and patterns, then encasing this content within specialized XML markup. Depending upon the database, the source, and the service agreements involved, such enrichment performs an invaluable service in establishing the context of a given phrase within an article and, by extension, in providing both an abstract of the content and specialized search for meta-content within a document... It's very likely that in order for the RDF/Semantic Web approach to gain credence in these spaces, ontologists will have to start with these specific industry schemas and develop RDF-based tools that model them. Given that I see XML databases increasingly carrying the load in working with these schemas, this will also likely result, at some point in the not-too-distant future, in a need for a meeting of the minds between the XQuery working group and the SPARQL working group in order to develop a SPARQL analog that can be run in XQuery, probably as a set of optional modular extensions to the language. I don't know if this is on the agenda at the W3C yet, though if it's not, then it's likely we won't see significant traction there until 2011 at the earliest. The other area where there's been something of a 'small s' semantic revolution has been the growing awareness of the intimate link between web navigation and knowledge navigation among both web developers and semantics specialists. As web sites grow, they become more complex, deeper, and far more difficult to maintain in terms of their underlying structure. Ultimately this comes down to a question of classification and partition of the topics within the site itself, and this in turn points to a potential semantic solution for managing large and topically interconnected content. The folksonomy 'tagging' revolution (which I think is probably running out of steam) was a significant first step, but folksonomies are by their nature unstructured and poorly regulated. I think this is going to be the year that a lot of web design and web framework work is going to embrace semantic tools and concepts (the inclusion of RDF support within the taxonomy-heavy Drupal system is a good case in point).
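As a toy illustration of the enrichment idea, the sketch below scans plain text with a deliberately naive pattern for personal names and wraps each match in an XML element so that downstream abstracting and meta-content search can locate it. Real enrichment pipelines apply far richer rule sets and contextual filters; the pattern and the <person> element here are invented for the example.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy enrichment pass: wrap anything that looks like a personal name in
// <person> markup. A deliberately naive stand-in for the rule sets and
// contextual filters used by real enrichment systems.
public class NameEnricher {

    // Naive heuristic: two adjacent capitalized words.
    private static final Pattern NAME = Pattern.compile("\\b([A-Z][a-z]+ [A-Z][a-z]+)\\b");

    public static String enrich(String text) {
        Matcher matcher = NAME.matcher(text);
        return matcher.replaceAll("<person>$1</person>");
    }

    public static void main(String[] args) {
        String input = "Kurt Cagle expects content enrichment to take off this year.";
        System.out.println(enrich(input));
        // -> <person>Kurt Cagle</person> expects content enrichment to take off this year.
    }
}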

See also: the W3C SPARQL Wiki


Banks Ordering Pre-Tagged RFID Computer Gear
Marianne Kolbasuk McGee, InformationWeek

A consortium of financial services companies and technology vendors is in the final phases of a project to create standards for RFID tags that promise to help banks track their IT assets more efficiently and cost-effectively. Technology vendors such as Dell, Hewlett-Packard, IBM, and Sun Microsystems are expected to soon begin shipping computer equipment pre-tagged with RFID chips that meet functional and technology requirements assembled by the Financial Services Technology Consortium (FSTC), a banking industry technology organization. On January 12, 2009, FSTC will conclude the final phases of this RFID standards project, and some vendors are anticipated to begin shipping computer gear with these tags shortly after. The standards for the tags were developed through collaboration between technology vendors and members of FSTC. However, the requirements spelled out by FSTC for the banking industry RFID tags also embrace other established standards, including EPCglobal (Electronic Product Code) specifications, so the banking tags could help other industries more easily track their computer gear, too... While the banking industry might be leading a trend that will convince companies in other sectors to track their IT assets using RFID technology, banks obviously aren't the first to embrace RFID... John Fricke, FSTC's Chief of Staff: "Wal-Mart's use of RFID has reduced costs, so the financial industry has taken a look at it and is also implementing the technology to reduce the costs of tracking its IT assets." Banks are required by law to track and document their IT inventory, he said, a time-consuming and expensive activity; RFID eliminates an enormous amount of the manual work involved in IT asset tracking. Wells Fargo has already implemented more than 100,000 RFID tags on its IT assets, including computer servers, laptops, storage, and networking gear, over the last year. Those tags meet the FSTC standards emerging for the banking industry. However, because computer vendors until now haven't shipped their products with these tags attached, Wells Fargo had to apply the tags manually to each piece of computer equipment. Already, the rollout of RFID tags is making IT asset tracking more efficient and cost-effective at Wells Fargo: in the past, the company relied on bar-code scanners to take IT asset inventory; now, the information is automatically transmitted from the RFID tags to antennas and the data is fed into inventory, order entry, and other systems. Wells Fargo's physical portals also are wired to recognize when shipments of computer gear bearing the tags arrive at Wells Fargo loading sites. This helps ensure that equipment reaches the right locations and isn't misplaced...

See also: Radio Frequency Identification (RFID) Resources and Readings


Selected from the Cover Pages, by Robin Cover

Oracle Beehive Object Model Proposed for Standardization in OASIS ICOM TC

OASIS has announced the submission of a draft charter for a new OASIS Technical Committee to define an integrated collaboration object model supporting a complete range of enterprise collaboration activities. The proposed data model is based upon the Oracle Beehive Object Model (BOM), to be contributed by Oracle to the ICOM TC. The new standard model, interface, and protocol would support contextual collaboration within business processes for an integrated collaboration environment, including communication artifacts (e.g., email, instant messaging, telephony, RSS), teamwork artifacts (such as project and meeting workspaces, discussion forums, real-time conferences, presence, activities, subscriptions, wikis, and blogs), content artifacts (e.g., text and multimedia content, contextual connections, taxonomies, folksonomies, tags, recommendations, social bookmarking, saved searches), and coordination artifacts (such as address books, calendars, and tasks). The charter for the proposed OASIS Integrated Collaboration Object Model for Interoperable Collaboration Services (ICOM) Technical Committee is supported by representatives from Oracle, DERI, Nortel, ESoCE-NET, and Fraunhofer FIT. As proposed, the first TC meeting would be held March 3, 2009, with Martin Chapman (Oracle) as Convenor. Members of the OASIS ICOM TC will produce an Integrated Collaboration Object Model (ICOM) for Interoperable Collaboration Services specification and an associated UML 2.0 model. The TC will also produce non-normative material (which may include models, architectures, and guidelines) for the interoperability protocols to facilitate composite collaboration services for shared workspaces.


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation          http://www.ibm.com
Microsoft Corporation    http://www.microsoft.com
Oracle Corporation       http://www.oracle.com
Primeton                 http://www.primeton.com
Sun Microsystems, Inc.   http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/


