XML Daily Newslink. Monday, 23 February 2009

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Sun Microsystems, Inc. http://sun.com



Alfresco Drupal CMIS Integration Available
Jeff Potts, Optaros Blog

Optaros, in conjunction with Acquia and Alfresco, has made available a set of Drupal modules that integrates the popular community platform, Drupal, with the leading open source content management repository, Alfresco. We've released the integration as two modules. The first, simply called CMIS API, is a module that knows how to make RESTful CMIS calls to any CMIS-compliant repository and provides Drupal developers with a set of functions to execute those calls. There is nothing vendor-specific in the CMIS API module—it's all based on the proposed Content Management Interoperability Services (CMIS) spec... To do something useful, you need a CMIS-compliant repository... CMIS implementations will have some functions that are specific to each vendor. For example, how authentication is handled is currently vendor-specific. So, we put all of that code in the CMIS Alfresco module. As more and more repositories roll out their CMIS implementations you'll be able to retrieve content from those as well. So the CMIS API module requires one or more CMIS implementation modules, and currently, the CMIS Alfresco module is the only one that exists. As additional repositories become CMIS compliant and people develop implementation modules for those repositories, they'll snap right in. When you put Drupal and Alfresco together, you have something powerful: All of the community features of a Drupal site plus all of the rich content management features of an Alfresco repository... In most organizations, people have more than one content repository and multiple web site technologies. As some people learn the hard way, issuing a top-down edict from IT that dictates the technologies the business must use to build content-centric web apps rarely makes sense. The promise of CMIS is to break the tight coupling between content repositories and the front-end. If every repository in your organization were CMIS enabled, and if every front-end technology could get content out of any CMIS-enabled repository through a simple API call, it wouldn't matter where your content was stored or what front-end technology you wanted to use to get to it. This integration is one step toward that interoperable, open, 'content as a service' ideal...
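For readers curious what a raw CMIS call looks like beneath the Drupal module, the following is a minimal, hedged Java sketch of posting a CMIS query to a repository's REST binding. The endpoint URL, namespace URI, query grammar, and media type are assumptions drawn from the draft specification and Alfresco's early implementation; they vary by vendor and spec revision.

    import java.io.*;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    // Hypothetical endpoint and query syntax based on the draft CMIS REST (AtomPub) binding;
    // actual URLs, namespaces, and media types depend on the repository's CMIS implementation.
    public class CmisQueryExample {
        public static void main(String[] args) throws IOException {
            String endpoint = "http://localhost:8080/alfresco/service/api/query"; // assumption
            String queryDoc =
                "<cmis:query xmlns:cmis=\"http://docs.oasis-open.org/ns/cmis/core/200901\">" +
                "  <cmis:statement>SELECT * FROM document WHERE name LIKE 'press%'</cmis:statement>" +
                "  <cmis:maxItems>10</cmis:maxItems>" +
                "</cmis:query>";

            HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            // Authentication is vendor-specific (as the article notes); Alfresco accepts HTTP Basic.
            String auth = Base64.getEncoder()
                    .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));
            conn.setRequestProperty("Authorization", "Basic " + auth);
            conn.setRequestProperty("Content-Type", "application/cmisquery+xml");

            try (OutputStream out = conn.getOutputStream()) {
                out.write(queryDoc.getBytes(StandardCharsets.UTF_8));
            }
            // The repository answers with an Atom feed of matching entries.
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }

A front-end module such as the CMIS API module would wrap calls like this one and hand the parsed results to the presentation layer, which is exactly the decoupling the article describes.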

See also: the CMIS Drupal-Alfresco Integration project


Building Consensus Around ODF 1.2
Doug Mahugh, Blog

It's been interesting to be a part of the ODF TC and learn about how an OASIS working group handles the maintenance work for an evolving standard. I've also been involved in INCITS V1, Ecma TC45, and more recently SC34 WG4, and each group has its own style, based on the governing rules, the personalities of the members and committee chairs, the nature of the work to be done, and other factors. ODF 1.2 will be the next major deliverable from the ODF TC, and co-chairs Rob Weir (IBM) and Michael Brauer (Sun) have been leading discussions of various open proposals on the email reflector and in the weekly TC phone calls. Much progress has been made, and there are also many open issues remaining to be resolved. One topic that has been debated extensively is the question of how to handle conformance in ODF 1.2... The ODF TC has been discussing how the conformance clause of ODF might be modified for the 1.2 release to meet this requirement. This topic has been discussed in numerous weekly phone calls, and has become the most-discussed topic on the email reflector since I joined the ODF TC last year. There are currently over 200 emails on this topic in January and February of 2009 from 14 different TC members, including editor Patrick Durusau, well-known standards participants from South Africa (Bob Jolliffe), Brazil (Jomar Silva), Czech Republic (Jirka Kosek) and the US (Dennis Hamilton), ODF implementers (IBM, Sun, Microsoft, Novell, KOffice, Nokia, Adobe, Gnumeric), and others. Although the debate has been vigorous at times, it has all been conducted in a cooperative and respectful spirit of collaboration, and I'm sure I'm not the only person who has learned quite a bit from the discussion. I'll describe my perspective on the conformance debate in another post soon, but for today I just wanted to comment on the process we're following, and how this debate is (hopefully) leading to consensus on this topic. [Note: see the associated followup Blog comments]

See also: the OASIS Open Document Format Interoperability and Conformance (OIC) TC


Open Source Schematron 2009 Released
Rick Jelliffe, O'Reilly Technical

A new version of the open source ISO Schematron validator is out now, available at Schematron.com. Schematron is a validation language for making assertions about the presence or absence of patterns in XML documents; it is the most powerful of the standard schema languages, and uses XPath. There are two distributions. One is for XSLT1-hosted implementations, and another for XSLT2-hosted implementations (such as SAXON 9). The XSLT2-hosted implementations make it possible to use XPath2, which has much more grunt than XPath1. The distributions include versions of the popular 'iso_schematron_message' and 'iso_svrl' validators; in the next few weeks, the other common validators will also be ported and included. The two distributions make validation more straightforward. Both distributions are believed to be complete and correct ISO Schematron implementations. Both are based on a four-stage XSLT pipeline: an optional macro-processor to handle inclusion expansion (it also handles the inclusions required by several other schema languages), an optional second macro-processor to handle expansion of abstract patterns, the schema compiler, and finally the validator. The XSLT2 distribution has several important experimental features, which may also make their way into the XSLT1 version. Some of them I am mooting for the upcoming update to ISO Schematron. (1) Localization: All error messages have been removed to an external file, allowing command-line selection of the language of error messages. (2) Properties: Traditionally Schematron has identified the locations of errors using XPaths; in fact, the current implementation allows selection of three different XPath formats. (3) Multi-document patterns: Schematron currently validates a single document; however, it can access other XML documents using the 'document()' function to get information to help validate. The new implementation provides a 'pattern/@documents' attribute which takes an XPath that produces a sequence of URLs, identifying the documents that the pattern applies to. (4) Options. I have also been working on an update to Christopher Lauret's Schematron-in-Ant task. The initial version used schematron-message, then the update used 'schematron-svrl', and so was incompatible. So I am aiming to allow both... This version is marked Candidate Release, as if it were a standard, with the intent that after a couple of months of checking for any newly introduced issues, it would become a "final" release. It seems pretty solid.
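The four-stage pipeline described above can be driven from any XSLT processor. As a rough illustration, here is a hedged Java/JAXP sketch that chains the stages to validate an instance document. The stylesheet file names follow those shipped in recent ISO Schematron distributions, but exact names, paths, and the schema/instance files are assumptions and differ between releases.

    import java.io.File;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMResult;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    // Stylesheet file names are taken from the ISO Schematron XSLT1 distribution; treat
    // them, and the schema/instance file names, as placeholders for your own setup.
    public class SchematronPipeline {
        public static void main(String[] args) throws Exception {
            TransformerFactory tf = TransformerFactory.newInstance();

            // Stage 1: expand inclusions in the Schematron schema.
            DOMResult included = new DOMResult();
            tf.newTransformer(new StreamSource(new File("iso_dsdl_include.xsl")))
              .transform(new StreamSource(new File("myrules.sch")), included);

            // Stage 2: expand abstract patterns.
            DOMResult expanded = new DOMResult();
            tf.newTransformer(new StreamSource(new File("iso_abstract_expand.xsl")))
              .transform(new DOMSource(included.getNode()), expanded);

            // Stage 3: compile the schema into a validating stylesheet that emits SVRL.
            DOMResult compiled = new DOMResult();
            tf.newTransformer(new StreamSource(new File("iso_svrl_for_xslt1.xsl")))
              .transform(new DOMSource(expanded.getNode()), compiled);

            // Stage 4: run the generated validator over the instance document.
            Transformer validator = tf.newTransformer(new DOMSource(compiled.getNode()));
            validator.transform(new StreamSource(new File("instance.xml")),
                                new StreamResult(new File("report.svrl")));
        }
    }

The SVRL report produced in the last step lists each failed assertion with its XPath location, which is what downstream tools (editors, Ant tasks, build reports) consume.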

See also: XML Schema Languages


OMG SOA in Healthcare Conference
Staff, OMG and Health Level Seven Announcement

The OMG Conference Review Committee is seeking proposals for presentations based on "real-world" organizational experiences, evaluations, case studies or research papers relevant to the conference themes of SOA and value in healthcare. The conference will be held on June 2-4, 2009 at the Hyatt Regency O'Hare, Chicago, IL USA. Service Oriented Architecture (SOA) adoption is viewed as a key enabler for the 21st century enterprise due to increased opportunity for productivity and integration, and requires significant changes for both business and IT executives. The goal of the conference, now in its second year, is to raise the level of dialogue about SOA and its use in healthcare, with a focus on its role as a transformation agent that adds organizational value. The focus of the SOA in Healthcare conference is to convey real-world experiences, assembling a community of peers to exchange ideas, discuss what has worked and what has not, and review best practices so that attendees can benefit from lessons learned in real implementations. Not a "tech industry" event, this conference is exclusively healthcare focused, and will highlight the challenges unique to healthcare organizations and emphasize cross-industry solutions that are viable within the healthcare domain. It is targeted primarily at a health-IT savvy audience. Topic areas include: (1) SOA and Business Value—such as Return-on-Investment, Enterprise Architecture, business-IT alignment, agility, etc.; (2) Organizational Adoption and SOA Use—such as SOA planning, program development, business process management, stakeholder involvement, governance, oversight, etc.; (3) Architecture—such as Enterprise Architecture, Integration Architecture, Product Architecture, Interoperability, and Design; (4) Integration, Interoperability, and Legacy Enablement—such as legacy integration, refactoring, off-the-shelf package integration, custom software development, and product development. The conference organizers have assembled an esteemed panel of experts to review all submissions. Review panel members include: Donna Agnew (Vice President and Chief Information Officer, Presbyterian Healthcare Services), Dennis Giokas (Chief Technology Officer, Canada Health Infoway, Inc), John Quinn (Chief Technology Officer, Health Level Seven), and Richard Soley, PhD (Chairman and Chief Executive Officer, OMG; Executive Director, SOA Consortium).

See also: the conference web site


How Entity Extraction is Fueling the Semantic Web Fire
Dan McCreary, O'Reilly Technical

I have been working on several large entity extraction projects in the last few months and I have been very impressed by the scope and depth of some of the new open source entity extraction tools, as well as the robustness of commercial products. I thought I would discuss this since these technologies could start to move the semantic web (Web 3.0) up the hockey stick growth curve. If you are not familiar with Entity Extraction (EE), think of it as pulling the most relevant nouns out of a text document. But EE technologies look at the syntax of each sentence to differentiate nouns from verbs and locate the most critical entities in each document. There are very primitive EE tools in place today, such as the tools that extract InfoBox data from a Wikipedia page. But these are only the beginning. There are many more to come. One of the most significant developments in Entity Extraction is the Apache UIMA project. UIMA (Unstructured Information Management Architecture) is a massive project being undertaken by the Apache foundation to allow non-programmers to extract entities from free-form text using an innovative high-performance architecture for doing document analysis. The best way to describe UIMA is as a version of UNIX pipes for entity extraction, but with an important twist: the data does not have to physically move between processes. Annotator processes dance over the documents and out pop documents with high-precision annotations. The second demonstration you can try is the Thomson Reuters ClearForest/OpenCalais demos. This is a commercial product with an excellent Firefox plugin called Gnosis that can be used to see the incredible progress that Entity Extraction has made in the last few years. The Firefox add-on does a great job of extracting precise entities from any free-form text. The results can also be formatted in RDF or other formats... When you combine EE with a native XML database such as eXist-db.org, a search application can be created with just a few pages of XQuery. In fact, it is possible that an XQuery module will be created to automate the EE process... I think that the newer generation of EE will add an incredible amount of fuel to the semantic web fire. This is going to ignite several new business strategies.
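To make the "annotators dance over the documents" idea concrete, here is a minimal, hedged Java sketch of a UIMA analysis component. Production annotators use dictionary or statistical extraction and declare custom annotation types in a type system descriptor; this toy version simply marks runs of capitalized words, and the class name and pattern are illustrative assumptions rather than part of any shipping tool.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    import org.apache.uima.analysis_component.JCasAnnotator_ImplBase;
    import org.apache.uima.analysis_engine.AnalysisEngineProcessException;
    import org.apache.uima.jcas.JCas;
    import org.apache.uima.jcas.tcas.Annotation;

    // A toy "entity" annotator: marks capitalized word runs as generic Annotations.
    // A real component would define its own type (e.g. PersonName) in a type system
    // descriptor; the built-in Annotation type keeps this sketch self-contained.
    public class NaiveEntityAnnotator extends JCasAnnotator_ImplBase {

        private static final Pattern CAPITALIZED_RUN =
                Pattern.compile("\\b[A-Z][a-z]+(?:\\s+[A-Z][a-z]+)*\\b");

        @Override
        public void process(JCas jcas) throws AnalysisEngineProcessException {
            String text = jcas.getDocumentText();
            Matcher m = CAPITALIZED_RUN.matcher(text);
            while (m.find()) {
                // Record the span in the CAS; downstream consumers read it from the index.
                Annotation entity = new Annotation(jcas, m.start(), m.end());
                entity.addToIndexes();
            }
        }
    }

Chained with other annotators in a UIMA aggregate, components like this pass their annotations along in the shared CAS, which is the "pipes without moving the data" behavior the article highlights.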

See also: OASIS UIMA Version 1.0 being balloted


Rich Internet Applications with Grails
Michael Galpin, IBM developerWorks

Rich Internet Applications (RIAs) promise the dynamism and functionality of desktop applications through the browser. One of the key characteristics is moving your presentation layer to the client and backing it with a robust RESTful service layer on the server. This idea is being popularized with buzzwords like SOUI (Service Oriented User Interface) and SOFEA (Service Oriented Front End Architecture). Many organizations are pursuing a Service Oriented Architecture (SOA). This is often done to make your architecture more agile, allowing your business to evolve more rapidly. Of course, you probably have another pressing initiative in your organization: to modernize your user interfaces into a Rich Internet Application. It can be tough to deal with both buzzwords, SOA and RIA. It turns out, however, that these two things work well together. You can use an SOA design to deploy services to your application servers. You can move all of your presentation logic to the client and leverage a powerful front-end technology such as Flex to create an RIA. REST (Representational State Transfer)-style Web services have gained popularity because of their simple semantics. They are easier to create and to consume. They can use XML, just like SOAP services, but this can be Plain Old XML (POX) with no fancy wrappers and headers like with SOAP. The Grails framework makes it very easy to create this kind of Web service, so let's get started with a Grails domain model. In this article, the first of a two-part series, you will see how simple it is to create a Web service back end using Groovy's Grails Web application framework, and you will hook it up to an RIA developed with Adobe's Flex framework... Grails is a general purpose Web development framework. Most Web applications use a relational database to store and retrieve the data used in an application, and thus Grails comes with a powerful Object Relational Mapping (ORM) technology known as GORM. With GORM, you can easily model your domain objects and have them persisted to a relational database of your choice, without ever dealing with any SQL. GORM uses the popular Hibernate library for generating database-specific and optimized SQL, and for managing the life cycles of domain objects.
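Since the article's server-side code is Groovy and its front end is Flex, here is a hedged Java sketch of the other half of the SOFEA picture: a thin client pulling Plain Old XML from a REST endpoint and walking the result with the standard DOM API. The URL, the domain class, and the element layout are illustrative assumptions modeled on Grails' default XML rendering, not taken from the article's example application.

    import java.net.URL;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    // Hypothetical endpoint: a Grails controller exposing a "task" domain class
    // at /tasker/task/list.xml, returning one <task> element per persisted row.
    public class PoxClient {
        public static void main(String[] args) throws Exception {
            URL listUrl = new URL("http://localhost:8080/tasker/task/list.xml"); // assumption
            DocumentBuilder builder =
                    DocumentBuilderFactory.newInstance().newDocumentBuilder();
            Document doc = builder.parse(listUrl.openStream());

            // Walk the POX payload: no SOAP envelope, just domain data.
            NodeList tasks = doc.getElementsByTagName("task");
            for (int i = 0; i < tasks.getLength(); i++) {
                Element task = (Element) tasks.item(i);
                System.out.println(task.getAttribute("id") + ": "
                        + task.getTextContent().trim());
            }
        }
    }

The point of the sketch is the simplicity the article claims for POX: the client needs nothing beyond an HTTP fetch and an XML parser, whichever front-end technology it is written in.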


Why Do We Need Distributed OSGi?
Eric Newcomer, InfoQ

As we are achieving a key milestone with the Distributed OSGi project, it seems like a good time to review what's been done so far, to identify the remaining steps, and to talk about why we are doing this in the first place. In November 2008 we published an update to the early release drafts of the design documents (Requests for Comment, or RFCs in OSGi terminology) for the upcoming 4.2 release of the OSGi Specification. This month we released at Apache CXF the reference implementation source code for one of the important new designs for this release, RFC 119, Distributed OSGi. The OSGi Alliance hosted a public workshop in September 2006 to further investigate requirements for a possible enterprise edition. The current release of the OSGi Specification has since become part of Java SE, included via JSR 291, and the question confronted by those of us who attended the workshop was whether the OSGi Specification should also become an alternative to Java EE, and if so, what requirements would need to be satisfied. One of the key requirements was the ability for OSGi services to invoke services running in other JVMs, and to support enterprise application topologies for availability, reliability, and scalability. The current OSGi Specification defines the behavior of service invocations in a single JVM only... To explain where we are in the process, it's helpful to give a little background on how the OSGi Alliance works. Its process is very similar to the Java Community Process. In fact, the OSGi specification started life as JSR 8, and basically still represents the evolution of that original JSR effort. The OSGi process starts with Request for Proposal documents (RFPs) that detail requirements. Once an RFP is approved, one or more Requests for Comment (RFCs) are created with designs that meet the requirements. After an RFC is approved, the specifications are updated to include the design. The RFPs and RFCs are both products of the expert groups, although they tend to be led by individuals or small teams within the group... The other major parts of the upcoming release include various extensions to the core framework, the Spring-derived Blueprint Service component model for developing OSGi services, and various bits of Java EE mapped to OSGi bundles (JTA, JDBC, JPA, JNDI, JMX, JAAS, and Web Apps). The Java EE mapping is not as far along as the enhancements to the core, the Spring/Blueprint work, or the Distributed OSGi work, but a preview is expected to be published along with the R4.2 final release. The past two years have resulted in the Distributed OSGi requirements and design documented in the early release draft and illustrated in the reference implementation code at Apache CXF. This is one of the significant new features of the upcoming OSGi Specification R4.2 enterprise release, due out in mid-2009. Together with extensions to the OSGi core framework, the Spring-derived Blueprint Service component model, and the mapping of key Java EE technologies, the upcoming release represents a major step forward for the OSGi specification and community...
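As a concrete illustration of what exporting a service for invocation from another JVM looks like on the developer's side, here is a hedged Java sketch of registering an ordinary OSGi service with a distribution hint, in the style of the RFC 119 early drafts implemented at Apache CXF. The Greeter interface is invented for the example, and the property key comes from those drafts (later spec revisions renamed such properties), so both should be read as assumptions rather than the final specification.

    import java.util.Dictionary;
    import java.util.Hashtable;

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    // Registers a plain OSGi service and marks it for remote exposure; a distribution
    // provider (e.g. the CXF reference implementation) notices the property and
    // publishes the service so that consumers in other JVMs can call it.
    public class GreeterActivator implements BundleActivator {

        public interface Greeter {
            String greet(String name);
        }

        @Override
        public void start(BundleContext context) {
            Dictionary<String, Object> props = new Hashtable<String, Object>();
            // Property key from the RFC 119 early drafts; treat as an assumption.
            props.put("osgi.remote.interfaces", "*");

            Greeter impl = new Greeter() {
                @Override
                public String greet(String name) {
                    return "Hello, " + name;
                }
            };
            context.registerService(Greeter.class.getName(), impl, props);
        }

        @Override
        public void stop(BundleContext context) {
            // Service registrations are cleaned up automatically when the bundle stops.
        }
    }

The key point is that the service code itself stays a normal in-VM OSGi service; only the registration properties change, which is how the design keeps the single-JVM programming model intact.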


What's Your Digital Lifestyle?
Joe Wilcox, eWEEK Microsoft Watch

"I'm having a midlife digital identity crisis. Perhaps you share the same problem. Maybe we can work through it together. My digital personas and activities spread out over too many places. I don't know what community I belong to anymore. Surely, you know this problem. There's a sense of lost identity, being on AIM, Facebook, Flickr, FriendFeed, Ovi, Tumblr, Twitter, YouTube, Vimeo, Windows Live, Zune and many other services. I've abandoned some services and haven't yet become active on others. Companies and individuals have this sense that the digital lifestyle is larger than life—that people can defy physical laws and be in two or more places at once. Where different digital services intersect/integrate, perhaps that's somewhat true. But if the connection is people, how many can you realistically be close to? How many places can you digitally be at once? Services like Facebook Connect or Google Friend Connect offer some remedy to online social networking silos. But they're only a beginning, I say. I find some services falling into strange silos and others overlapping in weird ways. For example, 95 percent of my Windows Live Messenger contacts work for or are affiliated with Microsoft. AIM is a mishmash of everybody else, including eWEEK colleagues, family, friends, peers and people working for other high-tech companies. Most of my Twitter followers work for high-tech or PR firms. So I do little to no personal tweeting. Why bother? [...] Can you be Fraker Attacker on Xbox, Frugal Juvenile on YouTube and Perry Pissant on USTREAM? That's all without accounting for specialized interests, like photography, role-play gaming, skate boarding or Yo-Yo collecting. Those activities open up other digital and social activities. For example, a Nikon camera owner might join Nikonians or some other photo forum. Do you ever wonder about the mental health of having all these digital personalities? Doctors institutionalize people for split-personality disorders. Do you digitally know who you are?..."


Apple Safari 4 Public Beta Passes Web Standards Project's Acid3 Test
Jim Dalrymple, InfoWorld

Apple has released a public beta of Safari 4, the next generation of its Web browser. Apple touts the Nitro engine at the heart of Safari 4 as running JavaScript over four times as fast as Safari 3, up to 30 times faster than Internet Explorer 7, and three times faster than Firefox. Nitro also 'allows Safari 4 to load HTML pages three times faster than IE 7.' Other new features in Safari 4 include Top Sites, giving users a visual preview of frequently visited pages; Full History Search, to search through titles, web addresses and the complete text of recently viewed pages; Cover Flow, to easily flip through web history or bookmarks; and Tabs on Top, to make tabbed browsing easier. From the announcement: "Apple is leading the industry in defining and implementing innovative web standards such as HTML 5 and CSS 3 for an entirely new class of web applications that feature rich media, graphics and fonts. Safari 4 includes HTML 5 support for offline technologies so web-based applications can store information locally without an Internet connection, and is the first browser to support advanced CSS Effects that enable highly polished web graphics using reflections, gradients and precision masks. Safari 4 is the first browser to pass the Web Standards Project's Acid3 test, which examines how well a browser adheres to CSS, JavaScript, XML and SVG web standards that are specifically designed for dynamic web applications.... Safari for Mac, Windows, iPhone and iPod touch are all built on Apple's WebKit, the world's fastest and most advanced browser engine. Apple developed WebKit as an open source project to create the world's best browser engine and to advance the adoption of modern web standards. Most recently, WebKit led the introduction of HTML 5 and CSS 3 web standards and is known for its fast, modern code base. The industry's newest browsers are based on WebKit, including Google Chrome, the Google Android browser, the Nokia Series 60 browser and Palm webOS." Safari 4 public beta is free and available now for download from Apple's Web site.

See also: the Apple announcement


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation          http://www.ibm.com
Microsoft Corporation    http://www.microsoft.com
Oracle Corporation       http://www.oracle.com
Primeton                 http://www.primeton.com
Sun Microsystems, Inc.   http://sun.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/


