This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation http://www.microsoft.com
- Microsoft Releases Open Tools for Scientific Research
- Ontology Design for Scientific Theories that Make Probabilistic Predictions
- Apache Software Foundation Makes Qpid a Top-Level Project
- CSS Last Call: Selectors Level 3
- It Was 20 Years Ago Today: The Web
- Explaining the Open Web Foundation (OWF): Why a New Process?
- BriefingsDirect Analysts Discuss Solutions For Bringing Human Interactions Into Business Process Workflows
- The BPM Technology Convergence, Part I
- SOA Security: The Basics
- Google Launches Google Friend Connect API
"Information necessary for science to progress can be hard to find. Addressing this challenge for researchers, Microsoft and Creative Commons have announced the release of the Ontology Add-in for Microsoft Office Word 2007 that will enable authors to easily add scientific hyperlinks as semantic annotations, drawn from ontologies, to their documents and research papers. Ontologies are shared vocabularies created and maintained by different academic domains to model their fields of study. This Add-in will make it easier for scientists to link their documents to the Web in a meaningful way. Deployed on a wide scale, ontology-enabled scientific publishing will provide a Web boost to scientific discovery. Science Commons, a division of Creative Commons, is incubating the adoption of semantic scientific publishing through creation of a database of ontologies and development of supporting technical standards and code. Microsoft Research has built a technology bridge to enable the link between Microsoft Office Word 2007 and these ontologies... The Ontology Add-in for Word 2007 is enabled by Microsoft Office Word's Open XML Formats underpinnings and its extensibility in the form of smart tags. Creative Commons is a not-for-profit organization, founded in 2001, that promotes the creative re-use of intellectual and artistic works—whether owned or in the public domain... According to the March 2009 Technology Preview for "Word Add-in For Ontology Recognition"— "The goal of the add-in is to assist scientists in writing a manuscript that is easily integrated with existing and pending electronic resources. The major aims of this project are to add semantic information as XML mark-up to the manuscript using ontologies and controlled vocabularies (using OBO), and to integrate manuscript content with existing public data repositories. 
As part of the publishing workflow and archiving process, the terms added by the add-in, providing the semantic information, can be extracted from Word files, as they are stored as custom XML tags as part of the content. The semantic knowledge can then be preserved as the document is converted to other formats, such as HTML or the XML format from the National Library of Medicine, which is commonly used for archiving. The full benefit of semantic-rich content will result from an end-to-end approach to the preservation of semantics and metadata through the publishing pipeline, starting with capturing knowledge from the subject experts, the authors, and enabling this knowledge to be preserved when published, as well as made available to search engines and presented to people consuming the content..."
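As a rough illustration of how such an annotated term might live inside a document, the WordprocessingML markup could look something like the following. The w:customXml element and the WordprocessingML namespace are real Open XML constructs, but the specific element name, attribute names, and ontology term shown here are hypothetical, not the add-in's actual schema:

```xml
<!-- Hypothetical sketch: a semantic annotation stored as a custom XML tag
     inside a Word Open XML paragraph. Element/attribute names inside
     customXmlPr are illustrative, not the add-in's actual schema. -->
<w:p xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main">
  <w:customXml w:element="ontologyTerm">
    <w:customXmlPr>
      <!-- Hypothetical attributes identifying the term in an OBO ontology -->
      <w:attr w:name="termId" w:val="GO:0006915"/>
      <w:attr w:name="ontology" w:val="Gene Ontology"/>
    </w:customXmlPr>
    <w:r><w:t>apoptosis</w:t></w:r>
  </w:customXml>
</w:p>
```

Because the annotation travels as ordinary custom XML inside the content, a downstream publishing tool can extract it without any knowledge of Word itself.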
Ontology Design for Scientific Theories that Make Probabilistic Predictions
David Poole, Clinton Smyth, and Rita Sharma, IEEE Intelligent Systems
Scientific theories that make predictions about observable quantities can be evaluated by their fit to existing data and can be used for predictions on new cases. The authors' goal is to publish such theories along with observational data and the ontologies needed to enable the interoperation of the theories and the data. This article is about designing ontologies that take into account the defining properties of classes. The authors show how a multidimensional design paradigm based on Aristotelian definitions is natural, can easily be represented in OWL (the Web Ontology Language), and can provide random variables that give structure to theories that make probabilistic predictions. They also show how such ontologies can be the basis for representing observational data and probabilistic theories in their primary application domain of geology. In a multidimensional ontology, dimensions are defined by functional properties or by each value of a nonfunctional property, classes are defined in terms of values on properties, and the domain of a property that defines a dimension is the most general class on which the property makes sense. OWL and the Multidimensional Design Pattern: OWL was designed to allow for the specification and translation of ontologies. OWL allows for the specification of classes, properties, and individuals, and relations between them. It is possible to use OWL to specify ontologies using the multidimensional design pattern. It is interesting to note that we could find no tutorials or material for teaching or learning OWL that use this design. We divide the object properties into two classes: (1) A discrete property is an object property whose range is an enumeration class. (2) A referring property is an object property whose range is a nonenumeration class (that is, the value is an individual in the world). The dimensions of a multidimensional ontology are defined in terms of discrete properties...
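To make the pattern concrete, here is a minimal sketch of a discrete property and a dimension-defined class in OWL, written in Turtle syntax. The geology-flavored class and property names are invented for illustration and are not taken from the article:

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix :     <http://example.org/geo#> .

# A discrete property: its range is an enumeration class,
# so it defines one dimension of the multidimensional ontology.
:grainSize a owl:ObjectProperty, owl:FunctionalProperty ;
    rdfs:domain :Rock ;           # the most general class where it makes sense
    rdfs:range  :GrainSizeValue .

# The enumeration class supplying the dimension's possible values.
:GrainSizeValue a owl:Class ;
    owl:oneOf ( :fine :medium :coarse ) .

# A class defined by its value on the dimension (Aristotelian style):
# a CoarseRock is a Rock whose grainSize is coarse.
:CoarseRock a owl:Class ;
    owl:equivalentClass [
        a owl:Class ;
        owl:intersectionOf ( :Rock
            [ a owl:Restriction ;
              owl:onProperty :grainSize ;
              owl:hasValue :coarse ] ) ] .
```

Each discrete property such as :grainSize can then be read directly as a random variable over its enumerated values, which is what makes the pattern convenient for probabilistic theories.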
For building scientific ontologies, we suggest that the ability to use the ontology for defining probabilistic theories is essential. We outline a way that this can be done in a straightforward manner that should not distract ontology designers from the other issues that need to be considered. The multidimensional design pattern provides more structure than stating the subclass relation directly. We argue that it is more natural and show how it can be used for probabilistic modeling. This interaction between ontologies and probabilistic reasoning forms the foundation of applications we are building in minerals exploration and landslide prediction. This article considers only one aspect of the problem. Another aspect is, given descriptions of theories and individuals in the world at various levels of abstraction and detail, how to use them to make coherent decisions, which will also involve modeling utilities. A further aspect is that the assumption that we know the correspondence between individuals in the world and the model is not generally applicable. We need to determine which model individuals correspond to which individuals in the world (that is, which individuals fill the roles in the model). We also need to model and reason about existence and nonexistence...
See also: W3C Web Ontology Language (OWL)
Apache Software Foundation Makes Qpid a Top-Level Project
Staff, Apache Software Foundation Announcement
The Apache Software Foundation recently announced the graduation of the Qpid project from the Apache Incubator as a Top-Level Project (TLP), signifying that the project's community and products have been well-governed under the ASF's meritocratic process and principles. Apache Qpid is an open source messaging implementation built on the Advanced Message Queuing Protocol (AMQP) specification, the first open standard for enterprise messaging. Qpid provides transaction management, queuing, clustering, federation, security, management, and support across multiple operating systems and platforms. Qpid started in 2006 with a donation of code created by some of the initial project members. Since then, Qpid has continued to expand both its committer base and the diversity of organizations and individuals represented. Today, Qpid runs critical systems for many users and large organizations while continuing to lead through innovation and implementation. John O'Hara, Chairman of the AMQP Working Group and Executive Director at JPMorgan: "I am delighted that the Apache Software Foundation has graduated the Qpid Project. AMQP is an open infrastructure for business messaging over the Internet. Apache Qpid developers have been active participants in the AMQP Working Group, working in partnership with other AMQP solution developers and end-users. The ASF's provision of Qpid as its AMQP implementation adds to the range of AMQP solutions businesses can choose from to improve their efficiency..." Peter Galli reports in an associated blog: Microsoft was invited to join the AMQP working group in October 2008 by the six founding members. Sam Ramji, the Senior Director of Platform Strategy at Microsoft, said at that time in a blog post that the company had "committed to participate in the development of the specification and is keenly interested in the developing need for interoperability in enterprise messaging.
While message-based transports with security and transactional integrity were a vital infrastructure component throughout financial institutions, the AMQP specification and related implementations may also provide greater interoperability for a number of other vertical scenarios, including insurance and healthcare. AMQP specifies a wire-level protocol (think of a transport like TCP or HTTP), and FIX, FpML, SOAP, and other messages can be sent over AMQP in LAN and WAN environments..." Ramji stressed that Microsoft's work in AMQP would be consistent with the commitment to openness outlined in July 2008. As the AMQP Working Group required a limited royalty-free patent licensing commitment from its members, Microsoft, as a participant, agreed to grant royalty-free patent licenses on specified terms to implementers of the specification...
See also: Peter Galli's blog
CSS Last Call: Selectors Level 3
Tantek Çelik, Elika J. Etemad, Daniel Glazman (eds), W3C Technical Report
W3C announced a last call public review for the CSS "Selectors Level 3" specification. The document has been produced by members of the W3C Cascading Style Sheets (CSS) Working Group. The deadline for comments is April 07, 2009. CSS (Cascading Style Sheets) is a language for describing the rendering of HTML and XML documents on screen, on paper, in speech, etc. CSS uses Selectors for binding style properties to elements in the document. 'Selectors' are patterns that match against elements in a tree, and as such form one of several technologies that can be used to select nodes in an XML document. Selectors have been optimized for use with HTML and XML, and are designed to be usable in performance-critical code. A Selector represents a structure. This structure can be used as a condition (e.g. in a CSS rule) that determines which elements a selector matches in the document tree, or as a flat description of the HTML or XML fragment corresponding to that structure. Selectors may range from simple element names to rich contextual representations. The specification for review describes the selectors that already exist in CSS1 and CSS2, and further introduces new selectors for CSS3 and other languages that may need them. The main differences between the selectors in CSS2 and those in Selectors are: (1) the list of basic definitions (selector, group of selectors, simple selector, etc.) 
has been changed; in particular, what was referred to in CSS2 as a simple selector is now called a sequence of simple selectors, and the term "simple selector" is now used for the components of this sequence; (2) an optional namespace component is now allowed in element type selectors, the universal selector and attribute selectors; (3) a new combinator has been introduced; (4) new simple selectors including substring matching attribute selectors, and new pseudo-classes; (5) new pseudo-elements, and introduction of the "::" convention for pseudo-elements; (6) the grammar has been rewritten; (7) profiles to be added to specifications integrating Selectors and defining the set of selectors which is actually supported by each specification; (8) Selectors are now a CSS3 Module and an independent specification; other specifications can now refer to this document independently of CSS; (9) the specification now has its own test suite.
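A few of these Level 3 additions can be illustrated with ordinary style rules; the selectors below are standard Selectors Level 3 syntax, though the attribute values and element choices are invented examples, not drawn from the specification:

```css
/* (4) Substring-matching attribute selectors */
a[href^="https://"] { color: green; }  /* value begins with "https://" */
a[href$=".pdf"]     { color: red;   }  /* value ends with ".pdf"       */
a[href*="archive"]  { color: blue;  }  /* value contains "archive"     */

/* (3) The new general sibling combinator "~" */
h2 ~ p { margin-left: 1em; }           /* any p following a sibling h2 */

/* (4) A new structural pseudo-class */
tr:nth-child(2n) { background: #eee; } /* every second table row       */

/* (5) The new "::" convention for pseudo-elements */
p::first-line { font-variant: small-caps; }
```

Note that the single-colon forms from CSS2 (e.g. :first-line) remain valid for backward compatibility; the double colon simply distinguishes pseudo-elements from pseudo-classes.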
See also: the W3C CSS Working Group
It Was 20 Years Ago Today: The Web
Charles Cooper, CNET News.com
Is it already 20 years since Tim Berners-Lee authored "Information Management: A Proposal" and set the technology world on fire? Back in 1989, Berners-Lee was a software consultant working at the European Organization for Nuclear Research (CERN) outside of Geneva, Switzerland. On March 13 of that year, he submitted a plan to management on how to better monitor the flow of research at the labs. People were coming and going at such a clip that an increasingly frustrated Berners-Lee complained that CERN was losing track of valuable project information because of the rapid turnover of personnel. It did not help matters that the place was chockablock with incompatible computers people brought with them to the office. [...] He got to work on a document, which is amazing to read with the benefit of 20-20 hindsight. But it would take Berners-Lee another couple of years before he could demo his idea. Even then, the realization of his theory had to wait until the middle of the 1990s, when Jim Clark and Marc Andreessen popularized the notion of commercial Web browsing with Netscape. And as prescient as the CERN document was, not even Berners-Lee could imagine where his basic design was about to lead. To wit, part of his very modest conclusions: "We should work toward a universal linked information system, in which generality and portability are more important than fancy graphics techniques and complex extra facilities. The aim would be to allow a place to be found for any information or reference which one felt was important, and a way of finding it afterwards. The result should be sufficiently attractive to use that the information contained would grow past a critical threshold, so that the usefulness of the scheme would in turn encourage its increased use."
Explaining the Open Web Foundation (OWF): Why a New Process?
Eran Hammer-Lahav, Blog
"In the next few blog posts I will try to answer questions about what the Open Web Foundation is about, what we are working on, and how you can help. This first post will answer questions about the reasons for creating a new Intellectual Property Rights (IPR) process. The views expressed here are my personal perspective and may not represent those of other people involved. [In current standards organizations] the process is based on an upfront scope. It involves negotiating a well defined scope among all the key participants prior to any actual specification work. Many times companies write unpublished specification drafts ahead of such negotiation, using them to have a better understanding of where they want to end up... Why can't the Open Web Foundation use existing IPR policies? The decision to create a new policy and not to use the current policies established by standard bodies is based on the reality of the communities the Open Web Foundation emerged from. After years of experience with the standards bodies, we can fully appreciate the value they provide, but also the significant pitfalls they create. The reality is that they have failed to support the kind of communities and results accomplished by the OAuth, OpenSocial, and OpenID. The two main issues with these policies are that they are strongly designed to accommodate and protect big companies, and that they usually take a very long time from idea to final specification. Other issues include complicated bureaucracy, the need for strong political allies, complicated legal documents inaccessible by most individuals, expensive fees and travel costs, and an existing body of work which in many cases creates a strong bias against new ideas. The reality is that many new communities are voting with their feet and staying away from such bodies. They knowingly create work that is not legally protected from IPR perspective because the alternatives are just not practical..."
See also: Open Web Foundation (OWF) references
BriefingsDirect Analysts Discuss Solutions For Bringing Human Interactions Into Business Process Workflows
A complete transcript has been published for a 'BriefingsDirect Analyst Insights' event on the alignment of human interaction with business process management. BriefingsDirect is a periodic discussion and dissection of IT infrastructure-related news and events, with a panel of industry analysts and guests. "The need to automate and extend complex processes is obvious. What's less obvious, however, is the need to join the physical world of people, their habits, needs, and perceptions with the artificial world of service-oriented architecture (SOA) and business process management (BPM). This will become all the more important as cloud-based services become more common. We're going to revisit the topic of BPEL4People, an OASIS specification that we discussed when it first arrived, probably a year-and-a-half ago. We'll also see how it's progressing with someone who has been working with the specification at OASIS since its beginning... Michael Rowley, director of technology and strategy at Active Endpoints: "I was at BEA for five years. I was involved in a couple of their BPM-related efforts. I led up the BPELJ spec effort there as part of the WebLogic integration team. I was working in the office of the CTO for a while and working on BPEL-related efforts. I also worked on the Business Process Modeling Notation (BPMN) 2.0 efforts while I was there. I worked a little bit with the ALBPM team as well, and on a variety of BPM-related work. Then, I've been at Active Endpoints for a little over half a year now. While here, I am working on the BPEL4People standards, as well as on the product itself, and on some BPMN-related work as well... We've had some very good feedback from our users on BPEL4People. People really like the idea of a standard in this area, and in particular, the big insight that's behind BPEL4People, which is that there's a separate standard, WS-HumanTask.
It's basically keeping track of the worklist aspect of a business process, versus the control flow that you get on the BPEL4People side of the standard. So, there's BPEL4People as one standard and WS-HumanTask as another, closely related standard. By having this dichotomy you can have your worklist system completely standards-based, but not necessarily tied to your workflow system or BPM engine. We've had customers actually use that. We've had at least one customer that decided to implement their own human-task worklist system, rather than using the one that comes out of the box, and know that what they have created is standards-compliant. This is something that we're seeing more and more. Our users like it, and as for the industry as a whole, the big vendors all seem to be very interested in this. We just recently had a face-to-face meeting, and we continue to get really good turnout; not just at these meetings, but there's also substantial effort between meetings. All of the companies involved (Oracle, IBM, SAP, Microsoft, and TIBCO, as well as Active Endpoints) seem to be very interested in this. One interesting case is Microsoft. They are also putting in some special effort here..."
See also: the transcript
The BPM Technology Convergence, Part I
Winston Dhanraj, ZDNet Blog
A number of businesses today acknowledge the relevance of BPM & SOA and are increasingly looking to derive value from their adoption and application. Why, even the most important CIO on this planet considers SOA (overseeing a $70 billion spend is quite an awesome IT job). Anyway, BPM has clearly seen through the trough of disillusionment, with a number of standards organizations, vendors, system integrators, customers and of course the analysts (Gartners & Forresters) applying focused efforts in their own ways toward pushing BPM/SOA up the slope toward the plateau of productivity. Well, at least one of the key enablers has been technology, be it in terms of various technical standards formulations, proven architectural frameworks or vendor product innovations. With this write-up I try to create just one perspective of today's BPM technology landscape. This actually leads me to investigate (in part II of this article) whether a general convergence of technology is possible within the frame of this perspective, and whether that is really where we are going to find the best possible solutions to BPM & SOA. To narrow down the scope of the BPM technology landscape, I consider three core foundational technology components required to put together any comprehensive BPM or SOA solution: (1) Process Engine: BPEL/WSDL being the dominant standard; (2) Rules Engine: no real standard here, except JSR 94, which defines interfacing APIs; (3) Enterprise Service Bus: a variety of standards such as JBI and SCA/SDO (incompatible, unfortunately)... The BPM provider community as we know it today is broadly split across two (or maybe three) camps [...] The more dominant camp envisions a BPM ecosystem where Process & Rules are loosely-coupled (service-oriented) components. This camp includes heavyweights such as IBM, TIBCO, Oracle-WebLogic, and Microsoft...
The emergent, less dominant camp, led by Pegasystems, envisions a BPM ecosystem where Process and Rules are unified, tightly-coupled 'first-class citizens'. The third camp (now more-or-less morphed into or aligned with one of the two camps mentioned above) believed all business logic can be modeled purely in a Rules Engine. ILOG was probably one such vendor; it was recently snapped up by IBM...
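For reference, the Process Engine component mentioned above is typically programmed in BPEL. A minimal sketch of a BPEL 2.0 process skeleton might look roughly like this; the process name, partner link, and message types are invented for illustration, and a real process would also need the corresponding WSDL definitions:

```xml
<!-- Hypothetical minimal BPEL 2.0 process: receive a request, reply.
     All names (ApproveLoan, lns:*, client) are illustrative only. -->
<process name="ApproveLoan"
    targetNamespace="http://example.org/loan"
    xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable"
    xmlns:lns="http://example.org/loan/wsdl">
  <partnerLinks>
    <partnerLink name="client" partnerLinkType="lns:loanLT"
                 myRole="approver"/>
  </partnerLinks>
  <variables>
    <variable name="request" messageType="lns:requestMsg"/>
  </variables>
  <sequence>
    <!-- Entry point: a new process instance starts on each request -->
    <receive partnerLink="client" operation="approve"
             variable="request" createInstance="yes"/>
    <!-- In the loosely-coupled camp's design, a rules engine would be
         invoked here as just another service via <invoke> -->
    <reply partnerLink="client" operation="approve" variable="request"/>
  </sequence>
</process>
```

The design question Part II takes up is whether the rules step in the middle stays a separate service invocation, as above, or becomes a first-class construct of the process engine itself.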
SOA Security: The Basics
Mark O'Neill, CSO Online
Diving into Service Oriented Architecture? Vordel's Mark O'Neill covers basic SOA security threats and defenses, and explains how security helps increase SOA's business benefits... In this article, we examine how security applies to Service Oriented Architecture (SOA). Before we discuss security for SOA, let's take a step back and examine what SOA is. SOA is an architectural approach in which applications are exposed as 'services'. Originally, services in SOA were associated with a stack of technologies which included SOAP, WSDL, and UDDI. However, many grassroots developers then showed a preference for lightweight REST (Representational State Transfer) services as opposed to the more heavyweight SOAP messages, with the result that REST is now an accepted part of SOA. The rise of Web 2.0 has cemented REST's place in the SOA world, since REST is widely used in Web 2.0. More recently, Cloud services such as Amazon's Simple Queue Service (SQS) may be used alongside local services to create a 'hybrid' SOA environment. The result of all this is that SOA now encompasses the original SOAP/WSDL/UDDI stack, REST services, and the Cloud. From a security professional's point of view, all of it must be secured. It is tempting to launch into a description of SOA Security without first asking 'Why?' Why apply security to SOA? One obvious answer is to protect the SOA infrastructure against attack. This is a valid reason, but there are also enabling, positive reasons for applying security to SOA, such as the ability to monitor usage of services in a SOA. We begin by examining the attacks against SOA technologies, both SOAP and REST. Then we examine how standards such as WS-Security allow policies to be applied to SOA, thus allowing controlled usage and monitoring, and finally we examine the security ramifications when an enterprise integrates local on-site applications with cloud computing services...
The old and the new: Passwords, X.509 Certificates, and WS-Security: Passwords have been around since time immemorial. They are still widely used within SOA Security. In many cases, it is simply a case of HTTP Authentication, sent over SSL so that the password is not sent in the clear. Indeed, even if Digest Authentication is used, where the password is not sent in the clear, SSL should still be used in order to block certain capture-replay attacks. Even though HTTP Authentication over SSL is "old" technology, it still is widely used for point-to-point authentication within an SOA. X.509 certificates are used in the context of SSL authentication, where a Web Service can prove its identity to a client, or, in the case of two-way SSL, the client also proves its identity to the service. In this case "identity" is amorphous, since Web Services interactions often involve applications talking to applications, without a human being involved. So the "identity" is the identity of an application. And, as is the case in all usage of X.509 certificates, the trust is based on the issuer of the X.509 certificate (a Certificate Authority, often abbreviated to "CA"). As well as SSL, X.509 certificates are often used in the context of digital signatures. XML Signature is a standard which defines how XML data can be digitally signed using the private key which corresponds to an X.509 certificate, so that anyone who holds the signatory's X.509 Certificate can validate the signature... WS-Security is a newer technology which was standardized in 2004. It builds on what has come before. It defines how XML Encryption and XML Signature apply to SOAP, so that a SOAP message may be encrypted and/or signed. Additionally, it defines where passwords and X.509 Certificates are placed in a SOAP message, and how SOAP may operate with Kerberos. This allows for interoperability between different applications which use WS-Security.
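As a concrete illustration of where WS-Security places credentials in a SOAP message, a UsernameToken carried in the security header looks roughly like the following; the namespaces are the standard WSS 1.0 URIs, while the username and password values are placeholders:

```xml
<soap:Envelope
    xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
  <soap:Header>
    <!-- WS-Security defines this header block for security tokens -->
    <wsse:Security>
      <wsse:UsernameToken>
        <wsse:Username>alice</wsse:Username>
        <!-- A clear-text password; as noted above, this should only
             ever travel over SSL/TLS -->
        <wsse:Password
            Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">secret</wsse:Password>
      </wsse:UsernameToken>
    </wsse:Security>
  </soap:Header>
  <soap:Body><!-- payload, optionally signed and/or encrypted --></soap:Body>
</soap:Envelope>
```

The same wsse:Security header is where an XML Signature or an X.509 BinarySecurityToken would be placed, which is what allows different WS-Security implementations to interoperate.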
Google Launches Google Friend Connect API
Loren Baker, Search Engine Journal
See also: Google Friend Connect APIs
XML Daily Newslink and Cover Pages sponsored by:
Sun Microsystems, Inc. http://sun.com
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: email@example.com
Newsletter unsubscribe: firstname.lastname@example.org
Newsletter help: email@example.com
Cover Pages: http://xml.coverpages.org/