This issue of XML Daily Newslink is sponsored by:
IBM Corporation http://www.ibm.com
- W3C OWL Working Group Publishes OWL 2 Web Ontology Language Documents
- SOA Product Review: Intel XML Software Suite 1.1
- Microsoft Elaborates on Oslo: "M" and "Quadrant"
- Implementing Supply Chain SOA with SKOS
- Information Model and XML Data Model for Traceroute Measurements
- Call for Participation: Workshop on Speaker Biometrics and VoiceXML 3.0
- Worksoft Upgrades SOA Testing, Validation Tools
- IMAP Annotation for Indicating Message Authentication Status
- US Library of Congress Makes a Step Towards PRESTO
W3C OWL Working Group Publishes OWL 2 Web Ontology Language Documents
Boris Motik, Peter F. Patel-Schneider (et al, eds), W3C Technical Reports
Members of the W3C OWL Working Group have announced the release of seven specifications relating to OWL 2, including two First Public Drafts. OWL 2 extends OWL, a core standard of the Semantic Web, adding new features that users have requested and that software providers are prepared to implement. The OWL Web Ontology Language builds on RDF and RDF Schema and adds more vocabulary for describing properties and classes: among others, relations between classes (e.g. disjointness), cardinality (e.g. "exactly one"), equality, richer typing of properties, characteristics of properties (e.g. symmetry), and enumerated classes. The new features in OWL 2 include extra syntactic sugar, additional property and qualified cardinality constructors, extended datatype support, simple metamodelling, and extended annotations. (1) "OWL 2 Structural Specification and Functional-Style Syntax" defines OWL 2 ontologies in terms of their structure, and it also defines a functional-style syntax in which ontologies can be written. It describes the conceptual structure of OWL 2 ontologies and thus provides a normative abstract model for all (normative and nonnormative) syntaxes of OWL 2. Such a structural specification of OWL 2 provides the foundation for the implementation of OWL 2 tools such as APIs and reasoners. (2) "OWL 2 Direct Semantics" provides the direct model-theoretic semantics for OWL 2, which is compatible with the description logic SROIQ; furthermore, this document defines the most common inference problems for OWL 2. (3) "OWL 2 RDF-Based Semantics" is a first public working draft which provides the RDFS-compatible model-theoretic semantics for OWL 2, called "OWL 2 Full". A strong relationship holds between the RDF-Based Semantics of OWL 2 Full and the Direct Semantics of OWL 2 DL, in that OWL 2 Full is, in some sense, able to reflect all logical conclusions of OWL 2 DL.
(4) "OWL 2 Mapping to RDF Graphs" provides mappings by means of which every OWL 2 ontology in the functional-style syntax specification can be mapped into RDF triples and back without any change in the formal meaning of the ontology. (5) "OWL 2 XML Serialization" defines an XML syntax for OWL 2 that mirrors its structural specification; an XML schema defines this syntax and is available as a separate document, as well as being included; it declares an XML Serialization Namespace for OWL 2 (http://www.w3.org/ns/owl2-xml). (6) "OWL 2 Web Ontology Language: Profiles" provides a specification of several profiles of OWL 2 which can be more simply and/or efficiently implemented. In logic, profiles are often called fragments. Most profiles are defined by placing restrictions on the syntax of OWL 2. These restrictions have been specified by modifying the productions of the functional-style syntax. (7) The "OWL 2 Conformance and Test Cases" first public working draft describes the conditions that OWL 2 tools must satisfy in order to be conformant with the language specification. It also presents a set of tests that both illustrate the features of the language and can be used for testing conformance.
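The flavor of the XML serialization described in item (5) can be sketched with the Python standard library. This is a hedged illustration only: it emits a DisjointClasses axiom under the article's stated OWL 2 XML namespace, and the Cat/Dog class IRIs are hypothetical examples, not drawn from the specifications.

```python
# Illustrative sketch of an OWL 2 XML-serialization-style document built
# with Python's standard library. The class IRIs are hypothetical.
import xml.etree.ElementTree as ET

OWLX = "http://www.w3.org/ns/owl2-xml"   # namespace named in the article
ET.register_namespace("", OWLX)          # make it the default namespace

ontology = ET.Element(f"{{{OWLX}}}Ontology")
axiom = ET.SubElement(ontology, f"{{{OWLX}}}DisjointClasses")
for iri in ("http://example.org/Cat", "http://example.org/Dog"):
    # Each class participating in the disjointness axiom is named by IRI.
    ET.SubElement(axiom, f"{{{OWLX}}}Class", IRI=iri)

doc = ET.tostring(ontology, encoding="unicode")
print(doc)
```

The same axiom would read `DisjointClasses( <http://example.org/Cat> <http://example.org/Dog> )` in the functional-style syntax, which the Mapping to RDF Graphs document then relates to RDF triples.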
See also: OWL 2 Direct Semantics
SOA Product Review: Intel XML Software Suite 1.1
Paul O'Connor, SYS-CON
The benefits of XML over opaque message formats in data interchange are well established. Whether your focus is SOAP, REST, POX, or syndication with RSS or Atom, your applications will revolve around XML processing. The bane of XML has always been the overhead of processing it in terms of memory and CPU consumption: parsing documents, performing XML Schema validation, searching for elements with XPath, and especially executing transforms. This problem has been met head-on by Intel's Software and Services Group with the release of the Intel XML Software Suite. The fact that Intel has a software development group dedicated to creating software tools optimized for Intel hardware platforms is not surprising or new information to folks doing software development for multi-core systems. What is surprising is the level of optimization that has been achieved in this XML toolkit. The Intel XML Software Suite includes both Java and C/C++ libraries for Windows (Vista, XP, Server 2003, Server 2008) and Linux (Red Hat AS/ES, SuSE Server 9/10) on IA-32 and 64-bit Intel processors. A recent update of the Intel XML Software Suite also supports the Intel Itanium platforms on HP-UX. Performance is optimized for use on multi-core Xeon processors. Compatible Java JDKs include Sun, JRockit, and IBM. The product is not free: you will need a license to use it, but you can get started with a free evaluation license... The Intel XML Software Suite comprises four separate XML processing functions bundled as a single product: (1) XML Parsing (DOM and SAX); (2) XML Schema Validation; (3) XPath—XML navigation and expression handling; (4) XSL Transformation (XSLT). Underpinning each XML processing function of the product is a custom, highly optimized XML pull parser. The Intel XML Software Suite derives its true power from its deep integration with the Intel processor architectures for which it is optimized...
What they did is build a set of native libraries optimized for dual-core Xeon processors that can alternatively be linked with C/C++ programs or surfaced in Java as a native library. Anyone familiar with native code optimization will understand the value of a good compiler. To this end, the product is built with the Intel Compiler. Since XML processing is conducive to multithreading, Intel turned loose their multithreading performance analysis kit to achieve further optimization. The user has the ability to configure the number of threads at runtime. Not surprisingly, Intel reports that the best performance realized in their testing occurred when the number of threads equaled the number of processor cores. The Intel XML Software Suite is poised to incorporate upcoming Intel Streaming SIMD Extensions 4 (SSE4) instructions to further boost XML processing performance. By including the SSE4.2 instructions inside the Intel XML Software Suite, developers will be able to take advantage of these new instructions without changing their application code; you simply need to be using the latest XML library from Intel. Intel makes it easy for developers to benefit from new Intel CPU instructions by incorporating them into runtime libraries this way...
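The workload classes the suite targets can be made concrete with a small example. This is not Intel's API: it is a plain standard-library sketch of the parse-then-navigate pattern (DOM-style parse plus an XPath-subset query) whose per-document cost is what such optimized toolkits aim to cut; the order data is invented.

```python
# Not Intel's API: a stdlib illustration of the XML workloads the suite
# accelerates (parsing plus XPath-style navigation). Data is hypothetical.
import xml.etree.ElementTree as ET

doc = ("<orders>"
       "<order id='1'><total>10</total></order>"
       "<order id='2'><total>25</total></order>"
       "</orders>")

root = ET.fromstring(doc)                                      # DOM-style parse
totals = [int(e.text) for e in root.findall("./order/total")]  # XPath subset
grand_total = sum(totals)
print(grand_total)  # 35
```

In a high-volume service, this parse/query step runs on every message, which is why shaving CPU cycles from the parser pays off across the whole stack.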
See also: the product description
Microsoft Elaborates on Oslo: "M" and "Quadrant"
Paul Krill, InfoWorld
Shedding more light on its Oslo vision for model-based software development, Microsoft has elaborated on plans to preview Oslo technologies, offering code names and citing the company's DSL (Domain Specific Languages) concept as a lynchpin of the platform. A Community Technology Preview of Oslo is due at the Microsoft Professional Developers Conference in Los Angeles on October 27, 2008. Featured in the CTP will be a declarative modeling language now being identified by the code name "M," as well as a software modeling tool code-named "Quadrant." A repository for integration between models also will be part of the CTP. User feedback on the CTP will help determine the overall road map for Oslo technologies, according to Robert Wahbe, Microsoft corporate vice president of the company's Connected Systems Division. With Oslo, Microsoft seeks to provide another layer of abstraction for developers and make development easier; models become the applications. Business analysts also could make changes to models. "It's easier in many cases to look at a model and see what it's trying to do rather than look at hundreds of thousands of lines of code," Wahbe said. With the M language, ISVs and developers could build textual DSLs; a DSL enables a developer to write down intent in a way that is close to how a developer is thinking about a problem. M also can be used to build data models. "The idea of DSLs has been around. What we're trying to do with Oslo is make it easier for mainstream developers to use models in general. Microsoft, as an ISV itself, will use DSLs for building domains for activities like workflow and databases. The notion is that M is excellent at building these DSLs in an easy way. In turn, once you have that DSL, what it does is it lets you produce something that the platform can execute directly." A model is translated to XAML, which can be executed by the platform.
Oslo also can work with multiple runtimes from platforms like Java if developers customize the Oslo tools. Microsoft is attacking the two core issues of modeling: translating from models into executable code and the functional aspect of an application, in which functional models must accommodate nonfunctional aspects of an application such as security and systems management.
See also: Steven Martin on 'M' and 'Quadrant'
Implementing Supply Chain SOA with SKOS
Brian Sletten, DevX.com
The supply chain vision applies to the world of service oriented architectures (SOA) and Enterprise development. You can implement this vision today by applying the concepts of the web in the Enterprise, combining URL-addressable RESTful web services and data sources into sophisticated, efficient processes. It is not necessary to pull all the pieces into a heavy deployment structure like conventional J2EE and .NET applications. New functionality becomes a recombination of existing information and functionality. The issue with the existing approaches is that they require too much human agreement and are not universally applicable across services and data. The social costs of these efforts always far outweigh the technical costs. It is necessary to support different classification schemes because it is simply not possible to achieve consensus among all stakeholders. Additionally, it is necessary to have a consistent metadata strategy for not just the services, but the data, concepts, policies, documents, and everything else that might participate in these orchestrated spaces. This is where the Resource Description Framework (RDF) in general and the Simple Knowledge Organization System (SKOS) language come in... The goal of using these technologies is to describe and categorize services as quickly and easily as possible. Languages such as the Web Ontology Language (OWL) provide the ability to model domains quite accurately, but the skills and effort to do so remain outside the reach of most Enterprise developers for the near future... SKOS is an attempt to allow simpler concept schemas (e.g., taxonomies, controlled vocabularies, etc.) to be used in place of heavier weight ontologies. Modeling taxonomies is within the skill set of modern information architects, software developers, and service-oriented architects... 
The successful adoption of a service oriented architecture requires the right balance of deployment infrastructure, metadata description, and query capability. By applying web architectures in the Enterprise you can achieve a flexible, scalable addressing scheme that hides specific backend implementation details behind logical names. After you commit to this kind of information-driven architecture, you need to describe and categorize the kinds of data and services available in this environment. RDF and SKOS provide tremendous capabilities while striking a nice balance of simplicity, expressiveness, and flexibility. If you have more sophisticated modeling needs and ontology engineers at your disposal, throwing OWL into the mix can enrich your descriptive and inferencing capabilities even further. These technologies enable the kind of lightweight, flexible, scalable, on-demand orchestrations envisioned by the supply chain SOA metaphor, at lower cost and with greater efficiency than conventional technologies.
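The classification idea at the heart of this argument can be sketched without any RDF tooling. The concept names and the taxonomy below are hypothetical; the structure mirrors a SKOS concept scheme, where each concept points to a more general one via skos:broader and a lookup walks that chain to categorize a service.

```python
# Hedged sketch of SKOS-style classification in plain Python: service
# concepts linked by skos:broader relations. All names are hypothetical.
broader = {
    "CreditCheck":  "RiskServices",
    "FraudScreen":  "RiskServices",
    "RiskServices": "SupplyChainServices",
}

def ancestors(concept):
    """Follow skos:broader links up to the scheme's top concept."""
    chain = []
    while concept in broader:
        concept = broader[concept]
        chain.append(concept)
    return chain

print(ancestors("CreditCheck"))  # ['RiskServices', 'SupplyChainServices']
```

In a real deployment the same relations would live as RDF triples using the SKOS vocabulary, so multiple overlapping schemes can coexist without forcing every stakeholder onto one classification.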
Information Model and XML Data Model for Traceroute Measurements
Saverio Niccolini (et al., eds), IETF Internet Draft
Members of the IETF IP Performance Metrics Working Group have published an updated Internet Draft for "Information Model and XML Data Model for Traceroute Measurements." This document describes a standard way to store the configuration and the results of traceroute measurements. It describes the terminology and defines an information model dividing the information elements into two semantically separate groups (configuration elements and results elements). On the basis of the information model, a data model based on XML is defined to store the results of traceroute measurements. Traceroutes are used by many measurement efforts, either as independent measurements or to get path information to support other measurement efforts, which is why there is a need to standardize the way the configuration and the results of traceroute measurements are stored. The standard metrics defined by the IPPM working group for delay, connectivity, and losses do not apply to the metrics returned by the traceroute tool; therefore, in order to compare results of traceroute measurements, the only possibility is to add to the stored results a specification of the operating system as well as the name and version of the traceroute tool used. In order to store results of traceroute measurements and allow comparison of them, this document defines a standard way to store them using an XML schema. Section 2 defines the terminology used in the document, Section 3 describes the traceroute tool, and Section 4 describes the results of a traceroute measurement as displayed on the screen from which the traceroute tool was launched. Sections 5 and 6 respectively describe the information model and data model for storing configuration and results of traceroute measurements. Section 7 contains the XML schema to be used as a template for storing and/or exchanging traceroute measurement information.
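The configuration/results split can be sketched in a few lines of stdlib Python. The element names below are illustrative rather than the draft's exact schema, and the addresses come from RFC 5737 documentation ranges; note that the configuration group carries the tool name, version, and OS that the draft says are needed to compare results across implementations.

```python
# Hedged sketch of a traceroute measurement stored as XML, loosely
# following the draft's configuration/results split. Element names are
# illustrative, not the draft's exact schema.
import xml.etree.ElementTree as ET

meas = ET.Element("tracerouteMeasurement")

# Configuration group: tool identity, needed to compare results later.
cfg = ET.SubElement(meas, "configuration")
ET.SubElement(cfg, "tool", name="traceroute", version="2.0.12", os="Linux")

# Results group: one element per hop, keyed by TTL.
res = ET.SubElement(meas, "results")
for ttl, (addr, rtt_ms) in enumerate([("192.0.2.1", "0.5"),
                                      ("198.51.100.7", "9.8")], start=1):
    hop = ET.SubElement(res, "hop", ttl=str(ttl))
    ET.SubElement(hop, "address").text = addr
    ET.SubElement(hop, "rtt").text = rtt_ms

xml_doc = ET.tostring(meas, encoding="unicode")
print(xml_doc)
```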
Call for Participation: Workshop on Speaker Biometrics and VoiceXML 3.0
Staff, W3C Announcement
W3C has issued a call for participation in a March 2009 "Workshop on Speaker Biometrics and VoiceXML 3.0," to be held at SRI International, Menlo Park, CA, USA. W3C membership is not required to participate in this workshop. However, position papers will be the basis for the discussions at the workshop, and each organization or individual wishing to participate must submit a position paper. Workshop Chairs include Judith Markowitz (Co-Chair Speaker Biometrics Committee, VoiceXML Forum), Ken Rehor (also Co-Chair), and Kazuyuki Ashimura (W3C Voice Browser Activity Lead). The W3C Voice Browser Working Group seeks to develop standards to support secure access to the Web and Web services using the biometrics of speaker identification and verification (SIV). Interest in SIV is growing in both the private and public sector. That interest is motivated by a variety of factors: primarily cost and labor issues; convenience; and the growing number of regulations and laws governing data privacy and security that have been put in place at international, national, local, and industry levels. Unlike other biometric technologies, speech recognition, and speech synthesis, there are no standards specifically governing the use of SIV. ISO/IEC 19784-1 (called 'BioAPI') is a generic biometric application programming interface that was designed to support SIV in non-telephony deployments. Its utility for SIV Web-services applications has not yet been fully explored. The three other SIV standards projects (Media Resource Control Protocol (MRCP V2)—Internet Engineering Task Force; NCITS 1821-D Speaker Recognition Format for Raw Data Interchange - VoiceXML Forum & InterNational Committee for Information Technology Standards; and ISO/IEC 1.37.19794-13, Voice Data - International Organization for Standardization and the International Electrotechnical Commission) are all still under development... This workshop is focused on SIV within VoiceXML 3.0.
The goal is to identify and prioritize directions for SIV standards work as a means of making SIV more useful in current and emerging markets. The Voice Browser Working Group is considering the following three activities: (1) Identify application requirements for SIV in VoiceXML 3.0; (2) Identify SIV standards relevant to VoiceXML 3.0; (3) Integrate existing and in-process standards with VoiceXML 3.0... The Workshop organizers expect several communities to contribute to the workshop: SIV technology vendors; Developers of applications using SIV; Biometric specialists interested in incorporating SIV into their systems; Security industry specialists; SIV researchers and other experts; Users of SIV in the public and private sectors; Standards bodies interested in SIV.
See also: W3C Workshops and Symposia
Worksoft Upgrades SOA Testing, Validation Tools
Vance McCarthy, Integration Developer News
Worksoft Inc. is shipping an upgrade to Worksoft Certify for SOA, which provides a 'scriptless' approach to testing and validating business processes for SOA. The offering works with SAP, .NET, Java, and mainframe environments. Worksoft Certify for SOA 8.2.1 extends and simplifies testing capabilities to speed deployment and improve the accuracy of business processes: "SOA drives business value by ensuring that the IT services required to quickly respond to changing business needs are delivered in a secure and manageable way," said Bruce Johnson, CEO and President of Worksoft, in a statement. "Worksoft Certify for SOA simplifies the validation of these dynamic, composite applications by allowing WSDLs and messages to be tested in the context of the end-to-end business process they support." Specifically, Worksoft Certify for SOA 8.2.1 allows: (1) Tests to be designed and executed at the business process level to validate, in context, both the Web services and the end-user application, covering the stack from bottom to top; (2) Simulations of both the messages and responses to allow test suites to be built and executed before producer services are even available, thereby compressing the overall SOA project lifecycle; (3) Automated identification of all changes to the XML and SOAP messages and test cases that are affected by each new revision of the service; (4) Seamless integration into Worksoft Certify's existing repository of reports, requirements, test data, and processes... Worksoft Certify offers a patented 'scriptless' approach to functional and business process validation testing, enabling business analysts and functional users to define, execute, and maintain tests through a menu-driven interface with business-language narratives.
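Capability (2) above, simulating a producer service so tests can run before the real endpoint exists, is a general technique worth illustrating. This sketch is not Worksoft's API: it is a generic Python stub with hypothetical names, showing consumer-side logic exercised against a simulated service that returns a canned response.

```python
# Not Worksoft's API: a generic sketch of simulating a producer service so
# consumer tests run before the real endpoint exists. Names are hypothetical.
def credit_approved(call_service, customer_id, threshold=600):
    """Consumer logic under test: approve when the score clears a threshold."""
    response = call_service({"op": "CreditCheck", "customer": customer_id})
    return response["score"] >= threshold

def simulated_service(message):
    """Stand-in for the unavailable producer: returns a canned response."""
    assert message["op"] == "CreditCheck"
    return {"score": 710}

result = credit_approved(simulated_service, "C-42")
print(result)  # True
```

Once the real producer ships, the same test suite runs against it by swapping the stub for a genuine service call, which is what compresses the project lifecycle.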
IMAP Annotation for Indicating Message Authentication Status
Murray S. Kucherawy (ed), IETF Internet Draft
An initial Internet Draft -00 has been published for "IMAP Annotation for Indicating Message Authentication Status." It defines an application of the IMAP (Internet Message Access Protocol) Annotations facility whereby a server can store and retrieve meta-data about a message relating to message authentication tests performed on the message and the corresponding results. Electronic mail, though ubiquitous and highly useful, is also prone to increasing abuse by parties that choose to exploit its lenient design for nefarious purposes such as "spam" and "phishing." Abuse of this leniency has become so widespread as to become an economic problem. Several nascent methods of mitigating this problem such as SPF and DKIM appear to make strides in this direction but are themselves not sufficient. In many cases the results of attempts to authenticate messages must be relayed to the user for final disposition. This memo defines a new annotation for IMAP using the IANA Considerations found in IETF RFC 5257 ("IMAP ANNOTATE Extension") which is used to store and relay message authentication results from upstream (e.g. "border") mail servers to internal mail servers which ultimately do message delivery. This information can then be used by delivery agents or even the users themselves when determining whether or not the content of such messages is trustworthy. 
The IMAP annotation defined in this memo is expected to serve several purposes: (1) Convey to MUAs from filters and Mail Transfer Agents (MTAs) the results of various message authentication checks being applied; (2) Provide a common location for the presentation of this data; (3) Create an extensible framework for specifying results from new authentication methods as such emerge; (4) Convey the results of message authentication tests to later filtering agents within the same "trust domain", as such agents might apply more or less stringent checks based on message authentication results; (5) Do all of this in a way not prone to forgery or misinterpretation.
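The mechanism can be sketched as command composition. This is a hedged illustration only: the STORE/ANNOTATION shape follows RFC 5257's per-message annotation syntax, but the entry name "/authresults" and the DKIM result string are placeholder assumptions, not the draft's registered entry or value format.

```python
# Hedged sketch of composing an RFC 5257-style per-message annotation STORE
# command carrying an authentication result. The "/authresults" entry name
# and the value string are hypothetical placeholders, not the draft's names.
def store_annotation(msg_set, entry, value):
    # value.priv marks the value as private to the authenticated user.
    return f'STORE {msg_set} ANNOTATION ({entry} (value.priv "{value}"))'

cmd = store_annotation("1", "/authresults", "dkim=pass header.d=example.com")
print(cmd)
```

A border MTA would record such a result at delivery time; a downstream MUA or filter fetching the annotation can then trust the stored verdict instead of re-running the cryptographic checks.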
See also: the discussion list
US Library of Congress Makes a Step Towards PRESTO
Rick Jelliffe, XML.com
It's very pleasing to see that the US Library of Congress Thomas project is making user-friendly, structured URLs available as permanent aliases for its legislation. A predictable URI will bring up a page with links to all the different information available about that piece of legislation: text, sponsors, costings, metadata, etc. Try it to see! The advantage is that these names are formed using simple rules (congress number: 110, bill type: congressional resolution, number: 33), so you can figure out the URL if you know this information: you don't need to search for it, and it won't go out of date. There has been a good grassroots move to require this: the Open House project, for example. Many European legislatures have also moved towards similar approaches. I have been pushing a similar approach, but taking it further, in the PRESTO approach. How does the Thomas project correspond to the PRESTO approach? Big ticks for having clear and hackable names, and for shielding the underlying implementation (it is just done in the resolver). A big tick for having names that apply to information regardless of whether it is available. A big tick for being permanent. A big tick for having a single resource that is the hub/index for information about its subresources. Under the PRESTO approach, the next step would be to make each of these subresources available using permanent PRESTO URIs. In the current implementation, you can get to the top-level resource, but the linked resources are back to using impenetrable queries for the URL... In turn, this [example] page gives you an HTML version, but the other possible renderings also have obscure URLs. For example, to get the PDF version, the PRESTO approach would be to make this a subresource. In this case, we use the ";" syntax suggested by Tim Berners-Lee as matrix URIs, rather than query parameters... ["'Handles' are web addresses that do not change over time.
A Handle is a form of uniform resource identifier (URI) that resolves to a uniform resource locator (URL). As a stable pointer, the Handle will not change even if the underlying URL changes over time or the object moves to a new directory. The Global Handle Registry is run by CNRI (Corporation for National Research Initiatives), making it possible to resolve Handles from any computer on any network. The actual page URL, not the Handle, is shown in the browser address bar when the page is displayed..."]
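The "hackable names" idea is simple enough to show in code. The host and path pattern below are illustrative assumptions, not Thomas's actual scheme: the point is only that the URI is composed from the identifying facts (congress, bill type, number), with renditions hung off it via the matrix ";" syntax rather than a query string.

```python
# Hedged sketch of hackable, permanent URIs composed from identifying
# facts. The host and path pattern are illustrative, not Thomas's scheme.
def bill_uri(congress, bill_type, number):
    return f"http://example.gov/bills/{congress}/{bill_type}/{number}"

def rendition(uri, fmt):
    # Matrix-URI style: a ";" parameter names the subresource,
    # instead of an opaque query string.
    return f"{uri};{fmt}"

uri = bill_uri(110, "hconres", 33)
print(uri)                    # http://example.gov/bills/110/hconres/33
print(rendition(uri, "pdf"))  # http://example.gov/bills/110/hconres/33;pdf
```

Because the resolver, not the URI, maps these names to whatever backend currently serves them, the names can stay stable while the implementation changes underneath.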
See also: 'Legislative Handles'
XML Daily Newslink and Cover Pages sponsored by:
Sun Microsystems, Inc.
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: firstname.lastname@example.org
Newsletter unsubscribe: email@example.com
Newsletter help: firstname.lastname@example.org
Cover Pages: http://xml.coverpages.org/