The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: March 04, 2009
XML Daily Newslink. Wednesday, 04 March 2009

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
IBM Corporation

Ontology Summit 2009: Toward Ontology-based Standards
Staff, Summit Organizing Committee Posting

An initial description has been published for "Ontology Summit 2009: Toward Ontology-based Standards." The event will be held April 6-7, 2009 in Gaithersburg, Maryland. The Summit is co-organized by NIST, Ontolog, NCOR, NCBO, OASIS, OMG, ISO TC 184/SC4, STI International, DERI,... "This summit will address the intersection of two active communities, namely the technical standards world and the community of ontology and semantic technologies. This intersection is long overdue because each has much to offer the other. Ontologies represent the best efforts of the technical community to unambiguously capture the definitions and interrelationships of concepts in a variety of domains. Standards—specifically information standards—are intended to provide unambiguous specifications of information, for the purpose of error-free access and exchange. If the standards community is indeed serious about specifying such information unambiguously to the best of its ability, then the use of ontologies as the vehicle for such specifications is the logical choice. Conversely, the standards world can provide a large market for the industrial use of ontologies, since ontologies are explicitly focused on the precise representation of information. This will be a boost to worldwide recognition of the utility and power of ontological models. The goal of this Ontology Summit 2009 is to articulate the power of synergizing these two communities in the form of a communique in which a number of concrete challenges can be laid out. These challenges could serve as a roadmap that will galvanize both communities and bring this promising technical area to the attention of others. Exactly which challenges are chosen is the subject to be debated and decided upon during the electronic discussion period leading up to the face-to-face meeting in April 2009."

See also: Standards and Ontologies

W3C Launches Semantic Sensor Network Incubator Group
Staff, W3C Announcement

W3C has announced the creation of the "Semantic Sensor Network Incubator Group," sponsored by W3C Members CSIRO, Wright State University, and OGC. The group's mission is to begin the formal process of producing ontologies that define the capabilities of sensors and sensor networks, and to develop semantic annotations of a key language used by services-based sensor networks. The Group has been chartered through March 2010. From the 'Scope' of the Charter: "As networks of sensors become more commonplace there is a greater need for the management and querying of these sensor networks to be assisted by standards and computer reasoning. The OGC's Sensor Web Enablement activities have produced a services-based architecture and standards, including four languages for describing sensors, their capabilities and measurements, and other relevant aspects of environments involving multiple heterogeneous sensors. These standards assist, amongst other things, in cataloguing sensors and understanding the processes by which measurements are reached, as well as limited interoperability and data exchange based on XML and standardized tags. However, they do not provide semantic interoperability and do not provide a basis for reasoning that can ease development of advanced applications.

Ontologies and other semantic technologies can be key enabling technologies for sensor networks because they will improve semantic interoperability and integration, as well as facilitate reasoning, classification and other types of assurance and automation not included in the OGC standards. A semantic sensor network will allow the network, its sensors and the resulting data to be organised, installed and managed, queried, understood and controlled through high-level specifications. Sensors are different from other technologies, such as service-oriented architecture, because of the event-based nature of sensors and sensor networks and the temporal relationships that need to be considered. Further, when reasoning about sensors the various constraints, such as power restrictions, limited memory, variable data quality, and loosely connected networks, need to be taken into account. The SSN-XG will work on two main objectives: (1) the development of ontologies for describing sensors; (2) the extension of the Sensor Markup Language (SML), one of the four SWE languages, to support semantic annotations. The first objective, ontologies for sensors, will provide a framework for describing sensors. These ontologies will allow classification and reasoning on the capabilities and measurements of sensors and the provenance of measurements, and may allow reasoning about individual sensors as well as reasoning about the connection of a number of sensors as a macroinstrument. The sensor ontologies will, to some degree, reflect the OGC standards and, given ontologies that can encode sensor descriptions, understanding how to map between the ontologies and OGC models is an important consideration. The second objective, semantic annotation of sensor descriptions and of services that support sensor data exchange and sensor network management, will serve a similar purpose as that espoused by semantic annotation of Web services.
We will use as an input for this objective ongoing work on SML-S, which in turn borrows from SAWSDL (itself a W3C Recommendation) for semantic annotation of WSDL-based Web Services, and from SA-REST for semantic annotation of RESTful services."
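The SAWSDL approach the group plans to borrow works by attaching ontology references to otherwise plain XML descriptions. A minimal sketch of that idea, applied to a sensor description: the `sawsdl:modelReference` attribute is from the W3C SAWSDL Recommendation, but the sensor element names and the ontology URI here are invented for illustration, not taken from any SWE schema.

```python
# Sketch: SAWSDL-style semantic annotation of a sensor description.
# Element names and the ontology URI are hypothetical; only the
# sawsdl:modelReference attribute is from the SAWSDL Recommendation.
import xml.etree.ElementTree as ET

SAWSDL_NS = "http://www.w3.org/ns/sawsdl"
ET.register_namespace("sawsdl", SAWSDL_NS)

sensor = ET.Element("Sensor", {"id": "thermometer-01"})
output = ET.SubElement(sensor, "Output", {
    # Link the plain XML description to an ontology concept (URI assumed)
    "{%s}modelReference" % SAWSDL_NS:
        "http://example.org/ontology/sensors#AirTemperature",
})
output.text = "temperature"

xml_text = ET.tostring(sensor, encoding="unicode")
print(xml_text)
```

A reasoner that understands the referenced ontology can then classify this sensor's output as a kind of temperature observation, which the bare XML tags alone do not support.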

See also: the W3C Incubator Activity

Towards a Common Key Management Standard: Making Encryption Work
Staff, Computer Business Review

"Enterprises are increasingly aware of the need to implement encryption in order to protect their information assets, but this has emphasized the challenges around interoperability and effective key lifecycle management. As a result, a group of storage and security vendors has announced the Key Management Interoperability Protocol (KMIP), which is intended to standardize key management interfaces. The growing trend towards remote and mobile working, combined with the ubiquity of internet access and the prevalence of removable storage, has created a significant increase in the channels through which data can be lost. Unsurprisingly, this growth in loss vectors has resulted in a torrent of high-profile incidents that have led to valuable data being compromised. The higher visibility of the risks of data loss and the awareness of their effects on the bottom line (especially in this challenging economic environment) have encouraged greater investments from enterprises in information protection technologies, particularly encryption. Attempts to resolve these concerns have often been ad hoc and have therefore failed to remove obstacles to the wider and deeper adoption of encryption solutions. This may all change thanks to the welcome introduction of the proposed KMIP standard that has been formulated by a group of prominent enterprise storage and security vendors including Brocade, EMC, HP, IBM, LSI, [NetApp,] Seagate, and Thales. KMIP is intended to standardize and simplify the interactions between disparate key management mechanisms and encryption methods across the entire IT infrastructure. The long list of vendors that have already committed to the standard will almost guarantee its ratification and make it very likely to emerge as the foremost protocol in this marketplace. However, KMIP is still at an early stage of its development and is many steps away from being adopted by the industry."
[Note: As anticipated, a new OASIS KMIP TC has now been chartered; the Call for Participation includes information on registering to join this Technical Committee. Eligible persons wishing to gain TC voting status as of the first meeting (April 24, 2009) should register by April 09, 2009.]
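The interoperability problem KMIP targets can be pictured as a single client-side interface that works against any vendor's key server. The sketch below is purely illustrative: the class and method names are invented for this example and are not taken from the KMIP specification, which at this point is an early draft.

```python
# Hypothetical sketch of the kind of uniform key-management interface a
# protocol like KMIP aims to enable. All names here are invented for
# illustration; none come from the KMIP specification itself.
import os
from abc import ABC, abstractmethod

class KeyManagementClient(ABC):
    """One client-side API, regardless of which vendor's server answers."""

    @abstractmethod
    def create_key(self, algorithm: str, length_bits: int) -> str:
        """Ask the server to generate a key; return its unique identifier."""

    @abstractmethod
    def get_key(self, key_id: str) -> bytes:
        """Retrieve the key material for an authorized client."""

class InMemoryClient(KeyManagementClient):
    """Toy in-process stand-in for a real key server, for demonstration."""

    def __init__(self):
        self._store = {}

    def create_key(self, algorithm, length_bits):
        key_id = "key-%d" % (len(self._store) + 1)
        self._store[key_id] = os.urandom(length_bits // 8)
        return key_id

    def get_key(self, key_id):
        return self._store[key_id]

client = InMemoryClient()
kid = client.create_key("AES", 256)
print(kid, len(client.get_key(kid)))
```

The point of a standardized protocol is that the calling application keeps this one interface while the server behind it — tape library, disk array, or dedicated key manager — can change.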

See also: the OASIS KMIP TC Charter and Call for Participation

IETF NETCONF Configuration Protocol (bis)
Rob Enns, Martin Bjorklund, Juergen Schoenwaelder (eds), IETF Internet Draft

An initial draft version -00 specification has been published for the IETF Standards Track "NETCONF Configuration Protocol", bis level. Appendix B presents the (now Normative) XML Schema for NETCONF RPC and Protocol Operations. The Network Configuration Protocol (NETCONF) defined in this document provides mechanisms to install, manipulate, and delete the configuration of network devices. It uses an Extensible Markup Language (XML)-based data encoding for the configuration data as well as the protocol messages. The NETCONF protocol operations are realized on top of a simple Remote Procedure Call (RPC) layer. From the introduction: The NETCONF protocol uses a remote procedure call (RPC) paradigm. A client encodes an RPC in XML and sends it to a server using a secure, connection-oriented session. The server responds with a reply encoded in XML. The contents of both the request and the response are fully described in XML DTDs or XML schemas, or both, allowing both parties to recognize the syntax constraints imposed on the exchange. A key aspect of NETCONF is that it allows the functionality of the management protocol to closely mirror the native functionality of the device. This reduces implementation costs and allows timely access to new features. In addition, applications can access both the syntactic and semantic content of the device's native user interface. NETCONF allows a client to discover the set of protocol extensions supported by a server. These "capabilities" permit the client to adjust its behavior to take advantage of the features exposed by the device. The capability definitions can be easily extended in a noncentralized manner. Standard and non-standard capabilities can be defined with semantic and syntactic rigor... The NETCONF protocol is a building block in a system of automated configuration. XML is the lingua franca of interchange, providing a flexible but fully specified encoding mechanism for hierarchical content. 
NETCONF can be used in concert with XML-based transformation technologies, such as XSLT, to provide a system for automated generation of full and partial configurations. The system can query one or more databases for data about networking topologies, links, policies, customers, and services. This data can be transformed using one or more XSLT scripts from a task-oriented, vendor-independent data schema into a form that is specific to the vendor, product, operating system, and software release. The resulting data can be passed to the device using the NETCONF protocol.
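The RPC paradigm described above can be made concrete with a minimal request. The sketch below builds a NETCONF `<get-config>` RPC for the running datastore; the base namespace and operation names are from the NETCONF specification, while session transport (e.g., over SSH) is out of scope here.

```python
# Minimal sketch of a NETCONF <rpc> request asking for the running
# configuration. Namespace and element names follow the NETCONF base
# specification; transport and framing are omitted.
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"
ET.register_namespace("", NC)  # serialize with NETCONF as default namespace

rpc = ET.Element("{%s}rpc" % NC, {"message-id": "101"})
get_config = ET.SubElement(rpc, "{%s}get-config" % NC)
source = ET.SubElement(get_config, "{%s}source" % NC)
ET.SubElement(source, "{%s}running" % NC)  # query the running datastore

request = ET.tostring(rpc, encoding="unicode")
print(request)
```

The server's `<rpc-reply>` carries the same `message-id`, which is how a client correlates responses on a session where multiple requests may be outstanding.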

See also: the IETF Network Configuration (NETCONF) Working Group

Apache Tuscany Java SCA 2.0 M1 Released
Dilip Krishnan, InfoQueue

The Apache Tuscany team announced the release of 2.0 M1 of the Java Service Component Architecture (SCA) project. SCA defines a technology-neutral component and assembly model that lets business application developers focus on implementing business logic and composing components into business solutions without worrying about technology concerns. The latest version of SCA is being standardized at OASIS as part of Open Composite Service Architecture (Open CSA). "Apache Tuscany provides a runtime based on the Service Component Architecture (SCA) which is a set of specifications aimed at simplifying SOA Application Development. The SCA specifications are being standardized at OASIS as part of the Open Composite Services Architecture (Open CSA). The Apache Tuscany SCA 2.0-M1 release is the first milestone on the road to a full Apache Tuscany SCA 2.0 release. The goal of Apache Tuscany SCA 2.0 is to provide an OSGi based SCA runtime that is compliant with the OASIS SCA specifications. With this first milestone release, a solid OSGi foundation is in place to support the development, build, testing and deployment of Tuscany modules and extensions following OSGi best practices. The first steps have also been taken to incorporate the latest OASIS SCA draft specifications. In subsequent milestone releases the compliance gap with the OASIS specifications will continue to be narrowed and, now that the OSGi infrastructure is in place, OSGi/SCA integration at the application level will be explored further. The Apache Tuscany SCA 2.0-M1 release includes implementations of the main SCA specifications and some recent updates from Open CSA drafts." Raymond Feng: "From an OSGi-centric view, SCA can be used to describe the OSGi service remoting and Quality of Services (QoS) and an SCA runtime such as Tuscany can be the Distribution Software for RFC 119.
From an SCA-centric view, Tuscany provides implementation.osgi to reuse OSGi bundles as coarse-grained SCA components in an SCA composite application so that they can be assembled with other business services beyond OSGi..."
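The assembly model at the heart of SCA is a composite descriptor that wires components to their implementations. The sketch below generates a minimal composite; the component and class names are invented for illustration, and the OSOA SCA 1.0 namespace is used here as an assumption (the OASIS drafts define their own).

```python
# Sketch: a minimal SCA composite descriptor. Component and class names
# are hypothetical; structure follows the SCA assembly model.
import xml.etree.ElementTree as ET

SCA = "http://www.osoa.org/xmlns/sca/1.0"  # assumed OSOA 1.0 namespace
ET.register_namespace("", SCA)

composite = ET.Element("{%s}composite" % SCA, {"name": "PaymentComposite"})
component = ET.SubElement(composite, "{%s}component" % SCA,
                          {"name": "PaymentService"})
# The implementation technology is pluggable: implementation.java here,
# but an OSGi bundle (as in Tuscany's implementation.osgi) would slot
# into the same assembly the same way.
ET.SubElement(component, "{%s}implementation.java" % SCA,
              {"class": "com.example.PaymentServiceImpl"})

composite_xml = ET.tostring(composite, encoding="unicode")
print(composite_xml)
```

This pluggability is exactly what Feng's comment exploits: the composite stays the same while the implementation element swaps a Java class for an OSGi bundle.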

See also: the OASIS Open Composite Services Architecture (CSA) Member Section

Open Like an Un-Gagged Mouth
Rick Jelliffe, O'Reilly Technical

"ODF 1.3? At the OASIS ODF Technical Committee archives you can see the current items people have sent in for "ODF Next-Gen" (i.e., ODF 1.3) in the last fortnight. The window for these requests is open for another month and a half. I have made three requests so far, using the appropriate form: for CJK Tables (I have blogged on this), for CJK Text Grid Enhancements, and for better support for Markup Compatibility and Extensibility. ODF 1.2? Meanwhile, something interesting has come out on the ODF comments list. It seems that OASIS rules actually ban Technical Committee members from participating on the comments list with non-committee members. Communication is a one-way affair, an offering to silent gods. Now this may seem incredible to you. Surely OASIS is the open organization? Well, in this regard it isn't: "The purpose of the TC's public comment facility is to receive comments from the public and is not for public discussion." [Update: however they can still participate in other forums: see the recent welcome clarification. I have now confirmed that the registration page for the appropriate public forum now has the appropriate checkbox and is functional.] More than a decade ago, standards bodies were generally freaked out that the Internet was making it so easy to find information that the publishing-revenue model of standards could not last. It is interesting that OASIS seems to have had exactly the same problem: an inadequate revenue model because it could not charge for its standards. But the solution of decreased openness seems a bad one... So if OASIS ODF TC committee members cannot discuss things on the comments list, who are they? Six months ago I put the boot into the levels of participation in a blog item which was intended to be provocative in the sense that I was concerned about the corporate concentration of power in the ODF TC..."

[Note, OpEd: It would be interesting to create a description of feedback mechanisms used by various SSOs/SDOs in terms of supporting non-members' "discussion" of a developing specification. If a commenter asks a question, and someone answers (on-list), is that a "discussion"? What if there are followup questions, and various answers? The BSI British Standards Draft Review System (mentioned by Patrick Durusau) uses a web interface, but I don't know if comment threading is supported. Rob Weir opined that "the most appropriate vehicle for submitting public comments would be via something like JIRA — an issue tracking system. The use of a list for submitting comments is pretty much a hack: User signs up, agrees to the Feedback Licence, posts their comment, maybe waits for a day or two to see if they get a response, and then signs off the list. It is essentially a transactional list for depositing comments. Having a free-ranging discussion makes our job of tracking individual comments more difficult. It also makes it more difficult for the subscriber who thought they were on a low-traffic comment list and finds their inbox filled with discussion posts." The OASIS 'opendocument-users' list mentioned in the thread is of type "-users"; these "-users" lists are "to assist those adopting an OASIS Standard or specification", and thus (apparently [?]) are inappropriate for comments/questions/discussion on a draft specification.]

See also: Rob Weir on JIRA comment tracking


XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation
Microsoft Corporation
Oracle Corporation
Sun Microsystems, Inc.
