Created: August 10, 2007.

OGC Releases Transducer Markup Language (TML) Implementation Specification.


An announcement from The Open Geospatial Consortium (OGC) describes the publication of the OpenGIS Transducer Markup Language (TML) Implementation Specification as an approved OpenGIS Publicly Available Standard.

Sensor systems "have two basic parts: a sensing element and a transducer that converts energy from one form to another form (usually an electric signal) that can be interpreted. The OGC TML specification defines the conceptual model and XML Schema for describing transducers and supporting real-time streaming of data to and from sensor systems. TML thus defines (a) a set of models describing the response characteristics of a transducer, and (b) an efficient method for transporting sensor data and preparing it for use with other data through spatial and temporal associations."

TML response models "are formalized XML descriptions of known hardware behaviors. The models can be used to reverse distorting effects and return artifact values to the phenomena realm. TML provides models for a transducer's latency and integration times, noise figure, spatial and temporal geometries, frequency response, steady-state response and impulse response. Traditional XML wraps each data element in a semantically meaningful tag. The rich semantic capability of XML is in general better suited to data exchange rather than live delivery where variable bandwidth is a factor. TML addresses the live scenario by using a terse XML envelope designed for efficient transport of live sensor data in groupings known as TML clusters. It also provides a mechanism for temporal correlation to other transducer data."

The TML Implementation Specification has been produced as part of the OGC Sensor Web Enablement activity. In this effort, OGC members are "specifying interoperability interfaces and metadata encodings that enable real time integration of heterogeneous sensor webs into the information infrastructure. Developers will use these specifications in creating applications, platforms, and products involving Web-connected devices such as flood gauges, air pollution monitors, stress gauges on bridges, mobile heart monitors, Webcams, and robots as well as space and airborne earth imaging devices."

As described in OGC's Sensor Web Enablement Overview, "A sensor network is a computer accessible network of many, spatially distributed devices using sensors to monitor conditions at different locations, such as temperature, sound, vibration, pressure, motion or pollutants. A Sensor Web refers to web accessible sensor networks and archived sensor data that can be discovered and accessed using standard protocols and application program interfaces (APIs). OGC's Sensor Web Enablement (SWE) presents many opportunities for adding a real-time sensor dimension to the Internet and the Web. This has extraordinary significance for science, environmental monitoring, transportation management, public safety, facility security, disaster management, utilities' Supervisory Control And Data Acquisition (SCADA) operations, industrial controls, facilities management and many other domains of activity. The OGC voluntary consensus standards setting process coupled with strong international industry and government support in domains that depend on sensors will result in SWE specifications that will quickly become established in all application areas where such standards are of use."

An OGC Network comment notes that "TML is a well documented specification, rich in detail, and has across-the-board application for use in any environment requiring sensor data and the fusion of data from multiple sensor types, including marine and weather phenomena data. Moreover, TML addresses ambiguity with respect to time and space and also includes full bi-directional control benefits... TML has been widely used throughout the Department of Defense and other government agencies. The USAF and National Geospatial-Intelligence Agency sponsored the development of TML and have embraced the use of the standard as a key enabling technology to fuse disparate sensor data ranging from imagery and signals data to measurements."

The TML document was submitted to the Open Geospatial Consortium by several organizations: 3eTI, Intergraph, Innovative Research Ideas and Services Corporation, National Geospatial-Intelligence Agency, Oak Ridge National Laboratory, Radiance, Inc., SeiCorp, Inc., and York University. Other contributors included Rockwell-Collins, Air Force Research Laboratory/Sensors Directorate, Lockheed-Martin, US Special Operations Command (USSOCOM), General Dynamics, Khoral, Modus Operandi, SSAI, SAIC, and Northrop Grumman.

The OGC Sensor Web Enablement Working Group has produced seven candidate specifications, and others are planned:

  • Observations and Measurements (O&M): Standard models and XML Schema for encoding observations and measurements from a sensor, both archived and real-time.

  • Sensor Model Language (SensorML): Standard models and XML Schema for describing sensor systems and processes; provides information needed for discovery of sensors, location of sensor observations, processing of low-level sensor observations, and listing of taskable properties.

  • Transducer Markup Language (TransducerML or TML): The conceptual model and XML Schema for describing transducers and supporting real-time streaming of data to and from sensor systems.

  • Sensor Observation Service (SOS): Standard web service interface for requesting, filtering, and retrieving observations and sensor system information. This is the intermediary between a client and an observation repository or near real-time sensor channel.

  • Sensor Planning Service (SPS): Standard web service interface for requesting user-driven acquisitions and observations. This is the intermediary between a client and a sensor collection management environment.

  • Sensor Alert Service (SAS): Standard web service interface for publishing and subscribing to alerts from sensors.

  • Web Notification Services (WNS): Standard web service interface for asynchronous delivery of messages or alerts from SAS and SPS web services and other elements of service workflows.

"Along with other specifications coming out of the OGC's Sensor Web Enablement activity, TML enables developers to create loosely coupled open systems for discovering, controlling and using Web-accessible sensors and transducers of all kinds, from simple digital switches to satellite-borne imaging systems."

Bibliographic Information

OpenGIS Transducer Markup Language (TML) Implementation Specification. 'Transducer Markup Language 1.0.0'. Edited by Steve Havens. Copyright © 2006-2007 Open Geospatial Consortium, Inc. Date: 2007-07-02. 258 pages. OGC reference number: OGC 06-010r6. Version: 1.0.0. Category: OpenGIS Implementation Specification; OpenGIS Publicly Available Standard, Engineering Specification. File: '06-010r6_Transducer_Markup_Language_TML.pdf', source PDF.

Normative Annex A in the TML specification presents the relevant XML Schema Documents. Related OpenGIS XML schemas are available online.

Transducer Markup Language (TML) Enablement

According to the OGC Network description of the Transducer Markup Language, TML enables the following:

  • Interoperability and fusion of disparate sensor data: TML enables the interoperability of heterogeneous sensor systems by providing a self-describing data exchange protocol based on XML. This enables the fusion of multi-sensor, multi-source data into a common operating picture.
  • Data exchange across multiple sensor types: TML is application independent, therefore well suited to address sensor data exchange across multiple operational domains.
  • Registration of different sensors and the correlations between them: TML maintains relative and absolute time sequencing of data from various (or all, as required) sensors within a system. TML enables analysts to compare temporally and spatially similar collected data, and/or to compare disparate temporally or spatially collected data, thus providing multiple domain coupling (a minimal correlation sketch follows this list).
  • One common processor handles all incoming sensor data: TML as a common sensor model and format facilitates the development of a common processor and improves multi-sensor/type data fusion. This enables each sensor to have a common memory structure.
  • Faster and more accurate targeting: TML provides high fidelity exchange of sensor data and facilitates precision space-time registration as well as error characterization of data.
  • Plug and play sensor: TML promotes plug-n-play sensors and is therefore adaptable to new sensors because no modification is required to the TML enabled processor.
  • Preservation of raw sensor data: Through time-tagging TML allows precise ordering of raw sensor data and the reconstruction of raw data streams. Moreover, this ability ensures data can be smartly archived and retrieved without being corrupted or having to search through volumes of data. The operator can easily set up a search engine to "find" specific data either temporally or spatially.
  • Sensor discovery: TML enables a user to obtain a list of available sensors on a real-time basis, and to select specific sensors from which to receive data.
  • Bi-directional Control: TML seamlessly allows control of transducers in the same data stream as sensor data. Complex sensors with included control systems such as image positioning, camera shutter, local alarms, etc. are controlled from the same client application.
  • Small Highly Efficient Footprint: TML, by design, has a very low overhead structure which allows for much higher bandwidth use. Very high rate clocks can be used to time stamp data.
  • Sensor and Transducer Modeling: A large array of modeling parameters are included in TML. Each sensor or transducer can be characterized with a full range of parameters including transfer functions, calibration data, unit step function, etc.
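
The registration and correlation capability noted above lends itself to a small illustration. The following Python sketch is not drawn from the specification; it shows one way a client might pair decoded samples from two transducers by acquisition time so that temporally similar data can be compared. The Sample type, the align_by_time helper, the tolerance value, and the sample UIDs are all hypothetical.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Sample:
        """A decoded transducer sample: source UID, acquisition time (s), value."""
        uid: str
        time: float
        value: float

    def align_by_time(a: List[Sample], b: List[Sample],
                      tolerance: float = 0.05) -> List[Tuple[Sample, Sample]]:
        """Pair each sample in `a` with the closest-in-time sample in `b`
        that lies within `tolerance` seconds, so data from two sensors
        can be compared side by side."""
        pairs = []
        for s in a:
            closest = min(b, key=lambda t: abs(t.time - s.time), default=None)
            if closest is not None and abs(closest.time - s.time) <= tolerance:
                pairs.append((s, closest))
        return pairs

    # Example: correlate a temperature sensor and a pressure gauge by time.
    thermo = [Sample("AAASensors:Temp01", 10.00, 21.4),
              Sample("AAASensors:Temp01", 10.10, 21.5)]
    gauge = [Sample("AAASensors:Pres01", 10.02, 101.3),
             Sample("AAASensors:Pres01", 10.11, 101.2)]
    print(align_by_time(thermo, gauge))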

Transducer Markup Language: Specification Excerpts

Scope: This document describes TML and how it captures the information necessary to both understand and process transducer data. TML is intended for communicating transducer data between a transducer node (containing one or more transducers) and a transducer processing/control device (application). Descriptions of how TML captures the actual data, together with descriptions of necessary information such as system calibration, transducer behavior, conditions of operation, and data collection parameters critical to the logical data structure model, are all captured within TML. This document specifically describes the TML transducer behavior models and TML data models. It does not describe a service for delivering TML-structured data, but there is a brief discussion of SOA and streaming platform considerations.

TML Concepts: Transducer Markup Language (TML) is a language for capturing and characterizing not only data from transducers, but information necessary for the processing and understanding of that data by the eventual recipient of the transducer data. Both sensors and transmitters can be captured and characterized within TML, leading to the use of the term transducer rather than sensor. TML handles not only static but also streaming transducer data. TML permits the data stream to handle live transducer data both being added to the stream and being deleted from the stream.

Descriptions relevant to the later processing and understanding of the data, including calibration, operational conditions, device settings, transducer properties and characteristics, relationships of transducers to each other in a multi-component system and system behavior are all among the properties captured in a TML description. Logical models, behavioral models, transfer functions, and other information critical to the processing of data are all captured within TML.

TML is capable of precise time-tagging of data, so that it is possible to know precisely when a physical phenomenon was measured at the individual measurement level, and also captures latency or delay information at a fine resolution. This enables the precise determination of when a data point was taken, as well as aiding in interpolation between data points and the reconstruction of events.

Interoperability and Integration of Transducer Data: The primary purpose of TML is to enable widespread sharing while providing greater understanding of transducer data, such that we can gain a better picture of our world through transducers. Transducers are our interface to the real world. Just as our minds rely on multiple senses to gain a better situational understanding of our environment, so must we utilize multiple modalities of transducers to gain a better situational understanding of our world.

To manage the complex task of processing and associating millions of sensory inputs, we must utilize computing machines to coordinate and associate data and represent only interesting data in a summarized manner, such that humans can make higher-level decisions in a timely manner. To facilitate this task we must enable a seamless and unambiguous communication language for transducers and computing machines to act as a homogeneous system. To accomplish this, a transducer communication language must be as complete, consistent, and efficient as possible. Many times these properties are mutually exclusive.

TML represents a major technological and economic growth opportunity, a source of cost savings or avoidance for commercial activities, and a source of revolutionary efficiencies in State and Federal Government. TML transcends the need for direct or 'stove-piped' approaches to data dissemination designed for specific domains, and enables the sharing/fusion of cross-domain information... TML enables the user to receive multiple types of data, including but not limited to SIGINT, IMINT, and MASINT, and to visualize it in the manner most suitable to the current operation. The user will no longer be tied to receiving product data that only marginally suits the current situation. TML provides for rapid reception of selected and disparate sensor data through existing architectures, horizontally fuses heterogeneous sensor data, and promotes data interoperability to present an integrated picture...

TML Data Stream: The TML data stream is a product generated from a TML system. The TML data stream carries the time-varying data representing various external and internal phenomena. The opening TML tag initiates a stream. A stream may be terminated abruptly at any point, in which case the reading machine should add a closing TML tag to make the terminated stream valid XML. If the stream is terminated normally, a closing tag from the sender terminates the stream.

TML is a time-tagged implementation of XML: a system clock count is inserted into the start tag of TML data elements to indicate the relative time (from the sysClk), or in some cases the absolute time, at which the data contained in each TML element was acquired. The system clock should be of sufficient resolution to adequately relate time differences at a transducer sample sub-sampling interval (approximately an order of magnitude faster than the fastest sample clock in the system) and should have enough digits to minimize the possibility of a rollover.
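
As a rough illustration of the clock arithmetic described above, the sketch below converts a relative sysClk count carried in a data element's start tag into an absolute time, given one known correspondence between a clock count and a reference instant such as a data description would supply. This is a minimal sketch under stated assumptions: the clock rate, the reference values, and the <data clk="..."> markup are invented for the example and are not the normative TML schema.

    from datetime import datetime, timedelta, timezone
    import xml.etree.ElementTree as ET

    # Assumed correspondence between a sysClk count and absolute time,
    # as a TML data description would provide (all values are made up).
    CLK_HZ = 1_000_000                     # 1 MHz system clock
    REF_COUNT = 1_200_000                  # sysClk count at the reference instant
    REF_TIME = datetime(2007, 7, 2, 12, 0, 0, tzinfo=timezone.utc)

    def count_to_time(count: int) -> datetime:
        """Translate a relative sysClk count into absolute time."""
        return REF_TIME + timedelta(seconds=(count - REF_COUNT) / CLK_HZ)

    # Hypothetical time-tagged data element (illustrative markup only).
    element = ET.fromstring('<data clk="1203500">21.7</data>')
    print(count_to_time(int(element.get("clk"))), element.text)

The printed time is 3.5 milliseconds after the reference instant, which is the kind of sub-sample resolution the passage above calls for.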

The transducer is the interface between the data world of computers and the real world. All information and actions between the two domains go through a transducer. A transducer therefore transforms a real world phenomenon into data or vice-versa. TML will characterize the what, where, and when aspects of transducer data.

TML Structure: TML provides a self-contained, lightweight XML envelope for efficient transport of transducer data as well as all necessary information to decode, process, analyze, and understand the transducer data. Transducer system, spatial, and temporal metadata are also characterized and carried along with the data, allowing analysis and processing of transducer data using only information carried in the TML data stream.

TML achieves interoperability through several key features. These include: (1) ability to plug TML behavior and response models into a standardized transducer definition; (2) ability to interface with and utilize arbitrary sensor output; (3) normalization of discordant datasets using TML behavior models...

TML Application Notes

Excerpted and adapted from the online TML Application Notes:

  • Note #1: Why Would I Use TML? — Transducer Markup Language (TML) is an XML-based system for capturing, characterizing and enabling the transport of raw sensor data. It has elements designed to make it a self-contained package, including not only the raw data but also information about that data which is necessary to process it later.

    In common terms, TML gives you the raw data and describes that data in a self-contained stream. Once you get a TML data stream you won't need to consult other sources of information before processing the data. Many systems deliver data, but the data is usually at least somewhat processed, and much of the information about that data ('metadata') is lost. This could include calibration information, sensor performance data, geometry about a system, and precise data about the time that data was captured.

    TML captures the raw data, information about that data such as when it was captured and the conditions under which it was captured, and delivers it to you whether it comes from a single source or multiple sources. How TML does this will be covered in future TML Application Notes, but for now, the key question is: why would someone want to use TML? You would use TML if you:

    • have multiple sources of data that you need to fuse together
    • need to reconstruct an event which is described by sensor data
    • need to process data and need to have all relevant metadata in one place
    • want to develop common tools to process, display, analyze and fuse data from multiple sources
    • want to exchange raw data with others

  • Note #2: TML and Time — TML allows you to know precisely when something happened because all sensor data is time-tagged. What exactly is the relationship between TML and time?

    TML is precise about time. Every piece of transducer data is captured in a 'data' element, and key to that data element is a time stamp. In the world of capturing sensor data, sometimes absolute time is available and sometimes it isn't. When an acceptable local time source is available, TML can use it. When it isn't, we generally use a local high-resolution system clock — which could be a simple counter.

    TML tags data using the best available clock, and relies on the data description having enough information to determine the correct time corresponding to that clock value if it's just a counter. TML's data description tells you what's in the data, then includes information tying a particular local clock value to the actual time of day. If an actual time isn't available, TML lets you provide data purely with relative clock values.

    In essence, TML provides a mechanism by which data can be precisely time-tagged with precise, accurate clocks, and then tells you how to correspond local clock values to actual time.

    In systems with multiple sensors, there are often multiple clocks, and this is where it becomes critical for the system designer to know the relationship between the various local clocks being used. For maximum benefit, TML-enabled systems provide information within the data descriptions about the multiple clocks so that they can be correlated during the data processing step.

  • Note #3: TML and Multiple Data Sources — TML allows you to handle multiple data sources and to fuse them for processing and analysis, using each data source to enhance and complement the others.

    TML recognizes that most systems are made up of multiple components. In fact, TML has a system element made up of individual components. In a TML data stream, each individual component is known as either a transducer or a process, and each one is given a unique identifier. When these elements are combined into a single data stream, data from each transducer or process is tagged with its identifier in addition to a clock value, allowing you to determine precisely the data source as well as when it was captured.

    In addition to describing data from an individual transducer or process, TML also has something called a 'relation', which describes both logical and physical relationships between transducers and processes. For example, to determine the resolution of an image, it is necessary to know the performance specifications of a camera, but you also need to know its location and position relative to items in its field of view. If there is a GPS system with a camera, you will need to know their location relative to each other to determine the resolution within the camera's image.

    In this example, both the GPS system's physical and logical relationships to the camera are important. The GPS may be a support sensor giving a location or it may be the source of time stamps. TML will capture that relationship along with the performance capabilities of the camera.

  • Note #4: In TML, Everything's A Transducer — TML captures any time-varying data and makes it possible to characterize data from any source. It does this by treating everything as a transducer or sensor. This means that any source of data can be characterized in TML, since it can be described using the TML transducer element. The data doesn't even have to be numerically quantifiable; TML can handle text data or any form of binary data just as easily, although text data might be more difficult to fuse later on.

    The basic theory behind TML is that a sensor is just a source of data, and the format of data from that source can be described in the TML data description. This means that even if there isn't a physical device involved, it can still be treated like one.

    Something like a camera whose output is converted to JPEG format, then compressed for transmission, can be expressed in terms of multiple transducers; the camera, the JPEG conversion and the compression at the end can all be considered transducers. Technically the JPEG converter and the compression would be considered processes, but they're treated much the same as transducers are, and for the purposes of this discussion are pretty much equivalents.

    This means that domain-specific things, such as a physical constant or property associated with a particular process, can be expressed with time-varying values, and that the values don't have to be considered 'metadata' to be included in the data description as part of some other transducer.

  • Note #5: TML Standardizes Data — Transducer Markup Language (TML) standardizes data from any time-varying source and therefore makes it possible to develop and use common processing tools to handle it. TML does this through the use of a common data model. Simply put, this means that TML takes all sensor output and captures it in the same way. All cameras have their output captured in the same way, all proximity sensors have their data captured in the same way, and so on.

    Typically two sensors from different manufacturers, even if they're of the same type, might not store their output data in the same manner. TML standardizes things by storing the information from any system of this type in the same way.

    In this way, anyone reading TML-format data only has to read one format rather than different formats for each sensor. All sensors would output the same type of data in the same place, making it possible for everyone to know at once how to process data from that type of sensor...

  • Note #6: TML Keeps Track of Data Streams — TML standardizes data from any source and allows the integration of multiple data sources into a single data stream. There can be systems of transducers, and systems of systems, and individual components of systems. A frequently asked question is how TML keeps track of all of this.

    The answer lies in identification tags, which are represented in TML as UIDs, or Unique IDentifiers. Key TML elements, such as transducer descriptions or geometry models, all have their own UID element. The UID allows the TML producer to assign that TML element a unique identifier.

    A complete UID might look like "AAASensors:Sys0001", which would convey the information that "AAASensors" has a "Sys0001" identifier. While there may be many elements associated with "AAASensors", they will each have a different identifier. A UID functions much like a telephone number: while there may be many people with the telephone number 555-1234, only one such number will exist within each area code (e.g., 205). Similarly, while there may be more than one element identified as "Sys0001", only one such label will exist within "AAASensors". (A small parsing sketch appears after these notes.)

    Once a UID is assigned, all future references to that particular TML element can be done via the UID to specify exactly which element is of interest. It is also important to note that UIDs are assigned by the manufacturer or developer when a TML enabled transducer/sensor is being created, and that the use of UIDs is totally transparent to the end user.

  • Note #7: TML Improves Uncertainty Calculations — TML has built-in capabilities to improve uncertainty calculations. Accuracy is the key component of TML; therefore every value carried within TML has an accuracy associated with it. This allows the user to know, as far as possible, how much uncertainty is associated with each numerical value.

    For example, latency in a system contributes to uncertainty, but if someone provides a value such as 3 seconds for latency, is that always applicable? What if the latency is really 3 seconds plus or minus 50 milliseconds (uncertainty)? TML recognizes this and captures not only a value but an accuracy as well (e.g., 3 seconds plus or minus 50 ms), so that the end user can better determine overall system uncertainty.

    In addition to providing a rich and robust environment for capturing data, TML also has facilities to characterize sensor and system accuracies. A wide variety of elements within TML capture both value and accuracy, including the descriptions of system performance, spatial geometry, latency, and temporal behavior. In everything from relative to absolute values, from offsets to ambiguity spaces, TML captures accuracies in as many places as possible. This all contributes to having enough data so that a TML client can calculate the overall system uncertainty as accurately as possible...

  • Note #8: TML and Overhead — A commonly asked question is 'How much data overhead is incurred in the use of TML?' The answer is "very nominal". TML does impose some overhead but it's a relatively fixed amount, and the more data you send the less significant TML's overhead is.

    The Sensor Data Description section contains all necessary information to characterize the sensors' 'behavior' and phenomena data, and it can be sent in advance, once, as a separate text file. There is no need to repeatedly send the Description or any part of the Description along with the data stream unless and until there is a change to the sensor or sensor system.

    This means that the only recurring information which needs to be sent with the data are the TML data tags which identify portions of the data stream for later analysis. The data tags tend to be a fixed length, consisting primarily of <data> and </data> tags which also include time and reference identifiers necessary for later correlation and analysis. Typically these tags contain 60-70 bytes of data for each data cluster, while data clusters typically range in size from 10K to 100K...
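
The figures quoted in Note #8 make the overhead easy to estimate. A minimal back-of-the-envelope sketch in Python, assuming the upper end of the 60-70 byte tag figure and the 10K and 100K byte cluster sizes mentioned in the note:

    TAG_BYTES = 70                           # upper end of the 60-70 byte tag figure
    for cluster_bytes in (10_000, 100_000):  # 10K and 100K byte data clusters
        overhead = TAG_BYTES / (cluster_bytes + TAG_BYTES)
        print(f"{cluster_bytes:>7} byte cluster -> {overhead:.2%} tag overhead")

This prints roughly 0.70% and 0.07%, consistent with the "very nominal" characterization above.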
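
Returning to Note #6, the short sketch below shows one way a client might split and index producer-qualified UIDs such as "AAASensors:Sys0001". The split-on-colon convention is inferred from the example in that note; the helper name, the extra UIDs, and the dictionary layout are hypothetical.

    def split_uid(uid: str) -> tuple[str, str]:
        """Split a producer-qualified UID such as 'AAASensors:Sys0001'
        into its producer namespace and local identifier."""
        namespace, _, local = uid.partition(":")
        return namespace, local

    # Index element descriptions by full UID so later references resolve unambiguously.
    descriptions = {}
    for uid in ("AAASensors:Sys0001", "AAASensors:Cam0001", "BBBSensors:Sys0001"):
        producer, local = split_uid(uid)
        descriptions[uid] = {"producer": producer, "local_id": local}

    # 'Sys0001' appears twice, but the full UIDs remain unique, as the note describes.
    print(descriptions["AAASensors:Sys0001"], descriptions["BBBSensors:Sys0001"])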

About The Open Geospatial Consortium

"The Open Geospatial Consortium, Inc (OGC) is an international industry consortium of 352 companies, government agencies and universities participating in a consensus process to develop publicly available interface specifications. OpenGIS Specifications support interoperable solutions that 'geo-enable' the Web, wireless and location-based services, and mainstream IT. The specifications empower technology developers to make complex spatial information and services accessible and useful with all kinds of applications.

Through its Specification Program, Interoperability Program, and Outreach and Community Adoption Program, OGC develops, releases and promotes open standards for spatial processing. In the OGC Specification Program the Technical Committee and Planning Committee work in a formal consensus process to arrive at approved (or "adopted") OpenGIS Specifications. The OGC Interoperability Program is a series of hands-on engineering initiatives to accelerate the development and acceptance of OpenGIS Specifications. OGC and its members offer resources to help technology developers and users take advantage of OGC's open standards. Technical documents, training materials, test suites, reference implementations and other interoperability resources developed in OGC's Interoperability Initiatives are available on the OGCNetwork. In addition, OGC and its members support publications, workshops, seminars and conferences to help technology developers, integrators and procurement managers introduce OGC plug and play capabilities into their architectures.

Requests for proposals (RFPs), requests for information (RFIs), and requests for comment (RFCs) are documents that are published to solicit input into OGC's formal Technical Committee process for consensus-based development of open interfaces. Calls for Participation (CFPs), Requests for Quotation (RFQs), and Requests for Technology (RFTs) are documents that are published prior to the launch of OGC Interoperability Initiatives (testbeds, pilot projects, planning studies etc.). Interoperability Program Reports (IPRs) and Draft Interoperability Program Reports (DIPRs) are generated in these Interoperability Initiatives.

The standards tracks of OGC and ISO are fully coordinated through shared personnel and through various resolutions of ISO TC211 and OGC. They are often complementary and where they overlap, there is no competition, but common action (e.g., in the geometry model). OGC provides fast-paced specification development and promotion of standards adoption, similar to other industry standards consortia such as W3C, IETF, and OMG. ISO is the dominant de jure international standards development organization (SDO), providing international government authority important to institutions and stakeholders. Through OGC's cooperative relationship with ISO, many of OGC's OpenGIS Specifications either have become ISO standards or are on track to become ISO standards. OGC maintains contact with a number of other standards organizations (W3C, IETF, OMG, AMIC and others), generally offering expertise related to spatial issues and receiving expertise necessary to ensure that OGC's standards framework is consistent with other IT standards frameworks..."
