This issue of XML Daily Newslink is sponsored by:
ISIS Papyrus http://www.isis-papyrus.com
- W3C Web Services Resource Access WG Publishes Six Last Call Working Drafts
- Interoperability of Medical Device Information and the Clinical Applications: An HL7 RMIM Based on IEEE 11073 DIM
- Last Call Review for Encrypted Key Package Content Type and Algorithms
- W3C Media Annotations Working Group Publishes Drafts
- OSGi Service Platform Enterprise Specification Release 4, Version 4.2
- CACM: A View of Cloud Computing
- Collaboration Tools for Global Software Engineering
W3C Web Services Resource Access WG Publishes Six Last Call Working Drafts
Doug Davis, Ashok Malhotra, Katy Warr, Wu Chou (et al, eds) W3C Technical Reports
W3C announced the publication of six Last Call Working Draft specifications from the WSRA Working Group. The W3C Web Services Resource Access (WSRA) Working Group, chaired by Bob Freund, is chartered to standardize a general mechanism for accessing and updating the XML representation of a resource-oriented Web Service and metadata of a Web Service, as well as a mechanism to subscribe to events from a Web Service. The drafts note the most recent changes in a "Change Log" section. The Last Call period extends through 11-May-2010.
The Web Services Enumeration (WS-Enumeration) specification "describes a general SOAP-based protocol for enumerating a sequence of XML elements that is suitable for traversing logs, message queues, or other linear information models... There are numerous applications for which a simple single-request/single-reply metaphor is insufficient for transferring large data sets over SOAP. Applications that do not fit into this simple paradigm include streaming, traversal, query, and enumeration. This specification defines a simple SOAP-based protocol for enumeration that allows the data source to provide a session abstraction, called an enumeration context, to a consumer that represents a logical cursor through a sequence of data items. The consumer can then request XML element information items using this enumeration context over the span of one or more SOAP messages. Somewhere, state MUST be maintained regarding the progress of the iteration. This state MAY be maintained between requests by the data source being enumerated or by the data consumer. WS-Enumeration allows the data source to decide, on a request-by-request basis, which party will be responsible for maintaining this state for the next request. In its simplest form, WS-Enumeration defines an operation, Enumerate, used to establish the creation of an enumeration session and another operation, Pull, which allows a data source, in the context of a specific enumeration, to produce a sequence of XML elements in the body of a SOAP message. Each subsequent Pull operation returns the next N elements in the aggregate sequence..."
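The enumeration-context pattern described above can be sketched in a few lines. This is a toy, transport-free illustration of the consumer's cursor semantics, not code from the draft; the class and method names are invented stand-ins for the Enumerate and Pull operations.

```python
# Hypothetical sketch of the WS-Enumeration consumer pattern: an Enumerate
# call yields an enumeration context (a logical cursor), and repeated Pull
# calls advance it until the data source signals end-of-sequence.

class DataSource:
    """Toy stand-in for a WS-Enumeration data source holding a log."""
    def __init__(self, items):
        self._items = list(items)

    def enumerate(self):
        # Enumerate establishes the session; the context encodes the cursor.
        return {"cursor": 0}

    def pull(self, context, max_elements):
        # Pull returns up to N elements plus an updated context; here the
        # data source keeps no state, so the cursor travels with the client.
        start = context["cursor"]
        batch = self._items[start:start + max_elements]
        new_context = {"cursor": start + len(batch)}
        end_of_sequence = new_context["cursor"] >= len(self._items)
        return batch, new_context, end_of_sequence

source = DataSource(["<e>%d</e>" % i for i in range(7)])
context = source.enumerate()
received, done = [], False
while not done:
    batch, context, done = source.pull(context, max_elements=3)
    received.extend(batch)
```

Returning the context to the client on every Pull mirrors the specification's point that either party may hold the iteration state between requests.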
Web Services Eventing (WS-Eventing) describes "a protocol that allows Web services to subscribe to or accept subscriptions for notification messages; it provides an extensible way for subscribers to identify the delivery mechanism they prefer. Web services often want to receive messages when events occur in other services and applications. A mechanism for registering interest is needed because the set of Web services interested in receiving such messages is often unknown in advance or will change over time. There are many mechanisms by which notifications can be delivered to event sinks. This specification defines a protocol for one Web service (called a 'subscriber') to register interest (called a 'subscription') with another Web service (called an 'event source') in receiving messages about events (called 'notifications'). The subscriber can manage the subscription by interacting with a Web service (called the 'subscription manager') designated by the event source. To improve robustness, a subscription can be leased by an event source to a subscriber, and the subscription expires over time. The subscription manager provides the ability for the subscriber to renew or cancel the subscription before it expires..."
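The lease model above can be illustrated with a small sketch. This is not the draft's protocol; the class, method names, and durations are hypothetical, and only the grant-less-than-requested and renew-before-expiry behaviors are modeled.

```python
# Hypothetical sketch of WS-Eventing's subscription lease model: the event
# source grants a bounded expiry, and the subscriber renews before it lapses.

class SubscriptionManager:
    """Toy subscription manager granting leased subscriptions (seconds)."""
    def __init__(self, max_lease=600):
        self._max_lease = max_lease
        self._expires = {}

    def subscribe(self, sub_id, requested_expiry):
        # The source may grant a shorter lease than the subscriber asked for.
        granted = min(requested_expiry, self._max_lease)
        self._expires[sub_id] = granted
        return granted

    def renew(self, sub_id, requested_expiry):
        # Renewing replaces the old lease with a fresh grant.
        if sub_id not in self._expires:
            raise KeyError("unknown subscription")
        granted = min(requested_expiry, self._max_lease)
        self._expires[sub_id] = granted
        return granted

    def unsubscribe(self, sub_id):
        # Cancelling before expiry removes the subscription outright.
        del self._expires[sub_id]

mgr = SubscriptionManager(max_lease=600)
granted = mgr.subscribe("sub-1", requested_expiry=3600)  # capped at 600
renewed = mgr.renew("sub-1", requested_expiry=300)
```

The cap on the granted lease is the robustness point: an event source is never obligated to carry a dead subscriber longer than one lease interval.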
Web Services Transfer (WS-Transfer) describes a general SOAP-based protocol for accessing XML representations of Web service-based resources; it defines a mechanism for acquiring XML-based representations of entities using the Web service infrastructure, covering two types of entities: Resources, which are entities addressable by an endpoint reference that provide an XML representation, and Resource factories, which are Web services that can create new resources. Specifically, WS-Transfer defines two operations for sending and receiving the representation of a given resource and two operations for creating and deleting a resource and its corresponding representation... Web Services Metadata Exchange (WS-MetadataExchange) defines how metadata associated with a Web service endpoint can be represented as WS-Transfer resources or HTTP resources, how metadata can be embedded in WS-Addressing endpoint references, how metadata can be retrieved from a metadata resource, and how metadata associated with implicit features can be advertised... Web Services Fragment (WS-Fragment) extends the WS-Transfer specification to enable clients to retrieve and manipulate parts or fragments of a WS-Transfer enabled resource without needing to include the entire XML representation in a message exchange... The Web Services Event Descriptions (WS-EventDescriptions) document describes a mechanism by which an endpoint can advertise the structure and contents of the events it might generate..."
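A WS-Transfer Get request is simple enough to assemble with the standard library. Note the namespace URIs below are taken from the earlier member-submission version of WS-Transfer; the W3C Last Call drafts define their own namespaces, so treat them (and the endpoint address) as placeholders.

```python
# A minimal WS-Transfer Get request: the wsa:Action header names the
# operation and the Body is empty, since Get carries no payload.
import xml.etree.ElementTree as ET

SOAP = "http://www.w3.org/2003/05/soap-envelope"
WSA = "http://www.w3.org/2005/08/addressing"
WXF = "http://schemas.xmlsoap.org/ws/2004/09/transfer"  # pre-W3C namespace

envelope = ET.Element("{%s}Envelope" % SOAP)
header = ET.SubElement(envelope, "{%s}Header" % SOAP)

action = ET.SubElement(header, "{%s}Action" % WSA)
action.text = WXF + "/Get"
to = ET.SubElement(header, "{%s}To" % WSA)
to.text = "http://example.org/resource"  # hypothetical resource endpoint

ET.SubElement(envelope, "{%s}Body" % SOAP)  # empty Body for Get

request = ET.tostring(envelope, encoding="unicode")
```

The resource's XML representation would come back in the Body of the GetResponse; Put, Create, and Delete follow the same envelope shape with different actions.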
Interoperability of Medical Device Information and the Clinical Applications: An HL7 RMIM Based on IEEE 11073 DIM
M. Yuksel, A. Dogac, I. Cingil, A. Okcan; METU-SRDC Technical Report
"Medical devices are essential to the practice of modern healthcare services. Their benefits will increase if clinical applications can seamlessly acquire the medical device data. The need to represent medical device observations in a format that is consumable by clinical applications has already been recognized by the industry. Yet, the solutions proposed involve bilateral mappings from the IEEE 11073 DIM to a specific message standard or to a specific document standard. Considering that there are many different types of clinical applications, such as EHR and PHR systems, clinical workflows, and clinical decision support systems, each conforming to different standard interfaces, detailing a mapping mechanism for every one of them introduces significant work and thus limits the potential health benefits of medical devices.
In this paper, to facilitate the interoperability of clinical applications and medical device data, we derive an HL7 RMIM of the medical device domain from the HL7 RIM based on the IEEE 11073 DIM. This makes it possible to trace the medical device data back to a standard common denominator, that is, the HL7 RIM, from which all the other medical domains under HL7 are derived. Hence, once the medical device data is obtained in the RMIM format, it can easily be transformed into HL7-based standard interfaces through XML transformations because these interfaces all have their building blocks from the same RIM.
To demonstrate this, we provide the mappings from the developed RMIM to some of the prominent HL7 based standard interfaces. Additionally, to be able to get data from proprietary devices in IEEE 11073 format, we developed a 'Medical Device Modeling Tool' that enables translation of medical device data instances in proprietary message formats to the IEEE 11073 format...
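The paper's XML-transformation approach can be sketched at toy scale. Everything here is illustrative: the proprietary reading, the element and attribute names, and the HL7-style target shape are all invented for the example and are not taken from the IEEE 11073 DIM or the paper's RMIM.

```python
# Illustrative only: translating a made-up proprietary device reading into
# a generic HL7-style observation element, in the spirit of the paper's
# XML mappings between device formats and HL7-derived structures.
import xml.etree.ElementTree as ET

proprietary = ET.fromstring(
    '<reading device="pulse-oximeter">'
    '<spo2 unit="%">97</spo2>'
    '</reading>'
)

# Build the target structure by pulling values out of the source tree.
observation = ET.Element("observation")
ET.SubElement(observation, "code", displayName="SpO2")
ET.SubElement(
    observation, "value",
    value=proprietary.findtext("spo2"),
    unit=proprietary.find("spo2").get("unit"),
)

hl7_like = ET.tostring(observation, encoding="unicode")
```

In practice such mappings would be written once against the common RMIM (e.g. as XSLT), which is exactly the paper's argument for avoiding per-interface bilateral mappings.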
As future work, we will use these mappings to automate the clinical follow-up of patients with cardiac implants within the scope of the iCARDEA Project. This project has set out to semi-automate the patient follow-up processes through personalized care plans by correlating the data coming from medical devices with the context of the patient obtained from his EHRs and PHRs, and hence its first requirement was to achieve the interoperability of all this information. As another research thread, we intend to investigate the interoperability of the developed RMIM with other standards in the field such as CEN EN 13606 (EHRcom). In fact, [an earlier IEEE paper] describes how the clinical statements of HL7 CDA and CEN EHRcom can be transformed into each other's representation. In addition, we developed XML mappings from HL7 CDA to CEN 13606-1 which will constitute the initial steps of this future work.
See also: XML Standards and Healthcare
Last Call Review for Encrypted Key Package Content Type and Algorithms
Sean Turner and Russ Housley (eds), IETF Internet Draft
The Internet Engineering Steering Group (IESG) requests public comment on two related IETF Internet Draft specifications submitted for consideration as IETF Proposed Standards. The IESG plans to make a decision on this request within the next few weeks, and solicits final public comment; please send substantive comments to the IETF mailing lists by 2010-04-28.
The specification Encrypted Key Package Content Type defines the encrypted key package content type, which can be used to encrypt content that includes a key package, such as a symmetric key package or an asymmetric key package. It is transport independent. The Cryptographic Message Syntax (CMS) can be used to digitally sign, digest, authenticate, or further encrypt this content type. It is designed to be used with the CMS Content Constraints extension, which does not constrain the EncryptedData, EnvelopedData, and AuthEnvelopedData... Using the existing CMS mechanisms, producers of authenticated plaintext key packages can be authorized by including a CCC extension containing the appropriate content type in the producer's certificate. However, these mechanisms cannot be used to authorize the producers of encrypted key material. In some key management systems, encrypted key packages are exchanged between entities that cannot decrypt the key package. The encrypted key package itself may be authenticated and passed to another entity. In these cases, checking the authorization of the producer of the encrypted key package may be desired at the intermediate points.
The encrypted key package content type is designed for use with 'Cryptographic Message Syntax (CMS) Content Constraints X.509 Certificate Extension' (CCC). To authorize an originator's public key to originate an encrypted key package, the object identifier associated with the encrypted key package content type is included in the originator's public key certificate CCC certificate extension. For CCC to function, originators encapsulate the encrypted key package in a SignedData, EnvelopedData, or AuthEnvelopedData, and then during certificate path validation the recipient determines whether the originator is authorized to originate the encrypted key package. In CCC terminology, the encrypted key package is a leaf node. Additional authorization checks may be required once the key package is decrypted. For example, the key package shown below consists of a SignedData layer that encapsulates an encrypted key package that encapsulates a SignedData layer containing a symmetric key package..."
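The layering described above can be shown schematically. This performs no cryptography at all; the classes merely mirror the names of the CMS content types so the nesting is visible, and the key identifier is invented.

```python
# A schematic (not cryptographic) sketch of the layering described above:
# a SignedData wrapping an encrypted key package that wraps a SignedData
# containing a symmetric key package.
from dataclasses import dataclass

@dataclass
class SymmetricKeyPackage:
    key_id: str

@dataclass
class SignedData:
    content: object          # the encapsulated content (any content type)

@dataclass
class EncryptedKeyPackage:
    content: object          # the inner SignedData, notionally encrypted

package = SignedData(
    content=EncryptedKeyPackage(
        content=SignedData(
            content=SymmetricKeyPackage(key_id="key-001")  # hypothetical id
        )
    )
)

# Walking inward reproduces the layering; the CCC authorization check on
# the encrypted key package (the "leaf node") can happen at the outer
# layers without ever decrypting the inner content.
inner = package.content.content.content
```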
Algorithms for Encrypted Key Package Content Type describes "the conventions for using several cryptographic algorithms with the encrypted key package content type. Specifically, it includes conventions necessary to implement EnvelopedData, EncryptedData, and AuthEnvelopedData... EnvelopedData supports a number of key management techniques. Implementations that claim conformance to this document MUST support the key transport mechanisms and SHOULD support the key agreement mechanisms... EncryptedData requires that keys be managed by means other than EncryptedData; therefore, the only algorithm specified is the content encryption algorithm. Implementations MUST support AES-128 Key Wrap with Padding... AuthEnvelopedData, like EnvelopedData, supports a number of key management techniques. The key management requirements for AuthEnvelopedData are the same as for EnvelopedData. The difference is the content encryption algorithm. Implementations MUST support 128-bit AES-GCM and SHOULD support 256-bit AES-GCM..."
See also: the Algorithms specification
W3C Media Annotations Working Group Publishes Drafts
Brian Sletten, InfoQueue
"The W3C Media Annotations Working Group has recently posted drafts of its Ontology for Media Resource 1.0 and API for Media Resource 1.0 efforts. They have also updated the Use Cases document to reflect some of the intentions of these projects.
The basic goal of the Working Group is to produce an API and domain model for handling the explosion of media content on the Web. There is not and never will be a single set of audio and video formats used by all residents of the Internet, so they felt the need to establish a mechanism for describing this content and connecting it to other content of different formats. Leveraging the power of Semantic Web technologies, they want to be able to do things such as: (1) Retrieve media metadata in a common form, even if it is stored differently in different formats; (2) Define a mapping between the metadata supported by the source formats; (3) Manage user-specified metadata for the media resources (e.g., reviews, ratings, tagging); (4) Handle structured annotations from the various sources.
The drafts support twenty-five different formats including CableLabs' Video-On-Demand formats, Dublin Core publication metadata, the exchangeable image file format (EXIF) for digital cameras, Digital Bazaar's Video Metadata, MPEG ID3 tags, QuickTime, Adobe's XMP, YouTube's Data API Protocol, etc. The ontology reuses many terms from popular vocabularies such as Dublin Core, but defines its key ideas in its own terminology. The draft not only defines these terms, but also specifies a one-way mapping into the metadata and annotations of the native formats.
The properties include terms such as: 'ma:identifier' = URI for the resource; 'ma:title' = Title of the resource; 'ma:language' = IETF BCP 47 language used in the resource; 'ma:locator' = URL associated with the content; 'ma:creator' = A primary author of the content; 'ma:compression' = Compression codec used; 'ma:duration' = Duration (in seconds)... The API being defined is intended to manage these mappings and provide a common programming model for interacting with the content so that format-specific code is not required. The API can be used by either client- or server-side code, as suggested by two different scenarios..."
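The one-way mapping idea can be sketched as a simple normalization table. The per-format field names below are only an illustrative subset (ID3's TIT2/TPE1 frames, two Dublin Core elements), not the Working Group's official mapping tables.

```python
# A hypothetical illustration of the mapping idea: format-specific metadata
# keys are normalized onto the common ma: property names, so client code
# can read ma:title without knowing the source format.

MAPPINGS = {
    "dublincore": {"dc:title": "ma:title", "dc:creator": "ma:creator",
                   "dc:language": "ma:language"},
    "id3": {"TIT2": "ma:title", "TPE1": "ma:creator"},
}

def to_ma(fmt, record):
    """Map a format-specific metadata record onto ma: properties,
    dropping fields the mapping does not cover (one-way, lossy)."""
    mapping = MAPPINGS[fmt]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

dc = to_ma("dublincore", {"dc:title": "Elephants Dream",
                          "dc:language": "en"})
id3 = to_ma("id3", {"TIT2": "Elephants Dream", "TALB": "ignored"})
```

Dropping unmapped fields is what makes the mapping one-way, matching the draft's direction from native formats into the ontology.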
OSGi Service Platform Enterprise Specification Release 4, Version 4.2
Staff, OSGi Alliance Announcement
The OSGi Alliance announced the approval and publication of the Enterprise Specification for enterprise application and application server developers. Building upon the OSGi Service Platform Release 4 Core Specification Version 4.2, the 492-page OSGi Service Platform Enterprise Specification Release 4, Version 4.2 is a customized set of services intended to fulfill the specific needs of enterprise environments and enterprise customers. The enterprise service set includes declarative services and blueprint container specifications, seamless access to OSGi and non-OSGi remote services, Web application specifications, database integration, management and configuration services, naming and directory services, and more...
The OSGi framework provides a local service registry for bundles to communicate through service objects, where a service is an object that one bundle registers and another bundle looks up. The Enterprise Specification enhances this model by defining endpoints that represent services hosted on remote systems. It allows for seamless access to remote services within the OSGi Service Platform without changing the service layer.
The remote system may or may not be based on OSGi. The Enterprise Specification includes the specifications of: (1) Remote Services — The Remote Services specification defines a number of service properties that participating bundles can use to convey information to a distribution provider. The distribution provider creates endpoints that are accessible to remote clients or registers proxies that access services hosted external to the OSGi framework. (2) Remote Service Admin Specification — The Remote Services Admin Service Specification defines an API for the distribution provider and discovery of services in a network. A management agent can use this API to provide an actual distribution policy. This management agent can export and import services as well as discover services in the network. (3) SCA Configuration Type — Distribution providers support a number of communication protocols configured by specific configuration types. The SCA Remote Services Configuration Specification defines such a configuration type.
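The registry-plus-proxy idea above can be shown language-neutrally (in Python here, though OSGi itself is Java-based). Everything is a toy: the registry, the interface name, and the property name only echo, and do not reproduce, the Remote Services model.

```python
# Conceptual sketch: a local service registry in which a remote service
# appears as a registered proxy, so clients look it up and call it exactly
# like a local service -- the "seamless access" point above.

class ServiceRegistry:
    """Toy service registry keyed by interface name."""
    def __init__(self):
        self._services = {}

    def register(self, interface, service, **properties):
        self._services[interface] = (service, properties)

    def lookup(self, interface):
        return self._services[interface][0]

class RemoteGreeterProxy:
    """Stands in for a distribution provider's proxy to a remote endpoint."""
    def greet(self, name):
        # A real proxy would marshal the call over the wire to the endpoint.
        return "hello, %s (via remote endpoint)" % name

registry = ServiceRegistry()
registry.register("Greeter", RemoteGreeterProxy(),
                  service_imported=True)  # illustrative property, not OSGi's

# The client neither knows nor cares that the implementation is remote.
greeting = registry.lookup("Greeter").greet("osgi")
```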
The OSGi Alliance is a worldwide consortium of technology innovators that advances a proven and mature process to assure interoperability of applications and services based on its component integration platform. The alliance provides specifications, reference implementations, test suites and certification to foster a valuable cross-industry ecosystem. OSGi Alliance members develop and facilitate the deployment of OSGi specifications, which serve as the platform for universal middleware in server and embedded environments. Deployment of these open OSGi standards greatly increases the value of a wide range of computers and devices that use Java technology. Alliance members represent diverse markets including SmartHome, automotive electronics, mobile and enterprise. Member company industries include leading service and content providers, infrastructure/network operators, utilities, software developers, gateway suppliers, consumer electronics/device suppliers (wired and wireless) and research institutions..."
See also: the OSGi download page
CACM: A View of Cloud Computing
Michael Armbrust, Armando Fox (et al), Communications of the ACM
"Cloud computing, the long-held dream of computing as a utility, has the potential to transform a large part of the IT industry, making software even more attractive as a service and shaping the way IT hardware is designed and purchased. Developers with innovative ideas for new Internet services no longer require the large capital outlays in hardware to deploy their service or the human expense to operate it. They need not be concerned about overprovisioning for a service whose popularity does not meet their predictions, thus wasting costly resources, or underprovisioning for one that becomes wildly popular, thus missing potential customers and revenue. Moreover, companies with large batch-oriented tasks can get results as quickly as their programs can scale, since using 1,000 servers for one hour costs no more than using one server for 1,000 hours. This elasticity of resources, without paying a premium for large scale, is unprecedented in the history of IT...
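The cost-associativity claim above reduces to two lines of arithmetic: at a flat per-server-hour price (the rate below is illustrative, not from the article), bursting wide costs the same as running long.

```python
# 1,000 servers for 1 hour vs. 1 server for 1,000 hours, at a flat rate.
price_per_server_hour = 0.10  # illustrative rate in dollars

burst_cost = 1000 * 1 * price_per_server_hour    # wide: results in 1 hour
serial_cost = 1 * 1000 * price_per_server_hour   # narrow: results in ~6 weeks
```

Same server-hours, same bill; the only difference is elapsed time, which is why elasticity without a scale premium matters for batch workloads.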
Our goal in this article is to reduce that confusion by clarifying terms, providing simple figures to quantify comparisons between cloud and conventional computing, and identifying the top technical and non-technical obstacles and opportunities of cloud computing. We provide a ranked list of critical obstacles to growth of cloud computing. The first three affect adoption, the next five affect growth, and the last two are policy and business obstacles. Each obstacle is paired with an opportunity to overcome that obstacle, ranging from product development to research projects.
From 'Top 10 Obstacles and Opportunities for Cloud Computing': (1) Business Continuity and Service Availability; (2) Data Lock-In; (3) Data Confidentiality/Auditability; (4) Data Transfer Bottlenecks; (5) Performance Unpredictability; (6) Scalable Storage; (7) Bugs in Large-Scale Distributed Systems; (8) Scaling Quickly; (9) Reputation Fate Sharing; (10) Software Licensing...
Regardless of whether a cloud provider sells services at a low level of abstraction like EC2 or a higher level like AppEngine, we believe computing, storage, and networking must all focus on horizontal scalability of virtualized resources rather than on single node performance. Moreover: Applications software needs to both scale down rapidly as well as scale up, which is a new requirement. Such software also needs a pay-for-use licensing model to match needs of cloud computing. Infrastructure software must be aware that it is no longer running on bare metal but on VMs. Moreover, metering and billing need to be built in from the start. Hardware systems should be designed at the scale of a container (at least a dozen racks), which will be the minimum purchase size. Cost of operation will match performance and cost of purchase in importance, rewarding energy proportionality by putting idle portions of the memory, disk, and network into low-power mode. Processors should work well with VMs and flash memory should be added to the memory hierarchy, and LAN switches and WAN routers must improve in bandwidth and cost..."
Collaboration Tools for Global Software Engineering
Filippo Lanubile, Christof Ebert, Rafael Prikladnicki, Aurora Vizcaíno; IEEE Software
"Tools are essential to collaboration among team members, enabling the facilitation, automation, and control of the entire development process. Adequate tool support is especially needed in global software engineering because distance aggravates coordination and control problems, directly or indirectly, through its negative effects on communication.
In this column, we present current collaborative development environments and tools to enable effective software development, either global or collocated. Our summary is not comprehensive. Rather, we identified technologies that really matter by conducting surveys at recent ICGSE conferences and in companies where we're consulting to improve their distributed engineering capabilities... We briefly look into seven standard collaborative development tools, including Version-Control Systems, Trackers, Build Tools, Modelers, Knowledge Centers, and Web 2.0 Applications...
Collaborative project management tools such as ActiveCollab and WorldView offer a Web-based interface to manage project information for calendars and milestone tracking. Such tools give managers an overview of project status at different detail levels, such as team member locations and contact information. WorkSpaceActivityViewer provides an overview of ongoing project activities by using information extracted from developers' workspaces. Requirements Engineering: Major RE tools such as DOORS and IRqA let multiple engineers use natural language text to describe project use cases and requirements and to record dependencies among and between them. Both tools have a document-oriented, Word-based interface. They also provide a Web interface for users who need access to requirements information but do not have local installations. The collaboration tool eRequirements is entirely Web-based, and provides Web access to collaboratively explore and manage use cases and requirements...
New collaboration tools and associated best practices are emerging almost daily. We see two major trends. First, practically all engineering tools will provide collaboration features. These features help when individual tools are shared by a team, but they're implemented differently on different tools and so don't allow data integration across tools. A second, related trend is improved federation of engineering tools. Eclipse will help initially, but ensuring efficiency, consistency, and information security across multiple tools, teams, and companies ultimately requires a strong product life-cycle management (PLM) strategy. Tools such as Teamcenter and Easee allow secure federation and collaborative work with integrated data backbones... Effective tool support for collaboration is a strategic initiative for any company with distributed resources, no matter whether the strategy involves offshore development, outsourcing, or supplier networks. Software needs to be shared, and appropriate tool support is the only way to do this efficiently, consistently, and securely..."
XML Daily Newslink and Cover Pages sponsored by:
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: firstname.lastname@example.org
Newsletter unsubscribe: email@example.com
Newsletter help: firstname.lastname@example.org
Cover Pages: http://xml.coverpages.org/