A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover
This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation http://www.microsoft.com
Headlines
ECRIT: Additional Data Related to a Call for Emergency Call Purposes
Brian Rosen and Hannes Tschofenig (eds), IETF Internet Draft
Members of the IETF Emergency Context Resolution with Internet Technologies (ECRIT) Working Group have published an initial -00 draft specification for Additional Data Related to a Call for Emergency Call Purposes. The XML Schema is presented in Section 4.
"Abstract: When an emergency call is sent to a PSAP, the device that sends it, as well as any service provider in the path of the call may have information about the call which the PSAP may be able to use. This document describes an XML data structure that contains this kind of information in a standardized form. A URI that points to the structure can be included in the SIP signaling with the call."
Details: "When an emergency call is sent to a PSAP, there is a rich set of data in the headers with the call, but the device, as well as any other service provider in the path may have even more information that would be useful to a PSAP. This information may include the identity and contact information of the service provider, subscriber identity and contact information, the type of service the service provider provides, what kind of device the user has, etc. Some kinds of devices or services have device or service dependent data. For example, a car telematics system or service may have crash information. A medical monitoring device may have sensor data. While the details of the information may vary by device or service, there needs to be a common way to send such data to a PSAP. For the call takers this will enable more intelligent decision making and therefore better response in case of an emergency. A prerequisite is to offer the technical capabilities to let call takers to gain access to this information stored elsewhere -- granting that they have authorization to access it.
This document focuses on the data that can be obtained about a call. An existing SIP header field, the Call-Info header, is used for this purpose by defining a new token, 'emergencyCallData', carried in the "purpose" parameter. If the "purpose" parameter is set to 'emergencyCallData', then the Call-Info header contains an HTTPS URL that points to an XML data structure with information about the call. The initial XML data structure was defined by a working group within the National Emergency Number Association (NENA) and is included in this document. The data structure contains an element which is itself a URI that references device- or service-dependent data. Thus the common Additional Data about a Call defined by this document contains a 'hook', in the form of a URI, for a device- or service-dependent data structure..."
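As a rough illustration of the mechanism described in the draft, the Python sketch below formats a Call-Info header carrying the new purpose token and assembles a skeletal additional-data document. The element names used here are placeholders chosen for readability, not the normative names defined by the schema in Section 4.

    # Illustrative only: the element names below (AdditionalCallData,
    # DataProviderString, DeviceSpecificData, ...) are placeholders; the
    # normative structure is the XML Schema in Section 4 of the draft.
    import xml.etree.ElementTree as ET

    def build_additional_data(provider, contact, service_type, device_data_uri):
        """Assemble a skeletal additional-data document about a call."""
        root = ET.Element("AdditionalCallData")
        ET.SubElement(root, "DataProviderString").text = provider
        ET.SubElement(root, "DataProviderContact").text = contact
        ET.SubElement(root, "TypeOfServiceProvider").text = service_type
        # The 'hook': a URI referencing device- or service-dependent data,
        # e.g. telematics crash data or medical sensor readings.
        ET.SubElement(root, "DeviceSpecificData").text = device_data_uri
        return ET.tostring(root, encoding="unicode")

    def call_info_header(data_url):
        """Format the Call-Info header that points the PSAP at the data."""
        return "Call-Info: <%s>;purpose=emergencyCallData" % data_url

    print(call_info_header("https://example.com/additional-data/call-1234"))
    print(build_additional_data("Example Telecom", "tel:+1-555-0100",
                                "mobile", "https://telematics.example.com/crash/1234"))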
See also: IETF ECRIT and Emergency Management
Google, PayPal, Equifax, Others Form Open Identity Exchange
Thomas Claburn, InformationWeek
Google, PayPal, Equifax, VeriSign, Verizon, CA, and Booz Allen Hamilton on Wednesday [2010-03-03] at the RSA 2010 Conference announced that they have formed a non-profit organization to oversee the exchange of online identity credentials on public and private sector Web sites. The organization, The Open Identity Exchange (OIX), will serve as a trust framework provider. A trust framework is a certification program that allows organizations and individuals to exchange digital credentials and to trust the identity, security, and privacy assertions associated with those credentials...
With help from the OpenID Foundation (OIDF) and the Information Card Foundation (ICF), OIX has been authorized to serve as a trust framework for the U.S. government. It will certify identity management providers to make sure they meet federal standards. Google, Equifax, and PayPal will be the first three identity providers to issue digital identity credentials as a way to enable privacy-protected registration and login at U.S. government Web sites. Verizon is expected to be the fourth, once it completes the certification process...
According to the announcement: "The National Institutes of Health (NIH) is the first government website accepting these credentials, including OpenID and Information Card logins, a capability it demonstrated today at the RSA Conference. Citizens can use open identity technologies to support a number of online services across websites, including customized library searches, access to training resources, conference registration, and medical research wikis, with strong privacy protections, all designed to ensure accessible and transparent communication between the government agency and U.S. citizens...
The first official OIX trust framework meets the requirements set forth by the U.S. Identity, Credential, and Access Management (ICAM) Trust Framework Provider Adoption Process (TFPAP) established by the U.S. General Services Administration (GSA). This trust framework will enable the American public to participate in open, transparent and participatory government while maintaining full control of how much or how little personal information they share with federal websites at all times..."
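For readers unfamiliar with the underlying plumbing, the sketch below shows the kind of OpenID 2.0 authentication request a relying-party site might redirect a visitor with. The provider and agency URLs are hypothetical; which identity providers a government site actually accepts is precisely what the ICAM trust framework certification described above governs.

    # Hypothetical endpoints only; provider acceptance is a matter of trust
    # framework certification, not of this sketch.
    from urllib.parse import urlencode

    OP_ENDPOINT = "https://id.example-provider.com/openid"
    IDENTIFIER_SELECT = "http://specs.openid.net/auth/2.0/identifier_select"

    def openid_checkid_setup(return_to, realm):
        """Build the redirect URL for an OpenID 2.0 checkid_setup request."""
        params = {
            "openid.ns": "http://specs.openid.net/auth/2.0",
            "openid.mode": "checkid_setup",
            # 'Directed identity': the provider chooses which identifier to assert.
            "openid.claimed_id": IDENTIFIER_SELECT,
            "openid.identity": IDENTIFIER_SELECT,
            "openid.return_to": return_to,
            "openid.realm": realm,
        }
        return OP_ENDPOINT + "?" + urlencode(params)

    print(openid_checkid_setup("https://apps.example.gov/login/return",
                               "https://apps.example.gov/"))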
See also: the Open Identity Exchange (OIX) announcement
OASIS Opens Discussion List for Operational Aspects of Privacy Management
Staff, OASIS Announcement
OASIS members have requested the creation of a new discussion list on 'Operational Aspects of Privacy Management' with a view to possible formation of a Technical Committee: 'OASIS Privacy Management Reference Model (PMRM) TC'.
As proposed, the OASIS TC will "develop a Privacy Management Reference Model, which is intended to serve as a template for developing operational solutions to privacy issues, as an analytical tool for assessing the completeness of proposed solutions, and as the basis for establishing categories and groupings of privacy management controls. The Reference Model will not be a specification in the formal sense, but is intended to be used as the basis for an implementation standard, which would be developed independently. Comprehensive Use Cases will be solicited and developed in several areas to test the completeness and robustness of the Reference Model."
OASIS members and non-members alike are invited to participate in the new discussion list in order to discuss the merits of the proposal and advisability of creating a new OASIS TC. The discussion list at 'privacymgmt-discuss@lists.oasis-open.org' is governed by Section 2.1 of the OASIS TC Process, and may last up to 90 days. Typically, participants in the list will determine whether there is sufficient interest to form an OASIS TC, and then collaborate on a draft TC charter for submission. Only subscribers to the list will have the ability to post to it. If you do not wish to subscribe, but would like to monitor the discussion, archives of the list will be made publicly accessible.
Update: see the initial discussion list message from Michael Willett, co-moderator with John Sabo. Michael provides references for documents downloadable from the ISTPA (International Security, Trust, and Privacy Alliance) web site, including: (1) Privacy Management Reference Model 2.0 ["an open, policy-configurable framework for resolving privacy policy requirements into operational privacy services and functions", cache/archive] and (2) Analysis of Privacy Principles: Making Privacy Operational ["a study which looks in depth at twelve major global privacy instruments, and derives a set of core privacy 'requirements' which can be useful for governments and businesses evaluating options for designing and implementing operational privacy controls", cache/archive].
See also: the PMRM Webinar slide set
Interoperability Is Key to Securing Data at Rest
William Jackson, Government Computer News
"Securing data in transit using cryptography has become fairly routine thanks to standardized protocols for data transfers and certificate exchanges. But securing data at rest while keeping it accessible remains a challenge.The problem is not a lack of working technology, but the challenge of interoperability, according to Gary Palgon, Vice President for Product Management for nuBridges, a managed file transfer company: "Once you move outside a proprietary solution, visibility into the data ends. We're using the same algorithms to encrypt, but it pretty much stops there. There is no standardized way for managing and exchanging cryptographic keys that make data useable once it has been encrypted, making the soution immature..."
NuBridges is using the RSA Conference here this week to announce its plans to form an industry group to foster interoperability for data protection. The work would complement other efforts such as the Cryptographic Key Management Project of the National Institute of Standards and Technology, which is identifying scalable and usable crypto key management and exchange strategies for use by government...
Thales, a communications security company, recently announced that it is working with IBM to integrate IBM Tivoli Key Lifecycle Manager into Thales Encryption Manager for Storage to provide a hardened key management system. TEMS already supports the draft IEEE key management standard, and the addition of the Tivoli product would make it possible to manage keys across a broader range of encryption-enabled storage devices and endpoints, including IBM's LTO4 tape and DS8000 disk storage systems... [So] the goal of the proposed standards effort will be to extend these key management abilities beyond proprietary systems..."
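To make the interoperability problem concrete, the sketch below shows the envelope-encryption pattern that enterprise key managers automate: data is encrypted under a per-object data key, and that key is in turn encrypted under a master key that never leaves the key manager and is referenced only by identifier. This is a generic illustration using the Python 'cryptography' package, not any vendor's API; exchanging records like these across products is exactly where a common key management standard would help.

    # Generic envelope-encryption sketch; key identifiers and the record
    # layout are illustrative, not any vendor's format.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_at_rest(plaintext: bytes, kek: bytes, kek_id: str) -> dict:
        dek = AESGCM.generate_key(bit_length=256)      # per-object data key
        data_nonce, key_nonce = os.urandom(12), os.urandom(12)
        ciphertext = AESGCM(dek).encrypt(data_nonce, plaintext, None)
        wrapped_dek = AESGCM(kek).encrypt(key_nonce, dek, None)
        # Everything needed to recover the data later -- except the KEK,
        # which stays in the key manager and is referenced only by id.
        return {"kek_id": kek_id, "wrapped_dek": wrapped_dek,
                "key_nonce": key_nonce, "data_nonce": data_nonce,
                "ciphertext": ciphertext}

    def decrypt_at_rest(record: dict, kek: bytes) -> bytes:
        dek = AESGCM(kek).decrypt(record["key_nonce"], record["wrapped_dek"], None)
        return AESGCM(dek).decrypt(record["data_nonce"], record["ciphertext"], None)

    kek = AESGCM.generate_key(bit_length=256)          # would come from the key manager
    record = encrypt_at_rest(b"cardholder data", kek, "kek-2010-03")
    assert decrypt_at_rest(record, kek) == b"cardholder data"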
According to the nuBridges announcement: "With a growing variety of solutions being developed and implemented around data tokenization, nuBridges invites tokenization providers to work together toward creating a single standard that will ensure a high level of security and interoperability... The initial goal of the Tokenization Standards Organization is to define an interoperable standard to address the business processes associated with tokens and tokenization functions. The specification will encompass anticipated customer requirements for token definitions, token lifecycle management (generation, use and destruction), 'client/server' security, and policies for use... The need for a Tokenization Standards Organization is the result of a rapid increase in the acceptance of tokenization as a security model for guarding payment card information as well as personally identifiable information (PII) and protected health information (PHI) during the past year. Another impacting trend is the use of tokenization to reduce scope for Payment Card Industry Data Security Standards (PCI DSS) audits..."
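For readers new to the model, the sketch below illustrates the token lifecycle (generation, use, destruction) in its simplest possible form: a random surrogate replaces the card number, and only the vault can map it back. Class and method names are illustrative, not drawn from the proposed specification; real deployments add access control, auditing, and PCI-scoped isolation around the vault.

    # Simplified illustration of tokenization; not drawn from the proposed
    # specification. Real systems add access control, auditing, token format
    # policies, and keep the vault inside the PCI-scoped zone.
    import secrets

    class TokenVault:
        def __init__(self):
            self._vault = {}                       # token -> original value

        def tokenize(self, pan: str) -> str:
            """Generation: mint a random digit-string surrogate for a card number."""
            token = "".join(secrets.choice("0123456789") for _ in range(len(pan)))
            while token in self._vault:            # avoid the (unlikely) collision
                token = "".join(secrets.choice("0123456789") for _ in range(len(pan)))
            self._vault[token] = pan
            return token

        def detokenize(self, token: str) -> str:
            """Use: resolve a token back to the original value (vault access only)."""
            return self._vault[token]

        def destroy(self, token: str) -> None:
            """Destruction: remove the mapping so the token can no longer resolve."""
            self._vault.pop(token, None)

    vault = TokenVault()
    token = vault.tokenize("4111111111111111")
    assert vault.detokenize(token) == "4111111111111111"
    vault.destroy(token)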
See also: the Tokenization Standards Organization announcement
Reliable Distributed Storage
Gregory Chockler, Rachid Guerraoui (et al), IEEE Computing Now
With the advent of storage area network (SAN) and network attached storage (NAS) technologies, as well as the increasing availability of cheap commodity disks, distributed storage systems are becoming increasingly popular. These systems use replication to cope with the loss of data, storing data in multiple basic storage units—disks or servers—called base objects. Such systems provide high availability: The stored data should remain available at least whenever any single server or disk fails; sometimes they tolerate more failures... A popular way to overcome disk failures uses a redundant array of inexpensive disks (RAID). In addition to boosting performance with techniques such as striping, RAID systems use redundancy—either mirroring or erasure codes—to prevent loss of data following a disk crash. However, a RAID system is generally contained in a single box, residing at a single physical location, accessed via a single disk controller, and connected to clients via a single network interface. Hence, it still constitutes a single point of failure.
In contrast, a distributed storage system emulates a robust shared storage object by keeping copies of it in several places, so that data can survive complete site disasters. The systems can achieve this using cheap commodity disks or low-end PCs for storing base objects. Researchers typically focus on abstracting a storage object that supports only basic read and write operations by clients, providing provable guarantees. The study of these objects is fundamental, for they provide the building blocks for more complex storage systems. Moreover, such objects can be used to store files, for example, which makes them interesting in their own right...
A distributed storage algorithm implements read and write operations by accessing a collection of base objects and processing their responses. Communication can be intermittent and clients transient. Implementing such storage is, however, nontrivial.
Building distributed storage systems is appealing: Disks are cheap and the system can significantly increase data availability. Distributed storage algorithms can be tuned to provide high consistency, availability, and resilience, while at the same time inducing a small overhead compared to a centralized unreliable solution. Not surprisingly, combining desirable storage properties incurs various tradeoffs. In addition, practical distributed storage systems face many other challenges, including survivability, interoperability, load balancing, and scalability..."
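As a concrete (if drastically simplified) example of such an algorithm, the sketch below shows the majority-quorum idea behind many read/write register emulations: each write is tagged with a timestamp and stored at a majority of base objects, and each read collects a majority of responses and returns the value with the highest timestamp, so any read quorum intersects any write quorum. It assumes a single writer, models base objects as in-process Python objects, and omits the failure handling and read write-back phase that full atomicity requires.

    # Minimal single-writer sketch of a majority-quorum read/write register;
    # base objects are in-process stand-ins, and networking, retries, and the
    # read write-back phase needed for full atomicity are omitted.
    class BaseObject:
        """A base object (disk or server) storing one timestamped value."""
        def __init__(self):
            self.ts, self.value = 0, None

        def store(self, ts, value):
            if ts > self.ts:                       # keep only the newest write
                self.ts, self.value = ts, value

        def load(self):
            return self.ts, self.value

    class QuorumRegister:
        def __init__(self, replicas):
            self.replicas = replicas
            self.quorum = len(replicas) // 2 + 1   # any two majorities intersect
            self.clock = 0                         # single-writer timestamp

        def write(self, value):
            self.clock += 1
            for r in self.replicas[:self.quorum]:  # any majority would do
                r.store(self.clock, value)

        def read(self):
            responses = [r.load() for r in self.replicas[:self.quorum]]
            return max(responses)[1]               # value with the highest timestamp

    reg = QuorumRegister([BaseObject() for _ in range(5)])
    reg.write("v1")
    reg.write("v2")
    assert reg.read() == "v2"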
Sponsors
XML Daily Newslink and Cover Pages sponsored by:
IBM Corporation | http://www.ibm.com |
Microsoft Corporation | http://www.microsoft.com |
Oracle Corporation | http://www.oracle.com |
Primeton | http://www.primeton.com |
Sun Microsystems, Inc. | http://sun.com |
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/