This issue of XML Daily Newslink is sponsored by:
Sun Microsystems, Inc. http://sun.com
- W3C Last Call Review for Web Security Context: User Interface Guidelines
- An Identity-Based Key Agreement Protocol for the Network Layer
- Supporting Healthcare Collaboration with Lotus Sametime and DB2 pureXML
- WS-TemporalPolicy: A WS-Policy Extension for Describing Service Properties with Time Constraints
- Cloud Standardization: VMware Submitting vCloud API to DMTF
- Researchers Say Gazelle Browser Offers Better Security
- Five Years On, Can-Spam Gets Help
- Data Independence and Survival Best Practices for Social Networks
- U.K. Throws Weight Behind Open Source
W3C Last Call Review for Web Security Context: User Interface Guidelines
Thomas Roessler and Anil Saldhana (eds), W3C Technical Report
W3C's Web Security Context Working Group has published a second Last Call Working Draft of "Web Security Context: User Interface Guidelines." Public comment is invited through March 19, 2009. The specification deals with the trust decisions that users must make online, and with ways to support them in making safe and informed decisions where possible. In order to achieve that goal, the specification includes recommendations on the presentation of identity information by Web user agents. It also includes recommendations on conveying error situations in security protocols. The error handling recommendations both minimize the trust decisions left to users and represent known best practice in inducing users toward safe behavior where they do have to make these decisions. To complement the interaction- and decision-related parts of the specification, a section on 'Robustness Best Practices' addresses how the communication of the context information needed to make decisions can be made more robust against attacks. The document specifies user interactions with the goal of making security usable, based on known best practice in this area. While it is intended to provide user interface guidelines, most sections assume the audience has a certain level of understanding of the core PKI (Public Key Infrastructure) technologies as used on the Web. Since the document is part of the W3C specification process, it is written to clearly lay out the requirements and options for conforming to it as a standard. User interface guidelines that are not intended for use as standards do not have such a structure; readers more familiar with the latter form of user interface guideline are encouraged to read this specification as a way to avoid known mistakes in usable security. This specification comes with two companion documents: (1) "Web Security Experience, Indicators and Trust: Scope and Use Cases", edited by Tyler Close.
(2) "Web User Interaction: Threat Trees", edited by Thomas Roessler. In the analysis, high-level threats are decomposed into the vulnerabilities that can be used by an attacker to realize that threat. These vulnerabilities can be met by countermeasures, which can in turn have vulnerabilities of their own, and so on. This second Last Call Working Draft incorporates feedback gathered during the first Last Call period, both from the public and from implementers participating in the Working Group. Compared to the first Last Call Working Draft of this specification, most material on logotypes, petnames, and key continuity management has been dropped. A document with disposition of comments received in response to the first Last Call is also available.
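The threat-tree analysis described above, in which high-level threats decompose into vulnerabilities, vulnerabilities are met by countermeasures, and countermeasures can have vulnerabilities of their own, amounts to an alternating tree. The sketch below is purely illustrative and is not taken from the W3C documents; the node kinds and the `is_open` rule are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One node in a threat tree."""
    label: str
    kind: str                       # "threat", "vulnerability", or "countermeasure"
    children: list = field(default_factory=list)

def is_open(node):
    """A threat or vulnerability is open unless some child countermeasure
    holds; a countermeasure holds only if all of its own child
    vulnerabilities are closed."""
    if node.kind == "countermeasure":
        # the countermeasure fails if any of its vulnerabilities is open
        return any(is_open(child) for child in node.children)
    # a threat/vulnerability stays open unless a holding countermeasure exists
    return not any(child.kind == "countermeasure" and not is_open(child)
                   for child in node.children)

# A countermeasure that is itself undermined leaves the threat open:
spoofing = Node("URL spoofing", "threat", [
    Node("user checks address bar", "countermeasure", [
        Node("homograph look-alike domains", "vulnerability"),
    ]),
])
```

Evaluating `is_open(spoofing)` walks the tree: the address-bar check is defeated by the homograph vulnerability, so the spoofing threat remains open.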
See also: the W3C Security Activity Statement
An Identity-Based Key Agreement Protocol for the Network Layer
Christian Schridde, Matthew Smith, and Bernd Freisleben; Conference Paper
This paper was presented at the Sixth Conference on Security and Cryptography for Networks (SCN 2008), Amalfi, Italy. A new identity-based key agreement protocol designed to operate on the network layer is presented. Endpoint addresses, namely IP and MAC addresses, are used as public keys to authenticate the communication devices involved in a key agreement, which allows much of the security overhead for key management to be piggybacked onto the existing network infrastructure. The proposed approach offers solutions to some of the open problems of identity-based key agreement schemes when applied to the network layer, namely multi-domain key generation, key distribution, multi-domain public parameter distribution, inter-domain key agreement, and network address translation traversal. Several issues for deploying the proposed system in practice are discussed. The authors show how the public parameters and the identity keys are distributed in multi-provider scenarios and how dynamic IP addresses are handled. Furthermore, a detailed description of how the system deals with the NAT problem is given. One of the important issues of any multi-organizational cryptographic system is the distribution of the public parameters and keys. It should be noted that a main requirement is to minimize the number of global distribution steps in favor of local distribution steps, since this distributes the workload and reduces the risk of a global compromise... The most critical element in all IBEs or PKIs in key escrow mode is the distribution of identity keys (private keys) and the prevention of identity misbinding. In traditional PKI and IBE systems, this is usually done manually and out-of-band and thus creates a lot of work. It can be argued that, since most customers at the AS level already receive an out-of-band message when they are assigned their endpoint address, adding a fingerprint to the identity key would not put much extra burden on the system.
However, a more elegant solution for the long term is to integrate the key distribution into the IP distribution system... Unlike other identity-based encryption solutions, the presented approach is based on the well-tested mathematics also used in the traditional Diffie-Hellman key agreement and Rivest-Shamir-Adleman public key cryptography approaches, instead of elliptic curves or quadratic residues. It has been shown how the identity-based key agreement protocol can be used as a generic low-level security mechanism and how it can deal with the implementation issues of multi-domain key generation, key distribution, multi-domain public parameter distribution, key expiration, inter-domain key agreement, and network address translation traversal. There are several areas of future work. For example, a more detailed description of the integration of the proposed identity-based approach into existing network management protocols and tools, in particular the integration into the DHCP protocol, should be provided. Furthermore, the large-scale practical deployment of the proposed approach in IP, Voice-over-IP, or mobile telephone communication scenarios is an interesting area for future work.
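The flavor of the scheme, a Diffie-Hellman-style exchange whose resulting session key is bound to the two endpoint addresses, can be sketched in a few lines. To be clear, this is not the authors' actual construction (which derives the identity keys themselves from the addresses via a trusted key server, and uses full-size parameters); the address-binding step and the toy 64-bit group below are assumptions made to keep the example short, and the parameters are far too small for real use.

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters for illustration only -- NOT secure.
P = (1 << 64) - 59          # a 64-bit prime
G = 2

def dh_keypair():
    """Generate a private exponent and the matching public value."""
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

def session_key(own_secret, peer_public, ip_a, ip_b):
    """Derive a session key from the DH shared secret, bound to both
    endpoint addresses so the key is only valid for this identity pair."""
    shared = pow(peer_public, own_secret, P)
    material = f"{shared}|{min(ip_a, ip_b)}|{max(ip_a, ip_b)}".encode()
    return hashlib.sha256(material).hexdigest()

# Both endpoints derive the same key from each other's public values:
xa, pa = dh_keypair()
xb, pb = dh_keypair()
key_a = session_key(xa, pb, "192.0.2.1", "192.0.2.2")
key_b = session_key(xb, pa, "192.0.2.2", "192.0.2.1")
```

Because the addresses are hashed into the key material, a key negotiated for one pair of endpoints cannot silently be reused for another.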
See also: the conference web site
Supporting Healthcare Collaboration with Lotus Sametime and DB2 pureXML
Susan Malaika and Christian Pichler, IBM developerWorks
More resources to treat illnesses, increasing costs of medication, and medical specialists spread around the globe are all reasons for demanding fast, reliable, and convenient information exchange to support collaboration in the healthcare environment. Information exchange between institutions, businesses, and even across continents is a problem not only in healthcare, but in any environment where collaboration is needed. In healthcare systems, sensitive information must not be accessible without the consent of the patient, the treating practitioner, or both. Security is definitely a topic for further work in improving collaboration through instant messaging. Placing the plug-in configuration files on server systems is another area for further development. Integration of forms-based displays, enabling the modification of data in addition to data display, is another area of great interest. Using an instant messaging client as a friendlier interface into database systems, where one of the parties in the exchange is a database agent rather than a human, is yet another possibility. Building applications that support collaboration requires technologies that enable the exchange of information in common, standardized formats. These agreed formats are often created by governments, industry consortia, and standards-developing organizations, such as Health Level 7 (HL7) in the healthcare area. One specification of HL7 in particular, the Clinical Document Architecture (CDA), is designed for standardized exchange of patient information. Increasingly, patient information is represented, stored, and exchanged electronically utilizing healthcare formats, such as HL7 CDA, typically implemented using the Extensible Markup Language (XML). Having patient information available electronically allows the use of modern and well-established technologies such as instant messaging.
Instant messaging is already present in the healthcare environment, but the prototype described in this article is based on an end-to-end XML architecture. In an end-to-end XML architecture, information encoded in HL7 CDA XML documents is stored in the same format in an IBM DB2 pureXML database, exchanged in the same format utilizing Web services, and visualized in the same format using IBM Lotus Sametime Connect Client. This article illustrates the ease and simplicity of extending the IBM Lotus Sametime Connect Client to support collaboration while utilizing an end-to-end XML architecture. The benefits of storing, exchanging, and visualizing information while utilizing the same format of information include simpler design and speedier development, resulting in a powerful prototype that can be improved as feedback is received.
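In an end-to-end XML architecture, the same document format flows unchanged through storage, exchange, and display. The fragment below is a hypothetical, heavily simplified stand-in for an HL7 CDA document (real CDA uses the urn:hl7-org:v3 namespace and a far richer schema), parsed with Python's standard library to show how the same XML can be consumed at any tier:

```python
import xml.etree.ElementTree as ET

# Invented, simplified CDA-like fragment -- not real HL7 CDA markup.
doc = """
<ClinicalDocument>
  <recordTarget>
    <patient>
      <name><given>Jane</given><family>Doe</family></name>
    </patient>
  </recordTarget>
  <observation code="8867-4" value="72" unit="bpm"/>
</ClinicalDocument>
"""

root = ET.fromstring(doc)
# The same path queries work whether the XML comes from the database,
# a Web service response, or the messaging client.
given = root.findtext("./recordTarget/patient/name/given")
obs = root.find("./observation")
print(given, obs.get("value"), obs.get("unit"))  # prints: Jane 72 bpm
```

In the article's prototype the equivalent queries would run against DB2 pureXML on the server and inside the Sametime plug-in on the client, against one and the same document.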
WS-TemporalPolicy: A WS-Policy Extension for Describing Service Properties with Time Constraints
Markus Mathes, Steffen Heinzl, Bernd Freisleben; Conference Paper
This paper was published in the Proceedings of the First IEEE International Workshop On Real-Time Service-Oriented Architecture and Applications (RTSOAA 2008) of the 32nd Annual IEEE International Computer Software and Applications Conference (COMPSAC 2008). "A web service has several functional properties (e.g. its operations) and non-functional properties (e.g. quality of service and security parameters). Functional properties are usually static, whereas non-functional properties are often dynamic and thus vary over time. To describe properties with time constraints, the paper introduces WS-TemporalPolicy. WS-TemporalPolicy empowers a service developer to attach a validity period to the properties described in a WS-Policy. The generation, validation, storage and retrieval, and deployment process of temporal policies is supported by the Temporal Policy Runtime Environment. Implementation issues and two use cases are presented to illustrate the use of temporal policies..." The paper has four main contributions: (1) The need for temporal properties is identified, (2) WS-TemporalPolicy is defined, (3) the Temporal Policy Runtime Environment, which identifies the components required for managing temporal policies, is presented, and (4) the use of WS-TemporalPolicy for describing dynamic properties is shown by two use cases concerning QoS and real-time processing. [The authors conclude that] "There is a fundamental need to describe dynamic properties in service-oriented environments based on web or Grid services, e.g. to define time-dependent parameters like QoS parameters. Available standards are not suitable to describe these parameters, since they are designed to describe properties with a static nature. In this paper, we introduce WS-TemporalPolicy and validity periods that are expressed via an expiration date, start time and end time, or a relative duration.
The dependencies among several WS-TemporalPolicies can be defined by means of actions and events leading to complex dependency trees. The functional components to implement an infrastructure to support WS-TemporalPolicy are presented. Possible areas for future work are the further development of the prototypical Temporal Policy Runtime Environment and an integration into widespread Grid middlewares, such as the Globus Toolkit. A further interesting extension is a more fine-grained policy weaver and 'real' client-side strategies for buying resources..."
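The validity-period idea can be sketched minimally, with the options the paper mentions (expiration date, absolute start/end times, relative duration) folded into one check. The class and method names below are invented for illustration and are not taken from the WS-TemporalPolicy specification, which expresses these constraints as XML policy assertions.

```python
from datetime import datetime, timedelta

class TemporalPolicy:
    """Illustrative model: a policy valid within an absolute [start, end)
    window, or for a relative duration after an activation time."""

    def __init__(self, start=None, end=None, duration=None):
        self.start = start
        self.end = end              # an end time doubles as an expiration date
        self.duration = duration

    def is_valid(self, at, activated=None):
        if self.duration is not None and activated is not None:
            return activated <= at < activated + self.duration
        if self.start is not None and at < self.start:
            return False
        if self.end is not None and at >= self.end:
            return False
        return True

# A QoS guarantee that only holds during a maintenance-free window:
window = TemporalPolicy(start=datetime(2008, 1, 1), end=datetime(2008, 2, 1))
# A lease-style policy that expires one hour after activation:
lease = TemporalPolicy(duration=timedelta(hours=1))
```

A runtime environment like the one in the paper would evaluate such checks on deployment, retrieval, and enforcement of each policy.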
Note: Six examples of (contrasting) related work are cited in this paper:
- Diego Zuquim Guimaraes Garcia and Maria Beatriz Felgar de Toledo presented "Semantics-enriched QoS Policies for Web Service Interactions" at the 12th Brazilian Symposium on Multimedia and the Web: an extension to the Web Services Policy Framework (WS-Policy) standard to complement WSDL descriptions with semantics-enriched QoS policies using the Ontology Web Language (OWL) and ABLE Rule Language (ARL) standards, and an extension to the UDDI standard to include QoS policies.
- Akhil Sahai, Carol Thompson, and William Vambenepe (HP) presented "Specifying and Constraining Web Services Behaviour Through Policies" at the W3C Workshop on Constraints and Capabilities for Web Services.
- "WS-Policy4MASC" [A WS-Policy Extension Used in the Manageable and Adaptable Service Compositions (MASC) Middleware], by Vladimir Tosic (et al.), presented a new XML language for policy specification in the MASC middleware which can also be used for other Web service middleware; it extends the Web Services Policy Framework (WS-Policy) by defining new types of policy assertions.
- The "Web Services Agreement Specification (WS-Agreement)" from the Open Grid Forum (OGF) describes a protocol for establishing agreement between two parties, such as a service provider and consumer, using an extensible XML language for specifying the nature of the agreement, and agreement templates to facilitate discovery of compatible agreement parties.
- "A Policy Framework for Collaborative Web Service Customization", by IBM China Research Lab (Haiqi Liang, Wei Sun, Xin Zhang, Zhongbo Jiang), proposed an approach to facilitating Web service customization programmatically through collaboration between service provider and consumer. A specification for declaring Web service customization policies is defined based on the WS-Policy framework, through which the service provider can declare the service's customization capabilities.
- "Efficient Selection and Monitoring of QoS-aware Web Services with the WS-QoS Framework" (Freie Universitaet Berlin) introduced a 'Web service QoS (WS-QoS)' solution that enables efficient, dynamic, and QoS-aware selection and monitoring of Web services.
Cloud Standardization: VMware Submitting vCloud API to DMTF
Drue Reeves, Burton Group Blog
Many have seen the announcement from VMware regarding their new vCloud API, part of their vCloud initiative. Details on the API are sketchy, but Dan Chu (VMware VP of emerging business) said that VMware was looking to (according to Network World) "build on its work with Distributed Management Task Force (DMTF) on the open virtualisation format (OVF)"... one would presume by submitting vCloud to the DMTF for standardization. Some of you may be asking, what is this about? Why is VMware doing this, and why submit to the DMTF? Well, the reason is quite simple, actually. Although we often associate "cloud infrastructure" with Amazon's EC2, the reality is that the cloud will be composed of several Infrastructure-as-a-Service (IaaS) vendors, each touting their own service features, quality, engagement models... and yes, API. The API is necessary to "engage" their service. It's a way for the vendor to expose service features and enable competitive differences. For example, Amazon's EC2 API is a SOAP- and HTTP Query-based API used to send proprietary commands to create, store, provision, and manage Amazon Machine Images (AMIs). The trouble is, if every IaaS provider creates their own API (and they will), then customers and developers alike will need to learn several different APIs in order to engage each vendor's service. That's the reason why VMware created the vCloud API. They needed an API as a means for customers to create, provision, and manage VMs in the VDC-OS environment, the infrastructure many IaaS providers may use to create a competitive offering to Amazon's EC2. VMware submitted the vCloud API to the DMTF as a way to standardize the API, hoping that customers and developers will put pressure on IaaS vendors to adopt the API as a standard, easing development issues. Vendors who don't adopt the API will be seen as non-standard/proprietary...
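The "engage the service through an API" point can be made concrete with a query-style request of the kind EC2 popularized: an Action name plus per-call parameters encoded into the request URL. The endpoint and parameter names below are invented for illustration; this is not the actual EC2 or vCloud wire format, and real calls also carry authentication signatures.

```python
from urllib.parse import urlencode

def build_query_request(endpoint, action, **params):
    """Build a query-style API request URL: one Action plus its arguments.
    (Hypothetical format; real IaaS APIs also sign the request.)"""
    query = urlencode(dict({"Action": action}, **params))
    return f"{endpoint}?{query}"

url = build_query_request("https://iaas.example.com/api", "RunInstances",
                          ImageId="img-1234", InstanceCount=1)
```

Every provider inventing its own vocabulary of actions and parameters is precisely the fragmentation a DMTF-standardized API would address.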
Researchers Say Gazelle Browser Offers Better Security
Kurt Mackie, Application Development Trends
A team consisting of Microsoft Research personnel and university staff members has demonstrated a potentially more secure Web browser called Gazelle. This research team, led by Helen J. Wang and others, appears to be doing work that's separate from Microsoft's Internet Explorer 8 team. IE8 and Google Chrome frequently appear in the paper as examples of browsers that get security wrong. The team claims that the Gazelle browser, which runs on Windows Vista and uses Internet Explorer's Trident renderer, offers greater security by using a browser-based operating system called a "browser kernel." The browser kernel consists of approximately 5,000 lines of C# code and is "resilient to memory attacks," according to the authors. Gazelle separates subdomains such as ad.datacenter.com and user.datacenter.com into distinct principals, whereas Google Chrome considers them part of the same site. The browser kernel even manages address bars and menus in the browser, plus it controls whether or not browser plug-ins can interoperate with the operating system. The overlay of transparent content, which can trick users into clicking on content from another origin, is thwarted by a policy that makes dynamic content-containing windows opaque... "The Multi-Principal OS Construction of the Gazelle Web Browser" [PDF] was authored by Helen J. Wang, Chris Grier, Alexander Moshchuk, Samuel T. King, Piali Choudhury, and Herman Venter. Abstract: "Web browsers originated as applications that people used to view static web sites sequentially. As web sites evolved into dynamic web applications composing content from various web sites, browsers have become multi-principal operating environments with resources shared among mutually distrusting web site principals.
Nevertheless, no existing browsers, including new architectures like IE 8, Google Chrome, and OP, have a multi-principal operating system construction that gives a browser-based OS the exclusive control to manage the protection of all system resources among web site principals. In this paper, we introduce Gazelle, a secure web browser constructed as a multi-principal OS. Gazelle's Browser Kernel is an operating system that exclusively manages resource protection and sharing across web site principals. This construction exposes intricate design issues that no previous work has identified, such as legacy protection of cross-origin script source, and cross-principal, cross-process display and events protection. We elaborate on these issues and provide comprehensive solutions. Our prototype implementation and evaluation experience indicates that it is realistic to turn an existing browser into a multi-principal OS that yields significantly stronger security and robustness with acceptable performance and backward compatibility."
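The principal distinction at the heart of the design can be sketched in a few lines: Gazelle labels each principal by its exact scheme, host, and port, while a coarser same-site policy collapses subdomains of one registered domain together. The helpers below are an illustrative simplification (in particular, `registrar_site` naively takes the last two DNS labels and ignores multi-part public suffixes such as co.uk):

```python
from urllib.parse import urlsplit

def principal(url):
    """Gazelle-style principal label: the exact (scheme, host, port) triple."""
    p = urlsplit(url)
    port = p.port or {"http": 80, "https": 443}.get(p.scheme)
    return (p.scheme, p.hostname, port)

def registrar_site(url):
    """Coarser same-site label (naive sketch): the registered domain."""
    host = urlsplit(url).hostname
    return ".".join(host.split(".")[-2:])

ad = "http://ad.datacenter.com/banner"
user = "http://user.datacenter.com/home"
# Under Gazelle's rule these are distinct principals and must be
# isolated from each other; under the coarser rule they are one site.
```

With the fine-grained label, a compromised ad subdomain gains no authority over the user-facing subdomain's resources, which is the isolation the paper argues existing browsers fail to enforce.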
See also: the Gazelle Web Browser paper
Five Years On, Can-Spam Gets Help
Avi Baumstein, InformationWeek
"The right mix of technical measures can keep most unwanted e-mail out of customers' in-boxes. Overall, spam has increased, from about 60% of all e-mail in 2003 to more than 90% of e-mail today. Perhaps the most positive effect of the U.S. Congress's Can-Spam Act (other than prosecutions of a handful of spammers and a drop in the amount of pornographic spam) was the guidance it provided to companies on how to send e-mail ads and correspond with customers. But that only goes so far in an industry dominated by fraudsters and criminals. In fact, even before the measure was passed, it was derided as the 'You Can-Spam Act' because, rather than outlawing spam, it merely prohibited certain deceptive practices, effectively making all other spam legal. The act also pre-empted more stringent state and local laws. More than five years after Can-Spam was passed, anti-spam companies continue to search for the right combination of technical measures that will rid customers' in-boxes of unwanted commercial e-mail. Greg Shapiro, CTO and VP of messaging vendor Sendmail, lists three such measures: First, have Internet service providers block outgoing port 25 and scan customers' outgoing e-mail; second, authenticate senders; and third, build reputation systems for senders and domains. The Sender Policy Framework, an open standard, aims to provide sender authentication. SPF, which specifies a technical method to prevent sender-address forgery, has gained steam in the last few years. DomainKeys Identified Mail (DKIM) extends the concept of sender authentication beyond SPF, adding cryptographic signatures to outgoing e-mail. Receiving servers verify that the message is legitimate by looking up the public key in DNS. By proving that an e-mail is authorized to come from a particular domain, DKIM enables the use of more advanced reputation systems. Current systems track the reputation of IP addresses, deciding how to handle messages based on the sending IP's track record.
Vendors are now working to develop systems that track the reputation of the domain included in the 'From' header, eliminating the inaccurate results that IP reputation provides when mail is forwarded or companies use shared-hosted mail servers. Domain reputation can even combat phishing, because look-alike domain names (substituting similar-looking characters for letters in URLs of well-known companies) could receive poor reputation scores and have their e-mail dropped in the bit bucket. The groundwork for these new technologies is in place, and more innovations are on the way. Many anti-spam vendors have added sender-IP reputation systems to their arsenals, for example. In addition, the Internet Engineering Task Force is looking into standardizing protocols for querying reputation databases, enabling interoperability.
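A toy version of the SPF idea: the receiving server checks whether the connecting IP falls inside the networks the sender's domain has authorized. Real SPF publishes this set in a DNS TXT record (for example, "v=spf1 ip4:192.0.2.0/24 -all") and supports many more mechanisms and result codes; the in-memory table below is a stand-in for that DNS lookup.

```python
import ipaddress

# Stand-in for the DNS TXT lookup of a domain's SPF record
# (hypothetical data using documentation address ranges).
AUTHORIZED = {
    "example.com": [ipaddress.ip_network("192.0.2.0/24")],
}

def spf_pass(sender_domain, connecting_ip):
    """Return True if the connecting IP is authorized to send mail
    claiming to be from sender_domain (simplified pass/fail only)."""
    networks = AUTHORIZED.get(sender_domain, [])
    ip = ipaddress.ip_address(connecting_ip)
    return any(ip in network for network in networks)
```

A message from an unlisted IP would fail the check and, under a "-all" policy, be rejected or marked, which is exactly the forged-sender case SPF targets.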
Data Independence and Survival Best Practices for Social Networks
Karl Dubost and Olivier Thereaux, W3C Draft Proposal
A draft document on "Data Independence and Survival Best Practices" has been prepared by members of the W3C 'public-social-web-talk' list, relative to the Social Networks Incubator Group. Excerpt: "At a regular pace, we hear about social network catastrophes. One of the latest examples is the bookmark service Magnolia, which lost all of its users' data. Some people who had subscribed to their own RSS feed of bookmarks recovered their data. Social network catastrophes are of different types and with different consequences, but often revolve around personal data. These data can range from fully private to completely public, with a lot of granularity in between (an opacity defined by shades). We consider 'data' to be any kind of content produced by a person and placed on the Web. It could be photographs, drawings, text, code, etc. We also include enriched data: when data are put online, they will be enriched through human or automatic interactions (example: tagging on photographs). These enriched data are part of the personal data value, which is worth keeping in the long term... What are services? Services could be a simple blog, a social network, a simple online backup system, a messenger communication tool, etc. Some of these services are accessed through a browser, some through specific clients or Web applications. Many users are also unaware of what is done with their data and how several different online services belong to a single data aggregator company. Using services and exposing data happens in many different contexts. Some people use their own computers; some people share a computer at home or in a public space such as a classroom or an Internet cafe. This creates more challenges for both one's privacy and one's data independence. It is not always possible to save data locally when travelling and accessing or uploading data online...
It is very tempting to rely on distant services to keep copies of your own data when you lack space on your own personal computer. Without redundancy, this is a dangerous solution. If you can't have a local copy, you should at least duplicate your data on another service. Sometimes there are no obvious ways of keeping data... As a counterpart to the good practice of always having a local duplicate of data remotely held in a data silo, it is generally wise to organize remote backup of the kind of data you would usually only keep on a local computer. A local computer can be stolen, crash, break, burn, or be flooded. In the latter two cases, the data you may have backed up on any media in your house or office is likely to be destroyed, too..."
See also: the posting
U.K. Throws Weight Behind Open Source
Sean Michael Kerner, InternetNews.com
A new policy aims to promote open source within the British government while avoiding proprietary vendor lock-in. The U.K. is taking an aggressive stance on open source software, giving new teeth to directives designed to push and promote open source within the British government. The new policy from Tom Watson, the U.K.'s Minister for Digital Engagement, directs government agencies to adopt open source in new ways, even preferring it over proprietary software in some cases. It builds on an earlier, less ambitious policy that aimed chiefly to encourage consideration of open source software. Watson said he intended the moves to ensure that the national IT infrastructure isn't locked into proprietary software, and that open standards become the building blocks for government IT services. "This Government has long had the policy, last formally articulated in 2004, that it should seek to use open source where it gave the best value for money to the taxpayer in delivering public services," Watson wrote in the action plan. "So we consider that the time is now right to build on our record of fairness and achievement and to take further positive action to ensure that open source products are fully and fairly considered throughout government IT." The new plan calls for U.K. government agencies to specify that data is available in an open standard format, such as the OpenDocument Format (ODF). The U.K. isn't saying that open standards necessarily need to come from open source software, though. As part of the action plan, the U.K. will also support emerging open versions of previously proprietary standards, such as Adobe's PDF format and Microsoft's Office Open XML formats. Avoiding proprietary software lock-in is a key plank of the U.K.'s plan as well. The plan calls for the government to require proprietary vendors participating in procurement bids to specify exit costs for their software.
Open source is already deployed within the U.K.'s various IT infrastructures, according to Watson. He said that 50 percent of the main departmental Web sites use the open source Apache Web server. Additionally, the National Health Service (NHS) in Britain is moving to an open source operating system... [Note Simon Phipps' summary: "In (the memo) 'Open Source, Open Standards and Re-Use', the government's Minister for Digital Engagement significantly revised the brave but toothless policy of 2004 "that it should seek to use Open Source where it gave the best value for money to the taxpayer in delivering public services"...]
See also: Simon Phipps' blog response
XML Daily Newslink and Cover Pages sponsored by:
Sun Microsystems, Inc. http://sun.com
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: email@example.com
Newsletter unsubscribe: firstname.lastname@example.org
Newsletter help: email@example.com
Cover Pages: http://xml.coverpages.org/