A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover
This issue of XML Daily Newslink is sponsored by:
IBM Corporation http://www.ibm.com
Headlines
- WS-I Reliable Secure Profile Version 1.0
- XML? XSD? Check! Next Web Services
- Red Hat MRG V1 Supports Advanced Message Queuing Protocol (AMQP)
- New Open Grid Forum (OGF) Documents Issued for Public Comment
- Use of XACML Request Context to Obtain an Authorisation Decision
- WS-Naming: Location Migration, Replication, and Failure Transparency Support for Web Services
- Use of WS-Trust and SAML to Access a Credential Validation Service
- Mark Little on Transactions, Web Services, and REST
- Sun Bundles MySQL Database, GlassFish App Server
WS-I Reliable Secure Profile Version 1.0
J. Durand, A. Karmarkar, G. Pilz (eds), WS-I Working Group Draft
Members of the WS-I Reliable Secure Profile Working Group have released a public working draft of "Reliable Secure Profile Version 1.0." This document defines the WS-I Reliable Secure Profile 1.0, consisting of a set of non-proprietary Web services specifications, along with clarifications, refinements, interpretations, and amplifications of those specifications that promote interoperability. This Profile is intended to be composed with the WS-I Basic Profile 1.2, WS-I Basic Profile 2.0, WS-I Basic Security Profile 1.0, and WS-I Basic Security Profile 1.1. Composability of the Reliable Secure Profile (RSP) with the previously mentioned profiles offers the following guarantee to users: conformance of an artifact to RSP does not prevent conformance of that artifact to these other profiles, and vice versa. It treats "reliable secure" messaging with reference to specifications for Reliable Messaging, Secure Conversation, MakeConnection, and Secure Reliable Messaging. It incorporates by reference the relevant specifications published by IETF, OASIS, and W3C, as documented in Appendix A 'Referenced Specifications': (1) Web Services Reliable Messaging (WS-ReliableMessaging) 1.1; (2) Internationalized Resource Identifiers (IRIs); (3) Web Services Addressing 1.0 - SOAP Binding; (4) WS-SecureConversation 1.3; (5) Web Services Make Connection 1.0; (6) WS-SecurityPolicy 1.2. Section 1 "Introduction" introduces the Profile and explains its relationships to other profiles. Section 2 "Profile Conformance" explains what it means to be conformant to the Profile. Each subsequent section addresses a component of the Profile and consists of two parts: an overview detailing the component specifications and their extensibility points, followed by subsections that address individual parts of the component specifications.
See also: The Web Services-Interoperability Organization (WS-I)
XML? XSD? Check! Next Web Services
David Carver, 'Intellectual Cramps' Blog
Over the last year, I concentrated much of my community contributions and development time for Eclipse on the XML and XSD editors provided by the Eclipse Web Tools Platform. The reason was primarily frustration with using the tools against real-world B2B schemas produced by various standards organizations. Web Standard Tools 3.0 addresses many of the issues that plagued the XML and XSD tools, and is much better suited to handling the large and complex XML-related files that have to be exchanged. The next victim that needs to be patched and bandaged up is the Web Services tooling provided by the project: in particular the WSDL Editor and, more importantly, the WS-I Profile tooling and the Web Services Explorer... I'll be devoting some of my time to helping the Web Services team patch up and address the performance and compliance related issues. In particular, we need to make sure that Eclipse is the standard for WS-I profile compliance testing. It also should be extensible enough that new profile plugins can be added easily as the profiles are completed. The compliance reports also need to be able to run both inside and outside Eclipse. There needs to be a way to generate the WS-I compliance reports quickly and easily. Better support for newly released standards like WS-ReliableMessaging, MTOM, WS-MakeConnection, WS-Policy, and WS-PolicyAttachment needs to be included in the base frameworks. Some of these would be simple to do by including the appropriate XML schemas as optional jars to be installed. The same thing needs to happen for ebXML 2.0 and the forthcoming ebXML 3.0...
Red Hat MRG V1 Supports Advanced Message Queuing Protocol (AMQP)
Staff, Red Hat Announcement
Red Hat, Inc. has announced the availability of Red Hat Enterprise MRG V1, which offers significant Red Hat Enterprise Linux realtime enhancements as well as a high-performance, multi-platform messaging system. The Red Hat Enterprise MRG Beta was announced in late 2007. Since that time Red Hat has worked closely with partners and customers in an effort to ensure that the final product provides the desired features, quality, performance, and adherence to open protocol standards. With Red Hat as a leading contributor to Linux realtime kernel development and the Advanced Message Queuing Protocol (AMQP) project, Red Hat Enterprise MRG is being delivered on schedule while also exceeding its original performance goals. Performance testing of Red Hat Enterprise MRG messaging on both Intel- and AMD-based systems is ongoing, and has already demonstrated exceptional results. Most recently, an Intel 8-way (2 x Quad-Core) Xeon X5482 system configured with four Gigabit Ethernet network adapters running Red Hat Enterprise MRG was able to achieve over six million OPRA (Options Price Reporting Authority) messages/second. Adrian Kunzle, managing director, Head of Architecture, JPMorgan Chase: "JPMorgan welcomes Red Hat's use of the AMQP protocol throughout its MRG product. The AMQP protocol is an open standard for electronic messaging commonly used by enterprises to link together automated business processes. Today's news is another step towards AMQP becoming the preferred connectivity for automated business on the Internet." MRG Messaging is an open source, high-performance, reliable messaging distribution that implements the AMQP specification. It provides reliable messaging (pub-sub, event, and large message transfer), supporting reliability, clustering, and durable messaging. It is based on the Apache Qpid project, currently in incubation. AMQP (Advanced Message Queuing Protocol) is a specification for how commodity messaging and middleware works.
Its primary aim is to create an open, interoperable standard for messaging. AMQP defines both a wire-level protocol for messaging (the transport layer) and the higher-level semantics for messaging (the functional layer). It is completely free to use and is being developed by the AMQP Working Group, which intends to submit it to a standards body at some point.
See also: Advanced Message Queuing Protocol (AMQP)
New Open Grid Forum (OGF) Documents Issued for Public Comment
Staff, Open Grid Forum Announcement
OGF Editor Greg Newby announced the publication of several Open Grid Forum documents, together with an invitation for public comment. The Open Grid Forum (OGF) is a community of users, developers, and vendors leading the global standardization effort for grid computing. The OGF community consists of thousands of individuals in industry and research, representing over 400 organizations in more than 50 countries. OGF works to accelerate adoption of grid computing worldwide because its members believe grids will lead to new discoveries, new opportunities, and better business practices. The work of OGF is carried out through community-initiated working groups, which develop standards and specifications in cooperation with other leading standards organizations, software vendors, and users. One of the primary purposes of the Open Grid Forum is to publish documents. These documents provide information and specifications to developers and others involved with Grid computing. Documents are most often authored by members of OGF Working Groups or Research Groups (WGs and RGs), but may be submitted by any person. There is a multi-stage review for OGF documents, including editorial review and public comment. For Recommendation track documents, "proposed" recommendations are the basis for reference implementations and may, with sufficient experience, become full OGF recommendations. Three other OGF document types are also published: (1) Informational, to inform the community about a useful idea or set of ideas; (2) Experimental, to inform the community about a useful experiment, testbed, or implementation of an idea or set of ideas; (3) Community Practice, to inform the community of common practice or process, with the objective of influencing the community. Public comments are a very important part of the OGF document approval process. Through public comments, documents are given scrutiny by people with a wide range of expertise and interests.
Ideally, an OGF document will be self-contained, relying only on the other documents and standards it cites to be clear and useful. Public comments of any type are welcomed, from small editorial comments to broader comments about the scope or merit of the proposed document. Even the simple act of reading a document and noting in a public comment that you read it and found it suitable for publication is very useful, and provides valuable feedback to the document authors.
See also: OGF documents
Use of XACML Request Context to Obtain an Authorisation Decision
D. Chadwick, L. Su, R. Laborde (eds), Open Grid Forum PR
This OGF Proposed Recommendation was announced for public comment through August 13, 2008. It was produced by members of the OGSA Authorization WG (OGSA-AUTHZ-WG), chartered to define the specifications needed to allow for basic interoperability and pluggability of authorization components in the OGSA framework. "The purpose of this document is to specify a protocol for accessing a Policy Decision Point (PDP) by a Grid Policy Enforcement Point (PEP) in order to obtain access control decisions containing obligations. The protocol is a profile of the SAML2.0 profile of XACML, tailored especially for grid use. The document describes how an XACML request context can be created and transferred by a Grid PEP to a PDP in order to obtain authorisation decisions (possibly including obligations) for Grid applications. The XACML request context contains attributes of the subject, resource, action and environment, and is transported to the PDP in a SAMLv2 request message. The XACML response context contains an authorization decision and optional obligations that must be enforced by the PEP, either before, with, or after enforcement of the user's request... The SAML2.0 profile of XACMLv2.0 specifies extensions to SAML2.0 to enable an XACML request context to be carried in a SAML request message to a PDP, as an XACMLAuthzDecisionQuery extension, and an XACML response context to be carried in a SAML response message to the PEP, as an XACMLAuthzDecisionStatement extension. This profile uses the SAML2.0 profile of XACMLv2.0 specified in Section 3 of "SAML 2.0 profile of XACMLv2.0" to carry the XACMLAuthzDecisionQuery (as a SAML Request extension) and return the XACMLAuthzDecisionStatement (as a SAML Statement extension used in a SAML Response)..."
See also: the OASIS Extensible Access Control Markup Language (XACML) TC
WS-Naming: Location Migration, Replication, and Failure Transparency Support for Web Services
Andrew Grimshaw, Mark Morgan, Karolina Sarnowska; OGF Journal
Naming transparencies, i.e., abstracting the name and binding of the entity being used from the endpoints that are actually doing the work, are used in distributed systems to simplify application development by hiding the complexity of the environment. In this paper we demonstrate how to apply traditional distributed systems naming and binding techniques in the Web Services realm. Specifically, we show how the WS-Naming profile on WS-Addressing Endpoint References can be used for identity, transparent failover, replication, and migration. We begin with a discussion of the traditional distributed systems transparencies. We then present four detailed use cases. Next, we provide brief background on both WS-Addressing and WS-Naming. Finally, we show how WS-Naming can be used to provide transparent implementations of our use cases. Naming as a means for providing transparency has been used extensively both in the earlier cited projects and in standards efforts over the years. The best known is the Domain Name Service (DNS), which maps strings to IP addresses. DNS is clearly a successful standard, but it has some significant limitations. Most importantly, it was not designed to support a highly dynamic binding environment where the mappings could change rapidly. More recent naming schemes include Life Science IDentifiers (LSIDs) from OMG and Handle.net from CNRI. LSIDs share many of the same goals and objectives as WS-Naming, but their design does not fit well in the context of the existing Web Services infrastructures: LSIDs must always be resolved, requiring clients to be LSID-aware. The same applies to Handle.net handles, which additionally carry a licensing model that has made their adoption difficult for many commercial enterprises. After almost a year of discussion of the use cases and requirements in both face-to-face meetings and teleconferences, the OGSA-Naming Working Group in the OGF developed the following set of required and desired properties for a Web Services naming scheme.
There was not complete agreement on some of the requirements. In particular, the desirability of aliases was hotly debated... WS-Naming (a profile on WS-Addressing) attempts to satisfy the requirements... WS-Naming was developed to address two shortcomings of WS-Addressing. First, EPRs cannot be compared against each other in any canonical way to determine if they refer to the 'same' endpoint. Indeed, the specification explicitly states that EPRs cannot be compared. Second, given the way many WS-Addressing implementations work, an endpoint cannot migrate... WS-Naming describes two extensibility profiles on the standard WS-Addressing specification whereby target service endpoints add additional information to their WS-Addressing EndpointReferenceType's metadata element: namely, an endpoint identifier (EPI) element that serves as a globally unique (both in space and time) abstract name for that resource, and a list of zero or more resolver EPRs.
See also: the OGSA Naming Working Group
Use of WS-Trust and SAML to Access a Credential Validation Service
Chadwick and Linying Su, OGF Final Draft Technical Document
This OGF final draft document provides information to the Grid community about a proposed standards-track protocol. It defines a protocol for an authorization component to access an external credential validation service (CVS) prior to calling a policy decision point (PDP). The protocol is a profile of a SAMLv2 attribute assertion carried by WS-Trust. The CVS is a necessary functional component in authorization which performs the task of validating the user's presented credentials before the valid attributes (extracted from the credentials) are used by the PDP in order to make an access control decision. The protocol allows tokens/credentials to be presented to the CVS in any format, but always returns tokens formatted as XACML attributes, so that they are ready for submission to the PDP. There are different ways in which the CVS access protocol might be used. The protocol might be called by the context handler in either the policy enforcement point (PEP) or the PDP, and might carry the authenticated name of the subject with or without a bag of credentials, and with or without references to various credential issuing services (CISs) that should be contacted to pull credentials. The PEP may provide any arbitrary set of credentials, e.g., member of university X, member of grid project Y, registered doctor, certified engineer, etc., issued by any arbitrary set of attribute authorities (AAs) or credential issuers, in any standard format, as well as any arbitrary set of references (meta-information) to credential issuing services. This document does not specify how the PEP obtains these credentials or CIS meta-information, but they might be provided by the end user, or the end user's client software, or by another component of the authorization infrastructure, such as an out-of-band meta-data transfer service.
The target resource will only trust a limited set of CISs, and these trust relationships will be configured into its Credential Validation Service (CVS) in the form of a Credential Validation Policy. A CVS will validate the presented credentials according to its configured Credential Validation Policy, and will return the set of valid attributes (in XACML format) to the PEP. The PEP may then pass these to the PDP for it to make an access control decision... The CVS can operate in three ways: credential push mode, credential pull mode, or both modes...
See also: WS-Trust 1.4
Mark Little on Transactions, Web Services, and REST
Stefan Tilkov, InfoQ
In this interview, Red Hat Director of Standards Mark Little talks about transactions, their role in web services, and the possibility of an end to the Web services vs. REST debate. On web services and transactions: "Web services transactions development has been going on for almost as long as web services themselves... There is a range of extended transactions. Basically the principle behind extended transactions is to relax the very properties that are inherent within an atomic transaction. If you go and look at the literature you'll find that atomic transactions are also known as ACID transactions. That is ACID: A for atomic, everything happens or nothing happens; C for consistent, the state of the system moves from one consistent state to another; I for isolation, so you can't see dirty data; and D for durable, so that if the work happens it is made persistent, and even if there is a crash you'll eventually get the same state. Extended transactions relax those properties, so you might relax atomicity... So that's what we have been doing over the last eight years: we have been looking at the extended transaction work that has been done and trying to come up with a way of allowing people to develop extended transaction models that are good for their particular use case, rather than, as the transaction industry did for the twenty years prior to this, trying to shoehorn the ACID transaction into absolutely everything. Let's have targeted models, targeted implementations, and we have got there. It has taken eight or nine years, but finally in OASIS there's the WS-TX technical committee, which has defined a framework, WS-Coordination, that allows you to plug in different intelligences, which would be the different types of extended transaction models. Out of the box, the standard provides two extended transaction models, because of the use cases that we currently have that we need to adopt.
One is Business Activity, which is for these long running units of work; the other is Atomic Transaction. Despite what I said earlier about atomic transactions not being good for web services, recall that back when web services were first starting, and even through to today, people have been using them for interoperability as much as, if not actually more than, for Internet-scale computing. Atomic Transaction in the WS-TX spec is really there for interoperability between heterogeneous systems running on closely coupled networks. You could use it across the Internet, there's absolutely nothing to prevent you from doing that, but there are really good reasons why you shouldn't. The Atomic Transaction spec in WS-TX has given us transaction industry interoperability between obviously Red Hat, IBM, Microsoft, and a couple of other companies, all heterogeneous transaction systems, within about a year and a half of the spec's being finalized, probably less actually; whereas if you look at when we last tried to do this, which was in the OMG with the Object Transaction Service work, that really took us about ten years. So there were definite benefits to doing it in web services.
See also: the WS-TX v1.2 specification review
Sun Bundles MySQL Database, GlassFish App Server
Charles Babcock, InformationWeek
In one of the first results of its $1 billion purchase of MySQL, Sun Microsystems has packaged the popular open source database with its GlassFish application server and is offering the two as a $65,000-per-year bundle. GlassFish runs Java applications and through the work of the open source project developers, some popular scripting language applications as well, primarily Ruby, Groovy, Perl, and Python. A capability to run PHP scripts is being worked on within the project. GlassFish is both a standard—it's the reference implementation of an application server for Java Enterprise Edition 5—and it's a popular open source project hosted by Sun. Sun's Mark Herring, VP of marketing for the Sun software infrastructure group, said Sun is seeing 500,000 downloads of GlassFish a month, a rate that indicates a healthy pickup by developers. In comparison, Sun claims MySQL gets downloaded 60,000 to 70,000 times a day. Downloads don't always mean what they seem to; many are thrown away without being put to use and others reflect repeat downloads by one developer, looking for the latest updates. Nevertheless, Sun officials are eager to pit their GlassFish/MySQL combination against commercial database/application server offerings that Sun estimates would cost $3 million to use over a three-year period. GlassFish with MySQL would cost about $240,000 in a similar configuration over the same timeframe, said Herring. In both cases, the estimates are based on 10 two-way servers, each running a database system, and 20 two-way servers, each running an application server, both clustered approaches designed to cope with heavy Web traffic. "It's meant to be a disruptive force," Herring said. The Apache Web server was open source code that successfully cut into the market for commercial Web servers offered by Microsoft, IBM and others. 
Sun has tested and certified GlassFish to run with MySQL, making sure interfaces between the two perform as planned, for customer ease of implementation and use. Herring said the bundle was aimed at businesses with 1,000 employees or fewer who don't want to invest heavy IT resources to implement an application server and database.
See also: the GlassFish Community
Sponsors
XML Daily Newslink and Cover Pages sponsored by:
IBM Corporation | http://www.ibm.com |
Oracle Corporation | http://www.oracle.com |
Primeton | http://www.primeton.com |
Sun Microsystems, Inc. | http://sun.com |
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/