This issue of XML Daily Newslink is sponsored by:
IBM Corporation http://www.ibm.com
- XML: The Bridge Between GWT and PHP
- Recently Launched: W3C Social Web Incubator Group
- One Time Password (OTP) Pre-Authentication
- Fake Non-Realtime Non-Twitter Non-Video Blog from XML Prague
- Storage Made Easy With S3
- Alfresco-Drupal Integration via CMIS
- Dynamic Trust Management
- TIBCO Donates Enterprise AJAX Tool to Dojo
- Graphical Assembly Toolkit for Semantic Web Apps: SPARQL Inferencing
XML: The Bridge Between GWT and PHP
Federico Kereki, IBM developerWorks
Google Web Toolkit (GWT) applications, apart from connecting to servlets in time-honored Java fashion, can also use PHP Web services to send and receive data in XML. GWT allows easy access to server-side servlets programmed in the Java language, and data is passed transparently, behind the scenes, between client and server. However, as you work with GWT, you are not limited to communicating with such servlets, and you can freely exchange data with all types of Web services. In many cases (for simple services), you can do these transfers with plain text, but whenever the data becomes structured or just more complicated (think RSS, for example), odds are that XML will represent it. This article examines a simple GWT application and a couple of PHP Web services, showing several different ways to produce and consume XML documents... The main point to remember is that GWT is not limited to using its own RPC method and can also happily coexist with XML, producing and consuming such documents with ease.
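The server side of such an exchange can be any XML producer; on the consuming side, the core task is parsing the returned document and pulling out the fields you need. A minimal sketch in plain Java, using the JDK's DOM parser rather than GWT's client-side XMLParser, with an invented RSS-like payload:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class XmlConsumer {
    // Extract every <title> from an XML payload, as a client of a PHP feed service might.
    public static List<String> titles(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            NodeList nodes = doc.getElementsByTagName("title");
            List<String> out = new ArrayList<>();
            for (int i = 0; i < nodes.getLength(); i++) {
                out.add(nodes.item(i).getTextContent());
            }
            return out;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String xml = "<rss><channel><item><title>First</title></item>"
                   + "<item><title>Second</title></item></channel></rss>";
        System.out.println(titles(xml)); // [First, Second]
    }
}
```

In a real GWT client the same extraction would run against com.google.gwt.xml.client.XMLParser, but the shape of the code is the same.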
See also: the Google Web Toolkit
Recently Launched: W3C Social Web Incubator Group
W3C has announced the creation of the Social Web Incubator Group. Incubator Activity work is not on the W3C standards track. W3C Incubator Activity projects are typically either (1) potentially foundational technologies—including ideas for technologies with the potential for broad use to support the infrastructure of the Web, or (2) Web-based applications built upon the infrastructure of the Web. "The mission of the new W3C Social Web Incubator Group is to understand the systems and technologies that permit the description and identification of people, groups, organizations, and user-generated content in extensible and privacy-respecting ways. The topics covered with regard to the emerging Social Web include, but are not limited to: accessibility, internationalization, portability, distributed architecture, privacy, trust, business metrics and practices, user experience, and contextual data. The scope includes widget platforms (such as OpenSocial, Facebook and W3C Widgets), as well as other user-facing technology, such as OpenID and OAuth, and mobile access to social networking services. The group is concerned also with the extensibility of Social Web descriptive schemas, so that the ability of Web users to describe themselves and their interests is not limited by the imagination of software engineers or Web site creators. Some of these technologies are independent projects, some were standardized at the IETF, W3C or elsewhere, and users of the Web shouldn't have to care. The purpose of this group is to provide a lightweight environment designed to foster and report on collaborations within the Social Web-related industry or outside, which may, in due time, affect the growth and usability of the Social Web, rather than to create new technology.
Our goal is to provide a forum through which collaborations relating to social web standards can be formed, and through which the results of practical standards-oriented collaborations can be reported and discussed. This is not a Working Group, although the members of the group are free to undertake work together, including and especially work outside the W3C, and to report it and discuss it within the group and the wider W3C..."
See also: the W3C Incubator Activity
One Time Password (OTP) Pre-Authentication
Gareth Richards (ed), IETF Internet Draft
Members of the IETF Kerberos Working Group (KRB-WG) have released an updated Internet Draft for the "OTP Pre-Authentication" specification. The document describes the use of the Kerberos framework to carry out One Time Password (OTP) authentication. Kerberos provides a means of verifying the identities of principals (e.g., a workstation user or a network server) on an open (unprotected) network. This is accomplished without relying on assertions by the host operating system, without basing trust on host addresses, without requiring physical security of all the hosts on the network, and under the assumption that packets traveling along the network can be read, modified, and inserted at will. Kerberos performs authentication under these conditions as a trusted third-party authentication service by using conventional (shared secret key) cryptography. Extensions to Kerberos (outside the scope of this document) can provide for the use of public key cryptography during certain phases of the authentication protocol. Such extensions support Kerberos authentication for users registered with public key certification authorities and provide certain benefits of public key cryptography in situations where they are needed... This "OTP Pre-Authentication" draft describes a FAST factor that allows One-Time Password (OTP) values to be used in Kerberos V5 (RFC 4120) pre-authentication in a manner that does not require use of the user's Kerberos password. The system is designed to work with different types of OTP algorithms such as time-based OTPs, counter-based tokens, and challenge-response systems such as that of RFC 2289. It is also designed to work with tokens that are electronically connected to the user's computer via means such as a USB interface. This FAST factor provides the following facilities: client-authentication, replacing-reply-key and KDC-authentication. It does not provide the strengthening-reply-key facility.
This proposal is partially based upon previous work on integrating single-use authentication mechanisms into Kerberos and allows for the use of the existing password-change extensions to handle personal identification number (PIN) change as described in RFC 3244..."
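The counter-based tokens the draft mentions are typified by the RFC 4226 HOTP algorithm (defined elsewhere, not in this draft): an HMAC-SHA1 over a big-endian counter, dynamically truncated to a short decimal code. A minimal sketch, verifiable against the RFC's published test vectors:

```java
import java.nio.ByteBuffer;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class Hotp {
    // RFC 4226 HOTP: HMAC-SHA1 over the 8-byte big-endian counter,
    // then "dynamic truncation" down to a `digits`-digit decimal code.
    public static int hotp(byte[] key, long counter, int digits) {
        try {
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(key, "HmacSHA1"));
            byte[] h = mac.doFinal(ByteBuffer.allocate(8).putLong(counter).array());
            int offset = h[h.length - 1] & 0x0f;        // low nibble picks the window
            int bin = ((h[offset] & 0x7f) << 24)
                    | ((h[offset + 1] & 0xff) << 16)
                    | ((h[offset + 2] & 0xff) << 8)
                    |  (h[offset + 3] & 0xff);
            return bin % (int) Math.pow(10, digits);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // RFC 4226 Appendix D test key: ASCII "12345678901234567890"
        byte[] key = "12345678901234567890".getBytes(java.nio.charset.StandardCharsets.US_ASCII);
        System.out.println(hotp(key, 0, 6)); // 755224 (published test vector)
    }
}
```

The draft's contribution is carrying such a value inside a FAST pre-authentication exchange, not the OTP computation itself.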
See also: the IETF Kerberos Working Group (KRB-WG)
Fake Non-Realtime Non-Twitter Non-Video Blog from XML Prague
Rick Jelliffe, O'Reilly Technical
I wasn't there, but the XML Prague presentations are online now. Here are my thoughts from rummaging through some of them. There was a strong emphasis on XSLT and XPath-based systems: I think this reflects a technical opportunity that has been difficult for the big boys to take advantage of, since it does not fit into their product lines or marketing stories well. [Examples:] (1) Michael Kay has a presentation, 'XML Schema moves forward'. Michael has implemented large chunks of XML Schema in his SAXON XSLT2 processor, and has excellent access to the XML Schema WG as an editor of XSLT2 and XPath2 and member (Invited Expert) of the W3C XML Schema WG. XML Schema 1.1 is currently a 'Working Draft in Last Call' at W3C... It looks like a stimulating talk I would have enjoyed. I use SAXON XSLT on almost every project, and most programmers I know use it by default. If you are using Java, it is certainly worth looking at seriously. It has a .NET version too, which should be just as good. (2) Tony Graham has a good general talk on 'Testing XSLT'. He quite likes unit tests in moderation, but is not keen on metrics in general, I think. The end is the most interesting, where he emphasizes that human eyes are always needed for testing... The talk has a good list of current test tools. (3) Jeni Tennison follows this up with a specific look at XSpec, a unit testing system for XSLT that looks reasonable. A scenario (equivalent to a pattern in Schematron) has a context in the input document (equivalent to a rule context in Schematron) and then an 'expect' for patterns in the output document (equivalent to an assertion test to an external document in Schematron)... (4) Ken Holman gave a talk, 'Introduction to Code-Lists in XML'. I think this should be required reading for any professional involved in schema creation...
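XSpec's scenario/context/expect triad boils down to: run a stylesheet against a context and assert an expectation on the result. A minimal sketch of that idea using the JDK's built-in XSLT 1.0 processor (not XSpec's actual syntax, which is itself written in XSLT):

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XsltCheck {
    // Apply a stylesheet to an input document and return the serialized result.
    public static String transform(String xslt, String input) {
        try {
            StringWriter out = new StringWriter();
            TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new StringReader(xslt)))
                    .transform(new StreamSource(new StringReader(input)), new StreamResult(out));
            return out.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // An XSpec-style check: a context (the input) plus an expectation (the asserted output).
    public static String demo() {
        String xslt = "<xsl:stylesheet version='1.0' "
                + "xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
                + "<xsl:output method='text'/>"
                + "<xsl:template match='/greeting'>"
                + "<xsl:value-of select='concat(\"Hello, \", .)'/>"
                + "</xsl:template></xsl:stylesheet>";
        return transform(xslt, "<greeting>Prague</greeting>");
    }

    public static void main(String[] args) {
        System.out.println(demo()); // Hello, Prague
    }
}
```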
See also: the XML Prague 2009 web site
Storage Made Easy With S3
Andrew Glover, IBM developerWorks
Amazon Simple Storage Service (S3) is a publicly available service that Web application developers can use for storing digital assets such as images, video, music, and documents. S3 provides a RESTful API for interacting with the service programmatically. Learn how to use the open source JetS3t library to leverage Amazon's S3 cloud service for storing and retrieving data... The cloud is an abstract notion of a loosely connected group of computers working together to perform some task or service that appears as if it is being fulfilled by a single entity. The architecture behind the scenes is also abstract: each cloud provider is free to design its offering as it sees fit. Software as a Service (SaaS) is a related concept, in that the cloud offers some service to users. The cloud model potentially lowers users' costs because they don't need to buy software and the hardware to run it -- the provider of the service has done that already. S3 is a publicly available service that lets Web developers store digital assets (such as images, video, music, and documents) for use in their applications. When you use S3, it looks like a machine sitting on the Internet that has a hard drive containing your digital assets. In reality, a number of machines (spread across a geographical area) contain the digital assets (or pieces of them, perhaps). Amazon also handles all the complexity of fulfilling a service request to store your data and to retrieve it. You pay a small fee (around 15 cents per gigabyte per month) to store assets on Amazon's servers and another to transfer data to and from Amazon's servers. Amazon's S3 service exposes a RESTful API, which enables you to access S3 in any language that supports communicating over HTTP. Rather than reinvent the wheel, you can use the JetS3t project, an open source Java library that abstracts away the details of working with S3's RESTful API, exposing the API as normal Java methods and classes. It's always best to write less code, right?
And it makes a lot of sense to borrow someone else's hard work too. As you'll see in this article, JetS3t makes working with S3 and the Java language a lot easier and ultimately a lot more efficient... Logically, S3 is a global storage area network (SAN), which appears as a super-big hard drive where you can store and retrieve digital assets. Technically though, Amazon's architecture is a bit different. Assets you choose to store and retrieve via S3 are called objects. Objects are stored in buckets. You can map this in your mind using the hard-drive analogy: objects are to files as buckets are to folders (or directories). And just like a hard drive, objects and buckets can be located via a Uniform Resource Identifier (URI)...
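Among the details JetS3t hides is S3's request signing. At the time of writing, S3's REST API authenticates each request with an HMAC-SHA1 signature over a canonical string, sent as "Authorization: AWS accessKeyId:signature". A simplified sketch of that signing step (real canonicalization has more rules, and the credentials and resource below are invented):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class S3Sign {
    // Simplified S3 REST signature:
    // base64(HMAC-SHA1(secretKey, verb + "\n" + contentMd5 + "\n" + contentType
    //                              + "\n" + date + "\n" + canonicalizedResource))
    public static String sign(String secretKey, String verb, String contentMd5,
                              String contentType, String date, String resource) {
        try {
            String stringToSign = verb + "\n" + contentMd5 + "\n" + contentType
                    + "\n" + date + "\n" + resource;
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
            return Base64.getEncoder()
                    .encodeToString(mac.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Hypothetical credentials; a real request would carry this in the header:
        //   Authorization: AWS <accessKeyId>:<signature>
        String sig = sign("my-secret-key", "GET", "", "",
                "Tue, 27 Mar 2007 19:36:42 +0000", "/mybucket/photo.jpg");
        System.out.println(sig.length()); // 28 (base64 of a 20-byte SHA-1 HMAC)
    }
}
```

With JetS3t, none of this is your problem: you construct a service object with your credentials and call ordinary get/put methods on buckets and objects.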
See also: the Amazon S3 web site
Alfresco-Drupal Integration via CMIS
Jeff Potts, Optaros Screencast
"I created a new screencast that shows the Alfresco-Drupal CMIS integration in action over at Optaros Labs. The screencast shows content moving back-and-forth between Alfresco and Drupal, content being displayed in a Drupal site that lives in Alfresco, and a CMIS CQL query being executed against the Alfresco repository from Drupal. Using the CMIS and CMIS Alfresco modules Optaros recently contributed (based on joint development effort between Optaros, Acquia, and Alfresco), we demonstrate: (1) HTML content created in Alfresco being 'pushed' to Drupal and converted to a standard Drupal node, where it can be affected by any of the existing Drupal modules like 5-star rating or comments; (2) Content created in Drupal being 'pushed' back to Alfresco, where it could trigger a workflow, archiving, or any other Alfresco action; (3) The ability to browse and query, from within the Drupal interface, CMIS repositories, including standard keyword search as well as CMIS queries. Although this demo is specific to Alfresco and Drupal, the CMIS module itself is designed to facilitate integration of other repositories as they become CMIS compliant."
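A CMIS query of the kind shown in the screencast is a SQL-like statement shipped to the repository's query service. A minimal sketch of building such a request URL (the endpoint path and query parameter name here are hypothetical illustrations, not the actual Alfresco binding):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class CmisQuery {
    // Build a query URL for a (hypothetical) CMIS REST query endpoint.
    public static String queryUrl(String endpoint, String statement) {
        return endpoint + "?q=" + URLEncoder.encode(statement, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // SQL-like CMIS statement: full-text search over documents.
        String url = queryUrl("http://localhost:8080/alfresco/service/cmis/query",
                "SELECT * FROM cmis:document WHERE CONTAINS('press release')");
        System.out.println(url);
    }
}
```

The Drupal module wraps the equivalent of this in PHP, so site builders see query results as ordinary Drupal content.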
Dynamic Trust Management
Matt Blaze, Sampath Kannan, (et al.), IEEE Computer
A service-oriented architecture (SOA) separates functions into services, which process requests from peers over a network. In processing a request, the service can, in turn, send requests to secondary services and so on. The Global Information Grid (GIG), an ongoing effort by the US Department of Defense (DoD) and Intelligence Community (IC), rationalizes and modernizes the architecture of US network-centric operations. It couples a common network architecture to advanced information assurance techniques and, as GIG's name implies, focuses on the information the network carries and the services it provides, rather than on the network's attributes. There are clear tradeoffs among security, flexibility, and cost in possible designs for such SOAs. Traditional (pre-GIG) DoD network architectures have created logical airgaps between different networks such as the NIPRNET and SIPRNET, and services are replicated in each such network environment. Information security is, in principle, guaranteed with separated networks, since there is no network path from the more secure to the less secure network. Although the GIG is a DoD-specific project, many of the trust management problems it exposes also occur naturally in existing and emerging commercial and other public networked computing environments, particularly those based on SOAs... Trust management provides a unified approach to specifying and interpreting security policies, credentials, and relationships. We define some important trust management terms informally. An access request seeks access to a resource, possibly in a specified mode. A policy is a specification of conditions under which access may be granted. A credential is a claim of meeting the conditions of some policy. A transaction is an access request followed by the granting of the request and subsequent access to the resource. An agent or component is any entity that interacts with other agents in the system by means of transactions. 
An agent is trusted in a transaction if its access request is granted. The KeyNote system's goal is to define notions of trust using policy specifications and to check that a transaction request has the credentials necessary to satisfy the relevant policy. Thus, rather than simply classifying the world as trusted and untrusted, this approach allows more sophisticated policies and notions of trust. For example, trust that an unknown party's public key is correct can be built up by having trusted parties certify this to be the case. Thus, systems such as KeyNote let users have very fine-grained notions of trust and manage trust flexibly. An obvious but important point is that most trust management systems, including KeyNote, start with a complete absence of trust between parties. Only the active presentation of a satisfactory credential can overcome this lack of trust. We believe that an architecture based on trust management systems and languages offers an extremely promising approach for analysis and compliance enforcement in SOAs...
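The article's vocabulary (policy, credential, default absence of trust) can be reduced to a toy check: deny unless a presented credential satisfies the resource's policy. This is an illustrative reduction, not KeyNote's actual assertion language, and the role names are invented:

```java
import java.util.Map;
import java.util.Set;

public class TrustCheck {
    // A policy maps each resource to the condition (here, a required role) for access.
    // A credential is the set of roles an agent can prove. The starting point is
    // a complete absence of trust: with no policy or no satisfying credential, deny.
    public static boolean grant(Map<String, String> policy,
                                String resource,
                                Set<String> credentialRoles) {
        String required = policy.get(resource);
        if (required == null) return false;         // no policy covers this resource: deny
        return credentialRoles.contains(required);  // grant only on a satisfying credential
    }

    public static void main(String[] args) {
        Map<String, String> policy = Map.of("/reports", "analyst");
        System.out.println(grant(policy, "/reports", Set.of("analyst"))); // true
        System.out.println(grant(policy, "/reports", Set.of()));          // false: no trust by default
    }
}
```

Real trust-management systems generalize this with signed credentials, delegation chains, and a policy language for the conditions, but the deny-by-default structure is the same.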
See also: GlobalSecurity.org
TIBCO Donates Enterprise AJAX Tool to Dojo
Darryl K. Taft, eWEEK
See also: the Dojo Foundation web site
See also: the W3C SPARQL Working Group
XML Daily Newslink and Cover Pages sponsored by:
Sun Microsystems, Inc. http://sun.com
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: firstname.lastname@example.org
Newsletter unsubscribe: email@example.com
Newsletter help: firstname.lastname@example.org
Cover Pages: http://xml.coverpages.org/