This issue of XML Daily Newslink is sponsored by:
- Announcing the Beta Release of the Zermatt Developer Identity Framework
- Microsoft Code Name "Zermatt" White Paper for Developers
- W3C Publishes First Working Draft for POWDER Formal Semantics
- A Standard for Openness?
- What is New in <oXygen/> XML Editor 9.3?
- Python Backing Eyed for NetBeans
- Integrate Your PHP Application with Google Calendar
- Evaluating XPaths from the Java platform
Announcing the Beta Release of the Zermatt Developer Identity Framework
Vittorio Bertocci, Blog
"We just made available for download the bits of the Beta of 'Zermatt' Developer Identity Framework. Zermatt is the codename of a .NET framework that helps developers build claims-aware applications to address challenging application security requirements using a simplified application access model. Let me expand a bit on that. If you want to develop applications that take advantage of claims and Identity Metasystem goodness in general, Zermatt makes your life easier by providing base classes and controls, but especially capabilities and a programming model that take care of most of the plumbing for you. Regardless of the role (IP, RP, subject) or the style (Active, Passive, 'Passive-Aggressive'), Zermatt shields you from the sheer handling of protocols and tokens and provides you with a great model for externalizing your access logic... Zermatt beta provides: (1) An HttpModule [the Federated Access Module, or FAM] that takes care of handling the token processing pipeline: fully extensible and web.configurable, it exposes programmable events for every relevant step in the authentication lifecycle; (2) A new claim model, which unifies token and principal programming models, achieving direct reuse of some classic access control techniques (IsInRole, PrincipalPermission) without requiring a rewrite; (3) Visual ASP.NET controls which take care of enhancing web pages with capabilities such as information card signin and one-off information card requests, passive signin, session management and passive STS capabilities.
All of those include comprehensive property management and a rich events model; (4) Full control of session management: intended audience, pages whitelist, session duration, custom session tickets, etc.; (5) A unified token handling model that works across ASP.NET and WCF applications alike; (6) Base classes for authoring an STS, which automatically handle historically tedious tasks such as RST and RSTR parsing; (7) Native support for handling information cards: serialization, deserialization, issuance, and integration with the STS programming model for simplifying the development of CardSpace-ready STSes; (8) Delegate authentication: applications can now request new tokens on behalf of their callers, greatly simplifying three-tier architectures and enabling a whole new class of scenarios..."
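Zermatt itself is a .NET framework, so its actual API is not shown here; the following Java sketch (all names are hypothetical, not Zermatt's) only illustrates the claims-based access check described above, in which an application reasons over claims asserted by a trusted issuer rather than querying its own user-account silo:

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a claims-based access check (not the Zermatt API):
// a "role" is just one more claim asserted by the trusted issuer, so the
// familiar role check reduces to a lookup over the issued claim set.
public class ClaimsDemo {
    // Each claim is a (type, value) pair, e.g. ("role", "purchaser").
    static boolean isInRole(List<Map.Entry<String, String>> claims, String role) {
        return claims.stream().anyMatch(c ->
            c.getKey().equals("role") && c.getValue().equals(role));
    }

    public static void main(String[] args) {
        List<Map.Entry<String, String>> claims = List.of(
            Map.entry("name", "alice"),
            Map.entry("email", "alice@example.com"),
            Map.entry("role", "purchaser"));
        System.out.println(isInRole(claims, "purchaser")); // true
        System.out.println(isInRole(claims, "admin"));     // false
    }
}
```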
See also: Kim Cameron's blog
Microsoft Code Name "Zermatt" White Paper for Developers
Keith Brown, Pluralsight White Paper
Microsoft's Federated Identity team announced a public beta of Microsoft Code Name "Zermatt". A 33-page 2008-07 White Paper on Zermatt by Keith Brown (for "informational purposes only") introduces concepts and terminology to help developers understand the benefits and concepts behind the claims-based model of identity. Brown focuses upon building relying parties using Zermatt, talks about issuance and security token services (STS), and provides an example of an STS built using Zermatt. "It's not surprising nowadays to see a single company with tens or hundreds of web applications and services, many of which have their own private silo for user identities, and most of which are hardwired to use one particular means of authentication. Developers know how tedious it is to build identity support into each application, and IT pros know how expensive it is to manage the resulting set of applications. One very useful step toward solving the problem has been to centralize user accounts into an enterprise directory... Zermatt is a set of .NET Framework classes; it is a framework for implementing claims-based identity in your applications... When you build claims-aware applications, the user presents her identity to your application as a set of claims. One claim could be the user's name, another might be her email address. The idea here is that an external identity system is configured to give your application everything it needs to know about the user with each request she makes, along with cryptographic assurance that the identity data you receive comes from a trusted source... The user delivers a set of claims to your application piggybacked along with her request. In a web service, these claims are carried in the security header of the SOAP envelope. Regardless of how they arrive, they must be serialized somehow, and this is where security tokens come in. A security token is a serialized set of claims that is digitally signed by the issuing authority. 
The signature is important: it gives you assurance that the user didn't just make up a bunch of claims and send them to you... A security token service (STS) is the plumbing that builds, signs, and issues security tokens using interoperable protocols, [and] Zermatt makes it easy to build your own STS... In order to make [transactions] interoperable, several WS-* standards are used [in the scenario]. Policy is retrieved using WS-MetadataExchange, and the policy itself is structured according to the WS-Policy specification. The STS exposes endpoints that implement the WS-Trust specification, which describes how to request and receive security tokens. Most STSs today issue SAML tokens (Security Assertion Markup Language). SAML is an industry-recognized XML vocabulary that can be used to represent claims in an interoperable way. This adherence to standards means that you can purchase an STS instead of building it yourself..."
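A real STS issues SAML tokens signed with XML Signature; the simplified Java sketch below (my own stand-in, not anything from the white paper) uses an HMAC in place of the issuer's signature purely to show why the signature matters: a relying party can detect that claims were tampered with after issuance.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Simplified stand-in for an issuer-signed security token. Here an HMAC
// over the serialized claims plays the role of the issuer's signature, so
// any edit to the claims invalidates the token.
public class TokenDemo {
    static String sign(String claims, byte[] issuerKey) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(issuerKey, "HmacSHA256"));
        String sig = Base64.getEncoder().encodeToString(
            mac.doFinal(claims.getBytes(StandardCharsets.UTF_8)));
        return claims + "|" + sig;                 // token = claims + signature
    }

    static boolean verify(String token, byte[] issuerKey) throws Exception {
        int sep = token.lastIndexOf('|');
        // Re-sign the claims portion and compare against the presented token.
        return token.equals(sign(token.substring(0, sep), issuerKey));
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "shared-issuer-key".getBytes(StandardCharsets.UTF_8);
        String token = sign("name=alice;role=purchaser", key);
        System.out.println(verify(token, key));                             // true
        System.out.println(verify(token.replace("alice", "mallory"), key)); // false
    }
}
```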
See also: NetworkWorld
W3C Publishes First Working Draft for POWDER Formal Semantics
Stasinos Konstantopoulos and Phil Archer (eds), W3C Technical Report
W3C announced that members of the Protocol for Web Description Resources (POWDER) Working Group have published a First Public Working Draft for "Protocol for Web Description Resources (POWDER): Formal Semantics." The document underpins the Protocol for Web Description Resources (POWDER). It describes how the relatively simple operational format of a POWDER document can be transformed through two stages, first into a more tightly constrained XML format (POWDER-BASE), and then into an RDF/OWL encoding (POWDER-S) that may be processed by Semantic Web tools. The Protocol for Web Description Resources, POWDER, offers a simple method of associating RDF data with groups of resources. Its primary 'unit of information' is the Description Resource (DR). This comprises three elements: (1) attribution—who is providing the description; (2) scope—defined as a set of IRIs over which the description applies to the resources de-referenced from those IRIs; (3) the description itself—the 'descriptor set'. To some extent, this approach is in tension with the core semantics of RDF and OWL. To resolve that tension, it is necessary to extend RDF semantics as described below. In order to minimize the required extension, while at the same time preserving the relatively simple encoding of POWDER in XML which is generally readable by humans, we define a multi-layered approach. The operational semantics, i.e. the encoding of POWDER in XML, is first transformed into a more restricted XML encoding that is less easily understood by humans and depends on matching IRIs against regular expressions to determine whether or not they are within the scope of the DR. This latter encoding is, in its own turn, transformed into the extended-RDF encoding. The data model makes the attribution element mandatory for all POWDER documents. These may contain any number of Description Resources (DRs) that effectively inherit the attribution of the document as a whole. 
Descriptor sets may also be included independently of a specific DR and these too inherit the attribution. This model persists throughout the layers of the POWDER model.
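The POWDER-BASE layer described above decides whether a resource falls within a DR's scope by matching its IRI against regular expressions. The Java sketch below illustrates the idea; the pattern itself is my own illustration (scoping example.com and its subdomains), not taken from the specification.

```java
import java.util.regex.Pattern;

// Sketch of POWDER-BASE-style scoping: membership in a Description
// Resource's scope is decided by matching the resource's IRI against a
// regular expression. This illustrative pattern covers http/https IRIs on
// example.com and any of its subdomains.
public class ScopeDemo {
    static final Pattern SCOPE =
        Pattern.compile("^https?://([^/]+\\.)?example\\.com(/.*)?$");

    static boolean inScope(String iri) {
        return SCOPE.matcher(iri).matches();
    }

    public static void main(String[] args) {
        System.out.println(inScope("http://www.example.com/page")); // true
        System.out.println(inScope("http://example.org/page"));     // false
    }
}
```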
A Standard for Openness?
Rick Jelliffe, O'Reilly Articles
In public discourse and public policy, "open standards" are now a Good Thing (in the sense of 1066 and All That). However, the more that "open standard" is deemed good and important without having a common meaning, the more that interests will attempt to stretch its meaning in one way or another. One way is to stretch it to allow actual royalty-bearing (RAND) standards to count as "open standards"; the other is to require open source (or even free) implementations. The Wikipedia entry for Open Standards shows the variability in the definitions of the term. Most pressingly, it has examples of legislation that uses the buzzword "open" but in very different ways... Openness is a motherhood term now, so of course there will be surprises and debate about what kind of motherhood we actually mean. My opinion, for what it is worth, is that RAND-z and RF is necessary but not sufficient for openness, and that governments embarking on an open standard policy need to put in place some patent-limitation plan which would bring existing, market-dominating, royalty-bearing standards into the RAND-z fold by, say, 2010... I have other concerns about the current kinds of licenses and promises at consortia and by the global corporations. The first is that the licenses are typically made only in respect of particular standards: so if company A has granted a license for standard B, and you implement standard B, you are OK, but if you use the same technology to implement standard C you are not covered. The second is that even if company A has granted licenses for both standards B and C, and you implement either, you still may not actually be covered: this is because typically the grant is to "necessary" or "essential" claims... What is needed for openness is open licenses that allow all the technology being granted to be usable in any open standard, whether there are alternative implementation strategies or not.
See also: "Open Standard" formulations
What is New in <oXygen/> XML Editor 9.3?
Staff, SyncRO Soft Ltd. Announcement
"The <oXygen/> XML Editor and XSLT Debugger is a complete cross-platform XML editor providing the tools for XML authoring, XML conversion, XML Schema, DTD, Relax NG and Schematron development, XPath, XSLT, XQuery debugging, and SOAP and WSDL testing. Integration with XML document repositories is made through the WebDAV, Subversion and S/FTP protocols. <oXygen/> also has support for browsing, managing and querying native XML and relational databases. The <oXygen/> XML editor is also available as an Eclipse IDE plugin, bringing unique XML development features to this widely used Java IDE. XML Editor Version 9.3, announced 2008-07-02, adds as its main feature support for editing and processing resources inside ZIP-based packages, including Microsoft Office 2007 (OOXML) and OpenDocument (ODF) documents. Version 9.3 also features improvements in the visual Author mode, the DITA Map editor, and the Text editor, plus component updates. <oXygen/> XML Editor allows you to extract, validate, edit and process the XML data stored in Office 2007 files and any other ZIP-based archive. These capabilities allow developers to use data from Office 2007 documents together with validation and transformations (using XSLT or XQuery) to other file formats. Validation is done using the latest ECMA XML Schemas. <oXygen/> offers a complete framework that provides powerful support for editing and validating documents from the Office Open XML zipped package. The NVDL schema can be easily customized to allow user-defined extension schemas for use in the OOXML files. With the <oXygen/> Directories Comparison tool you can compare and merge Office 2007 (OOXML) files or other ZIP-based archives seamlessly. The archive files are presented as directories, allowing the usual comparison and merge operations inside them... <oXygen/> XML Editor also supports browsing, processing and modification of files from other ZIP archives like OpenDocument (ODF).
The OpenDocument Format (ODF) is a free and open file format for electronic office documents, such as spreadsheets, charts, presentations and word processing documents. The standard was developed by the Open Document Format for Office Applications (OpenDocument) technical committee of the Organization for the Advancement of Structured Information Standards (OASIS) consortium, and it is based on the XML format originally created and implemented by the OpenOffice.org office suite. The most common filename extensions used for ODF are: ODT (text), ODS (spreadsheet), ODP (presentation), ODG (graphics), ODF (mathematical formulas). <oXygen/> offers support for code insight and validation of documents inside ODF packages." Note the claim (unverified): "<oXygen/> XML Editor is the first tool which offers developers support for editing, transforming and validating documents composing the Open Document format (ODF) package directly through the archive support."
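Both ODF and OOXML are ordinary ZIP packages containing XML parts, which is what makes the archive-level editing described above possible. As a minimal sketch, the following Java program pulls the main XML part out of such a package with nothing but the standard library ("content.xml" is the main part of an ODF document; OOXML keeps its equivalent at e.g. "word/document.xml"):

```java
import java.io.InputStream;
import java.nio.file.Path;
import java.util.zip.ZipFile;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Reading the main XML part out of a ZIP-based document package, the same
// basic operation an editor performs on ODF or OOXML files.
public class PackageDemo {
    static String rootElementOf(Path pkg, String entryName) throws Exception {
        try (ZipFile zip = new ZipFile(pkg.toFile());
             InputStream in = zip.getInputStream(zip.getEntry(entryName))) {
            DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
            f.setNamespaceAware(true);          // ODF/OOXML parts are namespaced
            Document doc = f.newDocumentBuilder().parse(in);
            return doc.getDocumentElement().getNodeName();
        }
    }

    public static void main(String[] args) throws Exception {
        // e.g. java PackageDemo report.odt -> office:document-content
        System.out.println(rootElementOf(Path.of(args[0]), "content.xml"));
    }
}
```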
See also: the XML Editor description
Python Backing Eyed for NetBeans
Paul Krill, InfoWorld
See also: the NBPython Project
Integrate Your PHP Application with Google Calendar
Vikram Vaswani, IBM developerWorks
"As a developer, too, I find Google Calendar makes for interesting water-cooler conversation: With its Data API, developers can easily build new applications around the data stored in public and private user calendars. This API, which follows the REST model, can be accessed through any XML-capable development toolkit, and already has client libraries for many common programming languages... Google Calendar allows Web application developers to access user-generated content and event information through its REST-based Developer API. PHP's SimpleXML extension and Zend's GData Library are ideal for processing the XML feeds generated by this API and using them to build customized PHP applications. This article introduces the Google Calendar Data API, demonstrates how you can use it to browse user-generated calendars; add and update calendar events; and perform keyword searches... As with all REST-based services, the API works by accepting HTTP requests containing one or more XML-encoded input arguments and returning XML-encoded responses that can be parsed in any XML-aware client. With the Google Calendar Data API, the response always consists of an Atom or RSS feed containing the requested information. A typical Google Calendar feed includes more than enough information to build a useful and relevant application. To see an example, log into your Google Calendar account, navigate to your calendar settings, and find the link for your calendar's private address URL. The Google Calendar Data API is a mature, convenient, and flexible way for developers to create their own custom Web front-end to Google Calendar."
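The article works in PHP with SimpleXML, but as it notes, the Atom responses can be consumed by any XML-aware client. As an illustration in Java, a namespace-aware DOM parse can pull entry titles out of a Calendar-style Atom feed (the feed below is a shortened, made-up sample, not actual Google Calendar output):

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

// Extracting entry titles from an Atom feed with the standard DOM APIs.
public class AtomDemo {
    static final String ATOM_NS = "http://www.w3.org/2005/Atom";

    static List<String> entryTitles(String feed) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);              // Atom elements are namespaced
        Document doc = f.newDocumentBuilder()
                        .parse(new InputSource(new StringReader(feed)));
        NodeList titles = doc.getElementsByTagNameNS(ATOM_NS, "title");
        List<String> out = new ArrayList<>();
        for (int i = 0; i < titles.getLength(); i++)
            out.add(titles.item(i).getTextContent());
        return out;
    }

    public static void main(String[] args) throws Exception {
        String feed = "<feed xmlns='http://www.w3.org/2005/Atom'>"
                    + "<entry><title>Team meeting</title></entry>"
                    + "<entry><title>Release review</title></entry></feed>";
        System.out.println(entryTitles(feed)); // [Team meeting, Release review]
    }
}
```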
See also: Google Calendar Reference Guide
Evaluating XPaths from the Java platform
Brett D. McLaughlin, IBM developerWorks
For Java programmers who work with XML documents using SAX, DOM, JDOM, JAXP, and more, the XQuery API for Java is a welcome addition to the programmer's toolkit. Now the power of XQuery is available to Java programmers without resorting to system calls or unwieldy APIs, all in a Sun-standardized package... With XPath and XQuery, you're not stuck pulling data from XML into a programming language, and then using that language's tools to search the data. In addition to being constrained by your programming language with that approach, you typically lose most of the XML semantics and structure, such as which element was a child of which other element, and so on. XPath and XQuery allow you to search XML without needing a programming language... Much of using XPath from Java technology is simply learning new syntax, getting an API and a few tools configured, and then applying what you already know about XPath. That shouldn't make you think that using XPath in the Java environment is trivial, though. That complexity aside, XPath offers a tremendous amount of flexibility when you work with XML from Java programming. It certainly moves you far beyond what most basic SAX, DOM, JAXP, JDOM, or other implementations provide, although some vendors and projects provide XPath-capable extensions to the basics that those specs and APIs offer. XPath also offers a wonderful gateway to the more complex XQuery language, and to Java and XQuery combinations (using the XQJ API). Rather than immediately move on to XQuery, you'll do well to polish your XPath skills, learn to select complex node sets from within your Java applications, and manipulate those as needed. You'll find lots of cases where you don't need anything beyond XPath. On top of that, XQuery builds upon XPath, both from a lexical perspective and in terms of the XQJ API, which can actually evaluate XPaths as well as execute XQueries, so you're improving your XQuery skills implicitly.
Most of all, have fun with the increased flexibility that XPath offers, especially when evaluated from the Java environment.
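The standard route for evaluating XPaths from Java is JAXP's javax.xml.xpath package, along the lines of what the article covers. A minimal, self-contained example (the sample document and expressions are made up for illustration):

```java
import java.io.StringReader;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;

// Evaluating XPath 1.0 expressions against an XML string with JAXP.
public class XPathDemo {
    static String evaluate(String expr, String xml) throws Exception {
        XPath xpath = XPathFactory.newInstance().newXPath();
        // evaluate() returns the string-value of the selected result
        return xpath.evaluate(expr, new InputSource(new StringReader(xml)));
    }

    public static void main(String[] args) throws Exception {
        String xml = "<catalog>"
                   + "<book id='1'><title>XML Basics</title></book>"
                   + "<book id='2'><title>Advanced XPath</title></book>"
                   + "</catalog>";
        System.out.println(evaluate("/catalog/book[@id='2']/title", xml)); // Advanced XPath
        System.out.println(evaluate("count(/catalog/book)", xml));         // 2
    }
}
```

Note that this two-argument `evaluate` returns a String; for node sets you would pass an `XPathConstants.NODESET` return type instead.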
XML Daily Newslink and Cover Pages sponsored by:
Sun Microsystems, Inc.: http://sun.com
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: email@example.com
Newsletter unsubscribe: firstname.lastname@example.org
Newsletter help: email@example.com
Cover Pages: http://xml.coverpages.org/