The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: July 07, 2010
XML Daily Newslink. Wednesday, 07 July 2010

A Cover Pages Publication
Provided by OASIS and Sponsor Members
Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation

OASIS Announces Public Review for DITA Version 1.2
Kristen Eberlein, Robert Anderson, Gershon Joseph (eds), OASIS Public Review Draft

Public review of the Darwin Information Typing Architecture (DITA) Version 1.2 specification (Committee Draft 03) has been announced following submission of the package by members of the OASIS Darwin Information Typing Architecture (DITA) Technical Committee. The public review period ends September 05, 2010.

The Darwin Information Typing Architecture (DITA) 1.2 specification "defines both a set of document types for authoring and organizing topic-oriented information and a set of mechanisms for combining, extending, and constraining document types. The associated DTDs and XSDs, along with the XML catalog files, define DITA markup for the DITA vocabulary modules and DITA document types. While the DTDs and XSDs should define the same DITA elements, the DTDs are normative if there is a discrepancy. If there is a discrepancy between the written specification (this document) and the DTDs, the written specification takes precedence. While the DITA 1.2 documentation does contain some introductory information, it is intended neither as an introduction to DITA nor as a user's guide. The intended audience of this documentation consists of implementers of the DITA standard, including tool developers and XML architects who develop specializations. The documentation contains several parts, including the Architectural Specification, Language Reference, Conformance Statement, and Appendices. The DITA 1.2 written specification is available in XHTML, CHM, PDF, and DITA XML source, where the XHTML version is authoritative.

Prior to the release of DITA 1.2, the document types and specializations for technical content were included as an integral part of the base DITA specification. With the release of DITA 1.2, the document types and specializations for technical content have a dedicated section in the DITA specification. This change reflects the addition of an increasing number of specializations that are part of the DITA standard.

The document types and specializations included in the technical content package and described in this section were designed to meet the requirements of those authoring content for technically oriented products in which the concept, task, and reference information types provide the basis for the majority of the content. These information types are used by technical-communication and information-development organizations that provide procedures-oriented content to support the implementation and use of computer hardware, computer software, and machine-industry content. However, many other organizations producing policies and procedures, multi-component reports, and other business content also use the concept, task, and reference information types as essential to their information models... The DITA technical content package includes domain specializations for programming elements, software elements, user interface elements, task requirements elements, Extensible Name and Address Language (xNAL) elements, abbreviated form element, and glossary reference (glossref) element...."
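For readers unfamiliar with DITA's topic orientation, a minimal concept topic can be sketched as below. The element names (concept, title, shortdesc, conbody) are standard DITA, but the topic content and id are invented for illustration, and a real document would reference the official DITA 1.2 DTDs or XSDs; the Python snippet merely shows that such a topic is ordinary, parseable XML.

```python
# A minimal DITA concept topic, parsed with the standard library.
# The id and content are hypothetical; real topics declare the DITA
# DTD/XSD from the OASIS distribution.
import xml.etree.ElementTree as ET

DITA_CONCEPT = """\
<concept id="widget-overview">
  <title>Widget Overview</title>
  <shortdesc>A widget is a reusable interface component.</shortdesc>
  <conbody>
    <p>Concept topics carry background information; task and reference
    topics cover procedures and look-up material.</p>
  </conbody>
</concept>
"""

topic = ET.fromstring(DITA_CONCEPT)
print(topic.get("id"))          # widget-overview
print(topic.findtext("title"))  # Widget Overview
```

Task and reference topics follow the same pattern with their own body elements, which is what makes specialization by constraint and extension workable.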

See also: the DITA 1.2 public review announcement

New from CalConnect and IETF: Timezone XML Specification
Michael Douglass and Cyrus Daboo (eds), IETF Internet Draft

IETF has published an initial Standards Track Internet Draft for the Timezone XML Specification. This specification defines a "format for describing timezone information for software and services. It provides a new XML schema for thoroughly and completely representing and sharing timezone information that is consistent across platforms and software systems, together with registration of an IANA media type 'application/timezone+xml'. Appendix A includes Relax NG compact schemas for the TZ XML content items. Creation of this specification was suggested by discussion that took place within the Calendaring and Scheduling Consortium's Timezone Technical Committee. A companion IETF Timezone Service Protocol Specification defines a timezone service protocol that allows reliable, secure and fast delivery of timezone information to client systems such as calendaring and scheduling applications or operating systems.

In this specification, the term 'content item' denotes a typed data entity that is a basic unit of sharing timezone information. The term 'tz xml' is used to describe the overall set of content items and their relationships in defining timezone information. It refers to the complete set of XML elements described in this specification. The term 'user' denotes a single user, group of users, or registered organization that shares knowledge of timezone information. Users can be involved in authoring, providing or consuming timezone information.

The term 'service' in 'Timezone XML Specification' denotes an application entity that facilitates the storing and sharing of tz XML with other services or clients. A service can facilitate authoring, providing or consuming of timezone information. The entity represented by a service could be a web application, a web server or something more precise such as a specific application running on a specific device. The term 'client' denotes an application entity that requests timezone information in order to properly handle time and timezones in software applications. The entity represented by a client could be an operating system, a server application, a client application or a cloud service. The client endpoint interacts with the service endpoint.

Some key elements defined in the specification include TZ:TimeZone XML element (the top-level element of any timezone information document, which must contain several top-level, timezone properties as well as one TZ:StandardTime element), TZ:EffectiveYearRange XML element (with TZ:TransitionPeriod XML element within TZ:EffectiveYearRange), and TZ:ControllingAuthority XML element (this object is defined as a political body or government that has political control over the definition of a timezone). The TZ:TimeZone XML element incorporates TZ:Name XML element within TZ:TimeZone, TZ:Description XML element within TZ:TimeZone, TZ:InfoURL XML element within TZ:TimeZone, TZ:CalendarScale XML element within TZ:TimeZone, TZ:ReferencedControllingAuthority XML element within TZ:TimeZone, and TZ:StandardTime XML element within TZ:TimeZone..."
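A rough sketch of how such a timezone document might look, using element names taken from the draft (TZ:TimeZone, TZ:Name, TZ:StandardTime, TZ:EffectiveYearRange). The namespace URI and the year-range attributes below are placeholders of my own, not values from the specification; consult the draft and its Relax NG schemas for the real ones.

```python
# Hypothetical tz XML content item, parsed with the standard library.
# Element names follow the draft's TZ: vocabulary; the namespace URI
# and attributes are assumptions, not taken from the specification.
import xml.etree.ElementTree as ET

TZ_NS = "urn:example:tz-xml"  # placeholder namespace URI

doc = """\
<TZ:TimeZone xmlns:TZ="{ns}">
  <TZ:Name>America/New_York</TZ:Name>
  <TZ:StandardTime>
    <TZ:EffectiveYearRange from="2007" to="9999"/>
  </TZ:StandardTime>
</TZ:TimeZone>
""".format(ns=TZ_NS)

root = ET.fromstring(doc)
ns = {"TZ": TZ_NS}
print(root.findtext("TZ:Name", namespaces=ns))  # America/New_York
```

Because the format is plain namespaced XML, clients such as calendaring applications can consume it with any standard XML toolchain.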

See also: The Calendaring and Scheduling Consortium (CalConnect)

Apache Software Foundation Announces Apache Cayenne Version 3.0
Staff, ASF Announcement

Members of the Apache Software Foundation (ASF) have announced the release of Apache Cayenne Version 3.0. Cayenne is an "easy-to-use, Open Source Java framework for object relational mapping (ORM), persistence, and caching. In development for nearly 10 years, and an ASF Top-Level Project since 2006, Apache Cayenne is the backbone for high-demand applications and Websites accessed by millions of users each day, such as Unilever, the National Hockey League, and the Law Library of Congress, the world's largest publicly-available legal index...

Cayenne's powerful feature set and GUI tools successfully meet an extensive range of persistence needs, flexibly scaling to support database generation, reverse engineering, Web Services and non-Java client integration, schema mapping, on-demand object and relationship faulting, database auto-detection, and more. Through its mature technology, track record of solid performance in high-volume environments, and vibrant user community, Apache Cayenne is an exceptional choice for persistence services.

Cayenne's ObjectContext does not require a transaction wrapper around it to talk to the DB. Transactions are created transparently to the user and only for the duration of a DB operation. For example, if you decide to read a relationship that hasn't been resolved yet, a transaction will be created with the scope of a DB select... ObjectContexts can be nested, so commit/rollback at the object level can be hierarchically structured. For example, you may change some object properties in a dialog window and commit back to the parent context attached to the parent window, without committing to the DB... Nesting contexts is great, but there's more to it. A 'child' ObjectContext can be located in a remote application, talking to a 'parent' over a web service... Since Cayenne fully separates the mapping model from the Java code and does not require class enhancement or annotation preprocessing, there is a whole lot of flexibility as to what persistent Java objects can be... A big help in day-to-day work is CayenneModeler, a cross-platform, IDE-independent GUI mapping tool. It frees you from the need to deal with the raw model and provides support for various ORM-related DB operations...

As Java 1.5 is the minimum JDK requirement for Cayenne 3.0, we were able to switch many public APIs to use generics. In the past we supported flattened relationships, i.e., relationships mapped across more than two tables; now we have added a similar ability for attributes: object properties can be mapped to columns from a joined table over one or multiple joins... Now you can either declare a callback method on an entity object or declare a listener class to receive entity events, JPA-style... A pluggable query cache is what makes Cayenne scale to serve gazillions of requests... Cayenne is a mature and powerful framework, but by no means do we think of Cayenne as a finished product; there are lots of ways to make it even better..."

See also: the Apache Cayenne v.3.0 Technical Fact Sheet

InfoWorld Review: Microsoft ADFS 2.0 and Forefront Identity Manager 2010
Keith Schultz, InfoWorld

"Managing user access in businesses today is something like playing traffic cop in an intersection of a thousand roads. From Web-based applications to homegrown programs, from desktop PCs to the latest crop of smartphones, IT has to be able to control access to every sort of resource while allowing users to access them from anywhere and any platform. A bigger challenge is providing seamless access to applications and systems across corporate or network boundaries...

Microsoft has updated Forefront Identity Manager (FIM) 2010 and Active Directory Federation Services (ADFS) to aid IT in applying identity management across domains and business boundaries. Both of these tools are intended to extend user access control across the enterprise; FIM uses a common platform to tie user, certificate, group, and policy management together, while ADFS provides trust accounts between different networks or organizations. Together, they provide a powerful platform for extending user management beyond the company domain or network edge.

Active Directory Federation Services, first available in Windows Server 2003, is now a server role in Windows Server 2008 R2. ADFS is a single-sign-on technology that uses claims-based authentication to validate a user's identity across domains. Normally when the user's account is in one domain and the resource is in another, the resource will prompt the user for local credentials. ADFS eliminates the secondary credential request; the user's identity is validated, and access provided, based on information in the user's home directory.

Forefront Identity Manager 2010 is a powerful platform for managing user identities, credentials, and identity-based access policies for both Windows and non-Windows environments. In FIM 2010, Microsoft took smart card and certificate management and merged it with identity lifecycle tools to streamline administration and improve user security and compliance. FIM 2010 also empowers users through self-service tools to manage their own group memberships or reset their user password from the Windows logon page. FIM 2010 is based on Web standards for greater extensibility and will work with third-party certificate authorities..."

See also: the Forefront Identity Manager 2010 overview

Terminology for Talking about Privacy by Data Minimization
Andreas Pfitzmann, Marit Hansen, Hannes Tschofenig (eds), IETF Internet Draft

IETF has published an initial level -00 Internet Draft on Terminology for Talking about Privacy by Data Minimization: Anonymity, Unlinkability, Undetectability, Unobservability, Pseudonymity, and Identity Management. This document is an attempt to consolidate terminology in the field of privacy by data minimization. Starting the definitions from the anonymity and unlinkability perspective and not from a definition of identity (the latter is the obvious approach to some people) reveals some deeper structures in this field...

Early papers from the 1980s about privacy by data minimization already deal with anonymity, unlinkability, unobservability, and pseudonymity, and introduce these terms within the respective context of proposed measures. This memo shows relationships between commonly used terms and thereby develops a consistent terminology. Then, we contrast these definitions with newer approaches, e.g., from ISO IS 15408. Finally, we extend this terminology to identity (as a negation of anonymity and unlinkability) and identity management. Identity management is a much younger and much less well-defined field, so a really consolidated terminology for this field does not exist.

Adoption of this terminology will help achieve better progress in the field by preventing those working on standards and research from inventing their own language from scratch.

This document is organized as follows: First, the setting used is described. Then, definitions of anonymity, unlinkability, linkability, undetectability, and unobservability are given and the relationships between the respective terms are outlined. Afterwards, known mechanisms to achieve anonymity, undetectability and unobservability are listed. The next sections deal with pseudonymity, i.e., pseudonyms, their properties, and the corresponding mechanisms. Thereafter, this is applied to privacy-enhancing identity management.

Bluetooth 4.0 Specification Approved
Agam Shah, ComputerWorld

The Bluetooth 4.0 low-power wireless networking specification has been approved, and the technology will start appearing in devices such as smart meters and laptops later this year... The Bluetooth 4.0 standard is an update to the previous Bluetooth 3.0 wireless technology, which was announced in 2009. The new standard adds a low-power specification for transmitting small bursts of data over short ranges. The standard will also include the high-speed data transfer capabilities introduced with Bluetooth 3.0, which allows devices to jump on Wi-Fi 802.11 networks to transfer data at up to 25Mbps (megabits per second)...

The technology could first make its way to watches, smart meters, pedometers and other gadgets that run on coin-cell batteries; laptops and smartphones could ultimately include Bluetooth 4.0 and be able to collect data from gadgets. That should help in activities such as monitoring health and energy usage, according to Mike Foley, Bluetooth SIG Executive Director... Wireless capabilities are continuously being added to gadgets like cameras to help them communicate with other devices. Technologies such as Wi-Fi maintain continued connectivity, which could affect the battery life of devices. Bluetooth 4.0 could be used for devices to exchange low-level information over short distances without using much energy..."

According to the Bluetooth Special Interest Group (SIG) announcement: "The hallmark feature enhancement to the Specification, Bluetooth low energy technology opens entirely new markets for devices requiring low cost and low power wireless connectivity, creating an evolution in Bluetooth wireless technology that will enable a plethora of new applications—some not even possible or imagined today. Many markets such as health care, sports and fitness, security, and home entertainment will be enhanced with the availability of small coin-cell battery powered wireless products and sensors now enabled by Bluetooth wireless technology.

Bluetooth low energy wireless technology, the hallmark feature of the v4.0 Bluetooth Core Specification, features: ultra-low peak, average and idle mode power consumption; ability to run for years on standard coin-cell batteries; low cost; multi-vendor interoperability; enhanced range..."

See also: the SIG announcement

Web 2.0 for Business Entities: Making Empowerment Work
Max J. Pucher, SNS Research

"Some BPM consultants propose that processes are the most important corporate asset. I disagree because a process is an abstract entity that produces no value. Value is defined by human interaction in the real world. While abstract processes promise to make that human interaction more controllable they ignore human nature and workplace psychology, much as socialism and communism do. These are idealistic concepts that fail in the real world of individual human agents. People are at their best when they feel that their contribution is valued as an individual...

Empowerment is often misunderstood as decision-making authority for everyone. Some attempts at empowerment have failed to show the hoped-for results because they followed the idea that all people are the same. The most important element of empowerment is the realization that people are different: not clever and dumb, or lazy and hardworking, but simply people in the wrong place.

Empowerment requires two important elements: first, people coaching, and second, business and process transparency. Calling for a coach is not an admission of failure, and sending in a coach is not punishment but a support action. Transparency is best achieved by a collaborative process support infrastructure—certainly not your run-of-the-mill BPM/SOA software. Transparency enables monitoring of each team's achievement of (business) goals, to verify that goals are set sensibly and well understood.

Today most IT solutions use pre- and hardcoded processes that are then enforced. Employees have to execute standardized processes (to reduce cost) for an abstract, statistically classified customer. IT is seen mostly as a tool for reducing headcount and cost through automation and industrialization. The missed opportunity, however, is that it could also be used for a new kind of architected collaboration (i.e., Web 2.0 for business entities) that truly enables empowerment. IT would suddenly not be an expense but would turn the business into a new kind of organization with unheard-of dynamics..."

See also: the author's blog on ACM (Adaptive Case Management)

Policy and the Cloud
Phil Wainewright, ZDNet Blog

"Every time I get into a discussion about security and trust in cloud computing these days, I end up talking about service level agreements. People considering cloud computing rightly worry about whether their data is going to be secure, and private, and accessible when they need it. The umbrella term they use for that is 'security', but their worries encompass a broad range of performance, security, privacy, operational and cost criteria.... I end up talking about SLAs—the contracts that govern the provider's commitment to meet all those various criteria. It turns out that, once you drill down into what people really want, the answer is much more granular and textured than a single metric about security, privacy, or whatever. We're actually talking about a framework for governance across a broad range of policy settings.

The trouble is, such dynamic SLAs are only possible with automation. A traditional SLA will set static limits, and then the provider or the customer (often both independently) can program their monitoring tools to send out alerts as those limits get close...
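The static-limit monitoring described above can be sketched as a simple threshold check. Everything in the snippet is illustrative: the metric names, limit values, and alert shape are invented for the example, not drawn from any provider's tooling.

```python
# Illustrative sketch of static SLA monitoring: compare measured
# metrics against contractual limits and report breaches.
# Metric names and thresholds are hypothetical.
SLA_LIMITS = {
    "uptime_pct": ("min", 99.9),      # must stay at or above 99.9%
    "p95_latency_ms": ("max", 200.0), # must stay at or below 200 ms
}

def check_sla(metrics, limits=SLA_LIMITS):
    """Return the names of metrics that breach their static limit."""
    alerts = []
    for name, (kind, limit) in limits.items():
        value = metrics.get(name)
        if value is None:
            continue  # metric not reported this interval
        breached = value < limit if kind == "min" else value > limit
        if breached:
            alerts.append(name)
    return alerts

print(check_sla({"uptime_pct": 99.95, "p95_latency_ms": 250.0}))
# ['p95_latency_ms']
```

The point of the blog's argument is that this kind of fixed-threshold check is the ceiling of what static SLAs allow; renegotiating the limits themselves is where automation and standards are still missing.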

At present, the only way to change your cloud computing service levels is to move from one cloud provider to another. Without interoperability standards and a common language to describe service levels, that's a custom process that's hard to automate. Nor is it in the interests of providers to rush to create standards that make it easier for customers to shop around for cloud services on the fly.

Yet customers will want that flexibility and so it's only a matter of time before providers start to offer enough visibility and control to give them real choice over service levels—at first on a proprietary basis within individual cloud infrastructures, and later on across multiple clouds, as standards gradually evolve..."

See also: the OASIS ID-Cloud Technical Committee

Reading Data from Gnumeric Spreadsheets Directly Through XML
Colin Beckingham, IBM developerWorks

"When keeping accounts, bookkeepers often like to manage dynamic data using spreadsheets and produce static reports with a different application. However, allowing the static reporting program to read directly from the spreadsheet can be problematic. With Gnumeric as the spreadsheet and PHP as the reporting application, this article shows how spreadsheet data stored as XML, with proper management of namespaces, allows reading of data directly from the spreadsheet. You save time, increase accuracy, and avoid copy-and-paste and other errors.

Static reports such as income and expense, trial balance, and balance sheets might be produced by a different application that needs access to the dynamic data to complete the report. Copying and pasting from one application to the other takes time and is subject to error, and transfer of information using comma-separated values and other techniques is clunky at best.

When the spreadsheet stores its data as XML, an XML-aware static reporting program can read that data directly. Our worked example uses Gnumeric, which stores its data in XML format, as the spreadsheet and PHP, which is able to read the XML directly, as a reporting application. The example concerns depreciation or capital cost allowances (CCA). A machine with a productive life of many years that is used in the production process wears out over time and at the end of its lifetime needs to be replaced. Accountants expense a certain percentage of the depreciated value each year... Because the uncompressed Gnumeric file is pure XML, you can get an idea of the structure of the data with a basic text editor. Note that Gnumeric can store the spreadsheet data in a compressed format. To view the data directly, make sure the spreadsheet is stored using zero compression from the Gnumeric preferences...
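The namespace-aware reading the article describes (in PHP) can be sketched in Python as well. The layout below mimics a Gnumeric workbook in heavily simplified form; the gnm namespace URI and the Workbook/Sheets/Cells/Cell nesting are my reading of the format, so verify them against an actual uncompressed .gnumeric file before relying on them.

```python
# Sketch (Python rather than the article's PHP) of reading cell values
# from uncompressed Gnumeric-style XML. Namespace URI and element
# layout are assumed; check them against a real saved file.
import xml.etree.ElementTree as ET

GNM_NS = "http://www.gnumeric.org/v10.dtd"  # assumed Gnumeric namespace

sheet_xml = """\
<gnm:Workbook xmlns:gnm="{ns}">
  <gnm:Sheets>
    <gnm:Sheet>
      <gnm:Cells>
        <gnm:Cell Row="0" Col="0">asset cost</gnm:Cell>
        <gnm:Cell Row="0" Col="1">10000</gnm:Cell>
      </gnm:Cells>
    </gnm:Sheet>
  </gnm:Sheets>
</gnm:Workbook>
""".format(ns=GNM_NS)

root = ET.fromstring(sheet_xml)
ns = {"gnm": GNM_NS}
# Index every cell by its (row, column) coordinates.
cells = {
    (int(c.get("Row")), int(c.get("Col"))): c.text
    for c in root.findall(".//gnm:Cell", ns)
}
print(cells[(0, 1)])  # 10000
```

Registering the namespace prefix is the step the article stresses: without it, queries for Cell elements silently return nothing, whatever the reporting language.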

The combination of Gnumeric and PHP encompasses the best of both worlds. The Gnumeric spreadsheet accommodates changeable information and stores the data in XML. The PHP reporting application can read the XML directly, so it is not deprived of accurate data..."


XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation
ISIS Papyrus
Microsoft Corporation
Oracle Corporation


Hosted By
OASIS - Organization for the Advancement of Structured Information Standards


Robin Cover, Editor