This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation http://www.microsoft.com
- CMIS Version 1.0 Approved as an OASIS Standard
- Apache Software Foundation Announces New Top-Level Projects
- Schema for Representing Resources for Calendaring and Scheduling Services
- An Evaluation Framework for Data Modeling Languages in Network Management Domain
- Public Review: Identity Metasystem Interoperability Version 1.0 Errata
- W3C Opens India Office During International Conference in New Delhi
- Additional Portable Symmetric Key Container (PSKC) Algorithm Profiles
- Tapping the Computing Cloud for Smarter Water
CMIS Version 1.0 Approved as an OASIS Standard
Staff, OASIS Announcement
On May 04, 2010, OASIS announced that members had voted to approve the Content Management Interoperability Services (CMIS) Version 1.0 specification as an OASIS Standard. Contributing sponsor members included Adobe, Alfresco, ASG, Booz Allen Hamilton, Day Software, dotCMS, EMC, FatWire, fme AG, IBM, ISIS Papyrus, Liferay, Microsoft, Nuxeo, Open Text, Oracle, SAP, Saperion, WeWebU, and others.
CMIS version 1.0 is a new open standard that enables information to be shared across Enterprise Content Management (ECM) repositories from different vendors. Advanced via a collaboration of major ECM solution providers worldwide, CMIS is now an official OASIS Standard, a status that signifies the highest level of ratification. Using Web services and Web 2.0 interfaces, CMIS dramatically reduces the IT burden around multi-vendor, multi-repository content management environments. Companies no longer need to maintain custom code and one-off integrations in order to share information across their various ECM systems. CMIS also enables independent software vendors (ISVs) to create specialized applications that are capable of running over a variety of content management systems.
David Choy of EMC, chair of the OASIS CMIS Technical Committee: 'CMIS makes it possible for business units to deploy systems independently and focus on application needs rather than on infrastructure considerations. With CMIS, integrating content between two or more repositories is faster, simpler and more cost-effective. This is how it should be'...
Mary Laplante, vice president and senior analyst for the Gilbane Group: 'CMIS has the potential to be a game-changing standard, not only through its promise to facilitate affordable content management, but also as an enabler of whole new classes of high-value, information-rich applications that have not been feasible to date. At the end of the day, companies simply need better approaches to integrating systems. Business agility increasingly separates the winners from the losers, and agility is perhaps the biggest single benefit that CMIS offers'. CMIS is offered for implementation on a royalty-free basis..."
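The "Web services and Web 2.0 interfaces" mentioned above refer to CMIS's SOAP and REST (AtomPub) bindings. As an illustrative sketch only: the snippet below parses a trimmed, hypothetical AtomPub service document of the kind a CMIS 1.0 repository exposes to list its repositories; real service documents carry a richer element layout than shown.

```python
# Illustrative sketch: CMIS 1.0's REST (AtomPub) binding exposes a service
# document listing available repositories. The XML below is a hypothetical,
# trimmed example, not copied from the specification.
import xml.etree.ElementTree as ET

NS = {
    "app": "http://www.w3.org/2007/app",
    "atom": "http://www.w3.org/2005/Atom",
    "cmis": "http://docs.oasis-open.org/ns/cmis/core/200908/",
}

SERVICE_DOC = """<?xml version="1.0" encoding="UTF-8"?>
<app:service xmlns:app="http://www.w3.org/2007/app"
             xmlns:atom="http://www.w3.org/2005/Atom"
             xmlns:cmis="http://docs.oasis-open.org/ns/cmis/core/200908/">
  <app:workspace>
    <atom:title>Main Repository</atom:title>
    <cmis:repositoryId>repo-1</cmis:repositoryId>
  </app:workspace>
</app:service>"""

def list_repositories(service_xml):
    """Return (title, repositoryId) pairs from a CMIS service document."""
    repos = []
    for ws in ET.fromstring(service_xml).findall("app:workspace", NS):
        title = ws.findtext("atom:title", namespaces=NS)
        repo_id = ws.findtext("cmis:repositoryId", namespaces=NS)
        repos.append((title, repo_id))
    return repos

print(list_repositories(SERVICE_DOC))  # [('Main Repository', 'repo-1')]
```

The same parsing approach applies to any CMIS client that discovers repositories before issuing queries against them.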
See also: the OASIS announcement
Apache Software Foundation Announces New Top-Level Projects
Staff, Apache Announcement
"The Apache Software Foundation (ASF) — the all-volunteer developers, stewards, and incubators of 143 Open Source projects and initiatives — has announced the creation of six new Top-Level Projects (TLPs), setting an all-time record for the most new TLPs launched in a single month. A Top-Level Apache Project signifies that a Project's community and products have been well-governed under the ASF's meritocratic, consensus-driven process and principles. Whilst a project is developing within the Apache Incubator or as a sub-project of an existing TLP, it benefits from hands-on mentoring from other Apache contributors, as well as the Foundation's widely-emulated process, stewardship, outreach, support, and community events.
(2) Apache Mahout provides scalable implementations of machine learning algorithms on top of Apache Hadoop and other technologies. It offers collaborative filtering, clustering, classification, feature reduction, data mining algorithms, and more. (3) Apache Tika is an embeddable, lightweight toolkit for content detection and analysis. (4) Apache Nutch is a highly modular Web search engine based on Lucene Java with added Web specifics, such as a crawler, a link-graph database, and parsers for HTML and other document formats. (5) Apache Avro is a fast data serialization system that includes rich and dynamic schemas in all its processing. A sub-project of Apache Hadoop, Avro features rich data structures; a compact, fast, binary data format; and a container file to store persistent data. (6) Apache HBase is a distributed database modeled after Google's Bigtable. (7) Apache UIMA (Unstructured Information Management Architecture) is a framework for analyzing unstructured information, such as natural language text. (8) Apache Cassandra is an advanced, second-generation "NoSQL" distributed data store with a shared-nothing architecture. (9) Apache Subversion is a widely-used version control system. (10) Apache Click is a modern Java EE Web application framework that provides a natural, rich-client-style programming model. (11) Apache Shindig is an OpenSocial container that helps you start hosting OpenSocial apps quickly by providing the code to render gadgets, proxy requests, and handle REST and RPC requests...
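Apache Tika's core task, content-type detection, can be illustrated in a few lines. The sketch below is a from-scratch magic-byte detector in Python, not Tika's actual (Java) API, and the signature table is a tiny illustrative subset of what Tika knows.

```python
# Minimal illustration of magic-byte content detection, the core idea behind
# Apache Tika's detector. This is a from-scratch sketch, not Tika's API.
MAGIC = [
    (b"%PDF-", "application/pdf"),
    (b"\x89PNG\r\n\x1a\n", "image/png"),
    (b"PK\x03\x04", "application/zip"),  # also OOXML, JAR, ODF containers
    (b"<?xml", "application/xml"),
]

def detect(data: bytes) -> str:
    """Return a MIME type guess based on leading magic bytes."""
    for magic, mime in MAGIC:
        if data.startswith(magic):
            return mime
    return "application/octet-stream"  # fallback when nothing matches

print(detect(b"%PDF-1.4 sample"))   # application/pdf
print(detect(b"PK\x03\x04rest"))    # application/zip
```

Tika layers many more detectors (filename, declared type, container inspection) on top of this basic idea.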
Established in 1999, the all-volunteer Foundation oversees more than one hundred leading Open Source projects, including Apache HTTP Server -- the world's most popular Web server software. Through The ASF's meritocratic process known as "The Apache Way," more than 300 individual Members and 2,300 Committers successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is funded by individual donations and corporate sponsors including Facebook, Google, HP, Microsoft, Progress Software, SpringSource/VMware, and Yahoo!
See also: the Apache Software Foundation web site
Schema for Representing Resources for Calendaring and Scheduling Services
Ciny Joy, Cyrus Daboo, Michael Douglass (eds), IETF Internet Draft
IETF has published an initial (-00) Internet Draft of the specification Schema for Representing Resources for Calendaring and Scheduling Services. This specification describes a schema for representing resources for calendaring and scheduling. A 'resource' in the scheduling context is any shared entity that can be scheduled by a calendar user but does not control its own attendance status. The object model chosen is a lowest common denominator that can be adapted for LDAP.
Details: "A 'resource object' definition should contain all the information required to find and schedule the right resource. For this, it should contain all, or a subset, of the required attributes; among these, Resource Kind and Unique ID MUST be present in any resource object. Additional proprietary attributes may be defined as well, but they must begin with 'X-'. Clients encountering attributes they do not recognize must ignore them.
This document specifies whether a given attribute or property is required for a query to find the right resource, or is used just to give additional information during scheduling of the resource. Attributes or properties required to contact the resource are not included in this specification. LDAP attributes defined in RFC 4519 and vCard properties defined in the vCard Format Specification can be used to include contact information for the resource.
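The rules quoted above (mandatory Resource Kind and Unique ID, 'X-'-prefixed proprietary attributes, unknown attributes ignored rather than rejected) can be sketched as a small validator. The attribute names below are illustrative, not taken verbatim from the draft's schema.

```python
# Sketch of the draft's stated rules for a calendaring resource object:
# 'Kind' and 'UniqueId' are mandatory, proprietary attributes must begin
# with 'X-', and unrecognized attributes are ignored rather than rejected.
# Attribute names here are illustrative placeholders.
KNOWN = {"Kind", "UniqueId", "Name", "Description", "Capacity"}

def validate_resource(attrs: dict) -> dict:
    """Return the usable subset of a resource object's attributes."""
    for required in ("Kind", "UniqueId"):
        if required not in attrs:
            raise ValueError(f"missing mandatory attribute: {required}")
    usable = {}
    for name, value in attrs.items():
        if name in KNOWN or name.startswith("X-"):
            usable[name] = value
        # else: unknown attribute -> silently ignored, per the draft's rule
    return usable

resource = {
    "Kind": "Room",
    "UniqueId": "room-101@example.net",
    "Capacity": 12,
    "X-FLOOR": "1st",          # proprietary extension, kept
    "FutureAttribute": "???",  # unknown, ignored
}
print(validate_resource(resource))
```

Ignoring unknown attributes (rather than failing) is what lets clients interoperate with servers that define newer or proprietary extensions.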
An Evaluation Framework for Data Modeling Languages in Network Management Domain
Hui Xu and Debao Xiao (eds), IETF Internet Draft
IETF has released an Internet Draft of the specification An Evaluation Framework for Data Modeling Languages in Network Management Domain. Abstract: With the rapid development of next-generation networks, it is expected that a separate effort to study data modeling languages in the interest of network management should be undertaken. Based on a good understanding of the requirements of data modeling in the next-generation network management domain, evaluation of management data modeling languages becomes an essential step toward standardization, with the goal of replacing proprietary data models in the near future. Our project aims to establish a framework that evaluates how well management data modeling languages meet those requirements against a set of criteria: modeling approaches, interoperability, conformance, extensibility, readability, data representations, and security considerations.
The definitions of Information Model (IM) and Data Model (DM) should be carefully distinguished in network management solutions. IMs model Managed Objects (MOs) at a conceptual level and are protocol-neutral, while DMs are defined at a concrete level, are implemented in different ways, and are protocol-specific. For each network management model, a data modeling language is necessary for describing the managed resources...
Four main modeling approaches should be considered: data-oriented, command-oriented, object-oriented/object-based, and document-oriented. The data-oriented approach models all management aspects through data objects, and at least two operations ('get' and 'set') should be defined. The command-oriented approach defines a large number of management operations, specifying not the details of the data but the commands used to get/set selected information. The object-oriented/object-based approach combines the data-oriented and command-oriented approaches with a view to integration. The document-oriented approach represents the state information, statistics, and configuration of a device as a structured document.
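As a sketch of the data-oriented approach just described: the toy agent below exposes every management aspect as a named data object and offers only 'get' and 'set' operations, in the style of SNMP. The object names are invented for illustration.

```python
# Toy sketch of the data-oriented modeling approach: all management aspects
# are named data objects, and the agent offers exactly two operations,
# 'get' and 'set' (as SNMP does). Object names are illustrative only.
class DataOrientedAgent:
    def __init__(self):
        self._objects = {
            "ifAdminStatus": "up",
            "sysContact": "noc@example.net",
        }

    def get(self, name: str):
        """Read one managed data object."""
        return self._objects[name]

    def set(self, name: str, value):
        """Write one managed data object; reconfiguration is just a 'set'."""
        if name not in self._objects:
            raise KeyError(f"unknown managed object: {name}")
        self._objects[name] = value

agent = DataOrientedAgent()
agent.set("ifAdminStatus", "down")
print(agent.get("ifAdminStatus"))  # down
```

Contrast this with the command-oriented approach, where the interface would instead grow one operation per management task.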
Application of the Proposed Framework to NETCONF-based Data Modeling: Using our proposed evaluation framework, we compare the candidate NETCONF data modeling languages: XML Schema, RELAX NG, OWL, and YANG... In summary, most properties of XML Schema surpass those of earlier data modeling languages, especially in aspects such as interoperability, data representation, and extensibility, which are well known to its numerous users. However, its machine readability is less satisfactory, because its strength lies in content definition rather than semantic expressiveness. Furthermore, despite its wide application, XML Schema is both too complicated and too general to serve as a data modeling language used only within the scope of network management. On the other hand, defining a NETCONF-based management DM involves much more than describing an XML instance document; in other words, XML Schema is not expressive enough for NETCONF-based data modeling. For all these reasons, there are still no DMs defined by XML Schema. The IETF Operations & Management (OPS) Area discusses issues related to SMI-to-XML Schema conversion and XSD for accessing SMIv2 DMs, since SNMP-based network management has been supplemented with many proprietary MIB modules defined by different device vendors, and discarding MIB objects and SMIv2 syntax when designing a new DM would forfeit the benefit of many years of experience..."
Public Review: Identity Metasystem Interoperability Version 1.0 Errata
Michael B. Jones (ed), OASIS Public Review Draft
Members of the OASIS Identity Metasystem Interoperability (IMI) Technical Committee have released a Committee Draft of Identity Metasystem Interoperability Version 1.0 Errata for public review through May 20, 2010. The review materials include a prose Standalone Errata document and a corrected XSD XML schema file.
The Errata document lists errata for OASIS Standard Identity Metasystem Interoperability Version 1.0 specification produced by the Identity Metasystem Interoperability (IMI) Technical Committee. This standard was approved by the OASIS membership on 1-July-2009.
IMI Version 1.0 is a specification intended for developers and architects who wish to design identity systems and applications that interoperate using the Identity Metasystem Interoperability specification. An Identity Selector and the associated identity system components allow users to manage their Digital Identities from different Identity Providers and to employ them in various contexts to access online services. In this specification, identities are represented to users as 'Information Cards'. Information Cards can be used both with applications hosted on Web sites and accessed through Web browsers, and with rich client applications that directly employ Web services.
The IMI specification also provides a related mechanism to describe security-verifiable identity for endpoints by leveraging extensibility of the WS-Addressing specification. This is achieved via XML elements for identity provided as part of WS-Addressing Endpoint References. This mechanism enables messaging systems to support multiple trust models across networks that include processing nodes such as endpoint managers, firewalls, and gateways in a transport-neutral manner.
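The mechanism described above, an identity element embedded inside a WS-Addressing Endpoint Reference, can be sketched as follows. The WS-Addressing 1.0 namespace is standard; the identity-extension namespace URI and the simple text payload below are assumptions for illustration, where a real EPR would carry a cryptographic structure such as a ds:KeyInfo.

```python
# Sketch of an identity element carried inside a WS-Addressing Endpoint
# Reference. The WS-Addressing namespace is standard; the identity-extension
# namespace and the plain-text claim below are simplified placeholders, not
# copied from the IMI specification.
import xml.etree.ElementTree as ET

WSA = "http://www.w3.org/2005/08/addressing"
WSID = "http://schemas.xmlsoap.org/ws/2006/02/addressingidentity"  # assumed

def make_epr(address: str, identity_claim: str) -> str:
    """Build an EPR whose Identity child describes the endpoint's identity."""
    epr = ET.Element(f"{{{WSA}}}EndpointReference")
    ET.SubElement(epr, f"{{{WSA}}}Address").text = address
    identity = ET.SubElement(epr, f"{{{WSID}}}Identity")
    identity.text = identity_claim  # real EPRs carry e.g. a ds:KeyInfo here
    return ET.tostring(epr, encoding="unicode")

print(make_epr("https://service.example.org/endpoint",
               "dns:service.example.org"))
```

Because the identity rides inside the EPR itself, intermediaries such as gateways can verify which endpoint they are forwarding to without depending on transport-level security.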
See also: the OASIS IMI TC announcement
W3C Opens India Office During International Conference in New Delhi
Staff, W3C Announcement
"As part of its efforts to ensure that core Web standards meet global needs, W3C announces today the opening of a new Office in India. The Office is hosted in New Delhi by the Technology Development for Indian Languages (TDIL) Programme, part of the Indian government's Department of Information Technology. W3C and TDIL will celebrate this collaborative effort at an opening ceremony on 6 May 2010, as part of a conference organized by TDIL on the topic of technology, standards and internationalization.
The Technology Development for Indian Languages (TDIL) Programme is a flagship programme of the Department of Information Technology, Government of India, involved in research, development, standardization, and proliferation of Language Technology in India in twenty-two constitutionally recognized Indian languages. The TDIL Programme is also associated with international standardization bodies such as the Unicode Consortium and ELRA... Ms. Swaran Lata, Director and Head of the Human Centered Computing Division and Manager of the new W3C India Office: 'We look forward to promoting W3C standards all over India in all twenty-two constitutionally recognized Indian languages, and welcome this opportunity to bring together people in India who are interested in participating in the development of the future Web.'..."
Language diversity in India poses challenges to Web technology developers and users. To mark the opening of the new Office, TDIL organized an international conference on 6-7 May in New Delhi: "World Wide Web: Technology, Standards and Internationalization." Conference participants discuss some of these challenges and how best to promote open standards participation and dissemination in India. The program begins with the formal opening of the new Office, followed by discussions on internationalization, mobile access, Web architecture, the Semantic Web, accessibility, and other areas of W3C work.
W3C is hosted by three organizations on three continents: the Massachusetts Institute of Technology (MIT) in the United States, the European Research Consortium for Informatics and Mathematics (ERCIM) in Europe, and Keio University in Japan. Individual Offices are located in Australia, Benelux, Brazil, China, Finland, Germany and Austria, Greece, Hungary, India, Israel, Italy, Korea, Morocco, Senegal, Southern Africa, Spain, Sweden, and the United Kingdom and Ireland.
See also: the W3C announcement
Additional Portable Symmetric Key Container (PSKC) Algorithm Profiles
Philip Hoyer, Mingliang Pei, Salah Machani, Andrea Doherty; IETF Internet Draft
Members of the IETF Provisioning of Symmetric Keys (KEYPROV) Working Group have published an updated Internet Draft, Additional Portable Symmetric Key Container (PSKC) Algorithm Profiles. Document abstract: "The Portable Symmetric Key Container (PSKC) contains a number of XML elements and XML attributes carrying keys and related information. Not all algorithms, however, are able to use all elements, and for other algorithms certain information is mandatory. This led to the introduction of PSKC algorithm profiles, which provide further description of the mandatory and optional information elements and their semantics, including extensions that may be needed. The main PSKC specification defines two PSKC algorithm profiles, namely 'HOTP' and 'PIN'. This document extends the initial set and specifies nine further algorithm profiles for PSKC.
The document specifies a set of algorithm profiles for PSKC (defined in 'Portable Symmetric Key Container'), namely OCRA (OATH Challenge Response Algorithm), TOTP (OATH Time-based OTP), SecurID-AES, SecurID-AES-Counter, SecurID-ALGOR, ActivIdentity-3DES, ActivIdentity-AES, ActivIdentity-DES, and ActivIdentity-EVENT...
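The 'HOTP' profile in the main PSKC specification, and the time-based 'TOTP' profile added by this draft, both rest on the HMAC-based truncation defined in RFC 4226. A minimal standard-library sketch, checked against the RFC 4226 Appendix D test vectors:

```python
# HOTP (RFC 4226), the algorithm behind the 'HOTP' profile in the main PSKC
# specification and the basis of the time-based 'TOTP' profile added by this
# draft; TOTP replaces the event counter with floor(unix_time / timestep).
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Compute an RFC 4226 one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vectors, secret '12345678901234567890':
print(hotp(b"12345678901234567890", 0))  # 755224
print(hotp(b"12345678901234567890", 1))  # 287082
```

A PSKC container transports the shared secret and counter/time parameters; the algorithm profile tells the consuming system which computation, such as the one above, to apply to them.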
Related technical work is based on earlier work by the members of OATH (Initiative for Open AuTHentication) to specify a format that can be freely distributed to the technical community. The authors believe that a common and shared specification will facilitate adoption of two-factor authentication on the Internet by enabling interoperability between commercial and open-source implementations.
This IETF Working Group was chartered to develop the protocols and data formats required to support provisioning and management of symmetric key authentication tokens, both proprietary and standards-based... Current developments in the deployment of Shared Symmetric Key (SSK) tokens have highlighted the need for a standard protocol for provisioning symmetric keys. The need for provisioning protocols in PKI architectures has been recognized for some time. Although the existence and architecture of these protocols provides a feasibility proof for the KEYPROV work, assumptions built into these protocols mean that it is not possible to apply them to symmetric key architectures without substantial modification..."
Tapping the Computing Cloud for Smarter Water
Martin LaMonica, CNET News.com
"If irrigation systems were half as smart as scientific calculators, they could cut water usage by 20 percent to 50 percent, says ET Water Systems. The company earlier this year introduced its SmartBox, a replacement for commercial-scale irrigation controllers that determines the watering needs for landscaping based on a mash-up of site-specific data and local weather. A new version of the device, set for release this summer, will be able to get firmware upgrades over the cell network it uses.
According to ET Water CEO Pat McIntyre, "Water conservation is an area that's often overlooked by investors and technology entrepreneurs, and that's largely because there isn't a large economic incentive to conserve." Still, the Novato, Calif.-based company plans in the fourth quarter to seek $5 million of venture capital to expand its distribution...
Landscape managers replace irrigation controllers with the SmartBox (the company also developed a retrofit option) and then configure the system with a Web-based application. People can input variables such as soil type, slope, shading, and plant type (mature trees versus turf, for example). The watering schedule is then adjusted automatically by ET Water's computers, which incorporate weather information from WeatherBug.com.
The payback for these irrigation systems, which cost about $2,000 installed, is around two years; they are designed for college campuses or office building parks..."
See also: the OASIS Blue Member Section
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: email@example.com
Newsletter unsubscribe: firstname.lastname@example.org
Newsletter help: email@example.com
Cover Pages: http://xml.coverpages.org/