Last modified: August 31, 2006
Standards for Automated Resource Management in the Computing Environment

Overview

This document provides references to standards activities in multiple related areas of computing resource management, including software solution installation and deployment technologies in a heterogeneous environment.


Common Information Model (CIM)

"The DMTF Common Information Model (CIM) provides a common definition of management information for systems, networks, applications and services, and allows for vendor extensions. CIM's common definitions enable vendors to exchange semantically rich management information between systems throughout the network. CIM is comprised of a Specification and a Schema. The Schema provides the actual model descriptions, while the Specification defines the details for integration with other management models...

The latest version of the Schema [as of 2004-09], CIM 2.8, provides new classes for storage and also offers modeling for the Java 2 Enterprise Edition (J2EE) environment. It also introduces the concept of management profiles, provides support for managing security principals and describing their authentication policy and privileges, manages IPsec policy and resulting security associations, and features modeling of the management infrastructure for discovery..."

"CIM allows for the exchange of management information in a platform-independent and technology-neutral way. It is an object-oriented model, describing an organization's computing and networking environments (its hardware, software and services). All managed elements are positioned within this model, clarifying semantics, streamlining integration and reducing costs by enabling end-to-end multi-vendor interoperability in management systems..."

WBEM and CIM Interoperability Working Group: "Today, Web-Based Enterprise Management (WBEM) provides the ability for the industry to deliver a well-integrated set of standard-based management tools leveraging Web technologies. The DMTF has developed a core set of standards that make up WBEM: (1) a data model, the Common Information Model (CIM) standard; (2) an encoding, the xmlCIM Encoding Specification; (3) a transport mechanism, CIM Operations over HTTP. CIM defines a platform and technology-independent information model describing enterprise and service provider compute and networking environments. Managed entities (such as hardware, software, systems, storage, networks, etc.) are described, as well as their capabilities, statistics, metrics and other relationships. The xmlCIM Encoding Specification defines XML elements, written in Document Type Definition (DTD), that are used to represent CIM classes and instances. The CIM Operations over HTTP specification defines a mapping of CIM operations onto HTTP that allows implementations of CIM to communicate in a standardized manner. It completes the technologies that support WBEM... WS-CIM is a merging of the former CIM-SOAP group and the Structured Protocol sub-team from the Server Management Working Group. WS-CIM is a consolidated group where work takes place to enable the DMTF's infrastructure protocols to take advantage of Web Services. Initially, WS-CIM will specify how resources modeled using CIM are exposed, described, managed and discovered via Web services..."
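
To make the xmlCIM encoding concrete, the fragment below is a minimal, hand-written sketch of how a single CIM instance might be rendered in the xmlCIM element vocabulary. The element names (CIM, DECLARATION, INSTANCE, PROPERTY, VALUE) follow the published DTD, but the class shown (CIM_OperatingSystem) and its property values are illustrative only, and a real document may require additional wrapper content.

    <!-- Illustrative sketch of an xmlCIM-encoded instance; values are examples only -->
    <CIM CIMVERSION="2.0" DTDVERSION="2.0">
      <DECLARATION>
        <DECLGROUP>
          <VALUE.OBJECT>
            <INSTANCE CLASSNAME="CIM_OperatingSystem">
              <PROPERTY NAME="Name" TYPE="string">
                <VALUE>Linux</VALUE>
              </PROPERTY>
              <PROPERTY NAME="NumberOfUsers" TYPE="uint32">
                <VALUE>42</VALUE>
              </PROPERTY>
            </INSTANCE>
          </VALUE.OBJECT>
        </DECLGROUP>
      </DECLARATION>
    </CIM>

An instance encoded this way is what a WBEM client would receive in the body of a CIM Operations over HTTP response.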

Data Center Markup Language (DCML)

Data Center Markup Language (DCML) is an open, vendor-neutral language used "to describe data center environments, dependencies between data center components and the policies governing management and construction of those environments. DCML provides a structured data format to describe, construct, replicate, recover and communicate about data center environments. DCML encompasses a wide array of data center elements, including UNIX, Linux, Windows and other servers, software infrastructure and applications, network components, and storage components."

A version 1.1 Data Center Markup Language Framework Specification was published in May 2004 by members of the DCML Organization, which includes over 20 of the world's leading software, service provider, and systems vendors. This DCML Framework Specification "defines the DCML data oriented framework for use by all DCML sub-specifications and DCML compliant management systems and tools. It utilizes a data oriented approach to solve the problem of large scale systems management, particularly in a data center environment. DCML stitches together multiple management systems and tools to form a unified management view of the environment. In this unified view, management systems can exchange domain knowledge about the environment with other management systems in the same environment. This common data oriented approach is the first step toward a unified management view of the environment, allowing systems to communicate by importing and exporting data in vocabularies of a common XML-based language..."
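
The Framework specification itself defines the actual DCML vocabulary; purely as a hypothetical illustration of the kind of data-oriented, dependency-aware description DCML is meant to exchange, a fragment might look like the following. The element names here are invented for this example and are not DCML syntax.

    <!-- Hypothetical illustration only; not actual DCML syntax -->
    <environment name="web-tier">
      <server id="web01" os="Linux">
        <software name="Apache HTTP Server" version="2.0"/>
        <dependsOn ref="db01"/>     <!-- dependency between data center components -->
      </server>
      <server id="db01" os="Solaris">
        <software name="Oracle Database" version="9i"/>
      </server>
    </environment>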

In August 2004, plans were announced "for the Data Center Markup Language (DCML) Organization to advance its specification, technical agenda, membership, and operations as part of the global OASIS standards consortium." Transitioning the activities of the DCML Organization to an OASIS DCML Member Section is designed to "promote the use of utility computing by providing a standard way to represent the IT environment and enabling data center automation and system management solutions to easily exchange information about the environment under management."

On September 29, 2004, four new OASIS Technical Committees were announced in connection with the DCML Member section. See details in the news story "OASIS Forms Four Technical Committees to Advance Data Center Markup Language (DCML)."

  • OASIS DCML Framework TC: "The purpose of the OASIS DCML Framework TC is to create a data model and format for exchanging information about the contents of data centers and other IT resources, and the information used in managing those contents. The OASIS DCML Framework TC will continue work on the DCML Framework specification produced by the DCML organization. The Framework specification will be used as the foundation by other OASIS DCML TCs in creating DCML sub-specifications aimed at representing specific types of information."

  • OASIS DCML Applications and Services TC: "The purpose of the OASIS DCML Applications and Services TC is to extend the OASIS DCML Framework Specification by defining extensions to represent Applications and Services and the information necessary to manage these components. The Applications and Services extensions will be designed so as to be able to represent both specific abstract application and services architectures and, if further extended, specific products or instantiations of an application or service."

  • OASIS DCML Server TC: "The purpose of the OASIS DCML Server TC is to define extensions to the DCML Framework specification to facilitate the representation and management of information about servers. 'Server' refers to a logical or physical compute resource in the datacenter."

  • OASIS DCML Network TC: "The purpose of the OASIS DCML Network TC is to design a data model and XML-based format for the exchange of information about networking elements in a data center. The OASIS DCML Network TC builds on and supports the DCML Framework specification produced by the OASIS DCML Framework TC while focusing on the specifics of network equipment and technology."

Distributed Management Task Force (DMTF)

The Distributed Management Task Force, Inc. (DMTF) "is the industry organization leading the development of management standards and integration technology for enterprise and Internet environments. DMTF standards provide common management infrastructure components for instrumentation, control and communication in a platform-independent and technology neutral way. DMTF technologies include information models (CIM), communication/control protocols (WBEM), and core management services/utilities...

The DMTF Technical Committee oversees the following working groups: Applications/Metrics; Architecture; Behavior and State; Database; Desktop Management Interface (DMI); DEN/LDAP Mapping; Networks; Policy; Pre-OS; Security Protection and Management; Server Management; Support; System and Devices; User and Security; Utility Computing; WBEM Infrastructure and Protocols..."

The DMTF has developed a core set of standards that make up WBEM, which includes the Common Information Model (CIM), CIM-XML, CIM Query Language, WBEM Discovery using Service Location Protocol (SLP) and WBEM Universal Resource Identifier (URI) mapping. In addition, the DMTF has developed a WBEM Management Profile template, allowing for simplified profile development to deliver a complete, standalone definition for the management of a particular system, subsystem, service or other entity...

DMTF Board members include representatives from Cisco, Dell, EMC, Hewlett-Packard Company, Hitachi, Ltd., IBM, Intel Corporation, Microsoft Corporation, Novell, Oracle, Sun Microsystems, Inc., Symantec Corporation, VERITAS Software, WBEM Solutions.

The DMTF Utility Computing Working Group was chartered to (1) "Unify the computer industry on a common manageability model and profiles for utility computing. In support of this goal, the WG will write or collaborate with other standards organizations to create interoperable profiles for utility computing services. (2) Define how to assemble complete service definitions, that is, the composition of the models, the management building blocks, the business/domain specific functional interfaces, bindings, and transports..."

GGF Application Contents Working Group (ACS)

The Global Grid Forum Application Contents Working Group (ACS), initiated in January 2005, is part of the GGF Scheduling and Resource Management Project, itself chartered to "generate best practice scheduling and resource management documents, protocols, and API specifications to enable interoperability."

From the ACS-WG Overview:

"In order to install and operate complex systems such as three-tier systems more efficiently and automatically, it is necessary to specify and manage, as a unit, a diverse set of application related information. The Application Contents Service (ACS) provides central management of such application information. Because application contents can consist of many different artifacts it is useful to be able to bundle them into a single archive to reduce operational overhead and minimize the possibility of inconsistency. The archive must be complete to exclude the instability and/or mismatch between the contents, but can make use of the reference to the external but stable storage to improve the efficiency in transport and storage.

The importance of a standards-based Application Contents service in conjunction with a configuration and deployment service is established in the current draft of the OGSA document.

The Application Contents Service Working Group (ACS-WG) will focus on two main topics: (1) the Application Repository Interface (ARI), specifying the repository service and its interface to Application Contents; and (2) the Application Archive Format (AAF), specifying an archive format for registering a set of Application Contents with the ACS as a unit. The Application Contents include application binaries and related information; e.g., program binaries, configuration data, procedure descriptions for lifecycle management, requirements descriptions for the hardware and underlying middleware, policy rules, and anything needed to create a task on grid systems. They may be real entities or location pointers. On the other hand, the Application Contents do not include information updated by a task or information describing the status of a task. ACS does not interpret or execute the information in each content item; rather, it simply manages it for use by other OGSA services...

IUDD and AAF Overview from the presentation "Installable Unit Deployment Descriptor," by Thomas Studwell. "Principles: For more complete autonomic functionality, the installation of the OS and grid container must be automated and born from the network. It must be generic enough to apply to any computing container solution, Grid or otherwise, and must be able to deal with heterogeneous pools of hardware. An increasing percentage of software is aggregated as a component within a larger, integrated 'solution'; Grid applications are, by definition, an aggregated solution. Customer outages are often caused by their inability to roll out changes to applications because of the complex interdependencies with other application components and products. In order to enable autonomic deployment and configuration management, standardized formats are needed for declaring the structure of a solution and the dependencies among its software components... IUDD in Relationship to ACS: The Application Contents Service defines a repository interface (ARI) and a format for the contents of the repository (AAF). While the requirements for a grid application archive are unique to grid, the description of the contents is not. The description must define the application artifacts, dependencies, and deployment mechanisms. Add software life cycle management to the mix and you have the requirements for the Installable Unit Deployment Descriptor (IUDD). The only difference in requirements between AAF and IUDD is any wrappers needed for storage within the ACS repository... IBM, InstallShield (Macrovision), Zero G, and Novell collaborated on a set of specifications to 'define the schema of an XML document describing the characteristics of an installable unit (IU) of software that are relevant for its deployment, configuration and maintenance.' Published by W3C on July 15, 2004, and made available to the industry under RF terms, the publication was coincident with announcements calling for formation of a standards workgroup to formalize an industry standard for the IUDD schema... Standardization goal: have a single industry standard to describe all aspects of a software solution needed to provide complete lifecycle maintenance. The IUDD specifications will be IBM's submission to a formal workgroup. Summary: the requirements for IUDD and AAF are very similar; leveraging a broader standard like IUDD means the configuration problem is addressed at many levels of granularity with the same data. Candidate standards such as IUDD, and provisioning implementation efforts such as NaReGI and others, provide a foundation. Vendors of install products are encouraged to participate in the ACS-WG..."

From the ACS-WG Charter: "In the Charter Discussion BOF at GGF12, we discussed the relation of ACS to the emerging Solution Installation Schema effort recently submitted to the W3C, inviting Tom Studwell from IBM to represent the technology. With the publication of the Solution Installation Schema, IBM and others made a call to the industry to form a workgroup to standardize an Installable Unit Deployment Descriptor (IUDD) schema that can be used throughout the industry. We believe this schema, if standardized, may be usable as a base specification for the Application Archive Format (AAF). We expect the ACS WG to form a liaison with this IUDD workgroup when it is formed..."

From Section 7.1.3 of the Application Contents Service Specification: [With respect to the Solution Installation] "The structures of the Solution Installation and AAF look similar, in that both use XML documents as descriptors and incorporate multiple sets of application binaries bundled into one archive. However, the two differ in the scope of their target applications and in their purposes. Solution Installation mainly focuses on pre-packaged ISV software and does not address submitting it to grid systems; rather, it is intended for the integration of groups of pre-packaged software. It also focuses on automated installation but does not address working with certain Grid services, e.g., binding between a job and a resource, agreement between consumer and provider, job scheduling, and so on. ACS, on the other hand, aims at handling applications tailored or customized to user requirements and executed in Grid systems. Solution Installation is an emerging specification, submitted to the W3C in June 2004, which aims to be a platform-independent standard..."

References:

  • ACS-WG web site
  • ACS-WG Charter
  • ACS-WG mailing list archive
  • ACS Working Group Co-Chairs: Keisuke Fukui, Thomas Studwell, and Peter Ziu.
  • Contact: Keisuke Fukui (Fujitsu Laboratories Ltd).
  • Application Contents Service Specification. Edited by Keisuke Fukui (Fujitsu). Proposed draft. November 1, 2004. Copyright (c) Global Grid Forum. 31 pages. "This is a sample version of the expected Application Contents Service (ACS) Specification and is created for the ACS Charter Discussion BOF at GGF12 in order to demonstrate the scope and outline of the spec... In this document, we define Application Contents Service (ACS) to manage Application Contents. ACS is an OGSA service, which maintains Application Contents in a repository and provides functions to access them, retrieve their change histories and so on. We also define a standard format to archive Application Contents associated with a single application for registering Application Contents in an ACS and exchanging them between ACSs." [source .DOC, cache]
  • Installable Unit Deployment Descriptor. By Thomas Studwell (IBM Autonomic Computing). Presented to GGF ACS-WG. GGF13 Meetings, March 14, 2005. 22 slides. From the GGF13 ACS-WG materials and minutes archive. This presentation covers: The Problem Space; Principles; The Installable Unit Deployment Descriptor Relationship to ACS; IUDD, what it is and what it isn't; IUDD concepts, structure and capabilities. See the excerpts. [source .PPT, cache]
  • Strawman Requirements for the ACS WG Activity. 2005-02-01 or later.

GGF Configuration Description, Deployment, and Lifecycle Management Working Group (CDDLM)

The CDDLM Working Group is part of the Global Grid Forum (GGF) Scheduling and Resource Management Project.

From the CDDLM-WG Charter:

Deploying any complex, distributed service presents many challenges related to service configuration and management. These range from how to describe the precise, desired configuration of the service, to how we automatically and repeatably deploy, manage and then remove the service. Description challenges include how to represent the full range of service and resource elements, how to support service "templates", service composition, correctness checking, and so on. Deployment challenges include automation, correct sequencing of operations across distributed resources, service lifecycle management, clean service removal, security, and so on. Addressing these challenges is highly relevant to Grid computing at a number of levels, including configuring and deploying individual Grid Services, as well as composite systems made up of many co-operating Grid Services.

Hence, the CDDLM-WG will address how to: describe configuration of services; deploy them on the Grid; and manage their deployment lifecycle (instantiate, initiate, start, stop, restart, etc.). The intent of the WG is to gather researchers, developers, practitioners, and theoreticians in the areas of services and application configuration, deployment, and deployment life-cycle management and to explore the community need for a broader effort in this area...

There are many proprietary and open source systems that partially overlap CDDLM, but there is no consistent and standardized system or method which interoperates across various platforms, languages, and services/applications in a secure and reliable way. As examples, an ideal CDDLM would enable: unified definitions of configuration parameters to replace or mask the many different configuration notations and access mechanisms in use today (e.g. XML, ini files, SQL); methods for validation of configurations at definition- and run-time; system composition from sub-systems; separation of concerns of functionality and configuration; auto-discovery, self-monitoring, etc. Within the group there exists extensive experience gained with one such system, called Smart Framework for Object Groups (SmartFrog)...

Potential Relationship to other Working/Research Groups: CMM and OASIS Web Services Distributed Management (WSDM) TC; GRAAP WG (WS Agreement); BoF on 'Grid Business Processes'; OGSA WG; Data Center Markup Language (DCML).

Overview from the CDDLM WG Component Model Specification. "The target of the CDDLM WG is to come up with the specifications for the CDDLM a) language, b) component model, and c) deployment API... [This] document serves to outline the requirements for a software object to be deployable by the CDDLM framework. [The Component Model] specification is closely related to a set of three specifications that together comprise service configuration description, deployment, and lifecycle management.

  1. Configuration Description Language: The CDDLM Configuration Description Language (CDL) is an XML-based language for declarative description of system configuration that consists of components (deployment objects) defined in the CDDLM Component Model. The Deployment API uses a deployment descriptor in CDL in order to manage the deployment lifecycle of systems. The language provides ways to describe properties (names, values, and types) of components, including value references, so that data can be assigned dynamically while preserving specified data dependencies. A system is described as a hierarchical structure of components. The language also provides prototype-based template functionality (i.e., prototype references) so that the user can describe a system by referring to component descriptions given by component providers (see the illustrative sketch after this list).

  2. Component Model: The CDDLM Component Model outlines the requirements for creating a deployment object responsible for the lifecycle of a deployed resource. Each deployment object is defined using the CDL language and mapped to its implementation. The deployment object provides a WS-ResourceFramework (WSRF) compliant "Component Endpoint" for lifecycle operations on the managed resource. The model also defines the rules for managing the interaction of objects with the CDDLM Deployment API in order to provide an aggregate, controllable lifecycle and the operations which enable this process.

  3. Deployment API: The deployment API is the WSRF-based SOAP API for deploying applications to one or more target computers. Every set of computers to which systems can be deployed hosts one or more "Portal Endpoints", WSRF resources which provide a means to create new "System Endpoints". A System Endpoint represents a deployed system. The caller can upload files to it, then submit a deployment descriptor for deployment. A System Endpoint is effectively a component in terms of the Component Model specification: it implements the properties and operations defined in that document. It also adds the ability to resolve references within the deployed system, enabling remote callers to examine the state of components within it..."
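
As a rough illustration of the CDL features described in item 1 above (prototype references and value references), the following hand-written sketch shows what a small CDL descriptor might look like. The namespace URI, element names, and attribute placement are approximations based on the description in the drafts, not normative CDL syntax.

    <!-- Approximate sketch only; see the CDL specification for normative syntax -->
    <cdl:cdl xmlns:cdl="urn:example:cddlm-cdl" xmlns="urn:example:webapp">
      <cdl:configuration>
        <WebServerPrototype>                           <!-- reusable template from a component provider -->
          <hostname>localhost</hostname>
          <port>80</port>
        </WebServerPrototype>
      </cdl:configuration>
      <cdl:system>
        <frontEnd cdl:extends="WebServerPrototype">    <!-- prototype reference -->
          <port>8080</port>                            <!-- overrides the template value -->
        </frontEnd>
        <monitor>
          <target cdl:ref="/frontEnd/hostname"/>       <!-- value reference resolved at deployment time -->
        </monitor>
      </cdl:system>
    </cdl:cdl>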

References:

  • CDDLM-WG: GGF Configuration Description, Deployment, and Lifecycle Management Working Group
  • CDDLM-WG mailing list archive
  • CDDLM-WG documents
  • CDDLM-WG Charter
  • Working Group Co-Chairs: Dejan Milojicic and Takashi Kojo (NEC). Secretary: Stuart Schaefer (Softricity).
  • Configuration Description, Deployment, and Lifecycle Management: Component Model Specification Draft 2005-04-18. Edited by Stuart Schaefer (Softricity). 50 pages. Appendix A (Component Object Definitions): A.1 XML Schema, A.2 WSDL 1.1, A.4 Topic Space. Copyright Global Grid Forum. Status: 'It is expected that this version will go to last call.' Summary: "This document, produced by the CDDLM working group within the Global Grid Forum (GGF), provides a definition of the CDDLM component model and the process whereby a Grid Resource is configured, instantiated, and destroyed." See also the corresponding Schema (XSD) describing the types of the CDDLM Component Model. Sources: .DOC source [cache] and XSD, both referenced in the April 22, 2005 posting from Stuart Schaefer.

  • CDDLM Foundation Document. Reference: GWD-R [draft-ggf-cddlm-foundation8.doc]. Edited by D. Bell, T. Kojo, P. Goldsack, S. Loughran, D. Milojicic, S. Schaefer, J. Tatemura, and P. Toft. Configuration Description, Deployment, and Lifecycle Management (CDDLM WG). November 30, 2003, modified January 28, 2005. 41 pages. This document describes initial thinking behind CDDLM specifications. [source PDF]

  • XML Configuration Description Language Specification. Edited by J. Tatemura (NEC). Global Grid Forum. 8/9/2004. 30 pages. "This [CDL] document represents one of the two CDDLM language specifications. This specification is based on XML, which provides interoperability with other XML-based Grid specifications; the other specification is based on the SmartFrog language developed at HP Labs, which provides user-friendly syntax and functionality. The two languages will be compatible..." [source PDF]

  • CDDLM Configuration Description Language (CDL). By Jun Tatemura (NEC Laboratories America). Presented at GGF13. March 15, 2005. 28 slides. Posted by Jun Tatemura (NEC) on April 26, 2005. Provides a CDL Core concept overview and answers 'CDL FAQs' from the draft reviews (essential questions regarding CDL design philosophy): [1] Inheritance (prototype references): Why Inheritance? Isn't inheritance just a feature of front-end systems? CDDLM could receive a CDL document after inheritance resolved; [2] Value references: Why do we need cdl:refroot? [3] Data Types: Is CDL yet another schema language? Are you trying to replace XML Schema?... CDL is designed to leverage (not replace) XML Schema...A CDL processor may optionally generate XSDs for unresolved and/or resolved configuration data... [cache]

  • Configuration Description, Deployment, and Lifecycle Management: CDDLM Deployment API. Edited by Steve Loughran (Internet Systems and Storage Laboratory, Hewlett-Packard Laboratories). Draft 2005-02-25. Copyright (c) Global Grid Forum (2004-2005). 34 pages. "This document defines the WS-Resource Framework-based deployment API for performing such tasks. A CDDLM deployment infrastructure must implement this service in order for remote callers to create applications on the infrastructure. This document is accompanied by an XML Schema (XSD) file and a WSDL service declaration. The latter two documents are to be viewed as the normative definitions of message elements and service operations. This document is the normative definition of the semantics of the operations themselves. Purpose of the Deployment API: The deployment API is the SOAP/WS-ResourceFramework (WS-RF) API for deploying applications to one or more target computers, physical or virtual. The API is written assuming that the end user is deploying through a console program, a portal UI or some automated process. This program will be something written by a third party to facilitate deployment onto a grid fabric or other network infrastructure which is running the relevant CDDLM services..." [source .DOC]

  • CDDLM Discussion: CDDLM vs. Provisioning. Prepared by Takashi Kojo (NEC). 1/21/2004. This document compares how CDDLM fits into a bigger picture for provisioning. Scenarios (simple job submission, provisioning, failure recovery) involve job management, resource allocation, local resource management, deployment, resources. [source PDF]

  • "WS-Resource Framework and CDDLM." By Steve Loughran (HP). Revision 0.2. 23-March-2004. GGF CDDLM-WG document. Summary: "From a Web Services perspective, WS-Resource Framework (WS-RF) is both unusual and hence controversial. Its proposed mechanism for adding state to web services runs contrary to what is widely perceived as the way forward for Service Oriented Architecture, the conceptual model that is currently being advocated as the best way to write large-scale distributed systems. However, from a Grid Services perspective, there is little new in the framework, as it is very much like OGSI. By refactoring the OGSI specifications, the framework does make it easier for Web Services and Grid Services to co-exist..." Steve Loughran indicated in a note of April 29, 2005 that since the time of writing (March 2004), the team has actually done a wsrf version, though it has not yet been implemented. [source .DOC, cache]
  • "System Administration and CDDLM." By Paul Anderson and Edmund Smith (School of Informatics, University of Edinburgh). September 17, 2004. In Proceedings of the GGF12 CDDLM Workshop (2004). "This paper presents our impressions of the solutions developed by CDDLM in the light both of recent advances in systems administration, and of the many decades of experience of the systems administration community in managing resources. We also briefly examine the relationship between CDDLM and our current project investigating techniques for managing grid fabrics." [cache]

  • SmartFrog. SourceForge Project. "SmartFrog (Smart Framework for Object Groups) is a framework for configuring and automatically activating distributed applications. The SmartFrog framework is released under LGPL license." See also the home page.

GGF XML Configuration Description Language Specification

Note: Most of the information in this section is superseded by updated information (2005-04 or later) in the preceding section, GGF Configuration Description, Deployment, and Lifecycle Management Working Group (CDDLM).

The GGF CDDLM (Configuration Description, Deployment, and Lifecycle Management) Working Group was chartered to "describe configuration of services; deploy them on the Grid; and manage their deployment lifecycle (instantiate, initiate, start, stop, restart, etc.)."

XML Configuration Description Language Specification: "Successful realization of the Grid vision of a broadly applicable and adopted framework for distributed system integration, virtualization, and management requires the support for configuring Grid services, their deployment, and managing their lifecycle. A major part of this framework is a language in which to describe the components and systems that are required. This document, produced by the CDDLM working group within the Global Grid Forum (GGF), provides a definition of the XML-based configuration description language and its requirements...

The CDDLM WG addresses how to: describe configuration of services; deploy them on the Grid; and manage their deployment lifecycle (instantiate, initiate, start, stop, restart, etc.). The intent of the WG is to gather researchers, developers, practitioners, and theoreticians in the areas of services and application configuration, deployment, and deployment life-cycle management and to explore the community need for a broader effort in this area. The target of the CDDLM WG is to come up with the specifications for the CDDLM language, component model, and basic services. This [CDL] document represents one of the two CDDLM language specifications. This specification is based on XML, which provides interoperability with other XML-based Grid specifications; the other specification is based on the SmartFrog language developed at HP Labs, which provides user-friendly syntax and functionality. The two languages will be compatible..." [revision 0.3 excerpt, 8/9/2004]

OASIS Remote Control XML Technical Committee

In September 2005, members of the OASIS Remote Control XML Technical Committee submitted a preliminary RCXML specification to OASIS, providing a starting point for the TC's discussion and work. The specification was developed by Acogito under contract to the Telecommunications Technology Association of South Korea.

In August 2005, OASIS announced the formation of a new Remote Control XML Technical Committee. Excerpts from the CFP:

"The purpose of the proposed Technical Committee is to develop a set of XML standards to support control of devices to be accessed and controlled remotely. There are a growing number of devices available and under development that can be controlled remotely, including a wide variety of household and industrial implements, and using a broad variety of transmission methods (including infrared, radiofrequency and other methods). However, there is not yet a dominant standard for the commands and command syntax for this control. XML provides a suitable and very flexible framework for this content.

The problem to be solved is standardization of the growing number of independent product-specific solutions to the control problem as products and devices proliferate. The standards to be developed should address as wide a range of controllable devices as possible. However, it is not possible to anticipate the complete range of commands and syntax that will be needed as new devices are developed. For example, it is easy to anticipate that there will be a need for an 'ON' or 'OFF' command, or for a need to map 'settable' controls on the device to a logically controllable identifier, and to also be able to map the domain of acceptable settings (either discrete or continuously variable over a range) to that identifier. However, there likely will be devices in the future with controls not easily accommodated by this first level of syntax. The proposed standard should anticipate this by defining the procedure for periodic expansion and extension.

This Committee will propose RCXML (Remote Control XML). The RCXML technology will allow users to interact with any appliance which can be controlled by remote control, such as TVs, PVRs, VCRs, lighting, heating/cooling systems, security systems, watering systems, etc., from a remote location. Any appliance which has the RCXML interpreter as middleware can be operated through an RCXML scenario sent via a wired or wireless network. Even a remote control service provider that doesn't have detailed information about a certain device should be able to develop and provide control service based on RCXML.

International Standard Technology: In Korea, RCXML is already being adopted as the domestic standard under TTA's support (Telecommunications Technology Association, Korea). Approval of this standard internationally should encourage a broader range of electric/electronic device manufacturers to apply it to their products. The result will be that the provider can easily create a business model for new service, and, since this is based on XML, it will be readily available on a standardized and readily tooled basis..."
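
Since the RCXML specification itself is not reproduced here, the following is a purely hypothetical sketch, with invented element names, of the kind of device/command/setting mapping the CFP describes: an 'ON'/'OFF' style command plus a settable control whose acceptable values span a range.

    <!-- Purely hypothetical illustration; not drawn from the RCXML submission -->
    <rcxml>
      <device id="livingroom-tv" type="TV">
        <command name="POWER">
          <setting>ON</setting>        <!-- discrete domain: ON | OFF -->
        </command>
        <command name="VOLUME">
          <setting>15</setting>        <!-- continuous domain: 0-100 -->
        </command>
      </device>
    </rcxml>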

OASIS Solution Deployment Descriptor (SDD) TC

On April 29, 2005 OASIS issued a Call for Participation in connection with a new Solution Deployment Descriptor (SDD) TC. The purpose was summarized thus: "Deployment and lifecycle management of a set of interrelated software, hereinafter referred to as a solution, is a predominantly manual operation because there is currently no standardized way to express installation packaging for a multi-platform environment. Each hosting platform or operating system has its own format for expressing packaging of a single installable unit but, even on these homogeneous platforms, there is no standardized way to combine packages into a single aggregated unit without significant re-creation of the dependency and installation instructions. The problem is compounded when the solution is to be deployed across multiple, heterogeneous, platforms. A standard for describing the packaging and means to express dependencies and various lifecycle management operations within the package would alleviate these problems and subsequently enable automation of these highly manual and error-prone tasks. The purpose of this Technical Committee is to define XML schema to describe the characteristics of an installable unit (IU) of software that are relevant for core aspects of its deployment, configuration, and maintenance. This document will be referred to as the Solution Deployment Descriptor (SDD). SDDs, previously described as IUDDs, are also described in [a Member Submission to W3C; see W3C Solution Installation Schema]."

SDDs will benefit member companies and the industry in general by providing a consistent model and semantics to address the needs of all aspects of the IT industry dealing with software deployment, configuration, and lifecycle management. The benefits of this work include:

  • ability to describe software solution packages for both single and multi-platform heterogeneous environments
  • ability to describe software solution packages independent of the software installation technology or supplier
  • ability to provide information necessary to permit full lifecycle maintenance of software solutions

SDD TC Scope: "The Technical Committee will define XML schema for SDDs, as well as a package format to associate SDDs, resource content, and software artifacts. SDDs are intended to describe the aggregation of installable units at all levels of the software stack. The resulting XML schema shall be partitioned to allow for layered implementations covering the range of applications from the definition of atomic units of software (Smallest Installable Units) to complex, multi-platform, heterogeneous solutions. A solution is any combination of products, components or application artifacts addressing a particular user requirement. This includes what would traditionally be referred to as a product offering (e.g. a database product), as well as a solution offering (e.g. a business integration platform comprising multiple integrated products), or a user application (e.g. a set of application artifacts like J2EE applications and database definitions). All the software constituents of a solution can be represented by a single SDD as a hierarchy of installable unit aggregates. In addition to the installable units that comprise a solution, the SDD also describes the requirements of targets onto which the solution can be deployed. There are a number of aspects of software deployment, configuration, and life-cycle management that are expressly outside of the scope of this technical committee. Specifically this committee will not specify host platform models, host platform management interfaces, or the design or implementations of deployment or life-cycle managers. Other standards efforts in other parts of the industry cover these aspects and other related standards activities may emerge. This technical committee may develop recommendations regarding these aspects but will feed these recommendations through appropriate liaison with the respective standards committees..."
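
As a rough, hypothetical sketch of the kind of hierarchy the charter describes (installable unit aggregates plus the requirements of targets onto which the solution can be deployed), a descriptor might look like the following. The element names are invented for illustration and are not taken from the SDD schema.

    <!-- Hypothetical sketch only; element names are not from the SDD schema -->
    <solution name="OrderProcessing" version="1.0">
      <requirements>
        <target os="Linux" minMemoryMB="512"/>     <!-- deployment target requirements -->
      </requirements>
      <installableUnit name="DatabaseTier">
        <artifact href="db/schema.sql"/>
      </installableUnit>
      <installableUnit name="ApplicationTier">
        <dependsOn unit="DatabaseTier"/>           <!-- dependency between units -->
        <artifact href="app/orders.ear"/>
      </installableUnit>
    </solution>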

OASIS Web Services Distributed Management (WSDM)

The OASIS WSDM Technical Committee was chartered to "define web services management, including using web services architecture and technology to manage distributed resources; this TC will also develop the model of a web service as a manageable resource. The TC will collaborate with various evolving activities within other standards groups, including, but not limited to, DMTF (working with its technical work groups regarding relevant CIM Schema), GGF (on the OGSA common resource model and OGSI regarding infrastructure), and W3C (the web services architecture committee)..."

In February 2005 the OASIS WSDM Technical Committee submitted its Web Services Distributed Management (WSDM) Committee Draft Specification Version 1.0 to the Consortium membership for approval as an OASIS Standard. The balloted WSDM Committee Draft Specification 1.0 includes both Management Using Web Services (WSDM-MUWS) and Management of Web Services (WSDM-MOWS). MUWS "defines how an Information Technology resource connected to a network provides manageability interfaces such that the IT resource can be managed locally and from remote locations using Web services technologies. MUWS is composed of two parts: MUWS Part 1 provides the fundamental concepts for management using Web services, and MUWS Part 2 provides specific messaging formats used to enable the interoperability of MUWS implementations." Although MUWS Part 2 has a dependency upon Part 1, MUWS Part 1 is independent of Part 2. MUWS Part 1 provides a sample list of types of management capabilities exposed by MUWS: they are "the management capabilities generally expected in systems that manage distributed IT resources; examples of manageability functions that can be performed via MUWS include monitoring the quality of a service, enforcing a service level agreement, controlling a task, and managing a resource lifecycle." Certifications by OASIS member organizations that they are successfully using the WSDM specification consistently with the OASIS IPR Policy have been received from Computer Associates, Hewlett-Packard, International Business Machines, Amberpoint, and TIBCO Software. OASIS Sponsor Members represented on the WSDM TC include Actional Corp, BEA Systems, BMC Software, CA, Dell, Fujitsu, HP, Hitachi, IBM, Novell, Oracle, and TIBCO.

As of September 2004, the TC was developing two separate specifications: (1) Web Services Distributed Management: Management of Web Services (WSDM-MOWS), and (2) Web Services Distributed Management: Management Using Web Services (WSDM-MUWS).

"Management of Web services (MOWS) is a particular case of Management using Web services (MUWS) in which a resource is an element of the Web Services Architecture...The Web services concepts, according to the WSDL specification, are defined as follows. A service is an aggregate of endpoints each offering the service at an address and accessible according to a binding. A service has a number of interfaces that are realized by all of its endpoints. Each interface describes a set of messages that could be exchanged and their format. Properly formatted messages could be sent to the endpoint at the address in a way prescribed by the binding. A description (document, artifact) is composed of definitions of interfaces and services. A description may contain both or either of the definitions..."

Management Using Web services defines how an Information Technology resource connected to a network provides the manageability interfaces such that it can be managed remotely using Web services technologies... Management Using Web Services (MUWS) enables management of distributed information technology (IT) resources using Web services. Many distributed IT resources use different management interfaces. By leveraging Web service technology, MUWS enables easier and more efficient IT management systems by providing a flexible common framework for manageability interfaces that benefit from the features of Web services protocols. Universal management and interoperability across the many various distributed IT resources can be achieved using MUWS. The types of management capabilities exposed by MUWS are the management capabilities generally expected in distributed IT management systems. Examples of manageability functions that can be performed via MUWS include: (1) monitoring quality of services; (2) enforcing service level agreements; (3) controlling tasks; (4) managing resource life-cycles..."
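
As a rough illustration of the MUWS pattern, in which a manageable resource exposes its state as WS-Resource properties that a manager reads with ordinary Web services messages, the sketch below shows a property-retrieval request. The namespace URIs are placeholders and the muws:ResourceId property name is an approximation, not copied from the MUWS schemas.

    <!-- Approximate sketch of a MUWS-style property retrieval; namespace URIs are placeholders -->
    <soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
                   xmlns:wsrf-rp="urn:example:wsrf-resource-properties"
                   xmlns:muws="urn:example:wsdm-muws-part1">
      <soap:Body>
        <!-- ask the manageable resource for one of its advertised properties -->
        <wsrf-rp:GetResourceProperty>muws:ResourceId</wsrf-rp:GetResourceProperty>
      </soap:Body>
    </soap:Envelope>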

Open Services Gateway Initiative (OSGi)

The OSGi specifications define a standardized, component-oriented computing environment for networked services. Adding an OSGi Service Platform to a networked device (embedded devices as well as servers) adds the capability to manage the life cycle of the software components in the device from anywhere in the network. Software components can be installed, updated, or removed on the fly without having to disrupt the operation of the device. Software components are libraries or applications that can dynamically discover and use other components. Software components can be bought off the shelf or developed in house. The OSGi Alliance has developed many standard component interfaces for common functions like HTTP servers, configuration, logging, security, user administration, XML, and many more. Plug-compatible implementations of these components can be obtained from different vendors with different optimizations...

Software component architectures address an increasing problem in software development: The large number of configurations that need to be developed and maintained. The standardized OSGi component architecture simplifies this configuration process significantly.

In the OSGi Service Platform, bundles are the only entities for deploying Java-based applications. A bundle comprises Java classes and other resources, which together can provide functions to end users and can provide components, called services, to other bundles. A bundle is deployed as a Java ARchive (JAR) file. JAR files are used to store applications and their resources in a standard ZIP-based file format...

In an OSGi framework, services are deployed using bundles, and these bundles feature two types of dependencies: (1) Package dependencies. A bundle can export a package which others import. These dependencies, although dynamic, are relatively easy to handle. (2) Service dependencies. Services, encapsulated in deployable components (bundles), can be started and stopped at any time. Other components often depend on these services and need to deal with changes in their availability..." [Offermans]

The OSGi service platform delivers an open, common architecture for service providers, developers, software vendors, gateway operators and equipment vendors to develop, deploy and manage services in a coordinated fashion. It enables an entirely new category of smart devices due to its flexible and managed deployment of services. The primary targets for the OSGi service platform are devices such as set top boxes, service gateways, cable modems, consumer electronics, PCs, industrial computers, cars, smart handhelds and more. These devices that implement the OSGi specifications will enable service providers like telcos, cable operators, utilities, and others to deliver differentiated and valuable services over their networks...

Chapter 17 of the OSGi Service Platform Specification provides the XML Parser Service Specification: "This specification addresses how the classes defined in JAXP can be used in an OSGi Service Platform. It defines how: (1) implementations of XML parsers can become available to other bundles; (2) bundles can find a suitable parser; (3) a standard parser in a JAR can be transformed to a bundle...

The OSGi Service Platform is the optimal Java based application server for networked devices, however small or large they are. This non-proprietary service platform spans: Digital mobile phones; Automotive; Telematics; Embedded appliances; Residential gateways; Industrial computers; Desktop PCs; High-end servers, including mainframes.

Oscar Bundle Repository (OBR)

Oscar is an OSGi framework implementation, presented here as an open source example. "Oscar Bundle Repository (OBR) is an incubator and repository for OSGi bundles. A bundle is the OSGi term for a component for the OSGi framework. A bundle is simply a JAR file containing a manifest and some combination of Java classes, embedded JAR files, native code, and resources. A bundle may provide some specific functionality for the user or it may implement a service that other bundles can use; bundles can only use functionality from other bundles through service interfaces and package sharing..."

OBR uses an XML-based repository file of bundle meta-data. The meta-data can be divided into three groups: required, human readable, and not currently used.
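
As a rough sketch of what such a repository entry might look like, consider the fragment below; the element names approximate the OBR metadata file and are not quoted from it, and the bundle shown is an example.

    <!-- Illustrative sketch only; element names approximate the OBR repository metadata -->
    <bundles>
      <bundle>
        <!-- required metadata: identity and where to fetch the JAR -->
        <bundle-name>Log Service</bundle-name>
        <bundle-version>1.0.0</bundle-version>
        <bundle-updatelocation>http://example.org/bundles/log-1.0.0.jar</bundle-updatelocation>
        <!-- human-readable metadata -->
        <bundle-description>Simple log service implementation</bundle-description>
        <bundle-docurl>http://example.org/bundles/log</bundle-docurl>
      </bundle>
    </bundles>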

The OSGi Alliance is a consortium working on defining standards for delivering and managing dynamically downloadable services into networked environments. The OSGi Alliance has defined a dynamically extensible framework for this purpose, which supports the dynamic deployment and execution of components, called bundles. The OSGi framework provides an excellent platform for building dynamically extensible applications. OBR has two main goals: (1) Provide a repository of useful and/or didactic bundles that can be easily deployed into existing OSGi frameworks; (2) Promote a community effort around bundle creation by increasing the visibility of individual bundles...

Even though OSGi targets the embedded device market, the framework is ideally suited for experimenting with component-oriented and service-oriented computing in general. For example, Oscar can be easily embedded into other projects and used as a plugin or extension mechanism; it serves this purpose much better than other systems that are used for similar purposes, such as Java Management Extensions (JMX)... [adapted from the OBR web site]

References:

  • Oscar Bundle Repository (OBR). SourceForge.
  • Oscar. An OSGi framework implementation.
  • OSGi and Gravity Service Binder Tutorial. "This tutorial creates successively more complex OSGi bundles to illustrate most of the features and functionality offered by the OSGi framework. It culminates by demonstrating how the Gravity Service Binder can be used to greatly simplify creating OSGi applications."

Simple Network Management Protocol (SNMP)

"The Simple Network Management Protocol (SNMP) forms part of the internet protocol suite as defined by the Internet Engineering Task Force. The protocol can be used to monitor network-attached devices for any conditions that warrant administrative attention... Architecturally, the SNMP framework has three fundamental components: master agents, subagents, and management stations. Each Internet Protocol (IP) addressable system in a network, such as a node or a router, hosts a master agent for that system. A master agent typically limits its activity to parsing and formatting of the protocol. If the system has multiple manageable subsystems present, the master agent passes on the requests it receives to one or more subagents. These subagents model objects of interest within a subsystem and interface to that subsystem for monitoring and management operations. The role of the master agent and subagent can merge, in which case it is simply referred to as an agent. A clean separation of the protocol from the structure of management information has made it easy to use SNMP to monitor and often manage hundreds of different subsystems within a network. Within the SNMP architecture, each managed subsystem is modeled through via a Management Information Base (MIB) defined specifically for it. This MIB specifies precisely the management data and operations that a subagent makes possible. This model permits management across all layers of the OSI reference model and extending into applications such as databases, email, and the J2EE reference model. The manager or management station provides the third component. It functions as the equivalent of a client in a client-server architecture. It issues requests for management operations on behalf of an administrator or application, and receives traps from agents as well..." [from the Wikipedia article]

The Simple Network Management Protocol is a protocol for Internet network management services. It is formally specified in a series of related RFC documents. SNMPv1 is now historic, and SNMPv3 is now standard and is described by IETF RFCs 3410-3418; RFC 3410 is informational.

"The SNMP Management Framework presently consists of five major components: (1) An overall architecture, described in RFC 2571; (2) Mechanisms for describing and naming objects and events for the purpose of management. The first version of this Structure of Management Information (SMI) is called SMIv1 and is described in STD 16, RFC 1155, STD 16, RFC 1212 and RFC 1215. The second version, called SMIv2, is described in STD 58, RFC 2578, RFC 2579 and RFC 2580; (3) Message protocols for transferring management information. The first version of the SNMP message protocol is called SNMPv1 and is described in STD 15, RFC 1157. A second version of the SNMP message protocol, which is not an Internet standards track protocol, is called SNMPv2c and is described in RFC 1901 and RFC 1906. The third version of the message protocol is called SNMPv3 and is described in RFC 1906, RFC 2572 and RFC 2574. (4) Protocol operations for accessing management information. The first set of protocol operations and associated PDU formats is described in STD 15, RFC 1157. A second set of protocol operations and associated PDU formats is described in RFC 1905. (5) A set of fundamental applications is described in RFC 2573. The view-based access control mechanism is described in RFC 2575... [from the FAQ, Part 1]

SNIA Storage Management Initiative Specification (SMI-S)

The SMI-S specification [Version 1.0.1] "documents a secure and reliable interface that allows storage management systems to identify, classify, monitor, and control physical and logical resources in a Storage Area Network. The Technical Specification defines a method for the interoperable management of a heterogeneous Storage Area Network (SAN). This Technical Specification describes the information available to a WBEM Client from an SMI-S compliant CIM Server. It describes an object-oriented, XML-based, messaging-based interface designed to support the specific requirements of managing devices in and through Storage Area Networks (SANs)...

Rationale: "Storage Area Networks (SANs) are emerging as a prominent layer of IT infrastructure in enterprise class and midrange computing environments. Applications and functions driving the emergence of SAN technology include: (1) Sharing of vast storage resources between multiple systems, (2) LAN free backup, (3) Remote, disaster tolerant, on-line mirroring of mission critical data, (4) Clustering of fault tolerant applications and related systems around a single copy of data. To accelerate the emergence of SANs in the market, the industry requires a standard management interface that allows different classes of hardware and software products supplied by multiple vendors to reliably and seamlessly interoperate for the purpose of monitoring and controlling resources. The SNIA Storage Management Initiative (SMI) was created to develop this specification (SMI-Specification or SMI-S), the definition of that interface. This standard provides for heterogeneous, functionally rich, reliable, and secure monitoring/control of mission critical global resources in complex and potentially broadly distributed multi-vendor SAN topologies. As such, this interface overcomes the deficiencies associated with legacy management. To achieve the architectural objectives and support the key technological trends [identified in the Introduction] the SMI-S specification document describes an object-oriented, XML-based messaging based interface designed to support the specific requirements of managing devices in and through Storage Area Networks..."

Systems Management Architecture for Server Hardware (SMASH)

In June 2005, DMTF announced the release of a Version 1.0 Preliminary Standard defining the Server Management Command Line Protocol Specification (SM CLP). The document specifies a common command line syntax and message protocol semantics for managing computer resources in Internet, enterprise, and service provider environments. It includes a direct mapping to a subset of the CIM Schema and an XML Schema definition for the Command Response in XML format.

The Systems Management Architecture for Server Hardware (SMASH) is being developed by the DMTF Server Management Working Group (SMWG). "SMASH is a suite of specifications that deliver architectural semantics, industry standard protocols and profiles to unify the management of the data center. The SMASH Command Line Protocol (CLP) specification enables simple and intuitive management of heterogeneous servers in the data center independent of machine state, operating system state, server system topology or access method, facilitating local and remote management of server hardware in both Out-of-Service and Out-of-Band management environments. SMASH also includes the SMASH Managed Element Addressing Specification, SMASH CLP-to-CIM Mapping Specification, SMASH CLP Discovery Specification, SMASH Profiles, as well as a SMASH Architecture White Paper." [DMTF reference page]

SMASH status 2004-09-07: [said Winston Bumpus, president, DMTF] "Since the DMTF's Server Management Working Group (SMWG) was announced, the group has attracted 194 members from 44 companies, showing the industry's remarkable unity behind this effort to deliver vendor-independent, platform neutral server management. This groundswell of support has resulted in an expanded scope for the DMTF's server management standards, and the outcome is SMASH — a suite of specifications that will enable unprecedented simplicity for server management across diverse IT environments in the data center and beyond..." [see "DMTF Unveils Details Of Breakthrough Server Management Standards" below]

Problem statement for the DMTF Server Management Working Group: "There is no uniform way of managing heterogeneous servers (i.e., from multiple vendors) independent of machine state, operating system state, server system topology and access mechanisms. There is a need to extend the CIM standard to cover various server system topologies such as blades and virtualized server systems. In addition, there is a need for a lightweight, industry standard human-oriented command line interface that can be mapped to CIM to implement the above. The goals of the Server Management Working Group are to define a platform independent, industry standard management architecture instantiated through wire level protocols built upon IP based technologies that: (1) Extend the CIM schema (presenting the work in parallel to the Sys/Dev WG) to represent new server system topologies; (2) Leverage the CIM/XML protocol and identify enhancements if necessary; (3) Define a CLI protocol (syntax & semantics); (4) Define profiles for different server system topologies in order to support base-level compliance; (5) Define an architectural model for understanding the semantic behavior of server management components; (6) Demonstrate interoperability..." [from the WG Charter]

References:

Universal Plug and Play (UPnP)

"Universal Plug and Play (UPnP) enhances peer-to-peer network connectivity for personal computers, wireless devices, and other intelligent appliances, in a distributed, open networking architecture. UPnP uses existing standard protocols, such as TCP/IP, Hypertext Transfer Protocol (HTTP), and Extensible Markup Language (XML) to seamlessly connect networked devices and to manage data transfer among connected devices... UPnP provides an architectural framework for creating self-configuring, self-describing devices and services. Networks managed by UPnP require no configuration by users or network administrators because UPnP supports automatic discovery. UPnP enables a device to dynamically join a network, obtain an IP address, and convey its capabilities on request. Control points can use the UPnP application programming interface (API) to learn about the presence and capabilities of devices that are registered on the network. A device can leave a network smoothly and automatically when it is no longer in use. UPnP uses no device drivers: the protocol is media-independent and can be used on any operating system (OS). UPnP enables control over a device user interface through the browser and offers programmatic control to applications. UPnP enables developers to write their own user interfaces for devices, forgoing the vendor-provided interface..." [from Microsoft MSDN 'Universal Plug and Play (UPnP)']

"A UPnP-based network consists of a set of UPnP devices that can be monitored by one or more control points. A UPnP device can contain a number of services and nested devices. For identification purposes, the device must host an XML device description document that lists specific properties about the device, the services associated with the device, and the nested devices. The XML schema for UPnP device descriptions is called the UPnP Template Language (UTL). The device description document must also include a Uniform Resource Locator (URL) for the service description. The service description is an XML document that lists the actions and state variables that apply to a specific service offered by the device..." [UPnP Framework, Microsoft]

"The UPnP Forum is an industry initiative designed to enable simple and robust connectivity among stand-alone devices and PCs from many different vendors... The Forum [as of 2004-10] consists of more than 720 vendors, including industry leaders in consumer electronics, computing, home automation, home security, appliances, printing, photography, computer networking, and mobile products. Companies with interests in particular device classes are encouraged to become Forum members and participate in designing schema templates for their device classes. By defining and publishing UPnP device and service descriptions, members of the UPnP Forum are creating the means to easily connect devices and simplify the implementation of networks...

UPnP technology is broad in scope in that it targets home networks, proximity networks, and networks in small businesses and commercial buildings. It enables data communication between any two devices under the command of any control device on the network. UPnP technology is independent of any particular operating system, programming language, or physical medium.

The UPnP architecture supports zero-configuration networking and automatic discovery whereby a device can dynamically join a network, obtain an IP address, announce its name, convey its capabilities upon request, and learn about the presence and capabilities of other devices. DHCP and DNS servers are optional and are only used if they are available on the network. A device can leave a network smoothly and automatically without leaving any unwanted state information behind.

Like the creation of Internet standards, the UPnP initiative involves a multi-vendor collaboration for establishing standard Device Control Protocols (DCPs). Similar to Internet-based communication, these are contracts based on wire protocols that are declarative, expressed in XML, and communicated via HTTP..." [adapted from About, UPnP Forum]

References:

W3C Open Software Description Specification (OSD)

On August 12, 1997, Microsoft Corporation and Marimba, Inc. submitted the Open Software Description (OSD) specification to W3C, recommending that the proposal be considered as part of the Push Workshop scheduled for September 1997.

Abstract: "This document provides an initial proposal for the Open Software Description (OSD) format. OSD, an application of the eXtensible Markup Language (XML), is a vocabulary used for describing software packages and their dependencies for heterogeneous clients. We expect OSD to be useful in automated software distribution environments.

Specific uses and benefits of OSD: There are a number of ways in which the OSD vocabulary can be used:

  • Standalone use: First and foremost, the OSD vocabulary can be used in a stand-alone XML document to declare dependencies (conditional relationships) between different software components for different operating systems and languages. The OSD file provides instructions that can be used to download and install only the required software components depending on the configuration of the target machine and what software is already present (see the sketch following this list).
  • Accompanying an archive file (e.g. JAR or CAB): The OSD vocabulary can be embedded inside an archive file, such as a Java Archive (JAR) file or a Cabinet (CAB) file. In such circumstances, the OSD should provide a list of additional dependencies required for installing a piece of software, accompanied by optional installation instructions for how to use any files contained in the archive.
  • Referenced from HTML: Often, HTML pages require additional software to be downloaded and installed in order to view the page; an OSD description referenced from such a page can identify that software.
  • Automatic Distribution: This is probably the most interesting scenario: 'push' applications can use the OSD vocabulary to automatically trigger downloads of software. In these scenarios, the OSD vocabulary provides the necessary information so only the needed software components are downloaded and installed..."
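
The sketch below illustrates the standalone-use scenario from the list above: given an OSD description, select the implementation whose operating system and processor match the target machine. The element names (SOFTPKG, IMPLEMENTATION, OS, PROCESSOR, CODEBASE) follow the W3C OSD Note; the package name, version, and URLs are invented for illustration.

# Sketch: choose the download URL from an OSD description that matches the
# target machine, as in the "standalone use" scenario.  The sample document
# and URLs are invented; element names follow the OSD Note.
import xml.etree.ElementTree as ET

OSD_DOCUMENT = """<?xml version="1.0"?>
<SOFTPKG NAME="com.example.viewer" VERSION="1,0,0,0">
  <TITLE>Example Viewer</TITLE>
  <IMPLEMENTATION>
    <OS VALUE="WinNT"/>
    <PROCESSOR VALUE="x86"/>
    <CODEBASE HREF="http://example.com/viewer-win32.cab"/>
  </IMPLEMENTATION>
  <IMPLEMENTATION>
    <OS VALUE="Linux"/>
    <PROCESSOR VALUE="x86"/>
    <CODEBASE HREF="http://example.com/viewer-linux.tar.gz"/>
  </IMPLEMENTATION>
</SOFTPKG>"""

def pick_codebase(osd_xml, target_os, target_cpu):
    """Return the download URL of the implementation matching the target machine."""
    softpkg = ET.fromstring(osd_xml)
    for impl in softpkg.findall("IMPLEMENTATION"):
        os_ok  = any(o.get("VALUE") == target_os  for o in impl.findall("OS"))
        cpu_ok = any(p.get("VALUE") == target_cpu for p in impl.findall("PROCESSOR"))
        if os_ok and cpu_ok:
            return impl.find("CODEBASE").get("HREF")
    return None   # nothing suitable: skip the download

print(pick_codebase(OSD_DOCUMENT, "Linux", "x86"))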

The Open Software Description Format (OSD). W3C Note. Submitted to W3C 13-August-97. Last Updated: August 11, 1997. Latest version URL: http://www.w3.org/TR/NOTE-OSD. By Arthur van Hoff (Marimba, Incorporated), Hadi Partovi (Microsoft Corporation), and Tom Thai (Microsoft Corporation).

W3C Staff comment by Dan Connolly: "The OSD specification proposes an interchange format, based on XML, for descriptions of software packages, a specific sort of web resource. The first work in W3C on labels for web resources was PICS; though it was focused on labels that allow software to filter out content based on ratings, it was designed as a first step toward generalized labels, annotations, and metadata. The Metadata Activity addresses the next steps: structured labels, rules, integration with digital signatures, and so on. Metadata related to content distribution was a theme of our September 1997 Workshop on Push Technology. None of the specific proposed vocabularies for content distribution gained critical mass to charter work in W3C, but work on the generalized infrastructure continued. The PICS label design was recast as RDF: an enriched label data model with an XML syntax. RDF has been applied to the problem of software package labels and relationships: the rpmfind service provides RDF descriptions of over 30,000 software packages, along with tools to install the software and manage dependencies."

References:

W3C Solution Installation Schema

[See now OASIS Solution Deployment Descriptor (SDD) TC]

On June 11, 2004, W3C received a Member Submission from IBM and Novell for the Solution Installation Schema, co-authored by InstallShield Software Corporation, Inc. (InstallShield) and Zero G Software, Inc. The technical submission is published in two parts: Specification and XML schema of Installable Unit Deployment Descriptor and Specification and XML schema of Installable Unit Package Format. The purpose of the specification is "to define the schema of an XML document describing the characteristics of an installable unit (IU) of software that are relevant for its deployment, configuration and maintenance. The XML schema is referred to as the Installable Unit Deployment Descriptor or IUDD schema."


According to the Submission Request abstract:

"IUDDs are intended to describe the aggregation of installable units at all levels of the software stack, including middleware products aggregated together into a platform; and user solutions composed of application-level artifacts which run on such a platform. The XML schema is flexible enough to support the definition of atomic units of software (Smallest Installable Units) as well as complex, multi-platform, heterogeneous solutions...

A solution is any combination of products, components or application artifacts addressing a particular user requirement. This includes what would traditionally be referred to as a product offering (e.g., a database product), as well as a solution offering (e.g., a business integration platform comprising multiple integrated products), or a user application (e.g., a set of application artifacts like J2EE applications and database definitions). All the software constituents of a solution can be represented by a single IUDD as a hierarchy of installable unit aggregates. The top-level aggregation is the root installable unit. In addition to the installable units that comprise a solution, the IUDD also describes the logical topology of targets onto which the solution can be deployed...

The submitters and co-authors have collaborated to develop this specification as a basis for the description of software solution packaging for the purposes of deployment and maintenance of software artifacts on platforms... The benefits of this specification include: (1) the ability to describe software solution packages for both single and multi-platform heterogeneous environments; (2) the ability to describe software solution packages independent of the software installation technology or supplier; (3) the ability to provide information necessary to permit full lifecycle maintenance of software solutions..."
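
As a conceptual illustration only (this is not the IUDD XML schema itself), the sketch below models the structure the abstract describes: a root installable unit aggregating nested installable units, each declaring the kind of deployment target it requires, checked against a simple logical topology of available targets. The names and target types are invented for the example.

# Conceptual sketch, not the IUDD schema: a root installable unit aggregating
# nested units, each stating the target type it must be deployed onto.
from dataclasses import dataclass, field

@dataclass
class InstallableUnit:
    name: str
    required_target: str                        # e.g., "OperatingSystem", "J2EEServer"
    children: list = field(default_factory=list)

    def unsatisfied(self, available_targets):
        """Names of units whose required target type is absent from the topology."""
        missing = [] if self.required_target in available_targets else [self.name]
        for child in self.children:
            missing.extend(child.unsatisfied(available_targets))
        return missing

# Root IU for a small "solution": a database component plus a J2EE application.
solution = InstallableUnit("example-solution", "OperatingSystem", [
    InstallableUnit("database-component", "OperatingSystem"),
    InstallableUnit("web-application", "J2EEServer"),
])

print(solution.unsatisfied({"OperatingSystem"}))   # -> ['web-application']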

Commentary (2005) from Thomas Studwell, Senior Technical Staff Member for Autonomic Computing Technology at IBM, on what comes next for IUDD in the standards arena: "What we're doing next is, when we published the Solution Installation specifications last year, we made a call to the industry to form a work group to formalize this set of specifications (or to formalize a set of specifications that satisfy the same requirements). We've been working with a number of competitors and partners to kick off that work group. The key thing about this is that we're going to formalize a specification related to the deploying of software on multiple heterogeneous platforms and that specification is referenced in other works that are going on in the industry... The work we're doing is called the Installable Unit Deployment Descriptor. It can form a basis that will be referenced in a number of works. For example, GGF has a committee doing some work called the Configuration Description, Deployment and Lifecycle Management, CDDLM, which included a specification for describing the software that gets deployed on a system. This completely overlaps the work we're doing. We worked with that technical committee to have them agree that they would take a look at refactoring their specification to reference that aspect, which is to be standardized in this new work group. So we get a lot more mileage out of a specification and almost immediately achieve broader acceptance by having this single specification. And there are other workgroups that can reference this same type of information."

References:

Web-Based Enterprise Management (WBEM) Initiative

"Web-Based Enterprise Management (WBEM) is a set of management and Internet standard technologies developed to unify the management of enterprise computing environments. WBEM provides the ability for the industry to deliver a well-integrated set of standard-based management tools leveraging the emerging Web technologies. The DMTF has developed a core set of standards that make up WBEM, which includes a data model, the Common Information Model (CIM) standard; an encoding specification, xmlCIM Encoding Specification; and a transport mechanism, CIM Operations over HTTP..."

The WBEMsource initiative is "an umbrella organization, providing coordination between open source WBEM projects, with the goal of achieving interoperability and portability between them... The WBEMsource initiative works with the DMTF, The Open Group and other interested standards organizations to create and enhance standards, and with developers of open source implementations of WBEM to create an environment of interoperable WBEM implementations... Several companies are actively involved in the WBEMsource initiative, including: BMC Software, Cisco Systems, the Distributed Management Task Force (DMTF), Evidian, Hewlett-Packard, IBM, The Open Group, the Storage Networking Industry Association (SNIA), and Sun Microsystems. The DMTF, The Open Group, and the SNIA are industry consortia that are also involved in developing the standards that underpin the initiative..."

References:

Web Services for Management (WS-Management)

On September 15, 2005, Microsoft announced that the company, along with Advanced Micro Devices Inc. (AMD), BMC Software Inc., Computer Associates, Dell Inc., Fujitsu-Siemens Computers, Intel Corporation, NEC Corp., Novell Inc., Sun Microsystems Inc., Symantec Corp. and WBEM Solutions Inc., had submitted the Web Services for Management (WS-Management) specification to the Distributed Management Task Force Inc. (DMTF) for further refinement and finalization as a Web services-based management standard. WS-Management has received extensive industry feedback since the initial October 2004 release, and the WS-Management Version 2 specification was published in March 2005 following numerous feedback and interoperability workshops with implementations beyond those of the twelve co-authors. Sun Microsystems, Intel, and Microsoft announced their plans to deliver products implementing WS-Management. See the news story "WS-Management Specifications Submitted to DMTF for Standardization."

A Web Services for Management (WS-Management) specification edited by Alan Geller (Microsoft) was published in October 2004. This initial joint publication of the specification named Advanced Micro Devices (AMD), Dell, Intel and Sun Microsystems as co-developers. The WS-Management specification describes a general SOAP-based protocol for managing systems such as PCs, servers, devices, Web services and other applications, and other manageable entities. According to Microsoft's announcement, WS-Management "reshapes the concept of distributed management. A key distributed application area is the management of systems and devices. Web services offer a strong foundation for building robust and interoperable systems management solutions. Designed to scale from small footprint controllers to enterprise class servers while maintaining security, WS-Management will help to create a common way of surfacing management-related operations and events within connected systems." Key terms in the WS-Management systems management model include a System as a top-level managed entity composed of one or more Resource Instances; a Resource Instance, also called a Resource or an Instance, is a single manageable item such as a disk drive or a running process. A Resource Service is a Web service that provides access to a single category of manageable items, such as disk drives or running processes, that share the same operations and representation schema. An Agent is an application that provides management services for a System by exposing a set of Resource Services. A Manager is a Web service that is used to manage one or more Systems by sending messages to and/or receiving messages from an Agent for that System. The WS-Management specification is designed to satisfy basic requirements of systems management in terms of web services. It is intended to "(1) constrain Web services protocols and formats so Web services can be implemented in management agents with a small footprint, in both hardware and software; (2) define minimum requirements for compliance without constraining richer implementations; (3) ensure composability with other Web services specifications, such as WS-ReliableMessaging and WS-AtomicTransactions; (4) minimize additional mechanism beyond the current Web service architecture." Namespaces are declared in the WS-Management document for other WS-* specifications, including WS-MetadataExchange, WS-Addressing, WS-Eventing, WS-Enumeration, and WS-Transfer.
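
For a flavor of the message format, the sketch below composes a WS-Transfer Get request addressed to a single Resource Instance selected by a wsman SelectorSet, along the lines of examples that circulated with the pre-DMTF drafts. The namespace URIs are those of the xmlsoap.org drafts as best recalled, and the endpoint address, resource URI, and selector values are placeholders.

# Hedged sketch of a WS-Management "Get" of one Resource Instance.
# Namespace URIs reflect the pre-DMTF (xmlsoap.org) drafts; all addresses,
# resource URIs, and selector values below are placeholders.
GET_DISK_REQUEST = """<?xml version="1.0" encoding="utf-8"?>
<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
            xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing"
            xmlns:wsman="http://schemas.xmlsoap.org/ws/2005/06/management">
  <s:Header>
    <wsa:To>http://server.example.org/wsman</wsa:To>
    <wsa:Action>http://schemas.xmlsoap.org/ws/2004/09/transfer/Get</wsa:Action>
    <wsa:MessageID>uuid:d9726315-bc91-430b-9ed8-ce5ffb858a91</wsa:MessageID>
    <wsman:ResourceURI>http://example.org/hardware/2005/02/storage/physDisk</wsman:ResourceURI>
    <wsman:SelectorSet>
      <wsman:Selector Name="LUN">2</wsman:Selector>
    </wsman:SelectorSet>
  </s:Header>
  <s:Body/>
</s:Envelope>"""

# In practice the envelope is POSTed to the Agent's HTTP(S) endpoint; the
# response carries the XML representation of the selected Resource Instance.
print(GET_DISK_REQUEST)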

References:

Web Services Resource Transfer (WS-RT)

In August 2006, HP, IBM, Intel, and Microsoft published initial draft Version 1.0 of a Web Services Resource Transfer (WS-RT) specification as "the first of a single set of specifications for resource access/manipulation, events and management. The specification defines extensions to WS-Transfer. While its initial design focuses on management resource access, its use is not necessarily limited to those situations. WS-RT is one of the (WS-*) Web service specifications, designed to be composed with each other to provide a rich set of tools for the Web services environment; the specification relies on other Web service specifications to provide secure, reliable, and/or transacted message delivery and to express Web service metadata..." WS-RT defines a new namespace URI: http://schemas.xmlsoap.org/ws/2006/08/resourceTransfer; the RDDL namespace document provides references to the prose specification, WSDL, and XSD.

Background: "In March 2006 HP, IBM, Intel and Microsoft announced plans to address customer's concerns around competing management specifications. The roadmap provided a high-level overview of the strategy being used to achieve the goal of having a single set of specifications for resource access/manipulation, events and management. As the work progresses specifications will be made available for public review and feedback. The first of these specifications, WS-ResourceTransfer, is now available." [WSDM Management Whitepaper overview, 2006-08]

"The operations described in the WS-ResourceTransfer (WS-RT) specification constitute an extension to the WS-Transfer specification, which defines standard messages for controlling resources using the familiar paradigms of 'get', 'put', 'create', and 'delete'. The extensions deal primarily with fragment-based access to resources to satisfy the common requirements of WS-ResourceFramework and WS-Management. specification intends to meet the following requirements: (1) Define a standardized technique for accessing resources using semantics familiar to those in the system management domain: get, put, create and delete; (2) Define WSDL 1.1 portTypes, for the Web service methods described in this specification, compliant with WS-I Basic Profile 1.1; (3) Define minimum requirements for compliance without constraining richer implementations; (4) Compose with other Web service specifications for secure, reliable, transacted message delivery; (5) Provide extensibility for more sophisticated and/or currently unanticipated scenarios; (6) Support a variety of encoding formats including (but not limited to) both SOAP 1.1 and SOAP 1.2 Envelopes..." [from the Version 1.0 spec]

References:

  • WSDM/WS-Man Reconciliation: An Overview and Migration Guide. By IBM Corporation.
  • Web Services Resource Transfer (WS-RT). Version 1.0, August 2006. By Brian Reistad (Microsoft Corporation), Bryan Murray (HP), Doug Davis (IBM), Ian Robinson (Editor, IBM), Raymond McCollum (Editor, Microsoft Corporation), Alexander Nosov (Microsoft Corporation), Steve Graham (IBM), Vijay Tewari (Intel Corporation), William Vambenepe (HP). [Source: IBM, xmlsoap.org, HP, Microsoft]
  • WS-RT WSDL [source]
  • WS-RT XML Schema (xsd) [source]
  • WS-RT-200608 ZIP archive (spec, XSD, WSDL); see the file listing. [source]
  • William Vambenepe (HP). Blog. "[it] builds on top of WS-Transfer to allow more flexible access to the representation of the resource (e.g. retrieving only a portion of the representation instead of the whole thing). This level of features corresponds to the WS-Transfer extensions present in WS-Management or to what WS-ResourceProperties offers in the WSRF world. Attentive readers of the roadmap might remember that it mentions a WS-TransferAddendum specification. There won't be any such specs, instead there will soon be a backward-compatible update of WS-Transfer..."
  • See also: Remote Shell Web Services Protocol. July 2006. Version 1.0, Beta. Authors: Alexander Nosov (Microsoft), Brian Reistad (Microsoft), Johannes Helander (Microsoft), Raymond McCollum (Microsoft, Editor), Steve Menzies (Microsoft), Vishwa Kumbalimutt (Microsoft). "This specification describes a set of extensions to the standard WS-Management protocol for accessing common command shell processors." [cache]


General: Articles, Papers, News

  • [March 27, 2006] "WS-Convergence." By Anne Thomas Manes. March 27, 2006. "Remember a couple of years back when the vendors lined up in factions to fight over specifications? (1) WS-Reliability vs WS-ReliableMessaging; (2) WS-CAF vs WS-Transaction; (3) WS-MessageDelivery vs WS-Addressing; (4) Liberty/SAML vs WS-Trust/WS-Federation. Inevitably, the lines were drawn with Sun and Oracle on one side and IBM and Microsoft on the other. Sun and Oracle made a habit of submitting the first versions of their specs to a standards body, while IBM and Microsoft closely guarded the first two or three revisions, while promising to submit them to a standards body "at some point in the future". Nonetheless, the IBM/Microsoft faction always seemed to win more mindshare. The situation is much improved since Microsoft and Sun buried the hatchet two years ago, and IBM and Microsoft have finally submitted most of their specs (WS-SX, WS-RX, and WS-TX) to OASIS. But there's still one outstanding competing specification stack that still need to be resolved: that of resources, events, and management. Unlike previous situations, in this case IBM and Microsoft are on different sides — and the dispute revolves around simplicity vs. richness... The Microsoft stack is lighterweight and was only recently placed on a standards track. (WS-Management is now governed by DMTF, and WS-Transfer, WS-Enumeration, and WS-Eventing were submitted to W3C in mid-March.) The IBM stack focuses on richness of functionality and is being managed by OASIS. WSDM is an OASIS standard..."

  • [March 15, 2006] "Toward Converging Web Service Standards for Resources, Events, and Management." A Joint White Paper from Hewlett Packard Corporation, IBM Corporation, Intel Corporation, and Microsoft Corporation. Version 1.0. March 15, 2006. Authors: Kevin Cline (Intel), Josh Cohen (Microsoft), Doug Davis (IBM), Donald F. Ferguson (IBM), Heather Kreger (IBM), Raymond McCollum (Microsoft), Bryan Murray (HP), Ian Robinson (IBM), Jeffrey Schlimmer (Microsoft), John Shewchuk (Microsoft), Vijay Tewari (Intel), William Vambenepe (HP). "HP, IBM, Intel and Microsoft plan to develop a common set of specifications for resources, events, and management that can be broadly supported across multiple platforms. The parties will do this by building on existing specifications and defining a set of enhancements that enable this convergence. In many scenarios, vendors and customers building solutions using Web services will find that the existing specifications support their scenarios. Vendors and customers may use the new specifications and functions when needing the common capabilities. The common functionality we cover includes: (1) Resources: The ability to create, read, update and delete information using Web services; (2) Events: The ability to connect Web services together using an event driven architecture based on publish and subscribe; (3) Management: Provide a Web service model for building system and application management solutions, focusing on resource management. Moreover the common interoperable collection of specifications is designed such that organizations can easily extend the specifications to cover additional advanced scenarios..." Also available from HP. [cache]

  • [May 10, 2005] "Simplify Deployment Tasks with Solution Installation Technology. A Close Encounter with Solution Installation Descriptors." By Charlie Halloran (Senior Software Engineer, IBM). From IBM developerWorks (May 10, 2005). ['Solution installation technology in the IBM Autonomic Computing Toolkit is best understood in terms of the Solution Installation descriptor. With examples included here, learn how to use the Solution Installation descriptor to reap the benefits of self-configuring technology from the Autonomic Computing Toolkit. By eliminating tasks normally required of the software packager and the user who's installing it, solution installation technology saves time and eliminates errors.'] "The promise of autonomic computing technology is to enable computer systems and products to manage themselves. One aspect of self-managing software is its ability to configure itself. The solution installation technology in the Autonomic Computing Toolkit enables applications to be self-configuring, making it easier for customers to start using the software without investing a lot of time and skills installing and configuring. To bring an application on board, three things need to be done: (1) Check for a proper installation environment; (2) Copy all required files and objects from install media; (3) Make any modifications to the default configuration to reflect the proper operating environment. The solution installation technology in the Autonomic Computing Toolkit contains an architected data format and a run-time library that lets these three steps be completely described and implemented, resulting in a much simpler and smoother installation experience, and yields an application that is ready to be used. The advantages of Solution Installation are numerous. Software can be packaged to list dependencies, check them, and allow or disallow the installation depending on the results. Individual packages can be combined into one large aggregation by wrapping them together, thus preserving the testing done on each component package. Finally, actions to be performed can be included so that all the steps needed to be ready to run can be executed automatically. These concepts are demonstrated by the samples included in the [IBM] Autonomic Computing Toolkit..."

  • [April 29, 2005] "Approaches for Service Deployment." By Vanish Talwar and Dejan Milojicic (Hewlett-Packard Laboratories); Qinyi Wu, Calton Pu, Wenchang Yan, and Gueyoung Jung (Georgia Tech). In IEEE Internet Computing Volume 9, Number 2 (March/April 2005), pages 70-80. "Traditional IT service-deployment technologies are based on scripts and configuration files, which have a limited ability to express dependencies and verify configurations, resulting in hard-to-use and erroneous system configurations. Emerging language- and model-based tools promise to address these deployment challenges, but their benefits aren't yet clearly established. The authors compare manual, script-, language-, and model-based deployment solutions in terms of scale, complexity, expressiveness, and barriers to first use. In Sercice Oriented Computing, changes to a service component must be propagated or contained so that the services using that component continue to function correctly. Unplanned changes, such as those caused by failures, must also accommodate dependencies services that depend on a failed service, for example, might need to be restarted. A concrete and serious challenge in SOC is the long-lived and evolving nature of large-scale services. A system update at even a moderately sized data center can require changes to 1,000 machines, some of which might have interdependencies among their services... Automation of service deployment is beneficial for improved correctness, speed, and documentation, but automation comes at an increased cost in development time and administrators' learning curves. This initial overhead might be acceptable if overall gains are significant and worthwhile, but IT managers face a more general question: which of these approaches should they adopt, and when? From the perspective of our programming-language-inspired methodology, the four deployment approaches differ in nature, yet are synergistic. The manual approach is imperative; the script-based one is automated imperative; the language-based one is declarative; and the model-based one is goal-driven. Ease of use and barriers to first use typically determine the optimal choice, but to define the best deployment method in an SOC environment, our results favor the trend toward using a model-based approach because each successful service composition increases total system complexity as well as scale... Ultimately, no universally optimal solution exists the best approach is the one that closest matches the deployment need. When the number of deployed systems is small or systems' configurations rarely change, a manual solution is the most reasonable approach. For services with more comprehensive configuration changes, a script-based deployment strategy offers several benefits. In larger environments in which changes involve dependencies, language-based solutions are likely a better choice. If the changes also involve significant perturbations to the underlying service's design, the model-based approach is probably ideal. From the perspective of documentability, manual deployment offers poor support; scripts offer minimal support for the deploy-time changes; language-based approaches support incremental documentability based on inheritance and composition; and model-based approaches add runtime documenting by virtue of capturing all the changes in the deployed service's lifetime... 
Integration with development tools such as Eclipse should both improve ease of use and decrease barriers to first use because of graphical user interfaces combined with default configuration templates. We also plan further examination of deployment in different underlying environments such as PlanetLab, Grid, and Enterprise."

  • [March 11, 2005] "A Little Wisdom About WSDM." By Heather Kreger (Lead Architect for Web Services and Management for Standards and Emerging Technologies, IBM). ['Uncover the motivation behind the development of Web Services Distributed Management (WSDM) 1.0, a new standard that OASIS just approved. This paper gives you an overview of the specification and shows you some of the key design tenants.'] " The industry has been wrestling with the complexity of managing its business systems for years. This complexity stems from the variety of IT resource providers and application providers that enterprises use to build their business systems. A variety of management systems already co-exist to be able to manage the breadth of resources. Ultimately, this creates a classic integration issue: the problem of management integration. OASIS has just approved a new standard from the Web Services Distributed Management Technical Committee (WSDM) as the first step to solving the management integration problem of Web Services Distributed Management: Management Using Web Services (MUWS) and Web Services Distributed Management: Management of Web Services (MOWS) specifications. WSDM provides significant value to three major groups: (1) Customers with heterogeneous IT environments: WSDM allows management software from different vendors to interoperate more easily, enabling end-to-end and even cross-enterprise management. (2) ISV's producing management software: WSDM provides standards for identifying, inspecting, and modifying characteristics of resources in the IT environment. Management applications can take advantage of these to deliver functionality and increase the number and type of resources that management software can address. Over time this will reduce the cost of such applications and broaden their potential function. (3) Manufacturers of devices: WSDM provides the ability to expose management interfaces using Web services in a standard way, regardless of how the internal instrumentation is done. Any management vendor can use these Web services interfaces, reducing the amount of custom support required. The approval of WSDM as an OASIS standard is of interest to all three groups, as well as industry analysts concerned with systems management or Web services. The WSDM 1.0 specifications lay the foundation for using Web services as a management platform. Now that the specification has been approved, there will be activities to educate the industry and align it with other system management activities. WSDM will continue to evolve in many ways. WSDM will evolve as the specifications it depends on become OASIS and W3C standards. WSDM will evolve as it begins to be applied in the Distributed Management Task Force (DFTM) and other management organizations and adds new functionality to its scope. WSDM will also evolve as the industry sorts out competing and overlapping specifications..."

  • [September 2004] "Installable Unit Deployment Descriptor for Autonomic Solution Management." By Christine Draper, Randy George, Marcello Vitaletti (IBM Software Group). Pages 742-746 in Database and Expert Systems Applications, 15th International Workshop on (DEXA'04), August 30 - September 03, 2004, Zaragoza, Spain. "Today's enterprise solutions consist of multiple components whose deployment must be coordinated across multiple, heterogeneous hosting environments, including operating systems, application servers, databases and other middleware. The requirements of software components on their hosting environments and their interdependencies must be fully specified in order to enable an autonomic management of solutions during their life cycle. The Installable Unit Deployment Descriptor illustrated in this paper supports a fully declarative specification of a solution and provides a foundation for autonomic deployment services, as well as for the exchange of solution data across different tools and management applications... The IUDD specification is not dependent on a specific package format for the assembly of resources that must be laid down during install. A companion, independent specification defines a flexible packaging format, by which the physical location of packaged resources can be specified for different media (file system, network and removable volumes) via a media descriptor. The XML schema supports a fully declarative specification of a solution and provides a foundation for autonomic deployment services, as well as for the exchange of solution data across different tools and management applications. Adoption of the IUDD schema may deliver a number of benefits, including: (1) A common packaging structure that allows a product or solution to be installed by multiple installer technologies; (2) Support for aggregating independently packaged IUs into more complex solutions; (3) The sharing of common runtime services by multiple installer technologies; (4) Improved accuracy of dependency checking, leading to more reliable installation; (5) Improved consistency between product installs regarding identification, feature selection, and lifecycle management.

  • [May 14, 2004] Resource Management in OGSA. Edited by Frederico Buchholz Maciel (Hitachi, Ltd). Global Grid Forum. Common Management Model (CMM) WG. May 14, 2004. 21 pages. "Any computing environment requires some degree of system management: monitoring and maintaining the health of the systems, keeping software up-to-date, maintaining user accounts, managing storage and networks, scheduling jobs, managing security, and so on. The complexity of the management task increases as the number and types of resources requiring management increases, and is further complicated when those resources are distributed... Today, system administrators can choose from a wide variety of management tools from system vendors, third party suppliers and the open source community. However, these tools tend to operate independently and to use proprietary interfaces and protocols to manage a limited set of resources, making it difficult for an organization to build an efficient, well-integrated management system. This issue is being addressed through the development of manageability standards that will enable conforming management tools to manage conforming resources in a uniform manner, and to interoperate with each other. In turn this will enable system administrators to choose their management tools and suppliers in the knowledge that, regardless of their origin, the tools can work cooperatively in an integrated management environment. This document offers a detailed discussion of the issues of management in a Grid based on the Open Grid Services Architecture (OGSA). It first defines the terms and describes the requirements of management as they relate to a Grid, and then organizes the interfaces, services, activities, etc. that are involved in Grid management, including both management within the Grid and the management of the Grid infrastructure. It concludes with a comprehensive gap analysis of the state of manageability in OGSA, primarily identifying Grid-specific management functionality that is not provided for by emerging Web services-based distributed management standards..." [source PDF]


Document URI: http://xml.coverpages.org/computingResourceManagement.html