XML Daily Newslink. Wednesday, 15 September 2010

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation http://www.microsoft.com



WSMO-Lite: Lightweight Semantic Descriptions for Services on the Web
Dieter Fensel, Florian Fischer, Jacek Kopecky (et al), W3C Submission

W3C has acknowledged receipt of a technical contribution WSMO-Lite: Lightweight Semantic Descriptions for Services on the Web, presented as a W3C Member Submission. The contributors are representatives of The Open University, Ontotext AD, Forschungszentrum Informatik (FZI), and University of Manchester. The contributors suggest that the Consortium consider this as an input for work in a new Lightweight Semantic Web Service Description incubator group or working group at W3C.

"The WSMO-Lite specification defines a lightweight set of semantic service descriptions in RDFS that can be used for annotations of various WSDL elements using the SAWSDL annotation mechanism. These annotations cover functional, behavioral, nonfunctional and information semantics of Web services, and are intended to support tasks such as (semi-)automatic discovery, negotiation, composition and invocation of services. It exploits RDF and RDFS as well as their various extensions such as OWL and RIF for semantic service descriptions.

WSMO-Lite was designed to address the following requirements: (1) Identify the types and a simple vocabulary for semantic descriptions of services (a service ontology); (2) Define an annotation mechanism for WSDL using this service ontology; (3) Provide the bridge between WSDL, SAWSDL and (existing) domain-specific ontologies such as classification schemas, domain ontology models, etc."
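As background for the annotation mechanism cited above: SAWSDL attaches ontology URIs to WSDL components through the sawsdl:modelReference extension attribute, and WSMO-Lite supplies the RDFS service ontology those URIs point into. The sketch below is illustrative only; the interface, operation, and ontology URIs are invented, though the namespace URIs follow the WSDL 2.0 and SAWSDL Recommendations, and a browser context is assumed for DOMParser.

    // A hypothetical WSDL fragment whose components carry SAWSDL
    // modelReference annotations pointing at service-ontology terms.
    const annotatedWsdl = `
    <wsdl:description xmlns:wsdl="http://www.w3.org/ns/wsdl"
        xmlns:sawsdl="http://www.w3.org/ns/sawsdl">
      <wsdl:interface name="HotelBooking"
          sawsdl:modelReference="http://example.org/onto#ReservationService">
        <wsdl:operation name="bookRoom"
            sawsdl:modelReference="http://example.org/onto#BookingEffect"/>
      </wsdl:interface>
    </wsdl:description>`;

    // Harvest every annotation, as a discovery engine might when matching
    // services against ontology terms.
    const doc = new DOMParser().parseFromString(annotatedWsdl, "application/xml");
    for (const el of Array.from(doc.getElementsByTagName("*"))) {
      const ref = el.getAttribute("sawsdl:modelReference");
      if (ref) console.log(`${el.localName} -> ${ref}`);
    }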

The W3C Team Comment from Carine Bournez: "The WSMO-Lite submission is a restricted subset of the WSMO (Web Services Modelling Ontology) submission to be used as an extension to SAWSDL and WSDL for Web Services descriptions. It proposes a layered approach for addition of semantics at the different description levels: the data model (in XML Schema) is annotated with a domain ontology, the interface (in WSDL) is annotated with conditions for invocation of the service and effects of that invocation, the overall functionality of the service is annotated thanks to a service classification ontology... The WSMO-Lite Submission provides a simple way to handle simple semantic description of Web services without having to use a complete SWS modelling framework to start with. It is a useful addition to SAWSDL for annotations of existing services and the combination of both techniques can certainly be applied to a large number of semantic Web services use cases."

See also: the W3C Team Comment on the WSMO-Lite Member Submission


IETF Discussion List for Security Content Automation Protocol (SCAP)
Staff, IETF Announcement

IETF has announced the creation of a non-working group discussion list for discussions relating to the applicability of the SCAP content formats to current and emerging IETF protocols. Current SCAP specifications and Internet use scenarios are in scope for this discussion list; the specifications include the Common Configuration Enumeration (CCE), Common Vulnerabilities and Exposures (CVE), Common Platform Enumeration (CPE), Open Vulnerability and Assessment Language (OVAL), and Extensible Configuration Checklist Description Format (XCCDF).

"SCAP is a suite of specifications that standardize the format and nomenclature by which security software products communicate software flaw and security configuration information. SCAP is a multi-purpose protocol that supports automated vulnerability checking, technical control compliance activities, and security measurement. Goals for the development of SCAP include standardizing system security management, promoting interoperability of security products, and fostering the use of standard expressions of security content.

NIST Special Publication (SP) 800-117, Guide to Adopting and Using the Security Content Automation Protocol, defines SCAP as comprising two major elements. First, SCAP is a protocol: a suite of six specifications that standardize the format and nomenclature by which security software communicates information about publicly known software flaws and security configurations, annotated with common identifiers and embedded in XML. Second, SCAP also utilizes software flaw and security configuration standard reference data, also known as SCAP content. This reference data is provided by the National Vulnerability Database (NVD), which is managed by NIST and sponsored by the Department of Homeland Security (DHS).

SCAP 1.0 uses the following XML-based specifications: (1) Extensible Configuration Checklist Description Format (XCCDF) - a language for authoring security checklists/benchmarks and for reporting results of checklist evaluation; (2) Open Vulnerability and Assessment Language (OVAL) - a language for representing system configuration information, assessing machine state, and reporting assessment results; (3) Common Platform Enumeration (CPE) - a nomenclature and dictionary of hardware, operating systems, and applications; (4) Common Configuration Enumeration (CCE) - a nomenclature and dictionary of security software configurations; (5) Common Vulnerabilities and Exposures (CVE) - a nomenclature and dictionary of security-related software flaws; (6) Common Vulnerability Scoring System (CVSS) 2.0 - an open specification for measuring the relative severity of software flaw vulnerabilities...
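How these components fit together can be made concrete with a small sketch. The shape below is hypothetical (SCAP tooling defines its own XML schemas, not this interface), and the rule and identifier values are invented, but they mirror how XCCDF results cross-reference CCE/CVE names and CVSS scores:

    // Hypothetical summary record an XCCDF checklist evaluator might emit.
    interface RuleResult {
      ruleId: string;                      // XCCDF rule identifier
      ident?: string;                      // cross-reference: a CCE or CVE name
      result: "pass" | "fail" | "error";   // XCCDF-style test outcome
      cvssScore?: number;                  // CVSS 2.0 base score, 0.0 to 10.0
    }

    const results: RuleResult[] = [
      { ruleId: "rule-password-min-length", ident: "CCE-0000-0", result: "fail" },
      { ruleId: "rule-patch-example", ident: "CVE-2010-0000", result: "pass",
        cvssScore: 7.5 },
    ];

    // A simple compliance roll-up across the checklist.
    const failed = results.filter(r => r.result === "fail").length;
    console.log(`${results.length - failed} of ${results.length} rules passed`);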

See also: NIST SCAP reference page


W3C Publishes XMLHttpRequest Level 2 Specification
Anne van Kesteren (ed), W3C Technical Report

Members of the W3C Web Applications Working Group have published a Working Draft of the XMLHttpRequest Level 2 specification, which "enhances the XMLHttpRequest object with new features, such as cross-origin requests, progress events, and the handling of byte streams for both sending and receiving.

The XMLHttpRequest object implements an interface exposed by a scripting engine that allows scripts to perform HTTP client functionality, such as submitting form data or loading data from a server. It is the ECMAScript HTTP API.

The name of the object is 'XMLHttpRequest' for compatibility with the Web, though each component of this name is potentially misleading. First, the object supports any text-based format, including XML. Second, it can be used to make requests over both HTTP and HTTPS (some implementations support protocols in addition to HTTP and HTTPS, but that functionality is not covered by this specification). Finally, it supports 'requests' in a broad sense of the term as it pertains to HTTP; namely all activity involved with HTTP requests or responses for the defined HTTP methods."
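The two headline additions, cross-origin requests and progress events, look like this in practice. A minimal sketch, assuming a browser context and a server that opts in via CORS response headers; the URL is invented:

    const xhr = new XMLHttpRequest();
    xhr.open("GET", "https://api.example.org/data");   // a cross-origin URL
    xhr.onprogress = (e) => {                          // Level 2 progress events
      if (e.lengthComputable) {
        console.log(`received ${e.loaded} of ${e.total} bytes`);
      }
    };
    xhr.onload = () => console.log("done:", xhr.status);
    xhr.onerror = () => console.log("failed (e.g. the server did not allow CORS)");
    xhr.send();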

The W3C WebApps Working Group is part of the Rich Web Clients Activity in the W3C Interaction Domain. "With the ubiquity of Web browsers and Web document formats across a range of platforms and devices, many developers are using the Web as an application environment. Examples of applications built on rich Web clients include reservation systems, online shopping or auction sites, games, multimedia applications, calendars, maps, chat applications, weather displays, clocks, interactive design applications, stock tickers, office document and spreadsheet applications, currency converters, and data entry/display systems... The work of the Web Applications (WebApps) WG covers both APIs and formats. APIs are the assorted scripting methods that are used to build rich Web applications, mashups, and Web 2.0 sites. Standardizing APIs improves interoperability and reduces site development costs. The formats work covers certain markup languages, including Widgets for deploying small Web applications outside the browser, and XBL for skinning applications. W3C's Interaction Domain is responsible for developing technologies that shape the Web's user interface. These technologies include (X)HTML, the markup language that started the Web. Participants also work on second-generation Web languages initiated at the W3C: CSS, MathML, SMIL, SVG, and XForms have all become an integral part of the Web..."

See also: the W3C Rich Web Client Activity


New Energy Management (EMAN) Working Group Proposed in IETF
Staff, IESG Announcement

The Internet Engineering Steering Group (IESG) announced that a new IETF working group on 'Energy Management' has been proposed in the Operations and Management Area. The IESG has not yet made any determination, but a draft charter has been submitted and is published for informational purposes only. Public comments to the IESG mailing list are invited through September 21, 2010. From the charter proposal: "Energy management is becoming an additional requirement for network management systems due to several factors, including rising and fluctuating energy costs, increased awareness of the ecological impact of operating networks and devices, and government regulation of energy consumption and production...

The basic objective of energy management is operating communication networks and other equipment with a minimal amount of energy while still providing sufficient performance to meet service level objectives. A discussion of detailed requirements has already started in the OPSAWG, but further exploration in the EMAN WG is needed.

Today, most networking and network-attached devices neither monitor nor allow control of energy usage, as they are mainly instrumented for functions such as fault, configuration, accounting, performance, and security management. These devices are not instrumented to be aware of energy consumption. There are very few means specified in IETF documents for energy management, which includes the areas of power monitoring, energy monitoring, and power state control...

A particular difference between energy management and other management tasks is that in some cases the energy consumption of a device is not measured at the device itself but reported from a different place, for example, by a Power over Ethernet (PoE) sourcing device or by a smart power strip; in such cases one device is effectively metering another, remote device. This requires a clear definition of the relationship between the reporting devices and identification of remote devices for which monitoring information is provided... The WG will investigate existing standards such as those from the IEC, ANSI, DMTF and others, and reuse existing work as much as possible..."
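A minimal sketch of the reporting relationship just described, with all type and field names invented for illustration; the point is only that the measuring device and the measured device are distinct:

    interface EnergyReading {
      watts: number;
      timestamp: Date;
    }

    interface EnergyReport {
      reportingDevice: string;   // the device performing the measurement
      meteredDevice: string;     // the remote device the measurement is about
      reading: EnergyReading;
    }

    const report: EnergyReport = {
      reportingDevice: "poe-switch-1/port-7",   // e.g. a PoE sourcing device
      meteredDevice: "ip-phone-42",             // the powered device it meters
      reading: { watts: 6.2, timestamp: new Date() },
    };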


Designing a Scalable Model for the Stanford Digital Repository
Tom Cramer and Katherine Kott, D-Lib Magazine

"The Stanford Digital Repository (SDR) is a preservation repository designed to make digital assets available over the long-term by helping ensure their integrity, authenticity and reusability. Discipline- or domain-specific contents include materials such as maps and geospatial data sets in the National Geospatial Digital Archive, digitized medieval manuscripts from the Parker on the Web research site, or archival copies of video games, virtual worlds and contextual materials associated with the Preserving Virtual Worlds project. The Stanford Digital Repository has largely achieved its original mission. With three years of continuous operation, it has grown to support more than 80 TB of unique scholarly assets, comprising hundreds of thousands of digital objects in a diversity of formats. With numerous successful media migrations and significant changes in staffing, the Stanford's preservation system has navigated the first of its ongoing sustainability challenges.

SDR's future service profile can be firmly scoped around a few core functions ensuring content fixity, authenticity and security. Content deposit, accessioning, conversion and overall management occur 'above' SDR, orchestrated through a digital object registry. Content access, including discovery and delivery to scholars and the general public, occurs in purpose-built access systems, in digital stacks. This separation of concerns allows SDR to focus its efforts on large-scale content ingestion, administration, selective preservation actions and limited retrieval. Upstream conversion processes and rich discovery and delivery systems will be supported through well-defined APIs.

SDR's technical architecture will address and improve on the critical priorities that have emerged in operating the first-generation repository. These include adopting Fedora as a metadata management system to leverage the community's investment in and ongoing support for an open source platform that aligns well with SDR's overall technical design. Experience has also shown the need to decompose functions into more granular and loosely coupled services (i.e., from 'ingest' to 'checksum'), both for increased control of processes and for throughput.

The preservation subsystems will require balancing support for accommodating large objects and a multitude of smaller objects. SDR's data model must shift to reduce the incremental analysis and development required to support new content types and collections. Content files will be stored in directories following the BagIt design, with metadata files stored in discrete chunks, leveraging Fedora's object design and XML management capabilities. Taken individually, the changes along any one of these vectors represent an incremental enhancement; taken altogether, though, these changes are substantial enough to move the Stanford Digital Repository to a second-generation system and set of services..."
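The BagIt convention mentioned above stores payload files under a data/ directory alongside tag files such as bagit.txt and a checksum manifest. A minimal sketch in Node.js; the payload file and its content are invented, and real SDR ingest would of course involve far richer metadata:

    import { mkdirSync, writeFileSync, readFileSync } from "fs";
    import { createHash } from "crypto";

    mkdirSync("bag/data", { recursive: true });
    writeFileSync("bag/data/object.xml", "<mods>...</mods>");   // payload file
    writeFileSync("bag/bagit.txt",
      "BagIt-Version: 0.97\nTag-File-Character-Encoding: UTF-8\n");

    // Payload manifest: one "<checksum> <path>" line per payload file.
    const digest = createHash("md5")
      .update(readFileSync("bag/data/object.xml"))
      .digest("hex");
    writeFileSync("bag/manifest-md5.txt", `${digest} data/object.xml\n`);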

See also: the U.S. National Geospatial Digital Archive


Virtify PIM Supports European Regulatory Standard for Product Information Management
Staff, Virtify Announcement

Virtify has announced the release of its new software product Virtify PIM Enterprise, part of the Virtify Enterprise Content Compliance (ECC) Software Suite. Virtify PIM is designed to help companies stay compliant with the new EU labeling standard, and is the first off-the-shelf, Web-based product that enables the fast and efficient creation of PIM submissions in a collaborative, best-practices environment.

PIM, according to the European Medicines Agency, is an XML-based standard "for the electronic exchange of product information in the context of marketing authorisation applications. It describes how the required information should be created and validated so that it can be exchanged successfully between applicants and competent authorities. The design of the standard aims to minimise the repetition of information that is included many times in different locations within the documents. Its guiding design principle is to hold any piece of information only once and to allow its use as many times as necessary to create the required documents. It will obviate the need to supply either paper or Microsoft Word documents, as are currently required. The standard utilises XML (Extensible Markup Language) to structure and control the product information being exchanged..."
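The "hold any piece of information only once" principle described above can be illustrated with a toy fragment-reference scheme. This is purely illustrative, not the PIM XML vocabulary itself, and all names are invented:

    // A shared fragment stored exactly once...
    const fragments: Record<string, string> = {
      "frag.activeSubstance": "Examplium hydrochloride",
    };

    // ...and referenced from as many documents as needed.
    type Part = string | { ref: string };
    const leaflet: Part[] = [
      "Each tablet contains ", { ref: "frag.activeSubstance" }, ".",
    ];

    const rendered = leaflet
      .map(p => (typeof p === "string" ? p : fragments[p.ref]))
      .join("");
    console.log(rendered);   // "Each tablet contains Examplium hydrochloride."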

Key capabilities of Virtify PIM Enterprise include: (1) Comprehensive PIM Lifecycle Management with integrated workflow for rapid status tracking and centralized management of PIM submissions, including content creation, review, approval, and translations, as well as bi-directional communication and comments from EMA; (2) Role-based, granular access control at the PIM document, PIM element, PIM fragment, and language level; this unique structured content approach allows for the easy tagging and reuse of common content elements across documents; (3) Automated XML generation with no programming knowledge required by business users; (4) Integrated Translation Management through standard web services for translation tracking and status reporting through the Virtify PIM dashboard...

Satish Tadikonda, President and CEO of Virtify: "We have engineered Virtify PIM exclusively for life sciences companies, providing the necessary features and functions that will greatly simplify the submission of XML-based product labeling information... electronic authoring and publishing for PIM XML must integrate with existing translation processes and software. Successfully managing PIM complexity is not a trivial undertaking, especially for companies using manual processes and traditional document management systems. Virtify PIM is designed to support changes to the PIM standard without costly system redesign. This is made possible through Virtify's unique 'XML Rules Injection' technology which separates the rules engine from the core PIM application so that changes to rules can be easily uploaded without redesigning the entire software system. This approach provides organizations with a great deal of flexibility and control over the timing and scheduling of rules updates, significantly reduces software maintenance and validation costs, and provides rapid compliance with evolving business rules..."

See also: PIM (Product Information Management) description


CloudBees Introduces Hudson-as-a-Service
Ian Roughley, InfoQ

"CloudBees has introduced its fist PaaS offering: Husdon-as-a-Service (HaaS) that liberates the continuous building and testing of projects into the cloud. [Wikipedia: "Hudson is a continuous integration tool written in Java, which runs in a servlet container, such as Apache Tomcat or the GlassFish application server. It supports SCM tools including CVS, Subversion, Git and Clearcase and can execute Apache Ant and Apache Maven based projects, as well as arbitrary shell scripts and Windows batch commands."]

"By utilizing elastic server resources in the cloud as needed, workloads needed to building projects can be better assign, resulting in reduced build times. CloudBees HaaS works with existing GIT or SVN repsoitories, or CloudBees can provide you with a private and secure SVN or GIT repository, as well as a Maven repository."

From the CloudBees HaaS web site: "Any number of Hudson jobs can run in parallel, thanks to the unique dynamic Hudson build agents provisioning feature. Build agents are dedicated for the duration of the build, so you can perform arbitrary operations, for example starting servers on non-privileged ports... On every Hudson build agent you get, your Hudson workspace will be current so you do not have to redo a fresh checkout for each job run; this means your jobs execute much faster and at a vastly reduced cost...

You can browse your build history, workspace and artifacts at all times on your Hudson master. We provide secure and dedicated Maven snapshot and release repositories which we have integrated with a Hudson plugin so that dependent Maven builds can access upstream artifacts even across multiple build agents. All build agents currently provide multiple versions of Ant, Maven & Sun JDK. New tools are easy for us to install given sufficient demand, but you can also install tools into your workspace, and they will be cached for subsequent builds. We automatically generate a private/public key for your Hudson instance which all build agents will be provisioned with, and you can install the public key on any servers that your builds need access to..."
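For orientation, jobs on a Hudson master such as the hosted one described above can also be triggered remotely over HTTP; Hudson exposes a /job/<name>/build endpoint. A minimal sketch: the host and job name below are invented, and a real instance may require authentication:

    // Queue a build on a (hypothetical) hosted Hudson master.
    fetch("https://hudson.example.org/job/my-app/build", { method: "POST" })
      .then(res => console.log("build queued, HTTP status:", res.status));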

See also: the HaaS service description


Erlang: Effective for Multicore CPUs and Networked Applications
Joe Armstrong, Communications of the ACM Contributed Article

"Erlang is a concurrent programming language designed for programming fault-tolerant distributed systems at Ericsson and has been (since 2000) freely available subject to an open-source license. More recently, we've seen renewed interest in Erlang, as the Erlang way of programming maps naturally to multicore computers. In it the notion of a process is fundamental, with processes created and managed by the Erlang runtime system, not by the underlying operating system.

All Erlang processes are isolated from one another and in principle are 'thread safe.' When Erlang applications are deployed on multicore computers, the individual Erlang processes are spread over the cores, and programmers do not have to worry about the details. The isolated processes share no data, and polymorphic messages can be sent between processes. In supporting strong isolation between processes and polymorphism, Erlang could be viewed as extremely object-oriented though without the usual mechanisms associated with traditional OO languages.
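Erlang's share-nothing processes have no direct analogue in most mainstream languages, but Node.js worker threads can sketch the model just described: isolated workers that share no mutable state and interact only by sending messages. A rough analogue only, not Erlang semantics:

    import { Worker, isMainThread, parentPort } from "worker_threads";

    if (isMainThread) {
      const worker = new Worker(__filename);     // spawn an isolated "process"
      worker.on("message", (m: unknown) => {     // receive its reply
        console.log("reply:", m);
        worker.terminate();
      });
      worker.postMessage({ ping: 1 });           // communicate only by messages
    } else {
      parentPort!.on("message", (m: { ping: number }) =>
        parentPort!.postMessage({ pong: m.ping }));
    }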

One problem area in Internet applications where Erlang has found notable success is implementing instant-messaging systems. An IM system looks at first approximation very much like a telephone exchange. IM and telephone exchanges must both handle very large numbers of simultaneous transactions, each involving communication with a number of simultaneously open channels. The work involved in parsing and processing the data on any one channel is small, but handling many thousands of simultaneous channels is a technical challenge. Erlang's usefulness in IM is demonstrated by three projects: MochiWeb, Ejabberd (an Erlang implementation of the XMPP protocol), and RabbitMQ...

In traditional databases, data is stored in rectangular tables, where the items in a table are instances of simple types (such as integers and strings). Such storage is not particularly convenient for storing an associative array or arbitrary tree-like structure. Examples of the former are JavaScript JSON data structures (called hashes in Perl and Ruby and maps in C++ and Java) and of the latter XML parse trees. These objects are difficult to store in a regular tabular structure. Erlang has for a long time had its own database, called mnesia, that includes table storage but allows any item in a table cell to also be an arbitrary Erlang data structure. Databases implemented in Erlang are particularly well-suited for such storage, especially when they interface with some form of communicating agent..."

See also: the Wikipedia article for Erlang


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
ISIS Papyrus http://www.isis-papyrus.com
Microsoft Corporation http://www.microsoft.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2010-09-15.html
Robin Cover, Editor: robin@oasis-open.org