A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover
This issue of XML Daily Newslink is sponsored by:
Oracle Corporation http://www.oracle.com
Headlines
- Public Review for OASIS Web Services Quality Factors Version 1.0
- Last Call: An Architecture for Network Management using NETCONF and YANG
- Oracle Identity Management 11g Advances Application Security
- Lily: Cloud-Scalable NoSQL-Based Content Store And Search Repository
- IETF Recharters Common Authentication Technology Next Generation WG
- Google Gets Semantic with Metaweb Acquisition
- International Workshop on Cloud Privacy, Security, Risk, and Trust
- From Indiana University: XML Metadata Concept Catalog (XMC Cat)
- SMIL 3: Open Source Tools and Techniques for Synchronized Multimedia
Public Review for OASIS Web Services Quality Factors Version 1.0
Eunju Kim, Yongkon Lee, Yeongho Kim (et al, eds), OASIS PRD
Members of the OASIS Web Services Quality Model Technical Committee have approved Committee Draft 02 of the specification Web Services Quality Factors Version 1.0 for public review through September 20, 2010. This OASIS TC is developing specifications addressing the quality of web services, including the Web Services Quality Model (WSQM), Web Services Quality Factors (WSQF), and the Web Services Quality Description Language (WS-QDL). WSQM is an overall model of web service quality, WSQF specifies the quality factors, and WS-QDL expresses the WSQM in the form of an XML schema.
The purpose of this document is to provide a standard for the quality factors of web services across their development, usage, and management. Web services have distinctive characteristics: they are service-oriented, network-based, bindable in various ways, loosely coupled, platform-independent, and based on standard protocols. As a result, a web service system requires its own quality factors, unlike installation-based software. For instance, because the quality of a web service can change in real time as the service provider changes, the real-time properties of web services are essential to describing web service quality.
This document presents the quality factors of web services with definitions, classifications, and sub-factors case by case. For each quality factor, related specifications are annexed with a brief explanation. This specification can be extended generally to the definition of quality for SOA and provide the foundation for quality in an SOA system...
Web services have distinctive characteristics that set them apart from installation-based software because of their service-oriented nature. The provider and consumer of a service may belong to different ownership domains, so there are many cases in which a service cannot meet the consumer's requirements with respect to service quality and content... the client and the server cannot guarantee proper operating performance. They may operate platform-independently, so more effort is required to guarantee interoperability between them. Even though web services are based on standard communication protocols, misinterpretation of those protocols can produce critically non-interoperable services... It is therefore necessary to derive quality items in consideration of these characteristics of web service quality throughout the overall web service lifecycle..."
See also: the announcement
Last Call: An Architecture for Network Management using NETCONF and YANG
Phil Shafer (ed), IETF Internet Draft
The Internet Engineering Steering Group (IESG) has received a request from the IETF NETCONF Data Modeling Language Working Group (NETMOD) to consider An Architecture for Network Management using NETCONF and YANG as an IETF Informational RFC. The IESG plans to make a decision in the next few weeks, and solicits final comments on this action; please send substantive comments to the IETF by 2010-08-05.
The IETF Network Configuration (NETCONF) Working Group was chartered "to produce a protocol for network configuration suitable for operators in today's highly interoperable networks. Operators from large to small have developed their own mechanisms, or used vendor-specific mechanisms, to transfer configuration data to and from a device and to examine device state information which may impact the configuration. Each of these mechanisms may differ in various aspects, such as session establishment, user authentication, configuration data exchange, and error responses... The NETCONF protocol uses XML for data encoding, because XML is a widely deployed standard supported by a large number of applications. The NETCONF protocol should be independent of the data definition language and data models used to describe configuration and state data..."
Document abstract: "NETCONF gives access to native capabilities of the devices within a network, defining methods for manipulating configuration databases, retrieving operational data, and invoking specific operations. YANG provides the means to define the content carried via NETCONF, both data and operations. Using both technologies, standard modules can be defined to give interoperability and commonality to devices, while still allowing devices to express their unique capabilities. This document describes how NETCONF and YANG help build network management applications that meet the needs of network operators...
YANG is a data modeling language for NETCONF. It allows the description of hierarchies of data nodes ('nodes') and the constraints that exist among them. YANG defines data models and how to manipulate those models via NETCONF protocol operations. Each YANG module defines a data model, uniquely identified by a namespace URI. These data models are extensible in a manner that allows tight integration of standard data models and proprietary data models. Models are built from organizational containers, lists of data nodes, and data nodes forming the leaves of the data tree... Since NETCONF content is encoded in XML, it is natural to use XML schema languages for its validation. To facilitate this, YANG offers a standardized mapping of YANG modules into Document Schema Description Languages (DSDL). DSDL is considered the best choice for this purpose because it addresses not only the grammar and datatypes of XML documents but also semantic constraints and rules for modifying the information set of a document..." [Note: the HTML version of this specification was produced with the IETF's "xml2rfc" single-source production tool, which converts an XML source file into text, HTML, nroff, expanded XML, etc.]
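Since NETCONF messages are plain XML, the shape of a request is easy to see in a few lines of code. The sketch below builds a <get-config> RPC with Python's ElementTree, using the standard NETCONF base-1.0 namespace; it only constructs the message, while a real client (e.g. over SSH) would also frame and send it.

```python
# Sketch: building a NETCONF <get-config> RPC as XML, the protocol's
# wire encoding. Message construction only; transport is out of scope.
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_get_config(message_id, source="running"):
    """Return a serialized <rpc><get-config> request for a datastore."""
    rpc = ET.Element("{%s}rpc" % NC, {"message-id": str(message_id)})
    get_config = ET.SubElement(rpc, "{%s}get-config" % NC)
    src = ET.SubElement(get_config, "{%s}source" % NC)
    ET.SubElement(src, "{%s}%s" % (NC, source))  # e.g. <running/>
    return ET.tostring(rpc, encoding="unicode")

msg = build_get_config(101)
print(msg)
```

A YANG module would then constrain what may appear inside the reply's <data> element, which is exactly the content/protocol split the document describes.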
See also: the HTML format generated by xml2rfc v1.35
Oracle Identity Management 11g Advances Application Security
Staff, Oracle Announcement
Oracle has announced the release of Oracle Identity Management 11g, providing "a complete, integrated and open set of best-of-breed components built on a common platform and engineered to deliver unparalleled integration both within and across the suite through a series of common components. As the industry's first Service-Oriented Security architecture, Oracle Identity Management 11g provides developers with shared services for identity administration and password management, strong authentication and authorization, workflow and auditing, thus radically simplifying application security. This services-based architecture is also designed to naturally extend to cloud computing environments, providing a single point of control for on-premise and off-premise applications and systems.
The entire Oracle Identity Management 11g product line is optimized to support the evolving needs of modern enterprises, such as cloud computing, with a unified, secure, easy-to-deploy set of identity management functions. In addition to delivering a services-based architecture, tighter integration and dynamic new user interfaces throughout the Suite, key enhancements include Oracle Identity Manager Version 11g, a full-featured identity administration and provisioning solution with integrated user and role administration, as well as Universal Delegated Administration based on fine-grained authorization policies, and self-service request and approval models based on open, flexible BPEL workflows.
Oracle Access Manager 11g supports Single Sign-On (SSO) for enterprise web applications, now providing in-memory session management based on Oracle Coherence. Additionally, SSO Security Zones support secure application boundaries. Oracle Adaptive Access Manager 11g supports enterprise fraud prevention with One Time Password Anywhere, which delivers one-time password support through short message service (SMS), Interactive Voice Response, email and instant messaging.
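One-time passwords delivered over SMS or email are commonly built on the HMAC-based HOTP scheme of RFC 4226 (or its time-based variant, TOTP). As an illustration of the underlying mechanism, and not of Oracle's actual implementation, here is a minimal HOTP generator:

```python
# Illustrative sketch (not Oracle's implementation): an RFC 4226 HOTP
# generator, the HMAC-based one-time-password scheme that products
# delivering OTPs over SMS, email, or IM commonly build on.
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Compute an HMAC-SHA1 one-time password for a moving counter."""
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test secret: ASCII "12345678901234567890"
print(hotp(b"12345678901234567890", 0))  # -> 755224 per the RFC's test vectors
```

Server and client share the secret and counter; a code is accepted once, which is what makes the password "one-time".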
Oracle Identity Analytics 11g now supports enterprise Compliance and Governance combining business intelligence and security, while running on a rich Identity Warehouse; features include Cert360—an intelligent 360 degree view of an organization's security and compliance health. The Oracle OpenSSO Fedlet and OpenSSO STS 11g provides for full integration and certification of Sun's Fedlet for rapid on-boarding of federation partners, as well as the Secure Token Service (STS) functionality of Sun Open SSO STS for identity propagation. The Oracle Enterprise Manager Grid Control Management Pack for Identity Management 11g offers advanced monitoring, diagnostics and performance management for all Oracle Identity Management 11g components..."
See also: the Oracle Identity Management web site
Lily: Cloud-Scalable NoSQL-Based Content Store And Search Repository
Steven Noels, Blog
Developers of the open source Lily Project have announced a Proof of Architecture Release. Lily fuses Apache HBase, the Google BigTable-inspired NoSQL column-oriented database, and SOLR, the industry-standard search engine running on top of Apache Lucene, and provides infinitely scalable storage and search for large content collections. The Lily content repository offers a rich and flexible content model, with strong versioning support, and a queue system that keeps SOLR indexes up to date with repository updates. The Lily content model has been academically validated and accommodates data mapped from various domains, such as rich hypermedia, HTML5, NewsML, MXF, CMIS, RDF and many more.
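The BigTable-style model that HBase exposes, and that Lily builds on, addresses values by row key and column family:qualifier, keeping multiple timestamped versions per cell. The toy sketch below illustrates that data model only; it is not Lily's or HBase's API.

```python
# Toy sketch of the BigTable/HBase data model: values addressed by
# (row key, "family:qualifier") with versions kept per timestamp.
# Purely illustrative -- not Lily's or HBase's actual API.
from collections import defaultdict

class ToyColumnStore:
    def __init__(self):
        # row -> column -> list of (timestamp, value), newest first
        self.rows = defaultdict(lambda: defaultdict(list))

    def put(self, row, column, value, ts):
        cells = self.rows[row][column]
        cells.append((ts, value))
        cells.sort(reverse=True)          # keep newest version first

    def get(self, row, column, max_versions=1):
        return self.rows[row][column][:max_versions]

store = ToyColumnStore()
store.put("doc1", "content:title", "Hello", ts=1)
store.put("doc1", "content:title", "Hello, world", ts=2)
print(store.get("doc1", "content:title"))  # newest version only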
The Lily Proof of Architecture release is made specifically for the audience Lily has been designed for: content technologists and developers of content applications such as WCMS, CMS, DAM, DMS, and RM, who are confronted with the lack of scale and reliability a relational DBMS back-end often exhibits when data and usage volumes explode. Lily has been specifically architected to be fully distributable, allowing it to run on large server farms or in the cloud, making use of such large-scale infrastructure to provide room for growth.
The Lily low-level, generic content repository will be accessible through industry-standard APIs such as CMIS, according to the FAQ document: 'academic validation taught us that we offer the primitives to provide backing for a CMIS-inspired model. However, the Lily repository model supports more than that, which means CMIS support will likely be an add-on requiring specific mapping of your Lily schema to the CMIS model. Ideally, we want to develop CMIS support together with technology partners in order to make it useful and practical'...
Google, Facebook, Amazon, Digg, and other established web properties didn't turn to classic enterprise technology (such as RDBMSs) to address their non-classical challenges of availability and scalability. Instead, they went to the core of the problem and invented novel theories, concepts, and solutions to cope with their enormous growth and the demand that followed... After careful consideration we made a selection (Apache HBase) that will serve as the foundation of Lily. Most importantly, the tool we selected addresses scalability and availability at the core of our product design, while still allowing us to deliver a product that can be installed on customer infrastructure. With this new underlying technology, scale will no longer be a challenge, but a welcome product opportunity..."
See also: the Lily FAQ document
IETF Recharters Common Authentication Technology Next Generation WG
Staff, IESG Secretary
The Internet Engineering Steering Group (IESG) Secretary has announced rechartering of the IETF 'Common Authentication Technology Next Generation (KITTEN) Working Group' in the Security Area of the IETF. Members of the Working Group plan to transition proposed SASL mechanisms as GSS-API mechanisms, including A SASL Mechanism for SAML and A SASL & GSS-API Mechanism for OpenID.
The Generic Security Services (GSS) API and Simple Authentication and Security Layer (SASL) provide various applications with a security framework for secure network communication. The purpose of the Common Authentication Technology Next Generation (Kitten) working group (WG) is to develop extensions/improvements to the GSS-API, shepherd specific GSS-API security mechanisms, and provide guidance for any new SASL-related submissions.
The KITTEN Working Group is chartered to specify the following extensions and improvements to the GSS-API: (1) Providing new interfaces for credential management, which include initializing credentials, iterating credentials, and exporting/importing credentials; (2) Specifying interface for asynchronous calls; (3) Defining interfaces for better error message reporting; (4) Providing a more programmer friendly GSS-API for application developers; this could include reducing the number of interface parameters, for example, by eliminating parameters which are commonly used with the default values...
The transition from SASL to GSS-API mechanisms will allow a greater set of applications to utilize said mechanisms with SASL implementations that support the use of GSS-API mechanisms in SASL. This WG should review proposals for new SASL and GSS-API mechanisms, but may take on work on such mechanisms only through a revision of this charter. The WG should also review non-mechanism proposals related to SASL and the GSS-API..." [Note 2010-07-26: A SASL Mechanism for OAuth]
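The SAML and OpenID mechanisms above are elaborate, but every SASL mechanism follows the same pattern of a client sending an initial response. As a toy illustration of that wire format, here is the simplest mechanism, PLAIN (RFC 4616), which packs authzid, authcid, and password NUL-separated; protocols such as SMTP and IMAP then base64-encode it on the wire.

```python
# Toy illustration of a SASL initial response, using the PLAIN
# mechanism of RFC 4616: authzid NUL authcid NUL password, which
# carrying protocols (SMTP, IMAP, ...) base64-encode on the wire.
import base64

def sasl_plain_initial_response(authcid, password, authzid=""):
    """Return the base64 wire form of a PLAIN initial response."""
    raw = "\0".join([authzid, authcid, password]).encode("utf-8")
    return base64.b64encode(raw).decode("ascii")

# Credentials taken from the example in RFC 4616
print(sasl_plain_initial_response("tim", "tanstaaftanstaaf"))
```

GSS-API mechanisms exchange tokens through the same SASL framing, which is what makes the SASL-to-GSS-API transition described above workable.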
See also: A SASL Mechanism for SAML
Google Gets Semantic with Metaweb Acquisition
Jim Rapoza, InformationWeek
"While most emerging technologies tend to happen quickly, one that has been 'emerging' for a really long time is the Semantic Web. However, Google's recent acquisition of Metaweb may be the signal that the Semantic Web has finally arrived... Most of the key standards, such as RDF for tagging, OWL for defining ontologies, and SPARQL for handling queries, are now in place. In recent years, Metaweb's Freebase has emerged as an example of the power of semantic technologies. Freebase creates structured data out of concepts from sites around the web, such as Wikipedia, and makes it very simple to query and use that data.
In spirit, Freebase is definitely a Semantic Web project, since it is concerned with the semantics of content on the web. However, it isn't a pure Semantic Web project: while it does use standards such as RDF and OWL, it does not natively use SPARQL as a query engine. In comparison, the similar DBpedia project does use SPARQL and most Semantic Web standards. But outside of these issues, Metaweb's Freebase is definitely semantic, and the fact that Google has acquired the company could signal an increased focus on the Semantic Web within Google..."
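The "structured data" both Freebase and DBpedia expose boils down to subject-predicate-object triples queried by pattern. A toy sketch of that idea, with invented data, in the spirit of a SPARQL basic graph pattern:

```python
# Toy sketch of semantic-web querying: RDF-style triples matched
# against a basic graph pattern, in the spirit of a SPARQL query like
#   SELECT ?film WHERE { ?film type Film . ?film director "Ridley Scott" }
# Data and names here are invented for illustration.
triples = [
    ("alien",        "type",     "Film"),
    ("alien",        "director", "Ridley Scott"),
    ("blade_runner", "type",     "Film"),
    ("blade_runner", "director", "Ridley Scott"),
    ("freebase",     "type",     "Database"),
]

def match(pattern):
    """Yield subjects satisfying every (predicate, object) constraint."""
    subjects = {s for s, _, _ in triples}
    for s in sorted(subjects):
        if all((s, p, o) in triples for p, o in pattern):
            yield s

films = list(match([("type", "Film"), ("director", "Ridley Scott")]))
print(films)  # -> ['alien', 'blade_runner']
```

A real SPARQL engine adds joins over variables, named graphs, and indexes, but the pattern-over-triples core is the same.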
According to the Google blog article: "Today, we've acquired Metaweb, a company that maintains an open database of things in the world. Working together we want to improve search and make the web richer and more meaningful for everyone... With efforts like rich snippets and the search answers feature, we're just beginning to apply our understanding of the web to make search better... In addition to our ideas for search, we're also excited about the possibilities for Freebase, Metaweb's free and open database of over 12 million things, including movies, books, TV shows, celebrities, locations, companies and more.
Google and Metaweb plan to maintain Freebase as a free and open database for the world. Better yet, we plan to contribute to and further develop Freebase and would be delighted if other web companies use and contribute to the data. We believe that by improving Freebase, it will be a tremendous resource to make the web richer for everyone. And to the extent the web becomes a better place, this is good for webmasters and good for users..."
See also: Jack Menzel's blog article
International Workshop on Cloud Privacy, Security, Risk, and Trust
Staff, Indiana University Workshop Announcement
"CPSRT 2010 (International Workshop on Cloud Privacy, Security, Risk, and Trust) will bring together a diverse group of academics and industry practitioners in an integrated state-of-the-art analysis of privacy, security, risk, and trust in the cloud. The workshop will address cloud issues specifically related to access control, trust, policy management, secure distributed storage and privacy-aware map-reduce frameworks. CPSRT 2010 will be held in conjunction with the Second IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2010), November 30 - December 3, 2010, at Indiana University, USA.
Cloud computing has emerged to address an explosive growth of web-connected devices, and handle massive amounts of data. It is defined and characterized by massive scalability and new Internet-driven economics. Yet, privacy, security, and trust for cloud computing applications are lacking in many instances and risks need to be better understood.
Privacy in cloud computing may appear straightforward, since one may conclude that as long as personal information is protected, it shouldn't matter whether the processing is in a cloud or not. However, there may be hidden obstacles such as conflicting privacy laws between the location of processing and the location of data origin. Cloud computing can exacerbate the problem of reconciling these locations if needed, since the geographic location of processing can be extremely difficult to find out, due to cloud computing's dynamic nature. Another issue is user-centric control, which can be a legal requirement and also something consumers want. However, in cloud computing, the consumers' data is processed in the cloud, on machines they don't own or control, and there is a threat of theft, misuse or unauthorized resale. Thus, it may even be necessary in some cases to provide adequate trust for consumers to switch to cloud services.
In the case of security, some cloud computing applications simply lack adequate security protection such as fine-grained access control and user authentication (e.g. Hadoop). Since enterprises are attracted to cloud computing due to potential savings in IT outlay and management, it is necessary to understand the business risks involved. If cloud computing is to be successful, it is essential that it is trusted by its users. Therefore, we also need studies on cloud-related trust topics, such as what are the components of such trust and how can trust be achieved, for security as well as for privacy..."
See also: the CloudCom 2010 web site
From Indiana University: XML Metadata Concept Catalog (XMC Cat)
Staff, Pervasive Technology Institute Announcement
The Indiana University Data to Insight Center (D2I) recently announced the release of XMC Cat, a new software tool that addresses a challenge today's scientists face: sorting and making sense of the massive amounts of data produced by advanced scientific instruments and supercomputers. D2I undertakes research to harness the vast stores of digital data being produced by modern computational resources, allowing scientists and companies to make better use of these data and find the important meaning that lies within them...
XMC Cat is a web service toolkit for capturing and storing metadata during the execution of scientific workflows to enable data discovery and reuse. Its advantages include adaptability to domain schemata through configuration instead of code changes, support for automatic capture of metadata through curation plugins, and search and browse capabilities through a web-based GUI that dynamically adjusts to the domain schema. This allows XMC Cat to be deployed in different scientific domains without requiring new code to be written. It is currently in use in the LEAD Science Gateway.
The LEAD project uses the Lead Metadata Schema (LMS) which is a profile of the FGDC standard for spatial metadata. To store metadata in XMC Cat based on the LMS, the schema is partitioned into concepts and these definitions are loaded into concept and element definition tables in XMC Cat. By partitioning the LMS into concepts and storing the definitions of those concepts as data, XMC Cat is loosely coupled to the LMS and can instead be easily adapted to different scientific schemas.
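The "schema partitioned into concepts" idea can be pictured as configuration driving the shredding of a metadata document into rows, so that supporting a new schema means loading new definitions rather than writing new code. The sketch below is a hedged illustration of that approach; the element names, concept names, and the CONCEPT_OF table are invented, standing in for XMC Cat's concept and element definition tables.

```python
# Hedged sketch of schema-partitioning-as-data: a configuration maps
# element names to concepts, so a metadata document is shredded into
# (concept, element, value) rows without schema-specific code.
# All element and concept names here are hypothetical.
import xml.etree.ElementTree as ET

# Stand-in for concept/element definition tables loaded as data.
CONCEPT_OF = {
    "title":   "Citation",
    "creator": "Citation",
    "north":   "SpatialDomain",
    "south":   "SpatialDomain",
}

def shred(xml_text):
    """Return (concept, element, value) rows for configured elements."""
    rows = []
    for elem in ET.fromstring(xml_text).iter():
        concept = CONCEPT_OF.get(elem.tag)
        if concept is not None and elem.text:
            rows.append((concept, elem.tag, elem.text.strip()))
    return rows

doc = "<metadata><title>Storm run 42</title><north>41.5</north></metadata>"
print(shred(doc))
```

Adapting to a different scientific schema then amounts to replacing the mapping table, which is the loose coupling the LEAD deployment relies on.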
The LEAD project found that much of the metadata of greatest value to domain scientists during the data discovery process is domain-specific and not directly defined in the LMS. Instead, the FGDC standard that the LMS extends contains an Entity and Attribute section that allows for the description of domain-specific entities contained in the data. The documentation for the FGDC describes spatial entities such as roads, which in turn have certain attributes used to describe them..."
See also: the XMC Cat news item
SMIL 3: Open Source Tools and Techniques for Synchronized Multimedia
Colin Beckingham, IBM developerWorks
"Synchronized multimedia plays an important role in modern communications strategies. In broad strokes, the coordinated and ordered presentation of video, audio, still images, text, and other elements offers a dynamic, alternative, and editable approach in a world where the competition for an audience is intense. In addition, the presentation of elements in parallel can appeal to several different audiences at the same time.
In complex situations, a comprehensive development package is needed. Synchronized Multimedia Integration Language (SMIL) is a W3C specification that expresses the required instructions for this kind of presentation in XML format which allows for great complexity and sophistication: starting, stopping, overlapping, and interleaving components as required. The SMIL specification was expanded to Version 3, adding even more interesting techniques and flexibility to enable multimedia producers to compete more effectively in grabbing the attention of all those eyeballs and ears.
Right now, in the context of SMIL 3, you can use a plain-text editor for editing and Ambulant as a player, or you can use your own preferred tools... Ambulant is an open source player that is quite close to a complete SMIL 3 implementation. The program is open source and available on the three major platforms: Linux, Microsoft Windows, and Mac OS X. Ambulant comes in two formats: stand-alone and browser plug-in. A developer should choose the stand-alone version. Even though browser plug-ins can end up as the user engine of choice, the stand-alone version has much better reporting and debug capabilities...
SMIL 3 is an effort to add useful functionality and modularize the engine for use on a wide variety of platforms in preparation for a world of many devices with different sizes, types, and capabilities. In a production environment, developers need complete and reliable tools, but in the meantime, Ambulant is good as a learning aid for SMIL 3..."
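The starting, stopping, overlapping, and interleaving the article describes come down to SMIL's two timing containers: <seq> plays its children one after another, <par> plays them simultaneously. A minimal sketch generating such a document with ElementTree follows; the media file names are placeholders, and the SMIL 3.0 namespace used is an assumption worth checking against the W3C specification.

```python
# Minimal sketch of SMIL's timing model: <seq> plays children in
# order, <par> plays them together. Media src values are placeholders;
# the namespace is assumed to be the SMIL 3.0 one from the W3C spec.
import xml.etree.ElementTree as ET

SMIL_NS = "http://www.w3.org/ns/SMIL"
ET.register_namespace("", SMIL_NS)   # serialize without a prefix

def q(tag):
    return "{%s}%s" % (SMIL_NS, tag)

smil = ET.Element(q("smil"), {"version": "3.0"})
body = ET.SubElement(smil, q("body"))
seq = ET.SubElement(body, q("seq"))                      # sequential
par = ET.SubElement(seq, q("par"))                       # parallel
ET.SubElement(par, q("video"), {"src": "intro.mp4"})     # placeholder media
ET.SubElement(par, q("audio"), {"src": "narration.ogg"})
ET.SubElement(seq, q("img"), {"src": "credits.png", "dur": "5s"})

print(ET.tostring(smil, encoding="unicode"))
```

The resulting file is exactly the kind of plain-text SMIL document the article suggests editing by hand and playing back in Ambulant.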
See also: the W3C Synchronized Multimedia Integration Language (SMIL 3.0) specification
Sponsors
XML Daily Newslink and Cover Pages sponsored by:
IBM Corporation | http://www.ibm.com |
ISIS Papyrus | http://www.isis-papyrus.com |
Microsoft Corporation | http://www.microsoft.com |
Oracle Corporation | http://www.oracle.com |
Primeton | http://www.primeton.com |
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/