XML Daily Newslink. Wednesday, 24 November 2010

Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation http://www.microsoft.com



IETF Draft: Session Initiation Protocol (SIP) Event Package for the Common Alerting Protocol (CAP)
Brian Rosen, Henning Schulzrinne, Hannes Tschofenig (eds), IETF Internet Draft

A revised version of the IETF Standards Track specification Session Initiation Protocol (SIP) Event Package for the Common Alerting Protocol (CAP) has been published.

"The Common Alerting Protocol (CAP) is an XML document format for exchanging emergency alerts and public warnings. The abstract architectural description for the distribution of alerts can be found in the IETF I-D Requirements, Terminology and Framework for Exigent Communications. This document specifies how CAP documents are distributed via the event notification mechanism available with the Session Initiation Protocol (SIP). Additionally, a MIME object is registered to allow CAP documents to be exchanged in other SIP messages.

The 'common-alerting-protocol' Event Package: RFC 3265 defines a SIP extension for subscribing to remote nodes and receiving notifications of changes (events) in their states. It leaves the definition of many aspects of these events to concrete extensions, i.e., event packages. This document defines such a new "common-alerting-protocol" event package. RFC 3903 defines an extension that allows SIP User Agents to publish event state. Event Publication Agents (EPA) use PUBLISH requests to inform an Event State Compositor (ESC) of changes in the "common-alerting-protocol" event package. Acting as a notifier, the ESC notifies subscribers about emergency alerts and public warnings.

Document Section 7 identifies five issues that require further discussion: (1) Rate Control: The -00 version of the document introduced rate control for notifications in Section 3.3.3 (Rate Control). Is this functionality needed? (2) Early Warning Service URNs: Specifying services is always difficult since there is no universally agreed service semantic. This document contains a proposal that re-uses the classification in the CAP specification; is the proposal acceptable? (3) Event Filter: By using RFC 4660 filters in the body of a SUBSCRIBE, the number of notifications can be reduced to those of interest to the subscriber. There is a certain overhead associated with the generic usage of those event filters. Should alternatives be considered? (4) Forked SUBSCRIBE Requests: This document allows forked SUBSCRIBE requests. This is useful when a single service is offered by more than one entity and is therefore related to the cases discussed in I-D.forte-lost-extensions... For example, imagine a warning service like 'urn:service:warning.geo' that is advertised by a number of different service providers. (5) Security: The security considerations section was rewritten and now focuses mostly on two types of attacks, namely amplification and forgery. Does this reflect the understanding of the group?..."
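
Since CAP itself is an XML format, the payload carried in such notifications and PUBLISH bodies is an ordinary XML document. As a rough illustration only (the identifier, sender, and alert details below are invented, not taken from the draft), a minimal CAP 1.2 alert looks roughly like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Illustrative CAP 1.2 alert; identifier, sender, and values are invented -->
    <alert xmlns="urn:oasis:names:tc:emergency:cap:1.2">
      <identifier>example-2010-11-24-0001</identifier>
      <sender>alerts@example.gov</sender>
      <sent>2010-11-24T12:00:00-05:00</sent>
      <status>Actual</status>
      <msgType>Alert</msgType>
      <scope>Public</scope>
      <info>
        <category>Met</category>
        <event>Flood Warning</event>
        <urgency>Expected</urgency>
        <severity>Moderate</severity>
        <certainty>Likely</certainty>
        <headline>Flood warning for the example river basin</headline>
      </info>
    </alert>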

See also: the IETF Authority-to-Citizen Alert (ATOCA) Working Group


The Strongest Link: Libraries and Linked Data
Gillian Byrne and Lisa Goddard, D-Lib Magazine

"Since 1999 the W3C has been working on a set of Semantic Web standards that have the potential to revolutionize web search. Also known as Linked Data, the Machine-Readable Web, the Web of Data, or Web 3.0, the Semantic Web relies on highly structured metadata that allow computers to understand the relationships between objects. Semantic web standards are complex, and difficult to conceptualize, but they offer solutions to many of the issues that plague libraries, including precise web search, authority control, classification, data portability, and disambiguation.

This article will outline some of the benefits that linked data could have for libraries, will discuss some of the non-technical obstacles that we face in moving forward, and will finally offer suggestions for practical ways in which libraries can participate in the development of the semantic web.

As Linked Data initiatives proliferate there has, unsurprisingly, been increased debate about exactly what we mean when we refer to Linked Data and the Semantic Web. Are the phrases interchangeable? Do they refer to a specific set of standards? A specific technology stack?

For the purposes of this paper we use the term 'Semantic Web' to refer to a full suite of W3C standards including RDF, the SPARQL query language, and the OWL Web Ontology Language. As for 'Linked Data', we will accept the two-part definition offered by the research team at Freie Universität Berlin: 'The Web of Data is built upon two simple ideas: First, to employ the RDF data model to publish structured data on the Web. Second, to [use http URIs] to set explicit RDF links between data items within different data sources'. We can see from this definition that Linked Data has two distinct aspects: exposing data as RDF, and linking RDF entities together..."
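
As a purely illustrative sketch of those two ideas (the library-minted URI is hypothetical; only the vocabulary namespaces and the DBpedia resource are real), a library might expose an author record as RDF and link it to the same entity in another data source:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Illustrative RDF/XML; the library.example.org URI is invented -->
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:owl="http://www.w3.org/2002/07/owl#"
             xmlns:foaf="http://xmlns.com/foaf/0.1/">
      <rdf:Description rdf:about="http://library.example.org/authority/person/woolf-virginia">
        <foaf:name>Virginia Woolf</foaf:name>
        <!-- An explicit RDF link to the same entity in another data source -->
        <owl:sameAs rdf:resource="http://dbpedia.org/resource/Virginia_Woolf"/>
      </rdf:Description>
    </rdf:RDF>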

See also: Wikipedia on Linked Data


Long Live the Web: A Call for Continued Open Standards and Neutrality
Tim Berners-Lee, Scientific American

"The Web is critical not merely to the digital revolution but to our continued prosperity — and even our liberty. Like democracy itself, it needs defending... [When] the world wide web went live, on my physical desktop in Geneva, Switzerland, in December 1990, it consisted of one Web site and one browser, which happened to be on the same computer. The simple setup demonstrated a profound concept: that any person could share information with anyone else, anywhere. In this spirit, the Web spread quickly from the grassroots up. Today, at its 20th anniversary, the Web is thoroughly integrated into our daily lives. We take it for granted, expecting it to 'be there' at any instant, like electricity...

The Web as we know it, however, is being threatened in different ways. Some of its most successful inhabitants have begun to chip away at its principles. Large social-networking sites are walling off information posted by their users from the rest of the Web. Wireless Internet providers are being tempted to slow traffic to sites with which they have not made deals.

Why should you care? Because the Web is yours. It is a public resource on which you, your business, your community and your government depend. The Web is also vital to democracy, a communications channel that makes possible a continuous worldwide conversation. The Web is now more critical to free speech than any other medium. It brings principles established in the U.S. Constitution, the British Magna Carta and other important documents into the network age: freedom from being snooped on, filtered, censored and disconnected...

Several principles are key to assuring that the Web becomes ever more valuable. The primary design principle underlying the Web's usefulness and growth is universality. When you make a link, you can link to anything. That means people must be able to put anything on the Web, no matter what computer they have, software they use or human language they speak and regardless of whether they have a wired or wireless Internet connection... Decentralization is another important design feature. You do not have to get approval from any central authority to add a page or make a link. All you have to do is use three simple, standard protocols: write a page in the HTML (hypertext markup language) format, name it with the URI naming convention, and serve it up on the Internet using HTTP (hypertext transfer protocol). Decentralization has made widespread innovation possible and will continue to do so in the future... A great example of future promise, which leverages the strengths of all the principles, is linked data... The goal of the Web is to serve humanity. We build it now so that those who come to it later will be able to create things that we cannot ourselves imagine."

See also: the 2006 paper on linked data


James Clark Blogs on XML vs the Web
James Clark, Random Thoughts Blog

"Twitter and Foursquare recently removed XML support from their Web APIs, and now support only JSON. This prompted Norman Walsh to write an interesting post, in which he summarised his reaction as 'Meh'. I won't try to summarise his post; it's short and well-worth reading. From one perspective, it's hard to disagree. If you're an XML wizard with a decade or two of experience with XML and SGML before that, if you're an expert user of the entire XML stack (e.g., XQuery, XSLT2, schemas), if most of your data involves mixed content, then JSON isn't going to be supplanting XML any time soon in your toolbox.

Personally, I got into XML not to make my life as a developer easier, nor because I had a particular enthusiasm for angle brackets, but because I wanted to promote some of the things that XML facilitates, including: (1) textual (non-binary) data formats; (2) open standard data formats; (3) data longevity; (4) data reuse; (5) separation of presentation from content. If other formats start to supplant XML, and they support these goals better than XML, I will be happy rather than worried...

From this perspective, my reaction to JSON is a combination of 'Yay' and 'Sigh'. It's 'Yay', because for important use cases JSON is dramatically better than XML. In particular, JSON shines as a programming language-independent representation of typical programming language data structures. This is an incredibly important use case and it would be hard to overstate how appallingly bad XML is for this. The fundamental problem is the mismatch between programming language data structures and the XML element/attribute data model. This leaves the developer with three choices, all unappetising...
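
A small sketch of that mismatch (our own illustration, not taken from Clark's post): a record holding a name and a list of tags has one obvious JSON spelling, but several equally plausible XML spellings, and nothing in the markup itself says whether a value is a string, a number, or a list:

    <!-- As JSON the record would simply be:
         {"name": "example", "tags": ["xml", "json"]}     -->
    <!-- Two of several equally plausible XML encodings: -->
    <item name="example" tags="xml json"/>

    <item>
      <name>example</name>
      <tags>
        <tag>xml</tag>
        <tag>json</tag>
      </tags>
    </item>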

So what's the way forward? I think the Web community has spoken, and it's clear that what it wants is HTML5, JavaScript and JSON. XML isn't going away but I see it being less and less a Web technology; it won't be something that you send over the wire on the public Web, but just one of many technologies that are used on the server to manage and generate what you do send over the wire. In the short-term, I think the challenge is how to make HTML5 play more nicely with XML. In the longer term, I think the challenge is how to use our collective experience from building the XML stack to create technologies that work natively with HTML, JSON and JavaScript, and that bring to the broader Web developer community some of the good aspects of the modern XML development experience..."

See also: the posting from Norm Walsh


Intellegere Foundation Awarded Grant for Ontology Support
Pete Nielsen, Intellegere Foundation Announcement

"The Intellegere Foundation was recently awarded a three year grant for organizing and hosting a state-of-the-art/state-of-the-practice series of 'Ontology Summits' at the National Institute of Standards and Technology (NIST). The Ontology Summits at NIST are making significant contributions to technology development and important new application implementations. This grant provides for continuing this success.

Ontology generally includes study of the basic categories of being and their relations. Traditionally listed as a part of the major branch of philosophy known as metaphysics, ontology deals with questions concerning how entities can be grouped, related within a hierarchy, and subdivided according to similarities and differences. 'Ontologies are already helping to shape the future of artificial intelligence and computer science,' according to Jim Shultz, Intellegere Director and member of the Smart Grid Interoperability Panel...

Ontology is increasingly being used in computer science and information technology in the formal representation of knowledge as a set of concepts within a domain, and the relationships between those concepts. It is used to reason about the entities within that domain, and to describe the domain... Ontologies are used in artificial intelligence, the Semantic Web, systems engineering, software engineering, biomedical informatics, library science, enterprise bookmarking, and information architecture as a form of knowledge representation about the world or some part of it. The creation of domain ontologies is also fundamental to the definition and use of an enterprise architecture framework.
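
As a small, hypothetical sketch of 'concepts within a domain and the relationships between those concepts' (the example.org vocabulary below is invented; only the W3C namespaces are real), an ontology fragment in OWL/RDF might declare two classes, a subclass relationship, and a property relating their instances:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Illustrative ontology fragment; the example.org terms are invented -->
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
             xmlns:owl="http://www.w3.org/2002/07/owl#">
      <owl:Class rdf:about="http://example.org/onto#GridComponent"/>
      <!-- Transformer is a kind of GridComponent -->
      <owl:Class rdf:about="http://example.org/onto#Transformer">
        <rdfs:subClassOf rdf:resource="http://example.org/onto#GridComponent"/>
      </owl:Class>
      <!-- A relationship between concepts in the domain -->
      <owl:ObjectProperty rdf:about="http://example.org/onto#connectedTo">
        <rdfs:domain rdf:resource="http://example.org/onto#GridComponent"/>
        <rdfs:range rdf:resource="http://example.org/onto#GridComponent"/>
      </owl:ObjectProperty>
    </rdf:RDF>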

Since 2006, this Ontology Summit series has played a key role in bringing the global ontology community together around certain priority issues and challenges. Organized using the latest collaboration tools as a virtual meeting of several months' duration, followed by a two-day symposium hosted by the National Institute of Standards and Technology (NIST) on-site in Gaithersburg, MD, the Ontology Summit has succeeded in bringing key issues to the table and crystallizing areas of common agreement. Each summit has been successful in forging consensus so that the broader technical community can gain a sense of how ontologies can help society, articulating the position of the ontology community broadly speaking, and catalyzing a series of funded technology projects..."

See also: the Intellegere Foundation


Virtualization and Cloud Technologies Add Complexity to Disaster Recovery
Staff, Symantec Report

"Symantec Corp has announced the global results of its sixth annual Symantec Disaster Recovery Study, which demonstrates the growing challenge of managing disparate virtual, physical and cloud resources because of added complexity for organizations protecting and recovering mission critical applications and data. In addition, the study shows that virtual systems are not properly protected.

The study highlights that nearly half (44 percent) of data on virtual systems is not regularly backed up and only one in five respondents use replication and failover technologies to protect virtual environments. Respondents also indicated that 60 percent of virtualized servers are not covered in their current disaster recovery (DR) plans. This is up significantly from the 45 percent reported by respondents in 2009.

The sixth annual Symantec Disaster Recovery Study demonstrates the challenges data center managers face in managing disparate virtual, physical and cloud resources. These ever-changing resources add complexity for organizations protecting and recovering mission-critical applications and data. In fact, the data show that virtual machines are not properly protected, due to resource and other storage constraints that hamper backups. The study also found a huge gap between how fast organizations think they can recover and how fast they actually do. In addition, organizations still experience more downtime than they should from basic causes such as system upgrades, power outages and cyberattacks. Finally, the study shows significant improvements in disaster recovery testing frequency; however, disruption to employees, sales and revenue is still high.

Symantec's recommendations: (1) Treat all environments the same: Ensure that mission-critical data and applications are treated the same across environments (virtual, cloud, physical) in terms of DR assessments and planning. (2) Use integrated tool sets: Using fewer tools to manage physical, virtual and cloud environments helps organizations save time and training costs and better automate processes. (3) Simplify data protection processes: Embrace low-impact backup methods and deduplication to ensure that mission-critical data in virtual environments is backed up and efficiently replicated off campus. (4) Plan and automate to minimize downtime: Prioritize planning activities and tools that automate and perform processes which minimize downtime during system upgrades. (5) Identify issues earlier: Implement solutions that detect issues, reduce downtime and recover faster, so that recovery times are more in line with expectations. (6) Don't cut corners: Organizations should implement basic technologies and processes that protect in case of an outage, and not take shortcuts that will have disastrous consequences..."

See also: the text of the Symantec report


Apache Tuscany SCA Java 2.0-Beta-1 Released
Simon Laws, Apache Software Foundation Announcement

"The Apache Tuscany team is pleased to announce the 2.0-Beta1 release of the Java SCA 2.0 project. Apache Tuscany/SCA provides a runtime environment based on Service Component Architecture (SCA), which is a set of OASIS specifications aimed at simplifying SOA application development. This is the first beta release on our way to a full 2.0 release. We've made the move from milestone to beta releases as, at the time of the branch being taken for this release, the Tuscany 2.0 SCA Java runtime supported all of the mandatory features as defined in the OASIS SCA specifications.

The OASIS specifications include: (1) SCA Assembly Model V1.1; (2) SCA Policy Framework V1.1; (3) SCA Java Common Annotations and APIs V1.1; (4) SCA Java Component Implementation V1.1; (5) SCA Web Services Binding V1.1; (6) SCA JMS Binding V1.1."

Apache Tuscany "simplifies the task of developing SOA solutions by providing a comprehensive infrastructure for SOA development and management that is based on Service Component Architecture (SCA) standard. With SCA as it's foundation, Tuscany offers solution developers several advantages. It provides a model for creating composite applications by defining the services in the fabric and their relationships with one another. The services can be implemented in any technology.

Tuscany enables service developers to create reusable services that only contain business logic. Protocols are pushed out of business logic and are handled through pluggable bindings. This lowers development cost. Applications can easily adapt to infrastructure changes without recoding, since protocols are handled via pluggable bindings and qualities of service (transactions, security) are handled declaratively. Existing applications can work with new SCA compositions. This allows for incremental growth towards a more flexible architecture, outsourcing or providing services to others..."
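
A hedged sketch of how this looks in practice (the component name, Java class, and endpoint below are invented for illustration, not taken from the Tuscany release): an SCA composite wires a component's business-logic implementation to a declaratively chosen binding, so the protocol can be swapped without touching the code:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Illustrative SCA composite; names, class, and URI are invented -->
    <composite xmlns="http://docs.oasis-open.org/ns/opencsa/sca/200912"
               targetNamespace="http://example.com/orders"
               name="OrderComposite">
      <component name="OrderServiceComponent">
        <!-- Business logic only; protocol concerns live in the binding below -->
        <implementation.java class="com.example.orders.OrderServiceImpl"/>
        <service name="OrderService">
          <!-- Swap binding.ws for another binding without changing the implementation -->
          <binding.ws uri="http://localhost:8080/OrderService"/>
        </service>
      </component>
    </composite>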

See also: the OASIS Open Composite Services Architecture (CSA) Member Section


Establishing Trust in Cloud Computing
Khaled M. Khan and Qutaibah Malluhi, IEEE IT Professional

"[...] Unfortunately, the adoption of cloud computing came before the appropriate technologies appeared to tackle the accompanying challenges of trust. This gap between adoption and innovation is so wide that cloud computing consumers don't fully trust this new way of computing. To close this gap, we need to understand the trust issues associated with cloud computing from both a technology and business perspective.

Security plays a central role in preventing service failures and cultivating trust in cloud computing. In particular, cloud service providers need to secure the virtual environment, which enables them to run services for multiple clients and offer separate services for different clients. In the context of virtualization, the key security issues include identity management, data leakage (caused by multiple tenants sharing physical resources), access control, virtual machine (VM) protection, persistent client-data security, and the prevention of cross-VM side-channel attacks.

Vendors and research communities are working to address these cloud-specific security concerns. For example, Intel's SOA Expressway claims to enforce persistent security on client data by extending the perimeter of enterprises into the cloud provider (so the enterprises retain a certain amount of control over the computing tasks and data consigned to the cloud). VMware's VMsafe API provides VM security protection at the host level, and its VMotion capabilities can dynamically move VMs between physical devices as required...

Even when data is physically spread out, stored in various remote locations, and processed by remote machines and software, the data owner could retain control of these activities using an approach similar to LongArm. LongArm is a modular, stand-alone remote control system designed to provide single or multiple workstation control of a wide variety of devices, such as radios, antenna switches, weather stations, and uninterruptible power supplies..."

See also: LongArm Remote Control Software


Opera 11 Beta Launches
Darryl K. Taft, eWEEK

"Opera Software has released the first beta of Opera 11, with support for tab stacking. In a November 23, 2010 announcement, Opera introduced tab stacking as a better way to organize your open tabs. Traditionally, tabs were opened side-by-side, but now Opera users can stack their tabs, grouping them by site or by theme. Tab stacking reduces clutter and makes it easier to identify and work with sets of open tab.

Opera has included its mouse gestures technology in Opera 11. Mouse gestures provide a simple and effective way to control Opera with a few mouse movements. Opera initially introduced mouse gestures in Opera 5; now, in Opera 11, a new visual interface highlights mouse paths and helps guide the discovery, use and mastery of these shortcuts..."

From the Opera 11 Beta web site: "In Opera 11, tab stacking lets you drag one tab over another to create a group. Now, you can keep dozens of web pages open, organized and under control. A safer address field: Opera's new address field hides the complexity of long web addresses and gives you better control of your security when browsing. Click on the badge for the website to see information about the site you are visiting. You can even get information about Opera Turbo data savings. Extensions support: You can now browse Opera's extensions catalog to add new functionality easily and customize Opera just how you want it...

As for visual mouse gestures: Mouse gestures are another Opera innovation that has been made easier to use with the addition of an interface that guides you. This allows new users to discover the speed and power that mouse gestures offer. Opera 11 also provides better performance: developers have been hard at work fine-tuning our browser engine to put Opera even further ahead in a number of benchmarks. In Opera 11, pages load faster and complex applications run more smoothly. There is enhanced HTML5 support: Support for new standards and HTML5 technologies means that rich, dynamic web applications and multiplayer games can be supported by Opera 11... A new auto-update system ensures that your extensions and Opera Unite apps are always up to date with the latest enhancements. Search suggestions predict queries as you type, making searching quicker and easier; Google search predictions are now built into Opera... Enhanced email in your browser: A new mail panel gives you control over the order in which your accounts and mail items show up. You can just drag items where you want them. The mail panel can also show when you are using mail and hide when you leave a mail tab... Faster installation: Even with its many new features, Opera 11 is 30% smaller than Opera 10.60..."

See also: the Opera 11 beta description


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation http://www.ibm.com
ISIS Papyrus http://www.isis-papyrus.com
Microsoft Corporation http://www.microsoft.com
Oracle Corporation http://www.oracle.com
Primeton http://www.primeton.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/



Document URI: http://xml.coverpages.org/newsletter/news2010-11-24.html
Robin Cover, Editor: robin@oasis-open.org