XML Daily Newslink. Tuesday, 06 July 2010

A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS and Sponsor Members
Edited by Robin Cover


This issue of XML Daily Newslink is sponsored by:
Microsoft Corporation http://www.microsoft.com



NIST Interagency Report: Forensics Web Services (FWS)
Pat O'Reilly, NIST Computer Security Division Announcement

The U.S. National Institute of Standards and Technology (NIST) Computer Security Division, Information Technology Laboratory, has announced the publication of Forensics Web Services (FWS) as NIST Interagency Report (NISTIR) 7559. Edited by Anoop Singhal, Murat Gunestas, and Duminda Wijesekera, the document "proposes a design and architecture for forensic web services (FWS) that would securely maintain transactional records between other web services. These secure records can be re-linked by an independent agency to reproduce the transactional history.

The advance of Web services technologies promises to have far-reaching effects on the Internet and enterprise networks. Web services based on the Extensible Markup Language (XML), Simple Object Access Protocol (SOAP), and related open standards, and deployed in Service Oriented Architectures (SOA) allow data and applications to interact without human intervention through dynamic and ad hoc connections. Web services technology can be implemented in a wide variety of architectures, can co-exist with other technologies and software design approaches, and can be adopted in an evolutionary manner without requiring major transformations to legacy applications and databases.

In web services, service-level composition techniques create complex interdependencies between services belonging to different organizations, and these can be exploited through localized or compositional flaws. Such exploits or attacks can therefore affect multiple servers and organizations, resulting in financial loss or infrastructural damage. Investigating such incidents requires that dependencies between service invocations be retained in a way that is secure and neutral to the participating parties. Material evidence currently extractable from web servers, such as log records and firewall alerts from end-point services, does not have forensic value, because defendants can claim that they did not send a given message. In this report, we describe a participant-neutral solution: a forensically valid evidence-gathering mechanism for web services.

This document proposes a Forensic Web Services framework that provides a service to other web services by logging their invocations. The design shows how the collected logs can produce a body of digital evidence that exposes an attack."
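
The core idea, a neutral third party keeping tamper-evident, re-linkable records of each service invocation, can be sketched briefly. The following Python fragment is a simplified illustration only, not the design from NISTIR 7559: the service names are invented, and an HMAC over a hash chain stands in for the pair-wise digital signatures a real FWS would employ.

    import hashlib, hmac, json, time

    # Hypothetical key held by the neutral forensic logging service.
    FWS_KEY = b"demo-key-not-for-production"

    def log_invocation(log, sender, receiver, soap_message):
        """Append a tamper-evident record of one service invocation."""
        prev = log[-1]["mac"] if log else ""
        record = {
            "timestamp": time.time(),
            "sender": sender,
            "receiver": receiver,
            # The digest ties the record to the exact message bytes exchanged.
            "digest": hashlib.sha256(soap_message.encode()).hexdigest(),
            # Chaining to the previous record lets an independent agency
            # re-link the full transactional history later.
            "prev": prev,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["mac"] = hmac.new(FWS_KEY, payload, hashlib.sha256).hexdigest()
        log.append(record)
        return record

    log = []
    log_invocation(log, "OrderService", "PaymentService",
                   "<soap:Envelope>...</soap:Envelope>")

An investigator holding the key can recompute each record's MAC and follow the prev links to verify that no record was altered or removed.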

See also: NIST Interagency / Internal Reports


Requirements, Terminology and Framework for Exigent Communications
Henning Schulzrinne and Hannes Tschofenig (eds), IETF Internet Draft

IETF has published a first public draft of the specification Requirements, Terminology and Framework for Exigent Communications. The document is one of several initial contributions to the new IETF technical activity for "Authority to Citizen Alert (ATOCA)." Document abstract: "Various agencies need to provide information to a restricted group of persons, or even to the general public, before, during and after emergency situations. While many aspects of such systems are specific to national or local jurisdictions, emergencies span such boundaries, and notifications need to reach visitors from other jurisdictions. This document summarizes requirements for protocols to allow alerts to be conveyed to IP-based end points.

During large-scale emergencies, public safety authorities need to communicate reliably with citizens in the affected areas: to provide warnings, to indicate whether and how citizens should evacuate, and to dispel misinformation. Accurate information can reduce the impact of such emergencies.

Traditionally, emergency alerting has used church bells, sirens, loudspeakers, radio and television to warn citizens and to provide information. However, techniques such as sirens and bells convey limited information content; loudspeakers cover only very small areas and are often hard to understand, even for listeners who are neither hearing-impaired nor unfamiliar with the local language. Radio and television offer larger information volume, but are hard to target geographically and do not work well for reaching the 'walking wounded' or other pedestrians. Neither is suitable for warnings, as many of those needing the information will not be listening or watching at any given time, particularly during work/school and sleep hours...

This document aims to generalize the concept of conveying alerts to IP-based systems and, at the same time, to re-define the actors that participate in the messaging communication. More precisely, 'exigent communications' is defined as 'communication that requires immediate action or remedy.' Information about the reason for action and details about the steps that have to be taken are provided in the alert message. An alert message (or warning message) is cautionary advice about something imminent (especially imminent danger or other unpleasantness). In the context of exigent communication, such an alert message refers to a future, ongoing or past event, as the signaling exchange itself may relate to different stages of the lifecycle of the event. The alert message itself, and not the signaling protocol, provides sufficient context about the specific state of the lifecycle the alert message refers to... Three types of communication models can be envisioned: alerts addressed to all individuals within a certain geographic area, alerts delivered to dedicated end points via unicast messaging, and alerts based upon an opt-in subscription model... This document provides terminology, requirements and the architecture for IP-based protocols to enhance and complement existing authority-to-citizen warning systems..."

See also: the draft charter for the IETF ATOCA Working Group


SIP Event Package for the Common Alerting Protocol (CAP)
Brian Rosen, H. Schulzrinne, H. Tschofenig (eds), IETF Internet Draft

IETF has published an initial level -00 Internet Draft in connection with the Authority to Citizen Alert (ATOCA) Working Group: Session Initiation Protocol (SIP) Event Package for the Common Alerting Protocol (CAP). The Common Alerting Protocol (CAP) "is an XML document format for exchanging emergency alerts and public warnings. This document allows CAP documents to be distributed via the event notification mechanism available with the Session Initiation Protocol (SIP)."

RFC 3265 ('Session Initiation Protocol (SIP)-Specific Event Notification') defines a SIP extension for subscribing to remote nodes and receiving notifications of changes (events) in their states. It leaves the definition of many aspects of these events to concrete extensions, known as event packages. This document defines such an event package... The document defines a new "common-alerting-protocol" event package. Event Publication Agents (EPA) use PUBLISH requests to inform an Event State Compositor (ESC) of changes in the common-alerting-protocol event package. Acting as a notifier, the ESC notifies subscribers about emergency alerts and public warnings...

As described in RFC 3265, the NOTIFY message will contain bodies describing the state of the subscribed resource. This body is in a format listed in the Accept header field of the SUBSCRIBE request, or a package-specific default format if the Accept header field was omitted from the SUBSCRIBE request. For an initial notify, unlike for other event packages, there is no current initial state, unless there's a pending alert. Hence, returning a NOTIFY with a non-empty body only makes sense if there are indeed active alerts. The contents of a CAP document may contain public information, depending on the alert message type and the intended recipient of the alert message. It is, however, expected that in many cases providing CAP documents does not require authorization by subscribers..."
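
To make the subscription model concrete, a client interested in alerts might subscribe to the new event package as sketched below. This is an illustrative message, not an example taken from the draft: the addresses are placeholders, and the CAP media type given in the Accept header is an assumption.

    SUBSCRIBE sip:alerts@atoca.example.org SIP/2.0
    Via: SIP/2.0/TCP client.example.com;branch=z9hG4bK74bf9
    Max-Forwards: 70
    From: <sip:user@example.com>;tag=8675309
    To: <sip:alerts@atoca.example.org>
    Call-ID: 20100706T1200@client.example.com
    CSeq: 1 SUBSCRIBE
    Event: common-alerting-protocol
    Accept: application/common-alerting-protocol+xml
    Expires: 3600
    Contact: <sip:user@client.example.com>
    Content-Length: 0

Per the behavior described above, the ESC's initial NOTIFY would carry a non-empty CAP body only if an alert is currently active.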

Per the WG Charter: "The goal of the ATOCA working group is not to specify how originators of alerts obtain authorization, but rather how an ATOCA system can verify authorization and deliver messages to the intended recipients. A critical element of the work is the set of mechanisms that assure that only pre-authorized agents can send alerts via ATOCA, through an interface to authorized alert distribution networks (e.g., iPAWS/DM-Open in the U.S.). The ATOCA effort is differentiated from, and is not intended to replace, other alerting mechanisms (e.g., PWS, CMAS, ETWS), as the recipients of ATOCA alerts are the wide range of devices connected to the Internet and various private IP networks, which humans may have 'at hand' to receive such alerts, as well as automatons that may take action based on the alerts. This implies that the content of the alert contains some information intended to be consumed by humans, and some intended to be consumed by automatons..."

See also: the draft charter for the IETF ATOCA Working Group


Secrecy of Cloud Computing Providers Raises IT Security Risks
Ellen Messmer, Network World

"Despite how attractive cloud computing can sound as an outsourcing option, there's widespread concern that it presents a security and legal minefield for businesses and government. Cloud service providers often cultivate an aura of secrecy about data centers and operations, claiming this stance improves their security even if it leaves everyone else in the dark.

Businesses and industry analysts are getting fed up with this cloud computing version of 'don't ask, don't tell,' where non-disclosure agreements (NDAs) dominate, questions aren't answered, and data center locations and practices are treated like national security secrets.

But public cloud service providers argue that their penchant for secrecy is appropriate for the cloud model, and that at any rate everyone's doing it. They often hold out their SAS 70 audit certifications to appease any worry, though some don't even have that...

The argument over transparency vs. secrecy in cloud computing is leading to a culture clash between the more traditional ways of handling data outsourcing and the newer cloud-computing utility methods and mindset. Organizations with certain kinds of sensitive data are simply unlikely to find public cloud computing the right fit until the day comes when they can be sure their favorite security mechanisms are running in their cloud environment..."

See also: the OASIS ID-Cloud Technical Committee


Nuxeo CMIS-Compliant Framework for Case-Centric Content Management
Staff, Nuxeo Announcement

"Nuxeo, an Open Source Enterprise Content Management (ECM) company, has announced a new Nuxeo Case Management Framework (Nuxeo CMF) package for application builders, ISVs and information architects that need to deliver content-centric applications to business users. The open source case management framework is delivered as an easy-to-use downloadable product complete with templates, samples and documentatio n to fast-track deployments of content applications based on the Nuxeo Enterprise Platform (Nuxeo EP).

Nuxeo Case Management Framework leverages the power and extensibility of Nuxeo EP, with added functionality to support flows of related content items that need to be handled, managed and analyzed in a case-centric metaphor. It is designed to accelerate solution delivery and lower development costs, helping application builders, information architects and systems integrators fast-track business goals and productivity gains by getting people working smarter and faster. Applications in production today using the Nuxeo ECM platform include invoice processing, contracts management, correspondence management, personnel file management, and incident tracking, among others.

Nuxeo CMF is the first case management framework from an ECM provider released as open source. Nuxeo is deeply committed to both open source and open standards - ensuring full compliance with key initiatives such as CMIS (Content Management Interoperability Services), Dublin Core, PDF/A, OpenSocial and OSGi. Strong standards support provides assurance for customers that content is accessible, transferable, and can be preserved for long-term retention needs. As ECM interoperability standards such as CMIS gain broad adoption, Nuxeo CMF-based deployments are assured of full participation in new and innovative multi-repository applications that are demanded by large and distributed enterprises.

The Nuxeo Case Management framework is a full-featured, extensible toolkit that natively includes document management, collaboration, workflow, reporting, dashboards and unified in-box management functionality. This allows application builders to focus on building templates, plug-ins and business logic to meet specific content management requirements determined by industry-specific use cases or externally-imposed regulatory mandates..."

See also: the Nuxeo Case Management Framework description


The XML Flavor of HTML5: Recommendations for Developers
Uche Ogbuji, IBM developerWorks

For a while there has been a struggle for the future of markup on the web between the W3C's XHTML 2 and HTML5, developed by the major browser vendors under a separate organizational umbrella. First the W3C took over HTML5, and more recently it announced the sunset of the XHTML 2 effort. This makes a significant difference to the future of XML on the web, and because of HTML5's momentum, it is now a technology that every XML developer has to deal with.

Browser vendors had been largely ignoring the W3C and had formed the Web Hypertext Application Technology Working Group (WHATWG) to evolve HTML, creating HTML5. Support for W3C XHTML was stagnant. The W3C first recognized the practicalities by providing a place to continue the HTML5 work, and it accepted defeat by retiring the XHTML 2 effort in 2009. There is no simple way to assess whether this means the end of XHTML in practice. HTML5 is certainly not designed to be XML-friendly, but it does at least pay lip service to XML in the form of an XML serialization of HTML, which in this article I'll call XHTML5. Nevertheless, the matter is far from settled...

The article is written for what I call the desperate web hacker: someone who is not a W3C standards guru but who is interested in either generating XHTML5 on the web or consuming it in a simple way, that is, consuming the information rather than worrying about the enormous complexity of rendering. I'll admit that some of my recommendations are painful for me to make, as a long-time advocate of processing XML the right way. Remember that HTML5 is still a W3C working draft, and it might be a while before it becomes a full Recommendation. Many of its features are stable, though, and already well implemented on the web.

It is best to save XHTML5 for the very outermost components that connect to browsers. All flavors of XHTML are better seen as rendering languages than as information-bearing languages. You should carry the main information through most of your system in other XML formats, and convert to XHTML5 only at the last minute. You might wonder what the point is of creating XHTML5 even at the last minute; remember Postel's law, which recommends being strict in what you produce. By producing XHTML5 for browsers, you make it easier for others to extract information from your websites and applications. In this age of mash-ups, web APIs, and data projects, that is a valuable characteristic..."
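
For the desperate web hacker, the target of that last-minute conversion is plain: well-formed XML in the XHTML namespace, delivered with the XML media type. A minimal XHTML5 document might look like the following (the namespace and the application/xhtml+xml media type are what make XML processing possible; the content itself is illustrative):

    <?xml version="1.0" encoding="UTF-8"?>
    <html xmlns="http://www.w3.org/1999/xhtml">
      <head>
        <title>Minimal XHTML5 document</title>
      </head>
      <body>
        <!-- Every element closed and every attribute quoted, so XML tools can parse it -->
        <p>Served as application/xhtml+xml rather than text/html.</p>
      </body>
    </html>

Unlike text/html HTML5, a document served this way goes through the browser's XML parser, so a single well-formedness error stops rendering; that draconian error handling is part of the trade-off the article weighs.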

See also: the HTML5 draft specification


New Opportunities for Linked Data Nose-Following
Jonathan Rees, Blog

"For those interested in deploying RDF on the Web, I'd like to draw your attention to three new proposed standards from IETF, 'Web Linking', 'Defining Well-Known URIs', and 'Web Host Metadata', that create new follow-your-nose tricks that could be used by semantic web clients to obtain RDF connected to a URI — RDF that presumably defines what the URI 'means' and/or describes the thing that the URI is supposed to refer to.

All sorts of other statements can be made about a web page, such as a type (wiki page, blog post, etc.), SKOS concepts, links to comments and reviews, duration of a recording, how to edit, who controls it administratively, etc. Anything you might want to say about a web page can be said in RDF.

Embedded metadata is easy to deploy and to access, and should be used when possible. But while embedded metadata has the advantage of traveling around with the content, a protocol that allows the server responsible for the URI to provide metadata over a separate 'channel' has two advantages: first, the metadata doesn't have to be put into the content; and second, it doesn't have to be parsed out of the content. And it's not either/or: there is no reason not to provide metadata through both channels when possible.

As with any new protocol, figuring out exactly how to apply the new proposed standards will require coordination and consensus-building. For example, the choice of the 'describedby' link relation and the 'host-meta' well-known URI needs to be confirmed for linked data, and agreement reached on whether multiple Link: headers are in good taste or poor taste... Consideration should be given to Larry Masinter's suggestion to use multiple relations reflecting different attitudes the server might have regarding the various metadata sources: for example, the server may choose to announce that it wants the Link metadata to override any embedded metadata, or vice versa..."
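
In HTTP terms, the simplest of these tricks is a Link header with the 'describedby' relation on the response for the original URI. A hypothetical exchange (the URIs are invented for illustration):

    GET /people/alice HTTP/1.1
    Host: example.org

    HTTP/1.1 200 OK
    Content-Type: text/html
    Link: <http://example.org/meta/alice.rdf>; rel="describedby"

A semantic web client can then dereference the linked URI to obtain RDF about /people/alice; the 'Web Host Metadata' draft adds a site-wide /.well-known/host-meta document for publishing such metadata defaults in one place.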


Customizing MediaWiki
Chris Herborth, IBM developerWorks

The MediaWiki application is probably best known as the engine behind Wikipedia. Many people are finding that MediaWiki provides a usable environment for sharing information within workgroups and even entire organizations, as well as online communities. MediaWiki allows users to share information via blogs, wikis, and files; it also allows you to secure uploaded files, tag files for easy locating, and locate experts using a tag cloud.

MediaWiki extensions can add new tags to the wiki markup used to write articles, add new reporting and administrative features by creating special pages, change the look and feel of the wiki through formatting skins, and even integrate with external authentication methods... Extensions are written in PHP and make use of MediaWiki's various internal hooks, classes, and methods to get their jobs done efficiently. While you can develop and deploy MediaWiki using any supported Web server and your favorite PHP development environment, this article explores Eclipse V3.5.2, PHP Development Tools (PDT) V2.2.0, MAMP Pro V1.9, and MediaWiki V1.15.3 (the current stable version of MediaWiki)...

MediaWiki takes advantage of PHP's ability to mix code and HTML markup to give you control over the look and feel of your wiki through the use of skins. Besides the main PHP code, a skin can include various CSS files and supporting images or JavaScript...

As you can see from the CHUser extension [documented], adding support for custom XML tags is easy and lets you do almost anything. All PHP features and MediaWiki services are available to you, so you can pull data from (or send data to) external systems, change behavior based on the current user's credentials and permissions, or insert JavaScript to run tasks directly in the viewer's browser. The possibilities are endless, limited only by your specific needs and requirements..."

See also: the MediaWiki Extension Matrix


Sponsors

XML Daily Newslink and Cover Pages sponsored by:

IBM Corporation  http://www.ibm.com
ISIS Papyrus  http://www.isis-papyrus.com
Microsoft Corporation  http://www.microsoft.com
Oracle Corporation  http://www.oracle.com
Primeton  http://www.primeton.com

XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: newsletter-subscribe@xml.coverpages.org
Newsletter unsubscribe: newsletter-unsubscribe@xml.coverpages.org
Newsletter help: newsletter-help@xml.coverpages.org
Cover Pages: http://xml.coverpages.org/


