This issue of XML Daily Newslink is sponsored by:
ISIS Papyrus http://www.isis-papyrus.com
- W3C Releases First Public Working Draft for RDFa Core 1.1
- XHTML+RDFa 1.1: Support for RDFa via XHTML Modularization
- Reporting of DKIM Verification Failures
- Harris Interactive Survey: Cloud Computing Leaves Consumers Cold
- Appirio Makes $1 Million Cloud Savings Promise
- Volcano's Fury Gives NOAA a Chance to Try Out New Computer Models
- Update from Oracle: GlassFish Roadmap
- Simpler JAX-RS Integration with Ajax: Apache Wink with Jackson JSON
W3C Releases First Public Working Draft for RDFa Core 1.1
Ben Adida, Mark Birbeck, Shane McCarron, Ivan Herman (eds), W3C Technical Report
Members of the W3C RDFa Working Group have published a First Public Working Draft for the specification RDFa Core 1.1: Syntax and Processing Rules for Embedding RDF Through Attributes. The document is intended to become a W3C Recommendation. A sample test harness is available, though its set of tests is not intended to be exhaustive. Users may find the tests to be useful examples of RDFa usage. This document is expected to supersede the RDFa in XHTML (RDFa 1.0) specification.
RDFa provides a set of XHTML attributes to augment visual data with machine-readable hints, reusing existing attributes as well as providing a few new ones. Attributes that already exist in widely deployed languages (e.g., HTML) have the same meaning they always did, although their syntax has been slightly modified in some cases. For example, in (X)HTML, '@rel' already defines the relationship between one document and another. However, in (X)HTML there is no clear way to add new values; RDFa sets out to explicitly solve this problem, and does so by allowing URIs as values. It also introduces the idea of 'compact URIs'—referred to as CURIEs in this document—which allow a full URI value to be expressed succinctly...
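The combination of '@rel' with URI values and CURIEs can be illustrated with a short fragment. (An illustrative sketch only: the Dublin Core prefix mapping, properties, and URIs below are examples chosen for this newsletter, not taken from the draft.)

```xml
<!-- Illustrative XHTML+RDFa fragment using the xmlns-style prefix
     mapping carried over from RDFa 1.0. -->
<div xmlns:dc="http://purl.org/dc/terms/" about="/docs/report.html">
  <!-- dc:title is a CURIE: it expands to http://purl.org/dc/terms/title -->
  <span property="dc:title">Quarterly Report</span>
  <!-- @rel takes a CURIE value, extending its traditional
       link-relation role with a vocabulary term -->
  <a rel="dc:creator" href="http://example.com/staff/jane">Jane Doe</a>
</div>
```

A conforming RDFa processor would extract two RDF triples from this fragment: one asserting the document's title, and one relating the document to its creator.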
Background: "The current Web is primarily made up of an enormous number of documents that have been created using HTML. These documents contain significant amounts of structured data, which is largely unavailable to tools and applications. When publishers can express this data more completely, and when tools can read it, a new world of user functionality becomes available, letting users transfer structured data between applications and web sites, and allowing browsing applications to improve the user experience: an event on a web page can be directly imported into a user's desktop calendar; a license on a document can be detected so that users can be informed of their rights automatically; a photo's creator, camera setting information, resolution, location and topic can be published as easily as the original photo itself, enabling structured search and sharing.
RDFa Core is a specification for attributes to express structured data in any markup language. The embedded data already available in the markup language (e.g., XHTML) is reused by the RDFa markup, so that publishers don't need to repeat significant data in the document content. The underlying abstract representation is RDF, which lets publishers build their own vocabulary, extend others, and evolve their vocabulary with maximal interoperability over time. The expressed structure is closely tied to the data, so that rendered data can be copied and pasted along with its relevant structure... RDFa shares some of the same goals with microformats. Whereas microformats specify both a syntax for embedding structured data into HTML documents and a vocabulary of specific terms for each microformat, RDFa specifies only a syntax and relies on independent specification of terms (often called vocabularies or taxonomies) by others. RDFa allows terms from multiple independently-developed vocabularies to be freely intermixed and is designed such that the language can be parsed without knowledge of the specific vocabulary being used..."
See also: W3C Semantic Web
XHTML+RDFa 1.1: Support for RDFa via XHTML Modularization
Shane McCarron (ed), W3C Technical Report
A First Public Working Draft for XHTML+RDFa 1.1: Support for RDFa via XHTML Modularization has been released for comment by the W3C RDFa Working Group. Where the specification "RDFa Core 1.1" defines attributes and syntax for embedding semantic markup in host languages, this document defines one such Host Language. This language is a superset of XHTML 1.1, integrating the attributes as defined in RDFa Core 1.1. This document is intended for authors who want to create XHTML-Family documents that embed rich semantic markup.
There are a number of substantive differences between this version and its predecessor, found in Sections 8 and 9 and Appendix A of "RDFa Syntax 1.0": (1) Inheritance of basic processing rules from RDFA-CORE; (2) The inclusion of an implementation of the markup language using XML Schema; (3) The addition of '@lang' to be consistent with recent changes in XHTML11-2e.
XHTML+RDFa 1.1 is an XHTML family markup language. It extends the XHTML 1.1 markup language with the attributes defined in RDFa Core 1.1. This document also defines an XHTML Modularization-compatible module for the RDFa Core attributes in both XML DTD and XML Schema formats...
The W3C RDFa Working Group was chartered to support the developing use of RDFa for embedding structured data in Web documents in general. The term 'Semantic Web' refers to W3C's vision of the Web of linked data. Semantic Web technologies enable people to create data stores on the Web, build vocabularies, and write rules for handling data. Linked data are empowered by technologies such as RDF, SPARQL, OWL, and SKOS. Linked Data lies at the heart of what the Semantic Web is all about: large scale integration of, and reasoning on, data on the Web. To achieve and create Linked Data, technologies should be available for a common format (RDF), making possible either conversion of, or on-the-fly access to, existing databases (relational, XML, HTML, etc). It is also important to be able to set up query endpoints to access that data more conveniently. W3C provides a palette of technologies (RDF, GRDDL, POWDER, RDFa, the upcoming R2RML, RIF, SPARQL) to get access to the data..."
See also: the W3C RDFa Working Group Charter
Reporting of DKIM Verification Failures
Murray S. Kucherawy (ed), IETF Internet Draft
IETF has published an initial level -00 Internet Draft for the Standards Track specification Reporting of DKIM Verification Failures. IETF RFC 4871 DomainKeys Identified Mail (DKIM) Signatures, together with updates in Request for Comments 5672, "defines a domain-level authentication framework for email using public-key cryptography and key server technology to permit verification of the source and contents of messages by either Mail Transfer Agents (MTAs) or Mail User Agents (MUAs). The ultimate goal of this framework is to permit a signing domain to assert responsibility for a message, thus protecting message signer identity and the integrity of the messages they convey while retaining the functionality of Internet email as it is known today. Protection of email identity may assist in the global control of 'spam' and 'phishing'."
The document Reporting of DKIM Verification Failures presents an extension to the DomainKeys Identified Mail (DKIM) specifications to allow public keys for verification to include a reporting address to be used to report message verification issues, and extends an Internet Message reporting format to be followed when generating such reports. The specification was produced by members of the IETF Messaging Abuse Reporting Format (MARF) Working Group.
Details: Where DKIM introduced a standard for digital signing of messages for the purposes of sender authentication, there exist cases in which a domain name owner might want to receive reports from verifiers that determine DKIM-signed mail apparently from its domain is failing to verify according to DKIM or fails to conform to the domain's published signing practices according to 'DKIM Sender Signing Practices' (RFC 5617).... This memo also defines 'ro' tags as the means by which the sender can request reports for specific circumstances of interest. Verifiers MUST NOT generate reports for incidents that do not match a requested report, and MUST ignore requests for reports not included in these lists.
The updated companion specification 'An Extensible Format for Email Feedback Reports', also from the IETF MARF Working Group, defines an extensible format and MIME type that may be used by mail operators to report feedback about received email to other parties. This format is intended as a machine-readable replacement for various existing report formats currently used in Internet email..."
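The feedback-report format referred to above is a multi-part MIME message. The following is only a structural sketch: the field values shown are placeholders, and the Feedback-Type value appropriate for DKIM verification failures is defined by the MARF drafts themselves, so consult them for the normative field list.

```
Content-Type: multipart/report; report-type=feedback-report;
    boundary="boundary"

--boundary
Content-Type: text/plain

(human-readable description of the verification failure)

--boundary
Content-Type: message/feedback-report

Feedback-Type: abuse
User-Agent: ExampleReporter/1.0
Version: 1

--boundary
Content-Type: message/rfc822

(the original failing message, or its headers)
--boundary--
```

The first part is intended for human operators, the second is the machine-readable report, and the third carries the message (or headers) that triggered the report.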
See also: the updated MARF Email Feedback Reports
Harris Interactive Survey: Cloud Computing Leaves Consumers Cold
Antone Gonsalves, InformationWeek
U.S. adults remain distrustful of cloud computing services that would let them store, edit, or play digital content, a survey shows. More than half of adults surveyed disagreed with the concept that files stored online are safer than files stored locally on a hard drive. A March survey of 2,320 online adults conducted by Harris Interactive found that from 55% to 69% of the respondents would be only somewhat or not at all interested in using cloud computing for pictures, music, office documents, videos, or financial services, such as tax files or bank records...
E-mail was the one exception, with just under half of the people surveyed saying they would be extremely interested, very interested, or interested in using cloud computing for this service. One of the main issues the respondents had with cloud computing is security. Four in five of the respondents agreed that security was a concern. Only a quarter said they would trust a cloud-computing service for files with personal information, while three in five said they would not...
Fully 58% of the people surveyed disagreed with the concept that files stored online were safer than files stored locally on a hard drive, and 57% did not trust that their files are safe online. Nevertheless, more than three in five online Americans agreed that having access to all their files wherever they are, a major advantage of cloud-computing services, would make their lives easier..."
Appirio Makes $1 Million Cloud Savings Promise
Chris Kanaracus, InfoWorld
Appirio has launched a new offer, pledging that if customers who use its "cloudsourcing" services to migrate entirely to public cloud applications and technologies don't save at least $1 million per year, it will make up the difference. There are some ground rules. To be eligible, companies must be currently spending at least $5 million per year and have 500 or more employees... In addition, Appirio is not promising instant results. Customers would work on a phased roadmap plan with Appirio's consultants, and the process of switching over to cloud applications and infrastructure could take two or three years. Once that is complete, savings will be measured over the following year... Appirio expects most interested businesses will fall in the 500 to 5,000 employee range...
Ryan Nichols, Appirio vice president of cloudsourcing and cloud strategy, said that when customers make the switch, entire categories of spending drop away, such as the costs of maintaining servers. He also noted a recent Gartner prediction that 20 percent of companies won't own any IT assets by 2012..."
According to the announcement: "The Appirio program, which will run for the next six months, gives customers the confidence they need to get out of the business of running data centers and the inspiration to take greater advantage of the cost and innovation benefits enabled by cloud computing... For those companies who are looking to cloudsource some or all of their IT, Appirio has the expertise, services and technology to make it happen with less risk. A typical cloudsourcing relationship begins by first understanding the current customer IT environment, then developing a business-case driven cloud roadmap that articulates areas of savings across software, hardware, personnel and other IT cost centers. Appirio then brings to bear its proven cloud migration, cloud development and cloud management services, with specialized back-end technology that can connect leading cloud platforms...
Enterprise cloud adoption has reached a tipping point. An increasing number of companies are already moving their applications, platforms and infrastructure to the public cloud, drawn not only by the cost savings but the top line business impact—the ability to react faster, enter new markets or just work more productively..."
See also: the Appirio announcement
Volcano's Fury Gives NOAA a Chance to Try Out New Computer Models
William Jackson, Government Computer News
"The eruption of a volcano on Iceland that has shut down commercial air traffic in much of Europe since late last week is giving the U.S. National Oceanic and Atmospheric Administration a chance to try out an advanced computer model for predicting volcanic ash dispersion. The current state of the art and science of volcanic ash is limited by a lack of detailed information about the composition of the clouds of ash spewed by erupting volcanoes, which can threaten aircraft and change the Earth's weather...
NOAA plans to begin testing a version of a computer simulation that includes chemistry data in an effort to produce more accurate results... Tracking and predicting the movement of clouds of volcanic ash is important because of their potential effect on aircraft and weather. The threat of damage to aircraft has disrupted European and transatlantic air travel this week, even though little is still known about the threat.
Existing computer models are helpful in predicting where the plumes will go. One being used to track the Eyjafjallajokull plume over northern Europe is called HYSPLIT — the Hybrid Single Particle Lagrangian Integrated Trajectory Model — developed by NOAA's Air Resources Laboratory and used by the weather service...
A model that offers more detail is the Flow-Following Finite-Volume Icosahedral Model, or FIM, being developed by NOAA Research. FIM is not brand new, but the chemistry elements executed within it have only been tested in the last few days, said Stan Benjamin, director of the forecast branch of the research division's Global Systems Division. It is expected to get trial runs on the Eyjafjallajokull plume this week and to soon join other production computer models used by NOAA..."
Update from Oracle: GlassFish Roadmap
Staff, GlassFish Community Developer Announcement
In March 2010, Oracle shared plans for GlassFish with the community. This covered community changes as well as the upcoming releases, such as support for clustering and more in GlassFish 3.1 in 2010...
GlassFish is an open source application server which implements Java EE 5. The Java EE 5 platform includes the latest versions of technologies such as JavaServer Pages (JSP) 2.1, JavaServer Faces (JSF) 1.2, Servlet 2.5, Enterprise JavaBeans 3.0, Java API for Web Services (JAX-WS) 2.0, Java Architecture for XML Binding (JAXB) 2.0, Web Services Metadata for the Java Platform 1.0, and many other new technologies.
Roadmap Highlights: GlassFish centralized admin and clustering will be in the open source version. The Open Source version is fully featured, including full Java EE 6 support (not just the Web Profile) and things like administration and clustering. Shoal-based, in-memory replication is part of this. The Oracle distribution of GlassFish is just the Open Source version + branding elements + Closed-Source AddOns...
Oracle's decision to port the upper level components of Fusion Middleware to GlassFish is based on demand from our commercial customers to make such functionality available on different Application Servers. As a commercial product to date, we have not heard demand from our customers to make these components available on GlassFish—hence we do not have a product plan to do so today...
GlassFish will be certified on Oracle's Java Virtual Machines (JRockit and Hotspot) as well as other major JVM providers. Certifying and integrating GlassFish with Coherence is on the roadmap of GlassFish Server 3.1. In areas such as security and web services, Oracle will be doing interoperability and compatibility testing with Oracle Fusion Middleware..."
See also: the GlassFish FAQ document
Simpler JAX-RS Integration with Ajax: Apache Wink with Jackson JSON
Nick Maynard, IBM developerWorks
This article presents a method for configuring an existing Apache Wink-enabled Web application to use the Jackson JSON provider, solving some common JSON serialization problems. An example, with sample code for a simple Jackson-enabled JAX-RS Web service, illustrates the advantages of this provider.
"Apache Wink is a simple yet solid framework for building RESTful Web services. It comprises a Server module and a Client module for developing and consuming RESTful Web services. The Wink Server module is a complete implementation of the JAX-RS v1.0 specification. On top of this implementation, the Wink Server module provides a set of additional features that were designed to facilitate the development of RESTful Web services.
The Wink Client module is a Java based framework that provides functionality for communicating with RESTful Web services. The framework is built on top of the JDK HttpURLConnection and adds essential features that facilitate the development of such client applications..."
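One common way to wire Jackson into a JAX-RS application such as Wink is to register Jackson's JAX-RS provider as a singleton in a standard javax.ws.rs.core.Application subclass. This is a sketch under the assumption of Jackson 1.x (the org.codehaus.jackson packages); the class name MyApplication is illustrative and not taken from the article.

```java
import java.util.HashSet;
import java.util.Set;

import javax.ws.rs.core.Application;

import org.codehaus.jackson.jaxrs.JacksonJaxbJsonProvider;
import org.codehaus.jackson.map.ObjectMapper;

// Illustrative JAX-RS Application subclass; the class name is an example.
public class MyApplication extends Application {
    @Override
    public Set<Object> getSingletons() {
        Set<Object> singletons = new HashSet<Object>();
        // Register Jackson as the JSON provider so that JAXB-annotated
        // classes and plain POJOs returned by resources are serialized
        // by Jackson rather than by the default provider.
        JacksonJaxbJsonProvider jsonProvider = new JacksonJaxbJsonProvider();
        jsonProvider.setMapper(new ObjectMapper());
        singletons.add(jsonProvider);
        return singletons;
    }
}
```

The Application subclass is then typically named in web.xml as the javax.ws.rs.Application init-param of the Wink RestServlet, so that Wink picks up the provider alongside the resource classes.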
See also: the Apache Incubator project
XML Daily Newslink and Cover Pages sponsored by:
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: email@example.com
Newsletter unsubscribe: firstname.lastname@example.org
Newsletter help: email@example.com
Cover Pages: http://xml.coverpages.org/