This issue of XML Daily Newslink is sponsored by:
ISIS Papyrus http://www.isis-papyrus.com
- OASIS Public Review for Energy Interoperation Version 1.0
- What Is a URI and Why Does It Matter?
- Jeni Tennison on Priorities for RDF
- Schema: Representing Resources for Calendaring and Scheduling Services
- OGC Calls for Participation in Major Geo Standards Testbed
- NIST Invites Comment on Personal Identity Verification Specification
OASIS Public Review for Energy Interoperation Version 1.0
Toby Considine (ed), OASIS PRD
Members of the OASIS Energy Interoperation Technical Committee have released an approved Committee Specification Draft of Energy Interoperation Version 1.0 for public review through December 27, 2010.
"Energy interoperation describes an information model and a communication model to enable collaborative and transactive use of energy, service definitions consistent with the OASIS SOA Reference Model, and XML vocabularies for the interoperable and standard exchange of: dynamic price signals, reliability signals, emergency signals, communication of market participation information such as bids, and load predictability and generation information.
This work facilitates enterprise interaction with energy markets, which: (1) Allows effective response to emergency and reliability events; (2) Allows taking advantage of lower energy costs by deferring or accelerating usage; (3) Enables trading of curtailment and generation; (4) Supports symmetry of interaction between providers and consumers of energy; (5) Provides for aggregation of provision, curtailment, and use. The definition of a price and of reliability information depends on the market context in which it exists. It is not in scope for this TC to define specifications for markets or for pricing models, but the TC will coordinate with others to ensure that commonly used market and pricing models are supported. While this specification uses Web Services to describe the services, no requirement or expectation of specific messaging implementation is assumed...
Energy Interoperation (EI) supports transactive energy. EI also supports demand response approaches ranging from limited direct load control to override-able suggestions to customers. EI includes measurement and verification of curtailment. EI engages Distributed Energy Resources (DER) while making no assumptions as to their processes or technology. While this specification supports agreements and contractual obligations, it offers flexibility of implementation to support specific programs, regional requirements, and goals of the various participants including the utility industry, aggregators, suppliers, and device manufacturers. It is not the intent of the TC to imply that any particular contractual obligations are endorsed, proposed, or required in order to implement this specification. Energy market operations are beyond the scope of this specification although the interactions that enable management of the actual delivery and acceptance are within scope. Energy Interoperation defines interfaces for use throughout the transport chain of electricity as well as supporting today's intermediation services and those that may arise tomorrow..."
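The kind of dynamic price signal the specification describes can be sketched in a few lines. This is a minimal illustrative model only; the field names are invented for the example and do not reproduce the actual Energy Interoperation XML vocabulary:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical sketch of a dynamic price signal payload. Field names are
# illustrative and do NOT reproduce the Energy Interoperation schema.
@dataclass
class PriceSignal:
    market_context: str      # the market in which this price is defined
    start: datetime          # start of the pricing interval
    duration_minutes: int    # length of the interval
    price: float             # price per kWh for the interval
    currency: str = "USD"

def interval_cost(signal: PriceSignal, load_kw: float) -> float:
    """Cost of running a constant load for the signal's interval."""
    hours = signal.duration_minutes / 60
    return signal.price * load_kw * hours

signal = PriceSignal("day-ahead", datetime(2010, 12, 1, 14, 0), 60, 0.12)
print(interval_cost(signal, 5.0))  # 5 kW load for one hour at $0.12/kWh
```

A consumer receiving such a signal could compare interval costs to decide whether to defer or accelerate usage, which is the price-response behavior item (2) above describes.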
What Is a URI and Why Does It Matter?
Henry S. Thompson, Ariadne
This article describes how recent developments in Web technology have affected the relationship between URI and resource representation and the related consequences. "Historically, URIs were mostly seen as simply the way you accessed Web pages. These pages were hand-authored, relatively stable and simply shipped out on demand. More and more often that is no longer the case... Insofar as there are definitive documents about all this, they all agree that URIs are, as the third initial says, identifiers, that is, names. They identify resources, and often (although not always) allow you to access representations of those resources. 'Resource' names a role in a story, not an intrinsically distinguishable subset of things, just as 'referent' does in ordinary language. Things are resources because someone created a URI to identify them, not because they have some particular properties in and of themselves. 'Representation' names a pair: a character sequence and a media type. The media type specifies how the character string should be interpreted. For example JPG or HTML or MP3 would be likely media types for representations of an image of an apple, a news report about an orchard or a recording of a Beatles song, respectively.
As long ago as the mid-1990s, information scientists had taken the URI-resource-representation split to its logical conclusion: it was OK to create URIs for resources for which no representation existed yet (for example a planned but not-yet-drafted catalogue entry), or even for resources for which no (retrievable) representation could in principle ever exist (a particular physical book, or even its author). By the end of the 1990s, the generalisation of the resource concept was complete, and we find, in the defining document for URIs (since superseded, but without significant change in this regard): 'A resource can be anything that has identity. Familiar examples include an electronic document, an image, a service (e.g., 'today's weather report for Los Angeles'), and a collection of other resources. Not all resources are network 'retrievable'; e.g., human beings, corporations, and bound books in a library can also be considered resources'.
Since then the principle that a URI can be used to identify anything; that is, that there are few if any limits on what can 'be a resource', has assumed more and more importance, particularly within one community, namely the participants in what is termed the Semantic Web programme. This move is not just a theoretical possibility: there are more and more URIs appearing 'in the wild' which do not identify images, reports, home pages or recordings, but rather people, places and even abstract relations... What if we have a URI which identifies, let us say, not the Oaxaca weather report, but Oaxaca itself, that city in the Sierra Madre del Sur south-east of Mexico City? What should happen if we try to access that URI? If the access succeeds, the representation we get certainly will not reproduce Oaxaca very well: we will not be able to walk around in it, or smell the radishes if it happens to be 23 December.
This is the point at which the word 'representation' is a problem. Surely we can retrieve some kind of representation of Oaxaca: a map, or a description, or a collection of aerial photographs. These are representations in the ordinary sense of the word, but not in the technical sense it is used when discussing Web architecture. Unfortunately, beyond pointing to the kind of easy examples we have used all along (a JPG is a good representation of an image, an HTML document can represent a report very well, an MP3 file can represent a recording pretty faithfully), it is hard to give a crisp definition of what 'representation' means in the technical sense... There is real debate underway at the moment as to exactly what it means for a Web server to return a 200 OK response code, and about exactly what kind of response is appropriate to a request for a URI which identifies a non-information resource. This question arises because, particularly in the context of Semantic Web applications, although no representation of the resource itself may be available, a representation of an information resource which describes that resource may be available..."
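The distinction the article draws can be made concrete in a toy dereferencing sketch: a representation is a (media type, byte sequence) pair, and one widely used Semantic Web convention answers the "what should the server return?" question with a 303 See Other redirect from the URI of the non-information resource to a document describing it. The URIs and data below are invented for illustration:

```python
# Representations as (media type, byte sequence) pairs, per the article's
# technical sense of the word. URIs and payloads are made up for the example.
representations = {
    # An information resource: a weather report, retrievable as HTML.
    "http://example.org/weather/oaxaca": ("text/html", b"<p>Sunny, 24 C</p>"),
}

# A non-information resource: the city itself. No representation of it can
# exist, but a *description* of it lives at a different URI.
see_other = {
    "http://example.org/id/oaxaca": "http://example.org/doc/oaxaca",
}

def dereference(uri):
    """Toy server logic: 200 + representation, or 303 redirect, or 404."""
    if uri in representations:
        return 200, representations[uri]   # 200 OK with a representation
    if uri in see_other:
        return 303, see_other[uri]         # 303 See Other -> describing doc
    return 404, None

status, target = dereference("http://example.org/id/oaxaca")
print(status, target)  # 303 plus the URI of the description
```

The 303 answer is one position in the debate the article mentions, not a settled rule; it signals "no representation of this thing exists, but here is a related information resource".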
Jeni Tennison on Priorities for RDF
Jeni Tennison, Blog
"A couple of weeks ago I did a talk at the W3C TPAC Plenary Day about why RDF hasn't had the uptake that it might and what could be done about it... I'm going to put my stake in the ground and say that there are three areas where I think W3C should be concentrating its efforts: (1) standardising (something like) TriG — Turtle plus named graphs; (2) Standardising an API for the RDF data model; (3) standardising a path language for RDF that can be used by that API and others for easy access... and that it should specifically not put its efforts into standardising another syntax for RDF based on JSON... Fundamentally, unlike XML or JSON, RDF is defined first and foremost as a model rather than as a syntax. That means it can be expressed in a number of syntaxes, the most common of which are RDF/XML, Turtle and N-Triples though of course there's also RDFa, RDF/JSON, JSON-LD and N3 and if you start factoring in named graphs you can add TriX, TriG and N-Quads to the list. [But] there are actually two ways in which having multiple syntaxes makes adoption harder...
The second point is to work on standardising the APIs that are available for developers who work with RDF. Why standardise APIs? Because it would make accessing RDF easier and more predictable for web developers, who often work across multiple languages and platforms. Developers don't really care about syntax—although having something readable is useful for debugging—they care about the way in which they get to interact with in-memory structures that hold the data.
RDF needs an API that exposes its internal model (of literals and resources and triples and graphs and datasets) in a way that isn't too onerous for people to use. There are lots and lots of RDF APIs about, within the various parsers that are available for different platforms; the only one that's approaching a standard is the one embedded within the RDFa API specification. I would like to see that disentangled from RDFa and for it, or something like it, to gain traction amongst the writers of RDF libraries such as the Redland RDF libraries, RDFLib, Moriarty, Reddy, rdfQuery and so on and on.
But having an API for RDF's data model is not enough. I think there is a lot that we can learn from XML's experience here. James Clark's recent blog post about XML and the web describes what it's like for developers working with XML compared to JSON... As far as I'm concerned, W3C and the RDF community should be concentrating on a syntax for RDF that doesn't come saddled with those kinds of assumptions, which I think is Turtle + graphs; something like TriG. They should be concentrating on developing a standard API for RDF access that has a chance of adoption among the developers of RDF libraries, and on working out what parts of SPARQL and FRESNEL could be used to create a path language that could be reused in several contexts, including within such an API. And these should be done in preference to an RDF syntax in JSON which doesn't solve the core problems, and in fact just adds another syntax to the mix..."
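The "model first, syntax second" argument is easy to see in miniature. Below is a toy sketch of the kind of API Tennison is asking to have standardized: developers add and query triples against the data model and never touch a serialization. The class and method names are invented for illustration; they do not reproduce the RDFa API specification or any particular library:

```python
# Toy sketch of a triple-based RDF API. Names are illustrative only.
class Graph:
    def __init__(self):
        self._triples = set()

    def add(self, subject, predicate, obj):
        self._triples.add((subject, predicate, obj))

    def triples(self, subject=None, predicate=None, obj=None):
        """Match triples against an (s, p, o) pattern; None is a wildcard."""
        for s, p, o in self._triples:
            if ((subject is None or s == subject) and
                    (predicate is None or p == predicate) and
                    (obj is None or o == obj)):
                yield (s, p, o)

g = Graph()
g.add("ex:jeni", "foaf:name", "Jeni Tennison")
g.add("ex:jeni", "foaf:homepage", "ex:jeni-home")

# The developer queries the in-memory model, not RDF/XML or Turtle text.
names = [o for _, _, o in g.triples(predicate="foaf:name")]
print(names)  # ['Jeni Tennison']
```

Whether the graph arrived as Turtle, RDF/XML or TriG is invisible at this layer, which is exactly why a standard API would make RDF "easier and more predictable" across languages and platforms.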
Schema: Representing Resources for Calendaring and Scheduling Services
Ciny Joy, Cyrus Daboo, Michael Douglass (eds), IETF Internet Draft
IETF has published a revised Standards Track Internet Draft for the specification Schema for Representing Resources for Calendaring and Scheduling Services. This specification is a result of discussions that took place within the Calendaring and Scheduling Consortium's Resource Technical Committee. The authors thank the participants of that group, and specifically the following individuals for contributing their ideas and support: Arnaud Quillaud, Adam Lewenberg, Andrew Laurence, Guy Stalnaker, Mimi Mugler, Dave Thewlis, Bernard Desruisseaux, Alain Petit, and Jason Miller.
"This specification describes a schema for representing resources for calendaring and scheduling. A resource in the scheduling context is any shared entity that can be scheduled by a calendar user, but does not control its own attendance status."
Details: "This specification defines a schema for representing resources to ease the discovery and scheduling of resources between any calendar client and server. LDAP and vCard mappings of the schema are described in this document. The Object model chosen is the lowest common denominator to adapt for LDAP. A resource object definition should contain all information required to find and schedule the right resource. For this, it should contain all, or a set of the attributes described in Section 5 'Resource Attributes'. The 'cn' attribute, described in Section 5.1 'Common Name' must be present in any resource object. Additional proprietary attributes may be defined as well, but must begin with 'X-'. Clients encountering attributes they don't know about must ignore them... LDAP Resource ObjectClass Definition: In LDAP, a resource object should be defined as an objectclass with attributes as defined in Section 5 'Resource Attributes'. This objectClass must be an auxiliary class. Its Superior class is the calEntry objectClass as defined in RFC 2739...
Definition of the CalendarResource ObjectClass: Attributes or Properties required to contact the resource are not included in this specification. LDAP attributes defined in RFC 4519 and VCARD properties defined in vCard Format Specification can be used to include contact information for the resource. New LDAP objectclasses and attributes defined in this document need to be registered by the Internet Assigned Numbers Authority (IANA). Once the assignment is done, this document needs to be updated with the correct OID numbers for all the newly defined objectclasses and attributes. Section 8.2 presents the VCard Property Registration details..."
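The draft's attribute-handling rules (a mandatory 'cn', 'X-'-prefixed proprietary attributes, and unknown attributes silently ignored) can be sketched as a small validator. The known-attribute set beyond 'cn' is invented for the example and is not the draft's Section 5 list:

```python
# Sketch of the draft's rules for resource attributes. The set of known
# attributes here is illustrative, not the actual Section 5 list.
KNOWN_ATTRIBUTES = {"cn", "kind", "capacity"}

def parse_resource(record: dict) -> dict:
    """Keep known and 'X-'-prefixed attributes; ignore everything else."""
    if "cn" not in record:
        # Section 5.1: 'cn' must be present in any resource object.
        raise ValueError("'cn' (Common Name) must be present")
    parsed = {}
    for name, value in record.items():
        if name in KNOWN_ATTRIBUTES or name.startswith("X-"):
            parsed[name] = value
        # Unknown attributes are ignored, per the draft.
    return parsed

room = parse_resource({
    "cn": "Projector Room 4",
    "capacity": "12",
    "X-BUILDING": "West",      # proprietary attribute: kept
    "futureAttr": "ignored",   # unknown attribute: dropped
})
print(sorted(room))  # ['X-BUILDING', 'capacity', 'cn']
```

The ignore-unknown rule is what lets the schema evolve: a new server attribute does not break older clients.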
OGC Calls for Participation in Major Geo Standards Testbed
Staff, Open Geospatial Consortium Announcement
The Open Geospatial Consortium has issued a Request for Quotations/Call for Participation (RFQ/CFP) for the OGC Web Services, Phase 8 (OWS-8) Interoperability Initiative, a testbed to advance OGC's open interoperability framework for geospatial capabilities. The RFQ/CFP is available online. The organizations sponsoring OWS-8 seek open standards that address their interoperability requirements.
"OWS-8 will be organized around the following threads: (1) Observation Fusion: Detection, tracking, and bookmarking of moving objects in video using Sensor Web Enablement (SWE) and other OGC encoding and interface standards; as well as OGC Web Coverage Service (WCS) Interface Standard 2.0 application profile, Web Coverage Processing Service Interface Standard, and coverage access and processing. (2) Geosynchronization (Gsync): Geodata bulk transfer (distributing data sets in a consistent manner offline and over networks), with Web services and client components to support synchronization and updates of geospatial data across a hierarchical Spatial Data Infrastructure (SDI). (3) Cross-Community Interoperability (CCI): Advancement of semantic mediation approaches to query and use; mediating among heterogeneous data models via the OGC Web Feature Service WFS Interface standard; style registries and styling services; KML; and UML/OCL for schema automation on domain models. (4) Aviation: Maturing the delivery, filtering and update of Aeronautical Information Exchange (AIXM) 5.1 using WFS-Transactional and OGC Filter Encoding standards; continuing development of reusable tools, benchmarking of compression techniques for enhanced performance, advancing styling and portrayal, and validating the emerging metadata and OGC Geography Markup Language (GML) Encoding Standard profiles; event notification architecture, including digital Notice to Airmen (NOTAM) events; Weather Information Exchange Model (WXXM) using coverages for encoding weather forecast and radar datasets; supporting on-demand Coordinate Reference System (CRS) transformations; exploring distributed architectures for Units of Measure (UoM), demonstrating probabilistic Terminal Aerodrome Forecasts (TAF) decision making applications; and reviewing/validating WXXM schemas.
Many fusion processes are deployed in closed architectures with existing single provider software and hardware solutions. The goal of the fusion threads is to move those capabilities into a distributed architecture based on open standards including standards for notifications, authentication, and workflow processing. The Aviation Thread builds on work from OWS-6 and OWS-7, addressing certain aviation applications, including flight planning and operations. OWS-8 expands the scope to include ICAO (International Civil Aviation Organization) symbology and portrayal, performance issues, handling digital NOTAM, and other tasks...
OWS testbeds are part of OGC's Interoperability Program, a global, hands-on and collaborative prototyping program designed to rapidly develop, test and deliver proven candidate specifications into OGC's Specification Program, where they are formalized for public release. In OGC's Interoperability Initiatives, international teams of technology providers work together to solve specific geoprocessing interoperability problems posed by the Initiatives' Sponsors. OGC Interoperability Initiatives include testbeds, pilot projects, interoperability experiments and interoperability support services — all designed to encourage rapid development, testing, validation and adoption of OGC standards..."
NIST Invites Comment on Personal Identity Verification Specification
Timothy Polk, Donna Dodson, William Burr (et al, eds), NIST Special Publication
NIST has announced that the Draft Special Publication 800-78-3, Cryptographic Algorithms and Key Sizes for Personal Identity Verification, is now available for public comment. NIST requests comments on draft SP 800-78-3 by 5:00pm EST on December 3, 2010.
The scope of this recommendation encompasses the PIV Card, infrastructure components that support issuance and management of the PIV Card, and applications that rely on the credentials supported by the PIV Card to provide security services. The recommendation identifies acceptable symmetric and asymmetric encryption algorithms, digital signature algorithms, key establishment schemes, and message digest algorithms, and specifies mechanisms to identify the algorithms associated with PIV keys or digital signatures. Algorithms and key sizes have been selected for consistency with applicable Federal standards and to ensure adequate cryptographic strength for PIV applications. All cryptographic algorithms employed in this specification provide at least 80 bits of security strength.
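For context on the "at least 80 bits of security strength" floor, the figures below follow the widely cited NIST SP 800-57 Part 1 equivalences (they are background for the reader, not quoted from SP 800-78-3 itself):

```python
# Approximate security strength in bits for common algorithms, following
# NIST SP 800-57 Part 1. Shown for context; not taken from SP 800-78-3.
SECURITY_STRENGTH = {
    "RSA-1024": 80,
    "RSA-2048": 112,
    "RSA-3072": 128,
    "ECDSA-P256": 128,
    "3TDEA": 112,
    "AES-128": 128,
}

def meets_floor(algorithm: str, floor: int = 80) -> bool:
    """Does the algorithm provide at least `floor` bits of strength?"""
    return SECURITY_STRENGTH.get(algorithm, 0) >= floor

print([a for a in SECURITY_STRENGTH if meets_floor(a)])      # all of them
print([a for a in SECURITY_STRENGTH if meets_floor(a, 128)]) # 128-bit subset
```

Under this mapping, every algorithm listed clears the 80-bit floor, while only the RSA-3072/P-256/AES-128 tier reaches 128 bits.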
Homeland Security Presidential Directive (HSPD) 12 mandated the creation of new standards for interoperable identity credentials for physical and logical access to Federal government locations and systems. Federal Information Processing Standard 201 (FIPS 201), Personal Identity Verification (PIV) of Federal Employees and Contractors, was developed to establish standards for identity credentials... FIPS 201 defines requirements for the PIV lifecycle activities including identity proofing, registration, PIV Card issuance, and PIV Card usage. FIPS 201 also defines the structure of an identity credential that includes cryptographic keys. This document contains the technical specifications needed for the mandatory and optional cryptographic keys specified in FIPS 201 as well as the supporting infrastructure specified in FIPS 201 and the related Special Publication 800-73, 'Interfaces for Personal Identity Verification' (SP800-73), and SP 800-76, 'Biometric Data Specification for Personal Identity Verification' (SP800-76), that rely on cryptographic functions.
In this revision the document has been modified to: (1) align the set of acceptable RSA public key exponents with FIPS 186-3, and (2) permit the use of SHA-1 after 12/31/2010 when signing revocation information, under limited circumstances. In particular, the following changes are introduced in draft SP 800-78-3: [i] the maximum value allowed for the RSA public key exponent is now 2^256; the minimum value allowed for the RSA public key exponent remains 65,537; [ii] CRLs and OCSP status responses that only provide status information for certificates that were signed with RSA with SHA-1 and PKCS #1 v1.5 padding may be signed using RSA with SHA-1 and PKCS #1 v1.5 padding through 12/31/2013..."
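The new exponent bounds reduce to a one-line range check. This sketch adds the standard RSA requirement that the public exponent be odd, which is not restated in the excerpt above:

```python
# Bounds on the RSA public key exponent per draft SP 800-78-3:
# 65,537 <= e <= 2^256. The oddness test is the standard RSA requirement
# (e must be coprime to the totient), added here for completeness.
MIN_EXPONENT = 65537
MAX_EXPONENT = 2 ** 256

def exponent_acceptable(e: int) -> bool:
    return e % 2 == 1 and MIN_EXPONENT <= e <= MAX_EXPONENT

print(exponent_acceptable(65537))    # True: the common value F4 = 2^16 + 1
print(exponent_acceptable(3))        # False: below the new minimum
print(exponent_acceptable(2 ** 256)) # False: even, despite being the bound
```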
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: email@example.com
Newsletter unsubscribe: firstname.lastname@example.org
Newsletter help: email@example.com
Cover Pages: http://xml.coverpages.org/