This issue of XML Daily Newslink is sponsored by:
Sun Microsystems, Inc. http://sun.com
- Mobile Web Application Best Practices
- SCA Extensions for Event Processing and Pub/Sub
- Introduction to Virtual Service Oriented Grids
- Electricity Savings from Data Center SSDs Could Power an Entire Country
- Revised IETF Internet Draft: vCard XML Schema
- OGC Demonstrates Interop for Building Energy and Construction Costs
- Stereotype Annotations Cut Down XML Configuration in Spring
Mobile Web Application Best Practices
Bryan Sullivan and Adam Connors (eds), W3C Technical Report
Members of the W3C Mobile Web Best Practices Working Group have published a Third Public Working Draft for "Mobile Web Application Best Practices." It is subject to major changes and is therefore not intended for implementation; in particular, the list of Best Practices is not settled yet. This document specifies Best Practices for the development and delivery of Web applications on mobile devices. The recommendations expand upon statements made in the Mobile Web Best Practices 1.0 (BP1), especially those that relate to the exploitation of device capabilities and awareness of the delivery context. Furthermore, since BP1 was written, networks and devices have continued to evolve, with the result that a number of Best Practices that were omitted from BP1 can now be included. The document is primarily directed at creators, maintainers and operators of Web applications. Readers of this document are expected to be familiar with the creation of Web sites, and to have a general familiarity with the technologies involved, such as Web servers, HTTP, and Web application technologies. Readers are not expected to have a background in mobile technologies or previous experience with BP1.
The approach in writing this document has been to collate and present the most relevant engineering practices prevalent in the development community today and identify those that: (a) facilitate the exploitation of device capabilities to enable a better user experience; or (b) are considered harmful and can have non-obvious detrimental effects on the overall quality of the application. The goal of this document is not to invent or endorse future technologies. However, there are a number of cases where explicitly omitting a Best Practice that referred to an emerging technology on the grounds that it was too recent to have received wide adoption would have unnecessarily excluded a valuable recommendation. As such, some Best Practices have been included on the grounds that the Working Group believes that they will soon become fully qualified Best Practices (i.e., in prevalent use within the development community)...
SCA Extensions for Event Processing and Pub/Sub
Boris Lublinsky, InfoQ
A new SCA specification, "Assembly Model Specification Extensions for Event Processing and Pub/Sub," describes the Event Processing and Pub/Sub Extensions for the SCA Assembly Model, which deals with: (1) Event Processing, which is computing that performs operations on events, including creating, reading, transforming, and deleting events or event objects/representations. Event Processing components interact by creating event messages which are then distributed to other Event Processing components. An Event Processing component can, in addition, interact with other SCA components using SCA's regular service invocation mechanisms. (2) Publication and Subscription (often shortened to Pub/Sub), which is a particular style of organizing the components which produce and consume events in which the producing components are decoupled from the consuming components. Components that are interested in consuming events specify their interest through a subscription rather than an interface. The same event may be received by multiple subscribers... The introduction of event processing provides a more loosely coupled method of combining components than using service interfaces. Events place fewer requirements on the components at each end of the communication. Effectively, in event processing it is only the event types that are shared between the producers and the consumers. Even looser coupling can be achieved through the use of Pub/Sub. In this case, producers are not connected directly to any consumers; instead, producers are connected with consumers through a logical intermediary, the pub/sub engine...
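The decoupling described above can be made concrete with a small sketch. This is not SCA's API; it is a minimal, hypothetical pub/sub engine showing the core idea that producers and consumers share only an event type name, never a reference to each other:

```python
from collections import defaultdict

class PubSubEngine:
    """A minimal logical intermediary: producers publish by event type
    name, consumers register interest by event type name, and neither
    side ever holds a reference to the other."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, callback):
        # A subscription expresses interest in an event type, not an interface.
        self._subscribers[event_type].append(callback)

    def publish(self, event_type, event):
        # The same event is delivered to every subscriber of that type.
        for callback in self._subscribers[event_type]:
            callback(event)

engine = PubSubEngine()
audit_log, billing_log = [], []
engine.subscribe("order.created", audit_log.append)   # consumer 1
engine.subscribe("order.created", billing_log.append)  # consumer 2
engine.publish("order.created", {"order_id": 42})      # producer side
# Both consumers received the same event without knowing the producer.
```

Note that adding a third consumer requires no change to the producer, which is exactly the looser coupling the specification is after.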
Events in SCA have an event type associated with them. Each event type is identified by a unique event type name. An event can have no event type metadata associated with it; for example, this can be the case for events which are created by pre-existing non-SCA event sources. SCA has a canonical representation of event types in terms of XML and of event shapes in terms of XML schema. SCA event shapes are describable using an XML infoset, although they don't have to be described using XML Schema; other type systems could be used. SCA events can have a wire format that is not XML. Events can also have programming-language-specific representations. The details of the mapping between language-specific formats and XML are defined by the SCA implementation language specifications.
Bibliographic information: Assembly Model Specification Extensions for Event Processing and Pub/Sub. SCA Version 1.0. April 15, 2009. Technical Contacts: Michael Beisiegel (IBM Corporation), Vladislav Bezrukhov (SAP AG), Dave Booz (IBM Corporation), Martin Chapman (Oracle), Mike Edwards (IBM Corporation), Anish Karmarkar (Oracle), Ashok Malhotra (Oracle), Peter Niblett (IBM Corporation), Sanjay Patil (SAP AG), Scott Vorthmann (TIBCO). OSOA source reference page. See this document as a contribution to the OASIS Service Component Architecture / Assembly (SCA-Assembly) TC by IBM, Oracle, and SAP (noted earlier)...
Introduction to Virtual Service Oriented Grids
Enrique Castro-Leon, Jackson He, et al., InfoQ
While the idea of virtual service-oriented grids may be a new business concept, the technologies that build the groundwork for this idea go back many decades to the early days of computing research. That being said, the combination of these technologies brings non-functional, yet significant capabilities to a system. Virtual service-oriented grids have the capacity to fundamentally change the way business is conducted in much the same way that the Internet did by reinserting a middleman in the form of software, rather than a human. The key to this paradigm shift lies in services, the abstraction of interoperability and reuse... The technologies for virtualization, service orientation, and grid computing have been amply documented in books and the research and trade literature. We do not attempt to go deeper or duplicate the excellent work that other authors have done. Instead, we explore how the interplay between virtualization, service orientation, and grids is fundamentally changing the value economics of how information technology is delivered, and in the process, how organizations that depend on information technology to carry their day-to-day business are affected in turn. The authors believe we are witnessing a true inflection point...
The upper blocks start with the archetype Standard Generalized Markup Language (SGML), developed in the 1960s by Charles Goldfarb, Edward Mosher and Raymond Lorie at IBM. The Hypertext Markup Language (HTML), a derivative of SGML, became the language of the Web and the Internet. The World Wide Web was initially used as a presentation interface for humans, and certain aspects of it evolved into XML Web services for interoperable machine-to-machine communication. This machine-to-machine interface enabled modular composite service-oriented architecture (SOA) applications, first within large enterprises (the inside-out model) and then across enterprises small and large (the outside-in model). The middle blocks track the evolution of virtualization, initially applied to mainframes and eventually to computers based on commodity processors with hypervisors running as an intermediate layer between the hardware and the operating system. The bottom blocks track the evolution of computer hardware, first with the single-instruction, single-data (SISD) style of computation initiated by mainframes in the 1950s. To improve throughput, certain computers were architected to apply a program to multiple data streams (single-instruction, multiple-data, or SIMD). These computers required data with a highly regular structure to take advantage of the extra power, and hence their applicability was limited. These restrictions were relaxed with MIMD (multiple-instruction, multiple-data) computers, which allowed the constituent computers to operate on different data. Initially, nodes in MIMD computers were linked together using proprietary high-speed interconnects, forming supercomputing clusters all located in a single room. This setup was cost prohibitive for most applications except those requiring the highest performance. These restrictions were relaxed for grid computing, where nodes can be geographically distributed, sometimes connected through the Internet...
See also: the associated book
Electricity Savings from Data Center SSDs Could Power an Entire Country
Chris Preimesberger, eWEEK
"SSD market researcher iSuppli says the increased deployment of SSDs could enable the world's data centers to reduce their cumulative electricity consumption by a whopping 166,643 megawatt hours from 2008 to 2013. Most people are already aware that solid-state server and storage disks only use a portion — as little as one-half or less — of the electrical power that a spinning hard disk requires, simply because there are no moving parts that need energy to activate them. As SSDs move slowly but surely into the data center, noticeable dribs and drabs of bottom-line power savings are starting to become reality... Currently, SSDs in data centers are used almost exclusively to power high-speed transactional applications, such as financial services, Web 2.0 services and the like. NAND flash read/write speeds are commonly 100 times faster than those of spinning hard disks. Krishna Chander, iSuppli senior analyst for storage systems: 'SSDs potentially could replace 10 percent of the high-end and high-RPM hard disk drives used in data centers that are 'short stroked' (used for rapid reads and writes of transaction data coming into these drives at fast speeds), rather for storage capacity. Each of these 15,000 RPM serial-attached SCSI (SAS) drives draws about 14 watts during [a normal] day. SSDs, on the other hand, draw about half the power of these HDDs, at an estimated 7 watts. A 50 percent savings in power consumption is a noticeable improvement, so even a small penetration of SSDs in enterprise data centers could result in massive power savings... According to most SSD industry analysts, a 10 percent changeover from HDDs to SSDs over the next four years in high-end, high-transactional data centers is a conservative estimate. Some believe that due to current economic conditions and pressure from the public to 'get greener,' companies are looking to save money on power consumption in any way they can, as quickly as they can. 
Thus, a 20 to 40 percent changeover might be more likely by 2013, some analysts say... The Environmental Protection Agency's Energy Star program, which is scheduled to issue a set of power-saving specifications for servers on May 15, 2009, has determined that data centers in the United States account for about 2 percent of all the power used in the nation. That outstrips the power consumption for all U.S. television sets (there are an estimated 200 million plugged into the power grid), which account for about 1.5 percent of the power-usage total..."
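A quick back-of-envelope check puts the article's per-drive figures in context. The wattages come from the article; the drive-year count derived below is illustrative, not an iSuppli number:

```python
# Figures from the article: a 15,000-RPM SAS drive draws ~14 W, an SSD ~7 W.
HDD_WATTS = 14.0
SSD_WATTS = 7.0
HOURS_PER_YEAR = 24 * 365

watts_saved = HDD_WATTS - SSD_WATTS                       # 7 W per replaced drive
kwh_saved_per_year = watts_saved * HOURS_PER_YEAR / 1000  # about 61.3 kWh/year

# The projected cumulative 2008-2013 savings, and the replacement scale
# (in drive-years) that figure would imply at 7 W saved per drive.
cumulative_mwh = 166_643
drive_years = cumulative_mwh * 1000 / kwh_saved_per_year

print(f"{kwh_saved_per_year:.1f} kWh saved per replaced drive per year")
print(f"about {drive_years:,.0f} drive-years of HDD-to-SSD replacement implied")
```

At roughly 61 kWh saved per drive per year, the cumulative projection corresponds to a few million drive-years of replacement, which is consistent with the article's modest-penetration scenario across the world's data centers.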
Revised IETF Internet Draft: vCard XML Schema
Simon Perreault (ed), IETF Internet Draft
A revised version of the vCard XML Schema specification has been published as an Internet Draft. If approved, this Standards Track RFC will update RFC 2739 and obsolete IETF RFCs 2425, 2426, and 4770. In version -01, the XML schema design has been completely reworked as a result of Working Group feedback: RELAX NG compact syntax is now used, parameters and value types are now elements, and a new XML namespace from IANA's range has been chosen. The "vCard Format Specification" document defines a data format for representing and exchanging information about individuals. It is used for representing and exchanging a variety of information about an individual (e.g., formatted and structured name and delivery addresses, email address, multiple telephone numbers, photograph, logo, audio clips, etc.). It is a text-based format (as opposed to a binary format). This document defines an XML representation for vCard. The underlying data structure is exactly the same, enabling a 1-to-1 mapping between the original vCard format and the XML representation. The XML formatting may be preferred in some contexts where an XML engine is readily available and may be reused instead of writing a stand-alone vCard parser. Design Considerations: The general idea is to map vCard parameters, properties, and value types to XML elements. For example, the "FN" property is mapped to the "fn" element. That element in turn contains a text element whose content corresponds to the vCard property's value. vCard parameters are also mapped to XML elements. They are contained in property elements. Line folding is a non-issue in XML. Therefore, the mapping from vCard to XML is done after the unfolding procedure is carried out. Conversely, the mapping from XML to vCard is done before the folding procedure is carried out...
The original vCard format is extensible. New properties, parameters, data types and values (collectively known as vCard objects) can be registered with IANA. It is expected that these vCard extensions will also specify extensions to the XML format described in this document. This is not a requirement: a separate document may be used instead... A vCard XML parser MUST ignore elements that are not part of this specification. In the original vCard format, the VERSION property was mandatory and played a role in extensibility. In XML, this property is absent. Its role is played by the vCard core namespace identifier, which includes the version number. vCard revisions will use a different namespace. Since vCard also has provisions for extending value enumerations, such as the allowed TYPE parameter values, these values are expressed using tags in XML.
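The property-to-element mapping described above (FN becomes an "fn" element wrapping a "text" value element) can be sketched in a few lines. The namespace URN below is a placeholder, since the draft's actual IANA-assigned namespace is not reproduced here:

```python
import xml.etree.ElementTree as ET

# Placeholder namespace: the draft assigns one from IANA's range,
# not the URN used here.
VCARD_NS = "urn:example:vcard"
ET.register_namespace("", VCARD_NS)  # serialize with a default namespace

def property_to_xml(name, value):
    """Map one vCard property (e.g. FN:Simon Perreault) to its XML form:
    a lowercase property element containing a <text> element whose content
    is the vCard property's value, as the draft describes for FN."""
    prop = ET.Element(f"{{{VCARD_NS}}}{name.lower()}")
    text = ET.SubElement(prop, f"{{{VCARD_NS}}}text")
    text.text = value
    return prop

fn = property_to_xml("FN", "Simon Perreault")
print(ET.tostring(fn, encoding="unicode"))
```

Because line folding does not exist in XML, a real converter would apply this mapping only after unfolding the vCard input, as the draft specifies.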
OGC Demonstrates Interop for Building Energy and Construction Costs
Staff, OGC Announcement
The OGC has announced completion of the AECOO-1 (Architecture, Engineering, Construction, Owner and Operator) Phase 1 Testbed, a 9-month effort to increase interoperability among software used by architects, construction companies, cost estimators and building energy analysts. Effective management of buildings and other capital facilities increasingly requires information exchange among all disciplines and professions that have a stake in the design, construction and operation of those facilities. The AECOO-1 Testbed exchanged building information using Industry Foundation Class (IFC) standards to analyze tradeoffs between construction cost and energy efficiency. This work was preliminary to possible future development of open standards for Web service interfaces. Results will be submitted for consideration by bSa's National Building Information Modeling Standard (NBIMS) Project Committee. AECOO-1 was a broad international effort in which participants cooperated in solving a discrete set of AECOO community problems defined by the sponsors. The testbed also facilitated cooperation among AECOO standards bodies to achieve results no group could achieve alone. AECOO-1 focused on two important aspects of building design and construction: (1) building performance and energy analysis, and (2) quantity take-offs. These topics were explored within the framework of the American Institute of Architects (AIA) Integrated Delivery Process and addressed interoperability involving intelligent building models with 3D geometric capabilities...
The AECOO-1 Testbed looked at streamlining communications between building stakeholders during the conceptual design phase to get an early understanding of the tradeoffs between construction cost and energy efficiency. To that end, the project documented in Information Delivery Manuals (IDMs) the requirements for quantity takeoffs and energy analysis needs, and used these to define Model View Definitions (MVDs) — specific subsets of Industry Foundation Classes (IFCs) — which are needed to integrate requirements into software used during business workflows. AECOO-1 also worked on a mapping of the IDM and MVD requirements to the NBIMS capability maturity model in order to identify building project information that can improve process management and decision making. Participants in the demonstration included: Bentley Systems, Digital Alchemy, Faithful & Gould, Graphisoft, LBNL, PhiCubed/Sofi, Nemetschek NA, NIST, and Tokmo Solutions. The buildingSMART alliance is a council of the National Institute of Building Sciences. The Alliance is the umbrella organization for two permanent projects: the National CAD Standard and the National Building Information Modeling Standard. The Alliance was established to (1) coordinate the profound constructive changes coming to the fragmented real property industry in North America; (2) be the coordination point for fund raising and uniform marketing of member programs; and (3) provide a centralized process for strategic planning, resource allocation and decision making for the member programs. Its collective goal is open interoperability and full lifecycle implementation of building information models. The focus is to guarantee lowest overall cost, optimum sustainability, energy conservation and environmental stewardship to protect the Earth's ecosystem.
Stereotype Annotations Cut Down XML Configuration in Spring
Jim White, DevX.com
Annotations have been part of the Spring Model-View-Controller (MVC) Framework since Spring 2.0. In Spring, annotations can greatly reduce the amount of XML bean configuration and wiring needed. Given the many components of the Spring MVC environment (handler mapping, controller, view resolver, and view), XML configuration can turn unwieldy in a hurry. So, Spring MVC configuration is certainly one area that can really benefit from reduced configuration. As of Spring 2.5, the framework added new annotations to more easily configure and assemble the components of a multi-layered application, such as you might find in an MVC-designed system. In fact, an important type of annotation added in Spring 2.5, stereotype annotations, is for configuring the Spring MVC controller components. Are these new annotations critical to your applications? Rod Johnson (Spring founder, project lead, and CEO of SpringSource) has indicated that the future of Spring MVC lies in the new Spring 2.5 annotations... The component annotations introduced in Spring 2.5 are really just a continuation of the "stereotype" annotations introduced in Spring 2.0. The stereotype annotation that Spring 2.0 introduced was '@Repository'. Stereotype annotations are markers for any class that fulfills a role within an application. This helps remove, or at least greatly reduce, the Spring XML configuration required for these components. Specifically, the roles or stereotypes defined in Spring today include Repository, Service, and Controller. Spring also defines these stereotypes as specializations of a more generic stereotype, Component. The Component annotation allows the Spring team to create new stereotypes in future versions of the framework. It also allows you to define your own stereotype components. So Spring actually defines four stereotype annotations.
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: email@example.com
Newsletter unsubscribe: firstname.lastname@example.org
Newsletter help: email@example.com
Cover Pages: http://xml.coverpages.org/