This issue of XML Daily Newslink is sponsored by:
- Sun Java CAPS: Sun Bolsters SOA Software With Data Management
- AtomPub Multipart Media Creation
- Microsoft Silverlight to Back Ruby, Python in Browser
- NIST Seeks Comments on Scheme to Score IT Security Configurations
- Experiences with the Conversion of SenseLab databases to RDF/OWL
- The Future of BPM at BEA/Oracle
- Microsoft Oslo Platform Advances UML and Declarative Programming
- NIST Envisions 'Thinking Machine'
- Web Services Part 2: WSDL and WADL
- Data Infrastructure Drives NBA Finals
Sun Java CAPS: Sun Bolsters SOA Software With Data Management
Paul Krill, InfoWorld
Sun Microsystems is updating its open source SOA and business integration software, adding a data management option and leveraging enterprise service bus capabilities based on the JBI (Java Business Integration) specification. Event processing and business process management are featured as well. Release 6 of Sun Java CAPS (Java Composite Application Platform Suite) adds the new Sun MDM (Master Data Management) suite, which can be bundled with Java CAPS 6 or acquired separately. The MDM suite provides a single view of customer data across disparate systems, Sun said. With the suite, users can leverage information across their organizations, identify the most valuable customers, find opportunities to cross-sell and up-sell products, and reduce operational costs. Single views can be provided whether the subject is a consumer, patient, citizen, or subscriber. A rules engine is featured for the mapping of fields. Business process management functions in Java CAPS 6 include support for BPEL (Business Process Execution Language) 2, improved performance, and dynamic management of service endpoints. The event processor in the product identifies trends and patterns in real time to address business events and take corrective action. Java CAPS 6 incorporates Sun's GlassFish Enterprise Server application server, which interoperates with .NET environments via WSIT/Project Metro (Web Services Interoperability Technologies), Sun's Web services stack. The NetBeans IDE is also supported in Java CAPS 6. Sun has pledged to open source all its software; components within Java CAPS 6 can still be acquired separately without paying a subscription fee. But the bundled package, which features customer support and includes MDM, costs $100 to $120 per employee per year... Java CAPS 6 enables users to leverage existing and new IT investments to build an agile infrastructure; components can be added and integrated in a modular fashion.
ESB capabilities in version 6 let users build composite applications and connect components and protocols in an SOA. The ESB in Java CAPS 6 features software gained via Sun's SeeBeyond acquisition, but with this version the ESB also supports JBI and Sun's Open ESB effort.
See also: the announcement
AtomPub Multipart Media Creation
Joe Gregorio (ed), IETF Internet Draft
The version -00 Internet Draft "AtomPub Multipart Media Creation" defines how an Atom Publishing Protocol collection should process multipart/related requests, and how a service announces that it accepts multipart/related entities. The Atom Publishing Protocol (IETF RFC 5023) defines Media Collections and how to create a Media Resource by POSTing the media to the Media Collection. RFC 5023 does not define the handling of multipart/related (RFC 2387) representations, nor does it specify how the acceptance of such representations should be advertised in the Service Document. This specification covers both the processing and the Service Document aspects of handling multipart/related content... The primary objective of multipart/related POSTs is to reduce round-trips when creating Media Resources. The typical Media Resource creation scenario takes three round trips: a POST of the media, a GET of the Media Link Entry, and a subsequent PUT of the updated Media Link Entry. This specification reduces that to a single round-trip by allowing the client to package the media and the associated Media Link Entry into a single multipart/related representation, which is POSTed to the Media Collection. The design of the handling of multipart/related representations was aimed at backward compatibility, that is, at keeping non-multipart/related-aware clients fully functional. A second aim was to retain and utilize the expressiveness of the current app:accept element in the Service Document. The last aim was to ease the burden on clients by allowing the multipart representation to be constructed in an order convenient for the client... The applicability of multipart/related representations to AtomPub Collections is restricted to creating new entries in Media Collections.
It does not specify the creation or use of a resource that supports a GET to return the multipart/related representation nor does it specify the creation or use of a resource that supports a PUT of a multipart/related representation.
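The single round-trip described above can be sketched in Python with only the standard library. The collection URL, boundary string, entry content, and media bytes below are invented for illustration, and the exact part ordering and headers should be checked against the draft:

```python
# Sketch: create a Media Resource and its Media Link Entry in one
# round-trip by POSTing a multipart/related body to a Media Collection.
# All URLs and content here are hypothetical examples.
import urllib.request

BOUNDARY = "=-atom-multipart-example"
COLLECTION = "http://example.org/media/"  # hypothetical Media Collection

entry = """<?xml version="1.0"?>
<entry xmlns="http://www.w3.org/2005/Atom">
  <title>Sunset</title>
  <summary>A photo POSTed together with its entry.</summary>
</entry>"""

media = b"\x89PNG..."  # stand-in for real image bytes

# One multipart/related body: the Atom entry part plus the media part.
parts = (
    f"--{BOUNDARY}\r\n"
    "Content-Type: application/atom+xml;type=entry\r\n\r\n"
    f"{entry}\r\n"
    f"--{BOUNDARY}\r\n"
    "Content-Type: image/png\r\n\r\n"
).encode() + media + f"\r\n--{BOUNDARY}--\r\n".encode()

req = urllib.request.Request(
    COLLECTION,
    data=parts,
    headers={"Content-Type":
             f'multipart/related; boundary="{BOUNDARY}"; '
             'type="application/atom+xml"'},
    method="POST",
)
# urllib.request.urlopen(req)  # server would answer 201 Created
```

A non-multipart-aware client simply keeps using the three-request sequence, which is the backward-compatibility property the draft aims for.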
See also: Atom references
Microsoft Silverlight to Back Ruby, Python in Browser
Paul Krill, InfoWorld
NIST Seeks Comments on Scheme to Score IT Security Configurations
William Jackson, Government Computer News
The National Institute of Standards and Technology is developing a system of standardized measurements to evaluate the impact of security configurations on operating systems and applications. A draft document edited by Karen Scarfone and Peter Mell, "Interagency Report 7502: The Common Configuration Scoring System (CCSS)," has been released for public comment. NIST's draft states that "Each security configuration decision can have positive and negative effects of varying degrees to the security of a host... Without a standardized way to quantify these effects, organizations cannot easily make sound decisions as to how each security issue should be addressed, nor can they quantitatively determine the overall security strength or weakness for a host." The report proposes a set of measures for security configuration issues and a formula to combine those measures into scores for each issue, collectively called the Common Configuration Scoring System (CCSS). CCSS is derived from the Common Vulnerability Scoring System (CVSS), which measures the relative severity of vulnerabilities caused by software flaws, and adjusts the basic components of CVSS to focus on security configuration issues rather than software flaws. CVSS provides an open framework for communicating the characteristics and impacts of IT vulnerabilities. Its quantitative model ensures repeatable, accurate measurement while letting users see the underlying vulnerability characteristics used to generate the scores, making it well suited as a standard measurement system for industries, organizations, and governments that need accurate and consistent vulnerability impact scores. Initially, CCSS addresses only configuration issues that are constant over time and environments. It deals with how readily a weakness could be exploited and how exploitation could affect hosts.
Those characteristics are base metrics, and they are the inputs to the equation that calculates a base score. NIST plans to expand CCSS to include environmental metrics, which represent characteristics unique to a particular environment.
See also: NIST CVE
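CCSS reuses CVSS's scoring machinery, so the flavor of a base score can be illustrated with the CVSS v2 base equation. The metric weights below follow the CVSS v2 specification; the specific adjustments NISTIR 7502 makes for configuration issues are not reproduced here:

```python
# Sketch of the CVSS v2 base-score equation, which CCSS adapts for
# security configuration issues. Weights are from the CVSS v2 spec.

AV = {"network": 1.0, "adjacent": 0.646, "local": 0.395}   # Access Vector
AC = {"low": 0.71, "medium": 0.61, "high": 0.35}           # Access Complexity
AU = {"none": 0.704, "single": 0.56, "multiple": 0.45}     # Authentication
CIA = {"none": 0.0, "partial": 0.275, "complete": 0.660}   # C/I/A impact

def base_score(av, ac, au, c, i, a):
    # Impact combines the confidentiality/integrity/availability effects.
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    # Exploitability captures how readily the weakness can be exploited.
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0.0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f, 1)

# Worst case: remotely exploitable, no authentication, total impact.
print(base_score("network", "low", "none",
                 "complete", "complete", "complete"))  # -> 10.0
```

Environmental metrics, which NIST plans to add, would further scale such a score to reflect a particular deployment.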
Experiences with the Conversion of SenseLab databases to RDF/OWL
Matthias Samwald and Kei-Hoi Cheung (eds), W3C Technical Report
Members of the W3C Semantic Web in Health Care and Life Sciences Interest Group (HCLS) have published an Interest Group Note, "Experiences with the Conversion of SenseLab databases to RDF/OWL." One of the challenges facing the Semantic Web in health care and life sciences is that of converting relational databases into Semantic Web formats. The issues and steps involved in such a conversion have not been well documented. To this end, we have created this document to describe the process of converting SenseLab databases into OWL. SenseLab is a collection of relational (Oracle) databases for neuroscientific research. The conversion of these databases into RDF/OWL format is an important step towards realizing the benefits of the Semantic Web in integrative neuroscience research. This document describes how we represented some of the SenseLab databases in the Resource Description Framework (RDF) and Web Ontology Language (OWL), and discusses the advantages and disadvantages of these representations. Our OWL representation is based on the reuse and extension of existing standard OWL ontologies developed in the biomedical ontology communities. The purpose of this document is to share our implementation experience with the community... The SenseLab ontologies will be further integrated with other neuroscientific and biomedical ontologies. User-friendly applications will be developed to query a multitude of interrelated ontologies in a scientifically meaningful way. To this end, we have implemented a prototype Web application called 'Entrez Neuron' that allows the user to query data across multiple sources based on keywords. The user can browse the query results and retrieve more detailed information about neurons based on a 'brain-anatomy/neuron' view. We experienced clear benefits from using Semantic Web technologies for the integration of SenseLab data with other neuroscientific data in a consistent, flexible and decentralised manner.
The main obstacle in our work was the lack of mature and scalable open source software for editing the complex, expressive ontologies we were dealing with. Since the quality of these tools is rapidly improving, this may cease to be an issue in the near future. The detailed analysis of the experiences with the SenseLab ontologies and other complex biomedical ontologies may help drive the improvement of current ontology editors.
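As a rough illustration of the kind of relational-to-RDF conversion the Note describes (not SenseLab's actual pipeline or vocabulary), a single table row can be mapped to triples; all URIs and property names here are hypothetical:

```python
# Illustrative sketch: turn one row of a relational "neuron" table into
# RDF triples serialized as N-Triples. Namespace and properties invented.

row = {"id": 42, "name": "CA1 pyramidal neuron", "region": "hippocampus"}

BASE = "http://example.org/senselab/"  # hypothetical namespace
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

def row_to_ntriples(row):
    # The primary key becomes the subject URI; columns become properties.
    s = f"<{BASE}neuron/{row['id']}>"
    return "\n".join([
        f'{s} <{RDF_TYPE}> <{BASE}Neuron> .',
        f'{s} <{BASE}name> "{row["name"]}" .',
        f'{s} <{BASE}locatedIn> <{BASE}region/{row["region"]}> .',
    ])

print(row_to_ntriples(row))
```

Once in RDF, such data can be merged with other ontologies simply by sharing URIs, which is the integration benefit the authors report.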
The Future of BPM at BEA/Oracle
Bruce Silver, Intelligent Enterprise, BPMS Watch
"Since TIBCO-Staffware and BEA-Fuego, both of which seemed crazy to me at the time, I've had a change of heart about consolidation in the BPMS business. At the time of those acquisitions, integration middleware vendors had one view of what BPM is—essentially a business wrapper around SOA—and workflow vendors plus the BPM pureplays had different one, focused on improving "work" and optimizing business performance. And it was not clear which vision would prevail. The middleware vendors were certainly bigger companies with more cash and resources, and in the software industry bigger usually wins. But TIBCO and BEA, confounding my own expectations, did not embed their acquisitions as a human workflow subcomponent underneath their existing integration-oriented suite, but instead made the acquired company the centerpiece of their BPM offering. In fact SOA, the bigger business at both TIBCO and BEA, became the sub-component, with BPM at the top... Oracle uses BPMN modeling in an OEM version of IDS Scheer ARIS (extended with some Oracle-specific configuration dialogs for human tasks, business rules, and notifications) to generate skeleton BPEL that is fleshed out in the SOA Suite design tool. There is a simplified BPEL outline called Blueprint intended to serve as a diagram shared by business and IT to eliminate the roundtripping problem, but it's not as clean as a true BPMN-based design. ALBPM uses a common graphical notation for the process model and the implementation design. In version 6.1, that notation has been made (mostly) BPMN-compliant. I think this is the right way to do it, so on this point score one for ALBPM... if Oracle's goal is to maximize success in the "straight" BPM market, making ALBPM the environment for both modeling (replacing ARIS) and end-to-end implementation makes the most sense, moving SOA Suite (BPEL) down to the SOA layer and replacing the links to AquaLogic SOA components with links to their Oracle Fusion counterparts.
Microsoft Oslo Platform Advances UML and Declarative Programming
Paul Krill, InfoWorld
See also: TechTarget
NIST Envisions 'Thinking Machine'
Kathleen Hickey, Government Computer News
When is a word not a word? When it doesn't have a definition. When is a group not a group? When no one knows its members. Paradoxes like these must also be addressed in the technology world: data classes must be created and the relationships among data understood. Resolving such issues is the province of ontologists, experts in word meaning and in using appropriate words to build actionable machine commands. They have reached a conceptual agreement to create a technology system that would make it possible for programmers to build thinking machines that reason about complex problems. The agreement was reached at the two-day Ontology Summit held during the National Institute of Standards and Technology's Interoperability Week last month. The summit was a joint initiative of NIST, the Ontolog virtual community of practice, and the National Center for Ontological Research. Steve Ray, NIST's manufacturing systems integration chief: "The Ontology Summit established the critical set of requirements and ground rules needed before we can begin serious construction of the repository; it will save enormous amounts of time and money and facilitate new, complex systems in all sectors for manufacturing control, supply chain management and even biomedical management systems." Ontologies can be used to answer queries, publish reusable knowledge bases, export data to other systems, search across databases, and facilitate interoperability across multiple, heterogeneous systems and databases. The agreement calls for an electronic, scalable, open ontology repository containing diverse collections of concepts, such as dictionaries, compendiums of medical terminology, and product classifications. The system would enable distinguishable, computable, reusable, and sharable information, including data, documents and services. Ontologists envision users having the ability to search and query across and within the different ontology sections of the repository.
Its information would range from conceptual domains and specific disciplines of communities to technical schemas such as the Resource Description Framework (RDF), part of the World Wide Web Consortium's specifications and originally designed as a metadata data model; the Web Ontology Language (OWL), a family of knowledge representation languages for authoring ontologies; and Common Logic, a family of logic languages intended to facilitate the exchange and transmission of knowledge in computer-based systems; in addition to standard Internet languages such as the Extensible Markup Language (XML).
See also: W3C OWL
Web Services Part 2: WSDL and WADL
Brennan Spies, Ajaxonomy.com Blog
This article discusses the reasons for defining the web service contract between client and server, the existing methods for doing so, and the important concepts of each. An important part of any web service is the contract (or interface) it defines between the service and any clients that might use it. This is important for a number of reasons: visualization with tools, interaction with other specifications (e.g., web service choreography), code generation, and enforcing a high-level agreement between the client and the service provider (one that still gives the service freedom to change the underlying implementation). Taken together, these are pretty compelling use cases for having web service contracts, although advocates of minimalism may disagree... With the rise in popularity of RESTful web services, a need arose to describe contracts for these types of web services as well. Although WSDL 2.0 attempts to fill the gap by providing support for HTTP binding, another specification fills this need in an arguably better way: WADL, a specification developed at Sun by Marc Hadley. Though it has not been submitted to any official standards body (OASIS, W3C, etc.), WADL is promising because of its more comprehensive support for REST-style services... Contract-first [vs. the "code-first" approach] is generally considered to be best practice in order to shield the consumers of a service from changes in the underlying code base. By providing an XML-based contract, you are also protecting the client from the vagaries of how different Web Service toolkits generate contracts from code, differences in the way that language types are translated to XML types, etc. Though writing WSDL or WADL rather than code may involve some additional learning curve at the beginning, it pays off in the long run with more robustly designed services... WADL does a nice job of capturing the style of REST.
As with any other technology, though, most will wait to use it until it sees some significant adoption.
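As a small illustration of what a contract-first description looks like, here is a minimal, invented WADL document parsed with Python's standard library. Element names follow the WADL member submission, but the service base URL and resource paths are hypothetical:

```python
# Parse a tiny WADL contract and list each resource's HTTP methods.
# The described service is invented for illustration.
import xml.etree.ElementTree as ET

WADL_NS = "{http://wadl.dev.java.net/2009/02}"

wadl = """<application xmlns="http://wadl.dev.java.net/2009/02">
  <resources base="http://example.org/api/">
    <resource path="orders/{id}">
      <method name="GET"/>
      <method name="PUT"/>
    </resource>
  </resources>
</application>"""

root = ET.fromstring(wadl)
for res in root.iter(WADL_NS + "resource"):
    methods = [m.get("name") for m in res.findall(WADL_NS + "method")]
    print(res.get("path"), methods)
```

A toolkit consuming such a contract could generate client stubs for each resource/method pair, which is the code-generation benefit the article mentions.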
See also: Part 1
Data Infrastructure Drives NBA Finals
Scot Petersen, eWEEK
The NBA's data infrastructure enables its game and statistical information to be shared around the world in an instant: one could start calling the NBA the NDA (National Data Association). Data collection starts right at courtside, where the NBA's Precision Time System connects the scoreboard and shot clocks to the referees' belt units and whistles and on to an LED lighting system encircling the floor. Refs can start the clock with their belt units and stop it via a sensor in their whistles. The stoppage lights up the LED system, which also lights up when the clocks read zero. The timing system is linked to the statistics center at courtside, which logs who stopped the clock and when. That data is fed into a set of Lenovo X60 Tablet notebooks with integrated touch screens and software by IDS (Information & Display Systems). Statisticians touch a layout of the court, noting which team has possession, who took a shot and whether he missed or made it, who dished out the assist, who grabbed a rebound, and who committed a foul. The data collected on these notebooks is instantly transmitted around the Garden to monitors for the media and broadcasters... The data feed is also transmitted over a closed-network T-1 line to NBA headquarters in Secaucus, N.J., where the information is put online, giving real-time game information and box scores on NBA.com. The Web site saw record traffic in 2008 of 1.2 billion visits and 300 million video streams, said Steve Grimes, vice president of Interactive Services for NBA Entertainment. The league also controls the video feeds coming from the TV networks, putting that online at NBA.com, as well as tagging each video segment with information such as team, player, type of play and time of game. The data is fed into a database that enables the league and teams to call up specific play scenarios as a review, coaching and game-planning tool...
XML Daily Newslink and Cover Pages are sponsored by:
- BEA Systems, Inc.
- Sun Microsystems, Inc.
XML Daily Newslink: http://xml.coverpages.org/newsletter.html
Newsletter Archive: http://xml.coverpages.org/newsletterArchive.html
Newsletter subscribe: email@example.com
Newsletter unsubscribe: firstname.lastname@example.org
Newsletter help: email@example.com
Cover Pages: http://xml.coverpages.org/