The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: August 26, 2003
Conceptual Modeling and Markup Languages

[January 24, 2001] Provisional sketch.

The problem: markup languages are now commonly pressed into service as "data modeling languages" and "conceptual modeling languages" although the particular features of (SGML/XML) markup languages render them unsuitable to the task. More appropriate modeling formalisms are needed, which (1) can capture essential semantic relationships [express, constrain, validate the relations]; (2) impose no artificial requirements based upon the limitations of a presumed transfer [interchange/import/export] syntax; (3) support principles of semantic transparency as the preeminent concern; (4) are accessible to and optimized for use by the principal domain experts and 'end users' as stakeholders [software engineers and programmers are secondary stakeholders]; (5) are sufficiently formal as to support testing for conceptual integrity, and more or less directly implementable through other interfaces. One guess is that conceptual modeling languages are relevant to this concern.

The following reference list should begin with a significant mini-essay on the relationship between conceptual modeling languages and (bracketed/braced meta-) markup languages. I don't have such an essay at present, but see the mini-rant in the "February 28 2000" entry below -- written apparently in a moment of frustration over the "syntax-centric" myopia in the markup language community. Such an essay would summarize why markup languages -- which prescribe serialization structures optimized for data interchange but do not formally support expression of semantic relationships -- do not make good conceptual modeling languages. The essay would summarize why the "real" interoperability problems are not solved by XML or any other markup-level formalism which is unable to formally express and constrain relationships. By "relationships" I mean Object relationships, independent of any artificial limitations having to do with a serialization model, notions of tree/graph/hierarchy, and particular syntax. The essay would clarify how a general-purpose conceptual modeling language (without prescribing a specific modeling methodology) would constitute a powerful modeling tool supporting the design of interoperable computing applications. The conceptual modeling language would be able to express (naturally, transparently) semantic relations and appropriate integrity constraints which help ensure that instances of objects and property values are semantically "valid". From a conceptual model representation one may programmatically generate graphical models for visualization (e.g., UML diagrams), XML schemas, DTDs, other schemas, and so forth.
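The last point -- generating concrete schemas from a conceptual model -- can be sketched in a few lines. The toy model below (entity and attribute names are invented for illustration, not drawn from any tool cited here) is held as plain Python data, from which a DTD fragment is emitted mechanically:

```python
# Toy conceptual model: entity names mapped to attribute lists and
# one-to-many relationships (all names here are hypothetical).
model = {
    "Book": {"attrs": ["title", "isbn"], "has_many": ["Author"]},
    "Author": {"attrs": ["name"], "has_many": []},
}

def model_to_dtd(model):
    """Emit one DTD element declaration per entity: child entities
    first (one-or-more), then attributes as PCDATA sub-elements."""
    lines = []
    for entity, spec in model.items():
        children = [f"{c}+" for c in spec["has_many"]] + spec["attrs"]
        lines.append(f"<!ELEMENT {entity} ({', '.join(children)})>")
        for attr in spec["attrs"]:
            lines.append(f"<!ELEMENT {attr} (#PCDATA)>")
    return "\n".join(lines)

print(model_to_dtd(model))
```

The same traversal could just as easily emit a W3C XML Schema or a RELAX NG pattern; the point is that the serialization syntax is a derived artifact, not the model itself.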

Everyone agrees upon the basic issues, I think: (1) the (quint)essence of the interoperability problem is semantics, not syntax; (2) markup languages don't address semantics. To award priority to syntax (with formal language support) and relegate semantics to the "informal prose documentation" is backwards, even if we grant that semantics is not completely formalizable. Current work on the creation of a "semantic web" signifies progress; but are we still missing something? Suppose we say that ontology specification languages and schema languages (RDF, OIL, DAML, TopicMaps, etc.) are aspects of (and perspectives on) the "semantic web" problem... what if a "conceptual modeling formalism" supplies a framework within which these approaches come together? What are the requirements for a general, large-scale, conceptual modeling (meta)language?

[Provisional] references, food for thought. Please contribute references for this list...

  • "XML and 'The Semantic Web'." This document references a number of ontology specification languages.

  • [August 26, 2003]   Preliminary Program for ER2003 International Conference on Conceptual Modeling.    A provisional program listing has been published for the 22nd International Conference on Conceptual Modeling, to be held October 13-16, 2003 in Chicago, IL, USA. The Conference will incorporate four workshops on special aspects of conceptual modeling: eCOMO2003: Conceptual Modeling Approaches for e-Business; IWCMQ: International Workshop on Conceptual Modeling Quality; AOIS: Agent-Oriented Information Systems; XSDM 2003: Workshop on XML Schema and Data Management. In addition to the four workshops, the conference will feature two pre-conference tutorials: "Object-Process Methodology and Its Application to the Visual Semantic Web" and "Data Modeling using XML." The four keynote presentations will address concepts central to the Semantic Web and Web Services, including Semantic Web application modeling, the interplay of data quality and data semantics, XML in Enterprise Information Integration, and agent-based workflow systems. "ER" in the conference short title ER2003 reflects the original roots of the conference, which focused on the Entity-Relationship Model. The International Conference on Conceptual Modeling "provides a forum for presenting and discussing current research and applications in which conceptual modeling is the major emphasis. There has been a dramatic impact from trends of increased processing power, storage capacity, network bandwidth, interconnectivity, and mobility of computing devices. As processes and interactions in this environment grow more complex, proper design becomes more important. Conceptual modeling continues to have a vital role in advanced information systems development." The foundational role of conceptual modeling in XML application design is captured by Michael Kay's comment on hierarchical data models: "XML is hierarchical [because] it is optimized for data interchange... 
this absolutely gives you a design challenge because the models that you get from your data analysis are graphs rather than trees."
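Kay's graphs-versus-trees point can be made concrete: a shared node in a graph cannot appear twice in a strict tree, so XML designs typically break the extra edges into ID/IDREF-style links. A small sketch using the standard library (the element and attribute names are invented for illustration):

```python
import xml.etree.ElementTree as ET

# Graph shape: two orders reference the same customer. A tree can hold
# the customer element only once; each extra edge becomes an IDREF.
root = ET.Element("orders")
customer = ET.SubElement(root, "customer", id="c1")
customer.text = "ACME Corp"
ET.SubElement(root, "order", custref="c1", item="widget")
ET.SubElement(root, "order", custref="c1", item="gadget")

serialized = ET.tostring(root, encoding="unicode")
print(serialized)
```

The customer appears once as element content and twice by reference -- exactly the design compromise forced when a data-analysis graph is squeezed into an interchange tree.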

  • [January 17, 2003] "From Model to Markup: XML Representation of Product Data." By Joshua Lubell (US National Institute of Standards and Technology, Manufacturing Systems Integration Division). Paper presented December 2002 at the XML 2002 Conference, Baltimore, MD, USA. With Appendix (Complete PDM Example with EXPRESS Schema, RELAX NG Schema [Compact Syntax], RELAX NG Schema [XML Syntax], W3C XML Schema, Flat XML Data, Hierarchical XML Data). "Business-to-consumer and business-to-business applications based on the Extensible Markup Language (XML) tend to lack a rigorous description of product data, the information generated about a product during its design, manufacture, use, maintenance, and disposal. ISO 10303 defines a robust and time-tested methodology for describing product data throughout the lifecycle of the product. I discuss some of the issues that arise in designing an XML exchange mechanism for product data and compare different XML implementation methods currently being developed for STEP... ISO 10303, also informally known as the Standard for the Exchange of Product model data (STEP), is a family of standards defining a robust and time-tested methodology for describing product data throughout the lifecycle of the product. STEP is widely used in Computer Aided Design (CAD) and Product Data Management (PDM) systems. Major aerospace and automotive companies have proven the value of STEP through production implementations. But STEP is not an XML vocabulary. Product data models in STEP are specified in EXPRESS (ISO 10303-11), a modeling language combining ideas from the entity-attribute-relationship family of modeling languages with object modeling concepts. To take advantage of XML's popularity and flexibility, and to accelerate STEP's adoption and deployment, the ISO group responsible for STEP is developing methods for mapping EXPRESS schemas and data to XML. 
The rest of this paper discusses some of the issues that arise in designing an XML exchange mechanism for product data. It also compares two implementation approaches: a direct mapping from EXPRESS to XML syntax rules, and an indirect mapping by way of the Unified Modeling Language (UML)... Observations: Developers of mappings from product data models to XML have learned some valuable lessons. The most important lesson is that, although XML makes for an excellent data exchange syntax, XML is not well suited for information modeling. After all, if XML were a good modeling language, it would be much easier to develop a robust mapping from EXPRESS to XML. UML, on the other hand, is a good modeling language and, therefore, it should be easier to map EXPRESS to UML than to XML. Although mapping EXPRESS to UML does not make the difficulties of representing product models in XML go away, it offloads a significant part of the effort from the small and resource-strapped STEP developer community to the much larger and resource-rich UML developer community. Another lesson learned is that the choice of XML schema language used in the mapping is not very important, as long as the schema language supports context-sensitive element types and provides a library of simple data types for items such as real numbers, integers, Booleans, enumeration types, and so on. Therefore, both the W3C XML Schema definition language and RELAX NG are good XML schema language choices. The DTD is a poor schema language choice for the reasons discussed earlier..." See Lubell's RELAX NG List Note of 2003-01-17: "Although David Carlson's "Modeling XML Applications with UML" (Addison Wesley) mentions WXS but not RELAX NG, his hyperModel tool can generate both WXS and RELAX NG schemas from XMI. Carlson used to have a web-accessible form where you could upload an XMI file and process it using hyperModel, but the website wasn't working the last time I tried to use it. 
He has also implemented hyperModel as a plug-in to Eclipse and is selling this as a commercial product. I haven't come across any documentation for Carlson's UML-to-RELAX NG mapping. The RELAX NG in my XML 2002 paper "From Model to Markup" was created using the web-accessible hyperModel tool, although I had to tweak the output in order to handle bidirectional UML associations and to use interleave properly..."

  • [September 30, 2002]   STARLab Object Role Modeling Markup Language (ORM-ML) Represents ORM Models in XML.    The research team at STARLab (Systems Technology and Applications Research Laboratory, Vrije Universiteit Brussel) has developed an Object Role Modeling markup language (ORM-ML) for representing ORM models in an XML-based syntax. Stylesheets may be written to convert this ORM-ML syntax into other syntaxes for processing by business rule engines. The team "has chosen ORM for its rich constraint vocabulary and well-defined semantics and elected to use XML Schema to define this communication 'protocol' for conceptual schemas seen as XML document instances. The design approach respects the ORM structure as much as possible by not 'collapsing' it first through the usual relational transformer that comes with most ORM-based tools (or UML, or EER tools). ORM-ML allows the representation of any ORM schema without loss of information or change in semantics, except for the geometry and topology (graphical layout) of the schema (e.g., location, shapes of the symbols), which easily may be provided as a separate graphical style sheet to the ORM Schema." [Full context]
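The general idea of serializing a conceptual schema as an XML instance can be illustrated without committing to the real ORM-ML vocabulary. In the sketch below the element names are invented stand-ins, not STARLab's actual tags; the point is that fact types, roles, and constraints become first-class XML structures a downstream stylesheet or rule engine can walk:

```python
import xml.etree.ElementTree as ET

# Hypothetical ORM-style schema instance (element names are NOT the
# real ORM-ML vocabulary; they only illustrate the serialization idea).
schema_xml = """
<ormSchema name="Library">
  <objectType name="Book"/>
  <objectType name="Person"/>
  <factType name="wrote">
    <role player="Person"/>
    <role player="Book"/>
  </factType>
  <uniquenessConstraint spans="wrote"/>
</ormSchema>
"""

schema = ET.fromstring(schema_xml)
# A stylesheet or business rule engine could traverse the fact type
# and its role players like this:
fact_roles = [r.get("player") for r in schema.find("factType")]
print(fact_roles)
```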

  • [September 28, 2002] "ConTeXtualized Local Ontology Specification via CTXML." By Paolo Bouquet and Stefano Zanobini (Dept. of Computer Information and Communication Technologies, University of Trento, Italy), Antonia Donà and Luciano Serafini (ITC-Irst, Trento, Italy). Presented at the AAAI-02 Workshop on Meaning Negotiation (MeaN-02), July 28, 2002, Edmonton, Alberta, Canada. 8 pages, with 6 references. "In many application areas, such as the semantic web, knowledge management, distributed databases, it has been recognized that we need an explicit way to represent meanings. A major issue in all these efforts is the problem of semantic interoperability, namely the problem of communication between agents using languages with different semantics. Following [a paper by Bonifacio, Bouquet, and Traverso], we claim that a technological infrastructure for semantic interoperability between 'semantically autonomous' communities must be based on the capability of representing local ontologies and mappings between them, rather than on the attempt of creating a global, supposedly shared, conceptualization. The goal of this paper is to define a theoretical framework and a concrete [XML-based] language for the specification of local ontologies and mappings between them... Despite the effort for defining a standard semantics for various domains, people seem to resist such attempts at homogenization. Partly, this is due to practical problems (it can be very costly to change the overall organization of a database, or the classification of large collections of documents). But we believe that there are also theoretical reasons why this homogeneity is not accepted, and in the end is not even desirable. In fact, lots of cognitive and organizational studies show that there is a close relationship between knowledge and identity. 
Knowledge is not simply a matter of accumulating 'true sentences' about the world, but is also a matter of interpretation schemas, contexts, mental models, [and] perspectives which allow people to make sense of what they know. Therefore, any attempt at imposing external interpretation schemas (and a definition of meaning always presupposes some interpretation schema, at least implicitly) is perceived as an attack on an individual's or a community's identity. Moreover, interpretation schemas are an essential part of what people know, as each of them provides an alternative lens through which reality can be read. Thus, imposing a single schema is always a loss of global knowledge, as we throw away possibly innovative perspectives... If we accept that interpretation schemas are important, then we need to approach the problem of semantic interoperability from a different perspective. Instead of pushing towards a greater uniformity, we need a theoretical framework in which: (1) different conceptualizations (called 'local ontologies') can be autonomously represented and managed (and, therefore, we call them contextualized); (2) people can discover and represent relationships between local ontologies; (3) the relationships between local ontologies can be used to provide semantic-based services without destroying the 'semantic identity' of the involved parties... We see meaning negotiation as the process that dynamically enables agents to discover relationships between local ontologies. The goal of this paper is to create an 'environment' in which the preconditions for meaning negotiation are satisfied. In particular, on the one hand, we define a theoretical framework in which local ontologies and mappings between them can be represented; on the other hand, we provide a language for describing what we call a context space, namely a collection of contexts and their mappings; this language is called ConTeXt Markup Language (CTXML) and is based on XML and XML-Schema. 
Local ontologies are represented as contexts... we will use knowledge management as our main motivation for contextualized local ontologies; however, as we said at the beginning, we believe that similar motivations can be found in any semantically distributed application, e.g., the semantic web..." [cache]

  • "CaseTalk is a CASE tool which enables you to create conceptual models based upon elementary fact expressions in natural language... As a conceptual modeling tool, CaseTalk is somewhat difficult for technical system developers to place in the development life cycle... End users are able to validate the model by the re-verbalizing capabilities of CaseTalk. CaseTalk supports developers and database designers by transforming the conceptual model into different logical models like UML class diagrams and ER models. Diagramming further supports analysts, developers, and others involved in the modeling and documenting cycles. CaseTalk is founded upon a solid methodology called Fully Communication Oriented Information Modeling (FCO-IM), which is the most prominent successor of NIAM, also known as Object Role Modeling... With CaseTalk you create business information models without assumptions as to implementation technology. Pure conceptual models based on plain natural language, that everybody can understand and verify..." Contact: Marco Wobben.

  • [September 28, 2002] "Linguistic Based Matching of Local Ontologies." By Bernardo Magnini, Luciano Serafini, and Manuela Speranza (ITC-Irst Istituto per la Ricerca Scientifica e Tecnologica, Trento, Italy). Presented at the AAAI-02 Workshop on Meaning Negotiation (MeaN-02), July 28, 2002, Edmonton, Alberta, Canada. "This paper describes an automatic algorithm of meaning negotiation that enables semantic interoperability between local overlapping and heterogeneous ontologies. Rather than reconciling differences between heterogeneous ontologies, this algorithm searches for mappings between concepts of different ontologies. The algorithm is composed of three main steps: (i) computing the linguistic meaning of the label occurring in the ontologies via natural language processing, (ii) contextualization of such a linguistic meaning by considering the context, i.e., the ontologies, where a label occurs; (iii) comparing contextualized linguistic meaning of two ontologies in order to find a possible matching between them... [not by reconciling] differences, but by designing systems that will enable interoperability (in particular, semantic interoperability) between autonomous communities. Autonomous communities organize their 'local knowledge' according to a local ontology. A local ontology is a set of terms and relations between them used by the members of the autonomous community to classify, communicate, update, and, in general, to operate with local knowledge. Materializations of a local ontology can be, for instance, the logical organization of a web site used by the community to share information, the directory structure of a shared file system, the schema of a database used to store common knowledge, the tag-structure of an XML schema document used to describe documents or services shared by the members of the community. 
In all these cases, we think that two of the main intuitions underlying local ontologies are the following: (1) Each community (team, group, and so on) within an organization has its own conceptualization of the world, which is partial (i.e., covers only a portion of the world), approximate (i.e., has a degree of granularity), and perspectival (i.e., reflects the community's viewpoint on the world -- including the organization and its goals and processes); (2) There are possible mappings between different and autonomous conceptualizations. These mappings cannot be defined beforehand, as they presuppose a complete understanding of the two conceptualizations, which in general is not available. This means that these mappings are discovered dynamically via a process that we call meaning negotiation... In Section 2 we define a theoretical framework, the context space, where local ontologies and mappings between local ontologies are represented. A context space is composed of a set of contexts and a set of mappings. Contexts are the main data structure used to represent local knowledge; mappings represent the results of matching two (or in general many) contexts. In the section 'Linguistic-based interpretation' we describe the computation of the local semantics of a context. Knowledge in a context is represented by structured labeled 'small' linguistic expressions, such as complex noun phrases, prepositional phrases, abbreviations, etc. The semantics of this structure is computed by combining the semantics of each single label... In the last section we describe how the local semantics of the labels of different contexts are compared in order to find possible overlaps and mappings between two structures, and finally we draw some conclusions..." [cache]
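The three steps of such a matching algorithm can be caricatured in a few lines. This is emphatically not the authors' algorithm (which uses real natural language processing); it substitutes a naive token-overlap measure, with the "context" of a label taken to be its ancestor path in the ontology:

```python
# A naive stand-in for the paper's three steps (NOT the authors'
# algorithm): tokenize labels, contextualize each label with its
# ancestor path, and compare via token overlap (Jaccard similarity).
def contextualize(path):
    """Steps (i)+(ii): the 'meaning' of a label here is simply the
    token set of the label plus all of its ancestor labels."""
    return set(token.lower() for label in path for token in label.split())

def match(path_a, path_b):
    """Step (iii): Jaccard similarity between contextualized meanings."""
    a, b = contextualize(path_a), contextualize(path_b)
    return len(a & b) / len(a | b)

# Two local ontologies classifying the same topic differently:
score = match(["Trips", "Italy", "Florence"],
              ["Travel", "Europe", "Florence"])
print(round(score, 2))
```

Even this toy version shows the key move: the same label ("Florence") acquires different contextualized meanings under different ancestor paths, and matching operates on those meanings rather than on the bare labels.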

  • [April 19, 2001]   Addison-Wesley Publishes "Modeling XML Applications with UML".    Dave Carlson of Ontogenics Corporation has completed a major published work on XML/UML modeling: Modeling XML Applications with UML. Practical e-Business Applications. The book is now available from Addison-Wesley as part of the 'Object Technology Series' edited by Grady Booch, Ivar Jacobson, and James Rumbaugh. Carlson's book "focuses on the design and visual analysis of XML vocabularies. It explores the generation of DTD and Schema languages from those vocabularies, as well as the design of enterprise integration and portals -- all using UML class diagrams and use case analysis. Also featured are extensive details on the deployment of XML vocabularies and portals, showing how to put these elements to work within distributed e-business systems. You will learn practical techniques that can be applied to both small and large system development projects using either formal or informal processes. Topics covered in the book include: An overview of XML vocabularies, HTML presentations, and XSLT stylesheets; An overview of the UML diagrams and the Unified Process; Defining business vocabulary and creating XML Schemas; Designing and customizing e-business portals using XML; Mapping UML to XML, including UML relationships to XML hyperlinks; Generating XML Schemas from the UML class diagrams; Transforming custom XML vocabularies into the RosettaNet XML standard; Transforming XML vocabularies into HTML using XSLT; Transforming XML documents into Portlets; Simple Object Access Protocol (SOAP) as an XML-based messaging standard for business-to-business integration. Dave Carlson's web site references additional white papers and examples for XML/UML modeling, including "Modeling the UDDI Schema with UML" and "Modeling XHTML with UML." [Full context]

  • [October 3, 2001] UN/CEFACT UML to XML Design Rules Project Team. "The purpose of the project is to produce a set of formal syntax production rules, describing in a very detailed and strict way how to convert standardized business messages, which are defined in UMM-compliant UML class diagrams, into physical XML representations. The project deliverables include a Technical Specification describing the set of formal design rules needed to convert the UML class diagram representing a standardized business message into a physical XML representation. This Technical Specification will also include specifications for the UML representation of a standardized business message. The project team will consist of a group of experts who have broad knowledge in the area of Electronic Business (ebXML, UN/EDIFACT, and national EDI/eBusiness), XML technologies, XML implementation, use of UML and technical specification development." See the details of the project (proposal); cache

  • ER 2000. 19th International Conference on Conceptual Modeling. October 9-12, 2000. Note Session R7: "Conceptual Modeling and XML." Session Chair: Tok Wang Ling (National University of Singapore, Singapore) (1) "Object Role Modelling and XML-Schema," by Linda Bird, Andrew Goodchild (The University of Queensland, Australia), and Terry Halpin (Microsoft Corporation, USA). (2) "Constraints-Preserving Transformation from XML Document Type Definition to Relational Schema," by Dongwon Lee and Wesley W. Chu (University of California at Los Angeles, USA). (3) "X-Ray: Towards Integrating XML and Relational Database Systems," by Gerti Kappel, Elisabeth Kapsammer, Stefan Rausch-Schott, and Werner Retschitzegger (University of Linz, Austria). Note that the Second International Workshop on the World Wide Web and Conceptual Modeling (WCM2000) was held in conjunction with ER2000. See in the CFP: "In this workshop, we propose to explore ways we can apply conceptual modeling to address these challenges. In particular, we wish to consider building conceptual data models to describe knowledge about patterns of data, about objects in our models, and about interactions among these objects and with Web users. These conceptual models should provide the basis for high-quality organization, presentation, search, information extraction, intelligent caching, electronic business processing, and Web site development and management." [cache]

  • Journal of Conceptual Modeling. "The Journal of Conceptual Modeling is a free publication sponsored and published by InConcept, a Minnesota-based information services consulting company that specializes in data modeling using Object-Role Modeling. JCM is a free journal dedicated to data modeling, design, and implementation issues. The goal of this publication is to promote communication between professionals, share knowledge, and to educate our readers. The target audience is large: database professionals and developers, end users and business professionals, students and teachers, and anyone else using, developing, or considering development of a database system."

  • "UML and Object Role Modeling (ORM)." "Dr. Halpin discusses the Unified Modeling Language within the context of Object Role Modeling (ORM) and shows how ORM models can be used in conjunction with UML models. Although the Unified Modeling Language (UML) facilitates software modeling, its object-oriented approach is arguably less than ideal for developing and validating conceptual data models with domain experts. Object Role Modeling (ORM) is a fact-oriented approach specifically designed to facilitate conceptual analysis and to minimize the impact of change. Since ORM models can be used to derive UML class diagrams, ORM offers benefits even to UML data modelers. This 10-part series provides a comparative overview of both approaches..."

  • VisioModeler 3.1. Microsoft Unsupported Product Edition. "If you use the Microsoft Visio 2000 Enterprise Edition to create Object Role Modeling (ORM) conceptual information models, you may want to download Microsoft VisioModeler. Using the Microsoft VisioModeler program, you can display several ORM constructs in diagram form -- for example, nested relationships and any constraint other than internal uniqueness or simple mandatory -- that you can't currently display in Microsoft Visio. You can export models you build with VisioModeler to Microsoft Visio 2000 Enterprise Edition to take advantage of updated database drivers and other features. No further development is planned for the Microsoft VisioModeler program, which is now [03/01/2001] classified as an unsupported product. A future modeling solution based on the Microsoft Visio engine will support most of the ORM constructs..." 25 MB, requires Windows 9x, Windows NT, or Windows 2000. Download ['MSVM31.EXE']. See also the tutorial on the use of VisioModeler.

  • [February 11, 2002] "Combining UML, XML and Relational Database Technologies. The Best of All Worlds For Robust Linguistic Databases." By Larry S. Hayashi and John Hatton (SIL International). Pages 115-124 in Proceedings of the IRCS Workshop on Linguistic Databases (11-13 December 2001, University of Pennsylvania, Philadelphia, USA. Organized by Steven Bird, Peter Buneman and Mark Liberman. Funded by the National Science Foundation). "This paper describes aspects of the data modeling, data storage, and retrieval techniques we are using as we develop the FieldWorks suite of applications for linguistic and anthropological research. Object-oriented analysis is used to create the data models. The models, their classes and attributes are captured using the Unified Modeling Language (UML). The modeling tool that we are using stores this information in an XML document that adheres to a developing standard known as the XML Metadata Interchange format (XMI). Adherence to the standard allows other groups to easily use our modeling work and because the format is XML, we can derive a number of other useful documents using standard XSL transformations. These documents include (1) a DTD for validating data for import, (2) HTML documentation of diagrams and classes, and (3) a database schema. The latter is used to generate SQL statements to create a relational database. From the database schema we can also generate an SQL-to-XML mapping schema. When used with SQL Server 2000 (or MSDE), the database can be queried using XPath rather than SQL and data can be output and input using XML. Thus the Fieldworks development process benefits from both the maturity of its relational database engine and the productivity of XML technologies. With this XML in/out capability, the developer does not need to translate between object-oriented data and relational representation. The result will be, hopefully, reduced development time. 
A further implication is the potential for increased interoperability between tools of different developers. Mapping schemas could be created that allow FieldWorks to easily produce and transfer data according to standard DTDs (for example, for lexicons or standard interlinear text). Data could then be shared among different tools -- in much the same way that XMI allows UML data to be used in different modeling tools... Stroustrup [The C++ Programming Language] states that 'constructing a useful mathematically-based model of an application area is one of the highest forms of analysis. Thus, providing a tool, language, framework, etc., that makes the result of such work available to thousands is a way for programmers and designers to escape the trap of becoming craftsmen of one-of-a-kind artifacts.' UML is an excellent example of such a language and framework. UML tools that make use of XMI provide even greater longevity and availability to the modeling work. XML is also such a language and framework. Because XMI is XML, we have been able to use standard XML tools and the XML functionality of SQL Server to easily derive a number of implementation-specific products... UML, XMI and XML provide a stable foundation for data modeling and software development. We expect our UML models to have longevity and we trust that the XMI representation will allow us to easily derive new functionality and better interface implementations as technology changes." [cache]
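The derive-artifacts-from-XMI workflow the paper describes (XSLT over an XMI model file yielding DTDs, documentation, and database schemas) can be sketched with the standard library. The fragment below is a heavily simplified invention, not real XMI, and the transformation is a stand-in for one of the paper's XSLT stylesheets:

```python
import xml.etree.ElementTree as ET

# Heavily simplified stand-in for an XMI model file (real XMI is far
# more verbose; the class and attribute names here are illustrative).
xmi = """
<XMI>
  <Class name="LexEntry">
    <Attribute name="form"/>
    <Attribute name="gloss"/>
  </Class>
</XMI>
"""

def derive_dtd(xmi_text):
    """Mimic one of the paper's derivations: walk the model and emit
    a DTD element declaration for each class and its attributes."""
    root = ET.fromstring(xmi_text)
    lines = []
    for cls in root.iter("Class"):
        attrs = [a.get("name") for a in cls.iter("Attribute")]
        lines.append(f"<!ELEMENT {cls.get('name')} ({', '.join(attrs)})>")
    return "\n".join(lines)

print(derive_dtd(xmi))
```

Swapping the output template changes the derived artifact (HTML documentation, SQL DDL, a mapping schema) while the single UML/XMI model remains the source of truth, which is the paper's central claim.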

  • [November 06, 2001] "Database Modeling in Visual Studio .NET." [Visio-Based Database Modeling in Visual Studio .NET Enterprise Architect: Part 1.] By Terry Halpin (Microsoft Corporation). November 2001. ['This three-part series on the new Visio tool for database modeling explains how to create a basic ORM source model, how to use the verbalizer, how to add set-comparison constraints, and much more.'] "The database modeling solution within Microsoft Visio Enterprise 2000 provided basic support for conceptual information analysis using Object-Role Modeling (ORM), as well as logical database modeling using relational, IDEF1X, crowsfoot and object-relational notations. ORM schemas could be forward engineered to logical database schemas, from which physical database schemas could be generated for a variety of database management systems (DBMSs). Physical databases could have their structures reverse engineered to logical database schemas or to ORM schemas. The recently released Microsoft Visio 2002 products include Standard and Professional editions only. The Professional edition subsumes the formerly separate Technical edition, but does not include Enterprise. Although Visio 2002 Professional does include an ORM stencil, this is for drawing only -- its ORM diagrams cannot be mapped to logical database schemas and cannot be obtained by reverse engineering from a physical database. Visio 2002 Professional includes a database modeling solution for defining new logical database schemas or reverse engineering them from existing databases, but forward engineering to physical database schemas is not provided. For some time, Microsoft has supported database design and program code design (using UML) in its Visual Studio product range. After acquiring Visio Corporation, Microsoft had two separate products (Visio Enterprise and Visual Studio) that overlapped significantly in functionality, with each supporting database design and UML. 
As a first step towards unifying these product offerings, the deep modeling solutions formerly within Visio Enterprise have been enhanced and moved into a new Microsoft product known as Visio for Enterprise Architects (VEA), included in Microsoft Visual Studio .NET Enterprise Architect. These Visio-based modeling solutions are included in beta 2 of Visual Studio. NET Enterprise, with the final release to follow later. The deep ORM solution in VEA is completely different from the simple ORM drawing stencil in Visio Professional, and cannot translate between them. However, the database modeling solution in VEA can import logical database models from Visio Professional, and then forward engineer them to DDL scripts or physical database schemas..."

  • [August 24, 2001] "Semantic Data Modeling Using XML Schemas." By Murali Mani, Dongwon Lee, and Richard R. Muntz (Department of Computer Science, University of California, Los Angeles, CA). [To be published] in Proceedings of the 20th International Conference on Conceptual Modeling (ER 2001), Yokohama, Japan, November, 2001. "Most research on XML has so far largely neglected the data modeling aspects of XML schemas. In this paper, we attempt a systematic approach to the data modeling capabilities of XML schemas. We first formalize a core set of features among a dozen competing XML schema language proposals and introduce a new notion of XGrammar. The benefit of such a formal description is that it is both concise and precise. We then compare the features of XGrammar with those of the Entity-Relationship (ER) model. We especially focus on three data modeling capabilities of XGrammar: (a) the ability to represent ordered binary relationships, (b) the ability to represent a set of semantically equivalent but structurally different types as 'one' type using the closure properties, and (c) the ability to represent recursive relationships... Ordered relationships exist commonly in practice, such as the list of authors of a book; XML schemas, unlike the ER model, can specify such ordered relationships. Semantic data modeling using XML schemas has been studied in the recent past. ERX extends the ER model so that one can represent a style sheet and a collection of documents conforming to one DTD in the ERX model. But order is represented in the ERX model by an additional order attribute. Other related work includes a mapping from XML schema to an extended UML, and a mapping from Object-Role Modeling (ORM) to XML schema.
Our approach is different from these approaches: we focus on the new features provided by an XML Schema -- element-subelement relationships, new datatypes such as ID or IDREF(S), recursive type definitions, and the property that XGrammar is closed under union -- and how they are useful to data modeling... The paper is organized as follows. In Section 2, we describe XGrammar, which we propose as a formalization of XML schemas. In Section 3, we describe in detail the main features of XGrammar for data modeling. In Section 4, we show how to convert an XGrammar to an EER model, and vice versa. In Section 5, an application scenario using the proposed XGrammar and EER model is given. Finally, concluding remarks are given in Section 6. ... [Conclusions:] In this paper, we examined several new features provided by XML schemas for data description. In particular, we examined how ordered binary relationships 1:n (through parent-child relationships and IDREFS attribute) as well as n:m (through IDREFS attribute) can be represented using an XML schema. We also examined the other features provided by XML grammars -- representing recursive relationships using recursive type definitions and union types. The EER model, conceptualized in the logical design phase, can be mapped onto XGrammar (or its equivalent) and, in turn, mapped into other final data models, such as the relational data model or, in some cases, the XML data model itself (i.e., data might be stored as XML documents themselves). We believe the work presented in this paper forms a useful contribution to such scenarios." Also available in Postscript format. [cache PDF, Postscript]
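The two relationship encodings the abstract describes can be sketched concretely. Assuming invented element names, the following shows an ordered relationship carried by parent-child document order and an n:m relationship carried by an IDREFS-style attribute, together with the referential-integrity check a validator would perform:

```python
import xml.etree.ElementTree as ET

# Toy document: author order is significant (ordered relationship), and the
# book's "authors" attribute is IDREFS-style: whitespace-separated ID tokens.
doc = """<bib>
  <author id="a1">Mani</author>
  <author id="a2">Lee</author>
  <author id="a3">Muntz</author>
  <book title="ER2001" authors="a1 a2 a3"/>
</bib>"""

root = ET.fromstring(doc)
authors = {a.get("id"): a.text for a in root.iter("author")}
book = root.find("book")

# An IDREFS validator only checks that each token resolves to an existing ID.
refs = book.get("authors").split()
assert all(r in authors for r in refs)  # referential integrity holds
print([authors[r] for r in refs])       # ['Mani', 'Lee', 'Muntz']
```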

  • [May 22, 2001] "Augmenting UML with Fact-orientation." By Terry Halpin. Published in the Proceedings of the Hawai'i International Conference on System Sciences 2001, Section "Unified Modeling Language: A Critical Review and Suggested Future," [HICSS-34], January 3-6, 2001, Outrigger Wailea Resort, Island of Maui. "The Unified Modeling Language (UML) is more useful for object-oriented code design than conceptual information analysis. Its process-centric use-cases provide an inadequate basis for specifying class diagrams, and its graphical language is incomplete, inconsistent and unnecessarily complex. For example, multiplicity constraints on n-ary associations are problematic, the constraint primitives are weak and unorthogonal, and the graphical language impedes verbalization and multiple instantiation for model validation. This paper shows how to compensate for these defects by augmenting UML with concepts and techniques from the Object Role Modeling (ORM) approach. It exploits 'data use cases' to seed the data model, using verbalization of facts and rules with positive and negative examples to facilitate validation of business rules, and compares rule visualizations in UML and ORM. Three possible approaches are suggested: use ORM for conceptual analysis then map to UML; supplement UML with population diagrams and user-defined constraints; enhance the UML metamodel... The UML notation includes the following kinds of diagram for modeling different perspectives of an application: use case diagrams, class diagrams, object diagrams, statecharts, activity diagrams, sequence diagrams, collaboration diagrams, component diagrams and deployment diagrams. This paper focuses on conceptual data modeling, so considers only the static structure (class and object) diagrams. Class diagrams are used for the data model, and object diagrams for data populations. 
Although not yet widely used for designing database applications, UML class diagrams effectively provide an extended Entity-Relationship (ER) notation that can be annotated with database constructs (e.g., key declarations)... This paper identifies several weaknesses in the UML graphical language and discusses how fact-orientation can augment the object-oriented approach of UML. It shows how verbalization of facts and rules, with positive and negative examples, facilitates validation of business rules, and compares rule visualizations in UML and ORM on the basis of specified modeling language criteria... [Conclusion:] Fact-orientation, as exemplified by ORM, provides many advantages for conceptual data analysis, including expressibility, validation by verbalization and population at both fact and constraint levels, and semantic stability (e.g., avoiding changes caused by attributes evolving into associations). ORM also has a mature formal foundation that may be used to refine the semantics of UML. Object-orientation, as exemplified by UML, provides several advantages such as compactness, and the ability to drill down to detailed implementation levels for object-oriented code. If UML is to be used for conceptual analysis of data, some ORM features can be adapted for use in UML either as heuristic procedures or as reasonably straightforward extensions to the UML metamodel and syntax. These include mixfix verbalizations of associations and constraints for associations, and exploitation of data use cases by populating associations with tables of sample data using role names for the column headers. However there are some fundamental aspects that need drastic surgery to the semantics and syntax of UML if it is ever to cater adequately for non-binary associations and some commonly encountered business rules. This paper revealed some serious problems with multiplicity constraints on n-ary associations, especially concerning non-zero minimum multiplicities. 
For example, they cannot be used in general to capture mandatory and minimum occurrence frequency constraints on even single roles within n-aries, much less role combinations. Moreover, UML's treatment of set-comparison constraints is defective. Although it is possible to fix these problems by changing UML's metamodel to be closer to ORM's, such a drastic change to the metamodel may well be ruled out for pragmatic reasons (e.g., maintaining backward compatibility and getting the changes approved). In contrast to UML, ORM has only a small set of orthogonal concepts that are easily mastered. UML modelers willing to learn ORM can get the best of both approaches by using ORM as a front-end to their data analysis and then mapping the ORM models to UML, where the additional constraints can be captured in notes or textual constraints. Automatic transformation between ORM and UML is feasible, and is currently being researched." [alt URL, cache]
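ORM's validation by population and its set-comparison constraints, which the entry above argues UML handles poorly, are easy to illustrate. A toy sketch with invented fact tables (this is my own paraphrase, not Halpin's notation): a subset constraint requires one fact population to be contained in another, and each fact can be verbalized for checking by domain experts:

```python
# Fact populations as sets of role tuples (all names are illustrative).
drives = {("ann", "car1"), ("bob", "car2")}
authorized = {("ann", "car1"), ("bob", "car2"), ("ann", "car2")}

def subset_constraint_holds(sub, sup):
    """Subset constraint: every fact in sub must also appear in sup."""
    return sub <= sup

# Mixfix-style verbalization of the fact population, for expert validation.
for who, car in sorted(drives):
    print(f"Employee {who} drives Car {car}.")

print(subset_constraint_holds(drives, authorized))  # True
```

Populating the model with negative examples (a drives fact with no matching authorization) makes the constraint violation, and hence the business rule, immediately visible.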

  • "Evaluating Conceptual Modeling Languages." By Tim Menzies, Robert F. Cohen, and Sam Waugh. February 20, 1998. "An important assumption for many KA researchers is structure preservation; i.e., conceptual models can be converted in a straightforward manner into a design for an implementation. This assumption may not always hold. Seemingly trivial variants in qualitative conceptual models can block pragmatically desirable properties such as KB-testability and KB-maintainability. KB-testability and KB-maintainability are important properties: we must not build knowledge bases that are untestable or unmaintainable. Hence, we argue against using these features within conceptual models. The tools used for identifying these features (instance generators, graph theory, studying KB-testability and KB-maintainability) are quite general and could be used to find restrictions in other conceptual modeling languages... If there are no blocks to structure preservation, then our conceptual modeling need not be restricted. Unrestricted conceptual modeling requires that we can structurally preserve conceptual structures. If structure preservation is blocked by certain features of a conceptual modeling language, then if we use those features, our implementation may be compromised. In order to identify such blocks to structure preservation, we need to first identify pragmatic success criteria for our KBs..."

  • "UML for XML Schema Mapping Specification." By Grady Booch (Rational Software Corp.), Magnus Christerson (Rational Software Corp.), Matthew Fuchs (CommerceOne Inc.), and Jari Koistinen (CommerceOne Inc.). 12/08/99. "This paper describes a graphical notation in UML for designing XML Schemas. UML (Unified Modeling Language) is a standard object-oriented design language that has gained virtually global acceptance among both tool vendors and software developers. UML has been standardized by the Object Management Group (OMG). XML Schema is an emerging standard from W3C. XML Schema is a language for defining the structure of XML document instances that belong to a specific document type. XML Schema can be seen as replacing the XML DTD syntax. XML Schema provides strong data typing, modularization and reuse mechanisms not available in XML DTDs. There is currently no W3C recommendation for XML Schema, although several have been proposed and W3C is actively working on producing a recommendation. This paper describes the relationship between UML and the SOX schema used by CommerceOne. Our intention is, however, to adapt the mapping to the W3C recommendation when that becomes available. W3C discussions up to this point indicate the notation described here will be upward compatible with the eventual recommendation."

  • [February 02, 2001] "Modeling the UDDI Schema with UML." By David Carlson (CTO, Ontogenics Corp., Boulder, Colorado). A white paper and example of XML modeling (February 2001). 12 pages. "Complex XML vocabulary definitions are often easier to comprehend and discuss with others when they are expressed graphically. Although existing tools for editing schemas provide some assistance in this regard (e.g., Extensibility XML Authority), they are generally limited to a strict hierarchical view of the vocabulary structure. More complex structures are often represented in schemas using a combination of containment and link references (including ID/IDREF, simple href attributes, and more flexible extended XLink attributes). These more object-oriented models of schema definition are more easily represented using UML class diagrams. The Unified Modeling Language (UML) is the most recent evolution of graphical notations for object-oriented analysis and design, and it includes the necessary metamodel that defines the modeling language itself. UML has experienced rapid growth and adoption in the last two years, due in part to its vendor-independent specification as a standard by the Object Management Group (OMG). This UML standard and metamodel are the basis for a new wave of modeling tools by many vendors. For more information and references on these topics, see the Web portal created to communicate information about the intersection of XML and UML technologies. The OMG has also adopted a standard interchange format for serializing models and exchanging them between UML tools, called the XML Metadata Interchange (XMI) specification. (XMI is actually broader in scope than this, but its use with UML is most prevalent.) Many UML modeling tools now support import/export using the XMI format, so it's now possible to get an XML document that contains a complete UML model definition.
This capability is the foundation for the remainder of this white paper. Using an XMI file exported from any UML tool, I have written an XSLT transformation that generates an XML Schema document from the UML class model definition. As an example that demonstrates the benefits of modeling XML vocabularies with UML, I have reverse-engineered a substantial part of the UDDI specification. The current UDDI specification (as of January 2001) includes an XML Schema definition that is based on an old version of the XML Schema draft, predating even the now-outdated April 7, 2000 draft. In contrast, the UDDI schemas described here are compliant with the XML Schema Candidate Recommendation, dated October 24, 2000. The reverse engineering was accomplished manually, by reading the UDDI specification and creating a UML model in Rational Rose..." [cache]
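Carlson's actual pipeline is an XSLT transformation over exported XMI. Purely to illustrate the generation step, here is a Python stand-in that assumes a pre-parsed, simplified model (class names and attributes only, loosely echoing UDDI type names) and emits a skeletal schema:

```python
# Simplified model extracted from a UML tool; in the real pipeline this
# would come from the XMI file. Class and attribute names are illustrative.
model = {"BusinessEntity": ["name", "description"],
         "BusinessService": ["serviceKey"]}

def to_xsd(classes):
    """Emit a skeletal W3C XML Schema: one complexType per UML class,
    one string-typed element per attribute."""
    lines = ['<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">']
    for cls, attrs in classes.items():
        lines.append(f'  <xs:complexType name="{cls}">')
        lines.append('    <xs:sequence>')
        for a in attrs:
            lines.append(f'      <xs:element name="{a}" type="xs:string"/>')
        lines.append('    </xs:sequence>')
        lines.append('  </xs:complexType>')
    lines.append('</xs:schema>')
    return "\n".join(lines)

print(to_xsd(model))
```

A real mapping would also carry datatypes, multiplicities, and associations; the sketch shows only why a class model is sufficient input for schema generation.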

  • "Using UML to Define XML Document Types." By W. Eliot Kimber and John Heintz. In Markup Languages: Theory & Practice 2/3 (Summer 2000), pages 295-320 (with 6 references). "The paper defines a convention for the use of UML to define XML documents. It uses UML stereotypes to map application-specific types to XML syntactic constructs, and shows how UML can be used in this way to map from abstract information data models to XML-specific implementation models in a natural and easy-to-understand way... The paper presents an approach to using UML (Unified Modeling Language) models to define XML (Extensible Markup Language) document types (DTDs - document type definitions or schemas). This use of UML serves as an alternative to other forms of 'XML schema'. The XML encoding of data is fundamentally an implementation representation of data that conforms to some higher-level abstract data model or object model. In this way, the use of UML to define the XML implementation of business objects is exactly analogous to using UML to define the Java or CORBA or C++ or SQL (Structured Query Language) implementations of those objects. Thus, for the purpose of this paper, there is assumed to be a more abstract data model of which the XML is an implementation, referred to as the 'XML implementation representation' of the data. We have two primary motivations for developing this approach to modeling document structure. First, as XML practitioners, we need a convenient and efficient way to develop and document document models in a way that is easy to communicate to both users of the system and implementors. Second, as system designers, we want a way to formally and tightly integrate the document model parts of our design into the overall system design, which is already modeled using UML (and, in our case, the Catalysis method of capturing multi-layered designs using UML).
Thus, while this use of UML to model XML document types has immediate benefits as an alternative to DTD syntax or XML Schema, it has a much more profound long-term benefit as it enables the full integration of document models into larger system designs. A secondary motivation is using this model to help emphasize that XML is always an implementation view of more abstract data models and should not be used, by itself, as the one place where data models are captured or documented. In this paper, the term 'DTD' is used in the 'document type definition' sense, that is, the set of rules that governs the use of XML to represent sets of data that conform to a particular type, not in the 'document type declaration' sense, that is, the syntactic component of XML documents that provides the document's local syntax rules used by the XML parser when parsing the document. Using UML for the task of defining XML document types has a number of advantages, including: (1) enabling the clear and formal binding of abstract data models to their XML implementation representations; (2) providing clear graphical representations of the XML implementation model; (3) providing true modularization of DTDs through UML's normal package mechanism; (4) enabling the use of generally available tools with which most programmers have at least some passing familiarity (e.g., Rational Rose, ObjectDomain, etc.); (5) providing the opportunity for information modeling and implementation activities to take advantage of UML-based design and modeling methodologies such as the Catalysis method; and (6) binding the documentation for the element types more effectively and directly to the model definition. The documentation is a first-class part of the model rather than being either just comments in the declaration set or a completely separate document that is not tightly coupled to the formal DTD definition.
Using XMI (XML Metadata Interchange) for the storage of the UML models, arbitrarily sophisticated XML structures can be used within the XMI data set to model the documentation... This paper first presents the definitions of the stereotypes, formally defined through a UML model in which the stereotypes are types, then demonstrates how those stereotypes are applied using a simple demonstration DTD. The paper assumes that you are familiar with both UML and XML syntax and concepts. Note that as presented in this paper all the models are data models, not object models, meaning that they define static types. However, the data models could be further refined into object models that provide methods for the types. This paper does not address this aspect of using UML to model documents and document types... It should be clear from the sample document type that UML can be used productively to directly model XML DTDs using normal UML techniques. Additional work to be done includes: (1) Refine the use of specific UML syntactic facilities and conventions. For example, is an element type a refinement of a more abstract type or is it an implementation?; (2) Refine the content constraint language; (3) Gain more practical experience with using this technique in real-world situations; (4) Refine the use of Catalysis methods and conventions (templates, refinement, etc.) in this model."

  • [February 13, 2001] More on Object Role Modelling, UML, UDDI, by David Carlson. "I'd like to clarify one point before commenting. By "approaches of UDDI," I assume that you are referring to the white paper that I wrote on modeling UDDI schema with UML. These are not approaches of UDDI per se, but rather conventions that I follow when using UML to model schemas; UDDI was just one example of this. UML attributes and associations to other classes are both perfectly valid modeling constructs in UML. The mapping that I use from UML to schema is based on the OMG specification of XMI, which defines a complete UML to DTD mapping. I've adapted this for a mapping to XSD, and I have proposed a set of UML stereotypes and tagged values (based on XML Schema) that allow customization of the mapping, starting with the XMI mapping as a basis. The Rational white paper had a much more limited UML profile based on DTD. (I am preparing a new white paper that contains the full UML profile description for my proposal.) My suggestion for best practices is to use UML attributes for properties having simple datatypes, and to use associations to other complexTypes. I find that this leads to *much* more readable UML diagrams compared to using associations for everything. One of my primary goals in using UML to model schemas is to improve our ability to communicate conceptual models to non-XML-savvy stakeholders..."
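Carlson's suggested best practice can be stated as a small mapping rule. The sketch below is my own paraphrase of that comment, not his actual profile; the set of "simple" datatypes and the rendering details are illustrative assumptions:

```python
# Rule of thumb from the posting above: properties with simple datatypes
# become XML attributes; properties typed by other complexTypes become
# element references. The SIMPLE set here is illustrative, not normative.
SIMPLE = {"string", "int", "date"}

def render_property(name, typ):
    """Render one UML property as a schema fragment per the rule of thumb."""
    if typ in SIMPLE:
        return f'<xs:attribute name="{name}" type="xs:{typ}"/>'
    return f'<xs:element ref="{typ}"/>'

print(render_property("name", "string"))
print(render_property("contacts", "ContactList"))
```

The payoff claimed in the posting is diagram readability: simple-valued properties stay inside the class box, and only genuine inter-type relationships appear as association lines.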

  • [September 05, 2002] "Architectures for Intelligent Systems." By John F. Sowa. In IBM Systems Journal, Volume 41, Number 3 (2002). Special Issue on Artificial Intelligence. "People communicate with each other in sentences that incorporate two kinds of information: propositions about some subject, and metalevel speech acts that specify how the propositional information is used -- as an assertion, a command, a question, or a promise. By means of speech acts, a group of people who have different areas of expertise can cooperate and dynamically reconfigure their social interactions to perform tasks and solve problems that would be difficult or impossible for any single individual. This paper proposes a framework for intelligent systems that consist of a variety of specialized components together with logic-based languages that can express propositions and speech acts about those propositions. The result is a system with a dynamically changing architecture that can be reconfigured in various ways: by a human knowledge engineer who specifies a script of speech acts that determine how the components interact; by a planning component that generates the speech acts to redirect the other components; or by a committee of components, which might include human assistants, whose speech acts serve to redirect one another. The components communicate by sending messages to a Linda-like blackboard, in which components accept messages that are either directed to them or that they consider themselves competent to handle..." Note from a posting of Sowa: "This paper outlines an architecture that is being developed by the new company, VivoMind LLC, which is in the process of being organized now. That architecture is based on message-passing technology that we intend to make publicly available in order to make it easier to incorporate independently developed modules into an AI system. Some modules may be free, open-source code, and others may be commercial, proprietary code.
But as long as they observe the common interfaces specified in the architecture, they can all communicate and interact as part of a distributed system..." Also available from the author's website.
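The Linda-like blackboard Sowa describes can be sketched in a few lines. This toy version (all names invented, and without Linda's concurrency) lets components post speech-act messages to a shared space and take the first message matching a pattern, with None acting as a wildcard:

```python
class Blackboard:
    """Minimal Linda-style tuple space: post tuples, take by pattern."""

    def __init__(self):
        self.space = []

    def post(self, msg):
        # msg is a tuple, e.g. (speech_act, addressee, body)
        self.space.append(msg)

    def take(self, pattern):
        """Remove and return the first message matching pattern;
        None fields in the pattern act as wildcards."""
        for msg in self.space:
            if all(p is None or p == m for p, m in zip(pattern, msg)):
                self.space.remove(msg)
                return msg
        return None

bb = Blackboard()
bb.post(("question", "parser", "What is the subject?"))
bb.post(("assertion", "planner", "Goal reached"))
# A component takes whatever is addressed to it, regardless of speech act.
print(bb.take((None, "parser", None)))  # ('question', 'parser', 'What is the subject?')
```

Components that match on the speech-act field instead of the addressee field get the paper's other behavior: accepting any message they consider themselves competent to handle.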

  • [August 08, 2002] "UML For W3C XML Schema Design." By Will Provost. From August 07, 2002. ['Will Provost offers a UML profile for W3C XML Schema'] "Even with the clear advantages it offers over the fast-receding DTD grammar, W3C XML Schema (WXS) cannot be praised for its concision. Indeed, in discussions of XML vocabulary design, the DTD notation is often thrown up on a whiteboard solely for its ability to quickly and completely communicate an idea; the corresponding WXS notation would be laughably awkward, even when WXS will be the implementation language. Thus, UML, a graphical design notation, is all the more attractive for WXS design. UML is meant for greater things than simple description of data structures. Still, the UML metamodel can support Schema design quite well, for wire-serializable types, persistence schema, and many other XML applications. UML and XML are likely to come in frequent professional contact; it would be nice if they could get along. The highest possible degree of integration of code-design and XML-design processes should be sought. Application of UML to just about any type model requires an extension profile. There are many possible profiles and mappings between UML and XML, not all of which address the same goals. The XML Metadata Interchange and XMI Production for W3C XML Schema specifications, from the OMG, offer a standard mapping from UML/MOF to WXS for the purpose of exchanging models between UML tools. The model in question may not even be intended for XML production. WXS simply serves as a reliable XML expression of metadata for consumption in some other tool or locale. My purpose here is to discuss issues in mapping between these two metamodels and to advance a UML profile that will support complete expression of a WXS information set... 
The major distinction is that XMI puts UML first, so to speak, in some cases settling for a mapping that fails to capture some useful WXS construct, so long as the UML model is well expressed. My aim is to put WXS first and to develop a UML profile for use specifically in WXS design: (1) The profile should capture every detail of an XML vocabulary that a WXS could express. (2) It should support two-way generation of WXS documents. I suggest a few stereotypes and tags, many of which dovetail with the XMI-Schema mapping. I discuss specific notation issues as the story unfolds, and highlight the necessary stereotypes and tags... David Carlson [Modeling XML Applications with UML: Practical e-Business Applications] has also done some excellent work in this area, and has proposed an extension profile for this purpose. I disagree with him on at least one major point of modeling and one minor point of notation, but much of what is developed here lines up well with Carlson's profile..." See other references in: (1) "XML and 'The Semantic Web'"; (2) "XML Schemas"; (3) "XML Metadata Interchange (XMI)."

  • Modeling XML Applications with UML: Practical e-Business Applications. By David Carlson. Addison-Wesley, 2001. [to appear April 2001] ISBN: 0-201-70915-5. Back-cover description from 2001-02-09: "XML (eXtensible Markup Language) and UML (Unified Modeling Language) are two of the most significant advances to have emerged in the respective fields of Web application development and object-oriented modeling. Modeling XML Applications with UML reveals how to integrate these two technologies to create dynamic, interactive Web applications and achieve business-to-business application integration. The book focuses on the design and visual analysis of XML vocabularies. It explores the generation of DTD and Schema languages from those vocabularies, as well as the design of enterprise integration and portals -- all using UML class diagrams and use case analysis. The book also features extensive information on the deployment of XML vocabularies and portals, showing how to put these elements to work within distributed e-business systems. You will learn practical techniques that can be applied to both small and large system development projects using either formal or informal processes. For those who may be new to XML and UML, the book includes a brief overview of these topics, although some background knowledge is recommended."

  • [August 23, 2001] "Modeling XML Vocabularies with UML: Part I." By Dave Carlson. From August 22, 2001. "The arrival of the W3C's XML Schema specification has evoked a variety of responses from software developers, system integrators, XML document analysts, authors, and designers of B2B vocabularies. Some like the richer structure and semantics that can be expressed with these new schemas as compared to DTDs, while others complain about excessive complexity. Many find that the resulting schemas are difficult to share with wider audiences of users and business partners. I look past many of these differences of opinion to view XML Schema simply as implementation syntax for models of business vocabularies. Other forms of model representation and presentation are more effective than W3C XML Schema when specifying new vocabularies or sharing definitions with users. In particular, I favor the Unified Modeling Language (UML) as a widely adopted standard for system specification and design. My goal in this article and in this series is to share some thoughts about how these two standards are complementary and to work through a simple example that makes the ideas concrete. Although this discussion is focused on the W3C XML Schema specification, the same concepts are easily transferred to other XML schema languages. Indeed, I have already applied the same techniques to creating and reverse engineering DTDs and SOX schemas, as well as RELAX, TREX, and RELAX NG. In general, I use the term 'schema' when referring to the family of XML schema languages... The focus of the present article has been capturing the conceptual model of a vocabulary, which is the logical first step in the development process. The next article presents a list of design choices and alternative approaches for mapping UML to W3C XML Schema. 
The UML model presented in this first article will be refined to reflect the design choices made by the authors of the W3C's XML Schema Primer, where this example originated. For our purposes, these authors are the stakeholders of system requirements. The third article will introduce a UML profile for XML schemas that allows all detailed design choices to be added to the model definition and then used to generate a complete schema automatically. The result is a UML model that is used to generate a W3C XML Schema, which can successfully validate XML document instances copied from the Schema Primer specification. Along the way, I'll introduce a web tool used to generate schemas from UML and reverse engineer schemas into UML." See in this connection: (1) "Conceptual Modeling and Markup Languages"; (2) the web site; (3) Carlson's book, Modeling XML Applications with UML: Practical e-Business Applications; (4) Kimber and Heintz, "Using UML to Define XML Document Types" [MLTP 2/3, 'defines a convention for the use of UML to define XML documents'].

  • [September 21, 2001] "Modeling XML Vocabularies with UML: Part II." By Dave Carlson. From September 19, 2001. "Mapping UML Models to XML Schema: This is where the rubber meets the road when using UML in the development of XML schemas. A primary goal guiding the specification of this mapping is to allow sufficient flexibility to encompass most schema design requirements, while retaining a smooth transition from the conceptual vocabulary model to its detailed design and generation. A related goal is to allow a valid XML schema to be automatically generated from any UML class diagram, even if the modeller has no familiarity with the XML schema syntax. Having this ability enables a rapid development process and supports reuse of the model vocabularies in several different deployment languages or environments because the core model is not overly specialized to XML... The default mapping rules described in this article can be used to generate a complete XML schema from any UML class diagram. This might be a pre-existing application model that now must be deployed within an XML web services architecture, or it might be a new XML vocabulary model intended as a B2B data interchange standard. In either case, the default schema provides a usable first iteration that can be immediately used in an initial application deployment, although it may require refinement to meet other architectural and design requirements. The first article in this series presented a process flow for schema design that emphasized the distinction between designing for data-oriented applications versus text-oriented applications. The default mapping rules are often sufficient for data-oriented applications. In fact, these defaults are aligned with the OMG's XML Metadata Interchange (XMI) version 2.0 specification for using XML as a model interchange format. This approach is also well aligned with the OMG's new initiative for Model Driven Architecture (MDA). 
Text-oriented schemas, and any other schema that might be authored by humans and used as content for HTML portals, often must be refined to simplify the XML document structure. For example, many schema designers eliminate the wrapper elements corresponding to an association role name (but this also prevents use of the XSD <all> model group). This refinement and many others can be specified in a vocabulary model by setting a new default parameter for one UML package, which then applies to all of its contained classes..." See: (1) Part I of Carlson's article; (2) "XML Schemas."
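The "default mapping rules" Carlson describes can be pictured with a small sketch. This is an illustrative stand-in, not Carlson's actual tool or API: it assumes a toy representation of a UML class (name plus a list of typed attributes with multiplicities) and emits the corresponding XSD complexType, with each UML attribute becoming a child element whose occurrence bounds come from its multiplicity.

```python
# Hypothetical sketch of a default UML-to-XSD mapping: each UML class
# becomes a named complexType; each attribute becomes a child element
# with minOccurs/maxOccurs taken from its UML multiplicity.
def uml_class_to_xsd(name, attributes):
    """attributes: list of (attrName, xsdType, minOccurs, maxOccurs)."""
    lines = ['<xsd:complexType name="%s">' % name, '  <xsd:sequence>']
    for attr, xsd_type, lo, hi in attributes:
        lines.append(
            '    <xsd:element name="%s" type="%s" minOccurs="%d" maxOccurs="%s"/>'
            % (attr, xsd_type, lo, hi))
    lines.append('  </xsd:sequence>')
    lines.append('</xsd:complexType>')
    return '\n'.join(lines)

# The purchase order example from the Schema Primer, in toy form:
print(uml_class_to_xsd('PurchaseOrder', [
    ('shipTo', 'Address', 1, '1'),
    ('items', 'Items', 1, '1'),
    ('comment', 'xsd:string', 0, '1'),
]))
```

A real implementation would also handle associations, inheritance, and the wrapper-element refinement mentioned above; the point here is only that the class-structure-to-schema mapping is mechanical.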

  • [October 19, 2001] "Modeling XML Vocabularies with UML: Part III." By Dave Carlson. From October 10, 2001. ['The third and final part of Dave Carlson's series on modeling XML vocabularies covers a specific profile of UML for use with XML Schema, and describes how UML can contribute to the analysis and design of XML applications.' See previously Part I and Part II.] "This article is the third installment in a series on using UML to model XML vocabularies. The examples are based on a simple purchase order schema included in the W3C XML Schema Primer, and we've followed an incremental development approach to define and refine this vocabulary model with UML class diagrams. The first objective of this third article is to complete the process of refining the PO model so that the resulting schema is functionally equivalent to the one contained in the XSD Primer. The second objective is to broaden our perspective for understanding how UML can contribute to the analysis and design of XML applications... The following list summarizes several goals that guide our work. (1) Create a valid XML schema from any UML class structure model, as described in the first two parts of this series. (2) Refine the conceptual model to a design model specialized for XML schema by adding stereotypes and properties that are based on a customization profile for UML. (3) Support a bi-directional mapping between UML and XSD, including reverse engineering existing XML schemas into UML models. (4) Design and deploy XML vocabularies by assembling reusable modules. (5) Integrate XML and non-XML information models in UML, to represent, for example, both XML schemas and relational database schemas in a larger system... Even this relatively narrow scope covers a broad terrain. The following introduction to a UML profile for XML adds a critical step toward all of these goals. 
These extensions to UML allow schema designers to satisfy specific architectural and deployment requirements, analogous to physical database design in an RDBMS. And these same extensions are necessary when reverse engineering existing schemas into UML because we must map arbitrary schema structures into an object-oriented model... One of the benefits gained by using UML as part of our XML development process is that it enables a thoughtful approach to modular, maintainable, reusable application components. In the first two parts of this series, the PurchaseOrder and Address elements were specified in two separate diagrams, implying reusable submodels. UML includes package and namespace structures for making these modules explicit and also specifying dependencies between them... A package, shown as a file folder in a diagram, defines a separate namespace for all model elements within it, including additional subpackages. These UML packages are a very natural counterpart to XML namespaces. A dashed line arrow between two packages indicates that one is dependent on the other. When used in a schema definition, each package produces a separate schema file. The implementation of dependencies varies among alternative schema languages. For DTDs they might become external entity references. For the W3C XML Schema, these package dependencies create either <include> or <import> elements, based on whether or not the target namespaces of related packages are equal. A dependency is shown from the PO package to the XSD_Datatypes package, but an import element is not created because this datatype library is inherently available as part of the XML Schema language. This object-oriented approach to XML schema design facilitates modular reuse, just as one would do when using languages such as Java or C++..."
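The include-versus-import rule for package dependencies described above can be sketched in a few lines. This is a hedged illustration, not Carlson's implementation: the function name, namespace URIs, and the datatype-library marker are all assumptions made for the example.

```python
# Sketch of the dependency rule: a package dependency becomes
# <xsd:include> when both packages share a target namespace,
# <xsd:import> otherwise; a dependency on the built-in XSD datatype
# library produces no element at all, since it is inherently available.
XSD_DATATYPES = "http://www.w3.org/2001/XMLSchema"

def dependency_element(source_ns, target_ns, target_location):
    if target_ns == XSD_DATATYPES:
        return None  # built-in datatypes: no include/import needed
    if source_ns == target_ns:
        return '<xsd:include schemaLocation="%s"/>' % target_location
    return ('<xsd:import namespace="%s" schemaLocation="%s"/>'
            % (target_ns, target_location))

print(dependency_element("urn:po", "urn:po", "address.xsd"))      # include
print(dependency_element("urn:po", "urn:party", "party.xsd"))     # import
print(dependency_element("urn:po", XSD_DATATYPES, ""))            # None
```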

  • [May 21, 2001]   BOX Tool Generates XML DTDs and Vector Graphics Diagrams from UML/XMI.    A posting from Christian Nentwich announces the release of a software tool called BOX ('Browsing Objects in XML') which "reads UML models in XMI and exports the contained diagrams in vector graphics form, including SVG and VML. The BOX tool includes, amongst other things, (1) An implementation of the UML metamodel [mainly Foundation/Core, not behavioral packages], in the uml package; (2) A parser for XMI; (3) An additional parser for diagram information specific to the Unisys exporter, in the unisys package; (4) Several exporters in the export package, which you have to manually call at the moment; (5) Heuristics for reconstructing diagrams from the rather poor information made public by the exporter; (6) Sample UML models." BOX was written for a research project in 1998-2001; though currently unmaintained and underdocumented, it is licensed as free software under the GNU General Public License. A research paper on 'Browsing Objects in XML' from 1999 describes BOX as "a portable, distributed, and interoperable approach to browsing UML models with off-the-shelf browser technology; its approach to browsing UML models leverages XML and related specifications, such as the Document Object Model (DOM), the XML Metadata Interchange (XMI) and a Vector Graphic Markup Language (VML). BOX translates a UML model that is represented in XMI into VML. BOX has been successfully evaluated in two industrial case studies which used BOX to make extensive domain and enterprise object models available to a large number of stakeholders over corporate intranets and the Internet. We discuss why XML and the BOX architecture can be applied to other software engineering notations and argue that the approach taken in BOX can be applied to other domains that already started to adopt XML and have a need for graphic representation of XML information. 
These include browsing gene sequences, chemical molecule structures, and conceptual knowledge representations." [Full context]
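The XMI-to-vector-graphics translation BOX performs can be caricatured in a few lines. This toy is not BOX's code and does not use its layout heuristics: it assumes class names and diagram positions have already been recovered, and simply emits minimal SVG boxes with labels, which is the shape of the final step.

```python
# Toy analogue of BOX's last stage: class names plus recovered diagram
# coordinates become labeled rectangles in an SVG document.
def classes_to_svg(classes, width=400, height=300):
    """classes: list of (name, x, y) tuples; sizes are arbitrary."""
    parts = ['<svg xmlns="http://www.w3.org/2000/svg" width="%d" height="%d">'
             % (width, height)]
    for name, x, y in classes:
        parts.append('<rect x="%d" y="%d" width="120" height="40" '
                     'fill="none" stroke="black"/>' % (x, y))
        parts.append('<text x="%d" y="%d">%s</text>' % (x + 10, y + 25, name))
    parts.append('</svg>')
    return '\n'.join(parts)

print(classes_to_svg([('PurchaseOrder', 20, 20), ('Address', 200, 120)]))
```

The hard part of the real tool, as the posting notes, is reconstructing those coordinates from "the rather poor information made public by the exporter", not the rendering itself.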

  • [October 19, 2001] "Extracting UML Conceptual Models from Existing XML Vocabularies." Presentation to be given by Dave Carlson, 29-October-2001 at the UBL Meeting in Menlo Park, California. "This talk will review specific examples of reverse engineering UML conceptual model diagrams from several xCBL SOX modules, e.g., Catalog, Order, TradingParty, and Auction. The goal of this prototype work is to extract an accurate vocabulary structure that is independent of the schema implementation language. The resulting UML models can be more easily reviewed, refined and submitted as an initial library of ebXML Core Components..." From a posting by Jon Bosak to the UBL mailing list (October 18, 2001) [source]. Related references: (1) "UML/XML Submissions for the UN/CEFACT eBTWG 'UML to XML Design Rules' Project"; (2) "UML to XML Design Rules Project"; (3) "Universal Business Language (UBL)."

  • [February 28, 2000] From the "Articles" collection: "Toolkit for Conceptual Modeling (TCM)." By Roel Wieringa (project supervisor), Frank Dehne, and Henk van de Zandschulp [Faculty of Computer Science of the University of Twente, and the Division of Mathematics and Computer Science of the Vrije Universiteit Amsterdam]. "The Toolkit for Conceptual Modeling is a collection of software tools to present conceptual models of software systems in the form of diagrams, tables, trees, and the like. A conceptual model of a system is a structure used to represent the requirements or architecture of the system. TCM is meant to be used for specifying and maintaining requirements for desired systems, in which a number of techniques and heuristics for problem analysis, function refinement, behavior specification, and architecture specification are used. TCM takes the form of a suite of graphical editors that can be used in these design tasks. These editors can be categorized into: (1) Generic editors for generic diagrams, generic tables and generic trees. (2) Structured Analysis (SA) editors for entity-relationship diagrams, data and event flow diagrams, state transition diagrams, function refinement trees, transaction-use tables and function-entity type tables. (3) Unified Modeling Language (UML) editors for static structure diagrams, use-case diagrams, activity diagrams, state charts, message sequence diagrams, collaboration diagrams, component diagrams and deployment diagrams (only the first three and last two UML editors are functional at this moment). (4) Miscellaneous editors such as for JSD (process structure and network diagrams), recursive process graphs and transaction decomposition tables. TCM supports constraint checking for single documents (e.g. name duplication and cycles in is-a relationships). 
TCM distinguishes built-in constraints (of which a violation cannot even be attempted) from immediate constraints (of which an attempted violation is immediately prevented) and soft constraints (against which the editor provides a warning when it checks the drawing). TCM is planned to support hierarchic graphs, so that it can handle for example hierarchic statecharts. Features to be added later include constraint checking across documents and executable models. [...] A conceptual model of a system is an abstraction of the behavior or decomposition of the system. A conceptual model can be presented visually by means of diagrams, graphs, trees, tables or structured text. During software development, a number of stakeholders must reach a common understanding of the behavior and structure of the software. These are for example users and sponsors (or their representatives), analysts, designers and programmers. An important function of conceptual models is to facilitate this understanding..." See also the published review of TCM. [Why is this item in the reference collection? You should already know. Reminder: Syntax is easy, semantics is hard. Syntax is not the "real" problem, nor even the most fundamental perspective, if we are seeking to provide an open, interoperable, maintainable environment in which all the SMEs, programmers, end users, and other relevant stakeholders may contribute to the design/development/documentation/use/maintenance of the information system modelled in software. It's all about semantic transparency; markup syntax is a grotesque monstrosity in this environment. This explains why attaching methods directly to markup elements only "sorta" works, part of the time, for short-term results, in small projects of limited complexity. 
XML appears to be a necessary step (though an unnecessary distraction and misguided agenda in some cases) in the evolution of sound thinking toward a standards-based "group-think" formalism that enables solution of the "real" (sic!) problem. Conceptual modeling tools and requirements engineering concepts are part of that enterprise. You can construct (even generate) a hundred (equally) competent XML syntaxes for one sound conceptual model; the markup model syntaxes are ephemeral, and not the essence of anything important. A sound conceptual model can be. The (current) focus on syntax-based formalisms will probably look silly once enough people realize the nature of "the problem" at a more abstract and foundational level, along with the theoretic foundations of its solution. See for example: (1) Journal of Conceptual Modeling; (2) Object Role Modeling (ORM); (3) Alexander Borgida, "Features Of Languages For The Development Of Information Systems At The Conceptual Level." The paper "surveys several languages which purport to allow the description of an IS in a manner which models the real-world enterprise more naturally and directly than has been the case traditionally. The goal of this approach is to facilitate: (a) the design and maintenance of the IS, by adopting a vocabulary which is more appropriate for the problem domain, and by structuring the IS description as well as the description process; (b) the use of the IS, by making it easier for the user to interpret the data stored, and thus obtain information..." In recent years, Alex Borgida has been working on description logics.]
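The two single-document checks the TCM entry names, duplicate-name detection and cycle detection in is-a relationships, are concrete enough to sketch. The data representation below is invented for illustration and is not TCM's internal model.

```python
# Minimal sketch of TCM-style single-document constraint checks:
# duplicate entity names, and cycles in the is-a (generalization) graph.
def check_model(names, isa_edges):
    """names: list of entity names; isa_edges: list of (child, parent)."""
    errors = []
    seen = set()
    for n in names:
        if n in seen:
            errors.append("duplicate name: %s" % n)
        seen.add(n)
    # depth-first search for a cycle in the is-a graph
    parents = {}
    for child, parent in isa_edges:
        parents.setdefault(child, []).append(parent)
    def has_cycle(node, path):
        if node in path:
            return True
        return any(has_cycle(p, path | {node}) for p in parents.get(node, []))
    for n in names:
        if has_cycle(n, set()):
            errors.append("is-a cycle involving: %s" % n)
            break
    return errors

print(check_model(["Person", "Employee", "Person"],
                  [("Employee", "Person"), ("Person", "Employee")]))
```

In TCM's own terms this would be a "soft constraint" check, run on demand, rather than a built-in or immediate constraint.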

  • "The ER Model, XML and the Web." Presentation by Dr. Peter P. Chen (Professor of Computer Science, Louisiana State University). Metadata and DAMA 2001; March 4-8, 2001, Hilton Anaheim, Anaheim, California. "The ER Model and its variants have been used successfully in data modeling and database design for more than twenty years. In the past few years, the Web has become an increasingly popular user interface to files and databases. XML, which is developed by the World Wide Web Consortium, is positioned to be the mainstream markup language for the web of the future. Recently, several XML working groups have been in the process of developing specifications related to data types, schemas, and data models. Whether the ER model (or its variants) can serve as the model of the web is a subject of debate within the XML working groups. In this talk, Peter Chen looks at some of the current modules of XML and shows the similarities and differences between the main concepts in these modules and the main concepts in the ER model. Then, he discusses reasons why the ER model is a good candidate for the model of the web... [Dr. Chen is the originator of the Entity-Relationship Model (ER Model), which serves as the foundation of many systems analysis and design methodologies, computer-aided software engineering (CASE) tools, and repository systems.]" See also Chen's presentation "Conceptual Modeling and XML Schema Design" at XML DevCon Spring 2001.

  • [January 25, 2001] "Object Role Modelling and XML-Schema." By Linda Bird and Andrew Goodchild (Distributed System Technology Center - DSTC), and Terry Halpin (Microsoft Corporation). Presented at ER 2000 (19th International Conference on Conceptual Modeling, October 9-12, 2000). 14 pages. Abstract: "XML is increasingly becoming the preferred method of encoding structured data for exchange over the Internet. XML-Schema, which is an emerging text-based schema definition language, promises to become the most popular method for describing these XML-documents. While text-based languages, such as XML-Schema, offer great advantages for data interchange on the Internet, graphical modelling languages are widely accepted as a more visually effective means of specifying and communicating data requirements for a human audience. With this in mind, this paper investigates the use of Object Role Modelling (ORM), a graphical, conceptual modelling technique, as a means for designing XML-Schemas. The primary benefit of using ORM is that it is much easier to get the model 'correct' by designing it in ORM first, rather than in XML. To facilitate this process we describe an algorithm that enables an XML-Schema file to be automatically generated from an ORM conceptual data model. Our approach aims to reduce data redundancy and increase the connectivity of the resulting XML instances... ORM was chosen for designing XML schemas for three main reasons. Firstly, its linguistic basis and role-based notation allows models to be easily validated with domain experts by natural verbalization and sample populations. Secondly, its data modeling support is richer than other popular notations (Entity-Relationship (ER) or Unified Modeling Language (UML)), allowing more business rules to be captured. Thirdly, its attribute-free models and queries are more stable than those of attribute-based approaches..." On 'Object Role Modelling,' see "Object Role Modeling (ORM)."
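The ORM property the abstract leans on, that a fact type can be verbalized as natural-language sentences and checked against a sample population, is easy to demonstrate. This is not the Bird/Goodchild/Halpin schema-generation algorithm, just a sketch of the verbalization step that makes ORM models validatable by domain experts; the data and function are invented for the example.

```python
# Sketch of ORM fact-type verbalization: a binary fact type plus a
# sample population yields sentences a domain expert can confirm.
def verbalize(fact_type, population):
    """fact_type: (object_type_1, predicate, object_type_2)."""
    subject, predicate, obj = fact_type
    return ["%s '%s' %s %s '%s'." % (subject, a, predicate, obj, b)
            for a, b in population]

for s in verbalize(("Employee", "works for", "Department"),
                   [("Alice", "Sales"), ("Bob", "Engineering")]):
    print(s)
```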

  • [January 25, 2001] "Leveraging the UML Metamodel: Expressing ORM Semantics Using a UML Profile." By David Cuyler (Sandia National Laboratories). In Journal of Conceptual Modeling Issue Number 17 (December 2000). "This paper is a proposal for a UML Profile to facilitate expression of Object Role Modeling semantics in terms of the UML 1.3 Metamodel. The profile uses the extension mechanisms inherent to UML to clarify usage and semantics where necessary, and it proposes the use of the XML Metadata Interchange (XMI) specification for model exchange. Once expressed in terms of the UML Metamodel, ORM models can then be shared among UML-based tools and can be stored, managed and controlled via UML-based repositories. The paper provides an example of an ORM model fragment converted to the XMI format, in accordance with the profile.... Object Role Modeling (ORM), as defined by the work of Dr. Terry Halpin and with a heritage in Natural Language Information Analysis Method (NIAM), has one of the richest content models of any persistent modeling grammar. ORM is unique among information modeling techniques as it can be used to document a persistent data model for both relational and object schemata. Dr. Halpin has recently published several works comparing ORM with UML, and in them has implied that conversion of an ORM model to UML might be possible. This paper provides a definition, in the form of a UML Profile, that provides the extensions necessary to perform this conversion and to accurately reflect the semantic content of an ORM model. ORM semantics and usage differ from those typically associated with UML primarily in the following areas: [1] What would normally be considered an Attribute in UML is represented in ORM as an Association (FactType). [2] A typical ORM Constraint restricts the allowed population of an AssociationEnd (Role) or a set of AssociationEnds. 
This contrasts with the UML, where constraints typically govern whole Associations, Classes, or Behavioral Features. [3] The ORM analysis process relies heavily on sample populations of associations (Links) to assist in the determination of Constraints. This is not consistently used in UML techniques. [4] ORM methods are typically used to model persistent data stores, helping to optimize the data structure and reduce the incidence of anomalies in the population of the data store. UML is typically used to model run-time characteristics of software..." See Appendix D: "Semantic Content of the Example ORM Source Model Expressed in XMI 1.1 According to this Profile."

  • "Conceptual modeling versus visual modeling: a technological key to building consensus." By Gary F. Simons. A paper presented at Consensus ex Machina, the Joint International Conference of the Association for Literary and Linguistic Computing and the Association for Computers and the Humanities, Paris, 19-23 April 1994. "...A conceptual modeling language, like an object-oriented language, encapsulates all of the information about a real world entity (including its behavior) in the object itself. A conceptual modeling language goes beyond a standard object-oriented language (like Smalltalk) by replacing the simple instance variables with attributes that encapsulate integrity constraints and the semantics of relationships to other objects. Because conceptual modeling languages map directly from entities in the real world to objects in the computer-based model, they make it easier to design and implement systems. The resulting systems are easier to use since they are semantically transparent to users who already know the problem domain..."
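Simons's distinction, attributes that carry their own integrity constraints rather than bare instance variables, can be imitated in an ordinary object-oriented language. The sketch below uses Python descriptors as a stand-in for a conceptual modeling language's constrained attributes; the class names and the age rule are invented for illustration.

```python
# An attribute that encapsulates its own integrity constraint, in the
# spirit of Simons's conceptual-modeling attributes (Python stand-in).
class ConstrainedAttr:
    def __init__(self, name, constraint, message):
        self.name, self.constraint, self.message = name, constraint, message
    def __set_name__(self, owner, name):
        self.attr = "_" + name
    def __set__(self, obj, value):
        if not self.constraint(value):
            raise ValueError("%s: %s" % (self.name, self.message))
        setattr(obj, self.attr, value)
    def __get__(self, obj, objtype=None):
        return getattr(obj, self.attr)

class Person:
    age = ConstrainedAttr("age", lambda v: 0 <= v <= 150,
                          "must be between 0 and 150")

p = Person()
p.age = 42        # accepted
try:
    p.age = -5    # rejected by the attribute's own constraint
except ValueError as e:
    print(e)
```

The point of the quoted passage is that a conceptual modeling language gives you this (plus relationship semantics) natively, without the boilerplate.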

  • "Features of Languages for the Development of Information Systems at The Conceptual Level." By Alexander Borgida. IEEE Software Volume 2, Number 1 (January 1985), pages 63-73. "A computer system which stores, retrieves and manipulates information about some portion of the real world can be viewed as a model of that domain of discourse. There has been considerable research recently on languages which allow one to capture more of the semantics of the real world in these computerized Information Systems -- research which has variously been labelled as Semantic Data Modeling, Semantic Modeling or Conceptual Modeling. This review paper presents a list of the features which appear to distinguish these languages from those traditionally used to describe and develop database-intensive applications, and considers the motivation for these features as well as the potential advantages to be gained through their use. The paper, which is intended for those familiar with current data processing practices, also compares in greater detail four programming languages which incorporate semantic modeling facilities, and discusses some of the methodologies and tools for Information System development based on these languages... The view that in an IS 'the symbol system in the computer is a model of the real world' has been noted and examined by numerous researchers, and much of the work on 'conceptual modeling' (abbreviated CM henceforth) takes this as a fundamental tenet. It is important to note that the IS must model the user's conceptualization of the application domain, not the designer's independent perception, nor should it be a model of the way data is stored in the computer..." See the longer excerpt from the paper. Some of the ideas in this paper were formative in my thinking; Gary Simons (SIL), who first pointed me to this paper, has implemented a number of designs incorporating some of the ideas. See (hint!) 
"Extended Objects" in Communications of the ACM (CACM) Volume 36/8 (August 1993), pages 19-24; the article "describes extensions CELLAR has made to the object-oriented paradigm." CELLAR (Computing Environment for Linguistic, Literary, and Anthropological Research) features a design which "separates the conceptual model of a data set from multiple (interchangeable) views for display and [markup-language] encoding formats for import and export." [Postscript, cache PDF]

  • "Data Modeling with Markup Languages (DM²L)." By François Bry and Norbert Eisinger. Lehr- und Forschungseinheit für Programmier- und Modellierungssprachen, Institut für Informatik der Ludwig-Maximilians-Universität München. 23-March-2000, revised 3-July-2000. "Modern markup languages, such as SGML (Standard Generalized Markup Language) and XML (eXtensible Markup Language), which were initially conceived for modeling texts, are now receiving increasing attention as formalisms for data and knowledge modeling. XML is currently establishing itself as a successor of HTML (HyperText Markup Language) for a better modeling of texts as well as of other kinds of data. There are several reasons for this evolution..."

  • [August 29, 2000] "Constraints-preserving Transformation from XML Document Type Definition to Relational Schema." By Dongwon Lee and Wesley W. Chu (Department of Computer Science, University of California, Los Angeles, CA 90095, USA; Email: {dongwon,wwc}). UCLA CS-TR 200001 (Technical Report). 22 pages (with 21 references). Also, paper presented at the 19th International Conference on Conceptual Modeling (ER), Salt Lake City, Utah, October, 2000. "As the Extensible Markup Language (XML) is emerging as the data format of the internet era, more needs arise to efficiently store and query XML data. One way towards this goal is using relational databases by transforming XML data into relational format. In this paper, we argue that existing transformation algorithms are not complete in the sense that they focus only on structural aspects, while ignoring semantic aspects. We show the kinds of semantic knowledge that need to be captured during the transformation in order to ensure a correct relational schema at the end. Further, we show a simple algorithm that can: (1) derive such semantic knowledge from the given XML Document Type Definition (DTD), and (2) preserve the knowledge by representing it in terms of semantic constraints in relational database terms. By combining the existing transformation algorithms and our constraints-preserving algorithm, one can transform an XML DTD to a relational schema where correct semantics and behaviors are guaranteed by the preserved constraints. Our implementation and complete experimental results are available from [XPRESS Home Page]... [Conclusion:] Since the schema design in relational databases greatly affects the query processing efficiency, how to transform the XML DTD to its corresponding relational schema is an important problem. 
Further, due to the XML DTD's peculiar characteristics and its incompatibility between the hierarchical XML and flat relational model, the transformation process is not a straightforward task. After showing a variety of semantic constraints hidden implicitly or explicitly in DTD, we presented two algorithms on: 1) how to discover the semantic constraints using one of the existing transformation algorithms, and 2) how to re-write the semantic constraints in relational notation. Then, using a complete example developed through the paper, we showed semantic constraints found in both XML and relational terms. The final relational schema transformed from our CPI algorithm not only captures the structure, but also the semantics of the given DTD. Further research direction of using the semantic constraints towards query optimization and semantic caching is also presented." Available in PDF format. [cache]
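The core idea of constraint preservation, that cardinality information in a DTD content model should surface as relational constraints rather than being dropped, can be shown in miniature. This is illustrative only, not Lee and Chu's CPI algorithm: the function, table-naming scheme, and SQL fragments are assumptions made for the example.

```python
# Toy illustration of constraint-preserving DTD-to-relational mapping:
# a child with cardinality "" (exactly one) becomes a NOT NULL column,
# "?" (optional) a nullable column, and "*"/"+" a child table whose
# foreign key preserves the parent-child relationship.
def dtd_child_to_sql(parent, child, cardinality):
    if cardinality in ("", "?"):          # at most one: inline column
        null = " NOT NULL" if cardinality == "" else ""
        return "ALTER TABLE %s ADD COLUMN %s VARCHAR%s;" % (parent, child, null)
    # '*' or '+': separate table with a foreign key back to the parent
    return ("CREATE TABLE %s_%s (%s_id INT REFERENCES %s, value VARCHAR);"
            % (parent, child, parent, parent))

print(dtd_child_to_sql("book", "title", ""))     # required -> NOT NULL
print(dtd_child_to_sql("book", "subtitle", "?")) # optional -> nullable
print(dtd_child_to_sql("book", "author", "*"))   # repeating -> child table
```

A purely structural mapping would emit the same tables and columns but lose the NOT NULL and referential constraints, which is exactly the incompleteness the paper criticizes.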

  • "MODEL-ECS: Executable Conceptual Modelling Language." By Dickson Lukose. "In this paper, the author describes a graphically based executable conceptual modelling language called MODEL-ECS, for operationalising the Knowledge Analysis and Design Support (KADS) methodology. The main contribution of this paper is in the development of all the necessary constructs for modelling executable Problem Solving Methods (PSMs). These constructs include: conditional construct; while loop; repeat loop; and case construct. Knowledge passing between Knowledge Sources in a PSM is achieved using coreference links and lines of identity. The advantage of using MODEL-ECS comes from the expressibility provided by Conceptual Graphs, and the executability that is provided by the Executable Conceptual Structures."

  • CommonKADS. CML2 (Conceptual Modelling Language) is the CommonKADS knowledge modelling language.

  • Operational Conceptual Modeling Language (OCML). "The OCML modelling language supports the construction of knowledge models by means of several types of constructs. It allows the specification and operationalization of functions, relations, classes, instances and rules. It also includes mechanisms for defining ontologies and problem solving methods, the main technologies developed in the knowledge modelling area. About a dozen projects in KMi are currently using OCML to provide modelling support for applications in areas such as knowledge management, ontology development, e-commerce and knowledge based system development. OCML modelling is also supported by a large library of reusable models, providing a useful resource for the knowledge modelling community... Monica Crubezy has recently produced a customisation of Protégé-2000, which allows OCML ontologies to be used with UPML models. Wenjin Lü is working on an XML syntax for OCML, which will be released soon."

  • "An Overview of the OCML Modelling Language." By Enrico Motta. Paper presented at the 8th Workshop on Knowledge Engineering Methods and Languages (KEML '98). January 1998.

  • "Intensional and Extensional Languages in Conceptual Modelling." By Marko Niinimäki (Department of Computer Science, University of Tampere, Finland). In Proceedings of the Tenth European-Japanese Conference on Information Modelling and Knowledge Bases. Hotelli Riekonlinna, Saariselka, Lapland, Finland, May 8-11, 2000. "In conceptual modelling, there are many competing views of the role of the modelling language. In this paper, we propose a clarifying classification of different kinds of languages. This classification, based on the semantical background theory of each kind of language, divides the modelling languages into three categories: extensional modelling languages, languages based on concept calculus (intensional languages) and hybrid languages. The classification provides the background for studies of applicability of a modelling language. Based on an example, we observe that some features of the intensional approach are actually only terminologically different from those of the extensional one. We observe, too, that because of the clear semantic background, hybrid languages seem promising. Using them in conceptual modelling would benefit from a good methodology."

  • [May 21, 2001] "The Model Of Object Primitives: Representation of Object Structures based on State Primitives and Behaviour Policies." By Nektarios Georgalas (BT Adastral Park, B54/Rm125, Martlesham Heath, IPSWICH, IP5 3RE). In Succeeding with Object Databases: A Practical Look at Today's Implementations with Java and XML, edited by Roberto Zicari and Akmal Chaudhri. John Wiley and Sons Publishers, 2000. ISBN 0471383848. "In contemporary business environments, different problems and a variety of diverse requirements compel designers to adopt numerous modelling methodologies that use semantics customised to suit ad-hoc needs. This fact hinders the unanimous acceptance of one modelling paradigm and lays the ground for the adoption of customised versions of some. Based on this principle, we devised and present the Model of Object Primitives which aims at providing a minimum as well as generic set of semantics without compromising expressive capability. It is a class-based model that accommodates the representation of static and dynamic characteristics, i.e., state and behaviour, of objects acting within the problem domain. It uses classes and policies to represent object state and behaviour and collections to collate them into complex structures. In this paper we introduce MOP and provide an insight into its semantics. We examine three case studies that use MOP to represent the XML, ODMG and Relational data-models and also schemata which are defined in these models. Subsequently, another two case studies illustrate practically how MOP can be used in distributed software environments to customise the behaviour or construct new components based on the powerful tool of behaviour policies... MOP, the Model of Object Primitives, is a class-based model that aims at analysing and modelling objects using their primitive constituents, i.e., state and behaviour. MOP contributes major advantages: (1) Minimal and rich. 
The semantics set includes only five basic representation mechanisms, namely, state, behaviour, collection, relationship and constraint. These suffice for the construction of highly complex schemata. MOP, additionally, incorporates policies through which it can express dynamic behaviour characteristics. (2) Cumulative and expandable concepts. The aggregation mechanism introduced by the Collection Class allows for the specification of classes that can incrementally expand. Since a class can participate in many collections, we can create Collections where one contains the other such that a complex structure is gradually produced including state as well as behaviour constituents. (3) Reusable concepts. A MOPClass can be included in more than one Collection. Therefore, concepts modelled in MOP are reusable. As such, any behaviour that is modelled as a combination of a Behaviour Class and a MOP Policy can be reusable. This provides for the usage of MOP in modelling software components. Reusability is a principal feature of components. (4) Extensible and customisable. MOP can be extended to support more semantics. Associating its main modelling constructs with constraints, more specialised representation mechanisms can be produced. (5) Use of graphs to represent MOP schemata and policies. MOP classes and relationships within a schema could potentially be visualised as nodes and edges of a graph. MOP policies are described to be graph-based as well. This provides for the development of CASE tools, similar to MOPper, which alleviate the design of MOP-based models. It is our belief that MOP can play a primary role in the integration of heterogeneous information systems both conceptually and practically. Conceptually, MOP performs as the connecting means for a variety of information models and can facilitate transformations among them. It was not within the paper's scope to study a formal framework for model transformations. 
However, the XML, ODMG and Relational data-model case studies give good evidence that MOP can be used efficiently to represent the semantics of diverse modelling languages. This is a necessary condition before moving on to the study of model transformations. Practically, MOP provides effective mechanisms to manage resources within a distributed software environment. Both practical case studies presented above show that MOP can assist in the construction of new components or in customising the behaviour of existing components. This is because MOP aids the representation and, therefore, the manipulation of context resources, state or behaviour, in a primitive form. Moreover, the adoption of policies as the means to describe the dynamic aspects of component behaviour enhances MOP's role. Consequently, it is our overall conclusion that the Model of Object Primitives constitutes a useful paradigm capable of delivering efficient solutions in the worlds of data modelling and distributed information systems." See especially Section 4.1, "XML in MOP." Similarly: "An Information Management Environment based on the Model of Object Primitives."
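The five MOP primitives described in the abstract (state, behaviour, collection, relationship, constraint) and the way collections nest to build complex schemata can be illustrated with a small sketch. This is a hypothetical rendering, not the paper's actual API: the class names (`StateClass`, `BehaviourClass`, `Collection`) and the XML-like "Book" example are invented here for illustration, following the paper's terminology.

```python
# Hypothetical sketch of MOP-style primitives; names are illustrative,
# not taken from the paper. State and behaviour are modelled as primitive
# classes, and Collections aggregate them into complex, nestable schemata.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class StateClass:            # static characteristic: a named, typed value
    name: str
    type_: type

@dataclass
class BehaviourClass:        # dynamic characteristic: an operation plus its policy
    name: str
    policy: Callable         # a MOP "policy" expressing dynamic behaviour

@dataclass
class Collection:            # aggregates state, behaviour, and other collections
    name: str
    members: list = field(default_factory=list)

# Represent something like an XML "book" element as a Collection of primitives.
title = StateClass("title", str)
price = StateClass("price", float)
discount = BehaviourClass("apply_discount", lambda p, rate: p * (1 - rate))

book = Collection("Book", [title, price, discount])
catalogue = Collection("Catalogue", [book])   # collections nest, expanding incrementally

print([m.name for m in book.members])   # → ['title', 'price', 'apply_discount']
```

Because `book` can also be placed in further collections, the same modelled concept is reusable across schemata, which is the reuse property the abstract emphasises.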

  • "A Summary of the ER'97 Workshop on Behavioral Modeling." "Conceptual models are not just for databases any more. From its genesis in data modeling, the field of conceptual modeling has broadened to include behavioral constructs. The advent of technologies such as object orientation, active databases, triggers in relational databases, workflow systems, and so forth has placed greater emphasis on the need to model behavioral aspects of systems in addition to structural aspects. The literature reflects an increasing interest in conceptual models that cover both system structure and system behavior. The problem of how to design a database system based on a semantic data model is well understood. The focus of traditional design is on issues such as constraint satisfaction, information redundancy, access times, etc. We apply well-studied information-preserving transformations (such as normalization or denormalization) to arrive at a database with the characteristics we desire. However, when we add behavior to the conceptual model, we introduce additional design challenges that are less well understood, such as controlling the amount of concurrency, optimizing communications between active components, ensuring correct synchronization of active components, satisfying real-time constraints, etc. Researchers are devoting increasing energy to the problems of behavioral modeling in conjunction with traditional conceptual data modeling. Behavioral modeling is not new, but its tighter integration with traditional conceptual modeling has opened new questions and opportunities..."
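The workshop summary's central point, that behavioral constructs such as active-database triggers can be attached to a structural conceptual model, can be sketched as follows. This is a minimal illustration under invented names (`Entity`, `on_update`, `flag_overdraft` are hypothetical), not a construct from any system the summary describes.

```python
# Hypothetical sketch: a structural entity with behavioral rules (triggers)
# attached, in the spirit of triggers in active/relational databases.
from typing import Callable

class Entity:
    def __init__(self, **attrs):
        self._attrs = dict(attrs)            # structural part: named attributes
        self._triggers: list[Callable] = []  # behavioral part: rules fired on change

    def on_update(self, trigger: Callable):
        self._triggers.append(trigger)

    def set(self, name, value):
        old = self._attrs.get(name)
        self._attrs[name] = value
        for t in self._triggers:             # fire behavior after each structural change
            t(self, name, old, value)

    def get(self, name):
        return self._attrs[name]

# Structural model: an account with a balance and a derived flag.
account = Entity(balance=100.0, overdrawn=False)

# Behavioral model: a trigger maintaining the derived attribute.
def flag_overdraft(entity, name, old, new):
    if name == "balance":
        entity._attrs["overdrawn"] = new < 0

account.on_update(flag_overdraft)
account.set("balance", -25.0)
print(account.get("overdrawn"))   # → True
```

Even in this toy form, the design challenges the summary names are visible: if several triggers fire on one update, their ordering, synchronization, and any cascading updates they cause must be controlled, which is exactly where behavioral modeling becomes harder than purely structural design.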
