CONFERENCE SESSION DESCRIPTIONS
This 6-track program, including the following 60 sessions,
takes place Tuesday, March 6 through Thursday, March 8.


Tuesday, March 6


C1: Data Architecture at USAA
"Pouring" the Foundation for the Information Age
 
Andres Perez 
Enterprise Data Architect
USAA

USAA has been a customer-driven company for over 70 years. It is an aggressive employer of Information Technology and has now recognized that the challenges brought about by the Information Age demand significant change. USAA has recognized that data integrity compromises made through the years, in a furious attempt to develop needed IT applications, have resulted in an unmanaged data replication environment. This environment is characterized by excessive and uncontrolled redundancy, incoherent information, hundreds of scheduled and ad-hoc conflicting reports, extremely large costs associated with integration of new applications and purchased packages, "departmentalized" data files, "data specialists," ad-hoc Tiger Teams launched on "data quests," etc.

The cost of not managing data exceeded the "pain threshold" of the company. Since 1997, USAA has been working on the development and deployment of capabilities that will allow it to manage its data resource effectively. These include better processes, such as quality management and data-sharing management; new technical direction; and ultimately, consistent, reliable, and easy-to-use data. The activities include the implementation of an enterprise-wide Data Management Organization to establish policies, standards, guidelines, etc., for the implementation of IT applications. Efforts also include the deployment of the Integrated Data Clearinghouse, a major technological deployment that eases data sharing in a controlled replication environment.


C2: Data Quality as a Profit Center  
Wendy Wood
Data Quality Analyst
SBC Services

The highly interactive world of eBusiness demands accessible, complete, and correct data. The poor quality and inaccessibility of legacy data are frustrating to both the business and IT. The current state of the data in many companies is redundant and ‘out of sync’, little understood, and in many cases still hidden in batch processes. To adequately support eBusiness, data needs to be migrated to systems where it is accessible and ‘in sync’ with other instances of the same data. To create these systems, data must be analyzed and its rules understood in a timely fashion. Over the last several years our Data Quality team has used domain profiling to help users get to know their data quickly and to understand its relationships. We have used these techniques to speed up and improve the data in replacement operational systems as well as in data warehouses, and we are beginning to support eBusiness implementations.

There are some techniques we have found useful for analyzing data that can significantly speed up data analysis tasks and contribute to improving overall data quality. We will share some of these techniques, some things we have learned along the way, and some of our users' experiences. Most importantly, we have turned our Data Quality Team into a DATA QUALITY PROFIT CENTER. Our clients are actually asking for our services, and the savings to the company are phenomenal!
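By way of illustration only, a few lines of Python can perform the kind of domain profiling described above, tabulating the distinct values and their frequencies in each column of an extract. This is a minimal sketch, not the team's actual tooling, and the file name is hypothetical:

    import csv
    from collections import Counter

    def profile_domains(path):
        """Tabulate the distinct values appearing in each column of an extract."""
        with open(path, newline="") as f:
            reader = csv.DictReader(f)
            counters = {name: Counter() for name in reader.fieldnames}
            for row in reader:
                for name in counters:
                    counters[name][(row.get(name) or "").strip() or "<missing>"] += 1
        return counters

    # Example: print the ten most common values in each column.
    for column, counts in profile_domains("legacy_extract.csv").items():
        print(column, counts.most_common(10))

Even this crude profile quickly exposes unexpected codes, default-value abuse, and missing data, which is often all that is needed to start a conversation with the data's owners.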


C3: Introduction to the Unified Modeling Language  
Terry Quatrani 
Rose Evangelist 
Rational Software Corporation

Modeling has been an essential part of engineering, art and construction for centuries. Complex software and database designs that would be difficult for you to describe textually can readily be conveyed through design diagrams. Each diagram focuses on one aspect of your application. Modeling provides three key benefits: visualization, complexity management and clear communication. UML stands for Unified Modeling Language and is the standard language for visualizing, specifying, constructing, and documenting the artifacts of a software-intensive system.

This session will introduce the key notational elements and diagrams of the UML. The UML is not just a language for application programmers; it is used by everyone involved in a software development project (e.g., business analysts, data analysts, application developers, testers).


C4: Implementation/Use of Operational Meta Data 
to Improve Data Quality in the Data Warehouse

Michael Jennings
Architect/Manager (specializing in business intelligence, data warehousing, and eCRM)
Hewitt Associates

Incorrect interpretation or use of information can often result from decision support architectures that fail to take advantage of opportunities to improve data quality and identification in the data warehouse. This missed opportunity often leads to additional time and expense spent reconciling and auditing information in the warehouse. Incorporating operational meta data into the decision support data model design and data acquisition processes can correct this situation by providing a means to measure data quality directly at a row level of granularity. The benefits of operational meta data use include source system identification, data quality measurement, and improved management of ETL processes and database administration. Key messages (see the sketch following this list):

- Provides a description of the different types of operational meta data columns
- Provides strategies for incorporating operational meta data into the DSS data model and ETL designs
- Provides examples of administrative and auditing methods that use operational meta data
- Provides the audience with an understanding of the importance of operational meta data
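As a purely illustrative sketch (the session defines its own column types), row-level operational meta data is often carried as a handful of audit columns on every warehouse table. The table and column names below are hypothetical:

    # A hypothetical fact table carrying row-level operational meta data.
    # Each audit column records where the row came from and how it was loaded.
    CREATE_SALES_FACT = """
    CREATE TABLE sales_fact (
        sale_id        INTEGER NOT NULL,
        sale_amount    DECIMAL(12,2),
        -- operational meta data columns (names are illustrative):
        source_system  VARCHAR(20)  NOT NULL, -- which feed supplied the row
        load_batch_id  INTEGER      NOT NULL, -- the ETL run that inserted it
        load_timestamp TIMESTAMP    NOT NULL, -- when it arrived
        dq_status      CHAR(1)      NOT NULL  -- whether it passed quality checks
    )
    """

With such columns in place, auditing a suspect row or re-running a failed batch becomes a simple query on load_batch_id rather than a forensic exercise.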


C5: A Repository Model
David Hay
President
Essential Strategies

This presentation shows a proposed model for a "metadata repository", in terms of the specific things expected to be kept track of in such a repository. It will include analysis objects, such as entities and attributes, design objects, such as tables and columns, and, if time permits, a section on business rules. 

For several years now, the speaker has been searching for a catalogue to use for storing the "data about data" that are required to support a data warehouse or any major application. Alas, it isn't called a "catalogue" any more, or even a "data dictionary". It is now called a "metadata repository", and in keeping with this new high-falutin' name, actual examples are way more complex and abstract than seems to be really needed.

Now, no one has ever accused David Hay of being afraid to be abstract when the modeling situation required it. But both the commercial repositories and the generic models being promoted by the likes of Microsoft, the Metadata Coalition, and the Object Management Group take this too far. The fact of the matter is that, in most situations, there are relatively few, very well defined things that we want to keep track of in a catalogue. To model these things should not be very difficult. The models in this presentation took less than two days to develop.
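To make "relatively few, very well defined things" concrete, here is a minimal sketch in Python of the kinds of objects such a catalogue might track; it is our illustration, not Mr. Hay's actual model:

    from dataclasses import dataclass, field

    # Analysis objects
    @dataclass
    class Attribute:
        name: str

    @dataclass
    class Entity:
        name: str
        attributes: list = field(default_factory=list)

    # Design objects
    @dataclass
    class Column:
        name: str
        datatype: str

    @dataclass
    class Table:
        name: str
        columns: list = field(default_factory=list)

    # The repository records which design object implements which analysis object.
    @dataclass
    class Implementation:
        entity: Entity
        table: Table

    party = Entity("PARTY", [Attribute("name"), Attribute("birth date")])
    party_t = Table("PARTY", [Column("name", "VARCHAR(60)"), Column("birth_dt", "DATE")])
    repository = [Implementation(party, party_t)]

The whole metamodel is a handful of well-named things and the links between them, which is the presentation's point.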


C6: XML Without Fear  
Alan Perkins  
Vice President, Client Services  
Visible Systems Corporation  

This presentation describes methods and techniques for managing meta data for XML-based applications easily and effectively. These include Corporate Portals, Enterprise Application Integration, Business-to-Business Communication, e-Commerce, and a myriad of other applications. XML (eXtensible Markup Language) is becoming the new standard language for building Corporate Portals, integrating legacy databases, and facilitating B2C (Business to Customer) and B2B (Business to Business) data communication. The single most important factor in successfully using XML is meta data management: definition, documentation, and deployment. Meta Data management is the only practical, effective way for enterprises to use XML for all of their data sources, data stores, data communication, and information management applications. It is also the easiest. This presentation describes an approach to meta data management that translates directly to useable, quality XML (easy to create, easy to manage, and easy to change).


C7: Data Management Support for Enterprise Architecture  
Brett Champlin  
Architecture Consultant  
Allstate Insurance Company  

Enterprise Architecture is an information-based, information-intensive discipline. If this information is important to the business, it must be captured, stored, and managed. Like a "Customer database" or a "Product database," an "Architecture database" is critical for a successful Enterprise Architecture program.

This presentation will describe how to build, manage and leverage a repository for managing Enterprise Architecture components (models, objects, data, relationships). Enterprise Architecture is comprised of Business, Information, Application and Technology Architecture Frameworks. To be successful, EA requires a repository (or knowledgebase) that provides for reuse, integration, and dissemination of the underlying components. This talk will present an example EA repository and a survey of current tools capable of providing these functions.  


C8:  Business Rule Specification, Validation and Transformation: Advanced Aspects   
Terry Halpin  
Technical Lead in Database Design  
Microsoft Corporation
 

Although business rules are fundamental to information modeling, harvesting their full potential can be challenging in practice. At the analysis level, powerful notations and procedures are needed to capture all the relevant rules and validate them with domain experts. At the design level, judicious choices are needed between alternative model representations to ensure an optimized implementation. This presentation provides practical guidance for meeting these challenges, focusing on advanced aspects of business rules specification, validation and transformation. Rule visualizations in UML, ER and ORM are illustrated and compared so that users of any of these notations can make use of the principles discussed.     

Among other things, the presentation reveals some fundamental but previously unrecognized problems with UML and extended ER in the context of n-ary relationships. Although ORM is immune to these problems, the presentation includes practical advice for overcoming them regardless of the notation being used. It also illustrates how to supplement UML/ER with notes, annotations, and verbalizations to emulate ORM's richer rule syntax. Hence the presentation is also valuable to anyone who prefers to model in UML or ER rather than ORM.


C9: Logical Process Modeling – A Companion to a Logical Data Model  
Anne Marie Smith  
Assistant Professor  
LaSalle University  

Data does not exist in a vacuum; it is acted upon by processes. To fully understand the data and the meta data, it is necessary to also understand and document the processes that affect the data. An excellent way to gain this understanding and to prepare to implement an application (transaction processing, data warehousing, or electronic commerce) is to carefully and completely model the business processes in conjunction with the relevant data. Business processes represent the flow of data through a series of tasks that are designed to result in specific business outcomes. This presentation will explain the concepts of business processes and logical process modeling and the interaction between data and process. The presentation draws on actual experiences in business process analysis and modeling in a variety of situations, for different types of systems, across several industries. This presentation should be of interest to all data professionals, since data is dependent upon processes for its existence. Data professionals can learn much by engaging in process modeling while modeling data, and by doing so, increase the relevance of data management to the system architects, designers, and programmers who are traditionally concerned with processes.


C10: Redefining Meta Data Strategy in the 21st Century  
Ron Klein  
Manager, Data Systems & Architecture  
Carswell Thomson Professional Publishing

This paper presents two major directions and required innovations in the field of meta data. They are a consequence of the evolution of today's and near-future technology, such as the Web and intelligent agents. The first innovation concerns the development of specialized engines to help data management groups map enterprise-internal meta data to common industry meta data; this engine is named the Meta Data Recognition Device. The second innovation, and perhaps the one that will impact society as a whole, is the Information Classification Schema. This classification schema will be used the same way as the botanical and zoological classification of species. Taught at the high school and university levels, it will give future professionals a common understanding and interpretation of information categories. The Information Classification Schema and the Meta Data Recognition Device together will allow new uses of meta data that will position data management for an unforeseen future. The attendee will learn:

The importance of this theme lies in the recognition that, in the e-World, there is a convergence between data management and publishing concepts and practices. There is also an enormous opportunity to integrate the hierarchical structure of mark-up languages, like XML, with relational structures like data modeling. It is also a call for further investigation into how data management professionals will adapt to innovative methods and approaches.


C11: Build Your Own Meta Data Repository  
Joseph Newcum  
Senior Data Architect, Enterprise Data Management Group  
Bank One
 

Purchased meta data repositories are not always a viable solution to a meta data maintenance problem. Most are costly, may require heavy modification to be useful, and still may only partially fulfill all stakeholders' needs. A "home-grown" repository solution can avoid all of these issues. This presentation discusses why one large organization decided to build its own web-based meta data repository from scratch, the technologies chosen, the evolution of a viable meta-model, the methodology developed, the present status and future direction of the repository, and lessons learned along the way. Attendees will take away from this presentation:


C12: The Role of Data Administration in Managing an Enterprise Portal
Arvind Shah
President and Managing Principal
Performance Development Corporation

The Enterprise Portal is a central gateway to the processes, databases, systems and workflows of an enterprise. When personalized to the job responsibilities of employees via the Intranet, the enterprise portal provides a seamless, single point of access to all of the resources that employees need to do their jobs. When further personalized securely via the Internet and Extranets to the interests of suppliers, customers and business partners, the enterprise portal becomes the integrating conduit of the many disparate databases, systems and workflows each enterprise uses to carry out business with others. It also becomes a single place to manage rapid enterprise change.

Implementation of an enterprise portal requires interfaces with legacy systems and data warehouses. Portal architecture planning and modeling are required for the portal design. The configuration of the portal continuously changes as the e-business changes. Meta data, therefore, will play a key role in maintaining and managing an enterprise portal on an ongoing basis. The presentation addresses the issues Data Administration has to address in order to assure the successful functioning of an enterprise portal.


C13: Developing a Corporate Data Architecture in a Federated World
Deborah Henderson, IT Architect, Ontario Hydro Networks, Inc.
and Vladimir Pantic, Consultant, IBM Canada Ltd.

This session will review the modeling constructs necessary to control a federated enterprise environment (OLTP and DW), and the processes and options that can be put in place to support the architecture.

Topics include:

- The corporate environmental data administration model: data architectural compliance and processes, "who is doing what" during the development and validation of the models, and the model vitality process

- Enterprise conceptual modeling

- Logical modeling: ER modeling as an intermediate step toward developing a Dimensional Model (DM). Companies are developing data warehouses with extensive help from external resources who are not necessarily experts in the business; the ER model helps the development team understand the business concepts within the organization

- Dimensional modeling and DSS methods: how to use a dimensional model with different DSS methods

- Enforcing standards: naming conventions, normalization, data quality, conformed dimensions

- Data ownership and stewardship: building the "horizontal" as opposed to "vertical" organization, and its impact on the data in the organization

You will learn:


C14: Facilitation and the Successful Architect  
Shelley Lieberman  
Principal Consultant  
Everware, Inc.  

This presentation covers the successful hands-on development of an enterprise architecture and process improvement plan for the Alcoholic Beverage Control Agency (ABC) using facilitation techniques. It covers how and when facilitation was used to make enterprise improvements in both manual and automated functions. The first step was to develop an as-is view of the agency with documented issues. The next step was to produce the to-be views of the agency, including an information architecture and an e-commerce technology plan. Facilitation with the business experts and IT was used to decide the to-be views. This presentation will cover how we decided on the facilitation sessions, what subject matter was covered in the sessions, and the agendas used. The critical success factors for these facilitation sessions will then be discussed, such as proper planning, user involvement, IT involvement, strong alignment with the business, and a feasible plan.


C15: The Practical Use of a Universal Data Model in the Data Warehouse
David Lepley
Data Analyst, Data Integration Services
Tyco Electronics 

During the past year we have been upgrading our data warehouse architecture at Tyco Electronics. For all intents and purposes we have added an ODS that is based on an abstract/universal data model. It is an ODS in the sense that it is a near-current copy of integrated data from authoritative sources. It contains the business rules pertaining to the corporate and local company hierarchies (i.e., a generic taxonomy structure) and meta data about other master data such as Product, Customer, Person, and Organization data. It also contains the master data, itself (i.e. customer and product data), but not transactional data (e.g., invoices, purchase orders, etc.). The business rules are stored in a home-grown active repository and are used to validate the master data in a data acquisition facility.

Meta data and data are stored in a central database according to both the local and corporate business rules. Our primary business driver is Tyco's plan to quickly acquire and integrate several other companies in the same industry. These companies will be permitted to act locally, according to local rules, using existing systems, but report globally, based on a corporate business model. Reporting locally, based on the local business model and/or the corporate business model, will also be supported by the data warehouse. Data Integration Services is not aware of who is being acquired until the acquisition is publicly announced. We must be in a position to respond quickly on very short notice.

Now that it is pretty well complete, we can acquire and integrate data fairly quickly with very little custom programming, if any. The data is available via intranet inquiries, which need no changes to accommodate new acquisitions. The only applications that access the data in the database are the intranet inquiries; a maintenance application for maintaining hierarchies; a maintenance application for maintaining the roles that people play in regard to products, customers, and the organization; a batch application for acquiring part, customer, and local coding structures; and a batch application for extracting data. Common APIs were developed for validating, populating, and accessing data. The APIs shield developers from the complexities of the abstract data model. While this code is very complex, it only needs to be written once.

All data is stored as entity occurrences, attribute values, business IDs, or relationships. We have three kinds of relationships: codes, cross-references, and roles. The data is integrated using surrogate keys for relationships. This separates the data from the various ID structures used at the various local companies. Our structures are primarily a staging area for authoritative data. Operational applications subscribe to extracts and store the data locally.
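A minimal sketch of what such generic storage might look like as relational structures; the table and column names are our invention, not Tyco's actual schema:

    # Hypothetical generic structures: every business fact is a row in one of
    # four tables, keyed by surrogate keys rather than local ID structures.
    GENERIC_SCHEMA = """
    CREATE TABLE entity_occurrence (
        occurrence_sk  INTEGER PRIMARY KEY,  -- surrogate key
        entity_type    VARCHAR(30)           -- 'CUSTOMER', 'PRODUCT', ...
    );
    CREATE TABLE attribute_value (
        occurrence_sk  INTEGER,              -- owning occurrence
        attribute_name VARCHAR(30),
        attribute_val  VARCHAR(255)
    );
    CREATE TABLE business_id (
        occurrence_sk  INTEGER,
        id_scheme      VARCHAR(30),          -- a local company's coding structure
        id_value       VARCHAR(60)
    );
    CREATE TABLE relationship (
        from_sk        INTEGER,
        to_sk          INTEGER,
        rel_kind       VARCHAR(10)           -- 'CODE', 'XREF', or 'ROLE'
    );
    """

Because local identifiers live in business_id rows rather than in the keys themselves, a newly acquired company's coding structures can be loaded as new id_scheme values without altering the schema.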

Attendees will be provided with:


C16: Understanding and Managing Reference Data  
Malcolm Chisholm  
Manager  
Deloitte & Touche

Reference data is important because it is found in all databases. It is widely agreed that it must be free from data quality defects, but it is rarely treated as a class of data in its own right. This presentation discusses the diverse nature of reference data, and what unites it. The need for management at the enterprise level is covered, with a series of practical steps on how to do this. Finally, an Internet-based approach to automatic synchronization of reference data across remote databases is discussed (this was implemented by the presenter and his colleagues at the United Nations). The attendee will learn:
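As a toy illustration of the synchronization idea (the abstract does not detail the UN implementation, so the URL, file format, and reconciliation rules here are assumptions), a subscribing site might periodically fetch a published master code table and compute the changes to apply locally:

    import csv, io, urllib.request

    def fetch_master(url):
        """Download a published reference table as {code: description}."""
        with urllib.request.urlopen(url) as resp:
            text = resp.read().decode("utf-8")
        return {row["code"]: row["description"]
                for row in csv.DictReader(io.StringIO(text))}

    def reconcile(local, master):
        """Return the inserts, updates, and deletes needed to match the master."""
        inserts = {c: d for c, d in master.items() if c not in local}
        updates = {c: d for c, d in master.items() if c in local and local[c] != d}
        deletes = [c for c in local if c not in master]
        return inserts, updates, deletes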


C17: Architecting and Implementing a Web-Based Corporate 
Meta Data Repository (CMR) at the Census Bureau  
Gail Wright, Technical Director for Federal Consulting, Oracle Corporation, 
and Program Manager, Corporate Meta Data Repository project at the Bureau of Census

The speaker will present the Census Bureau Corporate Meta Data Repository (CMR) application, and will discuss the architecture, technology, design, business uses, and management issues with this system. The CMR is based on standards: ISO/IEC 11179, FGDC, and Dublin Core, and includes a Data Element Registry, Survey Designer, Data Product Registry, Data Set Registry, and more, supporting the web-enabled maintenance, browsing, and XML interchange of survey meta data.


C18: Building the XML Meta Data Repository  
David Plotkin
Senior Data Administrator
Longs Drugs

Meta data repositories are useful for keeping track of the meta data describing your systems, and for facilitating "where-used" analysis. If you are implementing XML, the document type definitions (DTDs) and their relationships to the elements and attributes that make them up are valuable meta data. With an XML meta data repository, you can track the structure of your DTDs, make changes to that structure and regenerate the DTDs, and even create XML documents by linking the elements and attributes to their physical implementations in a database, copylib, etc. You can even expose your XML meta data through a web browser to make it easy to access.
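As a rough sketch of the "regenerate the DTDs" idea (element names and the repository structure are invented for illustration), each element's content model and attributes can live as repository entries, with the DTD text rendered from them on demand:

    # Hypothetical repository contents: one entry per element.
    ELEMENTS = {
        "order":     {"content": "(customer, line-item+)", "attrs": ["id"]},
        "customer":  {"content": "(#PCDATA)", "attrs": []},
        "line-item": {"content": "(#PCDATA)", "attrs": ["sku", "qty"]},
    }

    def render_dtd(elements):
        """Render DTD text from the repository's element descriptions."""
        lines = []
        for name, spec in elements.items():
            lines.append(f"<!ELEMENT {name} {spec['content']}>")
            for attr in spec["attrs"]:
                lines.append(f"<!ATTLIST {name} {attr} CDATA #REQUIRED>")
        return "\n".join(lines)

    print(render_dtd(ELEMENTS))

Changing a structure then means updating repository entries and re-rendering, and "where-used" analysis becomes a lookup over those same entries.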


C19: The 7 Deadly Sins of CRM
Jill Dyche
Partner
Baseline Consulting Group

A recent Gartner Group study estimated that 65% of CRM programs fail. With all the hoopla surrounding Customer Relationship Management, it's almost impossible for companies to delineate a clear CRM strategy, let alone experience the benefits. Consultant and author Jill Dyche has been conducting CRM Readiness Assessments for Fortune 500 and dot-com companies alike. In this presentation she will share what, in her experience, are the major roadblocks to implementing a successful CRM initiative, including the roles played by integrated corporate data and the data warehouse.


C20: Elevating the Role of Information Resource Management in Business Effectiveness
Larry P. English  
President  
INFORMATION IMPACT International, Inc.

The organization that is not managing its information cannot manage its business. Without managed, quality information, the enterprise cannot “know” what it needs to know to understand its customers and customer needs, manage operations, analyze its performance and make the strategic decisions for the future of the enterprise. This is even more crucial for service sector organizations, such as banks, insurance and government organizations whose products are, in fact, information.

Mr. English describes how to transform and elevate your data administration or data resource management function into one embraced as a critical competency. The principles used to manage other business resources, such as human and financial resources, apply to managing information and knowledge as strategic resources. Implementation of these principles is required to transform the enterprise from an Industrial-Age organization into a competitive Information-Age organization. This presentation discusses how the organization can harness the power of today's information technology to exploit its information resources for competitive advantage and business effectiveness.


C21: Panel: Comparison of Modeling Techniques

Graham Witt

David Hay

Terry Halpin

Terry Quatrani

Entity Relationship Modeling (ER), the Unified Modeling Language (UML), and Object Role Modeling (ORM) are analytical techniques used for data and object modeling. There has been much inconsistency among practitioners as to the value and appropriate use of each technique. Our panelists will each provide an overview of one modeling methodology and then discuss the benefits and drawbacks of each technique.


C22: Meta Data - Myth and Realities  
John Ladley
President 
Knowledge InterSpace, Inc.

After almost 20 years of information management theory, movements, and gurus, why isn't anything any better? Corporations recognize information as an asset, but still can't get going. In addition, the vendor community has provided a long list of semi-useful, non-standard solutions. In this presentation, John Ladley will review the reasons why current mindsets of information management will never work, and what needs to be done to draw corporations into truly exploiting the information asset. This session is for data administration managers, information management executives, and CIOs/CKOs.


C23: Implementing Data Warehousing and Data Mart Labels in a Meta Data Repository
Patti Munier
Senior Data Analyst
United Parcel Service

Seven years ago, UPS mandated that table column names be based on data elements registered in the UPS Corporate Meta Data Repository. At the same time, Data Administration put into place three related initiatives: stricter enforcement of standards for formulating data element names; mapping of legacy data elements to standard approved elements; and association of ‘parent’ data elements to denormalized ‘children’ and related elements.

More recently, as Data Warehouse and Datamart business presentation labels were introduced, UPS standardized the components of these labels, and registered them in its Meta Data Repository with direct ties to the underlying standard approved data elements. All this enabled UPS to populate and associate ‘Business Rules’ with table columns, data elements, and the labels used in presenting the data to end-users. Coupling this with UPS’s ability to use its Meta Data Repository to analyze the impact of data elements on table structures, copybooks, and programs, UPS found itself in a position to start documenting business rules across the board. 

UPS has a globally implemented, dynamic, web-enabled Meta Data Repository that makes information about corporate data assets available to widely dispersed internal users for impact analysis, reuse, and reporting. UPS is now working on developing a beta project to store and associate Data Warehouse business rules in the Meta Data Repository.  Some key points that will be covered are:


C24: Beyond the Theory: Building a Scalable and Integrated Clickstream Analysis  
Xiaojing Wang, Director of Data Warehouse Infrastructure, CNET  
& Femi Anthony, Principal Data Warehouse Engineer, CNET

The Internet has become an integral part of many corporations today. Consequently, clickstream analysis has become critical in providing businesses with the information needed for tracking and analyzing business trends and exploring potential new opportunities. CNET has successfully delivered a high-performance system that processes 20 million page views a day, and we have been able to obtain significant ROI for the applications based on this system. This presentation shares our experience in the following areas:


Wednesday, March 7


C25: Business Information Management at J&J
Larry Dziedzic  
Information Management Architect  
Johnson & Johnson  

The presentation will focus on the process and learning experiences of beginning to implement a global Business Information Management Architecture at J&J. There are 189 J&J affiliates throughout the world, and they are all unique in many ways. The challenge is to recognize the uniqueness while at the same time beginning to pull global information stewardship details together. Because of the diversity of the affiliates, some will be ahead of the Business Information Management methodology, some will be able to use it immediately, and others will need to work it into later plans. The presentation will lay out the layering and interfacing necessary to provide an effective communication mechanism. The attendee will learn about:


C26: Measuring The Quality of Models  
Peter A. McDougall  
Senior Data Administrator  
Insurance Corporation of British Columbia

This presentation examines the issue of how to measure the quality of a data model. This presentation offers the viewpoint that since a data model is "a description of the business", then how well a model communicates that description provides the key to evaluating its quality. Thus, the criteria for evaluating a model are based upon aspects of communication.  Furthermore, since a data model is a composite object, the presentation will describe how a model's quality is actually derived from the collective quality of its components. Thus any quality measures shouldn't be applied to the model as a whole, but instead to its smaller, atomic-level pieces. As such, five (5) communications-based yardsticks - Accuracy, Clarity, Consistency, Conciseness and Completeness - will be introduced. The concepts will be described, and examples will be given on how the measures are applied to the detail parts of a model. "Summing" the lower-level quality measures produces the overall quality of the entire model.   

The presentation will also focus on the model review process. Two techniques, called Direct Feedback and Business-Based Questioning, will be described, along with how the quality measures are used with these methods. These techniques focus on understanding the business and its relationship to the message from the model. They take a non-judgmental perspective and are designed to develop a collaborative framework for working towards a quality product. Lastly, the presentation will describe how communications-based criteria ultimately produce better models. It will show that when allowed to focus on the content of the message, and not its form, modelers can create more articulate and better-quality descriptions of the business. From this presentation the audience will learn:
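One way to picture the "summing" of atomic-level measures (our sketch, not the speaker's actual instrument): score each component of the model against the five yardsticks and aggregate upward:

    YARDSTICKS = ("accuracy", "clarity", "consistency", "conciseness", "completeness")

    # Each model component is scored 0-5 on every yardstick; the model's
    # overall quality is derived from its components' scores.
    def component_score(scores):
        return sum(scores[y] for y in YARDSTICKS) / len(YARDSTICKS)

    def model_quality(components):
        return sum(component_score(s) for s in components.values()) / len(components)

    components = {
        "CUSTOMER entity":     dict.fromkeys(YARDSTICKS, 4),
        "places relationship": {**dict.fromkeys(YARDSTICKS, 3), "clarity": 2},
    }
    print(round(model_quality(components), 2))  # 3.4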


C27: Organizational and Development Strategies for Creating 
a High-ROI Enterprise Data Warehouse  

Brent Lautenschlegar  
Principal  
Reflection Technology Corporation
 

This presentation will present strategies and tactics for identifying and exploiting high-return business opportunities, for creating an effective business/IT organizational model, and for adopting a short incremental development methodology that will allow delivery of real business benefits within a very short time frame. The speaker will discuss real return-on-investment results and business transformation that he has realized as a result of leading two enterprise data warehouse development projects at two Fortune 500 companies.  Attendees will learn:


C28: Data - The Good, The Bad, and the Ugly  
Is Meta Data the Way to Knowledge Management?  

Gil Laware, Assistant Professor, Purdue University  
& Frank Kowalkowski, President, Knowledge Consultants, Inc.

Time-to-market conditions and competitive responses dramatically impact businesses' need for timely, accurate, quality distributed data. Multiple data implementations in I/S systems without a data architecture hinder a quick response. A cohesive picture of business knowledge needs to be developed from the ugly, bad, and good data implementations. The Ugly is the chaotic data implementations in which little is known about what the data is, why it exists, where it is used, and who is responsible for its quality. The Bad is the partial implementations of data and meta data using old and new technologies. The Good is a well-managed, integrated data environment, with its supporting meta data, that provides a basis for achieving the knowledge needed to meet these business challenges. The reality is that the Good, Bad, and Ugly may need to coexist for some time. How is this done, and what are the issues? Attendees will learn:


C29: Meta Data Standards at Object Management Group
Andrew Watson
Vice President and Technical Director
Object Management Group (OMG)

OMG is the home of four of the industry's most significant meta data specifications. The Common Warehouse Metamodel (CWM) provides standards for building data warehouses. The Unified Modeling Language (UML) is the pre-eminent software design notation. XML Metadata Interchange (XMI) provides both standards for exchanging meta data (such as UML designs) using XML, and a convenient way of designing XML Document Type Definitions (DTDs) using UML. Finally, the Meta Object Facility (MOF) is a standard for meta data storage closely related to the other three specifications. This talk will briefly describe OMG, its standards, and their relationship to each other.


C30: Embracing XML - Strategic Implications for Data Administrators/Architects
Peter Aiken
Institute for Data Research
Virginia Commonwealth University 

As data administrators move to embrace XML, there are a number of lessons to be learned from early experiences with the technologies. These point to a series of strategic implications for data administrators/architects, including: (1) an expanded definition of data management to include unstructured organizational data; (2) expanded data management roles in applications development using portal technologies; and (3) preparation of organizational data (including data quality) for e-business. Combined, these implications point to a more complex role for data managers. Understanding these strategic implications will better prepare organizations for the next decade.


C31: Enterprise Data Management without an Enterprise Data Model: 
Working in the Real World  
Sheri Dumire-Hamilton  
Data Administration/Database Administration Center of Excellence
Kodak

The ideal of having an enterprise data model as a guide can’t be realized in many organizations. Data administrators need to identify what particular benefits will provide the most value to their organization and then determine approaches other than an enterprise data model that may achieve that goal.   Identifying the source and authority for corporate data is a common need that can be met by savvy data administrators with good coordination and communication. Resolving issues with shared terminology and technology change requires data administrators to have credibility within the corporation and the skills or contacts within the business to address the problem. 


C32: How do You Convince Management to Fund Your Proposal?  
David Davis  
Vice-President, Enterprise Data Management Group  
Bank One  

People with technical backgrounds often stress the technical aspects of a proposal, to their detriment. The context of a proposal, its timing, and how it is presented often determine whether a good proposal is accepted or rejected. Anecdotes, analogies, marketing, and forming alliances can lead to successful, approved proposals and projects. Even the best implementation, technique, new technology, or method does not guarantee acceptance and funding.


C33: Data Warehouse Project Planning  
Sid Adelman  
Founder  
Sid Adelman & Associates

The success of data warehouse implementations has been spotty. Many organizations have been lulled into believing the nature of the data warehouse obviates any need for planning a project. This presentation includes the current state of data warehouse projects (success and failure), why project planning is critical to the success of the data warehouse, what constitutes a data warehouse project plan, and how the project plan relates to the technical infrastructure.


C34: Metadirectories vs. Meta Data Repositories  
James Jonas
Product Manager
Oracle Corporation

The Burton Group analysts first coined the term metadirectory as “the join of all the directories in the enterprise”. Directories are pervasive throughout all companies, as they manage the identity and relationships of people and resources inside e-mail systems, networks, security systems (X.509), and other LDAP-compliant systems. The investment in directories is significant, as the total number of directories is counted in the millions.

It is the job of the metadirectory to intelligently “join” disparate directories into a unified whole, thus allowing for information reuse throughout the organization. Yet the meta data world speaks of investing in meta data repositories, not metadirectories. This presentation will explore the relationship of metadirectories and meta data repositories. It will show how these two types of ‘meta’ storage facilities act as complementary technologies. It will also articulate why an understanding of metadirectories is a must for a comprehensive meta data strategy.
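A toy sketch of the “join” a metadirectory performs, merging per-person entries from multiple directories on a shared identifier (the directories and attribute names are invented; a real implementation would work over LDAP):

    # Merge entries from an e-mail directory and an HR directory into one view.
    email_dir = {"jsmith": {"mail": "jsmith@example.com"}}
    hr_dir    = {"jsmith": {"dept": "Finance", "phone": "x1234"}}

    def metadirectory_join(*directories):
        joined = {}
        for directory in directories:
            for uid, attrs in directory.items():
                joined.setdefault(uid, {}).update(attrs)
        return joined

    unified = metadirectory_join(email_dir, hr_dir)
    # {'jsmith': {'mail': 'jsmith@example.com', 'dept': 'Finance', 'phone': 'x1234'}}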


C35: Ramping Up for Meta Data and Knowledge Management
Don Soulsby
Director of Architecture Strategies
Computer Associates


From the presentation, the attendee will have a better understanding of:


C36: Building the Scalable Data "E-frastructure"
Tim McBreen
Senior Principal and E-Business Practice Leader
Knightsbridge Solutions

Today's e-business world continues to focus on scalable infrastructures that support the high transaction volumes generated through e-commerce -- not on the complex data integration issues that arise once the transaction problem has been solved.  But neglecting legacy data creates an e-business silo and compromises the powerful synergy now possible with an integrated data solution.  These questions must be considered:  

 - How do you solve the information side of the equation, i.e., how do you integrate the new e-business customer, product, and service information with the existing corporate information sources so that you have an enterprise view of your customer and associated products/services?

 - How do you build an active self-service data warehouse, sourced by all the transaction systems, that is used by your customers to answer their questions and service needs without forcing them to rely on legacy service channels (call centers, in-store resolution)?

The purpose of this presentation is to show how a scalable data e-frastructure delivers the cross-channel data integration that supports e-business strategies, including a single view of the customer and self-service options in the e-channel. Organizations are awash in data, and it's growing exponentially with the expansion into e-business. At the same time, CRM strategies are unraveling because the siloed e-channel is not integrated with legacy data sources. The solution is an integrated data e-frastructure that uses high-performance tools and technologies to deliver extreme scalability, massive throughput, robust performance, and low cost of ownership. Such architectures are based upon the principle of "Build it once, build it right, scale often," so that the solution can scale as required to meet current and future terabyte-class data requirements.

  Attendees will take away a presentation showing what a scalable e-frastructure looks like and a roadmap that will aid in prioritizing activities and building a development plan. 

            - Data sources
            - Platforms 
            - Frameworks 
            - Putting the pieces together


C37: Data Architecture on a Shoestring  
Becky Kirkpatrick  
Data Architect  
Union Pacific Technologies

"Where is the data I need?" "Where did it come from?" "What does it mean?" "Who owns it?" These were the questions needing answers, but answers were not readily at hand. Users relied on tribal knowledge to find needed data. Our solution? A "card catalog" of Union Pacific Railroad 's data. Commercial meta data repositories are neither affordable nor flexible enough to meet the need, so with three people and a few months we put together "LookUP", Union Pacific's web-enabled data resource directory. Learn about the steps that Union Pacific is taking to deliver a powerful, interactive roadmap of information resources.


C38: Mapping the UML to the Zachman Framework  
Neal Fishman  
Enterprise Architect  
Equifax, Inc.  

The UML consists of nine models plus the Object Constraint Language (OCL). The Zachman Framework for Enterprise Architecture identifies at least thirty models. This presentation will review each UML model type (use case, class, object, component, deployment, activity, statechart, collaboration, sequence, and OCL) and identify which of the Zachman cells each maps to. The presentation will then explore the use of Stereotypes to augment the native UML models, creating more model types to complete the mapping to the Framework.


C39: Managing Customer Information for CRM  
What do you need to know and how well do you need to know it?  

Danette McGilvray
Customer Information Quality Program Manager  
Agilent Technologies  

Can you claim to know your customer if the information in your systems about that customer is wrong? How can you manage the relationship with your customer if the basic processes for acquiring, maintaining, and using the customer information are not working? See how the quality of your customer information is a critical success factor for any Customer Relationship Management initiative. Leave the session with an understanding of what you need to manage the data, processes, people, and technology surrounding customer information, and its contribution to the success of CRM and any data mining activity.


C40: Just In Time Meta Data Integration
Bob Carasik  
Systems Architect  
Wells Fargo Bank  

Wells Fargo Bank is currently taking an enterprise-wide look at its meta data resources and is taking a federated approach to sharing directory, database, messaging, and other forms of meta data. Intranet search technology and repository software both have a place in this effort.

Many meta data projects founder on the difficulties of translating meta data into common formats and creating formal design documents where none exist. I propose a lightweight strategy for meta data integration. High-quality meta data frequently costs too much to provide, relative to its benefits to users. By assuming some costs in the form of human labor, meta data users such as applications integrators and development projects can get good value from lower-quality meta data resources such as intranet query results and physical design documents. If meta data is included in an enterprise portal or knowledge management effort, a great deal of benefit can be realized. Convenient tools for XML translation and schema management make it easy to leverage both internal and industry-standard message designs and interface definitions. Attendees will learn:


C41: Architectures for Marrying Online Applications with Information Repositories
Faisal Shah 
Co-Founder and Chief Technology Officer 
Knightsbridge Solutions

Many e-businesses plan to differentiate themselves from competitors by integrating data warehouses or other information repositories with their online applications. Such integration can enable a highly personalized customer experience and can materialize income-generating products; both these benefits can be integral to the success of the e-business. Though this online system/information system integration is conceptually simple, it can be technically complex. Complexity arises from the fundamentally divergent technology characteristics of these two types of systems. For example, online systems must exhibit very high availability and must service many short transactions very quickly; conversely, information systems are free to exhibit lower availability characteristics and service a small number of very long-running queries. Successfully marrying online and information systems involves converging these divergent characteristics using techniques which are not only feasible but don't break the bank. 


C42: Getting the Rest of Your Organization Ready for XML  
Korki Whitaker  
Progressive Insurance

In many organizations, there are pockets of IT professionals who know what XML is, and how it may be used to help gain competitive advantages. But what about the rest of IT, as well as business partners within the organization? How do you get them to understand why there is a push to XML, the benefits of XML, and how it will change how we communicate both internally and externally? This presentation will address these questions and explain what one large company is doing to foster an XML environment where it is appropriate.


C43: Data Modeling Contentious Issues  
Karen Lopez  
Principal Consultant  
InfoAdvisors, Inc.

A highly interactive session where attendees will evaluate the options and best practices of common and advanced data modeling issues, such as:

Participants in this session will be presented with an issue along with a range of responses or possible solutions. Participants will vote on their preferred response, then the group as a whole will discuss the results, along with the merits of each possible response. If the specific issue has been discussed in other presentations, a summary of the responses of the other groups will be presented.

The goal of this workshop is to help practitioners identify potential points of conflict in data modeling, as well as alternative approaches to resolving the issues. This presentation is targeted at experienced data modelers and assumes extensive data modeling skills.


C44: Data Stewardship - Fact or Fiction  
Diana C. Young  
President  
Applied Information Strategies

The term data stewardship, or information stewardship, has been tossed around for the past decade. Yet understanding the what, why, and how of accomplishing it remains elusive. Part of the dilemma is due to the absence of a standard model by which businesses can shape their organizations. Another part of the problem is the lack of exposure to successful implementations from which to draw lessons learned. Ultimately, the successful path to stewardship is based upon an understanding of the principles of information stewardship, aligning those principles with the business in a value-added approach, and planning and achieving both short- and long-term improvements in the business. This presentation will address:

During the presentation, Ms. Young will share her experiences in developing and implementing a corporate stewardship program during her tenure as FDIC's data administration chief. She will also highlight some of the challenges, approaches, and lessons learned in client engagements focused on developing their information stewardship and information quality environments.


C45: How to Make Your Business Processes Smarter
Ronald G. Ross
Principal
Business Rule Solutions

Most businesses today are out of touch with their business rules. They have little real sense of whether their business rules are being applied consistently – or even what they are. As a result, many organizations have little real power to adjust or redeploy their business rules quickly as the need arises. Yet the clear reality of doing business in the 21st century – with timeframes collapsing all around us – is that rapid and even real-time response is a must. 

In this presentation, Mr. Ross shows where your company's guidance process has broken down and explains what you should be doing to fix it. Step one is to "database" your business logic. Step two is to rethink your architectures and move them toward intelligent processes. Step three is to get your business rules into the hands of developers, workers, and most importantly, managers – right at their fingertips, anywhere, anytime.
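A minimal sketch of what step one can mean in practice; the rule wording and structure below are our illustration, not Mr. Ross's method. Rules kept as data can be inspected, reported on, and changed without a code release:

    # Business rules stored as data: each entry names a condition and the
    # action taken when that condition holds for a given order.
    RULES = [
        {"id": "R1", "condition": lambda o: o["amount"] > 10_000,
         "action": "require manager approval"},
        {"id": "R2", "condition": lambda o: o["customer_status"] == "delinquent",
         "action": "hold order"},
    ]

    def applicable_actions(order):
        return [r["action"] for r in RULES if r["condition"](order)]

    print(applicable_actions({"amount": 12_500, "customer_status": "good"}))
    # ['require manager approval']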


C46: Meta-Architecture and Enterprise Meta Data Management  
E. Manning Butterworth  
Senior Manager of Data Architecture  
Reynolds & Reynolds

At Reynolds & Reynolds our enterprise model contains both business and technical meta data. It ties together information all the way from company objectives to the people, systems, and data that support those objectives. The meta-architecture is the navigational map through that enterprise model. That is, the meta-architecture shows how objects are related in the nearly 20 different model types comprising the enterprise model and it explicitly shows how questions of interest can be answered by relating the individual pieces of meta data. The meta-architecture complements the enterprise model framework by explicitly identifying links from model to model and object to object. Both the meta-architecture and the enterprise model are implemented in a single tool capable of representing objects as diverse as data standards, people, and network nodes. 

A framework is a succinct representation of an information architecture and is an essential part of an enterprise model development methodology. However, the interrelationships among the models comprising the framework are implicit. The meta-architecture makes these interrelationships explicit. It thereby serves as a design aid for meta data repository development, as a communication vehicle with business stakeholders, and as a working tool for analysts collecting and organizing the meta data. The meta-architecture does even more, however. It not only organizes the meta data, it also joins the concepts of the enterprise model with the specific constructs of the tool used to represent the meta data.  Attendees will learn


C47: E-Business Chaos: Protecting Yourself Against Problem Imported Data  
Michael Scofield  
Director of Data Quality  
Experian  

As various kinds of internet and E-businesses emerge, companies are exchanging data at a greater pace, both transactionally and in bulk. In either kind of data exchange, you need to understand the quality and meaning of the data you are receiving. This is especially true where companies are starting up fast, morphing quickly, and under stress to change their business rules and policies. They, in turn, may be getting the data from someone else.

Based on the experience of a company which imports over a billion records of data monthly from external sources, Mr. Scofield will explore some of the potential dangers of bulk importation of data, and describe simple, practical techniques for protecting yourself against changes in definition, quality, completeness, etc. No matter what the medium or technology of data exchange, the dangers of importing data remain the same. As businesses buy more application packages, they lose control of their data architecture and require data warehouses to integrate all their data. Similar challenges present themselves in such transfers of data.
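In the spirit of the simple, practical techniques mentioned above (the specific checks are our assumption, not necessarily Mr. Scofield's), an import gate can profile a column in each incoming batch and flag drift against the previous batch:

    # Flag a batch whose null rate or distinct-value count drifts sharply
    # from the previous batch; this is often the first symptom of a supplier
    # changing definitions or completeness upstream.
    def drift_report(prev_profile, values):
        nulls = sum(1 for v in values if v in ("", None))
        profile = {"null_rate": nulls / len(values),
                   "distinct": len(set(values))}
        warnings = []
        if profile["null_rate"] > prev_profile["null_rate"] + 0.05:
            warnings.append("null rate jumped")
        if profile["distinct"] < prev_profile["distinct"] * 0.5:
            warnings.append("distinct values collapsed")
        return profile, warnings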


C48: Same Old Work, New Dilemma: 
A New Approach to Data Design for Interactive Web Portal Applications  

Ho-Chun Ho  
Director of Information Systems  
PointandQuote.com, a division of Kemper Insurance  

This presentation will review the lessons learned and the reality of data architecture and database design for an interactive web portal in B2B and B2C eCommerce. We will examine how PointandQuote.com designed its data strategy to accomplish high availability and performance (ranked by Holistix in the top 87th percentile on the world wide web). The audience will learn how to use conventional data design disciplines, as well as innovative techniques and tools, to provide high-speed, reliable, and scalable web applications.

Topics include:

- PointandQuote.com business requirements
- Overview of the PointandQuote.com application architecture
- Data design challenges of interactive web applications
- Our solution


Thursday, March 8


C49: Enterprise Information Architecture: "Starter Kit" Models  
Jane Carbone  
Director of Information Architecture Services  
DATANOMICS, Inc.

This presentation reflects the speaker's experience in building and using enterprise architecture frameworks to create architecture models and related data models. The presentation provides a "drill-down" for the "models" dimension of the "data" component of the "Starter Kit" architecture framework. It introduces a standardized approach to building conceptual information architecture models. It describes the link from architecture model to conceptual data model. It includes examples and guidelines for construction of Current State and Target State information architecture models. Attendees will learn:

The presentation also includes examples and guidelines for construction of Level 0 and Level n architecture models and translation to data model. Examples demonstrate the integration of e-Business, EAI and CRM.


C50: The Grammar of Business Rules
Terry Moriarty  
President  
Inastrol

New data and object modelers are taught to hunt for all the business's nouns, as these represent the most likely candidates for entities or object classes. Sentences with the pattern “Noun – Verb – Noun” probably represent relationships, while a generalization hierarchy often lurks behind sentences with an “is a” verb statement. Do other patterns exist in language that can help us in uncovering and structuring an organization's business rules? This presentation strives to discover the grammar of business rules by drawing on the Zachman Enterprise Systems Architecture Framework and the sentence diagramming technique many of us learned in high school.
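As a toy illustration of hunting for “Noun – Verb – Noun” patterns (a crude sketch of the idea, not Ms. Moriarty's methodology), even a simple pattern matcher can surface candidate entities and relationships from rule sentences:

    import re

    # Very crude: treat capitalized words as candidate entity nouns and the
    # lower-case words between them as a candidate verb phrase.
    PATTERN = re.compile(r"\b([A-Z][a-z]+)\s+([a-z]+(?:\s+[a-z]+)?)\s+([A-Z][a-z]+)")

    def candidate_facts(sentence):
        return PATTERN.findall(sentence)

    print(candidate_facts("Customer places Order"))  # [('Customer', 'places', 'Order')]
    print(candidate_facts("Employee is a Person"))   # [('Employee', 'is a', 'Person')]

The second example shows the “is a” case, which a modeler would read as a candidate generalization hierarchy rather than an ordinary relationship.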

Terry Moriarty has developed a methodology that integrates business rules analysis with the meta-data management environment to address major business concerns, such as Customer Relationship and Product information management. Her dynamic business models have been used as the basis of customer models for companies within the financial services, telecommunication, software/hardware technology manufacturing and retail consumer product industries.


C51: Conceptual Data Modeling in an Object-Oriented Process  
Scot Becker  
Principal Consultant  
InConcept, Inc.

This presentation will detail what an object oriented (OO) process is and what the pros and cons of using an OO process are. Further, this presentation will introduce a more rigorous way to model data (namely, Object-Role Modeling or ORM) and the associated business rules and requirements, and how to incorporate that rigor into an OO process resulting in better quality of analysis and design artifacts for more accurate, robust, and precise software. 


C52: A Success Story: Enterprise Customer Meta Data Definition/Implementation  
Barbara Peterson  
Enterprise Data Standards Program Manager  
Agilent Technologies

During this presentation the speaker will provide a process to define the enterprise customer meta data required to enable data sharing across businesses and central functions, including eCommerce, sales and marketing, support, and ERP. This presentation is based on 10 years of successful, hands-on experience defining and implementing the enterprise customer data model/data standards required for HP and, currently, Agilent Technologies. The program includes a single data standards process and a formal set of roles and responsibilities.


C53: eRespository for eBusiness
Warren Selkow
Siebel Corp.

eCommerce is causing businesses worldwide to change their concepts of infrastructure and of what a corporate architecture looks like. As this happens, corporations will be faced with the problems of knowing what they need to know and how to share it. These problems will lead the data management activity to realize that the nature of the repository must change to support the business's continually changing needs. This presentation will address the issues of meta data organization, location, and facilitation. Attendees will learn:


C54:  Incorporating Click Stream Analysis in Decision Support Services
The challenge in data warehousing and meta data management
Patricia Klauer, Senior Consultant, Apex Solutions, Inc
& John Murphy, Senior Consultant, Apex Solutions, Inc

On-line commercial activities offer a new set of challenges to the decision support arena. Not only is traditional marketing, transactional, and product information required, but significant knowledge of customers and customer activities can be gained. Used in the right mix, exceptionally accurate analytical and predictive models of customers and their behaviors can be developed. The new sets of operational data related to click stream activity will be useful not only in the on-line e-commerce space but also in the developing interactive video and wireless web services. Additionally, organizations founded in web space are moving to brick, as well as brick to web. The integration of these marketing and information channels presents a unique opportunity to see customers in a 360-degree view.

This session will discuss some of the unique challenges faced primarily by marketing organizations in integrating the volumes, quality, and required speed of delivery of new analytical data marts and their underlying decision support services. Topics covered include:


C55: Action Business Rules – Getting to Yes  
Judi Reeder  
Principal  
Knowledge Partners Inc.  

Action Business Rules test conditions and, upon finding them true, start a transaction or event. When capturing Action Business Rules, one of the key tasks is discovering and documenting the conditions, and their values, that impact the decision. This presentation discusses examples of decision areas where Action Business Rules were developed using facilitated sessions. Attendees will then be divided into groups to work through their own set of ‘conditional terms’ that enable making a decision.

Attendees will learn:


C56: Enterprise Model In Action  
Natalie Arsenault  
Administrative Vice-President
First Union National Bank

Most companies consider a data model a tool to develop databases, but at First Union, data modeling is used for more than just database development. First Union's Enterprise Data Model is being used to help focus business understanding and guide the business using our Construct methodology, a way of capturing key business concepts. As a follow-up to Enterprise Modeling Through Constructs (DAMA Meta Data 2000 Conference), Enterprise Model in Action will cover expanded usage of the Enterprise Model constructs at First Union and their benefit in enhancing business understanding. The talk will focus on how to leverage an enterprise model to support practical applications in various efforts.


C57: Making Sense Out of Madness: Managing Messy Data
Karen Meng  
Database Manager  
Bay Medical Management  

No matter how well your database system performs, if the quality of the data stinks, so will the value of your database. Within two months, Bay Medical Management changed its paradigm of how to manage data. It implemented a "Data Clarification Procedure" to ensure high-quality data for radiologists and other medical professionals. This presentation will describe the real-life pitfalls and successes of creating the "Data Clarification Procedure" for a medical claims processing system. The presentation will explain how Bay Medical Management, LLC is using the "Data Clarification Procedure" to develop a nationwide data repository for radiologists. Attendees will learn:


C58: Synchronizing Your Operational Systems 
with Your Enterprise Information Portal (EIP) using Meta Data Management  

Joe Danielewicz  
Manager of Data Administration  
Motorola, SPS  

This presentation compares and contrasts Web Portals with Enterprise Information Portals (EIP), demonstrates how to use XML and middleware to synchronize your EIP with operational systems, and shows how to manage your meta data in order to bring meaning to your EIP. Topics include:

- Portal concepts
- Enterprise Information Portals (EIP)
- EIP portal organization
- The nature of operational data
- Meta data brings meaning to data
- New technologies that will help portal development: XML, Mercator middleware
- Developing a virtual meta data repository
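As a rough sketch of the synchronization idea outlined above (the record fields and message shape are invented, and Mercator's actual configuration is not shown), an operational record can be rendered as an XML message for middleware to carry to the portal:

    import xml.etree.ElementTree as ET

    # Render an operational record as an XML update message; transformation
    # middleware would route messages like this to the portal's feed.
    def to_portal_xml(record):
        root = ET.Element("portalUpdate", source=record["system"])
        for name in ("part_no", "status", "qty_on_hand"):
            ET.SubElement(root, name).text = str(record[name])
        return ET.tostring(root, encoding="unicode")

    msg = to_portal_xml({"system": "orders", "part_no": "A-100",
                         "status": "shipped", "qty_on_hand": 42})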


C59: New Trends in Data Management
Don Soulsby, Computer Associates
James Jonas, Oracle Corporation

Meta data is going "mainstream" as application areas, and the pressure for interoperability, are increasing steadily. Likewise, the tools and technologies for defining, storing and managing meta data are becoming more robust. And we're not just talking about familiar tabular data structures. More and more, you're likely to need meta data definitions for unstructured documents, web pages, spatial data, and even digital images and video. This panel session looks at some of the emerging trends in meta data technologies, tools and applications. 


C60: OMG CWM - An Architecture for Enterprisewide E-Business Intelligence Integration  
Sridhar Iyengar  
Unisys Fellow  
Unisys

The use of meta data repositories and related open standards, which have historically focused on application development and monolithic data warehousing, has now been extended to support component middleware frameworks and, more recently, federated data warehouses and data marts using the OMG Common Warehouse Metamodel (CWM). Just as OMG UML unified the object-oriented development community, CWM, which builds on UML and the foundational meta data standards of MOF (Meta Object Facility) and XMI (XML Metadata Interchange), promises to unify the much more complex world of databases, data warehouses, data marts, and enterprise information portals.

CWM has been designed by some of the industry's best designers of databases, data warehouses, and meta data repositories, and is a comprehensive, model-driven, middleware-neutral data interchange and interoperability standard for integrating legacy data (file and hierarchical database systems), relational data, web data (XML and HTML), and analytical data (OLAP and data mining systems). This presentation presents an E-Business Intelligence integration architecture and early results from the labs of CWM developers and implementers such as Unisys, UBS, Oracle, IBM, and Hyperion. The session will also cover the latest status of CWM and a progress report on the efforts at unifying CWM and the former MDC OIM (Meta Data Coalition Open Information Model).

