CONFERENCE SESSION DESCRIPTIONS
This 6-track program, including the following 60 sessions, takes place Tuesday, March 6 through Thursday, March 8.
Tuesday, March 6
C1: Data Architecture at USAA
"Pouring" the Foundation for the Information Age
Enterprise Data Architect
USAA
USAA has been a customer-driven company for over 70 years. It is an aggressive employer of information technology, and it has now recognized that the challenges brought about by the Information Age demand significant change. USAA has recognized that data integrity compromises made through the years, in a furious attempt to develop needed IT applications, had resulted in an unmanaged data replication environment. This environment is characterized by excessive and uncontrolled redundancy, incoherent information, hundreds of scheduled and ad-hoc conflicting reports, extremely large costs associated with integrating new applications and purchased packages, "departmentalized" data files, "data specialists," ad-hoc Tiger Teams launched on "data quests," etc.
The cost of not managing data exceeded the "pain threshold" of the company. Since 1997, USAA has been working on the development and deployment of capabilities that will allow it to effectively manage its data resource. These include better processes, such as quality management and data-sharing management, new technical direction, and ultimately, consistent, reliable and easy-to-use data. The activities include the implementation of an enterprise-wide Data Management Organization to establish policies, standards, guidelines, etc., for the implementation of IT applications. Efforts also include the deployment of the Integrated Data Clearinghouse, a major technological deployment to enable the ease of data sharing in a controlled replication environment.
C2: Data Quality as a Profit Center
Data Quality Analyst
SBC Services
The highly interactive world of eBusiness demands accessible, complete and correct data. The poor quality and inaccessibility of legacy data are frustrating to both the business and IT. The current state of the data in many companies is redundant and ‘out of sync’, little understood and in many cases still hidden in batch processes. To adequately support eBusiness, data needs to be migrated to systems where it is accessible and ‘in sync’ with other instances of the same data. To create these systems, data must be analyzed and its rules understood in a timely fashion. Over the last several years our Data Quality team has used domain profiling to help users get to know their data quickly and to understand its relationships. We have used these techniques to speed up and improve the data in replacement operational systems as well as in data warehouses, and we are beginning to support eBusiness implementations.
There are some techniques we have found useful for analyzing data that can significantly speed up data analysis tasks and contribute to improving overall data quality. We will share some of these techniques, some things we have learned along the way, and some of our users' experiences. Most importantly, we have turned our Data Quality Team into a DATA QUALITY PROFIT CENTER. Our clients are actually asking for our services, and the savings to the company are phenomenal!
Using business rules validation and synchronization techniques to validate critical tables for Local Number Portability processing has resulted in the cleanup of those tables. This cleanup has essentially stopped the related flow-through problems, saving record person-hours in problem solving.
Using extensive domain profiling saved the 'MR2000' project at least 500-600 design hours when it discovered it didn't need a table that had been assumed to be required. The same domain profiling allowed the project to correctly capture the real data, resulting in space savings. Additional significant space savings were obtained by identifying obsolete elements.
Using similar care-taking and some basic programming techniques, we recovered one million telephone numbers and fixed the discrepancies for another three million over the last year. This activity allowed for full support of the mandated Telephone Pooling and resolved the data ‘product’ problems.
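To make the domain-profiling idea concrete, here is a minimal sketch in Python of what a profiling pass over a column can look like; the table rows and column names are invented for illustration and are not SBC's actual tooling:

    from collections import Counter

    def profile_column(rows, column):
        """Collect the domain of one column: distinct values, nulls, frequencies."""
        values = [row.get(column) for row in rows]
        counts = Counter(values)
        non_null = [v for v in values if v not in (None, "")]
        return {
            "distinct": len(counts),
            "null_or_blank": len(values) - len(non_null),
            "top_values": counts.most_common(5),
        }

    # Hypothetical sample rows; in practice these come from a table extract.
    rows = [
        {"npa": "415", "status": "A"},
        {"npa": "415", "status": "A"},
        {"npa": "41 ", "status": "X"},   # the kind of anomaly a profile surfaces quickly
    ]
    for col in ("npa", "status"):
        print(col, profile_column(rows, col))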
C3: Introduction to the Unified Modeling Language
Terry Quatrani
Rose Evangelist
Rational Software Corporation
Modeling has been an essential part of engineering, art and construction for centuries. Complex software and database designs that would be difficult for you to describe textually can readily be conveyed through design diagrams. Each diagram focuses on one aspect of your application. Modeling provides three key benefits: visualization, complexity management and clear communication. UML stands for Unified Modeling Language and is the standard language for visualizing, specifying, constructing, and documenting the artifacts of a software-intensive system.
This session will introduce the key notational elements and diagrams in the UML. The UML is not just a language for application programmers. It is a language that is being used by everyone involved in a software development project (e.g. business analyst, data analyst, application developer, tester).
Background on the UML
The UML diagrams
Basic UML notation used in each diagram
C4: Implementation/Use of Operational Meta Data to Improve Data Quality in the Data Warehouse
Michael Jennings
Architect/Manager (specializing in business intelligence, data warehousing, and eCRM)
Hewitt Associates
Incorrect interpretation or use of information can often result from decision support architectures that fail to take advantage of opportunities to improve data quality and identification in the data warehouse. This missed opportunity often leads to additional time and expense being expended to reconcile and audit information in the warehouse. Incorporation of operational meta data into the decision support data model design and data acquisition processes can correct this situation by providing a means to measure data quality directly at a row level of granularity. The benefits of operational meta data use include source system identification, data quality measurement, and improved management of ETL processes and database administration. Key Messages:
· Provides a description of the different types of operational meta data columns
· Provides strategies for incorporating operational meta data into the DSS data model and ETL designs
· Provides administrative and auditing method examples for the use of operational meta data
· Provides the audience with an understanding of the importance of operational meta data
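As a rough sketch of the row-level stamping idea (the column names and checks below are assumptions for illustration, not Hewitt's actual design), an ETL step can append operational meta data columns to every row it loads:

    from datetime import datetime, timezone

    # Hypothetical operational meta data columns appended to each fact row.
    def stamp_row(row, source_system, batch_id, quality_checks):
        failed = [name for name, check in quality_checks if not check(row)]
        row.update({
            "src_system_cd": source_system,        # source system identification
            "etl_batch_id": batch_id,              # ties the row to one ETL run
            "load_ts": datetime.now(timezone.utc).isoformat(),
            "dq_score": 1.0 - len(failed) / max(len(quality_checks), 1),
            "dq_failed_checks": ",".join(failed),  # row-level quality measurement
        })
        return row

    checks = [
        ("has_customer_id", lambda r: bool(r.get("customer_id"))),
        ("amount_non_negative", lambda r: r.get("amount", 0) >= 0),
    ]
    print(stamp_row({"customer_id": "C42", "amount": 19.95}, "BILLING", 20010306, checks))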
C5: A Repository Model
David Hay
President
Essential Strategies
This presentation shows a proposed model for a "metadata repository", in terms of the specific things expected to be kept track of in such a repository. It will include analysis objects, such as entities and attributes, design objects, such as tables and columns, and, if time permits, a section on business rules.
For several years now, the speaker has been searching for a catalogue to use for storing the "data about data" that are required to support a data warehouse or any major application. Alas, it isn't called a "catalogue" any more, or even a "data dictionary". It is now called a "metadata repository", and in keeping with this new high-falutin' name, actual examples are way more complex and abstract than seems to be really needed.
Now, no one has ever accused David Hay of being afraid to be abstract when the modeling situation required it. But both the commercial repositories and the generic models being promoted by the likes of Microsoft, the Metadata Coalition, and the Object Management Group take this too far. The fact of the matter is that, in most situations, there are relatively few, very well-defined things that we want to keep track of in a catalogue. Modeling these things should not be very difficult. The models in this presentation took less than two days to develop.
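For flavor, here is a toy sketch in Python of the kind of thing such a repository model tracks (analysis objects, design objects, and the links between them); the class and attribute names are invented for illustration and are not Mr. Hay's model:

    from dataclasses import dataclass, field

    @dataclass
    class Attribute:
        name: str

    @dataclass
    class Entity:                      # analysis object
        name: str
        attributes: list = field(default_factory=list)

    @dataclass
    class Column:
        name: str
        implements: Attribute = None   # design object traced back to analysis

    @dataclass
    class Table:                       # design object
        name: str
        columns: list = field(default_factory=list)
        implements: Entity = None

    party = Entity("PARTY", [Attribute("name"), Attribute("birth date")])
    t = Table("PARTY", [Column("party_name", party.attributes[0]),
                        Column("birth_dt", party.attributes[1])], implements=party)
    # "Where used" analysis falls out of the links:
    for col in t.columns:
        print(f"{t.name}.{col.name} implements {t.implements.name}.{col.implements.name}")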
C6: XML Without Fear
Alan Perkins
Generate XML DTDs and DCDs directly from "XML Designs"
Forward-engineer XML designs as part of an Enterprise Architecture model
Reverse-engineer legacy databases or data stores.
C7: Data Management Support for Enterprise Architecture
Brett Champlin
Architecture Consultant
Allstate Insurance Company
Enterprise Architecture is an information-based, information-intensive discipline. If this information is important to the business, it must be captured, stored and managed. Like a "Customer database" or a "Product database", an "Architecture database" is critical for a successful Enterprise Architecture program.
This presentation will describe how to build, manage and leverage a repository for managing Enterprise Architecture components (models, objects, data, relationships). Enterprise Architecture comprises Business, Information, Application and Technology Architecture Frameworks. To be successful, EA requires a repository (or knowledgebase) that provides for reuse, integration, and dissemination of the underlying components. This talk will present an example EA repository and a survey of current tools capable of providing these functions.
The principles of enterprise architecture
The practice of enterprise architecture
Enterprise architecture frameworks
Example enterprise architecture repository
Survey of enterprise architecture tools
C8: Business Rule Specification, Validation and Transformation: Advanced Aspects
Terry Halpin
Technical Lead in Database Design
Microsoft Corporation
Although business rules are fundamental to information modeling, harvesting their full potential can be challenging in practice. At the analysis level, powerful notations and procedures are needed to capture all the relevant rules and validate them with domain experts. At the design level, judicious choices are needed between alternative model representations to ensure an optimized implementation. This presentation provides practical guidance for meeting these challenges, focusing on advanced aspects of business rule specification, validation and transformation. Rule visualizations in UML, ER and ORM are illustrated and compared so that users of any of these notations can make use of the principles discussed.
Among other things, the presentation reveals some fundamental but previously unknown problems with UML and extended ER in the context of n-ary relationships. Although ORM is immune to these problems, the presentation includes practical advice for overcoming them regardless of the notation being used. It also illustrates how to supplement UML/ER with notes, annotations and verbalizations to emulate ORM's richer rule syntax. Hence the presentation is also valuable to anyone who prefers to model in UML or ER rather than ORM.
Business rule verbalization patterns: positive, default and negative form
Business rules on n-ary relationships: problems and solutions
Business rule visualization in ER, UML and ORM: a comparative review and synthesis
Business rule choices: transformation and optimization
C9: Logical Process Modeling – A Companion to a Logical Data Model
Anne Marie Smith
Assistant Professor
LaSalle University
Data does not exist in a vacuum; it is acted upon by processes. To fully understand the data and the meta data, it is necessary to also understand and document the processes that affect the data. An excellent way to gain this understanding and to prepare to implement an application (transaction processing, data warehousing or electronic commerce) is to carefully and completely model the business processes in conjunction with the relevant data. Business processes represent the flow of data through a series of tasks that are designed to result in specific business outcomes. This presentation will explain the concepts of business processes and logical process modeling and the interaction between data and process. The presentation draws on actual experiences in business process analysis and modeling in a variety of situations, for different types of systems, across several industries. This presentation should be of interest to all data professionals, since data is dependent upon processes for its existence. Data professionals can learn much by engaging in process modeling while modeling data, and by doing so increase the relevance of data management to system architects, designers and programmers, who are traditionally concerned with processes.
What is a business process?
The components of a business process
The concepts of logical process modeling
The concepts of physical process modeling
A “Primer” of performing process modeling – using examples from the presenter’s actual experiences
Using a data-definition approach to process definition
C10: Redefining Meta Data Strategy in the 21st Century
Ron Klein
Manager, Data Systems & Architecture
Carswell Thomson Professional Publishing
This paper presents two major directions and required innovations in the field of meta data. They are a consequence of the evolution of today's and near-future technology, such as the Web and intelligent agents. The first innovation concerns the development of specialized engines to help data management groups map enterprise-internal meta data to common industry meta data. This engine is named the Meta Data Recognition Device. The second innovation, and maybe the one that will impact the whole of society, is the Information Classification Schema. This classification schema will be used the same way as the botanical and zoological classification of species. Taught at the high school and university levels, it will position future professionals to share a common understanding and interpretation of information categories. Together, the Information Classification Schema and the Meta Data Recognition Device will allow new uses of meta data that will position data management for an unforeseen future.
The attendee will learn:
The similarities between meta data usage in data management and meta data usage in e-Meta Data
The convergence between mark-up language definition and data management
An expanded horizon on meta data interchange
The importance of this theme is the recognition that in the e-World there is a convergence between data management and publishing concepts and practices. There is also an enormous opportunity to integrate the hierarchical structure of mark-up languages, like XML, with relational structures like data models. It is also a call for further investigation of how data management professionals will adapt to innovative methods and approaches.
C11: Build Your Own Meta Data Repository
Joseph Newcum
Senior Data Architect, Enterprise Data Management Group
Bank One
Purchased meta data repositories are not always a viable solution to a meta data maintenance problem. Most are costly, may require heavy modification to be useful, and still may only partially fulfill all stakeholders' needs. A "home-grown" repository solution can avoid all of these issues. This presentation discusses why one large organization decided to build its own web-based meta data repository from scratch, the technologies chosen, the evolution of a viable meta-model, the methodology developed, the present status and future direction of the repository, and lessons learned along the way.
Attendees will take away from this presentation:
A set of decision points that can be used to assess whether a purchased or home-grown repository is best for them
A meta-model capable of supporting any concept that an enterprise might wish to maintain as meta data
An n-tier client/server application architecture for the repository
A methodology and a list of tools for developing such a repository
Some helpful tips on meeting - and continuing to meet - diverse customers' requirements for meta data
C12: The Role of Data Administration in Managing an Enterprise Portal
Arvind Shah
President and Managing Principal
Performance Development Corporation
The Enterprise Portal is a central gateway to the processes, databases, systems and workflows of an enterprise. When personalized to the job responsibilities of employees via the Intranet, the enterprise portal provides a seamless, single point of access to all of the resources that employees need to do their jobs. When further personalized securely via the Internet and Extranets to the interests of suppliers, customers and business partners, the enterprise portal becomes the integrating conduit of the many disparate databases, systems and workflows each enterprise uses to carry out business with others. It also becomes a single place to manage rapid enterprise change.
Implementation of an enterprise portal requires interfaces with legacy systems and data warehouses. Architecture planning and modeling are required for the portal design. The configuration of the portal changes continuously as the e-business changes. Meta data will therefore play a key role in maintaining and managing an enterprise portal on an ongoing basis. The presentation addresses the issues Data Administration has to address in order to assure the successful functioning of an enterprise portal.
What is an enterprise portal?
The enterprise portal in relation to legacy databases and data warehouses
Key components of an enterprise portal
Enterprise portal architecture development
Enterprise portal maintenance issues
Problems and pitfalls to avoid
The role of the DA/DBA in managing an enterprise portal
C13: Developing a Corporate Data Architecture in a Federated World
Deborah Henderson, IT Architect, Ontario Hydro Networks, Inc.
and Vladimir Pantic, Consultant, IBM Canada Ltd.
This session will review the modeling constructs necessary to control a federated enterprise environment (OLTP and DW), and the processes and options that can be put in place to support the architecture.
The corporate environmental data administration model: data architectural compliance and processes, “who is doing what” during the development and validation of the models, and the model vitality process
Enterprise conceptual modeling:
The Conceptual Data Model (CDM) in the OLTP environment. The impact of the company’s organization structure and/or processes on the production of the CDM, and the top-down data model development process.
The Conceptual Data Model in the DSS environment. Compliance of the CDM with the company’s strategic and tactical goals, in order to respond to the market promptly and adequately.
Do we need a CDM in the DSS environment? Our standpoint is that we do, because modern companies have to rely heavily on external resources to deliver, maintain and enhance the capabilities of their DSS.
Logical modeling. ER modeling as an intermediate step towards development of a Dimensional Model (DM). Companies are developing their data warehouses with the extensive help of external resources who are not necessarily experts in the business. ER helps the development team understand the business concepts within the organization.
Dimensional modeling and DSS methods. We will explain how to use the dimensional model with different DSS methods:
Dimensional model and OLAP analysis
Dimensional model and data mining
Dimensional model and statistical analysis
Enforcing standards: naming conventions, normalization, data quality, conformed dimensions
Data ownership and stewardship. Building the “horizontal” as opposed to the “vertical” organization, and its impact on the data in the organization.
You will learn:
Practical modeling tips for flexibility and control of a federated data-modeling environment for DW deployment that works in the transactional modeling world too.
How to adjust the dimensional data model to work with different DSS methods (statistical analysis, data mining, OLAP).
How to control the model development process, synchronize the various models developed during development, and apply mechanisms to enforce standards.
C14: Facilitation and the Successful Architect
Shelley Lieberman
Everware, Inc.
This presentation covers the successful hands-on development of an enterprise architecture and process improvement plan for the Alcoholic Beverage Control Agency (ABC) using facilitation techniques. It covers how and when facilitation was used to make enterprise improvements in both manual and automated functions. The first step was to develop an as-is view of the agency with documented issues. The next step was to produce the to-be views of the agency, including an information architecture and an e-commerce technology plan. Facilitation with the business experts and IT was used to decide the to-be views. This presentation will cover how we decided on the facilitation sessions, what subject matter was covered in the sessions, and the agendas used. The critical success factors for these facilitation sessions will then be discussed, such as proper planning, user involvement, IT involvement, strong alignment with the business, and a feasible plan.
How to incorporate facilitation in an enterprise architecture plan
How to develop a realistic facilitation plan with focused facilitation sessions
How to translate a facilitation plan into bite-size agendas
How to encourage user involvement and buy-in of results
Critical success factors of facilitation
C15: The Practical Use of a Universal Data Model in the Data Warehouse
David Lepley
Data Analyst, Data Integration Services
Tyco Electronics
During the past year we have been upgrading our data warehouse architecture at Tyco Electronics. For all intents and purposes we have added an ODS that is based on an abstract/universal data model. It is an ODS in the sense that it is a near-current copy of integrated data from authoritative sources. It contains the business rules pertaining to the corporate and local company hierarchies (i.e., a generic taxonomy structure) and meta data about other master data such as Product, Customer, Person, and Organization data. It also contains the master data itself (i.e., customer and product data), but not transactional data (e.g., invoices, purchase orders, etc.). The business rules are stored in a home-grown active repository and are used to validate the master data in a data acquisition facility.
Meta data and data are stored in a central database according to both the local and corporate business rules. Our primary business driver is Tyco's plan to quickly acquire and integrate several other companies in the same industry. These companies will be permitted to act locally according to local rules using existing systems, but to report globally, based on a corporate business model. Reporting locally, based on the local business model and/or the corporate business model, will also be supported by the data warehouse. Data Integration Services is not aware of who is being acquired until the acquisition is publicly announced. We must be in a position to respond on very short notice.
Now that it is pretty well complete, we can acquire and integrate data fairly quickly with very little custom programming, if any. The data is available via intranet inquiries, which need no changes to accommodate new acquisitions. The only applications that access the data in the database are the intranet inquiries; a maintenance application for maintaining hierarchies; a maintenance application for maintaining the roles that people play in regard to products, customers, and the organization; a batch application for acquiring part, customer, and local coding structures; and a batch application for extracting data. Common APIs were developed for validating, populating, and accessing data. The APIs shield developers from the complexities of the abstract data model. While this code is very complex, it only needs to be written once.
All data is stored as entity occurrences, attribute values, business ids, or relationships. We have three kinds of relationships: codes, cross-references, and roles. The data is integrated using surrogate keys for relationships. This separates the data from the various id structures used at the various local companies. Our structures are primarily a staging area for authoritative data. Operational applications subscribe to extracts and store the data locally.
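A drastically simplified sketch of this storage style (entity occurrences, generic attribute values, and surrogate-keyed relationships); all names here are invented for illustration and do not reflect Tyco's actual schema:

    # Entity occurrences keyed by surrogate id; attribute values and
    # relationships stored generically rather than in per-entity tables.
    occurrences = {}       # surrogate_id -> entity type
    attribute_values = []  # (surrogate_id, attribute_name, value)
    relationships = []     # (kind, from_id, to_id); kind in {code, xref, role}

    def new_occurrence(entity_type, next_id=[0]):
        next_id[0] += 1
        occurrences[next_id[0]] = entity_type
        return next_id[0]

    cust = new_occurrence("Customer")
    prod = new_occurrence("Product")
    attribute_values.append((cust, "name", "Acme Ltd."))
    attribute_values.append((prod, "description", "Connector, 4-pin"))
    # A role relationship, insulated from local id structures by surrogates:
    relationships.append(("role:buys", cust, prod))

    for kind, a, b in relationships:
        print(kind, occurrences[a], a, "->", occurrences[b], b)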
Attendees will be provided with:
A practical business rule data model that is actively used to validate data
A scheme for varying data-related business rules by brand or organization
An overview of how the universal database fits into the data warehouse architecture
An overview of the business drivers, strengths, and pitfalls of this approach
C16: Understanding and Managing Reference Data
Malcolm Chisholm
Manager
Deloitte & Touche
Reference data is important because it is found in all databases. It is widely agreed it must be free from data quality defects, but it is rarely treated as a class of data in its own right. This presentation discusses the diverse nature of reference data, and what unites it. The need for management at the enterprise level is covered, with a series of practical steps on how to do this. Finally, an Internet-based approach for automatic synchronization of reference data across remote databases is discussed (this was implemented by the presenter and his colleagues at the United Nations). The attendee will learn:
What reference data is. Though structurally simple, it is quite diverse, and yet there are important features that unify it as a class of data
Why reference data is so important to an enterprise. Its uses are varied and some are often unexpected, e.g. it contains the majority of texts to be translated in multilingual applications
Practical steps on how to manage reference data at the enterprise level. This is necessary because of the wide scope of reference data (crossing databases and even enterprises)
A review of an Internet-based application used within the United Nations to synchronize reference data across different databases on different platforms. This is intended to stimulate the attendee to visualize reference data solutions for their own environments.
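To visualize the synchronization idea, here is a minimal sketch under assumed table shapes (versioned reference rows, not the UN application itself), in which a subscriber applies only those publisher rows that are newer than its local copy:

    # Each reference table row carries a version; a subscriber applies
    # publisher rows whose version is newer than its local one.
    def sync(local, published):
        applied = []
        for code, (value, version) in published.items():
            if code not in local or local[code][1] < version:
                local[code] = (value, version)
                applied.append(code)
        return applied

    local_country = {"FR": ("France", 3), "DE": ("Germany", 3)}
    published_country = {"FR": ("France", 3),
                         "DE": ("Germany", 4),      # updated description
                         "TL": ("Timor-Leste", 1)}  # newly added code
    print(sync(local_country, published_country))   # ['DE', 'TL']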
C17: Architecting and Implementing a Web-Based Corporate Meta Data Repository (CMR) at the Census Bureau
The speaker will present the Census Bureau Corporate Meta Data Repository (CMR) application, and will discuss the architecture, technology, design, business uses, and management issues with this system. The CMR is based on standards (ISO/IEC 11179, FGDC, and Dublin Core) and includes a Data Element Registry, Survey Designer, Data Product Registry, Data Set Registry, and more, supporting the web-enabled maintenance, browsing, and XML interchange of survey meta data.
Architecting a corporate meta data repository that is open, component-based, web-enabled, extensible, and can be used to drive business applications
Designing a meta data portal around your corporate meta data repository, and integrating structured meta data with unstructured meta data
Implementing an XML meta data interchange (decisions and alternatives)
Using technical meta data to drive your corporate meta data repository applications, and creating a flexible meta data-driven security, configuration management, workflow, and business rule framework that allows different organizations to customize their use of the CMR
Deploying a Corporate Meta Data Repository (how organizations can adapt to using and exploiting a CMR, and begin to operate in a different business paradigm)
C18: Building the XML Meta Data Repository
David Plotkin
Senior Data Administrator
Longs Drugs
Meta data repositories are useful for keeping track of the meta data describing your systems, and for facilitating "where-used" analysis. If you are implementing XML, the document type definitions (DTDs) and their relationships to the elements and attributes that make them up are valuable meta data. With an XML meta data repository, you can track the structure of your DTDs, make changes to that structure and regenerate the DTDs, and even create XML documents by linking the elements and attributes to their physical implementations in a database, copylib, etc. You can even expose your XML meta data through a web browser to make it easy to access.
A brief introduction to the structure of XML DTDs
What an XML metamodel looks like
How to implement the metamodel in a repository
How to give users web access to the metamodel
How to generate a revised DTD from the repository
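As a hedged illustration of the "regenerate the DTDs" step (a toy metamodel with invented element names, not Longs Drugs' repository design), element structures held in a repository can be serialized back out as a DTD:

    # Toy repository metamodel: each element lists its child elements (in order)
    # and its attributes; elements with no children hold character data.
    elements = {
        "order":    {"children": ["customer", "item"], "attrs": ["id"]},
        "customer": {"children": [], "attrs": ["number"]},
        "item":     {"children": [], "attrs": ["sku", "qty"]},
    }

    def generate_dtd():
        lines = []
        for name, spec in elements.items():
            model = ", ".join(spec["children"]) or "#PCDATA"
            lines.append(f"<!ELEMENT {name} ({model})>")
            for attr in spec["attrs"]:
                lines.append(f"<!ATTLIST {name} {attr} CDATA #REQUIRED>")
        return "\n".join(lines)

    print(generate_dtd())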
C19: The 7 Deadly Sins of CRM
Jill Dyche
Partner
Baseline Consulting Group
A recent Gartner Group study estimated that 65% of CRM programs fail. With all the hoopla surrounding Customer Relationship Management, it's almost impossible for companies to delineate a clear CRM strategy, let alone experience the benefits. Consultant and author Jill Dyche has been conducting CRM Readiness Assessments for Fortune 500 and dot-com companies alike. In this presentation she will share what in her experience are the major roadblocks to implementing a successful CRM initiative, including the roles played by integrated corporate data and the data warehouse.
C20: Elevating the Role of Information Resource Management for Business Effectiveness
Larry P. English
The organization that is not managing its information cannot manage its business. Without managed, quality information, the enterprise cannot “know” what it needs to know to understand its customers and customer needs, manage operations, analyze its performance and make the strategic decisions for the future of the enterprise. This is even more crucial for service sector organizations, such as banks, insurance and government organizations whose products are, in fact, information.
Mr. English describes how you transform and elevate your data administration or data resource management to a function embraced as a critical competency. The principles used to manage other business resources, such as human and financial resources, apply to managing information and knowledge as strategic resources. Implementation of these principles is required to transform the enterprise from an Industrial-Age to a competitive Information-Age organization. This presentation discusses how the organization can harness the power of today's information technology to exploit its information resources for competitive advantage and business effectiveness.
Why the "systems approach" of application development has failed, and how we must replace it
Trends shaping the economy, business, and society, and their impact on information resource management
The Information Age as a paradigm
From data administration to information stewardship
Information resource management in the e-business world: the virtual enterprise
The secrets to gaining and sustaining management commitment
C21: Panel: Comparison of Modeling Techniques
Graham Witt
David Hay
Terry Halpin
Terry Quatrani
Entity Relationship Modeling (ER), Unified Modeling Language (UML) and Object Role Modeling (ORM) are analytical techniques used for data and object modeling. There has been much inconsistency among practitioners as to the value and appropriate use of each technique. Our panelists will each provide an overview of one modeling methodology and then discuss the benefits and drawbacks of each technique.
C22: Meta Data - Myth and Realities
John Ladley
President
Knowledge InterSpace, Inc.
After almost 20 years of information management theory, movements and gurus, why isn't anything any better? Corporations recognize information as an asset, but still can't get going. In addition, the vendor community has provided a long list of semi-useful, non-standard solutions. In this presentation, John Ladley will review the reasons why current mindsets of information management will never work, and what needs to be done to draw corporations into truly exploiting the information asset. This session is for managers of data administration, information management executives and CIOs / CKOs.
Meta data standards and why current movement in this area still has a ways to go
Why repositories are doomed
What specific steps to take to create effective information management cultures
in the age of the web and unstructured information
How to tie information management and meta data initiatives to business value
C23: Implementing Data Warehousing and Data Mart Labels in a Meta Data Repository
Patti Munier
Senior Data Analyst
United Parcel Service
Seven years ago, UPS mandated that table column names be based on data elements registered in the UPS Corporate Meta Data Repository. At the same time, Data Administration put into place three related initiatives: stricter enforcement of standards for formulating data element names; mapping of legacy data elements to standard approved elements; and association of ‘parent’ data elements with denormalized ‘children’ and related elements.
More recently, as Data Warehouse and Datamart business presentation labels were introduced, UPS standardized the components of these labels and registered them in its Meta Data Repository with direct ties to the underlying standard approved data elements. All this enabled UPS to populate and associate ‘Business Rules’ with table columns, data elements, and the labels used in presenting the data to end-users. Coupled with UPS’s ability to use its Meta Data Repository to analyze the impact of data elements on table structures, copybooks, and programs, this put UPS in a position to start documenting business rules across the board.
UPS has a globally implemented, dynamic, web-enabled Meta Data Repository that makes information about corporate data assets available to widely dispersed internal users for impact analysis, reuse, and reporting. UPS is now working on developing a beta project to store and associate Data Warehouse business rules in the Meta Data Repository. Some key points that will be covered are:
Key Repository design decisions and benefits
Data element rationalization
Data integrity processes
Business Rules challenges
Business Rules beta test
C24: Beyond the Theory: Building a Scalable and Integrated Clickstream Analysis
Xiaojing Wang
CNET
The Internet has become an integral part of many corporations today. Consequently, clickstream analysis has become critical in providing businesses with the information needed for tracking and analyzing business trends and exploring potential new opportunities. CNET has successfully delivered a high-performance system that is processing 20 million page views a day, and we have been able to obtain significant ROI from the applications based on this system. This presentation shares our experience in the following areas:
Obtaining useful data from website logs
Designing flexible data models to enable advanced web traffic analysis
Implementing a highly efficient, scalable back-end infrastructure for data acquisition, transformation, and loading
Utilizing various tools
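To give a feel for the first item above, here is a simplified sketch of extracting analyzable page-view records from a web server log; the common log format and the filtering rules are assumptions for illustration, not necessarily CNET's pipeline:

    import re

    # Common Log Format line (assumed for illustration).
    LOG_RE = re.compile(r'(\S+) \S+ \S+ \[([^\]]+)\] "GET (\S+)[^"]*" (\d{3})')

    def page_views(lines):
        for line in lines:
            m = LOG_RE.match(line)
            if not m:
                continue
            host, ts, path, status = m.groups()
            # Keep successful page requests; drop images, stylesheets, etc.
            if status == "200" and not path.endswith((".gif", ".css")):
                yield {"host": host, "ts": ts, "page": path.split("?")[0]}

    sample = ['10.0.0.1 - - [06/Mar/2001:10:00:00 -0800] "GET /news/index.html HTTP/1.0" 200',
              '10.0.0.1 - - [06/Mar/2001:10:00:01 -0800] "GET /img/logo.gif HTTP/1.0" 200']
    for view in page_views(sample):
        print(view)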
Wednesday, March 7
C25: Business Information Management at J&J
Larry Dziedzic
Information Management Architect
Johnson & Johnson
The presentation will focus on the process and learning experiences that took place while beginning to implement a global Business Information Management Architecture at J&J. There are 189 J&J affiliates throughout the world, and they are all unique in many ways. The challenge is to recognize the uniqueness while at the same time beginning to pull global information stewardship details together. Because of the diversity of the affiliates, some will be ahead of the Business Information Management methodology, some will be able to use it immediately, and others will need to work it into later plans. The presentation will lay out the layering and interfacing necessary to provide an effective communication mechanism. The attendee will learn about:
The problems and benefits of dealing with a multinational organization and implementing business information management
The success and/or failure of such an implementation
What we have learned from the experience and what we would do differently
C26: Measuring the Quality of Models
Peter A. McDougall
Senior Data Administrator
Insurance Corporation of British Columbia
This presentation examines the issue of how to measure the quality of a data model. It offers the viewpoint that, since a data model is "a description of the business", how well a model communicates that description provides the key to evaluating its quality. Thus, the criteria for evaluating a model are based upon aspects of communication. Furthermore, since a data model is a composite object, the presentation will describe how a model's quality is actually derived from the collective quality of its components. Quality measures therefore shouldn't be applied to the model as a whole, but instead to its smaller, atomic-level pieces. As such, five (5) communications-based yardsticks - Accuracy, Clarity, Consistency, Conciseness and Completeness - will be introduced. The concepts will be described, and examples will be given of how the measures are applied to the detail parts of a model. "Summing" the lower-level quality measures produces the overall quality of the entire model.
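As a tiny worked example of the "summing" idea (illustrative component names and scores, not Mr. McDougall's actual scheme), each component is scored against the five criteria and the model's quality is the average of its components:

    # Each model component is scored 0-10 on the five Cs; the model's
    # quality is aggregated from its components.
    components = {
        "CUSTOMER entity":       {"accuracy": 9, "clarity": 8, "consistency": 9,
                                  "conciseness": 7, "completeness": 8},
        "places-order relation": {"accuracy": 6, "clarity": 5, "consistency": 8,
                                  "conciseness": 9, "completeness": 4},
    }

    def component_score(scores):
        return sum(scores.values()) / len(scores)

    model_score = sum(map(component_score, components.values())) / len(components)
    print(f"model quality: {model_score:.1f} / 10")   # 7.3 / 10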
The presentation will also focus on the model review process. Two techniques called Direct Feedback and Business-Based Questioning, plus how the quality measures are used with these methods, will be described. These techniques focus on understanding the business and its relationship to the message from the model. They take a non-judgmental perspective and are designed to develop a collaborative framework for working towards a quality product.
Lastly, the presentation will describe how communications-based criteria ultimately produce better models. It will show that when allowed to focus on the content of the message, and not its form, modelers can create more articulate and better-quality descriptions of the business.
From this presentation the audience will learn:
Why communications-based measures are useful to evaluate the quality of a model
The five (5) criteria used to measure a model's quality
A set of techniques, and a process for applying the measures
Why the approach creates models that have quality built-in, instead of 'inspected-in'
C27: Organizational and Development Strategies for Creating a High-ROI Enterprise Data Warehouse
Brent Lautenschlegar
Principal
Reflection Technology Corporation
This presentation will present strategies and tactics for identifying and exploiting high-return business opportunities, for creating an effective business/IT organizational model, and for adopting a short, incremental development methodology that will allow delivery of real business benefits within a very short time frame. The speaker will discuss real return-on-investment results and business transformation that he has realized as a result of leading two enterprise data warehouse development projects at two Fortune 500 companies.
Attendees will learn:
The benefits of the enterprise data warehouse approach vs. a data mart approach
How to put into place an incremental development methodology that minimizes data transformation
How to identify and exploit the best business opportunities
How to realize business benefits quickly -- i.e. within 60 days of project initiation
C28: Data - The Good, The Bad, and the Ugly
Is Meta Data the Way to Knowledge Management?
Gil Laware, Assistant Professor, Purdue University
& Frank Kowalkowski, President, Knowledge Consultants, Inc.
Time-to-market conditions and competitive responses dramatically impact businesses' need for timely, accurate, quality distributed data. Multiple data implementations in I/S systems without a data architecture hinder a quick response. A cohesive picture of business knowledge needs to be developed from the ugly, bad and good data implementations. The Ugly is the chaotic data implementations in which little is known about what the data is, why it exists, where it is used, and who is responsible for its quality. The Bad is the partial implementations of data and meta data using old and new technologies. The Good is a well-managed, integrated data environment, with its supporting meta data, that provides a basis for achieving the knowledge needed to meet these business challenges. The reality is that the Good, Bad and Ugly may need to coexist for some time. How is this done and what are the issues? Attendees will learn:
To develop practical linkages between data implementations and business knowledge
To assess what approaches can be used to develop the key meta data that drives accurate, quality data for knowledge assessment and the development of knowledge structures for the business
To develop a meta data strategy that meets the business time-to-market conditions
C29: Meta Data Standards at Object Management Group
Andrew Watson
Vice President and Technical Director
Object Management Group (OMG)
OMG is the home of four of the industry's most significant meta data specifications. The Common Warehouse Metamodel (CWM) provides standards for building data warehouses. The Unified Modeling Language (UML) is the pre-eminent software design notation. XML Metadata Interchange (XMI) provides both standards for exchanging meta data (such as UML designs) using XML, and a convenient way of designing XML Document Type Definitions (DTDs) using UML. Finally, the Meta Object Facility (MOF) is a standard for meta data storage closely related to the other three specifications. This talk will briefly describe OMG, its standards, and their relationship to each other.
C30: Embracing XML - Strategic Implications for Data Administrators/Architects
Peter Aiken
Institute for Data Research
Virginia Commonwealth University
As data administrators move to embrace XML, there are a number of lessons to be learned from early experiences with the technologies. These point to a series of strategic implications for data administrators/architects, including: (1) an expanded definition of data management that includes unstructured organizational data; (2) expanded data management roles in applications development using portal technologies; and (3) preparation of organizational data (including data quality) for e-business. Combined, these implications point to a more complex role for data managers. Understanding these strategic implications will better prepare organizations for the next decade.
The role that XML will play in future data management
The requirements and promise for management of unstructured data
The implications for organizational CASE tool usage
Lessons learned from early XML adopters
C31: Enterprise Data Management without an Enterprise Data Model: Working in the Real World
Data Administration/Database Administration Center of Excellence
Kodak
The ideal of having an enterprise data model as a guide can't be realized in many organizations. Data administrators need to identify which particular benefits will provide the most value to their organization, and then determine approaches other than an enterprise data model that may achieve that goal. Identifying the source and authority for corporate data is a common need that can be met by savvy data administrators with good coordination and communication. Resolving issues with shared terminology and technology change requires data administrators to have credibility within the corporation and the skills or contacts within the business to address the problem.
How to identify which benefits of an enterprise data model will provide the most value to your organization
Different approaches to resolving issues of shared terminology with differing definitions
Methods for identifying the true authoritative source, and the issues with technology change
C32: How Do You Convince Management to Fund Your Proposal?
David Davis
Vice-President, Enterprise Data Management Group
Bank One
People with technical backgrounds often stress the technical aspects of a proposal to their detriment. The context of the proposal, its timing and how it is presented often affect the acceptance or disapproval of a good proposal. Anecdotes, analogies, marketing and the forming of alliances can lead to successful, approved proposals and projects. The best implementation, technique, new technology or method does not guarantee acceptance and funding. Attendees will learn:
A technique of creating analogies that fit their situation and help 'sell' their concepts
Successes and failures shared, and approaches critiqued
The importance of 'sound bites', charts and diagrams in selling their proposals and getting them accepted
C33: Data Warehouse Project Planning
Sid Adelman
Founder
Sid Adelman & Associates
The success of data warehouse implementations has been spotty. Many organizations have been lulled into believing that the nature of the data warehouse obviates any need for planning a project. This presentation covers the current state of data warehouse projects (success and failure), why project planning is critical to the success of the data warehouse, what constitutes a data warehouse project plan, and how the project plan relates to the technical infrastructure.
The components of a project plan
Attaching resources to tasks
Making the project plan a tool for communication with management
How the project plan is critical to the project's schedule
C34: Metadirectories vs. Meta Data Repositories
Product Manager
Oracle Corporation
The Burton Group analysts first coined the term metadirectory as “the join of all the directories in the enterprise”. Directories are pervasive throughout all companies, as they manage the identity and relationships of people and resources inside e-mail systems, networks, security systems (X.509) and other LDAP-compliant systems. The investment in directories is significant, as the total number of directories is counted in the millions.
It is the job of the metadirectory to intelligently “join” disparate directories into a unified whole, thus allowing for information reuse throughout the organization. Yet the meta data world speaks of investing in meta data repositories, not metadirectories. This presentation will explore the relationship of metadirectories and meta data repositories. It will show how these two types of ‘meta’ storage facilities act as complementary technologies. It will also articulate why an understanding of metadirectories is a must for a comprehensive meta data strategy.
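A stripped-down sketch of the metadirectory "join" (invented attribute names; production metadirectories work against LDAP, X.509 and mail directories rather than Python dicts), merging entries about the same person on a common identity key:

    # Join entries about the same person from disparate directories
    # into one unified metadirectory view, keyed on employee id.
    def metadirectory_join(*directories):
        joined = {}
        for source, entries in directories:
            for entry in entries:
                key = entry["employee_id"]
                view = joined.setdefault(key, {"sources": []})
                view["sources"].append(source)
                for attr, value in entry.items():
                    view.setdefault(attr, value)   # first writer wins (a policy choice)
        return joined

    email_dir = [{"employee_id": "e42", "mail": "pat@example.com"}]
    hr_dir    = [{"employee_id": "e42", "name": "Pat Lee", "dept": "Finance"}]
    print(metadirectory_join(("email", email_dir), ("hr", hr_dir)))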
C35: Ramping Up for Meta Data and Knowledge Management
Don Soulsby
Director of Architecture Strategies
Computer Associates
From the presentation, the attendee will have a better understanding of:
Value of meta data for data warehousing
First steps for successful knowledge management
Meta data based knowledge mapping
Meta data repositories and XML
C36: Building the Scalable Data "E-frastructure"
Tim McBreen
Senior Principal and E-Business Practice Leader
Knightsbridge Solutions
Today's e-business world continues to focus on scalable infrastructures that support the high transaction volumes generated through e-commerce -- not on the complex data integration issues that arise once the transaction problem has been solved. But neglecting legacy data creates an e-business silo and compromises the powerful synergy now possible with an integrated data solution. These questions must be considered:
- How do you solve the information side of the equation-i.e., how do you integrate the new e-business customer, product, and service information with the existing corporate information sources so that you have an enterprise view of your customer and associated products/services?
- How do you build an active self-service data warehouse, sourced by all the transaction systems, that is used by your customers to answer their questions and service needs without forcing them to rely on legacy service channels (call centers, in-store resolution)?
The purpose of this presentation is to show how a scalable data e-frastructure delivers the cross-channel data integration that supports e-business strategies, including a single view of the customer and self-service options in the e-channel. Organizations are awash in data, and it's growing exponentially with the expansion into e-business. At the same time, CRM strategies are unraveling because the siloed e-channel is not integrated with legacy data sources. The solution is an integrated data e-frastructure that uses high-performance tools and technologies to deliver extreme scalability, massive throughput, robust performance, and low cost of ownership. Such architectures are based upon the principle of "Build it once, build it right, scale often," so that the solution can scale as required to meet current and future terabyte-class data requirements.
Attendees will take away a presentation showing what a scalable e-frastructure looks like and a roadmap that will aid in prioritizing activities and building a development plan.
Business case for building a scalable data e-frastructure
Case studies of companies who have responded to the problem
The scalable solution e-frastructure
- Data sources
- Platforms
- Frameworks
- Putting the pieces together
Prioritization of activities
The challenges and opportunities of integrating data across all channels
The characteristics and benefits of a scalable data e-frastructure
Evaluate the applicability of real-world e-frastructure solutions to your environment
A roadmap to build your own data e-frastructure plan
C37: Data Architecture on a Shoestring
Becky Kirkpatrick
Data Architect
Union Pacific Technologies
"Where
is the data I need?" "Where did it come from?" "What does it
mean?" "Who owns it?" These were the questions needing answers,
but answers were not readily at hand. Users relied on tribal knowledge to find
needed data. Our solution? A "card catalog" of Union Pacific Railroad
's data. Commercial meta data repositories are neither affordable nor flexible
enough to meet the need, so with three people and a few months we put together
"LookUP", Union Pacific's web-enabled data resource directory. Learn
about the steps that Union Pacific is taking to deliver a powerful, interactive
roadmap of information resources.
Hear how Union Pacific used Zachman's framework to organize information about the enterprise architecture.
Get tips & techniques on how to do data management and architecture on a shoestring budget!
See a demonstration of an indispensable, web-enabled data resource directory, a "card catalog" to data, hardware, software and more!
C38: Mapping the UML to the Zachman Framework
Neal Fishman
Identifying the UML models
The Zachman cells
Using Stereotypes
Mapping the models
C39: Managing Customer Information for CRM
What do you need to know, and how well do you need to know it?
Danette McGilvray
Customer Information Quality Program Manager
Agilent Technologies
The connection - CRM and customer information quality
Managing the right information
Working together - the roles of data, processes, people, and technology
Case study - Agilent CRM
Challenges and best practices
C40: Just In Time Meta Data Integration
Bob Carasik
Systems Architect
Wells Fargo Bank
Wells Fargo Bank is currently taking an enterprise-wide look at its meta data resources and is taking a federated approach to sharing directory, database, messaging and other forms of meta data. Intranet search technology and repository software both have a place in this effort.
Many meta data projects founder on the difficulties of translating meta data into common formats and creating formal design documents where none exist. I propose a lightweight strategy for meta data integration. High-quality meta data frequently costs too much to provide, relative to its benefits to users. By assuming some costs in the form of human labor, meta data users such as application integrators and development projects can get good value from lower-quality meta data resources such as intranet query results and physical design documents. If meta data is included in an enterprise portal or knowledge management effort, a great deal of benefit can be realized. Convenient tools for XML translation and schema management make it easy to leverage both internal and industry-standard message designs and interface definitions. Attendees will learn:
How to use XML mappings for a hub and spoke approach to translation
How to leverage an enterprise portal so that meta data is easier to find.
Why "Quality isn't Job One"; scope and timeliness are.
C41: Architectures for Marrying Online Applications with Information Repositories
Faisal Shah
Co-Founder and Chief Technology Officer
Knightsbridge Solutions
Many e-businesses plan to differentiate themselves from competitors by integrating data warehouses or other information repositories with their online applications. Such integration can enable a highly personalized customer experience and can materialize income-generating products; both of these benefits can be integral to the success of the e-business. Though this online system/information system integration is conceptually simple, it can be technically complex. Complexity arises from the fundamentally divergent technology characteristics of these two types of systems. For example, online systems must exhibit very high availability and must service many short transactions very quickly; conversely, information systems are free to exhibit lower availability and service a small number of very long-running queries. Successfully marrying online and information systems involves converging these divergent characteristics using techniques which are not only feasible but don't break the bank.
How to distinguish feasible integration architectures from infeasible ones
How to evaluate situations where common technologies such as relational databases have to be abandoned in high-transaction and high-data-volume settings, and what some of these alternatives are
Some of the challenges associated with updating information repositories in real time, and architectural solutions to these challenges
C42: Getting the Rest of Your Organization Ready for XML
Korki Whitaker
Progressive Insurance
In many organizations, there are pockets of IT professionals who know what XML is, and how it may be used to help gain competitive advantages. But what about the rest of IT, as well as business partners within the organization? How do you get them to understand why there is a push to XML, the benefits of XML, and how it will change how we communicate both internally and externally? This presentation will address these questions and explain what one large company is doing to foster an XML environment where it is appropriate.
How to discover what XML activities are currently occurring within your organization
How to introduce XML to all of your employees
How to incorporate the various needs for XML training
C43: Data Modeling Contentious Issues
Karen Lopez
Principal Consultant
InfoAdvisors, Inc.
A highly interactive session where attendees will evaluate the options and best practices of common and advanced data modeling issues, such as:
Party/party role
Natural vs. surrogate keys
Abstraction vs. specification
Conceptual, logical, physical
UML vs. ERD
Privacy vs. fair use and data mining
Derived/calculated data in logical models
Logical & physical data model separation
...and others
Participants in this session will be presented with an issue along with a range of responses or possible solutions. Participants will vote on their preferred response, then the group as a whole will discuss the results, along with the merits of each possible response. If a specific issue has been discussed in other presentations, a summary of the responses of the other groups will be presented.
The goal of this workshop is to help practitioners identify potential points of conflict in data modeling, as well as alternative approaches to resolving the issues. This presentation is targeted at experienced data modelers and assumes extensive data modeling skills.
C44: Data Stewardship - Fact or Fiction
President
Applied Information Strategies
The terms data stewardship and information stewardship have been tossed around for the past decade. Yet understanding the what, the why, and the how of accomplishing it remains elusive. Part of the dilemma is due to the absence of a standard model by which businesses can shape their organizations. Another part of the problem is the lack of exposure to successful implementations and their lessons learned. Ultimately, the successful path to stewardship is based upon an understanding of the principles of information stewardship, aligning those principles with the business in a value-added approach, and planning and achieving both short- and long-term improvements in the business. This presentation will address:
Information stewardship principles and objectives - a brief review
The four factions of stewardship - strategic, tactical, operational, and technical - what they are and how they align with business processes and functions
Stewardship roles, responsibilities, and the "A" word - accountability
The four pillars of successful implementation - policy, program, practice, and promotion
During
the presentation, Ms. Young will share her experiences in developing and
implementing a corporate stewardship program during her tenure as FDIC's data
administration chief. She will also highlight some of the challenges,
approaches, and lessons learned in client engagements focused on developing
their information stewardship and information quality environments.
C45: How to Make Your Business Processes
Smarter
Ronald G. Ross
Principal
Business Rule Solutions
Most businesses today are out of touch
with their business rules. They have little real sense of whether their business
rules are being applied consistently – or even what they are. As a result,
many organizations have little real power to adjust or redeploy their business
rules quickly as the need arises. Yet the clear reality of doing business in the
21st century – with timeframes collapsing all around us – is that
rapid and even real-time response is a must.
In this presentation, Mr. Ross shows where your company’s guidance process has broken down and explains what you should be doing to fix it. Step one is to “database” your business logic. Step two is to rethink your architectures and move them toward intelligent processes. Step three is to get your business rules into the hands of developers, workers and, most importantly, managers – right at their fingertips, anywhere, anytime.
Why business rules are inevitable
Restoring traceability
Single-sourcing your rules
The crucial role of data and meta-data
The needs of 21st century workers
Rule management: first steps and beyond
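As a purely illustrative sketch of step one above, "databasing" your business logic, the fragment below keeps rules as data rather than as scattered if-statements, so they can be single-sourced and traced. The rule names, conditions, and actions are invented and are not Mr. Ross's material.

    # Hypothetical sketch: rules live as data, not as hard-coded logic,
    # so they can be listed, traced, and changed in one place.
    RULES = [
        {"name": "large_order_review",
         "condition": lambda order: order["amount"] > 10000,
         "action": "route to manual review"},
        {"name": "loyal_customer_discount",
         "condition": lambda order: order["customer_years"] >= 5,
         "action": "apply 5% discount"},
    ]

    def evaluate(order):
        """Return the actions fired by the current rule set for one order."""
        return [r["action"] for r in RULES if r["condition"](order)]

    print(evaluate({"amount": 12500, "customer_years": 7}))
    # ['route to manual review', 'apply 5% discount']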
C46: Meta-Architecture
and Enterprise Meta Data Management
E.
Manning Butterworth
Senior
Manager of Data Architecture
Reynolds
& Reynolds
At
Reynolds & Reynolds our enterprise model contains both business and
technical meta data. It ties together information all the way from company
objectives to the people, systems, and data that support those objectives. The
meta-architecture is the navigational map through that enterprise model. That
is, the meta-architecture shows how objects are related in the nearly 20
different model types comprising the enterprise model and it explicitly shows
how questions of interest can be answered by relating the individual pieces of
meta data. The meta-architecture complements the enterprise model framework by
explicitly identifying links from model to model and object to object. Both the
meta-architecture and the enterprise model are implemented in a single tool
capable of representing objects as diverse as data standards, people, and
network nodes.
A
framework is a succinct representation of an information architecture and is an
essential part of an enterprise model development methodology. However, the
interrelationships among the models comprising the framework are implicit. The
meta-architecture makes these interrelationships explicit. It thereby serves as
a design aid for meta data repository development, as a communication vehicle
with business stakeholders, and as a working tool for analysts collecting and
organizing the meta data. The meta-architecture does even more, however. It not
only organizes the meta data, it also joins the concepts of the enterprise model
with the specific constructs of the tool used to represent the meta data.
Attendees will learn
The nature, purpose, and benefits of a meta-architecture
Practical methods to build a meta-architecture
How to use an enterprise model to answer business questions
C47: E-Business
Chaos: Protecting Yourself Against Problem Imported Data
Michael
Scofield
Director
of Data Quality
Experian
As
various kinds of internet and E-businesses emerge, companies are exchanging data
at a greater pace, both transactional and in bulk. In either kind of data
exchange, you need to understand the quality and meaning of the data you are
receiving. This is especially true where companies are starting up fast,
morphing quickly, and under stress to change their business rules and policies.
They, in turn, may be getting the data from someone else.
Based
on the experience of a company that imports over a billion records of data monthly
from external sources, Mr. Scofield will explore some of the potential dangers
of bulk importation of data, and describe simple, practical techniques for
protecting yourself against changes in definition, quality, completeness, etc.
No matter what the medium or technology of data exchange, the dangers of
importing data remain the same. As businesses buy more application packages,
they lose control of their data architecture, and require data warehouses to
integrate all their data. Similar challenges present themselves in such transfers
of data.
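One way to picture the defensive techniques the session describes is a simple domain-profiling gate: compare an incoming feed's value domain against what was observed previously and reject the load on unexplained drift. The field name, codes, and threshold below are assumptions made for the sketch.

    # Hypothetical sketch: profile an incoming bulk feed against the
    # domain observed last month, and flag drift before loading.
    from collections import Counter

    EXPECTED_STATUS_CODES = {"A", "I", "P"}   # last month's observed domain

    def profile_feed(records, max_new_value_pct=1.0):
        counts = Counter(rec["status"] for rec in records)
        unexpected = {v: n for v, n in counts.items()
                      if v not in EXPECTED_STATUS_CODES}
        pct = 100.0 * sum(unexpected.values()) / max(len(records), 1)
        if pct > max_new_value_pct:
            raise ValueError(
                f"Feed rejected: {pct:.1f}% unexpected status values "
                f"{sorted(unexpected)} - has the supplier changed a rule?")
        return counts

    profile_feed([{"status": "A"}, {"status": "I"}, {"status": "A"}])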
C48: Same
Old Work, New Dilemma:
A New Approach to Data Design for Interactive Web Portal
Applications
Ho-Chun
Ho
Director
of Information Systems
PointandQuote.com,
a division of Kemper Insurance
This
presentation will review the lessons learned and the reality of data
architecture and database design for an interactive web portal in B2B and B2C
eCommerce. We will examine how PointandQuote.com designed its data strategy to
accomplish high availability and performance (ranked by Holistix in the 87th
percentile on the world wide web). The audience will learn how to use
conventional data design disciplines, as well as innovative techniques and tools,
to provide high-speed, reliable, and scalable web applications.
PointandQuote.com business requirements
Auto insurance application and quote process
Business case and operating models
Capacity and response time requirements
Overview of PointandQuote.com application architecture
Technologies and platform
Database
Rules engine
Data transformation
B2B communication
B2C interface and front-end presentation
Customer support and contact management front-end interface
XML
Data design challenges of interactive web applications
Compromised transactionality
High demand for self service
Availability - 24x7
Presentation driven design
Table (data) driven navigation paths
Non-business data elements
Our solution
Two levels of abstraction: business data model and meta data model
Sessionless or stateless design
Design for speed and reliability
Design for thin client
Presentation driven design: "the question dictionary"
approach
Table (data) driven navigation control (see the sketch following this list)
Separation of data and rules
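A minimal sketch of the table-driven navigation and "question dictionary" ideas listed above, with invented question ids: both the flow and the presentation text live in data, and the server keeps no session state.

    # Hypothetical sketch: the path through an interactive quote
    # application is data, so the interview can be reordered or
    # extended without a code change. The flow itself is invented.
    NAVIGATION = {                        # question_id -> next question_id
        "zip_code":     "vehicle_year",
        "vehicle_year": "vehicle_make",
        "vehicle_make": "driver_age",
        "driver_age":   None,             # end of the quote interview
    }

    QUESTION_DICTIONARY = {               # presentation text kept as data too
        "zip_code": "What is your ZIP code?",
        "vehicle_year": "What year is your vehicle?",
        "vehicle_make": "Who makes your vehicle?",
        "driver_age": "How old is the primary driver?",
    }

    def next_question(current_id):
        """Stateless lookup: the client passes back where it is;
        the server holds no session."""
        return NAVIGATION.get(current_id)

    q = "zip_code"
    while q:
        print(QUESTION_DICTIONARY[q])
        q = next_question(q)

Changing the interview order then means updating a row of data, not redeploying code.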
Thursday, March 8
C49: Enterprise
Information Architecture: "Starter Kit" Models
Director
of Information Architecture Services
DATANOMICS,
Inc.
This
presentation reflects the speaker's experience in building and using enterprise
architecture frameworks to create architecture models and related data models.
The presentation provides a "drill-down" for the "models"
dimension of the "data" component of the "Starter Kit"
architecture framework. It introduces a standardized approach to building
conceptual information architecture models. It describes the link from
architecture model to conceptual data model. It includes examples and guidelines
for construction of Current State and Target State information architecture
models. Attendees will learn:
How to construct a standardized information architecture model
How to decompose a standardized information architecture model
How to create a conceptual data model from a Level n information architecture
model
The
presentation also includes examples and guidelines for construction of Level 0
and Level n architecture models and translation to data model. Examples
demonstrate the integration of e-Business, EAI and CRM.
Overview of the "Starter Kit"
Overview of the Starter Kit Architecture Framework
Framework models
Current state/target state architecture model
Level 0/Level n architecture model
Why an architecture model is not a data model
Translation of architecture model to conceptual data model
Presentation model
C50: The Grammar of Business Rules
Terry
Moriarty
President
Inastrol
New data and object modelers are taught to hunt for all the business’s nouns, as these represent the most likely candidates for entities or object classes. Sentences with the pattern of “Noun – Verb – Noun” probably represent relationships, while a generalization hierarchy often lurks behind sentences with an “is a” verb statement. Do other patterns exist in language that can help us in uncovering and structuring an organization’s business rules? This presentation strives to discover the grammar of business rules by drawing on the Zachman Enterprise Systems Architecture Framework and the sentence diagramming technique many of us learned in high school.
Terry Moriarty has developed a methodology that integrates business rules analysis with the meta-data management environment to address major business concerns, such as Customer Relationship and Product information management. Her dynamic business models have been used as the basis of customer models for companies within the financial services, telecommunication, software/hardware technology manufacturing and retail consumer product industries.
Techniques for dissecting business ramblings to form well-structured business
rules
How the Zachman Framework can be used as a technique for classifying business
terms
How adverbs and adjectives provide clues in uncovering business rules
Why many nouns really represent important business states and roles
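As a toy illustration of the Noun-Verb-Noun heuristic the abstract describes (and not Ms. Moriarty's actual technique), the fragment below scans a sentence against an assumed glossary of business nouns and proposes candidate relationships.

    # Hypothetical sketch: flag Noun-Verb-Noun shapes in requirement
    # sentences as candidate entity-relationship-entity triples.
    # Real business text needs a proper parser; this only shows the idea.
    import re

    NOUNS = {"customer", "order", "product", "invoice"}   # assumed glossary
    STOPWORDS = {"a", "an", "the", "each", "every"}

    def candidate_relationships(sentence):
        words = [w for w in re.findall(r"[a-z]+", sentence.lower())
                 if w not in STOPWORDS]
        found = []
        for i in range(len(words) - 2):
            subj, verb, obj = words[i:i + 3]
            if subj in NOUNS and obj in NOUNS and verb not in NOUNS:
                found.append((subj, verb, obj))
        return found

    print(candidate_relationships("Each customer places an order"))
    # [('customer', 'places', 'order')]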
C51: Conceptual
Data Modeling in an Object-Oriented Process
Principal
Consultant
InConcept,
Inc.
This
presentation will detail what an object oriented (OO) process is and what the
pros and cons of using an OO process are. Further, this presentation will
introduce a more rigorous way to model data (namely, Object-Role Modeling or ORM)
and the associated business rules and requirements, and how to incorporate that
rigor into an OO process resulting in better quality of analysis and design
artifacts for more accurate, robust, and precise software.
What an OO process is; pros/cons of an OO process
What ORM is (briefly); pros/cons of ORM
How to integrate ORM into an OO process such that the two techniques work in
tandem while emphasizing the strengths of each
Case study illustration of the benefits of this approach
C52: A
Success Story: Enterprise Customer Meta Data Definition/Implementation
Barbara
Peterson
During
this presentation the speaker will provide a process for defining enterprise customer
meta data, which is required to enable data sharing across businesses and central
functions including Ecommerce, sales and marketing, support and ERP. This
presentation is based on 10 years of successful, hands-on experience defining
and implementing the enterprise customer data model/data standards required for
HP and currently, Agilent Technologies. The program includes a single data
standards process and a formal set of roles and responsibilities.
Learn the steps required to define an enterprise customer data model/meta data
that supports the entire enterprise and allows successful data sharing
Understand how to successfully create and manage a multi-geographic,
cross-functional/business 'Customer Information/Data Standards Council' to
enable standards to be defined based on specific need and current technologies
Learn how to develop metrics for measuring adherence/compliance to standards in
existing/new systems
C53: eRespository for eBusiness
Warren Selkow
Siebel Corp.
eCommerce is causing businesses
worldwide to change their concepts of infrastructure and what a corporate
architecture looks like. As this
happens, corporations will be faced with the problems of knowing what they need
to know and how to share it. These
problems will lead the data management activity to the realization that the
nature of the repository must change to support the business’s continually
changing needs. This presentation will address the issues of meta-data
organization, location and facilitation. Attendees
will learn
What the evolving repository will have to contain.
How meta-data will have to be organized.
How a repository will have to be positioned in the WWW world.
Why Management will care and give financial and moral support.
C54: Incorporating Click Stream
Analysis in Decision Support Services
The challenge in data warehousing and
meta data management
Patricia Klauer, Senior Consultant, Apex Solutions, Inc
& John Murphy, Senior Consultant, Apex Solutions, Inc
On-line commercial activities offer a new set of challenges to the decision support arena. Not only is traditional marketing, transactional and product information required, but significant knowledge of customers and customer activities can be gained. Used in the right mix, this data can yield exceptionally accurate analytical and predictive models of customers and their behaviors. The new sets of operational data related to click stream activity will be useful not only in the on-line e-commerce space but also in the developing interactive video and wireless web services. Additionally, organizations founded in web space are moving to brick-and-mortar, just as brick-and-mortar businesses are moving to the web. The integration of these marketing and information channels presents a unique opportunity to see customers in a 360-degree view.
This session will discuss some of the unique challenges faced primarily by marketing organizations in integrating the volumes, quality, and required speed of delivery of new analytical data marts and their underlying decision support services. Topics covered include:
The Architectural systems challenge for near real time analysis.
Click stream data acquisition and cleansing – Knowing what you're seeing.
What to do when your initial load is 3.5 TB – Keeping your costs under control.
The near real time storage challenge – Reducing costs while keeping data as an asset.
The use and abuse of cookies and email – End user data quality and the opt-in challenge.
Click stream components, aggregation and summarization – Developing visits and views with cleansed quality data (see the sketch after this list).
Integration of transaction, demographic and click stream data.
The components of an analytical architecture. Sandboxes, data marts and the analytical environment.
The 24-hour cycle – measuring your results and modifying your activities using an analytical data mart to calculate true ROIs.
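A small, hypothetical sketch of the "developing visits" step mentioned in the list above: one visitor's page views are grouped into a visit until the gap between clicks exceeds a timeout. The 30-minute cutoff is a common industry convention assumed here, not a figure from the session.

    # Hypothetical sketch: sessionize raw click stream events into visits.
    VISIT_TIMEOUT = 30 * 60   # seconds; a common convention, assumed

    def sessionize(events):
        """events: list of (visitor_id, unix_timestamp, url), any order.
        Returns visits, each a list of one visitor's events."""
        visits = []
        by_visitor = {}
        for ev in sorted(events, key=lambda e: (e[0], e[1])):
            visitor, ts, _url = ev
            current = by_visitor.get(visitor)
            # Start a new visit on first sight or after a long gap.
            if current is None or ts - current[-1][1] > VISIT_TIMEOUT:
                current = []
                by_visitor[visitor] = current
                visits.append(current)
            current.append(ev)
        return visits

    clicks = [("v1", 0, "/home"), ("v1", 120, "/quote"), ("v1", 4000, "/home")]
    print(len(sessionize(clicks)))   # 2 visits: the click at 4000s starts a new one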
C55: Action
Business Rules – Getting to Yes
Judi Reeder
Principal
Knowledge
Partners Inc.
Action
Business Rules test conditions and, upon finding them true, start a transaction
or event. When capturing Action Business Rules, one of the key tasks is to discover
and document the conditions and values that impact the decision. This
presentation discusses examples of decision areas where Action Business Rules
were developed using facilitated sessions. Attendees will then be divided into
groups to work through their own set of ‘conditional terms’ that enable making a
decision.
Attendees will learn:
How to structure a session to capture Action Business Rules
Sample workflow and meta model to support the undertaking
Sample templates for capturing and testing the Action Business Rules
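Purely as an assumed illustration of the session's theme, an Action Business Rule can be captured as a decision-table row whose conditional terms, when all true, start a transaction or event; the customer-type and order-total terms below are invented.

    # Hypothetical sketch: an Action Business Rule as a decision-table
    # row listing conditional terms plus the transaction to start.
    DECISION_TABLE = [
        # (customer_type, order_total_over, action_to_start)
        ("gold",     500, "start_free_shipping"),
        ("standard", 500, "start_discount_offer"),
    ]

    def decide(customer_type, order_total):
        for row_type, threshold, action in DECISION_TABLE:
            if customer_type == row_type and order_total > threshold:
                return action          # conditions true -> start the event
        return None                    # no rule fires

    print(decide("gold", 750))         # start_free_shipping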
C56: Enterprise
Model In Action
Natalie
Arsenault
Most
companies consider a data model as a tool to develop databases, but at First
Union, data modeling is used for more than just database development. First
Union's Enterprise Data model is being used to help focus business understanding
and guide the business using our Construct methodology, a way of capturing key
business concepts. As a follow-up to Enterprise Modeling Through Constructs (DAMA
Meta Data 2000 Conference), Enterprise Model in Action will cover expanded
usage of the Enterprise Model constructs at First Union and their benefit in
enhancing business understanding. The talk will focus on how to leverage an
enterprise model to support practical applications in various efforts.
How to use the Enterprise constructs to support a data stewardship program
Developing key business requirements using the constructs
Helping to set business project priorities using the constructs
C57: Making Sense Out of Madness: Managing
Messy Data
Karen
Meng
No
matter how well your database system performs, if the quality of your data
stinks, so will the value of your database. Within two months, Bay Medical
Management changed their paradigm of how to manage data. They implemented a
"Data Clarification Procedure" to ensure high quality data for
radiologists and other medical professionals. This presentation will describe
the real-life pitfalls and successes of creating the "Data Clarification
Procedure" for a medical claims processing system. The presentation will
explain how Bay Medical Management, LLC is using the "Data Clarification
Procedure" in order to develop a nationwide data repository for
radiologists. Attendees will learn:
How to create a process to ensure high quality data
How to avoid common problems when working with messy data
How to fix problems with inherited data, where possible
C58: Synchronizing
Your Operational Systems
with Your Enterprise Information Portal (EIP) using
Meta Data Management
Joe
Danielewicz
Manager
of Data Administration
Motorola,
SPS
This
presentation compares and contrasts Web Portals vs. Enterprise Information
Portals (EIP), demonstrates how to use XML & middleware to synchronize
your EIP with operational systems, and shows how to manage your meta data in
order to bring meaning to your EIP (a brief synchronization sketch follows the
outline below).
Portal Concepts
Internet or Web Portals
myyahoo.com
Motorola Knowledge Café
Enterprise Information Portals (EIP)
aka Corporate Portals or Enterprise Portals
mysap.com
more customized, focused
EIP Portal Organization
EIP Portal Personalization
central point of access for all users in your company
Personalization using role assignment
The Nature of Operational Data
Optimized for OLTP
Inflexible views of data
OLTP performance
Meta Data Brings Meaning to Data
Recursive Method for Unlocking Meta Data
New Technologies will Help Portal Development
XML - Extensible Markup Language
Middleware
Meta Data Management Key to Both XML and Middleware
XML DTDs vs. XML Schemas
XML-enabled Meta Data Repositories
Mercator Middleware
Need a method to tie portal to operational systems
Middleware can keep the latest org chart on our portal
Middleware can gather the information from a portal and submit an OLTP (IMS,
CICS) transaction back to the operational system
Developing a Virtual Meta Data Repository
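To make the XML-and-middleware synchronization step in the outline concrete, here is a minimal sketch; the message format, element names, and org-chart example are assumptions, not Motorola's actual design.

    # Hypothetical sketch: an operational system publishes a change as
    # XML, and a middleware hook applies it to the portal's cached copy.
    # Real deployments would validate against a DTD or Schema first.
    import xml.etree.ElementTree as ET

    MESSAGE = """<orgChange>
      <employee id="e123">
        <name>J. Smith</name>
        <department>Data Administration</department>
      </employee>
    </orgChange>"""

    portal_org_chart = {}   # stand-in for the portal's cached org data

    def apply_org_change(xml_text):
        root = ET.fromstring(xml_text)
        for emp in root.findall("employee"):
            portal_org_chart[emp.get("id")] = {
                "name": emp.findtext("name"),
                "department": emp.findtext("department"),
            }

    apply_org_change(MESSAGE)
    print(portal_org_chart["e123"]["department"])   # Data Administration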
C59: New Trends in Data Management
Don Soulsby, Computer Associates
James Jonas, Oracle Corporation
Meta data is going "mainstream" as application areas, and the pressure for interoperability, are increasing steadily. Likewise, the tools and technologies for defining, storing and managing meta data are becoming more robust. And we're not just talking about familiar tabular data structures. More and more, you're likely to need meta data definitions for unstructured documents, web pages, spatial data, and even digital images and video. This panel session looks at some of the emerging trends in meta data technologies, tools and applications.
C60: OMG
CWM - An Architecture for Enterprisewide E-Business Intelligence Integration
Sridhar
Iyengar
Unisys
Fellow
Unisys
The
use of meta data repositories and related open standards that have historically
focused on application development and monolithic data warehousing has now been
extended to support component middleware frameworks and more recently to
federated data warehouses and data marts using the OMG Common Warehouse
Metamodel (CWM). Just as OMG UML unified the object oriented development
community, CWM, which builds on UML and the foundational meta data standards of
MOF (Meta Object Facility) and XMI (XML Metadata Interchange), promises to unify
the much more complex world of databases, data warehouses, data marts and
enterprise information portals.
CWM
has been designed by some of the best designers of databases, data warehouses
and meta data repositories in the industry, and is a comprehensive, model-driven,
middleware-neutral data interchange and interoperability standard for
integrating legacy data (file and hierarchical database systems), relational
data, and web data (XML and HTML), as well as analytical data (OLAP and data
mining systems). This presentation presents an E-Business Intelligence integration
architecture and early results from the labs of CWM developers and implementers
such as Unisys, UBS, Oracle, IBM and Hyperion. The session will also cover the
latest status of CWM and a progress report on the efforts to unify CWM
and the former MDC OIM (Meta Data Coalition Open Information Model).