Focus on Decision Making

3- Materials and Methods

This research focuses on decision making in a sales management system. Decision support systems (DSS) can play a major role in enhancing a decision maker's (DM's) decision-making abilities. A DSS can support and enhance a DM's decision-making capability by processing data and allowing participants to simulate a variety of scenarios quickly and make decisions efficiently. A DSS can also help to assess and evaluate the benefits and risks of exploration inside the organization. This chapter describes the related work on decision making.

3.1- Previous work on Decision Making

In data warehousing applications, several OLAP queries involve the processing of holistic aggregators such as computing the "top n," the median, quantiles, etc. Fu and Rajasekaran presented a novel approach called dynamic bucketing to efficiently evaluate these aggregators. They partition data into equal-width buckets and further partition dense buckets into sub-buckets as required by allocating and reclaiming memory space. The bucketing process dynamically adapts to the input order and distribution of the input data sets. The histograms of the buckets and sub-buckets are stored in a new data structure called structure trees. A recent selection algorithm based on regular sampling is generalized and its analysis extended. They also evaluated their new algorithms against this generalized algorithm and a number of other recent algorithms. Experimental results demonstrate that their new algorithms significantly outperform previous ones, not only in runtime but also in accuracy (Fu and Rajasekaran, 2000).

Decision support systems are evolving in order to handle complex data. A number of recent works have shown the interest of combining on-line analytical processing (OLAP) and data mining. Messaoud et al. assume that coupling OLAP and data mining would provide outstanding solutions for treating complex data. To do so, they proposed an improved OLAP operator based on agglomerative hierarchical clustering (AHC). The proposed operator, called OPAC (Operator for Aggregation by Clustering), is able to provide significant aggregates of facts referring to complex objects. They complete this operator with a tool allowing the user to evaluate the best partition from the AHC results, corresponding to the most interesting aggregates of facts (Messaoud et al, 2004).

Technologies such as decision support systems (DSS) are helpful in solving numerous kinds of problems, particularly those that are based on quantitative data and/or are tactical in scope. For strategic decisions, though, decision makers can benefit greatly from a tool that tracks and organizes qualitative and other imprecise information. Such a tool would help cultivate and leverage an organization's intellectual resources and help users address decision making in a more informed fashion. Although DSS technologies have not usually been used in such situations, they can be adapted to do so. This paper addresses the development of a qualitative DSS in a health care setting that allowed hospital administrators to utilize the qualitative information obtained by its network of field representatives for strategic benefit. This type of system, and its related benefits, can be extended to other business situations (Sauter, 2005).

Abdul-Jalbar et al. dealt with a multi-echelon inventory system in which one vendor supplies goods to multiple buyers. The vendor produces the goods at a finite rate, and customer demand occurs at each buyer at a constant rate. There is a holding cost per unit stored per unit time at the vendor and at each buyer. Each time a production run is carried out, the vendor incurs a setup cost. Furthermore, placing an order at a buyer involves a fixed ordering cost. Shortages are not allowed. The goal is to determine the order quantities at the buyers and the production and delivery schedule at the vendor in order to minimize the average total cost per unit time. They formulated the problem in terms of integer-ratio policies and developed a heuristic procedure. They also illustrated how the problem should be addressed in the case of independence between the vendor and the buyers. Both solution procedures are demonstrated with a numerical example. Finally, they presented the results of a numerical study which illustrates the performance of the heuristic for computing integer-ratio policies. In addition, they compared the integer-ratio policies with the decentralized rule, and a sensitivity analysis of the parameters is also reported (Abdul-Jalbar et al, 2006).

Supply chain research can lead to an increase in effectiveness, business integration, responsiveness and, ultimately, market competitiveness. In the sugar industry, such research has expanded rapidly over the past two decades, motivated by low world sugar prices and increasing costs of production. However, in the present competitive business environment, a more customer-driven and holistic approach to supply chain management is required. This study focuses on warehouse and distribution management for the export channel of the Thai sugar industry. The aim is to propose the best inventory position and transportation route in the distribution system based on a genetic algorithm (GA). It provides a systematic and flexible framework to solve the problem of cost minimization for sugar transport from the mills to seaports. The results show that the tool is helpful not only for reducing cost, but also for managing sugar warehousing, distribution routes and seaport exporting. While the focus of this paper is on the sugar supply chain, much of the information is relevant to distribution management of other agricultural commodities as well (Chiadamrong and Kawtummachai, 2007).

Managers increasingly face net-sourcing decisions of whether and how to outsource selected software applications over the Internet. This paper illustrates the development of a net-sourcing decision support system (DSS) that provides support for the initial net-sourcing decision of whether or not to net-source. The development follows a five-stage methodology focusing on empirical modeling with internal validation during the development. It starts with identifying potential decision criteria from the literature, followed by the collection of empirical data. Logistic regression is then used as a statistical method for selecting relevant decision criteria. Applying the logistic regression analysis to the dataset yields competitive significance and strategic vulnerability as relevant decision criteria. The development concludes with designing an initial and a complementary DSS module. The paper reviews the developed DSS and its underlying development methodology (Loebbecke and Huyskens, 2007).

Process mean selection for a container-filling process is an important decision in a single-vendor single-buyer supply chain. Since the process mean determines the vendor's conforming and yield rates, it influences the vendor-buyer decisions concerning the production lot size and the number of shipments delivered from the vendor to the buyer. It follows, consequently, that these decisions should be determined simultaneously in order to control the supply chain's total cost. Darwish developed a model that integrates the single-vendor single-buyer problem with the process mean selection problem. This integrated model permits the vendor to deliver the produced lot to the buyer in a number of unequal-sized shipments. Furthermore, every outgoing item is inspected, and each item failing to meet a lower specification limit is reprocessed. Further, in order to study the benefits of using this integrated model, two baseline cases are developed. The first considers a hierarchical model where the vendor determines the process mean and the schedules of production and shipment separately. This hierarchical model is used to illustrate the effect of integrating the process mean selection with production/inventory decisions. The other baseline case is studied in the sensitivity analysis, where the optimal solution for a given process is compared to the optimal solution when the variation in the process output is negligible. The integrated model is expected to lead to a decrease in reprocessing cost and a negligible loss to the customer due to deviation from the optimal target value, accordingly providing better products at reduced cost for customers. Also, a solution procedure is developed to find the optimal solution for the proposed model, and sensitivity analysis is conducted to examine the effect of the model's key parameters on the optimal solution (Darwish, 2008).

This study combines well-defined game theory with a decision support system (DSS) to embed the significant factors involved in coordination game stability. A data-warehouse-based DSS is used as a coordination instrument, along with the disclosure of quality report cards. Feedback on prescribing, giving information on aggregated data, along with the DSS, might be enough to improve prescribing behavior. The aim of this study was to apply a DSS with pre-play communication from coordination game theory to the improvement of doctors' antibiotic prescribing behavior. The authors found that the group using the system had a greater decrease in antibiotic prescriptions than the non-DSS group. This study concluded that the DSS with game-theory modeling has made an important contribution to improving the doctors' prescribing behavior. Future research directions and managerial implications are addressed as well (Lin et al, 2008).

3.2- Data Warehouse Development Phase

3.2.1- Dimensional Analysis

Building a data warehouse is very different from building an operational system. This becomes evident particularly in the requirements gathering phase. Because of this difference, the traditional methods of collecting requirements that work well for operational systems cannot be applied to data warehouses.

3.3- Information Packages: A New Concept

The new technique for determining requirements for a data warehouse system is based on business dimensions. It flows out of the need of the users to base their analysis on business dimensions. The new concept incorporates the basic measurements and the business dimensions along which the users analyze these basic measurements. Using the new technique, we come up with the measurements and the related dimensions that have to be captured and kept in the data warehouse. We come up with what is known as an information package for the specific subject.

Information packages serve our primary goals in the requirements definition phase; a minimal sketch of an information package appears after the list below.

Essentially, information packages enable you to:

  • Define the common subject areas.
  • Design key business metrics.
  • Choose how data must be presented.
  • Decide how users will aggregate or roll up.
  • Choose the data quantity for users' analysis or query.
  • Decide how data will be accessed.
  • Establish data granularity.
  • Estimate data warehouse size.
  • Determine the frequency for data refreshing.
  • Ascertain how information must be packaged.
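
As a small illustration, the content of one such information package can be recorded as a plain data structure. The following is a minimal sketch only, in Python, using the sales subject and the dimension and measure names that appear later in this chapter; the attribute lists are abbreviated, and "Store Code" is a hypothetical attribute name.

    # A minimal sketch of an information package for the Sales subject.
    # Dimension and measure names are taken from later sections of this
    # chapter; attribute lists are abbreviated for illustration.
    sales_information_package = {
        "subject": "Sales",
        "dimensions": {
            "Customer": ["Customer Code", "City Code", "State Code"],
            "Product": ["Prod_id"],
            "Store": ["Store Code"],   # hypothetical attribute name
            "Time": ["Sales_date"],
        },
        "measures": ["Qty", "Sales_amt"],
    }

    for dimension, attributes in sales_information_package["dimensions"].items():
        print(dimension, "->", attributes)
    print("Facts:", sales_information_package["measures"])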

3.3.1- Data Source

The proposed data warehouse for decision making will extract information from the following available data sources.

3.3.2- Sales Management System

Sales:

  • Customer
  • Purchase order
  • Currency
  • Time
  • Store
  • Products
  • Inter company
  • All these dimensions are based on the time period.

3.3.3- Data Structure

The disparate systems have the following relational data structures:

Sales Management System.

3.4- Architecture Information

Operating System.

  • Windows 2000 Server
  • Windows 2000 Professional
  • Windows XP

Networks

A network is simply a group of computers connected by cable or other media so they can share information.

Network elements

  • Server
  • Client
  • Peer
  • Media
  • Resources
  • User

Networking Model

  • Centralized

Advantages of a centralized model

  • Ease of backup
  • Security
  • Low cost

3.5- Data Transformation

It is not enough just to list the probable data sources. We must list the relevant data structures as possible sources because of the associations of those data structures with the possible data in the data warehouse. Once we have listed the data sources, we need to determine how the source data will have to be transformed into the type of data suitable to be stored in the data warehouse. In our requirements definition document, we include details of data transformation. This essentially involves mapping the source data to the data in the data warehouse. We indicate where the data for our metrics and dimensions will come from, and describe the merging, conversion, and splitting that need to take place before moving the data into the data warehouse. A minimal sketch of such a mapping follows.
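
The following is a minimal sketch of how such a mapping can be recorded, in Python. The source table and field names (ORDERS, PROD_CODE, and so on) are hypothetical; the target fields are the sales facts listed later in this chapter.

    # A minimal sketch of a source-to-warehouse data mapping. The source
    # table "ORDERS" and its field names are hypothetical examples.
    SOURCE_TO_WAREHOUSE_MAP = [
        # (source field,       target field,  transformation)
        ("ORDERS.PROD_CODE",   "Prod_id",     "copy"),
        ("ORDERS.ORDER_DATE",  "Sales_date",  "convert DD/MM/YY text to ISO date"),
        ("ORDERS.QTY_ORDERED", "Qty",         "copy"),
        ("ORDERS.AMOUNT",      "Sales_amt",   "convert local currency to base currency"),
    ]

    def document_mapping(mapping):
        """Print the mapping in the form used in a requirements definition document."""
        for source, target, rule in mapping:
            print(f"{source:22} -> {target:12} ({rule})")

    document_mapping(SOURCE_TO_WAREHOUSE_MAP)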

3.6- Data Storage

From our interviews with the users, we find out the level of detail we need to keep in the data warehouse. We shall have an idea of the number of data marts we require for supporting the users. Also, we will know the details of the metrics and the business dimensions.

When we find out about the types of analysis the users will typically do, we can determine the types of aggregation that must be kept in the data warehouse. This gives us information about additional storage requirements. Our requirements definition document must include sufficient details about storage requirements.

3.7- Information Package Diagrams

The presence of information package diagrams in the requirements definition document is the major and significant difference between operational systems and data warehouse systems. Information package diagrams are the best approach for determining requirements for a data warehouse. The information package diagrams bring together the information requirements for the data warehouse. They hold the critical metrics measuring the performance of the business units, the business dimensions along which the metrics are analyzed, and the details of how drill-down and roll-up analyses are done.

3.8- System Analysis

A data warehouse is an information delivery system for providing information for strategic decision making. It is not a system for running day-to-day business.

Who are the users that can make use of the information in the data warehouse?

Where do we go to gather the requirements?

Broadly, the users of the data warehouse can be classified as follows:

  • Senior executives (including the sponsors)
  • Key departmental managers
  • Business analysts
  • Operational system DBAs
  • Others nominated by the above.

The executives will give us a sense of direction and scope for the data warehouse. They are the ones closely involved in the focused area. The key departmental managers are the ones who report to the executives in the area of focus. Business analysts are the ones who prepare reports and analyses for the executives and managers. The operational system DBAs and IT applications staff give us information about the data sources for the warehouse.

The requirements we need to gather are:

  • Data elements: fact classes, dimensions.
  • Recording of data in terms of time.
  • Data extracts from source system.
  • Business rules: attributes, ranges, domains, operational records.

3.9- Design

Principles of Dimensional Modeling

The requirements definition completely drives the data design for the data warehouse. Data design consists of putting together the data structures; a set of data elements forms a data structure. Logical data design includes determining the various data elements that are needed and grouping the data elements into structures of data. Logical data design also includes establishing the associations among the data structures. The outcome of the requirements gathering phase is documented in detail in the requirements definition document. An important component of this document is the set of information package diagrams. The information package diagrams form the basis for the logical data design for the data warehouse. The data design process results in a dimensional data model.

3.9.1- Design Decisions

Before designing the dimensional data model, the following design decisions must be taken into account.

Choosing the process.

Selecting the subjects from the information packages for the first set of logical structures to be designed.

Choosing the grain.

Determining the level of detail for the data in the structures.

Identifying and conforming the dimensions.

Choosing the business dimensions (such as product, market, time, etc.) to be included in the first set of structures and making sure that each particular data element in every business dimension conforms to the others.

Choosing the facts.

Selecting the metrics or units of measurement to be included in the structures.

3.9.2- Dimensional Modeling

Dimensional modeling gets its name from the business dimensions we need to incorporate into the logical data model. It is a logical design technique to structure the business dimensions and the metrics that are analyzed along these dimensions. The dimensional model consists of the specific data structures needed to represent the business dimensions. These data structures also contain the metrics or facts.

We observe the list of measurements or metrics that the automaker wants to use for analysis. Next, look at the column headings. These are the business dimensions along which the automaker wants to analyze the measurements or metrics. Under each column heading you see the dimension hierarchies and categories within that business dimension. What we see under each column heading are the attributes relating to that business dimension. Reviewing the information package diagram for automaker sales, we notice three types of data entities.

  • Measurements or Facts
  • Business Dimensions
  • Attributes

So when we put together the dimensional model to represent the information contained in the automaker sales information package, we need to come up with data structures to represent these three types of data entities. Let us discuss how we can do this.

3.9.3- Information Subject: Sales Analysis Dimensions

First, let us work with the measurements or metrics seen at the bottom of the information package diagram. These are the facts for analysis. In the sales diagram, the facts are as follows:

  • Prod_id
  • Sales_date
  • Qty
  • Sales_amt
  • Sales_amtstat

Each of these data items is a measurement or fact.

In relational database terms, you may call the data structure a relational table. So the metrics or facts from the information package diagram will form the fact table.

Let us now move on to the other sections of the information package diagram, taking the business dimensions one by one. Look at the customer business dimension.

The customer business dimension is used when we want to analyze the facts by customer. Occasionally our analysis might be a breakdown by individual customer. Another analysis could be at an even higher level, by customer categories. The list of data items relating to the customer dimension is as follows:

  • Customer Code
  • City Code
  • State Code
  • Name
  • Address

What can we do with all these data items in our dimensional model? All of these relate to the customer in some way. We can group all of these data items in one data structure or one relational table. We can call this table the customer dimension table. The data items in the above list would all be attributes in this table.

Looking further into the information package diagram, we note the other business dimensions shown as column headings.

The data items shown within each column would then be the attributes for each corresponding dimension table. Before we decide how to arrange the fact and dimension tables in our dimensional model, let us understand what the dimensional model needs to achieve and what its purposes are. Here are some of the criteria for combining the tables into a dimensional model.

  • The model should provide the best data access.
  • The whole model must be query-centric.
  • It must be optimized for queries and analysis.
  • The model must show that the dimension tables interact with the fact table.
  • It should also be structured in such a way that every dimension can interact equally with the fact table.
  • The model should allow drilling down or rolling up along dimension hierarchies.

With these requirements, we find that a dimensional model with the fact table in the center and the dimension tables arranged around the fact table satisfies the conditions. In this arrangement, each of the dimension tables has a direct association with the fact table in the middle. This is necessary because every dimension table with its attributes must have an even chance of participating in a query to analyze the attributes in the fact table.

Such an arrangement in the dimensional model looks like a star formation, with the fact table at the core of the star and the dimension tables along the spikes of the star. The dimensional model is therefore called a star schema. Let us observe the star schema for sales. The sales fact table is in the center. Around this fact table are the dimension tables of product, store, customer, and time. Each dimension table is related to the fact table in a one-to-many relationship. In other words, for one row in the product dimension table, there are one or more related rows in the fact table. A minimal sketch of this arrangement follows.
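
The following is a minimal sketch of the star arrangement in Python with SQLite, reduced to the product dimension and the sales fact table. The fact columns follow the fact list given earlier; the product name attribute is a hypothetical addition for illustration.

    # A minimal sketch of a sales star schema, reduced to one dimension.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    cur.execute("""
        CREATE TABLE product_dim (
            prod_id INTEGER PRIMARY KEY,
            name    TEXT)""")
    cur.execute("""
        CREATE TABLE sales_fact (
            prod_id    INTEGER REFERENCES product_dim(prod_id),
            sales_date TEXT,
            qty        INTEGER,
            sales_amt  REAL)""")

    # One row in a dimension table relates to many rows in the fact table.
    cur.execute("INSERT INTO product_dim VALUES (1, 'Widget')")
    cur.executemany("INSERT INTO sales_fact VALUES (?, ?, ?, ?)",
                    [(1, "2008-01-05", 10, 100.0),
                     (1, "2008-02-09", 4, 40.0)])

    # A star join: a dimension table joined directly to the central fact table.
    for row in cur.execute("""
            SELECT p.name, SUM(f.qty), SUM(f.sales_amt)
            FROM sales_fact f
            JOIN product_dim p ON p.prod_id = f.prod_id
            GROUP BY p.name"""):
        print(row)   # ('Widget', 14, 140.0)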

3.10- E-R Modeling vs. Dimensional Modeling

We are familiar with data modeling for operational or OLTP systems. We adopt the entity-relationship (E-R) modeling method to create the data models for these systems. The following list gives the characteristics of OLTP systems and illustrates why E-R modeling is suitable for them.

  • OLTP systems capture details of events or transactions
  • OLTP systems focus on individual events
  • An OLTP system is a window into micro-level transactions
  • Picture at the detail level necessary to run the business
  • Suitable only for questions at the transaction level
  • Data consistency, non-redundancy, and efficient data storage are critical

Entity-Relationship Modeling

Removes data redundancy

Ensures data consistency

Expresses microscopic relationships

ER modeling for OLTP systems.

Let us recapitulate the characteristics of the data warehouse information and evaluate how dimensional modeling is suitable for this purpose.

A DW is meant to answer questions on the overall process.

A DW focuses on how managers view the business.

A DW reveals business trends.

Information is centered on a business process.

Answers show how the business measures the process.

The measures are to be studied in many ways along several business dimensions.

Dimensional Modeling

Captures critical measures

Views along dimensions

Intuitive to business users

3.10.1- The Star Schema

The STAR schema structure intuitively answers the questions of what, when, by whom, and to whom. From the STAR schema, users can easily visualize the answers to these questions. When a query is made against the data warehouse, the results of the query are produced by combining or joining one or more dimension tables with the fact table. A particular row in the fact table is related to a row in each dimension table. A common type of analysis is the drilling down of summary numbers to get at the details at the lower levels. The users can easily discern all of this drill-down analysis by reviewing the STAR schema. A small sketch of roll-up and drill-down queries follows.
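
As a minimal sketch, the following two queries show roll-up and drill-down against the star schema sketched in the previous section. The time_dim table, with month and year attributes, is an assumption made for illustration.

    # Roll-up: summary numbers at the year level of the time hierarchy.
    ROLL_UP = """
        SELECT t.year, SUM(f.sales_amt) AS total_sales
        FROM sales_fact f
        JOIN time_dim t ON t.sales_date = f.sales_date
        GROUP BY t.year
    """

    # Drill-down: the same query one level lower, adding the month.
    DRILL_DOWN = """
        SELECT t.year, t.month, SUM(f.sales_amt) AS total_sales
        FROM sales_fact f
        JOIN time_dim t ON t.sales_date = f.sales_date
        GROUP BY t.year, t.month
    """

    print(ROLL_UP)
    print(DRILL_DOWN)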

3.10.2- The Snowflake Schema

"Snow flaking" is a technique of normalizing the dimension table in a STAR schema. When we totally normalize all dimension tables, the resultant structure resembles a snowflake with the fact table in the central point. Let us review a simple STAR schema fro sales in a manufacturing company. The following options indicate the different easy you many want to consider for normalization:

  • Partially normalize only a few dimension tables, leaving the others intact.
  • Partially or fully normalize only a few dimension tables, leaving the rest intact.
  • Partially normalize every dimension table
  • Fully normalize every dimension table.
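
The following is a minimal sketch of snowflaking one dimension, as SQL wrapped in Python strings. It assumes a product dimension that carries a category attribute; the category does not appear in the dimension lists earlier in this chapter and is hypothetical.

    # Star version: the category attribute is repeated in every product row.
    PRODUCT_STAR = """
        CREATE TABLE product_dim (
            prod_id       INTEGER PRIMARY KEY,
            name          TEXT,
            category_name TEXT)
    """

    # Snowflake version: the category moves to its own normalized table.
    PRODUCT_SNOWFLAKE = """
        CREATE TABLE category_dim (
            category_id   INTEGER PRIMARY KEY,
            category_name TEXT);
        CREATE TABLE product_dim (
            prod_id     INTEGER PRIMARY KEY,
            name        TEXT,
            category_id INTEGER REFERENCES category_dim(category_id));
    """

    print(PRODUCT_STAR)
    print(PRODUCT_SNOWFLAKE)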

Disadvantages

  • The schema is less intuitive and end-users are put off by the complexity.
  • The ability to browse through the contents is difficult.
  • Query performance is degraded because of the additional joins.

3.11- Data Extraction, Transformation, and Loading

The activities that relate to ETL in a data warehouse are by far the most time-consuming and human-intensive. Explicit recognition of the extent and complexity of these activities in the requirements will go a long way toward easing the pain while setting up the architecture. Let us separate out the functions and state the particular considerations needed in the requirements definition.

3.11.1- Data Extraction

Clearly identify all the internal data sources. Specify all the computing platforms and source files from which the data is to be extracted. If you are going to include external data sources, determine the compatibility of your data structures with those of the external sources. Also indicate the methods for data extraction.

3.11.2- Data Transformation

Several types of transformation functions are needed before data can be mapped and prepared for loading into the data warehouse repository. These functions comprise input selection, splitting of input structures, normalization and denormalization of source structures, aggregation, conversion, and resolving of missing values. This turns out to be a long and complex list of functions. The list below is by no means complete for every data warehouse, but it gives a good insight into what is involved in completing the ETL process; a small sketch of two of these functions follows the list.

  • Combine a number of source data structures into a single row in the target database of the data warehouse.
  • Split one source data structure into a number of structures to go into several rows of the target database.
  • Read data from data dictionaries and catalogs of source systems.
  • Read data from a variety of file structures, including flat files, indexed files (VSAM), and legacy system databases (hierarchical/network).
  • Load detail for populating atomic fact tables.
  • Aggregate for populating aggregate or summary fact table.
  • Transform data from one format in the source platform to another format in the target platform.
  • Derive target values for input fields (example: age from date of birth).
  • Change cryptic values to values meaningful to the users.
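
The following is a minimal sketch, in Python, of two of the transformation functions named above: deriving a target value (age from date of birth, as in the example in the list) and changing a cryptic value to one meaningful to the users. The status code table is a hypothetical example.

    # A minimal sketch of two common ETL transformation functions.
    from datetime import date

    STATUS_CODES = {"A": "Active", "C": "Closed"}   # hypothetical cryptic codes

    def derive_age(date_of_birth: date, as_of: date) -> int:
        """Derive a target value (age) from an input field (date of birth)."""
        years = as_of.year - date_of_birth.year
        if (as_of.month, as_of.day) < (date_of_birth.month, date_of_birth.day):
            years -= 1   # birthday not yet reached in the as-of year
        return years

    def decode_status(code: str) -> str:
        """Change a cryptic source value to a value meaningful to the users."""
        return STATUS_CODES.get(code, "Unknown")

    print(derive_age(date(1980, 6, 15), date(2008, 1, 1)))   # 27
    print(decode_status("A"))                                # Active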

3.12- Information Access and Delivery

3.12.1- OLAP in the Data Warehouse

On-line analytical processing (OLAP) is a category of software technology that enables analysts, managers and executives to gain insight into data through fast, consistent, interactive access to a wide variety of possible views of information that has been transformed from raw data to reflect the real dimensionality of the enterprise as understood by the user.

The data warehouse provides the best opportunity for analysis, and OLAP is the vehicle for carrying out involved analysis. The data warehouse environment is also best for data access when analysis is carried out. For effective analysis, users should have easy methods of performing complex analysis along a number of business dimensions. They require an environment that presents a multidimensional view of data, providing the foundation for analytical processing through simple and flexible access to information.

Decision makers must be able to analyze data along any number of dimensions, at any level of aggregation, with the capability of viewing results in a variety of ways. They must have the ability to drill down and roll up along the hierarchies of every dimension. Without a solid system for true multidimensional analysis, a data warehouse is incomplete. A small sketch of such analysis follows.
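
The following is a minimal sketch of multidimensional analysis using pandas in place of an OLAP tool; the sample facts are invented for illustration. The same sales measure is viewed along the product and time dimensions, rolled up by year and drilled down by month.

    # A minimal sketch of roll-up and drill-down with pandas.
    import pandas as pd

    sales = pd.DataFrame({
        "product":   ["Widget", "Widget", "Gadget", "Gadget"],
        "year":      [2007, 2008, 2007, 2008],
        "month":     [1, 2, 1, 3],
        "sales_amt": [100.0, 40.0, 80.0, 60.0],
    })

    # Roll-up: aggregate along the time dimension to the year level.
    print(sales.pivot_table(values="sales_amt", index="product",
                            columns="year", aggfunc="sum"))

    # Drill-down: include the month level of the time hierarchy.
    print(sales.groupby(["product", "year", "month"])["sales_amt"].sum())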

3.12.2- Guidelines for an OLAP System

Let us consider the primary twelve guidelines for an OLAP system.

  • Multidimensional Conceptual view.
  • Transparency.
  • Accessibility.
  • Consistent Reporting Performance.
  • Client/Server Architecture.
  • Generic Dimensionality.
  • Dynamic Sparse Matrix Handling.
  • Multi-user Support.
  • Unrestricted Cross-dimensional Operations.
  • Intuitive Data Manipulation.
  • Flexible Reporting.
  • Unlimited Dimensions and Aggregation Levels.

In addition to these twelve basic guidelines, also take into account the following requirements, not all explicitly specified by Dr. Codd:

  • Drill-through to Detail Level.
  • OLAP Analysis Models.
  • Treatment of Non-normalized Data.
  • Storing OLAP Results.
  • Missing Values.
  • Incremental Database Refresh.
  • SQL Interface.

3.12.3- OLAP Characteristics

OLAP systems:

  • Let business users have a multidimensional and logical view of the data in the data warehouse;
  • Facilitate interactive query and complex analysis for the users;
  • Allow users to drill down for greater detail and roll up for aggregations of metrics along a single business dimension or across multiple dimensions;
  • Provide the ability to perform intricate calculations and comparisons; and
  • Present results in a number of meaningful ways, including charts and graphs.

3.12.4- General Features.

Essential Features.

3.12.5- Implementation Steps

These are the steps or activities at a very high level. Each step consists of numerous tasks to achieve the objectives of the step. Here are the major steps:

  • Dimensional modeling
  • Design and structure of the MDDB
  • Selection of the data to be moved into the OLAP system
  • Data acquisition or extraction for the OLAP system
  • Data loading into the OLAP server
  • Computation of data aggregation and derived data
  • Implementation of the application on the desktop
  • Provision of user training

3.13- Implementation and Maintenance

3.13.1- The Physical Design Process

Physical design brings the work of the project team closer to implementation and deployment.

3.13.2- Physical Design Steps

The following graphic representation shows the steps in the physical design process for a data warehouse.

3.14- Physical Design Considerations

3.14.1- Physical Design Objectives

When we perform the logical design of the database, our goal is to construct a conceptual model that reflects the information content of the real-world situation. The logical model represents the overall data components and the relationships. The objectives of the physical design process do not center on the structure alone. In physical design, we are getting closer to the operating system, the database software, the hardware, and the platform.

To summarize, the major objectives of the physical design process are improving performance on the one hand and improving the management of the stored data on the other. We base our physical design decisions on the usage of data. The frequency of access, the data volumes, the specific features supported by the selected RDBMS, and the configuration of the storage medium all influence the physical design decisions.

  • Improve Performance.
  • Ensure Scalability.
  • Manage Storage.
  • Provide Ease of Administration.
  • Design for flexibility

3.14.2- From Logical Model to Physical Model

In the logical model we have the tables, attributes, primary keys, and relationships. The physical model contains the structures and relationships represented in the database schema, coded with the data definition language (DDL) of the DBMS. Activities such as defining physical naming standards, key constraints, and storage parameters transform a logical model into a physical model.

3.14.3- Sales Physical Model

3.15- Indexing the Data Warehouse

In a query-centric system like the data warehouse environment, the need to process queries faster dominates. There is no surer way of turning our users away from the data warehouse than unreasonably slow queries. For the user in an analysis session going through a rapid succession of complex queries, we have to match the speed of the query results with the speed of thought. Among the various techniques to improve performance, indexing ranks very high.

What types of indexes must we build in our data warehouse? The DBMS vendors offer a variety of choices. The choice is no longer confined to sequential index files. All vendors support B-tree indexes for efficient data retrieval. Another option is the bitmapped index. A small sketch of both follows.
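
The following is a minimal sketch of both choices as plain SQL wrapped in Python strings. A plain CREATE INDEX builds a B-tree index in most relational databases; the BITMAP variant shown uses Oracle's syntax, which is an assumption about the target platform.

    # B-tree index: the default index type in most RDBMSs, suited to
    # selective retrieval on columns such as dates.
    BTREE_INDEX = """
        CREATE INDEX ix_sales_date ON sales_fact (sales_date)
    """

    # Bitmapped index: suited to low-cardinality dimension keys in the
    # fact table; the BITMAP keyword here is Oracle-specific.
    BITMAP_INDEX = """
        CREATE BITMAP INDEX ix_sales_prod ON sales_fact (prod_id)
    """

    print(BTREE_INDEX)
    print(BITMAP_INDEX)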

3.15.1- Data Warehouse Deployment

The main concern in the deployment phase relates to the users getting the training, support, and the hardware and tools they need to get into the warehouse. In the deployment phase, we establish a feedback mechanism for the users to let the project team know how the deployment is going. Although the users will have received training, significant handholding is essential in this phase. Be prepared to provide the support. Let us examine each major activity in the deployment phase.

3.15.2- Growth and Maintenance

Immediately following the initial deployment, the project team must conduct review sessions. Here are the major review tasks:

  • Review the testing process and suggest recommendations.
  • Review the goals and accomplishments of the pilots.
  • Review the methods used in the initial training sessions.
  • Verify the results of the initial deployment, matching these with user expectations.

3.15.3- Monitoring the Data Warehouse

Monitoring the data warehouse is comparable to what happens in an OLTP system, except for one huge difference. Monitoring an OLTP system shrinks in comparison with the monitoring activity in a data warehouse environment. The scope of the monitoring activity in the data warehouse extends over many features and functions. Unless data warehouse monitoring takes place in a formal manner, the desired results cannot be achieved. The results of the monitoring give us the data needed to plan for growth and to improve performance.

The following figure presents the data warehouse monitoring activity and its usefulness. As we can see, the statistics serve as the life-blood of the monitoring activity. They feed into growth planning and fine-tuning of the data warehouse.