Multivariate analysis

CHAPTER-XII: INTRODUCTION TO MULTIVARIATE ANALYSIS

Summary

Multivariate analysis is a tool for a decision maker (be it a manager or a researcher) in the process of decision-making by means of the data on hand. All research activity requires analysis of raw data. The various techniques of multivariate analysis are systematic and are used by many experts; they are also modified from time to time in the light of new applications and the recommendations of experts. Hence, as a statistical tool, multivariate analysis can enable decision-makers to overcome the uncertainties associated with decision situations.

12.1 FACTOR ANALYSIS:

There are several methods in statistics to find out possible relationships between and among variables. One such method is factor analysis. Early in the twentieth century Karl Pearson developed the system of principal component analysis, and later H. Hotelling widely applied this method in psychology. It was Charles Spearman who, in 1904, introduced the theory that the interrelationship of all the variables involved (i.e., measures of intellectual performance at that time) could be accounted for by two factors: a single underlying general ability factor, plus a factor specific to each variable. A few years later the two-factor analysis discussed by Spearman was generalized, principally by the psychologists Garnett and Thurstone, to multiple factor analysis, capable of resolving correlation matrices into as many common factors implicit in the variables as may be necessary to account for all the observed correlations. What, then, is factor analysis?

Factor analysis is a branch of multivariate statistical analysis which is concerned with the internal relationships of a set of variables. It is a synthesis of variables that identifies the distinct factors at work among the variables. These new entities are themselves variables, hypothetical variables, which are fewer in number than the raw variables. The purpose of factor analysis is therefore to find these common factors, which reveal the real structure hidden in the multiplicity of the variables. In other words, it not only explains the observed relationships among a number of variables, but also explains the underlying influences and supports the development of classificatory schemes. It is thus a methodology for classifying manifestations or variables.

Main Characteristics:

Factor analysis attempts to determine the quantitative relationship between the variables where the relationships are due to certain general causal factors. The specific characteristics of the factor analysis are:

i) Factor analysis is concerned with how much relationship exists, while other analyses indicate only whether a significant relationship exists.

ii) Factor analysis is a fine quantitative tool. It explains both what influences are at work and how many of them are in action.

iii) Factor analysis yields more extensive evidence on other kinds of interaction, such as the extent to which influences assist or oppose one another in their effects.

This shows that a factor analysis “without hypothesis formation” can arrive at highly structured answers regarding: (i) the number of factors at work, (ii) the nature of the factors, (iii) their correlation, and (iv) their contribution to the variance of particular variables or of most of the variables.

The Derivation:

Suppose that there are ‘n' variables which need to be measured for each of the ‘p' subjects. The general factor model can then be written as

X_i = a_{i1} F_1 + a_{i2} F_2 + ... + a_{im} F_m + e_i,    i = 1, 2, ..., n

where the F_j are the m common factors, the e_i are the n specific errors, and the a_{ij} are the factor loadings. The common factors have mean zero and standard deviation one, and are generally assumed to be independent of each other. The e_i are called stochastic disturbance terms and are also assumed to be independent. The F_j and e_i are further assumed to have no mutual relationship (they are uncorrelated with each other).

When the above generalization is converted into matrix form, it becomes

X = A F + e

The above equation is equivalent to

cov(X) = A cov(F) A' + cov(e) = A A' + Ψ

since the standardized factors have cov(F) = I. Thus R = A A' + Ψ, where R is the correlation matrix of X. Since the errors are assumed to be independent, cov(e) = Ψ is a diagonal matrix.

Thus for each variable we have Var(X_i) = a_{i1}^2 + a_{i2}^2 + ... + a_{im}^2 + ψ_i. The sum of the squared factor loadings, h_i^2 = Σ_j a_{ij}^2, is called the communality of X_i (the variance it has in common with the other variables through the common factors). The ith error variance ψ_i is called the specificity of X_i (the variance that is specific to variable i).

Thus factor analysis can be described as a method of investigating whether a number of variables of interest are linearly related to a smaller number of unobservable factors.
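
To make the model concrete, the following is a minimal sketch, not part of the original chapter, of estimating the loadings a_ij, the communalities and the specificities in Python; it assumes a standardized data matrix X (here randomly generated purely as a placeholder) and uses scikit-learn's FactorAnalysis estimator as one possible implementation.

```python
# A minimal sketch of fitting the factor model X = AF + e, assuming a
# standardized data matrix X (rows = subjects, columns = variables).
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_raw = rng.normal(size=(200, 6))            # placeholder data: 200 subjects, 6 variables
X = StandardScaler().fit_transform(X_raw)    # standardize so loadings relate to correlations

m = 2                                        # number of common factors assumed
fa = FactorAnalysis(n_components=m)
fa.fit(X)

loadings = fa.components_.T                  # a_ij: n variables x m factors
communalities = (loadings ** 2).sum(axis=1)  # sum of squared loadings per variable
specificities = fa.noise_variance_           # specific (error) variances

print(loadings)
print(communalities)
print(specificities)
```

With real survey data one would replace X_raw by the observed variables; the loadings can then be inspected (and, if desired, rotated) to interpret the common factors.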

The following is an example of the practical use of factor analysis.

Box-12.1: Use of Factor analysis in studies

Title- ‘Customers Expectations towards Car in an Unorganised Environment - A Factor Analysis'

By- Chimum Kumar Nath

In- African Journal of Business Management, Vol. 2(4), April 2009.

Introduction:

Car manufacturing companies today are facing new challenges in serving the ever-changing customer attitude towards the purchase of new generation cars. New car buyers may be grouped or categorized on the basis of the relative emphasis they place on economy, comfort, performance, convenience, and luxury. ...Yet products are used in different ways and under various conditions to meet differing buyer needs. This might result in creating different segments in the target market.

Objective:

In the market there are various types of cars available with different specifications to cater to the needs of customers. Factor analysis allows us to look at groups of customer responses that tend to be related to each other and to estimate what underlying reasons might cause these variables to be highly correlated with each other. The basic objective of this paper is to make a correlation analysis of the responses of the customers regarding various attribute ratings of a new generation car. Further, the paper seeks to determine the underlying benefits customers are looking for in a new generation car by classifying them according to the relative importance they place on the attribute ratings, using the method of principal component analysis.

Methodology:

The sample data consist of 75 respondents who own a car. A small town of Assam has been chosen as the sample area. The respondents were asked to indicate their degree of agreement with some statements (V1 to V6) using a seven-point Likert scale (strongly disagree-1, strongly agree-7), where V1 stands for ‘a new generation car should be fuel efficient', V2 for ‘a new generation car should be spacious and comfortable', V3 for ‘a new generation car should be available on easy finance', V4 for ‘a new generation car should enhance the prestige of the owner'…

Epilogue:

In short, the study concludes that customers purchase a new generation car on several considerations, and the author groups these considerations under two major labels: (i) the economic benefit factor and (ii) the social benefit factor.

Note- Readers who are interested in the details of the above research publication are requested to refer to the citation given above.

12.2 DISCRIMINANT ANALYSIS:

The credit for introducing the concept of discriminant analysis goes to R. A. Fisher, who in 1936 was investigating certain problems of physical anthropology and biology. In the social sciences the analysis was first applied by M. M. Tatsuoka and D. V. Tiedeman in 1954 for the psychological and educational testing of children. Discriminant analysis is a useful tool for economists and business executives. With the help of this technique one can ascertain the economic differences between two regions or two states, or among states, for evolving a suitable strategy for development. Market behaviour among different groups, the nature of consumption among different groups of consumers, and similar questions can also be predicted with the help of this technique.

What is Discriminant Analysis?

Discriminant analysis is a statistical technique for studying the differences between two or more groups of objects with respect to several variables simultaneously. It is used to classify an observation into one of several a priori groupings dependent upon the observation's individual characteristics. It is generally used to classify and/or make predictions in problems where the dependent variable appears in qualitative form. The basic assumption is that the groups differ on several characteristic variables. These variables are called discriminant variables, and they can be measured at the ratio or interval level. Discriminant analysis helps a researcher in two ways:

1) It establishes the degree of differences between and among groups.

2) It provides a means to classify any case into the group which it most closely resembles.

In other words, discriminant analysis is used for the interpretation of group differences and also for the classification of a particular case into a group. Put simply, discriminant analysis establishes relationships among groups in terms of discriminating variables. Hence assigning an observation X of unknown origin to one of two or more distinct groups on the basis of the value of the observation is the main task of discriminant analysis.

In studying the differences between two or more groups of objects, the two main steps in this analysis are:

1) First, to find out the main variables which are responsible for distinguishing the groups.

2) Second, to combine the discriminant variables into an equation that gives the magnitude of the differences. This equation is known as the discriminant function: a mathematical equation which combines the group characteristics so as to identify a group.

Main Use of the Analysis:

The two main uses of discriminant analysis are:

1) Interpretation of group differences:

On the basis of certain characteristics of a group, the groups are discriminated. The main enquiry is to find out the magnitude of the differences. The characteristics used to distinguish among the groups are called the “discriminant variables”.

2) Classification of cases into groups:

On the basis of the discriminant functions of the groups, and utilizing the discriminating variables, the cases are classified into groups.

Steps of executing discriminant analysis:

Let us derive the various steps of executing a problem in discriminant analysis. For better understanding, the data set available to a researcher can be arranged as two matrices, one for each class.

After defining the data set, the next step is to calculate the mean value of each data set individually and also the combined mean of the two data sets. Let μ1 be the mean value of the first data set and μ2 the mean of the second data set. The combined mean can then be derived by using the formula

μ = p1 μ1 + p2 μ2

where p1 and p2 are the a priori probabilities of the data classes. For example, since we have two data classes here, the probability of each is assumed to be 0.5.

Then the scatter of the classes needs to be measured. In discriminant analysis, within-class and between-class scatters are formulated to express class separability.

The within-class scatter (Sw) is the expected covariance of each of the classes. Hence it can be measured by using the formula

Sw = Σ_j p_j Cov_j

Thus if we use this formula in our example, it becomes

Sw = 0.5 Cov1 + 0.5 Cov2

It is given that all the covariance matrices are symmetric. In the above formula Cov1 and Cov2 are the covariance matrices of data set 1 and data set 2.

The next step is to measure the covariance matrix of each class. This can be done by using the formula

Cov_j = (x_j − μ_j)(x_j − μ_j)'

averaged over the observations x_j belonging to class j, where μ_j is the mean of class j.

The between-class scatter can be computed by the following formula:

Sb = Σ_j (μ_j − μ)(μ_j − μ)'

Here Sb can be taken as the covariance of the data set whose members are the mean vectors of each class. In discriminant analysis the data set can be transformed, and test vectors can be classified in the transformed space, by two well-known approaches. The class-dependent transformation is one such approach; it involves maximizing the ratio of between-class variance to within-class variance, so that adequate class separability is obtained. The optimizing criterion in this case is the ratio of the between-class scatter to the within-class scatter. The class-independent transformation, on the other hand, involves maximizing the ratio of overall variance to within-class variance. This approach uses only one optimizing criterion to transform the data sets and hence all the data points, irrespective of their class identity, are transformed by the same transform.
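
As an illustrative sketch only (the chapter's own example matrices are not reproduced here), the computations above can be carried out with NumPy as follows; the two small two-variable data sets and the equal priors p1 = p2 = 0.5 are assumptions made for the example.

```python
# A minimal NumPy sketch of the scatter computations described above, using two
# hypothetical two-variable data sets and priors p1 = p2 = 0.5 as in the text.
import numpy as np

set1 = np.array([[4.0, 2.0], [2.0, 4.0], [2.0, 3.0], [3.0, 6.0], [4.0, 4.0]])
set2 = np.array([[9.0, 10.0], [6.0, 8.0], [9.0, 5.0], [8.0, 7.0], [10.0, 8.0]])
p1 = p2 = 0.5

mu1, mu2 = set1.mean(axis=0), set2.mean(axis=0)
mu = p1 * mu1 + p2 * mu2                      # combined mean

cov1 = np.cov(set1, rowvar=False, bias=True)  # class covariance matrices
cov2 = np.cov(set2, rowvar=False, bias=True)

Sw = p1 * cov1 + p2 * cov2                    # within-class scatter
Sb = (p1 * np.outer(mu1 - mu, mu1 - mu)       # between-class scatter
      + p2 * np.outer(mu2 - mu, mu2 - mu))

# For two classes, the direction maximizing between-class to within-class
# scatter (the Fisher criterion) is proportional to Sw^{-1}(mu1 - mu2).
w = np.linalg.solve(Sw, mu1 - mu2)
print("discriminant direction:", w)
```

New observations can then be projected onto w and assigned to whichever class mean projection they fall nearer.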

Limitations:

The discriminant analysis operates under the following conditions:

1) Each group is drawn from a population which has a multivariate normal distribution. This exists when each variable has a normal distribution about fixed values of all the others. It enables one to compute precise tests of significance and probabilities of group membership.

2) The population covariance matrices are assumed to be equal for each group. Accordingly the linear discriminant function is a simple linear combination of the discriminating variables.

3) When two variables are perfectly correlated, both cannot be used at the same time in the discriminant analysis. Hence only variables which are not perfectly correlated are taken as discriminant variables.

4) When the variables are selected for the discriminant analysis one should see that no variable is a linear combination of the other discriminant variables.

When the variables are analyzed under discriminant analysis to identify their groups, it should be noted that the groups must not be dependent on the discriminant variables; if they are, the analysis becomes a multiple regression.

12.3 CLUSTER ANALYSIS:

Classification of data is an important aspect of research. Data have to be classified according to the needs of the research design for analysis. The study of the science of classification is of recent origin and is known by various names such as typology and taxonomy. Now the science of classification is termed cluster analysis. Indians were the first analysts of classification. Muni Bachhayana, centuries before Christ, was the earliest user of cluster analysis in his book “Kama Sutra”, or principles of love. He classified men and women into four clusters on the basis of their physical structure, mental horizon and social behaviour in order to analyse the love-life of human beings. He classified men into four classes: Aswa (horse), Brisha (bull), Mriga (deer) and Sasa (hare), in descending order of strength and beauty. In classifying females Bachhayana did not compare them with animals, as in the case of males, except for the last class. The four clusters of females are: Padmini (lotus, the symbol of tenderness, beauty and fragrance), Chitrini (beautiful as if a painting), Sankhini (smooth and white as a conch shell, as well as sweet-voiced) and Hastini (female elephant).

In modern times J. Czekanowski, a Polish anthropologist, formalized the principles of classification for cluster analysis. In the 1930s psychologists, especially J. Zubin and R. Tryon, further developed the procedures. The latter's publication of “Cluster Analysis” in 1939 firmly established cluster analysis as an important tool for the classification of entities. Cluster analysis has been widely utilized in the biological and social sciences from 1960 onwards, especially after the publication of two important works: “Principles of Numerical Taxonomy” by R. Sokal and F. Rohlf in 1963 and “Pattern Clustering by Multivariate Mixture Analysis” by J. M. Wolfe in 1970.

Cluster analysis is a technique for grouping variables, individuals and entities. Once the variables are classified on certain characteristics, the work of the researcher becomes easier for further analysis, since only a smaller sample space need be considered. Likewise, once the species of an entity is classified, it is easier to analyse it, since the characteristics of the class to which it belongs are already known. Cluster analysis is regarded as an alternative to factor and principal component analysis for the reduction of data in a research design.

A cluster, according to B. Everitt, is a “continuous region of space containing a relatively high density of points, separated from other such regions by regions containing a relatively low density of points.” In other words, a cluster is a homogeneous group of cases, variables or entities. Identification of the clusters inherent in a data set is called cluster analysis.

The most important use of cluster analysis is the development of a typology or classification. In economics, especially in applied economics, where analysis is made on data collected from primary sources, classification is the first and one of the most important aspects of research. Classification of households on the basis of socio-economic-cultural parameters is a good basis for analyzing consumer behaviour. Hence cluster analysis is regarded as the first step for any empirical study in economics.

Investigation of useful conceptual schemes for grouping variables or entities is another use of cluster analysis. In different disciplines different data sets require special procedures for the classification of the variables according to the needs of the researchers. Accordingly a researcher has to explore different methods of classification or clustering, which can then be applied in other cases of research in similar situations.

Thirdly, hypothesis generation and hypothesis testing is another field of application of cluster analysis. Analysis of data to generate hypotheses with regard to the nature of classification is research in a specific field. Testing can also be undertaken for certain hypotheses in order to determine whether a grouping already defined through other procedures is present in the data set.

The importance of cluster analysis has increased in recent years in inorganic chemistry, the atomic theory of matter and other sciences, because in these fields classification constitutes a major area of research. In the social sciences cluster analysis is now widely used because of the availability of high-speed computers which can handle the large matrices needed to analyse field data.

Steps in Cluster Analysis:

There are four basic steps through which a cluster analyst can work to classify entities.

(1) Selection:

The first stage of the analysis is to select a sample of suitable size from the problem population on which the cluster analysis will be made.

(2) Definition of variables:

The variables in the data set have to be defined clearly so that the entities in the sample can be measured.

(3) Computation of similarities:

In the third stage the entities have to be measured for their closeness in terms of similarity or dissimilarity measures. Cluster analysis provides separate procedures for binary data and for other types of data, and a researcher has to select one of these methods for the cluster analysis.

(4) Groupings:

Grouping is the final stage of the cluster analysis. There are several methods of grouping in cluster analysis. One popular method adopted by researchers is to construct some means of identifying the number of clusters in the entities.

Methods of clustering:

There are two types of clustering techniques generally followed in research work. They are (i) hierarchical clustering, and (ii) k-means clustering.

(i). Hierarchical clustering:

Hierarchical clustering is one of the most straightforward methods. It is again divided into two types, namely (a) agglomerative clustering and (b) divisive clustering.

a. Agglomerative clustering-

Agglomerative clustering begins by considering every observation to be a cluster unto itself. At successive steps, similar clusters of observations are merged; the clustering ends with all observations in one cluster, which by itself is of little use. Thus this type of clustering starts by considering each observation as a cluster. At the next step, the two observations which have the smallest value of the distance measure are joined into a single cluster. Then, if appropriate, a third observation is added to the cluster that already contains two observations, or two other observations are merged to form a new cluster. At every step, either individual observations are added to existing clusters or two existing clusters are combined. There are two common procedures for linking the observations. They are:

1. Single Linkage Method-

This method clusters nearest neighbours one after another. The concept of the single linkage method will become clear from the following matrix. In the matrix M1, each entry gives the distance between two observations. This measure of distance between the observations is mapped into a matrix of distances between the clusters. The lowest distance between two observations is selected first, and these elements are then brought together, or fused.

M1 =

         1     2     3     4     5
   1     0     2     6    10     9
   2    --     0     5     9     8
   3    --    --     0     4     5
   4    --    --    --     0     3
   5    --    --    --    --     0

In the above matrix, 2 is the lowest distance, between the entries 1 and 2; hence these two observations are fused first. Then, on the basis of the nearest-neighbour principle, the distances of the cluster (1 2) from the rest of the observations are computed, and M1 is reduced to the matrix M2 with the fresh values

m(1 2) 3 = minimum (m13, m23) = m23 = 5

m(1 2) 4 = minimum (m14, m24) = m24 = 9

m(1 2) 5 = minimum (m15, m25) = m25 = 8

Putting these values in their places, the new matrix M2 will be

M2 =

           (1 2)     3     4     5
   (1 2)      0      5     9     8
   3         --      0     4     5
   4         --     --     0     3
   5         --     --    --     0

Let us now search for the smallest entry in the matrix M2. The smallest entry is 3, the distance between observations 4 and 5. Hence these two observations are fused next. The entries for the new matrix can be calculated as

m(1 2) 3 = minimum (m13, m23) = m23 = 5

m3 (4 5) = minimum (m34, m35) = m34 = 4

m(1 2) (4 5) = minimum (m14, m15, m24, m25) = m25 = 8

The new matrix M3 will be as follows.

M3 =

            (1 2)     3    (4 5)
   (1 2)       0      5      8
   3          --      0      4
   (4 5)      --     --      0

In the above matrix the lowest entry is 4, between observation 3 and the cluster (4 5); these therefore need to be fused. The final matrix M4 can be drawn as follows:

M4 =

             (1 2)   (3 4 5)
   (1 2)        0         5
   (3 4 5)     --         0

The following table shows the process of fusion.

Table-12.1: Coefficients at different fusions

   Observations fused at different stages      Coefficient (distance at fusion)
   1 - 2                                        2
   4 - 5                                        3
   3 - (4 5)                                    4
   (1 2) - (3 4 5)                              5
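
The single-linkage fusions worked out above can be reproduced in Python; the following is a minimal sketch using SciPy's hierarchical clustering routines on the distance matrix M1 of the example.

```python
# A minimal sketch of single-linkage clustering on the distance matrix M1 above.
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# Symmetric distance matrix M1 for the five observations of the example.
M1 = np.array([
    [0,  2, 6, 10, 9],
    [2,  0, 5,  9, 8],
    [6,  5, 0,  4, 5],
    [10, 9, 4,  0, 3],
    [9,  8, 5,  3, 0],
], dtype=float)

condensed = squareform(M1)            # linkage() expects a condensed distance vector

# Single linkage: the distance between clusters is the minimum pairwise distance.
Z = linkage(condensed, method='single')
print(Z)   # each row: cluster i, cluster j, fusion distance, size of merged cluster
# The fusion distances 2, 3, 4, 5 reproduce the coefficients in Table-12.1.
```

Changing method='single' to method='complete' gives the complete linkage variant mentioned below.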

In the complete linkage method, fusion of the observations is done on the basis of the maximum value of the distances, though the process again starts from the smallest distance (readers who are interested in more detailed knowledge of these methods are advised to refer to any standard book on multivariate analysis).

b. Divisive clustering-

Divisive clustering, on the other hand, starts with all observations in a single cluster and ends up with every observation in an individual cluster.

ii. k-means clustering:

In k-means clustering, k stands for the number of clusters one wants to form, which has to be specified in advance. The method does not require computation of all possible distances between the observations. One starts with an initial set of cluster means and classifies each observation according to its distance to the nearest centre. The next step is to compute the cluster means again, using the observations that have been assigned to each cluster, and then to reclassify all the observations on the basis of the new set of means. These steps of recomputing the means and reclassifying the observations are continued until the cluster means change very little between successive steps. Finally, the means of the clusters are calculated once more and the observations are assigned to their permanent clusters.
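
A minimal sketch of this iterative procedure is given below; the two-dimensional data are randomly generated placeholders, and scikit-learn's KMeans is used as one possible implementation.

```python
# A minimal sketch of k-means clustering, assuming a numeric data matrix X
# (rows = observations). The number of clusters k must be chosen in advance.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=0.0, scale=1.0, size=(50, 2)),
               rng.normal(loc=5.0, scale=1.0, size=(50, 2))])  # placeholder data

k = 2
km = KMeans(n_clusters=k, n_init=10, random_state=0)
labels = km.fit_predict(X)      # iteratively reassigns points and recomputes means
print(km.cluster_centers_)      # final cluster means
print(labels[:10])              # cluster membership of the first ten observations
```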

12.4 DIMENSIONAL ANALYSIS:

The concept of dimensional analysis was developed in 1822 by J. B. J. Fourier, a French mathematician. W. Stanley Jevons was the first social scientist to use this concept, in 1879, in his research in economics. Later Maurice Allais, a French economist, presented a systematic theory of dimensions in economics in 1943.

The two basic functions of measurement are that (i) it enables us to compare two different objects and (ii) it helps us to find out the exact difference between two or more measured units. For example, when two individuals, Mr X and Mr Y, are compared, we may say that Mr Y is taller than Mr X; here the unit of measurement is height.

Thus measurements are expressed in terms of a unit of measurement. The common units of measurement are grams, litres, metres, rupees, dollars, etc. Any meaningful quantity is a number multiplied by a unit of measurement. For example, ‘2 kilograms of ghee' means that 2 is the pure number and kilogram is the unit of measurement. Dimensional analysis can be carried out by using a number of mathematical equations (readers who wish to study dimensional analysis in detail are requested to follow a standard book on the specific subject).
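
As a small illustration (not from the original text) of the idea that a meaningful quantity is a pure number multiplied by a unit, the hypothetical Quantity helper below keeps the number and the unit together and refuses to add quantities whose units differ.

```python
# A small sketch of "quantity = number x unit"; Quantity is a hypothetical helper
# for illustration only, not a standard library class.
from dataclasses import dataclass

@dataclass
class Quantity:
    value: float
    unit: str

    def __add__(self, other):
        if self.unit != other.unit:          # dimensional consistency check
            raise ValueError("cannot add quantities with different units")
        return Quantity(self.value + other.value, self.unit)

ghee = Quantity(2.0, "kilogram")             # "2 kilograms of ghee"
more_ghee = Quantity(0.5, "kilogram")
print(ghee + more_ghee)                      # Quantity(value=2.5, unit='kilogram')
# ghee + Quantity(3.0, "litre")              # would raise: units do not match
```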

12.5 META ANALYSIS:

Meta analysis as a systematic method of multivariate analysis is not of recent origin; rather, it has existed in the literature since 1931. Jay Lash for the first time used this concept in research for conducting some agricultural analysis. Later on a number of other researchers, such as Samuel Stouffer (1946), Karl Pearson (1933) and Ronald Fisher (1932), used this concept in their different analyses.

‘Meta' is a Greek word which implies ‘over'. Hence meta analysis refers to an overall analysis. The basic rationale of this analysis is that contemporary research studies are more technical in nature and use more statistical techniques; the integration of the statistical results of such studies is called ‘meta-analysis'. In this analysis the studies already conducted are taken up for analysis. Thus it is called the analysis of analyses. Here the summaries and the findings of existing analyses are studied, using some statistical techniques, to test the reliability of the studies.

A method has been evolved to assess the quality of the studies, in which each study has to be coded for the quality of its methodology. The process may include:

1. Careful investigation is needed to go through the methodology followed in the existing research study

2. Make a comparison between the methodology chosen and the analysis of results carried out. An index can be developed by using the Spearman-Brown formula:

R = n r / [1 + (n − 1) r]

where n is the number of jury members and r is the mean reliability (mean inter-correlation) of all the jury members. For example, if the number of judges is 5, then 10 correlation coefficients are to be computed and the mean of these correlations is to be used. Thus for n judges n(n−1)/2 correlations have to be estimated from the rating scales of the judges.
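
The calculation described above can be sketched in Python as follows; the judges' ratings are hypothetical placeholder data, and the Spearman-Brown formula is applied to the mean of the n(n−1)/2 pairwise correlations.

```python
# A minimal sketch of the Spearman-Brown computation described above, assuming
# hypothetical ratings given by 5 judges to 5 studies being coded.
import numpy as np
from itertools import combinations

# ratings: rows = studies being coded, columns = judges (hypothetical data)
ratings = np.array([
    [4, 5, 4, 3, 4],
    [2, 2, 3, 2, 1],
    [5, 4, 5, 5, 4],
    [3, 3, 2, 3, 3],
    [1, 2, 1, 2, 2],
], dtype=float)

n_judges = ratings.shape[1]

# n(n-1)/2 pairwise correlations between the judges (10 pairs for 5 judges)
pair_corrs = [np.corrcoef(ratings[:, i], ratings[:, j])[0, 1]
              for i, j in combinations(range(n_judges), 2)]
r_bar = np.mean(pair_corrs)                       # mean inter-judge reliability

# Spearman-Brown: reliability of the whole jury of n judges
R = (n_judges * r_bar) / (1 + (n_judges - 1) * r_bar)
print(f"mean r = {r_bar:.3f}, Spearman-Brown R = {R:.3f}")
```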

The Spearman-Brown formula discussed above is suitable when the number of jury members is small. Difficulty arises when there are more jury members: if the above formula is used, it unnecessarily increases the researcher's task of computing the correlations.

There exists another formula suggested by Guilford as

Hence the mean reliability is given by the formula

Thus finding the reliability of a study by using the Spearman-Brown formula is easier, but it has its limitations. On the other hand, if one uses the second approach, it requires much more calculation, but the chance of obtaining a reliable result is greater.

12.6 CONJOINT ANALYSIS:

Conjoint analysis, another technique of multivariate analysis, has received a great deal of attention from both academicians and practitioners as a means of determining respondents' preference for a product, service, concept or idea. Conjoint analysis is a multivariate technique used specifically to understand human psychological perceptions. It is nowadays widely used in management research to understand consumer perceptions of products, services, concepts or ideas. It assumes that consumers evaluate a purchase in terms of the price they paid for the product, service, idea or concept and the utility they have derived from it. As a measure of psychological attributes, it measures psychological judgments (i.e., respondents' preferences, acceptable thresholds, etc.) or perceived similarities or differences between the choices of alternatives available.

Market Vision Research, USA, has described conjoint analysis as a technique that shares the basic tenet of decomposing products into their component parts to analyze how decisions are made and then to predict how decisions will be made in the future. Hence conjoint analysis is used to understand the importance of different product components or product features, as well as to determine how decisions are likely to be influenced by the inclusion, exclusion or degree of a feature.

It holds that a product, service or idea has different attributes. By an attribute is meant a characteristic, a property, a quality, a specification or an aspect. A respondent's decision while purchasing a good is based not on just one attribute but on a combination of several attributes. Conjoint analysis is a relatively recent (1970s) decision-making tool, particularly in marketing research. As already discussed, the technique has its roots in decision making and information processing in the field of psychometrics. Four approaches are generally discussed by researchers:

Trade-off matrices
Full profile card sort
Hybrid conjoint and
Discrete choice modeling

1. Trade-off matrices:

The trade-off matrix represents combinations of the levels of two attributes. Respondents completing this task fill in their preferences as a rank order over all the cells of the matrix. For attributes with a clear a priori order of preference, the ranks of two of the cells are always known: for example, the combination that offers the most channels at the lowest price is the most preferred combination, and the one with the fewest channels at the highest price is the least preferred combination.

2. Full profile card sort:

This approach of conjoint analysis is regarded as the traditional one. Respondents evaluate several product concepts, one at a time, each defined on all attributes simultaneously. The concepts used in the research are printed on white sheets called ‘cards'. Each card carries one level of each attribute, and respondents are asked either to rate or to rank each concept printed on a card. The process of sorting these profiles into stacks caused this approach to be referred to as ‘card sort'.

3. Hybrid conjoint

The hybrid method of conjoint analysis is best suited to problems having six or more attributes and includes respondents' self-explicated utilities. Here respondents are directly asked to indicate their preferences for each level of the attributes, and this information is included in the part-worth estimates. Respondents are first asked to indicate a rank-order preference for the levels within each attribute and then the importance of each attribute. Respondents are then asked to evaluate a series of paired-comparison questions. In the paired comparisons, respondents are presented with two product concepts and asked to indicate their preference using a rating scale, with the middle point indicating that both concepts are equally liked.

4. Discrete choice modeling

This approach is choice-based conjoint analysis, otherwise called discrete choice modeling. The choice-based approach presents multiple concepts to respondents and asks them to choose among them. Since respondents simply pick an alternative, this technique of conjoint analysis is somewhat easier for them than the other available techniques.

Thus conjoint analysis, as a technique of multivariate analysis, is able to infer the ‘true' value structure that influences consumer decision making. It is sometimes referred to as ‘trade-off' analysis because respondents in such a study are forced to make trade-offs between product features.
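
As an illustrative sketch only (the chapter gives no worked data), part-worth utilities in a full-profile, rating-based conjoint study can be estimated by dummy-coded least squares; the attributes, levels and ratings below are hypothetical.

```python
# A minimal sketch of estimating part-worth utilities from full-profile ratings
# by dummy-coded least squares. The two attributes (fuel, price), their levels
# and the respondent's ratings are hypothetical examples.
import numpy as np
import pandas as pd

profiles = pd.DataFrame({
    "fuel":   ["high", "high", "low", "low", "high", "low"],
    "price":  ["low", "high", "low", "high", "medium", "medium"],
    "rating": [9, 6, 5, 2, 7, 4],    # one respondent's ratings of each profile card
})

# Dummy-code the attribute levels (drop_first avoids a redundant column per attribute).
X = pd.get_dummies(profiles[["fuel", "price"]], drop_first=True).astype(float)
X.insert(0, "intercept", 1.0)
y = profiles["rating"].to_numpy(dtype=float)

# Least-squares part-worths: each coefficient is the utility of a level relative
# to the dropped (baseline) level of its attribute.
beta, *_ = np.linalg.lstsq(X.to_numpy(), y, rcond=None)
for name, b in zip(X.columns, beta):
    print(f"{name:12s} {b: .3f}")
```

With real data one would pool or segment such estimates across respondents; the relative sizes of the part-worths indicate which features drive the trade-offs.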

CONCLUSION:

Several multivariate techniques have been discussed above. Each technique has its own merits and demerits. It is, however, a considerable task to select an appropriate multivariate technique based on the nature of the data available. If proper care is taken of the nature of the study, it is possible to arrive at better alternatives and more reliable solutions, thereby avoiding loopholes.

SUMMARY:

3. Multivariate analysis is a tool for a decision maker (be it a manager or a researcher) in the process of decision-making by means of the data on hand. All research activity requires analysis of raw data.

4. Cluster analysis is a collection of statistical methods which identifies groups of samples that behave similarly or show similar characteristics.

5. The simplest mechanism is to partition the samples using measurements that capture similarity or distance between the samples. In marketing research studies, cluster analysis is often also referred to as a segmentation method or market segmentation.

6. Discriminant analysis is used to classify an observation into one of several a priori groupings dependent upon the observation's individual characteristics. It is generally used to classify and/or make predictions in problems where the dependent variable appears in qualitative form.

7. Cluster analysis is a technique for grouping variables, individuals and entities. Once the variables are classified on certain characteristics, the work of the researcher becomes easier for further analysis, since only a smaller sample space need be considered.

8. Meta analysis is otherwise called the analysis of analyses. Here the summaries and the findings of existing analyses are studied using statistical techniques to test the reliability of the studies.

9. Conjoint analysis is used to understand the importance of different product components or product features, as well as to determine how decisions are likely to be influenced by the inclusion, exclusion or degree of that feature.

10. Selecting an appropriate multivariate technique based on the nature of data available is a task full of difficulty.

IMPORTANT QUESTIONS:

1. Is it always necessary to use multivariate analysis on survey data? What are the prerequisites for doing multivariate analysis involving a large amount of data?

2. Explain how factor analysis is useful in management research. Are there any independent and dependent variables in factor analysis? Justify your answer.

3. Formulate a marketing problem where factor analysis could be useful.

4. Is factor analysis applicable in solving a social science problem? If yes, explain the concept with proper justification.

5. What is the major difference between hierarchical clustering methods and the k- means clustering method?

6. A cosmetic manufacturer wants to know the current status of its product in the market, so that it can decide whether to position a new brand and whether to reposition the existing brand. For this, plan a study and decide on your target segment.