Data Analysis and Findings

5.1 Introduction

The primary objective of this chapter is to present the contributions of the initial exploratory study and the pilot study, together with a detailed analysis of the data collected at the quantitative explanatory stage via the questionnaire. It specifically presents key results from the survey response analysis, the respondents and their demographic profiles, data screening and preliminary analysis, measures of validity and reliability, path analysis, and detailed results of the hypothesis testing.

5.2 Initial Exploratory Study and Results

The sixteen contact center executives interviewed at the exploratory phase have been argued to be a sufficient number for an exploratory study (), and the interviews were analyzed following the approach provided by Yin (2003). Appendix 6 contains the list of questions used to explore CRM applications within the contact center industry. Importantly, the overall results support the proposed CRM application-caller satisfaction model: the majority of the executives explicitly agreed that CRM applications within the contact center industry have revolutionized their operation processes. Below are a few quotes from managers that exemplify the impact of CRM applications on contact center operational efficiency and caller satisfaction:

Here in the contact center industry, CRM applications have been of greater assistance to our operation processes specifically in quantifying and forecasting our objectives and the expected results such as (Average Handling Time, Average Abandonment Rate, First Call Resolution, Caller Satisfaction etc.)

The items in your model are all familiar to us because we use them on daily basis, although we may not call them the same name as in your model. Very important among what you should know is that here in the contact center we are measuring many operational variables on daily, weekly, fortnightly, monthly, quarterly and yearly basis. We are measuring them to determine our operational efficiencies both internally and externally.

Yes CRM technologies such as workforce management, interactive voice response (IVR), predictive dialer, voice over internet protocol (VOIP), automatic call distributor (ACD) etc. have all been assisting our operation processes in achieving the desired efficiency.

Within the contact center, I can certainly tell you that CRM applications such as data base management, online connections between the frontline and back office and constant training of agents on the needs of customers are strong inputs to the achievement of our caller satisfaction.

To some extent it does, but note that there are other factors outside the contact center that majorly influence caller satisfaction, such as product quality, price and management policies. All these are external to contact center operations, but within our operations in the contact center I agreed that your proposed model extensively captured the determinants of call satisfaction.

The following sections present a detailed discussion of the analysis and results from the quantitative stage of the research.

5.3 Analysis of Survey Response

5.3.1 Response Rate

In compliance with the data collection plan, 400 questionnaires were distributed to contact center managers in Malaysia via mail and web survey, a data collection method consistent with the existing industry literature such as Yim et al. (2005). Of these, 173 questionnaires were returned, of which 5 were discarded because they were incomplete. This left 168 usable responses for further analysis, giving an overall response rate of 43.3% for this study.

The sample size obtained in this study appears adequate, and the response rate is comparable to many contact center studies that have used managers and senior executives as the study sample, whose response rates were between 15 and 49 percent (Yueh et al., 2010; Dean, 2009; Richard, 2007; Roland and Werner, 2005; Sin et al., 2005; Yim et al., 2005).

Of the 173 respondents, 103 answered the mail questionnaire while the remaining 70 responded through the Web. To rule out multiple responses from the same company, the researcher compared the online and mail respondents on key variables such as annual revenue, experience and number of employees; the results show that the mail respondents differ from the online respondents.

5.3.2 Test of Non-Response Bias

Evidence from the existing literature has established that non-respondents sometimes differ systematically from respondents in attitudes, behaviors, personality, motivation, demographics and/or psychographics, any or all of which might affect the results of a study (Malhotra, Hall, Shaw, & Oppenheim, 2006). In this study, non-response bias was tested using t-tests to compare the mean, standard deviation and standard error of the mean of early and late responses on variables such as gender, industry, revenue, number of employees, experience, qualification and age. This follows Churchill and Brown (2004) and Malhotra et al. (2006), who both argued empirically that late respondents can be used in place of non-respondents, primarily because they probably would not have responded had they not been given extensive follow-up.

Malhotra et al. (2006) further argued that non-respondents can be assumed to have characteristics similar to those of late respondents. To standardize this procedure, this study divided the sample into two groups: early responses (those who returned the questionnaires within two weeks of distribution) and late responses (those who returned the questionnaires more than two weeks after distribution).

This classification yielded 102 early responses and 66 late responses. The t-test results indicated no statistically significant differences in the demographic variables, except that early respondents reported higher qualifications (postgraduate vs. undergraduate), suggesting that executives with higher education tend to value academic research owing to their own experience of postgraduate study. Table 5.1 presents the details of the test of non-response bias.

Table 5.1: Test of Non-Respondent Bias

Variable           Response   Number of Cases   Mean   Std. Deviation   Std. Error Mean
Gender             Early      102               1.41   .495             .049
                   Late        66               1.42   .498             .061
Industry           Early      102               2.52   .728             .072
                   Late        66               2.45   .706             .087
Revenue            Early      102               2.51   .841             .083
                   Late        66               2.50   .685             .084
No. of Employees   Early      102               2.42   .710             .070
                   Late        66               2.64   .515             .063
Experience         Early      102               2.17   .902             .089
                   Late        66               2.42   .658             .081
Qualification      Early      102               4.33   .871             .086
                   Late        66               3.70   .744             .092
Age                Early      102               2.44   .815             .081
                   Late        66               2.64   .648             .080
Position           Early      102               3.44   .654             .065
                   Late        66               3.62   .739             .091

Based on the above, this study concludes that there is no non-response bias that could significantly affect its ability to generalize the findings. This result therefore allowed the study to use all 168 responses in the data analysis.
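The early-versus-late comparison described above can be sketched in Python. This is an illustrative sketch only: the arrays below are randomly generated placeholders standing in for one coded demographic variable, not the study's data, and SciPy's Welch t-test stands in for the SPSS independent-samples t-test.

```python
# Sketch of the early-vs-late t-test used to check non-response bias.
# The data below are illustrative placeholders, not the study's raw data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical coded demographic variable (e.g. qualification category)
# for 102 "early" and 66 "late" respondents.
early = rng.integers(1, 6, size=102).astype(float)
late = rng.integers(1, 6, size=66).astype(float)

# Welch's t-test does not assume equal variances between the two groups.
t_stat, p_value = stats.ttest_ind(early, late, equal_var=False)

# Descriptives analogous to Table 5.1: N, mean, SD, standard error of the mean.
for name, grp in [("Early", early), ("Late", late)]:
    print(f"{name}: n={grp.size}, mean={grp.mean():.2f}, "
          f"sd={grp.std(ddof=1):.3f}, se={stats.sem(grp):.3f}")

# A p-value above 0.05 would indicate no significant early/late difference,
# i.e. no evidence of non-response bias on this variable.
print(f"t={t_stat:.3f}, p={p_value:.3f}")
```

In practice the same test would be repeated for each demographic variable in Table 5.1.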

5.4 Profiles of the Respondents

Table 5.2 tabulates the profiles of the respondents, their firms' structure and the participants' demographic information. A careful look at the table indicates that the responding firms and their participants are broadly representative of the target population of the Malaysian contact center industry.

The results in Table 5.2 are consistent with industry reports establishing that Malaysian contact center executive roles are male dominated, with 57.7% male against 42.3% female respondents (Frost and Sullivan, 2009). This pattern is common within the contact center industry, where the working schedules can at times be inconvenient for women (Roland and Werner, 2005).

Similarly, the respondents' profile indicates that organizations with fewer than 100 employees are represented by 8.9% of respondents, firms with between 101 and 500 employees are moderately represented at 33.9%, and those with 501 or more are over-represented at 57.1%. The low response from smaller companies may be connected to their lower involvement in CRM applications, while the larger firms are likely over-represented because of their financial ability to acquire and use costly CRM technologies, making them more willing to participate in the survey (Yim et al., 2005). It became apparent from the initial telephone contacts that smaller contact center firms tended not to have implemented CRM applications and technologies, which explains their lower willingness to participate, whereas the larger companies tended to be very familiar with CRM applications and technologies and were therefore more inclined to take part. This also helps to explain the representation of services (56%), wholesale (31%), manufacturing (10.7%) and others (2.3%) shown in Table 5.2.

As can be seen in Table 5.2, the majority of respondents reported between 5 and 10 years of work experience (46.4%), were older than 18, and most had at least a tertiary education.

The majority of respondents earned annual revenues of RM1 million and above (89.9%), with a small minority (10.1%) earning below RM1 million. This finding is in line with the industry trend that most contact center operators earning higher revenues have in one way or another implemented CRM applications and technologies (Frost and Sullivan, 2009; Callcentre.net, 2008; 2003). These higher earnings indicate how busy the industry is, particularly given the recent growth of foreign direct investment (FDI) in the outsourced business unit (Frost and Sullivan, 2009). This is also why it was very difficult to get leading contact center executives such as Senior Vice Presidents and Vice Presidents to respond to the survey, so the majority of respondents were key operating executives such as call center managers (58.3%) and operation managers (30.4%).

In conclusion, the above discussion indicates that the sample for this study does not deviate from the general population of contact centers, making it a good representation of the selected population of interest.

Table 5.2: Profiles of the Respondents

Variable                      Category                      Number of Cases   Percentage (%)
Gender                        Male                          97                57.7
                              Female                        71                42.3
Industry                      Manufacturing                 18                10.7
                              Wholesale                     52                31.0
                              Services                      94                56.0
                              Others                         4                 2.3
Revenue                       RM100,000 - RM900,000         17                10.1
                              RM1M - RM9.9M                 71                42.3
                              RM10M and above               80                47.6
No. of Employees              Below 100                     15                 8.9
                              101 - 500                     57                33.9
                              501 and above                 96                57.1
Years of Working Experience   Less than 5 years             30                17.9
                              Between 5 and 10 years        78                46.4
                              Between 10 and 20 years       49                29.2
                              Above 20 years                11                 6.5
Qualification                 No certification held          -                 -
                              Primary school certificate    11                 6.5
                              School Certificate/SPM        25                14.9
                              Tertiary school certificate   71                42.3
                              Postgraduate degrees          61                36.3
Age                           Under 18                       7                 4.2
                              Between 18 and 35 years       87                51.8
                              Between 36 and 45 years       60                35.7
                              Between 46 and 55 years       10                 6.0
                              Over 55 years                  4                 2.4
Position                      Senior Vice President          -                 -
                              Vice President                 1                 0.6
                              Call Center Manager           98                58.3
                              Operation Manager             51                30.4
                              Others                        18                10.7

5.5 Data Screening and Preliminary Analysis

5.5.1 Overview

To establish the assumed psychometric properties before applying the chosen data analysis techniques, this study employed a series of data screening procedures, including the detection and treatment of missing data, outliers, non-normality and multicollinearity. This is because the data distribution and the selected sample size have a direct impact on the choice of data analysis techniques and tests ().

5.5.2 Missing Data

As is evident in previous studies, missing data is a major concern for many researchers and can negatively affect the results of any empirical research (). Ten returned mail surveys (10.3% of mailed surveys) had missing data, whereas there were no missing data in the online questionnaire, because the online questionnaire was structured so that a respondent could not submit it with any item unanswered. Treating the missing data is crucial because AMOS, the statistical package used to analyze the data, will not run if any value is missing. Hair et al. (2010) argued that it is better to delete a case if more than 50% of its data are missing and the study has no sample size problems. Alternatively, missing data can be treated in SPSS by replacing missing values with the mean or median of nearby points, or via linear interpolation.

In this research, the missing values in the ten mailed questionnaires were replaced with the median of nearby values, since they were all minor omissions. The most common missing items were demographic variables such as level of annual income or current number of employees, items that mainly referred to the size of the respondent's firm. Given the need to protect their firms' identities, this research concluded that the missing data might have been intentional, for administrative reasons.
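The "median of nearby points" replacement can be sketched with pandas. The series and the window span below are hypothetical placeholders; neither the SPSS default span nor the study's actual items is reproduced here.

```python
# Sketch of "median of nearby points" imputation, mirroring the SPSS
# replace-missing-values option used for the ten incomplete mail surveys.
# The series below is an illustrative placeholder, not the study's data.
import numpy as np
import pandas as pd

s = pd.Series([2.0, 3.0, np.nan, 4.0, 3.0, np.nan, 5.0, 4.0])

# Median of a centered window of nearby points; min_periods=1 lets the
# window shrink at the edges, and NaNs inside the window are ignored.
nearby_median = s.rolling(window=5, center=True, min_periods=1).median()
imputed = s.fillna(nearby_median)

print(imputed.tolist())
```

Each missing entry is filled with the median of its surrounding observed values, leaving the observed entries untouched.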

5.5.3 Checking for Outliers

Statistically, outliers are observations that are numerically distant from the rest of the dataset (Byrne, 2010). Several existing studies have examined methods of detecting outliers, among which is classifying data points by their observed (Mahalanobis) distance from the expected values (Hair et al., 2010; Hau & Marsh, 2004). A constructive argument in favor of outlier treatment based on Mahalanobis distance is that it provides an effective means of detection through a predetermined threshold that defines whether a point is categorized as an outlier or not (Gerrit et al., 2002).

In this research, the chi-square table value was used as the threshold for determining the empirically optimal cut-off. This decision is in line with Hair et al. (2010), who recommend creating a new variable in SPSS, called "response", numbered from the first case to the last. The Mahalanobis distance can then be obtained by running a simple linear regression with the newly created response number as the dependent variable and all measurement items, apart from the demographic variables, as independent variables. Doing so produced a new output variable, Mah2, which was compared against the chi-square table value.

Under this procedure, 16 of the 168 cases were identified as outliers because their Mah2 exceeded the threshold value from the chi-square table for the 40 measurement items of the study's independent variables, and these cases were subsequently deleted from the dataset. Following the treatment of these outliers, the final regressions in this study were run on the remaining 152 cases.
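The screening step can be sketched as follows. This sketch computes the squared Mahalanobis distance directly rather than through the SPSS regression output described above (the two routes yield the same distances), and the data matrix and the p = .001 cut-off are illustrative assumptions, not the study's data or exact threshold.

```python
# Sketch of Mahalanobis-distance outlier screening against a chi-square
# threshold. Illustrative random data stand in for the 40 measurement items.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n_cases, n_items = 168, 40
X = rng.normal(size=(n_cases, n_items))

# Squared Mahalanobis distance of each case from the variable means.
diff = X - X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

# Threshold: chi-square critical value with df = number of items
# (a conservative p = .001 cut-off is assumed here).
threshold = chi2.ppf(0.999, df=n_items)
outliers = np.where(d2 > threshold)[0]

print(f"{outliers.size} cases flagged as multivariate outliers")
clean = np.delete(X, outliers, axis=0)
```

Cases whose squared distance exceeds the chi-square critical value are flagged and removed before the final regressions.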

5.5.4 Assumptions Underlying Statistical Regressions

Many modern statistical tests rely on specific assumptions about the variables used in the analysis. Researchers and statisticians have confirmed the need to meet these basic assumptions for research results to be trustworthy (). A trustworthy result prevents the occurrence of Type I or Type II errors, as well as the over- or under-estimation of a result's significance. As noted by (), knowledge and understanding of the current state of the theory will be jeopardized if violations of these basic assumptions lead to serious biases in the research findings. The three most notable of these basic assumptions are linearity, normality and homoscedasticity (Hair et al., 2010).

5.5.4.1 Assumption of Normality

In every regression analysis, researchers assume that the variables are normally distributed, because a non-normally distributed variable will be highly skewed and can distort the relationships between the variables of interest and the significance of the test results (). To prevent this, the researcher conducted the necessary data cleansing, such as determining the z-score of each item and transforming the data through the CDFNORM function in SPSS 14. Following the transformation, visual inspections of the data were conducted through histograms, stem-and-leaf plots, normal Q-Q plots and boxplots to assess skewness and kurtosis and thereby ascertain the normality of the data. Importantly, both the skewness and the kurtosis critical ratios fall within the suggested standards (CR below 2 to 3 for skewness and below 7 for kurtosis), strong evidence of the normality of the data. A Kolmogorov-Smirnov test was also conducted and likewise provided evidence of the normality of the data used in this study. The analyses by Byrne (2010) further confirm that the normality treatments applied in this research are efficient means of reducing the probability of incurring either Type I or Type II errors and of improving the accuracy of the estimates.

5.5.4.2 Assumption of Linearity

It has been argued that for any standard multiple regression analysis to estimate accurately the relationships between the dependent and independent variables, those relationships must be linear. There have been several instances in social science research of non-linear relationships between study variables (), and non-linearity has been argued to increase the chances of committing a Type I or Type II error. Several authors, such as (), () and (), have suggested methods of detecting non-linearity, among which is the use of items from existing theory or previous studies in the current analysis. Because all items in the independent variables were adopted from existing theories, linearity between the dependent and independent variables is assumed and non-linearity is not considered a problem here.

5.5.4.3 Assumption of Homoscedasticity

Homoscedasticity means that the variance of the errors in an analysis is the same across all levels of the independent variables (). The estimates of the correlations among the exogenous variables in this study show no violation of homoscedasticity, and none of the independent variables have offending estimates, confirming the absence of distortions that could inflate the probability of committing a Type I error.

5.5.5 Sample Size and Power

Since there is little evidence in the SEM and AMOS literature on the statistical power and factor loadings to be selected, this study applied the criteria recommended by Byrne (2010), which identify the significant factor loadings for a factor analysis from the sample size. Given the sample size available in this study, a factor loading of 0.50 or greater was considered significant as the criterion for assessing factor loadings.

5.5.6 Common Method Variance

Previous statistical literature has established common method bias as a major source of measurement error, one that can substantially distort the observed relationships between measured variables (). A major cause of common method bias is item characteristics, which normally arise from using the same respondents for both the dependent and the independent variables (). A strong argument in support of this concern is that such bias generates significant artificial covariances (). () suggested that to prevent common method bias, researchers should measure the predictor and the criterion variables separately, from different sources. In this study, common method bias was prevented by measuring the predictor variables from managers' opinions of the impact of the CRM dimensions on their operational activities, while the criterion variables were elicited from the outcomes of their 2009 customer satisfaction and first call resolution surveys. This procedure was possible because, within the contact center industry, each company generally conducts customer surveys through interactive voice response (IVR), telephone, email or SMS. This is also why FCR and caller satisfaction were measured on an ordinal scale in this study, aligning with the existing literature and the industry standard of measuring FCR and caller satisfaction as percentages (Roland and Werner, 2005; Yim et al., 2005).

5.6 Initial Analysis and Measurement Refinement

Consistent with the available literature on structural equation modeling and many scholarly recommendations, this study adopted a two-step model building method, as previously used by Roland and Werner (2005) and Yim et al. (2005), both conducted within the inbound units of the contact center industry. The first step involved exploratory factor analysis (EFA) to purify and validate untested new measurement scales; the second involved confirmatory factor analysis (CFA) to validate pre-existing measurement scales within the context of the current study (Byrne, 2010; Hair et al., 2006).

At the onset of this study, the researcher developed a set of ratio scales to measure individual contact center performance in terms of first call resolution and caller satisfaction. However, the proposed ratio scale was rejected at the face validity stage by the chosen managers on grounds of privacy and confidentiality. This group of experts suggested instead using the industry standard of asking managers to rate their company's performance based on their previous customer surveys. The managers' suggestion is theoretically in line with previous studies such as Roland and Werner (2005), Yim et al. (2005) and Feinberg et al. (2002; 2000), which all asked managers to rate their company's performance by the percentage of surveyed callers reporting "top box" first call resolution (FCR) and caller satisfaction. "Top box" FCR and caller satisfaction refer to callers who reported being extremely satisfied with the outcome of their call, which depends on whatever the company chooses the top score to measure. This process, requested by the managers at the face validity stage and in line with the major existing literature on measuring first call resolution and caller satisfaction, eventually narrowed the EFA process.

The purpose of the EFA was primarily to identify, reduce and help validate the underlying factors that might determine FCR and caller satisfaction, while abiding by the single-construct measurement used in the industry of study and in previous studies such as Feinberg et al. (2002; 2000), Roland and Werner (2005) and Yim et al. (2005). As Hair et al. (2006) argued, the objective of exploratory factor analysis is generally to prepare the obtained data for subsequent bivariate or multivariate regression analysis, here conducted with the AMOS software. In contrast to the EFA, confirmatory factor analysis was used to confirm and reduce the number of factors in the other constructs, namely the CRM dimensions (customer orientation, CRM organization, knowledge management and technology-based CRM) and perceived service quality. Following the suggestions in the existing SEM literature, SPSS 14.0 was used to perform the EFA and AMOS to conduct the CFA (Byrne, 2010; Hair et al., 2006).

5.7 Exploratory Factor Analysis

This study conducted a detailed visual inspection of the correlation matrix, primarily to establish factorability and to ensure that a substantial number of the correlations are greater than 0.50. To do this effectively, the significance values were scanned for any variable whose majority of values exceed the suggested 0.05 (), and the correlation coefficients were then scanned for any greater than the suggested 0.9 (). If the majority of a variable's significance values exceed 0.05, or any correlation coefficient exceeds 0.9, the researcher should be aware of probable problems arising from singularity in the data (). Hair et al. (2006) suggested eliminating one of the two variables causing the problem, identified by checking the determinant, which is listed at the bottom of the matrix. For the data used in this study the determinant is 6.55E-02 (0.066), far greater than the suggested minimum of 0.00001, indicating that there is no multicollinearity problem in these data. In summary, all the questions in the CRM dimensions correlate well and none of the correlation coefficients is particularly large, indicating no need to eliminate any of the measurements at this stage.
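The determinant check can be sketched numerically. The item matrix below is a hypothetical stand-in for the study's measurement items; the 0.00001 rule of thumb is the one cited above.

```python
# Sketch of the determinant-based multicollinearity check: the determinant
# of the item correlation matrix should exceed 0.00001. Illustrative data.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(152, 8))           # hypothetical item responses

R = np.corrcoef(X, rowvar=False)        # item correlation matrix
det_R = np.linalg.det(R)

# A determinant near zero signals singularity/multicollinearity; the rule
# of thumb cited in the text requires det(R) > 0.00001.
print(f"det(R) = {det_R:.3e}, multicollinearity OK: {det_R > 0.00001}")
```

A determinant well above the 0.00001 floor, as here, indicates that no pair of items needs to be eliminated for redundancy.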

5.8 KMO and Bartlett’s Test

The second SPSS output reports the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy and Bartlett's test of sphericity. The KMO statistic varies between 0 and 1 (). A value of 0 indicates that the sum of the partial correlations is large relative to the sum of the correlations, a sign of diffusion in the pattern of correlations (hence factor analysis is likely to be inappropriate). A value close to 1 indicates that the patterns of correlations are relatively compact, so factor analysis should yield distinct and reliable factors ().

Kaiser (1974) recommended accepting any value greater than 0.5; any value below 0.5 indicates the need to collect more data and/or include new variables. For Kaiser (1974), values between 0.5 and 0.7 are mediocre, values between 0.7 and 0.8 good, values between 0.8 and 0.9 great, and values above 0.9 superb. For the data in this study the value is 0.82, which falls within the "great" category; we can therefore be confident that factor analysis is appropriate for these data.

Bartlett's test examines the null hypothesis that the original correlation matrix is an identity matrix. For factor analysis to work, some relationships must exist between the variables of interest, and for the Bartlett test to be significant its significance value must be less than the suggested 0.05 (). For this study, the significance test indicates that the observed R-matrix is not an identity matrix, confirming that relationships exist between the variables included for further analysis (customer orientation, CRM organization, knowledge management, technology-based CRM, perceived service quality, first call resolution and caller satisfaction). The Bartlett's test for the data in this study is highly significant (p < 0.001), statistically confirming that factor analysis is appropriate for this study.
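Both statistics can be computed directly from an item correlation matrix, as sketched below. The data are hypothetical (a single common factor plus noise, standing in for correlated survey items); the Bartlett chi-square formula and the KMO ratio are the standard textbook definitions, not an excerpt of the study's SPSS output.

```python
# Sketch of Bartlett's test of sphericity and the KMO measure computed
# from an item correlation matrix. Illustrative data, not the study's.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
n, p = 152, 6
# Hypothetical correlated items: a common factor plus noise.
factor = rng.normal(size=(n, 1))
X = factor + 0.8 * rng.normal(size=(n, p))

R = np.corrcoef(X, rowvar=False)

# Bartlett's test: H0 is that R is an identity matrix.
chi_sq = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
df = p * (p - 1) / 2
bartlett_p = chi2.sf(chi_sq, df)

# KMO: squared correlations relative to squared correlations plus squared
# partial correlations (off-diagonal elements only).
R_inv = np.linalg.inv(R)
partial = -R_inv / np.sqrt(np.outer(np.diag(R_inv), np.diag(R_inv)))
off = ~np.eye(p, dtype=bool)
kmo = (R[off] ** 2).sum() / ((R[off] ** 2).sum() + (partial[off] ** 2).sum())

print(f"Bartlett chi2={chi_sq:.1f}, p={bartlett_p:.4f}, KMO={kmo:.2f}")
```

With genuinely correlated items, Bartlett's test is highly significant and the KMO value lands well above the 0.5 acceptance floor, mirroring the decision rule described in the text.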

5.9 Factor Extraction

Through factor analysis, this study retrieved the list of eigenvalues associated with each linear component (factor) before extraction, after extraction and after rotation of the components. SPSS output 4 in the appendix shows that the eigenvalue associated with each factor represents the variance explained by that particular linear component.
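The eigenvalue table described above can be sketched as follows. The item data are hypothetical, and Kaiser's eigenvalue-greater-than-one retention rule is a standard convention assumed here for illustration rather than quoted from the study.

```python
# Sketch of extracting eigenvalues from an item correlation matrix, as in
# the SPSS "total variance explained" output: each eigenvalue, divided by
# the total, gives the share of variance that component explains.
# Illustrative data, not the study's.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(152, 6))
X[:, 1] += X[:, 0]                      # induce one dominant component

R = np.corrcoef(X, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]
explained = eigenvalues / eigenvalues.sum()   # proportions of total variance

for i, (ev, pr) in enumerate(zip(eigenvalues, explained), start=1):
    print(f"Component {i}: eigenvalue={ev:.3f}, % variance={100 * pr:.1f}")

# Kaiser's criterion retains components with eigenvalues greater than 1.
retained = int((eigenvalues > 1).sum())
```

The eigenvalues of a correlation matrix sum to the number of items, so each eigenvalue divided by that total is the proportion of variance the component explains.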


The items in your model are all familiar to us because we use them on daily basis, although we may not call them the same name as in your model. Very important among what you should know is that here in the contact center we are measuring many operational variables on daily, weekly, fortnightly, monthly, quarterly and yearly basis. We are measuring them to determine our operational efficiencies both internally and externally.

Yes CRM technologies such as workforce management, interactive voice response (IVR), predictive dialer, voice over internet protocol (VOIP), automatic call distributor (ACD) etc. have all been assisting our operation processes in achieving the desired efficiency.

Within the contact center, I can certainly tell you that CRM applications such as data base management, online connections between the frontline and back office and constant training of agents on the needs of customers are strong inputs to the achievement of our caller satisfaction.

To some extent it does, but note that there are other factors outside the contact center that majorly influence caller satisfaction, such as product quality, price and management policies. All these are external to contact center operations, but within our operations in the contact center I agreed that your proposed model extensively captured the determinants of call satisfaction.

The following sections present the detailed analysis and results from the quantitative stage of the research.

5.3 Analysis of Survey Response

5.3.1. Response Rate

To comply with the data collection requirements, 400 questionnaires were distributed to contact center managers in Malaysia via mail and web survey, a method consistent with existing industry studies such as Yim et al. (2005). Of these, 173 questionnaires were returned, of which 5 were discarded because they were incomplete. This left 168 usable responses for further analysis, constituting an overall 43.3% response rate for this study.
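The response-rate arithmetic above can be reproduced in a few lines; the counts are those reported in this section:

```python
distributed = 400
returned = 173
discarded = 5          # incomplete questionnaires removed

usable = returned - discarded           # responses kept for analysis
response_rate = returned / distributed * 100  # rate based on all returned forms

print(usable)                           # 168 usable responses
print(f"{response_rate:.2f}")           # 43.25, reported as 43.3% in the text
```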

The obtained sample size is adequate, and the response rate is comparable to that of many contact center studies that used managers and senior executives as the study sample, whose response rates ranged between 15 and 49 percent (Yueh et al., 2010; Dean, 2009; Richard, 2007; Roland and Werner, 2005; Sin et al., 2005; Yim et al., 2005).

Of the 173 respondents, 103 answered the mail questionnaire, while the remaining 70 responded through the web. To rule out multiple responses from the same company, the researcher compared the online and mail respondents on key variables such as annual revenue, experience and number of employees; the results show that the mail respondents were distinct from the online respondents.

5.3.2 Test of Non-Response Bias

Evidence from the existing literature has established that non-respondents sometimes differ systematically from respondents in attitudes, behaviors, personality, motivation, demographics and/or psychographics, any of which might affect the results of a study (Malhotra, Hall, Shaw, & Oppenheim, 2006). In this study, non-response bias was tested using t-tests to compare the mean, standard deviation and standard error of the mean of early and late responses on variables such as gender, industry, revenue, number of employees, experience, qualification and age. This follows Churchill and Brown (2004) and Malhotra et al. (2006), who both argued that late respondents can stand in for non-respondents, since they would probably not have responded at all without extensive follow-up.

Malhotra et al. (2006) further argued that non-respondents can be assumed to share similar characteristics with late respondents. To standardize this procedure, this study divided the sample into two groups: early responses (those who returned the questionnaires within two weeks of distribution) and late responses (those who returned them after two weeks).

This classification yielded 102 early responses and 66 late responses. The t-test results indicated no statistically significant differences in the demographic variables, except that early respondents reported higher qualifications (postgraduate vs. undergraduate), suggesting that executives with higher education tend to value academic research owing to their own experience of postgraduate study. Table 5.1 presents the details of the test of non-response bias.

Table 5.1: Test of Non-Response Bias

Variable          Response   Number of Cases   Mean   Std Deviation   Std Error Mean
Gender            Early      102               1.41   .495            .049
                  Late        66               1.42   .498            .061
Industry          Early      102               2.52   .728            .072
                  Late        66               2.45   .706            .087
Revenue           Early      102               2.51   .841            .083
                  Late        66               2.50   .685            .084
No of Employees   Early      102               2.42   .710            .070
                  Late        66               2.64   .515            .063
Experience        Early      102               2.17   .902            .089
                  Late        66               2.42   .658            .081
Qualification     Early      102               4.33   .871            .086
                  Late        66               3.70   .744            .092
Age               Early      102               2.44   .815            .081
                  Late        66               2.64   .648            .080
Position          Early      102               3.44   .654            .065
                  Late        66               3.62   .739            .091

Based on the above, this study concludes that there is no non-response bias that could significantly affect its ability to generalize the findings. This result therefore allowed the study to utilize all 168 responses in the data analysis.
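The early-versus-late comparisons in Table 5.1 can be reproduced directly from the reported summary statistics. The sketch below applies a t-test to the Qualification row, the one variable the text reports as significantly different; the means, standard deviations and group sizes are taken from the table, but the use of Welch's unequal-variance form is an assumption, since the original analysis does not state which variant was run:

```python
from scipy import stats

# Qualification row of Table 5.1: early (n=102) vs late (n=66) respondents
t, p = stats.ttest_ind_from_stats(
    mean1=4.33, std1=0.871, nobs1=102,   # early responses
    mean2=3.70, std2=0.744, nobs2=66,    # late responses
    equal_var=False)                     # Welch's t-test (assumption)

print(f"t = {t:.2f}, p = {p:.4f}")  # a significant difference, consistent with the text
```

The same call, fed with any other row of the table, reproduces the non-significant results for the remaining demographic variables.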

5.4 Profiles of the Respondents

For ease of understanding, Table 5.2 tabulates the profiles of the respondents, their firms' structure and the participants' demographic information. A close look at the table indicates that the responding firms and participants are broadly representative of the target population in the Malaysian contact center industry.

This is because the results in Table 5.2 are consistent with industry reports establishing that Malaysian contact center executives are male dominated (57.7% male against 42.3% female respondents) (Frost and Sullivan, 2009). This pattern is common within the contact center industry, where working schedules can be inconvenient for women (Roland and Werner, 2005).

Similarly, the respondents' profile indicates that organizations with fewer than 100 employees are represented by 8.9% of respondents, firms with between 101 and 500 employees by 33.9%, and those with 501 or more by 57.1%. The low response from smaller companies may be connected to their lesser involvement in CRM applications, while larger firms are likely over-represented because of their financial ability to acquire and utilize costly CRM technologies, making them more willing to participate in the survey (Yim et al., 2005). It became apparent from the initial telephone contact that smaller contact center firms tended not to have implemented CRM applications and technologies, which explains their reluctance to participate, whereas the larger companies tended to be very familiar with CRM applications and technologies and were therefore more inclined to participate. This also helps explain the industry distribution of responses: services (56.0%), wholesale (31.0%), manufacturing (10.7%) and others (2.3%), as shown in Table 5.2.

As shown in the table below, the majority of respondents reported between 5 and 10 years of work experience (46.4%), were older than 18 years, and had at least some tertiary education.

The majority of respondents reported annual revenue of RM1 million or above (89.9%), with a minority (10.1%) earning below RM1 million. This finding is in line with the industry trend that contact center operators earning higher revenues have in one way or another implemented CRM applications and technologies (Frost and Sullivan, 2009; Callcentre.net, 2008; 2003). These higher earnings indicate how busy the industry is, particularly given the recent growth of foreign direct investment (FDI) in the outsourced business unit (Frost and Sullivan, 2009). This is also why it was difficult to get leading contact center executives such as Senior Vice Presidents and Vice Presidents to respond to the survey; the majority of respondents therefore fell under key operating executives such as call center managers (58.3%) and operation managers (30.4%).

In conclusion, the discussion above indicates that the sample for this study does not deviate from the general population of contact centers, making it a suitable representation of the selected population of interest.

Table 5.2: Profiles of the Respondents

Variable                      Category                                 Number of Cases   Percentage (%)
Gender                        Male                                     97                57.7
                              Female                                   71                42.3
Industry                      Manufacturing                            18                10.7
                              Wholesale                                52                31.0
                              Services                                 94                56.0
                              Others                                    4                 2.3
Revenue                       Between RM100,000 – RM900,000            17                10.1
                              Between RM1M – RM9.9M                    71                42.3
                              RM10M and above                          80                47.6
No of Employees               Below 100                                15                 8.9
                              101 – 500                                57                33.9
                              501 and above                            96                57.1
Years of Working Experience   Less than 5 years                        30                17.9
                              Between 5 and 10 years                   78                46.4
                              Between 10 and 20 years                  49                29.2
                              Above 20 years                           11                 6.5
Qualification                 No certification held                     0                 0.0
                              Primary school certificate               11                 6.5
                              School Certificate/SPM                   25                14.9
                              Tertiary school certificate              71                42.3
                              Postgraduate degrees                     61                36.3
Age                           Under 18                                  7                 4.2
                              Between 18 and 35 years                  87                51.8
                              Between 36 and 45 years                  60                35.7
                              Between 46 and 55 years                  10                 6.0
                              Over 55 years                             4                 2.4
Position                      Senior Vice President / Vice President    1                  .6
                              Call Center Manager                      98                58.3
                              Operation Manager                        51                30.4
                              Others                                   18                10.7

5.5 Data Screening and Preliminary Analysis

5.5.1 Overview

To establish the assumed psychometric properties before applying the necessary data analysis techniques, this study employed a series of data screening approaches, including the detection and treatment of missing data, outliers, normality and multicollinearity. This is because the data distribution and the selected sample size have a direct impact on the choice of data analysis techniques and tests ().

5.5.2 Missing Data

As is evident in previous studies, missing data is a major concern for many researchers and can negatively affect the results of any empirical research (). Ten returned mail surveys (10.3% of mailed surveys) had missing data, whereas there was no missing data in the online questionnaire, because the online questionnaire was structured so that respondents could not submit it with any item unanswered. Treating this missing data is crucial because AMOS, the statistical package used to analyze the data, will not run if there are missing values. Hair et al. (2010) argued that it is better to delete a case if its missing data exceeds 50% and the study has no sample size problems. The alternative is the general treatment of missing data in SPSS, replacing missing values with the mean or median of nearby points or via linear interpolation.

For this research, the missing values in the ten mailed questionnaires were replaced with the median of nearby values, since all were minor omissions. The most commonly missing items were demographic variables such as annual income level or current number of employees, items that mainly referred to the size of the respondent's firm. Given the need to protect company identity, this research concluded that the missing data might have been intentional for administrative reasons.
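SPSS's "median of nearby points" replacement can be approximated in pandas with a centred rolling median. The short series below is invented purely for illustration, not drawn from the study's data:

```python
import numpy as np
import pandas as pd

# Toy item responses with two omissions (NaN); not the study's actual data
s = pd.Series([3, 4, np.nan, 5, 4, np.nan, 3], dtype=float)

# Median of nearby points: a centred 5-point rolling median, ignoring NaNs
nearby_median = s.rolling(window=5, center=True, min_periods=1).median()
filled = s.fillna(nearby_median)

print(filled.tolist())  # gaps replaced by the median of the surrounding values
```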

5.5.3 Checking for Outliers

Statistical evidence establishes outliers as observations that are numerically distant from the rest of the dataset (Byrne, 2010). Several existing studies have examined methods of detecting outliers, among which is classifying data points by their observed (Mahalanobis) distance from the expected values (Hair et al., 2010; Hau & Marsh, 2004). A constructive argument in favor of outlier treatment based on the Mahalanobis distance is that it serves as an effective means of detecting outliers by setting a predetermined threshold that defines whether a point is categorized as an outlier or not (Gerrit et al., 2002).

For this research, the table of chi-square statistics was used to set the threshold value. Following Hair et al. (2010), a new variable called "response", numbered from the first to the last case, was created in SPSS. The Mahalanobis distance was then obtained by running a simple linear regression with the newly created response number as the dependent variable and all measurement items, apart from the demographic variables, as independent variables. This produced a new output variable, Mah2, which was compared against the chi-square value stipulated in the statistical table.

Under this Mah2 criterion, 16 of the 168 respondents were identified as outliers, because their Mah2 values exceeded the chi-square threshold associated with the 40 measurement items in the independent variables of this study, and they were subsequently deleted from the dataset. Following this treatment, the final regressions in this study were run on the remaining 152 cases.
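The Mahalanobis screening described above can also be sketched directly, without the regression detour, by computing the squared distances and comparing them with a chi-square cutoff. The data matrix and the p = .001 cutoff below are illustrative assumptions, not the study's:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
X = rng.normal(size=(168, 40))   # 168 respondents x 40 measurement items (simulated)

mu = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
centered = X - mu

# Squared Mahalanobis distance of every respondent from the sample centroid
d2 = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)

# Chi-square threshold with df = number of items, at a conservative p = .001
threshold = stats.chi2.ppf(1 - 0.001, df=X.shape[1])
outlier_rows = np.where(d2 > threshold)[0]
print(len(outlier_rows), "cases flagged as outliers")
```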

5.5.4 Assumptions Underlying Statistical Regressions

Many modern statistical tests rely on specified assumptions about the variables used in the analysis. Researchers and statisticians agree on the need to meet these basic assumptions for research results to be trustworthy (). A trustworthy result prevents the occurrence of Type I or Type II errors, as well as over- or under-estimating the significance of a result. As noted by (), knowledge and general understanding of the theory can be jeopardized by violations of these basic assumptions, which may introduce serious biases into the research findings. The three most notable of these assumptions are linearity, normality and homoscedasticity (Hair et al., 2010).

5.5.4.1 Assumption of Normality

Regression analysis assumes that the variables are normally distributed. A non-normally distributed variable will be highly skewed and can distort the relationships between the variables of interest and the significance of the test results (). To prevent this, the researcher conducted the necessary data cleansing, determining the z-score of each item and transforming the data through CDFNORM in SPSS 14. Following the transformation, visual inspections of the data were conducted through histograms, stem-and-leaf plots, normal Q-Q plots and boxplots to examine skewness and kurtosis and so ascertain the normality of the data. Both the critical ratios for skewness and kurtosis fall within the suggested standards (CR < 2/3 and CR < 7 respectively), strong evidence of the normality of the data. A Kolmogorov-Smirnov test was also conducted, which likewise supported the normality of the data used in this study. Byrne (2010) confirms that normality treatments such as those applied here are efficient means of reducing the probability of Type I or Type II errors and of improving the accuracy of the estimates.
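The skewness and kurtosis critical ratios and the Kolmogorov-Smirnov check can be reproduced with SciPy. The data below are simulated, and the large-sample standard errors sqrt(6/n) and sqrt(24/n) are a common approximation rather than the exact SPSS formulas:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(loc=3.5, scale=0.8, size=152)   # illustrative item scores

z = (x - x.mean()) / x.std(ddof=1)             # z-scores, as in the SPSS step

n = len(z)
skew_cr = stats.skew(z) / np.sqrt(6 / n)       # critical ratio for skewness
kurt_cr = stats.kurtosis(z) / np.sqrt(24 / n)  # critical ratio for excess kurtosis
ks_stat, ks_p = stats.kstest(z, 'norm')        # one-sample Kolmogorov-Smirnov test

print(f"skew CR = {skew_cr:.2f}, kurtosis CR = {kurt_cr:.2f}, KS p = {ks_p:.3f}")
```

Note that running the KS test against a normal distribution whose mean and standard deviation were estimated from the same sample strictly calls for the Lilliefors correction; the plain test is shown here only as a sketch.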

5.5.4.2 Assumption of Linearity

For any standard multiple regression analysis to estimate accurately the relationships between the dependent and independent variables, those relationships must be linear in nature. There have been several instances in social science research of non-linear relationships between study variables (), and the occurrence of non-linearity has been argued to increase the chances of committing a Type I or Type II error. Several authors () have suggested three methods of detecting non-linearity, among which is the use of items from existing theory or previous studies in the current analysis. Since all items in the independent variables of this study were adopted from existing theories, linearity between the dependent and independent variables can be assumed, and non-linearity is not considered a problem.

5.5.4.3 Assumption of Homoscedasticity

Homoscedasticity means that the variance of errors is constant across all levels of the independent variables (). The estimates of the correlations among the exogenous variables in this study show no evidence of heteroscedasticity, and none of the independent variables produced offending estimates, confirming the absence of distortions or of an elevated probability of committing a Type I error.
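Homoscedasticity can also be checked numerically. One common diagnostic, not mentioned in the original analysis and offered here purely as a supplementary sketch, is the Breusch-Pagan test, which regresses the squared residuals on the predictors and compares LM = n * R² with a chi-square distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 152
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])   # intercept + 3 predictors
y = X @ np.array([1.0, 0.5, 0.3, 0.2]) + rng.normal(size=n)  # constant error variance

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Breusch-Pagan: regress squared residuals on the predictors; LM = n * R^2
u = resid ** 2
g, *_ = np.linalg.lstsq(X, u, rcond=None)
u_hat = X @ g
r2 = 1 - ((u - u_hat) ** 2).sum() / ((u - u.mean()) ** 2).sum()
lm = n * r2
p_value = stats.chi2.sf(lm, df=X.shape[1] - 1)
print(f"LM = {lm:.2f}, p = {p_value:.3f}")  # a large p suggests homoscedastic errors
```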

5.5.5 Sample Size and Power

Since there is little evidence in the SEM and AMOS literature on the statistical power and factor loading thresholds to be selected, this study adopted the criteria recommended by Byrne (2010), which identify the significant factor loadings for a factor analysis through the sample size. Given the number of usable cases in this study, a factor loading of 0.50 or greater was considered significant as the criterion for assessing factor loadings.

5.5.6 Common Method Variance

Previous statistical literature establishes common method bias as a major source of measurement error that can substantially distort the observed relationships between measured variables (). A major cause of common method bias is item characteristics, which normally arises from using the same respondents for both the dependent and the independent variables (); a strong argument concerning this type of bias is that it generates significant artificial covariances (). It has been suggested () that, to prevent common method bias, researchers should measure the predictor and criterion variables separately through different sources. In this study, common method bias was prevented by measuring the predictor variables from managers' opinions of the impact of the CRM dimensions on their operational activities, while the criterion variables were based on the outcomes of their 2009 customer satisfaction and first call resolution surveys. This procedure was possible because within the contact center industry each company generally conducts customer surveys through interactive voice response (IVR), telephone, email or SMS. For this reason, FCR and caller satisfaction were measured on an ordinal scale in this study, in line with the existing literature and the industry standard of measuring FCR and caller satisfaction on a percentage basis (Roland and Werner, 2005; Yim et al., 2005).

5.6 Initial Analysis and Measurement Refinement

Consistent with the available literature on structural equation modeling and many scholarly recommendations, this study adopted the two-step model building method previously applied by Roland and Werner (2005) and Yim et al. (2005), both conducted within the inbound units of the contact center industry. The first step involved exploratory factor analysis (EFA) to purify and validate untested new measurement scales; the second involved confirmatory factor analysis (CFA) to validate pre-existing measurement scales within the context of the current study (Byrne, 2010; Hair et al., 2006).

At the onset of this study, the researcher developed a set of ratio scales to measure individual contact center performance in terms of first call resolution and caller satisfaction. The proposed ratio scale was, however, turned down at the face validity stage by the chosen managers on grounds of privacy and confidentiality. These experts suggested instead that it was best to use the industry standard, which asks managers to rate their company's performance based on their previous customer surveys. The managers' suggestion is theoretically in line with previous studies such as Roland and Werner (2005), Yim et al. (2005) and Feinberg et al. (2002; 2000), which all asked managers to rate their company's performance by the percentage of surveyed callers reporting top box first call resolution (FCR) and caller satisfaction. "Top box" FCR and caller satisfaction refer to callers who reported being extremely satisfied with the outcome of their call, depending on whatever the company chooses the top score to measure. This process, requested by the managers at the face validity stage and in line with the major existing literature on measuring first call resolution and caller satisfaction, eventually narrowed the EFA process.

The purpose of the EFA was primarily to identify, reduce and help validate the underlying factors that might determine FCR and caller satisfaction; in doing so, this study followed its identification of the single construct used in the industry of study and in previous studies such as Feinberg et al. (2002; 2000), Roland and Werner (2005) and Yim et al. (2005). As Hair et al. (2006) argued, the objective of exploratory factor analysis is generally to prepare the obtained data for subsequent bivariate or multivariate regression analysis, here conducted with the AMOS software. In contrast to the EFA, confirmatory factor analysis was used to confirm and reduce the number of factors from the other constructs, namely the CRM dimensions (customer orientation, CRM organization, knowledge management and technology-based CRM) and perceived service quality. Following the suggestions in the existing SEM literature, SPSS 14.0 was used to perform the EFA, while AMOS was used to conduct the CFA (Byrne, 2010; Hair et al., 2006).

5.7 Exploratory Factor Analysis

This study conducted a detailed visual inspection of the correlation matrix, primarily to establish factorability and to ensure that a substantial number of the correlations are greater than 0.50. A scan of the significance values was carried out to look for any variable whose majority of values exceed the suggested 0.05 (), followed by a scan of the correlation coefficients for any greater than the suggested 0.9 (). If the majority of a variable's significance values exceed 0.05, or a correlation coefficient exceeds 0.9, the researcher should be aware of probable problems arising from singularity in the data (). Hair et al. (2006) suggested eliminating one of the two variables causing such a problem, identified by checking the determinant listed at the bottom of the matrix. For the data used in this study the determinant is 0.066, far greater than the suggested minimum of 0.00001, indicating that there is no multicollinearity problem in these data. In summary, all the questions in the CRM dimensions correlate well and none of the correlation coefficients is particularly large, so no measurement needed to be eliminated at this stage.
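The determinant screen described above is easy to reproduce: compute the correlation matrix of the items and compare its determinant with the 0.00001 rule of thumb. The simulated items below stand in for the study's measurement items:

```python
import numpy as np

rng = np.random.default_rng(6)
f = rng.normal(size=(168, 1))                  # one shared factor
X = f + rng.normal(scale=1.0, size=(168, 5))   # five moderately correlated items

R = np.corrcoef(X, rowvar=False)
det_R = np.linalg.det(R)

# A determinant near zero signals singularity/multicollinearity;
# the rule of thumb used in the text is det(R) > 0.00001
print(f"det(R) = {det_R:.5f}", "-> OK" if det_R > 1e-5 else "-> multicollinearity risk")
```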

5.8 KMO and Bartlett’s Test

SPSS output two reports the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy and Bartlett's test of sphericity. The KMO statistic varies between 0 and 1 (). A value of 0 indicates that the sum of the partial correlations is large relative to the sum of the correlations, a sign of diffusion in the pattern of correlations (and hence that factor analysis is likely to yield inappropriate results). A value close to 1 indicates that the pattern of correlations is relatively compact, so factor analysis should yield distinct and reliable factors ().

Kaiser (1974) recommended accepting any value greater than 0.5; any value below 0.5 is an indication to collect more data and/or include new variables. For Kaiser (1974), values between 0.5 and 0.7 are mediocre, values between 0.7 and 0.8 good, values between 0.8 and 0.9 great, and values above 0.9 superb. For the data in this study the value is 0.82, which falls within the "great" category; we can therefore be confident that factor analysis is appropriate for these data.

Bartlett's test examines the null hypothesis that the original correlation matrix is an identity matrix. For factor analysis to work, some relationships must exist between the variables of interest, and for a Bartlett's test to be significant it must obtain a significance value below the suggested 0.05 (). For this study, the significant test indicates that the observed R-matrix is not an identity matrix, confirming that relationships exist among the variables included for further analysis (customer orientation, CRM organization, knowledge management, technology-based CRM, perceived service quality, first call resolution and caller satisfaction). The Bartlett's test for these data is highly significant (p < 0.001), statistically confirming that factor analysis is appropriate for this study.
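Both diagnostics can be computed from first principles. The sketch below implements the overall KMO measure (squared correlations weighed against squared partial correlations) and Bartlett's sphericity statistic, chi² = -(n - 1 - (2p + 5)/6) · ln|R| with p(p - 1)/2 degrees of freedom. The input data are simulated, so the printed values will not match the study's reported 0.82 and p < 0.001:

```python
import numpy as np
from scipy import stats

def kmo(X):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(X, rowvar=False)
    A = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(A), np.diag(A)))
    P = -A / d                       # anti-image (partial) correlations
    np.fill_diagonal(P, 0.0)
    np.fill_diagonal(R, 0.0)
    return (R ** 2).sum() / ((R ** 2).sum() + (P ** 2).sum())

def bartlett_sphericity(X):
    """Test that the correlation matrix is an identity matrix."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return chi2, stats.chi2.sf(chi2, df)

rng = np.random.default_rng(3)
factor = rng.normal(size=(200, 1))
X = factor + 0.5 * rng.normal(size=(200, 6))   # six items sharing one factor

chi2, p = bartlett_sphericity(X)
print(f"KMO = {kmo(X):.2f}, Bartlett chi2 = {chi2:.1f}, p = {p:.2e}")
```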

Factor Extraction

Through factor analysis, this study retrieved the list of eigenvalues associated with each linear component before extraction, after extraction and after rotation. SPSS output 4 in the appendix shows that the eigenvalue associated with each factor represents the actual variance explained by that particular linear component.
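The eigenvalue extraction described above amounts to an eigendecomposition of the item correlation matrix. A minimal sketch with simulated two-factor data (not the study's items), using the familiar eigenvalue-greater-than-one retention rule as an illustrative assumption, is:

```python
import numpy as np

rng = np.random.default_rng(5)
factors = rng.normal(size=(168, 2))
loadings = rng.normal(size=(2, 10))
X = factors @ loadings + 0.7 * rng.normal(size=(168, 10))  # ten items, two factors

R = np.corrcoef(X, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]   # variance explained by each component

# Kaiser criterion (assumption): retain components whose eigenvalue exceeds 1
retained = int((eigvals > 1.0).sum())
print("eigenvalues:", np.round(eigvals, 2), "| retained:", retained)
```

The eigenvalues of a correlation matrix sum to the number of items, so each eigenvalue divided by that total gives the proportion of variance the component explains.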
