Methodology chapter examining the employability of accounting graduates

This chapter deals with the methods and techniques employed in gathering and analysing the data. It states the population and sample of accounting graduates in Nigeria that was studied and describes the data, the data types and the sources. The chapter also explains the relevant variables and their estimation, the analytical models, and the methods of data analysis employed in this study.

3.2 The Population and Sample

All accounting graduates in Nigeria constitute the population. Nigeria is divided into six (6) geopolitical zones, North-East, North-West, North-Central, South-East, South-West and South-South, with thirty-six (36) states and a Federal Capital Territory. This study adopts purposive sampling to delimit the study area and obtain the sample of accounting graduates: the study focuses on accounting graduates working in the South-West and South-South geopolitical zones. Accounting graduates working in both the public and private sectors, including those who are self-employed, constitute the sample for this study. Because accounting graduates tend to reason alike as a result of the common nature of their training, the findings from the study can be generalised to the wider population.

3.3 Research Design

In the course of this study, we adopted the survey method. The survey method, which may be cross-sectional or longitudinal, was chosen because the study seeks to establish the extent of the relationship between the dependent and independent variables. Under this method, a questionnaire was used to elicit all the relevant data from our respondents for the purpose of gaining understanding and evaluating the relationships among the variables studied.

3.4 Sampling Technique

Within the selected zones, we adopted the simple random sampling (SRS) technique to choose the members of our sample. By this method, each accounting graduate working in the twelve (12) states had an equal chance of being selected into the sample. The researcher or his assistants visited the respondents during their working hours and administered the questionnaire after obtaining their permission.

3.5 Sources of Data

We obtained the data for this study from primary sources. The primary data were elicited through a questionnaire administered to the respondents. One hundred (100) copies of the questionnaire were administered in each state, making a total of one thousand, two hundred (1,200) copies in all. The respondents were selected at random during their working hours. This enabled the researcher or his assistants to meet the respondents personally and so obtain the best responses to the questionnaire.

3.6 The Research Instrument

The research instrument used in this study is the questionnaire, which is shown as an Appendix to this work. The questionnaire was given to respondents to complete after they had been randomly selected by the researcher. In all, there are thirty-five (35) questions, all drawn to test the five hypotheses stated in the first chapter of this work. The questionnaire was adapted from the work of Bundy and Norris (1992).

3.7 Questionnaire Administration

The research instrument was administered with the help of research assistants. Six (6) B.Sc. degree holders in Accounting and Business Administration were employed on a temporary basis. As soon as the terms of employment were concluded, they were trained. The training first familiarised them with how to approach people and elicit responses from them; second, the questionnaire was read and explained to them, and they were given the opportunity to ask questions on any items they did not understand. To ensure that the questionnaires were not manipulated by any of the research assistants, unscheduled visits were made by the researcher to the states and cities where the questionnaires were being administered. At the end of the exercise, one thousand, one hundred and thirty-five (1,135) copies of the questionnaire were retrieved, out of which one thousand, one hundred and fourteen (1,114) were properly completed and used for the analysis. The remaining twenty-one (21) copies, which were not properly completed, were separated and discarded.
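The return figures above imply the response rates used in the rest of the analysis. A small check of that arithmetic (the figures are the ones reported in this section):

```python
# Checking the questionnaire return figures reported above.
distributed = 100 * 12          # 100 copies in each of the 12 states
retrieved = 1135                # copies returned by respondents
usable = 1114                   # copies properly completed and analysed

retrieval_rate = retrieved / distributed * 100
usable_rate = usable / distributed * 100
discarded = retrieved - usable

print(f"Distributed: {distributed}")
print(f"Retrieval rate: {retrieval_rate:.1f}%")     # about 94.6%
print(f"Usable response rate: {usable_rate:.1f}%")  # about 92.8%
print(f"Discarded: {discarded}")                    # 21
```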

3.8 Method of Data Analysis

The study employed a combination of statistical and econometric tools in the data estimation and analysis procedures. The multidimensional nature of the dependent variable suggests that estimation may be biased unless scientific dimension-reduction techniques are applied before the subsequent analysis. Consequently, the study employed exploratory factor analysis followed by multivariate regression using the Ordinary Least Squares (OLS) technique. Both methods are discussed below.

Factor Analysis

In behavioural studies, factor analysis is frequently used to uncover the latent structure (dimensions) of a set of variables and to assess whether instruments measure substantive constructs (Cortina, 1993). It is well known that significant statistical redundancies exist in most primary data generated through questionnaires. The purpose of factor analysis is to reduce the dimensionality of a data set by finding a new set of variables, smaller than the original set, that retains most of the sample's information and eliminates redundancies. Its essential purpose is to describe the covariance relationships among many variables in terms of a few underlying, but unobservable, random quantities called factors. The common factor model proposes that each observed response or measure is influenced partly by underlying common factors and partly by unique factors, neither of which can be directly observed.

In this study, factor analysis was used to reduce the dimensionality of the dependent variable (job preference), which has thirty-five (35) items, to a smaller number of factors. By determining factor classifications through factor analysis of the measured data, and by using these factors as variables instead of the observed responses, we reduce the number of variables to a smaller, more interpretable set. The new variables that emerge, called factors, are uncorrelated and are ordered by the fraction of the total information each retains.
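The reduction described above can be sketched numerically. The following is an illustrative example on simulated data (not the study's responses): correlated items are generated from two latent factors, and the number of factors to retain is read off the eigenvalues of the correlation matrix using the common Kaiser criterion (eigenvalue greater than 1).

```python
import numpy as np

# Simulated data standing in for questionnaire items: two latent factors
# drive six observed, intercorrelated items.
rng = np.random.default_rng(0)
n, p = 200, 6
latent = rng.normal(size=(n, 2))                      # two common factors
loadings = rng.normal(size=(2, p))
items = latent @ loadings + 0.5 * rng.normal(size=(n, p))

R = np.corrcoef(items, rowvar=False)                  # p x p correlation matrix
eigvals = np.linalg.eigvalsh(R)[::-1]                 # eigenvalues, descending

# Kaiser criterion: retain components whose eigenvalue exceeds 1
n_factors = int(np.sum(eigvals > 1.0))
print("eigenvalues:", np.round(eigvals, 2))
print("factors retained:", n_factors)
```

The retained components then replace the original items in subsequent analysis, which is the sense in which the factors "summarise" the 35 job-preference items.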

Thus exploratory factor analysis (EFA) explores empirical data in order to discover characteristic features and interesting relationships without imposing a definite model on the data, unlike confirmatory factor analysis (CFA). The focus of factor analysis in this study is to reduce the redundancy among the variables/items used to capture employment preference by using a smaller number of factors.

In factor analysis, we represent the observed variables (y1, y2, …, yp) as linear combinations of a small set of random variables {f1, f2, …, fm} (m < p) called factors. The factors are underlying constructs or latent variables that "generate" the y's. If the original variables (y1, y2, …, yp) are at least moderately correlated, the basic dimensionality of the system is less than p. The goal of factor analysis is to reduce the redundancy among the variables by using a smaller number of factors.

As a result of the "explosive" nature of the dependent variable (job preference), arising from the number of variables underpinning the construct, there is a need for a parsimonious summarisation of the variables. This is done using the factor scores generated from an exploratory factor analysis of the primary data. The factor scores are then regressed on the explanatory variables to generate the necessary beta estimates. This is important because dimension reduction is one of the major tasks of multivariate analysis, and it is especially critical for multivariate regressions (Maitra & Yan, 2008).

The principal component extraction method with an oblique rotation (direct Oblimin with Kaiser normalisation) will be used to conduct the exploratory factor analysis. Before conducting the analysis, however, we shall undertake certain diagnostic checks on the data. Specifically, we shall examine the Kaiser-Meyer-Olkin measure of sampling adequacy (KMO index) and Bartlett's test of sphericity to ascertain whether significant intercorrelations exist between the items, that is, to test the null hypothesis that the correlation matrix is an identity matrix.
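The two diagnostics named above can be computed directly from the correlation matrix. The sketch below works them out by hand on simulated data (an assumption for illustration; in practice a statistics package such as SPSS would report both): the KMO index compares ordinary correlations with anti-image (partial) correlations, and Bartlett's statistic tests whether the correlation matrix is an identity matrix.

```python
import numpy as np

# Simulated, moderately intercorrelated items standing in for the real data.
rng = np.random.default_rng(1)
n, p = 200, 6
base = rng.normal(size=(n, 1))
X = base + 0.8 * rng.normal(size=(n, p))

R = np.corrcoef(X, rowvar=False)
S = np.linalg.inv(R)
# Anti-image (partial) correlations from the inverse correlation matrix.
partial = -S / np.sqrt(np.outer(np.diag(S), np.diag(S)))
off = ~np.eye(p, dtype=bool)                      # off-diagonal mask

# Kaiser-Meyer-Olkin measure of sampling adequacy (0 to 1; > 0.5 acceptable).
kmo = np.sum(R[off] ** 2) / (np.sum(R[off] ** 2) + np.sum(partial[off] ** 2))

# Bartlett's test of sphericity: chi-square statistic against H0: R = I.
chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
df = p * (p - 1) / 2
print(f"KMO = {kmo:.3f}, Bartlett chi2 = {chi2:.1f} on {df:.0f} df")
```

A KMO value above the conventional 0.5 threshold and a significant Bartlett statistic together justify proceeding with the factor analysis.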

Ordinary Least Squares Regression

The study uses ordinary least squares (OLS) regression analysis as the data analysis method. Gujarati (2003) identifies four critical assumptions that must be examined before utilising OLS regression: normality, the absence of multicollinearity, homoscedasticity and the absence of autocorrelation. Given that the data are not time-series, however, the autocorrelation assumption does not apply. For normality, the study will use the Kolmogorov-Smirnov test, owing to the non-parametric nature of the data. Multicollinearity is one of the most important problems facing the use of multiple regression analysis because of the possibility of collinearity between the independent variables; to detect it, the Variance Inflation Factor (VIF) will be computed for each independent variable. To test for heteroscedasticity, which concerns the constancy of the error variance, the Breusch-Pagan-Godfrey test was performed on the residuals as a precaution. Where heteroscedasticity is found in the residuals, one appropriate remedy is to adopt robust standard errors, which address errors that are not independent and identically distributed.
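The VIF and the Breusch-Pagan check described above can both be sketched with basic linear algebra. The example below uses simulated data (an assumption, not the study's dataset): the VIFs are read off the diagonal of the inverse correlation matrix of the regressors, and the Breusch-Pagan LM statistic is n times the R-squared from regressing the squared residuals on the regressors.

```python
import numpy as np

# Simulated regressors and response for illustration only.
rng = np.random.default_rng(2)
n = 300
X = rng.normal(size=(n, 3))
y = 1.0 + X @ np.array([0.5, -0.3, 0.2]) + rng.normal(size=n)

# VIFs: diagonal of the inverse correlation matrix of the regressors.
# Values above 10 are commonly read as serious multicollinearity.
vif = np.diag(np.linalg.inv(np.corrcoef(X, rowvar=False)))

# OLS fit, then the Breusch-Pagan LM statistic on the squared residuals.
Xc = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
resid = y - Xc @ beta
u2 = resid ** 2
g, *_ = np.linalg.lstsq(Xc, u2, rcond=None)
r2 = 1 - np.sum((u2 - Xc @ g) ** 2) / np.sum((u2 - u2.mean()) ** 2)
lm = n * r2   # compared with a chi-square, df = number of regressors
print("VIFs:", np.round(vif, 2), "LM:", round(lm, 2))
```

If the LM statistic exceeds the chi-square critical value, robust standard errors would be substituted for the ordinary ones, as noted above.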

Model Specification

The factor analysis model expresses each variable as a linear combination of the underlying common factors {f1, f2, …, fm}, with an accompanying residual term to account for the part of the variable that is unique. For any observation vector y, the model is as follows:

y1 − μ1 = λ11 f1 + λ12 f2 + … + λ1m fm + e1

y2 − μ2 = λ21 f1 + λ22 f2 + … + λ2m fm + e2

⋮

yp − μp = λp1 f1 + λp2 f2 + … + λpm fm + ep

where m is the number of factors, which should be substantially smaller than p; otherwise we do not achieve a parsimonious description of the variables as functions of a few underlying factors. μi represents the mean of the ith variable. The coefficients λij are the weights, usually called factor loadings, so that λij is the loading of the ith variable on the jth factor, and fj represents the jth factor. Under appropriate assumptions, λij indicates the importance of the factor fj to the variable yi and can be used in the interpretation of fj. The variable ei describes the residual variation specific to the ith variable. The factors fj are often called the common factors, while the residual variables ei are often called the specific (unique) factors.
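The p equations above can be written compactly in matrix form. Under the standard assumptions of the common factor model (the factors and specific factors are uncorrelated, Cov(f) = I, and Cov(e) = Ψ is diagonal), the model also implies a decomposition of the covariance matrix into a common part and unique variances, a sketch of which is:

```latex
% Common factor model in matrix form: y and \mu are p x 1, \Lambda is p x m
y - \mu = \Lambda f + e
% Implied covariance structure: common factors plus unique variances
\Sigma = \operatorname{Cov}(y) = \Lambda \Lambda' + \Psi
```

It is this decomposition that the extraction method estimates when it recovers the loading matrix Λ from the sample correlation matrix.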

Following the computation of scores for the factors generated from the exploratory factor analysis conducted on the dependent variable (employment preference), the factor scores will be regressed on the explanatory variables. The regression models are specified thus:

Empr-f1 = 0 + 1 Gender + 2 Indus + 3 Socvs + 4 Age + 5 Orgsize + e1

Empr-f2 = 0 + 2 Gender + 3 Indus + 4 Socvs + 5 Age + 6 Orgsize + e2

Empr-f3 = 0 + 7 Gender + 8 Indus + 9 Socvs + 10 Age + 11 Orgsize + e3

Empr-fn = 0 + n Gender + n Indus + n Socvs + n Age + n Orgsize + en

Where: Empr-f1 = employment preference factor score 1

Empr-f2 = employment preference factor score 2

Empr-f3 = employment preference factor score 3

Empr-fn = employment preference factor score n

Gender = respondent's gender

Indus = industry

Socvs = societal values

Age = respondent's age

Orgsize = organisational size

e1 – en = error terms

β1 – βn = slope coefficients

A priori signs: β1, …, βn > 0
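The estimation of any one of the models above can be sketched as follows. The example uses the variable names from the specification, but the data are simulated and the codings (dummy variables for Gender and Indus, numeric scores for the rest) are assumptions made for illustration, not the study's actual measurements.

```python
import numpy as np

# Simulated data for one regression, Empr-f1 on the five regressors.
rng = np.random.default_rng(3)
n = 500
gender = rng.integers(0, 2, n).astype(float)   # assumed dummy: 0/1
indus = rng.integers(0, 2, n).astype(float)    # assumed dummy: 0/1
socvs = rng.normal(size=n)                     # societal-values score
age = rng.integers(21, 46, n).astype(float)    # age in years
orgsize = rng.normal(size=n)                   # organisational size measure

# Simulated factor score with known positive slopes, plus noise.
empr_f1 = (0.5 * gender + 0.4 * indus + 0.3 * socvs
           + 0.01 * age + 0.1 * orgsize + rng.normal(size=n))

# OLS via least squares on the design matrix with an intercept.
X = np.column_stack([np.ones(n), gender, indus, socvs, age, orgsize])
beta, *_ = np.linalg.lstsq(X, empr_f1, rcond=None)
for name, b in zip(["const", "Gender", "Indus", "Socvs", "Age", "Orgsize"],
                   beta):
    print(f"{name:8s} beta = {b: .3f}")
```

The signs of the estimated slopes would then be compared with the a priori expectation that each β is positive.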