Construction of a Research Questionnaire
Section 2, Question 3
Describe what is involved in testing and validating a research questionnaire. (The answer to question 3 should be no fewer than 6 pages, including references)
The following criteria will be used in assessing question 3:
- Construction of appropriate questionnaire items
- Sophistication of understanding of crucial design issues
- Plan for use of appropriate sampling method and sample
- Plan to address validity and reliability in a manner appropriate to methodology
In order to construct an appropriate research questionnaire, it is imperative to first have a clear understanding of the scope of the research project. It would be most beneficial to solidify these research goals in written form, and then focus the direction of the study to address the research questions. After developing the research questions, the researcher would further read the related literature regarding the research topic, specifically searching for ideas and theories based on the analysis of the construct(s) to be measured. Constructs are essentially “mathematical descriptions or theories of how our test behavior is either likely to change following or during certain situations” (Kubiszyn & Borich, 2007, p. 311). It is important to know what the literature says about these construct(s) and the most accurate, concise ways to measure them. Constructs are psychological in nature and are not tangible, concrete variables because they cannot be observed directly (Gay & Airasian, 2003). Hopkins (1998) explains that “psychological constructs are unobservable, postulated variables that have evolved either informally or from psychological theory” (p. 99). Hopkins also maintains that when developing the items to measure the construct(s), it is imperative to ask multiple items per construct to ensure they are being adequately measured. Another important aspect in developing items for a questionnaire is to find an appropriate scale for all the items to be measured (Gay & Airasian, 2003). Again, this requires researching survey instruments similar to the one being developed for the current study and also determining what the literature says about how to best measure these constructs.
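To make the idea of measuring each construct with multiple items on a common scale concrete, the minimal sketch below (written in Python, with hypothetical construct names and item wordings invented purely for illustration) shows how several 5-point Likert items might be grouped under each construct in an item bank.

```python
# Hypothetical item bank: each construct is measured by several items on a common scale
LIKERT_SCALE = {
    1: "Strongly disagree",
    2: "Disagree",
    3: "Neither agree nor disagree",
    4: "Agree",
    5: "Strongly agree",
}

ITEMS_BY_CONSTRUCT = {
    "teacher_self_efficacy": [
        "I am confident I can manage disruptive behavior in my classroom.",
        "I can motivate students who show low interest in schoolwork.",
        "I can adapt my lessons to the needs of individual students.",
    ],
    "job_satisfaction": [
        "Overall, I am satisfied with my current position.",
        "I would recommend this school as a good place to work.",
    ],
}
```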
The next step in designing the research questionnaire is to validate it, that is, to ensure it measures what it is intended to measure. In this case, the researcher would first establish construct validity evidence: evidence that the research questionnaire measures the ideas and theories related to the research project. An instrument has construct validity evidence if “its relationship to other information corresponds well with some theory” (Kubiszyn & Borich, 2007, p. 309). Another reason to go through the validation process is to minimize factors that can weaken the validity of a research instrument, including unclear test directions, confusing and/or ambiguous test items, and vocabulary and sentence structures too difficult for test takers (Gay & Airasian, 2003).
After developing a rough draft of the questionnaire, including the items that measure the construct(s) for this study, the researcher should then gather a small focus group that is representative of the population to be studied (Johnson, 2007). The purpose of this focus group is to discuss the research topic, to gain additional perspectives about the study, and to consider new ideas about how to improve the research questionnaire so it is measuring the constructs accurately. This focus group provides the researcher with insight on what questions to revise and what questions should be added or deleted, if any. The focus group can also provide important information as to what type of language and vocabulary is appropriate for the group to be studied and how to best approach them (Krueger & Casey, 2009). All of this group’s feedback would be recorded and used to make changes, edits, and revisions to the research questionnaire.
Another step in the validation process is to have a panel of experts (fellow researchers, professors, and others with expertise in the field of study) read and review the survey instrument, check it for grammatical errors, wording issues, and unclear items (such as loaded or biased questions), and offer their feedback. Their input regarding the validity of the items is also vital. As with the focus group, any feedback should be recorded and used to make changes, edits, and revisions to the research questionnaire (Johnson, 2007).
The next step entails referring to the feedback received from the focus group and panel of experts. Any issues detected by these groups must be addressed so the research questionnaire can serve its purpose (Johnson, 2007). Next, the researcher should revise the questions and the questionnaire, considering all the input obtained and making any other changes that would improve the instrument. Any feedback regarding the wording of items must be carefully considered, because the participants in the study must understand exactly what the questions are asking so they can respond accurately and honestly. It is also imperative to consider the feedback regarding the directions of the research questionnaire. The directions should be clear and concise, leaving nothing to personal interpretation (Suskie, 1996); all participants should be able to read them and know precisely how to respond and complete the questionnaire. To encourage honest responses, the directions should state that answers are anonymous (if applicable) and that if participants mistakenly write any identifying marks on the questionnaire, those marks will be erased immediately. If that type of scenario is not possible in the design of the study, the researcher should still explain how the information obtained in the study will be kept confidential and assure participants that their personal answers and other information will not be shared with anyone. Whatever the research design, the goal is to have participants answer the questions honestly so the most accurate results are obtained. Assuring anonymity and/or confidentiality is another way to help ensure that valid data are collected.
The next phase entails pilot-testing the research questionnaire on a sample of people similar to the population to which the survey will ultimately be administered. This group should be comprised of approximately 20 people (Johnson, 2007), and the instrument should be administered under conditions similar to those of the actual study. The purpose of this pilot test is twofold: the first reason is to once again check the validity of the instrument by obtaining feedback from this group, and the second is to conduct a reliability analysis. Reliability is basically “the degree to which a test consistently measures whatever it is measuring” (Gay & Airasian, 2003, p. 141). A reliability analysis is essential when developing a research questionnaire because a research instrument lacking reliability cannot measure any variable better than chance alone (Hopkins, 1998). Hopkins goes on to say that reliability is an essential prerequisite to validity, because a research instrument must consistently yield reliable scores before there can be any confidence in its validity. After administering the research questionnaire to this small group, a reliability analysis of the results must be done. The reliability analysis to be used is Cronbach’s alpha (Hopkins, 1998), which allows an overall reliability coefficient to be calculated, as well as coefficients for each of the sub-constructs (if any). The overall instrument, as well as each sub-construct, should yield alpha statistics greater than .70 (Johnson, 2007). This analysis determines whether the researcher needs to revise the items or can proceed with administering the instrument to the target population. The researcher should also use the feedback obtained from this group to ensure that the questions are clear and present no ambiguity, and any other feedback should be used to address remaining problems with the research questionnaire. Should there be problems with particular items, the necessary changes would be made to ensure each item measures what it is supposed to measure. However, should an entire construct yield reliability and/or validity problems, the instrument would have to be revised, reviewed again by the panel of experts, and retested on another small group. After the instrument has gone through this process and has been corrected and refined to acceptable validity and reliability, it is time to begin planning to administer it to the target population.
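As a concrete illustration of the reliability step described above, the sketch below computes Cronbach’s alpha for a single construct from hypothetical pilot data (the respondent scores, the use of Python and NumPy, and the function name are assumptions made for illustration, not part of the cited sources); the .70 guideline follows the threshold noted above. In practice, the same calculation would be run for the overall instrument and for each sub-construct.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a (respondents x items) matrix of scores on one construct."""
    scores = np.asarray(item_scores, dtype=float)
    k = scores.shape[1]                              # number of items in the construct
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item across respondents
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of respondents' summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical pilot data: six respondents answering four 5-point Likert items
pilot_scores = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [4, 4, 5, 4],
    [1, 2, 2, 1],
]

alpha = cronbach_alpha(pilot_scores)
print(f"Cronbach's alpha = {alpha:.2f}")  # alphas above .70 suggest acceptable reliability
```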
Once validity and reliability evidence has been established for the research questionnaire, the next step is to begin planning how to administer it to the participants of the study. To begin this process, it is imperative to define the target population of the study. Unfortunately, it is often impossible to gather data from everyone in a population because of feasibility and cost. Therefore, sampling must be used to collect data. According to Gay and Airasian (2003), “Sampling is the process of selecting a number of participants for a study in such a way that they represent the larger group from which they were selected” (p. 101). This larger group is the population, and the population is the group to which the results will ideally generalize. However, out of any population, the researcher will have to determine who is accessible or available; in most studies, the population actually studied is a realistic, accessible choice rather than the full target population (Gay & Airasian, 2003). After choosing the population to be studied, it is important to define it clearly so the reader will know to whom the findings apply.
The next step in the research study is to select a sample, and the quality of this sample will ultimately determine the integrity and generalizability of the results. Ultimately, the researcher should seek a sample that is representative of the defined population to be studied. Ideally, the researcher wants to minimize sampling error by using random sampling techniques, which include simple random sampling, stratified sampling, cluster sampling, and systematic sampling (Gay & Airasian, 2003). According to the authors, these techniques operate much as they are named: simple random sampling uses some means of randomly selecting an adequate number of participants from the population; stratified random sampling allows a researcher to sample subgroups so that they are represented in the same proportions in which they exist in the population; and cluster sampling randomly selects intact groups from a larger population (Gay & Airasian, 2003). Systematic sampling is a variation of random sampling in which the researcher selects, for example, every tenth person from a list of the population. These four random sampling techniques, or variations thereof, are the most widely used random sampling procedures. While random sampling offers the best chance of obtaining unbiased samples, it is not always possible. In that case, the researcher resorts to nonrandom sampling techniques, which include convenience sampling, purposive sampling, and quota sampling (Gay & Airasian, 2003). Convenience sampling is simply sampling whoever happens to be available, while purposive sampling is where the researcher selects a sample based on knowledge of the group to be sampled (Gay & Airasian, 2003). Lastly, quota sampling is a technique used in large-scale surveys when a population of interest is too large to define; with quota sampling, the researcher usually targets a specific number of participants with specific demographic characteristics (Gay & Airasian, 2003).
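To make the contrast between these selection procedures concrete, the minimal sketch below illustrates simple random, systematic, and stratified sampling. It assumes Python, a hypothetical sampling frame of 1,000 numbered participants, and invented strata labels; it is a sketch of the general procedures rather than a prescription for any particular study.

```python
import random

# Hypothetical sampling frame: 1,000 potential participants identified by number
population = list(range(1, 1001))
sample_size = 100

# Simple random sampling: every member has an equal chance of selection
simple_random_sample = random.sample(population, sample_size)

# Systematic sampling: select every k-th member after a random starting point
k = len(population) // sample_size        # sampling interval (every 10th person here)
start = random.randrange(k)
systematic_sample = population[start::k]

# Stratified sampling: sample each subgroup in proportion to its share of the population
strata = {
    "freshmen": population[:400],         # hypothetical subgroup boundaries
    "sophomores": population[400:700],
    "juniors_and_seniors": population[700:],
}
stratified_sample = [
    person
    for members in strata.values()
    for person in random.sample(members, round(sample_size * len(members) / len(population)))
]
```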
The sampling method ultimately chosen will depend upon the population to be studied. In an ideal scenario, random sampling would be employed, which strengthens the generalizability of the results. However, should random sampling not be possible, the researcher would most likely resort to convenience sampling. Although not as powerful as random sampling, convenience sampling is frequently used and can be useful in educational research (Johnson, 2007). Of course, whatever sampling method is employed, it is imperative to have an adequate sample size. As a general rule, the larger the population, the smaller the percentage of it required to obtain a representative sample (Gay & Airasian, 2003). The researcher would determine the size of the population being studied (if possible) and then determine an adequate sample size (Krejcie & Morgan, 1970, p. 608), as sketched after this paragraph. Ultimately, it is desirable to obtain as many participants as possible rather than merely to achieve a minimum (Gay & Airasian, 2003). After an adequate sample size for the study has been determined, the researcher should proceed with administering the research questionnaire until the desired sample size is obtained. The questionnaire should be administered under similar conditions for all participants, and potential participants should know and understand that they are not obligated in any way to participate and that they will not be penalized for declining (Suskie, 1996). Participants should also know how to contact the researcher should they have questions about the research project, including the ultimate dissemination of the data and the results of the study. The researcher should exhaust all efforts to ensure participants understand what is being asked of them so they can make a clear judgment regarding their consent to participate. Should any potential participants be under the age of 18, the researcher would need to obtain parental permission for them to participate. Lastly, it is imperative that the researcher obtain approval from the Institutional Review Board (IRB) before the instrument is field-tested and administered to participants, and people who participate in the study should understand that the research project has been approved through the university’s IRB process.
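The Krejcie and Morgan (1970) reference cited above is based on a single formula relating population size to required sample size. The sketch below implements it (in Python, an assumption for illustration), using the values reported in their article: a chi-square of 3.841 for one degree of freedom at the .95 confidence level, an assumed population proportion of .50, and a margin of error of .05.

```python
def krejcie_morgan_sample_size(population_size, chi_sq=3.841, p=0.5, d=0.05):
    """Required sample size per Krejcie & Morgan (1970).

    chi_sq: table value of chi-square for 1 degree of freedom at the .95 confidence level
    p:      assumed population proportion (.50 maximizes the required sample size)
    d:      desired degree of accuracy, expressed as a proportion (.05)
    """
    n = population_size
    return round((chi_sq * n * p * (1 - p)) / (d ** 2 * (n - 1) + chi_sq * p * (1 - p)))

# For example, a population of 1,500 calls for a sample of roughly 306 participants
print(krejcie_morgan_sample_size(1500))
```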
References
Gay, L. R., & Airasian, P. (2003). Educational research: Competencies for analysis and applications (7th ed.). Upper Saddle River, NJ: Pearson Education, Inc.
Hopkins, K. D. (1998). Educational and psychological measurement and evaluation (8th ed.). Boston: Allyn & Bacon.
Johnson, J. T. (2007). Instrument development and validation [Class handout]. Department of Educational Leadership & Research, The University of Southern Mississippi.
Krejcie, R. V., & Morgan, D. W. (1970). Determining sample size for research activities. Educational and Psychological Measurement, 30, 607-610.
Krueger, R. A., & Casey, M. A. (2009). Focus groups: A practical guide for applied research (4th ed.). Thousand Oaks, CA: Sage Publications, Inc.
Kubiszyn, T., & Borich, G. (2007). Educational testing and measurement: Classroom application and practice (8th ed.). Hoboken, NJ: John Wiley & Sons.
Suskie, L. A. (1996). Questionnaire survey research: What works (2nd ed.). Tallahassee, FL: Association for Institutional Research.