
Target Population And Sampling Psychology Essay

This chapter describes the research sample, methodology and data elicitation techniques used in this study. I describe the process of constructing and designing the two instruments, which encompass both quantitative and qualitative data. A mixed methods approach was used to provide varying informational perspectives, with the aim of understanding the research context and problem and suggesting some guiding points for any necessary improvement. Finally, some information related to the validity and reliability of the instruments is introduced.

3.2. Target population and sampling

A research population 'consists of individuals or elements, and these could be persons or events […] anything at all of research interest, including observations, judgments, abstract qualities, etc.' (Sapsford & Jupp, 2006: 27). The target population in this study is the 30 teachers of Arabic as a foreign language at the Higher Language Institute (HLI) at a Syrian university, described in Chapter 1. Ten of those teachers began their postgraduate studies in France or the Sudan; the others hold a degree in Arabic language and literature and came directly to teach HLI Arabic courses for overseas students. The latter group of teachers is the target of this study. My sampling frame - 'whatever is being used to identify the elements in each sampling unit' (Sapsford & Jupp, 2006: 28) - was based on three criteria: a) teachers should be among the 20 part-time teachers who hold no degree higher than a BA; b) they should teach only the four-week courses, in which teachers are completely independent in designing tests; and c) they should be those who might lose their jobs at the decision of the head of department. The sample of this study was selected according to these criteria.

3.3. Research design

There are three broad categories of research design: quantitative, qualitative and mixed methods. In quantitative research, the investigator relies on strategies that invoke the positivist worldview; those strategies involve complex experiments with many variables and treatments (Creswell, 2009: 12). Quantitative research adopts the scientific method and focuses on controlling variables, gathering measurable evidence and analysing the relationships between the resulting numbers (Sapsford & Jupp, 2006: 2).

By contrast, qualitative research is 'an inquiry process of understanding' in which the researcher develops a 'complex, holistic picture, analyses words, reports detailed views of informants, and conducts the study in a natural setting' (Creswell, 1998: 15). Denzin & Lincoln claim that qualitative research involves an interpretive and naturalistic approach: 'This means that qualitative researchers study things in their natural settings, attempting to make sense of, or to interpret, phenomena in terms of the meanings people bring to them' (2000: 3). Therefore, in order to identify the variables that influence the knowledge of my subject teachers and then to explore them in more depth, I decided to mix the methods of my data elicitation process.

Over the past 15 years, mixed methods research has increasingly been seen as 'a third approach in research methodology' (Dornyei, 2007: 42). It is a procedure for collecting, analysing and mixing both quantitative and qualitative data at some stage of the research process within a single study, in order to understand a research problem more completely (Creswell, 2009: 204-205; Dornyei, 2007: 44). 'The researcher bases the inquiry on the assumption that collecting diverse types of data best provides an understanding of the research problem' (Creswell, 2009: 18). Creswell and Plano Clark state that this kind of research is a design with both philosophical assumptions and methods of inquiry, whose basic premise is that using qualitative and quantitative approaches in combination provides a better understanding of the problem than either approach alone; it also makes up for the inherent weaknesses of each type of method used (2007: 18). Another welcome benefit of mixing methods, stated by Dornyei, is that 'mixed methods research has a unique potential to produce evidence for the validity of research outcomes through the convergence and corroboration of the findings' (2007: 45). Although the choice of methods turns on whether the intent is to specify the type of information to be collected in advance of the study or to allow it to emerge from participants in the project (Creswell, 2009: 16), I found it more valuable to use two instruments that elicit data of different natures, so that words can be used to add meaning to numbers and numbers can be used to add precision to words (Dornyei, 2007: 45).

This study adopts a mixed methods (Tashakkori & Teddlie, 2003) research framework using one of the most popular mixed methods designs in educational research: the sequential explanatory mixed methods design, whose procedures 'are those in which the researcher seeks to elaborate on or expand the findings of one method with another method' (Creswell, 2009: 14). Accordingly, this study begins with a quantitative method in which a theory or concept is tested, followed by a qualitative method involving detailed exploration with a few individuals.

My rationale for adopting mixed methods research in the current study is that using both approaches provides a more comprehensive account of the concept being investigated. One form of evidence alone might not give a full picture, and the account would be incomplete. Using both forms therefore provided a better picture of how teachers learn about testing in the circumstances described in Chapter 1. It allowed me to use different styles of expression, such as words and numbers, and multiple ways of thinking, combining inductive and deductive reasoning. In fact, this is closer to how people interact in reality: they first ask broad questions and then listen to each other's stories. In the first phase, numeric data was collected using an email-based questionnaire. The goal of this phase was to identify the potential effect of the selected variables on teachers' reactions and attitudes, and to allow informants to be purposefully selected for the second phase. In the second phase, a qualitative approach was used to collect data through individual semi-structured interviews, to help explain why certain contextual or attitudinal factors tested in the first phase may have a positive or negative influence on teacher learning and attitudes towards designing tests. My rationale for choosing this approach is that the quantitative data provides a general picture of the research context and focus, i.e. teachers' years of experience, whether or not they were trained, and their knowledge resources, while the qualitative data and its analysis refine and explain those statistical results by exploring participants' views in more depth.
Priority was given to the quantitative analysis, since the analysis of quantitative data in the first phase 'can yield extreme or outlier cases which will give insight later on when followed by the qualitative interviews about the reasons of divergence from the quantitative sample' (Creswell, 2009: 218).

3.3.1. Phase 1: quantitative

Data collection

To collect quantitative data I used a questionnaire, which is a relatively popular means of collecting data: 'It enables the researcher to collect data in field settings, and the data themselves are more amenable to quantification than discursive data' (Nunan, 1992: 143).

The questionnaire I designed aimed to find out about teachers in this context and their testing practices. It was distributed online to 20 randomly chosen teachers, who constitute 66% of the population, and drew nine responses. It contained three types of items, as classified by Dornyei (2007: 102).

Factual items aimed to give a general background about the teachers and to identify variables in their profiles expected to influence the findings later on.

Behavioural items aimed to explore how teachers design their tests and how they manage to access the knowledge that enables them to attain test validity, whether through interaction, observation or imitation.

Attitudinal items were used to find out what teachers think about training and which field of interest they wish to know more about, whether declarative knowledge, procedural knowledge or both.

This questionnaire was self-developed and contained 13 items of different formats: seven multiple-choice items asking either for one option or all that apply, three dichotomous (yes/no) items, one self-assessment question and two open-ended questions.

Questions in the survey were classified by content into sections. One section was dedicated to exploring the context of the research: teachers' years of teaching, responsibility for designing tests and their training opportunities, if any. Another section contained questions related to teachers' feelings towards test design, their current testing practices and their knowledge resources. To answer these questions, teachers were given prompts and had to select one, or a maximum of two. For the question about their usual purposes in writing tests, teachers were given four prompts and had to rank them, where 1 represents the most important purpose and 4 the least important.
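A ranking item of this kind is typically summarised by the mean rank each prompt receives across respondents. The sketch below illustrates this with invented rankings; the purpose names and numbers are hypothetical placeholders, not the study's actual data:

```python
# Hypothetical rankings: each teacher orders four test purposes,
# 1 = most important, 4 = least important. Purpose names are illustrative.
rankings = [
    {"placement": 1, "achievement": 2, "diagnosis": 3, "proficiency": 4},
    {"placement": 2, "achievement": 1, "diagnosis": 4, "proficiency": 3},
    {"placement": 1, "achievement": 3, "diagnosis": 2, "proficiency": 4},
]

# Mean rank per purpose: a lower mean rank means teachers rated that
# purpose as more important on average.
mean_rank = {
    purpose: sum(r[purpose] for r in rankings) / len(rankings)
    for purpose in rankings[0]
}

for purpose, rank in sorted(mean_rank.items(), key=lambda kv: kv[1]):
    print(purpose, round(rank, 2))
```

Reporting the mean rank (rather than only the modal first choice) preserves information from every position in each teacher's ordering.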

The questionnaire was piloted by email with three randomly chosen teachers. Based on the pilot results, some survey items were revised. Two days before the final version of the main instrument was sent, teachers were notified that their participation was voluntary and that the input they provided would be important for this study.

Variables in the questionnaire

A variable 'refers to a characteristic or attribute of an individual or an organisation that can be measured or observed and that varies among the people or organisation being studied'. Following the temporal order, the variables measured in this study are independent; they 'probably cause, influence or affect the outcomes' (Creswell, 1998: 50). They include teachers' years of experience, the institution's policy, collaborative behaviour when designing tests and attitudes towards training, in addition to potential prior knowledge or training. The variables of this study relate to the main research question: 'What do teachers need to know about designing tests?'

Data analysis

The type of data analysed was numeric information gathered on the instrument's scales, reporting the voice of participants. Themes or patterns that emerged from the data were presented and interpreted. Since the instrument was administered in Arabic, part of collating the data was translating the instrument and its prompts into English and then gathering the set of results. Data was coded according to the variables of the questionnaire; I specified the variables after a process of data screening. Initially, I read the data in Arabic, then translated it into English, and cross-checked where some information was ambiguous. Findings were summarised with descriptive statistics, describing general tendencies in the data (Dornyei, 2007: 213). Finally, the results of the analysis were reported in the form of a discussion.
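A descriptive summary of coded questionnaire data of this kind can be sketched as follows; the variable names and coded values are hypothetical placeholders invented for illustration, not the study's actual responses:

```python
from collections import Counter

# Hypothetical coded questionnaire records; all names and values are
# illustrative placeholders, not the study's actual data.
responses = [
    {"years_experience": 3, "trained": "no",  "knowledge_source": "colleagues"},
    {"years_experience": 7, "trained": "no",  "knowledge_source": "imitation"},
    {"years_experience": 2, "trained": "yes", "knowledge_source": "colleagues"},
    {"years_experience": 5, "trained": "no",  "knowledge_source": "own tests"},
    {"years_experience": 4, "trained": "no",  "knowledge_source": "colleagues"},
]

# General tendencies: mean of the numeric variable, frequency counts
# for the categorical ones.
years = [r["years_experience"] for r in responses]
print("Mean years of experience:", sum(years) / len(years))
print("Trained:", Counter(r["trained"] for r in responses))
print("Knowledge sources:", Counter(r["knowledge_source"] for r in responses))
```

Frequency counts and means of this sort are exactly the 'general tendencies' that descriptive statistics report before any interpretive discussion.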

Reliability and Validity

Validity and reliability are two factors any researcher should be concerned about when designing a study, analysing results and judging the quality of the study. In quantitative research, the reliability and validity of the instrument are very important for decreasing errors that might arise from measurement problems. One way to work towards the validity and reliability of a survey is to run a trial investigation and make any necessary changes: 'A pilot investigation is a small-scale trial before the main investigation, intended to assess the adequacy of the research design and of the instruments to be used for data collection; piloting the data collection instruments is essential, whether interview schedules or questionnaires are used' (Sapsford & Jupp, 2006: 103).

It is worth noting that the questionnaire used in this study was piloted and developed twice. Trialling the instrument highlighted some weaknesses related to the wording of some questions and led to changing the types of others. It also supported its internal validity and reliability. The final set of data collected was in accordance with the research questions and covered all aspects related to the issue under scrutiny, which supports the accuracy and precision of the measurement procedure.

3.3.2. Phase 2: qualitative

Data collection

To collect qualitative data I used the interview, which is one of the commonest qualitative methods used in small-scale research. In the teaching profession, when you want to get information or exchange ideas, the natural thing to do is to talk to people. When seeking to 'examine an issue related to a certain group of individuals', those people can be interviewed at some length to determine how they have personally experienced the issue under investigation (Creswell, 2009: 18).

Due to security considerations, one of the two interviews I conducted was by email, which is 'an obvious medium, in some ways, for initiating a conversation and interviewing in a relatively unstructured way' (Sapsford & Jupp, 2006: 132). The conversation was asynchronous; replies were not posted immediately after a question was asked. The other interview took the form of a chat, which usually allows 'more of the features of spoken, face-to-face conversation to be preserved' (Sapsford & Jupp, 2006: 132). Although there is a concern about losing 'the opportunity in either means […] for drawing on non-verbal cues' as in a face-to-face interview, reports on immediate-response computer-mediated interviews have been good, while reports on asynchronous interviewing have been mixed; one reported problem was that responses were formal and constricted (Sapsford & Jupp, 2006: 132).

Two teachers were selected and participated in in-depth interviews on the basis of their experience and reflective thinking. The teachers were chosen deliberately under the general principle of purposeful sampling. Dornyei calls this kind of sampling 'criterion sampling', in which 'the researcher selects participants who meet specific predetermined criteria' (2007: 128). The interviewed teachers had to be part-time teachers, teach the four-week Arabic course and have critical minds and good skills when it comes to talking about their testing practices, the context's problems and their potential needs. Prior to the scheduled chat, one participant received the interview questions and had a chance to think about and retrieve the knowledge and situations needed to respond. The other was sent the questions via email and sent her responses back after more than eight days, owing to the security situation in Syria at that time. The first version of my interview schedule contained four questions; these were basically to figure out what more to ask in this domain and how to correlate the quantitative and qualitative data. The interview was pilot tested with two randomly chosen teachers from the sample. After the first pilot, some changes were made following a short discussion with one of the interviewees. The second pilot produced the final version, giving another opportunity for further refinement and rewording. This final version of the interview included eight open-ended questions. The questions focused on the difficulties teachers face when designing their own tests, the extent to which they feel skilled enough to do so without any training, and the fields they would like to know more about in relation to tests and testing methods, which might help formulate a frame for any potential training course.

Data analysis

In the second, qualitative phase of the study, the data obtained through interviews was coded and analysed for themes. The steps in the qualitative analysis were: a) preliminary exploration of the data through a close reading of the transcripts; b) translating the data into English and highlighting potential sub-headings related to the research questions; c) coding the data by segmenting and labelling the transcripts; d) gathering together segments from different parts of the transcripts that are relevant to the same category; e) developing themes by putting similar codes together; f) connecting and interrelating themes; and finally, g) comparing and contrasting all the themes of data assigned to the same category. The benefit of this last step, called the 'constant comparative method', was clarifying what the emergent categories mean, as well as identifying sub-categories and relations among categories (see Sapsford & Jupp, 2006: 253).
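Steps c) to e) of such a coding process can be sketched programmatically; the code labels, theme names and quoted excerpts below are invented for illustration only and are not taken from the study's transcripts:

```python
from collections import defaultdict

# Hypothetical coded interview segments: (code label, transcript excerpt).
# All labels and excerpts are invented for illustration.
segments = [
    ("difficulty", "I am never sure my questions test what I actually taught."),
    ("knowledge_source", "I copy the format of older colleagues' tests."),
    ("difficulty", "Writing good reading items takes me days."),
    ("training_need", "I would like a course on test validity."),
]

# Step d): gather segments that carry the same code.
by_code = defaultdict(list)
for code, excerpt in segments:
    by_code[code].append(excerpt)

# Step e): develop themes by putting similar codes together.
themes = {
    "challenges of independent test design": by_code["difficulty"],
    "informal learning and imitation": by_code["knowledge_source"],
    "perceived training needs": by_code["training_need"],
}

for theme, quotes in themes.items():
    print(f"{theme}: {len(quotes)} segment(s)")
```

Gathering all segments under one code before theming is what makes the later constant comparison possible: every excerpt assigned to a category can be compared against the others in that category.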

Validity and reliability

The criteria for judging a qualitative study differ from those for quantitative research. In qualitative design, the researcher seeks believability, based on trustworthiness (Lincoln & Guba, 1985), through a process of verification rather than through traditional validity and reliability measures and the accuracy of the researcher's account (Maxwell, 1992); see Dornyei (2007: 57-59).

Creswell states that qualitative validity means that 'the researcher checks for the accuracy of the findings by employing certain procedures' (2009: 190). To establish the validity of this research, I followed some of the strategies Dornyei (2007) and Creswell (2009) provide: a) I asked the respondents to give me feedback about the instrument and to highlight any weaknesses of the research, which is called 'respondent validation'; b) a thick and rich description was given in reporting the findings; and c) I had an external auditor review the whole project.

Qualitative reliability, as defined by Creswell (2009), 'indicates that the researcher's approach is consistent across different researchers and different projects' (190). To achieve reliability, I made sure the transcripts did not contain any obvious mistakes, and I constantly compared the data with the codes and the defined themes.

Research permission and ethical considerations

Ethical issues were addressed at each phase of the study. In compliance with the policy of the University College Plymouth St Mark & St John, an application for research permission was completed prior to the commencement of data collection. This application contained information about the research and its significance, its methods and procedures, and the participants. Permission to conduct the research was then obtained. The consent form stated that participants agreed to be involved in this study and acknowledged that their rights were protected. The participants' right to anonymity was protected: I numerically coded each questionnaire and treated each response confidentially. Selected interviewees were assigned fictitious names, which were used in describing and reporting their results.
