The test format is multiple-choice on a reading text provided to students. The test contains 15 items of varying difficulty, through which I test different levels of Bloom's Taxonomy. My students are at a high-intermediate level and have acquired a reasonable body of knowledge, particularly with regard to reading comprehension. Therefore, the questions emphasize comprehension skills at a higher level of difficulty than is usually given in regular tests. For instance, students have to look through the whole text to answer questions such as item number one, "What is the article about?", although they may guess the answer because the word "goal" appears in one of the distracters, which may give them a clue. Most, however, will have to work out the answer, which will be challenging, especially if they lack understanding of the text provided.
The test consists of three grammar questions (items 6, 8 and 15), three vocabulary questions (items 3, 4 and 10), and nine comprehension questions (items 1, 2, 5, 7, 9, 11, 12, 13 and 14). The grammar questions test verb forms and pronouns, while the vocabulary questions test students' ability to work out the meaning of new vocabulary from context. For example, item 4 asks students to choose an example of a "quantum leap", so they must understand the meaning of the phrase and relate it to the text in order to choose the correct answer. The comprehension questions, in turn, range from simple recall of facts in the text (items 5, 7 and 13) to relating different ideas in the text to grasp their combined message (items 9, 12 and 14). For example, item 14 does not depend merely on recalling why Dan O'Brien won the gold medal in the decathlon, but on understanding the message the author is sending the reader through the paragraphs preceding the mention of O'Brien as a gold medalist. In general, then, the questions are constructed carefully to test students' different reading-related skills.
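A blueprint like the one above can be kept as a simple mapping from skill category to item numbers, which makes it easy to check that each of the 15 items is assigned to exactly one skill. A minimal sketch (the variable names are my own, not part of the test itself):

```python
# Test blueprint: skill category -> item numbers, as described above.
blueprint = {
    "grammar": {6, 8, 15},
    "vocabulary": {3, 4, 10},
    "comprehension": {1, 2, 5, 7, 9, 11, 12, 13, 14},
}

# Every item from 1 to 15 must appear, and appear in only one category.
all_items = set().union(*blueprint.values())
assert all_items == set(range(1, 16))
assert sum(len(items) for items in blueprint.values()) == 15

print({skill: len(items) for skill, items in blueprint.items()})
# {'grammar': 3, 'vocabulary': 3, 'comprehension': 9}
```

A check like this catches the most common blueprint error: listing the same item under two skills, or forgetting one altogether.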
According to Bloom's taxonomy (refer to Table 1 in the appendix), the test questions fall under the knowledge, comprehension, application and synthesis levels. The knowledge level includes six items. Of these, items 6, 8 and 15, the grammar questions, fall under application as well, because the student is required to recall grammatical information and apply it in a new context, different from what has been practiced in class. Items 5, 7 and 13, also knowledge-level questions, involve only recalling facts and information from the text in order to choose the correct answer; they are therefore considered easier than items 6, 8 and 15, which mix knowledge and application skills.
The comprehension level includes seven items (1, 2, 4, 9, 10, 11 and 12), where students are required to identify facts and information, understand their meaning, and choose examples that match the ideas provided. These questions are considered challenging because students must read the text carefully and answer questions that probe their understanding of its messages. Such questions stimulate students' thinking and their ability to relate the text to the questions in order to grasp the relations between its different elements. Reading tests, from my standpoint, should focus more on comprehension because it is an essential base for the development of other skills such as grammar and vocabulary: students should understand the text before their grammatical and lexical knowledge is tested.
Since my students are at a high-intermediate level, I included two synthesis-level questions to balance the difficulty scale of the test. Items 3 and 14 are categorized as synthesis because students are required to build a structure from diverse elements in the text, that is, to relate different parts of the text to one another to form an understanding of a concept or a vocabulary item. Item 3, for example, asks for the definition of "a breakthrough goal" as it emerges across the text. Likewise, item 14 requires students to generate a reason for O'Brien becoming a gold medalist, which cannot be derived directly from the text but requires the student's synthesis, as mentioned previously.
In general, high-intermediate students can be tested on more challenging texts with more advanced vocabulary in order to gauge their readiness to move on to higher levels. However, that is not the case if the teacher's purpose is to test particular skills to determine whether students achieved the course objectives. The design of this exam's questions therefore depends on the students' academic level, the teacher's purpose, and the exam type, which here is a regular during-the-semester exam that should be moderately challenging. MCQs should generally be of moderate difficulty, while a teacher may experiment with other question types for different testing purposes; the role of MCQs should not overlap with that of other question types, so as to ensure variety, validity, reliability and fairness.
In order to understand the use of the MC format in language testing, I reviewed articles and studies that discuss positive and negative consequences of using the MC format in language testing and in testing generally. The multiple-choice format is a familiar objective test format, and researchers have discussed its effectiveness in language testing along many dimensions related to structure, use and construction. Most studies in language testing have been concerned with the construction of the multiple-choice item and how teachers should craft questions carefully to ensure reliability and validity (e.g. by selecting reasonable distracters). Authenticity is also an issue when assessing reading comprehension with multiple-choice questions. Rupp, Ferne and Choi (2006) indicated that asking test-takers to respond to text passages with multiple-choice questions induces response processes strikingly different from those readers would draw on in non-testing contexts. In fact, most participants in their study reported that answering multiple-choice questions is more a problem-solving task than a comprehension task. Hence, test-takers are likely to select a variety of unconditional and conditional response strategies, deliberately choosing options and combining a variety of mental resources interactively when determining an appropriate choice. When people read in a non-testing context, they do not answer multiple-choice questions in their heads, and this observation has led many researchers in language testing to pursue extensive research programs on the issue.
Rupp, Ferne and Choi (2006) posed the question: when test-takers answer MC questions about a reading text they have just read, does the assessment offer evidence of an understanding of the text? They found that reading comprehension is a complex construct and that test items change the reading process itself, stimulating supplementary processes that are, in their intensity, unique to the testing context. As they put it, "the main purpose of responding to MC questions about reading passages is, undoubtedly, to answer them correctly, and so test-takers select their strategies accordingly to optimize their chances for success." A variety of factors influence strategy selection in testing situations: the linguistic difficulty of the text, the linguistic level of the questions, the topic of the text, the content and phrasing of the questions, the location of the information behind the correct answers and distracters, and the level of cognitive activity required of the respondent. All of these factors raise complications that call into question the reliability of MC questions for testing reading comprehension. Rupp, Ferne and Choi (2006) nevertheless indicated that "it appears reasonable to state that an analysis of the structure and content of MC questions on any reading comprehension test will typically reveal that very different levels of reading comprehension are assessed with different items." The authors thus suggested that one strength of the MC format in reading tests is that it assesses a mixture of what could be termed 'local' and 'global' comprehension processes, forcing readers to draw on component and integrative processes to different degrees.
However, Rupp, Ferne and Choi (2006) affirmed that several studies provide evidence that responding to MC questions on a reading-comprehension test may draw much more on verbal reasoning abilities relevant to a problem-solving context than on general higher-order comprehension abilities. They added that "responding to MC questions on reading comprehension tests is a complex process," whereby "reading in a non-testing context differed from response to MC questions about passages in a testing context in striking ways."
Rupp, Ferne and Choi (2006) also discussed limitations of MC questions in reading tests in terms of strategy selection. Students may use several techniques that defeat the objective of reading to understand and learn: (1) scan the first paragraph to get an idea of the topic, then scan the questions, underline keywords that match the text, and choose answers on that basis; (2) scan or read the questions first and then read the text looking for answers; or (3) read the text in chunks keyed to the questions in order to locate answers. Students thus rely on keyword matching, a process test-takers often support by underlining or highlighting individual words or phrases considered pertinent for understanding the text or answering the questions. Limitations of MC questions also include guessing, especially when a test item appears difficult or ambiguous. Semantically similar or otherwise plausible distracters cause problems for test-takers, as does the length of the options, which increases difficulty because more information must be processed to falsify the incorrect options and confirm the correct one.
Rupp, Ferne and Choi (2006) noted that "test takers in our study often just scanned the text to locate features that a question asked for without an effort to understand paragraphs; this may make such questions easy". At the same time, the MC format is perceived as relatively difficult under the time constraints imposed by longer and more difficult texts, which make locating the appropriate passage section harder. Rupp, Ferne and Choi (2006) concluded that "different MC questions do not merely tap but, indeed, create very particular comprehension and response processes. Therefore, a blanket statement such as 'MC questions assess reading comprehension' is nonsensical for any test."
Dudley (2006) compared MC questions with multiple true-false (MTF) items in L2 testing and reached conclusions that indicate the strengths and weaknesses of the MC format. Participants were given 10 minutes to answer MCQ and MTF items, and the findings show that students taking MTF items attempted 29.9 items on average while students taking MCQs attempted 8.9, a ratio of roughly 3.4 to 1. These data clearly show that participants can answer significantly more MTF items than MCQ items in a given period. This increase in the number of responses lengthens the test without proportionally increasing the time requirements, and it yielded positive effects on internal-consistency estimates. Dudley (2006) stated that the "MTF method has demonstrated the ability to equal or exceed MCQ reliability," and concluded that his study provided empirical evidence that central factors such as test length, item interdependence, reliability and concurrent validity are viable with MTF items assessing vocabulary and reading comprehension in the realm of norm-referenced testing. The conversion of MCQ items to MTF items increased test length, score-to-percentile relationships remained stable, reliability was never sacrificed, and the conversion did not create interdependent items. These findings therefore assert the viability of the MTF format in second language testing.
Shizuka, Takeuchi, Yashima and Yoshizawa (2006) compared three- and four-option MC questions in English tests used for university entrance selection in Japan. Their study shows how complex MC question construction can be, since test-makers must weigh issues such as whether to write three- or four-option items and how that choice affects test reliability. The study confirmed advantages of the three-option format, its main motivation being to lighten the workload of multiple-choice item writers. The advantages include greater measurement accuracy, a shorter test booklet, distracters that are more plausible as a set, and less distraction and pressure for students, who can work more slowly or recheck their answers; most importantly, the chance of providing unintended cues that benefit test-wise students is reduced. The study thus effectively recommends three-option MCQs as one way to overcome the limitations of MCQs. Shizuka, Takeuchi, Yashima and Yoshizawa (2006) suggested that MC test writers should pay more attention to the quality rather than the quantity of the test to avoid the problems caused by choosing the MC format for language testing, especially reading. Marsh and Roediger (2005) agreed: "the more alternatives on the multiple choice test, the worse performance on the later cued-recall test and the smaller the positive testing effect. More important, we predicted that an increased number of alternatives on the multiple choice test would also increase errors on the later cued-recall test."
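The three- versus four-option trade-off can be made concrete with a little probability: under blind guessing, the expected chance score on a k-option item is 1/k, so moving from four options to three raises it from 25% to about 33%. The sketch below is my own illustrative arithmetic, not taken from the studies cited; it also shows the classic correction-for-guessing formula, score = R - W/(k-1), which some tests use to offset that chance score.

```python
# Illustrative arithmetic for k-option MC items (not from Shizuka et al. 2006).

def chance_score(k: int) -> float:
    """Probability of answering a k-option item correctly by blind guessing."""
    return 1 / k

def corrected_score(right: int, wrong: int, k: int) -> float:
    """Classic correction for guessing: R - W/(k-1)."""
    return right - wrong / (k - 1)

print(chance_score(4))            # 0.25 on a four-option item
print(chance_score(3))            # about 0.333 on a three-option item
print(corrected_score(10, 6, 4))  # 10 - 6/3 = 8.0
```

The formula assumes omissions are not penalized and that wrong answers come from pure guessing, which is exactly the assumption test-wise strategies such as distracter elimination undermine.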
Al-Hamly and Coombe (2005) examined the answer-changing phenomenon among Gulf Arab students taking MC questions in ESL courses and how it affects test scores. Their study shows that students taking MC questions often replace their first-chosen answers with alternatives, whether right or wrong. Al-Hamly and Coombe (2005) indicated that earlier research had found 86% of test-takers making answer changes, whereas their own investigation produced a figure of 67%. They also found that 44% of answer changes were from wrong to right, 37% were from wrong to wrong, and the smallest share, 19%, were from right to wrong. Whether a test-taker lands on the right alternative when changing an answer depends on factors such as the nature of the test and the subject matter, in terms of both content and language.
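Using the proportions reported above (44% wrong-to-right, 37% wrong-to-wrong, 19% right-to-wrong), the expected net effect of answer-changing is straightforward to compute: only wrong-to-right changes gain a point and only right-to-wrong changes lose one, so each change is worth about 0.44 - 0.19 = 0.25 points on average. A minimal sketch, under my own simplifying assumption of one point per item:

```python
# Expected net score change per answer change, assuming 1 point per item.
# Proportions are those reported by Al-Hamly and Coombe (2005) as cited above.

def net_gain_per_change(p_wrong_to_right: float, p_right_to_wrong: float) -> float:
    """Wrong->right gains a point, right->wrong loses one, wrong->wrong is neutral."""
    return p_wrong_to_right - p_right_to_wrong

gain = net_gain_per_change(0.44, 0.19)
print(gain)       # about 0.25 points gained per change, on average
print(20 * gain)  # a student making 20 changes expects to gain about 5 points
```

This simple expectation is why the authors advise judicious answer-changing rather than "go with your first response."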
Al-Hamly and Coombe (2005) also noted that wrong-to-wrong changes occur more often when students are guessing or not taking the test seriously. They concluded that changing answers on a multiple-choice test is quite beneficial, since most changes were from wrong to right and resulted in points gained; teachers should therefore encourage students to change answers judiciously rather than follow traditional advice such as 'go with your first response', or resort to guessing. Samad (2004) highlighted some advantages and disadvantages of the MC format. The advantages include quick grading, high reliability, objective grading, wide coverage of content, and precision in providing information about specific skills and abilities. The disadvantages are as follows: (1) the MC format tests only knowledge and recall of facts; (2) guessing may have a considerable effect on test scores; (3) the format severely restricts what can be tested to lower-order skills; (4) cheating may be facilitated; (5) it depends heavily on students' reading ability and the teacher's writing ability; and (6) constructing MC questions is time-consuming. Samad (2004) says that good multiple-choice questions are difficult to write and that a great deal of time and effort must go into their construction. For instance, unintentional cues in a test item may affect the reliability of test scores and undermine the test's objective of measuring particular abilities rather than the ability to exploit those cues.
Marsh and Roediger (2005) indicated that one of the most apparent limitations of the MC format is that "because good multiple-choice tests are so difficult to construct, the same questions are often used across semesters. The result is that the test bank needs to be protected, meaning that many professors neither review the test in class nor return the tests to students. Rather, professors let students review the tests in their offices, although in reality few students do so." Students' learning suffers as a result, because they never get the chance to correct their wrong answers or to understand why they received a lower score. On the other hand, Marsh and Roediger (2005) explained that MC questions are easy to score and are therefore the evaluation method of choice in large classes, saving teachers time and effort. They added that students expecting a multiple-choice test are likely to study less than those expecting an essay-format test.
Marsh and Roediger (2005) added that research has shown that exposure to misspelled words led subjects to misspell those words later on a traditional spelling test. On the other hand, Siong (2004) indicated that if the distracters in an MC test are "tricky", they make students think carefully before selecting the right answer, which is positive because it engages higher-order skills of thinking and evaluation. Siong (2004) further noted that "some researchers argue that the ability to answer MCQ is by itself a separate ability, different from the reading ability."
From my standpoint, and after reviewing the literature on the MC format, I believe it has both positive and negative consequences as a measure, especially in language-related subjects, where quality is far more important than quantity. The MC format suits teachers of introductory courses, where recall of facts and information matters more to the learning experience than critical thinking or essay writing. It saves grading time and effort, so teachers can spend more time teaching than testing. The multiple-choice format also suits teachers who need to provide grades quickly with highly reliable test scores; however, teachers must be confident in their ability to construct good objective test items so as to judge students fairly. The MC format is likewise a good technique when teachers want to cover a wide range of content in limited time, and it allows them to measure particular learning objectives such as comprehension, recognition and recall.
However, the MC format has shortcomings as well. Constructing good questions consumes considerable teacher time and effort, and students can use various shortcuts to answer MC questions without reading the text thoroughly, whether by quickly scanning the text and questions, underlining keywords, or guessing. The reliability of MC test scores is therefore questionable, especially if we want to measure students' reading skills, critical thinking and evaluation abilities. The multiple-choice format mostly limits students and teachers to testing lower-order skills such as recall. Indeed, one of its central limitations is that question construction must be done very carefully to produce reliable and valid items that measure students' comprehension accurately.
Finally, designing a multiple-choice test is not an easy task, and taking one is challenging too. Teachers spend a great deal of time choosing good distracters, avoiding cues, and discouraging guessing. The MC format should therefore be used with caution and with knowledge of the best ways to apply it to language testing, especially with regard to vocabulary and comprehension. Teachers must also encourage students to read the text and avoid shortcuts that forfeit the reading-to-learn experience. In that way, both teachers and students will better understand the testing purpose and the learning objective behind an MC language test, and will have a base on which to build their strategies, one they can return to as often as needed. In conclusion, the educational process should be carried out with the utmost knowledge, experience and care to ensure better-quality learning outcomes.