Study on the role and types of Assessment

'The most important single factor influencing learning is what the learner already knows. Ascertain this and teach him accordingly' (Ausubel, 1968: vi).

Ausubel's dictum underlies one of the fundamental principles of education: that learning should commence from the point where the learner is (Brooks, 2007). To ascertain pupils' prior knowledge and understanding, and so comply with this principle, educators need to adopt a method of assessment. Assessment refers to all those activities, led by teachers and by students themselves, which offer information regarding pupils' understanding and comprehension (Black and Wiliam, 1998a).

Summative and formative assessment

Assessment is further defined as either formative or summative, the two being distinguished largely by purpose (Black & Wiliam, 1998b). Summative assessment summarises long-term attainment and relates progression in learning either to a criterion (criterion-referenced) or to other students (norm-referenced) at a set point in time (Harlen et al., 1997). Summative assessment may be achieved through end-of-topic tests and/or external authorised examinations, providing an assessment 'of' learning, and is linked with the concept of high stakes, whereby the results hold great significance (Harlen, 2005, cited by Power & Frandji, 2010).

In contrast, for assessment to be formative there must be a 'feedback' or 'feed-forward' purpose, i.e. the information is used to modify subsequent teaching to meet the learning needs of the students (Black et al., 2002; Gioka, 2009). Formative assessment is used to monitor progress continuously and allows teachers to modify their teaching approaches accordingly, reflecting a more fluid and dynamic mode of assessment. Formative assessment is regarded as assessment 'for' learning, and may be achieved through regular written feedback, discussions, peer- or self-assessment, formative use of summative tests, rich questioning or an end-of-lesson quiz (Black & Wiliam, 1998b). The contextual nature of formative assessment, whereby assessment is influenced by learning situations, knowledge of students and lesson purpose (Bell & Cowie, 2001; Black & Wiliam, 1998b), endorses the synergistic relationship between assessment and planning.

The use of summative assessment is often for accountability and monitoring purposes, and is reported to have negative effects on teaching and indirect effects on pupils (Harlen, 2009), such as inducing test anxiety and reducing self-esteem and motivation (Harlen & Deakin Crick, 2002, 2003). High-stakes assessment has been associated with a reduction of formative assessment in the classroom (Osborn et al., 2000) and a more rigid teaching plan, thereby restricting the content taught and promoting rote learning (Marshall & Drummond, 2006; Shepard, 2000). Moreover, teaching becomes focused on the direct delivery of facts and offers little chance for students to learn through exploration (Johnston & McClune, 2000).

In contrast, formative assessment has been associated with learning gains and raised learning standards (Black & Wiliam, 1998a). Black and Wiliam (1998a), using the same tests, compared the average improvements in test scores of students involved in a feedback intervention with the range of scores found for typical groups of students. The results imply that conducting formative assessment in the classroom does raise overall attainment standards and produce learning gains. Moreover, it appears that these learning gains are most evident amongst pupils with a lower attainment level, and consequently reduce the spread of attainment within schools (Wiliam & Black, 1998b).

Specifically, Atkin et al. (2001), cited by Buck and Trauth-Nare (2009), demonstrated that science standards and learning enhancement are both supported by implementing formative assessment in the classroom. However, Buck and Trauth-Nare (2009) reported that formative assessment was viewed more favourably by lower-achieving science students, whilst high-achieving science students perceived formative assessment practices negatively. This echoes Wiliam and Black's (1998b) research, yet suggests there are potentially negative impacts of formative assessment on higher-achieving students. However, as Buck and Trauth-Nare (2009) highlight, the study did not compensate for students' differing learning experiences within the same classroom, which is an influential factor in educational performance (Nuthall, 2004).

Theoretical framework of learning and assessment

Roos and Hamilton (2004) reported that different theories of learning entail different paradigms of assessment. The constructivist conception of learning states that knowledge cannot be transmitted; instead, it must be constructed by the learner (Piaget, 1970, 1976). Bruner (1960) theorises that learning is an active process in which the learner constructs concepts based on prior or current knowledge, allowing hypotheses to be formed and decisions to be made. In contrast, Gagné's (1977) view of learning adheres to a behaviourist framework, with a heightened focus on specifications and expectations of students (Roos & Hamilton, 2004). Summative assessment appears to be largely tailored to behaviourist conceptions of learning (Begg, 1997), whilst formative assessment is based more on constructivist principles (Black & Wiliam, 2007).

Learning science involves exploration and the testing of new ideas to help develop understanding (Zohar and Dori, 2003). Exploring and interpreting phenomena is central to understanding scientific concepts (Driver et al., 1994) and endorses the push for scientific enquiry, the application of science to daily life and an appreciation of how science works. Constructivist theory appears to relate closely to science learning and underpins contemporary perspectives on science education (Driver et al., 1994). Formative assessment in science must reflect the new curriculum, which emphasises scientific enquiry, the application of science to daily life and the development of meaningful understanding, reflecting a constructivist approach to learning (Atkin & Coffey, 2003; Wiggins & McTighe, 1998, cited in Wang et al., 2010).

The science curriculum is termed a 'spiral' curriculum, where each topic is revisited and more knowledge is added, endorsing a constructivist approach to learning. The shift from summative towards formative assessment parallels the shift in curricular content and objectives towards a more constructivist perspective, to include the learning of scientific skills, higher-order thinking and the application of science to daily life (Shepard, 2000). Researchers have argued for a wider range of assessment tasks (Duschl & Gitomer, 1997; Gitomer & Duschl, 1998), set in more authentic contexts (Tamir, 1998), and for assessment to be embedded within the curriculum. The investigative nature of science endorses the need for formative assessment, since such recommendations cannot be met through exams alone.

The role of assessment

Assessment tracks the progress of pupils in relation to knowledge, skills and comprehension, and is the principal aid to target setting. Assessment also identifies where additional support is needed to maintain progression in learning, and therefore directly influences lesson planning (Crooks, 1988). The assessment process should provide pupils with up-to-date information regarding their progress, rewarding attainment and identifying ways to improve their work (Crooks, 1988).

In addition, assessment fulfils a motivational role (Trotter, 2006). Regular feedback offers a motivational tool which drives pupils to study and demonstrates that learning tasks are regularly assessed and therefore significant. Interestingly, summative forms of assessment have been shown to be a greater source of extrinsic motivation (Trotter, 2006). However, Rowntree (1987) reports that whilst some students admit to seeking assessment as a source of motivation, others are distracted and enervated by it (cited in Trotter, 2006).

A wealth of school-based research (Gagné, 1977; Crooks, 1988) reports that assessment can be used to reactivate or consolidate required skills before introducing new subject content; draw attention towards key elements of the topic; promote active learning strategies; offer chances to practise skills; and establish what students have learned. Feedback can highlight discrepancies between desired and actual knowledge, understanding and achievement, which may then guide students to reach their goals by highlighting the necessary actions to take (Sadler, 1989).

In teaching the sciences, assessment forms an intrinsic part of the students' learning process, thereby influencing attitudes and interest towards the subject (Teaching & Learning Research Programme, 2006). The role of assessment in encouraging more students to follow a scientific career path is therefore crucial, as it can be used to promote a greater understanding of the fundamental concepts and implications of science, as well as encouraging students to think scientifically. In contrast with other school subjects, assessment in science offers greater opportunity for interactive classroom communication. Formative assessment may help to raise student expectations and counteract students attributing poor performance to a lack of academic ability (Vispoel & Austin, 1995). Assessment may consequently serve as a 'buffer' against feelings of discouragement in an effort to promote learning. Assessment also provides science teachers with an armoury of different tools, which can be used to aid greater understanding of scientific content and to address varying learner abilities and levels.

Social, economic and political drivers promoting assessment

League tables, which revolve around student assessment, are publicised each year and, according to Warmington and Murphy (2004), raise the stakes with each round. There is a concern that league tables drive teachers to enter pupils for exams in order to earn league-table points, rather than to consider alternative qualifications and what is best for the student (Wiggins & Tymms, 2002). Power and Frandji (2010) are of the opinion that league tables reveal the dispersion of student achievement and highlight schools which have a competitive advantage, allowing parents to choose places at the 'best' schools for their children. Moreover, the damage to teaching staff and the school's atmosphere can be severe if schools are unfavourably positioned in league tables (e.g. Karsten et al., 2001). For example, Nicolaidou and Ainscow (2005) report that schools labelled as 'failing' show acute drops in contentment, motivation and self-esteem. Irrespective of these issues, the Government's Education White Paper (Department of Education, 2010) has set schools higher targets in a drive to raise exam results, and proposed that league tables be made more publicly accessible and transparent.

Assessment provides a benchmark and evidence to prospective employers in respect of skills and capabilities (Oloruntegbe, 2010). Universities and employers both use summative assessment to guide their selection processes, and it is therefore crucial to students in securing their preferred places in higher education.

Importantly, formative assessment plays a unique role in devising individual education plans (IEPs) by assisting in setting targets, providing a medium through which to measure success against targets and evaluating whether each target has been met (Clarke, 2001). Frankl (2005) emphasises the importance of target setting when developing IEPs for students with learning difficulties or disabilities. Moreover, Ofsted stated that IEPs are unlikely to be effective if they are not integrated into the school's assessment system (Ofsted, 1999, p. 22). However, Gross (2000) argues that progress is often measured against students' performance on IEP targets, rather than against progress in their overall learning, suggesting that this mode of assessment may not be valid.

Reflections of student learning and teaching performance

The topic selected was 'Products from Oil', which corresponded to C1b4 of the AQA Chemistry Additional GCSE course. The lessons discussed were delivered to a top-set Year 10 class.

Clarke (2001) advocates that the crux of effective formative assessment is the formulation of clear learning objectives. At the beginning of every lesson, the learning objectives were clearly stated and shared with the students (see appendix 1 for lesson plans). The teaching and assessment methods were diverse and appropriately incorporated into different lessons, allowing students to demonstrate their understanding in a variety of ways. According to Biggs' (1999) model of constructive alignment, assessment tasks should be designed to assess directly each of the learning objectives defined in the curriculum (Biggs, 1999; Segers & Dochy, 2006). The methods of assessment were closely aligned with the learning objectives, which enabled me to use the learning objectives as a flexible form of assessment criteria with which to review students' understanding (see appendix 2).

Lesson 1 - (self-assessment and observation)

The objectives were to understand the differences between alkanes and alkenes and know the structure and molecular formula of alkenes. To understand the differences between alkanes and alkenes, it was important for students to fully comprehend, recall and consolidate their prior knowledge of alkanes. This was achieved by asking students to self-assess their answers to a quiz; see appendix 3.

This mode of assessment allowed the students to gain a personal appreciation of their current level of understanding. According to Sadler (1989), students should evaluate their current attainment against their desired level of performance and implement procedures to achieve those goals. Therefore, if I were to repeat the lesson, I would encourage students to reflect on this mark compared with their target mark. This would allow students to appreciate the gap between their current and desired levels of understanding, and may trigger and promote the action required to reach their target levels, as suggested by Sadler (1989). Students realised they had a secure understanding of alkanes, and I used this encouraging result to move on to the next activity without having to recap the properties of alkanes.

According to Boud and Falchikov (1989), self-assessment also allows learners to recognise the criteria of high-quality work and the discrepancies between current and desired comprehension, which can assist in directing future improvement. However, self-assessment has not been a common approach employed by teachers (Black & Wiliam, 1998a). Although self-assessment appears to be very effective, it is associated with problems: there may be weak alignment between the way a teacher and a student mark a piece of work (Boud & Falchikov, 1989). However, the self-assessment I employed was more reliable because the answers to each question were discussed and easily identifiable as correct or incorrect.

Following the quiz, students demonstrated their understanding of the differences between alkanes and alkenes by constructing both an alkane and an alkene of five carbons using molimods. I used this time to observe students' actions and assess their understanding of bonds, atoms, structural formulae and the differences between alkanes and alkenes. Students assertively identified that propene had a double bond and fewer hydrogen atoms compared with the single-bonded propane, implying meaningful understanding of the fundamental differences between alkanes and alkenes. Students were then asked to write the formula of an alkene using the molimod constructions as a guide.
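To make the distinction concrete (this worked example is added for illustration and is not necessarily the one used in the lesson), the difference rests on the general formulae: alkanes follow CnH2n+2 and contain only single bonds, whereas alkenes follow CnH2n and contain one carbon-carbon double bond. The five-carbon molecules the students built therefore correspond to pentane, C5H12, and pentene, C5H10; likewise propane is C3H8 while propene is C3H6, which is why propene has two fewer hydrogen atoms.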

Upon reflection, the observation and assessment of students' higher-order thinking skills demonstrated their deeper levels of understanding and knowledge application. I was encouraged by their ability to complete this task successfully and consequently planned the following lesson to be more challenging, placing the responsibility for learning on the students themselves. I decided to reduce the amount of subject knowledge I gave, creating a more student-led lesson.

Lesson 2 - (questioning)

It was intended that students would know the conditions required for cracking, understand the cracking process and be able to write and balance symbol equations. Classroom dialogue and questioning are a form of formative assessment and play an important role in encouraging two-way active engagement (Black and Wiliam, 1998b). Question-asking has been identified as a higher cognitive skill and should be emphasised particularly in science education (Hofstein et al., 2005). Following a verbal explanation and a video of cracking, I asked questions to assess students' understanding of the process before setting the written task. The responses to the questions were not confident, and this demonstrated a need to review the cracking process using a different approach. Upon reflection, I had challenged the students too much by expecting them to explain the cracking process independently with little guidance from me. I had over-estimated their ability from the first lesson and pitched this lesson too high for their current subject knowledge.

In response to their difficulty, I decided to ask students to model the process of cracking: pupils acted as long hydrocarbon molecules, formed by holding hands, and were 'cracked' by breaking off from the long chains. This approach catered for visual and kinaesthetic learners, and follow-up questioning demonstrated their new understanding of the cracking process. This use of formative assessment therefore enabled me to recognise that the concept had not been understood and, consequently, to modify my teaching approach.

In addition, I used written questioning embedded within the feedback I gave on their written work (for an example see appendix 4). This was designed to prompt students to think further and improve their work. I also used questioning to assess the students' ability to balance equations when cracking hydrocarbons. Students were given exam questions relevant to cracking and balancing equations. Many of the responses were incorrect, and it became clear that students struggled with the concept of balancing equations. This provided an immediate form of assessment and alerted me to the weakness. I therefore gave the pupils a few more equations to try to balance, discussed how to go about balancing equations, and reviewed a few equations as a class. Moreover, because I felt that students were still not confident with this skill, I planned a starter activity for lesson 3 which involved balancing equations; this endorses Gioka's (2009) and Crooks' (1988) research on the purpose of formative assessment for planning.
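To illustrate the kind of equation involved (an example added for illustration rather than one necessarily set in the lesson), cracking splits a long-chain alkane into a shorter alkane and an alkene, and the symbol equation balances when the carbon and hydrogen atoms on each side match. For instance, decane can be cracked into octane and ethene: C10H22 → C8H18 + C2H4. Each side contains ten carbon atoms and twenty-two hydrogen atoms, so the equation is balanced.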

In the last few minutes of the lesson, I asked the students how they would determine whether an alkene or an alkane had resulted from the cracking process, which required them to recall knowledge from the previous lesson. Many students were willing to answer, suggesting they had understood the results with bromine water and the importance of the double bond in alkenes. This on-the-spot method of assessment demonstrated secure competence in, and understanding of, the aforementioned reaction, as highlighted by Crooks (1988).

From the students' written work and regular verbal questioning, it appears that students did partly meet their learning objectives, and questioning was an invaluable tool of formative assessment. Question-asking is diverse, ranging from closed, low-order questions, such as 'identify', to open, high-order questions such as 'evaluate' and 'synthesise'. As well as offering an opportunity to challenge high-achieving learners and demand higher-order thinking, questioning allowed struggling students another opportunity to learn and to enhance their understanding from both correct and incorrect answers.

However, whilst a 'hands-up' policy gives an indication of how many students in the class think they know the answer, question-asking in general only directly assessed the understanding of those answering the questions, and is therefore limited. In addition, question-asking usually involved only those students who felt confident enough to speak out or who were confident their answer was correct. Answers may therefore be offered only by students with strong self-confidence and/or a deeper understanding, and may not be a reliable or fair representation of the whole class's understanding.

Furthermore, verbal questioning can be used inappropriately, drawing only on low-order thinking skills and failing to challenge learners. Morrison and Lederman (2003) reported that informal means of assessment, such as whole-class questioning, appeared to dominate classroom practice and recalled only low-level knowledge. McMillan (2003) endorses such findings, reporting that the assessments teachers used and decided upon were based on school context and professional experience rather than on common assessment principles. Conversely, Tomanek et al. (2008) reported that teachers select formative assessment tasks by taking account of their perceptions of students' capabilities alongside the curriculum.

Another common drawback to questioning is that pupils play the 'game', i.e. they formulate the response that they believe the teacher is looking for. Another problem arises when the teacher manoeuvres the conversation, preventing the pupils from constructing their knowledge through their own cognitive processes (Black & Wiliam, 1998b). Despite the shortcomings of questioning, I made a conscious effort to incorporate a range of question levels in accordance with Bloom's Taxonomy (Bloom et al., 1956).

Although students appeared to understand key concepts, their understanding may have been unreliably assessed through questioning and their learning may therefore have been superficial. However, verbal questioning did flag up areas which needed revisiting, and these were constructively responded to.

Lesson 3 - (peer-assessment)

By the end of the lesson it was intended that students would understand monomers and polymers, be able to name a range of polymers formed from a given monomer, and be aware of some polymer uses.

The main task in lesson 3 was to produce a poster: starting from a given monomer, students constructed the polymer it forms using paper models, displayed it, and then explained the uses of this plastic. This activity promoted students' subject knowledge development, as they needed to understand monomers before they could construct the polymer. It promoted a constructive, step-by-step approach to learning about polymerisation, and indicated which stages students did not understand. From class observation, I was able to provide additional support to those who evidently needed extra help to facilitate their understanding. I circulated among the groups who were struggling and gave the additional one-to-one advice that was required.
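As a concrete example (included here for illustration rather than taken from the lesson materials), in addition polymerisation many identical monomer units join to form one long chain: the monomer ethene, CH2=CH2, polymerises to poly(ethene), in which the double bond opens and repeating -CH2-CH2- units link together, giving the formula (C2H4)n. The naming convention of adding 'poly' to the monomer name, as in propene forming poly(propene), follows the same pattern.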

Students were asked to assess their peers' work on monomers and polymers, writing feedback on post-it notes (see appendix 5). Peer-assessment allows students to work together or individually in judging the work of their peers against criteria, and has been shown to improve subsequent work (Forbes & Spence, 1991; Cohen et al., 2001; Rust, 2002). Peer-assessment allows students to become actively engaged with the criteria, reflecting a social constructivist approach and promoting a deeper, more meaningful level of understanding (Rust et al., 2005). In addition, Topping (1998) reports that peer-assessment is associated with enhanced confidence, appraisal skills and learning gains. Following the peer-assessment, pupils returned to their work and were given time to read their feedback. This allowed students to engage actively in the feedback process, learning how to receive constructive comments. It was hoped that from this peer feedback, students would learn what additional content was needed in order to improve their own work in the future. This approach also prompted students to ask questions about their peers' work, which consequently increased the amount of subject knowledge covered within the lesson.

In the future I would provide explicit assessment criteria to guide the peer-assessment. This would fully engage students with both the process of peer-assessment and the assessment criteria. Moreover, using assessment criteria would heighten students' awareness of the gap between the expected standard and their own attainment, allowing them to recognise the action needed to improve their own work. By highlighting the necessary actions for improvement, the formative function of peer-assessment would be maximised. In this case, students were unable to improve their work due to time constraints, and on reflection it would have been better to allow time for this in the following lesson.

However, peer-assessment carries limitations, namely problems associated with inaccurate over- and under-marking and the need for an explicit understanding of assessment criteria (Black & Wiliam, 1998b). The problem may lie in a lack of reliability in marking practices (Laming, 1990), or in students' lack of understanding of what is expected of them (O'Donovan et al., 2001) and of the criteria themselves (O'Donovan et al., 2004). It could therefore be argued that, without explicit assessment criteria, the effectiveness of peer-assessment in this lesson was questionable. The quality of the peer-assessment may have been limited by the students' own understanding and knowledge, which was low given that these lessons covered new information and subject knowledge. This approach would therefore have been more effective in a revision lesson.

Training students to assess and judge work accurately may be a further pitfall of such assessment strategies, and it has been reported that higher-ability students may under-mark whilst less able students may over-mark themselves (Falchikov, 1986). Importantly, as identified by Kay et al. (2007), there should be close alignment between student and teacher judgements; the criteria for strong and weak performances therefore need to be explicit. This presents further challenges to teachers, given that pupils can only assess themselves when they have an explicit understanding of their learning objectives (Black & Wiliam, 1998b).

Lesson 4 - (feedback)

By the end of this lesson it was intended that students would know the forces between polymer chains and what determines the properties of plastics, including thermosetting and thermosoftening plastics.

From a review of the literature (Hattie, 1987; Black & Wiliam, 1998b; Gibbs & Simpson, 2004), regular feedback and communication between teacher and pupil appear to be at the heart of formative assessment. Rowntree (1987), cited in Trotter (2006), proposed that feedback is the 'lifeblood' of learning and central to the successful formula of formative assessment. Research indicates that students desire good feedback and appreciate the feedback provided (O'Donovan et al., 2001; Higgins et al., 2002).

Studies suggest that feedback given as close as possible to the time of assessment is more effective in enhancing student learning (Gibbs and Simpson, 2004); verbal feedback may therefore be more effective given its timely nature. Because verbal feedback was immediate, pupils could respond and improve their work 'in the moment'. Although written feedback takes longer to prepare and receive, it is a permanent record and therefore potentially longer-lasting (Gibbs et al., 2003). Moreover, written feedback allows teachers to offer advice for improving both current and future work (Fleming, 1999).

During the lesson, I gave immediate verbal feedback on students' work. I circulated and checked students' diagrams of polymer chains and reinforced the diagrams which were correct. In some cases I prompted the students to add to their diagrams by asking them 'what forces are present in between the polymer chains, keeping the polymer together?'

Students were asked to develop a table of the differences between the two polymer types and give an example of each (see appendix 6). I gave verbal feedback on students' work, encouraging them to include differences they had not listed or considered. Consequently, students added to their tables, encouraging a thorough evaluation of the differences between a thermosetting and a thermosoftening plastic. I also prompted students to consider the structural differences between the two polymer types.

Using verbal feedback enabled students to add to their work immediately, and allowed me to deal with any misconceptions or misunderstandings. For example, I queried the work of one student who had initially written that thermosetting plastics could be softened repeatedly. By reacting to this misunderstanding, I was able to catch the wrong answer and allow the student to correct it straight away, dispelling the misconception immediately rather than waiting to mark their books. Acting immediately on an incorrect answer also increased the chance for students to learn from their mistakes, and verbal feedback was therefore used as a proactive form of assessment.

Written feedback was also provided, with targets on how to improve their work (see appendix 6). If students are able to understand how to move forward, they are likely to take ownership of the necessary actions (Harlen & James, 1997). The marking policy was based on a traffic-light system in which a green dot equated to high effort, orange to moderate effort and red to poor effort. This feedback was regular (see appendix 6) and gave an immediate visual overview of students' performance. Had I been with this group for longer, I would have used this to identify specific students who were receiving multiple red marks and spoken to them individually about their work and personal understanding. This would be useful for identifying consistently struggling students and prompting the extra help and guidance necessary to improve their understanding.

In addition, the feedback served a motivational role, as endorsed by Snyder (1971), and encouraged students to view their work as significant (Trotter, 2006). However, the feedback would have been more effective if students had been given the time to revisit the comments and improve their work by completing the advised targets. I used this feedback to modify a revision game for the following lesson which covered common areas of misunderstanding (see appendix 7). The common pitfalls, highlighted by the assessment approaches used in the classroom and by marking the students' work over the entire topic, informed the subject content of the quiz.

Lesson 5 - (formative use of summative assessment)

In lesson 5 students played an adapted revision game before completing an end-of-topic test (C1b4); see appendix 8. Overall, the students' scores for the current topic (C1b4) remained consistent with those for the previous topic test (C1a3), suggesting consistency of understanding between topics. The average score for the C1a3 test was 81%, with 62% of students attaining an A or above, compared with an almost equal average score of 80% and 63% of students attaining an A or above for the C1b4 test. These data suggest that the class's comprehension of both topics was similar and remained stable.

Given the differing complexity of the topics and individual differences, such as interest in specific topics, a like-for-like comparison between topics is not possible. However, this presentation of results can be useful for measuring the consistency of students' attainment. If a specific student showed a dramatic drop in test scores between topics, this could be identified and addressed, for example through additional revision sessions or consideration of any personal circumstances. Moreover, if the class as a whole scored lower in one topic, it would suggest that additional effort was needed to recap the relevant subject knowledge. It might also suggest that the teaching approaches used were not appropriate, prompting teachers to reconsider how the subject content was taught and why it may not have been effective. This may stimulate different methods of teaching a specific topic in revision sessions and in future years. In addition, the results by grade category could be compared between topics to assess strengths and weaknesses in the class. This summative assessment could then be used formatively to highlight significant areas which need to be the focus of additional teaching and revision. A further discussion of these results can be found in appendix 9. From comparing and analysing the data, I am confident in proposing that students had a good understanding of this topic.

Interestingly, when looking more closely at individual responses to specific questions, students scored lowest on the balancing-equations question. If I were to teach this group again, this would be an area to revisit, reinforcing the use of summative assessment for future planning and teaching. Educational researchers have argued in favour of employing summative tests for formative purposes (Black & Wiliam, 2003; Black et al., 2003), whereby external assessments can inform and help guide curriculum reforms (Shavelson et al., 2003). Shavelson et al. (2003) have also encouraged the alignment of formative and summative assessment. Using end-of-topic tests can reveal misconceptions and inform the teacher's planning and focus towards areas still misunderstood.

Conclusion

Over the course of these lessons, a range of assessment strategies was deployed, as suggested by Gitomer and Duschl's (1995, 1998) research. This diverse range allowed me to reflect on and evaluate the effectiveness of each strategy, providing a greater variety of ways for students to demonstrate their understanding. I personally found the assessment approaches taken very valuable for adapting the lesson structure and informing planning for the next lesson to meet the learning needs of the pupils, as discussed. Importantly, the assessment approaches used suggested the learning objectives had been met.

However, although active engagement with the criteria has been shown to promote meaningful understanding (Rust et al., 2005), student input into the assessment criteria was not always practical. Furthermore, students were often not given explicit assessment criteria. Therefore, without knowledge of the learning goals and what is expected of them, the reliability of peer- and self-assessment remains questionable.

Whilst the advantages and constructive purposes of formative feedback are widely acknowledged (Black and Wiliam, 1998b), the feedback I gave may have been associated with underlying problems. There was no assurance that students found the feedback useful in facilitating their understanding and stimulating discussion, as proposed by Maclellan (2001). Moreover, large class sizes and time pressures can make it taxing to offer extensive and useful feedback (Heywood, 2000); yet Jackson (1995) reports that students appreciate feedback, as it signifies that their work has been fully read and marked appropriately. Studies have also shown that students may lack a proactive response: feedback is often not read (Hounsell, 1987), or disregarded if not liked (Wojtas, 1998). A common remark is that students do not understand the feedback provided (Lea & Street, 1998). Therefore, the value of the feedback I gave is unknown, and in the future I would allow students the time to read and respond to the feedback and targets provided. This would encourage students to fully embrace the opportunity to improve their work, as little time was offered to act on the targets I gave.

The written feedback given to students consisted of a general comment and a target for improvement, without a grade. A low grade can damage a student's self-esteem, whilst Yorke (2001) reports that a positive grade can enhance student retention. However, Black and Wiliam (1998a) show that without a grade, feedback is read more conscientiously and can be used to support learning. Paradoxically, Smith and Gorard (2005) and Trotter (2006) report that feedback accompanied by a grade was more effective for learning and favoured over feedback alone. This supports Knight (2000), who argues that formative and summative assessment can function synergistically. Future research could therefore consider whether a mark combined with feedback is a better form of assessment than feedback alone.

Questioning was the most regular method of assessment and proved invaluable as an immediate tool, allowing me to recap areas of misconception or to progress the lesson further. Despite the limitations of questioning, I firmly believe this practice took a successful, central role in the classroom and gave a reliable overview of the class's understanding.

I believe the assessment methods I used reflected an encouraging personal performance in the classroom. The students' results from the end-of-topic test were very promising and suggest that students had a deep understanding of this topic. Interestingly, the students were of higher ability, and this therefore offers evidence against Buck and Trauth-Nare's (2009) research.

5,560 words (not including sub-titles)

Sub-titles = 50 words
