Planning for Effective Learning: A critical analysis of Standardised Assessment.
Appendix 1: Interviews
Assessment is a key process in education. It is only through assessment that we can determine whether instruction has had its intended effect (Denvir & Brown, 1986). The assessment methods used range from formative assessment, such as a simple observation used to form a teacher's subjective opinion, to formal testing or examinations (summative assessment). Assessment was introduced nationally by the Education Reform Act of 1988, meaning that every teacher became trained in assessment. Consequently, it became apparent that assessment was facilitating the progression of learning, and the value of effective assessment was virtually undisputed. Both formative and summative assessments are used to assess a pupil's attainment, advancement, target group placement, teacher instruction and access to the curriculum. Understanding with depth and clarity the current knowledge that a pupil possesses is arguably the most fundamental aspect of teaching and learning. It may therefore be assumed that assessment should be reasonably uncontroversial. All those with a stake in the outcomes of education seek confirmation that pupils are making sufficient progress. It seems plausible that this could be evaluated easily through the use of straightforward achievement tests. However, standardised testing is a widely debated and highly controversial issue, with a significant amount of negativity surrounding external summative assessments in terms of their reliability and their benefits to children's learning and progress.
Initially, there will be recognition of the benefits of using summative and formative assessment techniques in combination. This recognition is to emphasise that, although I personally agree with the use of standardised testing, it should not be used in isolation if a broad picture of children's learning is to be obtained. This report will not, therefore, provide a comparison of the two techniques or consider formative assessment further. It will analyse summative assessment according to its primary function: an objective and concise summary of collective abilities. Key initiatives prompted by standardised testing will then be identified. The highly controversial use of league tables will be analysed, with particular focus on the implications for schools, individual teachers and pupils. This report will argue that, although there has been considerable criticism of the use of standardised testing, it is not the tests themselves or the benefits that they provide that are the source of negative criticism, but their implementation. This will be highlighted by identifying negative teaching techniques that have been adopted in order to adhere to strict government guidelines, in comparison with positive experiences from my placement school.
The objective of summative assessment strategies, also known as Assessment of Learning (AOL), is to measure the standard of attainment reached at the end of a taught unit, by comparing it against defined standards and benchmarks. Examples of summative assessments include the Foundation Stage Profile (FSP), Standard Assessment Tests (SATs) and formal teacher assessments. The evidence collated from these is used to check the progress or attainment of the pupil in relation to areas of the National Curriculum. This provides data for national and local benchmarking, allowing comparison both across the country and internationally. Utilising this data effectively is essential as it can enable schools to determine whether children are meeting age-related expectations. If they have deviated from national expectations, targets can be reviewed and necessary provisions or interventions can be implemented. The assessment of pupil achievement and the record of their progress can also give an indication of, and evidence for, teacher and school effectiveness, and provide some accountability. Summative assessment is useful for providing a concise summary of ability and data that external sources can use to make informed decisions. It is not useful, however, for catering to individual requirements and circumstances.
Many recommendations have been made to use multi-faceted approaches when assessing performance. These include first impressions, school records, pre-tests, standardised tests, informal interactions, assignments and so on (McMillan, 2010). The main criticism of formal testing is that standardised test scores do not take into account personal circumstances and therefore are not reliable. Formative assessment, however, does consider these factors and is learner-focussed, allowing pupils and teachers to form a detailed opinion about their abilities (Tompkins, 1997). Nevertheless, teacher assessments are at the very least minimally subjective, making wide-scale comparisons inaccurate. Standardised tests are objective by nature and are often scored by computers or by people who do not directly know the pupils. They are also developed to support the curriculum, and each question undergoes an intense process to remove bias. Consequently, claims for both summative assessment and formative assessment are valid. My school recognises that children learn in different ways and therefore uses a variety of evidence to assess pupils. Key stage leaders discuss their assessment approaches, how these are reflected in test results and what those results tell us to inform future planning. The school also uses data provided by the Fischer Family Trust (FFT) Data Analysis Project in order to target provision and improve educational outcomes.
FFT is a widely used data analysis system that provides estimates for future attainment. These are based on current achievement and the assumption that pupils will make progress in a similar fashion to children in similar circumstances. These can be used as
'the basis for a discussion about expectations...to inform planning, teaching and to engage pupils and teacher' and 'to inform discussion about targets, assessment and recent performance'.
(Fischer Family Trust, 2006, p. 1)
All of these strive to improve the educational outcomes for children by using summative data in a formative manner. FFT estimates, SATs results and formal AOL are used alongside other information, such as formative teacher assessments and observations, to enable teachers to produce realistic and challenging targets. Children enjoy learning and develop new skills whilst the school still effectively prepares them for formal testing. This concept is supported by Kyriacou, who states:
'The two forms of assessment can be mutually supportive - formative assessment supports the process of learning, summative assessment measures the result' (Kyriacou, 2007, p. 247).
Formative and summative assessment should therefore be viewed as two complementary and overlapping activities. They have very different objectives and should not be compared to one another; rather, both co-exist to improve educational outcomes.
The most recognised and debated example of primary summative testing is the high-profile Standard Attainment Tests (SATs). Introduced in 1991, these tests are taken nationally at the end of Key Stages 1 and 2 and have continued to be an integral aspect of summative assessment in primary education. The key rationale of SATs was to 'genuinely give information about how children were doing in the National Curriculum' (Sainsbury and Sizmur, 1996). They were designed to be consistent and standardised, as every child of the same age and similar ability would be taking the same test. Therefore, if a child moves to a different LA or school, they will have been tested using the same regime and an accurate account can be provided in order to inform government initiatives. Standardisation is designed to ensure that national standards are understood and being applied, and that pupils located in various schools across the country can be statistically compared. Without standardised testing, this objective comparison would not be possible. Being able to accurately compare data is invaluable, and standardised testing has enabled many government initiatives to be adopted. For example, the International Comparison of Reading Standards influenced the implementation of the National Literacy Strategy (Beard, 2000) by identifying falling standards. It is strongly stressed that the key criterion when developing these strategies was 'looking at the evidence' (Thompson, 2000, p. 1) and that they 'deliberately set out to be evidence based' (Thompson, 2000, p. 1). According to The National Strategies (2011), 'Sharing children's progress data at school and LA level proved to be a powerful lever in raising expectations of particular groups of pupils and schools' (DfE, 2011, p. 8). Consequently, standards in English and Mathematics have improved dramatically since these initiatives were introduced.
Furthermore, data on factors such as ethnicity, socioeconomic status and special needs provide an opportunity to develop programmes and services directed at improving provision for these children. The longstanding government commitment to raising standards of achievement for all pupils in schools was re-affirmed in Every Child Matters: Change for Children in Schools (DfES, 2004). This focussed on raising the educational achievement of the lowest-attaining pupils. A vast amount of quantitative (and qualitative) data, derived from achievement tests and statistical records, was used to inform this research (Dunne et al., 2007). Considerable improvements have been made in order to rectify the disparity between low-attaining pupils and those who are attaining the required standard. This was made possible by the recognition that specific groups were consistently underachieving.
However, SATs in particular have attracted considerable criticism from teachers, parents, education researchers and government officials alike (Yarker, 2003), stemming from the notion that their key rationale is not being adhered to. Data derived from formal testing is under increasing scrutiny, with many stating that it is widely misused. Figures are often used for performance management purposes or to inform unrealistic targets due to incorrect interpretation of results (Baker, 2010). It was originally stated that 'the average expectation for an age 11 pupil will be level 4' (DES/WO, 1988, p. 34). However, the government have modified this so that a level 4 has become the expectation for all pupils, in an attempt to look as though they are raising standards. Several major approaches to the National Curriculum and instruction were influenced by Piaget, notably his stages of cognitive development, used today as a way to gauge a child's cognitive functioning. This permits the development of activities and learning experiences that are at the appropriate cognitive stage for the child's ability to learn (Pound, 2006). It can be argued that summative assessment and data analysis support this notion when they inform the grouping of children by identifying their current capabilities. However, by enforcing a higher expectation than research suggests is appropriate, some children are forced to aim for unrealistic targets and teachers are coerced into providing activities that may not match each child's stage of development. Furthermore, data misuse can be detrimental if test scores are maximised, causing children to be unrealistically grouped. Assessment practices that do not match the theories of human development and learning can result in disconnections or a lack of coherence. This can transpire as a fractured, inefficient and ineffective approach to schooling.
Critics also argue that test results are collated not with the intention of determining the attainment levels of individual children, but to make comparisons between the overall attainment of one school and others (Sainsbury and Sizmur, 1996). Although these comparisons are a fundamental aspect of summative testing, Hummel and Huitt (1994) argue that it is the methods of accountability, rather than the needs of the children, that actually drive curriculum and instructional practices. This occurs primarily through the use of league tables. Performance tables derived from formal testing are a major part of schools' accountability systems. Adequate accountability for delivering high-quality education should be a non-negotiable principle, yet this needs to be achieved without damaging the breadth of the curriculum, and expectations need to be realistic.
The House of Commons' Children, Schools and Families Select Committee state that 'Schools feel so constrained by the fear of failure according to the narrow criteria of the performance tables...' (2010, p. 4). The way data is interpreted is significant because the consequences of poor test scores may include public censure and a risk of reduced student numbers, and consequently funding, as parents seek to send their children to 'better' schools. A declining standing can also present serious issues in terms of recruiting and retaining the high-quality staff necessary to improve the school's position. Intense pressure is therefore placed on schools to meet national targets and maintain or improve their position in league tables. Teachers may feel personally responsible for their pupils' results (Connors et al., 2009) and may feel that the marks children achieve directly reflect their competence as practitioners.
According to Tymms, formal testing can 'generate unhealthy pressure on teachers and pupils and this leads to a narrowing of the curriculum' (Baker, 2010). This narrowing, according to research by Connors et al. (2009), is primarily caused by children being 'taught to the test' in the time preceding formal testing. This methodical approach, it is argued, may be disadvantageous to a child's learning. Hall and Ozerk (2010) also noted that teachers can be inclined to adopt 'transmission styles' of teaching, which reduce creativity in the curriculum. This mirrors B. F. Skinner's 'programmed instruction' technique, which was responsible for teaching methods that involved repetition and completing vast rows of sums without any scope for higher-level thinking (Pound, 2006). This behaviourist view sees learning as nothing more than the acquisition of new behaviours (Thornton, 2011). Educational theories have developed greatly over a number of years, causing significant criticism of such a narrow view of human behaviour, yet this form of practice still continues. If schools provide this form of learning, children are taught 'examination technique rather than developing the knowledge and skills the test is designed to assess' (Hall & Burke, 2004). This focus on memory and repetition rather than skills and application restricts the desired holistic approach to education. Schools may also prioritise subjects that will present the school in the best light, irrespective of the needs and interests of the pupils (NASUWT). The scope for creativity in lessons where teachers adopt this approach is also reduced, as there is limited opportunity to deviate from the curriculum.
My placement school continues to promote a creative curriculum, recognises the importance of adapting learning opportunities to meet individual needs and continues to provide rich learning experiences. This is achieved in conjunction with higher-than-average results in summative assessments. Pupils regularly engage in formal test situations yet view them as 'activities' that inform their targets. By using data generated formatively, teachers can identify areas where results are consistently lower and can consider alternative delivery methods or interventions. Although the staff at my placement school are subject to the same pressures when it comes to league tables and accountability, this is not transmitted to the children. According to Cullingford (2006), some children are surprisingly aware of the significance of league tables and the importance of SATs. Children may realise that test scores are important for their teachers due to the shift in curricular focus and constant preparation, consequently perceiving them as important themselves (Webb, 2006). When interviewed, many of the children felt 'worried' or 'nervous' at the thought of impending SATs, yet none mentioned the effect that their scores would have on the school or individual teachers; they saw the tests as something very personal to them and their own development. Excessive pressure can be placed on a child who is worried about the consequences of failure, increasing the likelihood of stress and anxiety, with severe cases reporting loss of sleep, loss of appetite and headaches (Hall et al., 2004). These detrimental effects illustrate the importance that some children place on SATs results (Connors et al., 2009).
It can be argued that the highest levels of stress and anxiety in children are caused by the thought of impending tests rather than the tests themselves, brought about by a negative atmosphere surrounding the preparation. Many of the children interviewed saw the more formal test situations as '...a challenge, a chance to prove to everyone what I can do' (Appendix 1). Research by Connors et al. (2009) supports this notion, stating that many children regard both the prior preparation and the tests themselves as ways of challenging themselves. This sense of challenge can increase motivation and concentration levels, with many children associating hard work with higher marks (Webb, 2006). Furthermore, some of the children interviewed stated that they enjoy formal test situations because 'it helps me concentrate', 'I don't feel nervous' and 'I feel better because I can just get on with my work and show the teacher what I can do' (Appendix 1). The concept that formal testing may have a positive impact on children's behaviour is supported by Hall et al. (2004), who state that children's concentration is improved and classroom disruption is greatly reduced by taking practice tests, as children regularly need to display these attributes under test conditions. Creating an atmosphere of normality around formal test situations through consistent participation decreases the pressure pupils face. One pupil commented before her formal Year 6 SATs test, 'We do them every year, I'm not worried, I just need to do the best I can' (Appendix 1). In fact, all the children questioned said that the school consistently tells them simply to do their best. The school supports them by providing the sorts of questions they might encounter, as well as regular opportunities to experience the formality of test situations.
Despite the considerable criticism of summative testing, my research overwhelmingly suggests that much of the criticism is not of the tests themselves, but of the way they are implemented. The intention of testing was that assessment scores should predominantly serve as indicators of what pupils knew and understood. This can subsequently be used to determine government initiatives that target provision and improve educational outcomes. This key rationale of summative testing is still valid; it seems implausible that a more accurate means of gathering data for analysis could be found that is as consistent and objective and allows such a high level of comparison. It is also suggested, by highlighting practice from my placement school, that there is a way to use summative testing successfully.
It is clear from the evidence that primary schools take the process of preparing children for SATs and formal testing very seriously. However, data has increasingly become a measure that facilitates judgements on the quality of the many elements of the education system. This has resulted in an unnecessary focus on 'blame' and an ineffective use of the data available, with intense pressure placed on individual teachers and schools. Consequently, this has not benefited the children we are striving to help. It is safe to assume that high levels of stress and anxiety are detrimental to children, teachers and schools as a whole. It is also easy to assume that inadequate teaching practices should not be adopted regardless of this pressure. Yet good teachers, who work incredibly hard despite challenging circumstances, are consistently dealing with unnecessary pressure to push children to perform beyond their capabilities. With such high stakes, this pressure can be difficult to ignore.
According to Wintle and Harrison (1999), test results are 'the most significant performance indicator used by teachers, inspectors, parents and other professionals', a view that importantly neglects the implications for the children themselves. Assessment should at all times promote, not undermine, teaching and learning, and is of little importance unless it directly improves educational outcomes for children. Summative testing has the ability to do this, yet with such high levels of data misuse it will justifiably continue to be under intense scrutiny.