Validity in relation to your subject
Within education, assessment theory can be broken down into two main areas, 'summative assessment' and 'formative assessment' – the latter is sometimes referred to as 'Assessment for Learning'.
Summative assessment can be seen as the traditional assessment method within the Scottish Secondary Education sector, for example, end of year examinations. The sole purpose of this form of assessment “...is to measure and record an individual’s attainment” (Scottish Qualifications Authority 2009c, p.5). The Scottish Qualifications Authority (SQA) defines summative assessment as
“ [an] Assessment, generally undertaken at the end of a learning activity or programme of learning, which is used to make a judgement on the candidate’s overall attainment. A key purpose of summative assessment is to record, and often grade, the candidate’s performance in relation to the stated learning objectives of the programme.” (Scottish Qualifications Authority 2009c, p.106)
This form of assessment is traditionally used in a formal sense for high-stakes qualifications; for example, the Higher Mathematics qualification is dependent on the outcome of one end of year formal examination.
Formative assessment is emerging as the prominent method of assessment within the evolving Scottish Secondary Education sector. The SQA (2009c, p.98) defines formative assessment as
“[an] Assessment that provides developmental feedback to a learner so that they can adjust their plan for future learning. It is not recorded for external purposes. Formative assessment is often called ‘Assessment for learning’”
This may help to explain Learning and Teaching Scotland's naming of its assessment development programme: Assessment is for Learning (AifL).
Each form of assessment has its own advantages and disadvantages, as outlined by the University of Luton CAA Centre:
“Summative assessment is the most useful for those external to the educative process who wish to make decisions based on the information gathered, for example employers, institutions offering further study, the courts.....It is not however very useful for communicating complex data about a student's individual abilities – are they strong in algebra but weak in calculus for example. Formative assessment on the other hand allows the students and other interested parties to form a more detailed opinion of their abilities, which can then be used to inform further study, concentrating students' efforts on the more appropriate areas and hence improving overall performance.” (CAA Centre 2002, p.6)
From personal experience, summative assessment can be a highly stressful experience, as the outcome of a course – which may represent many years of study – is solely dependent on your examination day performance. It does not take into account your overall performance throughout the course.
Until the publication of Black and Wiliam's 'Inside the Black Box' in 1998, formative assessment had been in the background. 'Inside the Black Box' does appear to be a turning point, and possibly the catalyst for change in Scottish education with a 'Curriculum for Excellence' (CfE): “.... all learners should be involved in planning and reflecting on their own learning, through formative assessment, self and peer evaluation and personal learning planning.” (The Scottish Government 2008a, p.27). Under CfE, 'National Qualifications' will move from traditional summative assessment to a mixture of assessment types (Scottish Qualifications Authority, 2009c; The Scottish Government, 2010a).
Validity of assessment
No matter the form of assessment, it is of limited value to all parties involved if it is of low validity. The SQA defines validity
“ [as] The degree to which an assessment tests the actual abilities that it is supposed to test. The appropriateness of the interpretation and use of the results for any assessment instrument. (Eg a driving test where a candidate is observed driving is highly valid. A test where a candidate describes how they would drive is less valid).” (Scottish Qualification Authority 2009c, p.112)
Therefore an important aspect of any assessment is to ensure that it has high validity.
There are many different types of validity: face validity; criterion-related validity, which can be sub-divided into concurrent and predictive; content validity; and construct validity (Black & Dockrell, 1980; CAA, 2002; Deale, 1975).
Deale (1975, p.29) defines face validity as “The test should look as if it is testing what it is intended to test” and criterion-related validity as:
“The relationship between scores on the test and some other criterion such as teachers' estimates or results on an external examination....concurrent (test results compared with another measure of the same abilities at the same time) or predictive (test results compared with another criterion such as success in a particular job or at university).”
Deale (1975, p30) suggests that “the most important aspect of validity for the teacher is content validity – the extent to which the test adequately covers the syllabus area to be tested” which would be difficult to argue against. The CCA also suggests that you also need to ensure good construct validity – “how closely the assessment relates to the domain that you wish to assess”;”Ensuring construct validity means ensuring that the assessment content is closely related to the learning objectives of the course” (CAA 2002, p.11).
Deale (1975, p.30) believes that “To have good content validity a test must reflect both the content and the balance of teaching which led up to it.”
Validity of national summative assessment
In the case of the 'Higher Still' Mathematics course, the SQA publishes an 'arrangements document' for each level of the Higher Still course. The arrangements document covers in detail the course syllabus, and hence what the class teacher is expected to teach his/her pupils in order that they may have sufficient knowledge and understanding to sit, and hopefully pass, the end of course examination. Assuming the class teacher is able to sufficiently cover the course syllabus as detailed within the 'arrangements document' with his/her class, the final examination should, in theory, have high content and construct validity.
The SQA has an obligation to ensure high content validity of the end of course examination papers, as there would be little point in the class teacher teaching to the 'arrangements document' if the questions asked in the examination had little or no relation to its contents. In discussion with teachers, it is understood that a team of senior/experienced members of the mathematics teaching profession is invited to set examination paper questions; these questions are then vetted and approved by an examination setting committee, and a final check is carried out before going to print, thus helping to ensure high content and construct validity.
In addition to this, examination papers from previous years are freely available from the SQA website, along with the appropriate marking instructions. These marking instructions enable both teacher and pupils to understand where marks are likely to be awarded and/or lost; if used appropriately, they can go towards increasing the validity of the examination further. However, there is a downside: if the same type of question appears year after year, there is a temptation for the teacher to focus on teaching the pupils to answer these types of question only, even though there is no guarantee these questions will appear in this year's examination, and this may inadvertently limit the pupils' understanding of the subject.
In order for pupils to be presented for the final end of course examination, they are internally [summatively] assessed a minimum of three times throughout the course via end of unit National Assessment Bank (NAB) tests; pupils are required to pass all three NABs before being allowed to sit the final end of course examination (Scottish Qualifications Authority, 2006; Scottish Qualifications Authority, 2009c; Webster, 2009).
Looking at the 'Higher Mathematics' course in detail, each of the three NABs only assesses that particular unit of work; it does not re-assess prior units' work, i.e. the unit two NAB would not assess unit one work, and the unit three NAB would not assess unit two or unit one work. Each unit of work contains a number of different topics, and as such each unit NAB assesses the appropriate unit topics. NABs are not graded; they are simply pass or fail. In order for the pupil to pass the NAB, they must meet or exceed the threshold score per topic, which for 2006 ranged from 62.5% to 75% (Scottish Qualifications Authority 2006, p.17).
Higher Mathematics NABs are seen by some – feedback from teachers during placement one – as only testing for a minimum level of competence, and hence provide no indication of whether a pupil is likely to pass the final examination, i.e. NABs are seen as having low predictive criterion-validity. This does not appear to be the same across all subjects; in discussion with a music teacher, music NABs are seen as testing the upper level of competence, and thus pupils that pass these NABs are highly likely to obtain a pass in the end of course examination. Some schools/teachers may also require pupils to sit an additional internal extension test, sometimes referred to as an A/B test, to allow the class teacher to better assess whether the pupils are on track to pass the end of course examination, i.e. high predictive criterion-validity.
Pupils that fail a NAB are allowed to resit the portion of the NAB they failed. Should they subsequently fail again, they are not allowed to resit and thus should not be presented for the final end of course examination. As observed during placement, pupils are given every opportunity to pass the NABs at first sitting; pupils were given three 'practice' NABs, which they attempted themselves during class and/or at home, and the class teacher would go over these with the whole class the next lesson. The only noticeable difference between these 'practice' NABs and the actual NABs themselves is that the variable letters, numbers and units of measure within questions had been changed. This does raise a question of consistency – is this standard practice within all schools? – to which I do not have the answer.
The Higher Mathematics end of course exam consists of two papers, 'paper one' and 'paper two'. 'Paper one' is a non-calculator paper with a mix of multiple choice (Section A) and extended answer (Section B) questions; 'paper two' is a calculator paper, extended answers only. The 2009 'paper one' exam consisted of twenty multiple choice and four extended answer questions, and 'paper two' contained seven extended answer questions (Scottish Qualifications Authority, 2009d). Before the papers are finalised, grade boundary meetings are held to set the minimum pass mark, which is equivalent to a pass at level 'C' (Scottish Qualifications Authority, 2009b; Scottish Qualifications Authority, 2009c). For the 2009 paper (Scottish Qualifications Authority, 2009b) the grades were defined as follows:
Total available marks: 130
Upper A: 106 and above (~81.5%)
A Grade: 93 – 105 (~71.5%)
B Grade: 77 – 92 (~59.2%)
C Grade: 62 – 76 (~47.7%)
D Grade: 54 – 61 (~41.5%)
No Grade: below 54
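As an illustration of how these published boundaries map a raw mark to a grade, the lookup can be sketched as follows. This is a hypothetical sketch, not SQA code; the function name and structure are my own, while the cut-offs are the 2009 figures above (the quoted percentages are simply each cut-off divided by the 130 available marks).

```python
# Hypothetical sketch (not SQA code): mapping a raw mark out of 130 to the
# 2009 Higher Mathematics grade bands listed above.
TOTAL = 130
BANDS = [(106, "Upper A"), (93, "A"), (77, "B"), (62, "C"), (54, "D")]

def grade(mark):
    """Return the grade band for a raw mark; below the D boundary is 'No Grade'."""
    for cutoff, label in BANDS:
        if mark >= cutoff:
            return label
    return "No Grade"

# The approximate percentages quoted above are simply cutoff / TOTAL:
for cutoff, label in BANDS:
    print(f"{label}: {cutoff}+ marks (~{cutoff / TOTAL:.1%})")

print(grade(62))   # C – the minimum pass mark
```

Checking a few marks against the table, 62 falls in the C band and 53 falls below every boundary, returning 'No Grade'.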
The papers are externally marked against the finalised marking instructions by SQA approved markers, who must have a minimum of three years' teaching experience at the level to be marked (Scottish Qualifications Authority, 2007b; Scottish Qualifications Authority, 2009a). Quality assurance procedures are in place to monitor markers' performance (Scottish Qualifications Authority, 2007b; Scottish Qualifications Authority, 2009c).
Overall, there do appear to be appropriate measures and controls in place to help ensure the validity of the 'Higher Mathematics' course.
Validity of mathematics assessment within a CfE inter-disciplinary project
Assessment of mathematics under a CfE inter-disciplinary project – a project involving more than one of the eight curriculum areas – poses its own set of challenges, which would need to be overcome to ensure high validity. The 'experiences and outcomes' for each of the eight curriculum areas, along with the three responsibilities of all practitioners – Health and wellbeing across learning, Literacy across learning, and Numeracy across learning – are detailed within the CfE folder at a high level to allow for teacher flexibility, and as such are open to interpretation.
Therefore moderation - “The process of establishing comparability standards between assessors to ensure the validity, reliability and practicality of an assessment.” (Scottish Qualifications Authority 2009c, p.100) - would need to occur to ensure that all assessors/teachers involved in the assessment of the mathematics portion of the project fully understand what is to be assessed and how it will be assessed.
In discussion with fellow PGDE primary and secondary students, a possible approach to assessing the mathematics portion of a CfE inter-disciplinary project would be to apply the following steps:
Identify a lead within the mathematics department.
The lead would be responsible for identifying the mathematics CfE 'experiences & outcomes' that the project should encapsulate, and how these should be measured. This would need to be moderated within the department; to help ensure consistency across the nation, it would be beneficial to discuss and share best practice with other schools via direct communication, 'Glow', and LTS national assessment resources and exemplars (Burton, 2009).
The lead would share the moderated output from above with the wider inter-disciplinary project team.
The project team would identify members within the team to develop appropriate lesson(s) to cover all curriculum area 'experiences & outcomes', along with success criteria, the formative assessment approach(es) to be utilised, and the supporting evidence to be gathered. There is an expectation that AifL strategies would be used to aid assessment.
Once lessons have been developed, the mathematics lead would review all lesson plans to ensure the mathematics 'experiences and outcomes' were covered.
Throughout the project, the mathematics department would assess evidence gathered from pupils to understand whether the 'experiences and outcomes' had been achieved; if not, this would be fed back into the project team for appropriate actions to be taken to address the gaps.
Following the above process should help to ensure high validity.
Section 2 – Formative Assessment in Practice
Ollerton (2006, p.218) suggests that within mathematics “the most important and positive form of assessment is the one that takes place moment-by-moment and day-by-day in lessons”, i.e. formative assessment, which relies on feedback from the pupils. Teachers TV (2008) suggests “feedback will occur if learning environments are set up where children discuss their ideas, and that needs good teacher questioning to come through.....children are actively encouraged to self and peer-assess, because they start to gauge what quality means in a piece of work, and what needs to be done to improve.”
The key features of formative assessment are: effective questioning; feedback through marking; peer assessment and self assessment; and formative use of summative tests (Teachers TV, 2008), all of which are encapsulated in AifL approaches (see Appendix 1).
To understand the impact formative assessment has in practice, an action research project – “Action research is a term which refers to a practical way of looking at your own work to check that it is as you would like it to be.” (McNiff 2002, p.6) – was undertaken during placement one using an AifL approach.
The action plan described by McNiff (2002, p.11) was to be followed:
“The basic steps of an action research process constitute an action plan:
We review our current practice,
identify an aspect that we want to investigate,
imagine a way forward,
try it out, and
take stock of what happens.
We modify what we are doing in the light of what we have found, and continue working in this new way (try another option if the new way of working is not right)
monitor what we do,
review and evaluate the modified action,
and so on …
(see also McNiff, Lomax and Whitehead, 1996, and forthcoming)”
However, due to the limited amount of time available to conduct the research, it was only possible to follow the first five steps of this action plan, as follows.
Review our current practice
While observing lessons within an S2 class, who were revising for their National Assessment Test (NAT), it was noted that only a small number of the same pupils were willing to raise their hands to volunteer answers to questions asked by the class teacher (CT).
The majority of pupils did not raise their hands, and if asked a question directly they were very reluctant to provide an answer.
Identify an aspect that we want to investigate
The focus of the action research project would be to investigate if the number of pupils willing to volunteer an answer to question(s) could be increased by using another AifL strategy.
Imagine a way forward
The CT had used a number of different AifL strategies with this class in the past; it had been observed that the CT was currently using the 'Learning intentions' and 'Traffic lighting' AifL strategies with this class. However, the CT had not made use of the 'Using wrong answers' AifL strategy with this class before, thus it was agreed this strategy would be used as part of teaching this class over one week's worth of lessons, in the hope that it would help to increase the pupils' self-confidence by creating a more supportive climate, allowing the class as a whole to explore the areas that led to these wrong answers and therefore broaden the pupils' range of experiences (Clarke, 2005a; Clarke, 2005b; Harris, 2007; Learning and Teaching Scotland, 2008).
Try it out
Before implementing 'Using wrong answers' it was important to establish a baseline measurement from which to gauge progress (McAuliffe, 2009). Of course, this also implies the importance of gathering evidence once the AifL strategy has been implemented, to determine any effect – whether positive or negative – it may have had.
As part of S2 revision, the class were required to attempt six mental maths questions per lesson. After all six questions had been asked by the teacher, the class would be asked to volunteer the answer to each of the six questions, i.e. by raising their hands. This seemed to be an ideal point to gather quantifiable evidence before, during and after the AifL strategy.
The original plan for the research was to monitor a small number of pupils, chosen at random, over a period of three weeks, in three stages. Stage one – pre 'Using wrong answers' – would be used to establish the baseline measurement; stage two would have the 'Using wrong answers' strategy in place; stage three would remove the 'Using wrong answers' AifL strategy. However, due to limited contact time with this class as a result of in-service days etc., the plan was modified to allow additional data to be gathered. Rather than having each of the three stages run for one week, this was changed to four lessons each for stages one and two, with the final stage running for three lessons.
However, when approaching the final stage there was a concern that the pupils' self-esteem could be negatively impacted by suddenly removing the 'Using wrong answers' strategy, as it appeared that this AifL strategy had positively impacted the pupils' self-esteem (details in the 'Take stock of what happens' section). The decision was made, in conjunction with the CT, to continue the 'Using wrong answers' strategy, but at a reduced rate over the remainder of the research period, i.e. two questions per set.
Take stock of what happens
Analysis of findings follows (see Appendix 2 for raw data):
Stage 1 – Before implementation.
Two pupils were willing to attempt to answer all questions.
Three were not willing to answer any questions.
Stage 2 – During “Using wrong answers” strategy
Four pupils were willing to answer the majority of questions.
One pupil was willing to answer approximately 20% of the time.
Stage 3 – Reduced rate of “Using wrong answers” strategy
One pupil was willing to answer all questions.
Three pupils were willing to answer the majority of questions.
One pupil was willing to answer questions one third of the time.
Looking at each individual pupil – names have been changed to protect pupils' privacy – and the class as a whole:
Pupil 1 - Adam
Adam appeared to be a very confident and able pupil, who was more than willing to attempt to answer questions before the implementation of the “Using wrong answers” strategy. He was seated at the back of the class with a desk to himself. I did not expect this strategy to have any impact on this pupil.
However, after the implementation of the AifL strategy, on four occasions his hand remained down: twice during stage two and twice during stage three. The reasons for this are unclear, but it may be that he was no longer commanding attention from the teacher as other pupils were now raising their hands; further research would be required to establish whether this was the cause.
Pupil 2 - Brian
Initially Brian appeared to have very little confidence, was unsure of his ability, and would very rarely contribute to class discussions. It would appear from the data that the AifL strategy had a positive impact on this pupil. He was seated next to another pupil at the rear of the class.
He never once raised his hand during stage one. During stage two he was initially reluctant to raise his hand, but by the end of the stage his hand was up more than it was down, and the same was true during stage three.
By the end of the research period Brian appeared to be more confident, and was more willing to contribute to class discussions. It was also noted that his mental maths score had improved.
Pupil 3 - Charley
Initially Charley appeared to be uninterested: always scribbling in her jotter, very rarely making eye contact with the teacher unless spoken to directly, and continually talking to her seating partner at every possible opportunity. She was seated at the rear of the class.
She never once raised her hand during stage one. However, at the start of stage two (lesson five) she raised her hand on five occasions within the one lesson, and the same again at the start of stage three (lesson nine).
Of particular interest, on the day Charley raised her hand during stage two, her seating partner had been out of class. It would appear the peer relationship between Charley and her seating partner may be a contributing factor in Charley's reluctance to actively participate in the lesson. Further research would be required to determine whether this was the case.
Note: Stage three – lesson nine – was a team challenge day, and all pupils raised their hands on this day.
Pupil 4 – Debbie
Debbie appeared to be very confident and liked attention. She would continually interrupt the class by calling out or trying to distract fellow pupils. Debbie was seated at the front of the class by herself.
Throughout the whole research period she was very willing to attempt to answer questions; however, during lesson seven she never once raised her hand to answer any questions (not just the mental maths questions) and was quieter than normal. On overhearing a conversation between other pupils as they left the class, it appeared that Debbie had been involved in a heated discussion with another pupil just before class, which may help to explain the unusual change in her behaviour. Of course, there could have been other contributing factors at play, which would require further research.
Pupil 5 - Edith
Edith was a quiet pupil and appeared to be having difficulty with mathematics in general. She was seated at the front of the class with a desk to herself.
During stage one she never once raised her hand, however during stages two and three she raised her hand the majority of the time.
Similar to Brian, it would appear this AifL strategy also had a positive impact on her. It was also noted that her mental maths score had been improving.
While I do not have any hard data in support, it did appear this AifL strategy had a positive impact on the class as a whole. At the start of the research period only a small number of hands were being raised; by the end of stage two the majority of pupils were willing to attempt to answer questions, hence the decision to carry on with this strategy, albeit at a reduced rate, in stage three.
Of interest was the way in which the “Using wrong answers” strategy appeared to have created a more supportive climate and allowed pupils to share the methods they had used to obtain their answers to the mental maths questions.
Also worth noting: when checking pupils' answers to the mental maths questions, approximately 50% of pupils were scoring four or fewer out of six at the beginning of this research project, and by the end every single pupil in the class was scoring four or more out of six.
To conclude, given the limited time available for this research project, the small sample size, the single measurable data point, and numerous other variables, it is very difficult, if not impossible, to make any definitive claims from this research. However, from the limited data gathered from this particular group of pupils, there does appear to be an increase in the number of pupils willing to volunteer to answer questions as a result of the “Using wrong answers” AifL strategy. One pupil, Adam, may have been negatively impacted as a result of this strategy, which reinforces the importance of taking individual learning needs into account (Harris, 2007; HM Inspectorate of Education, 2006; Scottish Executive, 2004).
Section 3 – Professional Development Reflection
The assignment as a whole has led me to question in depth, and thus better understand and (re)evaluate, where I believe I am in my own professional development as it relates to satisfying the eight benchmarks of Initial Teacher Education (ITE), as per The Standard for Initial Teacher Education (SITE) (Scottish Executive, 2000; University of the West of Scotland, 2009).
The eight benchmarks as detailed within appendix 1 of the 'General approaches to school experience' document are: Curriculum; Education Systems and professional responsibilities; Principles and perspectives; Teaching and learning; Classroom organisation and management; Pupil assessment; Professional reflection and communication; Professional values and personal commitment (University of the West of Scotland, 2009).
Each section of the assignment has touched on a number of these benchmarks. Looking at each section and sub-section in turn:
By accessing and evaluating relevant professional literature (benchmark 2.4.1), I now have a greater understanding and appreciation of assessment theory: the advantages and disadvantages associated with the key forms of assessment, summative and formative, and how these relate to pupil assessment (benchmark 2.3.1).
While I may understand the theory, at this early stage in my ITE I have limited experience of putting theory into practice, and as such need to ensure I take a balanced view of other teachers' experiences, views and opinions, and continue to reflect accordingly to aid my own development (benchmark 3.2).
Validity of national summative assessment
Having been away from education for many years, my understanding of the current national summative assessments was out of date prior to this assignment. However, through research (benchmark 2.4.1) I now have a detailed understanding of the processes and procedures the SQA has in place to help ensure the validity of the national summative assessment.
While I appreciate the need for national summative assessment for high-stakes qualifications at this moment in time, I do question the true value of these assessments; if future decisions are based only on the outcome of these assessments, then they do not give a holistic overview of the pupil – all they do is confirm the pupil met a certain standard at the moment in time they were assessed (benchmark 3.1).
Validity of mathematics assessment within a CfE inter-disciplinary project
Subject assessment within a CfE inter-disciplinary project creates opportunities to work with, and build relationships with, others outside of your department, which can only help to broaden everyone's experiences (benchmarks 2.1 & 3.3) and in turn could help with curriculum development across the whole school (benchmark 2.4.3).
Having personally been part of many inter-disciplinary project teams involving many different cultures, the advantages outweigh any of the perceived disadvantages. The key learning from these projects is the importance of listening to and valuing everyone's opinion, no matter their background. Some of the best solutions to issues have come from people outside of the specialist subject area.
Formative Assessment in Practice (Action research)
This aspect of the assignment gave me a more in-depth understanding of the benefits of research in education (benchmark 1.3.2) and the benefits formative assessment could have on pupils' attainment (benchmark 2.3.2).
Initially I had reservations about conducting an action research project, as I did not see any value in conducting research over a short period of time and with a small pupil sample size. However, I had underestimated the impact formative assessment could have in practice on a pupil's learning and confidence.
Overall, this assignment has been a valuable learning experience for me. I had initially underestimated the amount of research and reflection time required for this assignment and will not make that mistake again – this type of assignment is new to me, and I have spent many hours reflecting on how best to approach it.
I have learned new skills, for example: how to access and evaluate professionally relevant literature (benchmark 2.4.1); taking responsibility for my own professional development (benchmark 3.2); and forming my own opinions, which should hopefully go towards improving teaching practice as a whole; and I have learned the importance of taking the time to reflect.
Word length: 4950
This has been submitted via 'Turnitin'.
Appendix 1. - AifL approaches
Brainstorming
A means of generating ideas by asking individuals or a group to produce quickly as many thoughts on a topic as they can.
A question cleverly structured to discover whether or not true understanding has taken place.
Opportunity for pupils to discuss answers with one another before contributing to a whole-class session. Similar to think, pair and share.
Comment-only marking
Assessment with no reference to grades or marks achieved. Comments should advise on how to bridge the gap between present performance and desired goal. They should emphasise strengths but highlight one way of making this work or future work even better.
CUBE
Acronym for: Circle the words that tell you what to do; Underline the key words; Box any sources you have to refer to; Explain in your own words what you have to do. This strategy was devised to help pupils to understand the questions in Standard Grade Geography and so construct better answers. It appeared in a unit of work produced by Glasgow Corporation (now Glasgow City Council) Geography Group.
Hot-seating
Can be a means of revealing pupils' knowledge and understanding, before, during or after a series of lessons on a given subject. A pupil sits in the 'hot-seat', and his or her responses to questions become the basis for assessment, by the teacher and/or by other pupils.
Jigsaw
A method of structuring positive interdependence among pupils in a class. Different groups or individual pupils work on different aspects of a learning task or research project, perhaps using different resources. They are expected to know their part of the work well enough to teach it to others. The pooled learning then constitutes a totality to which the different groups or individual pupils have contributed separate elements.
This is complementary to wait time. Pupils are given sufficient time to think about their answer and to note their thoughts before speaking.
Learning intentions
Learning intentions are goals for learning in a lesson or series of lessons. They may be related to a process or the final product. WALT (We Are Learning Today) is a means of helping pupils to understand this abstract concept.
Metacognition
Recognition on the part of the learner that learning has taken place, or is taking place. It involves understanding and appreciating the factors that make learning possible and one's own strategies and processes of learning. Black and Wiliam stressed that opportunities for self-assessment and reflection are crucial for improving learning, and there is a range of research evidence indicating that metacognition tends to be associated with effective learning.
Mind Map® / Mind Mapping
Developed by Tony Buzan in the 1970s to help students learn by creating a visual representation of links among ideas, including the association of new ideas with existing knowledge and experience. Mind Mapping may involve the use of colour and images, as well as key words/phrases.
A demonstration of an activity, emphasising the stages of the process and the standard of work expected.
Next steps
Refers to assessment advice on bridging the gap between present performance and the desired goal. Advice on next steps should point to the most immediate action to be taken to bring about improvement.
'No hands up'
By establishing a rule of 'no hands up' in a question-and-answer session, distractions are reduced and pupils have more time to think (see waiting time). Because of this, everyone is expected to be able to offer an answer.
'No marks' homework
Work done out of class and assessed without awarding grades or marks. Alternatives to grades or marks are self-assessment by assigning red, amber and green traffic lights to indicate level of confidence, or teacher assessment in the form of comments advising on ways of bridging the gap between present performance and desired goal.
Pair, share and square
A variation of the think, pair and share strategy for more effective questioning. Pupils have opportunity to think and time to share with a partner, before discussing ideas with another pair in the group.
Peer assessment
Peer assessment can be the bridge between teacher assessment and self-assessment. That is, it can be a stage in the process of helping pupils become confident and skilled in self-assessment, as opposed to relying always on the teacher. In peer assessment, pupils provide feedback on others' work. It works best if the criteria have been shared and fully understood, if the teacher has modelled the process and if she/he monitors the quality of it as pupils undertake it.
Pupils or colleagues provide advice on others' work, based on peer assessment, making clear the strengths and an area for improvement. In some schools the catch phrase 'two stars and a wish' has established a culture where everyone expects constructive advice.
A means of checking how well pupils have understood a topic or issue – or, perhaps more significantly, one of ensuring high motivation in undertaking a task or project. For example, in jigsawing, pupils might read materials and research a range of resources, becoming 'experts' on a topic, which they then teach to others.
Post-box
A non-threatening strategy to discover potential areas of misunderstanding. A box is set up as a 'post box' into which pupils 'post' questions. The teacher collects the 'post', reads the questions at random and addresses any misunderstandings in a session with the whole class.
Pupils are made aware of the standard of work expected at a particular level. It is important to ensure that criteria are translated into language accessible to the learner. It is helpful also if pupils are able to discuss examples of work which does / does not meet the criteria.
A fun way of establishing prior knowledge at the start of a lesson or series of lessons.
Success criteria
Statements of standards by which success in an enterprise, for example a test/examination or a development plan, can be decided. They specify the acceptable evidence that the aim(s) of the enterprise has/have been achieved.
A modelling exercise in which the teacher works through the different stages of an activity, emphasising the stages of the process and the criteria for success.
Thinking time
See wait time / waiting time.
Think, pair and share
An activity to encourage higher-order thinking. Pupils consider a question independently then discuss ideas with a partner before sharing them with the class. Variations include pair, share and square.
Traffic lights
A means of self-evaluating levels of confidence. Green means 'I can do this', amber means 'I'm reasonably confident', and red means 'I need assistance'. It is particularly effective when pupils are involved in establishing, or at least understand clearly, the criteria for success.
'Two stars and a wish'
Feedback to pupils which makes reference to two strengths in the work and one area for future development. This enables learners to build on prior learning and breaks the process of improvement into manageable steps.
Using wrong answers
Appreciating the value of wrong answers as a means of exploring areas of misunderstanding. The slogan 'it's alright to be wrong' is aimed at helping pupils move away from the idea that only a correct answer is acceptable.
Wait time / Waiting time
The concept, highlighted by Black and Wiliam, of allowing time to elapse between asking a question and asking for answers. The point is to enable pupils to think, to link the question to schemata of knowledge they already possess, before having to articulate the answer. Also known as 'thinking time'.
WALT
Acronym: We Are Learning Today. Outline of a learning intention, developed by Shirley Clarke. WALT is sometimes characterised as an owl or other animal.
WILF
Acronym: What I'm Looking For. Exemplification of success criteria, developed by Shirley Clarke. WILF is sometimes characterised as an animal or in comic human form.
Appendix 2. - Action Research Data