Educational assessments


ABSTRACT

Educational assessments have come under scrutiny for many years. In a time of technological change, e-assessment has become a live issue in the world of educational assessment. The implications of this change have attracted considerable media coverage, ranging from substantial support to considerable opposition. The media article “Exams are a vital lesson” by Hilary Douglas will serve as an example of an assessment issue raised by the national press. The paper highlights how continuous assessment has also emerged as an issue accompanying e-assessment. In this paper, one argues that the functions of assessment must be understood in order to appreciate fully why this change is being proposed and to embrace the new opportunities that modern technology provides. In addition, one outlines some of the issues that must be considered and the difficulties that must be overcome before continuous assessment and e-assessment can become a complete reality. In conclusion, it is evident that the age of e-assessment has arrived, but there are still many hurdles to overcome before the full potential and benefits of e-assessment are put into practice.

Introduction

There is no doubt that assessment and testing have a strong effect on the lives and careers of young people. According to Black and Wiliam (2006:9), ‘Assessment in education must, first and foremost, serve the purpose of supporting learning'. But what exactly is assessment? Assessment is defined by Linn and Miller (2006) as the process of gathering and discussing information from multiple and diverse sources in order to develop a deep understanding of what students know, understand and can do with their knowledge as a result of their educational experiences; the process culminates when assessment results are used to improve subsequent learning. Assessment serves many functions, and there are big educational gains associated with good assessment, as Black and Wiliam (1998:3) review in their study:

‘All… studies show that… strengthening… formative assessment produces significant, and often substantial, learning gains. These studies range over ages (from 5-year olds to university undergraduates), across several school subjects, and over several countries.'

However, in many instances, assessments taken for progression purposes may be seen purely as artificial hurdles to cross in young people's quest for employment or further education. This paper will highlight issues regarding the functions of assessment that help one to understand how, first and foremost, the purpose is to support learning.

In the eyes of many educational professionals, an extraordinary variety of classroom-targeted initiatives has been unleashed on schools over the last decade and more, all with the same general aim: the improvement of pupil learning. Assessment by teachers, whether formative or summative, is one of these developments and is considered to offer significant potential for improving pupils' learning (Harlen, 1997). This development is ongoing, as evidenced by one of the latest media articles, headlined “Exams are a vital lesson” (July 19th, 2009).

The article by Hilary Douglas identifies current trends and issues regarding the functions of assessment and current and future assessment practices. In particular, the article focuses on a statement by the head of the Cambridge Assessment exam board that “there will be a shift from traditional high-stake summative assessments to be replaced by computerised online testing.” The idea behind the scheme is that students could take a test whenever they are ready and resit it as many times as necessary to get a good mark. Continuous assessment would totally replace the three-hour written exam, rather than the mix of coursework and traditional testing that is currently the norm.

However, the introduction of continuous assessment as proposed in the article is not, in itself, ground-breaking. Originally, A-levels were assessed through one set of exams at the end of a two-year course. As Douglas (2009) indicates, Curriculum 2000 was introduced nine years ago, when pupils were allowed to gain credit for their courses as AS pupils at the end of their first year. They were also allowed for the first time to take exams as many times as they liked until they and their teachers felt they had achieved the optimum mark.

Even though exam boards such as OCR have already trialled e-assessment in environmental and land-based science since 2007, with 1,800 candidates and 80 schools using it this summer (Douglas, 2009) and the approach proving popular with students and teachers alike, many educational experts warn that the move could be an open door to the most appalling cheating. They argue that testing all pupils around the country in the same way, at the same time and under the same circumstances is the only true way to compare results meaningfully.

In addition, Alan Smithers (cited in Douglas, 2009), professor of education at Buckinghamshire University, feels that the move must be stopped at all costs. “Making judgement about performances isn't easy,” he says. “The best way of doing it is dispassionate assessment of students tackling the same tasks under the same conditions.”

It is evident that a move from traditional summative assessment to continuous assessment and e-assessment will bring both challenges and opportunities regarding issues of assessment and may recontextualise the function of assessment. This paper will begin with an examination of the functions of assessment and pay particular attention to the issues this change could bring to schools, colleges and, more importantly, students. Current practices in continuous assessment and e-assessment will all aid in understanding the issues this change in assessment practice may raise.

Functions of Educational Assessment

According to Newton (2007), when considering optimal design characteristics for future assessment systems, it is necessary to bear in mind the underlying purpose of those systems. Overall, it must be taken into account that a system which is fit for one purpose will not necessarily be fit for all purposes, and this is something continuous assessment and e-assessment proposals need to take into consideration.

The term ‘assessment purpose' may be interpreted in a variety of different ways; here one will identify the three levels mentioned by Newton (2007):

1. Judgemental level (concerns the technical aim of an assessment event, e.g. the purpose is to derive a standards-referenced judgement expressed as a grade; a usage commonly associated with official documents)

2. Decision level (concerns the use of an assessment judgement, i.e. the decision, action or process it enables, e.g. the purpose is to support a selection decision for entry into higher education)

3. Impact level (concerns the intended impacts of running an assessment system, e.g. the purposes are to ensure that students remain motivated and that all students learn a common core for each subject)

(Newton, 2007)

It is important to understand that where these discrete meanings are not distinguished clearly, their distinct implications for assessment design may become unclear. In this situation, policy debate is likely to be unfocused and system design is likely to proceed ineffectively (Newton, 2007). So at what level are the new proposals aimed?

The change proposed by the head of the Cambridge Assessment exam board brings a change to high-stakes summative assessment. ‘High stakes' is a term used to denote those situations where interest in an assessment goes beyond the immediate sphere of educational measurement and beyond those individuals who sit the tests (Messick, 1999). In addition, as many writers have pointed out, the stakes may be higher but the technical problems associated with assessment remain the same, in that all assessment, whether high-stakes or low-stakes, needs to be valid and reliable (Linn, 2000:1). The American Educational Research Association (2000) noted that:

If high-stakes testing programs are implemented in circumstances where educational resources are inadequate or where tests lack sufficient reliability and validity for their intended purpose, there is the potential for real harm.

Therefore, if anything must prevail through these changes in assessment, it is the need for them to be valid and reliable. So what changes are being proposed, and what differences are there between forms of assessment? This leads one to the unenviable task of briefly drawing a distinction between summative and formative assessment.

It is not one's intention to provide an extensive literature review on formative and summative assessment, but rather to set out a working theory that has been taken into account throughout this paper. The perspective of Harlen and James (1997) and Harlen (2005) on summative and formative assessment has been adopted. Harlen and James (1997:372) attempted to distinguish formative from summative assessment by listing contrasting characteristics: for example, summative assessment needs to prioritise reliability, while formative assessment needs to prioritise validity and usefulness; formative assessment treats inconsistent evidence as informative, while summative assessment treats inconsistent values as errors. Harlen (2005) subsequently developed this argument, and further clarified the distinction between formative and summative as follows:

The two main purposes of assessment discussed in this article are for helping learning and for summarizing learning. It is sometimes difficult to avoid referring to these as if they were different forms or types of assessment. They are not. They are discussed separately only because they have different purposes; indeed the same information, gathered in the same way, would be called formative if it were used to help learning and teaching, or summative if it were not so utilized but only employed for recording and reporting. While there is a single clear use if assessment is to serve a formative purpose, in the case of summative assessment there are various ways in which the information about student achievement at a certain time is used. (Harlen, 2005, p. 208)

Therefore, for the purpose of this paper it is useful to highlight that, although people often seem to think the distinction turns on the nature of the assessment event, it in fact turns on the use to which the assessment judgement is put. One must take into consideration that, whatever the nature of a judgement, nothing formative happens unless the judgement is used in an attempt to improve learning. Therefore, even though one might assess via summative means, there is always the opportunity to provide formative feedback and coach students on where they have gone wrong. This may be done through continuous assessment.

Continuous Assessment

The move from the traditional three-hour exam to continuous assessment brings both issues and opportunities for educational establishments. Continuous assessment, according to the Federal Ministry of Education, Science and Technology (FMEST, 1985), is defined as a mechanism whereby the final grading of a student in the cognitive, affective and psychomotor domains of behaviour takes account, in a systematic way, of all his or her performances during a given period of schooling; such an assessment involves the use of a great variety of modes of evaluation for the purposes of guiding and improving the learning and performance of the student. This mode of assessment is considered adequate for assessing students' learning because it is comprehensive, cumulative, systematic, and guidance and diagnostic oriented. The ability to assess continuously will enable the teacher to understand where the student is having difficulty and to act through formative assessment.

But what is the purpose of this change and, relating back to the previous section, what is the purpose of this educational assessment? In the case of continuous assessment, its purpose sits at the impact level, which concerns the intended impacts of running an assessment system that attempts to ensure students remain motivated and that all students learn a common core for each subject.

It is here that, even though the proposal is to bring in computerised online testing (which shall be dealt with later under e-assessment) producing mainly summative judgements, those judgements may also be used for formative assessment. The ability for students to resit exams allows both the student and the teacher to use a summative assessment and, if the student was unsuccessful in their first attempt, to utilise the result for formative purposes. How? It allows the student and teacher to address where exactly things have gone wrong, enabling assessment procedures and practices to develop to support learning and to underpin rather than undermine student confidence, achievement and progress. James and Pedder (2006:110) state that ‘feedback focused on helping students to improve sharing criteria of quality'. This point cannot be overstated, as the type and quality of feedback given to the student via formative assessment has been seen as crucial in other studies (Black and Wiliam, 2008). However, will this change make a difference to students' perceptions of learning, and more so of assessment? Will these changes bring big cultural differences to educational establishments?

Entwistle (1991) helps one to understand some of the issues regarding continuous assessment and current practices. The study found that students' perceptions of the learning environment determine how they learn, not necessarily the educational context in itself. It is evident from the study that formative assessment and continuous assessment may have a significant effect on what students learn and, especially, how they learn. Gibbs (1999) has hence suggested that if students see assessment as the curriculum, effective teaching needs to use this knowledge in order to use the power of assessment strategically to help students learn. Biggs (2002) echoes the same point when he says that students learn what they think will be assessed rather than what is in the curriculum.

The change from traditional assessment to continuous e-assessment will likewise have an impact on learners' experience of evaluation and assessment, determining the way in which they approach learning (Struyven et al, 2005). Assessment can thus be regarded, logically and empirically, as one of the defining features of students' approaches to learning (Entwistle and Entwistle, 1991; Ramsden, 1997). Within the assessment proposed in the article, students are likely to take a strategic or achieving approach to learning, where Entwistle et al (2001) believe the student's intention will be to achieve the highest possible grades by using well-organised and conscientious study methods and effective time management, something that one, along with possibly many other teachers, sees as a positive and encouraging change.

Interestingly, the study by Marton and Saljo (1997, cited in Jacob et al, 2006) serves as a good example of the relation between approaches to learning and assessment. A total of 153 students from four subjects in Engineering and Business degree streams at one university participated in the study. Results showed that continuous assessments were preferred over a single assessment by a 78% majority. Among the popular reasons for the preference were the ease of studying small topics and hence being able to score good marks easily. Other comments in favour of continuous assessment were that coursework marks can be better because of the weighting given to each assessment, and that it builds a stronger foundation as one moves from one topic to the next: it forces one to learn topics properly before going on, and each topic is given emphasis throughout the continuous mode of assessment (Marton and Saljo, 1997).

However, not all comments were favourable. Comments against this type of assessment included that too many assessments rob one's time to learn other subjects, and that frequent assessment keeps you in revision mode all of the time, with no relaxation (Marton and Saljo, cited in Jacob et al, 2006).

What is evident from the research is that continuous assessment helps to check on learning, and that learning happens in steps, not just for the final exam. Does this point to the learning strategy adopted by the students? They seem to need a check on their learning through tests, which they prefer in small units. But the reasoning behind this was not so much an eagerness to master the topic as such, but simply to make sure that their scoring was helped.

Regarding coursework grades, the study concluded that candidates who follow a series of continuous assessments produce an enviable majority of higher achievers. However, the data show a negatively skewed distribution. This may have implications for the study's reliability, as the existence of positively or negatively skewed distributions will tend to reduce the reliability of a test. However, these results are typical of coursework grades, especially if they are designed to test competency. In continuous assessment, with regard to assignments, students are expected to search for and synthesise information on the basis of its relevance to the given assignment. If formative feedback from teachers is given correctly, it should aid students' learning. Overall, if students are able to complete the tasks, they will obtain higher marks.
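To make the point about skew concrete, the following is a minimal, purely illustrative Python sketch; the coursework marks and the helper function are invented for the example and are not data from the study discussed above.

```python
# Illustrative only: invented coursework marks bunched near the maximum,
# the pattern described above as a negatively skewed distribution.
import statistics


def sample_skewness(values):
    """Adjusted Fisher-Pearson sample skewness coefficient."""
    n = len(values)
    mean = statistics.fmean(values)
    sd = statistics.stdev(values)
    m3 = sum((x - mean) ** 3 for x in values) / n
    return (n ** 2 / ((n - 1) * (n - 2))) * m3 / sd ** 3


# Hypothetical marks: most candidates sit close to the maximum, with a
# long tail of lower marks below them.
coursework_marks = [58, 72, 80, 84, 86, 88, 90, 91, 92, 93, 94, 95, 96, 97, 98]

print("skewness:", round(sample_skewness(coursework_marks), 2))  # prints a negative value
# Marks piled against the maximum restrict the usable score range, which is
# one reason reliability coefficients computed on such scores tend to be depressed.
```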

The study also found that students who did not perform well in continuous assessment experienced poorer grades, which were fairly normally distributed. Is the power and influence of coursework evident here? Are some children helped more than others? It is here that educational establishments may run the risk of communicating to students that each unit or piece of coursework is a stepping stone to certification rather than part of a life-long learning experience. Such perceptions encourage students to take a strategic approach to their studies, and may lead them to resort to plagiarism, cheating and using ‘Rules of the game', or ROGs, as Norton et al (2001) name them. ROGs are an indication that students perceive a hidden curriculum in which tutors say they want certain things in the assessment task. Here, questions of validity may arise. Taking into account Cook and Campbell's (1979) definition of validity as the “best available approximation to the truth or falsity of a given inference, proposition or conclusion”, one has to assess whether students are achieving better grades because they are motivated, working harder and coping with smaller units, or because of a tendency for students to receive coaching and specific information that helps them ‘push up' their grades.

In addition, Black et al (2006) reiterate this by indicating that, far from promoting an orientation towards student autonomy, such practices are interpreted as techniques to assure award achievement and probably help students who are more dependent on their tutors and assessors rather than less dependent (Torrance, 2007). The modularisation of A-levels is a perfect example where greater transparency of learning outcomes, and of the criteria by which they are judged, has benefited learners in terms of the increasing numbers retained in formal education and training and the range and number of awards which they achieve (Savory et al, 2003). Clarity in assessment outcomes, processes and criteria has underpinned the widespread use of coaching, practice and provision of formative feedback to boost individual and institutional achievement.

In addition, the research evidence reported suggests that such transparency encourages instrumentalism (Savory et al, 2003). Transparency of objectives, together with extensive coaching and practice to help learners meet them, is in danger of removing the challenge of learning and reducing the quality and validity of the outcomes achieved. Torrance (2007:282) describes this as a move from assessment of learning, through the currently popular idea of assessment for learning, to assessment as learning, where assessment procedures and practices come completely to dominate the learning experience and ‘criteria compliance' comes to replace ‘learning'. This is something that needs to be fully researched if continuous assessment and unrestricted resit options are to be made available for all curriculum subjects. At this stage it is also imperative to highlight that the study by Marton and Saljo (1997) serves only as an indicator of what may be experienced in an educational setting: with a sample of 153 students from just four subjects in Engineering and Business degrees at only one university, the perceptions and results may be significant for that specific study but might not necessarily generalise to other educational establishments. This now leads one to evaluate e-assessment, the function of its assessment and current assessment practices.

E-assessment

The proposal to introduce e-assessment brings strengths, weaknesses, opportunities and threats to any educational establishment. But before we deal with these, it is important to understand exactly what e-assessment means. The term e-assessment covers the variety of ways in which computers can be used to assist the assessment process. This might include using computers to administer an assessment for formative or summative purposes (Attali and Burstein, 2006). The proposal to introduce e-assessment is not a new one. Ken Boston (chief executive of the Qualifications and Curriculum Authority in 2004) was bullish about the power of technology to transform the educational experience of millions of pupils, but that was back in 2004, and few experts would say that he has been proved right. In fact, five years on, none of the predictions Boston made that day has turned out to be correct. For many in this field, the big question has been why, given that technological change has happened so quickly in many other areas of life, taking exams still means, for most pupils, scribbling on paper.

However, multiple-choice questions (MCQs) are a perfect example of how educational establishments have embraced the development of e-assessment. MCQs can be used as a means of supplementing or even replacing existing assessment practices. The growth in this method of assessment has been driven by wider changes in the higher education environment, such as growing student numbers, modularisation and the increased availability of computer networks. MCQs are seen as a way of enhancing opportunities for rapid feedback to students as well as a way of saving staff time in marking. However, there are recognised limitations with this method. Firstly, some researchers discourage the use of MCQs, arguing that they promote memorisation and factual recall and do not encourage higher-level cognitive processes (Scouller, 1998). Other researchers, however, maintain that this depends on how the tests are constructed and that they can be used to evaluate learning at higher cognitive levels (Johnstone & Arnbusaidi, 2000). The advantage of MCQs with regard to assessment is their high level of reliability, which can be beneficial in an alternative form of assessment.
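As a purely illustrative aside, the short Python sketch below shows the kind of instant marking and per-question feedback that computer-delivered MCQs make possible; the questions, answer keys and feedback messages are all invented for the example and do not describe any of the systems discussed in this paper.

```python
# Invented example of automated MCQ marking with instant feedback.
questions = [
    {"id": "Q1", "key": "B", "feedback": "Revise the distinction between formative and summative assessment."},
    {"id": "Q2", "key": "D", "feedback": "Reread the section on reliability and validity."},
]


def score_mcq_test(questions, answers):
    """Return the mark plus feedback for each incorrectly answered question."""
    correct = 0
    feedback = {}
    for q in questions:
        if answers.get(q["id"]) == q["key"]:
            correct += 1
        else:
            feedback[q["id"]] = q["feedback"]
    return {"score": correct, "out_of": len(questions), "feedback": feedback}


# One candidate's responses, marked the moment they are submitted.
result = score_mcq_test(questions, {"Q1": "B", "Q2": "A"})
print(result)
# {'score': 1, 'out_of': 2, 'feedback': {'Q2': 'Reread the section on reliability and validity.'}}
```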

Nevertheless, the real difficulty for e-assessment lies in the nature of examining. It is a high-stakes activity, as we have observed previously, which is closely scrutinised. Boyle (2009) observes that there is genuine aversion to risk in this area: within government, within providers of assessment, and amongst students, parents and staff. Because of this, things will tend to move slowly. Boyle (2009) adds that e-assessment presents some serious practical challenges. Having an entire year group sit an exam at the same time, as happens with major conventional GCSEs now, would necessitate having two sets of computers, one for those taking the tests and another for other year groups, which is expensive and often impractical. This therefore brings technical difficulties in implementing such initiatives.

Taking into consideration past experiences, namely the compulsory ICT exams for 14-year-olds, it is not hard to see why the predicted boom in e-assessment has not occurred. In 2007, the government had to pull the plug on a compulsory ICT exam for 14-year-olds, developed over five years at a cost of £26 million (Mansell, 2009), after it was found to produce results for pupils that were dramatically different from teachers' own assessments of their charges' work. It was due to become statutory last year but, in the end, was offered only voluntarily to schools. The repercussions were highlighted by Andre Harland, chief of the Examination Officers' Association, who stated: “it did highlight some potential big risks and problems with e-assessment. The test involved taking computers in a school out of operation at the same time, and it just did not prove deliverable in the end.” Such a problem with reliability in summative assessment is a fundamental flaw: as Harlen and James (1997) reiterate, reliability in summative assessment is crucial.

In addition, Boyle (cited in Mansell, 2009) and officials from all five exam boards in England, Wales and Northern Ireland set out other problems, including that it may be easier to cheat by looking over someone's shoulder at what is on screen rather than on a desk, and the difficulty of ensuring that hi-tech testing does not introduce some change in the standard of the exam.

However, it is one's belief that the proposal made by Lebus, the head of Cambridge Assessment referred to above, focuses mainly on the computerisation of the externally set and graded high-stakes summative examinations of educational attainment that lead to qualifications. Surprisingly, an article by Polly Curtis in the Guardian titled “Computerised testing likely to replace traditional exams, says head of board”, published on 12th July 2009, reported Lebus as saying that traditional-style exams would still be available for those who preferred them, but that the new system would benefit students who are exam-phobic: “There are some people obviously who get very frightened by exams or couldn't for other reasons do them well.” One must draw attention to this statement. Just a week after that article, Hilary Douglas (2009) stated that continuous assessment would totally replace traditional exams, without providing all the information. This brings to light issues with the reliability and validity of the information the media publish when dealing with important assessment matters, and demonstrates a sensationalist approach to a serious educational issue and, above all, the manipulation of information.

In the case of A-levels, where continuous assessment and resit opportunities are already implemented, the computerisation of these would be a good starting point for high-stakes summative assessment. But why computerise?

Why computerise a conventional test if the new test is meant to assess exactly the same things? Perhaps the most common reasons given are that computerised tests will deliver:

I. Increased efficiency/lower costs

II. Greater flexibility regarding administration (e.g. test on demand vs tests at fixed - and infrequent - times)

III. Instant scores/feedback

IV. Fewer errors

V. Positive publicity through being seen to be ‘up-to-date'

VI. The first step that must be taken before more sophisticated computer-based assessments can be introduced.

(Raikes and Harding, 2003)

At present, most academic qualifications aimed at 16-18 year-olds in the UK are assessed through a mixture of coursework and summative pen-and-paper examinations. Written examinations are still handwritten on paper and are often criticised for constraining education, inhibiting classroom innovation, stifling students' creativity and being increasingly divorced from an ever more technological world (Heppel, 2003). There is therefore pressure to develop assessments that make full use of IT developments, not just in low-stakes assessments but in high-stakes assessments alike.

In practice this can be hard to achieve, for two main reasons, even if the innovative assessments exist. Firstly, schools and colleges differ in the quality and quantity of their ICT infrastructure, in their ICT support and in the level of ICT skills possessed by teachers. In such circumstances it would be very difficult for an examination board to introduce a high-stakes, innovative computer-based test that would be accessible to all schools and colleges and, moreover, that would not disadvantage students from schools and colleges with impoverished ICT resources. In addition, likely demands for equity in assessment would require a traditional paper-based exam. Secondly, a very high value is placed in the UK on the maintenance of ‘standards' from year to year, and this would be difficult to demonstrate clearly, since written tests define past standards. The controversy stirred up in the UK in 2002 about the results of new A-level examinations was caused largely by ‘the absence of a clear understanding of the standards or levels of demand' (Tomlinson, 2002) and how they related to the previous A-level system; this again serves as an example of the dangers involved in introducing entirely new types of high-stakes assessment.

Both the equity and the standards difficulties may be addressed by first computerising existing tests. Equivalent pen-and-paper and computer versions of the same test may then exist in parallel, which will allow all stakeholders to focus on the migration from pen and paper to computer (Raikes and Harding, 2003). When almost everyone is taking the tests on computer, it becomes easier to introduce some innovation. A process that moves in gradual stages is believed to facilitate the move towards valid tests whilst reducing concern about standards.

However, there are already instances where e-assessment is being implemented and showing signs of success. Literature from Linn and Miller (2005) indicates that the time required is a major issue when it comes to assignment marking. Two main factors are to be considered: the time spent on administrative tasks and the time actually spent engaging with students' work and providing quality feedback. E-tools are developing and bringing positive changes for teachers. The areas where e-tools can make a real impact on administrative efficiency include providing documents that are easily accessible to all involved, accepting assignment submissions, dealing with safe and secure storage, managing the distribution of assignments to markers, facilitating communication within the marking team and returning marking sheets, in addition to the advantages mentioned earlier.

Detecting plagiarism was another issue mentioned as a major advantage of using e-tools. Having the assignment in electronic form means it can be cross-checked against past years' assignments and current assignments, and an e-tool like Turnitin can also screen for quotations from textbooks (Heinrich et al, 2009).
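By way of illustration only, and not as a description of Turnitin's actual algorithm, the Python sketch below shows the basic cross-checking idea: an electronic submission is compared against a stored assignment by measuring the overlap of their word trigrams. The example texts are invented.

```python
# Invented example: flag textual overlap between a new submission and a stored one.
def shingles(text, n=3):
    """Return the set of overlapping word n-grams ('shingles') in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def similarity(submission, stored):
    """Jaccard similarity between the two texts' word trigrams (0.0 to 1.0)."""
    a, b = shingles(submission), shingles(stored)
    return len(a & b) / len(a | b) if a | b else 0.0


new_essay = "continuous assessment helps students check their learning in small steps"
past_essay = "continuous assessment helps students to monitor learning in small steps"

print(round(similarity(new_essay, past_essay), 2))  # values nearer 1.0 indicate heavier overlap
```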

Overall, if the research papers encountered and the lack of a central strategy from government are any indication, one believes that the implementation of e-assessment for continuous summative assessment still has a great deal of development ahead of it, especially if past errors are to be rectified and confidence in its reliability and validity is to improve. There are encouraging developments, and as Professor Peter Tymms of Durham University says: “The exam boards are all on it, they are all thinking about it, and trying hard to do it. But they have not yet found their way forward.” It therefore leads one to believe that it is only a matter of time before e-assessment replaces traditional forms of assessment.

Conclusion

The aim of this paper was to approach critically the practices and functions of assessment and to interrogate current assessment practice through a media account. Continuous assessment, formative and summative assessment and e-assessment were the main themes developed by the media article. Within these sub-themes there were common concerns with validity and reliability that helped in understanding the possible impacts these developments in assessment may have for students, teachers and the wider world.

Overall, it is evident from the research that the function of assessment is of great importance when planning to change any assessment system. The transition from traditional summative assessment to continuous assessment proposed by the head of the Cambridge Assessment exam board leads one to believe that assessment boards are heading towards the impact level of function, which concerns running an assessment system that attempts to ensure students remain motivated and that all students learn a common core for each subject. Due care and attention will be needed in order not to place excessive demands on criterion-based assessment, which leads educators to assess what learners can do in relation to the task required of them and to take little interest in identifying what else they can do. The availability of unlimited resits and the emphasis on criterion-based assessment may have serious repercussions for learning, shifting the focus towards making sure that students' scores are helped rather than towards an eagerness to master the topic. There has been a move from ‘assessment of learning' to ‘assessment for learning' and now to ‘assessment as learning' (Torrance, 2007).

The proposal regarding the implementation of continuous assessment as a series of e-assessments is not intended to replace traditional classroom assessment fully, and that is something everyone in educational establishments, one believes, needs to take into account. But it can effectively complement the latter, especially in the context of large classes. MCQs have provided successful examples of how to include e-assessment in classrooms. Increased efficiency, greater flexibility in working and instant scores are some of the advantages e-assessment has brought into classrooms and schools. However, at this moment in time, and taking past experiences into consideration, implementing e-assessment as a high-stakes assessment alternative will be hard to achieve, whether because of the quality and quantity of infrastructure or because of equity and standards. It is true that momentum is building, and as Durham University Professor Peter Tymms says, “the exam boards are all on to it, they are all thinking about it, trying hard to do it. But they have not yet completely found their way forward.” There is no doubt that the age of e-assessment is upon us. However, there are still many hurdles to overcome before the full potential and benefits of e-assessment are gained.

Reference List

American Educational Research Association, American Psychological Association and National Council on Measurement in Education (1999) Standards for educational and psychological testing. Washington, DC: American Educational Research Association.

Biggs, J. (2002) cited in Jacob, S. M. and Issac, B. (2006) Impact on students learning from traditional continuous assessment and an e-assessment proposal. The Tenth Pacific Asia Conference on Information Systems.

Black, P. J. (1998) Testing: friend or foe? The theory and practice of assessment and testing (London, Falmer Press).

Black, P. J. & Wiliam, D. (2003) ‘In praise of educational research': formative assessment, British Educational Research Journal, 29(5), 623-637.

Black, P., & Wiliam, D. (1998b). Inside the black box: Raising standards through classroom assessment. London: GL Assessment.

Boston, K. (2004) cited in Douglas, H. (2009) Exams are a vital lesson. July 19th 2009 in http://www.express.co.uk

Curtis, P. (2009) Computerised testing likely to replace traditional exams, says head of board. July 12th 2009 in http://www.guardian.co.uk

Douglas, H. (2009) Exams are a vital lesson. July 19th 2009 in http://www.express.co.uk

Entwistle, N. J. (1991) Approaches to learning and perceptions of the learning environment: introduction to the special issue. Higher Education, 22, 201-204.

Entwistle, N. J. and Walker, P. (2001) Strategic alertness and expanded awareness within sophisticated conceptions of teaching. Instructional Science, 28, 335-361.

Gibbs, G. (1999) Using assessment strategically to change the way students learn, cited in Jacob, S. M. and Issac, B. (2006) Impact on students learning from traditional continuous assessment and an e-assessment proposal. The Tenth Pacific Asia Conference on Information Systems.

Heinrich, E., Milne, J., Ramsay, A. and Morrison, D. (2009) Recommendations for the use of e-tools for improvements around assignment marking quality. Assessment and Evaluation in Higher Education, 34(4), 469-479.

Jacob, S. M. and Issac, B. (2006) Impact on students learning from traditional continuous assessment and an e-assessment proposal. The Tenth Pacific Asia Conference on Information Systems.

James, M. and Pedder, D. (2006) Beyond Method: Assessment and Learning Practices and Values. The Curriculum Journal, 17 (2), 109-138

Linn, R. L., (2000) Assessment and Accountability, Educational Researcher, vol. 29 (2), 4-14.

Linn, R. L. and Miller, M. D. (2005) Measurement and assessment in teaching. Columbus, OH: Pearson Merrill Prentice Hall.

Mansell, W. (2009) Why hasn't e-assessment arrived more quickly? July 21st 2009 in http://www.guardian.co.uk

Messick, S. (1999) Performance assessment, in F. M. Ottobre (Ed.) The role of measurement and evaluation in education policy. Paris: UNESCO Publishing.

Marton, F. and Saljo, R. (1997) cited in Jacob, S. M. and Issac, B. (2006) Impact on students learning from traditional continuous assessment and an e-assessment proposal. The Tenth Pacific Asia Conference on Information Systems.

Newton, P. E. (2007) Clarifying the purposes of educational assessment. Assessment in Education: Principles, Policy and Practice. Vol 14 (2) 149-170

Raikes, N. and Harding, R. (2003) The horseless carriage stage: replacing conventional measures. Assessment in Education, 10(3), 267-277.

Savory, C., Hodgson, A. and Spours, K. (2003) A general or vocational qualification? The Advanced Vocational Certificate of Education (AVCE) (7)

Smithers, A. (2009) cited in Douglas, H. (2009) Exams are a vital lesson. July 19th 2009 in http://www.express.co.uk

Struyven, K., Dochy, P. and Janssens, S. (2005) Students' perceptions about evaluation and assessment in higher education: a review. Assessment and Evaluation in Higher Education, 30(4), 325-341.

Tomlinson, M. (2002) Inquiry into A level standards, Final Report. London: Department for Education and Skills. Available from http://www.dfes.gov.uk/alevelsinquiry/

Torrance, H. (2007) Assessment as learning? How the use of explicit learning objectives, assessment criteria and feedback in post-secondary education and training can come to dominate learning. Assessment in Education: Principles Policy and Practice, Vol 14, (3), 281-294
