
Chapter 4

Literature Review on Competence-based Assessment

4.1 Introduction

In this chapter, the researcher discusses the literature on competence-based assessment. The first part examines the purposes, the interpretations of competence-based assessment (CBA), the critical attributes of CBA and the issues related to competence. The second part looks into the implementation of CBA around the world and later focuses on the implementation of CBA in Malaysia.

4.2 Competence-based Assessment: An Overview

The era of the knowledge economy and globalisation requires not only individuals who possess a sound understanding of specific subject matter but also those who have relevant industry-related and interpersonal skills. These attributes and capabilities are necessary for learners to acquire in order to function well in today's complex and global societies, and such complex competences have to be developed in the future human capital through purposeful, effective, learner-centred and competence-based programmes in order to prepare students to meet the needs of tomorrow's world (Baartman et al., 2007). The report of the United States Department of Education Secretary's Commission on Achieving Necessary Skills, the so-called SCANS Report (McNabb, 1997), made clear that students must be ready to function in collaborative settings, interpret complex requirements, and exhibit self-directed, self-assessing behaviour on the job. This means that employers want more from graduates than just entry-level job skills, which would help develop a nation progressively in accordance with its political and social needs. The relationship between learning and assessment (discussed in Chapters X and Y) means that assessment should take account of political and social purposes (Broadfoot, 1996). Different vocational and educational training programmes, from school level to university level, have been introduced to prepare and equip individuals to fit into the labour market. One such programme is Competency-Based Education (CBE), in which the emphasis on assessment (competency-based assessment) is seen as key to the success of its implementation (Tillema et al., 2000; Frederiksen, 1984; Baartman et al., 2007).

4.2.1 Purposes of Competence-based Assessment

Any form of assessment, including CBA, usually serves one or more of three basic purposes: to diagnose learning; to select students for particular provision; and to certificate achievements (Carless et al., 2006; Freeman & Lewis, 1997; Ecclestone, 1996; Rowntree, 1987). CBA has been utilised by schools, training colleges and industry for two main purposes: to measure competencies (McNerney & Briggins, 1995) and to certificate achievement (International Labour Organisation, 1996).

4.2.1.1 CBA for Measuring Competence

Measuring competence is one of the main purposes of CBA. Generally, CBA is implemented in order to determine that learners have sufficient knowledge and skills to contribute effectively to the workforce (Canning, 2000; Ecclestone, 1997; Kerka, 1998; LPM, 2002; McNabb, 1997). However, Hyland (1994) finds competence-based education seriously flawed and ill-equipped to deal with education and training beyond the basic skills. CBA can apparently be used to measure limited aspects of competence, but Hyland (1994) believes that its influence on training and education for future generations will be actively damaging, as it can only produce individuals who function without much learning, knowledge or understanding of anything. He attributes this to its highly instrumental philosophy combined with a narrow and uncritical behaviourist psychology. Thus, the qualifications resulting from CBA are viewed as reliable indicators of only the most elementary skills and abilities (Armstrong, 1995). The issues of competence in CBA will be discussed further in section 4.4.

4.2.1.2 CBA for Certification

It is claimed that CBA provides learners with opportunities to achieve qualifications that relate to required performance in the workplace (Erridge & Perry, 1994). Ecclestone (1997) indicates that NVQs, which primarily employ CBA, represent an explicit commitment to creating wider access to accreditation and better levels of achievement. She argues that this is made possible by severing the links between attendance on learning programmes and the formal assessment and accreditation of outcomes, and by promoting the accreditation of prior learning; NVQs consequently pose a serious challenge to traditional assessment approaches (Ecclestone, 1997). For instance, a trainee plumber would have the opportunity to acquire the necessary knowledge and skills in plumbing at a certain level. He or she would then achieve the qualifications and certification relating to the required performance of a plumber in the real-life workplace once the assessment at the designated level had been completed. Nevertheless, CBA is at the same time argued to be conceptually confusing, empirically flawed and inadequate to the needs of a learning society (Chappell, 1996; Ecclestone, 1997; Hyland, 1994). This may be the result of the use of confusing language and jargon, the declining credibility of competency standards as reflections of industry standards (Kerka, 1998), and the indifferent implementation of CBA across industries owing to employers' ignorance of its nature and purpose (Hyland, 1996).

4.2.2 Definitions and Interpretations of CBA

There is a wide range of interpretations and definitions of CBA. In this discussion, CBA is interpreted in terms of three different aspects: the assessors' roles, the learners' responsibilities, and the learning outcomes based on predetermined criteria. Figure ____ reflects the interdependence of these elements.

(Adapted from Griffin & Nix, 1991; McNerney & Briggins, 1995; Hager, 1994; Elliot, 1994; Cotton, 1995; Ecclestone, 1996)

From the assessor's perspective, CBA consists of the relatively simple process of collecting and obtaining evidence, followed by the more complex and subjective process of judging and interpreting the evidence of competence demonstrated by learners (Rowe, 1995; Ecclestone, 1996; McNerney & Briggins, 1995; Hager, 1994; Griffin & Nix, 1991). Assessors have to gather and judge the evidence of an individual's competence against the specified standards. This means that assessors have to be very careful in gathering evidence of competence, and they have to decide, based on their expert judgment, when that evidence is sufficient. For example, when a student successfully builds a drywall framework, the assessor has to gather evidence of competence not only from the product (the framework itself) but also from the process and the preparations made before the student begins work, such as the work schedule and the list of materials and equipment to be used. The assessor then has to use his or her expertise in this area to determine whether the evidence of competence gathered is adequate to conclude that the student has achieved satisfactory competence in building the drywall framework.

McNerney & Briggins (1995) state that CBA is the process of identifying competencies, the underlying characteristics that lead to successful performance, among a group of employees, typically by department, job category or hierarchical level. They note that a list of competencies tied to one corporate culture usually becomes associated with exemplary performers (McNerney & Briggins, 1995). They further relate CBA to its training basis, where the focus is on who the successful performers are rather than on what people do. This means that it includes not only training for jobs which rely heavily on psychomotor skills, such as manual labour and traditional hourly production work, but also for decision-oriented jobs (McNerney & Briggins, 1995).

The Scottish Qualifications Authority (SQA) (2001) stresses, as the key factor in CBA, the process of establishing whether an individual has the skills and knowledge needed to be awarded a Scottish Vocational Qualification, emphasising the need for assessors to be experts. Assessors require thorough knowledge and skills in the fields they are assessing in order to make good and fair judgments, because they have to consider evidence of competence in terms of knowledge, abilities, skills and attitudes (Rowe, 1995; Ecclestone, 1996; McNerney & Briggins, 1995) displayed by learners in authentic contexts, across a selected set of real-life professional tasks at different levels (Hager, 1994). Furthermore, assessors have to assess learners' ability to apply particular knowledge, skills, attitudes and values in a specific context according to required performance standards (New Zealand Qualifications Authority (NZQA), 2002). In other words, assessors themselves have to be extremely knowledgeable and skilful in the art of observing and collecting evidence of competence, which comes in various tangible and intangible forms.
The process of gathering evidence from observable performance is followed by the more difficult, and inevitably subjective, process of making judgments (Peddie, Wilmut & Macintosh, 1997). Despite this difficulty, assessors still have to determine whether or not competency has been achieved by learners (Victoria Curriculum and Assessment Authority (VCAA), 2001). It is the assessors' responsibility, then, to decide whether learners can be considered competent in a particular context at a particular level, based on whether their performance meets the pre-determined criteria. Thus, assessors have to equip themselves with relevant skills and use appropriate mechanisms for making fair judgments, so that the problem of subjectivity among assessors is reduced.

In addition to assessing and making judgments on students' performance based on the evidence gathered, assessors also have to give constructive and supportive feedback to students on their performance and work (Ecclestone, 1996; Sadler, 2009). Assessors have to point out strengths and weaknesses as well as improvements that could be made in future (Sadler, 2009). Sadler (2009) further proposes that feedback should be given in a manner that enables students to assess and evaluate their own work and give feedback to themselves. He suggests that students should be taught to monitor the quality of their work and make adjustments as necessary while they are actually engaged in producing it.

From the learners' perspective, CBA is the platform on which they demonstrate competencies and learning outcomes (Elliot, 1994). Elliot further asserts that the performance assessments involved vary in simplicity and complexity, ranging from simple constructed responses to comprehensive collections of work over time in very different contexts, all of which are then judged. Learners are expected to be deeply involved in the assessment process and to be aware, right from the beginning and even before the assessment is conducted, of the specified criteria they have to meet as well as the standards of performance expected of them. Theoretically, this allows learners to take charge of their own learning outcomes and success by preparing themselves well in advance. In other words, learners would have autonomy in their own learning, as CBA could promote individuality and personal development (Ecclestone, 1996). The learner's responsibility includes demonstrating the ability to apply theoretical knowledge and procedures, in addition to understanding and describing the theories, or even pointing to appropriate theoretical knowledge (Cotton, 1995). Cotton further elucidates that, in demonstrating competence in public, learners also have to show wise use of common sense through good physical, interpersonal and intrapersonal skills and mindful decision-making, which suggests the multiple intelligences described by Gardner (1985). In other words, learners have to demonstrate their abilities in all three psychological domains: psychomotor, cognitive and affective (ANTA, 1998).
Similarly, Australia's National Training Authority (ANTA) (1998) considers CBA a platform for learners to display their skills, knowledge and experience in accomplishing specific tasks as required in the workplace, or to obtain credit towards a qualification in vocational education and training (VET).

Assessors and learners share one focus: the set of learning outcomes that can be derived from an assessment. Learning outcomes cover a diverse range of areas, including personal qualities and various forms of knowledge and skills (Ecclestone, 1996). The evidence of competence that learners have to demonstrate, and on which assessors have to make judgments, has to meet specified criteria. Thus, CBA consists of a specified set of both general and specific outcomes, such that assessors, learners and third parties can make reasonably objective judgments about learners' achievement or non-achievement of these outcomes (Wolf, 1995). CBA then certifies learners' progress based on the demonstrated achievement of these outcomes, and the assessments need not be tied to time served in formal educational settings. The emphasis is on outcomes (specifically, multiple outcomes, each distinctive and separately considered), which should be specified clearly and as transparently as possible so that assessors, assessees and third parties understand what is being assessed and what should be achieved (Wolf, 1995). This definition encapsulates the key features of CBA as it has been developed and promoted for vocational, technical and professional education and training in the UK, while at the same time it signals the American origins of much of the debate (Wolf, 1995). The demonstrated performance that provides evidence of competence has to be at least of the minimum quality required in the real-life workplace environment. These are the predetermined criteria set in CBA, which are generally based on endorsed industry benchmarks or competency standards (ANTA, 1998). The emphasis on outcomes and transparency is not peculiar to the competence context; it is also an essential characteristic of criterion-referenced assessment.
The emphasis on what learners can actually do, and the beneficial effects of clear criteria on teaching and learning (Glaser, 1963; Popham, 1978), are argued to align with the competence-based literature; indeed, in England, in the early years of its implementation, CBA was referred to as a criterion-referenced approach (Jessup, 1991: 167). Jessup (1991) further underlines what people actually learn from an education and training system, and how effectively, as the key measure of its success. Thus, CBA is considered a criterion-referenced interpretation of assessment (Nuttall, 1984; Ling, 1999) in which individuals are given an award after achieving pre-determined standards (Cotton, 1995). This critical attribute of CBA will be discussed further in section 4.3.2.

4.3 Critical Attributes of Competence-based Assessment

The following section discusses the two learning theories associated with CBA and the nature of its criterion-referenced assessment.

4.3.1 Learning Theories Associated with Competence-based Assessment

Learning, in the contexts of psychology and education, is the process of acquiring and enhancing knowledge, skills, values and world views through the integrated employment of cognitive, emotional and experiential capacities (Illeris, 2000; Ormrod, 1995). How this process works is explained variously: learning as a process focuses on what happens when learning takes place, and the explanations of what happens constitute learning theories. In other words, learning theories are attempts to describe how people and animals learn, helping to open the inherently complex process of learning to our understanding. Hill (2002) explains that learning theories have two main values: the first is to provide a vocabulary and a conceptual framework for interpreting examples of observed learning; the second is to suggest where to look for solutions to practical problems, rather than providing the solutions themselves. Learning theories are therefore the basis for any form of educational assessment (Gipps, 1994), and the theories most commonly associated with CBA are behaviourism and, more recently, constructivism. These two theories are discussed here because CBA essentially involves both observable aspects of learning and learning as a process of constructing new knowledge. Although cognitive theory, which looks beyond behaviour to explain brain-based learning, is important, its relevance to CBA is less apparent, and it is therefore not included in the discussion.

4.3.1.1 Behaviourist Learning Theory

Behaviourism is a theory of learning, in animals and humans, that focuses only on objectively observable behaviours and discounts mental activities (Murphy, 1999; Kerka, 1997; Doolittle & Camp, 1999). It assumes that a learner starts off as a clean slate (tabula rasa) and is essentially passive, responding to environmental stimuli (Murphy, 1999; Kerka, 1997) in the acquisition of new behaviour (Chowdhury, 2006). Learning, according to the behaviourists, takes place as the result of a response that follows a specific stimulus, and behaviour is shaped through reinforcement (Kerka, 1997). By repetition of the stimulus-response (S-R) cycle, the learner is conditioned to repeat the response whenever the same stimulus is present; behaviour can thus be modified, and learning is measured by observable change in behaviour (Murphy, 1999; Kerka, 1997; Doolittle & Camp, 1999).

This emphasis on stimulus-response pairing (Murphy, 1999; Chowdhury, 2006) and the rejection of structuralism (Kerka, 1997) reflect behaviourism's positivistic philosophical base: the analysis of the human condition relies only on verifiable observations of behaviour, not on untenable mentalistic constructs (Kerka, 1997). Accordingly, most human behaviour can be understood in terms of basic reflexive learning mechanisms, or laws, that operate on one's experience within the environment (Kerka, 1997). As the approach is seen as more operational and practical in nature, it has dominated education: the teacher disseminates selected knowledge, measures learners' passive reception of facts, and focuses on behaviour control and task completion (Kerka, 1997). These behaviourist views and the learning characteristics found in educational settings are summarised in Table ___.

Through experiments, behaviourists have identified conditioning as a universal learning process. There are two different types of conditioning, each yielding a different behavioural pattern:

  1. Classic conditioning occurs when an instinctive reaction responds to a stimulus (Comer, 2004). Essentially, animals and people are biologically "wired" so that a certain stimulus will produce a specific response. The learning process takes place when two events that repeatedly occur close together in time become associated in a person's mind, so that either impulsively produces the same response. The most popular example is Pavlov's observation that dogs salivate when they eat or even see food, where food is the unconditioned stimulus and salivation the unconditioned reflex (Comer, 2004; Chowdhury, 2006). Pavlov's theory of classical conditioning is considered by behaviourists to be a foundation of learning theories. In Pavlov's experiment, when a neutral stimulus, such as the ringing of a bell, is repeatedly combined with the presentation of food over a period of time, the dog salivates at the ringing of the bell even though food is not given. Hence, the ringing of the bell acts as the conditioned stimulus while salivation is the conditioned response or reflex (Dembo, 1994). The result of this experiment led to the formulation of Pavlov's classical conditioning, in which an individual comes to respond to a stimulus that would not ordinarily produce such a response.
  2. Behavioural or operant conditioning occurs when a response to a stimulus is reinforced. Basically, operant conditioning is a simple feedback system: if a response to a stimulus is rewarded or reinforced, then the response is more likely to occur in the future. Similarly, when a particular behaviour is rewarded, that behaviour is repeated, as shown in the experiments in which B.F. Skinner used reinforcement techniques to teach pigeons to dance and to bowl a ball. Skinner based his theory on the idea that learning is a function of change in overt behaviour, where changes in behaviour are the result of an individual's response to events (stimuli) that occur in the environment (Chowdhury, 2006). According to Skinner, a reward or punishment will either strengthen or weaken a voluntary or automatic behaviour (Skinner, 1968).

Since their introduction, reinforcement techniques have gone through a series of enhancements and have contributed tremendously to training and teaching. The most important aspect of Skinner's contribution to training is the significance attached to the desired behaviour to be emitted in a given environment. To ensure that the right behaviour is reinforced in trainees, the trainer should have a clear idea of the trainees' terminal behaviour and should follow trainees closely in order to reinforce correct responses appropriately. This is the purpose of programmed instruction, including competence-based training, which in its early years of implementation was based on this theory of reinforcement (Burns, 1995).


As the emerging learning theory of the early 1900s, behaviourism provided the final foundation for social efficiency: learning, seen objectively, consists of the formation of links between specific stimuli and responses through the application of rewards (Wirth, 1972). The emphasis on the need for objectivity led to extensive use of statistical and mathematical analysis. Despite the remarkable contributions the theory has to offer, its extreme focus on objectivity ignores the significant role the mind plays in shaping one's behaviour. People are treated more like robots or machines than human beings, as their thoughts and feelings are not taken into consideration; they are expected to demonstrate desired behaviour through the use of reward and punishment, neglecting other factors that may influence changes in behaviour. The behaviourist theory of learning thus fails to utilise the full potential of the mind in moulding essential behaviour and in constructing new knowledge.

Assessment in Behaviourism

Assessment, according to behaviourism, applies the same stimulus-response concept: the test item is the stimulus, the answer is the response, and a learner has to be conditioned to produce the appropriate response to any given stimulus (Murphy, 1999; Kerka, 1997; Doolittle & Camp, 1999). Since the emphasis is on the observable response, no attention is paid to any model of the learner's thinking process that might intervene between stimulus and response. Consequently, the distinction between rote learning and learning with understanding is not considered: teaching is a matter of delivering the appropriate stimuli, while learning is a matter of repeating the appropriate response, which is then rewarded. As such, a test composed of many short, 'atomised', out-of-context questions, and 'teaching to the test', are both consistent with this approach (Murphy, 1999; Kerka, 1997). Likewise, some forms of CBA, which has always been associated with behaviourist theory, can be seen to assess atomistically. The assessor, as an observer, ticks off a checklist of predetermined criteria whenever a learner has performed a series of discrete observable tasks; the criteria are the stimuli, the accomplished tasks are the responses, and the learner has to be conditioned to demonstrate the ability to meet the criteria successfully.

Although this approach to assessment may testify to a learner's ability to perform observable tasks, it pays little attention to theoretical knowledge and understanding (Ashworth, 1992), as the role of the mind is considered insignificant in delivering the required behaviour. While assessing competent observable performance is vital, assessing knowledge and understanding is just as important: it is an essential aspect of competence, without which an assessment lacks credibility and construct validity (Ashworth, 1992). A valid assessment method should measure what it is supposed to measure (Watson, 1994). Given the extensive discussion in Chapter ? on the idea of competence, both observable performance and underpinning knowledge are aspects of competence that should be assessed and measured. People who 'understand' are those who have a clear mental representation of the situation with which they are confronted and are able to deal with it creatively and imaginatively, using their acquired knowledge as an interpretive resource (Ashworth, 1992). Thus, it is insufficient to assess one's competence by looking at performance alone while ignoring knowledge and understanding. It is unfortunate, then, if such an assessment method should produce people who are like robots in a factory: able to perform a job or a task efficiently and effectively, but without any understanding of what they are doing.
As CBA concentrates on an individual demonstrating competent performance (Wolf, 1995) and emphasises personal competences, traditional notions of CBA have encouraged an individualistic perspective while ignoring the ability to work as part of a team, even though teamwork is essential in performing relevant aspects of a job in the actual workplace (Ashworth, 1992). As a result, this behaviourist view of CBA has weakened and gradually shifted towards the constructivist belief discussed in the following section.


4.3.1.2 Constructivist Learning Theory

Constructivism is a theory of learning with roots in both philosophy and psychology (Doolittle & Camp, 1999), founded on the premise that learners actively construct their own knowledge, meaning and understanding of the world they live in by reflecting on their experiences (Doolittle & Camp, 1999; Murphy, 1999; Kerka, 1997). Learners learn by doing rather than observing, and by bringing prior knowledge into a learning situation (Epstein & Ryan, 2002; Carvin, date?) in which they must critique and re-evaluate their understanding until they can demonstrate their comprehension of the subject (Carvin). Furthermore, learners need to analyse and transform new information or problems in their minds on the basis of existing knowledge and understanding, where abstract thought evolves from concrete action (Murphy, 1999). Learning, therefore, is simply the process of adjusting one's mental models to accommodate new experiences. The theory of constructivism rests on the notion that there is an innate human drive to make sense of the world by building cognitive structures, which include declarative knowledge (knowing that: facts, concepts, propositions) and procedural knowledge (knowing how: techniques, skills and abilities) (Murphy, 1999). These two components of knowledge have been discussed in depth in Chapter 3. Moreover, learning is a matter of personal and unique interpretation which takes place within a social context and is useful to the learner, as intrinsic motivation emerges from the desire to understand and to construct meaning (Billet, 1996). However, dispositions such as attitudes, values and interests, which help learners decide, are often neglected in this theory (Murphy, 1999), making it somewhat incomplete.

Philosophically, the essence of constructivism relies on an epistemology that stresses subjectivism and relativism: a personally unique reality results from the concept that reality can be known through experience, although it may exist separately from experience (Doolittle & Camp, 1999). From this follow four essential epistemological tenets of constructivism (von Glasersfeld, 1984, 1998; Doolittle & Camp, 1999):

  • Knowledge is the result of active cognizing by the individual;
  • Cognition is an adaptive process that functions to make an individual's behaviour more viable given a particular environment;
  • Cognition organizes and makes sense of one's experience, and is not a process to render an accurate representation of reality; and
  • Knowing has roots both in biological/neurological construction, and in social, cultural, and language-based interactions (Dewey, 1916/1980; Garrison, 1997; Larochelle, Bednarz, & Garrison, 1998; Gergen, 1995).

Thus, constructivism acknowledges the active role learners play in the personal creation of knowledge, the importance of both individual and social experiences in this knowledge creation process, and the realization that the knowledge created will vary in the degree to which it accurately represents reality (Doolittle & Camp, 1999). These four fundamental tenets provide the foundation for the basic principles of teaching, learning and knowing as described by constructivism; different forms of constructivism, however, emphasise different tenets (Doolittle & Camp, 1999). These forms represent a continuum whose underlying assumptions vary along several dimensions (Moshman, 1982; Phillips, 1995), resulting in three broad categories: Cognitive Constructivism (Anderson, 1993; Mayer, 1996), Social Constructivism (Cobb, 1994; Vygotsky, 1978) and Radical Constructivism (Piaget, 1973; von Glasersfeld, 1995). The comparisons of the three categories of constructivism are shown in Table __ while Figure __ illustrates the constructivism continuum.

Assessment in Constructivism

As constructivists acknowledge learners' active role in knowledge acquisition, they believe that learners should also play a larger role in judging their own progress. They therefore advocate that assessment become part of the learning process, used as a tool to enhance both the learner's learning and the teacher's understanding of the learner's current understanding (North Central Regional Educational Laboratory). The use of assessment as an accountability tool that makes some students feel good about themselves while causing others to give up should be avoided; consequently, constructivists call for the elimination of grades and standardized testing.

All three categories of constructivism assert that a learner's prior knowledge influences his or her ongoing acquisition of knowledge and understanding (Doolittle & Camp, 1999). However, knowledge and understanding, which develop continuously, are not directly observable and have to be inferred from action; they therefore require continuous assessment during the teaching and learning process. Thus, formative assessment is appropriate both to create the next series of experiences and activities for students and to assess the learner's knowledge continuously (Doolittle & Camp, 1999). Holt (date?) and Willard & Holt (2000) emphasize the concept of dynamic assessment, where the learner's true potential is assessed through an extended, interactive process of assessment; a two-way process involving interaction between instructor and learner. Through these interactions, the assessor becomes the person who not only makes judgments on the learner's achievement but also identifies the learner's strengths and weaknesses in doing a task and then shares with the learner possible ways in which that performance might be improved on a subsequent occasion. Assessment is thus an integral part of, and not separate from, the learning experience; the two are inextricably linked rather than separate processes (Gredler, 1997). The feedback created by the assessment process is supportive rather than intimidating (which might cause anxiety in the learner), and thereby ultimately serves as a direct foundation for further development (Green & Gredler, 2002).

Dynamic, formative assessment may underpin one of the fundamental concepts of CBA: ongoing criterion-referenced evaluation, in which a task, whether cognitive or psychomotor, is assessed against predetermined criteria until it is mastered. The student may not be expected to achieve mastery at the first attempt, but through ongoing evaluation by teacher and learner, followed by modifications in successive efforts based on constructive feedback, mastery can be accomplished (Doolittle & Camp, 1999). For instance, the bricklaying student's first brick wall may not be neat, clean, upright or built to square, and the brick bonds may not be level, vertical or to gauge, but through practice over time and guidance, the student will achieve the desired outcomes. As such, CBA treats learning as the active construction of schemes to understand the material (Birenbaum, 1996), with students as active participants who share in the learning process, practise self-evaluation and reflection, and collaborate with the teacher and other students (Baartman et al., 2007). Students are able to engage in a meaningful learning process through interesting and authentic assessment tasks in which both the product and the process are assessed (Baartman et al., 2007).

4.3.2 Criterion-referenced Assessment

CBA does not discriminate between students or compare students to each other as in norm-referenced assessment; instead, it is about the decision whether or not a student is competent, in accordance with criterion-referenced assessment (Baartman, 2007; Ecclestone, 1996; Wolf, 1995). Like CBA, criterion-referenced assessment deals with clear, unambiguous and minute specifications of outcomes from which reliable, parallel assessments can be derived directly (Wolf, 1995). Both involve the concept of testing for mastery, with which a single passing score is associated, although there may be cut-offs at various levels. Although CBA is well received for its criterion-referenced property, there are still criticisms about its effectiveness and validity in determining individuals' competence. Nuttall (1984) points out that such an assessment reduces learning outcomes to pass versus not pass, or can do versus cannot do. Sadler (1987) further argues that a criterion-referenced measurement such as CBA is insufficient to assess competence, as the proficiency continuum is reduced to a simple dichotomy of 'competent' versus 'not competent'. Regardless of these criticisms, Lembaga Peperiksaan Malaysia (Malaysian Examinations Syndicate) upholds this uniqueness of CBA as it helps identify individuals' capability in carrying out identified tasks at a required standard (LPM, 2002).

4.4 The Issues in Competence in CBA

There are various issues related to competence in CBA, but only two will be discussed in the following sections. The first concerns the interpretations of competence; the second concerns making judgments of competence.

4.4.1 Interpretations of Competence in CBA

The concept of competence has various definitions and interpretations (Eraut et al., 1998; Lizzio & Wilson, 2004; Messick, 1984; Miller, 1990; Parry, 1996; Tillema et al., 2000). A common notion across these descriptions is that competence consists of connected pieces of knowledge, skills and attitudes that can be used to solve a problem. Lizzio & Wilson (2004) describe competence as the capacity to enact specific combinations of knowledge, skills and attitudes in appropriate job contexts. Taconis et al. (2004) stress that knowledge, skills and attitudes should be addressed in an integrated way in CBA, as none of these alone is sufficient for the desired competent professional behaviour. Chapter 3 discusses in detail the definitions and interpretations of competence as used in this study. Here, however, the researcher reflects on Miller's Pyramid, as shown in Figure ___, and on the conceptions of competence and their implications discussed by Hager (1993), as shown in Table ___.

Miller (1990) introduced a framework for the assessment of medical students and residents, "Miller's Pyramid". He advocated evaluating learners for their skills and abilities in the two top cells of the pyramid, in the domains of action, or performance, reflecting clinical reality. Miller argued that the demonstration of competence in these higher domains strongly implies that a student has already acquired the prerequisite knowledge ('Knows') and the ability to apply that knowledge ('Knows How') that make up the base of the pyramid. Basic clinical skills ('Shows How') are those that can be measured in an examination situation such as an objective structured clinical examination (OSCE). However, the professionalism and motivation required to continuously apply these in the real setting ('Does') must be observed during actual patient care.

4.4.2 Making Judgment of Competence

Exploring what it means to say a student is competent involves looking at what the judgment is based on; that is, the basic assumptions of the assessment being made. (Pitman et al., 1999)

Deciding what is to be assessed is crucial in carrying out any assessment process (Hager et al., 1994). As the name itself suggests, CBA involves the process of assessing competence. The common questions raised concern the meaning of competence and how it can be assessed. Hager, Gonczi and Athanasou (1994) point out that there are various ways to assess competence depending on how it is conceived. They give two examples of the assessment of competence which have been widely criticised, and an example that has been employed by twenty-one professions in Australia.

In the first example, where competence is conceived as the capacity to perform a series of discrete observable tasks, assessment consists of an observer ticking off a checklist of those tasks. This approach has been criticised because it reduces an occupation to a series of discrete observable tasks which do not adequately represent it. As the approach concentrates on an individual demonstrating competent performance (Wolf, 1995) and emphasises personal competences, it encourages individualism at the expense of the ability to work in a team, whereas teamwork is essential in performing relevant aspects of a job in the actual workplace (Ashworth, 1992). It is rather like a grandfather clock, in which all the parts, whether the beautifully decorated hands on the face or the rugged mechanism inside, have to work together and play their roles in concert to ensure that the time shown is accurate. None of the parts, however efficient, can show the time without the support of the others. Besides depriving one of the ability to engage in teamwork, another shortcoming of this approach to assessment is that it pays little attention to theoretical knowledge and understanding (Ashworth, 1992). While assessing competent performance is vital, assessing knowledge and understanding is just as important, as it is an essential aspect of competence without which an assessment lacks credibility or construct validity (Ashworth, 1992). A valid assessment method should measure what it is supposed to measure, namely the relevant elements of competence (Watson, 1994); here, both performance and knowledge are aspects of competence that should be assessed and measured.
People who 'understand' are those who have a clear mental representation of the situation with which they are confronted and are able to deal with it creatively and imaginatively, using their acquired knowledge as an interpretive resource (Ashworth, 1992). Thus, it is insufficient to assess one's competence by looking at performance alone while ignoring knowledge and understanding. It would be unfortunate if such an assessment method produced people who are like robots in a factory: able to perform a job or a task efficiently and effectively, but without any understanding of what they are doing.

In comparison to the first example, the second considers competence to be the possession of a series of desirable attributes, such as knowledge of appropriate sorts; skills and abilities to solve problems, analyse and communicate; and attitudes of appropriate kinds, where assessment is seen as a strategy to assess each of these attributes separately. This approach has also been criticised, for lacking relation to future occupational performance, since it assesses attributes in isolation from actual work practice. These attributes are highly context dependent, and to assess them out of context would be inappropriate. A more contextual approach has been suggested by Watson (1994), where assessment is based on samples of performance and evidence of competence is gathered from assessment events such as practical tests, exercises and simulations. These practical tests are designed to measure the technical or performance aspects of competence, while supplementary evidence is collected from written and oral questions and multiple-choice tests (NCVQ, 1991b, p. 22) to measure underpinning knowledge and understanding. Judgments about competence are based on the criteria set for each assessment event, and students are assessed individually whenever they are ready and judged as 'competent' or 'not competent' (Watson, 1994). This approach is normally employed by formal colleges or off-the-job training settings and is often carried out on behalf of industry. For example, the assessment conducted by the Box Hill College of TAFE, Victoria for its Hairdressing Certificate programme is based on the observation of samples of job performance carried out on specially designed practical tasks which include basic operations of hairdressing such as cutting, styling, waving, colouring and basin service.
In addition, theory tests to assess underlying knowledge are administered to provide supplementary evidence, and an 80% pass rate is required before a student is considered competent in a particular element (Watson, 1994). To ascertain the validity of the assessment method, assessment centres, be they colleges or schools, have to maintain the quality and range of facilities at all times, besides increasing their capacity to simulate real workplace conditions and events. The extent to which these assessment centres comply with the requirements to ensure the validity of competency-based assessment has yet to be examined in depth. Any invalid assessment is a waste of effort, time and money, and subsequently affects the quality of the students being trained.

Thus, the integrated conception of competence has been employed (Gonczi et al., 1990; Heywood et al., 1992), where competence is seen as the knowledge, abilities, skills and attitudes displayed in the context of a set of realistic professional tasks at a suitable level of generality. This integrated approach selects the key tasks that are central to the practice of the profession and subsequently identifies the main attributes required for competent performance, thereby avoiding the problem of an unmanageable number of discrete tasks. Competence is then inferred from the observed performance of these tasks. Hence, CBA, according to Hager et al. (1994), involves gathering relevant evidence and following proper procedures to ensure that inferences about competence are soundly based. Even when all the necessary steps have been taken to ensure the reliability of the assessment of competence, the integrated approach still relies on professional judgment as to whether the performance of a task is competent or otherwise. This requires proper training in the assessment procedures to enable assessors to make sound judgments on students' performance; indeed, teachers acting as assessors have raised questions about what it means to say a student is competent (Pitman et al., 1999). Nevertheless, the integrated approach seems more holistic than the other two approaches, as it encompasses all the attributes required of a competent person in a particular profession. Furthermore, the realistic professional tasks provide sufficient and authentic learning experiences that relate to the actual and future workplace environment. This helps reduce the gap between the learning institution and the workplace.

In the context of vocational and post-compulsory education, competence is described as the capacity to perform defined and predictable tasks according to some pre-specified standard, where particular interests are represented in defining those standards (Stevenson, 1996). CBA is seen as the assessment of evidence to determine a person's current abilities against a set of competency standards through a number of assessment techniques (Hayton & Wagner, 1998). In other words, the assessment attempts to make the outcomes of education explicit by making judgments of learning against specific criteria rather than by comparing learners' achievements using marks (Pitman et al., 1999). In CBA, students need to apply their knowledge and skills in performing a practical activity or simulation that is related to the workplace. Furthermore, they should be able to demonstrate competent performance under conditions as close as possible to those of the actual workplace (Wolf, 1995). This is in line with the intention of the competency-based approach to make the outcomes of formal education and training more appropriate to future workplace needs (Bowden & Masters, 1993). Thus, CBA emphasises the need to equip students with relevant knowledge and skills that will meet the demands of the workplace and to develop outcomes that match current industry needs (Pitman et al., 1999). As CBA bears resemblance to the future behaviour of interest, namely what someone will do on the job, it can be used as a predictor of future performance (Wolf, 1995). Hence, it is necessary to verify the claim that CBA can predict future performance. In addition, how effective the assessment has been in achieving its claimed objectives, and what factors influence its effectiveness, need to be examined thoroughly.

References

  • Anderson, J. R. (1995). Cognitive psychology and its implications. New York: Freeman.
  • Andrew, T. E. (1972). The Manchester interview: competency-based teacher training education/certification. Washington, DC: American Association of Colleges for Teacher Education.
  • Armstrong, P. (1995). Raising standards: a creative look at competence and assessment and implications for mainstreaming in university adult education. 1995 Conference Proceedings, 6-11; SCUTREA 1997. Retrieved 21 April, 2009 from http://www.leeds.ac.uk/educol/documents/00002988.htm
  • Ashworth, P. (1992). Being Competent and having 'Competencies', Journal of Further and Higher Education, 16(3); 8 - 17.
  • Ashworth, P. & Saxton, J. (1990). On competence, Journal of Further and Higher Education, 14 (2): 3- 25.
  • Baartman, L. K. J., Bastiaens, T. J., Kirschner, P. A. & van der Vleuten, C. P. M. (2007). Evaluating assessment quality in competence-based education: A qualitative comparison of two frameworks. Educational Research Review 2; 114-129.
  • Biggs, J. (2003). Teaching for quality learning at university (2nd edition). Buckingham; Society Research into Higher Education/Open University Press.
  • Billet, S. (1996). Towards a model of workplace learning: the learning curriculum. Studies in Continuing Education, 18(1): 43-58.
  • Black, P. & Wiliam, D. (1998). Inside the Black Box: Raising Standards Through Classroom Assessment. Retrieved April 10, 2007 from http://www.pdkintl.org/kappan/kbla9810.htm
  • Bowden, J. A. & Masters, G. N. (1993). Implications for Higher Education of a Competency-based Approach to Education and Training. Canberra: Australian Government Publishing Service.
  • Bourke, J.B., Hansen, J. H., Houston, W. R. and Johnson, C. (1975) Criteria for describing and assessing competency programmes, Syracuse University, NY, National Consortium of Competency Based Education Centers. In Fletcher, S. (1992) Competence-based assessment techniques, second edition. Kogan Page, London & Stirling (USA)
  • Broadfoot, P. (1996). Education, assessment and society. Buckingham; University Press.
  • Brown, A. L., & Palincsar, A. S. (1987). Reciprocal teaching of comprehension strategies: A natural history of one program for enhancing learning. In J. Borkowski & J. D. Day (Eds.), Cognition in special education: Comparative approaches to retardation, learning disabilities, and giftedness. Norwood, NJ: Ablex. In Doolittle, P. E. & Camp, W. G. (1999). Constructivism: The career and technical education perspective. Journal of Vocational and Technical Education, 16 (1); Retrieved March 5, 2007 from http://scholar.lib.vt.edu/ejournals/JVTE/v16n1/doolittle.html
  • Broudy, H. S. (1984). The University and the Preparation of Teachers. In Advances in Teacher Education, Volume 1, ed. Katz, L. G. & Rath J. D. Norwood, New Jersey: Ablex Publishing.
  • Burns, R. (1995) The Adult learner at work, Business and professional Publishing, Sydney. In Chowdhury, M. S. (2006). Human Behavior In The Context of Training: An Overview Of The Role of Learning Theories as Applied to Training and Development. Journal of Knowledge Management Practice, 7(2). Retrieved May 5, 2009 from http://www.tlainc.com/articl112.htm
  • Canning, R. (2000). The rhetoric and reality of professional competence-based vocational education in Scotland. Research Papers in Education, 15 (1): 69- 93.
  • Carless, D., Joughin, G. & Mok, M.M.C. (2006). Learning-oriented assessment: principles and practice. Assessment & Evaluation in Higher Education, 31(4):395 - 398.
  • Carvin, A., Constructivism, a portion of the website, EdWeb: Exploring Technology and School Reform. Retrieved March 23, 2007 from http://www.edwebproject.org/constructivism.html
  • Chappell, C. (1996). Quality & Competency Based Education and Training. In The Literacy Equation, pp. 71-79. Red Hill, Australia: Queensland Council for Adult Literacy.
  • Chappell, C. S. & Hager, P. (1994). Values and competency standards. Journal of Further and Higher Education, 18 (3): 12-23.
  • Chowdhury, M. S. (2006). Human behavior in the context of training: An overview of the role of learning theories as applied to training and development. Journal of Knowledge Management Practice, 7(2). Retrieved May 5, 2009 from http://www.tlainc.com/articl112.htm
  • Clayton, B., Blom, K., Meyers, D. & Bateman A. (2003). Assessing and certifying generic skills; What is happening in vocational education and training? Adelaide, South Australia; National Centre for Vocational Education Research.
  • Cobb. P. (1994). Where is the mind? A coordination of sociocultural and cognitive constructivist perspectives. In C. T. Fosnot (Ed.), Constructivism: Theory, perspectives, and practice (pp. 34-52). New York: Teachers College Press.
  • Cobb, P., & Yackel, E. (1996). Constructivist, emergent, and sociocultural perspectives in the context of developmental research. Educational Psychologist, 31 (3/4); 175-190.
  • Comer, R. J. (2004) Abnormal Psychology (5th ed); Worth Publishers, New York. In Chowdhury, M. S. (2006). Human Behavior In The Context of Training: An Overview Of The Role of Learning Theories as Applied to Training and Development. Journal of Knowledge Management Practice, 7(2). Retrieved May 5, 2009 from http://www.tlainc.com/articl112.htm
  • Curtain, R. and Hayton, G. (1994) The use and abuse of competency standards in Australia: a comparative perspective. Assessment in Education, forthcoming.
  • Debling, G. (1989). The Employment Department/Training Agency Standards Programme and NVQs: Implications for education. In Burke, J. W. (ed.) Competency Based Education and Training. Lewes: Falmer Press.
  • Delandshere, G. (2001). "Implicit Theories, Unexamined Assumptions and the Status Quo of Educational Assessment." Assessment in Education: Principles, Policy & Practice 8 (2): 113 - 133.
  • Dembo, M. H. (1994). Applying educational psychology (5th edition). White Plains, NY: Longman Publishing Group. In Chowdhury, M. S. (2006). Human Behavior In The Context of Training: An Overview Of The Role of Learning Theories as Applied to Training and Development. Journal of Knowledge Management Practice, 7(2). Retrieved May 5, 2009 from http://www.tlainc.com/articl112.htm
  • Dewey, J. (1938). Experience and education. New York: Macmillan.
  • Dewey, J. (1916/1980). The need for social psychology. In J. A. Boydston (Ed.), John Dewey: The middle works, 1899-1924, Volume 10 (pp. 53-63). Carbondale, IL: Southern Illinois University. In Doolittle, P. E. & Camp, W. G. (1999). Constructivism: The career and technical education perspective. Journal of Vocational and Technical Education, 16 (1). Retrieved March 5, 2007 from http://scholar.lib.vt.edu/ejournals/JVTE/v16n1/doolittle.html
  • Dodge, D. T., Jablon, J. R. & Bickart, T. S. (1994). Constructing curriculum for the primary grades. Washington, DC: Teaching Strategies.
  • Dole, J. A., & Sinatra, G. M. (1998). Reconceptualizing change in the cognitive construction of knowledge. Educational Psychologist, 33 (2/3): 109-128.
  • Doolittle, P. E. & Camp, W. G. (1999). Constructivism: The career and technical education perspective. Journal of Vocational and Technical Education, 16 (1): Retrieved March 5, 2007 from http://scholar.lib.vt.edu/ejournals/JVTE/v16n1/doolittle.html
  • Dubois, P.H.(1970). A History of Psychological Testing. Boston, Allyn & Bacon, Inc. In Delandshere, G. (2001). Implicit Theories, Unexamined Assumptions and the Status Quo of Educational Assessment. Assessment in Education: Principles, Policy & Practice, 8 (2): 113 - 133.
  • Dwyer, C. A. (1998). Assessment and Classroom Learning: theory and practice. Assessment in Education: Principles, Policy & Practice, 5(1): 131 - 137.
  • Ecclestone, K. (1997). Energising or enervating: implications of national vocational qualifications in professional development. Journal of Vocational Education and Training, 49 (1): 65-79. Retrieved September 1, 2009 from http://pdfserve.informaworld.com/525209_731222516_739051801.pdf
  • Ecclestone, K. (2002). Learning autonomy in post-16 education; The politics and practice of formative assessment. London and New York; RoutledgeFalmer.
  • Ecclestone, K. (2006). Assessment in post-14 education: The implications of principles, practices and politics for learning and achievement. Research Report 2, The Nuffield Review of 14-19 Education & Training.
  • Elam, S. (1971). Performance-Based Teacher Education: What is the State of the Art? Washington, DC: American Association of Colleges for Teacher Education.
  • Epstein, M. & T. Ryan (2002). Constructivism. Maureen Epstein's Online Research Portfolio. Retrieved February 15, 2007 from: http://tiger.towson.edu/users/mepste1/researchpaper.htm.
  • Erridge, A. and Perry, S. (1994). The validity and value of national vocational qualifications: The case of purchasing. Vocational Aspect of Education, 46 (2): 139-154. (EJ 497 192)
  • Fletcher, S. (1992) Competence-based assessment techniques, Second edition. London & Stirling (USA); Kogan Page
  • Frederiksen, N. (1984). The real test bias. Influences of testing on teaching and learning. American Psychologist, 39, 193-202.
  • Gardner, H. (1991). Assessment in context: The alternative to standardized testing. In Murphy, P. (1999) Learners, learning & assessment. London, Paul Chapman Publishing.
  • Garrison, J. (1997). An alternative to Von Glaserfeld's subjectivism in science education: Deweyan social constructivism. Science and Education, 6: 301- 312.
  • Gergen, K. J. (1995). Social construction and the educational process. In L. P. Steffe & J. Gale, Constructivism in education (pp. 17-39). Hillsdale, NJ: Erlbaum. In Doolittle, P. E. & Camp, W. G. (1999). Constructivism: The career and technical education perspective. Journal of Vocational and Technical Education, 16 (1): Retrieved March 5, 2007 from http://scholar.lib.vt.edu/ejournals/JVTE/v16n1/doolittle.html
  • Gipps, C. (1994). Developments in Educational Assessment: what makes a good test? Assessment in Education: Principles, Policy & Practice, 1(3): 283 - 292.
  • Gibbs, G. & Simpson, C. (2004). Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education, 1; 3-31.
  • Glaser, R. (1963). Instructional technology and the measurement of learning outcomes: some questions. American Psychologist, 18: 519-521.
  • Gonczi, A., Hager, P. & Oliver, L. (1990). Establishing Competency-Based Standards in the Professions. National Office of Overseas Skills Recognition Research Paper No. 1, DEET. Canberra: Australian Government Publishing Service.
  • Grant, G., Elbow, P., Ewens, T., Gamson, Z., Kohli, W., Neumann, W., Olesen, V. and Riesman, D. (1979) On competence: a critical analysis of competence- based reforms in higher education. San Francisco: Jossey-Bass.
  • Hager, P., Gonczi, A. & Athanasou, J. (1994) General Issues about Assessment Competence; Assessment & Evaluation in Higher Education, 19:1, 3 - 16
  • Hager, P. (1993). Conceptions of Competence. Philosophy of Education. Retrieved November 25, 2008 from http://www.ed.uiuc.edu/EPS/PES-Yearbook/93_docs/HAGER.HTM
  • Hayton, G. & Wagner, Z. (1998). Performance Assessment in Vocational Education and Training. Australian and New Zealand Journal of Vocational Education Research, 6(1); 69-85.
  • Heywood, L., Gonczi, A. & Hager, P. (1992). A Guide to Development of Competency Standards for Professions. National Office of Overseas Skills Recognition Research Paper No. 7, DEET. Canberra: Australian Government Publishing Service.
  • Hills, T.W. (1993). Assessment in context: Teachers and children at work. Young Children, 48(5); 20-28.
  • Hill, W.F. (2002). Learning: A survey of psychological interpretation (7th ed). Boston, MA; Allyn and Bacon. In Chowdhury, M. S. (2006). Human Behavior In The Context of Training: An Overview Of The Role of Learning Theories as Applied to Training and Development. Journal of Knowledge Management Practice, 7(2). Retrieved May 5, 2009 from http://www.tlainc.com/articl112.htm
  • Hyland, T. (1994).Competence, Education and NVQs: Dissenting Perspectives. London: Cassell.
  • Hyland, T. (1993). Competence, Knowledge and Education. Journal of Philosophy of Education, 27 (1); 1993. Retrieved March 25, 2007 from http://www.wiley.com/bw/journal.asp?ref=0309-8249
  • Illeris, K. (2000). Learning theory and adult education. AERC Proceedings. Retrieved April 26, 2009 from http://www.edst.educ.ubc.ca/aerc/2000/illerisk-web.htm
  • Jessup, G. (1991) Outcomes: NVQs and the emerging model of education and training. London; Falmer.
  • Kerka, S. (1997). Constructivism, Workplace Learning, and Vocational Education. ERIC Clearinghouse on Adult Career and Vocational Education, Columbus OH. ED407573. ERIC Digest No. 181. Retrieved October 3, 2007 from http://www.eric.ed.gov/ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/16/90/26.pdf
  • Kerka, S. (1998). Competency-based education and training myths and realities. Clearinghouse on Adult Career and Vocational Education, Ohio State University; Colombus, OH. Retrieved August 26, 2009 from http://www.calpro-online.org/eric/textonly/docgen.asp?tbl=mr&ID=65
  • Knight, P. (2004). The local practices of assessment. Assessment & Evaluation in Higher Education, 31(4): 435 - 452.
  • Larochelle, M., Bednarz, N. & Garrison, J. (Eds.) (1998). Constructivism and education. Cambridge: Cambridge University Press.
  • Lembaga Peperiksaan Malaysia (LPM) (2002). Kertas Konsep Pentaksiran Kompetensi dan Persijilan Modular. Kuala Lumpur: LPM.
  • Lewis, R. J. (1999). Reliability and validity: Meaning and Measurement. Annual Meeting of the Society for Academic Emergency Medicine (SAEM). Boston, Massachusetts.
  • Maclellan, E. (2007). What is a competent "competence standard"? Tensions between the construct and assessment as a tool for learning. Quality Assurance in Education, 15(4): 437-448. Retrieved April 27, 2008 from http://www.emeraldinsight.com/Insight/viewPDF.jsp?contentType=Article&Filename=html/Output/Published/EmeraldFullTextArticle/Pdf/1200150404.pdf
  • Manpower Services Commission (1981). A New Training Initiative: Agenda for Action. London: HMSO. In Wolf, A. (1995). Competence-based Assessment. Buckingham: Open University Press.
  • Mayer, R. E. (1992). Learners as information processors: Legacies and limitations of educational psychology's second metaphor. Educational Psychologist, 31(3/4): 151-161.
  • McNabb, M. L. & Smith, S. (1998). Proximal instruction strategies and assessment tools for managing performance-based learning. 20th Annual Selected Research Proceedings of the 1997 National Conference of the Association for Educational Technology and Communication (pp. 261-279). St. Louis, MO. Retrieved August 30, 2008 from http://www.learningauge.org/director.html
  • McNerney, D. J., & Briggins, A. (1995). Competency assessment gains favor among trainers. HR Focus, 72: 19. Retrieved August 26, 2009 from http://vnweb.hwwilsonweb.com/hww/results/results_single_fulltext.jhtml;hwwilsonid=B1OMHR3B3DHJZQA3DIMCFF4ADUNGIIV0
  • Miller, G.E. (1990). The assessment of clinical skills/competence/performance. Academic Medicine, 65: 63-67. In Falchikov, N. (2004). Improving Assessment through Student Involvement: Practical Solutions for Aiding Learning in Higher Education. London and New York: Routledge.
  • Moshman, D. (1982). Exogenous, endogenous, and dialectical constructivism. Developmental Review, 2: 371-384.
  • Murphy, P. (1999). Learners, Learning & Assessment. London: Paul Chapman Publishing.
  • National Association for the Education of Young Children & National Association of Early Childhood Specialists in State Departments of Education (1990).
  • National Council for Vocational Qualifications (1991b). Guide to National Vocational Qualifications. Great Britain: Employment Department.
  • N.C.R.E.L. (n.d.). Assessment in a Constructivist Classroom. North Central Regional Educational Laboratory. Retrieved March 28, 2007 from http://www.ncrel.org/sdrs/areas/issues/methods/assment/as7const.htm
  • National Forum on Assessment (1995). Principles and Indicators for Student Assessment Systems. Cambridge, MA: FairTest.
  • Ormrod, J.E. (1995). Human Learning (2nd ed.). New Jersey: Prentice Hall.
  • Perreiah, A. R. (1984). Logic examinations in Padua circa 1400, History of Education, 13: 85-103.
  • Phillips, D. C. (1995). The good, the bad, and the ugly: The many faces of constructivism. Educational Researcher, 24(7): 5-12.
  • Piaget, J. (1977). The Development of Thought: Equilibration of Cognitive Structures. New York: Viking Press. In Doolittle, P. E. & Camp, W. G. (1999). Constructivism: The career and technical education perspective. Journal of Vocational and Technical Education, 16(1). Retrieved March 5, 2007 from http://scholar.lib.vt.edu/ejournals/JVTE/v16n1/doolittle.html
  • Pitman, J. A., Bell, E. J. & Fyfe, I. K. (1999). Assumptions and Origins of Competency-Based Assessment: New challenges for teachers. Paper presented at the conference of the Australian Association for Research in Education and the New Zealand Association for Research in Education, Melbourne, November 1999. Retrieved March 15, 2007 from http://www.qsa.qld.edu.au/downloads/publications/research_qbssss_assessment_assumptions_99.pdf
  • Popham, W. J. (1978). Criterion-referenced Measurement. Englewood Cliffs, NJ: Prentice-Hall.
  • Prawat, R. S., & Floden, R. E. (1994). Philosophical perspectives on constructivist views of learning. Educational Psychologist, 29(1): 37-48.
  • Riesman, D. (1979). Society's demands for competence. In Grant, G., Elbow, P., Ewens, T., Gamson, Z., Kohli, W., Neumann, W., Olesen, V. and Riesman, D. (Eds.), On Competence: A Critical Analysis of Competence-based Reforms in Higher Education. San Francisco: Jossey-Bass.
  • Rothblatt, S. (1982). Failure in early nineteenth century Oxford and Cambridge, History of Education, 11: 1-21.
  • Rowe, C. (1995). Clarifying the use of competence and competency models in recruitment, assessment and staff development. Industrial and Commercial Training, 27(11): 12-17. Retrieved August 28, 2009 from http://www.emeraldinsight.com/Insight/ViewContentServlet;jsessionid=A35236CBA6750FB448C4050699512110?Filename=Published/EmeraldFullTextArticle/Pdf/0370271103.pdf
  • Rudolph, F. (1962). The American College and University: A History. New York: Alfred A. Knopf. In Delandshere, G. (2001). Implicit Theories, Unexamined Assumptions and the Status Quo of Educational Assessment. Assessment in Education: Principles, Policy & Practice, 8(2): 113-133.
  • Sadler, R. (1989). Formative assessment in a learning culture. Educational Researcher, 29(7): 4-14.
  • Sadler, R. (1989). Formative Assessment and the Design of Instructional Systems. Instructional Science, 18: 119-144.
  • Skinner, B.F. (1968). The Technology of Teaching. New York: Appleton-Century-Crofts.
  • Staver, J. R. (1995). Scientific research and oncoming vehicles: Can radical constructivists embrace one and dodge the other? Journal of Research in Science Teaching, 32(10): 1125-1128.
  • Stevenson, J. (1996). The Metamorphosis of the Construction of Competence. Studies in Continuing Education, 18(1): 22-42.
  • Stiggins, R. (2004). New Assessment Beliefs for a New School Mission. Phi Delta Kappan, 86(1): 22-27.
  • Swancheck and Campbell (1981). Competence/performance-based teacher education: The unfulfilled promise. Educational Technology, June: 5-10. In Fletcher, S. (1992). Competence-based Assessment Techniques (2nd ed.). London & Stirling (USA): Kogan Page.
  • Tillema, H. H., Kessels, J. W. M., & Meijers, F. (2000). Competences as building blocks for integrating assessment with instruction in vocational education: A case from The Netherlands. Assessment & Evaluation in Higher Education, 25: 265-278.
  • Training Agency (1988/9). Development of Assessable Standards for National Certification, Guidance Notes 1-8. Sheffield: Training Agency.
  • von Glasersfeld, E. (1984). An introduction to radical constructivism. In P. Watzlawick (Ed.), The Invented Reality (pp. 17-40). New York: Norton. In Doolittle, P. E. & Camp, W. G. (1999). Constructivism: The career and technical education perspective. Journal of Vocational and Technical Education, 16(1). Retrieved March 5, 2007 from http://scholar.lib.vt.edu/ejournals/JVTE/v16n1/doolittle.html
  • von Glasersfeld, E. (1995). A constructivist approach to teaching. In L. P. Steffe & J. Gale (Eds.), Constructivism in Education (pp. 3-16). Hillsdale, NJ: Erlbaum. In Doolittle, P. E. & Camp, W. G. (1999). Constructivism: The career and technical education perspective. Journal of Vocational and Technical Education, 16(1). Retrieved March 5, 2007 from http://scholar.lib.vt.edu/ejournals/JVTE/v16n1/doolittle.html
  • von Glasersfeld, E. (1998). Why constructivism must be radical. In M. Larochelle, N. Bednarz, & J. Garrison (Eds.), Constructivism and Education (pp. 23-28). Cambridge: Cambridge University Press. In Doolittle, P. E. & Camp, W. G. (1999). Constructivism: The career and technical education perspective. Journal of Vocational and Technical Education, 16(1). Retrieved March 5, 2007 from http://scholar.lib.vt.edu/ejournals/JVTE/v16n1/doolittle.html
  • Vygotsky, L. S. (1978). Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press.
  • Watson, A. (1994). Strategies for the Assessment of Competence. Journal of Vocational Education & Training, 46(2): 155-165.
  • Wilbrink, B. (1997). Assessment in historical perspective. Studies in Educational Evaluation, 23(1): 31-48.
  • Wiliam, D. (2000). The meanings and consequences of educational assessment. Critical Quarterly, 42(1): 105-127.
  • Wirth, A. G. (1972). Education in the technological society: The vocational-liberal studies controversy in the early twentieth century. Scranton, PA: Intext Educational Publishers.
  • Wolf, A. (1995). Competence-based Assessment. Buckingham and Philadelphia: Open University Press.
