The Issues Of The Interpretations Of Competence Education Essay


There are various issues related to competence in CBA, particularly those concerning the interpretation and assessment of competence. Competence is undoubtedly an abstract concept which cannot be observed directly (Wilmut & Macintosh, 1997) but which can generally be inferred from observed performance (Gonczi, 1994; Wood & Power, 1987). Nevertheless, such inference may or may not provide accurate insight into competence (Wood & Power, 1987). The following section examines both the interpretation of the concept of competence and the assessment of competence.

The concept of competence is often socially constructed (Evans, 2001), taking on various definitions and interpretations in different contexts (Eraut et al., 1998; Lizzio & Wilson, 2004; Messick, 1984; Miller, 1990; Parry, 1996; Tillema et al., 2000) and being used to support particular ideological positions (Evans, 2001). The common notion is that competence consists of a combination of knowledge, skills and attitudes that can be used to solve a problem (Baartman et al., 2007) in appropriate job contexts (Lizzio & Wilson, 2004). Taconis et al. (2004) further stress that knowledge, skills and attitudes, if addressed separately in CBA, are not sufficient for the desired competent professional behaviour. According to Tillema et al. (2000), competences that include professional skills such as learning to learn, interactive skills, communication skills, information processing, problem-solving and reflective skills are essential in the current information and knowledge society. They further emphasise the need for learners to be equipped with competences that will make them more employable in the competitive labour market. Chapter 3 discussed in detail the definitions and interpretations of competence in relation to employability as used in this study.

However, the researcher would like to reflect on two of the three common approaches to interpreting the concept of competence in CBA: the behaviourist and the cognitive (Hager, 1994; Norris, 1991; Gonczi, 1994; Wesselink et al., 2003; Mulder et al., 2007). Subsequently, the researcher will explore what it means to say a learner is competent, which necessarily involves examining what the judgment is based on; that is, the basic assumptions of the assessment being made (Pitman et al., 1999). Deciding what is to be assessed is crucial in carrying out any assessment process (Hager et al., 1994), and, as the name itself suggests, CBA involves the process of assessing competence. Hager et al. (1994) point out that there are various ways to assess competence depending on how it is conceived. In this study, the assessment of competence is discussed in relation to the interpretations of competence. Figure 4.3 illustrates the development of the interpretations and assessments of competence in CBA; the behaviourist and cognitive approaches will be discussed in detail, while the generic approach will be touched on only briefly.

[Figure 4.3 pairs each interpretation of competence with its corresponding approach to the assessment of competence: the behaviourist approach with the specific tasks approach, the generic approach with the generic skills approach, and the cognitive approach with the integrated approach.]

Figure 4.3: The Development of the Interpretations and Assessments of Competence in CBA

Adapted from: Hager, 1993; Hyland, 1993; Wesselink et al., 2003; Baartman et al., 2007

4.4.1 Competence in the Behaviourist Approach

In the behaviourist approach, competence is characterised by the satisfactory completion of atomised tasks (Gonczi, 1994; Eraut, 1994) which can be observed in learners' behaviour and performance (Wesselink et al., 2003). In its narrowest meaning, competence concerns the ability to perform a range of tasks to predetermined standards (Evans, 2000) within an employment setting (Fletcher, 1991; Evans, 2001). Examples of this behaviouristic approach can be seen in the early development of the NVQs (National Vocational Qualifications) and SVQs (Scottish Vocational Qualifications) in the UK. Learners in this traditional competence-based education in the UK are considered competent when they can perform a series of tasks (Gonczi, 1994; Wesselink et al., 2003) that meet the expectations of a competent worker (Ecclestone, 1996; Wesselink et al., 2003) in a specified occupational area (Hyland, 1993; FEU/PICKUP, 1987). For example, when a brick-laying trainee performs a series of tasks such as cutting and shaping bricks or blocks, mixing mortar, laying bricks in rows and removing excess mortar to the standard expected of a professional brick-layer, he or she is then considered to be competent.

However, this behaviouristic approach to competence has received serious criticism, especially for its minimal interpretation of the concept, which is considered narrow, confusing and inadequate (Evans, 2001). The approach is concerned with performance outcomes involving observable actions and behaviour (Barnett, 1994; Hyland, 1995; Wesselink et al., 2003) rather than with the learning process and the experience gained (Hyland, 1995; Wesselink et al., 2003). In other words, the developmental process is not perceived as competence (Griffin, 1995; Masters & McCurry, 1990), which reduces the authenticity of real-life experience in the professions, where action is often interwoven with thought, understanding and reflection (Barnett, 1994; Wesselink et al., 2003). Because the behaviouristic approach emphasises lower-level and psychomotor competences at the expense of higher-order competences (Masters, 1993), it diminishes the essence of performance associated with a broader sense of competence (Griffin & Gillis, 2000). Furthermore, the behaviouristic approach does not explore the connections between the discrete, small-scale tasks (Gonczi, 1994; Wesselink et al., 2003) or the transformations of those tasks (Wesselink et al., 2003). Instead, tasks are broken down into competences with overly detailed specifications or criteria (Griffin & Gillis, 2000), which can cause CBA to lose its intended function of predicting transferability (Griffin, 1995; Masters, 1993).

Although the behaviouristic approach offers an impoverished view of competence, it allows simple recording and reporting of the assessment of competence (Griffin, 1995; Masters, 1993; Griffin & Gillis, 2000). Generally, assessment of competence in the behaviourist approach consists of an assessor, as an active observer, ticking off a supposedly unambiguous assessment checklist (Jones, 1999) of the discrete tasks performed by learners in situ (Griffin & Gillis, 2000; Mulder, 2006). Evidence of competence is gathered through direct observation of learners' behaviour and performance (Wesselink et al., 2003; Mulder, 2006). The approach is valued for its simplicity: assessors can be trained to complete forms consisting of lists without much professional judgement being required (Griffin & Gillis, 2000). Succinctly, the main characteristics of the behaviourist approach to the assessment of competence are demonstration, observation and assessment of behaviour (Mulder et al., 2006). Despite this seemingly simple and straightforward approach, assessors sometimes encounter vague assessment criteria that do not really describe the competences being assessed (Jones, 1999). Attempts to ground assessment in direct observation can also be problematic, as such observation is usually loaded with the assessors' values and subjectivity (Kemshall, 1993).

Furthermore, this technical approach to the assessment of competence has been criticised for the dehumanising effect it has on learners (Evans, 2001; Ashworth & Saxton, 1990; Hyland, 1993), as it restricts their opportunity to be creative in learning outcomes or competences (Ashworth & Saxton, 1990). The process of acquiring competences in this approach does not emphasise cognition and social learning (Ramsay, 1993; Jones & Moore, 1993; 1995), and thus learners' ability to acquire competences in informal everyday settings is ignored or neglected (Giddens, 1991). In addition, the approach reduces an occupation to a series of discrete observable tasks which do not adequately represent that occupation (Ashworth & Saxton, 1990; Hager et al., 1994). Because the approach concentrates on an individual demonstrating competent performance (Wolf, 1995) and emphasises personal competences, it encourages individualism at the expense of the ability to work in a team, even though teamwork is essential to performing relevant aspects of a job in the actual workplace (Ashworth, 1992). For example, a trainee plumber installing a wash basin would concentrate solely on meeting all the predetermined criteria or competency standards of the task in order to demonstrate competence, while disregarding any communication or teamwork with other trainees. This is very different from real working life, where a plumber has to collaborate with builders and electricians on a building site for any kind of plumbing work.

Besides depriving learners of the chance to develop the ability to engage in teamwork, another shortcoming of this approach to assessment is that it pays little attention to theoretical knowledge and understanding (Ashworth, 1992). While assessing competent performance is vital, assessing knowledge and understanding is just as important: it is an essential aspect of competence, without which an assessment lacks credibility or construct validity (Ashworth, 1992). A valid assessment method should measure what it is supposed to measure, in this case the relevant elements of competence (Watson, 1994). Both performance and knowledge are aspects of competence that should be assessed and measured. People who 'understand' are those who have a clear mental representation of the situation confronting them and can deal with it creatively and imaginatively, using their acquired knowledge as an interpretive resource (Ashworth, 1992). Thus, it is insufficient to assess competence by looking at performance alone while ignoring the underlying knowledge and understanding. Such an assessment method risks producing people who are like robots in a factory: they can perform a job or task efficiently and effectively, but they have no understanding of what they are doing.

Consequently, the interpretation and assessment of competence progressed from this narrow behaviourist approach to the generic approach (Norris, 1991). The generic approach regards competence as the possession of a series of general desirable attributes of a practitioner (Gonczi, 1994), or personal qualities such as knowledge, skills and critical thinking abilities (Mulder et al., 2007) used to solve problems, analyse, communicate, and hold attitudes of appropriate kinds (Hager et al., 1994). The assessment of competence is usually compartmentalised (Wolf, 1990; Gonczi, 1994), with the attributes commonly assessed in isolation from actual work practice (Gonczi, 1994; Hager et al., 1994; Evans, 2001; Mulder et al., 2007). These attributes are, however, highly context dependent, and to assess them out of context would be inappropriate (Hager et al., 1994). Moreover, it remains uncertain whether competence statements can clearly pick out the precise competence that relates to knowledge and understanding (Hyland, 1993). The approach has therefore also been criticised for lacking evidence of the existence of generic competences, and the transferability of its occupational skills remains doubtful (Hyland, 1993; Gonczi, 1994; Mulder et al., 2007). As such, this approach is not considered suitable for education (Gonczi, 1994; Mulder et al., 2007) and will not be discussed in depth in this study.

4.4.2 Competence in the Cognitive Approach

Competence in the cognitive approach has evolved from a total focus on intelligence and intellectual abilities alone to include performance that encompasses social and emotional components (Mulder et al., 2006; Hodkinson & Issit, 1995). Traditionally, the definition of competence in this approach comprised overall human intelligence in attaining knowledge and understanding, acquiring skills and achieving good performance with appropriate values and attitudes (Hodkinson & Issit, 1995). The more recent interpretation of competence in the cognitive approach consists of the successful performance of realistic professional tasks (Gonczi et al., 1990; Heywood et al., 1992) in which knowledge, skills and attitudes are incorporated (Hodkinson & Issit, 1995; Mulder, 2000; Mulder et al., 2007) within a context of general attributes (Gonczi, 1994; Hager et al., 1994). Competence development in the cognitive approach is associated with the social constructivist approach, which emphasises the similarity between the competences required for successful performance in society and collaborative competence development (Mulder, 2007; Kerka, 1997). In other words, the main focus is on the assessment of knowledge creation or construction in the workplace, integrating personal qualities in a social context (Mulder, 2007; Kerka, 1997; Billet, 1994). This integrated and holistic approach to competence could be the way to ensure that CBA retains its unique characteristic of occupation-specific tasks without being as atomistic about learning and performance (Wilmut & Macintosh, 1997) as it has always been criticised for being. The approach is also considered a powerful device for improving the content, delivery and assessment of current curricula (Hager, 1993).

Thus, the assessment of competence in the cognitive approach consists of the assessment of occupation-specific tasks based on competency standards, incorporated with the assessment of generic competences in occupation-specific contexts (Gonczi, 1993) with an appropriate level of holism (Hodkinson & Issit, 1995). According to Hodkinson & Issit (1995), there are two dimensions of holism: the first relates to the integration of learners' knowledge and understanding, as well as the values and skills needed in an occupation, while the second involves the judgment made on the education and training process in developing learners' professional capabilities. As an example of the former dimension, the personal identity of a trainee in gerontology and geriatric services is certainly very important to the elderly in a nursing home, but personal identity is difficult to break down into measurable units; assessment in such a context therefore needs to apply the first dimension of holism. The latter dimension of holism is employed to ensure that learners' valuable experience of practice during the training process is taken into account, since such experience can develop learners' competences (Dall'Alba & Sandberg, 1996). Furthermore, it is insufficient to focus only on final outcomes or performance against standards, as various forms of evaluation during the learning process can also help learners develop competence (Wesselink et al., 2003). For example, it is not just the laid tiles, the product or final outcome, that should be assessed; the learning process involved in accomplishing the task, such as the research, the design of the layout and the sketches of the design, should also be assessed comprehensively through formative assessment or even a portfolio.

This integrated and holistic approach selects only the key tasks central to the practice of a profession and subsequently identifies the main attributes required for competent performance, thus avoiding the problem of numerous tasks (Hager et al., 1994). Furthermore, these realistic professional tasks provide sufficient, authentic learning experience that relates to the actual and future workplace environment, helping to reduce the gap between the learning institution and the workplace. Competence is inferred from the performance of this manageable number of tasks (Hager, 1993; Hager et al., 1994). This inference makes the assessment of competence in this approach similar to other kinds of assessment, whose validity and reliability can be increased using established procedures (Hager et al., 1994). The assessment of competence basically involves gathering relevant evidence and following proper procedures to ensure that inferences about competence are soundly based (Hager et al., 1994). Even when all the necessary steps have been taken to ensure the reliability of the assessment, however, the integrated approach still relies on professional judgment as to whether the performance of a task is competent or otherwise. This requires proper training in assessment procedures to enable assessors to make sound judgments on learners' performance, not least because teachers acting as assessors have themselves raised questions about what it means to say a student is competent (Pitman et al., 1999).

Another, more contextual form of the integrated approach has been suggested by Watson (1994), in which assessment is based on samples of performance and evidence of competence is gathered from assessment events such as practical tests, exercises and simulations. The practical tests are designed to measure the technical or performance aspects of competence, while supplementary evidence is collected from written and oral questions and multiple-choice tests (NCVQ, 1991b, p. 22) to measure underpinning knowledge and understanding. Judgments about competence are based on the criteria set for each assessment event, and students are assessed individually whenever they are ready and judged as 'competent' or 'not competent' (Watson, 1994). This approach is normally employed by formal colleges or off-the-job training settings and is often carried out on behalf of industry. For example, the assessment conducted by the Box Hill College of TAFE, Victoria, for its Hairdressing Certificate programme is based on the observation of samples of job performance on specially designed practical tasks covering basic hairdressing operations such as cutting, styling, waving, colouring and basin service. In addition, theory tests to assess underlying knowledge are administered to provide supplementary evidence, and a pass rate of 80% is required before a student is considered competent in a particular element (Watson, 1994). To ascertain the validity of the assessment method, assessment centres, be they colleges or schools, have to maintain the quality and range of their facilities at all times and increase their capacity to simulate real workplace conditions and events. The extent to which these assessment centres comply with the requirements that ensure the validity of CBA has yet to be examined in depth. Any invalid assessment is a waste of effort, time and money, and it ultimately affects the quality of the learners being trained. The study therefore looked into the sufficiency of the facilities provided by the schools offering BID in meeting the requirements to develop learners' competence and employability. Research question 4 (RQ4), 'What are the factors that influence students' employability, and are there any differences in the strength and pattern of the relations between these factors and the employability of students of different gender and race?', was developed to investigate this particular matter.

Another example of this integrated approach can be seen in the assessment of the skills and abilities of medical students and residents using "Miller's Pyramid", shown in Figure 4.4, as a framework of competence (Miller, 1990). According to Miller (1990), the skills and abilities demonstrated in the two top cells of the pyramid reflect clinical reality, as they correspond to action or performance. He further elaborates that when learners have demonstrated competence in these higher domains, they are implied to have acquired the prerequisite knowledge (knows) and the ability to apply that knowledge (knows how). For example, when a trainee nurse is able to describe, in an oral or structured written test, the procedure for drawing a blood sample from a patient using the right equipment correctly and safely, this indicates that he or she has acquired the basic clinical knowledge and the procedural knowledge involved; demonstrating the procedure in a simulated setting would then correspond to the domain of shows how. However, only when the trainee can carry out the procedure in a real-life setting, during actual patient care, is he or she considered to have demonstrated competence in the highest domain, does (Miller, 1990). The prerequisite knowledge could be assessed using multiple-choice questions, while the procedural knowledge could be assessed in the form of a written project or portfolio. The concept of competence in Miller's Pyramid is similar to the interpretation of knowledge and skills incorporated in the concept of competence in this study, as discussed in Chapter 3: the two lower domains of the pyramid correspond to declarative and procedural knowledge, while the two upper domains relate to specialist skills.

[Figure 4.4 shows Miller's Pyramid, whose four levels, from base to apex, are: knows, knows how, shows how and does.]

Figure 4.4: Miller's Pyramid

Source: Miller, 1990, p. 65

4.5 The Implementation of CBA

The implementation of CBA dates back to the 1960s in the United States of America. Since it was first implemented in US teacher training colleges, CBA has been adapted and implemented in various parts of the world: the UK, European countries, Australia, New Zealand and other developing countries, including Malaysia. CBA has also undergone a series of developments and improvements, as discussed in previous sections of this chapter. The following section discusses the main features of CBA as practised in the UK, the primary model from which the implementations of CBA in Australia, New Zealand and Malaysia were adapted. The discussion then elaborates on the implementation of CBA in Malaysia, which was in turn adapted and modified from the models used in the three countries mentioned above to suit the Malaysian context.

CBA in the UK - National Vocational Qualifications (NVQs)

The CBA models used in the UK and the USA are basically similar in many ways, including in their motivating forces, apart from institutional differences (Wolf, 1995); where they differ is in the consistency and speed with which competence-based recommendations were translated into compulsory national assessment programmes in the UK (Wolf, 1995). CBA was seen as the way forward in vocational education and training (VET) because it offered non-traditional learners, who would normally not qualify for higher education (seen as too academic, self-interested and elitist), equal opportunities for learning (Wolf, 1995; Ecclestone, 1996) and for personal and professional development. Hence, government bodies initiated reforms of VET that led to the birth of National Vocational Qualifications (NVQs) and Scottish Vocational Qualifications (SVQs). The development of CBA has been associated with the development of NVQs in England and Wales and SVQs in Scotland. Both NVQs and SVQs share the same competence-based characteristics, so the discussion which follows concentrates on NVQs. Although GNVQs, which offer an alternative to GCE A-levels by providing learners with general vocational preparation for employment or further education at different levels, are also available and accredited by the NCVQ, the focus of this discussion remains on NVQs, as they are more similar and relevant to the implementation of CBA in secondary schools in Malaysia.

The National Council for Vocational Qualifications (NCVQ) introduced NVQs, a more formal and structured non-academic post-16 education and training framework, in England and Wales in 1986 (Wolf, 1995; Ecclestone, 1996). The Employment Department at the time (later part of the Department for Education and Employment) set up and administered the standards of competence, which were developed by lead industry bodies and formed the basis for the accreditation of NVQs awarded by the NCVQ (Ecclestone, 1996; Debling, 1989). These national standards of competence across all occupational areas are set at levels from Level 1 all the way to Level 5, with increasing degrees of difficulty and complexity through the levels (Ecclestone, 1996; Wolf, 1995). NVQs consist of large numbers of modules that can be delivered separately or combined into qualifications (Wolf, 1995), which are offered in schools, colleges, universities and industry using the prescribed competence standards (Ecclestone, 1996, 1997). The aim of NVQs is to standardise vocational or job-related training and qualifications within all occupational areas provided by various bodies under a national qualifications framework (Ecclestone, 1996; QCA, 2006). NVQs are now included in the National Credits and Qualifications Framework (NCQF), and this framework is made known to the public so that they can arrange and manage their progress in education or professional skills (QCA, 2006).

Figure 4.5, adapted from Ecclestone (1996, p. 36) and Wolf (1995), summarises the process of assessment and accreditation in NVQs. The standards represent the required competence in a relevant occupational context and are basically based on a functional analysis of actual workplace roles (Fletcher, 1991; Wolf, 1995; Ecclestone, 1996), whereby for each industry there exists a single identifiable model of what competent performance entails (Wolf, 1995). The key outcomes, which relate to the underlying purposes, are derived from the functional analysis and then turned into units and elements of competence. The structure of an NVQ is modular or unit-based, with each unit consisting of groups of elements of competence and their associated performance criteria, reflecting a discrete activity or sub-area of competence within an occupational area (Ecclestone, 1996; Fletcher, 1991; Wolf, 1995; Canning, 2000). It is in the element of competence that the performance criteria appear, reflecting the critical aspects of performance, such as the qualities essential to competent performance (Ecclestone, 1996; Fletcher, 1991; Wolf, 1995). An NVQ is defined as a statement of competence which incorporates specified standards in performing a range of work-related activities, together with the skills, knowledge and understanding which underpin such performance in employment (Training Agency, 1988/9). Thus, each NVQ encompasses a particular area of work at a specific level of achievement and fits into the NVQ framework of five levels, with levels 1 to 4 clearly defined while level 5 covers anything beyond. Judgment on competence is then based on evidence gathered directly or indirectly from the various sources available (Wolf, 1995; Ecclestone, 1996; Canning, 2000). To assure the quality of NVQs, monitoring and moderation are employed: internal and external verification therefore enter the assessment process before any certificate is awarded to learners (Ecclestone, 1996).

[Figure 4.5 depicts the NVQ assessment and accreditation process. A functional analysis of an occupational area is broken down into units of competence, from which elements of competence and their performance criteria are derived, together with the range and scope of situations in which performance must be demonstrated and the underpinning knowledge. Evidence of competence, both direct and indirect, is collected through observation of workplace activity; observation of products and artefacts (e.g. reports, records, artefacts created by the learner); testimony of others (e.g. supervisors, clients); simulations and role plays; projects and work-based assignments; and oral questions. The evidence is assessed and the learner judged competent or not yet competent; where evidence is insufficient, further evidence is generated and further practice and development of skills undertaken. Internal and external verification then precede the NVQ award, which carries the NVQ title (including level) and a summary paragraph of the area of competence demonstrated.]

Figure 4.5: Assessment and accreditation in NVQs

Adapted from: Ecclestone, 1996, p. 36; Wolf, 1995

Highly specified performance criteria make competence-based assessment concrete, as they comprise the statements by which an assessor judges whether the evidence provided by a learner is sufficient to demonstrate competent performance. Each criterion consists of a short sentence with two components: a critical outcome and an evaluative statement of how the activity has produced the required result (Fletcher, 1991). In addition, the performance criteria state explicit measures of outcomes and are made available to both assessors and learners, so that learners know exactly what to achieve and assessors in turn can provide specific feedback. An example of an element of competence and its performance criteria is given in the table below, where learners are required to meet every one of the criteria successfully.

Table 4. : Sample performance criteria from an NVQ element

Financial Services (Building Societies) - Level 2. Element title: 'Set up new customer accounts'. Provided as an exemplar in The Guide to National Vocational Qualifications (NCVQ 1991, p.)

- Internal/external documents are complete, accurate and legible, and delivered to the next stage in the process to schedule
- All signatures/authorisations are obtained to schedule and actioned promptly
- Correspondence to the customer is accurate and complete, all necessary documents enclosed, and despatched promptly
- Correspondence to other branches of the society and other organisations/professional agencies is accurate and complete, all necessary documents enclosed, and despatched
- Cash transactions and financial documents are processed correctly and treated confidentially
- Computer inputs/outputs are accurate and complete
- On completion of the setting up, the account is filed in the correct location
- Indicators of contingencies/problems are referred to an appropriate authority

NVQs have succeeded in providing opportunities for almost everyone to develop relevant job-related skills and in encouraging learners to engage in autonomous, self-directed learning (Canning, 2000), despite the criticisms of atomism discussed in section 4.4.1. They also emphasise the significance of accrediting learning at work (Canning, 2000), since work performance is an essential component of liberal education (Bridges, 1996). Furthermore, a high degree of transparency within the assessment process is achievable through the criterion-referenced assessment methodology (Canning, 2000). Nevertheless, NVQs are costly and time-consuming in practice owing to their detailed and reductionist approach to assessment (Raggart, 1994; Wolf, 1995; Eraut et al., 1996). It has also been found that gender barriers between occupations are reinforced, as women do much better than men in academic qualifications (Felstead et al., 1995, p. 24). Another setback of NVQs is that their retention and completion rates are lower than those for academic qualifications (OFSTED/Audit Commission, 1993).