Subsequent to the recruitment of a pool of applicants, organisations need to decide which applicants to employ. Many organisations are realising the important contribution that effective selection practice can make and, in light of this, are utilising an assortment of methods to improve the effectiveness of the entire recruitment and selection process. Validity and reliability are two aspects considered fundamental when assessing the robustness of selection tools, particularly when viewed from the traditional psychometric perspective (Searle, 2003). All selection methods and tools are developed to measure and assess candidates’ appropriateness for the specified job role. The candidate’s performance results are frequently used to make the decision, so it is imperative that these results are reliable and accurate. Validity concerns the appropriateness of what is being measured, whilst reliability concerns the accuracy of the measurement (Searle, 2003).
Validity is generally identified in four forms: face, content, construct and criterion-related validity. Face validity concerns the form of the selection test. For example, a test of verbal comprehension that contains only mathematical equations would not appear to measure what it sets out to (Searle, 2003). However, there is disagreement as to how far this can be considered a type of validity at all. Vernon and Parry (1949), in their well-known research on US army cook selection, found that despite the high face validity of the test used, which included recipes and method information, what was actually being measured was reading ability rather than cooking skill (Searle, 2003). For test-takers, face validity is important: having made the effort to apply for the job, they want to believe they have been assessed on something appropriate to the role. A potential dilemma with this method is that some test-takers may, based on the appearance of the test, form their own idea of what is actually being assessed and distort their responses accordingly.
Content validity relates to the adequacy of coverage of a conceptual domain (Searle, 2003). It is frequently found in ability tests in which a test-taker is asked to demonstrate their ability in a specific subject. Apart from face validity, it is the only form of validity based on logical rather than statistical information (Searle, 2003). The fundamental concern is sufficient coverage of the domain. As a result, this form of assessment is often constructed by a panel of experts to ensure sufficient breadth of coverage (Searle, 2003), although two potential problems remain: content under-representation and construct-irrelevant variance.
Cronbach and Meehl (1955) first established the concept of construct validity when they suggested that underlying each test is a construct that is being assessed (Searle, 2003). Construct validation assumes that anything can be defined and measured. We cannot read someone’s intelligence from a meter; a hypothetical construct defining what intelligence is must first be created in order to measure it (Searle, 2003). There has been criticism of this as a basis for measurement within the human sciences: Stevens (1946) argued that the null hypothesis is hardly ever taken into account, and Kline (1998) also critiques this measurement issue. A key concern of test-developers is to show the relationships between their instrument and other established tests assessing a similar domain.
Criterion-related validity is the final form and relates what is being measured to an external criterion (Searle, 2003). It focuses on external measures, such as job success, establishing the relationship between the predictors (results from the selection methods used) and the criterion (performance on the job). The significant issue with this form of validity is the adequacy of the identification and assessment of the external standard (Searle, 2003). Frequently the external measure is chosen for its convenience rather than its relation to the dimension to be assessed (Murphy, 2000), creating a potential difficulty. Criterion validity can be assessed in two distinct ways: predictively or concurrently.
The ‘pure’ method (Bach, 2005) of establishing this relationship is to measure applicants during selection and, based on the methods used, predict future performance; this is predictive validity. Applicants are not chosen on this basis; rather, either all applicants or a cross-section (both good and bad predicted performers) are taken on. After a period on the job, performance is measured and a correlation is established between the selection method’s prediction and the job performance criterion measure. The aim here is to avoid ‘false negatives and positives’ (Bach, 2005). Practical difficulties arise with this process of validating selection methods, such as the need to obtain results from a fairly large number of individuals. A more obvious problem, however, is the reluctance of decision-makers to agree to employ individuals who are predicted to be poor performers.
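The statistical core of this predictive design is a simple correlation between scores gathered at selection and performance measured later on the job. A minimal sketch, with all figures invented for illustration:

```python
# Hypothetical illustration of predictive validation: correlate scores
# gathered at selection with job-performance ratings collected after a
# period on the job. All numbers below are invented.

def pearson_r(xs, ys):
    """Plain Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

selection_scores = [52, 61, 47, 70, 58, 66, 43, 75]      # at selection
performance = [3.1, 3.8, 2.9, 4.2, 3.5, 4.0, 2.7, 4.4]   # months later

validity = pearson_r(selection_scores, performance)
print(f"predictive validity coefficient r = {validity:.2f}")
```

A coefficient near 1 would indicate that the selection method ranks candidates much as the job later does; observed validities in real studies are far more modest.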
The concurrent method of validation is sometimes used to avoid this difficulty. The assumption is that existing employees demonstrate variable job performance; if a new selection method can discriminate between good and poor performers among them, it should discriminate in the same way between applicants. There are, however, problems with this approach. The motivation of current employees differs from that of candidates, which may affect scores, since candidates are likely to try harder. Current employees are also a restricted sample: having already been selected by some method, they may on average be better than the average candidate. Finally, the design does not prove that differences in, say, team skills, as measured by a group exercise, were evident prior to employment; they might have been learnt by employees as a by-product of their work.
When establishing the value of a test, the development of validity is central, as it provides an indication of the strength of the relationship connecting the tool and a criterion (Searle, 2003). Newer statistical processes such as meta-analysis (validity generalisation), pioneered by Schmidt and Hunter (1996, 1998, 1999), have revolutionised selection testing. They argued that although validity does differ with context and role, it is nonetheless moderately stable. On the basis of this claim, selection tools could be moved across a variety of circumstances and roles and still maintain their predictive validity. The possibility of using such tools rather than developing expensive bespoke instruments brought about the potential for huge savings for organisations. However, validity generalisation theory is not without its critics, and there are underlying problems with this approach (Searle, 2003). Meta-analysis is based on the collection and re-analysis of comparable studies of tools, such as the situational interview. The current application of meta-analysis removes the possibility of understanding why situational differences emerge; it prevents us from identifying what makes a situation unique. Organisations currently operate in turbulent global environments, and evidence suggests that there are important relationships among task type, technology and the external environment that meta-analysis studies do not help us explore. As a result of the dominance of meta-analysis, selection designs are not being improved to help organisations in these contexts.
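The basic pooling step behind validity generalisation can be illustrated as a sample-size-weighted average of local validity coefficients; a full Schmidt-and-Hunter analysis would additionally correct for criterion unreliability and range restriction. The study figures below are invented for the sketch:

```python
# Sketch of the first step of a validity-generalisation meta-analysis:
# pool observed validity coefficients from several local studies into a
# single sample-size-weighted estimate. All study figures are invented.

studies = [
    # (sample size n, observed validity r)
    (68, 0.21),
    (120, 0.33),
    (45, 0.18),
    (200, 0.29),
]

def weighted_mean_r(studies):
    """Sample-size-weighted mean of observed validity coefficients."""
    total_n = sum(n for n, _ in studies)
    return sum(n * r for n, r in studies) / total_n

pooled_r = weighted_mean_r(studies)
print(f"pooled validity estimate = {pooled_r:.3f}")
```

Larger studies dominate the pooled estimate, which is precisely the critics’ point: the averaging discards whatever made each local situation distinctive.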
A test might produce a measure that is valid for one person, but the results may not be reproducible for another. This raises the issue of reliability, which concerns the accuracy and consistency of a method (Bach, 2005). Reliability is increasingly becoming a legal requirement for selection tests, yet according to Bach (2005) very few organisations systematically assess the reliability and validity of the selection methods they use. When psychometric tests are used, for example, there is a tendency to rely on the evidence on reliability and validity presented in the test manual, based on meta-analysis research (Bach, 2005). Establishing the reliability of a selection tool involves three main elements: stability, consistency and equivalence of the results (Searle, 2003). Hermelin and Robertson (2001) divided different selection methods into three categories of high, medium and low validity. High-validity methods included structured interviews and cognitive ability tests; medium included biographical data, unstructured interviews and integrity tests; low included personality scales measuring the ‘big five’. Unfortunately, evidence suggests that the methods with the highest validity are not always the most popular; rather, most organisations rely on the classic trio of short-listing, interviewing and references (Cook, 2003; Millmore, 2003).
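Of the three elements, the consistency of a multi-item instrument is commonly indexed by Cronbach’s alpha. A minimal sketch, using invented scores for six respondents on a five-item scale:

```python
# Illustrative sketch (invented data): internal consistency of a
# five-item selection scale via Cronbach's alpha, a common index of
# the 'consistency' element of reliability.

def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(item_scores):
    """item_scores: one list per item, each holding respondents' scores."""
    k = len(item_scores)
    sum_item_var = sum(variance(item) for item in item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]  # per respondent
    return (k / (k - 1)) * (1 - sum_item_var / variance(totals))

items = [
    [4, 3, 5, 2, 4, 3],
    [4, 3, 4, 2, 5, 3],
    [3, 4, 5, 2, 4, 2],
    [4, 2, 5, 3, 4, 3],
    [5, 3, 4, 2, 4, 3],
]
alpha = cronbach_alpha(items)
print(f"Cronbach's alpha = {alpha:.2f}")
```

Values above roughly 0.7 to 0.8 are conventionally taken to indicate acceptable internal consistency, though the appropriate threshold depends on the stakes of the selection decision.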
Research into these initial selection stages is unbalanced, with far more work examining the organisation-led application process (in particular the role of biographical data) than the impact of applicants’ CVs (Searle, 2003).
The selection process typically begins with the candidate formally demonstrating their interest in the open job role, normally by putting forward their CV or completing an application form (Searle, 2003). This is commonly the first contact between potential employer and candidate, and as most applicants are selected out of the process at this stage, the CV, or resume, is a primary tool for the applicant in the selection process. Resumes also play an important role in the two-way selection process. For candidates, they represent a valuable chance to market themselves positively and to impress the reader with their skills, knowledge and abilities (Searle, 2003). For the employer, they are the foundation on which short-listing decisions are made. The use of competency statements, however, can potentially create a false impression: Bright and Hutton (2000) highlight that such statements are difficult to verify in the way that qualifications can be. Given its apparent significance, however, the research regarding the validity and reliability of resumes in the selection process is modest.
To gather information in a standardised way, organisations may prefer applicants to complete a specific application form. Shackleton and Newell (1991) found in their study that 93 per cent of organisations in the UK used application forms. Now that technology has significantly advanced, many UK organisations, in particular those dealing with high volumes of applicants, use online application forms. In addition to gathering personal information, these also make available information about candidates’ experiences. In this area selection practice is ahead of research, so although claims are made about increased access to jobs, the new internet medium may be overrated (Searle, 2003). It does, however, enable a more cost-effective short-listing process (Ployhart et al., 2003), though how far this is free from discrimination remains to be seen.
Interviews are one of the oldest, yet most popular, tools used in selection; virtually all employers use interviews for all categories of staff (Bach, 2005). Interviews enable several important assessments to be made, and evidence from Robertson and Smith (2001) indicates that they have high predictive validity regarding future job and training performance. They offer an opportunity for direct experience of a candidate’s behaviour, coupled with the potential to ask more probing questions regarding underlying cognitive, motivational and emotional issues. Employers are, however, increasingly aware of their limitations and are being more careful, using a variety of complementary selection techniques for some groups, including graduates. Two central theoretical perspectives are taken on the interview: the objectivist psychometric perspective and the subjectivist social-interactionist perspective. The objectivist psychometric perspective places the interview at one extreme, considering it an objective and accurate means of assessing an applicant’s suitability for a job. From this perspective, the process places the interviewee as a passive participant who provides relevant information about their experiences and capabilities. The interview is thus reduced to a verbally administered psychometric test, with concerns of structure, reliability and validity predominating. First, the interviewer is regarded as a rational decision-maker, capable of collecting, in an impartial manner, information on a number of relevant selection criteria; implicit in such a process is the interviewer’s ability to obtain relevant data accurately. Second, it is assumed that they have the skills to interpret the information accurately, relate it impartially to the criteria and assess the candidate’s suitability based on the sample of behaviour provided. This perspective tends to dominate the field.
Much of the research has examined how the validity and reliability of the process can be maintained. Inevitably the focus rests on the interviewer as a potential corrupter of an otherwise objective tool. The interviewer’s role in producing and perpetuating bias has been the main area of interest, and until recently there has been limited effort to question the candidate’s motivation to present accurate information, or their role in contaminating the interview.
The alternative perspective places the interview at the other extreme. It considers the process to be a social interaction in which a subjective, socially negotiated exchange occurs. In this perspective, a far more evenly balanced dynamic emerges between the parties, both having equal power in the situation; they are considered to become participant observers in the process. The interview thus emerges as a complex and unique event. In the selection context, those involved are engaged in creating a variable psychological contract regarding their mutual expectations of future working relationships, and the importance of the psychological contract at the onset, and its maintenance throughout the employment relationship, cannot be overstated (Rousseau, 2001). Herriot (1987) argued that this interactive and social perspective is important because it places the applicant as a far more active player in the negotiation process. This is particularly relevant in a job market in which the applicant’s skills and experience are in short supply, or important to the organisation; under these conditions, the applicant plays a key role in dictating the terms and conditions under which they will be employed. From this perspective, each interview is potentially unique because of the players involved, with the parties creating a particular process that emerges from their current context. The key research issues here concern the type of psychological contract reached, bias and fairness. Like the objectivist perspective, this approach is also concerned with the future, but not with job performance; instead, a focus might explore what happens if the contract being negotiated is violated.
The single issue that has received most attention in research on the interview is the amount of structure, ranging from unstructured to structured. Traditionally, interviews classed as unstructured generally consisted of a discussion between the applicant and recruiter with no pre-set topics. An early study by Kelly and Fiske (1951) highlighted negative evidence suggesting there is little consistency or reliability in unstructured interviews. According to Bach (2005), unstructured interviews are poor predictors because the information ‘extracted’ is different for each individual and differs between interviewers, so comparisons between candidates cannot be made reliably. With different questions being asked of each candidate, it is almost inevitable that subjective biases make the interview both unreliable and invalid. However, this form of interviewing provides, at its best, a surrogate measurement of the candidate’s social skills (Searle, 2003). The term ‘structured’ interview can cover a wide range of processes. According to the objectivist perspective, the structured interview process focuses on the interviewer asking a pre-set sequence of questions aimed at eliciting information relating to pre-determined criteria. The purpose of the structure is to close the process to any extraneous influences, so that even when different interviewers are involved, the same data are gathered, thereby providing a means of comparing the candidates. As a result, the process of delivering the questions is standardised. Research has shown that increasing the structure of the interview significantly increases predictive validity, and organisations are responding by using more structured interview approaches (Taylor et al., 2002). The subjectivist perspective, however, instead regards the interview as a two-way process in which the actions of each party inform and shape the actions of the other.
From this perspective, attention shifts towards understanding the very process of the interview, which emerges as an ongoing exchange, informed and transformed by those involved. Typically the interview is the first time the interviewer meets the applicant. The recruiters are presenting an image of the organisation in terms of its standards, values, expectations, ambitions and goals. The interview is therefore a public information exercise, providing candidates with valuable data that will assist them in deciding whether to accept the job if it is offered.
While structured interviews can certainly be beneficial, their usefulness depends on the specific context. Where jobs are highly prescribed, knowledge about how the work needs to be carried out is established, and there is clarity about what constitutes good performance, structured interviews are preferable because prediction is possible and they are better predictors. However, when an organisation is competing in a turbulent environment and there is uncertainty about what is required of individuals, a less structured approach may be more appropriate. Over-structuring can be a problem: in an unstructured interview, the interviewer can provide more realistic information about the job, and the candidate is able to ask questions relating to his or her personal needs, values, interests, goals and abilities. Through this process, applicant and interviewer can negotiate a mutually agreeable ‘psychological contract’ (CIPD, 2009). The unstructured interview can also operate as a preliminary socialisation tactic, with the applicant learning about the culture and values of the organisation (Dipboye, 1997).
At the heart of psychometrics lies the assumption that people differ from one another, for instance in terms of friendliness, determination and ability to use mathematical concepts, and that these differences can be measured. It is assumed that when these different aspects are measured, they relate to actual behaviour; that is, they relate an external event (a behaviour) to an internal cause (a trait). Psychometric tests aim to quantify key aspects of individual difference related to work, notably ability and personality, and suggest a relationship between these two and motivation. Essentially two types of test can be distinguished: cognitive/ability tests, which assess an individual’s intellectual abilities either in terms of general intelligence or specific abilities; and personality tests, which assess an individual’s general disposition to behave in a certain way in certain situations (Bach, 2005).
The seminal work on the use of cognitive tests in selection was undertaken by Hunter and Schmidt (1990). Using meta-analysis, the researchers were able to demonstrate that although the many studies on the predictive validity of tests appeared inconsistent, when adjustments were made for various factors the results were in fact consistent and showed that cognitive tests were valid predictors in a wide range of job situations. Such tests are simple to administer and score, although the person using them needs to be properly trained. There are, however, limitations. First, for most jobs the range of intelligence of those applying is likely to be quite restricted (it is rare for a person with an IQ of 140 to apply for a caretaker’s job); the consequence is that a measure of cognitive ability may not differentiate much between the various candidates. Second, cognitive tests can be biased against certain groups. For example, it is well documented that black Americans tend to score lower than whites on tests of cognitive ability, and women tend to score higher than men on verbal ability. This raises social and ethical issues which need to be considered when selecting particular tests.
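The range-restriction point can be made concrete with a small numerical sketch (all scores invented): the same underlying ability–performance relationship produces a much weaker observed correlation once only high scorers are considered.

```python
# Sketch of range restriction (invented scores): the correlation between
# ability and performance drops sharply when only the top of the ability
# range is observed, even though the underlying relationship is unchanged.

def pearson_r(xs, ys):
    """Plain Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

ability = [85, 90, 95, 100, 105, 110, 115, 120, 125, 130]
performance = [3.6, 3.3, 4.0, 3.7, 4.5, 4.1, 4.9, 4.3, 5.2, 4.8]

full_r = pearson_r(ability, performance)

# Keep only applicants with ability >= 115: a restricted applicant pool.
restricted = [(a, p) for a, p in zip(ability, performance) if a >= 115]
restricted_r = pearson_r([a for a, _ in restricted],
                         [p for _, p in restricted])

print(f"full-range r = {full_r:.2f}, restricted-range r = {restricted_r:.2f}")
```

This is why a cognitive test that predicts well across the whole population may barely differentiate among the narrow band of people who actually apply for a given job.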
In most UK selection situations, personality measures are of the self-report type. There is considerably more controversy over the use of personality measures than over cognitive tests; some argue they are totally useless (Blinkhorn and Johnson, 1990). Research has shown that personality measurement can be useful, but only when specific personality constructs are linked to specific job competencies (Tett et al., 1991; Robertson and Kinder, 1993). Much of this work is based on the ‘Big Five’.
One problem with research on personality measurement has been that very different systems of personality description have been used, making it difficult to compare results. There is now a growing consensus around the five-factor model of broad traits (Goldberg, 1993) and the use of Costa and McCrae’s (1992) personality inventory, which measures these five factors. Researchers have also explored the reasons for the links between personality traits and job performance; for example, openness to experience appears to be related to training success (Cooper and Robertson, 1995).
However, it is unlikely that personality tests alone will be good predictors of future job behaviour. Job situations often present strong situational pressures which mean that differences between individuals’ behaviour are minimised, and the same job can often be done in very different, but equally successful, ways by individuals with different personalities. This does not mean that personality measures have no place in the selection process, but it raises the question of how such measures are best used within this context. Defining a personality profile and dismissing candidates who do not fit it is not good practice; obtaining measures of personality and using these as the basis of discussion during an interview, however, can be helpful.
Occupational testing – occupational tests are measurement tools used in the world of work. They involve examining a standard sample of behaviour that can be expressed on either a numerical scale or a category system (Cronbach, 1984). Test items are chosen specifically for their relevance to the domain of interest, for example percentage computation or word recognition. There is also an effort to standardise the delivery of the tools, ensuring that candidates have the same test experience so that the only variable is their mental process. Tests used in an occupational context can be divided into two distinct groups, typical and maximal, based on the type of behaviour they are designed to measure.
Typical behaviour tests – the purpose of typical behaviour tests is to identify the direction of a person’s interests and suggest types of jobs associated with these areas. Personality and interest-assessment tests used in career guidance are examples of typical behaviour instruments. However, it should be noted that they do not measure the level of skill that might be associated with this vocational choice.
Maximal performance – these tests are designed to assess ‘maximal’ behaviour. They aim to find out what is the best the test-taker can do (Kline, 1998). Nonetheless, it has been argued that it is naive to make such a simplistic distinction between maximal and typical performance, as it artificially separates the measurement of affect and intellect and their combined relationship to performance (Goff and Ackerman, 1992). Measures concerned with maximal performance can be subdivided into three distinct types: attainment, aptitude and general intelligence.
Psychological tests play an important role in selection practice. They offer organisations a means of discriminating between large numbers of applicants in a rapid and often cost-effective manner. Moreover, their power in predicting subsequent job performance is amongst the highest of any selection tool (Robertson and Smith, 2001). Through the growth of instruments such as organisational-fit questionnaires, different attitudinal and trait assessment measures and novel ability tools, the range of psychometric tools available to organisations is increasing.
Although there is an increasing use of psychometric tools in HR selection and recruitment decision-making, the method is contentious. Ethnic group differences in intelligence test results reflect the ethnic divide that exists in the distribution of rewards and sanctions in wider society (Gordon, 1997). Some argue that high intelligence quotient (IQ) scores are not important; rather, what is significant is the identification and assessment of specific cognitive skills that are linked to job performance (Hunt, 1999). This latter group of more focused cognitive assessment tools can have a significant impact in organisations, revealing how close an applicant is to the requisite skill level and estimating how much training the applicant needs to reach an acceptable standard.
Psychometric tests will always be open to abuse, as they offer a potential means of legitimising discrimination by those in power and authority. The underlying issues of test production, and the assumptions that underpin psychometrics, reveal how social values and prejudice can have an impact on the development, application, analysis and interpretation of results. Whilst some may feel comfortable reducing the value of human beings to an empirical value, others see humans in terms of their potential, regardless of the social context in which they find themselves.
A critical issue underlying any test is the definition of the domain. Tests are often devised on an atheoretical basis, or use the same term to mean different things. It is important that test-users demand an adequate conceptual rationale for a test. Construct validity is key here; nevertheless, it is often weakly developed or ignored. Without attention to this core issue, psychometrics will fail to offer any meaningful assessment; instead, intelligence will be what intelligence tests measure, not what intelligence actually is.
Assessment centres (Bach, 2005).
An assessment centre is not a single selection method, nor a place. The term refers to the use of a number of different selection methods over a specified period, allowing multiple assessors to assess many candidates on a range of identified competencies or behavioural dimensions. The core element is the simulation of actual work tasks in order to observe job-related behaviours (Cooper and Robertson, 1995).
For managerial jobs, in-tray exercises and group decision-making exercises are common. An in-tray exercise provides the candidate with a range of correspondence (memos, letters, reports), and he or she is required to make decisions in order to prioritise and deal with the various problems in the material under a tight schedule; it is used to assess an individual’s planning and problem-solving abilities. In a group decision-making exercise, small groups discuss a particular problem and must come to a consensus or solve the problem; problem-solving abilities may be assessed, but also interpersonal and leadership skills.
There is increasing evidence of their limitations. Jones et al. (1991) concluded that, despite the validity of the different components of an assessment centre (AC), overall AC validity was surprisingly low. The key problem appears to be that managers, acting as assessors, are unable to assess cross-situational abilities accurately from the different exercises. So while managers are required to rate candidates on different competencies for each exercise, these ratings appear to be defined by the candidate’s overall task performance on the particular exercise, rather than by the specific behaviours demonstrated in the activity (Iles, 1992). A number of studies have demonstrated low correlations between overall assessment ratings and a variety of criterion measures of on-the-job performance (Payne et al., 1992).
Despite this negative evidence, two important points should be made. First, designing and developing an AC has the potential to improve the validity of selection, but simply putting together a series of exercises and running them over two days using a group of untrained assessors does not guarantee that decisions will be improved. For example, Gaugler et al. (1987) found that the validity of ACs improved when a larger number of exercises was used, when psychologists rather than managers acted as assessors, when peer evaluation was included as part of the assessment process, and when the group of assessors contained a larger proportion of women. Second, many of the problems identified with ACs need to be looked at from a broader perspective than simply criterion-related validity. A key benefit of using an AC is that it gives the potential recruit an extended opportunity to find out more about the organisation; in particular, many of the activities are simulations of the kind of work involved. A mutually beneficial negotiation can take place if both parties know more about each other, though this requires the adoption of an exchange rather than a psychometric view of the recruitment and selection process.
Recruitment and selection: Limitations of the psychometric approach
As noted earlier, adopting a more systematic approach to recruitment and selection to reduce bias and errors is useful. Yet, ironically, it could be argued that globalisation and organisational requirements of flexibility, innovation and commitment make these ‘best practices’ somewhat problematic, suggesting the need for an entirely new perspective on recruitment and selection. First, considering the degree of change, organisations now require ‘generalists’ rather than ‘specialists’ to take on a variety of different roles requiring a range of skills and competencies. Even when an individual is recruited for a specific position, it is highly likely that the job role will change. Therefore the ‘best practice’ prescription of conducting a thorough job analysis to identify the task and person requirements of a particular job may be difficult or inappropriate: there is no fixed ‘jigsaw hole’ to fill.
Secondly, alongside flexibility is the need for innovation. Identifying opportunities for change and designing creative solutions is crucial for the survival of many organisations; it is about encouraging people to think differently. Following ‘best practice’ guidelines leads to selection on the basis of whether candidates can do particular jobs efficiently and whether they fit the organisational culture. Rather than encouraging innovation, traditional selection approaches may therefore stifle creativity.
Thirdly, organisations now operate on a global rather than a national level. Considering the array of cross-national differences, it is unlikely that organisations will be effective if they simply try to replicate their home-base operation abroad (Bartlett and Ghoshal, 1989). Managing this diversity requires the recruitment and selection of people from different backgrounds, with different experiences, at all organisational levels. However, job analysis is backward-looking: if current job holders are all of the same race or nationality, individuals from different backgrounds may be excluded because they do not fit the existing profile of a competent employee. Also, during selection, candidates from different backgrounds may respond differently, putting them at a disadvantage and again reducing their chances of being selected (Shackleton and Newell, 1991).