Fingerprint Evidence: The Challenge to Establish Its Validity



Fingerprints are the impressions of ridged skin on the surfaces of the fingers and palms, and are commonly encountered at crime scenes in a distorted state, referred to as latent prints. Their prime function is in criminal cases, owing to the belief that "no two individuals have the same fingerprints" (Galton, 1983). Historically, they first appeared in a criminal court in Argentina in 1892 (Rodriquez, 2004), and many successful convictions have since rested on such evidence. Internationally, they are regarded as an "unusual symbol for truth" (Spector, 2002).

The lawsuit of Daubert v. Merrell Dow Pharmaceuticals (1993) was a landmark for fingerprint evidence in American courts: it brought into practice new standards for scientific evidence and established that expert witnesses could no longer testify simply on the basis of what was "generally accepted" to be true in their field (Farrell, 1993). Judges were instructed to abide by new benchmarks, focusing on whether the evidence has a known or potential error rate and whether the technique can be proved false, two questions that have raised serious issues for the validity of fingerprint evidence (Giannelli, 2002).

Fingerprints have often been linked to the wider discourse of how forensic science fits in with the legal system; with this in mind, the essay will work through both scientific and legal examples. It will examine the validity of the current methodologies of analysis, the role of secondary examiners and expert witnesses, and finally make suggestions towards new approaches in the field of fingerprint evidence.


There is often confusion between validity and reliability (Cole, 2006). A reliable method provides consistent results at every comparison of the same set of latent prints, even though those results may be wrong. A valid method, however, routinely produces the true answer (Haber and Haber, 2008). The two are linked in one respect: if a method generates deviating results (including incorrect results) every time it is used, it can be concluded that the method is invalid (Haber and Haber, 2008).

The admissibility of latent prints under Daubert was first questioned in the case of United States v. Mitchell (1992). It was not the uniqueness of the print itself, but rather the scientific reliability of what is claimed from a "distorted fragment", that was put into question. Further cases followed, such as United States v. Llera Plaza (2002), in which the US had little success in its attempt to "identify the scientific testing that tended to establish the reliability of fingerprint identification".

Scientific Methodology of Fingerprints.

The procedure by which fingerprints are brought forward as evidence has changed dramatically over the years. What originally began as a comparison of prints against hardcopies (Spector, 2002) now relies heavily on databases. The FBI's Integrated Automated Fingerprint Identification System (IAFIS) has significantly affected the credentials that fingerprints hold (Nandakumar et al., 2009). IAFIS uses computer algorithms to match a number of points on the ridges of the input fingerprint. Competency tests have shown that the first run for matches against an input print has an accuracy rate of 99.97% (Komarinski, 2005), indicating the small percentage that may still be a cause for concern when determining validity. It is after this first pass that fingerprint examiners (hereafter, examiners) compare the minutiae, until they judge that enough of them correspond to confidently declare a match (Feng, 2008).
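The minutiae-comparison step described above can be sketched in code. This is not the IAFIS algorithm, only a minimal illustration under assumed conventions: each minutia is reduced to an (x, y, angle) tuple, and a correspondence requires both spatial proximity and orientation agreement within arbitrarily chosen tolerances.

```python
from math import hypot

def count_matching_minutiae(latent, reference, dist_tol=10.0, angle_tol=15.0):
    """Count latent minutiae that have a corresponding reference minutia.

    Each minutia is an (x, y, angle) tuple; a correspondence requires
    spatial proximity and a similar ridge orientation. The tolerances
    are illustrative, not IAFIS parameters.
    """
    matched = 0
    used = set()  # each reference minutia may pair with at most one latent minutia
    for x1, y1, a1 in latent:
        for i, (x2, y2, a2) in enumerate(reference):
            if i in used:
                continue
            if hypot(x1 - x2, y1 - y2) <= dist_tol and abs(a1 - a2) <= angle_tol:
                matched += 1
                used.add(i)
                break
    return matched

# Toy prints: three minutiae agree within tolerance, one does not.
latent = [(10, 10, 90), (50, 40, 45), (80, 90, 120), (200, 200, 0)]
reference = [(12, 9, 92), (48, 43, 40), (79, 92, 118), (150, 150, 60)]
print(count_matching_minutiae(latent, reference))  # 3
```

In this caricature, an examiner's "enough to declare a match" judgement reduces to comparing the returned count against some threshold; the essay's point is that in reality both the threshold and the judgement itself are not standardised.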

One of the UK's most famous cases of this kind was that of Shirley McKie (1997), a police officer who was arrested for perjury following her denials at a murder trial, after her prints were found at the crime scene associated with the investigation (Charlton et al., 2010). Following this, a fingerprint inquiry report was commissioned to review the flawed fingerprint analysis that had been used as evidence against her (Geddes, 2011). The report concluded that human error was the cause of the fingerprint fallibility. Advancements in fingerprint technology such as IAFIS have raised precision levels in analysis; these, however, cannot compensate for the psychological and cognitive elements introduced by human experts (Dror and Charlton, 2006). These forms of limitation are an "inherent domain" in fingerprint individualisation (Dror and Charlton, 2006), raising the question of whether validity can be established when such epistemological problems are present.

Arguably the most difficult obstacle to establishing complete validity of fingerprint identification is that the core steps towards the decision of identification are developed in the examiner's head (Thompson and Cole, 2005). The cognitive steps by which examiners arrive at their results are not documented (Davies and Hufnagel, 2007), so it becomes difficult to reconstruct and determine why they reached a wrong conclusion and to exclude the fingerprint from the source (Cole, 2005).

Studies by Dror and Charlton (2005) found that inconsistencies amongst experts were prevalent in past individualisation results, with participants changing their outcome from exclusion to individualisation, demonstrating how susceptible results are to the changing minds of experts and their varying decision-making thresholds. Obstacles such as these can be minimised through training (Cole, 2005) but can never be eradicated.

The process of identification is 'not discrete, but rather a continuum of tasks' (Denbeaux and Risinger, 2003), and the validity of each stage of analysis must be established. The ACE-V fingerprint comparison method has been in use for the last century (Epstein, 2001) and is extensively relied upon by examiners. It encompasses four stages: Analysis, Comparison, Evaluation and Verification (Triplett and Cooney, 2006). A common misconception is that a method available for such a long period of time is unquestionable. Scientifically, however, this only provides "face validity" (Cattell, 1964) to the belief that fingerprint individualisation is definitive. Research by Haber and Haber (2006) has concluded through scientific evidence that the ACE-V method holds no validity, as examiners' final judgements differ, leading to invalid conclusions.
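The four ACE-V stages can be laid out as a toy pipeline. This is purely illustrative: in practice each stage is a human judgement, and the fixed rules below (a minimum of eight usable minutiae, a hypothetical twelve-point threshold) stand in for decisions that, as the text notes, are neither documented nor deterministic.

```python
def ace_v(latent_minutiae, reference_minutiae, threshold=12):
    """Caricature of the ACE-V sequence, with minutiae as simple labels."""
    # Analysis: reject latents with too little detail to work with.
    if len(latent_minutiae) < 8:
        return "insufficient"
    # Comparison: count corresponding minutiae (set intersection as a stand-in).
    in_agreement = len(set(latent_minutiae) & set(reference_minutiae))
    # Evaluation: identification, exclusion, or inconclusive.
    if in_agreement >= threshold:
        verdict = "identification"
    elif in_agreement == 0:
        verdict = "exclusion"
    else:
        verdict = "inconclusive"
    # Verification: a second examiner repeats the comparison. Here the
    # repeat is deterministic and always agrees -- exactly what the
    # essay argues real verification is not.
    return verdict

print(ace_v(list(range(14)), list(range(12))))  # identification
print(ace_v(list(range(5)), list(range(12))))   # insufficient
```

The deterministic verification step is the giveaway that this is a sketch: Haber and Haber's criticism is precisely that human examiners applying the same stages do not reliably reproduce one another's verdicts.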

Legal Methodology of Fingerprints.

Fingerprints can only be presented to the court as a form of evidence once a comparison has been established; the print itself does not hold any significance before this point. Distinctly, the method by which fingerprints are investigated is, in itself, a form of evidence (Cole, 2006), and therefore needs to be valid. A method is deemed valid if the way in which it is exercised produces 'agreements with ground truth' (Haber and Haber, 2008:88). For fingerprints this is the assurance that the latent and suspect's fingerprints originate from the same person, a conclusion often referred to as "individualisation" (Loftus, 1996). If any dissimilarity is observed between two prints, then a discrepancy can be declared. Forensic experts often strengthen their argument for the admissibility of fingerprints through the practice of "repeatable procedures" (Ulery et al., 2012), whereby a second examiner repeats the methodology of the first, 'who will then confirm or counter the original examiner's view' (Charlton, 2006). It can be argued that the presence of a second examiner ensures validity.

If validity is to be demonstrated through the accuracy of the method used, then the second comparison must be conducted by a "highly skilled examiner" (Haber and Haber, 2004). With this said, the scientific criteria under which an examiner should be accepted as "highly skilled" remain very much a subjective matter. To counter this problem, verification tests have been undertaken to provide evidence of validity (Haber, 2002). Even when the identification results of the two examiners agree, the question remains whether the agreement reflects a sound method or shared inaccuracies on the part of both parties (Cole, 2005).
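The value claimed for a genuinely independent second examiner can be put in elementary probability terms. Assuming, purely for illustration, that each examiner errs independently with probability e, verification cuts the rate of surviving errors to e squared; if the second examiner merely defers to the first, nothing is gained. The figure of 0.05 below is hypothetical.

```python
def surviving_error_rate(e, independent=True):
    """Probability that a wrong conclusion survives verification.

    If the two examinations are independent, both must err: e**2.
    If the second examiner simply defers to the first (non-blind
    verification), the error rate stays at e.
    """
    return e ** 2 if independent else e

print(round(surviving_error_rate(0.05), 4))         # if truly independent
print(round(surviving_error_rate(0.05, False), 4))  # if the verifier defers
```

This arithmetic is why non-blind verification, as in the Mayfield case discussed later, is problematic: agreement between dependent examiners adds little evidential weight.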

The indicative results of scientific methodology (such as those already discussed) allow a determination of evidence that is valid and reliable. In legal courts, however, the terms "reliable", "accurate" and "valid" are often used interchangeably (Giannelli, 1980:1201), arguably allowing 'reliable' results to be taken into consideration, and to be admissible in court, rather than those that are 'valid'. With this said, evidence can still be excluded on the grounds of Daubert, as seen in cases like Kumho Tire v. Carmichael (1999) and General Electric Co. v. Joiner (1997) (Epstein, 2002).

Role of the Examiner: Science

Owing to messy crime scenes, the distorted nature of latent prints makes them problematic, and they are an 'inevitable source of error when making comparison' (Zabell, 2005). Given the limited size of latent prints, examiners are often only presented with prints that contain fifteen or fewer ridge details (Epstein, 2002). The key question for authentication by examiners is how many distinct characteristics between suspect and evidential samples must be identified before a match can be confirmed (Mnookin et al., 2010). Galton points (Galton, 1892) are often used as points of comparison (Jain et al., 2006), but the number required varies worldwide: in Italy, the requirement is sixteen; in Australia and France, twelve; and in America, it varies between states (Spector, 2002).
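The variation in point standards can be captured in a small lookup using the figures reported above (Spector, 2002); the US entry is None because no single national figure exists.

```python
# Galton-point standards as reported by Spector (2002).
POINT_STANDARDS = {"Italy": 16, "Australia": 12, "France": 12, "USA": None}

def sufficient_points(country, points_found):
    """Does `points_found` meet the country's declared standard?

    Returns None where no uniform standard exists (e.g. the USA,
    where the requirement varies between states).
    """
    required = POINT_STANDARDS.get(country)
    if required is None:
        return None
    return points_found >= required

print(sufficient_points("Italy", 14))      # False: 16 required
print(sufficient_points("Australia", 14))  # True: 12 required
print(sufficient_points("USA", 14))        # None: varies between states
```

The same fourteen corresponding points would thus support a declared match in Australia or France but not in Italy, which is exactly the inconsistency the paragraph points at.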

Fingerprints have been in use for decades, yet there is no accepted worldwide consensus on the standards required of examiners. Despite the wide range of talent, training and experience that examiners hold (Ashbaugh, 1999), the training of the personnel who perform latent print analysis can vary between agencies, calling into question the validity of the verification step of analysis (Thebaud, 2003). This subjective decision is where the standards of forensic science are put into question (National Research Council of the National Academies, 2009).

This expert knowledge does not allow any form of validation testing, as it is difficult to show that an individual's knowledge base and "source attribution" are correct (Cole, 2006). Replication tests by examiners do not provide validation, but only show accuracy in their work (Broeders, 2006). Work needs to focus on intra- and inter-individual variability between examiners (Penn et al., 2007), which would allow confidence limits to be established, leading to a sound decision on how many similarities must occur for a match to be accepted. Even assuming that a new framework is put forward, there is no way of determining whether the new technique is accurate, and, more importantly, whether it leads to the truth; validity (the true answer) cannot be established (Risinger, 2007).
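The confidence limits called for here could, in principle, be estimated from proficiency-test data. A minimal sketch with hypothetical figures: a Wald interval for the proportion of trials on which an examiner's conclusion agreed with ground truth.

```python
from math import sqrt

def agreement_interval(agreements, trials, z=1.96):
    """95% Wald confidence interval for an examiner's agreement rate.

    A crude normal approximation, adequate only as an illustration of
    what 'confidence limits' for examiner performance would look like.
    """
    p = agreements / trials
    half = z * sqrt(p * (1 - p) / trials)
    return (max(0.0, p - half), min(1.0, p + half))

# Hypothetical proficiency test: 92 correct conclusions out of 100 trials.
low, high = agreement_interval(92, 100)
print(f"{low:.3f} to {high:.3f}")  # roughly 0.867 to 0.973
```

Only with intervals of this kind, computed separately within and between examiners, could a defensible similarity threshold for declaring a match be argued for.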


Work by Henry Faulds in 1880 was the original landmark for the use of fingerprints as a tool in the judicial system. He stated that 'prints found at the scene of a crime could also provide important negative evidence to help exonerate an innocent victim'. One of the many roles of forensic scientists is to act as an expert witness, deciding whether a print belongs to a suspect or not (Thomas et al., 2012), and with this come crucial outcomes, in that wrongful convictions can result (Dwyer et al., 2001), a trend that undermines validity. The Innocence Project has highlighted how internationally prevalent such cases are; in the US, for example, the case of Commonwealth v. Cowan (1997). Mr Stephen Cowan was convicted of shooting a police officer and sentenced to 35 years in a state prison. The case was unique in that the fingerprint match was confirmed by two examiners, in turn indicating certainty. Further testing took place in 2004, which concluded that the matches were a mistake, and that Mr Cowan had wrongfully spent 6 years in prison (Zabell, 2005).

Brandon Mayfield was a civil lawyer who was arrested in 2004 as a material witness in an assumed association with the terrorist bombings in Madrid (U.S. Department of Justice, 2006). Following an investigation by the Spanish authorities, it became apparent that the FBI examiners had erroneously linked Mr Mayfield to the prints (Langenburg et al., 2009). This case highlighted the role of the scientist as an expert witness. In an interview for the New York Times, one of the FBI officials defended the wrongful testimony on the basis of the 'quality of the latent prints' that 'the Spaniards had taken from the bag' (Heath and Brenton, 2004). What is observed here is the question of why the Spanish authorities were able to determine from the outset that the print did not match, indicating the clear variability between two scientists in the same field. The implications of flawed evidence go beyond the individual in question and onto the justice system as a whole (Loftus and Cole, 2004); it is therefore paramount that any claims brought forward by examiners in court are accurate.

"Uniqueness" often has a positive connotation among expert witnesses. The FBI in the Mayfield case demonstrated how the work of examiners is undertaken on the assumption that a person's fingerprints have 'unique identifiers that can be infallibly measured' (Jackey et al., 2001). This reflects a problem many expert witnesses face when presenting their evidence to court: giving wrong evidence. The common notion of fingerprints being "unique" cannot be scientifically accepted (Cole, 2001). The morphogenesis of friction ridge skin is what causes biological uniqueness (Babler, 1991), but to determine uniqueness requires a measurement to be conducted (Weirtheim, 2001), a study which has not yet been carried out. In courtrooms, however, "biological uniqueness" has been used as a ground of validity for identification matches, as seen in the case of People v. Gomez (2002), where it was concluded that, 'based on 100 years of research, everybody's fingerprints are unique and nature will never repeat itself.'

Within the courts, expert evidence has undergone scrutiny precisely because it has faced little or no questioning of its reliability (Epstein, 2001). Experts are aware of this, and their final judgments have been influenced by it (Law Commission, 2011); at times, however, they have been recalled for their evidence to be investigated for its reliability. The McKie re-investigation allowed the world's leading dactyloscopists to confirm there was no match between the latent print and McKie's reference print, and she was cleared (Broeders, 2006).

The ability of experts to reach any form of "certainty" holds no scientific base but is formed through years of training and experience (Giannelli and Imwinkelried, 1993). Despite this, even highly skilled, experienced and accredited examiners are not able to overcome "a distortion and a form of loss of information" (Brislawn, 1995), characteristics common to fingerprints, indicating clearly that the expertise upon which courts so heavily rely can be severely flawed. Fingerprint evidence that examiners deem less than certain is often not reported at all, without regard for its potential weight and relevance (Phillip et al., 2001). This hinders the truth, ultimately leading to reduced validity. Within the US, the state of Illinois was the first to address what rules should apply when deciding whether expert testimony should be permitted in the courts (Epstein, 2002). The Illinois Supreme Court took on the case of People v. Jennings (1911) and argued that the standards of expert testimony for fingerprints were loose, in that the "common experience and skills" of individuals were not sufficient.

Beyond experience, the validity of forensic evidence depends on the interpretation of scientific test results, requiring the expert to have an appreciation of the circumstances of the case in question (Dror and Hampikian, 2011). This form of contextual understanding has brought about bias and diminished the value of evidential fingerprints. Dror et al. (2005, 2006) were the first to address these psychological and cognitive influences. In laboratory studies, five experts were presented with fingerprints and advised that these were 'highly publicized erroneous identifications' (Stacey, 2005); they were then instructed to ignore all contextual information. In reality, what was actually presented to them were prints they had previously confirmed as individualisations. The study found that four of the five experts were vulnerable to the context, producing inconsistent results when presented with the same pair of fingerprints.

The Mayfield (2004) case shows contextual bias at its extreme. The initial examiner's investigation found a strong output from IAFIS; this, along with the high-profile media attention (Abramskiehn and Smith, 2009), created a presumption that Mayfield was guilty. More worrying, however, was the misjudgement made by the second examiner. The investigation was conducted non-blind (Giannelli, 2007), and the second examiner knew both that the initial examiner was a "highly respected supervisor" (Stacey, 2005) and that a positive identification had already been drawn. With this, it can be argued that an "agreement" was inevitable. That such errors were made by established agencies such as the FBI puts into question the confidence placed in fingerprint evidence and highlights the need for a protocol that minimises bias when interpreting test results (Krane et al., 2008).


It has become increasingly difficult for the validity of crime scene fingerprints to be accepted, owing to the growing number of inconsistencies observed in identification methodologies (Risinger, 2007). High-profile cases such as Mayfield (2004) and McKie (1997) bring forward opportunities to review the limitations of forensic science and to make recommendations for improving it. Fingerprints are part of a "rhetorical system" (Cole, 2003): work on the accuracy of examiners is limited, standards for matching prints are yet to be agreed upon, and research on the cognitive skill of experts is confined (Haber and Haber, 2008). Undeterred by all this, the current consensus that fingerprinting is a "science" makes it legally untouchable.

Scientifically, DNA typing is accepted as 'holding the highest degree of confidence' (Dixon Jr, 2009) over other forensic techniques and is known to be the most rigorous. For fingerprints to gain certainty and ultimately validity, moves towards a Scientific Review Committee should be made (Stacey, 2005). Building on this, systems should be set up to assess the quality and difficulty of the latent prints that are brought forward for examination, so that anything deemed "too difficult" can be evaluated before analysis begins. Legally, consideration needs to be given to how many pieces of evidence should be brought forward for a conviction to take place: are fingerprints alone ever sufficient? Forensic training should be provided for all individuals involved in the criminal justice system (Dror and Charlton, 2006), allowing them to make informed decisions.

If fingerprints can never overcome the difficulties that cause miscarriages of justice, their admissibility in the courts should be reconsidered, as their effects are ones that obstruct the validity of forensic science, damage the lives of individuals and undermine the justice system as a whole.