Analysis of the Three Mile Island Accident

Abstract

With the underlying belief that analysis of the Three Mile Island accident requires an anti-essentialist theoretical framework, this paper applies Perrow's Normal Accident Theory and Turner's Man-Made Disasters Theory to the case. Significant biases and inadequacies of each are exposed, leading to the proposal that the two theories be combined to provide a more effective socio-technical analysis. By considering the technology, its operators and the organisation as a single system prone to "socio-system" accidents due to its complexity, a more complete evaluation may be made. Additionally, the difficulty of evaluating the social causation factors of a socio-technical accident such as Three Mile Island is illustrated by considering paradoxes of operator performance inherent in the nuclear power plant industry.

Table of Contents

Abstract

Table of Contents

The Events of March 28, 1979

Theoretical Discussions of Three Mile Island

Perrow's Normal Accident Theory

Critique of the Normal Accident Theory

Turner's Man-Made Disasters Theory

A Proposed Compromise

Addressing the Context

Human Performance Factors Analysis

Training and Procedures

Stress

Personnel Selection

Conclusion

Appendices

Appendix 1 - The four failures as identified by the Kemeny Commission

Appendix 2 - Interaction / Coupling Table

Appendix 3 - Fault Trees of the Three Mile Island Accident

Appendix 4 - Centralization / Decentralization of Authority Relevant to Crisis

Appendix 5 - Hypothetical Stress-Performance Relationship


To date, research into information systems failures has been dominated by a positivist approach. However, due to its complexity, a thorough analysis of the Three Mile Island accident requires effective theoretical frameworks from the interpretivist school, which may be applied to such large-scale events. The benefits of taking an anti-essentialist view are espoused by Mitev, who states the need to "investigate how the technical and the social are combined and constructed" (Mitev 2000). This will be shown to be of particular relevance when considering the Three Mile Island case, as a multitude of factors interacted to create the crisis.

A presentation of the accident at the Three Mile Island nuclear power reactor in 1979 will begin this paper. Located on the Susquehanna River in Pennsylvania, the unit had been in operation for only three months prior to the incident. Resulting from a combination of relatively minor failures, the event sparked an interest in understanding the accident-potential of high-risk technologies, and the organisational structures that generate or enhance it.

Grounded in the belief that "any major industrial accident should be treated as a complicated socio-technological event" (Britkov and Sergeev 1998), I will not attempt to analyse the details of the case directly, but will instead consider two interpretations of the Three Mile Island accident, which aim to explain its origin. These accounts make use of conflicting theories; namely Charles Perrow's Normal Accident Theory and Barry Turner's Man-Made Disasters Theory. The strengths and biases of each will be focused upon, before proposing an amalgamation that is believed to more comfortably accommodate a comprehensive socio-technical analysis than either independent theory.

While a detailed technical analysis of the reactor falls beyond the limits of my scientific knowledge, a consideration of the suggested, complementary social analysis will be presented. Through an examination of the single dilemma of enhancing nuclear power plant operator performance in emergency situations, an indication of the inherent contradictions of human factor analysis is provided. For this, support is drawn from Normal Accident Theory. This difficulty supports the proposed need for an anti-essentialist, socio-technical approach to failure analysis.

The Events of March 28, 1979

On the fateful night of March 28, 1979, the night shift of TMI Unit 2 was faced with a problem of enormous magnitude. While it was impossible to imagine the extent to which the events would alter the future of nuclear power in the United States of America, the operators struggled to manage an immediate threat: nuclear meltdown. Such a catastrophic event occurs when the induced fission of the Uranium-235 bundle is not sufficiently controlled and temperatures are reached which exceed the melting point of its containing structure. As a result, the bundle bores through its containment and into the ground below, destroying the protective layer that prevents nuclear radiation leakage into the environment.

In the presentation of any case study, the author must carefully consider the depth of description that is optimal to explain the situation at hand, while avoiding the introduction of unnecessary complexity for the reader. In this vein, I hope that the following presentation will be effective. It represents an amalgamation and summary of several accounts (Jaffe 1979, NRC 2002, Stephens 1980, Stern 1979, Tew 1979), intended to provide validation and support of the official documentation of the accident. The incident at Three Mile Island was a typical common-mode failure in that it was the result of a combination of failures caused by a single event (Ford 1981). It was this interconnectedness that significantly complicated efforts to rectify the situation.

The series of events began at 04:00:37 on March 28, 1979, when the main feedwater pump stopped supplying water to the reactor cooling system. During routine maintenance, operators had been attempting to dislodge clogged resin from an isolated polisher, which filters the water supplied to the feedwater pump; this caused a loss of cooling water, and the cooling system shut down due to the resulting low water pressure. Consequently, no heat could be extracted from the primary system containing the nuclear fuel: under normal operating circumstances, water in the cooling system's steam generator is exposed to the high temperatures of the primary system, causing it to turn to steam, which drives the turbine and, in turn, powers the electricity generator.

A significant human or administrative error had occurred when the emergency feedwater system was tested two days previously. During such tests, two valves must be closed and re-opened; on this occasion, they were left closed, rendering the emergency cooling system useless until the error was discovered and further escalating the emergency when its use was demanded. In the first example of operator involvement, two indicators on the control panel showed that these valves were closed, but the operators did not check them, assuming that the Automatic Safety Device (ASD) had functioned appropriately. Possibly preventing this from being noticed, one of the two lights happened to be covered by a repair tag hanging on the switch above it. The operators' attempts to manage the complicated control panel prevented it from displaying the necessary information effectively, raising issues of human-computer interface design.

As little steam could then be produced, the turbine stopped automatically. The reactor continued to run, but with no heat being extracted once the initial store of cooling water in the steam generator was exhausted, pressure built up quickly. When a high level was reached, the reactor shut down automatically. With the bundle no longer undergoing induced fission, but only natural decay, no significant additional heat was being produced. However, the heat which had already accumulated needed to be removed. The high temperatures in the airtight primary system caused an increase in pressure which, without amelioration, could mount and cause an explosion. As an ASD, a pressuriser Pilot-Operated Relief Valve (PORV) opened. When a satisfactory pressure level had been reached, the valve was meant to close. Unfortunately, this did not occur, and the pressure within the reactor continued to decrease. The indicator on the control panel was misinterpreted to suggest that the valve had closed. The operators were unaware that this light indicated only that the valve had been commanded to close, not that it had in fact done so, and no operating procedures demanded a manual check. The open valve allowed a continuous loss of coolant water until it was eventually closed.
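The interface flaw at the heart of this step, an indicator wired to the close command rather than to the valve's sensed position, can be illustrated with a small sketch. This is a toy model only; the class and attribute names are hypothetical and are not drawn from the actual plant instrumentation.

```python
class PilotOperatedReliefValve:
    """Toy model of the indicator flaw: the panel light reflects the
    command sent to the valve, not the valve's sensed position."""

    def __init__(self):
        self.commanded_open = False
        self.actually_open = False   # true mechanical state

    def command_open(self):
        self.commanded_open = True
        self.actually_open = True

    def command_close(self, sticks=False):
        self.commanded_open = False
        if not sticks:
            self.actually_open = False
        # If the valve sticks, actually_open remains True.

    @property
    def panel_says_closed(self):
        # The flaw: derived from the command signal, not a position sensor.
        return not self.commanded_open


valve = PilotOperatedReliefValve()
valve.command_open()                # pressure relief begins
valve.command_close(sticks=True)    # the stuck-open failure of March 28
print(valve.panel_says_closed)      # True: operators believe it has closed
print(valve.actually_open)          # True: coolant continues to escape
```

A light driven by a position sensor on the valve itself, rather than by an echo of the command signal, would have removed exactly the ambiguity that misled the operators.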

The decreasing pressure in the primary system caused the water within it to redistribute and accumulate in the pressuriser, while steam voids formed throughout the system. High Pressure Injection (HPI) of large volumes of water began automatically, to raise the pressure within the cooling system to an adequate level and push the water that had accumulated in the pressuriser back through the system, but this was terminated by an operator who took the process under manual control. It has been suggested that, had the operator not intervened, all further problems would have been avoided (Ford 1981). Unreliable coolant level readings suggested that the system was full of water, but these did not take into account the presence of the steam voids. The HPI was reduced as a result, yet at these lower pressures water continued to seep out of the open valve. The operators did not know that the voids existed, or that they rendered the coolant level reading meaningless; it was these readings that they relied on as an indication that water was present throughout the system and hence covering the core. As water escaped, pressure dropped, and water near the reactor turned to steam, increasing the size of the voids and maintaining the apparently adequate pressuriser level. This sequence resulted in parts of the core being uncovered and, with no cooling, the temperature of the uranium bundle increased dramatically.

Due to the high temperature, the metal fuel cladding reacted with the steam in the voids, oxidising and producing hydrogen. A hydrogen bubble formed in the top of the reactor containment building, further complicating matters: at low pressures this bubble would expand, preventing water from circulating in the core, and it was feared that at high temperatures it might ignite.

For several hours, the operators were not aware that they were experiencing a Loss of Coolant Accident (LOCA). It is suggested that this was due to their failure to recognise a number of indications, including the high temperatures of the drain pipe from the pressure release valve, from which it should have been deduced that the valve was open. However, the gauge for this reading was on the back wall of the control centre and was not noticed by the operators (Jaffe 1979).

Significant damage to the core occurred and part of the fuel did melt, finding its way into the lower part of the core, destroying the seal and allowing radioactive gases to be released into the containment building. Fortunately, the condition was stabilised before further damage could release radiation into the surroundings. It has been suggested that the reactor was within 30 minutes of meltdown (Ford 1981). Recovery efforts lasted ten years and the second unit at Three Mile Island has never been reopened.

The official, government-sponsored investigation determined that there were four distinct failures comprising the accident, and a table summarising these and the attempts made to correct them is included in Appendix 1.

Theoretical Discussions of Three Mile Island

The accident at Three Mile Island jeopardised the safety of many Americans and compounded citizens' resistance to the governmental decision to take advantage of nuclear power. This combination of effects demanded a thorough and unbiased analysis of the accident by the government of the United States, including efforts to determine its cause(s). As a result, President Jimmy Carter established the President's Commission on the Accident at Three Mile Island. Chaired by John G. Kemeny, President of Dartmouth College, this committee determined that "The equipment was sufficiently good that, except for human failures, the major accident at Three Mile Island would have been a minor incident" (Kemeny 1979).

In alignment with Johnson's criticism that accident reports frequently separate analyses of systems and human factors, making "it difficult to form a coherent view of the way in which human factors and systems failures contribute to major accidents" (Johnson 1997), the report produced by the Commission focuses upon the context surrounding the accident; a separate committee was formed to complete a technical analysis (Tew 1979). While the report of the Kemeny Commission, as it has come to be known, did not represent the official analysis in its entirety, it provided a very clear account to the President, one which emphasised that human factors were the primary cause of the Three Mile Island accident. There has been significant opposition to this accusation, including calls for a greater focus upon the technological factors involved.

Perrow's Normal Accident Theory

Charles Perrow, in complete opposition to the human factors argument presented by the Kemeny Commission, relied upon a technologically deterministic explanation of the Three Mile Island accident, discounting operator error entirely. Perrow unequivocally states, "The system caused that accident, not the operators" (Perrow 1984). Considering it the exemplary case of his Normal Accident Theory, Perrow proposes that systems as complex as nuclear reactors will suffer inevitable failures due to the inherent inter-dependence of their components (Perrow 1984). This 'tight coupling' means that what may be a minor failure in one segment quickly affects others not in direct operational sequence, producing a series of unpredictable interactions of failures and results that are often incomprehensible to operators within the period of time in which counter-measures could be taken. This form of accident is referred to as a 'normal accident' or a 'system accident.' It is important to note that Perrow's use of the term 'normal' is not meant as an indication of the frequency of accidents but, rather, as a suggestion that they should be considered a standard element of certain systems.

Perrow summarises "the four characteristics of normal accidents: warning signals, equipment and design failures, operator errors, and unanticipated events" (Perrow 1982).

The normal accident definition is certainly well suited to the Three Mile Island case presented (as it should be, given that it was specifically designed to account for it). For example, the operators were not aware that the drop in the coolant level in the core and the turbine shutdown could be connected, as the steam generator should still remove heat from the core, a function only exacerbated by a lack of coolant. However, Perrow suggests that these two events are unexpectedly linked through the PORV, implying that the PORV will not function if the turbine stops. The system's complexity and tight coupling had potentially disastrous results.

Such "interaction of small failures led [the operators] to construct quite erroneous worlds in their minds" (Perrow 1984). Under such circumstances, it is understandable that operators would make decisions found to be incorrect in retrospect. The most significant example of this is the operators' reduction of HPI used to cool the core. The operators were functioning under the assumption that no LOCA had occurred, and in such circumstances it is never advisable to reduce HPI. Perrow believes that the correlation should be made between these errors and the system, as opposed to the errors and the operators.

Critique of the Normal Accident Theory

Andrew Hopkins takes a view directly opposed to Perrow's in his consideration of Three Mile Island, and it is with this as the focus that I continue. Hopkins states:

"We can agree with Perrow that, given the situation the operators found themselves in, there was no way they could have avoided the accident. But it does not follow that the accident was unavoidable. We can legitimately ask: why did operators find themselves in this position?" (Hopkins 2001)

Standing in contrast to Perrow who states: "Better organization will always help any endeavour. But the best is not good enough for some that we have decided to pursue" (Perrow 1984), Andrew Hopkins presents the argument that the Three Mile Island accident, and the vast majority of industrial accidents for that matter, could have been prevented by improved management.

Hopkins points to a number of shortcomings of the Normal Accident Theory, ultimately reaching the conclusion that the lens provided by Turner's framework of Man-Made Disasters Theory (Turner and Pidgeon 1997), provides a better view into the complexity of the Three Mile Island accident. The criticisms made by Hopkins include: the applicability of the Normal Accident Theory to only a very small number of accidents, specifically those that occur in systems that are both complex and tightly coupled; and the lack of a proposed method of quantifying the complexity and coupling of a system to determine whether its accidents may fall into the category of 'normal accidents' (Hopkins 1999). I believe that the latter is of greater importance, as, should the theory effectively classify even some small number of accidents, it may still be considered useful.

To aid in this classification, Perrow provides an interaction / coupling table which places a number of sample systems within a two-by-two matrix of complexity and coupling (Appendix 2). As Hopkins states, for the classification system to be effective, "complexity and coupling must be defined independently of the phenomenon they are designed to explain" (Hopkins 1999). While Perrow does recognise this issue of subjectivity, he does not acknowledge the significance of the flaw it brings to his theory. It seems that Perrow's definitions remain vague in order to allow the inclusion or exclusion of systems as suits his needs, or, as Roberts comments, "his constructs are loosely coupled to his illustrations" (Roberts 1989). I found Perrow's logic rather difficult to follow, and I was left with the impression that his entire book was written with the sole intention of defining nuclear power as the most dangerous of all systems and promoting the immediate discontinuance of all such power production. Perrow provides no effective way to determine where the line should be drawn between those systems that are "hopeless and should be abandoned because the inevitable risks outweigh any reasonable benefits" (Perrow 1984) and those that require other measures, though he explicitly places nuclear power and nuclear weapons within the former category. This contradiction certainly supports Hopkins's search for a more appropriate theory to apply to the Three Mile Island accident.
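The logical structure of the two-by-two matrix can be made explicit with a short sketch. The placements below are hypothetical illustrations following common readings of Perrow's chart, and, as Hopkins's criticism above makes clear, the axes lack independent quantitative definitions, so any such assignment is ultimately a judgement call.

```python
def normal_accident_prone(interactions: str, coupling: str) -> bool:
    """Perrow's claim: only systems that are BOTH interactively complex
    AND tightly coupled are subject to 'normal accidents'."""
    return interactions == "complex" and coupling == "tight"

# Hypothetical placements in the two-by-two matrix, one per quadrant.
systems = {
    "nuclear power plant": ("complex", "tight"),
    "dam":                 ("linear", "tight"),
    "university":          ("complex", "loose"),
    "assembly line":       ("linear", "loose"),
}

for name, (interactions, coupling) in systems.items():
    prone = normal_accident_prone(interactions, coupling)
    print(f"{name}: normal-accident prone = {prone}")
```

The sketch makes Hopkins's point concrete: the predicate is trivial once a system has been placed in a quadrant, so all of the theory's explanatory work is done by the placement itself, which Perrow never defines independently.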

Turner's Man-Made Disasters Theory

Following these criticisms of the Normal Accident Theory, Hopkins states that Three Mile Island "was not a normal accident in Perrow's sense and is readily explicable in terms of management failures" and that "there was nothing technologically inevitable about the incident" (Hopkins 2001). Hopkins effectively applies the organisational theories of Turner to justify these claims; an interesting task as Turner himself had previously supported the normal accident theory and its application to a specific category of accident (Turner 1994), presumably including Three Mile Island, as this is the exemplary case of Perrow's theory.

At the core of Turner's work is the concept that all failures are preceded by an "incubation period" with hazardous conditions which foster the failure. This environment is created by the organisation itself:

"The development of an incubation period prior to a disaster is facilitated by the existence of events which are unnoticed or misunderstood because of erroneous assumptions; because of the difficulties of handling complex sets of information; because of a cultural lag which makes violations of regulations unnoteworthy; and because of the reluctance of those individuals who discern the events to place an unfavourable interpretation upon themselves… [and] is further facilitated by difficulties of communication." (Turner and Pidgeon 1997)

Hopkins is of the opinion that the Three Mile Island accident timeline was "highly susceptible to interruption" (Hopkins 2001) during the formation of its incubation phase. He proposes that had management prevented any of the four failures identified by the President's Commission, the total impact would have been drastically reduced, if the event had not been eliminated entirely.

Hopkins presents a most alarming example of the poor information exchange characteristic of incubation period development, one which had a clearly identifiable impact upon the events of March 28, 1979. A remarkably similar accident to that which is the focus of this paper had occurred seventeen months prior. Recommendations of measures to prevent a recurrence were compiled, but as the document was passed up the bureaucratic ladder and repeatedly summarised, "the full significance of the event was not appreciated by senior management who, therefore, failed to provide a satisfactory response" (Hopkins 2001). It is suggested that, had correct action been taken, the initiating event of Three Mile Island, the blocked polisher, would not have occurred, possibly eradicating the entire sequence of events.

In what Hopkins identifies as a significant evolution of his theories, Turner acknowledges, in his 1994 article, the role that technical system characteristics play in accidents, contradicting his previous position that all failures can be explained by management failures. He states that "there are predisposing factors for disaster which are outside the province of individual managers or management teams" (Turner 1994). This acts as a starting point for my discussion of a compromise between Perrow's and Turner's work. While Turner states the need to "tackle both sloppy management and failures of the normal system" (Turner 1994), his work retains a much greater, intentional focus on organisational factors, leaving room for a complementary theoretical mindset to aid in realising Turner's proposal of a holistic consideration of failures.

A Proposed Compromise

I respectfully find fault with Perrow's interpretation of events. While careful to provide explicit distinctions between such terms as 'incident' and 'accident,' and 'tight' and 'loose' coupling, Perrow neglects to pay the same attention to the term 'system.' Since the term forms the foundation of his theory, it would have been beneficial to indicate the boundaries he applies to it. The term seems to be used in a number of ways, and this ambiguity is problematic. It can be deduced that what Perrow generally considers a 'system' is comprised solely of physical equipment and the events that take place within it. However, there are a number of uses of the term that do not follow this definition. For example, a space mission is referred to as a system, where it would seem that a mission consists of much more than simply the shuttle itself. This vagueness weakens his argument.

However, in Perrow's consideration of Three Mile Island, it is clear that the system under consideration is the technology itself and the transformation process of producing nuclear power. It is with this understanding of Perrow's definition of a system that I continue.

I believe that this delineation is insufficient for examining the origins of such multi-faceted accidents as Three Mile Island. A nuclear power plant does not function without the constant supervision and involvement of operators, and would not exist were it not for manufacturers, management staff and the wider nuclear power industry. Thus, the exclusion of these elements from the 'system' is imprecise and problematic, as they have a direct impact upon the technological system.

Perrow's self-claimed "focus on the properties of systems themselves, rather than on the errors that owners, designers, and operators make in running them" (Perrow 1984) suggests that operators are peripheral to, and that the organisation is a containing structure for, the system. While he acknowledges their role in fostering accidents within the system, Perrow is reluctant to include these as elements of the system itself:

"these systems require organizational structures that have large internal contradictions, and technological fixes that only increase interactive complexity and tighten the coupling" (Perrow 1984).

In support of my desire to widen the boundaries of the Three Mile Island system, I consider the example of the faulty PORV presented by Hopkins. The PORV that failed at Three Mile Island had previously failed at eleven other power plants. The component's manufacturer had not warned its customers, the industry had not effectively managed the diffusion of this information, and Three Mile Island management did not have a division devoted to analysing the experience of other plants in order to improve their own. Had any of these pathways for information exchange been implemented effectively, Hopkins claims, the particular PORV that failed at Three Mile Island might not have been an element of Perrow's 'system' at the time of the accident. Is it useful, then, to consider the equipment as a system independent of context?

While Perrow does allow for an "eco-system accident" in his consideration of environment-induced accidents at dams and mines (Perrow 1984), he does not allow for the same amalgamation of systems at Three Mile Island. I propose that a further adaptation of Perrow's system accident, what I shall refer to as a "socio-system accident," would provide greater applicability to the Three Mile Island case.

I agree with Perrow that a nuclear power plant is a system whose tight coupling and complexity are great enough to make accidents inevitable. However, I suggest that this complexity must include that inherent in the management, the industry, the operators and other contextual actors. For example, a complex organisational structure is required to manage a nuclear power plant, and information exchange will occasionally be poor, resulting in uninformed decisions. Thus, the examples of poor management presented by Hopkins could be considered, consistently with the normal accident theory, as additional, seemingly trivial failures which interacted with the technical failures that Perrow presents to produce the "socio-system accident" of Three Mile Island.

It is important to note at this point that what Perrow considered to be further developments of his Normal Accident Theory, ten years after its publication, take into account a number of the criticisms that I have presented. His resulting use of 'Garbage Can Theory' (Cohen et al. 1988), which, as Hopkins summarises "is the idea that organisations inevitably behave in unpredictable ways" (Hopkins 1999), gives greater relevance to the role that organisational factors play in normal accidents. However, it would seem that the organisation remains excluded from the system. Normal accidents are still considered the result of technological determinism, while organisational factors "determine the number of inevitable … failures" (Perrow 1994). This is a rather different resolution from that which I have advocated.

I believe that Three Mile Island was a socio-technical incident in which neither human nor technical factors should be considered in isolation. Even the Kemeny Commission pointed to technological issues that played a role in the eventual LOCA. Bradley analyses the Three Mile Island incident by considering each error in the series of events leading to the accident, implementing a classification structure that separates these errors into avoidable and unavoidable within the categories: buying, commissioning, design, failure of equipment, management, operating, production and repair (Bradley 1995). The combination of errors for Three Mile Island was classified as: five equipment failures; an avoidable and an unavoidable management error; and an avoidable and three unavoidable operator errors. The fact that "an attempt was always made to work back to the fundamental human error which contributed towards the failure" shows an admitted bias and, while I feel that the series of events considered is rather simplified, the ultimate inclusion of several errors that could not be attributed to social factors provides significant support for a socio-technical approach to the failure. (The fault tree presented by Bradley has been included in Appendix 3 for reference purposes.)
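Bradley's tally can be re-expressed as data to make the socio-technical balance visible. Only the counts below come from the text; the category labels are shorthand, and the individual event descriptions are omitted because they are not reproduced here.

```python
from collections import Counter

# Bradley's (1995) error tally for Three Mile Island: (category,
# avoidability) pairs. Equipment failures carry no avoidability
# classification in the account summarised above, hence None.
errors = (
    [("equipment", None)] * 5
    + [("management", "avoidable"), ("management", "unavoidable")]
    + [("operator", "avoidable")]
    + [("operator", "unavoidable")] * 3
)

by_category = Counter(category for category, _ in errors)
technical = by_category["equipment"]
social = by_category["management"] + by_category["operator"]

print(dict(by_category))   # {'equipment': 5, 'management': 2, 'operator': 4}
print(technical, social)   # 5 6
```

Even under a scheme that admittedly worked back towards human error wherever possible, nearly half of the contributing errors remain technical, which is the socio-technical point the paragraph above draws from Bradley.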

On accident prevention Hopkins states:

"Normal accident theory suggests a technological approach: reduce complexity and coupling. The alternative approach is to make organisational changes designed to improve flows of information, decision-making processes and so on." (Hopkins 2001)

I do not believe that these two approaches are mutually exclusive. If we were to consider accidents such as Three Mile Island as socio-technical, socio-system accidents, it would follow that a dedication to each of these methods would be an effective strategy.

Addressing the Context

Having made the rather straightforward suggestion that both social and technical preventative measures need to be taken against accidents, I now focus upon a specific aspect of the human factors issue: that of enhanced operator performance. While maintaining that normal accidents will never be eradicated, Perrow does cite improved operator training as a "fairly obvious" way in which we may improve the management of complex systems (Perrow 1984), noting that the Normal Accident Theory is not intended to promote a lackadaisical attitude towards the inevitability of failure. Instead, Perrow hopes that, through a more complete understanding of the dynamics of complex systems, we may learn to manage high-risk technologies more effectively and mitigate the effects of such failures (Perrow 1984). This issue of improving operator training, however, is not as simple as Perrow implies: his Normal Accident Theory has ramifications that serve to complicate matters.

Human Performance Factors Analysis

While both Perrow and Hopkins discount operator error as a causation factor of the Three Mile Island accident, the Kemeny Commission disagrees. Regardless, it is certainly logical that enhanced operator performance in emergency situations is always desirable. However, there are a number of paradoxes inherent in the nuclear power plant industry that must be taken into account when designing strategies for attaining this goal. Otway and Misenta have provided a very thought-provoking account of such paradoxes (Otway and Misenta 1980), several of which may be supported by direct application of the Normal Accident Theory. A brief discussion of a number of these difficulties follows, assuming the correctness of Perrow's theory, with illustrative examples drawn from the Three Mile Island case.

Training and Procedures

Central to the Normal Accident Theory is the concept that a complex, tightly coupled system will, at some point, experience multiple failures interacting in unexpected ways. As these interactions cannot be predicted by the system's engineers, it follows that such sequences will not be included in any simulation training, and that specific procedures cannot be written to rectify them. As a result, any nuclear power plant operator's training is inherently deficient, and increased procedural awareness may not have been beneficial.

Further application of Perrow's Normal Accident Theory renders procedure writing for nuclear power plant operation even more problematic. Perrow suggests that a tightly coupled system inherently requires an extensive set of prescribed procedures outlining series of steps that must not be altered, but that it is these very procedures that limit operators' creativity when faced with confusing failures. He concludes that a system prone to normal accidents requires both a centralised organisational structure to oversee procedures and a decentralised structure to allow for the necessary creativity. These contradictory requirements cannot be satisfied simultaneously, and ineffective organisational structures inevitably result (Perrow 1984). (Please refer to Appendix 4.)

Those procedures that were available at Three Mile Island have been criticised as being of "inferior quality" (Wieringa 1991). Further, Otway and Misenta report that the particular failure combination of the Three Mile Island accident had not been included in training simulations. The operators thus did not have the prior experience necessary to resolve the situation. Creative responses were sorely needed, and it is proposed that organisational structures and other contextual factors prevented them.

Stress

Let us assume, as is made credible by the quantitative analysis of Chisholm et al., that the control room was a particularly stressful environment as the events of Three Mile Island unfolded (Chisholm et al. 1986). This is certainly supported by the following description:

"Consider the situation: 110 alarms were sounding; key indicators were inaccessible; repair-order tags covered the warning lights of nearby controls; the data printout on the computer was running behind (eventually by an hour and half); key indicators malfunctioned; the room was filling with experts; and several pieces of equipment were out of service or suddenly inoperative." (Perrow 1982)

The operators' close involvement with the threatened nuclear meltdown, as what Perrow classifies as potential "first-party victims" (Perrow 1984), would certainly have further increased the stress level. One would assume that this could not have been prevented, and it may excuse, to some degree, poor decisions made under such stressful conditions. But could better management of the situation have improved performance?

Otway and Misenta report that characteristics of the nuclear power industry require efficient management of stress levels at all times. The authors advocate maintaining stress levels within an 'alertive' range, where performance is enhanced, and preventing their rising to 'disruptive' levels, where performance rapidly worsens. (Appendix 5 includes a helpful graphing of this concept.) This may require raising stress levels during regular operating circumstances and lowering them in crises.

While not discussed at length, 'control-room design' is cited by Otway and Misenta as a "determinant of operator performance" and a "physiological source of stress" (Otway and Misenta 1980). Training that resolved the previous paradoxes may still have been insufficient to prevent the accident, given the complications that poor control panel design brought to the situation. As has been stated, normal accidents often expose ignored warnings (Perrow 1984), but:

"humans cannot reasonably be blamed for misunderstanding a system's conditions when the actual state of the system (as depicted by instruments that the operator has been trained to trust and rely on) does not match the operators' cognitive maps." (Brookes 1982)

A more fitting human-computer interface, one that did not promote misreadings, may have been effective in reducing the stress suffered by the operators.

Crediting Zajonc's work, it is also suggested that the number of people within the control room at Three Mile Island should have been limited. This overcrowding may have raised stress levels, in turn causing "an increased probability of selecting the dominant, i.e. 'best-learned', response with a corresponding decrease in the probability of a 'new' response should the best-learned response be inappropriate" (Otway and Misenta 1980). As has been shown, the knowledge base from which the operators were drawing was not sufficient, and innovative solutions were demanded.

Personnel Selection

Critics of the Three Mile Island operators may suggest that the workers were not intellectually capable of performing in the emergency situation and of identifying the control panels' faulty information. It would follow that individuals of higher intelligence and with greater qualifications should have been employed. However, over-qualification is a concern of particular importance "to nuclear-power plants, where a highly intelligent, well educated operator stressed by the boredom of normal operations might even become the cause of an emergency" (Otway and Misenta 1980). Ford reports that, due to complex system design, high intelligence will be of no benefit even under emergency circumstances:

"The operator's emergency role is expected to be chiefly one of "monitoring and verifying" the response of automated plant-safety devices to the contingency. The operator…will not be required to make decisions that require advanced engineering training." (Ford 1981)

Again we find that the human factors analysis of Three Mile Island is not as simple as might originally have been suspected. The nuclear industry's inherent characteristics make human factors engineering particularly difficult.

The evidence provided supports the claim that the Three Mile Island operators, while willing to perform, as potential first-party victims, had low performance ability (Otway and Misenta 1980). Training was inadequate; stress levels were dangerously high; and the control panels with which they were interacting facilitated inaccurate assessments of the situation. These issues raise significant doubt as to the accuracy of the Presidential Commission's charge of operator responsibility at Three Mile Island. Even the brief social analysis presented points to technological problems of system design, further emphasising the need for an appropriately anti-essentialist approach to the failure analysis.

Conclusion

An investigation of the Three Mile Island accident through each of the lenses provided by the Normal Accident Theory and the Man-Made Disasters Theory reveals a heavily biased view of the situation. When the event is considered as a normal accident, it seems pardonable, owing to the inevitable failures of complex and tightly coupled systems. Conversely, considering only the management failures preceding the accident fails to acknowledge how unlikely it is that management could have been aware of, or could have addressed, all system vulnerabilities.

It is proposed that the Normal Accident Theory be expanded to integrate the inevitable management failures of the complex organisation necessary to create and manage a complex technical system such as a nuclear power plant. A broadening of the boundaries of the 'system' to include the operators and the organisation, in conjunction with the application of the concept of a socio-system accident to the Three Mile Island case, is humbly promoted.

I would like to acknowledge the limitations inherent in my study. Not having had extensive exposure to failure analysis prior to this research, it is rather presumptuous of me to propose alterations to the well-developed theories of experienced academics. While I draw solely from the available details of the Three Mile Island case, both Perrow and Turner have been influenced by innumerable events and have presumably adapted their theories accordingly. Perhaps they have even considered making the proposed developments to their theories but, for whatever reasons, have felt that they were not fitting. Since I do not have direct access to such information, or to their thought processes, my theories are inherently naïve. However, I do not believe that this renders the concepts I have developed, or the logic involved, without value; at the least, they promote a complete analysis of system failures and of the need to identify causation factors.

I would agree with Perrow that "We have not had more serious accidents of the scope of Three Mile Island simply because we have not given them enough time to appear" (Perrow 1984). However, I encourage the progression of research into the effective development of a "safety culture" (Pidgeon 2001, van Vuuren 2000). This paper has limited itself to considering Three Mile Island as a high-risk socio-technical system, but acknowledges the potential benefits of the parallel study of High Reliability Theory (La Porte and Consolini 1991) in its aim to manage risk effectively. I suggest that particular care must be taken with human factors engineering in the context of the nuclear power industry. Similarly, it is necessary to continue research into system designs that minimise complexity and tight coupling.

Until we can "breed smog-resistant people - people who could thrive on asbestos, and cadmium, and mercury, and radioactivity, and PCB's, and so on" (Peterson 1980), the nuclear power industry has a social obligation to put in place any feasible mechanisms that may prevent a recurrence of an accident similar to that at Three Mile Island. To consider such failures as anything other than socio-technical events, and consequently to refuse to investigate both social and technical prevention methods extensively, would be inexcusable.