Human Interface And Human Error Computer Science Essay


Many of the mistakes and failures in large, complex computer systems can be traced directly to human interaction. These errors usually occur because the system's interface has been poorly designed, yet companies remain comfortable with the assumption that when an error occurs, a human will be available to fix it. This is where the problem originates. Even highly trained users are prone to dullness during routine work, and to anxiety when an unfamiliar error occurs and stress levels rise. A good human interface gives the user appropriate feedback so that they have time to decide on a valid way of handling an error given the current state of the system. A user may also make a big deal out of a small error while overlooking a real threat to the system. Heuristic evaluation, cognitive walkthroughs, and experimental techniques such as protocol analysis are ways of determining the effectiveness of a human interface, but they do not always provide conclusive data. A system developer must ensure that the human interface is intuitive and easy for users to understand, but not so simple that it lulls the user into a state of complacency and lowers his or her responsiveness to errors that need immediate attention.


Certified that this research project titled Human Interface and Human Error is the bona fide record of work carried out by Johnathan Salmon for my final year of B.Sc. Computer Science.

------------------------- ------------------------------------------

Technical Guide Research Coordinator Principal

Place: Belgium Campus IT varsity Date: 2012/06/11

Problem Statement

In this paper I did research in order to find out:

What is a Human Computer Interface?

How does human error influence it?

How can human error be addressed in a complex system?


Figure 1. Human Error Mishaps Statistics from 1 Jan 86 - 31 Dec 90


When looking at any complex system, it is easy to see that problems and failures in the system are directly linked to human errors. These include incomplete specifications, issues with the design of the system, software miscalculations and engineering flaws. But when we focus on human errors in an embedded system, we see issues caused by a poor Human Computer Interface.

What is Human Computer Interface?

Human Computer Interface is the study of the relationships that exist between human operators and the computer systems they use in the performance of their various tasks.

We as humans have a tendency to make mistakes, and certain conditions can only increase the chances of making more of them. When a well-designed and thought-out HCI (Human Computer Interface) is created, a user is more likely to enter correct values and avoid simple mistakes, because they understand what is asked of them. Unfortunately there is no single, definitive way to create an HCI for every system.

Embedded systems usually have fixed cost and resource constraints and cannot perform extremely complex calculations. Thus an interface has to be designed to be simple and easy to use without consuming too many of the system's resources, so that the system remains safe. We have to differentiate between highly field-specific interfaces (e.g. nuclear power plants) and general-access interfaces such as automatic teller machines or the on-screen menus of a cell phone. The easiest way to understand this distinction is to look at any common vehicle.

Everyone has to pass a driver's test to be able to drive a car, but the interiors of modern-day cars differ, and this can cause a driver (user) to make a mistake when trying to drive an unfamiliar car. This is where human error comes into play.

The main goal in a safety-critical system is to prevent the human user from making an error that causes a hazard. Usability is a huge factor when creating an HCI: if the interface is easy to understand, the user will be more relaxed and less nervous. But by making the HCI too easy and simple, an operator can misjudge an error and choose to continue past it, which can have enormous consequences. Such an error occurred with the THERAC-25 medical radiation device: the operator would simply select a dosage without reading what was displayed on the interface, and lethal doses of radiation were administered to patients. Error messages are also an important part of an HCI. If a message occurs, the interface should effectively stop to show the error, and the operator should dismiss the message by explicitly acknowledging that he/she has read it, so that the system can then report whether the subsequent action succeeded or failed.
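The acknowledge-to-dismiss pattern described above can be sketched in a few lines. This is a minimal, hypothetical console example, not the actual THERAC-25 interface; the "ACK" keyword and the prompt wording are assumptions for illustration.

```python
# Minimal sketch of a blocking error message: the interface stops and
# re-prompts until the operator explicitly acknowledges the error.
# The ACK keyword and prompt text are illustrative assumptions.

def show_blocking_error(message, read_input=input):
    """Halt the interface until the operator acknowledges the error."""
    while True:
        reply = read_input(f"ERROR: {message} -- type ACK to continue: ")
        if reply.strip().upper() == "ACK":
            return True  # acknowledged; caller may resume and report outcome

# Simulate an operator who ignores the first prompt, then acknowledges.
replies = iter(["", "ack"])
acknowledged = show_blocking_error("dose exceeds configured limit",
                                   read_input=lambda prompt: next(replies))
print(acknowledged)  # True
```

The point of the loop is that the operator cannot proceed by reflexively pressing a key: only the explicit acknowledgement dismisses the message.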

Key Theories

In an embedded system the human user is usually the weakest link. The chances of a human operator making a mistake in a computer system are higher than the chances of the hardware or software failing. The technique used to determine the probability that a human will make an error during the completion of a task is called the Human Error Assessment and Reduction Technique (H.E.A.R.T). The reasons to use the HEART method are the following:

Error Identification

Error Quantification

Error Reduction

The HEART method is grounded in the belief that every time a task is performed there is a possibility of failure, and that this probability is affected by one or more Error Producing Conditions (EPCs) - for instance distraction, tiredness, cramped conditions, etc. Each EPC's contribution (its Assessed Effect) is calculated as ((Total HEART Effect - 1) x Assessed Proportion of Effect) + 1.

HEART Method

Error Producing Condition    Total HEART Effect    Assessed Proportion of Effect    Assessed Effect
-                            3.0                   0.4                              (3.0-1) x 0.4 + 1 = 1.8
Opposite technique           6.0                   1.0                              (6.0-1) x 1.0 + 1 = 6.0
Risk misperception           4.0                   0.8                              (4.0-1) x 0.8 + 1 = 3.4
Conflict of objectives       2.5                   0.8                              (2.5-1) x 0.8 + 1 = 2.2
Low morale                   1.2                   0.6                              (1.2-1) x 0.6 + 1 = 1.12

Table 1. Computational factors in calculating the Human Error Assessment and Reduction Technique.

A representation of this situation using the HEART methodology would be done as follows:

From the relevant tables it can be established that the type of task in this situation is of the type (F) which is defined as 'Restore or shift a system to original or new state following procedures, with some checking'. This task type has the proposed nominal human unreliability value of 0.003.

The final calculation for the nominal likelihood of failure can therefore be formulated as:

0.003 x 1.8 x 6.0 x 3.4 x 2.2 x 1.12 = 0.27
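The calculation above can be reproduced with a few lines of Python. The EPC values are those listed in Table 1 and the worked example; the function simply encodes the HEART assessed-effect formula.

```python
# Sketch of the HEART calculation above. EPC values are taken from Table 1.

def assessed_effect(total_effect, proportion):
    """((Total HEART Effect - 1) x Assessed Proportion of Effect) + 1."""
    return (total_effect - 1) * proportion + 1

# (Total HEART Effect, Assessed Proportion of Effect) for each EPC in Table 1
epcs = [
    (3.0, 0.4),  # first EPC               -> 1.8
    (6.0, 1.0),  # opposite technique      -> 6.0
    (4.0, 0.8),  # risk misperception      -> 3.4
    (2.5, 0.8),  # conflict of objectives  -> 2.2
    (1.2, 0.6),  # low morale              -> 1.12
]

nominal_unreliability = 0.003  # task type (F), from the HEART tables

hep = nominal_unreliability
for total, proportion in epcs:
    hep *= assessed_effect(total, proportion)

print(round(hep, 2))  # 0.27
```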

Foundations of Human Error

Automated systems are very good at performing repetitive tasks. But if something unexpected happens to the system and corrective actions must be taken, the system can behave erratically; this is when a human is needed to handle the crisis. Humans are better at handling novel incidents than the machines themselves, but humans cannot perform monotonous tasks as well. Consequently humans are left in charge of monitoring the system passively. This creates a problem, because if the operator is not kept busy they become bored and may overlook serious errors when they occur. This is called Operator Drop-Out.

Conversely, if the operator is constantly busy with routine work controlling the system, he/she will also make mistakes, because the work becomes rote, and the moment something demands attention they will not be able to react to it as a NEW problem. If the operator has a fixed mental model of the system in its normal mode of operation, the operator will tend to ignore data indicating an error unless it is displayed with a high level of distinction.

Another major factor influencing the operator is stress. Stressful circumstances include:

Unfamiliar occurrences

Incidents that can cause loss of money, data, or life

Time critical tasks

We as humans tend to perform worse when threatened with stressful situations. The best way to reduce this effect is to make unusual situations familiar through drills. The moments when humans are expected to perform at their best are precisely when stress is highest, making them more prone to make errors on the system. Failure rates in some situations can rise to as much as 30%. Regrettably, humans are our only choice, since a computer system cannot correct itself in complex situations or crises. The best that can be done is to design the user interface to be easy to understand and use, so that the user makes fewer mistakes that could cause further damage.

Understanding Human Errors

The Action Theory and Human Errors

In 1978 Leontief developed a way of understanding how human error occurs, called Leontief's Three-Level Schema (see Figure 2). It defines the scope of enquiry into human activities and directs attention to the changes happening on three levels: motive-activity, goal-action, and instrumental conditions-operations. These three levels are ordered in a tiered arrangement, where the top level of activities comprises numerous actions that are performed through concrete operations. In a 'pure' objective way, only the operational level can be observed and analysed; the goal-setting and motivational levels must be derived or examined by indirect methods (e.g. questionnaire, interview, thinking aloud) based on the reflective remarks of the examined subjects.

Figure 2. The three levels schema of the activity theory of (Leontief, 1978).

Solving or Minimizing Human Error

The following categories of human error must be addressed in order to solve or minimize it:

Low Stress Error

High Stress Error

Error/Change Phenomena

Low Stress Error

Have you ever walked into a room to do something, but just as you enter the room you forget what you wanted to do? This is a classic example of a low stress error: an action not carried out according to plan. These slips happen to us because we are human, but in process control and other important situations these types of human error can have grave consequences.

For example, consider a control box with a sequence of start and stop switches approximately 13mm apart, controlling numerous operational pumps. When the wrong switch is activated or deactivated at a critical moment, an action has been carried out that does not match what is needed to operate the machine. This can create huge problems, because pipes that need to cool down are now being pumped again and nearing explosion.

A simple and easy way to solve this type of problem is to increase the spacing between the buttons and switches, or to label them with text or colours. In other words, the user interface is changed so that a simple human slip cannot occur.
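The same idea can be enforced in software: give every switch an unambiguous label and require a confirming second action on safety-critical switches, so that a single slip of the hand cannot stop a critical pump. The pump names and labels below are illustrative assumptions.

```python
# Sketch of an interface-level guard against low-stress slips: labelled
# switches, with safety-critical ones requiring explicit confirmation.
# Pump names and labels are illustrative, not from any real panel.

class PumpPanel:
    def __init__(self):
        # label -> (pump id, is safety critical)
        self.switches = {
            "COOLANT PUMP 1 STOP": ("pump-1", True),
            "COOLANT PUMP 1 START": ("pump-1", True),
            "DRAIN PUMP START": ("pump-2", False),
        }
        self.log = []  # actions actually carried out

    def press(self, label, confirmed=False):
        pump, critical = self.switches[label]
        if critical and not confirmed:
            return f"'{label}' is safety critical: press again to confirm"
        self.log.append((pump, label))
        return f"{label} executed"

panel = PumpPanel()
print(panel.press("COOLANT PUMP 1 STOP"))        # asks for confirmation first
print(panel.press("COOLANT PUMP 1 STOP", True))  # now executed
```

A slip that lands on the wrong critical switch is caught by the confirmation step, while routine non-critical actions stay single-press.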

High Stress Error

On the day the Three Mile Island accident occurred, over a hundred alarms and whistles went off, and the human operators working with the nuclear plant machinery did the wrong things in the moment of need. In all the panic and noise they actually shut off the main cooling system, the most important thing needed in the emergency. This type of high stress error is not uncommon. According to a study on the error potential of people suddenly faced with impending danger, if they have only one minute to react to an out-of-control situation there is a 99.9% chance of doing the wrong thing. There is a 90% chance if they have five minutes to react, 10% with half an hour to react, and 1% (still too much) with two hours to react.
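The reaction-time figures quoted above can be kept as a small lookup table. The four data points are those from the cited study; the bucketing logic (nearest lower time bucket) is an illustrative assumption of mine.

```python
# The quoted error probabilities, indexed by time available to react.
# Data points are from the study cited above; the lookup scheme is an
# illustrative assumption.
import bisect

# (seconds available to react, probability of doing the wrong thing)
error_rates = [(60, 0.999), (300, 0.90), (1800, 0.10), (7200, 0.01)]

def wrong_action_probability(seconds):
    """Return the quoted probability for the nearest lower time bucket."""
    times = [t for t, _ in error_rates]
    i = max(bisect.bisect_right(times, seconds) - 1, 0)
    return error_rates[i][1]

print(wrong_action_probability(60))   # 0.999
print(wrong_action_probability(600))  # 0.9
```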

This type of error can be minimized through the following:

Hazard risk analysis for advance warning of potential hazards.

Automated designs for decisions whose time intervals are too short for humans to react.

Design of clear information displays and systems that do not confuse or disorient people when abnormal circumstances occur.

Practice training in how to cope with system upsets.

Error/Change Phenomena

It must also be said that a catastrophe almost never results from a single human error. Human error is part of being human and we are surrounded by it, and every error changes our environment. It may show up as a reactor vibrating out of place when something is wrong. Researchers suggest it takes about 14 of these error components in a chain to produce a genuine catastrophe. The reason we survive this overwhelming potential is that we are repeatedly noticing these changes and taking action to interrupt the chains.

Problems with the HCI

The HCI is expected to deliver natural controls and appropriate feedback to the user. A common issue with an HCI is that it causes information overload. When the human is expected to watch several screens to observe the status of the machine, he/she can become overwhelmed and unable to respond to the data or errors on the system. This can cause the operator to ignore displays that are currently showing little information, which is hazardous if such a display is attached to an important sensor. Another way to overwhelm an operator is to attach a notification alarm to every action he performs. This leaves him/her unable to tell a real threat from a minor error, much like the "Cry Wolf" story.
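One common mitigation for this "cry wolf" effect is severity-based alarm routing: only alarms above a threshold interrupt the operator, while the rest are queued in a log for later review. The severity levels and threshold below are assumptions for illustration, not from any particular system.

```python
# Sketch of severity-based alarm routing to avoid flooding the operator.
# Severity names and the default threshold are illustrative assumptions.
from enum import IntEnum

class Severity(IntEnum):
    INFO = 0
    WARNING = 1
    CRITICAL = 2

def route_alarm(severity, threshold=Severity.CRITICAL):
    """Interrupt the operator only for severities at or above the threshold;
    everything else goes to a log the operator reviews when idle."""
    if severity >= threshold:
        return "interrupt operator"
    return "write to review log"

print(route_alarm(Severity.INFO))      # write to review log
print(route_alarm(Severity.CRITICAL))  # interrupt operator
```

The threshold keeps the interrupting channel rare enough that the operator can still trust it when it fires.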

The HCI should also have a high confidence level, permitting a human operator to assess the information it displays and to verify and validate it without any issues. The human should not have to depend on a single system display for device readings, and the HCI should not show more than one sensor per display element, as this can cause confusion and mistakes. Humans should also not trust the evidence from the HCI to the exclusion of the rest of their surroundings.

There are a few techniques for judging whether a user interface is well designed, but there is no systematic method for designing safe, usable HCIs. It is also difficult to quantitatively measure the safety and usability of an interface, or to find and correct its defects.

Available tools, techniques, and metrics for HCI

There are a few techniques for evaluating user interface designs, but they are not yet fully developed and they cannot offer decisive data about an HCI's safety and usability. Review approaches like heuristic evaluation and cognitive walkthroughs have the advantage that they can be applied at the design stage. However, the fact that a real interface is not being tested limits what can actually be determined about the HCI design. Empirical procedures like protocol analysis have actual users test the user interface, with extensive study of all the data collected during the session, from keystrokes and mouse clicks to the user's verbal account during interaction.

The Design of a HCI

There are no exact procedures for user interface design. However, there are a few qualities that are important for a usable, safe HCI, even though the exact way of attaining these qualities is not well understood. The safest and best procedure to follow is iterative design, evaluation and redesign. If we can perform efficient evaluations and correctly identify as many defects as possible, the interface will be greatly improved. Correct assessments early in the design phase can save money and time, although it is easier to find HCI flaws when there is an actual interface to work with. It is also vital to decouple the design of the HCI from the other mechanisms in the system, so that defects in the interface do not propagate errors through the system.


Figure 3. Development process of a HCI

Iterative User Interface Design

Iterative improvement of user interfaces involves steady modification of the design based on user testing and other assessment methods. Typically, one completes a design and logs the problems several test users had using it. These problems are then fixed in a new iteration, which should again be tested both to ensure that the "fixes" truly solved the problems and to find any new usability problems introduced by the changed design. The changes from one iteration to the next are normally local to the exact interface elements that caused user difficulties. An iterative design methodology does not mean blindly replacing interface elements with different new design ideas. If one has to select between two or more interface alternatives, it is possible to perform comparative testing to measure which alternative is the most usable, but such tests are usually viewed as a different methodology than iterative design as such, and they may be performed with a focus on overall outcomes rather than individual usability problems. Iterative design is explicitly aimed at modification based on lessons learned from previous iterations.

Name of the system    Interface Technology                                    per test    Overall Improvement
-                     Personal-computer graphical interface                   -           -
Cash Register         Specialized hardware with character-based interface     -           -
-                     Mainframe character-based interface                     -           -
-                     Workstation graphical user interface                    -           -

Table 2. Four case studies of iterative design.


Figure 4. Interface quality as a function of the number of design iterations: Measured usability will normally go up for each additional iteration, until the design potentially reaches a point where it plateaus.

What Is Heuristic Evaluation?

This is when a group of evaluators (testers) examine a user interface design and critique it against a set of "ease of use" guidelines. Adherence to these guidelines cannot be measured concretely, but the testers can make relative judgements about how well the user interface follows them.

The following are a few of these guidelines or steps:

Simple and natural dialog

Speak the users' language

Minimize the users' memory load



Clearly marked exits


Precise and constructive error messages

Prevent errors

Help and documentation

These guidelines are applied early in the life cycle of a system, since the testers will not be working with an actual interface. Each tester inspects and evaluates the interface, critiquing it against the set of guidelines. This procedure is purely an evaluation method that validates the interface against the previously mentioned guidelines. To reach optimal coverage of all the possible areas of the user interface, a small team of about five testers is needed to complete the task. The drawback of this method is that it is very costly, and usually at most three testers are hired or brought in to evaluate a user interface.

This method of user interface evaluation is exceptionally good at detecting mistakes and explaining why there are usability problems in the interface. Once the testers have uncovered the cause of a usability problem, it is easy to develop a solution to fix it. It can save a lot of time and prevent errors, because the issues are uncovered before they actually start affecting users of the interface. The disadvantage of this method is that all of these good qualities depend on the skill of the testers evaluating the user interface: only expert testers qualified in the system's field can recognize interface problems specific to that field.

What are Cognitive walkthroughs?

Like the previous method, this one can be applied to the design of the HCI without actually constructing it. However, the cognitive walkthrough tests the system by focusing on how an abstract operator of the interface goes about performing a task. Every step the operator takes is examined, evaluated, and critiqued on how well the user understood what the interface wanted from him/her. The HCI should always provide a suitable amount of guidance to confirm that the operator is making progress on his/her task.

Using the cognitive walkthrough method can uncover differences between how users and designers view the tasks that have to be completed. It also reveals poor labelling and insufficient feedback for certain actions. But this method misses some important usability aspects, due to its tight focus on particular parts of the user interface: it cannot test the overall reliability or depth of features, and it may criticise a user interface as poorly designed simply because it gives the operator many choices to choose from.

For a user interface to end up well designed, inspection methods must find as many errors as possible while remaining affordable. There are trade-offs between how carefully the interface is examined and how many resources can be dedicated at this early stage in the system life cycle. Experimental procedures can also be applied at the prototype stage to actually observe the performance of the user interface in action.

Define Protocol Analysis:

This is an experimental procedure for user interface testing that concentrates on the operator's verbal responses. The operator is instructed to use the interface and asked to "think out loud" while working through the steps of performing his/her task on the system. Video and audio are logged, along with the mouse clicks and keystrokes on the keyboard or keypad used to access the system. Analysing every piece of data received throughout such a test is a very time consuming task, since assumptions must be drawn from the operator's individual verbal responses and readings made from his/her facial expressions. This task is tedious because the total amount of information is very large, and lengthy study is required for each second of the session.


A tool developed at Carnegie Mellon University automates the normally tedious task of gathering and analysing all the information accumulated from experimental user interface tests. The system consists of software that synchronizes information from several available sources while an operator is using the interface being tested. All possible inputs from the operator - keystrokes, mouse clicks, even eye movements and speech - are logged for processing. The approach is based on the premise that if the interface has good usability, the operator will not pause during the session, but will continue from one step to the next as he/she finishes the assigned tasks. Any unusual pauses between activities indicate problems, and they can be detected automatically and very quickly.

This tool gives more quantitative results and decreases the time spent collecting and processing data from each test session. Its main limitation is that it only detects errors that cause the operator to hesitate during a task.
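The pause-detection idea behind such a tool can be sketched as follows. The event format and the 5-second hesitation threshold are assumptions for illustration, not details of the Carnegie Mellon system.

```python
# Sketch of automatic hesitation detection over a logged session: flag
# any gap between consecutive interaction events that is long enough to
# suggest the operator got stuck. Event format and threshold are assumed.

def find_hesitations(events, threshold=5.0):
    """events: ordered list of (timestamp_seconds, description).
    Returns (before, after, gap) for each unusually long pause."""
    pauses = []
    for (t0, before), (t1, after) in zip(events, events[1:]):
        gap = t1 - t0
        if gap > threshold:
            pauses.append((before, after, gap))
    return pauses

session = [(0.0, "click: start"), (1.2, "keypress: 5"),
           (2.0, "keypress: 0"), (14.5, "click: confirm")]
print(find_hesitations(session))
# [('keypress: 0', 'click: confirm', 12.5)]
```

The long gap before "click: confirm" is exactly the kind of signal the tool surfaces for a human analyst to investigate.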

Other topics related to HCI

Because human error is the major cause of system failures, it must be a major consideration in safety-critical system analysis.

Safety Critical Systems/Analysis - Human error is a main issue making systems unsafe.  It is difficult to model human conduct in a system analysis, but the human operator is often a major weakness in making a system safe.

Exception Handling - The human operator is often a source of unexpected inputs to the system.  But when a truly surprising state occurs, the human operator is also the only exception handler and the only mechanism able to prevent system failure.

Security - Flaws in the user interface can sometimes be exploited and present security exposures in the system.

Social and Legal Concerns - If the user interface was poorly designed and caused the operator to make a mistake that cost lives or property, who is at fault?  The operator?  The system designer? These issues must be addressed by society and the legal system.


Humans are the most unpredictable component of any system, and so the most challenging to model for HCI design.

Humans have higher failure rates under high stress levels, but are flexible in recovering from emergency circumstances and are the last hope in a potential catastrophe.

The HCI must provide a suitable level of feedback without overburdening the operator with too much information.

If the human operator is out of the control loop in an automated task, the operator will tend to adapt to the routine mode of operation and not pay close attention to the system. When an emergency condition occurs, the operator's response will be degraded.

There is a balance to strike between making the HCI reasonably easy and intuitive, and ensuring that system safety is not compromised by boring the operator into a state of complacency.

Testing methods for user interfaces are not fully developed and can be costly.  They focus more on qualitative than quantitative metrics.  However, testing and iterative design are the best means we have for refining an interface.

Humans will inevitably make mistakes. But by working with operators we can analyse how they think, and thus create a user interface that is easy to understand and suited to each of them.

Research Methodologies Used

Historical Research:

I gathered information from various websites concerning the history of Human Computer Interfaces and how they have changed.

Interview:

I conducted an interview with Professor Paula Kotze, who is a close family friend. She gave me some pointers on how to do research on my topic and she also gave me a few articles (in my reference) that were very useful in my development of my research topic.

Experimental Research:

I gathered information on previous case studies of HCI and incorporated it into my project.

Action Research:

I decided on a topic and then I did research on problems within my topic area.

List of Figures and Tables

Figure 1. Human Error Mishaps Statistics from 1 Jan 86 - 31 Dec 90

Figure 2. The three levels schema of the activity theory (Leontief, 1978)

Figure 3. Development process of a HCI

Figure 4. Interface quality as a function of the number of design iterations

Table 1. Computational factors in calculating the Human Error Assessment and Reduction Technique

Table 2. Four case studies of iterative design