Aviation Research Lab, Institute of Aviation
University of Illinois at Urbana-Champaign
1 Airport Road, Savoy, Illinois 61874

Human Error and Accident Causation Theories, Frameworks and Analytical Techniques:
An Annotated Bibliography

Douglas A. Wiegmann and Aaron M. Rich, Aviation Research Lab
and Scott A. Shappell, Civil Aeromedical Institute

Technical Report ARL-00-12/FAA-00-7
September 2000

Prepared for
Federal Aviation Administration
Oklahoma City, OK
Contract DTFA 99-G-006
ABSTRACT
Over the last several decades, humans have played a progressively more important causal
role in aviation accidents as aircraft have become more reliable. Consequently, a growing number of
aviation organizations are tasking their safety personnel with developing safety programs to
address the highly complex and often nebulous issue of human error. However, there is generally
no “off-the-shelf” or standard approach for addressing human error in aviation. Indeed, recent
years have seen a proliferation of human error frameworks and accident investigation schemes to
the point where there now appear to be as many human error models as there are people
interested in the topic. The purpose of the present document is to summarize research and
technical articles that either directly present a specific human error or accident analysis system,
or use error frameworks in analyzing human performance data within a specific context or task.
The hope is that this review of the literature will provide practitioners with a starting point for
identifying error analysis and accident investigation schemes that will best suit their individual or
organizational needs.
Adams, E. E. (1976, October). Accident causation and the management system.
Professional Safety, 26-29.
The paper explores accident causation in the context of management philosophy and support for
the safety professional. An underlying theme is that management’s thoughts and actions
influence work conditions and worker behavior. Accident prevention is then discussed as a two
level task. The first level consists of technical problem solving for correcting tactical errors. The
second level consists of management analysis and strategic planning for the correction of
operational errors. Heinrich’s domino philosophy of accident prevention is also analyzed in
regards to its relevance to management behavior.
Air Force Safety Center: Life Sciences Report (LSR) and USAF HF Taxonomy. (1998).
(NASA Aviation Data Sources Resource Handbook).
The Life Sciences Report and USAF human factors taxonomy are described. The human factors
category of the Life Science Report Investigations was designed to allow for a broader secondary
analysis of human factors issues. The data is limited to aircraft accidents only. The report relies
on the use of a logic tree. The human factors category is broken down into two main categories
with multiple subcategories within each. The first is the environmental category that incorporates
operations, institutions and management, logistics and maintenance, facilities services, and
egress/survival. The second is the individual category that is comprised of
physiological/biodynamic, psychological, and psychosocial subcategories.
AIRS Aircrew Incident Reporting System. (1998). (NASA Aviation Data Sources Resource
Notebook).
The AIRS is a reporting system developed by Airbus Industrie to assess how their aircraft are
operated in the real world, to gather human factor information, learn what role human factors
play in accidents, and inform other operators of the lessons learned from these events. A
taxonomy was designed for the database that is based on five categories of factors. The first
category is crew actions. There are three main components of this category.
(1) Activities of handling the aircraft and its systems
(2) Error types (based on Reason’s model of human error)
(3) Crew resource management teamskills
The other categories include personal influences (emotion, stress, motivation, etc),
environmental influences (ATC services, technical failure, other aircraft, etc.), organizational
influences (training, commercial pressure, etc.), and informational influences (checklists,
navigational charts, etc.). A keyword system to access the database has also been designed. This
keyword system is separated into two categories, crew behavior and contributory factors. An
advantage of the AIRS as a reporting system is that it allows for plots of error chains which
represent active and latent failures instrumental to an incident occurrence. It also supports trend
analysis.
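As an illustration of how such an error chain might be represented for trend analysis, the sketch below models each link in a chain with a category, a factor, and an active/latent flag. The class and factor names are hypothetical and are not the actual AIRS keyword taxonomy.

```python
# Hypothetical sketch of an AIRS-style error chain; category and factor names
# are illustrative, not the actual AIRS keywords.
from dataclasses import dataclass

@dataclass
class ChainLink:
    category: str       # e.g. "crew actions", "organizational influences"
    factor: str         # e.g. "memory lapse", "commercial pressure"
    failure_type: str   # "active" or "latent"

incident_chain = [
    ChainLink("organizational influences", "training", "latent"),
    ChainLink("informational influences", "checklist design", "latent"),
    ChainLink("crew actions", "action slip", "active"),
]

# Trend analysis across many incidents could then count how often each factor recurs.
latent_factors = [link.factor for link in incident_chain if link.failure_type == "latent"]
print(latent_factors)
```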
Alkov, R. A. (1997). Human error. In Aviation safety- The human factor (pp. 75-87).
Casper, WY: Endeavor Books.
This paper makes the argument that much is known about what causes errors, but systems cannot
be error-free and eventually errors will occur. Three factors must be considered when studying
human error.
(1) Identical types of error can have fundamentally different causes.
(2) Anyone is capable of making errors, regardless of experience level, proficiency, maturity,
and motivation.
(3) Outcomes of similar errors can be different.
Errors are classified as either design-induced or operator-induced, and can be random, systematic, or sporadic.
Other types of error classifications include errors of omission, commission, substitution,
reversible, and irreversible. The paper goes on to describe three things that a model of human
error should do. It needs to be able to predict the error, take into account data input, account for
cognitive processes, and examine actions of individuals to determine what kind of error behavior
occurred. Three taxonomies for errors are also discussed. The first taxonomy simply describes
what happened. The second taxonomy lumps together errors according to the underlying
cognitive mechanism that cause it. The third taxonomy classifies errors according to human
biases or tendencies. The slips, lapses, mistakes paradigm of error is then examined within these
taxonomies. Errors, which are unintended, are contrasted to violations, which are usually
deliberate. The author also takes a look at intentional violations performed by operators. The
decision to perform a violation is shaped by three interrelated factors. These factors are attitudes
to behavior, subjective norms, and perceived behavioral control. The role of latent failures versus
active failures is discussed. Latent failures are consequences of human actions or decisions that
take a long time to reveal themselves. Active failures have almost immediate negative outcomes.
Finally, local versus organizational factors are stressed as being important. Local factors refer to
the immediate workplace whereas organizational factors refer to those that occur outside of the
immediate workplace.
Amendola, A. (1990). The DYLAM approach to systems safety analysis. In A. G. Colombo
& A. S. De Bustamante (Eds.), Systems reliability assessment (pp. 159-251). The
Netherlands: Kluwer Academic Publishers.
The DYLAM (Dynamic Logical Analytical Methodology) is described and analyzed. DYLAM is
a methodology created to address the problem of the inability of event trees to adequately
account for dynamic processes interacting with systems’ states. DYLAM is especially useful for
developing stochastic models of dynamic systems which provide a powerful aid in the design of
protection and decision support systems to assist operators in the control of hazardous processes
in addition to systems safety assessment. The method differs from other techniques in its ability
to account for process simulations and components of reliability performance in a unified
procedure. The method uses heuristical bottom-up procedures that lead to the identification of
event sequences that cause undesired conditions. It is also able to consider changes of the system
structure due to control logic and to random events.
Baron, S., Feehrer, C., Muralidharan, R., Pew, R., & Horwitz, P. (1982). An approach to
modeling supervisory control of a nuclear power plant (NUREG/CR-2988). Oak Ridge,
TN: Oak Ridge National Laboratory.
The purpose of this report is to determine the feasibility of applying a supervisory control
modeling technology to the study of critical operator-machine problems in the operation of a
nuclear power plant. A conceptual model is formed that incorporates the major elements of the
operator and of the plant to be controlled. The supervisory control modeling framework is
essentially a top-down, closed-loop simulation approach to supervisory control that provides for
the incorporation of discrete tasks and procedurally based activities.
Barriere, M. T., Ramey-Smith, A., & Parry, G. W. (1996). An improved HRA process for
use in PRAs. Probabilistic Safety Assessment and Management ’96 (pp. 132-137). New
York, NY: Springer.
A summary of the human reliability analysis called ATHEANA (a technique for human error
analysis) is given. ATHEANA is an analytical process for performing a human reliability
analysis in the context of probabilistic risk assessment. ATHEANA is based on an understanding
of why human-system interaction failures occur as opposed to behavioral and phenomenological
description of operator responses.
Benner, L., Jr. (1975). Accident investigations: Multilinear events sequencing methods.
Journal of Safety Research, 7(2), 67-73.
The paper tries to call attention to the need to develop generally acceptable approaches and
analysis methods that will result in complete, reproducible, conceptually consistent, and easily
communicated explanations of accidents. The first step for accident investigation should be to
answer the question, “what happened?” This involves a delineation of the beginning and end of
the accident phenomenon. It is extremely important that a convention for defining precisely the
beginning and end of an accident is decided on and used. The second question to answer is,
“Why did it happen as it did?” This means a recognition of the role of conditions leading to the
accident is necessary. A general explanation of the accident phenomenon is needed. This can be
done using the P-theory of accidents. The theory states that the accident can be seen to begin
with a perturbation and end with the last injurious or damaging event in the continuing accidental
events sequence. Accident event sequences should be displayed to aid accident investigation. An
events charting method is one way to do this. It is a chronological array of events and helps
structure the search for relevant factors and events involved in the accident. A method for
presenting the accident events and enabling conditions is suggested. This method preserves
the time order and logical flow of events present in an accident. The author believes that the
adoption of the P-theory and the charting methods would improve the public's grasp of accident
phenomena.
Benner, L., Jr. (1982). 5 accident perceptions: Their implications for accident
investigations. Professional Safety, 27(2), 21-27.
The author is interested in investigating what the standards of accident investigation should be. A
common problem is that investigators may each have different ideas as to the purpose of the
investigation in relation to what their own needs and wants may be. Five distinct perceptions of
the nature of accident phenomenon are suggested to exist and the strengths of each are discussed.
These perceptions each seem to lead to a theoretical base for accident investigation. The first
perception is the single event perception, where accidents are treated as a single event. The only
strength of this perception is its tendency to concentrate attention on a single corrective measure.
A major weakness is that it provides an overly simplified explanation of accidents. The second
perception is the chain of events perception which treats accidents as a chain of sequential
events. The main focus is placed on unsafe conditions and acts. The major strength of this
perception is that the reconstruction technique provides some disciplining of the data search by
doing sequential ordering. A weakness is that the criteria for the selection of data used are
imprecise and very unlikely to lead to reproducible results. The third perception is the
determinant variable or factorial perception. This perception tries to discern common factors in
accidents by statistical manipulation of accident data. An important strength here is its ability to
discover previously undefined relationships. A major weakness is the total dependency on data
obtained by accident investigators. The fourth perception is the logic tree perception. This
presumes that converging chains of events lead to an undesired event. The major strength of this
perception is that it provides an approach to organize speculations about accidental courses of
events and allows an operator to watch for initiation events. A weakness is that the beginning
and end of an accident phenomenon are left to be decided by the individual investigators. The
fifth and final perception is the multilinear events sequence perception. This perception treats
accidents as a segment of a continuum of activities. The major strength is the way it facilitates
discovery by structuring data into logical arrays. A weakness is the perceived complexity of the
methodologies which discourages use. Three areas are addressed as problem areas that need to
be improved for accident investigators. Each investigator develops a personalized investigative
methodology instead of having a common methodology used by all investigators. Investigators
have difficulty linking investigations to predicted safety performance of an activity. Finally,
there are no standardized qualifications for investigators.
Berninger, D. J. (n.d.). Understanding the role of human error in aircraft
accidents. Transportation Research Record, 1298, 33-42.
There are two main strategies used to address human error. The first is the introduction of
technology that is intended to assist and reduce the roles of humans. The second is training and
changes to the system that are suggested by human factors. One way of looking at human error is
as human malfunction. The author argues against using this point of view stating that there is no
malfunction on the human’s part because the human is responding appropriately to experience or
the circumstance. A second way of looking at human error is as a system malfunction. A system
that fails has both animate and inanimate components, and humans cause errors with the animate
components. But human performance is not independent of the inanimate components and
environment. A distinction is made between soft deficiencies and hard-system deficiencies. Soft
deficiencies are system characteristics that work against human performance and cause humans
to fail. Hard-system deficiencies are things such as insufficient durability and cause hardware to
fail. A mechanism for system design causing aircraft accidents is presented. It states that soft
deficiencies result from vigilance, which affects effectiveness along with skill and experience.
The effectiveness is compared to flight conditions. If the effectiveness level is too low compared
to flight conditions, the safety margin decreases until an accident occurs. Human factors
specialists, engineers, and others must pursue soft deficiencies jointly. By breaking down the soft
deficiencies, accidents can be understood better and made more preventable.
Besco, R. O. (1988). Modelling system design components of pilot error. Human Error
Avoidance Techniques Conference Proceedings (pp. 53-57). Warrendale, PA: Society of
Automotive Engineers.
A five factored model based on the assumption that errors have a cause and can be prevented by
removing error-inducing elements is developed and reviewed in the context of civilian aircraft
accidents. The five factors are obstacles, knowledge, systems, skill, and attitude. The model
consists of a sequential analysis of inducing elements and the associated reducers. A detailed
step-by-step graphic model is presented in the paper.
Besco, R. O. (1998). Analyzing and preventing knowledge deficiencies in flight crew
proficiency and skilled team performance. Dallas, TX: Professional Performance
Improvement.
A five-factor model called the Professional Performance Analysis System (PPAS) is developed
and described which has a main purpose of providing remedies to minimize pilot error and
optimize pilot performance. The model has been successful for use in accident investigation. The
model attempts to deal with knowledge deficiencies and attitudinal problems with a combination
of techniques and methodologies from organizational psychology, flight operations, business
leadership and management sciences. The five interactive factors of the model include
knowledge, skills, attitudes, systems environment, and obstacles. The first step in the analysis is
describing the process, function, task, error, or low performance. At this stage an investigator is
looking to see if the pilot was aware of risks, threats and consequences of their actions and if
there were stimuli that degraded this awareness. The second step is to assess the impact of the
error on this particular accident or incident by determining whether removal would have
prevented the accident. The third step is to assess the visibility of the error to the crewmembers.
The fourth step involves analyzing a detailed flow chart to see if the crew had adequate
knowledge to cope with the errors and anomalies that occurred. There are four levels of learning
that are examined. These include unconsciously incompetent (crew is unaware that they don’t
know something), consciously incompetent (the crew is aware that they don’t know something),
consciously competent (the crew has knowledge and skill but must apply great effort to
accomplish it), and unconsciously competent (the crew has over learned the knowledge or skill
and can apply it without conscious thought). Other questions, listed below, are explored to determine
deficiencies. Recommendations are given for each of the situations where a problem was
perceived.
(1) Did the crew ever have the knowledge?
(2) Was the knowledge used often?
(3) Was there feedback on the knowledge level?
(4) Was there operationally meaningful curriculum?
(5) Did personal interaction with learning occur?
(6) Is the knowledge compatible with an organization?
(7) Was the individual’s capacity to absorb and apply information lacking?
Bieder, C., Le-Bot, P., Desmares, E., Bonnet, J. L., & Cara, F. (1998). MERMOS: EDF’s
new advanced HRA method. Probabilistic Safety Assessment and Management: PSAM 4
(pp. 129-134). New York, NY: Springer.
MERMOS is a HRA method that deals with important underlying concepts of HRAs that were
developed and examined in this paper. The basic theoretical object of the MERMOS method is
what is termed Human Factor Missions. The Human Factor Missions refer to a set of macroactions
the crew has to carry out in order to maintain or restore safety functions. Four major steps
are involved in the MERMOS method. The first is to identify the safety functions that are
affected, the possible functional responses, the associated operation objectives, and to determine
whether specific means are to be used. The second is to break down the safety requirement
corresponding to the HF mission. The third is to bridge the gap between theoretical concepts and
real data by creating as many failure scenarios as possible. The final one is to ensure the
consistency of the results and integrate them into PSA event trees.
Bisseret, A. (1981). Application of signal detection theory to decision making in supervisory
control: The effect of the operator’s experience. Ergonomics, 24(2), 81-94.
The role of signal detection theory was looked at in the air-traffic controller environment. A
general model of perceptive judgments on a radar screen for ATC controllers is proposed for
judging the future separation at the point of convergence for two aircraft. An experiment was
conducted that looked at air-traffic controllers (trainees vs. experienced) ability to detect loss of
separation of aircraft at present and in the future. The results showed that experienced controllers
use a ‘doubt’ response (a part of the model of perceptive judgments proposed) while trainees do
not. Trainees look for a sure and accurate response while experienced controllers create a
momentary class of indetermination.
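For readers unfamiliar with the signal detection measures that underlie this kind of analysis, the sketch below converts hit and false-alarm rates into sensitivity (d') and a response criterion. The rates shown are hypothetical, not data from the study.

```python
# Minimal signal detection theory calculation; the hit/false-alarm rates below
# are hypothetical, not data from the Bisseret (1981) experiment.
from statistics import NormalDist

def sdt_measures(hit_rate: float, false_alarm_rate: float):
    """Return sensitivity (d') and criterion (c) from rates strictly between 0 and 1."""
    z = NormalDist().inv_cdf              # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

print(sdt_measures(0.90, 0.20))   # e.g. an experienced controller
print(sdt_measures(0.75, 0.30))   # e.g. a trainee
```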
Braddock, R. (1958). An extension of the “Lasswell formula”. Journal of Communication,
8, 88-93.
Seven aspects of the communicative process are offered as an extension to the “Lasswell
Formula”. These aspects are WHO says WHAT to WHOM under WHAT CIRCUMSTANCES
through WHAT MEDIUM for WHAT PURPOSE with WHAT EFFECT. This formula (model)
can address errors in terms of dealing with aspects of a message, its medium, and the
expectations of the sender or receiver.
Broadbent, D. E. (1958). Perception and communication. Oxford: Pergamon Press.
Broadbent explains in detail an information flow diagram of an organism. There are five
important principles underlying his diagram. The nervous system acts as a single communication
channel that has a limited capacity. A selective operation is performed upon the input to the
channel. Selection is not random and depends on the probability of certain events and states
being present in an organism. Incoming information can be held in a temporary store for a
maximum time on the order of seconds. And finally, information can return to the temporary
store after passing through a limited capacity channel.
CAATE Civil Aviation Authority Taxonomy Expanded. (1998). (NASA Aviation Data
Sources Resource Handbook).
The CAATE was developed from analyses of controlled flight into terrain accidents that led to 'problem
statements’. These problem statements were adapted into a taxonomy. A brief version of the
taxonomy outline is presented here. Factors are divided into two main categories, causal and
circumstantial. Causal factors include the airplane, ATC/ground aids, environmental, the crew,
the engine, fire, maintenance/ground handling, the aircraft structure, infrastructure, design,
performance and an ‘other’ factor. Circumstantial factors include aircraft systems, ATC/ground
aids, environmental, the crew, infrastructure, and an ‘other’ factor.
Cacciabue, P. C., Carpignano, A., & Vivalda, C. (1993). A dynamic reliability technique for
error assessment in man-machine systems. International Journal of Man-Machine Studies,
38, 403-428.
The paper presents a methodology for the analysis of human errors called DREAMS (Dynamic
Reliability technique for Error Assessment in Man-Machine Systems). DREAMS is meant to
identify the origin of human errors in the dynamic interaction of the operator and the plant
control system. It accommodates different models of several levels of complexity such as simple
behaviouristic models of operators and more complex cognitive models of operator behaviour.
Cacciabue, P. C., Cojazzi, G., & Parisi, P. (1996). A dynamic HRA method based on a
taxonomy and a cognitive simulation model. Probabilistic Safety Assessment and
Management ‘96 (pp. 138-145). New York: Springer.
A human factors methodology called HERMES (human error reliability methods for event
sequences) is presented and compared to the “classical” THERP method. The classification
scheme is based on the model of cognition and guides field studies, the development of
questionnaires and interviews, the extraction of expert judgment, and the examination of
accidents/incidents. The overall aim is to estimate data and parameters that are included in the
analyses. The HERMES methodology is derived from four sources. The first is a cognitive
simulation model built on the theories of human error and contextual control of Hollnagel and
Reason. The second is a classification scheme of erroneous behavior. The third source is a model
of the functional response of the plant. The fourth source is a method for structuring the
interaction of the models of cognition and of plants that control the dynamic evolution of events.
Cinq-Demi Methodology and Analysis Grids. (1998). (NASA Aviation Data Sources
Resource Notebook).
This methodology was developed as a tool to analyze the error factors and operational system
faults that underlie a group of incidents or accidents. Three types of events are identified that can
influence the status of an aircraft. This status floats between the Authorized Flight Envelope,
where the probability of an accident is low (10^-7), and a Peripheral Flight Envelope, where the
probability of an accident is higher (10^-3). The three events are maneuverability, sensitivity to
disturbances, and pilotability. Maneuverability refers to maneuvers that are either imposed by the
mission or are required to accommodate environmental events. Sensitivity to disturbances
addresses internal and external events that influence aircraft status and movement. Pilotability
deals with pilots’ performance of elementary operations and tasks, and the conditions leading to
error. Five factors are proposed as conditions leading to error. These include high
workload, lack of information, misrepresentation (mental) due to the wrong use of information
and cues, misrepresentation (mental) due to ‘diabolic error’, and physical clumsiness. The
accidents and incidents are divided into key sub-events. These sub-events are then analyzed by
five grids. The first three grids represent events that can change the Status Point of the aircraft.
The fourth identifies the human environment at the time. The fifth is a matrix of operational
system faults and elementary operations.
(1) GAME (grid of aircraft maneuvers events)
(2) GASP (grid of aircraft sensitivity to perturbations)
(3) GOOF (grid of operator failures)
(4) GARE (grid of amplifiers of risk of errors)
(5) RAFT (rapid analysis fault table)
Cojazzi, G., & Cacciabue, P. C. (1992). The DYLAM approach for the reliability analysis
of dynamic systems. In T. Aldemir, N. O. Siu, A. Mosleh, P. C. Cacciabue, & B. G. Göktepe
(Eds.), Proceedings of the NATO Advanced Research Workshop on Reliability and Safety
Assessment of Dynamic Process Systems (pp. 8-23). Germany: Springer-Verlag Berlin
Heidelberg.
A review of the third generation DYLAM approach to reliability analysis is performed. DYLAM
is a powerful tool for integrating deterministic and failure events and it is based on the systematic
simulation of the physical process under study. The DYLAM framework takes into account
different types of probabilistic behaviours such as constant probabilities for initial events and
component states, stochastic transitions between the states of the component, functional
dependent transitions for failure on demand and physical dependencies, stochastic and functional
dependent transitions, conditional probabilities for dependencies between states of different
components, and stochastic transitions with variable transition rates. The DYLAM method is
defined as a type of fault-tree/event-tree method.
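A minimal sketch of the kind of coupling DYLAM formalizes is given below: a simple process variable is stepped forward in time while one component undergoes a stochastic state transition, and the simulation reports how often the undesired condition is reached. This is only an illustration of the idea, with assumed rates and thresholds, not the DYLAM algorithm itself.

```python
# Hypothetical sketch: a physical process simulated step by step while a component
# undergoes a stochastic state transition. Rates and thresholds are assumed.
import random

FAIL_RATE = 1e-3      # assumed per-step probability the drain valve fails stuck-closed
STEPS = 1000
OVERFLOW = 100.0      # assumed level at which the undesired condition occurs

def run_once(seed):
    """Return True if the undesired condition (tank overflow) is reached in this history."""
    rng = random.Random(seed)
    level, inflow, outflow = 50.0, 1.0, 1.0
    valve_ok = True
    for _ in range(STEPS):
        if valve_ok and rng.random() < FAIL_RATE:
            valve_ok = False                              # stochastic component-state transition
        level += inflow - (outflow if valve_ok else 0.0)  # process dynamics depend on the state
        if level >= OVERFLOW:
            return True
    return False

runs = 2000
print(sum(run_once(s) for s in range(runs)) / runs)       # estimated probability of overflow
```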
Cooper, S. E., Ramey-Smith, A. M., Wreathall, J., Parry, G. W., Bley, D. C., Luckas, W. J.,
Taylor, J. H., & Barriere, M. T. (1996). A technique for human error analysis (ATHEANA)
(NUREG/CR-6350). Brookhaven National Laboratory.
ATHEANA has been designed to address deficiencies in current human reliability analysis
(HRA) approaches. These deficiencies to be corrected include addressing errors of commission
and dependencies, representing more realistically the human-system interactions that have
played important roles in accident response, and integrating recent advances in psychology with
engineering, human factors, and probability risk analysis disciplines. ATHEANA is a
multidisciplinary HRA framework that has been designed to fuse behavioral science,
engineering, and human factors together. The framework elements are error forcing contexts,
performance shaping factors, plant conditions, human error, error mechanisms, unsafe actions,
probability risk assessment models, human failure events, and scenario definitions. The
ATHEANA method was demonstrated in a trial application and provided a “proof of concept”
for both the method itself and the principles underlying it.
Danaher, J. W. (1980). Human error in ATC system operations. Human Factors, 22(5),
535-545.
Errors in air traffic control systems are occurring more often as air traffic increases. The author
reviews the FAA’s program that sought to identify and correct causes of system errors which
occur as a result of basic weaknesses inherent in the composite man-machine interface. A system
error was defined as the occurrence of a penetration in the buffer zone that surrounds an aircraft.
A database called the System Effectiveness Information System (SEIS) has been maintained to allow
summaries of system error data in desired categories. A system error is allowed only one
direct cause, but may have many contributing causes. There are nine cause categories. These are
attention, judgment, communications, stress, equipment, operations management, environment,
procedures, and external factors.
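The "one direct cause, many contributing causes" rule described above can be pictured with the small sketch below; the record structure is assumed, and only the nine cause categories come from the summary.

```python
# Hypothetical SEIS-style record builder: exactly one direct cause, any number of
# contributing causes, all drawn from the nine cause categories listed above.
from typing import Dict, List

CAUSE_CATEGORIES = {
    "attention", "judgment", "communications", "stress", "equipment",
    "operations management", "environment", "procedures", "external factors",
}

def make_system_error(direct_cause: str, contributing: List[str]) -> Dict:
    if direct_cause not in CAUSE_CATEGORIES:
        raise ValueError(f"unknown direct cause: {direct_cause}")
    unknown = [c for c in contributing if c not in CAUSE_CATEGORIES]
    if unknown:
        raise ValueError(f"unknown contributing causes: {unknown}")
    return {"direct_cause": direct_cause, "contributing_causes": contributing}

print(make_system_error("communications", ["stress", "procedures"]))
```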
De Keyser, V., & Woods, D. D. (1990). Fixation errors: failures to revise situation
assessment in dynamic and risky systems. In A. G. Colombo and A. Saiz de Bustamante
(Eds.), Systems reliability assessment (pp. 231-252). Dordrechts, The Netherlands: Kluwer
Academic Publishers.
The paper identifies a major source of human error as being a failure to revise situation
assessment as new evidence becomes available. These errors are called fixation errors and are
identified by their main descriptive patterns. The paper explores ways to build new systems to
reduce this type of error. Fixation occurs when a person does not revise their situation
assessment or course of action in response to one of two things. Either the situation assessment
or course of action has become inappropriate given the actual situation, or the inappropriate
judgment or action persists in the face of opportunities to revise. Three main patterns of behavior
occur during fixation. There is the “Everything but that” pattern, the “This and nothing else”
pattern, and the “Everything is OK” pattern. The authors go on to describe a fixation incident
analysis. The analysis is broken into categories. These are initial judgment and background, the
error, opportunities to revise, neutral observer tests, incident evolution, and revision and
correction.
Diehl, A. E. (1989). Human performance aspects of aircraft accidents. In R. S. Jensen (Ed.),
Aviation psychology (pp. 378-403). Brookfield, VT: Gower Technical.
There is an important relationship between the phenomena of accident generation with the
following investigation process, and the measures that are eventually performed to prevent more
similar accidents from occurring. With this in mind, the author describes three important
elements in accident generation. First, hazards occur when a dangerous situation is detected and
adjusted for. Hazards are common. Second, incidents occur when a dangerous situation isn’t
detected until it almost occurs and an evasive action of some sort is needed. These are infrequent.
Third, accidents occur when a dangerous situation isn’t detected and does occur. These are rare.
Aircraft accident investigation consists of several discrete functions that occur in the following
sequence: fact finding, information analysis, and authority review. It is also important to examine
comparative data sources and mishap data bases. There are also important accident prevention
elements which are to establish procedural safeguards, provide warning devices, incorporate
safety features, and eliminate hazards and risks.
Dougherty, E. M., Jr., & Fragola, J. R. (1988). Human reliability analysis. New York: John
Wiley & Sons.
A human error taxonomy is discussed that draws heavily from the Rasmussen taxonomy. This is
then used to formulate a conceptual framework of technological risks. The human error
taxonomy is broken down into behavior types (mistakes, slips) and the different parts to error
(modes, mechanisms, causes). The parts of errors are expanded below:
Modes: misdetection, misdiagnosis, faulty decision, faulty planning, and faulty actions.
Mechanisms: false sensations, attentional failures, memory lapses, inaccurate recall, misperceptions,
faulty judgments, faulty inferences, and unintended actions.
Causes: misleading indicator, lack of knowledge, uncertainty, time stress, distraction, physical
incapacitation, excessive force, and human variability.
The framework shows that the human being consists of many modules that carry out selected
activities. There are mechanisms that control action. There are mechanisms that interpret, plan,
and choose actions. An executive monitor exists to control these processes. A conscious module
exists. In the framework, the human relates to the world through the senses and acts through the
motor apparatus. Skill loops are shorter and presumably faster whereas knowledge loops may
pass through all categories of the modules. Influences on human behavior may increase the
effectiveness of certain modules.
Drury, C. G., & Brill, M. (1983). Human factors in consumer product accident
investigation. Human Factors, 25(3), 329-342.
The role of accident investigation in product-liability cases is discussed. A job aid, intended to
obtain better human factors data, is developed using task analysis as a basis.
Characteristic accident patterns were found among the data and these were labeled hazard
patterns or scenarios. It is stressed that etiological data is more important to obtain than
epidemiological data. Hazard patterns are developed and discussed. The intention of hazard
patterns is to create a way to predict the behavior of a product just by looking at its
characteristics. Hazard patterns are considered useful if at least six scenarios can account for
90% or more of the in-depth investigations, each scenario leads to at least one usable
intervention strategy that works for that pattern, each scenario is mutually exclusive from all the
others, and each scenario has human factors as a parameter in its description. A generic hazard
pattern is assigned to the remaining small percentage of scenarios that are not product specific.
Hazard patterns are broken down into four parts that correspond to the task, the operator, the
machine, and the environment.
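The coverage part of the usefulness test can be expressed as a quick check like the one sketched below (the other criteria, such as mutual exclusivity and usable interventions, are analyst judgments rather than counts). The scenario counts are hypothetical.

```python
# Hypothetical check of the coverage criterion: do a handful of hazard-pattern
# scenarios account for at least 90% of the in-depth investigations?
def scenarios_cover_enough(scenario_counts, total_investigations,
                           n_scenarios=6, threshold=0.90):
    top = sorted(scenario_counts, reverse=True)[:n_scenarios]
    return sum(top) / total_investigations >= threshold

counts = [120, 85, 60, 40, 30, 25, 10, 5]   # investigations matched per candidate scenario
print(scenarios_cover_enough(counts, total_investigations=sum(counts)))
```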
Edwards, M. (1981). The design of an accident investigation procedure. Applied
Ergonomics, 12(2), 111-115.
The author points out that ergonomics has come under attack partly because models of application
are inappropriate and partly because ergonomists tend to be laboratory-centered rather than
problem-centered. The SHEL system is reviewed and suggested as a good solution to the problems
mentioned. The basis of the SHEL system is the premise that what people do in a work situation
is determined not only by their capabilities and limitations but also by the machines they work
with, the rules and procedures governing their activities and the total environment within which
the activity takes place. The model states that Hardware, Software, and Liveware (human
elements) all are system resources that interact together and with their Environment. Accidents
are described as symptomatic of a failure in the system. In order for the SHEL system to be
adopted, a change in orientation is needed so that accidents will not be regarded as isolated
events of a relatively arbitrary nature, due mostly to carelessness.
Embrey, D. E., Humphreys, P., Rosa, E. A., Kirwan, B., & Rea, K. (1984). SLIM-MAUD:
An approach to assessing human error probabilities using structured expert judgment
(NUREG/CR-3518). Brookhaven National Laboratory.
Procedures and analyses are performed to develop an approach for structuring expert judgments
to estimate human error probabilities. The approach is called SLIM-MAUD (success likelihood
index methodology, implemented through the use of an interactive computer program called
MAUD-multi attribute utility decomposition). The approach was shown to be viable in the
evaluation of human reliability.
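The arithmetic at the core of SLIM is commonly described as a weighted sum of performance shaping factor ratings (the success likelihood index, SLI), calibrated against two tasks of known error probability; the sketch below follows that description with hypothetical weights, ratings, and anchors.

```python
# Minimal SLIM-style calculation with hypothetical numbers: SLI is a weighted sum of
# PSF ratings, and log10(HEP) is assumed linear in SLI between two calibration tasks.
import math

def sli(weights, ratings):
    """Success likelihood index: weights sum to 1, ratings on a common scale (e.g. 0-1)."""
    return sum(w * r for w, r in zip(weights, ratings))

def calibrate(sli_low, hep_low, sli_high, hep_high):
    """Return (a, b) such that log10(HEP) = a * SLI + b passes through both anchors."""
    a = (math.log10(hep_high) - math.log10(hep_low)) / (sli_high - sli_low)
    b = math.log10(hep_low) - a * sli_low
    return a, b

weights = [0.4, 0.3, 0.2, 0.1]    # expert-judged PSF importance weights
ratings = [0.8, 0.6, 0.9, 0.5]    # how favourable each PSF is for the task being assessed
a, b = calibrate(sli_low=0.2, hep_low=1e-1, sli_high=0.9, hep_high=1e-4)
hep = 10 ** (a * sli(weights, ratings) + b)
print(f"Estimated HEP: {hep:.2e}")
```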
Feggetter, A. J. (1982). A method for investigating human factor aspects of aircraft
accidents and incidents. Ergonomics, 25(11), 1065-1075.
This paper describes a comprehensive procedure for determining the human behaviour that
occurs in aircraft accidents and incidents. A recommendation is made to use interviews and
check lists in order to assess behavioral data that is involved with accidents and incidents. It is
stressed that a trained human factors specialist should interview the personnel involved in these
accidents and incidents as soon as possible. The author goes on to describe a checklist for
accident and incident investigation that has been developed. It is based on a systems approach to
understanding human error. The framework for the check list proposed considers three systems.
These three systems are the cognitive system, the social system and the situational system.
Ferry, T. S. (1988). Modern accident investigation and analysis (2nd ed.), New York: John
Wiley & Sons.
The book takes a thorough, detailed look at modern accident investigation and analysis. Its
purpose is to give an investigator the necessary basics to perform an investigation. It is pointed
out that a much more detailed version would be needed to truly train an expert in accident
investigation. The book is divided into four parts. The first part investigates the who, what, why
and when aspect of accident investigation. The second part examines the roles and interactions of
man, environment, and systems. The third part reviews specific analytical techniques such as
fault trees, failure mode and effect analysis (FMEA), the technique for human error rate
prediction (THERP), the management oversight and risk tree (MORT), and the technic of
operations review (TOR). The fourth part covers topics related to accident investigation. Some
examples of these are mishap reports, management overview and mishap investigation, legal
aspects of investigation, and the future of accident investigation. Fifteen general types of
methodological approaches are identified in the accident investigation domain. These are
epidemiological, clinical, trend forecasting, statistical inference, accident reconstruction,
simulation, behavioral modeling, systems approach, heuristic, adversary, scientific, Kipling
method (investigates who, what, when, where, why, and how), Sherlock Holmes method (events
sequencing integrated in the investigator’s mind), and traditional engineering safety.
Firenze, R. J. (1971, August). Hazard control: Safety, security, and fire management.
National Safety News, 39-42.
Error is looked at in the context of three integrated groups. The first group is physical equipment
(the machine) which examines poorly designed or poorly maintained equipment that leads to
accidents. The second group is man. In this group, faulty or bad information causes poor
decisions. The third group is environment. Here failures in the environment (toxic atmospheres,
glare, etc.) affect man, machine, or both. It is also noted that stressors that appear during a
decision making process cloud a person’s ability to make sound, rational decisions.
Fitts, P. M. (1954). The information capacity of the human motor system in controlling the
amplitude of movement. Journal of Experimental Psychology, 47(6), 381-391.
Fitts found that the rate of performance in a given type of task is approximately constant over a
considerable range of movement amplitudes and tolerance limits, but falls off outside this
optimum range. It was also found that the performance capacity of the human motor system plus
its associated visual and proprioceptive feedback mechanisms, when measured in information
units, is relatively constant over a considerable range of task conditions. This paper came as a
result of information theory and applied its concepts.
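The relationship Fitts reported is usually written as an index of difficulty, log2(2A/W), with movement time roughly linear in that index; the sketch below uses assumed slope and intercept values for illustration, not Fitts's measured constants.

```python
# Fitts's index of difficulty and a linear movement-time prediction; the intercept
# and slope values are assumed for illustration.
import math

def index_of_difficulty(amplitude: float, width: float) -> float:
    """Index of difficulty in bits: log2(2A / W)."""
    return math.log2(2 * amplitude / width)

def movement_time(amplitude: float, width: float, a: float = 0.1, b: float = 0.15) -> float:
    """Predicted movement time in seconds for assumed intercept a (s) and slope b (s/bit)."""
    return a + b * index_of_difficulty(amplitude, width)

for A, W in [(10, 2), (20, 2), (20, 1)]:
    print(f"A={A} W={W}  ID={index_of_difficulty(A, W):.2f} bits  MT~{movement_time(A, W):.2f} s")
```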
Fussell, J. B. (1976). Fault tree analysis – Concepts and techniques. In E. J. Henley & J. W.
Lynn (Eds.), Proceedings of the NATO Advanced Study Institute on Generic Techniques in
Systems Reliability Assessment (pp. 133-162). Leyden, The Netherlands: Noordhoff
International Publishing.
Fault tree analysis is a technique of reliability analysis that can be applied to complex dynamic
systems. The fault tree is a graphical representation of Boolean logic associated with the
development of a particular system failure to basic failures. Fault tree analysis has numerous
benefits. It allows the analyst to determine failures deductively. It points out important aspects of
the system in regards to the failure of interest. It provides a graphical aid giving clarification to
systems management people. It provides options for qualitative or quantitative system reliability
analysis. It allows the analyst to focus on one particular system failure at a time. Finally, it
provides the analyst with genuine insight into system behavior. Three disadvantages of fault tree
analysis include the high cost of development, the fact that few people are skilled in its
techniques, and the possibility of two different people developing two different trees for the
same system. The fault tree has five basic parts. The first parts, components, are the basic
system constituents for which failures are considered primary failures during fault tree
construction. The second parts, fault events, are failure situations resulting from the logical
interaction of primary failures. The third parts, branches, are the development of any fault event
on a fault tree. The fourth parts, base events, are the events being developed. The fifth and final
parts, gates, are Boolean logic symbols that relate the inputs of the gates to the output events.
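For a sense of the quantitative side of the technique, the sketch below evaluates a small hypothetical fault tree, assuming independent basic events so that OR gates combine as 1 - prod(1 - p) and AND gates as prod(p).

```python
# Quantitative evaluation of a small hypothetical fault tree, assuming independent
# basic events. The tree structure and probabilities are made up for illustration.
from math import prod

def or_gate(*ps):
    return 1 - prod(1 - p for p in ps)

def and_gate(*ps):
    return prod(ps)

# Basic (primary) failure probabilities
p_pump_fails   = 1e-3
p_valve_stuck  = 5e-4
p_operator_err = 1e-2
p_alarm_fails  = 2e-3

# Top event: loss of cooling = (pump fails OR valve stuck) AND (operator error OR alarm fails)
p_no_flow     = or_gate(p_pump_fails, p_valve_stuck)
p_no_recovery = or_gate(p_operator_err, p_alarm_fails)
print(f"P(top event) = {and_gate(p_no_flow, p_no_recovery):.2e}")
```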
Gerbert, K. & Kemmler, R. (1986). The causes of causes: Determinants and background
variables of human factor incidents and accidents. Ergonomics, 29(11), 1439-1453.
An investigation was done with German Air Force pilots to examine critical flight incidents. The
authors are interested in examining whether a possible cause of a failure can be traced to
permanent personality characteristics of an operator or to a situational disturbance by psychophysiological
or external events. Data analysis revealed human errors that can be interpreted as a
four-dimensional error structure. Vigilance errors encompass one dimension. These are missing
or fragmentary uptake of objectively present information due to inattention, or
channellized/shifted attention. Perception errors are another dimension. These errors are
comprised of erroneous judgment, miscalculations, wrong decisions, and faulty action plans. The
third dimension is information processing errors. These are defined as false utilization of
probabilistic information. The fourth dimension is sensorimotor errors. These are deficiencies in
timing and adjustments of simple-discrete and/or complex-continuous motor activities and also
perceptual-motor confusion. The study shows that there is an entanglement and interaction of
specific causal conditions.
Gertman, D. I. (1993). Representing cognitive activities and errors in HRA trees.
Reliability Engineering and System Safety, 39, 25-34.
COGENT (cognitive event tree system), presented in this paper, is an enriched HRA event tree
method that integrates three potential means of representing human activity. These include
an HRA event-tree approach, the skill-rule-knowledge paradigm, and the slips-lapses-mistakes
paradigm. COGENT attempts to combine the classical THERP technique with more cognitively
oriented approaches to bridge the existing gap between the modeling needs of HRA practitioners
and the classification schemes of cognitive theoreticians. The paper provides a detailed
description of the method and an application to an example scenario is performed.
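As background, the basic arithmetic of a simple HRA event tree is sketched below: a task is a series of steps, each with its own human error probability, and the full-success path requires every step to succeed. Recovery factors and dependence between steps, which THERP and COGENT treat explicitly, are omitted, and the step probabilities are hypothetical.

```python
# Hypothetical series combination of step-level human error probabilities (HEPs)
# along the success path of a simple HRA event tree.
steps = {
    "read procedure":  1e-3,
    "select control":  3e-3,
    "execute action":  1e-3,
    "verify response": 5e-3,
}

p_success = 1.0
for step, hep in steps.items():
    p_success *= (1 - hep)        # every step must succeed on the success path

print(f"P(task failure) = {1 - p_success:.2e}")
```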
Gertman, D. I., & Blackman, H. S. (1994). Human reliability and safety analysis data
handbook. New York: John Wiley and Sons.
The authors provide a comprehensive review and explanation of human reliability and safety
analysis. The background and “how to” aspects of conducting human reliability analysis are
discussed. Various methods of estimating and examining human reliability are reviewed. Some
of these include human cognitive reliability, maintenance personnel performance simulation,
techniques for human error rate prediction, and fault/event trees. It is stressed that existing data
sources and data banks are useful and important for performing human reliability and safety
analyses.
Gertman, D. I., Blackman, H. S., Haney, L. N., Seidler, K. S., & Hahn, H. A. (1992).
INTENT: A method for estimating human error probabilities for decision based errors.
Reliability Engineering and System Safety, 35, 127-136.
INTENT is a method that is used to estimate probabilities associated with decision based errors
that are not normally incorporated into probabilistic risk assessments. A hypothetical example is
created that uses a preliminary data set for 20 errors of intention that were tailored to represent
the influence of 11 commonly referenced performance shaping factors. The methodological flow
for INTENT involves six stages: Compiling errors of intention, quantifying errors of intention,
determining human error probabilities (HEP) upper and lower bounds, determining performance
shaping factors (PSF) and associated weights, determining composite PSF, and determining
site-specific HEPs for errors of intention. The preliminary results show that the method provides an interim
mechanism to provide data which can serve to remedy a major deficiency of not accounting for
high consequence failures due to errors of intention.
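The flow from weighted PSFs to a site-specific HEP might look roughly like the sketch below; the log-scale interpolation between the bounds and all of the numbers are assumptions made for illustration, not the published INTENT procedure.

```python
# Hypothetical sketch of combining weighted PSF ratings into a composite and using it
# to place a site-specific HEP between the lower and upper bounds for an error of intention.
import math

def composite_psf(weights, ratings):
    """Weighted average of PSF ratings normalized to 0 (favourable) .. 1 (unfavourable)."""
    return sum(w * r for w, r in zip(weights, ratings)) / sum(weights)

def site_specific_hep(hep_lower, hep_upper, composite):
    """Interpolate between the HEP bounds on a log10 scale (an assumption) using the composite PSF."""
    log_hep = math.log10(hep_lower) + composite * (math.log10(hep_upper) - math.log10(hep_lower))
    return 10 ** log_hep

weights = [2.0, 1.5, 1.0]     # relative importance of three PSFs
ratings = [0.7, 0.4, 0.9]     # how unfavourable each PSF is at this site
print(f"HEP = {site_specific_hep(1e-4, 1e-1, composite_psf(weights, ratings)):.2e}")
```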
Gore, B. R., Dukelow, J. S., Mitts, T. M., & Nicholson, W. L. (1995). A limited assessment
of the ASEP human reliability analysis procedure using simulator examination results.
(NUREG/CR-6355). Pacific Northwest Laboratory.
The procedures and requirements for the ASEP analysis are explained. This volume does not
contain any of the background or theory involved in developing the approach.
Hahn, H. A., Blackman, H. S., & Gertman, D. I. (1991). Applying sneak analysis to the
identification of human errors of commission. Reliability Engineering and System Safety,
33, 289-300.
SNEAK is a method designed to identify human errors of commission. It is especially powerful
as an aid to discovering latent errors. The analysis performed in this paper is in the context of
electrical circuits, although a software SNEAK analysis has also been designed. Data acquisition
and encoding is the first major consideration of the method to determine that the data being used
adequately represents the true system. Network trees are also used to represent a simplified
version of the system. The network trees are examined for topological patterns. These patterns
lead to clues that help identify SNEAK conditions.
Hansen, C. P. (1989). A causal model of the relationship among accidents, biodata,
personality, and cognitive factors. Journal of Applied Psychology, 74(1), 81-90.
Data from chemical industry workers were gathered to construct and test a causal model of the
accident process. The author believes that social maladjustment traits, some characteristics of
neurosis, cognitive ability, employee age, and job experience would have an effect on accident
causation. An accident model path diagram is presented that considers variables from numerous
tests, scales and traits. These include the Bennett mechanical comprehension test, the Wonderlic
personnel test, an employee’s age, general social maladjustment scale, the distractibility scale,
job experience, involvement in counseling, accident risk, and accident consistency. The model
can be used to predict with some degree of accuracy the likelihood an employee has of getting
into an accident. This is accomplished through tests on the employee and employee data.
Harle, P. G. (1994). Investigation of human factors: The link to accident prevention. In N.
McDonald & R. Fuller (Eds.), Aviation psychology in practice (pp.127-148). Brookfield,
VT: Ashgate.
A general theme the author presents is that humans are the source of accidents, but they are also
the key to accident prevention. James Reason’s model of accident causation is examined as a
systems approach to accident causation. A step by step description of how investigations of
incidents should occur is given. It is first stressed that an investigator does not need to be a
specialist in the domain of the accident. A generalist investigator is usually well-suited.
Information needs to be collected that helps determine what happened and why it happened. The
SHEL model is useful for this type of data collection task. The SHEL model examines liveware,
software, hardware and environment of systems. Information is considered relevant and
necessary to obtain if it helps to explain why an accident or incident occurred. Two sources for
information are from primary sources and secondary sources. Primary sources include physical
equipment, documentation, audio/flight recorder tapes, etc. Secondary sources include
occurrence databases, technical literature and human factors professionals/specialists. A
framework for analyzing the occurrence data should then be used that leads to safety action as
the principal output. A human factors report of the incident/accident then needs to be written that
identifies the hazards uncovered and gives safety recommendations. Finally, follow-up actions to
prevent the identified hazards need to be taken.
Hawkins, F.H. (1997). Human error. In Human Factors in Flight, (pp. 27-56). Brookfield,
VT: Avebury Aviation.
Human error is examined in the context of aviation. Three basic tenets of human error are
developed and discussed. The first is that the origins of errors can be fundamentally different.
The second is that anyone can and will make errors. The third is that consequences of similar
errors can be quite different. From here, four different categories are used to make a
classification system for errors.
(1) Design-induced versus operator-induced
(2) Errors are either random, systematic, or sporadic
(3) Errors can be an omission, a commission, or a substitution
(4) Errors can be reversible or irreversible
Heinrich, H. W., Petersen, D., & Roos, N. (1980). Industrial accident prevention: A safety
management approach (5th ed.). New York: McGraw-Hill.
A basic philosophy of safety management and techniques of accident prevention are examined.
Accident prevention is accomplished through five separate steps, all built on a foundation of
basic philosophy of accident occurrence and prevention. The first step is organization. The
second step is fact finding. The third step is analysis. The fourth step is selection of a remedy.
The fifth step is the application of the remedy. The authors go on to describe and analyze an
updated model of accident prevention. Parts to the model include basic personal philosophies of
accident occurrence and prevention, fundamental approaches to accident prevention, collecting
data, analyzing data, selecting a remedy, applying the remedy, monitoring, and considering long-term
and short-term problems and safety programming. From here, a multitude of accident
sequence and causation models are examined and explained in terms of their usefulness.
Heinrich’s influential domino theory of accident causation is then presented. An important
hypothesis put forth is that most accidents occur because of unsafe acts rather than unsafe
conditions.
Helmreich, R. L., & Merritt, A. C. (1998). Error management: a cultural universal in
aviation and medicine. In Helmreich (Ed.), Culture at work in aviation and medicine.
Brookfield, VT: Ashgate.
The authors discuss how professional, national, and organizational cultures intersect within
organizations and can be engineered towards a safety culture. This is done by examining the
interplay of cultures through behaviors at the sharp end of a system. Error management is
suggested as a necessary strategy to create a safety culture. More empirical data is needed to
ascertain an organization’s health and practices. Five precepts of error management are
acknowledged: Human error is inevitable in complex systems. Human performance has
limitations. Humans make more errors when performance limits are exceeded. Safety is a
universal value across cultures. And finally, high-risk organizations have a responsibility to
develop and maintain a safety culture.
HFR British Airways Human Factors Reporting Programme. (1998). (NASA Aviation Data
Sources Resource Notebook).
The Human Factors Reporting Programme is a database that has four main purposes. The first is
to identify how and why a faulty plan was formulated. The second is to prevent a recurrence of
the circumstances or process. The third is to identify how well an organization supports the
activities of its flight crew. The fourth is to assure that the system does not assign blame to any
individual or agency. The database is coded into two main categories. One category is Crew
Actions. This category covers team skills (assertiveness, vigilance, workload management), errors
(action slips, memory lapses, mis-recognition), and aircraft handling (manual handling, system
handling). The other category is Influences. This category includes environmental factors
(airport facilities, ATC services, ergonomics), personal factors (complacency, distraction,
tiredness), organizational factors (commercial pressure, maintenance, training), and
informational factors (electronic checklists, information services, manuals). Each of these factors
can also be assigned in up to four ways: positive/safety-enhancing, negative/safety-degrading, first
party, or third party.
Hofmann, D. A. & Stetzer, A. (1996). A cross-level investigation of factors influencing
unsafe behaviors and accidents. Personnel Psychology, 49, 307-339.
A study was conducted to assess the role of organizational factors in the accident sequence in
chemical processing plants. Group process, safety climate, and intentions to approach other team
members engaged in unsafe acts were three group-level factors examined. Perception of role
overload was an individual-level factor that was also examined. Five hypotheses were made and
tested for significance. The first hypothesis was that individual-level perceptions of role overload
would be positively related to unsafe behaviors. This hypothesis was significant. The second was
that approach intentions would mediate the relationship between group process and unsafe
behaviors. This was not well supported. A third hypothesis was that group processes would be
negatively associated with actual accident rates. This was marginally supported. The fourth was
that safety climate would be negatively related to unsafe behaviors. This was significant. Finally
it was predicted that safety climate would be negatively related to actual accidents. This was
significant. A recommendation is made that safety practitioners engage in more systematic
organizational diagnosis.
Hollnagel, E. (1993). Human reliability analysis: Context and control. San Diego, CA:
Academic Press.
The Contextual Control Model (COCOM) is a control model of cognition that has two important
aspects. The first has to do with the conditions under which a person changes from one mode to
another. The second concerns the characteristic performance in a given mode, which relates to
determining how actions are chosen and carried out. Four control modes are associated with the
model. These are scrambled, opportunistic, tactical, and strategic. Scrambled control occurs when
the choice of next action is completely unpredictable or random. Opportunistic control is the case
where the next action is chosen from the current context alone. It is mainly based on the salient
features rather than intentions or goals. Tactical control refers to situations where a person’s
performance is based on some kind of planning and following a procedure or rule. Strategic
control means that the person is considering the global context. Two main control parameters are
used to describe how a person can change from one control mode to another. They are
determination of outcome (succeed or fail), and estimation of subjectively available time
(adequate or inadequate). Four additional parameters are number of simultaneous goals,
availability of plans, the event horizon, and the mode of execution. The number of simultaneous
goals parameter refers to whether or not multiple goals are considered or just a single goal is
considered. The availability of plans parameter refers to having pre-defined or pre-existing plans
for which the next action can be chosen. The event horizon parameter is concerned with how
much of the past and future is taken into consideration when a choice of action is made.
Reference to the past is called the history size while reference to the future is called the
prediction length. The mode of execution parameter makes a distinction between subsumed and
explicit actions where a mode of execution can be ballistic/automatic or feedback controlled. The
relationships of how a person can change from one mode to another and the performance
characteristics of each control mode are discussed at length. The purpose of COCOM is to model
cognition in terms of contextual control rather than procedural prototypes.
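As an illustration of how the two main control parameters might drive a change of control mode, consider the following minimal Python sketch. The ordering of modes and the simple step-up/step-down rule are assumptions made for illustration only; they are not Hollnagel's published transition conditions.

MODES = ["scrambled", "opportunistic", "tactical", "strategic"]

def next_mode(current, outcome_succeeded, time_adequate):
    """Return the next control mode given outcome and subjectively available time."""
    i = MODES.index(current)
    if outcome_succeeded and time_adequate:
        i = min(i + 1, len(MODES) - 1)  # move toward more orderly control
    elif not outcome_succeeded and not time_adequate:
        i = max(i - 1, 0)               # degrade toward scrambled control
    return MODES[i]                     # otherwise remain in the current mode

mode = "opportunistic"
for outcome_ok, time_ok in [(True, True), (True, True), (False, False)]:
    mode = next_mode(mode, outcome_ok, time_ok)
    print(mode)  # tactical, strategic, tactical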
Hollnagel, E. (1998). Cognitive reliability and error analysis method (CREAM). Oxford: Alden Group.
Hollnagel introduces a second generation human reliability analysis method. This method has
two requirements. It must use enhanced probabilistic safety assessment event trees and it must go
beyond the categorization of success-failure and omission-commission. The purpose of CREAM
is to offer a practical approach for both performance analysis and prediction and be as simple as
possible. The model is expressed in terms of its functions as opposed to its structure. Four
aspects of the CREAM method are cited as being important. CREAM is bi-directional and
allows retrospective analysis as well as performance prediction. The method is recursive rather
than strictly sequential. There are well-defined conditions that indicate when an analysis or a
prediction is at an end. And finally, the model is based on the distinction between competence
and control which offers a way of describing how performance depends on context. CREAM
uses classification groups as opposed to a hierarchical classification scheme. This classification
scheme separates causes (genotypes) from manifestations (phenotypes). Also, CREAM relies on
the Contextual Control Model (COCOM) of cognition which is an alternative to information
processing models.
ICAO Circular (1993). Investigation of human factors in accidents and incidents. 240-
AN/144. Montreal, Canada: International Civil Aviation Organization.
The ADREP database records results of aviation accident investigations conducted by ICAO
member states. The information is used to create aviation accident reduction programs. Each
aviation accident or incident is recorded as a series of events. Human factors topics are structured
into the SHEL model format which covers the individual, the human-environment interface, the
person-person aspect, and the person-software aspect. The SHEL model addresses the
importance of human interaction and the use of written information and symbology while
simultaneously allowing the Reason model of accident causation to be applied.
Jensen, R. S. & Benel, R. A. (1977). Judgment evaluation and instruction in civil pilot
training (Final Report FAA-RD-78-24). Springfield, VA: National Technical Information
Service.
A taxonomy of pilot errors is developed. Three general behavioral categories are specified. The
first category is procedural activities. Flight activity examples included under this category are
setting switches, selecting frequencies, programming a computer and making communications.
These activities are characterized as discrete events that involve cognitive processes. The second
level is perceptual-motor activities. These types of activities involve continuous control
movements in response to what a pilot sees in the environment. The third level is decisional
activities. This involves cognitive activities and judgments and is the most difficult aspect to
handle in realistic flight environments. Using this taxonomy, total percentages for fatal and non-fatal
accidents from each category were calculated for a 4-year period. Procedural activities were
responsible for 4.6% of the fatal and 8.6% of the non-fatal accidents. Perceptual-motor activities
were responsible for 43.8% of the fatal and 56.3% of the non-fatal accidents. Decisional
activities were responsible for 51.6% of the fatal and 35.1% of the non-fatal accidents.
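For reference, the reported percentages can be tabulated directly; the short check below simply confirms that each column accounts for essentially all accidents in its class.

percentages = {
    "procedural":       {"fatal": 4.6,  "non_fatal": 8.6},
    "perceptual_motor": {"fatal": 43.8, "non_fatal": 56.3},
    "decisional":       {"fatal": 51.6, "non_fatal": 35.1},
}

for outcome in ("fatal", "non_fatal"):
    total = sum(row[outcome] for row in percentages.values())
    print(outcome, round(total, 1))  # both columns sum to 100.0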
Johnson, W. B., & Rouse, W. B. (1982). Analysis and classification of human errors in
troubleshooting live aircraft power plants. IEEE Transactions on Systems, Man, and
Cybernetics, SMC-12(3), 389-393.
Two experimental studies were used to develop and evaluate a scheme for classifying human
errors in troubleshooting tasks. The experiments focused on looking at errors in diagnosis by
advanced aviation maintenance trainees. Experimenters were able to decrease the number of
errors with experimental changes. A modification of the classification system of van Eekhout
and Rouse (1982) was used to classify errors into five general categories in the second
experiment. These categories are observation of state errors, choice of hypotheses errors, choice
of procedure errors, execution of procedures errors, and consequence of previous error. The new
classification system led to the redesign of the training program and a decrease in the frequency
of particular types of human error.
Johnson, W. G. (1980). MORT: Safety assurance systems. New York: Marcel Dekker, Inc.
The MORT (management oversight and risk tree) logic diagram is a model of an ideal safety
program which is good for analyzing specific accidents, evaluating and appraising safety
programs, and indexing accident data and safety literature. MORT is useful in safety program
management for three reasons. It prevents safety-related oversights, errors, and omissions. It
identifies and evaluates residual risks and refers them to appropriate management levels for
action. Thirdly, it optimizes the allocation of safety resources to programs and specific controls.
MORT is basically a diagram that presents a schematic representation of a dynamic, idealized
safety system model using fault tree analysis. Three levels of relationships exist that aid in the
detection of omissions, oversights, and defects. These are generic events, basic events, and
criteria. Furthermore, MORT explicitly states the functions that are necessary to complete a
process, the steps to fulfill a function, and the judgment criteria. A step by step outline is
provided for using the MORT system. The system is illustrated with examples. A major fault
with MORT is described as affirmation of the consequent. This is the fallacy of inferring truth of
an antecedent from the truth of the consequent.
Kahneman, D. & Tversky, A. (1984). Choices, values, and frames. American Psychologist,
39(4), 341-350.
The paper discusses the cognitive and psychophysical factors of choice in
risky and riskless contexts. A hypothetical value function is developed that has three important
properties. These properties are that the value function is defined on gains and losses rather than
on total wealth, it is concave in the domain of gains and convex in the domain of losses, and it is
considerably steeper for losses than for gains. This last property has been labeled loss aversion.
Three main points are made apparent. First, the psychophysics of value lead to risk aversion in
the domain of gains and risk seeking in the domain of losses. Second, risk aversion and risk
seeking decision making can be manipulated by the framing of relevant data. Third, people are
often risk seeking in dealing with improbable gains and risk averse in dealing with unlikely
losses.
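The three properties can be captured by a parametric value function of the kind used in later prospect theory work; the functional form and parameter values below are illustrative assumptions, not taken from this 1984 paper.

ALPHA = 0.88   # curvature for gains (concave)
BETA = 0.88    # curvature for losses (convex)
LAMBDA = 2.25  # loss-aversion coefficient (> 1: losses loom larger than gains)

def value(x):
    """Subjective value of a gain (x > 0) or loss (x < 0) relative to a reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

print(value(100), value(-100))   # about 57.5 versus about -129.4 (loss aversion)
print(value(200) / value(100))   # about 1.84, i.e. doubling a gain less than doubles its value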
Kashiwagi, S. (1976). Pattern-analytic approach to analysis of accidents due to human
error: An application of the ortho-oblique-type binary data decomposition. J. Human
Ergol., 5, 17-30.
An ortho-oblique-type binary data decomposition is proposed as a means of classifying patterns
of human error. The method is described mathematically and then applied to accidents in freight-car
classification yard work. The ortho-oblique-type of binary data decomposition is useful
because it tends to produce results that are very easily interpretable from the empirical point of
view. The main reason for adopting the method is that it allows data in the form of documents to
be made feasible for numerical classification by use of binary data matrices. The analysis of the
data showed that there are specific patterns of relevant and background conditions for most
accidents that are due to human error.
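The data-preparation step the method relies on, coding accident documents as a binary matrix, can be sketched as follows; the condition labels and reports are invented for illustration, and the decomposition itself is not reproduced.

CONDITIONS = ["night shift", "time pressure", "signal missed", "new worker"]

reports = [
    {"night shift", "signal missed"},
    {"time pressure", "signal missed", "new worker"},
    {"night shift", "time pressure"},
]

# Rows are accidents, columns are conditions, entries are 1 if the condition applies.
binary_matrix = [[1 if c in report else 0 for c in CONDITIONS] for report in reports]

for row in binary_matrix:
    print(row)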
Kayten, P. J. (1989). Human performance factors in aircraft accident investigation. Human
Error Avoidance Techniques Conference Proceedings (pp. 49-56). Warrendale, PA: Society
of Automotive Engineers.
The author examines the evolution of human performance investigation within the National
Transportation Safety Board (NTSB). The importance of the background of the accident
investigator is explored. An argument is made that a background in the domain of the accident is
helpful, but definitely not required to be effective. Relevant facts to be collected, ignored, and to
be further thought about are discussed. It is greatly stressed that investigative techniques and
analytic methods still need to be improved to better manage human error.
Kirwan, B. (1998). Human error identification techniques for risk assessment of high risk
systems. Part 1: Review and evaluation of techniques. Applied Ergonomics, 29(3), 157-177.
This first part of a two part paper outlines thirty-eight approaches to error identification. They are
categorized by the type of error identification approach used and then they are critiqued by a
broad range of criteria. Trends and research needs are noted along with the identification of
viable and non-viable techniques. An error is broken down into three major components. The first
component is the external error mode. This refers to the external manifestation of the error. The
second component is the performance shaping factors. These influence the likelihood of an error
occurring. The third component is the psychological error mechanism. This is the internal
manifestation of the error. The authors go on to recognize seven major error types that appear to
be of interest in current literature. These are slips and lapses, cognitive errors (diagnostic and
decision-making errors), errors of commission, rule violations, idiosyncratic errors, and software
programming errors. In order to show the general orientation of form of each error identification
technique, five broad classifications have been developed. These include taxonomies,
psychologically based tools, cognitive modeling tools, cognitive simulations, and reliability-oriented
tools. The different approaches were also classified by their analytic method. These
methods are the checklist-based approaches, flowchart-based approaches, group-based
approaches, cognitive psychological approaches, representation techniques, cognitive
simulations, task analysis linked techniques, affordance-based techniques, error of commission
identification techniques, and crew interactions and communications. Ten important criteria to
evaluate the different techniques are laid out. The criteria are comprehensiveness of human
behavior, consistency, theoretical validity, usefulness, resources (actual usage, training time
required, requirement of an expert panel), documentability, acceptability (usage to date,
availability of technique), HEI output quantifiability, life cycle stage applicability, and primary
objective of the technique. Some main techniques are identified which could be useful for
general practice, but it is pointed out that no single technique is sufficient for all of a
practitioner’s needs. It is suggested that a framework-based or toolkit-based approach would be
most beneficial.
Kirwan, B. (1998). Human error identification techniques for risk assessment of high risk
systems. Part 2: Towards a framework approach. Applied Ergonomics, 29(5), 299-318.
This second paper of the series describes framework-based and toolkit-based approaches to
human error identification in the nuclear power and reprocessing industries. Advantages
and disadvantages are considered. Framework approaches try to deal with all human error types
in an integrative way by using a wide array of tools and taxonomies that have been found to be
effective. The Human Error and Recovery Assessment system (HERA) is a framework approach
that is outlined in this paper. The HERA system is a document and a prototype software package.
The paper only describes in detail the procedure for skill and rule based error identification. The
document is the formal system and has main modules or functional sections. One such main
module is the scope analysis and critical task identification. This module deals with factors to
consider, logistical and otherwise, along with phases of operations to look at. A second module is
task analysis. Initial task analysis and Hierarchical Task Analysis are the two major forms of task
description that are used and described. A third module is skill and rule based error
identification. For this module, nine error identification checklists are used that may overlap
somewhat. These checklists are explained in some detail and include mission analysis,
operations level analysis, goals analysis, plans analysis, error analysis, performance shaping
factor based analysis, psychological error mechanism based analysis, Human Error Identification
in Systems Tool (HEIST) analysis, and human error HAZOP. The five remaining modules that
are not explained in detail are diagnostic and decision-making error identification, error of
commission analysis, rule violation error identification, teamwork and communication error
identification, and integration issues. The toolkit framework approach seeks to ensure that all
relevant error types are discovered by using several existing techniques. It is also pointed out that
there may be a useful synergistic relationship between human error analysis and ergonomics
evaluation.
Kletz, T. (1992). Hazop and hazan: Identifying and assessing process industry hazards.
Bristol, PA: Hemisphere Publishing.
Hazard and operability study (HAZOP) is a technique for identifying hazards without waiting for
an accident to occur. It is a qualitative assessment. A series of guide words are used in HAZOP
to explore types of deviations, possible causes, consequences and actions required. Hazard
analysis (HAZAN) is a technique for estimating the probability and consequences of a hazard
and comparing them with a target or criterion. It is a quantitative assessment. HAZAN contains
three steps. The first is to estimate the likelihood of an incident. The second is to estimate
consequences to employees, the public and environment, and to the plant and profits. The third
step is to compare these results to a target or criterion to decide if action is necessary to reduce
the probability of an occurrence.
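The three HAZAN steps lend themselves to a short worked example; all of the figures and the target criterion below are invented for illustration.

incident_frequency = 1e-3      # step 1: estimated incidents per year
fatality_per_incident = 0.01   # step 2: estimated chance an incident causes a fatality

fatality_risk = incident_frequency * fatality_per_incident  # fatalities per year

TARGET = 1e-5  # step 3: hypothetical acceptable risk criterion, per year

if fatality_risk > TARGET:
    print("Risk %.1e/yr exceeds target %.1e/yr: action required" % (fatality_risk, TARGET))
else:
    print("Risk %.1e/yr meets target %.1e/yr" % (fatality_risk, TARGET))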
Kubota, R., Ikeda, K., Furuta, T., & Hasegawa, A. (1996). Development of dynamic human
reliability analysis method incorporating human-machine interaction. Probabilistic Safety
Assessment and Management ‘96 (pp. 535-540). New York: Springer.
The authors describe an updated dynamic human reliability analysis method that considers
interactions within the plant. It compares and evaluates the response time between the cases
where the safety limit of the plant is quickly reached and the cases where it is not. The proposed
cognition mechanism borrows from the Monte Carlo calculation using the probabilistic network
method and Rasmussen’s decision making model. The authors intend the new dynamic human
reliability analysis method to replace the THERP (technique for human error rate prediction) and
TRC (time reliability correlation) methods.
Lasswell, H. D. (1948). The structure and function of communication in society. In L.
Bryson (Ed.), The communication of ideas (pp. 37-51). US: Harper and Row.
The ‘Lasswell formula’ is a description of an act of communication asking these questions:
(1) Who
(2) Says what
(3) In which channel
(4) To whom
(5) With what effect
Three functions are performed while employing the communication process in society. The first
is surveillance of the environment. The second is correlation of the components of society in
making a response to the environment. The third is transmission of the social inheritance. This
formula (model) can address errors in terms of dealing with aspects of a message, its medium,
and the expectations of the sender or receiver.
Laughery, K. R., Petree, B. L., Schmidt, J. K., Schwartz, D. R., Walsh, M. T. & Imig, R. G.
(1983). Scenario analyses of industrial accidents. Sixth International System Safety
Conference (pp. 1-20).
An accident analysis procedure is developed that is based on two contentions. The first is that
it is necessary to answer the question, “what happened?” The second is that it is important to
recognize that all accidents, no matter how minor, represent a valuable source of data. Four
categories of variables exist within the method. The first category is demographic variables. This
includes such aspects as gender, job classification, the day, the location, etc. The second category
is labeled accident scenario code. This includes prior activity, the accident event, the resulting
event, the injury event, the agent of the accident, and the source of injury. The third category
deals with injury variables. This breaks down into the body part injured, the injury type, and the
injury severity. The final category is labeled causal factors. This is broken down further into
human causes, and equipment/environment causes. The analytic procedures consider frequency,
severity, and potential for effective interventions. The analyses used are a frequency analysis and
a scenario analysis which describes accident patterns.
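A single accident record coded with the four categories of variables might look like the sketch below; all field names and values are invented for illustration.

accident_record = {
    "demographic": {"gender": "male", "job": "welder", "day": "Tuesday", "location": "shop floor"},
    "scenario_code": {
        "prior_activity": "carrying stock",
        "accident_event": "slip on wet floor",
        "resulting_event": "fall",
        "injury_event": "struck workbench",
        "agent": "wet floor",
        "source_of_injury": "workbench edge",
    },
    "injury": {"body_part": "forearm", "type": "laceration", "severity": "minor"},
    "causal_factors": {"human": ["rushing"], "equipment_environment": ["leaking hose"]},
}

# A frequency analysis over many such records would count recurring scenario patterns.
print(accident_record["scenario_code"]["accident_event"])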
Macwan, A., & Mosleh, A. (1994). A methodology for modeling operator errors of
commission in probabilistic risk assessment. Reliability Engineering and System Safety, 45,
139-157.
A methodology is described that incorporates operator errors of commission in nuclear power
plant probabilistic risk assessments (PRA). An initial condition set is obtained by combining
performance influencing factors with information taken from the plant PRA, operating
procedures, information on plant configuration, and physical and thermal-hydraulic information.
These initial condition sets are fed into the primary tool of the methodology called Human
Interaction TimeLINE (HITLINE). HITLINE generates sequences of human action, including
errors, in time. At each branching point of the HITLINE, mapping rules are used to relate
performance influencing factors with errors. A quantification scheme is used to assign weights at
each of the branching points. A sample exercise is performed using the methodology and is
validated in terms of the current PRA framework.
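The idea of generating a time-ordered sequence of operator actions by weighted branching can be sketched as follows; the branch labels and weights here are invented, whereas the actual methodology derives them from performance influencing factors and mapping rules.

import random

BRANCHES = [
    ("correct action", 0.90),
    ("error of omission", 0.06),
    ("error of commission", 0.04),
]

def sample_branch():
    labels, weights = zip(*BRANCHES)
    return random.choices(labels, weights=weights, k=1)[0]

random.seed(0)
sequence = [sample_branch() for _ in range(5)]  # five branching points in time
print(sequence)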
Mangold, S. J., & Eldredge, D. (1993). An approach to modeling pilot memory and
developing a taxonomy of memory errors. In R. S. Jensen & D. Neumeister (Eds.),
Proceedings of the Seventh International Symposium on Aviation Psychology (263-268).
Columbus, OH: The Ohio State University.
A review of the methodology used to develop a taxonomy of memory errors in
pilots is performed. It is based on the connectionist approach of cognitive functioning. Five
categories of memory-related key terms were developed. The key words reflect the types of
breakdowns that can occur in the memory process. The first category is information encoding
errors. These are defined as failures to encode relevant information so that it can be accessed at a
later time. The second category is meaning structure errors. These are memory errors that come
from problems with representational structures. Processing competition errors are a third category.
These errors have to do with the cognitive system being busy with one task and failing to
adequately manage a second task. A fourth category is information retrieval errors. These are
described as failures to achieve the same cognitive state at information retrieval as was present
when the information was encoded. The final category is artifact-induced errors. These errors
come as a result of the complex demands of the advanced automation cockpit.
Marteniuk, R. G. (1976). Information processing in motor skills. New York: Holt, Rinehart
and Winston.
This book presents an information processing model. The basic human performance model
discussed has three major mechanisms that mediate information in the environment and
movement. The perceptual mechanism is the first one described. This mechanism receives
environmental information from the senses. Perception is argued to have three general classes of
processes. These are sensory capacities, information selection and prediction, and memory. The
second mechanism is the decision mechanism. This mechanism deals with deciding on a plan of
action for the current information that is available. The third mechanism is the effector
mechanism. This mechanism organizes a response and activates the motor commands to the
muscular system. It is emphasized that feedback information is an important part of the model
which allows correction in the effector mechanism if there is enough time. Memory also plays a
crucial role in the model and has implications for and causes interactions with the perceptual,
decision, and effector mechanisms. Two types of skills are identified that can be analyzed using
the model. Open skills occur in environments where the conditions under which the skill is
performed are continually changing in space. This causes increased time pressure and stress.
Closed skills occur in environments where the critical cues for the performance of that skill are
static or fixed in one position.
Maurino, D. E., Reason, J., Johnston, N. & Lee, R. B. (1995). Widening the search for
accidental causes: A theoretical framework. In Beyond aviation human factors: Safety in
high technology systems (pp. 1-30). Vermont: Ashgate.
This chapter tries to outline a theoretical framework that seeks to provide a principled basis both
for understanding the causes of organizational accidents and for creating a practical remedial
toolbag that will minimize their occurrence. The framework traces the development of an
accident sequence. It considers organizational and managerial decisions, conditions in various
workplaces, and personal and situational factors that lead to errors and violations. Active and
latent failure pathways to an event are identified. Events are defined as the breaching, absence or
bypassing of some or all of the system’s various defenses and safeguards. Within the framework,
organizational pathogens are introduced into a system where they follow two main pathways to
the workplace. In the first pathway the pathogens act upon the defenses, barriers and safeguards
to create latent failures. In the second pathway the pathogens act upon local working conditions
to promote active failures.
McCoy, W. E., III, & Funk, K. H., II. (1991). Taxonomy of ATC operator errors based on a
model of human information processing. Proceedings of the 6th International Symposium
on Aviation Psychology (pp. 532-537). Columbus, OH: The Ohio State University, The
Aviation Psychology Laboratory.
An analysis of accidents was conducted that provided a classification of ATC errors based on a
human information processing model. The errors can be further explained in terms of inherent
human limitations such as working memory capacity and duration limits. The authors conclude
that it is advisable to develop a set of systematic design strategies which consider the propensity
of human beings to make errors and try to mitigate the adverse consequences of such errors.
McRuer, D. (1973). Development of pilot-in-the-loop analysis. AIAA Guidance and Control
Conference (pp. 515-524). Stanford, CA.
A pilot’s dynamic characteristics when operating as a controller are affected by several physical,
psychological, physiological, and experimental variables which are contained in four categories.
These are task variables, environmental variables, procedural variables, and pilot-centered
variables. Pilot-in-the-loop analysis is discussed. It is argued that pilot-in-the-loop analysis is
dependent on four different aspects of research. The first aspect is experimental determination of
human pilot dynamic characteristics for a wide variety of situations and conditions. The second
aspect is evolution of mathematical models and manipulative rules. The third aspect is
relationships between the pilot-vehicle situation and the objective and subjective pilot
assessments. The fourth and final aspect is combination of pilot dynamics and equivalent aircraft
mathematical models to treat particular problems. Two fundamental concepts of pilot-in-the-loop
analysis are guidance and control, together with the notion that the pilot sets up and closes the loop.
MEDA Maintenance Error Decision Aid. (1998). (NASA Aviation Data Sources Resource
Notebook).
The purpose of MEDA is to give maintenance organizations a better understanding of how
human performance issues contribute to error. This occurs by providing line-level maintenance
personnel with a standardized methodology to analyze maintenance errors. MEDA provides two
levels of analysis. At one level, local factors are analyzed. At another level, organizational
factors are analyzed. MEDA has many benefits. It uses a human-centered approach to
maintenance error event analysis. The local factors analysis gives maintenance ownership of
individual event analysis. MEDA uses standardized definitions and data collection processes that
are consistent across and within airlines. Data is obtained that allows for organizational trend
analysis. The maintenance investigator gains an increased awareness of human performance
investigation techniques. A final benefit of MEDA is that it provides a process that improves
the effectiveness of corrective actions chosen.
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our
capacity for processing information. The Psychological Review, 63(2), 81-97.
The amount of information that a human can process in immediate memory is examined.
Important with regard to human error is Miller's testing of absolute judgments of single and multidimensional
stimuli. Absolute judgment is limited by the amount of information according to
Miller. Miller also states that immediate memory is limited by the number of items to be
remembered.
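The information-theoretic reading of these limits can be made concrete with a small calculation; the 2.5-bit figure used below is the order of magnitude commonly cited for unidimensional absolute judgments.

import math

def alternatives_from_bits(bits):
    return 2 ** bits

def bits_from_alternatives(n):
    return math.log2(n)

print(alternatives_from_bits(2.5))  # about 5.7 distinguishable categories
print(bits_from_alternatives(7))    # about 2.8 bits for 7 equally likely items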
Nagel, D. C. (1988). Human error in aviation operations. In E. L. Wiener & D. C. Nagel
(Eds.), Human factors in aviation (pp. 263-303). New York: Academic Press, Inc.
Nagel argues that an error model needs to meet three criteria. It needs to explain in detail why a
human error occurs so that a solution strategy can be developed. It needs to be predictive and not
just descriptive. It also must not ignore systematic research in the field of behavioral and life
sciences. A three stage simple error model called the information-decision-action model is
presented to illustrate the previously named criteria. The first stage of the model is the
acquisition, exchange and processing of information. Stage two is where decisions are made and
specific intents or plans to act are determined. Stage three is where decisions are implemented
and intents acted upon. Nagel points out three approaches that reduce the occurrence and severity
of human error in complex human-machine systems. One approach is to design controls,
displays, operational procedures and the like in a careful and informed way. A second approach
is to reduce errors through selection and training. A third approach is to design systems to be
error-tolerant.
NASA ASRS-Aviation Safety Reporting System Database (1998). (source NASA Data
Sources Resource Notebook).
The Aviation Safety Reporting System (ASRS) is an incident database that collects, analyzes,
and responds to voluntarily submitted aviation safety incident reports. Valuable human factors
information can be obtained from the database. ASRS analysts choose appropriate fields to code
each report for the database. Eleven general categories are suggested to the ASRS analysts for
classification.
(1) Affective or cognitive states—attitude, complacency, fatigue, etc.
(2) Capability—inadequate certification, unfamiliar with operation, etc.
(3) Circumstances affecting human performance—equipment design, noise, workload, etc.
(4) Distraction—checklist, radio communication, socializing, etc.
(5) Inadequate briefing—cockpit, preflight, etc.
(6) Inadequate planning—inflight, preflight, other
(7) Inadequate technique—air traffic control, communication, flying, etc.
(8) Misread—chart, instrument, publication
(9) Non-adherence to—clearance, instruction, publication
(10) Other behaviors or non-behaviors—altitude callout omitted, perception problem, etc.
(11) Physical state—hypoxia, illness, incapacitation, etc.
National Transportation Safety Board. (1992). Human performance investigation
procedures (vol. III) [manual]. Washington, DC: Author.
The NTSB’s human performance investigation procedure is explained. The NTSB seeks to
examine six human performance factors within their investigations. These are behavioral factors,
medical factors, operational factors, task factors, equipment design factors, and environmental
factors. Examples of actual checklists used to examine these factors in accidents are included in
this manual.
Navarro, C. (1989). A method of studying errors in flight crew communication. Perceptual
and Motor Skills, 69, 719-722.
A method is described which uses the information processing paradigm to study errors in flight
crew communication. The taxonomy of errors proposed is based on two dimensions. In the first
dimension, an evaluation of the type of communication errors is made. These can be classified as
having to do with transmission, detection, identification, interpretation, and action linked to
communication. The second dimension evaluates the type of adjustment made. For individuals,
this concerns problem-solving by the operator. For interactive environments, this involves
problem-solving by a crew. The taxonomy specifically includes transmission of a message,
detection of a message, identification of a message, interpretation of a message, and action taken
in regards to the message.
Nawrocki, L. H., Strub, M. H., & Cecil, R. M. (1973). Error categorization and analysis in
man-computer communication systems. IEEE Transactions on Reliability, R-22(3), 135-
140.
The authors examine traditional approaches to human reliability and present a new technique
that permits the system designer to derive a mutually exclusive and exhaustive set of
operator error categories in a man-computer system. Error categories are defined in terms of
process failures and provide a qualitative index suitable for determining error causes and
consequences. The new index is tested on a set of data. From this, it is determined that the new
methodology offers a designer a systematic means for deriving error categories which appear to
be acceptable for systems in which the operator must transform and input data, such as in
information reduction tasks.
Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ:
Prentice-Hall Inc.
The authors define a human information processing system in terms of symbols. Eight important
characteristics of a human information processing system are discussed in relation to problem
solving tasks. The system contains an active processor, input and output systems, long-term
memory, short-term memory and external memory. Long-term memory has unlimited capacity
and is organized associatively, with its contents being symbols and structures of symbols. Short-term
memory holds about 5 to 7 symbols. Sensory modalities, processes, and motor patterns are
symbolized and handled identically in short-term memory and long-term memory. Overall
processing rates are limited by read rates from long-term and extended memory. Extended
memory is defined as the immediately available visual field. The information processing
system’s program is structured as a production system, the condition for evocation of a
production being the presence of appropriate symbols in the short-term memory augmented by
the foveal extended memory. A final important characteristic of the system is that a class of
symbol structures or goal structures are used to organize problem solving.
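A minimal production system of the kind described, in which a production fires when its condition symbols are present in short-term memory, can be sketched as follows; the symbols and rules are invented for illustration.

stm = {"goal:add", "digits:3,4"}  # short-term memory holds a few symbols

productions = [
    ({"goal:add", "digits:3,4"}, "say:7"),   # condition symbols -> action
    ({"goal:add"}, "ask:which digits?"),
]

def cycle(stm_contents):
    """Fire the first production whose condition symbols are all present in STM."""
    for condition, action in productions:
        if condition <= stm_contents:
            return action
    return None

print(cycle(stm))             # say:7
print(cycle({"goal:add"}))    # ask:which digits?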
Norman, D. A. (1981). Categorization of action slips. Psychological Review, 88(1), 1-15.
The author concentrates on action errors in regards to slips. Three major categories of slips are
addressed. One category is based on errors in the formation of the intention. A second category is
based on errors having to do with faulty activation of schemas. The third category is based on
errors dealing with faulty triggering. The author proposes a model called the activation-trigger-schema
system (ATS). It contains a system of activated schemas with a triggering mechanism for
determining appropriate time for activation. This provides a satisfactory framework for the
categorization and analysis of slips. The ATS model is considered novel for five reasons. It
combines schemas, activation values, and triggering conditions. It considers the application of
motor action sequences. The role of intention is considered. There is consideration of the
operation of cognitive systems when several different action sequences are operative
simultaneously. And finally, it is novel because a specific application of this framework to the
classification of slips is employed.
Norman, D. A. (1986). Cognitive engineering. In D. A. Norman & S. W. Draper (Eds.), User
centered system design (pp. 31-62).
This selected chapter introduces a theory for action to understand what the user of a system is
doing. A discrepancy between the psychological terms of the user and the physical variables of a
system is stated as the Gulf of Execution and the Gulf of Evaluation. Bridging the gap in the
Gulf of Execution is done in four segments that deal with intention formation, specifying the
action sequence, and executing the action. Bridging the gap in the Gulf of Evaluation consists of
comparing the interpretation of system state with the original goals and intentions.
OASIS: Occurrence Analysis and Safety Information Systems. (1998). (NASA Aviation
Data Sources Resource Notebook).
The OASIS system is a database that is based on the ICAO standard. The major difference is that
the OASIS manual provides definitions of the explanatory factors. OASIS is also able to
generate safety reports from data entered during an investigation. Explanatory factors are
structured into eight categories:
(1) Between people
(2) Human-environment
(3) Human-machine
(4) Human system support
(5) Physical
(6) Physiological
(7) Psychological
(8) Psychosocial
O’Connor, S. L., & Bacchi, M. (1997). A preliminary taxonomy for human error analysis
in civil aircraft maintenance operations. Ninth Biennial Symposium on Aviation
Psychology.
The authors argue that a reporting scheme for human error analysis needs to have three steps. It
needs to provide detail and structure of the error form or tool. It needs to provide a method of
data collection and a procedure of implementing such a tool. It also needs to provide a storage
and utilization mechanism. This paper describes an error taxonomy that tries to provide detail
and structure of human error in aircraft maintenance. Three broad classification of human error
are identified that are based on a task oriented classification that include maintenance and
dispatch activities. The first classification, external error modes, is based on three main activities.
These are repair, service, and inspection/checking. The second classification, performance
influencing factors, is split into six main groups. These include task factors, task support,
situational factors, environmental factors, personnel factors, and error agents. The third and final
classification, psychological error mechanisms, is based on four models. Information processing
theory, symbolic processing theory, Endsley’s model of mechanisms of situational awareness,
and Rasmussen’s skill-rule-knowledge based levels of cognitive control are the four important
models that serve as the basis of this classification.
O’Hare, D. (1992). The “artful” decision maker: A framework model for aeronautical
decision making. International Journal of Aviation Psychology, 2(3), 175-191.
This paper reviews the available literature on aeronautical decision making and then proposes a
new framework called ARTFUL. This framework is a goal-directed process with five
functionally separate components that deal with situational awareness (detection and diagnosis),
risk assessment, planning, response selection, and response execution. This framework
recognizes three important points. It is acknowledged that most routine decision making arises
directly from situation awareness which then maps directly to response selection. Errors may
arise in the process of response execution in predictable forms such as slips. It is also recognized
that as long as the current state is consistent with the current goal state and no other threatening
circumstances exist, the current goal will continue to be pursued. The framework can be
described and defined by key steps in the process that are linked to decision making states.
Awareness of the situation as a result of monitoring is the first step. From here, risk of current
and alternative courses of action is assessed. This leads to time assessment which is a critical
factor in decision making for dynamic environments. Finally, further options are generated.
O’Hare, D. (in press). The “wheel of misfortune”: A taxonomic approach to human factors
in accident investigation and analysis in aviation and other complex systems. Ergonomics.
The Reason model of human error has been very influential in accident investigation because of
its complexity and breadth. However, a major criticism for using this model as an accident
causation model is its linear sequence of levels instead of considering intersecting influences
from various points. A revised theoretical model and associated classification framework is
proposed to help guide the accident investigation process. It is named the Wheel of Misfortune.
There are three concentric spheres in this model. The innermost circle represents the actions of
the front line personnel. The middle circle represents local precipitating conditions. The
outermost circle represents the global conditions generated by organizations. Actions of the
individual operator are described in terms of an internal function taxonomy. This taxonomy
includes the Skill-Rule-Knowledge framework that Rasmussen developed. The local conditions
circle includes factors that may be critical in the breakdown of human performance in complex
systems. These include weather and internal states of the flightcrew among other factors. The
global conditions circle considers the context within which the task activity takes place. This
includes organizational processes. This model has three potentially valuable functions. The
concentric spheres within spheres are better than the linear sequence of factors in representing
accident causation. This provides an alternative to Reason’s Swiss Cheese model. In the wheel of
misfortune, the strength of a system is determined by the outer shell of the model. The model is
also good for directing the attention of the investigator to specific questions within the layers of
concern such as local actions, immediate realities of the operational environment, and influences
of organizational functioning. A final benefit is that the model is expressed in terms of general
processes which are independent of functioning within any specific domain. This allows
information from other models to be used in this framework. The model is similar to the
“Taxonomy of Unsafe Operations” model of Shappell and Wiegmann, but uses a representation
at a higher level of abstraction that gives greater comprehensiveness and parsimony.
O’Hare, D., Wiggins, M., Batt, R., & Morrison, D. (1994). Cognitive failure analysis for
aircraft accident investigation. Ergonomics, 37(11), 1855-1869.
Two studies were conducted to investigate the applicability of an information processing
approach to human failure in the aircraft cockpit. In the first study, the authors attempt to
validate Nagel’s three stage information processing model of human performance. The model
confirmed that decisional factors are extremely important in fatal accidents. It is determined that
Nagel’s model is an oversimplification. There are at least five, not three, distinct categories of
errors that can occur in the cockpit. These are perceptual, decisional, procedural, monitoring, and
handling errors. In the second study, the authors develop a more detailed analysis of cognitive
errors based on a theoretical model proposed by Rasmussen and further developed by Rouse and
Rouse. A taxonomic algorithm that was derived from Rasmussen’s work was used to classify
information processing failures. The algorithm focused on structural and mechanical errors,
information errors, diagnostic errors, goal errors, strategy errors, procedure errors, and action
errors.
Paradies, M. (1991). Root cause analysis and human factors. Human Factors Society
Bulletin, 34(8), 1-6.
Root cause analysis attempts to achieve operator excellence by establishing an aggressive
program to review the accidents, determine their root causes, and take prompt corrective action.
Event investigation systems seem to be successful when they contain certain basic
characteristics. They need to identify the event’s sequence, set a goal to find a fixable root cause,
avoid placing blame, be easy for investigators to learn and use, be easy for managers to
understand, and provide an easily understood graphic display of the event for management
review.
Pedrali, M. (1997). Root causes and cognitive processes: Can they be combined in accident
investigation? Ninth International Symposium of Aviation Psychology.
A methodological approach is proposed that relies on a model of cognition and a classification of
human errors. The intent of the approach is to reconstruct the process of cognition through which
latent failures give rise to active failures. A major concern with modern methodological
approaches is that they may stray too far from the context of the accident. A classification
scheme is devised that distinguishes three categories. The first category is person-related causes.
This deals with specific cognitive functions of people and general person-related functions. The
second category, system-related causes, focuses on training, equipment, procedure, and interface
issues. The third category is environment-related causes. These include ambient conditions,
communication, organization, and working conditions. The main principles of the HERMES
model were used to create a prototype software tool called DAVID (Dynamic Analysis of Video
in Incident studies).
Petersen, D., & Goodale, J. (1980). Readings in industrial accident prevention. New York:
McGraw-Hill.
This book is a forum for debate and information sharing of key issues in accident causation and
prevention. The first part of the book deals with the basic philosophy of accident prevention.
Some issues examined are the differences between unsafe acts and unsafe conditions. Models of
accident phenomenon are discussed. The pros and cons of cost-benefit analyses are argued. The
idea of an “injury tax” to help offset some problems of a company’s unwillingness to increase
safety is presented. Important principles and factors when attempting to control accidents are
also explained. The second part of the book examines accident prevention methods in stages.
Data collection and analysis is an important stage. Issues examined are mathematical evaluations
and creating a safety program priority system. Systems approach issues are examined in
separate papers. Monitoring, motivating, and training are discussed due to their relevance in
preventing accidents. The third section of the book covers miscellaneous subjects such as the
professionalism of investigators and management, insurance issues, risk management, and the
effectiveness of OSHA.
Ramsey, J. D. (1985). Ergonomic factors in task analysis for consumer product safety.
Journal of Occupational Accidents, 7, 113-123.
The author proposes a model that identifies contributing factors to accidents. This is done by
following the information processing steps of an accident sequence and listing factors that affect
each stage of the process. The accident sequence model has four levels. If any level's criteria are
not met, the chance of an accident increases. If a level's criteria are met, the sequence progresses
to the next level until a state with an increased chance of no accident is reached. When exposure
to a hazardous product occurs, a person goes through the four levels in order,
assuming the criteria for each level are met. Level one is the perception of the hazard. This
involves sensory skills, perceptual skills, and the state of alertness. Level two is cognition of the
hazard. Factors affecting this are experience and training, mental abilities, and memory abilities.
Level three is the decision to avoid the hazard. Factors affecting this are experience and training
again, attitude and motivation, risk-taking tendencies, and personality. The fourth and final level
is the ability to avoid the hazard. Relevant factors here are anthropometrical, biomechanical, and
motor capabilities.
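The four-level sequence can be sketched as a simple sequential check; the boolean inputs stand in for the perceptual, cognitive, decisional, and ability factors listed above.

LEVELS = ["perception of the hazard", "cognition of the hazard",
          "decision to avoid the hazard", "ability to avoid the hazard"]

def accident_likely(levels_met):
    """Return True if the sequence breaks down at any level."""
    for level, met in zip(LEVELS, levels_met):
        if not met:
            print("breakdown at:", level)
            return True
    print("all levels met: increased chance of no accident")
    return False

accident_likely([True, True, False, True])  # breakdown at the decision level
accident_likely([True, True, True, True])   # sequence completes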
Rasmussen, J. (1981). Models of mental strategies in process plant diagnosis. In J.
Rasmussen and W.B. Rouse (Eds.), Human Detection and Diagnosis of System Failures
(pp.241-258). Plenum Press, New York.
The author states that the ultimate purpose of diagnosis in process plant control is to link the
observed symptoms to the actions which will serve the current goal properly. The paper contrasts
a topographic search versus a symptomatic search to locate problems. The author argues against
the trend of designing man-machine interfaces around the presentation of measured variables on
visual display units as bar graphs and/or mimic displays while also attempting to unload the
operator through alarm analysis and reduction. In the proposed optimal computer-based design,
the sharp distinctions between the functions of alarm and safety systems, control systems, and
operators disappear. A key role of the computer will be as a partner of the operator in higher-level
supervisory control.
Rasmussen, J. (1982). Human errors. A taxonomy for describing human malfunction in
industrial installations. Journal of Occupational Accidents, 4, 311-333.
A taxonomy for event analysis is presented. The taxonomy recognizes that error mechanisms and
failure modes depend on mental functions and knowledge which are activated by subjective
factors. They are not directly observed but are inferred. A model of human information
processing is needed to relate elements of human decision making and action to internal
information processes for which generic psychological mechanisms and limitations can be
identified. Such a model was developed that draws on a distinction between three levels of
behavior. Skill-based domain of behavior includes subconscious routines and performance that is
controlled by stored patterns of behavior in a time-space domain. Rule-based domain of behavior
includes performance in familiar situations and is controlled by stored rules for coordination of
subroutines. Knowledge-based domain of behavior occurs in unfamiliar situations where actions
must be planned from an analysis. In this domain decisions need to be based on knowledge of the
functional and physical properties of the system while also giving importance to the priority of
the various goals. It is possible for the same required mental function to be served by different
information processes, each with its own error mechanism. A five-dimension, multi-facet
classification system is described for an accidental chain of events. The dimensions include
external causes, internal failure mechanisms, internal mental functions failed, external mode of
action failures, and external tasks. Categories of the taxonomy directly related to the
inappropriate human performance are listed. These are:
(1) Personnel task—identification of the task performed
(2) External mode of malfunction—immediate observable effect of human malfunction
(3) Internal human malfunction—internal mental function of a person’s decision making which
was not performed as required by the task
(4) Mechanisms of human malfunction
(5) Causes of human malfunction—identifies possible external causes of inappropriate human
action
(6) Performance shaping and situational factors—general conditions that can influence error
probability but not cause errors in and of themselves
Reason, J. (1979). Actions not as planned: The price of automatization. In G. Underwood
(Ed.), Aspects of consciousness (pp. 67-89). London: Academic Press.
The author takes a detailed look at minor slips and lapses that humans make in everyday life, and
tries to sort out some of the conceptual confusion that exists between an act and its
consequences. A brief experiment of natural history observations was performed that asked
subjects to record unintended or absent-minded actions. Two predictions are made. First, test
failures will occur when the open-loop mode of control coincides with a critical decision point
where the strengths of the motor programs beyond that point are markedly different. The second
prediction is that when errors occur, it will involve the unintended activation of the strongest
motor program beyond the node. The discussion then switches to ‘Slips of action’. It is argued
that ‘slips of action’ have certain consistencies. They occur almost exclusively during the
automatic execution of highly practiced and ‘routinized’ activities. They often result from the
misdirection of focal attention. Finally, they usually take the form of some frequently and
recently performed behavioral sequence.
Reason, J. (1990). Human error. Cambridge, MA: Cambridge University Press.
The author presents a framework of accident causation. The framework is an expansion of a
“resident pathogen” metaphor, meaning that causal factors are present in a system before an
accident sequence actually occurs. This leads to a differentiation of active failures and latent
failures. An important premise of the framework is that accidents come from fallible decisions
that are made by designers and decision makers. Five basic elements of a system are first
identified and then related to breakdowns in a system. One element is that decision makers are
those who set goals for the system and can make fallible decisions (latent failures). Another
element is that line management implements the strategies of the decision makers and are subject
to deficiencies themselves (latent failures). A third element is that preconditions are conditions
that permit efficient and safe operations and can be precursors for unsafe acts (latent failures). A
fourth element is that productive activities are the actions performed by man and machine and
lead to unsafe acts (active failures). The final element is that defenses are safeguards against
foreseen hazards and can be inadequate (active and latent failures). Unsafe acts can be broken
down into different types. If actions are unintended, they can be slips or lapses. Slips are
attentional failures that can be caused by intrusions, omissions, reversals, misordering, or
mistiming. Lapses are memory failures that lead to omitting planned items, place-losing, and
forgetting. If actions are intended, they are classified as either mistakes or violations. Mistakes
are either rule-based, where there is a misapplication of a good rule or an application of a bad
rule, or they can be knowledge-based. Violations are either routine, exceptional, or even acts of
sabotage. It is argued that accidents occur as a penetration of various levels of the framework
occurs. Latent errors combined with local triggering events lead to accidents and incidents.
Rockwell, T. H., & Giffin, W. C. (1987). General aviation pilot error modeling – Again?
Proceedings of the 4th International Symposium on Aviation Psychology (pp. 712-720).
Columbus, OH: The Ohio State University, The Aviation Psychology Laboratory.
Process models are created to initially explore three types of pilot error in general aviation. These
are visual flight rules (VFR) flight into instrumental meteorological conditions (IMC), pilot fuel
mismanagement, and pilot response to critical inflight events. The models are intended to explain
large percentages of accidents of a specific type, pinpoint specific research needs to understand
and to verify elements in the models, and to create implementable countermeasures to reduce the
probability of pilot error. The models depict decisional processes, list typical errors, contributing
factors and propose needed research.
Rouse, W. B. (1983). Models of human problem solving: Detection, diagnosis, and
compensation for system failures. Automatica (19), 613-625.
The paper looks at the role of the human operator as a problem solver in man-machine systems.
Various models of human problem solving are examined and a design for an overall model is
outlined. The overall model attempts to capture the whole of problem solving and be
operationalized within specific task domains. A basic mechanism for the proposed model
incorporates pattern recognition models. Problem solving occurs on three general levels. These
are recognition and classification, planning, and execution and monitoring. The model can produce
the behavior of solving problems in a top-down and a bottom-up manner and almost
simultaneously on several levels.
Rouse, W. B., & Rouse, S. H. (1983). Analysis and classification of human error. IEEE
Transactions on Systems, Man, and Cybernetics, SMC-13(4), 539-549.
There are two major approaches to human error, probabilistic and causal. This paper deals with
the causal approach, which focuses more on why errors occur than on just what occurs. It is
argued that classification schemes can generally be categorized in one of three ways. They can
be behavior-oriented, task-oriented, or system-oriented. Behavior-oriented schemes range from
those emphasizing basic human information processing to those that focus on types of behavior
occurring in particular task domains. Task-oriented schemes focus on information transfer
problems, distraction events, and discriminating among types of tasks. System-oriented schemes
apply categories of a relatively broad nature which cover a series of tasks within the domain of a
particular system. A methodology is developed and discussed with the goal of analyzing human
error in terms of causes as well as contributing factors and events. It borrows heavily from
several of the previously discussed classification schemes. There are four general classes of
contributing factors for human errors: Inherent human limitations, inherent system limitations,
contributing conditions, and contributing events. Inherent human limitations include the
knowledge and attitude of the operator. Inherent system limitations include the design of controls
and displays, design of dialogues and procedures, and level of simulator fidelity. Contributing
conditions include environmental factors such as noise, excessive workload, frustration, anger,
embarrassment, confusion and operating in degraded modes. Contributing events involve
distractions, lack of or misleading communication, sudden equipment failures, and events such
as tension release. The proposed methodology is used to reanalyze data reported by Rouse et al
having to do with the design and evaluation of computer-based methods for presenting
procedural information such as checklists for normal, abnormal, and emergency aircraft
operations. The results are finer grained and support stronger conclusions than those originally
found.
Samanta, P. K., & Mitra, S. P. (1982). Modeling of multiple sequential failures during
testing, maintenance and calibration (NUREG/CR-2211). Brookhaven National Lab.
This report looks at the nature of dependence among human failures in a multiple sequential
action and how it differs from other types of multiple failures. It is necessary to consider
dependent failures in a system because otherwise there are serious doubts as to the usefulness of a reliability calculation that considers random events only. There are two types of dependent
failures. Common cause failures are caused by an event outside the group but common to the
components. Cascade failures are caused from within the group such as a single component
failure which results in the failure of all components concerned. A multiple sequential failure
during testing and maintenance is modeled by taking into account the processes involved in such
a failure. The data suggest that the dependence among failures increases as the number of
components in the system increases. Human error causes selective failure of components
depending on when the failure started. Since previous models of dependent failures were found
to be lacking, two models were developed. The first is very general and does not require any
dependent failure data. The second model explicitly takes multiple sequential failures into account.
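The practical importance of dependence can be shown with a back-of-the-envelope comparison. The sketch below is illustrative only and does not reproduce either of the report's models; the single-failure probability and the coupling value are hypothetical.

```python
# Illustrative only: compares an independence assumption with a simple coupling
# assumption for three identical components maintained sequentially.
# Numbers are hypothetical, not taken from the report.

p_single = 1e-2   # probability that a given maintenance act leaves one component failed
beta = 0.3        # assumed conditional probability that, given one failure,
                  # the same error is repeated on the next component

# Independence: all three components fail only if three independent errors occur.
p_all_independent = p_single ** 3

# Simple sequential-dependence sketch: the first failure occurs with p_single,
# and each subsequent component then fails with conditional probability beta.
p_all_dependent = p_single * beta * beta

print(f"independent assumption : {p_all_independent:.2e}")  # 1.00e-06
print(f"dependent assumption   : {p_all_dependent:.2e}")    # 9.00e-04
```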
Sanders, M. S. & McCormick, E. J. (1993). Human error, accidents, and safety. In Human
factors in engineering and design (7th ed.) (pp. 655-695). New York: McGraw-Hill, Inc.
This chapter deals with human error, accidents and improving safety in the domain of human
factors. Human error is defined as an inappropriate or undesirable human decision or behavior
that reduces, or has the potential for reducing, effectiveness, safety, or system performance. A
review of three classification schemes is performed: Swain and Guttmann's omission-commission model, a model by Rouse and Rouse, and Rasmussen's skill-rule-knowledge based model. An argument is made for three general types of accident causation theories: accident-proneness theories, job demand versus worker capability theories, and psychosocial theories.
Three aspects of risk perception are examined. These are the availability heuristic, the “It can’t
happen to me” bias, and relative risk. It is noted that warnings and product liability have become
important legal issues in today's society. As a result, human factors has a large role to
play in these issues. The remainder of the article discusses Sanders and Shaw’s contributing
factors in accident causation model (CFAC). This model has five important tiers. Tier one
concerns itself with management issues. Tier two focuses on the physical environment,
equipment design, the work itself, and the social/psychological environment. The third tier
involves the worker and coworker factors. Tier four is where all unsafe behaviors from the
previous tiers are grouped. The fifth tier is the level of chance that leads to an accident. Unique
features of CFAC include the emphasis on management and social-psychological factors, the
recognition of the human-machine-environment system and the model’s simplicity and easy
comprehension.
Shappell, S. A. & Wiegmann, D. A. (1995). Controlled flight into terrain: The utility of
models of information processing and human error in aviation safety. Proceedings of the
Eighth International Symposium on Aviation Psychology, 8, 1300-1306.
A study of controlled flight into terrain (CFIT) accidents in the U.S. military was conducted to ascertain why such accidents occur. The study covered an 11-year period. The four-stage
information processing model of Wickens and Flach was used to classify 206 of the 278 pilot
causal factors found. These four stages are short-term sensory store, pattern recognition, decision
and response selection, and response execution. Reason’s model of unsafe acts was applied to the
data and allowed for the classification of 223 of 278 pilot causal factors. It was concluded that
any intervention needs to focus on the decision process of the pilot, specifically on mistakes and violations.
Shappell, S. A. & Wiegmann, D. A. (1997). A human error approach to accident
investigation: The taxonomy of unsafe operations. International Journal of Aviation
Psychology, 7(4), 269-291.
A framework is provided and discussed called the Taxonomy of Unsafe Operations. This
framework bridges the gap between classical theories and practical application by providing field
investigators with a user-friendly, common sense framework that allows accident investigations
to be conducted and human causal factors classified. Three levels of failure involving the human
component are presented. These are unsafe supervision, unsafe conditions of operators, and
unsafe acts that operators commit. The framework directly incorporates Reason’s classification
of unsafe acts. Three basic error types are incorporated. The first are slips which are
characteristic of attentional failures. The second are lapses which come from memory failures.
The third are mistakes, which are defined as intentional behavior that does not produce the desired outcome. Mistakes can be further broken down as either rule-based or knowledge-based, as described in Rasmussen's model. A framework was developed to break down unsafe conditions of the operator. Substandard conditions of the operator are divided into three
categories. These include adverse physiological states, adverse mental states, and physical and/or
mental limitations. Substandard practices of the operator are also broken down into three
categories. These categories are mistakes-misjudgments, crew resource mismanagement, and
readiness violations. A framework for unsafe supervision is also developed. One dimension of
this framework deals with unforeseen unsafe supervision. Examples of this are unrecognized
unsafe operations, inadequate documentation and procedures, and inadequate design. The other
dimension deals with known unsafe supervision. Examples of this include inadequate
supervision, planned inappropriate operations, failure to correct known problems, and
supervisory violations. The usefulness of this cause-oriented taxonomy was demonstrated by applying it to a military aviation accident.
Shappell, S. A., & Wiegmann, D. A. (2000). The human factors analysis and classification
system (HFACS) (Report Number DOT/FAA/AM-00/7). Washington DC: Federal Aviation
Administration.
The Human Factors Analysis and Classification System (HFACS) was originally developed for
the U.S. Navy and Marine Corps as an accident investigation and data analysis tool. Since its
original development however, HFACS has been employed by other military organizations (e.g.,
U.S. Army, Air Force, and Canadian Defense Force) as an adjunct to preexisting accident
investigation and analysis systems. To date, the HFACS framework has been applied to over
1,000 military aviation accidents yielding objective, data-driven intervention strategies while
enhancing both the quantity and quality of human factors information gathered during accident
investigations. Other organizations such as the FAA and NASA have also explored the use of
HFACS as a complement to preexisting systems within civil aviation in an attempt to capitalize
on gains realized by the military. Specifically, HFACS draws upon Reason’s (1990) concept of
latent and active failures and describes human error at each of four levels of failure: 1) unsafe
acts of operators (e.g., aircrew), 2) preconditions for unsafe acts, 3) unsafe supervision, and 4)
organizational influences. The manuscript provides a detailed description and examples of each of these categories.
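As a rough illustration of how causal factors might be tagged and tallied against the four HFACS levels (a sketch only; the subcategories HFACS defines within each level are omitted and the example factors are hypothetical):

```python
from collections import Counter
from enum import Enum


class HFACSLevel(Enum):
    """The four HFACS failure levels described by Shappell and Wiegmann (2000)."""
    UNSAFE_ACTS = "Unsafe acts of operators"
    PRECONDITIONS = "Preconditions for unsafe acts"
    UNSAFE_SUPERVISION = "Unsafe supervision"
    ORGANIZATIONAL = "Organizational influences"


# Hypothetical causal factors tagged by an analyst.
tagged_factors = [
    ("descended below minimum safe altitude", HFACSLevel.UNSAFE_ACTS),
    ("crew fatigue after extended duty day", HFACSLevel.PRECONDITIONS),
    ("mission approved without required crew rest", HFACSLevel.UNSAFE_SUPERVISION),
    ("scheduling policy encouraged back-to-back sorties", HFACSLevel.ORGANIZATIONAL),
]

# A simple frequency count by level -- the kind of aggregate view that supports
# data-driven intervention strategies.
counts = Counter(level for _, level in tagged_factors)
for level, n in counts.items():
    print(f"{level.value}: {n}")
```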
Siegel, A. I., Bartter, W. D., Wolf, J. J., Knee, H. E., & Haas, P. M. (1984). Maintenance
personnel performance simulation (MAPPS) model: Summary description (NUREG/CR-
3626). Oak Ridge, TN: Oak Ridge National Laboratory.
This report describes a human performance computer simulation model developed for the
nuclear power maintenance context. The model looks at variables such as work place,
maintenance technician, motivation, human factors, and task-orientation. Information is provided
about human performance reliability pertinent to probabilistic risk assessment, regulatory
decisions, and maintenance personnel requirements. The technique allows for the assessment of
the tasks that maintenance technicians may perform in a less than satisfactory manner and what
conditions or combination of conditions serve to contribute to or alleviate such performance.
Silverman, B. G. (1992). Critiquing human error: A knowledge based human-computer
collaboration approach. San Diego, CA: Academic Press.
A model of human error is examined that is useful for constructing a critic system in
artificial intelligence. The model is rooted in the psychological study of expert performance. The
adapted model of human error consists of three levels. The outermost layer provides a method to
account for external manifestations of errors in human behavior. These occur as cues such as
knowledge rules, models, or touchstones, that need to be followed to reach a correct task
outcome. The middle layer determines what causes lead to the erroneous behaviors identified in
the outer layer. The inner layer investigates the innermost cognitive reasons causing the error.
This is performed by teasing apart the processes and operations that contribute to that cause. The
diagnostic graph of cognitive operations leading to human error includes four categories that all fall under the heading of cue usage error. Cognitive biases form the first category. Examples of
these are input filter biases, info acquisition biases, info processing biases, intended output
biases, and feedback biases. Accidental slips and lapses form a second category. Examples of these
include environment/feedback errors, info acquisition errors, info processing errors, and intended
output errors. A third category is cultural motivations. Errors that occur in this category are
rational actor errors, incrementalism errors, recognition primed errors, and process control errors.
The final category is the missing knowledge category. Examples of these errors are initial
training errors, knowledge decay errors, and multidisciplinary errors. This model focuses on errors that occur before the time pressure and stress that arise during crises or moments of panic. A
framework in which errors arise is presented and described. It is a system with four entities.
(1) The person or expert making the judgments.
(2) The task-environment within which the person makes the judgments.
(3) The feedback loop consisting of actions and reactions of people.
(4) The automated critics that try to influence a person’s judgment and decision.
Singleton, W. T. (1973). Theoretical approaches to human error. Ergonomics, 16(6), 727-
737.
Two types of approaches to human error are discussed. The technological approach involves
coping with many problems with or without laboratory support and then attempting to generalize
the rules of the game in terms of classifications of kinds of problems with associated remedies.
The scientific approach is based on the principle that theory is the bridge between experiment
and practice. The author discusses different theoretical approaches and extracts useful elements
relevant to human error. The approaches looked at include the psychoanalytic approach, the
stimulus-response approach, field theories, cybernetics, human performance and skill, decision theory, the arousal/stress theories, and the social theories. The author concludes that errors and
accidents are not homogeneous, therefore it is necessary for a practitioner to match the most
relevant taxonomy and theory to the particular practical problem. It is suggested that no single method will provide a complete answer, but rather that this matching is where the greatest dividend is likely to be found. A comprehensive, unweighted approach to problems is then presented.
Stoklosa, J. H. (1983). Accident investigation of human performance factors. Proceedings
of the 2nd International Symposium on Aviation Psychology (pp. 429-436). Columbus, OH:
The Ohio State University, The Aviation Psychology Laboratory.
The paper discusses the necessary factual information for a detailed and systematic investigation
of the human performance aspects of an accident. Six profile categories are established which
include behavioral, medical, operational, task, equipment design, and environmental factors. This
concept has been successfully implemented in actual multi-modal accident investigations.
Sträter, O. (1996). A method for human reliability data collection and assessment.
Probabilistic Safety Assessment and Management ‘96 (pp. 1179-1184). New York:
Springer.
A method for evaluation of plant experience for probabilistic assessment of human actions is
described. The method is able to support root cause analysis in the evaluation of events and to
describe human failures with respect to HRA purposes. The method is applied to boiling water
reactor events and the results are compared with the data tables of the THERP handbook. The
evaluation framework was subdivided into two steps, the decomposition of an event into units
called MMS (man-machine systems), and then detailed analysis of the MMS-units. It was
concluded that it is possible to validate most of the items of the THERP handbook using the new
method. The new method is a reasonable procedure for analyzing simulator data as well as for improving human reliability systematically in a wide range of industries (e.g., aviation and power plants).
Swain, A. D., & Guttmann, H. E. (1980). Handbook of human reliability analysis with emphasis on nuclear power plant applications (NUREG/CR-1278). Albuquerque, NM: Sandia National Laboratories.
The Technique for Human Error Rate Prediction (THERP) model is presented in this handbook.
The steps in THERP define the system failures of interest, list and analyze the related human
operations, estimate the relevant error probabilities, estimate the effects of human errors on the
system failure events, and recommend changes to the system and recalculate the system failure
probabilities. THERP is interested in evaluating task reliability, error correction, task effects, and
the importance of effects. Probability tree diagrams are the basic tools of THERP.
Tarrants, W. E. (1965, May). Applying measurement concepts to the appraisal of safety
performance. Journal of ASSE, 15-22.
The author addresses the issue of non-injurious accidents, or near misses, as an important basis
for an accident prevention program designed to remove those causes before more severe
accidents can occur. The Critical Incident Technique is presented and explained to deal with
these near misses. The Critical Incident Technique is a method that identifies errors and unsafe
conditions which contribute to accidents within a given population by means of a stratified
random sample of participant-observers selected from within this population. Interviews with
people whose jobs are being studied are performed to reveal unsafe errors, conditions and other
critical incidents. The incidents are classified into hazard categories from which accident
problem areas are defined. Accident prevention programs are then designed to deal with the
critical incidents. The technique is reapplied periodically to detect new problem areas and to
measure effectiveness of the accident prevention program that was designed. The technique of
behavior sampling is also reviewed as an accident measure. The technique is concerned with the
acts a person is engaged in at the moment of an accident.
Van Eekhout, J. M., & Rouse, W. B. (1981). Human errors in detection, diagnosis, and
compensation for failures in the engine control room of a supertanker. IEEE Transactions
on Systems, Man, and Cybernetics, SMC-11(12), 813-816.
An error classification system is presented in a marine context that is an extension of
Rasmussen’s scheme of classifying human errors in nuclear power plant operations. The general
categories of error in the classification system are observation of the system state, identification
of a fault, choice of the goal, choice of the procedure, and execution of the procedure. A study
was conducted with crews of professional engineering officers to see how well they could cope with failures in a high-fidelity supertanker engine control room simulator. Two
important conclusions were made. First, human factors design inadequacies and fidelity
problems lead to human errors. Second, a lack of knowledge of the functioning of the basic
system as well as automatic controllers is highly correlated with errors in identifying failures.
Wickens, C. D. & Hollands J. G. (2000). Engineering psychology and human performance
(3rd ed.). Upper Saddle River, NJ: Prentice Hall.
The information processing model is a framework that is broken down into a series of stages and
feedback loops. Sensory processing is the first stage. Information from the environment gains
access to the brain. Sensory systems each have their own short-term sensory store. Perception is
the second stage. Raw sensory data is transmitted to the brain and is interpreted and given
meaning. Perception is automatic and occurs rapidly. It relies on both bottom-up processing and
top-down processing. Cognition and memory is another stage in the model. Working memory is
part of this stage. This stage is characterized by conscious activities which transform or retain
information and are resource limited. From here, some information is transferred to long-term
memory. The next two stages are response selection and response execution. Feedback loops are
important and necessary for monitoring progress to see if a task was completed. Attention is
another important aspect of the model. Attention is a limited resource that can be selectively
allocated to the desired channels. Attention can also be divided between different tasks and
mental operations. The model can be used to describe human error as occurring at different stages. Mistakes (knowledge- and rule-based) can arise from problems in perception,
memory, and cognition. Slips occur during action execution. Lapses and mode errors are related
to memory failures.
Wiegmann, D. A. & Shappell, S. A. (1997). Human factors analysis of postaccident data:
Applying theoretical taxonomies of human error. International Journal of Aviation
Psychology, 7(1), 67-81.
Three conceptual models of information processing and human error are examined and used to
reorganize the human factors database associated with military aviation accidents. The first
model was the four stage information processing model of Wickens and Flach. The second
model used was O’Hare’s adapted version of Rasmussen’s taxonomic algorithm for classifying
information processing failures. The third model used was Reason’s approach to classification of
active failures. It was found that the naval aviation accident database could be reorganized with a
large degree of success into the three taxonomies of human error. It is also noted that a general
trend was found. Accidents were primarily associated with procedural and response-execution
errors as well as mistakes. The four-stage information processing model accounted for slightly fewer pilot causal factors than did the other two models.
Wiegmann, D. A., & Shappell, S. A. (1999). A human factors approach to accident analysis
and prevention. (Workshop Manual from the 43rd Human Factors and Ergonomics Society
Conference).
This paper is an expansion of the framework given in Shappell and Wiegmann’s (1997) paper.
The only difference is that organizational factors are considered and added into the framework.
Three categories of organizational influences exist. Resource management refers to the
management, allocation, and maintenance of organizational resources. Examples of these include
human resources, monetary resources, and equipment/facility resources. Organizational climate
considers the prevailing attitudes and atmosphere in an organization. Important aspects of this
category include the structure, policies, and culture of the organization. Operational process is
the final category and is defined as the formal process by which things get done in an
organization. Components of this category include operations, procedures, and oversight within the organization.
Williams, J. C. (1988). A data-based method for assessing and reducing human error to
improve operational performance. Conference record for 1988 IEEE Fourth Conference
on Human Factors and Power Plants (pp. 436-450). New York, NY: Institute of Electrical
and Electronics Engineers.
The HEART method (Human Error Assessment and Reduction Technique) is used to explore the
identities and magnitudes of error-producing factors and provides defensive measures to combat
their effects. The HEART method is based on three premises. First, basic human reliability is
dependent upon the generic nature of the task to be performed. Second, given “perfect”
conditions, this level of reliability will tend to be achieved consistently with a given nominal
likelihood within probabilistic limits. And finally, given that these perfect conditions do not exist
in all circumstances, the human reliability predicted may be expected to degrade as a function of
the extent to which identified error-producing conditions might apply. The HEART methodology concentrates on the error-producing conditions found in practice and assumes that factorial (multiplicative) degradation of performance through multiple error-producing conditions is a much more likely outcome than simple additive degradation.
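The calculation commonly associated with HEART multiplies a nominal error probability for the generic task by a weighted factor for each applicable error-producing condition. The sketch below uses hypothetical values rather than the multipliers tabulated by Williams.

```python
# Sketch of a HEART-style calculation with hypothetical values.
# assessed_hep = nominal_hep * product over EPCs of ((max_effect - 1) * proportion + 1)

nominal_hep = 0.003  # hypothetical nominal error probability for the generic task

# Each applicable error-producing condition: (maximum multiplier, assessed proportion of effect).
epcs = [
    (11.0, 0.4),  # e.g., shortage of time (values illustrative only)
    (3.0, 0.2),   # e.g., poor feedback from the system
]

assessed_hep = nominal_hep
for max_effect, proportion in epcs:
    assessed_hep *= (max_effect - 1.0) * proportion + 1.0

print(f"assessed human error probability: {assessed_hep:.3f}")  # ~0.021
```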
Woods, D. D., Pople, H. E., & Roth, E. M. (1990). The cognitive environment simulation as
a tool for modeling human performance and reliability (NUREG/CR-5213). Pittsburgh,
PA: Westinghouse Science and Technology Center.
A tool called the Cognitive Environment Simulation (CES) was developed for simulating how nuclear power plant personnel form intentions to act during emergencies. A methodology
called Cognitive Reliability Assessment Technique (CREATE) was developed to describe how
CES can be used to provide input to human reliability analyses in probabilistic risk assessment
studies. CES/CREATE was evaluated in three separate workshops and was shown to work in the
tested scenarios. CES can be used to provide an objective means of distinguishing which event
scenarios are likely to be straightforward to diagnose and which are likely to be cognitively challenging, requiring longer to diagnose and more prone to human error.
Wreathall, J. (1994). Human errors in dynamic process systems. In T. Aldemir (Eds.),
Reliability and safety assessment of dynamic process systems (pp. 179-189). New York:
Springer.
The author reviews the nature of human errors and the way they interact with systems in
dynamic processes. A framework of accident causation is presented that is heavily based on
Reason’s model. The important features of the model are organizational processes, errorproducing
conditions, unsafe acts, defenses against unsafe acts, latent failures, and the accident
itself. Unsafe acts can be separated into different categories. There are errors of commission and
omission. There are active and latent errors. There are also slips/lapses, mistakes, and
circumventions.
Wreathall, J., Luckas, W. J., & Thompson, C. M. (1996). Use of a multidisciplinary
framework in the analysis of human errors. Probabilistic Safety Assessment and
Management ’96 (pp. 782-787). New York, NY: Springer.
This paper describes an effort to develop and apply a framework to describe the human-system
interactions associated with several different technical applications. The fundamental elements of
the framework are the PSA model and “plant” state, human failure events, unsafe actions, error
mechanisms, performance shaping factors, and plant conditions. The framework was applied in
the analysis of several accidents (including the crash of the Air Florida flight in January 1982) and
proved to be robust and insightful.
Acknowledgments and Disclaimer
This material is based upon work supported by the Federal Aviation Administration
under Award No. DTFA 99-G-006. Any opinions, findings, and conclusions or
recommendations expressed in this publication are those of the authors and do not necessarily
reflect the views of the Federal Aviation Administration.

4#
Posted on 2010-7-10 17:40:20
The Flight Safety Foundation believes we should first make good use of the time available to spread this kind of program in developing countries, because many of the accidents that occur there could have been prevented by such procedures. However, the uneven state of safety systems even in developed countries shows that much work remains to be done. At the International Aviation Safety Seminar held in Honolulu this October, two speakers pointed out the lack of attention given to maintenance procedures.


5#
Posted on 2010-11-11 10:04:57
Great material for study, thank you!


6#
Posted on 2010-12-2 12:37:33
Downloading to take a look.


7#
Posted on 2010-12-2 20:09:48
Learning from the experts here.


8#
Posted on 2010-12-7 09:47:52
Is this from the CAAC training?


9#
Posted on 2011-3-6 21:52:59
Thanks for all your hard work!


10#
Posted on 2011-3-11 22:20:21
Thanks for sharing!
