
A Review of Human Factors Analysis



Aviation Research Lab, Institute of Aviation, University of Illinois at Urbana-Champaign, 1 Airport Road, Savoy, Illinois 61874

Human Error and Accident Causation Theories, Frameworks and Analytical Techniques: An Annotated Bibliography

Douglas A. Wiegmann and Aaron M. Rich, Aviation Research Lab, and Scott A. Shappell, Civil Aeromedical Institute

Technical Report ARL-00-12/FAA-00-7, September 2000. Prepared for the Federal Aviation Administration, Oklahoma City, OK. Contract DTFA 99-G-006.

ABSTRACT

Over the last several decades, humans have played a progressively more important causal role in aviation accidents as aircraft have become more reliable. Consequently, a growing number of aviation organizations are tasking their safety personnel with developing safety programs to address the highly complex and often nebulous issue of human error. However, there is generally no "off-the-shelf" or standard approach for addressing human error in aviation. Indeed, recent years have seen a proliferation of human error frameworks and accident investigation schemes to the point where there now appear to be as many human error models as there are people interested in the topic. The purpose of the present document is to summarize research and technical articles that either directly present a specific human error or accident analysis system, or use error frameworks in analyzing human performance data within a specific context or task. The hope is that this review of the literature will provide practitioners with a starting point for identifying error analysis and accident investigation schemes that will best suit their individual or organizational needs.

Adams, E. E. (1976, October). Accident causation and the management system. Professional Safety, 26-29.

The paper explores accident causation in the context of management philosophy and support for the safety professional. An underlying theme is that management's thoughts and actions influence work conditions and worker behavior. Accident prevention is then discussed as a two-level task. The first level consists of technical problem solving for correcting tactical errors. The second level consists of management analysis and strategic planning for the correction of operational errors. Heinrich's domino philosophy of accident prevention is also analyzed in regard to its relevance to management behavior.

Air Force Safety Center: Life Sciences Report (LSR) and USAF HF Taxonomy. (1998). (NASA Aviation Data Sources Resource Handbook).

The Life Sciences Report and USAF human factors taxonomy are described. The human factors category of the Life Sciences Report investigations was designed to allow for a broader secondary analysis of human factors issues. The data are limited to aircraft accidents only. The report relies on the use of a logic tree. The human factors category is broken down into two main categories with multiple subcategories within each. The first is the environmental category, which incorporates operations, institutions and management, logistics and maintenance, facilities services, and egress/survival. The second is the individual category, which is comprised of physiological/biodynamic, psychological, and psychosocial subcategories.
AIRS Aircrew Incident Reporting System. (1998). (NASA Aviation Data Sources Resource Notebook).

The AIRS is a reporting system developed by Airbus Industrie to assess how their aircraft are operated in the real world, to gather human factors information, to learn what role human factors play in accidents, and to inform other operators of the lessons learned from these events. A taxonomy was designed for the database that is based on five categories of factors. The first category is crew actions. There are three main components of this category:
(1) Activities of handling the aircraft and its systems
(2) Error types (based on Reason's model of human error)
(3) Crew resource management team skills
The other categories include personal influences (emotion, stress, motivation, etc.), environmental influences (ATC services, technical failure, other aircraft, etc.), organizational influences (training, commercial pressure, etc.), and informational influences (checklists, navigational charts, etc.). A keyword system to access the database has also been designed. This keyword system is separated into two categories, crew behavior and contributory factors. An advantage of the AIRS as a reporting system is that it allows for plots of error chains, which represent the active and latent failures instrumental to an incident occurrence. It also supports trend analysis.

Alkov, R. A. (1997). Human error. In Aviation safety: The human factor (pp. 75-87). Casper, WY: Endeavor Books.

This paper makes the argument that much is known about what causes errors, but systems cannot be error-free and eventually errors will occur. Three factors must be considered when studying human error:
(1) Identical types of error can have fundamentally different causes.
(2) Anyone is capable of making errors, regardless of experience level, proficiency, maturity, and motivation.
(3) Outcomes of similar errors can be different.
Errors are classified as design-induced or operator-induced. These errors can be random, systematic, or sporadic. Other types of error classifications include errors of omission, commission, substitution, reversible, and irreversible. The paper goes on to describe three things that a model of human error should do: it needs to be able to predict the error (taking into account data input), account for cognitive processes, and examine actions of individuals to determine what kind of error behavior occurred. Three taxonomies for errors are also discussed. The first taxonomy simply describes what happened. The second taxonomy lumps together errors according to the underlying cognitive mechanism that causes them. The third taxonomy classifies errors according to human biases or tendencies. The slips, lapses, mistakes paradigm of error is then examined within these taxonomies. Errors, which are unintended, are contrasted with violations, which are usually deliberate. The author also takes a look at intentional violations performed by operators. The decision to perform a violation is shaped by three interrelated factors: attitudes to behavior, subjective norms, and perceived behavioral control. The role of latent failures versus active failures is discussed. Latent failures are consequences of human actions or decisions that take a long time to reveal themselves. Active failures have almost immediate negative outcomes. Finally, local versus organizational factors are stressed as being important. Local factors refer to the immediate workplace, whereas organizational factors refer to those that occur outside of the immediate workplace.
Amendola, A. (1990). The DYLAM approach to systems safety analysis. In A. G. Colombo & A. S. De Bustamante (Eds.), Systems reliability assessment (pp. 159-251). The Netherlands: Kluwer Academic Publishers.

The DYLAM (Dynamic Logical Analytical Methodology) is described and analyzed. DYLAM is a methodology created to address the inability of event trees to adequately account for dynamic processes interacting with systems' states. DYLAM is especially useful for developing stochastic models of dynamic systems, which provide a powerful aid in the design of protection and decision support systems to assist operators in the control of hazardous processes, in addition to systems safety assessment. The method differs from other techniques in its ability to account for process simulations and components of reliability performance in a unified procedure. The method uses heuristic bottom-up procedures that lead to the identification of event sequences that cause undesired conditions. It is also able to consider changes of the system structure due to control logic and to random events.

Baron, S., Feehrer, C., Muralidharan, R., Pew, R., & Horwitz, P. (1982). An approach to modeling supervisory control of a nuclear power plant (NUREG/CR-2988). Oak Ridge, TN: Oak Ridge National Laboratory.

The purpose of this report is to determine the feasibility of applying a supervisory control modeling technology to the study of critical operator-machine problems in the operation of a nuclear power plant. A conceptual model is formed that incorporates the major elements of the operator and of the plant to be controlled. The supervisory control modeling framework is essentially a top-down, closed-loop simulation approach to supervisory control that provides for the incorporation of discrete tasks and procedurally based activities.

Barriere, M. T., Ramey-Smith, A., & Parry, G. W. (1996). An improved HRA process for use in PRAs. Probabilistic Safety Assessment and Management '96 (pp. 132-137). New York, NY: Springer.

A summary of the human reliability analysis called ATHEANA (a technique for human error analysis) is given. ATHEANA is an analytical process for performing a human reliability analysis in the context of probabilistic risk assessment. ATHEANA is based on an understanding of why human-system interaction failures occur, as opposed to a behavioral and phenomenological description of operator responses.

Benner, L., Jr. (1975). Accident investigations: Multilinear events sequencing methods. Journal of Safety Research, 7(2), 67-73.

The paper tries to call attention to the need to develop generally acceptable approaches and analysis methods that will result in complete, reproducible, conceptually consistent, and easily communicated explanations of accidents. The first step for accident investigation should be to answer the question, "What happened?" This involves a delineation of the beginning and end of the accident phenomenon. It is extremely important that a convention for defining precisely the beginning and end of an accident is decided on and used. The second question to answer is, "Why did it happen as it did?" This means a recognition of the role of conditions leading to the accident is necessary. A general explanation of the accident phenomenon is needed. This can be done using the P-theory of accidents. The theory states that the accident can be seen to begin with a perturbation and end with the last injurious or damaging event in the continuing accidental events sequence. Accident event sequences should be displayed to aid accident investigation. An events charting method is one way to do this. It is a chronological array of events and helps structure the search for relevant factors and events involved in the accident. A method for presenting the accident events and enabling conditions is suggested. This method stays tuned in to the time order and logical flow of events present in an accident. The author believes that the adoption of the P-theory and the charting methods would improve the public's grasp of accident phenomena.
Benner, L., Jr. (1982). 5 accident perceptions: Their implications for accident investigations. Professional Safety, 27(2), 21-27.

The author is interested in investigating what the standards of accident investigation should be. A common problem is that investigators may each have different ideas as to the purpose of the investigation in relation to what their own needs and wants may be. Five distinct perceptions of the nature of accident phenomena are suggested to exist, and the strengths of each are discussed. These perceptions each seem to lead to a theoretical base for accident investigation. The first perception is the single event perception, where accidents are treated as a single event. The only strength of this perception is its tendency to concentrate attention on a single corrective measure. A major weakness is that it provides an overly simplified explanation of accidents. The second perception is the chain of events perception, which treats accidents as a chain of sequential events. The main focus is placed on unsafe conditions and acts. The major strength of this perception is that the reconstruction technique provides some disciplining of the data search by doing sequential ordering. A weakness is that the criteria for the selection of data used are imprecise and very unlikely to lead to reproducible results. The third perception is the determinant variable or factorial perception. This perception tries to discern common factors in accidents by statistical manipulation of accident data. An important strength here is its ability to discover previously undefined relationships. A major weakness is the total dependency on data obtained by accident investigators. The fourth perception is the logic tree perception. This presumes that converging chains of events lead to an undesired event. The major strength of this perception is that it provides an approach to organize speculations about accidental courses of events and allows an operator to watch for initiation events. A weakness is that the beginning and end of an accident phenomenon are left to be decided by the individual investigators. The fifth and final perception is the multilinear events sequence perception. This perception treats accidents as a segment of a continuum of activities. The major strength is the way it facilitates discovery by structuring data into logical arrays. A weakness is the perceived complexity of the methodologies, which discourages use. Three areas are addressed as problem areas that need to be improved for accident investigators. Each investigator develops a personalized investigative methodology instead of having a common methodology used by all investigators. Investigators have difficulty linking investigations to predicted safety performance of an activity. Finally, there are no standardized qualifications for investigators.
Berninger, D. J. (n.d.). Understanding the role of human error in aircraft accidents. Transportation Research Record, 1298, 33-42.

There are two main strategies used to address human error. The first is the introduction of technology that is intended to assist and reduce the roles of humans. The second is training and changes to the system that are suggested by human factors. One way of looking at human error is as human malfunction. The author argues against using this point of view, stating that there is no malfunction on the human's part because the human is responding appropriately to experience or the circumstance. A second way of looking at human error is as a system malfunction. A system that fails has both animate and inanimate components, and humans cause errors with the animate components. But human performance is not independent of the inanimate components and environment. A distinction is made between soft deficiencies and hard-system deficiencies. Soft deficiencies are system characteristics that work against human performance and cause humans to fail. Hard-system deficiencies are things such as insufficient durability and cause hardware to fail. A mechanism for system design causing aircraft accidents is presented. It states that soft deficiencies affect vigilance, which, along with skill and experience, determines effectiveness. The effectiveness is compared to flight conditions. If the effectiveness level is too low compared to flight conditions, the safety margin decreases until an accident occurs. Human factors specialists, engineers, and others must pursue soft deficiencies jointly. By breaking down the soft deficiencies, accidents can be understood better and made more preventable.

Besco, R. O. (1988). Modelling system design components of pilot error. Human Error Avoidance Techniques Conference Proceedings (pp. 53-57). Warrendale, PA: Society of Automotive Engineers.

A five-factor model based on the assumption that errors have a cause and can be prevented by removing error-inducing elements is developed and reviewed in the context of civilian aircraft accidents. The five factors are obstacles, knowledge, systems, skill, and attitude. The model consists of a sequential analysis of inducing elements and the associated reducers. A detailed step-by-step graphic model is presented in the paper.

Besco, R. O. (1998). Analyzing and preventing knowledge deficiencies in flight crew proficiency and skilled team performance. Dallas, TX: Professional Performance Improvement.

A five-factor model called the Professional Performance Analysis System (PPAS) is developed and described, whose main purpose is to provide remedies that minimize pilot error and optimize pilot performance. The model has been used successfully in accident investigation. The model attempts to deal with knowledge deficiencies and attitudinal problems with a combination of techniques and methodologies from organizational psychology, flight operations, business leadership, and management sciences. The five interactive factors of the model include knowledge, skills, attitudes, systems environment, and obstacles. The first step in the analysis is describing the process, function, task, error, or low performance. At this stage an investigator is looking to see if the pilot was aware of the risks, threats, and consequences of their actions and if there was a stimulus that degraded this awareness. The second step is to assess the impact of the error on this particular accident or incident by determining whether its removal would have prevented the accident. The third step is to assess the visibility of the error to the crewmembers. The fourth step involves analyzing a detailed flow chart to see if the crew had adequate knowledge to cope with the errors and anomalies that occurred. There are four levels of learning that are examined. These include unconsciously incompetent (the crew is unaware that they don't know something), consciously incompetent (the crew is aware that they don't know something), consciously competent (the crew has the knowledge and skill but must apply great effort to use it), and unconsciously competent (the crew has overlearned the knowledge or skill and can apply it without conscious thought). Other questions are explored to determine deficiencies, and recommendations are given for each situation where a problem was perceived:
(1) Did the crew ever have the knowledge?
(2) Was the knowledge used often?
(3) Was there feedback on the knowledge level?
(4) Was there operationally meaningful curriculum?
(5) Did personal interaction with learning occur?
(6) Is the knowledge compatible with an organization?
(7) Was the individual's capacity to absorb and apply information lacking?
Bieder, C., Le-Bot, P., Desmares, E., Bonnet, J. L., & Cara, F. (1998). MERMOS: EDF's new advanced HRA method. Probabilistic Safety Assessment and Management: PSAM 4 (pp. 129-134). New York, NY: Springer.

MERMOS is an HRA method whose important underlying concepts were developed and examined in this paper. The basic theoretical object of the MERMOS method is what is termed the Human Factor Mission. A Human Factor Mission refers to a set of macro-actions the crew has to carry out in order to maintain or restore safety functions. Four major steps are involved in the MERMOS method. The first is to identify the safety functions that are affected, the possible functional responses, and the associated operation objectives, and to determine whether specific means are to be used. The second is to break down the safety requirement corresponding to the HF mission. The third is to bridge the gap between theoretical concepts and real data by creating as many failure scenarios as possible. The final one is to ensure the consistency of the results and integrate them into PSA event trees.

Bisseret, A. (1981). Application of signal detection theory to decision making in supervisory control: The effect of the operator's experience. Ergonomics, 24(2), 81-94.

The role of signal detection theory was examined in the air-traffic control environment. A general model of perceptive judgments on a radar screen is proposed for ATC controllers judging the future separation of two converging aircraft at their point of convergence. An experiment was conducted that looked at the ability of air-traffic controllers (trainees vs. experienced) to detect loss of separation of aircraft at present and in the future. The results showed that experienced controllers use a 'doubt' response (a part of the proposed model of perceptive judgments) while trainees do not. Trainees look for a sure and accurate response, while experienced controllers create a momentary class of indetermination.
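Bisseret's analysis rests on standard signal detection measures. As a reminder of that framework (these are the conventional textbook definitions, not formulas quoted from the paper), sensitivity and response bias are computed from the hit rate H and the false-alarm rate F:

d' = z(H) - z(F), \qquad c = -\tfrac{1}{2}\,[\,z(H) + z(F)\,]

where z(\cdot) is the inverse of the standard normal cumulative distribution function. In these terms, the experienced controllers' 'doubt' response can be read as an intermediate response category inserted between the 'conflict' and 'no conflict' judgments, rather than a forced binary decision at a single criterion.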
Braddock, R. (1958). An extension of the "Lasswell formula". Journal of Communication, 8, 88-93.

Seven aspects of the communicative process are offered as an extension to the "Lasswell Formula". These aspects are WHO says WHAT to WHOM under WHAT CIRCUMSTANCES through WHAT MEDIUM for WHAT PURPOSE with WHAT EFFECT. This formula (model) can address errors in terms of dealing with aspects of a message, its medium, and the expectations of the sender or receiver.

Broadbent, D. E. (1958). Perception and communication. Oxford: Pergamon Press.

Broadbent explains in detail an information flow diagram of an organism. There are five important principles underlying his diagram. The nervous system acts as a single communication channel that has a limited capacity. A selective operation is performed upon the input to the channel. Selection is not random and depends on the probability of certain events and states being present in an organism. Incoming information can be held in a temporary store for a maximum time on the order of seconds. And finally, information can return to the temporary store after passing through the limited capacity channel.

CAATE Civil Aviation Authority Taxonomy Expanded. (1998). (NASA Aviation Data Sources Resource Handbook).

The CAATE was developed from analyses of controlled flight into terrain that led to 'problem statements'. These problem statements were adapted into a taxonomy. A brief version of the taxonomy outline is presented here. Factors are divided into two main categories, causal and circumstantial. Causal factors include the airplane, ATC/ground aids, environmental, the crew, the engine, fire, maintenance/ground handling, the aircraft structure, infrastructure, design, performance, and an 'other' factor. Circumstantial factors include aircraft systems, ATC/ground aids, environmental, the crew, infrastructure, and an 'other' factor.

Cacciabue, P. C., Carpignano, A., & Vivalda, C. (1993). A dynamic reliability technique for error assessment in man-machine systems. International Journal of Man-Machine Studies, 38, 403-428.

The paper presents a methodology for the analysis of human errors called DREAMS (Dynamic Reliability technique for Error Assessment in Man-Machine Systems). DREAMS is meant to identify the origin of human errors in the dynamic interaction of the operator and the plant control system. It accommodates different models of several levels of complexity, such as simple behaviouristic models of operators and more complex cognitive models of operator behaviour.

Cacciabue, P. C., Cojazzi, G., & Parisi, P. (1996). A dynamic HRA method based on a taxonomy and a cognitive simulation model. Probabilistic Safety Assessment and Management '96 (pp. 138-145). New York: Springer.

A human factors methodology called HERMES (human error reliability methods for event sequences) is presented and compared to the "classical" THERP method. The classification scheme is based on the model of cognition and guides field studies, the development of questionnaires and interviews, the extraction of expert judgment, and the examination of accidents/incidents. The overall aim is to estimate data and parameters that are included in the analyses. The HERMES methodology is derived from four sources. The first is a cognitive simulation model built on the theories of human error and contextual control of Hollnagel and Reason. The second is a classification scheme of erroneous behavior. The third source is a model of the functional response of the plant. The fourth source is a method for structuring the interaction of the models of cognition and of plants that control the dynamic evolution of events.
Cinq-Demi Methodology and Analysis Grids. (1998). (NASA Aviation Data Sources Resource Notebook).

This methodology was developed as a tool to analyze the error factors and operational system faults that underlie a group of incidents or accidents. Three types of events are identified that can influence the status of an aircraft. This status floats between an Authorized Flight Envelope, where the probability of an accident is low (10^-7), and a Peripheral Flight Envelope, where the probability of an accident is higher (10^-3). The three events are maneuverability, sensitivity to disturbances, and pilotability. Maneuverability refers to maneuvers that are either imposed by the mission or are required to accommodate environmental events. Sensitivity to disturbances addresses internal and external events that influence aircraft status and movement. Pilotability deals with pilots' performance of elementary operations and tasks, and the conditions leading to error. Five factors are proposed that are conditions leading to error. These include high workload, lack of information, misrepresentation (mental) due to the wrong use of information and cues, misrepresentation (mental) due to 'diabolic error', and physical clumsiness. The accidents and incidents are divided into key sub-events. These sub-events are then analyzed by five grids. The first three grids represent events that can change the Status Point of the aircraft. The fourth identifies the human environment at the time. The fifth is a matrix of operational system faults and elementary operations.
(1) GAME (grid of aircraft maneuvers events)
(2) GASP (grid of aircraft sensitivity to perturbations)
(3) GOOF (grid of operator failures)
(4) GARE (grid of amplifiers of risk of errors)
(5) RAFT (rapid analysis fault table)

Cojazzi, G., & Cacciabue, P. C. (1992). The DYLAM approach for the reliability analysis of dynamic systems. In T. Aldemir, N. O. Siu, A. Mosleh, P. C. Cacciabue, & B. G. Göktepe (Eds.), Proceedings of the NATO Advanced Research Workshop on Reliability and Safety Assessment of Dynamic Process Systems (pp. 8-23). Berlin, Germany: Springer-Verlag.

A review of the third-generation DYLAM approach to reliability analysis is performed. DYLAM is a powerful tool for integrating deterministic and failure events, and it is based on the systematic simulation of the physical process under study. The DYLAM framework takes into account different types of probabilistic behaviours, such as constant probabilities for initial events and component states, stochastic transitions between the states of a component, functionally dependent transitions for failure on demand and physical dependencies, stochastic and functionally dependent transitions, conditional probabilities for dependencies between states of different components, and stochastic transitions with variable transition rates. The DYLAM method is defined as a type of fault-tree/event-tree method.
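To make the contrast with static event trees concrete, the sketch below shows the general flavor of a dynamic event-sequence simulation, in which branching probabilities depend on the evolving process state. It is a toy Monte Carlo illustration written for this summary; the component names and numbers are invented, and DYLAM itself uses a systematic branching scheme rather than simple random sampling.

# Illustrative sketch only: a toy "dynamic event sequence" in the spirit of the
# dynamic methods summarized above (names and numbers are invented).
import random

def run_sequence(steps=60, dt=1.0):
    temp = 300.0          # process variable (K) that drifts upward
    pump_failed = False
    for _ in range(steps):
        temp += 2.0 * dt if pump_failed else 0.5 * dt
        # Failure probability per step depends on the *current* process state,
        # which a static event tree cannot represent directly.
        p_fail = min(0.001 * (temp - 300.0), 0.5)
        if not pump_failed and random.random() < p_fail:
            pump_failed = True
        if temp > 400.0:  # undesired end state
            return "damage"
    return "safe"

trials = 10_000
damage = sum(run_sequence() == "damage" for _ in range(trials))
print(f"estimated damage frequency: {damage / trials:.3f}")

A static event tree would assign the pump a single failure probability fixed at the start of the sequence; the point of the dynamic approaches reviewed here is that the probability can instead be driven by the simulated physical process.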
Cooper, S. E., Ramey-Smith, A. M., Wreathall, J., Parry, G. W., Bley, D. C., Luckas, W. J., Taylor, J. H., & Barriere, M. T. (1996). A technique for human error analysis (ATHEANA) (NUREG/CR-6350). Brookhaven National Laboratory.

ATHEANA has been designed to address deficiencies in current human reliability analysis (HRA) approaches. These deficiencies to be corrected include addressing errors of commission and dependencies, representing more realistically the human-system interactions that have played important roles in accident response, and integrating recent advances in psychology with the engineering, human factors, and probabilistic risk analysis disciplines. ATHEANA is a multidisciplinary HRA framework that has been designed to fuse behavioral science, engineering, and human factors together. The framework elements are error-forcing contexts, performance shaping factors, plant conditions, human error, error mechanisms, unsafe actions, probabilistic risk assessment models, human failure events, and scenario definitions. The ATHEANA method was demonstrated in a trial application and provided a "proof of concept" for both the method itself and the principles underlying it.

Danaher, J. W. (1980). Human error in ATC system operations. Human Factors, 22(5), 535-545.

Errors in air traffic control systems are occurring more often as air traffic increases. The author reviews the FAA's program that sought to identify and correct causes of system errors which occur as a result of basic weaknesses inherent in the composite man-machine interface. A system error was defined as the occurrence of a penetration of the buffer zone that surrounds an aircraft. A database called the System Effectiveness Information System (SEIS) has been kept to be able to make summaries of system error data in desired categories. A system error is allowed only one direct cause, but may have many contributing causes. There are nine cause categories. These are attention, judgment, communications, stress, equipment, operations management, environment, procedures, and external factors.

De Keyser, V., & Woods, D. D. (1990). Fixation errors: Failures to revise situation assessment in dynamic and risky systems. In A. G. Colombo & A. Saiz de Bustamante (Eds.), Systems reliability assessment (pp. 231-252). Dordrecht, The Netherlands: Kluwer Academic Publishers.

The paper identifies a major source of human error as being a failure to revise situation assessment as new evidence becomes available. These errors are called fixation errors and are identified by their main descriptive patterns. The paper explores ways to build new systems to reduce this type of error. Fixation occurs when a person does not revise their situation assessment or course of action in response to one of two things. Either the situation assessment or course of action has become inappropriate given the actual situation, or the inappropriate judgment or action persists in the face of opportunities to revise. Three main patterns of behavior occur during fixation. There is the "Everything but that" pattern, the "This and nothing else" pattern, and the "Everything is OK" pattern. The authors go on to describe a fixation incident analysis. The analysis is broken into categories. These are initial judgment and background, the error, opportunities to revise, neutral observer tests, incident evolution, and revision and correction.
Diehl, A. E. (1989). Human performance aspects of aircraft accidents. In R. S. Jensen (Ed.), Aviation psychology (pp. 378-403). Brookfield, VT: Gower Technical.

There is an important relationship between the phenomena of accident generation, the investigation process that follows, and the measures that are eventually performed to prevent more similar accidents from occurring. With this in mind, the author describes three important elements in accident generation. First, hazards occur when a dangerous situation is detected and adjusted for. Hazards are common. Second, incidents occur when a dangerous situation isn't detected until it almost occurs and an evasive action of some sort is needed. These are infrequent. Third, accidents occur when a dangerous situation isn't detected and does occur. These are rare. Aircraft accident investigation consists of several discrete functions that occur in the following sequence: fact finding, information analysis, and authority review. It is also important to examine comparative data sources and mishap data bases. There are also important accident prevention elements, which are to establish procedural safeguards, provide warning devices, incorporate safety features, and eliminate hazards and risks.

Dougherty, E. M., Jr., & Fragola, J. R. (1988). Human reliability analysis. New York: John Wiley & Sons.

A human error taxonomy is discussed that draws heavily from the Rasmussen taxonomy. This is then used to formulate a conceptual framework of technological risks. The human error taxonomy is broken down into behavior types (mistakes, slips) and the different parts of error (modes, mechanisms, causes). The parts of errors are expanded below:
Modes: misdetection, misdiagnosis, faulty decision, faulty planning, faulty actions.
Mechanisms: false sensations, attentional failures, memory lapses, inaccurate recall, misperceptions, faulty judgments, faulty inferences, unintended actions.
Causes: misleading indicator, lack of knowledge, uncertainty, time stress, distraction, physical incapacitation, excessive force, human variability.
The framework shows that the human being consists of many modules that carry out selected activities. There are mechanisms that control action. There are mechanisms that interpret, plan, and choose actions. An executive monitor exists to control these processes. A conscious module exists. In the framework, the human relates to the world through the senses and acts through the motor apparatus. Skill loops are shorter and presumably faster, whereas knowledge loops may pass through all categories of the modules. Influences on human behavior may increase the effectiveness of certain modules.
Drury, C. G., & Brill, M. (1983). Human factors in consumer product accident investigation. Human Factors, 25(3), 329-342.

The role of accident investigation in product-liability cases is discussed. A job aid intended to obtain better human factors data is developed using task analysis as a basis. Characteristic accident patterns were found among the data, and these were labeled hazard patterns or scenarios. It is stressed that etiological data are more important to obtain than epidemiological data. Hazard patterns are developed and discussed. The intention of hazard patterns is to create a way to predict the behavior of a product just by looking at its characteristics. Hazard patterns are considered useful if at least six scenarios can account for 90% or more of the in-depth investigations, each scenario leads to at least one usable intervention strategy that works for that pattern, each scenario is mutually exclusive from all the others, and each scenario has human factors as a parameter in its description. A generic hazard pattern is assigned to the remaining small percentage of scenarios that are not product specific. Hazard patterns are broken down into four parts that correspond to the task, the operator, the machine, and the environment.

Edwards, M. (1981). The design of an accident investigation procedure. Applied Ergonomics, 12(2), 111-115.

The author points out that ergonomics has come under attack partly because models of application are inappropriate and partly because ergonomists tend to be laboratory-centered rather than problem-centered. The SHEL system is reviewed and suggested as a good solution to the problems mentioned. The basis of the SHEL system is the premise that what people do in a work situation is determined not only by their capabilities and limitations but also by the machines they work with, the rules and procedures governing their activities, and the total environment within which the activity takes place. The model states that Hardware, Software, and Liveware (human elements) are all system resources that interact together and with their Environment. Accidents are described as symptomatic of a failure in the system. In order for the SHEL system to be adopted, a change in orientation is needed so that accidents will not be regarded as isolated events of a relatively arbitrary nature, due mostly to carelessness.

Embrey, D. E., Humphreys, P., Rosa, E. A., Kirwan, B., & Rea, K. (1984). SLIM-MAUD: An approach to assessing human error probabilities using structured expert judgment (NUREG/CR-3518). Brookhaven National Laboratory.

Procedures and analyses are performed to develop an approach for structuring expert judgments to estimate human error probabilities. The approach is called SLIM-MAUD (Success Likelihood Index Methodology, implemented through the use of an interactive computer program called MAUD, for Multi-Attribute Utility Decomposition). The approach was shown to be viable for the evaluation of human reliability.
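The core arithmetic behind a success likelihood index is a weighted sum of performance shaping factor ratings, calibrated against tasks whose error probabilities are already known through a log-linear relationship. The sketch below illustrates that idea with invented ratings, weights, and calibration points; it is not the NUREG/CR-3518 procedure itself, which elicits these quantities through structured expert judgment and the MAUD program.

# Toy illustration of the Success Likelihood Index idea (invented ratings and
# weights; calibration constants would normally come from tasks with known HEPs).
import math

def sli(ratings, weights):
    # ratings: PSF quality on a 0-1 scale (1 = ideal); weights sum to 1
    return sum(r * w for r, w in zip(ratings, weights))

# Calibrate log10(HEP) = a * SLI + b from two reference tasks.
sli_ref, hep_ref = [0.2, 0.9], [1e-1, 1e-3]
a = (math.log10(hep_ref[1]) - math.log10(hep_ref[0])) / (sli_ref[1] - sli_ref[0])
b = math.log10(hep_ref[0]) - a * sli_ref[0]

task_sli = sli(ratings=[0.6, 0.8, 0.4], weights=[0.5, 0.3, 0.2])
hep = 10 ** (a * task_sli + b)
print(f"SLI = {task_sli:.2f}, estimated HEP = {hep:.2e}")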
Feggetter, A. J. (1982). A method for investigating human factor aspects of aircraft accidents and incidents. Ergonomics, 25(11), 1065-1075.

This paper describes a comprehensive procedure for determining the human behaviour that occurs in aircraft accidents and incidents. A recommendation is made to use interviews and check lists in order to assess the behavioural data involved in accidents and incidents. It is stressed that a trained human factors specialist should interview the personnel involved in these accidents and incidents as soon as possible. The author goes on to describe a checklist for accident and incident investigation that has been developed. It is based on a systems approach to understanding human error. The framework for the proposed check list considers three systems: the cognitive system, the social system, and the situational system.

Ferry, T. S. (1988). Modern accident investigation and analysis (2nd ed.). New York: John Wiley & Sons.

The book takes a thorough, detailed look at modern accident investigation and analysis. Its purpose is to give an investigator the necessary basics to perform an investigation. It is pointed out that a much more detailed version would be needed to truly train an expert in accident investigation. The book is divided into four parts. The first part investigates the who, what, why, and when aspects of accident investigation. The second part examines the roles and interactions of man, environment, and systems. The third part reviews specific analytical techniques such as fault trees, failure mode and effect analysis (FMEA), the technique for human error rate prediction (THERP), the management oversight and risk tree (MORT), and the technic of operations review (TOR). The fourth part covers topics related to accident investigation. Some examples of these are mishap reports, management overview and mishap investigation, legal aspects of investigation, and the future of accident investigation. Fifteen general types of methodological approaches are identified in the accident investigation domain. These are epidemiological, clinical, trend forecasting, statistical inference, accident reconstruction, simulation, behavioral modeling, systems approach, heuristic, adversary, scientific, the Kipling method (investigates who, what, when, where, why, and how), the Sherlock Holmes method (events sequencing integrated in the investigator's mind), and traditional engineering safety.

Firenze, R. J. (1971, August). Hazard control: Safety, security, and fire management. National Safety News, 39-42.

Error is looked at in the context of three integrated groups. The first group is physical equipment (the machine), which covers poorly designed or poorly maintained equipment that leads to accidents. The second group is man. In this group, faulty or bad information causes poor decisions. The third group is environment. Here failures in the environment (toxic atmospheres, glare, etc.) affect man, machine, or both. It is also noted that stressors that appear during a decision-making process cloud a person's ability to make sound, rational decisions.

Fitts, P. M. (1954). The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology, 47(6), 381-391.

Fitts found that the rate of performance in a given type of task is approximately constant over a considerable range of movement amplitudes and tolerance limits, but falls off outside this optimum range. It was also found that the performance capacity of the human motor system plus its associated visual and proprioceptive feedback mechanisms, when measured in information units, is relatively constant over a considerable range of task conditions. This paper came as a result of information theory and applied its concepts.
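For reference, the relationship later formalized as Fitts's law is commonly written as follows (this is the standard modern statement rather than a quotation from the 1954 paper):

MT = a + b \log_2\!\left(\frac{2A}{W}\right)

where MT is movement time, A is the movement amplitude, W is the target width, a and b are empirically fitted constants, and the logarithmic term is the index of difficulty in bits.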
Fussell, J. B. (1976). Fault tree analysis: Concepts and techniques. In E. J. Henley & J. W. Lynn (Eds.), Proceedings of the NATO Advanced Study Institute on Generic Techniques in Systems Reliability Assessment (pp. 133-162). Leyden, The Netherlands: Noordhoff International Publishing.

Fault tree analysis is a technique of reliability analysis that can be applied to complex dynamic systems. The fault tree is a graphical representation of the Boolean logic associated with the development of a particular system failure down to basic failures. Fault tree analysis has numerous benefits. It allows the analyst to determine failures deductively. It points out important aspects of the system in regard to the failure of interest. It provides a graphical aid that gives clarification to systems management people. It provides options for qualitative or quantitative system reliability analysis. It allows the analyst to focus on one particular system failure at a time. Finally, it provides the analyst with genuine insight into system behavior. Three disadvantages of fault tree analysis include the high cost of development, the fact that few people are skilled in its techniques, and the possibility of two different people developing two different trees for the same system. The fault tree has five basic parts. The first parts, components, are the basic system constituents whose failures are considered primary failures during fault tree construction. The second parts, fault events, are failure situations resulting from the logical interaction of primary failures. The third parts, branches, are the development of any fault event on a fault tree. The fourth parts, base events, are the events being developed. The fifth and final parts, gates, are Boolean logic symbols that relate the inputs of the gates to the output events.
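As a minimal illustration of the Boolean structure just described, consider an invented two-gate tree (not an example from Fussell's chapter) in which the top event occurs if basic event C occurs, or if A and B both occur. With independent basic events, the gate logic translates directly into probabilities:

# Minimal fault tree sketch: TOP = OR(AND(A, B), C), independent basic events.
p = {"A": 0.01, "B": 0.02, "C": 0.001}   # invented basic-event probabilities

def p_and(*probs):   # AND gate: all inputs must occur
    out = 1.0
    for x in probs:
        out *= x
    return out

def p_or(*probs):    # OR gate: at least one input occurs
    out = 1.0
    for x in probs:
        out *= (1.0 - x)
    return 1.0 - out

p_top = p_or(p_and(p["A"], p["B"]), p["C"])
print(f"P(top event) = {p_top:.6f}")   # approximately 0.0012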
Gerbert, K., & Kemmler, R. (1986). The causes of causes: Determinants and background variables of human factor incidents and accidents. Ergonomics, 29(11), 1439-1453.

An investigation was done with German Air Force pilots to examine critical flight incidents. The authors are interested in examining whether a possible cause of a failure can be traced to permanent personality characteristics of an operator or to a situational disturbance by psychophysiological or external events. Data analysis revealed human errors that can be interpreted as a four-dimensional error structure. Vigilance errors encompass one dimension. These are missing or fragmentary uptake of objectively present information due to inattention, or channellized/shifted attention. Perception errors are another dimension. These errors are comprised of erroneous judgment, miscalculations, wrong decisions, and faulty action plans. The third dimension is information processing errors. These are defined as false utilization of probabilistic information. The fourth dimension is sensorimotor errors. These are deficiencies in the timing and adjustment of simple-discrete and/or complex-continuous motor activities, and also perceptual-motor confusion. The study shows that there is an entanglement and interaction of specific causal conditions.

Gertman, D. I. (1993). Representing cognitive activities and errors in HRA trees. Reliability Engineering and System Safety, 39, 25-34.

COGENT (cognitive event tree system) is an enriched HRA event tree method presented in this paper that integrates three potential means of representing human activity. These include an HRA event-tree approach, the skill-rule-knowledge paradigm, and the slips-lapses-mistakes paradigm. COGENT attempts to combine the classical THERP technique with more cognitively oriented approaches to bridge the existing gap between the modeling needs of HRA practitioners and the classification schemes of cognitive theoreticians. The paper provides a detailed description of the method, and an application to an example scenario is performed.
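For readers unfamiliar with the HRA event trees that COGENT enriches, the underlying THERP-style arithmetic is a chain of success/failure branches whose failure-path probabilities are summed. The sketch below uses invented numbers and omits the dependence and recovery adjustments a real analysis would include; it illustrates the tree arithmetic only, not COGENT's cognitive extensions.

# Toy HRA event tree: two sequential operator tasks; the system fails if task 1
# fails unrecovered, or task 1 succeeds and task 2 fails. Numbers are invented.
hep_task1 = 0.003      # human error probability, task 1
p_recover1 = 0.90      # probability a task 1 error is caught and corrected
hep_task2 = 0.01       # human error probability, task 2 (given task 1 succeeded)

p_fail_path1 = hep_task1 * (1.0 - p_recover1)
p_fail_path2 = (1.0 - hep_task1) * hep_task2
p_system_failure = p_fail_path1 + p_fail_path2
print(f"P(system failure) = {p_system_failure:.4f}")   # approximately 0.0103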
Gertman, D. I., & Blackman, H. S. (1994). Human reliability and safety analysis data handbook. New York: John Wiley and Sons.

The authors provide a comprehensive review and explanation of human reliability and safety analysis. The background and "how to" aspects of conducting human reliability analysis are discussed. Various methods of estimating and examining human reliability are reviewed. Some of these include human cognitive reliability, maintenance personnel performance simulation, techniques for human error rate prediction, and fault/event trees. It is stressed that existing data sources and data banks are useful and important for performing human reliability and safety analyses.

Gertman, D. I., Blackman, H. S., Haney, L. N., Seidler, K. S., & Hahn, H. A. (1992). INTENT: A method for estimating human error probabilities for decision based errors. Reliability Engineering and System Safety, 35, 127-136.

INTENT is a method used to estimate probabilities associated with decision-based errors that are not normally incorporated into probabilistic risk assessments. A hypothetical example is created that uses a preliminary data set of 20 errors of intention that were tailored to represent the influence of 11 commonly referenced performance shaping factors. The methodological flow for INTENT involves six stages: compiling errors of intention, quantifying errors of intention, determining human error probability (HEP) upper and lower bounds, determining performance shaping factors (PSFs) and associated weights, determining composite PSFs, and determining site-specific HEPs for intention. The preliminary results show that the method provides an interim mechanism to supply data which can serve to remedy a major deficiency: the failure to account for high-consequence failures due to errors of intention.

Gore, B. R., Dukelow, J. S., Mitts, T. M., & Nicholson, W. L. (1995). A limited assessment of the ASEP human reliability analysis procedure using simulator examination results (NUREG/CR-6355). Pacific Northwest Laboratory.

The procedures and requirements for the ASEP analysis are explained. This volume does not contain any of the background or theory involved in developing the approach.

Hahn, H. A., Blackman, H. S., & Gertman, D. I. (1991). Applying sneak analysis to the identification of human errors of commission. Reliability Engineering and System Safety, 33, 289-300.

SNEAK is a method designed to identify human errors of commission. It is especially powerful as an aid to discovering latent errors. The analysis performed in this paper is in the context of electrical circuits, although a software SNEAK analysis has also been designed. Data acquisition and encoding is the first major consideration of the method, to determine that the data being used adequately represent the true system. Network trees are also used to represent a simplified version of the system. The network trees are examined for topological patterns. These patterns lead to clues that help identify SNEAK conditions.

Hansen, C. P. (1989). A causal model of the relationship among accidents, biodata, personality, and cognitive factors. Journal of Applied Psychology, 74(1), 81-90.

Data from chemical industry workers were gathered to construct and test a causal model of the accident process. The author believes that social maladjustment traits, some characteristics of neurosis, cognitive ability, employee age, and job experience would have an effect on accident causation. An accident model path diagram is presented that considers variables from numerous tests, scales, and traits. These include the Bennett mechanical comprehension test, the Wonderlic personnel test, the employee's age, a general social maladjustment scale, the distractibility scale, job experience, involvement in counseling, accident risk, and accident consistency. The model can be used to predict with some degree of accuracy the likelihood an employee has of getting into an accident. This is accomplished through tests on the employee and employee data.

Harle, P. G. (1994). Investigation of human factors: The link to accident prevention. In N. McDonald & R. Fuller (Eds.), Aviation psychology in practice (pp. 127-148). Brookfield, VT: Ashgate.

A general theme the author presents is that humans are the source of accidents, but they are also the key to accident prevention. James Reason's model of accident causation is examined as a systems approach to accident causation. A step-by-step description of how investigations of incidents should occur is given. It is first stressed that an investigator does not need to be a specialist in the domain of the accident; a generalist investigator is usually well suited. Information needs to be collected that helps determine what happened and why it happened. The SHEL model is useful for this type of data collection task. The SHEL model examines the liveware, software, hardware, and environment of systems. Information is considered relevant and necessary to obtain if it helps to explain why an accident or incident occurred. The two sources of information are primary sources and secondary sources. Primary sources include physical equipment, documentation, audio/flight recorder tapes, etc. Secondary sources include occurrence databases, technical literature, and human factors professionals/specialists. A framework for analyzing the occurrence data should then be used that leads to safety action as the principal output. A human factors report of the incident/accident then needs to be written that identifies the hazards uncovered and gives safety recommendations. Finally, follow-up actions to prevent the identified hazards need to be taken.
Hawkins, F. H. (1997). Human error. In Human factors in flight (pp. 27-56). Brookfield, VT: Avebury Aviation.

Human error is examined in the context of aviation. Three basic tenets of human error are developed and discussed. The first is that the origins of errors can be fundamentally different. The second is that anyone can and will make errors. The third is that the consequences of similar errors can be quite different. From here, four different categories are used to make a classification system for errors:
(1) Design-induced versus operator-induced
(2) Errors are either random, systematic, or sporadic
(3) Errors can be an omission, a commission, or a substitution
(4) Errors can be reversible or irreversible

Heinrich, H. W., Petersen, D., & Roos, N. (1980). Industrial accident prevention: A safety management approach (5th ed.). New York: McGraw-Hill.

A basic philosophy of safety management and techniques of accident prevention are examined. Accident prevention is accomplished through five separate steps, all built on a foundation of a basic philosophy of accident occurrence and prevention. The first step is organization. The second step is fact finding. The third step is analysis. The fourth step is selection of a remedy. The fifth step is the application of the remedy. The authors go on to describe and analyze an updated model of accident prevention. Parts of the model include basic personal philosophies of accident occurrence and prevention, fundamental approaches to accident prevention, collecting data, analyzing data, selecting a remedy, applying the remedy, monitoring, and considering long-term and short-term problems and safety programming. From here, a multitude of accident sequence and causation models are examined and explained in terms of their usefulness. Heinrich's influential domino theory of accident causation is then proposed. An important hypothesis put forth is that most accidents occur because of unsafe acts, not because of unsafe conditions.

Helmreich, R. L., & Merritt, A. C. (1998). Error management: A cultural universal in aviation and medicine. In Helmreich (Ed.), Culture at work in aviation and medicine. Brookfield, VT: Ashgate.

The authors discuss how professional, national, and organizational cultures intersect within organizations and can be engineered towards a safety culture. This is done by examining the interplay of cultures through behaviors at the sharp end of a system. Error management is suggested as a necessary strategy to create a safety culture. More empirical data are needed to ascertain an organization's health and practices. Five precepts of error management are acknowledged: Human error is inevitable in complex systems. Human performance has limitations. Humans make more errors when performance limits are exceeded. Safety is a universal value across cultures. And finally, high-risk organizations have a responsibility to develop and maintain a safety culture.

HFR British Airways Human Factors Reporting Programme. (1998). (NASA Aviation Data Sources Resource Notebook).

The Human Factors Reporting Programme is a database that has four main purposes. The first is to identify how and why a faulty plan was formulated. The second is to prevent a recurrence of the circumstances or process. The third is to identify how well an organization supports the activities of its flight crew. The fourth is to assure that the system does not assign blame to any individual or agency. The database is coded into two main categories. One category is Crew Actions. This category covers team skills (assertiveness, vigilance, workload management), errors (action slips, memory lapses, mis-recognition), and aircraft handling (manual handling, system handling). The other category is Influences. This category includes environmental factors (airport facilities, ATC services, ergonomics), personal factors (complacency, distraction, tiredness), organizational factors (commercial pressure, maintenance, training), and informational factors (electronic checklists, information services, manuals). Each of these factors can also be assigned in up to four ways: positive/safety-enhancing, negative/safety-degrading, first party, or third party.
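To make the two-category coding scheme concrete, a single incident coded under a taxonomy like this could be represented as nested key-value data. The field names below are paraphrased from the summary above and are purely illustrative; they are not the actual HFR database schema.

# Hypothetical representation of one incident coded under a two-category
# scheme like the one summarized above (field names are illustrative only).
incident = {
    "crew_actions": {
        "team_skills": [("workload management", "negative")],
        "errors": [("action slip", "negative")],
        "aircraft_handling": [("manual handling", "positive")],
    },
    "influences": {
        "environmental": [("ATC services", "third party")],
        "personal": [("tiredness", "negative")],
        "organizational": [("commercial pressure", "negative")],
        "informational": [],
    },
}

# Simple trend query: count negative/safety-degrading codings per category.
for category, groups in incident.items():
    negatives = sum(1 for items in groups.values()
                    for _, rating in items if rating == "negative")
    print(category, negatives)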
Hofmann, D. A., & Stetzer, A. (1996). A cross-level investigation of factors influencing unsafe behaviors and accidents. Personnel Psychology, 49, 307-339.

A study was conducted to assess the role of organizational factors in the accident sequence in chemical processing plants. Group process, safety climate, and intentions to approach other team members engaged in unsafe acts were the three group-level factors examined. Perception of role overload was an individual-level factor that was also examined. Five hypotheses were made and tested for significance. The first hypothesis was that individual-level perceptions of role overload would be positively related to unsafe behaviors. This hypothesis was significant. The second was that approach intentions would mediate the relationship between group process and unsafe behaviors. This was not well supported. A third hypothesis was that group processes would be negatively associated with actual accident rates. This was marginally supported. The fourth was that safety climate would be negatively related to unsafe behaviors. This was significant. Finally, it was predicted that safety climate would be negatively related to actual accidents. This was significant. A recommendation is made that safety practitioners engage in more systematic organizational diagnosis.

Hollnagel, E. (1993). Human reliability analysis: Context and control. San Diego, CA: Academic Press.

The Contextual Control Model (COCOM) is a control model of cognition that has two important aspects. The first has to do with the conditions under which a person changes from one mode to another. The second concerns the characteristic performance in a given mode, which relates to determining how actions are chosen and carried out. Four control modes are associated with the model. These are scrambled, opportunistic, tactical, and strategic. Scrambled control occurs when the choice of next action is completely unpredictable or random. Opportunistic control is the case where the next action is chosen from the current context alone. It is mainly based on salient features rather than intentions or goals. Tactical control refers to situations where a person's performance is based on some kind of planning and following a procedure or rule. Strategic control means that the person is considering the global context. Two main control parameters are used to describe how a person can change from one control mode to another. They are determination of outcome (succeed or fail) and estimation of subjectively available time (adequate or inadequate). Four additional parameters are the number of simultaneous goals, the availability of plans, the event horizon, and the mode of execution. The number of simultaneous goals parameter refers to whether multiple goals or just a single goal is considered. The availability of plans parameter refers to having pre-defined or pre-existing plans from which the next action can be chosen. The event horizon parameter is concerned with how much of the past and future is taken into consideration when a choice of action is made. Reference to the past is called the history size, while reference to the future is called the prediction length. The mode of execution parameter makes a distinction between subsumed and explicit actions, where a mode of execution can be ballistic/automatic or feedback controlled. The relationships governing how a person can change from one mode to another and the performance characteristics of each control mode are discussed at length. The purpose of COCOM is to model cognition in terms of contextual control rather than procedural prototypes.
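The two main control parameters invite a simple state-machine reading of COCOM. The toy sketch below is only that reading, written for this summary: the transition rules are deliberately simplified assumptions and should not be taken as Hollnagel's published transition criteria.

# Toy illustration of COCOM-style mode transitions driven by the two main
# control parameters (outcome, subjectively available time). The specific
# transition rules here are simplified assumptions, not Hollnagel's tables.
MODES = ["scrambled", "opportunistic", "tactical", "strategic"]

def next_mode(current, outcome_ok, time_adequate):
    i = MODES.index(current)
    if outcome_ok and time_adequate:
        i = min(i + 1, len(MODES) - 1)   # move toward more deliberate control
    elif not outcome_ok and not time_adequate:
        i = max(i - 1, 0)                # degrade toward scrambled control
    return MODES[i]                      # otherwise stay in the current mode

mode = "tactical"
for outcome_ok, time_adequate in [(False, False), (False, False), (True, True)]:
    mode = next_mode(mode, outcome_ok, time_adequate)
    print(mode)   # opportunistic, scrambled, opportunistic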
Hollnagel, E. (1998). Cognitive reliability and error analysis method (CREAM). Oxford: Alden Group.
Hollnagel introduces a second-generation human reliability analysis method. This method has two requirements: it must use enhanced probabilistic safety assessment event trees, and it must go beyond the categorization of success-failure and omission-commission. The purpose of CREAM is to offer a practical approach to both performance analysis and prediction while remaining as simple as possible. The model is expressed in terms of its functions as opposed to its structure. Four aspects of the CREAM method are cited as important. CREAM is bi-directional and allows retrospective analysis as well as performance prediction. The method is recursive rather than strictly sequential. There are well-defined conditions that indicate when an analysis or a prediction is at an end. And finally, the model is based on the distinction between competence and control, which offers a way of describing how performance depends on context. CREAM uses classification groups as opposed to a hierarchical classification scheme; this classification scheme separates causes (genotypes) from manifestations (phenotypes). CREAM also relies on the Contextual Control Model (COCOM) of cognition, which is an alternative to information processing models.

ICAO Circular (1993). Investigation of human factors in accidents and incidents. 240-AN/144. Montreal, Canada: International Civil Aviation Organization.
The ADREP database records the results of aviation accident investigations conducted by ICAO member states. The information is used to create aviation accident reduction programs. Each aviation accident or incident is recorded as a series of events. Human factors topics are structured into the SHEL model format, which covers the individual, the human-environment interface, the person-person aspect, and the person-software aspect. The SHEL model addresses the importance of human interaction and the use of written information and symbology while simultaneously allowing the Reason accident causation model to be applied.
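A minimal sketch of coding an event against the SHEL interfaces named above. SHEL conventionally stands for Software, Hardware, Environment, Liveware; the liveware-hardware interface is part of the standard model even though the summary above does not list it. The event record and example values are hypothetical.

from enum import Enum

class ShelInterface(Enum):
    LIVEWARE = "the individual (central liveware)"
    LIVEWARE_ENVIRONMENT = "human-environment interface"
    LIVEWARE_LIVEWARE = "person-person aspect"
    LIVEWARE_SOFTWARE = "person-software aspect (procedures, symbology, documents)"
    LIVEWARE_HARDWARE = "person-machine aspect"

# Hypothetical ADREP-style event record tagged with the SHEL interfaces
# judged relevant by the investigator.
event = {
    "event": "crew misread an approach chart",
    "shel": [ShelInterface.LIVEWARE_SOFTWARE, ShelInterface.LIVEWARE],
}
for tag in event["shel"]:
    print(tag.name, "-", tag.value)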
Jensen, R. S., & Benel, R. A. (1977). Judgment evaluation and instruction in civil pilot training (Final Report FAA-RD-78-24). Springfield, VA: National Technical Information Service.
A taxonomy of pilot errors is developed in which three general behavioral categories are specified. The first category is procedural activities. Flight activity examples under this category are setting switches, selecting frequencies, programming a computer, and making communications. These activities are characterized as discrete events that involve cognitive processes. The second category is perceptual-motor activities, which involve continuous control movements in response to what a pilot sees in the environment. The third category is decisional activities, which involves cognitive activities and judgments and is the most difficult aspect to handle in realistic flight environments. Using this taxonomy, total percentages of fatal and non-fatal accidents in each category were calculated for a four-year period. Procedural activities were responsible for 4.6% of the fatal and 8.6% of the non-fatal accidents. Perceptual-motor activities were responsible for 43.8% of the fatal and 56.3% of the non-fatal accidents. Decisional activities were responsible for 51.6% of the fatal and 35.1% of the non-fatal accidents.

Johnson, W. B., & Rouse, W. B. (1982). Analysis and classification of human errors in troubleshooting live aircraft power plants. IEEE Transactions on Systems, Man, and Cybernetics, SMC-12(3), 389-393.
Two experimental studies were used to develop and evaluate a scheme for classifying human errors in troubleshooting tasks. The experiments focused on errors in diagnosis by advanced aviation maintenance trainees. Experimenters were able to decrease the number of errors with experimental changes. A modification of the classification system of van Eekhout and Rouse (1982) was used in the second experiment to classify errors into five general categories: observation of state errors, choice of hypotheses errors, choice of procedure errors, execution of procedure errors, and consequences of previous errors. The new classification system led to a redesign of the training program and a decrease in the frequency of particular types of human error.

Johnson, W. G. (1980). MORT: Safety assurance systems. New York: Marcel Dekker, Inc.
The MORT (management oversight and risk tree) logic diagram is a model of an ideal safety program that is useful for analyzing specific accidents, evaluating and appraising safety programs, and indexing accident data and safety literature. MORT is useful in safety program management for three reasons: it prevents safety-related oversights, errors, and omissions; it identifies and evaluates residual risks and their referral to appropriate management levels for action; and it optimizes the allocation of safety resources to programs and specific controls. MORT is basically a diagram that presents a schematic representation of a dynamic, idealized safety system model using fault tree analysis. Three levels of relationships exist that aid in the detection of omissions, oversights, and defects: generic events, basic events, and criteria. Furthermore, MORT explicitly states the functions that are necessary to complete a process, the steps to fulfill a function, and the judgment criteria. A step-by-step outline is provided for using the MORT system, and the system is illustrated with examples. A major fault with MORT is described as affirmation of the consequent, the fallacy of inferring the truth of an antecedent from the truth of the consequent.

Kahneman, D., & Tversky, A. (1984). Choices, values, and frames. American Psychologist, 39(4), 341-350.
The paper discusses the cognitive and psychophysical factors of choice in risky and riskless contexts. A hypothetical value function is developed that has three important properties: it is defined on gains and losses rather than on total wealth, it is concave in the domain of gains and convex in the domain of losses, and it is considerably steeper for losses than for gains. This last property has been labeled loss aversion. Three main points are made. First, the psychophysics of value lead to risk aversion in the domain of gains and risk seeking in the domain of losses. Second, risk-averse and risk-seeking decision making can be manipulated by the framing of relevant data. Third, people are often risk seeking in dealing with improbable gains and risk averse in dealing with unlikely losses.
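A minimal sketch of a value function with the three properties described above. The power-law form and the parameter values are illustrative assumptions drawn from common prospect-theory parameterizations, not figures taken from this paper.

def value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    """Illustrative value function: defined on gains/losses, concave for gains,
    convex for losses, and steeper for losses (loss aversion).
    Parameter values are assumptions for the sketch."""
    if x >= 0:
        return x ** alpha               # concave in the domain of gains
    return -lam * ((-x) ** alpha)       # convex in losses, scaled by loss aversion

# A loss of 100 looms larger than a gain of 100 feels good:
print(value(100), value(-100))   # approx. 57.5 vs -129.5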
Kashiwagi, S. (1976). Pattern-analytic approach to analysis of accidents due to human error: An application of the ortho-oblique-type binary data decomposition. J. Human Ergol., 5, 17-30.
An ortho-oblique-type binary data decomposition is proposed as a means of classifying patterns of human error. The method is described mathematically and then applied to accidents in freight-car classification yard work. The ortho-oblique type of binary data decomposition is useful because it tends to produce results that are easily interpretable from the empirical point of view. The main reason for adopting the method is that it allows data in the form of documents to be made feasible for numerical classification through the use of binary data matrices. The analysis of the data showed that there are specific patterns of relevant and background conditions for most accidents that are due to human error.

Kayten, P. J. (1989). Human performance factors in aircraft accident investigation. Human Error Avoidance Techniques Conference Proceedings (pp. 49-56). Warrendale, PA: Society of Automotive Engineers.
The author examines the evolution of human performance investigation within the National Transportation Safety Board (NTSB). The importance of the background of the accident investigator is explored; an argument is made that a background in the domain of the accident is helpful but not required to be effective. Which facts should be collected, which can be ignored, and which require further consideration are discussed. It is stressed that investigative techniques and analytic methods still need to be improved to better manage human error.

Kirwan, B. (1998). Human error identification techniques for risk assessment of high risk systems. Part 1: Review and evaluation of techniques. Applied Ergonomics, 29(3), 157-177.
This first part of a two-part paper outlines thirty-eight approaches to error identification. They are categorized by the type of error identification approach used and then critiqued against a broad range of criteria. Trends and research needs are noted, along with the identification of viable and non-viable techniques. An error is broken down into three major components. The first component is the external error mode, which refers to the external manifestation of the error. The second component is the performance shaping factors, which influence the likelihood of an error occurring. The third component is the psychological error mechanism.
This is the internal manifestation of the error. The author goes on to recognize seven major error types that appear to be of interest in the current literature: slips and lapses, cognitive errors (diagnostic and decision-making errors), errors of commission, rule violations, idiosyncratic errors, and software programming errors. To show the general orientation or form of each error identification technique, five broad classifications have been developed: taxonomies, psychologically based tools, cognitive modeling tools, cognitive simulations, and reliability-oriented tools. The different approaches were also classified by their analytic method: checklist-based approaches, flowchart-based approaches, group-based approaches, cognitive psychological approaches, representation techniques, cognitive simulations, task analysis linked techniques, affordance-based techniques, error of commission identification techniques, and crew interactions and communications. Ten important criteria for evaluating the different techniques are laid out: comprehensiveness of human behavior, consistency, theoretical validity, usefulness, resources (actual usage, training time required, requirement of an expert panel), documentability, acceptability (usage to date, availability of technique), HEI output quantifiability, life cycle stage applicability, and primary objective of the technique. Some main techniques are identified that could be useful for general practice, but it is pointed out that no single technique is sufficient for all of a practitioner's needs. It is suggested that a framework-based or toolkit-based approach would be most beneficial.

Kirwan, B. (1998). Human error identification techniques for risk assessment of high risk systems. Part 2: Towards a framework approach. Applied Ergonomics, 29(5), 299-318.
This second paper in the series describes framework-based and toolkit-based approaches to human error identification in the nuclear power and reprocessing industries, and considers their advantages and disadvantages. Framework approaches try to deal with all human error types in an integrative way by using a wide array of tools and taxonomies that have been found to be effective. The Human Error and Recovery Assessment system (HERA) is a framework approach outlined in this paper. The HERA system is a document and a prototype software package. The paper describes in detail only the procedure for skill- and rule-based error identification. The document is the formal system and has main modules or functional sections. One such module is scope analysis and critical task identification, which deals with factors to consider, logistical and otherwise, along with the phases of operation to examine. A second module is task analysis; initial task analysis and Hierarchical Task Analysis are the two major forms of task description used and described. A third module is skill- and rule-based error identification. For this module, nine error identification checklists are used that may have some overlap.
These checklists are explained in some detail and include mission analysis, operations level analysis, goals analysis, plans analysis, error analysis, performance shaping factor based analysis, psychological error mechanism based analysis, Human Error Identification in Systems Tool (HEIST) analysis, and human error HAZOP. The five remaining modules, which are not explained in detail, are diagnostic and decision-making error identification, error of commission analysis, rule violation error identification, teamwork and communication error identification, and integration issues. The toolkit approach seeks to ensure that all relevant error types are discovered by using several existing techniques. It is also pointed out that there may be a useful synergistic relationship between human error analysis and ergonomics evaluation.

Kletz, T. (1992). Hazop and hazan: Identifying and assessing process industry hazards. Bristol, PA: Hemisphere Publishing.
Hazard and operability study (HAZOP) is a technique for identifying hazards without waiting for an accident to occur; it is a qualitative assessment. A series of guide words is used in HAZOP to explore types of deviations, possible causes, consequences, and actions required. Hazard analysis (HAZAN) is a technique for estimating the probability and consequences of a hazard and comparing them with a target or criterion; it is a quantitative assessment. HAZAN contains three steps. The first is to estimate the likelihood of an incident. The second is to estimate the consequences to employees, to the public and environment, and to the plant and profits. The third is to compare these results with a target or criterion to decide whether action is necessary to reduce the probability of an occurrence.

Kubota, R., Ikeda, K., Furuta, T., & Hasegawa, A. (1996). Development of dynamic human reliability analysis method incorporating human-machine interaction. Probabilistic Safety Assessment and Management '96 (pp. 535-540). New York: Springer.
The authors describe an updated dynamic human reliability analysis method that considers interactions within the plant. It compares and evaluates the response time between cases where the safety limit of the plant is quickly reached and cases where it is not. The proposed cognition mechanism borrows from Monte Carlo calculation using the probabilistic network method and from Rasmussen's decision making model. The authors intend the new dynamic human reliability analysis method to replace the THERP (technique for human error rate prediction) and TRC (time reliability correlation) methods.

Lasswell, H. D. (1948). The structure and function of communication in society. In L. Bryson (Ed.), The communication of ideas (pp. 37-51). US: Harper and Row.
The 'Lasswell formula' describes an act of communication by asking these questions:
(1) Who
(2) Says what
(3) In which channel
(4) To whom
(5) With what effect
Three functions are performed when employing the communication process in society: surveillance of the environment, correlation of the components of society in making a response to the environment, and transmission of the social inheritance. This formula (model) can address errors in terms of aspects of a message, its medium, and the expectations of the sender or receiver.
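A minimal sketch of the Lasswell formula as a record for coding one communication event. The five fields follow the questions listed above; the field names and the example values are assumptions made for illustration.

from dataclasses import dataclass

@dataclass
class CommunicationAct:
    """One communication event, decomposed along Lasswell's five questions."""
    who: str             # sender
    says_what: str       # message content
    in_which_channel: str
    to_whom: str         # receiver
    with_what_effect: str

# Hypothetical coding of a flight-deck read-back error:
act = CommunicationAct(
    who="controller",
    says_what="descend and maintain flight level 240",
    in_which_channel="VHF voice radio",
    to_whom="flight crew",
    with_what_effect="read back as 220; altitude deviation",
)
print(act)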
Laughery, K. R., Petree, B. L., Schmidt, J. K., Schwartz, D. R., Walsh, M. T., & Imig, R. G. (1983). Scenario analyses of industrial accidents. Sixth International System Safety Conference (pp. 1-20).
An analytic procedure for accidents is developed that is based on two contentions. The first is that it is necessary to answer the question, "What happened?" The second is that it is important to recognize that all accidents, no matter how minor, represent a valuable source of data. Four categories of variables exist within the method. The first category is demographic variables, which includes aspects such as gender, job classification, the day, the location, etc. The second category is labeled accident scenario code; this includes the prior activity, the accident event, the resulting event, the injury event, the agent of the accident, and the source of injury. The third category deals with injury variables, broken down into the body part injured, the injury type, and the injury severity. The final category is labeled causal factors, broken down further into human causes and equipment/environment causes. The analytic procedures consider frequency, severity, and the potential for effective interventions. The analyses used are a frequency analysis and a scenario analysis that describes accident patterns.

Macwan, A., & Mosleh, A. (1994). A methodology for modeling operator errors of commission in probabilistic risk assessment. Reliability Engineering and System Safety, 45, 139-157.
A methodology is described that incorporates operator errors of commission into nuclear power plant probabilistic risk assessments (PRA). An initial condition set is obtained by combining performance influencing factors with information taken from the plant PRA, operating procedures, information on plant configuration, and physical and thermal-hydraulic information. These initial condition sets are fed into the primary tool of the methodology, called Human Interaction TimeLINE (HITLINE), which generates sequences of human action, including errors, over time. At each branching point of the HITLINE, mapping rules are used to relate performance influencing factors to errors, and a quantification scheme is used to assign weights at each of the branching points. A sample exercise is performed using the methodology and is validated in terms of the current PRA framework.

Mangold, S. J., & Eldredge, D. (1993). An approach to modeling pilot memory and developing a taxonomy of memory errors. In R. S. Jensen & D. Neumeister (Eds.), Proceedings of the Seventh International Symposium on Aviation Psychology (263-268). Columbus, OH: The Ohio State University.
The methodology used to develop a taxonomy of memory errors in pilots is reviewed. It is based on the connectionist approach to cognitive functioning. Five categories of memory-related key terms were developed; the key words reflect the types of breakdowns that can occur in the memory process. The first category is information encoding errors, defined as failures to encode relevant information so that it can be accessed at a later time. The second category is meaning structure errors, which are memory errors that come from problems with representational structures. Processing competition errors are a third category; these errors have to do with the cognitive system being busy with one task and failing to adequately manage a second task.
A fourth category is information retrieval errors, described as failures to achieve the same cognitive state at information retrieval as was present when the information was encoded. The final category is artifact-induced errors, which come as a result of the complex demands of the advanced-automation cockpit.

Marteniuk, R. G. (1976). Information processing in motor skills. New York: Holt, Rinehart and Winston.
This book presents an information processing model. The basic human performance model discussed has three major mechanisms that mediate between information in the environment and movement. The perceptual mechanism is the first one described; it receives environmental information from the senses. Perception is argued to have three general classes of processes: sensory capacities, information selection and prediction, and memory. The second mechanism is the decision mechanism, which deals with deciding on a plan of action given the information currently available. The third mechanism is the effector mechanism, which organizes a response and issues the motor commands to the muscular system. It is emphasized that feedback information is an important part of the model, allowing correction in the effector mechanism if there is enough time. Memory also plays a crucial role in the model and interacts with the perceptual, decision, and effector mechanisms. Two types of skills are identified that can be analyzed using the model. Open skills occur in environments where the conditions under which the skill is performed are continually changing in space, causing increased time pressure and stress. Closed skills occur in environments where the critical cues for the performance of the skill are static or fixed in one position.

Maurino, D. E., Reason, J., Johnston, N., & Lee, R. B. (1995). Widening the search for accidental causes: A theoretical framework. In Beyond aviation human factors: Safety in high technology systems (pp. 1-30). Vermont: Ashgate.
This chapter outlines a theoretical framework that seeks to provide a principled basis both for understanding the causes of organizational accidents and for creating a practical remedial toolbag that will minimize their occurrence. The framework traces the development of an accident sequence. It considers organizational and managerial decisions, conditions in various workplaces, and the personal and situational factors that lead to errors and violations. Active and latent failure pathways to an event are identified. Events are defined as the breaching, absence, or bypassing of some or all of the system's various defenses and safeguards. Within the framework, organizational pathogens are introduced into a system, where they follow two main pathways to the workplace. In the first pathway the pathogens act upon the defenses, barriers, and safeguards to create latent failures. In the second pathway the pathogens act upon local working conditions to promote active failures.
McCoy, W. E., III, & Funk, K. H., II. (1991). Taxonomy of ATC operator errors based on a model of human information processing. Proceedings of the 6th International Symposium on Aviation Psychology (pp. 532-537). Columbus, OH: The Ohio State University, The Aviation Psychology Laboratory.
An analysis of accidents was run that produced a classification of ATC errors based on a human information processing model. The errors can be further explained in terms of inherent human limitations such as working memory capacity and duration limits. The authors conclude that it is advisable to develop a set of systematic design strategies that consider the propensity of human beings to make errors and that try to mitigate the adverse consequences of such errors.

McRuer, D. (1973). Development of pilot-in-the-loop analysis. AIAA Guidance and Control Conference (pp. 515-524). Stanford, CA.
A pilot's dynamic characteristics when operating as a controller are affected by several physical, psychological, physiological, and experimental variables, which fall into four categories: task variables, environmental variables, procedural variables, and pilot-centered variables. Pilot-in-the-loop analysis is discussed, and it is argued to depend on four different aspects of research. The first is the experimental determination of human pilot dynamic characteristics for a wide variety of situations and conditions. The second is the evolution of mathematical models and manipulative rules. The third is the relationship between the pilot-vehicle situation and objective and subjective pilot assessments. The fourth is the combination of pilot dynamics and equivalent aircraft mathematical models to treat particular problems. Two fundamental concepts of pilot-in-the-loop analysis are guidance and control, along with the way the pilot sets up and closes the loop.

MEDA Maintenance Error Decision Aid. (1998). (NASA Aviation Data Sources Resource Notebook).
The purpose of MEDA is to give maintenance organizations a better understanding of how human performance issues contribute to error. It does so by providing line-level maintenance personnel with a standardized methodology for analyzing maintenance errors. MEDA provides two levels of analysis: at one level, local factors are analyzed; at another, organizational factors are analyzed. MEDA has many benefits. It uses a human-centered approach to maintenance error event analysis. The local factors analysis gives maintenance ownership of individual event analysis. MEDA uses standardized definitions and data collection processes that are consistent across and within airlines. Data are obtained that allow for organizational trend analysis. The maintenance investigator gains an increased awareness of human performance investigation techniques. A final benefit of MEDA is that it provides a process that improves the effectiveness of the corrective actions chosen.

Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. The Psychological Review, 63(2), 81-97.
The amount of information that a human can process in immediate memory is examined. Important in regard to human error is Miller's testing of absolute judgments of single and multidimensional stimuli. According to Miller, absolute judgment is limited by the amount of information, and immediate memory is limited by the number of items to be remembered.
Nagel, D. C. (1988). Human error in aviation operations. In E. L. Wiener & D. C. Nagel (Eds.), Human factors in aviation (pp. 263-303). New York: Academic Press, Inc.
Nagel argues that an error model needs to meet three criteria: it needs to explain in detail why a human error occurs so that a solution strategy can be developed; it needs to be predictive and not just descriptive; and it must not ignore systematic research in the behavioral and life sciences. A simple three-stage error model called the information-decision-action model is presented to illustrate these criteria. The first stage of the model is the acquisition, exchange, and processing of information. Stage two is where decisions are made and specific intents or plans to act are determined. Stage three is where decisions are implemented and intents acted upon. Nagel points out three approaches that reduce the occurrence and severity of human error in complex human-machine systems. One approach is to design controls, displays, operational procedures, and the like in a careful and informed way. A second approach is to reduce errors through selection and training. A third approach is to design systems to be error-tolerant.

NASA ASRS-Aviation Safety Reporting System Database (1998). (NASA Aviation Data Sources Resource Notebook).
The Aviation Safety Reporting System (ASRS) is an incident database that collects, analyzes, and responds to voluntarily submitted aviation safety incident reports. Valuable human factors information can be obtained from the database. ASRS analysts choose appropriate fields to code each report for the database. Eleven general categories are suggested to the ASRS analysts for classification (a minimal coding sketch follows the list):
(1) Affective or cognitive states—attitude, complacency, fatigue, etc.
(2) Capability—inadequate certification, unfamiliar with operation, etc.
(3) Circumstances affecting human performance—equipment design, noise, workload, etc.
(4) Distraction—checklist, radio communication, socializing, etc.
(5) Inadequate briefing—cockpit, preflight, etc.
(6) Inadequate planning—inflight, preflight, other
(7) Inadequate technique—air traffic control, communication, flying, etc.
(8) Misread—chart, instrument, publication
(9) Non-adherence to—clearance, instruction, publication
(10) Other behaviors or non-behaviors—altitude callout omitted, perception problem, etc.
(11) Physical state—hypoxia, illness, incapacitation, etc.
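A minimal sketch of ASRS-style report coding, assuming a simple keyword lookup. The eleven category labels come from the list above; the keyword triggers and the helper function are illustrative assumptions, not the ASRS analysts' actual procedure.

ASRS_CATEGORIES = {
    "affective or cognitive states": ["complacency", "fatigue", "attitude"],
    "capability": ["unfamiliar", "certification"],
    "circumstances affecting human performance": ["noise", "workload", "equipment design"],
    "distraction": ["checklist", "radio", "socializing"],
    "inadequate briefing": ["briefing"],
    "inadequate planning": ["planning"],
    "inadequate technique": ["technique"],
    "misread": ["misread"],
    "non-adherence to": ["clearance", "instruction"],
    "other behaviors or non-behaviors": ["callout omitted", "perception"],
    "physical state": ["hypoxia", "illness", "incapacitation"],
}

def suggest_categories(narrative: str) -> list[str]:
    """Suggest candidate categories for an analyst to confirm (not replace)."""
    text = narrative.lower()
    return [cat for cat, cues in ASRS_CATEGORIES.items()
            if any(cue in text for cue in cues)]

print(suggest_categories("Crew fatigue and high workload led to a misread chart."))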
National Transportation Safety Board. (1992). Human performance investigation procedures (vol. III). Washington, DC: Author.
The NTSB's human performance investigation procedure is explained. The NTSB seeks to examine six human performance factors within its investigations: behavioral factors, medical factors, operational factors, task factors, equipment design factors, and environmental factors. Examples of the actual checklists used to examine these factors in accidents are included in the manual.

Navarro, C. (1989). A method of studying errors in flight crew communication. Perceptual and Motor Skills, 69, 719-722.
A method is described that uses the information processing paradigm to study errors in flight crew communication. The taxonomy of errors proposed is based on two dimensions. The first dimension is an evaluation of the type of communication error, classified as having to do with transmission, detection, identification, interpretation, or action linked to the communication. The second dimension evaluates the type of adjustment made: for individuals, this concerns problem-solving by the operator; for interactive environments, this involves problem-solving by a crew. The taxonomy specifically includes transmission of a message, detection of a message, identification of a message, interpretation of a message, and action taken in regard to the message.

Nawrocki, L. H., Strub, M. H., & Cecil, R. M. (1973). Error categorization and analysis in man-computer communication systems. IEEE Transactions on Reliability, R-22(3), 135-140.
The authors examine traditional approaches to human reliability, and a new technique is presented that permits the system designer to derive a mutually exclusive and exhaustive set of operator error categories in a man-computer system. Error categories are defined in terms of process failures and provide a qualitative index suitable for determining error causes and consequences. The new index is tested on a set of data. From this, it is determined that the new methodology offers a designer a systematic means of deriving error categories that appear to be acceptable for systems in which the operator must transform and input data, such as information reduction tasks.

Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall Inc.
The authors define a human information processing system in terms of symbols. Eight important characteristics of a human information processing system are discussed in relation to problem solving tasks. The system contains an active processor, input and output systems, long-term memory, short-term memory, and external memory. Long-term memory has unlimited capacity and is organized associatively, with its contents being symbols and structures of symbols. Short-term memory holds about five to seven symbols. Sensory modalities, processes, and motor patterns are symbolized and handled identically in short-term memory and long-term memory. Overall processing rates are limited by read rates from long-term and extended memory; extended memory is defined as the immediately available visual field. The information processing system's program is structured as a production system, the condition for evocation of a production being the presence of appropriate symbols in short-term memory augmented by the foveal extended memory. A final important characteristic of the system is that a class of symbol structures, or goal structures, is used to organize problem solving.
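A minimal production-system sketch in the spirit of the description above: a production fires when its condition symbols are all present in short-term memory. The rules, symbols, and scenario are invented purely for illustration.

productions = [
    # (condition symbols, symbols to add, symbols to remove)
    ({"goal:start-engine", "switch:off"}, {"switch:on", "goal:check-rpm"}, {"switch:off"}),
    ({"goal:check-rpm", "rpm:stable"}, set(), {"goal:check-rpm"}),
]

def step(stm: set[str]) -> set[str]:
    """Fire the first matching production (one recognize-act cycle)."""
    for conditions, to_add, to_remove in productions:
        if conditions <= stm:
            return (stm - to_remove) | to_add
    return stm

stm = {"goal:start-engine", "switch:off"}
print(step(stm))   # now contains 'switch:on' and 'goal:check-rpm'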
Norman, D. A. (1981). Categorization of action slips. Psychological Review, 88(1), 1-15.
The author concentrates on action errors, specifically slips. Three major categories of slips are addressed: errors in the formation of the intention, errors having to do with faulty activation of schemas, and errors dealing with faulty triggering. The author proposes a model called the activation-trigger-schema system (ATS). It contains a system of activated schemas with a triggering mechanism for determining the appropriate time for activation, and it provides a satisfactory framework for the categorization and analysis of slips. The ATS model is considered novel for five reasons: it combines schemas, activation values, and triggering conditions; it considers the application of motor action sequences; the role of intention is considered; the operation of cognitive systems when several different action sequences are operative simultaneously is considered; and a specific application of the framework to the classification of slips is employed.

Norman, D. A. (1986). Cognitive engineering. In D. A. Norman & S. W. Draper (Eds.), User centered system design (pp. 31-62).
This chapter introduces a theory of action for understanding what the user of a system is doing. The discrepancy between the psychological terms of the user and the physical variables of a system is described in terms of the Gulf of Execution and the Gulf of Evaluation. Bridging the Gulf of Execution is done in four segments that deal with intention formation, specifying the action sequence, and executing the action. Bridging the Gulf of Evaluation consists of comparing the interpretation of the system state with the original goals and intentions.

OASIS: Occurrence Analysis and Safety Information Systems. (1998). (NASA Aviation Data Sources Resource Notebook).
The OASIS system is a database based on the ICAO standard. The major difference is that the OASIS manual provides definitions of the explanatory factors. OASIS is also able to generate safety reports from data entered during an investigation. Explanatory factors are structured into eight categories:
(1) Between people
(2) Human-environment
(3) Human-machine
(4) Human system support
(5) Physical
(6) Physiological
(7) Psychological
(8) Psychosocial

O'Connor, S. L., & Bacchi, M. (1997). A preliminary taxonomy for human error analysis in civil aircraft maintenance operations. Ninth Biennial Symposium on Aviation Psychology.
The authors argue that a reporting scheme for human error analysis needs three elements: it needs to provide the detail and structure of the error form or tool; it needs to provide a method of data collection and a procedure for implementing such a tool; and it needs to provide a storage and utilization mechanism. This paper describes an error taxonomy that tries to provide the detail and structure of human error in aircraft maintenance. Three broad classifications of human error are identified, based on a task-oriented classification that includes maintenance and dispatch activities. The first classification, external error modes, is based on three main activities: repair, service, and inspection/checking. The second classification, performance influencing factors, is split into six main groups: task factors, task support, situational factors, environmental factors, personnel factors, and error agents. The third and final classification, psychological error mechanisms, is based on four models: information processing theory, symbolic processing theory, Endsley's model of the mechanisms of situational awareness, and Rasmussen's skill-rule-knowledge based levels of cognitive control.
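A minimal sketch of a record combining the three classifications named in the entry above. The enum members reproduce the groups from the summary; the record fields, names, and example values are assumptions made for illustration.

from dataclasses import dataclass
from enum import Enum

class ExternalErrorMode(Enum):
    REPAIR = "repair"
    SERVICE = "service"
    INSPECTION_CHECKING = "inspection/checking"

class PerformanceInfluencingFactor(Enum):
    TASK_FACTORS = "task factors"
    TASK_SUPPORT = "task support"
    SITUATIONAL_FACTORS = "situational factors"
    ENVIRONMENTAL_FACTORS = "environmental factors"
    PERSONNEL_FACTORS = "personnel factors"
    ERROR_AGENTS = "error agents"

@dataclass
class MaintenanceErrorReport:
    """One coded maintenance error, combining the three classifications."""
    external_error_mode: ExternalErrorMode
    influencing_factors: list[PerformanceInfluencingFactor]
    psychological_error_mechanism: str   # e.g. drawn from one of the four base models

report = MaintenanceErrorReport(
    ExternalErrorMode.INSPECTION_CHECKING,
    [PerformanceInfluencingFactor.TASK_SUPPORT,
     PerformanceInfluencingFactor.ENVIRONMENTAL_FACTORS],
    "skill-based attentional slip",
)
print(report)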
O'Hare, D. (1992). The "artful" decision maker: A framework model for aeronautical decision making. International Journal of Aviation Psychology, 2(3), 175-191.
This paper reviews the available literature on aeronautical decision making and then proposes a new framework called ARTFUL. The framework is a goal-directed process with five functionally separate components that deal with situational awareness (detection and diagnosis), risk assessment, planning, response selection, and response execution. The framework recognizes three important points. It is acknowledged that most routine decision making arises directly from situation awareness, which then maps directly onto response selection. Errors may arise in the process of response execution in predictable forms such as slips. It is also recognized that as long as the current state is consistent with the current goal state and no other threatening circumstances exist, the current goal will continue to be pursued. The framework can be described and defined by key steps in the process that are linked to decision-making states. Awareness of the situation as a result of monitoring is the first step. From here, the risk of current and alternative courses of action is assessed. This leads to time assessment, which is a critical factor in decision making in dynamic environments. Finally, further options are generated.
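A minimal control-loop sketch over the five ARTFUL components named above. The function names, the threshold logic, and the example are illustrative assumptions, not O'Hare's own algorithm.

def decision_cycle(situation: dict) -> str:
    awareness = detect_and_diagnose(situation)        # situational awareness
    if awareness["consistent_with_goal"] and not awareness["threats"]:
        return "continue pursuing current goal"
    risk = assess_risk(awareness)                      # risk assessment
    plan = generate_plan(awareness, risk)              # planning
    response = select_response(plan)                   # response selection
    return execute_response(response)                  # response execution

def detect_and_diagnose(s):  return {"consistent_with_goal": s.get("on_track", True),
                                     "threats": s.get("threats", [])}
def assess_risk(a):          return {"level": "high" if a["threats"] else "low"}
def generate_plan(a, r):     return ["divert"] if r["level"] == "high" else ["hold course"]
def select_response(plan):   return plan[0]
def execute_response(r):     return f"executing: {r}"

print(decision_cycle({"on_track": False, "threats": ["weather ahead"]}))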
O'Hare, D. (in press). The "wheel of misfortune": A taxonomic approach to human factors in accident investigation and analysis in aviation and other complex systems. Ergonomics.
The Reason model of human error has been very influential in accident investigation because of its complexity and breadth. However, a major criticism of using it as an accident causation model is its linear sequence of levels, rather than consideration of intersecting influences from various points. A revised theoretical model and associated classification framework, named the Wheel of Misfortune, is proposed to help guide the accident investigation process. There are three concentric spheres in this model. The innermost circle represents the actions of front-line personnel. The middle circle represents local precipitating conditions. The outermost circle represents the global conditions generated by organizations. Actions of the individual operator are described in terms of an internal function taxonomy, which includes the Skill-Rule-Knowledge framework that Rasmussen developed. The local conditions circle includes factors that may be critical in the breakdown of human performance in complex systems, such as weather and the internal states of the flightcrew. The global conditions circle considers the context within which the task activity takes place, including organizational processes. The model has three potentially valuable functions. First, the concentric spheres are better than a linear sequence of factors for representing accident causation, providing an alternative to Reason's Swiss cheese model; in the wheel of misfortune, the strength of a system is determined by the outer shell of the model. Second, the model is good for directing the attention of the investigator to specific questions within the layers of concern, such as local actions, the immediate realities of the operational environment, and influences of organizational functioning. A final benefit is that the model is expressed in terms of general processes that are independent of functioning within any specific domain, which allows information from other models to be used in this framework. The model is similar to the "Taxonomy of Unsafe Operations" model of Shappell and Wiegmann, but uses a representation at a higher level of abstraction that gives greater comprehensiveness and parsimony.

O'Hare, D., Wiggins, M., Batt, R., & Morrison, D. (1994). Cognitive failure analysis for aircraft accident investigation. Ergonomics, 37(11), 1855-1869.
Two studies were conducted to investigate the applicability of an information processing approach to human failure in the aircraft cockpit. In the first study, the authors attempt to validate Nagel's three-stage information processing model of human performance. The model confirmed that decisional factors are extremely important in fatal accidents, but it is determined that Nagel's model is an oversimplification: there are at least five, not three, distinct categories of errors that can occur in the cockpit. These are perceptual, decisional, procedural, monitoring, and handling errors. In the second study, the authors develop a more detailed analysis of cognitive errors based on a theoretical model proposed by Rasmussen and further developed by Rouse and Rouse. A taxonomic algorithm derived from Rasmussen's work was used to classify information processing failures. The algorithm focused on structural and mechanical errors, information errors, diagnostic errors, goal errors, strategy errors, procedure errors, and action errors.

Paradies, M. (1991). Root cause analysis and human factors. Human Factors Society Bulletin, 34(8), 1-6.
Root cause analysis attempts to achieve operator excellence by establishing an aggressive program to review accidents, determine their root causes, and take prompt corrective action. Event investigation systems seem to be successful when they contain certain basic characteristics: they need to identify the event's sequence, set a goal of finding a fixable root cause, avoid placing blame, be easy for investigators to learn and use, be easy for managers to understand, and provide an easily understood graphic display of the event for management review.

Pedrali, M. (1997). Root causes and cognitive processes: Can they be combined in accident investigation? Ninth International Symposium of Aviation Psychology.
A methodological approach is proposed that relies on a model of cognition and a classification of human errors. The intent of the approach is to reconstruct the process of cognition through which latent failures give rise to active failures. A major concern with modern methodological approaches is that they may stray too far from the context of the accident. A classification scheme is devised that distinguishes three categories. The first category is person-related causes, which deals with specific cognitive functions of people and general person-related functions. The second category, system-related causes, focuses on training, equipment, procedure, and interface issues. The third category is environment-related causes, which includes ambient conditions, communication, organization, and working conditions. The main principles of the HERMES model were used to create a prototype software tool called DAVID (Dynamic Analysis of Video in Incident studies).

Petersen, D., & Goodale, J. (1980). Readings in industrial accident prevention. New York: McGraw-Hill.
This book is a forum for debate and information sharing on key issues in accident causation and prevention.
The first part of the book deals with the basic philosophy of accident prevention. Some issues examined are the differences between unsafe acts and unsafe conditions. Models of the accident phenomenon are discussed, and the pros and cons of cost-benefit analyses are argued. The idea of an "injury tax" to help offset some problems of a company's unwillingness to increase safety is presented. Important principles and factors in attempting to control accidents are also explained. The second part of the book examines accident prevention methods in stages. Data collection and analysis is an important stage; issues examined are mathematical evaluations and creating a safety program priority system. Systems approach issues are examined in separate papers. Monitoring, motivating, and training are discussed because of their relevance to preventing accidents. The third section of the book covers miscellaneous subjects such as the professionalism of investigators and management, insurance issues, risk management, and the effectiveness of OSHA.

Ramsey, J. D. (1985). Ergonomic factors in task analysis for consumer product safety. Journal of Occupational Accidents, 7, 113-123.
The author proposes a model that identifies contributing factors to accidents by following the information processing steps of an accident sequence and listing the factors that affect each stage of the process. The accident sequence model has four levels. If any level's criteria are not met, the chance of an accident occurring increases; if a level's criteria are met, the sequence progresses to the next level, until a state of increased chance of no accident occurs. When exposure to a hazardous product occurs, a person goes through four levels in order, assuming the criteria for each level are met. Level one is perception of the hazard, which involves sensory skills, perceptual skills, and the state of alertness. Level two is cognition of the hazard; factors affecting this are experience and training, mental abilities, and memory abilities. Level three is the decision to avoid the hazard; factors affecting this are, again, experience and training, along with attitude and motivation, risk-taking tendencies, and personality. The fourth and final level is the ability to avoid the hazard; relevant factors here are anthropometric, biomechanical, and motor capabilities.

Rasmussen, J. (1981). Models of mental strategies in process plant diagnosis. In J. Rasmussen and W. B. Rouse (Eds.), Human detection and diagnosis of system failures (pp. 241-258). New York: Plenum Press.
The author states that the ultimate purpose of diagnosis in process plant control is to link the observed symptoms to the actions that will serve the current goal properly. The paper contrasts a topographic search with a symptomatic search for locating problems. The author refutes the trend of designing man-machine interfaces toward a presentation of measured variables on visual display units as bar graphs and/or mimic displays while also attempting to unload the operator through alarm analysis and reduction. An optimal computer-based design is proposed in which the sharp distinctions between the functions of alarm and safety systems, of control systems, and of operators disappear. A key role of the computer will be as a partner of the operator in higher-level supervisory control.
Rasmussen, J. (1982). Human errors: A taxonomy for describing human malfunction in industrial installations. Journal of Occupational Accidents, 4, 311-333.
A taxonomy for event analysis is presented. The taxonomy recognizes that error mechanisms and failure modes depend on mental functions and knowledge that are activated by subjective factors; they are not directly observed but are inferred. A model of human information processing is needed to relate elements of human decision making and action to the internal information processes for which generic psychological mechanisms and limitations can be identified. Such a model was developed that draws on a distinction between three levels of behavior. The skill-based domain of behavior includes subconscious routines and performance controlled by stored patterns of behavior in a time-space domain. The rule-based domain of behavior includes performance in familiar situations and is controlled by stored rules for the coordination of subroutines. The knowledge-based domain of behavior occurs in unfamiliar situations where actions must be planned from an analysis; in this domain, decisions need to be based on knowledge of the functional and physical properties of the system while also giving importance to the priority of the various goals. It is possible for the same required mental function to be served by different information processes, each with its own error mechanisms. A five-dimension, multi-facet classification system is described for an accidental chain of events; the dimensions include external causes, internal failure mechanisms, internal mental functions failed, external modes of action failure, and external tasks. Categories of the taxonomy directly related to inappropriate human performance are listed below (a coding sketch follows the list):
(1) Personnel task—identification of the task performed
(2) External mode of malfunction—the immediately observable effect of the human malfunction
(3) Internal human malfunction—the internal mental function of a person's decision making that was not performed as required by the task
(4) Mechanisms of human malfunction
(5) Causes of human malfunction—possible external causes of inappropriate human action
(6) Performance shaping and situational factors—general conditions that can influence error probability but do not cause errors in and of themselves
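A minimal record type for coding one event along the facets listed above. The field names follow the list; the example values are invented for illustration only.

from dataclasses import dataclass

@dataclass
class MalfunctionRecord:
    """One event coded along the facets listed above (example values invented)."""
    personnel_task: str                 # task being performed
    external_mode_of_malfunction: str   # immediately observable effect
    internal_human_malfunction: str     # mental function not performed as required
    mechanism_of_malfunction: str       # e.g. skill-, rule-, or knowledge-based breakdown
    causes_of_malfunction: str          # possible external causes
    performance_shaping_factors: list[str]

record = MalfunctionRecord(
    personnel_task="isolate pump for maintenance",
    external_mode_of_malfunction="wrong valve closed",
    internal_human_malfunction="identification of system state",
    mechanism_of_malfunction="rule-based: familiar procedure applied to the wrong unit",
    causes_of_malfunction="ambiguous labelling",
    performance_shaping_factors=["time pressure", "night shift"],
)
print(record.external_mode_of_malfunction)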
Reason, J. (1979). Actions not as planned: The price of automatization. In G. Underwood (Ed.), Aspects of consciousness (pp. 67-89). London: Academic Press.
The author takes a detailed look at the minor slips and lapses that humans make in everyday life, and tries to sort out some of the conceptual confusion that exists between an act and its consequences. A brief natural-history experiment was performed in which subjects were asked to record unintended or absent-minded actions. Two predictions are made. First, test failures will occur when the open-loop mode of control coincides with a critical decision point where the strengths of the motor programs beyond that point are markedly different. Second, when errors occur, they will involve the unintended activation of the strongest motor program beyond the node. The discussion then switches to 'slips of action'. It is argued that slips of action have certain consistencies: they occur almost exclusively during the automatic execution of highly practiced and 'routinized' activities; they often result from the misdirection of focal attention; and they usually take the form of some frequently and recently performed behavioral sequence.

Reason, J. (1990). Human error. Cambridge: Cambridge University Press.
The author presents a framework of accident causation. The framework is an expansion of a "resident pathogen" metaphor, meaning that causal factors are present in a system before an accident sequence actually occurs. This leads to a differentiation of active failures and latent failures. An important premise of the framework is that accidents stem from fallible decisions made by designers and decision makers. Five basic elements of a system are identified and then related to breakdowns in the system. Decision makers are those who set goals for the system and can make fallible decisions (latent failures). Line management implements the strategies of the decision makers and is subject to deficiencies itself (latent failures). Preconditions are conditions that permit efficient and safe operations and can be precursors of unsafe acts (latent failures). Productive activities are the actions performed by man and machine and can lead to unsafe acts (active failures). Defenses are safeguards against foreseen hazards and can be inadequate (active and latent failures). Unsafe acts can be broken down into different types. If actions are unintended, they can be slips or lapses. Slips are attentional failures that can be caused by intrusions, omissions, reversals, misordering, or mistiming. Lapses are memory failures that lead to omitting planned items, place-losing, and forgetting. If actions are intended, they are classified as either mistakes or violations. Mistakes are either rule-based, where there is a misapplication of a good rule or an application of a bad rule, or knowledge-based. Violations are either routine, exceptional, or even acts of sabotage. It is argued that accidents occur as the various levels of the framework are penetrated: latent errors combined with local triggering events lead to accidents and incidents.
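A minimal sketch that classifies an unsafe act along the distinctions summarized above (unintended acts become slips or lapses; intended acts become mistakes or violations). The question-based decision logic is an illustrative reading of the summary, not Reason's own procedure.

def classify_unsafe_act(intended: bool, memory_failure: bool = False,
                        deliberate_deviation: bool = False,
                        rule_based: bool = True) -> str:
    """Illustrative decision logic over the distinctions summarized above."""
    if not intended:
        return "lapse (memory failure)" if memory_failure else "slip (attentional failure)"
    if deliberate_deviation:
        return "violation (routine, exceptional, or sabotage)"
    return "rule-based mistake" if rule_based else "knowledge-based mistake"

print(classify_unsafe_act(intended=False, memory_failure=True))       # lapse
print(classify_unsafe_act(intended=True, deliberate_deviation=True))  # violation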
Rockwell, T. H., & Giffin, W. C. (1987). General aviation pilot error modeling – again? Proceedings of the 4th International Symposium on Aviation Psychology (pp. 712-720). Columbus, OH: The Ohio State University, The Aviation Psychology Laboratory.
Process models are created to explore three types of pilot error in general aviation: visual flight rules (VFR) flight into instrument meteorological conditions (IMC), pilot fuel mismanagement, and pilot response to critical in-flight events. The models are intended to explain large percentages of accidents of a specific type, to pinpoint specific research needs in order to understand and verify elements in the models, and to create implementable countermeasures that reduce the probability of pilot error. The models depict decisional processes, list typical errors and contributing factors, and propose needed research.

Rouse, W. B. (1983). Models of human problem solving: Detection, diagnosis, and compensation for system failures. Automatica, 19, 613-625.
The paper looks at the role of the human operator as a problem solver in man-machine systems. Various models of human problem solving are examined and a design for an overall model is outlined. The overall model attempts to capture the whole of problem solving and to be operationalized within specific task domains. A basic mechanism of the proposed model incorporates pattern recognition models. Problem solving occurs on three general levels: recognition and classification, planning, and execution and monitoring. The model can produce problem-solving behavior in a top-down and a bottom-up manner, almost simultaneously on several levels.

Rouse, W. B., & Rouse, S. H. (1983). Analysis and classification of human error. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13(4), 539-549.
There are two major approaches to human error, probabilistic and causal. This paper deals with the causal approach, which focuses on why errors occur rather than just what occurs. It is argued that classification schemes can generally be categorized in one of three ways: behavior-oriented, task-oriented, or system-oriented. Behavior-oriented schemes range from those emphasizing basic human information processing to those that focus on types of behavior occurring in particular task domains. Task-oriented schemes focus on information transfer problems, distraction events, and discriminating among types of tasks. System-oriented schemes apply categories of a relatively broad nature that cover a series of tasks within the domain of a particular system. A methodology is developed and discussed with the goal of analyzing human error in terms of causes as well as contributing factors and events; it borrows heavily from several of the previously discussed classification schemes. There are four general classes of contributing factors for human errors: inherent human limitations, inherent system limitations, contributing conditions, and contributing events. Inherent human limitations include the knowledge and attitude of the operator. Inherent system limitations include the design of controls and displays, the design of dialogues and procedures, and the level of simulator fidelity. Contributing conditions include environmental factors such as noise, excessive workload, frustration, anger, embarrassment, confusion, and operating in degraded modes. Contributing events involve distractions, lack of or misleading communication, sudden equipment failures, and events such as tension release. The proposed methodology is used to reanalyze data reported by Rouse et al. on the design and evaluation of computer-based methods for presenting procedural information, such as checklists for normal, abnormal, and emergency aircraft operations. The results are finer grained and support stronger conclusions than those originally found.

Samanta, P. K., & Mitra, S. P. (1982). Modeling of multiple sequential failures during testing, maintenance and calibration (NUREG/CR-2211). Brookhaven National Lab.
This report looks at the nature of dependence among human failures in a multiple sequential action and how it differs from other types of multiple failures. It is necessary to consider dependent failures in a system because otherwise there are serious doubts about the usefulness of a reliability calculation that considers random events only.
Samanta, P. K., &amp; Mitra, S. P. (1982). Modeling of multiple sequential failures during testing, maintenance and calibration (NUREG/CR-2211). Brookhaven National Laboratory.<BR>
This report looks at the nature of dependence among human failures in a multiple sequential action and how it differs from other types of multiple failures. It is necessary to consider dependent failures in a system because otherwise there are serious doubts about the usefulness of a reliability calculation that considers random events only. There are two types of dependent failures. Common-cause failures are caused by an event outside the group of components but common to them. Cascade failures are caused from within the group, such as a single component failure that results in the failure of all components concerned. A multiple sequential failure during testing and maintenance is modeled by taking into account the processes involved in such a failure. The data suggest that the dependence among failures increases as the number of components in the system increases. Human error causes selective failure of components depending on when the failure started. Since previous models of dependent failures were found to be lacking, two models were developed: the first is very general and does not require any dependent-failure data; the second takes multiple sequential failures into account.<BR>
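To see why ignoring dependence undermines a reliability calculation, the short sketch below compares the probability that three sequential maintenance actions all fail under an independence assumption versus a simple conditional-dependence assumption. The numbers and the coupling model are illustrative only and are not taken from the Samanta and Mitra report.

# Illustrative only: per-action failure probability and an assumed coupling factor.
p = 0.01        # probability a single maintenance action is performed incorrectly
coupling = 0.5  # assumed conditional failure probability once the previous action has failed

# Independence assumption: the three failures are unrelated.
p_all_independent = p ** 3

# Simple sequential dependence: after the first failure, later actions fail with
# probability `coupling` (e.g., the same misunderstanding is repeated on each component).
p_all_dependent = p * coupling * coupling

print(f"P(all three fail), independent: {p_all_independent:.2e}")  # 1.00e-06
print(f"P(all three fail), dependent:   {p_all_dependent:.2e}")    # 2.50e-03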
Sanders, M. S., &amp; McCormick, E. J. (1993). Human error, accidents, and safety. In Human factors in engineering and design (7th ed.) (pp. 655-695). New York: McGraw-Hill, Inc.<BR>
This chapter deals with human error, accidents, and improving safety in the domain of human factors. Human error is defined as an inappropriate or undesirable human decision or behavior that reduces, or has the potential for reducing, effectiveness, safety, or system performance. A review of three classification schemes is performed: Swain and Guttmann’s omission-commission model is reviewed first, a model by Rouse and Rouse second, and Rasmussen’s skill-rule-knowledge-based model third. An argument for three general types of accident causation theories is made: accident-proneness theories, job demand versus worker capability theories, and psychosocial theories. Three aspects of risk perception are examined: the availability heuristic, the “It can’t happen to me” bias, and relative risk. It is noted that warnings and product liability have become important legal issues in today’s society; as a result, human factors has a large role to play in these issues. The remainder of the chapter discusses Sanders and Shaw’s contributing factors in accident causation (CFAC) model. This model has five important tiers. Tier one concerns itself with management issues. Tier two focuses on the physical environment, equipment design, the work itself, and the social/psychological environment. The third tier involves worker and coworker factors. Tier four is where all unsafe behaviors from the previous tiers are grouped. The fifth tier is the level of chance that leads to an accident. Unique features of CFAC include the emphasis on management and social-psychological factors, the recognition of the human-machine-environment system, and the model’s simplicity and easy comprehension.<BR>
Shappell, S. A., &amp; Wiegmann, D. A. (1995). Controlled flight into terrain: The utility of models of information processing and human error in aviation safety. Proceedings of the Eighth International Symposium on Aviation Psychology, 8, 1300-1306.<BR>
A study of controlled flight into terrain (CFIT) accidents in the U.S. military, covering an 11-year period, was conducted to ascertain why these accidents occur. The four-stage information processing model of Wickens and Flach was used to classify 206 of the 278 pilot causal factors found. These four stages are short-term sensory store, pattern recognition, decision and response selection, and response execution. Reason’s model of unsafe acts was applied to the data and allowed for the classification of 223 of the 278 pilot causal factors. It was concluded that any intervention needs to focus on the decision processes of the pilot, specifically on mistakes and violations.<BR>
Shappell, S. A., &amp; Wiegmann, D. A. (1997). A human error approach to accident investigation: The taxonomy of unsafe operations. International Journal of Aviation Psychology, 7(4), 269-291.<BR>
A framework called the Taxonomy of Unsafe Operations is provided and discussed. This framework bridges the gap between classical theories and practical application by providing field investigators with a user-friendly, common-sense framework that allows accident investigations to be conducted and human causal factors classified. Three levels of failure involving the human component are presented: unsafe supervision, unsafe conditions of operators, and the unsafe acts that operators commit. The framework directly incorporates Reason’s classification of unsafe acts, with three basic error types: slips, which are characteristic of attentional failures; lapses, which come from memory failures; and mistakes, which are defined as intended behavior that does not produce the desired outcome. Mistakes can be further broken down as either rule-based or knowledge-based, as described in Rasmussen’s model. A framework was developed to break down unsafe conditions of the operator. Substandard conditions of the operator are divided into three categories: adverse physiological states, adverse mental states, and physical and/or mental limitations. Substandard practices of the operator are also broken down into three categories: mistakes-misjudgments, crew resource mismanagement, and readiness violations. A framework for unsafe supervision is also developed. One dimension of this framework deals with unforeseen unsafe supervision; examples are unrecognized unsafe operations, inadequate documentation and procedures, and inadequate design. The other dimension deals with known unsafe supervision; examples include inadequate supervision, planned inappropriate operations, failure to correct known problems, and supervisory violations. The usefulness of this cause-oriented taxonomy was demonstrated through its application to a military aviation accident.<BR>
Shappell, S. A., &amp; Wiegmann, D. A. (2000). The human factors analysis and classification system (HFACS) (Report No. DOT/FAA/AM-00/7). Washington, DC: Federal Aviation Administration.<BR>
The Human Factors Analysis and Classification System (HFACS) was originally developed for the U.S. Navy and Marine Corps as an accident investigation and data analysis tool. Since its original development, however, HFACS has been employed by other military organizations (e.g., U.S. Army, Air Force, and Canadian Defense Force) as an adjunct to preexisting accident investigation and analysis systems. To date, the HFACS framework has been applied to over 1,000 military aviation accidents, yielding objective, data-driven intervention strategies while enhancing both the quantity and quality of human factors information gathered during accident investigations. Other organizations, such as the FAA and NASA, have also explored the use of HFACS as a complement to preexisting systems within civil aviation in an attempt to capitalize on gains realized by the military. Specifically, HFACS draws upon Reason’s (1990) concept of latent and active failures and describes human error at each of four levels of failure: (1) unsafe acts of operators (e.g., aircrew), (2) preconditions for unsafe acts, (3) unsafe supervision, and (4) organizational influences. The manuscript provides a detailed description and examples of each of these categories.<BR>
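As a concrete, hypothetical illustration of how the four HFACS levels might be coded for trend analysis, the sketch below tallies causal factors by level. The level names come from the entry above; the tally code and sample data are assumptions made for illustration.

from collections import Counter
from enum import Enum

class HfacsLevel(Enum):
    UNSAFE_ACTS = "unsafe acts of operators"
    PRECONDITIONS = "preconditions for unsafe acts"
    UNSAFE_SUPERVISION = "unsafe supervision"
    ORGANIZATIONAL_INFLUENCES = "organizational influences"

# Hypothetical coded causal factors from a handful of accident reports.
coded_factors = [
    HfacsLevel.UNSAFE_ACTS, HfacsLevel.UNSAFE_ACTS,
    HfacsLevel.PRECONDITIONS, HfacsLevel.UNSAFE_SUPERVISION,
    HfacsLevel.ORGANIZATIONAL_INFLUENCES, HfacsLevel.UNSAFE_ACTS,
]

# Frequency of each level across the coded factors, most common first.
for level, count in Counter(coded_factors).most_common():
    print(f"{level.value}: {count}")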
Siegel, A. I., Bartter, W. D., Wolf, J. J., Knee, H. E., &amp; Haas, P. M. (1984). Maintenance personnel performance simulation (MAPPS) model: Summary description (NUREG/CR-3626). Oak Ridge, TN: Oak Ridge National Laboratory.<BR>
This report describes a human performance computer simulation model developed for the nuclear power maintenance context. The model looks at variables such as the workplace, the maintenance technician, motivation, human factors, and task orientation. Information is provided about human performance reliability pertinent to probabilistic risk assessment, regulatory decisions, and maintenance personnel requirements. The technique allows for the assessment of the tasks that maintenance technicians may perform in a less than satisfactory manner, and of the conditions, or combinations of conditions, that serve to contribute to or alleviate such performance.<BR>
Silverman, B. G. (1992). Critiquing human error: A knowledge-based human-computer collaboration approach. San Diego, CA: Academic Press.<BR>
A model of human error is examined that is useful for the construction of a critic system in artificial intelligence. The model is rooted in the psychological study of expert performance. The adapted model of human error consists of three levels. The outermost layer provides a method to account for external manifestations of errors in human behavior; these occur as cues, such as knowledge rules, models, or touchstones, that need to be followed to reach a correct task outcome. The middle layer determines what causes lead to the erroneous behaviors identified in the outer layer. The inner layer investigates the innermost cognitive reasons causing the error; this is performed by teasing apart the processes and operations that contribute to that cause. The diagnostic graph of cognitive operations leading to human error includes four categories that all fall under the heading of cue-usage error. Cognitive biases are the first category; examples are input filter biases, information acquisition biases, information processing biases, intended output biases, and feedback biases. Accidental slips and lapses are a second category; examples include environment/feedback errors, information acquisition errors, information processing errors, and intended output errors. A third category is cultural motivations; errors in this category are rational actor errors, incrementalism errors, recognition-primed errors, and process control errors. The final category is missing knowledge; examples are initial training errors, knowledge decay errors, and multidisciplinary errors. This model focuses on errors that occur prior to the time pressure and stress that arise during crisis or panic. A framework in which errors arise is presented and described. It is a system with four entities:<BR>(1) The person or expert making the judgments.<BR>(2) The task environment within which the person makes the judgments.<BR>(3) The feedback loop consisting of actions and reactions of people.<BR>(4) The automated critics that try to influence a person’s judgment and decision.<BR>
Singleton, W. T. (1973). Theoretical approaches to human error. Ergonomics, 16(6), 727-737.<BR>
Two types of approaches to human error are discussed. The technological approach involves coping with many problems, with or without laboratory support, and then attempting to generalize the rules of the game in terms of classifications of kinds of problems with associated remedies. The scientific approach is based on the principle that theory is the bridge between experiment and practice. The author discusses different theoretical approaches and extracts useful elements relevant to human error. The approaches examined include the psychoanalytic approach, the stimulus-response approach, field theories, cybernetics, human performance and skill, decision theory, arousal/stress theories, and social theories. The author concludes that errors and accidents are not homogeneous; it is therefore necessary for a practitioner to match the most relevant taxonomy and theory to the particular practical problem. It is suggested that no single method will provide a complete answer, but rather that this matching is where the greatest dividend is likely to be found. From here, a comprehensive unweighted approach to problems is presented.<BR>
Stoklosa, J. H. (1983). Accident investigation of human performance factors. Proceedings of the 2nd International Symposium on Aviation Psychology (pp. 429-436). Columbus, OH: The Ohio State University, The Aviation Psychology Laboratory.<BR>
The paper discusses the factual information necessary for a detailed and systematic investigation of the human performance aspects of an accident. Six profile categories are established: behavioral, medical, operational, task, equipment design, and environmental factors. This concept has been successfully implemented in actual multi-modal accident investigations.<BR>
Sträter, O. (1996). A method for human reliability data collection and assessment. Probabilistic Safety Assessment and Management ‘96 (pp. 1179-1184). New York: Springer.<BR>
A method for the evaluation of plant experience for the probabilistic assessment of human actions is described. The method is able to support root cause analysis in the evaluation of events and to describe human failures with respect to HRA purposes. The method is applied to boiling water reactor events and the results are compared with the data tables of the THERP handbook. The evaluation framework is subdivided into two steps: the decomposition of an event into units called MMS (man-machine systems), and a detailed analysis of the MMS units. It was concluded that it is possible to validate most of the items of the THERP handbook using the new method. The new method is a reasonable procedure for analyzing simulator data as well as for improving human reliability systematically in a wide range of industries (e.g., aviation and power plants).<BR>
Swain, A. D., &amp; Guttmann, H. E. (1980). Handbook of human reliability analysis with emphasis on nuclear power plant applications (NUREG/CR-1278). Albuquerque, NM: Sandia Laboratories.<BR>
The Technique for Human Error Rate Prediction (THERP) model is presented in this handbook. The steps in THERP are to define the system failures of interest; list and analyze the related human operations; estimate the relevant error probabilities; estimate the effects of human errors on the system failure events; and recommend changes to the system and recalculate the system failure probabilities. THERP is concerned with evaluating task reliability, error correction, task effects, and the importance of effects. Probability tree diagrams are the basic tools of THERP.<BR>
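A worked miniature of the probability-tree idea behind THERP follows; the values are invented for illustration and are not taken from the handbook's HEP tables. It considers two sequential operator actions, where the second action may recover an error on the first.

# Illustrative THERP-style event tree for two sequential actions A and B.
hep_a = 0.003              # probability operator performs action A incorrectly
hep_b_given_a_ok = 0.001   # error probability for B when A succeeded
hep_b_given_a_bad = 0.05   # error probability for B when A failed (dependence assumed)

# Simplified success criterion: the task fails only if A fails and B fails to recover it.
p_task_failure = hep_a * hep_b_given_a_bad
p_task_success = 1 - p_task_failure

print(f"P(task failure) = {p_task_failure:.2e}")  # 1.50e-04
print(f"P(task success) = {p_task_success:.6f}")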
Tarrants, W. E. (1965, May). Applying measurement concepts to the appraisal of safety performance. Journal of ASSE, 15-22.<BR>
The author addresses the issue of non-injurious accidents, or near misses, as an important basis for an accident prevention program designed to remove their causes before more severe accidents can occur. The Critical Incident Technique is presented and explained as a way of dealing with these near misses. The Critical Incident Technique is a method that identifies errors and unsafe conditions which contribute to accidents within a given population by means of a stratified random sample of participant-observers selected from within this population. Interviews with people whose jobs are being studied are performed to reveal unsafe errors, conditions, and other critical incidents. The incidents are classified into hazard categories from which accident problem areas are defined. Accident prevention programs are then designed to deal with the critical incidents. The technique is reapplied periodically to detect new problem areas and to measure the effectiveness of the accident prevention program that was designed. The technique of behavior sampling is also reviewed as an accident measure; it is concerned with the acts a person is engaged in at the moment of an accident.<BR>
Van Eekhout, J. M., &amp; Rouse, W. B. (1981). Human errors in detection, diagnosis, and compensation for failures in the engine control room of a supertanker. IEEE Transactions on Systems, Man, and Cybernetics, SMC-11(12), 813-816.<BR>
An error classification system is presented in a marine context that is an extension of Rasmussen’s scheme for classifying human errors in nuclear power plant operations. The general categories of error in the classification system are observation of the system state, identification of a fault, choice of the goal, choice of the procedure, and execution of the procedure. A study was conducted on crews of professional engineering officers to see how they could perform the task of coping with failures in a high-fidelity supertanker engine control room simulator. Two important conclusions were drawn. First, human factors design inadequacies and fidelity problems lead to human errors. Second, a lack of knowledge of the functioning of the basic system, as well as of the automatic controllers, is highly correlated with errors in identifying failures.<BR>
Wickens, C. D., &amp; Hollands, J. G. (2000). Engineering psychology and human performance (3rd ed.). Upper Saddle River, NJ: Prentice Hall.<BR>
The information processing model is a framework that is broken down into a series of stages and feedback loops. Sensory processing is the first stage: information from the environment gains access to the brain, and each sensory system has its own short-term sensory store. Perception is the second stage: raw sensory data are transmitted to the brain and are interpreted and given meaning. Perception is automatic and occurs rapidly, relying on both bottom-up and top-down processing. Cognition and memory is another stage in the model; working memory is part of this stage. This stage is characterized by conscious activities which transform or retain information and are resource limited. From here, some information is transferred to long-term memory. The next two stages are response selection and response execution. Feedback loops are important and necessary for monitoring progress to see whether a task was completed. Attention is another important aspect of the model: attention is a limited resource that can be selectively allocated to desired channels, and it can also be divided between different tasks and mental operations. This model can be used to describe human error as occurring in different stages of the model. Mistakes (knowledge- and rule-based) can arise from problems in perception, memory, and cognition. Slips occur during action execution. Lapses and mode errors are related to memory failures.<BR>
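A minimal sketch (an assumption for illustration, not something given in the text) of how an analyst might map the Wickens and Hollands stages to the error types each stage is associated with in the summary above:

# Stage-to-error mapping implied by the entry above; the dictionary layout is illustrative.
STAGE_ERRORS = {
    "sensory processing": [],
    "perception": ["mistake"],
    "cognition and memory": ["mistake", "lapse", "mode error"],
    "response selection": [],
    "response execution": ["slip"],
}

def stages_for_error(error_type: str):
    """Return the processing stages associated with a given error type."""
    return [stage for stage, errors in STAGE_ERRORS.items() if error_type in errors]

print(stages_for_error("slip"))   # ['response execution']
print(stages_for_error("lapse"))  # ['cognition and memory']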
Wiegmann, D. A., &amp; Shappell, S. A. (1997). Human factors analysis of postaccident data: Applying theoretical taxonomies of human error. International Journal of Aviation Psychology, 7(1), 67-81.<BR>
Three conceptual models of information processing and human error are examined and used to reorganize the human factors database associated with military aviation accidents. The first model was the four-stage information processing model of Wickens and Flach. The second was O’Hare’s adapted version of Rasmussen’s taxonomic algorithm for classifying information processing failures. The third was Reason’s approach to the classification of active failures. It was found that the naval aviation accident database could be reorganized into the three taxonomies of human error with a large degree of success. A general trend was also noted: accidents were primarily associated with procedural and response-execution errors as well as mistakes. The four-stage information processing model accounted for slightly fewer pilot causal factors than did the other two models.<BR>
Wiegmann, D. A., &amp; Shappell, S. A. (1999). A human factors approach to accident analysis and prevention (Workshop manual from the 43rd Human Factors and Ergonomics Society Conference).<BR>
This paper is an expansion of the framework given in Shappell and Wiegmann’s (1997) paper. The only difference is that organizational factors are considered and added into the framework. Three categories of organizational influences exist. Resource management refers to the management, allocation, and maintenance of organizational resources; examples include human resources, monetary resources, and equipment/facility resources. Organizational climate concerns the prevailing attitudes and atmosphere in an organization; important aspects of this category include the structure, policies, and culture of the organization. Operational process is the final category and is defined as the formal process by which things get done in an organization; parts of this category include operations, procedures, and oversight in the organization.<BR>
Williams, J. C. (1988). A data-based method for assessing and reducing human error to improve operational performance. Conference record for the 1988 IEEE Fourth Conference on Human Factors and Power Plants (pp. 436-450). New York, NY: Institute of Electrical and Electronics Engineers.<BR>
The HEART method (Human Error Assessment and Reduction Technique) is used to explore the identities and magnitudes of error-producing factors and provides defensive measures to combat their effects. The HEART method is based on three premises. First, basic human reliability is dependent upon the generic nature of the task to be performed. Second, given “perfect” conditions, this level of reliability will tend to be achieved consistently, with a given nominal likelihood within probabilistic limits. Finally, given that these perfect conditions do not exist in all circumstances, the predicted human reliability may be expected to degrade as a function of the extent to which identified error-producing conditions apply. Rather than treating the error-producing conditions found in practice as merely additive, the HEART methodology assumes that factorial degradation of performance across multiple error-producing conditions is the much more likely outcome.<BR>
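HEART's quantification step is commonly described as multiplying a generic task unreliability by an assessed effect for each applicable error-producing condition (EPC). The sketch below uses that commonly cited form with invented numbers; it is a sketch under those assumptions, not a reproduction of Williams' published tables or values.

# Illustrative HEART-style calculation (invented values, not Williams' published data).
generic_task_unreliability = 0.003   # nominal error probability for the generic task type

# Pairs of (maximum multiplier for the EPC, analyst-assessed proportion of affect, 0..1).
error_producing_conditions = [
    (11.0, 0.4),   # e.g., shortage of time
    (4.0, 0.2),    # e.g., poor feedback from the system
]

hep = generic_task_unreliability
for max_effect, proportion in error_producing_conditions:
    # Assessed effect scales between 1 (no influence) and the EPC's maximum multiplier.
    assessed_effect = (max_effect - 1.0) * proportion + 1.0
    hep *= assessed_effect

print(f"Assessed human error probability: {hep:.4f}")  # 0.0240 with the values above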
Woods, D. D., Pople, H. E., &amp; Roth, E. M. (1990). The cognitive environment simulation as a tool for modeling human performance and reliability (NUREG/CR-5213). Pittsburgh, PA: Westinghouse Science and Technology Center.<BR>
A tool called the Cognitive Environment Simulation (CES) was developed for simulating how nuclear power plant personnel form intentions to act during emergencies. A methodology called the Cognitive Reliability Assessment Technique (CREATE) was developed to describe how CES can be used to provide input to human reliability analyses in probabilistic risk assessment studies. CES/CREATE was evaluated in three separate workshops and was shown to work in the tested scenarios. CES can provide an objective means of distinguishing which event scenarios are likely to be straightforward to diagnose and which are likely to be cognitively challenging, requiring longer to diagnose and more likely to lead to human error.<BR>
Wreathall, J. (1994). Human errors in dynamic process systems. In T. Aldemir et al. (Eds.), Reliability and safety assessment of dynamic process systems (pp. 179-189). New York: Springer.<BR>
The author reviews the nature of human errors and the way they interact with systems in dynamic processes. A framework of accident causation is presented that is heavily based on Reason’s model. The important features of the model are organizational processes, error-producing conditions, unsafe acts, defenses against unsafe acts, latent failures, and the accident itself. Unsafe acts can be separated into different categories: errors of commission and omission, active and latent errors, and slips/lapses, mistakes, and circumventions.<BR>
Wreathall, J., Luckas, W. J., &amp; Thompson, C. M. (1996). Use of a multidisciplinary framework in the analysis of human errors. Probabilistic Safety Assessment and Management ’96 (pp. 782-787). New York, NY: Springer.<BR>
This paper describes an effort to develop and apply a framework for describing the human-system interactions associated with several different technical applications. The fundamental elements of the framework are the PSA model and “plant” state, human failure events, unsafe actions, error mechanisms, performance shaping factors, and plant conditions. The framework was applied in the analysis of several accidents (including the crash of an Air Florida flight in January 1982) and proved to be robust and insightful.<BR>
Acknowledgments and Disclaimer<BR>
This material is based upon work supported by the Federal Aviation Administration under Award No. DTFA 99-G-006. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the Federal Aviation Administration.

mrmmx 发表于 2010-7-10 17:40:20

The Flight Safety Foundation believes we should first make the most of the time available to spread these procedures in developing countries, because many of the accidents that occur there could have been prevented by such procedures. However, the uneven state of safety systems in developed countries shows that we still have a great deal of work to do. At the international aviation safety seminar held in Honolulu this October, two speakers pointed out the lack of attention being paid to maintenance procedures.

涟漪雨 发表于 2010-11-11 10:04:57

Learning from this. Excellent!

晴天不见太阳 发表于 2010-12-2 12:37:33

:) Downloading it to take a look.

醒不来睡不着 发表于 2010-12-2 20:09:48

<P>Learning from all the instructors here.</P>

f214216709 发表于 2010-12-7 09:47:52

Is this the Administration's training material?

dingzhili 发表于 2011-3-6 21:52:59

Thanks for your hard work!

wendellc 发表于 2011-3-11 22:20:21

Thanks for sharing!