High Reliability Organization - History

History

The roots of the HRO paradigm were developed by a group of researchers at the University of California, Berkeley (Todd LaPorte, Gene Rochlin, and Karlene Roberts) who examined aircraft carriers (in partnership with Rear Admiral (ret.) Tom Mercer on the USS Carl Vinson), the Federal Aviation Administration’s Air Traffic Control system (and commercial aviation more generally), and nuclear power operations (Pacific Gas and Electric’s Diablo Canyon reactor). An initial conference at the University of Texas in April 1987 brought researchers together to focus attention on HROs. Further research on each of these three sites included Karl Weick and Paul Schulman. Subsequent research has examined the fire incident command system, Loma Linda Hospital’s Pediatric Intensive Care Unit, and the California Independent System Operator as HROs.

Although they may seem diverse, these organizations share a number of similarities. First, they operate in unforgiving social and political environments. Second, their technologies are risky and present the potential for error. Third, the scale of possible consequences from errors or mistakes precludes learning through experimentation. Finally, to avoid failures these organizations use complex processes to manage complex technologies and complex work. HROs share many properties with other high-performing organizations, including highly trained personnel, continuous training, effective reward systems, frequent process audits, and continuous improvement efforts. Other properties, however, are more distinctive: an organization-wide sense of vulnerability; a widely distributed sense of responsibility and accountability for reliability; widespread concern about misperception, misconception, and misunderstanding that is generalized across a wide set of tasks, operations, and assumptions; pessimism about possible failures; and redundancy and a variety of checks and counterchecks as precautions against potential mistakes.

Defining high reliability and specifying what constitutes a high reliability organization has presented some challenges. Roberts initially proposed that high reliability organizations are a subset of hazardous organizations that have enjoyed a record of high safety over long periods of time. Specifically, she argued: “One can identify this subset by answering the question, ‘how many times could this organization have failed resulting in catastrophic consequences that it did not?’ If the answer is on the order of tens of thousands of times the organization is ‘high reliability’” (p. 160). More recent definitions have built on this starting point but emphasized the dynamic nature of producing reliability (i.e., constantly seeking to improve reliability and intervening both to prevent errors and failures and to cope and recover quickly should errors become manifest). In other words, there has been increased focus on thinking of HROs as reliability-seeking rather than reliability-achieving. Reliability-seeking organizations are distinguished not by their absolute error or accident rates, but rather by their “effective management of innately risky technologies through organizational control of both hazard and probability” (p. 14). Consequently, the phrase high reliability has more generally come to mean that high risk and high effectiveness can coexist, that some organizations must perform well under very trying conditions, and that it takes intensive effort to do so.

A key turning point that reinvigorated HRO research was Karl Weick, Kathleen Sutcliffe, and David Obstfeld’s reconceptualization of the literature on high reliability. These researchers systematically reviewed the case study literature on HROs and illustrated how the infrastructure of high reliability was grounded in processes of collective mindfulness, which are indicated by a preoccupation with failure, reluctance to simplify interpretations, sensitivity to operations, commitment to resilience, and deference to expertise. In other words, HROs are distinctive because of their efforts to organize in ways that increase the quality of attention across the organization, thereby enhancing people’s alertness and awareness of details so that they can detect subtle ways in which contexts vary and call for contingent responding (i.e., collective mindfulness). This construct was elaborated and refined as mindful organizing in Weick and Sutcliffe’s 2001 and 2007 editions of their book Managing the Unexpected. Mindful organizing forms a basis for individuals to interact continuously as they develop, refine, and update a shared understanding of the situation they face and of their capabilities to act on that understanding. Mindful organizing proactively triggers actions that forestall and contain errors and crises. It requires that leaders and organizational members pay close attention to shaping the social and relational infrastructure of the organization, and to establishing a set of interrelated organizing processes and practices, which jointly contribute to the system’s (e.g., team, unit, organization) overall culture of safety.

High reliability organization theory and HROs are often contrasted against Charles Perrow’s Normal Accident Theory (NAT) (see Sagan for a comparison of HRO and NAT). NAT represents Perrow’s attempt to translate his understanding of the disaster at the Three Mile Island nuclear facility into a more general formulation of accidents and disasters. Perrow’s 1984 book also included chapters on petrochemical plants, aviation accidents, naval accidents, “earth-based system” accidents (dam breaks, earthquakes), and “exotic” accidents (genetic engineering, military operations, and space flight). At Three Mile Island, the technology was tightly coupled due to time-dependent processes, invariant sequences, and limited slack. The events that spread through this technology were invisible concatenations that were impossible to anticipate and that cascaded in an interactively complex manner. Perrow hypothesized that, regardless of the effectiveness of management and operations, accidents in systems characterized by tight coupling and interactive complexity will be normal or inevitable, as they often cannot be foreseen or prevented. This pessimistic view, described by some theorists as unashamedly technologically deterministic, contrasts with the more optimistic view of HRO proponents, who argued that high-risk, high-hazard organizations can function safely despite the hazards of complex systems. Despite their differences, NAT and high reliability organization theory share a focus on the social and organizational underpinnings of system safety and accident causation and prevention.

