
Human-Computer Interaction: The Human and Computer as a Team in Emergency Management Information Systems

Building the computer as part of the emergency management team ensures that people continue to do the things they do well, supported by the technology, not driven by it.

Emergency management scenarios place special requirements on the individuals involved. They must act decisively on tight schedules, often with incomplete information or with so much data that it is difficult to extract what is relevant (information overload). Moreover, contextual drivers make crises increasingly complex, with greater implications for more people, and they must be managed by dispersed groups of highly skilled individuals. Individuals within the emergency management system are therefore under more pressure to:

  • Absorb information rapidly;
  • Judge its sense, its meaning, its relevance, and its reliability;
  • Decide what the options for action are and make effective decisions; and
  • Deal with plans that were prepared with little knowledge of the reality at the ‘coal face’ (where the pick meets the coal).

Figure 1 shows the requirement for human-computer interaction (HCI) intervention at each phase of the emergency management process. In order to achieve the vision of “the right information at the right time in the right format to the right person,” significant consideration must be given to the way in which computers are used to enhance the capability of the individual and to designing effective interfaces so that true interaction is achieved [3, 9].


Metaphors

In the field of emergency management, there are natural metaphors that suggest a specific, consistent, and distinctive approach to interfaces. Events and roles are the two concepts professionals use for the planning, training, response, and evaluation of emergencies.

Events are triggered by outside occurrences or by the set of roles responsible for reacting to a specific type of event (for example, reports of injuries) with appropriate counter events (such as sending an ambulance). The result is a network of related events that describes everything necessary to take care of the external emergency event. Because professionals in emergency management think of actions as a series of events, they use their semantic and episodic memories in concert to deal with emergency situations. This seems to allow them to exceed ordinary limits on information overload and to demonstrate the same sort of “cognitive absorption” (intense concentration) usually attributed to game players [1].
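
To make the metaphor concrete, the following minimal sketch shows one way events and roles might be represented in software. It is purely illustrative: the names (Event, Role, respond) and the structure are our assumptions, not those of any deployed emergency system.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """One node in the network of related events."""
    kind: str                                    # e.g., "injury-report"
    description: str
    triggered_by: "Event | None" = None          # parent event, if any
    responses: list["Event"] = field(default_factory=list)

@dataclass
class Role:
    """A role reacts to specific event types with counter events."""
    name: str                                    # e.g., "dispatcher"
    handles: set[str] = field(default_factory=set)

    def respond(self, event: Event, counter_kind: str, description: str) -> Event:
        counter = Event(kind=counter_kind, description=description,
                        triggered_by=event)
        event.responses.append(counter)          # grow the event network
        return counter

# An external occurrence triggers a root event; a role attaches a
# counter event, yielding the network of related events described above.
report = Event("injury-report", "Two injured at Main St.")
dispatcher = Role("dispatcher", handles={"injury-report"})
dispatcher.respond(report, "send-ambulance", "Ambulance 7 dispatched")
```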


The Design Challenge

People carrying out the command, control, and analysis process for emergencies may work intensely for 14- to 24-hour shifts, handle a great deal of information, and must trust those who will take over their roles when they finally sleep. They know that wrong actions will cost lives, and consequently want the best possible timely information from both humans and sensors to give them accurate assessments of circumstances. They need to be adequately aware of the real situation to have confidence in making meaningful life-and-death decisions. In the literature, the concept of cognitive absorption has been applied to game playing, but it is also appropriate for guiding the design of emergency information systems. We interpret the literature (for example, [1, 6]) to hypothesize five properties of the concept that seem to characterize those in emergency management:

  1. A feeling of exercising control.
  2. A total focus of attention on the problem at hand, ignoring all that is not relevant.
  3. Improvisation, or unconventional ways of appraising information and formulating decisions.
  4. A sense of challenge, curiosity, and enjoyment in the effort.
  5. High motivation due to the critical nature of the problem.

A behavior also observed in the emergency field is the “threat rigidity syndrome” [9], where additional stress arises from a loss of control over the situation or a reduced understanding of reality. This causes role players to fall back on rules and fixed plans that may be inappropriate for the given situation. They know that delayed decisions will potentially worsen outcomes. If they feel that helpful information exists but is inaccessible or cannot be obtained in time, the state of cognitive absorption is reduced and they feel they have little effective control. This presents several challenges for the design of such systems:

  • Providing ways to obtain accurate and timely perceptions of reality through communication structures that track and facilitate open exchange of information, including feedback from the incident site from both trained observers and victims.
  • Designing to enhance the ability to focus attention without interruption, and to require a minimum of effort to carry out a task.
  • Designing to encourage or facilitate creativity and improvisation on both an individual and team basis.
  • Providing mechanisms to support building trust between the individuals in the team, many of whom have not worked together in the past.
  • Being able to anticipate and predict when more relevant information will be obtained.


Coordination, Collaboration, Command, and Control

An emergency brings together a team of people, often representing different organizations, resources, and roles. The extended team needs to work together effectively, in such a way that team members support each other's objectives even when they have never worked together before. Events are partially defined by what or which role triggers them, who must respond to them, and which roles must be informed about them. One of the most important functions of emergency systems is the provision of intelligent feedback on the local requirements for handling an ongoing situation.

A dispatcher responds to events with the action-event of forwarding specific resources, such as a police car or an ambulance, to the emergency. However, in many wide-scale disasters such as a storm, a particular ambulance may not be able to reach victims because of a mudslide or a flooded bridge, creating a mismatch between the initiation of an event and its completion. Feedback loops are thus vital.
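
The sketch below illustrates why such feedback loops matter: dispatch only initiates an action-event, and the system must confirm completion or try another resource. The function and resource names (dispatch_with_feedback, can_reach, send) are hypothetical, chosen for illustration only.

```python
def dispatch_with_feedback(resources, can_reach, send, max_attempts=3):
    """Try resources in turn until one confirms arrival.

    can_reach(resource) stands in for field feedback (e.g., a flooded
    bridge blocks a particular ambulance); send(resource) initiates
    the action-event.
    """
    for attempt, resource in enumerate(resources):
        if attempt >= max_attempts:
            break                       # escalate to a human decision
        send(resource)                  # initiation of the action-event
        if can_reach(resource):         # feedback: completion confirmed
            return resource
        # mismatch between initiation and completion: try the next resource
    return None                         # nothing reached the scene

# Usage: ambulances ordered by proximity; a mudslide blocks the first.
blocked = {"ambulance-3"}
on_scene = dispatch_with_feedback(
    ["ambulance-3", "ambulance-7"],
    can_reach=lambda r: r not in blocked,
    send=lambda r: print(f"dispatching {r}"),
)
print("on scene:", on_scene)
```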

In most emergencies, plans never fit the exact details of the situation, nor are they designed with any participation from those who will actually carry them out [10]. This was certainly true of recent major disasters such as 9/11 and Katrina: in the former case, there was nothing in emergency plans about ferries being used as ambulances, and the evacuation plans for New Orleans made no allowance for real-life behavioral problems or how to cope with them. Situational awareness can be enhanced by feedback: information perceived from the environment, information from colleagues, and data from remote sensors. Individuals must also be aware that they may not know enough, and hence need to seek further information to make better decisions.


The Emergency Manager’s Tool Kit

To address these needs, systems have been proposed to support the decision making of the emergency management team. These functions are highly computer dependent, and their success relies on human-computer interfaces designed around user requirements. Moreover, the interface should enable true human-computer interaction: a conversation between the user and the computer, so the user is aware of what is happening inside the black box. Tools being developed include:

Information prioritization: What rules are used to prioritize the situational information? Are they defined by the sender, set by the user, or managed dynamically depending on the context and time of use? (A minimal scoring sketch appears after this list.)

Decision support and modeling tools: to help make the decision in the first place, to carry out impact analysis, and to provide support once the decision has been made. For example, the decision to ‘evacuate those at risk’ must be supported with how many are at risk and where they should be evacuated to, and must address the ‘how can we do this’ part of the task: Where are shelters? Which hospitals have beds? How will casualties be transported, how can the process be managed, and who needs to know? (A toy assignment sketch also follows the list.)

Representation of a common operating picture: a manipulable visualization of what is happening and where resources are, open to all members of the emergency management team.
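
As promised above, here is a minimal scoring sketch for information prioritization. It combines a sender-assigned urgency, a user-defined relevance rule (keyword matching), and a context-dependent time decay so that stale reports sink. The weighting scheme and field names are our assumptions, not a published algorithm.

```python
import math
import time

def priority(item, user_keywords, now=None, half_life_s=1800.0):
    """Score a situational report for display order.

    item: {"text": str, "urgency": float in [0, 1], "timestamp": epoch secs}
    user_keywords: lowercase terms the user currently cares about.
    """
    now = now or time.time()
    age = max(0.0, now - item["timestamp"])
    decay = math.exp(-math.log(2) * age / half_life_s)   # halves every 30 min
    relevant = any(k in item["text"].lower() for k in user_keywords)
    relevance = 1.0 if relevant else 0.3                  # user-defined rule
    return item["urgency"] * relevance * decay            # sender + user + context

reports = [
    {"text": "Bridge on Route 9 flooded", "urgency": 0.9, "timestamp": time.time() - 3600},
    {"text": "Shelter A near capacity",   "urgency": 0.6, "timestamp": time.time() - 60},
]
for r in sorted(reports, key=lambda r: priority(r, {"shelter"}), reverse=True):
    print(r["text"])
```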
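
For the decision-support tool, the following toy sketch addresses just the ‘where should they be evacuated to?’ sub-question with a greedy assignment of evacuee groups to shelters by capacity. The data, names, and heuristic are hypothetical; a real tool would also model hospitals, transport, and who needs to know.

```python
def assign_evacuees(groups, shelters):
    """Greedily place evacuee groups into shelters.

    groups: list of (group_id, size); shelters: {name: capacity}.
    Returns {group_id: shelter_name or None if no capacity fits}.
    """
    plan, remaining = {}, dict(shelters)
    for group_id, size in sorted(groups, key=lambda g: -g[1]):  # largest first
        # pick the shelter with the most spare room that still fits the group
        fitting = [(cap, name) for name, cap in remaining.items() if cap >= size]
        if not fitting:
            plan[group_id] = None        # flag for the manager: no capacity
            continue
        _, best = max(fitting)
        plan[group_id] = best
        remaining[best] -= size
    return plan

print(assign_evacuees([("ward-1", 120), ("ward-2", 40)],
                      {"school-gym": 150, "church-hall": 60}))
# {'ward-1': 'school-gym', 'ward-2': 'church-hall'}
```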


Information Overload

Information overload has many aspects: What is enough information? When is a user overloaded? In dealing with overload, we must consider how the operator decides which information is relevant, trusted, and usable, and which is deemed irrelevant. If information does not seem reliable, or conflicts with other sources, the user needs to corroborate it via an alternative source.

Critical thinking is a key capability for emergency managers, asking questions like:

  • Do I have time to consider the information fully?
  • Do I understand the value and impact of the information?
  • Can I recover from, or mitigate the effects of, my decision if it is wrong?
  • Does someone else have the information I need?
  • Can I manage the associated degree of uncertainty?
  • Do I trust it enough to use it? If not, how can I corroborate or disprove the information by comparison with other sources?
  • Is it what I would expect?
  • Am I too reliant on the source of the information in trusting it?
  • Can I learn from this information or is the context too different from my current situation to be valuable? How can I judge this?
  • How can I improve my awareness to make a good enough decision?


Context Visibility

One approach to overcoming information overload is context visibility, imposed upon the metaphor for the system [9]. Given the concepts of events and roles, any external event is a root item that must dynamically bring together all related events, and the resulting knowledge-structure template for the action/decision process must be available to all roles concerned with that specific event.

The system needs to be able to identify possible problem-solving teams from the information-seeking behavior of the role players. Contextual information needs to be attached to every piece of information to add important identification data (provenance and recency) for developing confidence limits on its use and reuse. This must not be an additional burden on users; it must be automated wherever possible.

The concept of context visibility can help minimize the mechanics of dealing with large databases or the many different sources of detailed dynamic information that are needed. An event in short form should have a number of related fields, such as its location, its related resource, and other local reports. Instead of separately added commands or menus for the context, a click on a meaningful field should open a window containing the details and latest information about the field, relevant to the role doing the manipulation. The list of events should therefore be organized by the triggering event: the highest level in the hierarchy provides the actual interface to the details, which can be opened, expanded, and closed as needed by the person in that role.
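
A minimal sketch of these two ideas, automatically attached context (provenance and recency) and related events organized under their triggering root event, is shown below. All names (ContextItem, EventNode, open_field) are illustrative assumptions.

```python
import time
from dataclasses import dataclass, field

@dataclass
class ContextItem:
    """A field value with context attached automatically, not by the user."""
    value: str
    source: str                                             # provenance
    recorded_at: float = field(default_factory=time.time)   # recency

@dataclass
class EventNode:
    """An event whose related events hang off the triggering root."""
    summary: str
    fields: dict[str, ContextItem] = field(default_factory=dict)
    children: list["EventNode"] = field(default_factory=list)

    def open_field(self, name: str) -> str:
        """What a click on a meaningful field might show to a role."""
        item = self.fields[name]
        age = int(time.time() - item.recorded_at)
        return f"{name}: {item.value} (source: {item.source}, {age}s old)"

# The external event is the root item; related events attach beneath it,
# so the event list is organized by the triggering event.
root = EventNode("Storm: flooding in District 4")
root.fields["location"] = ContextItem("District 4, sectors 2-5",
                                      source="field-team-2")
root.children.append(EventNode("Bridge on Route 9 closed"))
print(root.open_field("location"))
```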


Extending the Network

New technologies such as helmet-mounted displays can be linked via wireless networks to local headquarters [3], providing firefighters, for example, with status information on remaining oxygen, task lists, and maps of the building, as well as dynamic support for identifying toxic substances, guidance in smoke, and route updates should the building suffer further damage. The ability to set up local networks at disaster sites, and the potential to exchange digital voice, graphics, and video, ultimately means more reliable, timely, and relevant information can flow between those on site and those involved in command, control, and coordination. The ability to store and reuse direct observations as needed, breaking the barrier of synchronicity, adds significant resiliency to the network and its ability to adjust to a changing situation.


Automated Systems

As the amount of sensor data increases, automated systems begin to manage the flow of data in order to make the user’s job possible. This can be clearly observed in the development of flight-deck technologies. In the early days, pilots looked out of the cockpit to figure out where they were, felt the g-forces to know what the aircraft was doing, and could feel the hydraulics to know what was happening to the control surfaces. Today’s flight decks, in contrast, rely on multifunction computer displays, where huge amounts of information are stored and the pilot must navigate through layer after layer to find what is required; he has thus become more a systems engineer than a pilot. Experience on the flight deck has led to research that has contributed to our understanding of the attributes of automation and how it affects tasks, as well as of situational awareness, commonly understood to be a critical aspect of managing complexity.

Even for pilots, automated systems can produce conflicting information from different sources and force decisions about which information to act upon. There are clear parallels with the domain of the emergency manager, who is potentially exposed to more and more information, not all of it immediately relevant to the activities he is required to manage. Emergency management systems are real-time systems in which automation must be under the control of the human at all times. Some designers lose sight of the limits to automation in emergency response and forget that unpredictability is a major characteristic of emergencies.


Impacts of Automation

Understanding the human impacts of automation, and how tasks should be allocated between man and machine, has been a key area of HCI research. As early as 1951, Fitts defined what machines and men are each particularly good at (see Figure 2) [5]. Fitts’s work was followed by other milestone papers, such as Bainbridge’s discussion of the ironies of automation [2] and, more recently, the work of researchers such as Sarter and Woods [8] and Parasuraman [7] on flight-deck automation and its impact on pilot performance. Although this research has led to a good understanding of the human impacts of automation, there is still a tendency for technology to drive development without fully taking into account the role of the new technology or its systemic impacts. Good descriptions have been developed of what ‘poor’ automation looks like [2, 8].


The major attributes of poor automation can be described as follows:

  • It is strong—the system can behave autonomously;
  • It is silent—the system provides inadequate feedback about activities and intentions;
  • It can be clumsy—the system interrupts human activity particularly during periods of high workload, or even adds to task loading when workload is already high;
  • It can be obstructive—it is difficult for the automation to be reconfigured in the desired way and the automation may therefore not be used in the way that the designers intended.

These system attributes may result in less than optimum behaviors such as:

  • Automation bias—where the automation may be used to the exclusion of other systems and sources;
  • Automation complacency—where the user may come to rely completely on the automation even if it is faulty or unreliable;
  • Automation surprises—where the user may be less than familiar with all the modes of operation of the automation and therefore be surprised by its behavior, asking questions like ‘what is it doing now?’, ‘why did it do that?’, and ‘what will it do next?’

If the system designer understands these attributes, then lessons can be learned and the computer systems designed to support the emergency management professional in a user-centered manner. The challenge when computers become involved in emergency management is to build the computer as part of the team but also to ensure that people continue to do the things they do well, supported by the technology, not driven by it.


Conclusion

The European project HINT (Human Implications of New Technology) [4] concluded that human factors and user-centered design must be integrated into the design life cycle. Moreover, there needs to be a strategy for implementing automated processes that specifically takes into account:

  • Understanding the role of automation and the allocation of responsibilities between the system and the human roles;
  • Handling of conflicts, uncertainties, ambiguities, errors, and error correction;
  • Workload distribution in both normal and abnormal operation; and
  • Recognition of cultural and organizational differences that might inhibit quick trust and open information sharing.

A user-centered, systemic approach is required, with a major emphasis on user requirements driving technological developments as a result of lessons learned. In the aircraft world, developments were driven by human error: if the human made an error, the offending system was automated so that the error ‘disappeared’. Of course, this is not what happened; instead there are more, and different, errors, some of which are difficult to track and identify. This is a key element in the theory underlying the evolution of high-reliability organizations, a desirable objective for emergency management organizations [10].

Technology is vital in extending our human capabilities to cope with natural or man-made disasters, but we forget the human role at our peril. This means the human as part of the system, the computer as part of the team, and both the computer and the human working with other people and other computer systems in other agencies, sharing information and working together to manage the emergency, mitigate its effects, and support the victims after the event.


Figures

F1 Figure 1. HCI design should influence the computer systems developed at each phase of the emergency management process.

F2 Figure 2. Human and machine capabilities (derived from [5]).

References

    1. Agarwal, R. and Karahanna, E. Time flies when you're having fun: Cognitive absorption and beliefs about information technology usage. MIS Quarterly 24, 4 (Dec. 2000), 665–694.

    2. Bainbridge, L. Ironies of automation. Automatica 19, 6 (1983), 775–779.

    3. Carver, E., Hinton, J., Dogan, H., and Dawson, B. Enhancing communication in rescue teams. In Proceedings of ISCRAM 2006: The Third International Conference on Information Systems for Crisis Response and Management (Newark, NJ, May 2006); iscram.org.

    4. Carver, E. Information and the Operator. Human Implications of New Technology (HINT) project, WP4 Report, 2003.

    5. Fitts, P.M. Engineering psychology and equipment design. In S.S. Stevens, Ed., Handbook of Experimental Psychology, Wiley, NY, 1951.

    6. Cohen, W.M. and Levinthal, D.A. Absorptive capacity: A new perspective on learning and innovation. Administrative Science Quarterly 35, 1 (Mar. 1990), 128–152.

    7. Parasuraman, R. Humans and automation: Use, misuse, disuse, abuse. Human Factors 39, 2 (1997), 230–253.

    8. Sarter, N. and Woods, D. 'How in the world did we ever get into that mode?' Mode error and awareness in supervisory control. Human Factors 37, 1 (1995), 5–19.

    9. Turoff, M., Chumer, M., Van de Walle, B., and Yao, X. The design of a dynamic emergency response management information system (DERMIS). Journal of Information Technology Theory and Application (JITTA) 5, 4 (Summer 2004), 1–36; www.jitta.org.

    10. Turoff, M., Chumer, M., and Hiltz, S.R. Emergency planning as a continuous game. In Proceedings of ISCRAM 2006: The Third International Conference on Information Systems for Crisis Response and Management (Newark NJ, May 2006), 477–488; iscram.org.
