Human Factors Lessons Learned in the Design and Implementation of Air Traffic Control Systems

As air traffic continues to increase, new technology will be needed to accommodate it. The opportunities that new technologies present for air traffic control (ATC) will be matched by their human factors challenges. Automated tools, already common in the cockpit, will become a necessity for air traffic control specialists and maintenance personnel. The jobs of controllers and maintainers will change with the tasks that they are required to perform and the tools that are available to them. The human factors challenge is to ensure that these tasks and tools are well suited to the users. How well we meet this challenge will determine whether the implementation of a new system is a success, a struggle, or a failure. The purpose of this article is to explore lessons learned from the development and implementation of several systems in the United States. These lessons point to a process that can help ensure that new systems are designed and implemented effectively.

What steps can we take to make sure that new systems are designed and implemented effectively, from a human factors standpoint?

The first step in the successful implementation of any ATC system is careful planning, including strategies for ensuring that systems are well designed from a human factors perspective. Human error remains the most common contributing factor in aviation accidents and incidents. An initial investment in human factors pays off first by capitalizing on the extensive body of knowledge about human performance that is available to human factors specialists. Systems designed with the capabilities and limitations of the human operator in mind reduce the probability of human error (and limit the consequences of the errors that inevitably occur), thereby reducing a program's technical and safety risks, lowering implementation and life-cycle costs, and increasing the probability of program success. Early consideration of human factors issues means that potential problems are detected, and resolved, earlier than if human factors planning is delayed or absent. The earlier in the acquisition process that problems are identified, the easier and less costly they are to correct.

Early and Continuous Focus on Human Factors. The initial human factors investment for a program should be made in the development of the mission needs statement, which defines why the new system is needed, what the system must do, and any known operational constraints. From this, a careful description of operational and human factors requirements can be developed. The process begins with the details of what the system is expected to do, then identifies the tasks that the operator (controller or maintainer) will perform and the information that the operator will need to perform them; this description is the first step in identifying the human factors requirements that the system will need to meet. The process should adopt a team approach, with both users and human factors specialists involved in the specification of requirements, the evaluation of prototypes, and operational testing. The success of any system is measured by how well it meets these requirements. The specification of user and system requirements involves careful consideration of the following (a simple sketch follows the list):

  • task requirements (e.g., What does the controller or maintainer need to do with the system? What duties will the users be expected to perform concurrently with the new equipment? How will the tasks change with the new system?)
  • operational environment (such as airspace characteristics; amount, type, and complexity of air traffic; and local procedures)
  • characteristics of the users (this includes understanding the skills of the users that must be preserved, as well as the practices, procedures, and equipment that the users are accustomed to)
  • transition to the new system
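
To make these considerations concrete, the sketch below shows one hypothetical way a requirements team might capture the result of such an analysis as a structured record. The structure, field names, and example values are illustrative assumptions, not an FAA format or an actual program requirement.

```python
from dataclasses import dataclass

@dataclass
class HumanFactorsRequirement:
    """One illustrative record tying an operator task to what the system must support."""
    task: str                      # what the controller or maintainer must do with the system
    concurrent_duties: list[str]   # duties performed at the same time as the new task
    information_needed: list[str]  # information the operator needs to perform the task
    environment: str               # airspace characteristics, traffic, local procedures
    user_characteristics: str      # skills to preserve; equipment users are accustomed to
    transition_notes: str          # how the task changes in moving to the new system

# Hypothetical example: a terminal controller working with an automated sequencing aid.
requirement = HumanFactorsRequirement(
    task="Review and accept (or amend) a computer-proposed landing sequence",
    concurrent_duties=["monitor the radar display", "issue voice instructions"],
    information_needed=["aircraft type", "current sequence", "runway configuration"],
    environment="high-density terminal airspace, mixed traffic, local noise procedures",
    user_characteristics="controllers accustomed to manual sequencing on the current display",
    transition_notes="manual sequencing skills must be preserved as a fallback",
)
print(requirement.task)
```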

The need for early and continuous consideration of human factors issues was a painful lesson first learned in the Advanced Automation System (AAS), an ambitious program to significantly upgrade the air traffic control systems used in terminal and en route airspace, including redesigned controller workstations and displays and the software to support them. In "Lessons Learned: Human Factors in the AAS Procurement," Small (1994) states that "Some of the difficulties with controller acceptance of AAS could have been alleviated by involving human factors expertise earlier and by integrating it more fully into the design process" (p. 4). Three years later, the same statement could be made about the Standard Terminal Automation Replacement System (STARS). STARS was designed to replace the current radar processing and display system in the terminal environment. This program was the first of its kind to attempt to acquire the system by purchasing commercial off-the-shelf (COTS) equipment, as opposed to paying for the development of a new system. The appeal of this approach was the expectation of significantly lower cost and a shorter implementation schedule. Unfortunately, the approach was interpreted as being incompatible with a complete specification of human factors requirements. In addition to specifying minimal human factors requirements, the initial human factors plan (dated 23 February 1995) acknowledged that, due to an aggressive schedule, there would be no time for human factors design development, nor was a full-scale human factors evaluation planned. This combination of minimal human factors requirements, minimal consideration of human factors issues, and deficient human factors planning undermined the initial development of the STARS program. In his testimony to Congress, the Department of Transportation's Inspector General identified the Federal Aviation Administration's (FAA's) "decision to limit human factors evaluation" and the lack of a formal process to "identify, prioritize and resolve human factors issues as the system was being developed" as two of the shortcomings in the STARS acquisition program (testimony of Ken Mead, 30 October 1997).


Structured User Involvement. Another problem area in both the AAS and STARS programs involved the use of controller opinion in the design process. It is important for users to be involved early and continuously throughout the design and acquisition process. It is equally important that this involvement be structured and integrated with the involvement of human factors specialists, so that systems are not designed solely by user preference. Users should have a well-defined role in each stage of the acquisition, and their tasks should be clearly specified (such as evaluating a prototype by completing a questionnaire). The developers of AAS found that controller preferences often changed from one group of controllers to the next (Small, 1994). This should come as no surprise, since individual preferences are based on individual experiences (such as the characteristics of the airspace that the controller is accustomed to). Furthermore, it is well known that performance and preference do not always match; we do not always perform better with the design that we prefer. Finally, the controllers who volunteer for such efforts are likely to be more experienced, more skillful, and more technologically inclined than controllers who choose not to participate. It is difficult (if not impossible) to put these skilled professionals in the operational shoes of a less skilled controller. Yet, to minimize the probability of human error, systems must be designed for a below-average controller on a bad day.

While the AAS program may be said to have suffered from too much information on user preferences, the STARS program suffered from too little, too late. As if in reaction to the AAS experience, in which system developers had to chase shifting requirements, the involvement of line controllers in the development of the STARS requirements and in other stages of the acquisition process was minimal. The result was costly delays in identifying and addressing human factors problems. Structured input from a broad spectrum of users is a critical component in identifying potential operational and human factors problems.

Prototype Assessment. The value of prototype assessment should not be overlooked. While the value of the information obtained from such testing depends on the stage of development at which the assessment is conducted, even a rudimentary prototype can point to features of the system and procedures (such as data-entry procedures) that are likely to induce human errors or be operationally unsuitable for other reasons. As the design matures, prototype testing offers a preliminary look at whether the system is likely to perform its intended function and meet human factors requirements. The value of prototype assessment is that it provides these insights at a stage of system development where changes cost much less than they will later. An example of critical information obtained from prototype testing comes from the early days of the Traffic Alert and Collision Avoidance System (TCAS) program. TCAS is a cockpit display of traffic information that issues an instruction to the pilot when a maneuver is deemed necessary to avert an impending collision between aircraft. One of the early developmental versions of TCAS included negative resolution advisories (RAs) such as "Don't Climb" and "Don't Descend". Prototype testing revealed that pilots responded inappropriately (such as climbing in response to a "Don't Climb") 50% of the time a negative alert was presented in the operational simulation. As a result of these tests, all negative RAs were eliminated (Boucek et al., 1985).
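
As an illustration of the kind of objective data such a simulation yields, the sketch below tallies responses by advisory type and flags advisories with high error rates. The trial records and the 50% flag threshold are invented for the example; they are not the Boucek et al. (1985) data.

```python
from collections import defaultdict

# Each record is (advisory presented, whether the pilot's response was appropriate).
# These data are invented for illustration; see Boucek et al. (1985) for the actual study.
trials = [
    ("Don't Climb", False), ("Don't Climb", True), ("Don't Climb", False),
    ("Don't Descend", False), ("Don't Descend", True),
    ("Climb", True), ("Climb", True), ("Descend", True),
]

tally = defaultdict(lambda: {"errors": 0, "total": 0})
for advisory, appropriate in trials:
    tally[advisory]["total"] += 1
    if not appropriate:
        tally[advisory]["errors"] += 1

# Flag any advisory whose error rate suggests it induces inappropriate responses.
for advisory, t in tally.items():
    rate = t["errors"] / t["total"]
    flag = "  <-- candidate for redesign or elimination" if rate >= 0.5 else ""
    print(f"{advisory}: {t['errors']}/{t['total']} inappropriate responses ({rate:.0%}){flag}")
```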


Thorough Operational Testing. Even extensive prototype assessment does not eliminate the need to conduct thorough operational testing of the design that will be implemented. Careful and thorough human factors testing of a system can be combined with formal operational testing, although the formal operational evaluation should never be the first human factors test that is conducted. Such testing is necessary to validate users' preferences and the best estimates of human factors specialists. User consensus is never a valid substitute for objective performance data, which must be collected to ensure that the system is ready and suitable for implementation. Any system test must be well designed from a human factors standpoint. For example, if the evaluation includes a simulation, then the controllers chosen must be representative of the user population (and not chosen on the basis of seniority, for example), and the tasks included in the simulation must be representative of those that the user will need to perform with the new system. Guidance for ensuring that an evaluation is well designed from a human factors perspective, along with guidance on human factors planning, is offered in "Human Factors in the Design and Evaluation of Air Traffic Control Systems" (Cardosi and Murphy, 1995).
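
To make the distinction between consensus and objective data concrete, here is a minimal sketch of how measured performance from baseline and new-system simulation runs might be summarized. The measures (handoff time, operational errors) and all values are hypothetical.

```python
from statistics import mean

# Hypothetical per-run measures from baseline and new-system simulation runs,
# using representative controllers and representative traffic scenarios.
baseline = {"handoff_time_s": [12.4, 11.8, 13.1, 12.9, 12.2], "errors": [2, 1, 3, 2, 2]}
new_sys = {"handoff_time_s": [10.9, 11.2, 10.5, 11.8, 10.7], "errors": [1, 0, 1, 2, 1]}

for label, data in (("baseline", baseline), ("new system", new_sys)):
    print(f"{label}: mean handoff time {mean(data['handoff_time_s']):.1f} s, "
          f"total operational errors {sum(data['errors'])}")

# A real evaluation would also test whether the differences are statistically
# and operationally meaningful, not just numerically smaller.
```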

Finally, it would be a mistake to assume that the need for such testing is minimal because the system is operational in another part of the world. Air traffic control is not "one size fits all". Systems need to be suitable for the operational environment in which they will be used. This means that the characteristics of the airspace (amount, type, and complexity of traffic; local procedures; etc.), the characteristics of the users (e.g., their skills, their knowledge, the type of equipment they are accustomed to), and the tasks (what the user is expected to do in addition to using the new equipment) all need to be considered.

An Example. One successful program that followed all of these steps was the Center TRACON Automation System (CTAS). The operational need for CTAS was evident years ago, as increases in traffic could be accommodated en route much more easily than in the terminal environment. Originally developed at NASA Ames Research Center, the system shows the controller the best runway assignment, landing sequence, and other information. The system was developed by engineers, human factors specialists, and controllers, with extensive prototype assessment by operational controllers that led to many design changes. The result is a system that is currently operational at Dallas/Fort Worth airport and has already demonstrated an ability to increase capacity: under controlled conditions during a 1996 test, CTAS increased aircraft operations from 102 to 120 per hour, a gain of nearly 18 percent.

In addition to doing things right from a process standpoint, CTAS also serves as an example of well-designed automated tools for controllers. Far from being an automated system that requires controllers to "feed and care for" its data needs with little or no return on their investment, the CTAS tools proved user-friendly and provide controllers with guidance information that they can readily use. Anecdotal reports indicate that the controllers at Denver and Dallas/Fort Worth who have used the system like it very much. Dick Swauger, the national technology coordinator for the National Air Traffic Controllers Association, says, "It's like having a top controller whispering in your ear...it makes good controllers better" (Perry, 1997, p. 31). As CTAS is implemented at other facilities, the increases in capacity that these tools can support (along with the controllers' acceptance of the system) are likely to be realized, as long as CTAS continues to provide useful and user-friendly tools.


System Integration

With the independent development of systems and subsystems for ATC, system integration becomes a critical issue. Even within a single system, integration can become an issue. For example, developing a system and its backup (or different components of a system) independently minimizes the probability of both failing for the same reasons and helps to ensure that there is no single point of failure. From an engineering standpoint, this approach is highly desirable. From a human factors standpoint, however, it can be problematic if steps are not taken to ensure that the interfaces of the two systems are designed to be compatible, if not identical.

With systems developed independently, the issues become even more complex. However, the same human factors approach outlined here for the development and implementation of new systems can also serve as an outline for ensuring the effective integration and compatibility of separately developed systems. An example of this approach can be taken from the cockpit. In 1979, human factors specialists at Boeing, Douglas, and Lockheed (Boucek et al., 1980) looked at cockpit design within and across manufacturers. They found an excessive number of alerts and warnings that pilots were required to respond to, along with a variety of important inconsistencies in cockpit design. Differences between cockpits, both across manufacturers and among cockpits developed by the same manufacturer, could result in "negative transfer", that is, pilot errors induced in one cockpit by virtue of extensive experience in another. They also found situations within individual cockpits, such as the excessive number of alerts and warnings, that could induce pilot errors.

Having established an operational requirement for aircraft alerting functions to be more effectively integrated, Boucek et al. (1980) then set out to determine the best way to design cockpit alerts and warnings to be consistent and consolidated. Human factors specialists worked with engineers to design and test prototype alerting and warning systems that would meet a range of key operational requirements, such as minimizing the number of aural alerts, providing the flight crew with an indication of the level of urgency, and fitting in the space available in the cockpit. Many human factors issues needed to be addressed in this endeavor. Some could be answered from the wealth of human factors knowledge already available; many others, such as specific formats for voice messages and whether a voice message should be preceded by a tone, had to be prototyped and tested. After a series of studies, the end result was a set of recommendations for aircraft alerting systems that would serve cockpit manufacturers for decades (Berson et al., 1981). Rather than have hundreds of individual alerts and warnings, they would be consolidated into a master warning or a master caution, depending on the nature of the information.

Another valuable conclusion of this work was that there should be two levels of information presented to pilots. One level of crew alerting is status information, which informs the pilot of situations that are important (such as a possible engine fire) but require the pilot to stabilize the aircraft before responding to the alert. The other level is guidance information, which requires the pilot to make an immediate control action. An example of this level of alert is the ground proximity warning system (GPWS), which requires the pilot to "pull up" immediately to avoid an impending impact with terrain. The nature of the task required of the pilot determines the characteristics of the alert. It is easy to see how these lessons learned in the cockpit - the need for consistency in, and integration of, alerts and warnings, and the need for alerts to indicate their level of urgency - are applicable to air traffic control as the number and types of alerts and warnings for controllers increase.
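
One way to picture the recommended scheme is as a small prioritization layer: individual conditions are consolidated into a master warning or master caution, and each carries a level (guidance versus status) that tells the pilot whether an immediate control action is required. The sketch below is a loose illustration of that idea under assumed names and mappings, not the Berson et al. (1981) design.

```python
from enum import Enum

class AlertLevel(Enum):
    GUIDANCE = "requires an immediate control action"
    STATUS = "important, but stabilize the aircraft before responding"

# Hypothetical mapping of individual conditions to a consolidated master
# indication and an urgency level (loosely modeled on the idea in the text).
ALERTS = {
    "terrain proximity": (AlertLevel.GUIDANCE, "MASTER WARNING"),
    "possible engine fire": (AlertLevel.STATUS, "MASTER WARNING"),
    "low fuel quantity": (AlertLevel.STATUS, "MASTER CAUTION"),
}

def annunciate(condition: str) -> str:
    """Consolidate an individual condition into a master indication with its level."""
    level, master = ALERTS[condition]
    return f"{master} ({condition}): {level.name} - {level.value}"

for condition in ALERTS:
    print(annunciate(condition))
```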

These lessons learned, both in the cockpit and in ATC, present a powerful case for:

  • attending to human factors and integration issues early in the acquisition process,
  • obtaining structured user input at various stages of system development,
  • prototype testing, and
  • thorough, operationally oriented human factors testing prior to implementation.

Giving human factors issues their due consideration at all stages of acquisition can present complex organizational and managerial challenges. However, this investment is a necessary step toward making the most of the opportunities of the future without repeating the mistakes of the past.


References

Berson, B., Po-Chedley, D., Boucek, G., Hanson, D., Leffler, M., and Wasson, R. (1981). Aircraft Alerting Systems Standardization Study, Volume II: Aircraft Alerting System Design Guidelines. DOT/FAA/RD-81/38/II.

Boucek, G., Erickson, J., Berson, B., Hanson, D., Leffler, M., and Po-Chedley, D. (1980). Aircraft Alerting Systems Standardization Study. DOT-RD-80-68.

Boucek, G., Pfaff, T., White, W., and Smith, W. (1985). Traffic Alert and Collision Avoidance System - Operational Simulation. DOT/FAA/PM-85/10.

Cardosi, K. and Murphy, E. (1995). Human Factors in the Design and Evaluation of Air Traffic Control Systems. DOT/FAA/RD-95-3.

Harwood, K. and Sanford, B. (1993). Denver TMA Assessment. NASA Contractor Report 4554.

Perry, T. (August 1997). "In Search of the Future of Air Traffic Control". IEEE Spectrum, pp. 19-35.

Small, D. (1994). Lessons Learned in the AAS Procurement. MITRE/CAASD Report No. MP 94W0000088. McLean, VA.

Testimony of Ken Mead, Inspector General, U.S. Department of Transportation, before the Committee on Appropriations, Subcommittee on Transportation, on the Standard Terminal Automation Replacement System, 30 October 1997.
