A Comparison of Ethical Concern Channels in Aerospace Corporations

Resource Type: Essays and Articles
Year: 1998
Created: August 1, 2009

I have come to you for advice about an ethical problem I have encountered in the workplace. I am a new engineer, hired a little less than two years ago.

I am a low-level project manager for a government-contracted avionics system planned for use in a new military aircraft. The system is the first of its type; it makes extensive use of complex computer software for automating as many of the pilot's tasks as possible, including some high-level decision making. As such, the intent is to let the system do as much of the work as possible, leaving the "mission-critical" decisions and actions to the pilot. Less "important" decisions are performed by the computer under the theory that an aircraft with such complex subsystems would require too much of a single unaided pilot. Implementing the computer system eliminates the need for one (or possibly two) copilots.

Such a system is on the cutting edge of current technological and managerial prowess, and as such, it is too complex to test thoroughly. In fact, because conventional testing methods are not applicable to such an all-encompassing system, most testing procedures are evolving as the design evolves. Therefore, the government has required (via the corporate contract) that each software module within the system will be tested in isolation, to the greatest extent practically feasible. Furthermore, the system as a whole will be tested as well, but at a much more general, operational level; because details of the modules will often be inaccessible from the pilot's point of view, it is impossible to verify single modules from the system level anyway.

My supervisor's boss has chosen the team for developing the core of this system. It is a large project involving several project managers (such as my immediate supervisor), each coordinating the efforts of 3-4 sub-project managers (such as myself), who are each in charge of about 10 people, for an overall effort of approximately 100 people. The project is expected to be 90% complete after one year of work. Note that the requirements have been completely laid out by the government; our task is to write the code according to their specifications and test it to make sure it meets those specifications.

The project is approximately two-thirds done (according to schedule; it's actually about four weeks behind). In working with my team members, I have discovered some shortcomings of the system's development. Some of these shortcomings are inherent in the undertaking itself, while others are more of a symptom of the structure and ability of the development team.

Developmental Concerns

The man-hours required for the project are high, so time and money are limited. As a result, cross-checking results is considered too costly, and the individual module testing will be performed by the person who wrote the module. Theoretically, the module will be operationally correct, but I fear that it may be susceptible to invalid assumptions on the part of the developer in the first place. Thus, the module will be correct according to the developer's view of how it should operate, but the developer's perception of correct operation could be flawed in some way.
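
A hypothetical sketch of this blind spot (all names and numbers here are invented for illustration, not taken from the actual system): when the developer writes both the module and its test from the same misreading of the specification, the test passes and the flaw survives.

```python
# Hypothetical: the developer believes the spec's altitude input is in meters,
# when the spec actually calls for feet. The module and its self-test share
# the mistaken assumption, so the self-test passes while the behavior is wrong.

def terrain_warning(altitude, floor_m=150.0):
    """Developer's module: warn when altitude (assumed to be meters) is low."""
    return altitude < floor_m

# Self-tests written by the same developer, under the same wrong assumption:
assert terrain_warning(100.0) is True    # "100 m is below the 150 m floor"
assert terrain_warning(200.0) is False   # "200 m is safe"

# A cross-check by someone who read the spec correctly would catch the flaw:
# 400 feet is about 122 m, which IS below the floor, yet the module reports
# "safe" because it treats 400 as meters.
assert terrain_warning(400.0) is False   # no warning for a genuinely low aircraft
```

The point is not this particular bug, but that self-testing cannot detect a shared misconception; only an independent check against the specification can.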

Almost all of the testing is performed at the software level: certain inputs are set according to certain rules and assumptions, and the output is checked against the specifications that have been laid out by the government. I am worried that the testing procedures themselves have inherent limitations, such as the range of the input parameters, the timing with which inputs arrive at the module, and so on. The problem is most likely too complex for the government to have anticipated all the possibilities - their specifications surely cannot encompass every situation.
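
The concern about input ranges can be sketched as follows (a toy function with invented spec limits, not the actual system): a module that passes every test drawn from the specified range can still accept and silently mishandle out-of-range values, such as readings from a failed sensor.

```python
# Hypothetical: the spec says airspeed is 0..2000 knots, so every test input
# falls in that range. Nothing exercises what a faulty sensor might deliver.

def braking_distance(airspeed_knots):
    """Toy model: stopping distance grows with the square of airspeed."""
    return 0.05 * airspeed_knots ** 2

# Specification-driven tests: all inputs are inside the documented range.
for v in (0, 250, 1000, 2000):
    assert braking_distance(v) >= 0          # all pass

# An out-of-spec input from a failed sensor is neither rejected nor flagged:
# a negative airspeed quietly yields a plausible-looking positive distance,
# so the bad value propagates downstream instead of stopping at the boundary.
assert braking_distance(-250) == braking_distance(250)
```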

I am aware (from limited previous experience, as well as from speaking to others) that in such complex software systems, errors or bugs that are not apparent at the individual-module level can appear at the system-wide level. This is because each module embeds assumptions about the operation of the others, and putting them together creates a very complex system; a slight shortcoming can propagate through other modules and become a nasty error. These problems tend to be hard to find and hard to trace, because testing modules in isolation doesn't always reproduce them. Large-scale testing for such bugs is essentially impossible, because they tend to crop up only under unusual, rare, or unanticipated circumstances.
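
A deliberately simplified sketch of such propagation (invented modules, tolerances, and numbers, not the actual system): each module passes its own isolated test, yet composing them many times per flight lets a within-tolerance error accumulate into a significant system-level one.

```python
# Hypothetical: module A quantizes heading to 0.5 degrees, within its spec'd
# tolerance of 0.25 degrees; module B dead-reckons position from A's output.
# Each passes its isolated test, yet together the tiny error compounds.

def sense_heading(true_heading_deg):
    """Module A: report heading quantized to the nearest 0.5 degrees."""
    return round(true_heading_deg * 2) / 2

def cross_track_drift_km(true_heading_deg, steps, step_km=0.1):
    """Modules A+B composed: accumulate the heading error over distance."""
    error_rad = abs(sense_heading(true_heading_deg) - true_heading_deg) / 57.3
    return error_rad * step_km * steps

# Isolated test of module A passes: quantization error is within tolerance.
assert abs(sense_heading(37.12) - 37.12) <= 0.25

# System level: over a 1000 km leg (10,000 small steps), the "negligible"
# per-step error has grown to kilometers of cross-track drift.
assert cross_track_drift_km(37.12, steps=10_000) > 1.0
```

No single module violated its contract; the hazard appears only in the composition, which is exactly the level that is hardest to test.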

By their nature, the most critical operational points of the aircraft are the hardest to test. Due to the stress on the system or the complexity and number of rapidly changing variables involved, the decisions that are "toughest" for the computer to make are also the most difficult to test. These arise in situations such as combat (where the number of possible scenarios is essentially infinite), when the aircraft has sustained partial damage and some systems don't work properly, or when the pilot gives the computer incorrect information (such as the wrong navigation points).

At a presentation by another company, which had developed a similar but less complicated system for a different application, the company's representatives stressed two observations about the complexities of such systems. They cautioned that because these systems are nonlinear in operation, the observations should not be taken as fact, but they serve as good guidelines for keeping the shortcomings of such systems in mind:

  • The individual components (modules) of the system are only as good as the competence and foresight of the developers allow them to be;
  • In the worst case, the system is only as good as the worst combination of the shortcomings of the individual modules.


Several developers on my team have come to me with concerns about how their modules will operate under general, "real" conditions. Due to the complexity of the modules themselves, these developers feel that the specific-case testing that was performed provides inadequate assurance that the modules will work properly under all circumstances. While these modules are not "mission-critical" (e.g., flight controls, navigation algorithms, or engine controls), the impact of their incorrect operation is hard to quantify.

There are a number of developers on the team who I do not feel are adequately qualified to perform their tasks. I am concerned with the assumptions these developers have made with their modules. My own modest cross-check has revealed only a couple of problems that probably would have surfaced at the system integration level anyway.

I feel that other sub-groups are experiencing similar problems. Although my sub-group is not in charge of any "mission-critical" subsystems, I have a hard time believing that those systems will be any more infallible than the modules produced by my group.

I expressed my concerns to my supervisor, stating that I did not feel that the system will be adequately tested before it is implemented. I argued that the behavior of the system under every possible circumstance cannot be known ahead of time, and I expressed my concern that unanticipated problems may lead to subsystem failure, aircraft damage, and possibly even danger to the pilot. My supervisor downplayed these concerns, pointing out that:

  • None of my "evidence" is conclusive. My supervisor argues that although we don't know exactly what will happen in a failure, any unexpected results are not expected to be life-threatening. We are specifically developing the system so that all anticipated mission-critical problems are addressed; that way, unexpected failures under unusual circumstances will not cause high-level failures.
  • The system is too complex to test thoroughly; the government is fully aware of this, and that is why they set the specifications the way they did. They know that some unforeseen difficulties could arise during operation, but they don't anticipate that these will put the system in jeopardy.
  • We are slightly over budget and significantly behind schedule, and we will only fall further behind as the development problems get harder. We cannot afford to expand the effort, or we will lose face and customer confidence, with potential impacts on future contracts. This contract will determine who can handle complex computer automation and decision-making jobs, and we need to be that company. We need to stay as close to on-schedule and on-budget as we can; we cannot afford to increase development or testing efforts at their expense.

Based on this reasoning, he told me not to expand my development or testing efforts without his specific authorization.

Cite this page: "Scenario," Online Ethics Center for Engineering (OEC), 3/21/2006. Accessed Tuesday, May 21, 2019.