Author(s): Michael McFarland, S.J.
Originally published by the Markkula Center for Applied Ethics
Wayne Davidson is a software engineer in the aerospace division of Occidental Engineering, a large engineering firm. For the past two years he has been working as a test engineer for Operation Safe Skies, a project to build a prototype of the next generation air traffic control system. This project, which is funded by a contract from the Federal Aviation Agency (FAA), is a very important one for Occidental. With all the cutbacks in defense spending, the aerospace division has been losing business. The Safe Skies project has provided much needed business, and could lead to a much larger contract if successful. Mindful of its strategic importance, the company had bid very aggressively for the original contract. In fact they had "low-balled" it, bidding less than it would take to do the work properly. They felt that was the only way they could beat out their competitors, who were just as hungry for the work. Because of their somewhat shaky financial position, the company was not willing to take a loss on the project, so the project has been underfunded and understaffed. Nevertheless those working on the project have made a heroic effort, working eighteen hour days seven days a week to meet the deadline, because they know how much it means to the company, not to mention their own jobs. They are now very close to success.
A version of the prototype has been completed and turned over to Wayne for testing. He has run extensive simulations on it and found that it works as it should except for one little problem. When there are too many aircraft in the system, it will sometimes lose track of one or more of them. The "forgotten" aircraft will simply disappear from the screen, there will be no trace of it anywhere, and it will be ignored by all of the collision avoidance and other safety tests. Wayne has been working with the software designers to identify the cause of the problem, and they have traced it to a subtle error in memory allocation and reuse. They are confident that they can fix it, but it will take a month or more to do the redesign, coding and testing.
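The case gives no implementation details, but the class of defect it describes is familiar. The following is a minimal, hypothetical sketch (all names invented) of how a flawed allocation-and-reuse scheme can silently drop a live track when the system is saturated, with no error ever raised:

```python
# Hypothetical sketch of the bug class in the case: a fixed pool of track
# slots with a naive reuse policy. Under load, the allocator wraps around
# and reuses a slot that still holds a live track, so that aircraft simply
# vanishes from the system without any error being reported.

CAPACITY = 4  # deliberately tiny so the overload is easy to trigger

class TrackTable:
    def __init__(self):
        self.slots = [None] * CAPACITY   # None means the slot is free
        self.next_slot = 0               # naive round-robin allocator

    def add_aircraft(self, callsign):
        # BUG: no check that the chosen slot is actually free. When the
        # table is full, the pointer wraps and overwrites a live track.
        slot = self.next_slot % CAPACITY
        self.slots[slot] = callsign
        self.next_slot += 1
        return slot

    def tracked(self):
        return [c for c in self.slots if c is not None]

table = TrackTable()
for callsign in ["AA10", "UA22", "DL7", "SW31", "BA99"]:  # one over capacity
    table.add_aircraft(callsign)

# "AA10" has been silently overwritten by "BA99" -- no exception, no log.
print(table.tracked())  # ['BA99', 'UA22', 'DL7', 'SW31']
```

Note how the failure mode matches the case: everything works in light traffic, and the lost aircraft leaves no trace, which is exactly why only stress testing at high load would reveal it.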
Wayne meets with his boss, Deborah Shepherd, the project manager, to discuss the implications. She tells him that what he is asking for is impossible. The contract requires that the company deliver a fully certified, working version of the software in three days for system integration and test. The government has developed a new, get-tough policy on missed deadlines and cost overruns, and Occidental is afraid that if they miss this deadline, the government will make an example of them. They would be subject to fines and the loss of the remainder of the prototype contract; and they might not be allowed to bid on the contract for the full system. This would have a devastating effect on the aerospace division, resulting in thousands of lost jobs.
They consider whether they can do a quick patch to the software before turning it over, but Wayne adamantly refuses to release any code that has not been tested thoroughly. There is always a chance that the patch would interact with some other part of the program to create a new bug.
"Then we'll have to deliver the software as is," Deborah says. "I can't jeopardize this project or the jobs of my people by missing that deadline."
"We can't do that!" exclaims Wayne. "That's like delivering a car with defective brakes."
"Don't worry," Deborah reassures him. "We have contacts in the FAA, so we know their testing plans. They will do a lot of simulations to make sure the software works with the hardware and has all the functionality in the specs. Then they will do live tests, but only at a small airport, with a backup system active at all times. There is no way they will overload the system in any of this. After that they will have some change requests. Even if they don't, we can give them an updated version of the program. We can slip the bug fix in there. They will never see the problem. Even if they do, we can claim it was a random occurrence that would not necessarily show up in our tests. The important thing is no one is in any danger."
"Maybe they won't find the bug, but I know it's there. I would be lying if I said the system passed all the necessary tests. I can't do that. Anyway, it would be illegal and unprofessional."
"You can certify that it is safe, because it is, the way they are going to use it."
And so he does. In the end Wayne signs off on the software. It is delivered to the FAA and makes it through all the preliminary tests, including live tests at a small airport in the Midwest. As a result of these tests, the FAA requests some changes in the user interface, and when Occidental delivers the new software it includes a robust solution to the problem of the disappearing aircraft. No one outside of Deborah's group ever learns of the problem. In fact Occidental's success with the prototype leads to major contracts for air traffic control software, giving much-needed business to the aerospace division. This saves hundreds of jobs, and allows the company to add hundreds more.
Wayne Davidson, however, takes early retirement once the prototype project is finished, in order to write a book on software testing. He feels that the book should have a chapter on ethics, but he can never bring himself to write it.
What do you think about Wayne's decision? Was it ethical?
Michael McFarland, S.J., a computer scientist, is the former president of College of the Holy Cross and was a visiting scholar at the Markkula Ethics Center. June 2012
When Wayne in the Occidental case loses his argument with Deborah, his manager, he still faces an ethical dilemma. If he is still convinced that there is something wrong with certifying the defective software, he must decide whether he should go ahead and do it as ordered, refuse to cooperate, or actively oppose any such action. If he does nothing, and an accident does occur because of the flawed software, is it his fault? This raises the issue of responsibility.
In making an ethical judgement, it is not enough to decide what is right to do or what should have been done in a certain case. It is also necessary to decide to what extent one is or was responsible for doing what is right. It is certainly wrong for the driver of a car to crash into another car parked by the side of the road; but if the driver had lost control of his car because he suffered a heart attack or if he didn't see the parked car, he cannot be blamed for the accident, unless he could have avoided the circumstances that led to it.
In the case of an individual act, assessing responsibility is relatively straightforward. To be held responsible, the agent must have knowledge of the act and its consequences and must have the freedom to choose or not choose the act. The driver who has a heart attack and loses control of his car is not in a position to choose whether or not to hit the parked car. The driver who comes around a corner driving at a safe speed in what he has every reason to believe is a clear travel lane and unexpectedly plows into a car that has been left in the road did not know and could not be expected to know that his actions would lead to a crash. In neither case would the driver be ethically responsible for the crash. In the first case he lacked the freedom, in the second the knowledge.
When a person acts in a social and institutional context, the problem of responsibility is much more complex. As part of a group, one may be asked to take part in, or at least not interfere with, actions with which one does not agree, as in Wayne's case. Or it may be clear that some action is called for, but not at all clear who should take that action. The Red Cross says the blood supply has fallen so low that there is a crisis for the hospitals. Someone ought to donate; but why should it be me?
Another problem is that institutions so constrain people's options and their ability to act that sometimes they cannot satisfy all the ethical demands on them. Prior to the disastrous flight of the space shuttle Challenger, engineer Roger Boisjoly of Thiokol, Inc., the maker of the rocket motor that failed and led to the crash, had serious doubts about the safety of the O-ring seals, especially at low temperatures. He made his misgivings known to his managers, but when they chose to ignore him, he went no further. To do more, he felt, would have been disloyal and disrespectful of the prerogatives of management. As he later told investigators, "I must emphasize, I had my say, and I never take [away] any management right to take the input of an engineer and then make a decision based upon that input, and I truly believe that....So there was no point in me doing anything any further."40 Ben Powers of NASA was in the same position. Both were aware of an inordinate risk to the lives of the astronauts, and both wanted to act to protect them; but they were frustrated by both the personnel and the procedures of the organizations in which they worked. If management had been more responsive, or if there had been alternative procedures for airing safety concerns, the tragedy might have been prevented. As it was, Boisjoly and Powers could not do anything effective without violating what they saw as their obligations to their employers. Ironically, even the loyalty they showed was not enough. Boisjoly was treated as a traitor at Thiokol and eventually put on permanent leave for speaking publicly about the company's part in the disaster.41
Incidents like this show the power of institutions to influence and shape our ethical decisions. But we must also acknowledge the power of our ethical decisions to shape institutions. Therefore there are two aspects of responsibility to consider in a social context: our responsibility within institutions and our responsibility for institutions.
In a social or institutional context, ethical responsibility can be so diffused that no one feels responsible, even when organizational roles are well-defined. This is especially true when positive action is required to bring about some good or avoid some harm. If it is wrong to lie, then it is wrong no matter how many people are involved. But who is required to step forward and tell the unpleasant truth? Beating someone with a baseball bat is wrong no matter whether one is alone or in a mob. But when is one required to step forward from a crowd of bystanders and stop such a beating, or, for that matter, to alert the public to the dangers of a particular technology? These questions become even more obscure and difficult when the people involved are bound together by institutional loyalties and contractual obligations.
In their study of corporate responsibility,42 Simon, Powers and Gunnerman looked at these questions. They used as an analogy the case of Kitty Genovese, who was stabbed to death outside her apartment in Queens while at least thirty-eight of her neighbors looked on, none of whom even called the police, let alone intervened. She was attacked three different times by her assailant over a half hour, so there was time to save her. It is obvious enough that the failure of the neighbors to help was wrong, but more difficult to explain why and how. When is there an obligation to take positive action to prevent harm to another? The authors identified four conditions that must be met:

1. There is a critical need.
2. The agent has proximity to the need, and so is in a position to know about it.
3. The agent has the capability to help without undue risk.
4. The agent is likely the last resort; no one else is in a position to help.
Condition 2 says that those who are in the best position to know about a serious threat have the strongest obligation to act. In particular, when the threat comes from a certain piece of technology, the engineers who are most intimately involved with the technology and best understand its consequences have a special duty to protect the safety of the public. However, condition 3 qualifies that obligation: it only exists if action can be taken without any serious risk to the agent. This is a very conservative requirement. If the threat is serious enough, say the near-certain, catastrophic meltdown of a nuclear power plant, an engineer ought to be willing even to risk his or her job if there was a chance of preventing the disaster.
The reason for the reluctance to require an engineer to take on significant personal risks to help out is that it is unfair. The engineer did not cause the danger, at least not intentionally, so he or she should not have to pay so high a price for preventing it. Yet in many instances that is the only choice, because of the way the organization is structured. That is why whistle-blowing cases like Boisjoly's are often so tragic.44 Either the engineer suffers unfairly or the public suffers even more unfairly. There is no choice that satisfies all the ethical demands of the situation.
That is why it is not enough in such cases to ask what the individual should do, that is, what is the ethical choice. The more important question is how to change the institutional context so that there is an ethical choice.
To return once more to our original case, the corporate structure and climate at Occidental Engineering was part of the reason why Deborah and Wayne faced such difficult ethical decisions and why the outcome was so unsatisfactory. If the company had had an uncompromising commitment to quality and safety, it would not have put itself and its engineers in the position of having to cut corners to satisfy its commitments; and when safety concerns arose, it would not have tried to suppress them. Furthermore if the company had had more respect for its engineers, especially those like Wayne who are ultimately responsible for safety, it would have given them more input in formulating the bid, to make sure they could do what they promised, and it would have provided channels for concerned employees and managers like Wayne and Deborah to pursue concerns about safety and integrity without feeling they were endangering their jobs or those of their employees. As it was, Wayne and Deborah found themselves in a position where whatever they decided, they would violate some ethical obligation. This did not excuse them of the responsibility to do what was right under the circumstances, but it was unfair to them. Because of the institutional context, they were in a position where any choice they made would be hurtful in some way. A satisfactory resolution of this case, therefore, would have to include more than a prescription for how Wayne and Deborah should have acted. It must include an analysis of how the organizational context in which they operate should change to enable and support ethical action.
Robert N. Bellah and his associates, in The Good Society,45 have written about the importance of institutions in our moral life:
It is tempting to think that the problems that we face today, from the homeless in our streets and poverty in the Third World to ozone depletion and the greenhouse effect, can be solved by technology or technical expertise alone. But even to begin to solve these daunting problems, let alone problems of emptiness and meaninglessness in our personal lives, requires that we greatly improve our capacity to think about our institutions. We need to understand how much of our lives is lived in and through institutions, and how better institutions are essential if we are to lead better lives....We Americans tend to think all we need are energetic individuals and a few impersonal rules to guarantee fairness; anything more is not only superfluous but dangerous–corrupt, oppressive, or both....It is hard for us to think of institutions as affording the necessary context within which we become individuals; of institutions as not just restraining but enabling us; of institutions not as an arena of hostility within which our character is tested but an indispensable source from which character is formed.46
Institutions embody and perpetuate the values of those who shape them. If those values are harmful, like carelessness and greed, the institution will be destructive, and will frustrate the efforts of those within and around it to do what is right. On the other hand, if those values are good, the institution by its normal operation will bring about much good, and will make it much easier for its members to be ethical.
Institutions do not just happen. They are the result of human choices. Ultimately we are responsible for them. Our ethical obligations are not just to act ethically in whatever situation we find ourselves, but to build ethical institutions. This means institutions that are fair, that respect human dignity and freedom, and whose purpose is to achieve some good in whatever domain they operate in. This may seem like an impossible task. The institutions are too large and powerful, and we have so little influence on them. That is true if we think of ourselves only as isolated individuals. But that is where our thinking needs to change, as Bellah and his coauthors point out. Institutions are powerful because they can mobilize purposeful, organized, collective action. It takes purposeful, organized, collective action to change them. That is part of our ethical responsibility.
It is not always enough, then, for ethics to evaluate individual judgements and actions. It must also step back, examine the social and institutional context in which ethical issues arise, and ask how that context needs to be changed to enable and support ethical behavior.