Describes how to use the ImpactCS approach to analyze the Therac-25 case.
The ImpactCS approach to ethical analysis was devised by a panel of ethicists, computer scientists, and social scientists. Its central idea is that any particular computing system can be analyzed both at several levels of social analysis and in terms of particular ethical issues. The grid you see below was designed by the panel to serve as an analytic tool for thinking about any system: each of the ethical issues can be analyzed at each of the levels of social analysis.
For instance, in this case, safety is the primary concern. But we need to think about issues of safety at individual, group, national and global (or international) levels. Each of these levels brings forth different issues and different ethical concerns. In addition, the grid reminds us that safety issues are only one of many issues that might concern us in a case.
We unpack these concerns in a series of documents that you can access by clicking on the highlighted cells in the Framework below. For Therac-25, we provide an extensive analysis of safety issues at all four levels. But a complete analysis would ask about privacy issues (important for reporting systems), property rights (is the software for Therac-25 a product or a service?) and all the other issues. We provide rudimentary suggestions about these issues in the links.
The grid crosses ethical issues such as Quality of Life, Use of Power, Equity and Access, and Honesty and Deception with the Levels of Social Analysis (individual, group, national, and global).
The decision to computerize a medical linear accelerator was, in the beginning, a quality of life consideration. AECL did not set out to make a device that would expose individuals to harm. It made improvements in a device that would, in theory, allow greater access to a medical technology and would increase the quality of care those patients received. In our interview with a Therac operator, you can find testimony about the increased quality of life provided for operators and patients by the new machine. But in the process, a lax culture of safety at AECL led to a system design that was unsafe and not well tested. In this case, the values of safety and increased quality of life for consumers need not have been in conflict. But in practice, they became so.
Perrow (1984) suggests that most risk analysis procedures are really a way for some people to think clearly about the risks to which they will subject other people. Those doing a risk or safety analysis are usually hired by the company to protect itself from risk. There are mixed motives here: by protecting the company from risk, they also protect the safety of those who use its products.
Ford Motor Company made itself infamous by explicitly comparing risk to the company (in dollars lost from lawsuits) to the risk that consumers faced (from the inadequate design of gas tanks in the Pinto). It decided that it would cost less to pay the lawsuits than to fix the car. Here the calculations were all financial. But it is at least up for debate whether all companies make decisions in this manner. In many, the motives are mixed: protection of the company and safety of the consumer.
But AECL's priorities seem odd even in the light of self-protection. Its risk analysis seemingly was not done to protect the company, but to certify its already strongly held belief that the machine was safe. This sort of unfounded optimism regarding technology at least provides the company with the defense of ignorance. But this defense is less persuasive when offered by those with power over others' well-being. When individuals or corporations are given more power, we are also more likely to hold them responsible for their actions.
At any rate, this case clearly raises the issue of who has the power to impose risks on others. This power may be economic (as in the case of AECL) or political (as with the FDA).
But at the individual level, the power may simply be positional, acquired because you happen to be the software engineer assigned to a particular project. This is what Huff has called unintentional power: the power that a designer has over the users of a product. Someone with unintentional power uses it without intending either benefit or harm to the ultimate user of the product. This is another case of the defense of ignorance, and here the defense is harder to believe, since the software controlled a potentially lethal radiation beam. The intention of the programmer, or of the operator, was not to harm patients, or even to be placed in a position where they could do harm. But taking a job as a programmer entails this unintentional, positional power. It is better to know this than to ignore it.
These sorts of power differentials exist at all levels of the social analytical framework. And a careful ethical analysis of power will ask what duties go along with that power, and what rights are held by those with less power.
We cannot present here a full analysis of system safety and instead refer the interested reader to items in our bibliography and to various web sites that address these issues. You can see what noted system safety expert Nancy Leveson has to say about this case in the excerpt from her article that we provide in the supporting documentation.
However, we have at least learned that in order to understand the safety issues properly, we must look at them at several levels of social complexity, just as the ImpactCS framework suggests.
Safety at the Individual Level
Certainly the single individual who did the programming for the Therac-25 had responsibilities as a computing professional. To whom were these responsibilities owed? An obvious first responsibility is to the organization that employed him. A second responsibility is owed to the eventual users of the linear accelerator: the patients. We can certainly add others (e.g., to the profession, to the machine operators), but let's take these two for the purposes of this analysis.
The programmer's responsibilities to his employer were more than simply to do as directed by the other designers of the system (or by whoever his immediate superiors were). He had a responsibility to make his superiors aware of the dangers inherent in implementing the safety interlocks only in software. Whether this danger was obvious to him is an interesting question. Even today, many computing professionals place more confidence in the safety of software than is warranted, and software safety was little understood at the time the Therac-25 system was designed.
The point here is that a computing professional is responsible to the employer for using the best available methods to solve the software problems with which he or she is confronted. A variety of professional decisions the programmer made in the Therac-25 design suggest he was lax in this responsibility (e.g., using unprotected shared memory, improper initialization, and a lack of appropriate testing). Thus, as an employee, he fell short of the mark in providing his employer with professional work. We cannot know whether this shortcoming was one of lack of knowledge or of poor execution.
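To make "unprotected memory" concrete, here is a minimal sketch of the kind of flaw involved. It is not AECL's code (the Therac-25 software was written in assembly language for a PDP-11), and every variable and task name below is invented; it simply shows how two concurrent tasks that share state without any locking can let a treatment proceed on stale settings when the operator edits quickly.

```c
/*
 * Illustrative sketch only -- not AECL's code.  Two tasks share variables
 * with no mutex or other synchronization, so a check-then-act race can
 * start "treatment" with a stale setting.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Shared, unprotected state: this is the flaw. */
static volatile bool data_entry_complete = false;
static volatile int  beam_energy = 0;       /* written by the operator task */

static void *operator_task(void *arg)
{
    (void)arg;
    beam_energy = 25;                 /* first value entered                 */
    data_entry_complete = true;       /* flag set while edits are still possible */
    usleep(100);                      /* a moment later the operator edits...    */
    beam_energy = 6;                  /* ...but the other task may already have read the old value */
    return NULL;
}

static void *treatment_task(void *arg)
{
    (void)arg;
    while (!data_entry_complete)      /* busy-wait on the unprotected flag   */
        ;
    /* Race: the flag was raised before editing finished, so this task can
     * read the stale energy value and begin "treatment" with it.           */
    printf("treating with energy = %d\n", beam_energy);
    return NULL;
}

int main(void)
{
    pthread_t op, tx;
    pthread_create(&op, NULL, operator_task, NULL);
    pthread_create(&tx, NULL, treatment_task, NULL);
    pthread_join(op, NULL);
    pthread_join(tx, NULL);
    return 0;
}
```

Leveson's analysis describes races of roughly this shape among the Therac-25's data-entry and treatment tasks; the remedy is explicit synchronization around the shared state, backed by hardware interlocks that do not depend on the software at all.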
In addition to responsibilities to his employer, the programmer clearly had a responsibility to the users of the technology he was designing. In the context of safety, his responsibility was to design software that minimized the likelihood of harm from a dangerous medical device. This obligation to "do no harm" need not mean that software should never be paired with medical linear accelerators. From the perspective of the operator we interviewed this pairing was a positive benefit in making setup and treatment easier. But it does mean that, to the extent it was within the professional control of the programmer, he should have designed the system to do no harm while providing this positive good. Again, whether the failure to do this was a result of a lack of knowledge or of poor execution, we cannot know. Huff & Brown provide some speculation on this issue, given the state of the art at the time concerning real time computing.
To sum up, the programmer had clear responsibilities both to his employer and to the users of the device, and he clearly failed in these responsibilities. If we were interested in blame, however, we could not tell how much blame to assign here. We know nothing of the programmer's background or training, so we cannot know whether he realized how poor the software design and testing were. Nor do we have any idea of the internal company dynamics that may have resulted in the lack of testing.
We noted in the case write-up that the operators of the linear accelerators had a complex combination of responsibilities. Chief among these are responsibilities to their employer and to the patients.
As with the programmer of the system, we know little about the background and training of the operators in this case. But we can at least specify their responsibilities. They were responsible to their employers for operating the machine efficiently, getting all the scheduled treatments done in any particular day. They also had a responsibility to their employers to look after the machine so it could be maintained properly. Finally, they had a responsibility to their employers to operate the machine carefully and not to place patients in danger.
From published accounts of operators' comments, from our interview with a Therac-4 operator, and from comments made in court documents, it seems clear that none of the operators felt they were placing their patients in any danger when they pressed the button. Thus, we can rule out intentional negligence. But what happened? Leveson suggests that the interface on the console made operators tolerant of error messages and readily rewarded them for pressing the "proceed" button whenever a minor error appeared. The interface made no distinction between life-threatening errors and minor errors, except that major errors would not allow a "proceed." Given this, it is hard to see how the operators might be responsible for the errors, even though they were the ones to press the key.
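The design alternative implied by this critique can be sketched briefly. The fault table, codes, and messages below are invented for illustration rather than taken from the Therac-25 console software; the point is only that each fault can carry an explicit severity, a plain-language message, and a rule about whether the operator's "proceed" key is allowed to clear it.

```c
/* Hypothetical console fault handling -- not the Therac-25 code. */
#include <stddef.h>
#include <stdbool.h>
#include <stdio.h>

enum severity { INFO, RECOVERABLE, SAFETY_CRITICAL };

struct fault {
    int           code;
    enum severity level;
    const char   *message;        /* plain language, not a bare error number */
};

/* A hypothetical fault table. */
static const struct fault faults[] = {
    { 15, INFO,            "Dose rate slightly low; monitoring" },
    { 37, RECOVERABLE,     "Collimator not yet in position; retry allowed" },
    { 54, SAFETY_CRITICAL, "Delivered dose differs from prescription" },
};

/* Only non-critical faults may be cleared with the "proceed" key. */
static bool proceed_allowed(const struct fault *f)
{
    return f->level != SAFETY_CRITICAL;
}

int main(void)
{
    for (size_t i = 0; i < sizeof faults / sizeof faults[0]; i++) {
        const struct fault *f = &faults[i];
        printf("FAULT %d: %s -- proceed %s\n", f->code, f->message,
               proceed_allowed(f) ? "permitted" : "LOCKED OUT; call the physicist");
    }
    return 0;
}
```

Under a design like this, pressing "proceed" out of habit cannot resume a safety-critical fault, which also makes the operator's share of responsibility easier to define.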
An interesting issue arises because of the current move among operators to become more professionalized. As operators become better trained, certified, and more aware of the workings of the machine, they gain prestige, but they also gain responsibility. As they become well trained enough to foresee such errors, their responsibility for them will increase.
Safety at the Group Level
There are two organizations at this level whose actions need to be thought about: the treatment facilities and Atomic Energy of Canada Limited.
Atomic Energy of Canada Limited
With regard to safety in this case, AECL's responsibilities in making a medical linear accelerator run to a range of parties: its shareholders, its employees, the governments of Canada and the United States, the facilities that bought the machine, and finally the patients who were treated by it. Responsibilities to shareholders and employees are similar, and for this analysis will be considered the same.
Before we look at these specific responsibilities, we will need to understand some of the technical issues involved in the analysis of a system for safety. In this instance, technical knowledge is required to make ethical judgments.
AECL claimed to have done a safety analysis of its machine, but in fact the analysis showed only the likelihood of the system failing because a part wears out. There was apparently no systematic search for design flaws in the software until after the FDA required an analysis. Unfortunately, a system can be highly reliable and yet, because of a design flaw, reliably kill people. This confusion of reliability analysis with safety analysis is a critical failing on the part of AECL.
Some indication of the motivations behind AECL's inadequate safety analyses can be gleaned from the way AECL appeared to use probabilities in its analysis. These probabilities seem to have been assigned to quantify and to prove the safety of the system, rather than to identify design flaws. For example, after redesigning the logic that tracked the microswitches indicating the position of the turntable, AECL apparently used a sort of Fault Tree Analysis to assert that the safety of the system had been improved by at least five orders of magnitude. This astonishing claim of improvement was applied to the safety of the entire machine. Used this way, probabilities from a Fault Tree Analysis can effectively hide critical design flaws by inflating the perception of reliability and discouraging any further search for them. This hiding of design flaws was a tragic, if unintentional, side effect of the improper use of the analysis.
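To see why applying this fault-tree result to the whole machine is misleading, consider a deliberately simplified hazard model. The numbers and the symbol P_sw below are invented for illustration; they are not AECL's figures. If an overdose can arise either from a hardware fault or from a software design flaw, the top-level probability is roughly the sum of the two branches:

```latex
% Illustrative numbers only; not AECL's actual fault-tree values.
% P_sw = probability that a software design flaw produces an overdose.
\begin{align*}
P(\text{overdose}) &\approx P(\text{hardware fault}) + P_{\text{sw}} \\
\text{before the microswitch redesign:} \quad P(\text{overdose}) &\approx 10^{-2} + P_{\text{sw}} \\
\text{after the redesign:} \quad P(\text{overdose}) &\approx 10^{-7} + P_{\text{sw}}
\end{align*}
```

The five-orders-of-magnitude improvement applies only to the hardware term. If P_sw is dominated by a design flaw that is triggered whenever an operator edits a prescription quickly, it may be closer to 1 in 1,000 treatments than to 1 in 10 million, and the hazard probability for the machine as a whole barely changes. A fault tree built without a systematic search for software design flaws simply omits, or guesses at, the term that matters most.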
Thus, in failing to look systematically for design flaws in its software, AECL left itself (and its employees and shareholders) open to liability claims from injured consumers. This was also clearly a failure of its responsibility to patients and to the facilities that bought the Therac-25 machine and were assured there was no way it could hurt patients. This failure must be judged in light of the prevailing standards (or lack thereof) in system safety at the time Therac-25 was designed and released.
The Cancer Treatment Facilities
The cancer treatment centers are primarily consumers of a product that is tested, maintained, and certified by others. The product was sold to them with assurances that it could not hurt patients, and the facilities have neither the responsibility nor the capability to independently check these systems for safety.
But they do have responsibility for the safe operation and low-level maintenance of the machines once they are in use. It is clear that at least one facility fell down in this respect: in the first Tyler accident, the video monitor for the treatment room was unplugged and the intercom was out of order. This would not have been a problem had there been no accidents, but there were. One difficulty with the safe operation of systems is that routine maintenance can become tedious and come to seem an unnecessary part of running a regularly used system. In this case, a patient might have been spared a second overdose if these basic communication systems had been working.
We should note, however, the extraordinary efforts of the medical physicist at Tyler in determining the cause of the overdose. This individual effort was supported by the Tyler facility and made possible by the facility’s decision to have a full time physicist on staff. Some evidence of this support comes from the facility’s decision to report the accident to the FDA even though there was no requirement that they do so. Note that the facility decided that their responsibility extended beyond the requirements of the law.
Thus, the facilities had relatively minimal responsibilities in this case, and most seemed to fulfill them. The facilities had little power to resolve the problem and depended on AECL and on the FDA's approval process to protect them and their patients. Perhaps in this dependence they were too optimistic, but it is difficult to see what other choices they might have had.
Safety at the National Level
At the time of the Therac-25 accidents, the Center for Devices and Radiological Health (CDRH) of the FDA was responsible for oversight of the immense market in radiation-based therapy and diagnostics. As we have seen, most devices for this market (94% in 1984) were approved by "pre-market equivalence" and thus not subjected to stringent testing. The CDRH could not have handled the load of testing all these devices.
Since the rules for the FDA are set by Congress, the FDA's rules need to be analyzed from the perspective of the responsibilities of Congress. But the FDA's implementation of those rules is under its own control. Thus we can ask whether the CDRH (as a center within the FDA) should have allowed the Therac-25 to be approved under pre-market equivalence. Without more information this is difficult to determine. The CDRH did seem to follow the case vigorously once it became aware of the Tyler accidents, though there is some evidence that it was reluctant to quickly halt the use of the Therac-25 when the problems became evident. This reluctance may reflect its responsibility not to place an undue burden on manufacturers in its caution regarding a product. This tension between responsibilities to manufacturers and industry and responsibilities to patients is always present in decisions by the FDA. Hindsight makes this one seem easy to decide.
One of the problems in this case is that the FDA depended on AECL to notify it of accidents that had occurred; it did not hear directly from the hospitals when the accidents happened. AECL had, at best, a mixed record of notifying the FDA of problems. So perhaps facilities should have been required to report accidents directly (they are today). But the FDA could not impose this requirement; it could only enforce existing law. Thus, perhaps it was the responsibility of Congress to enact such a law. The counter-argument is that Congress should allow the market to work out these issues. But in this instance, at least, the market was too slow to save the individuals who were killed or injured.
The entanglement of different ethical issues becomes very clear at this level of analysis. Different constituencies will value different things (e.g., personal privacy vs. business freedom). These choices among values are just as sharp at the other levels (e.g., the operator's responsibilities to employer and patient) but are less easily seen by the outside observer. Choice and balance among these values become inescapable, however, at the political level.
Safety at the Global Level
Communication between the Canadian Radiation Protection Board and the FDA seemed to work pretty well in this case. These two agencies had responsibilities to their respective governments, to industry in their countries, and to patients in their countries. AECL's communication with the FDA did not seem to be hampered by its international character. However, this is a case of two relatively similar countries and cultures interacting with each other; similarities in legal standards and in government oversight made the case easier. The story might have been far less happy had the two countries had widely different business cultures or legal systems.
At one point in its interactions with the user groups, AECL found itself being asked for the source code of the Therac-25 software. AECL claimed proprietary rights to the software and would not make it public. This is another case of two values coming into conflict. A concern for safety suggests that it would be helpful to open the source code to inspection by the FDA, its agents, or the user groups. But to force this openness would violate the property rights of the owner of the software, AECL. One suspects that AECL's refusal to open the code to inspection was a defensive move aimed at avoiding liability rather than an attempt to protect the value of the intellectual property. But if close inspection showed the software to be poorly designed, the value of the software would surely diminish. Is this a case in which we want to uphold property rights? There may in fact be some case here for an "open software" standard to protect public safety.
The ImpactCS grid suggests that we identify several levels of social analysis for each ethical issue, and in this case those levels interact. The public (ideally represented by the FDA) was placed at great risk by the software. What claims can the cancer treatment facilities and the FDA make about the value of being able to inspect the design and logic of the software? What claims can patients (or their surviving families) make regarding the validity of AECL's claim to keep its software a trade secret? So, in order to understand the property issue correctly, we need to look at claims made at many levels: national, organizational, and individual.
Privacy issues may well be raised by this case as one begins to recommend some sort of national reporting mechanism for medical devices. In order to report accurately on any accident, sensitive medical data about individuals would need to be collected by treatment facilities and made available to national agencies. These national agencies might, in turn, make this data available internationally. In our case, data about the accidents was shared by agencies of the Canadian and US governments.
It seems possible to make the patient-record data related to medical accidents anonymous, that is, to separate the data from the identity of the patient. But one would have to think carefully about how to do this. The quickest solution, attaching the patient's entire medical history, likely collects too much data and violates the privacy of the patient. Collecting only as much data as is needed is a reasonable requirement.
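As a sketch of what "collecting only as much data as is needed" might look like in practice, the fragment below separates what stays at the facility from what goes to a national agency. All of the field names and figures are invented for illustration; this is not an actual reporting format.

```c
/* Hypothetical accident-report structure illustrating data minimization. */
#include <stdio.h>

/* Held only by the treatment facility. */
struct patient_record {
    const char *name;
    const char *full_medical_history;
    int         facility_case_number;
};

/* Sent to the national agency: no name, no history. */
struct accident_report {
    int         facility_case_number;   /* opaque ID; meaningful only locally */
    const char *device_model;
    const char *device_serial;
    double      prescribed_dose_gy;
    double      estimated_delivered_dose_gy;
    const char *error_code_displayed;
};

static struct accident_report make_report(const struct patient_record *p)
{
    struct accident_report r = {
        .facility_case_number        = p->facility_case_number,
        .device_model                = "Therac-25",
        .device_serial               = "unknown",
        .prescribed_dose_gy          = 1.8,      /* illustrative figures only */
        .estimated_delivered_dose_gy = 150.0,    /* illustrative figures only */
        .error_code_displayed        = "MALFUNCTION 54",
    };
    (void)p->name;                 /* deliberately NOT copied into the report */
    (void)p->full_medical_history; /* deliberately NOT copied into the report */
    return r;
}

int main(void)
{
    struct patient_record p = { "Jane Doe", "(history stays at the facility)", 1042 };
    struct accident_report r = make_report(&p);
    printf("case %d, %s: prescribed %.1f Gy, estimated %.1f Gy, code %s\n",
           r.facility_case_number, r.device_model, r.prescribed_dose_gy,
           r.estimated_delivered_dose_gy, r.error_code_displayed);
    return 0;
}
```

The facility keeps the mapping from case number to patient, so investigators can ask follow-up questions through the facility without the national agency ever holding identifying data.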
Equity and access issues are also raised by this case. The whole point of the design of Therac-25 was to make a medical linear accelerator that was less expensive to produce and thus likely less expensive (and more available) to consumers. This is often an effect of the free market on the price of technology. If AECL could make a less expensive, but equally useful linear accelerator, it would sell more of them and they would be more easily available to the public. AECL would, doubtless, make money in the process. This is Adam Smith’s invisible hand at work: decisions to make a better, less expensive product are good both for the manufacturer and for the consumer.
On the other hand, increased regulation and oversight impose increased costs on providers of medical devices. These regulations may be seen as necessary, given the track record of companies like AECL, but they still increase the company's development costs and the cost of the product. This, in turn, reduces the availability of the devices.
Again, we have here an issue of balance between competing goods: safety of the consumer and the increased access of the consumer to life-saving medical technology.
Honesty and Deception issues are central to this case. In several places, AECL representatives made claims of safety for the Therac-25 device that in retrospect seem at least exaggerated.
How did this occur? These claims were made by individuals (salespersons, engineers) on behalf of an organization (AECL). Individuals making claims like these have some responsibility to check their accuracy. But salespeople have little expertise with which to evaluate this information and are thus more dependent on the organization. Engineers, including software engineers, are capable of evaluating the claims, though they may be allowed little time in which to do so. Again, we find a balance between the engineer's responsibilities to the company (use time efficiently) and to the consumer (carefully evaluate claims made about the product). Because of their special expertise, it is precisely the role of professionals to balance these conflicting responsibilities, and not to neglect the responsibility to the consumer.
What organizational responsibilities might there be regarding claims of safety in medical devices? AECL representatives in several instances claimed that no overdoses had occurred with the Therac-25 machine when there was clear evidence that someone at AECL must have heard of several previous accidents. This suggests that there may have been internal miscommunication within AECL. Some portions of the organization may have known about the lawsuit regarding radiation harm but not have had the time, or seen the need, to inform other parts of the organization. For instance, those in the legal division, hearing of the lawsuit, may have assumed that the engineers were aware of the issue and that there was no immediate need to contact them. This sort of miscommunication is a daily matter in organizations, even small ones (think of the miscommunication that occurs in your family).
Thus one part of the organization may have been claiming, on the basis of the information available to it, that no accidents similar to the reported ones had occurred. Still, when the stakes are as high as they were in this case, organizations have a special responsibility to transmit safety-critical information as quickly and as accurately as possible. This happened sometimes within AECL (the FDA was notified quickly of the Hamilton accident) but not all the time.