This case highlights dilemmas that postdoctoral fellows may encounter in a research setting. Are participants in a breast cancer imaging study entitled to their test results when none of the imaging modalities has been validated as a medical screening tool? The case also explores potential problems researchers face on the path to clinical trials.
From: Graduate Research Ethics: Cases and Commentaries - Volume 5, 2001
edited by Brian Schrag
A university-based research team is developing four new breast cancer imaging modalities based on tissue property reconstruction. The methods image tissue properties such as light attenuation, mechanical stiffness, microwave energy absorption and electrical impedance. Currently, the researchers are conducting very early examinations of volunteers, working to sort out issues of data collection, patient comfort, initial experience with biological tissue, etc. These tests are not intended to generate scientific data for detailed study but merely to establish a basic level of preparation for each method in anticipation of the large-scale clinical trials they will face in the future. It is important to note that at this point, none of the modalities is a validated medical screening tool since the clinical studies required for validation have not been performed.
Given that these experiments are not actual clinical exams, the interaction with these early volunteers is fairly informal. The women are asked if they would like to participate merely in an effort to help out with the development of the project. Women who volunteer to be imaged are told about the various techniques in basic terms, and the scientists working to develop the methods, primarily engineers, describe the resulting images to them.
Recently, two of the four modalities localized some type of heterogeneity in a volunteer's left breast, both showing almost identical size and location for the anomaly. The other two experimental imaging modalities indicated nothing unusual in either breast, and the woman's standard mammogram had come back perfectly clean. However, the researchers knew that a large malignant tumor, missed in an earlier mammography exam (a false negative) and discovered instead through palpation, had previously been removed from the woman's right breast. Uncertain how to proceed but concerned that they had detected a potentially serious health problem, the experimenters, both radiologists and engineers, tried to decide on their next step.
After extensive discussion, the researchers decided that the results were too inconclusive to risk informing the woman at this point, but that given the potential gravity of the situation, further tests should be conducted. The researchers consulted the woman's primary care physician, and a month after the first images were taken the woman was brought back for a second examination by the four modalities. When the results from these second tests came back nearly identical to the first, the research team informed the woman of the situation and initiated a dialogue between the woman and her primary care physician. Together, they decided to follow the standard protocol for a positive mammography result, including an ultrasound examination and a high resolution MRI. When both of these tests came back perfectly clean, the woman considered the case closed and no further tests were performed.
Deborah G. Johnson, Georgia Institute of Technology
This case illustrates an extremely complex and difficult issue for researchers involved with the development of new technologies. At the heart of the case is uncertainty and the role uncertainty plays both in technological development and in ethics. Uncertainty makes for difficult decision making.
In one of the first textbooks on engineering ethics, Martin and Schinzinger1 suggested that engineering should be understood as social experimentation. They argued that engineering should be seen on the model of medical experimentation since engineering always involves some degree of risk and uncertainty. Even if engineers are building something that has been built before, the new undertaking will involve differences that may affect the outcome: a different environment, different materials, a different scale and so on. Martin and Schinzinger seemed to believe that the risk and uncertainty of engineering undertakings had not been sufficiently recognized. Consequently, those who are put at risk by an engineering endeavor are rarely involved in the decision making or given an opportunity to consent or withhold consent. In this case, engineering and medical experimentation are fused. There is no distinction. Nevertheless, the fact that the engineering endeavor is framed as medical experimentation does not seem to make the ethical issue any clearer or easier. The powerful role played by uncertainty is quickly brought into focus when we compare this case to a hypothetical situation in which researchers use standard imaging modalities to test some other aspect of the machinery. Suppose, for example, that researchers are testing a new, ergonomic design for a machine that deploys standard imaging modalities. The researchers discover an anomaly in the breast of a research participant. I believe the researchers would not hesitate to inform the patient and her doctor; they would be confident with regard to the significance of the finding.
The researchers hesitate in this case because they are uncertain of the meaning of their finding and they do not want to cause unnecessary stress to the participant. This response is understandable given that the engineers are so unsure about the validity of the imaging modalities.
The situation is actually not so uncommon in engineering. Often engineers and scientists have evidence, but the evidence is limited and doesn't give them the certainty they need to make a decision. This parallels the situation in which Roger Boisjoly found himself with regard to the launching of the Challenger.2 Boisjoly had some evidence that the O-rings behaved differently in extremely cold temperatures, but he had not had time to do further testing to establish how the O-rings would function. He had evidence, but he was unsure of the meaning or strength of the evidence. Was it strong enough to justify stopping the launch of the Challenger? Was it weak enough to be ignored? It just wasn't clear.
The parallel with this case should be obvious. Is the evidence strong enough to contact the participant or her physician? Weak enough to be ignored? It just isn't clear.
In situations of this kind, many factors come into play: the severity of the risk involved, the timeframe before outcome, details of the domain (spaceships, breast cancer, etc.), the possibility of gathering further evidence, and so on. In the case at hand, the severity of the risk of saying or doing nothing is high in the sense that a woman's life is at stake.
The engineers are reluctant to inform the woman for fear of causing her unnecessary stress. While this attitude is understandable, it also hints at paternalism. Their hesitation presumes that the woman is not capable of understanding the uncertainty of the data and the risks at stake. Thus, I believe the researchers did the right thing by telling the woman and her physician about their discovery, and I am inclined to think they should have done so earlier. Nevertheless, I admit this case is difficult because of the uncertainty of the data.
This case examines the potentially negative outcomes that can occur when not all consequences of one's actions are taken into account. Here, perhaps out of ignorance, the engineers who were developing early prototypes of various medical imaging methods failed to appreciate the potential impact of these untested images on their volunteers. The heart of the problem lies in the unclear statement of the experiment's intentions. As this was not a traditional scientific experiment in the sense of conducting a large number of trials with careful prescreening and a detailed statistical analysis of the results, the "human subjects research" aspects of the examinations are not necessarily clear.
The NIH's "Guidelines for the Conduct of Research Involving Human Subjects" defines "research" as "any systematic investigation designed to develop or contribute to generalizable knowledge." The "try it out and see if it works" type of protocol in place during the tests described in this case study may defy the designation "systematic investigation." In any case, even if it were clear to the investigators from the start that they should take precautions because of the involvement of human subjects, it is not clear that that would have prevented this situation. Really, the only type of reasoning that could have prevented, or at least predicted, the situation described in this case is careful forethought about the full impact of the imaging tests.
This discussion brings up two interesting points. First, is it enough to merely "predict" a situation such as the one described in this case? If so, at what point does it become necessary to "prevent" a situation, rather than just "predict" it? Is it too much to ask that a woman face the idea that someone may have detected cancer in her breast but can't be sure? These issues must be weighed against the fact that at some point a new medical device will have to be tested if it is ever going to come into regular clinical use.

Second, how can we ensure that adequate forethought will precede every experiment without slowing the research process to a halt? Any given action has an uncountable number of potential effects; admittedly, most have very low chances of actually occurring. At what percentage chance of occurrence can one stop worrying about potential experimental side effects? How does this calculus change with the severity of the side effect? A related issue concerns the danger that guidelines governing research will become too detailed to be of practical value.

While it may not be common practice among the designers of noninvasive medical instruments, the type of forethought that this case begs for is certainly not crippling. In fact, it bears a close resemblance to the scientific design process used in developing such devices to begin with. The fundamental question one is really asking is, "What would happen if . . . ?", the same type of thought experiment that appears throughout the engineering design process. The only difference is that the "what" is an ethical concept rather than a scientific one.
That means that the person asking the "what if" questions must be versed in issues of ethical importance. While that may mean additional training for members of the scientific or engineering design teams, or even the addition of special ethical consultants or overseers on certain projects, there is a significant benefit to this type of ethical thought experiment, just as there is to those of a scientific nature. When asked by knowledgeable individuals, such questions will provide a great deal of insight into how to steer the project's development to avoid serious ethical problems. In these days of detailed lines of accountability and the threat of serious financial repercussions for poor ethical decisions, the extra cost of such ethical training or expertise is easily recouped through the avoidance of even one potential crisis. This case could be seen as an argument for applying the scientific model to the practice of research ethics.