This paper discusses how researchers applying the scientific method to describe, explain, and enhance the status of individuals with physical, psychological, and social vulnerabilities are encountering ethical dilemmas to which current federal regulations offer incomplete answers.
Author: Celia B. Fisher, Ph.D.
Marie Ward Doty Professor of Psychology
Director, Center for Ethics Education
Department of Psychology, Dealy Hall 117F
441 East Fordham Road
Bronx, NY 10458
Tel: (718) 817-3793; Fax: (212) 759-2009
Researchers applying the scientific method to describe, explain, and enhance the status of individuals with physical, psychological, and social vulnerabilities are encountering ethical dilemmas to which current federal regulations offer incomplete answers. In such work, scientific and ethical duties often appear to have mutually exclusive goals. Whereas scientific responsibility involves a search for truth through experimental controls, ethical duties are directed toward protecting participant welfare through means that often seem to jeopardize such controls (Fisher, 1993). When the goals of science and ethics appear to conflict, investigators studying vulnerable populations draw upon their own moral compass, the advice of colleagues, and recommendations of institutional review boards (IRBs) to make decisions about ethical procedures that have immediate and possibly long-term impact on participants, their families, and the communities they represent.
Since 1974, the federal government, through regulations requiring the establishment of the institutional review board (IRB) system, has formally recognized the inadequacy of ethical procedures which rely solely on the professional judgment of individual scientists (Benson and Roth, 1988). However, ethical evaluations drawn from the consensus of IRB members can also represent a restricted moral view. IRBs typically include ethicists, academic scholars, practitioners, and scientists who judge the ethicality of a research proposal through the application of federal and professional guidelines, abstract moral principles, and values situated within the cultures of academia, institutionalized medicine, or science. The perspectives of those who participate in research are typically given only superficial consideration through the appointment of a community member who cannot realistically represent perspectives of the diverse individuals who will be called upon to participate in various research projects conducted by members of the institution. Children and adolescents at psychosocial or physical risk, individuals from diverse economic and cultural backgrounds, and adults with cognitive deficits react differently to controlled procedures, and their perspectives and the perspectives of their family members can differ from those of well-meaning IRB decision makers.
Public reactions to past and recent revelations concerning the government-sponsored Tuskegee Syphilis Study (Jones, 1993), the human radiation experiments (ACHR, 1996), and the NIMH Violence Initiative (Leavy, 1992) have led to concerns that current federal guidelines do not adequately protect the interests of our most vulnerable citizens and that diminished public trust in human subjects research may jeopardize research participation. In response to public concern, the President has appointed the National Bioethics Advisory Commission to review the adequacy of current federal guidelines for the protection of human subjects. This paper argues that to ensure such protections are adequate, revised research regulations need to reflect a relational approach that encourages moral discourse between scientists and participants as an essential means of constructing the best scientific and ethical procedures possible within each unique research context.
Since the Nuremberg Code (1946), federal regulations (DHHS, 1991) and professional guidelines for research (e.g., American Psychological Association [APA], 1992) have primarily drawn upon the utilitarian or consequentialist meta-ethical position (Beauchamp, Faden, Wallace, and Walters, 1982) to solve ethical problems when actions that would protect the rights and welfare of research participants threaten the internal validity of an experiment. According to utilitarianism, the morally right action is the one that produces the most pleasing consequences (Mill, 1861/1957). Applied to ethics-in-science decision making, when a conflict between scientific rigor and participant welfare arises, the investigator's obligation to a small group of research participants may be superseded by her or his responsibility to produce reliable data that can potentially provide future benefits to members of society at large or to the participants' particular social group. Utilitarianism thus encourages a value structure in which potential benefits of science to society can receive higher moral priority than concrete and measurable risks to research participants.
Although consequentialism does not rule out consideration of participant values about, and idiosyncratic sensitivity to, specific types of harm and benefit, in practice those adopting this framework conceptualize risk and benefit as tangible entities with universal value subject to rational analyses by those other than the participant. Utilitarianism can thus promote an ethical orientation in which an abstract risk/benefit calculus guides moral action independent of the particular values and priorities a subject might place on the specific risks and benefits under consideration.
Equally important in philosophical circles, but less pervasive in ethics-in-science decision making, is the deontological approach in which the moral rightness of an action is evaluated without regard to the consequences and is carried out only if one would will that that action should be universal law (Kant, 1785/1958; Levine, 1986). Following deontic moral premises, an investigator would never treat a participant simply as a means to advance scientific knowledge and would only select research procedures she or he could apply across all research contexts. The Kantian tradition's inherent respect for the dignity of persons would appear to encourage scientists to incorporate participant perspectives into their ethical decision making. In practice, however, its focus on the universality of moral principles, and its indifference to particular relations and particular persons (Carroll, Schneider, and Wesley, 1985; Williams, 1981), often leads investigators and IRBs to believe they can determine which research procedures are ethical without consulting members of the population under study.
Although both utilitarianism and deontology are important philosophical resources for ethics-in-science decisions, applied in isolation from a participant's own understanding of the research context, these moral frameworks have the potential to minimize a scientist's special relationship, and subsequent moral obligations, to individual research participants, fostering a psychological distance between scientist and subject (Fisher, 1994).
Moral arguments for the duty to consider participant perspectives in ethics-in-science decision making derive from a synthesis of principle-based justice ethics and relational-based care ethics. The justice perspective emphasizes moral agency based upon principles of mutual respect, beneficence, and fairness (Kohlberg, 1984). It stresses impartiality and distance from both the scientist's own interests and her or his connectedness to participants. The ethics of care emphasizes the duty to interact with research participants on their own terms in response to their needs (Gilligan, 1982). It stresses attention to the interpersonal situation and a narrative of relationships that extends over time.
In recent years there has been growing recognition in philosophical and scientific circles that morality based on justice can and does coexist with morality based on interpersonal obligations (Baier, 1988; Dillon, 1992; Higgins, 1989; Killen, 1996; Waithe, 1989). For example, efforts have been made to integrate the two perspectives into a single moral orientation toward individual identity. Those advancing the justice perspective have traditionally taken individual identity as fundamental, viewed care as a choice, focused deliberations on how one can fulfill obligations to others without violating their autonomy, and emphasized the development of moral injunctions to protect identity. In contrast, those advancing the care perspective have traditionally taken relationships as fundamental, viewed care as an obligation, focused on how one can achieve individual freedom without violating moral obligations to others, and stressed the construction of moral injunctions to protect relationships (Clement, 1996). By integrating the two philosophical orientations, a justice-care position assumes:
A justice-care orientation conceptualizes vulnerability as a relational construct. Research vulnerability is defined in terms of a susceptibility to harm that does not rest solely upon the physical, psychological, or social characteristics that society views as disadvantageous, but upon the degree to which an individual's welfare is dependent upon the specific actions of scientists within a specific experimental context. In relational ethics, the obligation to protect the vulnerable also resides within the context of dependency and not in the charitable inclinations of the moral agent (Goodin, 1985). From this perspective, both the specific susceptibility to research risks and the specific ability of scientists to help alleviate these risks define an obligation that is not voluntary but morally binding (Goodin, 1985).
When an individual labeled by society as vulnerable is the focus of scientific inquiry, the investigator must consider the special life contexts that render this person more or less susceptible to the harms associated with recruitment procedures and participatory requirements for each particular experimental design. For example, susceptibility to coercion and exploitation may be a particular risk for those whose age, mental status, or sociopolitical standing have limited their experience in making independent choices or for whom acquiescence to authority has been a means of survival. It is not unusual for individuals with mental retardation to assume permission from a nondisabled guardian is required when they seek or are offered treatment (Ficker-Terrill and Rowitz, 1991; Ellis, 1992). For these persons, recruitment and consent procedures drawing upon institutional authority or the influence of legal guardians may increase their vulnerability to undue persuasion and involuntary participation. On the other hand, relationships between vulnerable persons and their family members, practitioners, or community leaders may be a positive life feature that investigators can draw upon to reduce susceptibility to research risk.
A relational concept of vulnerability also implies harm is not predetermined. From this perspective, protecting the vulnerable entails reducing if not eliminating the probability of threatened harms (Goodin, 1985). As a consequence, morally responsible scientists must take actions which go beyond simply protecting cognitively impaired persons from established risks associated with research participation. They must be willing to reconfigure experimental procedures to reduce or eliminate research vulnerability. This may include re-conceptualizing traditional assumptions regarding the standards by which an individual is considered competent to give informed consent and the role of guardians in consent decisions. It may also include new ethical responsibilities, including an obligation to educate prospective participants about concepts associated with the conduct of human subjects research and to inform them of the value orientations driving the research. From a relational perspective, the investigator sees such efforts not in terms of paternalism (Goodin, 1985), but as contextually defined obligations of the research contract.
A justice-care perspective accepts respect, beneficence, justice, and integrity as fundamental ethical principles that guide the moral actions of scientists. The translation of these principles into moral actions is not, however, assumed to be achieved simply through a scientist's moral reflections, but must derive from expressions of mutual accommodation among scientist, participant, and caring others integrated into concrete practices (Ricoeur, 1990; Widershoven and Smits, 1996). In addition, connectedness with, and caring for, those who participate in research need to be viewed as moral ends in their own right, rather than simply as a means to facilitate recruitment or maintain participant cooperation.
While accepting the deontic principle that research participants should not simply be used as a means to achieving research goals, relational ethics conceives personhood and autonomy as social constructions which can best be respected through mutual understanding and dialogue between scientist and subject. Respecting research participants thus involves responding to them on the basis of their own self-conceptions. A justice-care perspective proposes such ethical principles as beneficence, respect, and justice can and should guide research design and ethics-in-science practices, but that the investigator's interpretation of these principles should not be prioritized over the moral perspectives of participants and their families.
A justice-care perspective includes an evaluation of the moral rightness of ethics-in-science decisions in terms of consequences. However, relational ethics also draws attention to contextual factors that may influence how a specific moral goal may be achieved and perceived. Such factors include the recognition that scientists and participants may differ in their understanding of the rightness of the consequences of a particular form of scientific inquiry. A relational approach to ethical decision making also rejects sole reliance on a rational calculation of risk to benefits, recognizing that scientists and prospective research participants may differ in how they evaluate particular harms and goods, and whether or not they view the weighing of costs and benefits itself as a morally right action.
An emphasis on the contextual nature of ethical judgments based upon scientist-participant dialogue is not meant to imply an ethical relativism. Relational ethics does not assume basic foundational moral principles can be derived from group consensus. It sees embeddedness within a moral community composed of scientists, participants, and their families as an essential starting point but not an end point in the search for the good (MacIntyre, 1984). Thus, the exchange of views between scientist and participant is aimed at illuminating rather than eliminating the moral values of each and creating a research enterprise that can accommodate rather than subjugate these values. As discussed more fully below, participant perspectives must inform but not dictate a scientist's moral judgments. Similarly, the value orientations of scientists cannot be permitted to outweigh those of the individuals who will be the focus of scientific inquiry. When applying a relational ethic, an investigator must be prepared to abandon a research project if its implementation compromises or usurps scientific, participant, or community values.
Relational ethics draws upon features of communitarianism (Rawls, 1971). It promotes reliance on compassionate and reciprocal empathy for the feelings of others and encourages scientists and prospective participants to uphold the common good rather than individualistic notions of the good life (Prilleltensky, 1997; Sugden, 1993). Rawls (1971) proposes that just actions in a society composed of members with different levels of resources and power can be guided by imagining oneself in an "original position" behind a veil of ignorance that would conceal one's actual social status. However, as Toulmin (1981) points out, a system of justice based upon imagining a veil of ignorance may well be fair, but it will also be an ethics for relations between strangers. In relational ethics, such imaginings, if based solely upon rational and abstract reasoning abilities, are seen to result in false perceptions irretrievably embedded in the rational scientist's own subjectivity. At its very core, relationalism assumes that the ability to understand the perspective of individuals who differ in life experiences, world views, needs, power, social status, culture, and material and personal resources requires a process of bidirectional teaching and learning. This dialectic is operationalized in investigator-participant co-learning procedures wherein the moral perspective of prospective participants is viewed as an essential element of ethics-in-science decision making.
A relational ethic based upon a justice-care perspective (Farr and Seaver, 1974; Sullivan and Dieker, 1973; Veatch, 1987; Wilson and Donnerstein, 1976), supports several moral arguments for including the views of prospective research participants, their families, and their communities in ethics-in-science decision making (Fisher and Fyrberg, 1994; Hillerbrand, 1987; LaFromboise and Foster, 1989; Ponterotto and Casas, 1990). First, formulating regulations and ethical judgments solely on the bases of opinions expressed by experts in the scholarly community and IRB members risks treating subjects as "research material" rather than as moral agents with the right to judge the ethicality of investigative procedures in which they participate. Second, failure to consider prospective participants' points of view encourages singular reliance on scientific inference or professional logic that can lead to research procedures causing significant participant distress. The University of California IRB approval of consent procedures that failed to disclose the full nature of experimental risk involving medication withdrawal from participants with recent-onset schizophrenia (OPRR, 1994) is an unfortunate example of what happens when ethics-in-science decisions are not based upon honest and open dialogue among scientists, prospective participants, and families.
Third, failure to draw upon participant perspectives can also lead to the rejection of potentially worthwhile scientific procedures that participants and their families would perceive as benign and/or worthwhile. For example, in the everyday practice of science, investigators often find that guidelines designed to protect vulnerable children from experimental psychopharmacological treatments inadvertently create institutional obstacles that limit participants' autonomy and access to research protocols that may advance scientific understanding and treatment of their disorders (Jensen, Hoagwood, and Fisher, 1996). Fourth, consistent with the community consultation model advanced by ethicists and investigators concerned with ethical practices and policies for clinical research on HIV/AIDS and other life-threatening and potentially socially stigmatizing disorders, engaging prospective participants as partners in the design and implementation of research:
For the past two decades, ethical decisions regarding research with human subjects have been guided by the three fundamental principles set forth by the Belmont Report (DHEW, 1978): respect for persons, beneficence, and justice. Although few dispute the importance of these principles, there is no consensus on how to prioritize one's obligations when specific ethical problems place the principles in conflict. From a relational standpoint, achieving such consensus might actually decrease the adequacy of moral procedures. Consensus among IRBs, bioethicists, and investigators risks promoting universal application of a presumed hierarchy of values across contexts differing in their moral requirements that would reflect the values of the scientific and scholarly communities without consideration of participant values. Co-learning approaches can help situate decisions surrounding conflicting ethical principles within specific research contexts and the perspectives of the specific population considered for investigation.
A major assumption of relational ethics is that co-learning enhances the moral development of scientists and participants through a better understanding of the reciprocal relationship between the participant's expectations and the researcher's obligations. Relational ethics views scientist and participant alike as moral agents joined in partnership to construct research goals and procedures that produce knowledge carrying social value and scientific validity. In viewing autonomy as a social construction, it proposes that respect for personhood must be rooted in scientist-participant dialogues aimed at discovering shared and unshared values in a process of mutual influencing through which fair and caring ethical procedures are derived.
A relational ethic seeks to develop methods of ethics-in-science decision making sensitive to both the justice-based dimension of equality and inequality and the care-based dimension of attachment and detachment (Clement, 1996). It assumes both scientist and participant come to the research enterprise as experts: The researcher brings expertise about the scientific method and the extant empirical knowledge base, and the prospective participant brings expertise about the fears, hopes, and wishes the community holds toward the prospect of research.
A cornerstone of relational ethics is that the roles of teacher and student are assumed by both investigator and participant throughout the process of exchanging views. For example, to begin a dialogue by asking prospective participants open-ended questions concerning research ethics is sometimes problematic, since it asks individuals to provide spontaneous and decontextualized responses to moral questions which require informed deliberation on issues of scientific concern that most participants have not previously considered. Investigators can use co-learning procedures to share with prospective participants their views on how and why it is important to apply the scientific method to questions of societal import and on the debates underlying areas of current ethical concern. In turn, the prospective participants, their families, or community representatives can apply their moral perspectives to critique the scientific and social value of a proposed study and share with investigators the value orientations guiding their reactions to the planned procedures.
Through the uncovering of common and unshared dimensions of ethical attitudes toward the integrity of scientific research, co-learning joins scientist, prospective participants, and community members in partnership to discover previously unidentified areas of moral concern and to construct a scientific enterprise based upon mutual respect, accommodation, and trust. Researchers employing co-learning welcome differing points of view as checks against the risk of confusing scientific self-interest with social beneficence. They forge ongoing partnerships with prospective participants, gaining community input at the design, implementation, interpretation, and dissemination stages of research (Higgins-D'Alessandro, Fisher, and Hamilton, 1998).
A foundational assumption of relational ethics is that co-learning is an ongoing process involving scientist and community members in moral discourse throughout each step of human subjects research, including the research design, informed consent, project implementation, data interpretation, and knowledge dissemination phases. It therefore requires greater attention to debriefing procedures, an ethics-in-science practice that has received scant attention outside of ethical discourse on deceptive research practices. Debriefing has been viewed traditionally as a unidirectional activity that allows the scientist to correct any misconception or supply information, purposely withheld, in a sensitive and educational manner so that the participant can understand and accept the reasons offered, and be satisfied with the experience (Keith-Spiegel and Koocher, 1985). As a consequence, in practice, especially in non-treatment research, debriefing is typically conducted in a cursory fashion, devoid of an exchange of views, sometimes consisting simply of a promise (often unfulfilled) to send participants a summary of the findings when the research is completed.
From a relational standpoint, debriefing is a critical phase during which the congruence between the participant's expectations and the scientist's obligations, presumably obtained during informed consent, can truly be assessed (Scott-Jones and Rosnow, in press). Debriefing thus needs to be constructed as a bi-directional activity in which investigator and participant openly share their views on: (a) the nature of and reaction to the research experience; (b) the adequacy of information provided during informed consent; and (c) the scientific validity and social value of the data collected. From this exchange, participants become more educated critics and consumers of scientific knowledge and investigators become more educated about participant perspectives that can improve future ethical procedures, research design, and interpretation and communication of research results.
The principle of respect has generated numerous ethical guidelines for protecting participant privacy through the maintenance of confidentiality. Intricate procedures have been developed for keeping data sheets free of identifying information and for keeping records secure. Maintaining confidentiality presents few ethical challenges when science is characterized by laboratory studies devoid of information about individual differences or when individuals with previously identified disorders are the focus of study. The ethical obligations are more complex when scientists study the probability of impairment in populations judged to be at risk for disorders or health-compromising behaviors. Such studies have the potential to tap previously unidentified sources of psychopathology, developmental delay, cognitive deficits, abuse, addictions, criminal activities, and other socially stigmatizing characteristics and behaviors (Fisher, 1993, 1994; Fisher and Rosendahl, 1990).
Does an investigator have a moral duty to help a research participant if a previously unidentified problem is revealed during the course of research? Does this moral obligation override the duty to protect participant confidentiality? The scientific community has traditionally been reluctant to act upon information about individuals uncovered during the course of nonintervention research out of a healthy skepticism that inferences drawn from tests designed to evaluate differences between groups of individuals may not have diagnostic validity when applied to a particular research participant (Fisher and Brennan, 1992). A second source of reluctance is scientists' awareness that sharing information with someone who can help the research participant can sometimes create stressful or harmful consequences for the participant, especially if such individuals react to information with punitive measures (Fisher, 1993).
A third element of caution against acting when research-derived information indicates that participants are in jeopardy is rooted in the scientist-citizen dilemma (Veatch, 1987). Acting to help a research participant may threaten the internal validity of an experiment (especially in longitudinal designs) or jeopardize the trust and participation of others involved in the research (Fisher and Brennan, 1992; Fisher, Higgins, et al., 1996; Fisher, Hoagwood, and Jensen, 1996). Applying the rule-utilitarian framework, when a conflict emerges between participant welfare and scientific rigor, investigators have often valued the production of well-controlled data that can benefit society over their duty to facilitate or procure services for individual participants.
The study of risk in adolescent populations highlights ethical issues surrounding confidentiality, both because of the potential dangers the risks pose to each particular teenager's well-being and because of this age group's ambiguous status with respect to decisional capacities (Holder, 1981; Koocher and Keith-Spiegel, 1990; Melton, Koocher, and Saks, 1993). Research on risk-related characteristics or behavior can reveal that a particular adolescent participant has suicidal ideation, is engaging in health-compromising behaviors, is involved in illegal and/or harmful behaviors, or is living in abusive circumstances. An implicit assumption underlying the failure to assist adolescents who indicate potential problems during the course of risk research is that teenagers value autonomy and would feel betrayed by an experimenter disclosing confidential information to protect them. Blind faith in this assumption has prevented scientists from asking two critical questions: What moral role does an adolescent research participant expect of an investigator, and what are the consequences of failing to fulfill this role?
Applying a co-learning procedure, my colleagues and I (Fisher, Higgins, et al., 1996) asked these questions of high school students living in a low-income urban environment. Students (who self-identified as predominantly Hispanic) were provided with a brief overview of the scientific method and scientists' concerns regarding confidentiality. They were then asked to give opinions concerning different ethical strategies an investigator could follow if during the course of research an adolescent participant indicated she or he was in danger or engaged in high-risk behaviors. The investigator could: (1) keep the information confidential and take no action; (2) talk to the teenager and assist her or him in finding a referral source; or (3) tell a parent or another concerned adult. To avoid imposing our own evaluations of risk severity, we asked the adolescents to rate their perceptions of how problematic they considered the following: use of alcohol, illegal drugs, and cigarettes; physical and sexual abuse; suicidal ideation; sexually transmitted diseases; truancy; vandalism; theft; violence; and shyness.
"Those of us who study lives are aware that we influence the lives we examine, perhaps very little, perhaps a great deal" (Josselson, 1996, p. 80).
Perhaps not surprisingly, adolescents of all ages viewed self-referrals most favorably. However, probably the most important finding of this co-learning approach was that teenagers often viewed the maintenance of confidentiality negatively, especially in situations in which an investigator learns that a research participant is a victim of, or engaged in, behaviors adolescents themselves perceive as problematic. Students' responses thus indicated that they saw the investigator as having a moral role in relationship to their problems. The advocacy role that teenagers assumed was a scientist's obligation was thus in direct contradiction to the role of impartial observer assumed by the majority of investigators currently conducting adolescent risk research.
A process of co-learning can illuminate the impact of both action and inaction on the life trajectories of those studied. The responses of adolescents alerted us to the disconcerting probability that even when teenagers have been promised confidentiality under traditional informed consent procedures, they nonetheless expect to be helped when they tell an adult interviewer they are a victim of violence or involved in high-risk behaviors. An investigator's failure to help a teenager may have an iatrogenic effect on how the teenager conceptualizes her or his own behaviors and the fiduciary responsibility of adults. Adolescents may interpret the scientist's lack of action as an indication that their problem is unimportant, that appropriate services are unavailable, or that knowledgeable adults cannot be depended upon to help children in need (Fisher, 1993, 1994; Fisher, Higgins, et al., 1996). Thus, the preservation of confidentiality in adolescent risk research in particular, and research with other vulnerable populations in general, assumed by many scientists to be a moral good, may in some cases actually result in harm.
In working with vulnerable populations, ethics-in-science decisions must reflect a balance between the need for communion between scientist and participant and the obligation of individual moral agency. Relational ethics' emphasis on autonomous mutual accommodation guards against the temptation to use the co-learning process to commit the "is to ought" fallacy (Sidgwick, 1902). The fiduciary nature of the scientist-participant relationship obliges the investigator to take ultimate responsibility for decisions that impact the rights and welfare of research participants. Accordingly, prospective participant perspectives must inform, but not dictate, the scientist's ethical decisions (Fisher and Fyrberg, 1994). In developing ethical procedures for human subjects research, scientists must assume the responsibility to apprehend and respect the views of research participants without relinquishing their obligation to apply their own knowledge, training, and values to the pursuit of the moral act.
For example, although they rated sexually transmitted diseases (STDs) as a serious problem, most teenagers we interviewed did not believe an investigator should report STDs to concerned adults. While providing teenagers with a referral to a health clinic may respect their autonomy, given the life-threatening nature of some of these diseases, an investigator has to evaluate teenagers' preferences against the ability of those in this age group to understand the personal implications of the disease, adolescents' ability to obtain appropriate assistance in this circumstance, and the risk to their health if they do not follow through on the referral and the problem remains unreported.
Constructing ethical procedures based upon mutual accommodation. How does a relational ethic address conflicts between the principles of respect and beneficence? Instead of simply complying with or overriding the adolescents' preferences, a relational-based approach calls for the development of ethical procedures that can accommodate (a) the scientist's fiduciary responsibility to protect participant autonomy and welfare and produce reliable information according to accepted principles of research practice; (b) the adolescents' expectations for confidentiality and concern; and (c) the participants' and guardians' right to know the exact nature of the investigator's confidentiality and reporting policy.
In this specific situation, an understanding of adolescent and guardian expectations, combined with a recognition of the investigator's fiduciary responsibility, leads to the following guidelines for confidentiality and disclosure procedures in adolescent risk research:
In the construction of its professional authority, the science establishment has endorsed a set of ethical codes to police itself and allow others to police its members. These standards can be said to largely reflect Eurocentric, rational-deductive, libertarian conceptions of the good (Prilleltensky, 1997) which, preserved in federal regulations and professional codes, become moral premises not amenable to challenge. The establishment's definition of the good is embodied in assumptions guiding scientific conduct (Beauchamp, et al., 1992; Freedman, 1975; Rosenwald, 1996; Veatch, 1987), among them:
Relational ethics poses several interrelated questions about these traditional value premises: Do the values embodied in current professional codes and federal regulations reflect the moral visions of those asked to participate in research? Do scientists and participant groups have different conceptions of the good life and therefore different evaluations of the ethical procedures aimed at producing knowledge to achieve the good? Do standards of competency for consent to research decisions place an unjust burden on those with identified mental impairments? Would some individuals who consent to research participation on the basis of information describing the immediate purpose and nature of a study, decline to consent if they knew the value orientations driving the scientific and ethical procedures? Should scientists be required to communicate their conception of the good and have their values exposed to participant evaluation? These questions take on ethical urgency when applied to research with persons with cognitive deficits, individuals not old enough to have the legal right to consent to research, and members of historically oppressed populations.
Advocates for those who, because of age or impairment, have traditionally been denied the right to consent to research have begun to challenge traditional standards for judging the moral agency of those legally defined as incapable of consent. For example, with the advent of de-institutionalization and the introduction of the principle of normalization into human services (Lindsey and Luckasson, 1991; Wolfensberger, 1972), regulations for Intermediate Care Facilities (Conditions of Participation, 1988) and recent court decisions guaranteeing the right of persons with mental retardation to make their own treatment decisions (Rennie v. Klein, 1982; Rogers v. Okin, 1982), a diagnosis of mental retardation is no longer accepted as a presumption of incompetence to consent to or refuse treatment (Dinnerstein, 1994). Similarly, state laws have increasingly granted adolescents the right to make decisions concerning treatment for venereal disease, drug abuse, or emotional disorders without guardian permission (Fisher, Hatashika-Wong, and Isman, 1999; Holder, 1981). However, federal guidelines regulating the rights of these individuals in research have been vague in the case of teenagers, and not formally articulated in the case of individuals with cognitive deficits (Fisher, Hoagwood, and Jensen, 1996; Bonnie, 1997). In the absence of clear guidelines, individuals who have not reached the age of legal maturity, or who because of disability do not have the legal right to make autonomous decisions, have lost their claims to the moral authority to make decisions about research participation.
Advocates for the rights of historically oppressed groups are increasingly drawing attention to the possibility that established Eurocentric views of science may not be universal. Some argue that the value placed on the control and manipulation of variables may reflect a materialistic, individualistic, power dynamic inconsistent with the values of spirit, collectivity, and harmony inherent in many ethnic minority cultures (Greenfield, 1994; Markus and Kitayama, 1991; Parham, 1993; Triandis, 1990). Ethnic minority scholars are also espousing widely held minority community beliefs that their members have been "raped" by white researchers who engage in research without understanding or caring for those they study, who use minority members as bargaining chips for the receipt of large federal grants, and who treat them as the "human equivalent of lab rats" (Mio and Iwamasa, 1993; Parham, 1990; Ponterotto, 1993).
Science has traditionally attached ethical significance to methods but not topics (Rosenwald, 1996). This stance reflects two assumptions inherent in a scientistic philosophy: (a) the pursuit of knowledge is good regardless of its social and ethical implications, and (b) consideration for the practical consequences of research will inhibit scientific progress and academic freedom (Scarr, 1988). From this perspective, statements in the final paragraph of a journal article noting the limited generalizability of one's work to social application provide a sufficient ethical safeguard and/or absolve the investigator of further moral responsibility for society's (mis)use of the products of her or his work (Fisher, et al., 1997; Prilleltensky, 1997).
From a relational perspective, research is embedded in valuational contexts that make it impossible to claim the existence of value-free information (Prilleltensky, 1997). Thus, a counterpoint to the scientistic view is that all research is value-laden and sociopolitical in nature (Kurtines, Azmitia, and Gewirtz, 1992). This is particularly true when individuals with cognitive deficits and minority group members are the focus of study (Sampson, 1993; Zuckerman, 1990). In a society in which persons with cognitive impairments see their rights diminished through protectionist laws and members of historically oppressed communities have their rights degraded through discriminatory laws and practices, scientists must recognize that any research on these and other politically vulnerable communities can directly impact public attitudes and policies directed toward research participants and the populations they represent (Fisher, et al., 1997). That policy makers and nonscientist citizens "are not likely to make the distinction between scientific theory and what seems to be its political implications, or between generalizations based on population statistics and their applications to individual members of a given group" (Zuckerman, 1990, p. 1301), argues for the importance of integrating participant perspectives into ethics-in-science decision making.
One community concern receiving little attention in ethics-in-science discourse is whether group stigmatization should be considered in determining risks to participants. Failure to give ethical attention to group depreciation as a research risk is rooted in the scientific ethos which considers research morally permissible if the risks of the procedures are "reasonable" in relation to the benefits hoped for (Beauchamp, et al., 1982). The "reasonableness" of risk has typically been determined by members of the majority establishment who, by definition of their intellectual, age, or racial caste status, may overestimate the value of research and underestimate the risks of community stigmatization. According to Cassell (1982), if the harm/benefit calculus cannot be accurately predicted, it should not be applied. This may be especially true for the ethical evaluation of research on minority persons or those with diminished legal rights, when they or their families do not have a voice in evaluating the "reasonableness" of collective risks and benefits.
Many researchers have yet to recognize that racism and other prejudices are not just abstract theoretical ideas, but rather real conditions of discrimination and oppression in the lives of ethnic minority individuals and those labeled as having cognitive deficiencies (Sue, 1993). Accordingly, failure to consider group stigmatization as a potential cost of research participation may be asking politically disadvantaged members of society to unjustly bear research risks. A relational ethic calls for researchers adhering to Eurocentric, scientistic philosophies to question their individualistic and rational-deductive values and consider the diverse world views held by members of ethnic minority and cognitively vulnerable communities.
In moving from a "discourse of power of the majority" to a new form of dialectics between investigators and communities (Ponterotto and Casas, 1990; Ivey, 1987), the science establishment must be prepared to ask questions that may challenge foundational premises of scientism: Should government and IRBs provide assurance of protection from group stigmatization and personal harm to physically, cognitively and politically vulnerable participants? Should the risk of group stigmatization be communicated to participants and their families during informed consent procedures? Is collective stigmatization a moral prohibition?
Secular scientific thinking holds an instrumental view of reality. It values self-directed rational planning, self-determination, and autonomy. The notion of the good is constantly filtered through these values. In ethics-in-science decision making, the scholarly community grants moral priority to the ability to weigh the costs and benefits of an experiment, but does not challenge whether a society based upon such calculations is worthwhile. A relational ethic emphasizes the importance of considering the authenticity of the cost-benefit analysis to the moral lives of prospective research participants. For example, might some adults with mental retardation prefer to avoid unpleasant side effects associated with experimental treatments for behavioral disorders, rather than take the chance that their behavioral problems might or might not be reduced by research participation? Are some individuals from historically oppressed populations unwilling to engage in any research which may risk additional group stigmatization?
The acceptance of the balance of risks and benefits as a primary means of ethical justification implies that beneficence, the moral obligation to protect the welfare of research participants, does not take priority over other moral values in ethical decisions for human subjects research. Some have questioned whether efforts to ameliorate potential harm to vulnerable populations are sufficient ethical justification for human experimentation. In human subjects research it is often considered morally sufficient to conduct an experiment if the participants are in no worse condition at the end than they were at the beginning of a study. This emphasis on the principle of nonmaleficence - to do no harm - has led to the acceptance of an "ethical minimalism" (Rosenwald, 1996) in which research participants are rarely direct beneficiaries of the knowledge they helped produce. Such research is said to be valuable if it is conducted according to accepted scientific standards of reliability and control and assumed to have social value. When conducting research with historically stigmatized populations, investigators need to be sensitive to how evaluations of social benefits are culturally determined and pose the question: Within whose historical tradition is this knowledge valued? (Bermant, 1982).
Justice in its narrower sense is understood to be what is fair and equal, and the just person is the person who takes only her or his proper share. When research offers no direct benefits to a participant or her or his community, how do we determine what is the scientist's proper share? Casas (1990) criticizes current ethical guidelines for human subjects research for their emphasis on avoidance of harm rather than promotion of benefit to the community under investigation. Casas argues that an emphasis on harm avoidance is an insufficient ethical justification for conducting research on vulnerable communities because it shifts the ethical burden away from the investigator, who is obligated to demonstrate that the research will result in some good, and toward the participants, who must demonstrate that they may be harmed. Casas' comments raise the provocative possibility that the cost/benefit calculus, a traditionally cherished means of evaluating ethical actions, may not be an acceptable method of moral analysis for individuals holding values outside the Eurocentric and scientistic conceptions of the good.
The absence of participant input on the risks and benefits of research inducements can also lead to unfair practices. When applied to studies on impoverished, institutionalized, or otherwise vulnerable populations, the decision to provide inducements creates a tension between compensating individuals fairly for their time and coercing them to assume extraordinary burdens because they need the income (Levine, 1986). Unfortunately, little consensus exists about what defines due and undue incentives for research participation (Macklin, 1981).
To determine a just exchange of research goods, investigators can consult with prospective participants and their advocates to determine the market value of the time, skill, and effort required for participation within the context of the nonmonetary goods they may receive from research participation (e.g., individual or community benefits of research-derived knowledge). Inducements based upon this information can be set at levels sufficient to attract the desirable number and diversity of research participants (Levine, 1986). Such an approach reflects the position that economic justice belongs to the domain of obligation rather than charity (Goodin, 1985).
"Concepts of racial inferiority form what Horace Mann Bond called "a crazy-quilt world of unreality" in a society that proclaimed equality, opportunity, and democracy as goals while it "brutalized, degraded, and dehumanized " African Americans" by every instrument of the culture" (Tyack, 1995, p. 6).
Racism in American society has a long history marked by social and political constructions of differences governed by the political and social interests of the ruling racial caste (Miles, 1989). Race-based research can and has been used to justify segregation, political subordination, and hostile and demeaning stereotypes (Laosa, 1984; Tyack, 1995). To many members of racial and ethnic minority groups, federally funded research represents another arm of a powerful racial caste system.
Although the recent federal regulation that requires justification for failing to include women and minorities in research is laudable (DHHS, 1994), this policy does not address, and may even perpetuate, the questionable scientific validity and ethicality of classifying humans into different "races" and the practices of power and subordination that such classifications represent in the United States (Tyack, 1995). In contemporary science, terms such as "race" and "ethnicity" are used categorically with little scientific basis outside of historical folk beliefs based upon pre-colonial era thinking about the inherent superiority and inferiority of populations along genetic lines (Chan and Hune, 1995; Essed, 1991; Fisher, et al., 1997; Stanfield, 1991). Use of racial labels to categorize research participants enables investigators to leave monolithic racial stereotypes unquestioned and avoid examining the personal significance of these terms for research participants, scientists, and members of society (Cocking, 1994; Fisher, et al., 1997; Oboler, 1995; Ogbu, 1994; Stanfield, 1993). Socially constructed racial labels can strip participants of their personal identity when individuals are studied only in terms of racial or ethnic categorizations (Heath, 1993). In their rush to label ethnic minority participants, researchers apply categories that may not reflect how individuals see themselves.
Funding for research on ethnic minority populations is often driven by economic and political concerns (e.g., urban crime, welfare dependency) framed within the cultural lens of nonminority political leaders. Research designed to address minority "problems" may be viewed very differently by white researchers and the minority communities they wish to study. Desegregation policies (Tyack, 1995), the Bell Curve debate and associated IQ-based tracking movements in American education (Herrnstein and Murray, 1997; Jensen, 1991; Laosa, 1984), the Tuskegee syphilis study and government radiation experiments which misinformed research participants about information directly relevant to their health (Jones, 1993; ACHR, 1996), the NIH-initiated studies on the biological bases of violence (Leavy, 1992), and the California Adolescent Family Life Program's study of sexual abuse in African-American and Latin-American adolescent mothers (Fisher, et al., 1997) are examples of sociopolitically driven experimentation on racial/ethnic minorities that have undermined trust in scientists as guardians of ethical treatment when prospective participants are minorities (Fisher, et al., 1997).
Some minority scholars have expressed the view that white researchers do not understand the sociopolitical nature of research involving questions of oppression, discrimination, prejudice, racism, and dominant-subordinate relations (Sue, 1993). They argue that researchers seeking to study minority communities, including investigators who are themselves members of the ethnic group(s) to be studied, should routinely seek the advice of community leaders (J.F. Jackson, M.H. Bennett, J. Dent, H. Fairchild, R. Jones, and P. Rhymer-Todman, personal communication, January 21, 1993). In response to these concerns, social scientists investigating high-risk behaviors in ethnic minority youth have formed community advisory task forces comprised of ethnic minority scholars, practitioners, and community members charged with assisting in the development of culture-fair research procedures and adequate informed consent and debriefing procedures (Fisher, Hoagwood, and Jensen, 1996).
Relational ethics requires investigators to guard against another form of paternalism: The unwarranted assumption that opinions of minority scholars and community leaders reflect or override those of the less educated and more vulnerable community members who may be the target of investigation. In response to concerns regarding the controversial NIH Violence Initiative (Wheeler, 1992), a panel of African-American leaders was appointed to review the scientific adequacy and potential for group stigmatization and harm that would result from government-sponsored research on pharmacological approaches to stemming the tide of urban violence. However, absent from the dialogue was the voice of African-American women and men living in impoverished ghetto communities, whose sons, based on current statistics, have a devastatingly high probability of entering the juvenile justice system before they reach adulthood (Wordes, Bynum, and Corley, 1994).
Federal guidelines that encourage the inclusion of guardians of prospective participants might have situated the ethical issues raised by the government initiative within the real-world concerns and needs of those who would be most directly impacted. For example, how would these individuals have weighed the risk of group stigmatization against the chance that experimental treatment might help them protect their sons from the sobering picture of adolescent risk characterizing their communities? How might an understanding about their fears, hopes, and dreams for their children have influenced the research plans and goals supported by the initiative? How might an honest dialogue between scientists and the parents of prospective participants have shaped the recruitment procedures, experimental design, and dissemination plans in ways that might impact positively on the reactions of those later recruited for participation in the studies?
From a relational perspective, investigators conducting multicultural research need to ensure that scientific and ethical procedures are derived from dialogue among scientists, community leaders, and representatives of the ethnic minority individuals who will directly participate in the research. Moreover, investigators need to ensure that discussions are bidirectional, and that ethics-in-science decision making derived from such discourse is based upon respect and mutual accommodation, rather than compromise and coercion.
From a relational perspective, a scientist's identity is in part defined by the participants studied. When members of ethnic minority groups are the focus of scientific inquiry, investigators should approach all research projects with the assumption that racial/ethnic bias is inherently present (Atkinson, 1993). A relatively ignored basis of unintentional racism is failure of white investigators to consider the impact of their own racial identity on what research problems they choose to examine and the research methodologies they select (Ponterotto, 1993). According to Helms (1993), an inherent and sometimes unconscious facet of white racial identity is that white members of society are born the benefactors and beneficiaries of racism. Their attempt to deny, repress, or distort this fact can lead to research supporting racist ideologies. From this perspective, racism in research can only truly be overcome after white researchers attempt to become aware of their role and status in a racist society and work to develop non-racist definitions of whiteness. In relational ethics, this goal can only be achieved through honest, caring, and ongoing engagement of minority members in dialogue on value assumptions driving race-relevant research.
Helms' model of white racial identity includes six stages of increasingly complex racial conceptualizations. In the first stage, "contact," a white researcher is considered naïve to the sociopolitical implications of race in this country and erroneously assumes that data from research on predominantly white samples pertain to people of all races. Researchers operating at this level may focus their investigations on social characteristics such as income, education, and employment status rather than factors associated with minority status (e.g., discrimination) on the unsupported assumption that racial group differences disappear when ethnic groups are of similar demographic backgrounds (Fisher, et al., 1997; Slonim-Nevo, 1992).
In Helms' second stage, "disintegration," an investigator becomes aware of race-related moral dilemmas and becomes ambivalent about the inclusion of racial/ethnic minorities in research. This may lead to unrealistic expectations for standards in research excellence applied only to minority group investigations, resulting in a paucity of studies directly relevant to the concerns of ethnic-minority communities. For example, members of grant review panels operating at this level may give lower priority scores to research on newly immigrated, lower-income, Spanish-speaking populations if the proposed study does not include comparison groups defined by various combinations of individuals of different immigration histories, income levels, and language orientations. Such decisions can undermine research on ethnic minorities when there is a lack of sufficient numbers of individuals representing each of these groupings, when individuals who meet certain group criteria are non-representative of Spanish-speaking residents of the United States, or when the rationale for inclusion is based upon empirically unsupported assumptions that these factors comprise independent influences on behavior.
In attempts to deal with the personal disorientation emerging in the second stage, Helms describes a third level of white racial identity development, "reintegration." In this stage, white researchers may seek re-equilibration by idealizing white culture as a standard for behavioral norms. This can lead to the assumption that ethnic minority research is only valuable when whites are used as a control group, leading to comparative methodologies, which in turn result in deficit-oriented approaches to understanding ethnic-minority behaviors and mental health issues (e.g., Banks, 1993; Graham, 1992; McAdoo, 1993).
At Helms' next level, "pseudo-independence," white researchers substitute the ethnocentrism of the earlier stages for a liberalism that seeks to explain away racial-group differences in terms of cultural disadvantage, rather than looking equally at both minority and white behaviors. This can lead to research supporting the paternalistic view that ethnic minorities lack the ability or fortitude to play a role in alleviating adverse conditions impacting their lives (Parham and McDavis, 1987). Research influenced by this level of white racial identity development may also include assessments of acculturation (adaptation to white social values) as an indicator of psychological adjustment, when in fact some newly immigrated participants may experience the transfer of culture as a source of intrapsychic and intrafamilial stress (e.g., Cooper, 1994; Gil, Vega, and Dimas, 1994; Szapocznik and Kurtines, 1993), and traditional values or a bicultural orientation may in fact serve as buffers against psychological distress (Berry, 1980; Bettes, et al., 1990; LaFromboise, 1988). In the absence of information about what elements of majority culture are harmonious with the basic values and characteristics of specific ethnic communities, white researchers operating at this level of racial identity development risk legitimizing social prejudices into presumably value-free "adaptive" and "maladaptive" categories of racial behavior (Fisher, et al., 1997; Takanishi, 1994; Tharp, 1994).
According to Helms, those scientists attaining the fifth level of white racial identity, "immersion-emersion," attempt to re-educate themselves and others by incorporating an understanding of white culture and racist sociopolitical history in studies on both minority and white behaviors. This can include scientific attention to the impact of racial discrimination in employment, housing, educational, and legal institutions as factors influencing family socialization patterns and physical and psychological well-being (Boykin and Toms, 1985; Fisher, et al., 1997; Gaines and Reed, 1995; Johnston, O'Malley, and Bachman, 1993; Sue, 1991). Such endeavors will fail to provide adequate explanation of factors influencing ethnic-minority well-being if they do not incorporate the perceptions and understanding that minorities have of their own social realities, including perspectives of their immigration and life in the United States.
In Helms' final stage, "autonomy," white scientists, willing to abandon the benefits racism has provided them, recognize the implicit cultural assumptions in their work and the need not to impose these assumptions on other racial groups. From a relational perspective, researchers cannot develop a mature white racial identity without giving ethnic minority members a voice in the scientific enterprise designed to determine their identity and subjectivity (Fisher, et al., 1997; Sampson, 1993). Incorporation of ethnic minority perspectives in white researchers' exploration of their own racial biases may challenge the extent to which their world view and conception of the good is sufficient or even appropriate for studying racially diverse populations.
The scholarly and legal establishments have traditionally defined partners in the moral community as "rational" persons with whom one can have a shared understanding about what constitutes a moral action in a given situation. The "rational person" orientation has elevated certain levels of abstract thinking to standards by which moral agency is judged. In the scientific community, adaptation of the utilitarian philosophy has led to ascribing what might be considered exalted status to the ability to weigh the costs and benefits of research. For adults with cognitive impairments who may not make decisions based upon rational calculation, valuation of cost-benefit analysis as a standard of moral agency can deprive them of liberty of action and consensus making, considered to be the rights of personhood.
The ability to rationally manipulate the costs and benefits of research and arrive at a "reasonable" outcome of choice is the most cognitively complex of several psycho-legal standards of consent capacity (Appelbaum and Roth, 1982). The ability to respond to requests to participate in research can also be evaluated at levels requiring less abstract reasoning skills including: (1) expressing a choice concerning participation; (2) demonstrating a factual understanding of the risks, benefits, and alternatives associated with a research project; or (3) indicating the ability to appreciate the implications of the above factors to one's own circumstance and the voluntary nature of participation (Appelbaum and Roth, 1982). Holding persons to a standard which requires the calculation of costs and benefits poses legal and ethical problems because it is difficult to demonstrate that a person's preference is directly related to the rationale she or he may give, and rejection of an individual's rationale can justify widespread substitute decision making for those with cognitive impairments (Roth, Meisel, and Lidz, 1977).
All persons with mental disabilities are unique individuals. Those in the mild and moderate classifications of mental retardation and those with non-acute psychiatric disorders can often speak intelligibly, comprehend the speech of others, and reason, and many have more in common with those with typical mental abilities than with those classified with severe or profound mental retardation or acute psychosis. However, many have characteristics, educational backgrounds, and social experiences that can negatively impact their ability to make decisions affecting their lives. These can include deficits in basic knowledge, difficulty with abstract reasoning and in foreseeing the long-term consequences of a present act, denial of disability, reduced ability to make and/or communicate a reasoned choice, limited experience in making independent choices, or difficulty in delaying gratification (Ellis, 1992; Evans, 1981; Hayden, et al., 1992; Hill and Lakin, 1986; Wikler, 1996; Zetlin and Turner, 1984). The ethical challenge for scientists is to balance the obligation to respect the right of those with cognitive deficits to be treated as members of the moral community, with the need to ensure that ill-informed or incompetent decisions will not place their welfare in jeopardy (Ellis, 1992; Grisso, 1986; Lidz, et al., 1984).
Since the decision-making styles of those without identified mental disability are rarely evaluated, some have warned that adults with intellectual impairments may be unfairly held to a higher standard of competency than is commonly applied to the general population (Lidz, Meisel, et al., 1984; Morris, et al., 1993). Defining consent competency simply in terms of higher-level abstract reasoning skills does not do justice to the complexity of human judgment as situated in a person's experiences, emotions, needs, and patterns of practical life (Merleau-Ponty, 1945; Widdershoven and Smits, 1996). Scientists recognize the role of affective and practical factors in the decision making of those without mental impairments, and respect their "non-rational" preferences to decline research participation.
Consider, for example, persons with diagnosed disorders not considered mentally incapacitating who are invited to participate in a study to determine the efficacy of a psychopharmacological agent that may potentially reduce symptoms of their disorder. They have the right to refuse to participate if they do not want to subject themselves to the experimental medication's side effects (e.g., nausea, dry mouth, headaches), despite the fact that in objective terms such side effects pose "minimal" risk with the potential benefits of symptom reduction outweighing the temporary physical discomfort. This is not the case with individuals with mental deficits who, by being presumed incompetent, must demonstrate a capacity to make rational decisions, especially when their wishes are inconsistent with conventional wisdom (Drane, 1985; Grisso, 1986; Lidz, et al., 1984; Roth, et al., 1977).
What moral claims do adults with mental retardation have on science? From a relational perspective, their claims are no different from those of persons with typical intelligence. They have the right to assume that scientists are obligated to communicate with them honestly, to develop procedures that do them no harm, to act to protect their right to autonomy and privacy, and to treat them fairly. The special cognitive status of adults with mental deficits does mean that procedures to ensure that these claims are met require special effort. Such efforts may include the use of proxy consent if: (a) standards of consent capacity are applied equally to those with and without mental retardation; (b) guardian consent is used to protect the personal rights and welfare of the prospective participant rather than the interests of science; and (c) the adult with mental impairment sees proxy oversight as a legitimate and/or desirable means of protecting her or his interests.
Relational ethics recognizes that in some research contexts denying cognitively impaired individuals, especially those in institutional settings, the protection of guardian consent may result in unfair outcomes. Their limited abstract reasoning skills, restricted knowledge base, and lack of experience and opportunity to make autonomous decisions, may in some contexts make the cognitively impaired particularly vulnerable to coercion and exploitation. Despite these vulnerabilities, it is difficult to justify current ethical procedures that do not require a person's assent along with guardian permission or that allow proxy consent to override an individual's objections to research participation.
For example, the principle of justice calls for a re-evaluation of scientists' willingness to rely on proxy consent for research involving only a minor increase over minimal risk that holds out no potential benefit to a cognitively impaired individual, but which might provide general information about her or his condition (Bonnie, 1997). Individuals without mental disorders have the right to consent or dissent to requests to participate in research that may generate information pertinent to their future welfare or the welfare of others. It is inherently unfair to require those with mental disabilities to participate in research that may benefit members of their social group, when that same requirement is not made of those with typical intelligence.
In research on mental disability, the science establishment has also condoned proxy consent over participant objections for research holding out direct benefit to the individual, especially when no other treatments are available (Bonnie, 1997). However, accepting this violation of participant autonomy rests on a false distinction between non-therapeutic and therapeutic research. First, all knowledge generated by science, including basic research, can potentially lead to application. Second, by definition, therapeutic research does not guarantee benefits and, in fact, can pose greater risk to the participant because of the side effects of the experimental manipulation or the deprivation of treatment if one is assigned to a non-treatment control group. Consequently, giving investigators and guardians greater power to override the objections of vulnerable individuals in treatment research has no convincing moral basis and is unjust if the dissent of those without cognitive disabilities is considered inviolate.
Relational ethics emphasizes attention to both the person and the context in which research will be conducted. Accordingly, when working with adults with identified cognitive impairment, it is incumbent upon the investigator to justify the standard of consent that will be required for each experimental procedure and the specific role that proxy consent should play in the informed consent process. This justification should be based upon an understanding of the characteristics, life experience, knowledge base, and attitudes toward proxy consent of the individuals who will be recruited for participation. Such understanding should be achieved through ongoing dialogue with prospective participants, their families, and advocates. Engaging prospective participants and their legal guardians in discussion regarding consent decisions can also help determine how proxy consent, when necessary, can reflect both the participant's wishes and her or his best interests.
From a relational perspective, the responsibility to meet a selected standard of consent should not rest solely on the intellectual capacity or prior experience of a person with a cognitive impairment. Rather, investigators should seek to reduce the participant's vulnerability to research risks by providing information essential for a knowledgeable decision to be made in a format that is conducive to the prospective participant's learning abilities. Many people with longstanding cognitive impairments are used to other people making decisions and may not understand or have experience applying the concept of autonomy. For these individuals, the concept of voluntary choice may be an important element of the informed-consent dialogue. In addition, investigators should be required to develop consent procedures sensitive to the ways in which those with cognitive impairments may express their desire not to participate in a study (e.g., physical or verbal signs of anxiety or fatigue, body movements indicating a desire to leave the situation, verbal expressions of distress).
As advance directives for health care have become increasingly accepted in society, some have suggested that similar directives by those with advancing cognitive impairment can enhance substitute decision making for research participation once an individual's mental capacity has been compromised. Several scholars have provided excellent overviews of the ethical issues associated with using advance directives for research (Moorhouse and Weisstub, 1996; Sachs, 1994). Among the problems inherent in issuing and following advance directives is that neither the individual, in the early stages of increasing mental disability, nor those who will serve as her or his legal guardians can know with certainty how the prospective participant will think and feel in a deficient state (Moorhouse and Weisstub, 1996). In the face of such uncertainty, protectionist policies precluding research with the cognitively disabled and paternalistic approaches taking consent authority away from the participant are equally undesirable. Rather, from a relational perspective, despite limitations in foreseeing future reactions, the prospective participant is still the most expert in envisioning how she or he would respond to experimental procedures in an eventual state of cognitive impairment.
Persons with advancing cognitive impairments cannot make decisions regarding future research participation in isolation. The process of obtaining ethically acceptable advance directives requires a series of ongoing co-learning experiences among scientists, the prospective participant, and the substitute decision maker. This process, like that of obtaining informed consent, must ensure that participant decisions are free of coercion and exploitation. This means that statements precluding participation in research must be presented as equally acceptable directives.
During the co-learning process:
As in research with persons already identified as cognitively impaired, there is no ethical justification for overriding an advance directive that indicates dissent to participate in research. However, from a justice-care perspective, advance directives which do not rule out participation in specific types of studies do not replace the moral decision-making responsibility of the legal guardian. The fiduciary nature of legal guardianship obliges substitute decision makers to take ultimate responsibility for deciding the extent to which research participation protects the rights and welfare of those who have placed their trust in them. Thus, a substitute decision maker's dissent should override advance directives that appear to grant consent to research participation. The advance directive process should provide a sufficient understanding of the participant's character and values to assist the guardian in making consent decisions that most closely represent the prospective participant's past wishes and protect the participant's current best interests.
Informed consent has been seen by many as the primary mechanism for respecting the rights and protecting the welfare of research participants. Children and most adolescents do not, however, have the legal capacity to consent, may lack the cognitive capacity to comprehend the nature of experimental procedures, or may perceive that they lack the power to refuse participation (Fisher and Rosendahl, 1990; Keith-Spiegel, 1983; Koocher and Keith-Spiegel, 1990; Levine, 1986; Melton, et al., 1993; Thompson, 1990). To ensure that vulnerable persons with diminished autonomy have their rights as autonomous agents protected, federal regulations (DHHS, 1991) and professional codes (e.g., American Psychological Association, 1992; Society for Research in Child Development, 1993) require both guardian permission and assent from the adolescent before a teenager can participate in research.
Informed consent procedures need to provide individuals and their guardians with all information that might affect their willingness to participate in research, including the potential risks of participation. In consent practices for adolescent risk research, one risk often overlooked - or intentionally not included because investigators worry that it may be a disincentive to participation - is the possibility that the researcher will disclose confidential information because of state laws (e.g., in the case of suspected child abuse), institutional policies (e.g., harm to self or others), or ethical standards set by IRBs or the investigator's own moral compass (e.g., illegal substance use or abuse, sexually transmitted diseases). Applying a co-learning procedure, Colleen O'Sullivan and I examined whether disclosure policies stated in informed consent forms would deter parental and adolescent agreement to participate in research on different adolescent risk behaviors (O'Sullivan and Fisher, 1997). Contrary to assumptions held by many investigators, the attitudes expressed by this sample of predominantly white suburban parents and their teenagers suggested that for some risk contexts confidentiality policies may actually be a deterrent to research participation.
In our examination of prospective participant opinions, a majority of parents indicated they would refuse to grant permission for their teenager to participate in investigations of peer harassment, child maltreatment, suicide, sexually transmitted diseases, and violent behavior if they were informed that investigators would neither discuss the problem with the teenager nor report the problem to a concerned adult (O'Sullivan and Fisher, 1997). Moreover, both parents and high school students indicated they would agree to participate in research on physical and sexual abuse, suicide, and sexual harassment if the investigator had a policy of informing parents if any of these risk factors were a problem for the adolescent. Parents and adolescents also indicated they would consent to participation in studies on other risk factors (e.g., substance use, shyness, truancy, stealing, and vandalism) if they knew that the investigator would discuss the problem with the teenager and assist him or her in getting help.
Information gained from this study underscores the value of obtaining the views of prospective participants and their guardians about different confidentiality and disclosure policies. This information challenges the traditional investigator bias which assumes that consent forms including notice that an investigator will refer adolescents found to be in jeopardy for services, or report their problem to a concerned adult, will reduce participation rates. The views expressed by parents and teenagers suggest that alternatives to confidentiality policies may actually increase participation in some types of studies and point to the importance of telling individuals and their guardians about disclosure policies during informed consent procedures.
The cost-benefit calculus has often been applied unjustly to decisions to waive the requirement for parental permission and guardian consent when research involves ethnic minority participants. To ensure the rights of those who do not have the legal capacity to consent, federal regulations (DHHS, 1991, 46.408[a]; OPRR, 1993) require the permission of legal guardians, as well as the assent of the minor, before a child can participate in research. In some situations, however, federal regulations (DHHS, 1991, 46.408[c]) allow parental permission to be waived when data are collected anonymously and questions are assumed to be noninvasive and non-harmful or when such consent may jeopardize the minor's welfare. When guardian consent is waived or when minors are wards of the state, federal regulations (46.408[c] and 46.409[2.b]) require that an advocate for the minor verify the minor's understanding of assent procedures, support her or his preferences, ensure that participation is voluntary and that the minor can terminate participation, assess reactions to planned procedures, and ensure that debriefing is appropriate (Fisher, 1993; Fisher, Hoagwood, and Jensen, 1996; OPRR, 1993).
Federal guidelines 46.408[c] also allow for waiver of guardian consent in an unfortunately gray area defined in federal regulations 46.116[c.2] and 46.116[b.3] as "research that cannot be practically carried out without the waiver or alteration." These regulations can lead to an abuse of participant rights in situations where the investigator successfully argues that the difficulty of obtaining parental permission is a legitimate "practical" reason for waiving the consent requirement. Ethnic minority children, especially those living in economically disadvantaged or non-English speaking communities, are particularly vulnerable to scientific exploitation supported by conventional justifications for consent waivers. Investigators often find recruitment in these neighborhoods difficult. In such instances, some have condoned the use of "passive consent" procedures (sending home a letter to parents asking for a response only if the guardian does not wish her or his child to participate) as an acceptable means of protecting child welfare. Middle class majority populations are not immune from the use of passive consent procedures, especially when school principals or administrators in children's psychiatric centers, out of paternalism or convenience, support or encourage their use.
It has been argued that passive consent is not an ethical alternative to active guardian consent because its use creates an unjust situation in which certain populations are disproportionately deprived of the protections afforded by parental and guardian consent (Fisher, 1993; Fisher, et al., 1997; Nolan, 1992). I would also argue that the science establishment's acceptance of passive consent as a tool of convenience to enhance participation rates reflects the scientistic assumptions that knowledge gathering is a fundamental and unconditional good and that scientists are entitled to use humans as material for their pursuits. As a consequence, underlying ethical justifications for the use of passive consent is the implicit assumption that a caring and knowledgeable guardian would perceive the research as important and desirable for her or his child. This assumption leads to the damaging inference that parents who do not return consent forms either lack the knowledge to appreciate the importance of the research or are unconcerned about their child's welfare (Fisher, 1993).
No empirical data exist to support these assumptions. Such views fail to consider that parents may decide not to return consent forms because they do not approve of the goals or methods of the research, are generally suspicious of scientific research, or are concerned that the signing of any form may trigger inquiries from immigration, welfare, or other government agencies. In the absence of knowledge derived from scientist-community dialogues on the potential threats of passive consent to participant autonomy and adolescent welfare, unwarranted assumptions regarding community attitudes toward informed consent procedures risk substituting investigator paternalism for parental permission.
Relational ethics draws our attention to the interpersonal nature and obligations inherent in the scientist-participant relationship. It expands the traditional universalistic, principle orientation of ethics-in-science decision making to include the importance of intersubjectivity, particularity, and context, and moves scientists toward a reinterpretation of their own moral agency (see Smith, 1985; Walker, 1992). A relational perspective also recognizes power-asymmetry as an inherent feature of human subjects research.
The scientist-participant relationship is not purely contractual because the scientist has directive power that the participant does not have and because the hypothesis may not be known to the participant. Most prerogatives lie with the researcher. A scientist has the prerogative to select who will be recruited for research and the question under investigation. The participant has the prerogative to decline research participation or withdraw once consent has been granted. An investigator can come back and ask a person to participate in an extension of the research or a second study, but the participant does not usually have the prerogative to ask for additional scientific assessments of treatment efficacy or knowledge generation once a study is completed. The command performance for the participant is to apply her or his best efforts to follow the experimental protocol during the study at the direction of the scientist. The command performance of the scientist is to protect the scientific and ethical integrity of the study before, during, and after experimentation; however, the investigator's responsibilities are not commanded by the participant, but by the scientific establishment and the investigator's IRB.
When working with individuals identified as vulnerable, the responsible scientist needs to ensure that power differentials are not a product of the participant's special circumstance. Context-derived power asymmetries can occur when guardian consent is given higher priority than participant assent simply because of an individual's physical, psychological, or social status. Power asymmetries are also magnified when the experimental arrangement itself increases participant dependency. This can occur, for example, when an individual with cognitive disabilities or inexperience in challenging authority freely assents to participation, but is not aware of her or his right to withdraw participation, does not know the actions she or he would take to withdraw, or believes that she or he could do so only at great cost. Potentially destructive power asymmetries also emerge when science is used as a tool of subordination to legitimize oppressive policies (Prilleltensky, 1997).
Those who seek greater symmetry in power relationships emphasize that each party must derive something out of the relationship and be able to exercise discretionary control over the resources prized by the other (Goodin, 1985). However, these resources must be used to enhance, not compromise, the ethical and scientific integrity of experimentation. Relational ethics recognizes that both scientists and participants can misuse their influence to compromise the autonomy of the other: Scientists can use their status and control of resources to coerce participant compliance in treatment research that the participant may view as harmful, unjust, or unworthy. Participants or their community representatives can exploit the science establishment's dependency upon their cooperation to coerce investigator compliance in research practices that compromise scientific validity. In relational ethics, the development of ethical procedures must derive from mutual accommodation rather than coercion.
Although power relationships between scientist and participant may not be truly symmetrical, they can be complementary. Such complementarity must be based upon trust that each party will work to understand and respect the value orientations of the other. Relational ethics views an action as unethical if it violates the moral values of either the scientist or participant. If co-learning discourse reveals that mutual accommodation cannot take place, the investigator must be willing to abandon a particular research plan. The argument is that to truly accept a relational model, one must value the moral claims of both investigators and research participants. Scientific procedures gain moral legitimacy only if they are the product of autonomous solutions which do not require compromises that would coerce, exploit, or deprecate the values of either party. In a justice-care based approach, ethics-in-science decision making is based upon respect and mutual accommodation between scientist and participant, rather than compromise and coercion.
Relational ethics encourages scientists to engage research participants as partners in creating experimental procedures reflecting both scientific and interpersonal integrity. It does not seek to encourage federal regulations that shackle science or that promote protectionist policies that create research orphans out of vulnerable populations. Rather, a relational perspective should serve as a guide for moral discourse that moves science toward an orientation of the good life lived with others in social conditions that are just (Widdershoven and Smits, 1996). Scientific ethics is a process which draws upon investigators' human responsiveness to those who participate in research and their awareness of their own boundaries, competencies, and obligations. If becoming a moral subject is the critical moral task for all individuals (Smith, 1985), then recognizing that morality is embedded in the investigator-participant connection is the essential moral activity of science.