Charles Weijer of Dalhousie University, Halifax, Nova Scotia, Canada, prepared a paper for NBAC on the topic of protecting communities in research. That paper was published in 1999 in the journal Cambridge Quarterly of Healthcare Ethics. The reader can find the article at the following citation:
Weijer C. 1999. Protecting Communities in Research: Philosophical and Pragmatic Challenges. Cambridge Quarterly of Healthcare Ethics 8:501-513.
The papers included in this volume were edited to conform to minimal stylistic consistency. The content and accuracy of the papers are the responsibility of the authors, not the National Bioethics Advisory Commission.
The charter of the National Bioethics Advisory Commission (NBAC), a presidential commission created in 1995, states that "As a first priority, NBAC shall direct its attention to consideration of protection of the rights and welfare of human research subjects." During NBAC's first five years, the Commission focused on several issues concerning research involving human participants,1 issuing five reports and numerous recommendations that, when viewed as a whole, reflect an evolving appreciation of the various and complex challenges facing the implementation and oversight of the system that protects those who volunteer to participate in research.2 In May 1997, NBAC unanimously resolved that "No person in the United States should be enrolled in research without the twin protections of informed consent by an authorized person and independent review of the risks and benefits of the research."3 In 1999, NBAC indicated to the White House several areas of concern regarding the oversight of human research in the United States and provided preliminary findings. (See Appendix B.) The key concerns identified were as follows:
Based on these findings, and in response to a special request from the White House Office of Science and Technology Policy to further develop recommendations for improving the system for protecting research participants, NBAC undertook a comprehensive examination of the various aspects of the oversight system, including its purpose; its structure, including its local configuration - composed of investigators, institutions, and Institutional Review Boards (IRBs); and the ethical issues relevant to review of research. The recommendations contained in this report reflect a dual commitment to ensuring the protection of those who volunteer for research while supporting the continued advancement of science. They are based on a view of the oversight system as a whole and provide both a rationale for change and an interrelated set of proposals to improve the protection of human participants and enable the oversight system to operate more efficiently.
Throughout history, the pursuit of knowledge has been a highly valued human endeavor, and research through systematic, empirical investigation has become an essential method of attaining this goal. Like other forms of learning, research is worthwhile because it helps to make sense of and give meaning to the world and contributes to a growing knowledge base that also gives rise to a wide variety of practical benefits. Indeed, the contributions of science and technology to our daily lives are so ubiquitous that they are easily taken for granted. Knowledge developed from a constant and broad-based national investment in research has resulted in improvements in health, created valuable new products for everyday living, provided the capacity to sustain cleaner environments in a rapidly industrializing world, and facilitated better personal and family relationships.
This investment in basic science and clinical and public health research also has yielded a steady decline in mortality since the 1950s (National Center for Health Statistics 2000). Significant advances in treatment and prevention have reduced the impact of deadly diseases, such as some cancers (National Center for Health Statistics 2000; Ries et al. 2000) and cardiovascular diseases (National Center for Health Statistics 2000), as well as diseases causing morbidity, such as lead poisoning (CDC 1999a), vaccine-preventable diseases (CDC 1999a), depression (Frank et al. 1993), and sexually transmitted diseases (CDC 1999b).
The humanities and social sciences are also central to society's capacity to understand human nature and biology by informing public and private decisionmaking and by clarifying the effects of human behavior on well-being. For example, long-term studies have increased our understanding of poverty and the effects of family stability on economic well-being, leading to changes in welfare policy and the tax code (Hurst et al. 1998), and numerous studies from developmental psychology and cognitive science have articulated the processes by which people learn, with important implications for education (National Research Council 2000). By illuminating the practices of others, anthropology research has also contributed to better understanding of certain societal groups, such as the homeless (Baxter and Hopper 1981).
Many important issues involving health and well-being can be studied at the intersection of the humanities, the social sciences, and the biological sciences. That is, prevention and amelioration of many diseases require attention to the interfaces that exist at the molecular, organismal, psychosocial, and environmental levels. For example, emotional states and the availability of social resources can influence disease survival rates and recovery and even the likelihood of developing certain illnesses, indicating that one's position in the social hierarchy can be related to morbidity and mortality. Even gene expression at the fundamental level may depend on the general environmental conditions experienced by an organism. Thus, meaningful studies that will enhance our understanding of human health and disease will include the study of biological, psychological, environmental, and societal factors and will involve the participation of a wide range of individuals - including the healthy and the sick and the affluent and the less fortunate - all of whom deserve to have their rights and welfare protected.
Although the rewards of research for society can be great, in some cases research can seriously harm participants. However noble an investigator's intentions may be, the uncertainties that are inherent in any research study raise the prospect of harms that may be difficult to fully anticipate.4 Thus, a system of protections is needed to minimize harms that might occur. In the United States, the core aspect of Federal Policy for the Protection of Human Subjects, known as the Common Rule (Code of Federal Regulations, Title 45 Part 46 Subpart A), has been the regulatory policy followed by 17 federal departments and agencies for protecting human research participants (see Appendix C for a history of the Common Rule's development and Appendix E for the regulations [45 CFR 46]). Each codification of the Common Rule by a department or agency is equivalent to 45 CFR 46.101 - 46.124 (Subpart A), the Department of Health and Human Services (DHHS) codification. Some agencies have promulgated additional regulations concerning the protection of human participants in research, for example, those related to privacy. The Common Rule applies to all research involving human participants "conducted, supported or otherwise subject to regulation by any federal department or agency which takes appropriate administrative action to make this policy applicable to such research." The Food and Drug Administration (FDA) also has its own regulatory authority over research involving food and color additives, investigational drugs for human use, medical devices for human use, biological products for human use being developed for marketing, and electronic products that emit radiation (21 CFR 50, 56; see Appendix F). To this research, FDA applies its own set of regulations, which is generally but not entirely the same as the Common Rule. Even though the federal regulations cover a large portion of human research conducted domestically, and in some cases overseas, they are limited in their reach. 
In fact, if federal funds are not involved or if regulatory approval is not required, research activities involving human participants might not be subject to any form of oversight.
In general, the current research oversight system, when applicable, adequately protects the rights and welfare of research participants. When the system fails, however, the consequences can be tragic. Several recent cases point to the need for improvements in the current oversight system.
For example, in California, a research study of schizophrenic disorders raised concerns about the quality and completeness of informed consent and about the risks of research when one of the participants committed suicide (Appelbaum 1996; Katz 1993; OPRR 1994). The informed consent process for the study did not adequately explain the risks associated with receiving fixed rather than individually tailored doses of medication, receiving no medication at all, or the alternatives for treatment that were available outside of research (OPRR 1994).
In 1994, a healthy 19-year-old student at the University of Rochester died from complications related to a research study in which she underwent a bronchoscopy, during which investigators took more samples and used more anesthetic than were called for in the research protocol as approved by the IRB (New York State Department of Health 1996; Rosenthal 1996). Her death illustrates the need for independent review of protocols accompanied by the assurance that investigators will adhere to the approved protocol.
In 1999, the death of a young man, Jesse Gelsinger, in a gene transfer trial highlighted a number of concerns - including the role of federal oversight - that arise when researchers begin human trials of new and experimental approaches to treatment (Marshall 2000; Wolf and Lo 2000). Gelsinger, who had ornithine transcarbamylase deficiency, a rare genetic disorder that affects the body's ability to eliminate ammonia, participated in a gene transfer trial conducted at the University of Pennsylvania. The Phase I study was designed to test the safety of a gene transfer vector that, if successful, would have been used to treat infants with the fatal form of the disorder. Gelsinger was in a group receiving the highest dose. Although he was aware that he was in a research study, the research may not have been fully or adequately explained to him. During this study, participants were not informed about serious adverse events that had been previously reported, such as significant elevations in liver enzymes experienced by other participants.5
In addition, FDA was not notified of results from preclinical animal studies as required,6 and some participants, including Gelsinger, did not fit the revised inclusion criteria.7 Moreover, the lead investigator had financial interests in the company that developed the gene transfer techniques being studied (Wolf and Lo 2000). Finally, the death of Jesse Gelsinger raised questions about federal and local IRB monitoring of previous gene transfer studies when it was discovered that adverse events from other trials had not been reported to the National Institutes of Health (NIH) in a timely manner. As this incident was investigated, it became clear that a mechanism for federal agencies to adequately share information was lacking and that NIH was unaware of the exact nature of adverse event reports provided to FDA. In addition, an amendment broadening the inclusion criteria for the trial was implemented without specific FDA approval.8
A number of other cases highlight the limits of the current oversight system. In the early 1990s, for example, plastic surgeons at a New York City hospital compared two common surgical procedures for facelifts by performing both procedures on each individual participant, one procedure on each side of the face. The study was not reviewed by an IRB, and the participants were not told that they were participating in a research study (Hilts 1998). The Office for Protection from Research Risks (OPRR) halted its investigation of the case when it learned that the research, which involved no federal funds, was not subject to the federal oversight system.9
In another case, in 1996, according to one news account, an eye surgeon at the University of South Florida performed an experiment on at least 60 people using a cutting tool he developed to accelerate the healing process after corneal transplantation. However, the surgeon did not have IRB approval to use the experimental tool on human participants,10 and participants did not give informed consent.11 Moreover, the press raised concerns about conflicts of interest because the university held a patent on the cutting tool and listed the surgeon as a co-inventor. Both stood to benefit financially from the marketing of the tool (Klein 2000).
Although the more dramatic examples have occurred in clinical research, problematic cases are not limited to this area of investigation. The real and potential harms involved in these and other well-publicized examples of the failure to protect the rights and welfare of research participants erode public trust in the research enterprise. They also make it clear that a viable and credible oversight system should aim first to protect participants from undue harm, with the additional goal of creating an environment in which ethically sound and meritorious research can be conducted with society's support and trust. Ideally, the oversight system should avoid needless complexities and regulation, enhance the quality of research, and protect participants, a difficult but achievable balance.
The conduct of research has been transformed by many factors over the past 25 years, resulting in a much larger and more complex enterprise. Changes include shifts in patterns of research investment; growing stresses on academic medical centers and research universities; the emergence of independent IRBs; changing public perceptions and expectations about research participation; new technologies that affect risks and potential benefits in research; and growing consideration of the roles of groups and communities in research design and implementation.
In the past two decades, phenomenal growth has occurred in federally and industry-sponsored biomedical research. Federal expenditures for medical and health research conducted in the United States and in foreign countries almost doubled from $6.9 billion to $13.4 billion between 1986 and 1995. Roughly half of that funding went to university-based research programs, largely to academic medical centers.12 The federal investment in research involving human participants extends well beyond biomedical research and is extremely diverse (see Exhibit 1.1).
Industry expenditures for medical and health-related research conducted in the United States and in foreign countries have been rising even faster than those of the public sector, tripling from $6.2 billion to $18.6 billion during that same period.13 Research conducted in the United States sponsored by one segment of industry, pharmaceutical companies, has experienced particularly rapid growth, rising 14-fold from $1.5 billion to $22.4 billion between 1980 and 2000 (PhRMA 2000). As a result, industry funding is playing an increasingly important role in the support and conduct of medical and health-related research.
Not surprisingly, the rapid rise in industry investment in research funding has been matched by an accompanying rise in the number of clinical investigators connected with this activity. For example, the number of investigators participating in FDA-regulated research increased from 5,500 in 1990 to 25,000 in 1996 (Valigra 1997), and the total number of U.S. clinical investigators is now estimated to be between 45,000 and 50,000 (CenterWatch 2000). Thus, the sheer volume and diversity of research have placed new strains on the system designed to oversee the protection of research participants.
Academic medical centers, traditionally the principal sites of clinical research, have experienced certain stresses that offset, in part, the effects of this growth in research funding. In particular, managed care, price competition in health care, and cost containment efforts (e.g., the Balanced Budget Act of 1997)14 have resulted in reductions in net clinical income to academic institutions. This trend negatively influences their capacity for research and education, because excess clinical revenue traditionally has been the means by which academic medical centers subsidize these activities (Crowley and Thier 1996; Mechanic and Dobson 1996). In addition, clinical income has been affected by managed care's scrutiny of patient-related costs, whereby much routine patient care is deemed unreimbursable when associated with a clinical trial or an "experimental" therapy.

Exhibit 1.1

Sixteen federal departments and agencies reported to NBAC that they conduct or support research involving human participants, although some components within departments reported that they do not sponsor or conduct such research (e.g., the DHHS Administration on Aging). Each agency's research program involving human participants is distinctive in terms of its size, scope, organization, and focus, all of which reflect its primary mission. The following examples illustrate the diverse types of research conducted or supported by federal agencies:

At least 69 federal departments and agencies are not covered by the Common Rule. NBAC was unable to determine which of these departments and agencies might sponsor or conduct research with human participants; however, at least some of them are involved in such activities.
One set of responses has been the establishment of revenue-generating centers for clinical research and the development of new relations with industry (Gallin and Smits 1997). Another is the creation of research partnerships with health maintenance organizations (Donahue et al. 1996). In addition, the burgeoning number of academic investigators competing for funding has stimulated many institutions to seek financial support from industry (Henderson 1999). In some cases, this shift to more private funding has changed the nature of regulatory oversight.
At the same time, industry-sponsored research is spreading between and beyond academic medical centers. In 1998, only 40 percent of industry funding for clinical trials went to academic medical centers, down from 80 percent in 1991 (Henderson 1999). Large amounts of research are now managed by private Contract Research Organizations (CROs), rather than academic investigators, and there has been significant growth in Site Management Organizations (SMOs), which conduct research in dedicated facilities and through various types of physician networks (Association of Clinical Research Professionals 1997). Research also continues to be performed in private medical and diagnostic practices unaffiliated with an SMO (CenterWatch 1998). Thus, increasingly some avenues of clinical research fall outside the strongest and most experienced part of the current system of oversight.
To find the large numbers of participants needed to enroll in clinical studies, sponsors and CROs often conduct a single research study at dozens or even hundreds of sites. A study may involve numerous academic centers, as well as community hospitals and private practice physicians. In order to compete, some academic medical centers are forming research networks and attempting to provide services similar to those of for-profit companies (Bodenheimer 2000). Consequently, the traditional biomedical research model of one research study led by one investigator at one academic institution now occurs much less frequently than in the past, a situation that complicates and often prolongs the review and approval of research studies (OIG 1998a, 4 - 5).
As clinical research has spread beyond academic institutions, the locus of ethics review also has shifted. In the United States, the committees that review research with the mandate to protect the rights and welfare of human participants - IRBs - have traditionally been located in the institution in which the research is conducted. However, IRBs also now exist as separate entities that are not part of the organizational structure of an institution that conducts or funds research. Although many labels are used to describe these groups, this report will use the term independent IRBs.16 Independent IRBs, which have existed for more than 30 years and are growing both in size and in the number of protocols they review, usually are for-profit entities that operate on a fee-for-service basis (OIG 1998b). Traditionally, independent IRBs primarily have reviewed industry-funded clinical research, but since 1995 they also have been permitted to review federally funded research.17 It should be noted that some institutionally affiliated IRBs have begun to charge for review of certain types of protocols (e.g., industry-sponsored research), and may even conduct reviews for other, unaffiliated or loosely affiliated groups, thus acting much like independent IRBs.
The growth and spread of clinical research also reflects a growing demand by patients for access to clinical trials. People with difficult-to-treat, life-threatening diseases often see clinical trials as offering the benefits of cutting-edge medicine. In this context, trial participation is viewed as a benefit to be sought rather than a burden to be avoided (Kahn et al. 1998). This sentiment was expressed forcefully by HIV/AIDS activists (Rothman and Edgar 1991), some of whom adopted the slogan, "A Drug Trial Is Health Care Too" (Annas 1990, 35). Disease-oriented patient activists have also emphasized the collective benefits of research for all individuals with a specific condition. Advocacy groups commonly lobby Congress and NIH for more research funding (in particular for clinical trials) for specific diseases and conditions, not only to benefit individual research participants, but also to improve treatment for all who are affected by a given disease or condition.
These calls for access to trials have spurred a "reconceptualization of the concept of justice" in clinical research (Brody 1998). That is, although the application of the principle of justice has traditionally focused on fairly distributing the risks of research - selecting participants equitably meant not targeting individuals considered vulnerable for participation in risky research from which they were unlikely to receive any direct benefit - applying the principle of justice now focuses also on fairly distributing the potential benefits of research. Selecting participants equitably means not unfairly excluding certain subgroups of the population from research and working to ensure that the knowledge gained in research applies as appropriate across all groups in society. Routine exclusion of groups - such as women of childbearing age - once seen as appropriate and protective, is now seen as arbitrary and paternalistic. Therefore, several federal agencies have developed policies to promote, for example, inclusion of women and/or minorities in research, as well as data analysis relevant to these groups.18
Other policies that reflect a growing emphasis on access to participation in research include FDA's regulations granting an exception from informed consent requirements for some emergency research (21 CFR 50.24), the promotion of the inclusion of children in research,19 and the provision of Medicare payment for the routine costs of clinical trials and items and services that are otherwise generally available to beneficiaries.20
Research both produces and is affected by advances in technology. However, although advances in genetics, the rise of the Internet, and the growth of informatics are all providing important new capabilities for research, these advances also can raise new ethical challenges. For example, although genetic research may pose no physical risk beyond that of drawing blood, it can pose significant psychological and economic risks if participants - or their insurers or employers - learn that they are predisposed to an untreatable condition. NBAC has addressed some of these issues in a previous report (NBAC 1999b).
New information technologies can provide opportunities for medical, health-related, and social science research while also raising ethical challenges regarding the protection of confidentiality of the resulting data. The computerization of medical records, which greatly facilitates retrospective analysis of patients' medical records, has also prompted discussion about the legitimate access to and use of medical records in the new electronic environment (Etzioni 1999; National Research Council 1997). Such new technologies might, in this case, increase threats to privacy by making it easier to identify patients from combinations of seemingly unidentifiable data, such as age and date of hospital admission (Sweeney 1997; Woodward 1999). As with medical records, computerization has prompted discussion about the ability to restrict access to and use of employment or school records, financial information, and large survey data sets (Garfinkel 2000; White 2000).
The Internet has also given rise to new research opportunities and risks by allowing investigators to reach a wide pool of participants, although participants' assumptions about the anonymity, security, and privacy of Internet connections might not be justified. The ease with which investigators can misrepresent themselves online raises new questions about the propriety of deception research carried out in this context. The possibility for online misrepresentation by participants is also of concern; for example, investigators may have no way of knowing whether children are participants in online research and are therefore in need of special protections (Frankel and Siang 1999).
Social science research also is undergoing a number of important changes that affect the protection of research participants. Beginning with the cardiovascular disease primary prevention trials conducted in various communities and sponsored by the National Heart, Lung, and Blood Institute in the 1980s (Carleton et al. 1995; Farquhar et al. 1985; Jacobs et al. 1986), there has been an increase in the number of research studies conducted in community settings (Mittelmark et al. 1993). As the behavioral and social determinants of more diseases become known (e.g., HIV/AIDS, lung cancer, heart attack, stroke), the focus of intervention strategies has broadened from the individual to the population, and the research setting has in some cases moved into the community (Schneiderman and Speers 2000). For example, research on cigarette smoking once focused on cessation efforts, and interventions were targeted at individuals (DiClemente et al. 1991). Now, with the emphasis on prevention of smoking behavior, research interventions are often targeted at particular populations and carried out at the community level (Cummings 1999). With such community-oriented research interventions, defining the research participants and identifying the appropriate participant protections can be difficult.
Increasingly, research is conducted with communities, not on communities (Bracht 1991; George et al. 1996). Local community groups and organizations often act as collaborative investigators by sharing responsibility with academic investigators in designing and implementing a research study (Hatch et al. 1993). However, this new collaborative role for the community raises many issues related to research infrastructure and oversight. For example, it is unclear when community groups must have an IRB and how to build capacity within the community to carry out these regulatory responsibilities. Determining which individual or group speaks for the community as a whole, and how community input or consent should be obtained, remains a continuing challenge in conducting such research.
Huge financial investments, expansion of the research enterprise, and new technology have all stressed a system of oversight that is less than optimal. A major overarching challenge that faces the entire system is a lack of adequate resources, both financial and human. Scarce resources limit the functioning of the oversight system at every level and often prevent federal offices, institutions, and IRBs from implementing initiatives that would improve the system.
A 1996 General Accounting Office report, conducted at the request of the Senate Committee on Governmental Affairs, found the current system for protecting participants in scientific research to be deficient because of heavy workloads and competing demands on IRBs, a lack of preparedness of IRBs to review complex research, limited funds for federal inspections, and over-reliance on investigators' willingness to comply with regulatory requirements (GAO 1996).
In June 1998, the DHHS Office of Inspector General (OIG) sounded a "warning signal" that the system had not adapted sufficiently to the changing research environment (OIG 1998a). This warning accompanied and was followed by a series of reports on specific aspects of the oversight of research, particularly the role of IRBs (OIG 1998b - e; OIG 2000a - d). The OIG reports found that many IRBs are simply overwhelmed by the volume and complexity of the research they review, by a lack of financial, administrative, and educational resources, and by a regulatory system that often distracts from rather than focuses on key ethical issues. These pressures make the system inefficient and strain its capacity to protect participants.
A related report sponsored by the NIH Office of Extramural Research provided quantitative information about IRBs' workloads (Bell et al. 1998). Based on a survey of 491 institutions holding Multiple Project Assurances (MPAs),21 this report provided a sense of the scale of the human research enterprise and noted that some IRBs review a striking number of protocols. Indeed, the highest volume IRBs, about 10 percent, were found to account for 37 percent of the total reviews (Bell et al. 1998, 8).
Recent actions of the federal Office for Human Research Protections (OHRP) (formerly OPRR) within DHHS highlight the existence of systemic problems in the oversight system at the institutional level. For example, OHRP has restricted or suspended MPAs and required corrective actions at nearly a dozen academic institutions. These sanctions were imposed by OHRP when it found "numerous deficiencies and concerns regarding systemic protections for human subjects" (OHRP 2000). As previous reports have suggested, deficiencies occurred in areas such as IRB membership, education of IRB members and investigators, institutional commitment, IRB initial and continuing review of protocols, review of protocols involving vulnerable persons, and procedures for obtaining voluntary informed consent.
The recent academic literature regarding the current oversight system for the protection of human research supports many of the findings from these reports (Edgar and Rothman 1995; Moreno et al. 1998; Phillips 1996; Snyderman and Holmes 2000; Woodward 1999). There is general recognition that because the nature and context of research have changed, the nature and structure of the oversight of research also must change. For example, Moreno et al. argue that the federal regulations should be revised to reflect changes that have affected the nature and context of research, such as the increased importance of multi-site studies (1998). They also argue that the federal regulations should be responsive to certain needed protections that have been identified, but were not enacted when the Common Rule was issued (e.g., protections for individuals categorized as vulnerable). Others, including Edgar and Rothman, argue that the expansion of the scientific frontier requires that ethics review mechanisms other than local IRBs should be considered, such as national, topic-specific advisory panels (1995). Edgar and Rothman also characterize the local IRB as a "paper tiger," buried in paperwork and often unable to deal effectively with ethical issues (1995).
While more protection may be needed in some areas, another concern is the overwhelming burden that is placed on IRBs and investigators and the extent to which unnecessary paperwork requirements are displacing a focus on important ethical issues. For example, Phillips points to the growing frustration among investigators and IRBs that has resulted from the increase in administrative and regulatory requirements without a commensurate increase in protection (1996). Some support the need for oversight while still perceiving regulatory and compliance mechanisms, such as reporting requirements, as difficult to interpret, redundant, and inefficient (Snyderman and Holmes 2000).
Others perceive the problems with the current oversight system as failures to address such issues as inadequate funding, lack of adequate education for IRB members and investigators, and insufficient focus on conflicts of interest in research (Amdur 2000; Shamoo 1994; Snyderman and Holmes 2000; Sugarman 2000). Overall, there is broad agreement in the academic literature that the current oversight system is in need of improvement.
The creation of the Common Rule (see Appendix C) provided significant unification in the language of federal regulations for the protection of human research participants. However, the Common Rule did not create a shared mechanism for interpreting and implementing the regulations at the federal level. In the absence of a formal mechanism, OHRP sometimes acts as a de facto reference point and consensus builder among federal agencies, even though it has no congressional or executive authority to do so. Moreover, some other departments have not established offices comparable to OHRP for interpreting and implementing the regulations; in some cases, a single individual is responsible for oversight activities. Thus, the ability to coordinate oversight among the departments is weak, leading departments and agencies bound to the Common Rule potentially to interpret regulatory requirements differently (see Exhibit 1.2). In addition to varying substantive interpretations of the regulations, departments and agencies use different procedures to ensure compliance. This issue is further discussed in Chapter 2.
Some federal departments have supplemented the Common Rule with additional regulations and policies. For example, DHHS provides additional protections for pregnant women and fetuses, prisoners, and children (45 CFR 46 Subparts B, C, and D).22 The Central Intelligence Agency and the Social Security Administration (SSA) also follow these regulations for groups that are considered vulnerable. The Department of Education has adopted protections for children (34 CFR 97 Subpart D), and the Department of Justice has adopted protections for research conducted within the Bureau of Prisons (28 CFR 512). In addition, although FDA's regulations do not include protections for vulnerable individuals analogous to the DHHS subparts, a law passed in 2000 requires DHHS to apply Subpart D of 45 CFR 46 to all research conducted, supported, or regulated by the department, including research regulated by FDA.29 FDA subsequently issued an interim rule incorporating Subpart D into its regulations.30 It is noteworthy, however, that so few Common Rule signatories have adopted additional protections for individuals who are considered vulnerable, in effect providing incomplete protection for human research participants. (The protection of individuals and groups that are categorized as vulnerable is discussed in detail in Chapter 4.)
There is wide variation among federal departments and agencies regarding their policies and procedures for determining whether a research activity is exempt from the federal regulations (45 CFR 46.101). Differences could be due to the variability in the types of research sponsored; however, they also could be due in part to inconsistent interpretation of the regulations. As shown in Table 1.1, many agencies report that all, or nearly all, of the research that they conduct or sponsor is exempt from the federal regulations.
The procedures used to make these determinations vary across agencies. In general, agencies use their IRB chair to determine whether research conducted by the agency is exempt, and a combination of technical and legal staff determine exemptions for human participant research sponsored through grants and contracts. Some agencies have customized administrative mechanisms for making these determinations. For example, the Census Bureau considers all of its research to be exempt under Federal Policy 15 CFR 27.101 (b)(3)(ii), which exempts survey procedures if "federal statute(s) require(s) without exception that the confidentiality of the personally identifiable information will be maintained throughout the research and hereafter." However, privacy and confidentiality issues that relate to human participants are brought to the Census Bureau's Policy Office. The Census Bureau's Disclosure Review Board has primary responsibility for ensuring confidentiality in published reports and data products.
SSA does not have an IRB, because it claims that all of its research is exempt. This exemption took effect on April 4, 1983,23 as a result of a final DHHS rule published on March 4, 1983. Research carried out under section 1110(b) of the Social Security Act, however, remains subject to the Common Rule's informed consent requirements. The 1983 notice states that "[i]n order to insure the continued protection of human subjects participating in such [otherwise exempt] research activity, the Department is adding a specific requirement of written, informed consent in any instance, not reviewed by an IRB, in which the Secretary determines that the research activity presents a danger to the physical, mental, or emotional well-being of a participant."24
In the 1983 Federal Register notice, DHHS made clear the need for IRB review of biomedical and behavioral research, but stated that such review would be "unnecessary and burdensome in the context of research under the Social Security Act and otherwise."25 DHHS discussed, but rejected, several proposals for IRB review of research and demonstrations to support public benefit or service programs, concluding that "ethical and other problems raised by research in benefit programs will be addressed by the officials who are familiar with the programs and responsible for their successful operations under state and federal law."26 SSA reviewed the 1983 regulation with OHRP/OPRR and concluded that it continues to apply to SSA research and demonstrations. In 1999, SSA did not conduct any extramural human participant research or demonstrations under section 1110(b).
The Health Resources and Services Administration (HRSA, DHHS) reported that nearly all of its research activity comprises program evaluation or evaluation of demonstration projects, which are considered to be exempt from the federal regulations under the public "benefit and service" criterion. However, HRSA requires such a claim of exemption to be approved by the HRSA Human Subjects Committee. Otherwise IRB oversight is required.27 Furthermore, even within DHHS, both substantive and procedural differences can be found, notably between FDA and DHHS regulations. These differences relate to informed consent, the definition of research, emergency research, assurances of compliance, inspections by the sponsoring agency, sanctions for noncompliance, and additional protections for vulnerable populations.
Whatever the source, inconsistency among departments and agencies can lead to confusion and frustration among some investigators and IRBs28 and can render the oversight system unnecessarily confusing and open to misinterpretation. Not only do different rules apply to different research studies, but a single study may be subject to more than one set of regulations if it is sponsored or conducted by institutions that are required to follow more than one set of rules. IRBs and investigators are often uncertain which rules apply or to whom they must report. For example, an NIH-funded study involving an FDA-regulated investigational drug conducted in a VA hospital would be subject to the regulations and oversight of three different departments or agencies (45 CFR 46; 21 CFR 50,56; 38 CFR 16).
*Some departments reported data for several units. The range represents the differences in data reported. Source: NBAC, 'Federal Agency Survey of Policies and Procedures for the Protection of Human Subjects in Research.' This staff analysis is available in Volume II of this report.
Some of the regulatory challenges are exacerbated by the fact that the Common Rule is difficult to amend. Amending the Common Rule requires that each signatory agency first agree to a revision before the 15 agencies with regulations go through the rulemaking process to revise the regulations.31 This weakness results in a set of regulations for which no system-wide change is possible. Obtaining concurrence of the departments and agencies on any regulatory change so far has proven impossible, although not because of lack of need or effort.32 The addition of regulations to the Common Rule specific to classified research, for example, has not been achieved despite three years of effort, a Presidential Memorandum directing the change, and a challenge in a U.S. District Court.33 Unable to change the regulations, some departments have attempted to make modifications by issuing regulatory guidance, a strategy used by the VA in issuing regulations for providing treatment for injuries resulting from participation in research (38 CFR 17.85). However, the power of such changes is limited, and because guidance is usually department specific, it promotes inconsistency and undermines the very unification the Common Rule is supposed to establish. (Further discussion of this issue and recommendations appear in Chapter 2.)
The Common Rule was intended to both provide uniformity across federal departments and expand the scope of regulations to federal departments that previously had none. However, although it marked a significant expansion in scope, the Common Rule still does not apply to all federally sponsored research.
Existing regulations also do not apply to many areas of research funded and conducted by businesses, private nonprofit organizations, and state or local agencies, although such research may be subject to federal regulation if it involves the development of medical devices or drugs requiring approval by the FDA or if it is conducted at an institution that has voluntarily agreed to apply Common Rule requirements to all research it conducts. An unknown amount of nonfederally funded research is completely unregulated under the federal system. This research may include experimental surgical techniques, research on reproductive technologies, some uses of approved drugs and medical devices, and research use of private, identifiable data.34 In some cases, nonfederally funded research may be subject to state regulations, or investigators may voluntarily meet federal requirements to reduce research-related liability. (Recommendations regarding expanding the scope of the system to include such research appear in Chapter 2.)
Even for research that is subject to federal regulations, the mechanisms for enforcing them suffer from three potential weaknesses: the lines of enforcement authority are awkward and sometimes isolated; there is a limited repertoire of sanctions to match the range of possible violations; and the oversight and monitoring process is uneven.
First, there is no clear line of authority or system for the federal government as a whole to sanction serious or repeated noncompliance by investigators or institutions. This results from the dispersion of enforcement functions among various departments and agencies, which weakens the sanctions any one department can impose because investigators could continue research overseen by a different authority. Each federal department that adheres to the Common Rule has the authority to enforce its own codification of the Common Rule for research it conducts or sponsors. However, federal agencies and institutions with assurances of compliance (formal commitments by institutions to the government stating that they comply with federal regulations) from OHRP are subject to enforcement from that office as well. In the case of DHHS grantees and contractors, the enforcement authority is clear because OHRP is part of DHHS. But, when the assurance holder is the grantee of another department, OHRP decisions come from outside the regular reporting line of authority. Additionally, departments that do use the OHRP assurance process may have their own separate systems for enforcement, and there is little coordination among the various offices responsible for ensuring compliance with the Common Rule.
Second, concerns have emerged that enforcement authorities do not have or use an adequate range of sanctions to respond to various forms of noncompliance. Federal regulations give department and agency heads the authority to terminate or suspend funding for research projects that are not in compliance with the regulations (45 CFR 46.123(a)). Common enforcement tools are the requirement of written responses or the enactment of specific changes to address the identified deficiencies; those who grant assurances can also restrict or suspend institutional assurances. Under its regulations, FDA can withhold approval of new studies, prohibit enrollment of new subjects, and terminate studies. FDA can also issue warning letters and can restrict or disqualify investigators, IRBs, or institutions from conducting or reviewing research with investigational products.35 However, a more complete range of sanctions should be considered for enforcement authorities.
Third, any system of sanctions can only be as good as the monitoring and investigating processes that are used to determine their need. The Common Rule does not set out agency responsibilities for monitoring IRBs or investigators. Some agencies, such as DOE, have a program of routine site reviews.36 Other agencies, such as DHHS, conduct only "for cause" investigations, generally because limited budgets do not permit more proactive monitoring. Investigations often take a long time (in some instances over a year)37 and usually do not include on-site visits (OIG 1998a).
This lack of centralized enforcement authority, proportionate sanctions, and active research oversight severely weakens the system for protecting human research participants. These weaknesses, alone or in combination, also generate unnecessary bureaucracy and burden; a unified oversight office could simplify this complexity and improve monitoring and enforcement.
In addition to the challenges described above, the current regulations suffer from other weaknesses. For example, they do not sufficiently embody and reflect the substantive ethical principles and standards that should govern behavior, but instead focus on the procedural aspects of IRB review. Thus, although IRBs may review research in accordance with an appropriate focus on ethical behavior, they are ultimately held responsible primarily for procedure and documentation.
OHRP's compliance findings reflect this emphasis on the regulations by focusing on the procedures by which protocols are reviewed, for example, inappropriate use of expedited review and exemptions, lack of a quorum, less than annual continuing review, and failure to document required findings or votes (OHRP 2000). The emphasis by regulators on procedure is frustrating to IRBs and investigators38 and also contributes to an atmosphere in which review of research becomes an exercise in avoiding sanctions and liability rather than in maintaining appropriate ethical standards and protecting human participants. (Chapters 4 and 5 offer recommendations regarding IRB review and the emphasis on procedural requirements.)
Another weakness in the current regulations is that they fail to adequately address the unique ethical issues that arise in different types of research. Although federal regulations for human research have long applied to the social sciences and humanities as well as biomedical research, their articulation reflects a persistent emphasis on clinical or biomedical research. Indeed, it is sometimes difficult to determine which activities constitute human research and are therefore subject to the regulatory requirements. In addition, quality improvement studies in health care organizations, public health studies, program evaluation, and humanities research may require review by an IRB in some institutions, but not in others.
Applied to nonclinical research, and particularly to humanities and social science research, the regulatory requirements seem to be either irrelevant or insufficient to provide protection, depending on the type of research. For example, requirements for written documentation of consent may be inappropriate for some survey and anthropological research. Recently, the Association of American University Professors issued a report stating that IRBs often "mistakenly apply standards of clinical and biomedical research" to social science and historical research, which not only adversely affects the quality of the research but also fails to adequately protect human research participants (AAUP 2000). In other areas, the regulations are insufficient - for example, with regard to protecting privacy and confidentiality. Although the regulations require "adequate provisions" to protect privacy and confidentiality, nonphysical harms, such as those resulting from breaches of confidentiality, are often difficult for IRBs to assess without more specific regulatory guidance (45 CFR 46.111(a)(7); 21 CFR 56.111(a)(7)). Much of the difficulty in applying the federal regulations is due to differences in the nature of the risks associated with nonclinical research. For example, physical harms are rarely a concern in nonclinical research, while psychological, social, economic, and legal harms are more likely to occur and should be the primary concern of IRB review.
The quality of IRB review is often compromised by the burden of excessive paperwork, because although IRBs are broadly charged with ethical review, in practice they also must fulfill many procedural requirements. While some of these requirements are designed to ensure compliance with ethical standards (e.g., documentation of waiver of informed consent), others appear to have little relevance to ethical standards or the protection of participants (e.g., requirements for documentation in meeting minutes). In all of their deliberations, IRBs must keep track of a range of detailed regulations and document the grounds on which they make decisions in accordance with them. In addition, IRBs must comply with numerous regulations regarding their operations. However, some of the regulatory and paperwork requirements governing IRBs are difficult to interpret (NBAC 1999b), unnecessarily burdensome, and often not commensurate with their contribution to protecting research participants.
One particularly time-consuming task for investigators, IRBs, and institutional officials is the preparation of assurances. Although many domestic research institutions have an MPA, which covers all research and generally needs to be renewed every five years, other institutions must obtain a separate assurance for each funded project (i.e., a Single Project Assurance, or SPA). For multi-site and international research, this process can be particularly time consuming. Even institutions with an MPA must often revise or amend that document to include changing institutional affiliates as well as affiliation agreements between specific investigators and individual physicians, practice plans, and health care institutions.39 In addition, many IRBs lack the basic resources of staff, space, and technology (Sugarman 2000). Without strong professional and clerical support, busy IRBs remain mired in paperwork and are often unable to focus on ethical considerations. One can get a sense of the unmet resource needs of IRBs from the institutional responses to OPRR shutdowns; large institutions are routinely creating several new IRBs to share the review workload and adding several additional full-time IRB staff (Desruisseaux 2000; Phillips 2000).40
Because of the large workload, serving as an IRB member or chair requires a significant time commitment, with many hours spent in and out of meetings reviewing protocols and writing reports. Few IRB members receive compensation or recognition for their efforts. Thus, with little financial or academic support for IRB membership, IRBs must rely heavily on the goodwill of individual members, which can make it difficult to attract and retain members. (Recommendations related to reducing burdens on IRBs are discussed in Chapters 4, 5, and 6.)
Multi-site research, discussed further in Chapter 6, poses its own set of problems for local IRBs. The growing importance of multi-site research has challenged fundamental assumptions about the importance of local review, for the more IRBs duplicate each other's work by reviewing the same protocols, the more pressure there is to show why multiple local reviews of identical research protocols are needed. Although local review can provide insight about the social and cultural context of a study, the facilities in which it will be carried out, and any local laws or policies that might affect the study, IRBs may be squandering precious resources when dozens or hundreds of them must review all aspects of a single, multi-site protocol when the design and methods are unlikely to be changed.
IRBs are not the only groups frustrated by multi-site research. Investigators and sponsors are discouraged by having to submit protocols to multiple boards, particularly because changes requested by one board usually have to be approved by the others, a repetitive process that is labor intensive and that can significantly delay research, with little resulting benefit.
Another difficulty is that local IRBs are sometimes poorly situated to review multi-site research. Although IRBs and institutions have the authority to require changes for their site or to refuse to approve a multi-site study about which they hold serious reservations, in practice they are hesitant to use that authority. Thus, although local IRBs may modify recruitment procedures and consent forms, it may be that no single IRB has the power to require substantive changes to a study design, which must remain standardized across sites.
Multi-site research also poses problems with regard to continuing review (mandated, periodic review of research in progress) and review of adverse events. Many IRBs find themselves reviewing a staggering number of reports of adverse events that have occurred at other sites, often without any context, such as the total number of participants in a protocol or whether an adverse event occurred with the experimental or control intervention. Even when they have this information, IRBs sometimes lack the expertise to assess its significance in terms of the risks and potential benefits to trial participants.
Some of the weaknesses in the implementation of the federal regulations are being overcome by knowledgeable and creative IRBs, investigators, and institutional officials. But more can be done (see Chapter 3). Knowledgeable IRBs can find in the regulations extensive discretion in the types of protocols that may be approved. Investigators attentive to regulatory requirements can design research with protections that will easily satisfy ethical and regulatory requirements. Institutions can prepare policies and procedures that clarify, extend, and apply regulations to fit the local and institutional research context. Unfortunately, this kind of expertise is not widespread, and at all levels, from investigators to IRBs to institutions, there is frequently a lack of understanding of research ethics. In some cases, for example, investigators and IRB members might assume that research ethics means their own personal ethics rather than a common set of established ethical principles, standards, and procedures. The current system simply fails to ensure adequate education or preparation of individuals and institutions that wish to conduct and review research.
Two key tasks of institutions are supporting and educating IRB members. Currently, however, many IRB members receive little or no formal training in the relevant ethical analysis or federal regulations. A 1995 survey of 186 IRBs at major universities found that almost half provided no training or less than an hour of training to board members (Hayes et al. 1995). Without trained members, IRBs may act with little knowledge of or attention to the regulations or ethical principles they are supposed to implement. On-the-job training of new IRB members reinforces IRBs' isolation from each other and encourages inconsistency between IRBs. Most important, lack of training for IRB members leads even the best-intentioned IRBs to consistently miss or ignore important ethical issues. IRB review can only be as good as the IRB members' judgment. Without standards for IRBs and IRB members, IRB review is likely to be of uneven quality.
Education is essential not only for IRB members, but also for investigators and research staff. If investigators do not know that a specific project is subject to regulations, the entire system of protections is undermined. Even when investigators know that a research study must be reviewed by an IRB, they may not understand their continuing responsibilities as well as the IRB's responsibility for the continuing oversight of the research. Investigators may be unfamiliar with their obligations to report certain adverse events and to have protocol amendments approved by the IRB. They also must be able to recognize their responsibilities beyond securing and maintaining IRB approval, including explaining protocols to prospective participants and answering their questions.
Public and private sector groups have taken steps to improve protections for those who participate in research. A number of professional organizations, such as Public Responsibility in Medicine and Research, the Applied Research Ethics National Association, the Association of American Medical Colleges, and the Association of American Universities, have contributed to this area by issuing policy statements, instituting workshops and training, or encouraging their member organizations to strengthen their protections procedures. In addition, industry,41 advocacy groups (Shamoo and Irving 1993; Sharav 1994),42 and members of the media (Sloat and Epstein 1996; Whitaker and Kong 1998) have been vocal in calling for better protections.
To date, two federal departments also have moved to strengthen and streamline the oversight system. DHHS elevated its oversight office from NIH to the Office of the Secretary, reorganizing OPRR into OHRP.43 (Chapter 2 discusses this transition in more detail.) OHRP now has more visible authority over the 11 agencies within DHHS. However, OHRP's authority does not extend to other departments and their research programs, although it often attempts to perform a government-wide role without specific authority. FDA has centralized and elevated its coordination of participant protection activities into a new Office of Clinical Science. DHHS instituted, through NIH or FDA, several initiatives to require education and training of investigators, improve monitoring of safety for those participating in clinical trials, address financial conflicts of interest, and seek civil monetary penalties for noncompliance (DHHS 2000). In December 2000, OHRP revised its assurance process and is planning to work with other departments to create a unified system of registering IRBs. VA has strengthened oversight of research conducted at its facilities by developing a system of independent accreditation for all of its IRBs (VA 2000).
Notwithstanding these new initiatives, some of the basic problems with the current system have not been addressed and continue to burden sponsors, institutions, IRBs, and investigators with unnecessary delays and costs. While overall improvement of the system is needed, reform is particularly needed to direct attention to those research studies that pose the greatest risk to participants.
Faced with all of these challenges, the oversight system for protecting research participants is losing credibility among some investigators, IRBs, institutions, and, perhaps most important, the public, causing more frustration and less willingness to commit time and resources to the system. This could result in IRBs providing inadequate reviews; investigators not following the IRB-approved protocol or even submitting the protocol for IRB review; institutions not supporting their IRBs; and assurances being restricted or suspended. These possibilities are real and serious and are made more pressing by the continuing and rapid growth of the research enterprise.
The challenges facing the current system for research participant protection are significant and call for major change. Although there will always be ambiguity and difficult ethical decisions to make in reviewing the risks and potential benefits of research, and competing principles might apply in challenging new situations, the need for the protection of human participants requires a unified and consistent commitment on the part of the federal government, research organizations, sponsors, the research community, and the public.
Unfortunately, the history of human research protection demonstrates that knowledge of how to design research has not always been translated into sound and ethical practices. Although there have been important improvements in research design over the past 50 years that enhance the protection of research participants, it is worth noting that many of these advances were motivated by reactions to various problematic situations. However, given the great progress in all areas of research and the rapid increase in the number of research protocols that involve human participants, the time is right to create a system of oversight that provides appropriate participant protection and encourages ethically sound research.
With this objective in mind, this report offers a number of recommendations aimed at modifying the current oversight system, although this may involve certain trade-offs. For example, enhancing consistency across federal departments raises concerns that oversight mechanisms will be tailored primarily to the clinical or biomedical model and ignore the ethical and research issues in other disciplines,44,45,46,47 and increased oversight intended to provide more complete protections could lead to more unnecessary bureaucratic requirements and delays. It is not NBAC's intention to recommend changes that will add burdens without demonstrable increases in protections for human research participants. It is important to understand that because comprehensive oversight is necessarily complex, with interconnected components, changes in any one part of the structure will affect the entire system. Chapter 7 discusses some of these interconnections and how the proposed system would function as a whole.