IRB. Author manuscript; available in PMC 2013 Jul 1.
Published in final edited form as:
IRB. 2012 Jul-Aug; 34(4): 15–20.
PMCID: PMC3673020
NIHMSID: NIHMS474751
PMID: 22893993

The Silent Majority: Who Speaks at IRB Meetings?

Abstract

Institutional review boards (IRBs) are almost universally considered over-worked and under-staffed, while at the same time requiring substantial commitments of time and resources. Although some surveys report average IRB memberships of 15 persons or more, federal regulations require only five. We present data on IRB meetings at 8 of the top 25 NIH-funded academic medical centers in the U.S., indicating substantial contributions from primary reviewers and chairs during protocol discussions but little from other members. The implications of these data for current IRB functioning are discussed, and an alternative model is proposed.

Keywords: Institutional review boards (IRBs), research oversight, IRB decision-making, meeting analysis

Introduction

As institutional review boards (IRBs) face criticism for being over-burdened and under-resourced, the time commitment of their highly trained and relatively expensive members places considerable demands on institutional resources.1 Some surveys have reported that IRBs average approximately 15 members, with one study reporting a high of 44 members on one IRB.2 In contrast, federal regulations require only five members, including a scientist, a non-scientist, and a member unaffiliated with the institution.

The practice of maintaining large IRBs raises the question of whether membership that extends much beyond the minimal regulatory requirements is necessary. Of course, one reason for an extensive membership is the need for reviewers whose technical expertise addresses specific topics of review. However, IRBs need not have all members present at every meeting and can call on specialized reviewers to assist in the review of specific protocols, potentially reducing the importance of having diverse expertise present at all board meetings. Moreover, a large standing membership requires a substantial commitment of resources. The proportion of salary, overhead, other services, and travel costs associated with board member review time has been estimated conservatively at 22% of total IRB costs, with high-volume IRBs estimated to cost well over $1 million annually.3 Thus, requiring fewer members at meetings could result in considerable cost savings. Although these expenses may be hidden because IRB participation is often included under more general duties, they remain real costs.

Given that IRB panels are expensive yet have the option of calling on outside experts, the question is whether a membership of fifteen or more (as was true of 50% of the panels we studied) is helpful. One way that a large membership might be justified is if member interactions at meetings make a significant contribution to IRB discussions. The empirical questions are how often, how extensively, and from which roles such interactions occur. Anecdotal data suggest that chairs and the reviewers assigned to each protocol may make the greatest contributions to discussions, but to date there have been no systematic data describing how frequently anyone other than an assigned reviewer or the committee chair participates in protocol discussions. If “ancillary” participants rarely speak, then it is difficult to argue that these participants play an important role in IRB deliberations. We use data from a unique observational study of IRBs in major academic medical centers to examine this question.

Methods

Data Collection

We observed and recorded discussions and participation patterns at IRB meetings at 10 academic medical centers. The sites were recruited from among the 25 largest medical centers receiving NIH funds in 2004 – a group chosen because such medical centers do much of the clinical research in the U.S. and because, as larger centers, they should have sufficient resources for proper implementation of research review. At each site, we studied two panels that reviewed new or re-submitted general applications, observing and audio-taping one meeting of each panel. Two sites were excluded from this analysis because of concerns about our accuracy in recording the presence of IRB members who did not speak during the meeting. The median number of new or resubmitted proposals discussed at each meeting was 5 (range 3 to 10). The panels we observed had between 8 and 21 members in attendance, with a median of 16. A detailed description of the methods can be found elsewhere.4

This study received IRB approval at each of the sites involved, as well as at the home institutions of the principal investigator and each co-investigator. All participants at the observed IRB meetings were asked for their written consent to be audiotaped and interviewed; 196 of 226 potential subjects consented (159 IRB members and 37 staff). In keeping with our agreement with the participating IRBs, the contents of statements by IRB members who declined to participate were deleted from the data, although the observation that they spoke was not. Seven protocols were excluded because the primary reviewers of those protocols did not consent to participate. At the eight sites, we observed and recorded discussions of 70 new protocols and 11 previously deferred protocols, for a total of 81. Table 1 presents data on the research focus and status of the 81 protocols.

Table 1

Characteristics of Protocols Reviewed by the IRBs

Characteristic                      n       %
Field of Medicine
  Infectious Diseases               9    11.1
  Oncology                         19    23.5
  Neurology/Psychiatry              8     9.9
  Circulatory System                6     7.4
  Other                            39    48.2
    Total                          81   100.0
Review Status
  New                              70    86.4
  Deferred                         11    13.6
    Total                          81   100.0
N of Sites
  Single-site                      42    51.9
  Multi-site                       39    48.2
    Total                          81   100.0
Therapeutic/Non-Therapeutic
  Therapeutic                      44    54.3
  Non-Therapeutic                  37    45.7
    Total                          81   100.0
Study Type
  Observational                    18    22.2
  Intervention                     62    76.5
  Neither                           1     1.2
    Total                          81   100.0
Study Design
  Phase I                          12    14.8
  Phase II                         22    27.2
  Phase III                        22    27.2
  Feasibility                       1     1.2
  Laboratory                        7     8.6
  Survey/Interview                  1     1.2
  Other                            16    19.8
    Total                          81   100.0

Panel meetings were transcribed and redacted of any information that might identify the site, PI, or protocol. A detailed codebook was developed on the basis of observations from a pilot study, and modified using data collected from the early sites (codebook available on request from the authors). Two investigators coded the meetings separately and then compared results to resolve disagreements. Further disagreements were resolved by review with the principal investigator. Here we report primarily on two measures of participation at IRB meetings: the number of speaking turns by the persons in given roles and the number of words spoken in those speaking turns.i Speaking turns were identified as beginning with any statement that could be assigned to a specific individual and continuing until an interruption from another speaker.
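For readers who want to apply the same turn-and-word counting to their own transcripts, the sketch below is a minimal illustration rather than the authors' actual coding procedure (which is not published). It assumes a transcript represented as an ordered list of (speaker, utterance) pairs and, consistent with the definition above, treats consecutive utterances by the same speaker as a single speaking turn that ends only when another speaker interrupts.

```python
from collections import defaultdict

def count_turns_and_words(transcript):
    """Count speaking turns and words for each speaker in one discussion.

    transcript: ordered list of (speaker, utterance) pairs. A speaking turn
    begins when a speaker starts talking and runs until another speaker
    interrupts, so consecutive utterances by the same speaker are one turn.
    """
    turns = defaultdict(int)
    words = defaultdict(int)
    previous = None
    for speaker, utterance in transcript:
        if speaker != previous:
            turns[speaker] += 1  # a new turn begins at the change of speaker
        words[speaker] += len(utterance.split())
        previous = speaker
    return dict(turns), dict(words)

# Hypothetical fragment of a protocol discussion
transcript = [
    ("chair", "Next protocol, please."),
    ("primary reviewer", "This is a phase II oncology trial of ..."),
    ("primary reviewer", "My main concern is the consent form."),
    ("member 3", "I agree about the consent form."),
    ("chair", "Any other comments? Then let's vote."),
]
print(count_turns_and_words(transcript))
```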

To assess the level of participation of panel members according to their role in reviewing a particular protocol, we divided participants into 7 roles: chair/non-reviewer, chair/reviewer, staff/non-reviewer, staff/reviewer, primary reviewer, secondary or tertiary reviewer, and all other non-reviewers. An individual’s role often changed from protocol to protocol. For each role we counted the number of times the member(s) in that role spoke during each protocol discussion (speaking turns), and the number of words in their statements. We averaged both turns and number of words by the number of people in each role for each protocol review.
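One plausible way to implement the per-role averaging described above is sketched below. The data layout (a list of per-protocol dictionaries keyed by role, each holding the number of people in that role and their total turns and words) is our own assumption; the paper does not specify how the counts were stored or averaged across protocols.

```python
from collections import defaultdict

def per_role_averages(protocol_reviews):
    """Mean speaking turns and words per participant per protocol, by role.

    protocol_reviews: list of dicts, one per protocol discussion, mapping
    role -> {"participants": n, "turns": total_turns, "words": total_words}.
    For each protocol, turns and words are divided by the number of people
    in that role; the per-protocol quotients are then averaged.
    """
    sums = defaultdict(lambda: {"turns": 0.0, "words": 0.0, "protocols": 0})
    for review in protocol_reviews:
        for role, stats in review.items():
            if stats["participants"] == 0:
                continue  # role not filled during this protocol discussion
            sums[role]["turns"] += stats["turns"] / stats["participants"]
            sums[role]["words"] += stats["words"] / stats["participants"]
            sums[role]["protocols"] += 1
    return {
        role: {
            "mean_turns_per_participant": s["turns"] / s["protocols"],
            "mean_words_per_participant": s["words"] / s["protocols"],
        }
        for role, s in sums.items()
    }

# Hypothetical input covering two protocol discussions
reviews = [
    {"primary reviewer": {"participants": 1, "turns": 20, "words": 850},
     "other non-reviewer": {"participants": 10, "turns": 18, "words": 240}},
    {"primary reviewer": {"participants": 1, "turns": 19, "words": 790},
     "other non-reviewer": {"participants": 12, "turns": 22, "words": 260}},
]
print(per_role_averages(reviews))
```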

Results

The degree of participation by IRB panel members is shown in Table 2. Panel chairs, as both reviewers and non-reviewers, played very large roles, measured both by speaking turns and by word counts. Primary reviewers and chair/reviewers spoke the most words. However, what is striking in these results is the relatively minor degree of participation of other members when they were not reviewers. They averaged only 1.8 speaking turns and fewer than 25 words per protocol review. This is further illustrated by the frequency of recommendations made by the primary reviewer and secondary or tertiary reviewers compared to the non-reviewers (see Table 3). Primary reviewers averaged 3.9 speaking turns related to recommendations per protocol, compared to non-reviewing members who averaged 0.2 per protocol. The overall pattern of discussion was characterized by lengthy introductions by reviewers followed by very short exchanges with other members (Table 4).

Table 2

Speaking Turns and Words per Person per Protocol by Role*

Role                             Participants     Speaking    Mean Turns per    Participants     Mean Words per
                                 (incl. non-      Turns       Participant       (consenters      Participant
                                 consenters)                  per Protocol      only)            per Protocol
Chair as reviewer                     16             345          21.6               16              893.9
Chair as non-reviewer                 65           1,696          26.1               65              421.3
Staff as reviewer                      3              36          12.0                3              383.3
Staff as non-reviewer                250             614           2.5              213               26.6
Primary reviewer                      69           1,350          19.6               69              822.8
Secondary or tertiary reviewer        53             634          12.0               47              269.2
All other non-reviewers              682           1,216           1.8              647               24.3
*Because we agreed not to transcribe statements by non-consenting IRB members, data for non-consenting members were available only for their number of speaking turns, not for the number of words they spoke. Column five describes participants excluding those who did not consent.

Table 3

Recommendations per Person per Protocol by Role*

Role                           Recommendation Turns    Recommendation Words
Primary reviewer                       3.9                    405.8
Secondary/tertiary reviewer            1.9                     95.4
Other non-reviewer                     0.2                      3.8
*Table 3 does not include chairs, staff, or those who did not consent to have their comments transcribed.

Table 4

Introductory and Subsequent Speaking Turns and Words per Person per Protocol by Role*

Role                           Intro Turns    Intro Words    Other Turns    Other Words
Chair/reviewer                     1.6            485.7          20.0           408.2
Chair/non-reviewer                 0.0              0.0          26.1           421.3
Staff/reviewer                     1.0            245.3          11.0           138.0
Staff/non-reviewer                 0.0              0.0           2.6            26.6
Primary reviewer                   1.7            490.4          17.9           332.4
Secondary/tertiary reviewer        1.0             85.8          10.7           183.5
All other non-reviewers            0.0              0.0           1.8            24.3
*Table 4 does not include members who did not consent to having their comments transcribed (30 of 226).

As shown in Figure 1, the median number of IRB members other than the chair and designated reviewers who spoke at all during a protocol review was 2. For 14.8% of protocol reviews, there was no speaker other than a reviewer and the chair. In 62.9% of protocol discussions, 50% or more of the members remained completely silent. Although 2 or more non-reviewers spoke during 66.7% of protocol reviews, their comments were typically brief and emphasized points made by previous speakers. At any given meeting, between 6.7 and 44.4% of all members said nothing, with 23.9% of members at all the meetings we observed remaining silent throughout the entire meeting. Moreover, the larger the meeting, the greater the percentage of members who did not participate in the discussion. The correlation between the number of people attending the meeting and the percentage who did not speak is .73 (Pearson’s r).

Figure 1. Number of members other than chairs, reviewers, and staff who spoke during protocol reviews.
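The attendance-versus-silence correlation reported above can be reproduced from per-meeting counts with a few lines of code. The sketch below computes Pearson's r from hypothetical attendance and silence figures; it is not the study's actual data, only an illustration of the calculation.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical per-meeting data: members attending vs. percent who never spoke
attendance = [9, 12, 13, 15, 16, 17, 18, 19, 20, 21]
percent_silent = [10, 7, 18, 12, 25, 20, 35, 28, 44, 38]
print(round(pearson_r(attendance, percent_silent), 2))
```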

Discussion

Most protocol reviews at the IRB meetings we observed consisted of long descriptions of the protocol by designated reviewers, especially the primary reviewer, followed by a few short exchanges with other members and a significant number of contributions from the chair. From the initial introduction of protocols to the final vote, reviewers joined chairs in leading the discussions. The other IRB members formed a “silent majority,” who often left the substantive work of research review to their panel chairs and designated reviewers. Many IRB members maintained this pattern across the entire meeting.

Although we began with no preconceptions about the implications of the degree of IRB members’ participation on cost and efficiency, our observations indicate that many IRBs may simply be larger than they need to be. Most protocol reviews involve only a small number of members, and non-reviewing members usually do not play an active role in IRB deliberations. Meetings where many members say nothing or contribute little are consequently inefficient and expensive. Charging review fees or incorporating IRB costs into institutional overhead may offset some part of these costs, but practices that require large numbers of members to attend every meeting appear to be a waste of institutional resources. Although our observations do not point to a “correct” number of participants, they do suggest the need to rethink the size of IRBs.

Our findings on the degree of participation and its relationship to the size of the IRB are consistent with classic reports from the business and sociological literatures that larger groups hinder individual participation and lead to dominance by a small number of speakers.5 Large groups appear to be particularly prone to “groupthink,” with most members failing to analyze the issues for themselves in the presence of leaders who state their opinions at the beginning of meetings and do not encourage group participation. The fact that a smaller percentage of members participated as the number of members at the meeting grew larger suggests the kind of passivity consistent with the groupthink model. The consequent waste of individual time and resources challenges the current approach to IRB review.

Limitations

It is possible that the IRBs observed in this project altered their behavior under the scrutiny of investigators. Members’ concerns about having their remarks recorded may have inhibited their participation. However, observation of the meeting by outsiders seems more likely to have led members to greater participation and engagement. In any case, the observation that small numbers of members participated in each review held across all of our sites.

Since this was a cross-sectional study of IRB behavior, it is possible, perhaps even probable, that many of the members who did not serve as designated reviewers and were thus largely or totally silent would have been more active at other meetings at which they served as assigned reviewers. Thus, this observation should not lead to the conclusion that these members make little or no contribution to the IRB process as a whole. But the question remains whether a large number of members must be present at every meeting.

Another limitation is that this paper does not analyze the content of non-reviewer member statements. Although it is our impression that non-reviewers, other than chairs, rarely made significant contributions to the protocol review decisions, it is possible that silent or nearly silent members influence a discussion merely by their presence or contribute significantly with concise statements. They may make contributions in pre-review, although this is more the purview of staff than of other IRB members. Overall, the numerous silent members and average contributions of fewer than 2 speaking turns per protocol by non-reviewing members suggest that it may be possible for smaller committees to accomplish the same tasks with no reduction in the quality of review.

Conclusions

If large numbers of members contribute to the cost of IRB review without substantially altering the outcome, it may be necessary to refine our concept of the appropriate size of review panels. Consistent with the proposed changes to the Common Rule that seek efficiency and reduction of IRB workload, proposals that address the size of IRB meetings are both timely and cost-conscious. Quality reviews may be achieved by identifying outside specialists to review specific protocols and convening much smaller meetings. If the specialists are not local, reviews could be conducted by telephone or submitted by e-mail. The use of outside reviewers, selected because their expertise matches the requirements of a specific protocol, would allow IRB chairs to function in part like journal editors, identifying the appropriate outside experts to contribute to smaller review panels. Indeed, a journal editorial board, which bases its decisions on input from expert reviewers but includes few members who actually participate in publication decisions, may be a more appropriate model for IRB review.

Acknowledgments

This project was conducted under a grant from the National Institutes of Health (National Cancer Institute R01 CA107295). The views expressed in this article do not necessarily reflect those of the NIH. We are grateful to the many IRB members and staff who contributed their cooperation and effort to this project.

Footnotes

i. The number of words was computed only for individuals who consented to participate. Because there were few non-consenters, with some speaking and others not, this should not distort the results.

References

1. Emanuel EJ, Wood A, Fleischman A, Bowen A, Getz KA, Grady C, Levine C, Hammerschmidt DE, Faden R, Eckenwiler L, Muse CT, Sugarman J. Oversight of human participants research: Identifying problems to evaluate reform proposals. Annals of Internal Medicine. 2004;141:282–291.
   Wood A, Grady C, Emanuel EJ. Regional ethics organizations for protection of human research participants. Nature Medicine. 2004;10(12):1283–1288.
   Shamoo AE, Schwartz J. Universal and uniform protections of human subjects in research. American Journal of Bioethics. 2008;8(11):3–5.
   Finch SA, Barkin SL, Wasserman RC, Dhepyasuwan N, Slora EJ, Sage RD. Effects of local IRB review on participation in national practice-based research network studies. Archives of Pediatric and Adolescent Medicine. 2009;163(12):1130–1134.
   Infectious Diseases Society of America. Grinding to a halt: the effects of the increasing regulatory burden on research and quality improvement efforts. Clinical Infectious Diseases. 2009;49(3):328–335.
   Silverstein M, Banks M, Fish S, Bauchner H. Variability in institutional approaches to ethics review of community-based research conducted in collaboration with unaffiliated organizations. Journal of Empirical Research in Human Research Ethics. 2008;3(2):69–76.
   Wagner TH, Bhandari A, Chadwick GL, Nelson DK. The cost of operating institutional review boards (IRBs). Academic Medicine. 2003;78(6):638–644.
2. DeVries RG, Forsberg CP. What do IRBs look like? What kind of support do they receive? Accountability in Research. 2002;9:199–216.
   Catania JA, Lo B, Wolf LE, Dolcini MM, Pollack LM, Barker JC, Wertlieb S, Henne J. Survey of US human research protection organizations: workload and membership. Journal of Empirical Research in Human Research Ethics. 2008;3(4):57–69.
   Catania JA, Lo B, Wolf LE, Dolcini MM, Pollack LM, Barker JC, Wertlieb S, Henne J. Survey of U.S. boards that review mental health-related research. Journal of Empirical Research in Human Research Ethics. 2008;3(4):71–79.
   Bell J, Whiton J, Connelly S. Evaluation of NIH Implementation of Section 491 of the Public Service Act: Mandating a Program of Protection for Research Subjects. Arlington, VA: James Bell Associates; 1998.
3. Byrne MM, Speckman J, Getz K, Sugarman J. Variability in the costs of IRB oversight. Academic Medicine. 2006 Aug;81(8):708–712.
   Sugarman J, Getz K, Speckman J, Byrne MM, Gerson S, Emanuel EJ; Consortium to Evaluate Clinical Research Ethics. The cost of IRBs in academic medical centers. New England Journal of Medicine. 2005;352:1825–1827.
4. Lidz CW, Appelbaum PS, Arnold R, Candilis PJ, Gardner W, Garverich S, Simon L. IRBs: How closely do they follow the Common Rule? (submitted)
5. Romano NC Jr, Nunamaker JF Jr. Meeting analysis: Findings from research and practice. Proceedings of the 34th Hawaii International Conference on System Sciences; January 3–6, 2001; Maui, HI.
   Moorhead G, Neck CP, West MS. The tendency toward defective decision-making with self-managing teams: The relevance of groupthink for the 21st century. Organizational Behavior and Human Decision Processes. 1998;73(2–3):327–351.
   Mosvick RK, Nelson RB. We've Got to Start Meeting Like This: A Guide to Successful Meeting Management. Glenview, IL: Scott, Foresman; 1987.
   Slater PE. Contrasting correlates of group size. Sociometry. 1958;21(2):129–139.
