Better Reporting Of Harms In Randomized Trials: An Extension Of The CONSORT Statement
Published 2004 · Medicine
Reporting harms may cause more trouble and discredit than the fame and glory associated with successful reporting of benefits (1). The CONSORT (Consolidated Standards of Reporting Trials) statement, a checklist (Table 1) and flow diagram first published in 1996 and revised 5 years later (2, 3), is an effort to standardize, and thereby improve, published reports of randomized, controlled trials (RCTs). One of the additions to the 2001 revision was an item about reporting adverse events. This single item did not do full justice to the importance of harms-related issues.

The CONSORT Group met in September 2001 to discuss how to correct this deficiency. We aimed to provide evidence-based guidance on the reporting of harms in RCTs. First, we searched MEDLINE, EMBASE, Web of Science, and the Cochrane Library using a wide array of terms related to harms and identified pertinent evidence. We also communicated with experts and reviewed the bibliographies of identified articles to find additional studies. At a meeting in Montebello, Quebec, Canada, in May 2003, CONSORT Group members, including several journal editors and additional experts in related fields, held a structured discussion of recommendations about the reporting of harms-related issues in RCTs. The discussions led to a written document that we circulated among the team members for comment. The present manuscript describes our recommendations on the appropriate reporting of harms in RCTs.

Table 1. Original CONSORT Checklist

The terminology of harms-related issues in RCTs is confusing and often misleading or misused (see Glossary) (4, 5). Safety is a reassuring term that may obscure the real and potentially major harms that drugs and other interventions may cause. We encourage authors to use the term harms instead of safety. In addition to misused terminology, the reporting of harms in RCTs has received less attention than the reporting of efficacy and effectiveness and is often inadequate (6-14).
In short, both scientific evidence and ethical necessity call for action to improve the quality of reporting of harms in RCTs (15, 16). Here, we present a set of recommendations and accompanying explanations for the proper reporting of harms in RCTs. These recommendations should complement the existing CONSORT statement (Table 2). Examples are presented on the Annals and CONSORT (www.consort-statement.org) Web sites.

Table 2. Checklist of Items To Include When Reporting Harms in Randomized, Controlled Trials

Recommendations

Title and Abstract

Recommendation 1. If the study collected data on harms and benefits, the title or abstract should so state.

The title should mention harms if the study of harms was a key trial objective. Many phase I and phase II trials, some phase II/III trials, and most phase IV trials (17, 18) target harms as primary outcomes. Yet the title and abstract seldom contain the word harm. Among 375 143 entries in the Cochrane Central Register of Controlled Trials (Cochrane Library, issue 3, 2003), searching titles with the search terms harm or harms yielded 337 references (compared with 55 374 for efficacy and 23 415 for safety). Of the 337, excluding several irrelevant articles on self-harm or harm reduction, only 3 trial reports and 2 abstracts contained the word harm in their titles. Authors should present information on harms in the abstract. If no important harms occurred, authors should so state. Explicit reference to the reporting of adverse events in the title or abstract is also important for appropriate database indexing and information retrieval (19).

Introduction

Background

Recommendation 2. If the trial addresses both harms and benefits, the introduction should so state.

The introduction states the scientific background and rationale of an RCT. This requires a balanced presentation in which the possible benefits of the intervention under investigation are outlined along with the possible harms associated with the treatment.
Randomized, controlled trials that focus primarily on harms should clearly state this interest when describing the study objectives in the Introduction and when defining these objectives in the Methods.

Methods

Outcomes

Recommendation 3. List addressed adverse events with definitions for each (with attention, when relevant, to grading, expected vs. unexpected events, reference to standardized and validated definitions, and description of new definitions).

The Methods section should succinctly define the recorded adverse events (clinical and laboratory). Authors should clarify whether the reported adverse events encompass all the recorded adverse events or a selected sample, and they should explain how, why, and by whom adverse events were selected for reporting. In trials that do not mention harms-related data, the Methods section should briefly explain the reason for the omission (for example, the design did not include the collection of any information on harms).

Authors should also be explicit about separately reporting anticipated and unexpected adverse events. Expectation may influence the incidence of reported or ascertained adverse events. Making participants aware in the consent form of the possibility of a specific adverse event (priming) may increase the reporting rate of that event (20). Another example of priming is the finding that rates of withdrawals due to adverse events and rates of specific adverse events were significantly higher in trials comparing aspirin, diclofenac, or indomethacin with active comparator drugs than in placebo-controlled trials (21). Presumably, participants were more eager to come forward and report an adverse event, or to withdraw from treatment, when they knew they could not be receiving an inactive placebo.

Authors should report whether they used standardized and validated measurement instruments for adverse events. Several medical fields have developed standardized scales (22-32). Use of nonvalidated scales is common.
The source document for well-established definitions and scales should be referenced. New definitions for adverse events should be explicit and clear, and authors should describe how they developed and validated new scales. For interventions that target healthy individuals (for example, many preventive interventions), any harm, however minor, may be important to capture and report, because the balance of harms and benefits may easily lean toward harms in a low-risk population. For other populations, and for interventions that improve major outcomes (for example, survival), severe and life-threatening adverse events may be the only ones that are important in the balance of benefits and harms.

Recommendation 4. Clarify how harms-related information was collected (mode of data collection, timing, attribution methods, intensity of ascertainment, and harms-related monitoring and stopping rules, if pertinent).

It is important to describe the questionnaires, interviews, and tests used to collect information on harms, as well as their timing during follow-up. Passive surveillance of harms leads to fewer recorded adverse events than active surveillance (4). Open-ended questions may yield different information, both quantitatively and qualitatively, than structured questionnaires (33). Studies of nonsteroidal anti-inflammatory drugs (NSAIDs) exemplify how data collection methods can affect the detection and reporting of harms. When selective NSAIDs with fewer gastrointestinal adverse events became available, trials reported more than 10 times as many ulcers when comparing these drugs with older NSAIDs as when older NSAIDs were compared with placebo. In the newer trials, more ulcers were detected because participants had regular endoscopy and the case definition of ulcers was more sensitive (34). Authors should specify the time frame of surveillance for adverse events.
Some investigators stop recording adverse events at the end of the intervention period or a certain number of days afterward (for example, 30 days after discontinuation of drug therapy) and thus miss events with long latency (35). Surgical trials often capture only the adverse events that occur intraoperatively, yet several important surgical complications are likely to occur later. Finally, in crossover trials, delayed events might occur while the patient is taking a subsequently assigned treatment.

Attribution is the process of deciding whether an adverse event is related to the intervention. Whenever authors filter events through an attribution process, they should state who makes the attribution (investigators, participants, sponsors, or combinations), whether the process is blinded to assigned treatment, and what definitions of adverse events they use (4).

Discontinuations and withdrawals due to adverse events are especially important because they reflect the ultimate decision of the participant, the physician, or both to discontinue treatment. Although treatment may occasionally be discontinued for mild or moderate adverse events, attributing discontinuation to a specific reason (toxicity, lack of efficacy, other reasons, or combinations of reasons) may be difficult. For example, in psychopharmacology, dropouts may reflect treatment ineffectiveness as much as toxicity-related intolerance (36). Trial reports should specify who gave the reasons for discontinuation (participants or physicians) and whether attribution was blinded to the assigned treatment. For example, even in blinded trials, participants and their clinicians are often unblinded before they decide whether to discontinue the intervention. It is important to report on participants who are nonadherent or lost to follow-up because their actions may reflect their inability to tolerate the intervention. Moreover, authors should specify how they handled withdrawals in the analyses of the data.
Randomized, controlled trials should report any plan for monitoring for harms and any rules for stopping the trial because of harms (37). They should clarify whether stopping guidelines examine benefits and harms separately.