CONSORT 2010 Statement: Updated Guidelines for Reporting Parallel Group Randomized Trials

The CONSORT (Consolidated Standards of Reporting Trials) statement is used worldwide to improve the reporting of randomized, controlled trials. Schulz and colleagues describe the latest version, CONSORT 2010, which updates the reporting guideline based on new methodological evidence and accumulating experience.

Randomized, controlled trials, when appropriately designed, conducted, and reported, represent the gold standard in evaluating health care interventions. However, randomized trials can yield biased results if they lack methodological rigor (1). To assess a trial accurately, readers of a published report need complete, clear, and transparent information on its methodology and findings. Unfortunately, attempted assessments frequently fail because authors of many trial reports neglect to provide lucid and complete descriptions of that critical information (2-4).
That lack of adequate reporting fueled the development of the original CONSORT (Consolidated Standards of Reporting Trials) statement in 1996 (5) and its revision 5 years later (6-8). While those statements improved the reporting quality for some randomized, controlled trials (9, 10), many trial reports remain inadequate (2). Furthermore, new methodological evidence and additional experience have accumulated since the last revision in 2001. Consequently, we organized a CONSORT Group meeting to update the 2001 statement (6-8). We introduce here the result of that process, CONSORT 2010.

INTENT OF CONSORT 2010
The CONSORT 2010 Statement is this paper, including the 25-item checklist (Table) and the flow diagram (Figure). It provides guidance for reporting all randomized, controlled trials but focuses on the most common design type: individually randomized, 2-group, parallel trials. Other trial designs, such as cluster randomized trials and noninferiority trials, require varying amounts of additional information. CONSORT extensions for these designs (11, 12), and other CONSORT products, can be found through the CONSORT Web site (www.consort-statement.org). Along with the CONSORT statement, we have updated the explanation and elaboration article (13), which explains the inclusion of each checklist item, provides methodological background, and gives published examples of transparent reporting.
Diligent adherence by authors to the checklist items facilitates clarity, completeness, and transparency of reporting. Explicit descriptions, not ambiguity or omission, best serve the interests of all readers. Note that the CONSORT 2010 Statement does not include recommendations for designing, conducting, and analyzing trials. It solely addresses the reporting of what was done and what was found.
Nevertheless, CONSORT does indirectly affect design and conduct. Transparent reporting reveals deficiencies in research if they exist. Thus, investigators who conduct inadequate trials, but who must transparently report, should not be able to pass through the publication process without revelation of their trials' inadequacies. That emerging reality should provide impetus to improved trial design and conduct in the future, a secondary indirect goal of our work. Moreover, CONSORT can help researchers in designing their trial.

BACKGROUND TO CONSORT
Efforts to improve the reporting of randomized, controlled trials accelerated in the mid-1990s, spurred partly by methodological research. Researchers had shown for many years that authors reported such trials poorly, and empirical evidence began to accumulate that some poorly conducted or poorly reported aspects of trials were associated with bias (14). Two initiatives aimed at developing reporting guidelines culminated in one of us (D.M.) and Drummond Rennie organizing the first CONSORT statement in 1996 (5). Further methodological research on similar topics reinforced earlier findings (15) and fed into the revision of 2001 (6-8). Subsequently, the expanding body of methodological research informed the refinement of CONSORT 2010. More than 700 studies comprise the CONSORT database (located on the CONSORT Web site), which provides the empirical evidence to underpin the CONSORT initiative. Indeed, CONSORT Group members continually monitor the literature. Information gleaned from these efforts provides an evidence base on which to update the CONSORT statement. We add, drop, or modify items based on that evidence and the recommendations of the CONSORT Group, an international and eclectic group of clinical trialists, statisticians, epidemiologists, and biomedical editors. The CONSORT Executive (K.F.S., D.G.A., D.M.) strives for a balance of established and emerging researchers. The membership of the group is dynamic. As our work expands in response to emerging projects and needed expertise, we invite new members to contribute. As such, CONSORT continually assimilates new ideas and perspectives. That process informs the continually evolving CONSORT statement.

Table. CONSORT 2010 Checklist (excerpt: items 2a-15)

Introduction
  Background and objectives
    2a  Scientific background and explanation of rationale
    2b  Specific objectives or hypotheses

Methods
  Trial design
    3a  Description of trial design (such as parallel, factorial), including allocation ratio
    3b  Important changes to methods after trial commencement (such as eligibility criteria), with reasons
  Participants
    4a  Eligibility criteria for participants
    4b  Settings and locations where the data were collected
  Interventions
    5   The interventions for each group with sufficient details to allow replication, including how and when they were actually administered
  Outcomes
    6a  Completely defined prespecified primary and secondary outcome measures, including how and when they were assessed
    6b  Any changes to trial outcomes after the trial commenced, with reasons
  Sample size
    7a  How sample size was determined
    7b  When applicable, explanation of any interim analyses and stopping guidelines
  Randomization: sequence generation
    8a  Method used to generate the random allocation sequence
    8b  Type of randomization; details of any restriction (such as blocking and block size)
  Randomization: allocation concealment mechanism
    9   Mechanism used to implement the random allocation sequence (such as sequentially numbered containers), describing any steps taken to conceal the sequence until interventions were assigned
  Randomization: implementation
    10  Who generated the random allocation sequence, who enrolled participants, and who assigned participants to interventions
  Blinding
    11a If done, who was blinded after assignment to interventions (for example, participants, care providers, those assessing outcomes) and how
    11b If relevant, description of the similarity of interventions
  Statistical methods
    12a Statistical methods used to compare groups for primary and secondary outcomes
    12b Methods for additional analyses, such as subgroup analyses and adjusted analyses

Results
  Participant flow (a diagram is strongly recommended)
    13a For each group, the numbers of participants who were randomly assigned, received intended treatment, and were analyzed for the primary outcome
    13b For each group, losses and exclusions after randomization, together with reasons
  Recruitment
    14a Dates defining the periods of recruitment and follow-up
    14b Why the trial ended or was stopped
  Baseline data
    15  A table showing baseline demographic and clinical characteristics for each group
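Checklist items 8a and 8b ask authors to describe how the random allocation sequence was generated and any restriction, such as blocking and block size. Purely as an illustrative sketch (it is not part of CONSORT, which addresses reporting rather than trial conduct), a permuted-block allocation sequence of the kind item 8b refers to could be generated as follows; the function name and parameters are hypothetical:

```python
import random

def blocked_allocation(n_participants, block_size=4, arms=("A", "B"), seed=2010):
    """Generate a randomly permuted blocked allocation sequence.

    Within each block, each arm appears equally often, so group sizes
    stay balanced throughout recruitment -- the kind of restriction
    ("blocking and block size") that checklist item 8b asks authors
    to report.
    """
    assert block_size % len(arms) == 0, "block size must be a multiple of the number of arms"
    rng = random.Random(seed)  # fixed seed only so this sketch is reproducible
    sequence = []
    while len(sequence) < n_participants:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)  # permute arm labels within the block
        sequence.extend(block)
    return sequence[:n_participants]

seq = blocked_allocation(12)
print(seq)
```

A report following items 8a and 8b would then state the method of sequence generation (computer-generated random numbers), the type of randomization (permuted blocks), and the block size.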
Over time, CONSORT has garnered much support. More than 400 journals, published around the world and in many languages, have explicitly supported the CONSORT statement. Many other health care journals support it without our knowledge. Moreover, thousands more have implicitly supported it with the endorsement of the CONSORT statement by the International Committee of Medical Journal Editors (www.icmje.org). Other prominent editorial groups, the Council of Science Editors and the World Association of Medical Editors, officially support CONSORT. That support seems warranted: When used by authors and journals, CONSORT seems to improve reporting (9).

DEVELOPMENT OF CONSORT 2010
Thirty-one members of the CONSORT 2010 Group met in Montebello, Quebec, Canada, in January 2007 to update the 2001 CONSORT statement. In addition to the accumulating evidence relating to existing checklist items, several new issues had come to prominence since 2001. Some participants were given primary responsibility for aggregating and synthesizing the relevant evidence on a particular checklist item of interest. Based on that evidence, the group deliberated the value of each item. As in prior CONSORT versions, we kept only those items deemed absolutely fundamental to reporting a randomized, controlled trial. Moreover, an item may be fundamental to a trial but not included, such as approval by an institutional ethical review board, because funding bodies strictly enforce ethical review and medical journals usually address reporting ethical review in their instructions for authors. Other items may seem desirable, such as reporting on whether on-site monitoring was done, but a lack of empirical evidence or any consensus on their value cautions against inclusion at this point. The CONSORT 2010 Statement thus addresses the minimum criteria, although that should not deter authors from including other information if they consider it important.
After the meeting, the CONSORT Executive convened teleconferences and meetings to revise the checklist. After 7 major iterations, a revised checklist was distributed to the larger group for feedback. With that feedback, the Executive met twice in person to consider all the comments and to produce a penultimate version. That served as the basis for writing the first draft of this paper, which was then distributed to the group for feedback. After consideration of their comments, the Executive finalized the statement.
The CONSORT Executive then drafted an updated explanation and elaboration manuscript, with assistance from other members of the larger group. The substance of the 2007 CONSORT meeting provided the material for the update. The updated explanation and elaboration manuscript was distributed to the entire group for additions, deletions, and changes. That final iterative process converged to the CONSORT 2010 Explanation and Elaboration (13).

CHANGES IN CONSORT 2010
The revision process resulted in evolutionary, not revolutionary, changes to the checklist (Table), and the flow diagram was not modified except for 1 word (Figure). Moreover, because other reporting guidelines that augment the checklist refer to item numbers, we kept the existing items under their previous item numbers, except for some renumbering of items 2 to 5. We added new items either as a subitem under an existing item, as an entirely new item at the end of the checklist, or (in the case of item 3) as an item interjected into the renumbered segment. We have summarized the noteworthy general changes in Box 1 and the specific changes in Box 2. The CONSORT Web site contains a side-by-side comparison of the 2001 and 2010 versions.

IMPLICATIONS AND LIMITATIONS
We developed CONSORT 2010 to assist authors in writing reports of randomized, controlled trials, editors and peer reviewers in reviewing manuscripts for publication, and readers in critically appraising published articles. The CONSORT 2010 Explanation and Elaboration provides elucidation and context to the checklist items. We strongly recommend using the explanation and elaboration in conjunction with the checklist to foster complete, clear, and transparent reporting and aid appraisal of published trial reports.
CONSORT 2010 focuses predominantly on the 2-group, parallel randomized, controlled trial, which accounts for over half of trials in the literature (2). Most of the items from the CONSORT 2010 Statement, however, pertain to all types of randomized trials. Nevertheless, some types of trials or trial situations dictate the need for additional information in the trial report. When in doubt, authors, editors, and readers should consult the CONSORT Web site for any CONSORT extensions, expansions (amplifications), implementations, or other guidance that may be relevant.
The evidence-based approach we have used for CONSORT also served as a model for the development of other reporting guidelines, such as those for reporting systematic reviews and meta-analyses of studies evaluating interventions (16), diagnostic studies (17), and observational studies (18). The explicit goal of all these initiatives is to improve reporting. The Enhancing the Quality and Transparency of Health Research (EQUATOR) Network will facilitate the development of reporting guidelines and help disseminate them; www.equator-network.org provides information on all reporting guidelines in health research.
With CONSORT 2010, we again intentionally declined to produce a rigid structure for the reporting of randomized trials. Indeed, Standards of Reporting Trials (SORT) (19) tried a rigid format, and it failed in a pilot run with an editor and authors (20). Consequently, the format of articles should abide by journal style; editorial directions; the traditions of the research field addressed; and, where possible, author preferences. We do not wish to standardize the structure of reporting. Authors should simply address checklist items somewhere in the article, with ample detail and lucidity. That stated, we think that manuscripts benefit from frequent subheadings within the major sections, especially the methods and results sections.
CONSORT urges completeness, clarity, and transparency of reporting, which simply reflects the actual trial design and conduct. However, as a potential drawback, a reporting guideline might encourage some authors to report fictitiously the information suggested by the guidance rather than what was actually done. Authors, peer reviewers, and editors should vigilantly guard against that potential drawback and refer, for example, to trial protocols, to information on trial registers, and to regulatory agency Web sites. Moreover, the CONSORT 2010 Statement does not include recommendations for designing and conducting randomized trials. The items should elicit clear pronouncements of how and what the authors did, but do not contain any judgments on how and what the authors should have done. Thus, CONSORT 2010 is not intended as an instrument to evaluate the quality of a trial. Nor is it appropriate to use the checklist to construct a "quality score."

Box 1. Noteworthy General Changes in CONSORT 2010

We improved consistency of style across the items by removing the imperative verbs that were in the 2001 version.

We enhanced specificity of appraisal by breaking some items into subitems. Many journals expect authors to complete a CONSORT checklist indicating where in the manuscript each item has been addressed. Experience with the checklist revealed pragmatic difficulties when an item comprised multiple elements. For example, item 4 addresses eligibility of participants and the settings and locations of data collection. With the 2001 version, an author could provide a page number for that item on the checklist but might have reported only eligibility in the paper, for example, and not the settings and locations. CONSORT 2010 removes that ambiguity by requiring authors to provide page numbers in the checklist for both eligibility and settings.
Nevertheless, we suggest that researchers begin trials with their end publication in mind. Poor reporting allows authors, intentionally or inadvertently, to escape scrutiny of any weak aspects of their trials. However, with wide adoption of CONSORT by journals and editorial groups, most authors should have to report transparently all important aspects of their trial. The ensuing scrutiny rewards well-conducted trials and penalizes poorly conducted trials. Thus, investigators should understand the CONSORT 2010 reporting guidelines before starting a trial, as a further incentive to design and conduct their trials according to rigorous standards.

Box 2. Noteworthy Specific Changes in CONSORT 2010

Item 2b (introduction): We added a new subitem (formerly item 5 in CONSORT 2001) on "Specific objectives or hypotheses."

Item 3a (trial design): We added a new item, including this subitem, to clarify the basic trial design (such as parallel group, crossover, cluster) and the allocation ratio.

Item 3b (trial design): We added a new subitem that addresses any important changes to methods after trial commencement, with a discussion of reasons.

Item 5 (interventions): Formerly item 4 in CONSORT 2001. We encouraged greater specificity by stating that descriptions of interventions should include "sufficient details to allow replication" (3).

Item 6 (outcomes): We added a subitem on identifying any changes to the primary and secondary outcome (end point) measures after the trial started. This followed from empirical evidence that authors frequently provide analyses of outcomes in their published papers that were not the prespecified primary and secondary outcomes in their protocols, while ignoring their prespecified outcomes (that is, selective outcome reporting) (4, 22). We eliminated text on any methods used to enhance the quality of measurements.
Item 9 (allocation concealment mechanism): We reworded this item to include "mechanism" in both the topic and the descriptor, to reinforce that authors should report the actual steps taken to ensure allocation concealment rather than simply offer imprecise, perhaps banal, assurances of concealment.

Item 11 (blinding): We added the specification of how blinding was done and, if relevant, a description of the similarity of interventions and procedures. We also eliminated text on "how the success of blinding (masking) was assessed" because of a lack of empirical evidence supporting the practice, as well as theoretical concerns about the validity of any such assessment (23, 24).

Item 12a (statistical methods): We added that statistical methods should also be provided for the analysis of secondary outcomes.

Subitem 14b (recruitment): Based on empirical research, we added a subitem on "Why the trial ended or was stopped" (25).

Item 15 (baseline data): We specified "A table" to clarify that baseline demographic and clinical characteristics of each group are most clearly expressed in a table.

Item 16 (numbers analyzed): We replaced mention of "intention-to-treat" analysis, a widely misused term, with a more explicit request for information about retaining participants in their originally assigned groups (26).

Item 19 (harms): We included a reference to the CONSORT paper on harms (28).

Item 20 (limitations): We changed the topic from "Interpretation" and supplanted the prior text with a sentence focusing on the reporting of sources of potential bias and imprecision.

Item 22 (interpretation): We changed the topic from "Overall evidence." We understand that authors should be allowed leeway for interpretation under this nebulous heading. However, the CONSORT Group expressed concerns that conclusions in papers frequently misrepresented the actual analytical results and that harms were ignored or marginalized. Therefore, we changed the checklist item to include the concepts of results matching interpretations and of benefits being balanced with harms.

Item 23 (registration): We added a new item on trial registration. Empirical evidence supports the need for trial registration, and recent requirements by journal editors have fostered compliance (29).

Item 24 (protocol): We added a new item on the availability of the trial protocol. Empirical evidence suggests that authors often ignore, in the conduct and reporting of their trial, what they stated in the protocol (4, 22). Hence, availability of the protocol can encourage adherence to the protocol before publication and facilitate assessment of adherence after publication.

Item 25 (funding): We added a new item on funding. Empirical evidence points toward funding source sometimes being associated with estimated treatment effects (30).
CONSORT 2010 supplants the prior version published in 2001. Any support for the earlier version accumulated from journals or editorial groups will automatically extend to this newer version, unless specifically requested otherwise. Journals that do not currently support CONSORT may do so by registering on the CONSORT Web site. A journal that supports or endorses CONSORT 2010 should cite one of the original versions of CONSORT 2010, the CONSORT 2010 Explanation and Elaboration, and the CONSORT Web site in its "instructions to authors." We suggest that authors who wish to cite CONSORT cite this or another of the original journal versions of the CONSORT 2010 Statement and, if appropriate, the CONSORT 2010 Explanation and Elaboration (13). All CONSORT material can be accessed through the original publishing journals or the CONSORT Web site. Groups or individuals who wish to translate the CONSORT 2010 Statement into other languages should first consult the CONSORT policy statement on the Web site.
We emphasize that CONSORT 2010 represents an evolving guideline. It requires perpetual reappraisal and, if necessary, modifications. In the future, we will further revise the CONSORT material considering comments, criticisms, experiences, and accumulating new evidence. We invite readers to submit recommendations via the CONSORT Web site.
