Publication bias - How similar are results from published versus unpublished studies?
Source: Rothstein, H. R., Sutton, A. J., & Borenstein, M. (2006). Publication bias in meta-analysis. In Publication Bias in Meta-Analysis (pp. 1-7). doi:10.1002/0470870168.ch1
The reliability of the results of a randomized trial depends on the extent to which potential sources of bias have been avoided. A key part of a review is to consider the risk of bias in the results of each of the eligible studies. A useful classification of biases is into selection bias, performance bias, attrition bias, detection bias and reporting bias. In this section we describe each of these biases and introduce seven corresponding domains that are assessed in the Collaboration’s ‘Risk of bias’ tool. These are summarized in Table 8.4.a. We describe the tool for assessing the seven domains in Section 8.5. We provide more detailed consideration of each issue in Sections 8.9 to 8.15.
Selection bias refers to systematic differences between baseline characteristics of the groups that are compared. The unique strength of randomization is that, if successfully accomplished, it prevents selection bias in allocating interventions to participants. Its success in this respect depends on fulfilling several interrelated processes. A rule for allocating interventions to participants must be specified, based on some chance (random) process. We call this sequence generation. Furthermore, steps must be taken to secure strict implementation of that schedule of random assignments by preventing foreknowledge of the forthcoming allocations. This process is often termed allocation concealment, although it could more accurately be described as allocation sequence concealment. Thus, one suitable method for assigning interventions would be to use a simple random (and therefore unpredictable) sequence, and to conceal the upcoming allocations from those involved in enrolment into the trial.
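As an illustrative sketch (not part of the source text), the sequence-generation step described above could look like the following in Python. The function name, arm labels, and seed are assumptions chosen for the example; the key property is that each allocation comes from an unpredictable chance process, and that the sequence is generated ahead of enrolment so it can be held by someone not involved in recruiting.

```python
import random

def generate_allocation_sequence(n_participants, arms=("treatment", "control"), seed=2024):
    """Sequence generation via simple (unrestricted) randomization.

    Each participant is assigned to an arm by an independent random draw,
    so upcoming allocations cannot be predicted from earlier ones.
    The seed makes the sequence reproducible for auditing.
    """
    rng = random.Random(seed)
    return [rng.choice(arms) for _ in range(n_participants)]

# Allocation (sequence) concealment is an organisational step, not a coding
# one: the generated list is held by a party not involved in enrolment, and
# recruiters learn each assignment only after a participant is enrolled.
sequence = generate_allocation_sequence(10)
print(sequence)
```

Note that concealment itself cannot be enforced by the generation code; it depends on who can see the sequence before enrolment, which is why the two processes are assessed as separate domains.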
For all potential sources of bias, it is important to consider the likely magnitude and the likely direction of the bias. For example, if all methodological limitations of studies were expected to bias the results towards a lack of effect, and the evidence indicates that the intervention is effective, then it may be concluded that the intervention is effective even in the presence of these potential biases.
Performance bias refers to systematic differences between groups in the care that is provided, or in exposure to factors other than the interventions of interest. After enrolment into the study, blinding (or masking) of study participants and personnel may reduce the risk that knowledge of which intervention was received, rather than the intervention itself, affects outcomes. Effective blinding can also ensure that the compared groups receive a similar amount of attention, ancillary treatment and diagnostic investigations. Blinding is not always possible, however. For example, it is usually impossible to blind people to whether or not major surgery has been undertaken.
Detection bias refers to systematic differences between groups in how outcomes are determined. Blinding (or masking) of outcome assessors may reduce the risk that knowledge of which intervention was received, rather than the intervention itself, affects outcome measurement. Blinding of outcome assessors can be especially important for assessment of subjective outcomes, such as degree of postoperative pain.
Attrition bias refers to systematic differences between groups in withdrawals from a study. Withdrawals from the study lead to incomplete outcome data. There are two reasons for withdrawals or incomplete outcome data in clinical trials. Exclusions refer to situations in which some participants are omitted from reports of analyses, despite outcome data being available to the trialists. Attrition refers to situations in which outcome data are not available.
Reporting bias refers to systematic differences between reported and unreported findings. Within a published report those analyses with statistically significant differences between intervention groups are more likely to be reported than non-significant differences. This sort of ‘within-study publication bias’ is usually known as outcome reporting bias or selective reporting bias, and may be one of the most substantial biases affecting results from individual studies (Chan 2005).
In addition there are other sources of bias that are relevant only in certain circumstances. These relate mainly to particular trial designs (e.g. carry-over in cross-over trials and recruitment bias in cluster-randomized trials); some can be found across a broad spectrum of trials, but only for specific circumstances (e.g. contamination, whereby the experimental and control interventions get ‘mixed’, for example if participants pool their drugs); and there may be sources of bias that are only found in a particular clinical setting.