Meta-synthesis or Meta-analysis in Medical Research Assignment

What effect does a meta-synthesis or meta-analysis have on research translation? Describe a clinical practice in place that is supported by this level of evidence. 1 page, 2 sources. APA.

Using meta-analyses for comparative effectiveness research
Vicki S. Conn, PhD, RN, FAAN*, Todd M. Ruppar, PhD, RN, GCNS-BC,
Lorraine J. Phillips, PhD, RN, Jo-Ana D. Chase, MN, APRN-BC
Meta-Analysis Research Center, School of Nursing, University of Missouri, Columbia, MO
Article history:

Received 30 December 2011
Revised 16 April 2012
Accepted 22 April 2012
Keywords: Comparative effectiveness research; Meta-analysis

Abstract
Comparative effectiveness research seeks to identify the most effective interventions
for particular patient populations. Meta-analysis is an especially
valuable form of comparative effectiveness research because it emphasizes the
magnitude of intervention effects rather than relying on tests of statistical
significance among primary studies. Overall effects can be calculated for diverse
clinical and patient-centered variables to determine the outcome patterns.
Moderator analyses compare intervention characteristics among primary
studies by determining whether effect sizes vary among studies with different
intervention characteristics. Intervention effectiveness can be linked to patient
characteristics to provide evidence for patient-centered care. Moderator analyses
often answer questions never posed by primary studies because neither
multiple intervention characteristics nor populations are compared in single
primary studies. Thus, meta-analyses provide unique contributions to knowledge.
Although meta-analysis is a powerful comparative effectiveness strategy,
methodological challenges and limitations in primary research must be
acknowledged to interpret findings.
Cite this article: Conn, V. S., Ruppar, T. M., Phillips, L. J., & Chase, J.-A. D. (2012, August). Using meta-analyses for comparative effectiveness research. Nursing Outlook, 60(4), 182-190. doi:10.1016/j.outlook.2012.04.004

Despite remarkable scientific advances over recent
decades, the effectiveness of many health interventions
remains unclear. The Institute of Medicine noted
that evidence of effectiveness exists for less than half of
the interventions in use today.1 Scant evidence exists
comparing multiple possible interventions for the same
health problem.2 Newer or more costly interventions
may not be linked with better outcomes, and variations
in health care expenditure may be unrelated to changes
in health outcomes.3-5 The troubling lack of information
about interventions’ relative effectiveness led to
comparative effectiveness research (CER) initiatives.
CER can be defined as research designed to discover
which interventions work best, under what circumstances,
for whom, and at what cost.1,6 CER methods
include randomized, controlled trials; nonrandomized
comparison studies; prospective and retrospective
observational studies; analyses of registry and practice
datasets; practice-based evidence studies; and meta-analyses.6-9
This paper examines using meta-analytic
approaches for CER. Examples of nurse-led meta-analyses
will be used to demonstrate key points. The
paper begins with an explanation of meta-analytic
overall effect size estimates for CER, especially in
situations with inconsistent findings among primary
studies. The value of statistically quantifying the
magnitude of effects for both clinical and patient-centered
outcomes is described. Unique contributions
of meta-analysis for both specifying temporal patterns
of outcomes and adverse outcomes are presented. Then
the importance of including diverse studies that represent
clinical heterogeneity is explained. The use of
patient characteristic moderator analysis to accomplish
CER goals of identifying which interventions work best
for which subjects is explored. The use of moderator
analyses to determine whether intervention characteristics
are linked with outcomes is presented. The use of
moderator analyses to determine whether setting
characteristics are associated with outcomes is
described. The potential use of moderator analyses to
explore intervention worth is briefly addressed. Finally,
selected limitations of meta-analytic methods and
primary studies are discussed to provide a context for
interpreting meta-analytic CER. Full details of meta-analysis
methods, including limitations, are available
in other sources.10-15
Application of Overall Effect Sizes to Comparative Effectiveness Research
CER includes determining effectiveness of interventions
on clinical and patient-centered outcomes. CER
can involve performing a meta-analysis of primary
studies to quantify intervention outcomes. Meta-analyses
can synthesize results of head-to-head
comparisons of 2 interventions in primary studies or
compare 2 interventions tested in different primary
studies. Meta-analytic statistical procedures generate
a unitless effect size for each study. Thus, outcomes
reported using different measures of the same
construct in primary studies may be combined. Each
effect size is weighted by the inverse of its sampling
variance so studies with larger samples have more
influence in aggregate effect-size estimates.11
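To make the weighting concrete, the sketch below pools hypothetical per-study standardized mean differences with simple inverse-variance (fixed-effect) weights. The study labels, effect sizes, and variances are illustrative only, and CER meta-analyses such as those described here generally use random-effects rather than this simplest fixed-effect form.

```python
import math

# Hypothetical per-study standardized mean differences (SMDs) and their
# sampling variances; in practice these are computed from each primary study.
studies = [
    ("Study A", 0.30, 0.040),
    ("Study B", 0.55, 0.090),
    ("Study C", 0.10, 0.020),
]

# Inverse-variance weights: studies with smaller variance (larger samples)
# contribute more to the aggregate effect-size estimate.
weights = [1.0 / v for _, _, v in studies]
pooled = sum(w * d for (_, d, _), w in zip(studies, weights)) / sum(weights)

# Standard error and 95% confidence interval of the pooled effect.
se = math.sqrt(1.0 / sum(weights))
ci = (pooled - 1.96 * se, pooled + 1.96 * se)

print(f"Pooled SMD = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```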
The meta-analytic approach of estimating an effect
size for each primary study does not depend on
P values in original studies, which makes it valuable in
areas of science where underpowered studies are
common. Some areas have multiple small primary
studies without statistical power to detect important
changes. Reviews of such work conducted without
meta-analysis, such as those relying on vote counting
of the proportion of studies with statistically significant
findings, might conclude that the primary studies
did not support the effectiveness of the tested intervention
because they reported statistically nonsignificant
differences between treatment and comparison
groups. However, meta-analytic strategies can
combine the magnitude of differences between treatment
groups across primary studies to discover a clinically
important intervention effect. For example, we
retrieved 10 studies testing the effects of physical
activity behavior self-monitoring as an intervention to
increase physical activity.16-25 Four of the studies
reported statistically significant findings in favor of
self-monitoring. Six other studies reported that self-monitoring
did not significantly improve physical
activity behavior. A review without meta-analysis
would conclude that the evidence is mixed, inconclusive,
or did not support the efficacy of self-monitoring.
In contrast, a meta-analysis of the same studies
documented an overall effect size of .435 (standardized
mean difference), which is significantly different from
no effect (P < 0.001, 95% confidence interval .278, .592).
Thus the meta-analysis concluded that self-monitoring
increased physical activity. Figure 1
includes a forest plot that demonstrates these findings.

Figure 1. Forest plot of 10 studies that tested self-monitoring interventions. The horizontal line adjacent to each study on the forest plot reflects the confidence interval for that study's effect size. Studies with horizontal lines crossing 0 did not report a statistically significant outcome in the individual studies. The meta-analysis standardized mean difference effect size, the final row in the figure marked "Effect size," is represented by the diamond, whose width corresponds to the confidence interval.
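As a rough check on how the reported confidence interval relates to the significance test (a back-of-the-envelope calculation assuming a normal approximation, not part of the original analysis):

```latex
SE \approx \frac{0.592 - 0.278}{2 \times 1.96} \approx 0.080,
\qquad
z = \frac{0.435}{0.080} \approx 5.4,
\quad p < 0.001
```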
CER aims to determine the extent to which interventions
are effective, not whether they are better than
control conditions. Meta-analysis calculates and
emphasizes the magnitude of the effect, rather than
the tests of statistical significance reported in primary
studies. The emphasis on effect size, instead of tests of
statistical significance, also aids interpretation of
findings from overpowered primary studies with
statistically significant findings that may not be clinically
important. For example, a study of an intervention
to reduce pain may have a statistically significant P
value if hundreds of subjects are included, whereas the
average reduction in pain between the treatment and
control group might be from 6.5 to 6.2 on a pain scale of
0 to 10. Meta-analysis findings emphasize the magnitude
of effects, thus overpowered studies are interpreted
in the context of the effect size they achieved.
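For illustration, if the pain scores had a pooled standard deviation of about 2 points (a hypothetical value; the article does not report one), the 6.5 versus 6.2 difference would correspond to a standardized mean difference of only about 0.15, trivially small despite its statistical significance:

```latex
d = \frac{\bar{X}_{\text{control}} - \bar{X}_{\text{treatment}}}{SD_{\text{pooled}}}
  = \frac{6.5 - 6.2}{2} = 0.15
```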
Because CER results are intended to improve clinical
practice, outcomes need to be interpretable by practitioners.
The meta-analysis overall effect size, which
quantifies the magnitude of effects, can be converted
to the original clinical metric to enhance interpretation.
For example, a meta-analysis of metabolic
outcomes of diabetes self-management programs
reported an overall mean difference effect size of .26.
The conversion to the original metric depicted findings
in clinically meaningful terms: HbA1c of 7.38 for
treatment subjects compared with HbA1c of 7.83 for
control subjects.26 Clinical practice can be further
supported by making comparisons across meta-analyses
to determine consistency of findings. These
comparisons can be accomplished by the ability to
convert meta-analysis effect size metrics (eg, odds
ratios to standardized mean difference).27
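One commonly used conversion of this kind, described in general meta-analysis texts such as reference 27, re-expresses a log odds ratio as a standardized mean difference; the exact formula used in any particular meta-analysis may differ:

```latex
d = \ln(OR)\,\frac{\sqrt{3}}{\pi},
\qquad
V_d = V_{\ln(OR)}\,\frac{3}{\pi^{2}}
```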
CER aims to examine intervention effects on
multiple clinical and patient-centered outcomes. Meta-analyses
compute separate effect sizes for diverse
outcomes that are reported in primary research.
Although a main health outcome may be considered
most important, other outcomes may be summarized
separately to estimate intervention effects for multiple
outcomes. For example, a meta-analysis comparing
passive descent to immediate pushing during second-stage
labor in nulliparous women with epidural anesthesia
examined multiple outcomes: Spontaneous
vaginal birth, instrument-assisted delivery, cesarean
birth, lacerations, and episiotomies.28 Varied patterns
of findings among related outcomes can be interesting.
For example, a meta-analysis of exercise interventions
among older adults found improvement in objective
physical performance measures but no improvement
in the ability to perform activities of daily living.29
Patient-centered outcomes research emphasizes
outcomes of importance to patients such as quality of
life, symptoms, or functional status. Patient-centered
outcomes can be synthesized in addition to other
outcomes health providers typically value.30 For
example, a meta-analysis of silver-releasing wound
dressings included pain-related symptoms and quality
of life measures, as well as typical clinical outcomes of
wound healing, exudate, and dressing wearing time.31
Analyzing multiple outcomes is important because
the definition of “success” for interventions varies.32
Comparisons between interventions may reveal small
or negligible differences in main outcome effect sizes.
In these cases, comparisons of other nonprimary
outcomes, such as patient convenience, may provide
valuable information about complex tradeoffs for
making decisions about patient care.33
Providers are interested in CER research that documents
persisting health benefits of interventions, not
just immediate improvements. Effect sizes calculated
for multiple time points can provide information about
the temporal pattern of effects. Some primary studies
report outcomes over multiple time points. Others
report only one outcome assessment, though its timing
may vary across studies. These data can be used in meta-analyses
to identify interventions whose effects are
transient or those showing limited immediate impact
but long-term positive outcomes.32 These patterns may
reveal themselves as interventions first become effective,
peak in effectiveness, and then decay. For example,
Van Kuiken documented changes in the effects of guided
imagery on outcomes over 5 to 18 weeks.34
CER is intended to provide information to providers
and patients about both positive and negative
outcomes of interventions so advantages and disadvantages
may be considered in making treatment
decisions. Adverse or negative events are important
sequelae that CER meta-analyses can address. Many
adverse events are rare, which makes it difficult to
assess incidence in individual primary studies.
Combining adverse event rates across multiple
primary studies with thousands of subjects provides
more stable estimates of incidence than are available
in single studies. For example, Lo et al documented no
increased incidence of adverse events when using
silver-releasing dressings over alternative dressings by
aggregating findings across many patients in multiple
primary studies.31 Although primary research tends to
emphasize positive outcomes in research reports,
providers need accurate information about negative
events or neutral outcomes to weigh the advantages
and disadvantages of interventions for practice.
Heterogeneity in Meta-Analysis Comparative Effectiveness Research
CER values real-world tests of interventions. Heterogeneity
is expected in CER meta-analyses because
primary studies (1) include samples of diverse, real-world populations; (2) commonly have planned and
unplanned variations in interventions; and (3) test
interventions in varied clinical settings that may
influence their effectiveness or patient responsiveness.
Meta-analysts’ decisions regarding inclusion and
exclusion of potential primary studies with diverse
samples and interventions should be directed by
conceptually clear definitions about what kinds of
interventions should be combined and for which types
of subjects. CER meta-analyses generally use random-effects
model analyses, which assume diversity in
sample, interventions, and study methods. (Methodological
challenges related to inclusion criteria and
primary study quality are addressed in the Limitations
section.)
Heterogeneity is valuable because CER includes
studies conducted with diverse populations and varied
methods to provide strong evidence about interventions’
effectiveness. CER expects variations in patients,
interventions, and outcomes. This approach stands in
contrast to efficacy findings commonly established in
tightly controlled randomized controlled trials.8,35
The emphasis on randomized, controlled trials in
some Cochrane Collaboration reviews is one reason
these may have limited CER impact. A strength of
meta-analysis is its ability to estimate heterogeneity
and examine potential moderating variables that
contribute to it. Even when testing identical interventions,
heterogeneity of outcome effects is common
because patients vary in their response to treatments,
and treatment effects may vary by setting.35 Heterogeneity
offers the opportunity to conduct moderator
analyses to explore how primary studies differ by
examining sample, intervention, and setting characteristics
that may be linked to outcomes. CER meta-analysis
facilitates discovery of best practices by
identifying interventions that are the most effective
overall and for certain populations once sufficient
primary research has accumulated.8
Patient Characteristic Moderator Analyses
One focus of CER is identifying differential intervention
effectiveness for specific populations. CER subgroup
moderator analyses can focus on demographic
features such as ethnicity or gender, or they can
examine health characteristics such as disease
severity or functional status. Meta-analysis moderator
analyses can examine whether intervention effectiveness
varies by patient subgroups. For example, a meta-analysis
of interventions to increase medication
adherence among older adults found that interventions
were most effective for those with 3 to 5
prescription medications.36 This could be because
those taking fewer medications needed little assistance
with medication adherence and those taking
more than 5 might need more intense interventions
than those typically tested.36 Rice reported that
smoking cessation interventions were more effective
for cardiac patients than for other populations.37
The increased CER emphasis on patient-level
attributes linked with better or worse outcomes
may lead to more personalized care.38 Findings that
intervention effects do not vary by sample characteristics
may mean that a range of patients may
experience similar benefit from the intervention. For
example, a meta-analysis of respiratory rehabilitation
interventions on exercise capacity found similar
benefits across sample age or initial forced expiratory
volume.39
Intervention Characteristic Moderator Analyses
CER aims to provide clinical guidance by comparing
interventions to determine which interventions are
most effective.
Intervention Moderators
In a few situations, meta-analysis can prove useful in
determining whether an intervention is better than no
intervention, such as a watchful waiting approach.38
For some interventions, it can be valuable to synthesize
comparisons between new interventions and
usual care. If usual care is standardized, these analyses
provide information comparing 2 interventions.
However, oftentimes usual care is not standardized
and such comparisons cannot yield clear recommendations
for practice. More commonly, providers need
to know which interventions are most effective.
Meta-analyses can address comparisons between
interventions by either synthesizing extant primary
research with head-to-head comparisons of treatments
or by using moderator analyses on primary
studies that test different interventions. Using meta-analysis,
researchers can directly compare interventions
from multiple primary studies that compare the
same 2 interventions. The effect sizes for the difference
between the 2 interventions provide information
about the most effective intervention when methodological
quality is similar between studies and valid
outcome measures are used. For example, Lo et al
synthesized findings of primary studies that each
compared silver-releasing dressing with other
dressings.31
Unfortunately, in many primary studies of nursing
interventions, the intervention is not compared against other interventions.
Head-to-head comparisons of multiple
interventions in the same primary studies are
unusual because of funding, feasibility, and very large
sample challenges. Rather, interventions are generally
compared with usual care or a control group.
Using meta-analysis, interventions not directly
compared in primary studies can be indirectly
compared to accomplish the goals of CER to compare
interventions.7
The effect of one intervention
compared with a control group can be contrasted
with the effect of a second intervention
compared with a control group.38 Two interventions
each compared with usual care in separate primary
studies can be compared using meta-analysis.38 An
effect size is computed for the first intervention
compared with control subjects. A separate effect size
is calculated for the second intervention compared
with control groups. The difference in the effect sizes
is tested statistically to determine whether the first or
second intervention was most effective. Because no
primary studies directly compared the 2 interventions,
this indirect comparison is a unique contribution
of meta-analysis. For example, a meta-analysis
by Jung et al compared exercise-only interventions
with exercise-and-education interventions to reduce
fear of falling in older adults.40 Primary studies did
not compare the 2 interventions but rather compared
each one with a control group. Their meta-analysis
statistically compared the interventions despite the
absence of any primary studies making this direct
comparison.40
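A minimal sketch of such an indirect comparison is shown below. The two pooled effects, their variances, and the variable names are hypothetical, not values from Jung et al; the calculation simply contrasts two intervention-versus-control effects and adds their variances because the estimates come from independent sets of studies.

```python
import math

# Hypothetical pooled effects (vs. control) for two interventions that were
# never compared head-to-head in primary studies.
d_exercise_only, var_exercise_only = 0.50, 0.010  # intervention A vs. control
d_exercise_edu, var_exercise_edu = 0.35, 0.015    # intervention B vs. control

# Indirect comparison: difference between the two intervention-vs-control
# effects; the variances add because the estimates are independent.
diff = d_exercise_only - d_exercise_edu
se_diff = math.sqrt(var_exercise_only + var_exercise_edu)

z = diff / se_diff
ci = (diff - 1.96 * se_diff, diff + 1.96 * se_diff)
print(f"Indirect difference = {diff:.2f}, "
      f"95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), z = {z:.2f}")
```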
Nurses often use common labels to describe variable
interventions. For example, patient education could
describe work to change the knowledge and attitudes
about exercise or it could describe behavioral strategies
to change exercise (eg, self-monitoring, prompts,
contracts). Meta-analysis adds clarity in such cases
with its ability to compare characteristics of interventions
to determine the best one. For example, a recent
meta-analysis of physical activity interventions found
that behavioral interventions (eg, self-monitoring,
cues, rewards, behavioral goals) were more effective
than cognitive interventions (ie, changing knowledge,
attitudes, beliefs) at increasing physical activity
behavior.41 These comparative analyses provide
evidence about best practices to achieve desired
outcomes.42
Moderator analyses can examine intervention
features that may vary along dimensions beyond
content.43 Dose variations include individual dose
amount, dose frequency, and total number of doses.
Intervention timing may be linked to index events
or other determining factors. Mode of delivery
can include face-to-face or mediated mechanisms
(eg, email, telephone). Interventions may be delivered
to the target, who is expected to benefit from the
intervention, or to other recipients (eg, family
members of patients, health care providers). Moderator
analyses can compare standardized interventions to
those tailored to an individual (ie, intervention features
matched to individual subject characteristics) or targeted
to groups (eg, different interventions for
subgroups such as women vs. men). Unplanned intervention
variations (eg, unanticipated content or dose
variations) can relate to outcomes. Moderator analyses
on such characteristics can provide information to
help design interventions that improve health and
well-being outcomes.
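The sketch below illustrates one simple way such a categorical moderator analysis can be carried out: studies are grouped by a coded intervention characteristic (here, a hypothetical delivery-mode moderator with made-up effect sizes), each subgroup is pooled with inverse-variance weights, and a between-groups Q statistic tests whether the subgroup effects differ. Actual CER meta-analyses typically use random-effects models and dedicated software rather than this bare-bones fixed-effect version.

```python
import math
from collections import defaultdict

# Hypothetical studies: (effect size, sampling variance, delivery mode).
studies = [
    (0.45, 0.02, "face-to-face"),
    (0.38, 0.03, "face-to-face"),
    (0.20, 0.02, "mediated"),
    (0.15, 0.04, "mediated"),
]

def pool(group):
    """Fixed-effect inverse-variance pooling within one subgroup."""
    weights = [1.0 / v for _, v, _ in group]
    d = sum(w * es for (es, _, _), w in zip(group, weights)) / sum(weights)
    return d, 1.0 / sum(weights)  # pooled effect and its variance

groups = defaultdict(list)
for study in studies:
    groups[study[2]].append(study)

pooled = {mode: pool(g) for mode, g in groups.items()}

# Between-groups Q statistic: does the effect differ by delivery mode?
w_tot = sum(1.0 / var for _, var in pooled.values())
d_overall = sum(d / var for d, var in pooled.values()) / w_tot
q_between = sum((d - d_overall) ** 2 / var for d, var in pooled.values())

for mode, (d, var) in pooled.items():
    print(f"{mode}: pooled SMD = {d:.2f} (SE = {math.sqrt(var):.2f})")
print(f"Q_between = {q_between:.2f} on {len(pooled) - 1} df")
```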
Setting and Context Moderator Analyses
CER aims to discover the best interventions in specific
situations. Meta-analyses can compare interventions’
setting and context characteristics using moderator
analyses to discover circumstances in which interventions
are most effective. For example, interventionist
characteristics that vary among primary studies
(eg, advanced practice nurses vs. physicians) can be
compared statistically. Setting features, such as home
vs. clinic or individual patient vs. group of patients, also
can be examined to determine the most effective
setting. For example, Conn et al’s meta-analysis of
physical activity behavior outcomes compared interventions
delivered to groups versus individuals and
compared interventions delivered face-to-face versus
mediated mechanisms (eg, telephone).41 Modifications
in health care delivery are important potential moderators
in health services research. For example, Kim and
Soeken examined how hospital-based case management
affected length of stay and readmission rates.44
Intervention Worth
Although current national CER discussions have not
emphasized cost analyses, an examination of cost
issues is relevant. Meta-analysis methods can address
relationships between intervention costs and
outcomes. Ideal primary intervention reports contain
adequate data about intervention costs and outcomes
to estimate the amount of improvement in outcome
variables per unit cost. It is important that the full
range of outcomes be compared with costs to provide
a complete cost-benefit analysis. Unfortunately, few existing
intervention studies provide adequate cost data to
include this important variable in meta-analyses. As
cost information takes on greater importance in
primary research, such analyses will be possible in the
future.
Interpreting Meta-Analysis Results for Comparative Effectiveness Research
Meta-analysis is a powerful CER tool. Valid interpretations
of meta-analysis results require researchers to
consider limitations of both meta-analysis methods
and primary studies. In-depth explanation of meta-analysis
methods is beyond the scope of this paper.
Other excellent resources provide detailed information.10-15
Two checklists with criteria for evaluating
meta-analyses are available online (PRISMA: http://
www.prisma-statement.org/statement.htm; MOOSE:
http://www.editorialmanager.com/jognn/account/
MOOSE.pdf). This discussion will focus on CER meta-analysis.
The findings of meta-analyses may be generalized to
situations similar to the primary studies included in
the analyses. Thus, if only randomized, controlled
trials are included in meta-analyses, they may provide
limited information about effectiveness while
providing excellent estimates of efficacy. Because CER
does not seek to determine whether interventions are
efficacious under highly controlled conditions, CER
meta-analyses should include primary trials with
varied populations and broad clinical practice, as well
as tightly controlled efficacy trials, so findings are
generalizable to practice settings.45,46
Limitations and Challenges of Meta-Analysis CER
Meta-analysis inclusion criteria determine which
primary studies to include in aggregate analyses.
Excessively narrow inclusion criteria may exclude
studies conducted in the practice setting, which might
provide the most valuable evidence for changing
practice. For example, the Cochrane Collaboration
emphasis on randomized, controlled trials and exclusion
of patient-centered outcomes may limit the
usefulness of some reviews for CER.14
Including studies with varied methodological difficulties
can be both valuable and challenging. Meta-analysts
manage primary study quality in 3 ways.47
First, meta-analysts may set inclusion criteria that
address methodological quality. This approach can be
effective for CER if it does not exclude the very field
studies that provide the best evidence about effectiveness.
Second, a meta-analysis could weight effect
sizes by quality scores. This approach is fraught with
problems because no valid measures of primary study
quality exist and the importance of specific quality
attributes may differ by scientific topic.47 Third, meta-analysts
may consider quality features as an empirical
question. Conducting moderator analyses to examine
associations between effect sizes and methods characteristics
(eg, allocation, masked outcome assessment,
attrition) can be informative. For example, Lee,
Soeken, and Picot compared effect sizes of studies with
strong internal validity with those with significant
weaknesses.48 Combination approaches may be most
effective if CER research is to ensure that studies conducted
in realistic clinical settings are included while
testing linkages between methods and effect sizes.
Primary study limitations profoundly influence
meta-analyses. Poorly described interventions are
a persistent problem.49-52 Studies that describe interventions
as patient education or social support,
without additional details, provide insufficient information
about intervention content. Other studies use
well known labels for interventions but provide insufficient
evidence about intervention content or delivery.
For example, studies may claim “motivational interviewing”
without conducting an intervention entirely
consistent with motivational interviewing principles.
Inadequate details about interventions and outcomes
make valid coding difficult for some primary studies
and may necessitate exclusion from meta-analyses.
Reporting bias, the tendency for articles to report
statistically significant findings and not report findings
that are not statistically significant, and publication
bias, the tendency for studies with statistically significant
findings to be published, alter meta-analysis
findings in unknown ways.53 Inadequate statistical
information in primary studies, such as not reporting
sample sizes, means, and measures of variability, is
frustratingly common.54,55 Some primary studies may
use outcome measures with no recognized standards
for clinically relevant differences, hindering meaningful
interpretation.
Perhaps the most common limitation in published
meta-analyses is inadequate searching for primary
studies. This is important because easier-to-find
studies generally have larger effect sizes than obscure
studies.56,57 Publication bias is a persistent problem
that thwarts scientific progress.57,58 Considerable
resources must be devoted to adequate searching to
ensure valid CER meta-analyses.56
Meta-analysts can only synthesize existing information.
For example, some populations may be underrepresented
in research.59 The comprehensive
searching completed for valid meta-analyses allows
investigators to identify missing populations.
Individual studies are the unit of analysis in meta-analyses.
To ensure independent data (subjects do
not enter any one meta-analysis statistical procedure
multiple times), meta-analysts must make principled
decisions regarding which measures to use or create an
index score when studies report multiple measures of
the same construct. Procedures also must be in place to
ensure that the same subjects do not enter meta-analysis
effect sizes multiple times when more than
one article reports on the same subjects.
Use of CER Meta-Analysis Results
In some CER meta-analyses, moderator analyses may
be more important than overall effect sizes.
Researchers should place less emphasis on overall
effects in meta-analyses that include significant clinical
and methodological diversity. Researchers should
use caution when interpreting overall effect sizes of
small meta-analyses with significant heterogeneity
and no explanatory moderator analyses.42
CER meta-analysis results may be conclusive
regarding best practices if primary studies offer strong
and consistent evidence. In these situations, no further
research comparing interventions may be necessary.
Primary research often yields less conclusive findings
when few studies are available, all studies have
significant methodological weaknesses, or extensive
heterogeneity cannot be explored through moderator
analyses. In these situations, meta-analysis may
contribute most by identifying comparisons that
further research should address. Rather than simply
suggesting additional research on a topic, meta-analyses
usually can specify the nature of the
comparisons that should be made (eg, intervention
characteristics, samples).
Comprehensive meta-analyses can provide
evidence for practice. Consistent findings across
multiple meta-analyses that address the same fundamental
research question provide powerful evidence
for practice. For example, 3 meta-analyses have documented
that behavioral interventions are more
powerful than cognitive interventions to change
physical activity behavior among healthy, chronically
ill, and older adults.41,60,61 Contradictory findings
across multiple meta-analyses should be evaluated
carefully. Considerations include differences in search
strategies, inclusion criteria, and outcome variables to
identify potential sources of discrepancies before
making practice recommendations.
Meta-analyses must be updated with newly available
evidence. The shelf-life of meta-analyses depends
on the amount of new evidence that could change
findings.59 A meta-analysis may suggest comparisons
to make in primary studies, the findings of which could
require updates to the seminal meta-analysis. Newer
studies may include populations that older studies
included infrequently. Important methodological
advances may affect the results of more recent studies.
Emerging data should be included in updated meta-analyses.7
Meta-analyses may also need to be updated
as new methods of meta-analyzing data become
available.62
Conclusions
Meta-analyses can address central CER questions of
which interventions work best, for whom, in what
situations, and at what cost. Moderator analyses that
compare intervention characteristics, patient attributes,
and clinical circumstances on clinical outcomes
make the largest CER contribution to knowledge for
practice. These moderator analyses typically answer
questions that primary studies never ask; meta-analyses
can make unique contributions to scientific
knowledge of health interventions. Methodological
challenges and weaknesses in extant primary research
should provide the context for interpreting findings.
Rigorously conducted meta-analyses are a useful
method for conducting valid CER.
Acknowledgments
Financial support provided by grants from the National
Institutes of Health (R01NR009656 & R01NR011990) to
Vicki Conn, principal investigator. The content is solely
the responsibility of the authors and does not necessarily
represent the official views of the National
Institutes of Health.
references
1. Institute of Medicine. Roundtable on evidence-based medicine.
Learning what works best: the nation’s need for evidence on
comparative effectiveness in health care. Available at: http://
www.iom.edu/w/media/Files/Activity%20Files/Quality/VSRT/
ComparativeEffectivenessWhitePaperESF.pdf. Accessed May
29, 2012.
2. Donnelly J, Garber AM, Wilensky GR, Dentzer S, Agres T. Health
policy brief: Comparative effectiveness research. 2010. Health
Aff. Available at: http://www.healthaffairs.org/healthpolicy
briefs/brief.php?brief_id=28. Accessed May 29, 2012.
3. Fisher ES, Bynum JP, Skinner JS. Slowing the growth of health
care costs - lessons from regional variation. New Engl J Med
2009;360(9):849-52.
4. Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL,
Pinder EL. The implications of regional variations in Medicare
spending. Part 2: health outcomes and satisfaction with care.
Ann Intern Med 2003;138(4):288-98.
5. Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL,
Pinder EL. The implications of regional variations in Medicare
spending. Part 1: the content, quality, and accessibility of
care. Ann Intern Med 2003;138(4):273-87.
6. Clancy C. Patient-centered outcomes research: what is it and
why do we need it? Presented at Council for the Advancement
of Nursing Science special topics conference, October 12,
2011; Washington, DC.
7. DuBois RW, Kindermann SL. Demystifying comparative
effectiveness research: a case study learning guide. National
Pharmaceutical Council. 2009. Available at: http://www.
npcnow.org/Public/Research___Publications/Publications/
pub_ebm/Demystifying_Comparative_Effectiveness_
Research__A_Case_Study_Learning_Guide_.aspx. Accessed
May 29, 2012.
8. Horn SD, Gassaway J. Practice based evidence: Incorporating
clinical heterogeneity and patient-reported outcomes for
comparative effectiveness research. Med Care 2010;
48(6 Suppl):S17-22.
9. Manchikanti L, Falco FJ, Boswell MV, Hirsch JA. Facts,
fallacies, and politics of comparative effectiveness
research: Part I. Basic considerations. Pain Physician 2010;
13(1):E23-54.
10. Cooper H, Hedges LV, Valentine JC, editors. The handbook of
research synthesis and meta-analysis. 2nd ed. New York:
Russell Sage Foundation; 2009.
11. Cooper H. Research synthesis and meta-analysis: a step by
step approach. 4th ed. Los Angeles, CA: Sage Publications,
Inc.; 2010.
12. Borenstein M, Hedges L, Higgins J, Rothstein H. Introduction
to meta-analysis. West Sussex: John Wiley & Sons; 2009.
13. Lipsey M, Wilson D. Practical meta-analysis. Los Angeles, CA:
Sage Publications, Inc.; 2000.
14. Higgins JPT, Green S. Cochrane handbook for systematic
reviews of interventions. United Kingdom: Cochrane
Collaboration. Available at: http://www.cochrane.org/
training/cochrane-handbook. Accessed May 29, 2012.
15. Campbell Collaboration. Oslo, Norway: Campbell
Collaboration. Available at: http://www.campbell
collaboration.org/. Accessed May 29, 2012.
16. Napolitano MA, Fotheringham M, Tate D, Sciamanna C,
Leslie E, Owen N, et al. Evaluation of an internet-based
physical activity intervention: A preliminary investigation.
Ann Behav Med 2003;25(2):92-9.
17. Furukawa F, Kazuma K, Kawa M, Miyashita M, Niiro K,
Kusukawa R, et al. Effects of an off-site walking program
on energy expenditure, serum lipids, and glucose
metabolism in middle-aged women. Biol Res Nurs 2003;
4(3):181-92.
18. Hubball HT. Development and evaluation of a worksite health
promotion program: application of critical self-directed
learning for exercise behaviour change (Unpublished
dissertation). The University of British Columbia, Vancouver;
1996.
19. Nichols GJ. Testing a culturally consistent behavioral
outcomes strategy for cardiovascular disease risk reduction
and prevention in low income African-American women
(Unpublished dissertation). University of Maryland,
Baltimore; 1995.
20. Blanchard CM, Fortier M, Sweet S, O’Sullivan T, Hogg W,
Reid RD, et al. Explaining physical activity levels from a self-efficacy
perspective: The physical activity counseling trial.
Ann Behav Med 2007;34(3):323-8.
21. Annesi JJ. Effects of music, television, and a combination
entertainment system on distraction, exercise adherence,
and physical output in adults. Canadian J Behav Sci 2001;
33(3):193-202.
22. King AC, Baumann K, O’Sullivan P, Wilcox S, Castro C.
Effects of moderate-intensity exercise on physiological,
behavioral, and emotional responses to family caregiving: A
randomized controlled trial. J Gerontol A Biol Sci Med Sci
2003;57(1):M26-36.
23. Bennett JA, Young HM, Nail LM, Winters-Stone K, Hanson G.
A telephone-only motivational intervention to increase
physical activity in rural adults: a randomized controlled trial.
Nurs Res 2008;57(1):24-32.
24. Raber AC. Empowering women: a health promotion program
for weight-related problems (Unpublished dissertation).
Bowling Green State University, Ohio; 2004.
25. King AC, Friedman R, Marcus B, Castro C, Napolitano M,
Ahn D, Baker L. Ongoing physical activity advice by
humans versus computers: The Community Health
Advice by Telephone (CHAT) trial. Health Psychol 2007;
26(6):718-27.
26. Conn VS, Hafdahl AR, Mehr DR, LeMaster JW, Brown SA,
Nielsen PJ. Metabolic effects of interventions to increase
exercise in adults with type 2 diabetes. Diabetologia 2007;
50(5):913-21.
27. Borenstein M. Effect size for continuous data. In: Cooper H,
Hedges L, Valentine J, editors. Handbook of research
synthesis and meta-analysis. 2nd ed. New York: Russell Sage
Foundation; 2009. p. 221-35.
28. Brancato RM, Church S, Stone PW. A meta-analysis of
passive descent versus immediate pushing in
nulliparous women with epidural analgesia in the
second stage of labor. J Obstet Gynecol Neonatal Nurs
2008;37(1):4-12.
29. Gu MO, Conn VS. Meta-analysis of the effects of exercise
interventions on functional status in older adults. Res Nurs
Health 2008;31(6):594-603.
30. Navathe AS, Clancy C, Glied S. Advancing research data
infrastructure for patient-centered outcomes research. JAMA
2011;306(11):1254-5.
31. Lo SF, Chang CJ, Hu WY, Hayter M, Chang YT. The
effectiveness of silver-releasing dressings in the
management of non-healing chronic wounds: a meta-analysis.
J Clin Nurs 2009;18(5):716-28.
32. Lohr KN. Comparative effectiveness research methods:
symposium overview and summary. Med Care 2010;
48(6 Suppl):S3-6.
33. Atkins D, Kupersmith J. Implementation research: A critical
component of realizing the benefits of comparative
effectiveness research. Am J Med 2010;123(12 Suppl. 1):e38-45.
34. Van Kuiken D. A meta-analysis of the effect of guided
imagery practice on outcomes. J Holist Nurs 2004;22(2):164-79.
35. Kaplan SH, Billimek J, Sorkin DH, Ngo-Metzger Q,
Greenfield S. Who can respond to treatment? Identifying
patient characteristics related to heterogeneity of treatment
effects. Med Care 2010;48(6 Suppl):S9-16.
36. Conn VS, Hafdahl AR, Cooper PS, Ruppar TM, Mehr DR,
Russell CL. Interventions to improve medication adherence
among older adults: meta-analysis of adherence outcomes
among randomized controlled trials. Gerontologist 2009;49(4):
447-62.
37. Rice VH, Stead L. Nursing intervention and smoking cessation:
meta-analysis update. Heart Lung 2006;35(3):147-63.
38. Committee on Comparative Effectiveness Research
Prioritization, Institute of Medicine. Initial national priorities
of comparative effectiveness research. Washington DC:
National Academies Press; 2009.
39. Oh H, Seo W. Meta-analysis of the effects of respiratory
rehabilitation programmes on exercise capacity in accordance
with programme characteristics. J Clin Nurs 2007;16(1):3-15.
40. Jung D, Lee J, Lee SM. A meta-analysis of fear of falling
treatment programs for the elderly. West J Nurs Res 2009;
31(1):6-16.
41. Conn VS, Hafdahl AR, Mehr DR. Interventions to increase
physical activity among healthy adults: meta-analysis of
outcomes. Am J Public Health 2011;101(4):751-8.
42. Fu R, Gartlehner G, Grant M, Shamliyan T, Sedrakyan A,
Wilt TJ, et al. Conducting quantitative synthesis when
comparing medical interventions: AHRQ and the Effective
Health Care Program. J Clin Epidemiol 2011;64(11):1187-97.
43. Conn VS, Groves P. Protecting the power of interventions
through proper reporting. Nurs Outlook 2011;59(6):318-25.
44. Kim Y-J, Soeken KL. A meta-analysis of the effect of hospital-based
case management on hospital length-of-stay and
readmission. Nurs Res 2005;54(4):255-64.
45. Gibbons RJ, Gardner TJ, Anderson JL, Goldstein LB,
Weintraub WS, Yancy CW. The American Heart Association’s
principles for comparative effectiveness research: A policy
statement from the American Heart Association. Circulation
2009;119(22):2955-62.
46. Hadler NM, McNutt RA. The illusory side of “comparative
effectiveness research.” 2011. Health Beat. Available at: http://
www.healthbeatblog.com/2011/04/the-illusory-side-of-comparative-effectiveness-research-.html.
Accessed May 29,
2012.
47. Conn VS, Rantz MJ. Research methods: managing primary
study quality in meta-analyses. Res Nurs Health 2003;26(4):
322-33.
48. Lee J, Soeken K, Picot SJ. A meta-analysis of interventions for
informal stroke caregivers. West J Nurs Res 2007;29(3):344-56.
discussion 357-64.
49. Conn VS, Cooper PS, Ruppar TM, Russell CL. Searching for the
intervention in intervention research reports. J Nurs
Scholarsh 2008;40(1):52-9.
50. McGilton KS, Boscart V, Fox M, Sidani S, Rochon E, Sorin-Peters
R. A systematic review of the effectiveness of
communication interventions for health care providers
caring for patients in residential care settings. Worldviews
Evid Based Nurs 2009;6(3):149-59.
51. Forbes A. Clinical intervention research in nursing. Int J Nurs
Stud 2009;46(4):557-68.
52. Conn VS. Intervention? What intervention? West J Nurs Res
2007;29(5):521-2.
53. Smyth RMD, Kirkham JJ, Jacoby A, Altman DG, Gamble C,
Williamson PR. Frequency and reasons for outcome reporting
bias in clinical trials: interviews with trialists. Brit Med J 2011;
342:c7153.
54. Orwin R, Vevea J. Evaluating coding decisions. In: Cooper H,
Hedges L, Valentine J, editors. The handbook of research
synthesis and meta-analysis. 2nd ed. New York: Russell Sage
Foundation; 2009. p. 177-203.
55. Pigott T. Handling missing data. In: Cooper H, Hedges L,
Valentine J, editors. The handbook of research synthesis and
meta-analysis. 2nd ed. New York: Russell Sage Foundation;
2009. p. 399-416.
56. Conn V, Isaramalai S, Rath S, Jantarakupt P, Wadhawan R,
Dash Y. Beyond MEDLINE for literature searches. J Nurs
Scholarsh 2003;35(2):177-82.
57. Conn VS, Valentine JC, Cooper HM, Rantz MJ. Grey literature
in meta-analyses. Nurs Res 2003;52(4):256-61.
58. Dickersin K. Publication bias: recognizing the problem,
understanding its origins and scope, and preventing harm. In:
Rothstein HR, Sutton AJ, Borenstein M, editors. Publication
bias in meta-analysis. West Sussex, United Kingdom: John
Wiley & Sons, Ltd.; 2006. p. 9-33.
59. Jones JB, Blecker S, Shah NR. Meta-analysis 101: What
you want to know in the era of comparative
effectiveness. 2008. Am Health Drug Benefits. Available
at: http://www.ahdbonline.com/feature/meta-analysis101-what-you-want-know-era-comparativeeffectiveness.
Accessed May 29, 2012.
60. Conn VS, Hafdahl AR, Brown SA, Brown LM. Meta-analysis
of patient education interventions to increase physical
activity among chronically ill adults. Patient Educ Couns 2008;
70(2):157-72.
61. Conn VS, Valentine JC, Cooper HM. Interventions to increase
physical activity among aging adults: a meta-analysis. Ann
Behav Med 2002;24(3):190-200.
62. Berry D, Wathen JK, Newell M. Bayesian model averaging in
meta-analysis: Vitamin E supplementation and mortality.
Clin Trials 2009;6(1):28-41.

Meta-analysis in medical research

Haidich AB
Department of Hygiene and Epidemiology, Aristotle University of Thessaloniki School of Medicine, Thessaloniki, Greece

Abstract
The objectives of this paper are to provide an introduction to meta-analysis and to discuss the rationale for this type of research and other general considerations. Methods used to produce a rigorous meta-analysis are highlighted, and some aspects of presentation and interpretation of meta-analysis are discussed. Meta-analysis is a quantitative, formal, epidemiological study design used to systematically assess previous research studies to derive conclusions about that body of research. Outcomes from a meta-analysis may include a more precise estimate of the effect of treatment or risk factor for disease, or other outcomes, than any individual study contributing to the pooled analysis. The examination of variability or heterogeneity in study results is also a critical outcome. The benefits of meta-analysis include a consolidated and quantitative review of a large, often complex, and sometimes apparently conflicting body of literature. The specification of the outcome and hypotheses that are tested is critical to the conduct of meta-analyses, as is a sensitive literature search. A failure to identify the majority of existing studies can lead to erroneous conclusions; however, there are methods of examining data to identify the potential for studies to be missing, for example, by the use of funnel plots. Rigorously conducted meta-analyses are useful tools in evidence-based medicine. The need to integrate findings from many studies ensures that meta-analytic research is desirable, and the large body of research now generated makes the conduct of this research feasible. Hippokratia 2010; 14 (Suppl 1): 29-37

Key words: meta-analysis, systematic review, randomized clinical trial, bias, quality, evidence-based medicine

Corresponding author: Anna-Bettina Haidich, Department of Hygiene and Epidemiology, Aristotle University of Thessaloniki, School of Medicine, 54124 Thessaloniki, Greece, Tel: +302310-999143, Fax: +302310-999701, e-mail: haidich@med.auth.gr

Important medical questions are typically studied more than once, often by different research teams in different locations. In many instances, the results of these multiple small studies of an issue are diverse and conflicting, which makes clinical decision-making difficult. The need to arrive at decisions affecting clinical practice fostered the momentum toward "evidence-based medicine"1-2. Evidence-based medicine may be defined as the systematic, quantitative, preferentially experimental approach to obtaining and using medical information. Therefore, meta-analysis, a statistical procedure that integrates the results of several independent studies, plays a central role in evidence-based medicine. In fact, in the hierarchy of evidence (Figure 1), where clinical evidence is ranked according to its freedom from the various biases that beset medical research, meta-analyses are at the top. In contrast, animal research, laboratory studies, case series, and case reports have little clinical value as proof, hence being at the bottom.
Meta-analysis did not begin to appear regularly in the medical literature until the late 1970s, but since then a plethora of meta-analyses have emerged and the growth is exponential over time (Figure 2)3. Moreover, it has been shown that meta-analyses are the most frequently cited form of clinical research4. The merits and perils of the somewhat mysterious procedure of meta-analysis, however, continue to be debated in the medical community5-8. The objectives of this paper are to introduce meta-analysis and to discuss the rationale for this type of research and other general considerations.

Figure 1: Hierarchy of evidence.

Meta-Analysis and Systematic Review

Glass first defined meta-analysis in the social science literature as "the statistical analysis of a large collection of analysis results from individual studies for the purpose of integrating the findings"9. Meta-analysis is a quantitative, formal, epidemiological study design used to systematically assess the results of previous research to derive conclusions about that body of research. Typically, but not necessarily, the study is based on randomized, controlled clinical trials. Outcomes from a meta-analysis may include a more precise estimate of the effect of treatment or risk factor for disease, or other outcomes, than any individual study contributing to the pooled analysis. Identifying sources of variation in responses, that is, examining the heterogeneity of a group of studies and the generalizability of responses, can lead to more effective treatments or modifications of management. Examination of heterogeneity is perhaps the most important task in meta-analysis.

The Cochrane Collaboration has been a long-standing, rigorous, and innovative leader in developing methods in the field10. Major contributions include the development of protocols that provide structure for literature search methods, and new and extended analytic and diagnostic methods for evaluating the output of meta-analyses. Use of the methods outlined in the handbook should provide a consistent approach to the conduct of meta-analysis. Moreover, a useful guide to improve the reporting of systematic reviews and meta-analyses is the PRISMA (Preferred Reporting Items for Systematic reviews and Meta-analyses) statement, which replaced the QUOROM (QUality Of Reporting Of Meta-analyses) statement11-13.

Meta-analyses are a subset of systematic reviews. A systematic review attempts to collate empirical evidence that fits prespecified eligibility criteria to answer a specific research question. The key characteristics of a systematic review are a clearly stated set of objectives with predefined eligibility criteria for studies; an explicit, reproducible methodology; a systematic search that attempts to identify all studies that meet the eligibility criteria; an assessment of the validity of the findings of the included studies (e.g., through the assessment of risk of bias); and a systematic presentation and synthesis of the attributes and findings from the studies used. Systematic methods are used to minimize bias, thus providing more reliable findings from which conclusions can be drawn and decisions made than traditional review methods14,15. Systematic reviews need not contain a meta-analysis; there are times when it is not appropriate or possible. However, many systematic reviews contain meta-analyses16.
The inclusion of observational medical studies in meta-analyses led to considerable debate over the validity of meta-analytical approaches, as there was necessarily a concern that the observational studies were likely to be subject to unidentified sources of confounding and risk modification17. Pooling such findings may not lead to more certain outcomes. Moreover, an empirical study showed that in meta-analyses where both randomized and non-randomized studies were included, non-randomized studies tended to show larger treatment effects18.

Meta-analyses are conducted to assess the strength of evidence present on a disease and treatment. One aim is to determine whether an effect exists; another aim is to determine whether the effect is positive or negative and, ideally, to obtain a single summary estimate of the effect. The results of a meta-analysis can improve the precision of estimates of effect, answer questions not posed by the individual studies, settle controversies arising from apparently conflicting studies, and generate new hypotheses. In particular, the examination of heterogeneity is vital to the development of new hypotheses.

Individual or Aggregated Data

The majority of meta-analyses are based on a series of studies to produce a point estimate of an effect and measures of the precision of that estimate. However, methods have been developed for meta-analyses to be conducted on data obtained from the original trials19,20. This approach may be considered the "gold standard" in meta-analysis because it offers advantages over analyses using aggregated data, including a greater ability to validate the quality of data and to conduct appropriate statistical analysis. Further, it is easier to explore differences in effect across subgroups within the study population than with aggregated data. The use of standardized individual-level information may help to avoid the problems encountered in meta-analyses of prognostic factors21,22. It is the best way to obtain a more global picture of the natural history and predictors of risk for major outcomes, such as in scleroderma23-26. This approach relies on cooperation between the researchers who conducted the relevant studies. Researchers who are aware of the potential to contribute to or conduct these studies will provide and obtain additional benefits by careful maintenance of original databases and by making these available for future studies.

Literature Search

A sound meta-analysis is characterized by a thorough and disciplined literature search. A clear definition of the hypotheses to be investigated provides the framework for such an investigation. According to the PRISMA statement, an explicit statement of the questions being addressed, with reference to participants, interventions, comparisons, outcomes, and study design (PICOS), should be provided11,12. It is important to obtain all relevant studies, because the loss of studies can lead to bias in the study.

Figure 2: Cumulative number of publications about meta-analysis over time, until 17 December 2009 (results from Medline search using the text "meta-analysis").
Typically, published papers and abstracts are identified by a computerized literature search of electronic databases that can include PubMed (www.ncbi.nlm.nih.gov/entrez/query.fcgi), ScienceDirect (www.sciencedirect.com), Scirus (www.scirus.com/srsapp), ISI Web of Knowledge (http://www.isiwebofknowledge.com), Google Scholar (http://scholar.google.com), and CENTRAL (Cochrane Central Register of Controlled Trials, http://www.mrw.interscience.wiley.com/cochrane/cochrane_clcentral_articles_fs.htm). The PRISMA statement recommends that a full electronic search strategy for at least one major database be presented12. Database searches should be augmented with hand searches of library resources for relevant papers, books, abstracts, and conference proceedings. Cross-checking of references, citations in review papers, and communication with scientists who have been working in the relevant field are important methods used to provide a comprehensive search. Communication with pharmaceutical companies manufacturing and distributing test products can be appropriate for studies examining the use of pharmaceutical interventions.

It is not feasible to find absolutely every relevant study on a subject. Some or even many studies may not be published, and those that are might not be indexed in computer-searchable databases. Useful sources for unpublished trials are the clinical trials registers, such as the National Library of Medicine's ClinicalTrials.gov website. The reviews should attempt to be sensitive, that is, find as many studies as possible, to minimize bias and be efficient. It may be appropriate to frame a hypothesis that considers the time over which a study is conducted or to target a particular subpopulation. The decision whether to include unpublished studies is difficult. Although the language of publication can present a difficulty, it is important to overcome this difficulty, provided that the populations studied are relevant to the hypothesis being tested.

Inclusion or Exclusion Criteria and Potential for Bias

Studies are chosen for meta-analysis based on inclusion criteria. If there is more than one hypothesis to be tested, separate selection criteria should be defined for each hypothesis. Inclusion criteria are ideally defined at the stage of initial development of the study protocol. The rationale for the study selection criteria used should be clearly stated. One important potential source of bias in meta-analysis is the loss of trials and subjects. Ideally, all randomized subjects in all studies satisfy all of the trial selection criteria, comply with all the trial procedures, and provide complete data. Under these conditions, an "intention-to-treat" analysis is straightforward to implement; that is, statistical analysis is conducted on all subjects that are enrolled in a study rather than only those that complete all stages of the study considered desirable. Some empirical studies have shown that certain methodological characteristics, such as poor concealment of treatment allocation or lack of blinding, exaggerate treatment effects27. Therefore, it is important to critically appraise the quality of studies in order to assess the risk of bias. The study design, including details of the method of randomization of subjects to treatment groups, criteria for eligibility in the study, blinding, the method of assessing the outcome, and the handling of protocol deviations, are important features defining study quality.
When studies are excluded from a meta-analysis, the reason for exclusion should be provided for each excluded study. Usually, more than one assessor decides independently which studies to include or exclude, using a well-defined checklist and a procedure to be followed when the assessors disagree. Typically, two people familiar with the study topic perform the quality assessment for each study independently, followed by a consensus meeting to discuss the studies included or excluded. In practice, blinding reviewers to details of a study such as authorship and journal source is difficult. Before assessing study quality, a quality assessment protocol and data forms should be developed. The goal of this process is to reduce the risk of bias in the estimate of effect. Quality scores that summarize multiple components into a single number exist but are misleading and unhelpful28. Rather, investigators should use individual components of quality assessment, describe the trials that do not meet the specified quality standards and, as part of the sensitivity analyses, assess the effect of excluding them on the overall results.

Further, not all studies are completed, because of protocol failure, treatment failure, or other factors. Nonetheless, missing subjects and studies can provide important evidence. It is desirable to obtain data from all relevant randomized trials so that the most appropriate analysis can be undertaken. Previous studies have discussed the significance of missing trials for the interpretation of intervention studies in medicine29,30. Journal editors and reviewers need to be aware of the existing bias toward publishing positive findings and should ensure that papers reporting negative or even failed trials are published, as long as they meet the quality guidelines for publication.

There are occasions when the authors of the selected papers have chosen different outcome criteria for their main analysis. In practice, it may be necessary to revise the inclusion criteria for a meta-analysis after reviewing all of the studies found through the search strategy. Variation among studies reflects the type of study design used, the type and application of experimental and control therapies, whether or not the study was published and, if published, subjected to peer review, and the definition used for the outcome of interest. There are no standardized criteria for inclusion of studies in meta-analysis; universal criteria would not be appropriate, because meta-analysis can be applied to a broad spectrum of topics. Published data in journal papers should also be cross-checked against conference papers to avoid duplication of presented data.

Clearly, unpublished studies are not found by searching the literature. It is possible that published studies are systematically different from unpublished studies; for example, positive trial findings may be more likely to be published. Therefore, a meta-analysis based on literature search results alone may be subject to publication bias. Efforts to minimize this potential bias include working from the references in published studies, searching computerized databases of unpublished material, and investigating other sources of information, including conference proceedings, graduate dissertations and clinical trial registers.

Statistical analysis
The most common measures of effect used for dichotomous data are the risk ratio (also called the relative risk) and the odds ratio.
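As a concrete illustration, the following minimal Python sketch computes a risk ratio and an odds ratio, with their log-scale standard errors and 95% confidence intervals, from a single 2x2 table; the event counts are hypothetical and chosen only for illustration.

```python
# Minimal sketch: risk ratio (RR) and odds ratio (OR) from one 2x2 table,
# with standard errors on the log scale (the scale on which they are pooled).
import math

a, b = 12, 88   # treatment group: events, non-events (hypothetical counts)
c, d = 24, 76   # control group:   events, non-events (hypothetical counts)

rr = (a / (a + b)) / (c / (c + d))   # risk ratio (relative risk)
or_ = (a * d) / (b * c)              # odds ratio

# Standard errors of log(RR) and log(OR), later used for inverse-variance weighting
se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)

# 95% confidence intervals, back-transformed from the log scale
z = 1.96
ci_rr = (math.exp(math.log(rr) - z * se_log_rr), math.exp(math.log(rr) + z * se_log_rr))
ci_or = (math.exp(math.log(or_) - z * se_log_or), math.exp(math.log(or_) + z * se_log_or))

print(f"RR = {rr:.2f}, 95% CI {ci_rr[0]:.2f}-{ci_rr[1]:.2f}")
print(f"OR = {or_:.2f}, 95% CI {ci_or[0]:.2f}-{ci_or[1]:.2f}")
```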
The dominant method used for continuous data is standardized mean difference (SMD) estimation. Methods used in meta-analysis for post hoc analysis of findings are relatively specific to meta-analysis and include heterogeneity analysis, sensitivity analysis, and evaluation of publication bias. All methods used should allow for the weighting of studies. The concept of weighting reflects the value of the evidence contributed by any particular study. Usually, studies are weighted according to the inverse of their variance31. Smaller studies therefore usually contribute less to the estimate of overall effect. However, a well-conducted study with tight control of measurement variation and sources of confounding contributes more to the estimate of overall effect than a less well conducted study of identical size.

One of the foremost decisions to be made when conducting a meta-analysis is whether to use a fixed-effects or a random-effects model. A fixed-effects model is based on the assumption that the sole source of variation in observed outcomes is that occurring within each study; that is, the effect expected from each study is the same. Consequently, it is assumed that the studies are homogeneous: there are no differences in the underlying study population, no differences in subject selection criteria, and treatments are applied in the same way32. The fixed-effects methods most often used for dichotomous data are the Mantel-Haenszel method33 and the Peto method34 (the latter only for odds ratios). Random-effects models have the underlying assumption that a distribution of effects exists, resulting in heterogeneity among study results, which is quantified by the between-study variance τ². As software has improved, random-effects models, which require greater computing power, have come to be used more frequently. This is desirable because the strong assumption that the effect of interest is the same in all studies is frequently untenable. Moreover, the fixed-effects model is not appropriate when statistical heterogeneity (τ²) is present in the results of the studies in the meta-analysis. In the random-effects model, studies are weighted by the inverse of the sum of their within-study variance and the heterogeneity parameter; it is therefore usually a more conservative approach, with wider confidence intervals than the fixed-effects model, in which studies are weighted only by the inverse of their variance. The most commonly used random-effects method is the DerSimonian and Laird method35. Furthermore, it has been suggested that both fixed-effects and random-effects models be fitted and compared, as this process can yield insights into the data36 (a minimal sketch of both models follows below).

Heterogeneity
Arguably, the greatest benefit of conducting a meta-analysis is the ability to examine sources of heterogeneity, if present, among studies. If heterogeneity is present, the summary measure must be interpreted with caution37, and one should question whether and how to generalize the results. Understanding the sources of heterogeneity will lead to more effective targeting of prevention and treatment strategies and will result in new research topics being identified. Part of the strategy in conducting a meta-analysis is to identify factors that may be significant determinants for subpopulation analyses, or covariates that may be appropriate to explore across all studies.
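To make the weighting scheme concrete, the following is a minimal Python sketch of inverse-variance pooling under both a fixed-effects model and a DerSimonian-Laird random-effects model. The per-study log risk ratios and standard errors are hypothetical values chosen only for illustration, not data from any cited study.

```python
# Minimal sketch of inverse-variance pooling: fixed-effects vs
# DerSimonian-Laird random-effects. All study values are hypothetical.
import numpy as np

log_rr = np.array([-0.65, -0.40, -0.10, -0.90, -0.30])   # hypothetical study effects
se     = np.array([ 0.30,  0.25,  0.40,  0.35,  0.20])   # hypothetical standard errors

# Fixed-effects model: weights are simply the inverse within-study variances
w_fixed = 1.0 / se**2
pooled_fixed = np.sum(w_fixed * log_rr) / np.sum(w_fixed)
se_fixed = np.sqrt(1.0 / np.sum(w_fixed))

# DerSimonian-Laird estimate of the between-study variance tau^2,
# derived from Cochran's Q statistic
k = len(log_rr)
q = np.sum(w_fixed * (log_rr - pooled_fixed) ** 2)
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - (k - 1)) / c)

# Random-effects model: weights include tau^2, so small studies gain
# relative weight and the confidence interval widens
w_random = 1.0 / (se**2 + tau2)
pooled_random = np.sum(w_random * log_rr) / np.sum(w_random)
se_random = np.sqrt(1.0 / np.sum(w_random))

for label, est, s in [("fixed", pooled_fixed, se_fixed),
                      ("random", pooled_random, se_random)]:
    lo, hi = est - 1.96 * s, est + 1.96 * s
    print(f"{label:>6}-effects pooled RR = {np.exp(est):.2f} "
          f"(95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")
print(f"tau^2 = {tau2:.3f}")
```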
To understand the nature of variability among studies, it is important to distinguish between different sources of heterogeneity. Variability in the participants, interventions, and outcomes studied has been described as clinical diversity, and variability in study design and risk of bias has been described as methodological diversity10. Variability in the intervention effects being evaluated among the different studies is known as statistical heterogeneity and is a consequence of clinical or methodological diversity, or both. Statistical heterogeneity manifests itself in observed intervention effects that vary by more than would be expected from random error alone. Usually, in the literature, statistical heterogeneity is simply referred to as heterogeneity.

Clinical variation will cause heterogeneity if the intervention effect is modified by the factors that vary across studies; most obviously, the specific interventions or participant characteristics, which are often reflected in different levels of risk in the control group when the outcome is dichotomous. In other words, the true intervention effect will differ between studies. Differences between studies in the methods used, such as the use of blinding, or in the definition or measurement of outcomes, may also lead to differences in observed effects. Significant statistical heterogeneity arising from differences in methods or in outcome assessments suggests that the studies are not all estimating the same effect, but it does not necessarily suggest that the true intervention effect varies. In particular, heterogeneity associated solely with methodological diversity indicates that the studies suffer from different degrees of bias. Empirical evidence suggests that some aspects of design can affect the results of clinical trials, although this may not always be the case.

The scope of a meta-analysis will largely determine the extent to which the studies included in a review are diverse. A meta-analysis should be conducted when a group of studies is sufficiently homogeneous in terms of subjects, interventions, and outcomes to provide a meaningful summary. However, it is often appropriate to take a broader perspective in a meta-analysis than in a single clinical trial. Combining studies that differ substantially in design and other factors can yield a meaningless summary result, but the evaluation of the reasons for heterogeneity among studies can be very insightful. It may be argued that such studies are of intrinsic interest on their own, even though it is not appropriate to produce a single summary estimate of effect.

Variation among k trials is usually assessed using Cochran's Q statistic, a chi-squared (χ²) test of heterogeneity with k-1 degrees of freedom. This test has relatively poor power to detect heterogeneity among small numbers of trials; consequently, an α-level of 0.10 is used to test the hypothesis38,39. Heterogeneity of results among trials is better quantified using the inconsistency index I², which describes the percentage of total variation across studies that is due to heterogeneity rather than chance40. Uncertainty intervals for I² (dependent on Q and k) can be calculated using the method described by Higgins and Thompson41.
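The following minimal sketch computes Cochran's Q and I² for hypothetical per-study effects of the same kind used in the pooling sketch above; the α = 0.10 convention mentioned in the text would be applied to the resulting p-value. (The uncertainty interval for I² is not reproduced here.)

```python
# Minimal sketch of Cochran's Q test and the I^2 inconsistency index.
# The per-study effects and standard errors are hypothetical.
import numpy as np
from scipy import stats

effects = np.array([-0.65, -0.40, -0.10, -0.90, -0.30])  # hypothetical
se      = np.array([ 0.30,  0.25,  0.40,  0.35,  0.20])  # hypothetical

w = 1.0 / se**2
pooled = np.sum(w * effects) / np.sum(w)        # fixed-effect pooled estimate

k = len(effects)
q = np.sum(w * (effects - pooled) ** 2)          # Cochran's Q
p_value = stats.chi2.sf(q, df=k - 1)             # compare with alpha = 0.10 by convention
i2 = max(0.0, (q - (k - 1)) / q) * 100           # I^2 as a percentage, floored at 0

print(f"Q = {q:.2f} on {k - 1} df, p = {p_value:.3f}")
print(f"I^2 = {i2:.1f}% of total variation across studies")
```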
Negative values of I² are set equal to zero, so that I² lies between 0% and 100%. A value greater than 75% may be considered to indicate substantial heterogeneity41. This statistic is less influenced by the number of trials than other methods of estimating heterogeneity and provides a logical and readily interpretable metric, but it can still be unstable when only a few studies are combined42.

Given that there are several potential sources of heterogeneity in the data, several steps should be considered in investigating its causes. Although random-effects models are appropriate in this situation, it may still be very desirable to examine the data to identify sources of heterogeneity and to take steps to produce models with a lower level of heterogeneity, where appropriate. Further, if the studies examined are highly heterogeneous, it may not be appropriate to present an overall summary estimate, even when random-effects models are used. As Petitti notes43, statistical analysis alone will not make contradictory studies agree; critically, one should use common sense in decision-making. Despite heterogeneity in responses, if all studies pointed in a positive direction and the pooled confidence interval did not include zero, it would not be logical to conclude that a positive effect was absent, provided that sufficient studies and subject numbers were available; it is the appropriateness of the point estimate of the effect that is much more in question.

Among the ways to investigate the reasons for heterogeneity are subgroup analysis and meta-regression. The subgroup analysis approach, a variation on those described above, groups categories of subjects (e.g., by age or sex) to compare effect sizes. The meta-regression approach uses regression analysis to determine the influence of selected variables (the independent variables) on the effect size (the dependent variable). In a meta-regression, studies are treated as if they were individual patients, but their effects are properly weighted to account for their different variances44. Sensitivity analyses have also been used to examine the effects of studies identified as being aberrant in conduct or result, or as being highly influential in the analysis. Recently, another method has been proposed that reduces the weight of studies that are outliers in meta-analyses45. All of these methods for examining heterogeneity have merit, and the variety of methods available reflects the importance of this activity.

Presentation of results
A useful graph, presented in the PRISMA statement11, is the four-phase flow diagram (Figure 3). This flow diagram depicts the flow of information through the different phases of a systematic review or meta-analysis: it maps out the number of records identified, included, and excluded, and the reasons for exclusions. The results of meta-analyses are often presented in a forest plot, in which each study is shown with its effect size and the corresponding 95% confidence interval (Figure 4); a sketch of such a plot follows below. The pooled effect and its 95% confidence interval are shown at the bottom, on the line labelled "Overall". In the right panel of Figure 4, a cumulative meta-analysis is displayed graphically, in which data are entered successively, typically in the order of their chronological appearance46,47.
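As a rough illustration of the forest plot just described, the sketch below draws hypothetical study-level risk ratios with 95% confidence intervals and a pooled estimate using matplotlib; the study labels and all values are invented for illustration.

```python
# Minimal sketch of a forest plot: each hypothetical study is drawn with its
# risk ratio and 95% confidence interval, with the pooled estimate at the bottom.
import numpy as np
import matplotlib.pyplot as plt

labels  = ["Study A", "Study B", "Study C", "Study D", "Overall"]   # hypothetical
rr      = np.array([0.52, 0.67, 0.90, 0.41, 0.62])                  # hypothetical estimates
ci_low  = np.array([0.29, 0.41, 0.41, 0.21, 0.48])                  # hypothetical lower limits
ci_high = np.array([0.93, 1.10, 1.98, 0.80, 0.80])                  # hypothetical upper limits

y = np.arange(len(labels))[::-1]                 # plot studies from top to bottom
fig, ax = plt.subplots(figsize=(6, 3))
ax.errorbar(rr, y,
            xerr=[rr - ci_low, ci_high - rr],    # asymmetric confidence-interval bars
            fmt="o", color="black", capsize=3)
ax.axvline(1.0, linestyle="--", color="grey")    # line of no effect (RR = 1)
ax.set_xscale("log")                             # ratio measures are usually on a log scale
ax.set_yticks(y)
ax.set_yticklabels(labels)
ax.set_xlabel("Risk ratio (log scale)")
fig.tight_layout()
plt.show()
```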
Such a cumulative meta-analysis can retrospectively identify the point in time at which a treatment effect first reached conventional levels of significance. Cumulative meta-analysis is a compelling way to examine trends in the evolution of the summary effect size and to assess the impact of a specific study on the overall conclusions46. Figure 4 shows that many studies were performed long after a cumulative meta-analysis would have demonstrated a significant beneficial effect of antibiotic prophylaxis in colon surgery.

[Figure 3: PRISMA 2009 flow diagram. From Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. J Clin Epidemiol 2009;62:1006-1012. For more information, visit www.prisma-statement.org.]

Biases in meta-analysis
Although the intent of a meta-analysis is to find and assess all studies meeting the inclusion criteria, it is not always possible to obtain them all, and a critical concern is the papers that may have been missed. There is good reason to be concerned about this potential loss, because studies with significant, positive results ("positive" studies) are more likely to be published and, in the case of interventions with a commercial value, to be promoted, than studies with non-significant or "negative" results. Studies that produce a positive result, especially large studies, are more likely to have been published and, conversely, there has been a reluctance to publish small studies with non-significant results. Further, publication bias is not solely the responsibility of editorial policy, as there is also reluctance among researchers to publish results that are either uninteresting or not randomized48. There are, however, problems with simply including all studies that have failed to meet peer-review standards, and all methods of retrospectively dealing with bias in studies are imperfect.

It is important to examine the results of each meta-analysis for evidence of publication bias. An estimation of the likely size of the publication bias in the review, and an approach to dealing with it, is inherent to the conduct of many meta-analyses. Several methods have been developed to assess publication bias; the most commonly used is the funnel plot, which provides a graphical evaluation of the potential for bias and was developed by Light and Pillemer49 and discussed in detail by Egger and colleagues50,51. A funnel plot is a scatterplot of treatment effect against a measure of study size. If publication bias is not present, the plot is expected to have a symmetric inverted funnel shape, as shown in Figure 5A. When there is no publication bias, the larger studies (i.e., those with lower standard error) tend to cluster closely around the point estimate, whereas the results of less precise, smaller trials (i.e., those with higher standard error) are expected to be more variable and are scattered to both sides of the more precise larger studies. In Figure 5A the smaller, less precise studies are indeed scattered symmetrically to both sides of the point estimate of effect, forming an inverted funnel and showing no evidence of publication bias. In contrast, Figure 5B shows evidence of publication bias: studies with smaller numbers of subjects that showed a decrease in effect size (a lower odds ratio) appear not to have been published.
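The sketch below simulates a funnel plot under an assumed absence of publication bias, plotting simulated log odds ratios against their standard errors with the most precise studies at the top; all values are simulated for illustration only.

```python
# Minimal sketch of a funnel plot: treatment effect (log odds ratio) against
# its standard error, with the y-axis inverted so that the most precise
# studies sit at the top. No publication bias is simulated here.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
true_effect = -0.4                                  # assumed true log odds ratio
se = rng.uniform(0.05, 0.6, size=60)                # hypothetical study precisions
log_or = rng.normal(true_effect, se)                # simulated study estimates

fig, ax = plt.subplots(figsize=(5, 4))
ax.scatter(log_or, se, s=15, color="black")
ax.axvline(true_effect, linestyle="--", color="grey")
ax.invert_yaxis()                                   # precise (small-SE) studies at the top
ax.set_xlabel("Log odds ratio")
ax.set_ylabel("Standard error")
ax.set_title("Funnel plot (no publication bias simulated)")
fig.tight_layout()
plt.show()
```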
[Figure 4: Forest plots of the meta-analysis addressing the use of antibiotic prophylaxis compared with no treatment in colon surgery. The outcome is wound infection, and 32 studies were included in the meta-analysis. A risk ratio <1 favors use of prophylactic antibiotics, whereas a risk ratio >1 suggests that no treatment is better. Left panel: studies displayed chronologically by year of publication; n represents study size. Each study is represented by a filled circle (denoting its risk ratio estimate), and the horizontal line denotes the corresponding 95% confidence interval. Studies whose intervals intersect the vertical line of unity (RR=1) indicate no difference between the antibiotic group and the control group. Pooled results from all studies are shown at the bottom, from the random-effects model. Right panel: cumulative meta-analysis of the same studies with the random-effects model, in which the summary risk ratio is re-estimated each time a study is added over time. It reveals that the efficacy of antibiotic prophylaxis could have been identified as early as 1971, after 5 studies involving about 300 patients (n in this panel represents the cumulative number of patients from the included studies). From Ioannidis JP, Lau J. State of the evidence: current status and prospects of meta-analysis in infectious diseases. Clin Infect Dis 1999;29:1178-1185.]

Asymmetry of funnel plots is not solely attributable to publication bias; it may also result from clinical heterogeneity among studies, such as differences in the control of or subjects' exposure to confounders and effect modifiers, or from methodological heterogeneity between studies, for example a failure to conceal treatment allocation. There are several statistical tests for detecting funnel plot asymmetry, for example Egger's linear regression test50 and Begg's rank correlation test52, but these have limited power and are rarely used (a minimal sketch of Egger's test appears below). The funnel plot itself is not without problems: if high-precision studies really do differ from low-precision studies with respect to effect size (e.g., because different populations were examined), a funnel plot may give a false impression of publication bias53, and the appearance of the plot can change quite dramatically depending on the scale used for the y-axis, whether it is the inverse squared error or the trial size54.

Other types of bias in meta-analysis include time lag bias, selective reporting bias, and language bias. Time lag bias arises when studies with striking results are published earlier than those with non-significant findings55. Moreover, it has been shown that positive studies with high early accrual of patients are published sooner than negative trials with low early accrual56. However, missing studies, whether due to publication bias or time lag bias, may increasingly be identified from trial registries. Selective reporting bias exists when published articles report outcomes incompletely or inadequately. Empirical studies comparing published reports with their protocols have shown that this bias is widespread and of considerable importance29,30. Furthermore, recent evidence suggests that selective reporting might be an issue for safety outcomes, and the reporting of harms in clinical trials is still suboptimal57.
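Returning to the asymmetry tests mentioned above, the following is a minimal sketch of Egger's linear regression test, in which the standardized effect is regressed on precision and the intercept is examined. The per-study values are hypothetical, and using statsmodels to obtain a p-value for the intercept is a convenience assumed by this sketch rather than a requirement of the method.

```python
# Minimal sketch of Egger's regression test for funnel plot asymmetry:
# regress the standardized effect (effect / SE) on precision (1 / SE);
# an intercept that differs from zero suggests small-study asymmetry.
import numpy as np
import statsmodels.api as sm

log_or = np.array([-0.70, -0.55, -0.35, -0.20, -0.45, -0.10, -0.60, -0.25])  # hypothetical
se     = np.array([ 0.45,  0.38,  0.20,  0.15,  0.30,  0.12,  0.50,  0.18])  # hypothetical

std_effect = log_or / se          # standardized effects
precision = 1.0 / se              # predictor in Egger's regression

X = sm.add_constant(precision)    # adds the intercept term being tested
fit = sm.OLS(std_effect, X).fit()

intercept = fit.params[0]
p_intercept = fit.pvalues[0]
print(f"Egger intercept = {intercept:.2f} (p = {p_intercept:.3f}); "
      "a non-zero intercept suggests funnel plot asymmetry")
```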
Therefore, it might not be possible to use quantitative, objective evidence on harms when performing meta-analyses and making therapeutic decisions. Excluding clinical trials reported in languages other than English may introduce language bias and reduce the precision of combined estimates of treatment effects. Trials with statistically significant results have been shown to be more likely to be published in English58. In contrast, a later, more extensive investigation showed that trials published in languages other than English tend to be of lower quality and to produce more favourable treatment effects than trials published in English, and concluded that excluding non-English-language trials generally has only modest effects on summary treatment effect estimates, although the effect is difficult to predict for individual meta-analyses59.

Evolution of meta-analyses
The classical meta-analysis compares two treatments, whereas network meta-analysis (or multiple-treatment meta-analysis) can provide estimates of the efficacy of multiple treatment regimens, even when direct comparisons are unavailable, by using indirect comparisons60. An example of a network meta-analysis would be the following. An initial trial compares drug A to drug B; a different trial studying the same patient population compares drug B to drug C. Assume that drug A is found to be superior to drug B in the first trial and that drug B is found to be equivalent to drug C in the second trial. Network meta-analysis then allows one, potentially, to conclude statistically that drug A is also superior to drug C for this particular patient population (since drug A is better than drug B, and drug B is equivalent to drug C, drug A is also better than drug C even though it was never directly tested against drug C); a sketch of such an indirect comparison appears at the end of this section.

Meta-analysis can also be used to summarize the performance of diagnostic and prognostic tests. However, studies that evaluate the accuracy of tests have a unique design requiring different criteria to appropriately assess study quality and the potential for bias. Additionally, each such study reports a pair of related summary statistics (for example, sensitivity and specificity) rather than a single statistic (such as a risk ratio) and hence requires different statistical methods to pool the results of the studies61. Various techniques to summarize results from diagnostic and prognostic tests have been proposed62-64. Furthermore, many methodologies for advanced meta-analysis have been developed to address specific concerns, such as multivariate meta-analysis65-67 and special types of meta-analysis in genetics68, but these will not be discussed here.

[Figure 5: A) Symmetrical funnel plot. B) Asymmetrical funnel plot; the small negative studies in the bottom left corner are missing.]

Meta-analysis is no longer a novelty in medicine, and numerous meta-analyses have been conducted on the same medical topic by different researchers. Recently, there has been a trend toward combining the results of different meta-analyses, known as a meta-epidemiological study, to assess the risk of bias69,70.
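The drug A/B/C example above can be made concrete with a Bucher-style adjusted indirect comparison, sketched below with hypothetical summary estimates. This is only a simple special case of the network meta-analysis methods cited in the text, not the full methodology.

```python
# Minimal sketch of an indirect comparison: the A-versus-C effect is inferred
# from A-versus-B and B-versus-C summary estimates (Bucher-style adjusted
# indirect comparison). All log odds ratios and standard errors are hypothetical.
import math

log_or_ab, se_ab = -0.50, 0.15     # hypothetical summary: drug A vs drug B
log_or_bc, se_bc = -0.05, 0.12     # hypothetical summary: drug B vs drug C

# On the log scale the effects add (OR_AC = OR_AB * OR_BC) and the variances add
log_or_ac = log_or_ab + log_or_bc
se_ac = math.sqrt(se_ab**2 + se_bc**2)

lo = math.exp(log_or_ac - 1.96 * se_ac)
hi = math.exp(log_or_ac + 1.96 * se_ac)
print(f"Indirect OR, A vs C = {math.exp(log_or_ac):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Note that the indirect estimate is less precise than either direct comparison, which is why full network meta-analyses combine direct and indirect evidence when both are available.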
Conclusions
The traditional basis of medical practice has been changed by the use of randomized, blinded, multicenter clinical trials and meta-analysis, leading to the widely used term "evidence-based medicine". A leader in initiating this change has been the Cochrane Collaboration, which has produced guidelines for conducting systematic reviews and meta-analyses10; more recently, the PRISMA statement, a helpful resource for improving the reporting of systematic reviews and meta-analyses, has been released11. Moreover, standards for conducting and reporting meta-analyses of observational studies have been published to improve the quality of reporting71.

Meta-analysis of randomized clinical trials is not an infallible tool, however, and several examples exist of meta-analyses that were later contradicted by single large randomized controlled trials, and of meta-analyses addressing the same issue that reached opposite conclusions72. A recent example was the controversy between a meta-analysis of 42 studies73 and the subsequent publication of a large-scale trial (the RECORD trial) that did not support the cardiovascular risk of rosiglitazone74; the reason for this controversy, however, was explained by the numerous methodological flaws found in both the meta-analysis and the large clinical trial75. No single study, whether meta-analytic or not, will provide the definitive understanding of responses to treatment, diagnostic tests, or risk factors influencing disease. Despite this limitation, meta-analytic approaches have demonstrable benefits in addressing the limitations of study size, can include diverse populations, provide the opportunity to evaluate new hypotheses, and are more valuable than any single contributing study. The conduct of the included studies is critical to the value of a meta-analysis, and the methods used need to be as rigorous as those of any other study.

References
1. Guyatt GH, Sackett DL, Sinclair JC, Hayward R, Cook DJ, Cook RJ. User's guide to the medical literature. IX. A method for grading health care recommendations. Evidence-Based Medicine Working Group. JAMA. 1995; 274: 1800-1804.
2. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. BMJ. 1996; 312: 71-72.
3. Chalmers TC, Matta RJ, Smith H Jr, Kunzler AM. Evidence favoring the use of anticoagulants in the hospital phase of acute myocardial infarction. N Engl J Med. 1977; 297: 1091-1096.
4. Patsopoulos NA, Analatos AA, Ioannidis JP. Relative citation impact of various study designs in the health sciences. JAMA. 2005; 293: 2362-2366.
5. Hennekens CH, DeMets D, Bairey-Merz CN, Borzak S, Borer J. Doing more good than harm: the need for a cease fire. Am J Med. 2009; 122: 315-316.
6. Naylor CD. Meta-analysis and the meta-epidemiology of clinical research. BMJ. 1997; 315: 617-619.
7. Bailar JC. The promise and problems of meta-analysis [editorial]. N Engl J Med. 1997; 337: 559-561.
8. Meta-analysis under scrutiny [editorial]. Lancet. 1997; 350: 675.
9. Glass GV. Primary, secondary and meta-analysis of research. Educational Researcher. 1976; 5: 3-8.
10. Higgins JPT, Green S (editors). Cochrane Handbook for Systematic Reviews of Interventions Version 5.0.2 [updated September 2009]. The Cochrane Collaboration, 2009. Available from www.cochrane-handbook.org.
11. Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. J Clin Epidemiol. 2009; 62: 1006-1012.
12. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Götzsche PC, Ioannidis JP, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. J Clin Epidemiol. 2009; 62: e1-34.
13. Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup DF. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of Reporting of Meta-analyses. Lancet. 1999; 354: 1896-1900.
14. Antman EM, Lau J, Kupelnick B, Mosteller F, Chalmers TC. A comparison of results of meta-analyses of randomized control trials and recommendations of clinical experts: treatments for myocardial infarction. JAMA. 1992; 268: 240-248.
15. Oxman AD, Guyatt GH. The science of reviewing research. Ann N Y Acad Sci. 1993; 703: 125-133.
16. Lau J, Ioannidis JP, Schmid CH. Quantitative synthesis in systematic reviews. Ann Intern Med. 1997; 127: 820-826.
17. Greenland S. Invited commentary: a critical look at some popular meta-analytic methods. Am J Epidemiol. 1994; 140: 290-296.
18. Ioannidis JPA, Haidich A-B, Pappa M, Pantazis N, Kokori SI, Tektonidou MG, et al. Comparison of evidence of treatment effects in randomized and non-randomized studies. JAMA. 2001; 286: 821-830.
19. Simmonds MC, Higgins JPT, Stewart LA, Tierney JA, Clarke MJ, Thompson SG. Meta-analysis of individual patient data from randomized trials: a review of methods used in practice. Clin Trials. 2005; 2: 209-217.
20. Stewart LA, Clarke MJ. Practical methodology of meta-analyses (overviews) using updated individual patient data. Cochrane Working Group. Stat Med. 1995; 14: 2057-2079.
21. Altman DG. Systematic reviews of evaluations of prognostic variables. BMJ. 2001; 323: 224-228.
22. Simon R, Altman DG. Statistical aspects of prognostic factor studies in oncology. Br J Cancer. 1994; 69: 979-985.
23. Ioannidis JPA, Vlachoyiannopoulos PG, Haidich AB, Medsger TA Jr, Lucas M, Michet CJ, et al. Mortality in systemic sclerosis: an international meta-analysis of individual patient data. Am J Med. 2005; 118: 2-10.
24. Oxman AD, Clarke MJ, Stewart LA. From science to practice. Meta-analyses using individual patient data are needed. JAMA. 1995; 274: 845-846.
25. Stewart LA, Tierney JF. To IPD or not to IPD? Advantages and disadvantages of systematic reviews using individual patient data. Eval Health Prof. 2002; 25: 76-97.
26. Trikalinos TA, Ioannidis JPA. Predictive modeling and heterogeneity of baseline risk in meta-analysis of individual patient data. J Clin Epidemiol. 2001; 54: 245-252.
27. Pildal J, Hróbjartsson A, Jorgensen KJ, Hilden J, Altman DG, Götzsche PC. Impact of allocation concealment on conclusions drawn from meta-analyses of randomized trials. Int J Epidemiol. 2007; 36: 847-857.
28. Jüni P, Altman DG, Egger M. Systematic reviews in health care: assessing the quality of controlled clinical trials. BMJ. 2001; 323: 42-46.
29. Chan AW, Hróbjartsson A, Haahr MT, Götzsche PC, Altman DG. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA. 2004; 291: 2457-2465.
30. Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan A, Cronin E, et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One. 2008; 3: e3081.
31. Egger M, Smith GD, Phillips AN. Meta-analysis: principles and procedures. BMJ. 1997; 315: 1533-1537.
32. Stangl DK, Berry DA. Meta-Analysis in Medicine and Health Policy. Marcel Dekker, New York, NY. 2000.
33. Mantel N, Haenszel W. Statistical aspects of the analysis of data from retrospective studies of disease. J Natl Cancer Inst. 1959; 22: 719-748.
34. Yusuf S, Peto R, Lewis J, Collins R, Sleight P. Beta blockade during and after myocardial infarction: an overview of the randomized trials. Prog Cardiovasc Dis. 1985; 27: 335-371.
35. DerSimonian R, Laird N. Meta-analysis in clinical trials. Control Clin Trials. 1986; 7: 177-188.
36. Whitehead A. Meta-Analysis of Controlled Clinical Trials. John Wiley & Sons Ltd, Chichester, UK. 2002.
37. Greenland S. Quantitative methods in the review of epidemiologic literature. Epidemiol Rev. 1987; 9: 1-30.
38. Cochran W. The combination of estimates from different experiments. Biometrics. 1954; 10: 101-129.
39. Hardy RJ, Thompson SG. Detecting and describing heterogeneity in meta-analysis. Stat Med. 1998; 17: 841-856.
40. Higgins JP, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ. 2003; 327: 557-560.
41. Higgins JP, Thompson SG. Quantifying heterogeneity in a meta-analysis. Stat Med. 2002; 21: 1539-1558.
42. Huedo-Medina TB, Sanchez-Meca J, Marin-Martinez F, Botella J. Assessing heterogeneity in meta-analysis: Q statistic or I2 index? Psychol Methods. 2006; 11: 193-206.
43. Petitti DB. Meta-Analysis, Decision Analysis and Cost-Effectiveness Analysis. Oxford University Press, New York, NY. 1994.
44. Lau J, Ioannidis JP, Schmid CH. Quantitative synthesis in systematic reviews. Ann Intern Med. 1997; 127: 820-826.
45. Baker R, Jackson D. A new approach to outliers in meta-analysis. Health Care Manag Sci. 2008; 11: 121-131.
46. Lau J, Schmid CH, Chalmers TC. Cumulative meta-analysis of clinical trials builds evidence for exemplary medical care. J Clin Epidemiol. 1995; 48: 45-57.
47. Ioannidis JP, Lau J. State of the evidence: current status and prospects of meta-analysis in infectious diseases. Clin Infect Dis. 1999; 29: 1178-1185.
48. Dickersin K, Berlin JA. Meta-analysis: state of the science. Epidemiol Rev. 1992; 14: 154-176.
49. Light RJ, Pillemer DB. Summing Up: The Science of Reviewing Research. Harvard University Press, Cambridge, MA. 1984.
50. Egger M, Smith GD, Schneider M, Minder C. Bias in meta-analysis detected by a simple, graphical test. BMJ. 1997; 315: 629-634.
51. Sterne JA, Egger M. Funnel plots for detecting bias in meta-analysis: guidelines on choice of axis. J Clin Epidemiol. 2001; 54: 1046-1055.
52. Begg CB, Mazumdar M. Operating characteristics of a rank correlation test for publication bias. Biometrics. 1994; 50: 1088-1101.
53. Lau J, Ioannidis JP, Terrin N, Schmid CH, Olkin I. The case of the misleading funnel plot. BMJ. 2006; 333: 597-600.
54. Tang JL, Liu JL. Misleading funnel plot for detection of bias in meta-analysis. J Clin Epidemiol. 2000; 53: 477-484.
55. Ioannidis JP. Effect of the statistical significance of results on the time to completion and publication of randomized efficacy trials. JAMA. 1998; 279: 281-286.
56. Haidich AB, Ioannidis JP. Effect of early patient enrollment on the time to completion and publication of randomized controlled trials. Am J Epidemiol. 2001; 154: 873-880.
57. Haidich AB, Charis B, Dardavessis T, Tirodimos I, Arvanitidou M. The quality of safety reporting in trials is still suboptimal: survey of major general medical journals. J Clin Epidemiol. 2010; (in press).
58. Egger M, Zellweger-Zähner T, Schneider M, Junker C, Lengeler C, Antes G. Language bias in randomised controlled trials published in English and German. Lancet. 1997; 350: 326-329.
59. Jüni P, Holenstein F, Sterne J, Bartlett C, Egger M. Direction and impact of language bias in meta-analyses of controlled trials: empirical study. Int J Epidemiol. 2002; 31: 115-123.
60. Lumley T. Network meta-analysis for indirect treatment comparisons. Stat Med. 2002; 21: 2313-2324.
61. Deeks JJ. Systematic reviews in health care: systematic reviews of evaluations of diagnostic and screening tests. BMJ. 2001; 323: 157-162.
62. Littenberg B, Moses LE. Estimating diagnostic accuracy from multiple conflicting reports: a new meta-analytic method. Med Decis Making. 1993; 13: 313-321.
63. Reitsma JB, Glas AS, Rutjes AW, Scholten RJ, Bossuyt PM, Zwinderman AH. Bivariate analysis of sensitivity and specificity produces informative summary measures in diagnostic reviews. J Clin Epidemiol. 2005; 58: 982-990.
64. Rutter CM, Gatsonis CA. A hierarchical regression approach to meta-analysis of diagnostic test accuracy evaluations. Stat Med. 2001; 20: 2865-2884.
65. Arends LR. Multivariate Meta-Analysis: Modelling the Heterogeneity. Rotterdam: Erasmus University; 2006.
66. Riley RD, Abrams KR, Sutton AJ, Lambert PC, Thompson JR. Bivariate random-effects meta-analysis and the estimation of between-study correlation. BMC Med Res Methodol. 2007; 7: 3.
67. Trikalinos TA, Olkin I. A method for the meta-analysis of mutually exclusive binary outcomes. Stat Med. 2008; 27: 4279-4300.
68. Trikalinos TA, Salanti G, Zintzaras E, Ioannidis JP. Meta-analysis methods. Adv Genet. 2008; 60: 311-334.
69. Nüesch E, Trelle S, Reichenbach S, Rutjes AW, Bürgi E, Scherer M, et al. The effects of excluding patients from the analysis in randomized controlled trials: meta-epidemiological study. BMJ. 2009; 339: b3244.
70. Wood L, Egger M, Gluud LL, Schulz KF, Jüni P, Altman DG, et al. Empirical evidence of bias in treatment effect estimates in controlled trials with different interventions and outcomes: meta-epidemiological study. BMJ. 2008; 336: 601-605.
71. Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, Rennie D, et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis of Observational Studies in Epidemiology (MOOSE) group. JAMA. 2000; 283: 2008-2012.
72. LeLorier J, Grégoire G, Benhaddad A, Lapierre J, Derderian F. Discrepancies between meta-analyses and subsequent large randomized, controlled trials. N Engl J Med. 1997; 337: 536-542.
73. Nissen SE, Wolski K. Effect of rosiglitazone on the risk of myocardial infarction and death from cardiovascular causes. N Engl J Med. 2007; 356: 2457-2471.
74. Home PD, Pocock SJ, Beck-Nielsen H, Curtis PS, Gomis R, Hanefeld M, et al; RECORD Study Team. Rosiglitazone Evaluated for Cardiovascular Outcomes in Oral Agent Combination Therapy for Type 2 Diabetes (RECORD). Lancet. 2009; 373: 2125-2135.
75. Hlatky MA, Bravata DM. Review: rosiglitazone increases risk of MI but does not differ from other drugs for CV death in type 2 diabetes. Evid Based Med. 2007; 12: 169-170.
