Expert Review
Open Access

Conducting a High-Quality Systematic Review

Nadine Shehata and Rohan D’Souza
The Journal of Rheumatology July 2025, 52 (7) 636-646; DOI: https://doi.org/10.3899/jrheum.2024-1241
Nadine Shehata, MD, MSc: Departments of Medicine, Laboratory Medicine, and Pathobiology, Institute of Health Policy Management and Evaluation, University of Toronto; Division of Medical Oncology and Hematology, University Health Network; and Division of Hematology, Mount Sinai Hospital, Toronto. Correspondence: Nadine.shehata@sinaihealth.ca

Rohan D’Souza, MD, PhD: Departments of Obstetrics and Gynecology and Health Research Methods, Evidence and Impact, McMaster University, Hamilton, Ontario, Canada.

This article has a correction. Please see:

  • Errata - July 01, 2025

Abstract

Systematic reviews (SRs) are a structured means of knowledge synthesis used by a variety of healthcare practitioners to aid in medical decision making. The SR, if conducted rigorously, is considered to be at the top of the hierarchy of research studies. In addition to synthesizing evidence, SRs identify research priorities, address questions that may not be answerable by individual studies, and identify gaps to be addressed in future primary research. Several steps need to be taken when developing SRs to provide the best available evidence—the most essential being the assessment of risk of bias (ROB). Several ROB tools have been developed for use according to study design. Increasingly used is the assessment of certainty of evidence using approaches such as those developed by the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) working group. Whereas ROB is assessed for individual studies, the certainty of evidence is assessed for each critical or important outcome across studies. Analysis can be quantitative (metaanalysis) or qualitative (narrative), with the former intended to develop an estimate of the effect measure (ie, the statistic that compares collated data), together with confidence limits around that estimate. This review will focus on the steps required to develop SRs, from registration of the review protocol to the conduct, analysis, and reporting, with a focus on the assessment of ROB and certainty of evidence to ensure a methodical and rigorous process.

Key Indexing Terms:
  • critical appraisal
  • metaanalysis
  • quality
  • risk of bias
  • systematic review

Systematic reviews (SRs) use structured methods to identify, analyze, and collate scientific literature.1 If conducted methodically, the SR is considered the highest tier in the hierarchy of research studies.2 SRs generate an aggregate of knowledge that can be applied by a variety of users, including patients, healthcare providers, researchers, and policymakers.1 SRs are focused and unbiased, with explicit methods for identifying and analyzing collated research data.3 Further, SRs provide the best available evidence to inform decision making, both for clinical practice and healthcare policy.3,4

SRs differ from other types of reviews, such as scoping reviews. SRs are designed to address key clinical questions, analyze global evidence, address practice variation and conflicting results to guide decision making, identify evidence gaps, and inform future research. In contrast, a scoping review—also a type of knowledge synthesis that uses a systematic approach—has a broader scope and aims to identify concepts, theories, sources, and knowledge gaps pertaining to the objectives.5,6 Scoping reviews identify the types of evidence (eg, cohort studies, clinical trials), explore how research has been conducted, identify concept characteristics, and often precede the conduct of SRs.6 Scoping reviews are not intended to answer a clinical question (such as appropriateness or effectiveness of therapy) or to inform practice.6 The assessment of risk of bias (ROB) or critical appraisal is not an essential component of scoping reviews.5 Criteria for features to be included in a scoping review have been established,5 and descriptions of other types of reviews beyond the scope of this paper have been extensively reviewed elsewhere.7-9

SRs can assess the effectiveness and safety of a treatment, procedure, or policy; determine the accuracy of a test for diagnosis and/or prognosis; compare outcomes with different exposures described in observational studies; provide incidence estimates from single-arm studies; and analyze perspectives and experiences through qualitative evidence syntheses.10 The analysis in SRs can be quantitative or nonquantitative/qualitative, and both methods follow a systematic analytic approach. The quantitative SR features a metaanalysis, which is an analysis of all pertinent and clinically significant measures of effect—whether dichotomous (eg, mortality) or continuous (eg, duration of hospitalization)—that includes confidence intervals (CIs) and an assessment of heterogeneity (variability).11 A metaanalysis uses statistical techniques to combine outcomes of individual studies to provide an overall summary statistic, with the aims of providing a more precise estimate of the effect of an intervention on an outcome and reducing uncertainty.10 A qualitative review is a descriptive review developed when data are not amenable to metaanalysis—for example, when data are sparse, drawn from studies of different designs or of low quality,12 or too heterogeneous for statistical aggregation (Table 1).10,11

Table 1. Adapted from Treadwell et al.12

Both quantitative and qualitative SRs adhere to the same criteria for conduct, including developing and registering a protocol for the SR, a systematic approach to search for relevant references, an analysis for bias, and a summary according to the best available evidence.

Prior to performing an SR, bibliographic databases and registries for SRs on the same or similar research questions should be searched to avoid duplication.13 An SR team is assembled that includes content and methodological experts who ideally do not have conflicts of interest or involvement in important decisions required for the review.14 Establishing a team allows for the completion of tasks including the selection of eligible studies, data extraction, and assessment of the ROB by ≥ 2 people independently to minimize the probability of errors.14

Patient and public involvement in SRs is essential, similar to that in randomized controlled trials (RCTs), and is increasingly being described. An analysis of 56 SRs demonstrated that 59% solely involved patients, 18% solely engaged the public, and 23% included both. Patients and the public were involved at various phases of the review process, though predominantly in the development of the question and in the interpretation of findings. Involvement can include focus groups or ongoing patient participation. Acknowledgment or authorship may be considered for the latter, although this is not routinely offered.15

Registration of a protocol for an SR

SRs are developed to be transparent, robust (ie, the degree to which minor alterations in data do not alter conclusions),12 and as free from bias as possible.4 Conducting a high-quality SR requires the development of a protocol that defines the main objectives, design, and planned analyses for the review. A protocol written in advance of the review and completed prior to determining study eligibility is ideal to ensure that the review methods are transparent and reproducible. Publication of protocols (and completed reviews) permits tracking of revisions, enabling an examination of the effect that changes may have on the results of the review.16

Prospective registration of SR protocols may also prevent unintended duplication.13 Protocol registration differs from publication of a protocol manuscript, as the latter undergoes peer review.13 There are several options for registering a protocol for an SR, including registries that are specific to SRs and those that include SRs among other study types.13 Organizations conducting or commissioning SRs, such as the Cochrane Collaboration or the Joanna Briggs Institute (JBI), have their own databases of ongoing and published SRs that are restricted to reviews performed within their organizations.13 The International Prospective Register of Systematic Reviews (PROSPERO), established in 2011, is one of the most commonly used registries. PROSPERO includes several mandatory fields to describe the clinical question, inclusion/exclusion criteria, data collection process, critical appraisal, primary and secondary outcomes, data synthesis, and investigators.17 The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) reporting guidelines—originally developed for SRs of RCTs—have been extended to include reporting guidelines for SR protocols (PRISMA-P) and include fields similar to those of PROSPERO.18 Completing the protocol before conducting the literature search leads to a more comprehensive search strategy. PRISMA-P is intended to facilitate the reporting of a protocol and registration with PROSPERO.19 Publication of well-developed SRs now includes registration information and often the submission of the SR protocol as a supplementary appendix. Registration of an SR, and the steps required in the development of a protocol as defined by PRISMA-P,19 are akin to registration of RCTs with clinical trial registries such as ClinicalTrials.gov and the International Standard Randomised Controlled Trial Number (ISRCTN) registry in the United Kingdom, and to the protocol items required for RCTs by the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) checklist.20 For both SRs and RCTs, these checklists are designed to optimize transparency, reproducibility, and completeness in design and conduct.20

Steps in the development of an SR

Various agencies have published guidance for the development of SRs, including the Agency for Healthcare Research and Quality (USA),21 JBI (Australia),22 and the Cochrane Collaboration (UK).14 The general criteria used for the development of SRs are similar across these organizations. Regardless of the criteria used for quantitative research, reporting of SRs is predominantly expected to follow the widely endorsed PRISMA guidelines, developed for SRs of RCTs, and the Meta-analysis of Observational Studies in Epidemiology (MOOSE) criteria.23,24 Several PRISMA reporting checklists have been developed to cover SRs of studies assessing diagnostic accuracy25; outcome measurement instruments (ie, how an outcome is measured), such as laboratory tests and scales26; complex interventions (ie, those with multiple components)27; and those reporting harms,28 among others.29 The PRISMA and MOOSE checklists aim to improve the reporting of SRs and metaanalyses and include many of the criteria used by agencies to guide the development of SRs.1 In addition to referring to guidelines for the development of SRs, it is beneficial to refer to reporting guidelines early in the review process to ensure all elements are included when planning the SR. Fillable PRISMA and MOOSE checklists are available to facilitate the completion of this step.30,31 The features described in PRISMA and MOOSE have been grouped below into 4 categories for simplicity, each of which is essential for methodical development.

1. Clinical/research question

A carefully formulated research question is the critical first step in the process of developing an SR. The rationale, objectives, and scope are described in detail to permit the eventual progression into the next phases: describing the eligibility of patients and populations; the intervention, screening method, or diagnostic test; the selection of comparators; and the selection of primary and secondary outcomes, including surrogates. This step is summarized, in both qualitative and quantitative studies, in PICOT (Patient/Population – Intervention/Test – Comparison/Comparator – Outcome – Time) format, sometimes including study design. The selected outcomes should reflect those pertinent to clinical practice, not merely those reported in studies. The research question serves as the guide for the systematic search strategy, study eligibility, and citation selection.
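To make the PICOT structure concrete, the minimal sketch below encodes a review question as a small data structure; every clinical detail in it is hypothetical and chosen only for illustration.

```python
# Hypothetical PICOT specification for an SR question (illustrative only).
# Structuring the question this way maps directly onto the search strategy
# and the eligibility criteria applied during study selection.
from dataclasses import dataclass, field

@dataclass
class PICOT:
    population: str
    intervention: str
    comparator: str
    outcomes: list[str]
    time_frame: str
    study_designs: list[str] = field(default_factory=list)  # optional extension

question = PICOT(
    population="Adults with rheumatoid arthritis inadequately controlled on methotrexate",
    intervention="Addition of a biologic DMARD",
    comparator="Methotrexate alone",
    outcomes=["ACR50 response (primary)", "serious adverse events"],
    time_frame="24 weeks",
    study_designs=["randomized controlled trial"],
)
print(question)
```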

2. Study selection

Search strategy. The design of a search strategy for the selection of eligible studies requires the assistance of an information specialist/librarian with experience in conducting SRs, using accurate search terms and sources to ensure the transparency and reproducibility of the search. Generally, ≥ 2 databases are searched to ensure all eligible studies are included.32 Search sources outside of medical databases include grey literature (eg, website and policy publications), government sources, and nongovernmental documents.33 The inclusion of grey literature is routinely recommended by some agencies.34 Inclusivity and completeness are key; it is therefore advised to avoid restricting the search to English-language sources, to include clinical trial registries, and to contact authors for incomplete information.34 The search strategy should ideally be peer reviewed, as this is deemed to improve its quality and comprehensiveness.35 The Peer Review of Electronic Search Strategies (PRESS) is a structured tool that includes guidance and checklists for completing the peer review process.35 A detailed search strategy for ≥ 1 medical database is generally included in the publication of an SR to ensure reproducibility.
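As a concrete illustration of querying one bibliographic database programmatically, the sketch below runs a Boolean search against PubMed through the public NCBI E-utilities API (assuming the third-party requests package); the search string is hypothetical and is no substitute for a librarian-designed, peer-reviewed strategy.

```python
# Minimal sketch: run a Boolean search against PubMed via NCBI E-utilities
# and record the count and PMIDs for the PRISMA flow diagram.
# The search string below is hypothetical and for illustration only.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
search = ('"rheumatoid arthritis"[MeSH Terms] AND biologic*[Title/Abstract] '
          'AND randomized controlled trial[Publication Type]')

resp = requests.get(ESEARCH, params={
    "db": "pubmed",
    "term": search,
    "retmode": "json",
    "retmax": 100,  # first page of PMIDs
}, timeout=30)
resp.raise_for_status()
result = resp.json()["esearchresult"]

print("Records identified:", result["count"])  # reported in the PRISMA diagram
print("First PMIDs:", result["idlist"][:10])   # export to a citation manager
```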

Selection of studies. The selection criteria and the selection process (ie, the assessment of citations/references, which should be completed independently by at least 2 reviewers, as well as the approach to resolving disagreements) are determined prior to completion of the search. Criteria used for the selection of citations are generally piloted to ensure that studies of relevance are included. The process of study selection is summarized in a PRISMA diagram (Figure).1 Notably, separate searches may be needed for quantitative SRs if adverse events or harms of an intervention are infrequent and not adequately assessed.

Figure.

PRISMA flow diagram for new systematic reviews, which includes searches of databases, registers, and other sources.1 PRISMA: Preferred Reporting Items for Systematic reviews and Meta-Analyses.

The choice of study designs to include should consider what is available—for example, the availability of RCTs in SRs focusing on interventions. If large, well-designed RCTs are available, including only RCTs in the SR may be considered. In the absence of RCTs, or where RCTs have small sample sizes or have not been rigorously conducted, observational studies are incorporated into the SR and analyzed separately from RCTs.

3. Data abstraction

The data abstraction process comprises the abstraction of data and a description of the abstraction process, analogous to the process for citation selection. This typically specifies the reviewers who will complete the data abstraction and whether the abstraction will be conducted in duplicate (ie, by ≥ 2 reviewers) and independently. Overall, the data abstraction section specifies the characteristics that will be used to satisfy the PICOT criteria, including the conduct of the study, outcome measures, and financial support (eg, industry support).36 For example, study population characteristics should include elements such as sex, age, and comorbidities, among others, to determine similarity to the population of interest. The intervention is described to enable comparability; examples include medication formulation, and route and frequency of administration. Outcome features comprise definitions of outcomes and similarity to the outcomes of interest, including the use of actual outcomes (eg, mortality) or surrogate outcomes (eg, disease-free survival instead of overall survival), statistical measures/units, and a description of the scale (eg, validation) and its administration (eg, self-administered or administered by the research team), if applicable. Software options for citation libraries, mapping of selection criteria, and data abstraction are available (eg, DistillerSR, Covidence37,38), as are examples of manual data abstraction forms (eg, Cochrane Collaboration36).

Assessment of ROB. An assessment of ROB—also referred to as the quality of a study—is an essential component of the data abstraction process. Bias is assessed at the levels of study design, conduct, and analysis, and the assessment informs the confidence in the outcome estimates that have been generated.39 Limitations in the study design, conduct, or analysis can lead to systematically inaccurate results.40 Bias in studies can be classified into 4 general categories41: (1) selection bias, the process that leads to groups not being comparable (eg, when allocation is according to prognosis); (2) performance bias, the process of providing dissimilar care (eg, when allocation is not concealed); (3) detection bias, wherein an outcome is influenced by knowledge of the intervention, which tends to be more important for subjective outcomes (eg, pain assessment) than objective outcomes (eg, mortality); and (4) attrition bias, which occurs when participants are lost to follow-up, when there are missing data, or when there are deviations from a study protocol, as remaining participants may not be representative of the population.41 ROB assessments are intended to detect these biases to ensure that outcomes reflect true estimates, as low-quality studies generally exaggerate treatment effects.42

Numerous tools are available to assess ROB for quantitative and qualitative research. ROB assessments for quantitative research can be checklists (eg, the checklist developed by JBI), scales (eg, Jadad scale for RCTs),43 or domain-based (ie, an assessment at different stages of a study, such as the Cochrane Collaboration ROB tool for RCTs).44 Checklists provide a variety of quality measures scored individually, whereas scales also provide a total score by summing features, either assuming equal weights for each feature (although weights may not be equal) or assigning more emphasis to specific features. Scoring approaches have been shown to be problematic, as pooling studies according to various scales will lead to different estimates of effect and CIs.42,45 The assessment of domains provides a descriptive summary of, and emphasis on, individual components that can lead to bias, such as allocation concealment as a measure of performance bias.

The necessity for critical appraisal in qualitative research has been established; however, tools used to assess ROB are thought to represent a unified approach without differentiating the distinct methodological approaches for qualitative research (such as grounded theory, interpretative phenomenology, or discourse analysis) and methods for data collection (such as interviewing, use of focus groups and observations).46,47 The available tools include checklists and frameworks for ROB assessment.46,47 Checklists for qualitative studies are similar to those used in quantitative studies, whereas frameworks assess concepts of (1) transferability (ie, the ability to make connections between data and wider community settings); (2) credibility (ie, the appropriateness of participants’ accounts, as interpreted by the researcher); and (3) reflexivity and transparency (ie, the influence of the researcher on the analysis, rather than grounding the analysis and its transparency).46

Instruments used to assess ROB (ie, checklists/scoring systems, scales, domain scores) for quantitative SRs have been evaluated. The methods are similar in that their intent is to determine whether results are plausible and without flaws, and they permit the inclusion of other biases that may be specific to the clinical question, such as variable durations of outcome assessment.48 Each method has advantages and disadvantages. An analysis of scoring systems suggested that an overall score may not necessarily correlate with the overall quality of a study, and scales may provide different results for the same study.45 The components used in checklists differ somewhat; although all incorporate consistent features such as masking and allocation concealment, they do not require as detailed a description as domain-based approaches.49 The use of domains is considered a standardized approach to ROB assessment, but interrater agreement may be variable and time to completion lengthy.50,51 The more commonly used methods, with their published advantages and disadvantages, are described in Table 2.40,50-63

Table 2. Common ROB assessment tools for quantitative research according to study design.40,44,49-62

Subjectivity and judgment are required in the assessment of ROB of studies in an SR. As an example, in a study addressing transfusion, blinding may not be considered critical, whereas the blinding of participants for subjective outcomes such as pain would be considered critical.48 Further, an overall categorization of ROB (ie, high or low ROB) for each study subsequently needs to be determined depending on the importance of each domain on the outcomes.44 Conducting dual, independent reviews will limit additional unnecessary subjectivity.

4. Data synthesis

A metaanalysis is a statistical method to collate outcomes of studies in an SR,64,65 and is conducted when outcomes—as well as the measurement statistics used for those outcomes—are predominantly the same. Metaanalyses have the potential to (1) improve precision, particularly if there are many small studies that cannot provide convincing evidence of the effect of an intervention in isolation; (2) answer new questions not addressed in individual studies; and (3) address controversies arising from conflicting study results and explore differences.66 The intent to include a metaanalysis is prespecified in an SR protocol and in the registration of the SR. The rationale for selection of an effect measure (the statistic that compares outcome data) in a metaanalysis67 is also generally prespecified. Common effect measures include odds ratios (ORs) or risk ratios for dichotomous/binary outcomes and mean differences or standardized mean differences for continuous outcome variables. The risk ratio (relative risk) and OR are relative measures, whereas the risk difference is an absolute measure.66 Table 3 defines these measures and describes considerations for selection.

Table 3. Effect measures for metaanalyses.64-66

Selection of specific summary statistics also depends on whether values are consistent, have the same mathematical properties, and can be easily understood.66 The estimate of the effect measure is generally expressed with a degree of uncertainty, such as a CI or standard error.65,67 A CI provides a range of values within which the true effect is likely to lie, and is a measure of precision as it reflects the adequacy of the sample size used for the estimate.68 Several software options are available to estimate effect measures.69,70 An assessment of the effect of overall ROB (or of aspects considered more significant in the assessment, such as allocation concealment) on the effect measure (ie, a sensitivity analysis, which repeats the primary analysis with alternate values substituted according to ROB)66 permits examination of the robustness of the metaanalysis.41 Subgroup analysis (ie, dividing participants or studies into subgroups, such as an analysis of male vs female individuals or of studies from different geographic locations) may be conducted to assess variability or to explore specific questions.66
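For illustration, the sketch below computes these effect measures with 95% CIs from a single hypothetical 2×2 table, using the standard large-sample (log-scale) standard errors; dedicated metaanalysis software performs these calculations internally, so this is purely to show the arithmetic.

```python
# Minimal sketch: effect measures with 95% CIs from one hypothetical 2x2 table.
# a/b = events/non-events in the treatment arm; c/d = events/non-events in control.
import math

a, b = 15, 85   # treatment: 15 events among 100 participants (hypothetical)
c, d = 30, 70   # control:   30 events among 100 participants (hypothetical)

z = 1.96  # ~97.5th percentile of the standard normal, for 95% CIs

# Odds ratio: CI built on the log scale, SE = sqrt(1/a + 1/b + 1/c + 1/d)
or_ = (a / b) / (c / d)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
or_ci = (math.exp(math.log(or_) - z * se_log_or),
         math.exp(math.log(or_) + z * se_log_or))

# Risk ratio (relative measure): SE(log RR) = sqrt(1/a - 1/(a+b) + 1/c - 1/(c+d))
rr = (a / (a + b)) / (c / (c + d))
se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
rr_ci = (math.exp(math.log(rr) - z * se_log_rr),
         math.exp(math.log(rr) + z * se_log_rr))

# Risk difference (absolute measure): SE from the binomial variance of each arm
p1, p2 = a / (a + b), c / (c + d)
rd = p1 - p2
se_rd = math.sqrt(p1 * (1 - p1) / (a + b) + p2 * (1 - p2) / (c + d))
rd_ci = (rd - z * se_rd, rd + z * se_rd)

print(f"OR {or_:.2f} (95% CI {or_ci[0]:.2f}-{or_ci[1]:.2f})")
print(f"RR {rr:.2f} (95% CI {rr_ci[0]:.2f}-{rr_ci[1]:.2f})")
print(f"RD {rd:+.2f} (95% CI {rd_ci[0]:+.2f} to {rd_ci[1]:+.2f})")
```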

The synthesis model for a metaanalysis can be a fixed-effect or random-effects model.71 A fixed-effect synthesis presumes that there is a common treatment effect across all study settings, whereas in a random-effects metaanalysis, treatment effects vary from study to study.71 In a random-effects model, the differences in observed effect sizes are due not only to random error, as in a fixed-effect model, but also to variation in true treatment effects (referred to as heterogeneity).71 The summary effect from a fixed-effect model is an estimate of the assumed common underlying treatment effect; in contrast, for the random-effects model, the summary effect is an estimate of the average of the distribution of treatment effects across various study settings.71 As between-study heterogeneity is common and may not be identifiable, the random-effects model is the standard (ie, default) model for metaanalyses; it should be prespecified in the protocol and is conducted even when heterogeneity is high.72
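The distinction between the two models can be made concrete with a small inverse-variance pooling sketch; the log effect estimates below are fabricated for illustration, and the random-effects variant uses the DerSimonian-Laird estimator of between-study variance (one common estimator among several). Cochran's Q and I² quantify the heterogeneity described above.

```python
# Minimal sketch: inverse-variance fixed-effect vs DerSimonian-Laird
# random-effects pooling of log effect estimates (hypothetical data).
import math

y  = [-0.45, -0.10, -0.30, 0.05]   # log risk ratios from 4 hypothetical studies
se = [0.20, 0.15, 0.25, 0.30]      # their standard errors

# Fixed-effect: weight each study by 1/variance, assume one common true effect
w = [1 / s**2 for s in se]
pooled_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
se_fe = math.sqrt(1 / sum(w))

# Heterogeneity: Cochran's Q and I^2
q = sum(wi * (yi - pooled_fe) ** 2 for wi, yi in zip(w, y))
df = len(y) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# DerSimonian-Laird estimate of between-study variance tau^2
tau2 = max(0.0, (q - df) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))

# Random-effects: add tau^2 to each study's variance before weighting
w_re = [1 / (s**2 + tau2) for s in se]
pooled_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

for label, est, s in [("Fixed-effect", pooled_fe, se_fe),
                      ("Random-effects", pooled_re, se_re)]:
    lo, hi = est - 1.96 * s, est + 1.96 * s
    print(f"{label}: RR {math.exp(est):.2f} "
          f"(95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f})")
print(f"Q = {q:.2f}, I^2 = {i2:.0f}%, tau^2 = {tau2:.3f}")
```

Note that when tau² > 0, the random-effects CI is wider than the fixed-effect CI, reflecting the extra between-study uncertainty.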

Certainty of evidence. In addition to the assessment of individual studies, the overall certainty of evidence must be assessed for each outcome; the most widely used approach is the one developed by the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) working group (Table 4).73,74 The reliability and validity of the data for each outcome must be evaluated by examining the methods used to assess that outcome in each individual study, as the quality of data for an outcome may differ across studies.64 For instance, an outcome may be primary in 1 study and systematically measured, but may be a secondary or tertiary outcome in another study and not as carefully measured.64

Table 4. Assessment of certainty of evidence for outcome assessment across studies according to GRADE.39

GRADE categorizes studies into 2 groups: RCTs and observational studies. The former is assumed to be associated with less ROB but can be downgraded in quality depending on the overall ROB as well as 4 additional features: (1) the directness of evidence (whether the outcome directly answers the health question); (2) precision (the extent of confidence in the estimate of effect to support a decision)75; (3) the inconsistency of results (differing estimates of treatment effects across studies); and (4) the presence/absence of publication bias. Publication bias refers to studies not being submitted or published because of the strength and direction (ie, negative) of the trial result.76 Studies with statistically significant results are more likely to be published, and those with negative results often face delayed publication.77 Visual methods (ie, a funnel plot, in which an asymmetrical distribution of studies suggests publication bias) and statistical tests for asymmetry can be used to gauge publication bias. The certainty of evidence can also be upgraded for observational and nonrandomized studies, such as in cases where there is an evident dose-response relationship (Table 4). The low certainty of evidence assigned by GRADE to nonrandomized studies reflects the fact that causation generally cannot be determined from nonrandomized studies. These nonrandomized studies, however, play a considerable role in identifying associations; can be complementary (eg, provide information in different populations), often providing long-term outcomes of benefit (not available in RCTs) or harm; and may be more reflective of usual practice. Thus, nonrandomized studies may, in some circumstances, provide higher-quality evidence than RCTs.78 The GRADE approach is also used for qualitative research, where it categorizes the absence of methodological limitations (ie, limitations in design or conduct), adequacy of data (ie, richness and quantity), coherence (ie, clarity and rationale of the fit of data), and relevance (ie, applicability).79
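The paragraph above mentions statistical tests for funnel plot asymmetry without naming one; a common choice is Egger's regression test. Below is a minimal sketch with fabricated data, assuming numpy and scipy are available. With few studies such tests are underpowered, so they complement, rather than replace, visual inspection of the funnel plot.

```python
# Minimal sketch: Egger's regression test for funnel plot asymmetry
# (fabricated log effect estimates; requires numpy and scipy).
import numpy as np
from scipy import stats

y  = np.array([-0.60, -0.45, -0.30, -0.25, -0.05, 0.10])  # log effect estimates
se = np.array([0.45, 0.35, 0.28, 0.22, 0.15, 0.10])       # their standard errors

# Regress the standardized effect (y/se) on precision (1/se);
# an intercept far from 0 suggests small-study (funnel plot) asymmetry.
res = stats.linregress(1 / se, y / se)

# Two-sided p-value for the intercept (n - 2 degrees of freedom)
t = res.intercept / res.intercept_stderr
p = 2 * stats.t.sf(abs(t), df=len(y) - 2)

print(f"Egger intercept = {res.intercept:.2f}, p = {p:.3f}")
```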

Overall, certainty in the evidence for an outcome is presented as high, moderate, low, or very low certainty of evidence of effect, with high certainty suggesting that future research is unlikely to change the confidence in the estimate of effect and very low certainty representing an uncertain estimate of effect.80 GRADEpro (Evidence Prime), software for GRADE analyses, generates tables that summarize the features used in the categorization of the certainty of evidence (ie, the summary of findings table) as well as estimates of effect based on data from metaanalyses.9,81 In the absence of an estimate from a metaanalysis, a descriptive assessment of overall certainty of evidence can be used.

A completed SR will also be evaluated for its quality and ROB. Two commonly used tools, A Measurement Tool to Assess Systematic Reviews, version 2 (AMSTAR2)82 and Risk of Bias in Systematic Reviews (ROBIS),83 use a domain-based assessment of bias for SRs of RCTs and observational studies, and include such items as protocol registration (AMSTAR2), adequacy of the literature search, eligibility of individual studies, ROB of included individual studies, appropriateness of metaanalytical methods, and consideration of ROB during interpretation. AMSTAR2 also includes conflicts of interest and is considered easier to use, whereas ROBIS requires more expertise.84 Prior to conducting an SR, awareness of the requirements of PRISMA for reporting of SRs, and of AMSTAR2 and ROBIS for the evaluation of an SR, is of similar importance.

Conclusion

To conduct a rigorous SR and provide the best available evidence for decision making, (1) awareness of other published reviews or protocols to avoid duplication and (2) adherence to criteria for developing and reporting an SR are essential. Although the steps in conducting an SR are structured, several judgments are needed throughout the process; when these are described in detail, the method remains transparent, reproducible, and credible. The ROB of each study and the certainty of evidence for each outcome are critical for assessing the plausibility and accuracy of findings. Advantages and disadvantages have been described for each method of assessing ROB. The selection of a method should be based on ensuring confidence in the estimates of the study outcomes.

Footnotes

  • CONTRIBUTIONS

    NS designed the framework, conducted the search, critically reviewed and interpreted the intellectual content of the studies, prepared the manuscript, approved the final version, and is accountable for all aspects of the work. RD contributed to the interpretation of data, reviewed for important intellectual content, approved the final version, and is accountable for all aspects of the work.

  • FUNDING

    The authors declare no funding or support for this research.

  • COMPETING INTERESTS

    The authors declare no conflicts of interest relevant to this article.

  • ETHICS AND PATIENT CONSENT

    Institutional review board approval and patient consent were not required for this work.

  • Accepted for publication March 18, 2025.
  • Copyright © 2025 by the Journal of Rheumatology

This is an Open Access article, which permits use, distribution, and reproduction, without modification, provided the original article is correctly cited and is not used for commercial purposes.

REFERENCES

1. Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 2021;372:n71.
2. Guyatt GH, Sackett DL, Sinclair JC, Hayward R, Cook DJ, Cook RJ. Users' guides to the medical literature. IX. A method for grading health care recommendations. Evidence-Based Medicine Working Group. JAMA 1995;274:1800-4.
3. Murad MH, Montori VM. Synthesizing evidence: shifting the focus from individual studies to the body of evidence. JAMA 2013;309:2217-8.
4. Stewart L, Moher D, Shekelle P. Why prospective registration of systematic reviews makes sense. Syst Rev 2012;1:7.
5. Tricco AC, Lillie E, Zarin W, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med 2018;169:467-73.
6. Munn Z, Peters MDJ, Stern C, Tufanaru C, McArthur A, Aromataris E. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol 2018;18:143.
7. Oxman AD, Cook DJ, Guyatt GH. Users' guides to the medical literature. VI. How to use an overview. Evidence-Based Medicine Working Group. JAMA 1994;272:1367-71.
8. Duke University Medical Center Library and Archives. Systematic reviews. [Internet. Accessed March 28, 2025.] Available from: https://guides.mclibrary.duke.edu/sysreview/types
9. Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info Libr J 2009;26:91-108.
10. Cochrane Library. About Cochrane reviews. [Internet. Accessed March 28, 2025.] Available from: https://www.cochranelibrary.com/about/about-cochrane-reviews
11. Cook DJ, Sackett DL, Spitzer WO. Methodologic guidelines for systematic reviews of randomized control trials in health care from the Potsdam Consultation on Meta-Analysis. J Clin Epidemiol 1995;48:167-71.
12. Treadwell JR, Tregear SJ, Reston JT, Turkelson CM. A system for rating the stability and strength of medical evidence. BMC Med Res Methodol 2006;6:52.
13. Pieper D, Rombey T. Where to prospectively register a systematic review. Syst Rev 2022;11:8.
14. Lasserson TJ, Thomas J, Higgins JPT. Chapter 1: Starting a review. In: Higgins JPT, Thomas J, Chandler J, et al, editors. Cochrane Handbook for Systematic Reviews of Interventions, version 6.5. [Internet. Accessed March 28, 2025.] Available from: https://training.cochrane.org/handbook/current/chapter-01
15. Zhou Q, He H, Li Q, et al. Patient and public involvement in systematic reviews: frequency, determinants, stages, barriers, and dissemination. J Clin Epidemiol 2024;170:111356.
16. Silagy CA, Middleton P, Hopewell S. Publishing protocols of systematic reviews: comparing what was done to what was planned. JAMA 2002;287:2831-4.
17. University of York. PROSPERO. [Internet. Accessed March 28, 2025.] Available from: https://www.crd.york.ac.uk/prospero/
18. Moher D, Shamseer L, Clarke M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev 2015;4:1.
19. Shamseer L, Moher D, Clarke M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ 2015;350:g7647.
20. Chan AW, Tetzlaff JM, Altman DG, et al. SPIRIT 2013 statement: defining standard protocol items for clinical trials. Ann Intern Med 2013;158:200-7.
21. Agency for Healthcare Research and Quality. Training modules for the systematic reviews methods guide. [Internet. Accessed March 28, 2025.] Available from: https://effectivehealthcare.ahrq.gov/products/cer-methods-guide/presentations#toc-0
22. Aromataris E, Lockwood C, Porritt K, Pilla B, Jordan Z, editors. JBI manual for evidence synthesis. [Internet. Accessed March 28, 2025.] Available from: https://synthesismanual.jbi.global
23. Stroup DF, Berlin JA, Morton SC, et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis of Observational Studies in Epidemiology (MOOSE) group. JAMA 2000;283:2008-12.
24. Brooke BS, Schwartz TA, Pawlik TM. MOOSE reporting guidelines for meta-analyses of observational studies. JAMA Surg 2021;156:787-8.
25. McInnes MDF, Moher D, Thombs BD, et al. Preferred reporting items for a systematic review and meta-analysis of diagnostic test accuracy studies: the PRISMA-DTA statement. JAMA 2018;319:388-96.
26. Elsman EBM, Mokkink LB, Terwee CB, et al. Guideline for reporting systematic reviews of outcome measurement instruments (OMIs): PRISMA-COSMIN for OMIs 2024. J Clin Epidemiol 2024;173:111422.
27. Guise JM, Butler ME, Chang C, et al. AHRQ series on complex intervention systematic reviews-paper 6: PRISMA-CI extension statement and checklist. J Clin Epidemiol 2017;90:43-50.
28. Zorzela L, Loke YK, Ioannidis JP, et al. PRISMA harms checklist: improving harms reporting in systematic reviews. BMJ 2016;352:i157.
29. PRISMA. PRISMA extensions. [Internet. Accessed March 28, 2025.] Available from: https://www.prisma-statement.org/extensions
30. PRISMA. PRISMA checklist. [Internet. Accessed March 28, 2025.] Available from: https://www.prisma-statement.org/
31. MOOSE (Meta-analyses Of Observational Studies in Epidemiology) checklist. [Internet. Accessed October 2024.] Available from: https://legacyfileshare.elsevier.com/promis_misc/ISSM_MOOSE_Checklist.pdf
32. Lefebvre C, Glanville J, Briscoe S, et al. Chapter 4: Searching for and selecting studies. In: Higgins JPT, Thomas J, Chandler J, et al, editors. Cochrane Handbook for Systematic Reviews of Interventions, version 6.5. [Internet. Accessed March 28, 2025.] Available from: https://training.cochrane.org/handbook/current/chapter-04
33. Simon Fraser University. Grey literature: what it is and how to find it. [Internet. Accessed March 28, 2025.] Available from: https://www.lib.sfu.ca/help/research-assistance/format-type/grey-literature
34. Agency for Healthcare Research and Quality. Finding grey literature evidence and assessing for outcome and analysis reporting biases when comparing medical interventions: AHRQ and the Effective Health Care Program. [Internet. Accessed April 8, 2025.] Available from: https://effectivehealthcare.ahrq.gov/products/methods-guidance-reporting-bias/methods
35. McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS peer review of electronic search strategies: 2015 guideline statement. J Clin Epidemiol 2016;75:40-6.
36. Cochrane Training. Data collection form (for RCTs). [Internet. Accessed March 28, 2025.] Available from: https://training.cochrane.org/data-collection-form-rcts
37. DistillerSR. [Internet. Accessed March 28, 2025.] Available from: https://www.distillersr.com/
38. Covidence. [Internet. Accessed March 28, 2025.] Available from: https://www.covidence.org/
39. Guyatt GH, Oxman AD, Vist GE, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ 2008;336:924-6.
40. Wolff RF, Moons KGM, Riley RD, et al. PROBAST: a tool to assess the risk of bias and applicability of prediction model studies. Ann Intern Med 2019;170:51-8.
41. Jüni P, Altman DG, Egger M. Systematic reviews in health care: assessing the quality of controlled clinical trials. BMJ 2001;323:42-6.
42. Hempel S, Suttorp MJ, Miles JNV, et al; Agency for Healthcare Research and Quality. Empirical evidence of associations between trial quality and effect size. [Internet. Accessed March 28, 2025.] Available from: https://www.ncbi.nlm.nih.gov/books/NBK56932/
43. Jadad AR, Moore RA, Carroll D, et al. Assessing the quality of reports of randomized clinical trials: is blinding necessary? Control Clin Trials 1996;17:1-12.
44. Sterne JAC, Savović J, Page MJ, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ 2019;366:l4898.
45. Jüni P, Witschi A, Bloch R, Egger M. The hazards of scoring the quality of clinical trials for meta-analysis. JAMA 1999;282:1054-60.
46. Williams V, Boylan AM, Nunan D. Critical appraisal of qualitative research: necessity, partialities and the issue of bias. BMJ Evid Based Med 2020;25:9-11.
47. Noyes J, Booth A, Flemming K, et al. Cochrane Qualitative and Implementation Methods Group guidance series-paper 3: methods for assessing methodological limitations, data extraction and synthesis, and confidence in synthesized qualitative findings. J Clin Epidemiol 2018;97:49-58.
48. Moher D, Jadad AR, Nichol G, Penman M, Tugwell P, Walsh S. Assessing the quality of randomized controlled trials: an annotated bibliography of scales and checklists. Control Clin Trials 1995;16:62-73.
49. Savović J, Weeks L, Sterne JAC, et al. Evaluation of the Cochrane Collaboration's tool for assessing the risk of bias in randomized trials: focus groups, online survey, proposed recommendations and their implementation. Syst Rev 2014;3:37.
50. Jørgensen L, Paludan-Müller AS, Laursen DRT, et al. Evaluation of the Cochrane tool for assessing risk of bias in randomized clinical trials: overview of published comments and analysis of user practice in Cochrane and non-Cochrane reviews. Syst Rev 2016;5:80.
51. Higgins JPT, Savović J, Page MJ, Elbers RG, Sterne JAC. Chapter 8: Assessing risk of bias in a randomized trial. In: Higgins JPT, Thomas J, Chandler J, et al, editors. Cochrane Handbook for Systematic Reviews of Interventions, version 6.5. [Internet. Accessed March 28, 2025.] Available from: https://www.training.cochrane.org/handbook/current/chapter-08
52. Wood L, Egger M, Gluud LL, et al. Empirical evidence of bias in treatment effect estimates in controlled trials with different interventions and outcomes: meta-epidemiological study. BMJ 2008;336:601-5.
53. Sterne JAC, Hernán MA, Reeves BC, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ 2016;355:i4919.
54. Wells GA, Shea B, O'Connell D, et al. The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomised studies in meta-analyses. [Internet. Accessed March 28, 2025.] Available from: https://www.ohri.ca/programs/clinical_epidemiology/oxford.asp
55. Whiting PF, Rutjes AWS, Westwood ME, et al. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med 2011;155:529-36.
56. Yang B, Mallett S, Takwoingi Y, et al. QUADAS-C: a tool for assessing risk of bias in comparative diagnostic accuracy studies. Ann Intern Med 2021;174:1592-9.
57. Lee J, Mulder F, Leeflang M, Wolff R, Whiting P, Bossuyt PM. QUAPAS: an adaptation of the QUADAS-2 tool to assess prognostic accuracy studies. Ann Intern Med 2022;175:1010-8.
58. Tomlinson E, Cooper C, Davenport C, et al. Common challenges and suggestions for risk of bias tool development: a systematic review of methodological studies. J Clin Epidemiol 2024;171:111370.
59. Hartling L, Milne A, Hamm MP, et al. Testing the Newcastle Ottawa Scale showed low reliability between individual reviewers. J Clin Epidemiol 2013;66:982-93.
60. Mokkink LB, de Vet HCW, Prinsen CAC, et al. COSMIN Risk of Bias checklist for systematic reviews of patient-reported outcome measures. Qual Life Res 2018;27:1171-9.
61. Higgins JPT, Morgan RL, Rooney AA, et al. A tool to assess risk of bias in non-randomized follow-up studies of exposure effects (ROBINS-E). Environ Int 2024;186:108602.
62. Bero L, Chartres N, Diong J, et al. The risk of bias in observational studies of exposures (ROBINS-E) tool: concerns arising from application to observational studies of exposures. Syst Rev 2018;7:242.
63. Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ 2009;339:b2535.
64. Schünemann HJ, Vist GE, Higgins JPT, et al. Chapter 15: Interpreting results and drawing conclusions. In: Higgins JPT, Thomas J, Chandler J, et al, editors. Cochrane Handbook for Systematic Reviews of Interventions, version 6.5. [Internet. Accessed March 28, 2025.] Available from: https://training.cochrane.org/handbook/current/chapter-15
65. Deeks JJ, Higgins JPT, Altman DG, editors. Chapter 10: Analysing data and undertaking meta-analyses. In: Higgins JPT, Thomas J, Chandler J, et al, editors. Cochrane Handbook for Systematic Reviews of Interventions, version 6.5. [Internet. Accessed March 28, 2025.] Available from: https://training.cochrane.org/handbook/current/chapter-10
66. Higgins JPT, Li T, Deeks JJ, editors. Chapter 6: Choosing effect measures and computing estimates of effect. In: Higgins JPT, Thomas J, Chandler J, et al, editors. Cochrane Handbook for Systematic Reviews of Interventions, version 6.5. [Internet. Accessed March 28, 2025.] Available from: https://training.cochrane.org/handbook/current/chapter-06
67. Guyatt G, Jaeschke R, Heddle N, Cook D, Shannon H, Walter S. Basic statistics for clinicians: 2. Interpreting study results: confidence intervals. CMAJ 1995;152:169-73.
68. Cochrane Training. Review Manager (RevMan). [Internet. Accessed March 28, 2025.] Available from: https://training.cochrane.org/online-learning/core-software/revman
69. OpenMeta[Analyst]. [Internet. Accessed March 28, 2025.] Available from: www.cebm.brown.edu/openmeta/
70. Nikolakopoulou A, Mavridis D, Salanti G. How to interpret meta-analysis models: fixed effect and random effects meta-analyses. Evid Based Ment Health 2014;17:64.
71. Guyatt G, Oxman AD, Akl EA, et al. GRADE guidelines: 1. Introduction-GRADE evidence profiles and summary of findings tables. J Clin Epidemiol 2011;64:383-94.
72. Riley RD, Higgins JPT, Deeks JJ. Interpretation of random effects meta-analyses. BMJ 2011;342:d549.
73. Owens DK, Lohr KN, Atkins D, et al. AHRQ series paper 5: grading the strength of a body of evidence when comparing medical interventions—Agency for Healthcare Research and Quality and the Effective Health Care Program. J Clin Epidemiol 2010;63:513-23.
74. Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009;6:e1000097.
75. Guyatt GH, Oxman AD, Kunz R, et al. GRADE guidelines 6. Rating the quality of evidence—imprecision. J Clin Epidemiol 2011;64:1283-93.
76. Dickersin K, Min YI. Publication bias: the problem that won't go away. Ann N Y Acad Sci 1993;703:135-46.
77. Schünemann HJ, Tugwell P, Reeves BC, et al. Non-randomized studies as a source of complementary, sequential or replacement evidence for randomized controlled trials in systematic reviews on the effects of interventions. Res Synth Methods 2013;4:49-62.
78. Guyatt GH, Oxman AD, Montori V, et al. GRADE guidelines: 5. Rating the quality of evidence—publication bias. J Clin Epidemiol 2011;64:1277-82.
79. Lewin S, Bohren M, Rashidian A, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings-paper 2: how to make an overall CERQual assessment of confidence and create a Summary of Qualitative Findings table. Implement Sci 2018;13 Suppl 1:10.
80. Atkins D, Best D, Briss PA, et al. Grading quality of evidence and strength of recommendations. BMJ 2004;328:1490.
81. GRADEpro GDT. [Internet. Accessed March 28, 2025.] Available from: https://www.gradepro.org/
82. Shea BJ, Reeves BC, Wells G, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or nonrandomised studies of healthcare interventions, or both. BMJ 2017;358:j4008.
83. Whiting P, Savović J, Higgins JPT, et al. ROBIS: a new tool to assess risk of bias in systematic reviews was developed. J Clin Epidemiol 2016;69:225-34.
84. Perry R, Whitmarsh A, Leach V, Davies P. A comparison of two assessment tools used in overviews of systematic reviews: ROBIS versus AMSTAR-2. Syst Rev 2021;10:273.