Research Article | Expert Review
Open Access

Post Hoc Power Calculations: An Inappropriate Method for Interpreting the Findings of a Research Study

Michael G. Heckman, John M. Davis III and Cynthia S. Crowson
The Journal of Rheumatology August 2022, 49 (8) 867-870; DOI: https://doi.org/10.3899/jrheum.211115
1. M.G. Heckman, MS, Division of Clinical Trials and Biostatistics, Mayo Clinic, Jacksonville, Florida;
2. J.M. Davis III, MD, MS, Division of Rheumatology, Mayo Clinic, Rochester, Minnesota;
3. C.S. Crowson, PhD, Division of Rheumatology, and Division of Clinical Trials and Biostatistics, Mayo Clinic, Rochester, Minnesota, USA.

Abstract

Power calculations are a key step in the design of research studies. However, power analysis is often performed inappropriately in the medical literature: it is used in an attempt to interpret the findings of a completed study rather than to choose an optimal sample size for a future study. The aim of this article is to provide a brief discussion of the drawbacks of performing these post hoc power calculations, and to correspondingly suggest best practices regarding the use of statistical power and the interpretation of study results. Specifically, power analysis should always be considered before any research study in order to choose an ideal sample size and/or to examine the feasibility of properly evaluating study aims, but it should never be used to help interpret the results of an already completed study. Instead, 95% confidence intervals for effect sizes (eg, odds ratio, hazard ratio, mean difference) or other relevant parameter estimates should be used when drawing conclusions from results, for example when assessing the likelihood of a type II error (ie, a false negative finding).

Key Indexing Terms: power, sample size, type I error, type II error, effect size

Using statistical power analysis to guide sample size decisions for a future study is an important step in the design of research studies. However, there are many instances in the medical literature in which power analysis is used incorrectly in an attempt to aid in the interpretation of the results of an already completed study. The inappropriate nature of these “post hoc power calculations” has been well documented.1-9 Despite this, post hoc power calculations are still provided in the medical literature relatively frequently, and it is not uncommon for journal reviewers or researchers to request that such calculations be provided. Therefore, in a continuing attempt to address this lingering issue, the aim of this article is to provide a simple discussion of the drawbacks of utilizing post hoc power calculations, and to correspondingly suggest easily implemented best practices regarding the use of statistical power and the interpretation of study results.

Appropriate use of power calculations

Power can be defined as the probability that a statistically significant difference or association will be observed for a future study under a set of assumptions for a given sample size. One of these assumptions is a specified true magnitude of difference or association, which is ideally chosen to be the weakest clinically meaningful difference/association.10,11 As such, the general goal of performing a power analysis when designing a clinical study (assuming that the aim is to test whether a difference between patient groups or an association among variables exists) is to choose a sample size that controls the 2 types of statistical error given a specified true effect size (eg, odds ratio [OR], hazard ratio, mean difference) in the overall patient population. Specifically, these types of statistical error are type I error (ie, a false positive finding, most often chosen to be 5%) and type II error (ie, a false negative finding, most often chosen to be 20% corresponding to 80% power; Table 1). In more general terms, power analysis aids in choosing a sample size that is large enough to allow for a reasonable probability of generating meaningful conclusions from the study data, while at the same time avoiding an excessive sample size that could result in unnecessary burdens and costs to patients and investigators.
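To make these ingredients concrete, the following is a minimal sketch of an a priori sample size calculation in Python using the statsmodels library; the effect size, type I error, and power below are hypothetical values chosen for illustration, not numbers from any study discussed here.

```python
# A priori sample size calculation for comparing a continuous outcome
# between 2 groups. All design inputs (effect size, alpha, power) are
# hypothetical values chosen for illustration.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.4,   # standardized mean difference (Cohen d); assumed
    alpha=0.05,        # type I error (false positive rate)
    power=0.80,        # 1 - type II error (false negative rate)
    alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.0f}")  # ~99
```

As the sketch makes explicit, assuming a larger true effect size, or tolerating a higher type I or type II error, reduces the required sample size.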

Table 1. Illustration of the 2 types of statistical errors.

Perhaps most obviously, using power analyses to determine the sample size of a randomized controlled trial ensures that the sample size will allow for a reasonable probability of detecting a specified clinically meaningful difference between treatment groups. For example, in a recent study by Messier et al12 assessing whether high-intensity strength training reduces knee pain or knee joint compressive forces in adults with knee osteoarthritis, a sample size of 372 patients (124 in each of 3 treatment groups) was targeted. This sample size resulted in 80% power at the P < 0.0083 significance level (ie, after adjusting for multiple testing) to detect a mean difference of 1.1 in the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) between treatment groups.
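The structure of such a calculation can be sketched as follows. Because the WOMAC standard deviation is not reported in this article, the value below is purely hypothetical, so the resulting sample size is illustrative and will not necessarily match the trial's actual target.

```python
# Illustrative sketch of the structure of a trial power calculation.
# The WOMAC standard deviation is NOT reported in the text; sd = 2.5
# is a hypothetical value, so the printed result is for illustration
# only and need not match the trial's target of 124 per group.
from statsmodels.stats.power import TTestIndPower

sd = 2.5                   # hypothetical SD of the WOMAC outcome
mean_difference = 1.1      # clinically meaningful difference (from the trial)
d = mean_difference / sd   # standardized effect size

n_per_group = TTestIndPower().solve_power(
    effect_size=d, alpha=0.0083, power=0.80, alternative="two-sided"
)
print(f"Patients needed per group: {n_per_group:.0f}")
```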

Power analyses can also be useful for observational studies, whether in the form of a prospective study of new patients or a retrospective study of an already existing patient group for which data have not yet been collected. For example, such analyses can aid in evaluating the feasibility of a rigorous analysis of the study aims given the study population (eg, if we wish to examine risk factors for a certain outcome, how well can we do this given the data we will generate if the outcome is rare?), and, if the study is feasible, can help determine the ideal sample size. Additionally, power calculations can be helpful when the sample size is already fixed but there is a need to collect extra data of interest that come with financial costs. In these situations, power analysis can help decide whether these extra costs are likely to be worthwhile, and if so, whether all samples, or a smaller subset, should be included. In short, whether or not power analyses are actually conducted, they should always be considered before any research study.
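A feasibility check for the rare-outcome scenario above might look like the following sketch, again in Python with statsmodels; all outcome rates and sample sizes are hypothetical.

```python
# Feasibility check for a hypothetical retrospective study with a
# fixed sample size and a rare outcome. All numbers are illustrative.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_unexposed = 0.02   # assumed outcome rate in the unexposed group
p_exposed = 0.04     # assumed rate under a clinically meaningful effect
h = proportion_effectsize(p_exposed, p_unexposed)  # Cohen's h

power = NormalIndPower().solve_power(
    effect_size=h, nobs1=500, alpha=0.05, alternative="two-sided"
)
print(f"Estimated power with 500 patients per group: {power:.2f}")
```

Under these assumptions the estimated power is only about 0.47, flagging that the planned analysis would likely be underpowered and that the design or aims should be reconsidered before data collection begins.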

Inappropriate use of power calculations

On the other hand, performing power analysis following completion of a study in order to aid in the interpretation of its results is inadvisable for 2 reasons: (1) such a power analysis is theoretically incorrect, and (2) there is a much better and readily available alternative. Both of these issues will be discussed herein; however, we will first address the theoretically incorrect nature of performing post hoc power calculations. To illustrate this, it is first necessary to formally define probability, since power is a specific type of probability. Probability is a numerical quantity ranging from 0 to 1 that expresses the likelihood of a future event. Notably, probability in general, and correspondingly statistical power, refers only to something that may or may not happen in the future; neither of these concepts is relevant when the event of interest has already occurred. For example, the probability that a certain team will win the Super Bowl in a given year ceases to be a meaningful concept once the Super Bowl has finished. Therefore, for this reason alone, power calculations should only be performed when planning a future study that has not yet taken place. Post hoc power calculations that are performed in reference to a previous study are never appropriate, as we already know with certainty whether or not a statistically significant finding has occurred.

In our experience, a request for post hoc power calculations is the most common statistics-related comment that is made by journal reviewers in the medical literature. Additionally, post hoc power calculations are often requested by researchers prior to manuscript submission. This may be because they believe these calculations will be helpful, because they have seen these calculations presented in the literature previously and believe they are expected, or because they are aiming to preemptively address a comment by a journal reviewer.

Why is an incorrect statistical technique requested (and presented) so often? There are several likely reasons. The first and probably most common scenario occurs when a statistically significant difference or association has not been identified, prompting the following question: “Is the lack of a statistically significant result in this study a false negative finding caused by an inappropriately small sample size?” This is an important question; however, performing power calculations is not an appropriate way to address it. The request for a power calculation in this scenario generally comes in 1 of 2 forms. First, there is often a desire to estimate “observed power,” or the power that the study had to detect the observed effect size assuming the observed levels of variability. For example, if a nonsignificant OR of 1.5 was reported in the study, one might wonder what power the study had to detect that OR with the sample size that was utilized. However, if a nonsignificant finding was obtained, power to detect the observed effect size will always be low,7 as observed power is directly related to the obtained P value and provides no information beyond it.6 Therefore, calculating observed power is completely noninformative. Second, it may be of interest to estimate the power that the study had to detect a clinically meaningful effect size (eg, an OR of 2.0 might be clinically relevant in a given study). The thought process behind both approaches is likely that a low estimate of power could signify a false negative finding. However, even setting aside the fact that, as previously mentioned, power is undefined for an already completed study, such an approach would at best be an indirect way to address the likelihood of a false negative finding.
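The one-to-one correspondence between observed power and the P value is easy to verify. The following minimal sketch, for the simple case of a two-sided z test, computes observed power from the P value alone.

```python
# Demonstration that "observed power" is just a transformation of the
# P value and adds no information (two-sided z test case).
from scipy.stats import norm

def observed_power(p_value, alpha=0.05):
    """Post hoc power to detect the observed effect size."""
    z_obs = norm.ppf(1 - p_value / 2)   # |z| implied by the P value
    z_crit = norm.ppf(1 - alpha / 2)    # significance threshold
    # probability a replicate study yields |Z| > z_crit when the
    # true effect is assumed to equal the observed one
    return norm.sf(z_crit - z_obs) + norm.cdf(-z_crit - z_obs)

for p in [0.05, 0.10, 0.30, 0.80]:
    print(f"P = {p:.2f} -> observed power = {observed_power(p):.2f}")
# Output: 0.50, 0.38, 0.18, 0.06
```

A P value of exactly 0.05 always corresponds to observed power of roughly 50%, and every nonsignificant P value yields even less, so the calculation can never rescue a nonsignificant result.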

A sound alternative to post hoc power calculations

Fortunately, there is an alternative to post hoc power calculations that is theoretically correct and is very often already provided in the results we hope to interpret: a 95% confidence interval (CI). A 95% CI can reasonably be thought of as the range of effect sizes that are consistent with the observed data and within which the true effect size is likely to lie, and it therefore directly informs us as to whether a false negative finding may have occurred. Of note, the technical and somewhat long-winded interpretation of a 95% CI is that if samples of the same size as that of the current study were repeatedly taken from the same patient population and a 95% CI for the effect size calculated for each sample, 95% of these CIs would contain the true population effect size. Of course, this interpretation assumes that there is no systematic error in the estimation of the effect size, such as bias or confounding.

In general, once all the data for a given study have been collected, power analysis no longer has a part to play, and it is best to perform the analysis and interpret the results based on 95% CIs for effect sizes (along with the effect sizes themselves). For example, if the weakest clinically meaningful effect size for a given study is an OR of 1.5, a 95% CI that ranges from 0.8 to 2.2 would indicate that a clinically meaningful association is possible, whereas a 95% CI that ranges from 0.8 to 1.3 would indicate that a clinically meaningful association is unlikely. A graphical illustration of how to assess the likelihood of a clinically meaningful difference based on 95% confidence limits and the presence or absence of a statistically significant difference is shown in Figure 1. Notably, the width of a 95% CI for an effect size depends on several factors related to sample size and variability that differ depending on the hypothesis being evaluated and the types of variables being examined (ie, continuous, binary, time-to-event, ordinal). A narrower 95% CI indicates a smaller range of likely effect sizes and therefore a more precise estimate of the true effect size.
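To make this concrete, the following sketch computes an OR and its 95% CI from a hypothetical 2x2 table using the standard Woolf (log-OR) method, then reads the result against the clinically meaningful OR of 1.5 used in the example above; all counts are invented for illustration.

```python
# Minimal sketch: 95% CI for an odds ratio from a hypothetical 2x2
# table, interpreted against a clinically meaningful OR of 1.5.
import numpy as np
from scipy.stats import norm

# hypothetical counts: [outcome yes, outcome no] by exposure group
a, b = 30, 170   # exposed
c, d = 25, 175   # unexposed

log_or = np.log((a * d) / (b * c))
se = np.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf standard error
lo, hi = np.exp(log_or + norm.ppf([0.025, 0.975]) * se)

print(f"OR = {np.exp(log_or):.2f}, 95% CI {lo:.2f}-{hi:.2f}")
# OR = 1.24, 95% CI 0.70-2.19: nonsignificant, but the upper limit
# exceeds 1.5, so a clinically meaningful association remains possible.
```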

Figure 1. Illustration of how to assess the likelihood of a clinically meaningful difference (CMD) based on 95% confidence limits for a scenario comparing a binary outcome between 2 groups. Examples are provided with and without the occurrence of a statistically significant difference (ie, P < 0.05), and all assume that an OR of 1.5 indicates a CMD in this example, which is shown with a solid horizontal line. ORs are represented by solid black square points, and 95% CIs are represented by vertical lines. A dashed horizontal line is provided for an OR of 1, indicating no difference between groups. CI: confidence interval; OR: odds ratio.

Comments on other power analysis scenarios

There are several other, less common situations in which a power analysis following the completion of a study might be requested; we focus on 2 of them here. First, it could be of interest to estimate the power that a future study with the same sample size as the current study would have to detect a certain effect size, in order to better inform other researchers planning such future studies. This is completely acceptable as long as the emphasis is solely on that future study and not on the current, completed study, for which 95% CIs are best used to interpret the results, as previously mentioned. Second, one might hold the opinion that all studies should include a power calculation and that any manuscript lacking a power statement should therefore add one. While this is certainly a valid viewpoint at the study design stage, if a given study is already completed and a power calculation was not used to choose the sample size, performing one at that point will be of no use, as power should only be used to decide on the sample size of a future study.

Suggested best practices for performing power analysis

Taking all of the above into account, 3 simple best practices for performing statistical power analysis and interpreting the results of a research study are as follows (Table 2). First, power analysis should always be considered before any research study in order to choose an ideal sample size and/or to examine the feasibility of properly evaluating the study aims. It should be noted that sample size decisions can also be informed by considering the precision of estimates (eg, the width of 95% CIs for effect sizes) instead of, or in conjunction with, power analyses.13 Second, power analysis should never be used to help interpret the results of an already completed study, or indeed for any reason other than to inform sample size decisions for a future study. Third, 95% CIs for effect sizes (or other parameter estimates, such as means or proportions) should be used along with effect sizes and P values when drawing conclusions from results, for example when assessing the likelihood of a false negative finding. In other words, when interpreting the results of a given research study, the best practice is to use the actual results of that study.
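As a simple illustration of the precision-based approach,13 the following sketch solves for the sample size needed to estimate a single proportion with a target 95% CI half-width; the anticipated proportion and target width are hypothetical values.

```python
# Precision-based sample size sketch: choose n so that a 95% CI for a
# proportion has a target half-width. All inputs are hypothetical.
import math

p_expected = 0.30   # anticipated proportion (assumption)
half_width = 0.05   # desired 95% CI half-width
z = 1.96            # normal quantile for 95% confidence

n = math.ceil((z / half_width) ** 2 * p_expected * (1 - p_expected))
print(f"Sample size for a +/-{half_width} CI half-width: {n}")  # 323
```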

Table 2. Suggested best practices regarding statistical power analysis.

Footnotes

  • The authors declare no conflicts of interest relevant to this article.

  • Accepted for publication January 7, 2022.
  • Copyright © 2022 by the Journal of Rheumatology

This is an Open Access article, which permits use, distribution, and reproduction, without modification, provided the original article is correctly cited and is not used for commercial purposes.

References

1. Zhang Y, Hedo R, Rivera A, Rull R, Richardson S, Tu XM. Post hoc power analysis: is it an informative and meaningful analysis? Gen Psychiatr 2019;32:e100069.
2. Smith AH, Bates MN. Confidence limit analyses should replace power calculations in the interpretation of epidemiologic studies. Epidemiology 1992;3:449-52.
3. Nuzzo RL. Post hoc power. PM R 2021;13:422-4.
4. Matcham J, McDermott MP, Lang AE. GDNF in Parkinson's disease: the perils of post-hoc power. J Neurosci Methods 2007;163:193-6.
5. Levine M, Ensom MH. Post hoc power analysis: an idea whose time has passed? Pharmacotherapy 2001;21:405-9.
6. Hoenig JM, Heisey DM. The abuse of power: the pervasive fallacy of power calculations for data analysis. Am Stat 2001;55:19-24.
7. Goodman SN, Berlin JA. The use of predicted confidence intervals when planning experiments and the misuse of power when interpreting results. Ann Intern Med 1994;121:200-6.
8. Dziak JJ, Dierker LC, Abar B. The interpretation of statistical power after the data have been gathered. Curr Psychol 2020;39:870-7.
9. Althouse AD. Post hoc power: not empowering, just misleading. J Surg Res 2021;259:A3-6.
10. Kallogjeri D, Spitznagel EL Jr, Piccirillo JF. Importance of defining and interpreting a clinically meaningful difference in clinical research. JAMA Otolaryngol Head Neck Surg 2020;146:101-2.
11. Copay AG, Subach BR, Glassman SD, Polly DW Jr, Schuler TC. Understanding the minimum clinically important difference: a review of concepts and methods. Spine J 2007;7:541-6.
12. Messier SP, Mihalko SL, Beavers DP, et al. Effect of high-intensity strength training on knee pain and knee joint compressive forces among adults with knee osteoarthritis: the START randomized clinical trial. JAMA 2021;325:646-57.
13. Rothman KJ, Greenland S. Planning study size based on precision rather than power. Epidemiology 2018;29:599-603.