Last update: Jan 15, 2026 · Reading time: 4 minutes
Minimum Detectable Effect (MDE) is the smallest change in a metric that an A/B test or experiment can detect at a given significance level and statistical power. Knowing how to calculate MDE is crucial for forming realistic test plans because it determines the resources, time, and traffic that must be allocated to observe meaningful results. The significance of MDE lies in its ability to tell you whether the changes being tested have a substantial impact or are indistinguishable from random noise.
Calculating MDE accurately helps prevent wasted resources. If your test is underpowered to detect the changes you care about, the effort spent experimenting can result in overlooked opportunities or erroneous conclusions. Understanding how to calculate the minimum detectable effect for realistic test plans therefore leads to more strategic decision-making.
When teams know the MDE, they can set realistic expectations for test outcomes. This clarity aids in preventing cognitive biases, ensuring that results are interpreted correctly and effectively communicated across teams.
Having a clear understanding of MDE can guide your overall test design. It influences sample sizes, timelines, and statistical methods used, ultimately leading to more robust experimental setups.
Before diving into calculations, identify the primary metric you want to measure. This could be conversion rates, average order value, or user engagement levels. Specifying the metric will set the foundation for your calculations.
Determine your desired statistical power (commonly set at 80%, i.e. β = 0.20) and significance level (usually α = 0.05). Both parameters directly affect the MDE.
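These two parameters translate into the critical Z-values used later in the MDE formula. A minimal sketch using Python's standard library (the variable names are illustrative):

```python
from statistics import NormalDist

alpha = 0.05   # significance level (two-sided)
power = 0.80   # desired statistical power (1 - beta)

# Critical values from the standard normal distribution
z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ≈ 1.96
z_beta = NormalDist().inv_cdf(power)           # ≈ 0.84
```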
Gather historical data on your primary metric to determine the baseline conversion rate or average value. Reliable baseline data is key to making informed calculations.
The formula to calculate MDE (using the normal approximation for a two-sided, two-proportion test) is as follows:

\[ MDE = \left( Z_{1-\alpha/2} + Z_{1-\beta} \right) \times \sqrt{\frac{2\,p(1-p)}{n}} \]

Where:

- \( Z_{1-\alpha/2} \) is the critical value for your significance level (≈ 1.96 for α = 0.05)
- \( Z_{1-\beta} \) is the critical value for your statistical power (≈ 0.84 for 80% power)
- \( p \) is the baseline conversion rate
- \( n \) is the sample size per variant
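The calculation can be done directly in a few lines. A minimal Python sketch for a two-sided, two-proportion test (the function name and example numbers are illustrative):

```python
import math
from statistics import NormalDist

def minimum_detectable_effect(p, n, alpha=0.05, power=0.80):
    """Absolute MDE for a two-sided, two-proportion test with n users per variant."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for significance
    z_beta = NormalDist().inv_cdf(power)           # critical value for power
    se = math.sqrt(2 * p * (1 - p) / n)            # standard error of the difference
    return (z_alpha + z_beta) * se

# Example: 5% baseline conversion rate, 10,000 users per variant
mde = minimum_detectable_effect(p=0.05, n=10_000)
print(f"Absolute MDE: {mde:.4f}")  # ≈ 0.0086, i.e. about 0.86 percentage points
```

In this example, the test can only reliably detect lifts of roughly 0.86 percentage points or more; smaller true effects would likely go unnoticed at this sample size.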
Make sure to adjust the MDE according to real-world factors. You may have constraints like budget and resources that need to be taken into account to create a feasible test plan.
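In practice these constraints often run the calculation in the other direction: you fix a target MDE you would be happy to detect, then solve the same formula for the sample size your budget must support. A hedged sketch (function name and numbers are illustrative):

```python
import math
from statistics import NormalDist

def required_sample_size(p, mde, alpha=0.05, power=0.80):
    """Users needed per variant to detect an absolute lift of `mde`."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return math.ceil(z ** 2 * 2 * p * (1 - p) / mde ** 2)

# Detecting a 0.5-percentage-point lift on a 5% baseline
n = required_sample_size(p=0.05, mde=0.005)
print(n)  # roughly 30,000 users per variant
```

If the resulting sample size is infeasible given your traffic, either accept a larger MDE or plan a longer test.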
Statistical power indicates the likelihood of detecting an effect if it exists. Higher power reduces the chances of Type II errors, ensuring that significant changes are not overlooked.
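This trade-off is easy to see numerically: for a fixed sample size, demanding higher power raises the smallest effect you can reliably detect. An illustrative sketch, reusing the same normal-approximation formula:

```python
import math
from statistics import NormalDist

p, n, alpha = 0.05, 10_000, 0.05
z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
se = math.sqrt(2 * p * (1 - p) / n)  # standard error of the difference

for power in (0.80, 0.90):
    z_beta = NormalDist().inv_cdf(power)
    mde = (z_alpha + z_beta) * se
    print(f"power={power:.0%}: MDE ≈ {mde:.4f}")
```

With 10,000 users per variant on a 5% baseline, moving from 80% to 90% power pushes the detectable effect from about 0.86 to about 1.00 percentage points.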
Your baseline data should be substantial enough to give an accurate representation of historical performance. Ideally, it should cover various periods to account for fluctuations in performance.
While it’s technically possible to adjust the MDE after analyzing results, doing so introduces bias and undermines the integrity of the experiment. Calculate the MDE before the experiment begins.