Explain Like I'm 5 | Day 05: Understanding Minimum Detectable Effect
One core experimentation concept, explained with clarity and practical intuition.
In the "Explain Like I'm 5" series, we break down A/B testing topics so simply that even a 5-year-old could understand them.
👋 Hey! This is Manisha Arora from PrepVector. Welcome to the Tech Growth Series, a newsletter that aims to bridge the gap between academic knowledge and practical aspects of data science. My goal is to simplify complicated data concepts, share my perspectives on the latest trends, and share my learnings from building and leading data teams.
Remember our toy mystery with your sibling Sam, the one that led us to set up the null and alternative hypotheses? Let's continue that story.
You have 20 toys, but you suspect your sibling Sam has been swiping them. You count your toys and find only 19. One is missing.
Does that mean Sam took it? Probably not. You lose toys all the time—under the couch, at school, in the car. One missing toy could be nothing.
But what if 3 were missing? Now that’s suspicious. So you draw a line. If three or more toys are missing, then you’ll have proof that Sam has been taking them. That’s when you’d actually want to investigate further. This number 3 is your Minimum Detectable Effect (MDE). It’s the smallest difference you’ve decided matters enough to prove your point. Anything less, and you’ll assume nothing fishy is happening. But hit that threshold, and you have your answer.
In simpler terms, MDE is asking yourself a hard question upfront: “How big does the change need to be before it actually matters to me?”
In Data Science Terms
MDE is the smallest effect size you care about detecting in your experiment.
MDE forces you to ask upfront: “How big does the effect need to be for it to actually matter to my business?” Rather than hoping for any improvement, you define what “success” looks like before you start.
A small MDE requires more data: If you want to detect a tiny 1% improvement in conversion rate, you’ll need thousands of users flowing through your experiment. This means longer experiments and more patience.
A large MDE requires less data: If you only care about detecting a 5% improvement, you need far fewer users. Your experiment finishes faster, but you might miss smaller wins.
By setting MDE before the experiment, you eliminate the temptation to move the goalposts. You can’t run the test, see the results, and then decide “actually, this 0.5% lift is good enough.”
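The small-MDE-needs-more-data tradeoff can be sketched with the standard two-proportion sample-size formula. Here's a minimal back-of-the-envelope version in Python; the 10% baseline conversion rate, 5% significance level, and 80% power are illustrative assumptions, not figures from this article:

```python
import math
from statistics import NormalDist

def sample_size_per_group(baseline, mde, alpha=0.05, power=0.80):
    """Approximate users needed per group to detect an absolute lift
    of `mde` over `baseline` with a two-sided two-proportion z-test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for a 5% significance level
    z_power = z.inv_cdf(power)          # ~0.84 for 80% power
    p1, p2 = baseline, baseline + mde
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / mde ** 2)

# Small MDE: detecting a 1% absolute lift on a 10% baseline
print(sample_size_per_group(baseline=0.10, mde=0.01))
# Large MDE: detecting a 5% absolute lift needs far fewer users
print(sample_size_per_group(baseline=0.10, mde=0.05))
```

Notice that the required sample size scales with 1/MDE², so halving your MDE roughly quadruples the users you need.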
Example:
You work at an ecommerce company testing whether Buy Now, Pay Later (BNPL) increases orders.
H0 (Null Hypothesis): BNPL does not change the order rate.
H1 (Alternative Hypothesis): BNPL changes the order rate.
MDE (Minimum Detectable Effect): You decide the smallest meaningful improvement is 2%. Anything smaller is not valuable for the business.
Using this MDE, you calculate the required sample size (say 10,000 users) and run the experiment for 2 weeks.
You compare order rates between control and treatment. If the treatment's order rate exceeds the control's by more than 2% and the difference is statistically significant, we reject H0, i.e., we conclude that BNPL does change the order rate.
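The first half of that decision rule, checking the observed lift against the pre-committed MDE, can be sketched in a few lines. The order counts below are made up purely for illustration (the statistical-significance check is a separate step):

```python
# Hypothetical BNPL experiment results (all numbers are made up)
control_orders, control_users = 520, 5000
treatment_orders, treatment_users = 640, 5000

control_rate = control_orders / control_users      # 10.4%
treatment_rate = treatment_orders / treatment_users  # 12.8%
observed_lift = treatment_rate - control_rate

MDE = 0.02  # the 2% absolute lift we committed to BEFORE running the test

# Gate 1: does the observed lift clear the MDE we set upfront?
# Gate 2 (statistical significance) is a separate check.
if observed_lift >= MDE:
    print(f"Observed lift {observed_lift:.1%} meets the {MDE:.0%} MDE")
else:
    print(f"Observed lift {observed_lift:.1%} is below the {MDE:.0%} MDE")
```

Because the MDE was fixed before the experiment, there's no room to look at a 0.5% lift afterward and declare it "good enough."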
Now the next question is: how do we prove the difference is statistically significant? Stay tuned to find out.
If you’d like to dive deeper into experimentation, here are a few of our learning programs you might enjoy:
A/B Testing Course for Data Scientists and Product Managers
Learn how top product data scientists frame hypotheses, pick the right metrics, and turn A/B test results into product decisions. This course combines product thinking, experimentation design, and storytelling—skills that set apart analysts who influence roadmaps.
Advanced A/B Testing for Data Scientists
Master the experimentation frameworks used by leading tech teams. Learn to design powerful tests, analyze results with statistical rigor, and translate insights into product growth. A hands-on program for data scientists ready to influence strategy through experimentation.
Master product sense and A/B testing, and learn to use statistical methods to drive product growth. I focus on building a problem-solving mindset and applying data-driven strategies, including A/B testing, ML, and causal inference, to drive product growth.
Not sure which course aligns with your goals? Send me a message on LinkedIn with your background and aspirations, and I’ll help you find the best fit for your journey.