L&D likes to talk about innovation. About agility, responsiveness, and data-informed decisions. But in practice, we’re often doing something quite different: designing the one perfect version of a course or intervention, launching it to everyone, and hoping for the best.
You wouldn’t call that a strategy in product development. You’d call it a guess.
That’s why we need to talk about A/B testing.
Not as a gimmick. Not as a data science buzzword. But as a perfectly reasonable, practical way to improve learning outcomes through deliberate, low-stakes experimentation. And yes, it’s a form of experimentation. Which, in case you missed it, is something I think should sit at the very core of L&D practice.
Because when we experiment, we reduce risk. We learn faster. We make better decisions. We move forward with more certainty, not because we guessed right, but because we tested what works.
So, what is A/B testing, really?
At its simplest, A/B testing is just comparing two versions of something to see which one performs better. You show Version A to one group and Version B to another, then compare the results. That’s it.
Marketers use it all the time: subject lines, call-to-action buttons, landing page layouts. But L&D teams can (and should) use it to test things like:
Which subject line gets better course sign-ups?
Does video or interactive text lead to more accurate post-training performance?
Which version of feedback leads to more improvement in skills practice?
This doesn’t mean becoming a data analyst or running complex experiments with control groups and p-values (though you can, if that’s your jam). It means replacing assumptions with small, thoughtful tests and using the results to inform what you scale.
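To make the first step concrete, here is a minimal sketch in Python of randomly splitting a learner list into two groups. The function, learner names, and seed are hypothetical; a spreadsheet or your LMS’s segmentation can do the same job, the point is simply that the split is random.

```python
import random

def split_into_groups(learners, seed=42):
    """Randomly assign learners to Version A or Version B."""
    rng = random.Random(seed)        # fixed seed so the split is reproducible
    shuffled = list(learners)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]  # (group A, group B)

# Hypothetical example: six learners split into two roughly equal groups
group_a, group_b = split_into_groups(["Ana", "Ben", "Chi", "Dev", "Eli", "Fay"])
print("Version A:", group_a)
print("Version B:", group_b)
```

Random assignment is what lets you attribute a difference in outcomes to the version itself, rather than to who happened to receive it.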
But why test? Why not just use best practice?
Because “best practice” is often just “what worked for someone else, somewhere else, once.”
Your audience, your context, your constraints: these things matter. What works brilliantly in one department might fall flat in another. What learners engage with during onboarding might not land the same way in a performance support context.
The only way to know what works for your people, in your organisation, at this moment is to test. And here’s the punchline: A/B testing helps you make safer decisions. You don’t launch the risky, unproven version to everyone. You try it with a subset, compare, and then move forward based on evidence, not instinct or opinion or the HiPPO (Highest Paid Person’s Opinion).
In other words, A/B testing doesn’t create risk. It reduces it.
How can you start testing?
Start small. You don’t need a fancy platform or statistical wizardry. You just need two things that are different enough to measure and a way to observe the outcome.
Try things like:
Two subject lines in a learning email campaign: which one gets more opens?
Two module intros: one starts with a story, the other with a stat. Which leads to more completions?
Two feedback messages: one detailed, one directive. Which results in better post-course assessment scores?
Measure what matters: opens, completions, quiz scores, confidence ratings, even behavioural observations, whatever’s appropriate for the context. You’re not trying to publish in an academic journal. You’re trying to learn enough to make a better next step. And crucially, document what you find. A/B testing builds internal knowledge: insights you can feed into your design workflow, making your next iteration much stronger.
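And if you want a little more than eyeballing two percentages, a rough comparison takes only a few lines. Below is a minimal sketch, assuming a count-based metric such as opens or completions out of a known total; the function name and figures are made up for illustration, and the p-value is a crude normal approximation rather than a formal analysis.

```python
from math import sqrt, erf

def compare_rates(successes_a, total_a, successes_b, total_b):
    """Compare two rates (opens, completions, pass rates) and return
    each rate plus a rough two-sided p-value for the difference."""
    rate_a = successes_a / total_a
    rate_b = successes_b / total_b
    pooled = (successes_a + successes_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (rate_b - rate_a) / se if se else 0.0
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return rate_a, rate_b, p_value

# Hypothetical example: story intro vs stat intro, completions out of enrolments
rate_a, rate_b, p = compare_rates(34, 60, 45, 62)
print(f"Version A: {rate_a:.0%}, Version B: {rate_b:.0%}, p ≈ {p:.2f}")
```

A small p-value just says the gap is unlikely to be chance; in practice, a consistent direction across a few small tests is often enough evidence to act on.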
Objections you may hear include…
“We don’t have the scale.”
You don’t need thousands of learners. A small sample can still reveal useful direction. And over time, multiple small experiments add up.
“We don’t have time to do two versions.”
You already do. You just don’t realise how much time you waste building single, untested versions that miss the mark. Testing helps you build the right thing sooner.
“We need certainty before launch.”
A/B testing is how you move toward certainty. Not perfect knowledge, just enough confidence to act smartly instead of guessing blindly.
Final thoughts
I believe experimentation should be a core part of L&D. Not a side project. Not something you do once a year. But a regular, expected, embedded part of how we work.
We’re not here to create learning content. We’re here to improve performance, support change, and help organisations get better. And that means we need to stop guessing and start learning about our learners, content, and impact.
That’s the real job. That’s the work.