The Discipline Trap
At the end of the recent Data and Metrics Mastery Day held by The Learning Network, I found myself in one of those conversations that starts somewhere specific and ends up somewhere much bigger. We were talking about metric chains, about how L&D teams can build measurement frameworks that trace a clear line from performance indicators all the way up to what matters to the board. Sensible stuff, solid methodology, and then someone asked the question that always comes eventually: “But what do we do when people just keep coming to us and asking for training?”
The tendency to look for one methodology, one framework, one way of thinking that we can apply cleanly and consistently across everything we do is ever present. We find something that makes sense, offers clarity and rigour, and we hold onto it with both hands. The problem is that the world refuses to cooperate.
The ideal state for metric chains, as I was explaining, is working backwards from the organisation’s top priorities, mapping the performance indicators that signal progress against those priorities, defining the levers that L&D can pull. When a need arises, you already know where it sits in the chain and what the right response might be. You never start with an intervention and work forwards; you start from strategy and work down. It’s coherent, defensible, and in an ideal world, it is absolutely what you would do.
Of course, most of us don’t live in that ideal world. Most of us are still navigating organisations where people arrive with a training request, an urgency, and a deadline. So what do we do? Should we refuse to engage until conditions are perfect, treating our methodology as something too precious to compromise? Or, and I’d argue this is the more useful choice, should we start from the other end of the chain? We begin with the intervention we’ve been handed, and we work upwards, building evidence, demonstrating value, and using that to make the case for a better approach in future. Is it ideal? No. Does it represent the full power of the methodology? Also no. Is it considerably better than nothing at all? Absolutely.
This is something I’ve observed across pretty much my entire career: the most effective work rarely comes from a single approach applied in its purest form. It comes from drawing thoughtfully across multiple disciplines and fields, taking what is most useful from each, and using ideas from one area to compensate for the constraints that prevent you from fully applying another. The best L&D practitioners I know are not devotees of a single school of thought; they are curious borrowers from many, who know when to reach for which tool and why.
We often repeat the phrase “perfection is the enemy of progress,” and I think we mean it when we say it. But I’ve noticed, and I include myself in this without hesitation, that we apply it mainly to the things we make rather than to how we work. I’ve spent more time than I care to admit worrying about whether something was the absolute best version of itself before releasing it, or wondering whether our function was operating at peak efficiency before attempting to change anything, when what I should have been doing was adapting one small area of practice, learning from it, and moving forward. The principle applies to our methodologies just as much as it applies to our e-learning modules.
So, in the spirit of making progress rather than achieving perfection, here are five questions worth asking yourself on a regular basis:
Am I refusing to act because the conditions for my preferred approach aren’t fully in place, and if so, what is the cost of that inaction to the people and the organisation I’m here to help?
What does the best available version of this approach look like given the constraints I’m actually working within, rather than the constraints I wish I had?
Which elements of other disciplines, fields, or frameworks could help me compensate for what this methodology can’t do on its own in this context?
If I begin here, what evidence could I gather that would make the case for doing it better next time?
Am I conflating how something should work in theory with what would move things forward in practice, and am I honest with myself about which of those I’m optimising for?
None of this is an argument for abandoning rigour or settling permanently for less. It’s an argument for treating methodology as a starting point rather than a set of handcuffs; for understanding that combining approaches thoughtfully is often a sign of sophistication, not compromise. The work gets better when we stop waiting for the perfect conditions and start asking what we can do with the conditions we have.