Ten years ago, I wrote with embarrassing confidence about the imminent death of SCORM and the inevitable rise of xAPI as the new standard for learning technology. I wasn't hedging my bets or suggesting possibilities. I was predicting the future with the kind of certainty that makes you want to crawl under a desk when you read it back a decade later.
SCORM is still here. xAPI exists, certainly, but it hasn't swept away the old order as I proclaimed it would. The revolution I predicted turned out to be more of a gentle evolution, and even that's being generous. I was wrong, spectacularly and publicly, about something I supposedly understood well.
This isn't a confession designed to make you feel better about your own prediction failures, though if it does, you're welcome. It's recognition of a deeper problem that plagues our industry: we've become addicted to certainty about an inherently uncertain future, and it's making us worse at our jobs.
The Prediction Trap
The appeal is obvious. Stakeholders want to know what's coming next, vendors promise revolutionary changes just around the corner, and we feel pressure to demonstrate thought leadership by gazing confidently into the crystal ball. But prediction in learning and development isn't just difficult; it's actively misleading, both to ourselves and to the people who rely on our judgement.
Consider the track record. How many "game-changing" technologies have been announced as the future of L&D over the past decade? Virtual reality was going to transform training by 2020. Artificial intelligence would personalise learning beyond recognition by 2022. Microlearning would replace traditional courses entirely. Blockchain would revolutionise credentialing. Each prediction came with compelling logic and evangelical advocates.
Some of these technologies have found useful applications, certainly. But the transformational changes predicted haven't materialised in the timeframes or ways expected. Meanwhile, we've spent enormous energy preparing for futures that didn't arrive while sometimes missing the gradual changes that actually matter.
The fundamental problem isn't that we're bad at prediction; it's that complex systems like learning and organisational behaviour resist prediction by their very nature. When you add rapidly changing technology, shifting workplace expectations, and the inherent unpredictability of human adoption patterns, confident prediction becomes intellectual hubris rather than professional insight.
The Vendor Reality Check
Having worked with numerous educational technology companies, I can offer insight into what "on the roadmap" actually means in practice. It's a spectrum that ranges from "we're launching this next week" to "we have no intention of building anything even close to this, but I'd like to make this sale." The majority of roadmap promises fall somewhere in the middle: genuine intentions that may or may not survive contact with development realities, budget constraints, and shifting market priorities.
This isn't necessarily deception, though it sometimes is. More often, it's optimism bias meeting commercial pressure. Product teams genuinely believe they'll deliver the promised features, sales teams need something to sell against competitor advantages, and marketing teams need compelling narratives about the future.
But for L&D professionals making purchasing decisions, vendor roadmaps represent hope rather than commitment. The history of educational technology is littered with promised features that never materialised, partnerships that dissolved, and revolutionary approaches that turned out to be evolutionary at best.
McKinsey's research on technology project delivery found that large IT projects run 45% over budget and 7% over time on average, while delivering 56% less value than predicted (Bloch et al., 2012).
The Factual Alternative
This doesn't mean we should ignore future possibilities or avoid planning beyond the immediate present. But we can distinguish between thoughtful scenario planning and reckless prediction, between considering possibilities and declaring certainties.
Start with what you can observe today. Are vendors actually delivering promised features to existing customers? Can you speak with organisations using the technology in production environments rather than demonstration settings? Do the case studies represent controlled implementations or complex real-world deployments?
When evaluating new approaches, look for evidence of sustained adoption rather than initial enthusiasm. The graveyard of learning technologies is full of innovations that generated conference buzz but failed to achieve lasting implementation. Technologies that survive and grow typically solve genuine problems in ways that work within existing organisational constraints.
Base purchasing decisions on current capability rather than future promises. If a platform doesn't meet your needs today, don't assume roadmap features will solve your problems. If a technology requires significant organisational change to be effective, consider whether you have the capacity and commitment to make those changes before the vendor's priorities shift.
Planning Without Predicting
The alternative to reckless prediction isn't paralysing uncertainty. We can think about future possibilities without claiming to know future realities. We can develop capabilities that will be useful regardless of which specific technologies emerge. We can build organisational adaptability rather than betting on particular solutions.
This approach starts with understanding fundamental challenges rather than fashionable solutions. What performance problems are you trying to solve? What barriers prevent people from doing their best work? What organisational constraints limit effectiveness? These questions matter regardless of whether the future belongs to artificial intelligence, virtual reality, or technologies we haven't yet imagined.
Invest in developing evaluation capabilities that can assess new technologies quickly and systematically. Build relationships with early adopters who can provide honest feedback about implementation realities. Create small-scale experimentation processes that allow you to test approaches without major commitments.
Most importantly, develop comfort with uncertainty as a permanent condition rather than a temporary problem to be solved through better prediction. The pace of technological change suggests that uncertainty will increase rather than decrease, making adaptability more valuable than foresight.
The Intellectual Humility Advantage
Perhaps the most important capability for L&D professionals isn't the ability to predict the future, but the intellectual humility to acknowledge we can't. This doesn't make us less valuable to our organisations; it makes us more trustworthy. Stakeholders quickly learn to distinguish between professionals who offer honest assessments and those who make confident predictions that don't materialise.
Research on expert judgement consistently shows that those who express appropriate uncertainty and acknowledge the limits of their knowledge make better decisions than those who project false confidence (Tetlock, 2017). In complex domains like learning and technology, intellectual humility becomes a professional advantage rather than a weakness.
This approach also allows us to focus energy on what we can control rather than what we can predict. We can improve our current practice, develop better evaluation methods, and build organisational capabilities that will be useful regardless of future technological developments.
The organisations that navigate technological change most successfully aren't those with the best predictions, but those with the best adaptation capabilities. They experiment thoughtfully, evaluate honestly, and maintain flexibility in the face of uncertainty.
The Long View
My failed prediction about SCORM and xAPI taught me something valuable: the future usually arrives more slowly and differently than we expect. Revolutionary changes tend to be evolutionary in practice, and the technologies that matter most are often those that solve mundane problems reliably rather than promising transformational breakthroughs.
This doesn't mean we should ignore emerging possibilities or resist innovation. It means we should approach the future with appropriate humility, focus on capabilities rather than predictions, and make decisions based on current realities rather than projected possibilities.
The next time someone asks you to predict the future of learning technology, consider offering something more valuable: thoughtful analysis of current trends, honest assessment of existing capabilities, and practical frameworks for evaluating whatever emerges next. Your stakeholders might find it less exciting than confident predictions, but they'll find it more useful when they're making actual decisions.
After all, the future will arrive regardless of our predictions about it. Our job isn't to forecast it accurately, but to help our organisations respond to it effectively when it does.
What predictions have you made that now seem embarrassingly confident? And what have those experiences taught you about planning for uncertainty rather than predicting certainty?
References
Bloch, M., Blumberg, S. & Laartz, J. (2012). Delivering large-scale IT projects on time, on budget, and on value. McKinsey & Company. Available at: https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/delivering-large-scale-it-projects-on-time-on-budget-and-on-value
Tetlock, P. (2017). Expert Political Judgment: How Good Is It? How Can We Know? Princeton University Press.