The edtech world loves a launch.
New features. New platforms. New promises. And if you’re anywhere near the Learning Technologies scene this week, it’s hard not to be swept up in the buzz. Words like revolutionary, AI-powered, game-changing, and next-generation echo across every aisle.
Now, I’m not against innovation. Quite the opposite. But as someone who spends most of my time helping organisations fix performance problems, I’ve learned something important:
Not all innovation is progress.
Some tools genuinely change how we work and learn. Others just add a new login, a new licence fee, and a new layer of complexity, all without making a single meaningful dent in how people actually perform.
So, let’s talk about how we make better edtech decisions, not based on the demo, or the features, or the FOMO, but on what actually drives value in the real world.
The danger of mistaking activity for impact
We’ve all seen it happen. A shiny, flexible platform gets bought, promising to “transform learning.” But six months later, the team is still uploading the same old SCORM files and manually chasing completions. The only thing that’s changed is the vendor logo. This happens when we evaluate tech by what it can do, not by what we actually need it to do.
At its core, edtech should enable performance. That might mean faster access to support materials. Better data. Smarter pathways. But unless it solves a real-world barrier to doing great work, it’s just an expensive interface.
Before you even open the demo, you need to know:
What problem are we solving?
What’s getting in the way of that?
What could we do manually to test our assumptions before investing?
If you can’t answer those questions, the tech isn’t your next step. Discovery is. This principle mirrors the advice in The Lean Learning Cycle (Petersen, 2020), which reminds us that smart learning starts with a question, not a purchase order.
Disruption isn’t always a good thing
We love the language of disruption. But here’s the thing: sometimes, disruption just means breaking something that was already working.
A new tool that disrupts your workflow, confuses your users, or creates new silos isn’t progress; it’s just new.
Disruption is only valuable if it enables something that was previously impossible or inefficient. It should remove friction, improve access, accelerate insight, or simplify action. As Ries (2011) notes in The Lean Startup, the goal of innovation isn’t novelty; it’s validated learning and improved outcomes. If it doesn’t? Then it’s not disruption; it’s distraction.
How to experiment with tech before committing
This is where I’ll always come back to experimentation. Because you don’t need to implement the full system to test whether something adds value. You just need to design a small, intentional test.
Let’s say a vendor promises that their system improves self-directed learning. Great. How could you test that manually?
Could you curate a small set of resources and track how people use them?
Could you run a low-tech version of the same journey in SharePoint or Slack?
Could you survey the same users before and after to measure relevance and uptake?
If the outcome is promising, now you’ve got something to build a business case around: evidence, not enthusiasm.
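Even a low-tech pilot like this benefits from a repeatable way to read the numbers. Here’s a minimal sketch in Python, assuming you can export resource-access logs and before/after survey responses as CSV files; the file names, column names, and the 1–5 relevance scale are illustrative assumptions, not a prescribed format.

```python
# Minimal pilot analysis: resource uptake plus a before/after
# relevance comparison. File and column names are assumptions.
import csv
from collections import Counter

def uptake(log_path: str) -> Counter:
    """Count how often each curated resource was opened."""
    with open(log_path, newline="") as f:
        return Counter(row["resource"] for row in csv.DictReader(f))

def mean_score(survey_path: str) -> float:
    """Average self-reported relevance (1-5) from a survey export."""
    with open(survey_path, newline="") as f:
        scores = [int(row["relevance"]) for row in csv.DictReader(f)]
    return sum(scores) / len(scores)

if __name__ == "__main__":
    opens = uptake("access_log.csv")
    print("Most-used resources:", opens.most_common(5))

    before = mean_score("survey_before.csv")
    after = mean_score("survey_after.csv")
    print(f"Relevance: {before:.2f} -> {after:.2f} ({after - before:+.2f})")
```

If uptake clusters on one or two resources and relevance barely moves, that’s useful evidence too: it suggests the barrier isn’t access, and a new platform is unlikely to fix it.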
This works with AI tools, authoring platforms, LXPs, mobile apps, chatbots, anything. Start by testing the value, not the tech. This is echoed in Wallace’s (2023) view that performance-centred approaches must precede implementation, not follow it.
Having better conversations with vendors
Tech vendors are (usually) not the enemy. Most are smart people who believe in their product. But it’s your job to get past the pitch and into the value.
Here’s how:
1. Get specific.
Ask about actual use cases. Not features, outcomes.
“What business problem have your clients solved with this feature, and how did they measure success?”
2. Ask for evidence.
“Do you have data showing impact on performance — not just engagement?”
If the case study ends with “and everyone completed the course,” keep digging.
3. Don’t be afraid of small trials.
If a vendor won’t support a low-stakes test or pilot, ask why. Good partners will want you to see real value first.
4. Bring a performance scenario to the demo.
Instead of watching a generic walkthrough, ask them to show how their product would handle a real problem you’re facing.
5. Clarify the human effort required.
“How much time will it take for my team to make this useful?”
“What internal resources will we need to implement this effectively?”
If a tool adds complexity faster than it removes it, it’s a cost, not a solution.
Fosway’s (2024) research shows that learning systems are still widely underutilised, often because implementation is misaligned with business needs. If the tool’s primary benefit is “modernising your L&D,” then it’s worth asking, in what way, and to what end?
A few red flags to look out for
Over the years, I’ve developed a personal list of signals that tell me a tool may be more hype than help:
“It replaces your learning strategy.” No, it doesn’t. Tech supports strategy. It doesn’t write it.
“The engagement numbers speak for themselves.” Engagement is not the same as performance.
“AI automates everything.” Including things you probably shouldn’t automate.
“You don’t need to worry about the data model.” Yes, you absolutely do (see the sketch below).
“Just plug in your existing content.” Into what? For whom? To solve what?
If the pitch doesn’t include a clear use case, an actual performance outcome, or a transparent explanation of how it works, question it. Ask what problem it solves and how it proves it.
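To make the data-model point concrete, compare what a bare completion record can tell you with a record that links activity to an outcome. This is a hypothetical sketch, loosely modelled on xAPI-style statements; the field names, verbs, and metric are illustrative assumptions, not any vendor’s actual schema.

```python
# Hypothetical learning records, loosely modelled on xAPI-style
# statements. All field names and values are illustrative.

# A bare completion only proves that activity happened:
completion = {
    "actor": "mailto:pat@example.com",
    "verb": "completed",
    "object": "course/refund-handling-101",
}

# A performance-linked record ties that activity to an outcome
# the business actually cares about:
performance = {
    "actor": "mailto:pat@example.com",
    "verb": "demonstrated",
    "object": "skill/handling-refund-requests",
    "result": {"success": True, "avg_handle_time_secs": 240},
}
```

If a tool’s data model can only ever express the first kind of record, no amount of dashboarding will turn engagement numbers into evidence of performance.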
Making better decisions as a buyer
The best buyers are sceptical: not cynical, but curious. They ask better questions. They bring stakeholders into the discussion early. They look for pilot opportunities. And they’re not afraid to say, “This looks impressive, but it’s not what we need right now.”
That’s not resistance. That’s leadership.
You don’t need to chase every trend. You need to choose the tools that fit your goals, your systems, and your people. Tools that help you move faster toward something that matters, not just faster, full stop.
As Laura Overton reminds us in Learning Changemakers (2022), “the world of work is messy.” And if the solution doesn’t hold up under messy conditions, it’s probably not a solution at all.
The learning tech world is bursting with possibility. Some of it is genuinely exciting. Some of it is game-changing. Some of it will end up in a forgotten browser tab three months from now.
If you want to separate innovation from noise, start with a different question:
“What’s the smallest thing we could do to test whether this will make a real difference?”
Because if it won’t move the needle, it doesn’t matter how many features it has.
References
Fosway Group. (2024). Digital Learning Realities Research. www.fosway.com
Overton, L. (2022). Learning Changemakers: Stories from the Frontline of L&D Transformation. The Learning Network.
Petersen, R. A. (2020). The Lean Learning Cycle: A Guide to Designing Effective Learning Experiences. Independently published.
Ries, E. (2011). The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses. Crown Business.
Wallace, G. L. (2023). Performance-Based Instructional Systems Design. www.performancebasedisd.com