Here's a question that's been keeping me up at night: how do you prove that mentoring programmes deliver real value to organisations, rather than just making everyone feel warm and fuzzy about supporting each other?
We've been working with clients at Evolve L&D to tackle this exact challenge as we develop our mentoring platform, Tandemo. It became clear early on that the usual suspects (meeting frequency, relationship duration, satisfaction scores) tell us plenty about activity but precious little about effectiveness.
Our breakthrough came when we stopped asking "are people mentoring?" and started asking "is mentoring changing anything that matters?" This shift in questioning led us to develop a framework that goes beyond usage metrics towards measuring the impact of mentoring on business performance.
Note: I have written regularly about the folly of claiming direct ROI or causal links in our work, and we’ve tried to remain cognisant of this. Instead, when I talk about proving the impact of mentoring, I’m referring to the process of establishing and communicating the correlation between the activity within the mentoring programme and the wider business performance against stated objectives.
Four Pillars of Mentoring Measurement
Usage remains the foundation, but it's just that: the starting point. We're tracking completion rates (73% of mentoring relationships in Q1 reached their six-month milestone), meeting frequency (average 2.3 sessions per month), and platform engagement (mentees logging goals and updates within 48 hours of sessions). These numbers tell us whether people are showing up, which matters because you can't improve what isn't happening. But showing up isn't the same as making progress.
Note: These stats are tremendously valuable when it comes to course-correcting the mentoring programme itself, especially during the early launch phases.
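To make the usage pillar concrete, here is a minimal sketch of how these figures could be computed from exported session data. The field names (months_active, sessions_per_month, hours_to_log_update) are illustrative assumptions for the example, not Tandemo's actual schema.

```python
# A minimal sketch of the usage metrics, assuming relationships are exported
# as dicts with hypothetical fields (not Tandemo's actual schema):
#   months_active       - how long the relationship has run
#   sessions_per_month  - average meeting frequency
#   hours_to_log_update - time from a session to the mentee logging an update
from statistics import mean

relationships = [
    {"months_active": 7, "sessions_per_month": 2.5, "hours_to_log_update": 30},
    {"months_active": 6, "sessions_per_month": 2.0, "hours_to_log_update": 52},
    {"months_active": 4, "sessions_per_month": 2.4, "hours_to_log_update": 20},
]

total = len(relationships)
completion_rate = 100 * sum(r["months_active"] >= 6 for r in relationships) / total
avg_frequency = mean(r["sessions_per_month"] for r in relationships)
prompt_logging = 100 * sum(r["hours_to_log_update"] <= 48 for r in relationships) / total

print(f"Reached six-month milestone: {completion_rate:.0f}%")  # 67%
print(f"Average sessions per month:  {avg_frequency:.1f}")     # 2.3
print(f"Updates logged within 48h:   {prompt_logging:.0f}%")   # 67%
```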
Alignment tracking examines how well mentoring goals connect to business and functional objectives. This means being able to report that "43% of Q2 mentoring goals aligned with our strategic objective of improving customer retention" or "67% of sales team mentoring relationships included goals that directly support our target of increasing revenue by 30% year-on-year." Or whatever is true in your context.
When someone sets a goal to "improve communication skills," we're asking: improve them for what purpose? Is this about presenting to senior stakeholders to support the leadership development initiative? Is it about client-facing communication to drive the customer satisfaction goals? Or is it about team collaboration to support the efficiency targets? Each connects to different organisational priorities, and tracking these connections lets us see whether mentoring energy flows toward outcomes that matter to the business.
Note: Here, we are forcing people within the programme to be quite prescriptive and precise in the language of their goals. This is part clear communication, part tech solution: the platform lets mentees select aligned language to shape their goal.
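For illustration, here is a rough sketch of how an alignment report like the one above might be produced. The goal export format, the "objective" field, and the alignment_report helper are assumptions made for the example, not how Tandemo actually stores goals.

```python
# A rough sketch of alignment reporting, assuming each goal is exported with
# hypothetical "quarter" and "objective" fields; "objective" holds the
# strategic objective the mentee selected, or None if the goal is unaligned.
# (An extra "team" field would let you slice the same report by function.)
from collections import Counter

goals = [
    {"quarter": "Q2", "objective": "customer_retention"},
    {"quarter": "Q2", "objective": "revenue_growth"},
    {"quarter": "Q2", "objective": "customer_retention"},
    {"quarter": "Q2", "objective": None},
]

def alignment_report(goals, quarter):
    """Percentage of a quarter's goals aligned to each strategic objective."""
    in_quarter = [g for g in goals if g["quarter"] == quarter]
    counts = Counter(g["objective"] for g in in_quarter if g["objective"])
    return {obj: round(100 * n / len(in_quarter), 1) for obj, n in counts.items()}

print(alignment_report(goals, "Q2"))
# {'customer_retention': 50.0, 'revenue_growth': 25.0}
```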
Observable change brings line managers into the conversation with specific, measurable expectations. Instead of hoping behavioural improvements will somehow permeate the organisation, we're asking managers to identify what success looks like: "Sarah will lead the weekly team standup by month three" or "James will take ownership of client escalations without requiring supervisor approval within six weeks."
These observations create accountability loops that extend beyond the mentoring relationship. When we can report that "line managers have observed 78% of stated behavioural goals being met," we're demonstrating real behaviour change that mentoring contributed to.
Business impact correlation connects individual mentoring goals to organisational KPIs and behaviours through data that we can track. We're looking for patterns: mentored employees in the sales function showed 23% higher goal attainment compared to non-mentored peers. Teams with active mentoring relationships had 15% lower turnover rates. Customer satisfaction scores increased by 12% in departments with formal mentoring programmes compared to those without.
This longer-term view helps demonstrate contribution rather than just correlation. When you can show that people who received mentoring on "stakeholder management" subsequently received 18% higher performance ratings in that competency area, you're building a solid business case for the impact of mentoring.
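As a sketch only: a cohort comparison of this kind can be run over a basic HR export, assuming hypothetical fields for mentoring participation, goal attainment, and attrition. In keeping with the caveat above, this surfaces correlation and contribution patterns; it does not prove causation.

```python
# An illustrative cohort comparison, assuming an HR export with hypothetical
# fields: "mentored" (bool), "goal_attainment" (0..1) and "left_within_year"
# (bool). This surfaces correlation patterns; it does not prove causation.
from statistics import mean

employees = [
    {"mentored": True,  "goal_attainment": 0.82, "left_within_year": False},
    {"mentored": True,  "goal_attainment": 0.74, "left_within_year": False},
    {"mentored": False, "goal_attainment": 0.61, "left_within_year": True},
    {"mentored": False, "goal_attainment": 0.68, "left_within_year": False},
]

def cohort_summary(rows, mentored):
    """Average goal attainment and turnover rate for one cohort."""
    cohort = [r for r in rows if r["mentored"] == mentored]
    return {
        "avg_goal_attainment": round(mean(r["goal_attainment"] for r in cohort), 2),
        "turnover_pct": round(100 * sum(r["left_within_year"] for r in cohort) / len(cohort), 1),
    }

print("Mentored:    ", cohort_summary(employees, True))
print("Non-mentored:", cohort_summary(employees, False))
```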
The Rating Game Trap
We deliberately sidestepped the temptation to measure mentoring success through ratings and reviews of relationships or mentors. A 9/10 satisfaction score doesn't tell us whether someone's performance improved or whether they're more likely to stay with the company. These metrics serve as useful development feedback for mentors and selection guidance for mentees, but they don't answer the fundamental question of business value.
A perfectly rated mentoring relationship that produces no measurable performance change isn't successful.
Making Mentoring Accessible
The measurement challenge in mentoring isn't just technical; it's often political. Until we can demonstrate clear value to organisations, mentoring risks remaining the preserve of senior leadership teams, much like coaching has become in many companies.
I think the key to making internal mentoring programmes commonplace is treating them like any other L&D-led intervention and holding ourselves to the highest possible standards when measuring and reporting impact. That means not relying on NPS, satisfaction, or engagement surveys. Instead, it behoves us all to provide meaningful evidence of impact on organisational performance.
What's your experience with measuring mentoring programmes? Are we focusing on the right things, or are there other indicators of mentoring success that deserve attention?