There's a curious thing happening across our industry. Every conference presentation about user-generated content follows the same script: impressive slides showing thousands of pieces of content created, hundreds of active contributors, and dramatic reductions in L&D workload. The presenter beams with pride as they reveal that employees have generated 2,847 learning assets in the past year, up 340% from the previous period.
What they don't mention is whether any of this content actually helps people do their jobs better.
I've sat through enough of these presentations to recognise the pattern. The metrics are always about volume: pieces created, contributors engaged, SME time saved. It's understandable why these numbers appeal to us. They suggest both increased engagement with learning and improved efficiency for stretched L&D teams. The problem is that we're measuring the wrong thing entirely.
Note: I acknowledge that these quantity-based measures have some value. But they are not the primary benefit, and they shouldn't be the headline metrics we reach for when discussing user-generated content.
This obsession with quantity over quality is dangerous. When we celebrate volume without considering value, we incentivise the creation of content that feels productive but delivers little genuine performance improvement. Worse, we risk validating the sceptics who argue that user-generated content inevitably leads to misinformation and poor practice spreading through organisations.
The irony is that UGC, when implemented thoughtfully, offers something L&D teams can never provide at scale: deeply contextualised training that addresses the specific challenges people face in their work environments. A frontline manager sharing how they handle difficult customer situations will always be more relevant to their peers than a generic customer service module.
Quality at Scale: Three Essential Measures
The challenge lies in assessing quality without creating bureaucratic bottlenecks that kill the spontaneity that makes UGC valuable. Here's how to approach it systematically.
First, implement risk-based review processes.
Not all user-generated content carries equal risk. A safety procedure video requires different scrutiny than a time management tip. Platform configuration should automatically flag high-risk categories, compliance-related content, or anything touching regulated processes for immediate expert review. Everything else can enter a sampling process where you spot-check a percentage of submissions, using the time you save from not manually reviewing cat videos about productivity tips to focus on content that could cause harm.
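To make that concrete, here's a rough sketch of the routing logic in Python. The category names, the risk list, and the 10% sampling rate are all assumptions for illustration, not settings from any particular platform.

```python
import random

# Hypothetical high-risk categories that always go to an expert reviewer.
# Your own list will depend on your regulatory and safety context.
HIGH_RISK_CATEGORIES = {"safety_procedure", "compliance", "regulated_process"}

SAMPLE_RATE = 0.10  # spot-check 10% of everything else (assumed figure)

def route_for_review(submission: dict) -> str:
    """Decide how a new piece of user-generated content gets reviewed."""
    if submission["category"] in HIGH_RISK_CATEGORIES:
        return "expert_review"   # immediate, mandatory review
    if random.random() < SAMPLE_RATE:
        return "spot_check"      # random sample of low-risk content
    return "publish"             # everything else goes live straight away

# Example: a time management tip usually publishes immediately,
# while a safety procedure video is always held for expert review.
print(route_for_review({"title": "My two-minute inbox routine", "category": "productivity"}))
print(route_for_review({"title": "Lockout-tagout walkthrough", "category": "safety_procedure"}))
```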
Second, track application and outcomes, not just creation.
Quality content gets used repeatedly and generates follow-up questions or adaptations. Platform analytics should show which user-generated content gets referenced most frequently, generates the most discussion, or leads to further content creation. Conversely, content that gets created but never accessed again suggests quality issues worth investigating. This approach reveals what employees actually find valuable rather than what they feel obligated to create.
Note: There's an argument that if a piece of UGC isn't being viewed, used, or commented on, it's probably worth simply removing from the platform. You could even automate this: after a predetermined period (say, a month), if a piece has been viewed by fewer than 10 people, it's automatically removed, or archived if you don't want to delete it altogether.
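Here's a minimal sketch of that automation, assuming your platform exposes a creation date and a view count per item. The field names are invented; the one-month window and 10-view threshold come straight from the note above.

```python
from datetime import datetime, timedelta

REVIEW_WINDOW = timedelta(days=30)   # the "say, a month" from the note
MIN_VIEWS = 10                       # threshold below which content is archived

def should_archive(item: dict, now: datetime) -> bool:
    """Flag content that is old enough to judge and has barely been viewed."""
    age = now - item["created_at"]
    return age >= REVIEW_WINDOW and item["view_count"] < MIN_VIEWS

# Example run against a couple of hypothetical items.
now = datetime(2024, 6, 1)
items = [
    {"title": "Handling an angry caller", "created_at": datetime(2024, 4, 20), "view_count": 142},
    {"title": "Our old expense form", "created_at": datetime(2024, 4, 1), "view_count": 3},
]
for item in items:
    if should_archive(item, now):
        print(f"Archive: {item['title']}")
```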
Third, measure knowledge accuracy through downstream indicators.
Rather than trying to assess every piece of content for technical accuracy, monitor whether teams consuming user-generated content about specific topics show improved performance or increased error rates. If the sales team starts sharing negotiation techniques and deal closure rates improve, that's evidence of quality content. If new compliance content coincides with increased policy violations, you've identified a problem worth looking into.
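To illustrate the idea (and only the idea), here's a deliberately naive before-and-after comparison in Python. The team, the metric, and the numbers are invented, and in practice you'd want to account for seasonality and everything else that moves these figures before crediting or blaming the content.

```python
from statistics import mean

# Hypothetical weekly deal closure rates for a sales team, before and after
# a peer-shared negotiation video started circulating (invented numbers).
before = [0.21, 0.19, 0.22, 0.20]
after = [0.24, 0.26, 0.25, 0.27]

change = mean(after) - mean(before)
print(f"Average closure rate moved by {change:+.1%} after the content appeared")
# A positive shift is a weak signal of quality content; a drop alongside new
# compliance content would be the kind of problem worth investigating.
```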
The goal isn't perfect quality control, which would eliminate the speed and authenticity that make UGC valuable. It's intelligent quality assurance that catches genuinely harmful content while allowing the messy, imperfect, contextual sharing that drives learning.
We need to stop measuring user-generated content like a content factory and start measuring it like a performance improvement initiative. The question isn't whether people are creating more content; it's whether the content they're creating helps their colleagues perform better.