AI Chatbots at Work: Knowledge Management, Training, or Both?
I was part of a conversation last week about AI-enabled platforms that can search across an organisation’s documents, systems, and internal knowledge bases and come back with high-context answers to specific workplace questions. A chatbot on steroids, if you like. The central question was whether this constitutes training or whether it’s the logical extension of knowledge management as a discipline.
I don’t think there’s a perfectly clean line between the two, but I lean towards knowledge management. Having a question in the moment, not knowing how to do something, needing to look up a piece of information to support your performance: that is, to my mind, pretty much the definition of what knowledge management has always tried to do. Whether the knowledge base is AI-enabled or simply well-structured and meta-tagged for a search engine to navigate, the function is the same; the AI just drafts a response rather than taking you to an article. Useful? Enormously. Something every organisation should be considering as the future of its approach? Without question. Lots of companies are already working on this, and while I’ve yet to see a brilliant implementation, I don’t think we’re far off.
The obvious risk is inaccuracy, and organisations have a responsibility to create guardrails around when AI tools are and aren’t appropriate. If someone working in retail needs to know whether a product contains nuts, the AI should not be involved. There is no world in which I’d feel comfortable saying an LLM can be trusted to answer that question. It is the prime example of when a completely non-dynamic piece of content is required; there is no context to be applied, there is a yes-or-no answer, and any risk introduced by an AI sitting between the person and the answer is unacceptable because of the potential harm. If, on the other hand, someone is looking up how to process a refund to an American Express card, we’re in much safer territory. Nobody is going to die even if they get it completely wrong, and the systems probably have sufficient protections to prevent anything disastrous.
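To make the guardrail idea concrete, here’s a minimal sketch of the routing logic I have in mind. Everything in it is hypothetical (the topic names, the function, the channel labels are mine, not any real product’s API): safety-critical questions bypass the LLM entirely and go straight to verified static content, while routine procedural questions can be handled by the AI assistant.

```python
# Illustrative guardrail: decide which channel answers a workplace question.
# Safety-critical topics (e.g. allergens) must never be mediated by an LLM;
# they get the verified static source. Routine procedures may use the AI.

SAFETY_CRITICAL_TOPICS = {"allergens", "medication", "electrical_safety"}

def route_query(topic: str) -> str:
    """Return the channel that should answer a question on this topic."""
    if topic in SAFETY_CRITICAL_TOPICS:
        # Yes-or-no, zero-context questions: serve the verified content directly.
        return "static_verified_content"
    # Lower-stakes procedural questions: an AI-drafted answer is acceptable.
    return "ai_assistant"

print(route_query("allergens"))     # static_verified_content
print(route_query("amex_refund"))   # ai_assistant
```

The point isn’t the three lines of logic; it’s that the decision about when AI is appropriate is made deliberately, in one place, rather than left to whatever the model happens to generate.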
Where the conversation got more interesting was in the question of conversational AI in training. I don’t think it’ll surprise anyone that I’m not excited about using this technology to mass-produce videos explaining the same old content, personalised or otherwise. What I do find interesting is the opportunity for practice: not just practising with a bot and feeling like you’ve achieved something, but practising within a system that takes your inputs, stacks them against a rubric or scoring matrix, and gives meaningful feedback and suggestions for improvement.
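A toy sketch of what “stacking inputs against a rubric” could look like in practice. The rubric criteria, keyword matching, and function names here are all stand-ins of my own invention; a real system would use something far more sophisticated than keyword spotting, but the shape is the same: score the learner’s response against explicit criteria, then turn the gaps into feedback.

```python
# Hypothetical rubric-based feedback for a customer-service practice scenario.
# Each criterion maps to keywords that (crudely) signal it was addressed.

RUBRIC = {
    "acknowledged_customer": ["sorry", "understand", "apologise"],
    "offered_solution": ["refund", "replace", "exchange"],
    "confirmed_next_steps": ["confirm", "follow up", "email"],
}

def score_response(response: str) -> dict:
    """Mark each rubric criterion as met or not met."""
    text = response.lower()
    return {criterion: any(k in text for k in keywords)
            for criterion, keywords in RUBRIC.items()}

def feedback(results: dict) -> list:
    """Turn unmet criteria into suggestions for improvement."""
    return [f"Consider addressing: {c.replace('_', ' ')}"
            for c, met in results.items() if not met]

scores = score_response("I'm sorry to hear that - we can refund the order.")
print(feedback(scores))  # ['Consider addressing: confirmed next steps']
```

Swap the keyword matching for an LLM judging against the same rubric and you have the conversational-practice tool described above: imperfect, but scalable.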
Will this be as useful as a line manager observing real work and providing coaching? No. But that approach has always suffered from a scaling problem. Most line managers can’t do it as often as they’d like, and many organisations make it nearly impossible, not because they explicitly prevent it but because they weigh everyone down with so many competing priorities. This is exactly the kind of situation where an imperfect but scalable approach is a good solution.