AI Readiness Means More Than Testing Tools
We talk about AI readiness as though it’s a checklist: train your people on prompt engineering, establish governance frameworks, identify use cases, measure adoption rates. Tick the boxes, roll out the technology, job done. The problem with this approach is that it treats AI implementation as a technical challenge when the most consequential impacts are thoroughly human.
When organisations shifted to remote work during the pandemic, we made a similar mistake; we focused on the mechanics of video calls and virtual collaboration platforms whilst largely ignoring what happened to trust, communication quality, and people’s sense of connection to their colleagues and their work. We’re now discovering that those overlooked human costs were substantial, and we risk compounding them by treating AI adoption with the same mechanistic mindset.
Note: I am very aware that this is not universally true and that some people did a phenomenal job. However, my observation, based on most of the organisations I have worked with pre- and post-pandemic, is that things were not handled ideally.
Entirely understandably, I should add, given the circumstances, the lack of time to plan, and, let’s face it, the generally heightened state of fear we were all living in at the time. The fact that we made anything work at all is nothing short of a small miracle, but that does not mean we should look back on it through rose-tinted spectacles.
Instead, we should continue to hold ourselves to the highest possible standards, even when we face incredible odds.
Recent research makes this painfully clear. A global study of over 48,000 people across 47 countries found that whilst 66% of people use AI with some regularity, only 46% are willing to trust it, and when compared with data collected before ChatGPT’s release, people have become less trusting and more worried about AI as adoption has increased (Gillespie and Lockey, 2025). More concerning still, only 47% of employees report receiving any AI training, and only 40% say their workplace has guidance on generative AI use (Gillespie and Lockey, 2025). We’re essentially asking people to navigate a technology they neither trust nor understand, often without proper support, and we’re surprised when that creates problems.
The impact on workplace communication deserves specific attention. A study of 1,100 professionals examining AI-assisted workplace writing found that whilst AI tools make managers’ emails more professional, regular use undermines trust between managers and employees (Cardon and Coman, 2025). When supervisors relied heavily on AI for messages requiring empathy or motivation, only 40–52% of employees viewed them as sincere, compared with 83% for minimally assisted messages (Cardon and Coman, 2025). People can detect AI-generated content, and they interpret its use as laziness or a lack of care; when something important needs saying, we expect humans to say it themselves.
Note: this is of particular importance to CEOs and leaders, even line managers, who think it’s a good idea to create an avatar of themselves to handle those annoying comms messages they occasionally have to send out. The reason you’re doing them in the first place is that people want to know you have taken the time to record something for them. Using an avatar fundamentally undermines the very value you bring to that piece of communication. If I could roll up a newspaper, hit you on the nose, and say “Stop!” in a stern voice, I would.
This all matters because workplace performance depends on trust, clear communication, and people feeling they matter to their organisation. If AI adoption degrades these foundations, we’re not becoming more effective; we’re just becoming faster at undermining what makes organisations work. The parallels with remote work are instructive: we gained flexibility and reduced commute times, but we lost informal conversations, spontaneous problem-solving, and the kind of connection that builds strong teams. Those losses weren’t immediately visible in productivity metrics, but they showed up eventually in retention, innovation, and organisational health.
AI readiness, then, cannot be purely technical. It requires acknowledging how people feel about these technologies, creating space for those feelings to be expressed and addressed, providing genuine support rather than perfunctory training, and being honest about trade-offs rather than pretending benefits come without costs. It means recognising that forcing people to use tools they’re uncomfortable with produces compliance without commitment, and that scared or resentful workers will avoid AI where possible, missing opportunities to increase their effectiveness.
Most importantly, it means accepting that today’s job market forces many people to stay in roles they’d rather leave, but that economic circumstances are temporary. When the market improves, organisations that treated AI adoption as a technical exercise whilst ignoring its human impact will discover their best people have left for employers who remembered that technology exists to serve people, not the other way round.
References
Cardon, P.W. and Coman, A.W. (2025) ‘Professionalism and Trustworthiness in AI-Assisted Workplace Writing: The Benefits and Drawbacks of Writing With AI’, International Journal of Business Communication.
Gillespie, N. and Lockey, S. (2025) Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025. The University of Melbourne and KPMG.

