Balanced Scrutiny: Why We Must Face Both Sides of the AI Revolution
Yesterday I attended the Talent Gathering 2025. It was a fantastic event, and I plan to write a full reflection on some of the sessions next week. But one conversation in particular stuck with me, and it sent me off to do more reading this morning.
During the event, an interesting discussion emerged about artificial intelligence and its role in our workplaces. When it was suggested we should ignore the headlines about AI’s negative impacts and focus instead on its benefits, several people in the room bristled. Their response wasn’t rooted in technophobia or a desire to resist change, but in the conviction that meaningful progress requires us to grapple with the full picture, not just the parts we find convenient.
This tension reveals something important about how we consume and evaluate information, particularly when it comes to transformative technologies. The suggestion to selectively ignore certain narratives whilst amplifying others isn’t just poor critical thinking; it’s a dangerous abdication of responsibility. When we choose to engage with emerging technologies, we must scrutinise not only the technology itself but also the sources telling us what to think about it, the biases they carry, and the economic interests that shape their narratives.
The case for AI’s benefits is well documented and, in many respects, compelling. Research from PwC’s 2025 Global AI Jobs Barometer, presented at The Talent Gathering, demonstrates that revenue growth in industries most exposed to AI has nearly quadrupled since ChatGPT’s launch in 2022, suggesting the technology’s productivity gains are translating into measurable business value (PwC, 2025). Their analysis of job advertisements revealed that workers with AI skills command a 43% wage premium over their peers in similar roles without such capabilities, a significant jump from the 25% premium observed just the previous year (PwC, 2025). These aren’t trivial findings and indicate that, for those able to work alongside AI systems, the technology can indeed increase both individual value and organisational performance.
Yet to stop there would be to tell half the story, and it’s the other half that many in positions of influence seem eager to downplay or dismiss entirely. The environmental toll of AI’s explosive growth is staggering and accelerating.
Google reported a 48% increase in greenhouse gas emissions since 2019, directly attributing this surge to data centre energy consumption required for AI systems (NPR, 2024). Microsoft’s emissions grew by 29% between 2020 and 2023, with the company explicitly noting these increases stemmed from constructing data centres “designed and optimised to support AI workloads” (NPR, 2024). Goldman Sachs Research forecasts that roughly 60% of increased electricity demand from data centres through 2030 will be met by burning fossil fuels, adding approximately 220 million tons of carbon emissions globally (MIT News, 2025).
To put this in perspective, a single query to ChatGPT uses roughly as much electricity as a light bulb left on for twenty minutes, and when you multiply that by millions of queries a day, the aggregate carbon output becomes substantial (NPR, 2024).
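For readers who want to sanity-check that comparison, here is a minimal back-of-envelope sketch. The twenty-minute light-bulb figure comes from the NPR report cited above; the 10 W bulb wattage and the 10 million daily queries are illustrative assumptions of mine, not cited values:

```python
# Back-of-envelope check of the light-bulb comparison above.
# Assumed (not from NPR): a 10 W LED bulb and 10 million queries
# per day -- both are illustrative placeholders, not cited figures.
bulb_watts = 10            # assumed bulb wattage
minutes_on = 20            # per-query equivalent, per the NPR comparison
wh_per_query = bulb_watts * minutes_on / 60   # ~3.3 Wh per query

queries_per_day = 10_000_000                  # assumed daily volume
mwh_per_day = wh_per_query * queries_per_day / 1_000_000  # ~33.3 MWh/day
```

Even under these conservative assumptions, that is tens of megawatt-hours every day from a single service, which is why the data-centre emissions figures above are climbing so quickly.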
We cannot pretend these costs don’t exist simply because acknowledging them is uncomfortable or inconvenient for our adoption plans.
The impact on employment is similarly complex and deserves honest examination. The World Economic Forum projects that whilst 170 million new jobs will emerge by 2030, some 92 million will be displaced in the same period (World Economic Forum, 2025). These aren’t one-to-one swaps happening in the same locations with the same individuals. Goldman Sachs Research estimates that 6-7% of the US workforce faces potential displacement if current AI capabilities expand across the economy, with certain occupations including computer programmers, customer service representatives, and administrative assistants at particularly high risk (Goldman Sachs, 2025). Perhaps most concerning is that 77% of emerging AI-related roles require master’s degrees, creating a substantial skills gap that will exclude vast numbers of displaced workers from the very opportunities meant to replace their lost positions (Nartey, 2025).
I find the way we discuss job displacement slightly concerning. When we highlight that AI creates higher-paying jobs, we rarely follow through with the uncomfortable questions:
Higher-paying for whom?
What happens to the workers whose technical, forward-looking roles are displaced by systems that can complete their tasks faster and cheaper?
If we can disrupt jobs that have existed for twenty or thirty years, how long will the replacement jobs last before the next wave of capability renders them obsolete as well?
History offers little comfort here. Technological advancement has consistently generated additional economic value, but that value has rarely flowed proportionally to the workers creating it. Instead, the gains typically concentrate among those who were already positioned to capture them, widening rather than narrowing existing inequalities. There’s no reason to believe AI will break this pattern unless we deliberately design systems and policies to ensure it does.
The economic sustainability of the AI industry itself warrants scrutiny. OpenAI, the market leader in generative AI, lost approximately $5 billion in 2024 on revenue of $3.7 billion (The Information, 2024). This means the company spent roughly $2.35 to generate every dollar of revenue, a burn rate that projects to a loss of $14.4 billion in 2025 if spending patterns hold (LessWrong, 2025). Yet despite these staggering losses, OpenAI’s latest funding round valued the company at $300 billion, roughly 75 times its annual revenue (Where’s Your Ed At, 2025). This valuation model bears uncomfortable similarities to the dot-com bubble, where excessive value was attributed to companies that generated little actual revenue or profit. When a market leader operates at this scale of loss whilst commanding such extraordinary valuations, we must ask ourselves what’s propping these companies up and how long they’ll continue to do so. If OpenAI were to collapse tomorrow, how many of the fancy new features in our learning management systems and authoring tools would suddenly stop working?
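The burn-rate claim above is simple arithmetic on the two cited figures, and a quick sketch makes it easy to verify. The revenue and loss values come from the sources already cited; everything else here is just the calculation:

```python
# Quick check of the burn-rate arithmetic from the cited figures.
revenue_bn = 3.7           # 2024 revenue, $bn (The Information, 2024)
loss_bn = 5.0              # 2024 loss, $bn

spend_bn = revenue_bn + loss_bn            # total outgoings implied: $8.7bn
spend_per_dollar = spend_bn / revenue_bn   # ~$2.35 spent per $1 earned
```

Put another way: for every pound of revenue, the market leader is spending well over two, and the projected 2025 loss suggests the gap is widening rather than closing.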
This isn’t an anti-AI position. The potential benefits of artificial intelligence are real and, in many applications, transformative. The issue is that happy-clappy, blind adoption of new technology has never worked well, and we’re seeing warning signs that should give us pause. To adopt AI responsibly means acknowledging both its capabilities and its costs, understanding that the environmental damage, job displacement, and economic instability it creates are just as real as the productivity gains and new opportunities it generates.
We must read deeply, investigate thoroughly, and remain aware of both sides of any development we’re considering. The suggestion to ignore certain headlines because they’re inconvenient or challenging is precisely the wrong approach. Instead, we should scrutinise every source, question every claim, and recognise that the companies and consultancies selling AI solutions have vested interests in emphasising benefits whilst minimising harms. Their perspectives are valuable but incomplete.
What does responsible engagement look like?
To me, it means being honest about trade-offs. It means acknowledging that when we deploy AI systems, we’re making a choice to accept their environmental costs. It means planning for the displacement that will occur, not just celebrating the jobs that might be created. It means questioning whether the economic models underpinning the AI industry are sustainable or whether we’re building on foundations that will crumble. It means recognising that the workers creating additional value through AI may not see proportional increases in their compensation unless we actively work to ensure they do.
The path forward requires us to be open, honest, and clear about the promise and the peril. AI can make work more efficient, unlock new forms of creativity, and solve problems that were previously intractable. It can also contribute to environmental destruction, displace millions of workers, and concentrate wealth and power in ways that exacerbate existing inequalities. Both of these things are true simultaneously, and our responsibility is to grapple with that complexity rather than retreating into comfortable narratives that tell us only what we want to hear.
My question isn’t whether to adopt AI but how to do so in ways that maximise benefit whilst minimising harm, and how to ensure the gains are distributed more equitably than technological revolutions of the past.
References
Goldman Sachs (2025). How will AI affect the global workforce? Available at: https://www.goldmansachs.com/insights/articles/how-will-ai-affect-the-global-workforce
LessWrong (2025). OpenAI lost $5 billion in 2024 (and its losses are increasing). Available at: https://www.lesswrong.com/posts/CCQsQnCMWhJcCFY9x/openai-lost-usd5-billion-in-2024-and-its-losses-are
MIT News (2025). Responding to the climate impact of generative AI. Available at: https://news.mit.edu/2025/responding-to-generative-ai-climate-impact-0930
Nartey, J. (2025). AI job displacement analysis (2025-2030). Available at: https://ssrn.com/abstract=5316265
NPR (2024). AI brings soaring emissions for Google and Microsoft, a major contributor to climate change. Available at: https://www.npr.org/2024/07/12/g-s1-9545/ai-brings-soaring-emissions-for-google-and-microsoft-a-major-contributor-to-climate-change
PwC (2025). The Fearless Future: 2025 Global AI Jobs Barometer. Available at: https://www.pwc.com/gx/en/issues/artificial-intelligence/ai-jobs-barometer.html
The Information (2024). OpenAI sees roughly $5 billion loss this year on $3.7 billion in revenue. Available at: https://www.theinformation.com
Where’s Your Ed At (2025). OpenAI is a systemic risk to the tech industry. Available at: https://www.wheresyoured.at/openai-is-a-systemic-risk-to-the-tech-industry-2/
World Economic Forum (2025). Is AI closing the door on entry-level job opportunities? Available at: https://www.weforum.org/stories/2025/04/ai-jobs-international-workers-day/