Understanding AI's Impact on the Labor Market

Yassin | May 23, 2025

I recently listened to a three-hour podcast interview with Michiel Bakker, an Artificial Intelligence (AI) safety researcher and MIT professor who previously worked at Google DeepMind. His message was thought-provoking: AI’s influence on the labor market extends far beyond what many currently perceive, and we are accelerating towards truly profound shifts in job roles and availability. His conviction that AI will reshape our job landscape caught my attention, as the prevailing narrative often suggests AI will primarily work collaboratively with humans, enhancing their capabilities without fundamental disruption.

During my master’s degree in AI, I’ve had the opportunity to dive deeper into the architecture behind Generative AI models like ChatGPT and Stable Diffusion. From the early, somewhat clunky versions of GPT-3.5 in 2022 to the much more advanced models of today, it’s clear how quickly this field is moving. Yet, despite these leaps, the long-term implications for human work remain a complex and hotly debated subject.

In this blog post, my goal is to explore the multifaceted and potentially profound impact of Generative AI on the labor market. I’ll be drawing from both public discourse and scientific findings, and rather than settling on a single, definitive outlook, I aim to present various viewpoints and empirical evidence, highlighting the inherent uncertainties that define this transition.

AI making AI better

One of the most recent innovations discussed in the podcast was a successful attempt at “AI making AI better.” Google DeepMind’s AlphaEvolve can autonomously write and improve algorithms. The system handles everything from trivial problems, such as packing shapes into other shapes as efficiently as possible (how exciting…), to truly useful innovations like improving matrix multiplication, 4 × 4 matrix multiplication to be exact. Though the actual improvement is very small, it can have an enormous impact, as AI systems are built on matrix multiplications. For more on AlphaEvolve’s discoveries, check out this video from the YouTube channel Stand-up Maths; it’s a fun explanation of a bunch of the math results the model found.
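
To get a feel for what “improving matrix multiplication” means, here is the classic example of the same game AlphaEvolve plays: Strassen’s 1969 trick, which multiplies two 2 × 2 matrices with 7 scalar multiplications instead of the naive 8. This is a minimal sketch for illustration, not AlphaEvolve’s 4 × 4 discovery:

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 scalar multiplications
    instead of the naive 8 (Strassen, 1969). AlphaEvolve plays the
    same game at 4x4: find a recombination that saves multiplies."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]

    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)

    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4, m1 - m2 + m3 + m6]])

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
assert np.allclose(strassen_2x2(A, B), A @ B)
```

Applied recursively to large block matrices, one saved multiplication per level compounds into real savings, which is why even a tiny 4 × 4 improvement matters for systems built on matrix multiplication.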

Michiel Bakker argues that AlphaEvolve’s discoveries are significantly accelerating AI’s development, paving the way for Artificial General Intelligence (AGI): an AI that can autonomously perform any task a human being could complete over the course of a day. My take, though, is that the advancements from these self-improving algorithms have so far been trivial at best, certainly not groundbreaking. On the flip side, consider AlphaFold, a truly groundbreaking application led by his former boss, Demis Hassabis. That system has delivered genuinely transformative advances in protein structure prediction.

Yet, it is from this accelerating pace of AI’s own development that Bakker draws his bold timeline for AGI. Bakker speculates that AGI could be achieved within just three to four years due to the advancements in reasoning models: AI that “thinks” before providing an answer. He points out that the current phase, where human researchers collaborate with AI, is likely transitional. There’s an inherent economic pressure to reduce the ‘human in the loop’ for increased speed and efficiency, which naturally raises concerns about diminishing human oversight in AI’s self-improvement.
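
As a rough sketch of what “thinking before answering” means, here is the idea reduced to a two-pass prompting pattern. The `llm` function below is a hypothetical stand-in for whatever chat-completion call you have available; real reasoning models learn this behavior during training rather than through prompt engineering:

```python
def llm(prompt: str) -> str:
    # Hypothetical placeholder: wire this up to an actual model API.
    return f"<model output for: {prompt[:40]}...>"

def answer_directly(question: str) -> str:
    # One pass: the model must commit to an answer immediately.
    return llm(f"Question: {question}\nAnswer:")

def answer_with_reasoning(question: str) -> str:
    # Pass 1: generate an explicit reasoning trace ("thinking").
    thoughts = llm(f"Question: {question}\nThink step by step:")
    # Pass 2: answer conditioned on the trace, trading extra
    # inference-time compute for a more reliable final answer.
    return llm(f"Question: {question}\nReasoning: {thoughts}\nFinal answer:")

print(answer_with_reasoning("Is AGI three to four years away?"))
```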

However, not all leading AI scientists concur with this accelerated timeline for AGI or the primary path to human-level intelligence. Professor Yann LeCun, Vice President and Chief AI Scientist at Meta, offers a more critical perspective. In his recent lecture at the NUS AI institute, he argues that current Large Language Models (LLMs) are largely “regurgitation machines” trained to predict the next token, lacking true understanding of the physical world, common sense, reasoning, and planning abilities. LeCun famously critiques the “religion of scaling”, the belief that merely training larger LLMs on more data will lead to human-level intelligence, calling it a “distraction” from achieving true AI progress.
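
LeCun’s “regurgitation machine” quip targets the autoregressive loop at the heart of every LLM: score which token comes next, emit it, repeat. A toy version makes the mechanism concrete; the random transition table below stands in for a neural network over a vocabulary of tens of thousands of tokens:

```python
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat", "."]
# Made-up scores for "which token follows which"; in a real LLM a
# neural network computes these logits from the whole context.
logits_table = np.random.default_rng(0).normal(size=(len(vocab), len(vocab)))

def next_token_distribution(last_token_id: int) -> np.ndarray:
    logits = logits_table[last_token_id]
    exp = np.exp(logits - logits.max())  # softmax over the vocabulary
    return exp / exp.sum()

def generate(start_id: int, n_tokens: int) -> str:
    out, tok = [vocab[start_id]], start_id
    for _ in range(n_tokens):
        tok = int(np.argmax(next_token_distribution(tok)))  # greedy pick
        out.append(vocab[tok])
    return " ".join(out)

print(generate(0, 5))
```

Nothing in this loop represents the world the tokens describe, which is exactly the gap LeCun is pointing at.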

"AI is just a very good regurgitation machine" (source)
LeCun proposes a different path for artificial intelligence, preferring Advanced Machine Intelligence (AMI), not AGI, as the destination, since human intelligence is itself specialized rather than truly general. He advocates for systems built around "world models" that learn predictive models from observation and interaction, particularly from high-bandwidth sensory data like video, similar to how infants learn about the physical world. He believes this approach will genuinely unlock true reasoning and versatile robotic capabilities, which current LLMs are still far from achieving. These approaches are necessary, he argues, because LLMs’ major flaw is their inherent lack of long-term coherence. LeCun thinks this flaw makes LLMs incapable of true reasoning; Bakker is more optimistic, calling it an area of genuine uncertainty where a lot of promising development is underway.
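
To make the “world model” idea concrete, here is a deliberately tiny sketch: learn a predictive model f(state, action) → next state from observed transitions, then use it to evaluate actions before taking them. The linear model and the one-dimensional toy physics are my own illustration of the principle, not LeCun’s actual architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

def true_dynamics(state, action):
    # Toy physics the agent can only observe, never read directly.
    pos, vel = state
    return np.array([pos + 0.1 * vel, vel + 0.1 * action])

# Collect (state, action, next_state) transitions by interacting.
X, Y = [], []
for _ in range(1000):
    s, a = rng.normal(size=2), rng.normal()
    X.append([*s, a])
    Y.append(true_dynamics(s, a))
X, Y = np.array(X), np.array(Y)

# Fit the predictive model (plain least squares here; a deep
# network trained on sensory data in LeCun's proposals).
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Planning: score candidate actions by predicted outcome,
# without having to try them in the real world first.
s = np.array([0.0, 1.0])
for a in (-1.0, 0.0, 1.0):
    pred = np.array([*s, a]) @ W
    print(f"action {a:+.0f} -> predicted next state {np.round(pred, 2)}")
```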

Humans in the loop

As AI takes over increasingly complex cognitive tasks, there’s a concern that humans could become thin clients: individuals increasingly dependent on AI for their intellectual capabilities, potentially losing some fundamental skills themselves. This doesn’t necessarily mean humans are completely out of the loop, but their role might shift towards supervising or managing AI agents rather than performing the core task directly. This shift, driven by economic pressures for efficiency, could reduce the number of human roles overall, even if each remaining human is “augmented.” Whether this extreme scenario fully materializes for high-skilled work, as Bakker suggests, remains a subject of intense debate and depends on many complex factors. I am convinced that significant changes are possible, but also that humans may adapt in unforeseen ways.

Traditionally, workers held power by being able to cease work and inflict economic damage. However, if AGI could fully automate companies, this fundamental dynamic might indeed shift. This raises critical questions about the future role of government, education, and social welfare in a world where human labor’s traditional economic leverage might change.

The data don’t lie: AI impacts employment

Forget the simple “AI takes jobs” or “AI creates jobs” headlines. The truth about AI’s influence on the labor market is far more nuanced, hitting different skill sets in remarkably diverse ways. But what does that really mean? A compelling study by David Marguerit (2025) delves into two distinct types of AI impact: automation AI (technology taking over tasks that human labor used to perform) and augmentation AI (technology that boosts what humans can achieve). What’s fascinating is how profoundly differently these two types of AI affect various parts of the workforce.

Source: Marguerit (2025)

Some occupations are highly exposed to automation AI (real estate brokers, proofreaders, and foreign language teachers). For these highly exposed jobs, wages may decrease as AI automates tasks within their roles. While the study doesn’t observe a significant overall effect of automation AI exposure on employment, it acknowledges that automation can lead to a displacement effect where labor is replaced. For low-skilled occupations specifically, automation AI does have a negative impact on the emergence of new work, employment, and wages. Some job groups, like dancers and reinforcing iron workers, are less directly affected by automation AI, as their tasks rely on physical abilities, something that still poses a challenge for this type of AI.

Source: Marguerit (2025)

Augmentation AI, by contrast, fosters the emergence of new work and raises wages for high-skilled occupations: for jobs with high augmentation scores, new tasks and roles are likely to be created, and wages are expected to increase.
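
To make the automation/augmentation distinction concrete, here is a toy sketch of how an occupation-level exposure score could be assembled from task-level scores. The numbers and the weighted-average aggregation are entirely made up for illustration; this is not how Marguerit (2025) constructs the actual measures:

```python
# Each task: (share of the job, automation score, augmentation score).
# All values below are invented for illustration only.
broker_tasks = {
    "draft listing descriptions": (0.3, 0.9, 0.4),
    "negotiate with clients": (0.4, 0.2, 0.6),
    "schedule viewings": (0.3, 0.8, 0.3),
}

def exposure(task_table: dict) -> tuple[float, float]:
    """Weighted average of task-level scores over the whole job."""
    automation = sum(w * auto for w, auto, _ in task_table.values())
    augmentation = sum(w * aug for w, _, aug in task_table.values())
    return round(automation, 2), round(augmentation, 2)

print("real estate broker (automation, augmentation):", exposure(broker_tasks))
# -> (0.59, 0.45): exposed on both dimensions at once.
```

An occupation can score high on both dimensions simultaneously, which is exactly why the binary “AI takes jobs vs. AI creates jobs” framing falls short.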

Conclusion

The perspectives of both Professor Bakker and Professor LeCun, alongside the empirical findings from Marguerit’s research, paint an undeniably transformative picture of AI’s impact. It’s clear that AI is no longer merely a tool for gradual improvement.

Empirical evidence suggests automation AI poses a significant threat to low-skilled jobs and wages, while augmentation AI enhances high-skilled roles and increases their wages. While there are concerns about the possibility of declining “humans in the loop” due to economic pressures, other perspectives, like Professor Yann LeCun’s, lean towards amplifying human intelligence and opening doors to new, “properly human” tasks involving creativity and complex interaction. What this interaction might entail, I’m not sure; it depends entirely on how quickly the capabilities of this new technology advance. However, I’m more inclined to side with Yann LeCun’s view that some groundbreaking innovations are still needed before AGI or AMI becomes truly viable. Anecdotally, I find it quite difficult to let AI improve my efficiency on tasks that last more than an hour. The current lack of long-term coherence is a major issue that could plague these systems’ performance for many years to come. Bakker points to a lot of exciting innovations currently underway at AI labs, but it’s not a given that this problem will be solved within the three to four years he claims.

Both experts emphasize the necessity for education to adapt. Bakker stresses a shift towards “soft skills” like critical thinking and problem-solving, acknowledging that traditional “hard skills” are increasingly susceptible to automation. LeCun advises learning “deep technical knowledge” with a long shelf life, preparing individuals to potentially become “bosses of AI systems” rather than simply programmers of ephemeral applications. It’s interesting that two experts with such different viewpoints on the future of these AI systems hold such similar views on the future of education.

At this point, the question isn’t if AI will transform our world, but how we as a society proactively and thoughtfully choose to engage with this shift. This involves developing robust national AI strategies, protecting critical assets like ASML, building strong governmental AI capabilities for oversight, and engaging in global discussions about wealth redistribution. Ignoring these challenges risks letting the system evolve in ways that don’t align with our collective human values and societal well-being. It’s a bit worrying that the Dutch government doesn’t yet seem to fully grasp these developments, but let’s hope it catches on and takes action in the near future.