How Vanishing Entry-Level Jobs Will Hurt Organizational Performance
Redesigning entry-level work with AI fluency and human mentorship is how organizations will regenerate skill, culture, and resilience in the age of AI.
Artificial intelligence is coming for the bottom of the career ladder first. Senior executives with reputations and networks may feel safe. New graduates, not so much.
Recent data underscore the problem. In the United States, the overall unemployment rate was 4.3% in August 2025, and for recent college graduates aged 22–27, it was 5.3% in Q2 2025. Worse still, 41% of graduates are underemployed, working in jobs that don’t require a degree. The St. Louis Fed reports that graduate unemployment has climbed from 3.25% in 2019 to nearly 4.6% in 2025. In other words, for many young workers, entry-level opportunities are simply disappearing.
What to do? Harvard Business School professor Amy Edmondson, famous for pioneering the concept of psychological safety in teams, has issued a warning. In Harvard Business Review, she and Tomas Chamorro-Premuzic argue that while AI has made it easier than ever to automate away junior tasks, the real risk comes from how organizations respond. Cutting entry-level jobs may look efficient, but it is dangerously short-sighted. These roles are not inefficiencies to be eliminated but the very infrastructure through which organizations regenerate skill, culture, and tacit knowledge—the invisible glue that leadership alone cannot provide.
Their argument rests on four pillars:
1. Every capable leader starts on the ground floor; without early exposure, managers risk becoming detached and naïve.
2. Innovation often bubbles up from the bottom, where junior staff see inefficiencies veterans overlook.
3. Young hires inject energy and diversity into workplace culture.
4. Work itself is a civic good: it offers structure and purpose, and without it, societies face the risks of alienation and unrest.
Their solution is not to protect outdated tasks but to redesign entry-level work. If AI can draft reports or reconcile accounts, then give juniors higher-value challenges: interpreting the machine’s output, stress-testing assumptions, and developing resilience through ambiguity and failure. They liken it to medicine: residents still endure grueling shifts not because the work cannot be automated, but because the experience itself builds judgment and empathy.
But Conor Grennan, Chief AI Architect at NYU Stern School of Business, isn’t convinced. In a pointed LinkedIn post, he praises Edmondson for identifying the crisis but calls her prescriptions unrealistic for CHROs (Chief Human Resources Officers) tasked with workforce strategy.
His critique has three prongs:
1. Asking juniors to “red-team” AI outputs is redundant when large models can already critique themselves more effectively than most novices (see the sketch after this list).
2. Companies are not civic organizations. Appeals to “protect jobs for society” may be morally compelling, but firms optimize for profit, not public good.
3. There’s a structural problem with the apprenticeship model. In medicine or the trades, apprenticeships make sense because workers are tied to the system. In business, juniors can leave at any time, which means firms often end up training the competition’s future workforce.
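On the first prong, the self-critique pattern Grennan points to is easy to sketch. Below is a minimal illustration, assuming the OpenAI Python SDK (v1+); the model name, prompts, and the draft_and_red_team helper are hypothetical choices made for illustration, not a description of any firm's actual workflow.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_and_red_team(task: str, model: str = "gpt-4o") -> tuple[str, str]:
    """Have the model produce a draft, then red-team its own draft in a second pass."""
    # Pass 1: generate the draft a junior analyst might otherwise write.
    draft = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": f"Draft a short response to: {task}"}],
    ).choices[0].message.content

    # Pass 2: feed the draft back and ask the model to critique it.
    critique = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "user", "content": f"Draft a short response to: {task}"},
            {"role": "assistant", "content": draft},
            {"role": "user", "content": (
                "Red-team your own draft: list factual risks, unstated assumptions, "
                "and the strongest counterargument a skeptical reviewer would raise."
            )},
        ],
    ).choices[0].message.content
    return draft, critique

draft, critique = draft_and_red_team("Should we cut our analyst training program?")
```

Whether this loop actually outperforms a novice reviewer is an empirical question; Grennan's claim is that, for routine outputs, it often does.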
Grennan’s alternative is blunt. As a consultant, he advises companies not to hire for doomed roles where AI can already do more than half the tasks. Raise the bar so entry-level now means “advanced AI-native work” from day one. And shorten the leash: six-month contracts with strict performance metrics that force new hires to prove, within 90 days, that they bring uniquely human value in insight, relationships, and persuasion.
The two perspectives diverge sharply, but they agree on one thing: if companies hollow out entry-level jobs, they will face a leadership vacuum in five years. The short-term savings of automation could leave firms without the next generation of managers and innovators.
For CHROs, the choice is difficult. Edmondson urges reframing entry-level roles as long-term investments in culture and leadership. Grennan urges a ruthless redefinition of roles to hire only those who can add AI-complementary value immediately. Both warn that ignoring the issue is a recipe for crisis.
Toward a Synthesis: Mentorship and AI Fluency
Edmondson is right: cutting entry-level roles severs the very pipelines that build leaders, sustain culture, and anchor society. Grennan is right, too: companies cannot hire for charity, and the pressure to redefine value is real. The challenge is to hold both truths at once.
This is where Matt Beane’s research in The Skill Code adds critical depth. He demonstrates that true expertise rests on three pillars: challenge, complexity, and (human) connection. Novices must be stretched by difficult tasks, exposed to the messy realities of work, and embedded in relationships with experienced mentors. When AI takes over the routine work, novices are often sidelined (whether in surgery, robotics, or banking) and reduced to passive observers rather than active participants. The result is a hollowing out of tacit knowledge, cultural continuity, and the ability to innovate from within. These are the invisible assets that make organizations resilient in crises and distinctive in markets. Without them, firms may look efficient on the surface while becoming hollow at the core.
This is where Grennan’s warning about apprenticeships is misplaced. He argues that apprenticeships in business are risky because juniors can leave, turning a company’s training into a competitor’s gain. But Beane’s counterpoint is that mentorship is necessary: it is the very infrastructure that allows expertise to reproduce at all. Without juniors in the loop, organizations lose their capacity to regenerate skills, culture, and leadership. In this sense, the greater danger is not wasted investment in workers who depart, but systemic collapse when no one remains capable of carrying institutional knowledge forward. AI cannot fill that gap; only structured mentorship can.
The synthesis, then, is clear: entry-level jobs must be redesigned, not wholly eliminated. They should preserve the human stretch of ambiguous, complex tasks while embedding the mentorship that AI cannot replace. At the same time, universities, companies, and young people must embrace a new baseline of AI fluency: learning how to learn, how to prompt, how to use different AI tools, and how to critique their outputs.
But there are at least two catches here: (1) AI-fluent juniors lacking depth, and (2) seniors lacking AI fluency.
First, even the most AI-savvy graduate lacks the depth to reliably evaluate outputs without experienced human guidance. Gerlich’s (2025) study of 666 participants, for example, found that 17–25-year-olds were the most immersed in AI tools yet had the lowest critical-thinking scores. Heavy AI use correlated strongly with cognitive offloading (r = +0.72) and negatively with critical thinking (r = –0.68). In interviews, younger participants admitted they often accepted AI outputs at face value. This is another reason why mentorship matters as much as AI literacy and fluency: education and guided experience are what transform raw AI familiarity into the judgment needed to evaluate its outputs.
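For readers less familiar with the statistics, the r values Gerlich reports are Pearson correlation coefficients. The sketch below, using entirely synthetic data (not Gerlich's), shows how such a coefficient is computed and what a correlation near +0.72 looks like numerically.

```python
import numpy as np

def pearson_r(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson r: covariance of x and y over the product of their standard deviations."""
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc**2).sum() * (yc**2).sum()))

# Synthetic illustration only -- these are NOT Gerlich's data.
rng = np.random.default_rng(42)
n = 666                                  # matches the study's sample size, purely for flavor
ai_use = rng.normal(size=n)              # standardized "AI tool use" scores
noise = rng.normal(size=n)
rho = 0.72
offloading = rho * ai_use + np.sqrt(1 - rho**2) * noise  # constructed to correlate ~ +0.72

print(f"r = {pearson_r(ai_use, offloading):+.2f}")       # prints a value close to +0.72
```

A coefficient of +0.72 is a strong association by the standards of behavioral research, though it does not by itself establish that AI use causes offloading.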
The second catch is that while mentors and senior staff may hold the depth of tacit knowledge, many are not yet fluent in AI themselves. If they dismiss machine outputs too quickly, or resist being challenged when AI produces conflicting or even better solutions, they block the very learning process they are meant to foster. In the AI age, mentorship cannot be one-directional; it must be reciprocal. Juniors bring AI-native skills but need guidance, while seniors bring judgment but must adapt their expertise in light of what the technology reveals. Without this two-way exchange, organizations risk ending up with overconfident juniors or outdated seniors. Both are equally hazardous.
The path forward is not Edmondson or Grennan, but a synthesis reframed through Beane’s evidence. Organizations must redesign entry-level work to keep mentorship at the core, while also demanding higher expectations of AI-native fluency from new hires. But mentorship in the AI age cuts both ways: juniors need the depth and judgment only seniors can provide, and seniors must stay open to learning from AI outputs that may challenge and also enhance their expertise. Done right, this reciprocal model preserves tacit knowledge, builds critical AI-savvy skills, and creates the only sustainable advantage left—a workforce that can both harness AI and still evolve beyond it.
References
Beane, M. (2024). The skill code: How to build and protect skills in an age of AI and robots. Boston, MA: Harvard Business Review Press.
Edmondson, A. C., & Chamorro-Premuzic, T. (2025, September 16). The perils of using AI to replace entry-level jobs. Harvard Business Review. https://hbr.org
Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. https://doi.org/10.3390/soc15010006
Grennan, C. (2025, September 18). [LinkedIn post]. LinkedIn. https://www.linkedin.com/posts/matt-grennan-235a505_ai-hrm-amyedmondson-activity-7237982360175636480-QLfZ

Nigel, one of the things that troubles me about these conversations is the notion that "AI fluency" is something tangible that might be turned into an outcome that can be taught and measured. I'm skeptical because we simply do not understand much yet about how LLM tools will be used in organizational contexts. The idea that getting an AI chatbot to answer questions accurately, or to generate a draft of some memo no one will read, is a good LLM use case is laughable.
Reducing headcount among junior staff may turn out to be a disaster, or not. Who knows? Maybe we should be laying off the senior people and clearing the decks for those who use the tools.
The real disaster is pretending we know something about what counts as effective AI use this early into a decades-long transformation.