Market Analysis

The Hollow Firm 2.0: What Happens When Juniors Disappear

AI is automating junior work in law, consulting, and finance. The short-term result is margin expansion; the long-term risk is a 2035 succession crisis, when today's AI-trained juniors become the senior experts.

MMNTM Research
6 min read
#professional-services #ai-automation #talent #law #consulting #finance #workforce

Professional services firms are running an experiment. The hypothesis: AI can replace junior work without disrupting the talent pipeline that produces senior expertise.

The data so far is striking.

  • Law associate decline: 23% (AmLaw 100, 2022-2023)
  • Consulting entry-level decline: 54% (industry-wide, year over year as of June 2024)
  • Banking campus hiring cut: 17% (Bank of America, citing AI)

These aren't cyclical adjustments. Thomson Reuters reports that AmLaw 100 firms made "substantial effort to control expenses by slowing hiring." McKinsey's headcount dropped 11% from 2022 to 2024. BCG added 1,000 employees in 2024 versus 5,000 in 2022, an 80% drop in net hiring.

The firms deploying AI most aggressively are reducing junior hiring most aggressively. Correlation isn't causation, but the strategic logic is explicit.

The Barbell Model

Workforce Structure

| Feature | Traditional Pyramid | Barbell Model |
| --- | --- | --- |
| Structure | 1 Partner + 5 Seniors + 10 Juniors | 1 Partner + 2 Seniors + AI Agents |
| Leverage Ratio | 1:15 | 1:50+ |
| Compensation (% of revenue) | 60-70% | 30-40% human + 5-10% AI |

The economic logic is straightforward. A&O Shearman deployed Harvey AI across 4,000+ lawyers and staff, reporting 30% time savings on contract matrix analysis. Management acknowledged that "big parts of what we used to do will be automated in a completely natural way." Effects are "already visible in reduced junior-level hiring."
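A back-of-envelope comparison makes the margin math concrete. In the sketch below, the revenue figure is a hypothetical placeholder; only the compensation and AI-spend ratios come from the table above, taken at the midpoints of their ranges.

```python
# Back-of-envelope margin comparison: pyramid vs. barbell.
# REVENUE is hypothetical; the cost ratios are midpoints of the ranges
# in the table above (60-70% human comp vs. 30-40% human + 5-10% AI).

REVENUE = 10_000_000  # hypothetical annual revenue for one partner team


def gross_margin(human_comp_pct: float, ai_spend_pct: float = 0.0) -> float:
    """Revenue left after human compensation and AI spend."""
    return REVENUE * (1 - human_comp_pct - ai_spend_pct)


pyramid = gross_margin(human_comp_pct=0.65)                      # 65% to people
barbell = gross_margin(human_comp_pct=0.35, ai_spend_pct=0.075)  # 35% + 7.5%

print(f"Pyramid margin: ${pyramid:,.0f}")  # $3,500,000 (35% of revenue)
print(f"Barbell margin: ${barbell:,.0f}")  # $5,750,000 (57.5% of revenue)
```

On these illustrative numbers, the barbell yields roughly 60% more margin per partner team, before even counting the jump in leverage from 1:15 to 1:50+.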

McKinsey's Lilli is used by 75% of its 43,000 employees an average of 17 times per week. Kate Smaje, McKinsey's global technology and AI leader, put it directly: "Do we need armies of business analysts creating PowerPoints? No, the technology could do that."

Goldman Sachs has 1,000 developers dedicated to AI projects, with 12+ proofs of concept targeting junior analyst work. Project "Mercury" is training models on financial modeling with over 100 ex-investment bankers, targeting 60-70% automation of junior analyst tasks within 12 months.

For the economics driving these decisions, see Agent Economics.

What Grunt Work Actually Taught

The traditional progression existed for a reason. Law associates spent years 1-2 on document review, due diligence, and basic drafting—70-80% of their time. Consulting analysts built slide decks and financial models. Banking analysts created pitch books and ran comparables.

This work was boring. It was also training.

Pattern recognition develops from seeing thousands of contract variations, not AI summaries. Quality calibration comes from understanding what "good" looks like through supervised work and corrections. Domain intuition—how to "think like a lawyer" or "think like a banker"—emerges from extensive exposure to edge cases.

A mid-level associate at McDermott Will & Schulte reflected: "I'm concerned that I'm not getting the same level of experience that senior attorneys acquired."

The question isn't whether AI can do the grunt work. It can. The question is whether juniors can develop expertise by reviewing AI output rather than producing it.

The 2035 Problem

Today's workforce is stratified:

  • Seniors (ages 40-55): Trained pre-AI through traditional apprenticeship
  • Juniors (ages 25-30): AI-assisted training, potentially 70% less hands-on practice

By 2035, today's juniors will be positioned for senior leadership. Will they have the depth to:

  • Supervise AI agents effectively without having done the work manually?
  • Quality-check AI output without knowing what "good" looks like firsthand?
  • Mentor the next generation in skills they never practiced?
  • Innovate beyond AI capabilities in domains they only reviewed?

The supervision paradox is acute. The barbell model assumes seniors can supervise AI producing junior-level work. But effective supervision requires knowing where AI fails. If tomorrow's seniors never learned those distinctions through direct experience, the quality control mechanism breaks down.

This matters for platform design. At MMNTM, we build agents that maintain human oversight not just for accuracy, but to preserve the skill development that oversight enables. Our platform architecture explicitly separates tasks that benefit from human learning from tasks that don't.
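What that separation could look like in practice: the routing policy below is a minimal, hypothetical sketch (the names, scores, and thresholds are illustrative, not MMNTM's actual API), but it captures the idea that tasks with training value should keep the human in the production seat, not just the review seat.

```python
from dataclasses import dataclass
from enum import Enum


class Route(Enum):
    AGENT = "fully automated"
    HUMAN_FIRST = "human drafts, agent checks"
    AGENT_DRAFT = "agent drafts, human reviews"


@dataclass
class Task:
    name: str
    training_value: float  # 0-1: how much doing this builds expertise
    error_cost: float      # 0-1: downside of an unreviewed mistake


def route(task: Task) -> Route:
    """Hypothetical policy: preserve hands-on reps where they teach."""
    if task.training_value >= 0.7:
        return Route.HUMAN_FIRST  # the work *is* the training
    if task.error_cost >= 0.5:
        return Route.AGENT_DRAFT  # automate, but keep a judgment loop
    return Route.AGENT            # low-stakes grunt work


print(route(Task("novel contract clause", 0.9, 0.8)))  # Route.HUMAN_FIRST
print(route(Task("comparables refresh", 0.2, 0.6)))    # Route.AGENT_DRAFT
```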

The Differentiation Collapse

There's a second-order problem. If every law firm uses Harvey, every consultancy uses Lilli, and every bank uses the same AI tools, the technology commoditizes. The only remaining moat is institutional knowledge—firm-specific precedents, client relationships, proprietary methodologies.

But institutional knowledge is precisely what's not being developed in juniors who rely on generic AI outputs. The firms using AI most aggressively may be eroding the only differentiator AI can't replicate.

  • AI salary premium: 56% (lawyers with AI skills earn $203.5K vs. $130K)
  • Law grad employment: 82.2% in 2024 (quantity stable, but the nature of the work is shifting)

The Counter-Thesis

The optimistic view deserves a fair hearing.

Learning by review may be faster than learning by doing. AI-generated work represents best practice. Juniors learn from optimized outputs rather than struggling through suboptimal first drafts. "Learn by editing" could be more efficient than "build from scratch."

Historical precedent cuts both ways. Calculators didn't destroy accountants—they freed them for strategy. Spreadsheets didn't destroy finance—they enabled modeling complexity. Each automation wave ultimately expanded the profession's scope.

Work quality may improve. Less grunt work reduces burnout and improves retention. Juniors engage in strategic thinking earlier, and more interesting challenges mean higher job satisfaction.

The honest answer: we don't know. The experiment is running in real-time. We won't have results until 2035 when today's AI-trained juniors become senior experts—or fail to.

The Strategic Question

For professional services firms:

  • How much junior work can be automated without destroying the talent pipeline?
  • What's the minimum apprenticeship exposure for expertise development?
  • How do you evaluate AI-trained associates for partnership?

For AI agent builders:

  • Should agents replace junior work or augment it?
  • Can agents include pedagogical features that develop user skills?
  • The work being automated may have been the training.

At MMNTM, we think about this carefully. Our thesis is that AI employees should handle operational work so humans focus on high-leverage activities. But we're explicit that some operational work has training value. Our ecosystem is designed for context sharing between agents—so humans can oversee without repetitive manual verification, while still maintaining the judgment loops that build expertise.
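One way to keep that judgment loop honest is review sampling: humans always see low-confidence output, plus a random slice of everything else, so oversight stays a learning exercise rather than wholesale re-verification. The policy below is an illustrative sketch with assumed thresholds, not our production implementation.

```python
import random


def human_review_queue(items, sample_rate=0.15, confidence_floor=0.8):
    """Illustrative spot-check policy: always surface low-confidence
    agent output, and sample a fixed share of the rest so reviewers
    keep exercising judgment without re-checking every item."""
    return [
        item
        for item in items
        if item["confidence"] < confidence_floor or random.random() < sample_rate
    ]


work = [
    {"task": "due-diligence summary", "confidence": 0.65},
    {"task": "contract clause extraction", "confidence": 0.93},
    {"task": "comparables table", "confidence": 0.88},
]
for item in human_review_queue(work):
    print("review:", item["task"])  # always includes the 0.65 item
```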

The firms that solve the "training without grunt work" problem will have an enormous competitive advantage by 2035. The firms that ignore it may find they've optimized themselves into a succession crisis.
