Vivold Consulting

Why Cohere’s ex-AI research lead is betting against the scaling race

Key Insights

Former Cohere AI research head Sara Hooker argues that the industry's obsession with ever-larger language models has hit diminishing returns. Her new venture focuses on efficient architectures and data quality, not sheer scale.

Betting against the AI scaling race

Sara Hooker, who formerly led research at Cohere, has become one of the most vocal critics of the industry's “bigger is always better” mentality. In a recent talk and interview with TechCrunch, she argues that returns from model scaling have plateaued — and that the next breakthroughs will come from smarter training data, specialized models, and adaptive reasoning systems.

What Hooker is saying


- Massive models like GPT-4, Claude 3, and Gemini Ultra show only marginal accuracy gains over their predecessors, at exponentially rising compute and energy costs.
- The new research focus is data curation — improving signal-to-noise ratios and sampling dynamically instead of blindly growing corpus size.
- Hooker calls for a future of modular AI — smaller, task-specific systems that communicate rather than one monolithic model.
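The modular direction described above can be pictured as a lightweight router that classifies each request and hands it to a specialized model. This is a toy sketch of that idea only — the task labels, keyword classifier, and stand-in "models" are all illustrative assumptions, not systems from Hooker's work.

```python
# Toy sketch of modular AI: a router dispatches requests to small,
# task-specific models instead of one monolithic model.

def route(task_models, classify, request):
    """Classify the request, then delegate to the matching specialist."""
    return task_models[classify(request)](request)

# Hypothetical stand-ins for specialized models (assumptions for illustration):
task_models = {
    "math": lambda q: f"[math model] {q}",
    "code": lambda q: f"[code model] {q}",
    "chat": lambda q: f"[chat model] {q}",
}

def classify(request):
    """Crude keyword-based routing, purely for demonstration."""
    if any(tok in request for tok in ("+", "integral", "solve")):
        return "math"
    if "def " in request or "function" in request:
        return "code"
    return "chat"

print(route(task_models, classify, "solve 2 + 2"))  # dispatched to the math model
```

In a real deployment the classifier would itself be a small learned model, and the specialists would be separately trained networks — the point is only that coordination replaces sheer size.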

The technical case


- Studies from Hooker’s lab report that training efficiency roughly doubles when data is pre-filtered for reasoning diversity rather than simply scaled up in volume.
- Mixture-of-experts architectures are regaining attention, offering large-model performance at a fraction of the cost.
- She advocates for open, interpretable benchmarks beyond MMLU and HELM to measure real-world reliability.
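The mixture-of-experts idea mentioned above is that a router activates only a few small expert networks per input, so most parameters sit idle on any given token. Below is a minimal numerical sketch of top-k gating, assuming toy ReLU-MLP experts and a linear router; the dimensions and code are illustrative, not anyone's production implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H, E = 8, 16, 4  # input dim, expert hidden dim, number of experts

# Each expert is a tiny two-layer MLP; the router is a single linear layer.
experts = [(rng.normal(size=(D, H)) * 0.1, rng.normal(size=(H, D)) * 0.1)
           for _ in range(E)]
router = rng.normal(size=(D, E)) * 0.1

def moe_forward(x, k=2):
    """Route the input to its top-k experts and mix their outputs."""
    logits = x @ router                    # one routing score per expert
    top = np.argsort(logits)[-k:]          # indices of the k best experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()               # softmax over the selected experts
    out = np.zeros_like(x)
    for w, i in zip(weights, top):
        w1, w2 = experts[i]
        out += w * (np.maximum(x @ w1, 0.0) @ w2)  # weighted expert output
    return out

y = moe_forward(rng.normal(size=D))
print(y.shape)  # (8,)
```

Because only k of the E experts run per input, compute grows with k while parameter count grows with E — which is why these architectures can approach large-model quality at a fraction of the inference cost.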

Why this matters for the AI ecosystem


- The “scaling ceiling” could reshape the AI race. Firms like Anthropic and OpenAI may pivot toward data efficiency and reasoning-enhanced training.
- VC interest is shifting from GPU-driven startups to optimization-focused ventures building inference-efficient stacks.
- For enterprise buyers, the competitive question becomes: can you get 90% of GPT-4's performance at 10% of the cost?

The human and policy angle


- Hooker also highlights the carbon footprint of scaling and urges policymakers to adopt compute transparency standards.
- “We’ve mistaken size for intelligence,” she says. “Now we have to make AI more human by making it smaller.”