DeepMind’s David Silver just raised $1.1B to build an AI that learns without human data
Startups & Funding · Posted 2w ago

Originally published by TechCrunch AI
Key Takeaways
  • Raised $1.1 billion in funding at a valuation of $5.1 billion to join the race for novel AI models that could outperform large language models
  • A professor at University College London, Silver was until recently leading the reinforcement learning team at Google-owned DeepMind, where he spent more than a decade before leaving to found this new venture
  • Ineffable Intelligence hopes that its "superlearner" will discover all knowledge from its own experience
  • "If successful, this will represent a scientific breakthrough of comparable magnitude to Darwin: where his law explained all Life, our law will explain and build all Intelligence," its site claims (capitals included)

Ineffable Intelligence, a British AI lab founded just a few months ago by former DeepMind researcher David Silver, has raised $1.1 billion in funding at a valuation of $5.1 billion.
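
The pitch behind "an AI that learns without human data" is the reinforcement-learning recipe Silver is known for from DeepMind's AlphaZero: an agent that improves purely from the rewards of its own interactions, with no human-written examples or labels. Here is a minimal, purely illustrative sketch of that idea, using a toy environment and tabular Q-learning; nothing here reflects anything Ineffable Intelligence has actually disclosed.

    import random

    # Toy "chain" environment: states 0..9, start at 0, reward only at the far end.
    # The agent receives no demonstrations or labels; it learns entirely from the
    # rewards its own actions produce. Everything here is illustrative.
    N_STATES = 10
    ACTIONS = [-1, +1]  # step left or right along the chain

    def step(state, action):
        """Apply an action; reward 1.0 only for reaching the final state."""
        nxt = min(max(state + action, 0), N_STATES - 1)
        done = nxt == N_STATES - 1
        return nxt, (1.0 if done else 0.0), done

    # Tabular Q-learning: Q[s][a] estimates long-run reward of action a in state s.
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    alpha, gamma, epsilon = 0.1, 0.95, 0.1  # learning rate, discount, exploration

    for episode in range(300):
        state = 0
        for _ in range(200):  # cap episode length
            if random.random() < epsilon:
                a = random.randrange(2)  # explore occasionally
            else:
                # Exploit current estimates, breaking ties at random.
                a = max(range(2), key=lambda i: (Q[state][i], random.random()))
            nxt, reward, done = step(state, ACTIONS[a])
            # Move the estimate toward the observed reward plus the
            # discounted estimate of future value.
            Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
            state = nxt
            if done:
                break

    # The learned greedy policy should prefer stepping right (action index 1)
    # at every non-terminal state; typically prints [1, 1, 1, 1, 1, 1, 1, 1, 1].
    print([max(range(2), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)])

The point of the sketch is what is absent: there is no dataset. The Q table starts empty and is filled entirely by the agent's own trial and error, which is the property the "superlearner" pitch proposes to scale up.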

Read the full story on TechCrunch AI for primary detail and technical specifications.

Heat: 26. Based on social velocity, sharing rate, and discussion volume across communities.

Impact: 31. Estimated significance to the industry, potential for disruption, and technical novelty.
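
The page does not publish how these composites are computed. As a purely hypothetical illustration of what "based on" could mean here, a 0-100 score can be produced as a weighted mean of normalized signals; the weights below are invented, not The AI Brief's formula.

    def composite_score(signals, weights):
        """Hypothetical 0-100 composite: a weighted mean of signals that are
        assumed already normalized to [0, 1]. The signal names mirror the
        page's wording; the weights are invented for illustration only."""
        return round(100 * sum(w * s for w, s in zip(weights, signals)))

    # e.g. heat from (social_velocity, sharing_rate, discussion_volume)
    print(composite_score((0.31, 0.22, 0.18), (0.5, 0.3, 0.2)))  # -> 26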

Why This Matters

This development reflects ongoing shifts in the AI ecosystem. Stay informed about how these changes might affect your technology stack, competitive landscape, and strategic planning.

Automated Summarization

This content was automatically aggregated and summarized from TechCrunch AI; details and nuance may differ from the original article.

Related Stories

ParaRNN: Large-Scale Nonlinear RNNs, Trainable in Parallel

Recurrent Neural Networks (RNNs) are naturally suited to efficient inference, requiring far less memory and compute than attention-based architectures…
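
The efficiency claim in that teaser comes down to state size: a recurrent model carries a fixed-size hidden state from token to token, while an attention layer must retain a key/value cache that grows with sequence length. A generic sketch of the contrast, using an ordinary nonlinear RNN cell rather than ParaRNN's specific architecture:

    import numpy as np

    d = 64  # hidden size (illustrative)
    rng = np.random.default_rng(0)
    W_x = rng.standard_normal((d, d)) * 0.01
    W_h = rng.standard_normal((d, d)) * 0.01

    def rnn_step(h, x):
        """One token of RNN inference: only the d-dimensional state h is
        carried forward, so memory stays O(d) regardless of sequence length."""
        return np.tanh(W_x @ x + W_h @ h)

    h = np.zeros(d)
    kv_cache = []  # what an attention layer would keep instead
    for x in rng.standard_normal((1000, d)):
        h = rnn_step(h, x)        # constant memory: one vector
        kv_cache.append((x, x))   # attention: cache grows with each token

    print(h.shape, len(kv_cache))  # (64,) versus 1000 cached key/value pairs

The nonlinearity (the tanh) is also what has historically made such RNNs hard to train in parallel, which is the bottleneck the paper's title points at.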

When ChatGPT launched as an experimental prototype in late 2022, OpenAI’s chatbot became an everyday everything app for hundreds of millions of people…

Google expands Pentagon’s access to its AI after Anthropic’s refusal

After Anthropic refused to allow the DoD to use its AI for domestic mass surveillance and autonomous weapons, Google has signed a new contract with th…

The AI Brief — AI news for developers
About · Methodology · Sources · API · Terms · Privacy

© 2026 The AI Brief. All rights reserved.