AI Explained
No. 1 Superforecaster & AI 2027 Author Eli Lifland - On Our Differing Timelines to Superintelligence (New Podcast Series Potentially!)
Claude 4 Simple SOTA Insights + Leaked System Prompt
A New Twist in the ChatGPT Sycophancy Saga
Next-level reasoning: The Good News and Bad News - 2 new papers analysed
Paper: AI Doesn't Say What It Thinks. AI Orgs: It Could Be Your Friend
“OpenAI is Not God”: DeepSeek, Liang Wenfeng and the R1 Phenomenon
'Claude 3.7 Knows it is Being Tested' - New Research, Theory of Mind and Consciousness
4 AI Trends Emerging in 2025. Patchy, Epic, Expensive, and Deceptive Models
Content Creation Insights, Update on Timelines, Race Dynamics: Think Sip-by-Sip Podcast
Mini-documentary Poll
The One Machine to Rule Them All - Origin Stories. Mini-Documentary on How the Founding Vision of Each AGI Lab Went Awry
Pod 12: Apollo Research Group Interview - Models Try Hard Not to Undergo 'Unlearning', the media, and much more ... - Let's Think Sip-by-Sip
'Takeoff Speeds' - my unreleased, now-topical explainer
Veo 2 vs Sora ... then Veo 3?
Media Misreporting Over o1's 'Escape' - 70-page report highlights - why o1 did what it did
DeepMind Prof. Tim Rocktäschel on Takeoff Speeds, GDP 2x-ing, Gemini 2, ASI timelines and Automating Science
AI vs Human Creativity. Can you tell them apart? Plus, a key battleground for 2025 onwards
Pod 10: 4 Reasons Why Data is Now Even More Important: Scaling plateaus, judge rulings, test-time training paper and post-AGI jobs - Let's Think Sip-by-Sip
A Step Toward Nationalizing AI? White House Memo Full Analysis and Context
Pod 9: Full Simple-Bench Results, o1-preview to Grok-2 - Let's Think Sip by Sip
'Machines of Loving Grace' - Key Highlights. 'All the 21st Century ... by 2036.'
o1 can 'self-correct'. That's kinda significant.
Pod 8: Do we have a straight shot to AGI? 'Don't teach, incentivize' - Let's Think Sip by Sip
Is o1 No Longer an LLM? LeCun + New 'LRM' paper explained (+ exclusive interview clips)
'Humanity's Last Exam' - I Doubt It
The Struggle to Define 'AGI' - Controversial Terms in AI, Explained
10,000x Scaling Deep Dive, and a 5-year LLM Roadmap
Simple Bench Exclusive Tour: I couldn’t find a good reasoning benchmark, so I made one.
'The Bitter Lesson' - Controversial Terms in AI, Explained - New Series
Pod 7: The Story Behind SIMPLE Bench, More Results, and Next Steps - Let's Think Sip by Sip
'Emergent Behaviors' - Controversial Terms in AI, Explained - New Series
Can ChatGPT Do Task X? It’s Surprisingly Hard to Answer
Pod 6: No One Agrees @ OpenAI if GPT-4o is 'a smart highschooler' + My Take on Murati, Altman and Sutskever - Let's Think Sip by Sip
'Open Source' - Controversial Terms in AI, Explained - New Series
Fired OpenAI researcher - 'OpenAI Planned to Sell AGI to China' and 'It's Coming by 2027' - Full Analysis of 165-page Doc
'Stochastic Parrot' - Controversial Terms in AI, Explained - New Series
New Benchmark Madness, But Hope on the Horizon
Prompt Injections in the AI Agent Era - Donato Capitella
Pod 5: GPT 4o Reflections, Cryptic OpenAI Tweet, When to Declare AGI, and New Guests - Let's Think Sip by Sip
Reflections on Sam Altman’s Recent Expectation-Setting on GPT-5
Many-Shot Magic: 2 New Papers + 1 Failed Bet Show What Can Be Done with LLMs
SmartGPT Website Demo and Community Project
Perplexity CEO on the Future of Search, and Why He's Not Scared of OpenAI or Google
AI Jobs Warning: 36 Hours Later, Author Interviewed, Paper Analysed in Full, and Why I am Still Somewhat Optimistic
Pod 4: Unpredictability: AI, Content Creation, Timelines and Vernor Vinge - Let's Think Sip by Sip
A Note on Not Being Shocking, and Making Connections
The AGI Lawsuit
AI Professional Tips and Networking
$7 Trillion, a Bioweapon and a Nuke In Space - Under-the-Radar AI Safety Papers
Deepfakes - The Peril and Potential