Best AI papers explained
A podcast by Enoch H. Kang
431 Episodes
Async-TB: Asynchronous Trajectory Balance for Scalable LLM RL
Published: 4/1/2025
Instacart's Economics Team: A Hybrid Role in Tech
Published: 3/31/2025
Data Mixture Optimization: A Multi-fidelity Multi-scale Bayesian Framework
Published: 3/31/2025
Why MCP won
Published: 3/31/2025
SWEET-RL: Training LLM Agents for Collaborative Reasoning
Published: 3/31/2025
TheoryCoder: Bilevel Planning with Synthesized World Models
Published: 3/30/2025
Driving Forces in AI: Scaling to 2025 and Beyond (Jason Wei, OpenAI)
Published: 3/29/2025
Expert Demonstrations for Sequential Decision Making under Heterogeneity
Published: 3/28/2025
TextGrad: Backpropagating Language Model Feedback for Generative AI Optimization
Published: 3/27/2025
MemReasoner: Generalizing Language Models on Reasoning-in-a-Haystack Tasks
Published: 3/27/2025
RAFT: In-Domain Retrieval-Augmented Fine-Tuning for Language Models
Published: 3/27/2025
Inductive Biases for Exchangeable Sequence Modeling
Published: 3/26/2025
InverseRLignment: LLM Alignment via Inverse Reinforcement Learning
Published: 3/26/2025
Prompt-OIRL: Offline Inverse RL for Query-Dependent Prompting
Published: 3/26/2025
Alignment from Demonstrations for Large Language Models
Published: 3/25/2025
Q♯: Distributional RL for Optimal LLM Post-Training
Published: 3/18/2025
Scaling Test-Time Compute Without Verification or RL is Suboptimal
Published: 3/14/2025
Optimizing Test-Time Compute via Meta Reinforcement Fine-Tuning
Published: 3/14/2025
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Published: 3/14/2025
Cut through the noise. We curate and break down the most important AI papers so you don’t have to.