431 Episodes

  1. Async-TB: Asynchronous Trajectory Balance for Scalable LLM RL

    Published: 4/1/2025
  2. Instacart's Economics Team: A Hybrid Role in Tech

    Published: 3/31/2025
  3. Data Mixture Optimization: A Multi-fidelity Multi-scale Bayesian Framework

    Published: 3/31/2025
  4. Why MCP Won

    Published: 3/31/2025
  5. SWEET-RL: Training LLM Agents for Collaborative Reasoning

    Published: 3/31/2025
  6. TheoryCoder: Bilevel Planning with Synthesized World Models

    Published: 3/30/2025
  7. Driving Forces in AI: Scaling to 2025 and Beyond (Jason Wei, OpenAI)

    Published: 3/29/2025
  8. Expert Demonstrations for Sequential Decision Making under Heterogeneity

    Published: 3/28/2025
  9. TextGrad: Backpropagating Language Model Feedback for Generative AI Optimization

    Published: 3/27/2025
  10. MemReasoner: Generalizing Language Models on Reasoning-in-a-Haystack Tasks

    Published: 3/27/2025
  11. RAFT: In-Domain Retrieval-Augmented Fine-Tuning for Language Models

    Published: 3/27/2025
  12. Inductive Biases for Exchangeable Sequence Modeling

    Published: 3/26/2025
  13. InverseRLignment: LLM Alignment via Inverse Reinforcement Learning

    Published: 3/26/2025
  14. Prompt-OIRL: Offline Inverse RL for Query-Dependent Prompting

    Published: 3/26/2025
  15. Alignment from Demonstrations for Large Language Models

    Published: 3/25/2025
  16. Q♯: Distributional RL for Optimal LLM Post-Training

    Published: 3/18/2025
  17. Scaling Test-Time Compute Without Verification or RL is Suboptimal

    Published: 3/14/2025
  18. Optimizing Test-Time Compute via Meta Reinforcement Fine-Tuning

    Published: 3/14/2025
  19. Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback

    Published: 3/14/2025

Cut through the noise. We curate and break down the most important AI papers so you don’t have to.