Best AI papers explained

A podcast by Enoch H. Kang

153 Episodes

  1. Optimizing Test-Time Compute via Meta Reinforcement Fine-Tuning

    Published: 3/14/2025
  2. Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback

    Published: 3/14/2025
  3. Revisiting the Superficial Alignment Hypothesis

    Published: 3/14/2025
  4. Diagnostic Uncertainty: Teaching Language Models to Describe Open-Ended Uncertainty

    Published: 3/14/2025
  5. Language Model Personalization via Reward Factorization

    Published: 3/14/2025
  6. Is a Good Foundation Necessary for Efficient Reinforcement Learning? The Computational Role of the Base Model in Exploration

    Published: 3/14/2025
  7. How Well do LLMs Compress Their Own Chain-of-Thought? A Token Complexity Approach

    Published: 3/14/2025
  8. Can Large Language Models Extract Customer Needs as well as Professional Analysts?

    Published: 3/13/2025
  9. SpurLens: Finding Spurious Correlations in Multimodal LLMs

    Published: 3/13/2025
  10. Improving Test-Time Search with Backtracking Against In-Context Value Verifiers

    Published: 3/13/2025
  11. Adaptive Elicitation of Latent Information Using Natural Language

    Published: 3/13/2025
  12. Document Valuation in LLM Summaries: A Cluster Shapley Approach

    Published: 3/13/2025
  13. s1: Simple Test-Time Scaling

    Published: 3/13/2025

Men know other men best. Women know other women best. And yes, perhaps AIs know other AIs best. AI explains what you should know about this week's AI research progress.