431 Episodes

  1. Throughput Limits for LLM Inference and AI Agent Scheduling

    Published: 4/14/2025
  2. RL Post-training Amplifies Pretraining Behaviors in Language Models

    Published: 4/14/2025
  3. Fast Adaptation of Behavioral Foundation Models

    Published: 4/14/2025
  4. Proprietary Reward Models: Sustaining Advantage in Agentic AI

    Published: 4/13/2025
  5. Why Multi-Agent LLM Systems Fail: A Comprehensive Study

    Published: 4/12/2025
  6. Play2Prompt: Zero-Shot Tool Instruction Optimization via Tool Play

    Published: 4/12/2025
  7. Advances and Challenges in Foundation Agents: From Brain-Inspired Intelligence to Evolutionary, Collaborative, and Safe Systems

    Published: 4/12/2025
  8. API and GUI Agents: Divergence, Convergence, and Hybrid Approaches

    Published: 4/12/2025
  9. AI, Chess, and Competitive Advantage: Substitution and Complementation

    Published: 4/12/2025
  10. Knowledge of the Firm and Replication of Technology

    Published: 4/12/2025
  11. Firm Resources and Sustained Competitive Advantage

    Published: 4/12/2025
  12. Evaluating Pharmaceutical Marketing to Physicians with Panel Data

    Published: 4/12/2025
  13. Theory of the Firm in the Era of Agents

    Published: 4/12/2025
  14. Large Language Models: An Applied Econometric Framework

    Published: 4/12/2025
  15. Evaluating the World Model Implicit in a Generative Model

    Published: 4/12/2025
  16. Machine Learning for Hypothesis Generation in Social Science

    Published: 4/11/2025
  17. Active Learning for Moral Preference Elicitation: Challenges and Nuances

    Published: 4/11/2025
  18. Gradient-Based Surveys for Nonparametric Discrete Choice Experiments

    Published: 4/11/2025
  19. Explainable Data-driven Share-of-choice Product Line Design Optimization

    Published: 4/11/2025
  20. The More You Ask, the Less You Get: When Additional Questions Hurt External Validity

    Published: 4/11/2025

Cut through the noise. We curate and break down the most important AI papers so you don’t have to.