Position: Uncertainty Quantification Needs Reassessment for Large-language Model Agents
Best AI papers explained - A podcast by Enoch H. Kang

This position paper argues for a reassessment of uncertainty quantification in large language model (LLM) agents. The authors contend that the traditional split between aleatoric (irreducible) and epistemic (reducible) uncertainty is insufficient for the interactive settings in which LLM agents operate, especially given their propensity to produce incorrect outputs. They show that existing definitions of these two uncertainties can conflict with one another and fail to apply cleanly in dynamic, multi-turn conversations. To address this, the paper proposes three research directions for how LLM agents should handle uncertainty: acknowledging underspecification uncertainty arising from incomplete user requests, employing interactive learning to clarify that missing information, and communicating ambiguity through richer output uncertainties than a single number. The authors argue these approaches will make LLM agent interactions more transparent, trustworthy, and intuitive.
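
To make the second and third directions concrete, here is a minimal, hypothetical sketch (not from the paper) of an agent loop that treats disagreement between sampled interpretations of a request as underspecification, asks a clarifying question instead of guessing, and otherwise returns an answer with the alternative readings kept attached rather than a single confidence score. The function `call_llm`, the prompts, and the threshold are placeholder assumptions, not a real API.

```python
# Illustrative sketch only: one possible way to surface underspecification
# uncertainty in an LLM agent loop. `call_llm` is a hypothetical placeholder
# for any chat-completion backend.

from dataclasses import dataclass, field


@dataclass
class AgentReply:
    kind: str                      # "answer" or "clarify"
    text: str
    candidates: list = field(default_factory=list)  # alternative readings kept visible


def call_llm(prompt: str, n: int = 1) -> list[str]:
    """Placeholder: return n sampled completions from some LLM backend."""
    raise NotImplementedError


def respond(user_request: str, divergence_threshold: int = 2) -> AgentReply:
    # Sample several paraphrases of what the user might mean; disagreement here
    # is treated as underspecification (reducible by asking), not as noise.
    readings = call_llm(
        f"Give one plausible interpretation of this request:\n{user_request}", n=5
    )
    distinct = sorted(set(r.strip().lower() for r in readings))

    if len(distinct) > divergence_threshold:
        # Interactive clarification: ask the user rather than silently picking one reading.
        question = call_llm(
            "Write one short question that would disambiguate these readings:\n"
            + "\n".join(distinct)
        )[0]
        return AgentReply(kind="clarify", text=question, candidates=distinct)

    # Low divergence: answer, but keep the alternative readings attached so the
    # residual uncertainty is communicated as structure, not a single scalar.
    answer = call_llm(f"Answer the request:\n{user_request}")[0]
    return AgentReply(kind="answer", text=answer, candidates=distinct)
```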