Personalizing LLMs via Decode-Time Human Preference Optimization
Best AI papers explained - A podcast by Enoch H. Kang

This paper introduces PANDA, a method for personalizing large language models (LLMs) at inference time, i.e., as text is being generated. Unlike traditional alignment methods that require costly retraining for each new preference, PANDA dynamically adjusts an LLM's output according to learned user preferences without modifying the underlying model. By combining context-aware preference weights with reward models, PANDA tailors responses to individual users flexibly and efficiently, and experiments show improved performance on personalized tasks compared to existing alignment techniques. The method represents a significant step toward scalable, dynamic personalization of LLMs.
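To make the idea of decode-time steering concrete, here is a minimal sketch of reward-guided token selection: base-model logits are shifted by per-token reward scores, scaled by a preference weight that could depend on context. This is a generic illustration of the technique class, not PANDA's actual formulation; all names and values below are hypothetical.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D logit vector.
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def personalized_decode_step(base_logits, reward_scores, pref_weight):
    """Shift base-model logits by reward-model scores, scaled by a
    (possibly context-dependent) preference weight, then renormalize.
    Hypothetical illustration of decode-time preference steering."""
    adjusted = base_logits + pref_weight * reward_scores
    return softmax(adjusted)

# Toy vocabulary of 4 tokens.
base_logits = np.array([2.0, 1.0, 0.5, 0.1])      # frozen base model's scores
reward_scores = np.array([-1.0, 2.0, 0.0, 0.5])    # learned user-preference reward per token

neutral = personalized_decode_step(base_logits, reward_scores, pref_weight=0.0)
steered = personalized_decode_step(base_logits, reward_scores, pref_weight=1.5)

print(neutral.argmax())  # with no steering, the base model's top token (0) wins
print(steered.argmax())  # with steering, probability mass shifts to token 1
```

With `pref_weight=0` the base model's distribution is unchanged; raising the weight shifts probability toward tokens the reward model favors, all without touching the model's parameters — the core appeal of inference-time personalization.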