Auto-Differentiating Any LLM Workflow: A Farewell to Manual Prompting
Best AI papers explained - A podcast by Enoch H. Kang

This document introduces LLM-AutoDiff, a novel framework for Automatic Prompt Engineering (APE) that automates the difficult task of designing prompts for complex Large Language Model (LLM) workflows. By viewing these workflows as computation graphs in which textual inputs are treated as trainable parameters, the system uses a "backward engine" LLM to generate textual gradients: natural-language feedback that guides the iterative improvement of prompts. Unlike previous methods that focus on single LLM calls, LLM-AutoDiff supports multi-component pipelines (including functional operations like retrieval), handles cycles in iterative processes, and separates the parts of a prompt (such as instructions and few-shot examples) into peer nodes so that feedback can be targeted more precisely. The system also incorporates efficiency techniques, such as computing gradients only for erroneous examples and a two-stage validation process. Experiments show that LLM-AutoDiff outperforms existing text-gradient and few-shot optimization baselines across a range of tasks, demonstrating the potential of this auto-differentiable approach for scaling and optimizing intricate LLM applications.
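
To make the core idea concrete, below is a minimal Python sketch of a textual-gradient optimization loop in the spirit described above. Everything here is a hypothetical illustration, not the paper's actual API: `call_llm`, the prompt templates, and the `train` loop are stand-ins, though the skip-correct-examples step mirrors the error-only gradient technique mentioned in the summary.

```python
# Hypothetical sketch of textual-gradient prompt optimization.
# All names and prompt templates are illustrative assumptions,
# not the LLM-AutoDiff paper's actual interface.

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion call (swap in your provider's client)."""
    raise NotImplementedError

def forward(instruction: str, question: str) -> str:
    """Forward pass: run the current task prompt on one input."""
    return call_llm(f"{instruction}\n\nQuestion: {question}\nAnswer:")

def backward(instruction: str, question: str, prediction: str, target: str) -> str:
    """'Backward engine' LLM emits a textual gradient: feedback on the prompt."""
    return call_llm(
        "You are a backward engine. The instruction below produced a wrong answer.\n"
        f"Instruction: {instruction}\nQuestion: {question}\n"
        f"Prediction: {prediction}\nExpected: {target}\n"
        "Briefly explain how the instruction should change."
    )

def step(instruction: str, feedback: str) -> str:
    """Optimizer LLM applies the textual gradient to propose a revised instruction."""
    return call_llm(
        f"Current instruction: {instruction}\nFeedback: {feedback}\n"
        "Rewrite the instruction to address the feedback. Return only the new instruction."
    )

def train(instruction: str, data: list[tuple[str, str]], epochs: int = 3) -> str:
    """Iteratively refine the instruction on (question, target) pairs."""
    for _ in range(epochs):
        for question, target in data:
            prediction = forward(instruction, question)
            if prediction.strip() == target.strip():
                continue  # error-only gradients: skip examples answered correctly
            feedback = backward(instruction, question, prediction, target)
            instruction = step(instruction, feedback)
    return instruction
```

In a multi-component pipeline, each LLM call (and each functional node such as retrieval) would be a node in the computation graph, with textual gradients propagated backward through it rather than applied to a single instruction as in this simplified loop.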