LLMs Get Lost In Multi-Turn Conversation
May 15, 2025. LLMs Get Lost In Multi-Turn Conversation (via). This paper runs large-scale simulated conversation experiments and finds that LLM performance degrades significantly in multi-turn settings compared to single-turn settings. From the abstract:
> Analysis of 200,000+ simulated conversations decomposes the performance degradation into two components: a minor loss in aptitude and a significant increase in unreliability. We find that LLMs often make assumptions in early turns and prematurely attempt to generate final solutions, on which they overly rely. In simpler terms, we discover that when LLMs take a wrong turn in a conversation, they get lost and do not recover.
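The aptitude/unreliability split is worth dwelling on. This is not the paper's exact formulation, but a rough sketch of how such a decomposition could be computed, assuming each task is run through multiple simulations scored 0-100; the 90th/10th percentile choices and the sample numbers are illustrative:

```python
import numpy as np

def decompose(scores_per_task):
    """Split performance into average score, aptitude, and unreliability.

    scores_per_task: list of arrays, one per task, each holding the
    scores (0-100) from repeated simulations of that task.
    """
    # Aptitude: best-case capability, here the 90th percentile per task
    aptitude = np.mean([np.percentile(s, 90) for s in scores_per_task])
    # Unreliability: spread between best-case and worst-case runs
    unreliability = np.mean(
        [np.percentile(s, 90) - np.percentile(s, 10) for s in scores_per_task]
    )
    average = np.mean([np.mean(s) for s in scores_per_task])
    return average, aptitude, unreliability

# Invented numbers, for illustration only: single-turn runs cluster tightly,
# multi-turn runs swing between success and getting lost.
single_turn = [np.array([90, 85, 95, 88, 92]), np.array([70, 75, 72, 68, 74])]
multi_turn = [np.array([90, 40, 20, 85, 30]), np.array([65, 10, 70, 15, 60])]

print(decompose(single_turn))  # high average, low unreliability
print(decompose(multi_turn))   # similar aptitude, much higher unreliability
```

The point of the decomposition: the multi-turn models are still capable of the right answer (aptitude barely drops), they just stop producing it consistently.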
The main explanations for this effect could be:
- making premature and incorrect assumptions early in the conversation
- over-relying on previous incorrect responses, compounding the error
- over-adjusting responses toward the first and last turns, forgetting the middle turns
- producing overly verbose responses that muddle the context and confuse subsequent turns
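The setting where this shows up is the contrast between a fully specified single-turn prompt and the same requirements "sharded" across turns. A minimal sketch of that contrast, assuming a hypothetical `call_llm(messages)` helper standing in for whatever chat model is under test; the task and its shards are invented for illustration and are not the paper's exact procedure:

```python
def call_llm(messages: list[dict]) -> str:
    # Stand-in for a real chat-model call; returns a canned reply so the
    # sketch runs end to end. Swap in your own client here.
    return f"[model reply to: {messages[-1]['content']!r}]"

# The same task, fully specified up front vs. revealed one piece per turn.
full_instruction = (
    "Write a Python function that parses a log file, keeps only ERROR "
    "lines, and returns counts grouped by module."
)
shards = [
    "Write a Python function that parses a log file.",
    "Actually, only keep the ERROR lines.",
    "Also return the counts grouped by module.",
]

# Single-turn: everything is stated at once.
single_turn_answer = call_llm([{"role": "user", "content": full_instruction}])

# Multi-turn: requirements trickle in, and every model reply stays in the
# context, so an early wrong assumption keeps influencing later turns.
messages = []
for shard in shards:
    messages.append({"role": "user", "content": shard})
    reply = call_llm(messages)
    messages.append({"role": "assistant", "content": reply})
multi_turn_answer = messages[-1]["content"]
```

The multi-turn loop is where the failure modes above compound: an early attempt at a "final" solution gets carried forward in the context and the model keeps patching it instead of restarting from the full set of requirements.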