LLMs Get Lost In Multi-Turn Conversation

May 15, 2025

LLMs Get Lost In Multi-Turn Conversation (via). This paper runs large-scale simulation experiments and finds that LLM performance degrades significantly in multi-turn settings compared to single-turn settings. From the abstract:

Analysis of 200,000+ simulated conversations decomposes the performance degradation into two components: a minor loss in aptitude and a significant increase in unreliability. We find that LLMs often make assumptions in early turns and prematurely attempt to generate final solutions, on which they overly rely. In simpler terms, we discover that when LLMs take a wrong turn in a conversation, they get lost and do not recover.

The main explanations for this effect, going by the abstract, appear to be that models make assumptions in early turns before the task is fully specified, attempt a final solution prematurely, and then over-rely on that earlier attempt instead of revising it, so an early wrong turn compounds across the rest of the conversation. A rough illustration of the single-turn vs. multi-turn comparison follows.
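
To make that comparison concrete, here is a minimal sketch, not the paper's actual simulation harness, of running the same task either fully specified in one turn or revealed in pieces ("shards") across several turns. The model name, the example task, and the shard boundaries are placeholders I've made up; it uses the standard OpenAI Python client.

```python
# Minimal sketch (not the paper's harness): run the same task fully specified
# in one turn vs. revealed piece by piece across turns, then compare answers.
# Model name and task shards below are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"  # placeholder model

# One fully-specified instruction, and the same instruction split into shards
# that are revealed one turn at a time (the multi-turn condition).
full_task = (
    "Write a Python function that parses an ISO 8601 date string "
    "and returns the day of the week as an English word."
)
shards = [
    "Write a Python function that parses a date string.",
    "The input format is ISO 8601.",
    "It should return the day of the week as an English word.",
]

def single_turn() -> str:
    """Ask for the answer with the whole task given up front."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": full_task}],
    )
    return resp.choices[0].message.content

def multi_turn() -> str:
    """Reveal the task shard by shard, keeping the growing conversation."""
    messages = []
    reply = ""
    for shard in shards:
        messages.append({"role": "user", "content": shard})
        resp = client.chat.completions.create(model=MODEL, messages=messages)
        reply = resp.choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
    return reply  # final answer after all shards have been revealed

if __name__ == "__main__":
    print("--- single turn ---")
    print(single_turn())
    print("--- multi turn ---")
    print(multi_turn())
```

The paper's finding is that even though the multi-turn condition eventually supplies the same information, the answers tend to be worse and, above all, far less reliable from run to run.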

#llm #conversation #research