The Biggest 'Lie' in AI? LLM doesn't think step-by-step
May 29, 2025

Interesting video making the point that the process by which a model arrives at a mathematical answer is not necessarily the process the model describes when asked how it got there. In other words, the verbalized reasoning is not necessarily how the model actually reasons, and the verbalization might not even be essential to the reasoning at all.
What I found odd about the video is that it presents this as the reason LLMs don't think like humans do. However, I'd say humans can also think without verbalizing, and, in fact, verbalizing a thought process can be difficult in some cases.