by tangent » Fri Apr 11, 2025 3:00 pm
Summarizing info from the following video:
- New research (attribution graphs) shows that LLMs do have internal reasoning, but this is heuristic-based, not cognition.
- LLMs generate each prediction (response) independently, and thus confabulate when asked how they work - they exhibit no self-awareness (see the sketch after the video link below).
- A specific example in the video demonstrates how and why LLMs do not, and cannot, understand math.
[url=https://www.youtube.com/watch?v=-wzOetb-D3w][size=200]New Research Reveals How AI “Thinks” (It Doesn’t) by Sabine Hossenfelder[/size]
[img]https://i.ytimg.com/vi/-wzOetb-D3w/maxresdefault.jpg[/img]
https://www.youtube.com/watch?v=-wzOetb-D3w[/url]
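To make the second bullet concrete: each token an LLM emits is just another next-token prediction over the text so far, with no persistent record of why the earlier tokens were chosen. Here's a minimal sketch of greedy autoregressive decoding (assuming the Hugging Face transformers library and the small gpt2 checkpoint purely for illustration - this isn't anything from the video):
[code]
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# gpt2 is a hypothetical stand-in, chosen only to keep the sketch runnable.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(10):
        logits = model(ids).logits        # one forward pass per step
        next_id = logits[0, -1].argmax()  # each token is a fresh, separate prediction
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
[/code]
Asking the model "why did you say that?" just starts another run of this loop; there is no stored reasoning trace for it to report back, which is why its explanations of its own behavior can't be trusted.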
Word choice really matters. Sabine is using the colloquial definition of consciousness, which bundles in sapience and self-awareness. Consciousness does not require those things, but it does not mean abstract or intelligent thinking either. I'm phrasing that badly because I'm unsure how to say exactly what I mean: thought is very complicated and not well understood, conceptually.
Related: I've read multiple authors writing about AI development who use the word [b]choice[/b] specifically to mean a [b]decision[/b] based on intelligent thought (and self-awareness), while a [b]decision[/b] is a selection based only on data or a model. Computers make [b]decisions[/b]; humans make [b]choices[/b]. I've also encountered one person who, instead of using those words to mark that difference, uses [b]choice[/b] and [b]decision[/b] interchangeably for what humans do, and [b]prediction[/b] for what LLMs do. That convention is probably better, because it causes less confusion and accurately describes what an LLM is doing.