LLMentalist Effect

The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic's con. Baldur Bjarnason, July 4th, 2023.

LLMs are not brains and do not meaningfully share any of the mechanisms that animals or people use to reason or think.

LLMs are a mathematical model of language tokens. You give an LLM text, and it will give you a mathematically plausible response to that text.
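To make "mathematically plausible" concrete, here is a toy sketch of what that generation loop looks like. This is not a real LLM: the vocabulary, the `next_token_logits` function, and its fake scores are all invented for illustration. The only point is the shape of the process, which is repeatedly sampling the next token from a probability distribution.

```python
import math
import random

# Toy vocabulary; a real model has tens of thousands of tokens.
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def next_token_logits(context: list[str]) -> list[float]:
    # A real model computes these scores with billions of learned
    # parameters; here we fake them so the example is runnable.
    random.seed(len(context))  # deterministic toy scores
    return [random.uniform(-2, 2) for _ in VOCAB]

def softmax(logits: list[float]) -> list[float]:
    # Turn raw scores into a probability distribution.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate(prompt: list[str], max_tokens: int = 5) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_tokens):
        probs = softmax(next_token_logits(tokens))
        # Sample the next token in proportion to its probability:
        # "mathematically plausible", nothing more.
        tokens.append(random.choices(VOCAB, weights=probs)[0])
    return tokens

print(" ".join(generate(["the", "cat"])))
```

Everything a chat-based LLM produces comes out of a loop with this shape; there is no separate reasoning step, only the distribution and the sample.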

There is no reason to believe that it thinks or reasons—indeed, every AI researcher and vendor to date has repeatedly emphasised that these models don’t think.

There are two possible explanations for this effect:

1. The tech industry has accidentally invented the initial stages of a completely new kind of mind, based on completely unknown principles, using completely unknown processes that have no parallel in the biological world.

2. The intelligence illusion is in the mind of the user and not in the LLM itself.

I now believe that there is even less intelligence and reasoning in these LLMs than I thought before.

Many of the proposed use cases now look like borderline fraudulent pseudoscience to me.


We might be interested in the book by the same author: Out of the Software Crisis: Systems-Thinking for Software Projects.