Researchers at Apple have come out with a new paper showing that large language models can’t reason — they’re just pattern-matching machines. [arXiv, PDF] This shouldn’t be news to anyone here. We …
Well, two responses I have seen to the claim that LLMs are not reasoning are:

1. "we are all just stochastic parrots lmao"
2. "maybe intelligence is an emergent ability that will show up eventually" (disregard the inability to falsify this, and the categorical nonsense that is our definition of "emergent").
So I think this research is useful as a response to these, although I think “fuck off, promptfondler” is pretty good too.
Well, are we not stochastic parrots then? Isn't that an equally philosophical, rhetorical, and unfalsifiable question?
No, there's an actual paper where that term originated that goes into great detail explaining what it means and what it applies to. It answers those questions and addresses the potential objections people might raise.

There's no need for, and frankly nothing interesting about, "but what is truth, really?" vibes-based takes on the term.
no
Hark! I hear the wanker roar.
fuck off, promptfondler