A Theory on Why AI Seems so Human-like
By: Brian Downing | Published on: 09/23/2024
This article discusses a possible reason why the latest artificial intelligence (AI) models, such as ChatGPT, can seem so human-like.
1. Summary
Dr. Stephen Wolfram theorized on the Lex Fridman Podcast #376 that language can be described at a higher level of abstraction, and that large language models (LLMs), by inherently recognizing this higher level of abstraction, can create human-like text.
2. My Initial Theory
As the popularity of ChatGPT skyrocketed, my theory on why ChatGPT seemed so human-like was that a significant amount of human refinement went into training the ChatGPT model (a process called reinforcement learning from human feedback, or RLHF). Under this theory, ChatGPT was trained on questions similar to those users were asking, and thus its responses seemed human-like.
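For readers unfamiliar with RLHF, the sketch below illustrates the preference-ranking step at its core: human labelers compare pairs of model responses, and a reward model is trained to score the preferred response higher. This is a minimal PyTorch sketch with made-up reward scores, not OpenAI's actual training code.

```python
# Illustrative sketch of the preference-ranking loss used to train an
# RLHF reward model (PyTorch). All values are hypothetical stand-ins.
import torch
import torch.nn.functional as F

# Scalar rewards a reward model might assign to two candidate answers
# for the same prompt, where human labelers preferred the first answer.
reward_chosen = torch.tensor([1.7, 0.4, 2.1])     # preferred responses
reward_rejected = torch.tensor([0.9, -0.3, 1.5])  # rejected responses

# Pairwise (Bradley-Terry) loss: push preferred rewards above rejected ones.
loss = -F.logsigmoid(reward_chosen - reward_rejected).mean()
print(loss.item())

# The trained reward model then scores the LLM's outputs, and the LLM is
# fine-tuned (e.g., with PPO) to maximize that learned reward.
```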
As I used ChatGPT and other LLMs, this human-refinement theory seemed less and less probable: the questions I asked seemed to have a low probability of having been asked during reinforcement learning from human feedback.
3. Dr. Wolfram's Theory
On the Lex Fridman Podcast #376 from May 2023, Dr. Wolfram discussed how Aristotle developed the syllogism by looking for patterns in arguments he found persuasive, and how George Boole later formalized such patterns into logic. Both are examples of describing language at a higher level of abstraction than the words themselves.
Dr. Wolfram theorized that LLMs, by inherently recognizing this higher level of abstraction, can create human-like text.
Dr. Fridman summarized Dr. Wolfram's theory as the existence of 'laws of semantic grammar' underlying language.
Since one of AI's strengths is recognizing patterns, if there is in fact a higher level of abstraction in human language, it would make sense that LLMs could learn these patterns and thus seem human-like.
Here is a link to the discussion on Lex Fridman Podcast #376.
4. Bonus Tip from Dr. Wolfram
One of Dr. Wolfram's prompt engineering tactics is to ask, after ChatGPT responds to a question, 'Is the answer correct?' In Dr. Wolfram's experience, ChatGPT will at times say no.
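As a concrete illustration, here is a minimal sketch of this self-check tactic using the OpenAI Python client; the model name and question are assumptions for the example, not part of Dr. Wolfram's remarks.

```python
# Minimal sketch of the self-check tactic: ask a question, then ask the
# model whether its own answer is correct. Model name and question are
# illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [{"role": "user", "content": "What is the integral of x * e^x dx?"}]

# First pass: ask the question.
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
answer = first.choices[0].message.content
print("Answer:", answer)

# Second pass: keep the conversation history and ask the model to check itself.
messages += [
    {"role": "assistant", "content": answer},
    {"role": "user", "content": "Is the answer correct?"},
]
check = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print("Self-check:", check.choices[0].message.content)
```

Keeping the original answer in the conversation history matters here: the self-check question only works if the model can see what it is being asked to verify.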