>>105829701
Even if I humor your claim that LLMs and human brains are "just deterministic I/O machines," you're still wildly off the mark. The differences are not minor, they're fundamental:
- Humans have a continuous stream of consciousness: a persistent, self-referential internal loop. LLMs only generate output when prompted; no prompt, no thought. A human locked in a sensory deprivation chamber will go insane. You can test this yourself: sit in a dark, silent room long enough and you'll start hallucinating.
- The human brain runs on ~20–30 watts of power. A state-of-the-art LLM like GPT-4, when running inference at scale, can require thousands of watts across multiple GPUs. The energy efficiency gap is astronomical.
- Biological neurons fire on millisecond timescales and operate massively in parallel. Transformers run on discrete, clocked hardware that generates tokens serially, with latencies on the order of milliseconds per token at best, and they lack any dynamic real-time interaction with the world.
- Human brains are estimated to perform on the order of 10^16–10^18 operations per second, far beyond the total throughput of any current LLM or the supercomputer running it. And that's before you even get to FLOPS/watt efficiency (rough back-of-the-envelope numbers at the bottom of this post).
- Brains have integrated, hierarchical memory systems: working memory, episodic memory, long-term memory, all tightly coupled to sensorimotor experience. LLMs have no memory unless it's externally bolted on and curated by the HUMAN user, and even then it isn't learned or managed the way biological memory is. What we call "memory" is just context stuffed back into the prompt by the user, and it disappears between sessions. LLMs are stateless (see the sketch at the bottom of this post).
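To put rough numbers on the power and throughput points above, here's a back-of-the-envelope sketch in Python. It only reuses the ballpark figures already in this post (20–30 W and ~10^16–10^18 ops/s for the brain); the multi-GPU node's wattage and effective throughput are my own assumptions for illustration, not measurements.

# Back-of-the-envelope comparison using the ballpark figures above.
# All numbers are rough assumptions for illustration, not measurements.

brain_watts = 25.0            # ~20-30 W for the human brain
brain_ops_per_sec = 1e17      # ~10^16-10^18 ops/s (middle of the estimate)

gpu_node_watts = 5000.0       # a few kW for a multi-GPU inference node (assumed)
gpu_node_ops_per_sec = 1e15   # rough effective throughput of that node (assumed)

power_gap = gpu_node_watts / brain_watts
ops_per_watt_brain = brain_ops_per_sec / brain_watts
ops_per_watt_gpu = gpu_node_ops_per_sec / gpu_node_watts

print(f"power draw gap:    ~{power_gap:.0f}x more watts for the GPU node")
print(f"brain ops/watt:    ~{ops_per_watt_brain:.1e}")
print(f"GPU node ops/watt: ~{ops_per_watt_gpu:.1e}")
print(f"efficiency gap:    ~{ops_per_watt_brain / ops_per_watt_gpu:.0f}x in the brain's favor")

Swap in whatever GPU figures you like; the ratio stays lopsided by orders of magnitude.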
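And on the "no prompt, no thought" / statelessness point, a minimal sketch of how chat "memory" actually works against a stateless completion API: the client keeps a plain list of messages and re-sends the whole thing every turn. call_model() here is a hypothetical stand-in for whatever inference endpoint you'd hit, not any specific library.

# Minimal sketch of "memory" with a stateless LLM API.
# call_model() is a hypothetical stand-in for a real completion endpoint:
# it takes the full message history and returns one reply, holding no state.

def call_model(messages: list[dict]) -> str:
    # In reality this would send `messages` to an inference server.
    # The key point: everything the model "remembers" is in this argument.
    return f"(reply generated from {len(messages)} messages of context)"

history = [{"role": "system", "content": "You are a helpful assistant."}]

for user_turn in ["My name is Anon.", "What is my name?"]:
    history.append({"role": "user", "content": user_turn})
    reply = call_model(history)   # the entire history is re-sent every turn
    history.append({"role": "assistant", "content": reply})
    print(user_turn, "->", reply)

history.clear()   # drop the list (or end the session) and the "memory" is gone

Nothing is ever stored model-side; if the user or a wrapper doesn't feed the context back in, the next call starts from zero.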