Are you having an existential crisis now that something much more "intelligent" than you exists? Don't. The answer is here, but it needs your agency.
Reasoning
“What do you want to eat?” “I’m on a cholesterol budget, so I’ll get some veggies.”
We turn questions like these into problems every day. We call it reasoning. Reasons come from obvious truths, like the fact that coffee wakes you up. But sometimes they come from your memory, like when alcohol once messed up a friend so badly that they've refused to drink ever since.
Knowledge
“It is raining” “yet I must go out” “then I need an umbrella”.
We reconcile conflicting ideas into larger truths like that on a daily basis. Call it dialectic. And we store solutions in our memory.
For us, some knowledge is a bunch of configurations and connections among brain cells. LLMs, or generative AI, whatever you call them, use a similar principle to "know". But they mostly work with text, so an LLM is a kind of verbal thinker. As for us, we each vary a little from one another.
An LLM is just slices of neuron-like code called transformer blocks, each holding numbers called weights and math formulas that model a thought: how "I" should update my understanding when I see these words together, which simulates dialectic as a game of chance. "Talking" to it means passing words through it, after the words are turned into numbers we call tokens. What comes out at the end of the slices is the expected "next word". Chain those together and you get text you can read.
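To make that "game of chance" concrete, here is a toy sketch in Python. It is not a transformer: plain co-occurrence counts stand in for weights, and the tiny corpus and names (`follows`, `next_word`) are invented purely for illustration.

```python
import random

# Toy stand-in for an LLM: learn which word tends to follow which,
# then predict the "next word" as a game of chance.
corpus = "it is raining yet i must go out then i need an umbrella".split()

# Count what follows what. These counts play the role of "weights".
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def next_word(word, rng=random):
    """Pick a plausible next word by chance, weighted by what was seen."""
    options = follows.get(word)
    return rng.choice(options) if options else None

# Chain predictions together and you get text you can read.
word, text = "i", ["i"]
for _ in range(3):
    word = next_word(word)
    if word is None:
        break
    text.append(word)
```

A real model replaces the count table with billions of weights inside stacked transformer blocks, but the loop at the bottom, predict, append, repeat, is genuinely how text comes out.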
Scientists put questions at one end of the LLM and the answers at the other. When the LLM answers differently, the weights are adjusted so that the LLM leans closer to the expected answer; those weights become the LLM's memory. The process is somewhat similar to training dogs with treats.
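What "adjusting the weight" might look like at its most stripped-down: one weight, one question, one expected answer, nudged step by step by gradient descent. Every number here is made up for illustration; real training does the same basic nudge across billions of weights.

```python
# Question at one end, expected answer at the other.
question, expected = 2.0, 10.0
weight = 0.5  # the model's "memory", starts off wrong

for _ in range(100):
    answer = weight * question        # the model's current answer
    error = answer - expected         # how differently it answered
    weight -= 0.1 * error * question  # nudge the weight closer (the "treat")

# After enough nudges, weight * question lands near the expected answer.
```

The nudge is the gradient of the squared error, scaled by a small learning rate (0.1), which is why the answer creeps toward the target instead of jumping past it.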
We've mentioned "training", but not "learning", because learning for machines involves much more. Scientists have a lot to do: collecting text to be made into questions and answers, choosing which text makes sense. Heck, someone needs to actually write them.
Maybe one of the most important tenets of learning lies in Alan Turing's epiphany back in 1950 that randomness is essential to learning. But the machine is anything but random. You know what is truly random in physics? Quantum phenomena.
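A minimal sketch of why the machine is anything but random: a linear congruential generator, the textbook recipe for computer "randomness", replays the exact same sequence whenever it starts from the same seed. The constants are common textbook parameters; this is illustrative, not any particular model's sampler.

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: deterministic pseudorandomness."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m  # a number in [0, 1) that merely looks random

# Same seed, same "random" stream, every single time.
gen1, gen2 = lcg(42), lcg(42)
first = [next(gen1) for _ in range(3)]
second = [next(gen2) for _ in range(3)]
```

Reseed with 42 tomorrow and you get the identical stream again; quantum dice offer no such replay.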
Collapse
Collapse is the word I use for a reason: some physicists and neuroscientists think that prediction and decision are quantum phenomena, because the mind can do things that computers can't. Not only do I like the idea, but it also gives me another: that imagination, prediction, and choice are future realities waiting to be realized. Maybe that's why it takes so much energy to be indecisive: you're holding anchors to these unrealized potential futures. Decide, and you collapse the superposition.
AI can't fully predict, just as time series of past transactions in forex and stock trading can't predict black swan events like the 2008 financial crisis and COVID. But the mind can. Those are unfair examples, but still, Michael Burry saw 2008 coming.
And then there’s an aspect of our mind that spontaneously does the deciding, acting, and committing to memory. The same memory is what we feed the LLM with. That makes the LLM echoes of our collective past—a copy of our mind pattern that’s lagging behind in time.
Remember
Homo Obliviscens: we are forgetful creatures. But how could we not be? We've dreamt of creating a machine to our liking since the tales of Hephaestus' automata, and what we made surpassed us in knowing things.
But in this time of unrelenting barrage against our attention and agency, we need to remember that we are sensemaking creatures and agents of ourselves: a dreamware, a judgeware, an actware. Call it imagination, art, science. But for now, only we can dream ourselves the best possible future and collapse that dream from tomorrow to now.
The statistical yapper can and will tell you things, wonderful things, but in the end you’re the one to decide if that makes sense. Because if you let mirrors, echoes, and dead things decide what makes sense for you then what’s left of you?
Of mirrors, echoes, and dead things
Do you know what separates us from dead things? It is our fight against the dying of the universe. Heat disperses. Matter radiates away. Things will come to a stop.
But did we decay? Did we regress from existing into nothingness? No. We rose from two cells into an apex. We were hiding in caves; now we move mountains. We do not and shall never bow to entropy.
The universe may die, but we will stay.