If you've used ChatGPT, you've likely had a profound feeling.
Or rather, a flurry of feelings, ranging from shock and awe to wide-eyed, existential terror.
It seems the entire world is caught in a similar storm of emotions, with a polarizing hysteria at the center.
Some mark GPTs as a turning point for human civilization. They say we're on the cusp of AGI (artificial general intelligence) and society will never be the same.
Others think the hype has us over our skis. They'll admit ChatGPT is impressive, but argue the breakthroughs are overstated; it's really just fancy auto-complete benefiting from more data and more compute.
In grappling for the truth (which is likely somewhere in between), all of the age-old AI debates have roared back to life. The questions range from the practical (what will humans do with all this free time?), to the hyperbolic (will AI kill us all?), to the damn near impossible to answer (are we recreating consciousness?).
While the questions aren't new, the debates have a different tone. They are more tangible. More urgent. More...consequential.
But as I listen to experts, I'm left unsatisfied and conflicted.
One minute, I'm convinced we're all doomed and this is quite different from the plow, the Model T, and the internet. The next minute, I have no doubt that such fear is hogwash. This is a mere human augmentation. We're now destined for greatness in all that we do, productivity and growth will skyrocket, the newfound wealth will all trickle down, and we'll all live happily ever after.
The seesaw is exhausting. And it doesn't help when even the experts' convictions are slippery at best…
So, what is our destiny amidst the age of intelligent machines? Is there a satisfying foothold within these debates?
Amidst such confusion, I turn to Occam's razor: the principle that the simplest explanation is usually the best one.
Sure, simplicity might not exist within a topic of such extreme complexity and depth. But I recently came across an idea that strikes a chord.
At first glance, it appears too simple. Yet when you sit with it, something about it feels right. Or at least, directionally correct.
The idea I came across isn't about AI. It's not even from someone alive today to witness this shift.
It's also counterintuitive. But like all great paradoxes, it holds poignancy and truth.
The idea pertains to the creative process, and it came from this 1974 interview with Ray Bradbury, an author and poet best known for his novel Fahrenheit 451. It's worth watching the whole interview for yourself, but I'll try to summarize here (with some edits for clarity).
Bradbury says, "the intellect is a great danger for creativity."
The interviewer is taken aback. He challenges him and says, "Really? The intellect is a danger to creativity?"
"A terrible danger." Bradbury says. "Because you begin to rationalize and make up reasons for things instead of staying with your own basic truth: who you are, what you believe, what you want to be."
On many levels this makes sense. More often than not, humans aren't great at thinking and rationalizing. We're riddled with bias and all kinds of subconscious programming.
Thinking, or should I say, overthinking, can get us into all kinds of trouble. It's the source of our anxiety, our depression, and the fears that keep us from doing what we really want with our lives, and from becoming who we really want to be. It's the source of internal narratives that are, all too often, simply not true.
To protect his own basic truth, Bradbury keeps a sign over his typewriter that says, "Don't Think".
He says, "You must never think at the typewriter. You must feel. Your intellect is always buried in that feeling anyway. Sure, you collect a lot of data, and you do a lot of thinking away from your typewriter... but at the typewriter you must be living. Creation should be a lived experience."
It’s evident robots will kick our ass at thinking. We're going to offload the brunt of thinking to algorithms, collecting and analyzing oceans of data, and producing insights & learnings in the blink of an eye.
But I can't help but wonder... is this a bad thing? Won't this free up our time & energy, and allow us to get straight to the metaphorical typewriter to do what we do best: feel?
Feeling is our superpower. It's how we produce happiness and fulfillment. At our core, everything that we do is in search of a feeling. The feeling of fun, of being in flow, of being in love, of being connected, of being in awe. Feeling is what it means to be alive.
And when you study how large language models work, I think you'll find some solace. You'll quickly realize that feeling is something these machines will never be able to do. It's a neurochemical and physiological phenomenon, based upon some sort of biological intelligence that remains a mystery in many ways (e.g. is our gut and microbiome our real source of intelligence?).
Prominent author and thinker Charles Eisenstein states this eloquently in his recent article on AI. It's an extremely thought-provoking, three-way dialogue between Eisenstein, a modern-day philosopher/shaman named Freely, and lawyer/writer Tam Hunt. The article is definitely worth the read, but I'll provide some highlights.
Eisenstein says, “...even a single neuron is more complex than the largest artificial neural network. The brain functions holistically in a way ANNs (artificial neural networks) do not. Besides nodes and states, a brain generates electromagnetic fields that encode information through transient and meta-stable structures that feed back into the neurochemistry. This speaks to a kind of irreducibility of intelligence, the same that (John) Searle is striving to establish. While his logic is flawed, his point has merit – intelligence is more than the mechanical execution of a set of instructions converting a set of input bits to a set of output bits."
In other words, human-grade intelligence can't be reduced to a feed-forward pipeline, i.e. a one-way flow of 'if this, then that' computations. True AGI will likely require more non-linear, bi-directional, and perhaps even 'quantum' forms of data processing, with a real-time, closed-loop connection to the real world. While this end-state is certainly possible, we have a LONG way to go.
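To make that contrast concrete, here's a toy sketch (my own illustration, not from the essay or the article it quotes) of the one-way flow being described: a single feed-forward pass through a tiny two-layer network. Information moves strictly from input to hidden layer to output, and nothing feeds back into the system afterwards. All weights and names here are made up for illustration.

```python
import math

def sigmoid(x):
    # A simple nonlinearity that squashes any number into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def feed_forward(inputs, w_hidden, w_output):
    # Hidden layer: each neuron takes a weighted sum of the inputs
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
              for row in w_hidden]
    # Output layer: a weighted sum of the hidden activations.
    # Note the strictly one-way flow: input -> hidden -> output,
    # with no loop feeding the output back into the network.
    return [sigmoid(sum(w * h for w, h in zip(row, hidden)))
            for row in w_output]

# Fixed, arbitrary example weights. Unlike a brain, nothing about
# the network's internal state changes as a result of this pass.
out = feed_forward([0.5, -1.0],
                   w_hidden=[[0.1, 0.4], [-0.3, 0.2]],
                   w_output=[[0.7, -0.5]])
print(out)
```

Real LLMs are vastly larger and use different architectures, but the basic point stands: a forward pass is a fixed, one-directional transformation of input bits into output bits, which is exactly the reduction being questioned above.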
As for human intuition, cultivating it is no easy task; the art of feeling clearly and feeling well feels increasingly difficult in modern times. Technology inundates us with waves of information and shiny objects; a deluge of distraction 24/7 and a muting of our senses, and in turn, our sense of self.
I don't know about you, but I often feel like I'm developing adult ADD. I yearn for better filters and something to offload this information to; something to help take back my attention and stop me from reaching for my phone, which all too often clouds my thinking to harmful effect.
Bradbury hints at this, saying, "The worst thing you can do when you think is lie. You can make up reasons that are not true for the things that you did. And what you're trying to do as a creative person is to surprise yourself and to find out who you really are. And to try not to lie in the process, to try to tell the truth all the time."
If we're spending more time at the typewriter, not thinking, but feeling, that's more energy and time going towards self-discovery and personal truth. That's less time lying to ourselves.
Self-discovery and personal truth; we all know these are critical pursuits. But how much time and effort do most of us really dedicate to this? Perhaps herein lies the greatest problem to be solved with AI. Especially early on in our lives when tweaks to our trajectory matter most, such as K-12 education, or even university.
Many people get into a tizzy about AI's impact on education. They're worried our kids won't be able to learn and think for themselves.
I don't think this is the case... far from it.
I think AI is going to spark the forest fire our education system needs, burning down an entrenched status quo that relies too much on (useless forms of) thinking, and not enough on feeling, and helping kids understand truth.
Thinking for oneself starts with feeling. Feeling is best uncovered through some sort of creative process. This will demand not less writing/creating, but more. Finally, kids won't be forced to write about useless topics they'll never rely on again. Rather, they'll be free to write about topics that force them to dig deep within themselves; to surprise themselves, to not lie to themselves, and to find out who they really are.
With internal lies abolished, AI can home in on our kids' superpowers. Imagine an AI 'teacher' personalized for each student, finely tuned to their temperament, their interests, their strengths & weaknesses. The bottom half of the class no longer has to get left behind... And the top 1% no longer has to conform and wait for their cohort to catch up.
More talent will be discovered and placed on better career paths, boosting societal productivity and satisfaction along the way.
Bradbury goes on to discuss how kids should be learning, and how to strike the right balance between thinking and feeling.
He says, "The only way to [not lie to yourself] is by being very active and very emotional. To get it out of yourself.... make lists of the things you love and the things you hate, and write about these things intensely. And when it’s over, then you can think about it, and think about if it works or if it doesn't work; you can look and say 'something is missing here or there'. And then if something is missing, you go back and re-emotionalize that... Thinking should be a corrective force in our lives, not the center of our lives. Living is the center. Feeling and being is the center... with correctives all around us, holding us like the skin holds in our blood and our flesh... But the skin isn't our way of life. Our way of living is the blood pumping through our veins, the ability to sense, and to feel, and to know.... and the intellect doesn't really help us very much there.... we should get on with the feeling of living...."
Perhaps AI can be the ultimate 'corrective'; the intelligent 'skin' protecting and projecting what lies within.
To put a contextual wrapper around this idea, I turn again to Charles Eisenstein, who summarizes the essence of this essay better than I ever could.
“Already a large proportion of student papers are being written by Chat GPT. The purpose of writing a paper is not just the product, it is also the process. What is lost when we surrender the process to a machine? That is an urgent question, yet on the bright side we may ask another: What might be gained? What new directions might we direct human intelligence toward?
If I may make a vague prediction, it will be always and ever toward those things that elude quantification. Traditionally, science has told us that anything that is real is quantifiable, and will one day succumb to its onward march. Science may be wrong in that foundational metaphysical postulate. Quantity can only simulate quality; it can never reach it. That will become more obvious, not less, as the latest extension of quantitative intelligence that we call AI, despite its wonders, fails as did its predecessors to solve the real problems of the human condition. The most significant positive effect of AI, then, may lie not in its capabilities but, paradoxically, in its limitations.”
Finding a silver lining within the limits of AI isn't easy. The easy reaction is to think the metaphorical typewriter has been ripped away, and even creativity is lost to AI.
This isn't an irrational thought. From one perspective, it doesn't matter what LLMs CAN'T do. All that matters is what they CAN do; which already exceeds the capacity of most humans, including in the realm of creativity.
This is creating a powerful hum of anxiety and fear. Just check out this poem from ChatGPT, based on the Eisenstein paragraph above...
In Chat GPT's embrace, students find their aid,
But in the process, are we lost or saved?
A question lingers, urgent and profound,
As human minds traverse new battlegrounds.
What gains await us, as machines advance,
And human thoughts take on a different stance?
To realms untapped, our intellect aspires,
Where quantity can't grasp what heart desires.
For science claims, all truths it shall unfold,
Yet mysteries persist, untamed, untold.
Mere numbers fail to touch the soul's embrace,
In AI's limits, we might find our grace.
As wonders rise, and algorithms soar,
The human essence seeks a distant shore.
In AI's grasp, we find a paradox,
A chance to learn, and heal the deepest cracks.
- by ChatGPT
I mean.... WHAT?!
This came spitting out within seconds and it about knocked me out of my chair.
As a writer, I should be fearful indeed. Especially in light of the current timescale. We're just at the embryonic stages! This stuff has only been in the wild for the last 3-6 months... What's going to happen over the next 3-6 years? Heck, what about 10-20?!
The common answer is a superintelligence. One that will render humans useless and replace us altogether. The kind of extinction event often invoked to explain the Fermi paradox (i.e. the reason we haven't found intelligent life is because it destroys itself at a sufficient level of sophistication, e.g. via a superintelligent AI).
Such an outcome is unpredictable. But one thing is certain: what humans do, and why they exist, will be completely reinvented. Likely within most of our lifetimes.
Let that reality sink in for a second... What a time to be alive!
Sure, the uncertainty is scary. But I find solace in this notion of 'feeling', and I hope you can as well.
Remember... while large language models produce a form of intelligence, only a fragment of human-grade intelligence and interaction comes from language.
As we've alluded to in this essay, intelligence is embodied and present within our physical form. If we believe this to be true, then we should believe that the ultimate 'superintelligence' can only come from the marriage of human intuition and machine intellect.
A marriage, that just like all great relationships, could be our best shot at fulfilling our ultimate promise and potential.
Or, as Freely more eloquently says in the Eisenstein article, "Artificial Intelligence is unraveling who we thought we were, what we thought technology was, what we thought life was. In the process we discover our true nature. As our technology comes alive, so too do we. We shed the layers we had taken on along the way until finally we find ourselves naked. And in primordial innocence we eat the fruits of the tree of life, cultivating the garden of our hearts in love and beauty.”
Thanks for reading! If you enjoyed, please share with a friend or two :-) And if you’re new…Subscribe for free below to receive new posts and support my work.