I’m often asked by colleges to give a version of a talk on how I became a writer. The easy thing to do is to offer a kind of guided tour through the woods of literary self-formation: a string of anecdotes designed to elicit a few chuckles, a moment or two of reflection about the inevitable bends in the road, things that felt momentous but turned out not to matter, or things that didn’t seem significant at the time but with hindsight turned out to be the most important of all.
Typically, these tours end in the same place: The writer has found a path through the wilderness, and discovered a voice along the way. Voice is what leads us out of the woods.
The trouble, at least for me, is that this kind of talk is mostly fiction; the path is only a path in retrospect. Telling the story this way elides, smooths over, and underestimates the role of circumstance and dumb luck. Most of what a writer experiences is failure. Developing a voice takes years. The point is not to make it out of the woods quickly or unscathed. Getting lost isn’t the hard part. It’s the whole thing.
Now along comes AI, purporting to be our GPS through the woods. Not just any guide: tireless, fearless, knows all the shortcuts. AI obviates the need to enter the woods in the first place. Why face the blank page and the blinking cursor? Why struggle to understand what you mean and how to articulate it? Why listen to your own croaky, warbly voice when you can push the button for fluid, facile, polished language, available anytime, on any subject? Voice on demand.
When I speak to high-school and college students (including my own kids), I worry that at the time when they should be developing their own voices, they’re being told they don’t need to bother. AI writes for us, reads for us, thinks for us. It replaces our voice with its own.
Except that AI doesn’t have a voice. It’s lip-syncing ours. It’s an average, a remix. Originally, the large language models had no ingredients other than our human language. Without the natural voice, there could never have been an artificial one. But if we become content to substitute AI-generated language for our own, we end up in a closed loop in which the same outputs are recycled back as inputs.
What I fear is that we’re losing the ability to tell the difference between our voice and the machines’. Or worse, losing the will to argue that there is one.
And it is an argument. Those who are most bullish on machine learning argue that artificial general intelligence, or AGI—artificial-intelligence models that match or surpass human cognitive capabilities on any task—is imminent, just two or three years away. Some say 10 years, or more. It’s a rolling target, always just over the horizon. But whatever the timeline, the idea is that all of our “cognitive work” will soon be automated. They believe this is possible because they believe that the language we produce is fungible with that generated by LLMs.
I’m not interested in predictions or timelines, or in who is right or wrong and by how much. I’m no AI expert, nor am I even an AI amateur. I’m not a neuroscientist or a cognitive scientist or any kind of scientist at all. What I am is a parent of teenagers, a human, a reader, and a writer, in roughly that order. What I’m struggling with, like many others, is how to think about AI, and what it means for work, school, and life—and how to talk about all of that with my kids (who surely have far more insight into AI than I do).
What I’m most interested in is the “I” in AGI. What does it actually mean? And why have we let a small number of wealthy businesspeople define it?
Sam Altman, the CEO of OpenAI, promised that engaging with ChatGPT-5 would be like talking “to a legitimate Ph.D.-level expert in anything.” I can’t stop thinking about how revealing—and strange—that definition of intelligence is.
Don’t get me wrong. It’s incredible that we’re even having this conversation. I don’t want to minimize the distance the technology has traveled, the speed with which it has done so, or how far it might still go. What I do want to do is ask a question: How can we create intelligence when we don’t fully understand—can’t even really define—what intelligence is?
Back to Altman’s formulation: General intelligence means being a Ph.D.-level expert in anything. Such expertise is no doubt impressive, and certainly related to, or even a component of, intelligence, however defined. But it’s just one small part of intelligence. My alma mater, UC Berkeley, offers doctoral programs in 94 fields of study. Presumably AGI will cover all of those.
But the attainment of a degree doesn’t cover, doesn’t even purport to touch, emotional intelligence. What’s a Ph.D. in reading the room? In teaching your kid to ride a bike? In crying because you were moved by a piece of music? We consider elephants intelligent because they mourn their dead. What’s a Ph.D. in grief, awe, wonder, curiosity?
Perhaps no one should be surprised that some of the world’s best scientists and engineers have defined intelligence the way they have. Even if the AGI champions’ motives were purely altruistic, they would still be biased by their own way of seeing the world, by their own experiences and successes. Researchers at the forefront of AI are among the most brilliant and accomplished minds on Earth—and they make up a very narrow, self-selected group of people primed to understand certain kinds of knowledge better than others: explicit, well-defined, tokenizable knowledge; knowledge that forms the basis of our most far-reaching, wildly accurate theories of the universe; knowledge that has allowed us to create world-changing technologies. But that’s only a small subset of all knowledge—the sliver that can be expressed symbolically, as language or mathematics.
The rest is what the philosopher Michael Polanyi called “tacit knowledge,” which makes up a much larger amount of knowledge, and interacts in many more ways. His philosophy of knowledge can be summed up by: “We know more than we can tell.”
Is that part of AGI? I don’t believe so. I won’t believe it until ChatGPT texts me a link to a video that made it laugh or cry or rethink its opinions on that thing we were talking about the last time we spoke.
Until it does, I’d argue that the “I” these engineers are chasing is a proxy—or even a misnomer. It’s nothing like intelligence as we understand it.
You could say this argument is flawed, based on an anthropocentric view of intelligence. Maybe we have to let go of preconceptions and embrace the idea that machine intelligence can—and perhaps must—be radically different from human intelligence. Maybe machine intelligence doesn’t require sentience, or autonomy, or curiosity, or feeling.
Say I concede all that. What I’m arguing is that, whatever the machines can do—as incredible and useful and potentially economically valuable as their capabilities may be—none of it deserves the word intelligence.
A few outliers aside, even the most enthusiastic proponents of AGI don’t believe that the frontier AI models are capable of feeling. Which means they must assume that intelligence can be decoupled from embodiment and emotion. They’re saying: We understand what intelligence is, in its distilled and isolated form.
To which I would say: Please share that definition with the rest of us.
If they’re right, we’ll know soon enough.
But if they’re wrong, the relentless pursuit of AGI poses real risks: to social policy, to education, to our power grid, to the economy, to the environment. Already, generative AI feels like supply in search of demand. The need to scale up, plus the ever-present pressure to seek higher rates of return, have combined to create a mind-boggling movement of capital and societal resources into one industry. Generative AI is the tech equivalent of high-fructose corn syrup: a possibly useful ingredient that’s now being inserted into much of what we consume, without our consent.
But perhaps just as important are the potential harms to our own self-conception, both as individuals and as a species.
AI will continue to improve. It might change the world; arguably, it already has. But for now—and perhaps always—it’s no substitute for the human voice.
Voice is what we use to communicate with one another. Voice is the sound we make as we navigate the unknown—our echolocation, mapping the world, trying to place ourselves within it. Voice encodes experience, loss, pain, joy. We don’t acquire voice in spite of failure, but through it. Because of it.
AI doesn’t have a voice, and it’s not communicating with us. Not really. It answers our questions. That’s what it was built to do. It’s an answer machine. But we are question machines. Questions are essential to intelligence. Without them, we’re static, stagnant. Without them, we don’t evolve. We can learn answers, but only by asking questions. Questions are how we recursively self-improve. We humans are constantly prompting one another in endlessly creative ways. We prompt. We answer. Our answers become new prompts. Our context windows are our lifetimes; our tokens are uncountable.
This is about more than semantics. By calling what AI can do “intelligence,” we’re conflating a technological capability with a human attribute. We’re dumbing ourselves down—not by talking to AI but by measuring ourselves against it. The danger isn’t that we’re overestimating AI. It’s that we’re underestimating ourselves.
This essay was adapted from Charles Yu’s 2026 Joel Conarroe Lecture, given at Davidson College on February 10.
