The other day in the “five quotes” post, I quoted an article about online seminaries adapting to AI. The author encouraged seminaries to blend the human and the machine in online learning. He advised seminaries to double down on what only humans can provide.
You see this sort of argument a lot these days. The basic framework is something like this: “[Fill-in-the-blank technology] may be really great at X, but it can never match a human’s ability to Y.”
However, as technologies continue to improve and mimic the things that supposedly only humans can do, the range of things that fall under Y in that formula continues to shrink.
In Friday’s post, I called this thinking “people-of-the-gaps.” In doing so, I was referencing the well-known fallacy called “God of the gaps,” which papers over gaps in our scientific understanding by simply saying “God did it” or something like that. Dietrich Bonhoeffer famously references the idea in one of his prison letters:
How wrong it is to use God as a stop-gap for the incompleteness of our knowledge. If in fact the frontiers of knowledge are being pushed further and further back (and that is bound to be the case), then God is being pushed back with them, and is therefore continually in retreat. We are to find God in what we know, not in what we don't know.
Tying theological belief to scientific misunderstanding or ignorance is a pretty thin fig leaf for deists to use. The problem is obvious: as soon as the misunderstanding is corrected, the need for God evaporates.
I worry because I see a similar fallacy in our thinking about technology. If humans are only good for what we do better than technological replacements, then as technology advances, humanity recedes. It confounds me that so many people fail to see where the logic of this argument leads.
Oddly enough, almost as soon as I finished writing that post, I read a blog post by Alan Jacobs that covered similar material. He described his teaching style in the face of ChatGPT and other chatbots as a “pedagogy of the gaps.” If a teacher can only assign work that AI can’t yet replicate well, that is, work that still must be done humanly, then the range of assignments will only continue to narrow. The return to handwritten essays does not stem from a pedagogical desire to receive and grade handwritten work, or from some theory that students write better when they write by hand; it is a stopgap measure meant to circumvent AI-enabled cheating. Since the dawn of the Internet (to use a student’s way of phrasing things), students have been able to cheat using web-based tools. ChatGPT and its ilk have only made this far easier, and their sheer convenience has muddied the plagiarism waters. Paying someone in Bangladesh to write your essay or copying and pasting from an online essay aggregator is clearly wrong; asking ChatGPT to write an outline for you, fill in some details, and change a few things here and there is a bit grayer.
Designing a class curriculum around thwarting chatbot cheating is, let’s all agree, not an ideal launching point for a successful course. But this is where many teachers are today. And the gaps will only narrow.
The other part of Jacobs’ post involved a lengthy quote by Leif Weatherby. I had never heard the name before, but his book, Language Machines: Cultural AI and the End of Remainder Humanism, has just migrated to the top of my reading list. In it, Weatherby tackles the increasing marginalization of humans in our AI world.
So, where I have “people-of-the-gaps,” Weatherby has “remainder humanism.” Here is Weatherby’s explication of the term:
Remainder humanism is the term I use for us painting ourselves into a corner theoretically. The operation is just that we say, “machines can do x, but we can do it better or more truly.” This sets up a kind of John-Henry-versus-machine competition that guides the analysis. With ChatGPT’s release, that kind of lazy thinking, which had prevailed since the early days of AI critique, especially as motivated by the influential phenomenological work of Hubert Dreyfus, hit a dead end. If machines can produce smooth, fluent, and chatty language, it causes everyone with a stake in linguistic humanism to freak out.
I like his phrasing there, that in the way we talk about these technologies we paint ourselves into a theoretical corner. Humans are left fighting a constant rearguard action against technology’s steam engine, lest we be entirely overwhelmed.
Weatherby is a linguist by training; he is not a technologist or computer engineer. His interest in AI, then, is not in the technology per se but in the way the technology has framed our understanding of how language works and what language is. In this respect, as you can tell from his title, he believes that AI has crossed the cultural Rubicon; it is no longer merely a language phenomenon. From what I can tell, as one would expect of a linguist, his argument is pretty technical and esoteric.¹
For me, the question I have asked myself since I typed my first prompt into ChatGPT back in December of 2022 has less to do with quality or the Turing test or anything technical like that: my main preoccupation has been what type of world we want. Freddie DeBoer writes often about how badly Silicon Valley needs the AI hype. Outside of the frenzy around LLMs, there just isn’t a lot to get excited about in the tech market (slightly more megapixels on your iPhone’s camera!). So much of the craze is predicated upon the market needing something that promises we are still innovative.
Outside of the economic picture, though, I just don’t see whose life is qualitatively improved by AI. I am not trying to be ignorant. I know some of these programs are good at coding or at drafting the emails you would rather avoid writing, but those are quantitative gains. Has ChatGPT improved anyone’s quality of life (other than OpenAI’s employees, of course)? Has it made anyone a better person?
Google’s Gemini now checks the tone of my emails. It would gladly correct the tone for me if I let it. Perhaps I came off more snippy or brusque than I intended: Gemini can make it lighter and cheerier. But maybe, I don’t know, we should get better at moderating our tone ourselves. To frame this in an explicitly Christian fashion, if I am growing in faith and holiness, I should be relying on the Holy Spirit and not Google Gemini to make my tone more loving.
I guess what I am trying to say is this: things done by humans are not better because they are more technically proficient, more efficient, or more substantial; they are better because they are done by humans.
Allow me to stretch an analogy for a moment. I have seen the musical Les Misérables live twice. The first time was in 2018, when my school did a production; the second was a few months ago in London at the original West End theater. Both were great, but I loved the first one more. And I loved it because I had Javert in class, I watched Enjolras mature from a very annoying freshman into a very charming senior, I read Dante with Eponine, and I convinced two of the boys in the ensemble to join because it would be fun. You get my point. Even though the version at my school was not as technically proficient or as masterfully acted as the one in London, it meant more to me because I knew the people involved and loved them.
Now, everyone acting in both productions of Les Mis was a human being, so the analogy is not perfect. What it does illustrate is that human knowledge complicates our judgment of things. At this moment (literally), I am listening to my oldest child perform “Canon in D” on our home piano. His playing is flawed. But he’s my son. It’s my favorite rendition of that beautiful piece of all time.
The measuring rod for value cannot be abstracted from human concerns. We cannot let technology dominate while we consign humanity to the gaps left (temporarily) unfilled.
¹ To paraphrase briefly, Socrates/Plato thought that all writing was artificial. Written language is always and inevitably one step removed from the primary human communication form: verbal speech. To critique computer-generated language like ChatGPT for being “synthetic” or fake, then, sort of misses the point, according to structuralist theorists of language.