einarwh

The AI puppet dance

January 12, 2024

I worry about the wasted effort, the distraction and the opportunity cost brought on by the massive AI hype wave sweeping over society. Instead of solving real problems with known solutions, we dance around to the tune of the trend goblins like so many string puppets. It’s like we’re all participating in a massively distributed presales party arranged by the tech giants.

[An AI-generated image of a robot puppet]

These days, every little agency in every little county is spending their time and money scrambling to come up with an AI strategy instead of addressing the problems on their doorstep, because they’re told that otherwise they’ll be “left behind”. Every little business in every little domain is struggling to find some way to shoehorn something that qualifies as “AI” into their offering, because otherwise they “can’t compete”. They have little choice in this matter. If doing the AI dance is a prerequisite for funding, for attention, for hiring, everyone will do the AI dance. It’s rational given the conditions, but the outcome for society at large remains to be seen.

Most small agencies and businesses are in no position to do any of this in any meaningful way, and so they compensate by arranging webinars, inviting inspiring speakers, and in general being very excited about it all. After all, they’re in with the times! But whether and how any of this will pay off and actually deliver the promised value remains unclear. It’s all hearsay and untested hypotheses. Let’s hope it turns out to be correct, huh? Otherwise we’d all look pretty stupid. All that wasted time and money. And all that talk!

There is little doubt that LLMs will find uses in a variety of domains. They are being jammed in everywhere imaginable, and they will provide value in some of those places. We’ll see what sticks. Unfortunately I see by far the biggest potential in scams and spam, in automated social engineering attacks, in the generation of fake news, in misinformation campaigns, in cheap rip-offs, in intellectual property theft, and in the general pollution of the Internet. But we’ll also use LLMs in positive ways: for summarizing text and distilling notes, for drafting both prose and code, for good-enough translations in many contexts, for good-enough illustrations for our slides (not to mention our blog posts!), and so forth. But these are primarily productivity enhancers and incremental improvements, rather than radical game changers. How much more productive will we be? Ten percent? Twenty? Five? It depends on how much of a bottleneck these tasks pose for us.

But surely we’re just at the doorstep of the AI revolution?

Are we? The AI hype wave is accompanied by an extrapolation narrative that is invariably linear or even exponential (predicated on the idea of a self-improving AGI, which looms in the background as a mythical guarantee of the eventual success of AI efforts). There’s little talk of plateauing. But previous accomplishments are no guarantee of continued accomplishments. A mathematical model that fits historical data does not automatically predict the future. You’d have to be pretty naive about both mathematics and reality to make that assumption. It’s really a STONKS GO UP meme. You draw the curve and say “See? It goes up.”

2023 was about what LLMs can do. 2024 will show us more about what they can’t do. It’s not obvious that the problems currently facing LLMs can be solved by doing more of the same.

LLMs are fundamentally parasitic in nature. You need to feed them the output of human effort. As a result, you’d be pretty stupid to do something that threatens the supply of input data. But of course that’s exactly what many businesses are hoping to do. The biggest gains will come from cutting jobs where humans today perform a task that an LLM can mimic tomorrow. As a business plan, this is sawing off the branch you’re sitting on. It’s not yet a problem, since current models are trained on the entire corpus of human production up until now. It will become a problem going forward, as models run on the fumes of human production and ultimately start consuming their own output, or the output of other models. Inbreeding, rather than the singularity, seems to be the immediate future of AI.

As customers and citizens, we should expect a gradual erosion of quality in many domains as LLMs become ubiquitous. Perhaps we will get used to it, or perhaps it will swing back. Either way, at the moment, we don’t have much choice. It’s coming. Even before LLMs took the world by storm last year, this process was underway in some domains. Blatant errors in closed captions for video content, for instance, have become a new normal. We should expect this process to accelerate and spread to text everywhere, as text written and thus inherently checked by humans is replaced by text generated “for free” by machines and checked by no-one, with oddities and errors sneaking in. I imagine reading texts will ultimately be a bit like eating a sandwich when you know it may have sprinkles of sand in it.

The same goes for illustrations, another LLM showcase. It is certainly true that DALL-E and Midjourney can produce astonishing pictures. I have many times failed to guess whether or not a given picture was generated by AI. At the same time, last year demonstrated very clearly that the reason most of us aren’t artists isn’t that we can’t draw. It’s that we’re not artists. If you’re serious about graphic design, you need a graphic designer. Equipping a random person with an AI prompt is not going to work. If you do, your graphic design will suck. If your competitor retains their graphic designer (or hires yours) they will outcompete you on graphic design.

For me, these examples demonstrate that we should be moderate and realistic about the benefits of the new AI tools, and mindful of the drawbacks they might have for society. We should certainly have our eyes open and our heads switched on. Meanwhile, we continue to dance. I for one wish we could retain a bit more control over our own limbs. We have important problems to focus on, and the AI won’t solve them for us.