Why We Can’t Have Nice Things
A descriptive piece about how a certain dynamic in society works, and why it makes it harder to design a society that benefits from AI while keeping the downsides under control.
Our new AI technology can do amazing things. Things we wouldn’t have believed possible for another couple decades. We can easily think of many great uses for this technology that could really help humanity.
It’s a shame, then, that it is so often used for things that are not particularly beneficial, or that benefit only a small group of people.
Worse, it’s often used in ways that are detrimental to at least some groups of people. Sometimes this is obvious from the get-go; sometimes it only emerges over time.
As is so often the case, and not just with technology, it’s a few people who ruin something nice for everyone else. Let’s start with something super basic from daily life: say, free cookies at the office. It’s a small thing, not particularly important or expensive, but it improves the mood for everyone. Right up until the point where one or two people start taking advantage of it a bit too much, maybe bringing a whole plate of them home when no one’s looking. Inevitably, even though it’s still not a huge amount of money, there will be no more free cookies.
The problem is of course incentives (you’ll be seeing this term a lot here). The short-term benefits for the people taking advantage of, or abusing, a nice thing are very tempting. A free box of cookies every day! But only for a little while, until you’ve pissed off enough people. And now the nice thing is taken away, or we gotta start putting rules in place, which still kinda ruins the nice thing, and may be more effort than it’s worth. Everyone gets one cookie per day, no more, and it will be handed to you by the cookie master!
I don’t think I have to spell out how this analogy applies to laws and regulations, right?
Let’s move on to a more complex and more insidious dynamic, using an example around self-driving cars. I saw this dynamic explained by the YouTube channel Not Just Bikes in this video, and I think it’s very striking. If you like opinionated videos about traffic design I can recommend the video in full, but the short of it (10:30 to 17:00 in the video) is this: self-driving cars are designed, trained, and optimised for the current road designs of a particular place in the world. If a government, driven by the desire of its people, wants to change the road design to make it safer or friendlier to cyclists and pedestrians, the companies that make self-driving cars will have to retrain, and perhaps redesign, their self-driving technology, which of course costs money. This inevitably leads to rich, powerful tech companies lobbying and otherwise doing everything they can to block any improvements to road design.
In other words, we get stuck in a manner similar to too-big-to-fail, and progress stagnates[1]. The current state gets “locked in”, and to make matters worse, places that are innovative (but in the minority) will be forced to adopt the big-tech-friendly status quo. This is not just a theoretical what-if scenario. This exact dynamic has played out, and continues to play out, in many places and at many different times. For example, in the early days of motor vehicles, car companies invented the term “jaywalking” so they could imply that people hit by cars had only themselves to blame (with accompanying lobbying and advertising campaigns). All so they would not be held accountable for the safety risks their product created, and could avoid spending money on a better design.
Lastly, something a bit more straightforward again: the Trump administration’s recent tariff raises, which were quite likely invented by an LLM. The resulting tariffs were overly simplistic and had a number of nonsensical features. We can keep this short and generalise to a law of human nature: someone, somewhere, will always try to use AI in a stupid way for something important (because the barrier to entry is non-existent).
What can we take away from all this? This is not an argument against AI as a technology; it is simply an observation about human nature. Whatever else we do, we will always have to keep this tendency in mind. We cannot have our cake and eat it too. Actually, the Nicer a Thing is, the more we Can’t Have It, because by and large the incentives for abuse are stronger.
We Can Have It, but only if we make it less Nice than we hoped it would be; that’s the sad reality of it.
[1] A nice hint towards an upcoming post about the relation between innovation and regulation.