Introduction
This might be a long post, but! It’s not just the main premise for the rest of my work, it’s also the transcript of the best and most popular talk I’ve ever created. There are 2 parts, which you can read independently from each other if you like.
Let me first tell you where I come from. I was an astrophysicist first. My PhD was in cosmology - research on the nature of the universe, you know, only slightly less mysterious than AI. I then went on - like everyone else - to be a Data Scientist and Machine Learning Engineer for various companies. I've always stuck my nose into the organisational and responsible-AI aspects of data science, and that eventually led me to working full time in AI Governance.
The theme of my career is, of course, 'data' and how to treat it in order to arrive at correct outcomes. By "correct" I mean outcomes that yield knowledge and value, but also safety and benefit for people and society.
I’ve been doing all that for years. That includes giving talks about ethics in AI. This is what I would usually talk about:
I would start by showing how much we use AI in our daily lives (Spotify, Netflix, Facebook, self-driving cars, and so on).
Then I would show many examples of AI gone (very) wrong: racist algorithms, sexist algorithms, people wrongly accused by police, citizens wrongfully denied government support, and so on.
I would talk about how you can try to detect and quantify discrimination (bias), and that there are straightforward metrics for this (a small illustrative sketch follows below). But it turns out there are many different definitions of fairness, and they exclude one another: you can prove mathematically that, except in trivial cases, it is impossible to satisfy more than 2 of them at the same time.
Next up, I would mention that AI algorithms are not explainable. There are techniques that help, but just as models are always wrong yet sometimes useful (a quote by George Box), this is doubly true for explanations of models, because by definition an explanation leaves out part of the information. That can be fine, provided you carefully determine which information can be left out given the goal of your explanation.
Basically, that leads me to my 2 conclusions about what is needed to create good AI. Firstly, there is Design Thinking. When designing a product you have to look at the entire system: from the data and the way it was gathered, to the way you embed the algorithm in the socio-technical system - in other words, how people and processes interact with it, and how it ultimately influences the real world and real people. Secondly, there is 'ethical particularism', by which I simply mean that you have to look at the details of every individual use case; to make a system safe, fair and reliable, you have to make bespoke decisions. Every case is different.
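To make those "straightforward metrics" a bit more concrete, here is a minimal, purely illustrative sketch (toy numbers, not from any real case) of three common fairness criteria computed per group. The group names and values are made up; the point is only to show that the criteria generally pull in different directions when base rates differ.

```python
# Purely illustrative sketch with made-up toy numbers (not from any real case):
# three common fairness metrics, computed per group.
import numpy as np

def group_metrics(y_true, y_pred):
    """Selection rate, true-positive rate and precision for one group."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    selection_rate = y_pred.mean()           # P(predicted positive)
    tpr = y_pred[y_true == 1].mean()         # P(predicted positive | actually positive)
    precision = y_true[y_pred == 1].mean()   # P(actually positive | predicted positive)
    return selection_rate, tpr, precision

# Two groups with different base rates of the true outcome.
groups = {
    "A": ([1, 1, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0]),
    "B": ([1, 0, 0, 0, 0, 0], [1, 1, 0, 0, 0, 0]),
}
for name, (y_true, y_pred) in groups.items():
    sel, tpr, prec = group_metrics(y_true, y_pred)
    print(f"group {name}: selection rate {sel:.2f}, TPR {tpr:.2f}, precision {prec:.2f}")

# Demographic parity asks for equal selection rates, equal opportunity for equal TPRs,
# predictive parity for equal precision. With different base rates, equalising one of
# these generally breaks the others - the impossibility result in miniature.
```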
Well. That was my usual talk summarized in 3 minutes. Nowadays, you can wear a suit while telling this story and make money and sell products with it. That is to say, it's mostly consensus by now. Yes, that is a good thing.
Part 1
A New Age
However, since ChatGPT we live in a new age, and I am starting to wonder if all of the above is just a sideshow. It is all still true, and essential to creating beneficial AI products, but it is no longer the 'big thing'.
All these new AI things like ChatGPT - we call them Generative AI, or GenAI for short - are powered by large language models, or LLMs. And LLMs are really weird. You may have noticed this yourself. We talk to them in human language, and that is also the only way for us to do anything with them. The only way for us to try to figure out what those models are doing, or why, is to talk to them in human language! And when we tie multiple models together to make a bigger system, those models also talk to each other in human language!
Why?! Is human language the optimum carrier of information? Or are we simply creating these things in our own image because we're lazy? If we are, then we will probably end up in a too-big-to-fail situation, given all the money that's now being pumped into it.
Why am I starting by telling you this? Because with this example I'm trying to get you thinking about systems.
Because when we talk about ethics, about AI, or about society, we now have to broaden our scope. Up to now we've usually talked about instrumental rights and human rights. By instrumental rights I mean things like privacy and transparency, which are fundamental in their own right but also important as a means to achieve other rights - namely human rights, which I probably don't need to explain, but nevertheless: things like justice and equity, non-discrimination, agency (the right not to be manipulated), and so on.
Is that scope still broad enough? No. What we now also need to consider, at the same time, are humanity and power.
You see, tech is not neutral.
Let’s have a look at other revolutionary system technologies:
The printing press caused 2 centuries of bloodshed in Europe due to religious strife (the Reformation) and redrew the borders of the continent.
Steam power (i.e., the Industrial Revolution) caused a massive shift of power from the landed aristocracy to industrialists.
Atomic power ended one world war and started the Cold War. It changed the geopolitical landscape forever, and it was considered dangerous enough to the fabric of civilisation to be heavily regulated almost immediately.
The internet caused another massive power shift towards technology companies. The richest people alive today are all internet entrepreneurs.
Gene editing was pretty much immediately banned (or at least heavily regulated) for its terrifying potential.
We see that the common theme here is ‘power’: chaos, power shifts, and bans. In other words, we can see technology as a carrier of power.
A very important note to add here is that AI is bigger than any of these other technologies. And this situation is unique in another way too: the most powerful entities in the world are also the ones controlling the new technology. That has never happened before in history (not even with atomic power). We are talking about a consolidation of power on a scale never seen before.
Technology is not only a carrier, a vector, of power but also of ideology.
Let’s go back to the early days of social media. There were 2 famous labs in the USA, the Stanford Persuasive Technology Lab and the MIT Human Dynamics Lab, that pioneered the application of persuasive technologies in computers (in plain language, that’s simply manipulation of course).
The head of the MIT Human Dynamics lab said this:
“In just a few short years we are likely to have incredibly rich data available about the behaviour of virtually all of humanity - on a continuous basis... And once we develop a more precise visualisation of the patterns of human life, we can hope to understand and manage our modern society in ways better suited to our complex, interconnected network of humans and technology” - Alex Pentland, MIT
The professors who ran these labs and taught these students hold a coherent ideology with respect to their work. These were small labs with few students, but many of those students went on to shape much of our current society by founding companies in the internet, social media and machine learning space.
Those people back then used advanced statistical techniques and machine learning already, but it was incomparable to the “Generative AI” we have now.
Silicon Valley and the people working on Generative AI are actively working on creating super-human artificial general intelligence. This is their stated goal. And just like in those earlier labs, they have their ideologies.
These are often collectively referred to as TESCREAL. If you Google the term you will already find references to it in major news media. The abbreviation stands for:
Transhumanism
Extropianism
Singularitarianism
Cosmism
Rationalism
Effective Altruism
Longtermism
Let's not get into the details of all of those. The important thing is that all of these ideologies focus on long-term benefits for humanity. That sounds pretty good, right? However, the longtermist part tends to lead to the conclusion that it is worth harming people alive today if it means creating a better future for uncountable future humans. And transhumanism holds that human beings in their current form are not good enough and need to be improved if we want to survive and thrive - think of cybernetics and bioengineering, for example.
What does this have to do with AI? Well, the underlying point of all these ideologies is that human beings, as they currently are, are unable to solve the problems we are facing. The chosen solution is to create Artificial General Intelligence, and most adherents of the TESCREAL ideologies would rather we do this as quickly as possible.
It must be said, though, that some people from these same ideologies are actually calling for a halt to current development, arguing that we are going so fast that the results for future humanity could be very harmful indeed.
If you've not heard of this before, it might all sound rather strange, but we have seen that technology carries ideology and that it has a real effect on the course of society - especially when there's a lot of money involved. And most of the people behind the major progress in ML and AI are doing this work precisely because of the ideologies they hold about the future of humankind: namely, that the world has become too complex for us humans.
So we come back to these two points: technology carries power and it carries ideologies - ideologies about what it means to be human and how that should change in the future.
Let’s explore the role of the concept ‘Humanity’ further, in this context.
What is Humanity?
Modernity emerged during the Renaissance and the scientific revolution. It is characterised by the idea that we humans can study and understand nature and thereby, eventually, shape it entirely to our liking.
After Modernity came Postmodernism. It's a very controversial term - people argue even about its definition, which I am about to butcher for the sake of making a point. For our purposes here, what matters is that postmodernism says humans are more a product of their environment than intentional producers of it.
We could ask whether machine learning is modernist or postmodernist. If you said modernist, let me argue why that is not the case.
All of us still very much embody the modernist way of thinking. I myself am trained as a scientist and was a researcher in astrophysics for a while, so my default mode of thinking about the world is definitely and decidedly modernist.
And machine learning sounds like science, right? We take data, do all kinds of fancy mathematics and statistics, and then we can predict the world better. Here is why that intuition fails. The only reason we use machine learning to solve a problem is that the problem is too difficult for us to solve ourselves! In fields like gravity, rocket science and quantum mechanics, we make sure we understand each component part (by designing falsifiable experiments) and then we put the parts together, one at a time, until we understand the whole.
Not so with machine learning. Humans are unable to solve the problem themselves, so we ask a very complicated machine to find the patterns that we couldn't. The machine is only asked to produce a solution that works (and it is very clever of us to build a machine that can), but we don't ask for the why: it is completely unclear and unknowable to us why that solution works.
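To make that concrete, here is a tiny illustrative sketch (synthetic data, assuming scikit-learn is available; nothing here comes from a real project). We ask a random forest to find a pattern we pretend not to know. It predicts very well, but what it hands back is hundreds of trees with thousands of branching rules - a solution that works, with no "why" attached.

```python
# Illustrative sketch on synthetic data: the model predicts well,
# but its "solution" is a forest of decision rules, not an explanation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + rng.normal(0, 0.1, size=2000)  # the hidden pattern

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:1500], y[:1500])

print("held-out R^2:", round(model.score(X[1500:], y[1500:]), 3))          # good predictions...
print("trees in the model:", len(model.estimators_))                       # ...delivered by
print("nodes in the first tree:", model.estimators_[0].tree_.node_count)   # a mass of opaque rules
```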
Now let’s zoom out a bit. Why talk about postmodernism and machine learning?
The idea of the nation state, democracy, the way we see ourselves as humans - all that and more - is rooted solidly in our perception of ourselves as beings with free will, beings that seek to create their own experience: the "will to will". We believe that we (humans, individuals) can have a rational discourse together on any topic and arrive at the best solution.
The irony is that the modernist credo of "measuring, understanding and hence manipulating the natural world" started to extend to human beings as subjects as well (as we saw, for example, in the persuasive technology labs). At some point there was no denying it: we realised more and more how easy it is to manipulate people. And not just that - this manipulation is done entirely by machine learning algorithms. In a very real sense, computers understand us better than we understand ourselves.
We still think of the world as modernist and of ourselves as very rational beings, but we are already living in a post-modern world, giving away our agency to machines bit by bit.
Alright, maybe this all seems a bit abstract still. Although I can assure you it’s not just theoretical. Let’s try to make it a bit more practical.
Your Choice Matters
Every single time we (you or I) use AI, we accept that the information available to us is too great and too complex for humans to understand, and that only the computer can.
Every single time we use AI to do something for us, we in turn become more predictable.
Every single time we use AI, it becomes more normal to accept the lack of autonomy and the lack of free will of the individual.
If we insist on carrying on using our modernist interpretation, while applying postmodernist tools, we shouldn’t be surprised if we end up disappointed.
If we insist on arguments like "we can just choose to use the good bits of this technology and not the bad bits, for example by regulating it", we are forgetting that technologies carry power and ideologies - in this case, specifically ones that do not care about decision-making through rational discourse.
You can't use modernist thinking to cherry pick from a system that is actively moving away from modernist thinking. You will not get the results you expect.
So we have a choice, with only 2 options:
Either we try to preserve our humanity as it is now,
Or we embrace some sort of post-humanism, where we fundamentally change what it means to be human.
As Foucault put it: “As the archeology of our thought easily shows, man is an invention of recent date. And one perhaps nearing its end…”
Byung-Chul Han makes it even more explicit: “The idea of the human being as defined by individual autonomy and freedom, by the ‘will to will’, will eventually appear as merely a short historical interlude”
Each of these options has far-reaching consequences, but there is no in-between. A choice has to be made. Note that not making a choice is also making a choice.
So I want you to think about these 2 possibilities. About the fact that every time you use GenAI, no matter how trivially, you are giving the people with power more power, and giving away something of yourself, your humanity and your future. I know I sound dramatic - partly to shake up your implicit assumptions - but I wouldn't be saying it if it weren't also true.
So I ask you now: are you prepared to make choices?
Not just the big choices, but also the small ones, because that's where you have the most influence. To consider: is what I'm getting worth giving away power and humanity for?
We are not asking these questions right now.
Are you prepared to make a choice?
Part 2
Information & Stories
So I guess what I've been trying to explain is that the world was already changing because of Big Data, that AI will change it even more - in more ways than you think - and that the way you decide right now to use or not use AI will determine how the world changes. And that making no choice is also making a choice.
That story can stand on its own, but I'd like to give you another way of looking at it that arrives at the same conclusions. I'd like to give you more to hold on to for each of the following aspects: how the world is perhaps fundamentally different from what you think; what is at stake; and what to think about when making your choice.
Our point of entry is to talk about information and stories.
Firstly, we understand ourselves, we understand our place in the world, and the world itself through stories. Democracy, the nation state, free will, the human being with rights and freedoms, etc… These are all stories.
We have no idea whatsoever what happens when something other than humans starts to write stories though. We have, as a culture, explored what it might look like if computers go rogue, become evil, want to destroy or enslave mankind, or protect themselves. Plenty of books and movies about it. But think about this: there is not a single book or movie that explores what happens when computers start to write stories instead of humans.
Stories are very different from information. Stories provide stability in time, continuity, while information only has an immediate relevance. Information is not the same as knowledge or insight.
The age of big data has changed our relationship to information - as a culture, but in particular at the level of the individual. It has created what we can rightly call an overload of information. Let's compare some befores and afters.
Before 1: We thought that when the internet made all information freely accessible this could only be a good thing.
After 1: Information is not actually free: you pay an opportunity cost, because you, as a person, can never consume even a fraction of all the content on a topic. This creates an incentive to compete for your attention, and attention is what you pay with.
Before 2: Democracy and capitalism require information symmetry to work. In other words, for those systems to deliver their intended benefits, buyer and seller (or citizen and government) have to have similar information. It also means the main method for the powerful to exercise control was to restrict whom you could exchange information with. Power was exercised through restrictions, and those restrictions were typically physical.
After 2: Information symmetry no longer exists. The growth in information is mostly in the hands of companies and governments; a customer or citizen can hardly compete with that. This state of affairs is in fact encouraged: instead of restricting communication, control is obtained by encouraging communication and interaction, because interaction creates information.
Before 3: The basis for liberal democracy is the assumption that we can always come to some collaboration through discourse, without violence. Rational discourse requires time, effort and attention. It requires attention for the other and attention for the long term. And it must take place in the public sphere, that selects topics that are relevant to all of society.
After 3: Information is relevant only briefly, fleetingly. The large amount of information assailing us all the time fractures our attention. This 'permanent frenzy of actuality' means we cannot linger - cannot spend time with the information, process it, synthesise it. It also fractures the public sphere into many private spheres.
Byung-Chul Han puts it very succinctly: "The compulsion of accelerated communication deprives us of rationality… the best we can do is intelligence". This quote also encourages us to linger on the difference between rationality (a long-term thing) and intelligence (a short-term one).
What is at stake?
Notice that I didn't mention AI even once to make any of the points above. Everything I just described about information was already happening. The point is that the GenAI we have had since ChatGPT is much, much more effective at creating additional "information". The effects on society will therefore be much faster and much more far-reaching, because we will be creating new data at an exponentially larger rate (and, by the way, that information will be of much lower quality than before).
Both stories and information, then, are spinning fast out of the reach of our simple human hands and minds. We lose control over our own stories, and we lose the ability to process and synthesise new stories, because we are overwhelmed with information.
What will happen to the story of humanity then? We don’t know. We have literally no idea or plan or vision at all.
You see, all our comfy little lives in the West are based on a system you may have heard of: capitalist liberal democracy. This specific system probably has something to do with how comfortable our lives are, but so does the simple fact that we have a consistent and coherent system at all.
Now, as we've just discussed, each of these 3 components relies on certain fundamental assumptions: the will-to-will, discursive rationality, and information symmetry. And as we've just seen, all 3 of these are currently being hollowed out. One might even go as far as claiming that the system is now only being upheld by the institutions we've built - like a house being held together by its roof instead of its foundations.
Do you feel like there are perhaps too many crises, and do you wonder why we can't seem to fix any of them? This might be the reason.
All these concepts are from modernity (remember that bit?), and we have absolutely nothing to replace them with. Think of all the alternatives you know for these 3 things. They are all old; tested and found wanting. There is nothing from post-modernity to prop up our new house, maybe because post-modernity is characterised by its rejection of grand narratives to begin with.
That’s what I mean: we don’t know where we want humanity to go next.
What to consider when making your choices
This may have been a lot to take in. So let’s make it a bit more tangible. Something to keep a hold of mentally, while you are making your choices.
Think about the things in life that make you the happiest. Like seeing the people you care about be happy, being in nature, seeing the world, doing sports, experiencing music, good food, sharing those experiences with the people you care about.
How many of those things will GenAI actually improve? I dare say: none. GenAI could perhaps make some things more efficient, saving us time that we can spend on the things that make us happiest. Well, the fridge and the washing machine did that as well, and so did the computer, the internet and the smartphone - but are we actually spending more time on the things that make us happiest? Think about this when you are choosing.
Next, every time you use AI to write something - an email, a blog, an essay, a social media post, whatever - think of this. Perhaps GenAI makes it twice or even 10 times as fast? Maybe AI makes it possible at all for you to write that book or make that video you've been thinking about? If that is true, imagine that literally everyone else is doing the same. Because why wouldn't they? Then suddenly we have 10 times, 100 times more content than we had before! But who the hell is going to read all of that?
Think of the mountains of emails and videos we'll be producing. Or think of dating apps: suddenly you'll be able to send dozens of witty, empathetic messages per day. No one has the cognitive capacity to consume all that content. So the only option for the receiving party… is to use AI to read, summarise and reply. It becomes an arms race: AI talking to AI. This can literally apply to everything; in fact, it is already happening: homework, scientific papers, policy documents, news commentary, job applications, you name it.
I would also like to mention the Precautionary Principle. It’s used in the legal world, in civil engineering, in risk management, and many other fields, so it may be familiar to many of you. In short, the precautionary principle says that you don’t wait for definitive proof that something is unsafe, because by that time you are too late to do anything about it.
The precautionary principle gave us the global treaties governing geo-engineering and gene editing, for example, and it is also why companies in Europe usually have to prove a new chemical is safe before putting it on the market. The US does not apply the precautionary principle as a general rule, which helps explain why it sees more public health and safety scandals than Europe.
You can use the principle in your daily life too, and you probably already do: if the potential impact is very large, we don't need everyone to agree on exactly how likely the bad outcome is before we act with caution.
And lastly, think of social media. AI is in many ways social media on steroids, for good and ill. It took us 15 years to "prove" that maybe social media has some bad effects on society and that maybe we should try to limit them. People argued this from the start, but of course by now we are a little late. It is too big to fail: too many people, businesses, governments, processes and jobs rely on it.
There is no reason to think it would be different for AI, because the incentives and the players are exactly the same.
Second Conclusion
In the beginning I told you that the story about LLMs being weird was so that you would think about systems.
You see, I don't really care whether you use GenAI for this task or that, at this time or that. What I do care about is whether you are aware of the consequences of your habit of using it every day, or even every month. Because systems care about habits, about structural things, and it's habits - the small things that stack up - that will create the system.
Don’t just act out of habit, don’t let the chips fall where they may. If we don’t stop to discuss where the story of humanity goes, it will end up being AI that writes it, and what it writes will be the result of the power and ideology that it carries.
That's not a statement of intention: there is no authority behind this, not even a conspiracy that made some kind of plan. It's just what happens if we do not make a choice and let the existing incentives do their work - the same as with social media or climate change.
Every single time we use AI, we are accelerating the information overload.
Every single time we use AI, we are accelerating the information asymmetry.
And that makes it just that little bit more difficult in the future to exercise our will-to-will, and more difficult to linger on a new story for humanity.
Many companies will do their very best to entice you to use AI as much as possible, in order to get the return on their huge investments. But if they don't have a story about humanity for you that goes beyond "isn't this easy?", then you know it is up to YOU to make that choice.
Don't think of AI as just another productivity tool, but as the start of a different civilisation. If you want to have a voice in what that civilisation will look like, now is the time to speak.
Further Reading
Infocracy by Byung-Chul Han
21 Lessons for the 21st Century by Yuval Noah Harari
Podcast “Philosophize This” (episodes 180-189 on AI and Byung-Chul Han)
Podcast “FT Tech Tonic” (season 9 – Superintelligent AI)