In this continuation of Why We Can’t Have Nice Things, we also get meta: commentary on the commentary. We’ll start descriptively, and then move on to opinionated prescription.
If we look at the underlying reasons Why We Can’t Have Nice Things, they are actually all the same. It doesn’t matter which Nice Thing we’re talking about. It doesn’t even matter which particular behaviour is going to get the Nice Thing cancelled. It’s all the same, regardless even of timeframes or impacts.
What I mean is, I quite often see people (including people I admire) arguing that we should talk more about X instead of Y in the context of AI. Most of the time X and Y are some combination of innovation, existential risk, near-term risk, current harms, or AI for good. Often (but certainly not always!) this comes with a few dismissive words implying that however much the other people are talking about Y, it is an unreasonable amount.
Well, here's the thing: it's not an either/or. Let's try a little harder to keep the whole picture in mind, shall we? A "yes, and" attitude.
On a cognitive level, I think the following happens in conversations about these topics. As soon as you see an argument you are not sure you agree with, the mind tends to go: *bing* “aha, this is a ‘scenario’”. And unconsciously the brain treats scenarios as mutually exclusive: only one scenario can actually happen at a time, so if one scenario happens, the other potential scenarios cannot happen anymore. In other words, when faced with multiple scenarios, the brain asks which single (!) scenario is most likely.
Stop it, brain.
In fact, all of the scenarios commonly talked about around AI can happen simultaneously. To take some extremes from either end, for maximum effect: AI could solve climate change, cure all diseases and enslave humanity, all at the same time. Regardless of how likely each of those outcomes is, nothing about any one of them fundamentally excludes the others. Therefore, we should prepare for a complex combination of scenarios, maybe even most of them.
I don't think it's helpful to say "the current discussion spends too much time on X and we're forgetting to talk about Y". Well, we are forgetting to talk about a lot of important things. Let's instead rephrase it as: "the amount of improvement we're seeing in Y is not enough compared to the likelihood and impact of Y happening."
The latter is a reasonable statement on any occasion that does not involve proposing an exact distribution of resources.
And in any case, and this is our main take-away today, literally any issue you care about that has to do with AI, from existential risk from AGI to harms currently being perpetrated by biased ML models, is enabled by the same root problem.
In other words, the reason why you are saying “we should focus more on risk X” is the exact same reason that the other person is saying “no we need to spend more time and money on risk Y”.
Because the reasons are always the incentives. It’s our bad incentive structures that make us worry AI companies will get civilization stuck in a dystopia. It’s the same bad incentive structures that are deploying AI systems that discriminate by ethnicity and sex. It’s the same bad incentive structures that are encouraging social media platforms to hijack our attention mechanisms.
We Can Have Nice Things, if (and only if) we fix our incentive structures.
This post’s title is, of course, a reference to the famous quote “It’s the economy, stupid”.