I’m always amused when I see someone pronounce on social media that they’ve “solved” the problem of artificial superintelligence, or insist that they have a 100% ACCURATE! prediction of where it will lead, often as a flimsy pretext to justify some awful idea like Universal Basic Income. This, despite the fact that some of the brightest minds alive today have been working on the Friendly AI problem for over a decade and still aren’t confident in their predictions, let alone their solutions.
Too much has already been written on why we should or shouldn’t be worried about ASI. If you’re unfamiliar with the debate, there’s a good summary and a great infographic at Future of Life, so I won’t rehash that here. Instead, I want to explain why there are so many terrible ideas and predictions floating around the “I F***ING LOVE SCIENCE!” crowd (i.e. not scientists, and certainly not AI researchers), and how this very same problem applies to human intelligence and infects every aspect of social and political thinking.
A good starting point is the Dunning-Kruger Effect. The least-able are most likely to overestimate their ability. Even those who know they are below average tend to be way off in their estimation of how far below average they are, and cannot even conceive of the different levels of mastery. Ironically, knowing about Dunning-Kruger does not make one immune to it, leading to some embarrassingly cringey articles from self-important journalists. (I’m sure that conservative writers have done this too, I just… can’t seem to find any.)
Dunning-Kruger explains why, as a middling chess player, I can predict who will win in a game between amateurs, but have no clue what’s going to happen next in a grandmaster game. It also explains why many new business owners see very high employee turnover: they’re still learning the trade, can’t yet tell good hires from bad, and have to fall back on trial and error. Rating systems address this; in chess the rating is completely objective, and on Yelp it’s very subjective but still a decent predictor of outcomes. With intelligence, the objective rating is IQ.
Despite appearances, I’m not an IQ-ist. I have never asked anyone for their IQ, nor told anyone mine without having explicitly been asked. You don’t need to be smart to be successful, or even to master a particular trade. IQ is not a reliable individual predictor of life outcomes. At an aggregate level, however, it informs us of certain social outcomes. A phenomenon called assortative mating explains why successful relationships tend to involve partners of similar IQ, which itself explains why marriage is for the rich. It also explains why high-IQ nations have more economic output than low-IQ nations. A lot of people know this, but what they do not realize is that the relationship between average IQ and collective outcome is not linear, it’s exponential.
The exponential relationship is important. We measure IQ on a bell curve, but the number itself behaves more like a decibel of sound than, say, a degree on a thermometer: the scale is logarithmic, so equal steps mean equal multiples. Various high-IQ societies have each done their own analyses, concluding that an increase of roughly 5 points is equivalent to a doubling of actual intellectual performance (i.e. problem-solving speed). So, on average, a 150-IQ individual can solve problems about 60 times faster than a 120-IQ individual, and more than 1000 times faster than a typical 100-IQ individual.
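To make the arithmetic explicit, here’s a minimal sketch of that claimed mapping. The 5-points-per-doubling figure is the assumption described above (from those societies’ analyses, not an established psychometric law), and the function name is my own:

```python
# Assumption from the text: every 5 IQ points ~ doubles problem-solving speed.
# This is the claimed mapping, not an established psychometric law.

def relative_speed(iq_a: float, iq_b: float, points_per_doubling: float = 5.0) -> float:
    """How many times faster an iq_a individual solves problems than an iq_b one."""
    return 2 ** ((iq_a - iq_b) / points_per_doubling)

print(relative_speed(150, 120))  # 2**6  = 64   (the "about 60 times" above)
print(relative_speed(150, 100))  # 2**10 = 1024 (the "more than 1000 times")
```

The two printed values are exactly where the “about 60×” and “more than 1000×” figures in the text come from.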
Those numbers are insane to think about. Try to imagine driving your car, on the same roads you’ve always driven on, but at 3000 mph. Or 50,000 mph. It’s all just a blur at that point, and the 3000 mph blur doesn’t feel much different from the 50,000 mph blur; either way you’d probably crash instantly. An X-15 pilot could relate to 3000 mph in the wide-open skies, but navigating ground traffic over short distances at that speed would still be inconceivable.
But now imagine that you can drive at a normal speed of 50 mph, and everyone else around you is limited to 1 mph. A few thoughts might cross your mind:
- Your commute time would be way shorter than everyone else’s.
- Being stuck behind a 1 mph vehicle would drive you crazy.
- Anyone else going much faster than 1 mph would stand out. A lot.
- You still wouldn’t be able to see a car going by at 3000 mph.
It’s not too difficult to imagine other people being slower than you – either physically or intellectually. You won’t really understand or empathize with their experience, but you can interact with them, and you can predict their behavior. However, none of us – not even the smartest among us – can imagine an intelligence higher than our own, because if we could, then we’d be more intelligent ourselves. We can imagine the outcomes of being super-smart, like holding a dozen Ph.D.s and starting 50 wildly successful companies, but not the actual process of getting from here to there.
The exponential relationship between ability and outcome is described by a Pareto distribution, or power law.
Ability can be intelligence, or anything else you can observe and measure. These distributions pop up everywhere, by the way, as the fabled “80/20 rule”, although in reality it’s often more like 90/10, or even 99/1; it all depends on how far right the x-axis goes. On such a chart, ability more than 4–5 standard deviations above average is literally off the chart for achievement. Not every field of human endeavor has this exact scale, but almost all have this general shape.
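You can see the 80/20 concentration fall out of a power law directly. The sketch below draws hypothetical “achievement” values from a Pareto distribution and measures what share of the total the top fifth holds; the shape parameter `alpha = 1.16` is the textbook value that yields roughly an 80/20 split, not anything from a real dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "achievement" scores drawn from a classical Pareto distribution.
# NumPy's pareto() is the Lomax form on [0, inf); adding 1 shifts it to the
# classical form with minimum 1. alpha ~ 1.16 is the shape that gives ~80/20.
alpha = 1.16
achievement = np.sort(rng.pareto(alpha, size=100_000) + 1)[::-1]

top_fifth = achievement[: len(achievement) // 5].sum()
share = top_fifth / achievement.sum()
print(f"Top 20% account for {share:.0%} of total achievement")
```

Push `alpha` lower (a heavier tail) and the split drifts toward 90/10 or 99/1, which is the “depends on how far right the x-axis goes” point above.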
If you equate “achievement” to “wealth”, and you imagine (incorrectly) that the amount of wealth in the world is fixed, then this graph looks terrifying. However, if achievement represents the production of wealth (or other resources), all of history starts to make sense. The poorest family in America today lives better than the richest kings and aristocrats of Europe in the middle ages, and it’s all because of the achievements of a very small number of inventors, entrepreneurs, artists, military generals, and so on.
We hold historical prodigies like Rembrandt and Edison in high respect, even reverence; they advanced civilization by leaps and bounds. Yet today, the trend seems to be fear and jealousy, as though these “1 percenters” are vampires feeding off us plebs. The reality is, if I, not Steve Jobs, had been the CEO of Apple, you wouldn’t have your iPhone, and Apple probably wouldn’t exist anymore. If you, not Lincoln, had been president during the Civil War, America wouldn’t be a single country. These outcomes required unique individuals.
Maybe the resentment was always there, and was just omitted from the history books. Either way, the more heterogeneous a group, the more resentment you seem to get. Every identifiable subgroup seems to be equally hypocritical: believing that lower-achieving subgroups simply don’t have the same ambition or ability (which is mostly true), while at the same time deluding itself into believing that higher-achieving groups got there by cheating. This is essentially the basis for all collectivist and identitarian beliefs, which are best described as weaponized intellectual laziness rather than coherent ideologies.
An artificial superintelligence would be way past the edge of today’s Pareto distribution. The ASIs would become responsible for nearly all “human” achievement, unless we could keep up via genetic enhancement and technological augmentation. If we lag behind, then we would all become insignificant underachievers compared to the intellectual and creative marvels produced by the supers.
What I wonder is: are we ready? Assuming, hypothetically, that ASI is Friendly, are we emotionally and intellectually mature enough to deal with a social class over and above the current billionaires? Machines that we can’t even begin to understand, but are nevertheless responsible for managing vast amounts of resources and producing almost all of the new goods and employment opportunities? I’m not really worried about superintelligence destroying jobs or culture, because that’s not what actually happens when you add super-producers to a society. What I wonder about is whether we would be able to accept the new reality, or whether humans would collectively become so bitter that they’d immediately try to destroy it.
Futurists believe that ASI will save us and deliver a post-scarcity economy. I’m not sure if we could handle it. My hunch is, the only way we’ll be able to truly advance beyond General AI is by improving ourselves, not our machines.