I recently got a chance to talk about a wide range of issues on the Geopolitics and Empire podcast, which got me thinking more about the potential impact of A.I. on the future. The host talks to a stunningly wide range of thinkers from all sorts of backgrounds, so I am sure you will find something in his back catalogue to make you yell at whatever device you are listening on. My episode can be found HERE if you are curious.
Sales of “Taming the Apocalypse” are proceeding nicely. Grab your copy if you want to check it out and help support my experimental farming work. You can also get it through Kobo and the full audiobook should go into distribution soon. Hopefully the first third of the audiobook has reached paid subscribers (let me know if you haven’t seen it turn up yet- I may have clicked the wrong button along the way). My first review came in from none other than Joseph Lofthouse (it still counts even though we are good friends).
A wonderful exploration of how we might improve the world through plant and animal breeding. The book offers suggestions for how everyone from the most humble peasants to the richest billionaires might contribute to bettering our relationships with the natural world, and making our food system more secure.
And now for this week’s post…
The world is currently panicking over the threat of runaway A.I. The more I think about the issue, however, the more convinced I am that we have been living under the thumb of a destructive artificial intelligence for thousands of years already.
This is the standard story of intelligence: an organism (or possibly a machine) can become complex enough that it is capable of creative problem solving (usually involving some level of abstraction). I would like to point out that the tools of abstraction (and hence most of the problem solving that we consider evidence of our intelligence) do not come from inside individuals. Instead, they form a collective entity that passes from being to being, generation to generation, stitching together the human superorganism and its vast array of tools.
If you look closely at individual humans, you will find that they are spectacularly uncreative. If a person invents a single new mathematical theory or useful novel word, they are seen as remarkably intelligent. Look closer at these individuals and you will find their big idea is a synthesis of input that they received from the rest of the human superorganism (just as novel proteins arise when fragments of two previous proteins are combined). When you zoom out and consider all the people who devoted their whole lives to intellectual pursuits and came up empty-handed, you have to wonder if all those human minds are doing much more than randomly shuffling the elements provided to them by the wider culture, with the lucky few winning accolades by little more than chance. Smarter brains may merely be better at absorbing incoming abstractions and shuffling the cards a little faster than their dim-witted competitors.
Intelligence, then, is something that lives outside of individuals and between them. This suggests that intelligence, as humans conceive it, is inherently artificial and always has been.
Abstraction is an old habit among humans. It goes right back to the origin of language. Other organisms use communicative vocalisations, but humans take this tool so much further (relying on our exquisitely flexible vocal tracts and sensitive hearing). This trait arose somewhere between 2 million and 200 000 years ago. Spoken words, as humans use them, possess considerable power to manipulate the behaviour of others. Humans are capable of swearing oaths, and breaking them by telling lies. Human language is believed to have been crucial for the transmission and evolution of tool making technology, extending human capabilities beyond methods which could be merely imitated or stumbled upon by an individual. Humans also can experience intrapersonal communication- an inner voice heard only inside the head, though the capacity for this varies widely between people. Sometimes these inner voices take on a life of their own, often making people’s lives a living hell.
Spoken language is a fairly slippery beast. Human memory is notoriously fallible, so yesterday’s threats and promises will eventually be forgotten. The emergence of written language upset this cultural clearing mechanism and allowed abstract thoughts to persist indefinitely.
One of the oldest forms of writing is found in various methods of divination. Pieces of bone were marked with arcane symbols, then cast about to deliver messages from the spirits. The questioner would then change the course of their life in response to an abstraction. How many warriors went to their death following this early form of artificial intelligence, when their own instincts might have saved them? How many triumphed based on the irrational confidence offered to them?
As writing technology became cheaper and more sophisticated, various volumes of law appeared, such as the Code of Hammurabi. Such early abstractions had the potential to develop a mind of their own. Even the highest king could be held to the law's stark dictates, forced to execute his own children if the crime warranted it, honour bound to follow the law if he wished to retain the confidence of the people. The law today has ballooned into a bottomless pit of impenetrable wording, designed to keep the legal profession employed and to make the application of the law more dependent on access to ample funds than on any understanding of basic intent or community will. Religious texts such as the Bible represent another mass of abstraction capable of coordinating and restricting human behaviour, though as in the law the malleable nature of language allows room for creative reinterpretation (and the conflict that ensues).
The second oldest form of written language appeared in the record-keeping associated with trade, setting the foundation for modern economics. Such accounts determined the fate of countless generations of peasants. How often were such records tampered with? What poor farmer could challenge the priest-scribe at the central granary? Perhaps this was the first version of the “computer says no” sketch. Medieval peasants often burnt down their local church during times of unrest, a rare opportunity to destroy the records of personal indebtedness. Civilisations rose and fell according to the abstract logic of financial economics, with the mightiest emperors powerless to override it. Today a financial system that evolved during an unprecedented period of sustained growth is struggling to maintain its internal logic in the face of a faltering real economy. Just as desperate individuals sometimes commit suicide to escape the cold logic of their financial predicament, whole nations can drive themselves off a cliff based on similar abstract reasoning.
Modern money has become increasingly abstract, transitioning from physical money in the form of precious metals, to sworn certificates of metal holdings, to paper notes increasingly disconnected from any physical resource, to digital currencies that appear and disappear at the speed of light. The logic of money (only recently metastasized by the expectation of a constant return on investment) drove people to liquidate real assets (like a living forest) in order to convert them into financial assets capable of more impressive numerical growth.
These trends culminated in the emergence of computers- machines designed to process vastly more abstract information than squishy human brains. The irony is that the most advanced versions are only now learning to do such astonishing tasks as walking without falling over and telling the difference between a dog and a cat (something that non-human brains have been adept at for millions of years through decidedly non-abstract forms of intelligence). The leaders of the USSR hoped that the power of computers could create the perfect centrally controlled economy that their theoreticians imagined. The sad reality was that the quality and quantity of the information entering the system was insufficient, and the dynamic between input of data and output of policy too disjointed to achieve meaningful results (summarised beautifully in Adam Curtis’ documentary “All Watched Over by Machines of Loving Grace”).
Functional intelligence is not merely a matter of the volume of data subjected to abstract processes. If there is too much of a delay between sensing and reacting, the system can enter a process of repeated overcorrection, much like an inexperienced driver fishtailing down a slippery road at high speed. The greatest wisdom is useless if it arrives too late (and downright dangerous if it arrives too soon).
The other essential aspect of intelligence is its networked and segmented nature. The Central Intelligence Agency has no centre. Humans love to stroke our swollen brains, but much information processing happens outside our craniums. Instinctive reactions are often processed in the spinal cord, saving precious milliseconds when retracting a limb from danger. The eyes filter a large amount of useless information from our visual field, but this shortcut also makes them prone to various optical illusions.
Intelligence in societies is likewise segmented and networked, with information collected at various points, then sifted, compressed and funnelled upward to parts of the system that integrate various inputs and consider responses. The different components in a human society experience different pressures in the collection and processing of information. A low-level department might find itself punished for reporting inconvenient facts to the higher-ups (who themselves fear being made to look bad). Given the right incentives, human societies can inadvertently blind themselves to the reality around them. A recent example of this was an enormous power outage in China, which was hidden from its leader for many months until the US president casually mentioned it. This phenomenon is not limited to “foreign” governments, and not limited to the suppression of undesirable information.
When it comes to information processing, human societies function more like cephalopods. Although octopuses have impressive central nerve bundles, each arm contains its own miniature information-processing system and functions semi-autonomously as the creature interacts with the world. I suspect something similar is true for most mammals and their hind legs. We’ve all seen the video of the cat that kicks itself in the head then gets angry at its own foot.
The industrial global economy has often been compared to a vampire squid wrapped around the face of humanity. Presently, the tentacles are all sucking in harmony, though if the blood ran short, they might turn on each other (which won’t necessarily bring relief for the face).
This structure means that it is possible for different segments of human societies to become dis-integrated in terms of information processing and incentives, leading to rupture and conflict. Most civil wars occur when the upper-middle class loses confidence in the ruling class. Abstract means of communication are often vital tools for each side in the resulting conflict (the French Revolution heavily depended on the printing press for the mass distribution of pamphlets and newsletters). Once again, an abstraction, often boiled down to a few psychologically sticky slogans, developed a life of its own and turned society inside out.
All this gets me back to thinking about how AI in its current form might change the nature and dynamics of information processing in our society. Already we should be aware that our abstract online activities are exhaustively monitored, though as the saying goes if you aren’t doing anything wrong then you have nothing to worry about (yet). Typed messages are the most easily processed information, though spoken language can be fairly reliably converted into written text for machines to filter for evidence of misbehaviour. This is a considerable step up from medieval kings who often had little idea what languages their subjects spoke, let alone their taxable economic output (as discussed in “Seeing Like a State”). Industrial serfs are completely dependent on the industrial economy, so this result should not be surprising.
The flashier side of artificial intelligence comes in the form of large language models- massive opaque programs that digest all the text on the internet, and spit out the linguistic equivalent of mechanically recovered meat paste. If anybody is writing books that are outcompeted by this dreck then I think the computers are doing the poor authors a favour by putting them out of business and giving them permission to go outside instead. I can however see the potential for this technology in automating the production of propaganda and advertising (especially since legacy advertising agencies are struggling with the short format of YouTube video ads).
You might be worried that your cat video viewing experience is about to become a heck of a lot more inconvenient as a result, but I can foresee a much more dangerous implication. Historically, the ruling class has relied on a small group of trusted underlings to handle the most delicate part of their ecosystem- the creation and dissemination of culture/propaganda. These unusual individuals needed the rare ability to sense which direction the political winds were blowing, while also lacking scruples about their work. In recent generations their talents have been considerably dulled by a growing dependence on non-stop polling, resulting in leaders (or at least the people who play them on television) coming to resemble animatronic robots that only deliver a limited set of pre-approved sound bites. The dissatisfaction with this stolid state of affairs laid the groundwork for the populism of “tells it like it is” figures like Trump.
The emergence of AI language models as a tool for pushing out propaganda (especially in little bite-sized pieces on social media, often disguised as the sentiments of “real people”) creates a strange new dynamic. For one thing, the traditional arbiters and analysts of popular sentiment risk having their vital information source on the mood of the nation polluted by AI sludge (the modern equivalent of elites believing their own propaganda). If their functions are automated by A.I. then this class could be removed from the ear of power. The new upper management classes could be filled with a smaller number of computer and data science experts, though recent history has proven that their capacity to extract reliable information from the general public is often compromised, especially if they create a climate of fear or shame about publicly expressing politically inconvenient positions. This issue came to the surface in the lead-up to the Brexit vote, with a similar dynamic occurring before the unexpected victory of the One Nation party in Australia in 1998. A sufficiently distrustful population could start deliberately lying on push polls and express only insincere online sentiments.
This dynamic, coupled with the exclusion of the former upper management wind-sniffers, could set up a serious conflict in the near future. The overproduction of wannabe elites by our bloated university system has created a potential army of aggrieved individuals with little to lose. The discarded arms of the vampire squid could turn on the head, sparking a serious civil war.
In the long run, though, I am not worried about the potential of A.I. to run away from us and create a lasting dystopia. Apart from their limited capabilities (constantly oversold by AI companies desperate for more venture capital to burn), the underlying technology needed to create these devices represents the most complex and fragile system ever created by humanity. Hardware is constantly wearing out and needing replacement. Software bloats and accumulates bugs and vulnerabilities. If the elites manage to create a global A.I. emperor, its reign will be short lived. The resources to build it will be exhausted soon enough. High-end microprocessors are only produced in a handful of factories in Taiwan, and each depends on uninterrupted supply chains that stretch to every corner of the planet. And the people left in charge after such an event may well have forgotten how to rule without A.I. assistance. Imagine retrofitting a government department to function without basic computers.
The global, industrial vampire squid will be replaced with a series of smaller, weaker squids in time. No juicy face goes unsucked for long. If it is of interest to readers, I might devote some future posts to considering what kind of power structures might arise to fill the coming regional power vacuums. The dark ages that follow collapse tend to be mysterious, mostly because people didn’t take much time to preserve their thoughts in abstract, durable forms. But I would love an excuse to dig into the processes that incubated new power structures after collapses. I merely await your abstract encouragement in the comments section.