Dick Pountain / Political Quarterly / 19 Aug 2025
Book Review: Empire of AI: Inside the reckless race for total domination
by Karen Hao; 20 May 2025; Allen Lane, Penguin; £25.00
“It was the summer of 2015, and a group of men had gathered for a private dinner at Sam Altman’s invitation to discuss the future of AI and humanity.” So begins Karen Hao’s enthralling account of the rise and rise of Altman’s firm OpenAI and its best-known product, ChatGPT. It’s not a biography of Altman, though he is its principal actor, nor of ChatGPT, since that ‘bio-’ prefix rules out applying the word to a non-biological entity (however ‘intelligent’). Hao is an experienced tech journalist whose work has appeared in The Atlantic, the Wall Street Journal and MIT Technology Review, and she was fortunate to be granted extensive access to OpenAI in its earliest days. That access enabled her to write a technically informed history that is also a shrewd and highly critical psychological analysis of a cabal of phenomenally clever, phenomenally rich (mostly young, mostly male, mostly white) entrepreneurs who elevated Artificial Intelligence from an arcane subject of research into a commercial enterprise that promises/threatens to overthrow the existing economic order.
At that summer dinner table, along with Altman, were Elon Musk, who turned up an hour late, and three hot-shot AI researchers – Greg Brockman, Dario Amodei and Ilya Sutskever – who became founding fathers of OpenAI. All were around thirty, apart from Musk, who was fifteen years older and already a billionaire after selling his shares in PayPal in 2002 and investing in Tesla and SpaceX. The younger men grew up on the ‘Star Wars’ films, were in their teens when computer gaming took off, and probably carried internet games and science fiction into adulthood, which might explain the apocalyptic zeal (and naivety) that Hao describes thus:
“a radical commitment to develop so-called artificial general intelligence [AGI], what they described as the most powerful form of AI anyone had ever seen, not for the financial gains of shareholders but for the benefit of humanity”.
Musk promised $1 billion to the new enterprise he and Altman set up, a non-profit corporation pledged to develop AGI. They believed that a machine smarter than a human being might solve all humanity’s problems – poverty, climate change and cancer included – but might equally turn rogue and seek to enslave us all. The dramatic thrust of Hao’s book, painstakingly unfolded over 410 pages, recounts the way this initial idealism was gradually dismantled and the venture turned into a commercial, profit-seeking corporation, along with an almost unhinged intention to scatter the whole planet with $7 trillion worth of gigantic datacentres.
AI research had long been contentious and largely fruitless until the early 2000s, when new silicon-chip technologies enabled the construction of supercomputers with thousands of processors that could emulate or ‘model’ the action of human neurons. This type of architecture was called ‘connectionism’, and it started to produce truly impressive results around 2010, in large part thanks to Google’s efforts to improve its Translate product using ‘deep learning’ software models. Such models, known as Large Language Models (LLMs), must be ‘trained’ by exposing them to vast quantities of real-world data – text, image and audio – gleaned from the rapidly expanding internet culture of websites and libraries (in a manner that’s now creating enormous legal problems over copyright). OpenAI’s founders all believed that connectionism was the only way forward for AI and that ‘scale’ was crucial: the more processors you had, the smarter the machine would become, and if one scaled far enough it must inevitably reach AGI. They also believed that Google was almost there but couldn’t be trusted with AGI, so their non-profit corporation would share all its research findings to head it off…
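To make that training idea concrete, here is a deliberately toy sketch of my own (not from Hao’s book, and with a tiny made-up corpus standing in for the internet-scale data just described): next-word prediction, the core task behind LLM training, done with a single table of weights and a dozen words.

    import numpy as np

    # Toy next-word predictor: W[i] holds scores for every word that might
    # follow word i. Real LLMs scale this same predict-and-nudge loop up to
    # billions of parameters and trillions of words.
    corpus = "the cat sat on the mat the dog sat on the rug".split()
    vocab = sorted(set(corpus))
    idx = {w: i for i, w in enumerate(vocab)}

    rng = np.random.default_rng(0)
    W = rng.normal(0.0, 0.1, (len(vocab), len(vocab)))

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    for epoch in range(200):                     # repeated exposure to the text
        for cur, nxt in zip(corpus, corpus[1:]):
            p = softmax(W[idx[cur]])             # predicted next-word probabilities
            grad = p.copy()
            grad[idx[nxt]] -= 1.0                # gradient of the prediction error
            W[idx[cur]] -= 0.1 * grad            # nudge weights toward the truth

    print(vocab[int(np.argmax(W[idx["sat"]]))])  # prints "on", learned from the data

The founders’ scaling creed amounts to the claim that this humble loop, given enough processors and enough data, must eventually yield something indistinguishable from general intelligence.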
While they agreed AGI was achievable and imminent, the founders differed over whether its potential benefits or its dangers were the more likely. Those who feared a rogue AGI could destroy or enslave humanity, including Musk, were ‘Doomers’, who wanted to spend heavily on identifying and removing aversive tendencies from their models. The ‘Boomers’ were optimists who looked forward to the benefits; they included Altman and Brockman, who wanted to spend on performance, efficiency and rapid scaling to achieve AGI sooner. The two factions had seriously incompatible policy interests, which Hao follows in agonising detail over the company’s first eight years of operation. Though she deals even-handedly with both, her sympathies clearly lie with the Doomers, who criticised the company’s cavalier attitude toward the safety of its models. Musk left in 2018 following fierce disputes over non-profit status, control and safety. In 2019 the company became a ‘capped-profit’ rather than a non-profit in order to attract more investors, and in 2020-21 eleven Doomers left to start a rival AI company, Anthropic. In 2022 a new product, a ‘chatbot’ called ChatGPT, was released to the public for free; it became the fastest-adopted consumer software in history, gaining over 100 million users within two months and thrusting AI to the attention of the whole world. Tensions within OpenAI kept rising, mainly over how long and how much to spend on rendering its models safe, and culminated in a 2023 board-room coup (referred to jokingly in retrospect as ‘The Blip’) which briefly deposed Altman as CEO, though pressure from employees and investors reinstated him after five days. The remaining Doomers then resigned.
Hao’s story could have ended there, but she refuses to ignore the industry’s very prominent dangers. In a chapter called ‘Plundered Earth’ she expresses deep concern about the use of cheap labour to manually expunge harmful content from the models. For all the talk of replacing human intelligence, LLMs lack reason, morality and empathy, so they’re capable of regurgitating whatever bias, nonsense and hatred their training data contains. Safety can only be assured by hiring human beings to censor that content, a pursuit called ‘aligning’ the model with human sensibilities by means of ‘reinforcement learning from human feedback’, or RLHF. Hao travelled to Kenya, Colombia and India to interview workers on shockingly low piecework pay-rates who were traumatised by the vile content to which they were exposed for twelve or more hours a day doing RLHF work. She’s equally critical of the economic and environmental damage that the AI corporations’ policy of untrammelled scaling is wreaking, not merely in the USA but increasingly across the world:
“‘Digital’ technologies do not just exist digitally. The ‘cloud’ does not in fact take the ethereal form its name invokes. To train and serve up AI models requires tangible, physical data centers.”
The four largest ‘hyperscaler’ corporations – Google, Microsoft (provider of OpenAI’s supercomputers), Amazon and Meta – spend hundreds of billions of dollars on AI datacentres every year, and Hao travelled to Chile, where Microsoft and Google are buying up large plots of land to build huge datacentres close to villages whose already precarious potable water supply will be diverted for cooling. She pointedly compares such actions to the extractive plunder that typified previous eras of Empire…
Hao doesn’t explore in detail whether or not AGI is actually attainable. This matters, because if it isn’t then the worst imagined harms will never have to be faced, but the current monomaniacal hyperscaling will prove futile as well as dangerously and horribly wasteful. I’m a sceptic who believes it is not attainable, because emulating human language and pattern recognition, even if Reason can be added, still does not amount to general intelligence. Living beings have needs – nutrition, avoidance of danger, reproducing themselves – that profoundly structure their thoughts and behaviour. Evolution equips them with chemical alert mechanisms that we call ‘emotions’, which detect and seek to satisfy such needs. The neuroscientist Antonio Damasio postulates that when we store memories of events, and later retrieve them while trying to predict future ones, they are imprinted with an emotional ‘stamp’ based on the hormonal state at the time of capture. So images and words can never be entirely neutral: they carry emotional values that contribute to the outcome whenever we make decisions.
Jonathan Haidt and colleagues have demonstrated that many emotion-related decisions, from moral judgements and friendships to hatreds and prejudices, actually bypass the reasoning parts of the brain via direct neural links. These ‘intuitive’ mental activities are not reducible to either symbolic logic or Turing computability. Intuition is a vital component of creative reason, permitting those unprecedented leaps between vastly differing conceptual spaces that make for a Newton, a Mendeleev or an Einstein… But the training data for connectionist AI models contains only representations of mental states, stripped of all this emotional freight, so a model trained on it is incapable of intuition. Without affective virtues like empathy, honesty, compassion and generosity it would be a sociopathic silicon solipsist rather than a general intelligence. Equipping a robot body with an autonomous AGI brain belongs to Star Wars fantasy too, because LLMs run on supercomputers the size of an aircraft hangar that consume megawatts of electricity, unlike our own bodies, whose every cell contains a microscopic mitochondrial ‘battery’ enabling us to think and reproduce ourselves on a diet of weeds.
The connectionist paradigm adopted by OpenAI, Anthropic, Meta and the rest may already be running out of steam, and covering the whole planet with Nvidia chips would still leave such AI capable only of statistical correlation rather than intuitive causal reasoning. Perhaps a major U-turn to re-incorporate some degree of the older rule-based, symbolic AI paradigms that extreme connectionism has elbowed aside could create a more modest hybrid AI: a useful tool rather than a greedy parasite.
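As a purely illustrative footnote to that suggestion (mine, not Hao’s, with every component a made-up stand-in), such a hybrid is often sketched as a proposer-and-verifier pair: a statistical model suggests answers with confidences, and a hand-written symbolic rule accepts only those it can actually verify.

    # Toy neuro-symbolic hybrid: the statistical part proposes, the
    # rule-based part disposes. The 'model' below is a stub standing in
    # for any connectionist system.
    def statistical_model(question: str) -> list[tuple[str, float]]:
        # Pretend LLM: fluent candidates with confidences, including a
        # plausible-sounding but arithmetically wrong one.
        return [("2 + 2 = 5", 0.6), ("2 + 2 = 4", 0.4)]

    def symbolic_check(claim: str) -> bool:
        # Hand-written rule: parse 'a + b = c' and test it exactly.
        lhs, rhs = claim.split("=")
        a, b = (int(t) for t in lhs.split("+"))
        return a + b == int(rhs)

    def hybrid_answer(question: str) -> str:
        # Try candidates in order of confidence; keep only verified ones.
        for claim, _conf in sorted(statistical_model(question),
                                   key=lambda c: -c[1]):
            if symbolic_check(claim):
                return claim
        return "no verifiable answer"

    print(hybrid_answer("what is 2 + 2?"))   # prints '2 + 2 = 4'

The statistical component supplies fluency and breadth; the symbolic component supplies the hard guarantee of correctness that mere correlation cannot.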