From PCPRO November
It becomes harder and harder to scrabble grains of pleasure, amusement or edification from the internet, but it remains possible (just) on YouTube. I particularly enjoy two grizzled performers, Rick Beato – whose interviews with musicians such as Rick Rubin, Guthrie Trapp and Tom Bukovac are priceless – and Jon Stewart, the satirical commentator whose Daily Show kept many of us in stitches during the George W Bush presidency. Stewart tried to retire in 2015, but he’s now returned, presumably lured by the grim shenanigans in the White House. He presents a YouTube version of the Daily Show on Mondays, and on Thursdays he hosts an in-depth podcast called The Weekly Show, on which a recent guest was the social psychologist Jonathan Haidt.
Haidt’s best-selling book The Anxious Generation argues that smartphone overuse is damaging young people’s mental health. I know his work from reviewing an earlier book called The Righteous Mind, published in 2012 – a study of the way a person’s moral outlook affects their political behaviour. Haidt is a leading light of the “Intuitionist” school of psychology, which holds that not all our behaviour is rational, and that moral judgements such as disgust are hard-wired to bypass the reasoning parts of the brain. The experiments that test this are highly amusing but unsuitable for a family magazine like this, since they involve ideas of incest and molesting chicken dinners.
I raise his work because it relates to another book I’ve just reviewed: Karen Hao’s Empire of AI, an inside glimpse into the rise of OpenAI and ChatGPT. Hao documents three important facts about OpenAI: unanimous agreement that artificial general intelligence (AGI) is possible and the only worthwhile goal; belief that AGI will be achieved by endless scaling, cramming millions of Nvidia GPUs into its servers; and a split, right from the very start, between those who think AGI will be great and those who think it will be deadly.
Personally, I believe AGI is neither desirable nor possible, for reasons that depend upon the work of Haidt, among others. If it’s not achievable, we needn’t fear enslavement by robots, but it also means that the current monomaniacal hyperscaling is futile and wasteful.
Impressive, amusing and addictive as current AI systems are, they fall far short of general intelligence because they’re not alive. Unlike Nvidia chips, living beings need food and safety, and need to reproduce, and these imperatives structure our thoughts and behaviours. Billions of years of evolution have equipped us with “emotions”, chemical computational sub-systems that detect and seek to satisfy those needs. The Portuguese neuroscientist Antonio Damasio postulates that when we store a memory of an event, it is imprinted with our emotional and hormonal state at the time (via biochemistry that is as yet barely understood). When we later retrieve that memory to help interpret a new event, these emotional markers act as weights – like the parameters in an LLM – and contribute to the outcome of our decision.
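Damasio’s idea can be caricatured in a few lines of Python. This is purely my own toy sketch of the analogy, not Damasio’s model and not anything an AI firm actually ships: each memory carries an emotional “valence” imprinted at storage time, and a later decision is a weighted vote in which each relevant memory contributes in proportion to that valence, much as a parameter weights an input in an LLM. All names here are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Memory:
    event: str        # what happened
    relevance: float  # similarity to the current situation, 0 to 1
    valence: float    # emotional marker imprinted at storage, -1 (bad) to +1 (good)

def appraise(memories: list[Memory]) -> float:
    """Return a decision score: positive suggests approach, negative suggests avoid."""
    if not memories:
        return 0.0
    weighted = sum(m.relevance * m.valence for m in memories)
    total = sum(m.relevance for m in memories)
    return weighted / total

past = [
    Memory("ate wild berries, felt ill", relevance=0.9, valence=-0.8),
    Memory("picnic with friends", relevance=0.3, valence=0.9),
]

# The strongly negative, highly relevant memory dominates, so the score is negative.
score = appraise(past)
```

The point of the caricature is that the “reasoning” never inspects the valences directly; they simply tilt the outcome, which is roughly what the intuitionist account claims our emotional machinery does.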
To us, therefore, images and words are never entirely neutral: they carry subconscious emotional connotations of varying strengths. AI models lack needs and fragile bodies, and hence purpose. Actually, smartphones, which can “see” and “hear”, know their location and orientation and travel the world in our pockets (so long as we remember to charge them), are way closer to human experience than ChatGPT is.
Equipping one of those highly capable Boston Dynamics robots with a fully autonomous AGI must remain science fiction so long as GPTs require aircraft-hangar-sized supercomputers and consume megawatts of electricity. Our own bodies have a mitochondrial “battery” in every cell, enabling us to think and/or reproduce ourselves on around 2,000 calories a day.
Cognitive psychologists such as Haidt show that emotional modes of intuitive thought aren’t reducible to either symbolic logic or Turing computability, yet it is these mechanisms that underpin crucial affective human virtues such as empathy, wisdom, justice, courage, honesty, compassion and generosity, without which any aspiring AGI would merely be a sociopathic silicon solipsist. Most importantly, intuition is vital for creative reasoning, enabling those unprecedented leaps between vastly different conceptual spaces that mark the mind of a Newton, a Mendeleev or an Einstein.
Yet the training data for connectionist AI models contains only representations of mental states – text and pictures scraped from the internet – and what emotional weight it does carry is mostly bad news, a swamp of hateful human communication that the AI firms pay big bucks for human beings to painstakingly disinfect. They call these procedures “alignment” and RLHF, or reinforcement learning from human feedback. I will quietly draw your attention to the word “human” in that phrase, and leave it there.