In May 2023, a group of AI-pioneering researchers and executives issued a blunt warning that reducing the dangers of the new technology should become a global priority. A few weeks earlier Professor Geoffrey Hinton, the British-born scientist known as the ‘Godfather of AI’, announced that he was stepping back from the field and now regretted some of his achievements. “It’s hard to see,” he said, “how you can prevent the bad actors from using it for bad things.”
Especially when the bad actors may turn out to be the machines themselves.
An AI that does what you ask won’t necessarily do what you mean. Imagine the governments of the world giving an advanced AI network the task of solving global warming. The machine puts its mind to work and instantly sees that the primary cause of the crisis is human activity. The solution is obvious: no humans, no problem. It covertly commissions a bio-research programme that produces a super-deadly virus to kill us all.
Meanwhile, ChatGPT, OpenAI’s revolutionary natural language processing tool, has recently given rise to Thanabot – an AI tool that makes it possible to “talk to the dead” by reimagining how someone who has passed would communicate. It’s a scary case of life imitating art, if you think about all the sci-fi movies we’ve grown up on.
As the world’s tech thinkers hit the panic button and call for a pause on artificial intelligence development, it’s easy to start to panic. Don’t.
Relatively few AI scientists think such a doomsday scenario is likely, and many would call it preposterous. While AI is astoundingly good at performing tasks faster and often better than humans, it can’t think beyond the limits of the data it’s been given. Those hoary science fiction plots about evil robots turning on their creators are – for now – just fiction.
Yet AI is still a baby – albeit one that has been gestating for more than half a century.
The history of Artificial Intelligence
In 1956, a group of prominent computer scientists gathered at Dartmouth College in the northeast USA with the aim of building a machine that would ‘think’ like a human. Ordinary computers simply follow the instructions programmed into them, but what, wondered the Dartmouth team, if the machines could seek out answers for themselves?
The venerable game of chess offers a vivid illustration of what happened. The first chess-playing computers appeared in the mid-20th century but could only play as well as the moves that had been programmed into them. They were unimaginative and predictable, and no match for the top human players. Then came IBM’s Deep Blue, which, rather than following a fixed script of moves, combined immense computing power with a vast database of past games to work out the strongest play for itself. In 1997, Deep Blue defeated the reigning world champion, Garry Kasparov.
Shockwaves rippled around the world. For the first time, a machine had proved cleverer than the cleverest human in a field of intellectual activity. Today, a basic chess app on a smartphone would annihilate Deep Blue. Already, says Mo Gawdat, 56, the author of Scary Smart, a best-selling book on AI, the technology is cleverer than any human at whole swathes of specific tasks. Soon, he predicts, AI will be smarter than us at almost everything.
“To put things into perspective,” he says, “your intelligence in comparison to the machine will be like that of a fly next to Einstein. The question at that point is: How do you convince the super-being not to squish the fly?”
This is the issue now preoccupying some of the foremost minds on the planet, including such technology titans as Elon Musk and Steve Wozniak (the co-founder of Apple), who have called for a six-month moratorium on AI development, with the time to be spent on creating a regulatory framework. Bill Gates says he sees the benefits but fears the robots “could turn against us,” while Musk says AI “scares the hell out of me,” and suggests that he might have to escape on one of his space rockets.
Not everyone, though, is joining the panic. At her office at Oxford University, Baroness Susan Greenfield, a leading British expert on the interplay between technology and the human brain, says: “There are certainly risks, but why would a machine want to take over the world? What would be its motive? It wouldn’t have what we call the ‘agency’ – the mental reasoning – to do it. Most of the people making these predictions, however distinguished, are not neuroscientists, so perhaps don’t understand that the workings of the human brain can’t be replicated.”
What are the benefits of AI?
Somewhat drowned out by the wave of apocalyptic warnings are the voices of those who believe AI will make life vastly better. They point to an escape from drudgery. Robots will clean, cook and shop for us; they’ll manage our finances, organise our holidays, and drive the kids to school.
The workplace will be transformed. Those leaden layers of managerial bureaucracy will go, and as AI takes over virtually all aspects of manufacturing, the cost of everything from cars to cookies will fall dramatically. AI will work 24/7, won’t get sick, won’t go on strike, won’t need paying and will likely do a better job than any human equivalent. While many low-level jobs will clearly go, they will be replaced, promise the believers, by better ones.
It will revolutionise healthcare, offering early and accurate diagnosis, performing a vast range of procedures and operations, inventing and producing new drugs at low cost, flagging potential problems long before we’ve even noticed the symptoms.
With AI-operated smart-city infrastructure, fully automated cars will be in constant communication with each other, exchanging information and dramatically speeding up journeys. With cars knowing exactly where every other nearby car is, there will be no need for traffic lights or speed limits. You? You’ll be stretched out on the back seat, having a drink or even getting some work done.
AI will precisely tailor learning to the educational needs of each individual child. Human teachers will still be a vital presence, but more in the role of encouraging critical thinking, creativity and self-expression through the arts and sport.
AI has the potential to make us wealthier, healthier and happier – with more time to enjoy ourselves than we’ve had at any other time in human history. At least, that’s the theory.
Are there restrictions on Artificial Intelligence?
In 2023, US President Joe Biden outlined proposals for a global AI regulator along the lines of the International Atomic Energy Agency. All advanced nations would be expected to sign a ‘non-proliferation’ pact, designed to ensure that AI was restricted to peaceful and ethical purposes.
Echoing international concerns, Australia’s Human Rights Commissioner Lorraine Finlay asked: “How do we harness the benefits of generative AI without causing harm and undermining human rights? The answer is to insist that humanity is placed at the very heart of our engagement with AI.”
What about chatbots?
The current climate of alarm was largely triggered by the arrival of a new generation of ‘chatbots’ – AI applications that can hold human-like conversations on almost any subject from philosophy to football, compose music, write essays and diagnose medical complaints. A chatbot can even make you fall in love with it.
In China, where around 20 per cent of the population lives alone, researchers say chatbots are replacing human companionship, with many singles claiming to find the AI more “empathetic and understanding” than humans. Tens of millions of young people share their emotional lives with a virtual boyfriend or girlfriend called Xiaoice, who never cheats, doesn’t go out drinking with their mates, is always there for a heartfelt talk, and likes to surprise you with a love poem.
Sweet … but while all the headline concern is about AI unleashing nuclear war or a killer pandemic, many, like Susan Greenfield, worry most about it eating our brains.
“If you put world domination and atomic bombs aside for a moment,” she says, “I find the idea of children not thinking for themselves deeply concerning, because it raises the question of what kind of people we’re going to be in the future.”