Artificial intelligence (AI) may drive advances that could "kill many people" in just two years, an adviser to Rishi Sunak has warned.
Matt Clifford also said that if AI producers are left unregulated, their systems could become "very powerful" and difficult for humans to control, posing significant short-term risks.
He made the comments in an interview with TalkTV, pointing out that artificial intelligence has the potential to create dangerous cyber and biological weapons that could kill many people.
Many experts in the field have voiced this concern, most recently in a letter published last week urging greater attention and action to mitigate the risks of AI, which it compared to pandemics and nuclear war.
The letter, which warned of the dangers of artificial intelligence, was signed by executives from leading companies including Google DeepMind and Anthropic.
Geoffrey Hinton, known as the “godfather of AI,” echoed the letter, warning that AI could be disastrous for humanity if it fell into the wrong hands.
Mr Clifford is chair of the Advanced Research and Invention Agency (ARIA) and is currently advising the Prime Minister on the development of the Government's Foundation Model Taskforce, which focuses on AI language models such as ChatGPT and Google Bard.
“I think there are a lot of different types of risks with AI, and in this industry we talk a lot about near-term and long-term risks, and the near-term risks are actually pretty scary,” Mr Clifford told TalkTV.
“You can use artificial intelligence today to create new recipes for biological weapons or to launch massive cyber attacks. Those are bad things.
“I think the kind of existential risk that the letter writers are talking about is … what happens once we effectively create a new species, an intelligence more powerful than humans.”
Mr Clifford acknowledged that predictions of computers surpassing human intelligence within two years were on the “optimistic end”, but said artificial intelligence systems were rapidly improving and becoming more powerful.
During an appearance on the first edition of the show on Monday, he was asked what he thought the odds of humans being wiped out by artificial intelligence were, and he replied: “I don’t think it’s zero.”
He continued: “If we go back to things like biological weapons or cyber [attacks], you can have very dangerous threats to humanity that could kill many humans — not all humans — simply from where we would expect models to be in two years’ time.
“I think the focus now is how do we make sure we know how to control these models, because right now we don’t.”
The technologist added that AI production needs to be regulated globally — not just by national governments.
The AI warnings come as apps using the technology spread rapidly, with users sharing fake images of celebrities and politicians, while students use ChatGPT and other large language models to generate college-level essays.
AI is also being used in positive ways, including life-saving ones: algorithms that analyze medical images, from X-rays to ultrasounds, are helping doctors identify and diagnose diseases such as cancer and heart disease more quickly and accurately.
Mr Clifford said AI could be a force for good if harnessed in the right way.
“You can imagine artificial intelligence curing disease, increasing economic productivity, and helping us achieve a carbon-neutral economy,” he said.
But Labour has been urging ministers to ban developers from building advanced artificial intelligence tools unless they are licensed.
Shadow digital secretary Lucy Powell, who will speak at the techUK conference today, said artificial intelligence should be licensed in a similar way to medicines or nuclear energy.
“This is the model we should be thinking about, and you have to get permission to build these,” she told the Guardian.