The rapid rise of artificial intelligence (AI) has not only caught the attention of society and lawmakers, but also some of the technology leaders at the heart of its development.
Some experts, including "Godfather of Artificial Intelligence" Geoffrey Hinton, warn that AI poses a risk of human extinction comparable to pandemics and nuclear war.
More than 350 people, from the boss of the company behind ChatGPT to the head of Google’s AI lab, said mitigating the “risk of AI extinction” should be a “global priority”.
While artificial intelligence can perform life-saving tasks, such as algorithms that analyze medical images including X-rays, scans and ultrasounds, its rapidly growing capabilities and increasing use have raised concerns.
We take a look at some of the main fears — and why critics say some of them are overdone.
Disinformation and AI-altered images
AI apps are all the rage on social media sites, with users posting fake images of celebrities and politicians, and students using ChatGPT and other large language models to generate college-level essays.
One general concern about AI and its development is the misinformation it generates and the confusion this is causing online.
British scientist Professor Stuart Russell said one of the biggest concerns was disinformation and so-called deepfakes.
These are videos or photos of a person whose face or body has been digitally altered so they appear to be someone else, often used maliciously or to spread disinformation.
Professor Russell said that while disinformation for "propaganda" purposes had existed for a long time, the difference now was that, taking Sophy Ridge as an example, he could ask the online chatbot GPT-4 to try to "manipulate" her so that she was "not very pro-Ukraine".
Last week, a fake picture appearing to show an explosion near the Pentagon briefly went viral on social media, sending fact-checkers and local fire departments scrambling to refute the claim.
The image, purportedly showing a cloud of black smoke next to the US Department of Defense headquarters, appears to have been created using artificial intelligence technology.
It was first posted on Twitter and quickly reshared by verified but fake news accounts, but fact-checkers soon confirmed that the Pentagon had not exploded.
But some action is being taken. In November, the government confirmed that sharing pornographic deepfakes without consent would be criminalized under new legislation.
Beyond human intelligence
AI systems involve machines imitating the processes of human intelligence — but are they at risk of developing beyond human control?
Professor Andrew Briggs of the University of Oxford told Sky News there are fears that, as machines become more powerful, the day "may come" when their capabilities surpass those of humans.
He said: “At the moment, whatever the machine is programmed to optimize is chosen by humans, it may be chosen for harm or it may be for good. Humans decide it at the moment.
"The worry is that as machines become smarter and more powerful, one day their capabilities will exceed those of humans, and humans will lose the ability to control what the machines are seeking to optimize."
That’s why it’s important to be “mindful” of the potential for harm, he said, adding that “it’s not clear to me or any of us that the government really knows how to regulate it in a safe way”.
But there are a host of other concerns surrounding AI, including its impact on education, with experts warning about its effects on essays and on jobs.
Just the latest warning
Signatories to the Center for AI Safety statement include Mr Hinton and Yoshua Bengio, two of the three so-called "Godfathers of AI" who won the 2018 Turing Award for their work on deep learning.
But today’s warning isn’t the first time we’ve seen technologists raise concerns about AI developments.
In March, Elon Musk and a group of artificial intelligence experts called for a moratorium on training powerful AI systems due to potential risks to society and humanity.
The letter, issued by the nonprofit Future of Life Institute and signed by more than 1,000 people, warns that artificial intelligence systems competing with humans could pose risks to society and civilization in the form of economic and political disruption.
It called for a six-month pause in the "dangerous race" to develop systems more powerful than OpenAI's new GPT-4.
Earlier this week, Rishi Sunak also met Google's CEO to discuss the "balance" between AI regulation and innovation. Downing Street said the Prime Minister spoke to Sundar Pichai about the importance of ensuring the right "guardrails" were in place to keep the technology safe.
Are these warnings “bullshit”?
While some experts agree with the Center for AI Safety’s statement, others in the field label the notion of an “end of human civilization” as “nonsense.”
Pedro Domingos, a professor of computer science and engineering at the University of Washington, tweeted: “Reminder: Most AI researchers think that the idea of AI ending human civilization is bullshit.”
Mr Hinton responded by asking what Mr Domingos's plan was for making sure AI "doesn't manipulate us into giving it control".
The professor replied: "You're already being manipulated every day by people who aren't even as smart as you, yet somehow you're OK. So why the special worry about AI?"