Using Artificial Intelligence to Generate ‘Extremely Realistic’ Child Abuse Images

An online safety group has warned that artificial intelligence could be used to generate an “unprecedented amount” of realistic child sexual abuse material.

The Internet Watch Foundation (IWF) says it has found “very real” AI-produced images that many would find “indistinguishable” from genuine photographs.

The pages investigated by the group, some of which were reported by members of the public, showed children as young as three years old.

The IWF, which is responsible for finding and removing child sexual abuse content on the internet, warned that the images are realistic enough to make it harder to spot when real children are in danger.

Susie Hargreaves, the IWF’s chief executive, called on Prime Minister Rishi Sunak to treat the issue as a “top priority” when the UK hosts a global AI summit later this year.

She said: “We haven’t yet seen these images in large numbers, but we are well aware of the potential for criminals to produce unprecedented quantities of lifelike child sexual abuse imagery.

“This could be devastating for internet safety and the safety of children online.”

AI images an ‘increasing’ risk

While AI-generated images of this nature are illegal in the UK, the IWF said the rapid progress and increased accessibility of the technology meant the scale of the problem could soon make it difficult for the law to keep up.

The National Crime Agency (NCA) said the risk was “increasing” and was being taken “extremely seriously”.

Chris Farrimond, director of threat leadership at the NCA, said: “If the volume of AI-generated material increases, it has the potential to significantly impact law enforcement resources and increase the time it takes us to identify real children in need of protection.”

Mr Sunak has said the global summit, expected in the autumn, will discuss regulatory “guardrails” that could mitigate the future risks of artificial intelligence.

He has met key players in the industry, including figures from Google and ChatGPT maker OpenAI.

A government spokesperson told Sky News: “AI-generated child sexual exploitation and abuse content is illegal, regardless of whether it depicts a real child, meaning tech companies will be required to proactively identify and remove it under the Online Safety Act, which is designed to keep pace with emerging technologies such as artificial intelligence.

“The Online Safety Act will require companies to take aggressive action to address all forms of online child sexual abuse, including grooming, livestreaming, child sexual abuse material and prohibited images of children, or face significant fines.”

Read more:
AI ‘threat to democracy’
Why Transparency Is Critical to the Future of AI

Video: Sunak praises the potential of artificial intelligence

Criminals helping each other use AI

The IWF said it also found online “manuals” written by perpetrators to help others use artificial intelligence to create more realistic images of abuse, thereby circumventing security measures put in place by image generators.

Like text-based generative AI tools such as ChatGPT, image generators such as DALL-E 2 and Midjourney are trained on data from across the internet to understand prompts and deliver corresponding results.


OpenAI, the ChatGPT creator behind the popular image generator DALL-E 2, and Midjourney both said they restrict their software’s training data to limit its ability to produce certain content, and that they block certain text inputs.

OpenAI also uses automated and human monitoring systems to prevent abuse.

Ms Hargreaves said AI companies must adapt to ensure their platforms were not exploited.

“Continued misuse of this technology could have profoundly dark consequences and could expose ever-increasing numbers of people to this harmful content,” she said.
