Internet Watch Foundation warns AI-generated child sexual abuse images could flood the internet

NEW YORK – The already alarming proliferation of child sexual abuse images on the Internet could get much worse if something is not done to rein in the artificial intelligence tools that generate deepfake photos, a watchdog agency warned Tuesday.

In a written report, the UK-based Internet Watch Foundation urges governments and technology providers to act quickly before a flood of AI-generated child sexual abuse images overwhelms law enforcement investigators and greatly expands the pool of potential victims.

“We’re not talking about the damage it could cause,” said Dan Sexton, the watchdog group’s chief technology officer. “This is happening right now and it needs to be addressed right now.”

In a first-of-its-kind case in South Korea, a man was sentenced in September to two and a half years in prison for using artificial intelligence to create 360 virtual child abuse images, according to the Busan District Court in the country’s southeast.

In some cases, children use these tools on each other.

At a school in southwestern Spain, police have been investigating the alleged use of a phone app by teenagers to make their fully clothed schoolmates appear naked in photographs.

Computer-generated images of child sexual abuse made with artificial intelligence tools like Stable Diffusion are beginning to proliferate on the Internet and are so realistic that they may be indistinguishable from real children.

The report exposes a dark side to the race to build generative AI systems that allow users to describe in words what they want to produce – from emails to artwork to novel videos – and have the system spit it out.


If not stopped, the flood of fake images of child sexual abuse could bog down investigators trying to rescue children who turn out to be virtual characters.

Perpetrators could also use the images to groom and coerce new victims.

Sexton said IWF analysts discovered the faces of famous children online, as well as a “massive demand for the creation of more images of children who have already been abused, possibly years ago.”

“They are taking existing real content and using it to create new content for these victims,” he said. “That’s incredibly shocking.”

Sexton said his charity, which focuses on combating online child sexual abuse, began receiving reports of AI-generated abusive images earlier this year.

That led to an investigation into forums on the so-called dark web, a part of the Internet hosted within an encrypted network and accessible only through tools that provide anonymity.

What IWF analysts found were abusers sharing advice and marveling at how easy it was to turn their home computers into factories generating sexually explicit images of children of all ages. Some are also marketing and trying to profit from such images, which look increasingly realistic.

“What we’re starting to see is this explosion of content,” Sexton said.

While the IWF report aims to point out a growing problem rather than offer prescriptions, it calls on governments to strengthen laws to make it easier to combat AI-generated abuses.

It is particularly aimed at the European Union, where there is debate over surveillance measures that could automatically scan messaging apps for suspected images of child sexual abuse, even if authorities are not previously aware of the images.


A big goal of the group’s work is to prevent previous victims of sexual abuse from being abused again through the redistribution of their photographs.


The report says technology providers could do more to make it harder for the products they have created to be used this way, though those efforts are complicated by the fact that some of the tools are difficult to put back in the bottle.

Last year saw the introduction of a series of new AI image generators that surprised the public with their ability to conjure up whimsical or photorealistic images on command. But most of them are not favored by producers of child sexual abuse material because they contain mechanisms to block its creation.

Technology providers that keep their AI models closed, with full control over how they are trained and used (for example, OpenAI’s image generator DALL-E), appear to have been more successful at blocking misuse, Sexton said.

By contrast, a tool favored by producers of child sexual abuse images is the open-source Stable Diffusion, developed by London-based startup Stability AI.

When Stable Diffusion burst onto the scene in the summer of 2022, a subset of users quickly learned how to use it to generate nudity and porn.

While most of that material depicted adults, it was often non-consensual, such as when it was used to create celebrity-inspired nude photographs.

Stability subsequently implemented new filters that block unsafe and inappropriate content, and a license to use Stability software also includes a prohibition on illegal uses.


In a statement released Tuesday, the company said it “strictly prohibits any misuse for illegal or immoral purposes” on its platforms. “We strongly support law enforcement efforts against those who misuse our products for illegal or nefarious purposes,” the statement read.

However, users can still access older, unfiltered versions of Stable Diffusion, which are “overwhelmingly the software of choice … for people who create explicit content involving children,” said David Thiel, chief technologist at the Stanford Internet Observatory, another watchdog group studying the problem.

“You can’t regulate what people do on their computers, in their bedrooms. It’s not possible,” Sexton added. “So how do you get to the point where they can’t use openly available software to create harmful content like this?”

Most AI-generated child sexual abuse images would be considered illegal under current laws in the US, UK and elsewhere, but it remains to be seen whether authorities have the tools to combat them.

The IWF report is timed ahead of a global meeting on AI safety next week hosted by the British government that will include high-profile attendees, including US Vice President Kamala Harris, and tech leaders.

“While this report paints a bleak picture, I am optimistic,” IWF executive director Susie Hargreaves said in a prepared written statement. She said it’s important to communicate the reality of the problem to “a wide audience because we need to discuss the darker side of this amazing technology.”
