New Google Tool Aims to Make AI-Generated Images More Traceable

For years, the Google DeepMind team has held that building powerful generative AI tools requires building equally capable tools to detect what AI has produced. The reasons are obvious and high-stakes, according to Demis Hassabis, CEO of Google DeepMind: “Every time we talk about this and other systems, the question of deepfakes comes up.” With another heated election season coming in both the US and UK in 2024, Hassabis believes that developing systems to recognize and detect AI-generated images is becoming increasingly vital.

For several years, Hassabis and his team have been working on a tool that Google is now making public. It’s called SynthID, and it is designed to mark an AI-generated image in a way that is imperceptible to the human eye but detectable by a dedicated AI detection tool.

The watermark is embedded in the pixels of the image, though Hassabis says it has no noticeable effect on the image itself. “It doesn’t change the image, its quality or the experience of it,” he explains. “But it is robust to various transformations: cropping, resizing, and all the other things you could do to try to get around normal, traditional, simple watermarks.” As the underlying SynthID models improve, Hassabis says, the watermark will become even less perceptible to humans while becoming easier for DeepMind’s tools to identify.
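Google has not published SynthID’s actual embedding scheme, but the general idea of a keyed watermark that survives simple edits can be sketched with a classic spread-spectrum-style approach: add a tiny pseudorandom pattern derived from a secret key to the pixels, then detect it later by correlation. Everything below — the function names, the flat grayscale “image,” the strength parameter — is an illustrative assumption, not Google’s method:

```python
import random

def embed_watermark(pixels, key, strength=1):
    """Add a keyed +/-1 pseudorandom pattern to a flat list of 0-255 pixel values."""
    rng = random.Random(key)
    pattern = [rng.choice((-1, 1)) for _ in pixels]
    return [max(0, min(255, p + strength * s)) for p, s in zip(pixels, pattern)]

def detect_watermark(pixels, key):
    """Correlate mean-removed pixels with the keyed pattern.

    A score near 1 means the watermark is present; near 0 means it is not.
    """
    rng = random.Random(key)
    pattern = [rng.choice((-1, 1)) for _ in pixels]
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) * s for p, s in zip(pixels, pattern)) / len(pixels)

image = [128] * 10_000                  # toy flat grayscale "image"
marked = embed_watermark(image, key=42)

print(detect_watermark(marked, key=42))         # ≈ 1.0: watermark present
print(detect_watermark(image, key=42))          # ≈ 0.0: no watermark
print(detect_watermark(marked[:5000], key=42))  # still ≈ 1.0 after cropping
```

Real systems typically work in a transform domain for robustness, and SynthID reportedly pairs the embedder with a learned detector model. This sketch only shows why a keyed statistical watermark can stay invisible (a ±1 change per pixel) yet survive cropping, and why the key matters: running the detector with the wrong key returns a score near zero.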

That is about as technical as Hassabis and Google DeepMind are willing to get for now. Because SynthID is still new, even the launch blog post offers few details. “The more you reveal about how it works, the easier it will be for hackers and other bad actors to get around it,” Hassabis argues. SynthID will initially be available to Google Cloud customers using the company’s Vertex AI platform and its Imagen image generator. Hassabis hopes that once the system gets more real-world testing, it will improve; then Google can deploy it in more places, share more information about how it works, and collect even more data.


Hassabis seems hopeful that SynthID will eventually become a standard for the entire Internet. The fundamental concepts could even be applied to other forms of media such as video and text. After Google has validated the technology, “the question is to scale it up, share it with other partners who want it, scale up the solution for the consumer and then have that debate with civil society about where we want to take this.” He repeatedly emphasizes that this is a beta test, a first attempt at something new, “and not a magic solution to the deepfake problem.” But he clearly believes it has the potential to be huge.

Of course, Google is not the only company with this goal. Last month, Meta, OpenAI, Google, and many other big names in AI pledged to build more safeguards and security procedures into their AI systems. Several companies are also collaborating on the C2PA protocol, which uses cryptographically signed metadata to tag AI-generated content. In many ways, Google is playing catch-up with all of its AI tools, detection included. And it looks like we’ll have too many AI detection standards before we settle on ones that work. But Hassabis is confident that watermarks will be part of the solution for the web.
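C2PA takes a different tack from pixel watermarking: provenance travels as signed metadata attached to the file. The real spec uses CBOR manifests and X.509 certificate signatures; purely as a sketch of the signed-provenance idea, with an HMAC standing in for a real certificate signature and all names invented, it looks something like this:

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # stand-in for a real signing certificate's key

def attach_provenance(image_bytes, claims):
    """Bind claims (who/what generated the image) to a hash of the image, then sign."""
    manifest = {
        "claims": claims,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}

def verify_provenance(image_bytes, record):
    """Check that the manifest is untampered and that it matches these image bytes."""
    payload = json.dumps(record["manifest"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    sig_ok = hmac.compare_digest(expected, record["signature"])
    img_ok = record["manifest"]["image_sha256"] == hashlib.sha256(image_bytes).hexdigest()
    return sig_ok and img_ok

img = b"fake image bytes"
rec = attach_provenance(img, {"generator": "some-image-model", "ai_generated": True})

print(verify_provenance(img, rec))         # True: intact image, intact manifest
print(verify_provenance(img + b"x", rec))  # False: image was edited after signing
```

The weakness relative to watermarking is visible here: stripping the metadata record removes the provenance entirely, which is exactly why Hassabis argues for marks embedded in the pixels themselves.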

SynthID is being unveiled at Google’s Cloud Next conference, where the company briefs business customers on new capabilities across its Cloud and Workspace products. According to Thomas Kurian, CEO of Google Cloud, use of the Vertex AI platform is skyrocketing: “The models are becoming more and more sophisticated and we’ve had a huge increase in the number of people using the models.” That growth, along with improvements to the SynthID system, convinced Kurian and Hassabis that now was the time to launch it.


Customers are concerned about deepfakes, but they also have far more mundane AI detection needs, according to Kurian. “We have a lot of clients who use these tools to create images for advertising copy,” he explains, “and they want to verify the original image, because many times the marketing department has a central team that actually creates the original image.” Retail is another big one: some stores are using AI to generate descriptions for their huge product catalogs, and they need to make sure the product photos they upload don’t get mixed up with the generated images they use for ideation and iteration. (You may have already seen DeepMind-generated descriptions like these on shopping websites and in places like YouTube Shorts.) These uses may not be as shocking as a fake Trump mugshot or a strutting Pope, but they are the ways AI is already being used in business.

Aside from whether the system works, Kurian says he’s interested in how and where people will want to use SynthID once it’s out. He believes Slides and Docs, for example, will need SynthID integration: “When you use Slides, you want to know where you get your images from.” But where else could it go? SynthID, according to Hassabis, could be offered as a Chrome extension or even built into the browser to recognize generated images across the web. But assuming that happens: should the tool automatically flag everything it detects as generated, or wait for a query from the user? Is a big red triangle the right way to convey “this was made with AI,” or should it be more subtle?

Kurian maintains that there may eventually be a whole range of user-experience options. He believes that as long as the underlying technology works consistently, people will be able to choose how it surfaces. It might even differ by context: you may not care whether the Slides background you’re using was created by humans or AI, but “if you’re in hospitals scanning tumors, you’ll really want to make sure it’s not a synthetically generated image.”


The launch of any AI detection technology inevitably sparks an arms race. OpenAI, for instance, has already quietly given up on a tool designed to recognize text written by its own ChatGPT chatbot. If SynthID becomes popular, it will simply inspire hackers and developers to find creative ways to defeat the system, forcing Google DeepMind to improve it, and so on. Hassabis responds, with just a hint of resignation, that the team is ready. “It will probably have to be a live solution that we have to update,” he says, “more like an antivirus or something like that. You’ll always have to be on the lookout for new types of attacks and transformations.”

For now, that remains a distant concern, because Google controls the entire initial pipeline: AI image creation, use, and detection. But DeepMind designed SynthID with the whole internet in mind, and Hassabis says he’s prepared for the long process of getting it everywhere it needs to be. Then he stops himself: “One thing at a time. It would be premature to think about scaling up and civil society debates until we have demonstrated that the fundamental piece of technology works.” That’s the first task, and it’s why SynthID is launching today. If SynthID or something like it works, we can then figure out what it means for life online.


Categories: Technology
Source: vtt.edu.vn
