Google warns publishers about AI-generated health and finance content


Amid Google’s preparations to launch its own chatbot-integrated search feature, a push to compete with Microsoft’s ChatGPT-integrated Bing, the search giant has quietly issued new warnings to publishers interested in running AI-generated content.

These warnings pertain to Google’s guidelines for how publishers should handle AI-generated content.


More specifically, Google is alerting publishers that its search team will scrutinize AI-generated articles related to “health, civic or financial information” more closely. In other words, these are the areas where you really want to get things right.

Google’s recently published FAQ acknowledges that “these challenges occur in both human-generated and AI-generated content,” referring to AI content that potentially spreads misinformation or contradicts the consensus on crucial issues.

“Regardless of how content is produced, our systems seek to obtain high-quality information from trusted sources rather than information that contradicts a well-established consensus on important issues,” the FAQ continues. “In areas where the quality of information is of paramount importance, such as health, civic or financial information, our systems focus even more on indications of reliability.”

To Google’s credit, it’s a fair warning.

Google is by far the most popular search engine, even setting aside the company’s own stake in the AI arms race. Major publications are already using generative AI to produce content, and hustle-bro influencers are already encouraging their followers to use freely available tools to set up their own personal content factories.


As one of the preeminent curators of our digital lives, Google has to reckon with new technologies that are transforming how content is generated online. Generative AI, despite its obvious shortcomings, is already doing just that, and Google must adapt to remain competitive.

That said, Google is hardly a disinterested party in this fight. Its own chatbot-infused search has already proven blatantly wrong, in its launch announcement of all places, so the company is probably better off getting ahead of the many problems likely to arise in a digital landscape teeming with cheap, fast AI content that sounds extremely confident but is often wrong. Google seems fairly desperate to keep its head above water in an AI market led by Microsoft and OpenAI.

To that end, it should come as no surprise that Google has singled out health and finance as content of particular concern. This is not only because of the general importance of these areas, but also because of the unfortunate reality that existing generative AI tools routinely get that kind of content wrong. Large language models (LLMs) are notoriously bad with numbers, as evidenced by CNET’s embarrassingly error-ridden AI-generated financial advice. Medical professionals, for their part, have found that ChatGPT fabricates diagnoses and treatments, and even cites bogus sources for its alleged findings.

And when it comes to politically related content, several experts have warned that the wide availability of ChatGPT is primed to turn our online world into a propaganda nightmare. To that, we say: cheers.


However, Google suggests that we shouldn’t spend too much time worrying about the situation. After all, it has been dealing with this sort of thing for a while.

According to the recently published FAQ, “Our focus on the quality of the content, rather than how it is produced, is a useful guide that has helped us deliver reliable, high-quality results to users over the years.”

We’ll take note of that, but we’re sure Google will forgive us for having our fair share of concerns, especially considering that it doesn’t require content providers to label anything as AI-generated.

As Google writes, “AI or automation disclosures are useful for content where someone would ask, ‘How was this created?’”

An honor system, then, for everyone. That will surely work out fine.

