Google and Microsoft target responsible AI with new $10 million fund

Leading AI companies, including Google, Microsoft, OpenAI and Anthropic, have jointly established a $10 million fund to support research on the safe development of AI systems.

But critics question whether the industry group will advocate for true oversight.

Big tech giants join forces to form AI security forum

The Frontier AI ethics consortium was launched in July to supposedly ensure responsible AI progress. Founding members Google, Microsoft, OpenAI and Anthropic now plan to fund research into AI safety.

Each company has invested billions in expanding AI capabilities. Raising a comparatively small amount of $10 million is intended to promote external confidence in its precautionary commitments.

Appoint leader to coordinate AI scrutiny

The Frontier alliance named Chris Meserole as CEO. Meserole previously led an AI research group focused on minimizing risks.

He will coordinate the fund’s efforts to audit artificial intelligence systems. But his close ties to the industry raise questions about impartiality.

The stated goal is Red Teaming with the latest AI models

The companies say the fund will primarily fund techniques to rigorously prepare powerful new AI models before deployment.

Red teaming involves uncovering flaws, biases, and vulnerabilities through adversarial testing. But details remain vague.

Tiny compared to each member’s AI spend

While not insignificant, $10 million pales in comparison to the billions member companies invest annually in AI development.

See also  How useful is the Huawei Ramadan promotion

This highlights that the fund is more about lip service and PR than meaningfully guiding AI priorities.

Critics question true independence

Critics point out that the fund appears intended to protect the industry from the public interest. Truly independent voices seem excluded.

There are concerns that it largely serves as a promotional vehicle to improve the reputation of members’ AI work.

The Dark Web already misuses AI to cause harm

The need for AI oversight is underlined by the misuse of AI by criminals. A UK watchdog revealed more than 20,000 illegal AI-generated images shared on the dark web.

Governments are fighting to curb harmful uses of AI. But voluntary consortia have limited authority over the activities of their members.

Big tech companies want “friendly” regulation

All of the giants that fund Frontier publicly support AI regulation in principle. But they oppose restrictions that could hinder their AI aspirations.

The alliance allows you to focus on specific oversight that adapts to the unlimited progress of AI. But it avoids binding external regulations.

In short, while Frontier AI’s security fund seems like a positive step, deeper scrutiny suggests it primarily serves its Big Tech backers and not the common good. True reform requires regulatory actions, not voluntary consortia.

Subscribe to our latest newsletter

To read our exclusive content, register now. $5/Monthly, $50/Yearly

Categories: Technology
Source: vtt.edu.vn

Leave a Comment