Sam Altman, CEO of ChatGPT maker OpenAI, suggests AI regulation comparable to nuclear power

Sam Altman, CEO of OpenAI, together with co-authors Greg Brockman and Ilya Sutskever, has published a proposal for the governance of ‘superintelligence’: future artificial intelligence systems far more capable than today’s models such as ChatGPT and Google Bard.

According to Altman, within the next decade artificial intelligence (AI) systems could surpass human experts in most fields and match the productive output of today’s largest corporations.

The rise of superintelligence has been widely discussed because of what is at stake: it could bring a far more prosperous future, but it also carries serious risks that must be carefully managed.

Altman compares the governance of superintelligence to historical precedents such as nuclear power and synthetic biology, fields whose risks demanded special treatment and coordination.

The authors propose three ideas for navigating the development of superintelligence effectively.

  • Coordination: The leading AI development efforts should coordinate to ensure that superintelligence is developed safely and integrated smoothly into society. Governments could establish a joint project, or the major developers could collectively agree to limit the pace of frontier AI progress.
  • International Authority: AI efforts above a certain capability threshold should be subject to an international authority, analogous to the International Atomic Energy Agency (IAEA) for nuclear power. Such a body could inspect systems, audit compliance with safety standards, and place restrictions on deployment and levels of security.
  • Safety Research: As superintelligence develops, the authors call for greater attention to making the technology safe. This remains an open technical research problem that OpenAI and other organizations continue to study.

Altman has also emphasized that the development of AI models below a certain capability threshold should not be stifled by regulation: companies and open source projects should be free to build such models without facing excessive oversight.

The authors argue that the governance of the most powerful AI systems must involve strong public oversight: people around the world should democratically decide on the limits and defaults of these systems. OpenAI plans to experiment with mechanisms for gathering such public input, although the details have not yet been worked out.

In a recent post on the OpenAI blog, Altman and his co-authors explain why the organization continues to develop this technology despite the potential risks: they believe it will lead to a much better world, helping to solve problems and improve societies.

Halting the advance of superintelligence would be extremely difficult, and the potential benefits are too substantial to forgo. The imperative, therefore, is to manage its progress with the utmost caution.
