What does the rise of AI mean for the future of business?

The global debate on AI risk and regulation has seen a number of notable developments over the past few weeks.

Both the US congressional hearings featuring OpenAI’s Sam Altman and the introduction of the updated AI Act in the EU have fueled a push for stricter regulation.

What has surprised some, however, is the degree of agreement among governments, researchers and AI developers on the need for regulation. During his appearance before Congress, Sam Altman, the CEO of OpenAI, suggested establishing a new government agency to license the development of large-scale AI models.

In addition to recommending “a combination of licensing and testing requirements,” he suggested that companies like OpenAI should be subject to independent audits as a means of regulating the industry.

Despite growing consensus about the dangers, including the potential effects on people’s work and privacy, there is still no agreement on the details of such legislation or on which areas potential audits should focus. Two main themes emerged at the World Economic Forum’s first Generative AI Summit, where AI practitioners from business, government and academia came together to promote alignment on how to handle these new ethical and regulatory considerations:

Responsible and accountable AI audits

We must first modernize the standards for companies that create and use AI models. This becomes crucial when we consider what “responsible innovation” actually entails. The UK has taken a lead in this conversation: its government has issued guidance for AI built on five core principles, including security, transparency and fairness. Recent Oxford research has also shown how “LLMs like ChatGPT create an urgent need to update our concept of accountability.”


The increasing difficulty of understanding and auditing the new generation of AI models is a major driver of this demand. We can illustrate the evolution by contrasting “traditional” AI with LLM (large language model) AI, using the example of recommending candidates for jobs.

Traditional AI could be biased if it were trained on data showing employees of a particular race or gender in higher-level positions and then recommended people of that race or gender for such positions. Fortunately, this kind of bias could be identified and audited by examining the data used to train the model and the recommendations it produced.
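As a rough illustration of what such an audit can look like, the sketch below compares a hypothetical model’s recommendation rates across demographic groups and computes a disparate-impact ratio. The data, group labels and the 0.8 threshold (the common “four-fifths” rule of thumb) are illustrative assumptions, not the methodology of any specific law or audit standard.

```python
# Minimal sketch of a traditional bias audit: compare how often a model
# recommends candidates from each demographic group. All data and names
# here are hypothetical; a real audit would use the model's actual outputs.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, recommended) pairs, recommended is bool."""
    counts = defaultdict(lambda: [0, 0])  # group -> [recommended, total]
    for group, recommended in records:
        counts[group][0] += int(recommended)
        counts[group][1] += 1
    return {g: rec / total for g, (rec, total) in counts.items()}

def disparate_impact(rates):
    """Ratio of the lowest group's selection rate to the highest's."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: (group, was the candidate recommended?)
audit = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]

rates = selection_rates(audit)
ratio = disparate_impact(rates)
print(rates, ratio)
# A common rule of thumb (the "four-fifths rule") flags ratios below 0.8.
if ratio < 0.8:
    print("Potential disparate impact - investigate training data and outputs.")
```

An analysis like this only works when both the training data and the model’s outputs are inspectable, which is exactly what the next generation of models puts at risk.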

With modern LLM-powered AI, this form of bias auditing is becoming increasingly challenging, and at times impossible. Not only do we often not know what data a “closed” LLM was trained on, but a conversational recommendation can also carry more subjective biases or “hallucinations.”

Who will decide if ChatGPT’s account of a presidential candidate’s speech, for example, is biased?

As a result, it is more crucial than ever that products incorporating AI recommendations take on additional responsibilities, such as making the suggestions traceable, so that the models behind those recommendations can actually be audited for bias rather than simply deployed.
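One way to make suggestions traceable, sketched below under assumed field names and a JSON-lines storage format (neither is a prescribed standard), is to log audit metadata, such as the model version, the exact prompt and the output, alongside every recommendation a product surfaces:

```python
# Sketch of recommendation traceability: persist enough context with each
# AI-generated suggestion that an auditor can later reconstruct how it was
# produced. Field names and the storage format are illustrative assumptions.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class RecommendationRecord:
    model_id: str    # which model (and version) produced the suggestion
    prompt: str      # the exact input the model received
    output: str      # the suggestion shown to the user
    timestamp: float # when it was generated

def log_recommendation(record: RecommendationRecord, path: str = "audit_log.jsonl"):
    """Append the record as one JSON line so audits can replay history."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_recommendation(RecommendationRecord(
    model_id="example-llm-v1",
    prompt="Suggest candidates for a senior engineering role.",
    output="Candidate 42 looks like a strong match.",
    timestamp=time.time(),
))
```

With records like these, an auditor can sample logged recommendations and check them for patterns of bias even when the underlying model is a black box.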

The key to the new HR-focused AI rules is defining what constitutes a recommendation versus a decision. For example, New York City’s new AEDT law requires bias audits of automated employment decision tools, technologies that make or substantially assist hiring decisions.

However, the legal environment is expanding beyond how AI makes decisions to include how AI is built and deployed.


Transparency in communicating AI standards to consumers

This brings us to the second main theme: the need for governments to establish more detailed, comprehensive guidelines for how AI technology is developed and how its use is communicated to users and employees.

Christina Montgomery, IBM’s chief privacy and trust officer, emphasized the need for such guidelines at the same congressional hearing, highlighting the importance of ensuring that users are informed every time they interact with a chatbot. The latest EU AI Act discussions about banning LLM APIs and open-source models center on this kind of transparency: how AI is built, and the danger of bad actors misusing such models.

Until the trade-offs between risks and benefits are more clearly understood, further debate will be needed on how to manage the proliferation of new models and technologies. As AI’s influence accelerates, the need for rules and laws, and for an understanding of both the dangers and the potential, becomes more urgent.

Implications of AI regulation for HR teams and business leaders

HR teams may be feeling the effects of AI fastest. They face new pressure to give employees opportunities to learn new skills while also providing their executive teams with up-to-date forecasts and workforce plans built around the skills their evolving business strategies will require.

At the two recent WEF summits on generative AI and the future of work, I spoke with leaders in AI and HR, as well as policymakers and academics, about an emerging consensus: that all companies should drive responsible AI adoption and understanding. The World Economic Forum (WEF) has just released its “Future of Jobs Report,” which estimates that 23% of jobs are likely to change in the next five years, with 69 million new jobs created and 83 million eliminated, a net loss of 14 million jobs.


The report also finds that by 2027, six in ten workers will need to change their skill sets to do their jobs, whether by upskilling or retraining. Yet only half of employees currently have access to adequate training opportunities.

So how should teams keep their employees engaged amid the rapid change AI is bringing? By leading an internal transformation that focuses on your employees, and by thinking carefully about how to create a coherent, connected set of people and technology experiences that give workers more insight into their careers and the tools to improve.

The new wave of regulations sheds new light on how to think about bias in decisions about people, such as hiring. But as these technologies are used by people inside and outside the workplace, it is more important than ever for HR and business leaders to understand both the technology and the regulatory landscape, and to drive a responsible AI strategy across their teams and companies.
