Cambridge Dictionary names ‘hallucinate’ word of the year as AI influence takes center stage

The Cambridge Dictionary has named “hallucinate” its word of the year for 2023, after the word was given an additional definition relating to artificial intelligence (AI). The new sense covers cases in which AI systems produce false information, known as AI hallucinations.



These hallucinated outputs can sound plausible and convincing even when they contain factual errors or logical flaws.


What is the word of the year?

The traditional definition of “hallucinate” is to perceive nonexistent sights, sounds, sensations, or smells, typically as a result of a medical condition or drug use.

In the context of artificial intelligence, however, the term has taken on a new meaning for the digital age.

Why does it have an AI connection?

“AI hallucinations remind us that humans still need to apply their critical thinking skills to using these tools,” according to the Cambridge Dictionary website. Large language models (LLMs) are only as reliable as the data they learn from, and human expertise may be more important than ever in creating the reliable, up-to-date material on which they are trained.

AI hallucinations have already had real-world consequences. A few months ago, for example, lawyers at a US law firm used ChatGPT for legal research and ended up citing non-existent cases in court; they were fined $5,000 (approximately Rs 4 lakh).


Additionally, Google’s promotional video for its Bard chatbot included a factual error about the James Webb Space Telescope, which observers quickly spotted.

According to NDTV, Wendalyn Nichols, publishing manager of the Cambridge Dictionary, underlined the significance of AI hallucinations. She acknowledged AI’s ability to quickly extract and consolidate accurate information from large amounts of data. However, she cautioned that the more original the work an AI is asked to produce, the more likely it is to go astray.

How did experts react?

Meanwhile, Henry Shevlin, an AI ethicist at the University of Cambridge, offered insight into the use of the term “hallucinate” to describe errors produced by systems like ChatGPT.

He noted that the word reflects our tendency to anthropomorphize AI, describing its errors as if they were the experiences of a human mind. While the spread of false information is nothing new, he added, it is usually attributed to human failings such as spreading rumors or fake news; a hallucination, by contrast, is an error produced by the machine itself.

The term acknowledges that AI systems sometimes produce inaccurate content. Researchers around the world, including teams at OpenAI, Google, and Microsoft, are working to reduce AI hallucinations.

One approach is to validate AI outputs by cross-checking them against reputable sources; other researchers are investigating techniques that incorporate human feedback.

What do you think about this? Tell us in the comments.

For more trending stories, follow us on Telegram.
