
Cambridge Dictionary reveals word of the year – and it has a new meaning thanks to AI



Cambridge Dictionary has declared “hallucinate” the word of the year for 2023 – while giving the term an additional meaning relating to artificial intelligence technology.

Traditionally, to “hallucinate” is to seem to sense something that does not exist, usually because of a health condition or drug-taking – but the word now also relates to AI producing false information.

The additional Cambridge Dictionary definition reads: “When an artificial intelligence (= a computer system that has some of the qualities that the human brain has, such as the ability to produce language in a way that seems human) hallucinates, it produces false information.”

This year has seen a surge in interest in AI tools such as ChatGPT. The accessible chatbot has even been used by a British judge to write part of a court ruling, while an author told Sky News how it was helping with their novels.

However, it doesn’t always deliver reliable and fact-checked prose.

AI hallucinations, also known as confabulations, occur when the tools produce false information, which can range from suggestions that seem perfectly plausible to ones that are completely nonsensical.

Wendalyn Nichols, Cambridge Dictionary’s publishing manager, said: “The fact that AIs can ‘hallucinate’ reminds us that humans still need to bring their critical thinking skills to the use of these tools.


“AIs are fantastic at churning through huge amounts of data to extract specific information and consolidate it. But the more original you ask them to be, the likelier they are to go astray.”



Adding that AI tools using large language models (LLMs) “can only be as reliable as their training data”, she concluded: “Human expertise is arguably more important – and sought after – than ever, to create the authoritative and up-to-date information that LLMs can be trained on.”

AI can hallucinate in a confident and believable manner – which has already had real-world impacts.

A US law firm cited fictitious cases in court after using ChatGPT for legal research, while Google’s promotional video for its AI chatbot Bard contained a factual error about the James Webb Space Telescope.

‘A profound shift in perception’

Dr Henry Shevlin, an AI ethicist at Cambridge University, said: “The widespread use of the term ‘hallucinate’ to refer to mistakes by systems like ChatGPT provides […] a fascinating snapshot of how we’re anthropomorphising AI.”

“‘Hallucinate’ is an evocative verb implying an agent experiencing a disconnect from reality,” he continued. “This linguistic choice reflects a subtle yet profound shift in perception: the AI, not the user, is the one ‘hallucinating’.

“While this doesn’t suggest a widespread belief in AI sentience, it underscores our readiness to ascribe human-like attributes to AI.

“As this decade progresses, I expect our psychological vocabulary will be further extended to encompass the strange abilities of the new intelligences we’re creating.”



