-
U.S. lawyers disciplined for citing falsehoods as the legal industry's 'AI hallucination' problem intensifies
Feb. 19 (Bloomberg) -- Morgan & Morgan, a leading U.S. personal injury law firm, sent an urgent email to more than 1,000 of its attorneys warning that AI can fabricate nonexistent case law and that they could be fired for using fictitious content in court filings, Reuters reported today. The warning came after a federal judge in Wyoming threatened to sanction two of the firm's attorneys for citing false case law in a lawsuit against Walmart. One of the attorneys admitted in court documents that he used a "creation...
-
Amazon Launches 'Automated Reasoning Checks' Tool to Combat AI Hallucinations
Dec. 4 (Bloomberg) -- Amazon Web Services (AWS) has unveiled a new tool designed to address hallucinations generated by AI models. 1AI notes that at the re:Invent 2024 conference in Las Vegas, AWS introduced Automated Reasoning checks, a tool that verifies the accuracy of a model's response by cross-referencing information provided by the customer. AWS claims this is the "first" and "only" protection against hallucinations. However, this claim can be...
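The article describes Automated Reasoning checks only at a high level: the model's answer is verified against information the customer supplies. As a rough illustration of that general pattern (not AWS's implementation; the policy format and the extract_claims() step are invented for this sketch), a rule-based check might look like this:

```python
# Sketch of rule-based response checking: encode customer-supplied facts as
# explicit rules, pull factual claims out of a model response, and flag any
# claim the rules contradict. Purely illustrative; not the AWS tool.

POLICY_RULES = {
    # claim key -> value the customer's own documentation states
    "refund_window_days": 30,
    "free_shipping_minimum_usd": 50,
}

def extract_claims(response_text: str) -> dict:
    """Toy claim extraction; a real system would use an NLP or LLM step."""
    claims = {}
    if "60 days" in response_text:
        claims["refund_window_days"] = 60
    if "30 days" in response_text:
        claims["refund_window_days"] = 30
    return claims

def check_response(response_text: str) -> list:
    """Return claims in the response that conflict with the customer policy."""
    conflicts = []
    for key, value in extract_claims(response_text).items():
        expected = POLICY_RULES.get(key)
        if expected is not None and expected != value:
            conflicts.append(f"{key}: response says {value}, policy says {expected}")
    return conflicts

print(check_response("Customers may return items within 60 days."))
# -> ['refund_window_days: response says 60, policy says 30']
```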
-
Google Launches New Paid Feature to Fight AI Hallucination Problem with Search Results
Google issued a press release yesterday (October 31) announcing the launch of Grounding with Google Search in Google AI Studio and the Gemini API, which helps users verify the content of AI responses against Google Search results. This addresses a challenge facing mainstream large models: most large language models (LLMs), including those from OpenAI, Anthropic, and Google, have a knowledge cutoff determined by their training data, so answering...
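Grounding here means attaching search results to a response so its claims can be checked against them. The sketch below illustrates only that general idea with a crude word-overlap heuristic; it is not the Gemini API, and the threshold and helper functions are assumptions made for the example. Production systems rely on retrieval plus entailment or citation models rather than word overlap.

```python
# Illustrative grounding check: compare each sentence of a model answer
# against retrieved search snippets and flag sentences with no support.

import re

def sentence_supported(sentence: str, snippets: list, threshold: float = 0.5) -> bool:
    """True if some snippet shares at least `threshold` of the sentence's words."""
    words = set(re.findall(r"\w+", sentence.lower()))
    if not words:
        return True
    best = max(
        len(words & set(re.findall(r"\w+", s.lower()))) / len(words)
        for s in snippets
    )
    return best >= threshold

def flag_ungrounded(answer: str, snippets: list) -> list:
    """Return the answer sentences that no snippet supports."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", answer) if s.strip()]
    return [s for s in sentences if not sentence_supported(s, snippets)]

snippets = ["The Gemini API added Grounding with Google Search on October 31."]
answer = "Grounding with Google Search launched on October 31. It is free for all users."
print(flag_ungrounded(answer, snippets))
# -> ['It is free for all users.']
```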
-
Chatbots talking nonsense? Oxford researchers use semantic entropy to see through AI "hallucinations"
In recent years, artificial intelligence has flourished, and applications such as chatbots have become increasingly popular. People can get information from chatbots such as ChatGPT with simple prompts. However, these chatbots are still prone to "AI hallucination," that is, giving wrong answers and sometimes even dangerous information. Causes of hallucination include inaccurate training data, insufficient generalization ability, and side effects of the data-collection process. Researchers at the University of Oxford, however, have taken a different approach and detailed a newly developed method in the latest issue of the journal Nature...
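The core idea of the Oxford method, as reported, is to sample several answers to the same question, cluster answers that share a meaning, and measure the entropy over those clusters; high semantic entropy signals likely confabulation. The sketch below is a simplified illustration of that idea, not the authors' code: same_meaning() is a stub standing in for the bidirectional-entailment check the paper performs with a language model.

```python
# Simplified semantic-entropy sketch: cluster sampled answers by meaning,
# then compute the entropy of the cluster-size distribution.

import math

def same_meaning(a: str, b: str) -> bool:
    """Stub for a bidirectional entailment check (a entails b and b entails a)."""
    return a.strip().lower() == b.strip().lower()

def semantic_entropy(answers: list) -> float:
    """Group answers into meaning clusters and return the entropy over clusters."""
    clusters = []
    for ans in answers:
        for cluster in clusters:
            if same_meaning(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    total = len(answers)
    return sum(-(len(c) / total) * math.log(len(c) / total) for c in clusters)

print(semantic_entropy(["Paris", "paris", "Paris"]))     # 0.0  -> answers agree
print(semantic_entropy(["Paris", "Lyon", "Marseille"]))  # ~1.1 -> answers disagree, likely confabulation
```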
-
Microsoft is working to cure AI hallucinations by using technology to block and rewrite unfounded information in real time
As GPT-4 makes headlines for conquering standardized tests, Microsoft researchers are subjecting other AI models to a very different kind of test: one designed to trick them into fabricating information. To cure this condition, known as "AI hallucination," they set a text-retrieval task that would give most people a headache, then tracked and improved the models' responses, an example of Microsoft's work in measuring, detecting, and mitigating AI hallucinations. "Microsoft wants all of its AI systems to...
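The snippet ends before detailing the mechanism, but the headline's "block and rewrite unfounded information in real time" suggests a detect-then-correct loop. The sketch below shows only that control flow under assumed stubs; detect_ungrounded() and rewrite_grounded() are hypothetical stand-ins, not Microsoft's API.

```python
# Detect-then-correct sketch: find answer sentences no source supports,
# then rewrite the answer before it reaches the user. Illustrative only.

def detect_ungrounded(answer: str, sources: list) -> list:
    """Hypothetical stub: return answer sentences that no source text contains."""
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    return [s for s in sentences if not any(s.lower() in src.lower() for src in sources)]

def rewrite_grounded(answer: str, ungrounded: list) -> str:
    """Hypothetical stub: drop unsupported sentences; a real system would regenerate them from the sources."""
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    return ". ".join(s for s in sentences if s not in ungrounded) + "."

def guard(answer: str, sources: list) -> str:
    """Pass grounded answers through unchanged; rewrite the rest."""
    ungrounded = detect_ungrounded(answer, sources)
    return rewrite_grounded(answer, ungrounded) if ungrounded else answer

sources = ["The warranty covers parts and labor for 12 months."]
answer = "The warranty covers parts and labor for 12 months. It also covers accidental damage."
print(guard(answer, sources))
# -> "The warranty covers parts and labor for 12 months."
```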
-
Philosophy researchers at the University of Glasgow in the UK talk about "AI hallucination": it is more accurate to describe it as "nonsense"
For a long time, people have referred to the plausible but erroneous answers produced by large language models as "AI hallucinations." However, three philosophy researchers from the University of Glasgow in the United Kingdom recently argued that "AI hallucination" is not an accurate description. Their paper was published on June 8 (local time) in the journal Ethics and Information Technology. It argues that chatbots' behavior of "making up" answers should not be called "hallucinating" but rather "bullshitting...