Hallucination is widely recognized as a significant drawback of large language models (LLMs). Many works have attempted to reduce hallucination, but these efforts have so far been mostly empirical and therefore cannot answer the fundamental question of whether hallucination can be completely eliminated. In this paper, we formalize the problem and show that hallucination cannot be completely eliminated in LLMs.
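The abstract does not spell out the formalization, so as a rough illustration only, here is a minimal, hedged LaTeX sketch of the kind of diagonalization-style argument such impossibility results typically rest on. It assumes an LLM is modeled as a computable function over strings and that "hallucination" means disagreement with a ground-truth function; the symbols $h_i$, $f$, and $s_i$ belong to this sketch, not necessarily to the paper.

% Hedged sketch, not the paper's exact formalization.
% Assumption: an LLM is a total computable function h over strings, and
% hallucination means h(s) != f(s) for some input s and ground truth f.
\documentclass{article}
\usepackage{amsmath, amsthm}
\newtheorem{claim}{Claim}
\begin{document}
\begin{claim}
Let $h_1, h_2, \dots$ be a computable enumeration of (total computable) LLMs.
Then there exists a computable ground-truth function $f$ such that every
$h_i$ hallucinates, i.e.\ $\exists s : h_i(s) \neq f(s)$.
\end{claim}
\begin{proof}[Proof sketch]
Fix distinct inputs $s_1, s_2, \dots$ and define $f(s_i)$ to be any answer
different from $h_i(s_i)$ (diagonalization); since the enumeration is
computable, so is $f$. Then $h_i(s_i) \neq f(s_i)$ for every $i$, so no LLM
in the enumeration answers all inputs correctly.
\end{proof}
\end{document}

On this modeling assumption, no fixed LLM can avoid hallucinating on every possible ground truth; whether the paper's formal setup matches this sketch should be checked against the paper itself.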
Do Foundation Model Providers Comply with the EU AI Act?
Authors: Rishi Bommasani, Kevin Klyman, Daniel Zhang, and Percy Liang
Stanford researchers evaluate foundation model providers such as OpenAI and Google for their compliance with the proposed EU AI Act.
The Expanding Dark Forest and Generative AI
Proving you're a human on a web flooded with generative AI content