New data from the BBC won't help anyone with trust issues.
The British news org recently gave four prominent AI chatbots (OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini, and Perplexity) access to its site, then asked the bots questions about the news, prompting them to use BBC articles as sources where possible.
Its findings:
Even though it's impossible for them to consume LSD, generative AI models have a tendency to hallucinate, presenting incorrect or misleading results with the confidence of Neil deGrasse Tyson.
And research indicates this may be more of a feature than a bug. A July 2024 study from Cornell concluded that no matter how advanced language models become, hallucinations are inevitable.
🤖 But… solutions may exist. One such potential fix is called retrieval augmented generation, which the Wall Street Journal compares to looking through a library of photos from the past year before writing a holiday letter, instead of writing it all from memory.
Another potential fix? Teaching the models to say: "I don't know."
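The photo-library analogy can be sketched in a few lines of code. This is a minimal, hypothetical illustration of the retrieval step, not any product's actual implementation: the `SOURCES` snippets and the naive keyword-overlap ranking are assumptions made for the example, where real systems use semantic search over large document indexes.

```python
# Minimal sketch of retrieval augmented generation (RAG).
# The "library" is a tiny list of source snippets; retrieval is
# naive keyword overlap; the retrieved text is prepended to the
# prompt so a model answers from real sources instead of memory.

SOURCES = [
    "The BBC tested four AI chatbots on news questions.",
    "Bennu asteroid samples contain organic molecules.",
    "Hallucinations may be inevitable in language models.",
]

def retrieve(query: str, sources: list[str], k: int = 1) -> list[str]:
    """Rank sources by shared words with the query; return the top k."""
    words = set(query.lower().split())
    ranked = sorted(
        sources,
        key=lambda s: len(words & set(s.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, sources: list[str]) -> str:
    """Prepend retrieved context so the answer is grounded in it."""
    context = "\n".join(retrieve(query, sources))
    return (
        f"Context:\n{context}\n\n"
        f"Question: {query}\n"
        "Answer using only the context above; "
        "if the context is insufficient, say \"I don't know.\""
    )

print(build_prompt("What did the BBC test?", SOURCES))
```

The final instruction in the prompt also folds in the second fix mentioned below: nudging the model to admit when it doesn't know.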
🤖 The next live-action sequel in the Terminator franchise could soon play out in real life. Google updated its ethical guidelines around AI this week, removing a company-wide pledge to avoid using the technology to develop potentially harmful products like weapons or surveillance.
🤖 OpenAI on Sunday unveiled Deep Research, a new AI agent that's capable of conducting complex, multi-step online research into a variety of topics (a DeepSeek, if you will).
☄️🧬 Samples from the near-Earth asteroid Bennu, which were recently collected by NASA, contain a wide assortment of organic molecules, including many of the crucial building blocks of life.