Have you ever been having a perfectly normal chat with your favorite AI chatbot, only to realize it's been confidently lying to your face?
In this episode of That’s Science, we dive into the glitchy, often confusing world of AI hallucinations - those moments when a large language model serves up fabricated information as if it were established fact. If you’ve ever felt misled by a chatbot, you aren't alone.
Joining us is Dr Wei Zing, who breaks down why these digital delusions happen and explores a surprising claim: even if developers could eliminate these errors, they might just choose to keep them around.