I went to a developer conference and accidentally learned something profound about human nature. It started innocently enough – the All Things AI Conference in Durham, NC had a title too good to pass up.

What I didn’t expect was to be the only marketer among 2,500 developers, nodding along as whurly, CEO of Strangeworks (one name, all lowercase), dove deep into quantum computing and AI. I was in over my head. But sometimes that’s exactly where the best insights hide.

It wasn’t until Luis Lastras, Director of Language and Multimodal Technology at IBM, began talking about “small models” that I finally found something I recognized. Then Luis said something that struck me, and I suspect I’m not alone in not having realized it: “hallucinations are intentional.” Say what?

According to Luis, hallucinations are a way for developers to learn how models work. Because the models operate autonomously, they don’t filter what they output – at least not yet. Think of it as letting your grandfather, the one who lost his filter, loose at a dinner party.

It’s one of the things IBM is working on: small models that validate outputs and commands at various stages of the process to reduce hallucinations.
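
To make that concrete, here’s a minimal sketch of the validator pattern as I understood it. The generator and validator below are trivial stand-in Python functions, not IBM’s models or API – the point is the structure: every claim the big model produces gets checked against the original prompt before it reaches the user.

```python
# Sketch of the "small model as validator" pattern (my reconstruction,
# not IBM's actual pipeline). Both functions below are trivial stand-ins;
# in the real thing the generator and the validator are trained models.
import re

def tokens(text: str) -> set[str]:
    """Lowercase word tokens with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def generate_answer(prompt: str) -> list[str]:
    """Stand-in for the large model: returns an answer as separate claims.
    Hard-coded here to mirror the Mars demo from the talk."""
    return [
        "Mars has two moons, Phobos and Deimos.",
        "Mars is about 225 million km from Earth on average.",  # the unrequested extra
    ]

def claim_was_requested(prompt: str, claim: str) -> bool:
    """Stand-in for a small validator model. Here it just checks keyword
    overlap with the prompt; a real validator would judge relevance properly."""
    return len(tokens(prompt) & tokens(claim)) >= 2

def answer_with_validation(prompt: str) -> str:
    claims = generate_answer(prompt)
    kept = [c for c in claims if claim_was_requested(prompt, c)]
    for extra in (c for c in claims if not claim_was_requested(prompt, c)):
        print("Flagged as unrequested:", extra)
    return " ".join(kept)

print(answer_with_validation("How many moons does Mars have?"))
# Flagged as unrequested: Mars is about 225 million km from Earth on average.
# Mars has two moons, Phobos and Deimos.
```

The overlap heuristic is deliberately dumb; the interesting part is where the check sits – between the model and you.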

Anyone who’s worked with AI has experienced hallucinations, from made-up sources to statistics that are just plain wrong. But what Lastras shared was something I hadn’t realized: it’s the little extra pieces of information, intended to be helpful, that AI tools add in even though they weren’t asked for in the prompt.

For example, he showed a demo of a prompt asking how many moons Mars has. The response came back with two and their names, plus an added extra: the distance from Earth, which was not requested.

The distance between the planets may have been right, but verifying it requires another step – which brought to mind a fascinating article I had read over the weekend.

In a study Elon University conducted last year with 500 US adults who use AI, almost 70% believed that AI models are at least as smart as they are, with 26% believing they are “a lot smarter.”

What is more concerning is that we believe AI is thinking like humans. As the Wall Street Journal article “Why Even Smart People Believe AI Is Really Thinking” goes on to say, “our cognitive biases developed to help us survive in complex social environments…evolved to view linguistic fluency as a proxy for intelligence, engagement and helpfulness as indicators of trustworthiness.”

The same innate tendency that leads us, as social creatures who must cooperate for survival, to trust one another is leading us to trust systems that appear to listen, understand, and want to help us.

The more AI tools and bots act like humans, the more likely we are to trust them. Which brings us back to the hallucination. The more AI tools act like they’re being helpful, the more likely we are to miss that “little extra” piece of information that wasn’t requested.

The convergence of intentional hallucinations and our deeply wired human instinct to trust fluent, helpful communicators creates a perfect storm of misplaced confidence.

As AI tools grow more sophisticated and human-like, our evolutionary instincts will only make it harder to maintain the critical distance needed to catch the errors, embellishments, and unrequested additions that slip through.

The good news is that awareness is the first step. Whether it’s IBM’s small models validating outputs in real time or simply slowing down to verify what AI hands us, the antidote to a cognitive bias millions of years in the making is something refreshingly simple – a healthy dose of human skepticism.
