In this guest post, Douglas Nicol (lead image), partner and tech maestro at The Works, Part of Capgemini, talks AI and how, whether we realise it or not, it is starting to disrupt buyer behaviour patterns. But, he adds, it’s not without its failings, too…
I think we have all experienced it: sometimes Generative AI loses the plot and spits out dodgy content that is entertainingly incorrect, untruthful, or just plain weird. These are known as AI hallucinations, and they can be highly problematic. AI is starting to change the way people buy products and services, and as marketers we need to work hard to stay ahead of this trend so we can minimise the inherent risks AI poses if used incorrectly.
First, let’s look at some common misconceptions about Generative AI, to help set some context:
- Only geeks use Generative AI: in fact, Capgemini research shows it is mainstream across age cohorts.
- People just use AI for writing blog posts and assignments: in fact, Capgemini research finds chatbots and asking questions are the number one use case.
- AI is not mainstream yet: in fact, Capgemini research shows 70% of purchase decisions in certain categories are influenced by AI.
In short, AI is starting to disrupt the decision-making and buying behaviours of mainstream Australians, and it works in their favour across a myriad of emerging use cases: comparing products, understanding what reviews say about products, comparing prices, and asking questions of brands themselves by getting Generative AI to interrogate brand websites. And here is why AI hallucination is problematic for brands: if your digital and content footprint is not AI-friendly, you will be training AI to get it all wrong about your brand, your product features, and your value for money.
So, why does AI hallucinate? There are two core reasons:
- 1. Compression sometimes causes confusion
This one is technical, but in essence, knowledge learned from around 40 GB of text is squeezed into a model of roughly 3 GB of weights (about 13 times compression), which means the content sometimes gets a bit jumbled, and the AI gets confused and trippy.
- 2. Low-quality training data about your brand
This is the one you have more control over: hallucination also results from limitations or the inherent bias of the training data in the LLM. Internet content trains Generative AI to answer questions, so how clean and accessible is your brand’s training data?
Firstly, on brands’ own websites, Generative AI often cannot be trained on video content, PDFs, or poorly structured HTML. If the content is not AI-optimised, brands and businesses lose control of the narrative. And their sales funnel.
On non-owned assets, if Generative AI cannot be trained on your content, it will find training data on a myriad of other (sometimes dodgy) websites, and suddenly there is an inaccurate and potentially dangerous set of ‘facts’ about your brand, products and services.
So where can brands start to minimise the threat of AI hallucination? A good place to start is to prompt a Generative AI chatbot to answer a bunch of questions about your website in isolation from the rest of the internet: take typical questions sourced from Google search queries and your call centre, and see what answers you get back from the AI.
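For teams that want to try this audit quickly, here is one possible way to script it, a minimal sketch rather than a definitive implementation. It assumes you have saved a page’s copy as plain text in a (hypothetical) site_content.txt file, that you are using the OpenAI Python client, and that the example questions are placeholders for real ones pulled from your search queries and call-centre logs:

```python
# A minimal "ask my website" audit sketch, not a prescribed method.
# Assumes: the OpenAI Python SDK (pip install openai), an OPENAI_API_KEY
# environment variable, and a site_content.txt file you have created by
# pasting in the plain text of a product or FAQ page. The file name and
# the example questions below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()

# Plain-text copy of the page you want to test.
with open("site_content.txt", encoding="utf-8") as f:
    site_content = f.read()

# Stand-in questions; replace with real ones from Google search queries
# and call-centre logs.
questions = [
    "What is the returns policy?",
    "Which sizes does this product come in?",
    "How does the price compare with similar products?",
]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer using only the website content provided. "
                    "If the content does not contain the answer, say so."
                ),
            },
            {
                "role": "user",
                "content": f"Website content:\n{site_content}\n\nQuestion: {question}",
            },
        ],
    )
    print(question)
    print(response.choices[0].message.content)
    print("-" * 40)
```

The key point is the instruction to answer only from the supplied content: comparing those constrained answers with what a general chatbot says about your brand shows where the gaps and hallucinations are creeping in.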
Chances are there will be some hallucination happening in those answers, and brands will quickly discover why this threat needs to be taken seriously.
From there you need to identify, understand, and monitor how customers and prospects are using AI to make their purchase decisions in new ways. If you understand the behaviours, you can start to figure out solutions.
In this moment, brands can choose to be on the front foot with this significant inflection point in marketing, or do nothing and allow their brand to fall victim to the consequences of AI hallucination.