SAS chief operating officer Gavin Day has warned marketers they can’t afford to “cross the line” with AI-generated content, saying a lack of transparency could destroy trust with brands.
The warning follows research conducted by Ideally last month and commissioned by B&T, which found nearly a third of Australians said they would trust a brand less after discovering it used AI-generated advertising – despite most admitting they can’t reliably tell the difference.
In his role, Day oversees the company’s use of AI, including the development of SAS Customer Intelligence 360, also known as Marketing 360.
Yesterday, SAS announced its expansion of the agentic AI functions in Marketing 360, adding specialised agents for specific marketing tasks. According to the company, the update is intended to “preserve human oversight as marketers use more AI across campaigns, audiences and customer journeys.”
Rather than relying on a single general-purpose tool, the system uses multiple agents, each designed for a specific role. They can work across Customer Intelligence 360 functions including audiences, journeys, destinations and marketing decisioning.
Day said that while marketers are encouraged to use AI, brands “will lose trust in the marketer if they aren’t being transparent about how exactly they are using it”.
He sat down with B&T during SAS Innovate in Dallas to discuss the responsibility marketers carry when deploying the technology.
“I think we have an obligation to tell people the technology under the covers that we use,” Day said. “Trust is very easy to lose and it’s even harder to get back.”
Ideally’s national survey of 400 Australians also found that while 88 per cent of respondents are not fully confident they can distinguish AI-generated ads from human-made ones, a majority still want disclosure.
And when asked about their expectations of brands using AI in advertising, 58 per cent said brands should be transparent when AI is used.

“As soon as you start providing completely AI-created content without acknowledging where it came from, you’re going to lose the trust of consumers,” he told B&T.
Day said marketers who fail to disclose AI usage not only risk reputational damage but may soon run into regulatory pressure, particularly where personal data is involved.
“We’re going to get forced into that with regulation,” he said. “You’ll need to explain what models are being used, what data is behind it – and consumers will expect the ability to opt out.”
Day also discussed where AI use becomes problematic, highlighting campaigns involving vulnerable groups as “crossing the line”.
“If you are using AI when we’re looking at vulnerable populations, that is how you cross the line,” he said. “That also borders on dishonesty.”
While brands continue to experiment with AI-generated creative to reduce costs and speed up production, Day warned “efficiency gains shouldn’t come at the expense of authenticity”.
“Humans gravitate to other humans. There is a human creativity and a human touch that is very different to what AI is going to create,” he said.
“I hope we don’t lose that.”
So what’s next, you might ask?
Day believes “the industry is still working through these tensions”, describing the current moment as both a “discovery phase” and “confusion phase”.
“I think over the next 12 months we’ll see widespread enterprise adoption,” he said. “It will move from personal productivity, like ‘Help me write this email’, to solving real corporate and industry problems.”
But while that shift happens, he stressed marketers must stay grounded in fundamentals, starting with trust.
“Authenticity is a big thing,” Day said. “The organisations that cross that line for cost savings or speed are the ones whose brands will struggle.”
B&T travelled to Dallas as a guest of SAS.