At Advertising Week APAC, Jason Juma-Ross, director, technology industry strategy at Meta, extolled the virtues of artificial intelligence (AI) to an audience of adland professionals.
However, while Juma-Ross wowed the crowd with his enthusiasm for AI and the potential futures it might create, when it came to current applications the audience was less than impressed.
Using a film created by Paris-based creative Thibault Odiot, Juma-Ross asked the crowd:
“How many people does it take to make a car ad or three car ads?
“I’m sure you’re visualising maybe a trip to the desert in WA or up some beautiful European mountain round the twisty switchback roads, lots of international flights and a big crew, maybe a director.”
Juma-Ross explained that Odiot created his advert with a phone, a drone, a laptop and the real-time 3D engine Unity.
“It’s about getting things done more efficiently and more effectively,” said Juma-Ross.
“And whether you’re a creative strategist, a performance marketer or an executive, there are tonnes and tonnes of different application areas in that growth bucket.”
However, there was no applause and little visible interest from the audience. Most seemed thoroughly underwhelmed by the display.
And while AI is certainly capable of getting things done more efficiently, whether it is more effective remains a matter for debate.
One industry insider told B&T after the event that they were thoroughly unimpressed by the talk and Juma-Ross’ framing of the scale of AI’s potential for creative applications.
While there are huge applications for AI in certain advertising settings, the jury still seems to be out on the creative side. Juma-Ross told the crowd about the benefits clients had seen from Meta's Advantage+ Shopping Campaigns tool: he cited a 17 per cent reduction in CPA, a 32 per cent bump in ROAS and a 25 per cent improvement in cost per incremental conversion.
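To put those percentages in concrete terms, here is a quick arithmetic sketch. The baseline spend, conversion and revenue figures below are purely hypothetical and chosen for round numbers; only the percentage improvements come from the talk.

```python
# Hypothetical baseline campaign figures (illustrative only, not Meta data)
spend = 10_000.0       # total ad spend ($)
conversions = 200      # attributed conversions
revenue = 40_000.0     # attributed revenue ($)

# Standard definitions of the two metrics quoted in the talk
cpa = spend / conversions   # cost per acquisition: 50.0
roas = revenue / spend      # return on ad spend: 4.0

# Applying the quoted improvements
new_cpa = cpa * (1 - 0.17)    # 17% reduction in CPA -> 41.5
new_roas = roas * (1 + 0.32)  # 32% bump in ROAS -> 5.28

print(f"CPA: {cpa} -> {new_cpa}, ROAS: {roas} -> {new_roas}")
```

On those assumed baselines, CPA drops from $50 to $41.50 and ROAS rises from 4.0 to 5.28.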
But even he acknowledged that tools such as Large Language Models — think ChatGPT — “have severe limitations.”
“They’re monomodal — they learn from one or two sources of data — whereas if you think about humans and animals, the way that we learn is that we incorporate a whole bunch of different data from different sources. Because of that ability to combine different data, and something that we don’t really understand yet like few-shot learning in humans and animals, we can create common sense or physical models of the world very quickly. A two-year-old has a physical model of knocking a coffee cup off a table. They know what happens.”
He went on to explain that auto-regressive Large Language Models just predict the next thing.
“They don’t have a higher order concept of what the thing should be. Is it a dog or is it a bird? Or is it a piece of text beyond that next token.”
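The "just predict the next thing" point can be made concrete with a toy model. The sketch below is a deliberately simplified bigram predictor, nothing like a real Large Language Model in scale or architecture, but it shows the core autoregressive idea: given the current token, emit the statistically most likely next token, with no higher-order concept of the whole sentence.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count, for each token, which tokens follow it in the corpus."""
    follow = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        follow[cur][nxt] += 1
    return follow

def predict_next(follow, token):
    """Greedily pick the most frequent continuation of `token`.

    There is no notion of what the sentence 'should be' overall,
    only a local guess at the next token.
    """
    if token not in follow:
        return None
    return follow[token].most_common(1)[0][0]

# A tiny illustrative corpus
corpus = "the dog chased the bird and the dog barked".split()
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "dog" (follows "the" twice, vs "bird" once)
```

Real LLMs replace the frequency table with a neural network over long contexts, but the generation loop is the same shape: predict one token, append it, predict the next.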
There is a lot of research underway to improve the performance of generative AI, something Juma-Ross made clear to the audience. The tools will improve and the computers will get smarter and more creative. But until then, we'd hold off on firing your creative agency, and on leaning too heavily on AI, if you fancy scooping a Cannes Lion or an Effie award.