Dr Catriona Wallace is an expert in Artificial Intelligence and the Metaverse, an adjunct professor, keynote speaker and founder of the Responsible Metaverse Alliance.
Her business, Flamingo AI, was only the second woman-led (CEO & Chair) business ever to list on the Australian Stock Exchange. She will be speaking to Australia’s best and brightest in Media and Marketing at this year’s iMedia Modern Media Summit, where she will address how marketers can use AI responsibly.
Here she speaks to B&T about some of the risks and opportunities posed by AI.
“We’re Already Well Affected By AI”
Every time there’s a great technological advancement, the world seems to polarise into two camps. This is true for AI.
In one camp, we have the proud and slightly blinded pioneers, enthusiastically declaring that everyone and their dog should have adopted AI yesterday; in the other, we have the prophets of doom, quietly muttering that AI will be the death of us all.
The problem is, whilst other great advancements such as the steam engine and the printing press were largely out of the hands of your average flawed human being, the same cannot be said for AI.
In fact, as Dr Catriona Wallace says, all of us are already in touch with AI on a regular basis.
“At the moment, on average, we adults interact with AI 28 times a day, and teenagers interact with AI around 150 times a day. So we’re already well affected by AI”.
With the technology already so embedded in our lives, the stakes for using it responsibly are high, especially for marketers, who communicate with large numbers of people on a daily basis.
So Do We Need To Be Worried About AI?
Dr Catriona Wallace is neither a blinded pioneer nor a prophet of doom. She recognises the huge benefit that AI can bring to society (up to 30 per cent of our daily tasks, on average, could be done by AI, she says), but she also acknowledges that there are risks we need to consider.
“The main problem with AI is that it is unregulated.”
It is this unregulated nature that makes AI potentially risky.
Whilst some governmental bodies, such as the EU, are looking to put regulation in place, there are “no laws or governance internationally for how AI is developed or deployed”.
Technology has been shaping the landscape for some time; the difference with AI, however, is that the algorithms are constantly developing.
“AI is one of the most powerful and scalable technologies that we have. That’s a challenge. The other challenge with AI is that when these algorithms and the models that create the automation do what we call either supervised or unsupervised learning, the machines can actually start to learn and behave in ways that the creators of the original algorithms don’t understand”.
That is “particularly dangerous”.
Naturally, when it comes to technology doing things we don’t understand, most of our minds will race to technology turning on us and taking over the world, but Wallace gives a much more concrete (and less scary) example.
She recently asked ChatGPT to write her bio for her. At first, things seemed good: “it did about a half page and it said, Dr Catriona Wallace is an expert on the metaverse and AI and is the founder of the Responsible Metaverse Alliance”.
Things took a fictional turn, however, when it got to her authorship. ChatGPT went on to say that she was the “author of a book called Customer Experience: The New Way”.
This was interesting, Wallace said, given she’d never written a book called Customer Experience: The New Way.
Essentially, the AI had scanned everything on the internet about her, seen that she had written a book, been unable to find its name, and “confidently” told her she’d written a book that didn’t exist.
“That was just my bio, that’s not harmful,” Wallace says, laughing. “But think about that when AI is built into military models, or into predicting who gets bank credit or not, or into who and how health care should be rolled out. These unintended consequences are potentially very damaging”.
How Can Governments And Businesses Use It Responsibly?
With AI being so new, I ask Wallace if it’s possible to know how to regulate something that is still developing.
“I don’t think governments know enough,” she says.
“If we look at government leaders generally, but also at the leaders in Australia who sit on the boards of ASX companies, not many of them are technologists; most of them are accountants or lawyers or salespeople. So I do believe that most of our politicians and government leaders are not particularly technology literate”.
Our current minister, Ed Husic, with whom Wallace works, is “not bad”, she says. He recently arranged a meeting with Australia’s AI experts to talk about Australia’s response to generative AI.
Whilst businesses are aware of the importance of responsible AI, there is a big gap between ideas and execution.
Research shows that 84 per cent of business leaders “think that they should be doing responsible AI” but only about “26 per cent have actually physically got processes and techniques in place to do it”.
What Actually Is Responsible AI?
There’s a very clear business benefit to understanding AI. As Wallace says, “We know that organisations that are mature with AI strategy outperform those who are not”.
Whilst around 60 per cent of Australian businesses do have some kind of AI strategy in place, very few have approached it as a responsible AI-first strategy, because they don’t know much about what responsible AI is.
So what actually is it?
“When we talk about responsible AI, we’re really talking about the need to have AI systems that are transparent, ethical, that there’s accountability, and that the systems and the design and deployment of the systems adhere to laws and rules and regulations, but also societal norms. That’s what we’re talking about with responsible AI”.
“And then when we look at what a model for Responsible AI is, it’s around governance, data security, privacy; it’s around leadership; it’s around culture. It’s monitoring, auditing, reviewing, and putting in tools to ensure that the AI is acting responsibly. So there are six core categories that responsible AI sits in; it’s not just the ethics of AI. Ethical AI is a component of a responsible AI strategy”.
What Are Some Of The Ethical Risks?
One of the main problems with AI is that it’s based on past data, which, in a world where gender and racial equality are finally getting the attention they deserve, can be problematic.
“So the challenge that we have with AI is that it’s been trained on existing historical data sets that already have some of society’s unfairness and biases built into them. And so when we come to looking at financial data, if women have not been well represented, or indigenous groups or minority groups have not been well represented, in the data sets that are then training algorithms that are going to do future credit or loans, then we’re just hard-coding society’s existing ills into the machines that will be automating what we do”.
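To make that mechanism concrete, here is a minimal, hypothetical Python sketch (not from Wallace or the article; the groups, incomes and numbers are all invented for illustration). A naive model fitted to past lending decisions simply reproduces the approval gap already present in its training data:

```python
# Hypothetical illustration only: synthetic data, invented numbers.
import random

random.seed(0)

def make_history(n=10_000):
    """Synthetic past lending decisions. Group B applicants were approved
    less often than group A at the same income level -- the existing
    unfairness already built into the data set."""
    rows = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        income = random.gauss(60_000, 15_000)
        creditworthy = income > 55_000
        # Historical bias: group B approvals were suppressed.
        approved = creditworthy and (group == "A" or random.random() < 0.6)
        rows.append((group, income, approved))
    return rows

history = make_history()

# "Training": the model learns the historical approval rate for each
# (group, income band) bucket -- a stand-in for any model fit to past labels.
buckets = {}
for group, income, approved in history:
    key = (group, income > 55_000)
    n_approved, n_total = buckets.get(key, (0, 0))
    buckets[key] = (n_approved + approved, n_total + 1)

def predict(group, income):
    """Approval probability the trained model assigns to a new applicant."""
    n_approved, n_total = buckets[(group, income > 55_000)]
    return n_approved / n_total

# Two identical applicants who differ only in group membership:
print("Group A, $70k income:", round(predict("A", 70_000), 2))  # ~1.0
print("Group B, $70k income:", round(predict("B", 70_000), 2))  # ~0.6
# The gap in the historical data reappears in the model's future decisions.
```

No one wrote a biased rule here; the model only mimicked past labels, which is exactly the hard-coding of existing ills Wallace describes.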
Whilst there are methods of ensuring the data that AI uses is representative of the population, Wallace warns that some of the methods being used by Silicon Valley are unethical in themselves.
“There are very wealthy Silicon Valley organisations that outsource the cleaning of the data set to Kenya. So there’s a Kenyan outsource company that has mostly women going through all of the Internet data, including all of the reprehensible hate speech and abuse, tagging that type of content so that ChatGPT doesn’t pick it up and use it and become a racist, misogynist machine.
“So who is doing that work? Well, right at the moment, low-paid Kenyan workers, on between $1.32 and $2 an hour, and to me, that’s horrendous.”
What Opportunities Does AI Create?
Despite the risks, there are still plenty of reasons to get AI right. As Wallace says, “AI is the fastest-growing technology sector in the world”. In 2022 it was valued at $327 billion, and it continues to grow as more investment pours in.
Across the world, “22 per cent of companies aren’t able to fill jobs with humans, because they’re not getting the right talent or labour force, and are now looking to AI to fill those jobs”.
In the future, 30 per cent of the work any of us do on a daily basis could be done by AI, Wallace says, meaning a huge productivity boost.
“That’s mostly where the value of AI will come in and affect how we humans do our work”.
Dr Catriona Wallace will be speaking live on AI at this year’s iMedia Modern Media Summit, the event that sees media agencies partner with their brand clients and come together to be inspired and informed over three days on the Gold Coast. Leading media and marketing professionals interested in attending the summit need to register an application of interest via the website. Dr Catriona Wallace joins an expert speaking line-up that includes Sarla Fernando, Head of Regulatory & Advocacy for ADMA; Robert Brittain, renowned consultant for advertising and media efficiency; and the hugely popular Gus Balbontin, advisor, adventurer and alternative futurist, along with many others.