Artificial Intelligence Often “Bakes In Biases We’ve Spent 50 Years Eradicating”

As artificial intelligence (AI) continues to develop at an astronomical pace, one of the most pressing questions is how to stop human bias from getting “baked into” algorithms.

Speaking at B&T’s inaugural Breakfast Club this morning, presented by Adobe, Adobe director of digital marketing Michael Stoddart said that, despite the human control behind them, algorithms are by their very nature not neutral.

The result, according to Stoddart, is problematic bias, often manifesting as conditioned sexism.

Stoddart referred back to an algorithm that completed the analogy “man is to computer” with “woman is to homemaker”.

For Stoddart, biases which are inherent in society can often be translated into technology.
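
Stoddart did not go into the mechanics, but the kind of bias he describes is easy to reproduce. The sketch below is a minimal, hypothetical illustration of analogy arithmetic over word embeddings: the vectors are invented for the example rather than taken from any real model, yet the same geometry is how systems trained on large text corpora end up answering “man is to computer as woman is to homemaker”.

```python
# A toy illustration (not the system Stoddart described) of how
# analogy arithmetic over word embeddings can reproduce bias learned
# from text. All vectors here are hand-made placeholders; real models
# such as word2vec learn them, in far more dimensions, from corpora.
import numpy as np

# Hypothetical 4-d embeddings. Dimensions loosely encode:
# [male-associated, female-associated, technical, domestic].
vectors = {
    "man":       np.array([1.0, 0.0, 0.0, 0.0]),
    "woman":     np.array([0.0, 1.0, 0.0, 0.0]),
    "computer":  np.array([0.7, 0.1, 0.9, 0.0]),  # bias: skewed male
    "engineer":  np.array([0.7, 0.1, 0.8, 0.1]),
    "homemaker": np.array([0.1, 0.7, 0.0, 0.9]),  # bias: skewed female
}

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' via the offset b - a + c,
    returning the nearest remaining word by cosine similarity."""
    target = vectors[b] - vectors[a] + vectors[c]
    best, best_sim = None, -1.0
    for word, vec in vectors.items():
        if word in (a, b, c):
            continue
        sim = vec @ target / (np.linalg.norm(vec) * np.linalg.norm(target))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# Because "computer" sits closer to "man" in this space, the query
# retrieves "homemaker" for "woman" rather than "engineer".
print(analogy("man", "computer", "woman"))  # -> homemaker
```

Nothing in the code targets gender explicitly; the skew comes entirely from the vectors, which is why, as Stoddart argues, bias in the source material flows straight through to the technology.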

Speaking at the event, University of New South Wales Scientia Professor of AI Toby Walsh said, “Humans are terrible at decision making, and AI is certainly helping us with that.”

“Though we have to be careful, we might be accidentally baking in biases we’ve spent the last 50 years eradicating.

“Once we recognise the potential for biases, we have to perfect the tool to eliminate them.”

Using the example of an AI tool that once mistook people of African descent for gorillas and Caucasian people for seals, Walsh encouraged the audience to be aware of the risks of AI and the importance of human oversight.

“Humans break much more gracefully than artificial intelligence and data,” Walsh said.

Adding to the topic, Cummins&Partners founding partner and chief innovation officer Kirsty Muddle agreed “there is a threat of bias being built in, though it all depends on who’s in control of the data”.

For Muddle, the best way to mitigate this is by putting “pressure on the integrity of data”, and recognising the importance of human touch.

“Data can read your business and give you results humans wouldn’t be able to identify because of empathy.

“These results can often be unpleasant or find things about our businesses we don’t want people to see.

“We need privacy and quarantine within data, but we also recognise the only truth in life is maths,” Muddle added.

Adding to this concept of “truth”, CHE Proximity Melbourne MD Michael Titshall admitted that, though uncomfortable, data enables us to “highlight biases in society, meaning we’re not letting things like sexism or racism hide in the corner.”

Despite the “truths” found in data, though, Titshall acknowledged: “imperfection is often what makes things beautiful; we have to make sure human oversight is still present to ensure creativity.”

One way of reducing the risk of biases and prejudice in artificially intelligent devices is to release a product slowly, with ample room to make changes when needed.

Such was the case for PayPal’s chatbot ‘Bae’, who was deliberately given a soft launch to market, enabling consumers to point out programming errors early, said PayPal marketing director Elaine Herlihy.

“We wanted to involve customers and the community; people like trying new things.”

Reiterating the importance of human touch, Herlihy added, “We didn’t go at it full tilt, we allowed her to be changed, and for us to make new alterations.”

Herlihy finished by warning the audience to be aware of a brand’s biases and always look out for the truth in data.

“If you only use Amazon, for example, be aware of Amazon’s biases and how they impact your experience.”

“Truth is a big word with big connotations, but it’s crucial as we move forward.”
