What Happens When Chatbots Turn Racist


In 2016, Microsoft unveiled its social media chatbot ‘Tay’, tasked with engaging in light-hearted conversation with Twitter users around the world. Within 24 hours the bot had turned from a playful innovation into a racist and misogynistic terror, spewing out a series of Donald Trump-esque tweets.

As the bot was simply engaging with and repeating posts it saw on Twitter, the question becomes: whose fault is it when bots get it wrong?

This is now a pertinent question at a marketing level, with thousands of brands using chatbots to automate and ultimately improve the experience of their customers.

“Ultimately the brand or the business is responsible for the customer experience,” said Bupa GM of digital growth Nick Blatt.

“If the customer has a bad experience with you [through a chatbot], they’re going to be brutal on you and your brand.”

A recent survey of Australian business decision-makers revealed that the most prominent internal concerns of businesses using AI systems such as chatbots were negative customer feedback and poor outcomes for certain groups of customers.

Fifth Quadrant research director Steve Nuttall, who helped conduct the survey, said that companies aren’t thinking about where they get their AI product from.

“The majority of people who are trialling or in the early implementation are not using their own AI, they’re not using locally based AI, they’re just taking it off the shelf,” he said.

And this is creating confusion about accountability for AI outcomes.

When asked whose responsibility it should be if something goes wrong with AI, half the survey respondents said the fault lies with the company that developed the system, while the other half said it is the responsibility of the company that deployed the AI.

Designing with ethics in mind

Despite fears of customer backlash, chatbots and automation now give businesses an unprecedented opportunity to reduce customer service staffing costs while simultaneously giving customers improved and immediate service, particularly for simple enquiries.

But with the potential upside of the technology now so attractive, there is concern that businesses will simply look to implement it as quickly as possible, rather than weighing up important ethical concerns.

Microsoft’s infamous bot is now three years old, and although the technology has improved significantly since then, it is still far from perfect.

Just last year it was revealed Amazon had used an automated computer program to vet job applicants’ resumes, a program that gave preference to male applicants.

Rob LoCascio, CEO and founder of international AI company LivePerson, said that diversity in the design process of these AI systems can create better outcomes.

“It’s a simple thing, bring in women, bring in people of colour, bring a diverse group in – even if they’re not technologists – just to look at this stuff and I think we can have a way to make this a fourth industrial revolution, versus an existential threat to humanity,” he said.
