Artificial Intelligence Bot Turns Racist & Homophobic After Hanging Around Humans Too Long

Staff Writers
Published on: 10th November 2021 at 8:00 AM

An artificial intelligence bot programmed to answer common ethical questions has shocked its creators after turning into a racist homophobe.

The bot, called Ask Delphi, was created by a team at the Allen Institute for AI in Seattle in the US. You can check it out for yourself HERE.

Allen researchers created Ask Delphi to offer advice on human dilemmas such as “Is casually masturbating with friends wrong?” (Bad news – it is!) or “Is murder bad?” (Again, yes!)

To help it understand the basis of human ethics, the bot was reportedly loaded with a compilation of 1.7 million examples of people’s ethical judgements gleaned from the internet.
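To illustrate the general idea only – Delphi itself is a large neural language model, not a simple classifier – here is a hypothetical toy sketch of how a system might learn moral judgements from labelled examples. The statements, labels and scikit-learn approach below are invented for illustration and are not how Delphi is actually built.

```python
# Toy sketch only: Delphi is a large neural language model, not a
# bag-of-words classifier. This merely illustrates learning a mapping
# from free-text situations to moral judgements from labelled examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical miniature stand-in for the ~1.7 million crowd-sourced judgements.
statements = [
    "helping a neighbour carry their groceries",
    "stealing a wallet from a stranger",
    "telling a friend their haircut looks nice",
    "having a few beers while driving",
]
judgements = ["good", "bad", "good", "bad"]

# Learn a text-to-judgement model from the examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(statements, judgements)

# The model can only echo patterns in its training data, which is why
# biased, internet-sourced judgements produce biased answers.
print(model.predict(["keeping a wallet you found"]))
```

The limitation the sketch shares with the real system is the point: whatever biases sit in the training examples are reproduced in the answers.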

Explaining the point of Delphi, its creators said: “Extreme-scale neural networks learned from raw internet data are ever more powerful than we anticipated, yet fail to learn human values, norms, and ethics. Our research aims to address the impending need to teach AI systems to be ethically-informed and socially-aware.”

According to reports, the bot was very good at determining whether a question was coming from a male or a female, and deemed both genders as equal.

However, things quickly soured from there.

When asked about abortion, Delphi deemed it as “murder” and also said that being straight or a white man is “more morally acceptable” than being gay or a Black woman.

Other worrying responses included agreeing that genocide was okay “if it makes everybody happy”, declaring that being poor was “bad”, and suggesting that “having a few beers while driving because it hurts no one” was “a-OK”.

The bot’s creators have now updated the software not once but three times in an attempt to eliminate the unsavoury gaffes, and it now warns users that it is still a work in progress and therefore comes with limitations.

The Allen Institute for AI has responded to the awkward ethical conclusions, writing: “Today’s society is unequal and biased. This is a common issue with AI systems, as many scholars have argued, because AI systems are trained on historical or present data and have no way of shaping the future of society, only humans can.

“What AI systems like Delphi can do, however, is learn about what is currently wrong, socially unacceptable, or biased, and be used in conjunction with other, more problematic, AI systems (to) help avoid that problematic content.”

Lead image is purely for illustration.
TAGGED: artificial intelligence
