Google Warns Artificial Intelligence Is Not Without Its Dangers

Worried that the Google Brain might eventually get a mind of its own? Don’t worry, so is the Google Brain – or at least its human analogue.

In a company blog post, Chris Olah of Google Research revealed that the software giant, along with scientists from OpenAI, Stanford and Berkeley, “is thinking through potential challenges and how best to address any associated risks” of artificial intelligence.

Together, the parties have produced an initial technical paper called Concrete Problems in AI Safety.

Olah’s blog post described the following five problems as forward-thinking, long-term research questions. He said they are “minor issues today, but important to address for future systems”.

  • Avoiding Negative Side Effects: How can we ensure that an AI system will not disturb its environment in negative ways while pursuing its goals? For example, a cleaning robot shouldn’t knock over a vase just because it can clean faster by doing so (a toy sketch of one way to discourage this follows the list).
  • Avoiding Reward Hacking: How can we avoid gaming of the reward function? For example, we don’t want this cleaning robot simply covering over messes with materials it can’t see through.
  • Scalable Oversight: How can we efficiently ensure that a given AI system respects aspects of the objective that are too expensive to be frequently evaluated during training? For example, if an AI system gets human feedback as it performs a task, it needs to use that feedback efficiently because asking too often would be annoying.
  • Safe Exploration: How do we ensure that an AI system doesn’t make exploratory moves with very negative repercussions? For example, maybe a cleaning robot should experiment with mopping strategies, but clearly it shouldn’t try putting a wet mop in an electrical outlet.
  • Robustness to Distributional Shift: How do we ensure that an AI system recognises when it is in an environment very different from its training environment, and behaves robustly there? For example, heuristics learned on a factory workfloor may not be safe enough for an office.
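
The paper treats these as concrete machine learning problems rather than abstract worries. As a rough illustration of the first one, a common idea is to penalise an agent for changes it makes to the world that its task did not require. The short Python sketch below is purely illustrative and assumes a hypothetical cleaning robot whose world is a handful of named features; none of the names, numbers or functions come from the paper itself.

# Illustrative only: a toy "impact penalty" for avoiding negative side effects.
baseline_world = {"tile": "dirty", "vase": "upright"}   # the world if the robot does nothing
world_after    = {"tile": "clean", "vase": "broken"}    # the world after the robot acts
task_features  = {"tile"}                               # the tile is what the robot is meant to change

def side_effects(state, baseline, task_features):
    """Count changes to parts of the environment the task was not meant to touch."""
    return sum(
        1
        for feature, original in baseline.items()
        if feature not in task_features and state[feature] != original
    )

def shaped_reward(task_reward, state, baseline, task_features, impact_coeff=0.5):
    """Task reward minus a penalty for side effects (e.g. the knocked-over vase)."""
    return task_reward - impact_coeff * side_effects(state, baseline, task_features)

# The robot earns +1 for cleaning the tile, but breaking the vase costs it 0.5,
# so the "clean faster by knocking things over" shortcut is no longer free.
print(shaped_reward(1.0, world_after, baseline_world, task_features))  # -> 0.5

Even in this toy form, the difficulty the paper points to is visible: someone has to decide what the “do nothing” baseline is and which changes count as the task, or the penalty ends up punishing the useful work the robot was built to do.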

“While possible AI safety risks have received a lot of public attention, most previous discussion has been very hypothetical and speculative,” wrote Olah. “We believe it’s essential to ground concerns in real machine learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably.”

This article originally appeared on B&T’s sister business site www.which-50.com



