Study: Australians Want Morals Programmed Into AI

Advancements in automation and robotics are accelerating, and whilst these technologies are mostly beneficial to humankind, they have also spelt disaster where control has been lost, resulting in accidents, injuries and worse.

As humans, we will need to develop ways to manage AI when robots take on crucial roles (such as driving cars, operating machinery or performing highly complex medical procedures) that allow them to make their own, potentially life-altering decisions.

Given the increasing use of robots and Artificial Intelligence (AI) in decision-making, new research commissioned by think tank Thinque has revealed that 79 per cent of Australians believe morals should be programmed into robots.

When asked who should be responsible for programming morals into robots, 59 per cent of respondents nominated the original creator or software programmer, 20 per cent the government, 12 per cent the manufacturer and nine per cent the company that owns them.

Global futurist and innovation strategist Anders Sörman-Nilsson says: “As AI and its capabilities become more sophisticated, concerns around how we will manage these developments continue to grow. As such, humans will need to build an ethical code into robots if they are to take on more key roles in our lives.

“This code must be instilled into robots and AI taking on important roles, such as machine engineers or military personnel, to prevent adverse situations. If this is not executed well, we as humans will be opening ourselves up to inevitable, dangerous consequences.

“If robots are allowed to exist in society without a moral compass, it is only a matter of time before their inability to make ethically sound decisions hurts or fatally harms humans. It is for this reason that the U.S. government recently announced plans to spend millions on developing machines that understand moral consequence.

“The stakes are high for these kinds of robots: their ability to know right from wrong and make decisions accordingly is absolutely crucial, which is the very reason AI and its capabilities must be monitored carefully.

“In order to effectively programme ethics into AI, humans will need a collective set of ethical rules that is universally agreed upon (a far cry from the current state of the human world).

“Another dilemma raised by improvements in AI is that once robots advance enough to mimic human intelligence, awareness and emotions, we will need to consider whether they should also be granted human-equivalent rights, freedoms and protections,” Anders adds.

With the future of AI unfolding rapidly and consumer fear evident, Anders shares his insights into what citizens can expect from robotics companies when it comes to ethics in AI in the near future:

  1. They will need to set clear ethical boundaries: Humans, alongside robotics developers, must collectively determine ethical values that can be coded into and followed by robots. These values will need to cover the ethical problems a robot may face and the correct way to respond in each situation. For example, a driverless car will need an ethics algorithm to decide, in an unthinkable circumstance, whether a car carrying two children should swerve and kill two elderly pedestrians rather than kill the children in a head-on collision (a toy sketch of one such rule-based evaluator appears after this list). Only then will we be able to design robots that can reason through ethical dilemmas the way humans would (and, complicating matters, humans across cultures don’t yet agree on these philosophical thought experiments either).
  2. They will also have to factor in the unexpected: Even after we have set boundaries to determine ethical behaviour, there will still be numerous ethical ways to handle each situation, as well as unexpected moral dilemmas. For example, a robot delivering urgent medical supplies to a hospital may need to decide whether to stop and help an injured person it encounters en route. To ensure robots follow a moral code, as humans do, we would be wise to provide AI with different potential solutions to moral dilemmas and train it to evaluate them and make the best moral decision in any given situation.
  3. They will have to constantly monitor AI: As with any technology, programmers will need to constantly monitor and evaluate ethics in AI so that its rules stay up to date and it makes the best decisions possible. Mistakes will inevitably be made, yet programmers should do everything possible to prevent them, as well as redevelop ethical codes to ensure AI is as morally sound as it can be.
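
To make the first point concrete, below is a minimal sketch, in Python, of what a rule-based ethics evaluator for the driverless-car dilemma might look like. Everything in it is hypothetical: the Outcome record, the penalty weights and the choose_action helper are invented purely for illustration and are not drawn from any real autonomous-vehicle system. The weights encode one contestable moral stance (minimise total fatalities first, then weakly prefer protecting occupants), which is precisely the kind of rule the article says humans have yet to agree on.

```python
# Hypothetical sketch only: not a real autonomous-vehicle API.
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible action and its predicted consequences."""
    action: str
    expected_fatalities: int
    occupants_harmed: bool

def penalty(outcome: Outcome) -> float:
    """Score an outcome under an explicitly coded rule set (lower is better)."""
    score = outcome.expected_fatalities * 10.0  # dominant rule: minimise deaths
    if outcome.occupants_harmed:
        score += 1.0  # tie-breaker only: weakly prefer protecting occupants
    return score

def choose_action(outcomes: list[Outcome]) -> Outcome:
    """Return the outcome the coded rules rank as least bad."""
    return min(outcomes, key=penalty)

if __name__ == "__main__":
    # The article's dilemma: swerve (two elderly pedestrians die) or stay
    # on course (the two child occupants die in a head-on collision).
    dilemma = [
        Outcome("swerve", expected_fatalities=2, occupants_harmed=False),
        Outcome("stay_course", expected_fatalities=2, occupants_harmed=True),
    ]
    print(choose_action(dilemma).action)  # prints "swerve" under these weights
```

Note that under these particular weights the evaluator swerves; change the weights and it decides the other way. That sensitivity to which values get coded in is exactly why the agreed ethical boundaries in point 1 matter.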


