In his latest for B&T, DDB Australia’s managing director of strategy and growth, Leif Stromnes, explains that while humans are fallible (just like machines), we are willing to forgive humans for their mistakes, and we even prefer sticking to sets of ‘morally right’ rules when following them could lead to bad consequences.
In his seminal tome Thinking, Fast and Slow, Daniel Kahneman illustrates the fallibility of human decision-making with a study of parole rulings made by Israeli judges just before and after lunch. When the judges were “hangry”, i.e. just before their lunch break, the proportion of paroles they granted fell to virtually zero. Once they had eaten, approvals climbed back to around 65 per cent.
This is truly alarming. If you are applying for parole and yours is the final case before lunch, you are around 65 percentage points less likely to walk free than the lucky bugger whose case is heard first after lunch. Your biggest crime might in fact turn out to be your spectacular lack of timing.
Unlike human beings, AI-powered computers do not get hangry and do not experience fatigue. In fact, they don’t even require a lunch break. An ethical AI could, in principle, be programmed to reflect the values and ideals of an impartial agent. Free from human limitations and biases, such machines could even be said to make better decisions than us. So, what about an AI judge? Unfortunately for the pre-lunch parole applicants, this is not going to happen anytime soon. The problem is not with the machines, it’s with our own psychology.
Artificial or machine decision-making is based on a cost-benefit calculation: the machine will always choose the option with the best overall consequences (an approach known as consequentialism). But humans are different. Our default is to follow a set of moral rules in which certain actions are “just wrong”, even if they produce good consequences.
Our distaste for consequentialism has been demonstrated across several psychological studies in which participants are given hypothetical dilemmas that pit consequentialism against a more rule-based morality. In the “footbridge dilemma”, for instance, participants are told that a runaway train is about to kill five innocent people who are stuck on the tracks. It can be stopped with certainty by pushing a very large man, who happens to be standing on a small footbridge overlooking the tracks, to his death below, where his body will stop the train before it kills the other five. The vast majority of people believe it is wrong to push the man to his death in this case, despite the good consequences.
But this is only half the story. The minority of participants who were willing to coolly sacrifice a life for the greater good were rated as untrustworthy by the rest, a finding replicated across nine further experiments with more than 2,400 subjects. It would seem that humans have a fundamental mistrust of machines when it comes to morality, because artificial machines lack the very features we use to infer trustworthiness in others. We prefer an irrational commitment to certain rules no matter the consequences, and we prefer people whose moral decisions are guided by social emotions like guilt and empathy. Being a stickler for the rules of morality says something deep about your character.
Another quirk of human psychology is our willingness to forgive human mistakes, coupled with an almost total lack of tolerance for the same mistakes when they are made by a machine.
Empathy is enormously powerful in alleviating anger in a human-to-human interaction, but completely useless in a self-service technology failure. It’s why we get angrier at bots and automated self-service systems when they mess us around than we do with humans. And why we are outraged when an autonomous motor vehicle kills an innocent pedestrian despite this being a daily reality with human drivers.
This has profound implications for brands and marketing. The inexorable rise of AI, and the automation of most customer service tasks in the name of efficiency and cost control, means the default interaction is human to machine. But as we have learnt, we don’t like the way machines make decisions, and we are much less forgiving of the mistakes that machines make.
Whilst using machines will almost certainly drive efficiency up and mistakes down, the outcome might be a lack of trust in the integrity of the decision and, ironically, lower customer satisfaction. Even if machines were able to perfectly mimic human moral judgements, we would know that the computer did not arrive at its judgements for the same reasons we would.
This insight played out with the launch of a robotic barista café in Melbourne in 2017. Rationally it made sense. The robot made perfect cup after perfect cup, didn’t call in sick and didn’t demand overtime on weekends. But every small imperfection was amplified without forgiveness and after one year the café closed down. As one customer elegantly put it, “I just didn’t trust the robot barista to know how I really liked my coffee.”
Whilst AI and machine decisioning, with its efficiency and low error rate, will undoubtedly win the day, a more emotionally satisfying approach for customers might be to prioritise human-to-human contact in high-value social interactions. And automate everything else.
After all, to err is human, to forgive divine.