A few years ago I had the pleasure of listening to the highly influential legal scholar Cass Sunstein speak in the flesh. Cass wrote the best-selling book Nudge, along with his long-time collaborator Richard Thaler.
Thaler subsequently won the Nobel Prize in Economics and Cass went to the White House to head up a team advising the Obama administration.
It was among the first of what have become hundreds of government teams around the world using insights into human behaviour to improve what governments do.
Cass was speaking in Canberra and I asked whether he could talk about nudges that hadn’t worked. His initial answer surprised me – he said none came to mind.
So what is nudging?
To backtrack, it’s important to understand what a nudge is. The concept is based on the idea that people often act “irrationally”.
By itself this isn’t a particularly useful insight. What is useful is the insight that people behave irrationally in ways we can predict.
Here’s one. We are lazy, so when faced with a plethora of offers about what to buy or sign up to, we often stick with what we’ve got, the “don’t need to think about it” option, even when there are better deals on the table.
And we tend to value the present over the future – so while we know we shouldn’t eat junk food, we often prioritise short-term satisfaction over long-term health.
These insights into behavioural regularities allow us to tailor government programs to get better outcomes.
For example, in Britain 80% of people say they are willing to donate an organ when they die, but only 37% put their names on the register.
To bridge this gap the government is changing the system so that the default option is to be a donor.
People can still opt out if they want to – but the simple switch is likely to save as many as 700 lives per year.
We also like to behave like those around us. So here in Australia, to help combat the rise of drug-resistant superbugs, the chief medical officer wrote to the doctors prescribing the most antibiotics, pointing out that they were out of line with their peers.
It cut the prescribing rate of the highest prescribers by 12% in six months.
Then why was Cass’ answer surprising?
I was surprised because nudging promotes rigorous trials, evidence and testing – so it’s hard to believe every proposal would be found to have worked.
In science, experiments frequently throw up unexpected results.
Only publishing the results of successful trials would lead to bulging cabinets of failures from which we would never learn.
Given that failure is one of our most effective teachers, it would be a huge missed opportunity.
And the false positives published alongside the genuine positives would inflate our confidence that the interventions worked.
Any experiment involving an element of randomness (in the subjects selected or the conditions in which it was conducted) will occasionally report a positive effect that wasn’t really there.
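To get a feel for how often pure chance can masquerade as a real effect, here is a minimal, purely illustrative sketch (the numbers are hypothetical, not from the article or from BETA’s trials): it simulates many experiments of a nudge that has no effect at all and counts how often a standard significance test still declares a “positive” result.

```python
# Illustrative sketch: how often does a nudge with NO real effect
# look "successful" purely by chance? (All numbers are hypothetical.)
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_trials = 1000      # hypothetical number of independent experiments
n_per_group = 200    # participants in each of the control and "nudged" groups
false_positives = 0

for _ in range(n_trials):
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    nudged = rng.normal(loc=0.0, scale=1.0, size=n_per_group)  # same distribution: no true effect
    _, p_value = ttest_ind(control, nudged)
    if p_value < 0.05:      # conventional significance threshold
        false_positives += 1

# Roughly 5% of these no-effect trials come out "significant" --
# publish only the successes and the record looks far rosier than reality.
print(f"False positive rate: {false_positives / n_trials:.1%}")
```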
This “replication crisis” has been recognised as a big problem in psychology and economics, with many previously published results being thrown into doubt.
Thankfully, things are changing for the better. There is a range of initiatives encouraging the publication of both positive and negative results, along with a far greater awareness of these questionable research practices.
And they have been embraced by the Australian government’s own Behavioural Economics Team (BETA), with which I work.
To guard against publishing only the results that fit a narrative, BETA pre-registers its analysis plan, which means it can’t cherry-pick the results that fit a particular story once a trial is done.
BETA has also set up an external advisory panel of academics (on which I sit) to give independent advice on transparency, trial design and analysis.
It has had some very successful trials, but also some with surprising results.
When it set out to discover whether a fact sheet enabling households to compare electricity plans would encourage them to switch to better ones it discovered (at least in the experiment conducted) it did not.
When it set out to discover whether removing identifying information from public service job applications would increase the proportion of women and minorities shortlisted for interviews it discovered (at least in the experiment conducted) it did not.
These findings give us just as much useful information as the trials that were “successful”. They can help the government design better programs.
There’s a happy ending to this story
Back at the conference, after his initial answer Cass reflected further. He did recall some failures, and he talked about the lessons learned.
Since then, he has even published a paper, Nudges that Fail, that provides insights every bit as good as those from nudges that succeed.
Feel free to check out BETA’s list, the good and the bad.
It’s important to embrace mistakes, and to make more than a few. It’s the only way to be sure we are really learning.
Ben Newell, Professor of Cognitive Psychology, UNSW
This article is republished from The Conversation under a Creative Commons license. Read the original article.