This is for a foreign language publication and it’s aimed at people not all that up on economics. Thus the detail is a bit sketchy:
Richard Thaler has won the 2017 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel – also known as the Economics Nobel. There is no controversy over the award itself – he is widely regarded as a thoroughly deserving recipient. There is a little pondering over the timing of the prize though: some think he should have shared it in 2002, with Daniel Kahneman, or perhaps four years ago with Robert Shiller. For all three have been working on the same basic problem within economics: when do we actually behave as the textbooks assume, and when don't we? This is more generally known as behavioural economics, the teasing out of what human reactions to stimuli really are, rather than what they are assumed to be.
This being, of course, what we're trying to do in the whole field – what is it that humans do? Once we know that, and the way that they respond to changes in the incentives they face, then we can at least begin to design methods to aid in building a better world.
One outcome of Thaler's work, discussed in his best-selling book with Cass Sunstein, "Nudge," is that we can get people to save more for their retirement simply by assuming that they will. In more detail, Thaler's work shows that we value what we have rather more than what we might get. That is, $1 we hold is worth more to us than $1, or perhaps even $2, on offer – this is known as the "endowment effect." It feeds through into a possible overvaluation of the status quo in many situations.
At which point, we know that we want people to save for their retirements – excellent, so, how do we encourage this? We could imagine tax breaks for saving, or a law stating that you must save (the UK does the first, Singapore the second) and so on. Thaler pointed out the implications of that status quo bias. When people are signed up by their employer for a pension savings scheme, the default option can be set to the maximum permissible savings amount. People are more likely to take the choice they're offered than to change it. Just this, this little insight alone, is said to have led to some $30 billion more of pension savings among Americans.
At the heart of the work, though, there is a great point being made. Market economics makes an assumption of rationality, something which itself comes in two flavours. The first is that people are simply consistent: if they prefer A to B and B to C then they will also prefer A to C. The second is that people are rational calculating machines who zero in on the optimal strategy to gain their desires given their constraints. The first is held to be generally true, the second only at times. Thaler, along with other behavioural economists, has shown that neither is always true. However, as is so often the case, the interesting part is in trying to work out why the irrationality arises.
Some early parts of Thaler's work concentrate upon the ultimatum game. There are two participants in the experiment: one is given $100 and told to split it, however they wish, with the second. The second can accept or reject the offer. Rejection means that neither keeps any cash; acceptance means the split happens. Logic would indicate that even a 99/1 split should be accepted – a free dollar is, after all, a free dollar. That isn't what happens though: splits more unequal than 70/30 are routinely rejected.
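The game's payoff rules can be sketched in a few lines of code. This is a minimal illustration, not a model of any actual experiment: the $100 pot comes from the description above, while the responder's "fairness threshold" and the function names are assumptions for the sake of the example.

```python
# Minimal sketch of the ultimatum game described above.
# The responder's fairness threshold is an illustrative assumption.

def ultimatum(pot, offer_to_responder, responder_threshold):
    """Return (proposer_payoff, responder_payoff).

    The responder rejects any offer below their fairness threshold,
    in which case neither player gets anything.
    """
    if offer_to_responder < responder_threshold:
        return (0, 0)          # rejection punishes both players
    return (pot - offer_to_responder, offer_to_responder)

# A 99/1 split: "rational" to accept (a free dollar), yet routinely rejected.
print(ultimatum(100, 1, 30))   # -> (0, 0): the unfair offer is punished
# A 70/30 split clears the fairness bar seen in the experiments.
print(ultimatum(100, 30, 30))  # -> (70, 30)
```

The puzzle, of course, is that real responders behave as if they had such a threshold at all, when the "rational" threshold is anything above zero.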
At which point we want to explain this, and the usual idea is that humans have some innate sense of fairness. One that's so strong that they will reject "unfair" offers at a cost to themselves in order to punish that perceived unfairness. This does indeed seem to be true among the American undergraduates who were the experimental subjects.
This result did not survive testing in other societies though. We now have a pretty good evidence base that resolutely non-market economies will accept just about any split, while market economies will reject ones perceived as unfair. Which leaves our mystification just that one level deeper – do markets create that punishment of unfairness, or are they a precondition for it?
In this instance we're seeing something that doesn't appear rational yet, upon deeper examination, becomes entirely rational at that deeper level. Other parts of Thaler's work talk about "limited rationality," where people do things that really aren't rational in either of our two meanings. One of his examples here is a fund called "Cuba." That's the trading name for it; it doesn't have any assets in Cuba itself, nothing to do with the place – that's just the four-letter ticker it trades under. For years it bumbled along at the usual 10 to 15% discount to net asset value that a closed-end fund normally trades at – then it leapt to a 70% premium over that asset value. What? The trigger event was the US stating that it would open up a little more to trade with Cuba. The island, that is – the place the fund has nothing to do with.
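To make the discount/premium jump concrete, here is the arithmetic, sketched under an assumed net asset value of $10 per share (an illustrative figure – only the percentages come from the story above):

```python
# Illustrative arithmetic for a closed-end fund's discount/premium to
# net asset value (NAV). The $10 NAV is an assumed figure; only the
# percentages are taken from the "Cuba" fund story.

def market_price(nav, premium):
    """Market price given NAV per share and premium (negative = discount)."""
    return nav * (1 + premium)

nav = 10.0
print(market_price(nav, -0.15))  # typical 15% discount -> 8.50 per share
print(market_price(nav, 0.70))   # after the Cuba news: 70% premium -> 17.00
```

A doubling of the market price of the same underlying assets, on news irrelevant to those assets, is the irrationality being pointed at.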
This change in price is clearly irrational, and if it were something that persisted for a day or two we could just mark it down as a mistake, brought on perhaps by bad information. But it persisted for months. Even financial markets – which Thaler and most others would agree are the most rational of our markets – aren't, therefore, really all that rational.
The implication of this, one that Thaler pushes himself, is that we should therefore nudge people into making better, more rational, decisions. Sometimes this is obvious enough, greater savings for retirement probably are a good thing as above. However, the idea does run into a logical problem, which is that if people are irrational who is going to be the rational nudger? Of course, as with Kip’s Law (“the central planner always, but always, envisions himself as being the one doing the planning”) there’s no shortage of people who claim to be more rational than the rest of us. But given the way politics works it’s not in fact a given that those who are will be the people giving the nudges. A casual glance around the world shows that.
One of the joys of Thaler’s work is that it isn’t heavily mathematical and in fact, it’s all rather obvious. Except it’s obvious in a non-obvious manner. Once he’s explained a matter – two examples being why people like it when you take away the cashew nuts they’re enjoying eating, or how men will improve their aim when you stencil a fly on a urinal – then yes, it’s obvious! But before it was pointed out it wasn’t obvious at all.
What we really want to know, of course, is the deeper significance of all of this. Preventing bathroom messes, getting people to save – these are good and useful things but hardly about to set the world alight. The deeper significance is that standard microeconomics does indeed assume "Homo Economicus" – that we react rationally to stimuli and opportunities all the time. Thaler, along with colleagues, points out that human beings simply don't work that way all the time. And the important part of this observation is the "not all the time" part.
We are not in a binary situation here. It is not true that humans are irrational and thus must be told what to do – we've tried those planned economies and societies and we know they don't work. It is also not true that humans are rational, always and all the time, so the unregulated world of varied fantasists also will not work. What we actually want to know is: when are people being irrational, and should perhaps be nudged or regulated out of it, and when are they doing just fine and can be left alone?
The correct answer here is as yet unknown, but in my opinion it will come from something closely related to Thaler's behavioural economics: the prisoner's dilemma, an experiment in game theory. Two prisoners: if both keep silent then both get a short prison sentence. If both tell on the other then both get a medium one. If one keeps silent and the other tells, silence earns a long sentence and informing earns freedom. What's the best strategy for the prisoners?
This is still argued about when the game is played just once. But over multiple iterations the optimal strategy is clear: cooperation, enforced by tit for tat – I'll do to you this time whatever you did to me last time, inform or stay silent. At which point we have the beginnings of a useful structure.
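The payoff structure and the tit-for-tat iteration described above can be sketched as a small simulation. The exact sentence lengths (1, 2, 3 years and freedom) are conventional illustrative numbers, not from any particular experiment:

```python
# Sketch of the iterated prisoner's dilemma with tit-for-tat, as described
# above. Payoffs are years in prison (lower is better); the exact numbers
# are illustrative assumptions.

SILENT, INFORM = "silent", "inform"

def sentences(a, b):
    """Prison years (a_years, b_years) for the two prisoners' choices."""
    if a == SILENT and b == SILENT:
        return (1, 1)        # both keep quiet: short sentences
    if a == INFORM and b == INFORM:
        return (2, 2)        # both inform: medium sentences
    if a == INFORM:
        return (0, 3)        # informer walks free, silent partner suffers
    return (3, 0)

def tit_for_tat(opponent_history):
    """Cooperate (stay silent) first, then copy the opponent's last move."""
    return SILENT if not opponent_history else opponent_history[-1]

def play(rounds):
    """Two tit-for-tat players; return their total years in prison."""
    a_hist, b_hist, total_a, total_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = tit_for_tat(b_hist), tit_for_tat(a_hist)
        years_a, years_b = sentences(a, b)
        total_a, total_b = total_a + years_a, total_b + years_b
        a_hist.append(a)
        b_hist.append(b)
    return total_a, total_b

# Two tit-for-tat players settle into mutual silence: 1 year per round each.
print(play(10))  # -> (10, 10)
```

Played once, informing is tempting; played repeatedly, the copying rule makes sustained cooperation the stable outcome – which is the point being built on below.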
Thaler tells us, proves to us, that we're sometimes, even often, irrational. Regulation, or nudges, can improve matters when we are – but what we want to know is, when are we? As the results of the two games, ultimatum and prisoner's dilemma, show, the answer is different in situations with multiple iterations, many interactions. Punishing people into fairness is part of – perhaps cause, possibly result of – the market economy that so enriches us, and it's worth the cost. Cooperation, tit for tat, works over time. At which point a useful rule of thumb can be constructed.
Where we do things regularly, we're probably acting rationally. We buy bread often enough that an unfair seller will quickly lose custom and be driven out of the market. We buy a pension usually only once in a lifetime; we're far more likely to be irrational at that point. Thus we can leave the regular activities to humans to solve; regulation is needed more with the irregular.
Note that the conclusion is mine, not Thaler's. His great contribution, though, is to get us to the point where we can even consider it. People are not as rational as the standard theory assumes; we must therefore consider where, when and why people are irrational, and what we plan to do about it.