Three overarching questions & Why does it matter
Topic: How can companies and individuals make better decisions?
The normative question: How should we evaluate thinking? According to what standards? (part 1)
The descriptive question: How do we actually think, and what prevents us from doing better by normative standards? (part 2)
The prescriptive question: What can we do to improve our thinking? (part 3)
Why does it matter: A shared capacity to make better judgements and choices is a key competitive advantage
Notes:
How can companies make better decisions? In the long run companies have to create some kind of competitive advantage. This can come from scale-driven lower costs, proprietary intellectual property, or highly motivated employees, but in the knowledge economy one very important source is a shared capacity to make better judgements and choices.
What do we mean by rational thinking? (Baron, 2000, Thinking and Deciding)
Some clarifications
Emotional ≠ irrational. If you think of emotions as a way of making decisions without thinking, it might seem reasonable to equate emotional with irrational
But: The seeking of pleasant emotions and the avoidance of unpleasant ones are surely goals that most of us have, and ought to have. Emotions can impair thinking, but they can also help it
Rational thinking is the kind of thinking that helps us achieve our goals, whatever these may be
Rationality need not be self-interested. Goals such as the concern for others are among the goals that many of us have and ought to have
Rational thinking is not too much thinking. Rational thinking means an amount of thinking that is appropriate to the situation, as far as possible
The relation between rational thinking and emotion is a complex one. We should not use the word emotion as a substitute for the word “irrational”. We need to think about emotions in the psychological sense, as states with certain causes and effects. Emotions, in this sense, are among the goals of our decisions. They also affect our decisions in ways that are sometimes desired and sometimes undesired. They may help us in the long run, even if they hurt us in the short run. They may make us behave more morally, or less morally, than we would without them. They can impair thinking, but they can also help it.
Rational thinking is the kind of thinking that helps us achieve our goals, whatever these may be.
We call our decisions foolish when they subvert our values and wise when they affirm them.
Think about your own life. Have you ever checked your phone while crossing a road? If you weighed the probabilities and benefits of the possible outcomes against each other, would you really do it? And if you don’t think like this, can you really call yourself rational?
Expected Utility Theory (EUT) & probability (Bernoulli)
EUT is the foundational model of decision making in economics. It rests on probability theory.
Practically all our decisions rest upon our beliefs about the outcome of different actions: Will this investment succeed? Will I enjoy this course? Will it help me get a job?
Probability theory is the normative theory that deals with such estimates
The development of modern probability theory emerged in the 17th Century
Probability is a numerical measure of the strength of belief in a certain proposition
By convention it ranges from 0 to 1. A probability of 0.5 means something is equally likely to happen or not.
Probability can also be expressed as a percentage, a fraction or an odds ratio. A probability of 0.75 is the same as 75% or ¾ or three to one odds (because the probability of a proposition being true (0.75) is three times the probability of it being false (0.25)).
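A minimal sketch of these conversions (the 0.75 is just the example figure from above):

```python
# Converting one probability into its other common forms.
p = 0.75
percentage = p * 100        # 75%
odds_in_favour = p / (1 - p)  # 3.0, i.e. odds of three to one in favour
print(f"{p} = {percentage:.0f}% = {odds_in_favour:.0f} to 1 odds")
```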
Normative theory of choice under uncertainty
The theory of expected utility* (also referred to as rational choice, rational actor, homo economicus) says that when faced with a risky decision actors ought to choose the option that maximises their expected utility
-*Pinker (2021), Chapter 6
In its original form EU was formalised by von Neumann and Morgenstern (1947)
Utility represents whatever people want to achieve
Utility ≠ happiness or money. It is supposed to be a summary measure of how choices realise our ultimate values or goals
To compute the expected utility of each choice, you multiply the utility of each outcome by the probability of the state that leads to it and sum across the states
Let’s look at an example…..
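The slide example is not reproduced here, so here is a minimal, made-up illustration of the computation described above. The options, probabilities and utility numbers are hypothetical, chosen only to show the arithmetic:

```python
# Expected utility = sum over states of P(state) * utility(outcome in that state).
options = {
    "take the new job":    [(0.6, 80), (0.4, 20)],  # (probability, utility) pairs
    "stay in current job": [(1.0, 55)],
}

for name, outcomes in options.items():
    eu = sum(p * u for p, u in outcomes)
    print(f"{name}: expected utility = {eu}")
# take the new job: 0.6*80 + 0.4*20 = 56 > 55, so EUT says take the new job.
```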
From Baron
Utility theory began in the 17th C. Its development since that time was largely associated with economic theory where it became part of the theory of the behaviour of buyers and sellers in markets. Psychologists became interested in utility theory in the early 1950s soon after the publication of von Neumann and Morgenstern’s (1947) Theory of Games and Economic Behaviour.
Utility theory is a partial theory. It assumes that we have already before us our possible choices, our goals and all the evidence we need. We do not need to search for these. If the decision involves buying a car, for example, we already have a list of possible cars and the strengths and weaknesses of each.
EUT considers only part of the process behind making good decisions; it is not a complete normative model.
Rational choice: Expected value to Expected utility
Expected value:
Games of chance make it easy to explain the theory of rational choice, because they provide exact numbers we can multiply and add.
When a simple gamble involves money, its expected value can easily be computed by multiplying the probability of winning by the monetary value of the payoff.
But everyday life provides us with countless choices that we intuitively evaluate in terms of expected utility.
Expected utility
The same method can be used for computing expected utility rather than expected monetary value.
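In symbols (a standard formulation, not tied to any particular source): for outcomes $x_i$ occurring with probabilities $p_i$,

$$\mathrm{EV} = \sum_i p_i\, x_i, \qquad \mathrm{EU} = \sum_i p_i\, u(x_i).$$

For example, a gamble offering a 0.5 chance of £100 and a 0.5 chance of nothing has an expected value of £50; expected utility simply replaces each monetary payoff with its utility $u(x_i)$.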
Expected Utility theory: See Pinker (2021) for a nice outline of RCT. The theory of expected utility (also referred to as rational choice, rational actor, Homo economicus) says that when faced with a decision actors ought to choose the option that maximises their expected utility, namely the sum of all rewards weighted by their probabilities.
Pascal’s wager
He made what many regard as the first decision analysis based on utility as part of an argument for living the Christian life:
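The wager itself is not spelled out in these notes; the standard textbook reconstruction runs roughly as follows (the payoffs are the conventional stylised ones, not Pascal's exact words). Let $p > 0$ be the probability that God exists and $c$ the finite cost of living a devout life:

$$\mathrm{EU}(\text{believe}) = p \cdot \infty + (1-p)(-c), \qquad \mathrm{EU}(\text{don't believe}) = p \cdot (-\infty) + (1-p)\,c.$$

For any non-zero $p$, the infinite payoff swamps the finite terms, so belief maximises expected utility.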
But no one really thinks like this, do they? Enter Fama
Efficient Market Hypothesis – this is one example of a theory (in financial economics) that is based on the assumption of complete rationality. It is the idea that market prices – the prices of assets like stocks and shares – reflect all available information; in other words, it is next to impossible to beat the market.
Example of rational choice theory – The “Efficient Markets Hypothesis” – Eugene Fama
The EMH holds that the prices of assets like stocks and shares reflect all available information – in other words, it is next to impossible to beat the market.
Markets are said to be informationally efficient in three ‘forms’:
Weak-form efficiency holds that only new information induces price changes and thus information about past prices cannot be used to trade and earn rents (profits) in the long run. Price movements exhibit the characteristics of a memory-less process (a ‘random walk’).
Semi-strong efficiency adds the assumption that new fundamental information about assets can be traded on so quickly that it is also impossible to earn systematic long-run net returns from fundamental analysis or arbitrage.
Strong-form efficiency adds in the impossibility of gaining from trading on even private information: current prices incorporate all information that could be acquired by a painstaking fundamental analysis of the asset and economic circumstances.
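A small simulation, just to illustrate the ‘memory-less’ point behind weak-form efficiency (the model and numbers are made up; real prices are messier): if price changes are independent noise, past returns tell you essentially nothing about the next one.

```python
import random

random.seed(1)

# Toy price series as a random walk: each day's return is independent noise.
returns = [random.gauss(0, 0.01) for _ in range(10_000)]

# Lag-1 autocorrelation of returns: for a memory-less process it should be close to zero.
mean = sum(returns) / len(returns)
num = sum((returns[t] - mean) * (returns[t - 1] - mean) for t in range(1, len(returns)))
den = sum((r - mean) ** 2 for r in returns)
print(f"lag-1 autocorrelation of returns: {num / den:.4f}")  # ~0: yesterday's move doesn't predict today's
```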
Game theory & the prisoner’s dilemma
Game theory introduces to rational choice the anticipation of other people’s (rational) actions.
The Descriptive question – How do we think?
-> Enter Behavioural economics
Herbert Simon (Bounded rationality and Satisficing)
Herbert Simon is credited with kicking off the research area that we would now call behavioural economics.
Two key concepts: Bounded rationality & Satisficing
Theories of rational choice assume a knower with perfect information, unlimited time and memory. They assume we optimise.
But normal humans have to factor in the time and trouble it takes to get information, and process the odds.
A wealth of information puts pressure on our truly limited resource – our capacity to pay attention
Simon suggests that given the costs of information it makes no sense to optimise; instead we must satisfice – namely, settle for the first alternative that exceeds some benchmark that is good enough (see the sketch below).
https://www.economist.com/news/2009/03/20/herbert-simon
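A minimal sketch of the contrast Simon draws, using made-up search costs and option values (nothing here is from Simon himself, it only illustrates the idea): the optimiser inspects every option, the satisficer stops at the first one that clears an aspiration level.

```python
import random

random.seed(7)
options = [random.uniform(0, 100) for _ in range(1000)]  # value of each option, unknown until inspected
COST_PER_LOOK = 0.5   # hypothetical cost of evaluating one option
ASPIRATION = 80       # "good enough" benchmark

# Optimiser: inspects everything, then picks the best.
best = max(options)
optimiser_net = best - COST_PER_LOOK * len(options)

# Satisficer: stops at the first option that clears the aspiration level.
for looked, value in enumerate(options, start=1):
    if value >= ASPIRATION:
        break
satisficer_net = value - COST_PER_LOOK * looked

print(f"optimiser: best={best:.1f}, net of search costs={optimiser_net:.1f}")
print(f"satisficer: picked {value:.1f} after {looked} looks, net={satisficer_net:.1f}")
```

With these invented numbers the satisficer ends up better off once search costs are counted, which is the point of the example.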
Bounded rationality: HS challenges the ‘heroic abstraction’ – the idealisation of human rationality – embodied in neoclassical economics.
HS was (among other things) a cognitive psychologist, computer scientist and economist. His primary research interest was organisational decision making and he is best known for his theory of bounded rationality and satisficing.
In contrast to the tenets of economics, HS held that people do not seek to maximise their benefit from a particular course of action (as they could not assimilate and digest all the information that would be needed to do such a thing). Not only could they not get access to this information, but even if they could, their minds would be unable to process it. The human mind necessarily restricts itself; it is, as Simon put it, bounded by cognitive limits.
Hence people tend to seek something that is “good enough”, something that is satisfactory. This real world behaviour is what is called satisficing.
He applied this idea to organisations as well as individuals. He noted that a wealth of information just puts pressure on our truly limited resource – our capacity to pay attention. Much of his work was on decision making in organisations, and he wanted to find the techniques and processes that organisations could use to achieve the best results given the limits on rational decision making.
He was awarded the Nobel Prize for Economics in 1978, to his considerable surprise since by then he had not taught economics for two decades.
The “big bang” of behavioural economics: Kahneman & Tversky (1974)
Thinking is hard and the mind tries to avoid it when possible – dual process model
It is not so much that the brain is lazy as that it is extremely busy: the mind must constantly perform and monitor all kinds of tasks, from adjusting heart rate and breathing to keeping muscles flexed.
Our mind copes with this amazing amount of demand by ‘automating’ as much as possible, i.e. processing information through automated routines.
The automation allows us to concentrate on one thing while performing another – we can multi-task.
To accomplish this, the mind has evolved short-cuts or rules-of-thumb to help it deal with complexity (heuristics)
These heuristics are not always wrong. They can often be very useful. But they do lead to systematic and predictable errors (biases)
Kahneman & Tversky (1974) – How do people assess the probability of an uncertain event? & Heuristics
Q: How do people assess the probability of an uncertain event? (Should you accept that job, invest in that firm, marry that person?)
Kahneman & Tversky: they rely on a number of heuristic principles which can lead to biased decisions
“In general these heuristics are quite useful, but sometimes they lead to severe and systematic errors”
Representativeness
Availability
Adjustment and anchoring
The issue is not that people make mistakes, but that these mistakes are predictable and systematic*. They are ingrained in our human nature.
* The error is not random – it is systematic (in a regression it would show up as a variable, not just as random error)
Tversky, A. and Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.
This is on the students’ reading list – one of the best-known social science papers (over 46 thousand citations).
Heuristic = rule of thumb
How do people assess the probability of an uncertain event? T&K argue that people generally rely on a limited number of heuristic principles which reduce the complex task of assessing probabilities and predicting values to simpler judgemental operations. (As Kahneman puts it, when asked a difficult question the mind quite often substitutes an easier question.) In general, these heuristics are quite useful, but sometimes they lead to severe and systematic errors.
One of the key points of K&T’s work on biases and heuristics is to show not just that we are irrational, but that we are predictably irrational. The point of BE is to reveal the systematic patterns in our departure from rationality. If there are systematic patterns in the way our decisions depart from rationality, then that is useful information that can help us make better decisions – for example, we can design systems that might help overcome these biases (e.g. how to avoid groupthink).
Thesis: “in many situations, an event A is judged to be more probable than an event B whenever A appears more representative than B”.
The more someone resembles your mental model of a successful entrepreneur, the more likely you are to think they are a successful entrepreneur
The problem is subtle – this is often a useful rule of thumb.
But: Representativeness often trumps all sorts of other relevant information (such as base rate probability, sample size, impossibility of prediction) that should be taken into account!
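A worked illustration of base-rate neglect with hypothetical numbers (not from the K&T paper): suppose only 1% of people in some population are successful entrepreneurs, that 50% of successful entrepreneurs “fit the profile”, and that 10% of everyone else does too. Bayes’ rule gives

$$P(\text{entrepreneur}\mid\text{fits profile}) = \frac{0.5 \times 0.01}{0.5 \times 0.01 + 0.1 \times 0.99} \approx 0.05,$$

so even a strongly “representative” person is still very unlikely to be a successful entrepreneur, because the base rate is so low; judging by resemblance alone ignores the denominator.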
Another example: the gambler’s fallacy. Have you ever thought, after rolling a die and getting lots of low numbers, that somehow it is your ‘turn’ for a high score? This is the gambler’s fallacy.
Decision makers tend to use information that is readily available or easy to obtain, rather than what is most relevant.
The more easily people can call some scenario to mind – the more available it is to them – the more probable they find it.
The point, once again, is not that people are stupid
This particular rule used to judge probabilities (the easier it is for me to retrieve it from my memory, the more likely it is) often works well
But it can be misleading….
Adjustment and Anchoring
We typically make assessments of an uncertain outcome by starting from an initial anchor, based on past experience (or at random), and then making adjustments
K&T first dramatized its effects by giving a bunch of high school students five seconds to guess the answer to a math question. The first group was asked to estimate this product:
8 x 7 x 6 x 5 x 4 x 3 x 2 x 1
The second group was asked to estimate this product:
1 x 2 x 3 x 4 x 5 x 6 x 7 x 8
Five seconds was not long enough to do the maths, so the two groups’ answers should have been at least roughly the same – but they weren’t, even roughly.
The first group’s median answer was 2,250; the second group’s median answer was 512. (The correct answer is 40,320.)
Prospect theory (Kahneman & Tversky, 1979)
3 Core Characteristics:
We evaluate utility according to a reference point
Loss aversion: Losses loom larger than gains. We are risk averse in the “gain” frame. We are risk seeking when things are framed as losses
Probability - we overweight probability of rare events and underweight probability of common events
The details are interesting, but the key takeaway is that K&T show many situations where, if you manipulate how the choice is framed (while the actual outcome remains the same), you can change people’s choices – from the point of view of EUT this is completely irrational.
Evaluation is relative to a neutral reference point. We evaluate the utility of different choices with reference to a neutral reference point. You can easily see this with an experiment – place three bowls of water in front of you: ice water on the left, warm water on the right, and room-temperature water in the middle. Put one hand in the ice water and the other in the warm water for about a minute, then put both into the room-temperature bowl. You will experience the same temperature as heat in one hand and cold in the other. For financial outcomes the usual reference point is the status quo, but it can also be the outcome you expect, or feel entitled to (for example the raise or the bonus that your colleagues receive).
Loss aversion: When directly compared to each other, losses loom larger than gains. This asymmetry has an evolutionary history – organisms that treat threats as more urgent than gains have a better chance of survival. [If you think on balance you are a merit student, you will get more pain from a pass than pleasure from a distinction.] Moreover, we are risk averse in the “gain” frame and risk seeking when things are framed as losses.
Probability: we overweight the probability of rare events and underweight the probability of common events. [i.e. we don’t worry enough about being knocked over by a car and worry too much about Ebola].
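For reference, one common parameterisation of the value function comes from Tversky and Kahneman’s later (1992) cumulative prospect theory paper rather than the 1979 original; treat the exact numbers as indicative:

$$v(x) = \begin{cases} x^{\alpha} & x \ge 0 \\ -\lambda\,(-x)^{\beta} & x < 0 \end{cases} \qquad \text{with } \alpha \approx \beta \approx 0.88,\ \lambda \approx 2.25,$$

so a loss of £100 “hurts” roughly 2.25 times as much as a gain of £100 “pleases”, and outcomes are valued relative to the reference point rather than in absolute terms. A separate weighting function then overweights small probabilities and underweights moderate-to-large ones.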
Other important biases
The winner’s curse
Sunk cost fallacy
Escalation of commitment
The illusion of control
The above are all about individual decision making – can groups overcome these biases?
Group think
Bazerman (2006)
The Winner’s Curse (Thaler 1994)
The winner of an auction will be the bidder with the most optimistic estimate of the asset’s value
If estimates are on average accurate, then the winning bidder will have paid too much (the winner’s curse)
When there are multiple bidders the rational thing to do is bid conservatively (a toy simulation follows the examples below)
Examples:
-Bidding for oil and gas drilling rights
-Corporate takeovers
-3G licences
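A toy simulation of the logic above, with invented numbers: every bidder’s estimate is unbiased on average, yet the winning (most optimistic) estimate systematically overshoots the true value, and the overshoot grows with the number of bidders.

```python
import random

random.seed(42)

TRUE_VALUE = 100.0   # unknown to the bidders
NOISE_SD = 20.0      # spread of each bidder's (unbiased) estimate
TRIALS = 10_000

for n_bidders in (2, 5, 20):
    winning_estimates = []
    for _ in range(TRIALS):
        estimates = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(n_bidders)]
        winning_estimates.append(max(estimates))  # the most optimistic bidder wins
    avg_overshoot = sum(winning_estimates) / TRIALS - TRUE_VALUE
    print(f"{n_bidders:>2} bidders: winner overestimates the value by {avg_overshoot:.1f} on average")
```

If bidders simply bid their estimates, the winner therefore overpays unless everyone shades their bid down, which is why bidding conservatively is the rational response.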
Concorde: Why was the project kept alive?
HS2? The next sunk cost disaster?
A sunk cost is any cost that has already been paid and cannot be recovered
The sunk cost fallacy is a mistake in reasoning in which the sunk costs of an activity - instead of only future costs and benefits - are considered when deciding whether to continue the activity.
The greater the size of the sunk investment, the more people tend to invest further, even when the return on added investment appears not to be worthwhile
Escalation of commitment (Staw, 1976)
Commonly applied to situations of war. The price that has been paid, in lost lives, is seen as so terrible, that it would be a betrayal or humiliating to cut one's losses and quit.
When past investments in a project are very high, the psychological need to feel like these investments have not been wasted is so powerful that people continue to commit.
There are countless examples of projects in both the public and private sector where individuals and groups escalate commitment.
Examples of escalation situations
The illusion of control (Soane & Willman)
Fenton-O’Creevy et al. (including Emma Soane) argue that some behaviours that seem irrational, especially in the context of finance and economics, are in fact rational in terms of goal achievement and the maintenance of self-image and self-control. (We need to feel good about ourselves, so it is evolutionarily adaptive that we ‘fool ourselves’.)
They look at one particular bias: The illusion of control. They argue that we are powerfully motivated to restore our sense of control when we feel it is threatened. In order to do so we need to be supported by a framework of self-belief, and to sustain this we need evidence. We do this by taking credit for positive outcomes and attributing failure to third parties.
They ask:
Does trading have a tendency to attract people who are prone to the illusion, or might it attract those who are immune to it?
Is the trading environment especially conducive to the formation of the illusion or is it one where the illusion is difficult to sustain?
Is the illusion of control detrimental to trading performance?
They did this through a computer-based experiment where traders were given a computer game to play and were afterwards asked to rank their own performance on four dimensions. In reality the game, which traders thought they could influence the outcome of, was completely random.
Findings:
Those who gave themselves very high scores on their own performance typically stressed that it was important to believe in yourself.
Those who gave themselves low scores stressed that it was important to honestly appraise one’s own performance.
They found that there was a statistically significant negative association between illusion of control and total remuneration and contribution to profits. Traders with a higher propensity to illusion of control were rated by their managers as less effective. In sum they found clear evidence that the illusion of control is associated with poorer performance and lower earnings.
An alternative view (Gigerenzer – Fast & Frugal Heuristics)
Most of the time heuristics work well and are necessary for the mind to work; heuristics are smart and are often the optimal response
Gigerenzer argues that heuristics are not irrational or always second-best to optimization, as the accuracy-effort trade-off view assumes, in which heuristics are seen as short-cuts that trade less effort for less accuracy.
His studies have identified situations in which "less is more", where heuristics make more accurate decisions with less effort.
He gives the example of catching a ball: working out the trajectory, height and speed of the ball and the catcher is an extraordinarily complex mathematical problem, yet many of us can do this naturally with minimal effort (a toy sketch follows below).
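A toy version of the ball-catching example, in the spirit of Gigerenzer’s “gaze heuristic” (all numbers invented, and the catcher’s repositioning is idealised): instead of solving the projectile equations, the catcher simply stands wherever keeps the angle of gaze to the descending ball constant, and that rule alone delivers them to the landing spot.

```python
import math

# Toy ballistic ball (no air resistance). The catcher never solves these equations:
# once the ball is descending, they just hold the gaze angle to the ball constant.
G, V0, LAUNCH = 9.81, 20.0, math.radians(60)
vx, vy = V0 * math.cos(LAUNCH), V0 * math.sin(LAUNCH)

dt, t = 0.01, 0.0
catcher_x = 25.0          # hypothetical starting position
target_tan = None         # tangent of the gaze angle to hold fixed (set once ball descends)

while True:
    t += dt
    ball_x = vx * t
    ball_y = vy * t - 0.5 * G * t * t
    if ball_y <= 0:       # ball has landed
        break
    if (vy - G * t) < 0 and target_tan is None:
        target_tan = ball_y / (catcher_x - ball_x)   # fix the gaze angle as the ball starts to fall
    if target_tan is not None:
        # Gaze heuristic: reposition so height / horizontal distance (the gaze angle) stays constant.
        catcher_x = ball_x + ball_y / target_tan

landing_x = vx * (2 * vy / G)
print(f"ball lands at x = {landing_x:.2f} m; catcher ends at x = {catcher_x:.2f} m")
```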
Note that K&T, particularly Tversky, were very opposed to his work. It was a vicious turf battle.
[academic politics are so vicious because the stakes are so low]
Can groups overcome some of these problems?
Groups can, in theory, overcome some of the problems associated with individual biases
BUT
Group think (Janis, 1972)
Since much important decision making is done by groups, not individuals, the biases that are found in groups are important.
In some ways groups have an opportunity to overcome some of the biases that are shown by individuals. It is possible to choose the members of the group so as to represent a variety of points of view.
On the other hand, the individual members of the group may be too willing to assume that this has been done when it has not. There is a comfort in a group consensus.
Janis’s work is famous.
Why have I used this picture?
“Double-edged sword”: a phrase meaning that something has both positive and negative effects
8 symptoms of Groupthink
Preventing Groupthink
Other examples of Groupthink
The lessons from section 2: Be aware of common heuristics – and try to avoid the obvious ones
The reliance on heuristics, and the prevalence of biases is not restricted to lay people
People who have had extensive training in statistics also fall prey to these biases (although they might avoid some of the more obvious biases such as the gambler’s fallacy)
In sum these heuristics are highly economical and sometimes effective, but they lead to systematic and predictable errors
Groups can overcome some of the biases shown by individuals - but there is a dangerous comfort in a group consensus
Another (very public) example of a decision failure
The Good Judgement project (GJP)
The GJP was invited to be one of five teams in the IARPA forecasting tournament
Led by Philip Tetlock and Barbara Mellers (University of Pennsylvania) and Don Moore (UC Berkeley)
They gathered a large number of talented amateurs (rather than geopolitical subject matter experts) gave them basic tutorials on forecasting best practice and overcoming cognitive biases and created an aggregation algorithm to combine the individual predictions of the forecasters.
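The GJP’s actual aggregation algorithm was more sophisticated than this, but a minimal sketch of the basic idea (with invented forecasts) is to pool the individual probabilities and then “extremize” the pooled estimate, pushing it away from 0.5 to offset the fact that averaging dilutes confidence:

```python
# Hypothetical individual probability forecasts for one question, e.g. "Will event X happen?"
forecasts = [0.70, 0.65, 0.80, 0.55, 0.75]

# Step 1: simple pooling (unweighted mean).
pooled = sum(forecasts) / len(forecasts)

# Step 2: extremize the pooled probability (exponent a > 1 pushes it away from 0.5).
a = 2.0
extremized = pooled**a / (pooled**a + (1 - pooled)**a)

print(f"mean forecast = {pooled:.2f}, extremized forecast = {extremized:.2f}")
# mean = 0.69, extremized ≈ 0.83 with these made-up numbers.
```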
Key insights
Talented generalists often outperform specialists in making forecasts
Carefully crafted training can enhance predictive acumen
Well-run teams can outperform individuals
The GJP won both of the first two seasons and, as of 2013, was the only research team that ACE was still funding.
Tetlock’s recommendations
Find the sweet spot
Train for good judgement
Basic understanding of statistics
Understand cognitive biases
Model the experts in your midst
Build the right kind of teams
Track performance and give feedback
A caveat – What good judgement also requires
The GJP was looking at forecasting
This might be a very important part of good judgement – but (as Tetlock would be the first to admit) it is not the only requirement for good judgement, or good leadership.
Good judgment also requires:
-Good ethical judgment
-The ability to ask (not just answer) good questions
And leadership skills:
-Be confident and instil confidence
-Decisiveness. Leaders cannot ruminate endlessly
-Deliver a vision
Conclusion – parts 1-3