Buffl: CP 1
by 12DayFIsh

What is the learning bias of uncontrollable events and how can it be explained?

  • Operationalisation

    • Participants operate a slot machine where they win something when two identical symbols land next to each other

    • In some trials participants can only press start; in others they can also press to stop the slot (but it spins so fast that stopping it is effectively uncontrollable)

    • Two kinds of trials:

      • Full-miss trials: the symbols do not match and the matching symbol ends up far apart

      • Near-miss trials: the symbols do not match, but they are close to each other (e.g. one position apart)

    • When people are then asked if they want to continue the game, they…

      • Are more likely to want to continue after a near miss than after a full miss, but only when they themselves can stop the slot (after full misses the likelihood is roughly the same; when a computer controls the stop, people are even less likely to continue after a near miss)

        -> Problem: the game is completely random and has no memory of previous outcomes, so a near-miss trial makes the next trial no more likely to be a win than a full-miss trial!

  • Why are people more likely to want to continue in a near miss trial?

    • People treat uncontrollable events as if they were controllable

    • Thus, they see a near miss as a sign of being close to winning/learning how it works -> much like when we learn to ride a bike and manage a few meters before falling, which signals to us that we're close to mastering it

    • This is also supported neurologically:

      • more activity in the ventral striatum in near-miss trials (a subcortical structure associated with reward-based learning)

        • In a learning task it would make sense for the brain to reward us for near misses, as this motivates us to keep going until we fully master it

      • The brain thus treats it like a learning task (and thus controllable), although it is uncontrollable


What is nudging? Which ways of nudging exist?

  • = influencing choice without restricting it

  • i.e. taking the knowledge about choice patterns we have from psychology and building it into the framing of options, so that people are more likely to make the choices that are best (for them or for whoever introduced the nudge)

    • People are pushed to make better choices

  • Ways of nudging:

    • Changing the status quo

      • Status quo bias: people are more likely to leave things as they are than to change them

      • If avoiding a desirable behaviour requires actively changing something (instead of the other way around), people are more likely to engage in/accept the desirable behaviour

      • Changing the status quo does not restrict choice set, but can significantly impact choice (default effect)

      • Examples

        • Opt-in vs. opt-out for organ donation: having to actively sign up to become an organ donor vs. having to actively ask to be removed as an organ donor (the second leads to far more people being registered as organ donors)

        • Introducing a renewable default option for energy (even though it costs more) where people would have to actively unsubscribe to get rid of it -> more people consume renewable energy

    • Nudging through a loss frame

    • Nudging through information:

      • making information easier to process increases the probability that people follow the recommendation (e.g. the Nutri-Score)

      • Avoiding base-rate neglect by presenting frequencies

        • instead of presenting probabilities (%) which are hard to understand, one could present frequencies (50 out of 1000 people instead of 5%)

    • Commitment devices to avoid hyperbolic discounting

      • e.g. committing to go to the gym starting January 1st -> if the person does not follow through, they'll be forced to donate to a charity they don't like

        • commit to future costs/punishment if plan fails
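
The frequency nudge described above (presenting "5%" as "50 out of 1000 people") can be sketched in a few lines; the function name and the reference population of 1000 are illustrative choices, not from the notes:

```python
# Minimal sketch: re-expressing a probability as a natural frequency
# to counter base-rate neglect (e.g. 5% -> "50 out of 1000 people").

def as_natural_frequency(probability: float, reference: int = 1000) -> str:
    """Express a probability as 'k out of n people'."""
    count = round(probability * reference)
    return f"{count} out of {reference} people"

print(as_natural_frequency(0.05))  # 50 out of 1000 people
```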


What evidence is there for the existence of advantageous inequality aversion and for disadvantageous inequality aversion?

  • Advantageous inequality aversion:

    • The dictator game: people are given a resource (e.g. 10 euros) and asked to distribute it between themselves and someone else as they please

    • If there wasn’t advantageous inequality aversion, all people would keep all the money

    • However, only 36% of participants keep the whole amount of money

      • 13% even give away more than half

      • 16% do a 50/50 split

  • Disadvantageous inequality aversion

    • The ultimatum game

      • The proposer gets a resource which they can divide between themselves and the receiver

      • The receiver can then decide to either…

        • Accept the offer, at which point the receiver and the proposer split the money according to the proposer's proposal

        • Reject the offer, at which point neither gets anything

      • Rational choice prediction: the receiver accepts any offer above 0, because it is still a gain

      • Disadvantageous inequality aversion prediction: responders reject offers below some threshold x, depending on their alpha

        -> what is observed is that people reject offers if the proposer distributes the goods very unequally, with the tendency to reject increasing the more unequal the split

        • 5-10% acceptance when the offer is 0-10% of the total

        • 20-25% acceptance when the offer is 10-20% of the total

        • 33-40% acceptance when the offer is 20-30% of the total

        • 55% acceptance when the offer is 30-40% of the total

        • 80-85% acceptance when the offer is 40-50% of the total

        • 95% acceptance when the offer is >= 50% of the total

    • Acceptance rates also vary a lot across countries and cultures
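
The alpha in the rejection prediction above comes from inequality-aversion models in the style of Fehr and Schmidt. A minimal sketch of the disadvantageous-inequality term for an ultimatum-game responder; the numbers are illustrative, not the study's data:

```python
# Sketch of the disadvantageous-inequality term of the Fehr-Schmidt model,
# applied to the responder in the ultimatum game.

def responder_utility(own: float, other: float, alpha: float) -> float:
    """Utility = own payoff minus alpha-weighted disadvantageous inequality."""
    return own - alpha * max(other - own, 0.0)

def accepts(offer: float, total: float, alpha: float) -> bool:
    """Accept iff the utility of accepting beats rejecting (where both get 0)."""
    return responder_utility(offer, total - offer, alpha) >= 0.0

# With alpha = 1, offers below one third of the total are rejected:
print(accepts(2.0, 10.0, alpha=1.0))  # utility 2 - 6 = -4 -> rejected
print(accepts(4.0, 10.0, alpha=1.0))  # utility 4 - 2 = 2 -> accepted
```

The higher a responder's alpha, the larger the minimum share they demand, which matches the rising acceptance rates for more equal offers listed above.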


What is tainted altruism and what are some examples of it? How can this effect be mitigated?

  • Doing good/altruistic stuff for selfish reasons is evaluated more negatively

  • Doing something good for selfish reasons is perceived as tainted

  • Examples:

    • Pallotta TeamWorks:

      • Charity organization which raised $305 million

      • When people found out that the CEO earned $400k a year, this led to public criticism and the company collapsed -> loss in charitable giving

    • When people have to judge two vignettes in which…

      • A guy starts working in a coffeeshop where his crush works and only does his job well to impress her vs.

      • A guy starts working in a homeless shelter where his crush works and only does his job well to impress her

        • they judge both actions as equally beneficial to society (although one is serving coffee and the other is helping the homeless -> helping the homeless is devalued due to tainted altruism), and they judge the person in the shelter vignette as more immoral

          -> i.e. it is seen as less moral to do something good for selfish reasons than to do something neutral for selfish reasons

    • GAP: people had to judge GAP in three different conditions

      • Control -> GAP simply making money for profit

      • Altruism -> GAP donating 50% of a certain product to charity

      • Tainted altruism -> GAP donating, but the donating campaign earning them extra money

      • The tainted altruism condition led to people

        • disliking GAP the most

        • Judging GAP the least moral and the most manipulative

          -> GAP in tainted altruism condition judged even worse than in the condition where GAP did not donate anything!

  • Ways to work “against” tainted altruism bias:

    • If people were presented the coffee shop and the shelter vignettes simultaneously, the effect disappeared -> people realized their judgements were inconsistent

    • Counterfactual: If in the GAP example people were made aware of the fact that GAP could also simply donate nothing, this reduced the tainted altruism effect


What are some possible reasons for why trustees send back money in the trust game and how can a distinction between those different reasons be operationalized?

  • Possible reasons for why trustees send back money

    • Guilt aversion

      • “I feel bad if I do not meet the expectations of the trustor”

    • Inequality aversion

      • “I feel bad if the other one has less than me”

  • Operationalization to distinguish between those two:

    • Two player play the trust game, but:

      • Trustor believes the money will be multiplied by 3 (and the trustee knows that this is what the trustor believes)

      • For the trustee, the money is instead multiplied by one of three factors:

        • 2, 4, or 6 (depending on the trial)

    • Expectation:

      • If trustee sends it back due to guilt aversion, they will send back the amount the trustor expects, independently of which condition they are in

        • e.g. if the 10$ are doubled (20$) instead of tripled (30$), the trustee will still send back 15$ (as expected by the trustor, thus ending up with less themselves)

        • e.g. if the 10$ are quadrupled (40$) instead of tripled (30$), the trustee will still send back 15$ (as expected by the trustor, thus ending up with more than the trustor)

      • If the trustee sends money back due to inequality aversion, they will send back half of what they received, independently of the trustor's expectations

        • e.g. if the 10$ are doubled (20$) instead of tripled (30$), the trustee will send back 10$ (even though the trustor expects more), leaving both with the same amount

        • e.g. if the 10$ are quadrupled (40$) instead of tripled (30$), the trustee will send back 20$ (even though the trustor expects less), leaving both with the same amount
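
The two predictions can be summarized in a small sketch using the notes' example numbers (10$ sent, believed to be tripled, so 15$ expected back); the function names are illustrative:

```python
# Toy sketch of the two back-transfer predictions in the trust game variant.

SENT = 10
BELIEVED_MULTIPLIER = 3
EXPECTED_BACK = SENT * BELIEVED_MULTIPLIER / 2  # 15: half of the believed pot

def guilt_averse_back(actual_multiplier: int) -> float:
    """Guilt aversion: meet the trustor's expectation, whatever was received."""
    return EXPECTED_BACK  # independent of the actual multiplier

def inequality_averse_back(actual_multiplier: int) -> float:
    """Inequality aversion: split what was actually received 50/50."""
    return SENT * actual_multiplier / 2

for m in (2, 4, 6):
    print(f"x{m}: guilt -> {guilt_averse_back(m)}$, "
          f"inequality -> {inequality_averse_back(m)}$")
```

The conditions separate the two motives precisely because the predictions only coincide when the actual multiplier equals the believed one.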


What is betrayal aversion and how was it tested?

  • Definition of betrayal aversion

    • Betrayal hurts and has a cost beyond money, so people try to avoid being betrayed

  • Operationalisation of betrayal aversion

    • People have choice between

      • A: 10 points for sure

      • B: Either 15 points (with probability p) or 8 points (with probability 1 - p)

      • People have to state the minimum probability p at which they would accept option B

    • Three conditions

      • Decision problem

        • No externalities

        • Outcome of B randomly assigned with the given probability by a computer

      • Risky Dictator Game

        • In option A, chooser and other person both get 10 points

        • In option B, the chooser's payoffs are as before (15 or 8), but the other player gets 15 points in the 15-point outcome and 22 points in the 8-point outcome

        • Outcome still randomly chosen by the computer

        • However, in option B the second player might end up with more than oneself (inequality aversion, jealousy etc. come into play)

      • Trust Game (risk of betrayal)

        • In option A, chooser and other person both get 10 points

        • In option B, the chooser's payoffs are as before (15 or 8), but the other player gets 15 points in the 15-point outcome and 22 points in the 8-point outcome

        • Also has externalities

        • BUT: if option B is reached, player 2 gets to choose which outcome occurs (the chooser still states the minimum acceptable probability, so in terms of expected payoffs nothing should change compared with the other two conditions)

      • Results

        • People are willing to accept similarly high/low p in the decision and risky dictator conditions

          • 60% p = .3 or less

          • 85% p = .5 or less

          • 100% p = .8 or less

        • People demand a significantly higher minimum p in the betrayal condition

          • 25% p = .3 or less

          • 40% p = .5 or less

          • 90% p = .8 or less

          • 100% p = 1.0 or less

        -> rational decision makers should choose the same minimum p across all conditions

      • Conclusion: People behave as though there is a psychological betrayal cost above and beyond any dollar losses
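
As a benchmark not stated explicitly in the notes but implied by the payoffs: a risk-neutral chooser who cares only about their own points should accept option B as soon as its expected value reaches the sure 10 points, which happens at p = 2/7:

```python
# Risk-neutral benchmark for the choice between A (10 for sure) and
# B (15 with probability p, else 8).

def expected_value_b(p: float, high: float = 15.0, low: float = 8.0) -> float:
    """Expected payoff of the risky option B."""
    return p * high + (1 - p) * low

# Solve p*15 + (1-p)*8 = 10  ->  7p = 2  ->  p = 2/7
min_p = (10.0 - 8.0) / (15.0 - 8.0)
print(round(min_p, 3))  # 0.286
```

The measured "betrayal cost" is then the gap between this benchmark and the much higher p participants demand when a human, rather than a random device, determines the outcome.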


What experiment was conducted to show how markets influence morals/people's reactions to externalities, and what conclusion did the study come to?

  • General condition:

    • Every participant is assigned a mouse

    • Participants can choose to let the mouse live and get nothing or get a certain amount of money to have the mouse be killed

    • Question: For how much money are people willing to let the mouse die (externality)?

  • Three conditions

    • Individual condition:

      • Participants simply say the minimum price for which they would be willing to let the mouse die (max 10$)

    • Bilateral bargaining condition

      • There’s a buyer and a seller (the seller being the participant)

      • There are 10 bargaining rounds

      • Each round, the buyer makes an offer between 0 and 20$ to the seller

        • Seller gets amount of offered money, buyer keeps the rest

      • The seller can accept or reject the offer -> if it's rejected, the game moves on to the next round and the seller cannot return to a previous offer

    • Multilateral bargaining condition

      • There are 7 buyers and 9 sellers (the sellers being participants)

        • 2 sellers will not get any money by default -> competition

      • There are 10 bargaining rounds

      • Buyers make offers simultaneously -> if sellers see an offer they like, they have to make a deal fast (first come, first served)

      • Buyers and sellers that make deal leave the market

        -> Social Norm activation

  • Results

    • Willingness to let the mouse be killed:

      • Individual condition: 40% accepted 10$ to let the mouse be killed

      • Bilateral bargaining: 70% accepted 10$ or less

      • Multilateral bargaining: 75% accepted 10$ or less

    • In the bargaining conditions, the price people were willing to accept to let the mouse be killed decreased

  • Interpretation

    • Markets can erode moral values

      • Norm activation does not seem to play a big role here, since there was no big difference between the bilateral and multilateral bargaining conditions

    • People with moral standards can abstain from trading, but cannot influence the price/market

    • A buyer just has to find “the right seller”


What reasons exist for the evolution of cooperation and what assumptions and formulas belong to those reasons?

  • Indirect fitness / green beards

    • Assumptions: individuals can identify relatedness/phenotypic characteristics

    • Reason for cooperation: to increase inclusive fitness (indirect fitness, more specifically)

    • Formula: Hamilton's rule -> help if rb > c

      • r = relatedness coefficient/degree of relatedness

      • b = reproductive benefit

      • c = cost of helping

      • the net value of helping is rb - c

  • Direct reciprocity

    • Assumptions:

      • individuals can identify each other

      • multiple interactions

      • individuals remember past actions of the opponent

    • Reason for cooperation: defection leads to more costs than cooperation if two people meet again and again (tit-for-tat -> if you don't cooperate, others will also not cooperate with you)

    • Formula: cooperate if p > t/c

      • p = probability of meeting again

      • t = temptation (additional benefit of defecting compared to cooperating)

      • c = benefit of cooperation

      • essentially a prisoner's dilemma matrix

      • the higher the temptation to exploit (relative to the cooperative benefit), the higher the likelihood of meeting again must be

  • Indirect reciprocity (downstream -> C helps A, because C saw A help B)

    • Assumptions:

      • individuals can identify each other

      • multiple interactions

      • individuals remember past actions of the opponent

      • players can observe others' actions

      • players can communicate actions to third parties

    • Reason for cooperation: motivation (and ability) to have a good reputation

    • Formula: cooperate if q > c/b

      • q = probability of knowing one's reputation

      • c = cost of helping

      • b = benefit of helping

      • the probability of one's reputation being known (q) needs to be larger than the cost-to-benefit ratio of cooperation (c/b)

      • implication: reputation systems need to be more reliable the riskier cooperation is/the higher the temptation to defect

  • Generalized reciprocity (upstream -> B helps C, because A helped B)

    • Assumptions: same as indirect reciprocity, since it is a variant of indirect reciprocity

    • Possible explanations:

      • increased/more resources

      • perceived norms

      • warm glow

    • Formula: (none given)
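
The three threshold conditions above can be written as small predicate functions; this is a sketch, with variable names following the notes:

```python
# Minimal sketch of the three cooperation thresholds.

def hamilton_helps(r: float, b: float, c: float) -> bool:
    """Kin selection (Hamilton's rule): helping pays off if r*b > c."""
    return r * b > c

def direct_reciprocity_sustains(p: float, t: float, c: float) -> bool:
    """Repeated interaction: cooperation pays off if p > t/c."""
    return p > t / c

def indirect_reciprocity_sustains(q: float, c: float, b: float) -> bool:
    """Reputation: cooperation pays off if q > c/b."""
    return q > c / b

# Full siblings (r = 0.5): helping pays once the benefit exceeds twice the cost.
print(hamilton_helps(r=0.5, b=3.0, c=1.0))  # True
```

Each function just evaluates one inequality, which makes the shared structure visible: every mechanism trades the cost of cooperating against a discounted benefit.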


What proof is there for indirect reciprocity (with regard to image scoring)?

  • Experiment 1

    • Experiment in which people were matched with a different person every time

    • Roles switched: sometimes donor (who could give), sometimes receiver (who received the donation)

    • People were anonymous, i.e. they could not tell whether they had interacted with a person before

    • BUT people saw the other person's score, i.e. how much that person had donated in past rounds

    • Result: people with lower scores got significantly fewer donations as rounds went on (this cannot be due to direct reciprocity, since players could not recognize former partners)

  • Experiment 2

    • Before the game, people could choose the network they wanted to play in (by requesting and accepting requests); this continued as the game went on

    • Three conditions:

      • Global network knowledge: people see everyone's network ties, i.e. who belongs to which group and who joins/leaves

      • Global reputation knowledge: people see, for every round, how everyone acted (defect or cooperate), even players from other groups

      • Local reputation knowledge: people learn how the people in their own network acted in terms of cooperation

    • Results

      • Cooperation level

        • Global network knowledge alone did not do much

        • Global reputation knowledge led to higher/more stable cooperation levels

        • Local reputation knowledge alone (LRK), or LRK + global network knowledge (GNK), led to less cooperation

          -> people might have wanted to be seen as cooperative by other groups so that they would be invited into those groups

      • Density of network

        • In the LRK-only and LRK/GNK conditions, density decreased dramatically -> few new connections, and the number of new connections kept decreasing

        • When global reputation was known, the density of networks increased steadily -> more new connections were formed

        -> global reputation knowledge helps to…

        • isolate defectors

        • let cooperators make links

        • leads to closely interconnected network of cooperators

  • Drug market study

    • In illegal drug markets, drug dealers who reliably deliver get better reviews and can thus sell their products at higher prices
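
Image scoring as used in Experiment 1 can be sketched as a toy simulation: donors see only the recipient's public score and give only if it is non-negative. Player counts, payoffs, and the always-defecting players are illustrative assumptions, not the experiment's actual design:

```python
# Toy simulation of image scoring: a donor gives only if the recipient's
# public score (past giving behaviour) is at or above a threshold.

import random

random.seed(1)

N_ROUNDS, THRESHOLD, COST, BENEFIT = 100, 0, 1.0, 2.0
# Players 0-2 condition giving on the recipient's score; players 3-5 never give.
strategies = ["discriminator"] * 3 + ["defector"] * 3
scores = [0] * 6          # public image score per player
payoffs = [0.0] * 6

for _ in range(N_ROUNDS):
    donor, recipient = random.sample(range(6), 2)
    gives = strategies[donor] == "discriminator" and scores[recipient] >= THRESHOLD
    if gives:
        payoffs[donor] -= COST
        payoffs[recipient] += BENEFIT
        scores[donor] += 1    # giving raises one's public image
    else:
        scores[donor] -= 1    # refusing (or defecting) lowers it

# Defectors' scores can only fall, so discriminators stop giving to them,
# mirroring the finding that low-score players receive fewer donations.
print(scores)
```

Note that a discriminator who rightly refuses a low-score recipient also loses image here; this is the known weakness of plain image scoring compared to reputation rules that track the reason for a refusal.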

