Description: This lecture continues the discussion of social preferences by taking a deeper dive into games like the Dictator game, among others.
Instructor: Prof. Frank Schilbach
Note: In March 2020, MIT students were sent home because of the COVID-19 pandemic, and classes were taught virtually. For lectures 12 and 13, Prof. Schilbach continued to teach from the classroom on campus, until he began teaching via Zoom starting with the mid-term review.

Lecture 12: Social Preferences
[SQUEAKING] [RUSTLING]
[CLICKING]
PROFESSOR: Welcome to lecture number 12. I'm very sorry that this has to be remote. I hope all of you are doing OK. I know this is lots of trouble and stress for many of you, having to move, not knowing necessarily where to move to, financial issues and so on and so forth, and lots of worries about health, your own and others'.
I hope you're doing OK. Try to take good care of yourselves and of others; try to look out for others and be there for each other. Try also to make use of mental health resources as you can. I'll send an email about that as well.
This is lecture number 12. We talk about social preferences. Last time, we gave a broad introduction to social preferences. Let me just pick up where we were towards the end. I showed you, essentially, different ways in which you can experimentally elicit people's social preferences using dictator games, ultimatum games, and trust games. We looked at how generously people behave in those kinds of games. Does it look like people care about others, that they're nice towards others?
We then, to some degree, found quite a bit of generosity in dictator games and other games. People seemed quite nice. In some pieces of evidence, though, it wasn't quite clear whether people are genuinely nice to each other, or whether they just don't want to look like a mean person, either towards others or perhaps towards themselves. They want to think of themselves as being a nice person, and they don't want to be mean. And that might be a motivation for giving towards others.
In the games that I have shown you so far, it wasn't quite clear how to distinguish these. So what we're going to do in this lecture is look at some pieces of evidence, versions of dictator and other games, that let us look more at the underlying motivation of why people give, and perhaps detect, in some cases, that when people give in situations where they look quite nice, that might not be because they genuinely care about the other person.
It might rather be either because of social image concerns-- what do other people think about them if they give versus not-- or because of self-image concerns-- what do they think of themselves? That is, you give to somebody else not necessarily because you care about the other person, and maybe not even because you care about what the other person thinks of you. It's just that you yourself want to feel good about yourself, and therefore you might be generous because you want to maintain the image of being a nice person to yourself as well.
Some of the evidence that I'm going to show you is-- you wouldn't say depressing, but it's a little bit negative, in the sense that essentially what I'm going to show you is that people appear, or in fact are, if you believe that evidence, less nice than it might seem. At first sight, that looks a little depressing or a little negative, in the sense that we might be disappointed: we thought everybody was really nice and friendly and cared so much about others, and maybe that's not quite true.
On the other hand, however, in some ways you might say, well, it's not obvious that we really care that much about whether people are genuinely nice versus not. What we really care about, perhaps, is how people, in fact, behave. Are they friendly and prosocial, and do they support each other?
And perhaps if we understand the conditions under which people are nice and friendly to each other, that allows us, then, to think about how we set up policies-- in firms or in organizations or in society as a whole-- that might make people more prosocial overall.
So in this lecture, we're going to talk more about trying to detect different motivations for people's prosociality. And then in the next lecture, we're going to talk about field evidence. What I'm going to show you for now is mostly lab evidence, but next time we'll look at field evidence from real-world situations, in companies and so on, or from field experiments, of how people behave. And in particular, we're going to think about some policies that might get people to be more prosocial.
So we already talked about this as a whole. The first thing we want to think about is essentially social recognition. This is essentially social image: how do others think about you when you give versus not? I'm going to show you now evidence from several papers that make different modifications, mostly of the dictator game, that allow us to disentangle these different motivations.
So the first one is a very nice paper by Lazear et al that conducts an experiment to study motives for giving, as many dictator games do. And they have essentially a very simple design with two treatments. One of them is the standard dictator game. This is what you have seen before. The only difference is this one is in euros-- it was, I think, done in Europe. Think of euros as just being the same as dollars. So people can decide to split 10 euros between themselves and another subject.
In the other treatment, subjects decide whether to even participate in the dictator game. So essentially, what's being allowed here is an exit option. You could essentially say either you're going to participate in the dictator game with another person, or you could just decide you don't even play this game, and you just get some fixed payout.
Now then, if the dictator chooses to participate, the standard dictator game commences, and the recipient is informed of the game and of the choice of the dictator. So the recipient will know that a dictator game was played, the amount that the dictator chose, and the amount that they got. The recipient gets the money and is essentially informed of the action.
So if you choose, for example, 9 euros for yourself and 1 euro for the other person, the other person will know that the 1 euro they got came from a dictator game, and that there was somebody who chose 9 euros for themselves and gave 1 euro to them.
So one version of this game had essentially a costless exit option. That is the option where you receive 10 euros without the option of distributing the money. Notice that that's exactly the same, in terms of payouts, as choosing to opt into the game and choosing 10-0. If I just say I want 10 euros for myself, I give nothing to the other person, that's exactly the same as saying you opt out of the dictator game.
The difference, of course, is that if you opt out, the recipient will never know that you opted out. So the other person will never find out that somebody was kind of mean to them. And so you might choose the opt-out option, perhaps because you think the other person would otherwise feel bad, and you try to avoid that. This is the exit option.
Now, the title of the slide says costly exit, but in this version of the game, the exit is actually not costly. It's costless to do so. You can just essentially take the 10 euros and run, and you don't have to actually play the dictator game.
Now, when you think about distributional preferences, what do distributional preferences predict? Well, dictators who want to give some money strictly prefer to play the dictator game. If you prefer 9-1 over 10-0, well, then you're going to play the game and choose 9-1, because otherwise there's no way to actually give this person money.
Notice, distributional preferences, just to remind you, are preferences where only the outcomes matter. It doesn't matter how the outcomes come about. Only the actual outcome matters. Now, there are some people who want to just keep everything to themselves. Well, those people should just be indifferent. It doesn't really matter whether they opt out or stay in the game and choose 10-0 anyway.
But in either case, the option to exit the game should have no effect on how much dictators share. It shouldn't matter. If you want to give some positive amount, the exit option doesn't really matter because you're not going to use it. You're going to opt in, stay in the game, and just give the money to that person. And if you want to give 0 anyway, it doesn't matter whether you opt out or not. You give 0 either way; opting out is effectively giving 0 to the other person. So either way, the option to exit should have no effect on how much dictators share. In particular, it should not reduce how much dictators share. Does that make sense? OK.
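To make that prediction concrete, here's a minimal sketch-- my own illustration, not the paper's code-- assuming purely distributional preferences with a linear utility u = own + rho * other, where rho is the weight placed on the recipient:

```python
# Minimal sketch of the distributional-preferences prediction in the
# exit game (illustration only; the linear utility form is an assumption).

def best_split(rho, pie=10):
    """Best in-game allocation (own, other) among integer splits of the pie."""
    splits = [(pie - g, g) for g in range(pie + 1)]
    return max(splits, key=lambda s: s[0] + rho * s[1])

for rho in [0.0, 0.5, 1.5]:
    own, other = best_split(rho)
    # Costless exit pays (10, 0), identical in payoffs to the in-game
    # 10-0 split, so it can never beat the best in-game choice.
    exit_utility = 10 + rho * 0
    print(rho, (own, other), own + rho * other >= exit_utility)
```

For any rho, the best in-game split is weakly better than exiting, so adding the exit option should leave giving unchanged-- which is exactly the prediction the data reject.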
The limited audience seems to understand. I hope this is clear to everybody else as well. By the way, I should have said the lectures will now be recorded, currently with a very limited audience, but in general with no audience. What I very much miss in this format is the interactive form of the lectures, and I'm going to try to keep some of that.
So what we're going to do, in particular after spring break, is try to record the lectures and then find some other ways of allowing students to interact. Perhaps you can watch the lectures online, and then we have essentially online office hours or discussions of the slides. Or if there are particular issues, I may answer some questions over Zoom or other formats, and we can then discuss issues that perhaps aren't clear. Or you can ask these questions, and then I try to respond to them.
Anyway, getting back to this game, essentially we now have the comparison between the standard dictator game and the option of, in this case, a free exit. And the exit option lowers giving: in the standard dictator game, dictators shared about 1.87 euros on average. That's, again, in line with the 20%, 25% that people tend to give in dictator games in the very simplest form. But when you allow people to exit, the average amount given goes down to 0.58 euros. OK?
And so that's essentially to say there are many people who, when they get into the dictator game, feel compelled to give at least some money, because either they themselves feel bad somehow, or they feel that the other person will feel bad. You kind of want to choose 10-0, but you feel bad about the other person feeling bad if you choose 10-0, so you might be inclined to choose 8-2, 7-3, or the like.
Now, if you allow the costless exit option, the person who in the standard dictator game would choose 7-3, 6-4, 8-2, or the like-- that person in fact wants to choose 10-0 but feels bad in the standard dictator game, so chooses 7-3. Once you allow the costless exit, the person just opts out and thereby effectively chooses 10-0.
Now, a different version of this-- what I showed you so far was costless exit. It was essentially free to exit. You could just choose the 10-0 and run. There was no cost of doing so; the pie stayed the same. A different version of the game makes the exit actually costly. That is to say, instead of playing a dictator game where you can share 10 euros between the dictator and the recipient, you can choose the exit option, but you have to pay essentially 1 euro for it: you get 9 euros if you exit, and the other person still gets 0.
And people do choose the costly exit. They say, I'd rather take the 9 euros and not play the dictator game with this other person, rather than playing the dictator game where you could potentially choose 10 euros for yourself and 0 for the other person. There seem to be quite a few people who choose, essentially, the costly exit, the 9 euros, without playing the game.
And so here, on average, subjects are willing to take 82% of the pie rather than split the full pie in the dictator game. That is to say, people are willing to forgo quite a bit of money to avoid the situation where they have to deal with this other person.
Notice that by staying in the game, you could do at least as well: you could just choose 10-0 and keep the whole pie. People don't want to do that. Instead, they choose the costly exit option so the other person never finds out. So in some ways, you don't have to deal with yourself or the other person feeling bad about your not being nice.
And so that reveals, in some sense, that if you see people being nice in dictator games, at least to some degree that's not coming from people genuinely wanting to be nice to the other person and wanting to improve the other person's payout for the sake of making them richer. Rather, they want to avoid the other person feeling bad about them, or getting mad, or the like.
Now, let me just repeat that. Why do subjects want to exit? Well, simply put, they want to take the money for themselves, but they don't want to indicate to the potential recipient that she's been treated unfairly. So exiting allows them to satisfy their greed and not to worry about the recipient's reaction. Do you have any questions?
OK. So that's the first piece of evidence. That's Lazear et al on costly exit in a dictator game. We already discussed last time another version of this, which is providing excuses or covers to be selfish. This is a very nice paper by Andreoni and Bernheim. The short summary is that they allow for a computer option: an option where, in some cases, the computer decides, which gives people cover to not be nice.
So if there's a chance that you decide yourself and a chance that the computer decides, and the recipient doesn't know whether I chose or the computer decided-- say it's 50% or the like-- I might make a very mean choice because I can always say, well, you know, I didn't do this. It was the computer. I wanted to be really nice to you. The computer happened to be just mean. Sorry about that. I really tried to be nice to you. And so people essentially use the computer as cover.
Let me show you what this evidence looks like. So this is a different version, a non-anonymous dictator game. The game is set up so that you actually know the other person-- you have to face the other person, at least in some way. And the dictator's choice is forced with some probability. So this is a dictator game with $20. In the simplest version, the computer chooses 20-0 or 0-20 with equal probability.
So if the computer chooses, the computer chooses either 20-0 or 0-20 with 50% chance each if it's the computer's turn. The dictator observes the allocation chosen by the computer. The recipient does not. If I'm the dictator, I can see what the computer does, and then I'm sort of asked to make my choice. The recipient does not know what the computer actually chose.
The dictator then makes a dictator game allocation with a pie of $20, as before-- it just happens to be $20 rather than $10. And the computer's forced allocation is implemented with probability p, which is known both to the dictator and to the recipient. So the recipient knows the chance p that the computer's choice, rather than mine, is implemented.
So the computer chooses with probability p. The dictator chooses with probability 1 minus p. That is known to both the recipient and to the dictator. And now the game will vary the probability p. So if the probability p is like a 0, if the computer never chooses, and only the dictator chooses, then we're back to the typical dictator game. Then I cannot hide behind the computer. It's obvious that I'm choosing. If I'm choosing 20-0, you know that, so I have no way of blaming the computer.
If, in contrast, the probability is larger than 0-- suppose the probability is 50%, 60%, 70%-- it's very likely that the computer chose. So now I can also choose 20-0 and just say, well, sorry, it happens to be that the computer implemented this choice. Sorry that you got 0. So I can essentially hide behind the computer.
And the larger the probability p is, the more credible is this hiding behind the computer because you know if the chance is only 5% or 10%, you might not believe me that the computer actually chose. You're like, yeah, yeah, Frank, you're telling me it was the computer. Really it was you.
But if the chance is 90% or the like, then it's actually very likely, very plausible, that the computer chose the mean outcome. And then I might also just choose the mean outcome because you're never going to find out. You're not going to suspect that this was me as opposed to the computer.
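Just to make this hiding logic concrete, here's a back-of-the-envelope Bayes calculation-- my own illustration, not from the paper. In the version where the computer picks 20-0 or 0-20 with equal chance, suppose a fraction q of dictators choose 20-0 when it's their turn. A recipient who ends up with 0 should then believe the dictator deliberately chose it with probability

\[
\Pr(\text{dictator chose } 20\text{-}0 \mid \text{outcome } 20\text{-}0)
= \frac{(1-p)\,q}{(1-p)\,q + p \cdot \tfrac{1}{2}},
\]

which falls as p rises. So the higher p is, the less blame a selfish dictator absorbs, which is the sense in which hiding behind the computer becomes more credible.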
And then at the end, the recipient only learns the allocation, not the dictator's choice. The recipient only learns what they get. They do not learn what I chose or whether it was the computer or me who chose. They only know the probability p with which the computer or I made that choice. Yeah.
AUDIENCE: Does the dictator know p before he makes his allocation?
PROFESSOR: Yes. Yes. So exactly. The dictator knows p. The dictator even knows, I think, the actual choice that the computer made, yeah. But that's key here. So both the dictator and the recipient know p. That's really important because only if I know what p is, I can actually react to it and hide.
So the prediction here will be that if p is very high, if the computer chooses with high probability, people will be more likely to be mean and choose the 20-0 allocation, because I could essentially emulate the computer and hide behind it. But for that, I need to know exactly what p is.
So now what do distributional preferences here predict? Again, distributional preferences are preferences such that you only care about the outcome. You don't care about how this came about. So the dictator here should only think about the case in which her choice counts. That is to say, the computer is entirely irrelevant because again, I can't influence what the computer does anyway.
It doesn't matter whether the computer chooses with a 5% or 10% or 20% chance, or 50%, or even a 70% chance. The only thing I should care about is: for the chance 1 minus p when I choose myself, what am I going to do? What are my distributional preferences? How much do I want to keep for myself versus give to the other person?
And that's essentially the only case in which she can, in fact, affect the distribution. And so p should have essentially no effect on people's choices. So that's a very straightforward prediction. Essentially, you only care about the case when you can make a difference. And in that case, you just choose whatever you want to choose between 20-0 and 0-20, whatever you want to give the other person. The other cases are just irrelevant because they are out of the person's control.
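In symbols-- a minimal sketch, assuming purely outcome-based (distributional) utility u over allocations-- a dictator choosing allocation a maximizes

\[
\mathbb{E}U(a) \;=\; p\,\bar{u} \;+\; (1-p)\,u(a),
\]

where \(\bar{u}\) is the expected utility from the computer's forced allocation, which the dictator cannot influence. Since \(\bar{u}\) does not depend on a, the maximizing a is the same for every p < 1: the chosen allocation should not vary with p.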
Now why might p matter anyway? I already said that. Essentially p might matter because you can hide behind the computer if p is particularly high. Yes.
AUDIENCE: So the recipient knows that the computer can only choose 20 or 0?
PROFESSOR: Yeah. There's different versions of that, exactly. Exactly. In some case, the computer could choose 20-0, 0-20 with equal probability. So the recipient knows that. That's known.
AUDIENCE: If it's like an 80-20, then the recipient knows automatically.
PROFESSOR: Exactly. That's exactly right. I'm going to show you exactly that evidence. In fact-- sorry, the question was, does the recipient know whether it's 20-0 versus 18-2 or something else? And that's exactly right. The only plausible way in which I can hide behind the computer, if I want to be selfish, is 20-0. If I choose 19-1, you know it was me. It can't have been the computer, because the computer can only do 20-0 or 0-20.
And that's key here. And there's going to be some variation, and that variation will be in what the computer does. That will exactly reveal people hiding behind the computer, because in one variation the computer does 19-1, and suddenly people choose a lot of 19-1. And the only reason you might do that is exactly because you want to hide behind the computer.
So let me show you exactly that, in fact. Here's what we find. This is a bit of a complicated graph, so let me try to walk you through it in detail. The graph shows you the following. On the y-axis, it shows the fraction of dictators choosing each allocation in the different scenarios: what percentage of the $20 did the dictator decide to give to the other person? This is not the computer. So these are only the dictators' choices.
On the x-axis, we see the probability of the forced choice, the choice where the other person gets 0. The computer chooses with some probability 20-0-- so 20 for the dictator and 0 for the other person-- and that happens with different probabilities. The probability is either 0, 25%, 50%, or 75%. That's the p that I mentioned earlier.
Let's look at p equals 0 to start with. That's essentially the scenario in which the computer is irrelevant. The computer never makes any choices. What you see, essentially, is the typical behavior in dictator games. Sorry, I should also have mentioned that the different lines show how much is given to the other person.
So the blue line that you see there is 10. That's the 10-10 allocation. How much is the dictator giving to the other person? The red line is 0. This is essentially the 20-0 allocation. And the other lines down there are 1, 2 to 9, and also larger than 10.
What we're going to focus on is essentially how often does the person give 0, and how often does the person give, like, 50%? So when you look at the 0% line, essentially, that's the typical dictator choice. Again, p is 0. The computer is entirely irrelevant. The computer can't do anything.
Now, I cannot hide behind the computer, and you get the typical dictator game outcomes. About 55% of people-- actually more than usual-- give half; they choose the 10-10 allocation. About 30% of people choose 0. That's very common. And then there are some people below that who give either 2 to 9, or 1, or even more than 10. So that's typically what we see in a dictator game.
Now, moving to the right, we have positive p's. This is p equals 25%, 50%, 75%. And what we're seeing, essentially, is that as you move to the right, in particular going from 0 to 25 and from 25 to 50, the fraction of people who choose 0 for the other person goes up quite a bit.
So now, for example, look at p equals 50%. Now there's a 50% chance that the other person gets 0 anyway, because the computer so chose. And now, if I'm the dictator, I can also choose 20-0. And we know that once you, as the recipient, see the outcome, you cannot tell whether it was me or the computer: there's a 50% chance either way.
And in that case, you see now that 70% of people actually choose 20-0. That fraction goes up from 30% for p equals 0 to about 50% for p equals 25%, and to 70% for p equals 50%. So essentially, the larger p is, at least in the range of 0 to 50%, the larger the fraction of people who choose the 20-0 allocation, presumably because it's now more plausible that they're hiding behind the computer.
Now, at 75%, there seems to be no further increase. It's a little unclear how to interpret that. To some degree, this could be because 50% is already large enough: it's already very plausible that the computer chose anyway, so going from 50 to 75 gives no additional benefit from hiding behind the computer. It could also be that people in some way feel bad for the other person, because there's a high chance they get 0 from the computer anyway, so you might as well give something to that other person, perhaps. I'm not quite sure, but that's, in some sense, not the point here.
The point here is that the fraction who are mean goes up by a lot when you can hide behind the computer. The fraction who look, or are, quite nice-- who choose the 10-10 allocation-- goes down by quite a bit. Does that make sense? The following slides have a fairly detailed description of this. But let's pause for a second to see whether that makes sense.
So I think I have already said all of this. This is to say, the higher p is, the easier it is for the dictator to hide behind the computer. When you take this evidence together, it seems that people are not as nice as results from simple dictator games might suggest, because in some sense, in the simple dictator games, they would like to do something else, but they just can't really do so.
And this is kind of like your question here. This is a very nice variation that they have. This is a different variation where the computer now chooses 19-1 with probability p. So the computer essentially chooses 19 for the dictator and 1 for the other person with probability, again, p equals 0, 25%, 50%, 75%.
And exactly as you predicted and thought about, you see the same thing happening for the people choosing 10-10: that fraction goes down. But people are not now choosing more 20-0. In fact, the fraction of people who choose 20-0-- the red line-- goes down a bit. Instead, what goes up is the 19-1 allocation. So lots of people now choose 19 for themselves and 1 for the other person.
Why is that? It's because now you can hide behind the 19-1 choice of the computer, and that's the selfish choice. The 20-0 choice, in fact, goes down a little bit, perhaps because it becomes quite obvious that those are people who are quite selfish anyway. But now they switch to 19-1, because you're kind of mean anyway, but now you can essentially hide behind the computer. OK? Any questions on this?
So now, what's going on here? One potential explanation is face-saving concerns, essentially social image. This is the motivation to avoid unfavorable judgment by others. Essentially, if you care about what others think about you, you might engage in certain behaviors.
So the experiment provides people a way to avoid such judgments. Remember, this is not an anonymous game. You have to face this other person and deal with them afterwards, at least in some way. So the computer essentially provides people an opportunity to save face and keep up their social image, because what you're trying to avoid here is that if you choose the 20-0 option, the other person will think you're not very nice.
And either you're trying to avoid having to see that they're unhappy-- you might just feel bad about them being unhappy-- or, perhaps more plausibly, it just feels bad for you if other people think you're kind of mean. So you want to be mean, but you don't want others to think you're mean, and therefore you hide behind the computer.
I think I said all of this already. So both of these types of evidence-- the exit in the dictator game, and also the evidence I showed you just now, hiding behind the computer-- seem to say that when there's an option to avoid people feeling bad, or feeling that you're mean to them, people tend to take that option, and that reduces their giving behavior.
Now, in some sense, there's empirical evidence on this by Gautam Rao and Stefano DellaVigna and others: in situations when it comes to voting or other giving choices, people are willing to pay, essentially, to avoid situations where they're asked to give.
This is like somebody comes to your door and says, would you like to donate to this cause, or would you like to vote for this candidate, and so on and so forth. People might, once asked, say, yeah, sure, I'll do it, because they feel bad otherwise. So [INAUDIBLE] shows you pictures of poor children in some terrible situations. If you now say, no, I don't want to give any money, you look like a really mean person.
So what people then tend to do is avoid the situation altogether, by either pretending that they're not home or never opening the door. Or if you see somebody on the street who wants to collect donations, you might just avoid that person altogether and go out of your way.
If you see a beggar on the street, you might go around that person because you're trying to avoid judgment: either feeling bad about being mean to them, or feeling bad about them judging you for not being a nice person.
How do we think about this in terms of the utility function? We're not going to talk about this very much. We're going to get back to it a little bit when we talk about beliefs and belief-based utility. But just to give you a sense of how to think about modeling this: what you need, essentially, is another term.
In the utility function, in addition to your distributional preferences-- where you put some weight on how much you get and how much the other person gets-- you need some other term. I call it v, and it depends on player 1's beliefs about rho: what does the other person think your rho is?
So if you really don't want to give anything to that other person, you want to avoid the other person thinking that you put very little weight on them. Rho, again, to remind you, is the weight put on the other person; this is from the perspective of player 2. So rho is the weight that you put on the other person.
And now, essentially, player 2 might put some weight on player 1 in terms of the outcome that they get. They want to give them some money, and if they have more money, they feel happier. But they also put some weight on what player 1 thinks their rho is. So if I have $10 in a dictator game and can choose how to split it, I might want to do 10-0, and that's essentially just coming from my rho.
But I also don't want the other person to think that my rho is 0. So I might derive positive utility from the other player thinking that my rho is, whatever, 0.3, 0.5. I like the other person thinking that I put a positive weight on them. And therefore, I might give them more.
More generally, I guess you can just have a utility function that depends on each person's outcome-- how much they get-- and on the other's beliefs about what my utility function looks like. So essentially, people care about whether others think that they are nice or have concern for others in their utility function.
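As a minimal sketch-- my notation, not necessarily the slides' exact formulation-- you could write the dictator's (player 2's) utility as

\[
u_2(x_1, x_2) \;=\; (1-\rho)\,x_2 \;+\; \rho\,x_1 \;+\; v\!\left(\hat{\rho}_1\right),
\]

where \(x_2\) is the dictator's own payoff, \(x_1\) is the recipient's payoff, \(\rho\) is the weight on the recipient, and \(\hat{\rho}_1\) is player 1's belief about the dictator's \(\rho\), with v increasing. The first two terms are the usual distributional preferences; the v term captures image concerns, so even a dictator with \(\rho = 0\) may give in order to push \(\hat{\rho}_1\) up.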
Again, we're not going to talk about this in more detail, at least for now. We're going to get back to it when we think about beliefs and belief-based utility, which essentially is the idea that beliefs are potentially not just instrumental-- that is, helping you make good decisions-- but that you might also derive utility from beliefs directly. You might derive utility from thinking that you're smart, good-looking, and so on. Any questions on this?
OK. So now one question you might have, and I already alluded to it, is: are people giving because they enjoy giving, or because it's uncomfortable to refuse to give? Is it, I really want to give and I feel really good about it-- I get some warm glow in some ways? Or is it, I'm giving not because I actually enjoy it and I'm happy about it, but rather because otherwise I just feel bad and uncomfortable?
The evidence we talked about suggests that the latter motivation is more important for many people. So many people just feel bad about not giving. This is, in particular, the evidence I showed you that people are willing to pay to avoid dictator games.
There's a very nice paper about what's called moral wiggle room, which gets at the idea that, in a way, if people take the selfish option in the dictator game, they have to justify to themselves that they're mean. So if you have self-image concerns-- if you also care about how you think about yourself, whether you're a good person or not-- how do you justify to yourself that you chose a 20-0 type option?
And the Dana et al paper illustrates that very nicely. This is, again, a very simple game. Subjects are asked to choose allocations between self and other: how much do you get yourself, and how much does the other person get? There are two options. Option A is you get $6.00 for yourself and x dollars for the other person. Option B is you get $5.00 for yourself and y dollars for the other person. And x and y are varied across subjects and randomized.
So one option here is x equals 1 and y equals 5. Now, what does that mean for Options A and B? That means Option A, if you had to choose it, would be 6 for yourself and 1 for the other person. Option B is 5 for each player.
So now, choosing between Option A and Option B: if you choose Option A, essentially, you get $1.00 more than you would get if you chose Option B, but the other person gets $4.00 less than they would get if you chose Option B. So choosing Option A is not a particularly nice option. Essentially, you decide that one additional dollar for you is worth more than the $4.00 that the other person loses.
And so again, that's not a very nice move. So most people, in fact, tend to choose Option B when faced with that direct choice. That is to say, 26% of people choose Option A. They choose the selfish option, if you want. And 74% choose Option B, which is, in some ways, the nice thing to do.
This is the very standard version of it. There are no other things going on-- no exit option and so on. So now, when people choose Option B, there are essentially two explanations. One is the person is quite nice. The other explanation is, in some ways, they're not that nice, but they're just trying to avoid feeling bad. They kind of want to choose Option A, but if they chose Option A, they'd feel really bad about it, and that's why they don't do it.
Now, the game also has a twist. Some subjects just had this choice only; as I said, 74% of them chose Option B. For some other subjects, x and y were initially unknown, with either x equals 1 and y equals 5, or x equals 5 and y equals 1, each implemented with 50% chance.
So initially you don't know. Let me just write this down: it could be one of two scenarios, each with a 50% chance. The first scenario is the choice that we just had, Option A versus Option B as 6-1 versus 5-5, where choosing A is a pretty selfish move. That's the option I just showed you.
The second scenario is 6-5 versus 5-1. Now it's unambiguously clear that you should choose Option A, because you're better off, the other person is better off, everybody is happy. So surely you should choose Option A. Essentially, Option A dominates Option B in the second scenario. OK?
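To lay out the payoff structure in one place, here's a tiny sketch-- my own illustration with made-up labels, not the paper's code:

```python
# Payoffs are (self, other); one scenario is drawn with 50% chance each.
scenarios = {
    1: {"A": (6, 1), "B": (5, 5)},  # choosing A here is the selfish move
    2: {"A": (6, 5), "B": (5, 1)},  # here A is better for both players
}

for s, opts in scenarios.items():
    a, b = opts["A"], opts["B"]
    a_dominates = a[0] >= b[0] and a[1] >= b[1]
    print(f"Scenario {s}: A={a}, B={b}, A dominates B: {a_dominates}")
```

Only in scenario 1 does choosing A reveal anything about how selfish you are-- which is exactly why not finding out which scenario you're in can provide cover.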
Now here's the twist. Subjects could costlessly find out which one is the case. And the recipient, in this case, will not learn what the dictator did; the recipient just receives the money. So now, essentially, I tell you there are two possible scenarios, one or two, and I'm going to ask you to choose A versus B.
And you have two options before that choice: either you just choose without knowing which scenario you're in, or you can costlessly find out whether you're in scenario 1 versus scenario 2. OK. So is that setup clear? Happy to repeat. Yeah. Yeah.
AUDIENCE: Who is choosing between 1 and 2?
PROFESSOR: So if you're in the game, there's a 50% chance that you're in scenario 1 and a 50% chance in scenario 2. That is fixed. You have two choices to make in your whole decision. The first one is, do you want to find out whether you're in scenario 1 or in scenario 2?
So either you don't find out-- it's just a 50% chance of scenario 1 and a 50% chance of scenario 2-- or you costlessly find out: I'm going to tell you whether you're in scenario 1 or scenario 2. It's already decided which scenario you're in. You just don't know it.
So your first choice is, do you want to know which scenario you're in? The second choice, then, is between A and B. Either you choose without knowing which scenario you're in-- you're in 1 or 2, you choose A or B, and then it's implemented for whichever scenario is actually realized-- or you already know you're in scenario 1 or scenario 2 because you costlessly found out, and then you make that choice between A and B. Does that make sense? OK. Perfect.
So now, first, we can think about distributional preferences. What happens if you have purely distributional preferences? Could this additional twist in any way lower people's giving behavior? Notice what we're comparing here: the people who chose without this additional twist going on, versus people who are randomized to have this additional twist, where they first have the option to costlessly find out where they are.
And the question I'm going to ask is, now, how does adding this additional option affect people's giving behavior? And in particular, is it possible that people give less because of adding this additional choice?
So now, there are two cases overall. Either your optimal choice depends on x and y-- you're going to tell me, well, whether I choose A or B depends on x and y-- and if that's the case, well, of course you want to find out what x and y are.
Remember, what are x and y? X and y are what I have at the top: either x equals 1 and y equals 5, or x equals 5 and y equals 1. So either it matters to you what x and y are-- in which case, surely, if I ask you whether you want to costlessly find out, you do want to find out, because you want to make the right choice. You want to know, am I in scenario 1 or am I in scenario 2?
So if I offer you the costless option to find out which scenario you're in, you want to find out. And then, if you find out that you're in the x equals 1 and y equals 5 situation, you should make the same choice that you would have made previously. If you previously chose Option A or Option B, once you find out which scenario you're in, surely you're going to choose the same thing, because you only care about distributional preferences.
Now if your choice, on the other hand, does not depend on x and y, you should be indifferent between finding out or not. And it shouldn't really matter whether they give you that option, in some sense. You should choose the same thing as you chose before.
In either case, when x equals 1 and y equals 5, leaving x and y initially unknown-- adding this additional option-- should not decrease the amount of giving. Giving should not be affected in any case, but in particular, it should not decrease.
So let me have you look at this for a second, just to be clear. Essentially, what I'm saying is: with this additional scenario 2, either the scenario matters-- you would make different choices in scenario 1 versus scenario 2-- and then surely you should find out which scenario you're in. You're going to say, OK, I really want to know.
Once you know which scenario you're in-- suppose you're actually in scenario 1-- you should make the same choice as you did before. And if you say, well, I actually don't care which scenario I'm in, I'm going to make the same choice regardless, then adding this additional scenario shouldn't matter, because you choose the same thing either way. But regardless, adding scenario 2 should not decrease how much you give.
Now, what they find instead is that 44% chose not to find out x and y. And of these subjects, 95% chose Option A. So these are people who essentially say, I'd rather not know what's going on here, and then they choose Option A, which is either the fairly selfish option or the option that's better for both people. Let me go back so you see this.
So when given these two choices, they essentially say, I'd rather not know, even if it's costless to do. I don't want to know whether I'm in scenario 1 or 2. I'm going to choose Option A. Why do they do that, or what's going on in their minds? Yes.
AUDIENCE: They don't want to know if they're being mean to the other person.
PROFESSOR: Exactly. So if you find out whether it's scenario 1 or scenario 2, two things can happen. One is you'll be in scenario 2, and you're going to choose Option A because you know that's better for everybody. Or you're going to be in scenario 1, and then choosing A is kind of a selfish and mean move. And once you do that, you have to deal with, well, am I a mean person or not?
Instead, what people do or seem to do, is they just say, I actually don't want to find out. It could be either way. Who knows what's going on? Could be scenario 1, could be scenario 2. Who knows? But you know, there's a good chance, 50% chance that it's scenario 2, in which case choosing Option A is actually good for everybody. So let me just not find out and then choose Option A, and I'm going to essentially make myself think that, oh, probably it was scenario number 2.
So essentially, what people are doing here is deluding themselves in some way into thinking that they're doing the right thing, or at least not a mean thing, by avoiding information that would essentially be free. If you really wanted to find out whether you're being mean or not, you could just get the information. It's free and entirely available. Instead, people say, oh, I'd rather not know, and then they choose the thing that's at least potentially mean.
And so not only do 44% of people choose not to find out, but the fraction of selfish people also goes up by a lot. This is a little bit tricky, but essentially there's a 50% chance of being in scenario 1 versus scenario 2, and the 63% that I'm going to show you here is the fraction who chose the selfish option, looking only at the half of cases in which x equals 1 and y equals 5 was actually implemented. These are cases where the person either found out or not. But taken together, essentially 63% of people in those cases chose the selfish option, Option A, compared to the 26% in the baseline case that I showed you earlier.
So putting these things together essentially means that there are quite a few people who conveniently don't want to find out whether they're being mean or not, and who then choose the mean option-- much more so than when they don't have this moral wiggle room, which essentially is the option to delude themselves that they're actually not that mean, because probably it was better for everybody to choose Option A. Does that make sense?
So now you might ask: is this about others' beliefs-- social image-- or about self-image? The experiment is precisely set up in a way that, in fact, it's not about social image. The other person doesn't find out whether you learned her payoff, so not learning doesn't help improve her opinion of you. The other person only sees the result. Whether I actually learned which scenario we're in does nothing to improve the other person's opinion.
The only person who actually knows whether you found out what scenario you're in is actually you yourself when you make that choice, which really suggests that this is about saving face, about self-image. People essentially want to delude themselves or make themselves think that they're nicer than they actually are by avoiding information that could actually help them make the right choice or be nicer if they really wanted to be.
So if you really wanted to be nice, what you would do is find out which scenario you're in. You would choose Option A in scenario 2, when it's better for everybody, and you would choose Option B in scenario 1, when choosing Option A is really costly for the other person.
So let me sort of summarize here. So the most likely explanation is that it's really based on how dictators feel about themselves. So the uncertainty in whether the selfish action helps or hurts the other person gives an excuse to be selfish. So now I can say, oh, 50% chance that this is the right choice for everybody. So that's an excuse.
So you can keep telling yourself that you didn't mean any harm. I really didn't want to be mean. I thought there was a good chance the other person would be better off as well. And this is called moral wiggle room, which essentially keeps you in an ambiguous situation: as long as you haven't found out, it's actually unclear whether the other person is better off or worse off when you choose Option A versus B.
The experiment is set up so that if you do not find out which scenario you're in, it's not clear which choice is better for the other person. But of course, that ambiguity is somewhat artificial, because you could always find out costlessly. And so you can delude yourself and say, ah, it's not clear. What should I do? I don't know. It could be good for the person, bad for the person, whatever, but I'm choosing A.
But of course that's silly, because in some sense that's not an explanation that holds water: you could always find out, if you really wanted to know, whether you're being nice to the other person. It's very easy to find out what's going on. So that's essentially to say, people want to save face in front of themselves as opposed to in front of others. Any questions on this?
OK. So now, a different version of this is giving and communication, which is, again, in some ways about social image concerns-- about having to deal with others' reactions to you. This could be about self-image or social image. But either way, it's about some form of communication, and how others, or you yourself, feel.
So Ellingsen and Johannesson modified the dictator game by allowing for a very limited form of communication. One group of subjects played the usual dictator game, just as usual. In the other group, the recipients were allowed to send anonymous messages to the dictator after receiving their share. Is that the version you played, or was it different for you? Could you already talk to them earlier, when there was communication?
It might have been the exact version that you played, or it might have been in your version, you could actually talk to the other person earlier.
AUDIENCE: You could only message before you decided.
PROFESSOR: I see. So that's a slightly different version from what you played. This one is a version where the recipient could only respond. So the dictator gets the $10 or whatever is played, the dictator makes a choice, and then the recipient can send a message back-- can in some way reciprocate, if you want. It could be a nice message if you were given quite a bit of money, or a mean message if you weren't given a lot of money. And anticipating that might affect the dictator's choices.
And this is what they found in the paper. Essentially, the anticipation of feedback increased sharing from 25% to 34% in this specific game. That's quite a bit. Most recipients, in fact, do choose to send a message, and the messages vary quite a bit.
So there are very mean messages, like: so you chose to take all the money yourself, you greedy bastard. I was just wondering if there was anyone who would do that. And the answer is apparently yes. Apparently people like you exist. Have a nice evening. So that's not a very nice message. I also censored a few messages that were in there earlier.
Another one is: thank you, greedy bastard. You will like it as an investment banker. I hope you will buy something nice.
And then there are positive messages, presumably when people were actually given quite a bit of money, like: there's hope for a better world. With the help of your generosity and your big heart, I can tonight break the black pudding curse. No more pea soup. You have a standing invitation to dinner at my place. The door is always open. With love.
So, you know, those vary quite a bit. And anticipating those kinds of messages, you can see how dictators-- particularly if they play this repeatedly and have some experience with it-- might opt into giving more.
So this indicates that sort of beyond caring about the recipient's reaction, dictators care how shielded they are from it. So it's not just about, you might feel bad-- so what I showed you previously in the costless exit thing, it was more about I'm worried about you feeling bad, and I don't want you to feel bad even if I don't have to face you in any situations. This one is about, I might worry about you feeling bad, but I also might worry about receiving that message from you.
So there are some people who might actually be fine with the other person feeling bad, as long as they don't have to face it. But if I actually have to face the message from the other person, I might behave nicer in some ways, because otherwise I'd have to actually deal with, or face, the fact that perhaps I'm not a nice person.
In our version-- I looked this up-- it seems like there's not much evidence of communication affecting behavior. This could be, perhaps, because the version with communication was different from what was played here. In your version, I think, once the person gave, the game was just over, as opposed to a version where, once the person gave, you could send a message back and reciprocate in some way.
So here you can see that. In the dictator game, this is with monetary stakes and no communication, and this is with communication. Notice that the axes change a little bit; I can't really fix this in the software. But if you look at the average offer, with no communication it's 38%; with communication it's 37%. So if anything, it's a little bit lower.
The comparison is a little bit unclean because over time, maybe later in the game, you play different than earlier in the game, and it's not quite cleanly randomized. But it doesn't seem, in your case, like the communication mattered very much. Again, perhaps that's because the nature of the game is somewhat different.
Sorry, this is actually not a dictator game. This is the ultimatum game; that's a typo at the top. For the ultimatum game, it seems like communication matters a little bit. When you look at the very left, people giving 0-- for whatever reason, I don't know how you communicated and what you said, but it seems like your communication made people more likely to give 0, and then these 0 offers were rejected.
So what you see on the very left is that in the ultimatum game, the red bars are offers that were rejected. Without communication, nobody was giving 0. With communication, somehow people were giving 0, and then these offers were rejected. So perhaps some of you sent very negative messages or threats or whatever, and that really pissed off the proposer, the other person, and then essentially that really backfired.
So I don't know who is teaching you how to communicate, but maybe you want to think about this in future games. Somehow communication, if anything, made things worse compared to the no-communication case. But even there, the average offer is 47% without communication, at the top, and 46% with communication. So it doesn't make a huge difference either way. OK? Any questions on that?
So then we're going to talk a little bit about intentions-based social preferences. Let me tell you a brief tale about this that sort of illustrates this point quite nicely.
So a boy finds two ripe apples as he walks home from school. He keeps the larger one and gives the smaller one to his friend. The friend says, it wasn't nice to keep the larger one. The boy says, well, what would you have done? And the friend says, I'd have given you the larger one and kept the smaller one. And the boy says, well, we each got what you wanted. So what are you complaining about?
And what does that illustrate? It illustrates that it's not always about outcomes, or even the distribution of outcomes. What seems to matter is some form of justice or fairness in how the allocation came about. In most of economics, essentially, we assume that satisfaction is outcome-based: when you look at people's utility functions, what matters is essentially how much you consume, and so on.
So the satisfaction with outcomes doesn't depend on how the outcome came about. And here, when you look at how the apples are distributed, the friend actually doesn't seem to care that much about whether he gets the larger or the smaller apple. What the friend cares about is: is your friend a nice person? Is the allocation fair? In a sense, didn't he just keep the larger apple because he could, and wasn't that mean to his friend?
So really, what seems to matter quite a bit-- and there's quite a bit of work on this in organizational behavior, economics, and social psychology-- is the idea of procedural justice. Fairness in the process used to allocate resources seems to matter quite a bit. Maybe another way to put this: for most people, it's kind of OK, actually, if some people are richer than others. But it's much less OK if they obtained the money through some corrupt process, or if it's just unfair because one person was born really rich and the other was born really poor, and there's no mobility or opportunity in society.
That's sort of relating to the idea that in a way, the American dream is everybody can become rich, and that's essentially the key here. And then if some people are richer than others, people often think that's OK. What's not OK, however, is if some people have vastly different opportunities or chances than others. Or if there's lots of corruption, where essentially some people get certain jobs not because they're qualified, but rather because their parents or their family or their friends or whatever are very powerful or they have corrupted others in other ways.
Now, there's a question of fairness, and then there's a question of reciprocity. Fairness means people have preferences over how outcomes come about: they like fair outcomes more than unfair ones. Reciprocity is a related idea: people like to treat others as others have treated, or are treating, them.
More generally, the way we treat others depends on the way they treat us. If somebody is mean to you, you're going to be mean to them. If somebody is nice to you, you're going to be nice to them. And that's a really important motivation behind a lot of behavior.
Now this gets us back to the ultimatum game-- I'm going to talk about this only very briefly-- and the question of why responders reject low offers in the ultimatum game. We talked about this a little bit. You can think about distributional preferences; we talked about Charness and Rabin and about preferences over being behind versus not.
One explanation is based on distributional preferences: you just don't want to be too far behind, and therefore you reject unfair offers. Another one is that you think you've been treated really unfairly, and you're willing to punish the proposer for that unfairness. In a simple ultimatum game, you can't really tell whether it's hypothesis one or two. But-- and we talked about this already-- you can compare responders' choices in the ultimatum game to their choices when the offer isn't generated by the proposer.
For example, you can look at offers generated by a computer, or by a third person, and so on. If the offer is generated by a third person, you might say, I shouldn't punish the other player in the game, because it wasn't that player who chose it. It's not their fault, so why should I punish them?
Similarly, if the computer chose, then even if I get less than the other person, that might be OK. I might not be happy about it, but it's not the other person's fault. It's the computer that chose.
In the next lecture, we're going to talk about a field application of that. It's a very nice paper by Emily Breza and co-authors, which is about the morale effects of pay inequality. This is about how workers behave in a work environment when people are paid unequally.
And the short summary of this paper-- it's a field experiment in India-- is that when workers are paid unequally, they can be really unhappy and less productive at work, particularly if they have to collaborate with other workers who are paid more than they are.
But in particular, what they seem to care about is whether it's fair that the other person earns more. So if you earn more than I do, that might actually be OK if I can see that you're much smarter, faster, or better at what you do. If in some observable way I can tell that you're better than I am, then I might be perfectly fine with you earning 10%, 20%, 50%, or whatever percent more money, because I know you're a better worker. So OK, fine, you're paid more. That seems fine.
However, in a situation where it's not that obvious-- where it looks like you're doing the same thing I'm doing, not really faster or better, and yet you're paid more-- I might be really annoyed and unhappy about it. And you might also be uncomfortable, because you feel bad about being paid more than I am for doing exactly the same thing.
So the experiment by Emily Breza and coauthors does exactly that: it looks at situations where workers are paid unequally versus the same, and where the inequality is justified versus not. And it really seems that fairness, or perceived fairness, matters quite a bit for these workers. When workers can justify being paid less, or more, than others in some way, that's OK. When it's unjustified, they get really annoyed and produce a lot less in that setting.
OK. So let me now start with some field evidence. We're going to talk a lot more about field evidence next time; let me briefly discuss this paper here, and next time I'll tell you a lot more about it.
So this is a very nice paper by Bandiera et al., which looks at the impact of relative pay versus piece rates on productivity. A piece rate just means you're paid per unit of your output: you get some fixed payment for each unit you produce.
Relative pay means workers are paid relative to others. So if I'm producing 50% more than you do, I get paid a lot more than you do.
Now, why does this matter for worker behavior? Well, under relative pay there's now a negative externality from me working really hard. What's an externality? Essentially, an effect on you that you're not compensated for. That is to say, if I work really hard and I'm really good at my work, not only am I paid more, but you're paid less, because relative to me, you now look worse if you work on the same team.
So relative pay matters for social preferences because, essentially, any increase in my pay comes at a reduction in your pay. To be clear, under the relative pay scheme there's an average level of pay, which depends on everybody's average productivity, and then on top of that, workers who are more productive relative to others are paid more than others.
And so how do you think about this? You can think about a model of pure altruism; we talked about this before. The self has a payoff x_s, and the other person has a payoff x_o. The self assigns a weight alpha to the other person's utility. This is the typical model that's been used by Becker and others. We don't really think it's the right model for many situations, but it's a really useful benchmark.
And so now, if I put positive weight on the other person, then under the relative pay scheme I might not work particularly hard, because I know that if I work really hard, I make you worse off. Does this make sense? OK.
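Written out, a minimal sketch of this benchmark-- using the notation above, where u(.) is an increasing utility function and the dependence on my effort e is my illustrative addition, not the paper's exact model-- is:

\[
U_{\text{self}}(e) \;=\; u\big(x_s(e)\big) \;+\; \alpha\, u\big(x_o(e)\big), \qquad \alpha \ge 0 .
\]

Under relative pay, x_s is increasing in my effort e while x_o is decreasing in it, so for a large enough alpha the marginal value of effort turns negative and I hold back. Under a flat piece rate, x_o no longer depends on e, and that motive disappears.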
So let me briefly show you the experiment-- the impact of social preferences in the workplace. What they have is really nice data from the UK-- sorry, the microphone doesn't really work-- on fruit pickers under different compensation schemes.
It's a quasi-field experiment. For eight weeks of the 2002 picking season, fruit pickers are compensated under a relative performance scheme. That is to say, the per-fruit piece rate is decreasing in average productivity. Essentially, the more you produce, the more you get yourself, but everybody else gets less. So if you care about others, you now have an incentive not to work too hard, because if you do, the people around you are paid less.
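To make the externality concrete, here's a small stylized sketch. The inverse-proportional rate rule and the numbers are my illustrative assumptions, not the paper's exact formula:

```python
# Stylized relative-pay scheme: the per-kg rate falls as average
# productivity rises, so one worker's extra output lowers everyone's rate.

def piece_rate(outputs, budget_per_worker=10.0):
    """Illustrative rule: rate is inversely proportional to average output."""
    avg = sum(outputs) / len(outputs)
    return budget_per_worker / avg

def pay(outputs):
    rate = piece_rate(outputs)
    return [round(rate * y, 2) for y in outputs]

print(pay([50, 50]))   # [10.0, 10.0] -- two equally productive workers
print(pay([100, 50]))  # [13.33, 6.67] -- worker 0 doubles her output;
                       # worker 1 earns less with his output unchanged
```

If I double my output, I earn more, but your pay drops even though your output is unchanged. That is exactly the negative externality the alpha in the altruism model acts on.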
For the next eight weeks, however, compensation switched to a flat piece rate per fruit. Now the externality is shut down: how much I produce doesn't matter for you. You simply get paid for however much you produce.
The switch was announced on the day the change took place, so it came as a surprise to workers. Workers worked for eight weeks, then for another eight weeks after the switch, and they didn't know the change was coming. Because it was a surprise, we shouldn't see any anticipatory effects.
Now, the effect of this policy change depends on the alpha term. If I don't care about others at all, it shouldn't matter; I should just work as hard as I can, because I want to increase my relative pay. But if my alpha is high, I might not work so hard under the relative scheme, because I'm worried about my friends or co-workers being paid less.
And here's what they find: essentially a dramatic increase in productivity. On the left-hand side, you see the first eight weeks-- this is productivity, in kilograms of fruit picked per hour. It goes up dramatically in the second half. And on the left side, you see essentially no pre-trend: it doesn't look like people are simply getting more productive over time, say because they become more experienced or learn how to work faster. It's really flat on the left-hand side.
On the right-hand side, productivity goes up by quite a bit almost immediately after the policy change. And there doesn't seem to be any other significant change: for example, the number of workers on the same field stays the same, and so on. The main thing that happens is that workers' hourly productivity goes up.
Is this a response to changes in the piece rate? No. In fact, the piece rate went down: the reward for the marginal kilogram you collected actually fell, so if you only cared about your own output, the incentive was to work less. The result is also robust to controlling for such factors.
So what's going on here? It seems that workers care about others. But not only that-- the effects are larger the more friends a worker had on the field. Some workers have lots of friends working on the same field, who are essentially their comparison group, and others do not.
Now, why does it matter how many friends you have on your field? Presumably you care more about your friends than about other people. So if you have social preferences, you work less to help others: if you don't work very much, they're paid more, and you care about them.
And you work even less when your friends benefit, because you care more about your friends than about others. So the switch to piece rates should have stronger effects for workers with more friends on the field.
The second explanation is that this is also a repeated game, which is trickier to deal with: your friends might punish you in various ways. If I work really hard and they can see it, they might punish me socially in the evening, or be mean, or just stop being my friends, and so on. Working really hard under this scheme is a mean thing to do to your friends.
So you might hold back not because you care that much about the others, but because you're worried about retribution-- about them punishing you if you work too fast.
Now, what kind of variation could we look at to disentangle those two explanations? They have different types of fruit, I can tell you already. But what kind of variation would we need?
Or let me ask it differently. What's crucial for explanation number two?
So I'll give you another hint. There's raspberries and strawberries. For strawberries, you can see very well what the other person does. For raspberries, you cannot. And why does that matter?
AUDIENCE: Because then your friends won't know. [INAUDIBLE]
PROFESSOR: Exactly. If your friends don't know-- well, under explanation number one, where you really care about your friends, it doesn't matter whether your friends know how much you produce. If you care about them a lot, you won't work particularly hard, because working really hard is bad for them.
If instead you're worried about retribution, then it matters a lot whether your friends can see what you do. With strawberries, which [INAUDIBLE] on the field, everyone can see really easily what you do. In that case, you'll depress your effort under the relative pay scheme, and there will be large effects once you switch to individual pay.
However, if you can hide what you're doing, then you might secretly work really hard in the bushes. Your friends will never be able to tell whether you're working hard, and then you should see less of an effect when the switch happens.
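To summarize the identification logic, here's a stylized sketch-- the labels and the coarse "low" versus "high" effort levels are my illustrative simplification, not quantities from the paper:

```python
# Stylized predictions for effort under the relative-pay scheme,
# by motive and by whether friends can observe your output.

def effort(motive, observable):
    """motive: 'altruism' or 'sanctions'; observable: friends see your output."""
    if motive == "altruism":
        return "low"  # output hurts friends whether or not they can see it
    if motive == "sanctions":
        return "low" if observable else "high"  # hold back only when watched
    raise ValueError(f"unknown motive: {motive}")

for motive in ("altruism", "sanctions"):
    for crop, observable in [("strawberries", True), ("raspberries", False)]:
        print(f"{motive:9s} | {crop:12s} -> {effort(motive, observable)}")
```

Only the sanctions motive predicts a difference between the two crops, which is exactly the comparison the data allow.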
And that's exactly what they find. Productivity is observable for strawberries and unobservable for raspberries. And what we see is essentially no impact of the switch to piece rates for fruit type number two-- which is to say, if you can hide from your friends, you will in fact work really hard. That suggests you don't care that much about your friends.
Instead, what you care about is social sanctions, or some form of punishment, which only operates for the strawberries, because that's where people can actually see what you produce.
I can maybe briefly recap this next time. Let me just stop here.
Next lecture, I'm going to show you a lot more field evidence from various settings-- essentially the paper by Breza et al. on morale effects, and so on-- looking at field situations where social preferences seem to matter quite a bit.
And then in particular, we're going to look at the malleability of prosociality, which is the question of: should we take social preferences as given? Or are there interventions or policies that could make people nicer or more prosocial in some situations? Is there something the government or companies could do to get workers or people to be more prosocial?
That's all for now. Thanks so much.