Description: In this first video, Prof. Frank Schilbach introduces the topic of Psychology and Economics, a field that studies the influences of psychological and economic factors on behaviors.
Instructor: Prof. Frank Schilbach

Lecture 1: Introduction and...
[SQUEAKING]
[RUSTLING]
[CLICKING]
FRANK SCHILBACH: Welcome, everyone. This is 14.13 Psychology and Economics, also known as Behavioral Economics. My name is Frank Schilbach. I'm a faculty member in the Economics Department, teaching and doing research in Behavioral Economics and Psychology and Economics and Development Economics.
There are syllabi over here in case anybody needs one. So let me start by introducing ourselves, including myself. As I said, I'm Frank Schilbach. I recently received my Economics PhD from Harvard.
I'm from Germany, as you may have noticed. I do research at the intersection of development and behavioral economics. In particular, I'm interested in integrating psychological insights to help us better understand the lives of the poor.
And so I study all sorts of issues related to poverty, how poverty itself affects people's behavior, and how conditions of poverty, or things that are associated with poverty might feed back into people's decision-making and their productivity or labor market behaviors. And then perhaps, sort of lead to the persistence of poverty through those kinds of effects. So I have some work on financial constraints, how financial constraints affect people's behavior, in terms of thinking about money itself.
I have some work on sleep deprivation among the urban poor. I'm thinking about pain and substance abuse and how that might affect people's choices. Most recently, I'm interested in things related to mental health, in particular, depression and loneliness, how those might affect people's well-being, and then their behaviors.
I have office hours that you can sign up for on my website. Currently, they're on Tuesday afternoons, but if you cannot find a time slot that works for you, please email me and we'll find a time that works out. My assistant is Krista Moody; in case you want to get a hold of me and I'm not available for some reason, you can reach out to her.
So now, let me give you an overview of the class. I'm going to start with four things. First, I'm going to tell you what this class is. What is behavioral economics, or what is psychology and economics? What do we mean by that?
Is it just putting two fields together, or is it something more specific? I'll argue it's the latter. I'm then going to give you an example of how you might use behavioral economics, a specific example involving a policy that might affect all of you. And in fact, it will, because we have a specific laptop policy in this class.
And I'll tell you a little bit about how to think about this. There will also be a problem set that'll help you think about it a bit more. Then we'll talk about the somewhat boring logistics, of course. And at the end, we'll do a questionnaire, a quiz, for everybody, in part because we want to know some information about you, and in part because we want to learn a little bit about some decisions that you might make.
OK, great. So what is psychology and economics? You may have heard of this. The class used to be called economics and psychology, or behavioral economics. For our purposes, those are all the same.
Some economists argue that all economics is about behavior, so "behavioral economics" is a bit of a weird term. That's partially why I'm using the term psychology and economics.
In some sense, it's sort of broader than that. One definition is, "it's a field of academic research that studies the joint influences of psychological and economic factors on behaviors." You could be sort of broader and say we're trying to integrate insights from not just psychology, but also anthropology, sociology, medicine, psychiatry, et cetera, and so on, into economics.
The attempt is to make economic models more realistic, and therefore more predictive, to help us understand people's behavior better, and then perhaps to help us make better policies for influencing people's behavior. Now, as I said, that includes medicine, sociology, et cetera.
Broadly, we're trying to use insights from those fields to understand whether we've missed something by making the fairly stark assumptions that economic models usually make. That leads me to: what are the standard economic models? In some sense, what we're going to study, to some degree, is deviations from the classical, or standard, economic model.
So then, of course, we need to understand what the standard economic model is. That leads me, in part, to the prerequisites of the class: you should have taken at least some economics to start with, because I'm going to talk a lot about deviations from those models. If you haven't taken any economics class before and don't know those models very well, then understanding the deviations will be a little tricky.
So for those of you who have taken economic classes, what do you usually assume about people's behavior? What are some of the assumptions that you make about economic behavior? Yes.
AUDIENCE: Stable preferences.
FRANK SCHILBACH: Stable preferences, yes. So first, well-defined preferences in the sense of like people know what their preferences are. They can state them. And then they're also stable in the sense of when I ask you today would you rather have apples or bananas, that will not change unless there's sort of new information or your circumstances change.
Of course, if I ask you tomorrow and you already have a bunch of apples, and then you say you want bananas, that doesn't mean your preferences have changed. But if I ask you today what you would like for lunch tomorrow, and you tell me you want apples rather than bananas, and then tomorrow you show up, nothing else has changed, and you say, now I want bananas, well then, your preferences are not stable.
That's one assumption, yes. What else do you have? Yeah.
AUDIENCE: More broadly, they're rational.
FRANK SCHILBACH: Yes. And what does that mean?
AUDIENCE: They behaved deterministically to optimize some utility function.
FRANK SCHILBACH: Right, exactly. So people essentially optimize some utility function. We say people have a utility function and know what it is. And as you're saying, they maximize it optimally. They don't make mistakes in that maximization process.
That's to say, if you tell me you like apples over bananas, and then you choose bananas, well then, something is going wrong in some way that we haven't fully understood. It might be that you're just making mistakes, which could be construed as irrational; it's hard to rationalize with a model. What else?
AUDIENCE: Self-interest.
FRANK SCHILBACH: Self-interest, yes. In a lot of models, the easiest version is a very narrowly defined self-interest: people care about themselves. They care about what they consume, and not about what others consume or what others think of them. So, narrowly defined self-interest. Yes?
AUDIENCE: People have the self-control to [? smooth their ?] consumption over time.
FRANK SCHILBACH: Yes. So essentially, perfect self-control is one version of putting that. Another version, in some sense going back to what you were saying earlier, is to say stable preferences. If I'm telling you I'd like to exercise tomorrow, and that's my preference, tomorrow I'm not going to say, oh, yeah, actually, I changed my mind and now I'm just watching movies, or the like, right?
So that's the version of sort of preferences being stable. But the inherent underlying issue there is like self-control problems in the sense of like, if I like certain things, or make certain plans for the future, I have the self-control to follow through on those plans. That's usually an assumption.
And that then shows up as preferences being stable. What else? Yeah.
AUDIENCE: They prefer consumption today than consumption tomorrow. [INAUDIBLE].
FRANK SCHILBACH: Yeah. So usually, there is a discount factor in terms of how much you value consumption today versus consumption tomorrow. That is in pretty much any economic model. Usually, we think of the discount factor as being constant: how you trade off today versus tomorrow is the same as how you trade off tomorrow versus two days from now, or a year from now versus a year and a day from now. So usually, there's a constant discount factor for the future.
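A minimal numerical sketch of this constancy, assuming a hypothetical per-period discount factor of 0.9 (a made-up value, purely for illustration):

```python
# Exponential (constant) discounting: utility received t periods from
# now gets weight delta**t. delta = 0.9 is a hypothetical value.
delta = 0.9

def weight(t):
    """Discount weight on utility received t periods from now."""
    return delta ** t

# The trade-off between any two adjacent periods is the same constant,
# whether it's today vs. tomorrow or day 365 vs. day 366:
ratios = [weight(t + 1) / weight(t) for t in (0, 1, 365)]
print(ratios)  # each ratio equals delta, up to floating-point rounding
```

A model in which the today-versus-tomorrow ratio differed from the later ratios would violate this assumption.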
We usually employ constant discounting, or exponential discounting. What else do we have? Yeah.
AUDIENCE: Risk aversion.
FRANK SCHILBACH: Yes, risk aversion in the sense that people are risk averse, and people define their preferences over outcomes. So they have a utility function that tends to be concave, and we'll get to this in a few lectures. But the utility function is concave, and it's defined over outcomes, as opposed to over changes in outcomes.
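A minimal sketch of how concavity produces risk aversion, using square-root utility as a stand-in for any strictly concave utility function (the function and dollar amounts are my illustrative assumptions):

```python
import math

def u(x):
    """A hypothetical strictly concave utility function over outcomes."""
    return math.sqrt(x)

# A 50/50 gamble between $0 and $100 has the same expected value as a
# sure $50, but expected utility is lower under the concave function:
expected_utility_of_gamble = 0.5 * u(0) + 0.5 * u(100)  # 5.0
utility_of_sure_thing = u(50)                           # about 7.07
print(utility_of_sure_thing > expected_utility_of_gamble)  # True
```

The agent prefers the sure thing to the gamble with the same expected value, which is exactly what risk aversion means.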
And that usually shows up as an essentially strictly concave utility function; that's risk aversion. What about information? How do people think about information? How do they use it?
AUDIENCE: Bayesian.
FRANK SCHILBACH: Bayesian. And what does that mean?
AUDIENCE: They update [? their prior. ?]
FRANK SCHILBACH: Exactly. So people are essentially perfect information processors. You can call them Bayesians. Essentially, for those of you who know it, they use Bayes' rule, which is to say, if you gave a statistician the problem of how to update beliefs given new information, people are assumed to be able to do that in their heads. And that's a pretty stark assumption.
Essentially, standard economics does not necessarily assume that everybody has full information, in the sense that everybody knows everything. But when you give people some information, they update their beliefs accordingly, right? They optimally use new information to form their posterior beliefs, based on that new information and what they believed previously, their priors.
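A minimal sketch of Bayes' rule at work, with made-up numbers for a rare condition and an imperfect diagnostic test:

```python
# All numbers below are hypothetical, for illustration only.
prior = 0.01        # P(sick): the prior belief before seeing the test
sensitivity = 0.95  # P(positive | sick)
false_pos = 0.05    # P(positive | healthy)

# Law of total probability: P(positive)
p_positive = sensitivity * prior + false_pos * (1 - prior)

# Bayes' rule: P(sick | positive) = P(positive | sick) * P(sick) / P(positive)
posterior = sensitivity * prior / p_positive
print(round(posterior, 3))  # 0.161
```

Even after a positive test, the posterior is far from certainty because the condition is rare; this is the kind of calculation the standard model assumes people carry out in their heads.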
There's another assumption, usually, which I can actually spell out for you: people have no taste for beliefs or information. What do we mean by that?
We mean that people use information only to make decisions. So if you tell me something about what's going to happen tomorrow, or something about my health status, I use that to make better decisions. If you tell me I'm sick, I'm going to use that information to go to the doctor.
But we usually don't assume that people care about the beliefs themselves, about whether they're sick or healthy, whether they're smart or not, whether they're good looking or not. Usually, the assumption is that information is only used to make better decisions, not that people get utility from beliefs.
In reality, you might instead say, well, I'm really good looking and smart, maybe smarter than you actually are. And the reason people might believe that is because it makes them feel good about themselves. Standard economic models usually assume that away. I think we've mentioned almost everything.
We said: Bayesian information processing, essentially processing information optimally; well-defined and stable preferences; maximizing expected utility, which is, in some sense, rationality if you want; exponential discounting, weighting current and future well-being; narrowly defined self-interest; and preferences over final outcomes, not changes.
So what you care about is how the weather is today, not kind of how the weather changed between yesterday and today or tomorrow. And we'll talk about all of these. By the way, these are all sort of some terms that seem maybe unfamiliar to you. We'll talk about all of these kinds of assumptions and perhaps how to deviate from them specifically.
OK. So now that we have those assumptions, what kinds of deviations do we see when we look at the world? One part of how you think about behavioral economics is, in some sense, to observe what's going on in the world, try to see which assumptions might be violated in an important way, and try to improve our models. So when you think about the world and look at these assumptions, can you come up with real-world examples that might violate them? Yes.
AUDIENCE: The last assumption that you listed, the one about taste for beliefs or information, is violated on a constant basis; people tend to believe what they want to believe.
FRANK SCHILBACH: And do you have a specific example?
AUDIENCE: This is very common as regards political matters. I'm not going to name a particular topic, for obvious reasons. But people often choose to discount information that goes against their beliefs.
FRANK SCHILBACH: Right. So one reason could be political. There's quite a bit of recent work, and we'll talk a little bit in this class about how political beliefs, or other reasons, might motivate people to believe certain things. One clear example would be climate change, and not just for political reasons.
There's some interesting work about what people who live in flood areas, for example, think about climate change. You might say, well, if you really live in an area that's potentially affected by floods, you'd really want to know about climate change, know what's going on, and try to inform yourself. But what people tend to do is try to ignore the issue and try to be happy as long as nothing happens. That's one example. Yes.
AUDIENCE: Some preferences might not be well-defined. Say I have a decision about what I want to eat. I want steak over chicken if I'm presented with the opportunity. And I prefer chicken over pork, but I might also prefer pork over steak.
FRANK SCHILBACH: Right.
AUDIENCE: So it's not always well-defined.
FRANK SCHILBACH: Right. So one part of what you're describing is that preferences might not be well-defined. There are other issues too, for example, irrelevant alternatives. There's an assumption in economics that says, if you choose between A and B, the availability of C should not matter for your choice between A and B. But often, that's not the case.
So if I offer you apples and bananas, and you also can get cherries, suddenly your choice between apples and bananas might change, even if you don't choose the cherries eventually. Exactly: people's preferences might not be well-defined. They might also not be stable. Any other example? Yes.
AUDIENCE: Also, for the well-defined preferences, we have information processing constraints. If I'm given a menu of 100 choices, it's going to be difficult for me to know which one to pick.
FRANK SCHILBACH: Right, exactly. So people have lots of information around them. And if you think about processing that information, if you go to a supermarket, it's impossible to know all the prices and all the goods and make all these choices.
So in some sense, there's an abundance of information everywhere. And we have to sort of figure out how to deal with that. Any other example? Yes.
AUDIENCE: Some people who might have preferences [INAUDIBLE] equal an outcome of getting $1,500 would be different if you were initially promised $1,000 versus if you [INAUDIBLE].
FRANK SCHILBACH: Right, exactly. So one part is to say that people have preferences over final outcomes, not changes, which is to say that how you feel about the weather today might depend on how the weather was yesterday. Another version of that is that you evaluate the outcomes you get based on your expectations. If you thought today was going to be really nice, or your day was going to be really good, but then it happened to be not as good for whatever reason, the weather is bad or something bad happened to you today, the typical assumption would say, well, you should just evaluate the final outcome.
It shouldn't matter what you thought about previously what might happen. You're going to look at what happens at the end. And that's how you evaluate your well-being. Yes.
AUDIENCE: You might need a commitment device, so that you-- you might avoid buying potato chips because you think you'll eat them. That didn't really make sense. [INAUDIBLE]
FRANK SCHILBACH: Right. So people might on purpose restrict their choices in certain ways, right? And this is what gets me eventually to the laptop policy. But people might sort of essentially know that in the future they make certain choices.
For example, in the laptop case, you might say, I'm going to use a laptop in class with all the best intentions to taking notes and paying attention. But you know, at the end of the day, you know, you're going to watch the football review from yesterday or whatever. Other stuff comes up. You're going to chat with your friends and so on and so forth.
So now, in some sense, if you're sophisticated, as in, you know what you're going to do when you have certain choices available, you might say: I choose not to even allow myself to use a laptop at all, because I know that essentially I'm going to misbehave in the future. We'll talk about this soon, I think in lecture three or four. It's what we call people's demand for commitment, which is to say, people have demand for restricting their choices.
And in any neoclassical model, this doesn't make any sense. Because you're going to choose optimally anyway. And you're going to make the best choice for yourself in any case. That's an assumption. So why would you ever sort of shut down certain choices?
After all, there might be an emergency, or something really important might come up in class where you'd want to use your laptop regardless of what's happening. Why would you shut that down? Exactly. So demand for commitment is another example. Yes.
AUDIENCE: People are also not anywhere near perfect Bayesian information processors either. As a general rule, people are pretty bad at updating their beliefs based on new information. And we frequently traffic in 100% and 0% certainties, far more often than you would expect from a Bayesian information processor.
FRANK SCHILBACH: Right.
AUDIENCE: Practically not [INAUDIBLE].
FRANK SCHILBACH: Exactly. So one summary of that is that Bayes' rule, and a lot of this updating behavior, is really hard. It's actually tough to do. For those of you who do statistics, math, et cetera, you might be able to do it very well.
But then you're part of a very small share of the population that actually does it well. And even among those, there's a bunch of stuff that's really hard to compute in your head. You might be able to write it down and figure it out eventually, but the usual assumption is that people do all of this in their heads, even for really complicated problems. And that's clearly not the case in many situations.
And that's among MIT students. There are other populations that are less educated; they might not even know what Bayes' rule is, or they might be illiterate, and so on. So the assumption that people perfectly process information is perhaps even more tenuous there. Yeah.
AUDIENCE: A lot of models are usually hedonic, but people usually care about what other people are doing as well. So for example, I may not be willing to take a class. But if my friend takes a class or gets into the class, then my utility would go up.
FRANK SCHILBACH: Right. So there's two parts to that. This is essentially the assumption of narrowly defined self-interest and people caring only about themselves or their own consumption. So there's two parts of that.
One is you might care about your friends. You might give money to your friends. You might sort of help them out. You might help them with homework. You might help them in other situations.
You might send money to charity and so on and so forth. So this is kind of like you care about other people. And the well-being of other people is in your utility function one way or the other.
The second part is what you already discussed as well, which is kind of social influences, call it, which is essentially peer effects. You care a lot about what other people think about you. You care a lot about what other people do.
People get jealous. People get angry. People get envious. And sort of their behavior about what they want at the end of the day is very volatile and malleable, essentially, based on influences from others, right?
And that might sort of show up in all sorts of ways, including in terms of peer effects. People do all sorts of things because other people think that's cool. But it's not necessarily what's best for them if they had to choose on their own.
OK, great. So let me stop here. There's a lot of other examples that we can come up with. I have a few here. So one is sort of limited self-control.
That's one of my favorite pictures about gym attendance: there seem to be people who have preferences for going to the gym, but then there's an escalator, and people are riding it rather than working out. So in some sense, something is perhaps amiss here.
Here's another fun thing to do: Google terms like "calories" or "Weight Watchers" and look at people's interest in those search terms over the course of a year. As you can imagine, what you see are spikes, essentially on January 1st.
People start googling calories. They start googling Weight Watchers, and so on. Now, of course, you can think of models that rationalize this in some sense, saying, well, January is just a good time to start a diet, and so on.
But really what's going on is that people, on January 1st, have all the best intentions to lose weight, eat healthier, exercise more, and so on. And then the year starts, the semester happens, and the good intentions fade away. That's essentially limited self-control.
There are some interesting things about demand for information, especially when it comes to health. Every time I ask this next question, and I've taught this class a few years now, I realize how old I am.
Who knows Dr. House? Oh, not bad. So can anybody explain to me Thirteen and Dr. House and what is sort of their deal? And what does this have to do with demand for information? So who's Thirteen? Or what is her health issues?
AUDIENCE: She's got a genetic disease, I think.
FRANK SCHILBACH: Yes, she has--
AUDIENCE: She has, like-- I'm not--
FRANK SCHILBACH: Huntington's, yes.
AUDIENCE: Yeah, Huntington's.
FRANK SCHILBACH: Right.
AUDIENCE: He finds out later on.
FRANK SCHILBACH: Right. So she has a disease called Huntington's disease, which we'll also talk about more in the class. It's a brain disease where, over time, your brain more or less degenerates. It's a really bad condition that manifests around age 40 or 50, and it's very serious.
It's a genetic disease in the sense that, if your parents have the disease, the chance of you having it is way, way higher than in the general population. There's a test for it, but there's no cure. So the dilemma in the show comes when she's thinking about getting tested.
And Dr. House, who is in some ways very rational and in others not so much, essentially encourages her to get that test. And any rational, neoclassical model would say: you should get that test. Why should you get the test? Why is that helpful?
AUDIENCE: More information.
FRANK SCHILBACH: And what's the information helpful for?
AUDIENCE: [? It ?] [? would update ?] [? price ?] [? of action. ?]
FRANK SCHILBACH: Right, exactly. You would say information is good. Why is information good? Because there's lots of important choices that you might make that depend on that information.
You might think about how much money you want to save. You think about your choices about taking a vacation, about career choices, about partner choices, about having children, all sorts of issues about health behaviors. You might sort of take better care of yourself.
There are all sorts of really important decisions that might hinge on this fact; your life will be dramatically different depending on whether you have Huntington's disease or not. Now, Thirteen, I think, actually does the test, and then doesn't want to see the result. And why is that? Yes.
AUDIENCE: Even if you have the condition, there's no cure for the disease. So it might help your quality of life if you [INAUDIBLE].
FRANK SCHILBACH: So that's one view. But beyond that, I think, in some sense, as I said, you could make other decisions, like economic choices. You could save more.
You could see the doctor more. You could go more on vacations or whatever, do fun things in life. But you could say, well, maybe that's not that important.
But the neoclassical or the classical economics model would still argue that you should get that information. It wouldn't hurt you. You wouldn't want to refuse it. But why is she refusing it? Yes.
AUDIENCE: I think maybe she doesn't want to live the rest of her life knowing she has the disease.
FRANK SCHILBACH: Exactly. So she essentially derives utility from information, if you want, from her beliefs. She likes to tell herself that she's healthy, or that the chance of her being healthy is very high. Under some assumptions, you might then not want to get tested, or not want to see the result of that test.
Because once you have the test, if it's positive, it's very hard to pretend to yourself that you're healthy; there's a hard test result saying otherwise. Before the test, she can still try to forget about it and live a happy life until the symptoms set in. But that's a clear violation of the assumptions of the classical model that I was showing you above.
The next part is default effects. So what I'm showing over here is the fraction of organ donors by country and type of default. There's two types of defaults, often, for organ donations.
This is from a few years ago, but I think, overall, things have not changed that much. There's opt-in versus opt-out. So in the countries on the left, essentially you have an opt-in policy.
That is, if you don't do anything, you're not an organ donor. You have to actively declare, in some form, that you want to be an organ donor. On the right are opt-out policies, which mean that if you don't do anything, you'll automatically be registered, or viewed, as an organ donor. You can opt out, but only if you actively do so. Otherwise, if there's an accident or the like, you'll be viewed as an organ donor.
Now, what's weird about this graph? Or why is this sort of potentially a violation of the neoclassical or the typical economics assumptions? Yes.
AUDIENCE: Because assuming that the people have the same preference of whether they want to opt-in or opt-out, then there should be the same proportion across. But the difference is that the effort required to make that change or the lack of indifference causes there to be the imbalance between the opt-out and the opt-in.
FRANK SCHILBACH: Right, exactly. So, and I don't want to get in trouble here, some people would argue Germany and Austria are actually not that different. I'm German, so some Austrians might disagree.
But you would say, overall, Germans and Austrians are not that different. You might think their preferences differ in particular ways, but their preferences for organ donation are probably pretty similar across those two countries, which is a reasonable assumption.
Second, lots of people care a lot about what happens to them after they die, right? So they, for whatever reasons, religious or other, they might actually care a lot about, at the end of the day, what happens to their body and so on. And that's obviously their choice to make.
Now, if you assume those two things, then it's very hard to reconcile this picture. Because, essentially, the opt-in and opt-out is a very small change to make. It's very simple and easy to do.
You just have to fill out some form. Maybe it takes you an hour, maybe two. You have to maybe go to some office or the like. But if you really care, you could easily do it.
Yet you see huge differences in outcomes based on this very simple, small change, which is very hard to rationalize: it's hard to write down a model that explains this behavior. So somehow the default, the way in which the decision is presented, seems to really affect people's behavior in a fundamental way.
OK, the next one is GlowCaps. These help people, in particular the elderly, take their medication: essentially, they're a reminder, which you can program in certain ways, like every day or every few hours.
So they make sounds or blink and the like to get you to take your medication. It's a very simple proof of concept that reminders matter and can save people's lives, and people have done studies on that.
Essentially, it's a very simple proof of concept that memory is limited, right? If your memory were perfect, if you could remember everything, if you were a perfect information processor, you would not forget your medication, especially if it's important, especially if it's medication that potentially saves your life. Yet we see that people's medication usage is strongly affected by these kinds of policies or products that help people remember. So that's essentially a rejection of perfect memory.
The next one is charity. People seem to care about others in all sorts of ways, including just giving money. Here's one charity, called GiveDirectly, which is very nice in the sense that what it does, essentially, is help the poor in a very simple way: by directly transferring money to poor people in poor countries. On the right, you see an actual recipient from GiveDirectly.
That person has a phone. These are mobile phone based transfers: you can send money directly to people's mobile phones. So if you decided now to give $100 to that person, or a similar person, 90 or 95 of those dollars would in fact arrive on that person's cell phone.
Now, why are people doing this? Presumably, because they care about others one way or the other. It could be either that, in some sense, that's part of the utility function.
Just donating money to others makes you happier. And that's essentially a way in which you improve your own well-being. You just have others' well-being in your utility function.
Or it could be things like what we said previously. It could be that you think it's the appropriate thing to do. Maybe other people do it. It's an important thing to do.
Maybe you can tell your friends about it and so on and so forth. But one way or the other, people give lots of money to charity, which in some sense says you must care about others overall. So others are in your utility function.
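The idea that others are "in your utility function" can be written down very simply. Here's a minimal sketch-- my own illustration, not a model from the lecture-- where a donor puts some weight alpha on a recipient's consumption; the log form, the weight, and the dollar amounts are all assumptions chosen just for illustration.

```python
import math

def altruistic_utility(own_consumption, other_consumption, alpha=0.3):
    """Log utility over own consumption plus a weighted term for the
    other person's consumption; alpha = 0 recovers pure self-interest,
    the standard benchmark."""
    return math.log(own_consumption) + alpha * math.log(other_consumption)

# Donating $100 lowers the donor's own consumption but raises the
# recipient's; with enough weight on others, the donor is better off.
no_gift = altruistic_utility(1000, 100)
gift = altruistic_utility(900, 200)
```

With these particular numbers, giving yields higher utility than not giving, which is one simple way to rationalize why people donate at all.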
So now, attention, if you have seen this before, don't tell your friends about it. Here's a simple attention test that I want you to do. What's interesting about this is, in some ways, you might have seen this before. Some of you may have seen the gorilla before.
Who didn't see the gorilla? That's OK. I didn't see it either. But some people have seen it before. There's a bunch of experiments that people have done that essentially show that, often, a large fraction of people don't see this gorilla.
And there are various ways in which people do other types of experiments. There are types of experiments where, essentially, people go to some bank teller. The bank teller says, I'm going to step away, and then a different person comes back, and people don't notice.
There's various versions of that. And so what's interesting about this in some sense is-- so A, it proves, in some ways, that attention is limited. You just focus your attention on some things and not others. And you might miss important things.
Now, that's OK in some ways. There's sort of a version of that that says, well, it might be sort of rational inattention. In some sense, you focus on the important things in life. And you might miss some things, and that's OK.
In some sense, I asked you, or the video asked you, to count the number of passes. So in some sense, the gorilla is irrelevant. And that's true in this case.
But in many other cases, people might not rationally pay attention. They might miss some really important things in their lives, in part because they're distracted, in part because they don't want to pay attention, and so on. And one strand of behavioral economics, which we'll talk about, is about when people do not pay attention and then make mistakes because of that.
OK. So I could go on and on with lots of different examples. But the bottom line here is that most researchers in psychology and economics believe that the classical model of behavior, sort of the homo economicus, is too extreme in various ways. That person is too selfish, too rational, and has too much willpower. And in some ways, we want to understand how relaxing some of these assumptions might make economic models more realistic.
Now, in some sense, no economist would, in fact, argue that the assumptions of the standard models are exactly correct. The questions are, are these deviations important? Do they actually matter for something important in explaining people's behavior? And which of those assumptions or deviations actually matter?
And that's kind of the name of the game here. In fact, when you talk to cognitive scientists, psychologists, et cetera, they will tell you the world is full of cognitive biases. And you might not be able to read this. And that's kind of the point.
There are so many different ways in which we are biased. In some sense, for every choice that we make or anything that we do, there are lots of biases that interfere with people's choices. And for almost any choice or behavior you can think of, there will be psychology or other experiments showing that people do not behave perfectly.
Now, the key question then is, which of those assumptions are important? And which of those violations of assumptions should we focus on? And for that, I want to step back a little bit and say, OK, what is actually a model? What are economic models trying to do?
And so what is a model? A model is a simplified representation of the world. And we know, in some sense, that the assumptions of the models are not true. They're sort of supposed to be approximately true and exactly false.
So when you think about commonly used models of the Earth, there are flat models, spherical models, ellipsoid models, and so on and so forth. Now, good models do not account for bumps and grooves and so on. A perfect replica of the Earth is not a useful model to use. You kind of want to simplify and capture the essence of what's important.
So now, then what's a good model? Well, a good model-- and you can read a bit about this in Gabaix and Laibson's paper. A good model is supposed to be simple. It's supposed to be easy to work with and tractable.
So in some sense, you have only a few variables, a few things that matter. It's conceptually insightful in the sense that it focuses on important things. It tells you about behavior that we really care about and important ideas.
It's generalizable in the sense that, ideally, we are looking for behaviors that are general in some sense. If I can explain one simple choice in one domain to you and write a model about it, but that model doesn't apply to anything else, that's not a good model to work with. The model is supposed to be falsifiable in the sense that we can actually test it.
And that's kind of what we do in experimental and behavioral economics. We try to test theories with empirical work and then falsify or reject models that are wrong, and accept, or at least not reject, models that we think are better. There's supposed to be empirical consistency in the sense that, if the model explains one behavior, it should also explain that behavior over time or in different domains.
And we should be able to make good predictions for people's behavior. Now, are the assumptions of the standard models true for most people? The answer is no.
But the key insight here, the key properties of good models, is simplicity. So in some sense, assuming perfect rationality, selfishness and willpower is actually a simple thing to do in some sense. The reason why economists have made those assumptions to start with is not necessarily because they thought people are perfectly rational or that there are not these psychological issues going on.
The reason is actually simplicity. It's an easy thing to do: you model how you think people should behave if they behaved perfectly. And making some of these models richer in their psychology is actually complicated and makes the models more complex and harder to analyze.
So that is just to say, well, then the question is, can we find some assumptions of economics? Can we make them more realistic in a tractable way? In a sense, can we find key things that we change that keep the models tractable while then also explaining important things better than we can before?
So it's not about just taking all sorts of psychological issues that might be going on and say, well, economic assumptions are wrong or models of economics are wrong or the assumptions of those are wrong. We know that these assumptions are wrong. The question is, can we make somewhat simple assumptions or improvements of those assumptions based on insights from psychology and other fields that help us improve those models and then make better predictions and all of that in a tractable way?
So now, this is very important here. A good behavioral economist, or a good student in this class, is also a good economist. Behavioral economics is not sort of trying to replace standard economics. So I don't want you to go to my colleagues and say, you know, 14.01, 14.02, and so on, this is all garbage; Frank is telling you we do things differently.
In fact, you very much need to know those kinds of things from standard economics to understand what the assumptions are and how best to deviate from them. And as such, key principles of mainstream economics continue to apply. Decision makers are still highly sophisticated.
Markets and incentives matter. They, in fact, play a key role in shaping behavior. And markets allocate resources well most of the time.
The question is, can we sort of focus on important deviations and try to fix those? And then, again, methodological principles still apply: use observational and experimental data, mathematical models are good, and so on and so forth.
And ideally, models would nest the special case of perfect rationality, of the perfectly standard economic model. They build in some parameters that deviate from that, and we try to understand whether we can make better predictions. And even so, often, prices are actually the most important aspect of choice.
Here's one experiment by Ito et al. that tries, essentially, to change people's energy usage. They have two treatments. One is called moral suasion, which essentially is telling people-- sort of appealing to their morals in some way. And the other one is essentially just a financial incentive, straight up changing the price of people's choices.
And what you see in this experiment essentially is there's, in red, you have the impact. These are all treatment effects over time comparing a treatment group and a control group. And you see, in red, the treatment effects of the incentives.
And you see these are relatively large and persist over time. And you see the treatment effects of the moral suasion treatment, which happens to be reasonably large to start with, but essentially just goes away. What did we learn from that?
We've learned from that that prices matter. And maybe in this case, moral suasion is just not that important. And that's perfectly fine. We're trying to identify situations or cases where some of the underlying psychological issues are important.
And we've learned from an experiment that, in some cases, it doesn't apply. Well, so be it. Then we should focus on other cases where the psychological issues are more important.
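To be concrete about what a "treatment effect" means here: with random assignment, it's just the difference in mean outcomes between a treatment group and the control group. A tiny sketch with made-up numbers-- these are not the actual Ito et al. data:

```python
def treatment_effect(treated, control):
    """Difference in means: E[outcome | treated] - E[outcome | control].
    Valid as a causal estimate only because assignment was randomized."""
    return sum(treated) / len(treated) - sum(control) / len(control)

# Hypothetical post-treatment energy usage (kWh), for illustration only:
control_usage = [10.0, 11.0, 9.5, 10.5]
incentive_usage = [8.0, 8.5, 7.5, 8.0]    # price-incentive group
suasion_usage = [9.8, 10.2, 9.9, 10.1]    # moral-suasion group, effect faded

effect_incentive = treatment_effect(incentive_usage, control_usage)
effect_suasion = treatment_effect(suasion_usage, control_usage)
```

In the actual paper, the interesting part is how these effects evolve over time: the price effect persists while the suasion effect decays toward zero.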
OK. So then what's our broad approach to each topic? We start with intuitive, empirical, and experimental examples of how people behave in some situations. We try to think about their motivations, and try to see how they make choices and how they behave in certain situations.
And then we're going to try and model this in a more precise way and consider how that modeling perhaps deviates from the neoclassical, or classical, model of economics. Other times, we also just start from the classical model of economics, look at its predictions, and say, well, can we reject that? Does it make certain predictions that are just not true? And then say, can we improve those types of assumptions?
And then, overall, we're trying to think about then how these hypotheses or deviations might be able to explain how people behave in markets and what choices they make and how, perhaps, we can think about policies that might affect the people's well-being, their welfare, or any other consequences. Any questions so far?
OK. So now, I want to give you one very simple, in some sense stylized, example, which demonstrates a little bit what the standard neoclassical economic assumptions are and how enriching a model with psychological considerations might be more powerful. So when you think about laptops in class, what are the standard economic considerations? Should you allow laptops? Should there be laptops in class? Is that a good thing or a bad thing? What do you think about that?
And now, I'm asking for standard, non-psychological issues, just saying, if you went to a classical economist and say, should we allow laptops in class, what would be considerations for that? Yeah.
AUDIENCE: Externalities-- if you're distracting people who are around you?
FRANK SCHILBACH: Yes. So you could call that psychological or not. But essentially, it could distract people essentially. And externalities tend to be sort of, very much in classical economics-- essentially, if you smoke, it affects others. Similarly, if you had a laptop, it might affect others.
It's a little bit in the middle between psychological and economic considerations. Because in some sense, if somebody next to you were using a laptop, one could just say, well, why don't you just focus on class anyway? You should be able to just ignore that. But setting that aside, externalities are surely one consideration that we have. What's the pro side? Why would we allow laptops? What's good about laptops in class? Yeah.
AUDIENCE: It could just be efficient for taking notes. Some people might need to use them. Some people may be dyslexic, stuff like that. So it's just good.
FRANK SCHILBACH: Exactly. It's a useful technology to take notes. And you might just be better at that or might be more comfortable. It's easier for you to do. Maybe it also saves some paper or whatever. What else are laptops good for? Yeah.
AUDIENCE: More choice is just always better. So if each person has their laptop, they can choose what's better for them to pay attention--
FRANK SCHILBACH: Yes, exactly. If you are so inclined and want to watch football from yesterday, the replay and so on, if you prefer that and that's good for you, you know, who should be stopping you, right? And so in some sense, laptops are useful for note taking. They're also useful for non-class activities.
And as you say, each student should be able to choose for themselves what's good. And that may involve paying attention or not. Now, what are sort of some psychological considerations?
We had one of them, which essentially distracting others, which essentially is the externality. What else? Yeah.
AUDIENCE: Temptation and not valuing your future learning?
FRANK SCHILBACH: Right, exactly. You might sort of have all the best intentions of taking notes, what I said previously. But then, you know, it gets a little dull and boring at minute 40.
And you might be inclined to think about other stuff and start surfing the internet or the like, or start chatting or whatever. That's essentially limited self-control, one way or the other. Yeah.
AUDIENCE: Yeah. So I think even if you don't fall into the temptation, you kind of have to spend energy resisting the temptation and that might kind of distract you.
FRANK SCHILBACH: Right. So there's some cognitive resources potentially from that. That's a very nice observation. Exactly. That also might be true.
There's another part to that, which is that people tend to overestimate how much they can multitask. And in a large number of experiments, people say, well, I'm actually paying attention. I'm just reading some other stuff. I'm chatting with my friend.
And people actually think they pay attention, and they learn. Trust me, they do not. So essentially, there's a large, large literature on people thinking that they can multitask when, in fact, they cannot.
And this is a very human thing to do. You might think you can study for an exam by watching TV. Chances are you're not studying very well. So that's essentially some form of overconfidence.
Right. And then there's another part, which is a somewhat different psychological consideration: people tend to not like hard paternalism-- we'll talk about this at the very end when we talk about policy. People don't like certain hard rules that say you're allowed to do this, or you're not allowed to do x or y. You have to come to class or whatever.
People tend to not like that. That's more another sort of consideration if you think about what is the right policy to do. Now, what policy solutions could we do? What laptop policies have you seen? Or how do you think about them? Maybe somebody else? Yes.
AUDIENCE: I haven't seen it, but let the people with laptops sit in the back.
FRANK SCHILBACH: Yes. You could do that. Yeah. And sort of that's kind of minimizing the externality potentially. It's not really helping with self-control, I guess, right? Because particularly, if you sit in the back, nobody sees what you're doing. And you might be so inclined to do all sorts of things. Yes.
AUDIENCE: I've had professors say a hard no on laptops, but if you have special needs or if you think you're a special case, you can tell the professor [INAUDIBLE].
FRANK SCHILBACH: Right. So that's sort of like hard paternalism with some exceptions potentially. Yeah.
AUDIENCE: I've seen only devices that are flat on the table allowed for [INAUDIBLE].
FRANK SCHILBACH: For note-taking, yeah.
AUDIENCE: I've seen TAs sitting in the back and keeping track of class participation. So if they see you doing something not academic, they could technically ding you for it.
FRANK SCHILBACH: Yeah. That's what they're for. No. No. Anything else?
AUDIENCE: I've seen classes where you're just allowed to use your laptop because, I guess, you're an adult. And you're responsible for your own learning.
FRANK SCHILBACH: Right, exactly. That's sort of laissez-faire and saying you know what's best for you. In some sense, since this is sort of a psychology and behavioral class, I tend to disagree with some of those assumptions. But, yes, exactly. That's laissez-faire. Any other thoughts?
OK. So then here they are. So there's laissez-faire. There's educational intervention which essentially is providing people information about laptops or what laptops do for learning. There's some experiments I'll show you in a second.
Educational interventions tend to actually not work particularly well. So essentially, just giving people information tends to not change behavior in the way we'd like it to. You could tax laptop use-- make it costly-- which would be the typical public economics sort of solution.
I think I'm not allowed to take money from you guys, so I might not do that. You could ban laptops, as was suggested previously, except for students with medical needs.
You could make a non-laptop section the default and let students opt in or out of it. That's essentially saying the default choice is no laptops, but you can opt out by emailing a TA and then use a laptop in a certain section.
We'll talk about, in the problem set, a little bit why that might be a good idea. You could also set up an active choice between the laptop and the no laptop sections. So here's the educational interventions.
The evidence-- and there's very clear evidence from various settings-- shows that laptops in class are not good for learning on average. There's a very nice article by Susan Dynarski in The New York Times. We'll put this online as well so you can read it.
But essentially, one of these studies is a randomized controlled trial that a former MIT grad student ran in an intro econ class. This is Carter et al. And essentially what they find is that allowing computers in class reduced test scores by 0.18 standard deviations.
That's quite a large number. That's essentially saying they did randomize across classes, like at the classroom level, whether they allowed laptops in class or not. Interestingly, they found negative effects both of unconstrained laptops, like on any laptop use, but also the flat tablet solution. So even the flat tablet solution was worse than sort of the note-taking with pen and paper.
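For reference, an effect "in standard deviations" just means the raw difference in means scaled by the standard deviation of the outcome, so that effects are comparable across different tests. A small sketch with hypothetical scores-- these are not the Carter et al. data:

```python
import statistics

def standardized_effect(treated, control):
    """Difference in mean outcomes divided by the control group's
    standard deviation -- an effect size in SD units."""
    diff = statistics.mean(treated) - statistics.mean(control)
    return diff / statistics.stdev(control)

# Hypothetical exam scores, for illustration only:
control_scores = [70, 75, 80, 85, 90]    # no laptops allowed
laptop_scores = [68, 73, 78, 83, 88]     # laptops allowed, shifted down

effect = standardized_effect(laptop_scores, control_scores)
```

A 2-point drop against a roughly 8-point standard deviation comes out to about a quarter of a standard deviation here, which gives a feel for why 0.18 SD is considered a sizable effect.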
You know, and there's a bunch of evidence that shows that. So in this class, we're going to have a version of an opt-in policy: there's going to be a laptop section, starting next class, in the front on one side of the class. The reason being, we're trying to minimize externalities.
It's in the front, not in the back, the reason being that, if you're in the very back, then there's no supervision. I'll, I guess, ask some TAs to sit in the back. If there's too much other activity going on, maybe there won't be a laptop section anymore.
Anyway, there will be considerations about that in the problem set. But the point of all of that is to say, if you just had the standard economic considerations, you would make certain policies based on very simple considerations, missing important factors. That might be the psychological externality. It might be the self-control problems, the overestimation of people's ability to multitask, and so on and so forth.
And all of those things might sort of interfere with people's optimal choices. And that makes certain more paternalistic policies potentially more promising. But again, in the problem set, you'll think about some of these solutions. Any questions on the laptops? Yes.
AUDIENCE: I might be thinking of a different study, but if this is the same one, I gathered that, in their other classes, students previously were not allowed to use laptops. So it might be that they had not optimized themselves for laptop usage during class.
FRANK SCHILBACH: That's interesting. So I think there are several studies that show that. I don't know the details on that. But if you can send that to me, I'm happy to look at that and reconsider.
I think my view overall is that sort of the existing evidence essentially shows laptops are bad. Since I'm interested in your learning, that's what I go with. But I'm happy to be educated. I'll try and put the slides online always the night before in case you want to print them out or look at them or whatever during class. You're welcome to do that.
OK. So then let me tell you very briefly about the different topics of the class and what we're going to cover. So first, we have the introduction and overview: today's introduction lecture, and then the overview on Wednesday, which will provide you an overview of what the different topics are, how to think about them, and what evidence we actually have.
So I was a little bit handwavy, in the sense of showing you lots of flashy pictures of people misbehaving. But in fact, I'm going to show you some of the more rigorous evidence we have that some of the assumptions of the classical model are violated, and how we think about those violations and how we might incorporate them into economics. So that's going to be the overview on Wednesday.
Then we're going to talk a lot about preferences. You can think about essentially, when people make choices, there's a utility function. And sort of one set of behavioral economics issues are changes to the utility function.
This might be time preferences and people's self-control, which we already discussed. It might be risk preferences, how people think about risk, how they think about gains and losses and how their preferences are reference-dependent. This is what a student was saying earlier about how you evaluate something might depend a lot on your expectation of that outcome as opposed to just the outcome itself.
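Reference dependence can be sketched with a piecewise-linear value function in the spirit of Kahneman and Tversky's prospect theory. The loss-aversion coefficient of 2.25 is their well-known estimate, but the linear form (dropping their curvature) and the zero reference point are simplifications I've chosen for illustration:

```python
def value(outcome, reference=0.0, lam=2.25):
    """Reference-dependent value: outcomes are coded as gains or losses
    relative to a reference point, and losses loom larger than
    equal-sized gains when lam > 1 (loss aversion)."""
    x = outcome - reference
    return x if x >= 0 else lam * x

# Losing $10 hurts more than gaining $10 feels good:
gain = value(10)
loss = value(-10)
```

The same $10 outcome is evaluated very differently depending on the reference point, which is the sense in which your expectation of an outcome, not just the outcome itself, shapes how you feel about it.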
Then we're going to talk quite a bit about social preferences. This is about how much we weigh others' utility in our utility function, or their consumption in our utility function. We'll do some experiments in class on that, which should be a lot of fun, and then discuss various issues about how much people care about others and how they're affected by others' behavior and social influences.
So that's kind of the first half of the class broadly about changes in people's preferences. Then we're going to talk more about beliefs broadly speaking and sort of how people view information in the world. We talk a little about emotions, what's called projection and attribution bias.
These are issues about, for example, when people are hungry or tired, they might make quite different choices. And so that might affect their preferences. But it might also affect their beliefs in some sense.
If you think about how you're going to behave: if you're hungry right now, it's very hard for you to believe, or think about the fact, that you might not be hungry in the future. The classic example is shopping on an empty stomach. There are lots of different applications of that. People buy convertibles when it's sunny and then return them when it rains.
But there are also much more serious issues, in particular things like depression. For example, when people are depressed, it's very hard for them to think about how it might feel when they're not depressed anymore in the future. So we'll talk about projection and attribution bias, which are these biases in how people think about states of the world.
Then there's limited attention: people don't pay attention to certain things in the world. How might that affect people's behavior? How might we exploit that, potentially, if we're taxing them, or direct their attention to certain things and improve their behavior? Similarly, we'll then talk about beliefs and learning. This is about what information people have available-- A, how they update their beliefs when they get information, and whether they're able to process information well.
B, is there demand for information? Do people get utility from beliefs? This is what I was talking about with health behaviors, or health information, where people might have motivated beliefs in the sense that they like to believe certain things when, in fact, they're not true-- in part because it makes them happy, or in part because they want to be right about something, or their party, or whatever.
We'll talk a little bit about mental accounting, which is that people tend to narrowly bracket their choices. They might have certain accounts in their behavior and decide about each separately, as opposed to aggregating their behavior as a whole. And that's an issue that's less researched, but quite interesting overall.
Then we're going to move towards more radical deviations, if you want, from the standard model, which is things like the malleability and inaccessibility of preferences. That is to say, people might not actually know what they want. They might not even understand what their preferences are, or their preferences are easily manipulable. I can make you choose A or B, depending on what kind of situation I put you in.
And you might not even notice that I'm doing that. And that makes things a lot trickier. Because then, in some sense, it's much harder to sort of say should the government or any sort of other policy maker choose A or B if we don't even know what people's preferences are, when people don't even know what their own preferences are?
We're going to talk about happiness and mental health. Just kind of broadly speaking, what makes people happy? And can we think about that, in particular, sort of financial choices and others?
I'll tell you a little bit about some of the work on mental health that I have been doing, thinking about how mental health might affect economic behaviors and choices and, in part, how people's demand for mental health interventions might be shaped by the influence of others. We're going to talk about gender and racial discrimination. There are classical, neoclassical models of discrimination, but we, in particular, think about issues of discrimination that are not rational-- discrimination that is, in some sense, unfair.
Finally, we're going to talk about policy and paternalism. One broad issue-- and this goes back to the malleability and inaccessibility of preferences-- is the ways in which choices can be affected through frames, defaults, and nudges. That is to say, I can manipulate your choice architecture in certain ways that make you choose certain things.
You know, I might sort of send you letters to do your taxes. I can sort of frame or set certain defaults for organ donations or for savings choices and the like. And that might have profound effects on people's behavior.
Once we go through that in a sense of showing you some evidence that it's possible to do that, we're going to talk a little bit about policy and paternalism in a sense of say, OK, now, if I know I can change your behavior in certain ways, what kinds of policies should we do? And should we, in fact, do that? Or should we tax people?
Should we set certain frames and defaults? And you know, are we potentially making some people worse off while making some people better off by doing these kinds of policies? There's often also some ethical issues associated with that.
And then, finally, I'm going to talk about poverty. This is sort of mostly the research that I do and thinking about poverty issues through the lens of psychology. Again, I'll tell you a little bit about the work I do myself.
Partly, we'll get to talk a little bit about how financial constraints, or just thinking about money, affect people's behavior and then, second, how other issues related to poverty might shape people's choices, decision making, and their labor market outcomes, earnings, and so on, and whether there's potentially something that you might want to call a psychological or behavioral poverty trap. Any questions on these topics?
OK. So then readings for next time, this is Wednesday. Please read-- there's a paper on the course website by Matthew Rabin, who is one of sort of the all-stars in behavioral economics. 20 years ago, he wrote a very useful perspective on psychology and economics that sort of discusses a lot of the issues of behavioral economics, how to think about that.
Again, read sections one and two. We're not going to test you specifically on very specific things, but you should sort of be able to sort of remember, at least roughly, what's in that paper.